Programming on the Apple M1
It has been a few months since I bought the Mac Mini with Apple's M1 chip, and the experience has been a mix of surprises and frustration. Since I have been building a product, I have touched a number of different areas, which should help you gauge what your own experience might be.
Apple's M1 chip is really fast. This cannot be overstated. Compared to my 2017 MacBook Pro with the 3.1GHz Intel Core i5 chip, all sorts of tasks are faster, and even basic things like typing and mouse movement feel snappier due to reduced latency.
Web browsers and prebuilt Node.js binaries are available for the M1 Mac, so there are zero compatibility issues there. The only problem is that you may encounter older npm packages that fail to install because they rely on native binaries and have no fallback to build from source.
| Task | Time on M1 (s) | Time on Dual-Core Intel Core i5 (s) | Speedup Factor |
| --- | --- | --- | --- |
| Building this blog | 9.71 | 84 | 8.65 |
| Building the facebook/react repo | 221 | 495 | 2.24 |
| Running tests in the facebook/react repo | 48 | 366 | 7.63 |
| Running tests in the facebook/react repo (single thread) | 135 | 458 | 3.39 |
The speed difference is so large that I worry web developers on M1 machines will neglect performance, since they will not perceive any slowness in their own applications. Web browsers need to start offering custom CPU throttling factors, as a fixed 4x slowdown no longer approximates a typical user's device from hardware this fast.
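One way to pick a throttling factor that actually reflects a target device is to time the same CPU-bound workload on both machines and take the ratio. The sketch below is a rough illustration of that idea; the target-device timing passed in is a made-up number you would measure yourself, not a real benchmark.

```python
import time


def cpu_bound_workload(n: int = 2_000_000) -> int:
    # A simple CPU-bound loop to time on the development machine.
    total = 0
    for i in range(n):
        total += i * i
    return total


def suggested_throttle_factor(target_device_seconds: float) -> float:
    # Time the workload locally, then compute how much slower the
    # target device is; that ratio is the throttling factor to apply
    # instead of a fixed 4x.
    start = time.perf_counter()
    cpu_bound_workload()
    local_seconds = time.perf_counter() - start
    return target_device_seconds / local_seconds


# Example: if the same workload takes 1.2 s on a mid-range phone
# (a hypothetical measurement), the returned ratio is the slowdown
# you would want the browser to simulate.
factor = suggested_throttle_factor(1.2)
```

On a fast M1 the resulting factor can easily exceed the 4x preset, which is exactly why a fixed slowdown understates the gap.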
I have a simple service that uses Rust and PostgreSQL, and I had no issues compiling it. The Rust application took 29.79s to build on the M1, compared to 228s on the Intel i5.
Packages that ship binaries without a build-from-source option mean you could find yourself forking upstream packages to add M1 support, but I have not encountered this yet.
Docker support has also been smooth sailing; everything just works.
The M1 chip has an integrated GPU, but the big issue is that, as with AMD GPUs, support from machine learning libraries is lacking.
I managed to install Tensorflow without issues, but PyTorch seems to work only if you install it under Rosetta, the translation layer that lets x86_64 binaries run on ARM-based macOS. This means you not only miss out on GPU support but also take a hit on CPU performance.
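A quick way to tell whether your Python interpreter is running natively or under Rosetta is to inspect the architecture it reports: an Intel build of Python running under Rosetta sees x86_64 even on an M1 machine. A minimal check:

```python
import platform


def interpreter_architecture() -> str:
    # Returns "arm64" for a native Apple Silicon build of Python,
    # or "x86_64" for an Intel build (e.g. one running under Rosetta).
    return platform.machine()


print(interpreter_architecture())
```

If this prints `x86_64` on an M1 Mac, your whole Python environment, and anything you `pip install` into it, is being translated and will pay the Rosetta performance tax.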
As a hobbyist, I find this a reasonable tradeoff, but I would advise steering clear of Apple Silicon for machine learning work until the platform stabilizes.
Which M1 Mac should I get?
The same M1 chip is available in the Apple Mac Mini, MacBook Pro, and MacBook Air. The biggest difference between the three devices is thermal dissipation, which only becomes apparent during long-running, intensive tasks. Note that the base-model MacBook Air comes with a 7-core integrated GPU instead of the 8-core version.
If you already have an external monitor, the Mac Mini is my recommendation as it is the most affordable. If you need portability, the MacBook Air is a good choice, as the few occasions where you would benefit from the MacBook Pro's active cooling are not worth the extra cost and weight.
I can see Apple Silicon taking over the programming world, as the performance difference is amazing. Given that every software developer in Silicon Valley already uses a Mac, non-Mac consumers might find themselves losing out, as developers will not realize they have a performance issue at hand.
I look forward to embracing Apple Silicon chips; I'll just have to remember that what is fast on my machine may not be fast for others, and account for it.