Moore’s Law – Dead

(Image: Intel Pentium P54C die)

Mark Bohr recently explained to a group of people how Moore’s Law (the observation that the number of transistors on a computer chip doubles roughly every two years) died ten years ago.  It’s just that no one noticed.

How did it die?  Well, around ten years ago is when the switch from powerful single-core processors to multi-core processors really started taking off.  It was also about the same time when processor clock speeds (GHz) peaked and actually started descending.  So, while the number of transistors on a chip stopped doubling every two years, that did not prevent chip manufacturers from doubling performance every two years.

I mean, imagine the progression from the 4,500-transistor Intel 8080 processor to the latest Intel chips that contain 4,500,000,000 transistors.  A millionfold increase in the number of transistors.  But we’ve hit the limit.  There are even physical reasons why the density cannot continue to double, such as quantum tunneling, which causes transistors to “leak” electrons.  They also discovered that there is a limit to how often silicon-based CPUs can switch (around 4 billion times per second) without excess heat causing the silicon to, well, melt.  Different materials are being looked at, but even if you start using photonics like HP is doing with their memristor architecture, there is still a physical limit on how fast things can switch.  We are reaching the limit of what we can physically hope to achieve with regard to processing speed.

So, what’s left?  We’re already dumping core after core after core into processors.  I recently saw a screenshot of the Performance Monitor of the machine that does the nightly build for Windows 10.  It is an eight-processor box.  With 10 cores per processor.  Hyper-threaded.  What does this mean?  One hundred and sixty threads that can be running simultaneously.

In order to effectively utilize 160 threads, however, the process that is running needs to be multi-threaded.  And that is where developers need to go.  Single-threaded execution is no longer a viable option for many systems.  It hasn’t been for years.  There are some systems that are designed to be single-threaded and take advantage of being single-threaded.  Node.js is one of those systems.  But the advantage with Node.js is that it is also very easy to spin up another instance of Node.js and have two high-performance single-threaded systems running at the same time.  Load getting high?  Spin up another one.  Or another.  The amount of time required to start up another Node.js instance is measured in single-digit seconds.

But many of our applications are not system-level applications, they are “business” applications and as such they behave differently.  They are more linear in fashion, and that is where the bottleneck comes from:  linear progressions.  We need to stop thinking about step 2 following step 1 and step 3 following step 2 and start thinking about what steps can be done in parallel.  Project Managers do this all the time when they schedule a project.  From a developer’s perspective it is the same thing:  scheduling multiple tasks to run at the same time.
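Here is what that rescheduling looks like in JavaScript.  The three `fetch…` functions are hypothetical stand-ins for three independent pieces of work in a business application; because no step depends on another’s result, they can be started together with `Promise.all` instead of awaited one after the other:

```javascript
// Simulate three independent pieces of work, each taking ~100 ms.
function delay(ms, value) {
  return new Promise((resolve) => setTimeout(() => resolve(value), ms));
}

const fetchCustomer = () => delay(100, 'customer');
const fetchOrders = () => delay(100, 'orders');
const fetchInventory = () => delay(100, 'inventory');

async function loadDashboard() {
  // Sequentially this would take ~300 ms; started together it takes ~100 ms,
  // because no step depends on another step's result.
  const [customer, orders, inventory] = await Promise.all([
    fetchCustomer(),
    fetchOrders(),
    fetchInventory(),
  ]);
  return { customer, orders, inventory };
}

loadDashboard().then((result) => console.log(result));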

The processors aren’t necessarily going to get faster for your application, but you are going to get more of them to play with.  Start learning how to take advantage of that and make your applications perform better.