“AI has ushered in a new golden age of semiconductor innovation,” reports Forbes:

For most of the history of computing, the prevailing chip architecture has been the CPU, or central processing unit… But while CPUs’ key advantage is versatility, today’s leading AI techniques demand a very specific — and intensive — set of computations. Deep learning entails the iterative execution of millions or billions of relatively simple multiplication and addition steps… CPUs process computations sequentially, not in parallel. Their computational core and memory are generally located on separate modules and connected via a communication system (a bus) with limited bandwidth. This creates a choke point in data movement known as the “von Neumann bottleneck”. The upshot: it is prohibitively inefficient to train a neural network on a CPU…

In the early 2010s, the AI community began to realize that Nvidia’s gaming chips were in fact well suited to handle the types of workloads that machine learning algorithms demanded. Through sheer good fortune, the GPU had found a massive new market. Nvidia capitalized on the opportunity, positioning itself as the market-leading provider of AI hardware. The company has reaped incredible gains as a result: Nvidia’s market capitalization jumped twenty-fold from 2013 to 2018.

Yet as Gartner analyst Mark Hung put it, “Everyone agrees that GPUs are not optimized for an AI workload.” The GPU has been adopted by the AI community, but it was not born for AI. In recent years, a new crop of entrepreneurs and technologists has set out to reimagine the computer chip, optimizing it from the ground up in order to unlock the limitless potential of AI. In the memorable words of Alan Kay: “People who are really serious about software should make their own hardware….”

The race is on to develop the hardware that will power the upcoming era of AI. More innovation is happening in the semiconductor industry today than at any time since Silicon Valley’s earliest days.
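The article's claim that deep learning boils down to "millions or billions of relatively simple multiplication and addition steps" can be sketched in a few lines. The following is a minimal, hypothetical NumPy illustration (the function names `dense_sequential` and `dense_parallel` are our own, not from the article): a single dense-layer forward pass written first as an explicit CPU-style loop of one multiply-add at a time, then as a single matrix expression whose independent multiply-adds parallel hardware like a GPU can execute simultaneously.

```python
import numpy as np

def dense_sequential(x, W, b):
    """CPU-style view: one multiply-add step at a time, in nested loops."""
    out = np.zeros(W.shape[1])
    for j in range(W.shape[1]):        # for each output neuron...
        acc = b[j]
        for i in range(x.shape[0]):    # ...accumulate over each input feature
            acc += x[i] * W[i, j]      # one simple multiply-and-add step
        out[j] = acc
    return out

def dense_parallel(x, W, b):
    """Parallel view: the same multiply-adds expressed as one matrix
    operation, which a GPU can distribute across thousands of cores."""
    return x @ W + b

# Both views compute the same layer output.
rng = np.random.default_rng(0)
x = rng.standard_normal(64)        # input activations
W = rng.standard_normal((64, 32))  # layer weights
b = rng.standard_normal(32)        # biases

assert np.allclose(dense_sequential(x, W, b), dense_parallel(x, W, b))
```

Even this toy layer performs 64 × 32 = 2,048 multiply-adds; scaling to real networks, repeated over millions of training steps, is what makes sequential execution and the von Neumann bottleneck so costly.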
Untold billions of dollars are in play. Some highlights from the article:

- Google, Amazon, Tesla, Facebook and Alibaba, among other technology giants, all have in-house AI chip programs.
- Groq has announced a chip performing one quadrillion operations per second. “If true, this would make it the fastest single-die chip in history.”
- Cerebras’ chip “is about 60 times larger than a typical microprocessor. It is the first chip in history to house over one trillion transistors (1.2 trillion, to be exact). It has 18 GB memory on-chip — again, the most ever.”
- Lightmatter believes using light instead of electricity “will enable its chip to outperform existing solutions by a factor of ten.”

Read more of this story at Slashdot.