
Moore’s Law Smells Funny

…maybe we need “Integrated Cleverness Law”

“Jazz is not dead, it just smells funny.” – Frank Zappa 1973
from Be-Bop Tango (Of The Old Jazzmen’s Church)

Marketing is about managing expectations. IC marketing must position next-generation chips as adding significant new/improved functionalities, and for over 50 years the IC fab industry has leaned on the conceptual crutch of “so-called Moore’s Law” (as Gordon Moore always refers to it) to do so. For 40 years the raw device count was a good proxy for a better IC, but since the end of Dennard Scaling the raw transistor count on a chip is no longer the primary determinant of value.

Intel has recently released official positions on Moore’s Law, and the main position is certainly correct: “Advances in Semi Manufacturing Continue to Make Products Better and More Affordable,” as per the sub-headline of the blog post by Stacy Smith, executive vice president leading manufacturing, operations, and sales for Intel. Smith adds that “We have seen that it won’t end from lack of benefits, and that progress won’t be choked off by economics.” This is what has been meant by “Moore’s Law” all along.

When I interviewed Gordon Moore about all of this 20 years ago (“The Return of Cleverness” Solid State Technology, July 1997, 359), he wisely reminded us that before the industry reaches the limits of physical scaling we will be working with billions of transistors in a square centimeter of silicon. There are no ends to the possibilities of cleverly combining billions of transistors with sensors and communications technologies to add more value to our world. Intel’s recent spend of US$15B to acquire Mobileye is based on a plan to cost-effectively integrate novel functionalities, not to merely make the densest IC.

EETimes reports that at the International Symposium on Physical Design (ISPD 2017) Intel described more than a dozen technologies it is developing with universities and the SRC to transcend the limitations of CMOS. Ian Young, a senior fellow with Intel’s Technology Manufacturing Group and director of exploratory integrated circuits in components research, recently became the editor-in-chief of a new technical journal called the IEEE Journal of Exploratory Solid-State Computational Devices and Circuits, which explores these new CMOS-fab compatible processes.

Meanwhile, Intel’s Mark Bohr does an admirable job of advocating for reason when discussing the size of minimally scaled ICs. Bohr is completely correct in touting Intel’s hard-won lead in making devices smaller, and the company’s fab prowess remains unparalleled.

As I posted here three years ago in my “Moore’s Law Is Dead” blog series, our industry would be better served by retiring the now-obsolete simplification that more = better. As Moore himself says, cleverness in design and manufacturing will always allow us to make more valuable ICs. Maybe it is time to retire “Moore’s Law” and begin leveraging a term like “Integrated Cleverness Law” when telling the world that the next generation of ICs will be better.


Moore’s Law is Dead – (Part 4) Why?

We forgot Moore merely meant that IC performance would always improve (Part 4 of 4)

IC marketing must convince customers to design ICs into electronic products. In 1965, when Gordon Moore first told the world that IC component counts would double in each new product generation, the main competition for ICs was discrete components. Moore needed a marketing tool to convince early customers to commit to using ICs, and the best measure of an IC was simply the component count. When Moore updated his “Law” in 1975 (see Part 1 of this series for more details), ICs had clearly won the battle with discretes for logic and memory functions, but most designs still had only single-digit thousands of transistors, so increases in the raw counts still conveyed the idea of better chips.

For almost 50 years, “Moore’s Law” doubling of component counts was a reasonable proxy for better ICs. Also, if we look at Moore’s original graph from 1965, we see that for a given manufacturing technology generation there is a minimal cost/component at a certain component count. “What’s driven the industry is lower cost,” said Moore in 1997. “The cost of electronics has gone down over a million-fold in this time period, probably ten million-fold, actually. While these other things are important, to me the cost is what has made the technology pervasive.”
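Moore’s cost-minimum observation can be illustrated with a toy yield model. The parameters below (`base_die_cost`, `defect_penalty`) and the exponential yield curve are invented for illustration, not taken from Moore’s paper: amortizing a fixed die cost over more components pushes cost per component down, while falling yield at higher integration pushes it back up, producing a minimum in between.

```python
import math

# Toy illustration of the 1965 cost-per-component minimum: amortizing a
# fixed die cost over more components lowers cost, while falling yield
# at higher integration raises it. All numbers here are invented.

def cost_per_component(n, base_die_cost=10.0, defect_penalty=1e-4):
    """Cost per good component for a die with n components, assuming a
    simple exponential yield model: yield = exp(-defect_penalty * n)."""
    yield_fraction = math.exp(-defect_penalty * n)
    return base_die_cost / (yield_fraction * n)

counts = [10, 100, 1_000, 10_000, 100_000]
costs = {n: cost_per_component(n) for n in counts}
sweet_spot = min(costs, key=costs.get)  # the cost-minimizing count
```

With these made-up parameters the minimum lands at 10,000 components per die; the qualitative U-shape, not the specific numbers, is the point of Moore’s 1965 curves.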

Fast forward to today, and we have millions of transistors working in combinations of “standard cell” blocks of pre-defined functionalities at low cost. Graphics Processing Units (GPUs) and other Application-Specific Integrated Circuits (ASICs) take advantage of billions of components to provide powerful functionalities at low cost. Better ICs today are measured not by mere component counts, but by performance metrics such as graphics rendering speed or FLOPS.

The limits of lithography (detailed in Part 2 of this blog series) mean that further density improvements will be progressively more expensive, and the atomic limits of physical reality (detailed in Part 3) impose a hard stop on density at ~1000x that of today’s leading-edge ICs. “If we say we can’t improve the density anymore because we run up against all these limitations, then we lose that factor and we’re left with increasing the die size,” said Moore in 1997.

Since the cost of an IC is proportional to the die size, and since the cost/area of lithographic patterning is not decreasing with tighter design-rules, increasing the die size will almost certainly increase cost proportionally. We may not need larger dice with more transistors, however, as future markets for ICs may be better served by the same number of transistors integrated with new functionalities.

International R&D center IMEC knows as well as any organization the challenges of pushing lithography, junction formation, and ohmic contacts to atomic limits. At the 2014 Imec Technology Forum, held the first week of June in Brussels, president and chief executive officer Luc Van den hove’s keynote address focused on applications of ICs in communications, energy, health-care, security, and transportation.

TI has been making ICs since Jack Kilby co-invented them there in 1958, and over a decade ago TI made a conscious decision to stop chasing ever-smaller digital processes. First it outsourced digital chip fabrication to foundries, and in 2012 it began retiring digital communications chips. Without continually shrinking components, how has TI managed to survive? By focusing on the design and integration of analog components: in its most recent financial quarter the company posted a 58% gross margin on $3.29B in sales.

At The ConFab last month, Dr. Gary Patton, vice president, semiconductor research and development center at IBM, said there is a bright future in microelectronics (as documented at Pete’s Posts blog).

The commercial semiconductor manufacturing industry will see only continued revenue growth in the future. We will process more area of silicon ICs each year, in support of shipping an ever increasing number of chips worldwide. More fabs will be built needing more tools and an increasing number of new materials.

Moreover, next-generation chips will be faster or smaller or cheaper or more functional, and so will better serve the needs of new downstream customers. ASICs and 3D heterogeneous chip stacks will create new IC product categories leading to new market opportunities. Personalized health care could be the next revolution in information technologies, requiring far more sensors and communications and memory and logic chips. With a billion components, the possibilities for new designs to create new IC functionalities seem endless.

However, we are past the era when the next chips will be simultaneously faster and smaller and cheaper and more functional. We have to accept the end of Dennard Scaling and the economic limits of optical lithography. Still, we should remember what Gordon Moore meant in 1965 when he first talked about the future of IC manufacturing, because one factor remains the same:

The next generation of commercial IC chips will be better.

Past posts in the blog series:

Moore’s Law is Dead – (Part 1) What defines the end.

Moore’s Law is Dead – (Part 2) When we reach economic limits.

Moore’s Law is Dead – (Part 3) Where we reach atomic limits.

Future posts in this blog will ruminate about new materials, designs, and technologies for the next 50 years of IC manufacturing.


Moore’s Law is Dead – (Part 1) What?

…twice the number of components won’t appear on the next IC chip (Part 1 of 4)

Gordon Moore always calls it “so-called Moore’s Law” when discussing his eponymous observation about IC scaling trends, and he has always acknowledged that it is no more and no less than a marketing tool used to inform an ecosystem of downstream chip-users of planned price:performance improvements. The original observation, published in 1965 and updated in 1975, established that the number of functional circuit components—including transistors, diodes, and any passive components—on a single IC chip doubled periodically.

When Moore updated this in 1975 (Moore, Gordon. “Progress in Digital Integrated Electronics,” IEEE IEDM Tech. Digest (1975), pp. 11-13), he decomposed the sources of increasing component count as follows:

  • Die size increase,
  • Dimension decrease (a.k.a. “shrink”),
  • Device and Circuit design (a.k.a. “cleverness”).

Note that Moore never said anything about cost, speed, power consumption, or reliability. It was left to the IC sales force to explain that lithographic R&D meant the next generation of chips would actually be smaller and cost less and, most importantly, that maintaining Dennard Scaling’s power-reduction-per-transistor rules meant that each new chip reliably consumed less power. This was the glory era when each new chip generation provided it all: more components, faster speed, and a cheaper price.
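For readers unfamiliar with Dennard Scaling, the first-order arithmetic can be sketched roughly as follows (a simplified summary of the classic 1974 result, not a quote from it): shrink all linear dimensions and the supply voltage by a factor k > 1, and circuits get faster and denser while power density stays constant.

```python
# First-order Dennard scaling sketch: shrink linear dimensions and
# supply voltage by a factor k > 1. Dynamic power P ~ C * V^2 * f,
# with capacitance C scaling as 1/k, voltage V as 1/k, frequency f as k.

def dennard_scale(k):
    """Relative change in key metrics for a linear shrink factor k."""
    return {
        "gate_delay": 1 / k,                 # circuits switch faster
        "power_per_transistor": 1 / k**2,    # (1/k) * (1/k**2) * k
        "transistor_density": k**2,          # each device uses 1/k**2 area
        "power_density": (1 / k**2) * k**2,  # constant: the free lunch
    }

gen = dennard_scale(1.4)  # roughly one classic ~0.7x linear shrink
```

Once supply voltage could no longer scale down (because of leakage), power density began rising with density, and that, more than any transistor-count limit, is what ended the “faster, smaller, cheaper, cooler” package deal.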

Five years ago, Gordon Moore and Jay Last provided an insightful review of the founding of the IC industry at the Computer History Museum, which I covered in an independent blog posting (http://www.betasights.net/wordpress/?p=758). As summarized well in the “Transistor Count” entry at Wikipedia (https://en.wikipedia.org/wiki/Transistor_count), by 1975 the industry was working on designs with 10k transistors, 100k by 1982, and 1 million by 1989. Incredibly, the trend continued to 1 billion transistors on a chip in production in 2010.
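A quick back-of-the-envelope check on the milestones quoted above (using the round numbers cited, not exact product data) shows why the roughly-two-year doubling framing stuck for so long:

```python
import math

# Transistor-count milestones as quoted above (round numbers):
# 10k in 1975, 1M by 1989, 1B in production by 2010.

def doubling_period(y0, y1, n0, n1):
    """Years per doubling implied by growth from n0 to n1 over [y0, y1]."""
    return (y1 - y0) / math.log2(n1 / n0)

early = doubling_period(1975, 1989, 10_000, 1_000_000)        # 10k -> 1M
late = doubling_period(1989, 2010, 1_000_000, 1_000_000_000)  # 1M -> 1B
```

Both spans work out to about 2.1 years per doubling, remarkably consistent with Moore’s 1975 revision of his original prediction.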

In my interview with Gordon Moore published in the July 1997 issue of Solid State Technology, he emphasized two points:  the atomic limits of IC manufacturing, and the fact that when we start to reach atomic limits we’ll be able to put 1 billion circuit elements into a square centimeter of silicon. However, henceforth we will no longer get it all with the next generation chips, and will only be able to choose one from the glory list that used to be a package deal (pun intended):  more, faster, cheaper. IC innovation will certainly continue, but it will not come through smaller and faster and cheaper circuit elements. Moore’s accurate prediction of gigascale circuitry on cheap chips explicitly sets the stage for the next 50 years of innovation in IC manufacturing…we’ve only begun to play with billions of transistors.

Make no mistake, everyone wishes that Moore’s Law were still alive and well. IC fabs most of all, but everyone from economists and politicians promising exponential growth (http://www.foreignpolicy.com/articles/2010/10/11/opening_gambit_moores_flaw) to futurists selling absurd fantasies of benevolent nanobots (http://www.singularity.org) deeply wishes that Moore’s Law would continue. Sadly, no exponential in the real world can go on forever, and we make mistakes when we blindly ignore the changing conditions behind an exponential trend.

More than ever before, people with little understanding of what Gordon Moore said, let alone what he meant, try to discuss the ramifications of an eventual end to Moore’s Law. In particular, people who have never worked in a semiconductor fab nor designed a commercial IC love extrapolating prior trend-lines forward, without an understanding of how we got here or a clue about the real atomic and economic limits of IC production.

Some analyses ignore the realities of manufacturing process control (http://www.mooreslaw.org/) while others revel in extrapolations based on mathematical abstractions and economic theories (http://www.ebnonline.com/author.asp?section_id=3315&doc_id=273652), and such work can be so bad that it is “not even wrong”.

Imminent posts in this blog series will discuss:

Moore’s Law is Dead – (Part 2) When we reach economic limits,

Moore’s Law is Dead – (Part 3) Where we reach atomic limits,

Moore’s Law is Dead – (Part 4) Why we say long live “Moore’s Law”!