Are we using Moore’s name in vain?

By Zvi Or-Bach, President & CEO of MonolithIC 3D

The assertion Moore made in his April 1965 Electronics paper was:

“Thus there is a minimum cost at any given time in the evolution of the technology. At present, it is reached when 50 components are used per circuit. But the minimum is rising rapidly while the entire cost curve is falling (see graph below).”


“The complexity for minimum component costs has increased at a rate of roughly a factor of two per year (see graph on next page). Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years.”


Clearly, Moore’s Law is about cost: Gordon Moore’s observation was that the optimum number of components (nowadays, transistors) for achieving minimum cost per component doubles every year.
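As a rough illustration (a sketch of the observation, not a formula from the paper itself), the doubling can be written as components(t) = c₀ · 2^(t−t₀). Starting from the 50-component cost optimum of 1965, yearly doubling gives about 51,000 components a decade later — the same order of magnitude as the ~65,000 Moore extrapolated for 1975:

```python
def min_cost_components(year, base_year=1965, base_count=50):
    """Moore's 1965 observation: the component count that minimizes
    cost per component doubles roughly every year."""
    return base_count * 2 ** (year - base_year)

print(min_cost_components(1965))  # 50
print(min_cost_components(1975))  # 50 * 2**10 = 51200
```

The function names and the yearly doubling period are my own framing; Moore himself revised the doubling period to about two years in 1975.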

For many years the reduction of cost per component was directly related to the reduction in feature size – dimensional scaling. But many other technology improvements made important contributions as well, such as increasing the wafer size from 2″ all the way to 12″.
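To see why wafer size matters for cost, note that usable area grows with the square of the diameter: going from 2″ (roughly 50mm) to 12″ (300mm) multiplies the area by about 36×. A minimal sketch (ignoring edge exclusion and die-fitting losses, which reduce the real gain somewhat):

```python
import math

def wafer_area_mm2(diameter_mm):
    """Gross wafer area, ignoring edge exclusion."""
    return math.pi * (diameter_mm / 2) ** 2

ratio = wafer_area_mm2(300) / wafer_area_mm2(50)
print(ratio)  # 36.0 - about 36x more area per wafer processed
```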

But many observers these days suggest that 28nm will be the optimal feature size with respect to cost for many years to come. Below are some charts suggesting as much:


And more analytical work by IBS’ Dr. Handel Jones:


Graphically presented in the following chart:


Recently EE Times reported that “EUV Still Promising on IMEC’s Road Map.” IMEC provided a road map for transistor scaling all the way to 5nm, as illustrated in the following chart:


Yes, we probably can keep on scaling, but clearly at escalating complexity, and with completely new materials below 7nm. Since dimensional scaling requires ever more advanced lithography, costs will keep moving up, and the additional complexity of transistor structures, together with all the other complexities associated with these extreme efforts, will most likely drive costs even higher.

Looking at the other roadmap chart provided by IMEC and focusing on the SRAM bit cell in the first row, the situation seems far worse:


Since at 28nm the SRAM bit cell is already 0.081 μm², this chart indicates that future transistor scaling barely applies to the SRAM bit cell, which effectively is no longer scaling.
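For comparison, if the SRAM bit cell still followed ideal dimensional scaling, its area would shrink with the square of the node ratio. A sketch of that ideal case (the 0.081 μm² figure is from the text above; the ideal-scaling assumption is mine, and the chart shows reality falling well short of it):

```python
def ideal_scaled_area(area_um2, from_node_nm, to_node_nm):
    """Ideal dimensional scaling: area shrinks with the square
    of the linear shrink factor."""
    return area_um2 * (to_node_nm / from_node_nm) ** 2

# 28nm bit cell is ~0.081 um^2; what ideal scaling would predict at 7nm:
print(round(ideal_scaled_area(0.081, 28, 7), 4))  # 0.0051
```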

Unfortunately, most SoC die area is already dominated by SRAM, and that share is predicted to grow even further, as illustrated by the following chart:


Source: Y. Zorian, “Embedded memory test and repair: infrastructure IP for SOC yield,” in Proceedings of the International Test Conference (ITC), 2002, pp. 340–349.

Dimensional scaling was not an integral part of Moore’s assertion in 1965 – cost was. But dimensional scaling became the “law of the land” and, just like other laws, the industry seems fully committed to following it even when it no longer makes sense. The following chart captures Samsung’s view of the future of dimensional scaling for NV memory, and it seems just as relevant to the future of logic scaling.




10 thoughts on “Are we using Moore’s name in vain?”

  1. Bob Conn

    It occurs to me that when Gordon Moore made his projection folks (including me) were using bipolar transistors. His projection survived the transition to a fundamentally different technology – MOS. Perhaps other, as yet undeveloped, technology will help us along. Maybe stacked transistors – not the 2.5D or 3D we have today, but building the p-channel on top of the n-channel. Or back to multi-level logic and forward to nanotubes. I don’t believe the technology is the limitation. I do believe the ‘mature’ semiconductor industry has become very conservative, and things off the obvious evolutionary path rarely get much attention. I expect something will happen that will cause a shift. I’m curious as to how it turns out.


  2. Ed Korczynski

    When Moore updated this in 1975 (Moore, Gordon. “Progress in Digital Integrated Electronics” IEEE, IEDM Tech Digest (1975) pp.11-13.) he decomposed the sources of increasing component count as follows:
    * Die size increase,
    * Dimension decrease (a.k.a. “shrink”),
    * Device and Circuit design (a.k.a. “cleverness”).
    You can find an interview I did with Moore in 1997, in which he discussed these factors and predicted that when we would reach the limits of shrinks we’d be at ~1 billion transistors/chip (true today) and that the way forward would be a “return of cleverness” in circuit design (along with larger chips).

    As Moore himself routinely asserted “so called Moore’s Law” was always about cost-effective manufacturing, so a change in slope of the Moore’s Law trend-line must be induced by the economic limits of optical lithography.

  3. Brian Cronquist

    I agree with Bob – a new technology will help us along. Direct p over n (or n over p) is being actively worked on by imec, and transistors over transistors with metal between the strata is being actively worked on by BeSang and MonolithIC3D. All these can offer a continuance of the Moore’s Law economic prediction. And I think the economic limits of optical lithography that Ed brought up from Moore (fascinating interview Ed!) can be mitigated by going to these technologies (p over n, etc…monolithic or ‘sequential’ 3D). These technologies, in my view, could be characterized as a kind of ‘circuit design cleverness’…..stacking transistors so very close (a few microns) to each other coupled with the rich/dense vertical connectivity can reduce costs and allow further integration growth (Moore), and can breed many new architectures and devices. Moore 3.0 perhaps?

  4. Pingback: Blog Review November 11 2013 | Semiconductor Manufacturing & Design Community

  5. Dev Gupta

    At 14nm, compared to 28nm, the cost per transistor might indeed go up by 15%, but the system cost would still go down due to more integration (shrink), lower power, etc., giving system designers enough incentive to shift. But also don’t underestimate the “cleverness” factor mentioned by Dr. Moore, which has worked not just in design but in process too, including litho. Don’t forget immersion litho; something elegant like that might save us again from the current high-budget, corporate brute-force approach to EUV.

  6. V. Tsai

    Reread Gordon’s 1965 Electronics article, esp. his original draft. He was very clear about the benefits coming from lithographic scaling, though this combination of words had yet to come into existence. There’s lots more in this article that most miss, overlook, or didn’t take the time to read in the first place. It’s really worth a reread.

  7. Gilbert M de Guzman

    If Moore’s law is not strictly followed, new circuit topology opportunities can be created by using various materials exhibiting old properties alongside new processing developments.

  8. Pingback: Samsung, Intel Capital and Applied Materials fund Inpria to develop advanced semiconductor materials | Anchor Science LLC

  9. Pingback: SRC and MIT extend high resolution lithography capabilities | Anchor Science LLC

  10. Michael Clayton

    The 60’s started with no ion implanters, no plasma etch systems, no lasers, no SEM’s at first (Westinghouse invented those), so lots of B&L metallurgical microscopes, small wafer sizes that varied greatly within very short time intervals, all bipolar, mostly military customers, analog dominant (audio and RF for radios), but epitaxy as key process step which went away with MOS later. Furnaces were highly modified industrial heat-treating beasts, with quartz closed-tube diffusion gradually giving way to open tube gas and liquid vaporizer dopants, with huge uniformity issues down the tube and across the wafers.
    The process engineers, called cooks in those days, were making their own tools in many cases. Many early IC makers made their own wafers, some even made their own polysilicon starting material. Dislocations in silicon were widely studied. The Fairchild planar process for transistors, with overlay metallization, and diffused isolation was the key to manufacturability and reliability compared to late 1950’s efforts such as Kilby’s silicon bars. Process engineers and technicians were changing jobs often with huge salary increases. The Wagon-Wheel bar in what was becoming Silicon Valley was a place to change jobs, sometimes twice in same night. Device physicists ruled, as the Product Engineer had not yet emerged triumphant. And we still used the text books from Bell Labs to start the 60’s IC decade and the Electrochemical Society was as popular as the IRE (which became the IEEE) for creative new process and device articles.

    Packaging started in modified transistor “cans” just with more leads, but quickly went to flat packages, and everything had lots of gold plating. Wire bonding was all gold thermocompression, to aluminum, with “purple plague” rampant until we learned to lower the TC bonding temperatures. Wire bonding started out with equipment created for the transistor makers by Kulicke and Soffa, who had been making beer-bottle handling equipment earlier. Very few of the transistor makers survived to the IC era, just as the earlier diode leaders were not the transistor leaders.

    So here’s to the 60’s as the most chaotic and creative time in equipment, materials, circuits, processing as we moved from transistors to integrated circuits, from many wafer sizes to standards, from flower baskets and trays to cassettes thanks to IBM, and from levis in the fab to cleanroom gowns and laminar flow. By the 70’s the Japanese were kicking our butts. And the cost reduction curves were well noted so the future was assured in the minds of the investors, but the global wars were just beginning.

    And yet most history stories of the IC talk about Moore’s Law, just as history-of-technology people talk about S-curves: massive over-simplifications, like the hypergeometric growth discontinuities we call economic black swan events.



