Back-side illumination, wafer-scale optics drive 2×-5× jump in CMOS image sensor performance
CMOS image sensors are driving IC technology on more fronts than 3D packaging these days. Ultrathin silicon that enables back-side illumination (BSI), together with integrated wafer-level optics, is bringing sharply improved performance, lower cost, and smaller size, driving CMOS image sensors into more and more markets, and these technologies may soon impact other IC manufacturing as well.
Thinning silicon wafers down to 5µm transparent films to let light through the back side is driving a 2× to 5× improvement in sensitivity for small-pixel image sensors. Yole Développement sees CMOS sensors now moving quickly into higher-performance applications, including high-resolution digital SLR cameras and digital video recorders, as they come to match the performance of CCDs at lower cost. These thinning and annealing technologies may also open new possibilities for 3D stacking and integration of very thin layers of memory and logic devices in the future. Wafer-level optics are also starting to reduce camera module size and cost in even demanding handset applications, and could bring similar improvements to projection lens systems for gaming stations and microdisplays.
CMOS image sensors drive ultrathin wafer technology roadmap
One of the key breakthroughs enabling these major improvements in performance and price is backside illumination, which requires thinning the silicon wafer down to a transparent 5µm active layer so that light reaches the photodiode directly from the back side. Putting the electrical distribution layers behind the photodiode allows a simpler and more flexible BEOL architecture, eliminating the need to leave openings in the metal pattern to let light through, and naturally admits much more light, allowing higher resolution or higher sensitivity for the same sensor size and cost. The next step, now in development, is to exploit this increased design flexibility and add more intelligence to the system by stacking a microprocessor DSP below the sensor, moving toward the ideal of controlling each pixel individually, as in the human eye, for much improved sensor performance across different light conditions.
|Figure 1. Example of CMOS BSI "SOI" process flow. (Source: Yole Développement, "CMOS Image Sensors: Technology and Markets 2010")|
The BSI process flow starts with making the photodiodes. The device wafer is then bonded at low temperature to a silicon or glass carrier, using either adhesive polymers or molecular oxide-to-oxide bonding. US-based 3D-IC company Ziptronix offers one low-temperature bonding solution, which presses together ultraflat wafers prepared with specific surface treatments. Next, the 1mm-thick photodiode wafer is thinned down to 40-50µm with Disco or Accretech grinding tools, thinned further with CMP, and finally etched down to an etch-stop layer at 5µm. This radical wafer thinning typically requires precise control of wet etching after the initial grinding and CMP. One option, used by Sony and others, is to start from SOI wafers from Soitec and use the buried oxide layer as an inner etch stop at the oxide interface—though the high cost of SOI wafers may limit the process to only high-end imaging applications. Others, including OmniVision working with TSMC and Xintec, claim to have developed a lower-cost alternative process using bulk silicon wafers with graded implant layers. The trick is to find a highly selective etch chemistry that stops precisely at the required 5µm silicon interface, just before reaching the photodiode structures.
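The grind/CMP/selective-etch sequence above can be sketched as a simple model. This is an illustrative sketch only: the CMP removal amount and the etch selectivity (bulk silicon vs. the stop layer) are hypothetical round numbers, not process data from the article.

```python
# Illustrative sketch of the BSI wafer-thinning sequence:
# grind -> CMP -> selective wet etch down to the etch-stop layer.
# All thicknesses and the selectivity figure are assumptions for illustration.

def thin_wafer(start_um=1000.0, grind_target_um=45.0, etch_stop_um=5.0,
               selectivity=100.0):
    """Return (stage, remaining thickness in um) for each thinning step.

    `selectivity` is the assumed etch-rate ratio of bulk silicon to the
    stop layer (buried oxide for SOI wafers, or a graded implant layer
    for bulk wafers), which is what lets the etch land on the 5um film.
    """
    stages = []
    # 1. Coarse grinding removes the bulk of the ~1mm-thick wafer.
    thickness = grind_target_um
    stages.append(("grind", thickness))
    # 2. CMP removes grinding damage and smooths the surface (assumed 5um).
    thickness -= 5.0
    stages.append(("cmp", thickness))
    # 3. Selective wet etch: fast in silicon, ~selectivity times slower at
    #    the stop layer, so the over-etch into the stop layer is tiny.
    overetch_into_stop = (thickness - etch_stop_um) / selectivity
    thickness = etch_stop_um - overetch_into_stop
    stages.append(("wet_etch", round(thickness, 2)))
    return stages

for name, t in thin_wafer():
    print(f"{name:>8}: {t} um remaining")
```

The point of the toy selectivity term is that a 100:1 chemistry turns 35µm of remaining silicon into only ~0.35µm of uncertainty at the stop layer, which is why the etch chemistry, not the grinder, sets the final precision.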
Also critical is the annealing process, since this 5µm silicon film needs to include a very narrow implant gradient to prevent recombination in the epi silicon and to push the photogenerated carriers down to the photodiodes. Since typical annealing ovens can only be controlled to about 30µm layer precision, the finer implant gradients require annealing with a nanosecond, localized laser heating process. The French equipment company Excico supplies a tool that uses a large-spot UV excimer laser source for tight precision, better image quality, and higher throughput.
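A back-of-envelope heat-diffusion estimate shows why a nanosecond pulse confines the anneal to a fraction of a micron while longer heating spreads through the whole film. The sketch below uses the textbook room-temperature thermal diffusivity of silicon (~0.8 cm²/s) as its only physical input; it is an order-of-magnitude illustration, not a model of any specific tool.

```python
import math

# Thermal diffusion length L = sqrt(D * t): roughly how deep heat
# penetrates during a pulse of duration t, for diffusivity D.
# D ~ 0.8 cm^2/s is the standard room-temperature value for silicon.

def thermal_diffusion_length_um(pulse_s, diffusivity_cm2_s=0.8):
    """Return the characteristic heated depth in micrometers."""
    l_cm = math.sqrt(diffusivity_cm2_s * pulse_s)
    return l_cm * 1e4  # cm -> um

for pulse in (1e-9, 1e-6, 1.0):
    depth = thermal_diffusion_length_um(pulse)
    print(f"{pulse:g} s pulse -> heated depth ~ {depth:.2g} um")
```

A 1ns pulse heats only a few tenths of a micron of silicon, comfortably inside a 5µm film, whereas second-scale furnace heating diffuses through millimeters, which is consistent with the ~30µm control limit of oven annealing quoted above.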
This ability to build and handle ultrathin layers also opens new possibilities for 3D stacking and integrating very thin active layers monolithically on top of other semiconductor devices. Indeed, this type of process setup could be reused in the future for 3D integration of memory + memory, logic + memory, or MEMS + logic applications.
Coming next: Wafer-level integration of optics
The next development just starting to impact the CMOS image sensor business is wafer-scale integration of the optics, to drastically reduce size and cost and to simplify the assembly process and supply chain. The camera module remains one of the largest components in most cell phones, and among the most complex to manufacture, requiring sourcing and assembly of up to 15 different components, including not only lenses but also IR filters, caps, barrels, spacers, autofocus mechanisms, and other parts. An attractive alternative is wafer-level processing of optical lenses, already in low-volume production today by Heptagon for STMicroelectronics and by Anteryon for Toshiba camera modules.
|Figure 2. Wafer-level camera module assembly steps. (Source: Yole Développement, "CMOS Image Sensors: Technology and Markets 2010")|
A typical wafer-level optic process involves dispensing a polymer layer onto a glass wafer, then pressing in the desired pattern through molding and UV replication with nanoimprint lithography tools to imprint some 4000 lenses at once on each 8-in. wafer. This wafer is then aligned and bonded to another lens or spacer wafer, tested, and finally diced into small, low-cost optical lens camera-module cubes.
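The ~4000-lens figure is easy to sanity-check with a standard gross-die-per-wafer approximation. The die pitch and edge exclusion below are assumptions chosen to illustrate the arithmetic, not numbers from the article: a roughly 2.7mm square lens die on a 200mm (8-in.) wafer lands in the right range.

```python
import math

# Classic gross-die estimate: usable wafer area divided by die area,
# minus an edge-loss term for partial dies along the circumference.
# Die size and edge exclusion are hypothetical illustrative values.

def gross_dies(wafer_mm=200.0, die_mm=2.7, edge_excl_mm=3.0):
    """Approximate number of square dies per round wafer."""
    usable = wafer_mm - 2 * edge_excl_mm          # usable diameter
    area_term = math.pi * (usable / 2) ** 2 / die_mm ** 2
    edge_term = math.pi * usable / (math.sqrt(2) * die_mm)
    return int(area_term - edge_term)

print(gross_dies())  # on the order of 4000 lens sites per 8-in. wafer
```

Working at this density is the economic argument for wafer-level optics: one imprint and one dicing pass replace thousands of individually molded and handled lens elements.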
Companies are also working on integrating autofocus functions at the wafer level. A number of players, including SEMCO, are working on electroactive polymers, whose thickness can be controlled electrically by applying a defined voltage. Others, like Siimpel (recently acquired by Tessera), have a MEMS-based solution using a spring-like silicon structure.
|Figure 3. CMOS image sensors technology drivers—new challenges to face. (Source: Yole Développement, "CMOS Image Sensors: Technology and Markets 2010")|
But the front-runner currently appears to be a potential breakthrough technology using liquid crystal polymers. The startup LensVector plans to start production this year of a four-layer stack of 8-in. glass wafers encapsulating liquid crystal polymers that change shape when voltage is applied to the driving electrodes, all in less than 500µm total thickness. This technology has the potential to significantly bring down the size and cost of camera modules in the future.
These increasingly integrated optics will likely find application in other products as well, simplifying manufacture and reducing the size and cost of other optoelectronics, across applications ranging from biomedical endoscopy to consumer products like digital cameras, pico projectors, heads-up automotive displays, projection systems for gaming, and LED lighting.
Wafer-level packaging moves to higher-performance devices, more applications
Image sensor makers were among the first to move to volume production of wafer-level packaging and through-silicon vias (TSVs), as the technologies offered a solution to the big yield losses from the complex alignment demands of packaging and assembling the optics with the high-cost sensor chips into plastic modules. From initial use of Tessera's ShellcaseOP glass-capping technology with low-end CIF and VGA format camera units of a megapixel or less, the WLP technology has moved upstream to more complex and finer-pitch devices of 2-3 megapixels. TSMC's Xintec packaging unit is currently running more than 200,000 8-in. WLP wafers a year, mostly for OmniVision. Toshiba, Samsung, and STMicroelectronics are producing internally in volume. Considerable MEMS volumes, and some LEDs and memory chips as well, are also starting to use similar technology as the infrastructure builds up worldwide and costs come down with volume. — Jérôme Baron, Yole Développement