Engineered to the task: why camera-phone cameras are different


06/01/2009

At face value, the camera inside a cell phone is no different from that found in many other digital imaging applications. The actual cell phone camera, however, is a different animal that has been engineered specifically for the small, mobile platform. The principal differences are cost, size, and performance, with customization made possible by the huge demand. Wafer-level processes can reduce cost, increase reliability, and simplify assembly.

Giles Humpston, Tessera, San Jose, CA USA

Despite its seeming pervasiveness, the camera phone is a relatively recent invention, as the first model was only launched in Japan in 2001. At face value, the camera inside a cell phone is no different from that found in many other digital imaging applications including digital still cameras, camcorders, Web cams, automotive displays, machine systems, and toys. It is merely a device that captures a scene in electronic format. The actual cell phone camera, however, is a different animal that has been engineered specifically for the small, mobile platform. The principal differences are cost, size, and performance, with customization made possible by the huge demand; camera phones will likely account for over half of the 2.5 billion camera modules expected to be manufactured in 2009.

Cost

The cost of camera modules is anomalous compared with other imager categories (Fig. 1). Manufacturing volume alone does not account for the discrepancy, as camera modules for phones are a rapidly evolving product in a market where few standards exist. Each camera module model has a short life span, and the manufacturing volume seldom exceeds a few million units. Some cell phone companies have attempted to establish standards, but these are usually limited in scope to physical dimensions and the electronic interface. This allows the camera phone manufacturer to source compatible camera modules from various providers without getting involved in the interior architecture, and it frees the camera module manufacturer to develop that interior as they see fit. Unsurprisingly, a main area of innovation is cost reduction. Through ongoing technical development, there is a realistic possibility of achieving the long-sought sub-$1 camera module.


Figure 1. Relationship between price and resolution, for four common imaging device categories. Camera modules benefit from extremely high manufacturing volume and technical innovation to deliver a very low price per pixel. Source: Tessera.

A camera module’s price is principally determined by the imager diagonal, with a relationship that is almost a cubic power. This arises because the imager diagonal strongly influences the die size and hence the silicon cost per die; it also dictates the diameter of the mating optics and therefore their cost. One way to decrease imager size is to use smaller pixels, and image sensor manufacturers have progressively decreased pixel dimensions. The current norm is around 1.75µm, and most companies have roadmaps out to 0.9µm, an evolution that will yield approximately 60% more die per wafer with negligible increase in processing costs. These small-pixel technologies will enable a 3-megapixel sensor with a diagonal of under 0.1" and a 22-megapixel sensor with a diagonal of just 0.25".
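As a back-of-the-envelope check on those figures, the sketch below computes the active-area diagonal implied by a resolution and pixel pitch. It is illustrative only: it assumes square pixels on a 4:3 array and ignores the border and optically black pixels that surround a real active area.

```python
import math

def imager_diagonal_mm(megapixels, pixel_pitch_um, aspect=(4, 3)):
    """Active-area diagonal of a sensor, given resolution and pixel pitch.

    Simplified model: square pixels, exact 4:3 array, no border pixels.
    """
    w_ratio, h_ratio = aspect
    pixels = megapixels * 1e6
    # Solve w * h = pixels subject to w / h = w_ratio / h_ratio
    h = math.sqrt(pixels * h_ratio / w_ratio)
    w = h * w_ratio / h_ratio
    diag_pixels = math.hypot(w, h)
    return diag_pixels * pixel_pitch_um / 1000.0  # result in mm

MM_PER_INCH = 25.4
# 3 MP at a 0.9 um pitch: diagonal comfortably under 0.1"
print(imager_diagonal_mm(3, 0.9) / MM_PER_INCH)   # ~0.089
# 22 MP at the same pitch: diagonal about 0.24", i.e. roughly 0.25"
print(imager_diagonal_mm(22, 0.9) / MM_PER_INCH)  # ~0.24
```

The same function with a 1.75µm pitch shows why the pixel shrink matters: the 3-megapixel diagonal nearly doubles, and the cubic price relationship magnifies that difference.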

Yield is another battlefront. Manufacturing yield for image sensors is lower than for conventional CMOS die because an imager must have both electrical and optical functionality, and both must be preserved at the die level and through the long and complex series of processes required to assemble a camera module. Low yield is just cost in another guise. The principal cause of yield loss in camera modules is contamination of the optically sensitive area of the die by particles. This area is extremely delicate and generally cannot be cleaned if contaminated, making naked imager die incompatible with semiconductor back-end processes like probing and dicing, as well as with the complete suite of camera module assembly processes.

The solution currently being adopted is wafer-scale packaging. These image sensor packages evolved from the windowed EPROM package, in which a transparent portion of the enclosure provides optical access to the die while protecting it against mechanical and chemical damage; the window can also be cleaned of debris. Wafer-scale packaging of image sensors has a considerable economic advantage over discrete packaging since the materials and process costs are shared among the good die on a wafer, which can number several thousand VGA sensors on a 200mm wafer. Consequently, the imager industry is transitioning rapidly to wafer-scale packaging, and it is predicted that by 2012 nearly 70% of imagers will be housed in this type of protective enclosure (Fig. 2).

Wafer-scale packaging provides consequential cost benefits, one of which is that the imager can be presented as a surface-mount component. The camera module can then be soldered to the handset PCB simultaneously with the other components. Without this interconnect, a flexible circuit and connector are necessary, which cost roughly $0.30 per camera module; these items are also the major cause of field failures of camera modules in cell phones.


Figure 2. Predicted market share of camera modules containing imagers packaged at the wafer scale, according to several independent market research firms. The adoption is driven by the high and consistent yield that can be obtained using imager die housed in a protective enclosure.

Another strategy to reduce camera module cost is to decrease the number of elements in the optical train and use lower-cost materials and assembly processes. A megapixel camera from just a few years ago would have had up to four glass lenses. Most, if not all, modern camera-phone lenses are manufactured by injection molding of plastic, a process that permits more complex optical surfaces to be realized so that fewer lenses are necessary.

Size

The current fashion in portable electronics products is extreme thinness. Often the tallest component, and therefore the one that determines handset thickness, is the camera module. It is not uncommon to see slider-style handsets with a bulge centered on the camera module, and flip-style handsets tend to have the camera module located on the axis of the hinge, because this is the thickest part of the product.

Several factors determine the height of a camera module. Physics dictates that the length of the optical train is proportional to the imager diagonal, so the move to decrease cost by using smaller pixels helps in this regard as well. The adoption of wafer-scale packaging also makes it possible to decrease the thickness of the imager die. The standard thickness for 200mm wafers is 750µm, but the glass cover of the wafer-scale package provides mechanical support to the silicon, making it possible to thin the silicon to typically between 100µm and 200µm. This is not a physical limit; rather, it is dictated by the design of the image sensor, as dark leakage current rises if too much bulk silicon is removed.
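The proportionality can be seen from thin-lens geometry: for a fixed field of view, the required focal length, and hence the minimum optical track, scales linearly with the sensor diagonal. A sketch with hypothetical numbers (a roughly 1/4"-format sensor and a 65° diagonal field of view, chosen purely for illustration):

```python
import math

def focal_length_mm(diagonal_mm, fov_deg):
    """Focal length needed to cover a sensor diagonal at a given
    diagonal field of view (thin-lens approximation)."""
    return (diagonal_mm / 2.0) / math.tan(math.radians(fov_deg) / 2.0)

# Hypothetical ~4.5 mm diagonal sensor, 65 degree diagonal FOV
print(focal_length_mm(4.5, 65))  # ~3.5 mm

# Halving the diagonal halves the required focal length
print(focal_length_mm(2.25, 65))
```

This is why smaller pixels help module height as well as cost: a smaller diagonal directly shortens the optical train.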

The most recent innovation in camera module technology aimed at decreasing cost and size is the wafer-level camera. Traditionally, camera modules are manufactured using chip-on-board (COB) assembly. In this approach, the image sensor die is attached to a substrate and interconnected by wire bonds. An enclosure is then glued in place over the imager. Meanwhile, the lenses are made as discrete components and precisely aligned and positioned in a lens barrel. The imager is powered up and the lens barrel is screwed into the enclosure until a well-focused image is obtained, then the barrel is locked in position. This slow, labor-intensive process opens ample opportunity for yield loss. However, the major objection to this design is that the housing and barrel are physically large, increasing the dimensions of the camera module; parts of the barrel protrude beyond the total track length. Typically, a conventional camera module is around 5mm tall.

Technology has now been developed to manufacture whole wafers of identical optical components. The optical element is fabricated using replication processes, in which an optical surface is formed on each side of a flat substrate. The substrate need not be optically passive: a thin metal layer containing a hole can be applied to it to function as an aperture. Apertures and infrared filter coatings can also be applied to one or both of the optical surfaces. This creates an integrated optical component that has all the functionality necessary for a solid state camera. Wafer-level manufacturing enables a compact, rugged lens with predictable optical performance and negligible part-to-part variation. Mating of the wafer-scale optical components with imagers packaged at the wafer scale can then be done with great precision. As with any wafer-scale process, the resulting cost per part is 30–50% lower, and it is possible to decrease the camera module height.


Figure 3. Forecast market share of reflowable camera modules in camera phones by year. Source: Techno Systems Research Co. Ltd., 2007.

Through judicious materials selection for the integrated optical component, camera modules fabricated at the wafer-scale can survive lead-free solder reflow temperatures. Conventional camera modules are unable to withstand this thermal excursion, which means they must be interfaced to the handset by a flexible lead and connector. The benefits of surface mounting are improved reliability and decreased piece part and assembly costs. Reflowable camera modules are a relatively recent innovation, so the infrastructure to manufacture them in high volume is not yet established and the necessary knowledge base not widely disseminated. Reflowable camera modules currently account for around 3% of the handset market, but are predicted to ramp rapidly to over 32% by 2011 (Fig. 3).

Performance

Reducing the cost and size of camera modules detrimentally impacts image quality. Fortunately, many of the process and materials changes have also improved repeatability, which enables the use of software-enhanced optics to correct known optical defects. For example, if the constraints on size and cost mean that the picture corners will always be blurred to the same degree, edge-sharpening algorithms can be applied. The customer is happy with the picture because the inherent deficiencies in the camera module have been corrected or masked, and pictures appear good in all areas. Similarly, the camera lens designer is often faced with the challenge of managing chromatic aberration; a common manifestation with inexpensive lenses is “purple fringing,” a distortion that occurs in areas of high contrast. Because chromatic aberration is constant for a given optical design, it too can be corrected by embedded algorithms. By necessity, every CMOS imager already contains an image processing pipeline; the cost of a few additional gates to perform these corrections is negligible.
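As an illustration of the principle, the sketch below corrects lateral chromatic aberration by radially rescaling the red and blue channels about the optical center. It is not any vendor's actual pipeline: the magnification factors `k_red` and `k_blue` stand in for calibration constants that would be measured once for a given lens design, and nearest-neighbor sampling is used for brevity where real hardware would interpolate.

```python
import numpy as np

def correct_lateral_ca(rgb, k_red, k_blue):
    """Radially rescale the red and blue channels about the image
    center to undo lateral chromatic aberration.

    rgb: float array of shape (H, W, 3); k_red/k_blue: hypothetical
    per-design magnification factors (1.0 means no correction).
    """
    h, w, _ = rgb.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    out = rgb.copy()
    for ch, k in ((0, k_red), (2, k_blue)):
        # Sample the channel at radially scaled coordinates
        sy = np.clip(cy + (ys - cy) * k, 0, h - 1).round().astype(int)
        sx = np.clip(cx + (xs - cx) * k, 0, w - 1).round().astype(int)
        out[:, :, ch] = rgb[sy, sx, ch]
    return out
```

Because the correction depends only on the lens design, not the scene, it fits naturally into the fixed-function image processing pipeline already present on the imager.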

Software-enhanced optics can also endow camera modules with performance that is unobtainable from conventional lens trains, no matter how large. The basis of the approach is to design a specialty lens that manipulates the optical rays during their passage through the camera to produce an intensity distribution on the imager with desired features. The manipulated image is not used as is; it needs further software correction. However, because the image was manipulated in a known manner in the optical domain, it can be digitally restored so that high-quality output can be extracted. Possible features include full optical zoom with no moving parts, continuous depth of field, and small F-number optics for low-light environments. The latter is particularly important for the camera phone market owing to the social trend of taking photographs on camera phones in low light, typically in the evening and in venues like clubs and restaurants where there can be ≤5 lux illumination, compared with >350 lux outdoors in daylight.
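A minimal sketch of the restoration step, under the simplifying assumption that the optical manipulation reduces to convolution with a known point-spread function (real systems of this kind are more elaborate), is a Wiener filter: because the blur is known by design, it can be inverted digitally.

```python
import numpy as np

def wiener_restore(blurred, psf, snr=1e4):
    """Restore an image blurred by a known point-spread function.

    blurred: 2D float array captured by the sensor; psf: the known
    blur kernel designed into the optics; snr: assumed signal-to-noise
    ratio, which regularises the inversion at weak frequencies.
    """
    H = np.fft.fft2(psf, s=blurred.shape)     # frequency response of the blur
    B = np.fft.fft2(blurred)
    # Wiener filter: near-inverse of H, damped where H is small
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(B * W))
```

The key point the sketch demonstrates is that a deterministic, design-time blur costs almost nothing to undo, whereas an unknown or varying blur cannot be inverted this way.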

Allying software with a camera module, especially in the form of embedded algorithms, can provide additional features that do not necessarily improve the native picture quality but give the perception of improvement. An example is red-eye reduction: it is possible to embed the necessary algorithms on the handset and correct affected photographs without any user intervention, or even the user's knowledge that it has been done. Similarly, a look through most photograph albums will confirm that people are the subject of the majority of photographs. Software can locate the principal face and ensure that it is correctly exposed, in focus, and properly color balanced, giving the impression of a high-performance camera.

Conclusion

Solid state camera modules are manufactured in vast numbers, around half of which end up in camera phones. Camera modules for handsets are differentiated from their brethren on three counts: cost, size, and performance. Shrinking pixel sizes and the adoption of wafer-scale packaging for the imager have significantly reduced the price per pixel. Wafer-scale optics and integrated components make possible the reflowable, wafer-scale camera module, which is substantially smaller than any camera module made by discrete assembly. Cost-saving measures and compact optics tend to detrimentally impact optical quality, but performance is restored by software, particularly in the form of embedded algorithms. The net result is that the camera module for a camera phone is both cheaper and smaller than a conventional camera module, yet can deliver better performance.

Giles Humpston, PhD, is a metallurgist with a doctorate in alloy phase equilibria. He serves as director of research and development at Tessera Technologies, Inc., 3025 Orchard Pkwy, San Jose, CA 95134, USA; ph: 408-321-6000; ghumpston@tessera.com.