A monolithic integrated circuit (also known as IC, microchip, silicon chip, computer chip or chip) is a miniaturized electronic circuit (consisting mainly of semiconductor devices, as well as passive components) that has been manufactured in the surface of a thin substrate of semiconductor material. A hybrid integrated circuit is a miniaturized electronic circuit constructed of individual semiconductor devices, as well as passive components, bonded to a substrate or circuit board. This article is about monolithic integrated circuits.
Integrated circuits can be found in almost every electronic device today. Anything from a common wristwatch to a personal computer contains integrated circuits, which control almost everything, from the temperature setting of an ordinary iron to the clock in a microwave oven. This has changed the way we use electronic devices, making them simpler to operate: most microwave ovens, for example, now have preset controls for different settings, so pushing a single button automatically sets the time for defrosting an item or popping popcorn.
In the future, integrated circuits may even be used for medical purposes. Since the late 1980s, for example, researchers have been trying to develop a computer chip that can be attached to the brain to repair different types of brain damage. Such a link might one day repair some kinds of blindness or even memory loss from brain damage.
Only a half-century after their development was initiated, integrated circuits can be found everywhere. Computers, cellular phones, and other digital appliances are now inextricable parts of the structure of modern technological societies. In other words, modern computing, communications, manufacturing, and transport systems, including the Internet, all depend on the existence of integrated circuits. Indeed, many scholars believe that the digital revolution based on integrated circuits is one of the most significant developments in the history of mankind.
Integrated circuits were made possible by experimental discoveries showing that semiconductor devices could perform the functions of vacuum tubes, and by mid-twentieth-century technology advancements in semiconductor device fabrication. The integration of large numbers of tiny transistors into a small chip was an enormous improvement over the manual assembly of circuits using discrete electronic components. The integrated circuit's mass production capability, reliability, and building-block approach to circuit design ensured the rapid adoption of standardized ICs in place of designs using discrete transistors.
There are two main advantages of ICs over discrete circuits: cost and performance. Cost is low because the chips, with all their components, are printed as a unit by photolithography and not constructed one transistor at a time. Performance is high, because the components are small, close together, switch quickly, and consume little power. As of 2006, chip areas range from a few square millimeters (mm²) to around 250 mm², with up to 1 million transistors per mm².
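The transistor budget implied by the area and density figures just quoted is a simple multiplication; the sketch below uses only the illustrative numbers from the text above, not any particular product:

```python
def transistor_budget(area_mm2: float, density_per_mm2: float) -> int:
    """Upper-bound transistor count for a chip of the given area."""
    return int(area_mm2 * density_per_mm2)

# A 250 mm^2 chip at 1 million transistors per mm^2:
print(transistor_budget(250, 1_000_000))  # 250,000,000 transistors
```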
Among the most advanced integrated circuits are microprocessors, which control everything from computers to cellular phones to digital microwave ovens. Digital memory chips are another family of integrated circuits that is crucially important to the modern information society. While the cost of designing and developing a complex integrated circuit is quite high, when spread across typically millions of production units the individual IC cost is minimized. The performance of ICs is high because the small size allows short traces, which in turn allows low-power logic (such as CMOS) to be used at fast switching speeds.
ICs have consistently migrated to smaller feature sizes over the years, allowing more circuitry to be packed on each chip. This increased capacity per unit area can be used to decrease cost and/or increase functionality. Moore's law, in its modern interpretation, states that the number of transistors in an integrated circuit doubles every two years. In general, as the feature size shrinks, almost everything improves—the cost-per-unit and the switching power consumption go down, and the speed goes up. However, ICs with nanometer-scale devices are not without their problems, principal among which is leakage current, although these problems are not insurmountable and will likely be improved by the introduction of high-k dielectrics. Since these speed and power consumption gains are apparent to the end user, there is fierce competition among manufacturers to use finer geometries. This process, and the expected progress over the next few years, is well described by the International Technology Roadmap for Semiconductors (ITRS).
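Moore's law as stated above is easy to turn into a back-of-the-envelope projection. The sketch below assumes an idealized doubling every two years and uses made-up starting figures, not data for any real product line:

```python
def projected_transistors(initial: int, start_year: int, year: int) -> int:
    """Project a transistor count under Moore's law in its modern form:
    one doubling every two years (an idealized model)."""
    doublings = (year - start_year) / 2
    return int(initial * 2 ** doublings)

# A hypothetical chip with 1 million transistors in 1990, projected to 2000
# (five doublings):
print(projected_transistors(1_000_000, 1990, 2000))  # 32,000,000
```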
Integrated circuits can be classified into analog, digital and mixed signal (both analog and digital on the same chip).
Digital integrated circuits can contain anything from one to millions of logic gates, flip-flops, multiplexers, and other circuits in a few square millimeters. The small size of these circuits allows high speed, low power dissipation, and reduced manufacturing cost compared with board-level integration. These digital ICs, typically microprocessors, digital signal processors (DSPs), and microcontrollers, work using binary mathematics to process "one" and "zero" signals.
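The "binary mathematics" mentioned above can be sketched in software using the same Boolean expressions that gates and multiplexers realize in silicon. The gate-level constructions are standard; the code itself is only an illustration:

```python
# Gate-level Boolean building blocks, modeled as functions on 0/1 values.
def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

# Any logic function can be built from NAND alone; for example NOT and AND:
def not_(a: int) -> int:
    return nand(a, a)

def and_(a: int, b: int) -> int:
    return not_(nand(a, b))

# A 2-to-1 multiplexer: routes input a when sel == 0, input b when sel == 1.
def mux2(a: int, b: int, sel: int) -> int:
    return (a and not_(sel)) or (b and sel)

print(mux2(1, 0, 0), mux2(1, 0, 1))  # 1 0
```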
Analog ICs, such as sensors, power-management circuits, and operational amplifiers, work by processing continuous signals. They perform functions such as amplification, active filtering, demodulation, and mixing. Analog ICs ease the burden on circuit designers by making expertly designed analog circuits available off the shelf, instead of requiring every difficult analog circuit to be designed from scratch.
ICs can also combine analog and digital circuits on a single chip to create functions such as analog-to-digital converters and digital-to-analog converters. Such circuits offer smaller size and lower cost, but must carefully account for signal interference.
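Analog-to-digital conversion, mentioned above, amounts to quantizing a continuous voltage onto a fixed number of binary codes, and digital-to-analog conversion maps the codes back. A sketch of an idealized converter pair follows; the reference voltage and bit width are arbitrary example values:

```python
def adc(voltage: float, vref: float = 3.3, bits: int = 8) -> int:
    """Idealized ADC: quantize 0..vref onto integer codes 0..2**bits - 1."""
    levels = 2 ** bits - 1
    code = round(voltage / vref * levels)
    return max(0, min(levels, code))  # clamp out-of-range inputs

def dac(code: int, vref: float = 3.3, bits: int = 8) -> float:
    """Idealized DAC: map a digital code back onto 0..vref volts."""
    return code / (2 ** bits - 1) * vref

# Round-tripping loses at most one quantization step, vref / (2**bits - 1).
print(adc(0.0), adc(3.3))  # 0 255
```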
Semiconductors from the periodic table of the chemical elements were identified, by researchers such as William Shockley at Bell Laboratories starting in the 1930s, as the most likely materials for a solid-state replacement for the vacuum tube. Starting with copper oxide, proceeding to germanium, and then silicon, the materials were systematically studied in the 1940s and 1950s. Today, silicon monocrystals are the main substrate used for integrated circuits (ICs), although some III-V compounds of the periodic table, such as gallium arsenide, are used for specialized applications like LEDs, lasers, and the highest-speed integrated circuits. It took decades to perfect methods of creating crystals without defects in the crystalline structure of the semiconducting material.
Semiconductor ICs are fabricated in a layered process whose key steps are imaging, deposition, and etching, supplemented by doping, cleaning, and planarization steps.
Mono-crystal silicon wafers (or for special applications, silicon on sapphire or gallium arsenide wafers) are used as the substrate. Photolithography is used to mark different areas of the substrate to be doped or to have polysilicon, insulators or metal (typically aluminum) tracks deposited on them.
Because a CMOS device draws significant current only during the transition between logic states, CMOS devices consume much less current than bipolar devices.
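A standard first-order model for the switching behavior just described (widely used, though not stated in this article) is that the dynamic power of a CMOS node is P = α·C·V²·f, where α is the fraction of clock cycles on which the node actually toggles. A sketch with illustrative values:

```python
def cmos_dynamic_power(c_load: float, v_dd: float, freq: float,
                       activity: float = 1.0) -> float:
    """First-order CMOS switching power in watts:
    P = activity * C_load * Vdd^2 * f."""
    return activity * c_load * v_dd ** 2 * freq

# Illustrative node: 10 fF load, 1.2 V supply, 1 GHz clock, toggling every cycle.
p = cmos_dynamic_power(10e-15, 1.2, 1e9)
print(f"{p * 1e6:.2f} microwatts")  # 14.40 microwatts
```

Halving the activity factor, or idling the clock entirely, reduces this power proportionally, which is why the current draw of a CMOS chip tracks how often its nodes switch.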
Random-access memory (RAM) is the most regular type of integrated circuit; the highest-density devices are thus memories, but even a microprocessor will have memory on the chip. (See the regular array structure at the bottom of the first image.) Although the structures are intricate—with widths which have been shrinking for decades—the layers remain much thinner than the device widths. The layers of material are fabricated much like a photographic process, although light waves in the visible spectrum cannot be used to "expose" a layer of material, as their wavelength would be too long for the features. Thus photons of higher frequencies (typically ultraviolet) are used to create the patterns for each layer. Because each feature is so small, electron microscopes are essential tools for a process engineer debugging a fabrication process.
Each device is tested before packaging using very expensive automated test equipment (ATE), a process known as wafer testing, or wafer probing. The wafer is then cut into small rectangles called dice. Each good die (N.B. die is the singular form of dice, although dies is also used as the plural) is then connected into a package using aluminum (or gold) wires which are welded to pads, usually found around the edge of the die. After packaging, the devices go through final test on the same or similar ATE used during wafer probing. Test cost can account for over 25 percent of the cost of fabrication on lower cost products, but can be negligible on low yielding, larger, and/or higher cost devices.
As of 2005, a fabrication facility (commonly known as a semiconductor fab) costs over a billion US dollars to construct, because much of the operation is automated. The most advanced processes employ the following specifications:
The earliest integrated circuits were packaged in ceramic flat packs, which continued to be used by the military for their reliability and small size for many years. Commercial circuit packaging quickly moved to the dual in-line package (DIP), first in ceramic and later in plastic. In the 1980s, pin counts of VLSI circuits exceeded the practical limit for DIP packaging, leading to pin grid array (PGA) and leadless chip carrier (LCC) packages. Surface mount packaging appeared in the early 1980s and became popular in the late 1980s, using finer lead pitch with leads formed as either gull-wing or J-lead, as exemplified by the Small-Outline Integrated Circuit (SOIC), a carrier that occupies about 30 to 50 percent less area than an equivalent DIP and is typically 70 percent thinner. This package has "gull wing" leads protruding from its two long sides, with a lead spacing of 0.050 inches.
SOIC packages were followed by PLCC packages. In the late 1990s, PQFP and TSOP packages became the most common for high-pin-count devices, though PGA packages are still often used for high-end microprocessors. Intel and AMD are currently transitioning from PGA packages to land grid array (LGA) packages on their high-end microprocessors.
Ball grid array (BGA) packages have existed since the 1970s.
Traces out of the die, through the package, and into the printed circuit board have very different electrical properties, compared to on-chip signals. They require special design techniques and need much more electric power than signals confined to the chip itself.
When multiple dice are put in one package, it is called a SiP, for System In Package. When multiple dice are combined on a small substrate, often ceramic, it is called an MCM, or Multi-Chip Module. The boundary between a big MCM and a small printed circuit board is sometimes fuzzy.
The integrated circuit was first conceived by a radar scientist, Geoffrey W.A. Dummer (born 1909), working for the Royal Radar Establishment of the British Ministry of Defence, who published the idea in Washington, D.C., on May 7, 1952. Dummer unsuccessfully attempted to build such a circuit in 1956.
The first integrated circuits were manufactured independently by two scientists: Jack Kilby of Texas Instruments and Robert Noyce of Fairchild Semiconductor. Kilby filed a patent application for a "Solid Circuit" made of germanium on February 6, 1959. Kilby received several patents: U.S. Patent 3138743, U.S. Patent 3138747, U.S. Patent 3261081, and U.S. Patent 3434015. (See Chip that Jack built.) Noyce was awarded a patent for a more complex "unitary circuit" made of silicon on April 25, 1961. He credited Kurt Lehovec of Sprague Electric for a key concept behind the IC: the principle of p-n junction isolation by the action of a biased p-n junction (the diode).
===SSI, MSI, LSI===
The first integrated circuits contained only a few transistors. Called "Small-Scale Integration" (SSI), they used circuits containing transistors numbering in the tens.
SSI circuits were crucial to early aerospace projects, and vice versa. Both the Minuteman missile and the Apollo program needed lightweight digital computers for their inertial guidance systems; the Apollo guidance computer led and motivated the integrated-circuit technology, while the Minuteman missile forced it into mass production.
These programs purchased almost all of the available integrated circuits from 1960 through 1963, and almost alone provided the demand that funded the production improvements to get the production costs from $1,000/circuit (in 1960 dollars) to merely $25/circuit (in 1963 dollars). They began to appear in consumer products at the turn of the decade, a typical application being FM inter-carrier sound processing in television receivers.
The next step in the development of integrated circuits, taken in the late 1960s, introduced devices which contained hundreds of transistors on each chip, called "Medium-Scale Integration" (MSI).
They were attractive economically because while they cost little more to produce than SSI devices, they allowed more complex systems to be produced using smaller circuit boards, less assembly work (because of fewer separate components), and a number of other advantages.
Further development, driven by the same economic factors, led to "Large-Scale Integration" (LSI) in the mid-1970s, with tens of thousands of transistors per chip.
LSI circuits began to be produced in large quantities around 1970, for computer main memories and pocket calculators.
The final step in the development process, starting in the 1980s and continuing on, was "Very Large-Scale Integration" (VLSI), with hundreds of thousands of transistors, and beyond (well past several million in the latest stages).
For the first time it became possible to fabricate a CPU on a single integrated circuit, to create a microprocessor. In 1986, the first one megabit Random Access Memory (RAM) chips were introduced, which contained more than one million transistors. Microprocessor chips produced in 1994 contained more than three million transistors.
This step was largely made possible by the codification of "design rules" for the CMOS technology used in VLSI chips, which made production of working devices much more of a systematic endeavor. (See the landmark 1980 text by Carver Mead and Lynn Conway.)
To reflect further growth in complexity, the term ULSI, standing for "Ultra-Large-Scale Integration," was proposed for chips with more than 1 million transistors. However, there is no qualitative leap between VLSI and ULSI, so in technical texts "VLSI" normally covers ULSI as well, and "ULSI" is reserved for cases where it is necessary to emphasize the chip's complexity, for example in marketing.
The most extreme integration technique is wafer-scale integration (WSI), which uses whole uncut wafers containing entire computers (processors as well as memory). Attempts to take this step commercially in the 1980s (for instance, by Gene Amdahl) failed, mostly because of defect-free manufacturability problems, and it does not now seem to be a high priority for industry.
The WSI technique failed commercially, but advances in semiconductor manufacturing allowed for another attack on IC complexity, known as System-on-Chip (SOC) design. In this approach, components traditionally manufactured as separate chips to be wired together on a printed circuit board are designed to occupy a single chip that contains memory, microprocessor(s), peripheral interfaces, Input/Output logic control, data converters, and other components, together composing the whole electronic system.
In the 1980s, programmable integrated circuits were developed. These devices contain circuits whose logical function and connectivity can be programmed by the user, rather than being fixed by the integrated circuit manufacturer. This allows a single chip to be programmed to implement different LSI-type functions such as logic gates, adders, and registers. Current devices, called FPGAs (Field Programmable Gate Arrays), can implement the equivalent of tens of thousands of LSI circuits in parallel and operate at up to 400 MHz.
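The user-programmable logic described above is commonly built from small lookup tables (LUTs): the "program" is simply the truth table loaded into each cell. A toy model follows; the class and names are illustrative, not any vendor's API:

```python
class LUT2:
    """Toy model of a 2-input FPGA lookup table: the configuration is
    just a truth table, one output bit per input combination."""
    def __init__(self, truth_table):
        assert len(truth_table) == 4  # 2**2 input combinations
        self.table = truth_table

    def __call__(self, a: int, b: int) -> int:
        # The inputs select one entry of the stored truth table.
        return self.table[(a << 1) | b]

# "Reprogramming" the same cell as XOR, then as AND, without new silicon:
xor_cell = LUT2([0, 1, 1, 0])
and_cell = LUT2([0, 0, 0, 1])
print(xor_cell(1, 0), and_cell(1, 1))  # 1 1
```

Real FPGA cells add registers and routing between thousands of such tables, but the principle is the same: changing the stored bits changes the logic function.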
The techniques perfected by the integrated circuits industry over the last three decades have been used to create microscopic machines, known as micro-electromechanical systems (MEMS). These devices are used in a variety of commercial and military applications. Examples of commercial applications include DLP projectors, inkjet printers, and accelerometers used to deploy automobile airbags.
In the past, radios could not be fabricated in the same low-cost processes as microprocessors. But since 1998, a large number of radio chips have been developed using CMOS processes. Examples include Intel's DECT cordless phone, or Atheros's 802.11 card.
Ever since ICs were created, some chip designers have used the silicon surface area for surreptitious, non-functional images or words. These are sometimes referred to as Chip Art, Silicon Art, Silicon Graffiti, or Silicon Doodling.
A list of notable manufacturers; some operating, some defunct:
New World Encyclopedia writers and editors rewrote and completed the Wikipedia article in accordance with New World Encyclopedia standards. This article abides by the terms of the Creative Commons CC-by-sa 3.0 License (CC-by-sa), which may be used and disseminated with proper attribution. Credit is due under the terms of this license, which can reference both the New World Encyclopedia contributors and the selfless volunteer contributors of the Wikimedia Foundation.