"The number of transistors that can be placed inexpensively on an integrated circuit doubles approximately every two years." (Moore's law)

This empirical law is best known for the steady increase in processor speed, which is largely driven by our ability to miniaturize transistors in silicon-based chips. In its general formulation, however, the law applies to all semiconductor devices, such as CPUs and electronic memories.
We may also note that similar laws exist for magnetic disks, whose storage density doubles roughly annually, and for network capacity (optical fiber bandwidth doubles about every nine months). This set of empirical laws describes the exponential growth we have been experiencing for many years.
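To get a feel for how different these doubling periods really are, here is a quick sketch that compounds each of the three laws above over a decade. The doubling periods are the ones quoted in the text; the rest is just arithmetic.

```python
# Growth factor after 10 years for each empirical doubling law
# mentioned above (doubling periods in months, as quoted in the text).
doubling_months = {
    "transistors (Moore's law)": 24,
    "disk storage density": 12,
    "optical fiber bandwidth": 9,
}

years = 10
for name, months in doubling_months.items():
    factor = 2 ** (years * 12 / months)
    print(f"{name}: x{factor:,.0f} in {years} years")
```

A factor of 32 for transistors versus more than 10,000 for fiber bandwidth: the shorter the doubling period, the more dramatic the compounding.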
But do limits to these exponential laws exist?
When miniaturization is involved, atomic limits are usually the ultimate frontier. The most advanced production process for semiconductor devices currently in use is the 45-nanometer one; the figure refers to half the average distance between memory cells. Each cell consists of two or four transistors, whose size is in the range of 100-200 nm.
For comparison, a hydrogen atom, the smallest atom there is, measures roughly 0.1 nm. The average distance between silicon atoms in the crystal lattice used in chips is about 0.5 nm.
The technology predicted for 2015 by the International Technology Roadmap for Semiconductors is the 11-nanometer one. This means miniaturized components will be electrically insulated by a layer of... about twenty atoms.
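That "twenty atoms" figure follows directly from the numbers above; a minimal sketch of the back-of-envelope calculation, using the ~0.5 nm silicon interatomic distance from the text:

```python
# Back-of-envelope: how many silicon atoms span a given feature size,
# using the ~0.5 nm average interatomic distance quoted above.
SI_SPACING_NM = 0.5

def atoms_across(feature_nm):
    """Approximate number of silicon atoms across a feature."""
    return round(feature_nm / SI_SPACING_NM)

for feature in (45, 11):
    print(f"{feature} nm -> ~{atoms_across(feature)} atoms")
```

At 11 nm the answer is about 22 atoms, which is where the "layer of twenty atoms" comes from.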
The consequences for the inner workings of these devices are dramatic. As you may know, the physical models that govern modern integrated circuits are not classical: their design follows quantum mechanics. At such a small scale, quantum tunneling (the same process used to charge flash memory cells) can make supposedly insulated cells interact. Silicon (and its doped derivatives) simply cannot scale down to atomic sizes.
But limits are made to be broken. Integrated circuits, for example, are manufactured by photolithography. Without descending into the details: a photoresistive material is "painted" over an area of the chip, which is then exposed to a flash of light so that the excess material is removed, as if it were a piece of photographic film. What remains under the shielding layer becomes the tracks and soon-to-be components of the integrated circuit.
Cool. What's the problem? The light used for this process has a wavelength of about 200 nm (deep ultraviolet). Diffraction suggests that transferring patterns of elements smaller than this size should not be feasible: it would be like detecting a 1-inch hole in a wall by throwing basketballs at it, in the dark. Yet when the limit was reached, new techniques and tricks made it possible anyway.
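One way to see how those tricks pay off is the standard resolution formula used in optical lithography (the Rayleigh criterion): feature size ≈ k1 · λ / NA, where λ is the wavelength, NA the numerical aperture of the optics, and k1 a process-dependent factor. The formula is textbook material; the specific parameter values below are typical published figures, not from this article, so treat them as illustrative assumptions.

```python
# Rayleigh criterion for optical lithography:
#   minimum feature size ~ k1 * wavelength / NA
# Parameter values below are typical illustrative figures.
def min_feature_nm(wavelength_nm, numerical_aperture, k1):
    return k1 * wavelength_nm / numerical_aperture

# Conventional ("dry") 193 nm lithography:
print(min_feature_nm(193, 0.9, 0.4))
# Immersion lithography images through water, pushing NA above 1,
# while process tricks (phase-shift masks, etc.) shrink k1:
print(min_feature_nm(193, 1.35, 0.25))
```

The second figure lands around 36 nm: features several times smaller than the wavelength of the light that printed them, which is exactly the "impossible" result the basketball analogy describes.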
And the same has been done with multi-core architectures: the power consumption of a CMOS CPU is proportional to its clock frequency, and external components do not gain speed as fast as the CPU does. So when an effective frequency barrier was reached, electronics engineers started increasing the number of processors in a single package instead of making them run faster. The result is the current dual- and quad-core architectures (and the new problem: how do we parallelize tasks efficiently enough to keep all cores busy?)
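The trade-off becomes clear with the usual dynamic-power formula for CMOS circuits, P ≈ C · V² · f (switched capacitance, supply voltage, clock frequency). The numbers in this sketch are illustrative assumptions, not real chip figures, but they show why two slower cores can beat one fast core on power:

```python
# CMOS dynamic power: P ~ C * V^2 * f.
# All values below are illustrative, not taken from a real CPU.
def dynamic_power(c, v, f):
    return c * v ** 2 * f

# One core at 4 GHz vs. two cores at 2 GHz each. In the ideal,
# perfectly parallel case the total throughput is the same, but
# running slower also permits a lower supply voltage, and power
# falls with the *square* of that voltage.
single = dynamic_power(c=1.0, v=1.2, f=4.0e9)
dual = 2 * dynamic_power(c=1.0, v=1.0, f=2.0e9)
print(single, dual, dual / single)
```

In this toy example the dual-core configuration draws about 30% less power for the same nominal throughput, assuming the workload parallelizes perfectly, which is precisely the catch mentioned above.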
Maybe Moore's law in its current form will stop holding in the future, but the technology level and the computing power it brings along are still likely to increase for a long time. There were, and still are, strong business incentives to keep enhancing computing devices, and clever people always find another way to make money.