What is the difference between a CCD sensor and a CMOS sensor?
Charge-Coupled Devices (CCDs) were originally the right answer. The CCD was the first high quality sensor, and they're still found in a few cameras.
CMOS sensors, however, are the better choice in most cases. Much of this has been about chip process. CCDs are largely analog devices, and need special production processes that aren't really used by other devices. This has slowed the evolution of CCDs and kept prices high -- CCDs today are considerably more expensive than CMOS imagers of the same resolution. CMOS chips leverage the vast production capabilities of the digital semiconductor industry, being made in a process similar to that used for DRAM and CPUs.
Early CMOS chips were pretty evil... low cost and low quality. This started to change in the early DSLR era, when in 2000, Canon produced the first CMOS-powered DSLR, the EOS D30, which produced photos with measurably less noise than the CCD-based cameras of the era. Before long, higher end CMOS had become common for use in DSLRs and professional camcorders (around the time of the move to HD video). Meanwhile, the lower-end CMOS sensors improved in quality, to the point where they basically took over all low-end photography and video (P&S cameras, consumer camcorders, smartphones, etc).
CMOS sensors have been riding the advances of digital chip technology to improve image quality. The analog signals don't have to travel as far on a CMOS chip, and as the geometries shrink, noise has been dramatically reduced, to the point we're at today, where CMOS-based still cameras offer ISOs over 100,000, besting any film ever invented.
CCD still has one advantage, though that's more a matter of current practice than any hard limitation. A CCD is basically a huge analog shift register. Each pixel accumulates a charge during exposure -- light strikes a photodiode, and each photon releases an electron. Such a sensor is basically always active, so it needs a mechanical shutter as well, to stop the accumulation of charge and allow the "coupling" of charge from cell to cell to sense amplifier and analog-to-digital converter... kind of a bucket brigade of charge.
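If the bucket brigade sounds abstract, here's a toy Python sketch of reading out one line of a CCD (the charge numbers are invented, and of course a real CCD moves analog charge packets, not software values):

    def ccd_readout(wells):
        """Serially shift charge packets out of a CCD line, one cell per clock."""
        wells = list(wells)
        samples = []
        for _ in range(len(wells)):
            samples.append(wells[0])   # end packet is coupled into the sense amp / ADC
            wells = wells[1:] + [0]    # every other packet shifts one cell toward the output
        return samples

    line = [120, 95, 310, 47]          # electrons accumulated during one exposure (made up)
    print(ccd_readout(line))           # -> [120, 95, 310, 47], clocked out serially

The point is that charge only leaves through one end -- nothing is individually addressable -- which is exactly why the whole array has to stop accumulating while it's read out.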
Most current CCDs use a technique called interline transfer. Basically, the CCD has twice as many pixels as the rating... every other pixel is masked off from light. So the sensor exposes, then shifts every pixel over by one, from the light-gathering pixels to the dark pixels, where the charge will no longer change. The dark pixels are then shifted out while the next set of light pixels is exposed. This is what enabled modern camcorders: the effect of a global electronic shutter and fast operation. On the down side, it also limited the light-gathering area of the CCD to less than 50% of the total area, though that's been somewhat mitigated by microlenses and other cool tricks.
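A rough sketch of the interline idea, again in Python with made-up numbers (expose and interline_transfer are hypothetical names, not any real API). Ending the exposure is just a sideways shift into the masked cells, which is why it acts like a global electronic shutter:

    def expose(photosites, light, dt):
        for i, lux in enumerate(light):
            photosites[i] += lux * dt      # photons -> electrons while the frame exposes

    def interline_transfer(photosites, storage):
        for i in range(len(photosites)):
            storage[i] = photosites[i]     # shift charge into the masked neighbor...
            photosites[i] = 0              # ...freeing the photosite for the next frame

    photosites = [0, 0, 0, 0]
    storage    = [0, 0, 0, 0]
    expose(photosites, light=[5, 9, 2, 7], dt=10)
    interline_transfer(photosites, storage)
    print(storage)                         # [50, 90, 20, 70], read out while frame 2 exposes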
CMOS sensors are fast enough to shift out pixel data nearly in real time; while they can be used with a mechanical shutter (as on any DSLR or mirrorless ILC), they don't have to be. For video and cheaper consumer models, the electronic shutter is all you have. With no need for a backing store of charge, CMOS became much cheaper (and as before, it's already in a cheaper IC process). The down side is what's been termed "the jello effect". The CMOS sensor will have groups of pixels enabled in turn for charge collection, a few lines at a time or so depending on the sensor architecture. These expose and then shift the result very quickly out of the chip and into digital. Then the next set gets exposed, and so forth. The end result is that the exposure varies in time a bit over the sensor... and when you're shooting fast-moving things, you get this "jello" effect. Not an issue with digital still photography on a good camera, but definitely a video concern. Again, it's possible to build a CMOS sensor with a global electronic shutter (using a backing store of charge, much as a CCD does), but the current emphasis has been on lowering costs.
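Here's a little Python toy showing where the jello comes from (the sensor size and speeds are invented): a vertical edge moving sideways gets sampled a bit later on each successive row, so a straight edge renders as a diagonal:

    ROWS, COLS = 8, 16
    ROW_READOUT_TIME = 1.0                 # time between successive row samples
    EDGE_SPEED = 0.7                       # columns the edge moves per time unit

    frame = []
    for row in range(ROWS):
        t = row * ROW_READOUT_TIME         # this row is exposed/read at time t
        edge_x = int(2 + EDGE_SPEED * t)   # where the moving edge is at that moment
        frame.append("".join("#" if col < edge_x else "." for col in range(COLS)))

    print("\n".join(frame))                # the edge comes out slanted -- "jello"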
What Anthony is alluding to has nothing to do with the sensor technology. In the classic small-sensor camcorder, there are three sensors: one for red, one for green, one for blue. They're all the same, and can be CMOS or CCD. The trick is that they're very small, and right behind the lens there's a dichroic prism, which splits light by color to feed each sensor separately. This is not done in large-sensor cameras (still or video). Rather, using a much larger sensor and 4x as many pixels as you need for your video size, there's "pixel bucketing" where each video pixel is actually made of four physical pixels (often a straight Bayer pattern RGBG, sometimes RWGB, where "W" is an unfiltered pixel... "white"... no color information but about 3x more sensitive).
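For the curious, a minimal Python sketch of that bucketing for one 2x2 Bayer quad (R G / G B), with fabricated raw values; real cameras also demosaic, scale, and white-balance, all of which this ignores:

    def bucket(quad):
        """One video pixel from a 2x2 Bayer quad ((R, G1), (G2, B))."""
        (r, g1), (g2, b) = quad
        return (r, (g1 + g2) / 2, b)         # the two green samples are averaged

    print(bucket(((200, 180), (170, 60))))   # -> (200, 175.0, 60)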