At its core, a digital camera performs a deceptively simple task: it converts light into numbers. Everything we associate with “image quality” (color, contrast, sharpness, dynamic range, and even style) emerges only later, once those numbers are interpreted. To understand digital cameras properly, it’s important to mentally separate light capture from image rendering.
The Sensor Measures Light, Not Color
A camera sensor is not a color-sensitive device in the way humans are. Each pixel on the sensor is essentially a tiny photon counter, sort of a light bucket. When photons strike the sensor’s silicon surface, they release electrons via the photoelectric effect. These electrons accumulate in proportion to the amount of light received. More photons mean more electrons, and therefore a stronger signal. Importantly, this signal represents only brightness, not color.
To capture color, manufacturers place a Color Filter Array (CFA) over the sensor. Most cameras use a Bayer pattern, though alternatives exist, such as Fujifilm's X-Trans array (Sigma's Foveon design avoids a CFA altogether by stacking photodiodes in layers). In the Bayer pattern, 50% of the pixels are filtered green, 25% red, and 25% blue. Each pixel therefore measures only one color component of the scene. The green bias mirrors human vision, which derives most of its luminance detail from the green region of the spectrum.
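The 2x2 tiling described above can be sketched in a few lines. This builds boolean masks for one common Bayer arrangement (RGGB; other tile orders exist) and confirms the 50/25/25 split:

```python
import numpy as np

def bayer_mask(height, width):
    """Boolean mask per channel for an RGGB Bayer pattern.

    Each 2x2 tile is laid out as:
        R G
        G B
    (one common arrangement; some sensors use a different tile order).
    """
    r = np.zeros((height, width), dtype=bool)
    g = np.zeros((height, width), dtype=bool)
    b = np.zeros((height, width), dtype=bool)
    r[0::2, 0::2] = True   # red on even rows, even columns
    g[0::2, 1::2] = True   # green fills the remaining site on every row
    g[1::2, 0::2] = True
    b[1::2, 1::2] = True   # blue on odd rows, odd columns
    return r, g, b

r, g, b = bayer_mask(4, 4)
# Green covers half the pixels; red and blue a quarter each.
print(r.sum(), g.sum(), b.sum())   # 4 8 4
```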
At this stage, the sensor output is not an image at all. It's a mosaic of brightness measurements, each taken through a red, green, or blue filter. Color doesn't truly exist yet; it must be reconstructed later.
Gain, ISO, and Digitization Turn Electrons into Numbers
Once electrons have accumulated in each pixel, the camera converts that charge into a voltage. This voltage is extremely small and must be amplified before it can be digitized. This amplification stage is where ISO comes into play. Contrary to popular belief, ISO doesn’t make the sensor more sensitive to light. Sensitivity is fixed by physics. ISO merely determines how much the signal is amplified.
In practice this amplification is often done in multiple stages, sometimes using dual-gain architectures. At certain ISO thresholds, the camera switches to a higher-gain readout path that improves signal-to-noise ratio, particularly in the shadows. If you have heard the term "dual base ISO", this is what it refers to.
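A toy model makes the point concrete. This sketch treats ISO purely as amplification of a fixed captured charge, with a dual-gain switch that swaps in a lower-noise readout path above a second base ISO. All the numbers (base ISOs, read-noise values) are illustrative, not taken from any real sensor, and photon shot noise is deliberately ignored:

```python
def readout(electrons, iso, base_iso=100, second_base=800,
            low_gain_noise_e=6.0, high_gain_noise_e=2.0):
    """Model ISO as post-capture amplification with a dual-gain switch.

    The captured charge (`electrons`) is fixed by exposure; ISO only
    scales it. At or above `second_base`, a higher-gain readout path
    with lower input-referred read noise takes over.
    """
    gain = iso / base_iso
    read_noise = high_gain_noise_e if iso >= second_base else low_gain_noise_e
    signal = gain * electrons
    snr = electrons / read_noise   # read-noise-limited SNR; shot noise ignored
    return signal, snr

# Same light hitting the sensor; only amplification and readout change.
print(readout(1000, 400))   # (4000.0, ~167)
print(readout(1000, 800))   # (8000.0, 500.0) — the cleaner high-gain path
```

Note how the second call returns a better SNR despite the stronger amplification: the improvement comes from the quieter readout path, not from any change in captured light.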
One of the differences between stills and cinema cameras is how these analog gain stages are tuned, in other words, how the amplification is implemented: cinema cameras optimize their gain stages for cinema-style exposure practices.
After amplification, the signal is passed to an Analog-to-Digital Converter (ADC), which turns the voltage into a digital number. This is where bit depth matters. RAW files typically store data at 12 or 14 bits per pixel, but some cameras use higher precision; RED's R3D RAW format, for example, uses 16 bits. Higher bit depth doesn't increase dynamic range by itself, but it allows the camera to describe tonal differences, especially in the shadows, with greater precision. All else being equal, that finer tonal description generally translates into better image quality, so when choosing a camera it's often more useful to weigh bit depth than to chase megapixels.
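An idealized ADC shows what the extra bits buy. The same normalized analog level lands on a progressively finer grid of integer codes as bit depth grows (real converters add their own noise, which this sketch ignores):

```python
def adc(voltage_fraction, bits):
    """Quantize a normalized analog level (0.0–1.0) into an integer code,
    as an ideal ADC would."""
    levels = 2 ** bits
    return min(int(voltage_fraction * levels), levels - 1)

# The same analog level, quantized at increasing precision:
print(adc(0.5, 12))   # 2048 of 4096 codes
print(adc(0.5, 14))   # 8192 of 16384 codes
print(adc(0.5, 16))   # 32768 of 65536 codes
```

Because the data is linear, each additional bit quadruples nothing and doubles everything: twice as many codes overall, and twice as many codes available to describe the deepest shadows, which is exactly where tonal steps are scarcest.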
What RAW Really Is and Why It Matters
RAW is often misunderstood as a high-quality image format. In reality, RAW is not an image at all. It's a structured record of sensor measurements. The data is linear, meaning that if one pixel received twice as much light as another, its numerical value is exactly twice as large. This linearity matters because the physical world behaves linearly, even though human perception doesn't; we'll see shortly why it is so important for understanding digital imaging.
RAW data is also undemosaiced. Each pixel still contains only one color channel. White balance is not applied, color is not finalized, and contrast is not shaped. Even sharpening and noise reduction are absent. What you see on the camera screen is merely a preview—a rendered interpretation layered on top of the RAW data. This is why RAW files are so flexible. Adjusting white balance in post-production doesn’t “fix” color; it simply changes multipliers applied to linear data. No information is lost because the underlying measurements remain intact.
Reconstructing Color: Demosaicing and White Balance
Because each pixel sees only one color, the camera or post-processing software must reconstruct the missing color channels for every pixel. This process is known as demosaicing. Using surrounding pixel values, the software estimates what the red, green, and blue components should be at each location. The quality of this step depends heavily on the algorithm used, the noise level of the sensor, and the spatial frequency of the scene.
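The simplest demosaicing step is bilinear interpolation. As a minimal sketch (real algorithms are edge-aware and far more sophisticated), this estimates the missing green value at a red CFA site by averaging its four green neighbors:

```python
import numpy as np

def green_at_red(mosaic, y, x):
    """Bilinear estimate of green at a red CFA site: the average of the
    four green 4-neighbors (up, down, left, right)."""
    return (mosaic[y - 1, x] + mosaic[y + 1, x] +
            mosaic[y, x - 1] + mosaic[y, x + 1]) / 4.0

# A 3x3 patch of an RGGB mosaic (illustrative values). The center pixel
# is a red site; its 4-neighbors are green sites; the corners are blue
# sites and play no part in this estimate.
patch = np.array([
    [0.90, 0.40, 0.90],
    [0.44, 0.25, 0.36],
    [0.90, 0.60, 0.90],
])
print(green_at_red(patch, 1, 1))   # 0.45
```

Averaging works well on smooth gradients but smears fine detail and edges, which is why the quality of this step depends so heavily on the algorithm and the scene, as noted above.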
White balance comes next, though conceptually it’s very simple. White balance is nothing more than a set of gain multipliers applied to the red, green, and blue channels. In RAW workflows, these multipliers are stored as metadata, not baked into the data itself. This is why changing white balance in a RAW file is effectively lossless.
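Because the gains are plain multipliers on linear data, applying a different white balance later is exactly invertible. A minimal sketch with hypothetical values:

```python
import numpy as np

# Hypothetical linear readings from the four CFA sites of one 2x2 tile
# (R, G, G, B), plus white-balance gains stored as metadata.
raw = np.array([0.20, 0.41, 0.39, 0.12])
gains = np.array([2.1, 1.0, 1.0, 1.6])   # per-site R/G/G/B multipliers

rendered = raw * gains          # rendering applies the gains...
recovered = rendered / gains    # ...but the math is exactly invertible
print(np.allclose(recovered, raw))   # True
```

Swapping in a different set of gains simply re-renders from the same untouched measurements, which is why white balance changes on RAW are effectively lossless (clipping aside).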
Linear Data and the Need for Log
Although linear data accurately represents physical light, it doesn’t resemble how humans perceive brightness. Human vision is logarithmic: we are far more sensitive to relative differences in dark tones than in bright ones. Linear images therefore look dark, flat, and low-contrast when viewed directly.
To solve this mismatch, cameras and post-production pipelines apply logarithmic transfer functions. Log curves compress highlights while preserving shadow detail, allowing wide dynamic range scenes to fit into limited bit depths.
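The defining property of a log curve is that equal ratios of scene light map to roughly equal encoded steps. This toy encoding (not any vendor's actual curve; the small black offset is an assumption that keeps the function finite near zero, as real curves do) illustrates it:

```python
import math

def lin2log(x, black=0.001):
    """Toy log encoding: equal scene ratios -> roughly equal steps."""
    return math.log2(x + black)

mid = 0.18   # linear middle gray
one_stop_up = lin2log(2 * mid) - lin2log(mid)
one_stop_down = lin2log(mid) - lin2log(mid / 2)
print(round(one_stop_up, 3), round(one_stop_down, 3))   # both ≈ 1.0
```

One stop above middle gray and one stop below it occupy nearly identical slices of the encoded range, so shadow detail is no longer starved of codes the way it is in a linear encoding.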
Color Science for the Look
Sensors do not naturally see colors the way humans do. Their spectral sensitivities differ significantly from human cone responses. To compensate, manufacturers design color matrices and transforms that map sensor data into perceptual color spaces.
In high-end cameras, this mapping is optimized for pleasing photographic color, particularly skin tones and natural saturation.
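The core mechanism is a 3x3 matrix multiply per pixel. The matrix below is hypothetical (real matrices are derived from spectral measurements of each sensor), but it shows one real design constraint: each row sums to 1 so that neutral gray in camera space stays neutral in the output space:

```python
import numpy as np

# Hypothetical camera-RGB -> working-space matrix; rows sum to 1.0
# so neutrals are preserved. Real matrices are measured per sensor.
M = np.array([
    [ 1.60, -0.45, -0.15],
    [-0.20,  1.45, -0.25],
    [ 0.05, -0.50,  1.45],
])

camera_rgb = np.array([0.5, 0.5, 0.5])   # a neutral pixel after white balance
print(M @ camera_rgb)                     # stays neutral: [0.5 0.5 0.5]
```

The off-diagonal terms are where "color science" lives: they trade saturation, hue accuracy, and noise against one another, which is why two cameras pointed at the same scene render it differently.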
Dynamic Range and Highlight Behavior
Dynamic range is often misunderstood as a function of bit depth or resolution. In reality, it is defined by the ratio between the brightest signal the sensor can capture and its noise floor. Almost all cameras prioritize highlight preservation, though the roll-off characteristics differ between stills and cinema cameras. The photography-centric approach tends to feel natural and forgiving for stills, while the cinema approach is optimized for grading latitude and controlled highlight compression in motion workflows.
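That ratio is usually quoted in stops, i.e. powers of two. A sketch of the engineering definition, with illustrative numbers rather than any specific sensor's specs:

```python
import math

def dynamic_range_stops(full_well_electrons, read_noise_electrons):
    """Engineering dynamic range in stops: log2 of the ratio between the
    maximum recordable signal (full-well capacity) and the noise floor."""
    return math.log2(full_well_electrons / read_noise_electrons)

# e.g. a full-well capacity of 60,000 e- over a 4 e- noise floor:
print(round(dynamic_range_stops(60000, 4), 1))   # 13.9 stops
```

Notice that bit depth appears nowhere in the formula; it only determines how finely that physical range is sampled once it reaches the ADC.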
The Final Image is Rendered for Humans
Only at the very end of the pipeline does the camera or software produce something we recognize as an image. Gamma is applied, contrast is shaped, colors are finalized, and sharpening and noise reduction are introduced. JPEGs and video files are fully rendered outputs, designed for immediate viewing. RAW files, by contrast, remain open-ended descriptions of captured light.
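The "gamma is applied" step can be made concrete with the standard sRGB transfer function (IEC 61966-2-1), which every sRGB JPEG passes through on output:

```python
def srgb_encode(linear):
    """The sRGB transfer function: a short linear toe for the deepest
    shadows, then a power segment with a 2.4 exponent."""
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1 / 2.4) - 0.055

# Linear middle gray (18%) is lifted to roughly mid-scale for display:
print(round(srgb_encode(0.18), 3))   # 0.461
```

This is the final bridge between the linear world the sensor measured and the perceptual world the viewer sees: the curve spends most of its code values where human vision is most discriminating.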
The Most Important Truth
No matter which camera you are using, the fundamental truth remains the same: the camera records light as linear numbers, and everything we perceive as color, tone, and style is the result of mathematical interpretation applied afterward. Understanding this distinction is the key to mastering exposure, color grading, and image quality across all digital cameras.
