Wednesday 30 January 2013

Pixels


‘Pixel’ is short for ‘picture element’. When we see graphic images on digital devices, the display divides the screen into thousands or millions of pixels arranged in rows and columns. Each pixel has its own address in this grid and is usually represented as a dot or square.

Pixels build up a sample of an original image and are the smallest component of a digital image. The more pixels used to represent an image, the closer it will resemble the original.

The number of pixels used to create an image is often referred to as the ‘resolution’. Digital cameras with higher pixel counts can capture more detail, which is why pixel count is a common (if rough) measure of image quality.
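The relationship between dimensions and pixel count is just multiplication. A small sketch (the image sizes below are illustrative examples, not from the original post):

```python
# Total pixel count ("resolution") for some common image sizes.
def megapixels(width, height):
    """Total pixel count in millions (megapixels)."""
    return width * height / 1_000_000

print(megapixels(640, 480))    # VGA: ~0.3 MP
print(megapixels(1920, 1080))  # Full HD: ~2.1 MP
print(megapixels(4000, 3000))  # a 12 MP camera sensor
```

So a "12 megapixel" camera simply records about twelve million of these picture elements per photo.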




In colour images a pixel is typically comprised of three colour dots – a red, a green and a blue. When these colour dots converge, they build coloured pixels. You might have spent most of your youth playing Mario games at 256 x 224 pixels – now a typical HD monitor can display 1,366 x 768 pixels.


RGB

Mixing red (R), green (G), and blue (B) can produce a large part of the visible spectrum. When these three colors overlap, they produce white, and hence this is known as an additive color model.
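The additive model can be sketched in a few lines of Python. This is a toy illustration – `mix` is a made-up helper, not a real graphics API – but it shows how full-intensity red, green, and blue sum to white:

```python
# Additive mixing: each channel is an intensity from 0 to 255.
# Adding the channels of two or more colours (clamped at 255)
# models how overlapping light sources combine.
def mix(*colors):
    """Add RGB tuples channel-wise, clamping each channel at 255."""
    return tuple(min(sum(channel), 255) for channel in zip(*colors))

RED, GREEN, BLUE = (255, 0, 0), (0, 255, 0), (0, 0, 255)
print(mix(RED, GREEN, BLUE))  # (255, 255, 255) -> white
print(mix(RED, GREEN))        # (255, 255, 0)   -> yellow
```

Note this is the opposite of mixing paint, where pigments absorb light and combining them moves you toward black (a subtractive model).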

Computer monitors produce colors by emitting light through red, green, and blue phosphors. But that only explains part of it. Your monitor settings can be customized depending on your preference and hardware such as your graphics card.

Monitors have various display settings, such as 256 colors (8 bit), high color (16 bit), or true color (24 bit, often stored as 32 bit with an extra alpha channel). If you’re using true color, you’ll notice a world of difference when compared to 256 colors.
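The difference comes straight from the arithmetic of bit depth – each extra bit doubles the number of representable colors:

```python
# Bit depth -> number of displayable colours (2 ** bits).
for name, bits in [("256 colours", 8), ("high colour", 16), ("true colour", 24)]:
    print(f"{name}: {bits}-bit -> {2 ** bits:,} colours")
```

At 24 bits that works out to 16,777,216 colours – more shades than the eye can distinguish, which is why true color looks smooth where 8-bit images show visible banding.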

You can also adjust settings like your monitor’s brightness or play with the color levels pretty easily – a process very similar to changing the brightness and color levels on your TV set. As a result, a color on one computer can look completely different on another computer.



Colour space & YUV

A device color space simply describes the range of colors, or gamut, that a camera can see, a printer can print, or a monitor can display.

Editing color spaces, on the other hand, such as Adobe RGB or sRGB, are device-independent. They also determine a color range you can work in. Their design allows you to edit images in a controlled, consistent manner.

A device color space is tied to the idiosyncrasies of the device it describes. An editing space, on the other hand, is gray balanced — colors with equal amounts of red, green, and blue appear neutral. Editing spaces are also perceptually uniform; i.e. changes to lightness, hue, or saturation are applied equally to all the colors in the image.

The YUV colorspace is a bit unusual. The Y component determines the brightness of the color (referred to as luminance or luma), while the U and V components determine the color itself (the chroma).

Y ranges from 0 to 1 (or 0 to 255 in digital formats), while U and V range from -0.5 to 0.5 (or -128 to 127 in signed digital form, or 0 to 255 in unsigned form). Some standards further limit the ranges so the out-of-bounds values indicate special information like synchronization.
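A common way to compute these components from RGB uses the BT.601 luma weights, with U and V scaled to fall in [-0.5, 0.5] as described above. This is a floating-point sketch; real video pipelines use the fixed-point integer forms defined by the standard:

```python
# RGB -> YUV using BT.601 luma weights, with R, G, B in [0, 1].
# Y lands in [0, 1]; U and V are scaled into [-0.5, 0.5].
def rgb_to_yuv(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b  # luma: weighted brightness
    u = (b - y) / 1.772                    # chroma: blue minus luma
    v = (r - y) / 1.402                    # chroma: red minus luma
    return y, u, v

print(rgb_to_yuv(1, 1, 1))  # white: Y ~ 1.0, U and V ~ 0
print(rgb_to_yuv(0, 0, 1))  # pure blue: U at its maximum, ~0.5
```

Note that the weights are unequal: green contributes most to Y because the eye is most sensitive to green light.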

One neat aspect of YUV is that you can throw out the U and V components and get a grey-scale image. Since the human eye is more responsive to brightness than it is to color, many lossy image and video compression formats throw away half or more of the samples in the chroma channels to reduce the amount of data, without severely degrading the image quality.
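The savings are easy to count. In the common 4:2:0 scheme, for example, only one U and one V sample are kept per 2x2 block of pixels (a back-of-the-envelope sketch, assuming even image dimensions):

```python
# Sample count per frame with and without 4:2:0 chroma subsampling.
def samples(width, height, subsample_chroma=True):
    luma = width * height  # one Y sample per pixel, always full resolution
    if subsample_chroma:
        # one U and one V sample per 2x2 block of pixels
        chroma = (width // 2) * (height // 2) * 2
    else:
        chroma = luma * 2  # one U and one V sample per pixel
    return luma + chroma

full = samples(1920, 1080, subsample_chroma=False)
sub = samples(1920, 1080)
print(sub / full)  # 0.5 -> half the raw data for the same pixel count
```

Halving the data before any actual compression is applied is a big part of why formats like JPEG and most video codecs store images in YUV rather than RGB.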

