How video compression works 1/5

by digipine posted Nov 02, 2017
BDTI explains how video codecs like MPEG-4 and H.264 work, and how they differ from one another. It also explains the demands codecs make on processors.

By BDTI 
[For a closer look at H.264, see H.264: the codec to watch]

Digital video compression and decompression algorithms (codecs) are at the heart of many modern video products, from DVD players to multimedia jukeboxes to video-capable cell phones. Understanding the operation of video compression algorithms is essential for developers of the systems, processors, and tools that target video applications. In this article, we explain the operation and characteristics of video codecs and the demands codecs make on processors. We also explain how codecs differ from one another and the significance of these differences.



Starting with stills

Because video clips are made up of sequences of individual images, or "frames," video compression algorithms share many concepts and techniques with still-image compression algorithms. Therefore, we begin our exploration of video compression by discussing the inner workings of transform-based still image compression algorithms such as JPEG, which are illustrated in Figure 1.


[Figure 1. Block diagram of a transform-based still-image compression algorithm such as JPEG (image not preserved)]


The image compression techniques used in JPEG and in most video compression algorithms are "lossy." That is, the original uncompressed image can't be perfectly reconstructed from the compressed data, so some information from the original image is lost. The goal of using lossy compression is to minimize the number of bits that are consumed by the image while making sure that the differences between the original (uncompressed) image and the reconstructed image are not perceptible—or at least not objectionable—to the human eye.

Switching to frequency
The first step in JPEG and similar image compression algorithms is to divide the image into small blocks and transform each block into a frequency-domain representation. Typically, this step uses a discrete cosine transform (DCT) on blocks that are eight pixels wide by eight pixels high. Thus, the DCT operates on 64 input pixels and yields 64 frequency-domain coefficients, as shown in Figure 2. (Transforms other than DCT and block sizes other than eight by eight pixels are used in some algorithms. For simplicity, we discuss only the 8x8 DCT in this article.)


 03af673cad82955b251071ec871823de.gif


The DCT itself is not lossy; that is, an inverse DCT (IDCT) could be used to perfectly reconstruct the original 64 pixels from the DCT coefficients. The transform is used to facilitate frequency-based compression techniques. The human eye is more sensitive to the information contained in low frequencies (corresponding to large features in the image) than to the information contained in high frequencies (corresponding to small features). Therefore, the DCT helps separate the more perceptually significant information from less perceptually significant information. After the DCT, the compression algorithm encodes the low-frequency DCT coefficients with high precision, but uses fewer bits to encode the high-frequency coefficients. In the decoding algorithm, an IDCT transforms the imperfectly coded coefficients back into an 8x8 block of pixels.
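Both properties described above — the transform itself is exactly invertible, and the loss comes only from encoding high-frequency coefficients imprecisely — can be checked numerically. The sketch below is a slow pure-Python DCT/IDCT pair (our own illustrative code, not a codec's); the crude "keep only coefficients with u+v < 4" truncation stands in for the real bit-allocation step.

```python
import math

N = 8  # JPEG-style block size

def _c(k):
    # Orthonormal scale factors for the DCT-II / DCT-III basis
    return math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)

def dct_8x8(block):
    """Forward 2-D DCT: 64 pixels -> 64 frequency coefficients."""
    return [[_c(u) * _c(v) * sum(
        block[x][y]
        * math.cos((2 * x + 1) * u * math.pi / (2 * N))
        * math.cos((2 * y + 1) * v * math.pi / (2 * N))
        for x in range(N) for y in range(N))
        for v in range(N)] for u in range(N)]

def idct_8x8(coeffs):
    """Inverse 2-D DCT: 64 coefficients -> 64 pixels."""
    return [[sum(
        _c(u) * _c(v) * coeffs[u][v]
        * math.cos((2 * x + 1) * u * math.pi / (2 * N))
        * math.cos((2 * y + 1) * v * math.pi / (2 * N))
        for u in range(N) for v in range(N))
        for y in range(N)] for x in range(N)]

# A smooth diagonal ramp, typical of gently varying image content
block = [[float(x + y) for y in range(N)] for x in range(N)]
coeffs = dct_8x8(block)

# 1) The transform round-trip is lossless (up to floating-point rounding).
exact = idct_8x8(coeffs)
roundtrip_err = max(abs(exact[x][y] - block[x][y])
                    for x in range(N) for y in range(N))

# 2) Discarding high-frequency coefficients is where information is lost,
# but for smooth content the reconstruction error stays small.
truncated = [[coeffs[u][v] if u + v < 4 else 0.0
              for v in range(N)] for u in range(N)]
approx = idct_8x8(truncated)
trunc_err = max(abs(approx[x][y] - block[x][y])
                for x in range(N) for y in range(N))

print(f"round-trip max error: {roundtrip_err:.2e}")  # essentially zero
print(f"truncated max error:  {trunc_err:.3f}")      # small for this smooth block
```

For a block full of sharp edges, the same truncation would produce much larger errors, which is why encoders allocate bits adaptively rather than simply zeroing high frequencies.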

A single two-dimensional eight-by-eight DCT or IDCT requires a few hundred instruction cycles on a typical DSP such as the Texas Instruments TMS320C55x. Video compression algorithms often perform a vast number of DCTs and/or IDCTs per second. For example, an MPEG-4 video decoder operating at VGA (640x480) resolution and a frame rate of 30 frames per second (fps) would require roughly 216,000 8x8 IDCTs per second, depending on the video content. In older video codecs these IDCT computations could consume as many as 30% of the processor cycles. In newer, more demanding codec algorithms such as H.264, however, the inverse transform (which is often a different transform than the IDCT) takes only a few percent of the decoder cycles.
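The 216,000 figure can be reproduced with simple arithmetic if we assume 4:2:0 chroma subsampling (the usual MPEG-4 format, though the article does not state it): each frame carries a full-resolution luma plane plus two chroma planes at half resolution in each dimension.

```python
# Rough count of 8x8 IDCTs per second for VGA MPEG-4 decode,
# assuming 4:2:0 chroma subsampling (our assumption, not stated in the article).
width, height, fps = 640, 480, 30

luma_blocks = (width // 8) * (height // 8)          # 80 * 60 = 4800 blocks
chroma_blocks = 2 * (width // 16) * (height // 16)  # two half-resolution planes: 2400
blocks_per_frame = luma_blocks + chroma_blocks      # 7200 blocks per frame
idcts_per_second = blocks_per_frame * fps
print(idcts_per_second)  # 216000
```

The "depending on the video content" caveat matters because decoders can skip the IDCT for blocks whose coefficients are all zero, so the real count is usually lower than this worst case.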

Because the DCT and other transforms operate on small image blocks, the memory requirements of these functions are typically negligible compared to the size of frame buffers and other data in image and video compression applications. 
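To put numbers on that comparison: under the same assumptions as above (8-bit samples, 4:2:0 chroma subsampling), one transform block's working set is tiny next to a single VGA frame buffer.

```python
# Working set of one 8x8 transform vs. one VGA frame buffer,
# assuming 8-bit samples and 4:2:0 chroma subsampling.
block_bytes = 8 * 8                # 64 bytes of pixel data per transform block
frame_bytes = 640 * 480 * 3 // 2   # luma plane + two quarter-size chroma planes
print(block_bytes)                 # 64
print(frame_bytes)                 # 460800
print(frame_bytes // block_bytes)  # 7200 -> the frame buffer is 7200x larger
```

A decoder typically keeps at least two such frame buffers resident (the frame being decoded and a reference frame), so the transform's few-dozen-byte footprint is negligible by comparison.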
