Saturday, January 1, 2011

【Weak Current College】 Video format and interface terminology


ASF is an abbreviation of Advanced Streaming Format, and the name literally tells you what the format is for. Put bluntly, ASF is a video file compression format Microsoft developed to compete with RealPlayer, designed so that you can watch video directly over the Internet as it arrives. Because it uses the MPEG-4 compression algorithm, both its compression ratio and its image quality are very good. Since ASF is a "streaming" format meant for instant viewing over the Internet, it is no surprise that its image quality is a little worse than VCD, but it is still better than RealPlayer's streaming RAM format. If network delivery is not a concern and you compress a file at the highest quality setting, it is not strange for the resulting video to look better than VCD (MPEG-1); but in that case you have abandoned ASF's original purpose, and you might as well simply use nAVI or DivX instead. Still, Microsoft's "child" has one unmistakable advantage: the breadth of software that supports it is second to none.


nAVI is short for newAVI, a new video format developed by an underground group called ShadowRealm. It is a video format produced by modifying Microsoft's ASF compression algorithm (it is not, as you might imagine, a variant of AVI). nAVI pursues compression ratio and image quality alone, and to reach that goal it reworked a number of aspects of the original ASF format, giving nAVI a higher frame rate. Of course, this comes at the price of ASF's streaming characteristics. Broadly speaking, nAVI is an improved ASF with the streaming features removed; put simply, it is a non-network version of ASF!


AVI is the abbreviation of Audio Video Interleave. It needs little introduction: this is the old video format Microsoft published back in the Windows 3.1 era, and it has served us for many years. Its strengths are easy to state: it is easy to use, broadly compatible, and its image quality is good. Its drawback, I think, is well known: the files are huge. It is exactly this point that gave rise to the formats we see today, from MPEG-1 through MPEG-4.


MPEG is the abbreviation of Moving Picture Experts Group, and it covers MPEG-1, MPEG-2 and MPEG-4 (note that there is no MPEG-3; the familiar MP3 is just MPEG-1 Audio Layer 3). MPEG-1 is the variant most of us encounter, because it is widely used for making VCDs and for short video clips downloaded from the network; it is fair to say that 99% of VCDs use the MPEG-1 format (note that "VCD 2.0" does not mean the disc uses MPEG-2). With the MPEG-1 compression algorithm, a 120-minute film can be compressed to roughly 1.2 GB. MPEG-2 is used for making DVDs, and it also sees considerable use in HDTV (high-definition television broadcasting) and in demanding video editing and processing. Compressing the same 120-minute film with MPEG-2 yields about 4 to 8 GB, and of course its image quality and other performance figures are beyond what MPEG-1 can offer. MPEG-4 is a newer compression algorithm; an ASF file using it can squeeze a 120-minute film into roughly 300 MB of streaming video suitable for online viewing. DivX, which also uses it, compresses the same film to about 600 MB, with image quality much better than ASF's.
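The file sizes quoted above follow directly from average bitrate times duration. As a quick back-of-the-envelope check (the sizes are the article's example figures, not hard limits of the formats), a small sketch:

```python
# Average bitrate implied by the file sizes quoted above for a
# 120-minute film. The sizes are the article's own example figures.

def avg_bitrate_mbps(size_gb: float, minutes: float) -> float:
    """Average bitrate in megabits per second for a file of size_gb gibibytes."""
    bits = size_gb * 1024**3 * 8
    return bits / (minutes * 60) / 1e6

print(f"MPEG-1 (VCD), 1.2 GB: {avg_bitrate_mbps(1.2, 120):.2f} Mbps")
print(f"MPEG-2 (DVD), 4 GB:   {avg_bitrate_mbps(4.0, 120):.2f} Mbps")
print(f"MPEG-4 (ASF), 0.3 GB: {avg_bitrate_mbps(0.3, 120):.2f} Mbps")
```

The 1.2 GB VCD figure works out to roughly 1.4 Mbps, which matches MPEG-1's nominal 1.5 Mbps rate mentioned later in this article.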


DivX video encoding technology can fairly be called a threat to DVD as a video compression format (some call it the "DVD killer"). It was derived by modifying Microsoft's MPEG-4 v3 codec, uses the MPEG-4 compression algorithm, and at the same time breaks free of ASF's various restrictions. It is also said to be an encoding technology the United States prohibits from export. A film compressed with MPEG-4 needs only two CD-ROMs instead of a DVD, which means you do not need to buy a DVD-ROM drive to get video of nearly the same quality; a CD-ROM drive is enough! What's more, the hardware needed to play this encoding is not demanding: whether your CPU is a PII, Celeron, PIII, AMD K6-2, AMD K6-III, AMD Athlon or Cyrix x86, a machine with 64 MB of memory and an 8 MB graphics card can play it back smoothly. This is a great technology with an unlimited future!


QuickTime (MOV) is a video format created by Apple. For a long time it existed only on Apple's Mac machines; later it was developed to support the Windows platform as well. To be fair, whether for local playback or as a streaming format on the Internet, it is an excellent video encoding format. So far there have been four versions in total, with version 4.0 offering the best compression ratio!


The RealVideo (RA or RAM) format was positioned for streaming video applications from the outset, and it can be regarded as a pioneer of video streaming technology. It can sustain uninterrupted video playback over a 56K modem dial-up connection; of course, compared with MPEG-2, DivX and the like, its image quality these days leaves much to be desired. After all, delivering uninterrupted video over the network without demanding a great deal of bandwidth is exactly the ground on which it competes with its powerful rival, ASF!


Digital video
Digital video starts with a camera or video capture device converting the color and brightness information of an external image into electrical signals, which are then recorded onto a storage medium (such as videotape). On playback, the video signal is converted into frame information and projected onto the display at roughly 30 frames per second, so that the human eye perceives continuous, uninterrupted motion. Film is played back at roughly 24 frames per second. Viewed on an oscilloscope (a test instrument), the unprojected analog signal looks like a brain-scan trace of continuous jagged peaks and valleys. To store this visual information, the peaks and valleys of the analog video signal must be converted into digital "0"s and "1"s by an analog-to-digital (A/D) converter. This conversion process is what we call video capture. If you want to watch digital video on a television, you need a digital-to-analog converter to decode the binary information back into an analog signal that can be played.


A codec compresses and decompresses the video signal. The computer industry's 24-bit true-color measurement system defines nearly 16.8 million colors, close to the limit of human vision. Today the most basic VGA video is 640 × 480 pixels, which means that video played back at 30 frames per second must transfer up to about 27 MB of information per second; a 1 GB hard disk could hold only roughly 37 seconds of such video. Compression is therefore a must. By discarding some of the digital information, or image detail our eyes and brain easily ignore, the volume of video data is greatly reduced. The software or hardware that performs this compression and decompression is the codec. Codecs generally compress from 2:1 up to 100:1, making it practical to work with large amounts of video data.
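The arithmetic behind those two figures is straightforward, as a short check shows:

```python
# Uncompressed VGA video data rate, as described above:
# 640 x 480 pixels, 24-bit color (3 bytes/pixel), 30 frames per second.

WIDTH, HEIGHT = 640, 480
BYTES_PER_PIXEL = 3          # 24-bit true color
FPS = 30

bytes_per_frame = WIDTH * HEIGHT * BYTES_PER_PIXEL   # 921,600 bytes per frame
bytes_per_second = bytes_per_frame * FPS             # 27,648,000 bytes/s, ~27 MB/s
seconds_per_gb = 1024**3 / bytes_per_second          # under 40 s of video per GiB

print(f"{bytes_per_second / 1e6:.1f} MB/s uncompressed")
print(f"{seconds_per_gb:.1f} s of video per GiB of disk")
```

This lands right on the article's "about 27 MB per second" and "roughly 37 seconds per GB" (the exact seconds figure depends on whether GB is read as 10^9 or 2^30 bytes).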

Dynamic and static image compression
Static image compression technology mainly compresses spatial information, while moving-image compression also compresses temporal information in addition to spatial information. At present, three compression standards have taken shape:

1. JPEG (Joint Photographic Experts Group) standard:

Used for compressing continuous-tone, grayscale and color/monochrome images. It achieves high compression ratios on image files (a 1000 KB BMP file compressed to JPEG may end up only 20-30 KB), at the cost of some distortion during compression. It is currently in very wide use (especially on Internet web pages). This type of lossy compression sacrifices minor detail to archive a still image at a typical compression ratio of 4:1 to 10:1. Motion JPEG (M-JPEG) can compress each frame of a video sequence in turn, treating every frame as an independent image. Motion JPEG can produce high-quality, full-screen, full-motion video, but it needs to rely on additional hardware.

2. H.261 standard: mainly applicable to video telephony and video conferencing.

3. MPEG (Moving Picture Experts Group; a global image/sound/system compression standard): it comprises three parts, MPEG Video, MPEG Audio, and MPEG System (audio-video synchronization). The MPEG compression standard is designed for moving pictures. Its basic approach is to collect and save the full information of the first frame in a unit of time, and thereafter store only how the remaining frames change relative to that reference frame, thereby achieving compression. Because MPEG compresses between frames, its average compression ratio can reach 50:1; the compression ratio is high, the format is uniform, and compatibility is good.
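The "store the first frame, then only the changes" idea can be made concrete with a toy sketch. Note this is only an illustration of the principle; real MPEG uses motion compensation and DCT transforms, not a literal pixel diff:

```python
# Toy illustration of interframe compression: keep the first (key) frame
# in full, then for each later frame record only the pixels that changed
# relative to the previous frame. Real MPEG is far more sophisticated.

def encode(frames):
    key = frames[0]
    deltas = []
    for frame in frames[1:]:
        # record (index, new_value) only where a pixel differs from the previous frame
        deltas.append([(i, v) for i, (p, v) in enumerate(zip(key, frame)) if p != v])
        key = frame
    return frames[0], deltas

def decode(key, deltas):
    out = [list(key)]
    cur = list(key)
    for d in deltas:
        for i, v in d:
            cur[i] = v
        out.append(list(cur))
    return out

frames = [[0, 0, 0, 0], [0, 9, 0, 0], [0, 9, 5, 0]]
key, deltas = encode(frames)
print(deltas)                            # only the changed pixels are stored
assert decode(key, deltas) == frames     # the sequence round-trips exactly
```

When consecutive frames are mostly alike, as in typical video, the deltas are tiny compared with the frames themselves, which is where the high interframe compression ratios come from.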

Among multimedia data compression standards, the MPEG family, including MPEG-1, MPEG-2 and MPEG-4, is the most widely used.

MPEG-1 is designed for encoding moving pictures and sound on digital storage media at a data rate of 1.5 Mbps. With MPEG-1 compression, the video data compression ratio is about 1/100 to 1/200, and the audio compression ratio is about 1/6.5. MPEG-1 provides 352 × 240 images at 30 frames per second and, with appropriate compression techniques, approaches home video system (VHS) picture quality. MPEG-1 allows more than 70 minutes of high-quality video and audio to be stored on a single CD-ROM disc. VCD uses the MPEG-1 standard, a video and audio compression standard aimed at household-television quality.

MPEG-2 is aimed mainly at high-definition television (HDTV), with a transmission rate of 10 Mbps. It is backward-compatible with MPEG-1 and applicable to encoding rates from 1.5 Mbps up to 60 Mbps or even higher. At 30 frames per second and a resolution of 704 × 480, MPEG-2 carries four times the pixels of MPEG-1. It suits demanding broadcast and entertainment applications, such as DSS satellite broadcasting and DVD; MPEG-2's resolution goes well beyond home video system (VHS) quality.

MPEG-4 is a compression standard for ultra-low-bitrate moving images and speech, targeting real-time image transmission at rates below 64 kbps. It covers not only the low-bitrate band but is also being extended toward higher bitrates. Compared with the two earlier standards, MPEG-4 provides a much broader platform for multimedia data compression: what it defines is more a format and a framework than a specific algorithm, so that all kinds of multimedia techniques can be brought in, including compression tools and algorithms themselves as well as image synthesis and speech synthesis technology.

The digital-to-analog converter (DAC) converts digital signals into analog signals. The more bits a DAC has, the less the signal is distorted, and the clearer and more stable the image.

AVI is a file format that keeps sound and video synchronized together. It applies lossy compression to the video, but the compression ratio is fairly high, so even though the picture quality is not outstanding, it still sees very wide use. AVI supports 256 colors and RLE compression. AVI is mainly used on multimedia CDs to store television, film and other imaging information.

Methods of encoding color are collectively referred to as "color spaces" or "gamuts". In the simplest terms, any color in the world can be defined in a "color space" by a fixed set of numbers or variables. The RGB (red, green, blue) color space is just one of them: with this encoding, each color is represented by three variables, the intensities of red, green and blue. RGB is the most common scheme for recording and displaying color images. However, it lacks good compatibility with early black-and-white display systems. A method commonly used by electronics manufacturers is therefore to convert RGB to the YUV color space to maintain that compatibility, and to convert back to RGB when a computer monitor needs to display color graphics.

YUV (also called YCbCr) is the color encoding method used by European television systems (it belongs to PAL). YUV is mainly used to optimize the transmission of color video signals, keeping them backward-compatible with old black-and-white televisions. Compared with transmitting an RGB video signal, its greatest advantage is that it occupies very little extra bandwidth (RGB requires three independent video signals to be transmitted simultaneously). "Y" stands for the brightness (luma, or luminance), which is the grayscale value; "U" and "V" stand for the chrominance (chroma), which describes the hue and saturation of a pixel's color. The brightness is created from the RGB input signal by overlaying specific weighted portions of the RGB components. The chrominance then defines the two remaining aspects of color, hue and saturation, via Cr and Cb: Cr reflects the difference between the red portion of the RGB input signal and the luminance value of the RGB signal, while Cb reflects the difference between the blue portion of the RGB input signal and that same luminance value.
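The luma/chroma relationship described above can be written out directly. The weights below are the standard ITU-R BT.601 coefficients used in PAL/NTSC-era digital video; a minimal sketch:

```python
# RGB -> YCbCr (BT.601 coefficients): brightness Y plus the two color
# differences Cb and Cr described above. R, G, B are in [0, 255].

def rgb_to_ycbcr(r, g, b):
    y  = 0.299 * r + 0.587 * g + 0.114 * b   # luma: weighted overlay of R, G, B
    cb = (b - y) * 0.564 + 128               # blue-difference chroma, offset to midpoint
    cr = (r - y) * 0.713 + 128               # red-difference chroma, offset to midpoint
    return y, cb, cr

# A pure gray pixel carries no chroma: Cb and Cr sit at the 128 midpoint,
# which is exactly why a black-and-white TV can ignore them.
y, cb, cr = rgb_to_ycbcr(128, 128, 128)
print(round(y), round(cb), round(cr))   # 128 128 128
```

Note how backward compatibility falls out of the math: the Y component alone is a complete grayscale image, so a monochrome receiver simply displays Y and discards Cb and Cr.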

Composite video and S-Video
An NTSC or PAL color video signal is built from a basic black-and-white video signal, with the color and brightness information added as pulses after each horizontal sync pulse. Because the color signal is "overlaid" onto several kinds of data, it is called "composite video". S-Video is a video interface with higher signal quality: it abandons this signal-superposition method and can thereby avoid some unnecessary quality loss. Its function is to carry the color and brightness information separately.

Baseband video is a simple analog signal consisting of analog video data and video synchronization data, which the receiver needs in order to display images correctly. The details of the signal depend on which video standard is in use: NTSC (the United States' National Television Standards Committee), PAL (Phase Alternate Line), or SECAM (the sequential-color-with-memory system adopted in France, SEquentiel Couleur Avec Memoire).

In the PC field, the standards in use are not mutually compatible. Take resolution: some standards use 625 lines per frame (at 50 Hz), while others use only 525 lines per frame (at 60 Hz). The latter, used in North America and Japan, is known as the NTSC standard. Typically, a video signal is produced by a video source such as a camera, a VCR, or a TV tuner. To transmit an image, the video source first generates a vertical synchronization signal (VSYNC). This signal resets the receiving device (such as a PC monitor) so that a new image begins displaying from the top of the screen. After issuing VSYNC, the video source scans out the first line of the image; when that is done, it generates a horizontal synchronization signal to reset the receiver so that the next line is displayed starting from the left of the screen. For each line of the image, a scan line and a horizontal sync pulse are sent.

In addition, the NTSC standard specifies that the video source must send 30 complete images (frames) per second. Without further measures, flicker would be severe. To solve this, each frame is divided into two fields of 262.5 lines each: one containing all the odd lines, the other all the even lines. On display, the odd lines are scanned first and then the even lines, which effectively improves the stability of the displayed image and reduces flicker. At present there are three main color television standards, NTSC, PAL and SECAM, and the three are not unified. China uses the PAL-D standard.
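Splitting a frame into its two interlaced fields, as described, is just a matter of separating alternate scan lines. A minimal sketch (using 0-based line indices, so "even" here means lines 0, 2, 4, ...):

```python
# Split a frame (a list of scan lines) into its two interlaced fields,
# then weave them back together. Line indices here are 0-based.

def to_fields(frame):
    even_field = frame[0::2]   # lines 0, 2, 4, ...
    odd_field  = frame[1::2]   # lines 1, 3, 5, ...
    return even_field, odd_field

def weave(even_field, odd_field):
    """Recombine two fields into a full frame (deinterlacing by weaving)."""
    frame = []
    for e, o in zip(even_field, odd_field):
        frame.extend([e, o])
    return frame

frame = [f"line{i}" for i in range(6)]
even, odd = to_fields(frame)
print(even, odd)                    # alternate lines land in alternate fields
assert weave(even, odd) == frame    # weaving restores the original frame
```

Since each field is sent in half a frame period, the screen refreshes 60 times per second even though only 30 full frames arrive, which is what suppresses the flicker.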

UltraScale is a scan-conversion technology from Rockwell. It can scale the displayed picture arbitrarily in both the vertical and horizontal directions. On an interlaced device such as a TV, which displays only one field of the video at a time, this process is inherently troublesome; with UltraScale, the picture can be freely scaled in either direction much as it can on a progressive computer monitor.
