Saturday, December 18, 2010
【Weak Current College】Video, Audio, Streaming Media and Image Coding Standards
I. Video coding standards — overview

1. The MPEG standards. MPEG is the abbreviation of the Moving Pictures Experts Group, an organization dedicated to the development and standardization of digital video and audio technology. It was jointly established in 1988 by ISO (the International Organization for Standardization) and IEC (the International Electrotechnical Commission); its formal designation is ISO/IEC JTC1/SC29/WG11, and its members are experts from the industry's most influential research institutions in many countries. In a little over ten years the MPEG organization has achieved fruitful results and developed greatly. MPEG-1 was approved in November 1992 and accepted by ISO as an international standard in 1993; MPEG-2 followed in November 1994 and became an international standard in 1995; MPEG-4 was issued in 1999 (version 1) and 2001 (version 2); at present, H.264/MPEG-4 AVC (i.e. ISO MPEG-4 Part 10) is still being developed.

2. ISO MPEG-1. The MPEG-1 standard, completed in 1992 and numbered ISO/IEC 11172, was designed as an industry standard. It compresses images at CIF resolution (352 × 240 for NTSC, 352 × 288 for PAL) and specifies that the combined video and audio data rate after compression must not exceed 1.5 Mbps. MPEG-1 can be used on storage devices of different bandwidths such as CD-ROM and Video-CD, and can also carry video and audio over LAN and ISDN lines.
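To put these MPEG-1 numbers in perspective, here is a rough back-of-the-envelope sketch comparing the uncompressed CIF data rate with the 1.5 Mbps budget. The frame sizes and the budget come from the text; the 4:2:0 sampling assumption and the helper name are mine, for illustration only.

```python
# Rough back-of-the-envelope: raw CIF 4:2:0 data rate vs. the ~1.5 Mbps
# MPEG-1 budget quoted above. Frame sizes are from the text; the 4:2:0
# sampling assumption and all names here are illustrative only.

def raw_bitrate_bps(width, height, fps, bits_per_sample=8):
    """Uncompressed 4:2:0 rate: luma plane + two quarter-size chroma planes."""
    samples_per_frame = width * height * 1.5      # Y + Cb/4 + Cr/4
    return samples_per_frame * bits_per_sample * fps

for name, (w, h, fps) in {"NTSC CIF": (352, 240, 30),
                          "PAL CIF":  (352, 288, 25)}.items():
    raw = raw_bitrate_bps(w, h, fps)
    budget = 1.5e6                                # combined A/V budget in the text
    print(f"{name}: raw ≈ {raw/1e6:.1f} Mbps, "
          f"≈ {raw/budget:.0f}:1 reduction needed to fit 1.5 Mbps")
```

Either way, the sketch suggests a reduction on the order of 20:1 is needed, which is why MPEG-1 relies on both intra-frame and inter-frame compression.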
3. ISO MPEG-2. The MPEG organization released the MPEG-2 compression standard in 1994, and it became an international standard in 1995, numbered ISO/IEC 13818. MPEG-2 is the standard for digital television and high-definition television: it specifies in detail a variety of compression schemes and system-level requirements, with coding rates of up to 100 Mbps. MPEG-2 is not a simple upgrade of MPEG-1; it makes more detailed provisions for systems and transport and further improves them. In particular, MPEG-2 has been adopted as the coding standard for SDTV and HDTV in digital television broadcast encoding and transmission.

(1) Composition of the MPEG-2 standard. MPEG-2 currently consists of nine parts:

1) Part 1, ISO/IEC 13818-1, Systems: describes how multiple video, audio and data elementary streams are combined into transport streams and program streams.
2) Part 2, ISO/IEC 13818-2, Video: describes video coding.
3) Part 3, ISO/IEC 13818-3, Audio: describes audio coding that is backward compatible with MPEG-1 audio.
4) Part 4, ISO/IEC 13818-4, Conformance: describes conformance testing, i.e. methods for checking whether a coded stream is a valid MPEG-2 stream.
5) Part 5, ISO/IEC 13818-5, Software: describes software implementations of Parts 1, 2 and 3.
6) Part 6, ISO/IEC 13818-6, DSM-CC (Digital Storage Media Command and Control): describes the signaling set for interactive multimedia sessions between server and user over a network.

These six parts have all been approved as official international standards and are in wide practical use in the digital television field. In addition, MPEG-2 has three further parts: Part 7 specifies multichannel audio coding that is not backward compatible with MPEG-1 audio; Part 8 has been discontinued; and Part 9 specifies the real-time interface for transport streams.

(2) MPEG-2 encoding and decoding. MPEG-2 video compression exploits two properties of images: spatial correlation and temporal correlation, which mean that two related pictures contain a great deal of redundant information. To remove this redundancy effectively, the MPEG-2 standard divides coded pictures into three categories, called I frames, P frames and B frames. I frames use intra-frame coding; P frames and B frames use inter-frame coding. P frames use only forward temporal prediction, while B frames use bidirectional temporal prediction, which greatly improves the compression ratio. The MPEG-2 coded bitstream is divided into six layers, from top to bottom: the video sequence layer (Sequence), the group-of-pictures layer (GOP, Group of Pictures), the picture layer (Picture), the slice layer (Slice), the macroblock layer (Macroblock) and the block layer (Block). In intra-frame coding, the picture passes only through the DCT, the quantizer and the entropy encoder to produce the coded bitstream (a small numerical illustration of this path appears after the Profile@Level discussion below). In inter-frame coding, the original picture is first compared with the prediction picture held in frame memory to compute motion vectors; the motion vectors and the reference frame then generate the prediction of the original picture. The pixel difference between the original picture and the prediction is then passed through the DCT, the quantizer and the entropy encoder to produce the output bitstream.

(3) MPEG-2 Profile@Level. The current MPEG-2 video standard defines 5 profiles and 4 levels and supports scalable coding. A combination of a profile and a level constitutes a particular subset of the MPEG-2 standard for a particular class of applications. The Level constrains the MPEG-2 input format, ranging from limited-definition VHS-like images up to HDTV quality; each coded input format has a corresponding range. The four levels are:

1) Low Level (LL): the input format is 1/4 of ITU-R Rec. BT.601, i.e. 352 × 240 × 30 or 352 × 288 × 25.
2) Main Level (ML): the input format corresponds to ITU-R Rec. BT.601, i.e. 720 × 480 × 30 or 720 × 576 × 25.
3) High-1440 Level (H14L): 4:3 aspect ratio, using the 1440 × 1080 × 30 high-definition format.
4) High Level (HL): 16:9 aspect ratio, using the 1920 × 1080 × 30 high-definition format.

The Profile refers to the set of processing tools used; each profile comprises a collection of compression and processing tools, and a higher profile means more coding tools and more sophisticated processing, yielding better image quality at a higher implementation cost. The five profiles are:

1) Simple Profile (SP): uses the minimal set of coding tools.
2) Main Profile (MP): adds bidirectional prediction to the Simple Profile; at the same bit rate it gives better pictures than the Simple Profile.
3) SNR Scalable Profile (SNRP) and Spatially Scalable Profile (SPP): allow the coded video data to be split into a base layer and enhancement layers, providing more options for broadcasting.
4) High Profile (HP): for applications with higher requirements on image quality and bit rate.
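To make the intra-coding path described in (2) concrete — picture, DCT, quantizer, entropy coder — here is a minimal numerical sketch of a single 8 × 8 block. It is not the actual MPEG-2 tool chain: the quantizer step is a made-up constant, and zig-zag scanning and entropy coding are omitted.

```python
# Minimal sketch of the intra-coding path described above (DCT -> quantizer),
# not the actual MPEG-2 tool chain: a single 8x8 block, a 2-D DCT, and one
# uniform quantizer step. Zig-zag scanning and entropy coding are omitted.
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(coeffs):
    return idct(idct(coeffs, axis=0, norm='ortho'), axis=1, norm='ortho')

rng = np.random.default_rng(0)
block = rng.integers(0, 256, size=(8, 8)).astype(float)   # one luma block

q = 16                                  # illustrative quantizer step
coeffs = dct2(block)
quantized = np.round(coeffs / q)        # many high-frequency terms become 0
reconstructed = idct2(quantized * q)    # decoder side: dequantize + inverse DCT

print("non-zero coefficients:", int(np.count_nonzero(quantized)), "of 64")
print("mean absolute error:", float(np.abs(block - reconstructed).mean()))
```

Most of the quantized high-frequency coefficients end up zero, which is what the subsequent entropy coder exploits.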
(4) MPEG-2 features and applications. MPEG-2 has the following salient features: it supports image resolutions up to and beyond the ITU-R Rec. BT.601 (CCIR 601) standard-definition format, including the higher resolutions of HDTV; it supports material containing fast motion, and is the most widely used standard both on storage media (DVD) and in broadcasting (digital broadcast television and HDTV), and can also be used for interactive video on demand (VOD) and near video on demand (NVOD); and it can be adapted to ATM and other broadband communication networks.

4. The ISO MPEG-4 standard. MPEG-4 was finalized in October 1998 and became an international standard in January 1999; it was subsequently extended, and a second edition (version 2) was issued in 2001. The international standard number of MPEG-4 is ISO/IEC 14496. MPEG-4 is characterized by being better suited to interactive audio-visual services and remote monitoring; it is a dynamic, interactive image coding standard.

(1) Composition of the MPEG-4 standard.

1) DMIF (Delivery Multimedia Integration Framework): mainly solves the operational problems of multimedia applications over interactive networks, in broadcast environments and on disk. Through DMIF, MPEG-4 can establish channels with a specific quality of service (QoS) and bandwidth for each elementary stream.
2) Data plane: the MPEG-4 data transport plane can be divided into two parts, a media-related part and a delivery-related part; it introduces the concepts of the object descriptor (OD) and the stream map table (SMT), so that elementary streams and the AV objects in the same scene can be associated with each other.
3) Buffer management and real-time identification: MPEG-4 defines a System Decoder Model (SDM), an idealized decoder model that describes how a device handles the syntax and semantics of the coded bitstream; it requires specific buffering and a real-time mode. Through effective management, the limited buffer space can be used more efficiently.
4) Video coding: MPEG-4 supports the coding of both natural and synthetic visual objects; synthetic visual objects include 2D and 3D animation and human facial animation.
5) Audio coding: MPEG-4 supports not only natural sound but also synthesized sound. It combines the coding of natural audio with synthesis coding and supports object-based audio features.
6) Scene description: the scene description specifies how the AV objects in a particular AV scene are organized and synchronized, as well as issues such as interaction between AV objects and the scene and the protection of intellectual property.

(2) MPEG-4 encoding and decoding principles.

a. Basic idea. The basic idea of MPEG-4 encoding and decoding is a second-generation, content-based video coding scheme that also incorporates synthesis-based coding into the standard. The picture is segmented by content into different video objects (VO, Video Object), and in the encoding process different coding strategies are applied to foreground and background objects: for the foreground objects the viewer cares about, as much detail and smoothness as possible is preserved, while the background objects the viewer does not care about are coded with a large compression ratio, as the short sketch below illustrates.
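A toy illustration of this content-based idea, assuming invented object names and quantizer steps: segment the picture into video objects, then quantize the foreground finely and the background coarsely.

```python
# Toy illustration of the content-based idea in (2)a: segment the picture
# into video objects, then spend bits unevenly -- a fine quantizer for the
# foreground object the viewer cares about, a coarse one for the background.
# Object names and quantizer values are invented for illustration.
import numpy as np

def quantize(block, step):
    return np.round(block / step) * step

rng = np.random.default_rng(1)
objects = {
    "foreground_person": rng.integers(0, 256, (8, 8)).astype(float),
    "background":        rng.integers(0, 256, (8, 8)).astype(float),
}
steps = {"foreground_person": 4, "background": 32}   # coarse step = more compression

for name, block in objects.items():
    err = np.abs(block - quantize(block, steps[name])).mean()
    print(f"{name}: quantizer step {steps[name]}, mean error {err:.1f}")
```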
b. Coded data structure. MPEG-4 organizes the coded image according to the following five layers, from top to bottom: the video session VS (Video Session), the video object VO (Video Object), the video object layer VOL (Video Object Layer), the group of video object planes GOV (Group of Video Object Planes) and the video object plane VOP (Video Object Plane). In MPEG-4 a VO is defined as one of the distinct objects segmented from the picture, and each VO is described by three types of information: motion information, shape information and texture information. What constitutes a VO depends on the specific application and the actual environment. For ultra-low bit rate applications, a VO can simply be a rectangular frame (the traditional rectangular frame of MPEG-1), which keeps compatibility with the earlier standard. For applications with higher content-related requirements, a VO may be an object or a particular layer in the scene, such as the head-and-shoulders image of a news presenter; a VO may also be computer-generated 2D or 3D graphics.

c. VOP encoder structure. The encoder consists of two main parts: shape coding and the traditional motion and texture coding. Shape coding is specific to MPEG-4 and is required when coding VOPs of arbitrary shape.

d. MPEG-4 encoding and decoding process and framework. The encoding process is as follows: the first step is VO formation (VO Formation), in which the VOs are segmented from the original video stream; next, a coding control (Coding Control) mechanism allocates bit rate among the three types of information of the different VOs; then each VO is coded independently; finally, the streams of the individual VOs are multiplexed into a single bitstream. In the coding control and multiplexing stages, user interaction can be added for control, or control can be carried out by intelligent algorithms. MPEG-4 also includes model-based (mesh-based) and Sprite coding techniques. During image analysis, each VO is first examined to see whether it matches a model, such as a typical head-and-shoulders scene, and if so it is coded according to that model; it is then considered whether Sprite technology can be used for the background, in which case a larger background (Sprite) is generated and each frame needs only an affine transformation and position information; the remaining VOs are then coded by the normal process. The MPEG-4 decoding process is essentially the inverse of encoding and needs no further explanation.

(3) MPEG-4 Profile@Level.

(4) Main applications of MPEG-4. At present MPEG-4 is mainly used in three areas: digital television, interactive graphics applications (including content synthesis technology) and interactive multimedia.

(5) MPEG-4 features and advantages.

1) MPEG-4 was developed as an international standard, so it has good compatibility and openness.
2) MPEG-4 provides a high compression ratio with little loss of data, obtaining the best possible image quality from the least data.
3) MPEG-4 is an open standard; thanks to its high-quality digital images and because it lets content creators balance quality against bandwidth anywhere from MPEG-2 quality down to very low bit rate Internet streaming, it has been widely adopted by wireless, computer and entertainment companies all over the world.
4) Just as MPEG-2 digital television will eventually completely replace existing analog television, with the introduction of the MPEG-4 standard data compression and transmission technology will become still more standardized.
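The five-layer structure listed in (2)b can be sketched as a nested data structure. The class and field names below are invented for illustration; they only mirror the nesting, not the actual MPEG-4 syntax elements.

```python
# Schematic sketch of the five-layer structure in (2)b above
# (VS -> VO -> VOL -> GOV -> VOP). Names are illustrative only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class VOP:                      # Video Object Plane: one time instant of an object
    motion: bytes = b""
    shape: bytes = b""
    texture: bytes = b""

@dataclass
class GOV:                      # Group of Video Object Planes
    planes: List[VOP] = field(default_factory=list)

@dataclass
class VOL:                      # Video Object Layer (e.g. one scalability layer)
    groups: List[GOV] = field(default_factory=list)

@dataclass
class VO:                       # Video Object: a segmented object in the scene
    name: str
    layers: List[VOL] = field(default_factory=list)

@dataclass
class VideoSession:             # VS: the whole coded session
    objects: List[VO] = field(default_factory=list)

session = VideoSession(objects=[
    VO("anchor_head_and_shoulders", [VOL([GOV([VOP()])])]),
    VO("studio_background", [VOL([GOV([VOP()])])]),
])
print([vo.name for vo in session.objects])
```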
5. H.264/MPEG-4 AVC (ISO MPEG-4 Part 10). H.264 (the ITU-T name), also called MPEG-4 AVC (the ISO/IEC name), is a video codec being developed jointly by ITU-T and ISO/IEC; it will soon become MPEG-4 Part 10 (ISO MPEG-4 Part 10). As a video coding scheme it is now officially designated ITU-T H.264, or the "JVT/AVC" proposal. H.264/MPEG-4 AVC, as an extension of the MPEG-4 standard (MPEG-4 Part 10), makes full use of all aspects of the existing MPEG-4 standard and was developed on the basis of the existing MPEG-4 Advanced Simple Profile. The H.264/MPEG-4 AVC codec comprises the following five stages: precise motion estimation and intra estimation (Estimation), transform (Transform) and inverse transform, quantization (Quantization) and inverse quantization, the loop filter (Loop Filter), and entropy coding (Entropy Coding). Although H.264/MPEG-4 AVC has not yet been formally approved, it can cut the required bandwidth by 50% or more and can carry broadcast-quality video over Internet Protocol (IP) at data rates below 1 Mbps, something the already-published ISO MPEG-4 and MPEG-2 codecs simply cannot match. H.264/MPEG-4 AVC will therefore have a profound impact on all applications requiring high compression ratios and high quality. Chip makers and software technology providers such as Amphion Semiconductor, Envivio, Equator Technologies, iVAST, Texas Instruments, UB Video and VideoLocus are already conducting related research and product development; VideoLocus, for example, has introduced the VLP4000 system for H.264/MPEG-4 AVC codec, streaming media solutions and performance evaluation.

6. The DV standard. Digital camcorders record video not as analog signals but as compressed digital signals. To avoid incompatibility, the main products of this technology conform to a common digital compression standard: DV. Many well-known international companies have developed video products in the DV format; its transmission standard is known as FireWire (IEEE 1394). DV is a method of recording DCT-compressed digital video signals on cassette tape.

(1) DV specifications.

1) Sampling frequency and sampling rate: DV uses a 4:2:0 digital component recording system; the luminance sampling frequency is 13.5 MHz and the sampling frequency of each color-difference signal is 3.375 MHz.
2) Quantization depth: DV quantizes the sampled luminance and color-difference signals with 8 bits.
3) Video compression ratio: DV uses intra-frame compression with a compression ratio of 5:1; the compressed video stream is processed further and recorded on tape.

(2) The DV (IEEE 1394) interface. IEEE 1394 is a standard protocol for high-speed, short-distance data transmission and exchange. IEEE 1394 has been recognized by the digital VCR consortium as the standard digital interface and by the digital video association as a digital transmission standard. VESA (the Video Electronics Standards Association) endorses 1394 for home networking, and the European Digital Video Broadcasting (DVB) project likewise endorses IEEE 1394 interfaces for digital television.
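A back-of-the-envelope check of the DV figures in (1) — the sampling frequencies, 8-bit quantization and 5:1 intra-frame compression. Blanking intervals and audio/subcode overhead are ignored, so the result lands above the nominal DV video rate of roughly 25 Mbps; this is only a rough sanity check, not the exact DV rate calculation.

```python
# Rough sanity check of the DV figures in (1): sample rates, 8-bit
# quantization and the 5:1 intra-frame compression ratio. Blanking and
# audio/subcode overhead are ignored, so this overshoots the nominal rate.
luma_hz   = 13.5e6        # luminance sampling frequency from the text
chroma_hz = 3.375e6       # each colour-difference signal
bits      = 8             # quantization depth
compression = 5           # 5:1 intra-frame compression

raw_bps = (luma_hz + 2 * chroma_hz) * bits
print(f"raw component rate    ≈ {raw_bps / 1e6:.0f} Mbps")        # ≈ 162 Mbps
print(f"after 5:1 compression ≈ {raw_bps / compression / 1e6:.1f} Mbps")
```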
(3) DV features.

1) The DV format offers excellent image quality: high resolution, wide color bandwidth, faithful reproduction of rich colors, and stable, jitter-free pictures.
2) The DV format records sound digitally; to match the superior picture, DV provides high-quality PCM audio recording.
3) Because of the characteristics of the DV format, applications cannot adjust its output rate.
4) The MiniDV cassettes that DV supports are far more compact than VHS cassettes, so the MiniDV tape transport, reels and drum mechanisms have been redesigned accordingly and are more sophisticated.

7. The MXF format standard. The Material eXchange Format (MXF) is a major advance resulting from cooperation between organizations and companies such as Pro-MPEG, the EBU and the AAF Association. MXF is a file format for exchanging program material, as files or data streams, between servers, tape streamers and digital archives. The content may be a complete packaged radio or television program, or programs and clips. MXF can be self-contained, so content can be preserved without any external material. At present the Material eXchange Format (MXF) standard has been developed to version 9.

(1) The MXF file format. An MXF file bundles the essence (video, audio and program data such as text) together with metadata and places them in a single package: a body carrying the streams of essence, plus metadata. It stores the video frame by frame, each frame accompanied by the corresponding audio and data essence and by frame-based metadata; the latter usually contains the time code and file format information for each video frame. The body can be based on several different types of essence, including MPEG, DV and uncompressed video/audio, and it uses the SMPTE KLV data coding system, whose advantage is that it is already an accepted standard. MXF is a packaging structure for audio, video and metadata consisting of a file header, a file footer, a file body and an optional index. The index section contains some metadata and file parameters used for file access; the file body can carry video and audio in various formats such as MPEG, DV and uncompressed AV data. The file body is made up of a number of scene files, each composed of picture, sound and metadata.

(2) MXF metadata (Metadata).

1) The meaning of metadata. Metadata is auxiliary information that accompanies the video and audio data; it records information associated with program production, such as recording time, location, characters, scene number and other relevant details. This information stays closely linked to the video and audio signals through every stage of program production, transmission, reproduction and broadcast.
2) The role of metadata. In future applications such as material management, information retrieval, and interactive program production and broadcasting, metadata will greatly improve working efficiency and play a decisive role. Using metadata we can easily locate material, and besides watching the video directly we can obtain more intuitive, richer information from the other related metadata.
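The SMPTE KLV (Key-Length-Value) packing mentioned in (1) can be sketched very simply: every item is a key, a length, then the value bytes. Real MXF uses registered 16-byte SMPTE universal labels and BER-encoded lengths; the placeholder keys and the fixed 4-byte length field below are simplifications for illustration only.

```python
# Toy sketch of the KLV (Key-Length-Value) packing idea mentioned in (1).
# Real MXF uses registered SMPTE universal labels and BER-encoded lengths;
# the keys and the 4-byte big-endian length here are simplifications.
import struct

FAKE_FRAME_KEY = bytes(16)                 # placeholder, not a real SMPTE label
FAKE_META_KEY  = bytes(15) + b"\x01"       # placeholder key for a metadata item

def klv_pack(key: bytes, value: bytes) -> bytes:
    assert len(key) == 16
    return key + struct.pack(">I", len(value)) + value

def klv_unpack(buf: bytes, offset: int = 0):
    key = buf[offset:offset + 16]
    (length,) = struct.unpack_from(">I", buf, offset + 16)
    value = buf[offset + 20:offset + 20 + length]
    return key, value, offset + 20 + length

stream = klv_pack(FAKE_FRAME_KEY, b"\x00" * 120)           # one "frame" of essence
stream += klv_pack(FAKE_META_KEY, b"timecode=01:00:00:00")  # frame-based metadata

off = 0
while off < len(stream):
    key, value, off = klv_unpack(stream, off)
    print(key[-1], len(value), "bytes")
```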
(3) The relationship between MXF and AAF. AAF is the acronym of the Advanced Authoring Format. AAF is also an association whose members include the world's leading suppliers of electronic equipment, computer hardware and software manufacturers, and a number of broadcasters; the association is mainly responsible for a common AV file format standard for enhanced editing and production, namely the AAF format. AAF is an open industry format for multimedia content creation and post-production. AAF is a method of packaging picture, audio and metadata with labels and addresses, and MXF and AAF are complementary formats: AAF can integrate and complement the various media file formats in use, while MXF does the same for AAF files and the various streaming media formats in use. MXF and AAF can each be used on their own; each has its strengths in its own field of application, and users of one have no dependency on the other. For example, a pure broadcast system can use only the MXF format; a post-production suite can use only the AAF format; and a broadcaster with post-production facilities can use both. The MXF and AAF file formats are very similar in many respects; in terms of the relationship between the two, MXF can be regarded as a subset of AAF.

(4) The role of MXF.

1) Using MXF files, mature, practical, simple and inexpensive IT networks can be used to deliver high-quality video and audio data.
2) By adopting MXF file technology, equipment from any manufacturer (video recorders, servers, nonlinear production devices) that supports MXF, which is set to become an international standard file format, can exchange video data.
3) Besides providing better interoperability between different devices and different applications handling video and audio, another contribution of MXF is the transport of metadata. Because MXF was developed from the start as a new file format, great attention has been paid to the use of metadata; this is not only essential to the proper functioning of MXF files but will also lead to the emergence of powerful new media management tools.

II. Streaming media standards

1. The ISMA MPEG-4 streaming media standard. ISMA is short for the Internet Streaming Media Alliance, founded on December 14, 2000 by industry giants including Apple, Cisco, IBM, Sun, Philips and Kasenna; its full membership has now reached 30 or more. The aim of the ISMA alliance is to promote the development and use of open international standards for end-to-end IP streaming media solutions. Its official website (www.ISM-Alliance.org) publishes the complete specifications and information about the alliance's activities. As an alternative technology to MPEG-1 and MPEG-2, MPEG-4 can compress large streaming media files to very small sizes while preserving video and audio quality, which makes it better suited to network transmission. On this basis, ISMA developed and published a new streaming media standard based on ISO MPEG-4. The first specification released by ISMA is ISMA 1.0. It allows users to enjoy network audio and video from different companies' players after downloading only a plug-in, and it is not limited to PCs but also covers a variety of handheld wireless devices (such as PDAs and cell phones). ISMA 1.0 has two profiles, Profile 0 and Profile 1: the former targets narrow-bandwidth wireless networks and applies to mobile phones, PDAs and other handheld wireless devices, while the latter is designed for broadband and can be used in set-top boxes and PCs. ISMA hopes that MPEG-4 will eventually replace the MPEG-2 standard, but the new standard still has room for improvement in video quality and interoperability and is still being developed and refined.
2. QuickTime.

(1) The QuickTime MOV format. Apple's QuickTime MOV file format is an industry standard for digital media. It defines a standard method of storing digital media content; with this file format one can store not only individual media content (such as video frames or audio samples) but also a complete description of the media work. The QuickTime MOV file format was designed to meet the need to store many kinds of data when working with digital media. It separates the media description from the media data: the media description contains the number of tracks, the video compression format and timing information, and also includes an index into the media data storage area, while the media data consists of all the sample data, such as video frames and audio samples. Because this file format can describe almost any media structure, it is an ideal format for exchanging data between applications regardless of operating platform.

(2) QuickTime support for the ISO/ISMA MPEG-4 streaming media format. Apple's QuickTime has incorporated MPEG-4 compression technology since version 5.0, and the latest version, QuickTime 6, is the first to support the ISO/ISMA MPEG-4 streaming media format (the .mp4 file format); it is the world's first complete media solution for creating, streaming and viewing MPEG-4 content. QuickTime's ISO/ISMA MPEG-4 streaming media support has the following notable features:

1) A video codec fully compliant with the ISO MPEG-4 standard, for encoding and decoding MPEG-4 video content.
2) Compliance with the Internet Streaming Media Alliance (ISMA) 1.0 specification, with full support for both of its profiles, ISMA Profile 0 and ISMA Profile 1.
3) Instant-On Streaming technology, which eliminates buffering delay and allows immediate rewind, frame forward, frame back, dragging within the stream, and free repositioning after stopping, so that users can quickly scan streaming content and jump to the specific part they want to watch. This is a feature that RealVideo and Windows Media currently cannot match.
4) MPEG-4 Advanced Audio Coding (AAC), which produces much smaller files than MP3 with much better sound quality, and is stronger than Windows Media Audio (WMA).
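The separation of media description from media data described in (1) can be sketched as a small data structure. The class and field names are invented for illustration; they are not the actual QuickTime atom names.

```python
# Minimal data-structure sketch of the separation described in (1):
# a movie-level description (tracks, formats, timing, sample index) kept
# apart from the raw sample data. All names are illustrative only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class TrackDescription:
    media_type: str            # e.g. "video" or "audio"
    codec: str                 # e.g. "mp4v", "aac" (illustrative values)
    timescale: int             # ticks per second for this track
    sample_offsets: List[int] = field(default_factory=list)  # index into data

@dataclass
class MovieFile:
    description: List[TrackDescription]    # the "media description" part
    media_data: bytearray                   # the "media data" part (samples)

movie = MovieFile(
    description=[TrackDescription("video", "mp4v", 600),
                 TrackDescription("audio", "aac", 44100)],
    media_data=bytearray(),
)

# Appending a sample only touches the data area and the track's index,
# which is what makes the description easy to read without scanning samples.
def add_sample(movie: MovieFile, track: int, payload: bytes) -> None:
    movie.description[track].sample_offsets.append(len(movie.media_data))
    movie.media_data.extend(payload)

add_sample(movie, 0, b"\x00" * 100)   # one fake video frame
print(movie.description[0].sample_offsets)
```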
3. Windows Media technology. Microsoft Windows Media is a streaming multimedia publishing platform that adapts to a wide range of network bandwidth conditions. It provides a closely integrated set of services and tools for creating, managing, broadcasting and receiving rich streaming multimedia presentations delivered over the Internet and corporate intranets, forming a complete solution covering streaming media production, publishing, broadcasting and management. It also provides a software development kit (SDK) for third-party development.

(1) The Windows Media 9 Series platform. Microsoft Windows Media 9 Series is a complete multimedia solution platform consisting of codecs, servers, players, software development kits and other components. As Microsoft's latest core multimedia technology, it is greatly improved in codec quality, stability, integration, streaming performance and functionality, and because the platform is tightly integrated with the Windows operating system it will find wide application in radio, television, the Internet, distance learning and remote services.

(2) Windows Media Video 9 (WMV 9). Windows Media Video 9 (WMV 9) is Microsoft's flagship video codec. It is derived from MPEG-4 with several proprietary extensions and can provide better image quality at a given bit rate; in that sense, WMV 9 is one of the highest-quality streaming video codecs. Its main advantages include local and network playback, excellent image quality, scalable media types, component download, stream prioritization, multi-bit-rate composition, multi-language support, environment independence, rich relationships between streams, and scalability. WMV 9 also has shortcomings: it can only be used on the Windows operating system, so cross-platform support is poor; the WMV 9 codec is comparatively inefficient; and playing its streaming content in a browser is not as convenient as QuickTime's browser handling of ISO/ISMA MPEG-4 video content.

4. RealNetworks' RealVideo technology. The video and audio compression specification developed by RealNetworks, known as RealMedia, is a very popular cross-platform, client/server-based multimedia application standard on the Internet. It uses audio/video streaming and synchronized playback to deliver high-quality multimedia over the Internet at full bandwidth, and it can also provide stereo sound and continuous video over the Internet at transfer rates as low as 28.8 kbit/s. The RealVideo streaming media solution package includes:

1) The client player RealPlayer, which can run standalone or as a browser plug-in; the latest version is RealOne Player.
2) The content creation software RealProducer, which converts other media file formats into the compressed Real streaming media format; the latest version is Helix Producer Plus 9.0. Third-party developers can use the SDK provided by RealNetworks to convert their own media formats into the RealMedia file format.
3) The server software RealServer, which provides the streaming service; the latest version is Helix Server 9.0.

5. DivX MPEG-4. DivX MPEG-4 technology is based on Microsoft's MPEG-4 V3 codec and was developed by ProjectMayo, an open-source project sponsored by DivXNetworks. DivX MPEG-4 has the following features: its OpenDivX codec fully complies with the ISO MPEG-4 standard algorithm; it offers excellent quality at small file sizes; its playback environment is simple and convenient; and its AVI files can be used directly in applications such as streaming media.

III. Audio coding standards

1. LPCM. LPCM (Linear Pulse Code Modulation) is an uncompressed digital audio technology that reproduces the original sound without compression; it is widely used on ordinary CDs, DVDs and in other applications demanding the highest audio quality. The principle of LPCM is the same in all its applications; they differ only in sampling frequency and quantization precision.
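Since LPCM applies no compression, its data rate is simply sampling rate × bits per sample × channels. A worked example, using the common CD and DVD parameter sets as illustrations:

```python
# Worked arithmetic for the LPCM point above: with no compression, the data
# rate is just sampling_rate x bits x channels. The CD and DVD parameter
# sets below are common examples, given here for illustration.
def lpcm_bitrate_bps(sample_rate_hz, bits_per_sample, channels):
    return sample_rate_hz * bits_per_sample * channels

examples = {
    "CD audio (44.1 kHz / 16-bit / stereo)": (44_100, 16, 2),
    "DVD LPCM (48 kHz / 24-bit / stereo)":   (48_000, 24, 2),
}
for name, params in examples.items():
    print(f"{name}: {lpcm_bitrate_bps(*params) / 1e6:.3f} Mbps")
```

The CD case works out to about 1.411 Mbps, which is why uncompressed LPCM is reserved for media and applications where capacity is not the limiting factor.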