The PNG Image File Format


Original Documentation

"excerpted from the PNG (Portable Network Graphics) specification, tenth draft."

The PNG format (pronounced "ping") emerged as the Internet's replacement for
GIF after the CompuServe/Unisys LZW compression-patent controversy. PNG is a
lossless image-compression format suitable for a wide range of applications.
The PNG format is in the public domain; the latest versions of the standard
and related information can always be found at the PNG FTP archive site,
ftp.uu.net:/graphics/png/. The maintainers of the PNG specification can be
contacted by e-mail at [email protected].

The PNG format uses Motorola (big-endian) byte order, and scanlines always
begin on byte boundaries. When pixels are less than 8 bits deep and the
scanline width is not evenly divisible by the number of pixels per byte,
the low-order bits in the last byte of each scanline are wasted. The
contents of these padding bits are unspecified.

An additional "filter" byte is added to the beginning of every scanline,
as described in detail below. The filter byte is not considered part of the
image data, but it is included in the data stream sent to the compression
step.

PNG allows the image data to be filtered before it is compressed. The
purpose of filtering is to improve the compressibility of the data. The
filter step itself does not reduce the size of the data. All PNG filters are
strictly lossless.

PNG defines several different filter algorithms, including "none" which
indicates no filtering. The filter algorithm is specified for each scanline
by a filter type byte which precedes the filtered scanline in the
precompression data stream. An intelligent encoder may switch filters
from one scanline to the next. The method for choosing which filter to
employ is up to the encoder.

A PNG image can be stored in interlaced order to allow progressive
display. The purpose of this feature is to allow images to "fade in" when
they are being displayed on-the-fly. Interlacing slightly expands the file
size on average, but it gives the user a meaningful display much more
rapidly. Note that decoders are required to be able to read interlaced
images, whether or not they actually perform progressive display.

With interlace type 0, pixels are stored sequentially from left to right,
and scanlines sequentially from top to bottom (no interlacing).

Interlace type 1, known as Adam7 after its author, Adam M. Costello,
consists of seven distinct passes over the image. Each pass transmits a
subset of the pixels in the image. The pass in which each pixel is
transmitted is defined by replicating the following 8-by-8 pattern over
the entire image, starting at the upper left corner:

1 6 4 6 2 6 4 6
7 7 7 7 7 7 7 7
5 6 5 6 5 6 5 6
7 7 7 7 7 7 7 7
3 6 4 6 3 6 4 6
7 7 7 7 7 7 7 7
5 6 5 6 5 6 5 6
7 7 7 7 7 7 7 7

Within each pass, the selected pixels are transmitted left to right within
a scanline, and selected scanlines sequentially from top to bottom. For
example, pass 2 contains pixels 4, 12, 20, etc. of scanlines 0, 8, 16, etc.
(numbering from 0,0 at the upper left corner). The last pass contains the
entirety of scanlines 1, 3, 5, etc.

The data within each pass is laid out as though it were a complete
image of the appropriate dimensions. For example, if the complete
image is 8x8 pixels, then pass 3 will contain a single scanline containing
two pixels. When pixels are less than 8 bits deep, each such scanline is
padded to fill an integral number of bytes (see Image layout). Filtering is
done on this reduced image in the usual way, and a filter type byte is
transmitted before each of its scanlines (see Filter Algorithms). Notice
that the transmission order is defined so that all the scanlines
transmitted in a pass will have the same number of pixels; this is
necessary for proper application of some of the filters.

Caution: If the image contains fewer than five columns or fewer than
five rows, some passes will be entirely empty. Encoder and decoder
authors must be careful to handle this case correctly. In particular, filter
bytes are only associated with nonempty scanlines; no filter bytes are
present in an empty pass.
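
The pass pattern can also be expressed as a table of starting offsets and
increments. The following C sketch (not part of the specification itself;
the names are illustrative) tabulates the pattern and computes the
dimensions of the reduced image for each pass:

    /* Adam7 pass p (0-based): the transmitted pixels lie at
       (row, col) = (ystart[p] + i*ystep[p], xstart[p] + j*xstep[p]). */
    static const int xstart[7] = { 0, 4, 0, 2, 0, 1, 0 };
    static const int ystart[7] = { 0, 0, 4, 0, 2, 0, 1 };
    static const int xstep[7]  = { 8, 8, 4, 4, 2, 2, 1 };
    static const int ystep[7]  = { 8, 8, 8, 4, 4, 2, 2 };

    /* Width and height of the reduced image for pass p.  Either may be
       zero for very small images; such a pass is empty and carries no
       filter bytes. */
    int pass_width(int width, int p)
    {
        return (width - xstart[p] + xstep[p] - 1) / xstep[p];
    }

    int pass_height(int height, int p)
    {
        return (height - ystart[p] + ystep[p] - 1) / ystep[p];
    }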

A PNG file consists of a PNG signature followed by a series of chunks.
This chapter defines the signature and the basic properties of chunks.
Individual chunk types are discussed in the next chapter.


PNG Header
OFFSET              Count TYPE   Description
0000h                   8 char   ID=89h,'PNG',13,10,26,10

Chunk layout
OFFSET              Count TYPE   Description
0000h                   1 dword  Number of bytes in the chunk's data field (not counting the type code or CRC).
0004h                   4 char   Chunk type.
								 A 4-byte chunk type code. For convenience in
								 description and in examining PNG files, type
								 codes are restricted to consist of uppercase
								 and lowercase ASCII letters (A-Z, a-z).
								 However, encoders and decoders should treat the
								 codes as fixed binary values, not character
								 strings. For example, it would not be correct
								 to represent the type code IDAT by the EBCDIC
								 equivalents of those letters.
????h                   ? byte   Data
????h                   1 dword  CRC calculated on the preceding bytes in that
								 chunk, including the chunk type code and chunk
								 data fields, but not including the length
								 field. The CRC is always present, even for
								 empty chunks such as IEND. The CRC algorithm
								 is specified below.
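
As an illustration of the layout above, the following C sketch (illustrative,
not normative) walks the chunk stream. crc_over() is a hypothetical wrapper
around the CRC routine described under "CRC algorithm" below, fed the type
code and data bytes but not the length field:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical helper: CRC over the type code and data bytes. */
    unsigned long crc_over(const unsigned char type[4],
                           const unsigned char *data, unsigned long len);

    /* Read a 4-byte unsigned integer in Motorola (big-endian) order. */
    static unsigned long read_u32(FILE *f)
    {
        unsigned long v = 0;
        int i;
        for (i = 0; i < 4; i++)
            v = (v << 8) | (unsigned char) getc(f);
        return v;
    }

    /* Read one chunk and verify its CRC; returns 0 once IEND is seen. */
    int read_chunk(FILE *f)
    {
        unsigned long length = read_u32(f);
        unsigned char type[4];
        unsigned char *data = malloc(length ? length : 1);

        fread(type, 1, 4, f);
        fread(data, 1, length, f);
        if (read_u32(f) != crc_over(type, data, length))
            ;  /* report a CRC error here */

        /* ... dispatch on the type code ... */
        free(data);
        return memcmp(type, "IEND", 4) != 0;
    }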

Chunk naming conventions
========================

Chunk type codes are assigned in such a way that a decoder can
determine some properties of a chunk even if it does not recognize the
type code. These rules are intended to allow safe, flexible extension of
the PNG format, by allowing a decoder to decide what to do when it
encounters an unknown chunk. The naming rules are not normally of
interest when a decoder does recognize the chunk's type.

Four bits of the type code, namely bit 5 (value 32) of each byte, are used
to convey chunk properties. This choice means that a human can read
off the assigned properties according to whether each letter of the type
code is uppercase (bit 5 is 0) or lowercase (bit 5 is 1). However, decoders
should test the properties of an unknown chunk by numerically testing
the specified bits; testing whether a character is uppercase or lowercase
is inefficient, and even incorrect if a locale-specific case definition is
used.

It is also worth noting that the property bits are an inherent part of the
chunk name, and hence are fixed for any chunk type. Thus, TEXT and
Text are completely unrelated chunk type codes. Decoders should
recognize codes by simple four-byte literal comparison; it is incorrect to
perform case conversion on type codes.

The semantics of the property bits are:

First Byte: 0 (uppercase) = critical, 1 (lowercase) = ancillary
   Chunks which are not strictly necessary in order to meaningfully
   display the contents of the file are known as "ancillary" chunks.
   Decoders encountering an unknown chunk in which the
   ancillary bit is 1 may safely ignore the chunk and proceed to
   display the image. The time chunk (tIME) is an example of an
   ancillary chunk.

   Chunks which are critical to the successful display of the file's
   contents are called "critical" chunks. Decoders encountering an
   unknown chunk in which the ancillary bit is 0 must indicate to
   the user that the image contains information they cannot safely
   interpret. The image header chunk (IHDR) is an example of a
   critical chunk.

Second Byte: 0 (uppercase) = public, 1 (lowercase) = private
   If the chunk is public (part of this specification or a later edition
   of this specification), its second letter is uppercase. If your
   application requires proprietary chunks, and you have no interest
   in seeing the software of other vendors recognize them, use a
   lowercase second letter in the chunk name. Such names will
   never be assigned in the official specification. Note that there is
   no need for software to test this property bit; it simply ensures
   that private and public chunk names will not conflict.

Third Byte: reserved, must be 0 (uppercase) always
   The significance of the case of the third letter of the chunk name
   is reserved for possible future expansion. At the present time all
   chunk names must have uppercase third letters.

Fourth Byte: 0 (uppercase) = unsafe to copy, 1 (lowercase) = safe to copy
   This property bit is not of interest to pure decoders, but it is
   needed by PNG editors (programs that modify a PNG file).

   If a chunk's safe-to-copy bit is 1, the chunk may be copied to a
   modified PNG file whether or not the software recognizes the
   chunk type, and regardless of the extent of the file modifications.

   If a chunk's safe-to-copy bit is 0, it indicates that the chunk
   depends on the image data. If the program has made any
   changes to critical chunks, including addition, modification,
   deletion, or reordering of critical chunks, then unrecognized
   unsafe chunks must not be copied to the output PNG file. (Of
   course, if the program does recognize the chunk, it may choose
   to output an appropriately modified version.)

   A PNG editor is always allowed to copy all unrecognized chunks
   if it has only added, deleted, or modified ancillary chunks. This
   implies that it is not permissible to make ancillary chunks that
   depend on other ancillary chunks.

   PNG editors that do not recognize a critical chunk must report
   an error and refuse to process that PNG file at all. The
   safe/unsafe mechanism is intended for use with ancillary chunks.
   The safe-to-copy bit will always be 0 for critical chunks.

For example, the hypothetical chunk type name "bLOb" has the
property bits:

	bLOb  <-- 32-bit chunk type code represented in ASCII form
	||||
	|||'- Safe to copy bit is 1 (lower case letter; bit 5 of byte is 1)
	||'-- Reserved bit is 0     (upper case letter; bit 5 of byte is 0)
	|'--- Private bit is 0      (upper case letter; bit 5 of byte is 0)
	'---- Ancillary bit is 1    (lower case letter; bit 5 of byte is 1)

Therefore, this name represents an ancillary, public, safe-to-copy
chunk.

See Rationale: Chunk naming conventions.
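
In code, the property bits can be tested numerically; a minimal C sketch,
assuming the type code is held as four bytes t[0]..t[3]:

    #define IS_ANCILLARY(t)     (((t)[0] & 0x20) != 0)   /* bit 5 of first byte  */
    #define IS_PRIVATE(t)       (((t)[1] & 0x20) != 0)   /* bit 5 of second byte */
    #define IS_RESERVED_BIT(t)  (((t)[2] & 0x20) != 0)   /* must be 0 at present */
    #define IS_SAFE_TO_COPY(t)  (((t)[3] & 0x20) != 0)   /* bit 5 of fourth byte */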

CRC algorithm
=============

Chunk CRCs are calculated using standard CRC methods with pre- and
post-conditioning. The CRC polynomial employed is as follows:

x^32+x^26+x^23+x^22+x^16+x^12+x^11+x^10+x^8+x^7+x^5+x^4+x^2+x+1

The 32-bit CRC register is initialized to all 1's, and then the data from
each byte is processed from the least significant bit (1) to the most
significant bit (128). After all the data bytes are processed, the CRC
register is inverted (its ones complement is taken). This value is
transmitted (stored in the file) MSB first. For the purpose of separating
into bytes and ordering, the least significant bit of the 32-bit CRC is
defined to be the coefficient of the x^31 term.

Practical calculation of the CRC always employs a precalculated table
to greatly accelerate the computation. See Appendix: Sample CRC
Code.
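
The following C sketch follows the approach of the specification's sample
code: initialize the register to all 1's, process each byte through a
256-entry table built from the reflected polynomial (0xEDB88320), and
complement the result:

    static unsigned long crc_table[256];
    static int crc_table_computed = 0;

    /* Build the table of CRCs of all 8-bit messages. */
    static void make_crc_table(void)
    {
        unsigned long c;
        int n, k;
        for (n = 0; n < 256; n++) {
            c = (unsigned long) n;
            for (k = 0; k < 8; k++)
                c = (c & 1) ? (0xedb88320L ^ (c >> 1)) : (c >> 1);
            crc_table[n] = c;
        }
        crc_table_computed = 1;
    }

    /* Update a running CRC with the bytes buf[0..len-1]. */
    static unsigned long update_crc(unsigned long crc,
                                    const unsigned char *buf, int len)
    {
        unsigned long c = crc;
        int n;
        if (!crc_table_computed)
            make_crc_table();
        for (n = 0; n < len; n++)
            c = crc_table[(c ^ buf[n]) & 0xff] ^ (c >> 8);
        return c;
    }

    /* CRC of a chunk: call this over the type code and data bytes. */
    unsigned long crc(const unsigned char *buf, int len)
    {
        return update_crc(0xffffffffL, buf, len) ^ 0xffffffffL;
    }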

4. Chunk Specifications
=======================

This chapter defines the standard types of PNG chunks.

Critical Chunks
===============

All implementations must understand and successfully render the
standard critical chunks. A valid PNG image must contain an IHDR
chunk, one or more IDAT chunks, and an IEND chunk.

IHDR Image Header
   This chunk must appear FIRST. Its contents are:

   Width:            4 bytes
   Height:           4 bytes
   Bit depth:        1 byte
   Color type:       1 byte
   Compression type: 1 byte
   Filter type:      1 byte
   Interlace type:   1 byte

   Width and height give the image dimensions in pixels. They are
   4-byte integers. Zero is an invalid value. The maximum for each
   is (2^31)-1 in order to accommodate languages which have
   difficulty with unsigned 4-byte values.

   Bit depth is a single-byte integer giving the number of bits per
   pixel (for palette images) or per sample (for grayscale and
   truecolor images). Valid values are 1, 2, 4, 8, and 16, although
   not all values are allowed for all color types.

   Color type is a single-byte integer that describes the
   interpretation of the image data. Color type values represent
   sums of the following values: 1 (palette used), 2 (color used), and
   4 (full alpha used). Valid values are 0, 2, 3, 4, and 6.

   Bit depth restrictions for each color type are imposed both to
   simplify implementations and to prohibit certain combinations
   that do not compress well in practice. Decoders must support all
   legal combinations of bit depth and color type. (Note that bit
   depths of 16 are easily supported on 8-bit display hardware by
   dropping the least significant byte.) The allowed combinations
   are:

   Color    Allowed    Interpretation
   Type    Bit Depths

   0       1,2,4,8,16  Each pixel value is a grayscale level.

   2       8,16        Each pixel value is an R,G,B triple.

   3       1,2,4,8     Each pixel value is a palette index;
					   a PLTE chunk must appear.

   4       8,16        Each pixel value is a grayscale level,
					   followed by an alpha channel level.

   6       8,16        Each pixel value is an R,G,B triple,
					   followed by an alpha channel level.

   Compression type is a single-byte integer that indicates the
   method used to compress the image data. At present, only
   compression type 0 (deflate/inflate compression with a 32K
   sliding window) is defined. All standard PNG images must be
   compressed with this scheme. The compression type code is
   provided for possible future expansion or proprietary variants.
   Decoders must check this byte and report an error if it holds an
   unrecognized code. See Deflate/Inflate Compression for details.

   Filter type is a single-byte integer that indicates the
   preprocessing method applied to the image data before
   compression. At present, only filter type 0 (adaptive filtering
   with five basic filter types) is defined. As with the compression
   type code, decoders must check this byte and report an error if it
   holds an unrecognized code. See Filter Algorithms for details.

   Interlace type is a single-byte integer that indicates the
   transmission order of the pixel data. Two values are currently
   defined: 0 (no interlace) or 1 (Adam7 interlace). See Interlaced
   data order for details.
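
   A minimal C sketch (illustrative only) of holding the IHDR fields and
   checking the bit depth / color type combinations listed above:

      struct ihdr {
          unsigned long width, height;          /* big-endian in the file */
          unsigned char bit_depth, color_type;
          unsigned char compression, filter, interlace;
      };

      int ihdr_valid(const struct ihdr *h)
      {
          int d = h->bit_depth;

          if (h->width == 0 || h->height == 0 ||
              h->width > 0x7fffffffUL || h->height > 0x7fffffffUL)
              return 0;
          if (h->compression != 0 || h->filter != 0 || h->interlace > 1)
              return 0;

          switch (h->color_type) {
          case 0:  return d==1 || d==2 || d==4 || d==8 || d==16; /* grayscale  */
          case 2:  return d==8 || d==16;                         /* truecolor  */
          case 3:  return d==1 || d==2 || d==4 || d==8;          /* palette    */
          case 4:  return d==8 || d==16;                         /* gray+alpha */
          case 6:  return d==8 || d==16;                         /* RGB+alpha  */
          default: return 0;
          }
      }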

PLTE Palette
   This chunk's contents are from 1 to 256 palette entries, each a
   three-byte series of the form:

   red:   1 byte (0 = black, 255 = red)
   green: 1 byte (0 = black, 255 = green)
   blue:  1 byte (0 = black, 255 = blue)

   The number of entries is determined from the chunk length. A
   chunk length not divisible by 3 is an error.

   This chunk must appear for color type 3, and may appear for
   color types 2 and 6. If this chunk does appear, it must precede the
   first IDAT chunk. There cannot be more than one PLTE chunk.

   For color type 3 (palette data), the PLTE chunk is required. The
   first entry in PLTE is referenced by pixel value 0, the second by
   pixel value 1, etc. The number of palette entries must not exceed
   the range that can be represented by the bit depth (for example,
   2^4 = 16 for a bit depth of 4). It is permissible to have fewer
   entries than the bit depth would allow. In that case, any
   out-of-range pixel value found in the image data is an error.

   For color types 2 and 6 (truecolor), the PLTE chunk is optional.
   If present, it provides a recommended set of from 1 to 256 colors
   to which the truecolor image may be quantized if the viewer
   cannot display truecolor directly. If PLTE is not present, such a
   viewer must select colors on its own, but it is often preferable for
   this to be done once by the encoder.

   Note that the palette uses 8 bits (1 byte) per value regardless of
   the image bit depth specification. In particular, the palette is 8
   bits deep even when it is a suggested quantization of a 16-bit
   truecolor image.

IDAT Image Data
   This chunk contains the actual image data. To create this data,
   begin with image scanlines represented as described under Image
   layout; the layout and total size of this raw data are determinable
   from the IHDR fields. Then filter the image data according to
   the filtering method specified by the IHDR chunk. (Note that
   with filter method 0, the only one currently defined, this implies
   prepending a filter type byte to each scanline.) Finally, compress
   the filtered data using the compression method specified by the
   IHDR chunk. The IDAT chunk contains the output datastream
   of the compression algorithm. To read the image data, reverse
   this process.

   There may be multiple IDAT chunks; if so, they must appear
   consecutively with no other intervening chunks. The compressed
   datastream is then the concatenation of the contents of all the
   IDAT chunks. The encoder may divide the compressed data
   stream into IDAT chunks as it wishes. (Multiple IDAT chunks
   are allowed so that encoders can work in a fixed amount of
   memory; typically the chunk size will correspond to the encoder's
   buffer size.) It is important to emphasize that IDAT chunk
   boundaries have no semantic significance and can appear at any
   point in the compressed datastream. A PNG file in which each
   IDAT chunk contains only one data byte is legal, though
   remarkably wasteful of space. (For that matter, zero-length
   IDAT chunks are legal, though even more wasteful.)

   See Filter Algorithms and Deflate/Inflate Compression for
   details.
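
   A conceptual C sketch of the decoding direction, assuming the zlib
   library and assuming all IDAT contents have already been concatenated
   into one buffer (a streaming decoder would use inflateInit()/inflate()
   rather than buffering everything):

      #include <zlib.h>

      /* 'raw_len' is the size of the filtered image data, computable
         from the IHDR fields; returns 0 on success. */
      int decode_idat(const unsigned char *idat, unsigned long idat_len,
                      unsigned char *raw, unsigned long raw_len)
      {
          uLongf out_len = raw_len;
          if (uncompress(raw, &out_len, idat, idat_len) != Z_OK)
              return -1;
          return 0;   /* 'raw' now holds filter bytes + filtered scanlines */
      }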

IEND Image Trailer
   This chunk must appear LAST. It marks the end of the PNG
   data stream. The chunk's data field is empty.

Ancillary Chunks
================

All ancillary chunks are optional, in the sense that encoders need not
write them and decoders may ignore them. However, encoders are
encouraged to write the standard ancillary chunks when the
information is available, and decoders are encouraged to interpret these
chunks when appropriate and feasible.

The standard ancillary chunks are listed in alphabetical order. This is
not necessarily the order in which they would appear in a file.

bKGD Background Color
   This chunk specifies a default background color against which
   the image may be presented. Note that viewers are not bound to
   honor this chunk; a viewer may choose to use a different
   background color.

   For color type 3 (palette), the bKGD chunk contains:

   palette index: 1 byte

   The value is the palette index of the color to be used as
   background.

   For color types 0 and 4 (grayscale, with or without alpha), bKGD
   contains:

   gray:  2 bytes, range 0 .. (2^bitdepth) - 1

   (For consistency, 2 bytes are used regardless of the image bit
   depth.) The value is the gray level to be used as background.

   For color types 2 and 6 (RGB, with or without alpha), bKGD
   contains:

   red:   2 bytes, range 0 .. (2^bitdepth) - 1
   green: 2 bytes, range 0 .. (2^bitdepth) - 1
   blue:  2 bytes, range 0 .. (2^bitdepth) - 1

   (For consistency, 2 bytes per sample are used regardless of the
   image bit depth.) This is the RGB color to be used as background.

   When present, the bKGD chunk must precede the first IDAT
   chunk, and must follow the PLTE chunk, if any.

   See Recommendations for Decoders: Background color.

cHRM Primary Chromaticities and White Point
   Applications that need precise specification of colors in a PNG
   file may use this chunk to specify the chromaticities of the red,
   green, and blue primaries used in the image, and the referenced
   white point. These values are based on the 1931 CIE (Commission
   Internationale de l'Éclairage) XYZ color space. Only the
   chromaticities (x and y) are specified. The chunk layout is:

   White Point x: 4 bytes
   White Point y: 4 bytes
   Red x:         4 bytes
   Red y:         4 bytes
   Green x:       4 bytes
   Green y:       4 bytes
   Blue x:        4 bytes
   Blue y:        4 bytes

   Each value is encoded as a 4-byte unsigned integer, representing
   the x or y value times 100000.

   If the cHRM chunk appears, it must precede the first IDAT
   chunk, and it must also precede the PLTE chunk if present.

gAMA Gamma Correction
   The gamma correction chunk specifies the gamma of the
   camera (or simulated camera) that produced the image, and
   thus the gamma of the image with respect to the original scene.
   Note that this is not the same as the gamma of the display device
   that will reproduce the image correctly.

   The chunk's contents are:

   Image gamma value: 4 bytes

   A value of 100000 represents a gamma of 1.0, a value of 45000 a
   gamma of 0.45, and so on (divide by 100000.0). Values around
   1.0 and around 0.45 are common in practice.

   If the encoder does not know the gamma value, it should not
   write a gamma chunk; the absence of a gamma chunk indicates
   the gamma is unknown.

   If the gAMA chunk appears, it must precede the first IDAT
   chunk, and it must also precede the PLTE chunk if present.

   See Gamma correction, Recommendations for Encoders:
   Encoder gamma handling, and Recommendations for Decoders:
   Decoder gamma handling.
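
   As a sketch, a decoder might convert the stored integer and build an
   8-bit lookup table for display. A display exponent of about 2.2 is a
   common assumption for CRT monitors, not something mandated by PNG:

      #include <math.h>

      void build_gamma_table(unsigned long gama, unsigned char table[256])
      {
          double file_gamma       = gama / 100000.0;   /* e.g. 45000 -> 0.45 */
          double display_exponent = 2.2;               /* assumed display    */
          double decode_exponent  = 1.0 / (file_gamma * display_exponent);
          int i;
          for (i = 0; i < 256; i++)
              table[i] = (unsigned char)
                  (pow(i / 255.0, decode_exponent) * 255.0 + 0.5);
      }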

hIST Image Histogram
   The histogram chunk gives the approximate usage frequency of
   each color in the color palette. A histogram chunk may appear
   only when a palette chunk appears. If a viewer is unable to
   provide all the colors listed in the palette, the histogram may
   help it decide how to choose a subset of the colors for display.

   This chunk's contents are a series of 2-byte (16 bit) unsigned
   integers. There must be exactly one entry for each entry in the
   PLTE chunk. Each entry is proportional to the fraction of pixels
   in the image that have that palette index; the exact scale factor
   is chosen by the encoder.

   Histogram entries are approximate, with the exception that a
   zero entry specifies that the corresponding palette entry is not
   used at all in the image. It is required that a histogram entry be
   nonzero if there are any pixels of that color.

   When the palette is a suggested quantization of a truecolor
   image, the histogram is necessarily approximate, since a decoder
   may map pixels to palette entries differently than the encoder
   did. In this situation, zero entries should not appear.

   The hIST chunk, if it appears, must follow the PLTE chunk, and
   must precede the first IDAT chunk.

   See Rationale: Palette histograms, and Recommendations for
   Decoders: Palette histogram usage.

pHYs Physical Pixel Dimensions
   This chunk specifies the intended resolution for display of the
   image. The chunk's contents are:

   4 bytes: pixels per unit, X axis (unsigned integer)
   4 bytes: pixels per unit, Y axis (unsigned integer)
   1 byte: unit specifier

   The following values are legal for the unit specifier:

   0: unit is unknown (pHYs defines pixel aspect ratio only)
   1: unit is the meter

   Conversion note: one inch is equal to exactly 0.0254 meters.

   If this ancillary chunk is not present, pixels are assumed to be
   square, and the physical size of each pixel is unknown.

   If present, this chunk must precede the first IDAT chunk.

   See Recommendations for Decoders: Pixel dimensions.
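
   Two small conversions that follow directly from the definitions above
   (illustrative C only):

      /* One inch is exactly 0.0254 m, so pixels/metre -> dots/inch. */
      double ppm_to_dpi(unsigned long pixels_per_unit)
      {
          return pixels_per_unit * 0.0254;
      }

      /* Pixel aspect ratio (width/height of one pixel); meaningful even
         when the unit specifier is 0. */
      double pixel_aspect_ratio(unsigned long x_ppu, unsigned long y_ppu)
      {
          return (double) y_ppu / (double) x_ppu;
      }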

sBIT Significant Bits
   To simplify decoders, PNG specifies that only certain bit depth
   values be used, and further specifies that pixel values must be
   scaled to the full range of possible values at that bit depth.
   However, the sBIT chunk is provided in order to store the
   original number of significant bits, since this information may be
   of use to some decoders. We recommend that an encoder emit an
   sBIT chunk if it has converted the data from a different bit
   depth.

   For color type 0 (grayscale), the sBIT chunk contains a single
   byte, indicating the number of bits which were significant in the
   source data.

   For color type 2 (RGB truecolor), the sBIT chunk contains
   three bytes, indicating the number of bits which were significant
   in the source data for the red, green, and blue channels,
   respectively.

   For color type 3 (palette color), the sBIT chunk contains three
   bytes, indicating the number of bits which were significant in the
   source data for the red, green, and blue components of the
   palette entries, respectively.

   For color type 4 (grayscale with alpha channel), the sBIT chunk
   contains two bytes, indicating the number of bits which were
   significant in the source grayscale data and the source alpha
   channel data, respectively.

   For color type 6 (RGB truecolor with alpha channel), the sBIT
   chunk contains four bytes, indicating the number of bits which
   were significant in the source data for the red, green, blue and
   alpha channels, respectively.

   Note that sBIT does not have any implications for the
   interpretation of the stored image: the bit depth indicated by
   IHDR is the correct depth. sBIT is only an indication of the
   history of the image. However, an sBIT chunk showing a bit
   depth less than the IHDR bit depth does mean that not all
   possible color values occur in the image; this fact may be of use to
   some decoders.

   If the sBIT chunk appears, it must precede the first IDAT
   chunk, and it must also precede the PLTE chunk if present.

tEXt Textual Data
   Any textual information that the encoder wishes to record with
   the image is stored in tEXt chunks. Each tEXt chunk contains
   a keyword and a text string, in the format:

   Keyword:        n bytes (character string)
   Null separator: 1 byte
   Text:           n bytes (character string)

   The keyword and text string are separated by a zero byte (null
   character). Neither the keyword nor the text string may contain
   a null character. Note that the text string is not null-terminated
   (the length of the chunk is sufficient information to locate the
   ending). The keyword must be at least one character and less
   than 80 characters long. The text string may be of any length
   from zero bytes up to the maximum permissible chunk size.

   Any number of tEXt chunks may appear, and more than one
   with the same keyword is permissible.

   The keyword indicates the type of information represented by
   the text string. The following keywords are predefined and
   should be used where appropriate:

   Title            Short (one line) title or caption for image
   Author           Name of image's creator
   Copyright        Copyright notice
   Description      Description of image (possibly long)
   Software         Software used to create the image
   Disclaimer       Legal disclaimer
   Warning          Warning of nature of content
   Source           Device used to create the image
   Comment          Miscellaneous comment; conversion from GIF comment

   Other keywords, containing any sequence of printable characters
   in the character set, may be invented for other purposes.
   Keywords of general interest may be registered with the
   maintainers of the PNG specification.

   Keywords must be spelled exactly as registered, so that decoders
   may use simple literal comparisons when looking for particular
   keywords. In particular, keywords are considered case-sensitive.

   Both keyword and text are interpreted according to the ISO
   8859-1 (Latin-1) character set. Newlines in the text string
   should be represented by a single linefeed character (decimal
   10); use of other ASCII control characters is discouraged.

   See Recommendations for Encoders: Text chunk processing and
   Recommendations for Decoders: Text chunk processing.
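
   A minimal C sketch of splitting a tEXt chunk's data field into its
   keyword and text (the names are illustrative):

      #include <string.h>

      int parse_text(const unsigned char *data, unsigned long len,
                     char keyword[80],
                     const unsigned char **text, unsigned long *text_len)
      {
          const unsigned char *sep = memchr(data, 0, len);
          unsigned long klen;

          if (sep == NULL)
              return -1;                    /* no null separator present   */
          klen = (unsigned long)(sep - data);
          if (klen < 1 || klen > 79)
              return -1;                    /* keyword length out of range */
          memcpy(keyword, data, klen);
          keyword[klen] = '\0';
          *text = sep + 1;                  /* text is not null-terminated */
          *text_len = len - klen - 1;
          return 0;
      }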

tIME Image Last-Modification Time
   This chunk gives the time of the last image modification (not the
   time of initial image creation). The chunk contents are:

   2 bytes: Year (complete; for example, 1995, not 95)
   1 byte: Month (1-12)
   1 byte: Day (1-31)
   1 byte: Hour (0-23)
   1 byte: Minute (0-59)
   1 byte: Second (0-60)    (yes, 60, for leap seconds; not 61, a common error)

   Universal Time (UTC, also called GMT) should be specified
   rather than local time.

tRNS Transparency
   Transparency is an alternative to the full alpha channel.
   Although transparency is not as elegant as the full alpha
   channel, it requires less storage space and is sufficient for many
   common cases.

   For color type 3 (palette), this chunk's contents are a series of
   alpha channel bytes, corresponding to entries in the PLTE
   chunk:

   Alpha for palette index 0:  1 byte
   Alpha for palette index 1:  1 byte
   etc.

   Each entry indicates that pixels of that palette index should be
   treated as having the specified alpha value. Alpha values have
   the same interpretation as in an 8-bit full alpha channel: 0 is
   fully transparent, 255 is fully opaque, regardless of image bit
   depth. The tRNS chunk may contain fewer alpha channel bytes
   than there are palette entries. In this case, the alpha channel
   value for all remaining palette entries is assumed to be 255. In
   the common case where only palette index 0 need be made
   transparent, only a one-byte tRNS chunk is needed. The tRNS
   chunk may not contain more bytes than there are palette entries.

   For color type 0 (grayscale), the tRNS chunk contains a single
   gray level value, stored in the format

   gray:  2 bytes, range 0 .. (2^bitdepth) - 1

   (For consistency, 2 bytes are used regardless of the image bit
   depth.) Pixels of the specified gray level are to be treated as
   transparent (equivalent to alpha value 0); all other pixels are to
   be treated as fully opaque (alpha value (2^bitdepth)-1).

   For color type 2 (RGB), the tRNS chunk contains a single RGB
   color value, stored in the format

   red:   2 bytes, range 0 .. (2^bitdepth) - 1
   green: 2 bytes, range 0 .. (2^bitdepth) - 1
   blue:  2 bytes, range 0 .. (2^bitdepth) - 1

   (For consistency, 2 bytes per sample are used regardless of the
   image bit depth.) Pixels of the specified color value are to be
   treated as transparent (equivalent to alpha value 0); all other
   pixels are to be treated as fully opaque (alpha value
   (2^bitdepth)-1).

   tRNS is prohibited for color types 4 and 6, since a full alpha
   channel is already present in those cases.

   Note: when dealing with 16-bit grayscale or RGB data, it is
   important to compare both bytes of the sample values to
   determine whether a pixel is transparent. Although decoders
   may drop the low-order byte of the samples for display, this must
   not occur until after the data has been tested for transparency.
   For example, if the grayscale level 0x0001 is specified to be
   transparent, it would be incorrect to compare only the
   high-order byte and decide that 0x0002 is also transparent.
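
   Illustrating that note, a small sketch of the comparison for 16-bit
   grayscale data: test the full sample against the tRNS value before any
   reduction for display.

      /* 'sample' points at 2 bytes, most significant byte first. */
      int gray16_is_transparent(const unsigned char *sample,
                                unsigned int trns_gray)
      {
          unsigned int v = ((unsigned int) sample[0] << 8) | sample[1];
          return v == trns_gray;
      }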

   When present, the tRNS chunk must precede the first IDAT
   chunk, and must follow the PLTE chunk, if any.

zTXt Compressed Textual Data
   A zTXt chunk contains textual data, just as tEXt does;
   however, zTXt takes advantage of compression.

   A zTXt chunk begins with an uncompressed Latin-1 keyword
   followed by a null (0) character, just as in the tEXt chunk. The
   next byte after the null contains a compression type byte, for
   which the only presently legitimate value is zero (deflate/inflate
   compression). The compression-type byte is followed by a
   compressed data stream which makes up the remainder of the
   chunk. Decompression of this data stream yields Latin-1 text
   which is equivalent to the text stored in a tEXt chunk.

   Any number of zTXt and tEXt chunks may appear in the same
   file. See the preceding definition of the tEXt chunk for the
   predefined keywords and the exact format of the text.

   See Deflate/Inflate Compression, Recommendations for
   Encoders: Text chunk processing, and Recommendations for
   Decoders: Text chunk processing.

Summary of Standard Chunks
==========================

This table summarizes some properties of the standard chunk types.

Critical chunks (must appear in this order, except PLTE is optional):

		Name  Multiple  Ordering constraints
				OK?

		IHDR    No      Must be first
		PLTE    No      Before IDAT
		IDAT    Yes     Multiple IDATs must be consecutive
		IEND    No      Must be last

Ancillary chunks (need not appear in this order):

		Name  Multiple  Ordering constraints
				OK?

		cHRM    No      Before PLTE and IDAT
		gAMA    No      Before PLTE and IDAT
		sBIT    No      Before PLTE and IDAT
		bKGD    No      After PLTE; before IDAT
		hIST    No      After PLTE; before IDAT
		tRNS    No      After PLTE; before IDAT
		pHYs    No      Before IDAT
		tIME    No      None
		tEXt    Yes     None
		zTXt    Yes     None

Standard keywords for tEXt and zTXt chunks:

Title            Short (one line) title or caption for image
Author           Name of image's creator
Copyright        Copyright notice
Description      Description of image (possibly long)
Software         Software used to create the image
Disclaimer       Legal disclaimer
Warning          Warning of nature of content
Source           Device used to create the image
Comment          Miscellaneous comment; conversion from GIF comment

Additional Chunk Types
======================

Additional public PNG chunk types are defined in the document "PNG
Special-Purpose Public Chunks", available by FTP from
ftp.uu.net:/graphics/png/ or via WWW from
http://sunsite.unc.edu/boutell/pngextensions.html.

5. Deflate/Inflate Compression
==============================

PNG compression type 0 (the only compression method presently
defined for PNG) specifies deflate/inflate compression with a 32K
window. Deflate compression is an LZ77 derivative used in zip, gzip,
pkzip and related programs. Extensive research has been done
supporting its patent-free status. Portable C implementations are freely
available.

Documentation and C code for deflate are available from the Info-Zip
archives at ftp.uu.net:/pub/archiving/zip/.

Deflate-compressed datastreams within PNG are stored in the "zlib"
format, which has the structure:

Compression method/flags code: 1 byte
Additional flags/check bits:   1 byte
Compressed data blocks:        n bytes
Checksum:                      4 bytes

Further details on this format may be found in the zlib specification. At
this writing, the zlib specification is at draft 3.1, and is available from
ftp.uu.net:/pub/archiving/zip/doc/zlib-3.1.doc.

For PNG compression type 0, the zlib compression method/flags code
must specify method code 8 ("deflate" compression) and an LZ77
window size of not more than 32K.
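
As a sketch, a decoder can check these constraints from the first two bytes
of the zlib datastream (the compression method/flags byte and the additional
flags/check byte):

    int zlib_header_ok(unsigned char cmf, unsigned char flg)
    {
        if ((cmf & 0x0f) != 8)             /* method must be 8 (deflate)          */
            return 0;
        if (((cmf >> 4) & 0x0f) > 7)       /* window must be <= 32K, i.e. 2^(7+8) */
            return 0;
        if ((cmf * 256 + flg) % 31 != 0)   /* check bits: multiple of 31          */
            return 0;
        return 1;
    }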

The checksum stored at the end of the zlib datastream is calculated on
the uncompressed data represented by the datastream. Note that the
algorithm used is not the same as the CRC calculation used for PNG
chunk checksums. Verifying the chunk CRCs provides adequate
confidence that the PNG file has been transmitted undamaged. The zlib
checksum is useful mainly as a crosscheck that the deflate and inflate
algorithms are implemented correctly.

In a PNG file, the concatenation of the contents of all the IDAT chunks
makes up a zlib datastream as specified above. This datastream
decompresses to filtered image data as described elsewhere in this
document.

It is important to emphasize that the boundaries between IDAT chunks
are arbitrary and may fall anywhere in the zlib datastream. There is not
necessarily any correlation between IDAT chunk boundaries and deflate
block boundaries or any other feature of the zlib data. For example, it is
entirely possible for the terminating zlib checksum to be split across
IDAT chunks.

PNG also uses zlib datastreams in zTXt chunks. In a zTXt chunk, the
remainder of the chunk following the compression type code byte is a
zlib datastream as specified above. This datastream decompresses to the
user-readable text described by the chunk's keyword. Unlike the image
data, such datastreams are not split across chunks; each zTXt chunk
contains an independent zlib datastream.

6. Filter Algorithms
====================

This chapter describes the pixel filtering algorithms which may be
applied in advance of compression. The purpose of these filters is to
prepare the image data for optimum compression.

PNG defines five basic filtering algorithms, which are given numeric
codes as follows:

Code    Name
0       None
1       Sub
2       Up
3       Average
4       Paeth

The encoder may choose which algorithm to apply on a
scanline-by-scanline basis. In the image data sent to the compression
step, each scanline is preceded by a filter type byte containing the
numeric code of the filter algorithm used for that scanline.

Filtering algorithms are applied to bytes, not to pixels, regardless of the
bit depth or color type of the image. The filtering algorithms work on
the byte sequence formed by a scanline that has been represented as
described under Image layout.

When the image is interlaced, each pass of the interlace pattern is
treated as an independent image for filtering purposes. The filters work
on the byte sequences formed by the pixels actually transmitted during a
pass, and the "previous scanline" is the one previously transmitted in the
same pass, not the one adjacent in the complete image. Note that the
subimage transmitted in any one pass is always rectangular, but is of
smaller width and/or height than the complete image. Filtering is not
applied when this subimage is empty.

For all filters, the bytes "to the left of" the first pixel in a scanline must
be treated as being zero. For filters that refer to the prior scanline, the
entire prior scanline must be treated as being zeroes for the first scanline
of an image (or of a pass of an interlaced image).

To reverse the effect of a filter, the decoder must use the decoded values
of the prior pixel on the same line, the pixel immediately above the
current pixel on the prior line, and the pixel just to the left of the pixel
above. This implies that at least one scanline's worth of image data must
be stored by the decoder at all times. Even though some filter types do
not refer to the prior scanline, the decoder must always store each
scanline as it is decoded, since the next scanline might use a filter that
refers to it.

PNG imposes no restriction on which filter types may be applied to an
image. However, the filters are not equally effective on all types of data.
See Recommendations for Encoders: Filter selection.

Filter type 0: None
===================

With the None filter, the scanline is transmitted unmodified; it is only
necessary to insert a filter type byte before the data.

Filter type 1: Sub
==================

The Sub filter transmits the difference between each byte and the value
of the corresponding byte of the prior pixel.

To compute the Sub filter, apply the following formula to each byte of
each scanline:

  Sub(x) = Raw(x) - Raw(x-bpp)

where x ranges from zero to the number of bytes representing that
scanline minus one, Raw(x) refers to the raw data byte at that byte
position in the scanline, and bpp is defined as the number of bytes per
complete pixel, rounding up to one. For example, for color type 2 with a
bit depth of 16, bpp is equal to 6 (three channels, two bytes per channel);
for color type 0 with a bit depth of 2, bpp is equal to 1 (rounding up); for
color type 4 with a bit depth of 16, bpp is equal to 4 (two-byte grayscale
value, plus two-byte alpha channel).
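
That definition of bpp translates into a small C sketch (the same value is
used by the Average and Paeth filters below):

    int channels(int color_type)
    {
        switch (color_type) {
        case 0:  return 1;   /* grayscale         */
        case 2:  return 3;   /* RGB               */
        case 3:  return 1;   /* palette index     */
        case 4:  return 2;   /* grayscale + alpha */
        case 6:  return 4;   /* RGB + alpha       */
        default: return 0;
        }
    }

    int bytes_per_pixel(int color_type, int bit_depth)
    {
        int bits = channels(color_type) * bit_depth;
        return bits < 8 ? 1 : bits / 8;   /* round up to at least one byte */
    }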

Note this computation is done for each byte, regardless of bit depth. In a
16-bit image, MSBs are differenced from the preceding MSB and LSBs
are differenced from the preceding LSB, because of the way that bpp is
defined.

Unsigned arithmetic modulo 256 is used, so that both the inputs and
outputs fit into bytes. The sequence of Sub values is transmitted as the
filtered scanline.

For all x < 0, assume Raw(x) = 0.

To reverse the effect of the Sub filter after decompression, output the
following value:

  Sub(x) + Raw(x-bpp)

(computed mod 256), where Raw refers to the bytes already decoded.

Filter type 2: Up
=================

The Up filter is just like the Sub filter except that the pixel immediately
above the current pixel, rather than just to its left, is used as the
predictor.

To compute the Up filter, apply the following formula to each byte of
each scanline:

  Up(x) = Raw(x) - Prior(x)

where x ranges from zero to the number of bytes representing that
scanline minus one, Raw(x) refers to the raw data byte at that byte
position in the scanline, and Prior(x) refers to the unfiltered bytes of
the prior scanline.

Note this is done for each byte, regardless of bit depth. Unsigned
arithmetic modulo 256 is used, so that both the inputs and outputs fit
into bytes. The sequence of Up values is transmitted as the filtered
scanline.

On the first scanline of an image (or of a pass of an interlaced image),
assume Prior(x) = 0 for all x.

To reverse the effect of the Up filter after decompression, output the
following value:

  Up(x) + Prior(x)

(computed mod 256), where Prior refers to the decoded bytes of the
prior scanline.

Filter type 3: Average
======================

The Average filter uses the average of the two neighboring pixels (left
and above) to predict the value of a pixel.

To compute the Average filter, apply the following formula to each byte
of each scanline:

  Average(x) = Raw(x) - floor((Raw(x-bpp)+Prior(x))/2)

where x ranges from zero to the number of bytes representing that
scanline minus one, Raw(x) refers to the raw data byte at that byte
position in the scanline, Prior(x) refers to the unfiltered bytes of the
prior scanline, and bpp is defined as for the Sub filter.

Note this is done for each byte, regardless of bit depth. The sequence of
Average values is transmitted as the filtered scanline.

The subtraction of the predicted value from the raw byte must be done
modulo 256, so that both the inputs and outputs fit into bytes. However,
the sum Raw(x-bpp)+Prior(x) must be formed without overflow
(using at least nine-bit arithmetic). floor() indicates that the result
of the division is rounded to the next lower integer if fractional; in other
words, it is an integer division or right shift operation.

For all x < 0, assume Raw(x) = 0. On the first scanline of an image (or of
a pass of an interlaced image), assume Prior(x) = 0 for all x.

To reverse the effect of the Average filter after decompression, output
the following value:

  Average(x) + floor((Raw(x-bpp)+Prior(x))/2)

where the result is computed mod 256, but the prediction is calculated in
the same way as for encoding. Raw refers to the bytes already decoded,
and Prior refers to the decoded bytes of the prior scanline.
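
That reconstruction can be sketched in C for one scanline ('prior' is all
zeroes on the first scanline of an image or pass; the names are
illustrative):

    void unfilter_average(unsigned char *raw, const unsigned char *filtered,
                          const unsigned char *prior, int len, int bpp)
    {
        int x;
        for (x = 0; x < len; x++) {
            int left = (x >= bpp) ? raw[x - bpp] : 0;
            int pred = (left + prior[x]) / 2;    /* formed without overflow */
            raw[x]   = (unsigned char) ((filtered[x] + pred) & 0xff);
        }
    }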

Filter type 4: Paeth
====================

The Paeth filter computes a simple linear function of the three
neighboring pixels (left, above, upper left), then chooses as predictor the
neighboring pixel closest to the computed value. This technique is taken
from Alan W. Paeth's article "Image File Compression Made Easy" in
Graphics Gems II, James Arvo, editor, Academic Press, 1991.

To compute the Paeth filter, apply the following formula to each byte of
each scanline:

  Paeth(x) = Raw(x) - PaethPredictor(Raw(x-bpp),Prior(x),Prior(x-bpp))

where x ranges from zero to the number of bytes representing that
scanline minus one, Raw(x) refers to the raw data byte at that byte
position in the scanline, Prior(x) refers to the unfiltered bytes of the
prior scanline, and bpp is defined as for the Sub filter.

Note this is done for each byte, regardless of bit depth. Unsigned
arithmetic modulo 256 is used, so that both the inputs and outputs fit
into bytes. The sequence of Paeth values is transmitted as the filtered
scanline.

The PaethPredictor function is defined by the following pseudocode:

	 function PaethPredictor (a, b, c)
	 begin
		  ; a = left, b = above, c = upper left
		  p := a + b - c        ; initial estimate
		  pa := abs(p - a)      ; distances to a, b, c
		  pb := abs(p - b)
		  pc := abs(p - c)
		  ; return nearest of a,b,c,
		  ; breaking ties in order a,b,c.
		  if pa <= pb AND pa <= pc
		  begin
			   return a
		  end
		  if pb <= pc
		  begin
			   return b
		  end
		  return c
	 end

The calculations within the PaethPredictor function must be performed
exactly, without overflow. Arithmetic modulo 256 is to be used only for
the final step of subtracting the function result from the target pixel
value.

Note that the order in which ties are broken is fixed and must not be
altered. The tie break order is: pixel to the left, pixel above, pixel to the
upper left. (This order differs from that given in Paeth's article.)

For all x < 0, assume Raw(x) = 0 and Prior(x) = 0. On the first scanline
of an image (or of a pass of an interlaced image), assume Prior(x) = 0
for all x.

To reverse the effect of the Paeth filter after decompression, output the
following value:

  Paeth(x) + PaethPredictor(Raw(x-bpp),Prior(x),Prior(x-bpp))

(computed mod 256), where Raw and Prior refer to bytes already
decoded. Exactly the same PaethPredictor function is used by both
encoder and decoder.
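
The predictor and the decoder-side reconstruction translate directly into
C; this sketch preserves the tie-breaking order exactly:

    #include <stdlib.h>

    static int paeth_predictor(int a, int b, int c)  /* left, above, upper left */
    {
        int p  = a + b - c;           /* initial estimate, no overflow in int */
        int pa = abs(p - a);
        int pb = abs(p - b);
        int pc = abs(p - c);
        if (pa <= pb && pa <= pc) return a;
        if (pb <= pc)             return b;
        return c;
    }

    void unfilter_paeth(unsigned char *raw, const unsigned char *filtered,
                        const unsigned char *prior, int len, int bpp)
    {
        int x;
        for (x = 0; x < len; x++) {
            int a = (x >= bpp) ? raw[x - bpp]   : 0;
            int b = prior[x];
            int c = (x >= bpp) ? prior[x - bpp] : 0;
            raw[x] = (unsigned char)
                ((filtered[x] + paeth_predictor(a, b, c)) & 0xff);
        }
    }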

For more information, see the FTP sites listed above.

EXTENSION:PNG
OCCURRENCES:PC,UNIX,AMIGA
PROGRAMS:????
REFERENCE:The PNG Specification

This information is from Corion.net and is used with permission.
