PNG Reference Library: libpng

Defending Libpng Applications Against Decompression Bombs

Glenn Randers-Pehrson <glennrp at users.sourceforge.net>
March 9, 2010

Abstract

Because of the efficient compression method used in Portable Network Graphics (PNG) files, a small PNG file can expand tremendously, acting as a "decompression bomb". Malformed PNG chunks can consume a large amount of CPU and wall-clock time and large amounts of memory, up to all memory available on a system, causing a Denial of Service (DoS). Libpng-1.4.1 has been revised to use less CPU time and memory, and provides functions that applications can use to further defend against such files.

Background

The Portable Network Graphics (PNG) format employs an efficient compression method to store image data and some related data that is carried in "ancillary chunks". The PNG specification does not include any limit on the number of such chunks, and limits their size to 2.147 gigabytes (2,147,483,647 bytes). Likewise, the specification only limits the width and height of an image to 2.147 billion rows and 2.147 billion columns.

Because the "deflate" compression method is extremely efficient in compressing datastreams that consist of nothing but a single byte repeated many times, it is possible to make a very small PNG file which occupies a large amount of memory when decompressed, forming a "decompression bomb" that uses up all of your memory. For example, a zTXt chunk with 50,000 lines, each containing 100 instances of the letter "Z", compresses to about 17 kbytes, but, when decoded, occupies 5 megabytes, which is about a 300:1 compression ratio.

Libpng versions prior to 1.4.1, 1.2.43, and 1.0.53 utilized an inefficient means of acquiring memory while expanding the compressed ancillary chunks zTXt, iTXt, and iCCP. An image was found in the wild that contained an accidentally malformed iCCP chunk that was about 50 kilobytes long but expanded to 60 megabytes. Because of the inefficient means of decompression, this would hang a browser for about 20 minutes or more. Deliberately malformed chunks could be much larger and hang the browser for a very long time, while consuming all available memory. Eventually libpng would discover that the chunk was malformed or would run out of memory, abandon the chunk and return the allocated memory, so this is only a nasty DoS vulnerability that probably cannot be used to compromise a system.

Defenses

(1) Upgrade. Libpng 1.4.0 should be upgraded to version 1.4.1, libpng-1.2.42 to 1.2.43, and libpng-1.0.52 to 1.0.53. These all use a new two-pass algorithm for ancillary chunk decompression that is about 1000-fold faster than the previous version when decoding a 60 Megabyte iCCP chunk.

(2) Impose limits. These new libpng versions do not impose any arbitrary limits on memory consumption or on the number of ancillary chunks, but they do allow applications to impose such limits via the png_set_chunk_malloc_max() and png_set_chunk_cache_max() functions, respectively.

Previous versions of libpng, since libpng-1.0.16 and 1.2.6, have had the png_set_user_limits() function to impose arbitrary limits on the image width and height, but it was disabled by default in libpng-1.0.x through 1.0.52. The default limits, if the application does not override them, are 1,000,000 by 1,000,000.

Persons building the current versions of libpng can redefine these macros to change the default or eliminate the arbitrary limits in the library:

  #define PNG_USER_WIDTH_MAX 1000000L  /* 0x7FFFFFFF means unlimited */
  #define PNG_USER_HEIGHT_MAX 1000000L /* 0x7FFFFFFF means unlimited */
  #define PNG_USER_CHUNK_MALLOC_MAX 0  /* 0 means unlimited */
  #define PNG_USER_CHUNK_CACHE_MAX 0   /* 0 means unlimited */

It is not a good idea to do this in a general-purpose system library, but if you are building an application with its own embedded copy of libpng, this is a simple, acceptable method.

Persons building applications with the current libpng versions can override these defaults with "png_set" calls, e.g.,

  png_set_user_limits(png_ptr, 8192, 8192);
  png_set_user_chunk_malloc_max(png_ptr, 4000000L);
  png_set_user_chunk_cache_max(png_ptr, 100);

(3) Don't decode unused chunks. Persons building any version of libpng can cause applications to ignore particular ancillary chunks by defining PNG_NO_* macros, e.g.,

  #define PNG_NO_READ_iCCP
  #define PNG_NO_READ_TEXT /* disables tEXt, zTXt, and iTXt */

Persons building current versions of libpng can cause their application to ignore particular ancillary chunks with the png_set_keep_unknown_chunks() function, e.g.,

    #if defined(PNG_HANDLE_AS_UNKNOWN_SUPPORTED)
     png_byte unused_chunks[]=
       { 98,  75,  71,  68, '\0',   /* bKGD */
         99,  72,  82,  77, '\0',   /* cHRM */
        104,  73,  83,  84, '\0',   /* hIST */
        105,  67,  67,  80, '\0',   /* iCCP */
        105,  84,  88, 116, '\0',   /* iTXt */
        111,  70,  70, 115, '\0',   /* oFFs */
        112,  67,  65,  76, '\0',   /* pCAL */
        115,  67,  65,  76, '\0',   /* sCAL */
        112,  72,  89, 115, '\0',   /* pHYs */
        115,  66,  73,  84, '\0',   /* sBIT */
        115,  80,  76,  84, '\0',   /* sPLT */
        116,  69,  88, 116, '\0',   /* tEXt */
        116,  73,  77,  69, '\0',   /* tIME */
        122,  84,  88, 116, '\0'};  /* zTXt */
    #endif
    ...
    #if defined(PNG_HANDLE_AS_UNKNOWN_SUPPORTED)
      /* Ignore unused chunks */
      png_set_keep_unknown_chunks(read_ptr, 1, unused_chunks,
          (int)sizeof(unused_chunks)/5);
    #endif

Even without any vulnerability to worry about, it is a good idea to do this to save the computational resources used in decoding chunks that the application will never use.

(4) Impose a memory limit via a replacement memory allocator. This section may seem a little scary, but if you can accomplish (1), (2), and (3) above, then you don't have to read it. Persons whose applications might be used with older versions of libpng (later than version 1.0.2), and cannot use the methods described in (2) above to limit the memory consumption, can use a replacement memory allocation function with a built-in limit of their choice:

#ifdef PNG_USER_MEM_SUPPORTED
    read_ptr = png_create_read_struct_2(PNG_LIBPNG_VER_STRING,
        NULL, NULL, NULL, NULL, malloc_fn, NULL);
#else
    read_ptr = png_create_read_struct(PNG_LIBPNG_VER_STRING,
                                  NULL, NULL, NULL);
#endif
...     
#ifdef PNG_USER_MEM_SUPPORTED
    /* Replacement libpng memory allocator that has a 4MB limit */
# if PNG_LIBPNG_VER < 10400
    png_voidp malloc_fn(png_structp png_ptr, png_size_t size) {
# else
    png_voidp malloc_fn(png_structp png_ptr, png_alloc_size_t size) {
# endif

      png_voidp ret;

      if (png_ptr == NULL || size == 0)
        return (png_voidp) (NULL);

#ifdef PNG_MAX_MALLOC_64K
      if (size > (png_uint_32)65536L)
         return NULL;
#endif
      if (size > (png_uint_32)4000000L)
         return NULL;
#if defined(__TURBOC__) && !defined(__FLAT__)
      if (size != (unsigned long)size)
         ret = NULL;
      else
         ret = farmalloc(size);
#else
#  if defined(_MSC_VER) && defined(MAXSEG_64K)
      if (size != (unsigned long)size)
         ret = NULL;
      else
         ret = halloc(size, 1);
#  else
      if (size != (size_t)size)
         ret = NULL;
      else
         ret = malloc((size_t)size);
#  endif
#endif  
      return (ret);
   }
#endif /* PNG_USER_MEM_SUPPORTED */

(5) Replace png_decompress_chunk() in libpng. If you feel you need to patch an old version of libpng instead of upgrading, replace the png_decompress_chunk() function in pngrutil.c with the new png_inflate() and png_decompress_chunk() functions:

static png_size_t
png_inflate(png_structp png_ptr, const png_byte *data, png_size_t size,
	png_bytep output, png_size_t output_size)
{
   png_size_t count = 0;

   png_ptr->zstream.next_in = (png_bytep)data; /* const_cast: VALID */
   png_ptr->zstream.avail_in = size;

   while (1)
   {
      int ret, avail;

      /* Reset the output buffer each time round - we empty it
       * after every inflate call.
       */
      png_ptr->zstream.next_out = png_ptr->zbuf;
      png_ptr->zstream.avail_out = png_ptr->zbuf_size;

      ret = inflate(&png_ptr->zstream, Z_NO_FLUSH);
      avail = png_ptr->zbuf_size - png_ptr->zstream.avail_out;

      /* First copy/count any new output - but only if we didn't
       * get an error code.
       */
      if ((ret == Z_OK || ret == Z_STREAM_END) && avail > 0)
      {
         if (output != 0 && output_size > count)
         {
            int copy = output_size - count;

            if (avail < copy)
               copy = avail;

            png_memcpy(output + count, png_ptr->zbuf, copy);
         }

         count += avail;
      }

      if (ret == Z_OK)
         continue;

      /* Termination conditions - always reset the zstream, it
       * must be left in inflateInit state.
       */
      png_ptr->zstream.avail_in = 0;
      inflateReset(&png_ptr->zstream);

      if (ret == Z_STREAM_END)
         return count; /* NOTE: may be zero. */

      /* Now handle the error codes - the API always returns 0
       * and the error message is dumped into the uncompressed
       * buffer if available.
       */
      {
         char *msg, umsg[52];

         if (png_ptr->zstream.msg != 0)
            msg = png_ptr->zstream.msg;

         else
         {
#if defined(PNG_STDIO_SUPPORTED) && !defined(_WIN32_WCE)
            switch (ret)
            {
               case Z_BUF_ERROR:
                  msg = "Buffer error in compressed datastream in %s chunk";
                  break;

               case Z_DATA_ERROR:
                  msg = "Data error in compressed datastream in %s chunk";
                  break;

               default:
                  msg = "Incomplete compressed datastream in %s chunk";
                  break;
            }

            png_snprintf(umsg, sizeof umsg, msg, png_ptr->chunk_name);
            msg = umsg;
#else
            msg = "Damaged compressed datastream in chunk other than IDAT";
#endif
         }

         png_warning(png_ptr, msg);
      }

      /* 0 means an error - notice that this code simply ignores
       * zero-length compressed chunks as a result.
       */
      return 0;
   }
}

/*
 * Decompress trailing data in a chunk.  The assumption is that chunkdata
 * points at an allocated area holding the contents of a chunk with a
 * trailing compressed part.  What we get back is an allocated area
 * holding the original prefix part and an uncompressed version of the
 * trailing part (the malloc area passed in is freed).
 */
void /* PRIVATE */
png_decompress_chunk(png_structp png_ptr, int comp_type,
    png_size_t chunklength,
    png_size_t prefix_size, png_size_t *newlength)
{
   /* The caller should guarantee this */
   if (prefix_size > chunklength)
   {
      /* The recovery is to delete the chunk. */
      png_warning(png_ptr, "invalid chunklength");
      prefix_size = 0; /* To delete everything */
   }

   else if (comp_type == PNG_COMPRESSION_TYPE_BASE)
   {
      png_size_t expanded_size = png_inflate(png_ptr,
          (png_bytep)(png_ptr->chunkdata + prefix_size),
          chunklength - prefix_size,
          0/*output*/, 0/*output size*/);

#ifdef PNG_USER_CHUNK_MALLOC_MAX
      /* Now check the limits on this chunk - if the limit fails the
       * compressed data will be removed, the prefix will remain.
       */
      if ((PNG_USER_CHUNK_MALLOC_MAX > 0) &&
          prefix_size + expanded_size >= PNG_USER_CHUNK_MALLOC_MAX - 1)
         png_warning(png_ptr, "Exceeded size limit while expanding chunk");
      /* If the size is zero either there was an error and a message
       * has already been output (warning) or the size really is zero
       * and we have nothing to do - the code will exit through the
       * error case below.
       */
      else
#endif
      if (expanded_size > 0)
      {
         /* Success (maybe) - really uncompress the chunk. */
         png_size_t new_size = 0;
         png_charp text = png_malloc_warn(png_ptr,
             prefix_size + expanded_size + 1);

         if (text != NULL)
         {
            png_memcpy(text, png_ptr->chunkdata, prefix_size);
            new_size = png_inflate(png_ptr,
                (png_bytep)(png_ptr->chunkdata + prefix_size),
                chunklength - prefix_size,
                (png_bytep)(text + prefix_size), expanded_size);
            text[prefix_size + expanded_size] = 0; /* Just in case */

            if (new_size == expanded_size)
            {
               png_free(png_ptr, png_ptr->chunkdata);
               png_ptr->chunkdata = text;
               *newlength = prefix_size + expanded_size;
               return; /* The success return! */
            }

            png_warning(png_ptr, "png_inflate logic error");
            png_free(png_ptr, text);
         }
         else
            png_warning(png_ptr, "Not enough memory to decompress chunk.");
      }
   }

   else /* if (comp_type != PNG_COMPRESSION_TYPE_BASE) */
   {
      char umsg[50];

#if defined(PNG_STDIO_SUPPORTED) && !defined(_WIN32_WCE)
      png_snprintf(umsg, sizeof umsg, "Unknown zTXt compression type %d",
          comp_type);
      png_warning(png_ptr, umsg);
#else
      png_warning(png_ptr, "Unknown zTXt compression type");
#endif

      /* The recovery is to simply drop the data. */
   }

   /* Generic error return - leave the prefix, delete the compressed
    * data, reallocate the chunkdata to remove the potentially large
    * amount of compressed data.
    */
   {
      png_charp text = png_malloc_warn(png_ptr, prefix_size + 1);

      if (text != NULL)
      {
         if (prefix_size > 0)
            png_memcpy(text, png_ptr->chunkdata, prefix_size);

         png_free(png_ptr, png_ptr->chunkdata);
         png_ptr->chunkdata = text;

         /* This is an extra zero in the 'uncompressed' part. */
         *(png_ptr->chunkdata + prefix_size) = 0x00;
      }
      /* Ignore a malloc error here - it is safe. */
   }

   *newlength = prefix_size;
}

Concluding Remarks

As stated above, everyone is strongly encouraged to upgrade their copy of libpng to version 1.4.1 and to upgrade their applications to use it, together with calls to the new functions for setting limits.

Heartfelt thanks to John Bowler for developing the two-pass decompression method that was crucial to fixing this problem.

[Note for any DHS people who have stumbled upon this site, be aware that this is a cybersecurity issue, not a physical security issue. Feel free to contact me at <glennrp at users.sourceforge.net> to discuss it.]