Opened 14 years ago
Last modified 5 years ago
#2557 assigned Bugs
iostreams filtering_stream w/ gzip infinite loop when writing to a full drive
Reported by: | Owned by: Jonathan Turkanis
Milestone: Boost 1.38.0 | Component: iostreams
Version: Boost 1.55.0 | Severity: Problem
Keywords: | Cc:
Description
When a filtering_stream with a gzip_compressor is used to write to a full hard drive (i.e. insufficient free space), Boost enters an infinite loop at boost/iostreams/detail/adapter/non_blocking_adapter.hpp:41 because the underlying write function keeps returning zero. The loop happens during destruction of the stream, and I can't find a client-side workaround.
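For reference, the failing loop has roughly this shape (a paraphrased sketch of non_blocking_adapter's write near the cited line, not the verbatim Boost source; names are approximate):

#include <boost/iostreams/write.hpp>

// Paraphrased sketch of non_blocking_adapter<Device>::write(), not the
// verbatim Boost source.
template<typename Device>
std::streamsize write_loop(Device& device_, const char* s, std::streamsize n)
{
    std::streamsize result = 0;
    while (result < n) {
        // A device on a full drive keeps returning 0 (or -1 on other errors),
        // so result never reaches n and the loop spins forever.
        std::streamsize amt =
            boost::iostreams::write(device_, s + result, n - result);
        result += amt;
    }
    return result;
}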
Attached is a test case; point it at a volume with no free space and give it some large number of bytes. If there is insufficient space, execution hangs. Tested on MinGW/WinXP/GCC 4.2, but it seems to fail on Linux/GCC as well.
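A minimal reproduction along the lines of the attached test case might look like this (the output path and byte count are placeholders; point the path at a volume with no free space):

#include <boost/iostreams/filtering_stream.hpp>
#include <boost/iostreams/filter/gzip.hpp>
#include <boost/iostreams/device/file.hpp>
#include <ios>
#include <vector>

int main()
{
    namespace io = boost::iostreams;
    // "/mnt/full/out.gz" is a placeholder for a path on a full volume.
    io::filtering_ostream out;
    out.push(io::gzip_compressor());
    out.push(io::file_sink("/mnt/full/out.gz", std::ios::binary));

    std::vector<char> buf(1024 * 1024, 'x');
    for (int i = 0; i < 1024; ++i)   // write ~1 GiB of compressible data
        out.write(&buf[0], buf.size());
    // The hang occurs here, when the destructor flushes the filter chain.
    return 0;
}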
Attachments (1)
Change History (11)
by , 14 years ago
Attachment: boost-bug.cpp added
comment:1 by , 14 years ago
Status: new → assigned
comment:2 by , 12 years ago
comment:4 by , 12 years ago
The gzip format allows a file to consist of a sequence of independently compressed members, so as a workaround you can compress each data chunk manually and append it to the file:
#include <cassert>
#include <fstream>
#include <iostream>
#include <string>
#include <boost/iostreams/categories.hpp>
#include <boost/iostreams/filter/gzip.hpp>

// A minimal blocking Sink that forwards writes to a std::ostream.
class StreamSink {
public:
    typedef char char_type;
    typedef boost::iostreams::sink_tag category;

    StreamSink() : opened(false), stream(NULL) {}

    explicit StreamSink(std::ostream* stream) : opened(true), stream(stream) {
        assert(stream != NULL && "StreamSink constructed from NULL");
    }

    std::streamsize write(const char* ptr, std::streamsize n) {
        assert(this->stream != NULL && "Writing to NULL stream");
        this->stream->write(ptr, n);  // badbit/failbit exceptions signal errors
        return n;
    }

    void close() {
        this->opened = false;
        this->stream = NULL;
    }

    bool isOpen() const { return this->opened; }

private:
    bool opened;
    std::ostream* stream;
};

void writeGzippedSection(std::string const& filename,
                         std::ios_base::openmode mode,
                         std::string const& data)
{
    try {
        std::ofstream out(filename.c_str(), mode | std::ios::binary);
        // Throw instead of looping forever when the drive fills up.
        out.exceptions(std::ios_base::badbit | std::ios_base::failbit);
        StreamSink sink(&out);
        boost::iostreams::gzip_compressor compressor;
        compressor.write(sink, data.c_str(),
                         static_cast<std::streamsize>(data.length()));
        compressor.close(sink, BOOST_IOS::out);
    } catch (boost::iostreams::gzip_error const& e) {
        std::cerr << "gzip error: " << e.error()
                  << ", zlib error: " << e.zlib_error_code() << std::endl;
        throw;
    } catch (std::ofstream::failure const& e) {
        std::cerr << "ofstream::failure: " << e.what() << std::endl;
        throw;
    }
}
The code above produces decompressible files (tested with zcat, zless, and gunzip). Also, when the drive runs out of space, the chunks that were written successfully can still be recovered.
The code may not be pretty, but it works.
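For example, appending two chunks might look like this (the file name and chunk contents are hypothetical):

int main()
{
    // Hypothetical usage: each call writes one self-contained gzip member.
    writeGzippedSection("data.gz", std::ios::trunc, "first chunk\n");
    writeGzippedSection("data.gz", std::ios::app, "second chunk\n");
    // gunzip/zcat concatenate the members, so data.gz decompresses to both.
    return 0;
}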
comment:5 by , 9 years ago
The same infinite loop happens if the user does not have write access to the file being written. In that case the variable amt receives the value -1 at include/boost/iostreams/detail/adapter/non_blocking_adapter.hpp, line 42.
This bug is still present in Boost 1.55.
comment:6 by , 9 years ago
Version: Boost 1.37.0 → Boost 1.55.0
comment:7 by , 8 years ago
And the same issue happens on Windows if the filename is longer than MAX_PATH and not prefixed with "\\?\".
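For illustration, a hypothetical helper that applies the prefix (the Win32 extended-length prefix only works with absolute paths that use backslashes; the helper name is made up):

#include <string>

// Hypothetical helper: prepend the Win32 extended-length prefix so paths
// longer than MAX_PATH are accepted. "\\\\?\\" is the escaped C++ source
// form of the literal prefix \\?\ .
std::string makeExtendedLengthPath(const std::string& absolutePath)
{
    return "\\\\?\\" + absolutePath;
}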
comment:8 by , 6 years ago
There are also hangs if a file_sink could not be opened (though that, at least, can be checked manually by calling is_open() beforehand). Also, the "hangs" are not mere hangs but infinite loops at 100% CPU usage. It feels as if the whole iostreams library was written without any consideration for error handling at all.
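The manual check mentioned above might look like this (a sketch with a made-up function name; it avoids the open-failure hang but not the full-disk write hang):

#include <boost/iostreams/filtering_stream.hpp>
#include <boost/iostreams/filter/gzip.hpp>
#include <boost/iostreams/device/file.hpp>
#include <ios>
#include <stdexcept>
#include <string>

void openCheckedGzipStream(boost::iostreams::filtering_ostream& out,
                           const std::string& path)
{
    namespace io = boost::iostreams;
    io::file_sink sink(path, std::ios::binary);
    // Verify the sink opened before pushing it into the chain; otherwise
    // the first flush would spin forever.
    if (!sink.is_open())
        throw std::runtime_error("could not open " + path);
    out.push(io::gzip_compressor());
    out.push(sink);
}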
comment:9 by , 5 years ago
Submitted a pull request with a fix and a test case: https://github.com/boostorg/iostreams/pull/36
It doesn't check or ensure that the error is properly propagated (I am not sure there even is a proper way to do that), but at least there is no more infinite loop.
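For context, one common way to break such a loop is to treat a non-positive return from the device as an error. A sketch of that shape (purely illustrative, not the contents of the linked pull request):

#include <boost/iostreams/write.hpp>
#include <ios>

// Illustrative guard only; not the actual diff from the pull request.
template<typename Device>
std::streamsize checked_write_loop(Device& device_, const char* s,
                                   std::streamsize n)
{
    std::streamsize result = 0;
    while (result < n) {
        std::streamsize amt =
            boost::iostreams::write(device_, s + result, n - result);
        if (amt <= 0)  // 0 (device full) or -1 (error): fail, don't spin
            throw std::ios_base::failure("bad write");
        result += amt;
    }
    return result;
}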
comment:10 by , 5 years ago
The change has been merged, so hopefully the next release will finally have the fix.
Has this been fixed? If so, which version of the library contains the fix?