On Incomplete HTTP Reads and the Requests Library In Python

The requests library is arguably the most widely used HTTP library for Python. However, I believe most of its users are not aware that its current stable version happily accepts responses whose length is less than what is given in the Content-Length header. If you are not careful enough to check this yourself, you may end up using corrupted data without even noticing. I have witnessed this first-hand, which is the reason for the present blog post. Let’s see why the current requests version does not do this check (spoiler: it is a feature, not a bug) and how to perform it manually in your scripts.

What Is the Content-Length Header?

Just to refresh your memory: in the HTTP protocol, the Content-Length header indicates the size of the body of a request or response. It is given in octets, where one octet is 8 bits. For simplicity, I will use the term byte instead of octet throughout the post. Generally, the Content-Length header tells the receiving party where the current request (or response) ends. Without it, you would not know whether you have received all the data (and so should stop reading) or whether more data are underway. Of course, the server could close the connection after every request/response (which is what HTTP 1.0 did), but since HTTP 1.1, all connections are considered persistent unless declared otherwise. This significantly speeds up communication as you do not have to open a separate connection for each request.
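
To make this concrete, here is a minimal sketch of how a client reading straight from a socket would use Content-Length to decide when to stop (error handling and robust header parsing are omitted, and the address is just an assumption):

import socket

# Connect and send a bare-bones HTTP/1.1 request.
sock = socket.create_connection(('localhost', 8080))
sock.sendall(b'GET / HTTP/1.1\r\nHost: localhost\r\n\r\n')

# Read until the end of the response headers.
raw = b''
while b'\r\n\r\n' not in raw:
    chunk = sock.recv(1024)
    if not chunk:
        raise IOError('connection closed before headers were received')
    raw += chunk
headers, _, body = raw.partition(b'\r\n\r\n')

# Extract the value of the Content-Length header.
content_length = None
for line in headers.split(b'\r\n')[1:]:  # Skip the status line.
    name, _, value = line.partition(b':')
    if name.strip().lower() == b'content-length':
        content_length = int(value)

# This is where Content-Length earns its keep: keep reading until the
# whole body has arrived or the server closes the connection.
while content_length is not None and len(body) < content_length:
    chunk = sock.recv(1024)
    if not chunk:
        break  # Premature close: we got fewer bytes than promised.
    body += chunk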

After reading the above paragraph, the following question may have popped into your head:

What If I Receive Fewer Bytes Than Stated In Content-Length?

Under certain circumstances (network or server-side errors), the server may abruptly close the connection before sending the complete message. The HTTP 1.1 RFC (RFC 2616) specifies:

When a Content-Length is given in a message where a message-body is allowed, its field value MUST exactly match the number of OCTETs in the message-body. HTTP/1.1 user agents MUST notify the user when an invalid length is received and detected.

So, upon receiving fewer bytes than stated in the Content-Length header, one may rightly expect to be informed about it. To check this, I have put together a simple HTTP server that always answers with the following response and then closes the connection:

HTTP/1.1 200 OK\r\n
Content-Length: 10\r\n
\r\n
123456
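
For reference, such a server can be as simple as the following sketch (this is just an illustration, not the content-length.py implementation linked at the end of the post):

import socket

# The response promises 10 bytes but carries only 6.
RESPONSE = (
    b'HTTP/1.1 200 OK\r\n'
    b'Content-Length: 10\r\n'
    b'\r\n'
    b'123456'
)

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(('localhost', 8080))
server.listen(1)
while True:
    conn, _ = server.accept()
    conn.recv(4096)        # Read (and ignore) the request.
    conn.sendall(RESPONSE)
    conn.close()           # Close before the promised body is complete.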

Then, I wrote a Python script that sends a GET request to the server, checks whether it succeeded, and prints the received data:

import requests
import sys

response = requests.get('http://localhost:8080/')
if not response.ok:
    sys.exit('error: HTTP {}'.format(response.status_code))

print(response.headers)
print(response.content)
print(len(response.content))

When you run it, it succeeds, without raising an exception:

$ python client.py
{'Content-Length': '10'}
b'123456'
6

This is unsettling. Well, maybe this is how all clients behave? To verify, I tried curl:

$ curl http://localhost:8080
curl: (18) transfer closed with 4 bytes remaining to read

$ echo $?
18

Hmm. So maybe this is because requests is a library and curl is a tool? To find out, I used reqwest, an HTTP library for Rust. The full implementation of my testing client is available here. When I ran it, it also notified me about the discrepancy:

error: failed to read the contents of the response
  cause: end of file before message length reached

There is something fishy going on here with requests.

Why Does the Requests Library Not Warn Me?

When you search the requests repository, you find numerous reports of this surprising behavior (#1855, #1938, #2275, #2833, #3459, #4415). Basically, the reason for not incorporating such a check into requests is threefold:

  1. Firstly, I’d argue that Requests is not technically a user-agent, it’s a library. This frees us from some of the constraints of user-agent behaviour (and in fact we take that liberty elsewhere in the library, like with our behaviour on redirects).

    Well, if it is not a user agent, why does it send the following User-Agent header by default?

        User-Agent: python-requests/2.18.4
        
  2. Secondly, if we throw an exception we irrevocably destroy the data we read. It becomes impossible to access. This means that situations where the user might want to ‘muddle through’, taking as much of the data as they were able to read and keeping hold of it, becomes a little bit harder.

    This is understandable. However, should this really be the default behavior? I would argue that muddling through should be opt-in, i.e. requests would warn you by default, but you would be able to suppress the warning and use the data that you managed to read.
  3. Finally, even if we did want this logic we’d need to implement it in urllib3. Content-Length refers to the number of bytes on the wire, not the decoded length, so if we get a gzipped (or DEFLATEd) response, we’d need to know how many bytes there were before decoding. This is not typically information we have at the Requests level. So if you’re still interested in having this behaviour, I suggest you open an issue over on shazow/urllib3.

    urllib3 is the underlying HTTP library used by requests. The original poster submitted an issue there (#311). It was closed with “I’m personally happy to leave this as-is too”, although there was a willingness to review a PR implementing such a check. And luckily, a year and a half later, such a PR was submitted and accepted (#949)!

After reading the third point above, you may start to rejoice. Unfortunately, even though the urllib3 PR was merged on 2016-08-29, the current stable version of requests (2.18.4 at the time of writing, which is 2018-04-22) does not make use of this piece of functionality: urllib3’s check is off by default, and requests does not turn it on. On the bright side, there is a merged requests PR that does (#3563). The only problem is that it was merged into the requests:proposed/3.0.0 branch, which holds the proposed changes for the 3.0 version of requests, currently under development.

So, What Can I Do To Detect Incomplete Reads In My Scripts?

requests 3.x

If you come here from the future, just use requests 3.x. It should provide the enforce_content_length parameter, whose default value should be True. That is, if the requests library receives incomplete content, it should raise an exception:

urllib3.exceptions.IncompleteRead: IncompleteRead(6 bytes read, 4 more expected)
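
Incidentally, if your installed urllib3 already contains PR #949, you do not have to wait for requests 3.x: you can talk to urllib3 directly and opt in yourself. A minimal sketch (the forwarding of the parameter through request() is an assumption based on urllib3’s API at the time of writing):

import urllib3

http = urllib3.PoolManager()
# enforce_content_length is forwarded to urllib3's HTTPResponse; reading
# a truncated body should then fail with IncompleteRead (possibly wrapped
# in a ProtocolError, depending on the urllib3 version).
response = http.request('GET', 'http://localhost:8080/',
                        enforce_content_length=True)
print(response.data)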

requests 2.x

If you come here before the release of requests 3.0, you will have to perform the check by yourself. You can use the following piece of code:

import requests

response = requests.get(...)

# Check that we have read all the data as the requests library does not
# currently enforce this.
expected_length = response.headers.get('Content-Length')
if expected_length is not None:
    actual_length = response.raw.tell()
    expected_length = int(expected_length)
    if actual_length < expected_length:
        raise IOError(
            'incomplete read ({} bytes read, {} more expected)'.format(
                actual_length,
                expected_length - actual_length
            )
        )

The check works as follows. First, we ensure that the response has the Content-Length header. If not, the check is meaningless (more on that later). Then, we get the number of bytes that were actually read and compare it with the expected value. If we have read fewer bytes, we signal an error. Of course, instead of raising an exception, you can do whatever you want (retry, print an error message and quit, complain to a friend, etc.).
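
For example, if you want to retry instead, you can wrap the check in a small helper like the following sketch (get_complete is a hypothetical name, and the simplistic retry policy is just for illustration):

import requests

def get_complete(url, max_attempts=3, **kwargs):
    """Send GET requests to url until a complete response is received.

    Raises IOError when all attempts yield truncated responses.
    """
    for _ in range(max_attempts):
        response = requests.get(url, **kwargs)
        expected_length = response.headers.get('Content-Length')
        if expected_length is None or response.raw.tell() >= int(expected_length):
            return response
    raise IOError('incomplete read in all {} attempts'.format(max_attempts))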

To verify, you can run the content-length.py HTTP server and send a request via client-with-check.py. The server is written so that it returns fewer bytes than stated in the Content-Length header of the response.

What About Compressed Responses?

Responses can be compressed. For example, a server may return a response having the Content-Encoding header set to gzip. This means that the body of the response is compressed with gzip, which is based on the DEFLATE algorithm (a combination of LZ77 and Huffman coding). When the requests library receives such a response, it automatically decompresses it. When you then check the length of response.content (the decompressed body of the response in bytes), it will most likely differ from the length specified in the Content-Length header, which counts the compressed bytes on the wire. This is the reason we did not use len(response.content) to obtain the actual length of the response in the above check. Instead, we have to use response.raw.tell(), which returns the actual number of bytes that were read (prior to decompression).
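
You can observe the difference with the standard gzip module (the exact compressed size may vary):

import gzip

body = b'1234567890' * 100        # 1000 bytes before compression.
wire_data = gzip.compress(body)   # This is what Content-Length counts.

print(len(body))       # 1000 -- what len(response.content) would report.
print(len(wire_data))  # Much smaller, e.g. around 40 bytes.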

To verify, you can run the content-encoding-gzip.py HTTP server and send a request via client-with-check.py. The server is written so that it returns fewer bytes than stated in the Content-Length header of the response.

What About Responses With Transfer-Encoding: chunked?

Alternatively, the Content-Length header can be omitted and the Transfer-Encoding: chunked header used instead. This streaming data transfer, available since HTTP 1.1, works by splitting the response into chunks. The body of the response then has the following form, terminated by a chunk of size zero:

size of the first chunk (in hexadecimal)\r\n
data of the first chunk\r\n
size of the second chunk (in hexadecimal)\r\n
data of the second chunk\r\n
...
0\r\n
\r\n

This has several advantages over Content-Length, including the ability to maintain a persistent HTTP connection for dynamically generated content whose complete size is not known in advance.
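
For example, a complete chunked response carrying the ten bytes 1234567890 could look as follows (the chunk sizes 6 and 4 are in hexadecimal):

HTTP/1.1 200 OK\r\n
Transfer-Encoding: chunked\r\n
\r\n
6\r\n
123456\r\n
4\r\n
7890\r\n
0\r\n
\r\n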

How should we check whether we have received all the data when we are dealing with a chunked transfer without a Content-Length header? Luckily, in this case, the requests library works as expected. That is, if the server sends incomplete data, the library raises an exception:

http.client.IncompleteRead: IncompleteRead(6 bytes read, 4 more expected)

To verify, you can run the transfer-encoding-chunked.py HTTP server and send a request via client.py. The server is written so that it returns fewer bytes than stated in the chunk size.
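
If crashing is not an option for you, you can catch the error. Note that depending on your requests and urllib3 versions, the truncated chunk may surface as the http.client.IncompleteRead shown above or wrapped in requests.exceptions.ChunkedEncodingError, so the following sketch catches both:

import http.client
import sys

import requests

try:
    response = requests.get('http://localhost:8080/')
    data = response.content
except (http.client.IncompleteRead,
        requests.exceptions.ChunkedEncodingError) as ex:
    # The server stopped sending data in the middle of a chunk.
    sys.exit('error: incomplete response ({})'.format(ex))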

Final Recommendation

Always verify that the data that you receive are correct. Verifying that you have read the expected number of bytes is just the first step. For example, when downloading a file whose hash (e.g. SHA-256) is known, you should check that the hash of the downloaded file matches. Otherwise, you risk working with corrupted data, which may lead to nasty bugs.
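
For example, a download-and-verify step may look as follows (the URL and the expected hash are placeholders):

import hashlib

import requests

EXPECTED_SHA256 = '...'  # The known SHA-256 hash of the file.

response = requests.get('http://localhost:8080/file')
actual_sha256 = hashlib.sha256(response.content).hexdigest()
if actual_sha256 != EXPECTED_SHA256:
    raise IOError('hash mismatch: expected {}, got {}'.format(
        EXPECTED_SHA256, actual_sha256))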

Complete Source Code

The complete source code of all the servers and clients is available on GitHub.

Discussion

Apart from the comments below, you can also discuss this post at /r/Python and Hacker News.

10 Comments

  1. The exact plan for requests v3 is currently up in the air, but I don’t think it will end up using that `proposed/3.0.0` branch. (Kenneth is quite keen that requests v3 should support async operation, and that will require entirely replacing the current low-level HTTP stack; see https://github.com/urllib3/urllib3/issues/1323 for some discussion of what this might look like.)

    My current guess is that urllib3 v2 and requests v3 *will* switch to always raising an error on short responses, but I can’t say that for certain, and unfortunately that PR isn’t evidence for much either way.

  2. Great post. Surprised with the attention to detail that you still wrote “less” not “fewer” bytes (bytes are countable, as was a key point of the post!)

  3. I’m missing an approach for how to proceed when this behavior is detected. Normally, I would like to get all the bytes, not just the first N, and one OSI layer should handle this requirement. So, to implement this layer, does it make sense to ask for the remaining bytes when I find the answer to be short? Or is that not possible in general?

    How about a wrapping generator which yields responses one after the other until finally a response contains all the bytes that were asked for? Would that be possible?

  4. Latest news, from https://github.com/psf/requests/issues/4956:
    > It looks like the audit trail was followed for the most part here. We did merge this into Requests in #3563 (https://github.com/psf/requests/pull/3563), but it’s in a separate branch that was intended for 3.0. Adding the flag is a breaking change for the 2.x branch, so we’re unable to resolve this until the next major version.
    > Just providing an update, I’ve opened urllib3/urllib3#2514 (https://github.com/urllib3/urllib3/pull/2514) to change the default here in urllib3 2.0.

  5. Thanks for this post, it was really useful.
    I have a quick question, though:
    in your example you raise an error and you say you could retry instead. Is there any elegant way to force that retry that’s different from a `while` statement?
    I’m using urllib3 Retry class, but it only works for status_codes:

    retry_strategy = Retry(
        total=5,
        backoff_factor=1,
        status_forcelist=[500, 502, 503, 504],
        method_whitelist=["HEAD", "GET", "PUT", "DELETE", "OPTIONS", "TRACE"]
    )
    
    adapter = HTTPAdapter(max_retries=retry_strategy)
    http = requests.Session()
    http.mount('https://', adapter)
    
    response = http.request(
        url=url,
        method=method,
        headers=headers,
        json=body,
        params=params,
        stream=stream,
    )
    
    response.raise_for_status()  # Will raise an HTTPError on an unsuccessful status code.
    
    expected_length = response.headers.get('Content-Length')
    if expected_length is not None:
        actual_length = response.raw.tell()
        expected_length = int(expected_length)
        if actual_length < expected_length:
            # Retry here?
    
    • Hi, when it comes to retrying, I am afraid I cannot provide any specific guidance as it depends on the context. In your case, what could work is to replace the use of urllib3.util.Retry with e.g. the tenacity library that can automatically retry a block of code upon catching specific exceptions (like HTTPError or an exception raised due to a response-length mismatch).

  6. This is helpful. Thanks! I have a follow up question. Can you also cover if there are any caveats with using the requests library when a server is using the combination of gzip-compression & chunked transfer-encoding? We are getting this error:
    Python error:
    Exception: ("Connection broken: InvalidChunkLength(got length b'\\x10\\x1cR\\x08\\x800\\x86\\x82\\x06\\xc7\\x16\\x17\\xce\\x9a\\xd0\\xda{\\x1c\\r\\n', 0 bytes read)", InvalidChunkLength(got length b'\x10\x1cR\x08\x800\x86\x82\x06\xc7\x16\x17\xce\x9a\xd0\xda{\x1c\r\n', 0 bytes read))

    It looks like the requests library isn’t decompressing before decoding the chunked response. Any suggestions on how to proceed?

