pull: Handle remote web server not honoring range requests

It's valid for the remote server to reply 200 OK and send us the entire
file instead of a 206 Partial Content, and in that case we should blow
away the previously cached data rather than blindly appending to it,
which creates multiple copies of the data inside the file.
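
As a standalone illustration (not part of the commit), here is a small
sketch of the duplication problem; the file name and payload are made up:

    /* Illustration only: repeated O_APPEND writes of the same body duplicate
     * the data, while O_TRUNC replaces it. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/stat.h>
    #include <unistd.h>

    static void
    write_body (int extra_oflag)
    {
      const char body[] = "full response body\n";
      int fd = open ("download.tmp", O_CREAT | O_WRONLY | extra_oflag, 0600);
      if (fd >= 0)
        {
          (void) write (fd, body, strlen (body));
          close (fd);
        }
    }

    int
    main (void)
    {
      struct stat st;

      write_body (O_TRUNC);   /* interrupted download left this behind */
      write_body (O_APPEND);  /* retry: server sent 200 OK with the whole body again */
      if (stat ("download.tmp", &st) == 0)
        printf ("append:   %lld bytes (two copies)\n", (long long) st.st_size);

      write_body (O_TRUNC);   /* what the fix does when the reply is not 206 */
      if (stat ("download.tmp", &st) == 0)
        printf ("truncate: %lld bytes (one copy)\n", (long long) st.st_size);

      (void) unlink ("download.tmp");
      return 0;
    }

This is exactly the duplication that the O_TRUNC branch in the diff below
avoids.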

This problem primarily occurs when we already have the complete file
but were interrupted, then try again, and the new process has no record
that the download was already complete.  We do a range request for
bytes past the end of the file, and some web servers (e.g. Akamai)
return 200 OK with the whole content again, rather than a 416 Requested
Range Not Satisfiable.
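
For reference, a rough libsoup 2.x sketch (not ostree's fetcher code) of
the request pattern described above; the URL and resume offset are
hypothetical:

    /* Sketch: resume a download with a Range request and inspect the status. */
    #include <libsoup/soup.h>

    int
    main (void)
    {
      SoupSession *session = soup_session_new ();
      /* Hypothetical URL; resume_offset is the size of our cached partial
       * file (or the full size, if the previous pull actually completed). */
      SoupMessage *msg = soup_message_new ("GET", "https://example.com/repo/objects/some.filez");
      goffset resume_offset = 4096;

      soup_message_headers_set_range (msg->request_headers, resume_offset, -1);
      soup_session_send_message (session, msg);

      if (msg->status_code == SOUP_STATUS_PARTIAL_CONTENT)
        g_print ("206: server honored the range; append to the cached file\n");
      else if (msg->status_code == SOUP_STATUS_REQUESTED_RANGE_NOT_SATISFIABLE)
        g_print ("416: range starts past the end; the cached file is already complete\n");
      else if (msg->status_code == SOUP_STATUS_OK)
        g_print ("200: server ignored the range; truncate and rewrite the cached file\n");
      else
        g_print ("unexpected status %u\n", msg->status_code);

      g_object_unref (msg);
      g_object_unref (session);
      return 0;
    }

Build against libsoup-2.4 via pkg-config; libsoup 3 drops the direct
struct field access and the synchronous soup_session_send_message() used
here.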

Thus we could also fix this with a saner caching strategy - since we
know the file is complete, rename it to $checksum.done or something
before it's processed.  (Or really, rework how we do caching more
intelligently in general.)  A rough sketch of the marker idea follows.
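
This is not ostree's actual API; the directory fd, names, and helpers
below are purely illustrative:

    /* Sketch of the "$checksum.done" marker idea; names are illustrative. */
    #include <fcntl.h>
    #include <limits.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <unistd.h>

    static bool
    mark_download_complete (int tmpdir_dfd, const char *checksum)
    {
      char done_name[PATH_MAX];
      snprintf (done_name, sizeof done_name, "%s.done", checksum);
      /* A rename within one directory is atomic, so a later process sees
       * either the in-progress name or the .done name, never a torn state. */
      return renameat (tmpdir_dfd, checksum, tmpdir_dfd, done_name) == 0;
    }

    static bool
    download_already_complete (int tmpdir_dfd, const char *checksum)
    {
      char done_name[PATH_MAX];
      snprintf (done_name, sizeof done_name, "%s.done", checksum);
      /* If the marker exists, skip the range request entirely. */
      return faccessat (tmpdir_dfd, done_name, F_OK, 0) == 0;
    }

    int
    main (void)
    {
      int dfd = open (".", O_RDONLY | O_DIRECTORY | O_CLOEXEC);
      const char *checksum = "abc123";  /* placeholder object name */

      if (!download_already_complete (dfd, checksum))
        {
          /* ... fetch into "abc123", then: */
          mark_download_complete (dfd, checksum);
        }
      close (dfd);
      return 0;
    }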

This fixes the issue where interrupted pulls failed against such web
servers, although repeated attempts would eventually succeed because
we unlink files that fail to pull.

Related: https://bugzilla.redhat.com/show_bug.cgi?id=1207292

Author: Colin Walters
Date:   2015-04-04 10:49:28 -04:00
Commit: 115e05746b
Parent: 1e501422e2

1 changed file with 12 additions and 1 deletion

@@ -492,7 +492,18 @@ on_request_sent (GObject *object,
       if (!pending->is_stream)
         {
-          int fd = openat (pending->self->tmpdir_dfd, pending->out_tmpfile, O_CREAT | O_WRONLY | O_APPEND | O_CLOEXEC, 0600);
+          int oflags = O_CREAT | O_WRONLY | O_CLOEXEC;
+          int fd;
+
+          /* If we got partial content, we can append; if the server
+           * ignored our range request, we need to truncate.
+           */
+          if (msg && msg->status_code == SOUP_STATUS_PARTIAL_CONTENT)
+            oflags |= O_APPEND;
+          else
+            oflags |= O_TRUNC;
+
+          fd = openat (pending->self->tmpdir_dfd, pending->out_tmpfile, oflags, 0600);
           if (fd == -1)
             {
               gs_set_error_from_errno (&local_error, errno);