Discussion:
Size of chunks in chunked uploads
Apurva Mehta
2009-05-01 17:42:41 UTC
Hi, I was wondering if there is any way to specify the chunk size in
HTTP uploads with chunked transfer-encoding (i.e. with the
"Transfer-Encoding: chunked" header). It seems that the default chunk
size is 128 bytes. I would like to increase this value and was
wondering if there is any option I can specify (through libcurl or
command line curl) to do this.

Thanks,
Apurva
Michael Graf
2009-05-01 18:17:31 UTC
Hey there, I'm not sure if this works/helps; I just saw it and thought it might...


from this page: http://curl.haxx.se/libcurl/c/curl_easy_setopt.html

CURLOPT_BUFFERSIZE

Pass a long specifying your preferred size (in bytes) for the receive buffer
in libcurl. The main point of this would be that the write callback gets
called more often and with smaller chunks. This is just treated as a
request, not an order. You cannot be guaranteed to actually get the given
size. (Added in 7.10)

This size is by default set as big as possible (CURL_MAX_WRITE_SIZE), so it
only makes sense to use this option if you want it smaller.
Daniel Stenberg
2009-05-01 18:23:35 UTC
Post by Apurva Mehta
Hi, I was wondering if there is any way to specify the chunk size in HTTP
uploads with chunked transfer-encoding (i.e. with the
"Transfer-Encoding: chunked" header). It seems that the default chunk size
is 128 bytes. I would like to increase this value and was wondering if there
is any option I can specify (through libcurl or command line curl) to do
this.
There is no particular default size, libcurl will "wrap" whatever the read
function returns with the chunked transfer magic. And no, there's no way to
change that other than to simply make your read callback return larger or
smaller chunks.
--
/ daniel.haxx.se
Daniel Stenberg
2009-05-01 18:54:56 UTC
Post by Apurva Mehta
(0) Doesn't the read callback accept as arguments the maximum size it is
allowed to copy into the buffer? How is it then possible to have the read
callback send larger or smaller values (and so control the chunk size)?
You can respond with less data than what it asks for.

So if your callback today only returns a certain amount of data that is less
than what libcurl asks for, you can make it return a larger or smaller amount.
Post by Apurva Mehta
(1) What about the command-line curl utility? I notice that when I use it to
upload large files using chunked encoding, the server receives 128 byte
chunks. For the same file uploaded to the same server without chunked
encoding, the server receives the data in 4000 byte segments. (This is an
Apache webserver, and I get these numbers because I have a custom Apache
module handling these uploads.) This is what led me to believe that there
is some implicit default value for the chunk size.
I have no explanation for that. 'curl' has the exact same read function for
doing the read from a file so it should provide data in the exact same
way/pattern independently of chunked or not.
--
/ daniel.haxx.se
Apurva Mehta
2009-05-01 19:17:29 UTC
Permalink
Post by Daniel Stenberg
(1) What about the command-line curl utility? I notice that when I use it
to upload large files using chunked encoding, the server receives 128 byte
chunks. For the same file uploaded to the same server without chunked
encoding, the server receives the data in 4000 byte segments. (This is an
Apache webserver, and I get these numbers because I have a custom Apache
module handling these uploads.) This is what led me to believe that there
is some implicit default value for the chunk size.
I have no explanation for that. 'curl' has the exact same read function for
doing the read from a file so it should provide data in the exact same
way/pattern independently of chunked or not.
Just an update. I don't think that the problem is with curl. I did an
strace on the curl process doing the chunked upload, and it is clear
that it is sending variable sized chunks in sizes much larger than 128
bytes. Something else along the way is causing our server to see only
128 byte segments.

Thanks a lot for answering my questions. I appreciate it.

Apurva
Apurva Mehta
2009-05-01 18:41:12 UTC