Age | Commit message | Author |
|
Since Curl_pgrsDone() itself calls Curl_pgrsUpdate(), which may return an
abort instruction or similar, we need to pass that information back and
subsequently handle the return code from Curl_pgrsDone() properly wherever
it is used.
(Spotted by a Coverity scan)
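A minimal sketch of the resulting pattern at a call site, assuming
Curl_pgrsDone() now hands back a non-zero value when the update signals an
abort (the exact return type and surrounding context are illustrative only):

  if(Curl_pgrsDone(conn))
    return CURLE_ABORTED_BY_CALLBACK; /* the progress callback asked us to stop */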
|
|
With FOLLOWLOCATION enabled, when a 3xx page was downloaded and its
download size was known (such as from a Content-Length header), but the
subsequent URL (transferred after the 3xx page) was chunked encoded, the
previous "known download size" would linger and feed the progress
meter incorrect information, i.e. the former value would keep being
passed in. This could easily result in downloads that appeared WAY
larger than expected and would cause >100% outputs with the curl
command line tool.
Test case 599 was created and used to reproduce the bug and then
verify the fix.
Bug: http://curl.haxx.se/bug/view.cgi?id=3510057
Reported by: Michael Wallner
|
|
1- Two new error codes are introduced.
CURLE_FTP_ACCEPT_FAILED is set whenever accepting the FTP server's data
connection fails.
CURLE_FTP_ACCEPT_TIMEOUT is set whenever accepting it times out.
Neither of these errors is considered fatal and the control connection
remains OK, since the cause could just be a firewall blocking the server
from connecting to the client.
2- One new setopt option is introduced.
CURLOPT_ACCEPTTIMEOUT_MS
It sets the maximum amount of time the FTP client will wait for the
server to connect (see the sketch below). The internal default accept
timeout is 60 seconds.
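A minimal usage sketch, assuming an active-mode FTP transfer against a
placeholder URL; the 10 second timeout value is arbitrary:

  #include <stdio.h>
  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      CURLcode res;
      curl_easy_setopt(curl, CURLOPT_URL, "ftp://ftp.example.com/file.txt");
      /* active mode: ask the server to connect back to us */
      curl_easy_setopt(curl, CURLOPT_FTPPORT, "-");
      /* wait at most 10 seconds for the server's data connection */
      curl_easy_setopt(curl, CURLOPT_ACCEPTTIMEOUT_MS, 10000L);
      res = curl_easy_perform(curl);
      if(res == CURLE_FTP_ACCEPT_TIMEOUT)
        fprintf(stderr, "server did not connect back in time\n");
      curl_easy_cleanup(curl);
    }
    return 0;
  }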
|
|
It makes it easier to introduce debug outputs in this function, and
everything in the function is using the value anyway, so it might even be
more efficient.
|
|
To avoid the progress meter headers getting output between each
transfer, make sure the bits get kept when (re-)initialized.
Reported by: Christopher Stone
|
|
As bug 3385258 pointed out, but I messed up the fix for. This is another
take at a fix.
Bug: http://curl.haxx.se/bug/view.cgi?id=3392101
Reported by: Wu Yongzheng
|
|
Bug: http://curl.haxx.se/bug/view.cgi?id=3385258
Reported by: Ben Winslow
|
|
When an easy handle is used to download a URI which has no
Content-Length header (or equivalent) after downloading a URI which
does, the value from the previous transfer is reused and returned by
CURLINFO_CONTENT_LENGTH_DOWNLOAD. This is because the progress flags
(used to determine whether such a header was received) are not reset
between transfers.
Bug: http://curl.haxx.se/bug/view.cgi?id=3370895
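A sketch of the scenario with placeholder URLs, assuming an already-created
easy handle named curl that is reused for two transfers; before the fix the
second value could be a stale leftover from the first:

  double size;
  curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/with-content-length");
  curl_easy_perform(curl);
  curl_easy_getinfo(curl, CURLINFO_CONTENT_LENGTH_DOWNLOAD, &size);
  /* second transfer carries no Content-Length */
  curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/chunked");
  curl_easy_perform(curl);
  curl_easy_getinfo(curl, CURLINFO_CONTENT_LENGTH_DOWNLOAD, &size);
  /* expected: -1 (unknown), not the previous transfer's size */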
|
|
By the use of the new lib/checksrc.pl script that checks that our
basic source style rules are followed.
|
|
Found with codespell.
|
|
Curl_posttransfer is called too soon to add the final newline.
Moved the newline logic to pgrsDone, as there is no further call to
update the progress status after this call.
Reported by: Dmitri Shubin <sbn_at_tbricks.com>
http://curl.haxx.se/mail/lib-2010-12/0162.html
|
|
|
|
|
|
wrong percentage for small files, most notably for those under 1000 bytes,
and could easily end up showing more than 100% at the end. It also didn't
show any percentage, transfer size or estimated transfer time when
transferring less than 100 bytes.
|
|
download was 0 bytes, as libcurl would then return the size as unknown (-1)
and not 0. I wrote a fix and test case 566 to verify it.
|
|
smaller integral type
|
|
|
|
remain in use as internal curl_off_t print formatting strings for the internal
*printf functions, which still cannot handle print formatting string directives
such as "I64d", "I64u", and others available on MSVC, MinGW, Intel's ICC, and
other DOS/Windows compilers.
This reverts the part of the previous commit which did:
FORMAT_OFF_T -> CURL_FORMAT_CURL_OFF_T
FORMAT_OFF_TU -> CURL_FORMAT_CURL_OFF_TU
|
|
the names of the curl_off_t formatting string directives now become
CURL_FORMAT_CURL_OFF_T and CURL_FORMAT_CURL_OFF_TU.
CURL_FMT_OFF_T -> CURL_FORMAT_CURL_OFF_T
CURL_FMT_OFF_TU -> CURL_FORMAT_CURL_OFF_TU
Remove the use of an internal name for the curl_off_t formatting string
directives and use the common one, available both from inside and outside the
library.
FORMAT_OFF_T -> CURL_FORMAT_CURL_OFF_T
FORMAT_OFF_TU -> CURL_FORMAT_CURL_OFF_TU
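A small sketch of how the public directive is meant to be used when printing a
curl_off_t (the value is just an example large enough to need 64 bits):

  #include <stdio.h>
  #include <curl/curl.h>

  int main(void)
  {
    curl_off_t size = (curl_off_t)1 << 40; /* 1 TiB, does not fit in 32 bits */
    printf("size: %" CURL_FORMAT_CURL_OFF_T " bytes\n", size);
    return 0;
  }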
|
|
|
|
|
|
CURLINFO_APPCONNECT_TIME. This is set when the "application layer"
handshake/connection is completed (typically SSL, TLS or SSH). By using this
you can figure out the application layer's own connect time. You can extract
the time stamp using curl's -w option and the new variable named
'time_appconnect'. This feature was sponsored by Lenny Rachitsky at NeuStar.
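A brief sketch of reading the value after a completed transfer, assuming an
existing easy handle named curl; the value is a double counting seconds:

  double appconnect;
  if(curl_easy_getinfo(curl, CURLINFO_APPCONNECT_TIME, &appconnect) == CURLE_OK)
    printf("application layer (e.g. TLS) connected after %.3f seconds\n",
           appconnect);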
|
|
is inited at the start of the DO action. I removed the Curl_transfer_keeper
struct completely, and I had to move out a few struct members (that had to
be set before DO or used after DONE) to the UrlState struct. The SingleRequest
struct is accessed with SessionHandle->req.
One of the biggest reasons for doing this was the bunch of duplicate struct
members in HandleData and Curl_transfer_keeper since it was really messy to
keep track of two variables with the same name and basically the same purpose!
|
|
per second.
|
|
Shave off a couple of function calls in the part of
Curl_pgrsUpdate() which is always executed when called.
Fix a couple of comments.
|
|
more frequently, allowing the same calling frequency for the client progress
callback, while keeping the once-a-second frequency for speed calculations
and internal display of the transfer progress.
|
|
|
|
1) the progress callback gets called more frequently (at times)
2) libcurl *might* call the callback when it receives a signal
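For reference, a sketch of the client progress callback referred to above,
using the classic double-based form; returning non-zero makes libcurl abort
the transfer with CURLE_ABORTED_BY_CALLBACK:

  static int my_progress(void *clientp, double dltotal, double dlnow,
                         double ultotal, double ulnow)
  {
    (void)clientp; (void)ultotal; (void)ulnow;
    fprintf(stderr, "downloaded %.0f of %.0f bytes\r", dlnow, dltotal);
    return 0; /* keep going */
  }

  /* in the setup code, with an existing easy handle: */
  curl_easy_setopt(curl, CURLOPT_PROGRESSFUNCTION, my_progress);
  curl_easy_setopt(curl, CURLOPT_NOPROGRESS, 0L); /* allow the callback to run */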
|
|
|
|
cache within the multi handle.
|
|
(http://qa.mandrakesoft.com/show_bug.cgi?id=12289), curl would print a newline
to "finish" the progress meter after each redirect and not only after a
completed transfer.
|
|
|
|
|
|
|
|
that might lose accuracy
|
|
|
|
precaution to prevent mistakes from leading to buffer overflows.
|
|
once and for all
|
|
deleted trailing whitespace
|
|
|
|
overflow on 32-bit filesize systems
|
|
are transferred. The maximum size we support now is 8 exabytes, which equals
8192 petabytes...
|
|
|
|
|
|
|
|
* bring back subsecond resolution to CURLINFO_TOTAL_TIME
* Fix the Curl_pgrsDone() so that the final progress update is shown properly
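A quick sketch of where the subsecond resolution shows up on the API side,
assuming an existing easy handle named curl:

  double total;
  if(curl_easy_getinfo(curl, CURLINFO_TOTAL_TIME, &total) == CURLE_OK)
    printf("transfer took %.6f seconds\n", total); /* fraction carries subseconds */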
|
|
when converting to ints
|
|
|
|
999 days
|
|
to a "days + hours" or even "just days" display if the time value is very
large. I also switched several calculations over to fixed-point instead of the
previous doubles.
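An illustrative sketch of the idea (not curl's actual code), using only
integer arithmetic to pick a coarser layout as the value grows:

  #include <stdio.h>

  /* format an estimated time, switching layout for very large values */
  static void format_eta(char *out, size_t len, long seconds)
  {
    if(seconds < 100L * 3600)       /* fits as H:MM:SS up to 99 hours */
      snprintf(out, len, "%2ld:%02ld:%02ld",
               seconds / 3600, (seconds % 3600) / 60, seconds % 60);
    else if(seconds < 100L * 86400) /* switch to "days + hours" */
      snprintf(out, len, "%2ldd %2ldh",
               seconds / 86400, (seconds % 86400) / 3600);
    else                            /* very large: just days */
      snprintf(out, len, "%7ldd", seconds / 86400);
  }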
|