Discussion:
Problem with CURLE_COULDNT_CONNECT on FTP
Jonas Schnelli
2011-11-04 07:30:01 UTC
Hi

I'm using libcurl in a mass-upload/download application.
Under high load, the software runs an FTP download/upload (via libcurl) every few seconds.

Under the high-load simulation, after some minutes, curl_easy_perform always returns CURLE_COULDNT_CONNECT (7).
But the host is still reachable. Sometimes CURLE_COULDNT_CONNECT comes up for only one attempt; sometimes it is returned endlessly.

I also tried catching a CURLE_COULDNT_CONNECT and doing:
curl_easy_cleanup(curl);
curl_global_cleanup();

... and re-initializing the handle ...

curl_global_init(CURL_GLOBAL_ALL);
curl = curl_easy_init();

But then I still endlessly get:
* couldn't connect to host
* Closing connection #0
CURLE_COULDNT_CONNECT
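
The retry cycle looks roughly like this (a simplified sketch; the URL and the retry limit are placeholders, and NULL checks are omitted for brevity):

#include <curl/curl.h>
#include <stdio.h>

int main(void)
{
    CURL *curl;
    CURLcode res = CURLE_OK;
    int attempt;

    curl_global_init(CURL_GLOBAL_ALL);
    curl = curl_easy_init();

    for (attempt = 0; attempt < 5; attempt++) {
        curl_easy_setopt(curl, CURLOPT_URL, "ftp://example.com/file.dat");
        res = curl_easy_perform(curl);
        if (res != CURLE_COULDNT_CONNECT)
            break;
        /* full teardown and re-init on CURLE_COULDNT_CONNECT */
        curl_easy_cleanup(curl);
        curl_global_cleanup();
        curl_global_init(CURL_GLOBAL_ALL);
        curl = curl_easy_init();
    }

    fprintf(stderr, "last result: %s\n", curl_easy_strerror(res));
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return 0;
}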

If I relaunch the application, libcurl manages to connect fine as before.
I also checked for memory leaks: all fine.

For those of you who also work with FTP:
How do you handle CURLE_COULDNT_CONNECT?
Do you retry several times?


I'm running curl 7.22.0 on a 64-bit Mac with OpenSSL / libssh2.
The server is proftpd on Debian lenny.

Thanks for any hint.
---
Jonas





Gokhan Sengun
2011-11-04 08:03:49 UTC
Post by Jonas Schnelli
Under the high-load simulation, after some minutes, curl_easy_perform
always returns CURLE_COULDNT_CONNECT (7).
But the host is still reachable. Sometimes CURLE_COULDNT_CONNECT comes
up for only one attempt; sometimes it is returned endlessly.
It is not easy to tell without traces taken with the CURLOPT_VERBOSE
option set. If your software is really running fast and leaking sockets,
you may be running out of the file descriptors needed for socket
creation; the error code returned by socket() would tell. For an easy
test, if your app is launched from a console, decrease the open-file
limit to a lower value like 64 and expect a shorter time span before
failure. Below are the commands to decrease the number of allowed open
files:

ulimit -a | grep "open files" # check current limit
ulimit -n 64 # set a new limit
ulimit -a | grep "open files" # check updated limit
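
To capture both the trace and the low-level reason, something like this on the easy handle should do (a sketch; it assumes an existing handle named curl and <stdio.h> included):

char errbuf[CURL_ERROR_SIZE] = "";
CURLcode res;

curl_easy_setopt(curl, CURLOPT_VERBOSE, 1L);         /* full protocol trace on stderr */
curl_easy_setopt(curl, CURLOPT_ERRORBUFFER, errbuf); /* low-level failure detail */
res = curl_easy_perform(curl);
if (res != CURLE_OK)
    fprintf(stderr, "perform failed: %s (%s)\n", curl_easy_strerror(res), errbuf);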
Post by Jonas Schnelli
If I relaunch the application, libcurl manages to connect fine as before.
I also checked for memory leaks: all fine.
Relaunching the application tells the kernel to clean up (close/free) the
sockets and file descriptors for you. That would support my theory.
Post by Jonas Schnelli
How do you handle CURLE_COULDNT_CONNECT?
Do you retry several times?
Normally you should not need to handle it. Proper engineering should
prevent falling into this error in the first place.


---
It is twice as difficult to debug a program as to write it. Therefore, if
you put all of your creativity and effort into writing the program, you are
not smart enough to debug it.
Jonas Schnelli
2011-11-04 08:25:38 UTC
Post by Jonas Schnelli
Under the high-load simulation, after some minutes, curl_easy_perform always returns CURLE_COULDNT_CONNECT (7).
But the host is still reachable. Sometimes CURLE_COULDNT_CONNECT comes up for only one attempt; sometimes it is returned endlessly.
Post by Gokhan Sengun
It is not easy to tell without traces taken with the CURLOPT_VERBOSE option set. If your software is really running fast and leaking sockets, you may be running out of the file descriptors needed for socket creation; the error code returned by socket() would tell. For an easy test, if your app is launched from a console, decrease the open-file limit to a lower value like 64 and expect a shorter time span before failure. Below are the commands to decrease the number of allowed open files:
ulimit -a | grep "open files" # check current limit
ulimit -n 64 # set a new limit
ulimit -a | grep "open files" # check updated limit
Yes! You're my rescue!
After lowering the open-file limit, the "problem" occurs much faster.
So your theory must be right.

Could it also be that there are unclosed FILE * handles?
I assume that libcurl is not leaking sockets.

I do a proper curl_global_cleanup after I'm finished with "ALL" transfers.

But maybe the problem is "ALL".
Because I like to avoid multiple reconnects, I keep the "curl_easy session" open by just calling curl_easy_reset before every file transfer.
I only clean up after the connection is no longer needed.
But this should not affect the number of open sockets, should it?

Any ideas are welcome while I try to get the socket leak under control.
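
For clarity, here is roughly the reuse pattern I mean (a sketch; the transfer list and per-transfer options are placeholders):

#include <curl/curl.h>

/* One easy handle reused for many transfers; curl_easy_reset() clears
   all options but keeps live connections, so reconnects are avoided. */
void transfer_all(CURL *curl, char **urls, int ntransfers)
{
    int i;
    for (i = 0; i < ntransfers; i++) {
        curl_easy_reset(curl);                        /* back to pristine options */
        curl_easy_setopt(curl, CURLOPT_URL, urls[i]); /* per-transfer setup */
        curl_easy_perform(curl);
    }
    /* curl_easy_cleanup(curl) happens elsewhere, only once the
       connection is no longer needed */
}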

Thanks, Gokhan
Post by Jonas Schnelli
How do you handle CURLE_COULDNT_CONNECT?
Do you retry several times?
Post by Gokhan Sengun
Normally you should not need to handle it. Proper engineering should prevent falling into this error in the first place.
Sounds clear. But what if the user is connected over 3G and there was a timeout because of a network black hole?
But yeah, you're right, that belongs more to the business logic of my software.



Gokhan Sengun
2011-11-04 08:47:44 UTC
Post by Jonas Schnelli
Could it also be that there are unclosed FILE * handles?
I assume that libcurl is not leaking sockets.
It could be anything Linux considers a file handle (FILE, pipe, socket,
etc.). libcurl is probably innocent :)
Post by Jonas Schnelli
I do a proper curl_global_cleanup after I'm finished with "ALL" transfers.
But maybe the problem is "ALL".
Because I like to avoid multiple reconnects, I keep the "curl_easy session"
open by just calling curl_easy_reset before every file transfer.
I only clean up after the connection is no longer needed.
But this should not affect the number of open sockets, should it?
It depends on how big "ALL" is; if it is greater than the allowed soft
limit of FDs, then it is trouble.
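
If you want to check the soft limit from inside the app, getrlimit() will show it (a small POSIX sketch):

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    /* RLIMIT_NOFILE: per-process limit on open file descriptors */
    if (getrlimit(RLIMIT_NOFILE, &rl) == 0)
        printf("fd limits: soft=%llu hard=%llu\n",
               (unsigned long long)rl.rlim_cur,
               (unsigned long long)rl.rlim_max);
    return 0;
}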

I have not used it for years, but "valgrind" could be of help here for
further debugging. With the option "--track-fds=yes" it will tell you who
opened each file descriptor, e.g. "valgrind --track-fds=yes ./yourapp".

http://valgrind.org/docs/manual/manual-core.html
Jonas Schnelli
2011-11-04 11:31:23 UTC
Post by Jonas Schnelli
Could it also be that there are unclosed FILE * handles?
I assume that libcurl is not leaking sockets.
Post by Gokhan Sengun
It could be anything Linux considers a file handle (FILE, pipe, socket, etc.). libcurl is probably innocent :)
Post by Jonas Schnelli
I do a proper curl_global_cleanup after I'm finished with "ALL" transfers.
But maybe the problem is "ALL".
Because I like to avoid multiple reconnects, I keep the "curl_easy session" open by just calling curl_easy_reset before every file transfer.
I only clean up after the connection is no longer needed.
But this should not affect the number of open sockets, should it?
Post by Gokhan Sengun
It depends on how big "ALL" is; if it is greater than the allowed soft limit of FDs, then it is trouble.
I have not used it for years, but "valgrind" could be of help here for further debugging. With the option "--track-fds=yes" it will tell you who opened each file descriptor, e.g. "valgrind --track-fds=yes ./yourapp".
http://valgrind.org/docs/manual/manual-core.html
Thanks Gokhan
Solved the issue.

I had a dummy log where I append some strings to a file.
I never closed the file. :-)
So after writing some log output there were hundreds of open handles to the log file.

lsof -p PID helped me out.
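
For the record, the bug boiled down to something like this (reconstructed from memory; the file name is made up):

#include <stdio.h>

/* The leak: every log call opened the file and never closed it, so
   each call consumed one file descriptor until the limit was hit. */
void log_line_leaky(const char *msg)
{
    FILE *f = fopen("debug.log", "a");
    if (f)
        fprintf(f, "%s\n", msg);
    /* missing fclose(f) */
}

/* The fix: close the handle before returning. */
void log_line(const char *msg)
{
    FILE *f = fopen("debug.log", "a");
    if (f) {
        fprintf(f, "%s\n", msg);
        fclose(f);
    }
}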
