In follow-up to my previous post on processes and threads, I propose the following small project / test.
At present, http or ftp downloads can run in the background, at least on unix. This is useful for large files, a slow connection, not a lot of ram, etc. It works by forking the process and reissuing the http request. That is harmless 99% of the time, but even here you wonder whether the separate processes, with their separate curl instances, could mismatch. Suppose you look at 3 more websites during the download, and more cookies are added to the jar; then the download finishes and that process writes out the jar, clobbering the new cookies. Since I don't really understand when curl writes the jar, it's hard to say whether this could happen or not.

With this in mind, how about this modest project: a download determined to be in the background spins up a thread, rather than forking a process. The thread runs the http request, and when that is done it sets state = 0 in the appropriate BG_JOB structure, then the thread exits. (A rough sketch follows below.) Seems manageable, and it has the advantage of working on windows, which currently does not support the download-in-background feature at all.

Also, and more important, we can test the threadsafe nature of curl. While downloading a large file in the background, pull up some other small web pages. Do these all run in their own threads, and does the whole assemblage still use one cookie space, making transient cookies available to all threads and writing the cookie jar consistently? We really need to know this.
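Here is a rough sketch of the thread version, just to show the shape of it, using pthreads. The fields I put in BG_JOB (url, file, state) are my guesses at what the structure might hold, not the real layout, and error handling is mostly omitted.

#include <curl/curl.h>
#include <pthread.h>
#include <stdio.h>

struct BG_JOB {
    const char *url;    /* what we're fetching */
    const char *file;   /* where it lands on disk */
    volatile int state; /* nonzero while running, 0 when done */
};

/* thread body: run the http request, then mark the job finished */
static void *bg_download(void *arg)
{
    struct BG_JOB *job = arg;
    CURL *h = curl_easy_init(); /* curl_global_init() assumed done at startup */
    FILE *f = fopen(job->file, "wb");
    if (h && f) {
        curl_easy_setopt(h, CURLOPT_URL, job->url);
        curl_easy_setopt(h, CURLOPT_WRITEDATA, f);
        /* default write callback just fwrites into f */
        curl_easy_perform(h);
    }
    if (f) fclose(f);
    if (h) curl_easy_cleanup(h);
    job->state = 0;     /* the foreground polls this to see we're done */
    return NULL;
}

/* launching it would look something like:
    pthread_t t;
    job->state = 1;
    pthread_create(&t, NULL, bg_download, job);
    pthread_detach(t);
*/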
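On the one-cookie-space question: libcurl has a share interface that is meant to let several easy handles share cookies, with lock callbacks that we supply. I don't know yet whether it holds up under this kind of load; that's part of what the experiment would tell us. A minimal sketch, assuming one global mutex is good enough:

#include <curl/curl.h>
#include <pthread.h>

static pthread_mutex_t cookie_mutex = PTHREAD_MUTEX_INITIALIZER;
static CURLSH *cookie_share;

static void share_lock(CURL *h, curl_lock_data data,
                       curl_lock_access access, void *userp)
{
    (void)h; (void)data; (void)access; (void)userp;
    pthread_mutex_lock(&cookie_mutex);
}

static void share_unlock(CURL *h, curl_lock_data data, void *userp)
{
    (void)h; (void)data; (void)userp;
    pthread_mutex_unlock(&cookie_mutex);
}

void init_cookie_share(void)
{
    cookie_share = curl_share_init();
    curl_share_setopt(cookie_share, CURLSHOPT_SHARE, CURL_LOCK_DATA_COOKIE);
    curl_share_setopt(cookie_share, CURLSHOPT_LOCKFUNC, share_lock);
    curl_share_setopt(cookie_share, CURLSHOPT_UNLOCKFUNC, share_unlock);
}

/* then every easy handle, foreground or background, does
    curl_easy_setopt(h, CURLOPT_SHARE, cookie_share);
   so transient cookies are visible to all threads, and whoever
   writes the jar writes one consistent set. */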
It's incremental, and as Adam writes, it increments us in the right direction. What do you think?

Karl Dahlke

_______________________________________________
Edbrowse-dev mailing list
[email protected]
http://lists.the-brannons.com/mailman/listinfo/edbrowse-dev