On Wed, 28 Nov 2001 10:15, Steve Loughran wrote:
> oh, one more thing I remembered, but committing the http tasks into the
> sandbox has just reminded me: the java.net implementation of http is
> woefully inadequate and blissfully different from version to version.
>
> I do not, therefore, think use of the standard classes should be
> encouraged. Both Get.java and the new stuff I have just put in the sandbox
> are weak in that they do use the existing libraries, aren't proper HTTP/1.1
> clients, and can't handle digest auth.
>
> Ignoring http issues, there is still some validity in supporting URLs in
> copy, because then you can use alternate URL schemes, such as
> "classpath:/com/mycompany/web-app_2_2.dtd" inside a copy command, which
> lets you copy from a classpath resource to an output file.
>
> However, I suspect that Get does that too - I will have to test it and
> see - and doc it if so.
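[To make the "classpath:" idea above concrete: a minimal sketch of how such a scheme could be wired up via a custom java.net stream handler. The class name and details are illustrative assumptions, not Ant's or anyone's actual implementation.]

```java
// Illustrative only: a hand-rolled "classpath:" URLStreamHandler of the kind
// the quoted mail describes. Not taken from Ant; names are assumptions.
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.net.URLConnection;
import java.net.URLStreamHandler;

class ClasspathHandler extends URLStreamHandler {
    protected URLConnection openConnection(URL u) throws IOException {
        // Strip the leading "/" so the path matches ClassLoader conventions.
        String p = u.getPath();
        final String path = p.startsWith("/") ? p.substring(1) : p;
        return new URLConnection(u) {
            public void connect() {
                // Nothing to do; the stream is opened lazily below.
            }
            public InputStream getInputStream() throws IOException {
                InputStream in =
                        getClass().getClassLoader().getResourceAsStream(path);
                if (in == null) {
                    throw new IOException("Classpath resource not found: " + path);
                }
                return in;
            }
        };
    }
}
```

[A task could then open such a resource with e.g. `new URL(null, "classpath:/com/mycompany/web-app_2_2.dtd", new ClasspathHandler()).openStream()` and copy the stream to a file.]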
Yep. Using native URL objects is, for all intents and purposes, out of the
picture, because it is a really really really really really really brittle
system. Oh - did I mention it sucks? ;)

I would much prefer a more fully-fledged system that allows you to plug in a
virtual filesystem of any sort you want (cvs, file, ftp, http, webdav,
whatever). There are already setups like this that are partially available:
Sun had the xFile package, I have a vFile package lying about somewhere, and
NetBeans has a fully-fledged system. I would recommend looking at the
NetBeans system first, as it is actively developed and probably a lot more
build-oriented.

We can still use URL-like strings to designate resources, but the actual
work is done by the VFS rather than by URL objects. We could even query a
filesystem to see if it supports the features we need, and so forth.

--
Cheers,

Pete

"Invincibility is in oneself,
 vulnerability in the opponent." - Sun Tzu

--
To unsubscribe, e-mail: <mailto:[EMAIL PROTECTED]>
For additional commands, e-mail: <mailto:[EMAIL PROTECTED]>
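[To make the pluggable-VFS idea concrete: a minimal sketch of a registry that dispatches "scheme:path" strings to per-scheme backends and lets callers query for optional features. All names here (VfsResource, VfsProvider, Vfs) are invented for illustration and match neither the NetBeans filesystem API nor xFile/vFile.]

```java
// Sketch of a pluggable virtual filesystem; names are assumptions, not any
// real API. Raw collections are used in the pre-generics style of the era.
import java.io.IOException;
import java.io.InputStream;
import java.util.HashMap;
import java.util.Map;

/** A resource designated by a URL-like string, e.g. "classpath:/com/x". */
interface VfsResource {
    InputStream openStream() throws IOException;
}

/** One backend per scheme (file, ftp, http, webdav, cvs, classpath, ...). */
interface VfsProvider {
    VfsResource resolve(String path) throws IOException;

    /** Query the backend for optional capabilities, e.g. "write" or "list". */
    boolean supports(String feature);
}

/** Registry that dispatches "scheme:path" strings to the right provider. */
class Vfs {
    private final Map providers = new HashMap(); // scheme -> VfsProvider

    void register(String scheme, VfsProvider provider) {
        providers.put(scheme, provider);
    }

    VfsResource resolve(String url) throws IOException {
        int colon = url.indexOf(':');
        if (colon < 0) {
            throw new IOException("No scheme in: " + url);
        }
        String scheme = url.substring(0, colon);
        VfsProvider p = (VfsProvider) providers.get(scheme);
        if (p == null) {
            throw new IOException("No provider registered for: " + scheme);
        }
        return p.resolve(url.substring(colon + 1));
    }
}
```

[The point of the `supports` query is that a task like copy could refuse up front to write to a read-only backend, instead of failing halfway through.]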