Re: working on patch to limit to "percent of bandwidth"
On 10/12/07, Hrvoje Niksic <[EMAIL PROTECTED]> wrote:
> Personally I don't see the value in attempting to find out the
> available bandwidth automatically. It seems too error prone, no
> matter how much heuristics you add into it. --limit-rate works
> because reading the data more slowly causes it to (eventually) also be
> sent more slowly. --limit-percentage is impossible to define in
> precise terms; there's just too much guessing.

Yeah, that is a good point. Hence, I vote for it to become a module.
Re: working on patch to limit to "percent of bandwidth"
"Tony Godshall" <[EMAIL PROTECTED]> writes:
>> My point remains that the maximum initial rate (however you define
>> "initial" in a protocol as unreliable as TCP/IP) can and will be
>> wrong in a large number of cases, especially on shared connections.
>
> Again, would an algorithm where the rate is re-measured periodically,
> and the initial-rate-error criticism were therefore addressed, reduce
> your objection to the patch?

Personally I don't see the value in attempting to find out the available bandwidth automatically. It seems too error prone, no matter how much heuristics you add into it. --limit-rate works because reading the data more slowly causes it to (eventually) also be sent more slowly. --limit-percentage is impossible to define in precise terms; there's just too much guessing.
Re: anyone look at the actual patch? anyone try it? [Re: working on patch to limit to "percent of bandwidth"]
Tony Godshall wrote:
> [Jim]
>> Well, we need the plugin architecture anyway. There are some planned
>> features (JavaScript and MetaLink support being the main ones) that have
>> no business in Wget proper, as far as I'm concerned, but are inarguably
>> useful.
>
>>> I know when I put an app into an embedded app, I'd rather not even
>>> have the overhead of the plug-in mechanism, I want it smaller than
>>> that.
>
>> You have a good point regarding customized compilation, though I think
>> that most of the current features in Wget belong as core features. There
>> are some small exceptions (egd sockets).
>
> Thanks.

(You've misattributed this: it's me talking here, not Jim.)

> OK, so, so far there are three of us, I think, that find it potentially
> useful.

One of whom, you'll note, was happy to see it as a module, which is what I had also been suggesting.

>> This doesn't look to me like a vital function, one that a large number
>> of users will find mildly useful, or one that a mild number of users
>> will find extremely useful. This looks like one that a mild number of
>> users will find mildly useful. Only slightly more useful, in fact, than
>> what is already done.
>
> You keep saying that. You seem to think unknown upstream bandwidth is
> a rare thing. Or that wanting to be nice to other bandwidth users in
> such a circumstance is a rare thing.

I do not think it's a particularly rare thing. I think it's a fairly easily-dealt-with thing.

> Like I said when I submitted the patch, this essentially automates
> what I do manually:
>   wget somesite
>   ctrl-c
>   wget -c --limit-rate nnK somesite

What I've been trying to establish is whether automating such a thing (directly within Wget) is a useful-enough thing to justify the patch.

>> But they mainly fall into the category of features that a large number
>> of users will use occasionally, and a small number of users will find
>> indispensable much of the time. Will you find this feature
>> indispensable, or can you pretty much use --limit-rate with a reasonable
>> value to do the same thing?
>
> Horse dead. Parts rolling in the freeway.

Is it? I was talking to Jim, not you. He actually hadn't said very much until this point.

>> If, on the other hand, it is really just a pretty minor improvement
>> that happens to be mildly useful to you, could we please drop using this
>> as a platform to predict what my future reactions to new features in
>> general are likely to be? :p
>
> Well, when a guy first joins the list and submits his first patch and gets...

Gets what? One should not expect that all patches are automatically accepted. Jim knows this, and has also seen other people come with patches I've accepted, which is why it's just silly to accuse me of something there's already ample proof I don't do. And what is it you "got"? Did I ever say, "No, it's not going in"? Did I ever say, "I'm against it"? What I repeatedly said was, "I need convincing."

> Anyhow, perhaps I did the wrong thing in bringing it here; perhaps I
> should have provided it as a wishlist bug in Debian and seen how many
> ordinary people find it useful before taking it to the source...
> perhaps I should have vetted it or whatever.

Sure, vetting it is entirely helpful. Getting feedback from a larger community of users is very helpful. And, lamentably, the current activity level of this list is not sufficient that I can gauge how useful a feature is to the community as a whole from the five-or-so people that participate on this list. I cannot gauge how useful a feature is from how loudly the contributor proclaims it's useful. I already _know_ you find it useful, as you cared enough to bother writing a patch. What I was hoping to hear, but hadn't heard much of until just now, was more support from the rest of the community. Jim had spoken up, but not particularly strongly.

Rather than waiting for people to have the chance to speak up, though, you just got louder. What is most interesting to me is your reaction to my statements, which were never "I'm not putting it in", but "I think it should wait and live as an accessory." And to this you got upset, both defensive and offensive. This does not make it likelier for me to include your changes.

In this specific case, there's probably a good chance it'll go in (not for 1.11, though), as I'm clearer now on exactly how useful Jim finds it, and we've also had another speak up. In the future, though, if you've got something you'd like me to consider including, you might consider just a bit more patience than you've exhibited this time around.

Hopefully this thread can go away now, unless someone has something truly new to contribute.

--
Micah J. Cowan
Programmer, musician, typesetting enthusiast, gamer...
http://micah.cowan.name/
Re: anyone look at the actual patch? anyone try it? [Re: working on patch to limit to "percent of bandwidth"]
I don't want this to spiral down to Micah bashing. He has brought a lot of good energy to the project, and gotten things moving forward nicely. Thanks.

I know of instances where this option would be useful for me, and others have chipped in. I think we all agree it isn't perfect and there is no perfect solution for the situation. But it is better than what exists now.

How about one last hypothetical situation, and then I'll bow out of this. (Yes, I can live a happy life if this option isn't included!) Joe Random User can type

    wget --limit-percent 50% ftp://site/BigImage.iso

and then happily play his online game without giving up all his bandwidth, and without having to have a clue about networking. A simple --limit-percent replaces trying to explain to someone how to determine their bandwidth and then specifying some amount which is less than the total but still leaves enough for "other activity".

Jim
Re: anyone look at the actual patch? anyone try it? [Re: working on patch to limit to "percent of bandwidth"]
...
> > I guess I'd like to see compile-time options so people could make a
> > tiny version for their embedded system, with most options and all
> > documentation stripped out, and a huge kitchen-sink all-the-bells
> > version with complete documentation for the power user. I don't
> > think you have to go to a totally new (plug-in) architecture or make
> > the hard choices.

[Jim]
> Well, we need the plugin architecture anyway. There are some planned
> features (JavaScript and MetaLink support being the main ones) that have
> no business in Wget proper, as far as I'm concerned, but are inarguably
> useful.

> > I know when I put an app into an embedded app, I'd rather not even
> > have the overhead of the plug-in mechanism, I want it smaller than
> > that.

> You have a good point regarding customized compilation, though I think
> that most of the current features in Wget belong as core features. There
> are some small exceptions (egd sockets).

Thanks. Well, when I'm building an embedded device, I look at the invocations of wget that are actually being called in the scripts. Since the end product has no interactive shell, I don't need to have all those extra options enabled! In fact, in wget's case, one can often dispense with the tool entirely; the busybox version suffices.

> > ... And when I'm running the gnu version of something I expect it
> > to have verbose man pages and lots of double-dash options; that's what
> > tools like less and grep are for.

> Well... many GNU tools actually lack "verbose man pages", particularly
> since "info" is the preferred documentation system for GNU software.

Well, I guess I'm spoiled by Debian. If it ain't broke, don't fix it. Debian makes man pages because tools should have manpages. IIRC, that was one of the "divorce" issues.

> Despite the fact that many important GNU utilities are very
> feature-packed, they also tend not to have options that are only useful
> to a relatively small number of people--particularly when equivalent
> effects are possible with preexisting options.
>
> As to the overhead of the plugin mechanism, you're right, and I may well
> decide to make that optionally compiled.

Well, I'd rather have rate-limiting things be optionally compiled than plugged in, since they'd be useful for embedded devices.

[Micah]
> >> It's not really about this option, it's about a class of options. I'm in
> >> the unenviable position of having to determine whether small patches
> >> that add options are sufficiently useful to justify the addition of the
> >> option. Adding one new option/rc command is not a problem. But when,
> >> over time, fifty people suggest little patches that offer options with
> >> small benefits, we've suddenly got fifty new options cluttering up the
> >> documentation and --help output.

[Jim]
> > I would posit that the vast majority of wget options are used in some
> > extremely small percentage of wget invocations. Should they be removed?

[Micah]
> Such as which ones?
>
> I don't think we're talking about the same "extremely small percentages".

OK, so, so far there are three of us, I think, that find it potentially useful. And you have not addressed the use cases I brought up. So I think your "extremely small percentages" assumption may be faulty.

> Looking through the options listed with --help, I can find very few
> options that I've never used or would not consider vital in some
> situations I (or someone else) might encounter.
>
> This doesn't look to me like a vital function, one that a large number
> of users will find mildly useful, or one that a mild number of users
> will find extremely useful. This looks like one that a mild number of
> users will find mildly useful. Only slightly more useful, in fact, than
> what is already done.

You keep saying that. You seem to think unknown upstream bandwidth is a rare thing. Or that wanting to be nice to other bandwidth users in such a circumstance is a rare thing. I wish I lived in your universe. Mine's a lot more sloppy.

> It's also one of those "fuzzy" features that addresses a scenario that
> has no "right" solution (JavaScript support is in that domain). These
> sorts of features tend to invite a gang of friends to help get a little
> bit closer to the unreachable target. For instance, if we include this
> option, then the same users will find another option to control the
> period of time spent "full-bore" just as useful. A "pulse" feature might
> be useful, but then you'll probably want an option to control the
> spacing between those, too. And someone else may wish to introduce an
> option that saves bandwidth information persistently, and uses this to
> make a good estimate from the beginning.

Ah, finally, some meat. You see this as opening a door. Especially as I inquire as to whether anyone has feedback on my implementation, you see it mushrooming into a plethora of options.

> And all of this would amount to a very mild improvement over what
> al
Re: working on patch to limit to "percent of bandwidth"
On 10/12/07, Josh Williams <[EMAIL PROTECTED]> wrote:
> On 10/12/07, Tony Godshall <[EMAIL PROTECTED]> wrote:
> > Again, I do not claim to be unobtrusive. Merely to reduce
> > obtrusiveness. I do not and cannot claim to be making wget *nice*,
> > just nicER.
> >
> > You can't deny that dialing back is nicer than not.
>
> Personally, I think this is a great idea. But I do agree that the
> documentation is a bit messy right now (as well as the code). If this
> doesn't make it into the current trunk, I think it'd make a great
> module in version 2.

Thanks for the support.
Re: working on patch to limit to "percent of bandwidth"
On 10/12/07, Tony Godshall <[EMAIL PROTECTED]> wrote:
> Again, I do not claim to be unobtrusive. Merely to reduce
> obtrusiveness. I do not and cannot claim to be making wget *nice*,
> just nicER.
>
> You can't deny that dialing back is nicer than not.

Personally, I think this is a great idea. But I do agree that the documentation is a bit messy right now (as well as the code). If this doesn't make it into the current trunk, I think it'd make a great module in version 2.
Re: working on patch to limit to "percent of bandwidth"
On 10/12/07, Hrvoje Niksic <[EMAIL PROTECTED]> wrote:
> "Tony Godshall" <[EMAIL PROTECTED]> writes:
>
> >> > available bandwidth and adjusts to that. The usefulness is in
> >> > trying to be unobtrusive to other users.
> >>
> >> The problem is that Wget simply doesn't have enough information to be
> >> unobtrusive. Currently available bandwidth can and does change as new
> >> downloads are initiated and old ones are turned off. Measuring
> >> initial bandwidth is simply insufficient to decide what bandwidth is
> >> really appropriate for Wget; only the user can know that, and that's
> >> what --limit-rate does.
> >
> > My patch (and the doc change in my patch) don't claim to be totally
> > unobtrusive [...] Obviously people who want the level of unobtrusiveness
> > you define shouldn't be using it.
>
> It was never my intention to define a particular level of
> unobtrusiveness; the concept of being unobtrusive to other users was
> brought up by Jim and I was responding to that. My point remains that
> the maximum initial rate (however you define "initial" in a protocol
> as unreliable as TCP/IP) can and will be wrong in a large number of
> cases, especially on shared connections.

Again, would an algorithm where the rate is re-measured periodically, and the initial-rate-error criticism were therefore addressed, reduce your objection to the patch? Perhaps you can answer each idea I gave separately:

a) full-speed downloads (which re-measure channel capacity) followed by long sleeps
b) speed ramps up to peak and then back down

> Not only is it impossible to
> be "totally unobtrusive", but any *automated* attempts at being nice
> to other users are doomed to failure, either by taking too much (if
> the download starts when you're alone) or too little (if the download
> starts on a shared connection).

Again, I do not claim to be unobtrusive. Merely to reduce obtrusiveness. I do not and cannot claim to be making wget *nice*, just nicER.

You can't deny that dialing back is nicer than not.

--
Best Regards. Please keep in touch.
Re: wget does not compile with SSL support
Thomas Wolff wrote:
> So I think it's clear the version thus produced was invoked.

Yeah, guess it couldn't be that easy! :)

Hm... well, can you verify that src/config.h has been correctly generated to #define HAVE_SSL? If you go to src/ and type "rm url.o; make url.o CFLAGS=-E | $PAGER", what does the definition of supported_schemes look like?

--
Micah J. Cowan
Programmer, musician, typesetting enthusiast, gamer...
http://micah.cowan.name/
Re: Version tracking in Wget binaries
Hrvoje Niksic wrote:
> Micah Cowan <[EMAIL PROTECTED]> writes:
>
>> Among other things, version.c is now generated rather than parsed;
>> it is regenerated every time "make all" is run, which also means that
>> "make all" will always relink the wget binary, even if there haven't
>> been any changes.
>
> I personally find that quite annoying. :-( I hope there's a very
> good reason for introducing that particular behavior.

Well, making version.c a generated file is necessary to get the most-recent revision for the working directory. I'd like to avoid it, obviously, but am not sure how without making version.c dependent on every source file. But maybe that's the appropriate fix. It shouldn't be too difficult to arrange; probably just

    version.c: $(wget_SOURCES)

or similar.

It's not 100% effective; it relies on (1) this source directory being managed as a repository, and (2) the user possessing a copy of Mercurial (which seems likely if (1) is true). So, for instance, clicking the "bz2" link at http://hg.addictivecode.org/wget/mainline means you aren't getting a repository, and won't get the revision stamp. :\ I'm currently looking into ways to deal with this. For instance, I can add an extension to the repository on the server that ensures that archives are modified to include their version information before they're shipped out; that could help.

There is also the problem that, if it _is_ a repository clone, the local user may have made local changes and committed them, in which case I'll get a different revision id (which is a truncated SHA1 hash, not a linear number as with Subversion*), with no information about how it relates to revision ids I know about from the official repos.

* Mercurial actually has linear numbers, but they're only meaningful to that one specific repository instance, as the same hashes may have different corresponding numbers on someone else's clone. They're basically for making the local repo easier to work with, not for sharing around.

I'm happy to field suggestions!

> BTW does that mean that, for example, running `make install' also
> attempts to relink Wget?

Yup.

--
Micah J. Cowan
Programmer, musician, typesetting enthusiast, gamer...
http://micah.cowan.name/
Re: Version tracking in Wget binaries
Micah Cowan <[EMAIL PROTECTED]> writes:
> Among other things, version.c is now generated rather than parsed;
> it is regenerated every time "make all" is run, which also means that
> "make all" will always relink the wget binary, even if there haven't
> been any changes.

I personally find that quite annoying. :-( I hope there's a very good reason for introducing that particular behavior.

BTW does that mean that, for example, running `make install' also attempts to relink Wget?
Re: working on patch to limit to "percent of bandwidth"
"Tony Godshall" <[EMAIL PROTECTED]> writes:
>> > available bandwidth and adjusts to that. The usefulness is in
>> > trying to be unobtrusive to other users.
>>
>> The problem is that Wget simply doesn't have enough information to be
>> unobtrusive. Currently available bandwidth can and does change as new
>> downloads are initiated and old ones are turned off. Measuring
>> initial bandwidth is simply insufficient to decide what bandwidth is
>> really appropriate for Wget; only the user can know that, and that's
>> what --limit-rate does.
>
> My patch (and the doc change in my patch) don't claim to be totally
> unobtrusive [...] Obviously people who want the level of unobtrusiveness
> you define shouldn't be using it.

It was never my intention to define a particular level of unobtrusiveness; the concept of being unobtrusive to other users was brought up by Jim and I was responding to that. My point remains that the maximum initial rate (however you define "initial" in a protocol as unreliable as TCP/IP) can and will be wrong in a large number of cases, especially on shared connections. Not only is it impossible to be "totally unobtrusive", but any *automated* attempts at being nice to other users are doomed to failure, either by taking too much (if the download starts when you're alone) or too little (if the download starts on a shared connection).
Re: wget does not compile with SSL support
> Thomas Wolff wrote:
> > Hi,
> > as requested, I am sending you the output of configure and config.log
> > for checking the problem that my compiled wget does not retrieve
> > over https ("Unsupported scheme").
>
> Thomas, I don't see that anything went wrong at all with the
> configuration. This makes me think that you're not actually using the
> wget that you built when you try to use https.
> What does "command -v wget" give? How about "whereis wget"?

I did check this potential problem as follows:

    [EMAIL PROTECTED]:~/opt/wget-1.10.2> ls -l config.log src/wget
    -rw-r--r-- 1 demsn702 mdd   82290 Oct 11 18:11 config.log
    -rwxr-xr-x 1 demsn702 mdd 1600208 Oct 11 18:15 src/wget
    [EMAIL PROTECTED]:~/opt/wget-1.10.2> ./src/wget https://x.y
    https://x.y: Unsupported scheme.
    [EMAIL PROTECTED]:~/opt/wget-1.10.2>

So I think it's clear the version thus produced was invoked.

Regards, Thomas