On Feb 8, 2008 9:58 PM, Jacqui Lahr [EMAIL PROTECTED] wrote:
Hi. I've been trying to install Opera on the OLPC XO with info from
the wiki's Opera page,
and I get messages to contact you. I've tried both codes(?), with the
tarball and without. I have been using Macs since the 512, and in my 75
On Jan 31, 2008 8:21 PM, Diego 'Flameeyes' Pettenò [EMAIL PROTECTED] wrote:
char *foo = "ab" - 4 + 3 = 9 bytes
How did you get 9?
On Dec 12, 2007 1:46 PM, Micah Cowan [EMAIL PROTECTED] wrote:
And, what do you think about enabling that option by default when
recursive mode is on?
Well, I think it's obvious that we need the option. But I don't think
it should be enabled by default. By default, shouldn't we want to
capture
On Dec 9, 2007 7:03 PM, Stuart Moore [EMAIL PROTECTED] wrote:
Could the exit code used be determined by a flag? E.g., by default it
uses the Unix convention, 0 for any success; with an
--extended_error_codes flag or similar, it uses extra error codes
depending on the type of success (but for
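To make the proposal concrete, here is a minimal C sketch of such a gate. Everything in it is hypothetical: the flag is Stuart's suggestion, and the function name and pass-through behavior are mine, not anything in wget at the time.

  /* Hypothetical: map an internal status to a process exit code.
     With the flag off, collapse everything to the classic 0/1 Unix
     convention; with it on, pass the fine-grained codes through. */
  static int
  choose_exit_code (int internal_status, int extended_codes_enabled)
  {
    if (!extended_codes_enabled)
      return internal_status == 0 ? 0 : 1;
    return internal_status;
  }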
On 12/7/07, Brian [EMAIL PROTECTED] wrote:
For the life of me, I cannot convince wget to download an old copy of a
website from the Internet Archive. I think the URL within a URL is somehow
messing it up...
wget -e robots=off --base=http://web.archive.org/web/19990125085924/http://gnu.org/
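One thing worth ruling out first is shell mangling: left unquoted, the embedded second URL can be split or reinterpreted by the shell. A quoted, single-line guess at the intended command (the -r is an assumption, since "download a website" suggests a recursive fetch):

  wget -e robots=off -r "http://web.archive.org/web/19990125085924/http://gnu.org/"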
On Nov 29, 2007 6:20 PM, David Ginger
[EMAIL PROTECTED] wrote:
So can I ask: is a wget2 actually being developed?
Go ahead, but I'll answer that question before you do ;-)
The answer is no - not at the moment. But we've been discussing it for
several months. It will be a while before any code is
On 11/29/07, Micah Cowan [EMAIL PROTECTED] wrote:
A new discussion page on the wiki:
http://wget.addictivecode.org/Wget2Names
(Does it sound a bit too much like something that extracts names from
wget output? :) )
I really like the name `fetch` because it does what it says it does.
It's
On 11/29/07, Micah Cowan [EMAIL PROTECTED] wrote:
- Alan has prior history on this list. Check the archives:
yeah, I remember him. And is it just me, or does it seem that
something's going to go down tonight with wget 2? ;-)
On 11/29/07, Micah Cowan [EMAIL PROTECTED] wrote:
Yeah... of course they won't be able to edit the wiki that way.
I doubt you'd get the slashdot effect from just the people who're
interested in editing the wiki. You may get a handful of developers
and a few thousand people who only want to read
On 11/29/07, Micah Cowan [EMAIL PROTECTED] wrote:
Well, the trouble with that is that I'm running all of Wget's stuff
(plus my own personal mail and whatnot) on a little VPS. I'm rather
concerned that the traffic will kill me. I'm already worried about it
potentially hitting SlashDot or Digg
On 11/29/07, Micah Cowan [EMAIL PROTECTED] wrote:
Well don't look at _me_; I'm not the one who brought it up! ;)
Heh. I wasn't looking for some grand unveiling. It just seems to be
attracting a lot of attention, and we should probably start putting
more effort into it.
I'm going
On 11/29/07, Micah Cowan [EMAIL PROTECTED] wrote:
I dunno, man, I think our current wget2 roadmap goals are already pretty
wild-and-crazy. ;)
I agree. I think we should create an announcement asking for
developers to help and submit it to digg and slashdot. The new
features may get some
On 11/29/07, Alan Thomas [EMAIL PROTECTED] wrote:
Sorry for the misunderstanding. Honestly, Java would be a great language
for what wget does. Lots of built-in support for web stuff. However, I was
kidding about that. wget has a ton of great functionality, and I am a
reformed C/C++
On 11/4/07, Micah Cowan [EMAIL PROTECTED] wrote:
Christian Roche has submitted a revised version of a patch to modify the
unique-name-finding algorithm to generate names in the pattern
foo-n.html rather than foo.html.n. The patch looks good, and will
likely go in very soon.
That's something I
On 11/4/07, Hrvoje Niksic [EMAIL PROTECTED] wrote:
It just occurred to me that this change breaks backward compatibility.
It will break scripts that try to clean up after Wget or that in any
way depend on the current naming scheme.
You mean the scripts that fix the same problem this patch
On 10/26/07, Micah Cowan [EMAIL PROTECTED] wrote:
And, of course, when I say there would be two Wgets, what I really
mean by that is that the more exotic-featured one would be something
else entirely than a Wget, and would have a separate name.
I think the idea of having two Wgets is good. I
On 10/15/07, patrick robinson [EMAIL PROTECTED] wrote:
Hello,
I want to unsubscribe from this list but lost my registration e-mail.
How is this performed?
You can find this (and other information) on the Wget wiki.
http://wget.addictivecode.org/
To unsubscribe from a list, send an email to
On 10/15/07, Micah Cowan [EMAIL PROTECTED] wrote:
Note that this doesn't help him much if he's lost his registration e-mail.
Patrick, you'll probably have to go bug the staff at www.dotsrc.org, who
hosts this list; send an email to [EMAIL PROTECTED]
E-mail *address* or just the e-mail? I
On 10/13/07, Tony Godshall [EMAIL PROTECTED] wrote:
OK, so let's go back to basics for a moment.
wget's default behavior is to use all available bandwidth.
Is this the right thing to do?
Or is it better to back off a little after a bit?
Tony
IMO, this should be handled by the operating
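For the archive: wget already offers a manual cap, which sidesteps guessing the right rate automatically. The URL here is a placeholder; --limit-rate is a real option and accepts k and m suffixes:

  wget --limit-rate=100k http://example.com/big.iso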
On 10/13/07, Tony Godshall [EMAIL PROTECTED] wrote:
Well, you may have such problems but you are very much reaching in
thinking that my --linux-percent has anything to do with any failing
in linux.
It's about dealing with unfair upstream switches, which, I'm quite
sure, were not running
On 10/13/07, Micah Cowan [EMAIL PROTECTED] wrote:
Hi Joshua,
There is a very strong likelihood that this has been fixed in the
current development version of Wget. Could you try with that?
If you're a Windows user, you can get a binary from
On 10/12/07, Tony Godshall [EMAIL PROTECTED] wrote:
Again, I do not claim to be unobtrusive. Merely to reduce
obtrusiveness. I do not and cannot claim to be making wget *nice*,
just nicER.
You can't deny that dialing back is nicer than not.
Personally, I think this is a great idea. But I
On 10/12/07, Hrvoje Niksic [EMAIL PROTECTED] wrote:
Personally, I don't see the value in attempting to find out the
available bandwidth automatically. It seems too error-prone, no
matter how many heuristics you add to it. --limit-rate works
because reading the data more slowly causes it to
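The mechanism being described is TCP backpressure: if the client reads slowly, its receive window fills and the sender slows down to match. A self-contained C sketch of that style of client-side throttle (illustrative only, not wget's actual retr.c code):

  #include <unistd.h>

  /* After total_bytes have arrived in elapsed_secs, sleep just long
     enough that the average rate stays at or below limit_bps. */
  static void
  throttle (long long total_bytes, double elapsed_secs, long limit_bps)
  {
    double expected_secs = (double) total_bytes / (double) limit_bps;
    if (expected_secs > elapsed_secs)
      usleep ((useconds_t) ((expected_secs - elapsed_secs) * 1e6));
  }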
On 10/8/07, A. P. Godshall [EMAIL PROTECTED] wrote:
Anyhow, does this seem like something others of you could use? Should
I submit the patch to the submit list or should I post it here for
people to hash out any parameterization niceties etc first?
Go ahead and send it on here so we can
On 10/4/07, Brian Keck [EMAIL PROTECTED] wrote:
I would have sent a fix too, but after finding my way through http.c
and retr.c I got lost in url.c.
You and me both. A lot of the code needs to be rewritten... there's a lot of
spaghetti code in there. I hope Micah chooses to do a complete
re-write for
On 9/9/07, Jochen Roderburg [EMAIL PROTECTED] wrote:
Hi,
This is now an easy case for a change ;-)
In the log output for wget -c we have the line:
The sizes do not match (local 0) -- retrieving.
This always shows 0 as the local size in the current svn version.
The variable which is
On 9/12/07, Juhana Sadeharju [EMAIL PROTECTED] wrote:
A forum has topics which are available only to members.
How can I use wget to download a copy of the pages in that
case? How do I get the proper cookies, and how do I get wget to
use them correctly? I use IE on PC/Windows and wget on
a unix
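The usual recipe: export the browser's cookies to a Netscape-format cookies.txt and hand that file to wget. Both options below are real wget options; the file name and URL are placeholders:

  wget --load-cookies cookies.txt --keep-session-cookies -r "http://forum.example.com/members-only/"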
On 9/11/07, Hex Star [EMAIL PROTECTED] wrote:
When I try to execute the command (minus quotes) wget -P ftp.usask.ca -r
-np -passive-ftp ftp://ftp.usask.ca/pub/mirrors/apple/
wget works for a bit and then terminates with the following error:
xmalloc.c:186: failed assertion `ptr != NULL'
Abort
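For context on why that kills the process: wget's xmalloc is a fail-fast malloc wrapper, roughly of this shape (a from-memory sketch, not the literal xmalloc.c source):

  #include <stdlib.h>
  #include <assert.h>

  /* Fail-fast allocator: aborts (as in the report above) instead of
     returning NULL when allocation fails. */
  void *
  xmalloc (size_t size)
  {
    void *ptr = malloc (size);
    assert (ptr != NULL);
    return ptr;
  }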
On 9/12/07, Erik Bolstad [EMAIL PROTECTED] wrote:
Hi!
I'm doing a master's thesis on online news at the University of Oslo,
and need software that can download html pages based on RSS feeds.
I suspect that Wget could be modified to do this.
- Do you know if there are any ways to get Wget to
On 9/13/07, Hex Star [EMAIL PROTECTED] wrote:
wget 1.9+cvs-dev
Try it in either the latest release or (preferably) the subversion
trunk and let us know if you still have the same problem. The version
you're using is an old trunk version, so we can safely assume it has
plenty of bugs that have since been fixed
On 9/7/07, Micah Cowan [EMAIL PROTECTED] wrote:
Doh! Of course, it's .org. Fortunately all the other links, including
the ones from the site at gnu.org, seem to be correct.
Unfortunately for you, your typo is now an official piece of free
software history! :D
Just poking. :-P
On 9/6/07, Alan Thomas [EMAIL PROTECTED] wrote:
I know this is probably something simple I screwed up, but the following
commands in a Windows batch file return the error "Bad command or file
name" for the wget command
cd ..
wget --convert-links
On 9/6/07, Micah Cowan [EMAIL PROTECTED] wrote:
Not really; we've been Cc'ing you. I don't think we knew whether you
were subscribed or not, and so Cc'd you in case you weren't. Also, many
of us just habitually hit Reply All on the message, so we don't
accidentally send it to the message's
On 9/3/07, Andreas Kohlbach [EMAIL PROTECTED] wrote:
Hi,
Though the man page of wget mentions .netrc, I assume this is a bug.
To my understanding, if you provide --user=user and --password=password
on the command line, this should override any setting elsewhere, such as
in the .netrc. It
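For reference, the competing .netrc entry would look like this (host and credentials are placeholders), the question being whether the command-line pair should win:

  # ~/.netrc -- must be private (chmod 600)
  machine ftp.example.com
  login andreas
  password from-netrc

  wget --user=from-cmdline --password=also-from-cmdline ftp://ftp.example.com/file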
On 9/2/07, Christopher G. Lewis [EMAIL PROTECTED] wrote:
Warning_C4142_Fix.diff
Windows added support for intptr_t and uintptr_t with Visual Studio 2003
(_MSC_VER 1310).
This patch removes 60+ warnings from the MSWindows build
Holy crap, that's a lot of warnings for such a small patch. Thanks!
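The likely shape of such a fix is a version guard around the fallback typedefs (a sketch of the idea, not Christopher's actual diff):

  /* VS2003 and later (_MSC_VER >= 1310) ship intptr_t/uintptr_t,
     so only older compilers need a homegrown fallback. */
  #if defined(_MSC_VER) && _MSC_VER < 1310
  typedef long intptr_t;
  typedef unsigned long uintptr_t;
  #endif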
On 8/23/07, Micah Cowan [EMAIL PROTECTED] wrote:
--user-agent Mozilla does the trick. Apparently Intel's website does
not like wget. :)
Stinky buzzards. What did we ever do to them?
On 8/22/07, Josh Williams [EMAIL PROTECTED] wrote:
In src/url.c, function in_url_list_p, there is an argument called
bool verbose, but it is never used. Furthermore, the verbose option
is defined in our options struct.
Should this argument be removed?
Below is a patch of this change.
Index
On 8/22/07, Micah Cowan [EMAIL PROTECTED] wrote:
This looks very reasonable, Josh. Feel free to check this change
directly into the trunk (with a note in src/ChangeLog).
That I will, when I get home tonight. The stupid network at the
college is blocking subversion. I'm going to have to
On 8/22/07, Micah Cowan [EMAIL PROTECTED] wrote:
What would be the appropriate behavior of -R then?
I think the default option should be to download the html files to
parse the links, but it should discard them afterwards if they do not
match the acceptance list.
But, as you stated, I believe
On 8/18/07, Micah Cowan [EMAIL PROTECTED] wrote:
I'm not convinced. To me, the name spider implies recursion, and it's
counter-intuitive for it not to.
As to wasted functionality, what's wrong with -O /dev/null (or NUL or
whatever) for simply checking existence?
I see his point. The
Is there any particular reason the --spider option requires
--recursive? As it is now, we run into the following error if we omit
--recursive:
[EMAIL PROTECTED]:~/cprojects/wget/src$ ./wget http://www.google.com --spider
Spider mode enabled. Check if remote file exists.
--00:37:21--
On 8/2/07, dmitry over [EMAIL PROTECTED] wrote:
Hi,
In `man wget` I see the text
---[ cut ]---
--http-user=user
--http-password=password
[..]
but in `wget --help` I see
--http-user=USER        set http user to USER.
--http-passwd=PASS      set http password to PASS.
check
On 7/25/07, Matthew Woehlke [EMAIL PROTECTED] wrote:
Any reason you're not replying to the list? (Unless there is, please
direct replies to the list.)
No, I was in a hurry at the time and forgot to change the e-mail
address before I sent it.
I personally *must have* this patch; storing my
On 7/18/07, Maciej W. Rozycki [EMAIL PROTECTED] wrote:
There is no particular reason, so we do.
As far as I can tell, there's nothing in the man page about it.
On 7/17/07, Hrvoje Niksic [EMAIL PROTECTED] wrote:
-R allows excluding files. If you use a wildcard character in -R, it
will treat it as a pattern and match it against the entire file name.
If not, it will treat it as a suffix (not really an extension, it
doesn't care about . being there or
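Concretely, that distinction plays out like this (URLs and names are illustrative):

  wget -r -R "index*.html" http://example.com/   # wildcard: matched against the whole file name
  wget -r -R .gif,.jpg http://example.com/       # no wildcard: treated as suffixes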
On 7/16/07, Jaymz Goktug YUKSEL [EMAIL PROTECTED] wrote:
Hello everyone,
Is there an option to override the
maximum number of redirections?
Attached is a patch for this problem. Let me know if you have any
problems with it. It was written for the latest trunk in
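For readers finding this in the archive: wget did eventually grow a --max-redirect option for exactly this, with a default limit of 20; whether Josh's attached patch used the same spelling isn't shown here. The URL below is a placeholder:

  wget --max-redirect=50 "http://example.com/deeply/redirected/page"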
On 7/16/07, Dax Mickelson [EMAIL PROTECTED] wrote:
I've read the man page about 10 times now and I'm sure this issue is my
own stupidity but I can't see where or how.
[..]
Thus I would expect to get a directory full of index.html.n files along
with a bunch of .zip files! Alas, all I get is:
On 7/16/07, Dax Mickelson [EMAIL PROTECTED] wrote:
Thanks for the quick reply. I truly did RTFM (or at least RTF'Man').
Sorry for the dumb question; I knew it must be me, but I just couldn't
see it. I'm running the file now and it is looking good so far!
Nah, it wasn't a dumb question. To
On 7/16/07, Jaymz Goktug YUKSEL [EMAIL PROTECTED] wrote:
Hey Josh,
Thank you very much for that patch; this was what I was looking for. I think
this is going to solve my problem!
Thank you very much, and have a good one!
Cordially,
James
You're welcome :-)
Let me know how it turns out. The
On 7/17/07, Tony Lewis [EMAIL PROTECTED] wrote:
Just forward the patch to [EMAIL PROTECTED] and let them test it. :-)
Hmm. .org, maybe?
Delivery to the following recipient failed permanently:
[EMAIL PROTECTED]
Technical details of permanent failure:
PERM_FAILURE: DNS Error: Domain name
On 7/15/07, Rich Cook [EMAIL PROTECTED] wrote:
I think you may well be correct. I am now unable to reproduce the
problem where the server does not recognize a filename unless I give
it quotes. In fact, as you say, the server ONLY recognizes filenames
WITHOUT quotes and quoting breaks it. I
Consider this example, which happens to be how I realised this problem:
wget http://www.mxpx.com/ -r --base=.
Here, I want the entire site to be downloaded with each link pointing
to the local file. This works for some links, but it does not take
references to the root directory into account,
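To illustrate with hypothetical links from such a page, the split being described is between relative and root-relative references:

  <a href="news.html">  resolved against the page, follows the local mirror
  <a href="/shop/">     root-relative, still escapes to the site root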
On 7/14/07, Matthias Vill [EMAIL PROTECTED] wrote:
So you would suggest handling it such that when I use
wget --base=/some/serverdir http://server/serverdir/
/.* will be interpreted as /some/.*, so if you have a link like
/serverdir/ it would go back to /some/serverdir, right?
Correct.
I
On 7/14/07, Matthias Vill [EMAIL PROTECTED] wrote:
I think I got your point:
Now I think this could result in different problems, like what should
happen with wget -r --base=/home/matthias/tmp
http://server/with/a/complicated/structure/and/to/many/dirs/a.php
If you now have a link to
It has come to my attention that --delete-after and --spider leave
empty directories when they have finished. IMHO, we should force
--no-directories since we're not leaving any of the files we're
downloading.
I have submitted a patch here - https://savannah.gnu.org/bugs/index.php?20466
Do any
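The proposed fix boils down to one clause in option post-processing (a sketch of the idea against wget's global options struct, not the actual patch attached to that bug report):

  /* Neither mode keeps downloaded files, so there is no reason to
     create the host/path directory tree at all. */
  if (opt.delete_after || opt.spider)
    opt.dirstruct = false;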