a new request (in line 1434).
Thanks in advance.
---
Charles
producing somefile.1 . So, has there been any change to this behavior, or
can it be filed as a bug/enhancement?
Thanks.
---
Charles
On Feb 19, 2008 11:25 PM, Steven M. Schweda <[EMAIL PROTECTED]> wrote:
> From: Charles
>
> > In wget 1.10, [...]
>
>Have you tried this in something like a current release (1.11, or
> even 1.10.2)?
My wget version is 1.10.2. It isn't really a problem for me,
On Feb 20, 2008 2:12 AM, Micah Cowan <[EMAIL PROTECTED]> wrote:
> We could have Wget treat 200 OK exactly as 416 Requested Range Not
> Satisfiable; but then it won't properly handle servers that legitimately
> do not support byte ranges for some or all files.
Yes, what I would ask is that wget com
no wgetrc option for that. An easy solution would be to create a simple
wget wrapper:
$ mkdir ~/bin
$ cat > ~/bin/wget
#!/bin/sh
echo "$@" >> ~/.wget_history
exec /usr/bin/wget "$@"
^D
$ chmod 755 ~/bin/wget
$ export PATH=~/bin:$PATH
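After that, every invocation is logged before the real wget runs, e.g.
(hypothetical URL):
$ wget http://example.com/file.tar.gz
$ cat ~/.wget_history
http://example.com/file.tar.gz
"$@" is used in the wrapper instead of $* so that arguments containing
spaces survive intact, and exec avoids leaving an extra shell process
around.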
---
Charles
$log >> ~/.wget_history
/usr/bin/wget $*
-
Some notes:
$(date) : captures the output of the date command
'['$(date) : string concatenation
$* : all the command line arguments
If you want to customize the date format, see the man page of date.
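Putting the pieces together, a complete version of such a timestamped
wrapper might look like this (the log variable name is an assumption):
#!/bin/sh
# Prefix each history entry with a timestamp,
# e.g. [Wed Feb 20 10:00:00 2008] <arguments>
log='['$(date)'] '$*
echo "$log" >> ~/.wget_history
exec /usr/bin/wget "$@"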
---
Charles
for this would be
wget -r -l 1 -A .odf http://site-url
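Here -r turns on recursive retrieval, -l 1 limits the recursion to one
level of links, and -A .odf accepts only files whose names end in .odf
(other files pulled in by the recursion are deleted after download).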
---
Charles
27; already there; not retrieving.
File `localhost/test/c.jpg' already there; not retrieving.
FINISHED --20:31:41--
Downloaded: 0 bytes in 0 files
I think wget 1.10.2's behavior is more correct. Anyway, it did not abort
in my case.
---
Charles
cannot continue from the point where we cancelled the download (all the
files will have to be downloaded again).
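(For what it's worth, wget's -c/--continue flag does resume a single
partially-downloaded file, e.g. wget -c http://example.com/big.iso with a
hypothetical URL, and re-running a recursive job with -nc or -N at least
avoids refetching files that already completed; but there is no built-in
way to resume the recursion itself from where it stopped.)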
---
Charles
On Thu, Mar 13, 2008 at 1:17 AM, Hrvoje Niksic <[EMAIL PROTECTED]> wrote:
> > It assumes, though, that the preexisting index.html corresponds to
> > the one that you were trying to download; it's unclear to me how
> > wise that is.
>
> That's what -nc does. But the question is why it assumes th
ile with format
specification (would have to create the parser first) or a proprietary
binary format.
OK, those are some suggestions I have. Thanks for your time :D
---
Charles.
On Mon, Mar 17, 2008 at 3:20 PM, Micah Cowan <[EMAIL PROTECTED]> wrote:
> echo http://something >> links
> echo http://anotherthing >> links
> echo wget http://something | at 23:30
> wget -i links
Sure, I used to do this. The only problem I have is that all the links
have to be collected first.
On Mon, Mar 17, 2008 at 4:41 PM, Micah Cowan <[EMAIL PROTECTED]> wrote:
> Is that true? I thought wget actually read the input file in a streaming
> fashion.
If that is the case, then I think it's possible to add links to the
list while wget is already running.
> I don't expect that a single
s to run wget in the background so that it produces
a wget-log that can be used to trace the URLs, or to 'tee' the output of
wget to a file.
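If wget does read the -i list in a streaming fashion, a sketch like the
following (hypothetical file names) would let you keep appending URLs
while it runs:
$ touch links
$ tail -f links | wget -i - -o wget-log &
$ echo http://example.com/file1 >> links
$ echo http://example.com/file2 >> links
Here wget -i - takes the URL list from standard input, tail -f keeps the
pipe open as new lines arrive, and -o writes the log to wget-log so the
URLs can still be traced.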
---
Charles
On Fri, Mar 21, 2008 at 2:33 AM, Micah Cowan <[EMAIL PROTECTED]> wrote:
> The intent is that all the writer operations would take virtually no
> time at all. The sidb_read function should take at most O(N log N) time
> on the size of the SIDB file, and should take less than a second under
> nor
On Sat, Mar 22, 2008 at 12:14 AM, Micah Cowan <[EMAIL PROTECTED]> wrote:
> YAML uses UTF-8; I'm beginning to think YAML may not be what we want,
> though, given that the definition for a given entry may be interposed
> with defining content for other entries; I don't want to kludge that by
> su
On Wed, Mar 26, 2008 at 11:17 PM, <[EMAIL PROTECTED]> wrote:
> Can you help me figure out how to use wget to "log in to this page"? Once
> logged in, I am intending to do a recursive download, or mirror.
Normally the steps would be something like this:
1. wget --post-data="uname=username&pwd=password"
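A commonly used complete sequence looks something like this (hypothetical
URLs; the uname/pwd field names must match what the site's login form
actually uses -- check the HTML source of the login page):
$ wget --save-cookies cookies.txt --keep-session-cookies \
       --post-data="uname=username&pwd=password" \
       http://example.com/login
$ wget --load-cookies cookies.txt -m http://example.com/members/
The first command submits the login form and saves the session cookies;
the second reuses them for the recursive download.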
On Wed, Sep 17, 2008 at 11:02 PM, Tobias Opialla
<[EMAIL PROTECTED]> wrote:
> Hey all,
>
> I hope this is the right address, and you can help me.
> I'm currently trying to run a perl script including some wget commands, but if
> I try to run it, it says:
> "The ordinal 2253 could not be located in t
Does anyone know how I can unsubscribe from all the wget mailing lists?
| Charles A. Piety|
| Department of Meteorology |
| University of Maryland, College Park, MD 20742 |
| phone
r/
    20041118/
        file1 file2
    20041119/
        file1 file2
So download what you would have with a normal "wget -m" but store in
this subdirectory.
Any ideas?
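One way to get that (a sketch, assuming the dated directory name should
come from the current date) is wget's -P/--directory-prefix option, which
roots the usual mirror tree under a given directory:
$ wget -m -P "$(date +%Y%m%d)" http://example.com/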
--
Charles Gagnon | My views are my views and they
http://unixrealm.com | do not represent tho
rator
make: fatal errors encountered -- cannot continue
with the following lines in Makefile.in:
31
32 SHELL = /bin/sh
33 @SET_MAKE@
34
35 top_builddir = .
Any ideas?
On FreeBSD 4.7.
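(If @SET_MAKE@ is showing up literally in the Makefile that make is
reading, the usual cause is that configure's substitution step never ran;
re-running ./configure and then make -- or gmake, since BSD make
sometimes chokes on GNU-style makefiles -- is a reasonable first thing
to try.)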
Charles "Chas" Belov
Up and running, thanks.
-Original Message-
From: Hrvoje Niksic [mailto:[EMAIL PROTECTED]
Sent: Wednesday, March 02, 2005 12:34 PM
To: Belov, Charles
Cc: wget@sunsite.dk
Subject: Re: Makefile hassles
"Belov, Charles" <[EMAIL PROTECTED]> writes:
> I would like to use
The target site is on a Un*x box, but I have to be able to
upload/download from a PC.
Thanks in advance,
Charles "Chas" Belov
ut still get the links fully translated? Or will I
need to post-edit my new files outside of wget to fix the links?
Note: The target site is on a Un*x box, but I have to be able to
upload/download from a PC.
Thanks in advance,
Charles "Chas" Belov
8 work.
Any idea how to work around this problem (at least the link problem, if not the
destination problem)?
Charles "Chas" Belov
Relevant lines from the debug file:
--14:42:13-- http://www.sfgov.org/site/dpt_index.asp
=> `dpt_index.asp'
...
--14:42:14--
wget does not correctly handle trailing whitespace after the last ; in
a Set-Cookie tag. This causes it to spew repeated `premature end of
string' errors with some web sites generated by Cold Fusion. E.g.,
the following is legal but not accepted (quotes added for clarity):
"Set-Cookie: FOO=bar;