I've been using wget lately in order to retrieve mirrors of some
web sites. Recently I discovered the -N option. When I use it, it checks if the
local files are older than the server files (same filenames), and if the remote
file is newer than the local one then it overwrites the local copy
to the mail list..
And I am sorry for my terrible English
Your English is clear enough to get your question across, so don't
apologize.
Murrah Boswell
Yes, I believe the option you are looking for is --backup-converted, or to
make things easier, -K (make sure it is a capital) is an alias for the same
thing.
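For a mirroring run that combines this with the timestamping from the original question, the invocation would be something like this (the URL is just a placeholder):
wget -N -r -k -K http://www.example.com/
Here -k converts links for local viewing and -K keeps the unconverted .orig copies, so the -N timestamp comparison still works on later runs.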
Craig Sowadski
In the docs I've seen on wget, I see that I can use wildcards to
download multiple files on ftp sites. So using *.pdf would get me all
the pdfs in a directory. It seems that this isn't possible with http
sites though. For work I often have to download lots of pdfs when
there's new info I
Hi Ron!
If I understand you correctly, you could probably use the
-A acclist
--accept acclist
accept = acclist
option.
So, probably (depending on your site), the syntax should be something like:
wget -r -A '*.pdf' URL
wget -r -A '*.pdf' -np URL
or, if you have to recurse through multiple html
I know that to download a web page that is a cgi
script (or php or asp or jsp, etc.), you should
use the --output-document parameter. For example, if
you want to download the following address:
http://www.osnews.com/comment.php?news_id=5602&offset=105&rows=120
you would simply do something
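like this, quoting the URL so the shell leaves the ? and & alone (the output file name here is just an example):
wget --output-document=comment.html 'http://www.osnews.com/comment.php?news_id=5602&offset=105&rows=120'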
Noname NoLast [EMAIL PROTECTED] writes:
reasonable so that it can save them) otherwise it will
give me error messages such as:
Cannot write to
'http://www.osnews.com/comment.php?news_id=5602&offset=105&rows=120'
I don't understand this error message. Wget should never try to write
to a
On Thu, 22 Jan 2004, Simons, Rick wrote:
curl https://server/file -uuser:pass
Virtual user user logged in.
No file created locally. Chalk it up as a http server flaw?
Uh, curl doesn't create any file when used like that. It outputs the
downloaded data to stdout unless you use an option to
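send it somewhere else, e.g. -o (or -O to reuse the remote file name):
curl -u user:pass -o file https://server/file
Both are standard curl options: -o names the local output file explicitly, -O derives it from the URL.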
Daniel Stenberg [EMAIL PROTECTED] writes:
On Thu, 22 Jan 2004, Simons, Rick wrote:
curl https://server/file -uuser:pass
Virtual user user logged in.
[...]
In my eyes, this looks like the correct output from curl. Wasn't it?
I think that Rick expects to see a complete HTML page rather than
] fdx_map_tilde: pw.dir
server/directory/ ui.type 1 r-uri file
Is that last line the issue?
-Original Message-
From: Hrvoje Niksic [mailto:[EMAIL PROTECTED]
Sent: Thursday, January 22, 2004 9:00 AM
To: Simons, Rick
Cc: '[EMAIL PROTECTED]'
Subject: Re: Syntax question ...
Simons, Rick [EMAIL
Thanks for persisting with this. It doesn't look like a mishandled
redirection -- the response headers exist and they don't request a
redirection or any kind of refresh.
access_log shows that 30 bytes have been transmitted. As it happens,
the string "Virtual user ricks logged in.\n" is exactly
?
-Original Message-
From: Hrvoje Niksic [mailto:[EMAIL PROTECTED]
Sent: Thursday, January 22, 2004 1:10 PM
To: Simons, Rick
Cc: '[EMAIL PROTECTED]'
Subject: Re: Syntax question ...
Thanks for persisting with this. It doesn't look like a mishandled
redirection -- the response headers
Greetings all.
I've posted in the past, but never really have gotten connectivity to a
https server I support using the wget application. I've looked in the
manual, on the website and searched the Internet but am not getting very
far.
wget -V
GNU Wget 1.9
wget -d -S https://server/file
Simons, Rick [EMAIL PROTECTED] writes:
Greetings all.
I've posted in the past, but never really have gotten connectivity to a
https server I support using the wget application. I've looked in the
manual, on the website and searched the Internet but am not getting very
far.
wget -V
I got wget compiled with ssl support now, and have a followup question ...
I'm getting the local file created but populated with a server response, not
the actual contents of the remote file. See example:
wget -d -S https://server/testfile --http-user=user --http-passwd=pass
DEBUG output
Well, that's what you're telling it to do with the -S option, so why are you
surprised? man wget, then /-S
Mark Post
-Original Message-
From: Simons, Rick [mailto:[EMAIL PROTECTED]
Sent: Wednesday, January 21, 2004 11:09 AM
To: '[EMAIL PROTECTED]'
Subject: RE: Syntax question ...
I
Another follow-up question, and thanks for the continued assistance:
-S
--server-response
Print the headers sent by HTTP servers and responses sent by FTP servers.
I misinterpreted this switch; I thought the file would still be downloaded, but
the console would see the server messages
Simons, Rick [EMAIL PROTECTED] writes:
I got wget compiled with ssl support now, and have a followup question ...
I'm getting the local file created but populated with a server response, not
the actual contents of the remote file. See example:
wget -d -S https://server/testfile --http-user
Hello,
I'd like to download recursively the content of a web
directory WITHOUT AN INDEX file.
The directory content is generated by the server.
How can I do that?
I use wget 1.8.2 for Windows.
Thank you.
Danny Linkov wrote:
I'd like to download recursively the content of a web
directory WITHOUT AN INDEX file.
What shows up in your web browser if you enter the directory (such as
http://www.somesite.com/dir/)?
The most common responses are:
* some HTML file selected by the server (often
Hi,
I tried wget 1.9 for windows from Heiko Herold
(http://xoomer.virgilio.it/hherold/) and the problem with the filters
and the question marks remains:
On the following page:
http://www.wordtheque.com/owa-wt/new_wordtheque.wcom_literature.literaturea_page?lang=FR&letter=A&source=search&page=1
[mailto:[EMAIL PROTECTED]
Sent: Thursday, October 23, 2003 12:12 PM
To: Boris New
Cc: [EMAIL PROTECTED]
Subject: Re: Problem with wget 1.9 and question mark at least
on windows
Sorry about that, Wget currently applies -R and -A only to file names,
not to the query part of the URL
Does wget have any rules to convert the retrieved URL into the stored file name?
Or maybe in the future?
For example:
Get - site.com/index.php?PHPSESSID=123124324
Filter - /PHPSESSID=[a-z0-9]+//i
Save as - site.com/index.php
Sergey Vasilevsky [EMAIL PROTECTED] writes:
Does wget have any rules to convert the retrieved URL into the stored file
name? Or maybe in the future?
For example:
Get - site.com/index.php?PHPSESSID=123124324
Filter - /PHPSESSID=[a-z0-9]+//i
Save as - site.com/index.php
The problem with this is that it would
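Whatever the answer on the wget side, a rough post-processing workaround (not a wget feature, just a sketch that assumes the session id only ever appears as the whole query string) is to rename the saved files after the crawl:
# strip a trailing ?PHPSESSID=... from every saved file name
for f in site.com/*\?PHPSESSID=*; do
  mv -- "$f" "${f%%\?PHPSESSID=*}"
done
Note this clobbers files if several URLs differ only in their PHPSESSID, so it is only safe when the session id carries no real information.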
I'm having trouble with downloading a file across https using wget. I can't
figure out if it is something i'm doing wrong with wget syntax, or if the
httpd server isn't working like it should.
# wget -d https://filed1/InBox/FILE3 --http-user=blahuser
--http-passwd=blahpw
DEBUG output created
Simons, Rick [EMAIL PROTECTED] writes:
I'm having trouble with downloading a file across https using wget.
I can't figure out if it is something i'm doing wrong with wget
syntax, or if the httpd server isn't working like it should.
I don't know what's going wrong here. Your Wget syntax
-V
GNU Wget 1.9-b5
-Original Message-
From: Hrvoje Niksic [mailto:[EMAIL PROTECTED]
Sent: Monday, October 13, 2003 8:44 AM
To: Simons, Rick
Cc: 'Tony Lewis'; '[EMAIL PROTECTED]'
Subject: Re: very simple wget syntax question (-d info added) ...
Simons, Rick [EMAIL PROTECTED] writes
Simons, Rick [EMAIL PROTECTED] writes:
Using 1.9 I get a different error ...
[...]
using 1.9b5
# ./wget https://filed1/InBox/FILE3 --http-user=user --http-passwd=pass
https://filed1/InBox/FILE3: Unsupported scheme.
That just means that you haven't compiled 1.9-b5 with SSL. Did you
compile
?
-Original Message-
From: Hrvoje Niksic [mailto:[EMAIL PROTECTED]
Sent: Monday, October 13, 2003 9:07 AM
To: Simons, Rick
Cc: '[EMAIL PROTECTED]'
Subject: Re: very simple wget syntax question (-d info added) ...
Simons, Rick [EMAIL PROTECTED] writes:
Using 1.9 I get a different error
Simons, Rick [EMAIL PROTECTED] writes:
I believe 1.8 was an rpm install, but I could be mistaken. You are
right about the 1.9 install .. it was just a config/make/make
install on the tar I nabbed. How can I determine if I have SSL
includes on a RH9 box?
I think you need to install the
If the following:
wget https://filed1/InBox/FILE3 --http-user=username --http-passwd=password
is creating a file locally called FILE3 that has a server response in it
("Virtual user username logged in."), instead of the actual contents of FILE3
... what i'm trying to figure out is if that is a WGET
Hello
My office is connected to the internet with 2 serial-line modems, and we use
load balancing with the Linux ip tool... but as far as I know WGET doesn't support
multithreaded downloads (like flashget, for instance), so if I
download a file, the download stream will use only 50% of the bandwidth... Do you plan to
or to Logprint? Please reply at your earliest convenience. Thank you.
Regards,
Tom Madigan, Director
Global Intelligence Data Operations
Adzone Research, Inc.
[EMAIL PROTECTED]
PS. I received a bounce back when attempting to send this question to
Hrvoje Niksic [EMAIL PROTECTED]
The original
Hello,
I tried to copy to my local disk the manual
http://www.kgraph.narod.ru/lectures/lectures.htm
like
wget -r -k -l 4 -nH http://www.kgraph.narod.ru/lectures/lectures.htm
but 99.9% of the .gif files were not copied and their links
were left as absolute links
(for example
From the help
-p, --page-requisites    get all images, etc. needed to display HTML page.
I think you need to add the -p option as well.
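With the original command that would be something like:
wget -r -k -l 4 -p -nH http://www.kgraph.narod.ru/lectures/lectures.htm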
Fred Holmes
At 05:05 AM 2/16/2003, Oleg Gorchakov wrote:
Hello,
I tried to copy to my local disk the manual
Hello all,
I found the wget project while searching on the web for web downloaders and crawlers.
My question is, whether it is possible to let wget crawl or spider over the web (given
a certain start url) and follow all the unique urls that it runs into and download
only the images it finds
[EMAIL PROTECTED] (Max Bowsher) wrote in news:001401c2820f$345485b0$78d96f83
@pomello:
DennisBagley [EMAIL PROTECTED] wrote:
ok - am using wget to mirror an ftp site [duh]
and would like it not only to keep an up to
date copy of the files [which it does beautifully]
but also remove files that
ok - am using wget to mirror an ftp site [duh]
and would like it not only to keep an up to
date copy of the files [which it does beautifully]
but also remove files that are no longer on the ftp server
?? Is this possible ???
tia
den
DennisBagley [EMAIL PROTECTED] wrote:
ok - am using wget to mirror an ftp site [duh]
and would like it not only to keep an up to
date copy of the files [which it does beautifully]
but also remove files that are no longer on the ftp server
?? Is this possible ???
Use a perl script.
Max.
wget 1.7 on linux rh 7.2
hello,
I have a very simple question.
I am writing a script that utilizes wget of course.
All I want to accomplish is for wget to exit very quickly
when the URL is unreachable (like the test bogus one below, for e.g.)
None of the following tries (A thru C
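For what it's worth, one way to cap how long a single attempt can take is to limit both the tries and the timeout, e.g. (whether DNS and connect failures honour -T depends on the wget version; the URL is deliberately bogus):
wget -t 1 -T 5 http://bogus.example.invalid/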
Hi, I hope this is the right place to ask
for some help? I have written a (UNIX/LINUX) shell script that uses
wget to get a commodity options page from www.cme.com. I have been running this script on LINUX using wget 1.8.1 with no
problems. I am trying to move the script over to the people
Hi,
I think while downloading wget does not show the current speed but the
average speed. I don't know whether this is a bug or not, but it's not
favourable because (for one reason) it seems that wget calculates the time
to download from this speed, so the ETA is not correct.
I hope I could help...
Hi, I hope this is the right place to ask
for some help? I have written a (UNIX/LINUX) shell script that goes and
gets a commodity options page from www.cme.com. I have been running this on
LINUX using wget 1.8.1 with no problems. I am trying to move the script
over to the people that host
that was the fix, thank you
Matt Whimp Sarah Kemp matt[EMAIL PROTECTED] wrote in message
news:20020720064602.0645af92.matt[EMAIL PROTECTED]...
On Fri, 19 Jul 2002 11:57:38 -0400
Steve tapped the following into the keyboard:
== PORT ... Master socket fd 428 bound.
using port 1342.
--
hi...
I've got one question...
when I'm downloading some file that contains spaces in its name, with the -nd
or --cut-dirs options, it saves the file with %20 instead of the space
character
when I'm downloading without any options, the directory names contain
%20, but the file names are normal
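If it helps, a rough post-download workaround (not a wget option, just a sketch) is to rename the saved files, decoding %20 back into spaces:
# rename every file in the current directory that contains %20
for f in *%20*; do
  mv -- "$f" "$(printf '%s' "$f" | sed 's/%20/ /g')"
done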
This is not a wget issue. I saw something like that on a capped cable modem; maybe
that's how caps are implemented. No problem on the LAN.
From: Abdullah al-Muhaitheef [EMAIL PROTECTED]
Date: 2002/06/20 Thu AM 08:41:03 GMT
To: [EMAIL PROTECTED]
Subject: Question about wget !!
Hi guys
Hello wget,
$ wget -v
wget: missing URL
Usage: wget [OPTION]... [URL]...
Try `wget --help' for more options.
[root@rmt-gw]1014# wget --version
GNU Wget 1.8.1
script.sh:
#!/bin/sh
wget="/usr/local/bin/wget -t0 -nr -nc -x --timeout=20 --wait=61 --waitretry=120"
$wget ftp://nonanonymous:[EMAIL
On Fri, 17 May 2002 16:59:07 +0400, Pavel Stepchenko [EMAIL PROTECTED]
wrote:
#!/bin/sh
wget="/usr/local/bin/wget -t0 -nr -nc -x --timeout=20 --wait=61 --waitretry=120"
$wget ftp://nonanonymous:[EMAIL PROTECTED]/file1.zip
sleep 60
$wget ftp://nonanonymous:[EMAIL PROTECTED]/file2.zip
Why WGET
On 12 May 2002 02:54:52 -0500, asher [EMAIL PROTECTED] wrote:
hi, I've been trying to figure out how wget prints all over the screen
without using curses, and I'm hoping someone can help. from the code,
I'm pretty sure it's just printing to the C-stream stderr, but I can't
for the life of me
hi, I've been trying to figure out how wget prints all over the screen
without using curses, and I'm hoping someone can help. from the code,
I'm pretty sure it's just printing to the C-stream stderr, but I can't
for the life of me figure out how it seeks or jumps around in the
stream. any help
I need to download multiple data files from a site requiring
a username and password. Wget gets in fine and downloads
a number of files in other directories, but when attempting the directory
containing the data files each file is listed but not
downloaded with a socket:Too many
Newer versions of Wget check the server type and adjust the directory
listing parser accordingly. If I remember correctly, NT directory
listing is now supported.
Just when I thought it was safe to start downloading files, I get this:
wget --mirror -v -I/ -X/report,/Software -w1 -gon
ftp://x:[EMAIL PROTECTED]
--11:27:41-- ftp://x:[EMAIL PROTECTED]:21/
=> `64.226.243.208/.listing'
Connecting to 64.226.243.208:21... connected!
Hi All
Sorry for the off-topic and 'Newbie' question, but is it possible, having
downloaded a site using 'wget' to a local directory, to then at a later
stage run just the 'proxy' part of the process, reading the site
contents from the local directory and not from the site, to enable
re
On Mon, Mar 04, 2002 at 12:36:41PM -0800, Jim Gifford wrote:
I noticed a lot of posts about the php?, and I have followed a lot of this.
It's invoking a php script and passing it parameters. Normal stuff.
I am using the current version of wget-1.8.1 on a linux box
wget -c -nv
http://gbarbier.free.fr/prj/dev/down.php3?file=txt112.tar.gz
Hi,
this version 1.1.2: wget can't fetch it, nor can mozilla.
Warning: Unable to access down/txt112.tar.gz in your
script on line 61
Warning: ReadFile(down/txt112.tar.gz) - No such file or
directory in your
On Mon, Mar 04, 2002 at 11:48:50PM +0100, Thomas Lussnig wrote:
Try 11targz and it works. I think it is simple: the file DOES NOT EXIST YET.
I must've screwed up with Konq and grabbed a different link I believe you're
correct
--
Alan Eldridge
Dave's not here, man
Hi!
I tried to continue downloading a half existing file with:
wget -c "ftp://www.test.com/file.c" -O testfile.c
But I realised that this doesn't work as I want, because according to wget's
info page the -O option overwrites an existing file.
I'd like to know if there is any possibility wget
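One workaround sketch, assuming the only reason for -O was to pick the final name, is to let wget resume under the remote name and rename afterwards:
wget -c ftp://www.test.com/file.c && mv file.c testfile.c
That way -c compares against the partially downloaded file.c instead of the -O target.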
Hi,
I love wget; it has been a godsend for many a project for me,
thank you for creating it. My question is: I have noticed the -nc
option to not overwrite a file, but how can I tell wget to specifically
overwrite a file? I have tried a few things and I am unable to keep it
from
Folks,
I've been looking into using wget to do some web data-collecting for me
(instead of doing it all by hand) and I'm striking out so far. Can
someone
tell me if these things are do-able via wget (or even some other util),
and point me at an example? The tasks I'm trying to do:
1)
replace the first @ with %40 and check
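For example (placeholder host and credentials):
wget 'ftp://userid:p%40ss@ftp.example.com/pub/incoming/blah.zip'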
Regards
Bharath
Turgut Kalfaoglu wrote:
Hi. I love WGET, but I have a stumper: How do you put a URL if the FTP
site requires a password that has an '@' in it??
Like:
wget ftp://userid:password@[EMAIL PROTECTED]:21/pub/incoming/blah.zip
does not
doesn't work, cannot login...
P! Vladi.
T. Bharath wrote:
replace the first @ with %40 and check
Regards
Bharath
Turgut Kalfaoglu wrote:
Hi. I love WGET, but I have a stumper: How do you put a URL if the FTP
site requires a password that has an '@' in it??
Like:
wget
I have got the same problem and I found that my previous version of wget
could handle this (the version was 1.7.1 I guess):
wget ftp://user:pass@[EMAIL PROTECTED]/etc
and it worked fine; the latest sources print the error `no host in the url' or similar,
perhaps --ftp-user and --ftp-pass would be good
Edward Manukovsky [EMAIL PROTECTED] writes:
Excuse me, please, but I've got a question.
I cannot set retry timeout for 30 seconds by doing:
wget -w30 -T600 -c -b -t0 -S -alist.log -iurl_list
For me, Wget waits for 30 seconds between each retrieval. What
version are you using?
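If the goal is a longer pause between retries of the same URL rather than between different URLs, --waitretry may be closer to what you want; it backs off linearly between retries, up to the given number of seconds. A sketch based on the original command:
wget --waitretry=30 -T600 -c -t0 -S -alist.log -iurl_list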
Hello!
Excuse me, please, but I've got a question.
I cannot set retry timeout for 30 seconds by doing:
wget -w30 -T600 -c -b -t0 -S -alist.log -iurl_list
This timeout is much shorter :(((
Answer me, please, how can I do this.
--
Best regards,
Edward Manukovsky
Amy == Amy Rupp [EMAIL PROTECTED] writes:
Amy I'd been archiving obituaries from:
Amy www.valleystar.com/obits
Amy The obituaries were in the default file index.html.
Amy Lately they have switched things so that I believe the
Amy output of invoking the above URL is dynamic
hi administrator;
how can I use the speed limit with wget?
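Recent versions of wget have a rate-limiting option; a minimal example (placeholder URL and rate):
wget --limit-rate=20k http://example.com/bigfile.iso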
On Thu Oct 4 04:55:53 2001, Ian Abbott wrote:
On 3 Oct 2001, at 16:01, CJ Kucera wrote:
The closest I've come is (and there's lots of extraneous stuff in there):
wget -r -l inf -k -p --wait=1 -H
On 3 Oct 2001, at 16:01, CJ Kucera wrote:
The closest I've come is (and there's lots of extraneous stuff in there):
wget -r -l inf -k -p --wait=1 -H
--domains=theonion.com,graphics.theonion.com,www.theonion.com,theonionavclub.com,www.theonionavclub.com
http://www.theonion.com
The domains
Greetings, humans!
I'd like to use wget to take a snapshot of www.theonion.com. On that
site, all of the graphics are served from graphics.theonion.com,
and there's a bunch of other sub-domains as well. Also, it links over
to www.theonionavclub.com, which I would also like to mirror.
How can I
I said:
I'd like to use wget to take a snapshot of www.theonion.com. On that
[snip!]
I forgot to mention I'm using wget 1.7. Sorry 'bout that.
-CJ
WOW: Rapacious | A priest advised Voltaire on his death bed to
apocalyptech.com/wow | renounce the devil. Replied Voltaire, This
hi!
is it possible (well, I mean an easy way :)) to make wget pass the referer automatically?
I mean, for every url that wget tries to fetch, to pass the hostname as the referer.
for example:
http://www.somewhere.org/path/etc/page.html
then the referer should be `http://www.somewhere.org/'
well this is not
Hi Vladi!
If you are using windows, you might try
http://www.jensroesner.de/wgetgui/
it is a GUI for wGet written in VB 6.0.
If you click on the checkbox "identify as browser", wGetGUI
will create a command line like you want.
I use it and it works for me.
Hope this helps?
CU
Jens
Vladi wrote:
Hello!
To be able to use the referer switch, you must have a new version of
wget - I'm not sure if 1.6 is enough; 1.7 certainly is enough and 1.5.3
is not enough. (Get more info from http://wget.sunsite.dk)
The switch is --referer=URL
Try to use it with the -d (debug) switch to see that
hi!
well I think you misunderstood perhaps...
I mean not to give a fixed referer (the --referer option) but to have
something like --auto-referer (possibly in wgetrc even),
and wget will set the referer for each url it processes to the
host part of the same url. i.e. I'll give the example
once again:
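Until something like that exists, a small wrapper can approximate it. A rough sketch, assuming the URLs sit one per line in url_list:
#!/bin/sh
# derive the referer from each URL's host part, then fetch with it
while read -r url; do
  host=$(printf '%s\n' "$url" | sed 's|^\([a-z]*://[^/]*\).*|\1/|')
  wget --referer="$host" "$url"
done < url_list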
Hi wgetters!
@André
Guys, you don't understand what the OP wants. He needs a
dynamically generated referer, something like
wget --referer 'http://%h/'
where, for each URL downloaded, wget would replace %h by the
hostname.
Well, I understood it this way.
My problem was that I mainly use
, too)
does of course not allow you to write files
with question marks in their filenames!
so, wget will complain
Cannot write to 'mydomain.org/index.html?foo=bar',
[snip]
The following characters are not legal in DOS filenames, even
long (VFAT) filenames:
\ / : * ? |
(sorry
On Tue, 7 Aug 2001, Reto Kohli wrote:
as far as I can tell, one would have to change/rebuild
wget so that all those illegal characters are translated
to something else (i.e. underscores) before the files are
written to disk.
If someone did that, I would appreciate it _very_ much.
maybe this
On 10 Jul 2001, at 10:16, 이재령 wrote:
When we execute wget, we can get this message...
Connecting to www.chosun.com:80... connected!
HTTP request sent, awaiting response... 206 Partial Content
What's the problem???
What command line parameters did you use?
And What is the 206
This question is not related to Wget - so this newsgroup is probably not the
right place to post this message. However, I am posting this msg here hoping
that someone here might be able to help me.
I have just written a bare-bones crawler in C++. It seems to run just fine
except when getting
When we execute "wget", we can get this message...
"Connecting to www.chosun.com:80... connected!
HTTP request sent, awaiting response... 206 Partial Content"
What's the problem???
And What is the "206 Partial Content"??
Please answer to me...
But as home.nexgo.de has very many dirs besides mine, I also tried
wget -r -l0 -nh -H -Dhome.nexgo.de/bmaj.roesner http://www.audistory.com
and
wget -r -l0 -nh -H -Dhome.nexgo.de/bmaj* http://www.audistory.com
which both did not work.
The reason is that -D takes only domain names as
Hi Mengmeng!
Thanks very much, I (obviously) was not aware of that!
I'll see how I can incorporate that (-I/-X/-D/-H) in wGetGUI.
Can I do something like -H -Dhome.nexgo.de -Ibmaj.roesner
http://www.AudiStory.com ?
I'll just give it a try.
Thanks again!
Jens
On Mon, 2 Jul 2001, Doug Kaufman wrote:
+ On Mon, 2 Jul 2001, Malcolm Austen wrote:
+
+ If I recall correctly, the DOS version of wget 1.5.2 handled this by
+ translating the ? into @. I did feed back (can't recall exactly when
+ now) that wget 1.6 for DOS wasn't doing this. I haven't used
windows, too)
does of course not allow you to write files
with question marks in their filenames!
so, wget will complain
Cannot write to 'mydomain.org/index.html?foo=bar',
which is sadly true, and I will never get those
wonderful pages.. (b.g. die die die! ;)
-- can this be circumdone
files
with question marks in their filenames!
so, wget will complain
Cannot write to 'mydomain.org/index.html?foo=bar',
which is sadly true, and I will never get those
wonderful pages.. (b.g. die die die! ;)
Did you try putting quotes around the URL? Don't have time to test
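i.e. something like this, so the shell passes the query string through untouched:
wget 'http://mydomain.org/index.html?foo=bar'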
Hi,
Sorry for bothering you but I have a small wget
question?
How do I fetch an html page whose location ends like this:
/printpage.cgi?forum=1&topic=1.
Most of the time it is something like /index.html
which doesn't give a problem.
thanks,
Emile
Can I use Wget to retrieve files through https?
If yes, could I get an example?
Thanks,
Scott Shellabarger
Acxiom Corporation - Products Division
501.252.3089
[EMAIL PROTECTED]
Shellabarger Scott - sshell [EMAIL PROTECTED] writes:
Can I use Wget to retrieve files through https?
Yes, but you need the version 1.7, which is about to be released.
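Once you have a 1.7 build with SSL support, the usage is the same as for plain http, e.g. (placeholder URL):
wget https://www.example.com/some/file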
On 30 May 2001, Hrvoje Niksic wrote:
Yes, but you need the version 1.7, which is about to be released.
If a pre-release is available I'd like to install the .pot file of this
pre-release package on the Translation Robot site
(http://www.iro.umontreal.ca/contrib/po/HTML/).
Of course, I can
Karl Eichwalder [EMAIL PROTECTED] writes:
On 30 May 2001, Hrvoje Niksic wrote:
Yes, but you need the version 1.7, which is about to be released.
If a pre-release is available I'd like to install the .pot file of this
pre-release package on the Translation Robot site
hello.
I'm finding wget really useful for my dial-up connection at home, and I
am trying to set it up in work, as there are some web based documents I
would like to store in my home directory.
I have given the program the following arguments:
wget --background
"Dan Harkless" [EMAIL PROTECTED] writes:
Vladi Belperchinov-Shabanski [EMAIL PROTECTED] writes:
Yes, it's a known bug and is documented in the current CVS version of
wget.texi. With luck, the fix may be as simple as changing a = to a .
It's just that no one's had a chance to look at
hi!
`wget -c file'
starts to download the file from the beginning if the file
is completely downloaded already...
why?!
I expect wget to do nothing in this case: I wanted it
to download the file to the end (i.e. to continue, -c) and
if the file is already there, then there is nothing to do.
Vladi Belperchinov-Shabanski [EMAIL PROTECTED] writes:
hi!
`wget -c file'
starts to download the file from the beginning if the file
is completely downloaded already...
why?!
I expect wget to do nothing in this case: I wanted it
to download file to the end (i.e. to