re-mirror + no-clobber

2008-10-24 Thread Jonathan Elsas

Hi --

I'm using wget 1.10.2

I'm trying to mirror a web site with the following command:

wget -m http://www.example.com

After this process finished, I realized that I also needed pages from  
a subdomain (eg. www2)


To re-start the mirror process without downloading the same pages  
again, I've issued the command


wget -nc -r -l inf -H -D www.example.com,www2.example.com http://www.example.com

but, I get the message:


file 'www.example.com/index.html' already there; not retrieving.


and the process exits.  According to the man page, files with an .html
suffix should be loaded from disk and parsed, but this does not appear to
be happening.  Am I missing something?
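
For comparison, one invocation worth trying (a sketch only; -m is documented
shorthand for -r -N -l inf --no-remove-listing, so timestamping rather than
-nc decides whether an existing file is refetched):

  wget -m -H -D www.example.com,www2.example.com http://www.example.com

Whether the timestamping path re-parses existing .html files the way the man
page describes for -nc is worth verifying against 1.10.2.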


thanks in advance for your help


Bug using recursive get and stdout

2007-04-17 Thread Jonathan A. Zdziarski

Greetings,

Stumbled across a bug yesterday reproduced in both v1.8.2 and 1.10.2.

Apparently, recursive get tries to open the file for reading after
downloading it, in order to download subsequent files. The problem is that
when used with -O - to deliver to stdout, it cannot open that file, so you
get the output below (note the "No such file or directory" error). In 1.10,
it appears that this error message was removed, but wget still fails
to fetch recursively.


I realize it seems like there wouldn't be much reason to send more
than one page to stdout, but I'm feeding it all into a statistical
filter to classify website data, so it doesn't really matter to the
filter. Do you know of any workaround for this, other than opening the
saved files after retrieval (which won't scale at thousands of pages per minute)?
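
For what it's worth, a workaround sketch along those lines, assuming the pages
can be written to disk first and streamed into the filter afterwards
('./classify' is only a placeholder for the statistical filter):

  tmp=$(mktemp -d)
  wget -q -r -P "$tmp" http://www.zdziarski.com/
  find "$tmp" -type f -print0 | xargs -0 cat | ./classify
  rm -rf "$tmp"

This trades streaming for a temporary on-disk copy, which may or may not hold
up at thousands of pages per minute.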


Thanks!

$ wget -O - -r http://www.zdziarski.com > out
--15:40:06--  http://www.zdziarski.com/
           => `-'
Resolving www.zdziarski.com... done.
Connecting to www.zdziarski.com[209.51.159.242]:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 24,275 [text/html]

100%[====================>] 24,275       163.49K/s    ETA 00:00

15:40:06 (163.49 KB/s) - `-' saved [24275/24275]

www.zdziarski.com/index.html: No such file or directory

FINISHED --15:40:06--
Downloaded: 24,275 bytes in 1 files





Jonathan




re: 4 gig ceiling on wget download of wiki database. Wikipedia database being blocked?

2006-12-24 Thread Jonathan Bazemore
Please cc: (copy) your response(s) to my email,
[EMAIL PROTECTED], as I am not subscribed to the
list, thank you.

I've repeatedly tried to download the wikipedia
database, one of which is 8 gigabytes, and the other,
57.5 gigabytes, from a server which supports
resumability.

The downloads consistently break around the 4 gigabyte mark.  The
ceiling isn't on my end: I'm downloading to an NTFS volume.  I've also
done test downloads from the same wiki server
(download.wikimedia.org), which work fine, and repeated tests of my
own bandwidth and network.  The connection is somewhat slower than it
should be, with congestion at times and sporadic dropouts, but since
wget supports resumption that shouldn't matter, so those factors are
ruled out.  Granted, the download might slow down or be broken off,
but why can't it resume after the 4 gigabyte mark?

I've used a file splitting program to break the
partially downloaded database file into smaller parts
of differing size.  Here are my results:


6 gigabyte file to start.  (The 6 gigabyte file
resulted from a lucky patch when the connection was
unbroken after resuming a 4 gigabyte file--but that
isn't acceptable for my purposes) 

6 gigabytes broken into 2 gigabyte segments:

first 2 gigabyte segment resumed successfully.

6 gigs broken into 3 gigabyte segments:

first 3 gigabytes resumed successfully.

6 gigs broken into 4.5 gigabyte segment(s)(seg.
2-partial):

will not resume.

6 gig broken into 4.1 gigabyte segment(s) (seg.
2-partial):

will not resume.

6 gig broken into 3.9 gigabyte segment(s) (seg.
2-partial):

resumed successfully.  

Of course, the original 6 gigabyte partial file
couldn't be resumed.

As you are aware, NTFS, while certainly not the Rolls-Royce of
filesystems, supports file sizes of multiple exabytes, so a 4 gig
ceiling would only apply on a FAT32-formatted partition.  Such limits
are rare in up-to-date operating systems.

I've considered whether the data stream is being corrupted, but wget
(to my knowledge) doesn't do error checking on the file contents; it
just compares remote and local file sizes and downloads the remainder
if the local file is smaller.  And even if the file were being
corrupted, the file-splitting program (which is not
adding headers) should have ameliorated the problem by
now (by excising the corrupt part), unless either: 1.
the corruption is happening at the same point each
time; or 2. the server, or something interposed
between myself and the server, is blocking the
download when resumption of the download of the
database file is detected at or beyond the 4 gigabyte
mark.  
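
For reference, 4 gigabytes is exactly 2^32 bytes, and large-file support was
only added to wget in 1.10, so if the build in use predates that, a ceiling at
exactly this boundary is expected rather than surprising.  A rough sketch of
what a resume actually does (the path below is only a placeholder):

  # 4 GiB is exactly 2^32 bytes: 4 * 1024 * 1024 * 1024 = 4294967296
  # a resume (-c) just asks the server for the bytes past the current local
  # size, roughly:  Range: bytes=<local-size>-
  wget -c http://download.wikimedia.org/...    # placeholder path

A client whose file offsets are 32-bit cannot represent positions past that
boundary, which would explain resumption failing right there.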

I've also tried different download managers:
Truedownloader (open-source download manager), which
is rejected by the server; and getright, a good
commercial program, but it is throttled at
19k/s--making the smaller download well over 120
hours--too slow, especially not knowing if the file is
any good to begin with.

Wikipedia doesn't have tech support, and I haven't
seen anything about this error/problem listed in a
search that should encompass their forums--but they do
suggest the use of wget for that particular
application, so I would infer that the problem is at
least related to wget itself.

I am using wgetgui (as I mentioned in my previous post
to the mailing list) and yes, all the options are
checked correctly; I've double-, triple-, and quadruple-checked
everything.  And then I checked
again. 
 
The database size is irrelevant: it could be 100
gigabytes, and that would present no difficulty from
the standpoint of bandwidth.  However, the reason we
have such programs as wget is to deal with redundancy
and resumability issues with large file downloads.  I
also see that you've been working on large file issues
with wget since 2002, and security issues.  But the
internet has network protocols to deal with this--what
is happening?

Why can't I get the data?  Have the network transport
protocols failed?  Has wget failed?  The data is
supposed to go from point A to point B--what is
stopping that?  It doesn't make sense.  

If I'm running up against a wall, I want to see that
wall.  If something is failing, I want to know what is
failing so I can fix it.  

Do you have an intermediary server that I can FTP off
of to get the wikipedia databases?  What about
CuteFTP?  




re: problem at 4 gigabyte mark downloading wikipedia database file.

2006-12-21 Thread Jonathan Bazemore
Hello,

I am a former computer tech, and I've followed all
instructions closely regarding wget.  I am using wget
1.9 in conjunction with the wgetgui program.  

I have confirmed resumability with smaller binary
files, up to 2.3 gigabytes in size.  

What happens is that when downloading the wikipedia database (about 8
gigabytes) with wget, the download proceeds and is resumable up to
about the 4 gig mark.  Beyond that, when I attempt resumption, the
internet connection appears to be working, but the file just sits
there and doesn't increase in size.

I theorize that the datastream is being corrupted, and
my next step will be to shave pieces of the file off
the end, in several megabyte increments, until I reach
the uncorrupted part.  

Please let me know what's going on and why this is
happening at this email address, as I am not a
developer and not currently subscribed to the mailing
list, but I do need to have wget working properly to
get the database.  

Thanks,

Jonathan.

__
Do You Yahoo!?
Tired of spam?  Yahoo! Mail has the best spam protection around 
http://mail.yahoo.com 


Wget 1.10.2 hangs

2006-08-15 Thread Jonathan Abrahams

Team,



We are using wget to perform an HTTP download, specifying a byte
range in the HTTP header. What we found is that the current version of wget
hangs on Cygwin.



The following summarizes the tests:


Wget version | OS                   | Result
1.8.2        | Linux Red Hat 9      | OK
1.9.1        | Cygwin on Windows XP | OK
1.10.2       | Cygwin on Windows XP | Hangs



Here is the URL being tested:

wget --header "Range: bytes=0-34800" -O /tmp/data.3392 -U "BrainMedia Ops Player Version 1.09&userid=admin" -o /tmp/DOWNError11.3392 "http://stream.brainmediaco.com:15080/Rock/Green_Day/American_Idiot/BMcc0e580e01.nem?duration=174&isrc=BMcc0e580e01&id=admin&drm=1&samplerate=16000&bitrate=29000&channels=2&match=max"



Any idea why this happens?
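
In case it helps to narrow things down, a minimal variant that keeps only the
Range header (the URL here is just a placeholder):

  wget -d --header "Range: bytes=0-34800" -O /dev/null http://www.example.com/somefile

If that also hangs under Cygwin with 1.10.2, the range request itself is the
trigger rather than the long query string.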



Note that the same Cygwin environment was used to run both
versions of wget.



Thank you,

___

Jonathan Abrahams

BrainMedia LLC

Director of Operations

Tel. no. 212 529 6500 x 150

Fax. no. 212 481-8188

mailto: [EMAIL PROTECTED]

website: www.BrainMediaCo.com

Re: new feature...

2006-02-02 Thread Jonathan

any thoughts on this new feature for wget:

when a file has been downloaded successfully, wget calls a script with 
that file as an argument.

e.g. wget -r --post-processing='myprocessor.pl' URL
would call myprocessor.pl with each file which has been downloaded.

I've already hacked the source to get wget to do this for me, but I think 
many people could benefit from this.


It's much easier to throw wget into a script or other program and then have 
that script/program do the post processing (why mess around with wget?).
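
For example, a minimal wrapper along those lines (a sketch only; it
post-processes files after the whole run finishes, not per download, and it
reuses the myprocessor.pl name from the proposal):

  #!/bin/sh
  # fetch recursively into a fresh directory, then hand each file to the processor
  dir=fetched.$$
  wget -q -r -P "$dir" "$1"
  find "$dir" -type f -exec ./myprocessor.pl {} \;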


Jonathan 



Re: new feature...

2006-02-02 Thread Jonathan
Now I see what you are trying to do... we do something similar in that we 
have an external process which wakes up every n minutes, processes whatever 
files wget has downloaded, then goes back to sleep.  This is not the same as 
having wget invoke a separate script/routine after each file download (or 
download attempt if the download failed).


Having wget invoke a separate script/routine after each file download would 
be fairly ugly to implement on a wide variety of o/s's - do you want wget to 
spawn a new process after each d/l so that wget can carry on or would you 
want to halt wget processing, while the post processing routine runs?  Both 
options have drawbacks, and could be very ugly to implement/support across 
o/s's.


I can't see the wget developers getting really excited about this feature, 
but you never know...


J.


It's better to have that functionality inside wget because wget may not
successfully download the file. Also, wget may be downloading multiple
files (usually the case), so wrapping wget up in a script will not let
you process each file as it is downloaded.


K.

Jonathan wrote:

any thoughts on this new feature for wget:

when a file has been downloaded successfully, wget calls a script with 
that file as an argument.

e.g. wget -r --post-processing='myprocessor.pl' URL
would call myprocessor.pl with each file which has been downloaded.

I've already hacked the source to get wget to do this for me, but I 
think many people could benefit from this.



It's much easier to throw wget into a script or other program and then 
have that script/program do the post processing (why mess around with 
wget?).


Jonathan






Re: Downloading grib files

2006-01-25 Thread Jonathan
I think you are trying to use wget for a use case that wget was not intended 
for... but there is a simple solution:  write a script/routine which wakes 
up every n seconds/minutes and does an http request to the url you want to 
check.  If the page has changed then your routine/script would invoke wget 
to retrieve the new page(s).
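
As a rough sketch of that idea (the host below is a placeholder, only the
filename pattern comes from the question, and the interval is arbitrary):

  #!/bin/sh
  # keep retrying until the expected file appears; the successful fetch is kept on disk
  url='http://example.com/path/CMC_reg_ABSV_ISBL_250_ps15km_2006012312_P000.grib'
  until wget -q "$url"; do
      sleep 300    # wait five minutes between attempts
  done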


hth

Jonathan


- Original Message - 
From: Benjamin Scaplen [EMAIL PROTECTED]

To: wget@sunsite.dk
Sent: Wednesday, January 25, 2006 7:06 AM
Subject: Downloading grib files



Hi,

I am using Wget version 1.9.1.

I am downloading weather information off the weatheroffice website
everyday around the same time. Each day the weatheroffice clears the page
and reposts the updated data but since the date is in the name, the names
all change.
eg. CMC_reg_ABSV_ISBL_250_ps15km_2006012312_P000.grib

What I am doing is using wget to get about 500 files from the site. I want
to be able to download the files AS SOON AS they are posted. I was
wondering if there was a way to keep retrying a file (which wouldn't be
there yet, because the previous day's date would be in the name) until it
is posted, and then download it as soon as it appears.

I tried to set the download 'tries' to infinity ( -t inf) but it seems
that it only works for a failure due to connection problems and doesn't
retry when the file I am looking for is not there. Is there any way to
keep looking for a file even if the file specified is not at the site
specified...yet (then of course download it when it is posted)?

I know this may be very confusing, the last question sums it up.

please reply to [EMAIL PROTECTED],

thanks,

Ben Scaplen






spaces in pathnames using --directory-prefix=prefix

2005-11-30 Thread Jonathan D. Degumbia

Hello,



I'm trying to use the --directory-prefix=prefix option
for wget on a Windows system. My prefix has spaces in the path
directories. Wget appears to terminate the path at the first space
encountered. In other words, if my prefix is c:/my prefix/
then wget copies files to c:/my/ .



Is there a work-around for this? 
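
One thing worth checking (a guess, since the exact invocation isn't shown):
quoting the prefix so it reaches wget as a single argument, e.g.

  wget --directory-prefix="c:/my prefix/" http://www.example.com/

The URL above is only a placeholder.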



Thanks,

-JD



Jonathan DeGumbia

Systems Engineer, Omitron Inc.

[EMAIL PROTECTED]

301.474.1700 x611


Re: Getting list of files

2005-11-02 Thread Jonathan
I think you should be using a tool like linklint (www.linklint.org) not 
wget.


Jonathan



- Original Message - 
From: Jean-Marc MOLINA [EMAIL PROTECTED]

To: wget@sunsite.dk
Sent: Wednesday, November 02, 2005 12:56 PM
Subject: Re: Getting list of files



Shahram Bakhtiari wrote:

I would like to share my experience of my failed attempt on using
wget to get the list of files.
I used the following command to get a list of all existing mp3 files,
without really downloading them:

wget --http-user=peshvar2000 --http-passwd=peshvar2000 -r -np --spider
http://www.peshvar.com/music/mp3/amir aram


I don't understand how you can expect wget to get a list of files using
these options. -r is for recursive and -np for no parent... I'm also
trying to get a list of files, so hopefully I will find a solution and post it.

JM.
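
One more note on the quoted command: the space in the URL is split by the
shell, so it needs quoting or percent-encoding, for example:

  wget -r -np --spider "http://www.peshvar.com/music/mp3/amir%20aram/"

The trailing slash is an assumption about the directory layout.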










wget Mailing List question

2005-08-26 Thread Jonathan
Would it be possible (and is anyone else interested) to have the subject 
line of messages posted to this list prefixed with '[wget]'?


I belong to several development mailing lists that utilize this feature so 
that distributed messages do not get removed by spam filters, or deleted by 
recipients because they have no idea who sent the message.


Often the subject line does not indicate that the message relates to wget.
For example, I almost deleted a message with the subject "Honor --datadir"
because it looked like spam.  If the subject line read "[wget]
Honor --datadir" it would be much easier to deal with.


Is anyone else interested in this idea?  Is it feasible?

Jonathan 





Invalid directory names created by wget

2005-07-08 Thread Jonathan



I have encountered a problem while mirroring a web 
site using wget 1.10: wget encountered a url that contains embedded '(' 
and ')' characters. wget then creates a directory structure which contains these 
characters.

eg. 
/www.kingpins.ca/xq/asp/sId./kbId.6/title.Die-Struck+(Gold+on+Gold)+Lapel+Pins
The operating system (Linux) accepts (and creates)
a directory with the name 'title.Die-Struck+(Gold+on+Gold)+Lapel+Pins', but this
directory is not directly accessible (eg. you cannot 'cd' to it or 'ls' it,
and it drives rsync crazy). wget also creates files within this directory
(valid html files), but they cannot be directly accessed because a path error is
raised.

Note: these files can be accessed indirectly by
going to the parent directory (in this case
/www.kingpins.ca/xq/asp/sId./kbId.6/) and then entering 'cd titl*', which puts you
into the directory and gives you access to the files in it (good
for manually reviewing the files, but that's all); you still cannot directly
access these files with a full path name.
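
For what it's worth, the directory should be reachable directly if the
parentheses are quoted or escaped so the shell does not try to interpret them,
e.g. (using the name from the example above):

  cd 'title.Die-Struck+(Gold+on+Gold)+Lapel+Pins'
  ls 'title.Die-Struck+(Gold+on+Gold)+Lapel+Pins'

rsync and scripts need the same quoting when such names are passed on the
command line.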

Has anyone encountered this problem? Is there 
a patch/work-around?

I am running:

GNU wget 1.10
Red Hat Linux release 8.0 
(Psyche)
GNU/Linux kernel release: 2.4.20-20.8, kernel version: Mon Aug 18 14:39:22 EDT
2003, machine type: i686, processor: athlon

Any and all ideas gratefully accepted!


Jonathan


Re: Can Wget do client authentication? Can it be implemented?

2005-02-23 Thread Jonathan Share
On Tue, 22 Feb 2005 19:09:11 +, Bryan [EMAIL PROTECTED] wrote:
 
 Automize cannot access PKI-enabled websites, as it does not support
 that functionality.  I was reading up about Wget, and I saw that you
 can send a specific cookie back to a website.  This is almost exactly
 what I am looking for, but instead of a cookie, it's a personal
 certificate, with an extension of .p12 .
 
 If anyone can tell me if that is something that is maybe coming up in
 later builds or if that functionality is already available, I would
 greatly appreciate it.

Have you tried playing with the sslcertfile and/or sslcertkey parameters?
I can't find any documentation for them beyond the fact that they are
listed when you run wget --help, but I assume their function is fairly
self-explanatory.
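
A rough sketch of what that could look like (the filenames are placeholders,
the conversion step assumes OpenSSL is available, and it assumes wget wants a
PEM file rather than the .p12 bundle directly):

  # unpack the PKCS#12 bundle into a PEM file containing the cert and key
  openssl pkcs12 -in client.p12 -out client.pem -nodes
  wget --sslcertfile=client.pem --sslcertkey=client.pem https://example.com/protected/page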

HTH

Jon


Re: Wget - Interpret Debug Output

2005-02-19 Thread Jonathan Share
Sorry for the Dual post Steven, just realised I hadn't sent it to the list.

On Sat, 19 Feb 2005 11:26:16 +, Jonathan Share [EMAIL PROTECTED] wrote:
 On Fri, 18 Feb 2005 22:43:50 -0600 (CST), Steven M. Schweda
 [EMAIL PROTECTED] wrote:
 In case it might be useful, I've included the -d results I got from
  Wget on my (different) system.
 
 Thanks, It works fine from my windows system too, I had seen the
 difference and that is why I assumed  it was a connection issue but
 didn't know where in the process.
 
 More detailed diagnostics might help, but it appears that there's a
  big difference between your:
 
  Connecting to www.exeter-airport.co.uk[217.199.170.196]:80... seconds
  0.00, Closing fd 1928
 
  and my:
 
  Connecting to www.exeter-airport.co.uk[217.199.170.196]:80... connected.
 
 Can you get to that URL using a normal browser from the same system?
 
 No, this is why it is a problem, it's end users that cannot access the site.
 
 
   My best guess is that it is failing to open the tcp connection but
   looking for a more specific answer if possible.
 
 The "seconds 0.00," message seems to come from run_with_timeout() in
  MSWINDOWS.C, so it's a Windows-specific thing (with which I have no
  experience), but it looks a little like you might (somehow) be
  specifying a time limit of zero seconds, which may not be enough time
  for the connection to form (or for anything else, for that matter).
 
 Interesting, it has been noted that we have only had complaints from
 clients with ADSL connections, could this be a buggy driver?
 
 
 Is this of any help?:
 
   -T,  --timeout=SECONDS         set all timeout values to SECONDS.
 
 I'll try it and see what happens.
 
 
 This advice may not be worth much, but the price is reasonable.
 
 
 Any advice is greatly appreciated this issue is baffling us.
 
 Thanks,
 Jon
 
  
 
 Steven M. Schweda   (+1) 651-699-9818
 382 South Warwick Street[EMAIL PROTECTED]
 Saint Paul  MN  55105-2547
 
  



Re: Interpret Debug Output

2005-02-19 Thread Jonathan Share
On Sat, 19 Feb 2005 15:54:42 -0500, Post, Mark K [EMAIL PROTECTED] wrote:
 That does seem a bit odd.  I did a wget www.exeter-airport.co.uk command
 using 1.9.1, and got this result:
 wget www.exeter-airport.co.uk
 --15:52:05--  http://www.exeter-airport.co.uk/
            => `index.html'
 Resolving www.exeter-airport.co.uk... 217.199.170.196
 Connecting to www.exeter-airport.co.uk[217.199.170.196]:80... connected.
 HTTP request sent, awaiting response... 302 Found
 Location: http://www.exeter-airport.co.uk/site/1/home.html [following]
 --15:52:06--  http://www.exeter-airport.co.uk/site/1/home.html
            => `home.html'
 Connecting to www.exeter-airport.co.uk[217.199.170.196]:80... connected.
 HTTP request sent, awaiting response... 200 OK
 Length: unspecified [text/html]
 
 [ <=> ] 20,990        78.70K/s
 
 15:52:08 (78.48 KB/s) - `home.html' saved [20990]
 
 Doesn't answer your question, I know, but it is another data point.
 
 
 Mark Post

This isn't odd, this is the expected outcome. There is a .htaccess
file in place that rewrites a request for / to /site/1/home.html

Thanks for trying to help anyway.

Jon


Re: Interpret Debug Output

2005-02-19 Thread Jonathan Share
On Sat, 19 Feb 2005 18:06:37 -0500, Post, Mark K [EMAIL PROTECTED] wrote:
 I meant your users' problem seemed a bit odd, not the fact that my attempt
 worked.

Sorry, it's getting late over here and I misread your message. It really
is time I went to bed.

Thanks again anyway.

Jon

 
 
 Mark Post


Interpret Debug Output

2005-02-18 Thread Jonathan Share
Hi,

I'm using wget to assist in debugging an intermittent connection
problem to a particular server. I cannot reproduce the problem myself,
and the users experiencing the problem receive correct output from
nslookup, so I have ruled out DNS problems.

As a second phase I got them to run wget with the following command
line, hoping it would give me a meaningful error message to work on:
wget -d -o testOutput.txt www.exeter-airport.co.uk

The resultant text file contains
DEBUG output created by Wget 1.9.1 on Windows.

set_sleep_mode(): mode 0x8001, rc 0x8000
--12:00:06--  http://www.exeter-airport.co.uk/
           => `index.html'
Resolving www.exeter-airport.co.uk... seconds 0.00, 217.199.170.196
Caching www.exeter-airport.co.uk = 217.199.170.196
Connecting to www.exeter-airport.co.uk[217.199.170.196]:80... seconds
0.00, Closing fd 1928
failed: No such file or directory.
Releasing 003947F0 (new refcount 1).
Retrying.

--12:00:27--  http://www.exeter-airport.co.uk/
  (try: 2) => `index.html'
Found www.exeter-airport.co.uk in host_name_addresses_map (003947F0)
Connecting to www.exeter-airport.co.uk[217.199.170.196]:80... seconds
0.00, Closing fd 1928
failed: No such file or directory.
Releasing 003947F0 (new refcount 1).
Retrying.

This "No such file or directory" means nothing to me, because even if the
file did not exist I would still expect to see the HTTP request/response
messages in the output.

Can someone more knowledgeable identify where this process is failing?
My best guess is that it is failing to open the TCP connection, but I am
looking for a more specific answer if possible.
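
In case it helps anyone reproducing this: the same diagnostic command with an
explicit timeout and retry count added (both options exist in 1.9.1; the
values are arbitrary):

  wget -d -T 30 -t 3 -o testOutput.txt www.exeter-airport.co.uk

The follow-ups to this message suggest the "seconds 0.00" in the debug output
may point at a timeout of zero somehow being applied.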

Thanks in advance.
Jon


Re: 2 giga file size limit ?

2004-09-11 Thread Jonathan Stewart
On Thu, 09 Sep 2004 23:44:09 -0400, Leonid [EMAIL PROTECTED] wrote:
 Jonathan,
 
There exists more than one LFS patch for wget. For Linux, Sun and
 HP-UX one can use http://software.lpetrov.net/wget-LFS/
I shall have to try said patches.  And then find a large file to download.

But I am curious: if patches exist, why have they not been merged
into 'vanilla' wget?  This seems to be a much-requested feature, and
an often-encountered bug...

-- 
 Jonathan


Re: 2 giga file size limit ?

2004-09-09 Thread Jonathan Stewart
Wget doesn't support files over 2 GB.  It is a known issue that is brought up a lot.

Please patch it if you're able; so far no fix has been forthcoming.

Cheers,
Jonathan


- Original Message -
From: david coornaert [EMAIL PROTECTED]
Date: Thu, 09 Sep 2004 12:41:31 +0200
Subject: 2 giga file size limit ?
To: [EMAIL PROTECTED]

 Hi all, 
 I'm trying to get around this kind of message on  I*86 linux boxes
 with wget 1.9.1
 
 
 --11:12:08--  
ftp://ftp.ensembl.org/pub/current_human/data/mysql/homo_sapiens_snp_23_34e/RefSNP.txt.table.gz
           => `current_human/data/mysql/homo_sapiens_snp_23_34e/RefSNP.txt.table.gz'
 ==> CWD not required.
 ==> PASV ... done.    ==> RETR RefSNP.txt.table.gz ... done.
 Length: -1,212,203,102
 
 The file is actually more than 3 GB.  Since my main goal is to mirror
the whole thing at ensembl.org, it would be very nice if mirroring
could be used with huge files too.
 
 
 There is no trouble though on Tru64 machines.
 
 
 here is what the .listing file says for this file on the linux boxes :
 
 total 3753518
 -rw-rw-r--   1 00 97960 Jul 21 17:05 Assay.txt.table.gz
 -rw-rw-r--   1 00   279 Jul 21 19:29 CHECKSUMS.gz
 -rw-rw-r--   1 00 153157540 Jul 21 17:08 ContigHit.txt.table.gz
 -rw-rw-r--   1 0032 Jul 21 17:08
DataSource.txt.table.gz
 -rw-rw-r--   1 00  18359087 Jul 21 17:09 Freq.txt.table.gz
 -rw-rw-r--   1 00 46848 Jul 21 17:09 GTInd.txt.table.gz
 -rw-rw-r--   1 00 185265599 Jul 21 17:13 Hit.txt.table.gz
 -rw-rw-r--   1 00  35914149 Jul 21 17:14 Locus.txt.table.gz
 -rw-rw-r--   1 0020 Jul 21 17:14 Pop.txt.table.gz
 -rw-rw-r--   1 003082764194 Jul 21 19:21 RefSNP.txt.table.gz
 -rw-rw-r--   1 00   195 Jul 21 19:21 Resource.txt.table.gz
 -rw-rw-r--   1 00  72306055 Jul 21 19:23 Strain.txt.table.gz
 -rw-rw-r--   1 00   9480171 Jul 21 19:23 SubPop.txt.table.gz
 -rw-rw-r--   1 00 286116716 Jul 21 19:27 SubSNP.txt.table.gz
 -rw-rw-r--   1 00 49095 Jul 21 19:23 Submitter.txt.table.gz
 -rw-rw-r--   1 00  1697 Jul 21 19:27
homo_sapiens_snp_23_34e.sql.gz
 
 You can see that the file is listed with the correct size, though once the
FTP session is started it reports a negative size.
 
 any solution ?
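
The negative length is consistent with a 32-bit signed overflow rather than
corruption: the .listing shows RefSNP.txt.table.gz as 3,082,764,194 bytes, and
interpreting that value as a signed 32-bit integer gives exactly the length
wget printed. A quick check with plain shell arithmetic:

  echo $(( 3082764194 - 4294967296 ))    # prints -1212203102, i.e. "Length: -1,212,203,102"

This matches the pre-LFS limitation mentioned above, where sizes beyond 2 GB
cannot be represented.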
 



-- 
 Jonathan


Re: files 4 gb

2004-08-25 Thread Jonathan Stewart
On Wed, 25 Aug 2004 19:26:46 +0200, dahead [EMAIL PROTECTED] wrote:
 Hi,
 there is only one thing:
 wget does not support really huge files like a dvd iso file.
This is a known issue.  Wget does not support large files (> 2 GB).

I wonder if any of the developers have looked at, or talked to, the
author of this: http://software.lpetrov.net/wget-LFS/

-- 
 Jonathan


Stratus VOS support

2004-07-27 Thread Grubb, Jonathan

Any thoughts of adding support for Stratus VOS file structures?


*
Jonathan D Grubb [EMAIL PROTECTED]
WebMD Envoy - Medical Real-Time
*



Re: retrieving stylesheets

2003-02-17 Thread Jonathan Buhacoff



Ah, that was it. I didn't even think to check 
that. Now I got 1.8.2 and it's ok.
Thanks
Jonathan Buhacoff



  - Original Message -
  From: Herold Heiko
  To: 'Jonathan Buhacoff'; [EMAIL PROTECTED]
  Sent: Monday, 17 February, 2003 03:00
  Subject: RE: retrieving stylesheets
  
  Which version of wget are you using? 1.5.3? If so, please upgrade and retry.
  Heiko

  -- PREVINET S.p.A.            [EMAIL PROTECTED]
  -- Via Ferretto, 1            ph x39-041-5907073
  -- I-31021 Mogliano V.to (TV) fax x39-041-5907472
  -- ITALY
  
-----Original Message-----
From: Jonathan Buhacoff [mailto:[EMAIL PROTECTED]]
Sent: Saturday, February 15, 2003 10:48 PM
To: [EMAIL PROTECTED]
Subject: retrieving stylesheets
Hi,

I tried using wget to make a copy of my website 
on a second computer, and I noticed that it did not download my 
stylesheet. I have this tag in the header of my html 
file:

<link rel="stylesheet" href="..." type="text/css">

And it was ignored. I looked in the docs and 
archives for something about this but didn't find anything. I was using 
default recursion level of 5, and that worked fine because everything else 
was downloaded (including a flash movie, I was impressed). 

If it's a bug or if nobody ever thought of it, 
is there a chance it can be added? Do you want me to do it myself and submit 
a patch to this list? 

Please CC your replies to [EMAIL PROTECTED] because I'm not on the 
    list.

Thanks,
Jonathan 
Buhacoff


retrieving stylesheets

2003-02-15 Thread Jonathan Buhacoff



Hi,

I tried using wget to make a copy of my website on 
a second computer, and I noticed that it did not download my stylesheet. I 
have this tag in the header of my html file:

<link rel="stylesheet" href="..." type="text/css">

And it was ignored. I looked in the docs and 
archives for something about this but didn't find anything. I was using default 
recursion level of 5, and that worked fine because everything else was 
downloaded (including a flash movie, I was impressed). 

If it's a bug or if nobody ever thought of it, is 
there a chance it can be added? Do you want me to do it myself and submit a 
patch to this list? 

Please CC your replies to [EMAIL PROTECTED] because I'm not on the 
list.

Thanks,
Jonathan Buhacoff


WGET Offline proxy question

2002-04-04 Thread Jonathan A Ruxton



Hi All

Sorry for the off-topic and 'Newbie' question. Having downloaded a site
using 'wget' to a local directory, is it possible at a later stage to run
just the 'proxy' part of the process, reading the site contents from the
local directory and not from the site, to enable re-loading of a 'Squid'
cache? In effect this would be running 'wget' in an offline mode with the
contents already downloaded. Apologies if this has already been answered;
could someone please point to the solution if that is the case.

Thanks in advance

 jar...


Please could I be cc'd ([EMAIL PROTECTED]) on any relevant answers as I am 
not currently subscribed to this list.




wget 1.8.1

2002-01-04 Thread Jonathan Davis

I recently successfully compiled and installed wget 1.8.1 on my box.  
The new OS and architecture reads as follows: Mac OS X 
(powerpc-apple-darwin5.2)

Jonathan




suggestion for wget

2001-03-15 Thread Jonathan Nichols

hello,

i have a suggestion for the wget program.  would it be possible to
have a command line option that, when invoked, would tell wget to
preserve the modification date when transferring the file??  the
modification time would then reflect the last time the file was modified
on the remote machine, as opposed to the last time it was modified on
the local machine.  i know that the cp command has this option (-p).  is
this reasonable/possible for wget??

thanks,

jon