Micah Cowan wrote:
Actually, I'll have to confirm this, but I think that current Wget will
re-download it, but not overwrite the existing content until it reaches
bytes beyond what is already on disk.
I need to investigate further to see if this change was
Juan Manuel wrote:
OK, you are right, I'll try to make it better in my free time. I
assumed it would have been cleaner with one option, but
thought it was easier with two (and since this is my first
approach to C I took the easy way) because one option would have
to deal
Micah Cowan wrote:
Would hash really be useful, ever?
Probably not as long as we strip off the hash before we do the comparison.
Tony
Micah Cowan wrote:
On expanding current URI acc/rej matches to allow matching against query
strings, I've been considering how we might enable/disable this
functionality, with an eye toward backwards compatibility.
What about something like --match-type=TYPE (with accepted values of all,
Cristián Serpell wrote:
I would like to know if there is a reason for using a signed int for
the length of the files to download.
I would like to know why people still complain about bugs that were fixed
three years ago. (More accurately, it was a design flaw that originated from
a time when
Cristián Serpell wrote:
Maybe I should have started by this (I had to change the name of the
file shown):
[snip]
---response begin---
HTTP/1.1 200 OK
Date: Tue, 16 Sep 2008 19:37:46 GMT
Server: Apache
Last-Modified: Tue, 08 Apr 2008 20:17:51 GMT
ETag: 7f710a-8a8e1bf7-47fbd2ef
Micah Cowan wrote:
The easiest way to do what you want may be to log in using your browser,
and then tell Wget to use the cookies from your browser, using
Given the frequency of the "log in and then download a file" use case, it
should probably be documented on the wiki. (Perhaps it already is.)
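For example, with the browser's cookies exported to a file in Netscape
cookies.txt format, wget's --load-cookies option picks them up (the file
name and URL here are just illustrations):
wget --load-cookies cookies.txt http://example.com/members/file.zip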
Coombe, Allan David (DPS) wrote:
However, the case of the files on disk is still mixed, so I assume that
wget is not using the URL it originally requested (harvested from the
HTML?) to create directories and files on disk. So what is it using? An
HTTP header (if so, which one?).
I think
mm w wrote:
a simple URL-rewriting conf should fix the problem, without touching the
file system; everything can be done server side
Why do you assume the user of wget has any control over the server from which
content is being downloaded?
mm w wrote:
Hi, after all it's only my point of view :D
anyway:
/dir/file
dir/File, non-standard
Dir/file, non-standard
and /Dir/File, non-standard
According to RFC 2396: The path component contains data, specific to the
authority (or the scheme if there is no authority
Micah Cowan wrote:
Unfortunately, nothing really comes to mind. If you'd like, you could
file a feature request at
https://savannah.gnu.org/bugs/?func=additem&group=wget, for an option
asking Wget to treat URLs case-insensitively.
To have the effect that Allan seeks, I think the option would
mm w wrote:
standard: the URLs are case-insensitive
you can adapt your software because some people don't respect the standard;
we are not in the 90's anymore, let people doing crappy things deal with
their crappy world
You obviously missed the point of the original posting: how can one
conveniently
Steven M. Schweda wrote:
From Tony Lewis:
To have the effect that Allan seeks, I think the option would have to
convert all URIs to lower case at an appropriate point in the process.
I think that that's the wrong way to look at it. Implementation
details like name hashing may also need
Saint Xavier wrote:
Well, you'd better escape the '&' in your shell (\&)
It's probably easier to just put quotes around the entire URL than to try to
find all the special characters and put backslashes in front of them.
Tony
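For instance, with a hypothetical URL containing an ampersand, either of
these works; the first is much easier to get right:
wget 'http://example.com/page?a=1&b=2'
wget http://example.com/page?a=1\&b=2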
Matthias Vill wrote:
Alexandru Tudor Constantinescu wrote:
I have the feeling wget is not really able to figure out which files
to download from some web sites, when css files are used.
That's right. Up to and including wget 1.11 (released yesterday) there is no
support for CSS files in the matter
Wayne Connolly wrote:
Thanks mate - I know we chatted on IRC, but just thought someone
else may be able to provide some insight.
OK. Here's some insight: wget is essentially a web browser. If the URL
starts with http, then wget sees the exact same content as Internet
Explorer, Firefox,
Hrvoje Niksic wrote:
And how is .tar.gz renamed? .tar-1.gz?
Ouch.
OK. I'm responding to the chain and not Hrvoje's expression of pain. :-)
What if we changed the semantics of --no-clobber so the user could specify
the behavior? I'm thinking it could accept the following strings:
- after:
Micah Cowan wrote:
Keeping a single Wget and using runtime libraries (which we were terming
plugins) was actually the original concept (there's mention of this in
the first post of this thread, actually); the issue is that there are
core bits of functionality (such as the multi-stream
Micah Cowan wrote:
Stuart Moore wrote:
Is there any way to get wget to only use the post data for the first
file downloaded?
Unfortunately, I'm not sure I can offer much help. AFAICT, --post-file
and --post-data weren't really designed for use with recursive
downloading.
Perhaps not, but
Hrvoje Niksic wrote:
Measuring initial bandwidth is simply insufficient to decide what
bandwidth is really appropriate for Wget; only the user can know
that, and that's what --limit-rate does.
The user might be able to make a reasonable guess as to the download rate if
wget reported its
Gerard Seibert wrote:
Is it possible for wget to compare the file named 'AV.hdb'
located in one directory, and if it is older than the AV.hdb.gz file
located on the remote server, to download the AV.hdb.gz file to the
temporary directory?
No, you can only get wget to compare a file of the
Micah Cowan wrote:
If you mean that you want Wget to find any file that matches that
wildcard, well no: Wget can do that for FTP, which supports directory
listings; it can't do that for HTTP, which has no means for listing
files in a directory (unless it has been extended, for example with
Himanshu Gupta wrote:
Thanks Josh and Micah for your inputs.
In addition to whatever Josh and Micah told you, let me add the information
that follows. More than once I have had to relearn how wget deals with
command line options. The last time I did so, I created the HOWTO that
appears
Michiel de Boer wrote:
Is there another way though to achieve the same thing?
You can always run wget and then rename the file afterward. If this happens
often, you might want to write a shell script to handle it. Of course, if you
want all the references to the file to be converted, the script
Micah Cowan wrote:
The manpage doesn't need to give as detailed explanations as the info
manual (though, as it's auto-generated from the info manual, this could
be hard to avoid); but it should fully describe essential features.
I can't see any good reason for one set of documentation to be
Micah Cowan wrote:
Don't we already follow typical etiquette by default? Or do you mean
that to override non-default settings in the rcfile or whatnot?
We don't automatically use a --wait time between requests. I'm not sure what
other nice options we'd want to make easily available, but there
Josh Williams wrote:
Hmm. .org, maybe?
LOL. Do you know how many kewl domain names I had to go through before I
found one that didn't actually exist? Close to a dozen.
Tony
Noèl Köthe wrote:
A switch to the new GPL v3 is a not-so-small change and, like samba
(3.0.x - 3.2), would IMHO be a good reason for wget 1.2 so everybody
sees that something bigger changed.
There already was a version 1.2 (although the program was called geturl at that
time).
The number scheme
On http://www.gnu.org/software/wget/wgetdev.html, step 1 of the summary is:
1. Change to the topmost GNU Wget directory:
% cd wget
But you need to cd to either wget/trunk or the appropriate version
subdirectory of wget/branches.
Micah Cowan wrote:
This information is currently in the bug submitting form at Savannah:
That looks good.
I think perhaps such things as the wget version and operating system
ought to be emitted by default anyway (except when -q is given).
I'm not convinced that wget should ordinarily emit
Micah Cowan wrote:
Done. Lemme know if that works for you.
Looks good
There is a buffer overflow in the following line of the proposed code:
sprintf(filecopy, "\"%.2047s\"", file);
It should be:
sprintf(filecopy, "\"%.2045s\"", file);
in order to leave room for the two quotes.
Tony
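A sketch of a safer alternative, assuming filecopy is a 2048-byte buffer
(as the %.2047s precision suggests); snprintf truncates rather than
overflows, so the size bookkeeping is done once:
#include <stdio.h>
void quote_name(const char *file)
{
  char filecopy[2048];
  /* snprintf writes at most sizeof(filecopy) bytes, including the
     terminating NUL, so the two quotes always fit. */
  snprintf(filecopy, sizeof(filecopy), "\"%.2045s\"", file);
}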
-Original Message-
From: Rich Cook [mailto:[EMAIL PROTECTED]
Sent:
Try: wget http://ip.of.new.sitename --header="Host: sitename.com" --mirror
For example: wget http://66.233.187.99 --header="Host: google.com" --mirror
Tony
-Original Message-
From: Kelly Jones [mailto:[EMAIL PROTECTED]
Sent: Sunday, June 17, 2007 6:10 PM
To: wget@sunsite.dk
Subject:
Joe Kopra wrote:
The wget statement looks like:
wget --post-file=serverdata.mup -o postlog -O survey.html
http://www14.software.ibm.com/webapp/set2/mds/mds
--post-file does not work the way you want it to; it expects a text file
that contains something like this:
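The elided example was presumably ordinary URL-encoded form data, along
these lines (field names invented for illustration):
username=jsmith&password=secret&submit=Login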
Highlord Ares wrote:
it tries to download web pages named similar to
http://site.com?variable=yes&mode=awesome
Since & is a reserved character in many command shells, you need to quote
the URL on the command line:
wget
Lara Röpnack wrote:
1.) How can I send POST data with line breaks? I cannot press enter,
and \n or \r or \r\n don't work...
You don't need a line break because parameters are separated by ampersands:
a=1&b=2
2.) I don't understand the post file. I can send one file - but I can't give
J.F.Groff wrote:
Amazingly I found this feature request in a 2003 message to this very
mailing list. Are there only a few lunatics like me who think this should
be included?
Wget is written and maintained by volunteers. What you need to find is a
lunatic willing to volunteer to write the code
I don't think there is such a feature, but if you're going to add
--not-before, you might as well add --not-after too.
Tony
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Saturday, April 07, 2007 6:27 PM
To: wget@sunsite.dk
Subject: Suggesting Feature:
Vitaly Lomov wrote:
It's a file system issue on windows: file path length is limited to
259 chars.
In which case, wget should do something reasonable (generate an error
message, truncate the file name, etc.). It shouldn't be left as an exercise
for the user to figure out that the automatically
Bruce [EMAIL PROTECTED] wrote:
the hostname 'ga13.gamesarena.com.au' resolves back to an NX domain
NXDOMAIN is shorthand for non-existent domain. It means the domain name
system doesn't know the IP address of the domain. (It would be like me
having a non-published telephone number; if you know
The server told wget that it was going to return 6K:
Content-Length: 6720
_
From: Smith, Dewayne R. [mailto:[EMAIL PROTECTED]
Sent: Thursday, March 01, 2007 8:05 AM
To: [EMAIL PROTECTED]
Subject: wget help on file download
Trying to download a 4 MB file; it only retrieves 6 KB of it.
If it were me, I'd grab all the files to my local drive and then write
scripts to do the moving and renaming.
Tony
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Friday, February 23, 2007 1:33 AM
To: wget@sunsite.dk
Subject: how to get images into a new
Tony
_
From: Alan Thomas [mailto:[EMAIL PROTECTED]
Sent: Thursday, February 22, 2007 5:27 PM
To: Tony Lewis; wget@sunsite.dk
Subject: Re: php form
Tony,
Thanks. I have to log in with username/password, and I think I
know how to do that with wget using POST
Lars Hamren wrote:
Download speeds are reported as K/s, where, I assume, K is short for
kilobytes.
The correct SI prefix for thousand is k, not K:
http://physics.nist.gov/cuu/Units/prefixes.html
SI units are for decimal-based numbers (that is, powers of 10), whereas
computer programs
Christoph Anton Mitterer wrote:
I don't agree with that,.. SI units like K/M/G etc. are specified by
international standards and those specify them as 10^x.
The IEC defined, in IEC 60027, symbols for use with base 2 (e.g. Ki, Mi,
Gi).
All of this is described in the Wikipedia article I
A) This is the list for reporting bugs. Questions should go to
wget@sunsite.dk
B) wget does not support multiple simultaneous downloads
C) The decreased per-file download time you're seeing is (probably) because
wget is reusing its connection to the server to download the second file. It
takes some
I don't think that's valid HTML. According to RFC 1866: "An HTML user
agent should treat end of line in any of its variations as a word space
in all contexts except preformatted text." I don't see any provision for
end of line within the HREF attribute of an A tag.
Tony
From: HUAZHANG GUO
Mauro Tortonesi wrote:
perhaps we should modify wget in order to print the list of touched
URLs as well? maybe only in case -v is given? what do you think?
On June 28, 2005, I submitted a patch to write unfollowed links to a file.
It would be pretty simple to have a similar --followed-links
Run the command with -d and post the output here.
Tony
_
From: Junior + Suporte [mailto:[EMAIL PROTECTED]]
Sent: Monday, July 03, 2006 2:00 PM
To: [EMAIL PROTECTED]
Subject: BUG
Dear,
I am using wget to send a login request
Bruce wrote:
any idea as to who's working on this feature?
Mauro Tortonesi sent out a request for comments to the mailing list on March
29. I don't know whether he has started working on the feature or not.
Tony
I think there is a limit to the number of characters that DOS will accept on
the command line (perhaps around 256). Try putting echo in front of the
command in your batch file and see how much of it gets echoed back to you.
As Tobias suggested, you can try moving some of your command line options
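For example, prefixing the command in the batch file with echo (the long
command line here is hypothetical) shows exactly where DOS cuts it off:
echo wget --mirror --no-parent --accept=jpg,gif http://example.com/some/very/long/path/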
The problem is your accept list; -A*.* says to accept only files that
contain at least one dot in the file name, and
GetFile?id=DBJOHNUNZIOCSBMOMKRU&convert=image%2Fgif&scale=3 doesn't contain
any dots.
I think you want to accept all files, so just delete -A*.* from your argument
list because the
Erich Steinboeck wrote:
Is there a way to trace the browser traffic and compare
that to the wget traffic, to see where they differ?
You can use a web proxy. I like Achilles:
http://www.mavensecurity.com/achilles
Tony
ks wrote:
Just one more question.
Something like this inside somefile.txt
http://fly.srk.fer.hr/
-r http://www.gnu.org/ -o gnulog
-S http://www.lycos.com/
Why not use a batch file or command script (depending on what OS you're
using) containing something like:
wget http://fly.srk.fer.hr
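presumably continuing along the same lines, one wget invocation per line
of somefile.txt:
wget http://fly.srk.fer.hr/
wget -r http://www.gnu.org/ -o gnulog
wget -S http://www.lycos.com/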
Hrvoje Niksic wrote:
Anyway, adding further customizations to an already questionable feature
is IMHO not a very good idea.
Perhaps Derek would be happy if there were a way to turn off this
questionable feature.
Tony
18 mao [EMAIL PROTECTED] wrote:
then save the page as 2.html with the Firefox browser
You should not assume that the file saved by any browser is the same as the
file delivered to the browser by the server. The browser is probably
manipulating line endings to match the conventions on your
It's not a bug; it's a (missing) feature.
-Original Message-
From: Detlef Girke [mailto:[EMAIL PROTECTED]
Sent: Thursday, April 13, 2006 3:17 AM
To: [EMAIL PROTECTED]
Subject: download of images linked in CSS does not work
Hello,
I tried everything, but images built in via CSS are
Mauro Tortonesi wrote:
no. i was talking about regexps. they are more expressive
and powerful than simple globs. i don't see what's the
point in supporting both.
The problem is that users who are expecting globs will try things like
--filter=-file:*.pdf rather than --filter=-file:.*\.pdf. In
Hrvoje Niksic wrote:
But that misses the point, which is that we *want* to make the
more expressive language, already used elsewhere on Unix, the
default.
I didn't miss the point at all. I'm trying to make a completely different
one, which is that regular expressions will confuse most users
Hrvoje Niksic wrote:
I don't see a clear line that connects --filter to glob patterns as used
by the shell.
I want to list all PDFs in the shell: ls -l *.pdf
I want a filter to keep all PDFs: --filter=+file:*.pdf
Note that *.pdf is not a valid regular expression even though it's what
most
How many keywords do we need to provide maximum flexibility on the
components of the URI? (I'm thinking we need five.)
Consider http://www.example.com/path/to/script.cgi?foo=bar
--filter=uri:regex could match against any part of the URI
--filter=domain:regex could match against www.example.com
Curtis Hatter wrote:
Also any way to add modifiers to the regexs?
Perhaps --filter=path,i:/path/to/krs would work.
Tony
Hrvoje Niksic wrote:
The cast to int looks like someone was trying to remove a warning and
botched operator precedence in the process.
I can't see any good reason to use , here. Why not write the line as:
eta_hrs = eta / 3600; eta %= 3600;
This makes it much less likely that someone
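For context, a sketch of how the surrounding computation presumably
continues (the minute/second lines are assumed by analogy with the quoted
fragment):
int eta = 7325;           /* seconds remaining (example value) */
int eta_hrs = eta / 3600; eta %= 3600;
int eta_min = eta / 60;   eta %= 60;
int eta_sec = eta;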
Mauro Tortonesi wrote:
i would like to read other users' opinion before deciding which
course of action to take, though.
Other users have suggested adding a command line option for -a two or
three times in the past:
- 2002-11-24: Steve Friedl [EMAIL PROTECTED] submitted a patch
- 2002-12-24:
I believe the following simplified code would have the same
effect:
if ((opt.recursive || opt.page_requisites || opt.use_proxy)
    && url_scheme (*t) != SCHEME_FTP)
  status = retrieve_tree (*t);
else
  status = retrieve_url (*t, &filename, &redirected_URL, NULL, &dt);
Tony
From: [EMAIL PROTECTED]
Jonathan DeGumbia wrote:
I'm trying to use the --directory-prefix=prefix option for wget on a
Windows system. My prefix has spaces in the path directories. Wget
appears to terminate the path at the first space encountered. In other
words if my prefix is: c:/my prefix/ then wget copies
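A likely fix (an untested sketch): quote the whole option so the shell
passes the embedded space through to wget:
wget --directory-prefix="c:/my prefix/" http://example.com/file.zip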
[EMAIL PROTECTED] wrote:
Thanks for your reply. Only ping works for bbc.com and not wget.
When I issue the command wget www.bbc.com, it successfully downloads the
following file:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2//EN">
<HTML>
<HEAD>
<META HTTP-EQUIV="Refresh" content="0;
Eberhard Wolff wrote:
Apparently wget can't handle large files.
[snip]
wget --version
GNU Wget 1.8.2
This bug was fixed in version 1.10 of wget. You should obtain a copy of
the latest version, 1.10.2.
Tony
Mauro Tortonesi wrote:
this is a very interesting point, but the patch you mentioned above uses
the LIST -a FTP command, which AFAIK is not supported by all FTP servers.
As I recall, that's why the patch was not accepted. However, it would be
useful if there were some command line option to
PoWah Wong wrote:
The login page is:
http://safari.informit.com/?FPI=&uicode=
How to figure out the login command?
These two commands do not work:
wget --save-cookies cookies.txt http://safari.informit.com/?FPI= [snip]
wget --save-cookies cookies.txt
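The usual pattern is two invocations: one that POSTs the login form and
saves the cookies, and one that sends them back. A sketch only; the login
URL path and form field names here are invented and must be taken from the
actual login page:
wget --save-cookies cookies.txt --keep-session-cookies \
  --post-data 'user=NAME&password=PASS' 'http://safari.informit.com/login'
wget --load-cookies cookies.txt 'http://safari.informit.com/some/page'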
Pat Malatack wrote:
is there a way to stay connected, because it seems to me that this takes
a decent amount of time that could be minimized
The following command will do what you want:
wget "google.com/news" "google.com/froogle"
Tony
Larry Jones wrote:
Of course it's directly accessible -- you just have to quote it to keep
the
shell from processing the parentheses:
cd 'title.Die-Struck+(Gold+on+Gold)+Lapel+Pins'
You can also make the individual characters into literals:
cd
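presumably completed along these lines, escaping each parenthesis
individually:
cd title.Die-Struck+\(Gold+on+Gold\)+Lapel+Pins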
I got a "Name or
service not known" error from wget 1.10 running on Linux. When I installed an
earlier version of wget, it worked just fine.It also works just fine on
version 1.10 running on Windows. Any ideas?
Here's the output on
Linux:
wget --versionGNU Wget 1.9-beta1
wget
Hrvoje Niksic wrote:
In fact, I know of no application that accepts numbers as Wget prints
them.
Microsoft Calculator does.
Tony
Maurice Volaski wrote:
wget's -m option seems to be able to ignore most of the files it should
download from a site. Is this simply because wget can download only the
files it can see? That is, if the web server's directory indexing option
is off and a page on the site is present on the
Andrzej wrote:
Two problems:
There is no index.html under this link:
http://znik.wbc.lublin.pl/Mineraly/Ftp/UpLoad/
[snip]
it creates a non-existent link:
http://znik.wbc.lublin.pl/Mineraly/Ftp/UpLoad/index.html
When you specify a directory, it is up to the web server to determine what
Hrvoje Niksic wrote:
The question is what should we do for 1.10? Document the
unreadable names and cryptic values, and have to support
them until eternity?
My vote is to change them to more reasonable syntax (as you suggested
earlier in the note) for 1.10 and include the new syntax in the
Alan Thomas wrote:
I am having trouble getting the files I want using a wildcard specifier...
There are no options on the command line for what you're attempting to do.
Neither wget nor the server you're contacting understand *.pdf in a URI.
In the case of wget, it is designed to read web
Jens Rösner wrote:
AFAIK, RegExp for (HTML?) file rejection was requested a few times, but is
not implemented at the moment.
It seems all the examples people are sending are just attempting to get a
match that is not case sensitive. A switch to ignore case in the file name
match would be a
The --post-data option was added in version 1.9. You need to upgrade your
version of wget.
Tony
-Original Message-
From: Richard Emanilov [mailto:[EMAIL PROTECTED]
Sent: Monday, March 21, 2005 8:49 AM
To: Tony Lewis; [EMAIL PROTECTED]
Cc: wget@sunsite.dk
Subject: RE: help!!!
wget
Hrvoje Niksic wrote:
I don't see how and why a web site would generate headers (not bodies, to
be sure) larger than 64k.
To be honest, I'm less concerned about the 64K header limit than I am about
limiting a header line to 4096 bytes. I don't know any sites that send back
header lines that
Jesus Legido wrote:
I'm getting a file from https://mfi-assets.ecb.int/dla/EA/ea_all_050303.txt:
The problem is not with wget. The file on the server
starts with 0xFF 0xFE. Put the following into an HTML file (say temp.html) on
your hard drive, open it in your web browser, right click on
Normand Savard wrote:
I have a question about wget. Is it possible to download attribute
values other than the hardcoded ones?
No, at least not in the existing versions of wget. I have not heard that
anyone is working on such an enhancement.
Mauro Tortonesi wrote:
At 18:28 on Wednesday, 5 January 2005, Dražen Kačar wrote:
Jan Minar wrote:
What's wrong with mbrtowc(3) and friends? The mysterious solution
is probably to use wprintf(3) instead of printf(3). A couple of
questions on #c on freenode would give you that answer.
John J Foerch wrote:
It seems that the system of using the metric prefixes for powers of 2 is a
simple accident of history. Any thoughts on this?
I would say that the practice of using powers of 10 for K and M is a
response to people who cannot think in binary.
Tony
Carlos Villegas snidely wrote:
I would say that the original poster understands what he is saying, and
you clearly don't...
I'll put my computer science degree up against your business administration
and accounting degree any day.
A kilobyte has always been 1024 bytes and the choice was not
Mark Post wrote:
While we're at it, why don't we just round off the value of pi to be 3.0
Do you live in Indiana?
Actually, Dr. Edwin Goodwin wanted to round off pi to any of several values
including 3.2.
http://www.agecon.purdue.edu/crd/Localgov/Second%20Level%20pages/Indiana_Pi_Story.htm
Anthony Caetano wrote:
I am looking for a way to stay current without mirroring an entire site.
[snip]
Does anyone else see a use for this?
Yes. Here's my non-wget solution. I truncate all the files in the
directories that I don't want, but maintain the date/time accessed and
modified. The
Justin Gombos wrote:
Since I feel that computers serve man, not the reverse, I don't
intend to change my file organization to be web page centric. Looking
around the web, I was quite surprised to find that I'm the only one
with this problem. I was very relieved to find that there was a wput
-
Jonathan Grubb wrote:
Any thoughts of adding support for Stratus VOS file structures?
Your question is a little too vague -- even for me (I used to work for
Stratus and actually know what VOS is :-)
What file structures are you needing supported that wget does not currently
Jonathan Grubb wrote:
Um. I'm using wget on Win2000 to ftp to a VOS machine. I'm finding that
the usual '>' sign for directories isn't supported by wget and that '/'
doesn't work either, I think because the ftp server itself is expecting '>'.
The problem may be that Win 2000 grabs the '>' before
Ploc wrote:
The result is a website very different from the original one as it lacks
backgrounds.
Can you please confirm whether what I think is true (or not), whether it is
registered as a bug, and whether there is a planned date to correct it.
It is true. wget only retrieves objects that appear in the
Ploc wrote:
Is it already registered as a bug or in a wishlist?
It's not a bug. This feature has been on the wishlist for a long time.
Tony
Malte Schünemann wrote:
Since wget is able to obtain directory listings / retrieve data from
there, it should be possible to also upload data
Then it would be wput. :-)
What is so special about wget that it is able to perform this task?
You can learn a LOT about how wget is communicating with
Phil Endecott wrote:
Tony> The stuff between the quotes following HREF is not HTML; it
Tony> is a URL. Hence, it must follow URL rules not HTML rules.
No, it's both a URL and HTML. It must follow both rules.
Please see the page that I cited in my previous message:
, wget encodes it to create a valid name.
Tony Lewis wrote:
I use semicolons in CGI URIs to separate parameters. (Ampersand
is more often used for this, but semicolon is also allowed and
has the advantage that there is no need to escape it in HTML.)
There is no need to escape ampersands
Phil Endecott wrote:
I am using wget to build a downloadable zip file for offline viewing of
a CGI-intensive web site that I am building.
Essentially it works, but I am encountering difficulties with semicolons.
I use semicolons in CGI URIs to separate parameters. (Ampersand is more
often
henry luo wrote:
I found a problem with GNU Wget 1.9.1, but I don't know whether it is a
new function or a bug;
the old version (1.8.2) downloads a link, for example:
wget
'http://www.expekt.com/odds/eventsodds.jsp?range=100&sortby=date&active=betting&betcategoryId=SOC%25'
Hrvoje Niksic wrote:
Wget could always support a URL parameter, such as:
wget 'ftp://server/dir1/dir2/file;disk=foo'
Assuming you can detect a VMS connection, why not simply
ftp://server/foo:[dir1.dir2]?
Tony
How do you enter the path in your web browser?
- Original Message -
From: Bufford, Benjamin (AGRE) [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Wednesday, May 26, 2004 7:32 AM
Subject: OpenVMS URL
I am trying to use wget to retrieve a file from an OpenVMS server but have
been unable