Hi Gregory,
What you've mentioned is indeed the intended behaviour.
* Gregory R Fellow [181001 08:55]:
> Hi. Is it the intended behavior for wget to allow sending custom header
> lines with no value?
>
>
> The following clears previous user-defined headers as described in the
> documentation:
Hi Michael,
Nice to hear from you again. I vaguely remember a mention of someone who wanted
to work on this feature. When deciding to make this work, please remember that
any of this can only work if the site does not rely on JavaScript, which, given
WordPress, is a difficult thing. The reason for
On 10/8/18 10:27 PM, mich...@cyber-dome.com wrote:
> The issue that I have is this:
>
> Since the source code is split into various directories (src, lib), NetBeans
> loses track of the source code in the lib directory.
> I verified it using gdb. (You can see how deep I went).
lib/ is a
-Original Message-
From: Tim Rühsen
Sent: Monday, 8 October, 2018 10:55 PM
To: mich...@cyber-dome.com; bug-wget@gnu.org
Subject: Re: [Bug-wget] Hello again
On 10/8/18 7:57 PM, mich...@cyber-dome.com wrote:
>
> Hello again,
>
> My name is Michael. I approached you about a year
On 10/8/18 7:57 PM, mich...@cyber-dome.com wrote:
>
> Hello again,
>
> My name is Michael. I approached you about a year ago.
>
> I am interested in making wget2 a tool that can convert content management
> system (like WordPress) output to HTML. This actually limits the content
>
all use cases, is not
> for the use case where the server allows a client to connect without
> certificate but requests authentication later after the location of
> access is known. Under TLS1.2 this was working via a re-handshake, but
> under TLS1.3 a client must enable
Hi,
Most new features now go into wget2 and libwget.
Development is done in GitLab: https://gitlab.com/gnuwget/wget2
Docs: https://gnuwget.gitlab.io/wget2/reference/modules.html
I don't know if that particular feature would fit into wget2, but a
generalisation
of it might
Hi Tom,
On 10/4/18 11:13 AM, Tom Mounet wrote:
> Hello,
>
> I saw your need for a C developer on the Savannah home page. I'm a student
> in IT with a good basis in networking (so TCP/IP), and I've been coding
> for the last 2 years in C. It's not my main language but I love it, and
> I would love to
The new --retry-on-host-error option might prevent the need for
complicated shell retry logic (it was added for the exact problem of a
flaky network connection). That option is not yet in a tagged release,
but it's been merged.
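For anyone stuck on an older release, the retry logic can still be sketched in shell. This is a hypothetical helper (retry() is not part of wget; 'true' below stands in for the real wget command line):

```shell
# Hypothetical retry wrapper for builds without --retry-on-host-error.
# Usage: retry <max-tries> <command...>
retry() {
  tries=$1; shift
  n=0
  until "$@"; do
    n=$((n + 1))
    [ "$n" -ge "$tries" ] && return 1
    sleep 1   # back off briefly between attempts
  done
}

# With a build containing the merged option, the loop collapses to e.g.:
#   wget --retry-on-host-error --tries=20 --waitretry=5 <URL>
retry 3 true && echo "fetched"
```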
Hope this helps!
On Tue, Oct 2, 2018 at 11:42 AM Paul Wagner wrote:
On 9/30/18 7:49 PM, Gregory R Fellow wrote:
> Hi. Is it the intended behavior for wget to allow sending custom header
> lines with no value?
>
> The following clears previous user-defined headers as described in the
> documentation:
> --header=
>
> The following both send a header with no
Hi Giovanni,
On 9/24/18 7:38 AM, Giovanni Porta wrote:
> Hello all,
>
> For the past week or so, I've been attempting to mirror a website with Wget.
> However, after a couple days of downloading (and approx 38 GB downloaded),
> Wget eventually exhausts all system memory and swap leading to the
Hi Sam,
thanks for the report.
The problem might be fixed in upstream gnulib already. May I send you a
tarball with everything up-to-date for testing ?
BTW, s...@bingner.com doesn't seem to be a valid account (invalid DNS MX
or A/AAAA resource record). Please let me know where to send the
Paul Wagner writes:
> That's what the OP thinks, too. I attributed the slow startup to DNS
> resolution.
Depending on your circumstances, one way to fix that is to set up a local
caching-only DNS server and direct ordinary processes to use it. Then
the first lookup is expensive, but the caching
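As a sketch, a caching-only resolver can be as small as a two-line dnsmasq configuration (paths and sizes here are assumptions, not from the thread):

```
# /etc/dnsmasq.conf -- minimal caching-only setup (illustrative)
listen-address=127.0.0.1
cache-size=1000

# Then point the resolver at it, e.g. in /etc/resolv.conf:
#   nameserver 127.0.0.1
```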
Dear all,
On 12.09.2018 03:51, wor...@alum.mit.edu wrote:
Tim Rühsen writes:
Thanks for the pointer to coproc, never heard of it ;-) (That means I
never had a problem that needed coproc).
Anyways, copying the script results in a file '[1]' with bash
4.4.23.
Yeah, I'm not surprised there are
Tim Rühsen writes:
> Thanks for the pointer to coproc, never heard of it ;-) (That means I
> never had a problem that needed coproc).
>
> Anyways, copying the script results in a file '[1]' with bash 4.4.23.
Yeah, I'm not surprised there are bugs in it.
> Also, wget -i - waits with downloading
On 9/11/18 5:34 AM, Dale R. Worley wrote:
> Paul Wagner writes:
>> Now I tried
>>
>>{ i=1; while [[ $i != 100 ]]; do echo
>> "http://domain.com/path/segment_$((i++)).mp4"; done } | wget -O foo.mp4
>> -i -
>>
>> which works like a charm *as long as the 'generator process' is finite*,
>>
Paul Wagner writes:
> Now I tried
>
>{ i=1; while [[ $i != 100 ]]; do echo
> "http://domain.com/path/segment_$((i++)).mp4"; done } | wget -O foo.mp4
> -i -
>
> which works like a charm *as long as the 'generator process' is finite*,
> i.e. the loop is actually programmed as in the example.
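The finite generator can also be written as a function, which keeps the pipe into wget bounded (a sketch; domain.com is the placeholder from the original mail, and the wget call itself is left commented out):

```shell
# Emit the 99 segment URLs, then exit, so wget's stdin sees EOF.
gen_urls() {
  i=1
  while [ "$i" -lt 100 ]; do
    echo "http://domain.com/path/segment_${i}.mp4"
    i=$((i + 1))
  done
}

# Actual download (not run here):
#   gen_urls | wget -O foo.mp4 -i -
gen_urls | head -n 2
```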
Pushed. Thank you, Tomas !
Regards, Tim
On 9/4/18 11:22 AM, Tomas Hozza wrote:
> Wget currently allows specifying "TLSv1_3" as the parameter for
> --secure-protocol option. However it is only implemented for OpenSSL
> and in case wget is compiled with GnuTLS, it causes wget to abort with:
>
On 8/28/18 10:13 AM, Tomas Korbar wrote:
> Hello,
> when the quiet option is provided and wget runs in the background, an empty
> log file is created. I think this file is unnecessary because the quiet
> option prevents any log from being written into it. This problem is caused
> by the redirection of logs. Wget
On 08/27/2018 11:01 AM, Tomas Hozza wrote:
> Hi Darshit.
>
> On 25.08.2018 08:20, Darshit Shah wrote:
>> Hi Tomas,
>>
>> Thanks for running the scan and the patches you've made! I briefly glanced
>> through those and they seem fine. Of course, they will need to be slightly
>> modified to apply to
Thank you, Kalle !
We'll go through the docs soon and amend them.
On 08/26/2018 10:08 AM, kalle wrote:
> hello,
> here my proposals:
>
> chapter 2, part "download all the URLs specified": make it clearer, what
> that exactly means in relationship to URLs describing a directory. Is
> the whole
Hi Darshit.
On 25.08.2018 08:20, Darshit Shah wrote:
> Hi Tomas,
>
> Thanks for running the scan and the patches you've made! I briefly glanced
> through those and they seem fine. Of course, they will need to be slightly
> modified to apply to the current git HEAD. I can do that in the coming
Hi Tomas,
Thanks for running the scan and the patches you've made! I briefly glanced
through those and they seem fine. Of course, they will need to be slightly
modified to apply to the current git HEAD. I can do that in the coming days and
apply these patches.
I would like to ask you if there is
Thanks,
it was removed on 8 March 2018 (commit
7eff94e881b94d119ba22fc0c29edd65f4e6798b)
Regards, Tim
On 08/24/2018 09:39 AM, Endre Hagelund wrote:
> Hi
>
> I've encountered a bug when compiling wget 1.19.5 from source with the
> following command:
>
> ./configure --with-cares &&
On 8/23/2018 2:56 AM, Tim Rühsen wrote:
Feedback is what the project lives on :-)
Your goal sounds interesting, what do you need it for ?
Well, it's fairly trivial and there might be a better way but...
What I am looking to do is retrieve and store pages from ebay. Several
times a
Hi Richard,
On 08/22/2018 08:21 PM, Richard Thomas wrote:
> Hi, hope this is the correct way to do this.
>
> I want to be able to download a webpage and all its prerequisites and
> turn it into a multipart/related single file. Now, this requires
> identifying and changing URLs which, as most
Subject: [External] Re: [Bug-wget] Inconsistent cookie handling between
different machines
> When running from machine 2 (Ubu
URI content encoding = ‘utf-8’
Saving cookies to /tmp/cookies.txt.
Done saving cookies.
Saving HSTS entries to /home/developer/.wget-hsts
-Original Message-
From: Darshit Shah
Sent: Saturday, August 18, 2018 3:45 AM
To: Casey, Sean
Cc: Bug-wget@gnu.org
Subject: [External] Re: [Bu
Hi,
Thanks for the report and the analysis. However, could you please share the
entire debug output from both runs? Please don't cut out anything; very often
the contextual information around the problem area is just as important. You
may redact the actual cookie data if you want.
Also, please
On Windows, the error still exists when the file name is long enough to
scroll.
In progress.c -> create_image (...)
memcpy (p, bp->f_download + offset_bytes, bytes_in_filename);
overwrites memory beyond the lower bound of p (a good old bug: a buffer
overflow). (
Good catch, thanks !
Regards, Tim
On 10.08.2018 14:51, Tomas Hozza wrote:
> In Fedora, we are implementing crypto policies, in order to enhance the
> security of user systems. This is done on the system level by global
> configuration. It may happen that due to the active policy, only
> TLSv1.2
On 05.08.2018 19:54, Anyparktos wrote:
> I noticed an error in the greek translation of wget. It reads "Μήκος" which
> is greek for "length" instead of "Μέγεθος" which is "size". I attached a
> relevant screenshot for clarity. It drives me crazy, please correct it!
>
In C locale:
Length:
Hi James,
Wget2 is built on top of the libwget library, which uses asynchronous network
calls. However, Wget2 is written such that it only utilizes one connection per
thread. This is essentially a design decision to simplify the codebase. In case
you want a more complex crawler, you can use
On 31.07.2018 20:17, James Read wrote:
> Thanks,
>
> as I understand it though there is only so much you can do with
> threading. For more scalable solutions you need to go with async
> programming techniques. See http://www.kegel.com/c10k.html for a summary
> of the problem. I want to do large
Thanks,
as I understand it though there is only so much you can do with threading.
For more scalable solutions you need to go with async programming
techniques. See http://www.kegel.com/c10k.html for a summary of the
problem. I want to do large scale webcrawling and am not sure if wget2 is
up to
On 31.07.2018 18:39, James Read wrote:
> Hi,
>
> how much work would it take to convert wget into a fully fledged
> asynchronous webcrawler?
>
> I was thinking something like using select. Ideally, I want to be able to
> supply wget with a list of starting point URLs and then for wget to crawl
>
-Original Message-
From: Tim Rühsen
To: Yuxi Hao; 'Dale R. Worley'
Cc: bug-wget@gnu.org
Subject: Re: [Bug-wget] Any explanation for the '-nc' returned value?
On 30.07.2018 16:44, Yuxi Hao wrote:
> Let's take an example in practice.
> When there is a bad network connection, I try wget wit
https://www.gnu.org/software/wget/manual/html_node/Download-Options.html
If the server doesn't support it, it simply won't work.
All you can do is not use -N, or ask the server's admin to support it.
Regards, Tim
>
> Best Regards,
> YX Hao
>
> -Original Message-
> From
And '-N' does not always work as desired, because of "Last-modified header
missing". One example:
wget -N https://www.gnu.org/software/wget/manual/html_node/Download-Options.html
Best Regards,
YX Hao
-Original Message-
From: Dale R. Worley
To: Tim Rühsen
Cc: lifenjoine; bug-wget
Su
Tim Rühsen writes:
>-nd, even if -r or -p are in effect.) When -nc is specified,
> this behavior is suppressed, and Wget will
>refuse to download newer copies of file.
Though strictly speaking, this doesn't say that wget will then exit with
error code 1.
Dale
Hi,
from the man pages (--no-clobber):
When running Wget without -N, -nc, -r, or -p, downloading the
same file in the same directory will result
in the original copy of file being preserved and the second
copy being named file.1. If that file is
On 19.07.2018 17:24, Paul Wagner wrote:
> Dear wgetters,
>
> apologies if this has been asked before.
>
> I'm using wget to download DASH media files, i.e. a number of URLs in
> the form domain.com/path/segment_1.mp4, domain.com/path/segment_2.mp4,
> ..., which represent chunks of audio or
On 18.07.2018 14:58, Jeffrey Walton wrote:
> On Wed, Jul 18, 2018 at 7:14 AM, Tim Rühsen wrote:
>> Maybe it's a bash/sh incompatibility. Anyways - what does 'make
>> install' do !? It basically copies the 'wget' executable into a
>> directory (e.g. /usr/local/bin/) that is listed in your PATH
Jeffrey Walton wrote:
When I check my locally installed wget --version it is showing the wrong wgetrc:
$ command -v wget
/usr/local/bin/wget
$ wget --version
GNU Wget 1.19.5 built on linux-gnu.
...
Wgetrc:
/etc/wgetrc (system)
I installed an updated
On Wed, Jul 18, 2018 at 8:59 AM, Darshit Shah wrote:
> Are you trying to compile Wget from git? Or are you using the tarballs?
Tarball.
> If you are using the tarballs, this should not happen unless you have modified
> some of the build files. In which case, I would ask you to share your
On Wed, Jul 18, 2018 at 7:14 AM, Tim Rühsen wrote:
> Maybe it's a bash/sh incompatibility. Anyways - what does 'make
> install' do !? It basically copies the 'wget' executable into a
> directory (e.g. /usr/local/bin/) that is listed in your PATH env variable.
>
> You can do that by hand. If you
Are you trying to compile Wget from git? Or are you using the tarballs?
If you are using the tarballs, this should not happen unless you have modified
some of the build files. In which case, I would ask you to share your changes
with us so that we can fix the build for everyone.
You should not
On Wed, Jul 18, 2018 at 7:14 AM, Tim Rühsen wrote:
> Maybe it's a bash/sh incompatibility. Anyways - what does 'make
> install' do !? It basically copies the 'wget' executable into a
> directory (e.g. /usr/local/bin/) that is listed in your PATH env variable.
>
> You can do that by hand. If you
Maybe it's a bash/sh incompatibility. Anyways - what does 'make
install' do !? It basically copies the 'wget' executable into a
directory (e.g. /usr/local/bin/) that is listed in your PATH env variable.
You can do that by hand. If you want the updated man file, copy wget.1
into your man1
At run time you can use the --ca-certificate option to pass the filename
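For a permanent setting, the same thing can go into a wgetrc file; a sketch using the path mentioned in the report:

```
# ~/.wgetrc (or the system wgetrc) -- illustrative
ca_certificate = /usr/local/share/ca-certs.pem
```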
* Jeffrey Walton [180718 11:17]:
> Hi Everyone,
>
> I'm working on an ancient system. I need to bootstrap an updated Wget.
>
> I installed a new ca-certs.pem in /usr/local/share. I need to tell
> Wget to use it. I don't
Tim Rühsen wrote:
GnuTLS 3.6.3 has been released today with TLS1.3 support (latest draft).
So if you rebuild/link wget or wget2 with the new GnuTLS version, you
can enable TLS1.3 via --ciphers="NORMAL:+VERS-TLS1.3" (wget) resp.
--gnutls-options="NORMAL:+VERS-TLS1.3" (wget2).
Not for me:
On Mon, Jul 16, 2018 at 6:37 PM, Tim Rühsen wrote:
> FYI
>
> GnuTLS 3.6.3 has been released today with TLS1.3 support (latest draft).
>
> So if you rebuild/link wget or wget2 with the new GnuTLS version, you
> can enable TLS1.3 via --ciphers="NORMAL:+VERS-TLS1.3" (wget) resp.
>
On 14.07.2018 23:57, Jeffrey Walton wrote:
> On Tue, Jun 19, 2018 at 6:44 AM, Loganaden Velvindron
> wrote:
>> ...
>> As per:
>> https://tools.ietf.org/html/draft-moriarty-tls-oldversions-diediedie-00
>>
>> Attached is a tentative patch to disable TLS 1.0 and TLS 1.1 by
>> default. No doubt that
On Tue, Jun 19, 2018 at 6:44 AM, Loganaden Velvindron wrote:
> ...
> As per:
> https://tools.ietf.org/html/draft-moriarty-tls-oldversions-diediedie-00
>
> Attached is a tentative patch to disable TLS 1.0 and TLS 1.1 by
> default. No doubt that this will cause some discussions, I'm open to
>
Loganaden Velvindron wrote:
Hi All,
As per:
https://tools.ietf.org/html/draft-moriarty-tls-oldversions-diediedie-00
Attached is a tentative patch to disable TLS 1.0 and TLS 1.1 by
default. No doubt that this will cause some discussions, I'm open to
hearing all opinions on this.
please
file?
No, we send the GET request with the local file's timestamp. If the
server has a newer version, it sends it together with a 200 OK, else it
sends 304 Not Modified with an empty body.
Just give it a try. If you see everything being re-downloaded, stop and
try again with '-N --no-if-modified-since'.
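On the wire, the behaviour described above is an ordinary conditional GET; roughly (illustrative host, path and date):

```
GET /file.html HTTP/1.1
Host: example.com
If-Modified-Since: Thu, 19 Jul 2018 10:00:00 GMT

HTTP/1.1 304 Not Modified
```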
Hi Tim,
Excellent answer, thank you very much for this info. "-N" or
"--timestamping" sounds like a much better way to go. However, if I'm
converting links using wget (1), I think I've read somewhere and noticed
that two separate commands running in series wouldn't be able to continue
due to the
On 07/08/2018 02:59 AM, John Roman wrote:
> Greetings,
> I wish to discuss a formal change of the default retry for wget from 20
> to something more pragmatic such as two or three.
>
> While I believe 20 retries may have been the correct default many years
> ago, it seems overkill for the modern
On 07/12/2018 08:12 PM, Triston Line wrote:
> If that's possible that would help immensely. I "review" sites for my
> friends at UBC and we look at geographic performance on their apache and
> nginx servers, the only problem is they encounter minor errors from time to
> time while recursively
Hi,
On 07/12/2018 08:12 PM, Triston Line wrote:
> Hi Wget team,
>
> I am but a lowly user and linux sysadmin, however, after noticing the wget2
> project I have wondered about a feature that could be added to the new
> version.
>
> I approve of all the excellent new features already being added
On 07/03/2018 12:48 PM, Zoe Blade wrote:
>> In Wget2 there is an extra option for this, --filter-urls.
>
> Thank you Tim, this sounds like exactly what I was after! (It's especially
> important when you have wget logged in as a user, to be able to tell it not
> to go to the logout page.)
> In Wget2 there is an extra option for this, --filter-urls.
Thank you Tim, this sounds like exactly what I was after! (It's especially
important when you have wget logged in as a user, to be able to tell it not to
go to the logout page.) Though if that feature could be ported to the original
On 06/29/2018 03:20 PM, Zoe Blade wrote:
> For anyone else who needs to do this, I adapted Sergey Svishchev's 1.8-era
> patch for 19.1 (one of the few versions I managed to get to compile in OS X;
> I'm on a Mac, and not the best programmer):
>
> recur.c:578
> - if (blacklist_contains
For anyone else who needs to do this, I adapted Sergey Svishchev's 1.8-era
patch for 19.1 (one of the few versions I managed to get to compile in OS X;
I'm on a Mac, and not the best programmer):
recur.c:578
- if (blacklist_contains (blacklist, url))
+ if (blacklist_contains (blacklist, url)
> ...it would be more useful to avoid downloading rejected files altogether...
Hmm, after a bit more digging, I see this isn't a new request:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=217243 Is anyone working on
this?
Hi Elliot,
On 06/25/2018 11:17 PM, Elliot Chandler wrote:
> Hello again,
>
> Thank you for integrating the patch!
Thanks for contributing :-)
> Regarding licensing, I don't have immediate plans to contribute more
> patches to GNU (kind of swamped with things going on), but it's always a
>
Hello again,
Thank you for integrating the patch!
Regarding licensing, I don't have immediate plans to contribute more
patches to GNU (kind of swamped with things going on), but it's always a
possibility. I'm happy to fill out a copyright grant to irrevocably assign
ownership of my patches sent
Just try
wget2 -nd -l2 -r -A "*little-nemo*s.jpeg"
'http://comicstriplibrary.org/search?search=little+nemo'
and you only get
little-nemo-19051015-s.jpeg
little-nemo-19051022-s.jpeg
little-nemo-19051029-s.jpeg
little-nemo-19051105-s.jpeg
little-nemo-19051112-s.jpeg
little-nemo-19051119-s.jpeg
On 20.06.2018 18:20, Nils Gerlach wrote:
> It does not delete any html-file or anything else. Either it is accepted
> and kept or it is saved forever.
> With the tip about --accept and --accept-regex I can get wget to traverse
> the links but it does not go deep
> enough to get the *l.jpgs I tried
It does not delete any html-file or anything else. Either it is accepted
and kept or it is saved forever.
With the tip about --accept and --accept-regex I can get wget to traverse
the links but it does not go deep
enough to get the *l.jpgs I tried to increase -l but to no avail. It seems
like it is
Hi Tim,
I am sorry, but your command does not work. It only downloads the thumbnails
from the first page
and follows none of the links. Open the link in a browser. Click on the
pictures to get a larger picture.
There is a link "high quality picture" the pictures behind those links are
the ones i
Hi Nils,
please always reply to the mailing list (no problem if you CC me, but it's
not needed).
It was just an example for POSIX regexes - it's up to you to work out
the details ;-) Or maybe there is a volunteer reading this.
The implicitly downloaded HTML pages should be removed after parsing
Hi Nils,
On 06/20/2018 06:16 AM, Nils Gerlach wrote:
> Hi there,
>
> in #wget on freenode I was suggested to write this to you:
> I tried using wget to get some images:
> wget -nd -rH -Dcomicstriplibrary.org -A
> "little-nemo*s.jpeg","*html*","*.html.*","*.tmp","*page*","*display*" -p -e
>
On Tue, Jun 19, 2018 at 4:48 PM, Tomas Hozza wrote:
>
>
> On 19.06.2018 13:20, Loganaden Velvindron wrote:
>> On Tue, Jun 19, 2018 at 3:18 PM, Tim Rühsen wrote:
>>> On 06/19/2018 12:44 PM, Loganaden Velvindron wrote:
Hi All,
As per:
On 19.06.2018 13:20, Loganaden Velvindron wrote:
> On Tue, Jun 19, 2018 at 3:18 PM, Tim Rühsen wrote:
>> On 06/19/2018 12:44 PM, Loganaden Velvindron wrote:
>>> Hi All,
>>>
>>> As per:
>>> https://tools.ietf.org/html/draft-moriarty-tls-oldversions-diediedie-00
>>>
>>> Attached is a tentative
* Tim Rühsen [180619 13:18]:
> On 06/19/2018 12:44 PM, Loganaden Velvindron wrote:
> > Hi All,
> >
> > As per:
> > https://tools.ietf.org/html/draft-moriarty-tls-oldversions-diediedie-00
> >
> > Attached is a tentative patch to disable TLS 1.0 and TLS 1.1 by
> > default. No doubt that this will
On Tue, Jun 19, 2018 at 3:18 PM, Tim Rühsen wrote:
> On 06/19/2018 12:44 PM, Loganaden Velvindron wrote:
>> Hi All,
>>
>> As per:
>> https://tools.ietf.org/html/draft-moriarty-tls-oldversions-diediedie-00
>>
>> Attached is a tentative patch to disable TLS 1.0 and TLS 1.1 by
>> default. No doubt
On 06/19/2018 12:44 PM, Loganaden Velvindron wrote:
> Hi All,
>
> As per:
> https://tools.ietf.org/html/draft-moriarty-tls-oldversions-diediedie-00
>
> Attached is a tentative patch to disable TLS 1.0 and TLS 1.1 by
> default. No doubt that this will cause some discussions, I'm open to
> hearing
Congrats, your patch has been pushed, though I detected a small issue
right after that (fixed now).
And don't you think --tries should be considered ?
Regards, Tim
On 12.06.2018 20:53, Tim Rühsen wrote:
> Hi Elliot,
>
> thanks for your contribution.
>
> I'll care for the integration / merge
Hi Elliot,
thanks for your contribution.
I'll care for the integration / merge tomorrow.
We maintainers decided to accept your work without the FSF Copyright
Assignment from you (will be tagged as Copyright-paperwork-exempt).
Let us know if you plan to work on more and we'll send you the FSF
Hi,
On 06/11/2018 07:30 AM, Md. Qudratullah Phd2011, MGL wrote:
> Hi,
> I started downloading with wget but it stopped after some time (25%
> completion) and I restarted with -c option but it does not show that it
> resume download. Every time I restart with -c option, it starts from 0%.
> I have
> From: Sam Habiel
> Date: Wed, 6 Jun 2018 08:27:44 -0400
> Cc: bug-wget@gnu.org
>
> Is there a valid argument to be made that some arguments for wget
> should not be expanded, like accept and reject?
Probably. The problem is that wildcard expansion of the command line
doesn't understand the
Gisle,
I downloaded it from here: https://eternallybored.org/misc/wget/. It
doesn't seem to be MinGW compiled; but I can't tell.
Eli,
Holy shit it works! I spent several hours trying different
combinations--but never came up with this incantation! Thank you so
much!
Is there a valid argument
> From: Sam Habiel
> Date: Tue, 5 Jun 2018 14:16:27 -0400
>
> I have a wget command that has a -A flag that contains a wildcard.
> It's '*.DAT'. That works fine on Linux. I am trying to get the same
> thing to run on Windows, but *.DAT keeps getting expanded by wget (cmd
> does no expansion
actual wiki page URL.
>>>
>>> What I would need to do is exclude from wget visiting any
>>> www.wiki.com/delete or www.wiki.com/remove/ pages. I'd also need to exclude
>>> links that end with "xpage=watch=adddocument" which triggers me to watch
>
"xpage=watch=adddocument" which triggers me to watch
> > that page.
> >
> > I am using v1.12 because the most recent versions have disabled
> > --no-clobber and --convert-links from working together. I need --no-clobber
> > because if the download stops, I
"xpage=watch=adddocument" which triggers me to watch that page.
>
> I am using v1.12 because the most recent versions have disabled --no-clobber
> and --convert-links from working together. I need --no-clobber because if the
> download stops, I need to be able to resume without re-d
"xpage=watch=adddocument" which triggers me to watch that page.
I am using v1.12 because the most recent versions have disabled --no-clobber
and --convert-links from working together. I need --no-clobber because if the
download stops, I need to be able to resume without re-downloading all t
On 06/05/2018 11:53 AM, CryHard wrote:
> Hey there,
>
> I've used the following:
>
> wget --user-agent="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6)
> AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.139 Safari/537.36"
> --user=myuser --ask-password --no-check-certificate --recursive
On 06/04/2018 01:14 PM, Krzysztof Malinowski wrote:
> Hello,
>
> I am trying to build wget 1.19.5 with c-ares 1.14.0 and the
> compilation fails with error:
>
> make[3]: Entering directory '/dev/shm/akm022/wget-1.19.5/src'
> CC host.o
> host.c: In function 'wait_ares':
> host.c:735:11:
Tim Rühsen wrote:
I just merged another branch into master, this issue seemed to be fixed
in there. Please try again latest master.
Thanks, unit-test.exe works fine now.
BTW, why two versions of these macros,
'mu_assert()' and 'mu_run_test()'?
Or even two files with exactly the same content:
Thanks for your report.
I just merged another branch into master, this issue seemed to be fixed
in there. Please try again latest master.
Regards, Tim
On 05/22/2018 05:06 PM, Gisle Vanem wrote:
> I've built unit-test on Windows (clang-cl). But when running
> it, it crashes after the message:
>
On Thu, May 10, 2018 at 10:27:35AM +, VINEETHSIVARAMAN wrote:
> My server is behind a firewall and a proxy; when I give "wget" twice in the
> command it gives me a DNS resolution, but not with a single wget!
>
[...]
> [~]$ nslookup google.com
>
> Non-authoritative answer:
> Name: google.com
On 19.05.2018 23:44, Jeffrey Walton wrote:
> On Sat, May 19, 2018 at 5:21 PM, Tim Rühsen wrote:
>> On 19.05.2018 20:53, Jeffrey Walton wrote:
>>> On Sat, May 19, 2018 at 12:27 PM, Tim Rühsen wrote:
>>> ...
>>> make[4]: Entering directory
On Sat, May 19, 2018 at 5:21 PM, Tim Rühsen wrote:
> On 19.05.2018 20:53, Jeffrey Walton wrote:
>> On Sat, May 19, 2018 at 12:27 PM, Tim Rühsen wrote:
>> ...
>> make[4]: Entering directory '/home/Build-Scripts/wget-1.19.5/src'
>> make[4]: Leaving directory
On 19.05.2018 20:53, Jeffrey Walton wrote:
> On Sat, May 19, 2018 at 12:27 PM, Tim Rühsen wrote:
>> Hi Jeff,
>>
>> could you 'cd fuzz', then 'make -j1 V=1' and send us the output ?
>>
>> It should include the full gcc command line.
>>
>> Please attach your config.log.
>>
>
On Sat, May 19, 2018 at 12:27 PM, Tim Rühsen wrote:
> Hi Jeff,
>
> could you 'cd fuzz', then 'make -j1 V=1' and send us the output ?
>
> It should include the full gcc command line.
>
> Please attach your config.log.
>
Thanks Tim.
$ cd wget-1.19.5
$ make check V=1
...
Hi Jeff,
could you 'cd fuzz', then 'make -j1 V=1' and send us the output ?
It should include the full gcc command line.
Please attach your config.log.
Regards, Tim
On 19.05.2018 17:28, Jeffrey Walton wrote:
> Hi Everyone,
>
> This looks like a new issue with Wget 1.19.5:
>
> make
>
On 15.05.2018 13:48, Graeme wrote:
> I am trying to find the up to date version of Wget for Windows.
>
> On https://www.gnu.org/software/wget/faq.html#download there are 4 links.
>
> Sourceforge and Bart Puype links seem to be dead links.
>
> Christopher Lewis' link times out.
>
> Jernej
On 15.05.2018 13:32, Graeme wrote:
> I opened the mailing list archives.
> http://lists.gnu.org/archive/html/bug-wget/
>
> I searched for Windows, looking for information on the latest Wget for
> windows version.
>
> I get 1091 documents but they are in order of score. I change the
> sort to by