On Tue, Apr 02, 2013 at 09:27:17AM +0100, James Griffin wrote:
Mon 1.Apr'13 at 8:14:35 -0600 Luis Mochan
By the way, the author of the program, Kyle Wheeler, wrote to me that
he expects that adding the line
COMMAND /etc/urlview/url_handler.sh '%s'
Tue 2.Apr'13 at 11:51:15 -0600 Luis Mochan
Hello James,
You are not using the program correctly. extract_urlview has worked
perfectly with mutt, for me, for probably about 2 years now. Why bother
trying to integrate it into your shell script, just use it as
Mon 1.Apr'13 at 8:14:35 -0600 Luis Mochan
By the way, the author of the program, Kyle Wheeler, wrote to me that
he expects that adding the line
COMMAND /etc/urlview/url_handler.sh '%s'
to the configuration file ~/.extract_urlview would be enough to
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
On Monday, April 1 at 07:30 PM, quoth Luis Mochan:
I tried now your fix, and it didn't work for me; my browser doesn't
find the resulting pages when the url has ampersands that are
converted to %26 (probably because the % itself is further
Dear Kyle,
...
/etc/urlhandler/url_handler.sh is a shell script that obtains its
url doing '$url=$1'.
Ahh, indeed, that could cause a problem. Variables are substituted
simply, in an as if typed manner. Take this simple example:
...
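Kyle's example was elided in this digest. A minimal sketch of the failure mode he is describing (the URL and commands below are my own illustration, not Kyle's original example):

```shell
# An unescaped & in a substituted command string becomes the shell's
# background operator, cutting the URL short.
url='http://example.com/page?a=1&b=2'

# "As if typed": the & splits the line into two commands, so only the
# part before the first & reaches echo.
seen=$(sh -c "echo $url")

# Luis's style of fix: backslash-escape the ampersands before
# substituting the URL into the command string.
escaped=$(printf '%s' "$url" | sed 's/&/\\&/g')
fixed=$(sh -c "echo $escaped")

# Kyle's style of fix: single-quote the substitution point, as in
# COMMAND ... '%s', so the shell never interprets the &.
quoted=$(sh -c "echo '$url'")
```

Either fix delivers the full URL to the command; without one, everything after the first & is silently run as a separate background job.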
Thanks for your explanation. As I wrote last night,
Hello James,
You are not using the program correctly. extract_urlview has worked
perfectly with mutt, for me, for probably about 2 years now. Why bother
trying to integrate it into your shell script, just use it as a
stand-alone program and don't use urlview at all.
the contents of my
Sun 31.Mar'13 at 15:37:28 -0600 Luis Mochan
I found a mistake in the extract_url.pl program: it doesn't escape
ampersands when present in the url, so when the command to actually
view the url is invoked, the shell gets confused. I made a quick fix
by
On Sun, Mar 31, 2013 at 10:00:58PM -0600, s. keeling wrote:
Incoming from Luis Mochan:
I found a mistake in the extract_url.pl program: it doesn't escape
ampersands when present in the url, so when the command to actually
view the url is invoked, the shell gets confused. I made a quick fix
On Tue, Apr 02, 2013 at 01:06:05AM +1300, Chris Bannister wrote:
On Sun, Mar 31, 2013 at 10:00:58PM -0600, s. keeling wrote:
Incoming from Luis Mochan:
I found a mistake in the extract_url.pl program: it doesn't escape
ampersands when present in the url, so when the command to actually
Hi Guys,
I guess I'm the slow one on the list.
Is there more to the patch than commenting out
# $command =~ s/%s/'$url'/g;
and replacing it with
$command =~ s/&/\\&/g
Because either way, extract_url.pl isn't working for me.
I can see the list of urls, but if I click on one I still get a page
Incoming from John Niendorf:
I guess I'm the slow one on the list.
Is there more to the patch than commenting out
# $command =~ s/%s/'$url'/g;
and replacing it with
$command =~ s/&/\\&/g
Because either way, extract_url.pl isn't working for me.
It looks like that was incorrect; Luis
Hi John,
I guess I'm the slow one on the list.
Is there more to the patch than commenting out
# $command =~ s/%s/'$url'/g;
and replacing it with
$command =~ s/&/\\&/g
I didn't comment out that line; it is needed to replace %s by the URL
in the 'COMMAND' that actually opens the URL. What I
By the way, the author of the program, Kyle Wheeler, wrote to me that
he expects that adding the line
COMMAND /etc/urlview/url_handler.sh '%s'
to the configuration file ~/.extract_urlview would be enough to solve
the problem (with %s between quotes). I believe I had tried that and
that
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
On Sunday, March 31 at 11:16 PM, quoth Luis Mochan:
I'm a perl guy, yet that's non-trivial here. Thx. :-)
You're welcome. I don't know if there are other characters that appear
in an url and need to be escaped for the shell ([;]?); they could
Hi Kyle,
I'm the author of extract_url.pl, so perhaps I can shed some light
here.
Thanks.
The *correct* place to fix the issue of escaping (or otherwise
sanitizing) ampersands is in the sanitizeuri function (line 208). The
current version of extract_url.pl uses this:
sub
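The body of sanitizeuri was cut off in this digest. As a rough shell sketch of the idea Kyle describes (escaping shell metacharacters in one place before the command string is built; the character list here is my guess, not Kyle's actual code):

```shell
# Escape characters the shell would otherwise interpret, so the URI
# survives substitution into a command string. The set [&;|<>] is
# illustrative, not taken from extract_url.pl.
sanitize_uri() {
    printf '%s' "$1" | sed 's/[&;|<>]/\\&/g'
}

safe=$(sanitize_uri 'http://example.com/q?a=1&b=2;c=3')
# safe now carries a backslash before each & and ;
result=$(sh -c "echo $safe")
```

Centralizing the escaping like this is what makes it the "correct place" to fix: every command built afterwards gets a shell-safe URI for free.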
Incoming from Luis Mochan:
I don't know much about shell programming, but I found that
/etc/urlhandler/url_handler.sh is a shell script that obtains its url
doing '$url=$1'. I replaced the whole handler by the following
program:
#! /bin/bash
url=$1; shift
echo $url > tmp.txt;
Unix shell handles variables abysmally. You need to help it a lot to
do the right thing. *Always* quote variables, else if they're empty
they tend to blow up on you.
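keeling's advice in one runnable sketch (my example, not his): an unquoted empty variable vanishes from the argument list entirely, while a quoted one survives as a single empty argument.

```shell
# Count how many arguments actually arrive.
count_args() { echo "$#"; }

empty=''
unquoted=$(count_args $empty)    # word splitting drops it: 0 arguments
quoted=$(count_args "$empty")    # quoting keeps it: 1 (empty) argument
```

This is why url=$1 without later quoting is fragile: the handler behaves differently depending on what the URL happens to contain, or whether it is empty.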
Thanks for the advice! Your script did work from the command line, but
it was not enough when called from extract_url.pl.
Dear John,
On Sat, Mar 30, 2013 at 05:15:28PM +0100, John Niendorf wrote:
This looks really great, but where do I put the script?
I made it executable and put it in my path and I got the error that it wasn't
there.
I copied it to ~/ and got the same error.
John
I found a mistake in the extract_url.pl program: it doesn't escape
ampersands when present in the url, so when the command to actually
view the url is invoked, the shell gets confused. I made a quick fix
by substituting $command =~ s/&/\\&/g before running the command.
Incoming from Luis Mochan:
I found a mistake in the extract_url.pl program: it doesn't escape
ampersands when present in the url, so when the command to actually
view the url is invoked, the shell gets confused. I made a quick fix
by substituting $command =~ s/&/\\&/g before running the command.
Line 633? 634? So:
# $command =~ s/%s/'$url'/g;
$command =~ s/&/\\&/g;
Sorry for not having given the line numbers, etc. I actually made
changes around 522 and 647, and defined a new subroutine (I named it
wlmsanitize) which modifies the command to run. A patch
follows.
I tried copying the url, but when I highlight the lines with the url I end up
copying a bunch of other stuff that is not in the url (like part of the list of
folders in the Mutt side panel, for example).
Does anyone know a way to deal with long urls aside from opening up Thunderbird?
Thanks for any advice, I really appreciate it.
John
I don't know if this'll help (I've
Reply to the list (only) please John, so others can follow along.
The messages are archived for future generations to find, if they
can be bothered to search.
On Sat, Mar 30, 2013 at 03:44:08PM +0100, John Niendorf wrote:
Sun, Mar 31, 2013 at 03:37:19AM +1300, Chris Bannister wrote:
Is there a '+'
Thank you all for the tips. Actually all I did was install urlview from the
repository and then when I clicked ctrl+B I got a list of urls.
I'll see how it works with urls that extend onto multiple lines the next time
one comes by.
John
* Chris Bannister schrieb am 31.03.2013 um 4:07 Uhr:
Then you'll either have to manually remove the '+' sign or find the
setting that turns the feature (where a + sign is inserted at the start
of the line on line wrap) off. Unfortunately I can't remember what
variable controls it, at the
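Chris couldn't recall the variable. For the record (my addition, from mutt's documentation rather than from this thread): the '+' wrap marker in the pager is controlled by mutt's markers option, so turning it off is one line in ~/.muttrc:

```
unset markers
```

With markers off, wrapped URLs can be selected without the stray '+' at the start of each continuation line.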
Incoming from Chris Bannister:
On Sat, Mar 30, 2013 at 03:44:08PM +0100, John Niendorf wrote:
Sun, Mar 31, 2013 at 03:37:19AM +1300, Chris Bannister wrote:
Is there a '+' sign at the start of each line of the url?
Yes there usually is.
Then you'll either have to manually remove the
This looks really great, but where do I put the script?
I made it executable and put it in my path and I got the error that it wasn't
there.
I copied it to ~/ and got the same error.
John
http://www.memoryhole.net/~kyle/extract_url/
Try this. It's brilliant.
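A common way to hook it into mutt (a sketch based on the usual pipe-message pattern; check extract_url's own documentation for its recommended binding) is a macro in ~/.muttrc:

```
macro index,pager \cb "<pipe-message>extract_url.pl<enter>" "Extract URLs from message"
```

That pipes the current message through extract_url.pl, which then presents the menu of URLs it found.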
Martin --
...and then Martin Karlsson said...
%
% Hi all.
Hello!
%
% Question 1:
%
% My editor is vim, and I wrap lines at 68.
Good for you :-)
%
% When confronted with a 'longer-than-68' URL, my colourization-regexp
% won't catch the second-line part of the URL, e.g.
%
%
Hi all.
Question 1:
My editor is vim, and I wrap lines at 68.
When confronted with a 'longer-than-68' URL, my colourization-regexp
won't catch the second-line part of the URL, e.g.
http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/kernelconfig-config.ht
ml
What do I need in my regexp