Re: urllib unqoute providing string mismatch between string found using os.walk (Python3)

2019-12-22 Thread Ben Hearn
On Saturday, 21 December 2019 21:46:43 UTC, Ben Hearn wrote: > Hello all, > > I am having a bit of trouble with a string mismatch operation in my tool I am > writing. > > I am comparing a database collection or url quoted paths to the paths on the > users drive. > > These 2 paths look identic

Re: urllib unqoute providing string mismatch between string found using os.walk (Python3)

2019-12-21 Thread Richard Damon
On 12/21/19 8:25 PM, MRAB wrote: > On 2019-12-22 00:22, Michael Torrie wrote: >> On 12/21/19 2:46 PM, Ben Hearn wrote: >>> These 2 paths look identical, one from the drive & the other from an >>> xml url: >>> a = '/Users/macbookpro/Music/tracks_new/_NS_2018/J.Staaf - >>> ¡Móchate! _PromoMix_.wav'

Re: urllib unqoute providing string mismatch between string found using os.walk (Python3)

2019-12-21 Thread MRAB
On 2019-12-22 00:22, Michael Torrie wrote: On 12/21/19 2:46 PM, Ben Hearn wrote: These 2 paths look identical, one from the drive & the other from an xml url: a = '/Users/macbookpro/Music/tracks_new/_NS_2018/J.Staaf - ¡Móchate! _PromoMix_.wav'

Re: urllib unqoute providing string mismatch between string found using os.walk (Python3)

2019-12-21 Thread Chris Angelico
On Sun, Dec 22, 2019 at 11:33 AM Michael Torrie wrote: > > On 12/21/19 2:46 PM, Ben Hearn wrote: > > These 2 paths look identical, one from the drive & the other from an xml > > url: > > a = '/Users/macbookpro/Music/tracks_new/_NS_2018/J.Staaf - ¡Móchate! > > _PromoMix_.wav' >

Re: urllib unqoute providing string mismatch between string found using os.walk (Python3)

2019-12-21 Thread Michael Torrie
On 12/21/19 2:46 PM, Ben Hearn wrote: > These 2 paths look identical, one from the drive & the other from an xml url: > a = '/Users/macbookpro/Music/tracks_new/_NS_2018/J.Staaf - ¡Móchate! > _PromoMix_.wav' ^^ > b = '/Users/macbookpro

Re: urllib unqoute providing string mismatch between string found using os.walk (Python3)

2019-12-21 Thread Dan Sommers
On 12/21/19 4:46 PM, Ben Hearn wrote: import difflib print('\n'.join(difflib.ndiff([a], [b]))) - /Users/macbookpro/Music/tracks_new/_NS_2018/J.Staaf - ¡Móchate! _PromoMix_.wav ? ^^ + /Users/ma
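
The diff output above suggests the two paths differ only around the accented characters, which is the classic symptom of a Unicode normalization mismatch: macOS filesystem calls such as os.walk return decomposed (NFD) names, while XML/database text is usually precomposed (NFC). That diagnosis is an assumption, not confirmed in the thread, but it is cheap to test. A minimal Python 3 sketch, with an illustrative quoted path:

    import unicodedata
    from urllib.parse import unquote

    # 'a' carries a decomposed accent (o + U+0301); 'b' uses the precomposed character (U+00F3).
    a = unquote('/Users/macbookpro/Music/tracks_new/_NS_2018/'
                'J.Staaf%20-%20%C2%A1Mo%CC%81chate!%20_PromoMix_.wav')
    b = '/Users/macbookpro/Music/tracks_new/_NS_2018/J.Staaf - ¡Móchate! _PromoMix_.wav'

    print(a == b)   # False, although both render identically on screen
    print(unicodedata.normalize('NFC', a) == unicodedata.normalize('NFC', b))   # True

Normalizing both sides (the unquoted XML path and the os.walk result) to the same form before comparing avoids the mismatch regardless of which side is decomposed.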

Re: urllib unqoute providing string mismatch between string found using os.walk (Python3)

2019-12-21 Thread Pieter van Oostrum
Ben Hearn writes: > Hello all, > > I am having a bit of trouble with a string mismatch operation in my tool I am > writing. > > I am comparing a database collection or url quoted paths to the paths on the > users drive. > > These 2 paths look identical, one from the drive & the other from an xm

Re: urllib

2017-06-30 Thread breamoreboy
On Friday, June 30, 2017 at 1:30:10 PM UTC+1, Rasputin wrote: > good luck with that, mate ! Please don't change the subject line and also provide some context when you reply, we're not yet mindreaders :) Kindest regards. -- Mark Lawrence. -- https://mail.python.org/mailman/listinfo/python-list

Re: urllib/urllib2 support for specifying ip address

2014-06-19 Thread Chris Angelico
On Fri, Jun 20, 2014 at 12:19 AM, Robin Becker wrote: > in practice [monkeypatching socket] worked well with urllib in python27. Excellent! That's empirical evidence of success, then. Like with all monkey-patching, you need to keep it as visible as possible, but if your driver script is only a p
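
The patch itself is not shown in the preview; below is a minimal sketch of one common way to do it, with a made-up hostname-to-IP mapping. The idea is to redirect name resolution so urllib/urllib2 connect to the test box while the request still carries the original URL and Host header:

    import socket
    import urllib2

    REDIRECT = {'www.example.com': '10.0.0.99'}   # live hostname -> test server IP (placeholder)
    _real_getaddrinfo = socket.getaddrinfo

    def _patched_getaddrinfo(host, *args, **kwargs):
        # Resolve the substitute address; everything above the socket layer is untouched.
        return _real_getaddrinfo(REDIRECT.get(host, host), *args, **kwargs)

    socket.getaddrinfo = _patched_getaddrinfo
    print urllib2.urlopen('http://www.example.com/').geturl()

Keeping the patch confined to the driver script, as suggested above, keeps it visible; and since urllib2 of that era does not check the server certificate (as noted later in the thread), the same trick works for the HTTPS torture tests.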

Re: urllib/urllib2 support for specifying ip address

2014-06-19 Thread Robin Becker
On 19/06/2014 13:03, Chris Angelico wrote: . I can use python >= 3.3 if required. The main reason I ask is in case something's changed. Basically, what I did was go to my Python 2 installation (which happens to be 2.7.3, because that's what Debian Wheezy ships with - not sure why it has

Re: urllib/urllib2 support for specifying ip address

2014-06-19 Thread Chris Angelico
On Thu, Jun 19, 2014 at 9:51 PM, Robin Becker wrote: >> Since you mention urllib2, I'm assuming this is Python 2.x, not 3.x. >> The exact version may be significant. >> > I can use python >= 3.3 if required. The main reason I ask is in case something's changed. Basically, what I did was go to my

Re: urllib/urllib2 support for specifying ip address

2014-06-19 Thread Robin Becker
.. Since you mention urllib2, I'm assuming this is Python 2.x, not 3.x. The exact version may be significant. I can use python >= 3.3 if required. Can you simply query the server by IP address rather than host name? According to the docs, urllib2.urlopen() doesn't check the certific

Re: urllib/urllib2 support for specifying ip address

2014-06-19 Thread Chris Angelico
On Thu, Jun 19, 2014 at 7:22 PM, Robin Becker wrote: > I want to run torture tests against an https server on domain A; I have > configured apache on the server to respond to a specific hostname ipaddress. > > I don't want to torture the live server so I have set up an alternate > instance on a di

Re: urllib and authentication script integration

2013-12-23 Thread Chris Angelico
On Tue, Dec 24, 2013 at 12:47 AM, Jeff James wrote: > I have some simple code I would like to share with someone that can assist > me in integrating authentication script into. I'm sure it's an easy answer > for any of you. I am still researching, but on this particular project, > time is of the

RE: urllib and authentication script integration

2013-12-23 Thread Jeff James
I have some simple code I would like to share with someone that can assist me in integrating authentication script into. I'm sure it's an easy answer for any of you. I am still researching, but on this particular project, time is of the essence and this is the only missing piece of the puzzle for

Re: Question RE urllib

2013-12-17 Thread Tobiah
On 12/17/2013 08:10 AM, Larry Martell wrote: On Tue, Dec 17, 2013 at 10:26 AM, Jeff James wrote: So I'm using the following script to check our sites to make sure they are all up and some of them are reporting they are "down" when, in fact, they are actually

Re: Question RE urllib

2013-12-17 Thread Larry Martell
On Tue, Dec 17, 2013 at 10:26 AM, Jeff James wrote: > > So I'm using the following script to check our sites to make sure they > are all up and some of them are reporting they are "down" when, in fact, > they are actually up. These sites do not require a logon in order for the > home page to come

RE: Question RE urllib

2013-12-17 Thread Jeff James
So I'm using the following script to check our sites to make sure they are all up and some of them are reporting they are "down" when, in fact, they are actually up. These sites do not require a logon in order for the home page to come up. Could this be due to some port being blocked internally

Re: Question RE urllib

2013-12-16 Thread William Ray Wing
On Dec 16, 2013, at 6:40 AM, Jeff James wrote: > So I'm using the following script to check our sites to make sure they are > all up and some of them are reporting they are "down" when, in fact, they are > actually up. These sites do not require a logon in order for the home page > to come u

Re: Question RE urllib

2013-12-16 Thread John Gordon
Jeff James writes: > Sorry to be a pain here, guys, as I'm also a newbie at this as well. > Where, exactly in the script would I place the " print str(e) " ? except Exception, e: print site + " is

Re: Question Re urllib (Resolved)

2013-12-16 Thread Jeff James
This worked perfectly. Thank You Where, exactly in the script would I place the " print str(e) " ? The line after the print site + " is down" line. Original Post : I'm not really receiving an "exception" other than those three sites, out of the 30 or so I have listed, are the only sites
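
Pulling the thread together, here is a sketch of the checker loop with print str(e) in the suggested spot; the site list is a placeholder and the syntax is Python 2, as in the original script:

    import urllib2

    sites = ['http://intranet.example.com/', 'https://portal.example.com/']

    for site in sites:
        try:
            urllib2.urlopen(site, timeout=10)
            print site + " is up"
        except Exception, e:
            print site + " is down"
            print str(e)   # the added line: shows *why* the request failed

Seeing the actual exception (timeout, connection refused, certificate error, HTTP status) is what distinguishes a blocked port or proxy problem from a genuinely down site.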

Re: Question RE urllib

2013-12-16 Thread Larry Martell
On Mon, Dec 16, 2013 at 2:55 PM, Jeff James wrote: > Sorry to be a pain here, guys, as I'm also a newbie at this as well. > > Where, exactly in the script would I place the " print str(e) " ? The line after the print site + " is down" line. > > Thanks > > Original message : > >> I'm not really

RE: Question RE urllib

2013-12-16 Thread Jeff James
Sorry to be a pain here, guys, as I'm also a newbie at this as well. Where, exactly in the script would I place the " print str(e) " ? Thanks Original message : I'm not really receiving an "exception" other than those three sites, out > of the 30 or so I have listed, are the only sites which s

Re: Question Re urllib (Jeff James)

2013-12-16 Thread Larry Martell
> Cc: python-list@python.org > Date: Mon, 16 Dec 2013 06:54:48 -0500 > Subject: Re: Question RE urllib > On Mon, Dec 16, 2013 at 6:40 AM, Jeff James wrote: > > So I'm using the following script to check our sites to ma

Re: Question Re urllib (Jeff James)

2013-12-16 Thread Jeff James
/my..com/intranet.html is down http://#.main..com/psso/pssignsso.asp?dbname=FSPRD90 is down http://sharepoint..com/regions/west/PHX_NSC/default.aspx is down Cc: python-list@python.org Date: Mon, 16 Dec 2013 06:54:48 -0500 Subject: Re: Question RE urllib On Mon, Dec 16, 2013

Re: Question RE urllib

2013-12-16 Thread Tim Chase
On 2013-12-16 04:40, Jeff James wrote: > These sites do not require a logon in order for the home > page to come up. Could this be due to some port being blocked > internally ? Only one of the sites reporting as down is "https" but > all are internal sites. Is there some other component I should

Re: Question RE urllib

2013-12-16 Thread Larry Martell
On Mon, Dec 16, 2013 at 6:40 AM, Jeff James wrote: > So I'm using the following script to check our sites to make sure they are > all up and some of them are reporting they are "down" when, in fact, they > are actually up. These sites do not require a logon in order for the home > page to come u

Question RE urllib

2013-12-16 Thread Jeff James
So I'm using the following script to check our sites to make sure they are all up and some of them are reporting they are "down" when, in fact, they are actually up. These sites do not require a logon in order for the home page to come up. Could this be due to some port being blocked internally

Re: urllib and parsing

2011-10-06 Thread Tim Roberts
luca72 wrote: > >Hello i have a simple question: >up to now if i have to parse a page i do as follow: >... >Now i have the site that is open by an html file like this: >... >how can i open it with urllib, please note i don't have to parse this >file, but i have to parse the site where he point. W

Re: urllib, urlretrieve method, how to get headers?

2011-07-01 Thread Даниил Рыжков
Thanks, everyone! Problem solved. -- Regards, Daniil -- http://mail.python.org/mailman/listinfo/python-list

Re: urllib, urlretrieve method, how to get headers?

2011-07-01 Thread Chris Rebert
On Fri, Jul 1, 2011 at 1:53 AM, Даниил Рыжков wrote: > Hello again! > Another question: urlopen() reads full file's content, but how can I > get page by small parts? I don't think that's true. Just pass .read() the number of bytes you want to read, just as you would with an actual file object. C

Re: urllib, urlretrieve method, how to get headers?

2011-07-01 Thread Kushal Kumaran
On Fri, Jul 1, 2011 at 2:23 PM, Даниил Рыжков wrote: > Hello again! > Another question: urlopen() reads full file's content, but how can I > get page by small parts? > Set the Range header for HTTP requests. The format is specified here: http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec
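
A sketch of both suggestions side by side (Python 2 urllib2, placeholder URL). Passing a byte count to .read() limits how much of the body is read at a time; the Range header asks the server to send only part of the resource, which works only if the server supports it and replies with 206 Partial Content:

    import urllib2

    url = 'http://example.com/bigfile.bin'

    # 1. Read the body in small pieces instead of all at once.
    resp = urllib2.urlopen(url)
    first_chunk = resp.read(8192)           # just the first 8 KiB

    # 2. Ask the server for a specific byte range.
    req = urllib2.Request(url, headers={'Range': 'bytes=0-8191'})
    partial = urllib2.urlopen(req).read()   # 206 Partial Content if honoured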

Re: urllib, urlretrieve method, how to get headers?

2011-07-01 Thread Даниил Рыжков
Hello again! Another question: urlopen() reads full file's content, but how can I get page by small parts? Regards, Daniil -- http://mail.python.org/mailman/listinfo/python-list

Re: urllib, urlretrieve method, how to get headers?

2011-07-01 Thread Peter Otten
Даниил Рыжков wrote: > How can I get headers with urlretrieve? I want to send request and get > headers with necessary information before I execute urlretrieve(). Or > are there any alternatives for urlretrieve()? It's easy to do it manually: >>> import urllib2 Connect to website and inspect h

Re: urllib, urlretrieve method, how to get headers?

2011-07-01 Thread Chris Rebert
On Fri, Jul 1, 2011 at 12:03 AM, Даниил Рыжков wrote: > Hello, everyone! > > How can I get headers with urlretrieve? I want to send request and get > headers with necessary information before I execute urlretrieve(). Or > are there any alternatives for urlretrieve()? You can use regular urlopen()
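
A sketch of that approach (placeholder URL and size limit): open the URL with urlopen(), inspect the response headers via .info(), and only then decide whether to download with urlretrieve():

    import urllib
    import urllib2

    url = 'http://example.com/archive.zip'
    headers = urllib2.urlopen(url).info()         # response headers (a mimetools.Message)
    if int(headers.get('Content-Length', '0')) < 10 * 1024 * 1024:
        urllib.urlretrieve(url, 'archive.zip')    # small enough, fetch it for real

This does issue two GET requests (the first body is simply never read); sending a HEAD request instead is the usual refinement if that matters.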

Re: urllib "quote" problem

2010-08-20 Thread MRAB
John Nagle wrote: On 8/20/2010 8:41 AM, Aahz wrote: In article<4c5eef7f$0$1609$742ec...@news.sonic.net>, John Nagle wrote: This looks like code that will do the wrong thing in Python 2.6 for characters in the range 128-255. Those are illegal in type "str", but this code is constructing

Re: urllib "quote" problem

2010-08-20 Thread Robert Kern
On 8/20/10 1:50 PM, John Nagle wrote: On 8/20/2010 8:41 AM, Aahz wrote: In article<4c5eef7f$0$1609$742ec...@news.sonic.net>, John Nagle wrote: This looks like code that will do the wrong thing in Python 2.6 for characters in the range 128-255. Those are illegal in type "str", but this code is

Re: urllib "quote" problem

2010-08-20 Thread John Nagle
On 8/20/2010 8:41 AM, Aahz wrote: In article<4c5eef7f$0$1609$742ec...@news.sonic.net>, John Nagle wrote: This looks like code that will do the wrong thing in Python 2.6 for characters in the range 128-255. Those are illegal in type "str", but this code is constructing such values with "c

Re: urllib "quote" problem

2010-08-20 Thread Aahz
In article <4c5eef7f$0$1609$742ec...@news.sonic.net>, John Nagle wrote: > > This looks like code that will do the wrong thing in >Python 2.6 for characters in the range 128-255. Those are >illegal in type "str", but this code is constructing such >values with "chr". WDYM "illegal"? -- Aahz
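
For what it's worth, a small Python 2 sketch of the point in dispute: byte values 128-255 are perfectly legal in a str, and urllib.quote simply percent-encodes them:

    import urllib

    s = ''.join(chr(c) for c in range(128, 256))   # a valid 128-byte str
    print len(s)                                   # 128
    print urllib.quote(s)[:12]                     # %80%81%82%83

Whether those bytes are the *right* bytes is a separate question of encoding (Latin-1 vs UTF-8), which is where quoting code most often trips up between Python 2 and 3.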

Re: urllib timeout

2010-07-30 Thread Lawrence D'Oliveiro
In message <43f464f9-3f8a-4bec-8d06-930092d5a...@g6g2000pro.googlegroups.com>, kBob wrote: > The company changed the Internet LAN connections to "Accept Automatic > settings" and "Use automatic configuration script" Look at that configuration script, figure out what it’s returning for a proxy
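
Once you know what proxy the auto-configuration script hands back, one workaround (an assumption on my part, not spelled out in the thread) is to pass it to urllib explicitly instead of relying on the registry settings; the proxy host, port and URL below are placeholders:

    import urllib

    proxies = {'http': 'http://proxy.example.com:8080'}   # whatever the PAC script returns
    data = urllib.urlopen('http://example.com/satellite.jpg', proxies=proxies).read()
    # Passing proxies={} instead forces a direct connection, ignoring the IE settings.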

Re: urllib timeout

2010-07-28 Thread kBob
On Jul 28, 12:44 pm, John Nagle wrote: > On 7/27/2010 2:36 PM, kBob wrote: > > > > > > > > >   I created a script to access weather satellite imagery fron NOAA's > > ADDS. > > >   It worked fine until recently with Python 2.6. > > >   The company changed the Internet LAN connections to "Accept Aut

Re: urllib timeout

2010-07-28 Thread John Nagle
On 7/27/2010 2:36 PM, kBob wrote: I created a script to access weather satellite imagery fron NOAA's ADDS. It worked fine until recently with Python 2.6. The company changed the Internet LAN connections to "Accept Automatic settings" and "Use automatic configuration script" How do yo

Re: urllib timeout

2010-07-28 Thread Chris Rebert
On Wed, Jul 28, 2010 at 9:30 AM, kBob wrote: > On Jul 28, 9:11 am, kBob wrote: >> The connection problem has to do with the proxy settings. >> >>  In order for me to use Internet Explorer, the LAN's Automatic >> configuration must be turned on and use a script found on the >> company's proxy ser

Re: urllib timeout

2010-07-28 Thread kBob
On Jul 28, 9:11 am, kBob wrote: > On Jul 27, 4:56 pm, MRAB wrote: > > > > > > > kBob wrote: > > > On Jul 27, 4:23 pm, MRAB wrote: > > >> kBob wrote: > > > >>>  I created a script to access weather satellite imagery fron NOAA's > > >>> ADDS. > > >>>  It worked fine until recently with Python 2.6.

Re: urllib timeout

2010-07-28 Thread kBob
On Jul 27, 4:56 pm, MRAB wrote: > kBob wrote: > > On Jul 27, 4:23 pm, MRAB wrote: > >> kBob wrote: > > >>>  I created a script to access weather satellite imagery fron NOAA's > >>> ADDS. > >>>  It worked fine until recently with Python 2.6. > >>>  The company changed the Internet LAN connections

Re: urllib timeout

2010-07-27 Thread MRAB
kBob wrote: On Jul 27, 4:23 pm, MRAB wrote: kBob wrote: I created a script to access weather satellite imagery fron NOAA's ADDS. It worked fine until recently with Python 2.6. The company changed the Internet LAN connections to "Accept Automatic settings" and "Use automatic configuration s

Re: urllib timeout

2010-07-27 Thread kBob
On Jul 27, 4:23 pm, MRAB wrote: > kBob wrote: > > >  I created a script to access weather satellite imagery fron NOAA's > > ADDS. > > >  It worked fine until recently with Python 2.6. > > >  The company changed the Internet LAN connections to "Accept Automatic > > settings" and "Use automatic conf

Re: urllib timeout

2010-07-27 Thread MRAB
kBob wrote: I created a script to access weather satellite imagery fron NOAA's ADDS. It worked fine until recently with Python 2.6. The company changed the Internet LAN connections to "Accept Automatic settings" and "Use automatic configuration script" How do you get urllib.urlopen to use

Re: urllib, can't seem to get form post right

2009-09-24 Thread Jon Clements
On 24 Sep, 22:18, "Adam W." wrote: > I'm trying to scrape some historical data from NOAA's website, but I > can't seem to feed it the right form values to get the data out of > it.  Here's the code: > > import urllib > import urllib2 > > ## The source page http://www.erh.noaa.gov/bgm/climate/bgm.sht
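
The usual urllib/urllib2 pattern for a form post of this kind is sketched below. The form field names are placeholders; they have to match the actual input names (and any hidden fields) in the page's <form> markup, which is where scrapes like this most often go wrong:

    import urllib
    import urllib2

    form = {'month': '9', 'year': '2009', 'product': 'F6'}          # placeholder field names
    data = urllib.urlencode(form)
    resp = urllib2.urlopen('http://example.com/form-page', data)    # POST, because data is given
    print resp.read()[:200]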

Re: urllib with x509 certs

2009-07-17 Thread Lacrima
Hello! I've solved this problem, using pyCurl. Here is sample code. import pycurl import StringIO b = StringIO.StringIO() c = pycurl.Curl() url = 'https://example.com/' c.setopt(pycurl.URL, url) c.setopt(pycurl.WRITEFUNCTION, b.write) c.setopt(pycurl.CAINFO, 'cert.crt') c.setopt(pycurl.SSLKEY, 'm
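
A slightly fuller sketch of the same pyCurl approach (file names, pass phrase and URL are placeholders); SSLKEYPASSWD is the option that lets the script run unattended instead of prompting for the PEM pass phrase:

    import StringIO
    import pycurl

    buf = StringIO.StringIO()
    c = pycurl.Curl()
    c.setopt(pycurl.URL, 'https://example.com/')
    c.setopt(pycurl.WRITEFUNCTION, buf.write)
    c.setopt(pycurl.CAINFO, 'cert.crt')        # CA bundle used to verify the server
    c.setopt(pycurl.SSLCERT, 'mycert.cer')     # client certificate
    c.setopt(pycurl.SSLKEY, 'mykey.key')       # client private key
    c.setopt(pycurl.SSLKEYPASSWD, 'secret')    # key pass phrase, so no interactive prompt
    c.perform()
    c.close()
    print buf.getvalue()[:200]

As Martin points out elsewhere in the thread, embedding the pass phrase in source just moves the secret-handling problem; reading it from a protected file or environment variable is the usual compromise.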

Re: urllib with x509 certs

2009-07-04 Thread Martin v. Löwis
> Thanks for the reply. I want my key to be as secure as possible. So I > will remove pass phrase if only there is no other possibility to go > through authentication. And you put the passphrase into the source code instead? How does it make that more secure? Regards, Martin -- http://mail.pytho

Re: urllib with x509 certs

2009-07-04 Thread Chris Rebert
2009/7/4 Lacrima : > On Jul 4, 11:24 am, Chris Rebert wrote: >> On Sat, Jul 4, 2009 at 1:12 AM, Lacrima wrote: >> > Hello! >> >> > I am trying to use urllib to fetch some internet resources, using my >> > client x509 certificate. >> > I have divided my .p12 file into mykey.key and mycert.cer files

Re: urllib with x509 certs

2009-07-04 Thread Lacrima
On Jul 4, 12:38 pm, "Martin v. Löwis" wrote: > > This works Ok! But every time I am asked to enter PEM pass phrase, > > which I specified during dividing my .p12 file. > > So my question... What should I do to make my code fetch any url > > automatically (without asking me every time to enter pass

Re: urllib with x509 certs

2009-07-04 Thread Lacrima
On Jul 4, 11:24 am, Chris Rebert wrote: > On Sat, Jul 4, 2009 at 1:12 AM, Lacrima wrote: > > Hello! > > > I am trying to use urllib to fetch some internet resources, using my > > client x509 certificate. > > I have divided my .p12 file into mykey.key and mycert.cer files. > > Then I use following

Re: urllib with x509 certs

2009-07-04 Thread Martin v. Löwis
> This works Ok! But every time I am asked to enter PEM pass phrase, > which I specified during dividing my .p12 file. > So my question... What should I do to make my code fetch any url > automatically (without asking me every time to enter pass phrase)? > As I understand there is impossible to spe

Re: urllib with x509 certs

2009-07-04 Thread Chris Rebert
On Sat, Jul 4, 2009 at 1:12 AM, Lacrima wrote: > Hello! > > I am trying to use urllib to fetch some internet resources, using my > client x509 certificate. > I have divided my .p12 file into mykey.key and mycert.cer files. > Then I use following approach: import urllib url = 'https://exam

Re: urllib confusion

2009-02-18 Thread Steven D'Aprano
On Wed, 18 Feb 2009 01:17:40 -0700, Tim H wrote: > When I attempt to open 2 different pages on the same site I get 2 copies > of the first page. ?? ... > Any thoughts? What does your browser do? What does your browser do if you turn off cookies, re-directions and/or referers? -- Steven -- h

Re: Urllib vs. FireFox

2008-10-28 Thread Gilles Ganault
On Fri, 24 Oct 2008 13:15:49 -0700 (PDT), Mike Driscoll <[EMAIL PROTECTED]> wrote: >On Oct 24, 2:53 pm, Rex <[EMAIL PROTECTED]> wrote: >> By the way, if you're doing non-trivial web scraping, the mechanize >> module might make your work much easier. You can install it with >> easy_install.http://ww

Re: Urllib vs. FireFox

2008-10-26 Thread Tim Roberts
Lie Ryan <[EMAIL PROTECTED]> wrote: > >Cookies? Yes, please. I'll take two. Chocolate chip. With milk. -- Tim Roberts, [EMAIL PROTECTED] Providenza & Boekelheide, Inc. -- http://mail.python.org/mailman/listinfo/python-list

Re: Urllib vs. FireFox

2008-10-25 Thread Lie Ryan
On Fri, 24 Oct 2008 20:38:37 +0200, Gilles Ganault wrote: > Hello > > After scratching my head as to why I failed finding data from a web > using the "re" module, I discovered that a web page as downloaded by > urllib doesn't match what is displayed when viewing the source page in > FireFox. >

Re: Urllib vs. FireFox

2008-10-24 Thread Mike Driscoll
On Oct 24, 2:53 pm, Rex <[EMAIL PROTECTED]> wrote: > Right. If you want to get the same results with your Python script > that you did with Firefox, you can modify the browser headers in your > code. > > Here's an example with > urllib2: http://vsbabu.org/mt/archives/2003/05/27/urllib2_setting_http

Re: Urllib vs. FireFox

2008-10-24 Thread Rex
Right. If you want to get the same results with your Python script that you did with Firefox, you can modify the browser headers in your code. Here's an example with urllib2: http://vsbabu.org/mt/archives/2003/05/27/urllib2_setting_http_headers.html By the way, if you're doing non-trivial web scr
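
The linked article boils down to something like the sketch below (the User-Agent string and URL are placeholders): build a Request with browser-like headers so the server returns the same markup Firefox would see:

    import urllib2

    req = urllib2.Request(
        'http://example.com/search?keywords=wargame',
        headers={'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; rv:60.0) Gecko/20100101 Firefox/60.0'})
    html = urllib2.urlopen(req).read()

Headers only go so far, though: pages assembled by JavaScript after load will still differ from what urllib downloads.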

Re: Urllib vs. FireFox

2008-10-24 Thread Stefan Behnel
Gilles Ganault wrote: > After scratching my head as to why I failed finding data from a web > using the "re" module, I discovered that a web page as downloaded by > urllib doesn't match what is displayed when viewing the source page in > FireFox. > > For instance, when searching Amazon for "Wargam

Re: urllib accept-language doesn't have any effect

2008-10-17 Thread Lawrence D'Oliveiro
In message <[EMAIL PROTECTED]>, Martin Bachwerk wrote: > It does indeed give me a swedish version.. of www.google.de :) That's the > beauty about Google that they have all languages for all domains > available. > > However if I try it with www.gizmodo.com (a tech blog in several > languages) I s

Re: urllib accept-language doesn't have any effect

2008-10-16 Thread Martin Bachwerk
Hey Philip, thanks for the snipplet, but I have tried that code already. It does indeed give me a swedish version.. of www.google.de :) That's the beauty about Google that they have all languages for all domains available. However if I try it with www.gizmodo.com (a tech blog in several lang

Re: urllib accept-language doesn't have any effect

2008-10-16 Thread Philip Semanchuk
On Oct 16, 2008, at 6:50 AM, Martin Bachwerk wrote: Hmm, thanks for the ideas, I've checked the requests in Firefox one more time after deleting all the cookies and both google.com and gizmodo.com do indeed forward me to the German site without caring about the browser settings. wget s

Re: urllib accept-language doesn't have any effect

2008-10-16 Thread Diez B. Roggisch
Martin Bachwerk wrote: > Hmm, thanks for the ideas, > > I've checked the requests in Firefox one more time after deleting all > the cookies and both google.com and gizmodo.com do indeed forward me to > the German site without caring about the browser settings. > > wget shows me that the server d

Re: urllib accept-language doesn't have any effect

2008-10-16 Thread Martin Bachwerk
Hmm, thanks for the ideas, I've checked the requests in Firefox one more time after deleting all the cookies and both google.com and gizmodo.com do indeed forward me to the German site without caring about the browser settings. wget shows me that the server does a 302 redirect straight away..

Re: urllib accept-language doesn't have any effect

2008-10-15 Thread Philip Semanchuk
On Oct 15, 2008, at 9:50 AM, Martin Bachwerk wrote: Hello, I'm trying to load a couple of pages using the urllib2 module. The problem is that I live in Germany and some sites seem to look at the IP of the client and forward him to a localized page.. Here's an example of the code, how I w

Re: urllib accept-language doesn't have any effect

2008-10-15 Thread Diez B. Roggisch
Martin Bachwerk wrote: > Hi, > > yes, well my browser settings are set to display sites in following > languages "en-gb" then "en". > > As a matter of fact, Google does indeed show me the German site first, > before I click on "go to google.com" and it probably stores a cookie to > remember that

Re: urllib accept-language doesn't have any effect

2008-10-15 Thread Martin Bachwerk
Hi, yes, well my browser settings are set to display sites in following languages "en-gb" then "en". As a matter of fact, Google does indeed show me the German site first, before I click on "go to google.com" and it probably stores a cookie to remember that. But a site like gizmodo.com for

Re: urllib accept-language doesn't have any effect

2008-10-15 Thread Diez B. Roggisch
Martin Bachwerk wrote: > Hello, > > I'm trying to load a couple of pages using the urllib2 module. The > problem is that I live in Germany and some sites seem to look at the IP > of the client and forward him to a localized page.. Here's an example of > the code, how I want to access google.com m

Re: urllib equivalent for HTTP requests

2008-10-13 Thread lkcl
On Oct 8, 7:34 am, "Diez B. Roggisch" <[EMAIL PROTECTED]> wrote: > > I would like to keep track of that but I realize that py does not have > > a JS engine. :( Anyone with ideas on how to track these items or yep. > What you can't do though is to get the requests that are issued > by Javascript

Re: urllib equivalent for HTTP requests

2008-10-08 Thread Diez B. Roggisch
K wrote: Hello everyone, I understand that urllib and urllib2 serve as really simple page request libraries. I was wondering if there is a library out there that can get the HTTP requests for a given page. Example: URL: http://www.google.com/test.html Something like: urllib.urlopen('http://w

Re: urllib error on urlopen

2008-09-25 Thread Mike Driscoll
On Sep 24, 9:36 pm, Steven D'Aprano <[EMAIL PROTECTED] cybersource.com.au> wrote: > On Wed, 24 Sep 2008 08:46:56 -0700, Mike Driscoll wrote: > > Hi, > > > I have been using the following code for over a year in one of my > > programs: > > > f = urllib2.urlopen('https://www.companywebsite.com/somest

Re: urllib error on urlopen

2008-09-25 Thread Mike Driscoll
On Sep 24, 7:08 pm, Michael Palmer <[EMAIL PROTECTED]> wrote: > On Sep 24, 11:46 am, Mike Driscoll <[EMAIL PROTECTED]> wrote: > > > > > Hi, > > > I have been using the following code for over a year in one of my > > programs: > > > f = urllib2.urlopen('https://www.companywebsite.com/somestring') >

Re: urllib error on urlopen

2008-09-24 Thread Steven D'Aprano
On Wed, 24 Sep 2008 08:46:56 -0700, Mike Driscoll wrote: > Hi, > > I have been using the following code for over a year in one of my > programs: > > f = urllib2.urlopen('https://www.companywebsite.com/somestring') > > It worked great until the middle of the afternoon yesterday. Now I get > the

Re: urllib error on urlopen

2008-09-24 Thread Michael Palmer
On Sep 24, 11:46 am, Mike Driscoll <[EMAIL PROTECTED]> wrote: > Hi, > > I have been using the following code for over a year in one of my > programs: > > f = urllib2.urlopen('https://www.companywebsite.com/somestring') > > It worked great until the middle of the afternoon yesterday. Now I get > the

Re: urllib fails to connect

2008-08-20 Thread jlist
Thanks. My problem was not how to use a proxy server but how to not use the IE proxy :) BTW, I'm not a fan of the way urllib2 uses a proxy particularly. I think it's really unneccesarily complicated. I think it should be something like this: def urlopen(url, proxy='') And if you want to use a pro
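
For reference, the "don't use the IE proxy" case is already expressible, if not obvious (URL is a placeholder): an explicit proxies mapping overrides what urllib reads from the Windows registry, and an empty mapping disables proxying entirely:

    import urllib

    # An empty dict means "no proxy at all", regardless of the IE/registry settings.
    html = urllib.urlopen('http://example.com/', proxies={}).read()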

Re: urllib fails to connect

2008-08-20 Thread Edwin . Madari
jlist wrote: > > I found out why. I set a proxy in IE and I didn't know > ActiveState Python use IE proxy! > > > I'm running ActiveState Python 2.5 on Windows XP. It used > > to work fine. Today however I get (10061, 'Connection refused') > > for any site I try with urllib.urlopen(). > switching

Re: urllib fails to connect

2008-08-20 Thread Fredrik Lundh
jlist wrote: My guess is urllib.urlopen() wraps the wininet calls, which share IE proxy settings. urllib doesn't use wininet, but it does fetch the proxy settings from the Windows registry. -- http://mail.python.org/mailman/listinfo/python-list

Re: urllib fails to connect

2008-08-20 Thread jlist
My guess is urllib.urlopen() wraps the wininet calls, which share IE proxy settings. > Perhaps IE's proxy settings are effectively setting the Windows system > networking proxy settings? -- http://mail.python.org/mailman/listinfo/python-list

Re: urllib fails to connect

2008-08-20 Thread Trent Mick
jlist wrote: I found out why. I set a proxy in IE and I didn't know ActiveState Python use IE proxy! I'm running ActiveState Python 2.5 on Windows XP. It used to work fine. Today however I get (10061, 'Connection refused') for any site I try with urllib.urlopen(). Perhaps IE's proxy settings

Re: urllib fails to connect

2008-08-20 Thread raashid bhatt
On Aug 20, 10:06 am, "jlist" <[EMAIL PROTECTED]> wrote: > I'm running ActiveState Python 2.5 on Windows XP. It used > to work fine. Today however I get (10061, 'Connection refused') > for any site I try with urllib.urlopen(). May be the host is Listening on the port you are connecting to or the ho

Re: urllib fails to connect

2008-08-20 Thread jlist
I found out why. I set a proxy in IE and I didn't know ActiveState Python use IE proxy! > I'm running ActiveState Python 2.5 on Windows XP. It used > to work fine. Today however I get (10061, 'Connection refused') > for any site I try with urllib.urlopen(). -- http://mail.python.org/mailman/list

Re: urllib getting SSL certificate info

2008-08-20 Thread Heikki Toivonen
Ghirai wrote: > Would you mind sharing some code? The module is pretty ugly and on top has no > docs whatsoever; got tired of reading the source... Did you find out the right homepage at http://chandlerproject.org/Projects/MeTooCrypto? The original author, ngps, hasn't been involved in the projec

Re: urllib getting SSL certificate info

2008-08-19 Thread Ghirai
On Wednesday 20 August 2008 00:05:47 Jean-Paul Calderone wrote: > I don't know about M2Crypto. Here's some sample code for PyOpenSSL: > > from socket import socket > from OpenSSL.SSL import Connection, Context, SSLv3_METHOD > s = socket() > s.connect(('google.com', 443)) > c = Connectio
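
Continuing the truncated snippet above, here is a sketch of getting from the connection to the fingerprint the OP asked about (the digest algorithm is my assumption; SSLv3_METHOD is kept from the quoted code, though modern servers generally require a TLS method instead):

    from socket import socket
    from OpenSSL.SSL import Connection, Context, SSLv3_METHOD

    s = socket()
    s.connect(('google.com', 443))
    c = Connection(Context(SSLv3_METHOD), s)
    c.set_connect_state()               # act as the TLS/SSL client
    c.do_handshake()
    cert = c.get_peer_certificate()
    print cert.digest('sha1')           # colon-separated hex fingerprint of the server cert

Note this only retrieves the peer certificate; no chain validation is configured in this sketch.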

Re: urllib getting SSL certificate info

2008-08-19 Thread Jean-Paul Calderone
On Tue, 19 Aug 2008 23:06:30 +0300, Ghirai <[EMAIL PROTECTED]> wrote: On Sunday 17 August 2008 20:15:47 John Nagle wrote: If you really need details from the SSL cert, you usually have to use M2Crypto. The base SSL package doesn't actually do much with certificates. It doesn't validate the

Re: urllib getting SSL certificate info

2008-08-19 Thread Ghirai
On Sunday 17 August 2008 20:15:47 John Nagle wrote: > If you really need details from the SSL cert, you usually have to use > M2Crypto. The base SSL package doesn't actually do much with certificates. > It doesn't validate the certificate chain. And those strings of > attributes you can get

Re: urllib getting SSL certificate info

2008-08-17 Thread John Nagle
Fredrik Lundh wrote: Ghirai wrote: Using urllib, is there any way i could access some info about the SSL certificate (when opening a https url)? I'm really interested in the fingerprint. I haven't been able to find anything so far. you can get some info via (undocumented?) attributes on th

Re: urllib getting SSL certificate info

2008-08-16 Thread Ghirai
On Saturday 16 August 2008 12:16:14 Fredrik Lundh wrote: > Ghirai wrote: > > Using urllib, is there any way i could access some info about the SSL > > certificate (when opening a https url)? > > > > I'm really interested in the fingerprint. > > > > I haven't been able to find anything so far. > > y

Re: urllib getting SSL certificate info

2008-08-16 Thread Fredrik Lundh
Ghirai wrote: Using urllib, is there any way i could access some info about the SSL certificate (when opening a https url)? I'm really interested in the fingerprint. I haven't been able to find anything so far. you can get some info via (undocumented?) attributes on the file handle: >>> im

Re: urllib and login with passwords

2008-07-26 Thread Jive Dadson
Thanks, Rob! Some of that is beyond my maturity level, but I'll try to figure it out. If anyone has specific info on about how YouTube does it, I would appreciate the info. -- http://mail.python.org/mailman/listinfo/python-list

Re: urllib and login with passwords

2008-07-26 Thread Rob Williscroft
Jive Dadson wrote in news:[EMAIL PROTECTED] in comp.lang.python: > Hey folks! > > There are various web pages that I would like to read using urllib, but > they require login with passwords. Can anyone tell me how to find out > how to do that, both in general and specifically for YouTube.com.
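
The general shape of a form login with urllib2 plus cookie handling is sketched below; the URL and field names are placeholders, and a real site such as YouTube adds redirects and hidden form tokens on top of this, which is why a site API or a higher-level library (mechanize was the usual suggestion of the era) is often the easier route:

    import cookielib
    import urllib
    import urllib2

    jar = cookielib.CookieJar()
    opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(jar))

    data = urllib.urlencode({'username': 'me', 'password': 'secret'})   # placeholder fields
    opener.open('http://example.com/login', data)                       # session cookie -> jar
    page = opener.open('http://example.com/members').read()             # now authenticated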

Re: Urllib(1/2) how to open multiple client sockets?

2008-06-26 Thread MRAB
On Jun 26, 11:48 am, ShashiGowda <[EMAIL PROTECTED]> wrote: > Hey there i made a script to download all images from a web site but > it runs damn slow though I have a lot of bandwidth waiting to be used > please tell me a way to use urllib to open many connections to the > server to download many p

Re: Urllib(1/2) how to open multiple client sockets?

2008-06-26 Thread Gerhard Häring
ShashiGowda wrote: Hey there i made a script to download all images from a web site but it runs damn slow though I have a lot of bandwidth waiting to be used please tell me a way to use urllib to open many connections to the server to download many pics simultaneously Any off question suggest
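
One plain-Python way to open several connections at once is a small thread pool around urlretrieve, sketched below with a made-up URL list (Python 2 idioms to match the thread):

    import threading
    import urllib

    urls = ['http://example.com/img/%03d.jpg' % i for i in range(100)]   # placeholder list
    lock = threading.Lock()

    def worker():
        while True:
            with lock:                    # hand out one URL at a time
                if not urls:
                    return
                url = urls.pop()
            urllib.urlretrieve(url, url.rsplit('/', 1)[-1])

    threads = [threading.Thread(target=worker) for _ in range(8)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

Eight workers is arbitrary; past a handful of connections the bottleneck is usually the server or politeness limits rather than local bandwidth.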

Re: urllib tutorial or manual

2008-06-26 Thread Simon Brunning
2008/6/24 Alex Bryan <[EMAIL PROTECTED]>: > I have never used the urllib class and I need to use it for an app I am > working on. I am wondering if anyone has any good sites that will fill me in > on it(especially the urllib.urlopen module). Or better yet, an example of > how you would submit a sea

Re: urllib (54, 'Connection reset by peer') error

2008-06-21 Thread John Nagle
Tim Golden wrote: [EMAIL PROTECTED] wrote: Thanks for the help. The error handling worked to a certain extent but after a while the server does seem to stop responding to my requests. I have a list of about 7,000 links to pages I want to parse the HTML of (it's basically a web crawler) but aft

Re: urllib (54, 'Connection reset by peer') error

2008-06-18 Thread Tim Golden
[EMAIL PROTECTED] wrote: Thanks for the help. The error handling worked to a certain extent but after a while the server does seem to stop responding to my requests. I have a list of about 7,000 links to pages I want to parse the HTML of (it's basically a web crawler) but after a certain number
