On Saturday, 21 December 2019 21:46:43 UTC, Ben Hearn wrote:
> Hello all,
>
> I am having a bit of trouble with a string mismatch operation in my tool I am
> writing.
>
> I am comparing a database collection or url quoted paths to the paths on the
> users drive.
>
> These 2 paths look identical, one from the drive & the other from an
> xml url:
> a = '/Users/macbookpro/Music/tracks_new/_NS_2018/J.Staaf - ¡Móchate!
> _PromoMix_.wav'
> b = '/Users/macbookpro
On 12/21/19 4:46 PM, Ben Hearn wrote:
import difflib
print('\n'.join(difflib.ndiff([a], [b])))
- /Users/macbookpro/Music/tracks_new/_NS_2018/J.Staaf - ¡Móchate!
_PromoMix_.wav
?
^^
+ /Users/ma
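When two paths print identically but compare unequal, one frequent cause on macOS (worth checking here, though the thread's resolution isn't shown) is Unicode normalization: the filesystem stores names decomposed (NFD) while XML sources usually carry composed (NFC) text. A small sketch, with a sample word standing in for the ¡Móchate! path:

```python
import unicodedata

nfc = "M\u00f3chate"    # "Móchate" with a precomposed ó (NFC)
nfd = "Mo\u0301chate"   # the same text as "o" + combining acute accent (NFD)

# The strings render identically but are different code-point sequences.
print(nfc == nfd)                                # False
print(unicodedata.normalize("NFC", nfd) == nfc)  # True once both are NFC
```

Normalizing both sides (for example to NFC) before comparing makes paths from the drive and from the XML agree.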
On Friday, June 30, 2017 at 1:30:10 PM UTC+1, Rasputin wrote:
> good luck with that, mate !
Please don't change the subject line, and please provide some context when
you reply; we're not mind readers yet :)
Kindest regards.
--
Mark Lawrence.
--
https://mail.python.org/mailman/listinfo/python-list
On Fri, Jun 20, 2014 at 12:19 AM, Robin Becker wrote:
> in practice [monkeypatching socket] worked well with urllib in python27.
Excellent! That's empirical evidence of success, then.
Like with all monkey-patching, you need to keep it as visible as
possible, but if your driver script is only a p
On Thu, Jun 19, 2014 at 9:51 PM, Robin Becker wrote:
>> Since you mention urllib2, I'm assuming this is Python 2.x, not 3.x.
>> The exact version may be significant.
>>
> I can use python >= 3.3 if required.
The main reason I ask is in case something's changed. Basically, what
I did was go to my Python 2 installation (which happens to be 2.7.3,
because that's what Debian Wheezy ships with - not sure why it has
Can you simply query the server by IP address rather than host name?
According to the docs, urllib2.urlopen() doesn't check the
certific
On Thu, Jun 19, 2014 at 7:22 PM, Robin Becker wrote:
> I want to run torture tests against an https server on domain A; I have
> configured apache on the server to respond to a specific hostname ipaddress.
>
> I don't want to torture the live server so I have set up an alternate
> instance on a di
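The patch Robin reports success with isn't shown in the archive; a minimal sketch of the usual approach is to monkey-patch name resolution so urllib/urllib2 connects to the test instance while still sending the production hostname. The host name and address below are made-up placeholders:

```python
import socket

_real_getaddrinfo = socket.getaddrinfo

def _test_getaddrinfo(host, *args, **kwargs):
    # Redirect the production name to the torture-test server.
    if host == "www.example.com":   # hypothetical production hostname
        host = "127.0.0.1"          # hypothetical test-server address
    return _real_getaddrinfo(host, *args, **kwargs)

socket.getaddrinfo = _test_getaddrinfo
```

As discussed in the thread, keep a patch like this as visible as possible and confine it to the driver script; note that for https the certificate check still sees the original hostname, which is the point of the exercise.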
On Tue, Dec 24, 2013 at 12:47 AM, Jeff James wrote:
> I have some simple code I would like to share with someone that can assist
> me in integrating authentication script into. I'm sure it's an easy answer
> for any of you. I am still researching, but on this particular project,
> time is of the
So I'm using the following script to check our sites to make sure they are
all up and some of them are reporting they are "down" when, in fact, they
are actually up. These sites do not require a logon in order for the home
page to come up. Could this be due to some port being blocked internally
Jeff James writes:
> Sorry to be a pain here, guys, as I'm also a newbie at this as well.
> Where, exactly in the script would I place the " print str(e) " ?
except Exception, e:
    print site + " is down"
    print str(e)
This worked perfectly. Thank You
The line after the print site + " is down" line.
Original Post :
I'm not really receiving an "exception" other than those three sites, out
of the 30 or so I have listed, are the only sites
Date: Mon, 16 Dec 2013 06:54:48 -0500
Subject: Re: Question RE urllib
http://my..com/intranet.html is down
http://#.main..com/psso/pssignsso.asp?dbname=FSPRD90 is down
http://sharepoint..com/regions/west/PHX_NSC/default.aspx is down
On 2013-12-16 04:40, Jeff James wrote:
> These sites do not require a logon in order for the home
> page to come up. Could this be due to some port being blocked
> internally ? Only one of the sites reporting as down is "https" but
> all are internal sites. Is there some other component I should
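Put together, the checker loop under discussion, with the suggested print str(e) in place, looks roughly like this. The thread is Python 2; this sketch is the Python 3 shape, and the site list is a placeholder:

```python
import urllib.request

def check_site(url, timeout=10):
    """Return (True, None) if the URL answers, else (False, error text)."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True, None
    except Exception as e:
        return False, str(e)   # the detail Jeff was asked to print

sites = []   # hypothetical: the ~30 internal URLs would go here
for site in sites:
    ok, err = check_site(site)
    print(site, "is up" if ok else "is down: " + err)
```

Printing the exception text is what distinguishes "connection refused by a blocked port" from, say, a certificate error on the one https site.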
luca72 wrote:
>
>Hello i have a simple question:
>up to now if i have to parse a page i do as follow:
>...
>Now i have the site that is open by an html file like this:
>...
>how can i open it with urllib, please note i don't have to parse this
>file, but i have to parse the site where he point.
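luca72's launcher file isn't shown, but such pages usually point at the real site via a frame or link. A hedged sketch that extracts the first frame/iframe/link target with the standard-library parser, so the real page can then be fetched with urllib (the sample HTML is made up):

```python
from html.parser import HTMLParser

class TargetFinder(HTMLParser):
    """Collect the URL a launcher page points at (frame src or link href)."""
    def __init__(self):
        super().__init__()
        self.target = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if self.target is None:
            if tag in ("frame", "iframe") and "src" in attrs:
                self.target = attrs["src"]
            elif tag == "a" and "href" in attrs:
                self.target = attrs["href"]

launcher = '<html><frameset><frame src="http://example.com/real"></frameset></html>'
finder = TargetFinder()
finder.feed(launcher)
print(finder.target)   # http://example.com/real
# The extracted target is what gets passed to urllib.request.urlopen().
```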
Thanks, everyone!
Problem solved.
--
Regards,
Daniil
On Fri, Jul 1, 2011 at 1:53 AM, Даниил Рыжков wrote:
> Hello again!
> Another question: urlopen() reads full file's content, but how can I
> get page by small parts?
I don't think that's true. Just pass .read() the number of bytes you
want to read, just as you would with an actual file object.
C
On Fri, Jul 1, 2011 at 2:23 PM, Даниил Рыжков wrote:
> Hello again!
> Another question: urlopen() reads full file's content, but how can I
> get page by small parts?
>
Set the Range header for HTTP requests. The format is specified here:
http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec
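In today's urllib.request, the Range header suggested above is set on the Request object; the server must support ranges and will answer 206 Partial Content. URL is a placeholder:

```python
import urllib.request

req = urllib.request.Request("http://example.com/big.iso")  # hypothetical URL
req.add_header("Range", "bytes=0-1023")   # ask for only the first 1 KiB
# with urllib.request.urlopen(req) as r:
#     first_kb = r.read()
```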
Hello again!
Another question: urlopen() reads full file's content, but how can I
get page by small parts?
Regards,
Daniil
Даниил Рыжков wrote:
> How can I get headers with urlretrieve? I want to send request and get
> headers with necessary information before I execute urlretrieve(). Or
> are there any alternatives for urlretrieve()?
It's easy to do it manually:
>>> import urllib2
Connect to website and inspect h
On Fri, Jul 1, 2011 at 12:03 AM, Даниил Рыжков wrote:
> Hello, everyone!
>
> How can I get headers with urlretrieve? I want to send request and get
> headers with necessary information before I execute urlretrieve(). Or
> are there any alternatives for urlretrieve()?
You can use regular urlopen()
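The manual alternative to urlretrieve() hinted at above: open the URL, read the headers from .info(), and only then decide whether to download the body. A sketch (the data: URL stands in for the real resource):

```python
import urllib.request

resp = urllib.request.urlopen("data:,hello")   # stand-in for the real URL
headers = resp.info()   # message object holding the response headers
print(headers.get("Content-Length"))   # may be None if the server omits it
body = resp.read()
```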
In article <4c5eef7f$0$1609$742ec...@news.sonic.net>,
John Nagle wrote:
>
> This looks like code that will do the wrong thing in
>Python 2.6 for characters in the range 128-255. Those are
>illegal in type "str", but this code is constructing such
>values with "chr".
WDYM "illegal"?
--
Aahz
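Aahz's point stands for Python 2, where chr(200) produces a perfectly legal one-byte str; the hazard Nagle is circling is mixing such byte strings with unicode. Python 3 makes the split explicit, which can be sketched as:

```python
# In Python 3, text and bytes are separate types:
assert chr(200) == "\u00c8"     # chr() yields the character U+00C8 (È)
assert bytes([200]) == b"\xc8"  # a raw byte with value 0xC8
# Converting between them requires naming an encoding:
assert chr(200).encode("latin-1") == bytes([200])
```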
In message
<43f464f9-3f8a-4bec-8d06-930092d5a...@g6g2000pro.googlegroups.com>, kBob
wrote:
> The company changed the Internet LAN connections to "Accept Automatic
> settings" and "Use automatic configuration script"
Look at that configuration script, figure out what it’s returning for a
proxy
On Jul 28, 12:44 pm, John Nagle wrote:
> On 7/27/2010 2:36 PM, kBob wrote:
> > I created a script to access weather satellite imagery from NOAA's
> > ADDS.
> > It worked fine until recently with Python 2.6.
> > The company changed the Internet LAN connections to "Accept Automatic
> > settings" and "Use automatic configuration script"
On Jul 28, 9:11 am, kBob wrote:
> The connection problem has to do with the proxy settings.
>
> In order for me to use Internet Explorer, the LAN's Automatic
> configuration must be turned on and use a script found on the
> company's proxy ser
kBob wrote:
I created a script to access weather satellite imagery from NOAA's
ADDS.
It worked fine until recently with Python 2.6.
The company changed the Internet LAN connections to "Accept Automatic
settings" and "Use automatic configuration script"
How do you get urllib.urlopen to use
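Once the proxy host and port have been dug out of the PAC script (as suggested above), urllib2 (urllib.request in Python 3) can be pointed at it explicitly instead of relying on autodetection. The proxy address below is a placeholder:

```python
import urllib.request

proxy = urllib.request.ProxyHandler({
    "http": "http://proxy.example.com:8080",   # hypothetical, read from the PAC script
    "https": "http://proxy.example.com:8080",
})
opener = urllib.request.build_opener(proxy)
# opener.open("http://adds.aviationweather.gov/...")  # then fetch as usual
```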
On 24 Sep, 22:18, "Adam W." wrote:
> I'm trying to scrape some historical data from NOAA's website, but I
> can't seem to feed it the right form values to get the data out of
> it. Heres the code:
>
> import urllib
> import urllib2
>
> ## The source page: http://www.erh.noaa.gov/bgm/climate/bgm.sht
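The usual recipe for feeding a form its values with urllib/urllib2 is to urlencode the fields and pass them as the request body, which makes urllib issue a POST. The field names below are invented for illustration, not NOAA's actual form fields:

```python
import urllib.parse
import urllib.request

fields = {"station": "BGM", "month": "01"}      # hypothetical form fields
data = urllib.parse.urlencode(fields).encode()  # the POST body must be bytes
req = urllib.request.Request("http://example.com/cgi-bin/climate", data)
# with urllib.request.urlopen(req) as r:
#     page = r.read()
```

Getting the right names and values means reading them out of the page's `<form>` and `<input>` tags (or the browser's network inspector) first.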
Hello!
I've solved this problem, using pyCurl.
Here is sample code.
import pycurl
import StringIO
b = StringIO.StringIO()
c = pycurl.Curl()
url = 'https://example.com/'
c.setopt(pycurl.URL, url)
c.setopt(pycurl.WRITEFUNCTION, b.write)
c.setopt(pycurl.CAINFO, 'cert.crt')
c.setopt(pycurl.SSLKEY, 'm
> Thanks for the reply. I want my key to be as secure as possible. So I
> will remove pass phrase if only there is no other possibility to go
> through authentication.
And you put the passphrase into the source code instead? How does it
make that more secure?
Regards,
Martin
> This works Ok! But every time I am asked to enter PEM pass phrase,
> which I specified during dividing my .p12 file.
> So my question... What should I do to make my code fetch any url
> automatically (without asking me every time to enter pass phrase)?
> As I understand there is impossible to spe
On Sat, Jul 4, 2009 at 1:12 AM, Lacrima wrote:
> Hello!
>
> I am trying to use urllib to fetch some internet resources, using my
> client x509 certificate.
> I have divided my .p12 file into mykey.key and mycert.cer files.
> Then I use following approach:
import urllib
url = 'https://exam
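On modern Python the passphrase can be supplied programmatically through ssl.SSLContext, so urlopen never prompts and the key can stay encrypted. The file names follow the question; the password literal is a placeholder, and the load call is commented out because the files don't exist here:

```python
import ssl
import urllib.request

ctx = ssl.create_default_context()
# Load the client certificate and the (still encrypted) key; the password
# argument answers the PEM prompt programmatically:
# ctx.load_cert_chain("mycert.cer", "mykey.key", password="placeholder")
opener = urllib.request.build_opener(urllib.request.HTTPSHandler(context=ctx))
# opener.open("https://example.com/")
```

This keeps the key encrypted on disk, though as Martin notes, a passphrase embedded in source code buys little real secrecy.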
On Wed, 18 Feb 2009 01:17:40 -0700, Tim H wrote:
> When I attempt to open 2 different pages on the same site I get 2 copies
> of the first page. ??
...
> Any thoughts?
What does your browser do?
What does your browser do if you turn off cookies, re-directions and/or
referers?
--
Steven
On Fri, 24 Oct 2008 13:15:49 -0700 (PDT), Mike Driscoll
<[EMAIL PROTECTED]> wrote:
>On Oct 24, 2:53 pm, Rex <[EMAIL PROTECTED]> wrote:
>> By the way, if you're doing non-trivial web scraping, the mechanize
>> module might make your work much easier. You can install it with
>> easy_install.http://ww
Lie Ryan <[EMAIL PROTECTED]> wrote:
>
>Cookies?
Yes, please. I'll take two. Chocolate chip. With milk.
--
Tim Roberts, [EMAIL PROTECTED]
Providenza & Boekelheide, Inc.
On Fri, 24 Oct 2008 20:38:37 +0200, Gilles Ganault wrote:
> Hello
>
> After scratching my head as to why I failed finding data from a web
> using the "re" module, I discovered that a web page as downloaded by
> urllib doesn't match what is displayed when viewing the source page in
> FireFox.
>
Right. If you want to get the same results with your Python script
that you did with Firefox, you can modify the browser headers in your
code.
Here's an example with urllib2:
http://vsbabu.org/mt/archives/2003/05/27/urllib2_setting_http_headers.html
By the way, if you're doing non-trivial web scr
Gilles Ganault wrote:
> For instance, when searching Amazon for "Wargam
In message <[EMAIL PROTECTED]>, Martin
Bachwerk wrote:
> It does indeed give me a swedish version.. of www.google.de :) That's the
> beauty about Google that they have all languages for all domains
> available.
>
> However if I try it with www.gizmodo.com (a tech blog in several
> languages) I s
Hey Philip,
thanks for the snipplet, but I have tried that code already. It does
indeed give me a swedish version.. of www.google.de :) That's the beauty
about Google that they have all languages for all domains available.
However if I try it with www.gizmodo.com (a tech blog in several
lang
Hmm, thanks for the ideas,
I've checked the requests in Firefox one more time after deleting all
the cookies and both google.com and gizmodo.com do indeed forward me to
the German site without caring about the browser settings.
wget shows me that the server does a 302 redirect straight away..
Hi,
yes, well my browser settings are set to display sites in following
languages "en-gb" then "en".
As a matter of fact, Google does indeed show me the German site first,
before I click on "go to google.com" and it probably stores a cookie to
remember that.
But a site like gizmodo.com for
Martin Bachwerk wrote:
> Hello,
>
> I'm trying to load a couple of pages using the urllib2 module. The
> problem is that I live in Germany and some sites seem to look at the IP
> of the client and forward him to a localized page.. Here's an example of
> the code, how I want to access google.com m
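Declaring a language preference from urllib2 is a one-header affair; a sketch in Python 3 terms (site hypothetical):

```python
import urllib.request

req = urllib.request.Request("http://example.com/")   # hypothetical site
req.add_header("Accept-Language", "en-GB,en;q=0.8")
# Note: a server that redirects purely on the client's IP address, as
# gizmodo.com apparently did in this thread (a 302 straight away), will
# ignore this header entirely.
```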
On Oct 8, 7:34 am, "Diez B. Roggisch" <[EMAIL PROTECTED]> wrote:
> > I would like to keep track of that but I realize that py does not have
> > a JS engine. :( Anyone with ideas on how to track these items or
yep.
> What you can't do though is to get the requests that are issued
> by JavaScript
K schrieb:
Hello everyone,
I understand that urllib and urllib2 serve as really simple page
request libraries. I was wondering if there is a library out there
that can get the HTTP requests for a given page.
Example:
URL: http://www.google.com/test.html
Something like: urllib.urlopen('http://w
On Wed, 24 Sep 2008 08:46:56 -0700, Mike Driscoll wrote:
> Hi,
>
> I have been using the following code for over a year in one of my
> programs:
>
> f = urllib2.urlopen('https://www.companywebsite.com/somestring')
>
> It worked great until the middle of the afternoon yesterday. Now I get
> the
Thanks. My problem was not how to use a proxy server but how
to not use the IE proxy :)
BTW, I'm not a fan of the way urllib2 uses a proxy particularly.
I think it's really unneccesarily complicated. I think it should be
something like this:
def urlopen(url, proxy='')
And if you want to use a pro
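Something close to the simple API jlist wants already exists in both generations: Python 2's urllib.urlopen accepts a proxies argument (an empty dict bypasses detection), and in urllib2 / Python 3's urllib.request an empty ProxyHandler does the same:

```python
import urllib.request

# An empty mapping disables proxy autodetection entirely, including the
# IE/registry settings that urllib picks up on Windows.
no_proxy = urllib.request.ProxyHandler({})
opener = urllib.request.build_opener(no_proxy)
# opener.open("http://example.com/")   # goes direct, ignoring IE's proxy
```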
jlist wrote:
My guess is urllib.urlopen() wraps the wininet calls, which share
IE proxy settings.
urllib doesn't use wininet, but it does fetch the proxy settings from
the Windows registry.
My guess is urllib.urlopen() wraps the wininet calls, which share
IE proxy settings.
> Perhaps IE's proxy settings are effectively setting the Windows system
> networking proxy settings?
jlist wrote:
I found out why. I set a proxy in IE and I didn't know
ActiveState Python use IE proxy!
I'm running ActiveState Python 2.5 on Windows XP. It used
to work fine. Today however I get (10061, 'Connection refused')
for any site I try with urllib.urlopen().
Perhaps IE's proxy settings
On Aug 20, 10:06 am, "jlist" <[EMAIL PROTECTED]> wrote:
> I'm running ActiveState Python 2.5 on Windows XP. It used
> to work fine. Today however I get (10061, 'Connection refused')
> for any site I try with urllib.urlopen().
Maybe the host isn't listening on the port you are connecting to, or the ho
Ghirai wrote:
> Would you mind sharing some code? The module is pretty ugly and on top has no
> docs whatsoever; got tired of reading the source...
Did you find out the right homepage at
http://chandlerproject.org/Projects/MeTooCrypto? The original author,
ngps, hasn't been involved in the projec
On Wednesday 20 August 2008 00:05:47 Jean-Paul Calderone wrote:
> I don't know about M2Crypto. Here's some sample code for PyOpenSSL:
>
> from socket import socket
> from OpenSSL.SSL import Connection, Context, SSLv3_METHOD
> s = socket()
> s.connect(('google.com', 443))
> c = Connectio
On Sunday 17 August 2008 20:15:47 John Nagle wrote:
> If you really need details from the SSL cert, you usually have to use
> M2Crypto. The base SSL package doesn't actually do much with certificates.
> It doesn't validate the certificate chain. And those strings of
> attributes you can get
Ghirai wrote:
Using urllib, is there any way i could access some info about the SSL
certificate (when opening a https url)?
I'm really interested in the fingerprint.
I haven't been able to find anything so far.
you can get some info via (undocumented?) attributes on the file handle:
>>> im
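On current Python the ssl module exposes the peer certificate directly, so neither M2Crypto nor undocumented attributes are needed for the fingerprint Ghirai wants. A sketch, with the hashing split into a helper and the host name hypothetical:

```python
import hashlib
import socket
import ssl

def fingerprint(der_cert: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_cert).hexdigest()

def peer_fingerprint(host, port=443):
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            # binary_form=True returns the raw DER bytes of the peer cert.
            return fingerprint(tls.getpeercert(binary_form=True))

# print(peer_fingerprint("example.com"))   # network call, hence commented out
```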
Thanks, Rob! Some of that is beyond my maturity level, but I'll try to
figure it out. If anyone has specific info on about how YouTube does
it, I would appreciate the info.
Jive Dadson wrote in news:[EMAIL PROTECTED] in
comp.lang.python:
> Hey folks!
>
> There are various web pages that I would like to read using urllib, but
> they require login with passwords. Can anyone tell me how to find out
> how to do that, both in general and specifically for YouTube.com.
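In general terms (YouTube's actual login flow isn't documented in this thread, so the URLs and field names below are placeholders), the urllib2-era shape is: keep a CookieJar across requests and POST the login form before fetching protected pages:

```python
import http.cookiejar
import urllib.parse
import urllib.request

jar = http.cookiejar.CookieJar()
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))

# Placeholder field names; real sites name their form fields differently.
creds = urllib.parse.urlencode({"username": "me", "password": "secret"}).encode()
# opener.open("https://example.com/login", creds)   # session cookie lands in jar
# page = opener.open("https://example.com/members").read()
```

Sites with CSRF tokens or JavaScript-driven logins need the token scraped from the form first, or a tool beyond plain urllib.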
ShashiGowda wrote:
Hey there i made a script to download all images from a web site but
it runs damn slow though I have a lot of bandwidth waiting to be used
please tell me a way to use urllib to open many connections to the
server to download many pics simultaneously Any off question
suggest
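urllib itself won't multiplex connections; the stock answer is to run several fetches in threads. A sketch with concurrent.futures, where data: URLs stand in for the real image URLs:

```python
import urllib.request
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    with urllib.request.urlopen(url) as resp:
        return resp.read()

urls = ["data:,img1", "data:,img2", "data:,img3"]   # stand-ins for image URLs

with ThreadPoolExecutor(max_workers=8) as pool:
    images = list(pool.map(fetch, urls))   # fetches run concurrently
```

Threads suit this because each download spends most of its time waiting on the network; max_workers caps how hard the server is hit.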
2008/6/24 Alex Bryan <[EMAIL PROTECTED]>:
> I have never used the urllib class and I need to use it for an app I am
> working on. I am wondering if anyone has any good sites that will fill me in
> on it(especially the urllib.urlopen module). Or better yet, an example of
> how you would submit a sea
Tim Golden wrote:
[EMAIL PROTECTED] wrote:
Thanks for the help. The error handling worked to a certain extent
but after a while the server does seem to stop responding to my
requests.
I have a list of about 7,000 links to pages I want to parse the HTML
of (it's basically a web crawler) but after a certain number
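A common mitigation when a server stops answering after thousands of rapid requests is to pace the crawler and retry with a growing delay. A hedged helper along those lines, where fetch is whatever function the crawler already uses to grab a page:

```python
import time

def fetch_with_retry(fetch, url, tries=3, delay=1.0):
    """Call fetch(url); on failure sleep and retry, doubling the delay."""
    for attempt in range(tries):
        try:
            return fetch(url)
        except Exception:
            if attempt == tries - 1:
                raise            # out of retries: propagate the error
            time.sleep(delay)
            delay *= 2
```

Adding a small fixed sleep between successful requests helps too; 7,000 back-to-back hits can trip a server's connection limits or rate limiting.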