Re: urllib.urlretrieve problem

2005-03-31 Thread Ritesh Raj Sarraf
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Diez B. Roggisch wrote:

 You could for instance try and see what kind of result you got using the
 unix file command - it will tell you that you received an HTML file, not a
 deb.
 
 Or check the mimetype returned - it's text/html in the error case of yours,
 and most probably something like application/octet-stream otherwise.
 

Using the unix file command is not possible at all. The whole goal of the
program is to help people get their packages downloaded from some other
(high-speed) machine, which could be running Windows, Mac OS X, Linux, et
cetera. That is why I'm sticking strictly to Python libraries.

The second suggestion sounds good. I'll look into that.

Thanks,

rrs
- -- 
Ritesh Raj Sarraf
RESEARCHUT -- http://www.researchut.com
Gnupg Key ID: 04F130BC
Stealing logic from one person is plagiarism, stealing from many is
research.
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.2.5 (GNU/Linux)

iD8DBQFCTDhV4Rhi6gTxMLwRAi2BAJ4zp7IsQNMZ1zqpF/hGUAjUyYwKigCeKaqO
FbGuuFOIHawZ8y/ICf87wOI=
=btA5
-END PGP SIGNATURE-

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: urllib.urlretrieve problem

2005-03-30 Thread Ritesh Raj Sarraf
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Larry Bates wrote:

 I noticed you hadn't gotten a reply.  When I execute this it puts the
 following in the retrieved file:
 
 <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
 <HTML><HEAD>
 <TITLE>404 Not Found</TITLE>
 </HEAD><BODY>
 <H1>Not Found</H1>
 The requested URL /pool/updates/main/p/perl/libparl5.6_5.6.1-8.9_i386.deb
 was not found on this server.<P>
 </BODY></HTML>
 
 You will probably need to use something else to first determine if the URL
 actually exists.

I'm happy that at least someone responded as this was my first post to the
python mailing list.

I'm coding a program for offline package management.
The link that I provided could be obsoleted by newer packages. That is where
my problem is. I wanted to know how to raise an exception here so that,
depending on the type of exception, I could make my program act accordingly.

For example, for a temporary name resolution failure, Python raises an
exception which I've handled well. The problem lies with obsolete URLs,
where no exception is raised and I end up having a 404 error page as my
data.

Can we have an exception for that? Or can we use the return value of
urllib.urlretrieve to know whether it downloaded the desired file?
I think my problem is fixable with urllib.urlopen; I just find
urllib.urlretrieve more convenient and want to know if it can be done with
it.
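
For what it's worth, urlretrieve does hand back something to check: it
returns a (filename, headers) pair, and the headers show what the server
actually sent. A minimal sketch, written against Python 3's urllib.request
(the successor of the urllib used in this thread); a throwaway local file
stands in for a real download so the example needs no network:

```python
import os
import tempfile
import urllib.request

# Create a throwaway local .html file so the example needs no network access;
# it plays the role of the server's 404 error page.
fd, path = tempfile.mkstemp(suffix=".html")
os.write(fd, b"<html><body>404 Not Found</body></html>")
os.close(fd)

# urlretrieve has no exit status, but it does return (filename, headers).
filename, headers = urllib.request.urlretrieve("file://" + path)

# A download that should be a binary .deb but comes back as text/html
# is almost certainly an error page rather than the package.
print(headers.get_content_type())  # text/html
```

The same (filename, headers) pair is what Python 2's urllib.urlretrieve
returned, so the content-type check carries over directly.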

Thanks for responding.

rrs
- -- 
Ritesh Raj Sarraf
RESEARCHUT -- http://www.researchut.com
Gnupg Key ID: 04F130BC
Stealing logic from one person is plagiarism, stealing from many is
research.
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.2.5 (GNU/Linux)

iD8DBQFCSuYS4Rhi6gTxMLwRAu0FAJ9R0s4TyB7zHcvDFTflOp2joVkErQCfU4vG
8U0Ah5WTdTQHKRkmPsZsHdE=
=OMub
-END PGP SIGNATURE-



Re: urllib.urlretrieve problem

2005-03-30 Thread Diez B. Roggisch
 I'm coding a program for offline package management.
 The link that I provided could be obsoleted by newer packages. That is
 where my problem is. I wanted to know how to raise an exception here so
 that, depending on the type of exception, I could make my program act
 accordingly.
 
 For example, for a temporary name resolution failure, Python raises an
 exception which I've handled well. The problem lies with obsolete URLs,
 where no exception is raised and I end up having a 404 error page as my
 data.
 
 Can we have an exception for that? Or can we use the return value of
 urllib.urlretrieve to know whether it downloaded the desired file?
 I think my problem is fixable with urllib.urlopen; I just find
 urllib.urlretrieve more convenient and want to know if it can be done
 with it.

It makes no sense for urllib to raise an exception in such a case. From
its point of view, things worked perfectly - it got a result. There was no
network error or anything of the sort.

It's your application that is not happy with the result - but it has to
figure that out by itself.

You could for instance try and see what kind of result you got using the
unix file command - it will tell you that you received an HTML file, not a
deb.

Or check the mimetype returned - it's text/html in the error case of yours,
and most probably something like application/octet-stream otherwise.
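
That mimetype check can be sketched as a tiny helper (the name
looks_like_error_page is made up for illustration; both the old Python 2
headers object and a plain dict support the .get() lookup used here):

```python
def looks_like_error_page(headers):
    """Hypothetical helper: return True when a download that should be a
    binary .deb came back as HTML, i.e. the server sent an error page."""
    ctype = headers.get("Content-Type", "")
    # Strip parameters such as "; charset=iso-8859-1" before comparing.
    return ctype.split(";")[0].strip().lower() == "text/html"

print(looks_like_error_page({"Content-Type": "text/html; charset=iso-8859-1"}))  # True
print(looks_like_error_page({"Content-Type": "application/octet-stream"}))       # False
```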

Regards,

Diez



Re: urllib.urlretrieve problem

2005-03-30 Thread Skip Montanaro

 For example, for a temporary name resolution failure, Python raises an
 exception which I've handled well. The problem lies with obsolete
 URLs, where no exception is raised and I end up having a 404 error
 page as my data.

Diez It makes no sense for urllib to raise an exception in such a
Diez case. From its point of view, things worked perfectly - it got a
Diez result. There was no network error or anything of the sort.

You can subclass FancyURLopener and define a method to handle 404s, 403s,
401s, etc.  There should be no need to resort to grubbing around with file
extensions and such.
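
In today's urllib.request the same idea surfaces as an exception: urlopen
raises urllib.error.HTTPError for 404/403/401 responses (FancyURLopener is
long deprecated). A hedged sketch, with a made-up classify() helper and the
error constructed by hand so no network is needed:

```python
import io
import urllib.error

def classify(exc):
    """Hypothetical helper mapping an exception to a downloader action."""
    # HTTPError is a subclass of URLError, so test it first.
    if isinstance(exc, urllib.error.HTTPError):
        return "obsolete-url" if exc.code == 404 else "http-error"
    if isinstance(exc, urllib.error.URLError):
        return "network-error"  # DNS failure, refused connection, ...
    return "unknown"

# Build by hand the 404 a server would raise for a stale package URL.
err = urllib.error.HTTPError(
    url="http://security.debian.org/pool/updates/main/p/perl/libparl5.6_5.6.1-8.9_i386.deb",
    code=404, msg="Not Found", hdrs=None, fp=io.BytesIO(b""))
print(classify(err))  # obsolete-url
```

In a real downloader the try/except would wrap the urlopen call itself;
the hand-built error here only stands in for the server's response.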

Skip



Re: urllib.urlretrieve problem

2005-03-29 Thread Larry Bates
I noticed you hadn't gotten a reply.  When I execute this it puts the following
in the retrieved file:

<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<HTML><HEAD>
<TITLE>404 Not Found</TITLE>
</HEAD><BODY>
<H1>Not Found</H1>
The requested URL /pool/updates/main/p/perl/libparl5.6_5.6.1-8.9_i386.deb was
not found on this server.<P>
</BODY></HTML>

You will probably need to use something else to first determine if the URL
actually exists.

Larry Bates


Ritesh Raj Sarraf wrote:
 Hello Everybody,
 
  I've got a small problem with urlretrieve.
 Even passing a bad URL to urlretrieve doesn't raise an exception. Or does
 it?
 
 If yes, what exception is it? And how do I use it in my program? I've
 searched a lot but haven't found anything helpful.
 
 Example:
 try:
     urllib.urlretrieve("http://security.debian.org/pool/updates/main/p/perl/libparl5.6_5.6.1-8.9_i386.deb")
 except IOError, X:
     DoSomething(X)
 except OSError, X:
     DoSomething(X)
 
 urllib.urlretrieve doesn't raise an exception even though there is no
 package named libparl5.6
 
 Please Help!
 
 rrs


Re: urllib.urlretrieve problem

2005-03-29 Thread gene . tani
David Mertz's Text Processing in Python book has a good discussion of
trapping 403s and 404s.

http://gnosis.cx/TPiP/

Larry Bates wrote:
 I noticed you hadn't gotten a reply.  When I execute this it puts the
 following in the retrieved file:

 <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
 <HTML><HEAD>
 <TITLE>404 Not Found</TITLE>
 </HEAD><BODY>
 <H1>Not Found</H1>
 The requested URL /pool/updates/main/p/perl/libparl5.6_5.6.1-8.9_i386.deb
 was not found on this server.<P>
 </BODY></HTML>

 You will probably need to use something else to first determine if the
 URL actually exists.

 Larry Bates


 Ritesh Raj Sarraf wrote:
  Hello Everybody,
 
   I've got a small problem with urlretrieve.
  Even passing a bad URL to urlretrieve doesn't raise an exception. Or
  does it?
 
  If yes, what exception is it? And how do I use it in my program? I've
  searched a lot but haven't found anything helpful.
 
  Example:
  try:
      urllib.urlretrieve("http://security.debian.org/pool/updates/main/p/perl/libparl5.6_5.6.1-8.9_i386.deb")
  except IOError, X:
      DoSomething(X)
  except OSError, X:
      DoSomething(X)
 
  urllib.urlretrieve doesn't raise an exception even though there is
  no package named libparl5.6
 
  Please Help!
 
  rrs
