Thanks for that.

I'm surprised that the blob_read function isn't documented anywhere except
in that example though...
Is there a reason for this? Perhaps Tim doesn't want it used?

Adam

----- Original Message -----
From: "Sterin, Ilya" <[EMAIL PROTECTED]>
To: "'Adam Kennedy '" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Saturday, September 22, 2001 12:46 AM
Subject: RE: Large BLOBs


> First of all, you can set LongReadLen to the biggest size a BLOB is
> expected to reach, and it will only fetch the right amount.
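> 
> For example ( just a sketch -- the connection details and the 4 MB
> cap are placeholders ):
> 
>     use DBI;
> 
>     my $dbh = DBI->connect( 'dbi:Oracle:mydb', $user, $pass,
>                             { RaiseError => 1 } );
> 
>     # Fetch buffer must cover the largest value you expect.
>     $dbh->{LongReadLen} = 4 * 1024 * 1024;
>     $dbh->{LongTruncOk} = 0;    # die rather than silently truncate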
>
> In your case, it's probably best to use the blob_read method to fetch
> in chunks. See Readme.longs in the DBD::Oracle package for examples.
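> 
> Roughly, the pattern looks like this ( a sketch only -- the table,
> column and chunk size are made up, and the exact blob_read semantics
> are described in Readme.longs ):
> 
>     # Keep fetch from buffering the LOB itself; read it piecewise.
>     $dbh->{LongReadLen} = 0;
>     $dbh->{LongTruncOk} = 1;
> 
>     my $sth = $dbh->prepare(
>         'SELECT doc_blob FROM documents WHERE id = ?' );
>     $sth->execute( $doc_id );
>     $sth->fetch;    # the row must be fetched before blob_read
> 
>     my $offset = 0;
>     while ( 1 ) {
>         # 0 = index of doc_blob in the select list; 64k chunks
>         my $frag = $sth->blob_read( 0, $offset, 65536 );
>         last unless defined $frag and length $frag;
>         print STDOUT $frag;    # stream straight to the client
>         $offset += length $frag;
>     }
>     $sth->finish;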
>
> Ilya
>
> -----Original Message-----
> From: Adam Kennedy
> To: [EMAIL PROTECTED]
> Sent: 9/20/01 8:30 PM
> Subject: Large BLOBs
>
> Hi there
>
> I'm trying to find a mechanism for dealing with large BLOBs, where
> "large" means a value greater than the memory available to the process.
> This is for a CGI application.
>
> I've been reading through the docs and source for DBI and DBD::Oracle.
> I've checked pretty much every resource I can find, and everything so
> far has either said
>
> 1. Set LongReadLen to a bigger value.
> This won't help, because I obviously can't set LongReadLen large
> enough... BTW, I do know what the size of the BLOBs are, using a
>
> SELECT DBMS_LOB.GETLENGTH($field) AS $field FROM $table
>
> type command ( the Perl side of this is sketched after point 2 below ).
> The largest BLOB I'm currently dealing with is 50 meg. While the server
> ( E10000 ) has the memory to deal with this, the sysadmins had a fit
> when I told them about having to use 50-60 meg of RAM for the process,
> especially since I have the added burden of not being able to write to
> the filesystem ( ipsec policies ), so I can't just read from the
> database, dump to the filesystem, free up the memory, and then stream
> to the browser. I have to stream the file direct from the database to
> the browser, so that 50 meg process might be running for half an hour
> or so... ( or longer, depending on browser bandwidth ). I expect the
> concurrency level to be fairly low, so running 5 or 10 smaller
> processes would be acceptable.
>
> 2. "BLOBs are hard and database dependant"
> This is obvious, however I would have thought that there would be a way
> of
> doing this.
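>
> ( For reference, the Perl side of that length check is just something
> like this, with placeholder table/column names: )
>
>     my ($len) = $dbh->selectrow_array(
>         'SELECT DBMS_LOB.GETLENGTH(doc_blob) FROM documents WHERE id = ?',
>         undef, $doc_id,
>     );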
>
>
> Possible Solutions
>
> IO::BLOB::Pg is a nice little module for PostgreSQL that lets you
> treat a BLOB in the database as a filehandle, and from there I imagine
> a buffered read/write to the browser ( STDOUT ) would be a piece of
> cake. Something similar to this would be very handy. I'm on the verge
> of giving up and trying to write something like IO::BLOB::Oracle, but
> I'd like to exhaust any possible alternatives first.
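>
> ( The copy loop itself would be something like this, assuming $fh is
> an IO::Handle-style object opened on the BLOB -- IO::BLOB::Oracle is
> hypothetical at this point: )
>
>     binmode STDOUT;    # raw bytes to the browser
>     my $buf;
>     while ( $fh->read( $buf, 65536 ) ) {
>         print STDOUT $buf;    # 64k at a time; memory stays bounded
>     }
>     $fh->close;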
>
> Of course, there could be something I'm missing...
>
> Oh, one more thing. DBD::Oracle does the downloading of the BLOB for
> you when you select its field... is there a way to override this, and
> JUST get the BLOB identifier?
>
> Thanks
>
> Adam
>
