Andy Hassall wrote:
> Looking at the MySQL API I'd have to agree that there's no obvious way of
> doing this efficiently. I don't see anything equivalent to, for example,
> Oracle's "piecewise LOB fetch" functionality, or other ways of "streaming"
> the BLOB rather than fetching/updating all in one go.

Right. I suspect I'm on a wild goose chase. Still, it makes me wonder
how people get large BLOBs in and out of MySQL - if you've got 500MB
in the DB, do you really want to be tying up that much memory in the
server each time you fetch or insert it?

(And another problem that comes to mind, though it's off-topic
for this list, is what to do about max_allowed_packet; I
believe this has to be set on both client and server to accommodate
the full size of the query, which is potentially a problem if
you have no control over the server.)
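For reference, a minimal sketch of bumping the limit on both sides, assuming you can edit the config files (the 64M figure is just an example, and on older servers the syntax was `set-variable = max_allowed_packet=...` instead):

```ini
# Server side (my.cnf) - takes effect on server restart:
[mysqld]
max_allowed_packet = 64M

# Client side - e.g. for the mysql command-line client:
[mysql]
max_allowed_packet = 64M
```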

> For fetching I suppose you could hack it together with SUBSTR (but is it
> valid to use that on binary data?). I doubt it'd be very friendly to the
> server, although it would reduce the client memory issues.

Are you referring to Perl's substr() or MySQL's SUBSTRING()? Either
way, I'm not sure it would make much difference. This is from
the "How MySQL Uses Memory" page of the MySQL docs:

> For each table having BLOB columns, a buffer is enlarged dynamically
> to read in larger BLOB values. If you scan a table, a buffer as large
> as the largest BLOB value is allocated.

From that, I get the impression that MySQL is pretty naive in terms of
memory minimization w.r.t. BLOBs.
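For what it's worth, the chunked-SUBSTRING() idea could be sketched roughly like
this (Python rather than Perl, purely for illustration; run_query is a stand-in
for whatever client call actually executes the SQL, and I'm assuming MySQL's
SUBSTRING(), which is 1-based, handles BLOB columns byte-for-byte):

```python
CHUNK = 64 * 1024  # bytes per round trip; purely an example figure


def chunk_queries(table, column, key_column, key, chunk=CHUNK):
    """Yield the SELECT SUBSTRING(...) statements needed to stream one BLOB.

    MySQL's SUBSTRING(str, pos, len) uses 1-based offsets.
    """
    pos = 1
    while True:
        yield ("SELECT SUBSTRING(%s, %d, %d) FROM %s WHERE %s = %s"
               % (column, pos, chunk, table, key_column, key))
        pos += chunk


def fetch_blob(run_query, table, column, key_column, key, chunk=CHUNK):
    """Reassemble a BLOB piecewise; stop once a query returns a short chunk.

    run_query(sql) -> bytes is a stand-in for the real client call
    (DBI/DBD::mysql, MySQLdb, or whatever you use).
    """
    parts = []
    for sql in chunk_queries(table, column, key_column, key, chunk):
        piece = run_query(sql)
        parts.append(piece)
        if len(piece) < chunk:  # short (or empty) piece => end of the BLOB
            break
    return b"".join(parts)
```

Each round trip only moves one chunk, so the client's memory stays bounded;
the server, though, still has to locate the row and slice the value on every
query, which is presumably why it "wouldn't be very friendly to the server".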

--
Regards

Stephen Collyer
Netspinner Ltd
