Eildert Groeneveld wrote:
> I am currently implementing a compressed binary storage scheme for
> genotyping data. These are basically vectors of binary data which may
> be megabytes in size.
> 
> Our current implementation uses the data type bit varying.
> 
> What we want to do is very simple: we want to retrieve such records
> from the database and transfer them unaltered to the client, which
> will do something (uncompressing) with them. As massive amounts of
> data are to be moved, speed is of great importance, precluding any
> to-and-fro conversions.
> 
> Our current implementation uses Perl DBI; we can retrieve the data OK,
> but apparently some conversion is going on.
> 
> Further, we would like to use ODBC from Fortran90 (wrapping the
> C library) for such transfers. However, all sorts of funny things
> happen here which look like conversion issues.
> 
> In old-fashioned network databases a decade ago (in pre-SQL times)
> this was no problem. Maybe there is someone here who knows the PG
> internals sufficiently well to advise on how big blocks of memory
> (i.e. bit varying records) can be transferred UNALTERED between
> backend and clients.

Using the C API (libpq) you can request binary format for your data,
which means that it won't be converted.
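For illustration, a minimal sketch; the table "genotypes" and its bit
varying column "geno" are made-up names. The 1 passed as the last
argument of PQexecParams requests binary result format:

#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=genodb");

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* no parameters; resultFormat = 1 asks for binary results */
    PGresult *res = PQexecParams(conn,
                                 "SELECT geno FROM genotypes WHERE id = 1",
                                 0, NULL, NULL, NULL, NULL,
                                 1);

    if (PQresultStatus(res) != PGRES_TUPLES_OK)
        fprintf(stderr, "query failed: %s", PQerrorMessage(conn));
    else if (PQntuples(res) > 0)
    {
        /* raw bytes in PostgreSQL's binary wire format */
        const char *data = PQgetvalue(res, 0, 0);
        int         len  = PQgetlength(res, 0, 0);

        printf("got %d bytes\n", len);
        (void) data;    /* hand data/len to the client unaltered */
    }

    PQclear(res);
    PQfinish(conn);
    return 0;
}

Note that "unaltered" here means the binary wire format: for bit
varying that is, if I remember correctly, a 32-bit bit count followed
by the packed bits, so there is a small header, but no text conversion.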

I don't think you will be able to use this with DBI or ODBC,
but maybe binary cursors can help
(http://www.postgresql.org/docs/current/static/sql-declare.html),
though I don't know whether DBI or ODBC handles them well.
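A sketch of what the binary-cursor route could look like through
plain PQexec calls (same made-up names as above; a cursor has to live
inside a transaction):

#include <stdio.h>
#include <libpq-fe.h>

static void fetch_binary(PGconn *conn)
{
    PGresult *res;

    res = PQexec(conn, "BEGIN");
    PQclear(res);

    /* BINARY makes the server send this cursor's rows unconverted */
    res = PQexec(conn, "DECLARE geno_cur BINARY CURSOR FOR "
                       "SELECT geno FROM genotypes");
    PQclear(res);

    res = PQexec(conn, "FETCH 100 FROM geno_cur");
    if (PQresultStatus(res) == PGRES_TUPLES_OK)
    {
        int row;

        for (row = 0; row < PQntuples(res); row++)
        {
            /* binary column value, no text conversion */
            const char *data = PQgetvalue(res, row, 0);
            int         len  = PQgetlength(res, row, 0);

            printf("row %d: %d bytes\n", row, len);
            (void) data;
        }
    }
    PQclear(res);

    res = PQexec(conn, "CLOSE geno_cur");
    PQclear(res);
    res = PQexec(conn, "COMMIT");
    PQclear(res);
}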

If you can avoid DBI or ODBC, that would be best.

Yours,
Laurenz Albe
