Re: [opendbx] About large objects

2010-03-14 Thread Norbert Sendetzky
Hi Mariano. CREATE TABLE test_large_columns ( large_clob CLOB, large_nclob NCLOB, large_blob BLOB ). INSERT INTO test_large_columns (large_clob, large_nclob, large_blob) VALUES ('large_clob_data',
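
One way the quoted statements could be run through the plain OpenDBX query interface is sketched below in C. This is only an illustration, assuming an odbx_t* handle that is already connected via odbx_init()/odbx_bind(); the values after 'large_clob_data' are placeholders for the part cut off in the quote, and odbx_result_finish() is assumed to be the 1.x name for releasing a result set. On a real Oracle backend the BLOB column would also need a binary literal rather than a plain string.

#include <string.h>
#include <odbx.h>

/* Create and fill the test table from the quoted mail.  `handle` must be an
 * already-bound connection; error handling is reduced to returning the code. */
int create_and_fill( odbx_t* handle )
{
    const char* stmts[] = {
        "CREATE TABLE test_large_columns "
        "( large_clob CLOB, large_nclob NCLOB, large_blob BLOB )",
        /* the values after 'large_clob_data' are placeholders, the original
         * mail is cut off here; a real BLOB needs a binary literal on Oracle */
        "INSERT INTO test_large_columns (large_clob, large_nclob, large_blob) "
        "VALUES ('large_clob_data', 'large_nclob_data', 'large_blob_data')"
    };

    for( size_t i = 0; i < sizeof( stmts ) / sizeof( stmts[0] ); i++ )
    {
        int err = odbx_query( handle, stmts[i], strlen( stmts[i] ) );
        if( err < 0 )
            return err;

        odbx_result_t* result;   /* drain result sets so the handle stays usable */
        while( odbx_result( handle, &result, NULL, 0 ) > 0 )
            odbx_result_finish( result );
    }
    return 0;
}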

Re: [opendbx] About large objects

2010-02-13 Thread Mariano Martinez Peck
2) I need to know, for all the backends, which ones use the normal functions and which ones use the special functions. Firebird provides the odbx_lo_* capability and Oracle could, but I didn't get it to work without a segfault. All other backends use the regular functions to manage large texts or
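
A minimal sketch of how that distinction could be handled in client code follows, assuming the capability constant ODBX_CAP_LO and the odbx_lo_* signatures from the OpenDBX 1.x headers (both assumptions, not taken from the thread): check the capability first, then use either the large-object interface or the regular field functions.

#include <string.h>
#include <sys/types.h>
#include <odbx.h>

/* Read column `col` of the current row into buf.  Uses the large-object
 * interface where the backend announces it (assumed constant ODBX_CAP_LO and
 * assumed odbx_lo_* signatures), otherwise the regular field functions. */
ssize_t read_large_field( odbx_t* handle, odbx_result_t* result,
                          unsigned long col, char* buf, size_t buflen )
{
    if( odbx_capabilities( handle, ODBX_CAP_LO ) > 0 )
    {
        odbx_lo_t* lo;   /* e.g. Firebird: content is reachable only via odbx_lo_* */
        if( odbx_lo_open( result, &lo, odbx_field_value( result, col ) ) < 0 )
            return -1;

        ssize_t len = odbx_lo_read( lo, buf, buflen );
        odbx_lo_close( lo );
        return len;
    }

    /* Most backends hand back large texts like any other field value. */
    unsigned long len = odbx_field_length( result, col );
    if( len > buflen )
        len = buflen;
    memcpy( buf, odbx_field_value( result, col ), len );
    return (ssize_t) len;
}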

Re: [opendbx] About large objects

2010-02-12 Thread Mariano Martinez Peck
On Fri, Feb 12, 2010 at 8:10 AM, Norbert Sendetzky norb...@linuxnetworks.de wrote: Hi Mariano Hi Norbert! How are you? Here with 7 cm of snow :) Nice. Is that already more than you would like to have? ;-) Fortunately, not yet ;) 4) Suppose I have a row with a CLOB field of 1 GB (to
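
For a value that large, the interesting part is avoiding a 1 GB buffer in the client. A hedged sketch, assuming the backend exposes the odbx_lo_* interface for the result set in question, is to stream the content in fixed-size chunks:

#include <stdio.h>
#include <sys/types.h>
#include <odbx.h>

/* Stream a huge CLOB column to a file in 64 KiB chunks instead of keeping
 * the whole value in memory; assumes the odbx_lo_* interface is available
 * for this backend and result set. */
int stream_clob_to_file( odbx_result_t* result, unsigned long col, FILE* out )
{
    odbx_lo_t* lo;
    char chunk[ 64 * 1024 ];
    ssize_t bytes;

    if( odbx_lo_open( result, &lo, odbx_field_value( result, col ) ) < 0 )
        return -1;

    while( ( bytes = odbx_lo_read( lo, chunk, sizeof( chunk ) ) ) > 0 )
        fwrite( chunk, 1, (size_t) bytes, out );

    odbx_lo_close( lo );
    return bytes < 0 ? -1 : 0;
}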

[opendbx] About large objects

2010-02-11 Thread Mariano Martinez Peck
Hi Norbert! How are you? Here with 7 cm of snow :) while in Argentina it is like 40° hahaha. I have a question regarding large objects. I know that some client libraries support large objects using the same functions used for normal datatypes and that there are others that use special
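
The "same functions as for normal datatypes" case mentioned here means the ordinary query/row/field calls deliver the whole value as one field. A minimal sketch of that path, assuming an already-bound odbx_t* handle and the test table from this thread:

#include <stdio.h>
#include <string.h>
#include <odbx.h>

/* Fetch the CLOB column with the ordinary row/field calls only; the complete
 * value arrives as a single field, so no special large-object handling is
 * needed on backends that work this way. */
int print_clob_lengths( odbx_t* handle )
{
    const char* stmt = "SELECT large_clob FROM test_large_columns";
    if( odbx_query( handle, stmt, strlen( stmt ) ) < 0 )
        return -1;

    odbx_result_t* result;
    while( odbx_result( handle, &result, NULL, 0 ) > 0 )   /* chunk 0: no paging */
    {
        while( odbx_row_fetch( result ) > 0 )
            printf( "large_clob length: %lu bytes\n",
                    odbx_field_length( result, 0 ) );

        odbx_result_finish( result );
    }
    return 0;
}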

Re: [opendbx] About large objects

2010-02-11 Thread Norbert Sendetzky
Hi Mariano Hi Norbert! How are you? Here with 7 cm of snow :) Nice. Is that already more than you would like to have? ;-) 1) Does this depend just on the backend, or also on the OS? Is it possible that for a particular client library it behaves differently on different OSes, like the