Re: [HACKERS] Large Objects versus transactional behavior

2011-05-12 Thread YAMAMOTO Takashi
hi,

> On Sat, Apr 30, 2011 at 2:58 PM, Kevin Grittner
>  wrote:
>> This is related to the "SIREAD lock versus ACCESS EXCLUSIVE lock"
>> thread, but seemed different enough to merit spinning off a new
>> thread.
>>
>> Our shop hasn't used large objects so far because of the lack of
>> security (until 9.1), so I never noticed the rather unusual
>> transactional semantics of large objects.  From the devel
>> documentation:
>>
>> http://developer.postgresql.org/pgdocs/postgres/lo-interfaces.html#LO-OPEN
>>
>> | [...] with INV_READ you cannot write on the descriptor, and the
>> | data read from it will reflect the contents of the large object at
>> | the time of the transaction snapshot that was active when lo_open
>> | was executed, regardless of later writes by this or other
>> | transactions. Reading from a descriptor opened with INV_WRITE
>> | returns data that reflects all writes of other committed
>> | transactions as well as writes of the current transaction. This is
>> | similar to the behavior of REPEATABLE READ versus READ COMMITTED
>> | transaction modes for ordinary SQL SELECT commands.

As a novice user who has been annoyed by them, I'm curious about
the rationale for the unusual semantics.
Is there any chance of "just" making large objects obey the normal
semantics in the future?

YAMAMOTO Takashi



Re: [HACKERS] Large Objects versus transactional behavior

2011-05-12 Thread Kevin Grittner
YAMAMOTO Takashi  wrote:
 
> Is there any chance of "just" making large objects obey the normal
> semantics in the future?
 
I sure hope so, but I have no idea how hard that is.  I feel the same
about TRUNCATE TABLE now that I recognize the semantic difference
between it and DELETE FROM with no WHERE clause.
 
For this release, though, I don't think it would be sane to try to
make them comply with transactional semantics.  And, as I mentioned,
Serializable Snapshot Isolation can only make serializable that which
conforms to the semantics of snapshot isolation.
 
-Kevin



Re: [HACKERS] Large Objects versus transactional behavior

2011-05-06 Thread Robert Haas
On Sat, Apr 30, 2011 at 2:58 PM, Kevin Grittner
 wrote:
> This is related to the "SIREAD lock versus ACCESS EXCLUSIVE lock"
> thread, but seemed different enough to merit spinning off a new
> thread.
>
> Our shop hasn't used large objects so far because of the lack of
> security (until 9.1), so I never noticed the rather unusual
> transactional semantics of large objects.  From the devel
> documentation:
>
> http://developer.postgresql.org/pgdocs/postgres/lo-interfaces.html#LO-OPEN
>
> | [...] with INV_READ you cannot write on the descriptor, and the
> | data read from it will reflect the contents of the large object at
> | the time of the transaction snapshot that was active when lo_open
> | was executed, regardless of later writes by this or other
> | transactions. Reading from a descriptor opened with INV_WRITE
> | returns data that reflects all writes of other committed
> | transactions as well as writes of the current transaction. This is
> | similar to the behavior of REPEATABLE READ versus READ COMMITTED
> | transaction modes for ordinary SQL SELECT commands.
>
> Since Serializable Snapshot Isolation can only serialize behavior
> that works within the semantics of snapshot isolation, it
> doesn't seem like SSI has any chance of serializing access to the
> contents of a large object while the current behavior stands.
> Modifications to the *references* to large objects within the bodies
> of normal tables are properly tracked by SSI, but no predicate locks
> are taken on the large object contents themselves, nor would
> modifications to the contents be able to generate a rw-conflict
> between transactions.
>
> In other words, I don't think there is any action item here for SSI
> in terms of C code for 9.1, but we may want to mention the unusual
> transaction-related behavior of large objects within the Concurrency
> Control chapter of the docs.
>
> Comments?

Well, in the long run, I think serializability ought to apply to large
objects along with everything else.  But documenting it seems like a
reasonable approach for now.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



[HACKERS] Large Objects versus transactional behavior

2011-04-30 Thread Kevin Grittner
This is related to the "SIREAD lock versus ACCESS EXCLUSIVE lock"
thread, but seemed different enough to merit spinning off a new
thread.
 
Our shop hasn't used large objects so far because of the lack of
security (until 9.1), so I never noticed the rather unusual
transactional semantics of large objects.  From the devel
documentation:
 
http://developer.postgresql.org/pgdocs/postgres/lo-interfaces.html#LO-OPEN
 
| [...] with INV_READ you cannot write on the descriptor, and the
| data read from it will reflect the contents of the large object at
| the time of the transaction snapshot that was active when lo_open
| was executed, regardless of later writes by this or other
| transactions. Reading from a descriptor opened with INV_WRITE
| returns data that reflects all writes of other committed
| transactions as well as writes of the current transaction. This is
| similar to the behavior of REPEATABLE READ versus READ COMMITTED
| transaction modes for ordinary SQL SELECT commands.
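
To make the quoted behavior concrete, here is a minimal hypothetical
sketch (conninfo and lobj_oid are placeholders, and the large object is
assumed to already exist; none of this is from the thread itself):

#include <stdio.h>
#include <libpq-fe.h>
#include <libpq/libpq-fs.h>   /* INV_READ, INV_WRITE */

void show_visibility(const char *conninfo, Oid lobj_oid)
{
    PGconn *conn = PQconnectdb(conninfo);
    char    buf[16];

    PQclear(PQexec(conn, "BEGIN"));

    /* Read-only descriptor: pinned to the snapshot taken at lo_open. */
    int fd_ro = lo_open(conn, lobj_oid, INV_READ);
    /* Read-write descriptor: sees committed writes and our own writes. */
    int fd_rw = lo_open(conn, lobj_oid, INV_READ | INV_WRITE);

    /* Write through the read-write descriptor ... */
    lo_write(conn, fd_rw, "new", 3);

    /* ... the read-only descriptor still returns the contents as of its
     * lo_open, per the documentation quoted above ... */
    int n_ro = lo_read(conn, fd_ro, buf, sizeof buf);

    /* ... while re-reading through the read-write descriptor reflects
     * the write just made. */
    lo_lseek(conn, fd_rw, 0, SEEK_SET);
    int n_rw = lo_read(conn, fd_rw, buf, sizeof buf);

    printf("read %d bytes via INV_READ, %d bytes via INV_WRITE\n", n_ro, n_rw);

    lo_close(conn, fd_ro);
    lo_close(conn, fd_rw);
    PQclear(PQexec(conn, "COMMIT"));
    PQfinish(conn);
}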
 
Since Serializable Snapshot Isolation can only serialize behavior
that works within the semantics of snapshot isolation, it
doesn't seem like SSI has any chance of serializing access to the
contents of a large object while the current behavior stands. 
Modifications to the *references* to large objects within the bodies
of normal tables are properly tracked by SSI, but no predicate locks
are taken on the large object contents themselves, nor would
modifications to the contents be able to generate a rw-conflict
between transactions.
 
In other words, I don't think there is any action item here for SSI
in terms of C code for 9.1, but we may want to mention the unusual
transaction-related behavior of large objects within the Concurrency
Control chapter of the docs.
 
Comments?
 
-Kevin



Re: [HACKERS] Large objects.

2010-09-27 Thread Robert Haas
On Mon, Sep 27, 2010 at 2:25 PM, Dmitriy Igrishin  wrote:
> Hey Robert, Tom
>
> Tom, thank you for the explanation!
>
>> Ouch.  Letting people write data to where they can't get it back from
>> seems double-plus ungood.
>>
> Robert, yes, I agree with you. This is exactly what I wanted to say.
> I've implemented a stream class in C++ and this circumstance makes
> the code not so clean because I need to take into account the behavior
> of lo_write() and the 2GB limit.

On further examination, it appears we're not doing this.  The reason
lo_read wasn't returning any data in your earlier example is because
you called it after seeking to the end of the object.  If you seek to
the position where the data was written, it works fine.
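
For anyone following along, a hypothetical fragment (not the attached
d.c, and assuming c and id as in the earlier test case) showing "seek
to the position where the data was written":

#include <stdio.h>
#include <libpq-fe.h>
#include <libpq/libpq-fs.h>

void read_back(PGconn *c, Oid id)
{
    char buf[16];

    PQclear(PQexec(c, "BEGIN"));
    int fd = lo_open(c, id, INV_READ);

    /* lo_lseek takes an int32 offset, so reach offset 2^31 in two steps,
     * the same way the test case did when writing. */
    lo_lseek(c, fd, 2147483647, SEEK_SET);
    lo_lseek(c, fd, 1, SEEK_CUR);

    int n = lo_read(c, fd, buf, 4);   /* expected to return the 4 bytes written there */
    printf("read %d bytes\n", n);

    lo_close(c, fd);
    PQclear(PQexec(c, "COMMIT"));
}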

A fairly plausible argument could be made that we shouldn't allow
reading or writing past 2^31-1, but it now appears to me that the
behavior is at least self-consistent.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company



Re: [HACKERS] Large objects.

2010-09-27 Thread Dmitriy Igrishin
Hey Robert, Tom

Tom, thank you for the explanation!

> Ouch.  Letting people write data to where they can't get it back from
> seems double-plus ungood.

Robert, yes, I agree with you. This is exactly what I wanted to say.
I've implemented a stream class in C++ and this circumstance makes
the code not so clean because I need to take into account the behavior
of lo_write() and the 2GB limit.

-- 
// Dmitriy.


Re: [HACKERS] Large objects.

2010-09-27 Thread Robert Haas
On Mon, Sep 27, 2010 at 10:50 AM, Tom Lane  wrote:
> Robert Haas  writes:
>> According to the documentation, the maximum size of a large object is
>> 2 GB, which may be the reason for this behavior.
>
> In principle, since pg_largeobject stores an integer pageno, we could
> support large objects of up to LOBLKSIZE * 2^31 bytes = 4TB without any
> incompatible change in on-disk format.  This'd require converting a lot
> of the internal LO access logic to track positions as int64 not int32,
> but now that we require platforms to have working int64 that's no big
> drawback.  The main practical problem is that the existing lo_seek and
> lo_tell APIs use int32 positions.  I'm not sure if there's any cleaner
> way to deal with that than to add "lo_seek64" and "lo_tell64" functions,
> and have the existing ones throw error if asked to deal with positions
> past 2^31.
>
> In the particular case here, I think that lo_write may actually be
> writing past the 2GB boundary, while the coding in lo_read is a bit
> different and stops at the 2GB "limit".

Ouch.  Letting people write data to where they can't get it back from
seems double-plus ungood.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company



Re: [HACKERS] Large objects.

2010-09-27 Thread Tom Lane
Robert Haas  writes:
> According to the documentation, the maximum size of a large object is
> 2 GB, which may be the reason for this behavior.

In principle, since pg_largeobject stores an integer pageno, we could
support large objects of up to LOBLKSIZE * 2^31 bytes = 4TB without any
incompatible change in on-disk format.  This'd require converting a lot
of the internal LO access logic to track positions as int64 not int32,
but now that we require platforms to have working int64 that's no big
drawback.  The main practical problem is that the existing lo_seek and
lo_tell APIs use int32 positions.  I'm not sure if there's any cleaner
way to deal with that than to add "lo_seek64" and "lo_tell64" functions,
and have the existing ones throw error if asked to deal with positions
past 2^31.
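
(For reference, a quick back-of-the-envelope check, assuming the default
BLCKSZ of 8192: LOBLKSIZE is BLCKSZ/4 = 2048 bytes per LO page, so
2048 * 2^31 = 2^11 * 2^31 = 2^42 bytes = 4TB, matching the figure above.)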

In the particular case here, I think that lo_write may actually be
writing past the 2GB boundary, while the coding in lo_read is a bit
different and stops at the 2GB "limit".

regards, tom lane



Re: [HACKERS] Large objects.

2010-09-27 Thread Robert Haas
On Sun, Sep 26, 2010 at 12:21 PM, Dmitriy Igrishin  wrote:
> Yes, I am sure. I've tested it with the test case in my original post.
> Could you compile and reproduce it, please?

I think the reason lo_read is returning 0 is because it's not reading
anything.  See attached test case, cleaned up a bit from yours and
with some error checks added.

According to the documentation, the maximum size of a large object is
2 GB, which may be the reason for this behavior.

http://www.postgresql.org/docs/9/static/lo-intro.html

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company


[Attachment: d.c]



Re: [HACKERS] Large objects.

2010-09-26 Thread Dmitriy Igrishin
Hey Robert,

Yes, I am sure. I've tested it with the test case in my original post.
Could you compile and reproduce it, please?

-- 
// Dmitriy.


Re: [HACKERS] Large objects.

2010-09-26 Thread Robert Haas
On Fri, Sep 24, 2010 at 9:13 AM, Dmitriy Igrishin  wrote:
> Tell me please, why does lo_write() return the number of bytes "actually
> written" when the current write location is beyond 2GB?  IMO, in this
> case it should return at least zero.
> lo_read() returns zero in this case, and that is correct, IMO.

Hmm, are you sure?  If the behavior of lo_read and lo_write is not
symmetric, that's probably not good, but I don't see anything obvious
in the code to make me think that's the case.  Returning 0 for a value
>= 2^31 seems problematic unless there is no possibility of a short
read (or write).

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company



[HACKERS] Large objects.

2010-09-24 Thread Dmitriy Igrishin
Hey all,

Here is simple test case of LOB usage, please, note the comments:

#include <stdio.h>            /* SEEK_SET, SEEK_CUR, SEEK_END */
#include <libpq-fe.h>         /* libpq connection and lo_* functions */
#include <libpq/libpq-fs.h>   /* INV_READ, INV_WRITE; header names assumed, the originals were lost */

int main(int argc, char* argv[])
{
  PGconn* c = PQconnectdb("password=test");

  PGresult* r = PQexec(c, "BEGIN");
  PQclear(r);

  const unsigned int id = lo_create(c, 0);

  int fd1 = lo_open(c, id, INV_READ | INV_WRITE);
  int nBytes = lo_write(c, fd1, "D", 1);
  int fd1Pos = lo_lseek(c, fd1, 2147483647, SEEK_SET);
  fd1Pos = lo_lseek(c, fd1, 1, SEEK_CUR);
  nBytes = lo_write(c, fd1, "Dima", 4); // nBytes == 4 ! Should be 0, IMO.
                                        // If not, where will my name
                                        // be written?

  r = PQexec(c, "COMMIT");
  PQclear(r);

  r = PQexec(c, "BEGIN");
  PQclear(r);

  fd1 = lo_open(c, id, INV_READ | INV_WRITE);
  fd1Pos =  lo_lseek(c, fd1, 0, SEEK_END); // fd1Pos == -2147483647 !

  char buf[16];
  nBytes = lo_read(c, fd1, buf, 4); // nBytes == 0 ! Correct, IMO.

  r = PQexec(c, "COMMIT");
  PQclear(r);

  return 0;
}

Tell me please, why does lo_write() return the number of bytes "actually
written" when the current write location is beyond 2GB?  IMO, in this case
it should return at least zero.
lo_read() returns zero in this case, and that is correct, IMO.

-- 
Regards,
Dmitriy


Re: [HACKERS] Large objects through ODBS

2004-11-30 Thread Joshua D. Drake
Bojidar Mihajlov wrote:
Hi all!
I need to operate with large objects through ODBC in
C/C++ program. How can I do that?
 

Look at the contrib lo data type.
Sincerely,
Joshua D. Drake

		
__ 
Do you Yahoo!? 
Meet the all-new My Yahoo! - Try it today! 
http://my.yahoo.com 


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly
 


--
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
Postgresql support, programming shared hosting and dedicated hosting.
+1-503-667-4564 - [EMAIL PROTECTED] - http://www.commandprompt.com
PostgreSQL Replicator -- production quality replication for PostgreSQL




[HACKERS] Large objects through ODBS

2004-11-30 Thread Bojidar Mihajlov
Hi all!
I need to operate with large objects through ODBC in
C/C++ program. How can I do that?



__ 
Do you Yahoo!? 
Meet the all-new My Yahoo! - Try it today! 
http://my.yahoo.com 
 




Re: [GENERAL] [HACKERS] large objects

2003-06-22 Thread Bruce Momjian

Add to TODO:

* Allow SSL-enabled clients to turn off SSL transfers


---

Tom Lane wrote:
> "Nigel J. Andrews" <[EMAIL PROTECTED]> writes:
> > Ok, I tried to try this but I can not get SSL to _not_ be used when
> > connecting via any tcp connection, unless the client hasn't been built
> > with ssl support of course.
> 
> Yeah, I believe that's the way it works.  It seems overly fascist of the
> SSL code to not have the option to turn it off ... but that's how libpq
> is set up at the moment.
> 
>   regards, tom lane

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073



Re: [GENERAL] [HACKERS] large objects

2003-06-09 Thread Tom Lane
"Nigel J. Andrews" <[EMAIL PROTECTED]> writes:
> Ok, I tried to try this but I can not get SSL to _not_ be used when
> connecting via any tcp connection, unless the client hasn't been built
> with ssl support of course.

Yeah, I believe that's the way it works.  It seems overly fascist of the
SSL code to not have the option to turn it off ... but that's how libpq
is set up at the moment.

regards, tom lane



Re: [HACKERS] large objects

2003-06-08 Thread Nigel J. Andrews

Note, primary list address changed to -general, I'd suggest any followups
remove the -hackers, which I've left in just for 'closure'.


On Fri, 6 Jun 2003, Nigel J. Andrews wrote:

> On Fri, 6 Jun 2003, Tom Lane wrote:
> 
> > "Nigel J. Andrews" <[EMAIL PROTECTED]> writes:
> > > Now, I did a little bit of testing and when doing a \lo_export 
> > >  in psql connected via localhost a SIGPIPE is generated in
> > > write() in libc and psql quit, without printing any message to the
> > > terminal.  Perhaps interestingly
> > > the file that gets written is always 65536 bytes long.
> > 
> > Hm.  Are you using an SSL connection?  There are some known bugs in the
> > SSL support in 7.3.1.  It's supposed to be fixed in 7.3.3, though I've
> > not tried it myself.
> 
> Damn, yes I am, I noticed the notice when connecting but then didn't think
> anything of it. Thanks Tom, I'll check that later when I do have time
> (shouldn't have wasted the precious minutes joining the NULL != "" war).

Ok, I tried to try this but I can not get SSL to _not_ be used when connecting
via any tcp connection, unless the client hasn't been built with ssl support of
course. The pg_hba.conf has:

# TYPE  DATABASE  USER  IP-ADDRESS  IP-MASK          METHOD

local   all       all                                md5
host    all       all   127.0.0.1   255.255.255.255  md5

psql -U me -h localhost db

prints:

SSL connection (cipher: EDH-RSA-DES-CBC3-SHA, bits: 168)

psql -U me db

doesn't.

Am I losing my mind? Should I need hostssl on that second line of the config
before ssl is allowed? I did look at that code once a few weeks ago and vaguely
remember something about host and hostssl handling but can't remember the
details. Is this really a bug, even if only in the documentation, or have I got
completely the wrong end of the stick?


-- 
Nigel J. Andrews




Re: [HACKERS] large objects

2003-06-08 Thread Nigel J. Andrews


Repost just to add the information that this is [now] on 7.3.3, previously on
7.3.2.

Sorry for the noise of the incomplete previous message, although the email
lists seem to be very light this last week. Obviously the mail server is still
feeling under the weather, I presume many posts have been lost in a hole
somewhere much like another of my posts.


On Sun, 8 Jun 2003, Nigel J. Andrews wrote:

> 
> Note, primary list address changed to -general, I'd suggest any followups
> remove the -hackers, which I've left in just for 'closure'.
> 
> 
> On Fri, 6 Jun 2003, Nigel J. Andrews wrote:
> 
> > On Fri, 6 Jun 2003, Tom Lane wrote:
> > 
> > > "Nigel J. Andrews" <[EMAIL PROTECTED]> writes:
> > > > Now, I did a little bit of testing and when doing a \lo_export 
> > > >  in psql connected via localhost a SIGPIPE is generated in
> > > > write() in libc and psql quit, without printing any message to the
> > > > terminal.  Perhaps interestingly
> > > > the file that gets written is always 65536 bytes long.
> > > 
> > > Hm.  Are you using an SSL connection?  There are some known bugs in the
> > > SSL support in 7.3.1.  It's supposed to be fixed in 7.3.3, though I've
> > > not tried it myself.
> > 
> > Damn, yes I am, I noticed the notice when connecting but then didn't think
> > anything of it. Thanks Tom, I'll check that later when I do have time
> > (shouldn't have wasted the precious minutes joining the NULL != "" war).
> 
> Ok, I tried to try this but I can not get SSL to _not_ be used when connecting
> via any tcp connection, unless the client hasn't been built with ssl support of
> course. The pg_hba.conf has:
> 
> # TYPE  DATABASE  USER  IP-ADDRESS  IP-MASK          METHOD
> 
> local   all       all                                md5
> host    all       all   127.0.0.1   255.255.255.255  md5
> 
> psql -U me -h localhost db
> 
> prints:
> 
> SSL connection (cipher: EDH-RSA-DES-CBC3-SHA, bits: 168)
> 
> psql -U me db
> 
> doesn't.
> 
> Am I losing my mind? Should I need hostssl on that second line of the config
> before ssl is allowed? I did look at that code once a few weeks ago and vaguely
> remember something about host and hostssl handling but can't remember the
> details. Is this really a bug, even if only in the documentation, or have I got
> completely the wrong end of the stick?
> 

-- 
Nigel J. Andrews




Re: [HACKERS] large objects

2003-06-08 Thread Tom Lane
"Nigel J. Andrews" <[EMAIL PROTECTED]> writes:
> Now, I did a little bit of testing and when doing a \lo_export 
>  in psql connected via localhost a SIGPIPE is generated in
> write() in libc and psql quit, without printing any message to the
> terminal.  Perhaps interestingly
> the file that gets written is always 65536 bytes long.

Hm.  Are you using an SSL connection?  There are some known bugs in the
SSL support in 7.3.1.  It's supposed to be fixed in 7.3.3, though I've
not tried it myself.

regards, tom lane



Re: [HACKERS] large objects

2003-06-08 Thread Nigel J. Andrews
On Fri, 6 Jun 2003, Tom Lane wrote:

> "Nigel J. Andrews" <[EMAIL PROTECTED]> writes:
> > Now, I did a little bit of testing and when doing a \lo_export 
> >  in psql connected via localhost a SIGPIPE is generated in
> > write() in libc and psql quit, without printing any message to the
> > terminal.  Perhaps interestingly
> > the file that gets written is always 65536 bytes long.
> 
> Hm.  Are you using an SSL connection?  There are some known bugs in the
> SSL support in 7.3.1.  It's supposed to be fixed in 7.3.3, though I've
> not tried it myself.

Damn, yes I am, I noticed the notice when connecting but then didn't think
anything of it. Thanks Tom, I'll check that later when I do have time
(shouldn't have wasted the precious minutes joining the NULL != "" war).


--
Nigel Andrews





[HACKERS] large objects

2003-06-06 Thread Nigel J. Andrews


I realise large objects aren't liked anymore and that the wire protocol is
changing in 7.4, but I've just posted this message into the PHP documentation:

-- begin
Using php 4.3.0 and PostgreSQL 7.3.1

I can write a simple script in which pg_lo_write seems to always return 1
and not the number of bytes written, as evidenced by extracting the data
through another means.

Furthermore, I can make this pg_lo_write fail, or at least fail to write all
the data (it's pretty difficult to tell without the number of bytes written
being returned), and not return the false value. In addition to this, the lo
resource has been adjusted so that the oid it contains is 0.

Unfortunately, I do not know exactly what the failure mode is; it does seem
to be in the IP network communication side of PostgreSQL, which is odd since
the unix domain comms work fine for this. However, it would have been useful
to have the pg_lo_write() function return as advertised; it would have saved
some of the 2 man-hours the dev. team and I put into diagnosing this problem.
-- end

Now, I did a little bit of testing and when doing a \lo_export  
in psql connected via localhost a SIGPIPE is generated in write() in libc and
psql quit, without printing any message to the terminal. Perhaps interestingly
the file that gets written is always 65536 bytes long. The server log shows:

2003-06-05 14:24:02 LOG:  query: select proname, oid from pg_proc
    where proname = 'lo_open' or proname = 'lo_close'
       or proname = 'lo_creat' or proname = 'lo_unlink'
       or proname = 'lo_lseek' or proname = 'lo_tell'
       or proname = 'loread' or proname = 'lowrite'
2003-06-05 14:24:02 LOG:  duration: 0.002924 sec
2003-06-05 14:24:03 LOG:  pq_recvbuf: recv() failed: Success

fwiw.

The last 4 bytes saved and next 16 bytes that should follow are:

00fffc 74 f3 5f ff d0 d1 c6 b3 eb bb 01 26 d6 b3 51 a9
01000c 68 e5 70 54

Of course the failure point could be way past there, but I thought it worth
including on the off chance.

When I do the same except allowing psql to connect through the unix domain
socket it works. The right number of bytes are returned and cmp shows no
differences to the original.

So, a) is this known? b) what is it? c) is it not going to happen in the new
protocol? and d) does anyone care?


-- 
Nigel J. Andrews




Re: [HACKERS] Large objects and ecpg

2001-09-19 Thread Michael Meskes

On Wed, Sep 19, 2001 at 12:46:03AM +0530, Chamanya wrote:
> Can I use ecpg with large objects? All examples in documentation are for 
> libpq.

Yes and no. Since ECPG uses libpq it should not be too difficult to use the
lo functions too. But there is no way to use them via some EXEC SQL
statements.

Any idea what these statements should look like? It would be a good idea to
implement.
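
For the record, a hypothetical sketch of the libpq route (lo.pgc).  It
assumes ecpglib exposes the underlying PGconn via ECPGget_PGconn()
(present in later releases; treating it as available here is an
assumption) and uses a placeholder database name:

#include <stdio.h>
#include <libpq-fe.h>
#include <libpq/libpq-fs.h>
#include <ecpglib.h>

int main(void)
{
    PGconn *conn;
    Oid     oid;
    int     fd;

    EXEC SQL CONNECT TO testdb AS con1;

    /* Borrow the connection ECPG opened and use plain libpq from here on. */
    conn = ECPGget_PGconn("con1");

    /* The lo_* calls must all happen inside one transaction. */
    PQclear(PQexec(conn, "BEGIN"));
    oid = lo_creat(conn, INV_READ | INV_WRITE);
    fd  = lo_open(conn, oid, INV_WRITE);
    lo_write(conn, fd, "hello", 5);
    lo_close(conn, fd);
    PQclear(PQexec(conn, "COMMIT"));

    printf("created large object %u\n", oid);

    EXEC SQL DISCONNECT con1;
    return 0;
}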

Michael
-- 
Michael Meskes
[EMAIL PROTECTED]
Go SF 49ers! Go Rhein Fire!
Use Debian GNU/Linux! Use PostgreSQL!




[HACKERS] Large objects and ecpg

2001-09-18 Thread Chamanya

Hi all,

Can I use ecpg with large objects? All examples in documentation are for 
libpq.

 Thanks

 Shridhar

_
Do You Yahoo!?
Get your free @yahoo.com address at http://mail.yahoo.com





[HACKERS] Large objects and table deletion.

2001-07-22 Thread Grant

When I delete a table that has an OID, the OID does not get deleted,
correct? How can I delete the data from the large object?





Re: [HACKERS] large objects dump

2001-06-05 Thread Denis Perchine

Hi,

> I had PostgreSQL 7.0.3 (7.1 now) and one nice day I noticed that a large
> number of my BLOBs were broken! Although they seem to have good content
> in the file system (xinv[0-9]+ files), I was not able to get them via
> lo_export... After spending some time trying to fix it, I decided to write
> my own xinv2plainfile converter. I hope that if someone has the same trouble
> this converter will help them.
> Just compile it, put it in the dir with your xinv[0-9]+ files and run it.
> It will create new files with names equal to the BLOB id from the appropriate xinv.

Either use 7.1.x, or apply my patch to 7.0.3. And you will have no such 
problems at all. :-))

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--




[HACKERS] large objects dump

2001-06-05 Thread Maks N. Polunin

Hello, All!

I had PostgreSQL 7.0.3 (7.1 now) and one nice day I noticed that a large
number of my BLOBs were broken! Although they seem to have good content
in the file system (xinv[0-9]+ files), I was not able to get them via
lo_export... After spending some time trying to fix it, I decided to write
my own xinv2plainfile converter. I hope that if someone has the same trouble
this converter will help them.
Just compile it, put it in the dir with your xinv[0-9]+ files and run it.
It will create new files with names equal to the BLOB id from the appropriate xinv.
--- xinv2plainfile.c ---
#include "sys/types.h"
#include "dirent.h"
#include "stdio.h"
#include "string.h"
 
#define BLCKSIZE 8192
#define HPT_LEN 40
 
#define DEBUG
//#undef DEBUG
 
typedef unsigned short uint16;
typedef unsigned int uint32;
 
typedef struct ItemIdData
{
  unsigned lp_off:15,
lp_flags:2,
lp_len:15;
} ItemIdData;

typedef struct PageHeaderData
{
  uint16 pd_lower;
  uint16 pd_upper;
  uint16 pd_special;
  uint16 pd_opaque; //page size
  ItemIdData pd_linp[1];
} PageHeaderData;
 
int
extract(const char * filename)
{
  FILE * infile;
  FILE * outfile;
  ItemIdData linp;
  PageHeaderData* pg_head;
  char buff[BLCKSIZE];
  char data[BLCKSIZE];
  int tuple_no;

  // open the output file; if it is already present, overwrite it!
  if ((outfile = fopen(filename + 4, "w")) == NULL)
    return -1;

  // open the input file
  if ((infile = fopen(filename, "r")) == NULL)
    return -1;
 
  while (fread(buff, BLCKSIZE, 1, infile))
    {
      pg_head = (PageHeaderData*)buff;
#ifdef DEBUG
      printf("Page data: pd_lower=%d, pd_upper=%d, pd_special=%d, pd_opaque=%d\n",
             pg_head->pd_lower, pg_head->pd_upper, pg_head->pd_special,
             pg_head->pd_opaque);
#endif

      for (tuple_no = 0; pg_head->pd_linp[tuple_no].lp_len; ++tuple_no)
        {
          linp = pg_head->pd_linp[tuple_no];
          memcpy(data, buff + linp.lp_off + HPT_LEN, linp.lp_len - HPT_LEN);
          data[linp.lp_len - HPT_LEN] = 0;
#ifdef DEBUG
          printf("Tuple %d: off=%d,\tflags=%d,\tlen=%d\n",
                 tuple_no, linp.lp_off, linp.lp_flags, linp.lp_len);
          printf("Data:\n%s\n--\n", data);
#endif
          fprintf(outfile, "%s", data);
        }
    }
  fclose(infile);
  fclose(outfile);
  return 0;
}

int
main(void)
{
  DIR * curdir;
  struct dirent * curdirentry;
 
  //open current directory
  curdir = opendir(".");
  if (curdir == NULL)
    {
      printf("Cannot open curdir!!!\n");
      return -1;
    }

  //search through curdir for files 'xinv[0-9]+'
  while ((curdirentry = readdir(curdir)) != NULL)
  {
if (strstr(curdirentry->d_name, "xinv") != curdirentry->d_name)
  continue;
//found entry with name begining with xinv.
//let's hope this is what we are looking for :)
printf("Trying to extract file '%s'... ", curdirentry->d_name);
if (extract(curdirentry->d_name))
  printf("failed\n");
else
  printf("successed\n");
  }
 
  return 0;
}

---
With Best Regards,
  Maks N. Polunin.
Brainbench: http://www.brainbench.com/transcript.jsp?pid=111472
ICQ#:18265775

