[GENERAL] Blobs
I was wondering if anyone had any working sample code of inserting a blob into a table and then retrieving one from a table for viewing? I'm using Postgres 8.2, the jdbc is postgresql-8.2-504.jdbc3, and the Java is 1.6. I'm also running on a Windows XP Pro box if that matters. Thanks, Marc
Re: [GENERAL] Blobs
Marc Horvath, 14.03.2008 12:35, asked for working sample code to insert a blob into a table and retrieve it again (Postgres 8.2, postgresql-8.2-504.jdbc3, Java 1.6). You should probably have posted that to the JDBC mailing list. I am using the following code successfully:

    PreparedStatement stmt = connection.prepareStatement(
        "insert into blob_table (id, blob_data) values (?, ?)");
    File file = new File("pretty_image.jpg");
    FileInputStream in = new FileInputStream(file);
    stmt.setInt(1, 42);
    stmt.setBinaryStream(2, in, (int) file.length());
    stmt.executeUpdate();
    connection.commit();

Note that the column blob_data is defined as bytea in Postgres. Regards Thomas -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
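Thomas's code covers the insert half; the retrieval half that Marc asked about can be sketched along the same lines. The blob_table/blob_data names are carried over from the insert example above, and the readBlob method is an untested sketch (it needs a live server), so the main method only exercises the stream-copy helper:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class BlobRead {
    // Drain an InputStream into an OutputStream in 8 KB chunks.
    static void copy(InputStream in, OutputStream out) throws IOException {
        byte[] buf = new byte[8192];
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
        }
    }

    // Fetch the bytea column for one row and return its bytes, or null if
    // no row matched. Table/column names follow the insert example.
    static byte[] readBlob(Connection conn, int id) throws Exception {
        PreparedStatement stmt = conn.prepareStatement(
                "SELECT blob_data FROM blob_table WHERE id = ?");
        stmt.setInt(1, id);
        ResultSet rs = stmt.executeQuery();
        byte[] data = null;
        if (rs.next()) {
            InputStream in = rs.getBinaryStream(1);
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            copy(in, out);
            data = out.toByteArray();
        }
        rs.close();
        stmt.close();
        return data;
    }

    public static void main(String[] args) throws Exception {
        // No database at hand in a standalone example, so just exercise
        // the copy loop that readBlob relies on.
        byte[] original = "pretend image bytes".getBytes("UTF-8");
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        copy(new ByteArrayInputStream(original), out);
        System.out.println(out.toByteArray().length == original.length
                ? "copy ok" : "copy broken");
    }
}
```

Once you have the bytes, writing them to a file or feeding them to an image viewer is ordinary Java I/O.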
Re: [GENERAL] Blobs in Postgresql
On 08/17/07 23:16, Merlin Moncure wrote, quoting Ron Olson's question about encryption at the database level: Well, my assumption was that you would encrypt the data on the client side and store it that way. PostgreSQL has an open architecture. If you wanted to do the encryption on the server, one possible approach that jumps out at me is to write a small C function which receives the data, encrypts the image using a key sent by the client (but not stored), and either stores the encrypted image back in the database via SPI or writes it out to a file. There are many strategies to encrypting data... the first thing to think about is where the encryption happens, where the keys are stored, etc. Client-side encryption is important, because with server-side encryption you are sending the Valuable Data across the wire (or, even worse, wireless!) in cleartext form. It's more likely that there's a packet sniffer on the network than an Evil DBA snooping around. -- Ron Johnson, Jr. Jefferson LA USA Give a man a fish, and he eats for a day. Hit him with a fish, and he goes away for good!
Re: [GENERAL] Blobs in Postgresql
Ron Johnson wrote: [quoted discussion of client- versus server-side encryption trimmed] It's more likely that there's a packet sniffer on the network than an Evil DBA snooping around.

The two options I see are:
1. the client encrypts the data and sends it to the DB;
2. the client uses an SSL connection to the server to prevent snooping and lets the DB encrypt for storage. I would suggest looking at pgcrypto in contrib for server-side encryption.

The main benefit I would see from the first is that it doesn't matter whether another DB admin changes the server security settings or not. The new guy may set up a new server and not enforce SSL connections. Of course, if the client refuses non-SSL connections you can prevent that. Either way the app provides the key to decrypt the data for viewing, so the developers, current and future, must maintain the security level you choose. What sort of security measures are taken for viewing the data? Will each user have a security certificate on their own USB flash drive to allow them to view the data? That could also prevent developers from accessing the data. Or is their password enough to allow the program to decrypt it for them? It would really come down to which encryption method you find easiest to implement that provides enough security for your needs. -- Shane Ambler [EMAIL PROTECTED] Get Sheeky @ http://Sheeky.Biz
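Shane's first option (the client encrypts the data and sends it to the DB) can be sketched with the standard javax.crypto API, which has shipped with Java since 1.4. This is only an illustration: AES in CBC mode with a random IV and a hard-coded demo key. A real evidence store would need proper key management; the ciphertext itself can go into an ordinary bytea column:

```java
import java.security.SecureRandom;
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class ClientSideCrypto {
    // Encrypt with a fresh random IV; prepend the IV to the ciphertext
    // so that decrypt() can recover it.
    static byte[] encrypt(byte[] plain, byte[] key) throws Exception {
        byte[] iv = new byte[16];
        new SecureRandom().nextBytes(iv);
        Cipher c = Cipher.getInstance("AES/CBC/PKCS5Padding");
        c.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"),
               new IvParameterSpec(iv));
        byte[] ct = c.doFinal(plain);
        byte[] out = new byte[16 + ct.length];
        System.arraycopy(iv, 0, out, 0, 16);
        System.arraycopy(ct, 0, out, 16, ct.length);
        return out;
    }

    static byte[] decrypt(byte[] blob, byte[] key) throws Exception {
        Cipher c = Cipher.getInstance("AES/CBC/PKCS5Padding");
        c.init(Cipher.DECRYPT_MODE, new SecretKeySpec(key, "AES"),
               new IvParameterSpec(Arrays.copyOfRange(blob, 0, 16)));
        return c.doFinal(Arrays.copyOfRange(blob, 16, blob.length));
    }

    public static void main(String[] args) throws Exception {
        byte[] key = "0123456789abcdef".getBytes("US-ASCII"); // 128-bit demo key
        byte[] image = "pretend image bytes".getBytes("UTF-8");
        byte[] stored = encrypt(image, key);   // this is what the DB would see
        byte[] back = decrypt(stored, key);
        System.out.println(Arrays.equals(back, image) ? "roundtrip ok" : "mismatch");
    }
}
```

With this approach the server (and any DBA or packet sniffer) only ever sees ciphertext, which is exactly the property Ron Johnson argues for above.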
Re: [GENERAL] Blobs in Postgresql
On Wednesday 2007-08-15 05:52, Gregory Stark wrote: Ron Olson [EMAIL PROTECTED] writes: Hi all- I am evaluating databases for use in a large project that will hold image data as blobs. I know, everybody says to just store pointers to files on the disk... Well not everyone. I usually do, but if you're not handling these blobs under heavy load independent of the database (like web servers) then either approach works. I've always wondered how you keep transactions working when you only store pointers to large data. Do you need an external transaction manager to ensure that the file doesn't get deleted when you delete the data via the pointer? Do you need an external application that handles all deletes, inserts, and updates?
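The thread never answers this question directly, so here is one common compensation pattern, sketched under the assumption that only the file path is stored in the database: write the file first, insert the row inside a transaction, and delete the file again if the insert fails; a periodic job comparing the directory against the table catches anything that slips through. The storeImage helper and its dbInsertSucceeds flag are hypothetical stand-ins for real JDBC calls:

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

public class FilePointerTxn {
    // Write the file, then "insert" the row that points at it. If the
    // insert/commit fails, compensate by deleting the file, so a rollback
    // never leaves a file without a row. (Deletes go the other way round:
    // delete the row first, remove the file only after the commit.)
    static boolean storeImage(File f, byte[] data, boolean dbInsertSucceeds)
            throws IOException {
        FileOutputStream out = new FileOutputStream(f);
        try {
            out.write(data);
        } finally {
            out.close();
        }
        // stand-in for "INSERT INTO images (path, ...) VALUES (...); COMMIT;"
        if (!dbInsertSucceeds) {
            f.delete();          // compensate: no row, so no file either
            return false;
        }
        return true;
    }

    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("img", ".jpg");
        f.delete();
        // Simulate a failed database insert and check the file is cleaned up.
        boolean ok = storeImage(f, new byte[] {1, 2, 3}, false);
        System.out.println(!ok && !f.exists() ? "compensated" : "orphan left behind");
        f.delete();
    }
}
```

This is weaker than a real transaction (a crash between the two steps can still orphan a file, hence the cleanup job), which is part of why storing the bytes in the database can be attractive despite the size concerns.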
Re: [GENERAL] Blobs in Postgresql
On 8/15/07, Ron Olson [EMAIL PROTECTED] wrote: [...] So turning to Postgresql, can I get any recommendations, suggestions and tips on blob handling in the database? The image sizes will be pretty variable, from a few kilobytes to several hundred megabytes, so I need something that will handle the various file sizes, hopefully transparently. For fast performance, you should make sure to use the parameterized interface and send in the results as binary (or use a language that accesses the database that way). I would be nervous about storing blobs if they were very large. Regarding the security issue, have you looked at encryption? merlin
Re: [GENERAL] Blobs in Postgresql
The language is Java. I've made some tests and they work very well for 25 MB files... works exactly the way it should, first time. MySQL had all kinds of nasty surprises for me when I first started working with blobs, but I can say that I took my code, changed the driver, and it all works like a champ (mind you, this was a quick test app). I haven't looked at encryption at the database level... is such a thing available? I know Oracle has some form of data encryption at the database level so the nefarious DBA with the wide mustache and black brimmed hat, always going "ah ha ha ha ha", can't make off with the data, but does Postgres have something similar? BTW, to put it into context, the database will be designed to hold evidence (well, photos and videos of it). Thus the compelling need for some security, as well as the variation in file sizes. On 8/17/07, Merlin Moncure [EMAIL PROTECTED] wrote: [...] For fast performance, you should make sure to use the parameterized interface and send in the results as binary (or use a language that accesses the database that way). I would be nervous about storing blobs if they were very large. Regarding the security issue, have you looked at encryption? merlin
Re: [GENERAL] Blobs in Postgresql
On 8/18/07, Ron Olson [EMAIL PROTECTED] wrote: [...] I haven't looked at encryption at the database level... is such a thing available? [...] BTW, to put it into context, the database will be designed to hold evidence (well, photos and videos of it). Thus the compelling need for some security, as well as the variation in file sizes. Well, my assumption was that you would encrypt the data on the client side and store it that way. PostgreSQL has an open architecture. If you wanted to do the encryption on the server, one possible approach that jumps out at me is to write a small C function which receives the data, encrypts the image using a key sent by the client (but not stored), and either stores the encrypted image back in the database via SPI or writes it out to a file. There are many strategies to encrypting data... the first thing to think about is where the encryption happens, where the keys are stored, etc. merlin
Re: [GENERAL] Blobs in Postgresql
Ron Olson [EMAIL PROTECTED] writes: Hi all- I am evaluating databases for use in a large project that will hold image data as blobs. I know, everybody says to just store pointers to files on the disk... Well not everyone. I usually do, but if you're not handling these blobs under heavy load independent of the database (like web servers) then either approach works. So turning to Postgresql, can I get any recommendations, suggestions and tips on blob handling in the database? The image sizes will be pretty variable, from a few kilobytes to several hundred megabytes, so I need something that will handle the various file sizes, hopefully transparently. There are basically two options. If you are not handling data that are too large to copy around in memory, and you don't need to upload and download the data in chunks (usually these are the same issue), then you can just store your images in a bytea. Postgres transparently treats *all* large variable-sized data, whether text, bytea, or arrays, like a blob: it stores it in a separate table outside the main table. If your data can sometimes be so large that you cannot manipulate the whole thing in memory all at once (keep in mind that Postgres expects to be able to handle a few copies of the data at the same time; conservatively expect 5 simultaneous copies to have to fit in memory), then you'll have to look into the large object interface, which is a set of functions starting with lo_*. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com
[GENERAL] Blobs in Postgresql
Hi all- I am evaluating databases for use in a large project that will hold image data as blobs. I know, everybody says to just store pointers to files on the disk...can't do it here...the images are of a confidential nature and access to the database (and resulting client app) will be highly restricted. The underlying platform will likely be Linux though Solaris x86-64 has been suggested as well. I did some tests with MySQL and found the results very sub-par...the standard blob field only holds 64k (they have three types of blobs for whatever reason) and the real problem is that my uploads and downloads have failed because of packet size issues...this can be solved somewhat with server settings, but I get the impression that blobs are barely supported. So turning to Postgresql, can I get any recommendations, suggestions and tips on blob handling in the database? The image sizes will be pretty variable, from a few kilobytes to several hundred megabytes, so I need something that will handle the various file sizes, hopefully transparently. Thanks for any info, Ron
Re: [GENERAL] Blobs in Postgresql
So turning to Postgresql, can I get any recommendations, suggestions and tips on blob handling in the database? The image sizes will be pretty variable, from a few kilobytes to several hundred megabytes, so I need something that will handle the various file sizes, hopefully transparently. PostgreSQL's BLOB implementation works well. We used it without any problems with images from 20K-30M. Regards Pavel Stehule
Re: [GENERAL] blobs
On Jul 4, 2007, at 11:34, Cesar Alvarez wrote: is there a way to store pictures or executables in postgres? From the PostgreSQL FAQ: 4.10) What is the difference between the various character types? http://www.postgresql.org/docs/faqs.FAQ.html#item4.10 From the PostgreSQL Documentation: 8.4. Binary Data Types http://www.postgresql.org/docs/8.2/interactive/datatype-binary.html Chapter 30. Large Objects http://www.postgresql.org/docs/8.2/interactive/largeobjects.html Hope this helps. Michael Glaesemann grzm seespotcode net
[GENERAL] blobs
hello everyone, is there a way to store pictures or executables in postgres? Cesar Alvarez.
Re: [GENERAL] blobs
On 04/07/2007 17:34, Cesar Alvarez wrote: is there a way to store pictures or executables in postgres?? http://www.postgresql.org/docs/8.2/static/datatype-binary.html --- Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland [EMAIL PROTECTED] ---
Re: [GENERAL] BLOBS : how to remove them totally
Thanks for your suggestion... but I face a new problem now. connection.setAutoCommit works well with PostgreSQL 7.3, but with PostgreSQL 7.4 I am getting the error: ERROR: SET AUTOCOMMIT TO OFF is no longer supported. I have tried pg73jdbc1.jar and pg73jdbc3.jar; both gave the same error. But in psql (7.4) the command \set AUTOCOMMIT off is working. Strangely, \set AUTOCOMMIT off actually changes the value of the AUTOCOMMIT internal variable, while typing \set autocommit off creates a second variable 'autocommit' and sets it to 'off', which does not change the autocommit mode at all. (The documentation does not mention any such uppercase/lowercase behaviour!) Regards N Banerjee --- Kris Jurka [EMAIL PROTECTED] wrote: [...] This is usually a symptom of not being in a transaction. Large objects need to be done inside a transaction. Try adding connection.setAutoCommit(false) somewhere in your code. Kris Jurka
Re: [GENERAL] BLOBS : how to remove them totally
Thanks a lot for the clue... Now I am comfortably handling the Large Objects through SQL. But unfortunately I could not extract this data to the frontend through Java. I tried two ways but got the same error after getting the data in a Blob or Large Object:

Error in connection == FastPath call returned ERROR: invalid large-object descriptor: 0

1) Process One:

    Blob myBlob = null;
    // then, for ResultSet rs:
    myBlob = rs.getBlob(1);

The error is returned by any statement which processes the Blob object, like:

    long myLength = myBlob.length();

2) Process Two: first the large object manager:

    LargeObjectManager lobj =
        ((org.postgresql.PGConnection) conn).getLargeObjectAPI();

then, in the while (rs.next()) loop:

    LargeObject obj = lobj.open(oid, LargeObjectManager.READ);
    InputStream input = new BufferedInputStream(obj.getInputStream());

The error is returned by any statement that processes the input, like writing to an output stream:

    int b = -1;
    while ((b = input.read()) != -1)
        outputStream.write(b);

I am puzzled... where is the mistake? The code does not complain when I get the value from the result set into an object, but it gives an error when I try to read the object. Regards Nilabhra Banerjee --- Tom Lane [EMAIL PROTECTED] wrote: Bernd Helmle [EMAIL PROTECTED] writes: Here you can find an excellent description of how BLOBs in PostgreSQL can be handled: http://www.varlena.com/varlena/GeneralBits/44.php That's a good discussion, but it left out at least one useful bit of info about managing large objects: there's a contrib utility (contrib/vacuumlo) that can find and remove large objects that are not referenced anywhere in the database. This is a good way to clean up if you've been using large objects without any of the automatic management techniques suggested in the GeneralBits article. regards, tom lane
Re: [GENERAL] BLOBS : how to remove them totally
On Sat, 20 Dec 2003, [iso-8859-1] Nilabhra Banerjee wrote: But unfortunately I could not extract this data to frontend thru java... I tried in two ways but got the same error...after getting the data in Blob or Large Object. Error in connection == FastPath call returned ERROR: invalid large-object descriptor: 0 This is usually a symptom of not being in a transaction. Large objects need to be done inside a transaction. Try adding connection.setAutoCommit(false) somewhere in your code. Kris Jurka
Re: [GENERAL] BLOBS : how to remove them totally
[EMAIL PROTECTED] says... That's a good discussion, but it left out at least one useful bit of info about managing large objects: there's a contrib utility (contrib/vacuumlo) that can find and remove large objects that are not referenced anywhere in the database. What is the URL for the contributed stuff? TIA. Paul... -- plinehan x__AT__x yahoo x__DOT__x com C++ Builder 5 SP1, Interbase 6.0.1.6 IBX 5.04 W2K Pro Please do not top-post.
Re: [GENERAL] BLOBs, pg_dump pg_restore
On Thu, 2 Oct 2003, Tom Lane wrote: Howard Lowndes [EMAIL PROTECTED] writes: On Wed, 1 Oct 2003, Tom Lane wrote: No, because pg_restore has logic to adjust the references to match the new BLOB OIDs. If you have a test case where this fails to work, let's see it ... No, I don't have any example, it is an enquiry. What I am reading into the above, however, is that the loid column in my table should have a CONSTRAINT REFERENCES clause to wherever in the system large objects table. Correct? No. No doubt if Postgres had had foreign keys when the large-object stuff was invented, it would have required such a constraint for LO references, but it didn't and it doesn't. The pg_restore code simply goes through all oid columns (and all lo columns if you've installed the contrib/lo datatype) and looks for matches to LO OIDs that existed in the dumped database. When it finds a match, it replaces that value with the new BLOB's OID. Simple, effective, crufty ... OK, I'm convinced, except for one small, but not insignificant hiccup. When you dump a database with the BLOBs, even with the -c option, and then restore that database again with the -c option, you get double the BLOBs. The original BLOBs are there as are the new copies, and the cross-referenced oids are updated. It looks as if there should be some way of removing the old BLOB at restore time once the new BLOB is in place. I don't know the detail of how pg_restore works, but it does create a table solely for the purpose of cross-referencing the oids. This of course means that each dump and subsequent restore doubles up on the BLOBs, and since BLOBs are by nature Large there could be disk space problems. -- Howard. LANNet Computing Associates - Your Linux people http://www.lannetlinux.com -- Flatter government, not fatter government - Get rid of the Australian states.
-- If all economists were laid end to end, they would not reach a conclusion - George Bernard Shaw
Re: [GENERAL] BLOBs, pg_dump pg_restore
Howard Lowndes [EMAIL PROTECTED] writes: OK, I'm convinced, except for one small, but not insignificant hiccup. When you dump a database with the BLOBs, even with the -c option, and then restore that database again with the -c option, you get double the BLOBs. [...] This of course means that each dump and subsequent restore doubles up on the BLOBs, and since BLOBs are by nature Large there could be disk space problems. If you blow away the database (DROP DATABASE) and recreate it before doing the restore, those LOs will be gone. If not, something is very wrong. pg_restore basically assumes a virgin database. If you just clear out the tables before the restore, you should also clear out the pg_largeobject table. It's not hard to keep garbage LOs from hanging around by putting an ON DELETE trigger on the referencing table. -Doug
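Doug's ON DELETE trigger suggestion can be sketched with the lo_manage() trigger function shipped in contrib/lo; the images table and loid column names are borrowed from Howard's schema elsewhere in the thread, and contrib/lo must be installed in the database first:

```sql
-- Sketch: lo_manage() (from contrib/lo) unlinks the old large object
-- whenever the referencing row is deleted or its loid value is replaced.
CREATE TRIGGER images_lo_cleanup
    BEFORE UPDATE OR DELETE ON images
    FOR EACH ROW
    EXECUTE PROCEDURE lo_manage(loid);
```

For orphans that already exist, contrib/vacuumlo (mentioned by Tom Lane elsewhere in the thread) can sweep them up after the fact.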
Re: [GENERAL] BLOBs, pg_dump pg_restore
On 2 Oct 2003, Doug McNaught wrote: [...] It's not hard to keep garbage LOs from hanging around by putting an ON DELETE trigger on the referencing table. Tks -- Howard. LANNet Computing Associates - Your Linux people http://www.lannetlinux.com
[GENERAL] BLOBs, pg_dump pg_restore
My situation is that I am interacting PHP 4.1.2 to PostgreSQL 7.2.2. I have no difficulty inserting and managing BLOBs in the Large Object system table, and I have a user table called images which maintains the relationship between the BLOB loid and the identity that relates to it in my user tables. So far so good. When I RTFM about psql it refers to the \lo_import, \lo_list, \lo_export and \lo_unlink functions. The syntax for the \lo_import function indicates that a comment may be appended to the BLOB entry in the large object system table. What is not mentioned is that this will only occur if psql is run as the PostgreSQL superuser. Now, my concern is that if I use pg_dump with the --clean or --create, and the --blobs options, and then try a pg_restore from the resulting archive file, I believe the BLOBs will take up a different loid to the one they came from, and hence the relation in my user table will be broken and I will not be able to relocate the BLOBs using my identifier in my images table. My other problem is that the various functions in PHP, namely the various pg_lo_* functions, do not appear to have the ability to include the comment option that is available to \lo_import under psql. I suppose one workaround, though not very elegant, would be to use under PHP something like `psql \lo_export known_file_name` whilst running through the records in the images table, and not to use the --blobs option under pg_dump, then use `psql \lo_import known_file_name` called from PHP to reload them after a pg_restore has been run, at the same time updating the loids in my images table. As I say, very inelegant. I guess this must be a shortfall in both PHP, in as much as it doesn't appear to handle BLOBs too cleanly, and PostgreSQL, in the way that it handles the description column in the large objects system table. Am I right or wrong, or is there a better workaround? -- Howard.
LANNet Computing Associates - Your Linux people http://www.lannetlinux.com
Re: [GENERAL] BLOBs, pg_dump pg_restore
On Wed, 1 Oct 2003, Joshua D. Drake wrote: Hello, We usually have a table called something like file_objects that contains information like the loid, content-type, filesize etc... that we reference. Yes, that's what I also have got: test= \d images Table images Column | Type | Modifiers ---+--+--- id| text | not null loid | oid | not null imagetype | integer | not null imagesize | integer | not null imagex| integer | not null imagey| integer | not null caption | text | timestamp | timestamp with time zone | not null and a sample if the data is: test= select * from images; id | loid | imagetype | imagesize | imagex | imagey | caption | timestamp +-+---+---+++-+--- 100732 | 2085885 | 2 | 27215 |576 |432 | Paint Job | 2003-10-01 09:47:01.254781+10 100732 | 2085887 | 2 | 36606 |500 |357 | Out of Africa | 2003-10-01 11:37:23.791189+10 100732 | 2085893 | 1 | 34958 | 54 |135 | An animated gif | 2003-10-01 22:26:24.63995+10 100732 | 2085895 | 3 | 45727 |523 |100 | A png image | 2003-10-01 22:30:44.0359+10 (4 rows) The BLOBs are: test= \lo_list Large objects ID| Description -+- 2085885 | 2085887 | 2085893 | 2085895 | (4 rows) My concern is the the relationship between id and loid in images will be lost by reason of a pg_dump -c -b and a subsequent pg_restore causing the BLOBs to locate into different loids. Is there some way of constraining loid in images to ID in Large Objects? Howard Lowndes wrote: My situation is that I am interacting PHP 4.1.2 to PostgreSQL 7.2.2 I have no difficulty inserting and managing BLOBs into the Large Object system table, and I have a user table called images which maintains the relationship between the BLOB loid and the identity that relates to it in my user tables. So far so good. When I RTFM obout psql it refers to the \lo_import, \lo_list, \lo_export and \lo_unlink functions. The syntax for the \lo_import function indicates that a comment may be appended to the BLOB entry in the large object system table. 
> What is not mentioned is that this will only occur if psql is run as the PostgreSQL superuser.
> Now, my concern is that if I use pg_dump with the --clean or --create, and the --blobs options, and then try a pg_restore from the resulting archive file, I believe the BLOBs will take up a different loid to the one they came from, and hence the relation in my user table will be broken and I will not be able to relocate the BLOBs using my identifier in my images table.
> My other problem is that the various functions in PHP, namely the various pg_lo_* functions, do not appear to have the ability to include the comment option that is available to \lo_import under psql.
> I suppose one workaround, though not very elegant, would be to use under PHP something like `psql \lo_export known_file_name` whilst running through the records in the images table, and not to use the --blobs option under pg_dump, then use `psql \lo_import known_file_name` called from PHP to reload them after a pg_restore has been run, at the same time updating the loids in my images table. As I say, very inelegant.
> I guess this must be a shortfall in both PHP, inasmuch as it doesn't appear to handle BLOBs too cleanly, and PostgreSQL, in the way that it handles the description column in the large objects system table.
> Am I right or wrong, or is there a better workaround?

--
Howard.
LANNet Computing Associates - Your Linux people
http://www.lannetlinux.com
--
"Flatter government, not fatter government" - Get rid of the Australian states.
--
"If all economists were laid end to end, they would not reach a conclusion" - George Bernard Shaw
Re: [GENERAL] BLOBs, pg_dump pg_restore
Howard Lowndes [EMAIL PROTECTED] writes:
> Now, my concern is that if I use pg_dump with the --clean or --create, and the --blobs options, and then try a pg_restore from the resulting archive file, I believe the BLOBs will take up a different loid to the one they came from, and hence the relation in my user table will be broken

No, because pg_restore has logic to adjust the references to match the new BLOB OIDs. If you have a test case where this fails to work, let's see it ...

> My other problem is that the various functions in PHP, namely the various pg_lo_* functions do not appear to have the ability to include the comment option that is available to \lo_import under psql.

psql is out on a limb claiming that LOs can have comments --- there's no support for that in the backend or any other client application. It's doing it by direct manual injection of entries into the pg_description system catalog, which is why superuser privilege is needed. It's a useful hack if you only use psql, but still a hack. Feel free to contribute a patch for backend COMMENT ON LARGE OBJECT support, if you'd like to see a better level of support for this.

			regards, tom lane
Re: [GENERAL] BLOBs, pg_dump pg_restore
On Wed, 1 Oct 2003, Tom Lane wrote:
> > Now, my concern is that if I use pg_dump with the --clean or --create, and the --blobs options, and then try a pg_restore from the resulting archive file, I believe the BLOBs will take up a different loid to the one they came from, and hence the relation in my user table will be broken
> No, because pg_restore has logic to adjust the references to match the new BLOB OIDs. If you have a test case where this fails to work, let's see it ...

No, I don't have an example; it is an enquiry. What I am reading into the above, however, is that the loid column in my table should have a CONSTRAINT REFERENCES clause to wherever the system large objects table is. Correct?

> > My other problem is that the various functions in PHP, namely the various pg_lo_* functions do not appear to have the ability to include the comment option that is available to \lo_import under psql.
> psql is out on a limb claiming that LOs can have comments --- there's no support for that in the backend or any other client application. It's doing it by direct manual injection of entries into the pg_description system catalog, which is why superuser privilege is needed. It's a useful hack if you only use psql, but still a hack. Feel free to contribute a patch for backend COMMENT ON LARGE OBJECT support, if you'd like to see a better level of support for this.

Sorry, way beyond my competency level.

--
Howard.
Re: [GENERAL] BLOBs, pg_dump pg_restore
Howard Lowndes [EMAIL PROTECTED] writes:
> On Wed, 1 Oct 2003, Tom Lane wrote:
> > No, because pg_restore has logic to adjust the references to match the new BLOB OIDs. If you have a test case where this fails to work, let's see it ...
> No, I don't have any example, it is an enquiry. What I am reading into the above however is that the loid column in my table should have a CONSTRAINT REFERENCES clause to wherever the system large objects table is. Correct?

No. No doubt if Postgres had had foreign keys when the large-object stuff was invented, it would have required such a constraint for LO references, but it didn't and it doesn't. The pg_restore code simply goes through all oid columns (and all lo columns, if you've installed the contrib/lo datatype) and looks for matches to LO OIDs that existed in the dumped database. When it finds a match, it replaces that value with the new BLOB's OID. Simple, effective, crufty ...

			regards, tom lane
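The fix-up pass Tom describes can be sketched in a few lines. This is a hypothetical simulation of the idea only, not pg_restore's actual code: given a map from the blob OIDs that existed in the dumped database to the OIDs assigned on restore, any oid-column value that matches a dumped blob is rewritten, and everything else passes through untouched.

```java
import java.util.HashMap;
import java.util.Map;

public class BlobOidRemap {
    // Sketch of the idea behind pg_restore's blob cross-reference fix-up:
    // an oid-column value that matches a large object dumped from the old
    // database is replaced with the OID the blob received in the new
    // database; any other value is left unchanged.
    static long remap(long oidColumnValue, Map<Long, Long> oldToNew) {
        Long newOid = oldToNew.get(oidColumnValue);
        return newOid != null ? newOid : oidColumnValue;
    }

    public static void main(String[] args) {
        // Old blob OIDs taken from the thread; the new OIDs are invented here.
        Map<Long, Long> oldToNew = new HashMap<>();
        oldToNew.put(2085885L, 16401L);
        oldToNew.put(2085887L, 16402L);

        System.out.println(remap(2085885L, oldToNew)); // matched -> new OID 16401
        System.out.println(remap(100732L, oldToNew));  // plain id value -> unchanged
    }
}
```

This is also why no REFERENCES clause is needed: the restore pass rewrites the values directly rather than relying on a declared constraint.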
Re: [GENERAL] Blobs in PostgreSQL
Any examples available, please? On all of creating, inserting, updating, and setting it to null?

From: Alex Pilosov [EMAIL PROTECTED]
To: Richard Church [EMAIL PROTECTED]
CC: [EMAIL PROTECTED]
Subject: Re: [GENERAL] Blobs in PostgreSQL
Date: Wed, 27 Jun 2001 08:43:33 -0400 (EDT)

SQL syntax is fine.

On Wed, 27 Jun 2001, Richard Church wrote:
> Can it be created and manipulated using the SQL syntax, or does it require using the C language interface?

From: Alex Pilosov [EMAIL PROTECTED]
To: Richard Church [EMAIL PROTECTED]
CC: [EMAIL PROTECTED]
Subject: Re: [GENERAL] Blobs in PostgreSQL
Date: Tue, 26 Jun 2001 22:22:29 -0400 (EDT)

There are many things one may mean by "blob":

a) There's the old 'large object' interface: http://www.postgresql.org/idocs/index.php?largeobjects.html

b) The above is rarely necessary now that there is no limit on the amount of data you can put into one tuple (due to TOASTing of tuples). To store binary data, you should use the bytea type, and it has an effectively infinite (well, 4G) data limit.

On Wed, 27 Jun 2001, Richard Church wrote:
> It seems that PostgreSQL supports Blobs but I couldn't find any mention of it in the data types section of the manual. Is it a variant on the Text type, or is it entirely different? Where in the manual is it described?
> Richard
[GENERAL] blobs and the toast project ?
Hi,

In June, I had to migrate a web site using Paradox tables on a Windows NT box to a Linux box. I had first chosen PostgreSQL 7.0 for its solid feature set. But unfortunately, when I started to migrate my tables, a problem arose: the implementation of blobs in PostgreSQL. My tables stored lots of blobs (images, big text files, ...) and I expected PostgreSQL to store these blobs as Paradox does, i.e. in one big file where the data is compressed. But as I found out, with PostgreSQL, two files are created for each blob record, and as I manipulate large amounts of blobs, this seriously increased the amount of disk space used, and the memory management crashed after a while. Therefore, I turned back to using MySQL. But I kept an eye on future development.

Now I have read the summary of the TOAST project, which will be implemented in the upcoming 7.1 release. It seems to me that it could fix my problem with blobs in PostgreSQL; am I right? If so, it would be nice, and I could then migrate back to PostgreSQL.
[GENERAL] blobs dont rollback ?
I recently ran into big problems regarding blobs and rollback with PostgreSQL 6.5.3.

A table contains many entries, and each entry includes a single blob OID. In a database-processing Perl script I need to delete such a blob, so I perform the lo_unlink, and later I roll back the session without committing (and with no autocommit) in between. In my opinion this should have left the original blob unchanged. But it seems that it is badly damaged: it is not deleted, it is not functional, and every future operation on it causes strange effects. (The strangest effect is that whenever I try to work with this blob inside a transaction I don't get a single error, but after committing the whole stuff there is no change in the database. It's as if accessing the blob prevents committing.) These effects are completely different from the effects that occur from just using an invalid blob OID.

This is what psql tells me at the prompt (32609 is the OID of the 'damaged' blob, 32600 is no blob OID at all, 32545 is a valid blob OID):

oekoland=# \lo_unlink 32609;
NOTICE:  UserAbortTransactionBlock and not in in-progress state
ERROR:  mdopen: couldn't open xinv32609: No such file or directory

oekoland=# \lo_unlink 32600;
NOTICE:  UserAbortTransactionBlock and not in in-progress state

oekoland=# \lo_unlink 32545;
NOTICE:  UserAbortTransactionBlock and not in in-progress state
lo_unlink 32545

If this is of interest, here is the raw Perl code that causes the problem:

...
$dbh->{AutoCommit} = 0;
...
show_blob();   # works fine !!
...
$lobj_fd = $dbh->func($blob->oid, 'lo_unlink');
bloberror('lo_unlink failed') if not(defined($lobj_fd));
...
$dbh->rollback;
...
show_blob();   # get an error !!
...

thanx for any help,
peter

--
mag. peter pilsl
pgp-key available