On Thu, November 9, 2006 11:12, Rich Wardwell wrote:
> I've been enjoying pqxx / postgres over the last few weeks and have
> come across my first stumbling block. I need to write blobs of binary
> data (think media) into the database. My blobs are generally between
> 40 KB and 120 KB. I've searched the internet and the postgres / pqxx
> sites with varied success. So far, I haven't been very successful.
> First, I haven't seen anything that gives detailed information about
> whether I should be using large objects / oids or the bytea / binary
> type. The documentation is vague -- generally stating, "for big files
> go large objects". What does that mean? Are my blobs big? The nature
> of my transactions is that I'll be querying these objects very rarely,
> but will be inserting them in the neighborhood of 50 objects / second
> -- does this type of activity favor one data type over the other?
From what you say, you don't *need* large objects for what you're doing.
You could just use bytea fields in regular tuples. But since libpqxx does
not support binary result sets, those fields will have to be un-escaped
when you retrieve them. (Normally you'd also need to escape them before
inserting, but you can get around that using prepared statements.)

I see a few possible reasons why, in your case, you might want to use
large objects:

1. Most of the time you really don't need to read/write objects in their
   entirety. Large objects give you a file-like interface, complete with
   seeks.

2. The table that you'd want to store the binary data in is important for
   performance, but most accesses (retrievals, table scans) don't actually
   involve the binary objects. Keeping the binary data separately as large
   objects will keep the table smaller and faster.

3. The size range you gave is not absolute, and occasionally a few of the
   objects may get much larger than 120 KB.

4. You retrieve lots and lots of these objects, caching them on the client
   side is not feasible, and the un-escaping is just taking too much time.

If none of these applies, you might as well go with bytea fields in
regular tuples.

> I was hoping to run load tests both ways but so far, I've been
> attempting large objects with prepared statements with little success.
> Anyone have any good working examples of either data type (presumably
> in conjunction with prepared statements, assuming they provide
> significant performance improvement on repeated calls)?

Have you tried the examples in the test/ directory? Tests 48, 50--55,
57--59, and 65 access large objects in various ways. Test 62 works with
bytea fields using the binarystring class; tests 0 and 90 also give
examples of escaping and un-escaping of binary data; and test 92 shows
how to insert binary data using prepared statements (so no escaping is
needed).
Jeroen

_______________________________________________
Libpqxx-general mailing list
[email protected]
http://gborg.postgresql.org/mailman/listinfo/libpqxx-general
