Hi,
I use PostgreSQL often, but I'm not very familiar with how it works internally.
I've made a small script to back up files from different computers to a
PostgreSQL database.
It's sort of a versioned, networked backup system.
It works with large objects (an oid in a table, linked to a large object),
which [...]
On 08-10-15 at 13:21, Graeme B. Bell wrote:
At first the database was on a partition where compression was enabled; I moved
it to an uncompressed one to see if that made a difference, thinking maybe the
CPU couldn't handle the load.
It made little difference in my case.
My regular gmirror par[...]
[...]ing all the time, or do you start it before this test?
Perhaps check whether any background tasks are running when you use postgres:
autovacuum, autoanalyze, etc.
Graeme Bell
On 08 Oct 2015, at 11:17, Bram Van Steenlandt wrote:
Hi,
I use PostgreSQL often, but I'm not very familiar with how it [...]
On 08-10-15 at 13:37, Graeme B. Bell wrote:
Like this?
gmirror (iozone -s 4 -a /dev/mirror/gm0s1e) = 806376 (faster drives)
zfs uncompressed (iozone -s 4 -a /datapool/data) = 650136
zfs compressed (iozone -s 4 -a /datapool/data) = 676345
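Whatever units iozone is reporting here, the relative differences are what matter for the comparison. A quick arithmetic sketch, using the three figures quoted above:

```python
# iozone figures quoted above (units as reported by iozone; the absolute
# scale doesn't matter when comparing the three setups).
gmirror = 806376          # gmirror on the faster drives
zfs_uncompressed = 650136
zfs_compressed = 676345

def pct_diff(a, b):
    """Percentage by which a exceeds b."""
    return (a - b) / b * 100

print(f"gmirror vs zfs uncompressed:       +{pct_diff(gmirror, zfs_uncompressed):.1f}%")
print(f"zfs compressed vs zfs uncompressed: +{pct_diff(zfs_compressed, zfs_uncompressed):.1f}%")
```

So gmirror comes out roughly 24% ahead of ZFS here, and compression actually leaves ZFS slightly (about 4%) faster than uncompressed ZFS, perhaps because compressible data means less physical I/O; which would argue against the compression-CPU-overhead theory.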
If you can get the complete tables (as in the imag[...]
On 08-10-15 at 14:10, Graeme B. Bell wrote:
On 08 Oct 2015, at 13:50, Bram Van Steenlandt wrote:
1. The part is "fobj = lobject(db.db,0,"r",0,fpath)"; I don't think there is
anything there.
Re: lobject
http://initd.org/psycopg/docs/usage.html#large-objects
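For reference, that call pattern maps onto psycopg2's lobject class roughly like this. This is a hedged sketch, not the original script: the connection string, table name, and file path are made up, "db.db" is assumed to be a psycopg2 connection, and the code needs a live PostgreSQL server to actually run.

```python
import psycopg2
from psycopg2.extensions import lobject

# Hypothetical connection; the original script's "db.db" is assumed to be
# a psycopg2 connection object like this one.
conn = psycopg2.connect("dbname=backup")

fpath = "/tmp/somefile"  # made-up path for illustration

# lobject(conn, oid=0, mode="r", new_oid=0, new_file=fpath):
# oid=0 together with a new_file means "create a new large object and
# import this file into it" (libpq's lo_import under the hood).
fobj = lobject(conn, 0, "r", 0, fpath)
print("imported as large object", fobj.oid)

# The oid can then be stored in an ordinary table (hypothetical schema)
# so the file can be looked up, and later exported, again.
cur = conn.cursor()
cur.execute("INSERT INTO backups (path, lo_oid) VALUES (%s, %s)",
            (fpath, fobj.oid))
conn.commit()
```

The reverse direction would be fobj.export(path), which uses lo_export and writes the large object straight back to a file.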
On 08-10-15 at 15:10, Graeme B. Bell wrote:
http://initd.org/psycopg/docs/usage.html#large-objects
"Psycopg large object support *efficient* import/export with file system files
using the lo_import() and lo_export() libpq functions."
See *[...]
I was under the impression they meant that the l[...]