Hi,
I use PostgreSQL often, but I'm not very familiar with how it works internally.
I've made a small script to back up files from different computers to a
PostgreSQL database.
Sort of a versioned, networked backup system.
It works with large objects (an oid in a table, linked to a large object),
which
Sounds like a locking problem, but assuming you aren’t Sherlock Holmes and
simply want to get the thing working as soon as possible:
Stick a fast SSD in there (whether you stay on a VM or go physical). If you have
enough I/O, you may be able to solve the problem with brute force.
SSDs are a lot
On 08-10-15 at 14:10, Graeme B. Bell wrote:
On 08 Oct 2015, at 13:50, Bram Van Steenlandt wrote:
1. The part is "fobj = lobject(db.db,0,"r",0,fpath)"; I don't think there is
anything there
Re: lobject
http://initd.org/psycopg/docs/usage.html#large-objects
"Psycopg
On 08-10-15 at 13:21, Graeme B. Bell wrote:
First, the database was on a partition where compression was enabled; I changed
it to an uncompressed one to see if it made a difference, thinking maybe the
CPU couldn't handle the load.
It made little difference in my case.
My regular gmirror
>> First, the database was on a partition where compression was enabled; I
>> changed it to an uncompressed one to see if it made a difference, thinking
>> maybe the CPU couldn't handle the load.
> It made little difference in my case.
>
> My regular gmirror partition seems faster:
> dd bs=8k
Seems a bit slow.
1. Can you share the script (the portion that does the file transfer) with the
list? Maybe you’re doing something unusual there by mistake.
Similarly, the settings you’re using for scp.
2. What’s the network like?
For example, what if the underlying network is only capable of
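A quick way to sanity-check the network angle is the raw arithmetic (illustrative only; the link speeds below are assumptions, not figures from the thread):

```python
# Theoretical ceilings for common link speeds, ignoring protocol overhead.
# A saturated 100 Mbit/s link tops out near the 7-8 MB/s the script sees,
# whereas the 37 MB/s scp figure needs at least gigabit.
for mbit in (100, 1000):
    mb_per_s = mbit * 1_000_000 / 8 / 1_000_000  # bits/s -> MB/s
    print(f"{mbit} Mbit/s link: {mb_per_s:.1f} MB/s max")
```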
>>
>>
> Like this?
>
> gmirror (iozone -s 4 -a /dev/mirror/gm0s1e) = 806376 (faster drives)
> zfs uncompressed (iozone -s 4 -a /datapool/data) = 650136
> zfs compressed (iozone -s 4 -a /datapool/data) = 676345
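For comparison with the MB/s figures quoted for scp, iozone reports throughput in kB/s by default; assuming these numbers use that default unit, the conversion is:

```python
# Convert iozone's default kB/s throughput to MB/s (assuming default units).
results_kbps = {
    "gmirror": 806376,
    "zfs uncompressed": 650136,
    "zfs compressed": 676345,
}
for name, kbps in results_kbps.items():
    print(f"{name}: {kbps / 1024:.0f} MB/s")
```

All three are far above the 7-8 MB/s the script achieves, which points away from raw disk throughput as the bottleneck.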
If you can get the complete tables (as in the images on the blog post) with
> On 08 Oct 2015, at 11:17, Bram Van Steenlandt wrote:
>
> The database (9.2.9) on the server (freebsd10) runs on a zfs mirror.
> If I copy a file to the mirror using scp I get 37MB/sec
> My script achieves something like 7 or 8MB/sec on large (+100MB) files.
This may help -
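Since the script's transfer loop isn't shown, here is a hypothetical sketch of the kind of chunked write loop such a backup script might contain (the helper name and chunk size are assumptions, not the poster's code):

```python
import io

def copy_in_chunks(src, dst, chunk_size=1024 * 64):
    """Copy a file-like object to another in fixed-size chunks.

    With a psycopg2 large object as dst, each write is a client-server
    round trip, so too small a chunk_size means low throughput.
    Returns the number of bytes copied.
    """
    total = 0
    while True:
        chunk = src.read(chunk_size)
        if not chunk:
            break
        dst.write(chunk)
        total += len(chunk)
    return total
```

With BytesIO stand-ins it copies byte-for-byte; against a real large object the chunk size alone can account for a large throughput gap.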
On 08-10-15 at 13:13, Graeme B. Bell wrote:
1. The part is "fobj = lobject(db.db,0,"r",0,fpath)"; I don't think there is
anything there
Can you include the surrounding code please (e.g. setting up the DB connection),
so we can see what’s happening, and any sync/commit-type stuff afterwards.
On 08-10-15 at 13:37, Graeme B. Bell wrote:
Like this?
gmirror (iozone -s 4 -a /dev/mirror/gm0s1e) = 806376 (faster drives)
zfs uncompressed (iozone -s 4 -a /datapool/data) = 650136
zfs compressed (iozone -s 4 -a /datapool/data) = 676345
If you can get the complete tables (as in the
> On 08 Oct 2015, at 13:50, Bram Van Steenlandt wrote:
>>> 1. The part is "fobj = lobject(db.db,0,"r",0,fpath)"; I don't think there
>>> is anything there
Re: lobject
http://initd.org/psycopg/docs/usage.html#large-objects
"Psycopg large object support *efficient*
On 08-10-15 at 15:10, Graeme B. Bell wrote:
http://initd.org/psycopg/docs/usage.html#large-objects
"Psycopg large object support *efficient* import/export with file system files
using the lo_import() and lo_export() libpq functions."
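The doc wording refers to psycopg2's `connection.lobject()` call, where passing a filename (the `new_file` argument) hands the transfer to libpq's lo_import() instead of a Python-level read/write loop. A minimal sketch, assuming psycopg2 and an open connection (the helper name is mine):

```python
def import_file(conn, path):
    """Import an OS file into a PostgreSQL large object via lo_import().

    conn.lobject(0, "r", 0, path) is the call style from the thread:
    oid 0 asks the server to allocate a new OID, and supplying the
    filename makes libpq stream the file itself.  Returns the new OID.
    """
    lob = conn.lobject(0, "r", 0, path)
    return lob.oid
```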
See *
I was under the impression they meant that the
>>
>> http://initd.org/psycopg/docs/usage.html#large-objects
>>
>>
>> "Psycopg large object support *efficient* import/export with file system
>> files using the lo_import() and lo_export() libpq functions."
>>
>> See *
>>
> I was under the impression they meant that the lobject was using
>> Sounds like a locking problem
This is what I am trying to get at. The reason that I am not addressing
hardware or OS configuration concerns is that this is not my environment,
but my client's. The client is running my import software and has a choice
of how long the transactions can be. They
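The choice being described, one long transaction versus many short ones, comes down to how often the import commits. An illustrative batching sketch (sqlite3 stands in for PostgreSQL here, and the table, column, and batch size are assumptions; with psycopg2 the placeholder would be %s instead of ?):

```python
import sqlite3

def import_rows(conn, rows, batch_size=1000):
    """Insert rows, committing every batch_size rows.

    batch_size = len(rows) approximates one long transaction;
    batch_size = 1 approximates one transaction per row.  Larger
    batches mean less commit overhead, but locks are held longer.
    """
    cur = conn.cursor()
    pending = 0
    for row in rows:
        cur.execute("INSERT INTO items (val) VALUES (?)", (row,))
        pending += 1
        if pending >= batch_size:
            conn.commit()
            pending = 0
    conn.commit()  # flush any partial final batch

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (val TEXT)")
import_rows(conn, [f"row{i}" for i in range(25)], batch_size=10)
```

With batch_size=10 here, the 25 rows land in three commits; pushing batch_size toward len(rows) recreates the single long transaction being debated.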
On Thu, Oct 08, 2015 at 11:08:55AM -0400, Carlo wrote:
> >> Sounds like a locking problem
>
> This is what I am trying to get at. The reason that I am not addressing
> hardware or OS configuration concerns is that this is not my environment,
> but my client's. The client is running my import
-----Original Message-----
From: k...@rice.edu [mailto:k...@rice.edu]
Sent: October 8, 2015 1:00 PM
To: Carlo
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] One long transaction or multiple short transactions?
On Thu, Oct 08, 2015 at 11:08:55AM -0400, Carlo wrote:
> >> Sounds like a
On Thu, Oct 08, 2015 at 05:43:11PM -0400, Carlo wrote:
> -----Original Message-----
> From: k...@rice.edu [mailto:k...@rice.edu]
> Sent: October 8, 2015 1:00 PM
> To: Carlo
> Cc: pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] One long transaction or multiple short transactions?
>
> On