On Wed, Jul 17, 2013 at 12:21 PM, Vasilis Ventirozos wrote:
>
> On Wed, Jul 17, 2013 at 11:52 AM, Xenofon Papadopoulos wrote:
>
> Thank you for your replies so far.
> The DB in question is Postgres+ 9.2 running inside a VM with the following
> specs:
>
> 16 CPUs (dedicated to the VM)
> 60G RAM
> RAID-10 storage on a SAN for pgdata and pgarchieves, using different ...
> In our databases we have extremely high disk I/O, I'm wondering if
> distributed transactions may be the reason behind it.
>
It could be, but you have to send us more info about your setup and your
configuration, especially your I/O settings; the output of vmstat would also
be helpful.
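
As a rough sketch of the kind of settings worth sharing (the particular list
of parameter names below is only a suggestion, assuming a more or less stock
9.2-era install), something like this pulls the I/O-related configuration in
one go:

    -- show the settings most relevant to disk I/O behaviour
    SELECT name, setting, unit
    FROM pg_settings
    WHERE name IN ('shared_buffers',
                   'effective_cache_size',
                   'wal_buffers',
                   'synchronous_commit',
                   'checkpoint_segments',
                   'checkpoint_completion_target',
                   'random_page_cost')
    ORDER BY name;

Together with a minute or two of vmstat output taken while the load is high,
that usually narrows things down a lot.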
> Thanks
>
>
Vasilis Ventirozos
... how you populated your pgbench database (scale factor / fill factor).
Vasilis Ventirozos
> On 8 April 2013 21:02, Vasilis Ventirozos wrote:
>
>>
>> -c 10 means 10 clients, so that should take advantage of all your cores
>> (see below)
>>
>> %Cpu0 : 39.3 us, 21.1 sy, ...
>>
>> ... that you will find in the contrib; the execution plan may be different
>> because of different statistics. Have you analyzed both databases when you
>> compared the execution plans?
>>
>> Vasilis Ventirozos
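
Regarding the quoted question about statistics: a minimal sketch of how to
make such a comparison fair (the table name and WHERE clause below are
placeholders, not the actual query from this thread) is to refresh planner
statistics on both databases and then capture the plans with timings and
buffer counts:

    -- refresh planner statistics in the current database
    ANALYZE;

    -- capture the plan with actual row counts, timings and buffer usage
    -- ("some_table" and the predicate are placeholders for the real query)
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT *
    FROM some_table
    WHERE created_at >= now() - interval '1 day';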
> Been trying to progress with this today. Decided to set up the database on
> my local machine to try a few things and I'm getting much more sensible
> results and a totally different query plan http://ex...

... changing the optimizer parameters, as the other guys already mentioned.
Vasilis Ventirozos
On Wed, Apr 3, 2013 at 11:18 AM, Dieter Rehbein wrote:
> Hi Igor,
>
> thanks for the reply. The sequential scan on user_2_competition wasn't my
> main problem. What really surprised me was the sequential ...
...procpid AND locker.pid=locker_act.procpid AND
locked.relation=locker.relation;
Vasilis Ventirozos
Hello, I think that your system pauses all clients during the checkpoint in
order to flush all data from the controller's cache to the disks. If I were
you I'd try to tune my checkpoint parameters better; if that doesn't work,
show us some vmstat output please.
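
For what it's worth, a quick way to check whether checkpoints are behind the
stalls (just a sketch; these counters live in pg_stat_bgwriter on the
releases discussed in this thread) is:

    -- checkpoints forced by WAL volume (checkpoints_req) versus the timer,
    -- and how many buffers are written at checkpoint time vs. by backends
    SELECT checkpoints_timed,
           checkpoints_req,
           buffers_checkpoint,
           buffers_backend
    FROM pg_stat_bgwriter;

If checkpoints_req dominates, checkpoints are being forced by WAL volume and
raising checkpoint_segments (together with checkpoint_completion_target) is
the usual first step.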
Vasilis Ventirozos
Hello,
This definitely doesn't look like something that has to do with Postgres
settings. Can you show us the statement and the explain plan?
Also, have you checked pg_stat_activity to monitor what is running at the
same time when the delay occurs?
It kinda looks like a lock to me.
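
For reference, a sketch of that kind of lock check (assuming 9.2 or later,
where pg_stat_activity exposes pid and query rather than procpid and
current_query):

    -- sessions waiting on a lock, together with who currently holds it
    SELECT waiting.pid        AS waiting_pid,
           waiting_act.query  AS waiting_query,
           holder.pid         AS holding_pid,
           holder_act.query   AS holding_query
    FROM pg_locks waiting
    JOIN pg_stat_activity waiting_act ON waiting_act.pid = waiting.pid
    JOIN pg_locks holder
      ON holder.granted
     AND holder.pid <> waiting.pid
     AND holder.locktype = waiting.locktype
     AND holder.database      IS NOT DISTINCT FROM waiting.database
     AND holder.relation      IS NOT DISTINCT FROM waiting.relation
     AND holder.transactionid IS NOT DISTINCT FROM waiting.transactionid
    JOIN pg_stat_activity holder_act ON holder_act.pid = holder.pid
    WHERE NOT waiting.granted;

Anything that shows up as waiting there points at locking rather than
configuration.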
Vasilis Ventirozos
On Thu, Mar 21, 2013 at 5:58 AM, Tom Lane wrote:
> Josh Berkus writes:
> > I just noticed that if I use a tstzrange for convenience, a standard
> > btree index on a timestamp won't get used for it. Example:
>
> > table a (
> > id int,
> > val text,
> > ts timestamptz
> > );
>
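
To illustrate the behaviour being described, here is a small sketch based on
the quoted table (the index name and date range are made up): the containment
form cannot be matched to a plain btree index on ts, while the equivalent
comparisons can, at least on the releases being discussed.

    CREATE TABLE a (
        id  int,
        val text,
        ts  timestamptz
    );
    CREATE INDEX a_ts_idx ON a (ts);

    -- <@ (containment) is not a btree operator for timestamptz,
    -- so this form cannot use a_ts_idx:
    SELECT * FROM a
    WHERE ts <@ tstzrange('2013-03-01', '2013-04-01');

    -- rewriting the range as ordinary comparisons lets the planner use it:
    SELECT * FROM a
    WHERE ts >= '2013-03-01' AND ts < '2013-04-01';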
It's better to split WAL segments and data, simply because the two have
different I/O requirements and because it's easier to measure and tune things
if you have them on different disks.
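
As an aside, one way to get a feel for how much of the write volume is WAL,
from SQL alone (function names as they were in the 9.x releases discussed
here; they were renamed in PostgreSQL 10), is to measure how far the WAL
position advances over an interval. The location literal below is just a
placeholder for the value you noted earlier:

    -- note the current WAL insert position ...
    SELECT pg_current_xlog_location();

    -- ... and some time later, compute how much WAL was generated since then
    -- ('AB/12345678' is a placeholder for the position noted above)
    SELECT pg_size_pretty(
               pg_xlog_location_diff(pg_current_xlog_location(),
                                     'AB/12345678'));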
Vasilis Ventirozos
On Wed, Mar 13, 2013 at 8:37 PM, Niels Kristian Schjødt
<nielskrist...@autouncle.com> wrote:
RAID 0 tends to scale linearly, so 3 of them should give something close to
3x the single-disk write speed. So I would say option 1, but make sure you
test your configuration as soon as you can with bonnie++ or something similar.
On Wed, Mar 13, 2013 at 7:43 PM, Niels Kristian Schjødt
<nielskrist...@autouncle.com> wrote:

... help.
Vasilis Ventirozos
On Thu, Feb 21, 2013 at 11:59 AM, Mark Smith wrote:
> Hardware: IBM X3650 M3 (2 x Xeon X5680 6C 3.33GHz), 96GB RAM. IBM X3524
> with RAID 10 ext4 (noatime,nodiratime,data=writeback,barrier=0) volumes for
> pg_xlog / data / indexes.
>
> Software: SLES 11 S...