Re: [PERFORM] in-memory sorting

2010-08-19 Thread Samuel Gendler
Incidentally, if I set values on the connection before querying, is there an
easy way to get things back to default values or will my code need to know
the prior value and explicitly set it back?  Something like


set work_mem = '512MB'
query
set value = 'default'


or maybe


BEGIN;
set work_mem='512MB'
select query
ROLLBACK;


On Wed, Aug 18, 2010 at 11:52 PM, Scott Marlowe wrote:

> Exactly, it's about the concurrency.  I have a server with 128G ram
> but it runs dozens of queries at a time for hundreds of clients a
> second.  The chance that something big for work_mem might jump up and
> bite me are pretty good there.  Even so, at 16Meg it's not really big
> for that machine, and I might test cranking it up. Note that large
> work_mem can cause the kernel to flush its cache, which means going to
> disk for everybody's data, and all the queries are slow instead of
> one.  Keep an eye on how high work_mem affects your kernel cache.
>
> On Thu, Aug 19, 2010 at 12:38 AM, Samuel Gendler
>  wrote:
> > Yeah, although with 48GB of available memory and not that much
> concurrency,
> > I'm not sure it matters that much. But point taken, I'll see about
> modifying
> > the app such that work_mem gets set on a per-query basis.
> >
> > On Wed, Aug 18, 2010 at 11:24 PM, Scott Marlowe wrote:
> >>
> >> On Wed, Aug 18, 2010 at 11:45 PM, Samuel Gendler
> >>  wrote:
> >> > Answered my own question.  Cranking work_mem up to 350MB revealed that
> >> > the in-memory sort requires more memory than the disk sort.
> >>
> >> Note that unless you run VERY few client connections, it's usually
> >> better to leave work_mem somewhere in the 1 to 32Meg range and have
> >> the connection or user or database that needs 350Meg be set there.
> >>
> >> I.e.
> >>
> >> 
> >> set work_mem='512MB';
> >>
> >> OR
> >>
> >> alter user memoryhog set work_mem='512MB';
> >>
> >> OR
> >>
> >> alter database memhogdb set work_mem='512MB';
> >
> >
>
>
>
> --
> To understand recursion, one must first understand recursion.
>
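
A back-of-the-envelope sketch of Scott's concurrency point, in Python; the client counts and sorts-per-query figures below are illustrative assumptions, not measurements from his server:

```python
# work_mem is a per sort/hash *operation* limit, not a per-server one,
# so concurrent queries multiply it.  This computes the pigeonhole
# upper bound on memory that sorts and hashes alone could claim.

def worst_case_work_mem_gb(connections, sorts_per_query, work_mem_mb):
    """Upper bound (GB) if every connection hits its sort budget at once."""
    return connections * sorts_per_query * work_mem_mb / 1024.0

# On a 128 GB server, a modest 16 MB work_mem stays safe even with
# hundreds of busy clients...
print(worst_case_work_mem_gb(200, 2, 16))    # 6.25 GB
# ...but 512 MB under the same load could exceed physical RAM:
print(worst_case_work_mem_gb(200, 2, 512))   # 200.0 GB
```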


Re: [PERFORM] in-memory sorting

2010-08-19 Thread Samuel Gendler
On Thu, Aug 19, 2010 at 12:06 AM, Samuel Gendler
wrote:

> Incidentally, if I set values on the connection before querying, is there
> an easy way to get things back to default values or will my code need to
> know the prior value and explicitly set it back?  Something like
>
> 
> set work_mem = '512MB'
> query
> set value = 'default'
> 
>
> or maybe
>
> 
> BEGIN;
> set work_mem='512MB'
> select query
> ROLLBACK;
> 
>
>
I guess I'm getting the hang of this whole postgres thing because those were
both wild guesses and both of them appear to work.

set work_mem=default sets it to the value in the config file, and setting
within a transaction and rolling back also restores the original value.
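
For the archives, both approaches sketched in SQL (the table and column names are placeholders); SET LOCAL is the transaction-scoped variant that reverts automatically, so no explicit ROLLBACK trick is needed:

```sql
-- Option 1: set, query, then restore the configured value
SET work_mem = '512MB';
SELECT * FROM big_table ORDER BY some_col;  -- placeholder query
SET work_mem TO DEFAULT;   -- or equivalently: RESET work_mem;

-- Option 2: scope the change to one transaction; SET LOCAL
-- reverts automatically at COMMIT or ROLLBACK
BEGIN;
SET LOCAL work_mem = '512MB';
SELECT * FROM big_table ORDER BY some_col;
COMMIT;
```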



>
> On Wed, Aug 18, 2010 at 11:52 PM, Scott Marlowe wrote:
>
>> [remainder of the quoted exchange trimmed for brevity]
>


Re: [PERFORM] in-memory sorting

2010-08-19 Thread Scott Marlowe
On Thu, Aug 19, 2010 at 1:06 AM, Samuel Gendler
 wrote:
> Incidentally, if I set values on the connection before querying, is there an
> easy way to get things back to default values or will my code need to know
> the prior value and explicitly set it back?  Something like

reset work_mem;

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


[PERFORM] Performance on new 64bit server compared to my 32bit desktop

2010-08-19 Thread Philippe Rimbault

Hi,

I'm having a strange performance result on a new database server 
compared to my simple desktop.


The configuration of the new server :
- OS : GNU/Linux Debian Etch x86_64
- kernel : Linux 2.6.26-2-vserver-amd64 #1 SMP Sun Jun 20 20:40:33 
UTC 2010 x86_64 GNU/Linux

(tests are on the "real server", not on a vserver)
- CPU : 2 x Six-Core AMD Opteron(tm) Processor 2427 @ 2.20GHz
- RAM : 32 GB
The configuration of my desktop pc :
- OS : GNU/Linux Debian Testing i686
- kernel : Linux 2.6.32-5-686 #1 SMP Tue Jun 1 04:59:47 UTC 2010 
i686 GNU/Linux

- CPU : Intel(R) Core(TM)2 Duo CPU E7500  @ 2.93GHz
- RAM : 2 GB

On each configuration, I've compiled PostgreSQL 8.4.4 (a simple 
./configure && make && make install).


On each configuration, I've restored a small database (the compressed 
dump is 33MB); here is the output of "\d+":
 Schema |            Name            |   Type   |    Owner    |    Size    | Description
--------+----------------------------+----------+-------------+------------+-------------
 public | article                    | table    | indexwsprem | 77 MB      |
 public | article_id_seq             | sequence | indexwsprem | 8192 bytes |
 public | evt                        | table    | indexwsprem | 8192 bytes |
 public | evt_article                | table    | indexwsprem | 17 MB      |
 public | evt_article_id_seq         | sequence | indexwsprem | 8192 bytes |
 public | evt_id_seq                 | sequence | indexwsprem | 8192 bytes |
 public | firm                       | table    | indexwsprem | 1728 kB    |
 public | firm_article               | table    | indexwsprem | 17 MB      |
 public | firm_article_id_seq        | sequence | indexwsprem | 8192 bytes |
 public | firm_id_seq                | sequence | indexwsprem | 8192 bytes |
 public | publication                | table    | indexwsprem | 64 kB      |
 public | publication_article        | table    | indexwsprem | 0 bytes    |
 public | publication_article_id_seq | sequence | indexwsprem | 8192 bytes |
 public | publication_id_seq         | sequence | indexwsprem | 8192 bytes |
(14 rows)

On each configuration, postgresql.conf is identical and has not been 
modified (the default shared_buffers seems sufficient for my simple tests).


I've enabled timing in psql; here are the results of several "simple" 
queries (each executed twice so the cache is warm):

1- select count(*) from firm;
server x64 :  48661 (1 row) Time: 14,412 ms
desk i686  :  48661 (1 row) Time: 4,845 ms

2- select * from pg_settings;
server x64 :  Time: 3,898 ms
desk i686  :  Time: 1,517 ms

3- I've run "time pgbench -c 50" :
server x64 :
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 1
query mode: simple
number of clients: 50
number of transactions per client: 10
number of transactions actually processed: 500/500
tps = 523.034437 (including connections establishing)
tps = 663.511008 (excluding connections establishing)

real    0m0.984s
user    0m0.088s
sys     0m0.096s
desk i686 :
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 1
query mode: simple
number of clients: 50
number of transactions per client: 10
number of transactions actually processed: 500/500
tps = 781.986778 (including connections establishing)
tps = 862.809792 (excluding connections establishing)

real    0m0.656s
user    0m0.028s
sys     0m0.052s


Do you think it's a 32bit/64bit difference?



Re: [PERFORM] Performance on new 64bit server compared to my 32bit desktop

2010-08-19 Thread Scott Marlowe
On Thu, Aug 19, 2010 at 2:07 AM, Philippe Rimbault  wrote:
> Hi,
>
> I'm having a strange performance result on a new database server compared to
> my simple desktop.
>
> The configuration of the new server :
>    - OS : GNU/Linux Debian Etch x86_64
>    - kernel : Linux 2.6.26-2-vserver-amd64 #1 SMP Sun Jun 20 20:40:33 UTC
> 2010 x86_64 GNU/Linux
>        (tests are on the "real server", not on a vserver)
>    - CPU : 2 x Six-Core AMD Opteron(tm) Processor 2427 @ 2.20GHz
>    - RAM : 32 GB
> The configuration of my desktop pc :
>    - OS : GNU/Linux Debian Testing i686
>    - kernel : Linux 2.6.32-5-686 #1 SMP Tue Jun 1 04:59:47 UTC 2010 i686
> GNU/Linux
>    - CPU : Intel(R) Core(TM)2 Duo CPU     E7500  @ 2.93GHz
>    - RAM : 2 GB

PERFORMANCE STUFF DELETED FOR BREVITY

> Do you think it's a 32bit/64bit difference ?

No, it's likely that your desktop has much faster CPU cores than your
server, and it has drives that may or may not be obeying fsync
commands.  Your server, OTOH, has more cores, so it's likely to do
better under a real load.  And assuming it has more disks on a better
controller it will also do better under heavier loads.

So how are the disks setup anyway?



Re: [PERFORM] Performance on new 64bit server compared to my 32bit desktop

2010-08-19 Thread Philippe Rimbault

On 19/08/2010 11:51, Scott Marlowe wrote:
> On Thu, Aug 19, 2010 at 2:07 AM, Philippe Rimbault  wrote:
>> Hi,
>>
>> I'm having a strange performance result on a new database server compared to
>> my simple desktop.
>>
>> The configuration of the new server :
>> - OS : GNU/Linux Debian Etch x86_64
>> - kernel : Linux 2.6.26-2-vserver-amd64 #1 SMP Sun Jun 20 20:40:33 UTC
>> 2010 x86_64 GNU/Linux
>> (tests are on the "real server", not on a vserver)
>> - CPU : 2 x Six-Core AMD Opteron(tm) Processor 2427 @ 2.20GHz
>> - RAM : 32 GB
>> The configuration of my desktop pc :
>> - OS : GNU/Linux Debian Testing i686
>> - kernel : Linux 2.6.32-5-686 #1 SMP Tue Jun 1 04:59:47 UTC 2010 i686
>> GNU/Linux
>> - CPU : Intel(R) Core(TM)2 Duo CPU E7500  @ 2.93GHz
>> - RAM : 2 GB
>
> PERFORMANCE STUFF DELETED FOR BREVITY
>
>> Do you think it's a 32bit/64bit difference?
>
> No, it's likely that your desktop has much faster CPU cores than your
> server, and it has drives that may or may not be obeying fsync
> commands.  Your server, OTOH, has more cores, so it's likely to do
> better under a real load.  And assuming it has more disks on a better
> controller it will also do better under heavier loads.
>
> So how are the disks setup anyway?

Thanks for your reply!

The server uses an HP Smart Array P410 with a RAID 5 array of SATA 133 disks.
My desktop uses only one SATA 133 disk.
I thought my simple queries would hit only memory, not disk.
I've launched a new pgbench run with many more clients and transactions:

Server :
postgres$ pgbench -c 400 -t 100
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 1
query mode: simple
number of clients: 400
number of transactions per client: 100
number of transactions actually processed: 40000/40000
tps = 115.054386 (including connections establishing)
tps = 115.617186 (excluding connections establishing)

real    5m47.706s
user    0m27.054s
sys     0m59.804s

Desktop :
postgres$ time pgbench -c 400 -t 100
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 1
query mode: simple
number of clients: 400
number of transactions per client: 100
number of transactions actually processed: 40000/40000
tps = 299.456785 (including connections establishing)
tps = 300.590503 (excluding connections establishing)

real    2m13.604s
user    0m5.304s
sys     0m13.469s







Re: [PERFORM] Performance on new 64bit server compared to my 32bit desktop

2010-08-19 Thread Philippe Rimbault

On 19/08/2010 12:23, Philippe Rimbault wrote:
> [full quote of the previous message trimmed for brevity]

I've re-initialized pgbench with -s 400, and now the server performs 
(much) better than the desktop.
So... my desktop CPU is faster when I only run small queries, but the 
server handles heavier loads better.

I was just surprised by the difference on my small database.

Thx



Re: Fwd: [PERFORM] Vacuum Full + Cluster + Vacuum full = non removable dead rows

2010-08-19 Thread Alexandre de Arruda Paes
Hi Tom,

Below is the pg_prepared_xacts result.
The only way to restore the table is with TRUNCATE.
VACUUM, VACUUM FULL, and CLUSTER do not help, and subsequent updates
become slower and slower.


carmen=# select * from vlocks where relname='tp93t'; select * from
pg_stat_activity where usename='webpa'; select * from pg_prepared_xacts;
 datname | relname | virtualtransaction | mode | granted | usename | substr
| query_start | age | procpid
-+-++--+-+-++-+-+-
(0 rows)

 datid | datname | procpid | usesysid  | usename | current_query | waiting |
xact_start |  query_start  | backend_start |
client_addr | client_port
---+-+-+---+-+---+-++---+---+-+-
 16745 | carmen  |   19345 | 750377993 | webpa   | | f
|| 2010-08-19 09:40:44.295753-03 | 2010-08-19 09:38:45.637543-03
| 192.168.1.1 |   59867
(1 row)

 transaction | gid | prepared | owner | database
-+-+--+---+--
(0 rows)

carmen=# VACUUM full verbose tp93t;
INFO:  vacuuming "public.tp93t"
INFO:  "tp93t": found 0 removable, 38588 nonremovable row versions in 9700
pages
DETAIL:  38378 dead row versions cannot be removed yet.
Nonremovable row versions range from 1853 to 2029 bytes long.
There were 317 unused item pointers.
Total free space (including removable row versions) is 1178860 bytes.
0 pages are or will become empty, including 0 at the end of the table.
190 pages containing 442568 free bytes are potential move destinations.
CPU 0.00s/0.02u sec elapsed 0.02 sec.
INFO:  index "tp93t_pkey" now contains 11597 row versions in 195 pages
DETAIL:  0 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  "tp93t": moved 0 row versions, truncated 9700 to 9700 pages
DETAIL:  CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  vacuuming "pg_toast.pg_toast_24274"
INFO:  "pg_toast_24274": found 0 removable, 0 nonremovable row versions in 0
pages
DETAIL:  0 dead row versions cannot be removed yet.
Nonremovable row versions range from 0 to 0 bytes long.
There were 0 unused item pointers.
Total free space (including removable row versions) is 0 bytes.
0 pages are or will become empty, including 0 at the end of the table.
0 pages containing 0 free bytes are potential move destinations.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  index "pg_toast_24274_index" now contains 0 row versions in 1 pages
DETAIL:  0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
VACUUM


2010/8/18 Tom Lane 

> Alexandre de Arruda Paes  writes:
> > I know the problem with VACUUM FULL and bloated Indexes, but I don't
> > understand why the table that is not in use by nobody, cant be vacuumed
> or
> > clustered to avoid dead tuples.
>
> There's an open transaction somewhere that VACUUM is preserving the
> tuples for.  This transaction need not ever have touched the table,
> or ever intend to touch the table --- but VACUUM cannot know that,
> so it saves any tuples that the transaction might be entitled to see
> if it looked.
>
> > carmen=# select * from vlocks where relname='tp93t'; select * from
> > pg_stat_activity where usename='webpa';
>
> You keep on showing us only subsets of pg_stat_activity :-(
>
> Also, if you don't see anything in pg_stat_activity, try pg_prepared_xacts.
>
>regards, tom lane
>


Re: [PERFORM] in-memory sorting

2010-08-19 Thread Tom Lane
Samuel Gendler  writes:
> Answered my own question.  Cranking work_mem up to 350MB revealed that
> the in-memory sort requires more memory than the disk sort.

Yeah.  The on-disk representation of sortable data is tighter than the
in-memory representation for various reasons, mostly that we're willing
to work at making it small.  Datums aren't necessarily properly aligned
for example, and there's also palloc overhead to consider in-memory.
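
A Python analogue (not PostgreSQL internals) of the overhead Tom describes: a packed "on-disk style" byte representation versus per-object in-memory representation of the same values.

```python
# Illustration only: packed bytes carry no per-object or alignment
# overhead, while in-memory Python objects do -- the same general
# trade-off Tom describes for sort data.
import struct
import sys

values = list(range(1000))

# Packed representation: exactly 8 bytes per 64-bit integer
packed = struct.pack(f"{len(values)}q", *values)

# In-memory representation: list slots plus per-object overhead
in_memory = sys.getsizeof(values) + sum(sys.getsizeof(v) for v in values)

print(len(packed))              # 8000 bytes
print(in_memory > len(packed))  # True: the in-memory form costs several times more
```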

regards, tom lane



Re: Fwd: [PERFORM] Vacuum Full + Cluster + Vacuum full = non removable dead rows

2010-08-19 Thread Kevin Grittner
Alexandre de Arruda Paes  wrote:
> 2010/8/18 Tom Lane 
 
>> There's an open transaction somewhere that VACUUM is preserving
>> the tuples for.  This transaction need not ever have touched the
>> table, or ever intend to touch the table --- but VACUUM cannot
>> know that, so it saves any tuples that the transaction might be
>> entitled to see if it looked.
>>
>> > carmen=# select * from vlocks where relname='tp93t'; select *
>> > from pg_stat_activity where usename='webpa';
>>
>> You keep on showing us only subsets of pg_stat_activity :-(
 
> select * from pg_stat_activity where usename='webpa';
 
You keep on showing us only subsets of pg_stat_activity :-(
 
*ANY* open transaction, including "idle in transaction" sessions, and
including transactions by other users in other databases, will prevent
vacuum from cleaning up rows, for the reasons Tom already gave you.
 
What do you get from:
 
select * from pg_stat_activity where current_query <> '<IDLE>'
  order by xact_start limit 10;
 
-Kevin



Re: Fwd: [PERFORM] Vacuum Full + Cluster + Vacuum full = non removable dead rows

2010-08-19 Thread Tom Lane
Alexandre de Arruda Paes  writes:
> Below, the pg_prepared_xacts result.

OK, so you don't have any prepared transactions, but you're still not
showing us the full content of pg_stat_activity.

Just out of curiosity, how many rows does "select count(*) from tp93t"
think there are?

regards, tom lane



Re: [PERFORM] yet another q

2010-08-19 Thread Tom Lane
Samuel Gendler  writes:
> fast plan: http://explain.depesz.com/s/iZ
> slow plan: http://explain.depesz.com/s/Dv2

Your problem here is that it's switching from hash aggregation to
sort-and-group-aggregate once it decides that the number of aggregate
groups won't fit in work_mem anymore.  While you could brute-force
that by raising work_mem, it'd be a lot better if you could get the
estimated number of groups more in line with the actual.  Notice the
very large discrepancy between the estimated and actual numbers of
rows out of the aggregation steps.

Increasing the stats targets for the GROUP BY columns might help,
but I think what's basically going on here is there's correlation
between the GROUP BY columns that the planner doesn't know about.

One thing I'm wondering is why you're grouping by owner_customer_id
and t_fact.provider_id, when these aren't used in the output.
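
A sketch of the stats-target suggestion in SQL; this assumes both columns live on t_fact (only t_fact.provider_id is qualified in the post), and 1000 is an arbitrary example value:

```sql
-- Raise the per-column statistics target, then re-analyze so the
-- planner sees better group-count estimates:
ALTER TABLE t_fact ALTER COLUMN owner_customer_id SET STATISTICS 1000;
ALTER TABLE t_fact ALTER COLUMN provider_id SET STATISTICS 1000;
ANALYZE t_fact;
```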

regards, tom lane



Re: [PERFORM] Performance on new 64bit server compared to my 32bit desktop

2010-08-19 Thread Scott Marlowe
On Thu, Aug 19, 2010 at 4:23 AM, Philippe Rimbault  wrote:
>> So how are the disks setup anyway?
>>
>
> Thanks for your reply !
>
> The server uses an HP Smart Array P410 with a RAID 5 array of SATA 133 disks.

If you can change that to RAID-10 do so now.  RAID-5 is notoriously
slow for database use, unless you're only gonna do reporting type
queries with few updates.

> My desktop uses only one SATA 133 disk.
> I thought my simple queries would hit only memory, not disk.

No, but pgbench has to write to the disk.

> I've launch a new pgbench with much more client and transactions :
>
> Server :
>    postgres$ pgbench -c 400 -t 100

-c 400 is HUGE.  (And as you mentioned in your later email, you need
to initialize with -i -s 400 for -c 400 to make sense.)  Try values in
the 4 to 40 range and the server should REALLY outshine your desktop
as you pass 12 or 16 or so.



Re: [PERFORM] Performance on new 64bit server compared to my 32bit desktop

2010-08-19 Thread Greg Smith

Philippe Rimbault wrote:

I've run "time pgbench -c 50" :
server x64 :
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 1
query mode: simple
number of clients: 50
number of transactions per client: 10
number of transactions actually processed: 500/500
tps = 523.034437 (including connections establishing)
tps = 663.511008 (excluding connections establishing)



As mentioned already, most of the difference you're seeing is simply 
that your desktop system has faster individual processor cores in it, so 
jobs where only a single core are being used are going to be faster on it.


The above isn't going to work very well either because the database 
scale is too small, and you're not running the test for very long.  The 
things the bigger server is better at, you're not testing.


Since your smaller system has 2GB of RAM and the larger one 32GB, try 
this instead:


pgbench -i -s 2000
pgbench -c 24 -T 60 -S
pgbench -c 24 -T 300

That will create a much larger database, run some simple SELECT-only 
tests on it, and then run a write intensive one.  Expect to see the 
server system crush the results of the desktop here.  Note that this 
will take quite a while to run--the pgbench initialization step in 
particular is going to take a good fraction of an hour or more, and then 
the actual tests will run for 6 minutes after that.  You can run more 
tests after that without doing the initialization step again, but if you 
run a lot of the write-heavy tests eventually performance will start to 
degrade.
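
Rough arithmetic behind the -s 2000 suggestion, assuming the common approximation of roughly 16 MB of on-disk data per pgbench scale unit (each unit is 100,000 pgbench_accounts rows; the MB figure is approximate, not an exact constant):

```python
# Size the pgbench database relative to RAM so the test actually
# exercises disk on the small machine but fits the big one's workload.
MB_PER_SCALE_UNIT = 16  # assumed approximation

def pgbench_db_size_gb(scale):
    return scale * MB_PER_SCALE_UNIT / 1024.0

print(pgbench_db_size_gb(2000))  # ~31 GB: comparable to the 32 GB server
print(pgbench_db_size_gb(100))   # ~1.6 GB: would fit in the 2 GB desktop
```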


--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com   www.2ndQuadrant.us




Re: [PERFORM] Performance on new 64bit server compared to my 32bit desktop

2010-08-19 Thread Greg Smith

Greg Smith wrote:
Since your smaller system has 2GB of RAM and the larger one 32GB, try 
this instead:


pgbench -i -s 2000
pgbench -c 24 -T 60 -S
pgbench -c 24 -T 300


Oh, and to at least give a somewhat more normal postgresql.conf I'd 
recommend you at least make the following two changes before doing the 
above:


shared_buffers=256MB
checkpoint_segments=32

Those are the two parameters the pgbench test is most sensitive to, so 
setting them higher will give more realistic results.



