Re: [PERFORM] Excessive context switching on SMP Xeons

2004-10-05 Thread Alan Stange
A few quick random observations on the Xeon v. Opteron comparison:
-  running a dual Xeon with hyperthreading turned on really isn't the 
same as having a quad cpu system.   I haven't seen postgresql-specific 
benchmarks, but the general case has been that HT is a benefit in a few 
particular workloads but with no benefit in general.

- We're running postgresql 8 (in production!) on a dual Opteron 250, 
Linux 2.6, 8GB memory, 1.7TB of attached fiber channel disk, etc.   This 
machine is fast.   A dual 2.8 GHz Xeon with 512K caches (with or 
without HT enabled) simply won't be in the same performance league as 
this dual Opteron system (assuming identical disk systems, etc).  We run 
a Linux 2.6 kernel because it scales under load so much better than the 
2.4 kernels.

The units we're using (and we have a lot of them) are SunFire v20z.  You 
can get a dualie Opteron 250 for $7K with 4GB memory from Sun.  My 
personal experience with this setup in a mission-critical config is to 
not depend on 4-hour spare parts, but to spend the money and install the 
spare in the rack.   Naturally, one can go cheaper with slower cpus, 
different vendors, etc.

I don't care to go into the whole debate of Xeon v. Opteron here.   We 
also have a lot of dual Xeon systems. In every comparison I've done with 
our codes, the dual Opteron clearly outperforms the dual Xeon, when 
running on one and both cpus.

-- Alan

Josh Berkus wrote:
> Bill,
>
>> I'd be thrilled to test it too, if for no other reason than to determine
>> whether what I'm experiencing really is the "CS problem".
>
> Hmmm ... Gavin's patch is built against 8.0, and any version of the patch
> would require Linux 2.6, probably 2.6.7 minimum.   Can you test on that Linux
> version?   Do you have the resources to back-port Gavin's patch?
>
>> Fair enough. I never see nearly this much context switching on my dual
>> Xeon boxes running dozens (sometimes hundreds) of concurrent apache
>> processes, but I'll concede this could just be due to the more parallel
>> nature of a bunch of independent apache workers.
>
> Certainly could be.  Heavy CSes only happen when you have a number of
> long-running processes with contention for RAM, in my experience.  If Apache
> is dispatching things quickly enough, they'd never arise.
>
>> Hence my desire for recommendations on alternate architectures ;-)
>
> Well, you could certainly stay on Xeon if there's better support availability.
> Just get off Dell *650's.
>
>> Being a 24x7x365 shop, and these servers being mission critical, I
>> require vendors that can offer 24x7 4-hour part replacement, like Dell
>> or IBM. I haven't seen 4-way 64-bit boxes meeting that requirement for
>> less than $20,000, and that's for a very minimally configured box. A
>> suitably configured pair will likely end up costing $50,000 or more. I
>> would like to avoid an unexpected expense of that size, unless there's
>> no other good alternative. That said, I'm all ears for a cheaper
>> alternative that meets my support and performance requirements.
>
> No, you're going to pay through the nose for that support level.   It's how
> things work.
>
>> tps = 369.717832 (including connections establishing)
>> tps = 370.852058 (excluding connections establishing)
>
> Doesn't seem too bad to me.   Have anything to compare it to?
>
> What's in your postgresql.conf?
>
> --Josh


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [PERFORM] Planner picks the wrong plan?

2004-10-05 Thread Tom Lane
Nichlas Löfdahl <[EMAIL PROTECTED]> writes:
> My question is, why doesn't the planner pick the same plan for Q1 & Q3?

I think it's mostly that after you've added and ANALYZEd the "age"
column, the planner has a pretty good idea of how many rows will pass
the "age > 17 AND age < 20" condition.  It can't do very much with the
equivalent condition in the original form, though, and in fact ends up
drastically underestimating the number of matching rows (86 vs reality
of 3021).  That leads directly to a bad plan choice :-(
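
For illustration, one way to see what ANALYZE gave the planner to work with
on the added column is to query pg_stats (a minimal sketch; the table and
column names are the ones from the report below, and the actual statistics
will of course differ):

    SELECT attname, n_distinct, most_common_vals, histogram_bounds
    FROM pg_stats
    WHERE tablename = 'users' AND attname = 'age';

With a histogram for "age" available, the estimate for "age > 17 AND age < 20"
can land close to the true row count, whereas the expression form leaves the
planner with little more than default selectivity guesses.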

regards, tom lane

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


[PERFORM] test post

2004-10-05 Thread Max Baker
Please ignore this if it goes through.  My messages have been bouncing and I'm
trying to find out why.

-m

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


[PERFORM] Planner picks the wrong plan?

2004-10-05 Thread Nichlas Löfdahl
Hello!

I'm using Postgres 7.4.5, sort_mem is 8192. Tables analyzed / vacuumed.

Here's a function I'm using to get an age from the user's birthday:

agey(date) -> SELECT date_part('year', age($1::timestamp))
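
Spelled out in full, that shorthand corresponds to something like the following
definition (a sketch only, not the poster's actual DDL; note that one-argument
age() is computed relative to the current date, so the function can be declared
STABLE at best, never IMMUTABLE):

    CREATE FUNCTION agey(date) RETURNS double precision AS
      'SELECT date_part(''year'', age($1::timestamp))'
    LANGUAGE sql STABLE;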


The problem is: why do the plans differ so much between Q1 & Q3 below? Is it
something to do with age() being a non-IMMUTABLE function?


Q1: explain analyze SELECT al.pid, al.owner, al.title, al.front, al.created_at,
    al.n_images, u.username as owner_str, u.image as owner_image, u.puid as owner_puid
    FROM albums al, users u
    WHERE u.uid = al.owner AND al.security='a' AND al.n_images > 0
      AND date_part('year', age(u.born)) > 17 AND date_part('year', age(u.born)) < 20
      AND city = 1
    ORDER BY al.id DESC LIMIT 9;

                                                QUERY PLAN
------------------------------------------------------------------------------------------------------------
 Limit  (cost=5700.61..5700.63 rows=9 width=183) (actual time=564.291..564.299 rows=9 loops=1)
   ->  Sort  (cost=5700.61..5700.82 rows=83 width=183) (actual time=564.289..564.291 rows=9 loops=1)
         Sort Key: al.id
         ->  Nested Loop  (cost=0.00..5697.97 rows=83 width=183) (actual time=30.029..526.211 rows=4510 loops=1)
               ->  Seq Scan on users u  (cost=0.00..5311.05 rows=86 width=86) (actual time=5.416..421.264 rows=3021 loops=1)
                     Filter: ((date_part('year'::text, age((('now'::text)::date)::timestamp with time zone, (born)::timestamp with time zone)) > 17::double precision) AND (date_part('year'::text, age((('now'::text)::date)::timestamp with time zone, (born)::timestamp with time zone)) < 20::double precision) AND (city = 1))
               ->  Index Scan using albums_owner_key on albums al  (cost=0.00..4.47 rows=2 width=101) (actual time=0.014..0.025 rows=1 loops=3021)
                     Index Cond: ("outer".uid = al."owner")
                     Filter: (("security" = 'a'::bpchar) AND (n_images > 0))
 Total runtime: 565.120 ms
(10 rows)


Result when removing the second age-check (AND date_part('year', age(u.born)) < 20):

Q2: explain analyze SELECT al.pid, al.owner, al.title, al.front, al.created_at,
    al.n_images, u.username as owner_str, u.image as owner_image, u.puid as owner_puid
    FROM albums al, users u
    WHERE u.uid = al.owner AND al.security='a' AND al.n_images > 0
      AND date_part('year', age(u.born)) > 17
      AND city = 1
    ORDER BY al.id DESC LIMIT 9;

                                                QUERY PLAN
------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.00..140.95 rows=9 width=183) (actual time=0.217..2.474 rows=9 loops=1)
   ->  Nested Loop  (cost=0.00..86200.99 rows=5504 width=183) (actual time=0.216..2.464 rows=9 loops=1)
         ->  Index Scan Backward using albums_id_key on albums al  (cost=0.00..2173.32 rows=27610 width=101) (actual time=0.086..1.080 rows=40 loops=1)
               Filter: (("security" = 'a'::bpchar) AND (n_images > 0))
         ->  Index Scan using users_pkey on users u  (cost=0.00..3.03 rows=1 width=86) (actual time=0.031..0.031 rows=0 loops=40)
               Index Cond: (u.uid = "outer"."owner")
               Filter: ((date_part('year'::text, age((('now'::text)::date)::timestamp with time zone, (born)::timestamp with time zone)) > 17::double precision) AND (city = 1))
 Total runtime: 2.611 ms
(8 rows)

Trying another approach: adding a separate "stale" age-column to the users-table:

alter table users add column age smallint;
update users set age=date_part('year'::text, age((('now'::text)::date)::timestamp with time zone, (born)::timestamp with time zone));
analyze users;

Result with separate column:
Q3: explain analyze SELECT al.pid, al.owner, al.title, al.front, al.created_at,
    al.n_images, u.username as owner_str, u.image as owner_image, u.puid as owner_puid
    FROM albums al, users u
    WHERE u.uid = al.owner AND al.security='a' AND al.n_images > 0
      AND age > 17 AND age < 20
      AND city = 1
    ORDER BY al.id DESC LIMIT 9;

 Limit  (cost=0.00..263.40 rows=9 width=183) (actual time=0.165..2.832 rows=9 loops=1)
   ->  Nested Loop  (cost=0.00..85925.69 rows=2936 width=183) (actual time=0.163..2.825 rows=9 loops=1)
         ->  Index Scan Backward using albums_id_key on albums al  (cost=0.00..2173.32 rows=27610 width=101) (actual time=0.043..1.528 rows=56 loops=1)
               Filter: (("security" = 'a'::b

[PERFORM] slow rule on update

2004-10-05 Thread Janning Vygen
Hi,

(pg version 7.4.2; I do run VACUUM ANALYZE on the whole database frequently 
and just before executing the statements below)

I don't know if anyone can help me because I don't really know where the 
problem is, but I'll try. If any further information is needed, I'll be glad 
to send it.

My real rule is much longer (more calculation instead of "+ 1"), but this 
shortcut has the same performance disadvantages:

CREATE RULE ru_sp_update AS ON UPDATE TO Spiele
DO
  UPDATE punktecache SET pc_punkte = pc_punkte + 1

FROM Spieletipps AS stip
NATURAL JOIN tippspieltage2spiele AS tspt2sp

WHERE punktecache.tr_kurzname = stip.tr_kurzname
AND punktecache.mg_name = stip.mg_name
AND punktecache.tspt_name   = tspt2sp.tspt_name
AND stip.sp_id = OLD.sp_id
;

punktecache is a materialized view which should be updated by this rule

# \d punktecache
      Table "public.punktecache"
   Column    |   Type   | Modifiers
-------------+----------+-----------
 tr_kurzname | text     | not null
 mg_name     | text     | not null
 tspt_name   | text     | not null
 pc_punkte   | smallint | not null
Indexes:
    "pk_punktecache" primary key, btree (tr_kurzname, mg_name, tspt_name)
Foreign-key constraints:
    "fk_mitglieder" FOREIGN KEY (tr_kurzname, mg_name) REFERENCES mitglieder(tr_kurzname, mg_name) ON UPDATE CASCADE ON DELETE CASCADE
    "fk_tippspieltage" FOREIGN KEY (tr_kurzname, tspt_name) REFERENCES tippspieltage(tr_kurzname, tspt_name) ON UPDATE CASCADE ON DELETE CASCADE


my update statement:

explain analyze UPDATE spiele
SET    sp_heimtore = spup.spup_heimtore,
       sp_gasttore = spup.spup_gasttore,
       sp_abpfiff  = spup.spup_abpfiff
FROM   spieleupdates AS spup
WHERE  spiele.sp_id = spup.sp_id;

and output from explain
[did I post explain's output right? I just copied it, but I wonder if there is 
a prettier way to post explain's output?]


 Nested Loop  (cost=201.85..126524.78 rows=1 width=45) (actual time=349.694..290491.442 rows=100990 loops=1)
   ->  Nested Loop  (cost=201.85..126518.97 rows=1 width=57) (actual time=349.623..288222.145 rows=100990 loops=1)
         ->  Hash Join  (cost=201.85..103166.61 rows=4095 width=64) (actual time=131.376..8890.220 rows=102472 loops=1)
               Hash Cond: (("outer".tspt_name = "inner".tspt_name) AND ("outer".tr_kurzname = "inner".tr_kurzname))
               ->  Seq Scan on punktecache  (cost=0.00..40970.20 rows=2065120 width=45) (actual time=0.054..4356.321 rows=2065120 loops=1)
               ->  Hash  (cost=178.16..178.16 rows=4738 width=35) (actual time=102.259..102.259 rows=0 loops=1)
                     ->  Nested Loop  (cost=0.00..178.16 rows=4738 width=35) (actual time=17.262..88.076 rows=10519 loops=1)
                           ->  Seq Scan on spieleupdates spup  (cost=0.00..0.00 rows=1 width=4) (actual time=0.015..0.024 rows=1 loops=1)
                           ->  Index Scan using ix_tspt2sp_fk_spiele on tippspieltage2spiele tspt2sp  (cost=0.00..118.95 rows=4737 width=31) (actual time=17.223..69.486 rows=10519 loops=1)
                                 Index Cond: ("outer".sp_id = tspt2sp.sp_id)
         ->  Index Scan using pk_spieletipps on spieletipps stip  (cost=0.00..5.69 rows=1 width=25) (actual time=2.715..2.717 rows=1 loops=102472)
               Index Cond: (("outer".tr_kurzname = stip.tr_kurzname) AND ("outer".mg_name = stip.mg_name) AND ("outer".sp_id = stip.sp_id))
   ->  Index Scan using pk_spiele on spiele  (cost=0.00..5.78 rows=1 width=4) (actual time=0.012..0.014 rows=1 loops=100990)
         Index Cond: (spiele.sp_id = "outer".sp_id)
 Total runtime: 537319.321 ms


Can this be made any faster? Can you give me a hint where to start researching? 

My guess is that the update statement inside the rule doesn't really use the 
index on punktecache, but I don't know why and I don't know how to change it.
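
One way to check that guess (a sketch, not run against this schema) is to
execute the UPDATE that the rule expands to in isolation, with a literal in
place of OLD.sp_id, and see whether the plan reaches punktecache through
pk_punktecache or through a seq scan; the sp_id value here is made up:

    EXPLAIN ANALYZE
    UPDATE punktecache SET pc_punkte = pc_punkte + 1
    FROM Spieletipps AS stip
    NATURAL JOIN tippspieltage2spiele AS tspt2sp
    WHERE punktecache.tr_kurzname = stip.tr_kurzname
      AND punktecache.mg_name     = stip.mg_name
      AND punktecache.tspt_name   = tspt2sp.tspt_name
      AND stip.sp_id = 12345;  -- hypothetical sp_id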

Any hint or help is very much appreciated.

kind regards
janning

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [PERFORM] Excessive context switching on SMP Xeons

2004-10-05 Thread Josh Berkus
Bill,

> I'd be thrilled to test it too, if for no other reason than to determine
> whether what I'm experiencing really is the "CS problem".

Hmmm ... Gavin's patch is built against 8.0, and any version of the patch 
would require Linux 2.6, probably 2.6.7 minimum.   Can you test on that Linux 
version?   Do you have the resources to back-port Gavin's patch?   

> Fair enough. I never see nearly this much context switching on my dual
> Xeon boxes running dozens (sometimes hundreds) of concurrent apache
> processes, but I'll concede this could just be due to the more parallel
> nature of a bunch of independent apache workers.

Certainly could be.  Heavy CSes only happen when you have a number of 
long-running processes with contention for RAM, in my experience.  If Apache 
is dispatching things quickly enough, they'd never arise.

> Hence my desire for recommendations on alternate architectures ;-)

Well, you could certainly stay on Xeon if there's better support availability.  
Just get off Dell *650's.   

> Being a 24x7x365 shop, and these servers being mission critical, I
> require vendors that can offer 24x7 4-hour part replacement, like Dell
> or IBM. I haven't seen 4-way 64-bit boxes meeting that requirement for
> less than $20,000, and that's for a very minimally configured box. A
> suitably configured pair will likely end up costing $50,000 or more. I
> would like to avoid an unexpected expense of that size, unless there's
> no other good alternative. That said, I'm all ears for a cheaper
> alternative that meets my support and performance requirements.

No, you're going to pay through the nose for that support level.   It's how 
things work.

> tps = 369.717832 (including connections establishing)
> tps = 370.852058 (excluding connections establishing)

Doesn't seem too bad to me.   Have anything to compare it to?

What's in your postgresql.conf?

--Josh

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [PERFORM] Excessive context switching on SMP Xeons

2004-10-05 Thread Gaetano Mendola
Bill Montgomery wrote:
> All,
>
> I realize the excessive-context-switching-on-xeon issue has been
> discussed at length in the past, but I wanted to follow up and verify my
> conclusion from those discussions:
>
> On a 2-way or 4-way Xeon box, there is no way to avoid excessive
> (30,000-60,000 per second) context switches when using PostgreSQL 7.4.5
> to query a data set small enough to fit into main memory under a
> significant load.
>
> I am experiencing said symptom on two different dual-Xeon boxes, both
> Dells with ServerWorks chipsets, running the latest RH9 and RHEL3
> kernels, respectively. The databases are 90% read, 10% write, and are
> small enough to fit entirely into main memory, between pg shared buffers
> and kernel buffers.

I don't know if my box is not loaded enough, but I have a dual-Xeon box 
by DELL with HT enabled, and I'm not experiencing this kind of CS 
problem; normally our CS is around 10 per second.
# cat /proc/version
Linux version 2.4.9-e.24smp ([EMAIL PROTECTED]) (gcc version 2.96 2731 (Red Hat Linux 7.2 2.96-118.7.2)) #1 SMP Tue May 27 16:07:39 EDT 2003
# cat /proc/cpuinfo
processor   : 0
vendor_id   : GenuineIntel
cpu family  : 15
model   : 2
model name  : Intel(R) Xeon(TM) CPU 2.80GHz
stepping: 7
cpu MHz : 2787.139
cache size  : 512 KB
fdiv_bug: no
hlt_bug : no
f00f_bug: no
coma_bug: no
fpu : yes
fpu_exception   : yes
cpuid level : 2
wp  : yes
flags   : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat 
pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm
bogomips: 5557.45
processor   : 1
vendor_id   : GenuineIntel
cpu family  : 15
model   : 2
model name  : Intel(R) Xeon(TM) CPU 2.80GHz
stepping: 7
cpu MHz : 2787.139
cache size  : 512 KB
fdiv_bug: no
hlt_bug : no
f00f_bug: no
coma_bug: no
fpu : yes
fpu_exception   : yes
cpuid level : 2
wp  : yes
flags   : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat 
pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm
bogomips: 5570.56
processor   : 2
vendor_id   : GenuineIntel
cpu family  : 15
model   : 2
model name  : Intel(R) Xeon(TM) CPU 2.80GHz
stepping: 7
cpu MHz : 2787.139
cache size  : 512 KB
fdiv_bug: no
hlt_bug : no
f00f_bug: no
coma_bug: no
fpu : yes
fpu_exception   : yes
cpuid level : 2
wp  : yes
flags   : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat 
pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm
bogomips: 5570.56
processor   : 3
vendor_id   : GenuineIntel
cpu family  : 15
model   : 2
model name  : Intel(R) Xeon(TM) CPU 2.80GHz
stepping: 7
cpu MHz : 2787.139
cache size  : 512 KB
fdiv_bug: no
hlt_bug : no
f00f_bug: no
coma_bug: no
fpu : yes
fpu_exception   : yes
cpuid level : 2
wp  : yes
flags   : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat 
pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm
bogomips: 5570.56


Regards
Gaetano Mendola



---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [PERFORM] Excessive context switching on SMP Xeons

2004-10-05 Thread Bill Montgomery
Thanks for the helpful response.
Josh Berkus wrote:
> First off, the good news: Gavin Sherry and OSDL may have made some progress 
> on this.   We'll be testing as soon as OSDL gets the Scalable Test Platform 
> running again.   If you have the CS problem (which I don't think you do, see 
> below) and a test box, I'd be thrilled to have you test it.

I'd be thrilled to test it too, if for no other reason than to determine 
whether what I'm experiencing really is the "CS problem".

> 1) I don't really consider a CS of 30,000 to 60,000 on Xeon to be excessive.  
> People demonstrating the problem on dual or quad Xeon reported CS levels of 
> 150,000 or more.   So you probably don't have this issue at all -- depending 
> on the load, your level could be considered "normal".

Fair enough. I never see nearly this much context switching on my dual 
Xeon boxes running dozens (sometimes hundreds) of concurrent apache 
processes, but I'll concede this could just be due to the more parallel 
nature of a bunch of independent apache workers.

>> I am experiencing said symptom on two different dual-Xeon boxes, both
>> Dells with ServerWorks chipsets, running the latest RH9 and RHEL3
>> kernels, respectively. The databases are 90% read, 10% write, and are
>> small enough to fit entirely into main memory, between pg shared buffers
>> and kernel buffers.
>
> Ah.  Well, you do have the worst possible architecture for PostgreSQL-SMP 
> performance.   The ServerWorks chipset is badly flawed (the company is now, I 
> believe, bankrupt from recalled products) and Xeons have several performance 
> issues on databases based on online tests.

Hence my desire for recommendations on alternate architectures ;-)

> AthlonMP appears to be less susceptible to the CS bug than Xeon, and the 
> effect of the bug is not as severe.   However, a quad-Opteron box can be 
> built for less than $6000; what's your standard for "expensive"?   If you 
> don't have that much money, then you may be stuck for options.

Being a 24x7x365 shop, and these servers being mission critical, I 
require vendors that can offer 24x7 4-hour part replacement, like Dell 
or IBM. I haven't seen 4-way 64-bit boxes meeting that requirement for 
less than $20,000, and that's for a very minimally configured box. A 
suitably configured pair will likely end up costing $50,000 or more. I 
would like to avoid an unexpected expense of that size, unless there's 
no other good alternative. That said, I'm all ears for a cheaper 
alternative that meets my support and performance requirements.

> Overall, though, I'm not convinced that you have the CS bug and I think it's 
> more likely that you have a few "bad queries" which are dragging down the 
> whole system.   Troubleshoot those and your CPU-bound problems may go away.

You may be right, but to compare apples to apples, here's some vmstat 
output from a pgbench run:

[EMAIL PROTECTED] billm]$ pgbench -i -s 20 pgbench

[EMAIL PROTECTED] billm]$ pgbench -s 20 -t 500 -c 100 pgbench
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 20
number of clients: 100
number of transactions per client: 500
number of transactions actually processed: 50000/50000
tps = 369.717832 (including connections establishing)
tps = 370.852058 (excluding connections establishing)
and some of the vmstat output...
[EMAIL PROTECTED] billm]$ vmstat 1
procs            memory              swap         io        system       cpu
 r  b   swpd   free   buff  cache   si   so   bi   bo   in   cs  us sy wa id
 0  1  0 863108 220620 157192400 464   3450  1  0  0 98
 0  1  0 863092 220620 157193200 0  3144  171  2037  3  3 47 47
 0  1  0 863084 220620 157195600 0  5840  202  3702  6  3 46 45
 1  1  0 862656 220620 157242000 0 12948  631 42093 69 22  5  5
 11  0  0 862188 220620 157282800 0 12644  531 41330 70 23  2  5
 9  0  0 862020 220620 157307600 0  8396  457 28445 43 17 17 22
 9  0  0 861620 220620 157355600 0 13564  726 44330 72 22  2  5
 8  1  0 861248 220620 157398000 0 12564  660 43667 65 26  2  7
 3  1  0 860704 220624 157423600 0 14588  646 41176 62 25  5  8
 0  1  0 860440 220624 157447600 0 42184  865 31704 44 23 15 18
 8  0  0 860320 220624 157462800 0 10796  403 19971 31 10 29 29
 0  1  0 860040 220624 157488400 0 23588  654 36442 49 20 13 17
 0  1  0 859984 220624 157493200 0  4940  229  3884  5  3 45 46
 0  1  0 859940 220624 157500400 0 12140  355 13454 20 10 35 35
 0  1  0 859904 220624 157504400 0  5044  218  6922 11  5 41 43
 1  1  0 859868 220624 157505200 0  4808  199  2029  3  3 47 48
 0  1  0 859720 220624 157518000 0 21596  485 18075 28 13 29 30
 11  1  0 859372 220624 157553200 0 24520  609 41409 62 33  2  3

While pgbench does not generate quite as high a number of CS as o

Re: [PERFORM] Excessive context switching on SMP Xeons

2004-10-05 Thread Josh Berkus
Bill,

> I realize the excessive-context-switching-on-xeon issue has been
> discussed at length in the past, but I wanted to follow up and verify my
> conclusion from those discussions:

First off, the good news: Gavin Sherry  and OSDL may have made some progress 
on this.   We'll be testing as soon as OSDL gets the Scalable Test Platform 
running again.   If you have the CS problem (which I don't think you do, see 
below) and a test box, I'd be thrilled to have you test it.

> On a 2-way or 4-way Xeon box, there is no way to avoid excessive
> (30,000-60,000 per second) context switches when using PostgreSQL 7.4.5
> to query a data set small enough to fit into main memory under a
> significant load.

Hmmm ... some clarification:
1) I don't really consider a CS of 30,000 to 60,000 on Xeon to be excessive.  
People demonstrating the problem on dual or quad Xeon reported CS levels of 
150,000 or more.   So you probably don't have this issue at all -- depending 
on the load, your level could be considered "normal".

2) The problem is not limited to Xeon, Linux, or x86 architecture.   It has 
been demonstrated, for example, on 8-way Solaris machines.   It's just worse 
(and thus more noticeable) on Xeon.

> I am experiencing said symptom on two different dual-Xeon boxes, both
> Dells with ServerWorks chipsets, running the latest RH9 and RHEL3
> kernels, respectively. The databases are 90% read, 10% write, and are
> small enough to fit entirely into main memory, between pg shared buffers
> and kernel buffers.

Ah.  Well, you do have the worst possible architecture for PostgreSQL-SMP 
performance.   The ServerWorks chipset is badly flawed (the company is now, I 
believe, bankrupt from recalled products) and Xeons have several performance 
issues on databases based on online tests.

> We recently invested in a solid-state storage device
> (http://www.superssd.com/products/ramsan-320/) to help write
> performance. Our entire pg data directory is stored on it. Regrettably
> (and in retrospect, unsurprisingly) we found that opening up the I/O
> bottleneck does little for write performance when the server is under
> load, due to the bottleneck created by excessive context switching. 

Well, if you're CPU-bound, improved I/O won't help you, no.

> Is 
> the only solution then to move to a different SMP architecture such as
> Itanium 2 or Opteron? If so, should we expect to see an additional
> benefit from running PostgreSQL on a 64-bit architecture, versus 32-bit,
> context switching aside? 

Your performance will almost certainly be better for a variety of reasons on 
Opteron/Itanium.   However, I'm still not convinced that you have the CS 
bug.

> Alternatively, are there good 32-bit SMP 
> architectures to consider other than Xeon, given the high cost of
> Itanium 2 and Opteron systems?

AthlonMP appears to be less susceptible to the CS bug than Xeon, and the 
effect of the bug is not as severe.   However, a quad-Opteron box can be 
built for less than $6000; what's your standard for "expensive"?   If you 
don't have that much money, then you may be stuck for options.

> More generally, how have others scaled "up" their PostgreSQL
> environments? We will eventually have to invent some "outward"
> scalability within the logic of our application (e.g. do read-only
> transactions against a pool of Slony-I subscribers), but in the short
> term we still have an urgent need to scale upward. Thoughts? General
> wisdom?

As long as you're on x86, scaling outward is the way to go.   If you want to 
continue to scale upwards, ask Andrew Sullivan about his experiences running 
PostgreSQL on big IBM boxes.   But if you consider a quad-Opteron server 
expensive, I don't think that's an option for you.

Overall, though, I'm not convinced that you have the CS bug and I think it's 
more likely that you have a few "bad queries" which are dragging down the 
whole system.   Troubleshoot those and your CPU-bound problems may go away.

-- 
Josh Berkus
Aglio Database Solutions
San Francisco

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [PERFORM] Comparing user attributes with bitwise operators

2004-10-05 Thread Josh Berkus
Patrick,

First off, thanks for posting this solution!  I love to see a new demo of The 
Power of Postgres(tm) and have been wondering about this particular problem 
since it came up on IRC.

> The array method works quite nicely, especially for the
> columns like "languages" and "seeking" that are multiple choice. However,
> even though this method is fast, I still might opt for caching the results
> because the "real world" search query involves a lot more and will be
> executed non-stop. But to have it run this fast the first time certainly
> helps.

Now, for the bad news:  you need to test having a large load of users updating 
their data.   The drawback to GiST indexes is that they are low-concurrency, 
because the updating process needs to lock the whole index (this has been on 
our TODO list for about a decade, but it's a hard problem).   
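
For readers who haven't seen the array approach being discussed, a minimal
sketch of what such a schema and GiST index might look like, assuming the
contrib/intarray operator classes (the table and column names here are
hypothetical, not Patrick's actual schema):

    -- requires contrib/intarray for gist__int_ops and the && overlap operator
    CREATE TABLE profiles (uid integer PRIMARY KEY, languages integer[], seeking integer[]);
    CREATE INDEX profiles_languages_idx ON profiles USING gist (languages gist__int_ops);
    SELECT uid FROM profiles WHERE languages && '{1,2}'::integer[];

It is updates against exactly this kind of GiST index that run into the
concurrency limit described above.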

-- 
Josh Berkus
Aglio Database Solutions
San Francisco

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


[PERFORM] Excessive context switching on SMP Xeons

2004-10-05 Thread Bill Montgomery
All,
I realize the excessive-context-switching-on-xeon issue has been 
discussed at length in the past, but I wanted to follow up and verify my 
conclusion from those discussions:

On a 2-way or 4-way Xeon box, there is no way to avoid excessive 
(30,000-60,000 per second) context switches when using PostgreSQL 7.4.5 
to query a data set small enough to fit into main memory under a 
significant load.

I am experiencing said symptom on two different dual-Xeon boxes, both 
Dells with ServerWorks chipsets, running the latest RH9 and RHEL3 
kernels, respectively. The databases are 90% read, 10% write, and are 
small enough to fit entirely into main memory, between pg shared buffers 
and kernel buffers.

We recently invested in a solid-state storage device 
(http://www.superssd.com/products/ramsan-320/) to help write 
performance. Our entire pg data directory is stored on it. Regrettably 
(and in retrospect, unsurprisingly) we found that opening up the I/O 
bottleneck does little for write performance when the server is under 
load, due to the bottleneck created by excessive context switching. Is 
the only solution then to move to a different SMP architecture such as 
Itanium 2 or Opteron? If so, should we expect to see an additional 
benefit from running PostgreSQL on a 64-bit architecture, versus 32-bit, 
context switching aside? Alternatively, are there good 32-bit SMP 
architectures to consider other than Xeon, given the high cost of 
Itanium 2 and Opteron systems?

More generally, how have others scaled "up" their PostgreSQL 
environments? We will eventually have to invent some "outward" 
scalability within the logic of our application (e.g. do read-only 
transactions against a pool of Slony-I subscribers), but in the short 
term we still have an urgent need to scale upward. Thoughts? General wisdom?

Best Regards,
Bill Montgomery
---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [PERFORM] Caching of Queries

2004-10-05 Thread Matt Clark

> I don't know what you are exactly referring to in above URL 
> when you are talking about "potential pitfalls of pooling". 
> Please explain more.

Sorry, I wasn't implying that pgpool doesn't deal with the issues, just that
some people aren't necessarily aware of them up front.  For instance, pgpool
does an 'abort transaction' and a 'reset all' in lieu of a full reconnect
(of course, since a full reconnect is exactly what we are trying to avoid).
Is this enough to guarantee that a given pooled connection behaves
exactly as a non-pooled connection would from a client perspective?  For
instance, temporary tables are usually dropped at the end of a session, so a
client (badly coded perhaps) that does not already use persistent
connections might be confused when the sequence 'connect, create temp table
foo ..., disconnect, connect, create temp table foo ...' results in the
error 'Relation 'foo' already exists'.
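
A minimal illustration of that pitfall, assuming a hypothetical pooler that
hands the same backend to two successive client "sessions" and only issues
ABORT and RESET ALL between them:

    CREATE TEMP TABLE foo (id integer);   -- first client "session"
    -- client disconnects; the pooler keeps the backend alive, and RESET ALL
    -- does not drop temporary tables
    CREATE TEMP TABLE foo (id integer);   -- next "session", same backend
    -- ERROR:  relation "foo" already exists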



---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])