Marinos,

What if you construct a script free of apachebench & Co and see if
the issue still exists? There could be many issues that are not
connected to postgresql and tsearch2.

Oleg
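
Oleg's suggestion (take apachebench and the web stack out of the loop) could be sketched with a small driver like this. It is only a sketch: `bench` and the stubbed `run_query` callable are hypothetical names, and with psycopg2 installed you would pass in a function that opens its own connection and executes the query from the thread below.

```python
# Hypothetical harness: fire the same query from N threads, no apachebench.
# The DB call is injected as a callable so the harness itself has no
# dependencies; each worker should use its own connection.
import threading
import time

def bench(run_query, n_threads=10, n_iters=5):
    """Call run_query() concurrently and collect per-call latencies (seconds)."""
    latencies = []
    lock = threading.Lock()

    def worker():
        for _ in range(n_iters):
            t0 = time.perf_counter()
            run_query()
            dt = time.perf_counter() - t0
            with lock:
                latencies.append(dt)

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return latencies
```

If the per-call latencies climb the same way here as under apachebench, the contention is inside the database, not in apache.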

On Thu, 3 Feb 2005, Marinos J. Yannikos wrote:

Oleg Bartunov wrote:
> On Thu, 3 Feb 2005, Marinos J. Yannikos wrote:
>> concurrent access to GiST indexes isn't possible at the moment. I [...]
>
> there should be no problem with READ access.
OK, thanks everyone (perhaps it would make sense to clarify this in the manual).


> I'd like to see some details: version, query, explain analyze.

8.0.0

Query while the box is idle:

explain analyze select count(*) from fr_offer o, fr_merchant m where idxfti @@ to_tsquery('ranz & mc') and eur >= 70 and m.m_id=o.m_id;

 Aggregate  (cost=2197.48..2197.48 rows=1 width=0) (actual time=88.052..88.054 rows=1 loops=1)
   ->  Merge Join  (cost=2157.42..2196.32 rows=461 width=0) (actual time=88.012..88.033 rows=3 loops=1)
         Merge Cond: ("outer".m_id = "inner".m_id)
         ->  Index Scan using fr_merchant_pkey on fr_merchant m  (cost=0.00..29.97 rows=810 width=4) (actual time=0.041..1.233 rows=523 loops=1)
         ->  Sort  (cost=2157.42..2158.57 rows=461 width=4) (actual time=85.779..85.783 rows=3 loops=1)
               Sort Key: o.m_id
               ->  Index Scan using idxfti_idx on fr_offer o  (cost=0.00..2137.02 rows=461 width=4) (actual time=77.957..85.754 rows=3 loops=1)
                     Index Cond: (idxfti @@ '\'ranz\' & \'mc\''::tsquery)
                     Filter: (eur >= 70::double precision)
 Total runtime: 88.131 ms

now, while using apachebench (-c10), "top" says this:

Cpu0  : 15.3% us, 10.0% sy,  0.0% ni, 74.7% id,  0.0% wa,  0.0% hi,  0.0% si
Cpu1  : 13.3% us, 11.6% sy,  0.0% ni, 75.1% id,  0.0% wa,  0.0% hi,  0.0% si
Cpu2  : 16.9% us,  9.6% sy,  0.0% ni, 73.4% id,  0.0% wa,  0.0% hi,  0.0% si
Cpu3  : 18.7% us, 14.0% sy,  0.0% ni, 67.0% id,  0.0% wa,  0.0% hi,  0.3% si

(this is with shared_buffers = 2000; a larger setting makes almost no difference for overall performance: according to "top", system time drops to ~0 and user time rises to ~25%, but the system still stays 70-75% idle)

vmstat:

 r  b swpd    free   buff   cache  si so bi  bo   in   cs us sy id wa
 2  0    0 8654316  64908 4177136   0  0 56  35  279  286  5  1 94  0
 2  0    0 8646188  64908 4177136   0  0  0   0 1156 2982 15 10 75  0
 2  0    0 8658412  64908 4177136   0  0  0   0 1358 3098 19 11 70  0
 1  0    0 8646508  64908 4177136   0  0  0 104 1145 2070 13 12 75  0


so the script's execution speed is apparently not limited by the CPUs.

The query execution times go up like this while apachebench is running (and the system is 75% idle):

 Aggregate  (cost=2197.48..2197.48 rows=1 width=0) (actual time=952.661..952.663 rows=1 loops=1)
   ->  Merge Join  (cost=2157.42..2196.32 rows=461 width=0) (actual time=952.621..952.641 rows=3 loops=1)
         Merge Cond: ("outer".m_id = "inner".m_id)
         ->  Index Scan using fr_merchant_pkey on fr_merchant m  (cost=0.00..29.97 rows=810 width=4) (actual time=2.078..3.338 rows=523 loops=1)
         ->  Sort  (cost=2157.42..2158.57 rows=461 width=4) (actual time=948.345..948.348 rows=3 loops=1)
               Sort Key: o.m_id
               ->  Index Scan using idxfti_idx on fr_offer o  (cost=0.00..2137.02 rows=461 width=4) (actual time=875.643..948.301 rows=3 loops=1)
                     Index Cond: (idxfti @@ '\'ranz\' & \'mc\''::tsquery)
                     Filter: (eur >= 70::double precision)
 Total runtime: 952.764 ms


I can't seem to find out where the bottleneck is, but it doesn't seem to be CPU or disk. "top" shows that postgres processes are frequently in this state:

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  WCHAN     COMMAND
 6701 postgres  16   0  204m  58m  56m S  9.3  0.2   0:06.96 semtimedo postmaste
                                                             ^^^^^^^^^
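
A semtimedop wait channel means the backends are sleeping on a semaphore, i.e. blocked waiting for a lock, not for CPU or disk. One way to confirm this (an assumption on my part, using the standard pg_locks view) would be to look for ungranted lock requests while apachebench is running:

```sql
-- Any rows here are backends waiting on a lock (hypothetical check).
SELECT pid, relation::regclass, mode, granted
FROM pg_locks
WHERE NOT granted;
```

If the waiting entries point at idxfti_idx, that would implicate lock contention on the GiST index itself.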

Any hints are appreciated...

Regards,
Marinos


Regards,
    Oleg
_____________________________________________________________
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83
