Yeah, I did some more testing myself, and actually get better numbers
by increasing spins per delay to 1000, but my suspicion is that it is
highly dependent on finding the right delay for the processor you are
on.
My hypothesis is that if you spin approximately the same or more time
than the ave
Pailloncy Jean-Gérard <[EMAIL PROTECTED]> writes:
> dps=# explain analyze SELECT rec_id FROM url WHERE crc32!=0 AND
> crc32=419903683 AND status IN (200,304,206) ORDER BY rec_id LIMIT 1;
> QUERY PLAN
> ---
Bruce Momjian <[EMAIL PROTECTED]> writes:
> For BSDOS it has:
> #if (CLIENT_OS == OS_FREEBSD) || (CLIENT_OS == OS_BSDOS) || \
> (CLIENT_OS == OS_OPENBSD) || (CLIENT_OS == OS_NETBSD)
> { /* comment out if inappropriate for your *bsd - cyp (25/may/1999) */
> int ncpus; size_t l
Tom Lane wrote:
> Dave Cramer <[EMAIL PROTECTED]> writes:
> > I tried increasing the NUM_SPINS to 1000 and it works better.
>
> Doesn't surprise me. The value of 100 is about right on the assumption
> that the spinlock instruction per se is not too much more expensive than
> any other instruction
Yeah. I don't know a reasonable way to tune this number automatically
for particular systems ... but at the very least we'd need to find a way
to distinguish uniprocessor from multiprocessor, because on a
uniprocessor the optimal value is surely 1.
From TODO:
* Add code to detect an SMP machine a
Dave Cramer <[EMAIL PROTECTED]> writes:
> I tried increasing the NUM_SPINS to 1000 and it works better.
Doesn't surprise me. The value of 100 is about right on the assumption
that the spinlock instruction per se is not too much more expensive than
any other instruction. What I was seeing from op
Dave Cramer <[EMAIL PROTECTED]> writes:
> diff -c -r1.16 s_lock.c
> *** backend/storage/lmgr/s_lock.c 8 Aug 2003 21:42:00 - 1.16
> --- backend/storage/lmgr/s_lock.c 21 Apr 2004 20:27:34 -
> ***
> *** 76,82
>* The select() delays are measured in centiseconds
Kenneth Marshall <[EMAIL PROTECTED]> writes:
> If the context swap storm derives from LWLock contention, maybe using
> a random order to assign buffer locks in buf_init.c would prevent
> simple adjacency of buffer allocation to cause the storm.
Good try, but no cigar ;-). The test cases I've been
On Tue, 2004-04-20 at 17:27, Joshua D. Drake wrote:
> > -Currently, the internal RAID volume is ext3 filesystem. Any
> > recommendations for the filesystem on the new FC volume? ReiserFS?
> >
> >
> XFS
What Linux distributions are popular here for PG+XFS?
I'm very disappointed that Redhat
attached.
--
Dave Cramer
519 939 0336
ICQ # 14675561
Index: backend/storage/lmgr/s_lock.c
===================================================================
RCS file: /usr/local/cvs/pgsql-server/src/backend/storage/lmgr/s_lock.c,v
retrieving revision 1.16
diff -c -r1.16 s_lock.c
*** backend/stora
Josh Berkus wrote:
Folks,
I've sent a polite e-mail to Mr. Gomez offering our help. Please, nobody
flame him!
Please keep in mind that the entire test has nothing to do with TPC-H,
other than perhaps a similar database schema and query types. I
don't see any kind of SUT. Foreign key support
FYI,
I am doing my testing on non hyperthreading dual athlons.
Also, the test and set is attempting to set the same resource, and not
simply a bit. It's really a lock;xchg in assembler.
Also we are using the PAUSE mnemonic, so we should not be seeing any
cache coherency issues, as the cache i
"Chris Hoover" <[EMAIL PROTECTED]> writes:
> I was taking heap_blks_hit / heap_blks_read for my hit pct.
> It should be heap_blks_hit/(heap_blks_read+heap_blks_hit), correct?
Right.
regards, tom lane
---(end of broadcast)---
Paul Thomas wrote:
Looks like he's using the default postgresql.conf settings in which
case I'm not surprised at pg looking so slow. His stated use of foreign
keys invalidates the tests anyway as MyISAM tables don't support FKs
so we're probably seeing FK check overheads in pg that are simply
i
Paul Tuckfield <[EMAIL PROTECTED]> writes:
> I wonder do the threads stall so badly when pinging cache lines back
> and forth, that the kernel sees it as an opportunity to put the
> process to sleep? or do these worst case misses cause an interrupt?
No; AFAICS the kernel could not even be aware
Dave:
Why would test and set increase context switches:
Note that it *does not increase* context switches when the two threads
are on the two cores of a single Xeon processor. (use taskset to force
affinity on linux)
Scenario:
If the two test and set processes are testing and setting the same bi
Edoardo Ceccarelli <[EMAIL PROTECTED]> writes:
> I wasn't able to make this 2 field index with lower:
> dba400=# CREATE INDEX annuncio400_rubric_testo_idx ON
> annuncio400(rubric, LOWER(testo));
> ERROR: parser: parse error at or near "(" at character 71
> seems impossible to create 2 field inde
Folks,
I've sent a polite e-mail to Mr. Gomez offering our help. Please, nobody
flame him!
--
Josh Berkus
Aglio Database Solutions
San Francisco
Dave,
> After some testing if you use the current head code for s_lock.c which
> has some mods in it to alleviate this situation, and change
> SPINS_PER_DELAY to 10 you can drastically reduce the cs with tom's test.
> I am seeing a slight degradation in throughput using pgbench -c 10 -t
> 1000 but
I can't understand this policy:
dba400=# SELECT count(*) from annuncio400 where rubric='DD';
count
---
6753
(1 row)
dba400=# SELECT count(*) from annuncio400 where rubric='MA';
count
---
2165
(1 row)
so it's using the index for 2000 rows and not for 6000? It's not that
big a difference, isn
I think I have figured my problem out.
I was taking heap_blks_hit / heap_blks_read for my hit pct.
It should be heap_blks_hit/(heap_blks_read+heap_blks_hit), correct?
Thanks
On Wednesday 21 April 2004 11:34, Chris Hoover wrote:
> I just want to make sure that I am interpreting this data correctl
Hello,
I have a dual-PIII server with 2Gb of RAM running Debian and PostgreSQL 7.4.
What are the best settings for shared buffers, sort memory and
effective cache size?
My main database has a small/mid range size: some tables may have 1 or 2
million records.
Thanks
Frédéric Robin
I just want to make sure that I am interpreting this data correctly.
From pg_statio_user_tables, I have pulled relname, heap_blks_read,
heap_blks_hit. I get several rows like this:
relname heap_blks_read heap_blks_hit
clmhdr 8607161
After some testing if you use the current head code for s_lock.c which
has some mods in it to alleviate this situation, and change
SPINS_PER_DELAY to 10 you can drastically reduce the cs with tom's test.
I am seeing a slight degradation in throughput using pgbench -c 10 -t
1000 but it might be live
On Wed, 21 Apr 2004, Edoardo Ceccarelli wrote:
>
> > What happens if you go:
> >
> > CREATE INDEX annuncio400_rubric_testo_idx ON annuncio400(rubric,
> > LOWER(testo));
> >
> > or even just:
> >
> > CREATE INDEX annuncio400_rubric_testo_idx ON annuncio400(LOWER(testo));
> >
> I wasn't able to mak
On 21/04/2004 14:31 Cestmir Hybl wrote:
> Looks like he's using the default postgresql.conf settings in which case
> I'm not surprised at pg looking so slow.
The question also is, IMHO, why the hell PostgreSQL still comes out of the
box with such stupid configuration defaults, totally underestimated
> Looks like he's using the default postgresql.conf settings in which case
> I'm not surprised at pg looking so slow.
The question also is, IMHO, why the hell PostgreSQL still comes out of the
box with such stupid configuration defaults, totally underestimated for today's
average hardware configuration
On 21/04/2004 09:31 Nick Barr wrote:
Hi,
Has anyone had a look at:
http://people.ac.upc.es/zgomez/
I realize that MySQL & PG cannot really be compared (especially when you
consider the issues that MySQL has with things like data integrity) but
still surely PG would perform better than the stat
On Wed, 2004-04-21 at 08:19, Rod Taylor wrote:
> > I realize that MySQL & PG cannot really be compared (especially when you
> > consider the issues that MySQL has with things like data integrity) but
> > still surely PG would perform better than the stats show (i.e. #7 31.28
> > seconds versus 4
> I realize that MySQL & PG cannot really be compared (especially when you
> consider the issues that MySQL has with things like data integrity) but
> still surely PG would perform better than the stats show (i.e. #7 31.28
> seconds versus 42 minutes!!!).
We know that PostgreSQL 7.5 will perfor
It is intended to run indefinitely.
Dirk
[EMAIL PROTECTED] wrote:
How long is this test supposed to run?
I've launched just 1 for testing, the plan seems horrible; the test is cpu
bound and hasn't finished yet after 17:02 min of CPU time, dual XEON 2.6G
Unixware 713
The machine is a Fujitsu-Siemens TX 200 server
I just created a copy of the same database and it shows that it's the
ANALYZE that's messing things up:
Slow seqscan query executed on dba400
dba400=# explain analyze SELECT *, oid FROM annuncio400 WHERE rubric =
'DD' AND LOWER(testo) Like LOWER('cbr%') OFFSET 0 LIMIT 11;
How long is this test supposed to run?
I've launched just 1 for testing, the plan seems horrible; the test is cpu
bound and hasn't finished yet after 17:02 min of CPU time, dual XEON 2.6G
Unixware 713
The machine is a Fujitsu-Siemens TX 200 server
On Mon, 19 Apr 2004, Tom Lane wrote:
> Date: Mo
enable_seqscan = false
and I'm having all index scans, timing has improved from 600ms to 18ms
wondering what other implications I might expect.
Lots of really bad implications...it's really not a good idea.
Chris
I tried the
enable_seqscan = false
and I'm having all index scans, timing has improved from 600ms to 18ms
wondering what other implications I might expect.
Edoardo Ceccarelli ha scritto:
What happens if you go:
CREATE INDEX annuncio400_rubric_testo_idx ON annuncio400(rubric,
LOWER(testo));
What happens if you go:
CREATE INDEX annuncio400_rubric_testo_idx ON annuncio400(rubric,
LOWER(testo));
or even just:
CREATE INDEX annuncio400_rubric_testo_idx ON annuncio400(LOWER(testo));
I wasn't able to make this 2 field index with lower:
dba400=# CREATE INDEX annuncio400_rubric_testo_id
dba400=# explain analyze SELECT *, oid FROM annuncio400 WHERE rubric =
'DD' AND LOWER(testo) Like LOWER('cbr%') OFFSET 0 LIMIT 11;
QUERY PLAN
---
In general we are going to need more information, like what kind of
search filters you are using on the text field and an EXPLAIN ANALYZE.
But can you try and run the following, bearing in mind it will take a
while to complete.
REINDEX TABLE
From what I remember there were issues with index
Hi,
Has anyone had a look at:
http://people.ac.upc.es/zgomez/
I realize that MySQL & PG cannot really be compared (especially when you
consider the issues that MySQL has with things like data integrity) but
still surely PG would perform better than the stats show (i.e. #7 31.28
seconds versus
Edoardo Ceccarelli wrote:
My first post to this list :)
Scenario:
I have a database used only for search queries, with only one table that
holds about 450,000/500,000 records.
The table is well indexed so that most of the queries are executed with
index scan but since there is a big text field in
Hi Edoardo,
The table is well indexed so that most of the queries are executed with
index scan but since there is a big text field in the table (360 chars)
some search operation (with certain filters) ends up with seq scans.
Please paste the exact SELECT query that uses a seqscan, plus the
EXPLAIN
My first post to this list :)
Scenario:
I have a database used only for search queries, with only one table that
holds about 450,000/500,000 records.
The table is well indexed so that most of the queries are executed with
index scan but since there is a big text field in the table (360 chars)
some