Any suggestions? FYI the original question wasn't meant as a poke at comparing PG to MySQL to DB2. I'm not making any value judgements either way. I'm just trying to understand how we can use it the best way possible.
Actually we just thought about something. With PG, we can create an index that
I'd have to agree with you about the specific SAN/setup you're working
with there. I certainly disagree that it's a general property of SANs,
though. We've got a DS4300 with FC controllers and drives, hosts are
generally dual-controller load-balanced and it works quite decently.
How are you guys doing
I have a producer/consumer setup where various producer processes
insert new records into a table and consumer processes mark those
records as having been handled when they have dealt with them, but
leave the records in the table so that we can generate reports later.
The records are added
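A minimal sketch of such a producer/consumer table (all names here are hypothetical, and SQLite is used as a stand-in for PostgreSQL, which supports the same partial-index idea): a partial index over only the unhandled rows keeps the consumers' lookups fast even as handled rows accumulate for later reporting.

```python
import sqlite3

# Hypothetical schema for the producer/consumer table described above.
# The partial index covers only rows with handled = 0, so consumer
# lookups stay cheap no matter how many handled rows pile up.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE jobs (
        id      INTEGER PRIMARY KEY,
        payload TEXT NOT NULL,
        handled INTEGER NOT NULL DEFAULT 0
    );
    CREATE INDEX jobs_unhandled_idx ON jobs (id) WHERE handled = 0;
""")

# Producers insert; consumers mark rows handled but leave them in place.
conn.executemany("INSERT INTO jobs (payload) VALUES (?)",
                 [("a",), ("b",), ("c",)])
conn.execute("UPDATE jobs SET handled = 1 WHERE id = 1")

pending = conn.execute(
    "SELECT count(*) FROM jobs WHERE handled = 0").fetchone()[0]
print(pending)  # 2
```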
Thanks! This is exactly what I was looking for.
Benjamin Arai
[EMAIL PROTECTED]
http://www.benjaminarai.com
-Original Message-
From: Bruno Wolff III [mailto:[EMAIL PROTECTED]
Sent: Friday, June 16, 2006 11:56 AM
To: Benjamin Arai
Cc: pgsql-performance@postgresql.org
Subject: Re: Question
I've recently configured a new high-performance database server:
2xXeon 3.4G, 2G RAM, 4x15K SCSI disks in RAID 10, h/w RAID
This has been live for a couple of weeks.
The box is running Fedora Core 4.
The only thing running on this box is PostgreSQL 8.1.4 and some stub
applications
We just purchased a new Dell PowerEdge 2800 (dual xeon, 8GB RAM, raid 4,
RHEL, postgres 8.1) and ported our old database over to it (single cpu,
2GB RAM, no raid, postgres 7.4). Our apps perform great on it, however
some queries are super slow. One function in particular, which used to
take
Jim Nasby wrote:
On Jun 16, 2006, at 12:01 PM, Josh Berkus wrote:
First thing as soon as I have a login, of course, is to set up a
Buildfarm
instance.
Keep in mind that buildfarm clients and benchmarking stuff don't
usually mix well.
On a fast machine like this a buildfarm run is
Hello again,
thanks for all the quick replies.
It seems I wasn't entirely correct in my previous post; I've mixed up
some times/numbers.
Below are the correct numbers:
MSSQL: SELECT COUNT(*) from JOIN (without insert) 17 minutes
PostgreSQL: SELECT COUNT(*) from JOIN (without insert) 33
Hi Nicky,
Did you try creating an index to avoid the sequential scans?
Seq Scan on src_faktuur_verrsec t0...
I think, you should try
CREATE INDEX src_faktuur_verrsec_codesubstr ON
src.src_faktuur_verrsec (substr(code, 1, 2));
Cheers
Sven.
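Sven's suggestion can be sketched as a runnable demo (table contents are made up, and SQLite stands in for PostgreSQL here, since both support indexes on expressions): with an index on substr(code, 1, 2), a predicate written on that exact expression can be answered from the index instead of a sequential scan.

```python
import sqlite3

# Illustrative sketch of the suggested expression index: the index key
# is substr(code, 1, 2), so a WHERE clause on that same expression can
# use the index. The table and its rows are hypothetical.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE src_faktuur_verrsec (id INTEGER PRIMARY KEY, code TEXT);
    CREATE INDEX src_faktuur_verrsec_codesubstr
        ON src_faktuur_verrsec (substr(code, 1, 2));
""")
conn.executemany("INSERT INTO src_faktuur_verrsec (code) VALUES (?)",
                 [("14A",), ("15B",), ("20C",)])

rows = conn.execute("""
    SELECT count(*) FROM src_faktuur_verrsec
    WHERE substr(code, 1, 2) IN ('14', '15')
""").fetchone()[0]
print(rows)  # 2
```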
nicky schrieb:
Hello
Hello Sven,
We have the following indexes on src_faktuur_verrsec:
CREATE INDEX src_faktuur_verrsec_idx0
ON src.src_faktuur_verrsec
USING btree
(id);
CREATE INDEX src_faktuur_verrsec_idx1
ON src.src_faktuur_verrsec
USING btree
(substr(code::text, 1, 2));
PostgreSQL elects not to use them, I assume because it most
likely needs to traverse the entire table anyway.
If I change: substr(t0.code,1,2) not in ('14','15','16','17')
to (removing the NOT): substr(t0.code,1,2) in ('14','15','16','17')
it uses the index,
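That behaviour is consistent with selectivity-based planning, which can be sketched with back-of-envelope arithmetic (the numbers and the simplified cost model below are hypothetical, not PostgreSQL's actual costing): an index scan pays a random-I/O price per matching row, while a sequential scan reads every page regardless of how many rows match.

```python
# Toy cost comparison (all numbers hypothetical). When a predicate
# matches most of the table, per-row random I/O through the index
# costs more than just reading every page sequentially.
def cheaper_plan(rows, pages, selectivity,
                 seq_page_cost=1.0, random_page_cost=4.0):
    seq_cost = pages * seq_page_cost
    index_cost = rows * selectivity * random_page_cost
    return "index scan" if index_cost < seq_cost else "seq scan"

# IN ('14','15','16','17') matches few rows: the index wins.
print(cheaper_plan(1_000_000, 10_000, 0.001))  # index scan
# NOT IN (...) matches nearly everything: a full table read is cheaper.
print(cheaper_plan(1_000_000, 10_000, 0.9))    # seq scan
```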
Hi Nick,
I'm not the best person to advise on how to get PostgreSQL to use an index
to get your results faster.
Did you try not (substr(t0.code,1,2) in ('14','15','16','17'))?
Cheers
Sven.
nicky schrieb:
Hello Sven,
We have the following indexes on src_faktuur_verrsec
CREATE INDEX
Arjen van der Meijden
wrote:
Here is a graph of our performance measured on PostgreSQL:
http://achelois.tweakers.net/~acm/pgsql-t2000/T2000-schaling-postgresql.png
...
The "perfect" line is based on the "Max" value for 1 core and then just
multiplied by the number of cores to have
On 22-6-2006 15:03, David Roussel wrote:
Surely the 'perfect' line ought to be linear? If the performance was
perfectly linear, then the 'pages generated' ought to be G times the
number (virtual) processors, where G is the gradient of the graph. In
such a case the graph will go through the
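The point about linearity can be sketched numerically (the throughput figures below are hypothetical): the perfect line is just the single-core "Max" value multiplied by the core count, so it is linear and passes through the origin.

```python
# Perfect-scaling line: throughput at n cores equals the single-core
# maximum times n. The figure 100.0 is a made-up single-core value.
def perfect_line(max_single_core, cores):
    return [max_single_core * n for n in range(1, cores + 1)]

print(perfect_line(100.0, 4))  # [100.0, 200.0, 300.0, 400.0]
```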
soni de [EMAIL PROTECTED] writes:
I am getting following error while inserting a row into the abc table:
ERROR: fmgr_info: function 2720768: cache lookup failed
What PG version is this? (I can tell from the spelling of the error
message that it's older than 7.4.) If it's pre-7.3 then the
Arjen van der Meijden wrote:
First of all, this graph has no origin. It's a bit difficult to test with
less than one CPU.
Sure it does. I ran all the tests. They all took infinite time, and I got
zero results. And my results are 100% accurate and reliable. It's perfectly
valid data. :-)
Richard Frith-Macdonald [EMAIL PROTECTED] writes:
I have a producer/consumer setup where various producer processes
insert new records into a table and consumer processes mark those
records as having been handled when they have dealt with them, but
leave the records in the table so that
Ron St-Pierre [EMAIL PROTECTED] writes:
We just purchased a new Dell PowerEdge 2800 (dual xeon, 8GB RAM, raid 4,
RHEL, postgres 8.1) and ported our old database over to it (single cpu,
2GB RAM, no raid, postgres 7.4). Our apps perform great on it, however
some queries are super slow. One
Peter Wilson [EMAIL PROTECTED] writes:
I'd tweaked a couple of parameters in postgres.conf - the significant one I
thought being random_page_cost, so I changed this back to default and did a
'service postgresql reload' - no difference, but I wasn't sure whether this
could be changed via
On Jun 21, 2006, at 13:08, Tom Lane wrote:
There are some reports in the archives of particular usage patterns
where they pretty much suck, because GetDomainConstraints() searches
pg_constraint every time it's called. We do what we can to avoid
calling that multiple times per query, but for
On Jun 21, 2006, at 18:19, Josh Berkus wrote:
Well, current case-insensitivity hacks definitely aren't compatible with
LIKE as far as 'begins with' indexes are concerned.
Yes, currently I use LOWER() for my indexes and for all LIKE, =, etc.
queries. This works well, but ORDER BY of course
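The LOWER() approach mentioned above can be sketched as follows (table and data are hypothetical, and SQLite stands in for PostgreSQL, which supports the same expression-index idea; in PostgreSQL, prefix LIKE over such an index additionally wants text_pattern_ops or the C locale): index the lowercased column and compare lowercased values on both sides.

```python
import sqlite3

# Case-insensitive matching via an index on lower(name): both the index
# key and the query predicate use the same lower() expression, so the
# comparison is case-insensitive and indexable. Data is made up.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (name TEXT);
    CREATE INDEX users_name_lower_idx ON users (lower(name));
""")
conn.executemany("INSERT INTO users VALUES (?)",
                 [("Alice",), ("ALICE",), ("Bob",)])

n = conn.execute(
    "SELECT count(*) FROM users WHERE lower(name) = lower('aLiCe')"
).fetchone()[0]
print(n)  # 2
```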
On Jun 21, 2006, at 19:24, Michael Glaesemann wrote:
I haven't seen it mentioned in this thread yet, but have you looked
at citext?
http://gborg.postgresql.org/project/citext/projdisplay.php
I don't have any experience with it, but perhaps it can do what
you're looking for.
Yes, I've
How can I speed up the following query? It seems to run forever.
explain SELECT
bilkaib.DB,
CASE WHEN dbkonto.objekt1='+' THEN bilkaib.DBOBJEKT ELSE '' END AS dbobjekt,
CASE WHEN dbkonto.objekt2='+' THEN bilkaib.DB2OBJEKT ELSE '' END AS
db2objekt,
CASE WHEN dbkonto.objekt3='+' THEN bilkaib.DB3OBJEKT
Andrus [EMAIL PROTECTED] writes:
How can I speed up the following query? It seems to run forever.
explain SELECT
bilkaib.DB,
CASE WHEN dbkonto.objekt1='+' THEN bilkaib.DBOBJEKT ELSE '' END AS dbobjekt,
CASE WHEN dbkonto.objekt2='+' THEN bilkaib.DB2OBJEKT ELSE '' END AS
db2objekt,
CASE WHEN
On Jun 21, 2006, at 5:53 PM, Ron St-Pierre wrote:
Jim C. Nasby wrote:
On Wed, Jun 21, 2006 at 02:27:41PM -0700, Ron St-Pierre wrote:
We just purchased a new Dell PowerEdge 2800 (dual xeon, 8GB RAM,
raid 4, RHEL, postgres 8.1) and ported our old database over to
it (single cpu,
RAID *4*?
Meetesh Karia [EMAIL PROTECTED] writes:
... But, once again we ran into the same
situation where a query that normally executes in ~15ms wouldn't finish. As
before, there were no ungranted locks and threads weren't waiting on a
lock. I attached gdb to one of the stuck postgres processes and
Richard Frith-Macdonald [EMAIL PROTECTED] writes:
What has confused me is why a query using an empty index should be
slow, irrespective of the state of the table that the index applies to.
Is it actually empty, or have you just deleted-and-not-yet-vacuumed
all the rows in the index?
I had
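Tom's deleted-but-not-yet-vacuumed point can be demonstrated in miniature (SQLite stands in for PostgreSQL here, and the file and row counts are made up): after deleting every row, the table is logically empty but its pages are not reclaimed until a VACUUM runs, so scans over "empty" data are not free.

```python
import os
import sqlite3
import tempfile

# Deleted rows free no file pages until VACUUM: page_count stays high
# after DELETE and drops only once VACUUM rewrites the file.
fd, path = tempfile.mkstemp(suffix=".db")
os.close(fd)
conn = sqlite3.connect(path, isolation_level=None)  # autocommit
conn.execute("CREATE TABLE t (x TEXT)")
conn.executemany("INSERT INTO t VALUES (?)", [("x" * 500,)] * 5000)
conn.execute("DELETE FROM t")  # table is now logically empty...
before = conn.execute("PRAGMA page_count").fetchone()[0]
conn.execute("VACUUM")         # ...but space is reclaimed only here
after = conn.execute("PRAGMA page_count").fetchone()[0]
print(before > after)  # True
conn.close()
os.remove(path)
```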