with the -c option to include CPU times
helps to put it in the right perspective.
Also do check the tunables mentioned and make sure they are set.
Regards,
Jignesh
Arjen van der Meijden wrote:
Hi Jignesh,
Jignesh K. Shah wrote:
Hi Arjen,
Looking at your outputs...of syscall and usrcall
On 15-11-2005 15:18, Steve Wampler wrote:
Magnus Hagander wrote:
(This is after putting an index on the (id,name,value) tuple.) That outer seq scan is still annoying, but maybe this will be fast enough.
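A sketch of what that might look like (the table and value names here are made up; the real schema isn't shown in this snippet):

    -- hypothetical table and column names
    CREATE INDEX attrs_id_name_value_idx ON attrs (id, name, value);
    ANALYZE attrs;
    -- check whether the outer seq scan gets replaced by an index scan
    EXPLAIN ANALYZE
      SELECT id, name, value FROM attrs WHERE id = 42 AND name = 'colour';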
I've passed this on, along with the (strong) recommendation that they
upgrade PG.
Have
On 23-9-2005 13:05, Michael Stone wrote:
On Fri, Sep 23, 2005 at 12:21:15PM +0200, Joost Kraaijeveld wrote:
Ok, that's great, but you didn't respond to the suggestion of using COPY
INTO instead of INSERT.
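For comparison, a minimal sketch of the two approaches (file path, table and column names are invented):

    -- many separate INSERTs, each parsed and committed individually:
    INSERT INTO orders (id, customer, amount) VALUES (1, 'acme', 10.00);
    -- versus one server-side COPY that streams the whole file:
    COPY orders (id, customer, amount) FROM '/tmp/orders.csv' WITH CSV;
    -- or from the client side, in psql:
    --   \copy orders (id, customer, amount) FROM 'orders.csv' WITH CSV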
But I have no clue where to begin with determining the bottleneck (it may even be a
On 23-9-2005 15:35, Joost Kraaijeveld wrote:
On Fri, 2005-09-23 at 13:19 +0200, Arjen van der Meijden wrote:
Drop all of them and recreate them once the table is filled. Of course
that only works if you know your data will be ok (which is normal for
imports of already conforming data like
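A sketch of that drop/reload/recreate pattern (all names invented):

    DROP INDEX orders_customer_idx;
    COPY orders FROM '/tmp/orders.csv' WITH CSV;
    CREATE INDEX orders_customer_idx ON orders (customer);
    ANALYZE orders;   -- refresh the statistics after the bulk load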
assume it's in the order of days for most RAID controllers.
Best regards,
Arjen van der Meijden
On 1-9-2005 19:42, Matthew Sackman wrote:
Obviously, to me, this is a problem: I need these queries to complete in under a second. Is this unreasonable? What can I do to make this go faster? I've considered normalising the table, but I can't work out whether the slowness is in dereferencing
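A common first step (not taken from this thread; the table and columns are placeholders) is to compare the planner's estimates with what actually happens:

    EXPLAIN ANALYZE
      SELECT * FROM addresses WHERE street = 'High Street' AND city = 'London';
    -- look for estimated row counts that are far off the actual ones,
    -- and for sequential scans where an index scan was expected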
On 27-8-2005 16:27, Tom Lane wrote:
Arjen van der Meijden [EMAIL PROTECTED] writes:
Is a nested loop normally so much (3x) more costly than a hash join? Or
is it just this query that gets estimated wrongly?
There's been some discussion that we are overestimating the cost of
nestloops
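One way to see both plans for the same query (a session-local experiment; the query itself is elided here) is to disable one join method at a time:

    SET enable_hashjoin = off;
    EXPLAIN ANALYZE SELECT ...;   -- note the nested-loop estimate vs actual time
    RESET enable_hashjoin;
    SET enable_nestloop = off;
    EXPLAIN ANALYZE SELECT ...;   -- compare against the hash-join plan
    RESET enable_nestloop;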
On 27-8-2005 0:56, Tom Lane wrote:
Arjen van der Meijden [EMAIL PROTECTED] writes:
As said, it chooses sequential scans or the wrong index plans over a
perfectly good plan; that plan just isn't selected when the parameters are
tuned too well or when sequential scanning of the table is allowed.
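A quick way to check whether the planner is simply mispricing the index plan (the query is elided; the cost value below is only an example):

    SET enable_seqscan = off;     -- does the good index plan show up now?
    EXPLAIN ANALYZE SELECT ...;
    RESET enable_seqscan;
    -- if it does, lowering random_page_cost (default 4) can let the planner
    -- pick that plan on its own when the data is mostly cached:
    SET random_page_cost = 2;
    EXPLAIN ANALYZE SELECT ...;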
I
On 26-8-2005 15:05, Richard Huxton wrote:
Arjen van der Meijden wrote:
I left all the configuration stuff at the defaults, since changing
values didn't seem to have much impact. The exception was the buffers and
effective cache: increasing those made the performance worse.
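For reference, the settings in question look roughly like this in an 8.0-era postgresql.conf (the numbers below are placeholders, not values from this thread):

    shared_buffers = 10000          # number of 8 kB buffers (~80 MB)
    effective_cache_size = 100000   # 8 kB pages the OS is expected to cache (~800 MB)
    work_mem = 8192                 # kB available per sort/hash operation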
I've not looked at the rest
On 24-8-2005 16:43, Alexandre Barros wrote:
Hello,
I have a pg-8.0.3 running on Linux kernel 2.6.8, CPU Sempron 2600+,
1 GB RAM on an IDE HD (which could be called a heavy desktop). Measuring
this performance with pgbench (found in /contrib), it gave me an
average (after several runs) of
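For context, a typical pgbench session from contrib looks something like this (scale factor and client counts here are arbitrary):

    createdb pgbench
    pgbench -i -s 10 pgbench        # initialize with scale factor 10 (~1M rows)
    pgbench -c 10 -t 1000 pgbench   # 10 clients, 1000 transactions each
    # the reported "tps" figure is the number usually quoted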
happily send a copy to anyone interested.
Best regards,
Arjen van der Meijden
On 6-4-2005 19:04, Steve Atkins wrote:
On Wed, Apr 06, 2005 at 06:52:35PM +0200, Arjen van der Meijden wrote:
Hi list,
I noticed on a forum a query taking a surprisingly large amount of time
in MySQL. Of course I wanted to prove PostgreSQL 8.0.1 could do it much
better. To my surprise PostgreSQL
On 6-4-2005 19:42, Tom Lane wrote:
Arjen van der Meijden [EMAIL PROTECTED] writes:
I noticed on a forum a query taking a surprisingly large amount of time
in MySQL. Of course I wanted to prove PostgreSQL 8.0.1 could do it much
better. To my surprise PostgreSQL was ten times worse on the same
. In the case of the non-pooled setup, you'd still have 40 db-connections.
In a simple test I did, though, pgpool seemed to have quite some overhead.
So it should be tested well, to find out where the turnover point is
at which it becomes a gain instead of a loss...
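To make that connection arithmetic concrete, a sketch of the relevant pgpool settings (parameter names as in pgpool.conf; the values are just an example matching the 40-connection case):

    num_init_children = 40   # pgpool child processes accepting client connections
    max_pool = 1             # cached backend connections per child
    # backend connections held = num_init_children * max_pool = 40,
    # so even the pooled setup keeps 40 db-connections open in this configuration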
Best regards,
Arjen van der Meijden
Greg Stark wrote:
Arjen van der Meijden [EMAIL PROTECTED] writes:
Was this the select with the CASE, or the update?
It was just the select, to see how long it'd take. I already anticipated
that it might be a slow query, so I only did the select first.
Best regards,
Arjen van der Meijden
cases like I did)?
The database is a lightly optimised Gentoo compile of 7.4.2; the
MySQL version was 4.0.18, in case anyone wanted to know.
Best regards,
Arjen van der Meijden
PS: don't try to help improve the query; I discarded the idea as too
inefficient and went along with a simple left
I've heard that too, but it doesn't seem to make much sense to me. If you
get to the point where your machine is _needing_ 2GB of swap then
something has gone horribly wrong (or you just need more RAM in the
machine) and it will just crawl until the kernel kills off whatever
process