Thanks Mark,
We are using DBCP, and I found something about pgpool in some forum threads,
which raised some questions for me. But I am clear now.
On Wed, Aug 20, 2008 at 8:59 PM, Mark Lewis [EMAIL PROTECTED] wrote:
Yes, we use connection pooling. As I recall, Hibernate ships with c3p0 connection
Thanks Matthew,
does that mean I can just have index1, index3, index4?
On Wed, Aug 20, 2008 at 8:54 PM, Matthew Wakeling [EMAIL PROTECTED]wrote:
On Wed, 20 Aug 2008, Kranti K K Parisa™ wrote:
will creating multiple indexes on the same column affect performance?
for example:
index1 : column1,
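The question above can be sketched in SQL. The table and column names below are hypothetical, for illustration only; the point is that one composite index can often replace several overlapping indexes, but only for filters that include its leading column:

```sql
-- One composite index serves filters on (column1),
-- (column1, column2), and (column1, column2, column3):
CREATE INDEX idx_t_c1_c2_c3 ON t (column1, column2, column3);

-- A separate index is still needed where a column is queried
-- alone, because a composite index cannot serve a filter that
-- skips its leading column:
CREATE INDEX idx_t_c3 ON t (column3);
```

Extra redundant indexes don't speed up reads, but every one of them must be updated on each insert.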
On 20.08.2008, at 20:06, Scott Carey wrote:
Ok, so the problem boils down to the sort at the end.
The query up through the merge join on domain is as fast as it's
going to get. The sort at the end, however, should ideally not
happen. There are not that many rows returned, and it should
On 20.08.2008, at 20:28, Tom Lane wrote:
Scott Carey [EMAIL PROTECTED] writes:
The planner actually thinks there will only be 28704 rows returned,
of width 12. But it chooses to sort 53 million rows before
aggregating. That's either a bug or there's something else wrong
here. That is the
On 21.08.2008, at 09:04, Moritz Onken wrote:
On 20.08.2008, at 20:28, Tom Lane wrote:
Scott Carey [EMAIL PROTECTED] writes:
The planner actually thinks there will only be 28704 rows returned,
of width 12. But it chooses to sort 53 million rows before
aggregating. That's either a bug or
Thanks Joshua,
So what about putting the indexes on a separate array? Since we do a lot
of inserts, the indexes are going to be worked on a lot of the time.
Regards,
Christiaan
Joshua D. Drake wrote:
Christiaan Willemsen wrote:
So, what you are basically saying, is that a single mirror is in
André Volpato wrote:
David Wilson wrote:
On Wed, Aug 20, 2008 at 2:30 PM, André Volpato
[EMAIL PROTECTED] wrote:
The CPU has been at 100% for the last few hours. Can anyone tell why?
Sounds like you've just got a CPU bound query. The data may even all
be in cache.
Some information
On Thu, 2008-08-21 at 12:33 +0530, Kranti K K Parisa™ wrote:
On Wed, Aug 20, 2008 at 8:54 PM, Matthew Wakeling
[EMAIL PROTECTED] wrote:
On Wed, 20 Aug 2008, Kranti K K Parisa™ wrote:
will creating multiple indexes on the same column affect
performance?
André Volpato wrote:
In practice, I have noticed that the dual 1.8 is worse than the single 3.0.
We have another server, which is a Pentium D 3.0 GHz, that runs faster.
...
Postgres reads the array in less than 1 sec, and for the other 10s it
takes 100% of CPU usage,
which is, in this case, one of the two
André Volpato [EMAIL PROTECTED] writes:
Explain output:
HashAggregate (cost=19826.23..19826.96 rows=73 width=160) (actual
time=11826.754..11826.754 rows=0 loops=1)
->  Subquery Scan b2 (cost=19167.71..19817.21 rows=722 width=160)
(actual time=11826.752..11826.752
It looks to me like the work_mem did have an effect.
Your earlier queries had a sort followed by group aggregate at the top, and
now it's a hash-aggregate. So the query plan DID change. That is likely
where the first 10x performance gain came from.
The top of the plan was:
GroupAggregate
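The work_mem effect described above can be checked directly from a session. This is a sketch; the query shape and the value are examples, not recommendations:

```sql
-- Raise work_mem for this session only, so the planner can afford
-- an in-memory HashAggregate instead of Sort + GroupAggregate:
SET work_mem = '128MB';

EXPLAIN ANALYZE
SELECT some_col, count(*)   -- hypothetical query shape
FROM some_table
GROUP BY some_col;
-- If the top of the plan changes from GroupAggregate (fed by a
-- Sort) to HashAggregate, the new setting took effect.
```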
Tom Lane wrote:
André Volpato [EMAIL PROTECTED] writes:
Explain output:
HashAggregate (cost=19826.23..19826.96 rows=73 width=160) (actual
time=11826.754..11826.754 rows=0 loops=1)
->  Subquery Scan b2 (cost=19167.71..19817.21 rows=722 width=160)
(actual
Indexes will be a random write workload, but these won't be synchronous writes
and will be buffered by the raid controller's cache. Assuming you're using
a hardware raid controller that is, and one that doesn't have major
performance problems on your platform. Which brings those questions up ---
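Moving indexes to a separate array can be done with a tablespace. This is a minimal sketch, assuming a second array mounted at a hypothetical path that the postgres user can write to:

```sql
-- Point a tablespace at the separate array (path is hypothetical):
CREATE TABLESPACE idx_space LOCATION '/mnt/array2/pg_indexes';

-- New indexes can then be placed on those separate spindles:
CREATE INDEX idx_orders_created ON orders (created_at)
    TABLESPACE idx_space;
```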
Hi Scott,
Great info! Our RAID card is at the moment an ICP Vortex (Adaptec)
ICP5165BR, and I'll be using it with Ubuntu Server 8.04. I tried
OpenSolaris, but it yielded even more terrible performance, especially
using ZFS. I guess that was just a mismatch. Anyway, I'm going to
return the
On 21.08.2008, at 16:39, Scott Carey wrote:
It looks to me like the work_mem did have an effect.
Your earlier queries had a sort followed by group aggregate at the
top, and now it's a hash-aggregate. So the query plan DID change.
That is likely where the first 10x performance gain came
On Thu, Aug 21, 2008 at 11:07 AM, Moritz Onken [EMAIL PROTECTED] wrote:
On 21.08.2008, at 16:39, Scott Carey wrote:
It looks to me like the work_mem did have an effect.
Your earlier queries had a sort followed by group aggregate at the top,
and now it's a hash-aggregate. So the query plan
My company finally has the means to install a new database server for
replication. I have Googled and found a lot of sparse information out
there regarding replication systems for PostgreSQL and a lot of it looks
very out-of-date. Can I please get some ideas from those of you that
are
André Volpato [EMAIL PROTECTED] writes:
Tom Lane wrote:
So I guess the question is what is the bds_internacoes function, and
why is it so slow?
This function is quite fast:
Well, fast is relative. It's not fast enough, or you wouldn't have
been complaining.
We are
Scott Carey wrote:
For reads, if your shared_buffers is large enough, your heavily used
indexes won't likely go to disk much at all.
ISTM this would happen regardless of your shared_buffers setting.
If you have enough memory the OS should cache the frequently used
pages regardless of
Tom Lane wrote:
We are guessing that a dual-core 3.0GHz will beat a quad-core 2.2,
at least in this environment with less than 4 concurrent queries.
The most you could hope for from that is less than a 50% speedup. I'd
suggest investing some tuning effort first. Some rethinking of
Hi Dan!
It's true, many of the replication options that exist for PostgreSQL
have not seen any updates in a while.
If you're only looking for redundancy and not a performance gain, you
should look at PostgreSQL PITR (http://www.postgresql.org/docs/8.1/static/backup-online.html
)
For
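As a rough sketch of the PITR setup mentioned above (8.x-era API; the archive path is hypothetical, and archive_command belongs in postgresql.conf rather than SQL):

```sql
-- In postgresql.conf (shown here as a comment for context):
--   archive_command = 'cp %p /mnt/archive/%f'

-- Then take a base backup from any session (the label is arbitrary):
SELECT pg_start_backup('standby_base');
-- ... copy the data directory to the standby machine ...
SELECT pg_stop_backup();
```

The standby then recovers by continuously replaying the archived WAL segments.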
On Thu, Aug 21, 2008 at 10:53:05PM +0200, Mathias Stjernström wrote:
For Master-Slave replication I think that Slony http://www.slony.info/ is
the most up to date. But it does not support DDL changes.
This isn't quite true. It supports DDL; it just doesn't support it in
the normal way, and is
On Thursday 21 August 2008, Dan Harris [EMAIL PROTECTED] wrote:
Especially since we make frequent use of sequences in our databases. If
MM is too difficult, I'm willing to accept a hot-standby read-only
system that will handle queries until we can fix whatever ails the
master.
A
Yes, that's true. It does support DDL changes, but not in an automatic
way. You have to execute all DDL changes with a separate script.
What's the status of http://www.commandprompt.com/products/mammothreplicator/
?
Best regards,
Mathias
http://www.pastbedti.me/
On 21 Aug 2008, at 23.04,
On Thu, 21 Aug 2008 23:21:26 +0200
Mathias Stjernström [EMAIL PROTECTED] wrote:
Yes, that's true. It does support DDL changes, but not in an automatic
way. You have to execute all DDL changes with a separate script.
What's the status of
http://www.commandprompt.com/products/mammothreplicator/
Mathias Stjernström wrote:
Yes, that's true. It does support DDL changes, but not in an automatic way.
You have to execute all DDL changes with a separate script.
That's true, but it's quite simple to do with the provided perl
script(s) - slonik_execute_script. I've had to make use of it a
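For reference, the operation that the slonik_execute_script wrapper drives looks roughly like this in slonik (Slony-I's command language, not SQL); the set ID, node ID, and file path below are hypothetical:

```
EXECUTE SCRIPT (
    SET ID = 1,
    FILENAME = '/tmp/add_column.sql',
    EVENT NODE = 1
);
```

Slony propagates the DDL file as an event, so every node applies it at the same point in the replication stream.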
On Thursday 21 August 2008, salman [EMAIL PROTECTED] wrote:
Mathias Stjernström wrote:
Yes, that's true. It does support DDL changes, but not in an automatic way.
You have to execute all DDL changes with a separate script.
That's true, but it's quite simple to do with the provided perl
Joshua Drake wrote:
On Thu, 21 Aug 2008 23:21:26 +0200
Mathias Stjernström [EMAIL PROTECTED] wrote:
Yes, that's true. It does support DDL changes, but not in an automatic
way. You have to execute all DDL changes with a separate script.
What's the status of
On Thu, 21 Aug 2008, Mathias Stjernström wrote:
Hi Dan!
It's true, many of the replication options that exist for PostgreSQL have not
seen any updates in a while.
If you're only looking for redundancy and not a performance gain, you should
look at PostgreSQL PITR
On Thu, 21 Aug 2008 17:54:11 -0400
Alvaro Herrera [EMAIL PROTECTED] wrote:
Joshua Drake wrote:
On Thu, 21 Aug 2008 23:21:26 +0200
Mathias Stjernström [EMAIL PROTECTED] wrote:
Yes, that's true. It does support DDL changes, but not in an
automatic way. You have to execute all DDL changes
I only have a few days of experience with Postgres and it is working
great, but when I started building up test queries I ran into a problem I
don't understand.
One query runs fast, returning results in under a second. If I add
one more join to it, however, it switches to
[EMAIL PROTECTED] writes:
One query runs fast, returning results in under a second. If I add
one more join to it, however, it switches to nested loops and
takes minutes.
I think you need to raise from_collapse_limit and/or
join_collapse_limit.
regards,
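Tom's suggestion can be tried per-session before changing anything globally. Both settings default to 8; 16 below is an example value, not a recommendation:

```sql
SET from_collapse_limit = 16;
SET join_collapse_limit = 16;
-- Re-run EXPLAIN ANALYZE on the slow query. With the higher limits,
-- the planner may reorder the larger join list itself instead of
-- planning the joins in the order they were written, which is what
-- was producing the nested loops.
```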
Hi Mark,
Thanks again for the info.
I shall create different sets of indexes and see the query execution time.
One such table might get around 700,000 records over a period of 4-5
months, so what other measures do I need to focus on?
I thought of the following
1) Indexes
2) Better