Thanks, Tom. I checked the client-side software. The software closes the connection when connected locally, but when connected through dialup this problem appears. I will check the PPP connection as well.
Is there any method of killing the old pids? And is there any performance tuning to be done on PostgreSQL?
Thanks again for your response. I'll try to clarify some of the metrics; it took me a few days to figure out what the best join order would be.
By running some count queries on the production database, I noticed there
were only 8 rows in release_code. The filtered column is unique, so that
means the filter matches at most one row.
So are you suggesting, as a general rule, that sub-queries are the
way to force a specific join order in Postgres? If that is the case, I
will do this from now on.
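For concreteness, here is a sketch of what I understood that to mean, using the tables from the query later in the thread (the OFFSET 0 is the usual trick for keeping the planner from flattening the subquery back into the outer join; whether it helps here is exactly what I'm asking):

select s.*, ss.*
from shipment s
join (
    select ss2.*
    from shipment_status ss2, release_code r
    where ss2.release_code_id = r.id
      and r.filtered_column = '5'
    offset 0
) ss on s.current_status_id = ss.id
order by ss.date desc
limit 100;

I've also read that setting join_collapse_limit = 1 makes explicit JOIN syntax bind the join order directly, if that's the better tool.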
I'll try to explain a bit better...
Here's your original query:
select s.*, ss.*
from shipment s, shipment_status
Well, Postgres does exactly what you asked. It will be slow, because you
have a full table join; LIMIT does not change this, because all the rows
have to be sorted before the first ones can be returned.
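One thing that can change the picture: if there is an index on the sort key, the planner can sometimes walk it backwards and stop after the first 100 matches instead of sorting everything. A sketch, with an index name I made up:

create index shipment_status_date_idx on shipment_status (date);

Whether it actually gets picked here depends on how selective the release_code filter is, so check the plan either way.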
I am aware that LIMIT doesn't really affect the execution time all that
much. It does speed up the ORM though, and keeps the rows to a manageable number.
Yes, I'm very well aware of VACUUM and VACUUM
ANALYZE. I've even clustered the date index and so on to ensure faster
performance.
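For reference, what I ran was along these lines (the index name here is a stand-in, and this is the 8.0-era CLUSTER syntax):

cluster shipment_status_date_idx on shipment_status;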
----- Original Message -----
From: David Parker
To: Ken Egervari; pgsql-performance@postgresql.org
Sent: Saturday, January 29, 2005 5:04 PM
select s.*, ss.*
from shipment s, shipment_status ss, release_code r
where s.current_status_id = ss.id
and ss.release_code_id = r.id
and r.filtered_column = '5'
order by ss.date desc
limit 100;
Release_code is just a very small table of 8 rows, judging by the
production data, hence the
You don't mention whether you have run VACUUM or VACUUM ANALYZE
lately. That's generally one of the first things folks will suggest. If you
have a lot of updates then VACUUM will clean up dead tuples; if you have a lot
of inserts then VACUUM ANALYZE will update the statistics so that the planner
can make good decisions.
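In its simplest form that's just, using the table names from the query you posted:

vacuum analyze shipment;
vacuum analyze shipment_status;
vacuum analyze release_code;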
Ken,
Actually, your problem isn't that generic, and might be better solved by
dissecting an EXPLAIN ANALYZE.
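For example, run the query you posted under it and look for places where the estimated row counts diverge badly from the actual ones:

explain analyze
select s.*, ss.*
from shipment s, shipment_status ss, release_code r
where s.current_status_id = ss.id
  and ss.release_code_id = r.id
  and r.filtered_column = '5'
order by ss.date desc
limit 100;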
> 1. Should I just beg to change the requirements so that I can make
> more specific queries and more screens to access those?
This is always good.
> 2. Can you
> recommend way
On 01/28/2005-05:57PM, Alex Turner wrote:
> >
> > Your system A has the absolute worst-case RAID 5: 3 drives. The more
> > drives you add to RAID 5 the better it gets, but it will never beat RAID
> > 10. On top of being the worst case, pg_xlog is not on a separate
> > spindle.
> >
>
> True for
On 01/28/2005-10:59AM, Alex Turner wrote:
> At this point I will interject a couple of benchmark numbers based on
> a new system we just configured as food for thought.
>
> System A (old system):
> Compaq Proliant Dual Pentium III 933 with Smart Array 5300, one RAID
> 1, one 3-disk RAID 5 on 10k R
Hi everyone.
I'm new to this forum and was wondering if anyone
would be kind enough to help me out with a pretty severe performance
issue. I believe the problem to be rather generic, so I'll put it in
generic terms. Since I'm at home and not at work (but this is really
bugging me), I can'
"Narayanan Subramaniam Iyer" <[EMAIL PROTECTED]> writes:
> 1) When 3 or 4 clients connect to this server, the pids are created and
> those pids are not killed even after the client disconnects.
In that case your clients are not really disconnecting. Take a closer
look at your client-side software.
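One way to see what the server still thinks is connected is the statistics view (the stats collector has to be enabled, and column names vary by version; in releases of this vintage the pid column is procpid):

select procpid, usename, current_query from pg_stat_activity;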
Hi,
I am running a high-availability PostgreSQL server on Red Hat
Linux 9, using an NFS mount of the data directory from a shared storage device. The server ran without problems for the last two
months. The server is connected to a dial-in router where all my company units dial in and update the dat
Richard Huxton wrote:
Sebastian Böck wrote:
But why is the scan on table b performed?
If I understand it correctly, this is unnecessary because the
result contains only rows from table a.
It's only unnecessary in the case where there is a 1:1 correspondence
between a.id and b.id - if you had more than one matching row in b for
a given id, the join itself would change the result.
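A quick way to convince yourself, with made-up toy tables:

create table a (id int primary key, val text);
create table b (id int);   -- note: no unique constraint on b.id
insert into a values (1, 'x');
insert into b values (1);
insert into b values (1);
select a.* from a join b on a.id = b.id;   -- returns a's single row twice

Even though only a's columns are selected, the duplicate in b changes the result, so the scan on b cannot be skipped.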
I know what I would choose. I'd get the mega server with a ton of RAM and skip
all the trickiness of partitioning a DB over multiple servers. Yes, your data
will grow to a point where even XXGB can't cache everything. On the
other hand, memory prices drop just as fast. By that time, you can ebay yo