On Tue, 2005-11-08 at 00:05 -0700, Charlie Savage wrote:
> Setup: Dell Dimension 3000, Suse 10, 1GB ram, PostgreSQL 8.1 RC 1 with
> I want to extract data out of the file, with the most important values
> being stored in a column called tlid. The tlid field is an integer, and
> the values are
Hi, all!
I've been using postgresql for a long time now, but today I hit a
problem I couldn't solve properly - I hope some of the more experienced
users here have some hints for me.
First, I'm using postgresql 7.4.7 on a 2GHz machine having 1.5GByte RAM
and I have a table with about 220 columns and 2
Jan Kesten wrote:
> First, I'm using postgresql 7.4.7 on a 2GHz machine having 1.5GByte RAM
> and I have a table with about 220 columns and 2 rows - and the first
> five columns build a primary key (and a unique index).
transfer=> explain analyse SELECT * FROM test WHERE test_a=9091150001
AND t
On Wed, Nov 09, 2005 at 01:08:07PM +0100, Jan Kesten wrote:
> First, I'm using postgresql 7.4.7 on a 2GHz machine having 1.5GByte RAM
> and I have a table with about 220 columns and 2 rows - and the first
> five columns build a primary key (and a unique index).
I forgot to include this; here is the full query and its plan:
transfer=> explain analyse SELECT * FROM test WHERE test_a=9091150001
AND test_b=1 AND test_c=2 AND test_d=0 AND test_e=0;
Index Scan using test_idx on test (cost=0.00..50.27 rows=1 width=1891)
(actual time=0.161..0.167 rows=1 loops=1)
Index Cond: (test_a = 9091150001::bigint)
Filter: ((t
On Wed, Nov 09, 2005 at 01:08:07PM +0100, Jan Kesten wrote:
> Now my problem: I need really many queries of rows using it's primary
> key and fetching about five different columns but these are quite slow
> (about 10 queries per second and as I have some other databases which
> can have about 300 q
Hi Simon,
Thanks for the response.
PostgreSQL can do HashAggregates as well as GroupAggregates, just like
Oracle. HashAggs avoid the sort phase, so would improve performance
considerably. The difference in performance you are getting is because
of the different plan used. Did you specific
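As an illustration of the two plan shapes being discussed, here is a minimal sketch against a hypothetical `sales` table (not from the original thread); `enable_hashagg` is the standard planner setting for forcing the comparison:

```sql
-- Hypothetical table, for illustration only.
-- HashAggregate builds an in-memory hash table keyed on the GROUP BY
-- column, so the input never has to be sorted.
EXPLAIN ANALYSE
SELECT region, sum(amount)
FROM sales
GROUP BY region;

-- To see the alternative plan, temporarily disable hash aggregation;
-- the planner then falls back to Sort + GroupAggregate:
SET enable_hashagg = off;
EXPLAIN ANALYSE
SELECT region, sum(amount)
FROM sales
GROUP BY region;
```

Comparing the two EXPLAIN ANALYSE outputs shows directly how much time the sort phase costs.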
Hi all!
First, thanks for all the answers so far :-)
> You don't post your table definitions (please do), but it looks like
> test_b, test_c, test_d and test_e might be bigints? If so, you may
> want to do explicit "AND test_b=1::bigint AND test_c=2::bigint" etc.
> -- 7.4 doesn't figure this out for you
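A minimal sketch of that suggestion, reusing the column names from the query above (the table definition was never posted, so the bigint types are an assumption):

```sql
-- Assumes test_b..test_e are bigint. On 7.4 an unadorned integer
-- literal is typed int4, which can keep the planner from matching
-- those columns against the multi-column index; the explicit casts
-- remove the type mismatch.
EXPLAIN ANALYSE
SELECT * FROM test
WHERE test_a = 9091150001::bigint
  AND test_b = 1::bigint
  AND test_c = 2::bigint
  AND test_d = 0::bigint
  AND test_e = 0::bigint;
```

If the casts help, the Index Cond line in the plan should list all five columns instead of only test_a.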
I noticed outer join is very very slow in postgresql as compared
to Oracle.
SELECT a.dln_code, a.company_name,
to_char(a.certificate_date,'DD-MON-'),
to_char(a.certificate_type_id, '99'),
COALESCE(b.certificate_type_description,'None') ,
a.description, a.blanket_single, a.certificate_status,
C
Hi all,
I've got PG 8.0 on Debian sarge set up ...
I want to speed up performance on the system.
The system will run PG, Apache front-end on port 80 and Tomcat / Cocoon for the webapp.
The webapp is not so heavily used, so we can give the max performance to the database.
The database has a lot of
0= Optimize your schema to be as tight as possible. Your goal is to give
yourself the maximum chance that everything you want to work on is in RAM when
you need it.
1= Upgrade your RAM to as much as you can possibly strain to afford. 4GB at
least. It's that important.
2= If the _entire_ DB doe
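The RAM advice above eventually has to be reflected in postgresql.conf; here is a hedged sketch for a dedicated 4GB machine (these numbers are illustrative starting points, not from the original post, and the units follow the 8.0/8.1 convention of 8kB pages or kB):

```
# postgresql.conf sketch for a dedicated 4GB box
# (illustrative starting points only, not from the original post)
shared_buffers = 50000          # ~400MB; unit is 8kB pages in 8.0/8.1
effective_cache_size = 350000   # ~2.7GB; tells the planner how much of
                                # the DB the OS filesystem cache can hold
work_mem = 16384                # 16MB (kB) per sort/hash, per backend
maintenance_work_mem = 262144   # 256MB (kB) for VACUUM, CREATE INDEX
```

work_mem is allocated per sort or hash node per backend, so it should be sized with the expected number of concurrent connections in mind.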
Ashok Agrawal <[EMAIL PROTECTED]> writes:
> I noticed outer join is very very slow in postgresql as compared
> to Oracle.
I think the three things the people best able to help you are going to
ask for are 1) what version of PostgreSQL, 2) what are the tables, and
how many rows in each, and 3) outp
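The first two items of that checklist can be gathered in one psql session (the table names below are placeholders, since the original tables were not posted):

```sql
-- 1) PostgreSQL version
SELECT version();

-- 2) approximate row counts, from the planner's statistics
--    ('your_table_a', 'your_table_b' are placeholders)
SELECT relname, reltuples::bigint AS approx_rows
FROM pg_class
WHERE relname IN ('your_table_a', 'your_table_b');

-- 3) the actual plan and timings for the slow outer-join query:
EXPLAIN ANALYSE
SELECT ...;  -- the query from the original post
```

reltuples is only as fresh as the last VACUUM/ANALYZE, but it is usually close enough for this purpose.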
...and on those notes, let me repeat my often stated advice that a DB server
should be configured with as much RAM as is feasible. 4GB or more strongly
recommended.
I'll add that the HW you are using for a DB server should be able to hold _at
least_ 4GB of RAM (note that modern _laptops_ can h
Hello all, I post this question here because I wasn't able to find an
answer to my question elsewhere - I hope someone can answer.
Abstract:
The function that can be found at the end of the e-mail emulates two things.
First, it fills a result record set with the needed columns from a table and
On Wed, 9 Nov 2005, Ashok Agrawal wrote:
> I noticed outer join is very very slow in postgresql as compared
> to Oracle.
>
> SELECT a.dln_code, a.company_name,
> to_char(a.certificate_date,'DD-MON-'),
> to_char(a.certificate_type_id, '99'),
> COALESCE(b.certificate_type_description,'None') ,
>
Ron Peacetree wrote:
> 0= Optimize your schema to be as tight as possible. Your goal is to give
> yourself the maximum chance that everything you want to work on is in RAM
> when you need it.
> 1= Upgrade your RAM to as much as you can possibly strain to afford. 4GB at
> least. It's that impor
Frank Wiles wrote:
> Obviously there are systems/datasets/quantities where this won't
> always work out best, but for the majority of systems out there
> complicating your schema, maxing your hardware, and THEN tuning
> is IMHO the wrong approach.
I wasn't suggesting to complicate the s
On Wed, 9 Nov 2005 21:43:33 -0300
Alvaro Herrera <[EMAIL PROTECTED]> wrote:
> Frank Wiles wrote:
>
> > Obviously there are systems/datasets/quantities where this won't
> > always work out best, but for the majority of systems out there
> > complicating your schema, maxing your hardware, an
On Wed, 9 Nov 2005 20:07:52 -0300
Alvaro Herrera <[EMAIL PROTECTED]> wrote:
> IMHO you should really be examining your queries _before_ you do any
> investment in hardware, because later those may prove unnecessary.
It all really depends on what you're doing. For some of the systems
I run, 4
The point Gentlemen, was that Good Architecture is King. That's what I was
trying to emphasize by calling proper DB architecture step 0. All other things
being equal (and they usually aren't, this sort of stuff is _very_ context
dependent), the more of your critical schema that you can fit int