None - but I'll definitely take a look..
Alex Turner
NetEconomist
On Tue, 01 Feb 2005 22:11:30 +0100, Cosimo Streppone
<[EMAIL PROTECTED]> wrote:
> Alex Turner wrote:
>
> > To be honest I've used compaq, dell and LSI SCSI RAID controllers and
> > got pretty pathetic benchmarks from all of them.
Jim C. Nasby wrote:
On Tue, Feb 01, 2005 at 07:35:35AM +0100, Cosimo Streppone wrote:
You might look at Opterons, which theoretically have a higher data
bandwidth. If you're doing anything data intensive, like a sort in
memory, this could make a difference.
Would Opteron systems need 64-bit postgr
Hi all,
I have a big table with ~10 million rows, and it is a real pain to
administer, so after years I convinced myself to partition it and
replace the table usage (for reading only) with a view.
Now my user_logs table is split into 4:
user_logs
user_logs_2002
user_logs_2003
user_logs_2004
and th
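The partitioning scheme described above can be sketched as a read-only view
unioning the per-year tables. This is a minimal sketch under assumptions: the
view name and the fact that all four tables share the same column layout are
assumed, and PostgreSQL of this era has no native table partitioning, so a
plain UNION ALL view is the usual approach.

```sql
-- Hypothetical sketch: a read-only view over the per-year partitions.
-- Assumes user_logs, user_logs_2002..2004 all have identical columns.
CREATE VIEW user_logs_all AS
    SELECT * FROM user_logs
    UNION ALL SELECT * FROM user_logs_2002
    UNION ALL SELECT * FROM user_logs_2003
    UNION ALL SELECT * FROM user_logs_2004;
```

Readers query `user_logs_all` as if it were the original table, while inserts
and maintenance (VACUUM, reindexing) can target each smaller table separately.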
Josh Berkus wrote:
> Steve,
>
> > I help manage the Linux servers of a 100-employee animal hospital. I am
> > new to database setup and tuning, and I was hoping I could get some
> > direction on setting up the drive array we're considering moving our
> > database to.
>
> Check what I have to say at http:
Alex Turner wrote:
To be honest I've used compaq, dell and LSI SCSI RAID controllers and
got pretty pathetic benchmarks from all of them.
I have also seen average-to-low results for LSI (at least the 1020 card).
2xOpteron 242, Tyan S2885 MoBo, 4GB Ram, 14xSATA WD Raptor drives:
2xRaid 1, 1x4 disk Raid
Merlin Moncure wrote:
Corollary: use pl/pgsql. It can be 10 times or more faster than query
by query editing.
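Merlin's corollary can be illustrated with a sketch: moving row-by-row edits
from the client into a single server-side pl/pgsql function eliminates one
network round trip per row. The function, table, and column names below are
hypothetical, and the string-quoted body is the pre-8.0 `$$`-less style.

```sql
-- Hypothetical sketch: one server-side call instead of N client queries.
-- Assumes a user_logs(ts, ...) table and a user_logs_archive with the
-- same layout; both names are made up for illustration.
CREATE OR REPLACE FUNCTION archive_old_logs(cutoff timestamp)
RETURNS integer AS '
DECLARE
    moved integer;
BEGIN
    INSERT INTO user_logs_archive
        SELECT * FROM user_logs WHERE ts < cutoff;
    DELETE FROM user_logs WHERE ts < cutoff;
    GET DIAGNOSTICS moved = ROW_COUNT;
    RETURN moved;
END;
' LANGUAGE plpgsql;

-- Usage: SELECT archive_old_logs(''2003-01-01'');
```

The speedup comes from doing the loop inside the backend: no per-row parse,
plan, or network latency.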
Merlin, thanks for your good suggestions.
So far, our system has never used the "stored procedures" approach,
because we're staying on the minimum common SQL features
that are sup
Joost Kraaijeveld wrote:
Hi all,
I have a freshly vacuumed table with 1104379 records with an index on zipcode.
Can anyone explain why the queries go as they go, and why the performance
differs so much (1 second versus 64 seconds, or stated differently, 1
record per second versus 1562 recor
Hi all,
I have a freshly vacuumed table with 1104379 records with an index on zipcode.
Can anyone explain why the queries go as they go, and why the performance
differs so much (1 second versus 64 seconds, or stated differently, 1
record per second versus 1562 records per second) and why t
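For questions like this, the first diagnostic step is to compare the actual
plans of the fast and slow queries. A minimal sketch (the table and zipcode
range are assumed, since the original queries are not shown here):

```sql
-- EXPLAIN ANALYZE runs the query and reports the plan chosen plus real
-- timings, revealing e.g. an index scan versus a sequential scan.
-- Table and values are hypothetical.
EXPLAIN ANALYZE
SELECT * FROM orders WHERE zipcode BETWEEN '1000AA' AND '2000ZZ';
```

A wide range (or a stale `default_statistics_target`) can push the planner to
a sequential scan over the whole 1.1M rows, which would account for a
64-second runtime where a narrow index scan takes 1 second.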
> Hi all,
> 1) What kind of performance gain can I expect switching from
> 7.1 to 7.4 (or 8.0)? Obviously I'm doing my own testing,
> but I'm not very impressed by 8.0 speed, maybe I'm doing
> testing on a low end server...
8.0 gives you savepoints. While this may not seem like a big
To be honest I've used compaq, dell and LSI SCSI RAID controllers and
got pretty pathetic benchmarks from all of them. The best system I
have is the one I just built:
2xOpteron 242, Tyan S2885 MoBo, 4GB Ram, 14xSATA WD Raptor drives:
2xRaid 1, 1x4 disk Raid 10, 1x6 drive Raid 10. 2x3ware (now AM
clause will be a cheap query - and use it to test if
a table is empty, for instance. (because for
Oracle/Sybase/SQL Server, count(*) is cheap).
To test if a table is empty, use a SELECT EXISTS or whatever SELECT with
a LIMIT 1...
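The advice above can be made concrete. In PostgreSQL, `count(*)` must scan
the table (or index) because of MVCC visibility rules, but an emptiness test
only needs to find one row and stop. Table name below is assumed:

```sql
-- Cheap emptiness tests: both stop at the first visible row.
SELECT EXISTS (SELECT 1 FROM user_logs);   -- returns true if non-empty

SELECT 1 FROM user_logs LIMIT 1;           -- returns a row iff non-empty
```

Either form is O(1)-ish regardless of table size, versus a full scan for
`SELECT count(*) FROM user_logs;`.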
Hello Andrew,
Everything that Shridhar says makes perfect
sense, and, speaking from experience in dealing with
this type of 'problem', everything you say does as
well. Such is life really :)
I would not be at -all- surprised if Sybase
and Oracle did query re-writing behind the sc
On Tuesday 01 Feb 2005 6:11 pm, Andrew Mayo wrote:
> PG, on the other hand, appears to do a full table scan
> to answer this question, taking nearly 4 seconds to
> process the query.
>
> Doing an ANALYZE on the table and also VACUUM did not
> seem to affect this.
>
> Can PG find a table's row count
Doing some rather crude comparative performance tests
between PG 8.0.1 on Windows XP and SQL Server 2000, PG
whips SQL Server's ass on
insert into junk (select * from junk)
on a one-column table defined as int.
If we start with a 1 row table and repeatedly execute
this command, PG can take the ta
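The benchmark described above is a self-doubling insert: each run copies every
existing row back into the table, so after k executions a 1-row table holds
2^k rows. A minimal reconstruction of the setup (column name assumed):

```sql
-- One int column, seeded with a single row.
CREATE TABLE junk (n int);
INSERT INTO junk VALUES (1);

-- Each execution doubles the row count: 1 -> 2 -> 4 -> 8 -> ...
INSERT INTO junk (SELECT * FROM junk);
```

Twenty repetitions already yield about a million rows, which is why this makes
a quick, if crude, bulk-insert stress test.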
On Tue, Feb 01, 2005 at 07:35:35AM +0100, Cosimo Streppone wrote:
> >You might look at Opterons, which theoretically have a higher data
> >bandwidth. If you're doing anything data intensive, like a sort in
> >memory, this could make a difference.
>
> Would Opteron systems need 64-bit postgresql (
Sorry, I sent this mail message with wrong account and it
has been delayed by the mail system, so I’m resending it.
Hello my friends,
I'd like to know (based on your experience and technical
details) which OS is recommended for running PostgreSQL keeping in mind 3
indicators:
1
As I read the docs, a temp table doesn't solve our problem, as it does
not persist between sessions. With a web page there is no guarantee
that you will receive the same connection between requests, so a temp
table doesn't solve the problem. It looks like you either have to
create a real table (
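The real-table alternative hinted at above can be sketched as follows: key the
rows by an application-generated session id so any pooled connection can find
them, and expire them with a periodic delete. All names here are hypothetical,
since the original message is truncated before its own example.

```sql
-- Hypothetical sketch: a regular table standing in for a temp table,
-- usable across different backend connections.
CREATE TABLE session_results (
    session_id text        NOT NULL,   -- generated by the web application
    created    timestamp   NOT NULL DEFAULT now(),
    payload    text
);
CREATE INDEX session_results_sid ON session_results (session_id);

-- Each request reads only its own rows:
--   SELECT payload FROM session_results WHERE session_id = '...';

-- A periodic job expires stale sessions (1-hour timeout assumed):
DELETE FROM session_results WHERE created < now() - interval '1 hour';
```

The cost relative to a true temp table is WAL traffic and the need for
vacuuming after the periodic deletes.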
Lago, Bruno Almeida do wrote:
Hello my friends,
I'd like to know (based on your experience and technical details) which OS
is recommended for running PostgreSQL keeping in mind 3 indicators:
1 - Performance (OS, Network and I/O)
2 - OS Stability
3 - File System Integrity
The short answer is almost c