Hello,
I realize I need to be much more specific. Here is a more detailed
description of my hardware and system design.
Pentium 4 2.4GHz
Memory 4x DIMM DDR 1GB PC3200 400MHz CAS3, KVR
Motherboard chipset 'I865G', two IDE channels on board
2x SEAGATE BARRACUDA 7200.7 80GB 7200RPM ATA/100
--On Wednesday, August 24, 2005 16:26:40 -0400 Chris Hoover
[EMAIL PROTECTED] wrote:
On 8/24/05, Merlin Moncure [EMAIL PROTECTED] wrote:
Linux does a pretty good job of deciding what to cache. I don't think
this will help much. You can always look at partial indexes too.
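For readers who haven't used partial indexes: a minimal sketch of the idea (the table and column names here are hypothetical, not from this thread). Indexing only the rows a hot query actually touches keeps the index small, so more of it stays in cache:

```sql
-- Index only the unprocessed rows; a small index is cheap to keep cached.
CREATE INDEX raw_clicks_unprocessed_idx
    ON raw_clicks (click_time)
    WHERE processed = false;

-- The planner can use it for any query whose WHERE clause implies
-- the index predicate:
SELECT *
FROM raw_clicks
WHERE processed = false
  AND click_time > now() - interval '1 hour';
```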
Yes, but won't
On Thu, 25 Aug 2005 09:10:37 +0200
Ulrich Wisser [EMAIL PROTECTED] wrote:
Pentium 4 2.4GHz
Memory 4x DIMM DDR 1GB PC3200 400MHz CAS3, KVR
Motherboard chipset 'I865G', two IDE channels on board
2x SEAGATE BARRACUDA 7200.7 80GB 7200RPM ATA/100
(software raid 1, system, swap, pg_xlog)
ADAPTEC
At 03:10 AM 8/25/2005, Ulrich Wisser wrote:
I realize I need to be much more specific. Here is a more detailed
description of my hardware and system design.
Pentium 4 2.4GHz
Memory 4x DIMM DDR 1GB PC3200 400MHz CAS3, KVR
Motherboard chipset 'I865G', two IDE channels on board
First
Putting pg_xlog on the IDE drives gave about 10% performance
improvement. Would faster disks give more performance?
What my application does:
Every five minutes a new logfile will be imported. Depending on the
source of the request it will be imported in one of three raw click
tables.
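As a rough sketch of that five-minute import (table, column, and file names are assumptions, not taken from the thread): COPY is far faster than row-by-row INSERTs for bulk loads, and doing each batch in one transaction keeps WAL traffic down:

```sql
-- Hypothetical bulk import of one five-minute logfile batch.
-- One of three raw click tables is chosen based on the request source.
BEGIN;
COPY raw_clicks_src_a (click_time, ip, url)
    FROM '/var/log/clicks/latest_a.csv' WITH CSV;
COMMIT;
-- Repeat for raw_clicks_src_b / raw_clicks_src_c as appropriate.
```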
On Thu, 2005-08-25 at 11:16 -0400, Ron wrote:
# - Settings -
fsync = false                   # turns forced synchronization on or off
#wal_sync_method = fsync        # the default varies across platforms:
                                # fsync, fdatasync, open_sync, or
I hope you have a
Should I temporarily increase sort_mem, vacuum_mem, neither, or both
when doing a CLUSTER on a large (100 million row) table where as many as
half of the tuples are deadwood from UPDATEs or DELETEs? I have large
batch (10 million row) inserts, updates, and deletes so I'm not sure
frequent
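A hedged sketch of bumping both settings for just the session that runs the maintenance (8.0-era parameter names as used above; the table name and values are illustrative only, not a recommendation):

```sql
-- Session-local: these reset when the connection closes.
SET sort_mem = 262144;     -- 256 MB, value is in KB; helps index-rebuild sorts
SET vacuum_mem = 262144;   -- 256 MB, value is in KB

-- CLUSTER rewrites the table in index order, discarding dead tuples;
-- 8.0-era syntax is "CLUSTER indexname ON tablename".
CLUSTER big_table_pkey ON big_table;
```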
Andrew,
On Thu, 2005-08-25 at 12:24 -0700, Andrew Lazarus wrote:
Should I temporarily increase sort_mem, vacuum_mem, neither, or both
when doing a CLUSTER on a large (100 million row) table where as many as
half of the tuples are deadwood from UPDATEs or DELETEs? I have large
batch (10
Jeff,
Ask me sometime about my replacement for GNU sort. It uses the same
sorting algorithm, but it's an order of magnitude faster due to better
I/O strategy. Someday, in my infinite spare time, I hope to demonstrate
that kind of improvement with a patch to pg.
Since we desperately need
At 03:45 PM 8/25/2005, Josh Berkus wrote:
Jeff,
Ask me sometime about my replacement for GNU sort. It uses the same
sorting algorithm, but it's an order of magnitude faster due to better
I/O strategy. Someday, in my infinite spare time, I hope to demonstrate
that kind of improvement
[EMAIL PROTECTED] (Ron) writes:
At 03:45 PM 8/25/2005, Josh Berkus wrote:
Ask me sometime about my replacement for GNU sort. It uses the
same sorting algorithm, but it's an order of magnitude faster due
to better I/O strategy. Someday, in my infinite spare time, I
hope to demonstrate
Consider this setup - which is a gross simplification of parts of our
production system ;-)
create table c (id integer primary key);
create table b (id integer primary key, c_id integer);
create index b_on_c on b(c_id);
insert into c (select ... lots of IDs ...);
insert into b (select
[Jeffrey W. Baker - Thu at 06:56:59PM -0700]
explain select c.id from c join b on c_id=c.id group by c.id order by c.id
desc limit 5;
Where's b in this join clause?
join b on c_id=c.id
It's just a funny way of writing:
select c.id from c,b where c_id=c.id group by c.id order by c.id desc
On Thu, 2005-08-25 at 18:56 -0700, Jeffrey W. Baker wrote:
On Fri, 2005-08-26 at 02:27 +0200, Tobias Brox wrote:
Consider this setup - which is a gross simplification of parts of our
production system ;-)
create table c (id integer primary key);
create table b (id integer primary