On Aug 19, 2005, at 12:55 AM, Jeffrey W. Baker wrote:
On Tue, 2005-08-16 at 10:46 -0700, Roger Hand wrote:
Have you considered booting your
machine with elevator=deadline?
Although I'm not the OP for this problem, I thought I'd try it out.
WOW.. this should be in a Pg tuning guide somewhere!
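For anyone wanting to try the same thing, the setting under discussion is a kernel boot parameter. A sketch of where it goes (the device name and kernel version below are illustrative, not from the poster's box); on 2.6 kernels the scheduler can also be inspected and switched per device through sysfs:

```
# grub.conf kernel line with the deadline I/O scheduler selected:
kernel /vmlinuz-2.6.12 ro root=/dev/sda1 elevator=deadline

# Runtime check/switch via sysfs (2.6 kernels; one path per block device):
#   cat /sys/block/sda/queue/scheduler
#   echo deadline > /sys/block/sda/queue/scheduler
```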
:) Most of the ppl on this list are systems programmers, however I am not.
The tool of choice for this app is Ruby and the libraries don't support
async SNMP at the moment.
I've done a good deal of async snmp and the libraries that actually pull it
off generally aren't that good (Net-SNMP and Perl
At 03:31 PM 8/19/2005, Alex Turner wrote:
Don't forget that Ultra 320 is the speed of the bus, not each drive.
No matter how many honking 15k disks you put on a 320MB/sec bus, you can
only get 320MB/sec total, and only so many outstanding I/Os on the bus.
Of course. This is exactly why multi-channel SCSI
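To make the arithmetic concrete, a small sketch (the drive and bus rates below are illustrative assumptions, not measurements):

```python
# Why a shared Ultra320 bus caps aggregate throughput, and why
# multi-channel controllers (or per-drive SATA links) raise the ceiling.

BUS_CAP_MBPS = 320      # Ultra320 SCSI: bandwidth of one shared bus
DRIVE_MBPS = 97         # assumed sequential rate of a single 15k drive

def aggregate_mbps(n_drives: int, channels: int = 1) -> float:
    """Aggregate sequential throughput with drives split evenly
    across channels; each channel is capped at the bus rate."""
    per_channel = n_drives / channels * DRIVE_MBPS
    return channels * min(per_channel, BUS_CAP_MBPS)

# Six fast drives on one channel saturate the bus (582 MB/s offered,
# 320 MB/s delivered)...
print(aggregate_mbps(6, channels=1))
# ...while two channels lift the ceiling: 3 drives/channel * 97 MB/s.
print(aggregate_mbps(6, channels=2))
```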
Jeremiah Jahn wrote:
> Rebuild in progress with just ext3 on the raid array...will see if this
> helps the access times. If it doesn't I'll mess with the stripe size. I
> have REINDEXED, CLUSTERED, tablespaced and cached with 'cat table/index
> > /dev/null'; none of this seems to have helped, or even increased my
> memory usage.
Jeremiah Jahn wrote:
> On Fri, 2005-08-19 at 12:18 -0500, John A Meinel wrote:
>
>>Jeremiah Jahn wrote:
>>
...
>>
>>Well, in general, 3ms for a single lookup seems really long. Maybe your
>>index is bloated by not vacuuming often enough. Do you tend to get a lot
>>of updates to litigant_details?
Rebuild in progress with just ext3 on the raid array...will see if this
helps the access times. If it doesn't I'll mess with the stripe size. I
have REINDEXED, CLUSTERED, tablespaced and cached with 'cat table/index
> /dev/null'; none of this seems to have helped, or even increased my
memory usage.
On Fri, 2005-08-19 at 14:23 -0500, John A Meinel wrote:
> Ron wrote:
> > At 01:18 PM 8/19/2005, John A Meinel wrote:
> >
> >> Jeremiah Jahn wrote:
> >> > Sorry about the formatting.
> >> >
> >> > On Thu, 2005-08-18 at 12:55 -0500, John Arbash Meinel wrote:
> >> >
> >> >>Jeremiah Jahn wrote:
> >> >>
On Fri, 2005-08-19 at 12:18 -0500, John A Meinel wrote:
> Jeremiah Jahn wrote:
> > Sorry about the formatting.
> >
> > On Thu, 2005-08-18 at 12:55 -0500, John Arbash Meinel wrote:
> >
> >>Jeremiah Jahn wrote:
> >>
> >>
>
> ...
>
> >>The expensive parts are the 4915 lookups into the litigant_details
Don't forget that Ultra 320 is the speed of the bus, not each drive.
No matter how many honking 15k disks you put on a 320MB/sec bus, you can
only get 320MB/sec total, and only so many outstanding I/Os on the bus.
Not so with SATA! Each drive is on its own bus, and you are only
limited by the speed of you
Ron wrote:
> At 01:18 PM 8/19/2005, John A Meinel wrote:
>
>> Jeremiah Jahn wrote:
>> > Sorry about the formatting.
>> >
>> > On Thu, 2005-08-18 at 12:55 -0500, John Arbash Meinel wrote:
>> >
>> >>Jeremiah Jahn wrote:
>> >>
>> >>
>>
>> ...
>>
>> >>The expensive parts are the 4915 lookups into the litigant_details
At 01:18 PM 8/19/2005, John A Meinel wrote:
Jeremiah Jahn wrote:
> Sorry about the formatting.
>
> On Thu, 2005-08-18 at 12:55 -0500, John Arbash Meinel wrote:
>
>>Jeremiah Jahn wrote:
>>
>>
...
>>The expensive parts are the 4915 lookups into the litigant_details (each
>>one takes approx 4ms for a total of ~20s).
While I agree that hundreds of threads seems like overkill, I think the
above advice might be going too far in the other direction. The problem
with single-threaded operation is that any delay affects the whole
system --- eg, if you're blocked waiting for disk I/O, the CPU doesn't
You
At 12:34 PM 8/19/2005, Jeffrey W. Baker wrote:
On Fri, 2005-08-19 at 10:54 -0400, Ron wrote:
> Maxtor Atlas 15K II's.
> Areca's 1GB buffer RAID cards
The former are SCSI disks and the latter is an SATA controller. The
combination would have a transaction rate of approximately 0.
You are evid
Jeremiah Jahn wrote:
> Sorry about the formatting.
>
> On Thu, 2005-08-18 at 12:55 -0500, John Arbash Meinel wrote:
>
>>Jeremiah Jahn wrote:
>>
>>
...
>>The expensive parts are the 4915 lookups into the litigant_details (each
>>one takes approx 4ms for a total of ~20s).
>>And then you do it again
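A quick sanity check on the arithmetic quoted above, using only the reported figures: the per-lookup latency accounts for essentially the whole runtime, so that latency (likely random I/O or cache misses) is the thing to attack.

```python
# Back-of-envelope: 4915 index lookups at ~4 ms apiece.
lookups = 4915
seconds_each = 0.004                       # ~4 ms per lookup, as reported
print(round(lookups * seconds_each, 2))    # ~19.66 s of the ~20 s total
```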
> Cool --- we've done a fair amount of work on squeezing out internal
> inefficiencies during this devel cycle, but it's always hard to predict
> just how much anyone will notice in the real world.
>
> Care to do some oprofile or gprof profiles to see where it's still bad?
>
Since release of 8.0
On 8/19/05 1:24 AM, "Mark Cotner" <[EMAIL PROTECTED]> wrote:
> I'm currently working on an application that will poll
> thousands of cable modems per minute and I would like
> to use PostgreSQL to maintain state between polls of
> each device. This requires a very heavy amount of
> updates in place
Sorry about the formatting.
On Thu, 2005-08-18 at 12:55 -0500, John Arbash Meinel wrote:
> Jeremiah Jahn wrote:
>
> >here's an example standard query. Ireally have to make the first hit go
> >faster. The table is clustered on full_name as well. 'Smith%'
> >took 87 seconds on the first hit
On Fri, 2005-08-19 at 10:54 -0400, Ron wrote:
> Maxtor Atlas 15K II's.
> Areca's 1GB buffer RAID cards
The former are SCSI disks and the latter is an SATA controller. The
combination would have a transaction rate of approximately 0.
I can vouch for the Areca controllers, however. You can cert
Diego de Lima wrote:
> Hi list,
>
> I'm using Pg 8.0.3 on Linux FC2.
>
> This question may have a very simple answer (I hope), but I'm having
> lots of trouble solving it, and I couldn't find any other post about it
> or anything in the pg docs.
>
> I have some very complex select statements
"Diego de Lima" <[EMAIL PROTECTED]> writes:
> I have some very complex select statements on 4 million rows tables.
> When using LEFT JOIN ON, some select statements take about 2 minutes.
> When I write exactly the same statement but with LEFT JOIN USING, it
> takes only 1 minute.
Could we s
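For readers following along, the two spellings under discussion look like this. A minimal sketch using Python's bundled sqlite3 as a stand-in: the 2x planner difference reported above is PostgreSQL-specific, so this only demonstrates that the two forms are logically equivalent.

```python
# LEFT JOIN ... ON vs LEFT JOIN ... USING on a toy pair of tables.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE a (id INTEGER, x TEXT);
    CREATE TABLE b (id INTEGER, y TEXT);
    INSERT INTO a VALUES (1, 'one'), (2, 'two');
    INSERT INTO b VALUES (1, 'uno');
""")

on_rows = con.execute(
    "SELECT a.id, a.x, b.y FROM a LEFT JOIN b ON a.id = b.id ORDER BY a.id"
).fetchall()
# USING (id) merges the join column but the result rows are the same.
using_rows = con.execute(
    "SELECT a.id, a.x, b.y FROM a LEFT JOIN b USING (id) ORDER BY a.id"
).fetchall()
print(on_rows == using_rows)   # same rows, possibly different plans
```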
that took a little while to get through the system didn't it. Please
ignore.
> Ingrate, n.: A man who bites the hand that feeds him, and then complains
> of indigestion.
--
A free society is one where it is safe to be unpopular.
-- Adlai Stevenson
Hi list,
I'm using Pg 8.0.3 on Linux FC2.
This question may have a very simple answer (I hope), but I'm having
lots of trouble solving it, and I couldn't find any other post about it
or anything in the pg docs.
I have some very complex select statements on 4
million rows tables. When u
At 09:58 AM 8/19/2005, Andreas Pflug wrote:
The 1-2k xact/sec for MySQL seems suspicious, sounds very much like
write-back cached, not write-through, esp. considering that heavy
concurrent write access isn't said to be MySQL's strength...
Don't be suspicious.
I haven't seen the code under dis
Alex mentions a nice setup, but I'm pretty sure I know how to beat
that IO subsystems HW's performance by at least 1.5x or 2x. Possibly
more. (No, I do NOT work for any vendor I'm about to discuss.)
Start by replacing the WD Raptors with Maxtor Atlas 15K II's.
At 5.5ms average access, 97.4MB/
Andreas Pflug <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> As far as the question "can PG do 1-2k xact/sec", the answer is "yes
>> if you throw enough hardware at it". Spending enough money on the
>> disk subsystem is the key ...
>>
> The 1-2k xact/sec for MySQL seems suspicious, sounds very
"Merlin Moncure" <[EMAIL PROTECTED]> writes:
> Relative to 8.0, I am seeing a dramatic, almost miraculous reduction in
> CPU load times in 8.1devel. This is for ISAM style access patterns over
> the parse/bind interface. (IOW one record at a time, 90% read, 10%
> write).
> Relative to commercial
Tom Lane wrote:
Bob Ippolito <[EMAIL PROTECTED]> writes:
If you don't want to optimize the whole application, I'd at least
just push the DB operations down to a very small number of
connections (*one* might even be optimal!), waiting on some kind of
thread-safe queue for updates from th
> Kari Lavikka <[EMAIL PROTECTED]> writes:
> > However, those configuration changes didn't have significant effect to
> > oprofile results. AtEOXact_CatCache consumes even more cycles.
>
> I believe I've fixed that for 8.1.
Relative to 8.0, I am seeing a dramatic, almost miraculous reduction in
C
Bob Ippolito <[EMAIL PROTECTED]> writes:
> If you don't want to optimize the whole application, I'd at least
> just push the DB operations down to a very small number of
> connections (*one* might even be optimal!), waiting on some kind of
> thread-safe queue for updates from the rest of the
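The single-writer design Bob describes can be sketched as below. The names (`apply_update`, the tuple format) are illustrative, not from any real SNMP poller: many producers enqueue updates while one worker owns the sole DB connection and drains the queue.

```python
# One thread-safe queue, one DB-writer thread; pollers only enqueue.
import queue
import threading

updates: "queue.Queue" = queue.Queue()
applied = []

def apply_update(item):
    """Stand-in for issuing an UPDATE over the single connection."""
    applied.append(item)

def db_writer():
    while True:
        item = updates.get()
        if item is None:          # sentinel: shut down cleanly
            break
        apply_update(item)
        updates.task_done()

worker = threading.Thread(target=db_writer)
worker.start()

# In the real system many poller threads would do this concurrently.
for modem_id in range(5):
    updates.put(("modem", modem_id, "new-state"))

updates.put(None)
worker.join()
print(len(applied))
```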
Kari Lavikka <[EMAIL PROTECTED]> writes:
> However, those configuration changes didn't have significant effect to
> oprofile results. AtEOXact_CatCache consumes even more cycles.
I believe I've fixed that for 8.1.
regards, tom lane
>> Ulrich Wisser wrote:
>> >
>> > one of our services is click counting for on line advertising. We do
>> > this by importing Apache log files every five minutes. This results in a
>> > lot of insert and delete statements.
> ...
>> If you are doing mostly inserting, make sure you are in a transaction
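The batching advice above as a minimal sketch, with sqlite3 standing in for the pattern only; the point with PostgreSQL is paying one commit (and one fsync) per batch instead of one per row.

```python
# All inserts inside a single transaction rather than 1000 autocommits.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE clicks (ad INTEGER, ts TEXT)")

rows = [(i % 10, "2005-08-19") for i in range(1000)]
with con:                                   # one BEGIN ... COMMIT for all rows
    con.executemany("INSERT INTO clicks VALUES (?, ?)", rows)

count = con.execute("SELECT count(*) FROM clicks").fetchone()[0]
print(count)
```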
I have managed tx speeds that high from postgresql going even as high
as 2500/sec for small tables, but it does require a good RAID
controller card (yes I'm even running with fsync on). I'm using 3ware
9500S-8MI with Raptor drives in multiple RAID 10s. The box wasn't too
$$$ at just around $7k. I
On Mon, 8 Aug 2005, Tom Lane wrote:
What that sounds like to me is a machine with inadequate disk I/O bandwidth.
Your earlier comment that checkpoint drives the machine into the ground
fits right into that theory, too. You said there is "almost no IO-wait"
but are you sure you are measuring that
On Aug 19, 2005, at 12:14 AM, Mark Cotner wrote:
Excellent feedback. Thank you. Please do keep in mind I'm storing the
results of SNMP queries. The majority of the time each thread is in a wait
state, listening on a UDP port for return packet. The number of threads is
high because in o
Excellent feedback. Thank you. Please do keep in mind I'm storing the
results of SNMP queries. The majority of the time each thread is in a wait
state, listening on a UDP port for return packet. The number of threads is
high because in order to sustain poll speed I need to minimize the impact o
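Since each thread spends most of its time blocked in recv, this is exactly the case the async suggestions earlier in the thread target. A rough sketch of the alternative: one event loop watching many UDP sockets at once. Localhost sockets stand in for real SNMP agents, and no SNMP encoding is attempted.

```python
# One selector services N "pollers" instead of N blocked threads.
import selectors
import socket

sel = selectors.DefaultSelector()

pollers = []
for _ in range(3):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("127.0.0.1", 0))          # ephemeral port per poller
    s.setblocking(False)
    sel.register(s, selectors.EVENT_READ)
    pollers.append(s)

# Simulate agents replying to each poller socket.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for s in pollers:
    sender.sendto(b"reply", s.getsockname())

replies = 0
while replies < len(pollers):         # single loop drains every socket
    for key, _ in sel.select(timeout=5):
        key.fileobj.recvfrom(1500)
        replies += 1
print(replies)
```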
On Aug 18, 2005, at 10:24 PM, Mark Cotner wrote:
I'm currently working on an application that will poll
thousands of cable modems per minute and I would like
to use PostgreSQL to maintain state between polls of
each device. This requires a very heavy amount of
updates in place on a reasonably
Hi all,
I bet you get tired of the same ole questions over and
over.
I'm currently working on an application that will poll
thousands of cable modems per minute and I would like
to use PostgreSQL to maintain state between polls of
each device. This requires a very heavy amount of
updates in place
Jeffrey W. Baker wrote:
> On Tue, 2005-08-16 at 10:46 -0700, Roger Hand wrote:
>> The disks are ext3 with journalling type of ordered, but this was later
>> changed to writeback with no apparent change in speed.
>>
>> They're on a Dell poweredge 6650 with LSI raid card, setup as follows:
>> 4 dis
The query plan does *not* look okay.
> electric=# EXPLAIN ANALYZE
> electric-# SELECT datavalue, logfielddatatype, timestamp FROM logdata_recent
> electric-# WHERE (logfielddatatype = 70 OR logfielddatatype = 71 OR
> logfielddatatype = 69)
> electric-# AND graphtargetlog = 1327
> electric-# AND t