You asked for it! ;-)
If you want cheap, get SATA. If you want fast under
*load* conditions, get SCSI. Everything else at this
time is marketing hype, either intentional or learned.
Ignoring dollars, expect to see SCSI beat SATA by 40%.
* * * What I tell you three times is true * * *
Also, c
Things might've changed somewhat over the past year, but this is from
_the_ Linux guy at Dell...
-tfo
--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC
Strategic Open Source — Open Your i™
http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-26
Greg Stark <[EMAIL PROTECTED]> writes:
> In any case the issue with the IDE protocol is that fundamentally you
> can only have a single command pending. SCSI can have many commands
> pending.
That's the bottom line: the SCSI protocol was designed (twenty years ago!)
to allow the drive to do physic
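[The reordering being described can be sketched as a toy simulation. This is a made-up illustration, not real drive firmware: `total_seek`, `reorder_nearest_first`, and the track numbers are all hypothetical, and the greedy nearest-first schedule merely stands in for whatever elevator algorithm a drive actually uses.]

```python
# Toy model of why command queueing (SCSI TCQ-style reordering) helps:
# with several commands pending, the drive can service them in an order
# that minimizes head movement instead of strict arrival order.

def total_seek(start, tracks):
    """Sum of head movements when servicing requests in the given order."""
    pos, dist = start, 0
    for t in tracks:
        dist += abs(t - pos)
        pos = t
    return dist

def reorder_nearest_first(start, tracks):
    """Greedy 'service the nearest queued request next' schedule."""
    pending, pos, order = list(tracks), start, []
    while pending:
        nxt = min(pending, key=lambda t: abs(t - pos))
        pending.remove(nxt)
        order.append(nxt)
        pos = nxt
    return order

queue = [500, 10, 490, 20, 480]      # pending requests (hypothetical tracks)
fifo = total_seek(0, queue)          # one command at a time, arrival order
smart = total_seek(0, reorder_nearest_first(0, queue))
print(fifo, smart)                   # prints: 2400 500
```

[With only one command outstanding at a time, the drive has no choice but the FIFO order; with a queue it can cut the total seek distance dramatically, which is the whole argument for SCSI under load.]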
Neil Conway <[EMAIL PROTECTED]> writes:
> It wouldn't surprise me if gprof didn't get profiling data for dlopen'ed
> shared libraries (I haven't checked), but I think both oprofile and
> callgrind should be able to.
None of the platforms I use are very good at this :-(. Consider
building a spec
Yeah - the more reading I'm doing - the more I'm finding out.
Allegedly the Western Digital Raptor drives implement a version of
ATA-4 Tagged Queuing, which allows reordering of commands. Some
controllers support this. The 3ware docs say that the controller
supports both reordering on the controlle
Alex Turner <[EMAIL PROTECTED]> writes:
> SATA gives each drive its own channel, but you have to share in SCSI.
> A SATA controller typically can do 3Gb/sec (384MB/sec) per drive, but
> SCSI can only do 320MB/sec across the entire array.
SCSI controllers often have separate channels for each de
Using COPY ... FROM STDIN via the Perl DBI (DBD::Pg) interface,
I accidentally strung together several \n-terminated input lines,
and sent them to the server with a single "putline".
To my (happy) surprise, I ended up with exactly that number of rows
in the target table.
Is this a bug? Is this fund
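[The observed behavior is consistent with COPY's text format: the server splits the incoming stream on newlines, so one buffer holding several terminated lines yields several rows. A small sketch of building such a buffer — `build_copy_payload` and `copy_escape` are made-up helpers for illustration, not part of DBD::Pg or any driver:]

```python
# COPY text-format payload: each '\n'-terminated line is one row, with
# columns separated by tabs. Sending several lines in one write is
# equivalent to sending them one at a time, which would explain why the
# multi-line putline produced one row per line.

def copy_escape(value):
    """Escape a field for COPY text format (backslash, tab, newline)."""
    return (str(value).replace('\\', '\\\\')
                      .replace('\t', '\\t')
                      .replace('\n', '\\n'))

def build_copy_payload(rows):
    """Join rows into one buffer suitable for a single putline call."""
    return ''.join('\t'.join(copy_escape(f) for f in row) + '\n'
                   for row in rows)

payload = build_copy_payload([(1, 'ann'), (2, 'bob'), (3, 'has\ttab')])
print(payload.count('\n'))   # prints: 3  (one newline per row)
```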
Ok - I take it back - I'm reading through this now, and realising that
the reviews are pretty clueless in several places...
On Apr 6, 2005 8:12 PM, Alex Turner <[EMAIL PROTECTED]> wrote:
> Ok - so I found this fairly good online review of various SATA cards
> out there, with 3ware not doing too h
Ok - so I found this fairly good online review of various SATA cards
out there, with 3ware not doing too hot on RAID 5, but ok on RAID 10.
http://www.tweakers.net/reviews/557/
Very interesting stuff.
Alex Turner
netEconomist
On Apr 6, 2005 7:32 PM, Alex Turner <[EMAIL PROTECTED]> wrote:
> I gue
I guess I'm setting myself up here, and I'm really not being ignorant,
but can someone explain exactly how SCSI is supposed to be better than
SATA?
Both systems use drives with platters. Each drive can physically only
read one thing at a time.
SATA gives each drive its own channel, but you have
Adam Palmblad wrote:
can I actually look at the call tree that occurs when my function is
being executed or will I be limited to viewing calls to functions in
the postmaster binary?
You're the one with the gprof data, you tell us :)
It wouldn't surprise me if gprof didn't get profiling data for dlo
On Wed, 2005-04-06 at 18:09 -0400, Tom Lane wrote:
> I wrote:
> > Arjen van der Meijden <[EMAIL PROTECTED]> writes:
> >> SELECT COUNT(*) FROM
> >> data_main AS dm,
> >> postcodes AS p
> >> WHERE dm.range BETWEEN p.range_from AND p.range_till
>
> > Planner error ... because it doesn't have any good
I wanted to see if I could squeeze any more performance out of a C set
returning function I wrote. As such, I looked to a profiler. Is it
possible to get profile information on the function I wrote? I've got
postmaster and my function compiled with profiling support, and can find
the gmon.ou
John A Meinel <[EMAIL PROTECTED]> writes:
> Actually, I think he was saying do a nested loop, and for each item in
> the nested loop, re-evaluate if an index or a sequential scan is more
> efficient.
> I don't think postgres re-plans once it has started, though you could
> test this in a plpgsql f
Tom Lane wrote:
"Jim C. Nasby" <[EMAIL PROTECTED]> writes:
On Wed, Apr 06, 2005 at 06:09:37PM -0400, Tom Lane wrote:
Can anyone suggest a more general rule? Do we need for example to
consider whether the relation membership is the same in two clauses
that might be opposite sides of a range restric
Sorry if I'm pointing out the obvious here, but it seems worth
mentioning. AFAIK all 3ware controllers are set up so that each SATA
drive gets its own SATA bus. My understanding is that by and large,
SATA still suffers from a general inability to have multiple outstanding
commands on the bus at onc
"Jim C. Nasby" <[EMAIL PROTECTED]> writes:
> On Wed, Apr 06, 2005 at 06:09:37PM -0400, Tom Lane wrote:
>> Can anyone suggest a more general rule? Do we need for example to
>> consider whether the relation membership is the same in two clauses
>> that might be opposite sides of a range restriction?
On Wed, Apr 06, 2005 at 06:09:37PM -0400, Tom Lane wrote:
> Can anyone suggest a more general rule? Do we need for example to
> consider whether the relation membership is the same in two clauses
> that might be opposite sides of a range restriction? It seems like
>
> a.x > b.y AND a.x < b
Well - unfortunately software RAID isn't appropriate for everyone, and
some of us need a hardware RAID controller. The LSI Megaraid 320-2
card is almost exactly the same price as the 3ware 9500S-12 card
(although I will concede that a 320-2 card can handle at most 2x14
devices compare with the 12
I wrote:
> Arjen van der Meijden <[EMAIL PROTECTED]> writes:
>> SELECT COUNT(*) FROM
>> data_main AS dm,
>> postcodes AS p
>> WHERE dm.range BETWEEN p.range_from AND p.range_till
> Planner error ... because it doesn't have any good way to estimate the
> number of matching rows, it thinks that way
Arjen van der Meijden <[EMAIL PROTECTED]> writes:
> On 6-4-2005 19:42, Tom Lane wrote:
>> Wrong index ... what you probably could use here is an index on
>> data_main.range, so that the query could run with postcodes as the
>> outer side. I get such a plan by default with empty tables:
>> but I'm
Mischa <[EMAIL PROTECTED]> writes:
> Using COPY ... FROM STDIN via the Perl DBI (DBD::Pg) interface,
> I accidentally strung together several \n-terminated input lines,
> and sent them to the server with a single "putline".
> To my (happy) surprise, I ended up with exactly that number of rows
> in
> -Original Message-
> From: Mischa [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, April 06, 2005 1:47 PM
> To: pgsql-performance@postgresql.org
> Subject: COPY Hacks (WAS: RE: [PERFORM] Postgresql vs
> SQLserver for this
> application ?)
>
> [...]
> Using COPY ... FROM STDIN via the Perl D
Arjen van der Meijden <[EMAIL PROTECTED]> writes:
> On 6-4-2005 20:09, Tom Lane wrote:
>> Comparing the nestloop case to the hash case does make one think that
>> there's an awful lot of overhead somewhere, though. Two int2
>> comparisons ought not take very long :-(. Arjen, are you interested
>>
It's the same money if you factor in the 3ware controller. Even without
a caching controller, SCSI works well in multi-threaded IO
(notwithstanding crappy shit from Dell or Compaq). You can get such cards
from LSI for $75. And of course, many server MBs come with LSI
controllers built-in. Our
On Wed, 2005-04-06 at 14:40 -0400, Alex Turner wrote:
> I think his point was that 9 * 4 != 2400
Oh.. heh.. I didn't even notice that.
Can I pretend I did it in my head using HEX math and that it wasn't a
mistake?
> On Apr 6, 2005 2:23 PM, Rod Taylor <[EMAIL PROTECTED]> wrote:
> > On Wed, 2005-0
This thread seems to be focusing in on COPY efficiency,
I'd like to ask something I got no answer to, a few months ago.
Using COPY ... FROM STDIN via the Perl DBI (DBD::Pg) interface,
I accidentally strung together several \n-terminated input lines,
and sent them to the server with a single "putli
I think his point was that 9 * 4 != 2400
Alex Turner
netEconomist
On Apr 6, 2005 2:23 PM, Rod Taylor <[EMAIL PROTECTED]> wrote:
> On Wed, 2005-04-06 at 19:42 +0200, Steinar H. Gunderson wrote:
> > On Wed, Apr 06, 2005 at 01:18:29PM -0400, Rod Taylor wrote:
> > > Yeah, I think that can be done pro
Quoting Arjen van der Meijden <[EMAIL PROTECTED]>:
> Hi list,
>
> I noticed on a forum a query taking a surprisingly large amount of time
> in MySQL. Of course I wanted to prove PostgreSQL 8.0.1 could do it much
> better. To my surprise PostgreSQL was ten times worse on the same
> machine! And
On Wed, 2005-04-06 at 19:42 +0200, Steinar H. Gunderson wrote:
> On Wed, Apr 06, 2005 at 01:18:29PM -0400, Rod Taylor wrote:
> > Yeah, I think that can be done provided there is more than one worker.
> > My limit seems to be about 1000 transactions per second each with a
> > single insert for a sin
I guess I was thinking more in the range of 5000 transactions/sec, less
so 5000 rows on bulk import...
Alex
On Apr 6, 2005 12:47 PM, Mohan, Ross <[EMAIL PROTECTED]> wrote:
>
>
> 31Million tuples were loaded in approx 279 seconds, or approx 112k rows per
> second.
>
> > I'd love to see PG get i
On 6-4-2005 19:42, Tom Lane wrote:
Arjen van der Meijden <[EMAIL PROTECTED]> writes:
I noticed on a forum a query taking a surprisingly large amount of time
in MySQL. Of course I wanted to prove PostgreSQL 8.0.1 could do it much
better. To my surprise PostgreSQL was ten times worse on the same
m
"Dave Held" <[EMAIL PROTECTED]> writes:
> My completely amateur guess is that the planner is able to use
> Merge Join and Hash Join on your contrived queries because you
> are only trying to join one field to a single value (i.e.:
> operator=). But the BETWEEN clause is what forces the Nested
> Lo
Arjen van der Meijden <[EMAIL PROTECTED]> writes:
> I noticed on a forum a query taking a surprisingly large amount of time
> in MySQL. Of course I wanted to prove PostgreSQL 8.0.1 could do it much
> better. To my surprise PostgreSQL was ten times worse on the same
> machine! And I don't underst
On 6-4-2005 19:04, Steve Atkins wrote:
On Wed, Apr 06, 2005 at 06:52:35PM +0200, Arjen van der Meijden wrote:
Hi list,
I noticed on a forum a query taking a surprisingly large amount of time
in MySQL. Of course I wanted to prove PostgreSQL 8.0.1 could do it much
better. To my surprise PostgreSQL
> -Original Message-
> From: Arjen van der Meijden
> [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, April 06, 2005 11:53 AM
> To: performance pgsql
> Subject: [PERFORM] Plan for relatively simple query seems to be very
> inefficient
>
> [...]
> SELECT COUNT(*) FROM
> data_main AS dm,
> post
On Wed, Apr 06, 2005 at 01:18:29PM -0400, Rod Taylor wrote:
> Yeah, I think that can be done provided there is more than one worker.
> My limit seems to be about 1000 transactions per second each with a
> single insert for a single process (round trip time down the Fibre
> Channel is large) but run
On Wed, 2005-04-06 at 19:08 +0200, [EMAIL PROTECTED] wrote:
>
> On our production server, I can insert 5000 tuples in 2100 ms.
>
> Single Xeon 2.6 Ghz
> 2 Gigs ram
> 3ware RAID 5 SATA drives array, 3 drives only :-((
> PG 8.0 - fsync off
>
> I do think inserting 5000 tuples in a second (i
On our production server, I can insert 5000 tuples in 2100 ms.
Single Xeon 2.6 Ghz
2 Gigs ram
3ware RAID 5 SATA drives array, 3 drives only :-((
PG 8.0 - fsync off
I do think inserting 5000 tuples in a second (i.e. 5000 insert transactions, no bulk load) can be reached with a well configured SC
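[A quick sanity check of the rates quoted in this thread — the 5000-rows-in-2100-ms figure above, and the 31M-rows-in-279-seconds bulk-load figure from the Oracle direct-path post quoted elsewhere in the digest:]

```python
# Sanity-check the throughput figures quoted in the thread.
rows, ms = 5000, 2100
per_sec = rows / (ms / 1000.0)        # single-row insert transactions
print(round(per_sec))                 # prints: 2381  (inserts/sec)

bulk_rows, secs = 31_000_000, 279
print(round(bulk_rows / secs))        # prints: 111111  (rows/sec, bulk load)
```

[So the production box above is doing roughly 2400 insert transactions per second, while the bulk-load figure is about 45x that — which is why the thread keeps distinguishing the two workloads.]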
31Million tuples were loaded in approx 279 seconds, or approx 112k rows per
second.
> I'd love to see PG get into this range..i am a big fan of PG (just a
> rank newbie) but I gotta think the underlying code to do this has to
> be not-too-complex.
I'd say we're there.
||Yes! PG is
On Wed, Apr 06, 2005 at 06:52:35PM +0200, Arjen van der Meijden wrote:
> Hi list,
>
> I noticed on a forum a query taking a surprisingly large amount of time
> in MySQL. Of course I wanted to prove PostgreSQL 8.0.1 could do it much
> better. To my surprise PostgreSQL was ten times worse on the s
Hi list,
I noticed on a forum a query taking a surprisingly large amount of time
in MySQL. Of course I wanted to prove PostgreSQL 8.0.1 could do it much
better. To my surprise PostgreSQL was ten times worse on the same
machine! And I don't understand why.
I don't really need this query to be fa
How close to this is PG's COPY? I get surprisingly good results using COPY
with jdbc on smallish systems (now if that patch would make it into the mainstream
PG jdbc support!) I think COPY has a bit more overhead than what a Bulkload
feature may have, but I suspect it's not that much more.
|| S
On Wed, 2005-04-06 at 16:12 +, Mohan, Ross wrote:
> I wish I had a Dell system and run case to show you Alex, but I don't...
> however...using Oracle's "direct path" feature, it's pretty straightforward.
>
> We've done 110,000 rows per second into index-less tables on a big system
> (IBM Powe
Mohan, Ross wrote:
> I wish I had a Dell system and run case to show you Alex, but I don't...
> however...using Oracle's "direct path" feature, it's pretty straightforward.
>
> We've done 110,000 rows per second into index-less tables on a big system
> (IBM Power5 chips, Hitachi SAN). ( Yes, I am
I wish I had a Dell system and run case to show you Alex, but I don't...
however...using Oracle's "direct path" feature, it's pretty straightforward.
We've done 110,000 rows per second into index-less tables on a big system
(IBM Power5 chips, Hitachi SAN). ( Yes, I am sure: over 100K a second. Su
I think everyone was scared off by the 5000 inserts per second number.
I've never seen even Oracle do this on a top end Dell system with
copious SCSI attached storage.
Alex Turner
netEconomist
On Apr 6, 2005 3:17 AM, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>
> Unfortunately.
>
> But we
It's hardly the same money; the drives are twice as much.
It's all about the controller, baby, with any kind of drive. A bad SCSI
controller will give sucky performance too, believe me. We had a
Compaq Smart Array 5304, and its performance was _very_ subpar.
If someone has a simple benchmark tes
On Wed, Apr 06, 2005 at 03:26:33PM +0200, PFC wrote:
> Well, unless you have 64-bit PCI, the "standard" PCI bus does 133 MB/s,
> which is then split into two times 66.5 MB/s for 1) reading from the
> PCI network card and 2) writing to the PCI harddisk controller. No wonder
> y
and we get about 65MB/sec sustained when writing to an ext3 filesystem
(actually, when wgetting a file off the gigabit LAN :-) ). I haven't
Well, unless you have 64-bit PCI, the "standard" PCI bus does 133 MB/s, which
is then split into two times 66.5 MB/s for 1) reading from the PCI
networ
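[Where the oft-quoted 133 MB/s comes from: conventional PCI is a 32-bit bus clocked at ~33 MHz. The even two-way split is of course an idealization of bus sharing, not something the spec guarantees:]

```python
# Theoretical peak of conventional 32-bit / 33 MHz PCI, and the idealized
# two-way split (NIC reading, disk controller writing) described above.
bus_bits, clock_mhz = 32, 33.33
mb_per_sec = bus_bits / 8 * clock_mhz     # bytes per transfer * MHz -> MB/s
print(round(mb_per_sec))                  # prints: 133
print(round(mb_per_sec / 2, 1))           # prints: 66.7  (each way, shared)
```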
On Tue, Apr 05, 2005 at 09:44:56PM -0700, Kevin Brown wrote:
> Now, the performance is pretty bad considering the setup -- a RAID 5
> with five 73.6 gig SCSI disks (10K RPM, I believe). Reads through the
> filesystem come through at about 65 megabytes/sec, writes about 35
> megabytes/sec (at least
On Tue, 2005-04-05 at 16:05 -0400, Christopher Petrilli wrote:
> On Apr 5, 2005 3:48 PM, Simon Riggs <[EMAIL PROTECTED]> wrote:
> > > The indicated fix of course is to increase shared_buffers.
> >
> > Splitting your tables at 4M, not 10M would work even better.
>
> Unfortunately, given we are tal
On Tue, 2005-04-05 at 18:55 -0400, Christopher Petrilli wrote:
> On Apr 5, 2005 3:48 PM, Simon Riggs <[EMAIL PROTECTED]> wrote:
> > B-trees aren't unique to PostgreSQL; the explanation developed here
> > would work equally well for any database system that used tree-based
> > indexes. Do we still t
Alex Turner wrote:
I'm no drive expert, but it seems to me that our write performance is
excellent. I think what most are concerned about is OLTP where you
are doing heavy write _and_ heavy read performance at the same time.
Our system is mostly read during the day, but we do a full system
update
Unfortunately.
But we are in the process of choosing PostgreSQL with pgcluster. I'm currently running some tests (performance, stability...)
Save the money on the license fees, you get it for your hardware ;-)
I still welcome any advices or comments and I'll let you know how the project is g