Alan Stange wrote:
Not sure I get your point. We would want the lighter one,
all things being equal, right ? (lower shipping costs, less likely
to break when dropped on the floor)
Why would the lighter one be less likely to break when dropped on the
floor?
They'd have less kinetic energy.
Joshua Marsh wrote:
On 11/17/05, William Yu [EMAIL PROTECTED] wrote:
No argument there. But it's pointless if you are IO bound.
Why would you just accept "we're IO bound, nothing we can do"? I'd do
everything in my power to make my app go from IO bound to CPU bound.
Joshua Marsh [EMAIL PROTECTED] writes:
We all want our systems to be CPU bound, but it's not always possible.
Sure it is, let me introduce you to my router, a 486DX100...
Ok, I guess that wasn't very helpful, I admit.
--
greg
Joshua Marsh [EMAIL PROTECTED] writes:
We all want our systems to be CPU bound, but it's not always possible.
Remember, he is managing a 5 TB Database. That's quite a bit different than a
100 GB or even 500 GB database.
Ok, a more productive point: it's not really the size of the database
Alex Turner wrote:
Not at random access in RAID 10 they aren't, and anyone with their
head screwed on right is using RAID 10. The 9500S will still beat the
Areca cards at RAID 10 database access patterns.
The max 256MB onboard for 3ware cards is disappointing though. While
good enough for 95%
James Mello wrote:
Unless there was a way to guarantee consistency, it would be hard at
best to make this work. Convergence on large data sets across boxes is
non-trivial, and diffing databases is difficult at best. Unless there
was some form of automated way to ensure consistency, going 8 ways
I agree - you can get a very good one from www.acmemicro.com or
www.rackable.com with 8x 400GB SATA disks and the new 3Ware 9550SX SATA
RAID controller for about $6K with two Opteron 272 CPUs and 8GB of RAM
on a Tyan 2882 motherboard. We get about 400MB/s sustained disk read
performance on
Got some hard numbers to back your statement up? IME, the Areca
1160's with >= 1GB of cache beat any other commodity RAID
controller. This seems to be in agreement with at least one
independent testing source:
http://print.tweakers.net/?reviews/557
RAID HW from Xyratex, Engenio, or Dot Hill
On 16 Nov 2005, at 12:51, William Yu wrote:
Alex Turner wrote:
Not at random access in RAID 10 they aren't, and anyone with their
head screwed on right is using RAID 10. The 9500S will still beat
the
Areca cards at RAID 10 database access patterns.
The max 256MB onboard for 3ware cards
Joshua D. Drake wrote:
The reason you want the dual core cpus is that PostgreSQL can only
execute 1 query per cpu at a time,...
Is that true? I knew that PG only used one cpu per query, but how
does PG know how many CPUs there are to limit the number of queries?
--
Steve Wampler -- [EMAIL PROTECTED]
Steve Wampler wrote:
Joshua D. Drake wrote:
The reason you want the dual core cpus is that PostgreSQL can only
execute 1 query per cpu at a time,...
Is that true? I knew that PG only used one cpu per query, but how
does PG know how many CPUs there are to limit the number of queries?
David Boreham wrote:
Steve Wampler wrote:
Joshua D. Drake wrote:
The reason you want the dual core cpus is that PostgreSQL can only
execute 1 query per cpu at a time,...
Is that true? I knew that PG only used one cpu per query, but how
does PG know how many CPUs there are to
Alex Stapleton wrote:
You're going to have to factor in the increased failure rate in your cost
measurements, including any downtime or performance degradation whilst
rebuilding parts of your RAID array. It depends on how long your
planning for this system to be operational as well of course.
Spend a fortune on dual core CPUs and then buy crappy disks... I bet
for most applications this system will be IO bound, and you will see a
nice lot of drive failures in the first year of operation with
consumer grade drives.
I guess I've never bought into the vendor story that there are
two
Alex Turner wrote:
Spend a fortune on dual core CPUs and then buy crappy disks... I bet
for most applications this system will be IO bound, and you will see a
nice lot of drive failures in the first year of operation with
consumer grade drives.
Spend your money on better Disks, and don't
David Boreham wrote:
Spend a fortune on dual core CPUs and then buy crappy disks... I bet
for most applications this system will be IO bound, and you will see a
nice lot of drive failures in the first year of operation with
consumer grade drives.
I guess I've never bought into the vendor
I guess I've never bought into the vendor story that there are
two reliability grades. Why would they bother making two
different kinds of bearing, motor etc ? Seems like it's more
likely an excuse to justify higher prices. In my experience the
expensive SCSI drives I own break frequently while
I suggest you read this on the difference between enterprise/SCSI and
desktop/IDE drives:
http://www.seagate.com/content/docs/pdf/whitepaper/D2c_More_than_Interface_ATA_vs_SCSI_042003.pdf
This is exactly the kind of vendor propaganda I was talking about
and it proves my point quite
The only questions would be:
(1) Do you need an SMP server at all? I'd claim yes -- you always need
2+ cores whether it's DC or 2P to avoid IO interrupts blocking other
processes from running.
I would back this up. Even for smaller installations (single raid 1, 1
gig of ram). Why? Well
On Wed, 2005-11-16 at 08:51, David Boreham wrote:
Spend a fortune on dual core CPUs and then buy crappy disks... I bet
for most applications this system will be IO bound, and you will see a
nice lot of drive failures in the first year of operation with
consumer grade drives.
I guess
On Wed, 2005-11-16 at 09:33, William Yu wrote:
Alex Turner wrote:
Spend a fortune on dual core CPUs and then buy crappy disks... I bet
for most applications this system will be IO bound, and you will see a
nice lot of drive failures in the first year of operation with
consumer grade
Yes - that very benchmark shows that for a MySQL Datadrive in RAID 10,
the 3ware controllers beat the Areca card.
Alex.
On 11/16/05, Ron [EMAIL PROTECTED] wrote:
Got some hard numbers to back your statement up? IME, the Areca
1160's with >= 1GB of cache beat any other commodity RAID
Scott,
On 11/16/05 9:09 AM, Scott Marlowe [EMAIL PROTECTED] wrote:
The biggest gain is going from 1 to 2 CPUs (real cpus, like the DC
Opterons or genuine dual CPU mobo, not hyperthreaded). Part of the
issue isn't just raw
Oops,
Last point should be worded: All CPUs on all machines used by a parallel database
- Luke
On 11/16/05 9:47 AM, Luke Lonergan [EMAIL PROTECTED] wrote:
Scott,
On 11/16/05 9:09 AM, Scott Marlowe [EMAIL PROTECTED
On Wed, 2005-11-16 at 11:47, Luke Lonergan wrote:
Scott,
Some cutting for clarity... I agree on the OLTP versus OLAP
discussion.
Here are the facts so far:
* Postgres can only use 1 CPU on each query
* Postgres I/O for sequential scan is CPU limited to 110-120
MB/s on
On 11/16/05, David Boreham [EMAIL PROTECTED] wrote:
Spend a fortune on dual core CPUs and then buy crappy disks... I bet
for most applications this system will be IO bound, and you will see a
nice lot of drive failures in the first year of operation with
consumer grade drives.
I guess
On Wed, Nov 16, 2005 at 11:06:25AM -0600, Scott Marlowe wrote:
There was a big commercial EMC style array in the hosting center at the
same place that had something like a 16 wide by 16 tall array of IDE
drives for storing pdf / tiff stuff on it, and we had at least one
failure a month in it.
On 11/16/05, Steinar H. Gunderson [EMAIL PROTECTED] wrote:
If you have a cool SAN, it alerts you and removes all data off a disk
_before_ it starts giving hard failures :-)
/* Steinar */
--
Homepage: http://www.sesse.net/
Good point. I have avoided data loss *twice* this year by using SMART
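For anyone curious what that looks like in practice: a minimal sketch of the kind of SMART monitoring being described, using the smartmontools `smartctl` tool. The device name is illustrative, and reading SMART data requires root.

```shell
# Overall drive health self-assessment (PASSED/FAILED).
smartctl -H /dev/sda
# Vendor attribute table; a rising Reallocated_Sector_Ct or
# Current_Pending_Sector count is the early warning that lets you
# copy data off before the drive starts returning hard errors.
smartctl -A /dev/sda
```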
AMD added quad-core processors to their public roadmap for 2007.
Beyond 2007, the quad-cores will scale up to 32 sockets
(using Direct Connect Architecture 2.0)
Expect Intel to follow.
douglas
On Nov 16, 2005, at 9:38 AM, Steve Wampler wrote:
[...]
Got it - the cpu is only
David Boreham wrote:
I guess I've never bought into the vendor story that there are
two reliability grades. Why would they bother making two
different kinds of bearing, motor etc ? Seems like it's more
likely an excuse to justify higher prices.
then how to account for the fact that bleeding
On Nov 15, 2005, at 3:28 AM, Claus Guttesen wrote:
Hardware-wise I'd say dual core opterons. One dual-core-opteron
performs better than two single-core at the same speed. Tyan makes
at 5TB data, i'd vote that the application is disk I/O bound, and the
difference in CPU speed at the level
On Wed, 2005-11-16 at 12:51, Steinar H. Gunderson wrote:
On Wed, Nov 16, 2005 at 11:06:25AM -0600, Scott Marlowe wrote:
There was a big commercial EMC style array in the hosting center at the
same place that had something like a 16 wide by 16 tall array of IDE
drives for storing pdf / tiff
You _ARE_ kidding right? In what hallucination?
The performance numbers for the 1GB cache version of the Areca 1160
are the _grey_ line in the figures, and were added after the original
article was published:
Note: Since the original Dutch article was published in late
January, we have
William Yu wrote:
Our SCSI drives have failed maybe a little less than our IDE drives.
Microsoft in their database showcase terraserver project has
had the same experience. They studied multiple configurations
including a SCSI/SAN solution as well as a cluster of SATA boxes.
They measured
Amendment: there are graphs where the 1GB Areca 1160's do not do as
well. Given that they are mySQL specific and that similar usage
scenarios not involving mySQL (as well as most of the usage scenarios
involving mySQL; as I said these did not follow the pattern of the
rest of the benchmarks)
Yeah those big disks arrays are real sweet.
One day last week I was in a data center in Arizona when the big LSI/Storagetek
array in the cage next to mine had a hard drive failure. So the alarm shrieked
at like 13225535 decibels continuously for hours. BEEEP BP BP BP.
Of course
at 5TB data, i'd vote that the application is disk I/O bound, and the
difference in CPU speed at the level of dual opteron vs. dual-core
opteron is not gonna be noticed.
to maximize disk, try getting a dedicated high-end disk system like
nstor or netapp file servers hooked up to fiber
Does anyone have recommendations for hardware and/or OS to work with around
5TB datasets?
Hardware-wise I'd say dual core opterons. One dual-core-opteron
performs better than two single-core at the same speed. Tyan makes
some boards that have four sockets, thereby giving you 8 cpu's (if you
Adam,
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of
Claus Guttesen
Sent: Tuesday, November 15, 2005 12:29 AM
To: Adam Weisberg
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Hardware/OS recommendations for large
databases ( > 5TB
Hardware-wise I'd say dual core opterons. One dual-core-opteron
performs better than two single-core at the same speed. Tyan makes
some boards that have four sockets, thereby giving you 8 cpu's (if you
need that many). Sun and HP also makes nice hardware although the Tyan
board is more
Luke,
Have you tried the areca cards, they are slightly faster yet.
Dave
On 15-Nov-05, at 7:09 AM, Luke Lonergan wrote:
I agree - you can get a very good one from www.acmemicro.com or
www.rackable.com with 8x 400GB SATA disks and the new 3Ware 9550SX SATA
RAID controller for about $6K with two
Dave,
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Dave Cramer
Sent: Tuesday, November 15, 2005 6:15 AM
To: Luke Lonergan
Cc: Adam Weisberg; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Hardware/OS recommendations for large databases (
Luke
Merlin,
just FYI: tyan makes an 8 socket motherboard (up to 16 cores!):
http://www.swt.com/vx50.html
It can be loaded with up to 128 gb memory if all the sockets
are filled :).
Cool!
Just remember that you can't get more than 1 CPU working on a query at a
time without a parallel
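Since (as noted above) a single query runs on one CPU, the usual workaround is to partition a big scan by key range and issue the pieces over separate connections, one per core, then combine the partial results client-side. A minimal sketch of that partitioning idea in Python, with the actual database call stubbed out -- all names here are illustrative, not part of any real API:

```python
from multiprocessing import Pool

def split_ranges(lo, hi, parts):
    """Split [lo, hi) into `parts` contiguous key ranges, one per worker."""
    step = (hi - lo + parts - 1) // parts
    return [(s, min(s + step, hi)) for s in range(lo, hi, step)]

def scan_range(bounds):
    """Stand-in for one backend scanning its slice of the table.
    In real use this would open its own connection and run something like
    SELECT sum(x) FROM t WHERE id >= lo AND id < hi."""
    lo, hi = bounds
    return sum(range(lo, hi))

if __name__ == "__main__":
    # One worker (i.e. one connection, hence one backend/CPU) per core.
    with Pool(4) as pool:
        partials = pool.map(scan_range, split_ranges(0, 1_000_000, 4))
    print(sum(partials))  # same total as a single scan, spread over 4 CPUs
```

The combine step only works for decomposable aggregates (sum, count, min/max); anything needing a global sort or join is much harder to split this way.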
Merlin,
just FYI: tyan makes an 8 socket motherboard (up to 16 cores!):
http://www.swt.com/vx50.html
It can be loaded with up to 128 gb memory if all the sockets are
filled :).
Another thought - I priced out a maxed out machine with 16 cores and
128GB of RAM and 1.5TB of usable disk -
On Tue, Nov 15, 2005 at 09:33:25AM -0500, Luke Lonergan wrote:
write performance is now up to par with the best cards I believe. We
find that you still need to set Linux readahead to at least 8MB
(blockdev --setra) to get maximum read performance on them, is that your
What on earth does that
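For reference, `blockdev` takes the readahead size in 512-byte sectors, so the 8MB figure mentioned above works out to 16384 sectors. A sketch (the device name is illustrative, and the command needs root):

```shell
# Readahead is specified in 512-byte sectors:
# 8 MB / 512 B = 16384 sectors.
blockdev --getra /dev/sda        # print current readahead (in sectors)
blockdev --setra 16384 /dev/sda  # set readahead to 8 MB
```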
Merlin,
just FYI: tyan makes an 8 socket motherboard (up to 16 cores!):
http://www.swt.com/vx50.html
It can be loaded with up to 128 gb memory if all the sockets are
filled :).
Another thought - I priced out a maxed out machine with 16 cores and
128GB of RAM and 1.5TB of
Luke,
-Original Message-
From: Luke Lonergan [mailto:[EMAIL PROTECTED]
Sent: Tuesday, November 15, 2005 7:10 AM
To: Adam Weisberg
Cc: pgsql-performance@postgresql.org
Subject: RE: [PERFORM] Hardware/OS recommendations for large databases (
> 5TB)
Adam,
-Original Message-
From
Mike,
On 11/15/05 6:55 AM, Michael Stone [EMAIL PROTECTED] wrote:
On Tue, Nov 15, 2005 at 09:33:25AM -0500, Luke Lonergan wrote:
write performance is now up to par with the best cards I believe. We
find that you still
Merlin,
On 11/15/05 7:20 AM, Merlin Moncure [EMAIL PROTECTED] wrote:
It's hard to say what would be better. My gut says the 5u box would be
a lot better at handling high cpu/high concurrency problems...like your
Merlin Moncure wrote:
You could instead buy 8 machines that total 16 cores, 128GB RAM and
It's hard to say what would be better. My gut says the 5u box would be
a lot better at handling high cpu/high concurrency problems...like your
typical business erp backend. This is pure speculation of
Sent: Tuesday, November 15, 2005 10:57 AM
To: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Hardware/OS recommendations for large databases (
> 5TB)
Merlin Moncure wrote:
You could instead buy 8 machines that total 16 cores, 128GB RAM and
It's hard to say what would be better. My gut says the 5u box would
Subject: Re: [PERFORM] Hardware/OS recommendations for large
databases ( > 5TB)
Does anyone have recommendations for hardware and/or OS to
work with
around 5TB datasets?
Hardware-wise I'd say dual core opterons. One
dual-core-opteron performs better than two single-core at the
same speed. Tyan makes
Not at random access in RAID 10 they aren't, and anyone with their
head screwed on right is using RAID 10. The 9500S will still beat the
Areca cards at RAID 10 database access patterns.
Alex.
On 11/15/05, Dave Cramer [EMAIL PROTECTED] wrote:
Luke,
Have you tried the areca cards, they are
James,
On 11/15/05 11:07 AM, James Mello [EMAIL PROTECTED] wrote:
Unless there was a way to guarantee consistency, it would be hard at
best to make this work. Convergence on large data sets across boxes is
non