On Sat, Jan 14, 2006 at 09:37:01PM -0500, Charles Sprickman wrote:
Following up to myself again...

On Wed, 14 Dec 2005, Charles Sprickman wrote:
> Hello all,
> Supermicro 1U w/SCA backplane and 4 bays
> 2x2.8 GHz Xeons
> Adaptec 2015S zero channel RAID card

I don't want to throw away the four machines like that that we have. I do
want to throw away the ZCR [...]
Charles,

On 1/14/06 7:23 PM, Charles Sprickman [EMAIL PROTECTED] wrote:
>> The drives and the controller go in the Chenbro case. U320 SCSI from the
>> RAID controller in the Chenbro case to the 1U server.
> Thanks for the explanation - I didn't click on your Areca link until now,
> thinking it was a [...]
On Sat, 14 Jan 2006, Luke Lonergan wrote:
Charles,

On 1/14/06 6:37 PM, Charles Sprickman [EMAIL PROTECTED] wrote:
> I'm vaguely considering pairing these two devices:
> http://www.areca.us/products/html/products.htm
> That's an Areca 16 channel SATA II (I haven't even read up on what's new
> in SATA II) RAID controller with an optional [...]
I hope this isn't too far off topic for this list. Postgres is
the main application that I'm looking to accommodate. Anything
else I can do with whatever solution we find is just gravy...

You've given me a lot to go on... Now I'm going to have to do some
research as to real-world RAID controller performance. It's vexing (to
say the least) that most vendors don't supply any raw throughput or TPS
stats [...]
Charles,

On 12/20/05 9:58 PM, Charles Sprickman [EMAIL PROTECTED] wrote:
> You've given me a lot to go on... Now I'm going to have to do some
> research as to real-world RAID controller performance. It's vexing (to
> say the least) that most vendors don't supply any raw throughput or TPS
> stats [...]
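One way around the missing vendor numbers is to measure them yourself: dd gives raw sequential throughput, and pgbench (shipped with PostgreSQL) gives a rough TPS figure. A minimal sketch, assuming the array is mounted at /array and a scratch database named benchdb exists (both placeholders):

```shell
# Raw sequential write/read throughput (FreeBSD dd accepts bs=1m).
dd if=/dev/zero of=/array/ddtest bs=1m count=4096   # write ~4 GB
dd if=/array/ddtest of=/dev/null bs=1m              # read it back
rm /array/ddtest

# Rough TPS via pgbench: initialize at scale factor 100, then run
# 8 concurrent clients for 1000 transactions each.
pgbench -i -s 100 benchdb
pgbench -c 8 -t 1000 benchdb
```

Use a test file at least twice the size of RAM, otherwise the OS buffer cache flatters the dd numbers.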
Hello all,

It seems that I'm starting to outgrow our current Postgres setup. We've
been running a handful of machines as standalone db servers. This is all
in a colocation environment, so everything is stuffed into 1U Supermicro
boxes. Our standard build looks like this:

Supermicro 1U w/SCA backplane and 4 bays
2x2.8 GHz Xeons
Adaptec 2015S zero channel RAID card
On Wed, 14 Dec 2005, Charles Sprickman wrote:
[big snip]
The list server seems to be regurgitating old stuff, and in doing so it
reminded me to thank everyone for their input. I was kind of waiting to
see if anyone who was very pro-NAS/SAN was going to pipe up, but it looks
like most people
Jim C. Nasby wrote:

On Wed, Dec 14, 2005 at 01:56:10AM -0500, Charles Sprickman wrote:
> You'll note that I'm being somewhat driven by my OS of choice, FreeBSD.
> Unlike Solaris or other commercial offerings, there is no nice volume
> management available. While I'd love to keep managing a dozen or so
> FreeBSD [...]
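For what it's worth, FreeBSD 5.x did grow some GEOM-based building blocks; gmirror and gstripe are not a full volume manager in the Solaris sense, but they cover simple mirror/stripe setups. A sketch, with device names as placeholders for your own disks:

```shell
# Software RAID1 with gmirror (GEOM class, FreeBSD 5.3+).
gmirror load                                 # load geom_mirror.ko
gmirror label -v -b round-robin gm0 da1 da2  # mirror da1 and da2 as gm0
newfs /dev/mirror/gm0
mount /dev/mirror/gm0 /var/db
gmirror status                               # verify both disks are active
```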
Subject: Re: [PERFORM] SAN/NAS options
On Wed, Dec 14, 2005 at 08:28:56PM +1300, Mark Kirkwood wrote:
> Another interesting thing to try is rebuilding the database ufs
> filesystem(s) with 32K blocks and 4K frags (as opposed to 8K/1K or
> 16K/2K - can't recall the default on 4.x). I found this to give a factor
> of 2 speedup on random [...]
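The rebuild Mark describes maps to newfs's -b (block size) and -f (fragment size) flags; UFS conventionally keeps an 8:1 block-to-frag ratio, which 32K/4K preserves. A sketch against a placeholder partition (newfs destroys all data on it):

```shell
# Recreate the database filesystem with 32K blocks / 4K fragments.
# WARNING: newfs erases everything on the target partition.
umount /var/db/pgsql
newfs -b 32768 -f 4096 /dev/da0s1e
tunefs -p /dev/da0s1e    # print filesystem parameters to confirm
mount /dev/da0s1e /var/db/pgsql
```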
On Fri, Dec 16, 2005 at 04:18:01PM -0600, Jim C. Nasby wrote:
> Even if you're doing a lot of random IO? I would think that random IO
> would perform better if you use smaller (8K) blocks, since there's less
> data being read in and then just thrown away that way.

The overhead of reading an 8k block [...]
On Fri, Dec 16, 2005 at 06:25:25PM -0600, Jim C. Nasby wrote:
> True, but now you've got 4x the amount of data in your cache that you
> probably don't need.

Or you might be 4x more likely to have data cached that's needed later.
If you're hitting disk either way, that's probably more likely than [...]
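The two positions trade off numerically: with 32K filesystem blocks and Postgres's 8K pages, every random page read drags in three neighbor pages, which is waste if they're never referenced and a free prefetch if they are. A toy model of that trade-off (the locality probability p is purely illustrative):

```shell
# Toy arithmetic for the 32K-block vs 8K-page trade-off.
fs_block=32768; pg_page=8192
pages_per_block=$((fs_block / pg_page))       # 4 pages arrive per read
waste=$(( (pages_per_block - 1) * 100 / pages_per_block ))
echo "pages pulled in per random read: $pages_per_block"
echo "cache waste if neighbors are never used: ${waste}%"   # Jim's point

# Michael's point: if each neighbor has probability p of being referenced
# before eviction, each 32K read yields (pages_per_block-1)*p free hits.
awk -v n=$((pages_per_block - 1)) -v p=0.10 \
    'BEGIN { printf "expected free cache hits per read at p=%.2f: %.2f\n", p, n*p }'
```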
The Apple is, as you say, cheap (except, the Apple markup on the disks
fuzzes that a bit). It's easy to set up, and has been quite reliable for me,
but do not expect anything resembling good DB performance out of it (I gave
up running anything but backup DBs on it). From the mouth of Apple guys, [...]
On Wed, Dec 14, 2005 at 11:53:52AM -0500, Andrew Rawnsley wrote:
> Other goofy things about it: it isn't 1 device with 14 disks and redundant
> controllers. It's two 7-disk arrays with non-redundant controllers. It
> doesn't do RAID10.

And if you want hot spares you need *two* per tray (one for each [...]
Charles,

> Lastly, one thing that I'm not yet finding in trying to
> educate myself on SANs is a good overview of what's come out
> in the past few years that's more affordable than the old
> big-iron stuff. For example I saw some brief info on this
> list's archives about the Dell/EMC offerings.