On Aug 17, 2007, at 10:44 PM, Ivan Voras wrote:
fdisk and bsdlabels both have a limit: because of the way they store the
data about the disk space they span, they can't store values that
reference space > 2 TB. In particular, every partition must start at an
offset <= 2 TB, and cannot be
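One common workaround for that 2 TB addressing limit is a GPT label instead of an MBR slice plus bsdlabel. A minimal sketch, assuming the array is exported as a single LUN that shows up as da0 and using the base-system gpt(8) tool of that era; the device name is a placeholder:

  # GPT does not have the 32-bit sector fields that cap fdisk/bsdlabel at 2 TB
  gpt create da0              # write a fresh GPT to the LUN (assumed to be da0)
  gpt add -t ufs da0          # one partition spanning the whole device -> da0p1
  newfs -U /dev/da0p1         # UFS2 with soft updates on the new partition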
Vivek Khera wrote:
On Aug 17, 2007, at 10:44 PM, Ivan Voras wrote:
fdisk and bsdlabels both have a limit: because of the way they store the
data about the disk space they span, they can't store values that
reference space > 2 TB. In particular, every partition must start at an
offset <= 2
On Aug 29, 2007, at 2:43 PM, Kirill Ponomarew wrote:
What type of I/O did you test, random reads/writes or sequential writes?
The performance of a RAID group always depends on what software you
run on it. If it's a database, be prepared for many
random reads/writes, hence dd(1) tests would
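The distinction matters because dd(1) only exercises sequential streaming. A rough sketch of the difference, with made-up paths and sizes; the random-I/O side really needs a dedicated benchmark rather than dd:

  # Sequential throughput, roughly what a plain dd(1) test measures
  dd if=/dev/zero of=/mnt/test/seqfile bs=1m count=4096   # ~4 GB sequential write
  dd if=/mnt/test/seqfile of=/dev/null bs=1m              # sequential read back
  # Random read/write patterns (a database workload) are better approximated
  # with a tool such as bonnie++, iozone, or the database's own benchmark
  # (e.g. pgbench for postgres)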
On Wed, Aug 29, 2007 at 10:07:19AM -0400, Vivek Khera wrote:
On Aug 17, 2007, at 10:44 PM, Ivan Voras wrote:
fdisk and bsdlabels both have a limit: because of the way they store the
data about the disk space they span, they can't store values that
reference space > 2 TB. In particular,
If you want to avoid the long fsck times your remaining options are a
journaling filesystem or zfs; either requires an upgrade from freebsd
6.2. I have used zfs and had a server stop due to a power outage in our
area. Our zfs samba server came up fine with no data corruption. So I
will
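For reference, a minimal sketch of that ZFS route, assuming the controller exports the whole array as one LUN visible as da0; the pool and file system names are chosen purely for illustration:

  zpool create tank da0                         # pool directly on the exported LUN
  zfs create tank/pgdata                        # dedicated file system for the database
  zfs set mountpoint=/var/db/pgsql tank/pgdata
  # ZFS is transactional, so there is no fsck pass after a power loss;
  # "zpool status tank" shows pool health once the box is back up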
* Vivek Khera ([EMAIL PROTECTED]) wrote:
I'll investigate this option. Does anyone know the stability/
reliability of the mpt(4) driver on CURRENT? Is it out of the GIANT lock
yet? It was hard to tell from the TODO list if it is entirely free of
GIANT or not.
Yes, mpt(4) was made MPSAFE in
Clayton Milos wrote:
If you want awesome performance and reliability the real way to go is
RAID10 (or more correctly RAID 0+1).
RAID10 and RAID0+1 are very different beasts. RAID10 is the best
choice for a read/write intensive f/s with valuable
On Fri, 17 Aug 2007 21:50:53 -0400
Vivek Khera [EMAIL PROTECTED] wrote:
My only fear of this is that once this system is in production,
that's pretty much it. Maintenance windows are about 1 year apart,
usually longer.
Seems to me you really should want a redundant / clustered system,
On Aug 18, 2007, at 4:09 AM, Thomas Hurst wrote:
Best temper your fear with some thorough testing then. If you are going
to use ZFS in such a situation, though, I might be strongly tempted to
use Solaris instead.
Why the long gaps between maintenance?
This is a DB server for a 24x7
I have a shiny new big RAID array. 16x500GB SATA 300+NCQ drives
connected to the host via 4Gb fibre channel. This gives me 6.5 TB of
raw disk.
I've come up with three possibilities on organizing this disk. My
needs are really for a single 1 TB file system on which I will run
postgres.
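Before carving it up, it is worth confirming the host really sees the full capacity over the fibre channel link. A quick check, with the device name assumed:

  camcontrol devlist        # the FC-attached array should show up, e.g. as da0
  diskinfo -v da0           # mediasize in bytes should be roughly 6.5 TB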
- Original Message -
From: Claus Guttesen [EMAIL PROTECTED]
To: Vivek Khera [EMAIL PROTECTED]
Cc: FreeBSD Stable freebsd-stable@freebsd.org
Sent: Friday, August 17, 2007 11:10 PM
Subject: Re: large RAID volume partition strategy
I have a shiny new big RAID array. 16x500GB SATA 300
On Fri, 17 Aug 2007 17:42:55 -0400 Vivek Khera wrote:
I have a shiny new big RAID array. 16x500GB SATA 300+NCQ drives
connected to the host via 4Gb fibre channel. This gives me 6.5 TB of
raw disk.
I've come up with three possibilities on organizing this disk. My
needs are really for a
On Sat, 18 Aug 2007 02:26:04 +0400 Boris Samorodov wrote:
On Fri, 17 Aug 2007 17:42:55 -0400 Vivek Khera wrote:
I have a shiny new big RAID array. 16x500GB SATA 300+NCQ drives
connected to the host via 4Gb fibre channel. This gives me 6.5 TB of
raw disk.
I've come up with three
Vivek Khera wrote:
I'm not keen on option 1 because of the potentially long fsck times
after a crash.
Depending on your allowable downtime after a crash, fscking even a 1 TB
UFS file system can take a long time. For large file systems there's
really
- Clayton Milos [EMAIL PROTECTED] wrote:
If your goal is speed and, obviously, as little chance of failure as
possible (RAID6+spare), then RAID6 is the wrong way to go...
RAID6's read speeds are great but its write speeds are not.
If you want awesome performance and reliability the real way to go
On Aug 17, 2007, at 6:26 PM, Boris Samorodov wrote:
I have 6 SATA-II 300MB/s disks on a 3WARE adapter. My (very!) simple
tests gave about 170MB/s for dd. BTW, I tested (OK, very quickly)
RAID5, RAID6, and gmirror+gstripe, and none got close to RAID10. (Well, as
expected, I suppose).
Whichever RAID
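For anyone curious what the gmirror+gstripe combination looks like, a rough sketch building striped mirrors (a RAID10-style layout) from four hypothetical disks da1 through da4; all names are invented:

  kldload geom_mirror geom_stripe          # or compile the GEOM classes into the kernel
  gmirror label m0 da1 da2                 # mirror pair one
  gmirror label m1 da3 da4                 # mirror pair two
  gstripe label st0 mirror/m0 mirror/m1    # stripe across the two mirrors
  newfs -U /dev/stripe/st0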
On Aug 17, 2007, at 6:10 PM, Claus Guttesen wrote:
If you want to avoid the long fsck times your remaining options are a
journaling filesystem or zfs; either requires an upgrade from freebsd
6.2. I have used zfs and had a server stop due to a power outage in our
area. Our zfs samba server came up
On Aug 17, 2007, at 7:31 PM, Ivan Voras wrote:
Depending on your allowable downtime after a crash, fscking even a 1 TB
UFS file system can take a long time. For large file systems there's
really no alternative to using -CURRENT / 7.0, and either gjournal
or ZFS.
I'll investigate this
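The gjournal half of that suggestion would look roughly like this on 7.0/-CURRENT; the provider name da0p1 and the mount point are assumptions:

  kldload geom_journal                      # or add 'options GEOM_JOURNAL' to the kernel
  gjournal label da0p1                      # creates the journaled provider da0p1.journal
  newfs -J /dev/da0p1.journal               # -J creates the file system gjournal-aware
  mount -o async /dev/da0p1.journal /data   # async is fine here; the journal orders writes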
Vivek Khera wrote:
My only fear of this is that once this system is in production, that's
pretty much it. Maintenance windows are about 1 year apart, usually
longer.
Others will have to comment about that. I have only one 7-CURRENT in
production (because of ZFS) and I had only one panic (in
Vivek Khera wrote:
But, if I don't go with zfs, which would be a better way to slice the
space up: RAID volumes exported as individual disks to freebsd, or one
RAID volume divided into multiple logical partitions with disklabel?
In general, it's almost always better to do the partitioning in
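If the choice ends up being one big exported volume sliced on the host, GPT avoids the disklabel limits mentioned above. A sketch with invented sizes (1 TB for the postgres file system, the rest for bulk data):

  gpt create da0
  gpt add -s 2147483648 -t ufs da0     # p1: 1 TB (2^31 512-byte sectors) for postgres
  gpt add -t ufs da0                   # p2: whatever space remains
  newfs -U /dev/da0p1
  newfs -U /dev/da0p2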