Thanks. Apparently, the napp-it web interface did not do what I asked it to do.
I'll try to remove the cache and the log devices from the pool and redo it
from the command-line interface.

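For reference, a minimal sketch of that cleanup from the shell, assuming the pool is called tank and that the SSD partitions c4t1d0p0/p1 are the ones currently attached (device names taken from elsewhere in the thread; adjust to whatever zpool status actually reports):

  # zpool status tank              # note the exact device names under "logs" and "cache"
  # zpool remove tank c4t1d0p0     # detach the log device
  # zpool remove tank c4t1d0p1     # detach the cache device
  # zpool status tank              # verify both vdevs are gone before re-adding anything

zpool remove handles log, cache and hot-spare vdevs, which is all that is needed here.
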
napp-it up to 0.8 does not support slices or partitions;
napp-it 0.9 supports partitions and offers [...]

On Thu, Jan 03, 2013 at 03:21:33PM -0600, Phillip Wagstrom wrote:
> Eugen,
Thanks Phillip and others, most illuminating (pun intended).

> Be aware that p0 corresponds to the entire disk, regardless of how it
> is partitioned with fdisk. The fdisk partitions are 1 - 4. By using p0 for [...]
> Personally, I'd recommend putting a standard Solaris fdisk
> partition on the drive and creating the two slices under that.
Why? In most cases giving zfs an entire disk is the best option.
I wouldn't bother with any manual partitioning.

--
Robert Milkowski
http://milek.blogspot.com
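
If the whole-disk route is taken (one SSD per role), the commands are correspondingly simple. A sketch, assuming pool tank and two SSDs c4t1d0 and c4t2d0 (the second device name is a placeholder, not from the thread):

  # zpool add tank log c4t1d0      # whole SSD as a dedicated SLOG
  # zpool add tank cache c4t2d0    # second whole SSD as L2ARC

Handing ZFS the whole disk also lets it enable the device's write cache, which is part of why this is the usual recommendation.
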
On Fri, Jan 04, 2013 at 06:57:44PM -, Robert Milkowski wrote:
>> Personally, I'd recommend putting a standard Solaris fdisk
>> partition on the drive and creating the two slices under that.
> Why? In most cases giving zfs an entire disk is the best option.
> I wouldn't bother with any [...]
If you're dedicating the disk to a single task (data, SLOG, L2ARC) then
absolutely. If you're splitting tasks and wanting to make a drive do two
things, like SLOG and L2ARC, then you have to do this.
Some of the confusion here is between what is a traditional FDISK
partition [...]

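A sketch of that split, assuming the 80 GB Intel SSD shows up as c4t1d0 and the pool is tank: put a single Solaris fdisk partition on the disk, carve two slices out of it with format(1M), then hand the slices to ZFS. The slice sizes below are only an example (a SLOG rarely needs more than a few GB):

  # fdisk -B /dev/rdsk/c4t1d0p0    # one Solaris fdisk partition spanning the whole disk
  # format c4t1d0                  # in the partition> menu: create s0 (~4 GB) and s1 (rest), then label
  # zpool add tank log c4t1d0s0    # small slice as SLOG
  # zpool add tank cache c4t1d0s1  # remaining slice as L2ARC

Because slices (sN) live inside the Solaris partition rather than being raw fdisk partitions (pN), they cannot overlap the whole-disk p0 device, which sidesteps the corruption risk discussed elsewhere in the thread.
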
On Sun, Dec 30, 2012 at 06:02:40PM +0100, Eugen Leitl wrote:
> Happy $holidays,
> I have a pool of 8x ST31000340AS on an LSI 8-port adapter as [...]
Just a little update on the home NAS project.
I've set the pool sync to disabled, and added a couple of [...]
       8. c4t1d0 ATA-INTELSSDSA2M080-02G9 cyl [...]

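For completeness, the sync setting mentioned above is an ordinary ZFS dataset property; a sketch, assuming the pool is named tank:

  # zfs set sync=disabled tank     # disable synchronous write semantics pool-wide
  # zfs get sync tank              # verify; valid values are standard, always, disabled

Note that sync=disabled trades away exactly the guarantees a SLOG exists to provide, so it is usually only appropriate for benchmarking or expendable data.
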
Eugen,
Be aware that p0 corresponds to the entire disk, regardless of how it
is partitioned with fdisk. The fdisk partitions are 1 - 4. By using p0 for
log and p1 for cache, you could very well be writing to the same location on
the SSD and corrupting things.
Personally, I'd recommend putting a standard Solaris fdisk partition on
the drive and creating the two slices under that.

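To make the overlap concrete: in Solaris device naming, cXtYdZp0 is the whole disk and p1-p4 are the four primary fdisk partitions, so p1 is contained inside p0. One way to check what the pool is actually sitting on (pool name tank and device c4t1d0 assumed from the thread):

  # zpool status tank                  # lists the devices attached as logs and cache
  # fdisk -W - /dev/rdsk/c4t1d0p0      # dump the fdisk partition table of the SSD to stdout

If the log and cache vdevs resolve to overlapping regions of the same physical disk, writes to one can land inside the other.
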
Free advice is cheap...
> I personally don't see the advantage of caching reads
> and logging writes to the same devices. (Is this recommended?)
If this pool is serving CIFS/NFS, I would recommend testing
for best performance with a mirrored log device first without
a separate cache device:
# [...]

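The preview cuts off at the command prompt; a plausible version of the suggested setup (pool name tank and the two SSD slices c4t1d0s0 / c4t2d0s0 are assumptions, not from the original mail) would be:

  # zpool add tank log mirror c4t1d0s0 c4t2d0s0   # mirrored SLOG, no L2ARC yet
  # zpool status tank                             # confirm the mirrored log vdev

Mirroring the log protects in-flight synchronous writes if one SSD dies; a cache device can always be added later if read latency turns out to be the bottleneck.
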
On Sun, Dec 30, 2012 at 10:40:39AM -0800, Richard Elling wrote:
> On Dec 30, 2012, at 9:02 AM, Eugen Leitl eu...@leitl.org wrote:
>> The system is a MSI E350DM-E33 with 8 GByte PC1333 DDR3
>> memory, no ECC. All the systems have Intel NICs with mtu 9000
>> enabled, including all switches in the path.

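Jumbo frames are configured per link on illumos-based systems; a sketch of checking and setting the MTU, assuming an e1000g Intel interface (the actual link name on this box is not given in the thread, and with some drivers the link has to be unplumbed before the change is accepted):

  # dladm show-linkprop -p mtu e1000g0       # current MTU of the link
  # dladm set-linkprop -p mtu=9000 e1000g0   # enable jumbo frames on the link

Every host and switch in the path has to agree on the MTU, as noted above, or large frames will be dropped.
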
Happy $holidays,
I have a pool of 8x ST31000340AS on an LSI 8-port adapter as
a raidz3 (no compression nor dedup) with reasonable bonnie++
1.03 values, e.g. 145 MByte/s Seq-Write @ 48% CPU and 291 MByte/s
Seq-Read @ 53% CPU. It scrubs with 230+ MByte/s with reasonable
system load. No hybrid [...]

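For anyone wanting to reproduce comparable numbers, a typical bonnie++ run on such a pool looks like the following (the mount point /tank/bench and the 16 GB working-set size are assumptions; the size should be at least twice RAM, here 8 GB, so the ARC cannot cache the whole run):

  # bonnie++ -d /tank/bench -s 16g -u root   # sequential write/read/seek benchmark; -u is required when run as root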