Andrew Werchowiecki wrote:
Thanks for the info about slices; I may give that a go later on. I'm
not keen on it, though, because I have clear evidence (as in zpools set
up this way, right now, working, without issue) that GPT partitions of
the style shown above work, and I want to see why it doesn't work in my
setup rather than simply ignoring the problem and moving on.
*From:*Fajar A. Nugraha [mailto:w...@fajar.net]
*Sent:* Sunday, 17 March 2013 3:04 PM
*To:* Andrew Werchowiecki
*Cc:* zfs-discuss@opensolaris.org
*Subject:* Re: [zfs-discuss] partitioned cache devices
Andrew Werchowiecki wrote:
Total disk size is 9345 cylinders
Cylinder size is 12544 (512 byte) blocks

                                     Cylinders
  Partition   Status    Type          Start   End   Length    %
  =========   ======    ============  =====   ===   ======   ===
On 2013-03-19 20:38, Cindy Swearingen wrote:
Hi Andrew,
Your original syntax was incorrect.
A p* device is a larger container for the d* device or s* devices.
In the case of a cache device, you need to specify a d* or s* device.
That you can add p* devices to a pool is a bug.
I disagree; at
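Cindy's correction, spelled out as commands, would look something like this (a sketch: the pool and disk names come from the thread, but slice s0 is an illustrative assumption):

  # give ZFS a slice (s*) or whole-disk (d*) name, not a raw fdisk partition (p*)
  $ sudo zpool add aggr0 cache c25t10d1s0
  # or devote the entire SSD to cache and let ZFS label it itself
  $ sudo zpool add aggr0 cache c25t10d1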
On 03/19/13 20:27, Jim Klimov wrote:
I disagree; at least, I've always thought differently:
the d device is the whole disk denomination, with a
unique number for a particular controller link (c+t).
The disk has some partitioning table, MBR or GPT/EFI.
In these tables, partition p0 stands for
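Each of those views has its own node under /dev/dsk, so the distinction is easy to see on a live system (a sketch, reusing the target name from the transcript above):

  $ ls /dev/dsk/c25t10d1*
  # c25t10d1p0       the whole disk, addressed through the fdisk table
  # c25t10d1p1-p4    the four primary fdisk (MBR) partitions
  # c25t10d1s0-s15   slices inside the Solaris or EFI partition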
On 2013-03-19 22:07, Andrew Gabriel wrote:
The GPT partitioning spec requires the disk to be FDISK
partitioned with just one single FDISK partition of type EFI,
so that tools which predate GPT partitioning will still see
such a GPT disk as fully assigned to FDISK partitions, and
therefore less
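That protective layout can be checked with the standard tools (a sketch; p0 addresses the disk's fdisk table directly, and 0xEE is the EFI-protective partition type):

  # dump the fdisk table to stdout; a GPT disk should show a single EFI (0xEE) entry
  $ sudo fdisk -W - /dev/rdsk/c25t10d1p0
  # show the GPT slice table that lives inside that single partition
  $ sudo prtvtoc /dev/rdsk/c25t10d1s0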
To: Andrew Werchowiecki
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] partitioned cache devices
On Sun, Mar 17, 2013 at 1:01 PM, Andrew Werchowiecki
andrew.werchowie...@xpanse.com.au
wrote:
I understand that p0 refers to the whole disk
On Sun, Mar 17, 2013 at 1:01 PM, Andrew Werchowiecki
andrew.werchowie...@xpanse.com.au wrote:
I understand that p0 refers to the whole disk... in the logs I pasted in,
I'm not attempting to mount p0. I'm trying to work out why I'm getting an
error attempting to mount p2, after p1 has
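One way to narrow that down is to take ZFS out of the picture and test whether the kernel can read the partition at all (a sketch using a raw one-sector read):

  # if this also fails with an I/O error, the problem is the partition
  # table itself, not the zpool command
  $ sudo dd if=/dev/rdsk/c25t10d1p2 of=/dev/null bs=512 count=1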
It's a home setup; the performance penalty from splitting the cache devices is
non-existent, and that workaround sounds like a pretty crazy amount of
overhead when I could instead just have a mirrored slog.
I'm less concerned about wasted space, more concerned about the number of SAS
ports I
On Mar 16, 2013, at 7:01 PM, Andrew Werchowiecki
andrew.werchowie...@xpanse.com.au wrote:
It's a home setup; the performance penalty from splitting the cache devices
is non-existent, and that workaround sounds like a pretty crazy amount of
overhead when I could instead just have a
Andrew Werchowiecki wrote:
Hi all,
I'm having some trouble with adding cache drives to a zpool, anyone
got any ideas?
muslimwookie@Pyzee:~$ sudo zpool add aggr0 cache c25t10d1p2
Password:
cannot open '/dev/dsk/c25t10d1p2': I/O error
muslimwookie@Pyzee:~$
I have two SSDs in the system,
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Andrew Werchowiecki
muslimwookie@Pyzee:~$ sudo zpool add aggr0 cache c25t10d1p2
Password:
cannot open '/dev/dsk/c25t10d1p2': I/O error
muslimwookie@Pyzee:~$
I have two SSDs in the
Hi all,
I'm having some trouble with adding cache drives to a zpool, anyone got any
ideas?
muslimwookie@Pyzee:~$ sudo zpool add aggr0 cache c25t10d1p2
Password:
cannot open '/dev/dsk/c25t10d1p2': I/O error
muslimwookie@Pyzee:~$
I have two SSDs in the system; I've created an 8 GB partition on
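For comparison, the slice-based route suggested elsewhere in the thread would go roughly like this (a sketch: format(1M) is interactive, and the 8 GB slice size merely mirrors the figure above):

  # expert mode allows EFI labels; use the partition menu to create
  # an 8 GB s0, then label the disk
  $ sudo format -e c25t10d1
  # hand ZFS the slice rather than a p* device
  $ sudo zpool add aggr0 cache c25t10d1s0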