On 2013-03-19 22:07, Andrew Gabriel wrote:
The GPT partitioning spec requires the disk to be FDISK-partitioned
with a single protective FDISK partition of type EFI, so that tools
which predate GPT partitioning will still see such a GPT disk as
fully assigned to FDISK partitions, and are therefore less likely [...]
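The protective-MBR layout described above can be sanity-checked from a shell. This is a minimal sketch run against a scratch file rather than a real disk: byte 450 of sector 0 holds the first fdisk partition's type, and GPT's protective entry uses type 0xEE (what fdisk displays as "EFI").

```shell
# Build a fake protective MBR in a scratch file, then read back the
# first partition entry's type byte (offset 446 + 4 = 450).
# On a real disk you would read /dev/rdsk/cXtYdZp0 instead.
img=$(mktemp)
dd if=/dev/zero of="$img" bs=512 count=1 2>/dev/null
printf '\356' | dd of="$img" bs=1 seek=450 conv=notrunc 2>/dev/null  # \356 octal = 0xEE
type_byte=$(dd if="$img" bs=1 skip=450 count=1 2>/dev/null | od -An -tx1 | tr -d ' \n')
echo "$type_byte"   # prints: ee
rm -f "$img"
```

A disk whose first (and only) fdisk partition reports any type other than EE here does not carry the protective MBR the GPT spec calls for.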
On 03/19/13 20:27, Jim Klimov wrote:
I disagree; at least, I've always thought differently:
the "d" device is the whole-disk denomination, with a
unique number for a particular controller link ("c+t").
The disk has some partitioning table, MBR or GPT/EFI.
In these tables, partition "p0" stands for [...]
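As a rough illustration of the c/t/d naming Jim describes, the parts of a name like c25t10d1p2 can be pulled apart with plain shell parameter expansion. The device name is the poster's example; nothing here touches a real device.

```shell
# Split a Solaris ctd-style device name into its components:
# c25t10d1p2 = controller 25, target 10, disk 1, fdisk partition 2.
# An s-suffixed name (c25t10d1s0) would instead name a slice.
dev=c25t10d1p2
rest=${dev#c};  ctrl=${rest%%t*}     # strip "c", take digits up to "t"
rest=${rest#*t}; targ=${rest%%d*}    # take digits up to "d"
rest=${rest#*d}; disk=${rest%%[ps]*} # take digits up to "p" or "s"
part=${dev##*[ps]}                   # digits after the final "p" or "s"
echo "controller=$ctrl target=$targ disk=$disk part=$part"
# prints: controller=25 target=10 disk=1 part=2
```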
On 2013-03-19 20:38, Cindy Swearingen wrote:
Hi Andrew,
Your original syntax was incorrect.
A p* device is a larger container for the d* device or s* devices.
In the case of a cache device, you need to specify a d* or s* device.
That you can add p* devices to a pool is a bug.
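Cindy's rule of thumb — cache devices should be named by slice (s*) or whole disk (d*), never by fdisk partition (p*) — can be encoded in a throwaway shell helper. The device names are the poster's examples and the helper is purely illustrative, not part of any ZFS tooling.

```shell
# Illustrative check: flag p* device names before handing them to
# "zpool add <pool> cache <dev>"; accept s* slices and whole-disk d* names.
check_cache_dev() {
  case $1 in
    *p[0-9]*)          echo "reject: $1 is an fdisk partition (p*) device" ;;
    *s[0-9]*|*d[0-9])  echo "ok: $1" ;;
    *)                 echo "unknown: $1" ;;
  esac
}
check_cache_dev c25t10d1p2   # reject: fdisk partition
check_cache_dev c25t10d1s0   # ok: slice 0
check_cache_dev c25t10d1     # ok: whole disk
```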
Andrew Werchowiecki wrote:
Total disk size is 9345 cylinders
Cylinder size is 12544 (512 byte) blocks

                                        Cylinders
Partition   Status   Type           Start   End   Length   %
=========   ======   ============   =====   ===   ======   ===
[...] of the style shown above work and I want to see why it doesn't
work in my setup rather than simply ignoring it and moving on.
*From:* Fajar A. Nugraha [mailto:w...@fajar.net]
*Sent:* Sunday, 17 March 2013 3:04 PM
*To:* Andrew Werchowiecki
*Cc:* zfs-discuss@opensolaris.org
*Subject:* Re: [zfs-discuss] partioned cache devices
Andrew Werchowiecki wrote:
Thanks for the info about slices, I may give that a go later on. I'm
not keen on that because I have clear evidence (as in zpools set up
this way, right now, working, without issue) that GPT partitions of
the style shown above work, and I want to see why it doesn't work in
my setup rather than simply ignoring it and moving on.
Subject: Re: [zfs-discuss] partioned cache devices
On Sun, Mar 17, 2013 at 1:01 PM, Andrew Werchowiecki <
andrew.werchowie...@xpanse.com.au> wrote:
> I understand that p0 refers to the whole disk... in the logs I pasted in
> I'm not attempting to mount p0. I'm trying to work out why I'm getting an
> error attempting to mount p2, after p1 has succe[...]
On Mar 16, 2013, at 7:01 PM, Andrew Werchowiecki wrote: [...]
It's a home setup; the performance penalty from splitting the cache devices is
non-existent, and that workaround sounds like a pretty crazy amount of
overhead when I could instead just have a mirrored slog.
I'm less concerned about wasted space, more concerned about the number of SAS ports
I [...]
> From: zfs-discuss-boun...@opensolaris.org On Behalf Of Andrew Werchowiecki
> [...]
Andrew Werchowiecki wrote:
Hi all,
I'm having some trouble with adding cache drives to a zpool, anyone
got any ideas?
muslimwookie@Pyzee:~$ sudo zpool add aggr0 cache c25t10d1p2
Password:
cannot open '/dev/dsk/c25t10d1p2': I/O error
muslimwookie@Pyzee:~$
I have two SSDs in the system, I'[...]
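A first step in chasing an "I/O error" like the one above is simply checking whether the device node can be read at all. This sketch simulates that check against a scratch file standing in for /dev/rdsk/c25t10d1p2, since the real device is not available here; on the actual system you would point `dev` at the raw device node.

```shell
# Can the first sector of the device be read? A failure here is what
# zpool surfaces as "cannot open '...': I/O error".
dev=$(mktemp)                              # stand-in for /dev/rdsk/c25t10d1p2
dd if=/dev/zero of="$dev" bs=512 count=1 2>/dev/null
if dd if="$dev" of=/dev/null bs=512 count=1 2>/dev/null; then
  state=readable
else
  state="I/O error"
fi
echo "$state"   # prints: readable
rm -f "$dev"
```

If the real device fails this read, the fdisk table most likely does not actually define the p2 partition being asked for, which is worth confirming before blaming the SSD.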