Hi Andrew,

Your original syntax was incorrect.

A p* device is the larger fdisk-level container that holds the d* device or the s* slices.
For a cache device, you need to specify a d* or s* device.
The fact that zpool lets you add p* devices to a pool at all is a bug.
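
Roughly, the device naming for a single disk looks like this (an illustrative sketch, not taken from your listing):

c25t9d1      the whole disk, as given to zpool for whole-disk use
c25t9d1s0    slice 0 within the disk's VTOC/EFI label
c25t9d1p0    the whole disk at the fdisk level
c25t9d1p1    the first fdisk partition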

Adding different slices from c25t10d1 as both log and cache devices
would require the s* identifiers, but you've already added the entire
c25t10d1 as the log device. A better configuration would be to use
c25t10d1 for the log and c25t9d1 for the cache, or to provide some
spares for this large pool.
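
For example, removing the existing log device would look something like
this (assuming it was added as the whole c25t10d1):

# zpool remove aggr0 c25t10d1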

After you remove the log device, re-add the devices like this:

# zpool add aggr0 log c25t10d1
# zpool add aggr0 cache c25t9d1
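
You can confirm the new layout with:

# zpool status aggr0

which should show the log and cache devices in their own sections.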

You might review the ZFS recommended practices section, here:

http://docs.oracle.com/cd/E26502_01/html/E29007/zfspools-4.html#storage-2

See example 3-4 for adding a cache device, here:

http://docs.oracle.com/cd/E26502_01/html/E29007/gayrd.html#gazgw

Always have good backups.

Thanks, Cindy



On 03/18/13 23:23, Andrew Werchowiecki wrote:
I did something like the following:

format -e /dev/rdsk/c5t0d0p0
fdisk
1 (create)
F (EFI)
6 (exit)
partition
label
1
y
0
usr
wm
64
4194367e
1
usr
wm
4194368
117214990
label
1
y

Total disk size is 9345 cylinders
Cylinder size is 12544 (512 byte) blocks

                                      Cylinders
     Partition   Status    Type          Start   End   Length    %
     =========   ======    ============  =====   ===   ======   ===
         1                 EFI               0  9345    9346    100

partition> print

Current partition table (original):
Total disk sectors available: 117214957 + 16384 (reserved sectors)

Part      Tag    Flag     First Sector        Size        Last Sector
  0        usr    wm                64       2.00GB          4194367
  1        usr    wm           4194368      53.89GB          117214990
  2 unassigned    wm                 0           0                  0
  3 unassigned    wm                 0           0                  0
  4 unassigned    wm                 0           0                  0
  5 unassigned    wm                 0           0                  0
  6 unassigned    wm                 0           0                  0
  8   reserved    wm         117214991       8.00MB          117231374

This isn't the output from when I did it, but these are exactly the
steps I followed.
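
The step that then fails (p1 works, p2 errors out) is adding the
partition as a cache device, roughly like this, with the pool name
changed here:

# zpool add mypool cache c5t0d0p2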

Thanks for the info about slices; I may give that a go later on. I'm not
keen on it, though, because I have clear evidence (as in zpools set up
this way, right now, working without issue) that GPT partitions of the
style shown above work, and I want to see why it doesn't work in my
setup rather than simply ignoring the problem and moving on.

From: Fajar A. Nugraha [mailto:w...@fajar.net]
Sent: Sunday, 17 March 2013 3:04 PM
To: Andrew Werchowiecki
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] partioned cache devices

On Sun, Mar 17, 2013 at 1:01 PM, Andrew Werchowiecki
<andrew.werchowie...@xpanse.com.au> wrote:

    I understand that p0 refers to the whole disk... in the logs I
    pasted in I'm not attempting to mount p0. I'm trying to work out why
    I'm getting an error attempting to mount p2, after p1 has
    successfully mounted. Further, this has been done before on other
    systems in the same hardware configuration in the exact same
    fashion, and I've gone over the steps trying to make sure I haven't
    missed something but can't see a fault.

How did you create the partitions? Are they marked as Solaris partitions,
or something else (e.g. fdisk on Linux uses type "83" by default)?
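
For example, on Linux you can check with something like this (the device
name is just an example); a Solaris partition normally shows up as type
bf rather than 83:

# fdisk -l /dev/sdb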

    I'm not keen on using Solaris slices because I don't have an
    understanding of what that does to the pool's OS interoperability.

Linux can read Solaris slices and import Solaris-made pools just fine, as
long as you're using a compatible zpool version (e.g. zpool version 28).
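
You can check what version a pool uses with something like this (pool
name is just an example):

# zpool get version tank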

--

Fajar



_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
