On 30/03/2010 21:53, Miles Nordin wrote:
"et" == Erik Trimble writes:
et> Add this zvol as the cache device (L2arc) for your other pool
doesn't bug 6915521 mean this arrangement puts you at risk of deadlock?
Yes, that risk is there.
I would highly recommend against using a ZVOL as a cache device for another pool.
Hi all,
yes it works with the partitions.
I think that I made a typo during the initial testing of adding a partition as
cache - probably swapped the 0 for an o.
Tested with a b134 gui and text installer on the x86 platform.
So here it goes:
Install opensolaris into a partition and leave some space free.
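A hedged sketch of the step this implies, assuming the free space was turned into a slice (here s1 on c0t0d0) and the data pool is named tank - all names are illustrative, not from the original post:

```shell
# Once opensolaris is installed into a partition with space left over,
# create a slice in the free space with format(1M), then add it as an
# L2ARC cache device to the data pool (device and pool names assumed):
zpool add tank cache c0t0d0s1

# Check that the cache device appears under the "cache" heading:
zpool status tank
```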
___
zfs-discuss mailing list
F. Wessels wrote:
Hi,
as Richard Elling wrote earlier:
"For more background, low-cost SSDs intended for the boot market are
perfect candidates. Take a X-25V @ 40GB and use 15-20 GB for root
and the rest for an L2ARC. For small form factor machines or machines
with max capacity of 8GB of RAM (a typical home system)
On Mar 29, 2010, at 1:10 PM, F. Wessels wrote:
> you can't use anything but a block device for the L2ARC device.
sure you can...
http://mail.opensolaris.org/pipermail/zfs-discuss/2010-March/039228.html
it even lives through a reboot (rpool is mounted before other pools)
zpool create -f test c9t3d0s0 c9t4d0s0
zfs create -V 3G rpool/cache
zp
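The last command is cut off; a hedged reconstruction of the sequence, with the final zvol path assumed (and note the deadlock risk of bug 6915521 raised elsewhere in this thread):

```shell
# Create a test pool on two slices (as quoted above):
zpool create -f test c9t3d0s0 c9t4d0s0

# Carve a 3 GB zvol out of the root pool:
zfs create -V 3G rpool/cache

# Presumed continuation: attach the zvol as an L2ARC cache device.
zpool add test cache /dev/zvol/dsk/rpool/cache
```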
Just clarifying Darren's comment - we got bitten by this pretty badly, so I
figure it's worth saying again here. ZFS will *allow* you to use a ZVOL of
one pool as a vdev in another pool, but it results in race conditions and an
unstable system (at least on Solaris 10 update 8).
We tried to use a
http://fixunix.com/solaris-rss/570361-make-most-your-ssd-zfs.html
I think this is what you are looking for. GParted FTW.
Cheers,
_GP_
Thank you Darren.
So no zvols as L2ARC cache devices. That leaves partitions and slices.
When I tried to add a second partition as a cache device (the first partition
contained the slices with the root pool), zpool refused: it reported that the
device CxTyDzP2 (note the P2) wasn't supported. Perhaps I did something wrong.
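For context, a hedged illustration of the naming distinction hit here: cxtydzpN device nodes are fdisk partitions, while cxtydzsN are Solaris slices, and the reports in this thread suggest zpool accepts the slice form (device and pool names below are made up):

```shell
# Reported as refused above: the fdisk partition device node
zpool add tank cache c8t1d0p2

# The form that reportedly works: a slice inside the Solaris partition
zpool add tank cache c8t1d0s1
```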
Thank you Erik for the reply.
I misunderstood Dan's suggestion about the zvol in the first place. Now you
make the same suggestion as well. Doesn't ZFS prefer raw devices? Following
this route, the zvol used as cache device for tank makes use of the ARC of
rpool, which doesn't seem right. Or is it?
On 30/03/2010 10:13, Erik Trimble wrote:
> Add this zvol as the cache device (L2arc) for your other pool
> # zpool create tank mirror c1t0d0 c1t1d0s0 cache rpool/zvolname
That won't work: L2ARC devices cannot be a ZVOL of another pool, and they
can't be a file either. An L2ARC device must be a physical device.
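A hedged illustration of Darren's point, with assumed device names: the supported forms use a physical device or a slice of one, not a zvol or a file:

```shell
# Not supported, per Darren's comment: a zvol of another pool as L2ARC
zpool create tank mirror c1t0d0 c1t1d0 cache rpool/zvolname

# Supported: a whole physical device as L2ARC...
zpool create tank mirror c1t0d0 c1t1d0 cache c1t2d0

# ...or a slice of a shared SSD:
zpool add tank cache c1t2d0s1
```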
Thanks for the reply.
I didn't get very much further.
Yes, ZFS loves raw devices. If I had two devices I wouldn't be in this mess:
I would simply install opensolaris on the first disk and add the second SSD to
the data pool with a "zpool add mpool cache cxtydz". Notice that no slices or
partitions are involved.
On Tue, Mar 30, 2010 at 03:13:45PM +1100, Daniel Carosone wrote:
> You can:
> - install to a partition that's the size you want rpool
> - expand the partition to the full disk
> - expand the s2 slice to the full disk
> - leave the s0 slice for rpool alone
> - make another slice for l2arc in the remaining space
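Dan's steps could look roughly like this, assuming the SSD is c0t0d0 and the data pool is named tank (both assumptions; fdisk and format are interactive, so the comments only indicate intent):

```shell
# 1. Install to a partition sized for rpool, then grow the Solaris
#    fdisk partition to the whole disk (interactive):
fdisk /dev/rdsk/c0t0d0p0

# 2. In format(1M), grow slice 2 (the backup slice) to the full disk,
#    leave s0 (rpool) alone, and create a new slice (e.g. s1) for l2arc:
format c0t0d0

# 3. Add the new slice as an L2ARC cache device for the data pool:
zpool add tank cache c0t0d0s1
```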
On Mon, Mar 29, 2010 at 01:10:22PM -0700, F. Wessels wrote:
> The caiman installer allows you to control the size of the partition
> on the boot disk but it doesn't allow you (at least I couldn't
> figure out how) to control the size of the slices. So you end up with
> slice0 filling the entire partition.