Thanks.. it was what I had to do.
Bruno
On 29-3-2010 19:12, Cyril Plisko wrote:
On Mon, Mar 29, 2010 at 4:57 PM, Bruno Sousa bso...@epinfante.com wrote:
pkg uninstall aac
Creating Plan
pkg: Cannot remove 'pkg://opensolaris.org/driver/storage/a...@0.5.11,5.11-0.134:20100302T021758Z' due
I'm running Solaris 10 SPARC with rather updated patches (as of ~30 days ago?) on a Netra X1.
I had set up ZFS root with two 40GB IDE hard disks. All was fine until my secondary master died. No read/write errors; just dead.
No matter what I try (booting with the dead drive in place,
Thanks for the reply.
I didn't get very much further.
Yes, ZFS loves raw devices. If I had two devices I wouldn't be in this mess.
I would simply install OpenSolaris on the first disk and add the second SSD to the data pool with a zpool add mpool cache cxtydz. Notice that no slices or
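Spelled out, that whole-device route looks roughly like this (a sketch only: the device names below are hypothetical, and the commands need a live (Open)Solaris box):

```shell
# Hypothetical device names; requires a live (Open)Solaris system.
zpool create mpool c8t1d0      # data pool on the first disk
zpool add mpool cache c8t2d0   # whole second SSD becomes L2ARC; no slices needed
zpool status mpool             # the SSD now shows up under a "cache" heading
```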
On 30/03/2010 10:13, Erik Trimble wrote:
Add this zvol as the cache device (L2ARC) for your other pool:
# zpool create tank mirror c1t0d0 c1t1d0s0 cache rpool/zvolname
That won't work. L2ARC devices cannot be a ZVOL of another pool, and they can't be a file either. An L2ARC device must be a
Thank you Erik for the reply.
I misunderstood Dan's suggestion about the zvol in the first place. Now you make the same suggestion. Doesn't ZFS prefer raw devices? When following this route, the zvol used as cache device for tank makes use of the ARC of rpool, which doesn't seem right. Or is
Hi, I did some tests on a Sun Fire X4540 with an external J4500 array (connected via two HBA ports). I.e. there are 96 disks in total, configured as seven 12-disk raidz2 vdevs (plus system, spares, unused disks), providing a ~63 TB pool with fletcher4 checksums.
The system was recently equipped
Thank you Darren.
So no zvols as L2ARC cache device. That leaves partitions and slices.
When I tried to add a second partition (the first contained slices with the root pool) as cache device, zpool refused; it reported that the device CxTyDzP2 (note P2) wasn't supported. Perhaps I did
On Mon, Mar 29, 2010 at 5:39 PM, Nicolas Williams
nicolas.willi...@sun.com wrote:
One really good use for zfs diff would be: as a way to index zfs send
backups by contents.
Or to generate the list of files for incremental backups via NetBackup
or similar. This is especially important
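zfs diff didn't exist yet when this was written, so the following is only a sketch of the idea: assuming the eventual command prints a change-type column ('+', '-', 'M', 'R') followed by a path, an include list for NetBackup or similar could be generated along these lines (dataset and snapshot names hypothetical):

```shell
# Keep everything except deletions; feed the paths to the backup tool.
zfs diff tank/fs@monday tank/fs@tuesday \
  | awk '$1 != "-" {print $2}' > /tmp/incremental-file-list.txt
```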
Just clarifying Darren's comment - we got bitten by this pretty badly, so I figure it's worth saying again here. ZFS will *allow* you to use a ZVOL of one pool as a vdev in another pool, but it results in race conditions and an unstable system (at least on Solaris 10 update 8).
We tried to use
you can't use anything but a block device for the L2ARC device.
sure you can...
http://mail.opensolaris.org/pipermail/zfs-discuss/2010-March/039228.html
it even lives through a reboot (rpool is mounted before other pools)
zpool create -f test c9t3d0s0 c9t4d0s0
zfs create -V 3G rpool/cache
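The quoted recipe is cut off before the step that actually attaches the zvol as a cache device; presumably it continued along these lines (the /dev/zvol path is my assumption, not part of the original post):

```shell
zpool create -f test c9t3d0s0 c9t4d0s0
zfs create -V 3G rpool/cache
# Presumed final step: hand the zvol's block device to the other pool as L2ARC.
zpool add test cache /dev/zvol/dsk/rpool/cache
zpool status test
```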
OK, I see what the problem is: the /etc/zfs/zpool.cache file.
When the pool was split, the zpool.cache file was also split - and the split
happens prior to the config file being updated. So, after booting off the
split side of the mirror, zfs attempts to mount rpool based on the information
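One way out of a stale zpool.cache, sketched from the description above rather than from a tested procedure (the pool name is a placeholder):

```shell
# Booted from the split half: sideline the stale cache file, then force
# ZFS to rediscover the pool by scanning devices instead of trusting it.
mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.stale
zpool import -f tank       # re-import by device scan; this rewrites zpool.cache
bootadm update-archive     # refresh the boot archive so the new cache is used at boot
```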
Thanks for the details Edward, that is good to know.
Another quick question.
In my test setup I created the pool using snv_134 because I wanted to see how
things would run as the next release is supposed to be based off of snv_134
(from my understanding). However, I recently read that the
On Mar 29, 2010, at 1:10 PM, F. Wessels wrote:
Hi,
as Richard Elling wrote earlier:
For more background, low-cost SSDs intended for the boot market are
perfect candidates. Take a X-25V @ 40GB and use 15-20 GB for root
and the rest for an L2ARC. For small form factor machines or machines
with max capacity of 8GB of RAM (a
On 3/30/2010 2:44 PM, Adam Leventhal wrote:
Hey Karsten,
Very interesting data. Your test is inherently single-threaded so I'm not
surprised that the benefits aren't more impressive -- the flash modules on
the F20 card are optimized more for concurrent IOPS than single-threaded
latency.
Thanks - have run it and it returns pretty quickly. Given the output (attached), what action can I take?
Thanks
James
--
This message posted from opensolaris.org

Dirty time logs:
tank
outage [300718,301073] length 356
outage [301138,301139] length 2
outage
Hello,
wanted to know if there are any updates on this topic ?
Regards,
Robert
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
I have a pool (on an X4540 running S10U8) in which a disk failed, and the
hot spare kicked in. That's perfect. I'm happy.
Then a second disk fails.
Now, I've replaced the first failed disk, and it's resilvered and I have my
hot spare back.
But: why hasn't it used the spare to cover the other
Hi all,
yes, it works with the partitions.
I think that I made a typo during the initial testing of adding a partition as cache, probably swapped the 0 for an o.
Tested with the b134 GUI and text installer on the x86 platform.
So here it goes:
Install opensolaris into a partition and leave some
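The recipe above is cut off; judging from the rest of the thread, the working partition variant presumably ends along these lines (pool and device names hypothetical; note the p2 suffix naming the second fdisk partition, not a slice):

```shell
# rpool lives in slices inside the first fdisk partition (p1);
# the second fdisk partition is handed to a pool as L2ARC:
zpool add tank cache c8t0d0p2
zpool status tank
```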
I'm running Windows 7 64-bit and VMware Player 3 with Solaris 10 64-bit as a guest. I have added an additional hard drive to the virtual Solaris 10 as a physical drive. Solaris 10 can see and use the already created zpool without problem. I could also create an additional zpool on the other mounted raw device. I
If you are going to trick the system into thinking a volatile cache is
nonvolatile, you
might as well disable the ZIL -- the data corruption potential is the same.
I'm sorry? I believe the F20 has a supercap or the like? The advice on:
Our backup system has a couple of datasets used for iscsi
that have somehow lost their baseline snapshots with the
live system. In fact zfs list -t snapshot doesn't show
any snapshots at all for them. We rotate backup and live
every now and then, so these datasets have been shared
at some time.
But the speedup of disabling the ZIL altogether is appealing (and would probably be acceptable in this environment).
Just to make sure you know ... if you disable the ZIL altogether, and you have a power interruption, failed cpu, or kernel halt, then you're likely to have a corrupt unusable zpool, or at least data corruption. If that is indeed acceptable to you, go nuts. ;-)
standard ZIL: 7m40s (ZFS default)
1x SSD ZIL: 4m07s (Flash Accelerator F20)
2x SSD ZIL: 2m42s (Flash Accelerator F20)
2x SSD mirrored ZIL: 3m59s (Flash Accelerator F20)
3x SSD ZIL: 2m47s (Flash Accelerator F20)
4x SSD
what size is the gz file if you do an incremental send to a file?
something like:
zfs send -i sn...@vol sn...@vol | gzip > /someplace/somefile.gz
The problem that I have now is that each created snapshot always shows a size of zero... zfs is just not storing the changes that I have made to the file system before making a snapshot.
r...@sl-node01:~# zfs list
NAME USED AVAIL REFER MOUNTPOINT
mypool01
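For what it's worth, a snapshot's USED column starting at zero is expected behaviour: it only grows once live data diverges from what the snapshot references. A sketch (pool/filesystem names hypothetical):

```shell
zfs snapshot mypool01/fs@before       # USED of @before starts near 0
dd if=/dev/urandom of=/mypool01/fs/blob bs=1024k count=10
rm /mypool01/fs/blob                  # these blocks are now held only by @before
zfs list -t snapshot -o name,used     # @before's USED grows to roughly 10M
```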
Again, we can't get a straight answer on this one..
(or at least not 1 straight answer...)
Since the ZIL logs are committed atomically they are either committed
in FULL, or NOT at all (by way of rollback of incomplete ZIL applies at
zpool mount time / or transaction rollbacks if things
On Tue, 30 Mar 2010, Edward Ned Harvey wrote:
If this is true ... Suppose you shutdown a system, remove the ZIL device,
and power back on again. What will happen? I'm informed that with current
versions of solaris, you simply can't remove a zil device once it's added to
a pool. (That's
Anyway, my question is, [...]
as expected I can't import it because the pool was created
with a newer version of ZFS. What options are there to import?
I'm quite sure there is no option to import or receive or downgrade a zfs
filesystem from a later version. I'm pretty sure your only option
If the ZIL device goes away then zfs might refuse to use the pool
without user affirmation (due to potential loss of uncommitted
transactions), but if the dedicated ZIL device is gone, zfs will use
disks in the main pool for the ZIL.
This has been clarified before on the list by top zfs
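For the record, later builds grew an explicit escape hatch for this: if I recall correctly, from around snv_125 a pool whose separate log device has vanished can be imported with -m (pool name hypothetical):

```shell
zpool import -m tank   # import despite the missing log device; any ZIL records
                       # that only lived on the lost device are discarded
```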
if you disable the ZIL altogether, and you have a power interruption, failed
cpu,
or kernel halt, then you're likely to have a corrupt unusable zpool
the pool will always be fine, no matter what.
or at least data corruption.
yea, its a good bet that data sent to your file or zvol will
So you think it would be ok to shutdown, physically remove the log device, and then power back on again, and force import the pool? So although there may be no live way to remove a log device from a pool, it might still be possible if you offline the pool to ensure writes are all completed
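As it happens, live removal did eventually arrive: from snv_125 onward (PSARC 2009/479, if I remember the case right) a dedicated log device can be removed from an imported pool, no export or shutdown dance required (device name hypothetical):

```shell
zpool remove tank c3t0d0   # detaches the dedicated log device from the live pool
```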
I believe that the