Try exporting and importing the zpool.
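A minimal sketch of that, assuming the pool is named tank (the name is a placeholder; substitute your own). Re-importing makes ZFS rescan the devices, and on Linux importing by /dev/disk/by-id keeps the names stable across future shuffles:

    # export the pool, then re-import it so ZFS rescans the device paths
    zpool export tank
    zpool import -d /dev/disk/by-id tank
    zpool status tank    # all four mirror members should be ONLINE again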
On 9/13/2010 1:26 PM, Brian wrote:
I am running zfs-fuse on an Ubuntu 10.04 box. I have a dual mirrored pool:
mirror sdd sde mirror sdf sdg
Recently the device names shifted on my box, and the devices are now sdc, sdd, sde, and sdf.
The pool is of course very
Maybe 5x(3+1): use one disk from each controller in every raidz group, for about 15 TB usable space.
Rebuild time for a 3+1 raidz should be reasonable. A sketch is below.
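A sketch of that layout under those assumptions (Solaris-style device names as placeholders; five raidz groups of four 1 TB disks, each group taking one disk per controller/multiplier):

    # each raidz group spans all four controllers, so losing a whole
    # controller or multiplier costs only one disk per group
    zpool create tank \
        raidz c0t0d0 c1t0d0 c2t0d0 c3t0d0 \
        raidz c0t1d0 c1t1d0 c2t1d0 c3t1d0 \
        raidz c0t2d0 c1t2d0 c2t2d0 c3t2d0 \
        raidz c0t3d0 c1t3d0 c2t3d0 c3t3d0 \
        raidz c0t4d0 c1t4d0 c2t4d0 c3t4d0

With 3 data disks per group and 5 groups, usable space is 5 x 3 x 1 TB = 15 TB.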
On 9/7/2010 4:40 AM, hatish wrote:
Thanks for all the replies :)
My mindset is split in two now...
Some detail: I'm using four 1-to-5 SATA port multipliers connected to a 4-port
It is important to update menu.lst before the reboot.
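For reference, a sketch of what the ZFS boot entry in rpool/boot/grub/menu.lst typically looks like on s10, assuming the boot environment is named zfsroot (the bootfs dataset path is an assumption; check yours with lustatus and zpool get bootfs rpool):

    title zfsroot
    findroot (pool_rpool,0,a)
    bootfs rpool/ROOT/zfsroot
    kernel$ /platform/i86pc/multiboot -B $ZFS-BOOTFS
    module /platform/i86pc/boot_archive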
regards
On 8/28/2010 5:17 PM, Ian Collins wrote:
On 08/28/10 11:39 PM, LaoTsao 老曹 wrote:
hi all
Trying to learn how the UFS root to ZFS root Live Upgrade works.
I downloaded the VirtualBox image of s10u8; it comes up with a UFS root.
I added a new disk (16 GB),
created zpool rpool,
ran lucreate -n zfsroot -p rpool,
ran luactivate zfsroot;
lustatus does show that zfsroot will be active on the next boot.
init 6
but it
thx
I tried to detach the old UFS disk and boot from the new zfsroot;
it failed.
I reattached the ufsroot disk and found that
rpool/boot/grub/menu.lst
has findroot (rootfs0,0,a) and not findroot (pool_rpool,0,a).
Not sure what the correct findroot entry is here.
Even with this change to findroot, trying to
On 8/27/2010 12:25 AM, Michael Dodwell wrote:
Lao,
I had a look at HAStoragePlus etc., and from what I understand that's to
mirror local storage across 2 nodes so that services can access it 'DRBD
style'.
Not true: HAStoragePlus (HAS+) uses shared storage.
In this case, since ZFS is not a clustered FS,
hi
maybe boot a LiveCD, then export and import the zpool?
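A rough sketch of the LiveCD route (rpool is the pool name from your mail; the -f is an assumption, in case the pool still looks in use):

    # from the LiveCD shell: the import rescans the disks and rewrites
    # the device paths recorded in the pool labels
    zpool import -f rpool
    zpool export rpool
    # reboot into the installed system; zpool status and format
    # should then agree on the device name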
regards
On 8/27/2010 8:27 AM, Rainer Orth wrote:
For quite some time I've been bitten by the fact that on my laptop (currently
running self-built snv_147) zpool status rpool and format disagree about
the device name of the root disk:
IMHO, if you use backup software that supports dedupe in the software, then ZFS is
still a viable solution.
On 8/26/2010 6:13 PM, Sigbjørn Lie wrote:
Hi Daniel,
We were looking into very much the same solution you've tested.
Thanks for your advice. I think we will look for something else. :)
Just be very careful here!!
On 8/26/2010 9:16 PM, Michael Dodwell wrote:
Hey all,
I currently work for a company that has purchased a number of different SAN
solutions (whatever was cheap at the time!) and I want to set up an HA ZFS file
store over Fibre Channel.
Basically I've taken slices from
IMHO, maybe take a look at the ZFS appliance (the 7000 series storage) from Oracle.
It provides a GUI for DTrace-based Analytics and web GUI management.
It supports 1/2 PB now and will support much more in the near future.
http://www.oracle.com/us/products/servers-storage/039224.pdf
It supports local clustering and
Not possible now.
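For context, a sketch of what does and does not work at the moment (pool and device names are placeholders):

    # detaching one side of a mirror works:
    zpool detach tank c1t1d0
    # removing hot spares, cache devices, and (on pool versions new
    # enough) log devices works:
    zpool remove tank c2t0d0
    # but there is no way to remove a top-level data vdev, i.e. no way
    # to shrink the pool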
On 8/25/2010 2:34 PM, Mike DeMarco wrote:
Is it currently possible, or will it be in the near future, to shrink a zpool by removing a disk
IMHO, you want the X25-E for the ZIL and the X25-M for the L2ARC.
On 8/25/2010 2:44 PM, Karl Rossing wrote:
I'm trying to pick between an Intel X25-M and an Intel X25-E for a slog
device.
At some point in the future, TRIM support will become available
dtrace is spelled DTrace.
On 8/25/2010 3:27 PM, F. Wessels wrote:
Although it's a bit Nexenta-oriented, command-wise, it's a nice introduction. I did find one
thing, on page 28, about the ZIL: there's no ZIL device; the ZIL can be written to an optional slog
device. And the last line of the first paragraph,
The X25-M has larger capacity, and the L2ARC is mostly for reads, not much writing;
you also need memory for the ARC.
The L2ARC should be the size of your working dataset.
The ZIL is mostly for writes, so you want the X25-E for better write performance and longer life,
and the ZIL may be about 1/2 of physical memory.
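A minimal sketch of wiring both in, assuming the pool is named tank and the SSDs appear as c2t0d0 (X25-E) and c2t1d0 (X25-M); device names are placeholders:

    # SLC X25-E as a separate intent log (slog)
    zpool add tank log c2t0d0
    # MLC X25-M as an L2ARC cache device
    zpool add tank cache c2t1d0
    zpool status tank    # the devices appear under 'logs' and 'cache'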
regards
On 8/25/2010 9:18 PM,