Hello experts, and preferably those in the know of the
low-level side of ZFS!
I've mentioned this in another thread, but this question
may need some more general attention:
My main data pool shows a discrepancy between traversal
and alloc sizes, as quoted below. In another thread, it
was
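For anyone wanting to reproduce this kind of comparison themselves, the usual approach (a sketch; "tank" is a placeholder pool name, not the poster's) is to have zdb traverse the pool and compare its block accounting against the allocated figure from the space maps:

```shell
# Hypothetical pool name "tank". zdb -b walks every block pointer
# in the pool and prints the traversed total next to the alloc
# figure; it warns about leaked space if the two disagree.
zdb -b tank
```

Note that zdb reads the pool directly, so on a busy pool the numbers can drift slightly between the traversal and the space-map snapshot.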
I'm running OI 151a. I'm trying to create a zone for the first time, and am
getting an error about ZFS. I'm logged in as my own user, then su - to root
before running these commands.
I have a pool called datastore, mounted at /datastore
Per the wiki document
On Tue, 6 Dec 2011 12:52:02 +0800,
darkblue darkblue2...@gmail.com wrote:
I am going to share a directory and its subdirectories through NFS to virtual
hosts, which include Xen (CentOS/NetBSD) and ESXi, but it failed. The
following steps are what I did:
Solaris 11:
zfs create tank/iso
zfs create
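One detail worth noting for a setup like this (a sketch for Solaris 11; the share name and options are assumptions, not taken from the truncated post): on Solaris 11 the NFS share is normally configured through the dataset's share properties, and an NFS client generally cannot cross from a parent dataset's share into a child dataset, so each child needs its own share.

```shell
# Solaris 11 sketch: create the dataset and publish it over NFS
# via the ZFS share properties (share name "iso" is an assumption).
zfs create tank/iso
zfs set share=name=iso,path=/tank/iso,prot=nfs tank/iso
zfs set sharenfs=on tank/iso
# Child datasets (e.g. tank/iso/centos) need their own share set
# the same way, or clients will see an empty directory there.
```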
Hi,
I'm thinking of getting an LSI 9212-4i4e (4 internal and 4 external ports)
to replace a Sun StorageTek RAID card.
The StorageTek RAID card seems to want its drives initialized
and volumes created on it before they are presented to ZFS. I can't find
a way of telling it just to be an
On Tue, Dec 6, 2011 at 3:57 PM, Karl Rossing ka...@barobinson.com wrote:
Hi,
I'm thinking of getting an LSI 9212-4i4e (4 internal and 4 external ports) to
replace a Sun StorageTek RAID card.
Is it possible to disable the raid on an LSI 9212-4i4e and have the drives
read by a simple sas/sata
Karl,
The LSI 9212-4i4e will be good.
You can flash it with the IT firmware, and then there is no RAID at all.
Rocky
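For what it's worth, the crossflash Rocky describes is usually done with LSI's sas2flash utility. A sketch of the typical sequence (the firmware and BIOS image file names are assumptions, taken from what a 9211/9212-class IT download bundle normally contains; check your card's actual package):

```shell
# Confirm the controller is visible before touching the flash.
sas2flash -listall
# Erase the existing (IR/RAID) flash region, then immediately
# write the IT firmware and boot ROM in the same session.
sas2flash -o -e 6
sas2flash -o -f 2118it.bin -b mptsas2.rom
```

Do not power-cycle between the erase and the flash steps; an erased card with no firmware written is awkward to recover.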
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Karl Rossing
Sent: Tuesday, December 06, 2011 12:57 PM
To:
On Tue, Dec 6, 2011 at 12:57 PM, Karl Rossing ka...@barobinson.com wrote:
I'm thinking of getting an LSI 9212-4i4e (4 internal and 4 external ports) to
replace a Sun StorageTek RAID card.
The StorageTek RAID card seems to want its drives initialized and
volumes created on it before they
On 2011-12-05 5:15, Ryan Wehler wrote:
Well, if we want to get into theories about faulty hardware batches and such,
we can, though I think the likelihood is slim (but not impossible, I suppose).
I did the best I could diagnostics-wise, given that I have no spare parts that
have never been part of this SAN. As
zfs get compressratio | grep GCMS
raidpool/GCMSBackup compressratio 1.92x -
tank/GCMSBackup compressratio 1.92x -
zfs list | grep GCMS
raidpool/GCMSBackup 121G 371G 121G /volumes/raidpool/GCMSBackup
I should also mention there are no snapshots on either the source or
destination datasets.
zfs list -t snapshot | grep GCMS
root@calculon:/volumes/tank#
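As a sanity check on those numbers: with a compressratio of 1.92x, the 121G of allocated (compressed) data corresponds to roughly 232G of logical data. A quick way to do that arithmetic (values copied from the listing above):

```shell
# used * compressratio ~= logical (uncompressed) size
awk -v used=121 -v ratio=1.92 'BEGIN { printf "%.0f GiB logical\n", used * ratio }'
# -> 232 GiB logical
```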