Hello,
I am trying to create a ZFS file system following the Creating a Basic
ZFS File System section of Sun's documentation.
The problem is that the device reports a ufs filesystem on the partition
I am trying to work with; it is in fact empty and does not contain any
files.
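In that situation zpool create normally refuses to touch the slice
unless forced. A minimal sketch of what this looks like (pool and
device names hypothetical):

zpool create mypool c1t0d0s0
# fails with roughly:
#   invalid vdev specification
#   use '-f' to override the following errors:
#   /dev/dsk/c1t0d0s0 contains a ufs filesystem.

# if you are certain the slice holds no data, force the create:
zpool create -f mypool c1t0d0s0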
On 06/12/2006 at 05:05:55 +0100, Flemming Danielsen wrote:
Hi
I have 2 questions on use of ZFS.
How do I ensure I have site redundancy using zfs pools? As I see it, ZFS
only ensures mirroring between 2 disks. I have 2 HDS arrays, one on each
site, and I want to be able to lose one of them and my
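One way to get site redundancy is to build each mirror pair from one
LUN per array, so either array can be lost whole. A minimal sketch,
assuming hypothetical device names (c2* on the site-A HDS, c3* on the
site-B HDS):

zpool create sitepool mirror c2t0d0 c3t0d0 mirror c2t1d0 c3t1d0
zpool status sitepool   # each mirror has one half on each site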
Ian,
The first error is correct in that zpool create will not, unless
forced, create a file system if it knows that another filesystem
resides in the target vdev.
The second error was caused by your removal of the slice.
What I find disconcerting is that the zpool was created anyway.
Can you provide the
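To see what zpool create is objecting to, you can check the slice
first; a sketch with a hypothetical device name:

fstyp /dev/rdsk/c1t0d0s0
# prints "ufs" if an old filesystem signature is still on the slice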
We have two aging Netapp filers and can't afford to buy new Netapp gear,
so we've been looking with a lot of interest at building NFS fileservers
running ZFS as a possible future approach. Two issues have come up in the
discussion
- Adding new disks to a RAID-Z pool (Netapps handle adding
Still ... I don't think a core file is appropriate. Sounds like a bug
report is in order if one doesn't already exist. (zpool dumps core when
missing devices are used, perhaps?)
Wee Yeh Tan wrote:
Ian,
The first error is correct in that zpool create will not, unless
forced, create a file system if
Thanks so much. Anyway, resilvering worked its way through and I got everything resolved:
zpool status -v
  pool: mypool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        mypool      ONLINE       0     0     0
          mirror    ONLINE       0     0     0
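After a resilver completes it is worth running a scrub so every block
of the mirror is verified end to end, for example:

zpool scrub mypool
zpool status -v mypool   # shows scrub progress and any errors found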
You can add more disks to a pool that is in raid-z; you just can't
add disks to an existing raid-z vdev. For example:

cd /usr/tmp
# create ten sparse 100 MB files to use as test devices
mkfile -n 100m 1 2 3 4 5 6 7 8 9 10
# build a raidz pool from the first three
zpool create t raidz /usr/tmp/1 /usr/tmp/2 /usr/tmp/3
zpool status t
zfs list t
# add a second (raidz2) vdev; -f overrides the mismatched
# replication-level warning
zpool add -f t raidz2 /usr/tmp/4 /usr/tmp/5 /usr/tmp/6
Hi Luke,
We've been using MPXIO (STMS) with ZFS quite solidly for the past few
months. Failover is instantaneous when a write operation occurs
after a path is pulled. Our environment is similar to yours: dual FC
ports on the host, and 4 FC ports on the storage (2 per controller).
Depending on
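For anyone setting this up, MPxIO is enabled host-wide with stmsboot;
a minimal sketch:

stmsboot -e   # enable MPxIO on supported HBAs (requires a reboot)
stmsboot -L   # after reboot, map old device names to the multipathed ones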
One of our file servers internal to Sun reproduces this running nv53;
here is the dtrace output:
unix`mutex_vector_enter+0x120
zfs`metaslab_group_alloc+0x1a0
zfs`metaslab_alloc_dva+0x10c
zfs`metaslab_alloc+0x3c
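A stack like this can be collected with a DTrace profiling one-liner
along these lines (a sketch, not necessarily the exact script used
here):

dtrace -n 'profile-997 /arg0/ { @[stack()] = count(); }'
# samples on-CPU kernel stacks ~997 times/sec; Ctrl-C prints the
# hottest stacks, with mutex_vector_enter indicating lock contention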
Hi Doug,
The configuration is a T2000 connected to a StorageTek FLX210 array
via Qlogic QLA2342 HBAs and Brocade 3850 switches. We currently RAID-Z
the LUNs across 3 array volume groups. For performance reasons we're
in the process of changing to striped zpools across RAID-1 volume
groups. The
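The striped layout described amounts to handing ZFS one LUN per
hardware RAID-1 volume group and letting it stripe across them; a
sketch with hypothetical LUN names:

zpool create dbpool c4t0d0 c4t1d0 c4t2d0
# each LUN is a RAID-1 volume on the array; ZFS dynamically stripes
# writes across all three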
Jim Davis wrote:
eric kustarz wrote:
What about adding a whole new RAID-Z vdev and dynamically striping
across the RAID-Zs? Your capacity and performance will go up with each
RAID-Z vdev you add.
Thanks, that's an interesting suggestion.
Have you tried using the automounter as suggested
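Concretely, the suggestion above looks something like this (pool and
device names hypothetical):

zpool add tank raidz c5t0d0 c5t1d0 c5t2d0
# the pool now dynamically stripes across the old and new raidz vdevs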
Hi Luke,
That's really strange. We did the exact same thing moving between two
hosts (export/import) and it took maybe 10 secs. How big is your
zpool?
Best Regards,
Jason
On 12/6/06, Luke Schwab [EMAIL PROTECTED] wrote:
Doug,
I should have posted the reason behind this posting.
I have 2
I, too, experienced a long delay while importing a zpool on a second
machine. I do not have any file systems in the pool; just the Solaris 10
operating system, an Emulex 10002DC HBA, and a 4884 LSI array (dual
attached). When STMS (mpxio) is enabled I see
I simply created a zpool with an array disk like
hosta# zpool create testpool c6tnumd0   // runs within a second
hosta# zpool export testpool            // runs within a second
hostb# zpool import testpool            // takes 5-7 minutes
If STMS (mpxio) is disabled, it takes 45-60 seconds. I tested this with
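To see where the time goes, it can help to time the device scan
separately from the import itself, for example:

time zpool import            # scans all devices, lists importable pools
time zpool import testpool   # performs the actual import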
On Wed, Dec 06, 2006 at 12:35:58PM -0800, Jim Hranicky wrote:
If those are the original path IDs, and you didn't
move the disks on the bus, why is the is_spare flag
Well, I'm not sure, but these drives were set as spares in another pool
I deleted -- should I have done something to the
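If stale spare labels are the culprit, one approach is to detach the
spare from the pool before destroying it; a sketch with hypothetical
names:

zpool remove oldpool c1t5d0   # release the disk from spare duty first
zpool destroy oldpool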
For background on what this is, see:
http://www.opensolaris.org/jive/message.jspa?messageID=24416#24416
http://www.opensolaris.org/jive/message.jspa?messageID=25200#25200
=====================================
zfs-discuss 11/16 - 11/30
=====================================
Size of all threads during
Hello,
Thanks.
Here is the needed info:
zpool status
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          c1d0s6    ONLINE       0     0     0

errors: No known data errors
df -h