The slides from my ZFS presentation at OSCON (as well as some
additional information) are available at
http://www.meangrape.com/2007/08/oscon-zfs/
Jay Edwards
[EMAIL PROTECTED]
http://www.meangrape.com
___
zfs-discuss mailing list
Tim Thomas wrote:
Hi,
if I create a storage pool with multiple RAID-Z stripes in it, does ZFS
dynamically stripe data across all the RAID-Z stripes in the pool
automagically? If I relate this back to my storage array experience, this
would be "plaiding", which is/was creating a RAID-0 logical volume across
ZFS will stripe across all root-level vdevs, regardless of the type of vdev
(mirror, raidz, single whole disk, disk slices, whatever).
e.g.

tank01
  mirror
    c1t0d0
    c1t1d0
  raidz
    c1t2d0
    c1t3d0
    c1t4d0
  mirror
    c0t0d0s7
    c0t1d0s7

Should give you 3 stripes.
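A minimal sketch of creating that layout in one command (the device names are
just placeholders for whatever disks you actually have):

```shell
# Create a pool with three top-level vdevs: two mirrors and one raidz.
# ZFS stripes writes dynamically across all three; no extra step needed.
zpool create tank01 \
    mirror c1t0d0 c1t1d0 \
    raidz  c1t2d0 c1t3d0 c1t4d0 \
    mirror c0t0d0s7 c0t1d0s7

# Verify the layout:
zpool status tank01
```

Adding another vdev later with `zpool add` extends the stripe automatically.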
--
Sean
I think I have run into this bug, 6560174, with a firewire drive.
And 6560174 might be a duplicate of 6445725
I am playing with ZFS on a jetStor 516F with 9 1TB E-SATA drives. These are our
first real tests with ZFS, and I am working on how to replace our HA-NFS UFS file
systems with ZFS counterparts. One of the things I am concerned with is how do
I replace a disk array/vdev in a pool? It appears that is
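For replacing an individual disk inside a vdev, the usual tool is `zpool
replace`; a sketch with placeholder pool and device names (note that ZFS at
this point does not support removing a whole top-level vdev once it has been
added to a pool):

```shell
# Replace an old/failing disk with a new one; ZFS resilvers the data
# onto the replacement automatically. Names here are placeholders.
zpool replace tank c2t0d0 c3t0d0

# Watch the resilver progress:
zpool status tank
```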
On 02 August, 2007 - Matthew C Aycock sent me these 1,0K bytes:
And 6560174 might be a duplicate of 6445725
I see what you mean. Unfortunately there does not appear to be a work-around.
It is beginning to sound like firewire drives are not a safe alternative for
backup? This is unfortunate when you have an Ultra20 with only 2 disks.
Is there a way to
Lisa Shepherd wrote:
Zettabyte File System is the formal, expanded name of the file system, and
ZFS is its abbreviation. In most Sun manuals, the name is expanded at first
use and the abbreviation is used the rest of the time. Though I was surprised to
find that the Solaris ZFS System
And 6560174 might be a duplicate of 6445725
I see what you mean. Unfortunately there does not appear to be a work-around.
Nope, no work-around. This is a scsa1394 bug; it has some issues when it is
used from interrupt context. I have some source code diffs that are supposed
to fix the
http://www.sun.com/software/solaris/ds/zfs.jsp
Solaris ZFS: The Most Advanced File System on the Planet
Anyone who has ever lost important files, run out of space on a
partition, spent weekends adding new storage to servers, tried to grow
or shrink a file system, or experienced data corruption
As a novice, I understand that if you don't have any redundancy between vdevs
this is going to be a problem. Perhaps you can add mirroring to your existing
pool and make it work that way?
A pool made up of mirror pairs:

{cyrus4:137} zpool status
  pool: ms2
 state: ONLINE
 scrub: scrub
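Adding mirroring to an existing unmirrored vdev can be done in place with
`zpool attach`; a sketch, with placeholder pool and device names:

```shell
# Attach a second disk to an existing single-disk vdev, converting it
# into a two-way mirror. ZFS resilvers the new side automatically.
zpool attach ms2 c4t0d0 c5t0d0

# Check the resilver and the new mirror layout:
zpool status ms2
```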
Nope, no work-around.
OK. Then I have 3 questions:
1) How do I destroy the pool that was on the firewire drive? (So that ZFS
stops complaining about it.)
2) How can I reformat the firewire drive? Does this need to be done on a
non-Solaris OS?
3) Can your code diffs be integrated into the
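For questions 1 and 2, one possible approach (a sketch only; the pool and
device names are placeholders, and zeroing the label is destructive):

```shell
# 1) Force-destroy the pool even though the device is flaky or absent:
zpool destroy -f firewirepool

# 2) If stale ZFS labels are still detected on the disk, overwrite the
#    start of the device to wipe them (this erases the pool metadata):
dd if=/dev/zero of=/dev/rdsk/c5t0d0s0 bs=1024k count=10
```

ZFS also writes labels near the end of the device, so a stubborn label may
require clearing there as well, or simply relabeling/reformatting the disk
with format(1M); no non-Solaris OS should be needed.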