Hi,
On 9/4/06, UNIX admin [EMAIL PROTECTED] wrote:
[Solaris 10 6/06 i86pc]
...
Then I added two more disks to the pool with `zpool add -fn space c2t10d0
c2t11d0`, whereby I determined that those would be added as a RAID0, which is
not what I wanted. So instead I used `zpool add -f space raidz c2t10d0 c2t11d0`.
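The -n flag is a dry run: zpool prints the configuration that would result
without actually committing it. A minimal sketch of the two layouts, assuming
a pool named 'space' (output abbreviated; existing vdevs omitted):

  # zpool add -n space c2t10d0 c2t11d0
  would update 'space' to the following configuration:
          space
            c2t10d0
            c2t11d0
  # zpool add -n space raidz c2t10d0 c2t11d0
  would update 'space' to the following configuration:
          space
            raidz1
              c2t10d0
              c2t11d0

Without the raidz keyword each disk becomes an independent top-level vdev,
i.e. a plain stripe.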
On Tue, Aug 22, 2006 at 12:45:16PM +0200, Pawel Jakub Dawidek wrote:
Hi.
I started porting the ZFS file system to the FreeBSD operating system.
[...]
Just a quick note about progress in my work. I needed to slow down a bit,
but:
All file system operations seem to work. The only exceptions are [...]
Hello Wee,
Tuesday, September 5, 2006, 10:58:32 AM, you wrote:
WYT On 9/5/06, Torrey McMahon [EMAIL PROTECTED] wrote:
This is simply not true. ZFS would protect against the same type of
errors seen on an individual drive as it would on a pool made of HW raid
LUN(s). It might be overkill to [...]
AFAIK, no. The attach semantics only work for adding mirrors.
It would be nice if that could be overloaded for RAIDZ.
Sure would be.
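To make the attach semantics concrete, a hedged sketch (pool and device names
are made up here): `zpool attach` either turns a plain disk into a two-way
mirror or widens an existing mirror, and is rejected for raidz members with
something like:

  # zpool attach space c2t1d0 c2t12d0
  (c2t1d0 is now one side of a two-way mirror)
  # zpool attach space c2t10d0 c2t12d0
  cannot attach c2t12d0 to c2t10d0: can only attach to mirrors and top-level disks

So there is no way to grow a raidz vdev by attaching disks to it; you can only
add whole new vdevs to the pool.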
Not sure exactly which blog entry, but you might be confused: stripes can be
of different sizes (not different-sized disks). The man page for zpool [...]
On Sep 5, 2006, at 06:45, Robert Milkowski wrote: [...]
Jonathan Edwards wrote:
Here are 10 options I can think of to summarize combinations of zfs with
hw redundancy:

 #   ZFS   ARRAY HW   CAPACITY   COMMENTS
 --  ----  --------   --------   --------
 1   R0    R1         N/2        hw mirror - no zfs healing (XXX)
 2   R0    [...]
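To illustrate what row 1 means in practice, a minimal sketch (hypothetical
device names, where each LUN is a hardware mirror exported by the array):

  # zfs striped across hw mirrors: capacity N/2, but zfs cannot self-heal,
  # since it has no redundant copy of its own to repair from
  zpool create space c3t0d0 c3t1d0
  # by contrast, a zfs-level mirror over two hw LUNs restores self-healing,
  # at the cost of halving usable capacity again
  zpool create space mirror c3t0d0 c3t1d0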
IIRC there was a tunable variable to set how much data to read in,
and the default was 64KB...
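If the tunable meant here is zfs_vdev_cache_bshift (an assumption on my part;
it sizes vdev-level read-ahead as a power of two, and 2^16 = 64KB matches the
default mentioned), it could be inspected and changed like this:

  # read the current value (decimal) from the live kernel:
  echo 'zfs_vdev_cache_bshift/D' | mdb -k
  # persistent change via /etc/system (takes effect on reboot), e.g. 2^17 = 128KB:
  set zfs:zfs_vdev_cache_bshift = 17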
Robert,
I would be interested in seeing your crash dump. ZFS will consume much
of your memory *in the absence of memory pressure*, but it should be
responsive to memory pressure, and give up memory when this happens. It
looks like you have 8GB of memory on your system? ZFS should never [...]
Yes, the server has 8GB of RAM.
Most of the time there's about 1GB of free RAM.
bash-3.00# mdb 0
Loading modules: [ unix krtld genunix dtrace specfs ufs sd md ip sctp usba fcp
fctl qlc ssd lofs zfs random logindmux ptm cpc nfs ipc ]
> arc::print
{
    anon = ARC_anon
    mru = ARC_mru
    mru_ghost = [...]
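For a quicker view of where kernel memory (including the ARC) is going, one
option on this vintage of Solaris is mdb's ::memstat dcmd, run against the
live kernel rather than a dump:

  echo '::memstat' | mdb -k

It prints a page summary broken down into Kernel, Anon, Exec and libs, Page
cache, and Free; on this release ZFS ARC data is accounted under Kernel.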
Oatway, Ted wrote:
IHAC that has 560+ LUNs that will be assigned to ZFS Pools and some
level of protection. The LUNs are provided by seven Sun StorageTek
FLX380s. Each FLX380 is configured with 20 Virtual Disks. Each Virtual
Disk presents four Volumes/LUNs. (4 Volumes x 20 Virtual Disks x 7
FLX380s = 560 LUNs.)
Thanks for the response, Richard. Forgive my ignorance, but the following
questions come to mind as I read your response.
I would then have to create 80 RAIDZ(6+1) volumes, and the process of
creating these volumes can be scripted (a sketch follows below). But:
1) I would then have to create 80 mount points to mount each [...]
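A minimal sketch of that scripting step, assuming the 560 LUN names sit one
per line in luns.txt, pre-sorted so each consecutive group of seven forms one
RAIDZ(6+1) vdev (the file name, pool name, and grouping are assumptions, not
from the thread). Building all 80 vdevs into a single pool also sidesteps the
80 separate mount points, since one pool mounts as one filesystem by default:

  #!/bin/sh
  # the first group of seven creates the pool; the rest are added as vdevs
  set -- `cat luns.txt`
  zpool create tank raidz $1 $2 $3 $4 $5 $6 $7
  shift 7
  while [ $# -ge 7 ]; do
          zpool add tank raidz $1 $2 $3 $4 $5 $6 $7
          shift 7
  done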