1. evacuating a vdev, resulting in a smaller pool,
   for all raid configs - ?
2. adding a new vdev and rewriting all existing data
   to the new larger stripe - ?
3. expanding stripe width for raid-z1 and raid-z2 - ?
4. live conversion between different raid kinds on
   the same disk set - ?
Thanks for the continuing flow of information. I already have all of the
equipment. I'm actually upgrading my main computer to a new Core 2 Duo setup,
which is why this hardware is going to the file server. I think I'm going to
try a 64-bit install using the four 500GB drives in a RAID-Z
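A four-drive RAID-Z pool like the one described is created in a single command; the pool name and device names below are hypothetical examples, not from the original post:

```shell
# Create one raidz vdev from four 500GB drives
# (pool and device names are examples only)
zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0

# Verify the resulting layout
zpool status tank
```

With four drives, raidz gives roughly three drives' worth of usable space and survives a single-disk failure.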
Hello,
the Solaris Internals wiki contains many interesting things about ZFS,
but I have no clue about the reason for this entry.
In the section ZFS Storage Pools Recommendations - Storage Pools you can read:
[i]For all production environments, set up a redundant ZFS storage pool, such
as a raidz,
On Thu, May 03, 2007 at 11:43:49AM -0500, [EMAIL PROTECTED] wrote:
I think this may be a premature leap -- it is still undetermined whether we are
running up against an as-yet-unknown bug in the kernel implementation of gzip
used for this compression type. From my understanding, the gzip code has
been
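For context, the gzip compression type under discussion is enabled per dataset; the dataset name below is a hypothetical example:

```shell
# Enable gzip compression on a dataset (dataset name is an example)
zfs set compression=gzip tank/data

# Confirm the property took effect
zfs get compression tank/data
```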
The drive in my Solaris box that had the OS on it decided to kick the bucket this
evening, a joyous occasion for all, but luckily all my data is stored on a zpool
and the OS is nothing but a shell to serve it up. One quick install later
and I'm back trying to import my pool, and things are not
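After a fresh OS install, a pool created under the old installation is normally recovered with `zpool import`; the pool name below is hypothetical:

```shell
# With no arguments, list pools available for import
zpool import

# Import by name; -f forces it if the pool still looks
# "in use" by the dead installation
zpool import -f tank
```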
On 9-May-07, at 4:45 AM, Andreas Koppenhoefer wrote:
Hello,
the Solaris Internals wiki contains many interesting things about ZFS,
but I have no clue about the reason for this entry.
In the section ZFS Storage Pools Recommendations - Storage Pools you
can read:
[i]For all production environments,
We have Solaris 10 Update 3 (aka 11/06) running on an E2900 (24 x 96). On this
server we've been running a large SAS environment totalling well over 2TB. We
also take daily snapshots of the filesystems and clone them for use by a local
zone. This setup has been in use for well over 6 months.
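The daily snapshot-plus-clone cycle described above looks roughly like this; the dataset and snapshot names are hypothetical, not taken from the poster's setup:

```shell
# Take the daily snapshot of the SAS dataset (names are examples)
zfs snapshot sas/data@daily-20070509

# Clone it read-write for the local zone to use
zfs clone sas/data@daily-20070509 sas/zoneclone
```

A clone shares blocks with its snapshot, so this costs almost no extra space until either side diverges.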
Have you tried FileBench?
http://www.solarisinternals.com/wiki/index.php/FileBench
Rayson
On 5/9/07, cesare VoltZ [EMAIL PROTECTED] wrote:
In the past I used iozone (http://www.iozone.org/), but I'm wondering
if there are other tools.
Thanks.
Cesare
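FileBench is driven by workload description files; the exact invocation and profile names vary by version, so the workload file below is an assumption to check against the installed copy:

```shell
# Run a canned workload profile (profile name is an example;
# check the shipped workload directory for what's available)
filebench -f varmail.f
```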
For whatever reason, EMC notes (on PowerLink) suggest that ZFS is not supported
on their arrays. If one is going to use a ZFS filesystem on top of an EMC array,
be warned about support issues.
I've read that it's supposed to go at full speed, i.e. as fast as possible. I'm
doing a disk replace and what zpool reports kind of surprises me. The resilver
goes on at 1.6MB/s. Did resilvering get throttled at some point between the
builds, or is my ATA controller having bigger issues?
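Resilver progress and the estimated completion time can be watched with `zpool status`; the pool name below is hypothetical:

```shell
# The scrub line reports percent done and an ETA during a resilver,
# e.g. "resilver in progress, 12.4% done, 3h21m to go"
zpool status tank
```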
comment below...
Toby Thain wrote:
On 9-May-07, at 4:45 AM, Andreas Koppenhoefer wrote:
Hello,
the Solaris Internals wiki contains many interesting things about ZFS,
but I have no clue about the reason for this entry.
In the section ZFS Storage Pools Recommendations - Storage Pools you can
read:
cesare VoltZ wrote:
Hi,
I'm planning to test a ZFS solution for our application in a pre-production
data center, and I'm looking for a good filesystem benchmark to see
which configuration is the best solution.
Pedantically, your application is always the best benchmark.
-- richard
Adam Leventhal wrote:
On Wed, May 09, 2007 at 11:52:06AM +0100, Darren J Moffat wrote:
Can you give some more info on what these problems are.
I was thinking of this bug:
6460622 zio_nowait() doesn't live up to its name
Which I was surprised to find was fixed by Eric in build 59.
Adam
On Wed, 2007-05-09 at 16:27 +0200, cesare VoltZ wrote:
Hi,
I'm planning to test a ZFS solution for our application in a pre-production
data center, and I'm looking for a good filesystem benchmark to see
which configuration is the best solution.
Servers are Solaris 10 connected to an EMC
Hello Anantha,
Wednesday, May 9, 2007, 4:45:10 PM, you wrote:
ANS For whatever reason, EMC notes (on PowerLink) suggest that ZFS is
ANS not supported on their arrays. If one is going to use a ZFS
ANS filesystem on top of an EMC array, be warned about support issues.
Nope. For a couple of months
Hello Richard,
Wednesday, May 9, 2007, 9:10:22 PM, you wrote:
RE Robert Milkowski wrote:
Hello Mario,
Wednesday, May 9, 2007, 5:56:18 PM, you wrote:
MG I've read that it's supposed to go at full speed, i.e. as fast as
MG possible. I'm doing a disk replace and what zpool reports kind of
Robert Milkowski wrote:
Hello Mario,
Wednesday, May 9, 2007, 5:56:18 PM, you wrote:
MG I've read that it's supposed to go at full speed, i.e. as fast as
MG possible. I'm doing a disk replace and what zpool reports kind of
MG surprises me. The resilver goes on at 1.6MB/s. Did
Hello Michael,
Tuesday, May 8, 2007, 9:20:56 PM, you wrote:
Probably RAID-Z, as you don't have enough disks for 1+0 to be interesting.
Paul
MC How do you configure ZFS RAID 1+0 ?
MC Will next lines do that right? :
MC zpool create -f zfs_raid1 mirror c0t1d0 c1t1d0
MC zpool add
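The quoted commands are indeed the right shape for RAID 1+0: ZFS stripes across all top-level vdevs, so a pool of two mirrors is striped mirroring. The first device pair follows the quoted post; the second pair is a hypothetical example:

```shell
# Create a pool with one mirror, then add a second mirror vdev;
# ZFS stripes writes across the two mirrors (i.e. RAID 1+0)
zpool create -f zfs_raid1 mirror c0t1d0 c1t1d0
zpool add zfs_raid1 mirror c0t2d0 c1t2d0

# Equivalent single command:
# zpool create -f zfs_raid1 mirror c0t1d0 c1t1d0 mirror c0t2d0 c1t2d0
```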
Anantha N. Srirama wrote:
For whatever reason, EMC notes (on PowerLink) suggest that ZFS is not supported
on their arrays. If one is going to use a ZFS filesystem on top of an EMC array,
be warned about support issues.
They should have fixed that in their matrices. It should say something
I've since stopped making the second clone when I realized that
.zfs/snapshot/snapname still exists after the clone operation is completed,
so my need for the local clone is met by direct access to the snapshot.
However, the poor performance of the destroy is still an issue. It is quite
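As the poster discovered, every snapshot is browsable read-only under the dataset's hidden `.zfs` directory, so no clone is needed just to read old data; the paths below are hypothetical:

```shell
# Read files straight out of a snapshot (paths are examples)
ls /tank/data/.zfs/snapshot/daily-20070509
cp /tank/data/.zfs/snapshot/daily-20070509/report.sas7bdat /tmp/
```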
Gurus,
my freshly installed Solaris 10 U3 can't boot up normally on a T2000
server (System Firmware 6.4.4); the OS can only enter
single-user mode, as one critical service fails to start:
# uname -a
SunOS t2000 5.10 Generic_118833-33 sun4v sparc SUNW,Sun-Fire-T200
(it's not patched, just
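On Solaris 10 the failing service can usually be identified with the standard SMF tools; nothing below assumes anything beyond a default install:

```shell
# List services that are preventing normal boot,
# with the reason and the path to each service's log
svcs -xv

# Then inspect the log file it points to, e.g.:
# cat /var/svc/log/<service-name>.log
```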