You're right, from the documentation it definitely should work. Still, it
doesn't, at least not in Solaris 10. But I am not a ZFS developer, so this
should probably be answered by them. I will give it a try with a recent
OpenSolaris VM and check whether this works in newer implementations of ZFS.
FYI:
In b117 it works as expected and stated in the documentation.
Tom
You can't replace it because this disk is still a valid member of the pool,
although it is marked faulty.
Put in a replacement disk, add it to the pool, and replace the faulty one with
the new disk.
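A minimal sketch of that procedure, assuming a pool named tank, a faulted
disk c1t2d0, and a freshly added disk c1t3d0 (all names illustrative):

# replace the faulted device with the new disk
zpool replace tank c1t2d0 c1t3d0
# watch the resilver run to completion
zpool status tank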
Regards,
Tom
You could offline the disk if *this* disk (not the pool) had a replica.
Nothing wrong with the documentation; hmm, maybe it is a little misleading here.
I walked into the same trap.
The pool is not using the disk anymore anyway, so (from the ZFS point of view)
there is no need to offline it.
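To illustrate (pool and device names made up): without a replica the attempt
is simply refused.

zpool offline tank c1t2d0
# on a device without a replica, ZFS refuses with an error like:
# cannot offline c1t2d0: no valid replicas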
Ralf Ramge wrote:
Thomas Liesner wrote:
Does this mean that if I have a pool of 7 TB with one filesystem for all
users
with a quota of 6 TB, I'd be alright?
Yep. Although I *really* recommend creating individual file systems, e.g.
if you have 1,000 users on your server, I'd create 1,000 file systems.
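A hedged sketch of that per-user layout (pool path, user names, and the 10G
quota are illustrative):

#!/usr/bin/bash
# one filesystem per user, each with its own quota
for user in alice bob carol; do
    zfs create tank/home/$user
    zfs set quota=10G tank/home/$user
done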
bda wrote:
I haven't noticed this behavior when ZFS has (as recommended) the
full disk.
Good to know, as I intended to use whole disks anyway.
Thanks,
Tom
Ralf Ramge wrote:
Quotas are applied to file systems, not pools, and as such are pretty
independent of the pool size. I found it best to give every user
his/her own filesystem and apply individual quotas afterwards.
Does this mean that if I have a pool of 7 TB with one filesystem for
Nobody out there who ever had problems with low disk space?
Regards,
Tom
If you can't use zpool status, you should probably check whether your system
is right and not all the devices needed for this pool are currently
available, e.g. with format.
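A couple of quick checks along those lines:

# report only pools that have problems
zpool status -x
# list the disks the system currently sees, non-interactively
echo | format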
Regards,
Tom
Hi all,
I am planning a ZFS fileserver for a larger prepress company in Germany.
Knowing that users tend to use all the space they can get, I am looking for a
solution to avoid a rapid performance loss when the production pool is more
than 80% used.
Would it be a practical solution to just set
Hi,
From Sun Germany I got the info that the 2U JBODs will be officially announced
in Q1 2008 and the 4U JBODs in Q2 2008.
Both will have SAS connectors and support both SAS and SATA drives.
Regards,
Tom
Hi,
did you read the following?
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
Currently, pool performance can degrade when a pool is very full and
filesystems are updated frequently, such as on a busy mail server.
Under these circumstances, keep pool space under 80%
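One hedged way to enforce that headroom is a quota on the pool's top-level
dataset at roughly 80% of capacity (pool name and sizes illustrative; for a
7 TB pool, 80% is about 5.6 TB):

# writes beyond the quota fail instead of filling the pool
zfs set quota=5.6T tank
zfs get quota tank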
Hi,
Compression is off.
I've checked rw-performance with 20 simultaneous cp processes using the following...
#!/usr/bin/bash
# start 20 copies in parallel, then wait for all of them to finish
for ((i=1; i<=20; i++))
do
cp lala$i lulu$i &
done
wait
(lala1-20 are 2 GB files)
...and ended up with 546 MB/s. Not too bad at all.
Hi all,
I am currently using two XStore XJ 1100 SAS JBOD
enclosures (http://www.xtore.com/product_detail.asp?id_cat=11) attached to an
x4200 for testing. So far it works rather nicely, but I am still looking for
alternatives.
The Infortrend JBOD expansions are not available at the moment.
What
Hi Eric,
Are you talking about the documentation at:
http://sourceforge.net/projects/filebench
or:
http://www.opensolaris.org/os/community/performance/filebench/
and:
http://www.solarisinternals.com/wiki/index.php/FileBench
?
I was talking about the solarisinternals wiki. I can't find any
Hi again,
I did not want to compare the filebench test with the single mkfile command.
Still, I was hoping to see similar numbers in the filebench stats.
Any hints on what I could do to further improve the performance?
Would a raid1 over two stripes be faster?
TIA,
Tom
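For what it's worth, a hedged aside: ZFS doesn't build a mirror over two
stripes; the closest layout is striping across mirror vdevs (device names
illustrative):

# two two-way mirrors, striped together by the pool
zpool create tank mirror c1t0d0 c1t1d0 mirror c2t0d0 c2t1d0
zpool status tank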
Hi,
I checked with $nthreads=20, which will roughly represent the expected load,
and these are the results:
IO Summary: 7989 ops 7914.2 ops/s, (996/979 r/w) 142.7mb/s, 255us cpu/op, 0.2ms
latency
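For context, $nthreads is a variable set in the filebench workload file; a
minimal hedged fragment, with an illustrative runtime:

# inside the .f workload file, before running it
set $nthreads=20
run 60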
BTW, smpatch is still running and further tests will get done when the system
is rebooted.
The
I wanted to test some simultaneous sequential writes and wrote this little
snippet:
#!/bin/bash
# start 20 parallel writers, 4 GB each (128 KB blocks x 32768), and wait for them
for ((i=1; i<=20; i++))
do
dd if=/dev/zero of=lala$i bs=128k count=32768 &
done
wait
While the script was running, I watched zpool iostat and measured the time
between the start and the end of the writes
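The watching step can be done with something like this (pool name illustrative):

# report pool-wide bandwidth every 5 seconds while the writers run
zpool iostat tank 5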
Hi all,
I want to replace a bunch of Apple Xserves with Xraids and HFS+ (brr) with Sun
x4200s, SAS JBODs, and ZFS. The application will be the Helios UB+ fileserver
suite.
I installed the latest Solaris 10 on an x4200 with 8 GB of RAM and two Sun SAS
controllers, and attached two SAS JBODs with 8
Hi all,
I am about to put together a one-month test configuration for a
graphics-production server (a prepress filer, that is). I would like to test ZFS
on an x4200 with two SAS-to-SATA JBODs attached. Initially I wanted to use an
Infortrend FC-to-SATA JBOD enclosure, but these are out of production