Hi,
the question is which WD Green drives you are using: WDxxEADS or WDxxEARS. The
WDxxEARS have a 4K physical sector size instead of 512 B. You need some special
trickery to get the maximum performance out of them, probably even more so in a
raidz configuration.
See
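(Illustrative, not from the original post: one quick check is the pool's
ashift - ashift=9 means 512-byte alignment, ashift=12 means 4K. The pool name
'tank' is a placeholder:)

  # zdb -C tank | grep ashift

An EARS drive in an ashift=9 pool ends up doing a read-modify-write for every
sub-4K write.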
I have created a solaris9 zfs root flash archive for a sun4v environment which
I'm trying to use for upgrading a solaris10 u8 zfs root based server using live
upgrade.
Following is my current system status:
lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
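(For context - not part of Ketan's post - the flash-install form of luupgrade
generally looks like the sketch below, with a hypothetical BE name and paths;
note that later replies in this thread say this combination is not supported
with a ZFS root at this time:)

  # luupgrade -f -n newBE -s /net/server/os_image -a /net/server/s9-zfsroot.flar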
I also have this problem on my system, which consists of an AMD Phenom II X4 with
system pools on various hard drives connected to the SB750 controller and a
larger raidz2 storage pool connected to an LSI 1068e controller (using IT
mode). The storage pool is also used to share files using CIFS.
On 28/09/10 09:22 PM, Robin Axelsson wrote:
I also have this problem on my system, which consists of an AMD Phenom II
X4 with system pools on various hard drives connected to the SB750
controller and a larger raidz2 storage pool connected to an LSI 1068e
controller (using IT mode). The storage
I have now run some hardware tests as suggested by Cindy. 'iostat -En' indicates
no errors, i.e. after carefully checking the output from this command, every
error counter is followed by a zero.
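(A sketch of that check - device names will differ:)

  # iostat -En | grep Errors
  (each device line should read 'Soft Errors: 0 Hard Errors: 0 Transport Errors: 0')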
The only messages found in /var/adm/messages are the following:
timestamp opensolaris scsi: [ID 365881
I am using a zpool for swap that is located in the rpool (i.e. not in the
storage pool). The system disk contains four primary partitions, where the first
contains the system volume (c7d0s0), two are Windows partitions (c7d0p2 and
c7d0p3), and the fourth (c7d0p4) is a zfs pool dedicated for
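(Illustrative only, size and names invented - the usual way such a swap volume
is created:)

  # zfs create -V 4G rpool/swap
  # swap -a /dev/zvol/dsk/rpool/swap
  # swap -l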
I have both EVDS and EARS 2TB green drives. And I have to say they are not good
for building storage servers.
The EVDS has a compatibility issue with my Supermicro appliance: it will hang
when doing a huge data send or copy. From iostat I can see the data throughput
is stuck on the green disks with
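(Not from the original post: a hedged way to watch for a wedged drive is
iostat's extended statistics - a hung disk will typically sit near 100 %b with
a very large asvc_t while the rest of the vdev goes idle:)

  # iostat -xn 5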
I have both EVDS and EARS 2TB green drives. And I have to say they are
not good for building storage servers.
I think both have native 4K sectors; as such, they balk or perform slowly
when a smaller I/O or an unaligned IOP hits them.
How are they formatted? Specifically, solaris slices must be
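(For illustration - device name hypothetical - slice alignment can be checked
with prtvtoc; with 512-byte logical sectors, a slice whose First Sector is
divisible by 8 starts on a 4K boundary:)

  # prtvtoc /dev/rdsk/c7d0s2
  (check that the First Sector of each slice is divisible by 8)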
Hello,
We have been running into a few issues recently with CPU panics while trying
to reboot the control/service domains on various T-series platforms. Has
anyone seen the message below?
Thanks
SunOS Release 5.10 Version Generic_142900-03 64-bit
Copyright 1983-2009 Sun Microsystems, Inc. All
Hi Ketan,
My flash archive experience is minimal, but..
This error suggests that the disk components of this pool might have some
SVM remnants. Is that possible? I would check with the metastat command,
review /etc/vfstab, or /etc/lu/ICF.* to see if they are referencing
meta devices.
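(A sketch of those checks, assuming nothing about Ketan's setup:)

  # metastat -p                  (any metadevices listed are SVM remnants)
  # grep /dev/md /etc/vfstab
  # grep /dev/md /etc/lu/ICF.*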
Thanks,
Ketan,
Someone with more flash archive experience than me says that you
can't install a ZFS root flash archive with live upgrade at this
time. Duh, I knew that.
Sorry for the red herring... :-)
Cindy
On 09/28/10 08:30, Cindy Swearingen wrote:
Hi Ketan,
My flash archive experience is
Roy Sigurd Karlsbakk wrote:
device r/s w/s kr/s kw/s wait actv svc_t %w %b
cmdk0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
cmdk1 0.0 163.6 0.0 20603.7 1.6 0.5 12.9 24 24
fd0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
sd0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
sd1 0.5 140.3 0.3 2426.3 0.0 1.0 7.2 0 14
sd2 0.0
Regarding vdevs and mixing WD Green drives with other drives, you might find it
interesting that WD itself does not recommend them for 'business critical' RAID
use - this is quoted from the WD20EARS page here
(http://www.wdc.com/en/products/Products.asp?DriveID=773):
Desktop / Consumer RAID
http://blogs.sun.com/bonwick/en_US/entry/and_now_page_2
Monday Sep 27, 2010
And now, page 2
To my team:
After 20 incredible years at Sun/Oracle, I have decided to try something new.
This was a very hard decision, and not one made lightly. I have always enjoyed
my work, and still do --
- Original Message -
Regarding vdevs and mixing WD Green drives with other drives, you
might find it interesting that WD itself does not recommend them for
'business critical' RAID use - this is quoted from the WD20EARS page here
(http://www.wdc.com/en/products/Products.asp?DriveID=773):
sb == Simon Breden sbre...@gmail.com writes:
sb WD itself does not recommend them for 'business critical' RAID
sb use
The described problems with WD aren't okay for non-critical
development/backup/home use either. The statement from WD is nothing
but an attempt to upsell you, to
Dear ZFS Discussion,
I ran out of space and consequently could not rm or truncate files. (It
makes sense because it's copy-on-write and any transaction needs to
be written to disk. It worked out really well - all I had to do was
destroy some snapshots.)
If there are no snapshots to destroy, how to
Preemptively use quotas?
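(A hedged sketch of that idea, with an invented dataset name: keep a small
reservation on an otherwise empty dataset, then release it when the pool fills
and rm starts failing:)

  # zfs create tank/headroom
  # zfs set reservation=1G tank/headroom
  ...later, when the pool is full:
  # zfs set reservation=none tank/headroom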
On 9/22/10 7:25 PM, Aleksandr Levchuk alevc...@gmail.com wrote:
Dear ZFS Discussion,
I ran out of space and consequently could not rm or truncate files. (It
makes sense because it's copy-on-write and any transaction needs to
be written to disk. It worked out really
I'm sorry to say that I am quite the newbie to ZFS. When you say zfs
send/receive what exactly are you referring to?
I had the zfs array mounted to a specific location in my file system
(/mnt/Share) and I was sharing that location over the network with a samba
server. The directory had
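(Since the question came up: 'zfs send' serializes a snapshot of a dataset to a
stream, and 'zfs receive' recreates it elsewhere. A minimal sketch with
hypothetical pool and host names:)

  # zfs snapshot tank/Share@backup1
  # zfs send tank/Share@backup1 | ssh backuphost zfs receive pool2/Share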
On 16 Sep 2010, at 16:18, Rich Teer rich.t...@rite-group.com wrote:
On Thu, 16 Sep 2010, erik.ableson wrote:
And for reference, I have a number of 10.6 clients using NFS for
sharing Fusion virtual machines, iTunes library, iPhoto libraries etc.
without any issues.
Excellent; what OS
The only tweak needed was making sure that I used the FQDN of the client
machines (with appropriate reverse lookups in my DNS) for the sharenfs
properties.
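(For illustration - hostnames and dataset invented - that kind of sharenfs
setting looks like:)

  # zfs set sharenfs=rw=client1.example.com:client2.example.com tank/vms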
Sent from my iPhone
On 16 Sep 2010, at 17:15, Rich Teer rich.t...@rite-group.com wrote:
On Thu, 16 Sep 2010, Erik Ableson wrote:
On 19 Sep 2010, at 18:59, Victor Latushkin wrote:
On Sep 19, 2010, at 12:08 AM, Stephan Ferraro wrote:
Is there a way to fsck the spacemap?
Does scrub help with this?
No, because the issues that you see are internal inconsistencies of an unclear
nature.
Though as the actual issue varies from
On 19 Sep 2010, at 19:24, Victor Latushkin wrote:
On Sep 19, 2010, at 9:06 PM, Stephan Ferraro wrote:
On 19 Sep 2010, at 18:59, Victor Latushkin wrote:
On Sep 19, 2010, at 12:08 AM, Stephan Ferraro wrote:
Is there a way to fsck the spacemap?
Does scrub help with this?
No, because
Brilliant. I set those parameters via /etc/system, rebooted, and the pool
imported with just the -f switch. I had seen this as an option earlier,
although not in that thread, but was not sure it applied to my case.
Scrub is running now. Thank you very much!
-Scott
On 9/23/10 7:07 PM, David
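(The excerpt doesn't show which parameters were set; the pair usually suggested
on this list for importing a pool with damaged space maps - strictly a
temporary recovery measure, to be removed afterwards - is:)

  * in /etc/system, then reboot:
  set zfs:zfs_recover=1
  set aok=1
  # zpool import -f poolname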
Thanks a lot for that. I'm not experienced in reading the output of dtrace,
but I'm pretty sure that dedup was the cause here, as disabling it during
the transfer immediately raised the transfer speed to ~100MB/s.
Thanks for the article you linked to - it seems my system would need about
16GB
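(The rule of thumb usually quoted here: the DDT costs roughly 320 bytes of RAM
per unique block, i.e. about (unique data / recordsize) x 320. For example,
~6.5TB of unique 128K blocks is ~50 million entries x 320B, roughly 16GB,
matching the figure above. 'zdb -DD poolname' reports the actual DDT size.)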
On 28/09/2010 10:20, Ketan wrote:
I have created a solaris9 zfs root flash archive for a sun4v environment which
I'm trying to use for upgrading a solaris10 u8 zfs root based server using live
upgrade.
one cannot use a zfs flash archive with luupgrade; that is, with a zfs root, a
flash archive
IIRC the currently available WD Caviar Black models no longer allow TLER to be
enabled. For WD drives, to have TLER capability you will need to buy their
enterprise models, like the REx models, which cost mucho $$$.
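(Hedged, since OS and controller support varies: on drives that still honor
it, the ERC timeout can be inspected or set with a recent smartmontools build.
Times are in tenths of a second:)

  # smartctl -l scterc /dev/rdsk/c7t1d0
  # smartctl -l scterc,70,70 /dev/rdsk/c7t1d0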
On Sat, 25 Sep 2010, Ralph Böhme wrote:
The Darwin ACL model is nice and slick; the new NFSv4 one in 147 is just
braindead. chmod resulting in ACLs being discarded is a bizarre design
decision.
Agreed. What's the point of ACLs that disappear? Sun didn't want to fix
acl/chmod
On Tue, Sep 28, 2010 at 12:18:49PM -0700, Paul B. Henson wrote:
On Sat, 25 Sep 2010, Ralph Böhme wrote:
The Darwin ACL model is nice and slick; the new NFSv4 one in 147 is just
braindead. chmod resulting in ACLs being discarded is a bizarre design
decision.
Agreed. What's the
On Tue, 28 Sep 2010, Nicolas Williams wrote:
I've researched this enough (mainly by reading most of the ~240 or so
relevant zfs-discuss posts and several bug reports)
And I think some fair fraction of those posts were from me, so I'll try not
to start rehashing old discussions ;).
That only
Yes. But what is enough reserved free memory? If you need 1MB for a normal
configuration, you might need 2MB when you are doing ZFS on ZFS. (I am just
guessing.)
This is the same problem as mounting an NFS server on itself via NFS. Also
not supported.
The system has shrinkable caches and
On Tue, Sep 28, 2010 at 02:03:30PM -0700, Paul B. Henson wrote:
On Tue, 28 Sep 2010, Nicolas Williams wrote:
I've researched this enough (mainly by reading most of the ~240 or so
relevant zfs-discuss posts and several bug reports)
And I think some fair fraction of those posts were from
On 09/29/10 09:38 AM, Nicolas Williams wrote:
I've researched this enough (mainly by reading most of the ~240 or so
relevant zfs-discuss posts and several bug reports) to conclude the
following:
- ACLs derived from POSIX mode_t and/or POSIX Draft ACLs that result in
DENY ACEs are
On Wed, Sep 29, 2010 at 10:15:32AM +1300, Ian Collins wrote:
Based on my own research, experimentation and client requests, I
agree with all of the above.
Good to know.
I have been re-ordering and cleaning (deny) ACEs for one client for a
couple of years now and we haven't seen any user
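(For reference, the kind of cleanup being described - listing the numbered
ACEs and deleting a deny entry by index; the file name is hypothetical:)

  # ls -v /export/data/somefile
  # chmod A0- /export/data/somefile      (removes the ACE at index 0)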
The described problems with WD aren't okay for non-critical
development/backup/home use either.
Indeed. I don't use WD drives for RAID any longer.
The statement from WD is nothing but an attempt to upsell you, to
differentiate the market so they can tap into the demand curve at multiple
On 9/28/2010 2:13 PM, Nicolas Williams wrote:
Or aclmode=deny, which is pretty simple, not very confusing, and basically
the only paradigm that will prevent chmod from breaking your ACL.
That can potentially render many applications unusable.
Yes. Which is why it obviously wouldn't be the
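(For anyone checking their own datasets: on builds that still carry the
property - as I recall it was removed around this time, which is part of what
sparked this thread - the current setting can be read with, pool name
hypothetical:)

  # zfs get aclmode,aclinherit tank/export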