If anybody uses SSD for rpool more than half-year, can you post SMART
information about HostWrites attribute?
I want to see how SSD wear for system disk purposes.
I'd be happy to, exactly what commands shall I run?
Hm. I'm experimenting with OpenSolaris in a virtual machine now.
Unfortunately
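For reference, something like the following should pull that attribute (the device path is a placeholder, and the attribute name varies by vendor -- Intel reports Host_Writes, others report 241 Total_LBAs_Written):
  # -d sat,12 is the passthrough that works for SATA disks behind
  # ahci on Solaris (see the smartctl thread later in this digest):
  smartctl -d sat,12 -a /dev/rdsk/c0t0d0 | egrep -i 'Host_Writes|Total_LBAs_Written'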
Hi all,
Yet another story regarding mpt issues, and to make a long story short:
every time a Dell R710 running snv_134 logs
scsi: [ID 107833 kern.warning] WARNING:
/p...@0,0/pci8086,3...@4/pci1028,1...@0 (mpt0): ,
the system freezes and only a hard reset fixes the issue.
- Daniel Carosone d...@geek.com.au wrote:
On Mon, Apr 26, 2010 at 10:02:42AM -0700, Chris Du wrote:
SAS: full duplex
SATA: half duplex
SAS: dual port
SATA: single port (some enterprise SATA has dual port)
SAS: 2 active channels - 2 concurrent writes, or 2 reads, or 1 write
and
The problem is that Windows Server Backup seems to choose dynamic
VHD (which would make sense in most cases) and I don't know if there is a
way to change that. Using iSCSI volumes won't help in my case since the
servers are running on physical hardware.
On 27.04.2010 01:54, Brandon wrote
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
If something like this already exists, please let me know. Otherwise, I
plan to:
Create zfshistory command, written in python. (open source, public, free)
So, I
- Tim.Kreis tim.kr...@gmx.de wrote:
The problem is that Windows Server Backup seems to choose dynamic
VHD (which would make sense in most cases) and I don't know if there is a
way to change that. Using iSCSI volumes won't help in my case since the
servers are running on physical
Let's suppose you rename a file or directory.
/tank/widgets/a/rel2049_773.13-4/somefile.txt
Becomes
/tank/widgets/b/foogoo_release_1.9/README
Let's suppose you are now working on widget B, and you want to look at the
past zfs snapshot of README, but you don't remember where it came from.
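A rough sketch of how a tool could chase such a rename, assuming your find supports -inum: a file keeps its inode (object) number across renames, and every snapshot is browsable under .zfs/snapshot, so searching a snapshot by inode recovers the old path (the snapshot name below is a placeholder):
  ino=`ls -i /tank/widgets/b/foogoo_release_1.9/README | awk '{print $1}'`
  find /tank/widgets/.zfs/snapshot/somesnap -inum "$ino" -print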
Bruno Sousa on Tue, Apr 27, 2010 at 09:16:08AM +0200 wrote:
Hi all,
Yet another story regarding mpt issues, and to make a long story short:
every time a Dell R710 running snv_134 logs the information
scsi: [ID 107833 kern.warning] WARNING:
Hi all
I have a test system with snv_134 and 8x2TB drives in RAIDZ2 and currently no
ZIL or L2ARC. I noticed the I/O speed to NFS shares on the testpool drops to
something hardly usable while scrubbing the pool.
How can I address this? Will adding a ZIL or L2ARC help? Is it possible to tune
down
On Mon, April 26, 2010 17:21, Edward Ned Harvey wrote:
Also, if you've got all those disks in an array, and their MTBF is ...
let's say 25,000 hours ... then 3 yrs later when they begin to fail, they
have a tendency to all fail around the same time, which increases the
probability of
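Back-of-envelope, assuming a constant failure rate (which real drives don't have, but it sets the scale):
  # 25,000 h MTBF =~ 25000/8760 =~ 2.9 years per drive, i.e. an implied
  # annual failure rate of about 8760/25000 =~ 35%, so an 8-disk array
  # would average ~2.8 drive failures a year once the drives reach that age.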
On Tue, 27 Apr 2010, David Dyer-Bennet wrote:
Hey, you know what might be helpful? Being able to add redundancy to a
raid vdev. Being able to go from RAIDZ2 to RAIDZ3 by adding another drive
of suitable size. Also being able to go the other way. This lets you do
the trick of temporarily
On Tue, April 27, 2010 10:38, Bob Friesenhahn wrote:
On Tue, 27 Apr 2010, David Dyer-Bennet wrote:
Hey, you know what might be helpful? Being able to add redundancy to a
raid vdev. Being able to go from RAIDZ2 to RAIDZ3 by adding another drive
of suitable size. Also being able to go the
On Tue, 27 Apr 2010, Roy Sigurd Karlsbakk wrote:
I have a test system with snv_134 and 8x2TB drives in RAIDZ2 and
currently no ZIL or L2ARC. I noticed the I/O speed to NFS shares on
the testpool drops to something hardly usable while scrubbing the
pool.
How can I address this? Will adding
On Tue, 27 Apr 2010, David Dyer-Bennet wrote:
I don't think I understand your scenario here. The docs online at
http://docs.sun.com/app/docs/doc/819-5461/gazgd?a=view describe uses of
zpool replace that DO run the array degraded for a while, and don't seem
to mention any other.
Could you be
On 04/27/10 03:55 AM, Yuri Vorobyev wrote:
If anybody uses SSD for rpool more than half-year, can you post SMART
information about HostWrites attribute?
I want to see how SSD wear for system disk purposes.
I'd be happy to, exactly what commands shall I run?
Hm. I'm experimenting with
Hi - was there any progress on this issue?
I'd be interested to know if any bugs were filed regarding it and whether
there's a way to follow up on the progress.
Cheers,
Alasdair
Hi everyone,
Please review the information below regarding access to ZFS version
information.
Let me know if you have questions.
Thanks,
Cindy
CR 6898657:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6898657
ZFS commands zpool upgrade -v and zfs upgrade -v refer to URLs that
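For anyone who hasn't tried them, both commands just print the version descriptions and exit:
  zpool upgrade -v   # supported pool versions and their features
  zfs upgrade -v     # supported filesystem versions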
On Tue, April 27, 2010 11:17, Bob Friesenhahn wrote:
On Tue, 27 Apr 2010, David Dyer-Bennet wrote:
I don't think I understand your scenario here. The docs online at
http://docs.sun.com/app/docs/doc/819-5461/gazgd?a=view describe uses of
zpool replace that DO run the array degraded for a
I've got an OCZ Vertex 30gb drive with a 1GB stripe used for the slog
and the rest used for the L2ARC, which for ~ $100 has been a nice
boost to nfs writes.
What about the Intel X25-V? I know it will likely be fine for L2ARC, but what
about ZIL/slog?
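For the record, the split-SSD layout described above comes down to something like this (pool name and slice numbers are hypothetical; s0 is the ~1GB slog slice):
  zpool add tank log c4t1d0s0     # small slice as the slog
  zpool add tank cache c4t1d0s1   # remainder as L2ARC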
For the l2arc you want iops, pure and simple. For this I think the Intel SSDs
are still king.
The slog however has a gotcha: you want iops, but you also want something
that doesn't say it's done writing until the write is safely nonvolatile. The
Intel drives fail in this regard. So far I'm
On Tue, 27 Apr 2010, David Dyer-Bennet wrote:
I don't have a RAIDZ group, but trying this while there's significant load
on the group, it should be easy to see if there's traffic on the old drive
after the resilver starts. If there is, that would seem to be evidence
that it's continuing to use
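An easy way to check, for whoever tries it: watch per-device traffic while the resilver runs, e.g. with 5-second samples:
  zpool iostat -v tank 5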
On 04/27/10 03:24 PM, Miles Nordin wrote:
http://opensolaris.org/jive/thread.jspa?messageID=473727 -- 'smartctl -d sat,12
...' is the incantation to use on solaris for ahci
OK, getting somewhere.
I have a total of 3 SSDs in my laptop. Laptop is a Clevo D901C.
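So for all three at once, something like this should work (controller/target names are made up; adjust for your box):
  for d in c1t0d0 c1t1d0 c1t2d0; do
    echo "=== $d ==="
    smartctl -d sat,12 -A /dev/rdsk/$d
  done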
Cindy,
Thanks for your help as it got me on the right track. The OpenSolaris Live CD
wasn't reading the GUID/GPT partition tables properly, which was causing the
Assertion failed errors. I relabeled the disks using the partition
information I was able to get from the FreeBSD Live CD, and
We would like to delete and recreate our existing zfs pool without losing any
data. The way we thought we could do this was to attach a few HDDs and create
a new temporary pool, migrate our existing zfs volume to the new pool, delete
and recreate the old pool and migrate the zfs volumes back. The
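A hedged sketch of that hop with send/receive (all names are placeholders, and you'd want the datasets quiesced, or a final incremental pass, before cutting over):
  zpool create temppool c5t0d0 c5t1d0
  zfs snapshot -r tank@migrate
  zfs send -R tank@migrate | zfs receive -d temppool
  # destroy and recreate tank with the new layout, then reverse:
  zfs snapshot -r temppool@back
  zfs send -R temppool@back | zfs receive -dF tank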
On Tue, Apr 27, 2010 at 11:29:04AM -0600, Cindy Swearingen wrote:
The revised ZFS Administration Guide describes the ZFS version
descriptions and the Solaris OS releases that provide the version
and feature, starting on page 293, here:
Hi Wolf,
Which Solaris release is this?
If it is an OpenSolaris system running a recent build, you might
consider the zpool split feature, which splits a mirrored pool into two
separate pools, while the original pool is online.
If possible, attach the spare disks to create the mirrored pool as
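For illustration (pool names hypothetical), the split itself is just:
  zpool split tank tank2   # detach one side of each mirror into a new pool
  zpool import tank2       # then import the new pool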
On Tue, Apr 27, 2010 at 10:36:37AM +0200, Roy Sigurd Karlsbakk wrote:
- Daniel Carosone d...@geek.com.au wrote:
SAS: Full SCSI TCQ
SATA: Lame ATA NCQ
What's so lame about NCQ?
Primarily, the meager number of outstanding requests; write cache is
needed to pretend the writes are done
The OSOL ZFS Admin Guide PDF is pretty stable, even if the
page number isn't, but I wanted to provide an interim solution.
When this information is available on docs.sun.com very soon now,
the URL will be stable.
cs
On 04/27/10 15:32, Daniel Carosone wrote:
On Tue, Apr 27, 2010 at 11:29:04AM
On 04/28/10 03:17 AM, Roy Sigurd Karlsbakk wrote:
Hi all
I have a test system with snv_134 and 8x2TB drives in RAIDZ2 and currently no
ZIL or L2ARC. I noticed the I/O speed to NFS shares on the testpool drops to
something hardly usable while scrubbing the pool.
Is that small random or
It's unclear what you want to do. What's the goal for this exercise?
If you want to replace the pool with larger disks and the pool is a mirror or
raidz, you just replace one disk at a time and allow the pool to rebuild
itself. Once all the disks have been replaced, it will automatically realize the
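In command form that approach is roughly the following (device names hypothetical; repeat per disk and let each resilver finish):
  zpool set autoexpand=on tank       # use the extra space once all disks are swapped
  zpool replace tank c1t0d0 c2t0d0   # replace one disk, wait for resilver
  zpool status tank                  # confirm resilver is done before the next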
On 04/28/10 10:01 AM, Bob Friesenhahn wrote:
On Wed, 28 Apr 2010, Ian Collins wrote:
On 04/28/10 03:17 AM, Roy Sigurd Karlsbakk wrote:
Hi all
I have a test system with snv_134 and 8x2TB drives in RAIDZ2 and
currently no ZIL or L2ARC. I noticed the I/O speed to NFS shares on
the testpool
On Tue, Apr 27, 2010 at 2:47 PM, Daniel Carosone d...@geek.com.au wrote:
What's so lame about NCQ?
Primarily, the meager number of outstanding requests; write cache is
needed to pretend the writes are done straight away and free up the
slots for reads.
NCQ handles 32 outstanding operations.
What's the default size of the file system cache for Solaris 10 x86, and can
it be tuned?
I read various posts on the subject and it's confusing.
ZFS does not use segmap.
The ZFS ARC (Adaptive Replacement Cache) will consume what's
available, memory-wise, based on the workload. There's an upper
limit if zfs_arc_max has not been set, but I forget what it is.
If other memory consumers (applications, other kernel subsystems)
need memory,
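For completeness, the usual way to cap it is a line in /etc/system and a reboot; the 4 GB value here is just an example:
  set zfs:zfs_arc_max = 0x100000000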
Today, Compellent announced their zNAS addition to their unified storage
line. zNAS uses ZFS behind the scenes.
http://www.compellent.com/Community/Blog/Posts/2010/4/Compellent-zNAS.aspx
Congrats Compellent!
-- richard
ZFS storage and performance consulting at http://www.RichardElling.com
ZFS
I had a problems with a UFS file system on a hardware raid controller. It was
spitting out errors like crazy, so I rsynced it to a ZFS volume on the same
machine. There were a lot of read errors during the transfer and the RAID
controller alarm was going off constantly. Rsync was copying the
Hello.
Is all this data what you're looking for?
Yes, thank you, Paul.
Just in case any of you want to jump in, I created case #100426-001820
with WD to ask for a firmware update for the WD??EARS drives without any
512-byte emulation, just the 4K sectors directly exposed.
The WD forum thread: