Re: [zfs-discuss] S11 vs illumos zfs compatibility

2011-12-29 Thread Richard Elling
On Dec 29, 2011, at 1:29 PM, Nico Williams wrote: > On Thu, Dec 29, 2011 at 2:06 PM, sol wrote: >> Richard Elling wrote: >>> many of the former Sun ZFS team >>> regularly contribute to ZFS through the illumos developer community. >> >> Does this mean tha

Re: [zfs-discuss] S11 vs illumos zfs compatibility

2011-12-27 Thread Richard Elling
On Dec 27, 2011, at 7:46 PM, Tim Cook wrote: > On Tue, Dec 27, 2011 at 9:34 PM, Nico Williams wrote: > On Tue, Dec 27, 2011 at 8:44 PM, Frank Cusack wrote: > > So with a de facto fork (illumos) now in place, is it possible that two > > zpools will report the same version yet be incompatible acros

Re: [zfs-discuss] Corrupt Array

2011-12-22 Thread Richard Elling
On Dec 21, 2011, at 11:45 AM, Gareth de Vaux wrote: > Hi guys, after a scrub my raidz array status showed: > > # zpool status > pool: pool > state: ONLINE > status: One or more devices has experienced an unrecoverable error. An >attempt was made to correct the error. Applications are u
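
A sketch of the usual follow-up for this situation, assuming the pool really is named "pool" as in the quoted status output: note the affected files, clear the counters once the cause is understood, then re-scrub to confirm the repair held.

    zpool status -v pool    # list any files with permanent errors
    zpool clear pool        # reset the error counters
    zpool scrub pool        # re-run the scrub
    zpool status -v pool    # confirm no new errors appear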

Re: [zfs-discuss] Very poor pool performance - no zfs/controller errors?!

2011-12-19 Thread Richard Elling
comments below… On Dec 18, 2011, at 6:53 AM, Jan-Aage Frydenbø-Bruvoll wrote: > Dear List, > > I have a storage server running OpenIndiana with a number of storage > pools on it. All the pools' disks come off the same controller, and > all pools are backed by SSD-based l2arc and ZIL. Performance

Re: [zfs-discuss] Improving L1ARC cache efficiency with dedup

2011-12-12 Thread Richard Elling
On Dec 11, 2011, at 2:59 PM, Mertol Ozyoney wrote: > Not exactly. What is dedup'ed is the stream only, which is in fact not very > efficient. Real dedup aware replication is taking the necessary steps to > avoid sending a block that exists on the other storage system. These exist outside of ZFS (e

Re: [zfs-discuss] LSI 3GB HBA SAS Errors (and other misc)

2011-12-04 Thread Richard Elling
On Dec 4, 2011, at 8:50 AM, Ryan Wehler wrote: >> >> A certification does not mean that any specific implementation operates >> without errors. A failed part, >> noisy environment, or other influences will affect any specific >> implementation. > > Would it not be more prudent to re-run the tes

Re: [zfs-discuss] LSI 3GB HBA SAS Errors (and other misc)

2011-12-03 Thread Richard Elling
On Dec 3, 2011, at 9:32 PM, Ryan Wehler wrote: > On Dec 3, 2011, at 11:18 PM, Richard Elling wrote: > >> On Dec 3, 2011, at 9:02 PM, Ryan Wehler wrote: >>> >>> On Dec 3, 2011, at 10:31 PM, Richard Elling wrote: >>> >>>> On Dec 3, 2011, at

Re: [zfs-discuss] LSI 3GB HBA SAS Errors (and other misc)

2011-12-03 Thread Richard Elling
On Dec 3, 2011, at 9:02 PM, Ryan Wehler wrote: > > On Dec 3, 2011, at 10:31 PM, Richard Elling wrote: > >> On Dec 3, 2011, at 7:36 PM, Ryan Wehler wrote: >> >>> Hi Richard, >>> Thanks for getting back to me. >>> >>> >>> On

Re: [zfs-discuss] LSI 3GB HBA SAS Errors (and other misc)

2011-12-03 Thread Richard Elling
On Dec 3, 2011, at 7:36 PM, Ryan Wehler wrote: > Hi Richard, > Thanks for getting back to me. > > > On Dec 3, 2011, at 9:03 PM, Richard Elling wrote: > >> On Dec 1, 2011, at 5:08 PM, Ryan Wehler wrote: >> >>> During the diagnostics of my SAN fai

Re: [zfs-discuss] questions about the DDT and other things

2011-12-03 Thread Richard Elling
more below… On Dec 1, 2011, at 8:21 PM, Erik Trimble wrote: > On 12/1/2011 6:44 PM, Ragnar Sundblad wrote: >> Thanks for your answers! >> >> On 2 dec 2011, at 02:54, Erik Trimble wrote: >> >>> On 12/1/2011 4:59 PM, Ragnar Sundblad wrote: I am sorry if these are dumb questions. If there are

Re: [zfs-discuss] LSI 3GB HBA SAS Errors (and other misc)

2011-12-03 Thread Richard Elling
On Dec 1, 2011, at 5:08 PM, Ryan Wehler wrote: > During the diagnostics of my SAN failure last week we thought we had seen a > backplane failure due to high error counts with 'lsiutil'. However, even > with a new backplane and ruling out failed cards (MPXIO or singular) or bad > cables I'm sti

Re: [zfs-discuss] HP JBOD D2700 - ok?

2011-11-30 Thread Richard Elling
On Nov 30, 2011, at 6:06 AM, Sašo Kiselkov wrote: > On 11/30/2011 02:40 PM, Edmund White wrote: >> Absolutely. >> >> I'm using a fully-populated D2700 with an HP ProLiant DL380 G7 server >> running NexentaStor. >> >> On the HBA side, I used the LSI 9211-8i 6G controllers for the server's >> inte

Re: [zfs-discuss] Compression

2011-11-22 Thread Richard Elling
Hi Matt, On Nov 22, 2011, at 7:39 PM, Matt Breitbach wrote: > So I'm looking at files on my ZFS volume that are compressed, and I'm > wondering to myself, "self, are the values shown here the size on disk, or > are they the pre-compressed values". Google gives me no great results on > the first
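
For readers with the same question, a quick way to compare logical size with on-disk (compressed) size; "tank/data" is a placeholder for the actual dataset:

    ls -l /tank/data/file.log      # logical (uncompressed) file length
    du -h /tank/data/file.log      # blocks actually allocated on disk after compression
    zfs get compression,compressratio tank/data    # dataset-wide compression ratio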

Re: [zfs-discuss] slow zfs send/recv speed

2011-11-17 Thread Richard Elling
On Nov 16, 2011, at 7:35 AM, David Dyer-Bennet wrote: > > On Tue, November 15, 2011 17:05, Anatoly wrote: >> Good day, >> >> The speed of send/recv is around 30-60 MBytes/s for initial send and >> 17-25 MBytes/s for incremental. I have seen lots of setups with 1 disk >> to 100+ disks in pool. Bu

Re: [zfs-discuss] weird bug with Seagate 3TB USB3 drive

2011-11-13 Thread Richard Elling
tip below… On Nov 13, 2011, at 3:24 AM, Pasi Kärkkäinen wrote: > On Sat, Nov 12, 2011 at 10:08:04AM -0800, Richard Elling wrote: >> >> On Nov 12, 2011, at 8:31 AM, Pasi Kärkkäinen wrote: >> >>> On Sat, Nov 12, 2011 at 08:15:31AM -0500, David Magda wrote: >>&

Re: [zfs-discuss] weird bug with Seagate 3TB USB3 drive

2011-11-12 Thread Richard Elling
On Nov 12, 2011, at 8:31 AM, Pasi Kärkkäinen wrote: > On Sat, Nov 12, 2011 at 08:15:31AM -0500, David Magda wrote: >> On Nov 12, 2011, at 00:55, Richard Elling wrote: >> >>> Better than ? >>> If the disks advertise 512 bytes, the only way around it is with

Re: [zfs-discuss] weird bug with Seagate 3TB USB3 drive

2011-11-11 Thread Richard Elling
On Nov 10, 2011, at 7:47 PM, David Magda wrote: > On Nov 10, 2011, at 18:41, Daniel Carosone wrote: > >> On Tue, Oct 11, 2011 at 08:17:55PM -0400, John D Groenveld wrote: >>> Under both Solaris 10 and Solaris 11x, I receive the evil message: >>> | I/O request is not aligned with 4096 disk sector

Re: [zfs-discuss] Solaris Based Systems "Lock Up" - Possibly ZFS/memory related?

2011-10-31 Thread Richard Elling
FWIW, we recommend disabling C-states in the BIOS for NexentaStor systems. C-states are evil. -- richard On Oct 31, 2011, at 9:46 PM, Lachlan Mulcahy wrote: > Hi All, > > > We did not have the latest firmware on the HBA - through a lot of pain I > managed to boot into an MS-DOS disk and run t
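
Where a BIOS change is impractical, deep C-states can often also be curbed from the OS side via /etc/power.conf; the exact keywords vary by release, so verify against power.conf(4) before relying on this sketch:

    # /etc/power.conf (illustrative entries)
    cpu-deep-idle disable
    cpupm disable
    # apply the new settings
    pmconfig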

Re: [zfs-discuss] Poor relative performance of SAS over SATA drives

2011-10-31 Thread Richard Elling
On Oct 26, 2011, at 7:56 PM, weiliam.hong wrote: > > Questions: > 1. Why do the SG SAS drives degrade to <10 MB/s while WD RE4 remain consistent > at >100MB/s after 10-15 min? > 2. Why do the SG SAS drives show only 70+ MB/s when the published figures > are > 100MB/s (refer here)? Are the SAS driv

Re: [zfs-discuss] Log disk with all ssd pool?

2011-10-30 Thread Richard Elling
On Oct 27, 2011, at 11:04 PM, Mark Wolek wrote: > Still kicking around this idea and didn’t see it addressed in any of the > threads before the forum closed. > > If one made an all ssd pool, would a log/cache drive just slow you down? > Would zil slow you down? In general, a slog makes sens

Re: [zfs-discuss] about btrfs and zfs

2011-10-19 Thread Richard Elling
On Oct 18, 2011, at 6:35 PM, David Magda wrote: > If we've found one bad disk, what are our options? Live with it or replace it :-) -- richard -- ZFS and performance consulting http://www.RichardElling.com VMworld Copenhagen, October 17-20 OpenStorage Summit, San Jose, CA, October 24-27 LISA
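
"Replace it" in practice looks roughly like this, assuming a pool named tank and a spare disk on hand (device names are placeholders):

    zpool status -x                    # confirm which vdev is complaining
    zpool replace tank c1t2d0 c1t5d0   # swap the failed disk for the new one
    zpool status tank                  # watch the resilver progress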

[zfs-discuss] repair [was: about btrfs and zfs]

2011-10-19 Thread Richard Elling
On Oct 18, 2011, at 5:21 PM, Edward Ned Harvey wrote: >> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- >> boun...@opensolaris.org] On Behalf Of Tim Cook >> >> I had and have redundant storage, it has *NEVER* automatically fixed >> it. You're the first person I've heard that has

Re: [zfs-discuss] Wanted: sanity check for a clustered ZFS idea

2011-10-17 Thread Richard Elling
On Oct 15, 2011, at 12:31 PM, Toby Thain wrote: > On 15/10/11 2:43 PM, Richard Elling wrote: >> On Oct 15, 2011, at 6:14 AM, Edward Ned Harvey wrote: >> >>>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- >>>> boun...@opensolaris.org] On Beha

Re: [zfs-discuss] Any info about "System attributes"

2011-10-16 Thread Richard Elling
On Oct 16, 2011, at 10:22 AM, Jesus Cea wrote: > -BEGIN PGP SIGNED MESSAGE- > Hash: SHA1 > > On 16/10/11 18:49, Jesus Cea wrote: >>> These are special on disk blocks for storing file system metadata >>> attributes when there isn't enough space in the bonus buffer >>> area of the on disk v

Re: [zfs-discuss] All (pure) SSD pool rehash

2011-10-16 Thread Richard Elling
On Oct 16, 2011, at 3:56 AM, Jim Klimov wrote: > 2011-09-29 17:15, Zaeem Arshad writes: >> >> >> On Thu, Sep 29, 2011 at 11:33 AM, Garrett D'Amore >> wrote: >> >> >> I think he means, resilver faster. >> >> SSDs can be driven harder, and have more IOPs so we can hit them harder with >> less

Re: [zfs-discuss] Wanted: sanity check for a clustered ZFS idea

2011-10-15 Thread Richard Elling
On Oct 15, 2011, at 6:14 AM, Edward Ned Harvey wrote: >> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- >> boun...@opensolaris.org] On Behalf Of Tim Cook >> >> In my example - probably not a completely clustered FS. >> A clustered ZFS pool with datasets individually owned by >> sp

Re: [zfs-discuss] weird bug with Seagate 3TB USB3 drive

2011-10-15 Thread Richard Elling
On Oct 14, 2011, at 7:02 PM, John D Groenveld wrote: > As a sanity check, I connected the drive to a Windows 7 installation. > I was able to partition, create an NTFS volume on it, eject and > remount it. > > I also tried creating the zpool on my Solaris 10 system, exporting > and trying to impor

Re: [zfs-discuss] Wanted: sanity check for a clustered ZFS idea

2011-10-11 Thread Richard Elling
On Oct 9, 2011, at 10:28 AM, Jim Klimov wrote: > Hello all, > > ZFS developers have for a long time stated that ZFS is not intended, > at least not in near term, for clustered environments (that is, having > a pool safely imported by several nodes simultaneously). However, > many people on forums

Re: [zfs-discuss] tuning zfs_arc_min

2011-10-11 Thread Richard Elling
On Oct 11, 2011, at 2:03 PM, Frank Van Damme wrote: > 2011/10/11 Richard Elling : >>> ZFS Tunables (/etc/system): >>> set zfs:zfs_arc_min = 0x20 >>> set zfs:zfs_arc_meta_limit=0x1 >> >> It is not uncommon to tune arc meta limit

Re: [zfs-discuss] tuning zfs_arc_min

2011-10-11 Thread Richard Elling
On Oct 6, 2011, at 5:19 AM, Frank Van Damme wrote: > Hello, > > quick and stupid question: I'm breaking my head over how to tune > zfs_arc_min on a running system. There must be some magic word to pipe > into mdb -kw but I forgot it. I tried /etc/system but it's still at the > old value after re
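
The "magic word" being asked about is an mdb write expression. A sketch of the usual approach; note that on most releases the live variable is arc_c_min (the zfs_arc_min tunable in /etc/system only seeds it at boot), so confirm the symbol name on your build before poking it:

    echo "arc_c_min/J" | mdb -k               # read the current value (64-bit, hex)
    echo "arc_c_min/Z 0x20000000" | mdb -kw   # set a new ARC minimum, e.g. 512 MB
    kstat -p zfs:0:arcstats:c_min             # confirm via the ARC kstats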

Re: [zfs-discuss] how to remove disk from raid0

2011-10-11 Thread Richard Elling
On Oct 11, 2011, at 2:25 AM, KES wrote: > Hi > > I have the next configuration: 3 disk 1Gb in raid0 > all disks in zfs pool we recommend protecting the data. Friends don't let friends use raid-0. nit: We tend to refer to disk size in bytes (B), not bits (b) > freespace on so raid is 1.5Gb and

Re: [zfs-discuss] zvol space consumption vs ashift, metadata packing

2011-10-10 Thread Richard Elling
[exposed organs below…] On Oct 7, 2011, at 8:25 PM, Daniel Carosone wrote: > On Tue, Oct 04, 2011 at 09:28:36PM -0700, Richard Elling wrote: >> On Oct 4, 2011, at 4:14 PM, Daniel Carosone wrote: >> >>> I sent it twice, because something strange happened on the first se

Re: [zfs-discuss] zvol space consumption vs ashift, metadata packing

2011-10-04 Thread Richard Elling
On Oct 4, 2011, at 4:14 PM, Daniel Carosone wrote: > I sent a zvol from host a, to host b, twice. Host b has two pools, > one ashift=9, one ashift=12. I sent the zvol to each of the pools on > b. The original source pool is ashift=9, and an old revision (2009_06 > because it's still running xen

Re: [zfs-discuss] All (pure) SSD pool rehash

2011-09-28 Thread Richard Elling
On Sep 27, 2011, at 6:30 PM, Fajar A. Nugraha wrote: > On Wed, Sep 28, 2011 at 8:21 AM, Edward Ned Harvey >> So again: Not a problem if you're making your pool out of SSD's. > > Big problem if your system is already using most of the available IOPS during > normal operation. Resilvers are thrott

Re: [zfs-discuss] Replacement for X25-E

2011-09-20 Thread Richard Elling
On Sep 20, 2011, at 12:21 AM, Markus Kovero wrote: > Hi, I was wondering do you guys have any recommendations as replacement for > Intel X25-E as it is being EOL’d? Mainly as for log device. Can you rank your priorities: + cost/IOPS + cost + latency + predictable l

Re: [zfs-discuss] remove wrongly added device from zpool

2011-09-19 Thread Richard Elling
more below… On Sep 19, 2011, at 9:51 AM, Fred Liu wrote: >> >> No, but your pool is not imported. >> > > YES. I see. >> and look to see which disk is missing"? >> >> The label, as displayed by "zdb -l" contains the heirarchy of the >> expected pool config. >> The contents are used to build th

Re: [zfs-discuss] remove wrongly added device from zpool

2011-09-19 Thread Richard Elling
On Sep 19, 2011, at 9:16 AM, Fred Liu wrote: >> >> For each disk, look at the output of "zdb -l /dev/rdsk/DISKNAMEs0". >> 1. Confirm that each disk provides 4 labels. >> 2. Build the vdev tree by hand and look to see which disk is missing >> >> This can be tedious and time consuming. > > Do I ne
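
The tedium can be scripted; a rough sketch that loops over candidate disks and counts the LABEL headers zdb prints (the device list and slice naming are assumptions to adjust for the system at hand):

    for d in /dev/rdsk/c*t*d*s0; do
        n=$(zdb -l "$d" 2>/dev/null | grep -c '^LABEL')
        echo "$d: $n of 4 labels readable"
    done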

Re: [zfs-discuss] remove wrongly added device from zpool

2011-09-19 Thread Richard Elling
rd > > Thanks. > > Fred > >> -Original Message- >> From: Fred Liu >> Sent: Monday, September 19, 2011 22:28 >> To: 'Richard Elling' >> Cc: zfs-discuss@opensolaris.org >> Subject: RE: [zfs-discuss] remove wrongly added device from zpool

Re: [zfs-discuss] remove wrongly added device from zpool

2011-09-19 Thread Richard Elling
On Sep 19, 2011, at 12:10 AM, Fred Liu wrote: > Hi, > > For my carelessness, I added two disks into a raid-z2 zpool as normal data > disk, but in fact > I want to make them as zil devices. You don't mention which OS you are using, but for the past 5 years of [Open]Solaris releases, the system
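
For anyone wanting to avoid the same slip: the intent-log syntax needs the "log" keyword, and a dry run will show exactly where the devices would land. A sketch with placeholder device names:

    zpool add -n tank log mirror c4t0d0 c4t1d0   # preview the layout change without committing
    zpool add tank log mirror c4t0d0 c4t1d0      # then add the mirrored slog for real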

Re: [zfs-discuss] zfs destroy snapshot runs out of memory bug

2011-09-14 Thread Richard Elling
Question below… On Sep 14, 2011, at 12:07 PM, Paul Kraus wrote: > On Wed, Sep 14, 2011 at 2:30 PM, Richard Elling > wrote: > >> I don't recall a bug with that description. However, there are several bugs >> that >> relate to how the internals work that were

Re: [zfs-discuss] zfs destroy snapshot runs out of memory bug

2011-09-14 Thread Richard Elling
On Sep 14, 2011, at 9:50 AM, Paul Kraus wrote: >I know there was (is ?) a bug where a zfs destroy of a large > snapshot would run a system out of kernel memory, but searching the > list archives and on defects.opensolaris.org I cannot find it. Could > someone here explain the failure mechanism

Re: [zfs-discuss] bad seagate drive?

2011-09-11 Thread Richard Elling
On Sep 11, 2011, at 3:41 AM, Matt Harrison wrote: > Hi list, > > I've got a system with 3 WD and 3 seagate drives. Today I got an email that > zpool status indicated one of the seagate drives as REMOVED. The removed state can be the result of a transport issue. If this is a Solaris-based OS,
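
On a Solaris-based OS the usual next steps are to ask FMA and the transport layer what they saw before blaming the drive itself; a sketch, with pool and device names as placeholders:

    zpool status -v tank       # which device shows REMOVED
    fmadm faulty               # any faults diagnosed by FMA?
    cfgadm -al                 # is the target still visible on the bus?
    iostat -En c2t3d0          # per-device error counters
    zpool online tank c2t3d0   # if it was only a transport hiccup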

Re: [zfs-discuss] BAD WD drives - defective by design?

2011-09-08 Thread Richard Elling
On Sep 7, 2011, at 2:05 AM, Roy Sigurd Karlsbakk wrote: >> The common use for desktop drives is having a single disk without >> redundancy.. If a sector is feeling bad, it's better if it tries a bit >> harder to recover it than just say "blah, there was a bit of dirt in >> the corner.. I don't fee

Re: [zfs-discuss] zfs send and dedupe

2011-09-06 Thread Richard Elling
On Sep 6, 2011, at 9:01 PM, Freddie Cash wrote: > Just curious if anyone has looked into the relationship between zpool dedupe, > zfs zend dedupe, memory use, and network throughput. > Yes. > For example, does 'zfs send -D' use the same DDT as the pool? > No. > Or does it require more memory
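
For context, the flag under discussion deduplicates only within the stream being generated, independently of the pool DDT; a typical invocation looks like this (hosts and dataset names are made up):

    # build a dedup'd incremental stream and receive it on another host
    zfs send -D -i tank/vm@snap1 tank/vm@snap2 | \
        ssh backuphost zfs receive -d backup/vm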

Re: [zfs-discuss] BAD WD drives - defective by design?

2011-09-06 Thread Richard Elling
On Aug 29, 2011, at 2:07 PM, Roy Sigurd Karlsbakk wrote: > Hi all > > It seems recent WD drives that aren't "Raid edition" can cause rather a lot > of problems on RAID systems. We have a few machines with LSI controllers > (6801/6081/9201) and we're seeing massive errors occurring. The usual pat

Re: [zfs-discuss] Does the zpool cache file affect import?

2011-08-29 Thread Richard Elling
Hi Gary, We use this method to implement NexentaStor HA-Cluster and, IIRC, Solaris Cluster uses shared cachefiles, too. More below... On Aug 29, 2011, at 11:13 AM, Gary Mills wrote: > I have a system with ZFS root that imports another zpool from a start > method. It uses a separate cache file f
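
The mechanics referred to here, roughly: the shared pool is given a non-default cachefile so the boot-time import of the root pool ignores it, and the start method imports it explicitly from that cachefile. A sketch with hypothetical paths:

    # keep the shared pool out of the default /etc/zfs/zpool.cache
    zpool set cachefile=/etc/cluster/zpool.cache tank
    # start method: import using (and continuing to maintain) that cachefile
    zpool import -c /etc/cluster/zpool.cache -o cachefile=/etc/cluster/zpool.cache tank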

Re: [zfs-discuss] zfs send and zfs destroy at the same time

2011-08-28 Thread Richard Elling
On Aug 28, 2011, at 5:55 AM, Edward Ned Harvey wrote: > What do you expect to happen if you're in progress doing a zfs send, and then > simultaneously do a zfs destroy of the snapshot you're sending? It depends on the release. For modern implementations, a hold is placed on the snapshot and it

Re: [zfs-discuss] ZFS raidz on top of hardware raid0

2011-08-27 Thread Richard Elling
On Aug 26, 2011, at 4:02 PM, Brandon High wrote: > On Fri, Aug 12, 2011 at 6:34 PM, Tom Tang wrote: >> Suppose I want to build a 100-drive storage system, wondering if there is >> any disadvantages for me to setup 20 arrays of HW RAID0 (5 drives each), >> then setup ZFS file system on these 20

Re: [zfs-discuss] solaris 10u8 hangs with message Disconnected command timeout for Target 0

2011-08-17 Thread Richard Elling
On Aug 15, 2011, at 11:17 PM, Ding Honghui wrote: > My solaris storage hangs. I login to the console and there is messages[1] > display on the console. > I can't login into the console and seems the IO is totally blocked. > > The system is solaris 10u8 on Dell R710 with disk array Dell MD3000. 2

Re: [zfs-discuss] Intel 320 as ZIL?

2011-08-15 Thread Richard Elling
On Aug 11, 2011, at 1:16 PM, Ray Van Dolson wrote: > On Thu, Aug 11, 2011 at 01:10:07PM -0700, Ian Collins wrote: >> On 08/12/11 08:00 AM, Ray Van Dolson wrote: >>> Are any of you using the Intel 320 as ZIL? It's MLC based, but I >>> understand its wear and performance characteristics can be bum

Re: [zfs-discuss] matching zpool versions to development builds

2011-08-09 Thread Richard Elling
On Aug 8, 2011, at 9:01 AM, John Martin wrote: > Is there a list of zpool versions for development builds? > > I found: > > http://blogs.oracle.com/stw/entry/zfs_zpool_and_file_system Since Oracle no longer shares that info, you might look inside the firewall :-) > > where it says Solaris 11

Re: [zfs-discuss] Large scale performance query

2011-08-09 Thread Richard Elling
On Aug 8, 2011, at 4:01 PM, Peter Jeremy wrote: > On 2011-Aug-08 17:12:15 +0800, Andrew Gabriel > wrote: >> periodic scrubs to cater for this case. I do a scrub via cron once a >> week on my home system. Having almost completely filled the pool, this >> was taking about 24 hours. However, now

Re: [zfs-discuss] Question about WD drives with Super Micro systems

2011-08-06 Thread Richard Elling
On Aug 6, 2011, at 9:56 AM, Roy Sigurd Karlsbakk wrote: >> In my experience, SATA drives behind SAS expanders just don't work. >> They "fail" in the manner you >> describe, sooner or later. Use SAS and be happy. > > Funny thing is Hitachi and Seagate drives work stably, whereas WD drives tend >

Re: [zfs-discuss] Question about WD drives with Super Micro systems

2011-08-06 Thread Richard Elling
On Aug 6, 2011, at 9:45 AM, Roy Sigurd Karlsbakk wrote: > Hi all > > We have a few servers with WD Black (and some green) drives on Super Micro > systems. We've seen both drives work well with direct attach, but with LSI > controllers and Super Micro's SAS expanders, well, that's another story.

Re: [zfs-discuss] Disable ZIL - persistent

2011-08-05 Thread Richard Elling
On Aug 5, 2011, at 6:14 AM, Darren J Moffat wrote: > On 08/05/11 13:11, Edward Ned Harvey wrote: >> After a certain rev, I know you can set the "sync" property, and it >> takes effect immediately, and it's persistent across reboots. But that >> doesn't apply to Solaris 10. >> >> My question: Is
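
For reference, the two mechanisms being contrasted, with the usual caveats about disabling synchronous semantics:

    # newer releases: per-dataset, takes effect immediately, persists across reboots
    zfs set sync=disabled tank/scratch
    zfs get sync tank/scratch

    # Solaris 10: global tunable, requires a reboot; add to /etc/system
    #   set zfs:zil_disable = 1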

Re: [zfs-discuss] ZFS Fragmentation issue - examining the ZIL

2011-08-01 Thread Richard Elling
On Aug 1, 2011, at 2:16 PM, Neil Perrin wrote: > In general the blogs conclusion is correct . When file systems get full there > is > fragmentation (happens to all file systems) and for ZFS the pool uses gang > blocks of smaller blocks when there are insufficient large blocks. > However, the ZIL

Re: [zfs-discuss] NexentaCore 3.1 - ZFS V. 28

2011-07-31 Thread Richard Elling
On Jul 31, 2011, at 8:20 AM, Eugen Leitl wrote: > On Sun, Jul 31, 2011 at 05:19:07AM -0700, Erik Trimble wrote: > >> >> Yes. You can attach a ZIL or L2ARC device anytime after the pool is created. > > Excellent. :-) > >> Also, I think you want an Intel 320, NOT the 311, for use as a ZIL. T

Re: [zfs-discuss] Gen-ATA read sector errors

2011-07-29 Thread Richard Elling
Thanks Jens, I have a vdbench profile and script that will run the new SNIA Solid State Storage (SSS) Performance Test Suite (PTS). I'd be happy to share if anyone is interested. -- richard On Jul 28, 2011, at 7:10 AM, Jens Elkner wrote: > Hi, > > Roy Sigurd Karlsbakk wrote: >> Crucial RealSSD

Re: [zfs-discuss] Gen-ATA read sector errors

2011-07-29 Thread Richard Elling
On Jul 28, 2011, at 4:55 AM, Koopmann, Jan-Peter wrote: > Hi, > > my system is running oi148 on a super micro X8SIL-F board. I have two pools > (2 disc mirror, 4 disc RAIDZ) with RAID level SATA drives. (Hitachi HUA72205 > and SAMSUNG HE103UJ). The system runs as expected however every few day

Re: [zfs-discuss] SSD vs "hybrid" drive - any advice?

2011-07-24 Thread Richard Elling
On Jul 21, 2011, at 4:08 PM, Gordon Ross wrote: > I'm looking to upgrade the disk in a high-end laptop (so called > "desktop replacement" type). I use it for development work, > runing OpenIndiana (native) with lots of ZFS data sets. > > These "hybrid" drives look kind of interesting, i.e. for a

Re: [zfs-discuss] Replacement disks for Sun X4500

2011-07-07 Thread Richard Elling
On Jul 7, 2011, at 3:33 PM, nathan wrote: > On 7/07/2011 3:12 PM, X4 User wrote: >> I am bumping this thread because I too have the same question ... can I put >> modern 3TB disks (hitachi deskstars) into an old x4500 ? X4500 uses the LSI 1068e. AFAIK, that HBA does not support disks > 2TB for a

Re: [zfs-discuss] 512b vs 4K sectors

2011-07-04 Thread Richard Elling
Thomas, On Jul 4, 2011, at 9:53 AM, Thomas Nau wrote: > Richard > > > On 07/04/2011 03:58 PM, Richard Elling wrote: >> On Jul 4, 2011, at 6:42 AM, Lanky Doodle wrote: >> >>> Hiya, >>> >>> I've been doing a lot of research surround

Re: [zfs-discuss] 512b vs 4K sectors

2011-07-04 Thread Richard Elling
On Jul 4, 2011, at 6:42 AM, Lanky Doodle wrote: > Hiya, > > I've been doing a lot of research surrounding this and ZFS, including some > posts on here, though I am still left scratching my head. > > I am planning on using slow RPM drives for a home media server, and it's > these that seem to

Re: [zfs-discuss] write cache partial-disk pools (was Server with 4 drives, how to configure ZFS?)

2011-07-02 Thread Richard Elling
On Jul 2, 2011, at 6:39 AM, Edward Ned Harvey wrote: >> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- >> boun...@opensolaris.org] On Behalf Of Edward Ned Harvey >> >> Conclusion: Yes it matters to enable the write_cache. > > Now the question of whether or not it matters to us

Re: [zfs-discuss] Fixing txg commit frequency

2011-06-26 Thread Richard Elling
On Jun 24, 2011, at 5:29 AM, Sašo Kiselkov wrote: > Hi All, > > I'd like to ask about whether there is a method to enforce a certain txg > commit frequency on ZFS. I'm doing a large amount of video streaming > from a storage pool while also slowly continuously writing a constant > volume of data

Re: [zfs-discuss] Cannot format 2.5TB ext disk (EFI)

2011-06-23 Thread Richard Elling
On Jun 23, 2011, at 1:13 PM, Kitty Tam wrote: > I wonder if there is a limit on the size of disk to mount for Solaris. > I was able to run "format" on a WD 1TB disk several months ago. > The diff is that it's a 2.5TB one this time. > 2TB limit for 32-bit Solaris. If you hit this, then you'll fin
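
A quick way to check whether the 32-bit limit applies is to confirm which kernel is running:

    isainfo -kv    # reports whether a 32-bit or 64-bit kernel is in use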

Re: [zfs-discuss] write cache partial-disk pools (was Server with 4 drives, how to configure ZFS?)

2011-06-21 Thread Richard Elling
On Jun 21, 2011, at 8:18 AM, Garrett D'Amore wrote: >> >> Does that also go through disksort? Disksort doesn't seem to have any >> concept of priorities (but I haven't looked in detail where it plugs in to >> the whole framework). >> >>> So it might make better sense for ZFS to keep the disk qu

Re: [zfs-discuss] Server with 4 drives, how to configure ZFS?

2011-06-20 Thread Richard Elling
On Jun 15, 2011, at 1:33 PM, Nomen Nescio wrote: > Has there been any change to the server hardware with respect to number of > drives since ZFS has come out? Many of the servers around still have an even > number of drives (2, 4) etc. and it seems far from optimal from a ZFS > standpoint. All you

Re: [zfs-discuss] write cache partial-disk pools (was Server with 4 drives, how to configure ZFS?)

2011-06-20 Thread Richard Elling
On Jun 20, 2011, at 6:31 AM, Gary Mills wrote: > On Sun, Jun 19, 2011 at 08:03:25AM -0700, Richard Elling wrote: >> On Jun 19, 2011, at 6:28 AM, Edward Ned Harvey wrote: >>>> From: Richard Elling [mailto:richard.ell...@gmail.com] >>>> Sent: Saturday, June 18, 2011

Re: [zfs-discuss] write cache partial-disk pools (was Server with 4 drives, how to configure ZFS?)

2011-06-19 Thread Richard Elling
On Jun 19, 2011, at 6:04 AM, Andrew Gabriel wrote: > Richard Elling wrote: >> Actually, all of the data I've gathered recently shows that the number of >> IOPS does not significantly increase for HDDs running random workloads. >> However the response time does :-( My

Re: [zfs-discuss] write cache partial-disk pools (was Server with 4 drives, how to configure ZFS?)

2011-06-19 Thread Richard Elling
On Jun 19, 2011, at 6:28 AM, Edward Ned Harvey wrote: >> From: Richard Elling [mailto:richard.ell...@gmail.com] >> Sent: Saturday, June 18, 2011 7:47 PM >> >> Actually, all of the data I've gathered recently shows that the number of >> IOPS does not significant

Re: [zfs-discuss] write cache partial-disk pools (was Server with 4 drives, how to configure ZFS?)

2011-06-18 Thread Richard Elling
On Jun 16, 2011, at 8:05 PM, Daniel Carosone wrote: > On Thu, Jun 16, 2011 at 10:40:25PM -0400, Edward Ned Harvey wrote: >>> From: Daniel Carosone [mailto:d...@geek.com.au] >>> Sent: Thursday, June 16, 2011 10:27 PM >>> >>> Is it still the case, as it once was, that allocating anything other >>>

Re: [zfs-discuss] Is ZFS internal reservation excessive?

2011-06-18 Thread Richard Elling
On Jun 17, 2011, at 4:07 PM, MasterCATZ wrote: > >> > ok what is the Point of the RESERVE > > When we can not even delete a file when there is no space left !!! > > if they are going to have a RESERVE they should make it a little smarter and > maybe have the FS use some of that free space so w

Re: [zfs-discuss] OpenIndiana | ZFS | scrub | network | awful slow

2011-06-18 Thread Richard Elling
On Jun 16, 2011, at 3:36 PM, Sven C. Merckens wrote: > Hi roy, Hi Dan, > > many thanks for Your responses. > > I am using napp-it to control the OpenSolaris-Systems > The napp-it-interface shows a dedup factor of 1.18x on System 1 and 1.16x on > System 2. You're better off disabling dedup for

Re: [zfs-discuss] zfs global hot spares?

2011-06-18 Thread Richard Elling
more below... On Jun 16, 2011, at 2:27 AM, Fred Liu wrote: > Fixing a typo in my last thread... > >> -Original Message- >> From: Fred Liu >> Sent: Thursday, June 16, 2011 17:22 >> To: 'Richard Elling' >> Cc: Jim Klimov; zfs-discuss@opensolaris.or

[zfs-discuss] Finding disks [was: # disks per vdev]

2011-06-18 Thread Richard Elling
On Jun 17, 2011, at 12:55 AM, Lanky Doodle wrote: > Thanks Richard. > > How does ZFS enumerate the disks? In terms of listing them does it do them > logically, i.e; > > controller #1 (motherboard) >| >|--- disk1 >|--- disk2 > controller #3 >|--- disk3 >|--- disk4 >|--- d

Re: [zfs-discuss] question about COW and snapshots

2011-06-16 Thread Richard Elling
On Jun 16, 2011, at 12:09 AM, Simon Walter wrote: > On 06/16/2011 09:09 AM, Erik Trimble wrote: >> We had a similar discussion a couple of years ago here, under the title "A >> Versioning FS". Look through the archives for the full discussion. >> >> The jist is that application-level versioning

Re: [zfs-discuss] # disks per vdev

2011-06-16 Thread Richard Elling
On Jun 16, 2011, at 2:07 AM, Lanky Doodle wrote: > Thanks guys. > > I have decided to bite the bullet and change to 2TB disks now rather than go > through all the effort using 1TB disks and then maybe changing in 6-12 months > time or whatever. The price difference between 1TB and 2TB disks is

Re: [zfs-discuss] zfs global hot spares?

2011-06-15 Thread Richard Elling
my point exactly, more below... On Jun 15, 2011, at 8:20 PM, Fred Liu wrote: >> This is only true if the pool is not protected. Please protect your >> pool with mirroring or raidz*. >> -- richard >> > > Yes. We use a raidz2 without any spares. In theory, with one disk broken, > there should be

Re: [zfs-discuss] question about COW and snapshots

2011-06-15 Thread Richard Elling
On Jun 15, 2011, at 4:45 AM, Darren J Moffat wrote: > On 06/15/11 12:29, Edward Ned Harvey wrote: >>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- >>> boun...@opensolaris.org] On Behalf Of Richard Elling >>> >>> That would suck worse. &

Re: [zfs-discuss] Disk replacement need to scan full pool ?

2011-06-15 Thread Richard Elling
On Jun 15, 2011, at 4:22 AM, Pawel Jakub Dawidek wrote: > On Tue, Jun 14, 2011 at 11:49:56AM -0700, Bill Sommerfeld wrote: >> On 06/14/11 04:15, Rasmus Fauske wrote: >>> I want to replace some slow consumer drives with new edc re4 ones but >>> when I do a replace it needs to scan the full pool and

Re: [zfs-discuss] zfs global hot spares?

2011-06-15 Thread Richard Elling
On Jun 15, 2011, at 2:44 AM, Fred Liu wrote: >> -Original Message- >> From: Richard Elling [mailto:richard.ell...@gmail.com] >> Sent: Wednesday, June 15, 2011 14:25 >> To: Fred Liu >> Cc: Jim Klimov; zfs-discuss@opensolaris.org >> Subject: Re: [zfs-discuss] zfs

Re: [zfs-discuss] zfs global hot spares?

2011-06-14 Thread Richard Elling
On Jun 14, 2011, at 10:31 PM, Fred Liu wrote: > >> -Original Message- >> From: Richard Elling [mailto:richard.ell...@gmail.com] >> Sent: Wednesday, June 15, 2011 11:59 >> To: Fred Liu >> Cc: Jim Klimov; zfs-discuss@opensolaris.org >> Subject: Re: [zfs-discus

Re: [zfs-discuss] zfs global hot spares?

2011-06-14 Thread Richard Elling
On Jun 14, 2011, at 2:36 PM, Fred Liu wrote: > What is the difference between warm spares and hot spares? Warm spares are connected and powered. Hot spares are connected, powered, and automatically brought online to replace a "failed" disk. The reason I'm leaning towards warm spares is because I

Re: [zfs-discuss] question about COW and snapshots

2011-06-14 Thread Richard Elling
On Jun 14, 2011, at 10:25 AM, Simon Walter wrote: > I'm looking to create a NAS with versioning for non-technical users (Windows > and Mac). I want the users to be able to simply save a file, and a > revision/snapshot is created. I could use a revision control software like > SVN (it has autove
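
The ZFS-native answer to this is frequent snapshots exposed to the clients (Previous Versions over CIFS, the .zfs/snapshot directory over NFS). A minimal cron-driven sketch; dataset name and schedule are assumptions:

    #!/bin/sh
    # snapshot script run from cron every 15 minutes
    zfs snapshot tank/home@auto-$(date +%Y%m%d-%H%M)

    # users then browse older versions under the hidden snapshot directory:
    #   ls /tank/home/.zfs/snapshot/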

Re: [zfs-discuss] zfs global hot spares?

2011-06-14 Thread Richard Elling
On Jun 14, 2011, at 10:38 AM, Jim Klimov wrote: > 2011-06-14 19:23, Richard Elling writes: >> On Jun 14, 2011, at 5:18 AM, Jim Klimov wrote: >> >>> Hello all, >>> >>> Is there any sort of a "Global Hot Spare" feature in ZFS, >>> i

Re: [zfs-discuss] zfs global hot spares?

2011-06-14 Thread Richard Elling
On Jun 14, 2011, at 5:18 AM, Jim Klimov wrote: > Hello all, > > Is there any sort of a "Global Hot Spare" feature in ZFS, > i.e. that one sufficiently-sized spare HDD would automatically > be pulled into any faulted pool on the system? Yes. See the ZFS Admin Guide section on Designating Hot Spa
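
The Admin Guide commands in question, for reference: a disk is designated as a spare per pool, and the same disk may be listed as a spare in more than one pool (device names are placeholders):

    zpool add tank spare c2t3d0
    zpool add backup spare c2t3d0   # the same disk can serve several pools
    zpool status tank               # spares appear in their own section of the output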

Re: [zfs-discuss] Pool error in a hex file name

2011-06-13 Thread Richard Elling
On Jun 12, 2011, at 1:53 PM, James Sutherland wrote: > A reboot and then another scrub fixed this. Reboot made no difference. So > after the reboot I started another scrub and now the pool shows clean. > > So the sequence was like this: > 1. zpool reported ioerrors after a scrub with an erro

Re: [zfs-discuss] Impact of L2ARC device failure and SSD recommendations

2011-06-12 Thread Richard Elling
On Jun 12, 2011, at 5:04 PM, Edmund White wrote: > On 6/12/11 6:18 PM, "Jim Klimov" wrote: >> 2011-06-12 23:57, Richard Elling wrote: >>> >>> How long should it wait? Before you answer, read through the thread: >>> http://lists.illumos.org/piper

Re: [zfs-discuss] Impact of L2ARC device failure and SSD recommendations

2011-06-12 Thread Richard Elling
On Jun 12, 2011, at 4:18 PM, Jim Klimov wrote: > 2011-06-12 23:57, Richard Elling wrote: >> >> How long should it wait? Before you answer, read through the thread: >> http://lists.illumos.org/pipermail/developer/2011-April/001996.html >> Then add your

Re: [zfs-discuss] Impact of L2ARC device failure and SSD recommendations

2011-06-12 Thread Richard Elling
On Jun 11, 2011, at 9:26 AM, Jim Klimov wrote: > 2011-06-11 19:15, Pasi Kärkkäinen writes: >> On Sat, Jun 11, 2011 at 08:35:19AM -0500, Edmund White wrote: >>>I've had two incidents where performance tanked suddenly, leaving the VM >>>guests and Nexenta SSH/Web consoles inaccessible and req

Re: [zfs-discuss] Impact of L2ARC device failure and SSD recommendations

2011-06-12 Thread Richard Elling
On Jun 11, 2011, at 6:35 AM, Edmund White wrote: > Posted in greater detail at Server Fault - > http://serverfault.com/q/277966/13325 > Replied in greater detail at same. > I have an HP ProLiant DL380 G7 system running NexentaStor. The server has > 36GB RAM, 2 LSI 9211-8i SAS controllers (no S

Re: [zfs-discuss] ZFS receive checksum mismatch

2011-06-12 Thread Richard Elling
On Jun 11, 2011, at 5:46 AM, Edward Ned Harvey wrote: >> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- >> boun...@opensolaris.org] On Behalf Of Jim Klimov >> >> See FEC suggestion from another poster ;) > > Well, of course, all storage mediums have built-in hardware FEC. At lea

Re: [zfs-discuss] Tuning disk failure detection?

2011-06-12 Thread Richard Elling
On May 10, 2011, at 9:18 AM, Ray Van Dolson wrote: > We recently had a disk fail on one of our whitebox (SuperMicro) ZFS > arrays (Solaris 10 U9). > > The disk began throwing errors like this: > > May 5 04:33:44 dev-zfs4 scsi: [ID 243001 kern.warning] WARNING: > /pci@0,0/pci8086,3410@9/pci15d9

Re: [zfs-discuss] ZFS receive checksum mismatch

2011-06-10 Thread Richard Elling
On Jun 10, 2011, at 8:59 AM, David Magda wrote: > On Fri, June 10, 2011 07:47, Edward Ned Harvey wrote: > >> #1 A single bit error causes checksum mismatch and then the whole data >> stream is not receivable. > > I wonder if it would be worth adding a (toggleable?) forward error > correction (F

Re: [zfs-discuss] L2ARC and poor read performance

2011-06-08 Thread Richard Elling
On Jun 7, 2011, at 9:12 AM, Phil Harman wrote: > Ok here's the thing ... > > A customer has some big tier 1 storage, and has presented 24 LUNs (from four > RAID6 groups) to an OI148 box which is acting as a kind of iSCSI/FC bridge > (using some of the cool features of ZFS along the way). The OI

Re: [zfs-discuss] [illumos-Developer] zfs refratio property

2011-06-06 Thread Richard Elling
Beautiful, ship it -- richard On Jun 6, 2011, at 6:56 PM, Eric Schrock wrote: > Good catch. For consistency, I updated the property description to match > "compressratio" exactly. > > - Eric > > On Mon, Jun 6, 2011 at 9:39 PM, Mark Musante wrote: > > minor quibble: compressratio uses a low

Re: [zfs-discuss] [illumos-Developer] zfs refratio property

2011-06-06 Thread Richard Elling
n also be used as the property name. So maybe the > full name should be "refcompressratio" as the long name and "refratio" as the > short name would make sense, as that matches "compressratio". Matt? > > - Eric > > > On Mon, Jun 6, 2011 at 7:0

Re: [zfs-discuss] [illumos-Developer] zfs refratio property

2011-06-06 Thread Richard Elling
On Jun 6, 2011, at 2:54 PM, Yuri Pankov wrote: > On Mon, Jun 06, 2011 at 02:19:50PM -0700, Matthew Ahrens wrote: >> I have implemented a new property for ZFS, "refratio", which is the >> compression ratio for referenced space (the "compressratio" is the ratio for >> used space). We are using this

Re: [zfs-discuss] Metadata (DDT) Cache Bias

2011-06-04 Thread Richard Elling
On Jun 3, 2011, at 6:25 AM, Roch wrote: > > Edward Ned Harvey writes: >> Based on observed behavior measuring performance of dedup, I would say, some >> chunk of data and its associated metadata seem have approximately the same >> "warmness" in the cache. So when the data gets evicted, the associ
