Re: [zfs-discuss] Quick zfs send -i performance questions

2011-05-04 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Rich Teer > > Also related to this is a performance question. My initial test involved > copying a 50 MB zfs file system to a new disk, which took 2.5 minutes > to complete. That strikes me as

Re: [zfs-discuss] gaining speed with l2arc

2011-05-04 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Frank Van Damme > > another dedup question. I just installed an ssd disk as l2arc. This > is a backup server with 6 GB RAM (ie I don't often read the same data > again), basically it has a lar

Re: [zfs-discuss] multiple disk failures cause zpool hang

2011-05-04 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of TianHong Zhao > > There seems to be a few threads about zpool hang, do we have a > workaround to resolve the hang issue without rebooting? > > In my case, I have a pool with disks from exte

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-29 Thread Edward Ned Harvey
> From: Neil Perrin [mailto:neil.per...@oracle.com] > > The size of these structures will vary according to the release you're running. > You can always find out the size for a particular system using ::sizeof within > mdb. For example, as super user : > > : xvm-4200m2-02 ; echo ::sizeof ddt_entry
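
For reference, the quoted technique looks like this when run as root (a minimal sketch; structure names vary across releases, and arc_buf_hdr_t is added here only as a further example):

    # print the in-kernel size of a structure via mdb's ::sizeof dcmd
    echo ::sizeof ddt_entry | mdb -k
    echo ::sizeof arc_buf_hdr_t | mdb -k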

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-29 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Edward Ned Harvey > > What does it mean / what should you do, if you run that command, and it > starts spewing messages like this? > leaked space: vdev 0, offset 0x3bd8096e

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-29 Thread Edward Ned Harvey
> From: Edward Ned Harvey > I saved the core and ran again. This time it spewed "leaked space" messages > for an hour, and completed. But the final result was physically impossible (it > counted up 744k total blocks, which means something like 3Megs per block in > my 2.

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-29 Thread Edward Ned Harvey
> From: Richard Elling [mailto:richard.ell...@gmail.com] > > > Worse yet, your arc consumption could be so large, that > > PROCESSES don't fit in ram anymore. In this case, your processes get > pushed > > out to swap space, which is really bad. > > This will not happen. The ARC will be asked to

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-28 Thread Edward Ned Harvey
> From: Tomas Ögren [mailto:st...@acc.umu.se] > > zdb -bb pool Oy - this is scary - Thank you by the way for that command - I've been gathering statistics across a handful of systems now ... What does it mean / what should you do, if you run that command, and it starts spewing messages like this
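
For anyone following along, the command under discussion walks every block in the pool and prints per-type block statistics. It is read-only but can run for hours on a large pool (pool name is an example):

    zdb -bb tank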

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-28 Thread Edward Ned Harvey
> From: Brandon High [mailto:bh...@freaks.com] > Sent: Thursday, April 28, 2011 5:33 PM > > On Wed, Apr 27, 2011 at 9:26 PM, Edward Ned Harvey > wrote: > > Correct me if I'm wrong, but the dedup sha256 checksum happens in > addition > > to (not instead of) th

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-28 Thread Edward Ned Harvey
> From: Erik Trimble [mailto:erik.trim...@oracle.com] > > OK, I just re-looked at a couple of things, and here's what I /think/ is > the correct numbers. > > I just checked, and the current size of this structure is 0x178, or 376 > bytes. > > Each ARC entry, which points to either an L2ARC item
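
Taking the 376-byte entry size quoted above at face value, a rough DDT RAM estimate is just unique-block count times entry size. A sketch with bc(1), assuming 1 TiB of unique data at an average 64 KiB block size (illustrative numbers only):

    # (2^40 bytes / 2^16 bytes per block) entries * 376 bytes, in GiB
    echo 'scale=2; (2^40 / 2^16) * 376 / 2^30' | bc

That comes to roughly 5.9 GiB for the table alone, before any per-entry ARC header overhead.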

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-27 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Neil Perrin > > No, that's not true. The DDT is just like any other ZFS metadata and can be > split over the ARC, > cache device (L2ARC) and the main pool devices. An infrequently referenced >

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-27 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Erik Trimble > > (BTW, is there any way to get a measurement of number of blocks consumed > per zpool?  Per vdev?  Per zfs filesystem?)  *snip*. > > > you need to use zdb to see what the curr

Re: [zfs-discuss] Spare drives sitting idle in raidz2 with failed drive

2011-04-27 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Lamp Zy > > One of my drives failed in Raidz2 with two hot spares: > What zpool & zfs version are you using? What OS version? Are all the drives precisely the same size (Same make/model numb

[zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-25 Thread Edward Ned Harvey
There are a lot of conflicting references on the Internet, so I'd really like to solicit actual experts (ZFS developers or people who have physical evidence) to weigh in on this... After searching around, the reference I found to be the most seemingly useful was Erik's post here: http://openso

Re: [zfs-discuss] zpool scrub on b123

2011-04-18 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Karl Rossing > > So i figured out after a couple of scrubs and fmadm faulty that drive > c9t15d0 was bad. > > My pool now looks like this: > NAME STATE READ WRITE CKSUM

Re: [zfs-discuss] X4540 no next-gen product?

2011-04-09 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Julian King > > Actually I think our figures more or less agree. 12 disks = 7 mbits > 48 disks = 4x7mbits I know that sounds like terrible performance to me. Any time I benchmark disks, a che

Re: [zfs-discuss] cannot destroy snapshot

2011-04-05 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Paul Kraus > > I have a zpool with one dataset and a handful of snapshots. I > cannot delete two of the snapshots. The message I get is "dataset is > busy". Neither fuser or lsof show anyth

Re: [zfs-discuss] A resilver record?

2011-03-24 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Giovanni Tirloni > > We've production servers with 9 vdev's (mirrored) doing `zfs send` > daily to backup servers with with 7 vdev's (each 3-disk raidz1). Some > backup servers that receive dat

Re: [zfs-discuss] Any use for extra drives?

2011-03-24 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Nomen Nescio > > Hi ladies and gents, I've got a new Solaris 10 development box with ZFS > mirror root using 500G drives. I've got several extra 320G drives and I'm > wondering if there's any w

Re: [zfs-discuss] A resilver record?

2011-03-21 Thread Edward Ned Harvey
> From: Richard Elling [mailto:richard.ell...@gmail.com] > > There is no direct correlation between the number of blocks and resilver > time. Incorrect. Although there are possibly some cases where you could be bandwidth limited, it's certainly not true in general. If Richard were correct, then

Re: [zfs-discuss] A resilver record?

2011-03-21 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Paul Kraus > > Is resilver time related to the amount of data (TBs) or the number > of objects (file + directory counts) ? I have seen zpools with lots of > data in very few files resilver

Re: [zfs-discuss] A resilver record?

2011-03-21 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Edward Ned Harvey > > it depends on the total number of used blocks that must > be resilvered on the resilvering device, multiplied by the access time for > the resilvering
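
A back-of-envelope illustration of that model, with purely hypothetical numbers: 10 million used blocks on the resilvering device at about 8 ms per random access gives

    # blocks * seconds per access, converted to hours
    echo '10^7 * 0.008 / 3600' | bc -l

or roughly 22 hours, regardless of how many bytes those blocks hold.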

Re: [zfs-discuss] A resilver record?

2011-03-21 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Richard Elling > > How many times do we have to rehash this? The speed of resilver is > dependent on the amount of data, the distribution of data on the resilvering > device, speed of the resil

Re: [zfs-discuss] dual protocol on one file system?

2011-03-18 Thread Edward Ned Harvey
> From: David Magda [mailto:dma...@ee.ryerson.ca] > > >> 2. Unix / Solaris limitation of 16 / 32 group membership > >> > > I don't think you're going to eliminate #2. > > #2 is fixed in OpenSolaris as of snv_129: > > The new limit is 1024--the same maximum number of groups as Windows > supports.

Re: [zfs-discuss] dual protocol on one file system?

2011-03-17 Thread Edward Ned Harvey
> From: Paul Kraus [mailto:p...@kraus-haus.org] > > > Samba even has modules for mapping NT RIDs to Nix UIDs/GIDs as well as a > module that > > supports "Previous Versions" using the hosts native snapshot method. > > But... if SAMBA has native AD authentication, and the underlying > OS can a

Re: [zfs-discuss] "Invisible" snapshot/clone

2011-03-17 Thread Edward Ned Harvey
> From: Freddie Cash [mailto:fjwc...@gmail.com] > > On Wed, Mar 16, 2011 at 7:23 PM, Edward Ned Harvey > wrote: > > P.S. If your primary goal is to use ZFS, you would probably be better > > switching to nexenta or openindiana or solaris 11 express, because they all >

Re: [zfs-discuss] "Invisible" snapshot/clone

2011-03-16 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Peter Jeremy > > I am in the process of upgrading from FreeBSD-8.1 with ZFSv14 to > FreeBSD-8.2 with ZFSv15 and, following a crash, have run into a > problem with ZFS claiming a snapshot or clo

Re: [zfs-discuss] dual protocol on one file system?

2011-03-15 Thread Edward Ned Harvey
> From: Paul Kraus [mailto:p...@kraus-haus.org] > > > So if you were to enable the sharesmb property on a zfs filesystem in sol10, > > you just get an error or something? > > Nope. The command succeeds and the flag gets set on the dataset. > Since there is no kernel process to read the flag a

Re: [zfs-discuss] dual protocol on one file system?

2011-03-14 Thread Edward Ned Harvey
> From: Paul Kraus [mailto:p...@kraus-haus.org] > > > I have a solaris 10u8 box I'm logged into right now.  man zfs shows that > sharesmb is > > available as an option.  I suppose I could be wrong, if either the man page is > wrong, > > or if I'm incorrectly assuming the zfs sharesmb property uses

Re: [zfs-discuss] dual protocol on one file system?

2011-03-14 Thread Edward Ned Harvey
> From: James C. McPherson [mailto:j...@opensolaris.org] > Sent: Monday, March 14, 2011 9:20 AM > > > Just for clarity: > > The in-kernel CIFS service is indeed available in solaris 10. > > Are you really, really sure about that? Please point the RFE number > which tracks the inclusion in a Solar

Re: [zfs-discuss] dual protocol on one file system?

2011-03-14 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Richard Elling > > >> Yes. > >> http://www.unix.com/man-page/OpenSolaris/1m/idmap/ > > > > This appears to be only for OpenSolaris/Solaris 11, and not Solaris 10. Or am > I missing something? >

Re: [zfs-discuss] dual protocol on one file system?

2011-03-12 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Fred Liu > > Is there a mapping mechanism like what DataOnTap does to map the > permission/acl between NIS/LDAP and AD? There are a lot of solutions available. But if you don't already have a

Re: [zfs-discuss] dual protocol on one file system?

2011-03-12 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > > Is it possible to run both CIFS and NFS on one file system over ZFS? Yes. I do.
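
A minimal sketch of that dual-protocol setup using the in-kernel services (dataset name hypothetical):

    zfs set sharenfs=on tank/shared
    zfs set sharesmb=on tank/shared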

Re: [zfs-discuss] zfs-nfs-sun 7000 series

2011-03-12 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Mike MacNeil > > I have a Sun 7000 series NAS device, I am trying to back it up via NFS mount > on a Solaris 10 server running Networker 7.6.1.  It works but it is extremely > slow, I have test

Re: [zfs-discuss] Slices and reservations Was: Re: How long should an empty destroy take? snv_134

2011-03-09 Thread Edward Ned Harvey
> From: Bob Friesenhahn [mailto:bfrie...@simple.dallas.tx.us] > > The disk write cache helps with the step where data is > sent to the disks since it is much faster to write into the disk write > cache than to write to the media. Besides helping with unburdening > the I/O channel, Having the di

Re: [zfs-discuss] Slices and reservations Was: Re: How long should an empty destroy take? snv_134

2011-03-09 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Nathan Kroenert > > Bottom line is that at 75 IOPS per spindle won't impress many people, > and that's the sort of rate you get when you disable the disk cache. It's the same rate that you get

Re: [zfs-discuss] Slices and reservations Was: Re: How long should an empty destroy take? snv_134

2011-03-08 Thread Edward Ned Harvey
> From: Jim Dunham [mailto:james.dun...@oracle.com] > > ZFS only uses system RAM for read caching, If your email address didn't say oracle, I'd just simply come out and say you're crazy, but I'm trying to keep an open mind here... Correct me where the following statement is wrong: ZFS uses sys

Re: [zfs-discuss] Slices and reservations Was: Re: How long should an empty destroy take? snv_134

2011-03-07 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Brandon High > > Write caching will be disabled on devices that use slices. It can be > turned back on by using format -e My experience has been, despite what the BPG (or whatever) says, this
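
For reference, turning the write cache back on is an interactive format(1M) session along these lines (menu names from memory, so treat this as a sketch):

    format -e
    # select the disk, then from the menus:
    #   cache -> write_cache -> enable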

Re: [zfs-discuss] Slices and reservations Was: Re: How long should an empty destroy take? snv_134

2011-03-07 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Yaverot > > >I recommend: > >When creating your new pool, use slices of the new disks, which are 99% of > >the size of the new disks instead of using the whole new disks. Because > >this is a

Re: [zfs-discuss] How long should an empty destroy take? snv_134

2011-03-07 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Yaverot > > rpool remains 1% inuse. tank reports 100% full (with 1.44G free), I recommend: When creating your new pool, use slices of the new disks, which are 99% of the size of the new disks
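
A sketch of that recommendation (device names hypothetical): label each new disk with an s0 slice at roughly 99% of capacity using format(1M), then build the pool on the slices rather than the whole disks:

    # after sizing s0 to ~99% of each disk:
    zpool create newtank mirror c1t0d0s0 c1t1d0s0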

Re: [zfs-discuss] How long should an empty destroy take? snv_134

2011-03-06 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Yaverot > > I'm (still) running snv_134 on a home server. My main pool "tank" filled up > last night ( 1G free remaining ). There is (or was) a bug that would sometimes cause the system to cr

Re: [zfs-discuss] How long should an empty destroy take? snv_134

2011-03-06 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Yaverot > > We're heading into the 3rd hour of the zpool destroy on "others". > The system isn't locked up, as it responds to local keyboard input, and I bet you, you're in a semi-crashed stat

Re: [zfs-discuss] cannot replace c10t0d0 with c10t0d0: device is too small

2011-03-04 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Tim Cook > > The response was that Sun makes sure all drives > are exactly the same size (although I do recall someone on this forum having > this issue with Sun OEM disks as well).   That was

Re: [zfs-discuss] External SATA drive enclosures + ZFS?

2011-02-27 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Brandon High > > I would avoid USB, since it can be less reliable than other connection > methods. That's the impression I get from older posts made by Sun Take that a step further. Anything

Re: [zfs-discuss] ZFS Performance

2011-02-27 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of David Blasingame Oracle > > Keep pool space under 80% utilization to maintain pool performance. For what it's worth, the same is true for any other filesystem too. What really matters is the

Re: [zfs-discuss] Best way/issues with large ZFS send?

2011-02-16 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Eff Norwood > > Are there any gotchas that I should be aware of? Also, at what level should I > be taking the snapshot to do the zfs send? At the primary pool level or at the > zvol level? Sinc

Re: [zfs-discuss] ZFS and Virtual Disks

2011-02-14 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Mark Creamer > > 1. Should I create individual iSCSI LUNs and present those to the VMware > ESXi host as iSCSI storage, and then create virtual disks from there on each > Solaris VM? > >  - or

Re: [zfs-discuss] Very bad ZFS write performance. Ok Read.

2011-02-12 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Edward Ned Harvey > > What does "zpool status" tell you? Also, "zpool iostat 5"

Re: [zfs-discuss] Very bad ZFS write performance. Ok Read.

2011-02-11 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of ian W > > Hope you can still help here. Solaris 11 Express. > x86 platform E6600 with 6GB of RAM > > I have a fairly new S11E box Im using as a file server. > 3x1.5TB HDD's in a raidz pool. J

Re: [zfs-discuss] Very bad ZFS write performance. Ok Read.

2011-02-11 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Markus Kovero > > > I noticed recently that write rate has dropped off and through testing now I > am getting 35MB/sec writes. The pool is around 50-60% full. > > Hi, do you have your zfs pref

Re: [zfs-discuss] deduplication requirements

2011-02-07 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Michael > > Core i7 2600 CPU > 16gb DDR3 Memory > 64GB SSD for ZIL (optional) > > Would this produce decent results for deduplication of 16TB worth of pools > or would I need more RAM still?

Re: [zfs-discuss] RAID Failure Calculator (for 8x 2TB RAIDZ)

2011-02-06 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Matthew Angelo > > My question is, how do I determine which of the following zpool and > vdev configuration I should run to maximize space whilst mitigating > rebuild failure risk? > > 1. 2x R

Re: [zfs-discuss] ZFS and TRIM - No need for TRIM

2011-02-05 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Erik Trimble > > Bottom line, it's maybe $50 in parts, plus a $100k VLSI Engineer to do > the design. Well, only if there's a high volume. If you're only going to sell 10,000 of these device

Re: [zfs-discuss] ZFS and TRIM

2011-02-05 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Orvar Korvar > > So, the bottom line is that Solaris 11 Express can not use TRIM and SSD? Is > that the conclusion? So, it might not be a good idea to use a SSD? Even without TRIM, SSD's are s

Re: [zfs-discuss] ZFS and spindle speed (7.2k / 10k / 15k)

2011-02-03 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of taemun > > Uhm. Higher RPM = higher linear speed of the head above the platter = > higher throughput. If the bit pitch (ie the size of each bit on the platter) is the Nope. That's what I orig

Re: [zfs-discuss] ZFS and spindle speed (7.2k / 10k / 15k)

2011-02-02 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of James > > block sizes and a ZFS 4kB recordsize* would mean much lower IOPS. e.g. > Seagate Constellations are around 75-141MB/s(inner-outer) and 75MB/s is > 18750 4kB IOPS! However I've jus

Re: [zfs-discuss] ZFS and spindle speed (7.2k / 10k / 15k)

2011-02-02 Thread Edward Ned Harvey
> From: Brandon High [mailto:bh...@freaks.com] > > That's assuming that the drives have the same number of platters. 500G > drives are generally one platter, and 2T drives are generally 4 > platters. Same size platters, same density. The 500G drive could be Wouldn't multiple platters of the same

Re: [zfs-discuss] ZFS and spindle speed (7.2k / 10k / 15k)

2011-02-02 Thread Edward Ned Harvey
> From: Richard Elling [mailto:richard.ell...@gmail.com] > > They aren't. Check the datasheets, the max media bandwidth is almost > always > published. I looked for said data sheets before posting. Care to drop any pointers? I didn't see any drives publishing figures for throughput to/from pla

Re: [zfs-discuss] ZFS and spindle speed (7.2k / 10k / 15k)

2011-02-02 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of James > > I assume while a 2TB 7200rpm drive may have better sequential IOPS than a > 500GB, it will not be double and therefore, Don't know why you'd assume that. I would assume a 2TB drive

Re: [zfs-discuss] ZFS and L2ARC memory requirements?

2011-02-01 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Roy Sigurd Karlsbakk > > > Even *with* an L2ARC, your memory requirements are *substantial*, > > because the L2ARC itself needs RAM. 8 GB is simply inadequate for your > > test. > > With 50TB

Re: [zfs-discuss] ZFS dedup success stories (take two)

2011-02-01 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Roy Sigurd Karlsbakk > > Sorry about the initial post - it was wrong. The hardware configuration was > right, but for initial tests, I use NFS, meaning sync writes. This obviously > stresses th

Re: [zfs-discuss] ZFS dedup success stories?

2011-02-01 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Roy Sigurd Karlsbakk > > > Dedup is *hungry* for RAM. 8GB is not enough for your configuration, > > most likely! First guess: double the RAM and then you might have > > better > > luck. > > I

Re: [zfs-discuss] ZFS and spindle speed (7.2k / 10k / 15k)

2011-02-01 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of James > > I’m trying to select the appropriate disk spindle speed for a proposal and > would welcome any experience and opinions (e.g. has anyone actively > chosen 10k/15k drives for a new ZFS

Re: [zfs-discuss] Best choice - file system for system

2011-01-30 Thread Edward Ned Harvey
> From: Peter Jeremy [mailto:peter.jer...@alcatel-lucent.com] > Sent: Sunday, January 30, 2011 3:48 PM > > >2- When you want to restore, it's all or nothing. If a single bit is > >corrupt in the data stream, the whole stream is lost. > > > OTOH, it renders ZFS send useless for backup or archival

Re: [zfs-discuss] ZFS dedup success stories?

2011-01-30 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Roy Sigurd Karlsbakk > > We're getting down to 10-20MB/s on Oh, one more thing. How are you measuring the speed? Because if you have data which is highly compressible, or highly duplicated,

Re: [zfs-discuss] ZFS dedup success stories?

2011-01-30 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Roy Sigurd Karlsbakk > > The test box is a supermicro thing with a Core2duo CPU, 8 gigs of RAM, 4 gigs > of mirrored SLOG and some 150 gigs of L2ARC on 80GB x25-M drives. The > data drives are

Re: [zfs-discuss] ZFS and TRIM

2011-01-29 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Edward Ned Harvey > > My google-fu is coming up short on this one...  I didn't see that it had been > discussed in a while ... BTW, there were a bunch of places where peop

Re: [zfs-discuss] multiple disk failure

2011-01-29 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Mike Tancsa > > NAMESTATE READ WRITE CKSUM > tank1 UNAVAIL 0 0 0 insufficient replicas > raidz1ONLINE 0 0 0 >

[zfs-discuss] ZFS and TRIM

2011-01-29 Thread Edward Ned Harvey
My google-fu is coming up short on this one... I didn't see that it had been discussed in a while ... What is the status of ZFS support for TRIM? For the pool in general... and... Specifically for the slog and/or cache???

Re: [zfs-discuss] Lower latency ZIL Option?: SSD behind Controller BB Write Cache

2011-01-29 Thread Edward Ned Harvey
> From: Deano [mailto:de...@rattie.demon.co.uk] > > Hi Edward, > Do you have a source for the 8KiB block size data? whilst we can't avoid the > SSD controller in theory we can change the smallest size we present to the > SSD to 8KiB fairly easily... I wonder if that would help the controller do a

Re: [zfs-discuss] Best choice - file system for system

2011-01-28 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Tristram Scott > > When it comes to dumping and restoring filesystems, there is still no official > replacement for the ufsdump and ufsrestore. Let's go into that a little bit. If you're pi
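
For comparison, the zfs send/receive equivalent of a dump/restore cycle looks like this (names hypothetical):

    zfs snapshot tank/home@dump
    zfs send tank/home@dump | zfs receive backuppool/home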

Re: [zfs-discuss] Lower latency ZIL Option?: SSD behind Controller BB Write Cache

2011-01-28 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Eff Norwood > > We tried all combinations of OCZ SSDs including their PCI based SSDs and > they do NOT work as a ZIL. After a very short time performance degrades > horribly and for the OCZ dri

Re: [zfs-discuss] OS restore to first hard disk on ZFS while booted from second had disk

2011-01-24 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Ddl > > But now the trouble is if I need to perform a full Solaris OS restore, I need to > perform an installation of the Solaris 10 base OS and install Networker 7.6 > client to call back the

Re: [zfs-discuss] raidz2 read performance

2011-01-20 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Bueno > > Is it true that a raidz2 pool has a read capacity equal to the slowest disk's IOPs > per second ?? No, but there's a grain of truth there. Random reads: * If you have a single proce

Re: [zfs-discuss] How well does zfs mirror handle temporary disk offlines?

2011-01-18 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Erik Trimble > > As far as what the resync does: ZFS does "smart" resilvering, in that > it compares what the "good" side of the mirror has against what the > "bad" side has, and only copies t

Re: [zfs-discuss] Request for comments: L2ARC, ZIL, RAM, and slow storage

2011-01-18 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Karl Wagner > > Consider the situation where someone has a large amount of off-site data > storage (of the order of 100s of TB or more). They have a slow network link > to this storage. > > My

Re: [zfs-discuss] configuration

2011-01-18 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Trusty Twelve > > Hello, I'm going to build home server. System is deployed on 8 GB USB flash > drive. I have two identical 2 TB HDD and 250 GB one. Could you please > recommend me ZFS configur

Re: [zfs-discuss] (Fletcher+Verification) versus (Sha256+No Verification)

2011-01-15 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Peter Taps > > Thank you for sharing the calculations. In lay terms, for Sha256, how many > blocks of data would be needed to have one collision? There is no point in making a generalization a

Re: [zfs-discuss] zfs send & tape autoloaders?

2011-01-13 Thread Edward Ned Harvey
> From: Richard Elling [mailto:richard.ell...@gmail.com] > > > This means the current probability of any sha256 collision in all of the > > data in the whole world, using a ridiculously small block size, assuming all > > ... it doesn't matter. Other posters have found collisions and a collision >

Re: [zfs-discuss] Running on Dell hardware?

2011-01-12 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Ben Rockwood > > If you're still having issues go into the BIOS and disable C-States, if you > haven't already. It is responsible for most of the problems with 11th Gen > PowerEdge. I did

Re: [zfs-discuss] (Fletcher+Verification) versus (Sha256+No Verification)

2011-01-12 Thread Edward Ned Harvey
> Edward, this is OT but may I suggest you use something like Wolfram Alpha > to perform your calculations a bit more comfortably? Wow, that's pretty awesome. Thanks.

Re: [zfs-discuss] zfs send & tape autoloaders?

2011-01-11 Thread Edward Ned Harvey
heheheh, ok, I'll stop after this. ;-) Sorry for going on so long, but it was fun. In 2007, IDC estimated the size of the digital universe in 2010 would be 1 zettabyte. (10^21 bytes) This would be 2.5*10^17 blocks of 4000 bytes. http://www.emc.com/collateral/analyst-reports/expanding-digital-

Re: [zfs-discuss] zfs send & tape autoloaders?

2011-01-11 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of David Strom > > So, has anyone had any experience with piping a zfs send through dd (so > as to set the output blocksize for the tape drive) to a tape autoloader > in "autoload" mode? Yes. I'
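
A minimal sketch of the pipeline being asked about (tape device and block size are examples):

    # back up: fix the tape record size with dd's obs=
    zfs send tank/data@backup | dd of=/dev/rmt/0n obs=512k
    # restore: read back with the matching ibs=
    dd if=/dev/rmt/0n ibs=512k | zfs receive tank/restored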

Re: [zfs-discuss] (Fletcher+Verification) versus (Sha256+No Verification)

2011-01-11 Thread Edward Ned Harvey
In case you were wondering "how big is n before the probability of collision becomes remotely possible, slightly possible, or even likely?" Given a fixed probability of collision p, the formula to calculate n is: n = 0.5 + sqrt( ( 0.25 + 2*l(1-p)/l((d-1)/d) ) ) (That's just the same equation
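
Plugging example numbers into that inverse formula, in the thread's own bc(1) notation (l() is bc's natural log; the target probability p = 10^-15 is an arbitrary choice): how many 256-bit hashes can be written before the collision probability reaches p?

    echo 'scale=160; d=2^256; p=1/10^15; 0.5 + sqrt(0.25 + 2*l(1-p)/l((d-1)/d))' | bc -l

which comes out on the order of 10^31 hashes.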

Re: [zfs-discuss] (Fletcher+Verification) versus (Sha256+No Verification)

2011-01-11 Thread Edward Ned Harvey
For anyone who still cares: I'm calculating the odds of a sha256 collision in an extremely large zpool, containing 2^35 blocks of data, and no repetitions. The formula on wikipedia for the birthday problem is: p(n;d) ~= 1-( (d-1)/d )^( 0.5*n*(n-1) ) In this case, n=2^35 d=2^256 The problem is,

Re: [zfs-discuss] (Fletcher+Verification) versus (Sha256+No Verification)

2011-01-11 Thread Edward Ned Harvey
> From: Lassi Tuura [mailto:l...@cern.ch] > > bc -l <<EOF > scale=150 > define bday(n, h) { return 1 - e(-(n^2)/(2*h)); } > bday(2^35, 2^256) > bday(2^35, 2^256) * 10^57 > EOF > > Basically, ~5.1 * 10^-57. > > Seems your number was correct, although I am not sure how you arrived at > it. The number w

Re: [zfs-discuss] (Fletcher+Verification) versus (Sha256+No Verification)

2011-01-10 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Edward Ned Harvey > > ~= 5.1E-57 Bah. My math is wrong. I was never very good at P&S. I'll ask someone at work tomorrow to look at it and show me the folly. Wikipedi

Re: [zfs-discuss] (Fletcher+Verification) versus (Sha256+No Verification)

2011-01-10 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of David Magda > > Knowing exactly how the math (?) works is not necessary, but understanding Understanding the math is not necessary, but it is pretty easy. And unfortunately it becomes kind of

Re: [zfs-discuss] (Fletcher+Verification) versus (Sha256+No Verification)

2011-01-10 Thread Edward Ned Harvey
> From: Pawel Jakub Dawidek [mailto:p...@freebsd.org] > > Well, I find it quite reasonable. If your block is referenced 100 times, > it is probably quite important. If your block is referenced 1 time, it is probably quite important. Hence redundancy in the pool. > There are many corruption po

Re: [zfs-discuss] (Fletcher+Verification) versus (Sha256+No Verification)

2011-01-10 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Peter Taps > > I haven't looked at the link that talks about the probability of collision. > Intuitively, I still wonder how the chances of collision can be so low. We are > reducing a 4K block

Re: [zfs-discuss] (Fletcher+Verification) versus (Sha256+No Verification)

2011-01-09 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Pawel Jakub Dawidek > > Dedupditto doesn't work exactly that way. You can have at most 3 copies > of your block. Dedupditto minimal value is 100. The first copy is > created on first write, the

Re: [zfs-discuss] A few questions

2011-01-09 Thread Edward Ned Harvey
> From: Pasi Kärkkäinen [mailto:pa...@iki.fi] > > Other OS's have had problems with the Broadcom NICs aswell.. Yes. The difference is, when I go to support.dell.com and punch in my service tag, I can download updated firmware and drivers for RHEL that (at least supposedly) solve the problem. I

Re: [zfs-discuss] (Fletcher+Verification) versus (Sha256+No Verification)

2011-01-08 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Robert Milkowski > > What if you you are storing lots of VMDKs? > One corrupted block which is shared among hundreds of VMDKs will affect > all of them. > And it might be a block containing met

Re: [zfs-discuss] A few questions

2011-01-08 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Garrett D'Amore > > When you purchase NexentaStor from a top-tier Nexenta Hardware Partner, > you get a product that has been through a rigorous qualification process How do I do this, exactly

Re: [zfs-discuss] (Fletcher+Verification) versus (Sha256+No Verification)

2011-01-07 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Bakul Shah > > See http://en.wikipedia.org/wiki/Birthday_problem -- in > particular see section 5.1 and the probability table of > section 3.4. They say "The expected number of n-bit hashes th

Re: [zfs-discuss] (Fletcher+Verification) versus (Sha256+No Verification)

2011-01-06 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Peter Taps > > Perhaps (Sha256+NoVerification) would work 99.99% of the time. But Append 50 more 9's on there. 99.% See below. >

Re: [zfs-discuss] ZFS on top of ZFS iSCSI share

2011-01-06 Thread Edward Ned Harvey
> From: Brandon High [mailto:bh...@freaks.com] > > On Thu, Jan 6, 2011 at 5:33 AM, Edward Ned Harvey > wrote: > > But the conclusion remains the same:  Redundancy is not needed at the > > client, because any data corruption the client could possibly see from the > >

Re: [zfs-discuss] ZFS on top of ZFS iSCSI share

2011-01-06 Thread Edward Ned Harvey
> From: Bob Friesenhahn [mailto:bfrie...@simple.dallas.tx.us] > > > > But that's precisely why it's an impossible situation. In order for the > > client to see a checksum error, it must have read some corrupt data from > the > > pool storage, but the server will never allow that to happen. So the

Re: [zfs-discuss] A few questions

2011-01-06 Thread Edward Ned Harvey
> From: Khushil Dep [mailto:khushil@gmail.com] > > I've deployed large SAN's on both SuperMicro 825/826/846 and Dell > R610/R710's and I've not found any issues so far. I always make a point of > installing Intel chipset NIC's on the DELL's and disabling the Broadcom ones > but other than that

Re: [zfs-discuss] A few questions

2011-01-06 Thread Edward Ned Harvey
> From: Bob Friesenhahn [mailto:bfrie...@simple.dallas.tx.us] > > On Wed, 5 Jan 2011, Edward Ned Harvey wrote: > > with regards to ZFS and all the other projects relevant to solaris.) > > > > I know in the case of SGE/OGE, it's officially closed source now. As of
