> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Rich Teer
>
> Also related to this is a performance question. My initial test involved
> copying a 50 MB zfs file system to a new disk, which took 2.5 minutes
> to complete. That strikes me as
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Frank Van Damme
>
> another dedup question. I just installed an ssd disk as l2arc. This
> is a backup server with 6 GB RAM (ie I don't often read the same data
> again), basically it has a lar
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of TianHong Zhao
>
> There seems to be a few threads about zpool hang, do we have a
> workaround to resolve the hang issue without rebooting ?
>
> In my case, I have a pool with disks from exte
> From: Neil Perrin [mailto:neil.per...@oracle.com]
>
> The size of these structures will vary according to the release you're running.
> You can always find out the size for a particular system using ::sizeof within
> mdb. For example, as super user:
>
> : xvm-4200m2-02 ; echo ::sizeof ddt_entry
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>
> What does it mean / what should you do, if you run that command, and it
> starts spewing messages like this?
> leaked space: vdev 0, offset 0x3bd8096e
> From: Edward Ned Harvey
> I saved the core and ran again. This time it spewed "leaked space" messages
> for an hour, and completed. But the final result was physically impossible (it
> counted up 744k total blocks, which means something like 3Megs per block in
> my 2.
> From: Richard Elling [mailto:richard.ell...@gmail.com]
>
> > Worse yet, your arc consumption could be so large, that
> > PROCESSES don't fit in ram anymore. In this case, your processes get pushed
> > out to swap space, which is really bad.
>
> This will not happen. The ARC will be asked to
> From: Tomas Ögren [mailto:st...@acc.umu.se]
>
> zdb -bb pool
Oy - this is scary - Thank you by the way for that command - I've been
gathering statistics across a handful of systems now ...
What does it mean / what should you do, if you run that command, and it
starts spewing messages like this
> From: Brandon High [mailto:bh...@freaks.com]
> Sent: Thursday, April 28, 2011 5:33 PM
>
> On Wed, Apr 27, 2011 at 9:26 PM, Edward Ned Harvey
> wrote:
> > Correct me if I'm wrong, but the dedup sha256 checksum happens in addition
> > to (not instead of) th
> From: Erik Trimble [mailto:erik.trim...@oracle.com]
>
> OK, I just re-looked at a couple of things, and here's what I /think/ is
> the correct numbers.
>
> I just checked, and the current size of this structure is 0x178, or 376
> bytes.
>
> Each ARC entry, which points to either an L2ARC item
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Neil Perrin
>
> No, that's not true. The DDT is just like any other ZFS metadata and can be
> split over the ARC,
> cache device (L2ARC) and the main pool devices. An infrequently referenced
>
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Erik Trimble
>
> (BTW, is there any way to get a measurement of number of blocks consumed
> per zpool? Per vdev? Per zfs filesystem?) *snip*.
>
>
> you need to use zdb to see what the curr
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Lamp Zy
>
> One of my drives failed in Raidz2 with two hot spares:
>
What zpool & zfs version are you using? What OS version?
Are all the drives precisely the same size (Same make/model numb
There are a lot of conflicting references on the Internet, so I'd really
like to solicit actual experts (ZFS developers or people who have physical
evidence) to weigh in on this...
After searching around, the reference I found to be the most seemingly
useful was Erik's post here:
http://openso
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Karl Rossing
>
> So i figured out after a couple of scrubs and fmadm faulty that drive
> c9t15d0 was bad.
>
> My pool now looks like this:
> NAME STATE READ WRITE CKSUM
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Julian King
>
> Actually I think our figures more or less agree. 12 disks = 7 mbits
> 48 disks = 4x7mbits
I know that sounds like terrible performance to me. Any time I benchmark
disks, a che
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Paul Kraus
>
> I have a zpool with one dataset and a handful of snapshots. I
> cannot delete two of the snapshots. The message I get is "dataset is
> busy". Neither fuser or lsof show anyth
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Giovanni Tirloni
>
> We've production servers with 9 vdev's (mirrored) doing `zfs send`
> daily to backup servers with with 7 vdev's (each 3-disk raidz1). Some
> backup servers that receive dat
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Nomen Nescio
>
> Hi ladies and gents, I've got a new Solaris 10 development box with ZFS
> mirror root using 500G drives. I've got several extra 320G drives and I'm
> wondering if there's any w
> From: Richard Elling [mailto:richard.ell...@gmail.com]
>
> There is no direct correlation between the number of blocks and resilver
> time.
Incorrect.
Although there are possibly some cases where you could be bandwidth limited,
it's certainly not true in general.
If Richard were correct, then
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Paul Kraus
>
> Is resilver time related to the amount of data (TBs) or the number
> of objects (file + directory counts) ? I have seen zpools with lots of
> data in very few files resilver
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>
> it depends on the total number of used blocks that must
> be resilvered on the resilvering device, multiplied by the access time for
> the resilvering
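The model quoted above (used blocks multiplied by per-block access time) can be put to rough numbers. The block count and access time below are illustrative assumptions, not measurements from any system in this thread:

```python
# Back-of-envelope sketch of the "blocks x access time" resilver model.
# Both inputs are hypothetical, chosen only to illustrate the arithmetic.
blocks = 50 * 10 ** 6   # assumed count of used blocks on the resilvering disk
access_s = 0.008        # ~8 ms average random access (seek + rotation)

hours = blocks * access_s / 3600
print(round(hours, 1))  # ~111.1 hours if every block costs a full random access
```

Under this model a fragmented resilver is bounded by the device's random IOPS, which is why block count rather than raw capacity dominates.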
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Richard Elling
>
> How many times do we have to rehash this? The speed of resilver is
> dependent on the amount of data, the distribution of data on the resilvering
> device, speed of the resil
> From: David Magda [mailto:dma...@ee.ryerson.ca]
>
> >> 2. Unix / Solaris limitation of 16 / 32 group membership
> >>
> > I don't think you're going to eliminate #2.
>
> #2 is fixed in OpenSolaris as of snv_129:
>
> The new limit is 1024--the same maximum number of groups as Windows
> supports.
> From: Paul Kraus [mailto:p...@kraus-haus.org]
>
> > Samba even has modules for mapping NT RIDs to Nix UIDs/GIDs as well as a
> > module that supports "Previous Versions" using the hosts native snapshot method.
>
> But... if SAMBA has native AD authentication, and the underlying
> OS can a
> From: Freddie Cash [mailto:fjwc...@gmail.com]
>
> On Wed, Mar 16, 2011 at 7:23 PM, Edward Ned Harvey
> wrote:
> > P.S. If your primary goal is to use ZFS, you would probably be better
> > switching to nexenta or openindiana or solaris 11 express, because they all
>
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Peter Jeremy
>
> I am in the process of upgrading from FreeBSD-8.1 with ZFSv14 to
> FreeBSD-8.2 with ZFSv15 and, following a crash, have run into a
> problem with ZFS claiming a snapshot or clo
> From: Paul Kraus [mailto:p...@kraus-haus.org]
>
> > So if you were to enable the sharesmb property on a zfs filesystem in sol10,
> > you just get an error or something?
>
> Nope. The command succeeds and the flag gets set on the dataset.
> Since there is no kernel process to read the flag a
> From: Paul Kraus [mailto:p...@kraus-haus.org]
>
> > I have a solaris 10u8 box I'm logged into right now. man zfs shows that sharesmb is
> > available as an option. I suppose I could be wrong, if either the man page is wrong,
> > or if I'm incorrectly assuming the zfs sharesmb property uses
> From: James C. McPherson [mailto:j...@opensolaris.org]
> Sent: Monday, March 14, 2011 9:20 AM
>
> > Just for clarity:
> > The in-kernel CIFS service is indeed available in solaris 10.
>
> Are you really, really sure about that? Please point the RFE number
> which tracks the inclusion in a Solar
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Richard Elling
>
> >> Yes.
> >> http://www.unix.com/man-page/OpenSolaris/1m/idmap/
> >
> > This appears to be only for OpenSolaris/Solaris 11, and not Solaris 10. Or am
> > I missing something?
>
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Fred Liu
>
> Is there a mapping mechanism like what DataOnTap does to map the
> permission/acl between NIS/LDAP and AD?
There are a lot of solutions available. But if you don't already have a
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>
> Is it possible to run both CIFS and NFS on one file system over ZFS?
Yes. I do.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Mike MacNeil
>
> I have a Sun 7000 series NAS device, I am trying to back it up via NFS mount
> on a Solaris 10 server running Networker 7.6.1. It works but it is extremely
> slow, I have test
> From: Bob Friesenhahn [mailto:bfrie...@simple.dallas.tx.us]
>
> The disk write cache helps with the step where data is
> sent to the disks since it is much faster to write into the disk write
> cache than to write to the media. Besides helping with unburdening
> the I/O channel,
Having the di
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Nathan Kroenert
>
> Bottom line is that at 75 IOPS per spindle won't impress many people,
> and that's the sort of rate you get when you disable the disk cache.
It's the same rate that you get
> From: Jim Dunham [mailto:james.dun...@oracle.com]
>
> ZFS only uses system RAM for read caching,
If your email address didn't say oracle, I'd just simply come out and say
you're crazy, but I'm trying to keep an open mind here... Correct me where
the following statement is wrong: ZFS uses sys
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Brandon High
>
> Write caching will be disabled on devices that use slices. It can be
> turned back on by using format -e
My experience has been, despite what the BPG (or whatever) says, this
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Yaverot
>
> >I recommend:
> >When creating your new pool, use slices of the new disks, which are 99% of
> >the size of the new disks instead of using the whole new disks. Because
> >this is a
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Yaverot
>
> rpool remains 1% inuse. tank reports 100% full (with 1.44G free),
I recommend:
When creating your new pool, use slices of the new disks, which are 99% of
the size of the new disks
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Yaverot
>
> I'm (still) running snv_134 on a home server. My main pool "tank" filled up
> last night (1G free remaining).
There is (or was) a bug that would sometimes cause the system to cr
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Yaverot
>
> We're heading into the 3rd hour of the zpool destroy on "others".
> The system isn't locked up, as it responds to local keyboard input, and
I bet you, you're in a semi-crashed stat
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Tim Cook
>
> The response was that Sun makes sure all drives
> are exactly the same size (although I do recall someone on this forum having
> this issue with Sun OEM disks as well).
That was
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Brandon High
>
> I would avoid USB, since it can be less reliable than other connection
> methods. That's the impression I get from older posts made by Sun
Take that a step further. Anything
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of David Blasingame Oracle
>
> Keep pool space under 80% utilization to maintain pool performance.
For what it's worth, the same is true for any other filesystem too. What
really matters is the
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Eff Norwood
>
> Are there any gotchas that I should be aware of? Also, at what level should I
> be taking the snapshot to do the zfs send? At the primary pool level or at the
> zvol level? Sinc
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Mark Creamer
>
> 1. Should I create individual iSCSI LUNs and present those to the VMware
> ESXi host as iSCSI storage, and then create virtual disks from there on each
> Solaris VM?
>
> - or
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>
> What does "zpool status" tell you?
Also, "zpool iostat 5"
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of ian W
>
> Hope you can still help here. Solaris 11 Express.
> x86 platform E6600 with 6GB of RAM
>
> I have a fairly new S11E box Im using as a file server.
> 3x1.5TB HDD's in a raidz pool.
J
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Markus Kovero
>
> > I noticed recently that write rate has dropped off and through testing now I
> > am getting 35MB/sec writes. The pool is around 50-60% full.
>
> Hi, do you have your zfs pref
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Michael
>
> Core i7 2600 CPU
> 16gb DDR3 Memory
> 64GB SSD for ZIL (optional)
>
> Would this produce decent results for deduplication of 16TB worth of pools
> or would I need more RAM still?
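One rough way to frame the question above: this thread quotes a ddt_entry size of 376 bytes, so a sketch of the DDT footprint looks like the following. The 128K average block size is an assumption, and the real entry size varies by release, so treat this as order-of-magnitude only:

```python
# Order-of-magnitude DDT size estimate. The 376-byte entry size is the figure
# quoted in this thread; the 128K average block size is an assumption.
def ddt_bytes(pool_bytes, avg_block=128 * 1024, entry_size=376):
    entries = pool_bytes // avg_block   # one DDT entry per unique block
    return entries * entry_size

tb = 10 ** 12
gib = ddt_bytes(16 * tb) / 2 ** 30
print(round(gib, 1))  # ~42.7 GiB of DDT for 16 TB at 128K blocks
```

That is far more than 16 GB of RAM can hold, which is why the thread keeps circling back to L2ARC sizing.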
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Matthew Angelo
>
> My question is, how do I determine which of the following zpool and
> vdev configuration I should run to maximize space whilst mitigating
> rebuild failure risk?
>
> 1. 2x R
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Erik Trimble
>
> Bottom line, it's maybe $50 in parts, plus a $100k VLSI Engineer to do
> the design.
Well, only if there's a high volume. If you're only going to sell 10,000 of
these device
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Orvar Korvar
>
> So, the bottom line is that Solaris 11 Express can not use TRIM and SSD? Is
> that the conclusion? So, it might not be a good idea to use a SSD?
Even without TRIM, SSD's are s
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of taemun
>
> Uhm. Higher RPM = higher linear speed of the head above the platter =
> higher throughput. If the bit pitch (ie the size of each bit on the platter) is the
Nope. That's what I orig
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of James
>
> block sizes and a ZFS 4kB recordsize* would mean much lower IOPS. e.g.
> Seagate Constellations are around 75-141MB/s(inner-outer) and 75MB/s is
> 18750 4kB IOPS! However I've jus
> From: Brandon High [mailto:bh...@freaks.com]
>
> That's assuming that the drives have the same number of platters. 500G
> drives are generally one platter, and 2T drives are generally 4
> platters. Same size platters, same density. The 500G drive could be
Wouldn't multiple platters of the same
> From: Richard Elling [mailto:richard.ell...@gmail.com]
>
> They aren't. Check the datasheets, the max media bandwidth is almost
> always
> published.
I looked for said data sheets before posting. Care to drop any pointers? I
didn't see any drives publishing figures for throughput to/from pla
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of James
>
> I assume while a 2TB 7200rpm drive may have better sequential IOPS than a
> 500GB, it will not be double and therefore,
Don't know why you'd assume that. I would assume a 2TB drive
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Roy Sigurd Karlsbakk
>
> > Even *with* an L2ARC, your memory requirements are *substantial*,
> > because the L2ARC itself needs RAM. 8 GB is simply inadequate for your
> > test.
>
> With 50TB
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Roy Sigurd Karlsbakk
>
> Sorry about the initial post - it was wrong. The hardware configuration was
> right, but for initial tests, I use NFS, meaning sync writes. This obviously
> stresses th
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Roy Sigurd Karlsbakk
>
> > Dedup is *hungry* for RAM. 8GB is not enough for your configuration,
> > most likely! First guess: double the RAM and then you might have
> > better
> > luck.
>
> I
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of James
>
> I’m trying to select the appropriate disk spindle speed for a proposal and
> would welcome any experience and opinions (e.g. has anyone actively
> chosen 10k/15k drives for a new ZFS
> From: Peter Jeremy [mailto:peter.jer...@alcatel-lucent.com]
> Sent: Sunday, January 30, 2011 3:48 PM
>
> >2- When you want to restore, it's all or nothing. If a single bit is
> >corrupt in the data stream, the whole stream is lost.
> >
> OTOH, it renders ZFS send useless for backup or archival
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Roy Sigurd Karlsbakk
>
> We're getting down to 10-20MB/s on
Oh, one more thing. How are you measuring the speed? Because if you have data
which is highly compressible, or highly duplicated,
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Roy Sigurd Karlsbakk
>
> The test box is a supermicro thing with a Core2duo CPU, 8 gigs of RAM, 4 gigs
> of mirrored SLOG and some 150 gigs of L2ARC on 80GB x25-M drives. The
> data drives are
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>
> My google-fu is coming up short on this one... I didn't see that it had been
> discussed in a while ...
BTW, there were a bunch of places where peop
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Mike Tancsa
>
> NAME      STATE    READ WRITE CKSUM
> tank1     UNAVAIL     0     0     0  insufficient replicas
>   raidz1  ONLINE      0     0     0
>
My google-fu is coming up short on this one... I didn't see that it had
been discussed in a while ...
What is the status of ZFS support for TRIM?
For the pool in general...
and...
Specifically for the slog and/or cache???
> From: Deano [mailto:de...@rattie.demon.co.uk]
>
> Hi Edward,
> Do you have a source for the 8KiB block size data? whilst we can't avoid the
> SSD controller in theory we can change the smallest size we present to the
> SSD to 8KiB fairly easily... I wonder if that would help the controller do a
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Tristram Scott
>
> When it comes to dumping and restoring filesystems, there is still no official
> replacement for the ufsdump and ufsrestore.
Let's go into that a little bit. If you're pi
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Eff Norwood
>
> We tried all combinations of OCZ SSDs including their PCI based SSDs and
> they do NOT work as a ZIL. After a very short time performance degrades
> horribly and for the OCZ dri
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Ddl
>
> But now the trouble is if I need to perform a full Solaris OS restore, I need to
> perform an installation of the Solaris 10 base OS and install Networker 7.6
> client to call back the
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Bueno
>
> Is it true that a raidz2 pool has a read capacity equal to the slowest disk's
> IOPs per second ??
No, but there's a grain of truth there.
Random reads:
* If you have a single proce
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Erik Trimble
>
> As far as what the resync does: ZFS does "smart" resilvering, in that
> it compares what the "good" side of the mirror has against what the
> "bad" side has, and only copies t
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Karl Wagner
>
> Consider the situation where someone has a large amount of off-site data
> storage (of the order of 100s of TB or more). They have a slow network link
> to this storage.
>
> My
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Trusty Twelve
>
> Hello, I'm going to build a home server. The system is deployed on an 8 GB USB flash
> drive. I have two identical 2 TB HDDs and a 250 GB one. Could you please
> recommend me a ZFS configur
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Peter Taps
>
> Thank you for sharing the calculations. In lay terms, for Sha256, how many
> blocks of data would be needed to have one collision?
There is no point in making a generalization a
> From: Richard Elling [mailto:richard.ell...@gmail.com]
>
> > This means the current probability of any sha256 collision in all of the
> > data in the whole world, using a ridiculously small block size, assuming all
>
> ... it doesn't matter. Other posters have found collisions and a collision
>
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Ben Rockwood
>
> If you're still having issues go into the BIOS and disable C-States, if you
> haven't already. It is responsible for most of the problems with 11th Gen
> PowerEdge.
I did
> Edward, this is OT but may I suggest you use something like Wolfram Alpha
> to perform your calculations a bit more comfortably?
Wow, that's pretty awesome. Thanks.
heheheh, ok, I'll stop after this. ;-) Sorry for going on so long, but it
was fun.
In 2007, IDC estimated the size of the digital universe in 2010 would be 1
zettabyte (10^21 bytes). This would be 2.5*10^17 blocks of 4000 bytes.
http://www.emc.com/collateral/analyst-reports/expanding-digital-
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of David Strom
>
> So, has anyone had any experience with piping a zfs send through dd (so
> as to set the output blocksize for the tape drive) to a tape autoloader
> in "autoload" mode?
Yes. I'
In case you were wondering "how big is n before the probability of collision
becomes remotely possible, slightly possible, or even likely?"
Given a fixed probability of collision p, the formula to calculate n is:
n = 0.5 + sqrt( ( 0.25 + 2*l(1-p)/l((d-1)/d) ) )
(That's just the same equation
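The formula above is written in bc notation (l() is the natural log). A minimal Python sketch of the same inversion follows; note that ln((d-1)/d) underflows to zero in plain floating point when d = 2^256, because (d-1)/d rounds to exactly 1.0, so log1p is substituted:

```python
import math

# n = 0.5 + sqrt(0.25 + 2*ln(1-p)/ln((d-1)/d)), rewritten with
# ln((d-1)/d) = log1p(-1/d) so the tiny term survives in floating point.
def birthday_n(p, d):
    return 0.5 + math.sqrt(0.25 + 2 * math.log1p(-p) / math.log1p(-1 / d))

# Hashes needed for a 50% chance of some SHA-256 collision:
print(birthday_n(0.5, 2 ** 256))  # ~4.0e38, i.e. roughly 2^128 as expected
```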
For anyone who still cares:
I'm calculating the odds of a sha256 collision in an extremely large zpool,
containing 2^35 blocks of data, and no repetitions.
The formula on wikipedia for the birthday problem is:
p(n;d) ~= 1-( (d-1)/d )^( 0.5*n*(n-1) )
In this case,
n=2^35
d=2^256
The problem is,
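The forward evaluation can be sketched in Python as well, using the approximation formula and the n and d quoted above. Naive floating point fails here because (d-1)/d rounds to exactly 1.0, so the exponent is built with log1p and the result with expm1:

```python
import math

n = 2 ** 35    # unique blocks in the pool
d = 2 ** 256   # possible SHA-256 digests

# p(n; d) ~= 1 - ((d-1)/d)^(n(n-1)/2), with ln((d-1)/d) = log1p(-1/d)
exponent = 0.5 * n * (n - 1) * math.log1p(-1 / d)
p = -math.expm1(exponent)  # 1 - e^exponent without losing the tiny result

print(p)  # ~5.1e-57
```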
> From: Lassi Tuura [mailto:l...@cern.ch]
>
> bc -l <<EOF
> scale=150
> define bday(n, h) { return 1 - e(-(n^2)/(2*h)); }
> bday(2^35, 2^256)
> bday(2^35, 2^256) * 10^57
> EOF
>
> Basically, ~5.1 * 10^-57.
>
> Seems your number was correct, although I am not sure how you arrived at
> it.
The number w
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>
> ~= 5.1E-57
Bah. My math is wrong. I was never very good at P&S. I'll ask someone at
work tomorrow to look at it and show me the folly. Wikipedi
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of David Magda
>
> Knowing exactly how the math (?) works is not necessary, but understanding
Understanding the math is not necessary, but it is pretty easy. And
unfortunately it becomes kind of
> From: Pawel Jakub Dawidek [mailto:p...@freebsd.org]
>
> Well, I find it quite reasonable. If your block is referenced 100 times,
> it is probably quite important.
If your block is referenced 1 time, it is probably quite important. Hence
redundancy in the pool.
> There are many corruption po
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Peter Taps
>
> I haven't looked at the link that talks about the probability of collision.
> Intuitively, I still wonder how the chances of collision can be so low. We are
> reducing a 4K block
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Pawel Jakub Dawidek
>
> Dedupditto doesn't work exactly that way. You can have at most 3 copies
> of your block. Dedupditto minimal value is 100. The first copy is
> created on first write, the
> From: Pasi Kärkkäinen [mailto:pa...@iki.fi]
>
> Other OS's have had problems with the Broadcom NICs aswell..
Yes. The difference is, when I go to support.dell.com and punch in my
service tag, I can download updated firmware and drivers for RHEL that (at
least supposedly) solve the problem. I
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Robert Milkowski
>
> What if you you are storing lots of VMDKs?
> One corrupted block which is shared among hundreds of VMDKs will affect
> all of them.
> And it might be a block containing met
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Garrett D'Amore
>
> When you purchase NexentaStor from a top-tier Nexenta Hardware Partner,
> you get a product that has been through a rigorous qualification process
How do I do this, exactly
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Bakul Shah
>
> See http://en.wikipedia.org/wiki/Birthday_problem -- in
> particular see section 5.1 and the probability table of
> section 3.4.
They say "The expected number of n-bit hashes th
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Peter Taps
>
> Perhaps (Sha256+NoVerification) would work 99.99% of the time. But
Append 50 more 9's on there.
99.%
See below.
>
> From: Brandon High [mailto:bh...@freaks.com]
>
> On Thu, Jan 6, 2011 at 5:33 AM, Edward Ned Harvey
> wrote:
> > But the conclusion remains the same: Redundancy is not needed at the
> > client, because any data corruption the client could possibly see from the
> >
> From: Bob Friesenhahn [mailto:bfrie...@simple.dallas.tx.us]
> >
> > But that's precisely why it's an impossible situation. In order for the
> > client to see a checksum error, it must have read some corrupt data from the
> > pool storage, but the server will never allow that to happen. So the
> From: Khushil Dep [mailto:khushil@gmail.com]
>
> I've deployed large SAN's on both SuperMicro 825/826/846 and Dell
> R610/R710's and I've not found any issues so far. I always make a point of
> installing Intel chipset NIC's on the DELL's and disabling the Broadcom ones
> but other than that
> From: Bob Friesenhahn [mailto:bfrie...@simple.dallas.tx.us]
>
> On Wed, 5 Jan 2011, Edward Ned Harvey wrote:
> > with regards to ZFS and all the other projects relevant to solaris.)
> >
> > I know in the case of SGE/OGE, it's officially closed source now. As of