From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Richard Elling
How many times do we have to rehash this? The speed of resilver is dependent on the amount of data, the distribution of data on the resilvering device, speed of the
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
it depends on the total number of used blocks that must be resilvered on the resilvering device, multiplied by the access time for the resilvering device.
It is a safe
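The rule of thumb quoted above lends itself to a back-of-envelope calculation. Below is a sketch with illustrative numbers (1 TB of used data, 128 KiB average block size, 8 ms average access time); none of these figures come from the thread:

```python
# Sketch: resilver time ~ used blocks on the resilvering device
# multiplied by its average access time. All inputs are assumptions.
def resilver_hours(used_bytes, avg_block_bytes, access_time_ms):
    blocks = used_bytes / avg_block_bytes
    return blocks * (access_time_ms / 1000.0) / 3600.0

# 1 TB of used data in 128 KiB blocks, 8 ms average access time:
print(round(resilver_hours(1e12, 128 * 1024, 8.0), 1))  # about 17 hours
```

The same 1 TB resilvered sequentially at, say, 100 MB/s would finish in under 3 hours, which is why the distribution of the data matters as much as the amount.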
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Paul Kraus
Is resilver time related to the amount of data (TBs) or the number of objects (file + directory counts)? I have seen zpools with lots of data in very few files resilver
From: Richard Elling [mailto:richard.ell...@gmail.com]
There is no direct correlation between the number of blocks and resilver
time.
Incorrect.
Although there are possibly some cases where you could be bandwidth limited,
it's certainly not true in general.
If Richard were correct, then a
From: David Magda [mailto:dma...@ee.ryerson.ca]
2. Unix / Solaris limitation of 16 / 32 group membership
I don't think you're going to eliminate #2.
#2 is fixed in OpenSolaris as of snv_129:
The new limit is 1024--the same maximum number of groups as Windows
supports. Unlikely that
From: Freddie Cash [mailto:fjwc...@gmail.com]
On Wed, Mar 16, 2011 at 7:23 PM, Edward Ned Harvey
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
P.S. If your primary goal is to use ZFS, you would probably be better off switching to Nexenta or OpenIndiana or Solaris 11 Express
From: Paul Kraus [mailto:p...@kraus-haus.org]
Samba even has modules for mapping NT RIDs to *nix UIDs/GIDs, as well as a module that supports Previous Versions using the host's native snapshot method.
But... if SAMBA has native AD authentication, and the underlying
OS can authenticate
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Peter Jeremy
I am in the process of upgrading from FreeBSD-8.1 with ZFSv14 to
FreeBSD-8.2 with ZFSv15 and, following a crash, have run into a
problem with ZFS claiming a snapshot or clone
From: Paul Kraus [mailto:p...@kraus-haus.org]
So if you were to enable the sharesmb property on a zfs filesystem in sol10, you just get an error or something?
Nope. The command succeeds and the flag gets set on the dataset.
Since there is no kernel process to read the flag and act
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Richard Elling
Yes.
http://www.unix.com/man-page/OpenSolaris/1m/idmap/
This appears to be only for OpenSolaris/Solaris 11, and not Solaris 10. Or am I missing something?
Correct.
From: James C. McPherson [mailto:j...@opensolaris.org]
Sent: Monday, March 14, 2011 9:20 AM
Just for clarity:
The in-kernel CIFS service is indeed available in solaris 10.
Are you really, really sure about that? Please point the RFE number
which tracks the inclusion in a Solaris 10
From: Paul Kraus [mailto:p...@kraus-haus.org]
I have a solaris 10u8 box I'm logged into right now. man zfs shows that sharesmb is available as an option. I suppose I could be wrong, if either the man page is wrong, or if I'm incorrectly assuming the zfs sharesmb property uses the
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Mike MacNeil
I have a Sun 7000 series NAS device, I am trying to back it up via NFS mount on a Solaris 10 server running Networker 7.6.1. It works but it is extremely slow, I have tested
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
Is it possible to run both CIFS and NFS on one file system over ZFS?
Yes. I do.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Fred Liu
Is there a mapping mechanism like what DataOnTap does to map the
permission/acl between NIS/LDAP and AD?
There are a lot of solutions available. But if you don't already have a
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Nathan Kroenert
Bottom line is that 75 IOPS per spindle won't impress many people, and that's the sort of rate you get when you disable the disk cache. It's the same rate that you get
From: Bob Friesenhahn [mailto:bfrie...@simple.dallas.tx.us]
The disk write cache helps with the step where data is
sent to the disks since it is much faster to write into the disk write
cache than to write to the media. Besides helping with unburdening
the I/O channel,
Having the disk
From: Jim Dunham [mailto:james.dun...@oracle.com]
ZFS only uses system RAM for read caching,
If your email address didn't say oracle, I'd just simply come out and say
you're crazy, but I'm trying to keep an open mind here... Correct me where
the following statement is wrong: ZFS uses
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Yaverot
rpool remains 1% inuse. tank reports 100% full (with 1.44G free),
I recommend:
When creating your new pool, use slices of the new disks, which are 99% of
the size of the new disks
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Yaverot
I recommend:
When creating your new pool, use slices of the new disks, which are 99% of the size of the new disks instead of using the whole new disks. Because this is a more
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Brandon High
Write caching will be disabled on devices that use slices. It can be
turned back on by using format -e
My experience has been, despite what the BPG (or whatever) says, this is
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Yaverot
We're heading into the 3rd hour of the zpool destroy on others.
The system isn't locked up, as it responds to local keyboard input, and
I bet you, you're in a semi-crashed state
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Yaverot
I'm (still) running snv_134 on a home server. My main pool tank filled up last night (1G free remaining).
There is (or was) a bug that would sometimes cause the system to crash
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Tim Cook
The response was that Sun makes sure all drives are exactly the same size (although I do recall someone on this forum having this issue with Sun OEM disks as well).
That was me.
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of David Blasingame Oracle
Keep pool space under 80% utilization to maintain pool performance.
For what it's worth, the same is true for any other filesystem too. What
really matters is the
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Brandon High
I would avoid USB, since it can be less reliable than other connection
methods. That's the impression I get from older posts made by Sun
Take that a step further. Anything
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Eff Norwood
Are there any gotchas that I should be aware of? Also, at what level should I be taking the snapshot to do the zfs send? At the primary pool level or at the zvol level? Since the
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Mark Creamer
1. Should I create individual iSCSI LUNs and present those to the VMware ESXi host as iSCSI storage, and then create virtual disks from there on each Solaris VM?
- or -
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
What does zpool status tell you?
Also, 'zpool iostat 5'
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Markus Kovero
I noticed recently that write rate has dropped off and through testing now I am getting 35MB/sec writes. The pool is around 50-60% full.
Hi, do you have your zfs prefetch
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of ian W
Hope you can still help here. Solaris 11 Express, x86 platform, E6600 with 6GB of RAM. I have a fairly new S11E box I'm using as a file server: 3x1.5TB HDDs in a raidz pool.
Just so
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Michael
Core i7 2600 CPU
16gb DDR3 Memory
64GB SSD for ZIL (optional)
Would this produce decent results for deduplication of 16TB worth of pools
or would I need more RAM still?
What
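As a sanity check on the RAM question, a common community rule of thumb is that each dedup-table entry costs a few hundred bytes of core; the 320 bytes per entry and the 128 KiB average block size below are assumptions, not official figures:

```python
# Rough DDT memory estimate. bytes_per_entry is a community rule of
# thumb, not an official number; actual usage depends on the block mix.
def ddt_ram_gib(pool_bytes, avg_block_bytes, bytes_per_entry=320):
    entries = pool_bytes / avg_block_bytes
    return entries * bytes_per_entry / 2**30

# 16 TB of mostly-unique data at a 128 KiB average block size:
print(round(ddt_ram_gib(16e12, 128 * 1024), 1))  # roughly 36 GiB
```

So 16 GB of RAM would hold only part of the table, and a smaller average block size makes the estimate dramatically worse; this is where an L2ARC usually enters the discussion.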
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Matthew Angelo
My question is, how do I determine which of the following zpool and vdev configurations I should run to maximize space whilst mitigating rebuild failure risk?
1. 2x
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Orvar Korvar
So, the bottom line is that Solaris 11 Express cannot use TRIM and SSD? Is that the conclusion? So, it might not be a good idea to use an SSD?
Even without TRIM, SSD's are still
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Erik Trimble
Bottom line, it's maybe $50 in parts, plus a $100k VLSI Engineer to do the design. wink
Well, only if there's a high volume. If you're only going to sell 10,000 of these
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of taemun
Uhm. Higher RPM = higher linear speed of the head above the platter = higher throughput. If the bit pitch (i.e. the size of each bit on the platter) is the
Nope. That's what I
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of James
I assume while a 2TB 7200rpm drive may have better sequential IOPS than a 500GB, it will not be double and therefore,
Don't know why you'd assume that. I would assume a 2TB drive
From: Richard Elling [mailto:richard.ell...@gmail.com]
They aren't. Check the datasheets, the max media bandwidth is almost always published.
I looked for said data sheets before posting. Care to drop any pointers? I
didn't see any drives publishing figures for throughput to/from platter
From: Brandon High [mailto:bh...@freaks.com]
That's assuming that the drives have the same number of platters. 500G
drives are generally one platter, and 2T drives are generally 4
platters. Same size platters, same density. The 500G drive could be
Wouldn't multiple platters of the same
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of James
block sizes and a ZFS 4kB recordsize* would mean much lower IOPS. e.g. Seagate Constellations are around 75-141MB/s (inner-outer) and 75MB/s is 18750 4kB IOPS! However I've just
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of James
I’m trying to select the appropriate disk spindle speed for a proposal and
would welcome any experience and opinions (e.g. has anyone actively
chosen 10k/15k drives for a new ZFS
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Roy Sigurd Karlsbakk
Dedup is *hungry* for RAM. 8GB is not enough for your configuration, most likely! First guess: double the RAM and then you might have better luck.
I know...
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Roy Sigurd Karlsbakk
Sorry about the initial post - it was wrong. The hardware configuration was
right, but for initial tests, I use NFS, meaning sync writes. This obviously
stresses the
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Roy Sigurd Karlsbakk
Even *with* an L2ARC, your memory requirements are *substantial*,
because the L2ARC itself needs RAM. 8 GB is simply inadequate for your
test.
With 50TB storage,
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Roy Sigurd Karlsbakk
The test box is a supermicro thing with a Core2duo CPU, 8 gigs of RAM, 4 gigs
of mirrored SLOG and some 150 gigs of L2ARC on 80GB x25-M drives. The
data drives are 7 2TB
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Roy Sigurd Karlsbakk
We're getting down to 10-20MB/s on
Oh, one more thing. How are you measuring the speed? Because if you have data
which is highly compressible, or highly duplicated,
From: Peter Jeremy [mailto:peter.jer...@alcatel-lucent.com]
Sent: Sunday, January 30, 2011 3:48 PM
2- When you want to restore, it's all or nothing. If a single bit is
corrupt in the data stream, the whole stream is lost.
OTOH, it renders ZFS send useless for backup or archival purposes.
From: Deano [mailto:de...@rattie.demon.co.uk]
Hi Edward,
Do you have a source for the 8KiB block size data? Whilst we can't avoid the SSD controller, in theory we can change the smallest size we present to the SSD to 8KiB fairly easily... I wonder if that would help the controller do a
My google-fu is coming up short on this one... I didn't see that it had been discussed in a while ...
What is the status of ZFS support for TRIM?
For the pool in general...
and...
Specifically for the slog and/or cache???
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Mike Tancsa
        NAME        STATE     READ WRITE CKSUM
        tank1       UNAVAIL      0     0     0  insufficient replicas
          raidz1    ONLINE       0     0     0
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
My google-fu is coming up short on this one... I didn't see that it had been discussed in a while ...
BTW, there were a bunch of places where people said ZFS doesn't need
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Eff Norwood
We tried all combinations of OCZ SSDs including their PCI based SSDs and
they do NOT work as a ZIL. After a very short time performance degrades
horribly and for the OCZ drives
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Tristram Scott
When it comes to dumping and restoring filesystems, there is still no official replacement for ufsdump and ufsrestore.
Let's go into that a little bit. If you're piping
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Ddl
But now the trouble is if I need to perform a full Solaris OS restore, I need to perform an installation of the Solaris 10 base OS and install the Networker 7.6 client to call back the data
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Bueno
Is it true that a raidz2 pool has a read capacity equal to the slowest disk's IOPS?
No, but there's a grain of truth there.
Random reads:
* If you have a single process
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Trusty Twelve
Hello, I'm going to build a home server. The system is deployed on an 8 GB USB flash drive. I have two identical 2 TB HDDs and a 250 GB one. Could you please recommend a ZFS configuration
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Karl Wagner
Consider the situation where someone has a large amount of off-site data storage (of the order of 100s of TB or more). They have a slow network link to this storage.
My idea
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Erik Trimble
As far as what the resync does: ZFS does smart resilvering, in that
it compares what the good side of the mirror has against what the
bad side has, and only copies the
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Peter Taps
Thank you for sharing the calculations. In lay terms, for Sha256, how many
blocks of data would be needed to have one collision?
There is no point in making a generalization and a
From: Richard Elling [mailto:richard.ell...@gmail.com]
This means the current probability of any sha256 collision in all of the data in the whole world, using a ridiculously small block size, assuming all
... it doesn't matter. Other posters have found collisions and a collision
without
Edward, this is OT but may I suggest you to use something like Wolfram
Alpha
to perform your calculations a bit more comfortably?
Wow, that's pretty awesome. Thanks.
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Ben Rockwood
If you're still having issues go into the BIOS and disable C-States, if you haven't already. It is responsible for most of the problems with 11th Gen PowerEdge.
I did that
From: Lassi Tuura [mailto:l...@cern.ch]
bc -l <<EOF
scale=150
define bday(n, h) { return 1 - e(-(n^2)/(2*h)); }
bday(2^35, 2^256)
bday(2^35, 2^256) * 10^57
EOF
Basically, ~5.1 * 10^-57.
Seems your number was correct, although I am not sure how you arrived at it.
The number was
For anyone who still cares:
I'm calculating the odds of a sha256 collision in an extremely large zpool,
containing 2^35 blocks of data, and no repetitions.
The formula on wikipedia for the birthday problem is:
p(n;d) ~= 1-( (d-1)/d )^( 0.5*n*(n-1) )
In this case,
n=2^35
d=2^256
The problem
In case you were wondering how big n must be before the probability of collision becomes remotely possible, slightly possible, or even likely:
Given a fixed probability of collision p, the formula to calculate n is:
n = 0.5 + sqrt( 0.25 + 2*ln(1-p)/ln((d-1)/d) )
(That's just the same equation
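Both formulas in this sub-thread can be checked numerically. The sketch below uses the small-probability approximation p ≈ 1 − e^(−n²/2d), which agrees with the Wikipedia form for large d, and uses expm1/log1p so the tiny values don't round to zero in floating point:

```python
import math

# Birthday-problem collision probability for n random blocks hashed
# into d possible digests: p ~ 1 - exp(-n^2 / (2d)).
def collision_prob(n, d):
    return -math.expm1(-(n * n) / (2 * d))

# Inverse: how many blocks before the collision probability reaches p?
def blocks_for_prob(p, d):
    return math.sqrt(-2 * d * math.log1p(-p))

p = collision_prob(2**35, 2**256)
print(f"{p:.2e}")  # ~5.1e-57 for a 2^35-block pool under SHA-256
```

For reference, blocks_for_prob(0.5, 2**256) comes out around 4 × 10^38 blocks before a SHA-256 collision becomes a coin flip, far beyond any conceivable pool.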
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of David Strom
So, has anyone had any experience with piping a zfs send through dd (so
as to set the output blocksize for the tape drive) to a tape autoloader
in autoload mode?
Yes. I've had
heheheh, ok, I'll stop after this. ;-) Sorry for going on so long, but it
was fun.
In 2007, IDC estimated the size of the digital universe in 2010 would be 1 zettabyte (10^21 bytes). This would be 2.5*10^17 blocks of 4000 bytes.
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Peter Taps
I haven't looked at the link that talks about the probability of collision. Intuitively, I still wonder how the chances of collision can be so low. We are reducing a 4K block to
From: Pawel Jakub Dawidek [mailto:p...@freebsd.org]
Well, I find it quite reasonable. If your block is referenced 100 times,
it is probably quite important.
If your block is referenced 1 time, it is probably quite important. Hence
redundancy in the pool.
There are many corruption
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of David Magda
Knowing exactly how the math (?) works is not necessary, but understanding
Understanding the math is not necessary, but it is pretty easy. And
unfortunately it becomes kind of
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
~= 5.1E-57
Bah. My math is wrong. I was never very good at PS. I'll ask someone at work tomorrow to look at it and show me the folly. Wikipedia has it right, but I can't
From: Pasi Kärkkäinen [mailto:pa...@iki.fi]
Other OS's have had problems with the Broadcom NICs aswell..
Yes. The difference is, when I go to support.dell.com and punch in my
service tag, I can download updated firmware and drivers for RHEL that (at
least supposedly) solve the problem. I
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Pawel Jakub Dawidek
Dedupditto doesn't work exactly that way. You can have at most 3 copies
of your block. Dedupditto minimal value is 100. The first copy is
created on first write, the
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Garrett D'Amore
When you purchase NexentaStor from a top-tier Nexenta Hardware Partner,
you get a product that has been through a rigorous qualification process
How do I do this, exactly? I
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Robert Milkowski
What if you you are storing lots of VMDKs?
One corrupted block which is shared among hundreds of VMDKs will affect
all of them.
And it might be a block containing meta-data
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Bakul Shah
See http://en.wikipedia.org/wiki/Birthday_problem -- in
particular see section 5.1 and the probability table of
section 3.4.
They say "The expected number of n-bit hashes that can
From: Richard Elling [mailto:richard.ell...@nexenta.com]
If I understand correctly, you want Dell, HP, and IBM to run OSes other
I agree, but neither Dell, HP, nor IBM develop Windows...
I'm not sure of the current state, but many of the Solaris engineers develop on laptops and Sun did
From: Bob Friesenhahn [mailto:bfrie...@simple.dallas.tx.us]
On Wed, 5 Jan 2011, Edward Ned Harvey wrote:
with regards to ZFS and all the other projects relevant to solaris.)
I know in the case of SGE/OGE, it's officially closed source now. As of Dec 31st, sunsource is being
From: Khushil Dep [mailto:khushil@gmail.com]
I've deployed large SAN's on both SuperMicro 825/826/846 and Dell
R610/R710's and I've not found any issues so far. I always make a point of
installing Intel chipset NIC's on the DELL's and disabling the Broadcom ones
but other than that it's
From: Bob Friesenhahn [mailto:bfrie...@simple.dallas.tx.us]
But that's precisely why it's an impossible situation. In order for the client to see a checksum error, it must have read some corrupt data from the pool storage, but the server will never allow that to happen. So the short
From: Brandon High [mailto:bh...@freaks.com]
On Thu, Jan 6, 2011 at 5:33 AM, Edward Ned Harvey
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
But the conclusion remains the same: Redundancy is not needed at the
client, because any data corruption the client could possibly see
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Peter Taps
Perhaps (Sha256+NoVerification) would work 99.99% of the time. But
Append 50 more 9's on there.
99.%
See below.
I
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Tim Cook
The claim was that there are more people contributing code from outside of
Oracle than inside to zfs. Your contributions to Illumos do absolutely
nothing
Guys, please let's just
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Chris Murray
Thank you for the feedback. All makes sense.
Sorry to hear about what's probably an unfortunate loss of nonredundant
disk...
One comment about etiquette though:
You changed
From: Deano [mailto:de...@rattie.demon.co.uk]
Sent: Wednesday, January 05, 2011 9:16 AM
So honestly do we want to innovate ZFS (I do) or do we just want to follow
Oracle?
Well, you can't follow Oracle. Unless you wait till they release something,
reverse engineer it, and attempt to
From: Michael Schuster [mailto:michaelspriv...@gmail.com]
Well, you can't follow Oracle. Unless you wait till they release something, reverse engineer it, and attempt to reimplement it.
that's not my understanding - while we will have to wait, oracle is
supposed to release *some*
From: Khushil Dep [mailto:khushil@gmail.com]
We do have a major commercial interest - Nexenta. It's been quiet but I do
look forward to seeing something come out of that stable this year? :-)
I'll agree to call Nexenta a major commercial interest, in regards to contribution to the open
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Bruins
I have a filer running Opensolaris (snv_111b) and I am presenting a
iSCSI share from a RAIDZ pool. I want to run ZFS on the share at the
client. Is it necessary to create a mirror
From: Richard Elling [mailto:richard.ell...@nexenta.com]
I'll agree to call Nexenta a major commercial interest, in regards to contribution to the open source ZFS tree, if they become an officially supported OS on Dell, HP, and/or IBM hardware.
NexentaStor is officially supported on
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Chris Murray
I have some strange goings-on with my VM of Solaris Express 11, and I hope someone can help. It shares out other virtual machine files for use in ESXi 4.0 (it, too, runs in
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Paul Gress
On 01/ 4/11 01:19 PM, webd...@gmail.com wrote:
It is sad that such a lovely file system is now in Oracle's unresponsive hands. I hope someone builds another open file system just
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Stephan Budach
Well, a couple of weeks before Christmas, I enabled the onboard bcom NICs on my R610 again, to use them as IPMI ports - I didn't even use them in
You don't have to enable the
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of pieterjan
so I suppose the message is just informational. I don't want that message there,
I'm pretty sure the answer is zpool clear
From: Frank Lahm [mailto:frankl...@googlemail.com]
Don't all of those concerns disappear in the event of a reboot?
If you stop AFP, you could completely obliterate the BDB database, and
restart AFP, and functionally continue from where you left off. Right?
No. Apple's APIs provide
From: Kevin Walker [mailto:indigoskywal...@gmail.com]
You do seem to misunderstand ZIL.
Wrong.
ZIL is quite simply write cache
ZIL is not simply write cache, but it enables certain types of operations to
use write cache which otherwise would have been ineligible.
The Intent Log is where
From: Bob Friesenhahn [mailto:bfrie...@simple.dallas.tx.us]
Sent: Tuesday, December 28, 2010 9:23 PM
The question of IOPS here is relevant to conversation because of ZIL
dedicated log. If you have advanced short-stroking to get the write
latency
of a log device down to zero, then it can
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
Ok, what we've hit here is two people using the same word to talk about
different things. Apples to oranges, as it were. Both meanings of IOPS
are ok, but context
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Nicolas Williams
Actually I'd say that latency has a direct relationship to IOPS because it's the time it takes to perform an IO that determines how many IOs Per Second that can be
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Martin Matuska
Hi guys, I am one of the ZFS porting folks at FreeBSD.
That's all really cool, and IMHO, more promising than anything I knew before.
But I'll really believe it if (a) some
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Joerg Schilling
And people should note that Netapp filed their patents starting from 1993. This is 5 years after I started to develop WOFS, which is copy on write. This still
In any case,