From: David Magda [mailto:dma...@ee.ryerson.ca]
On Wed, October 13, 2010 21:26, Edward Ned Harvey wrote:
I highly endorse mirrors for nearly all purposes.
Are you a member of BAARF?
http://www.miracleas.com/BAARF/BAARF2.html
Never heard of it. I don't quite get it ... They want
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Toby Thain
I don't want to heat up the discussion about ZFS managed discs vs.
HW raids, but if RAID5/6 were that bad, no one would use it
anymore.
It is. And there's no reason not
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Ian D
ok... we're making progress. After swapping the LSI HBA for a Dell
H800 the issue disappeared. Now, I'd rather not use those controllers
because they don't have a JBOD mode. We have
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Wilkinson, Alex
can you paste them anyway?
Note: If you have more than one adapter, I believe you can specify -aALL in
the commands below, instead of -a0
I have 2 disks (slots 4 5) that
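For illustration, the kind of MegaCli invocations being referred to would look like this (the -a0 / -aALL switch selects one adapter or all of them; this is only a sketch, exact utility path may differ):
MegaCli -PDList -a0        # physical disks on adapter 0
MegaCli -PDList -aALL      # the same, across every adapter
MegaCli -AdpAllInfo -aALL  # adapter-level configuration and status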
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Derek G Nokes
r...@dnokes.homeip.net:~# zpool create marketData raidz2
c0t5000C5001A6B9C5Ed0 c0t5000C5001A81E100d0 c0t5000C500268C0576d0
c0t5000C500268C5414d0 c0t5000C500268CFA6Bd0
I have a Dell R710 which has been flaky for some time. It crashes about
once per week. I have literally replaced every piece of hardware in it, and
reinstalled Sol 10u9 fresh and clean.
I am wondering if other people out there are using Dell hardware, with what
degree of success, and in
From: Markus Kovero [mailto:markus.kov...@nebula.fi]
Sent: Wednesday, October 13, 2010 10:43 AM
Hi, we've been running OpenSolaris on Dell R710s with mixed results;
some work better than others, and we've been struggling with the same issue
as you are with the latest servers.
I suspect some kind
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Steve Radich, BitShop, Inc.
Do you have dedup on? Remove large files, zfs destroy a snapshot or a
zvol, and you'll see hangs like you are describing.
Thank you, but no.
I'm running sol
From: edmud...@mail.bounceswoosh.org
[mailto:edmud...@mail.bounceswoosh.org] On Behalf Of Eric D. Mudama
Out of curiosity, did you run into this:
http://blogs.everycity.co.uk/alasdair/2010/06/broadcom-nics-dropping-out-on-solaris-10/
I personally haven't had the broadcom problem. When my
Dell R710 ... Solaris 10u9 ... With stability problems ...
Notice that I have several CPUs whose current_cstate is higher than the
supported_max_cstate.
Logically, that sounds like a bad thing. But I can't seem to find
documentation that defines the meaning of supported_max_cstates, to verify
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
Dell R710 ... Solaris 10u9 ... With stability problems ...
Notice that I have several CPUs whose current_cstate is higher than the
supported_max_cstate.
One more data
From: Henrik Johansen [mailto:hen...@scannet.dk]
The 10g models are stable - especially the R905's are real workhorses.
You would generally consider all your machines stable now?
Can you easily pdsh to all those machines?
kstat | grep current_cstate ; kstat | grep supported_max_cstates
I'd
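As a sketch, with hypothetical hostnames, the same check could be fanned out with pdsh:
pdsh -w node[01-16] 'kstat | grep current_cstate ; kstat | grep supported_max_cstates'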
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of dirk schelfhout
Wanted to test the zfs diff command and ran into this.
What's zfs diff? I know it's been requested, but AFAIK, not implemented
yet. Is that new feature being developed now
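For reference, the command form being tested compares two snapshots of a dataset (names here are hypothetical):
zfs diff tank/data@monday tank/data@tuesday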
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Peter Taps
If I have 20 disks to build a raidz3 pool, do I create one big raidz
vdev or do I create multiple raidz3 vdevs? Is there any advantage of
having multiple raidz3 vdevs in a single
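As a sketch with hypothetical device names (bash brace expansion spells out c1t0d0 through c1t19d0), the two layouts in question would be:
zpool create tank raidz3 c1t{0..19}d0                      # one 20-disk raidz3 vdev
zpool create tank raidz3 c1t{0..9}d0 raidz3 c1t{10..19}d0  # two 10-disk raidz3 vdevs striped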
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Stephan Budach
You are implying that the issues resulted from the H/W raid(s) and I
don't think that this is appropriate.
Please quote originals when you reply. If you don't - then it's
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Stephan Budach
c3t211378AC0253d0 ONLINE 0 0 0
How many disks are there inside of c3t211378AC0253d0?
How are they configured? Hardware raid 5? A mirror of
From: Stephan Budach [mailto:stephan.bud...@jvm.de]
I now also got what you meant by good half but I don't dare to say
whether or not this is also the case in a raid6 setup.
The same concept applies to raid5 or raid6. When you read the device, you
never know if you're actually reading the
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Ray Van Dolson
I have a pool with a single SLOG device rated at Y iops.
If I add a second (non-mirrored) SLOG device also rated at Y iops will
my zpool now theoretically be able to handle
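For reference, adding a second non-mirrored log device is a single command (pool and device names hypothetical); ZFS then spreads log allocations across both devices:
zpool add tank log c2t0d0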
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Harry Putnam
beep beep beep beep beep beep
I'm kind of having a brain freeze about this:
So what are the standard tests or cmds to run to collect enough data
to try
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of David Dyer-Bennet
I must say that this concept of a scrub running w/o error while corrupted
files, detectable by zfs send, apparently exist is very disturbing.
As previously mentioned, the OP
Is there a ZFS equivalent (or alternative) of inotify?
You have some thing, which wants to be notified whenever a specific file or
directory changes. For example, a live sync application of some kind...
From: Peter Jeremy [mailto:peter.jer...@alcatel-lucent.com]
Sent: Thursday, October 07, 2010 10:02 PM
On 2010-Oct-08 09:07:34 +0800, Edward Ned Harvey sh...@nedharvey.com
wrote:
If you're going raidz3, with 7 disks, then you might as well just make
mirrors instead, and eliminate the slow
From: cas...@holland.sun.com [mailto:cas...@holland.sun.com] On Behalf
Of casper@sun.com
Is there a ZFS equivalent (or alternative) of inotify?
Have you looked at port_associate and ilk?
port_associate looks promising. But google is less than useful on ilk.
Got any pointers, or
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Roy Sigurd Karlsbakk
In addition to this comes another aspect. What if one drive fails and
you find bad data on another in the same VDEV while resilvering? This
is quite common these days,
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Ian D
the help the community can provide. We're running the latest version of
Nexenta on a pretty powerful machine (4x Xeon 7550, 256GB RAM, 12x
100GB Samsung SSDs for the cache, 50GB
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Stephan Budach
I
conducted a couple of tests, where I configured my raids as JBODs and
mapped each drive out as a separate LUN, and I couldn't notice a
difference in performance in any way.
From: edmud...@mail.bounceswoosh.org
[mailto:edmud...@mail.bounceswoosh.org] On Behalf Of Eric D. Mudama
On Wed, Oct 6 at 22:04, Edward Ned Harvey wrote:
* Because ZFS automatically buffers writes in ram in order to
aggregate as previously mentioned, the hardware WB cache
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Kevin Walker
We are running a Solaris 10 production server being used for backup
services within our DC. We have 8 500GB drives in a zpool and we wish
to swap them out 1 by 1 for 1TB
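A sketch of the usual one-at-a-time swap, with hypothetical pool and device names; whether the pool grows on its own depends on the autoexpand property being available and enabled on your release:
zpool replace tank c0t1d0 c0t9d0   # swap one member, wait for the resilver to finish
zpool status tank                  # check resilver progress before touching the next disk
zpool set autoexpand=on tank       # after all 8 disks are replaced (or: zpool online -e tank <disk>)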
From: Cindy Swearingen [mailto:cindy.swearin...@oracle.com]
I would not discount the performance issue...
Depending on your workload, you might find that performance increases
with ZFS on your hardware RAID in JBOD mode.
Depends on the raid card you're comparing to. I've certainly seen
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Ian Collins
I would seriously consider raidz3, given I typically see 80-100 hour
resilver times for 500G drives in raidz2 vdevs. If you haven't
already,
If you're going raidz3, with 7
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Tony MacDoodle
Is it possible to add 2 disks to increase the size of the pool below?
NAME          STATE   READ WRITE CKSUM
testpool      ONLINE     0     0     0
  mirror-0    ONLINE     0     0     0
    c1t2d0    ONLINE     0     0     0
    c1t3d0
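Assuming the answer is yes: a second mirror vdev can be added to stripe alongside mirror-0, for example (new device names hypothetical):
zpool add testpool mirror c1t4d0 c1t5d0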
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Stephan Budach
Ian,
yes, although these vdevs are FC raids themselves, so the risk is… uhm…
calculated.
Whenever possible, you should always JBOD the storage and let ZFS manage the
raid,
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Stephan Budach
Now, scrub would reveal corrupted blocks on the devices, but is there a
way to identify damaged files as well?
I saw a lot of people offering the same knee-jerk reaction that
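One commonly suggested answer, for what it's worth: after a scrub, zpool status -v lists the files affected by permanent errors (pool name hypothetical):
zpool status -v tank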
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Peter Taps
As I understand, the hash generated by sha256 is almost guaranteed
not to collide. I am thinking it is okay to turn off verify property
on the zpool. However, if there is indeed a
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Scott Meilicke
Why do you want to turn verify off? If performance is the reason, is it
significant, on and off?
Under most circumstances, verify won't hurt performance. It won't hurt
reads
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Roy Sigurd Karlsbakk
extended device statistics
device    r/s    w/s   kr/s    kw/s  wait  actv  svc_t  %w  %b
sd1       0.5  140.3    0.3  2426.3   0.0   1.0    7.2   0  14
sd2
From: Richard Elling [mailto:richard.ell...@gmail.com]
It is relatively easy to find the latest, common snapshot on two file
systems.
Once you know the latest, common snapshot, you can send the
incrementals
up to the latest.
I've always relied on the snapshot names matching. Is there a
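A minimal sketch of that workflow, assuming hypothetical dataset, snapshot, and host names:
zfs list -H -t snapshot -o name -r tank/data                   # snapshots on the source
ssh backuphost zfs list -H -t snapshot -o name -r backup/data  # snapshots on the target
# pick the newest name present in both lists, then send everything newer than it:
zfs send -I tank/data@common tank/data@latest | ssh backuphost zfs receive backup/data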
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jason J. W. Williams
I just witnessed a resilver that took 4h for 27gb of data. Setup is 3x
raid-z2 stripes with 6 disks per raid-z2. Disks are 500gb in size. No
checksum errors.
27G on a
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Brad Stone
For de-duplication to perform well you need to be able to fit the de-
dup table in memory. Is a good rule-of-thumb for needed RAM Size=(pool
capacity/avg block size)*270 bytes?
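Plugging assumed numbers into that rule of thumb (10 TiB of unique data and a 64 KiB average block size, not figures from the thread), using bash arithmetic:
# 10 TiB / 64 KiB = 167,772,160 blocks; 167,772,160 * 270 bytes is roughly 42 GiB of DDT
echo $(( 10 * 1024**4 / (64 * 1024) * 270 / 1024**3 )) GiB
# on an existing pool, zdb -DD <pool> reports the actual DDT entry counts and sizes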
From: opensolaris-discuss-boun...@opensolaris.org [mailto:opensolaris-
discuss-boun...@opensolaris.org] On Behalf Of Roy Sigurd Karlsbakk
I'm using a custom snapshot scheme which snapshots every hour, day,
week and month, rotating 24h, 7d, 4w and so on. What would be the best
way to zfs
From: Roy Sigurd Karlsbakk [mailto:r...@karlsbakk.net]
For now, the rule of thumb is 3G RAM for every 1TB of unique data,
including snapshots and vdevs.
3 gigs? Last I checked it was a little more than 1GB, perhaps 2 if you
have small files.
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Freddie Cash
The following works well:
dd if=/dev/random of=/dev/disk-node bs=1M count=1 seek=whatever
If you have long enough cables, you can move a disk outside the case
and run a
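After deliberately overwriting blocks like that, a scrub should show the damage being found and repaired (pool name hypothetical):
zpool scrub tank
zpool status -v tank   # checksum errors climb on the damaged device and ZFS repairs from redundancy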
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Peter Taps
The dedup property is set on a filesystem, not on the pool.
However, the dedup ratio is reported on the pool and not on the
filesystem.
As with most other ZFS concepts, the
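For reference, with hypothetical names, the split looks like this on the command line:
zfs set dedup=on tank/data   # the property is set per filesystem
zfs get dedup tank/data
zpool get dedupratio tank    # the ratio is reported for the whole pool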
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Bob Friesenhahn
It is very unusual to obtain the same number of errors (probably same
errors) from two devices in a pair. This should indicate a common
symptom such as a memory error (does
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of David Dyer-Bennet
For example, if you start with an empty drive, and you write a large amount
of data to it, you will have no fragmentation. (At least, no significant
fragmentation;
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Marty Scholes
What appears to be missing from this discussion is any shred of
scientific evidence that fragmentation is good or bad and by how much.
We also lack any detail on how much
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Bryan Horstmann-Allen
The ability to remove the slogs isn't really the win here, it's import
-F. The
Disagree.
Although I agree the -F is important and good, I think the log device
removal
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Tom Bird
We recently had a long discussion in this list, about resilver times versus
raid types. In the end, the conclusion was: resilver code is very
inefficient for raidzN. Someday it may
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Ramesh Babu
I would like to know if I can create a ZFS file system without a ZFS
storage pool. Also I would like to know if I can create a ZFS pool on a
Veritas volume.
Unless I'm
From: Richard Elling [mailto:rich...@nexenta.com]
Suppose you want to ensure at least 99% efficiency of the drive. At
most 1%
time wasted by seeking.
This is practically impossible on a HDD. If you need this, use SSD.
Lately, Richard, you're saying some of the craziest illogical
From: Richard Elling [mailto:rich...@nexenta.com]
It is practically impossible to keep a drive from seeking. It is also
The first time somebody (Richard) said you can't prevent a drive from
seeking, I just decided to ignore it. But then it was said twice. (Ian.)
I don't get why anybody is
From: Haudy Kazemi [mailto:kaze0...@umn.edu]
With regard to multiuser systems and how that negates the need to
defragment, I think that is only partially true. As long as the files
are defragmented enough so that each particular read request only
requires one seek before it is time to
From: Richard Elling [mailto:rich...@nexenta.com]
With appropriate write caching and grouping or re-ordering of writes
algorithms, it should be possible to minimize the amount of file
interleaving and fragmentation on write that takes place.
To some degree, ZFS already does this. The
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Wolfraider
We are looking into the possibility of adding a dedicated ZIL and/or
L2ARC devices to our pool. We are looking into getting 4 – 32GB Intel
X25-E SSD drives. Would this be a good
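For illustration, with hypothetical device names, the four SSDs could be split between a mirrored log and two cache devices:
zpool add tank log mirror c3t0d0 c3t1d0   # slog, typically mirrored
zpool add tank cache c3t2d0 c3t3d0        # L2ARC devices need no redundancy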
From: Richard Elling [mailto:rich...@nexenta.com]
This operational definition of fragmentation comes from the single-
user,
single-tasking world (PeeCees). In that world, only one thread writes
files
from one application at one time. In those cases, there is a reasonable
expectation that
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Orvar Korvar
I was thinking of deleting all zfs snapshots before zfs send/receive to
another new zpool. Then everything would be defragmented, I thought.
You don't need to delete snaps before
From: Richard Elling [mailto:rich...@nexenta.com]
Regardless of multithreading, multiprocessing, it's absolutely
possible to
have contiguous files, and/or file fragmentation. That's not a
characteristic which depends on the threading model.
Possible, yes. Probable, no. Consider
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Brad
Hi! I'd been scouring the forums and web for admins/users who deployed
zfs with compression enabled on Oracle backed by storage array luns.
Any problems with cpu/memory overhead?
I
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Orvar Korvar
I am not really worried about fragmentation. I was just wondering whether,
if I attach new drives and zfs send/receive to a new zpool, it would count
as defrag. But apparently not.
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Freddie Cash
No, it (21-disk raidz3 vdev) most certainly will not resilver in the
same amount of time. In fact, I highly doubt it would resilver at
all.
My first foray into ZFS resulted
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Erik Trimble
the thing that folks tend to forget is that RaidZ is IOPS limited. For
the most part, if I want to reconstruct a single slab (stripe) of data,
I have to issue a read to EACH
From: Hatish Narotam [mailto:hat...@gmail.com]
PCI-E 8X 4-port eSATA RAID controller.
4 x eSATA-to-5-SATA port multipliers (each connected to an eSATA port on
the controller).
20 x Samsung 1TB HDDs (each connected to a port multiplier).
Assuming your disks can all sustain 500Mbit/sec,
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
The characteristic that *really* makes a big difference is the number of
slabs in the pool. i.e. if your filesystem is composed of mostly small
files or fragments, versus
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Bob Friesenhahn
There should be little doubt that NetApp's goal was to make money by
suing Sun. Nexenta does not have enough income/assets to make a risky
lawsuit worthwhile.
But in all
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Orvar Korvar
A) Resilver = Defrag. True/false?
I think everyone will agree false on this question. However, more detail
may be appropriate. See below.
B) If I buy larger drives and
From: Haudy Kazemi [mailto:kaze0...@umn.edu]
There is another optimization in the Best Practices Guide that says the
number of devices in a vdev should be (N+P) with P = 1 (raidz), 2
(raidz2), or 3 (raidz3) and N equals 2, 4, or 8.
I.e. 2^N + P where N is 1, 2, or 3 and P is the RAIDZ level.
From: pantz...@gmail.com [mailto:pantz...@gmail.com] On Behalf Of
Mattias Pantzare
It
is about 1 vdev with 12 disks or 2 vdevs with 6 disks each. If you have 2
vdevs you have to read half the data compared to 1 vdev to resilver a
disk.
Let's suppose you have 1T of data. You have 12-disk raidz2.
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of David Magda
The 9/10 Update appears to have been released. Some of the more
noticeable
ZFS stuff that made it in:
More at:
http://docs.sun.com/app/docs/doc/821-1840/gijtg
Awesome!
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of hatish
I have just
read the Best Practices guide, and it says your group shouldn't have 9
disks.
I think the value you can take from this is:
Why does the BPG say that? What is the
On Tue, Sep 7, 2010 at 4:59 PM, Edward Ned Harvey sh...@nedharvey.com
wrote:
I think the value you can take from this is:
Why does the BPG say that? What is the reasoning behind it?
Anything that is a rule of thumb either has reasoning behind it (you
should know the reasoning
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of bear
Short Version:
I used zpool add instead of zpool replace while trying to move drives
from an si3124 controller card. I can backup the data to other drives
and destroy the pool,
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Ian Collins
However writes to already opened files are allowed.
Think of this from the perspective of an application. How would write
failure be reported?
Both very good points. But I
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Ian Collins
so it should behave in the same way as an unmount in
the presence of open files.
+1
You can unmount lazily, or force it, or, by default, the unmount fails in the
presence of open
From: Ian Collins [mailto:i...@ianshome.com]
On 08/28/10 12:45 PM, Edward Ned Harvey wrote:
Another specific example ...
Suppose you zfs send from a primary server to a backup server. You
want
the filesystems to be readonly on the backup fileserver, in order to
receive
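A minimal sketch of that pattern, with hypothetical dataset and host names:
zfs send -I tank/fs@prev tank/fs@today | ssh backuphost zfs receive -F backup/fs
ssh backuphost zfs set readonly=on backup/fs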
From: Neil Perrin [mailto:neil.per...@oracle.com]
Hmm, I need to check, but if we get a checksum mismatch then I don't
think we try other
mirror(s). This is automatic for the 'main pool', but of course the ZIL
code is different
by necessity. This problem can of course be fixed. (It will be
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of StorageConcepts
So I would say there are 2 bugs / missing features in this:
1) zil needs to report truncated transactions on ZIL corruption
2) zil should need mirrored counterpart to recover
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Neil Perrin
This is a consequence of the design for performance of the ZIL code.
Intent log blocks are dynamically allocated and chained together.
When reading the intent log we read each
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Dr. Martin Mundschenk
devices attached. Unfortunately the USB and sometimes the FW devices
just die, causing the whole system to stall, forcing me to do a hard
reboot.
Well, I wonder what
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of devsk
If dedup is ON and the pool develops a corruption in a file, I can
never fix it because when I try to copy the correct file on top of the
corrupt file,
the block hash will match with
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of devsk
What do you mean original? dedup creates only one copy of the file
blocks. The file was not corrupt when it was copied 3 months ago.
Please describe the problem.
If you copied the
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Eric D. Mudama
On Sat, Aug 21 at 4:13, Orvar Korvar wrote:
And by the way: Wasn't there a comment from Linus Torvalds recently that
people should move their low-quality code into the codebase
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Linder, Doug
there are an
awful lot of places that actively DO NOT want the latest and greatest,
and for good reason.
Agreed. Latest-greatest has its place, which is not 24/7
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Peter Jeremy
My interpretation of those results is that you can't generalise: The
only way to determine whether your application is faster in 32-bit or
64-bit mode is to test it. And your
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Joerg Schilling
1) The OpenSource definition
http://www.opensource.org/docs/definition.php
section 9 makes it very clear that an OSS license must not restrict
other
software and must not
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Alxen4
Disabling the ZIL converts all synchronous calls to asynchronous, which
makes ZFS report data acknowledgment before it actually was written
to stable storage, which in turn improves
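For reference only (not a recommendation): at the time this was usually done with a kernel tunable, while later builds expose a per-dataset property instead; the dataset name below is hypothetical:
set zfs:zil_disable = 1            # in /etc/system, takes effect at the next boot
zfs set sync=disabled tank/data    # newer builds, per dataset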
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Alxen4
For example I'm trying to use a ramdisk as a ZIL device (ramdiskadm)
Other people have already corrected you about ramdisk for log.
It's already been said, use SSD, or disable ZIL
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Ethan Erchinger
We've had a failed disk in a fully supported Sun system for over 3 weeks,
Explorer data turned in, and been given the runaround forever. The
7000
series support is no better,
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Garrett D'Amore
interpretation. Since it is no longer relevant to the topic of the
list, can we please either take the discussion offline, or agree to
just
let the topic die (on the basis
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Will Murnane
I am surprised by the performance of some 64-bit multi-threaded
applications on my AMD Opteron machine. For most of the applications,
the performance of the 32-bit version
From: Garrett D'Amore [mailto:garr...@nexenta.com]
Sent: Sunday, August 15, 2010 8:17 PM
(The only way I could see this changing would be if there was a sudden
license change which would permit either ZFS to overtake btrfs in the
Linux kernel, or permit btrfs to overtake zfs in the Solaris
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of David Dyer-Bennet
However, if Oracle makes a binary release of BTRFS-derived code, they
must
release the source as well; BTRFS is under the GPL.
When a copyright holder releases something
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
Can someone provide a link to the requisite source files so that we
can see the copyright statements? It may well be that Oracle assigned
the copyright to some other party.
BTRFS is inside the linux kernel.
Copyright (C)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jerome Warnier
Do not forget Btrfs is mainly developed by ... Oracle. Will it survive
better than Free Solaris/ZFS?
It's GPL, just as ZFS is CDDL. They cannot undo or revoke the free
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Bob Friesenhahn
The $400 number is bogus since the amount that Oracle quotes now
depends on the value of the hardware that the OS will run on. For my
Using the same logic, if I said MS
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Tim Cook
The cost discussion is ridiculous, period. $400 is a steal for
support. You'll pay 3x or more for the same thing from Redhat or
Novell.
Actually, as a comparison with the message
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
#3 I previously believed that vmfs3 was able to handle sparse files
amazingly well, like, when you create a new vmdk, it appears almost
instantly regardless of size, and I
From: cyril.pli...@gmail.com [mailto:cyril.pli...@gmail.com] On Behalf
Of Cyril Plisko
The compressratio shows you how much *real* data was compressed.
The file in question, however, can be a sparse file and have its size
vastly different from what du says, even without compression.
Ahhh.
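A quick way to see that effect with a hypothetical path: create a sparse file and compare its logical size with what du reports:
dd if=/dev/zero of=/tank/data/sparse bs=1 count=1 seek=1073741824
ls -lh /tank/data/sparse   # logical size: about 1 GB
du -h  /tank/data/sparse   # allocated blocks: almost nothing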
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Russ Price
For me, Solaris had zero mindshare since its beginning, on account of
being
prohibitively expensive.
I hear that a lot, and I don't get it. $400/yr does move it out of people's
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Andrej Podzimek
Or Btrfs. It may not be ready for production now, but it could become a
serious alternative to ZFS in one year's time or so. (I have been using
I will much sooner pay for
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Chris Twa
My plan now is to buy the ssd's and do extensive testing. I want to
focus my performance efforts on two zpools (7x146GB 15K U320 + 7x73GB
10k U320). I'd really like two ssd's for