to diverge.
See https://www.illumos.org/issues/3236 for details.
This is interesting. I didn't know about it.
Is there an option similar to verify=on for dedup, or does it just assume that a matching checksum means the data is identical?
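For reference (not stated in the thread itself, but a documented ZFS feature): dedup exposes exactly this through the dedup property. A minimal sketch, with an illustrative dataset name:

# byte-compare blocks whose checksums match before dedup'ing them
zfs set dedup=verify tank/fs
# or spell out the checksum algorithm and add verification
zfs set dedup=sha256,verify tank/fs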
--
Robert Milkowski
http://milek.blogspot.com
Solaris 11.1 (free for non-prod use).
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Tiernan OToole
Sent: 25 February 2013 14:58
To: zfs-discuss@opensolaris.org
Subject: [zfs-discuss] ZFS Distro Advice
Good morning all.
My home
Robert Milkowski wrote:
Solaris 11.1 (free for non-prod use).
But it's a ticking bomb if you use a cache device.
It's been fixed in an SRU (although this is only for customers with a support contract - still, it will be in 11.2 as well).
Then, I'm sure there are other bugs which are fixed
It also has a lot of performance improvements and general bug fixes in the Solaris 11.1 release.
Performance improvements such as?
Dedup'ed ARC for one.
Blocks of zeros are automatically dedup'ed in-memory.
Improvements to ZIL performance.
Zero-copy zfs+nfs+iscsi
...
--
Robert Milkowski
http://milek.blogspot.com
it they
are bad?
Isn't that at least a little bit hypocritical? (Bashing Oracle while doing sort of the same.)
--
Robert Milkowski
http://milek.blogspot.com
Personally, I'd recommend putting a standard Solaris fdisk
partition on the drive and creating the two slices under that.
Why? In most cases giving ZFS an entire disk is the best option.
I wouldn't bother with any manual partitioning.
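For illustration, a whole-disk pool is a one-liner (the device name is made up):

# give ZFS the entire disk; it writes an EFI label and manages it itself
zpool create tank c0t1d0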
--
Robert Milkowski
http://milek.blogspot.com
by Oracle that Illumos is not getting (lack of resources, limited usage, etc.).
--
Robert Milkowski
http://milek.blogspot.com
in front, another 2x 2.5 in rear,
Sandy Bridge as well.
--
Robert Milkowski
http://milek.blogspot.com
.
--
Robert Milkowski
http://milek.blogspot.com
No, there isn't another way to do it currently. An SMF approach is probably the best option for the time being.
I think there should be a couple of additional zvol properties where permissions could be stated.
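As a rough sketch of that SMF workaround (pool, volume, and owner are all illustrative), the service method would simply reset the device node's ownership at boot:

# zvol device nodes live under /dev/zvol; run this from a boot-time SMF service
chown oracle:dba /dev/zvol/rdsk/tank/oravol
chmod 660 /dev/zvol/rdsk/tank/oravol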
Best regards,
Robert Milkowski
http://milek.blogspot.com
From: zfs-discuss
is set to 1 after the cache size is
decreased, and if it stays that way.
The fix is in one of the SRUs, and I think it should be in 11.1.
I don't know if it was fixed in Illumos, or even whether Illumos was affected by this at all.
--
Robert Milkowski
http://milek.blogspot.com
-Original Message
size and balloon the memory usage).
Can you expand a little bit more here?
Dedup+compression actually works pretty well (not counting the standard problems with current dedup, compression or not).
--
Robert Milkowski
http://milek.blogspot.com
a way (and not always; see logbias, etc.) and to the ARC, to be committed to the pool later when the txg closes.
--
Robert Milkowski
http://milek.blogspot.com
/dev/chassis/SUN-FIRE-X4270-M2-SERVER.unknown/HDD17/disk  ONLINE  0 0 0
/dev/chassis/SUN-FIRE-X4270-M2-SERVER.unknown/HDD15/disk  ONLINE  0 0 0
errors: No known data errors
Best regards,
Robert Milkowski
http://milek.blogspot.com
now trying a run with all zfs datasets unmounted, hope that helps
somewhat... I'm growing puzzled now.
To double-check that no snapshots, etc. are being created, run: zpool history -il pond
--
Best regards,
Robert Milkowski
http://milek.blogspot.com
And he will still need an underlying filesystem like ZFS for them :)
-Original Message-
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Nico Williams
Sent: 25 April 2012 20:32
To: Paul Archer
Cc: ZFS-Discuss mailing list
which with lower recordsize values should improve dedup ratios (although it will require more memory for the DDT).
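A minimal sketch of the knob being discussed (dataset name illustrative; recordsize only affects newly written files):

# smaller records can dedup better, at the cost of a larger DDT
zfs set recordsize=8k tank/fs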
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Brad Diggs
Sent: 29 December 2011 15:55
To: Robert Milkowski
Cc: 'zfs-discuss
and you need ARC memory to cache other data as well.
--
Robert Milkowski
disk. This behavior is what makes NFS over ZFS slow without a slog: NFS does everything O_SYNC by default,
No, it doesn't. However, VMware by default issues all writes as SYNC.
and such a workload that I see few physical reads going on.
--
Robert Milkowski
http://milek.blogspot.com
On 01/ 7/11 09:02 PM, Pawel Jakub Dawidek wrote:
On Fri, Jan 07, 2011 at 07:33:53PM +, Robert Milkowski wrote:
Now what if block B is a meta-data block?
Metadata is not deduplicated.
Good point, but then it depends on perspective.
What if you are storing lots of VMDKs?
One
the other possible cases of data corruption are there anyway; adding yet another one might or might not be acceptable.
--
Robert Milkowski
http://milek.blogspot.com
duplicate blocks.
I don't believe that Fletcher is still allowed for dedup - right now it is only sha256.
--
Robert Milkowski
http://milek.blogspot.com
On 01/ 3/11 04:28 PM, Richard Elling wrote:
On Jan 3, 2011, at 5:08 AM, Robert Milkowski wrote:
On 12/26/10 05:40 AM, Tim Cook wrote:
On Sat, Dec 25, 2010 at 11:23 PM, Richard Elling richard.ell...@gmail.com wrote:
There are more people outside
On 01/ 4/11 11:35 PM, Robert Milkowski wrote:
On 01/ 3/11 04:28 PM, Richard Elling wrote:
On Jan 3, 2011, at 5:08 AM, Robert Milkowski wrote:
On 12/26/10 05:40 AM, Tim Cook wrote:
On Sat, Dec 25, 2010 at 11:23 PM, Richard Elling richard.ell...@gmail.com
-weekly out of Sun. Nexenta spending
hundreds of man-hours on a GUI and userland apps isn't work on ZFS.
Exactly my observation as well. I haven't seen any ZFS-related development happening at Illumos or Nexenta, at least not yet.
--
Robert Milkowski
http://milek.blogspot.com
2010 at
src.opensolaris.org they are still old versions from August, at least
the ones I checked.
See
http://src.opensolaris.org/source/history/onnv/onnv-gate/usr/src/uts/common/fs/zfs/
The Mercurial gate doesn't have any updates either.
Best regards,
Robert Milkowski
On 07/12/2010 23:54, Tony MacDoodle wrote:
Is it possible to expand the size of a ZFS volume?
It was created with the following command:
zfs create -V 20G ldomspool/test
See the zfs man page, the section about the volsize property.
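For instance, using the volume from the question (the new size is illustrative):

# grow the 20G volume to 40G
zfs set volsize=40G ldomspool/test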
Best regards,
Robert Milkowski
http://milek.blogspot.com
On 18/11/2010 17:53, Cindy Swearingen wrote:
Markus,
Let me correct/expand this:
1. If you create a RAIDZ pool on Oracle Solaris 11 Express (b151a), you will have some mirrored metadata. This feature was integrated into b148, and the pool version is 29. This is the part I mixed up.
2. If you have an existing
[ZPL], ID 1110, cr_txg 33537, 2.03M, 6 objects
    Object  lvl  iblk  dblk  dsize  lsize  %full  type
         6    2   16K   32K  1.00M     1M 100.00  ZFS plain file
Now it is fine.
--
Robert Milkowski
http://milek.blogspot.com
byte ranges in one file to another is also supported, allowing large files to be more efficiently manipulated like standard rope data structures (http://en.wikipedia.org/wiki/Rope_%28computer_science%29).
Also see http://www.symantec.com/connect/virtualstoreserver
--
Robert Milkowski
http://milek.blogspot.com
write file
in sync or async mode?
async
The sync property takes effect immediately for all new writes, even if a file was open before the property was changed.
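A quick way to see this in action (dataset name illustrative):

zfs get sync tank/fs
zfs set sync=always tank/fs   # applies at once, even to files that are already open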
--
Robert Milkowski
http://milek.blogspot.com
this way, and it should be considered a bug.
What do you think?
ps. I tested it on S10u8 and snv_134.
--
Robert Milkowski
http://milek.blogspot.com
remember whether or not it offered an ability to manipulate a zvol's WCE flag, but if it didn't, you can do it anyway as it is a zvol property. For an example see
http://milek.blogspot.com/2010/02/zvols-write-cache.html
--
Robert Milkowski
http://milek.blogspot.com
FYI
--
Robert Milkowski
http://milek.blogspot.com
Original Message
Subject:zpool import despite missing log [PSARC/2010/292 Self Review]
Date: Mon, 26 Jul 2010 08:38:22 -0600
From: Tim Haley tim.ha...@oracle.com
To: psarc-...@sun.com
CC: zfs-t...@sun.com
On 22/07/2010 03:25, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Robert Milkowski
I had a quick look at your results a moment ago.
The problem is that you used a server with 4GB of RAM + a raid card
recordsize and iozone block size in this case.
The issue with raid-z and random reads is that as the cache hit ratio goes down to 0, the IOPS approaches the IOPS of a single drive. For a little bit more information see http://blogs.sun.com/roch/entry/when_to_and_not_to
--
Robert Milkowski
http
a regression.
Are you sure it is not a debug vs. non-debug issue?
--
Robert Milkowski
http://milek.blogspot.com
. Also
please note that you can use both: compression and dedup at the same time.
--
Robert Milkowski
http://milek.blogspot.com
(async or sync) to be written synchronously.
PS. Still, I'm not saying it would make ZFS ACID.
--
Robert Milkowski
http://milek.blogspot.com
.
Not to be outdone, they've stopped other OS releases as well. Surely,
this is a temporary situation.
AFAIK the dev OSOL releases are still being produced - they just haven't been made public since b134.
--
Robert Milkowski
http://milek.blogspot.com
smaller writes to metadata that will distribute parity.
What is the total width of your raidz1 stripe?
4x disks, 16KB recordsize, 128GB file, random read with 16KB block.
--
Robert Milkowski
http://milek.blogspot.com
On 23/06/2010 19:29, Ross Walker wrote:
On Jun 23, 2010, at 1:48 PM, Robert Milkowski mi...@task.gda.pl wrote:
128GB.
Does it mean that for a dataset used for databases and similar environments, where basically all blocks have a fixed size and there is no other data, all parity information
with raidz you need a zpool with X raidz vdevs where X = desired
IOPS/IOPS of single drive.
I know that, and it wasn't my question.
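For reference, the arithmetic behind that rule of thumb, with illustrative numbers:

# each raidz vdev delivers roughly the random-read IOPS of a single disk, so:
#   X = desired IOPS / per-disk IOPS = 2000 / 200 = 10 raidz vdevs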
--
Robert Milkowski
http://milek.blogspot.com
performance as a much greater number of disk drives in a RAID-10 configuration, and if you don't need much space it could make sense.
--
Robert Milkowski
http://milek.blogspot.com
.
http://blogs.sun.com/roch/entry/when_to_and_not_to
--
Robert Milkowski
http://milek.blogspot.com
big of a file are you making? RAID-Z does not explicitly do the parity
distribution that RAID-5 does. Instead, it relies on non-uniform stripe widths
to distribute IOPS.
Adam
On Jun 18, 2010, at 7:26 AM, Robert Milkowski wrote:
Hi,
zpool create test raidz c0t0d0 c1t0d0 c2t0d0 c3t0d0
to get it integrated into ON? Because if you do, then I think that getting the Nexenta guys to expand on it would be better for everyone instead of having them reinvent the wheel...
--
Robert Milkowski
http://milek.blogspot.com
rather I'd expect all of them to get about the same number of IOPS.
Any idea why?
--
Robert Milkowski
http://milek.blogspot.com
with dedup enabled in a pool, you can't really get a dedup ratio per share.
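The ratio ZFS does report is pool-wide (pool name illustrative):

zpool get dedupratio tank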
--
Robert Milkowski
http://milek.blogspot.com
.
Previous Versions should work even if you have one large filesystem with all users' homes as directories within it.
What Solaris/OpenSolaris version did you try for the 5k test?
--
Robert Milkowski
http://milek.blogspot.com
?
The whole point of having L2ARC is to serve high random-read IOPS from RAM and the L2ARC device instead of from the disk drives in the main pool.
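Adding one looks like this (device name illustrative):

# an SSD to act as L2ARC
zpool add tank cache c5t0d0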
--
Robert Milkowski
http://milek.blogspot.com
? It maps the snapshots so Windows can access them via Previous Versions from Explorer's context menu.
BTW: the CIFS service supports Windows Shadow Copies out-of-the-box.
--
Robert Milkowski
http://milek.blogspot.com
full priority.
Is this problem known to the developers? Will it be addressed?
http://sparcv9.blogspot.com/2010/06/slower-zfs-scrubsresilver-on-way.html
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6494473
--
Robert Milkowski
http://milek.blogspot.com
port is nothing unusual and
has been the case for at least several years.
--
Robert Milkowski
http://milek.blogspot.com
with large blocks.
--
Robert Milkowski
http://milek.blogspot.com
On 11/06/2010 10:58, Andrey Kuzmin wrote:
On Fri, Jun 11, 2010 at 1:26 PM, Robert Milkowski mi...@task.gda.pl wrote:
On 11/06/2010 09:22, sensille wrote:
Andrey Kuzmin wrote:
On Fri, Jun 11, 2010 at 1:54 AM, Richard Elling
ports at the same time and it scaled linearly, achieving well over 400k IOPS.
HW used: x4270, 2x Intel X5570 2.93GHz, 4x SAS SG-PCIE8SAS-E-Z (fw. 1.27.3.0), connected to F5100.
1.27.3.0), connected to F5100.
--
Robert Milkowski
http://milek.blogspot.com
On 10/06/2010 15:39, Andrey Kuzmin wrote:
On Thu, Jun 10, 2010 at 6:06 PM, Robert Milkowski mi...@task.gda.pl wrote:
On 21/10/2009 03:54, Bob Friesenhahn wrote:
I would be interested to know how many IOPS an OS like Solaris
is able to push through
: why do you need to do
this at all? Isn't the ZFS ARC supposed to release memory when the
system is under pressure? Is that mechanism not working well in some
cases ... ?
My understanding is that if kmem gets heavily fragmented, ZFS won't be able to give back much memory.
--
Robert Milkowski
http://milek.blogspot.com
|recv should replicate it I think.
--
Robert Milkowski
http://milek.blogspot.com
http://milek.blogspot.com/2010/05/zfs-synchronous-vs-asynchronous-io.html
--
Robert Milkowski
http://milek.blogspot.com
that it shouldn't, but it was changed again during a PSARC review so that it should.
And I did a copy'n'paste here.
Again, sorry for the confusion.
--
Robert Milkowski
http://milek.blogspot.com
On 06/05/2010 13:12, Robert Milkowski wrote:
On 06/05/2010 12:24, Pawel Jakub Dawidek wrote:
I read that this property is not inherited and I can't see why.
If what I read is up-to-date, could you tell why?
It is inherited. Sorry for the confusion, but there was a discussion about whether it should
L2ARC will be kept warm. Then the only thing which might considerably affect L2ARC performance would be an L2ARC device failure...
--
Robert Milkowski
http://milek.blogspot.com
probably decrease performance and would invalidate all blocks if even a single L2ARC device died. Additionally, having each block on only one L2ARC device allows reading from all of the L2ARC devices at the same time.
--
Robert Milkowski
http://milek.blogspot.com
On 06/05/2010 21:45, Nicolas Williams wrote:
On Thu, May 06, 2010 at 03:30:05PM -0500, Wes Felter wrote:
On 5/6/10 5:28 AM, Robert Milkowski wrote:
sync=disabled
Synchronous requests are disabled. File system transactions
only commit to stable storage on the next DMU transaction
fails prior to completing a series of
writes and I reboot using a failsafe (i.e. install disc), will the log be
replayed after a zpool import -f ?
yes
--
Robert Milkowski
http://milek.blogspot.com
synchronicity
No promise on date, but it will bubble to the top eventually.
So everyone knows - it has been integrated into snv_140 :)
--
Robert Milkowski
http://milek.blogspot.com
it is off, it will give you an estimate of the absolute maximum performance increase (if any) from having a dedicated ZIL device.
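A sketch of that experiment on builds with the sync property (dataset name illustrative; on older builds the equivalent was the zil_disable tunable):

zfs set sync=disabled tank/fs   # run the benchmark - this is the upper bound a slog could reach
zfs set sync=standard tank/fs   # restore the default afterwards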
--
Robert Milkowski
http://milek.blogspot.com
is that it is not as easy a problem as it seems.
--
Robert Milkowski
http://milek.blogspot.com
*.
--
Robert Milkowski
http://milek.blogspot.com
happens after your disks are available.
--
Robert Milkowski
http://milek.blogspot.com
discover it. Does 'zpool import' (with no other options) list the pool?
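That is:

zpool import        # with no arguments, lists pools available for import
zpool import pond   # then import one by name (pool name illustrative)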
--
Robert Milkowski
http://milek.blogspot.com
import, I think requiring the -f or -F, and reboot again as normal.
I just did a test on Solaris 10/09 - and the system came up properly, entirely on its own, with a failed pool.
zpool status showed the pool as unavailable (as I had removed an underlying device), which is fine.
--
Robert Milkowski
http://milek.blogspot.com
some benchmarks with sysbench + MySQL or Oracle.
I don't remember if I posted some of my results or not, but I'm pretty sure you can find others.
--
Robert Milkowski
http://milek.blogspot.com
.
You will need to power cycle. The system won't boot up again; you'll have to
The system should boot up properly even if some pools are not accessible (except rpool, of course).
If that is not the case then there is a bug - last time I checked it worked perfectly fine.
--
Robert Milkowski
automatically try to import the pool and your
scripts will do it once disks are available.
--
Robert Milkowski
http://milek.blogspot.com
without going through the process of actually copying the blocks, but just duplicating its metadata like NetApp does?
I don't know about file cloning, but why not put each VM on top of a zvol - then you can clone the zvol?
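A minimal sketch of that approach (names and sizes illustrative):

zfs create -V 20G tank/vm-master          # the golden-image zvol
zfs snapshot tank/vm-master@gold
zfs clone tank/vm-master@gold tank/vm01   # instant copy that shares blocks with the snapshot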
--
Robert Milkowski
http://milek.blogspot.com
but it suggests that it had nothing to do with a double slash - rather, some process (your shell?) had an open file within the mountpoint. But by supplying -f you forced zfs to unmount it anyway.
--
Robert Milkowski
http://milek.blogspot.com
On 21/04/2010 06:16, Ryan John wrote:
Thanks
, atime off vs. on, lzjb, gzip, ssd). Also a comparison of benchmark results with all-default zfs settings against whatever settings gave you the best result.
--
Robert Milkowski
http://milek.blogspot.com
accessing \\filer\arch\myfolder\myfile.txt works.
Any ideas?
We are running snv_130.
You are not using the Samba daemon, are you?
--
Robert Milkowski
http://milek.blogspot.com
.
Other than that you are fine even with an unmirrored slog device.
--
Robert Milkowski
http://milek.blogspot.com
slog.
--
Robert Milkowski
http://milek.blogspot.com
for example - on x4540 servers, try to avoid creating a pool with a single RAID-Z3 group made of 44 disks; rather, create 4 RAID-Z2 groups of 11 disks each, all of them in a single pool.
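A sketch of that layout (device names illustrative):

zpool create tank \
  raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 c2t0d0 c2t1d0 c2t2d0 \
  raidz2 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 \
  raidz2 c3t6d0 c3t7d0 c4t0d0 c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0 c4t6d0 c4t7d0 c5t0d0 \
  raidz2 c5t1d0 c5t2d0 c5t3d0 c5t4d0 c5t5d0 c5t6d0 c5t7d0 c6t0d0 c6t1d0 c6t2d0 c6t3d0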
--
Robert Milkowski
http://milek.blogspot.com
.
Look in the archives of this mailing list for more information.
--
Robert Milkowski
http://milek.blogspot.com
On 02/04/2010 16:04, casper@sun.com wrote:
sync() is actually *async* and returning from sync() says nothing about
To clarify - in the case of ZFS, sync() is actually synchronous.
--
Robert Milkowski
http://milek.blogspot.com
condition for
the sake of internal consistency. Applications which need to know their
next commands will not begin until after the previous sync write was
committed to disk.
ROTFL!!!
I think you should explain it even further for Casper :) :) :) :) :) :) :)
--
Robert Milkowski
http://milek.blogspot.com
a share as sync (default) or async, while on Solaris you can't really currently force an NFS server to start working in async mode.
--
Robert Milkowski
http://milek.blogspot.com
of a cluster, both of them have full access to shared storage and you can force zpool import on both nodes at the same time.
When you think about it, you actually need such behavior for RAC to work on raw devices or real cluster volumes or filesystems, etc.
--
Robert Milkowski
http://milek.blogspot.com
and enable the storage resource
The other approach is to keep a pool under cluster management but, if need be, suspend the resource group so there won't be any unexpected failovers (but it really depends on circumstances and what you are trying to do).
--
Robert Milkowski
http://milek.blogspot.com
Well, spend some extra money on a really fast NVRAM solution for the ZIL and you will get a much faster ZFS environment than NetApp, and you will still spend much less money. Not to mention all the extra flexibility compared to NetApp.
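The attach itself is one command (device names illustrative; the two lines are alternatives):

zpool add tank log c5t0d0                 # dedicated slog device
zpool add tank log mirror c5t0d0 c5t1d0   # or a mirrored slog, for safety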
--
Robert Milkowski
http://milek.blogspot.com
server would suddenly lose power.
To clarify - if the ZIL is disabled, it makes no difference at all for pool/filesystem-level consistency.
--
Robert Milkowski
http://milek.blogspot.com
is put on a separate device?
Well, it is actually different. With ZFS you can still guarantee it to be consistent on-disk, while others generally can't, and often you will have to run fsck just to mount a fs read-write...
--
Robert Milkowski
http://milek.blogspot.com
failed drive? And
can I hotspare it manually? I could do a straight replace, but that
isn't quite the same thing.
It seems like it is event-driven. Hmmm... perhaps it shouldn't be.
Anyway, you can do zpool replace and it is the same thing - why wouldn't it be?
--
Robert Milkowski
http://milek.blogspot.com
and it re-synchronizes, a hot spare will detach automatically (regardless of whether you forced it to kick in via zpool replace or it did so due to FMA).
For more details see http://blogs.sun.com/eschrock/entry/zfs_hot_spares
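Kicking a spare in by hand looks like this (device names illustrative):

# activate the spare in place of the failed disk; it detaches again after resilver
zpool replace tank c2t3d0 c4t0d0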
--
Robert Milkowski
http://milek.blogspot.com
.
or there might be an extra zpool-level (or system-wide) property to enable checking checksums on every access from the ARC - there would be a significant performance impact, but it might be acceptable for really paranoid folks, especially with modern hardware.
--
Robert Milkowski
http://milek.blogspot.com
a database or recover
lots of files over NFS - your service is down, and disabling the ZIL makes recovery MUCH faster. Then there are cases when leaving the ZIL disabled is acceptable as well.
--
Robert Milkowski
http://milek.blogspot.com
are talking about doing regular snapshots and making sure that the application is consistent while doing so - for example, putting all Oracle tablespaces in hot backup mode and taking a snapshot... otherwise it doesn't really make sense.
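A rough sketch of that sequence (dataset name and connection details illustrative):

sqlplus -s "/ as sysdba" <<EOF
ALTER DATABASE BEGIN BACKUP;
EOF
zfs snapshot tank/oradata@nightly
sqlplus -s "/ as sysdba" <<EOF
ALTER DATABASE END BACKUP;
EOF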
--
Robert Milkowski
http://milek.blogspot.com