No, I did not set that property; not now, not in previous releases.
Nice to see secure by default coming to the admin tools as well.
Waiting for SSH to become 127.0.0.1:22 sometime... just kidding ;)
Thanks for the tip!
Any ideas about the stacktrace? It's still there instead of the web GUI.
I checked - this system has a UFS root. When installed as snv_84 and then LU'd
to snv_89, and when I fiddled with these packages from various other releases,
it had the stacktrace instead of the ZFS admin GUI (or the well-known
smcwebserver restart effect for the older packages).
This system
Likewise. Just plain doesn't work.
Not required though, since the command-line is okay and way powerful ;)
And there are some more interesting challenges to work on, so I didn't push
this problem any more yet.
Interesting, we'll try that.
Our server with the problem has been boxed now, so I'll check the solution when
it gets on site.
Thanks ahead, anyway ;)
should have been clearer about that.
I will investigate using ZFS snapshots with ZFS send as a method
for accomplishing my task. I'm not convinced it's the best way
to achieve my goal, but if it's not, I'd like to make sure I understand
why not.
Thanks for your interest.
/jim
Mattias Pantzare wrote
Just my 2c: Is it possible to do an offline dedup, kind of like snapshotting?
What I mean in practice, is: we make many Solaris full-root zones. They share a
lot of data as complete files. This makes it kind of easy to save space: make one
zone as a template, snapshot/clone its dataset, make new
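(A minimal sketch of that template/clone approach, with made-up dataset names rather than the ones from my setup:
# zfs snapshot rpool/zones/template@golden
# zfs clone rpool/zones/template@golden rpool/zones/zone01
The clone starts out sharing all of the template's blocks, so a new full-root zone initially costs almost no extra space; only files that later diverge from the template get stored separately.)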
OK, thank you Nils and Wade for the concise replies.
After much reading I agree that the ZFS-development queued features do deserve
a higher ranking on the priority list (pool-shrinking/disk-removal and
user/group quotas would be my favourites), so probably the deduplication tool
I'd need would,
Jim Dunham
Engineering Manager
Storage Platform Software Group
Sun Microsystems, Inc.
Ralf,
Jim, at first: I never said that AVS is a bad product. And I never
will. I wonder why you act as if you were attacked personally.
To be honest, if I were a customer with the original question, such
a reaction wouldn't make me feel safer.
I am sorry that my response came across
On Sep 11, 2008, at 11:19 AM, A Darren Dunham wrote:
On Thu, Sep 11, 2008 at 10:33:00AM -0400, Jim Dunham wrote:
The issue with any form of RAID 1 is that the instant a disk fails
out of the RAID set, with the next write I/O to the remaining members
of the RAID set, the failed disk (and its
On Sep 11, 2008, at 5:16 PM, A Darren Dunham wrote:
On Thu, Sep 11, 2008 at 04:28:03PM -0400, Jim Dunham wrote:
On Sep 11, 2008, at 11:19 AM, A Darren Dunham wrote:
On Thu, Sep 11, 2008 at 10:33:00AM -0400, Jim Dunham wrote:
The issue with any form of RAID 1 is that the instant a disk
Importing on the primary gives the same error.
Anyone have any ideas?
Thanks
Corey
be placed into
logging mode first. Then ZFS will be left in an I/O-consistent state after the disable is done.
Corey
On Sep 16, 2008, at 5:39 PM, Miles Nordin wrote:
jd == Jim Dunham [EMAIL PROTECTED] writes:
jd If at the time the SNDR replica is deleted the set was
jd actively replicating, along with ZFS actively writing to the
jd ZFS storage pool, I/O consistency will be lost, leaving ZFS
replication 'smarter'.
--
Brent Jones
[EMAIL PROTECTED]
functionality on a single node, use
host based or controller based mirroring software.
--Joe
backing stores. If
one uses rdsk backing stores of any type, this is not an issue.
Jim
I have a similar situation here, with a 2-TB ZFS pool on
a T2000 using iSCSI to a NetApp file server. Is there any way to tell
in advance if any of those changes will make a difference? Many of
them seem
to OpenSolaris at build
snv_74, and currently being backported to Solaris 10, available in S10u7
next year.
The weird behavior seen below with 'dd' is likely Oracle's desire to
continually repair one of its many redundant header blocks.
- Jim
FWIW, you don't need a file that contains zeros, as /dev
TERM=vt100; export TERM
or
setenv TERM vt100
Jim
on x64 - it will panic your system.
None of this has anything to do with ZFS, which uses a completely different
mechanism for caching (the ZFS ARC).
Thanks,
/jim
That is what I heard Jim Mauro tell us. I recall feeling a bit
disturbed when I heard it. If it is true, perhaps it applies only
[ initiator]
iscsiadm modify target-param -p maxrecvdataseglen=65536 target-IQN
Jim
Can you verify the single-connection throughput using iperf, uperf, or netperf?
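(For example, assuming iperf is available on both ends and using a made-up hostname:
# iperf -s
on the target, and
# iperf -c target-host -t 30
on the initiator, then compare the reported rate with what the iSCSI traffic actually achieves.)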
-r
approach; this is useful while expanding home systems when I don't have a spare tape backup to dump my files to and restore from afterwards.
I think it's an (intended?) limitation in the zpool command itself, since the kernel
can very well live with degraded pools.
//Jim
For the sake of curiosity, is it safe to have components of two different ZFS
pools on the same drive, with and without HDD write cache turned on?
How will ZFS itself behave? Would it turn on the disk cache if the two imported pools co-own the drive?
An example is a multi-disk system like mine
Thanks Tomas, I haven't checked yet, but your workaround seems feasible.
I've posted an RFE and referenced your approach as a workaround.
That's nearly what zpool should do under the hood, and perhaps can be done
temporarily with a wrapper script to detect min(physical storage sizes) ;)
//Jim
Thanks to all those who helped, even despite the non-enterprise approach of
this question ;)
While experimenting I discovered that Solaris /tmp doesn't seem to support
sparse files: mkfile -n still creates full-sized files which can either use
up the
swap space, or not fit there. ZFS and UFS
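(A quick way to see this for yourself - the paths are just examples:
# mkfile -n 1g /tmp/sparsetest
# du -k /tmp/sparsetest
# mkfile -n 1g /tank/sparsetest
# du -k /tank/sparsetest
If /tmp really doesn't honour -n, du will report the full size there, while on a ZFS or UFS filesystem it should report only the blocks actually allocated.)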
would next look into one of the various SAR data graphing tools.
http://sourceforge.net/projects/ksar/
http://freshmeat.net/projects/ksar
Jim
Thanks.
copying data; so estimated speed was bytes or kbytes
per sec).
//Jim
, model, and firmware revision. Consider planning ahead and reserving some space by creating a slice which is smaller than the whole disk, instead of using the whole disk.
Creating a slice, instead of using the whole disk, will cause ZFS to not enable write-caching on the underlying device.
- Jim
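(For illustration, with made-up device names: giving ZFS the whole disk,
# zpool create tank c1t0d0
lets it label the disk with EFI and enable the write cache, whereas building the pool on a slice carved out with format beforehand,
# zpool create tank c1t0d0s0
leaves the write cache alone and keeps the remainder of the disk in reserve.)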
Kristof,
Jim: Yes, in step 5 the commands were executed on both nodes.
We did some more tests with OpenSolaris 2008.11 (build 101b).
We managed to get AVS setup up and running, but we noticed that
performance was really bad.
When we configured a zfs volume for replication, we noticed
maxed out on some system
limitation, such as CPU and memory. I/O impact should not be a
factor, given that a RAM disk is used. The addition of both SNDR and a
RAM disk in the data path, regardless of how small their system cost is,
will have a profound impact on disk throughput.
Jim
Please
http://www.opensolaris.org/os/community/performance/filebench/quick_start/
- Jim
Richard Elling wrote:
Jim Dunham wrote:
Ahmed,
The setup is not there anymore, however, I will share as much
detail
as I have documented. Could you please post the commands you have
used
and any differences you think might be important. Did you ever test
with 2008.11 instead?
http://bugs.opensolaris.org/view_bug.do?bug_id=5097228
It may be that this is an awful idea... in which case I am happy to hear
that as well and will feed that back to the customer.
For both controller-based and host-based snapshots, replicas, even iSCSI
LUs, this would be an awful [good] idea. :-)
- Jim
for each drive to be replicated, or is there a better way to do it?
Thanks!
Multiple Thors (more than 2?), with performance problems.
Maybe it's the common denominator - the network.
Can you run local ZFS IO loads and determine if performance
is expected when NFS and the network are out of the picture?
Thanks,
/jim
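(A crude way to do that, with a made-up pool name: write and read back a large file locally and watch zpool iostat in another window, e.g.
# dd if=/dev/zero of=/tank/ddtest bs=1024k count=8192
# dd if=/tank/ddtest of=/dev/null bs=1024k
Bear in mind the read may largely be served from the ARC, so the write number is usually the more telling one here.)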
Greg Mason wrote:
So, I'm still beating my head
clients. I've narrowed my particular
performance issue down to the ZIL, and how well ZFS plays with NFS.
Great.
Good luck.
/jim
,
/jim
#! /usr/bin/ksh -p
# CDDL HEADER START
#
# The contents of this file are subject to the terms of the
# Common Development and Distribution License (the "License").
# You may not use this file except in compliance with the License.
#
# You can obtain a copy of the license at usr/src
So granted, tank is about 77% full (not to split hairs ;^),
but in this case, 23% is 640GB of free space. I mean, it's
not like 15 years ago when a file system was 2GB total,
and 23% free meant a measly 460MB to allocate from.
640GB is a lot of space, and our largest writes are less
than 5MB.
-locking at the following location.
http://cvs.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/zfs/zvol.c#1130
- Jim
At least trying to open /dev/zvol/rdsk/datapool/master, where master is defined as:
# zpool create -f datapool mirror c1t1d0 c2t0d0
# zfs create -V
solution.
What's required
to make it work? Consider a file server running ZFS that exports a
volume with iSCSI. Consider also an application server that imports
the LUN with iSCSI and runs a ZFS filesystem
it out. See a demo at:
http://blogs.sun.com/constantin/entry/csi_munich_how_to_save
Jim
A recent increase in email about ZFS and SNDR (the replication
component of Availability Suite) has given me reasons to post one of
my replies.
Well, now I'm confused! A colleague just pointed me towards your blog
entry about SNDR and ZFS which, until now, I thought was not a
supported
Andrew,
Jim Dunham wrote:
ZFS the filesystem is always on-disk consistent, and ZFS does
maintain filesystem consistency through coordination between the
ZPL (ZFS POSIX Layer) and the ZIL (ZFS Intent Log). Unfortunately
for SNDR, ZFS caches a lot of an application's filesystem data
Nicolas,
On Fri, Mar 06, 2009 at 10:05:46AM -0700, Neil Perrin wrote:
On 03/06/09 08:10, Jim Dunham wrote:
A simple test I performed to verify this was to append to a ZFS file
(no synchronous filesystem options being set) a series of blocks
with a
block order pattern contained within
to Solaris 10 u7.
- Jim
On Wed, Mar 4, 2009 at 2:47 AM, Scott Lawson scott.law...@manukau.ac.nz
wrote:
Stephen Nelson-Smith wrote:
Hi,
I recommended a ZFS-based archive solution to a client needing to
have
a network-based archive of 15TB of data in a remote datacentre. I
based
. What web browser are you using? It works just fine with Firefox.
- Jim
James D. Rogers
NRA, GOA, DAD -- and I VOTE!
2207 Meadowgreen Circle
Franktown, CO 80116
coyote_hunt...@msn.com
303-688-0480
303-885-7410 Cell (Working hours and when coyote huntin
pool significantly
tighter on free space than the other pools (zpool list)?
Thanks,
/jim
Nobel Shelby wrote:
Customer has many large ZFS pools. He does the same on all pools:
Copying overnight large amounts of small files (1-5K).
All but one particular pool (that has been expanded) gives them
- capture kstat -n arcstats before a test,
after the write test and after the read test.
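(Roughly something like this - the file names are arbitrary:
# kstat -n arcstats > /var/tmp/arc.before
run the write test
# kstat -n arcstats > /var/tmp/arc.afterwrite
run the read test
# kstat -n arcstats > /var/tmp/arc.afterread
and then compare the size, hits and misses counters across the three captures.)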
Sorry - I need to think about this a bit more.
Something is seriously broken, but I'm not yet
sure what it is. Unless you're running an older
Solaris version, and/or missing patches.
Thanks,
/jim
zfetch needs a whole lotta love.
For both CRs the workaround is disabling prefetch
(echo zfs_prefetch_disable/W 1 | mdb -kw)
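(If you want that to persist across reboots, the equivalent /etc/system line should be
set zfs:zfs_prefetch_disable = 1
- please double-check it against your build; the mdb invocation above only patches the running kernel.)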
Any other theories on this test case?
Thanks,
/jim
Original Message
Subject: Re: [perf-discuss] ZFS performance issue - READ is slow as
hell...
Date
iscsioptions zpool-name/zvol-name
- Jim
--
Ian.
)?
//Thanks in advance, we're expecting a busy weekend ;(
//Jim Klimov
was aborted:
2009-05-28.11:44:05 zfs destroy -r pond/zones/ldap03 [user root on
thumper:global]
2009-05-28.11:44:06 [internal destroy txg:712330] dataset = 445 [user root on
thumper]
//Jim
PS: I guess I'm up to an RFE: zfs destroy should have an interactive option,
perhaps (un-)set by default
technical expertise buffooned by marketing yells led to poor decisions ;(
//Jim
Some data I forgot to add:
I have tried installing OpenSolaris 2009.06 and importing the pool; it yields the same results as Solaris 10 U7.
The array is configured to use all 24 disks in a raidz2 configuration with 2 hot spares; this gives me about 16TB of usable space. The
As the subject says, I can't import a seemingly okay raidz pool and I really
need to as it has some information on it that is newer than the last backup
cycle :-( I'm really in a bind; I hope anyone can help...
Background: A drive in a four-slice pool failed (I have to use slices due to a
to be renamed.
Hope this helps, let us know if it does ;)
//Jim Klimov
that.
//HTH, Jim
idea to detect errors crawling in.
// HTH, Jim
breaks, or errors related to bugs in the firmware itself - in more elaborate conspiracy theories).
Due to this it is often recommended to use external RAID implementations as vdevs of a redundant ZFS pool (such as a mirror of two equivalent arrays).
//Jim
(say, the boot one).
//Jim
Probably better to use zfs recv -nFvd first (no-write verbose mode) to be certain about your write targets and about overwriting stuff (i.e. zfs recv -F would destroy any newer snapshots, if any - so you can first check which ones, and possibly clone/rename them first).
// HTH, Jim Klimov
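(For example, with made-up pool and snapshot names:
# zfs send -R sourcepool/data@migrate | zfs recv -nFvd backuppool
The -n pass just prints what would be received or destroyed; re-run without -n once the output looks right.)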
You might also want to force ZFS into accepting a faulty root pool:
# zpool set failmode=continue rpool
//Jim
of these drives, and make a mirrored swap pool on the other
couple.
//Jim
and other
factors. It is possible that in the course of your quest you'll try several of
them.
Starting out with a transactional approach (i.e. not deleting the originals until necessary) pays off in such cases.
//Jim
), 234MBps
Occasionally I did reruns; user time for the same setups can vary significantly
(like 65s vs 84s) while the system time stays pretty much the same.
zpool iostat shows larger values (like 320MBps typically) but I think that can be attributed to writing parity stripes on raidz vdevs.
//Jim
, or otherwise ;)
//Jim
, and PulsarOS...
http://eonstorage.blogspot.com/2008_11_01_archive.html (features page)
http://eonstorage.blogspot.com/2009/05/eon-zfs-nas-0591-based-on-snv114.html
http://code.google.com/p/pulsaros/
http://pulsaros.digitalplayground.at/
Haven't yet tried them, though.
//Jim
Bob - Have you filed a bug on this issue?
I am not up to speed on this thread, so I cannot comment on whether or not there is a bug
here, but you seem to have a test case and supporting
data. Filing a bug will get the attention of ZFS
engineering.
Thanks,
/jim
Bob Friesenhahn wrote:
On Mon
/w on each of 2 mirrored drives, and 2 h/w errors on one of the drives).
Any ideas? Is it a cosmetic problem or a crawling-hiding bug in my hardware and
I should go about replacing something somewhere? I don't see such behavior on
any other servers around...
Thanks for ideas,
//Jim
zpool status
into the samfs although its structures are only 20% used?
If by any chance the latter - I think it would count as a bug. If the former -
see
the posts above for explanations and workarounds :)
Thanks in advance for such detail,
Jim
If I understand you right it is as you said.
Here's an example and you can see what happened.
The sam-fs is filled to only 6% and the zvol is full.
I'm afraid I was not clear with my question, so I'll elaborate.
It remains standing as: during this situation, can you write new data into
the reservation is less than
the volume size. Consequently, writes to a sparse volume
can fail with ENOSPC when the pool is low on space. For
a sparse volume, changes to volsize are not reflected in
the reservation.
Did you do anything like this?
HTH,
//Jim
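(One way to check, with an illustrative volume name and size:
# zfs get volsize,refreservation pool/samvol
# zfs set refreservation=50g pool/samvol
Setting refreservation back up to the volsize restores the space guarantee of a non-sparse volume; on older builds the property may be plain 'reservation'.)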
idle.
Thanks,
/jim
Javier Conde wrote:
Hello,
IHAC with a huge performance problem in a newly installed M8000 configured with a USP1100 and ZFS.
From what we can see, 2 disks used in different zpools are 100% busy and the average service time is also quite high (between 30 and 5 ms
genunix`taskq_thread+0xbc
unix`thread_start+0x8
Let's see what the fsstat and zpool iostat data looks like when this
starts happening..
Thanks,
/jim
Jim Leonard wrote:
It would also be interesting to see some snapshots
of the ZFS arc kstats
kstat -n arcstats
Here you
,
/jim
Jim Leonard wrote:
Can you gather some ZFS IO statistics, like
fsstat zfs 1 for a minute or so.
Here is a snapshot from when it is exhibiting the behavior:
new name name attr attr lookup rddir read read write write
file remov chng get setops ops ops bytes ops
The only thing that jumps out at me is the ARC size - 53.4GB, or most of your 64GB of RAM.
This in-and-of-itself is not necessarily a bad thing - if there are no other memory consumers, let ZFS cache data in the ARC.
But if something is coming along to flush dirty ARC pages
an international group in English for the Tokyo OSUG. There are
bi-lingual westerners and Japanese on both lists, and we have events in
Yoga as well.
http://mail.opensolaris.org/mailman/listinfo/ug-tsug (English )
http://mail.opensolaris.org/mailman/listinfo/ug-jposug (Japanese)
Jim
device is not a ZVOL.
Note: For ZVOL support, there is a corresponding ZFS storage pool
change to support this functionality, so a zpool upgrade ... to
version 16 is required:
# zpool upgrade -v
.
.
16 stmf property support
- Jim
The options seem to be
a) stay
Posting to zfs-discuss. There's no reason this needs to be
kept confidential.
5-disk RAIDZ2 - doesn't that equate to only 3 data disks?
Seems pointless - they'd be much better off using mirrors,
which is a better choice for random IO...
Looking at this now...
/jim
Jeff Savit wrote:
Hi all
the flexibility of ZFS and offline-storage capabilities of HSM?
Thanks for any replies, including statements that my ideas are insane or my
views are outdated ;) But constructive ones are more appreciated ;)
//Jim
Thanks for the link, but the main concern in spinning down drives of a ZFS pool
is that ZFS by default is not so idle. Every 5 to 30 seconds it closes a
transaction
group (TXG) which requires a synchronous write of metadata to disk.
I mentioned reading many blogs/forums on the matter, and some
?
In general, were there any stability issues with snv_128 during internal/BFU
testing?
TIA,
//Jim
.
As an aside, there's nothing about this that requires it be posted
to zfs-discuss-confidential. I posted to zfs-disc...@opensolaris.org.
Thanks,
/jim
Anthony Benenati wrote:
Jim,
The issue with using scan rate alone is if you are looking for why you
have significant performance degradation
Think he's looking for a single, intuitively obvious, easy-to-access indicator of memory usage along the lines of the vmstat free column (before ZFS) that shows the current amount of free RAM.
On Dec 23, 2009, at 4:09 PM, Jim Mauro wrote:
Hi Anthony -
I don't get this. How does the presence
We have a production SunFireV240 that had a zfs mirror until this week. One of
the drives (c1t3d0) in the mirror failed.
The system was shut down and the bad disk replaced without an export.
I don't know what happened next but by the time I got involved there was no
evidence that the remaining
No. Only slice 6 from what I understand.
I didn't create this (the person who did has left the company) and all I know
is that the pool was mounted on /oraprod before it faulted.
Never mind.
It looks like the controller is flaky. Neither disk in the mirror is clean.
Attempts to back up and recover the remaining disk produced I/O errors that were
traced to the controller.
Thanks for your help Victor.
at 90% full.
Read the link Richard sent for some additional information.
Thanks,
/jim
Tony MacDoodle wrote:
Was wondering if anyone has had any performance issues with Oracle
running on ZFS as compared to UFS?
Thanks
Unclear what you want to do. What's the goal for this exercise?
If you want to replace the pool with larger disks and the pool is a mirror or raidz, you just replace one disk at a time and allow the pool to rebuild itself. Once all the disks have been replaced, it will automatically realize the
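(Roughly, and with hypothetical device names:
# zpool replace tank c1t2d0 c3t2d0
# zpool status tank
Repeat the replace for each disk, waiting for each resilver to finish. Once every member is the larger size, newer builds can grow the pool with
# zpool set autoexpand=on tank
while older ones pick up the new size after an export/import.)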
, ZFS will release memory being used by the ARC.
But, if no one else wants it
/jim
On Apr 27, 2010, at 9:07 PM, Brad wrote:
What's the default size of the file system cache for Solaris 10 x86, and can it be tuned?
I read various posts on the subject and it's confusing...
For this type of migration a downtime is required. However, it can be reduced to only a few hours, or even a few minutes, depending on how much change needs to be synced.
I have done this many times on a NetApp filer, but it can be applied to ZFS as well. The first thing to consider is to only do the migration once
So, on the point of not needing a migration back: even at 144 disks, they won't be in the same RAID group. So figure out the best RAID group size for you, since ZFS doesn't support changing the number of disks in a raidz yet. I usually use the number of slots per shelf, or a good number is
Sorry, I need to correct myself. Mirroring LUNs on the Windows side to switch the storage pool under them is a great idea, and I think you can do this without downtime.
I understand your point. However, in most production systems the shelves are added incrementally, so it makes sense to relate it to the number of slots per shelf. And in most cases, withstanding a shelf failure is too much overhead on storage anyway. For example, in his case he would have to configure 1+0
Sorry for the double post, but I think this was better suited for the ZFS forum.
I am running OpenSolaris snv_134 as a file server in a test environment, testing deduplication. I am transferring a large amount of data from our production server using rsync.
The Data pool is on a separate raidz1-0