Hello,
I came across this blog post:
http://kevinclosson.wordpress.com/2007/03/15/copying-files-on-solaris-slow-or-fast-its-your-choice/
and would like to hear from you performance gurus how this 2007
article relates to the 2010 ZFS implementation. What should I use, and
why?
Hi--
ZFS command operations involving disk space take input and display using
numeric values specified as exact values, or in a human-readable form
with a suffix of B, K, M, G, T, P, E, Z for bytes, kilobytes, megabytes,
gigabytes, terabytes, petabytes, exabytes, or zettabytes.
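As a hedged sketch of that suffix arithmetic (not a real ZFS tool; the function name and the coverage of only B through T are my own), here it is in plain shell:

```shell
#!/bin/sh
# to_bytes: expand a size with an optional B/K/M/G/T suffix into bytes.
# Sketch only -- real zfs(1M) parsing also accepts P/E/Z and decimals.
to_bytes() {
  n=${1%?}          # the value without its last character
  s=${1#"$n"}       # the last character (candidate suffix)
  case $s in
    [0-9]) echo "$1" ;;                       # no suffix: already bytes
    B) echo "$n" ;;
    K) echo $(( n * 1024 )) ;;
    M) echo $(( n * 1024 * 1024 )) ;;
    G) echo $(( n * 1024 * 1024 * 1024 )) ;;
    T) echo $(( n * 1024 * 1024 * 1024 * 1024 )) ;;
    *) echo "to_bytes: unknown suffix '$s'" >&2; return 1 ;;
  esac
}

to_bytes 16G
```

So a property setting like quota=16G is just 16 * 1024^3 bytes on the wire.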
Let's play?
--
Dennis Clarke
dcla...@opensolaris.ca - Email related to the open source Solaris
dcla...@blastwave.org - Email related to open source for Solaris
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman
(NOT one already in a pool). To add such a device, you would do:
'zpool add tank mycachedevice'
That was an awesome response! Thank you for that :-)
I tend to config my servers with 16G of ram minimum these days and now I
know why.
--
Dennis Clarke
dcla...@opensolaris.ca - Email related
Re-read the section on Swap Space and Virtual Memory for particulars on
how Solaris does virtual memory mapping, and the concept of Virtual Swap
Space, which is what 'swap -s' is really reporting on.
The Solaris Internals book is awesome for this sort of thing. A bit over
the top in detail but
0 0 0
So the manner in which any given IO transaction gets to the zfs filesystem
just gets ever more complicated and convoluted and it makes me wonder if I
am tossing away performance to get higher levels of safety.
--
Dennis Clarke
dcla...@opensolaris.ca - Email related
=sys,ro,anon=0 /mnt
r...@aequitas:/# unshare /mnt
r...@aequitas:/#
Guess I must now try this with a ZFS fs under that iso file.
--
Dennis Clarke
dcla...@opensolaris.ca - Email related to the open source Solaris
dcla...@blastwave.org - Email related to open source for Solaris
On 05-17-10, Thomas Burgess wonsl...@gmail.com wrote:
psrinfo -pv shows:
The physical processor has 8 virtual processors (0-7)
x86 (AuthenticAMD 100F91 family 16 model 9 step 1 clock 200 MHz)
AMD Opteron(tm) Processor 6128 [ Socket: G34 ]
That's odd.
Please try this :
- Original Message -
From: Thomas Burgess wonsl...@gmail.com
Date: Saturday, May 15, 2010 8:09 pm
Subject: Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?
To: Orvar Korvar knatte_fnatte_tja...@yahoo.com
Cc: zfs-discuss@opensolaris.org
Well i just wanted to let everyone
on that
also ... I'm always game to play with such things.
--
Dennis Clarke
dcla...@opensolaris.ca - Email related to the open source Solaris
dcla...@blastwave.org - Email related to open source for Solaris
[1] ummm No, I won't be installing Microsoft Windows 7 64-bit Ultimate
Edition
On 06/05/2010 21:07, Erik Trimble wrote:
VM images contain large quantities of executable files, most of which
compress poorly, if at all.
What data are you basing that generalisation on ?
note : I can't believe someone said that.
warning : I just detected a fast rise time on my pedantic
Do the following ZFS stats look ok?
::memstat
Page Summary Pages MB %Tot
Kernel 106619 832 28%
ZFS File Data 79817 623 21%
Anon 28553 223 7%
Exec and libs 3055 23 1%
Page cache 18024 140 5%
Free (cachelist) 2880 22 1%
Free (freelist)
ea == erik ableson eable...@me.com writes:
dc == Dennis Clarke dcla...@blastwave.org writes:
rw,ro...@100.198.100.0/24, it works fine, and the NFS client
can do the write without error.
ea I've found that the NFS host-based settings required the
ea FQDN
Hi All,
I created a ZFS filesystem test and shared it with zfs set
sharenfs=root=host1 test, and I checked the sharenfs option and it
was already updated to root=host1:
Try to use a backslash to escape those special chars like so :
zfs set
by
default or not with the latest builds. Here's the package if you need to
build from source:
http://smartmontools.sourceforge.net/
You can find it at http://blastwave.network.com/csw/unstable/
Just install it with pkgadd or use pkgtrans to extract it and then run the
binary.
--
Dennis Clarke
or 6.
--
Dennis Clarke
dcla...@opensolaris.ca - Email related to the open source Solaris
dcla...@blastwave.org - Email related to open source for Solaris
probably have to
go through the installboot procedure for that.
--
Dennis Clarke
dcla...@opensolaris.ca - Email related to the open source Solaris
dcla...@blastwave.org - Email related to open source for Solaris
Suppose the requirements for storage shrink (it can happen). Is it
possible to remove a mirror set from a zpool?
Given this :
# zpool status array03
pool: array03
state: ONLINE
status: The pool is formatted using an older on-disk format. The pool can
still be used, but some
No, sorry Dennis, this functionality doesn't exist yet, but it is being
worked on. It will take a while; there are lots of corner cases to handle.
James Dickens
uadmin.blogspot.com
1 ) dammit
2 ) looks like I need to do a full offline backup and then restore
to shrink a zpool.
As usual, Thanks for
memory
and work to keep that buffer full as data is written on the output side.
It's probably at least as fast as mv, and probably safer, because you
never delete the original until after the copy is complete.
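That copy-then-delete discipline can be sketched in plain shell (safe_mv is a hypothetical function of mine, not the tool under discussion):

```shell
#!/bin/sh
# safe_mv: copy a file, verify the copy, and only then remove the original.
# A sketch of "never delete the original until the copy is complete".
safe_mv() {
  src=$1 dst=$2
  cp -p "$src" "$dst" || return 1
  # verify the sizes match before trusting the copy
  [ "$(wc -c < "$src")" -eq "$(wc -c < "$dst")" ] || return 1
  rm "$src"
}

printf 'hello\n' > /tmp/safe_mv_demo.$$
safe_mv /tmp/safe_mv_demo.$$ /tmp/safe_mv_copy.$$
cat /tmp/safe_mv_copy.$$
rm -f /tmp/safe_mv_copy.$$
```

A crash mid-copy leaves the source intact, which is exactly the property mv across filesystems does not give you.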
--
Dennis Clarke
dcla...@opensolaris.ca - Email related to the open source Solaris
dcla
--
Dennis Clarke
dcla...@opensolaris.ca - Email related to the open source Solaris
dcla...@blastwave.org - Email related to open source for Solaris
I hate it when I do that .. 30 secs later I see -m mountpoint, which is a
property but is not specified in -o foo=bar format.
erk
# ptime zpool create -f -o autoreplace=on -o version=10 \
-m legacy \
fibre01 mirror c2t0d0 c3t16d0 \
mirror c2t1d0 c3t17d0 \
mirror c2t2d0 c3t18d0 \
mirror c2t3d0
.
Then you can easily see the used space per filesystem. Allocating user
quotas and then asking the simple questions seems mysterious to me also.
I am looking into this for my own reasons and will stay in touch.
--
Dennis Clarke
dcla...@opensolaris.ca - Email related to the open source Solaris
FYI,
OpenSolaris b128a is available for download or image-update from the
dev repository. Enjoy.
I thought that dedupe has been out for weeks now ?
Dennis
Dennis Clarke wrote:
FYI,
OpenSolaris b128a is available for download or image-update from the
dev repository. Enjoy.
I thought that dedupe has been out for weeks now ?
The source has, yes. But what Richard was referring to was the
respun build now available via IPS.
Oh, sorry. Thought
On Sat, 7 Nov 2009, Dennis Clarke wrote:
Now the first test I did was to write 26^2 files [a-z][a-z].dat in 26^2
directories named [a-z][a-z] where each file is 64K of random
non-compressible data and then some english text.
What method did you use to produce this random data?
I'm using
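One common way to produce non-compressible random data like this (a sketch only; the poster's actual method is cut off above) is to read from /dev/urandom:

```shell
#!/bin/sh
# Make a 64 KB file of non-compressible random bytes, then append some
# english text, matching the file shape described in the test above.
f=/tmp/aa.dat.$$
dd if=/dev/urandom of="$f" bs=1024 count=64 2>/dev/null
printf 'some english text\n' >> "$f"
wc -c < "$f"
rm -f "$f"
```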
, copies = 1.00, dedup * compress / copies =
2.95
#
I have no idea what any of that means, yet :-)
--
Dennis Clarke
dcla...@opensolaris.ca - Email related to the open source Solaris
dcla...@blastwave.org - Email related to open source for Solaris
-
neptune_rpool allocated 21.3G-
I'm currently running tests with this :
http://www.blastwave.org/dclarke/crucible_source.txt
--
Dennis Clarke
dcla...@opensolaris.ca - Email related to the open source Solaris
dcla...@blastwave.org - Email related to open source for Solaris
Does the dedupe functionality happen at the file level or a lower block
level?
I am writing a large number of files that have the following structure :
-- file begins
1024 lines of random ASCII chars 64 chars long
some tilde chars .. about 1000 of them
some text ( english ) for 2K
more text (
On Sat, 2009-11-07 at 17:41 -0500, Dennis Clarke wrote:
Does the dedupe functionality happen at the file level or a lower block
level?
it occurs at the block allocation level.
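To picture what block-level dedup buys, here is a rough sketch that checksums fixed-size chunks of a file and counts the distinct ones (illustration only; ZFS keys on its own per-record checksums, and dedup_estimate is a made-up name):

```shell
#!/bin/sh
# Count total vs distinct 1 KB chunks of a file: a rough picture of how
# block-level dedup can merge identical blocks even between files that
# differ elsewhere. ZFS does this on its own block checksums, not cksum(1).
dedup_estimate() {
  total=0
  : > /tmp/sums.$$
  while dd if="$1" of=/tmp/chunk.$$ bs=1024 count=1 skip=$total 2>/dev/null &&
        [ -s /tmp/chunk.$$ ]
  do
    cksum < /tmp/chunk.$$ >> /tmp/sums.$$
    total=$(( total + 1 ))
  done
  distinct=$(sort -u /tmp/sums.$$ | wc -l | tr -d ' ')
  echo "$total chunks, $distinct distinct"
  rm -f /tmp/sums.$$ /tmp/chunk.$$
}
```

For the file layout described above, the random-ASCII lines would mostly be unique blocks, while the tilde runs and repeated english text are where block-level dedup would find matches.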
I am writing a large number of files that have the following structure :
-- file begins
1024 lines of random
at all or shall I just wait
for the putback to hit the mercurial repo ?
Yes .. this is sort of begging .. but I call it enthusiasm :-)
--
Dennis Clarke
dcla...@opensolaris.ca - Email related to the open source Solaris
dcla...@blastwave.org - Email related to open source for Solaris
Dennis Clarke wrote:
I just went through a BFU update to snv_127 on a V880 :
neptune console login: root
Password:
Nov 3 08:19:12 neptune login: ROOT LOGIN /dev/console
Last login: Mon Nov 2 16:40:36 on console
Sun Microsystems Inc. SunOS 5.11 snv_127 Nov. 02, 2009
SunOS
be possible and even realistic. That would solve the
hash collision concern I would think.
Merely thinking out loud here ...
--
Dennis Clarke
dcla...@opensolaris.ca - Email related to the open source Solaris
dcla...@blastwave.org - Email related to open source for Solaris
This seems like a bit of a restriction ... is this intended ?
# cat /etc/release
Solaris Express Community Edition snv_125 SPARC
Copyright 2009 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms.
, ranging from 33514530
correctable errors per year.
B. Schroeder, E. Pinheiro, W.-D. Weber. DRAM errors in the wild: A
Large-Scale Field Study. Sigmetrics/Performance 2009
see http://www.cs.toronto.edu/~bianca/
--
Dennis Clarke
dcla...@opensolaris.ca - Email related to the open source
what I see.
--
Dennis Clarke
dcla...@opensolaris.ca - Email related to the open source Solaris
dcla...@blastwave.org - Email related to open source for Solaris
refquotanone default
fibre0 refreservation none default
fibre0 quota none default
fibre0 reservation none default
what the heck is refreservation ?? 8-)
--
Dennis Clarke
dcla...@opensolaris.ca - Email related to the open source Solaris
dcla
. :-(
How's life, the universe, and RISC processors for you these days ?
--
Dennis Clarke
dcla...@opensolaris.ca - Email related to the open source Solaris
dcla...@blastwave.org - Email related to open source for Solaris
ps: I have been busy porting as per usual.
New 64-bit ready Tk/Tcl
like the write traffic to the new device is being ignored
in the non-verbose output data.
--
Dennis Clarke
To enable mpxio, you need to have
mpxio-disable=no;
in your fp.conf file. You should run /usr/sbin/stmsboot -e to make
this happen. If you *must* edit that file by hand, always run
/usr/sbin/stmsboot -u afterwards to ensure that your system's MPxIO
config is correctly updated.
I thought
Pardon me but I had to change subject lines just to get out of that other
thread.
In that other thread .. you were saying :
dick hoogendijk uttered:
true. Furthermore, much so-called consumer hardware is very good these
days. My guess is ZFS should work quite reliably on that hardware.
self replies are so degrading ( pun intended )
I see this patch :
Document Audience: PUBLIC
Document ID:139555-08
Title: SunOS 5.10: Kernel Patch
Copyright Notice: Copyright © 2009 Sun Microsystems, Inc. All Rights
Reserved
Update Date:Fri Jul 10 04:29:40 MDT 2009
I have a
some keg(s) of
whatever beer they want. Or maybe new Porsche Cayman S toys.
That would be gratitude as something more than just words.
Thank you.
--
Dennis Clarke
ps: the one funny thing is that I had to get a few things swapped
out and I guess that resets the system clock. It now reports
Richard Elling richard.ell...@gmail.com writes:
You can only send/receive snapshots. However, on the receiving end,
there will also be a dataset of the name you choose. Since you didn't
share what commands you used, it is pretty impossible for us to
speculate what you might have tried.
Dennis Clarke dcla...@blastwave.org writes:
This will probably get me bombed with napalm but I often just
use star from Jörg Schilling because it's dead easy :
star -copy -p -acl -sparse -dump -C old_dir . new_dir
and you're done.[1]
So long as you have both the new and the old zfs
On Tue, 16 Jun 2009, roland wrote:
so, we have a 128bit fs, but only support for 1tb on 32bit?
i'd call that a bug, isn't it ? is there a bugid for this? ;)
I'd say the bug in this instance is using a 32-bit platform in 2009! :-)
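One plausible piece of arithmetic behind a 1 TB ceiling on 32-bit kernels (my assumption, not stated in the thread): a signed 32-bit block address times 512-byte sectors comes out to exactly 1 TB:

```shell
#!/bin/sh
# A signed 32-bit block address gives 2^31 addressable blocks; at the
# traditional 512-byte sector size that is exactly 1 TB (2^40 bytes).
echo $(( 2147483648 * 512 ))
```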
Rich, a lot of embedded industrial solutions are 32-bit
is a choice and would add :
Compression is a choice and it is the default.
Just my feelings on the issue.
Dennis Clarke
, new_device begins to
resilver immediately.
so yeah, you have it.
Want to go for bonus points? Try to read into that man page to figure out
how to add a hot spare *after* you are all mirrored up.
--
Dennis Clarke
://www.blastwave.org/dclarke/blog/files/kernel_thread_stuck.README
also see output from fmdump -eV
http://www.blastwave.org/dclarke/blog/files/fmdump_e.log
Please let me know what else you may need.
--
Dennis Clarke
Dennis Clarke wrote:
It may be because it is blocked in kernel.
Can you do something like this:
echo "0t<pid of zpool import>::pid2proc|::walk thread|::findstack -v" | mdb -k
So we see that it cannot complete import here and is waiting for
transaction group to sync. So probably spa_sync thread is stuck
Dennis Clarke wrote:
Dennis Clarke wrote:
It may be because it is blocked in kernel.
Can you do something like this:
echo "0t<pid of zpool import>::pid2proc|::walk thread|::findstack -v" | mdb -k
So we see that it cannot complete import here and is waiting for
transaction group to sync. So probably
CTRL+C does nothing and kill -9 pid does nothing to this command.
feels like a bug to me
Yes, it is:
http://bugs.opensolaris.org/view_bug.do?bug_id=6758902
Now I recall why I had to reboot. Seems as if a lot of commands hang now.
Things like :
df -ak
zfs list
zpool list
they all
--
Dennis Clarke wrote:
# w
3:14pm up 11:24, 3 users, load average: 0.46, 0.29, 0.23
User tty login@ idle JCPU PCPU what
dclarke console 1:22pm 1:52 2:02 1:31 /usr/lib/nwam-manager
dclarke pts/4 1:44pm 1:10 zpool
I tried to import a zpool and the process just hung there, doing nothing.
It has been ten minutes now so I tried to hit CTRL-C. That did nothing.
So then I tried :
Sun Microsystems Inc. SunOS 5.11 snv_110 November 2008
r...@opensolaris:~# ps -efl
F S UID PID PPID C PRI NI
Dennis Clarke wrote:
I tried to import a zpool and the process just hung there, doing
nothing.
It has been ten minutes now so I tried to hit CTRL-C. That did nothing.
This symptom is consistent with a process blocked waiting on disk I/O.
Are the disks functional?
totally
I'm running
Dennis Clarke wrote:
I tried to import a zpool and the process just hung there, doing
nothing.
It has been ten minutes now so I tried to hit CTRL-C. That did nothing.
This symptom is consistent with a process blocked waiting on disk I/O.
Are the disks functional?
dcla...@neptune
Dennis Clarke wrote:
I tried to import a zpool and the process just hung there, doing nothing.
It has been ten minutes now so I tried to hit CTRL-C. That did
nothing.
It may be because it is blocked in kernel.
Can you do something like this:
echo "0t<pid of zpool import>::pid2proc|::walk
And after some 4 days without any CKSUM error, how can yanking the
power cord mess boot-stuff?
Maybe because on the fifth day some hardware failure occurred? ;-)
ha ha ! sorry .. that was pretty funny.
--
Dennis
data errors which
makes me wonder if the Severe FAULTs are for unknown data errors :-)
--
Dennis Clarke
sig du jour : "An appeaser is one who feeds a crocodile, hoping it will
eat him last." - Winston Churchill
[1] I really want to know where PowerChute for Solaris went to.
[2] I would create
On Tue, 24 Mar 2009, Dennis Clarke wrote:
However, I have repeatedly run into problems when I need to boot after a
power failure. I see vdevs being marked as FAULTED regardless of whether
any hard errors are actually reported by the on-disk SMART firmware. I am
able to remove these FAULTed
On Tue, 24 Mar 2009, Dennis Clarke wrote:
You would think so eh?
But a transient problem that only occurs after a power failure?
Transient problems are most common after a power failure or during
initialization.
Well the issue here is that power was on for ten minutes before I tried
to do
Hey, Dennis -
I can't help but wonder if the failure is a result of zfs itself finding
some problems post restart...
Yes, yes, this is my feeling also, but I need to find the data
so that I can sleep at night. I am certain that ZFS does not just toss
out faults on a whim
c8t2004CF96FF00d0 \
mirror c8t2004CFAC489Fd0 c8t2004CF961853d0
Does the keyword current work in some other fashion ?
--
Dennis Clarke
Not sure if this has been reported or not.
This is fairly minor but slightly annoying.
After a fresh install of snv_64a I ran zpool import and found this :
# zpool import
pool: zfs0
id: 13628474126490956011
state: ONLINE
status: The pool is formatted using an older on-disk version.
action:
in /usr/src/cmd/zpool/zpool_main.c :
at line 680 forwards we can probably check for this scenario :
if ((altroot != NULL) && (altroot[0] != '/')) {
(void) fprintf(stderr, gettext("invalid alternate root '%s': "
"must be an absolute path\n"), altroot);
nvlist_free(nvroot);
On Mon, Jun 25, 2007 at 02:34:21AM -0400, Dennis Clarke wrote:
note that it was well after 2 AM for me .. half blind asleep
that's my excuse .. I'm sticking to it. :-)
in /usr/src/cmd/zpool/zpool_main.c :
at line 680 forwards we can probably check for this scenario
You've tripped over a variant of:
6335095 Double-slash on /. pool mount points
- Eric
oh well .. no points for originality then I guess :-)
Thanks
On 4/27/07, Ben Miller [EMAIL PROTECTED] wrote:
I just threw in a truss in the SMF script and rebooted the test system and it
failed again.
The truss output is at http://www.eecis.udel.edu/~bmiller/zfs.truss-Apr27-2007
324:read(7, 0x000CA00C, 5120) = 0
324:
On 4/23/07, Richard Elling [EMAIL PROTECTED] wrote:
FYI,
Sun is having a big, 25th Anniversary sale. X4500s are half price --
24 TBytes for $24k. ZFS runs really well on a X4500.
http://www.sun.com/emrkt/25sale/index.jsp?intcmp=tfa5101
I apologize for those not in the US or UK and
Dear ZFS and OpenSolaris people :
I recently upgraded a large NFS server upwards from Solaris 8. This is a
production manufacturing facility with football field sized factory floors
and 25 tonne steel products. Many on-site engineers on AIX and CATIA as well
as Solaris users and Windows and
On 4/18/07, Nicolas Williams [EMAIL PROTECTED] wrote:
On Wed, Apr 18, 2007 at 03:47:55PM -0400, Dennis Clarke wrote:
Maybe with a definition of what a backup is and then some way to
achieve it. As far as I know the only real backup is one that can be
tossed into a vault and locked away
{
zpool_list_t *cb_list;
/*
* The cb_raw int is added here by Dennis Clarke
*/
int cb_raw;
int cb_verbose;
int cb_iteration;
int cb_namewidth;
} iostat_cbdata_t;
I don't think that any change to print_vdev_stats is required because the
creation
I really need to take a longer look here.
/*
* zpool iostat [-v] [pool] ... [interval [count]]
*
* -v Display statistics for individual vdevs
*
* This command can be tricky because we want to be able to deal with pool
.
.
.
I think I may need to deal with a raw option here ?
Robert Milkowski wrote:
Hello Ivan,
Sunday, March 11, 2007, 12:01:28 PM, you wrote:
IW Got it, thanks, and a more general question, in a single disk
IW root pool scenario, what advantage zfs will provide over ufs w/
IW logging? And when zfs boot integrated in neveda, will live upgrade
You don't honestly, really, reasonably, expect someone, anyone, to look
at the stack
well of course he does :-)
and I looked at it .. all of it and I can tell exactly what the problem is
but I'm not gonna say because it's a trick question.
so there.
Dennis
On Sun, 18 Feb 2007, Calvin Liu wrote:
I want to run command rm Dis* in a folder but mis-typed a space in it
so it became rm Dis *. Unfortunately I had pressed the return button
before I noticed the mistake. So you all know what happened... :( :( :(
Ouch!
How can I get the files back in
in a very
stable fashion long term. Once you add a single patch to that system you
have wandered out of "this is shipped on media" to somewhere else.
--
Dennis Clarke
boldly plowing forwards I request a few disks/vdevs to be mirrored
all at the same time :
bash-3.2# zpool status zfs0
pool: zfs0
state: ONLINE
scrub: resilver completed with 0 errors on Thu Feb 1 04:17:58 2007
config:
NAME STATE READ WRITE CKSUM
zfs0
Hello,
We're setting up a new mailserver infrastructure and decided to run it
on zfs. On an E220R with a D1000, I've set up a storage pool with four
mirrors:
Good morning Ihsan ...
I see that you have everything mirrored here, that's excellent.
When you pulled a disk, was it a disk
Hello Michael,
Am 24.1.2007 14:36 Uhr, Michael Schuster schrieb:
--
[EMAIL PROTECTED] # zpool status
pool: pool0
state: ONLINE
scrub: none requested
config:
[...]
Jan 23 18:51:38 newponit
Am 24.1.2007 14:59 Uhr, Dennis Clarke schrieb:
Jan 23 17:25:26 newponit genunix: [ID 408822 kern.info] NOTICE: glm0:
fault detected in device; service still available
Jan 23 17:25:26 newponit genunix: [ID 611667 kern.info] NOTICE: glm0:
Disconnected tagged cmd(s) (1) timeout for Target 0.0
What do you mean by UFS wasn't an option due to
number of files?
Exactly that. UFS has a 1 million file limit under Solaris. Each Oracle
Financials environment well exceeds this limitation.
what ?
$ uname -a
SunOS core 5.10 Generic_118833-17 sun4u sparc SUNW,UltraSPARC-IIi-cEngine
$ df -F
On Mon, Jan 08, 2007 at 03:47:31PM +0100, Peter Schuller wrote:
http://blogs.sun.com/roch/entry/nfs_and_zfs_a_fine
So just to confirm; disabling the zil *ONLY* breaks the semantics of
fsync()
and synchronous writes from the application perspective; it will do
*NOTHING*
to lessen the
my
data when I am trying to add redundency.
Any thoughts ?
--
Dennis Clarke
Note that attach has no option for -n which would just show me the
damage I am about to do :-(
In general, ZFS does a lot of checking before committing a change to the
configuration. We make sure that you don't do things like use disks
that are already in use, partitions aren't overlapping,
Another thing to keep an eye out for is disk caching. With ZFS,
whenever the NFS server tells us to make sure something is on disk, we
actually make sure it's on disk by asking the drive to flush dirty data
in its write cache out to the media. Needless to say, this takes a
while.
With
?
--
Dennis Clarke
Anton B. Rang wrote:
INFORMATION: If a member of this striped zpool becomes unavailable or
develops corruption, Solaris will kernel panic and reboot to protect your
data.
OK, I'm puzzled.
Am I the only one on this list who believes that a kernel panic, instead
of EIO, represents a bug?
think it's low priority. You can recover a zpool
easily enough with zpool import but if you ever lose a few disks or some
disaster hits then you had better have Veritas NetBackup or similar in
place.
Dennis Clarke
to backup those ZFS filesystems while booted
from CDROM/DVD or boot net ?
Essentially, if I had nothing but bare metal here and a tape drive can I
access the zpool that resides on six 36GB disks on controller 2 or am I dead
in the water ?
--
Dennis Clarke
On 11/23/06, James Dickens [EMAIL PROTECTED] wrote:
On 11/23/06, Dennis Clarke [EMAIL PROTECTED] wrote:
assume worst case
someone walks up to you and drops an array on you.
They say its ZFS an' I need that der stuff 'k? all while chewing on a
cig.
what do you do ? besides run
Have a gander below :
Agreed - it sucks - especially for small file use. Here's a 5,000 ft view
of the performance while unzipping and extracting a tar archive. First
the test is run on a SPARC 280R running Build 51a with dual 900MHz USIII
CPUs and 4Gb of RAM:
$ cp emacs-21.4a.tar.gz
or slices, you get a stripe of mirrors and not
just a mirror of stripes.
While ZFS may do a similar thing *I don't know* if there is a published
document yet that shows conclusively that ZFS will survive multiple disk
failures.
However ZFS brings a lot of other great features.
Dennis Clarke
Dennis Clarke wrote:
While ZFS may do a similar thing *I don't know* if there is a published
document yet that shows conclusively that ZFS will survive multiple disk
failures.
?? why not? Perhaps this is just too simple and therefore doesn't get
explained well.
That is not what I wrote
Steffen Weiberle wrote:
Customer asks whether ZFS is fully POSIX compliant, such as flock?
ZFS is not currently fully POSIX compliant. Making ZFS fully POSIX
compliant is still planned and we are currently addressing bugs in this
area.
Interfaces such as flock() should work just fine
- Original Message -
Subject: no tool to get expected disk usage reports
From:Dennis Clarke [EMAIL PROTECTED]
Date:Fri, October 13, 2006 14:29
To: zfs-discuss@opensolaris.org
I think
I'd better keep to the common forums.
Sorry for causing any possible inconvenience for people only following this
through e-mail.
I had no problem with your email thread at all. No worries and I don't
see any cause for concern.
my 0.02 $
--
Dennis Clarke
Woo hoo! It looks like the resilver completed sometime over night. The
system appears to be running normally, (after one final reboot):
[EMAIL PROTECTED]: zpool status
pool: storage
state: ONLINE
scrub: none requested
config:
NAME STATE
of deal ?
Dennis Clarke
please enlighten me.
Dennis Clarke