Bill Sommerfeld wrote:
...
So it's really both - the subcommand successfully executes when it's
actually written to disk and the txg (transaction group) is synced.
I found myself backtracking while reading that sentence due to the
ambiguity in the first half -- did you mean the write of the literal
text of
How about it folks - would it be a good idea for me to explore what it
takes to get such a bug/RFE setup implemented for the ZFS community on
OpenSolaris.org?
what's wrong with http://bugs.opensolaris.org/bugdatabase/index.jsp for
finding bugs?
i think we've been really good about taking
Steve Bennett wrote:
OK, I know that there's been some discussion on this before, but I'm not sure
that any specific advice came out of it. What would the advice be for
supporting a largish number of users (10,000 say) on a system that supports
ZFS? We currently use vxfs and assign a user
Robert Milkowski wrote:
Hello Steve,
Thursday, June 29, 2006, 5:54:50 PM, you wrote:
SB I've noticed another possible issue - each mount consumes about 45KB of
SB memory - not an issue with tens or hundreds of filesystems, but going
SB back to the 10,000 user scenario this would be 450MB of
Sean Meighan wrote:
i made sure the path is clean, i also qualified the paths. The time
varies from 0.5 seconds to 15 seconds. If i just do a timex pwd, it
always seems to be fast. We are using csh.
Here's a simple dscript to figure out how long each syscall is taking:
#!/usr/sbin/dtrace -FCs
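(The script itself was truncated in the archive; a minimal sketch of
what such a D script might look like, assuming you run it against one
process via -c or -p -- the probes are standard DTrace, the script name
is made up:)

#!/usr/sbin/dtrace -s
/* syscalltime.d: quantize the latency of every syscall made by $target */
syscall:::entry
/pid == $target/
{
        self->ts = timestamp;
}
syscall:::return
/self->ts/
{
        @[probefunc] = quantize(timestamp - self->ts);
        self->ts = 0;
}

Run as, e.g., ./syscalltime.d -c pwd to see where a slow pwd spends its time.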
martin wrote:
How could i monitor zfs ?
or the zpool activity ?
I want to know if anything wrong is going on.
If i could receive those warning by email, it would be great :)
For pool health:
# zpool status -x
all pools are healthy
#
To monitor activity, use 'zpool iostat 1' to monitor
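(For the email part of the question there's no built-in facility; a
hedged sketch of the usual workaround, a cron job -- the script name,
schedule, and address are all illustrative. Note that "all pools are
healthy" is exactly the string zpool status -x prints above:)

#!/bin/sh
# check-pools.sh: mail the zpool status report if any pool is unhealthy
STATUS=`/usr/sbin/zpool status -x`
if [ "$STATUS" != "all pools are healthy" ]; then
        echo "$STATUS" | mailx -s "zpool warning on `hostname`" admin@example.com
fi

# crontab entry, checking every 15 minutes:
# 0,15,30,45 * * * * /usr/local/bin/check-pools.sh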
Robert Milkowski wrote:
Hello George,
Wednesday, July 26, 2006, 7:27:04 AM, you wrote:
GW Additionally, I've just putback the latest feature set and bugfixes
GW which will be part of s10u3_03. There were some additional performance
GW fixes which may really benefit plus it will provide hot
Rich Teer wrote:
On Mon, 31 Jul 2006, Dale Ghent wrote:
So what does this exercise leave me thinking? Is Linux 2.4.x really screwed up
in NFS-land? This Solaris NFS replaces a Linux-based NFS server that the
Linux has had, uhhmmm (struggling to be nice), iffy NFS for ages.
The
ES Second, you may be able to get more performance from the ZFS filesystem
ES on the HW LUN by tweaking the max pending # of requests. One thing
ES we've found is that ZFS currently has a hardcoded limit of how many
ES outstanding requests to send to the underlying vdev (35). This works
ES
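(If memory serves, the tunable being referred to in that era was
zfs_vdev_max_pending, default 35; a hedged sketch of adjusting it --
the value 70 is purely illustrative, and this is an unsupported knob:)

# on a live system, via mdb:
echo zfs_vdev_max_pending/W0t70 | mdb -kw

# or persistently, in /etc/system:
# set zfs:zfs_vdev_max_pending = 70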
Leon Koll wrote:
I performed a SPEC SFS97 benchmark on Solaris 10u2/Sparc with 4 64GB
LUNs, connected via FC SAN.
The filesystems that were created on LUNS: UFS,VxFS,ZFS.
Unfortunately the ZFS test couldn't complete because the box was hung
under very moderate load (3000 IOPs).
Additional tests
Leon Koll wrote:
On 8/8/06, eric kustarz [EMAIL PROTECTED] wrote:
Leon Koll wrote:
I performed a SPEC SFS97 benchmark on Solaris 10u2/Sparc with 4 64GB
LUNs, connected via FC SAN.
The filesystems that were created on LUNS: UFS,VxFS,ZFS.
Unfortunately the ZFS test couldn't complete
Leon Koll wrote:
...
So having 4 pools isn't a recommended config - i would destroy those 4
pools and just create 1 RAID-0 pool:
#zpool create sfsrocks c4t00173801014Bd0 c4t00173801014Cd0
c4t001738010140001Cd0 c4t0017380101400012d0
each of those devices is a 64GB lun, right?
I did
Leon Koll wrote:
On 8/11/06, eric kustarz [EMAIL PROTECTED] wrote:
Leon Koll wrote:
...
So having 4 pools isn't a recommended config - i would destroy
those 4
pools and just create 1 RAID-0 pool:
#zpool create sfsrocks c4t00173801014Bd0 c4t00173801014Cd0
c4t001738010140001Cd0
Constantin Gonzalez wrote:
Hi,
my ZFS pool for my home server is a bit unusual:
pool: pelotillehue
state: ONLINE
scrub: scrub completed with 0 errors on Mon Aug 21 06:10:13 2006
config:
NAME          STATE   READ WRITE CKSUM
pelotillehue  ONLINE     0     0     0
Robert Milkowski wrote:
Hello zfs-discuss,
I've got many disks in a JBOD (100) and while doing tests there
are a lot of destroyed pools. Then some disks are re-used to be part
of new pools. Now if I do zpool import -D I can see a lot of destroyed
pools in a state where I can't import them
Anton B. Rang wrote:
If you issue aligned, full-record write requests, there is a definite advantage
to continuing to set the record size. It allows ZFS to process the write
without the read-modify-write cycle that would be required for the default 128K
record size. (While compression
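(The corresponding knob, for anyone following along -- the dataset
name and size are illustrative, and recordsize only affects files
written after it is set:)

# match the record size to the application's fixed 8K I/O size
zfs set recordsize=8k pool/db
zfs get recordsize pool/db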
Matthew Ahrens wrote:
Matthew Ahrens wrote:
Here is a proposal for a new 'copies' property which would allow
different levels of replication for different filesystems.
Thanks everyone for your input.
The problem that this feature attempts to address is when you have
some data that is
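(As proposed, the property is set per-filesystem; the names and values
here are illustrative:)

# keep two copies of everything in this filesystem, even on one disk
zfs set copies=2 tank/important
zfs get copies tank/important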
David Dyer-Bennet wrote:
On 9/12/06, eric kustarz [EMAIL PROTECTED] wrote:
So it seems to me that having this feature per-file is really useful.
Say i have a presentation to give in Pleasanton, and the presentation
lives on my single-disk laptop - I want all the meta-data and the actual
Torrey McMahon wrote:
eric kustarz wrote:
Matthew Ahrens wrote:
Matthew Ahrens wrote:
Here is a proposal for a new 'copies' property which would allow
different levels of replication for different filesystems.
Thanks everyone for your input.
The problem that this feature attempts
Jakob Praher wrote:
Frank Cusack wrote:
On September 18, 2006 5:45:08 PM +0200 Jakob Praher [EMAIL PROTECTED] wrote:
huh. How do you create a SAN with NFS?
Sorry. Okay, it would be Network Attached Storage, not the other way
round. I guess you are right.
BUT if we are discussing NFS
Frank Cusack wrote:
On September 21, 2006 10:48:34 AM +0200 Jakob Praher [EMAIL PROTECTED] wrote:
Frank Cusack wrote:
On September 18, 2006 5:45:08 PM +0200 Jakob Praher [EMAIL PROTECTED]
wrote:
BUT if we are discussing NFS for distributed storage: what are your
guys' performance data
Chad Leigh wrote:
I have set up a Solaris 10 U2 06/06 system that has basic patches to the latest
-19 kernel patch and latest zfs genesis etc as recommended. I have set up a
basic pool (local) and a bunch of sub-pools (local/mail, local/mail/shire.net,
local/mail/shire.net/o,
Hi everybody,
Yesterday I putback into nevada:
PSARC 2006/288 zpool history
6343741 want to store a command history on disk
This introduces a new subcommand to zpool(1m), namely 'zpool history'.
Yes, team ZFS is tracking what you do to our precious pools.
For more information, check out:
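(Usage is simply the new subcommand against a pool; the pool name and
timestamps below are illustrative, but the output format matches what
appears later in this digest:)

# zpool history tank
History for 'tank':
2006-10-04.14:12:33 zpool create tank mirror c0t0d0 c1t0d0
2006-10-04.14:13:02 zfs create tank/home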
ozan s. yigit wrote:
we thought we would try adding a disk to an existing raidz pool
named backup:
# zpool status
...
NAME        STATE   READ WRITE CKSUM
backup      ONLINE     0     0     0
  raidz     ONLINE     0     0     0
    c4t2d0  ONLINE     0
Robert Milkowski wrote:
Hi.
When there's a lot of I/O to a pool, zpool status is really slow.
These are just statistics; it should be quick.
This is:
6430480 grabbing config lock as writer during I/O load can take
excessively long
eric
# truss -ED zpool status
[...]
Daniel Rock wrote:
Peter Guthrie schrieb:
So far I've seen *very* high loads twice using ZFS which does not
happen when the same task is implemented with UFS
This is a known bug in the SPARC IDE driver.
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6421427
You could try the
Pavan Reddy wrote:
This is the time it took to move the file:
The machine is a Intel P4 - 512MB RAM.
bash-3.00# time mv ../share/pav.tar .
real    1m26.334s
user    0m0.003s
sys     0m7.397s
bash-3.00# ls -l pav.tar
-rw-r--r-- 1 root root 516628480 Oct 29 19:30 pav.tar
A
Jay Grogan wrote:
Ran 3 tests using mkfile to create a 6GB file on UFS and ZFS file systems.
Command run: mkfile -v 6gb /ufs/tmpfile
Test 1 UFS mounted LUN (2m2.373s)
Test 2 UFS mounted LUN with directio option (5m31.802s)
Test 3 ZFS LUN (Single LUN in a pool) (3m13.126s)
Sunfire V120
1 Qlogic
Adam Leventhal wrote:
Rick McNeal and I have been working on building support for sharing ZVOLs
as iSCSI targets directly into ZFS. Below is the proposal I'll be
submitting to PSARC. Comments and suggestions are welcome.
Adam
---8<---
iSCSI/ZFS Integration
A. Overview
The goal of this
Chris Gerhard wrote:
An alternate way will be to use NFSv4. When an NFSv4
client crosses
a mountpoint on the server, it can detect this and
mount the filesystem.
It can feel like a lite version of the automounter
in practice, as
you just have to mount the root and discover the
filesystems as
Erik Trimble wrote:
I actually think this is an NFSv4 issue, but I'm going to ask here
anyway...
Server:Solaris 10 Update 2 (SPARC), with several ZFS file systems
shared via the legacy method (/etc/dfs/dfstab and share(1M), not via the
ZFS property). Default settings in /etc/default/nfs
Neil Perrin wrote:
Tomas Ögren wrote on 11/09/06 09:59:
1. DNLC-through-ZFS doesn't seem to listen to ncsize.
The filesystem currently has ~550k inodes and large portions of it are
frequently looked over with rsync (over NFS). mdb said ncsize was about
68k and vmstat -s said we had a
Brian Wong wrote:
eric kustarz wrote:
If the ARC detects low memory (via arc_reclaim_needed()), then we call
arc_kmem_reap_now() and subsequently dnlc_reduce_cache() - which
reduces the # of dnlc entries by 3% (ARC_REDUCE_DNLC_PERCENT).
So yeah, dnlc_nentries would be really interesting
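(To watch those numbers on a live system -- standard mdb and kstat
invocations; dnlc_nentries and ncsize are the kernel variables named
above:)

# current DNLC entry count vs. the configured maximum:
echo 'dnlc_nentries/D' | mdb -k
echo 'ncsize/D' | mdb -k
# hit/miss statistics:
kstat -n dnlcstats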
Rainer Heilke wrote:
Greetings, all.
I put myself into a bit of a predicament, and I'm hoping there's a way out.
I had a drive (EIDE) in a ZFS mirror die on me. Not a big deal, right? Well, I
bought two SATA drives to build a new mirror. Since they were about the same
size (I wanted bigger
Tomas Ögren wrote:
On 13 November, 2006 - Sanjeev Bagewadi sent me these 7,1K bytes:
Tomas,
comments inline...
arc::print struct arc
{
anon = ARC_anon
mru = ARC_mru
mru_ghost = ARC_mru_ghost
mfu = ARC_mfu
mfu_ghost = ARC_mfu_ghost
size = 0x6f7a400
p = 0x5d9bd5a
Jim Davis wrote:
We have two aging Netapp filers and can't afford to buy new Netapp gear,
so we've been looking with a lot of interest at building NFS fileservers
running ZFS as a possible future approach. Two issues have come up in
the discussion
- Adding new disks to a RAID-Z pool
Jim Davis wrote:
eric kustarz wrote:
What about adding a whole new RAID-Z vdev and dynamicly stripe across
the RAID-Zs? Your capacity and performance will go up with each
RAID-Z vdev you add.
Thanks, that's an interesting suggestion.
Have you tried using the automounter as suggested
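(Eric's suggestion in command form -- the device names are
illustrative; note that an added vdev cannot be removed again later:)

# grow the pool by a second raidz vdev; writes then stripe across both
zpool add backup raidz c5t0d0 c5t1d0 c5t2d0
zpool status backup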
Ben Rockwood wrote:
Eric Kustarz wrote:
Ben Rockwood wrote:
I've got a Thumper doing nothing but serving NFS. Its using B43 with
zil_disabled. The system is being consumed in waves, but by what I
don't know. Notice vmstat:
We made several performance fixes in the NFS/ZFS area in recent
Ben Rockwood wrote:
Bill Moore wrote:
On Fri, Dec 08, 2006 at 12:15:27AM -0800, Ben Rockwood wrote:
Clearly ZFS file creation is just amazingly heavy even with ZIL
disabled. If creating 4,000 files in a minute squashes 4 2.6Ghz
Opteron cores we're in big trouble in the longer term. In
Jim Mauro wrote:
Could be NFS synchronous semantics on file create (followed by
repeated flushing of the write cache). What kind of storage are you
using (feel free to send privately if you need to) - is it a thumper?
It's not clear why NFS-enforced synchronous semantics would induce
Jim Hranicky wrote:
By mistake, I just exported my test filesystem while it was up
and being served via NFS, causing my tar over NFS to start
throwing stale file handle errors.
So you had a pool and were sharing filesystems over NFS, NFS clients had
active mounts, you removed
Bill Casale wrote:
Please reply directly to me. I'm seeing the message below.
Is it possible to determine exactly which file is corrupted?
I was thinking the OBJECT/RANGE info may be pointing to it
but I don't know how to equate that to a file.
This is bug:
6410433 'zpool status -v' would be more
errors: The following persistent errors have been detected:
DATASET OBJECT RANGE
z_tsmsun1_pool/tsmsrv1_pool   2620     8464760832-8464891904
Looks like I have possibly a single file that is corrupted. My question is how do I find the file. Is it as
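(One hedged way to map an object number back to a path, assuming the
column split reconstructed above -- zdb's verbose object dump includes
a path line for plain files; zdb is a diagnostic tool, not a stable
interface:)

# zdb -ddddd z_tsmsun1_pool/tsmsrv1_pool 2620 | grep path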
Peter Schuller wrote:
http://blogs.sun.com/roch/entry/nfs_and_zfs_a_fine
So just to confirm; disabling the zil *ONLY* breaks the semantics of fsync()
and synchronous writes from the application perspective; it will do *NOTHING*
to lessen the correctness guarantee of ZFS itself,
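(For reference, the switch under discussion in that era was the
zil_disable tunable -- shown here for completeness, not as a
recommendation, since it sacrifices exactly the fsync() semantics
described above:)

# in /etc/system, takes effect for filesystems mounted afterwards:
# set zfs:zil_disable = 1

# or on a live system:
echo zil_disable/W0t1 | mdb -kw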
Hans-Juergen Schnitzer wrote:
Roch - PAE wrote:
Just posted:
http://blogs.sun.com/roch/entry/nfs_and_zfs_a_fine
What role does network latency play? If I understand you right,
even a low-latency network, e.g. Infiniband, would not increase
performance substantially since the main
Anantha N. Srirama wrote:
I'm observing the following behavior in our environment (Sol10U2, E2900, 24x96,
2x2Gbps, ...)
In general, i would recommend upgrading to s10u3 (if you can).
- I've a compressed ZFS filesystem where I'm creating a large tar file. I
notice that the tar process is
James Dickens wrote:
On 1/13/07, roland [EMAIL PROTECTED] wrote:
thanks for your infos!
can zfs protect my data from such single-bit-errors with a single
drive ?
nope.. but it can tell you that it has occurred.
can it also tell (or can i use a tool to determine), which data/file
is
Glenn Skinner wrote:
About a month ago, I upgraded my workstation to Nevada build 49. When
I ran zpool status on my existing zfs pool (a couple mirrored
partitions, one from each of the system's two disks), I discovered that
the existing on-disk format was at version 1 and that I could upgrade
Note that the bad disk on the node caused a normal reboot to hang.
I also verified that sync from the command line hung. I don't know
how ZFS (or Solaris) handles situations involving bad disks...does
a bad disk block proper ZFS/OS handling of all IO, even to the
other healthy disks?
On Jan 26, 2007, at 6:02 AM, Robert Milkowski wrote:
Hello zfs-discuss,
Is anyone working on that bug? Any progress?
For bug:
6343667 scrub/resilver has to start over when a snapshot is taken
I believe that is on Matt and Mark's radar, and they have made some
progress.
eric
IIRC Bill posted here some time ago saying the problem with write cache
on the arrays is being worked on.
Yep, the bug is:
6462690 sd driver should set SYNC_NV bit when issuing SYNCHRONIZE
CACHE to
SBC-2 devices
We have a case going through PSARC that will make things work
correctly with
On Feb 6, 2007, at 10:43 AM, Robert Milkowski wrote:
Hello eric,
Tuesday, February 6, 2007, 5:55:23 PM, you wrote:
IIRC Bill posted here some time ago saying the problem with write
cache
on the arrays is being worked on.
ek Yep, the bug is:
ek 6462690 sd driver should set SYNC_NV bit
On Feb 12, 2007, at 8:05 AM, Robert Petkus wrote:
Some comments from the author:
1. It was a preliminary scratch report not meant to be exhaustive
and complete by any means. A comprehensive report of our findings
will be released soon.
2. I claim responsibility for any benchmarks
On Feb 12, 2007, at 7:52 AM, Robert Milkowski wrote:
Hello Roch,
Monday, February 12, 2007, 3:54:30 PM, you wrote:
RP Duh!.
RP Long syncs (which delay the next sync) are also possible on
RP write-intensive workloads. Throttling heavy writers, I
RP think, is the key to fixing this.
ek Have you increased the load on this machine? I have seen a
similar
ek situation (new requests being blocked waiting for the sync
thread to
ek finish), but that's only been when either 1) the hardware is
broken
ek and taking too long or 2) the server is way overloaded.
I don't think
I've been using it in another CR where destroying one of the snapshots
was helping the performance. Nevertheless, here it is on that server:
Short period of time:
bash-3.00# ./metaslab-6495013.d
^C
Loops count
value  ------------- Distribution ------------- count
-1
On Feb 15, 2007, at 6:08 AM, Robert Milkowski wrote:
Hello eric,
Wednesday, February 14, 2007, 5:04:01 PM, you wrote:
ek I'm wondering if we can just lower the amount of space we're
trying
ek to alloc as the pool becomes more fragmented - we'll lose a
little I/
ek O performance, but it
On Feb 18, 2007, at 9:19 PM, Davin Milun wrote:
I have one that looks like this:
pool: preplica-1
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore the file in question if possible.
ek If you were able to send over your complete pool, destroy the
ek existing one and re-create a new one using recv, then that should
ek help with fragmentation. That said, that's a very poor man's
ek defragger. The defragmentation should happen automatically or at
ek least while the pool is
On Feb 20, 2007, at 10:43 AM, [EMAIL PROTECTED] wrote:
If you run a 'zpool scrub preplica-1', then the persistent error log
will be cleaned up. In the future, we'll have a background scrubber
to make your life easier.
eric
Eric,
Great news! Are there any details about how
On Feb 9, 2007, at 8:02 AM, Carisdad wrote:
I've seen very good performance on streaming large files to ZFS on
a T2000. We have been looking at using the T2000 as a disk storage
unit for backups. I've been able to push over 500MB/s to the
disks. Setup is EMC Clariion CX3 with 84 500GB
On Feb 22, 2007, at 10:01 AM, Carisdad wrote:
eric kustarz wrote:
On Feb 9, 2007, at 8:02 AM, Carisdad wrote:
I've seen very good performance on streaming large files to ZFS
on a T2000. We have been looking at using the T2000 as a disk
storage unit for backups. I've been able to push
On Feb 27, 2007, at 2:35 AM, Roch - PAE wrote:
Jens Elkner writes:
Currently I'm trying to figure out the best zfs layout for a
thumper wrt read AND write performance.
I did some simple mkfile 512G tests and found out that on average
~500 MB/s seems to be the maximum one can reach
On Mar 16, 2007, at 1:29 PM, JS wrote:
I've been seeing this failure to cap on a number of (Solaris 10
update 2 and 3) machines since the script came out (arc hogging is
a huge problem for me, esp on Oracle). This is probably a red
herring, but my v490 testbed seemed to actually cap on 3
On Mar 19, 2007, at 7:26 PM, Jens Elkner wrote:
On Wed, Feb 28, 2007 at 11:45:35AM +0100, Roch - PAE wrote:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6460622
Any estimations, when we'll see a [feature] fix for U3?
Should I open a call, to perhaps rise the priority
OK, that's an idea, but then this becomes not so easy to manage. I
have made some attempts and found iscsi{,t}adm not that pleasant to use
compared to what the zfs/zpool interfaces provide.
hey Cedrice,
Could you be more specific here? What wasn't easy? Any suggestions
to improve it?
On Mar 20, 2007, at 10:27 AM, [EMAIL PROTECTED] wrote:
Folks,
Is there any update on the progress of fixing the resilver/
snap/scrub
reset issues? If the bits have been pushed is there a patch for
Solaris
10U3?
http://bugs.opensolaris.org/view_bug.do?bug_id=6343667
Matt and
Is this the same panic I observed when moving a FireWire disk from
a SPARC
system running snv_57 to an x86 laptop with snv_42a?
6533369 panic in dnode_buf_byteswap importing zpool
Yep, thanks - i was looking for that bug :) I'll close it out as a dup.
eric
On Mar 23, 2007, at 6:13 AM, Łukasz wrote:
When I'm trying to do the following in the kernel, in a zfs ioctl:
1. snapshot destroy PREVIOS
2. snapshot rename LATEST-PREVIOUS
3. snapshot create LATEST
code is:
/* delete previous snapshot */
zfs_unmount_snap(snap_previous, NULL);
On Apr 9, 2007, at 2:20 AM, Dirk Jakobsmeier wrote:
Hello,
we use several CAD applications and with one of those we have
problems using ZFS.
OS and hardware is SunOS 5.10 Generic_118855-36, Fire X4200, the
cad application is catia v4.
There are several configuration and data files
On Apr 18, 2007, at 2:35 AM, Richard L. Hamilton wrote:
Well, no; his quote did say software or hardware. The theory is
apparently
that ZFS can do better at detecting (and with redundancy,
correcting) errors
if it's dealing with raw hardware, or as nearly so as possible.
Most SANs
_can_
On Apr 19, 2007, at 1:38 AM, Ricardo Correia wrote:
Why doesn't zpool status -v display the byte ranges of permanent
errors anymore, like it used to (before snv_57)?
I think it was a useful feature. For example, I have a pool with 17
permanent errors in 2 files with 700 MB each, but no
On Apr 18, 2007, at 6:44 AM, Yaniv Aknin wrote:
Hello,
I'd like to plan a storage solution for a system currently in
production.
The system's storage is based on code which writes many files to
the file system, with overall storage needs currently around 40TB
and expected to reach
On Apr 25, 2007, at 9:16 AM, Oliver Gould wrote:
Hello-
I was planning on sending out a more formal sort of introduction in a
few weeks, but.. hey- it came up.
I will be porting ZFS to NetBSD this summer. Some info on this
project
can be found at:
In order to prevent the so-called poor man's cluster from
corrupting your data, we now store the hostid and verify it upon
importing a pool.
Check it out at:
http://blogs.sun.com/erickustarz/entry/poor_man_s_cluster_end
This is bug:
6282725 hostname/hostid should be stored in the label
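(The practical effect on the poor man's cluster -- the pool name and
exact message wording here are illustrative:)

# zpool import tank
cannot import 'tank': pool may be in use from other system
# zpool import -f tank     (force, once the other host is known to be done)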
On May 7, 2007, at 7:11 AM, Frank Batschulat wrote:
running a recent patched s10 system, zfs version 3, attempting to
dump the label information using zdb when the pool is online
doesn't seem to give
reasonable information; any particular reason for this?
# zpool status
pool:
On May 10, 2007, at 10:04 PM, Matthew Flanagan wrote:
Hi,
I have a test server that I use for testing my different jumpstart
installations. This system is continuously installed and
reinstalled with different system builds.
For some builds I have a finish script that creates a zpool using
On May 12, 2007, at 2:12 AM, Matthew Flanagan wrote:
On May 10, 2007, at 10:04 PM, Matthew Flanagan wrote:
Hi,
I have a test server that I use for testing my
different jumpstart
installations. This system is continuously
installed and
reinstalled with different system builds.
For some
On May 15, 2007, at 9:37 AM, XIU wrote:
Hey,
I'm currently running on Nexenta alpha 6 and I have some corrupted
data in a pool.
The output from sudo zpool status -v data is:
pool: data
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption.
On May 15, 2007, at 4:49 PM, Nigel Smith wrote:
I seem to have got the same core dump, in a different way.
I had a zpool setup on a iscsi 'disk'. For details see:
http://mail.opensolaris.org/pipermail/storage-discuss/2007-May/
001162.html
But after a reboot the iscsi target was no longer
Don't take these numbers too seriously - those were only first tries to
see where my port stands, and I was using OpenSolaris for comparison,
which has debugging turned on.
Yeah, ZFS does a lot of extra work with debugging on (such as
verifying checksums in the ARC), so always do serious
On May 29, 2007, at 1:25 PM, Lida Horn wrote:
Point one, the comments that Eric made do not give the complete
picture.
All the tests that Eric's referring to were done through the ZFS
filesystem.
When sequential I/O is done to the disk directly there is no
performance
degradation at all.
2) Following Chris's advice to do more with snapshots, I
played with his cron-triggered snapshot routine:
http://blogs.sun.com/chrisg/entry/snapping_every_minute
Now, after a couple of days, zpool history shows almost
100,000 lines of output (from all the snapshots and
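(Chris's routine boils down to a crontab entry along these lines --
the dataset name is illustrative; note cron requires % to be escaped:)

# snapshot every minute, named by timestamp
* * * * * /usr/sbin/zfs snapshot tank/home@minute-`date +\%Y\%m\%d-\%H\%M`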
On Jun 1, 2007, at 2:09 PM, John Plocher wrote:
eric kustarz wrote:
We specifically didn't allow the admin the ability to truncate/
prune the log as then it becomes unreliable - ooops i made a
mistake, i better clear the log and file the bug against zfs
I understand - auditing means
On Jun 2, 2007, at 8:27 PM, Jesus Cea wrote:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
Toby Thain wrote:
Sorry, I should have cited it. Blew my chance to moderate by
posting to
the thread :)
http://ask.slashdot.org/comments.pl?sid=236627cid=19319903
I computed the FUD factor by
Hi Jeff,
You should take a look at this:
http://blogs.sun.com/erickustarz/entry/poor_man_s_cluster_end
We added the hostid/hostname to the vdev label. What this means is
that we stop you from importing a pool onto multiple machines (which
would have lead to corruption).
eric
On May 30,
It would be very nice if the improvements were documented
somewhere :-)
Cindy has been doing a good job of putting the new features into the
admin guide:
http://www.opensolaris.org/os/community/zfs/docs/zfsadmin.pdf
Check out the What's New in ZFS? section.
eric
Just got the latest ;login: and Pawel has an article on Porting the
Solaris ZFS File System to the FreeBSD Operating System.
Lots of interesting stuff in there, such as the differences between
OpenSolaris and FreeBSD, as well as getting ZFS to work with FreeBSD
jails (a new 'jailed'
On Jun 11, 2007, at 12:52 AM, Borislav Aleksandrov wrote:
Panic on snv_6564 when:
#mkdir /disk
#mkfile 128m /disk/disk1
#mkfile 128m /disk/disk2
#zpool create data mirror /disk/disk1 /disk/disk2
#mkfile 128m /disk/disk1
#mkfile 128m /disk/disk2
At this point you have completely overwritten
Over NFS to non-ZFS drive
-
tar xfvj linux-2.6.21.tar.bz2
real 5m0.211s, user 0m45.330s, sys 0m50.118s
star xfv linux-2.6.21.tar.bz2
real 3m26.053s, user 0m43.069s, sys 0m33.726s
star -no-fsync -x -v -f linux-2.6.21.tar.bz2
real
On Jun 12, 2007, at 12:57 AM, Roch - PAE wrote:
Hi Siegfried, just making sure you had seen this:
http://blogs.sun.com/roch/entry/nfs_and_zfs_a_fine
You have very fast NFS to non-ZFS runs.
That seems only possible if the hosting OS did not sync the
data when NFS required it or the
On Jun 13, 2007, at 9:22 PM, Siegfried Nikolaivich wrote:
On 12-Jun-07, at 9:02 AM, eric kustarz wrote:
Comparing a ZFS pool made out of a single disk to a single UFS
filesystem would be a fair comparison.
What does your storage look like?
The storage looks like:
NAME
On Jun 19, 2007, at 11:23 AM, Huitzi wrote:
Hi once again and thank you very much for your reply. Here is
another thread.
I'm planning to deploy a small file server based on ZFS. I want to
know if I can start with 2 RAIDs, and add more RAIDs in the future
(like the gray RAID in the
On Jun 20, 2007, at 1:25 PM, mario heimel wrote:
Linux is the first operating system that can boot from RAID-1+0,
RAID-Z or RAID-Z2 ZFS, really cool trick to put zfs-fuse in the
initramfs.
(Solaris can only boot from single-disk or RAID-1 pools)
On Jun 21, 2007, at 8:47 AM, Niclas Sodergard wrote:
Hi,
I was playing around with NexentaCP and its zfs boot facility. I tried
to figure out what commands to run, and I ran zpool history like
this
# zpool history
2007-06-20.10:19:46 zfs snapshot syspool/[EMAIL PROTECTED]
On Jun 21, 2007, at 3:25 PM, Bryan Wagoner wrote:
Quick question,
Are there any tunables, or is there any way to specify devices in a
pool to use for the ZIL specifically? I've been thinking through
architectures to mitigate performance problems on SAN and various
other storage
A data structure view of ZFS is now available:
http://www.opensolaris.org/os/community/zfs/structures/
We've only got one picture up right now (though it's a juicy one!),
but let us know what you're interested in seeing, and we'll try to
make that happen.
I see this as a nice supplement to
On Jul 4, 2007, at 7:50 AM, Wout Mertens wrote:
A data structure view of ZFS is now available:
http://www.opensolaris.org/os/community/zfs/structures/
We've only got one picture up right now (though it's a juicy one!),
but let us know what you're interested in seeing, and
we'll try to make
However, I've one more question - do you guys think NCQ with short
stroked zones helps or hurts performance? I have this feeling (my
gut, that is), that at a low queue depth it's a Great Win, whereas
at a deeper queue it would degrade performance more so than without
it. Any
On Jul 9, 2007, at 11:21 AM, Scott Lovenberg wrote:
You sir, are a gentleman and a scholar! Seriously, this is exactly
the information I was looking for, thank you very much!
Would you happen to know if this has improved since build 63 or if
chipset has any effect one way or the other?