On Aug 21, 2008, at 9:51 AM, Brent Jones wrote:
Hello,
I have been experimenting with ZFS on a test box, preparing to
present it to management.
One thing I cannot test right now is our real-world application
load. We currently write small files to CIFS shares.
We write about 250,000
On Aug 13, 2008, at 5:58 AM, Moinak Ghosh wrote:
I have to help set up a configuration where a zpool on MPxIO on
OpenSolaris is being used with Symmetrix devices with replication
being handled via Symmetrix Remote Data Facility (SRDF).
So I am curious whether anyone has used this
On Aug 7, 2008, at 10:25 PM, Anton B. Rang wrote:
How would you describe the difference between the file system
checking utility and zpool scrub? Is zpool scrub lacking in its
verification of the data?
To answer the second question first, yes, zpool scrub is lacking, at
least to the
I've filed specifically for ZFS:
6735425 some places where 64bit values are being incorrectly accessed
on 32bit processors
eric
On Aug 6, 2008, at 1:59 PM, Brian D. Horn wrote:
In the most recent code base (both OpenSolaris/Nevada and S10Ux with
patches)
all the known marvell88sx
On Jul 29, 2008, at 2:24 PM, Chris Cosby wrote:
On Tue, Jul 29, 2008 at 5:13 PM, Stefano Pini [EMAIL PROTECTED]
wrote:
Hi guys,
we are proposing to a customer a couple of X4500s (24 TB) used as NAS
(i.e. NFS servers).
Both servers will contain the same files and should be accessed by
On Jun 6, 2008, at 2:50 PM, Nicolas Williams wrote:
On Fri, Jun 06, 2008 at 10:42:45AM -0500, Bob Friesenhahn wrote:
On Fri, 6 Jun 2008, Brian Hechinger wrote:
On Thu, Jun 05, 2008 at 12:02:42PM -0400, Chris Siebenmann wrote:
- as separate filesystems, they have to be separately NFS
On Jun 6, 2008, at 3:27 PM, Brian Hechinger wrote:
On Fri, Jun 06, 2008 at 02:58:09PM -0700, eric kustarz wrote:
clients do not. Without per-filesystem mounts, 'df' on the client
will not report correct data though.
I expect that mirror mounts will be coming Linux's way too.
They should
On Jun 3, 2008, at 11:16 AM, Chris Siebenmann wrote:
Is there any way to configure ZFS on a system so that it will not
automatically import all of the ZFS pools it had active when it was
last
running?
The problem with automatic importation is preventing disasters in a
failover situation.
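One approach, assuming bits recent enough to have the cachefile pool property (the pool name tank below is only an example): pools are auto-imported at boot from /etc/zfs/zpool.cache, so keeping a pool out of that cache keeps it from coming back on its own:
# zpool set cachefile=none tank      (stop recording this pool in the boot-time cache)
# zpool export tank                  (or export it, which removes it from the cache as well)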
On May 8, 2008, at 12:31 PM, Carson Gaspar wrote:
Luke Scharf wrote:
Dave wrote:
On 05/08/2008 08:11 AM, Ross wrote:
It may be an obvious point, but are you aware that snapshots need
to be stopped any time a disk fails? It's something to consider
if you're planning frequent
On May 5, 2008, at 9:51 PM, Bill McGonigle wrote:
Is it also true that ZFS can't be re-implemented in GPLv2 code
because then the CDDL-based patent protections don't apply?
Some of it has already been done:
On May 5, 2008, at 4:43 PM, Bob Friesenhahn wrote:
On Mon, 5 May 2008, eric kustarz wrote:
That's not true:
http://blogs.sun.com/erickustarz/entry/zil_disable
Perhaps people are using consistency to mean different things
here...
Consistency means that fsync() assures that the data
On Apr 27, 2008, at 4:39 PM, Carson Gaspar wrote:
Ian Collins wrote:
Carson Gaspar wrote:
If this is possible, it's entirely undocumented... Actually, fmd's
documentation is generally terrible. The sum total of configuration
information is:
FILES
/etc/fm/fmd Fault
If you are really sure that disks c5t2d0 and c5t6d0 are not in use by
anyone and want to add them as spares, then dd'ing 0s over the labels
should suffice (front and back labels). Usual warnings about dd'ing
0s over a disk apply here. I'd probably do one at a time.
eric
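For the record, a minimal sketch of what that dd'ing looks like (ZFS keeps two labels at the front and two at the back of each device, so wiping the first and last megabyte is enough; the oseek value below is only a placeholder that depends on your disk size):
# dd if=/dev/zero of=/dev/rdsk/c5t2d0s0 bs=1024k count=1
# dd if=/dev/zero of=/dev/rdsk/c5t2d0s0 bs=1024k count=1 oseek=<device_size_in_MB - 1>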
On Apr 2, 2008, at
On Mar 27, 2008, at 9:24 AM, Bob Friesenhahn wrote:
On Thu, 27 Mar 2008, Neelakanth Nadgir wrote:
This causes the sync to happen much faster but, as you say, it is suboptimal.
Haven't had the time to go through the bug report, but probably
CR 6429205 each zpool needs to monitor its throughput
messages:
# tail /var/adm/messages
Mar 22 17:28:36 hancock genunix: [ID 936769 kern.info] fssnap0 is /
pseudo/[EMAIL PROTECTED]
Mar 22 17:28:36 hancock pseudo: [ID 129642 kern.info] pseudo-
device: winlock0
Mar 22 17:28:36 hancock genunix: [ID 936769 kern.info] winlock0 is /
Also history only tells me what someone typed. It doesn't tell me
what other changes may have occurred.
What other changes were you thinking about?
eric
eric
David
On Fri, 2008-03-21 at 13:10 -0700, eric kustarz wrote:
Also history only tells me what someone typed. It doesn't tell me
what other changes may have occurred.
What other changes were you thinking about?
eric
On Mar 17, 2008, at 6:21 AM, Mertol Ozyoney wrote:
Hi All ;
I am not a Solaris or ZFS expert and I am in need of your help.
When I run the following command
zfs send -i [EMAIL PROTECTED] [EMAIL PROTECTED] | ssh 10.10.103.42 zfs
receive -F data/data41
if someone is accessing
On Mar 12, 2008, at 12:35 PM, Ben Middleton wrote:
Hi,
Sorry if this is a RTM issue - but I wanted to be sure before
continuing. I received a corrupted file error on one of my pools. I
removed the file, and the status command now shows the following:
zpool status -v rpool
pool:
On Mar 6, 2008, at 7:58 AM, Brian D. Horn wrote:
Take a look at CR 6634371. It's worse than you probably thought.
The only place i see ZFS mentioned in that bug report is regarding
z_mapcnt. It's being atomically inc/dec'd in zfs_addmap()/zfs_delmap()
- so those are ok.
In zfs_frlock(),
If you can't file an RFE yourself (with the attached diffs), then
yeah, i'd like to see them so i can do it.
cool stuff,
eric
On Feb 26, 2008, at 4:35 AM, [EMAIL PROTECTED] wrote:
Hi All,
I have modified zdb to do decompression in zdb_read_block. Syntax is:
# zdb -R
On Feb 20, 2008, at 2:16 PM, Robert Milkowski wrote:
Hello eric,
Tuesday, February 12, 2008, 7:33:14 PM, you wrote:
ek On Feb 1, 2008, at 7:17 AM, Nicolas Dorfsman wrote:
Hi,
I wrote a Hobbit script around the lunmap/hbamap commands to monitor
SAN health.
I'd like to add detail on what
On Feb 16, 2008, at 5:26 PM, Bob Friesenhahn wrote:
Some of us are still using Solaris 10 since it is the version of
Solaris released and supported by Sun. The 'filebench' software from
SourceForge does not seem to install or work on Solaris 10. The
'pkgadd' command refuses to recognize
On Feb 1, 2008, at 7:17 AM, Nicolas Dorfsman wrote:
Hi,
I wrote a Hobbit script around the lunmap/hbamap commands to monitor
SAN health.
I'd like to add detail on what is being hosted by those luns.
With SVM, metastat -p is helpful.
With ZFS, the zpool status output is awful for scripting.
Is
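For scripting, the -H (no headers, tab-separated) output of zpool list and zfs list is usually easier to chew on than zpool status - a rough sketch, with the field lists trimmed to a few common properties:
# zpool list -H -o name,size,health
# zfs list -H -o name,used,available,mountpoint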
While browsing the ZFS source code, I noticed that usr/src/cmd/
ztest/ztest.c, includes ztest_spa_rename(), a ZFS test which
renames a ZFS storage pool to a different name, tests the pool
under its new name, and then renames it back. I wonder why this
functionality was not exposed as
On Feb 4, 2008, at 5:10 PM, Marion Hakanson wrote:
[EMAIL PROTECTED] said:
FYI, you can use the '-c' option to compare results from various
runs and
have one single report to look at.
That's a handy feature. I've added a couple of such comparisons:
On Feb 1, 2008, at 11:17 AM, Marion Hakanson wrote:
[EMAIL PROTECTED] said:
Depending on needs for space vs. performance, I'd probably pick
either 5*9 or
9*5, with 1 hot spare.
[EMAIL PROTECTED] said:
How can you check the speed? (I'm a total newbie on Solaris)
We're deploying a
On Jan 25, 2008, at 6:06 AM, Niksa Franceschi wrote:
Yes, the link explains quite well the issue we have.
The only difference is that server1 can be manually rebooted, and while
it's still down I can mount the ZFS pool on server2 even without the -f
option, and yet server1, when booted up, still
On Jan 18, 2008, at 4:23 AM, Sengor wrote:
On 1/17/08, Darren J Moffat [EMAIL PROTECTED] wrote:
Pardon my ignorance, but is ZFS with compression safe to use in a
production environment?
Yes, why wouldn't it be ? If it wasn't safe it wouldn't have been
delivered.
A few reasons -
I'm using raidz2 across 8 drives, but if I had it to do again, I'd
probably just use mirroring. Unfortunately, raidz2 kills your random
read and write performance, and that makes Time Machine really, really
slow. I'm running low on space now, and considering throwing another
8 drives into
On Jan 14, 2008, at 11:08 AM, Tim Cook wrote:
www.mozy.com appears to have unlimited backups for 4.95 a month.
Hard to beat that. And they're owned by EMC now so you know they
aren't going anywhere anytime soon.
I just signed on and am trying Mozy out. Note, it's $5 per computer
and
On Jan 10, 2008, at 4:50 AM, Łukasz K wrote:
Hi
I'm using ZFS on a few X4500s and I need to back them up.
The data on source pool keeps changing so the online replication
would be the best solution.
As far as I know, AVS doesn't support ZFS - there is a problem with
mounting the backup pool.
On Jan 9, 2008, at 9:09 PM, Rob Logan wrote:
fun example that shows NCQ lowers wait and %w, but doesn't have
much impact on final speed. [scrubbing, devs reordered for clarity]
Here are the results i found when comparing random reads vs.
sequential reads for NCQ:
On Jan 10, 2008, at 9:18 AM, Łukasz K wrote:
On 10-01-2008 at 17:45, eric kustarz wrote:
On Jan 10, 2008, at 4:50 AM, Łukasz K wrote:
Hi
I'm using ZFS on a few X4500s and I need to back them up.
The data on source pool keeps changing so the online replication
would be the best
On Jan 10, 2008, at 5:13 PM, Jim Dunham wrote:
Eric,
On Jan 10, 2008, at 4:50 AM, Łukasz K wrote:
Hi
I'm using ZFS on a few X4500s and I need to back them up.
The data on source pool keeps changing so the online replication
would be the best solution.
As far as I know, AVS doesn't support
This should work just fine with latest bits (Nevada 77 and later) via:
http://bugs.opensolaris.org/view_bug.do?bug_id=6425096
Its backport is currently targeted for an early build of s10u6.
eric
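For what it's worth, once you have recursive send/receive, a poor man's replication loop looks roughly like this (backuphost and the backup pool are hypothetical names):
# zfs snapshot -r data@rep1
# zfs send -R data@rep1 | ssh backuphost zfs receive -dF backup
... later, ship only the changes since rep1 ...
# zfs snapshot -r data@rep2
# zfs send -R -i rep1 data@rep2 | ssh backuphost zfs receive -dF backup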
On Jan 8, 2008, at 7:13 AM, Andreas Koppenhoefer wrote:
[I apologise for reposting this... but no
So either we're hitting a pretty serious zfs bug, or they're purposely
holding back performance in Solaris 10 so that we all have a good
reason to
upgrade to 11. ;)
In general, for ZFS we try to push all changes from Nevada back to
s10 updates.
In particular, 6535160 Lock contention on
On Dec 23, 2007, at 7:53 PM, David Dyer-Bennet wrote:
Just out of curiosity, what are the dates ls -l shows on a snapshot?
Looks like they might be the pool creation date.
The ctime and mtime are from the file system creation date. The
atime is the current time. See:
On Dec 12, 2007, at 3:03 PM, Robert Milkowski wrote:
Hello zfs-discuss,
http://sunsolve.sun.com/search/document.do?assetkey=1-1-6604198-1
Is there a patch for S10? I thought it had been fixed.
It was fixed via 6460622 zio_nowait() doesn't live up to its name
and that is in s10u4.
On Dec 5, 2007, at 8:38 PM, Anton B. Rang wrote:
This might have been affected by the cache flush issue -- if the
3310 flushes its NVRAM cache to disk on SYNCHRONIZE CACHE commands,
then ZFS is penalizing itself. I don't know whether the 3310
firmware has been updated to support the
Basically, I want to know if somebody here on this list is using a ZFS
file system for a proxy cache and what its performance will be. Will it
improve or degrade Squid's performance? Or better still, are there any
kind of benchmark tools for ZFS performance?
filebench sounds like
On Oct 26, 2007, at 3:21 AM, Matt Buckland wrote:
Hi forum,
I did something stupid the other day, managed to connect an
external disk that was part of zpool A such that it appeared in
zpool B. I realised as soon as I had done zpool status that zpool B
should not have been online, but
On Oct 22, 2007, at 2:52 AM, Mertol Ozyoney wrote:
I know I haven't defined my particular needs. However, I am looking for a
simple explanation of what is available today and what will be available in
the short term.
Example: one-to-one asynch replication is supported, many-to-one synch
Since you were already using filebench, you could use the
'singlestreamwrite.f' and 'singlestreamread.f' workloads (with
nthreads set to 20, iosize set to 128k) to achieve the same things.
With the latest version of filebench, you can then use the '-c'
option to compare your results in a
That all said - we don't have a simple dd benchmark for random
seeking.
Feel free to try out randomread.f and randomwrite.f - or combine them
into your own new workload to create a random read and write workload.
eric
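If it helps, a rough sketch of driving one of those workloads from the filebench interactive shell, assuming the workload exposes the usual $dir/$nthreads/$iosize variables (the $dir path is just an example):
filebench> load randomread
filebench> set $dir=/tank/fbtest
filebench> set $nthreads=20
filebench> set $iosize=128k
filebench> run 60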
This looks like a bug in the sd driver (SCSI).
Does this look familiar to anyone from the sd group?
eric
On Oct 10, 2007, at 10:30 AM, Claus Guttesen wrote:
Hi.
Just migrated to zfs on opensolaris. I copied data to the server using
rsync and got this message:
Oct 10 17:24:04 zetta
On Oct 9, 2007, at 4:25 AM, Thomas Liesner wrote:
Hi,
I checked with $nthreads=20, which roughly represents the
expected load, and these are the results:
Note, here is the description of the 'fileserver.f' workload:
define process name=filereader,instances=1
{
thread
Client A
- import pool make couple-o-changes
Client B
- import pool -f (heh)
Client A + B - With both mounting the same pool, touched a couple of
files, and removed a couple of files from each client
Client A + B - zpool export
Client A - Attempted import and hit the panic.
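In command form, the sequence was roughly this (tank is a stand-in for the real pool name):
clientA# zpool import tank
clientB# zpool import -f tank        (forced, while tank was still imported on A)
... both sides touch and remove a few files ...
clientA# zpool export tank
clientB# zpool export tank
clientA# zpool import tank           (panic)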
Anyhow, in the case of DBs, ARC indeed becomes a vestigial organ. I'm
surprised that this is being met with skepticism considering that
Oracle highly recommends direct IO be used, and, IIRC, Oracle
performance was the main motivation to adding DIO to UFS back in
Solaris 2.6. This isn't a
On Oct 3, 2007, at 3:44 PM, Dale Ghent wrote:
On Oct 3, 2007, at 5:21 PM, Richard Elling wrote:
Slightly off-topic, in looking at some field data this morning
(looking
for something completely unrelated) I notice that the use of directio
on UFS is declining over time. I'm not sure what
On Sep 21, 2007, at 11:47 AM, Pawel Jakub Dawidek wrote:
Hi.
I gave a talk about ZFS during EuroBSDCon 2007, and because it won
the best talk award and some found it funny, here it is:
http://youtube.com/watch?v=o3TGM0T1CvE
a bit better version is here:
On Sep 21, 2007, at 3:50 PM, Tim Spriggs wrote:
Paul B. Henson wrote:
On Thu, 20 Sep 2007, Tim Spriggs wrote:
The x4500 is very sweet and the only thing stopping us from
buying two
instead of another shelf is the fact that we have lost pools on
Sol10u3
servers and there is no easy
On Sep 15, 2007, at 12:55 PM, Victor Latushkin wrote:
I'm proposing a new project for the ZFS community - Block Selection
Policy and Space Map Enhancements.
+1.
I wonder if some of this could look into a dynamic policy. For
example, a policy that switches when the pool becomes too full.
On Sep 20, 2007, at 6:46 PM, Paul B. Henson wrote:
On Thu, 20 Sep 2007, Gary Mills wrote:
You should consider a Netapp filer. It will do both NFS and CIFS,
supports disk quotas, and is highly reliable. We use one for 30,000
students and 3000 employees. Ours has never failed us.
We had
as
when
using my test app.
Duff
-Original Message-
From: eric kustarz [mailto:[EMAIL PROTECTED]
Sent: Monday, September 17, 2007 6:58 PM
To: J Duff; [EMAIL PROTECTED]
Cc: ZFS Discussions
Subject: Re: [zfs-discuss] Possible ZFS Bug - Causes OpenSolaris Crash
This actually looks like
On Sep 14, 2007, at 8:16 AM, Łukasz wrote:
I have a huge problem with space maps on thumper. Space maps take
over 3GB
and write operations generates massive read operations.
Before every spa sync phase zfs reads space maps from disk.
I decided to turn on compression for pool ( only for
On Aug 29, 2007, at 11:16 PM, Jeffrey W. Baker wrote:
I have a lot of people whispering zfs in my virtual ear these days,
and at the same time I have an irrational attachment to xfs based
entirely on its lack of the 32000 subdirectory limit. I'm not
afraid of
ext4's newness, since really
On Aug 30, 2007, at 12:33 PM, Jeffrey W. Baker wrote:
On Thu, 2007-08-30 at 12:07 -0700, eric kustarz wrote:
Hey jwb,
Thanks for taking up the task; it's benchmarking, so i've got some
questions...
What does it mean to have an external vs. internal journal for ZFS?
This is my first use
On Jul 31, 2007, at 5:44 AM, Orvar Korvar wrote:
I have begun a scrub on a 1.5TB pool which has 600GB of data, and
seeing that it will take 11h47min I want to stop it. I invoked
zpool scrub -s pool and nothing happens. There is no message like
scrub stopped or something similar. The cursor
I've filed:
6586537 async zio taskqs can block out userland commands
to track this issue.
eric
On Jul 25, 2007, at 11:46 PM, asa wrote:
Hello all,
I am interested in getting a list of the changed files between two
snapshots in a fast and zfs-y way. I know that zfs knows all about
what blocks have been changed, but can one map that to a file list? I
know this could be solved
On Jul 26, 2007, at 10:00 AM, gerald anderson wrote:
Customer question:
Oracle 10
Customer has a 6540 with 4 trays of 300G 10k drives. The raid sets
are 3 + 1
vertically striped across the 4 trays. Two 400G volumes are created on
each
raid set. Would it be best to put all of the
On Jul 22, 2007, at 7:39 PM, JS wrote:
Is there a way to take advantage of this in Sol10u3?
sorry, variable 'zfs_vdev_cache_max' is not defined in the 'zfs'
module
That tunable/hack will be available in s10u4:
http://bugs.opensolaris.org/view_bug.do?bug_id=6472021
wait about a month and
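For reference, once you're on bits that actually have the tunable, the usual ways to poke it are something like this (the value 8192 is only an example, not a recommendation):
# echo 'zfs_vdev_cache_max/W 0t8192' | mdb -kw          (live change)
set zfs:zfs_vdev_cache_max=8192                          (in /etc/system, takes effect at boot)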
[mailto:[EMAIL PROTECTED] On Behalf Of eric kustarz
Sent: Thursday, July 19, 2007 1:24 PM
To: ZFS Discussions
Subject: [zfs-discuss] more love for databases
Here's some info on the changes we've made to the vdev cache (in
part) to help database performance:
http://blogs.sun.com/erickustarz/entry
Here's some info on the changes we've made to the vdev cache (in
part) to help database performance:
http://blogs.sun.com/erickustarz/entry/vdev_cache_improvements_to_help
enjoy your properly inflated I/O,
eric
On Jul 13, 2007, at 10:57 AM, Brian Wilson wrote:
Hmm. Odd. I've got PowerPath working fine with ZFS with both
Symmetrix and Clariion back ends.
PowerPath Version is 4.5.0, running on leadville qlogic drivers.
Sparc hardware. (if it matters)
I ran one of our test databases on ZFS on
However, I've one more question - do you guys think NCQ with short
stroked zones help or hurt performance? I have this feeling (my
gut, that is), that at a low queue depth it's a Great Win, whereas
at a deeper queue it would degrade performance more so than without
it. Any
On Jul 9, 2007, at 11:21 AM, Scott Lovenberg wrote:
You sir, are a gentleman and a scholar! Seriously, this is exactly
the information I was looking for, thank you very much!
Would you happen to know if this has improved since build 63 or if
the chipset has any effect one way or the other?
On Jul 8, 2007, at 8:05 PM, Peter C. Norton wrote:
List,
Sorry if this has been done before - I'm sure I'm not the only person
interested in this, but I haven't found anything with the searches
I've done.
I'm looking to compare nfs performance between nfs on zfs and a
lower-end netapp
On Jul 4, 2007, at 7:50 AM, Wout Mertens wrote:
A data structure view of ZFS is now available:
http://www.opensolaris.org/os/community/zfs/structures/
We've only got one picture up right now (though it's a juicy one!),
but let us know what you're interested in seeing, and
we'll try to make
On Jun 21, 2007, at 3:25 PM, Bryan Wagoner wrote:
Quick question,
Are there any tunables, or is there any way to specify devices in a
pool to use for the ZIL specifically? I've been thinking through
architectures to mitigate performance problems on SAN and various
other storage
A data structure view of ZFS is now available:
http://www.opensolaris.org/os/community/zfs/structures/
We've only got one picture up right now (though it's a juicy one!),
but let us know what you're interested in seeing, and we'll try to
make that happen.
I see this as a nice supplement to
On Jun 21, 2007, at 8:47 AM, Niclas Sodergard wrote:
Hi,
I was playing around with NexentaCP and its zfs boot facility. I tried
to figure out what commands to run, and I ran zpool history like
this
# zpool history
2007-06-20.10:19:46 zfs snapshot syspool/[EMAIL PROTECTED]
On Jun 20, 2007, at 1:25 PM, mario heimel wrote:
Linux is the first operating system that can boot from RAID-1+0,
RAID-Z or RAID-Z2 ZFS - a really cool trick, putting zfs-fuse in the
initramfs.
(Solaris can only boot from single-disk or RAID-1 pools.)
On Jun 19, 2007, at 11:23 AM, Huitzi wrote:
Hi once again and thank you very much for your reply. Here is
another thread.
I'm planning to deploy a small file server based on ZFS. I want to
know if I can start with 2 RAIDs, and add more RAIDs in the future
(like the gray RAID in the
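As a sketch of the sort of thing being asked about - creating a pool with one raidz vdev and later growing it by adding a second (device names are made up):
# zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0
# zpool add tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0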
On Jun 13, 2007, at 9:22 PM, Siegfried Nikolaivich wrote:
On 12-Jun-07, at 9:02 AM, eric kustarz wrote:
Comparing a ZFS pool made out of a single disk to a single UFS
filesystem would be a fair comparison.
What does your storage look like?
The storage looks like:
NAME
Over NFS to non-ZFS drive
-
tar xfvj linux-2.6.21.tar.bz2
real 5m0.211s, user 0m45.330s, sys 0m50.118s
star xfv linux-2.6.21.tar.bz2
real 3m26.053s, user 0m43.069s, sys 0m33.726s
star -no-fsync -x -v -f linux-2.6.21.tar.bz2
real
On Jun 12, 2007, at 12:57 AM, Roch - PAE wrote:
Hi Seigfried, just making sure you had seen this:
http://blogs.sun.com/roch/entry/nfs_and_zfs_a_fine
You have very fast NFS to non-ZFS runs.
That seems only possible if the hosting OS did not sync the
data when NFS required it or the
On Jun 11, 2007, at 12:52 AM, Borislav Aleksandrov wrote:
Panic on snv_6564 when:
#mkdir /disk
#mkfile 128m /disk/disk1
#mkfile 128m /disk/disk2
#zpool create data mirror /disk/disk1 /disk/disk2
#mkfile 128m /disk/disk1
#mkfile 128m /disk/disk2
At this point you have completely overwritten
Just got the latest ;login: and Pawel has an article on Porting the
Solaris ZFS File System to the FreeBSD Operating System.
Lots of interesting stuff in there, such as the differences between
OpenSolaris and FreeBSD, as well as getting ZFS to work with FreeBSD
jails (a new 'jailed'
It would be very nice if the improvements were documented
somewhere :-)
Cindy has been doing a good job of putting the new features into the
admin guide:
http://www.opensolaris.org/os/community/zfs/docs/zfsadmin.pdf
Check out the What's New in ZFS? section.
eric
On Jun 2, 2007, at 8:27 PM, Jesus Cea wrote:
Toby Thain wrote:
Sorry, I should have cited it. Blew my chance to moderate by
posting to
the thread :)
http://ask.slashdot.org/comments.pl?sid=236627&cid=19319903
I computed the FUD factor by
Hi Jeff,
You should take a look at this:
http://blogs.sun.com/erickustarz/entry/poor_man_s_cluster_end
We added the hostid/hostname to the vdev label. What this means is
that we stop you from importing a pool onto multiple machines (which
would have led to corruption).
eric
On May 30,
2) Following Chris's advice to do more with snapshots, I
played with his cron-triggered snapshot routine:
http://blogs.sun.com/chrisg/entry/snapping_every_minute
Now, after a couple of days, zpool history shows almost
100,000 lines of output (from all the snapshots and
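(The routine boils down to a one-line crontab entry along these lines - only a sketch of the idea, not Chris's actual script, and the dataset name is made up:
* * * * * /usr/sbin/zfs snapshot tank/home@`date +\%Y\%m\%d-\%H\%M`
hence a new zpool history record every minute.)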
On Jun 1, 2007, at 2:09 PM, John Plocher wrote:
eric kustarz wrote:
We specifically didn't allow the admin the ability to truncate/
prune the log as then it becomes unreliable - ooops i made a
mistake, i better clear the log and file the bug against zfs
I understand - auditing means
On May 29, 2007, at 1:25 PM, Lida Horn wrote:
Point one, the comments that Eric made do not give the complete
picture.
All the tests that Eric's referring to were done through ZFS
filesystem.
When sequential I/O is done to the disk directly there is no
performance
degradation at all.
Don't take these numbers too seriously - those were only first tries to
see where my port is, and I was using OpenSolaris for comparison, which
has debugging turned on.
Yeah, ZFS does a lot of extra work with debugging on (such as
verifying checksums in the ARC), so always do serious
On May 15, 2007, at 4:49 PM, Nigel Smith wrote:
I seem to have got the same core dump, in a different way.
I had a zpool set up on an iscsi 'disk'. For details see:
http://mail.opensolaris.org/pipermail/storage-discuss/2007-May/
001162.html
But after a reboot the iscsi target was no longer
On May 15, 2007, at 9:37 AM, XIU wrote:
Hey,
I'm currently running on Nexenta alpha 6 and I have some corrupted
data in a pool.
The output from sudo zpool status -v data is:
pool: data
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption.
On May 12, 2007, at 2:12 AM, Matthew Flanagan wrote:
On May 10, 2007, at 10:04 PM, Matthew Flanagan wrote:
Hi,
I have a test server that I use for testing my
different jumpstart
installations. This system is continuously
installed and
reinstalled with different system builds.
For some
On May 10, 2007, at 10:04 PM, Matthew Flanagan wrote:
Hi,
I have a test server that I use for testing my different jumpstart
installations. This system is continuously installed and
reinstalled with different system builds.
For some builds I have a finish script that creates a zpool using
On May 7, 2007, at 7:11 AM, Frank Batschulat wrote:
running a recently patched s10 system, zfs version 3. Attempting to
dump the label information using zdb when the pool is online
doesn't seem to give reasonable information - any particular reason
for this?
# zpool status
pool:
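(For what it's worth, the per-device labels can also be dumped straight off the disk with zdb -l; the device path here is just an example:
# zdb -l /dev/rdsk/c1t0d0s0 )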
In order to prevent the so-called poor man's cluster from
corrupting your data, we now store the hostid and verify it upon
importing a pool.
Check it out at:
http://blogs.sun.com/erickustarz/entry/poor_man_s_cluster_end
This is bug:
6282725 hostname/hostid should be stored in the label
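What it looks like in practice, roughly (tank is a stand-in for a real pool name):
host2# zpool import tank         (refused - the label records another host's hostid)
host2# zpool import -f tank      (override, only when you're certain the other host no longer has it)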
On Apr 25, 2007, at 9:16 AM, Oliver Gould wrote:
Hello-
I was planning on sending out a more formal sort of introduction in a
few weeks, but.. hey- it came up.
I will be porting ZFS to NetBSD this summer. Some info on this
project
can be found at:
On Apr 18, 2007, at 6:44 AM, Yaniv Aknin wrote:
Hello,
I'd like to plan a storage solution for a system currently in
production.
The system's storage is based on code which writes many files to
the file system, with overall storage needs currently around 40TB
and expected to reach
On Apr 19, 2007, at 1:38 AM, Ricardo Correia wrote:
Why doesn't zpool status -v display the byte ranges of permanent
errors anymore, like it used to (before snv_57)?
I think it was a useful feature. For example, I have a pool with 17
permanent errors in 2 files with 700 MB each, but no
On Apr 18, 2007, at 2:35 AM, Richard L. Hamilton wrote:
Well, no; his quote did say software or hardware. The theory is
apparently
that ZFS can do better at detecting (and with redundancy,
correcting) errors
if it's dealing with raw hardware, or as nearly so as possible.
Most SANs
_can_
On Apr 9, 2007, at 2:20 AM, Dirk Jakobsmeier wrote:
Hello,
we use several CAD applications, and with one of those we have
problems using ZFS.
OS and hardware are SunOS 5.10 Generic_118855-36, Fire X4200; the
CAD application is Catia V4.
There are several configuration and data files
On Mar 23, 2007, at 6:13 AM, Łukasz wrote:
When I'm trying to do the following in the kernel, in a zfs ioctl:
1. destroy snapshot PREVIOUS
2. rename snapshot LATEST -> PREVIOUS
3. create snapshot LATEST
code is:
/* delete previous snapshot */
zfs_unmount_snap(snap_previous, NULL);
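(The userland equivalent of those three steps, for reference - pool/fs is a placeholder dataset name:
# zfs destroy pool/fs@PREVIOUS
# zfs rename pool/fs@LATEST pool/fs@PREVIOUS
# zfs snapshot pool/fs@LATEST )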
Is this the same panic I observed when moving a FireWire disk from
a SPARC
system running snv_57 to an x86 laptop with snv_42a?
6533369 panic in dnode_buf_byteswap importing zpool
Yep, thanks - i was looking for that bug :) I'll close it out as a dup.
eric