I have a lot of people whispering zfs in my virtual ear these days,
and at the same time I have an irrational attachment to xfs based
entirely on its lack of the 32000 subdirectory limit. I'm not afraid of
ext4's newness, since really a lot of that stuff has been in Lustre for
years. So
Jeffrey,
it would be interesting to see your zpool layout info as well.
The pool layout can significantly influence the benchmark results.
On 8/30/07, Jeffrey W. Baker [EMAIL PROTECTED] wrote:
I have a lot of people whispering zfs in my virtual ear these days,
and at the same time I have
Hi,
I'm looking for a Samba build that works with native ZFS ACLs.
With ZFS almost everything works except the native ZFS ACLs.
I have learned on the samba mailing list that this won't work until samba-3.2.0
is released.
Does anyone know of a way to make it work with samba-3.0.25?
If you have any ideas, please let me know.
ZFS Experts,
Is it possible to use the DMU as a general purpose transaction engine? More
specifically, in the following order:
1. Create the transaction:
tx = dmu_tx_create(os);
error = dmu_tx_assign(tx, TXG_WAIT);
2. Decide what to modify (say, create a new object):
dmu_tx_hold_bonus(tx, DMU_NEW_OBJECT);
Please read this thread on my blog
http://blogs.sun.com/timthomas/entry/samba_and_swat_in_solaris. This
question has been addressed in the comments.
Yoshikuni.Yanagiya said the following:
Hi,
I'm looking for a Samba build that works with native ZFS ACLs.
With ZFS almost everything works except the native
Works like a charm! Thank you very much, Darren!
greetings,
Stoyan
Hello ZFS folks,
with the deployment of HA-ZFS we are in the process of migrating some
SVM-based cluster services to ZFS.
What we couldn't find is an up-to-date performance comparison between
Oracle's own ASM technology and ZFS. I could
only find performance figures from
I am not an expert, but I think the correct sequence is (a rough sketch follows):
1. dmu_tx_create()
2. dmu_tx_hold_*()
3. dmu_tx_assign()
4. modify the objects as part of the transaction
5. dmu_tx_commit()
See the comments in common/fs/zfs/sys/dmu.h.
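For illustration only, here is a minimal, untested sketch of that sequence as it
might look in kernel code against sys/dmu.h; the particular hold, object type and
allocation call (dmu_tx_hold_bonus, DMU_OT_UINT64_OTHER, dmu_object_alloc) are
just one example of steps 2 and 4, not the only way to do it:

    #include <sys/dmu.h>

    static int
    create_object_in_tx(objset_t *os, uint64_t *objp)
    {
        dmu_tx_t *tx;
        int error;

        tx = dmu_tx_create(os);                    /* 1. create the transaction */
        dmu_tx_hold_bonus(tx, DMU_NEW_OBJECT);     /* 2. declare what will change */
        error = dmu_tx_assign(tx, TXG_WAIT);       /* 3. assign it to a txg */
        if (error != 0) {
            dmu_tx_abort(tx);                      /* not committed, so abort */
            return (error);
        }
        /* 4. modify the objects as part of the transaction */
        *objp = dmu_object_alloc(os, DMU_OT_UINT64_OTHER, 0,
            DMU_OT_NONE, 0, tx);
        dmu_tx_commit(tx);                         /* 5. commit */
        return (0);
    }

With TXG_WAIT the assign blocks until the holds can be satisfied, so the abort
path above is mainly for hard errors such as running out of space.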
Thanks
Bhaskar
I'm not sure if this is a zfs, zones, or solaris/nfs problem... So I'll
start on this alias...
Problem:
I am seeing file copies from one machine to another grab an older file.
(Worded differently: The cp command is not getting the most recent file.)
For instance,
On a T2000, Solaris 10u3,
On Thu, Aug 30, 2007 at 10:18:05AM -0700, Russ Petruzzelli wrote:
I'm not sure if this is a zfs, zones, or solaris/nfs problem... So I'll
start on this alias...
Problem:
I am seeing file copies from one machine to another grab an older file.
(Worded differently: The cp command is not
NFS clients can cache file attributes, and for performance reasons that cache
is only loosely synchronized with the server. See the actimeo and related
settings in mount_nfs(1M).
-- richard
Russ Petruzzelli wrote:
I'm not sure if this is a zfs, zones, or solaris/nfs problem... So
I'll start on this alias...
On Aug 30, 2007, at 12:35 PM, Richard Elling wrote:
NFS clients can cache file attributes, and for performance reasons that cache
is only loosely synchronized with the server. See the actimeo and related
settings in mount_nfs(1M).
The NFS client will getattr/OPEN at the point where the application
opens the
No. You can neither access ZFS nor UFS in that way.
Only one host can mount the file system at a time
(whether read/write or read-only doesn't matter here).
[...]
If you don't want to use NFS, you can use QFS in such a configuration.
The shared writer approach of QFS allows mounting the same
On 8/30/07, Peter L. Thomas [EMAIL PROTECTED] wrote:
That said, is there a HOWTO anywhere on installing QFS on Solaris 9
(Sparc64)
machines? Is that even possible?
I don't know of a How To, but I assume the manual has instructions.
When I took the Sun SAM-FS / QFS technical training many
On 8/30/07, Russ Petruzzelli [EMAIL PROTECTED] wrote:
For instance,
On a T2000, Solaris 10u3, with zfs setup, and a zone I try to copy in a
file from my swan home directory to a directory in the zone ...
The file copied is not the file currently in my home directory. It is an
older
On Thu, 2007-08-30 at 14:03 -0400, Paul Kraus wrote:
On 8/30/07, Peter L. Thomas [EMAIL PROTECTED] wrote:
That said, is there a HOWTO anywhere on installing QFS on Solaris 9
(Sparc64)
machines? Is that even possible?
I don't know of a How To, but I assume the manual has instructions.
I'll take a look at this. ZFS provides outstanding sequential IO performance
(both read and write). In my testing, I can essentially sustain hardware speeds
with ZFS on sequential loads. That is, assuming 30-60MB/sec per disk sequential
IO capability (depending on hitting inner or out
On Thu, 2007-08-30 at 14:33 -0400, Jim Mauro wrote:
Your numbers are in the 50-90MB/second range, or roughly 1/2 to 1/4 of what
was measured on the other 2 file systems for the same test. Very odd.
Yeah, it's pretty odd. I'd tend to blame the Areca HBA, but then I'd
also point out that the HBA
On Thu, 2007-08-30 at 08:37 -0500, Jose R. Santos wrote:
On Wed, 29 Aug 2007 23:16:51 -0700
Jeffrey W. Baker [EMAIL PROTECTED] wrote:
http://tastic.brillig.org/~jwb/zfs-xfs-ext4.html
FFSB:
Could you send the patch to fix the FFSB Solaris build? I should probably
update the Sourceforge
On Aug 29, 2007, at 11:16 PM, Jeffrey W. Baker wrote:
I have a lot of people whispering zfs in my virtual ear these days,
and at the same time I have an irrational attachment to xfs based
entirely on its lack of the 32000 subdirectory limit. I'm not
afraid of
ext4's newness, since really
On Thu, 2007-08-30 at 12:07 -0700, eric kustarz wrote:
Hey jwb,
Thanks for taking up the task; it's benchmarking, so I've got some
questions...
What does it mean to have an external vs. internal journal for ZFS?
This is my first use of ZFS, so be gentle. External == ZIL on a
separate
Hi all,
Has an alternative to the ARC been considered to improve sequential write IO in ZFS?
Here's a reference for DULO:
http://www.usenix.org/event/fast05/tech/full_papers/jiang/jiang_html/dulo-html.html#BG03
sd-
On Aug 30, 2007, at 12:33 PM, Jeffrey W. Baker wrote:
On Thu, 2007-08-30 at 12:07 -0700, eric kustarz wrote:
Hey jwb,
Thanks for taking up the task; it's benchmarking, so I've got some
questions...
What does it mean to have an external vs. internal journal for ZFS?
This is my first use of
Hey folks,
I've been wanting to use Solaris for a while now, for a ZFS home storage server
and simply to get used to Solaris (I like to experiment). However, installing
b70 has really not worked out for me at all.
The hardware I'm using is pretty simple, but didn't seem to be supported under
On Thu, 2007-08-30 at 13:07 -0700, eric kustarz wrote:
On Aug 30, 2007, at 12:33 PM, Jeffrey W. Baker wrote:
Uh, whoops. As I freely admit this is my first encounter with
opensolaris, I just built the software on the assumption that it would
be 64-bit by default. But it looks like all
Jeffrey W. Baker wrote:
# zfs set recordsize=2K tank/bench
# randomio bigfile 10 .25 .01 2048 60 1
 total |  read:         latency (ms)        |  write:        latency (ms)
  iops |  iops   min    avg     max    sdev |  iops   min    avg     max    sdev
The problems I'm experiencing are as follows:
ZFS creates the storage pool just fine, sees no errors on the drives, and
seems to work great...right up until I attempt to put data on the drives.
After only a few moments of transfer, things start to go wrong. The system
doesn't power off,
Nigel Smith wrote:
Are you sure your hardware is working without problems?
I would first check the RAM with memtest86+
http://www.memtest.org/
Also, SunVTS should be in /usr/sunvts and includes memory and disk
tests (plus others). This is the test suite we (Sun) use in manufacturing.
Take
On Thu, 2007-08-30 at 15:28 -0700, Richard Elling wrote:
Jeffrey W. Baker wrote:
# zfs set recordsize=2K tank/bench
# randomio bigfile 10 .25 .01 2048 60 1
 total |  read:         latency (ms)        |  write:        latency (ms)
  iops |  iops   min    avg     max    sdev |  iops
I'm seeing some odd I/O behaviour on a Sun Fire running snv_70,
connected via 4Gb FC to some passthrough disks for a ZFS pool.
The system is normally not heavily loaded, so I don't pay as much
attention to I/O performance as I should, but recently we had several
drives fail checksums (heat event)