However, I can't help but think that if my file server is
compressing every data block it writes, it would be able to write
more data if it used a thread (or more) per core, and I would come out ahead.
No arguments here. MT-hot compression has been part of the ZFS design from day one.
A bug got
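(As an aside, and as an illustration only -- not ZFS's actual code -- here is a
minimal C sketch of a thread-per-core block-compression pass; compress_block()
is a hypothetical stand-in for whatever compressor is in use, e.g. lzjb:)

    /*
     * Hypothetical sketch: compress a batch of blocks with one worker
     * thread per online CPU.  Error handling is omitted for brevity.
     */
    #include <pthread.h>
    #include <unistd.h>

    #define BLKSZ   131072          /* 128K, the default ZFS recordsize */
    #define MAXCPU  64

    struct job {
        char    *in;                /* uncompressed block */
        char    *out;               /* output buffer, BLKSZ bytes */
        size_t  inlen;
        size_t  outlen;
    };

    /* assumed to exist: returns the compressed length */
    extern size_t compress_block(const char *, size_t, char *, size_t);

    static void *
    worker(void *arg)
    {
        struct job *j = arg;

        j->outlen = compress_block(j->in, j->inlen, j->out, BLKSZ);
        return (NULL);
    }

    void
    compress_all(struct job *jobs, int nblocks)
    {
        long ncpu = sysconf(_SC_NPROCESSORS_ONLN);
        pthread_t tid[MAXCPU];
        int i, t, batch;

        if (ncpu < 1)
            ncpu = 1;
        if (ncpu > MAXCPU)
            ncpu = MAXCPU;
        for (i = 0; i < nblocks; i += batch) {
            batch = (nblocks - i < ncpu) ? nblocks - i : (int)ncpu;
            for (t = 0; t < batch; t++)
                pthread_create(&tid[t], NULL, worker, &jobs[i + t]);
            for (t = 0; t < batch; t++)
                pthread_join(tid[t], NULL);
        }
    }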
On 22/08/06, Bill Moore [EMAIL PROTECTED] wrote:
On Mon, Aug 21, 2006 at 02:40:40PM -0700, Anton B. Rang wrote:
Yes, ZFS uses this command very frequently. However, it only does this
if the whole disk is under the control of ZFS, I believe; so a
workaround could be to use slices rather than
IHAC who is using a very similar test (cp -pr /zpool1/Studio11
/zpool1/Studio11.copy) and is seeing behaviour similar to what we've seen
described here; BUT since he's using a single-CPU box (SunBlade 1500) and has a
single disk in his pool, every time the CPU goes into 100%-mode, interactive
Hi.
I started porting the ZFS file system to the FreeBSD operating system.
There is a lot to do, but I'm making good progress, I think.
I'm doing my work in those directories:
contrib/opensolaris/ - userland files taken directly from
OpenSolaris (libzfs, zpool, zfs and
This is fantastic work!
How long have you been at it?
You seem a lot further on than the ZFS-Fuse project.
On 22/08/06, Pawel Jakub Dawidek [EMAIL PROTECTED] wrote:
Hi.
I started porting the ZFS file system to the FreeBSD operating system.
There is a lot to do, but I'm making good progress,
Roch wrote:
Michael Schuster writes:
IHAC who is using a very similar test (cp -pr /zpool1/Studio11
/zpool1/Studio11.copy) and is seeing behaviour similar to what we've
seen described here; BUT since he's using a single-CPU box (SunBlade
1500) and has a single disk in his pool, every
Michael Schuster - Sun Microsystems writes:
Roch wrote:
Michael Schuster writes:
IHAC who is using a very similar test (cp -pr /zpool1/Studio11
/zpool1/Studio11.copy) and is seeing behaviour similar to what we've
seen described here; BUT since he's using a single-CPU box
On 8/22/06, Pawel Jakub Dawidek [EMAIL PROTECTED] wrote:
I started porting the ZFS file system to the FreeBSD operating system.
Mighty cool!! Please keep us posted!!
raj
Roch wrote:
Dick Davies writes:
On 22/08/06, Bill Moore [EMAIL PROTECTED] wrote:
On Mon, Aug 21, 2006 at 02:40:40PM -0700, Anton B. Rang wrote:
Yes, ZFS uses this command very frequently. However, it only does this
if the whole disk is under the control of ZFS, I believe; so a
On Tue, Aug 22, 2006 at 12:22:44PM +0100, Dick Davies wrote:
This is fantastic work!
How long have you been at it?
As I said, 10 days, but this is really far from being finished.
--
Pawel Jakub Dawidek http://www.wheel.pl
[EMAIL PROTECTED]
A question (well, let's make it 3 really) – Is vdbench a useful tool when testing
file system performance of a ZFS file system? Secondly - is ZFS write
performance really much worse than UFS or VxFS? And third - what is a good
benchmarking tool to test ZFS vs UFS vs VxFS?
The reason I ask is
On Tue, Aug 22, 2006 at 04:30:44PM +0200, Jeremie Le Hen wrote:
I don't know much about ZFS, but Sun states this is a 128-bit
filesystem. How will you handle this with regard to the FreeBSD
kernel interface that is already struggling to be 64-bit
compliant? (I'm stating this based on this
Hello!
I searched the net and the forum for this, but couldn't find anything about
this.
Can someone tell how effective ZFS compression and space-efficiency are
(regarding small files)?
Since compression works at the block level, I assume compression may not come
into effect as some may
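(To make the block-level point concrete, a small hedged illustration in C,
using zlib's compress() purely as a stand-in for ZFS's lzjb. As I understand
it, ZFS compresses each block independently and keeps the compressed copy only
when it actually saves space; and a tiny file still occupies at least one
allocated sector either way:)

    /* Illustration only: per-block compression of a small file. */
    #include <stdio.h>
    #include <string.h>
    #include <zlib.h>

    int
    main(void)
    {
        unsigned char block[512];       /* a small file: one small block */
        unsigned char out[1024];
        uLongf outlen = sizeof (out);

        memset(block, 'A', sizeof (block));     /* very compressible */
        if (compress(out, &outlen, block, sizeof (block)) != Z_OK)
            return (1);
        (void) printf("in=%u out=%lu\n",
            (unsigned int)sizeof (block), (unsigned long)outlen);
        /*
         * However good the ratio, the block still consumes at least
         * one disk sector, so savings on tiny files are limited.
         */
        return (0);
    }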
Michael Schuster - Sun Microsystems wrote:
Pawel Jakub Dawidek wrote:
On Tue, Aug 22, 2006 at 04:30:44PM +0200, Jeremie Le Hen wrote:
I don't know much about ZFS, but Sun states this is a 128-bit
filesystem. How will you handle this with regard to the FreeBSD
kernel interface that is
On Tue, Aug 22, 2006 at 06:15:08AM -0700, Tony Galway wrote:
A question (well, let's make it 3 really) – Is vdbench a useful tool
when testing file system performance of a ZFS file system? Secondly -
is ZFS write performance really much worse than UFS or VxFS? And third
- what is a good
On Tue, Aug 22, 2006 at 08:43:32AM -0700, roland wrote:
Can someone tell how effective ZFS compression and
space-efficiency are (regarding small files)?
Since compression works at the block level, I assume compression may
not come into effect as some may expect. (Maybe I'm wrong here.)
It's
Hi
2006/8/22, Constantin Gonzalez [EMAIL PROTECTED]:
Thomas Deutsch wrote:
I'm thinking about changing from Linux/software RAID to
OpenSolaris/ZFS. Along the way, I've got some (probably stupid)
questions:
don't worry, there are no stupid questions :).
1. Is ZFS able to encrypt all the
On Tue, 22 Aug 2006, Matthew Ahrens wrote:
gzip. We plan to implement a broader range of compression algorithms in
the future.
Cool. Presumably, the algorithm used will be a user-settable property?
--
Rich Teer, SCNA, SCSA, OpenSolaris CAB member
President,
Rite Online Inc.
Voice: +1
On 8/10/06, Neil Perrin [EMAIL PROTECTED] wrote:
Myron Scott wrote:
Is there any difference between fdatasync and fsync on ZFS?
No. ZFS does not log data and metadata separately; rather,
it logs essentially the system-call records, e.g. writes, mkdir,
truncate, setattr, etc. So fdatasync and
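(A trivial sketch of the consequence -- the file path below is made up -- both
calls should cost the same on ZFS, since the same intent-log records get
forced either way:)

    /* Illustration: fdatasync() and fsync() behave alike on ZFS. */
    #include <fcntl.h>
    #include <unistd.h>

    int
    main(void)
    {
        int fd = open("/tank/fs/somefile", O_WRONLY | O_CREAT, 0644);

        if (fd == -1)
            return (1);
        (void) write(fd, "data", 4);
        (void) fdatasync(fd);   /* no cheaper than fsync() on ZFS */
        (void) fsync(fd);
        return (close(fd) == -1);
    }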
Shane, I wasn't able to reproduce this failure on my system. Could you
try running Eric's D script below and send us the output while running
'zfs list'?
thanks,
--matt
On Fri, Aug 18, 2006 at 09:47:45AM -0700, Eric Schrock wrote:
Can you send the output of this D script while running 'zfs
Bill,
I realized just now that we're actually sending the wrong variant of
SYNCHRONIZE CACHE, at least for SCSI devices which support SBC-2.
SBC-2 (or possibly even SBC-1, I don't have it handy) added the SYNC_NV bit to
the command. If SYNC_NV is set to 0, the device is required to flush data
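(For the curious, a hedged sketch -- device path and error handling are
illustrative only -- of what sending SYNCHRONIZE CACHE (10) with the SYNC_NV
bit set might look like through the uscsi(7I) pass-through:)

    /*
     * Hypothetical sketch: SYNCHRONIZE CACHE (10), opcode 0x35, with
     * the SYNC_NV bit (byte 1, bit 2) optionally set, via uscsi(7I).
     */
    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/scsi/impl/uscsi.h>

    int
    sync_cache(const char *dev, int sync_nv)
    {
        unsigned char cdb[10];
        struct uscsi_cmd ucmd;
        int fd, ret;

        if ((fd = open(dev, O_RDWR)) == -1)
            return (-1);
        (void) memset(cdb, 0, sizeof (cdb));
        (void) memset(&ucmd, 0, sizeof (ucmd));
        cdb[0] = 0x35;                  /* SYNCHRONIZE CACHE (10) */
        if (sync_nv)
            cdb[1] |= 0x04;             /* SYNC_NV: NV cache is enough */
        ucmd.uscsi_cdb = (caddr_t)cdb;
        ucmd.uscsi_cdblen = sizeof (cdb);
        ucmd.uscsi_flags = USCSI_SILENT;
        ucmd.uscsi_timeout = 60;        /* seconds */
        ret = ioctl(fd, USCSICMD, &ucmd);
        (void) close(fd);
        return (ret);
    }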
Just updating the discussion with some email chains. After more digging, this
does not appear to be a version 2 or 3 replication issue. I believe it to be
an invalidly named snapshot that causes the zpool and zfs commands to core.
Tim mentioned it may be similar to bug 6450219.
I agree it seems
We're running ZFS with compress=ON on an E2900. I'm hosting SAS/SPDS datasets
(files) on these filesystems and am achieving a 3.87:1 compression ratio (as
reported by zfs). Your mileage will vary depending on the data you are writing.
If your data is already compressed (zip files) then don't expect any
On Tue, Aug 22, 2006 at 11:46:30AM -0700, Anton B. Rang wrote:
I realized just now that we're actually sending the wrong variant of
SYNCHRONIZE CACHE, at least for SCSI devices which support SBC-2.
SBC-2 (or possibly even SBC-1, I don't have it handy) added the
SYNC_NV bit to the command. If
Saw this while writing a script today -- while debugging the script, I was
ctrl-C-ing it a lot rather than waiting for the zfs create / zfs set commands
to complete. After doing so, my cleanup script failed to zfs destroy the new
filesystem:
[EMAIL PROTECTED]:/ # zfs destroy -f
Filed as 6462690.
If our storage qualification test suite doesn't yet check for support of this
bit, we might want to get that added; it would be useful to know (and gently
nudge vendors who don't yet support it).
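(If such a check isn't there yet, a probe could be as simple as reusing the
hypothetical sync_cache() sketch from earlier in this thread -- the device
path is again made up:)

    /* Does the device accept SYNCHRONIZE CACHE with SYNC_NV set? */
    if (sync_cache("/dev/rdsk/c1t0d0s2", 1) == 0)
        (void) printf("SYNC_NV accepted\n");
    else
        (void) printf("SYNC_NV rejected or command failed\n");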
Hello zfs-discuss,
S10U2 SPARC + patches
Generic_118833-20
LUNs from 3510 array.
bash-3.00# zpool import
no pools available to import
bash-3.00# zpool create f3-1 mirror c5t600C0FF0098FD535C3D2B900d0
c5t600C0FF0098FD54CB01E1100d0 mirror
Robert Milkowski wrote:
Hello zfs-discuss,
S10U2 SPARC + patches
Generic_118833-20
LUNs from 3510 array.
bash-3.00# zpool import
no pools available to import
bash-3.00# zpool create f3-1 mirror c5t600C0FF0098FD535C3D2B900d0
c5t600C0FF0098FD54CB01E1100d0
Hello Robert,
After server restart I got:
bash-3.00# zpool create test c5t600C0FF0098FD535C3D2B900d0
warning: device in use checking failed: No such device
bash-3.00# zpool list
NAME     SIZE    USED   AVAIL    CAP  HEALTH  ALTROOT
test     204G   84.5K
Hello James,
Tuesday, August 22, 2006, 11:52:37 PM, you wrote:
JCM Robert Milkowski wrote:
Hello zfs-discuss,
S10U2 SPARC + patches
Generic_118833-20
LUNs from 3510 array.
bash-3.00# zpool import
no pools available to import
bash-3.00# zpool create f3-1 mirror
Hello Eric,
Tuesday, August 22, 2006, 11:51:55 PM, you wrote:
ES This looks like a bug in the in-use checking for SVM (?). What build
ES are you running?
S10 update2 + patches, kernel Generic_118833-20 sparc
ES In the meantime, you can work around this by setting 'NOINUSE_CHECK' in
ES your
Hello Anton,
Tuesday, August 22, 2006, 9:53:57 PM, you wrote:
ABR Filed as 6462690.
ABR If our storage qualification test suite doesn't yet check for
ABR support of this bit, we might want to get that added; it would be
ABR useful to know (and gently nudge vendors who don't yet support it).
Hi Robert,
Looks like you are using libumem? And it looks like there is a possible
memory issue in the libmeta code when we are trying to dlopen it from
libdiskmgt.
I think we would have seen this more if it were happening every time with
u2 bits. Doesn't mean it's not a bug, but looks like
Anton B. Rang wrote:
If you issue aligned, full-record write requests, there is a definite advantage
to continuing to set the record size. It allows ZFS to process the write
without the read-modify-write cycle that would be required for the default 128K
record size. (While compression
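(A minimal sketch of the idea -- RECSZ is assumed to have been matched to the
dataset's recordsize property beforehand, e.g. with zfs set recordsize=8k;
everything else is illustrative:)

    /*
     * Sketch: aligned, full-record writes, so each record is replaced
     * wholesale and no read-modify-write pass is needed.
     */
    #include <fcntl.h>
    #include <unistd.h>

    #define RECSZ   8192    /* must match the dataset's recordsize */

    int
    write_records(const char *path, const char *buf, size_t nrec)
    {
        size_t i;
        int fd = open(path, O_WRONLY | O_CREAT, 0644);

        if (fd == -1)
            return (-1);
        for (i = 0; i < nrec; i++) {
            /* offset and length are both record-aligned */
            if (pwrite(fd, buf + i * RECSZ, RECSZ,
                (off_t)(i * RECSZ)) != (ssize_t)RECSZ) {
                (void) close(fd);
                return (-1);
            }
        }
        return (close(fd));
    }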
Filed as 6462690.
If our storage qualification test suite doesn't yet
check for support of this bit, we might want to get
that added; it would be useful to know (and gently
nudge vendors who don't yet support it).
Is either the test suite, or at least a list of what it tests
(which it