Hello Jesus,
Wednesday, February 21, 2007, 5:54:35 AM, you wrote:
JC Joerg Schilling wrote:
What they failed to mention is that you need to access the whole disk
frequently enough for SMART to be able to work.
JC I thought modern
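For what it's worth, a periodic scrub is one way to make ZFS touch every allocated block, which gives SMART (and the ZFS checksums) regular exercise. A minimal sketch, assuming a pool named "tank":

    # read and verify every allocated block in the pool
    zpool scrub tank
    # check for errors once it completes
    zpool status tank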
So Jonathan, you have a concern about the on-disk space
efficiency for small files (more or less sub-sector). It is a
problem that we can throw rust at. I am not sure if this is
the basis of Claude's concern, though.
On creating small files: last week I ran a small test. With ZFS
I can create 4600
Thank you for your answers, but I have another question.
If I don't use any special ACLs with Samba and ZFS, each user can
only read and write in his own home directory. Am I affected by the
incompatibility?
Thank you again.
Rod
2007/2/19, Eric Enright [EMAIL PROTECTED]:
On 2/19/07, Rod
More detailed description of the readdir test and conclusion at the end:
Roch asked me:
Is this an NFS V3 or V4 test, or don't care?
I am running NFS V3, but a short test of NFS V4 showed that the
problem is there too.
Then Roch asked:
I've run rdir on a few of my large directories; however, my
On Wed, Feb 21, 2007 at 11:21:27AM +0100, Rodrigo Ler?a wrote:
If I don't use any special ACLs with Samba and ZFS, each user can
only read and write in his own home directory. Am I affected by the
incompatibility?
Samba runs as the requesting user during file access. Because of that,
any file
Not sure how technically feasible it is, but something I thought of while
shuffling some files around my home server. My (poor) understanding of ZFS
internals is that the entire pool is effectively a tree structure, with nodes
being either data or metadata. Given that, couldn't ZFS just change a
Adrian,
Seems like a cool idea to me :-) Not sure if there is anything of this
kind being thought about...
Would be a good idea to file an RFE.
Regards,
Sanjeev
Adrian Saul wrote:
Not sure how technically feasible it is, but something I thought of while
shuffling some files around my home
Thank you very much for your answer. It is very useful for me.
Thank you
2007/2/21, Ed Plese [EMAIL PROTECTED]:
On Wed, Feb 21, 2007 at 11:21:27AM +0100, Rodrigo Ler?a wrote:
If I don't use any special ACLs with Samba and ZFS, each user can
only read and write in his own home directory. I am
The inability to shrink a pool by removing devices is the only reason my
enterprise is not yet using ZFS, simply because it prevents us from
easily migrating storage.
That logic is totally bogus AFAIC. There are so many advantages to
running ZFS that denying yourself that opportunity is
On Wed, 21 Feb 2007, Valery Fouques wrote:
Here in my company we are very interested in ZFS, but we do not care
about the RAID/mirror features, because we already have a SAN with
RAID-5 disks, and dual fabric connection to the hosts.
... And presumably you've read the threads where ZFS has
I cannot let you say that.
Here in my company we are very interested in ZFS, but we do not care
about the RAID/mirror features, because we already have a SAN with
RAID-5 disks, and dual fabric connection to the hosts.
But you understand that these underlying RAID mechanisms give absolutely
no
Not sure how technically feasible it is, but something I thought of
while shuffling some files around my home server. My (poor)
understanding of ZFS internals is that the entire pool is effectively a
tree structure, with nodes being either data or metadata. Given that,
couldn't ZFS just change
We have a system with two drives in it, part UFS, part ZFS. It's a software
mirrored system with slices 0, 1, and 3 set up as small UFS slices, and slice 4 on
each drive being the ZFS slice.
One of the drives is failing and we need to replace it.
I just want to make sure I have the correct order of
Matt Cohen wrote:
We have a system with two drives in it, part UFS, part ZFS. It's a software
mirrored system with slices 0, 1, and 3 set up as small UFS slices, and slice 4 on
each drive being the ZFS slice.
One of the drives is failing and we need to replace it.
I just want to make sure I
Matt,
Generally, when a disk needs to be replaced, you replace the disk,
use the zpool replace command, and you're done...
This is only a little more complicated in your scenario below because
the disk is shared between ZFS and UFS.
Most disks are hot-pluggable so you generally don't need
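In outline, the sequence usually looks something like this (pool and device names here are hypothetical; substitute your own):

    # copy the slice layout from the surviving disk to the replacement
    prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2
    # resilver the ZFS mirror onto the new disk's ZFS slice
    zpool replace tank c0t1d0s4
    # watch the resilver progress
    zpool status tank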
Adrian Saul wrote:
Not hard to work around - zfs create and a mv/tar command and it is
done... some time later. If there were, say, a 'zfs graft directory
newfs' command, you could just break off the directory as a new
filesystem and away you go - no copying, no risk of cleaning up the
wrong files
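For reference, the manual workaround today looks roughly like this (pool and path names are hypothetical):

    # create the new filesystem, then physically copy the data across
    zfs create tank/home/projects
    (cd /tank/home/olddir && tar cf - .) | (cd /tank/home/projects && tar xf -)
    rm -rf /tank/home/olddir

A graft-style command would skip the copy entirely, since the directory's blocks are already in the pool.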
Matt,
Also, since you only have two drives and are using software mirroring
for the UFS slices, you'll need to follow the proper procedures for the
software mirroring metadata replicas. See the pertinent docs for details.
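A sketch of that part, assuming the replicas live on slice 3 and d10 is the mirror containing component c0t1d0s0 (all names hypothetical):

    # remove the state database replicas that were on the failed disk
    metadb -d c0t1d0s3
    # recreate them on the replacement disk
    metadb -a -c 2 c0t1d0s3
    # re-enable the submirror component on the replaced slice
    metareplace -e d10 c0t1d0s0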
-- richard
[EMAIL PROTECTED] wrote:
Matt,
Generally, when a disk needs
On February 21, 2007 4:43:34 PM +0100 [EMAIL PROTECTED] wrote:
I cannot let you say that.
Here in my company we are very interested in ZFS, but we do not care
about the RAID/mirror features, because we already have a SAN with
RAID-5 disks, and dual fabric connection to the hosts.
But you
Valery Fouques wrote:
The inability to shrink a pool by removing devices is the only reason my
enterprise is not yet using ZFS, simply because it prevents us from
easily migrating storage.
That logic is totally bogus AFAIC. There are so many advantages to
running ZFS that denying yourself that
On February 21, 2007 10:55:43 AM -0800 Richard Elling
[EMAIL PROTECTED] wrote:
Valery Fouques wrote:
The inability to shrink a pool by removing devices is the only reason my
enterprise is not yet using ZFS, simply because it prevents us from
easily migrating storage.
That logic is totally bogus
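For what it's worth, migration to new storage already works today without shrink support, as long as you move device for device. A minimal sketch (pool and LUN names hypothetical):

    # replace each old SAN LUN with a new one; ZFS resilvers onto it
    zpool replace tank c2t0d0 c3t0d0

What you cannot do yet is reduce the number of devices in a pool, which is what shrink would add.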
Perforce is based upon Berkeley DB (some early version), so standard
database-on-ZFS techniques are relevant. For example, putting the journal file on a
different disk than the table files. There are several threads about optimizing
databases under ZFS.
If you need a screaming perforce
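A sketch of what that separation might look like (the layout and the 8k recordsize are assumptions, not tested recommendations):

    # keep the journal on its own pool/spindles, away from the db.* table files
    zpool create p4data c1t0d0
    zpool create p4jrnl c2t0d0
    zfs create p4data/db
    # match recordsize to the Berkeley DB page size (assumed 8k here)
    zfs set recordsize=8k p4data/db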
On Wed, Feb 21, 2007 at 10:11:43AM -0800, Matthew Ahrens wrote:
Adrian Saul wrote:
Not hard to work around - zfs create and a mv/tar command and it is
done... some time later. If there was say a zfs graft directory
newfs command, you could just break of the directory as a new
filesystem and
Below is another paper on drive failure analysis, this one won best
paper at USENIX:
http://www.usenix.org/events/fast07/tech/schroeder/schroeder_html/index.html
What I found most interesting was the idea that drives don't fail
outright most of the time. They can slow down operations,
Hi ZFS'ers,
We're putting together an internal ZFS performance document and
could use your experiences. If you have ZFS performance data to
share please send it to me. I'm looking for good news or bad,
whatever your actual experience is. Specific quantitative data
is most useful. (It seems
Gregory Shaw wrote:
Below is another paper on drive failure analysis, this one won best
paper at USENIX:
http://www.usenix.org/events/fast07/tech/schroeder/schroeder_html/index.html
What I found most interesting was the idea that drives don't fail
outright most of the time. They can slow
On Feb 21, 2007, at 4:59 PM, Richard Elling wrote:
With this behavior in mind, I had an idea for a new feature in ZFS:
if a disk fitness test were available to verify disk read/write
behavior and performance, future drive problems could be avoided.
Some example tests:
- full disk read
- 8kb r/w iops
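Some of these can be approximated from userland today. A crude stand-in for the full-disk-read test (device name hypothetical; a write test on a live disk would of course be destructive):

    # stream the whole raw device and time it
    time dd if=/dev/rdsk/c0t0d0s2 of=/dev/null bs=1024k

A sustained throughput drop on one disk relative to its peers would be the cue to investigate.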
On Feb 21, 2007, at 12:11 PM, Matthew Ahrens wrote:
Adrian Saul wrote:
Not hard to work around - zfs create and a mv/tar command and it is
done... some time later. If there was say a zfs graft directory
newfs command, you could just break of the directory as a new
filesystem and away you go
On Wed, Feb 21, 2007 at 03:35:06PM -0700, Gregory Shaw wrote:
Below is another paper on drive failure analysis, this one won best
paper at USENIX:
http://www.usenix.org/events/fast07/tech/schroeder/schroeder_html/index.html
What I found most interesting was the idea that drives don't
Nissim Ben Haim wrote:
I was asked by a customer considering the x4500 - how much time should
it take to rebuild a failed disk under RAID-Z?
This question keeps popping up because customers perceive software RAID as
substantially inferior to hardware RAID.
I could not find someone who has really
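One point worth making in any reply: ZFS resilvers only the allocated data, not the whole raw disk, so rebuild time scales with how full the pool is rather than with raw capacity. Progress is visible directly (pool name hypothetical):

    # shows "resilver in progress", percent done, and an estimated time to go
    zpool status tank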
On Feb 21, 2007, at 5:20 PM, Eric Schrock wrote:
On Wed, Feb 21, 2007 at 03:35:06PM -0700, Gregory Shaw wrote:
Below is another paper on drive failure analysis, this one won best
paper at USENIX:
http://www.usenix.org/events/fast07/tech/schroeder/schroeder_html/index.html
What I found most
On 2/22/07, Gregory Shaw [EMAIL PROTECTED] wrote:
I was thinking of something similar to a scrub. An ongoing process
seemed too intrusive. I'd envisioned a cron job similar to a scrub (or
defrag) that could be run periodically to show any differences in disk
performance over time.
Leon Koll wrote:
Given that the described problem is 100% an NFS client problem, there
is nothing to do in the ZFS code to improve the situation.
You may want to see if the folks over at [EMAIL PROTECTED]
have any ideas on your NFS problem.
--matt
Dale Ghent wrote:
but it got me thinking about how things such as the current
compression ratio for a volume could be indicated over an otherwise
ZFS-agnostic NFS export. The .zfs snapdir came to mind. Perhaps ZFS
could maintain a special file under there, called compressratio for
example, and
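Locally the property is already queryable, of course; the idea here would be exposing something like the following through the NFS namespace (dataset name hypothetical):

    # the existing local view of the property
    zfs get compressratio tank/export/home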
All,
I think dtrace could be a viable option here: have cron run a
dtrace script on a regular basis that times a series of reads and then
feeds that info to Cacti or rrdtool. It's not quite the
one-size-fits-all that the OP was looking for, but if you want trends,
this should get 'em.
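A minimal sketch of such a script, using the io provider to average read service times per device (the 10-second interval and output format are arbitrary choices, and the output would still need massaging for rrdtool):

    #!/usr/sbin/dtrace -s
    /* record when each physical read starts */
    io:::start
    /args[0]->b_flags & B_READ/
    {
            rstart[args[0]->b_edev, args[0]->b_blkno] = timestamp;
    }

    /* on completion, feed the latency into a per-device average */
    io:::done
    /rstart[args[0]->b_edev, args[0]->b_blkno]/
    {
            @avg_us[args[1]->dev_statname] =
                avg((timestamp - rstart[args[0]->b_edev, args[0]->b_blkno]) / 1000);
            rstart[args[0]->b_edev, args[0]->b_blkno] = 0;
    }

    /* print and reset every 10 seconds */
    tick-10s
    {
            printa("%s %@d\n", @avg_us);
            trunc(@avg_us);
    }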
Correct me if I'm wrong, but FMA seems like a more appropriate tool to
track disk errors.
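If you do go the FMA route, the existing tools already expose this without any scripting:

    # list resources FMA has diagnosed as faulty
    fmadm faulty
    # dump the underlying error reports (ereports), including disk errors
    fmdump -eV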
--
Just me,
Wire ...
On 2/22/07, TJ Easter [EMAIL PROTECTED] wrote:
All,
I think dtrace could be a viable option here: have cron run a
dtrace script on a regular basis that times a series of reads and
Since most of our customers are predominantly UFS-based, we would like to use
the same configuration and compare ZFS performance, so that we can announce
support for ZFS.
We're planning on measuring the performance of a ZFS file system vs. a UFS file
system.
Please look at the following scenario