Martin,
This is a shot in the dark, but this seems to be an I/O scheduling
issue.
Since I am late on this thread, what are the characteristics of
the I/O: read-mostly, appending writes, read-modify-write,
sequential, random, single large file, multiple
http://www.sun.com/software/solaris/ds/zfs.jsp
Solaris ZFS: The Most Advanced File System on the Planet
Anyone who has ever lost important files, run out of space on a
partition, spent weekends adding new storage to servers, tried to grow
or shrink a file system, or experienced data corruption
Toby Thain, et al,
I am guessing here, but to just be able to access
the FS data locally without the headaches of
verifying FS consistency, write caches, etc.
Mitchell Erblich
Toby Thain wrote:
On 13-Jun-07, at 1:14 PM, Rick Mann
Group,
Isn't Apple's strength really in the non-compute-intensive
personal computer / small business environment?
I.e., plug and play.
Thus, even though ZFS is able to work as the default
FS, should it be the default FS for the small system
Group,
MOST people want a system to work without doing
ANYTHING when they turn on the system.
So yes, the thought of people buying another
drive and installing it in a brand new system
would be insane for this group of buyers.
Mitchell Erblich
--
Darren J Moffat wrote:
Erblichs wrote:
So, my first order would be to take 1GB or 10GB .wav files
AND time both the kernel implementation of gzip and the
user application. Approximately the same times MAY indicate
that the kernel implementation of gzip
Ian Collins,
My two free cents..
If the gzip was in application space, most gzip implementations
support (maybe with a new compile) a less extensive/expensive
deflation level that would consume fewer CPU cycles.
Secondly, if the file objects are being written
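The level tradeoff mentioned above can be sketched with zlib (the same deflate algorithm gzip uses). This is a minimal illustration of the general point, not the poster's test; the data buffer and level choices are mine:

```python
# Minimal sketch: deflate compression levels trade CPU time for
# compression ratio.  Level 1 is fastest, level 9 compresses best.
import time
import zlib

# Highly compressible test data standing in for a large file.
data = b"some repetitive audio-like data " * 100_000

for level in (1, 6, 9):
    start = time.perf_counter()
    compressed = zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    print(f"level {level}: {len(compressed):>8} bytes in {elapsed:.3f}s")
```

On typical input, level 1 finishes noticeably faster at the cost of a larger output, which is the knob a user-space gzip could turn down to save CPU.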
Jorg,
Do you really think that ANY FS actually needs to support
more FS objects? If that would be an issue, why not create
more FSs?
A multi-TB FS SHOULD support 100MB+/GB-size FS objects, which
IMO is the more common use. I have seen this a lot in video
Ming,
Let's take a pro example with a minimal performance
tradeoff.
All FSs that modify a disk block, IMO, do a full
disk block read before anything.
If doing an extended write and moving to a
larger block size with COW you give yourself
the
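The copy-on-write idea referred to above can be sketched in a few lines. This is my toy illustration of the general mechanism, not ZFS code; the in-memory "block map" stands in for on-disk structures:

```python
# Minimal copy-on-write sketch: instead of overwriting a block in
# place, the modified copy goes to a newly allocated block and the
# file's block pointer is switched to it.

blocks = {0: b"old data".ljust(16, b"\x00")}  # physical block no -> contents
file_map = [0]       # the file's single logical block points at block 0
next_free = 1

def cow_write(logical: int, new_data: bytes) -> None:
    """Copy-on-write update of one logical block."""
    global next_free
    old = blocks[file_map[logical]]
    updated = new_data.ljust(len(old), b"\x00")
    blocks[next_free] = updated      # write to a NEW physical block
    file_map[logical] = next_free    # repoint the mapping
    next_free += 1                   # the old block survives untouched

cow_write(0, b"new data")
```

Because the old block is never overwritten, it remains available to snapshots, and the repointing step is what must be made atomic.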
Leon Koll,
As a knowledgeable outsider I can say something.
The benchmark (SFS) page specifies NFSv3/v2 support, so I question
whether you ran NFSv4. I would expect a major change in
performance just from NFS version 4 and ZFS.
The benchmark
more problems.
Mitchell Erblich
Sr Software Engineer
-
Joerg Schilling wrote:
Erblichs [EMAIL PROTECTED] wrote:
Joerg Schilling,
Putting the license issues aside for a moment.
I was trying to point people to the fact that the biggest
Rich Teer,
I have a perfect app for the masses.
A Hi-Def Video/ audio server for the hi-def TV
and audio setup.
I would think the average person would want
to have access to 1000s of DVDs / CDs within
a small box versus taking up the
My two cents,
Assuming that you may pick a specific compression algorithm,
most algorithms can have different levels/percentages of
deflation/inflation, which affects the time to compress
and/or inflate wrt the CPU capacity.
Secondly, if I can add an
To the original poster,
FYI,
Accessing RAID drives at a constant ~70-75% probably does not
leave enough excess for degraded mode.
A normal rule of thumb is 50 to 60% constant load, to
allow excess capacity to absorb the extra work in degraded
mode.
An
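The headroom rule of thumb above can be put into rough numbers. This is my own back-of-envelope model, not from the thread: in an N-disk RAID-5 set, a read aimed at the failed disk must read every surviving disk to reconstruct the data, so in the worst case each survivor absorbs roughly the failed disk's entire request rate on top of its own:

```python
# Rough worst-case model: after one disk fails, each survivor carries
# its own steady load plus the failed disk's load (reconstruction
# reads touch every survivor), i.e. roughly double the utilization.

def degraded_utilization(steady_util: float) -> float:
    """Worst-case per-survivor utilization after one disk failure,
    assuming uniform load and reconstruct-on-read for lost blocks."""
    return steady_util * 2  # own load + the failed disk's load

for u in (0.50, 0.60, 0.75):
    print(f"steady {u:.0%} -> degraded ~{degraded_utilization(u):.0%}")
```

Under this crude model, a 70-75% steady load would demand ~140-150% of a survivor's capacity in degraded mode (i.e. saturation), while 50-60% stays near the edge, which is consistent with the 50-60% rule of thumb.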
ZFS Group,
My two cents..
Currently, in my experience, it is a waste of time to try to
guarantee exact location of disk blocks with any FS.
A simple exception is bad blocks, where a neighboring block
will suffice.
Second, current disk
-
Toby Thain wrote:
On 28-Feb-07, at 6:43 PM, Erblichs wrote:
ZFS Group,
My two cents..
Currently, in my experience, it is a waste of time to try to
guarantee exact location of disk blocks with any FS.
? Sounds like you're confusing logical
Jeff Bonwick,
Do you agree that there is a major tradeoff in
building up a wad of transactions in memory?
We lose the changes if we have an unstable
environment.
Thus, I don't quite understand why a 2-phase
approach to commits isn't done. First,
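The two-phase idea suggested above can be sketched as follows. This is my interpretation of the suggestion, not ZFS's actual transaction-group code: phase 1 makes each pending change durable in an intent log immediately; phase 2 applies the batched changes to the main data later. A crash between the phases loses nothing, because the log can be replayed:

```python
# Minimal 2-phase commit sketch: durable intent log first, batched
# apply second.
import json
import os
import tempfile

workdir = tempfile.mkdtemp()
log_path = os.path.join(workdir, "intent.log")
data_path = os.path.join(workdir, "data.json")

def phase1_log(change: dict) -> None:
    """Phase 1: append the change to the intent log and fsync it."""
    with open(log_path, "a") as f:
        f.write(json.dumps(change) + "\n")
        f.flush()
        os.fsync(f.fileno())

def phase2_commit() -> None:
    """Phase 2: replay the log into the data file, then clear the log."""
    state = {}
    if os.path.exists(data_path):
        with open(data_path) as f:
            state = json.load(f)
    with open(log_path) as f:
        for line in f:
            state.update(json.loads(line))
    with open(data_path, "w") as f:
        json.dump(state, f)
        f.flush()
        os.fsync(f.fileno())
    os.truncate(log_path, 0)   # changes are now safely in the data file

phase1_log({"key": "value"})   # durable immediately
phase2_commit()                # batched apply later
```

The cost is one extra synchronous write per change, which is the tradeoff against losing the in-memory wad on a crash.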
Hey guys,
Due to long URL lookups, the DNLC was pushed to
variable-sized entries. The hit rate was dropping because of
name-too-long misses. This was done long ago while I
was at Sun, under a bug reported by me.
I don't know your usage, but you should
, Erblichs wrote:
Bill Sommerfield,
that's not how my name is spelled
Are there any existing snaps?
no. why do you think this would matter?
Can you have any scripts that may be
removing aged files?
no; there was essentially no other activity on the pool other
, that unless you are just
touching an FS low-level (file) object, all writes are
preceded by at least 1 read.
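The read-before-write behavior described above can be sketched against a flat file standing in for a disk. This is my illustration of the general read-modify-write pattern, not any particular filesystem's code:

```python
# Minimal read-modify-write sketch: to change part of a disk block,
# the whole block is read first, patched in memory, and written back.
import os
import tempfile

BLOCK_SIZE = 4096

path = os.path.join(tempfile.mkdtemp(), "blockdev.img")
with open(path, "wb") as f:
    f.write(b"\x00" * BLOCK_SIZE * 4)   # four zeroed "disk blocks"

def write_partial(offset: int, payload: bytes) -> None:
    """Write payload at an arbitrary byte offset, block by block."""
    with open(path, "r+b") as f:
        while payload:
            block_no = offset // BLOCK_SIZE
            in_block = offset % BLOCK_SIZE
            f.seek(block_no * BLOCK_SIZE)
            block = bytearray(f.read(BLOCK_SIZE))   # the read before the write
            chunk = payload[:BLOCK_SIZE - in_block]
            block[in_block:in_block + len(chunk)] = chunk
            f.seek(block_no * BLOCK_SIZE)
            f.write(block)                          # write the full block back
            payload = payload[len(chunk):]
            offset += len(chunk)

write_partial(100, b"hello")
```

Only a write that is exactly block-aligned and block-sized can skip the initial read, which is the exception the post alludes to.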
Mitchell Erblich
Bill Sommerfeld wrote:
On Thu, 2006-11-09 at 19:18 -0800, Erblichs wrote:
Bill Sommerfeld
Hi,
My suggestion is to direct to a file any command output
that may print thousands of lines.
I have not tried that number of FSs. So, my first
suggestion is to have a lot of phys mem installed.
The second item that I could be concerned with is
path
Hi,
How much time is a long time?
Second, had a snapshot been taken after the file
was created?
Are the src and dst directories in the
same slice?
What other work was being done at the time of
the move?
Were there numerous
the snapshot and remove it?
Mitchell Erblich
Matthew Ahrens wrote:
Erblichs wrote:
Now the stupid question..
If the snapshot is identical to the FS, I can't
remove files from the FS because of the snapshot
and removing files from
Group, et al,
I don't understand that if the problem is systemic based on
the number of continual dirty pages and stress to clean
those pages, then why .
If the problem is FS independent, because any number of
different installed FSs can equally
knowledge applied to UFS (page lists stuff, chksum stuff,
large file awareness/support), but adds a new twist to
things..
Mitchell Erblich
--
Nicolas Williams wrote:
On Fri, Oct 13, 2006 at 09:22:53PM -0700, Erblichs wrote
Group,
If there is a bad vfs ops template, why
wouldn't you just return(error) versus
trying to create the vnode ops template?
My suggestion is after the cmn_err()
then return(error);
Mitchell Erblich
Group,
I am not sure I agree with the 8k size.
Since recordsize is based on the size of filesystem blocks
for large files, my first consideration is what will be
the max size of the file object.
For extremely large files (25 to 100GB) that are accessed
or steal it from another's cache.
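The recordsize consideration above reduces to simple arithmetic. This is my back-of-envelope sketch, not from the thread: the number of filesystem blocks needed to map one large file at different recordsizes, where more blocks means more block pointers and indirect-block metadata to traverse:

```python
# Blocks required to hold a file at a given recordsize.

def blocks_needed(file_bytes: int, recordsize: int) -> int:
    """Ceiling division: blocks needed to cover file_bytes."""
    return -(-file_bytes // recordsize)

GB = 1024 ** 3
for rs in (8 * 1024, 128 * 1024):
    n = blocks_needed(100 * GB, rs)
    print(f"recordsize {rs // 1024:>3}K -> {n:,} blocks for a 100GB file")
```

A 100GB file needs about 13.1 million blocks at an 8K recordsize but only about 819 thousand at 128K, which is why the maximum expected object size matters when picking the recordsize.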
Mitchell Erblich
-
Frank Hofmann wrote:
On Thu, 5 Oct 2006, Erblichs wrote:
Casper Dik,
After my posting, I assumed that a code question should be
directed to the ZFS code alias, so I apologize to the people
who don't read code. However, since the discussion is here,
I will post a code proof here. Just use the time program to get
a
Casper Dik,
Yes, I am familiar with Bonwick's slab allocators and tried
them for a wirespeed test of 64-byte pieces for 1Gb, then
100Mb, and lastly 10Mb Ethernet. My results were not
encouraging. I assume it has improved over time.
First, let me ask what