Daniel Phillips [EMAIL PROTECTED] wrote:
These patches add local caching for network filesystems such as NFS.
Have you got before/after benchmark results?
I need to get a new hard drive for my test machine before I can go and get
some more up to date benchmark results. It does seem,
Daniel Phillips [EMAIL PROTECTED] wrote:
Have you got before/after benchmark results?
See attached.
These show a couple of things:
(1) Dealing with lots of metadata slows things down a lot. Note the result of
looking at and reading lots of small files with tar (the last result). The
On Thu, Feb 21, 2008 at 9:55 AM, David Howells [EMAIL PROTECTED] wrote:
Daniel Phillips [EMAIL PROTECTED] wrote:
Have you got before/after benchmark results?
See attached.
These show a couple of things:
(1) Dealing with lots of metadata slows things down a lot. Note the result of
On Thu, Feb 21, 2008 at 02:05:52AM -0500, David H. Lynch Jr. wrote:
Can I boot an initramfs kernel without a block device ?
Yes.
Can I write a filesystem driver for a flash device that does not
require a block device ?
Yes.
Are there any examples of something even close ?
For
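(For readers following the initramfs question above: a minimal sketch of how a kernel can boot with no block device at all, assuming a standard kernel source tree. The cpio path is an example, not taken from the thread; the Kconfig option names are the standard ones.)

```shell
# Sketch: link an initramfs directly into the kernel image so that rootfs
# (a ramfs/tmpfs instance) is populated at boot with no block layer involved.
# Append to the kernel .config (example path for the cpio archive):
cat >> .config <<'EOF'
CONFIG_BLK_DEV_INITRD=y
CONFIG_INITRAMFS_SOURCE="/path/to/rootfs.cpio.gz"
EOF
# If nothing else needs the block layer, CONFIG_BLOCK can be disabled
# entirely ("CONFIG_BLOCK is not set"), and the kernel will still boot
# from the built-in initramfs.
make olddefconfig && make
```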
Hello everyone,
Btrfs v0.13 is now available for download from:
http://oss.oracle.com/projects/btrfs/
We took another short break from the multi-device code to make the minor mods
required to compile on 2.6.25, fix some problematic bugs and do more tuning.
The most important fix is for file
On Thursday 21 February 2008, Chris Mason wrote:
Hello everyone,
Btrfs v0.13 is now available for download from:
http://oss.oracle.com/projects/btrfs/
We took another short break from the multi-device code to make the minor
mods required to compile on 2.6.25, fix some problematic bugs and
Hi David,
I am trying to spot the numbers that show the sweet spot for this
optimization, without much success so far.
Who is supposed to win big? Is this mainly about reducing the load on
the server, or is the client supposed to win even with a lightly loaded
server?
When you say Ext3
Well, the AFS paper that was referenced earlier was written around the
time of 10base-T and 100base-T. Local disk caching worked well then. There
should also be some papers at CITI about disk caching over slower
connections, and disconnected operation (which should still be
applicable today). There are
David Howells [EMAIL PROTECTED] wrote:
Have you got before/after benchmark results?
See attached.
Attached here are results using BTRFS (patched so that it'll work at all)
rather than Ext3 on the client on the partition backing the cache.
Note that I didn't bother redoing the tests that
Daniel Phillips [EMAIL PROTECTED] wrote:
When you say Ext3 cache vs NFS cache is the first on the server and the
second on the client?
The filesystem on the server is pretty much irrelevant as long as (a) it
doesn't change, and (b) all the data is in memory on the server anyway.
The way the
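(For context on the client-side setup being benchmarked: enabling FS-Cache for an NFS mount typically looks like the following. This is a sketch based on standard cachefiles usage, not commands taken from the thread; the cache directory and export paths are examples.)

```shell
# Sketch of a standard FS-Cache/cachefiles client setup.
# 1. Point cachefilesd at a directory on a local filesystem (e.g. the
#    Ext3 or Btrfs partition backing the cache; path is an example):
echo "dir /var/fscache" > /etc/cachefilesd.conf
cachefilesd
# 2. Mount the NFS export with the 'fsc' option so that retrieved data
#    is cached on the local disk:
mount -t nfs -o fsc server:/export /mnt/nfs
```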
On Thursday 21 February 2008 16:07, David Howells wrote:
The way the client works is like this:
Thanks for the excellent ascii art, that cleared up the confusion right
away.
What are you trying to do exactly? Are you actually playing with it, or just
looking at the numbers I've produced?
--- David Howells [EMAIL PROTECTED] wrote:
Separate the task security context from task_struct. At this point, the
security data is temporarily embedded in the task_struct with two pointers
pointing to it.
...
diff --git a/security/smack/smack_access.c b/security/smack/smack_access.c
--- David Howells [EMAIL PROTECTED] wrote:
Remove the temporarily embedded task security record from task_struct. Instead
it is made to dangle from the task_struct::sec and task_struct::act_as pointers
with references counted for each.
...
The LSM hooks for dealing with task security
--- David Howells [EMAIL PROTECTED] wrote:
Allow kernel services to override LSM settings appropriate to the actions
performed by a task by duplicating a security record, modifying it and then
using task_struct::act_as to point to it when performing operations on behalf
of a task.
This is