Re: vm/fs meetup details

2007-07-27 Thread Dave Kleikamp
Sorry for the late response, but I'm interested in attending.  I didn't
think I would be able to justify the trip for the one-day meeting, but I
begged an invitation to the VM summit from Martin.  I still need to get
travel approval, but I'm optimistic that I can justify the trip now.

Thanks,
Shaggy

On Thu, 2007-07-05 at 06:01 +0200, Nick Piggin wrote:
> Hi,
> 
> The vm/fs meetup will be held September 4th from 10am till 4pm (with the
> option of going longer), at the University of Cambridge. 
> 
> Anton Altaparmakov has arranged a conference room for us with whiteboard
> and projector, so many thanks to him. I will send out the location and
> plans for meeting/getting there after we work out the best strategy for
> that.
> 
> At the moment we have 15 people interested. We can have a few
> more people, so if you aren't cc'ed and would like to come along please
> let me know. We do have limited space, so I'm sorry in advance if anybody
> misses out.
> 
> I'll post out a running list of suggested topics later, but they're
> really just a rough guideline. It will be a round-table kind of thing
> and long monologue talks won't be appropriate; however, some slides or
> whiteboarding to interactively introduce and discuss your idea would
> be OK.
> 
> I think we want to avoid assigning slots for specific people/topics.
> Feel free to propose anything; if it only gets a small amount of
> interest then at least you'll know who to discuss it with later :)
> 
> Thanks,
> Nick
-- 
David Kleikamp
IBM Linux Technology Center



Re: vm/fs meetup details

2007-07-09 Thread Martin Bligh

Jörn Engel wrote:

> On Mon, 9 July 2007 09:29:38 +1000, David Chinner wrote:
> > On Sat, Jul 07, 2007 at 12:45:35PM +0200, Jörn Engel wrote:
> > > 
> > > Oh certainly!  I should dust off my dcache_static patch.  Some dentries
> > > are hands-off for the shrinker, basically mountpoints and tmpfs.  The
> > > patch moves those to a separate slab cache.
> > 
> > I doubt there's enough of those to make any difference - putting all
> > the directories into another slab did little to reduce fragmentation
> > (~18 months ago we tried that), so I don't think that this would help
> > at all...
> 
> Interesting.  I suspect that the de-facto random cache eviction has a
> bigger effect and overshadows everything else.  So the decisive step
> would be to nuke all dentries in a given slab.
> 
> It wouldn't surprise me if your patch did make a difference afterwards.
> With 32 dentries per slab, it doesn't take many pinned objects to pin
> most slabs.


What happened to the patches floating around to stick the dentries for
directories into a different cache? IIRC, they were somewhat problematic
because you don't know what the dentry is used for at allocation time,
but did that ever get fixed / worked around?

M.


Re: vm/fs meetup details

2007-07-08 Thread Jörn Engel
On Mon, 9 July 2007 09:29:38 +1000, David Chinner wrote:
> On Sat, Jul 07, 2007 at 12:45:35PM +0200, Jörn Engel wrote:
> > 
> > Oh certainly!  I should dust off my dcache_static patch.  Some dentries
> > are hands-off for the shrinker, basically mountpoints and tmpfs.  The
> > patch moves those to a separate slab cache.
> 
> I doubt there's enough of those to make any difference - putting all
> the directories into another slab did little to reduce fragmentation
> (~18 months ago we tried that), so I don't think that this would help
> at all...

Interesting.  I suspect that the de-facto random cache eviction has a
bigger effect and overshadows everything else.  So the decisive step
would be to nuke all dentries in a given slab.

It wouldn't surprise me if your patch did make a difference afterwards.
With 32 dentries per slab, it doesn't take many pinned objects to pin
most slabs.
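
For a rough sense of why that is, here is a toy calculation (plain userspace C, not kernel code; the 32-objects-per-slab figure comes from the paragraph above, the pin fractions are made-up examples): if pins land at random, a slab is reclaimable only when none of its 32 dentries is pinned, i.e. with probability (1 - p)^32.

/* Toy model: fraction of 32-object slabs that stay fully reclaimable
 * when a fraction p of dentries is pinned at random positions.
 * Illustrative only; real dentry placement is far from random. */
#include <stdio.h>
#include <math.h>

int main(void)
{
        double pins[] = { 0.005, 0.01, 0.02, 0.05 };
        int objs_per_slab = 32;         /* figure quoted above */

        for (unsigned int i = 0; i < sizeof(pins) / sizeof(pins[0]); i++) {
                double reclaimable = pow(1.0 - pins[i], objs_per_slab);
                printf("%.1f%% pinned dentries -> %.0f%% of slabs reclaimable\n",
                       pins[i] * 100, reclaimable * 100);
        }
        return 0;
}

Even 2% pinned dentries leaves barely half the slabs free of pins in this model, which is the effect being described.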

Jörn

-- 
All art is but imitation of nature.
-- Lucius Annaeus Seneca


Re: vm/fs meetup details

2007-07-08 Thread David Chinner
On Sat, Jul 07, 2007 at 12:45:35PM +0200, Jörn Engel wrote:
> On Fri, 6 July 2007 13:40:03 -0700, Christoph Lameter wrote:
> > 
> > An interesting topic is certainly
> > 
> > 1. Large buffer support
> > 
> > 2. icache/dentry/buffer_head defragmentation.
> 
> Oh certainly!  I should dust off my dcache_static patch.  Some dentries
> are hands-off for the shrinker, basically mountpoints and tmpfs.  The
> patch moves those to a separate slab cache.

I doubt there's enough of those to make any difference - putting all
the directories into another slab did little to reduce fragmentation
(~18 months ago we tried that), so I don't think that this would help
at all...

Cheers,

Dave.
-- 
Dave Chinner
Principal Engineer
SGI Australian Software Group


Re: vm/fs meetup details

2007-07-07 Thread Jörn Engel
On Fri, 6 July 2007 13:40:03 -0700, Christoph Lameter wrote:
> 
> An interesting topic is certainly
> 
> 1. Large buffer support
> 
> 2. icache/dentry/buffer_head defragmentation.

Oh certainly!  I should dust off my dcache_static patch.  Some dentries
are hands-off for the shrinker, basically mountpoints and tmpfs.  The
patch moves those to a separate slab cache.
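
A minimal userspace sketch of the idea as described above, not the dcache_static patch itself: two object caches, with long-lived dentries (mountpoints, tmpfs) kept in their own cache so they cannot scatter across slabs that also hold reclaimable ones. The toy_* types, the cache names and the pinned flag are invented; knowing at allocation time whether a dentry will end up pinned is the part the real patch has to solve, and this sketch simply takes it as a parameter.

/* Sketch of "move unreclaimable dentries to a separate slab".
 * Userspace stand-in for two kmem_caches; not kernel code. */
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

struct toy_dentry {
        const char *name;
        bool pinned;                    /* mountpoint, tmpfs, ... */
};

struct toy_cache {
        const char *name;
        size_t allocated;
};

static struct toy_cache dentry_cache        = { "dentry", 0 };
static struct toy_cache dentry_static_cache = { "dentry_static", 0 };

/* The separation has to happen at (or before) allocation time. */
static struct toy_dentry *alloc_dentry(const char *name, bool pinned)
{
        struct toy_cache *cache = pinned ? &dentry_static_cache : &dentry_cache;
        struct toy_dentry *d = malloc(sizeof(*d));

        if (!d)
                return NULL;
        d->name = name;
        d->pinned = pinned;
        cache->allocated++;
        return d;
}

int main(void)
{
        alloc_dentry("/", true);        /* mountpoint: shrinker must not touch it */
        alloc_dentry("/tmp/f", false);  /* ordinary, reclaimable dentry */
        printf("%s: %zu objects, %s: %zu objects\n",
               dentry_cache.name, dentry_cache.allocated,
               dentry_static_cache.name, dentry_static_cache.allocated);
        return 0;
}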

Jörn

-- 
Data dominates. If you've chosen the right data structures and organized
things well, the algorithms will almost always be self-evident. Data
structures, not algorithms, are central to programming.
-- Rob Pike


Re: vm/fs meetup details

2007-07-06 Thread Neil Brown
On Friday July 6, [EMAIL PROTECTED] wrote:
> Hi,
> 
> On Fri, Jul 06, 2007 at 05:57:49PM +0200, Jörn Engel wrote:
...
> > 
> > Interesting idea.  Is it possible to attach several address spaces to an
> > inode?  That would cure some headaches.
> >
> GFS2 already uses something like this, in fact by having a second inode
> to contain the second address space. That's a bit of a hack but we can't
...
> 
> So that would certainly be an issue that I'd like to discuss to see
> what can be worked out in that area,
> 
> Steve.

Maybe the question here is:

  What support should common code provide for caching indexing
  metadata?

Common code already provides the page cache, which is very nice for
caching file data.
Some filesystems use the blockdev page cache to cache index metadata
by physical address.  But I think that increasingly filesystems want
to cache index metadata by some sort of virtual address.
A second page cache address space would be suitable if the addresses
were dense, and would be acceptable if the blocks were page-sized (or
larger).  But for non-dense, non-page-sized blocks, a radix tree of
pages is less than ideal (I think).
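
Purely to make the alternative concrete (nothing below is LaFS or an existing kernel interface; the meta_block structure, the hash and the 64-bit key are all invented for illustration): a cache keyed directly by the block's virtual address can hold sparse keys and sub-page blocks without dedicating a page-sized slot per index entry.

/* Toy keyed cache for small, sparsely addressed metadata blocks:
 * look blocks up by a 64-bit "virtual address" in a hash table instead
 * of indexing a radix tree of whole pages. Illustrative only. */
#include <stdint.h>
#include <stdlib.h>

#define NR_BUCKETS 1024                 /* 2^10 buckets */

struct meta_block {
        uint64_t key;                   /* virtual address of the block */
        size_t size;                    /* may be much smaller than a page */
        void *data;
        struct meta_block *next;        /* hash chain */
};

static struct meta_block *buckets[NR_BUCKETS];

static unsigned int hash_key(uint64_t key)
{
        return (unsigned int)((key * 0x9e3779b97f4a7c15ULL) >> 54);
}

static struct meta_block *meta_lookup(uint64_t key)
{
        struct meta_block *b;

        for (b = buckets[hash_key(key)]; b; b = b->next)
                if (b->key == key)
                        return b;
        return NULL;
}

static struct meta_block *meta_insert(uint64_t key, size_t size)
{
        unsigned int h = hash_key(key);
        struct meta_block *b = malloc(sizeof(*b));

        if (!b)
                return NULL;
        b->key = key;
        b->size = size;
        b->data = calloc(1, size);
        b->next = buckets[h];
        buckets[h] = b;
        return b;
}

int main(void)
{
        meta_insert(0x100000000ULL, 512);       /* sparse key, sub-page block */
        return meta_lookup(0x100000000ULL) ? 0 : 1;
}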

My filesystem (LaFS, which is actually beginning to work thanks to
Novell's HackWeek) uses non-dense, non-page-sized blocks both for file
indexing and for directories, and while I have a working solution for
each case, there is room for improvements that might fit well with
other filesystems too.

NeilBrown


Re: vm/fs meetup details

2007-07-06 Thread Christoph Lameter
An interesting topic is certainly

1. Large buffer support

2. icache/dentry/buffer_head defragmentation.



Re: vm/fs meetup details

2007-07-06 Thread Steven Whitehouse
Hi,

On Fri, Jul 06, 2007 at 05:57:49PM +0200, Jörn Engel wrote:
> On Fri, 6 July 2007 09:52:14 -0400, Chris Mason wrote:
> > On Fri, 6 Jul 2007 23:42:01 +1000 David Chinner <[EMAIL PROTECTED]> wrote:
> > 
> > > Hmmm - I guess you could use it for writeback ordering. I hadn't
> > > really thought about that. Doesn't seem a particularly efficient way
> > > of doing it, though. Why not just use multiple address spaces for
> > > this? i.e. one per level and flush in ascending order.
> 
> Interesting idea.  Is it possible to attach several address spaces to an
> inode?  That would cure some headaches.
>
GFS2 already uses something like this, in fact by having a second inode
to contain the second address space. That's a bit of a hack but we can't
put the second address space into the inode since that causes problems
during writeback of inodes. So our preferred solution would be to put
the second address space into the glock structure, but we can't do that
until some of the VFS/VM routines can cope with mapping->host not being
an inode.
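
Sketched structurally, with invented toy_* types standing in for the real VFS and GFS2 structures, the hack and the preferred layout look roughly like this; the spare inode exists only because generic code dereferences mapping->host as an inode.

/* Structural sketch of the workaround described above; every type here
 * is a toy stand-in, not the real VFS or GFS2 structure. */
#include <stdio.h>
#include <stdlib.h>

struct toy_inode;

struct toy_address_space {
        struct toy_inode *host;         /* generic code assumes an inode here */
};

struct toy_inode {
        struct toy_address_space i_data;
};

struct toy_glock {
        struct toy_inode *inode;        /* the file itself */
        struct toy_inode *meta_inode;   /* current hack: second inode exists
                                         * only to own the metadata mapping */
        /* Preferred shape once ->host need not be an inode:
         *     struct toy_address_space meta_mapping;
         * embedded here directly, with host pointing at the glock. */
};

static struct toy_inode *new_toy_inode(void)
{
        struct toy_inode *i = malloc(sizeof(*i));

        if (i)
                i->i_data.host = i;
        return i;
}

int main(void)
{
        struct toy_glock gl = {
                .inode      = new_toy_inode(),
                .meta_inode = new_toy_inode(),
        };

        printf("data pages hang off %p, metadata pages off %p\n",
               (void *)gl.inode, (void *)gl.meta_inode);
        return 0;
}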

So that would certainly be an issue that I'd like to discuss to see
what can be worked out in that area.

Steve.



Re: vm/fs meetup details

2007-07-06 Thread Jörn Engel
On Fri, 6 July 2007 09:52:14 -0400, Chris Mason wrote:
> On Fri, 6 Jul 2007 23:42:01 +1000 David Chinner <[EMAIL PROTECTED]> wrote:
> 
> > Hmmm - I guess you could use it for writeback ordering. I hadn't
> > really thought about that. Doesn't seem a particularly efficient way
> > of doing it, though. Why not just use multiple address spaces for
> > this? i.e. one per level and flush in ascending order.

Interesting idea.  Is it possible to attach several address spaces to an
inode?  That would cure some headaches.

> At least in the case of btrfs, the perfect order for sync is disk
> order ;)  COW happens when blocks are changed for the first time in a
> transaction, not when they are written out to disk.  If logfs is
> writing things out in some form of tree order, you're going to have to
> group disk allocations such that tree order reflects disk order somehow.

I don't understand half of what you're writing.  Maybe we should do
another design session on irc?

At any rate, logfs simply writes out blocks.  When it is handed a page
to write, the corresponding block is written.  Allocation happens at
writeout time, not earlier.  Each written block causes a higher-level
block to get changed, so that is written immediately as well, until the
next higher level is the inode.

I would like to instead just dirty the higher-level block, so that
multiple changes can accumulate before indirect blocks are written.  And
I have no idea how transactions relate to all this.
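
A toy model of the two strategies being contrasted (the node structure, tree shape and write counters are invented; this is not logfs code): the first helper rewrites every indirect block up to the inode on each leaf write, the second only dirties the parents and writes them once at sync time.

/* Toy model of the two writeout strategies; node layout and counters
 * are invented for illustration. */
#include <stdio.h>
#include <stdbool.h>

struct node {
        struct node *parent;
        bool dirty;
        int writes;                     /* times this block hit the device */
};

/* Current behaviour described above: every leaf write also rewrites
 * the whole chain of indirect blocks up to the inode. */
static void write_leaf_eager(struct node *leaf)
{
        for (struct node *n = leaf; n; n = n->parent)
                n->writes++;
}

/* Alternative: dirty the parents and write them once at sync time. */
static void write_leaf_lazy(struct node *leaf)
{
        leaf->writes++;
        for (struct node *n = leaf->parent; n; n = n->parent)
                n->dirty = true;
}

static void sync_lazy(struct node *nodes[], int count)
{
        for (int i = 0; i < count; i++)
                if (nodes[i]->dirty) {
                        nodes[i]->writes++;
                        nodes[i]->dirty = false;
                }
}

int main(void)
{
        struct node inode = { .parent = NULL };
        struct node indirect = { .parent = &inode };
        struct node leaf = { .parent = &indirect };
        struct node *all[] = { &leaf, &indirect, &inode };

        for (int i = 0; i < 8; i++)
                write_leaf_eager(&leaf);
        printf("eager: leaf=%d indirect=%d inode=%d\n",
               leaf.writes, indirect.writes, inode.writes);

        leaf.writes = indirect.writes = inode.writes = 0;
        for (int i = 0; i < 8; i++)
                write_leaf_lazy(&leaf);
        sync_lazy(all, 3);
        printf("lazy:  leaf=%d indirect=%d inode=%d\n",
               leaf.writes, indirect.writes, inode.writes);
        return 0;
}

With eight leaf writes the eager variant hits the indirect block and the inode eight times each, the lazy one once each.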

> But, the part where we toss leaves first is definitely useful.

Shouldn't LRU ordering already do that?  I can even imagine cases where
leaves should be tossed last and LRU ordering would dtrt.

Jörn

-- 
The competent programmer is fully aware of the strictly limited size of
his own skull; therefore he approaches the programming task in full
humility, and among other things he avoids clever tricks like the plague.
-- Edsger W. Dijkstra


Re: vm/fs meetup details

2007-07-06 Thread Chris Mason
On Fri, 6 Jul 2007 23:42:01 +1000
David Chinner <[EMAIL PROTECTED]> wrote:

> On Fri, Jul 06, 2007 at 12:26:23PM +0200, Jörn Engel wrote:
> > On Fri, 6 July 2007 20:01:10 +1000, David Chinner wrote:
> > > On Fri, Jul 06, 2007 at 04:26:51AM +0200, Nick Piggin wrote:
> > > 
> > > But, surprisingly enough, the above work is relevent to this
> > > forum because of two things:
> > > 
> > >   - we've had to move to direct I/O and user space caching
> > > to work around deficiencies in kernel block device caching under
> > > memory pressure
> > > 
> > >   - we've exploited techniques that XFS supports but the VM
> > > does not. i.e. priority tagging of cached metadata so that less
> > > important metadata is tossed first (e.g. toss tree leaves before
> > > nodes and nodes before roots) when under memory pressure.
> > 
> > And the latter is exactly what logfs needs as well.  You certainly
> > have me interested.
> > 
> > I believe it applies to btrfs and any other cow-fs as well.  The
> > point is that higher levels get dirtied by writing lower layers.
> > So perfect behaviour for sync is to write leaves first, then nodes,
> > then the root.  Any other order will either cause sync not to sync
> > or cause unnecessary writes and cost performance.
> 
> Hmmm - I guess you could use it for writeback ordering. I hadn't
> really thought about that. Doesn't seem a particularly efficient way
> of doing it, though. Why not just use multiple address spaces for
> this? i.e. one per level and flush in ascending order.
> 

At least in the case of btrfs, the perfect order for sync is disk
order ;)  COW happens when blocks are changed for the first time in a
transaction, not when they are written out to disk.  If logfs is
writing things out in some form of tree order, you're going to have to
group disk allocations such that tree order reflects disk order somehow.
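
For illustration only (nothing below is btrfs code; the block addresses and the dirty_block structure are invented): syncing in disk order just means sorting the dirty blocks by the on-disk address they were given at COW time and submitting them in that order, whatever their tree level.

/* Toy illustration of submitting dirty blocks in disk order rather than
 * tree order. Block numbers and structures are invented. */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

struct dirty_block {
        uint64_t disk_addr;             /* location chosen at COW time */
        int tree_level;                 /* 0 = leaf */
};

static int by_disk_addr(const void *a, const void *b)
{
        const struct dirty_block *x = a, *y = b;

        if (x->disk_addr < y->disk_addr)
                return -1;
        return x->disk_addr > y->disk_addr;
}

int main(void)
{
        /* Tree order and disk order disagree on purpose. */
        struct dirty_block dirty[] = {
                { 9000, 2 }, { 1000, 0 }, { 5000, 1 }, { 1008, 0 },
        };
        int n = sizeof(dirty) / sizeof(dirty[0]);

        qsort(dirty, n, sizeof(dirty[0]), by_disk_addr);
        for (int i = 0; i < n; i++)
                printf("submit block @%llu (tree level %d)\n",
                       (unsigned long long)dirty[i].disk_addr,
                       dirty[i].tree_level);
        return 0;
}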

But, the part where we toss leaves first is definitely useful.

-chris


Re: vm/fs meetup details

2007-07-06 Thread David Chinner
On Fri, Jul 06, 2007 at 12:26:23PM +0200, Jörn Engel wrote:
> On Fri, 6 July 2007 20:01:10 +1000, David Chinner wrote:
> > On Fri, Jul 06, 2007 at 04:26:51AM +0200, Nick Piggin wrote:
> > > 
> > > Keep in mind that the way to get the most out of this meeting is for the
> > > fs people to have topics of the form "we'd really like to do X, can we
> > > get some help from the VM"? Or vice versa from vm people.
> > 
> > *nod*
> > 
> > But, surprisingly enough, the above work is relevant to this forum because
> > of two things:
> > 
> > - we've had to move to direct I/O and user space caching to work
> > around deficiencies in kernel block device caching under memory
> > pressure
> > 
> > - we've exploited techniques that XFS supports but the VM does not.
> > i.e. priority tagging of cached metadata so that less important
> > metadata is tossed first (e.g. toss tree leaves before nodes and nodes
> > before roots) when under memory pressure.
> 
> And the latter is exactly what logfs needs as well.  You certainly have me
> interested.
> 
> I believe it applies to btrfs and any other cow-fs as well.  The point is
> that higher levels get dirtied by writing lower layers.  So perfect
> behaviour for sync is to write leaves first, then nodes, then the root.  Any
> other order will either cause sync not to sync or cause unnecessary writes
> and cost performance.

Hmmm - I guess you could use it for writeback ordering. I hadn't
really thought about that. Doesn't seem a particularly efficient way
of doing it, though. Why not just use multiple address spaces for
this? i.e. one per level and flush in ascending order.
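
A toy rendering of that suggestion, with an invented level_cache array standing in for per-level address spaces: flush level 0 first, and whatever a flush dirties one level up gets written later in the same ascending pass.

/* Toy model of "one cache per tree level, flush in ascending order":
 * leaves go out before nodes, nodes before the root. The per-level
 * containers are invented stand-ins, not kernel address spaces. */
#include <stdio.h>

#define MAX_LEVELS 4

struct level_cache {
        int dirty_blocks;
};

static void flush_level(struct level_cache levels[], int l)
{
        printf("level %d: writing %d dirty blocks\n", l, levels[l].dirty_blocks);
        /* In a COW tree, rewriting level l relocates blocks and so dirties
         * pointers one level up; modelled crudely as one extra dirty block. */
        if (levels[l].dirty_blocks && l + 1 < MAX_LEVELS)
                levels[l + 1].dirty_blocks++;
        levels[l].dirty_blocks = 0;
}

int main(void)
{
        struct level_cache levels[MAX_LEVELS] = { { 128 }, { 9 }, { 2 }, { 1 } };

        /* Ascending order: by the time level l + 1 is flushed, everything
         * that could have dirtied it has already been written. */
        for (int l = 0; l < MAX_LEVELS; l++)
                flush_level(levels, l);
        return 0;
}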

Cheers,

Dave.
-- 
Dave Chinner
Principal Engineer
SGI Australian Software Group


Re: vm/fs meetup details

2007-07-06 Thread Jörn Engel
On Fri, 6 July 2007 20:01:10 +1000, David Chinner wrote:
> On Fri, Jul 06, 2007 at 04:26:51AM +0200, Nick Piggin wrote:
> > 
> > Keep in mind that the way to get the most out of this meeting
> > is for the fs people to have topics of the form "we'd really
> > like to do X, can we get some help from the VM"? Or vice versa
> > from vm people.
> 
> *nod*
> 
> But, surprisingly enough, the above work is relevant to this forum
> because of two things:
> 
>   - we've had to move to direct I/O and user space caching to
> work around deficiencies in kernel block device caching
> under memory pressure
> 
>   - we've exploited techniques that XFS supports but the VM
> does not. i.e. priority tagging of cached metadata so that
> less important metadata is tossed first (e.g. toss tree
> leaves before nodes and nodes before roots) when under
> memory pressure.

And the latter is exactly what logfs needs as well.  You certainly have
me interested.

I believe it applies to btrfs and any other cow-fs as well.  The point
is that higher levels get dirtied by writing lower layers.  So perfect
behaviour for sync is to write leaves first, then nodes, then the root.
Any other order will either cause sync not to sync or cause unnecessary
writes and cost performance.

Jörn

-- 
Public Domain  - Free as in Beer
General Public - Free as in Speech
BSD License- Free as in Enterprise
Shared Source  - Free as in "Work will make you..."


Re: vm/fs meetup details

2007-07-06 Thread David Chinner
On Fri, Jul 06, 2007 at 04:26:51AM +0200, Nick Piggin wrote:
> On Thu, Jul 05, 2007 at 05:40:57PM -0400, Rik van Riel wrote:
> > David Chinner wrote:
> > >On Thu, Jul 05, 2007 at 01:40:08PM -0700, Zach Brown wrote:
> > >>>- repair driven design, we know what it is (Val told us), but
> > >>> how does it apply to the things we are currently working on?
> > >>> should we do more of it?
> > >>I'm sure Chris and I could talk about the design elements in btrfs  
> > >>that should aid repair if folks are interested in hearing about  
> > >>them.  We'd keep the hand-waving to a minimum :).
> > >
> > >And I'm sure I could provide a counterpoint by talking about
> > >the techniques we've used improving XFS repair speed and
> > >scalability without needing to change any on disk formats
> > 
> > Sounds like that could be an interesting discussion.
> > 
> > Especially when trying to answer questions like:
> > 
> > "At what filesystem size will the mitigating fixes no
> >  longer be enough?"
> > 
> > and
> > 
> > "When will people start using filesystems THAT big?"  :)
> 
> Keep in mind that the way to get the most out of this meeting
> is for the fs people to have topics of the form "we'd really
> like to do X, can we get some help from the VM"? Or vice versa
> from vm people.

*nod*

But, surprisingly enough, the above work is relevant to this forum
because of two things:

- we've had to move to direct I/O and user space caching to
  work around deficiencies in kernel block device caching
  under memory pressure

- we've exploited techniques that XFS supports but the VM
  does not. i.e. priority tagging of cached metadata so that
  less important metadata is tossed first (e.g. toss tree
  leaves before nodes and nodes before roots) when under
  memory pressure.
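
The XFS-side mechanics aren't spelled out in the thread, so the sketch below only models the general shape of priority-tagged reclaim (the names, priority levels and per-priority counts are invented): cached metadata carries a priority and the shrinker drains the least important class first.

/* Toy model of priority-tagged metadata reclaim: cached objects carry
 * a priority (leaf < node < root) and the shrinker empties the least
 * important class first. Not XFS code; everything here is invented. */
#include <stdio.h>

enum meta_prio { PRIO_LEAF, PRIO_NODE, PRIO_ROOT, NR_PRIOS };

struct prio_list {
        const char *name;
        int cached;
};

static struct prio_list lists[NR_PRIOS] = {
        [PRIO_LEAF] = { "leaves", 1000 },
        [PRIO_NODE] = { "nodes",   100 },
        [PRIO_ROOT] = { "roots",     4 },
};

/* Free up to nr_to_scan objects, starting from the lowest priority. */
static int shrink_metadata(int nr_to_scan)
{
        int freed = 0;

        for (int p = PRIO_LEAF; p < NR_PRIOS && nr_to_scan > 0; p++) {
                int take = lists[p].cached < nr_to_scan ?
                           lists[p].cached : nr_to_scan;

                lists[p].cached -= take;
                nr_to_scan -= take;
                freed += take;
                printf("reclaimed %d %s\n", take, lists[p].name);
        }
        return freed;
}

int main(void)
{
        shrink_metadata(1050);  /* drops all leaves, then a few nodes */
        return 0;
}

Asking for 1050 objects here frees all the leaves and then some nodes; roots are only touched once everything cheaper is gone.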


> That said, we can talk about whatever interests the group on
> the day. And that could definitely include issues common to
> different filesystems.

Sure ;)

Cheers,

Dave.
-- 
Dave Chinner
Principal Engineer
SGI Australian Software Group


Re: vm/fs meetup details

2007-07-05 Thread Nick Piggin
On Thu, Jul 05, 2007 at 05:40:57PM -0400, Rik van Riel wrote:
> David Chinner wrote:
> >On Thu, Jul 05, 2007 at 01:40:08PM -0700, Zach Brown wrote:
> >>>- repair driven design, we know what it is (Val told us), but
> >>> how does it apply to the things we are currently working on?
> >>> should we do more of it?
> >>I'm sure Chris and I could talk about the design elements in btrfs  
> >>that should aid repair if folks are interested in hearing about  
> >>them.  We'd keep the hand-waving to a minimum :).
> >
> >And I'm sure I could provide a counterpoint by talking about
> >the techniques we've used improving XFS repair speed and
> >scalability without needing to change any on disk formats
> 
> Sounds like that could be an interesting discussion.
> 
> Especially when trying to answer questions like:
> 
> "At what filesystem size will the mitigating fixes no
>  longer be enough?"
> 
> and
> 
> "When will people start using filesystems THAT big?"  :)

Keep in mind that the way to get the most out of this meeting
is for the fs people to have topics of the form "we'd really
like to do X, can we get some help from the VM"? Or vice versa
from vm people.

That said, we can talk about whatever interests the group on
the day. And that could definitely include issues common to
different filesystems.



Re: vm/fs meetup details

2007-07-05 Thread Nick Piggin
On Thu, Jul 05, 2007 at 01:54:06PM -0400, Rik van Riel wrote:
> Nick Piggin wrote:
> >Hi,
> >
> >The vm/fs meetup will be held September 4th from 10am till 4pm (with the
> >option of going longer), at the University of Cambridge. 
> 
> I am interested.  A few potential topics:

OK, I'll put you on the list. Hope to see you there.

 
> - improving laptop_mode in the VFS & VM to further increase
>   battery life in laptops
> 
> - repair driven design, we know what it is (Val told us), but
>   how does it apply to the things we are currently working on?
>   should we do more of it?

Thanks for the suggestions.


Re: vm/fs meetup details

2007-07-05 Thread Rik van Riel

David Chinner wrote:
> On Thu, Jul 05, 2007 at 01:40:08PM -0700, Zach Brown wrote:
> > > - repair driven design, we know what it is (Val told us), but
> > >   how does it apply to the things we are currently working on?
> > >   should we do more of it?
> > 
> > I'm sure Chris and I could talk about the design elements in btrfs
> > that should aid repair if folks are interested in hearing about
> > them.  We'd keep the hand-waving to a minimum :).
> 
> And I'm sure I could provide a counterpoint by talking about
> the techniques we've used improving XFS repair speed and
> scalability without needing to change any on disk formats


Sounds like that could be an interesting discussion.

Especially when trying to answer questions like:

"At what filesystem size will the mitigating fixes no
 longer be enough?"

and

"When will people start using filesystems THAT big?"  :)

--
Politics is the struggle between those who want to make their country
the best in the world, and those who believe it already is.  Each group
calls the other unpatriotic.


Re: vm/fs meetup details

2007-07-05 Thread David Chinner
On Thu, Jul 05, 2007 at 01:40:08PM -0700, Zach Brown wrote:
> >- repair driven design, we know what it is (Val told us), but
> >  how does it apply to the things we are currently working on?
> >  should we do more of it?
> 
> I'm sure Chris and I could talk about the design elements in btrfs  
> that should aid repair if folks are interested in hearing about  
> them.  We'd keep the hand-waving to a minimum :).

And I'm sure I could provide a counterpoint by talking about
the techniques we've used to improve XFS repair speed and
scalability without needing to change any on-disk formats.

Cheers,

Dave.
-- 
Dave Chinner
Principal Engineer
SGI Australian Software Group


Re: vm/fs meetup details

2007-07-05 Thread Zach Brown

> - repair driven design, we know what it is (Val told us), but
>   how does it apply to the things we are currently working on?
>   should we do more of it?


I'm sure Chris and I could talk about the design elements in btrfs  
that should aid repair if folks are interested in hearing about  
them.  We'd keep the hand-waving to a minimum :).


- z


Re: vm/fs meetup details

2007-07-05 Thread Rik van Riel

Nick Piggin wrote:

> Hi,
> 
> The vm/fs meetup will be held September 4th from 10am till 4pm (with the
> option of going longer), at the University of Cambridge.


I am interested.  A few potential topics:

- improving laptop_mode in the VFS & VM to further increase
  battery life in laptops

- repair driven design, we know what it is (Val told us), but
  how does it apply to the things we are currently working on?
  should we do more of it?

--
Politics is the struggle between those who want to make their country
the best in the world, and those who believe it already is.  Each group
calls the other unpatriotic.


Re: vm/fs meetup details

2007-07-05 Thread Peter Zijlstra
On Thu, 2007-07-05 at 06:01 +0200, Nick Piggin wrote:
> Hi,
> 
> The vm/fs meetup will be held September 4th from 10am till 4pm (with the
> option of going longer), at the University of Cambridge. 
> 
> Anton Altaparmakov has arranged a conference room for us with whiteboard
> and projector, so many thanks to him. I will send out the location and
> plans for meeting/getting there after we work out the best strategy for
> that.
> 
> At the moment we have 15 people interested. We can have a few
> more people, so if you aren't cc'ed and would like to come along please
> let me know. We do have limited space, so I'm sorry in advance if anybody
> misses out.
> 
> I'll post out a running list of suggested topics later, but they're
> really just a rough guideline. It will be a round-table kind of thing
> and long monologue talks won't be appropriate; however, some slides or
> whiteboarding to interactively introduce and discuss your idea would
> be OK.
> 
> I think we want to avoid assigning slots for specific people/topics.
> Feel free to propose anything; if it only gets a small amount of
> interest then at least you'll know who to discuss it with later :)

I'm interested in attending; the worst that could happen is that I learn
something :-)



vm/fs meetup details

2007-07-04 Thread Nick Piggin
Hi,

The vm/fs meetup will be held September 4th from 10am till 4pm (with the
option of going longer), at the University of Cambridge. 

Anton Altaparmakov has arranged a conference room for us with whiteboard
and projector, so many thanks to him. I will send out the location and
plans for meeting/getting there after we work out the best strategy for
that.

At the moment we have 15 people interested. We can have a few
more people, so if you aren't cc'ed and would like to come along please
let me know. We do have limited space, so I'm sorry in advance if anybody
misses out.

I'll post out a running list of suggested topics later, but they're
really just a rough guideline. It will be a round-table kind of thing
and long monologue talks won't be appropriate; however, some slides or
whiteboarding to interactively introduce and discuss your idea would
be OK.

I think we want to avoid assigning slots for specific people/topics.
Feel free to propose anything; if it only gets a small amount of
interest then at least you'll know who to discuss it with later :)

Thanks,
Nick