Re: System update spec proposal

2007-06-27 Thread David Woodhouse
On Tue, 2007-06-26 at 20:45 -0400, Ivan Krstić wrote:
 On Jun 26, 2007, at 7:23 PM, David Woodhouse wrote:
  because the people working on the security stuff
  let it all slide for too long and now have declared that we don't have
  time to do anything sensible.
 
 That's a cutely surreal take on things -- I really appreciate you  
 trying to defuse the situation with offbeat humor :)

Nevertheless, it's an accurate description of what happened. To avoid
ruffling feathers unnecessarily, I suppose I should have made it clear
that there is no blame to be assigned here -- the kernel hackers who
would ideally have worked on this were simply busy doing more important
things like power management, and didn't do anything more than just say
"No, VServer is not workable", which evidently wasn't taken sufficiently
seriously.

The plan of record is to use this vserver crap for as short a period of
time as possible, until we can implement something which is supportable
upstream.

-- 
dwmw2

___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel


Re: mesh network vs broadcast/multicast question

2007-06-27 Thread Dan Williams
On Wed, 2007-06-27 at 15:18 +0200, Alexander Larsson wrote:
 On Tue, 2007-06-26 at 11:34 -0400, Michail Bletsas wrote:
  The key point to remember in order to derive answers to these questions is 
  that our mesh network operates at layer-2.
  For all practical purposes, there aren't any differences between broadcast 
  and multicast frames, and every node maintains a table of recently 
  forwarded broadcast frames so that they are not broadcast multiple 
  times.
  
  One can limit the radius of the (layer-2) neighborhood by means of 
  controlling the Mesh TTL field.
 
 That is a global setting though, and not something we want applications
 to touch.
 
 Is there a way for a program to figure out which of a list of ip
 addresses are neighbours in the mesh (i.e. 1 hop away)?

Some combination of the ARP cache and the forwarding table from the
firmware would probably be able to give us that information.  For each
node that is 1 hop away in the forwarding table, check the ARP cache and
grab the IP address.  '/sbin/iwpriv msh0 fwt_list' gives you the FWT
info, though the fields are a bit hard to discern unless you've got the
driver source.

Dan


___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel


Re: XO in-field upgrades

2007-06-27 Thread Alexander Larsson
On Tue, 2007-06-26 at 17:47 -0400, Mike C. Fletcher wrote:

 I may be missing an operation or two in there, but I *think* that's the 
 intention.  That is, there's an ultimate loading level that sits outside 
 the standard root to make it all work.  That loading level could be in 
 the BIOS or on an initrd, but there does need to be a level to manage 
 the ultimate overlay-based structure AFAIK.  And at normal boot, 
 something has to set up the overlays, regardless of what overlay set is 
 loaded.  That level of access is where the update manager has to sit to 
 be able to do the updates AFAICS.  That is, to accomplish a merge or 
 remount of the core fs, we need a way to ask the overlay manager to do 
 some work for us at the update-manager level.

Oh I see. So, you update the root, but you can't update all the system
software (like the part that does the switching between roots).

In such a setup, how do you update e.g. the kernel?

 (how does that 
 work for soft links, incidentally, I gather you recreate them rather 
 than hard-linking?)  

They are recreated, same with fifos, directories, and device nodes. At
least this is how update-manifest in updatinator does it, but I don't
think there is any other way really.
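
A clone step like that is only a few lines anyway; roughly something like
this (just a sketch, not what updatinator actually does, and the function
name is made up): hardlink regular files, recreate everything else.

import os, stat

def clone_w_hardlinks(src, dst):
    # Clone a tree: hardlink regular files, recreate everything else.
    os.mkdir(dst)
    os.chmod(dst, stat.S_IMODE(os.lstat(src).st_mode))
    for name in os.listdir(src):
        s, d = os.path.join(src, name), os.path.join(dst, name)
        st = os.lstat(s)
        if stat.S_ISDIR(st.st_mode):
            clone_w_hardlinks(s, d)
        elif stat.S_ISLNK(st.st_mode):
            os.symlink(os.readlink(s), d)        # recreated, not hardlinked
        elif stat.S_ISREG(st.st_mode):
            os.link(s, d)                        # shares the inode with src
        else:                                    # fifos and device nodes
            os.mknod(d, st.st_mode, st.st_rdev)  # (device nodes need root)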

With update transactions at the filesystem level (like my jffs2 proposal)
this kind of outer manager is not needed. However, with an outer
controller I can see this working. One could even use symlinks to make
this pretty simple:

/rootfs/olpc.5 [contains image version 5]
/rootfs/olpc.6 [contains image version 6, shared hardlinks with olpc.5]
/rootfs/current -> olpc.6 [a symlink]
/rootfs/previous -> olpc.5
/usr -> /rootfs/current/usr
/var -> /rootfs/current/var
/etc -> /rootfs/current/etc

Then, to upgrade almost atomically, one just does:
clone_w_hardlinks (/rootfs/olpc.6, /rootfs/olpc.7)
apply_changes (/rootfs/olpc.7)
ln -sf /rootfs/olpc.6 /rootfs/previous [1]
ln -sf /rootfs/olpc.7 /rootfs/current [2]
rm -rf /rootfs/olpc.5

A power failure during [1], or between [1] and [2], can mess up the
previous link; and since replacing an existing symlink isn't an atomic
operation (ln -sf unlinks the old link and then creates the new one), a
failure during [2] can cause current to disappear. However, we will never
end up with both previous and current missing.
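
As a side note, the window in [2] could probably be closed by creating the
new link under a temporary name and then rename()ing it over the old one,
since rename() is atomic on Linux. A throwaway sketch, with made-up names:

import os

def set_current(rootfs, version):          # e.g. set_current("/rootfs", "olpc.7")
    tmp = os.path.join(rootfs, "current.new")
    if os.path.lexists(tmp):
        os.unlink(tmp)
    os.symlink(version, tmp)               # build the new link under a temp name
    os.rename(tmp, os.path.join(rootfs, "current"))  # atomic replace of the old link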

Using symlinks like this means the jffs2 parser in the firmware (if it
supports symlinks) will even be able to pick up the right kernel.
(Although it will always pick the current kernel.)

There is one tricky area with hardlinks though. All hard links to the
same inode share permissions and other metadata with each other. What
if the trees have different metadata for the same file? Or what if
running the current image changes metadata for a file shared between
the images?
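
A quick throwaway illustration of the sharing, with made-up file names: a
chmod through one link shows up through the other, because the mode lives
in the shared inode.

import os, stat

open("shared-file", "w").close()
os.link("shared-file", "other-image-copy")                 # the clone step hardlinks it
os.chmod("other-image-copy", 0o600)                        # change metadata via one name...
print(oct(stat.S_IMODE(os.stat("shared-file").st_mode)))   # ...the other name reports 0o600 too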

Hmm, I guess any kind of in-place change to the hardlinked files while
running the image is also mirrored in the other image. I guess this is
where the COW stuff is needed... I guess this means the
/usr -> /rootfs/current/usr symlinks don't cut it, but one needs something
heavier, like overlays or vserver+COW. Sad, it was an interesting idea.


___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel


Re: mesh network vs broadcast/multicast question

2007-06-27 Thread Michail Bletsas
You can find the (layer-2) neighbors using the 
iwpriv msh0 fwt_list_neigh n
command. Combining that information with a (persistent) ARP table can give 
you the IP addresses.

The mesh TTL field will eventually be tunable on a per packet basis.

M.

___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel


Re: System update spec proposal

2007-06-27 Thread Alexander Larsson
On Tue, 2007-06-26 at 13:55 -0400, Ivan Krstić wrote:
 Software updates on the One Laptop per Child's XO laptop
 

First some stray comments:

 1.4. Design note: rsync scalability
 ---
 
 rsync is a known CPU hog on the server side. It would be absolutely
 infeasible to support a very large number of users from a single rsync
 server. This is far less of a problem in our scenario for three reasons:

What about CPU hogging on the school server? That seems likely to be far
less beefy than the centralized server.

 The most up-to-date bundle for each activity in the set is accessed, and
 the first several kilobytes downloaded. Since bundles are simple ZIP
 files, the downloaded data will contain the ZIP file index which stores
 byte offsets for the constituent compressed files. The updater then
 locates the bundle manifest in each index and makes an HTTP request with
 the respective byte range to each bundle origin. At the end of this
 process, the updater has cheaply obtained a set of manifests of the
 files in all available activity updates.

Zip files have the file index (the central directory) at the end of the
file, not the beginning, so grabbing the first several kilobytes won't
get you the index.
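
So any byte-range trick has to read from the end of the file instead.
Roughly, and only as a sketch of the idea (not updater code; the URL is
made up):

import struct, urllib.request

def fetch_range(url, range_spec):
    req = urllib.request.Request(url, headers={"Range": "bytes=" + range_spec})
    return urllib.request.urlopen(req).read()

def fetch_zip_index(url):
    # The End Of Central Directory record lives within the last 64k of the file.
    tail = fetch_range(url, "-65536")
    pos = tail.rfind(b"PK\x05\x06")          # EOCD signature
    assert pos >= 0, "not a zip file?"
    size, offset = struct.unpack("<LL", tail[pos + 12:pos + 20])
    # Second range request pulls just the central directory (the index).
    return fetch_range(url, "%d-%d" % (offset, offset + size - 1))

index = fetch_zip_index("http://example.org/activities/foo.xo")
print(len(index), "bytes of ZIP index fetched")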

Now for comments on the general approach:

First of all, there seems to be an exceptional amount of confusion as to
exactly how some form of atomic updating of the system will happen. Some
people talk about overlays, others about vserver, and I myself have thrown
in the filesystem transaction idea. I must say this area seems very
uncertain, and I worry that this will result in the implementation of
none of these options...

But anyway, the exact way these updates are applied is quite orthogonal
to how you download the bits required for the update, or how you discover
new updates. So far I've been mainly working on this part, in order to
avoid blocking on the confusion I mentioned above.

As to using rsync for the file transfers: this seems worse than the
trivial manifest + sha1-named files on http approach I've been working
on, especially with the optional usage of bsdiff I just committed. We
already know (have to know, in fact, so we can strongly verify them) the
contents of both the laptop and the target image. To drop all this
knowledge and have rsync reconstruct it at runtime seems both a waste
and a possible performance problem (e.g. CPU and memory overload on the
school server, and rsync re-hashing files on the laptop using up
battery).
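
For reference, the manifest + sha1-named-files idea is basically the
following (a sketch only, not the actual updatinator code; the manifest
format and URLs are made up):

import hashlib, os, urllib.request

def sha1_of(path):
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def apply_manifest(manifest_url, blob_base_url, root):
    # manifest: one "<sha1> <relative path>" line per file in the target image
    manifest = urllib.request.urlopen(manifest_url).read().decode()
    for line in manifest.splitlines():
        digest, relpath = line.split(None, 1)
        dest = os.path.join(root, relpath)
        if os.path.exists(dest) and sha1_of(dest) == digest:
            continue                          # already have the right contents
        # Blobs are published under their own sha1, so they are trivially verifiable.
        data = urllib.request.urlopen(blob_base_url + "/" + digest).read()
        assert hashlib.sha1(data).hexdigest() == digest
        os.makedirs(os.path.dirname(dest) or ".", exist_ok=True)
        with open(dest, "wb") as f:
            f.write(data)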

You talk about the time it takes to implement another approach, but it's
really quite simple, and I have most of it done already. The only hard
part is the atomic applying of the bits. Also, there seems to be
development needed for the rsync approach too, as there is e.g. no
support for xattrs in the current protocol.

I've got the code for discovering local instances of upgrades and
downloading them already working. I'll try to make it do an actual
(non-atomic, unsafe) upgrade of an XO this week.

I have a general question on how this vserver/overlay/whatever system is
supposed to handle system files that are not part of the system image,
but still exist in the root file system. For instance, where are files
like /var/log/messages or /dev/log stored? Are they mixed in with the
other system files? If so, then rolling back to an older version will
give you e.g. your old log files back. Also, that could complicate the
usage of rsync: if you use --delete then it would delete these files (as
they are not on the server).

Also, your document contains a lot of comments about what will be in FRS
and what will not. Does this mean you're working on actually developing
this system for FRS?

___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel


Re: System update spec proposal

2007-06-27 Thread Alexander Larsson
On Tue, 2007-06-26 at 15:03 -0400, Mike C. Fletcher wrote:

 My understanding here is that Alex's system is currently point-to-point, 
 but that we don't yet have any way to distribute it on the mesh?  That 
 is, that we are looking at a research project to determine how to make 
 it work in-the-field using some form of mesh-based discovery protocol 
 that's going to try to optimise for connecting to local laptops.  I 
 personally don't care which way we distribute, but I'm wary of having to 
 have some mesh-network-level hacking implemented to provide discovery of 
 an update server.

I'm not sure what you mean here exactly. Discovery is done using avahi,
a well-known protocol which we are already using in many places on the
laptop. The actual downloading of files uses http, which is a well-known
protocol with many implementations.

The only thing I'm missing atm is a way to tell which IP addresses to
prefer downloading from since they are close. This information is
already available in the mesh routing tables in the network driver (and
possibly the ARP cache), and it's just a question of getting this info
and using it to drive which servers to pick for downloading.
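
For the discovery side, something along these lines is all that's needed
(a sketch; the service type is made up, and the real code would presumably
talk to avahi directly rather than shelling out to avahi-browse):

import subprocess

def find_update_servers(service="_olpc_update._tcp"):    # made-up service type
    # -r resolve, -t terminate once the cache is dumped, -p parsable output
    out = subprocess.run(["avahi-browse", "-rtp", service],
                         capture_output=True, text=True).stdout
    servers = []
    for line in out.splitlines():
        # Resolved entries look like "=;iface;proto;name;type;domain;host;address;port;txt"
        fields = line.split(";")
        if fields[0] == "=" and len(fields) > 8:
            servers.append((fields[7], int(fields[8])))   # (address, port)
    return servers

print(find_update_servers())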

  Basically aside from the vserver bits, which no one has seen, I don't
  see a particular advantage to using rsync.  In fact, I see serious
  downsides since it misses some of the key critical advantages of using
  our own tool, not the least of which is that we can make our tool do what
  we want, and with rsync you're talking about changing the protocols.

 Hmm, interestingly I see using our own tool as a disadvantage, not a 
 huge one, but a disadvantage nonetheless, in that we have more untested 
 code on the system (and we already have a lot), and in this case, in a 
 critical must-never-fail system.  For instance, what happens if the user 
 is never connected to another XO or school server, but only connects to 
 a (non-mesh) WiFi network?  Does the mesh-broadcast upgrade discovery 
 protocol work in that case?

Avahi works fine for these cases too, of course, since it was originally
created for normal networks. However, if you never come close to another
OLPC machine, then we won't find a machine to upgrade against. It's quite
trivial to make it pull from any http server on the net, but that has to
be either polled (which I don't like) or initiated manually (which might
be fine).


___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel


Re: System update spec proposal

2007-06-27 Thread Mike C. Fletcher
Alexander Larsson wrote:
 On Tue, 2007-06-26 at 15:03 -0400, Mike C. Fletcher wrote:
   
...
 I'm not sure what you mean here exactly. Discovery is done using avahi,
 a well known protocol which we are already using in many places on the
 laptop. The actual downloading of file uses http, which is a well known
 protocol with many implementations.

 The only thing i'm missing atm is a way to tell which ip addresses to
 prefer downloading from since they are close. This information is
 already availible in the mesh routing tables in the network driver (and
 possibly the arp cache), and its just a question of getting this info
 and using it to drive what servers to pick for downloading.
   
Ah, somehow in the discussions I'd come under the impression that the 
only way the system would be allowed to work was a single-hop network 
link on the mesh.  If we already have the information and can always 
have a fallback, even if it means going a number of hops across the 
network, then that concern goes away.  Looking back I see that was 
actually a discussion on bandwidth characteristics that wasn't intended 
to imply an absolute requirement.  I'm reasonably happy with the approach 
of using Avahi and only using the network topology to inform the 
decision on which server to use.

That said, I would be more comfortable if the fallback included a way 
for the laptop to check a well-known machine every X period (e.g. in 
Ivan's proposal) and, if there's no locally discovered source, use a 
publicly available source as the HTTP source by default.
 Hmm, interestingly I see using our own tool as a disadvantage, not a 
 huge one, but a disadvantage nonetheless, in that we have more untested 
 code on the system (and we already have a lot), and in this case, in a 
 critical must-never-fail system.  For instance, what happens if the user 
 is never connected to another XO or school server, but only connects to 
 a (non-mesh) WiFi network?  Does the mesh-broadcast upgrade discovery 
 protocol work in that case?
 

 Avahi works fine for these cases too. Of course, since it was originally
 created for normal networks. However, if you never come close to another
 OLPC machine, then we won't find a machine to upgrade against. 
Sorry, should have been clearer that the latter case (not coming close to 
another OLPC) was the one I was concerned about.  I realise such 
situations will represent only a small percentage of children, but a 
small percentage of 50,000,000 or so users is a huge number of people to 
have to teach how to manually upgrade their machines.
 Its quite
 trivial to make it pull from any http server on the net, but that has to
 be either polled (which I don't like) or initiated manually (which might
 be fine).
   
I'd advocate that the piece of Ivan's proposal wherein a central 
mechanism allows even a completely isolated machine to find and update 
automatically is a good idea.  It's a fairly trivial proposal that way: 
in the same check to see if we've been stolen, download 4 bytes (or so) 
telling us the currently available version.  If we can't get it locally 
after X period, try to get it from the server (using whatever protocol, 
be it your own, rsync, BitTorrent or Telepathy (cute, I was actually 
writing telepathy there and then realised it was the name of our 
networking library)).  That is, I'd like to see a robust, automatic 
mechanism for triggering the laptop's search, and a fallback position 
that allows for resolution even in the isolated-user case that doesn't 
require user intervention.
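
Something as small as this would cover the isolated case (a sketch; the
URL and payload format are made up):

import urllib.request

CURRENT_BUILD = 466                                      # whatever this laptop is running
VERSION_URL = "http://updates.laptop.org/latest-build"   # made-up URL

def upstream_build():
    # The payload is just the latest build number as ASCII text, a few bytes.
    return int(urllib.request.urlopen(VERSION_URL).read().decode().strip())

if upstream_build() > CURRENT_BUILD:
    print("update available; prefer a local source, else fall back to the server")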

Enjoy,
Mike

-- 

  Mike C. Fletcher
  Designer, VR Plumber, Coder
  http://www.vrplumber.com
  http://blog.vrplumber.com

___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel


Re: System update spec proposal

2007-06-27 Thread Dan Williams
On Wed, 2007-06-27 at 17:42 +0200, Alexander Larsson wrote:
 On Tue, 2007-06-26 at 15:03 -0400, Mike C. Fletcher wrote:
 
  My understanding here is that Alex's system is currently point-to-point, 
  but that we don't yet have any way to distribute it on the mesh?  That 
  is, that we are looking at a research project to determine how to make 
  it work in-the-field using some form of mesh-based discovery protocol 
  that's going to try to optimise for connecting to local laptops.  I 
  personally don't care which way we distribute, but I'm wary of having to 
  have some mesh-network-level hacking implemented to provide discovery of 
  an update server.
 
 I'm not sure what you mean here exactly. Discovery is done using avahi,
 a well known protocol which we are already using in many places on the
 laptop. The actual downloading of file uses http, which is a well known
 protocol with many implementations.
 
 The only thing i'm missing atm is a way to tell which ip addresses to
 prefer downloading from since they are close. This information is
 already availible in the mesh routing tables in the network driver (and
 possibly the arp cache), and its just a question of getting this info
 and using it to drive what servers to pick for downloading.

So, like Michail said, do something like:

n = 0
while (True) {
    buf = output of (iwpriv msh0 fwt_list_neigh n)
    if (buf == (null))
        break;  // all done

    parse buf into fields
    hwaddr = parsed[0]  // Grab the 'ra' field (1st one)
    ip4addr = lookup_hwaddr_in_arp(hwaddr)
    do something with ip4addr
    n++;
}
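
In Python that might look roughly like the following (an untested sketch;
the exact fwt_list_neigh output format depends on the driver, so check the
README below before trusting the parsing):

import subprocess

def arp_table():
    # /proc/net/arp columns: IP address, HW type, Flags, HW address, Mask, Device
    table = {}
    with open("/proc/net/arp") as f:
        for line in f.readlines()[1:]:
            fields = line.split()
            table[fields[3].lower()] = fields[0]        # hwaddr -> IPv4 address
    return table

def mesh_neighbor_ips(iface="msh0"):
    arp, ips, n = arp_table(), [], 0
    while True:
        out = subprocess.run(["iwpriv", iface, "fwt_list_neigh", str(n)],
                             capture_output=True, text=True).stdout
        # iwpriv replies look like "msh0  fwt_list_neigh:<entry>"; the exact
        # entry format depends on the driver, so this parsing is a guess.
        entry = out.split(":", 1)[-1].strip() if ":" in out else ""
        if not entry:
            break                                       # no more neighbors
        hwaddr = entry.split()[0].lower()               # the 'ra' field
        if hwaddr in arp:
            ips.append(arp[hwaddr])
        n += 1
    return ips

print(mesh_neighbor_ips())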

Look on the 'olpc' branch of the libertas driver here:

http://git.infradead.org/?p=libertas-2.6.git;a=blob;f=drivers/net/wireless/libertas/README;hb=olpc
http://git.infradead.org/?p=libertas-2.6.git;a=blob;f=drivers/net/wireless/libertas/ioctl.c;hb=olpc

The README has a description of the command, and the ioctl.c has the
implementation.  Just search for the string neigh and you'll find it.

Dan

   Basically aside from the vserver bits, which no one has seen, I don't
   see a particular advantage to using rsync.  In fact, I see serious
   downsides since it misses some of the key critical advantages of using
   our own tool not the least of which is that we can make our tool do what
   we want and with rsync you're talking about changing the protocols.
 
  Hmm, interestingly I see using our own tool as a disadvantage, not a 
  huge one, but a disadvantage nonetheless, in that we have more untested 
  code on the system (and we already have a lot), and in this case, in a 
  critical must-never-fail system.  For instance, what happens if the user 
  is never connected to another XO or school server, but only connects to 
  a (non-mesh) WiFi network?  Does the mesh-broadcast upgrade discovery 
  protocol work in that case?
 
 Avahi works fine for these cases too. Of course, since it was originally
 created for normal networks. However, if you never come close to another
 OLPC machine, then we won't find a machine to upgrade against. Its quite
 trivial to make it pull from any http server on the net, but that has to
 be either polled (which I don't like) or initiated manually (which might
 be fine).
 
 
 ___
 Devel mailing list
 Devel@lists.laptop.org
 http://lists.laptop.org/listinfo/devel

___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel


Re: SW Dev meeting minutes, 6/26/07

2007-06-27 Thread Christopher Blizzard
On Tue, 2007-06-26 at 22:14 -0400, Kim Quirk wrote:
 
 * Ivan discussed the activation feature - it fell behind as we are
 trying to sort out the updates; but he believes he can still get a
 minimum activation feature into the release over the next few days. He
 will touch base with J5 and Blizzard on changes that are coming or
 might be done in userland init so that won't affect activation. He is
 also waiting on hardware to test crypto and keys. Hopefully he and
 Mitch will make progress on this next week when Mitch is at 1CC. 

I thought that activation was located entirely in firmware and there
wouldn't be any changes to the OS to support activation?

--Chris

___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel


Re: System update spec proposal

2007-06-27 Thread Christopher Blizzard
On Wed, 2007-06-27 at 12:26 -0400, Mike C. Fletcher wrote:
 
 That said, I would be more comfortable if the fallback included a way 
 for the laptop to check a well-known machine every X period (e.g. in 
 Ivan's proposal) and if there's no locally discovered source, use a 
 publicly available source as the HTTP source by default 

I think that you're talking about the mail from Scott, not Ivan.  And I
think that we'll do something like that, yeah.  That's one of the
easiest parts of the update system.  (Also one of the worst mistakes if
you get it wrong, a la the Hour of Terror:
http://www.justdave.net/dave/2005/05/01/one-hour-of-terror/ but that's
beside the point.)

--Chris

___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel


Re: System update spec proposal

2007-06-27 Thread Christopher Blizzard
On Wed, 2007-06-27 at 12:39 -0400, Mike C. Fletcher wrote:
 
 Could we get a summary of what the problem is: 

The main objection to vserver from all the kernel hackers (and those of
us that have to support them!) is that it's a huge patch that touches
core kernel bits and there are no plans to get it upstream.  Yes, it's
used in a lot of interesting places successfully, but that doesn't mean
it's a supportable-over-the-long-term solution.  Scale has nothing to do
with long-term supportability.  And these laptops have to be supported
for at least 5 years.

This isn't a new discussion; it's been going on for months and months.
Just quietly, that's all.

--Chris

___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel


Re: System update spec proposal

2007-06-27 Thread Christopher Blizzard
On Wed, 2007-06-27 at 17:31 +0200, Alexander Larsson wrote:
 
 I have a general question on how this vserver/overlay/whatever system
 is
 supposed to handle system files that are not part of the system image,
 but still exist in the root file system. For instance,
 take /var/log/messages or /dev/log? Where are they stored? Are they
 mixed in with the other system files? If so, then rolling back to an
 older version will give you e.g. your old log files back. Also, that
 could be complicating the usage of rsync. If you use --delete then it
 would delete these files (as they are not on the server).
 

Just a note about these particular files.  I don't think that in the
final version we're going to be running a kernel-log or syslog daemon.
We're running them right now because they are useful for debugging, but I
don't want those out there in the field taking up memory and writing to
the flash when they don't have to be.  I suspect that for most users
they will have very little use.

--Chris

___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel


Re: mesh network vs broadcast/multicast question

2007-06-27 Thread Christopher Blizzard
On Wed, 2007-06-27 at 11:22 -0400, Michail Bletsas wrote:
 
 The mesh TTL field will eventually be tunable on a per packet basis.
 

Yeah, and we really want that.  What's left to make that happen?
(Frankly, I thought it was done already.)

--Chris

___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel


Re: System update spec proposal

2007-06-27 Thread Chris Ball
Hi,

I have a general question on how this vserver/overlay/whatever
system is supposed to handle system files that are not part of the
system image, but still exist in the root file system. For
instance, take /var/log/messages or /dev/log? Where are they
stored? Are they mixed in with the other system files? 

Just a note about these particular files.  I don't think that on
the final version that we're going to be running a kernel or syslog
daemon. We're running them right now because they are useful for
debugging but I don't want those out there in the field taking up
memory and writing to the flash when they don't have to be.  I
suspect that for most users they will have very little use.

We're currently using a tmpfs for these (/var/log, /var/run, plus others)
so they aren't being written to the flash at all.  I'd rather have us
keep doing that than turn them off, since we're throwing away useful
in-the-field debugging information (for anyone who wants to help fix
laptops remotely, not just us) if we turn them off.

- Chris.
-- 
Chris Ball   [EMAIL PROTECTED]
___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel


Test Group release notes, build 466

2007-06-27 Thread Kim Quirk

Test Group (and other interested parties),

As we are getting the first few builds for Trial-2, we have started an
informal wiki page of notes (and bugs) which don't need to be written up by
each person. If you are trying to test something specific (like an
activity), and you are pretty sure that it is a reproducible bug in that
component, please go ahead and write up a bug in trac.

If, on the other hand, you really don't know if it is a problem with Sugar,
or the X driver, or something in the Journal or the mesh that may be affecting
all activities, then add your note to this wiki page and we'll try to get a
resolution.

http://wiki.laptop.org/index.php?title=Test_Group_Release_Notes

Developers -- if you know of an issue and are working on it, it would be
great if you wanted to add a note so the test group knows this is being
addressed.

Within a couple of days (when the build is working a little more), we'll
get rid of this page and just ask everyone to put bugs into trac.

Current Trial-2 build is 466:
http://olpc.download.redhat.com/olpc/streams/development/build466/

Thanks!
Kim
___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel


Re: System update spec proposal

2007-06-27 Thread Ivan Krstić
On Jun 27, 2007, at 1:50 PM, Christopher Blizzard wrote:
 I think that you're talking about the mail from Scott, not Ivan.

It was my mail; my proposal explicitly talks about this.

--
Ivan Krstić [EMAIL PROTECTED] | GPG: 0x147C722D

___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel


rsync benchmarking

2007-06-27 Thread C. Scott Ananian
I wrote a quick ~60 line script to do non-recursive rsyncs, a directory
at a time.  Actually, it's a little smarter than that: it generates
manifests for each directory, and syncs the tree by first syncing the
root (and the root's manifest).  It then checks the directory hashes
in the root's manifest against the current contents of each directory's
manifest, and recurses only if they don't match (i.e., if the directory
has been updated).
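
The recursion logic is roughly the following (a sketch, not the actual
script; the manifest format shown here is made up):

import hashlib, os, subprocess

def manifest_hash(directory):
    # A directory's manifest stands in for its whole subtree: if its hash is
    # unchanged, nothing below that directory changed either.
    try:
        with open(os.path.join(directory, ".manifest"), "rb") as f:
            return hashlib.sha1(f.read()).hexdigest()
    except IOError:
        return None

def sync_tree(remote, local):
    # Remember the subtree hashes we already have, then rsync this one
    # directory non-recursively (which also fetches its fresh .manifest).
    old = {name: manifest_hash(os.path.join(local, name))
           for name in (os.listdir(local) if os.path.isdir(local) else [])}
    subprocess.run(["rsync", "-dlpt", remote + "/", local + "/"])

    # Recurse only into subdirectories whose manifest hash changed.
    with open(os.path.join(local, ".manifest")) as f:
        for line in f:
            digest, name = line.split()
            sub = os.path.join(local, name)
            if os.path.isdir(sub) and digest != old.get(name):
                sync_tree(remote + "/" + name, sub)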

Anyway, here are some quick stats, for upgrades from build 464 to 465 to 466:

464-to-465:
Full contents of tree: 429,302,610 bytes
rsync --whole-file: sent 32,034 bytes; received 13,153,813 bytes.
Standard rsync: sent 96,720 bytes; received 10,047,524 bytes. 9.0s user, 64.9s real.
Rsync-with-manifest: sent 79,192 bytes; received 9,139,515 bytes. 4.3s user, 11.5s real.

465-to-466:
Full contents of tree: 458,828,206 bytes
rsync --whole-file: sent 21,990 bytes; received 25,826,516 bytes.
Standard rsync: sent 141,948 bytes; received 23,981,423 bytes. 14.0s user, 68.5s real.
Rsync-with-manifest: sent 125,158 bytes; received 23,171,430 bytes. 10.4s user, 28.5s real.

Using manifests saves a bit of bandwidth but a lot of memory and disk
I/O.  I'll put the python scripts I used to create, verify, and sync
with manifests online soon.
 --scott

-- 
 ( http://cscott.net/ )
___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel


Re: Pagemap

2007-06-27 Thread Jonathan Corbet
[Hi, Jordan, from across the room]

 They just talked about pagemap from Matt Mackall during a BoF at OLS.
 This seems like something useful we can use to measure our memory 
 usage - in particular, it is screaming for tinderbox integration.. :)
 
 http://lkml.org/lkml/2007/4/3/405

It's good stuff.  See also http://lwn.net/Articles/230975/ for more
info.

jon
 
___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel


Re: System update spec proposal

2007-06-27 Thread Ivan Krstić
On Jun 27, 2007, at 2:57 AM, David Woodhouse wrote:
 Nevertheless, it's an accurate description of what happened.

Let's agree to disagree. In any case, it doesn't matter at this  
point. We have work to do.

--
Ivan Krstić [EMAIL PROTECTED] | GPG: 0x147C722D

___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel