Re: Network Traffic and Multicast

2007-06-26 Thread Miguel Álvarez

Your questions are quite broad, so I fear my answers may be proportionately
vague... if you want more details, please specify:

On 6/26/07, Alfonso de la Guarda [EMAIL PROTECTED] wrote:


In reference to the message from Alexander, I will post the original
request.

Hi.

After EduKT and AmiGO, we are working on ITv, which has been developed for
any OS except the XO because of its use of 2 very dynamic libraries: tubes and
gst (for regular linux/windows we are using vlc/twisted). Next week
we will launch a video demo of ITv (Interactive Television), which allows 1
video channel and 3 real-time data channels; however, I need to know about
the performance of the network,



Are you planning on using 'raw multicast' or on accessing the multicast
tubes in Salut? There is also a third option, which would be
implementing/using some sort of reliable multicast (I worked on this a while
ago). I will speak mainly about the first two possibilities; get back to me if
you want to discuss the third one in more detail.

bandwidth usage for multicast,




There are no particular limitations for multicast with respect to unicast
traffic, but remember that (if you are using raw multicast) packets aren't
acknowledged, so the more traffic you send, the more likely it is that some
will get dropped.
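
The fire-and-forget nature of raw multicast is visible right at the socket
level. A minimal sketch, assuming a hypothetical group address and port
(these are illustrative, not from the thread):

```python
import socket
import struct

MCAST_GROUP = "224.0.0.251"   # illustrative multicast group
MCAST_PORT = 5007             # illustrative port

def make_sender(ttl=1):
    """UDP sender for multicast; the TTL bounds how many hops packets travel."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, ttl)
    return s

def make_receiver(group=MCAST_GROUP, port=MCAST_PORT):
    """UDP receiver that joins the multicast group on all interfaces."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("", port))
    mreq = struct.pack("4sl", socket.inet_aton(group), socket.INADDR_ANY)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return s

# Sending is fire-and-forget: sendto() returns once the datagram is queued,
# whether or not any receiver ever sees it -- there is no acknowledgment.
# make_sender().sendto(b"frame 1", (MCAST_GROUP, MCAST_PORT))
```

Any reliability (retransmission, pacing) has to be layered on top by the
application, which is exactly what a reliable-multicast scheme would do.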

dead times over a stream,




Do you mean connection delays to a multicast channel? We haven't done testing
of those, but while working with multicast it seemed to react similarly to
'wired multicast' scenarios: more flooding of packets at the beginning, and
progressive path building.

coverage (ideal range)




Of the mesh? With one channel? Various channels? I guess you have read
about the range tests in Australia, but in a situation with heavy radio
interference the workable range is heavily reduced. In most target cases you
should count on some 100 meters per hop, and it seems that video streaming
has been conducted over up to 3 hops. 'Real world testing' of such heavy
loads has started recently, though, so I have no data about the most current
performance limit.

of the mesh... the goal is to optimize traffic consumption and reduce
latency issues.



I'd rather you try to limit the number of 'delay-sensitive' communications,
and try to reduce the bandwidth usage regardless of the total available
data rate. This way, you make sure that your application can work while
other kinds of traffic are also being relayed.

Thanks,




Cheers,
Miguel

--



Alfonso de la Guarda
ICTEC SAC
   www.cosperu.com
www.delaguarda.info
Telef. 97550914
  4726906


___
Devel mailing list
Devel@lists.laptop.org
http://lists.laptop.org/listinfo/devel





--
violence is the last refuge of the incompetent
--isaac asimov


Re: mesh network vs broadcast/multicast question

2007-06-26 Thread Dan Williams
On Tue, 2007-06-26 at 15:22 +0200, Alexander Larsson wrote:
 Hi dan, I have some questions on the mesh network for the updates work.
 
 What is the broadcast domain for a laptop on the mesh? (I.e., how far
 are broadcast messages sent?) Is it just the set of nodes reachable from
 your machine, or are broadcasts forwarded by the mesh?

Michail?

 This affects how avahi works, as avahi only uses local broadcasts.
 
 Another question is on multicasts. How many hops do multicast packets
 live on the mesh? And are there any other limits than nr of hops to
 avoid multicast loops? (Otherwise it's likely that nodes see multicast
 packets many times, especially in dense networks.)

Michail?

 I read on the olpc wiki about the mesh using three different channels.
 My understanding on this is that these pretty much generate three
 separate mesh networks that are routed between by the school server, and
 that laptops can end up on any channel. 

Right.  We have only one radio in the laptops, and therefore we can only
tune to one channel at a time.  However, having all laptops on one
channel would be detrimental from a bandwidth perspective, since all
laptops share the bandwidth of the channel, which isn't large (54Mbps,
but that's certainly not seen in the real world).

 Does this mean that two laptops next to each other can be on two
 completely different networks? This means broadcasts (and by extension

Yes.  But if they are at school, then you'll be able to see that person
because you'll both be connected to the school server.  We only
(ideally) run into problems when one or the other laptop can't connect
to the school server, and they are on different channels.

 Avahi) cannot reach from one laptop to the other. Is this true? And are
 we doing something to work around this (like letting users manually
 switch channels)?

We are letting users manually search for friends, which is a fairly
lengthy process that involves jumping to each channel, searching for
friends and activities using mDNS, then to the next channel, etc.
You'll be warned if you do something that will make you jump to a
different channel, and therefore break stuff.

Dan




Ivan's XO Field Upgrade Proposal

2007-06-26 Thread C. Scott Ananian
Ivan dropped by 1cc tonight, and I was able to squeeze the details of
*his* field upgrade proposal out of him.  As I haven't yet seen him
email this to the list, I'll try to state it for him.  Hopefully he
can then give a diff against my version, which will save him time.

The XO already needs to call home to foobar.laptop.org as part of
the antitheft system.  In addition to the bits which tell it you're
not stolen, the response also includes the version number of the
latest system software.  I assume the version number is a simple
integer.

If the laptop's version is not up to date, it tries to get the bits
from the school server.  If it sees the school server, but the school
server doesn't have the bits (yet), it backs off and retries later.  If
it doesn't have a school server, or the retry on the school server
fails, it gets the bits directly from Cambridge.

The 'get the bits' phase is as simple as practical: rsync.  The school
server maintains a complete image of the XO filesystem, possibly in a
small number of versions.  The XO just rsyncs with the school server
to get the updated files.  This magically does the proper
binary-differencing thing, and is robust against connection failure,
data corruption, etc.  If it can't get the bits from the school
server, it just rsyncs directly against a foo.laptop.org machine in
Cambridge.
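
The school-server-first, upstream-fallback logic described above is just
"try sources in order". A sketch of a driver, with the actual rsync
invocation stubbed out so it can be tested in isolation (the hostname
`schoolserver.local` and the module path `xo-image` are illustrative
assumptions, not from the proposal):

```python
import subprocess

def rsync_image(source, dest="/upgraded-root"):
    """Run rsync against one update source; True on success.
    The 'xo-image' module path is an illustrative assumption."""
    cmd = ["rsync", "-a", "--partial", f"rsync://{source}/xo-image/", dest]
    return subprocess.call(cmd) == 0

def fetch_update(sources, runner=rsync_image):
    """Try each update source in order; return the one that worked, or None.
    rsync keeps partially transferred data (--partial), so a retry after a
    connection failure resumes rather than restarts the transfer."""
    for source in sources:
        if runner(source):
            return source
    return None

# e.g. fetch_update(["schoolserver.local", "foo.laptop.org"])
```

The robustness claim in the email falls out of rsync's design: a failed or
interrupted run can simply be re-invoked against the next (or same) source.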

We use vserver copy-on-write to do the atomic upgrade.  There is a
'fake root' context (which I'll call /fakeroot here) which has all the
files in the filesystem.  Activity containers etc. are created out of
/fakeroot.  The upgrade process starts out with a copy-on-write clone
of /fakeroot, which it rsyncs to get the new filesystem.  We then
either:
  a) save this new tree as /upgraded-root (or some such) and on reboot
swap /fakeroot and /upgraded-root, or...
  b) do some sort of pivot_root to swap these trees without rebooting.
 This latter approach has more technical risk, but is still A Simple
Matter Of Software and permits live upgrades.
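
Option (a) amounts to building the new tree alongside the old one and
exchanging the two names, so a reboot picks up either the fully-old or the
fully-new tree. A filesystem-level sketch (directory names follow the
email's /fakeroot and /upgraded-root; the three-rename dance is one way to
do the swap, not necessarily what vserver would do):

```python
import os

def swap_roots(fakeroot, upgraded, scratch=None):
    """Exchange the current root tree and the upgraded tree by renaming.
    Three renames within one filesystem; a crash mid-way leaves at most
    one rename for the next boot to detect and redo."""
    scratch = scratch or fakeroot + ".old"
    os.rename(fakeroot, scratch)      # /fakeroot      -> /fakeroot.old
    os.rename(upgraded, fakeroot)     # /upgraded-root -> /fakeroot
    os.rename(scratch, upgraded)      # /fakeroot.old  -> /upgraded-root
```

Each rename is atomic on POSIX filesystems, which is what keeps the swap
robust against interruption.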

Some notes:
 a) rsync scales, as demonstrated by rsync.kernel.org.  We can use
load-sharing, anycast addresses, etc if necessary (if it turns out
that very many laptops are not getting updates from a school server).
The important thing is that this complexity is on our side and is not
propagated to the XO software.
 b) This completely punts on XO-to-XO upgrades.  This complexity is
not necessary for version 1.0, and (given the efficient rsync
protocol) doesn't buy you all that much.  It can be added later,
either via a different mechanism or by rsync between machines.
 c) This proposal has no way to push upgrades.  Again, this can be
added later (e.g., a signed broadcast packet which says 'upgrade now to
version N'.  The actual upgrade is then identical.)
 d) The filesystem can (should) contain a manifest, as described in
Alex's proposal, which is signed and can be used to
authenticate/validate the upgrade.  The manifest is rsynced along with
the rest of the files, and then checked.  We also use rsync-over-ssh
with fixed keys to ensure that we're only rsyncing with 'real' update
servers.

Scott's comments (Ivan's not heard all of these, he might not agree):
 a) I enthusiastically recommend this approach.  It seems to be the
simplest thing with reasonable performance that will work.  It avoids
reinventing the wheel, and it seems to have very few dependencies
which might break it.  Improvements can be made to the rsync protocol
if better efficiency is desired, and that work will help not only OLPC
but also the (myriad) users of rsync.
 b) For simplicity, I favor (re)using rsync in other places where we
need synchronization and/or file distribution.  For example, I think
that the school servers should use rsync in order to get their copies of the
XO filesystem.
 c) No extra protocols or dependencies.  rsync should be statically
linked.  'School server doesn't have version N' should be read as
'rsync to school server fails', rather than involving some extra
protocol or query.  I'd like to see the driver program written in a
compiled language and statically linked as well, to provide robustness
in case an upgrade breaks python (say).
 e) The rsync protocol is interactive. There are more round-trips than
in other proposals, but the process is robust: if it fails, it can
just be restarted and it will magically continue where it left off.
 f) We can do better than rsync, because we know what files are on
the other side, and can use this to send better diffs.  This
improvement could be added to rsync directly, rather than creating
special XO-only code.  (Option to preseed rsync with a directory of
files known to be on the remote machine.)
 g) I believe that we can use plain old hard links when we do the
rsync, instead of requiring any fancy vserver stuff.  rsync will break
the link appropriately when it needs to modify a file (as long as the
--inplace option isn't given).  This probably breaks a critical edge
during development.
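
The hard-link trick in (g) works because rsync (without --inplace) writes a
modified file to a temporary name and renames it into place, which detaches
that name from the shared inode; unchanged files keep sharing storage. A
sketch of the mechanism using plain os calls (the helper names are mine,
not rsync's):

```python
import os

def hardlink_tree(src, dst):
    """Clone a tree using hard links: near-instant, near-zero extra space."""
    for root, dirs, files in os.walk(src):
        rel = os.path.relpath(root, src)
        target = os.path.join(dst, rel)
        os.makedirs(target, exist_ok=True)
        for name in files:
            os.link(os.path.join(root, name), os.path.join(target, name))

def update_file(path, data):
    """Write-to-temp-then-rename, as rsync does without --inplace: the
    original inode (and any other hard links to it) is left untouched."""
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        f.write(data)
    os.replace(tmp, path)
```

With --inplace, by contrast, rsync would write through the shared inode and
corrupt the "original" tree as well, which is why the option must be avoided.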

OK, that's all folks.  Discuss!
  

Re: Ivan's XO Field Upgrade Proposal

2007-06-26 Thread Dan Williams
On Tue, 2007-06-26 at 12:16 -0400, C. Scott Ananian wrote:
 On 6/26/07, Dan Williams [EMAIL PROTECTED] wrote:
  Downside of this is, as Alex pointed out, it'll load the mesh a _lot_
  more than XO-XO updates.
 
 Not necessarily.  Rsync is pretty efficient: we're still basically
 distributing just (blockwise) diffs.  And we can always do XO-to-XO
 later: the important thing is to get a rock-solid basis.  The selling
 point (to me) is the simplicity.  Like I said: it's the simplest thing
 that works.

I'm not arguing simplicity; just that we have to be aware of the
implications of having lots of XOs pulling from the server with some
overlap with this method, but we don't with XO-XO.  We just have to
make the tradeoffs clear, and understand them.

Dan

 I can do some benchmarks if people actually need to see numbers.
  --scott
 



Re: Ivan's XO Field Upgrade Proposal

2007-06-26 Thread C. Scott Ananian
On 6/26/07, Mike C. Fletcher [EMAIL PROTECTED] wrote:
   g) I believe that we can use plain old hard links when we do the
  rsync, instead of requiring any fancy vserver stuff.  rsync will break
  the link appropriately when it needs to modify a file (as long as the
  --inplace option isn't given).  This probably breaks a critical edge
  during development.
 
 I'm not actually sure what vserver is beyond a chroot-jail-like
 environment using an overlay file system, but assuming that's the basic
 idea, the rationale here is that we want to allow the COW file system
 overlay to be built by the rsync and only swap it into the root file
 system at some later time.  At the *least* after the image has been
 verified!

Yes.  In Ivan's full proposal (which he promises to send out RSN) he
wants to vserver-jail the upgrade process to avoid giving the upgrader
more privileges than it needs, which is entirely reasonable.  But
since there are few people here who grok vserver (at the present), I
was just suggesting that we prototype the system by running rsync on a
hard-linked copy of the filesystem (trusting its file-modification
process not to modify files through their hard links).  This requires
us to trust rsync, but we don't have to trust the received bits: they
are still authenticated before the upgraded image is swapped for the
running one.

Obviously we wouldn't run rsync directly on the running filesystem.
 --scott

-- 
 ( http://cscott.net/ )


Re: A different proposal for XO upgrade.

2007-06-26 Thread Christopher Blizzard
Just some comments on this thread.

It seems odd to try to optimize the bandwidth on the actual local lan, as
we have a decent amount of bandwidth to work with.  The only use case
that I can come up with to support that is during unboxing and/or a mass
re-install.  And I don't think that we're ready to try and support that
with this first iteration.  It's a distribution method and probably
something that we can add after the fact.  (I know that some Red Hat
guys did something like this for a customer where an entire bank's set
of terminals could be completely re-imaged after a power failure in 20
seconds using a multicast-rsync setup.  I should find out more about that.)
As long as the formats that we pick support something like this we
should be pretty safe for now.

The case where we do worry about bandwidth is the client-to-main-updates-server
setup.  That's where the bandwidth is likely very slow and expensive, and
looking at using a diff-style system is worth the investment given our
install base.  I think Alex is looking into this now.

I like the system that scott proposed for how often we should look at
updates and the idea of a lead to say 'hey, I'm getting this update so
don't look.'  (It sounds strangely familiar to an idea that I shared
with scott over lunch a month or so ago, so of course I like it!)  We also
might consider setting up other clients to start collecting the update
before it's completely finished downloading, to start spreading around
the bandwidth over a longer period of time and making the entire process
more fault-tolerant and bandwidth-efficient on the local lan.  I.e.,
someone else could pick up the rest of the update if the lead machine
vanishes.  Also, two machines might be able to get the update faster
than one, just due to latency over a lot of the links we're talking
about using.

--Chris



Re: Ivan's XO Field Upgrade Proposal

2007-06-26 Thread Mike C. Fletcher
Dan Williams wrote:
...
 I'm not arguing simplicity; just that we have to be aware of the
 implications of having lots of XOs pulling from the server with some
 overlap with this method, but we don't with XO-XO.  We just have to
 make the tradeoffs clear, and understand them.
   
Agreed.  Given the simplicity of setting it up, the approach should be 
given consideration.  I would judge having this robust and simple 
from the start (when we'll be having lots of updates and a far greater 
chance for failures to happen) to be more important than bandwidth 
pre-optimisation.

At a later point, if we can be sure that the core system software 
image is always distinct from the sensitive data and user data 
overlays, we can readily provide a service that allows the 
laptops themselves to advertise an rsync-based source running in a 
special chroot with just those (read-only) layers.  The overlay manager 
could initiate an rsync service on the laptop in response to a tubes 
request (or whatever) in order to allow for XO-to-XO sharing.

Have fun,
Mike

-- 

  Mike C. Fletcher
  Designer, VR Plumber, Coder
  http://www.vrplumber.com
  http://blog.vrplumber.com



Re: Ivan's XO Field Upgrade Proposal

2007-06-26 Thread John (J5) Palmieri
On Tue, 2007-06-26 at 12:32 -0400, Dan Williams wrote:
 On Tue, 2007-06-26 at 12:16 -0400, C. Scott Ananian wrote:
  On 6/26/07, Dan Williams [EMAIL PROTECTED] wrote:
   Downside of this is, as Alex pointed out, it'll load the mesh a _lot_
   more than XO-XO updates.
  
  Not necessarily.  Rsync is pretty efficient: we're still basically
  distributing just (blockwise) diffs.  And we can always do XO-to-XO
  later: the important thing is to get a rock-solid basis.  The selling
  point (to me) is the simplicity.  Like I said: it's the simplest thing
  that works.
 
 I'm not arguing simplicity; just that we have to be aware of the
 implications of having lots of XOs pulling from the server with some
 overlap with this method, but we don't with XO-XO.  We just have to
 make the tradeoffs clear, and understand them.
 
 Dan
 
  I can do some benchmarks if people actually need to see numbers.
   --scott
  
 

We also have to remember that the countries want control over when the boxes
update.  At least that was the impression I got at the country meetings.

-- 
John (J5) Palmieri [EMAIL PROTECTED]



System update spec proposal

2007-06-26 Thread Ivan Krstić
Software updates on the One Laptop per Child's XO laptop





0. Problem statement and scope
==

This document aims to specify the mechanism for updating software on the
XO-1 laptop. When we talk about updating software, we are referring both
to system software such as the OS and the core services controlled by
OLPC that are required for the laptop's basic operation, and about any
installed user-facing applications (activities), both those provided
by OLPC and those provided by third parties.




1. System updater
=

1.1. Core goals
---

The three core goals of a software update tool (hereafter 'updater') for
the XO are as follows:

 * Security
 Given the initial age group of our users, it is the only reasonable
 solution to default to automatic detection and installation of
 updates, both to be able to apply security patches in a timely
 fashion, and to enable users to benefit from rapid development and
 improvements in the software they're using. Automatic updates,
 however, are a security issue unto themselves: compromising the
 update system in any way can provide an attacker with the ability to
 wreak havoc across entire installed bases of laptops while bypassing
 -- by design -- all the security measures on the machine. Therefore,
 the security of the updater is paramount and must be its first
 design goal.

 * Uncompromising emphasis on fault-tolerance
 Given the scale of our deployment, the relatively high complexity of
 our network stack when compared to currently-common deployments, the
 unreliability of Internet connectivity even when available, and
 perhaps most importantly our desire for participating countries to
 soon begin customizing the official OLPC OS images to best suit
 them, it is clear that our updater must be fault-tolerant. This is
 both in the simple sense -- cryptographic checksums need to be used
 to ensure updates were received correctly -- and in the more complex
 sense that the likelihood of a human error with regard to update
 preparation goes up proportionally to the number of different base
 OS images at play. A fault-tolerant updater will therefore allow
 _unconditional_ rollback of the most recently applied
 update. "Unconditional" here means that, barring the failure of
 other parts of the system which are dependencies of the updater
 (e.g. the filesystem), the updater must always know how to correctly
 unapply an applied update, even if the update was malformed.

 * Low bandwidth
 For much the same reasons (project scale, Internet access scarcity
 and unreliability) that require fault-tolerance from the updater,
 the tool must take maximum care to minimize data transfer
 requirements. This means, concretely, that a delta-based approach
 must be utilized by the updater, with a keyframe or "heavy update"
 being strictly a fallback in the unlikely case an update path cannot
 be constructed from the available or reachable delta sets.
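
The delta-vs-keyframe requirement can be stated as a small planning problem:
walk the available deltas from the current version toward the target, and
fall back to the full ("keyframe") image only when the chain is broken. A
sketch, assuming versions are integers and deltas only go forward (the
function and data shapes are illustrative, not from the spec):

```python
def plan_update(current, target, deltas):
    """Return the list of delta steps (from, to) taking `current` to
    `target`, or None if no chain exists and a keyframe is needed.
    `deltas` maps a version to the versions directly reachable from it."""
    path, version = [], current
    while version != target:
        nxt = [v for v in deltas.get(version, ()) if v <= target]
        if not nxt:
            return None          # broken chain: fall back to full image
        step = max(nxt)          # greedy: take the biggest forward jump
        path.append((version, step))
        version = step
    return path
```

For example, with deltas {1: [2], 2: [3, 4]} the path from 1 to 4 is two
steps, while from 1 to 5 no chain exists and the keyframe fallback applies.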



1.2. Design
---

It is given, due to requirements imposed by the Bitfrost security
platform, that a laptop will attempt to make daily contact with the
OLPC anti-theft servers. During that interaction, the laptop will post
its system software version, and the response provided by the
anti-theft service will optionally contain a relative URL of a more
recent OS image.

If such a pointer has been received and the laptop is behind a known
school server, it will probe the school server via rsync at the provided
relative URL to determine whether the server has cached the update
locally. If the update is not available locally, the laptop will wait up
to 24 hours, checking approximately hourly whether the school server has
obtained the update. If at the end of this wait period the school server
still does not have a local copy of the update, it is assumed to be
malfunctioning, and the laptop will contact an upstream master server
directly by using the URL provided originally by the anti-theft service.

In any of these three cases (school server has update immediately,
school server has update after delay, upstream master has update), we
say the laptop has 'found an update source'.
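
The three-way source search described above is, schematically, a bounded
polling loop with a fallback. A sketch with the probe and the clock injected
so the timing is visible (the function names are illustrative assumptions):

```python
def find_update_source(school_has_update, upstream_url,
                       max_checks=24, sleep=None):
    """Probe the school server roughly hourly, up to `max_checks` times;
    if it never obtains the update, fall back to the upstream master.
    `school_has_update` is a no-argument probe returning True/False."""
    for attempt in range(max_checks):
        if school_has_update():
            return "school"
        if sleep is not None:
            sleep(3600)          # wait about an hour between probes
    return upstream_url          # school server assumed malfunctioning
```

Injecting the probe and sleep functions keeps the policy (24 hourly checks,
then upstream) separate from the mechanics of rsync probing and real clocks.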

Once an update source has been found, the laptop will invoke the
standard rsync tool over a plaintext (unsecured) connection via the
rsync protocol -- not piped through a shell of any kind -- to bring
its own files up to date with the more recent version of the
system. rsync uses a network-efficient binary diff algorithm which
satisfies goal 3.



1.3. Design note: peer-to-peer updates
--

It is desirable to provide viral update functionality at a later date,
such that two laptops with different software versions (and without any
notion of trust) can engage 

Re: Ivan's XO Field Upgrade Proposal

2007-06-26 Thread Mike C. Fletcher
John (J5) Palmieri wrote:
 On Tue, 2007-06-26 at 12:32 -0400, Dan Williams wrote:
   
...
 We Also have to remember the countries want control over when the boxes
 update.  At least that was the impression I got at the country meetings.
   
This is a concern, no?  In cases of regime change and the like, can a 
particular user opt out of the upgrades somehow?  Although it's a good 
idea to have the laptops automatically update themselves by default, I'm 
somewhat concerned that a new regime coming in could disable all of the 
laptops in the country, or force them to install something that the 
users themselves don't want.

This concern is one of the reasons I want to allow for overlay file 
systems for the Core Operating System images: the install may be 
downloaded and automatically installed, but if it does something nasty 
the child can disable that overlay and potentially decide to move to a 
different update source (e.g. change countries) using the security 
configuration UI mechanism (or whatever).

On a less tin-foil-hat note, I would be most comfortable if all system 
updates notified the user and allowed them (again, likely part of 
the security UI) to temporarily delay an update.  30 kids in the middle 
of a video-capture-and-editing activity for the school play that's 
taxing their systems and their network to the limit are going to be a 
little upset if the board of education decides that's the moment to 
start a whole-system update.

Just a thought,
Mike

-- 

  Mike C. Fletcher
  Designer, VR Plumber, Coder
  http://www.vrplumber.com
  http://blog.vrplumber.com



Re: A different proposal for XO upgrade.

2007-06-26 Thread C. Scott Ananian
On 6/26/07, Christopher Blizzard [EMAIL PROTECTED] wrote:
 (I know that some Red Hat
 guys did something like this for a customer where an entire bank's set
 of terminals could be completely re-imaged after a power failure in 20
 seconds using a multicast-rsync setup.  I should find out more about that.)

That does sound very interesting.  Please do.

 I like the system that scott proposed for how often we should look at
 updates and the idea of a lead to say 'hey, I'm getting this update so
 don't look.'  (It sounds strangely familiar to an idea that I shared
 with scott over lunch a month or so ago, so of course I like it!)

I also like it, but this is a situation where the best is the enemy of
the good-enough. =)  We can revisit broadcast updates,
multicast-rsync, and all sorts of fancy goodness once the basic system
is working -- and Ivan's proposal seems most likely to get that robust
basic system up quickly.
  --scott

-- 
 ( http://cscott.net/ )


Re: Ivan's XO Field Upgrade Proposal

2007-06-26 Thread Ivan Krstić
On Jun 26, 2007, at 1:59 PM, C. Scott Ananian wrote:
 Developer keys also let you opt out of
 automatic updates if that's what you want.

No dev key needed, actually -- it's just a setting you'll be able to  
toggle off in the security GUI.

--
Ivan Krstić [EMAIL PROTECTED] | GPG: 0x147C722D



Re: Ivan's XO Field Upgrade Proposal

2007-06-26 Thread Ivan Krstić
On Jun 26, 2007, at 2:13 PM, Mike C. Fletcher wrote:
 We should be planning to allow for the root file system to be
 potentially two or three layers deep with installed, but not yet fully
 tested/accepted upgrade overlays.

Let's please table this discussion until we're way, way past FRS.

--
Ivan Krstić [EMAIL PROTECTED] | GPG: 0x147C722D



Re: System update spec proposal

2007-06-26 Thread Christopher Blizzard
A few notes follow here.

First, about approach: you should have given this feedback earlier rather
than later, since Alex has been off working on an implementation, and if
you're not giving feedback early then you're wasting Alex's time.  Also,
I would have appreciated it if you had given direct feedback to Alex
instead of just dropping your own proposal from space.  It's a crappy
thing to do.

So notes on the proposal:

1. There's a lot in here about vserver + updates, and all of that is
fine.  But we've been pretty careful in our proposals to point out that
how you get the bits to the box is different from how they are
applied.  I don't see anything in here around vserver that couldn't use
Alex's system instead of rsync.  So there's no added value there.

2. rsync is a huge hammer in this case.  In fact, I think it's too much
of a hammer.  We've used it in the past ourselves for these image update
systems over the last few years (see also: stateless linux) and it
always made things pretty hard, because you have to use lots of random
exceptions during its execution, and once it starts you can't really
control what it does.  It's good for moving live image to live image,
but I wouldn't really want to use it for an image update system --
especially one that will be as distributed as this.  Simply put, I see
rsync as more of a tool for sysadmins than for a task like this.  I
think that we need something that's actually designed to solve the
problems at hand, rather than seeing the hammer we have on the shelf and
thinking that it's obviously the right solution.

3. It doesn't really solve the scaling + bandwidth problems in the same
way as Alex's tool does.  It still requires a server and doesn't let you
propagate changes out to servers as easily as his code does.

Basically, aside from the vserver bits, which no one has seen, I don't
see a particular advantage to using rsync.  In fact, I see serious
downsides, since it misses some of the key critical advantages of using
our own tool, not the least of which is that we can make our tool do what
we want, while with rsync you're talking about changing the protocols.

Anyway, I'll let Alex respond with more technical points if he chooses
to.

--Chris



Re: Why pilgrim does not support indic language

2007-06-26 Thread John (J5) Palmieri
On Tue, 2007-06-26 at 10:08 +0530, sachin Tawniya wrote:
 Hi all,
 
   I have tried pilgrim with fedora and the olpc development repo. Also I
 added gnome-session and scim things with some indic language support.
 I got an ISO image with gnome and scim things activated. I can operate in
 a common indic language like Hindi and can use it with the terminal, text
 editor, etc.
 
 The issue is that when I want my desktop to come up in the local
 language, it doesn't provide any indic language selection menu.
 Take an example, Indic-Hindi: I have installed m17n-dn-hindi,
 fonts-hindi, etc.
 While logging in, if I select the change language option, it doesn't
 provide an option for hindi (an Indic language).
 
 Can anyone give me some reference on it, or any suggestions for
 supporting/enabling Indic languages in the OLPC ISO?
 
 Any suggestions will be appreciated.

First, Sugar and activities need to be translated.  Please see
http://wiki.laptop.org/go/Activity_Translations

Second, since we work off dedicated hardware we can tell what language
the device is laid out for. Please look at /etc/init.d/olpc-configure to
see how we configure languages. 

Third, no one has put work into creating a language selection menu, though
I think we do need some sort of place to make it easy to override the
default settings.  Patches welcomed.

-- 
John (J5) Palmieri [EMAIL PROTECTED]



Re: System update spec proposal

2007-06-26 Thread Mike C. Fletcher
Ivan Krstić wrote:
On Jun 26, 2007, at 2:21 PM, Christopher Blizzard wrote:
 I would have appreciated it if you had given direct feedback to Alex
 instead of just dropping your own proposal from space.  It's a crappy
 thing to do.
 
 Let's not make this about approach on a public mailing list, please.
   
Actually, while I found the response a bit harsh, could I suggest that 
what the project needs is *more* public discussion all around, not 
necessarily about approach, but about half-formed plans, ideas and 
rationale.  Having discussion move offline into some private channel is 
a good way to prevent anyone outside the offices from knowing *why* 
things are happening and having things blow up when the decisions appear 
to come from on high.

For instance:

* Whole projects are surfacing after weeks or months of development.
  Papers and implementations are starting and stopping without
  anyone knowing what's going on or, more importantly, knowing *why*
  they have been done.  Witness the immediate counter-proposals to
  Alex's implementation of the point-to-point protocol.
* VServer only appeared in public discussions yesterday or so
  AFAIK, yet it's apparently already the chosen path for doing the
  system compartmentalization.

We need more draft-level discussions, more discussion of plans, 
rationales, ideas and approaches.  The discussions don't need to be 
long, they don't need to be formal and well structured; they just need 
to be sufficient to let people have an idea of what they need to write 
for in a few months.  I realise we're on a very tight schedule, but it's 
extremely difficult to help with the project if we don't know what's 
going on inside it.

Assume good will and proper intentions on the part of all people until 
*proven* wrong (repeatedly), and only then assume ignorance of the 
proper path until proved wrong, and only then assume misguidance yet a 
thirst for knowledge until proved wrong.  Even if proved wrong many 
times, attempt to find a way to solve the issue politely and 
respectfully.  We are all working to make a better world, and there 
should be no egos involved if we are doing things right.

Anyway, just my thoughts,
Mike

-- 

  Mike C. Fletcher
  Designer, VR Plumber, Coder
  http://www.vrplumber.com
  http://blog.vrplumber.com



Re: mesh network vs broadcast/multicast question

2007-06-26 Thread Dan Williams
On Tue, 2007-06-26 at 17:10 -0400, Frank Ch. Eigler wrote:
> Hi -
>
> On Tue, Jun 26, 2007 at 04:49:40PM -0400, Dan Williams wrote:
>> [...]
>>> Does this mean that one needs a school server/bridge in order to talk
>>> to two differently-channeled friends ?
>>
>> Yes.  You must spread the XOs between channels to maximize the overall
>> system bandwidth [...]
>
> Is it obvious that this benefit (increased aggregate bandwidth) is
> worth the cost (the necessity for a special bridge - possibly
> hamstringing an ad-hoc mesh away from the schoolhouse)?  How
> bandwidth-hungry are common XO activities anyway?

It depends, but packing 50 normal laptops onto a single radio channel
_today_ with infrastructure APs doesn't really work well, especially
when they all start to talk.  We're likely to have that many laptops in
a single classroom, and when you get a couple of classrooms next to each
other...

Dan




Re: System update spec proposal

2007-06-26 Thread Ivan Krstić
On Jun 26, 2007, at 5:15 PM, Daniel Monteiro Basso wrote:
> And the same document rendered using jsCrossmark is available at:
> http://www.lec.ufrgs.br/~dmbasso/jsCrossmark/systemUpdate.html

Looks great!

> Please Ivan, consider answering my e-mail about Crossmark. I sent it to
> you privately because I wasn't on the list before, but now you can
> answer it openly.

Hm. I don't have any mail from you after a message from May 21st that I
answered a couple of days later. Could you please send me copies of any
other messages you sent? In general, if I don't answer non-urgent mail
in 3-5 days, it's safe to assume something's wrong and I probably
haven't seen the message.

--
Ivan Krstić [EMAIL PROTECTED] | GPG: 0x147C722D



Re: System update spec proposal

2007-06-26 Thread David Woodhouse
On Tue, 2007-06-26 at 15:59 -0400, Mike C. Fletcher wrote:
> * VServer only appeared in public discussions yesterday or so
>   AFAIK, yet it's apparently already the chosen path for doing the
>   system compartmentalization.

It's a short-term hack, because the people working on the security stuff
let it all slide for too long and now have declared that we don't have
time to do anything sensible. It will be dropped as soon as possible,
because we know it's not a viable and supportable plan in the long (or
even medium) term.

-- 
dwmw2



Re: System update spec proposal

2007-06-26 Thread Christopher Blizzard
On Tue, 2007-06-26 at 18:50 -0400, C. Scott Ananian wrote:
> On 6/26/07, Christopher Blizzard [EMAIL PROTECTED] wrote:
>> A note about the history of using rsync.  We used rsync as the basis for
>> a lot of the Stateless Linux work that we did a few years ago.  That
>> approach (although using LVM snapshots instead of CoW snapshots)
>> basically did exactly what you've proposed here.  And we used to kill
>> servers all the time with only a handful of clients.  Other people
>> report that it's easy to take out other servers using rsync.  It's
>> pretty fragile and it doesn't scale well to entire filesystem updates.
>> That's just based on our experience of building systems like what you're
>> suggesting here and how we got to where we are today.
>
> I can try to get some benchmark numbers to validate this one way or
> the other.  My understanding is that rsync is a memory hog because it
> builds the complete list of filenames to sync before doing anything.
> 'Killing servers' would be their running out of memory.  Rsync 3.0
> claims to fix this problem, which may also be mitigated by the
> relatively small scale of our use: my laptop's debian/unstable build
> has 1,345,731 files.  Rsync documents using 100 bytes per file, so
> that's 100M of core required.  Not hard to see that 10 clients or so
> would tax a machine with 1G main memory.  In contrast, XO build 465
> has 23,327 files: ~2M of memory.  100 kids simultaneously updating
> equals 2G of memory, which is within our specs for the school server.
> Almost two orders of magnitude fewer files for the XO vs. a 'standard'
> distribution ought to fix the scaling problem, even without moving to
> rsync 3.0.
>  --scott
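Scott's arithmetic can be reproduced with a quick sketch (the 100-bytes-per-file figure is rsync's documented approximation; the file counts are the ones quoted above):

```python
def rsync_core_estimate(num_files, bytes_per_file=100):
    """Rough rsync memory estimate: ~100 bytes of core per file
    held in the transfer file list (per rsync's documentation)."""
    return num_files * bytes_per_file

debian_files = 1_345_731  # Scott's debian/unstable laptop build
xo_files = 23_327         # XO build 465

print(rsync_core_estimate(debian_files) / 2**20)  # ~128 MiB -- the "100M of core"
print(rsync_core_estimate(xo_files) / 2**20)      # ~2.2 MiB
```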

I think that in our case it wasn't just memory, it was also seeking all
over the disk.  We could probably solve that easily by stuffing the
entire image into memory (and it will fit, easily), but your comment
serves to prove another point: firing up a program that has to do a
lot of computation every time a client connects is something that's
deeply wrong.  And that's just for the central server.

Also, 2G of memory on the school server is nice - as long as you don't
expect to do anything else.  Or as long as you don't want to do what I
mention above and shove everything into memory to avoid the thrashing
problem.

--Chris



Re: System update spec proposal

2007-06-26 Thread Ivan Krstić
On Jun 26, 2007, at 7:23 PM, David Woodhouse wrote:
> because the people working on the security stuff
> let it all slide for too long and now have declared that we don't have
> time to do anything sensible.

That's a cutely surreal take on things -- I really appreciate you
trying to defuse the situation with offbeat humor :)

--
Ivan Krstić [EMAIL PROTECTED] | GPG: 0x147C722D



Re: System update spec proposal

2007-06-26 Thread Ivan Krstić
On Jun 26, 2007, at 7:21 PM, Christopher Blizzard wrote:
> Also, 2G of memory on the school server is nice - as long as you don't
> expect to do anything else.  Or as long as you don't want to do what I
> mention above and shove everything into memory to avoid the thrashing
> problem.

I see no reason why the school server should be configured to allow
more than 10-20 simultaneous client updates. We're not going for
real-time update propagation here. Our largest schools can get updated
within a day or so at that rate, and all others within at most a few
hours.
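At that rate the fleet drains in batches; a minimal sketch, assuming (hypothetically) a 10-minute per-client update and a 3,000-laptop school, neither figure taken from the thread:

```python
import math

def total_update_time_hours(num_laptops, parallel_slots=20,
                            minutes_per_update=10):
    """Serve updates in batches of `parallel_slots`; total wall-clock
    time is simply the number of batches times the per-update time."""
    batches = math.ceil(num_laptops / parallel_slots)
    return batches * minutes_per_update / 60

# A hypothetical 3,000-laptop school at 20 parallel slots:
print(total_update_time_hours(3000))  # 25.0 hours, i.e. "within a day or so"
```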

--
Ivan Krstić [EMAIL PROTECTED] | GPG: 0x147C722D



Re: System update spec proposal

2007-06-26 Thread tridge
Scott,

> Rsync documents using 100 bytes per file, so that's 100M of core
> required.

That 100 bytes per file is very approximate. It increases quite a lot
if you use --delete, and again if you use --hard-links. Other options
have smaller, but non-zero, impacts on the memory usage, and of course
it depends on the filenames themselves.

If rsync is going to be used on low-memory machines, then the job could
be broken up into several pieces: do multiple rsync runs, each
synchronising a portion of the filesystem (e.g. each directory under
/usr).
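That breakup can be sketched as follows; the `server::image` rsync module name and the paths are placeholders, not OLPC's actual configuration:

```python
import os

def per_directory_rsync_cmds(root, module="server::image"):
    """Build one rsync invocation per top-level subdirectory of `root`,
    so each run only ever holds that subtree's file list in memory."""
    cmds = []
    for name in sorted(os.listdir(root)):
        if os.path.isdir(os.path.join(root, name)):
            cmds.append(["rsync", "-a",
                         f"{module}/{name}/",
                         os.path.join(root, name) + "/"])
    return cmds
```

Each command could then be run with subprocess, one at a time, keeping peak memory bounded by the largest single subtree rather than the whole filesystem.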

Alternatively, talk to Wayne Davison about rsync 3.0. One of the core
things that brings is lower memory usage (essentially automating the
breakup into directory trees that I mentioned above).

I had hoped to have time to write a new synchronisation tool for OLPC
that would be much more memory efficient and take advantage of
multicast, using a changeset-like approach to complete OS updates, but
various things have gotten in the way of my contributing serious time
to the OLPC project, for which I apologise. I could review any
rsync-based scripts you have, though, and offer suggestions on getting
the most out of rsync.

Cheers, Tridge


Re: System update spec proposal

2007-06-26 Thread tridge
Chris,

> but your comment serves to prove another point: that firing up a
> program that has to do a lot of computation every time a client
> connects is something that's deeply wrong.  And that's just for the
> central server.

yes, very true. What rsync as a daemon should do is mmap a
pre-prepared file list, and you generate that file list using
cron. 

For the OLPC case this isn't as hard as for the general rsync case, as
you know that all the clients will be passing the same options to the
server, so the same pre-prepared file list can be used for all of
them. In the general rsync case we can't guarantee that, which is what
makes it harder (though we could use a map file named after a hash of
the options).
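A minimal sketch of that idea, assuming a cron job writes the list and the daemon mmaps it read-only; the hash-of-options naming follows the parenthetical above, and every name here is illustrative rather than rsync's actual internals:

```python
import hashlib
import mmap
import os

def list_path(cache_dir, options):
    """Name the cached file list after a hash of the client options, so
    each distinct option set gets its own pre-built list."""
    digest = hashlib.sha1(" ".join(sorted(options)).encode()).hexdigest()
    return os.path.join(cache_dir, "filelist-" + digest)

def write_file_list(root, path):
    """Cron side: walk the tree once and write one relative path per line."""
    with open(path, "w") as f:
        for dirpath, _, names in os.walk(root):
            for name in sorted(names):
                rel = os.path.relpath(os.path.join(dirpath, name), root)
                f.write(rel + "\n")

def load_file_list(path):
    """Daemon side: mmap the pre-built list instead of re-scanning disk."""
    with open(path, "rb") as f:
        mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
        return mm[:].decode().splitlines()
```

Since every OLPC client passes the same options, a single cached list would serve them all; the hash only matters in the general case.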

Coding/testing this takes time though :(

Cheers, Tridge


Re: System update spec proposal

2007-06-26 Thread C. Scott Ananian
On 6/26/07, Ivan Krstić [EMAIL PROTECTED] wrote:
> term one. It's still my *strong* hunch that we are not going to run
> into any issues whatsoever given our update sizes and the fact that
> we're serving them from reasonably beefy school server machines, so
> adding this functionality to rsync would easily be a post-FRS goal.

I concur.  Rate-limiting is certainly a viable option for FRS if
server resources are an issue, and the *network* characteristics of
rsync are certainly in the right ballpark.  I suspect that we won't
need to hack rsync ourselves at all, since rsync 3.0 will Do What We
Want.  But we'll see what Wayne says about the timeline of rsync 3.0.

> Scott, are you willing to do a few tests and grab some real numbers,
> using previous OLPC OS images, for resource utilization on the school
> server in the face of e.g. 5, 10, 20, 50 parallel updates?

I might need some help getting access to enough clients, but I have no
problem doing the benchmarks.  Tridge, do you have any recommendations
about benchmarking rsync?
 --scott

-- 
 ( http://cscott.net/ )


Re: System update spec proposal

2007-06-26 Thread Ivan Krstić
On Jun 26, 2007, at 11:46 PM, C. Scott Ananian wrote:
> I suspect that we won't
> need to hack rsync ourselves at all, since rsync 3.0 will Do What We
> Want

I understood rsync 3.0 is smart about breaking up file list generation
into smaller chunks to be better about memory usage, but that's
orthogonal to the pregenerated-mmapped-file-list optimization. The
former helps memory consumption, but you're still needlessly stat()ing
static data left and right. Although, again, our updates are
sufficiently small that I expect the stat() calls to just hit the VFS
cache and not actually cost us much of anything.

--
Ivan Krstić [EMAIL PROTECTED] | GPG: 0x147C722D



Technical Event from COS

2007-06-26 Thread Alfonso de la Guarda

Hi,

On June 30 we are holding the course Multiplatform Programming with
Python+Gtk+Glade with Sugar in Mind (Programación Multiplataforma con
Python+Gtk+Glade pensando en Sugar), with the help of Eduardo Silva from
Chile (who arrives in Peru on June 28 for several pro-OLPC events).
The course is free, lasts 4 hours, is given in Spanish, and will be
streamed in real time to many countries (slots are limited); questions
can be asked by chat. The goal is to give beginning Python programmers
the resources to develop activities/applications that run on any OS,
including the XO/Sugar.
More information: http://www.cosperu.com
Video stream server: http://cosperu.com:8000 (NSV streams with 24kbps audio)
(Clients: VLC/mplayer/Winamp)

Thanks,




Alfonso de la Guarda
   ICTEC SAC
 www.cosperu.com
www.delaguarda.info
Telef. 97550914
 4726906


Trac login?

2007-06-26 Thread C. Scott Ananian
I tried to create an account on trac (http://dev.laptop.org/) with
userid 'cscott' and something seems to have gone wrong: it won't let
me log in, but it complains that an account with the userid 'cscott'
already exists if I try to recreate the account.  However, when I try
to use the 'lost password' function with cscott / [EMAIL PROTECTED],
it says there is no such account.  Can some trac wizard help me out?
Thanks...
 --scott

-- 
 ( http://cscott.net/ )


Re: SW Dev meeting minutes, 6/26/07

2007-06-26 Thread Richard A. Smith
Kim Quirk wrote:
> * Mitch warned that the suspend resume issues may still require EC change.

No 'may'.  This is an absolute.  HOST-WLAN wakeup is broken if you press
the game keys for the wakeup.

-- 
Richard Smith  [EMAIL PROTECTED]
One Laptop Per Child


Re: SW Dev meeting minutes, 6/26/07

2007-06-26 Thread Alfonso de la Guarda

On 6/26/07, Kim Quirk [EMAIL PROTECTED] wrote:
> SW meeting Minutes:
>
> We went through open issues that affect Trial-2 feature freeze; then
> discussed some blocking bugs.
>
> * Ivan discussed the activation feature - it fell behind as we are trying
>   to sort out the updates, but he believes he can still get a minimum
>   activation feature into the release over the next few days. He will
>   touch base with J5 and Blizzard on changes that are coming or might be
>   done in userland init so that won't affect activation. He is also
>   waiting on hardware to test crypto and keys. Hopefully he and Mitch
>   will make progress on this next week when Mitch is at 1CC.
> * CJB provided an update on Suspend/Resume - the code that Marcelo and
>   Marvell got working in CA last week hasn't been working at 1CC. Marvell
>   and Cozybit are still working on this, but it is difficult to get the
>   systems that show the problem in the same room as the people trying to
>   fix it. Mitch suggested completely replicating the NAND image from the
>   one that shows the problem and sending it out to Marvell.
> * J5 mentioned that he is getting ogg files and streaming video to work
>   soon. FC7 is done and he is doing a build tonight with the latest
>   Sugar, Telepathy, and AMD driver for the X server.

Ogg and streams through gst? We need some options to recode ITv from VLC
to something else, and to analyze the network performance, since many
concurrent users can degrade the network.

> * Bernie mentioned that there is a bug with X that looks pretty bad. It
>   may be an upstream problem, in which case we may want to revert the X
>   server.
> * We agreed that gamepad buttons rotating with screen rotation can be a
>   future feature - not necessary for Trial-2. It is important to stop
>   checking in code that belongs to a future release so we can stabilize
>   this code base. So please don't check in new features or big new chunks
>   of code if we don't know it is coming (especially by the end of this
>   week).
> * Ivan mentioned that we might want to pull master into the vserver
>   branch and test out of that one (I'll try to follow up on this just so
>   I understand it, since it might not have been jg's expectation - we
>   don't need too many surprises for when he returns). Andres has been
>   ill, so there hasn't been much kernel work in the last few days.
> * Scott reported that he has been looking at IPv6 and DNS for the school
>   server. We will need to reset expectations on what can be delivered as
>   school server support for Trial-2, since I don't believe there have
>   been any resources available for this.
> * We went through some blocking bugs owned by people on the call and they
>   are moving along.
> * Mitch warned that the suspend/resume issues may still require an EC
>   change.
> * Mitch and Ivan will be working on 'real' school server firmware and
>   some of the related security issues next week.
>
> Thanks everyone!
> Kim







--


Alfonso de la Guarda
   ICTEC SAC
 www.cosperu.com
www.delaguarda.info
Telef. 97550914
 4726906