Re: [Server-devel] [Sugar-news] Network configuration (was Re: Sugar Digest 2009-04-09)

2009-04-16 Thread Alexander Dupuy
Hi Walter and Wade,

It's been a while since you wrote these, but I had wanted to reply and 
just now got around to it.

 On Thu, Apr 9, 2009 at 4:49 PM, Walter Bender walter.ben...@gmail.com wrote:
   
 ===Sugar Digest ===
 I was able
 to get the network working but the process is tedious—I don't think we
 can expect teachers and young children to use ifconfig, route, etc.
 from the shell. I also had to boot each machine in Windows, get the IP
 address, netmask, gateway, and DNS, but this is something that needs
 only to be done once per machine. Configuring the network on Sugar on
 a Stick has to happen every time, presuming the children will be
 jumping from machine to machine. A control panel widget for setting up
 a static IP address is a first step, but I wonder if there is an
 easier way.
 

   
Wade Brainerd replied:
 In the long term, what about enabling freedesktop.org standard panel
 applets to appear in the frame, and then just using nm-panel for
 network configuration?

 The access points could then be removed from the Neighborhood view.
   

Something else that you might want to consider would be using link-local 
addresses (Zeroconf) for most of the Sugar machines, and having one or a 
few Sugar systems manually configured to provide a NAT routing service 
(IP proxy) with a caching DNS relay.  That would allow the 
link-local-addressed systems to communicate with the internet and with 
other (non-link-local) machines on the network, so you would only need 
to manually configure a handful of machines (or even just the teacher's) 
rather than the entire classroom.  While not as efficient or desirable 
as a proper DHCP configuration, it does provide a way to bootstrap onto 
the network with only a minimal amount of configuration, and without 
the risk of conflicting with existing network setups that bringing up a 
new DHCP server would entail.

While I'm not 100% sure of this, I believe that some (or maybe even 
all?) of this already exists (or existed) in the OLPC distributions - I 
think that the mesh networking uses link-local addresses (at least in 
some cases), and I remember reading that XO systems with a second 
network interface would act as Internet gateways for the machines that 
only had mesh connections.  I don't know whether this functionality is 
still present and working (it might have been removed, or just suffered 
bit rot due to Fedora version changes), but it would certainly be a 
useful starting point for implementing this for Sugar on a Stick.

Link-local addresses are trivially easy to configure for IPv6 (you 
actually have to go to some effort to *not* use them), and Fedora 
supports link-local 169.254.*.* addresses for IPv4 as well.  Sugar 
would have to provide a configuration mechanism (this could be tied to 
the configuration of a static IP address) that sets up the IP proxy 
NAT routing service for the other, link-local-addressed machines (the 
NAT conversion would map link-local endpoints to unused UDP/TCP ports 
on the routing system).  While I have never done such a thing, it 
should certainly be possible, and perhaps someone on the networking 
list has already done this for non-link-local configurations and could 
provide more details on the necessary setup.
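
To make that concrete, a rough (untested) sketch of the routing side on 
the proxy machine might look something like the following - here eth0 
stands in for whatever uplink interface has a real, routable address, 
and 169.254.1.1 for whatever link-local address the proxy ends up with 
on the classroom side:

# enable IPv4 forwarding on the proxy
sysctl -w net.ipv4.ip_forward=1
# masquerade traffic from the link-local machines out the uplink
iptables -t nat -A POSTROUTING -s 169.254.0.0/16 -o eth0 -j MASQUERADE
# let the forwarded traffic (and the replies) through
iptables -A FORWARD -s 169.254.0.0/16 -o eth0 -j ACCEPT
iptables -A FORWARD -d 169.254.0.0/16 -m state --state ESTABLISHED,RELATED -j ACCEPT
# dnsmasq (already packaged in Fedora) can serve as the caching DNS relay
dnsmasq --listen-address=169.254.1.1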

Once you had support for the IP proxy enabled, you would need to 
advertise that service via multicast DNS, and add something to the 
default Sugar configuration that (if a link-local address were the only 
IP address available) would look up the available IP proxies and choose 
one to install as the default gateway router (and DNS resolver).  
Fedora already includes the Avahi tools you would use for this - it 
would pretty much be a matter of configuration, plus a script or two to 
manage this during network startup.  If this is tested out and found to 
be useful, you could probably even get Fedora upstream to pick up the 
relevant changes to the networking startup scripts (as long as the 
scripts do not fail when link-local addresses are unavailable and/or 
the Avahi tools are not installed).
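
As a sketch of what those scripts could do (the _ipproxy._tcp service 
type is just a name I made up for illustration, and the parsing is 
equally rough and untested):

# on the proxy: advertise the gateway/DNS service over multicast DNS
avahi-publish -s "classroom-ip-proxy" _ipproxy._tcp 53 &

# on a client with only a link-local address: find an advertised proxy
# and install it as default gateway and DNS resolver
proxy=$(avahi-browse -rtp _ipproxy._tcp | awk -F';' '/^=/ && $8 ~ /^[0-9]+\./ {print $8; exit}')
if [ -n "$proxy" ]; then
    ip route add default via "$proxy"
    echo "nameserver $proxy" > /etc/resolv.conf
fi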

It would probably be best to implement this as an IPv4-only service 
initially, then look at adding an IPv6-to-IPv6 service, and eventually 
at IPv6/IPv4 tunneling and/or proxy options as well.

Finally, the IP proxy NAT service would be a sensible addition to the 
school server distributions as and when this approach is adopted by 
Sugar systems.

@alex
-- 
mailto:alex.du...@mac.com





Re: [Localization] [sugar] [Proposal] .xot bundles for translations

2008-12-05 Thread Alexander Dupuy
Martin Langhoff wrote:
 Using rpm or apt, Sugar would be getting a bit further away from Windows
 (does cygwin carry either?) - a bit less so on OSX (where the fink
 toolchain will probably work alright, especially with translation pkgs,
 which are by definition noarch).
   

I don't think that this necessarily prevents the possibility of OSX 
support via fink, but it's worth remembering that translation packages 
are not 'by definition noarch' - if the packages contain compiled 
gettext .mo files, those files may be machine-specific.  As noted in 
http://www.gnu.org/software/autoconf/manual/gettext/MO-Files.html

 The first two words serve the identification of the file. The magic 
 number will always signal GNU MO files. The number is stored in the 
 byte order of the generating machine, so the magic number really is 
 two numbers: 0x950412de and 0xde120495. 
 The msgfmt program has an option selecting the alignment for MO file 
 strings. With this option, each string is separately aligned so it 
 starts at an offset which is a multiple of the alignment value. On 
 some RISC machines, a correct alignment will speed things up.

In practice, it seems that the GNU gettext utilities will support 
either endianness, and generate little-endian .mo files regardless of 
host byte order (the discussion thread at 
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=468209 is informative 
here), so you can generally get away with treating .mo files as noarch 
- but you might not want to in some cases (notably, per the discussion 
in that bug report, on slow-CPU (e.g. embedded) big-endian devices).
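
If you ever want to check what a particular catalog actually is, the 
magic number makes it easy to tell - for example (the path here is just 
an arbitrary .mo file):

# "de 12 04 95" means a little-endian file, "95 04 12 de" big-endian
od -An -tx1 -N4 /usr/share/locale/es/LC_MESSAGES/sugar.mo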

If for some reason you wanted to support operating systems not based 
on GNU libc (like Solaris, which I realize is pretty irrelevant in this 
context), note that their gettext() implementations do not support 
automatic byte-order conversion of .mo files, so packages containing 
them would not be architecture independent.

@alex

-- 
mailto:[EMAIL PROTECTED]



Re: [Server-devel] recover from broken yum transaction

2008-09-11 Thread Alexander Dupuy
Ahmed Kamal writes:
 Trying rpm -Va, I am getting lots of these lines

 S.?.    /usr/bin/kblankscrn.kss
 S.?.    /usr/bin/kcminit
 S.?.    /usr/bin/kcminit_startup

 Basically, the size has changed, and the md5 check cannot be performed ?! I
 understand this is due to prelink, but that sux ! This effectively kills
 the rpm -V functionality. Is it not possible to prelink binaries on the
 server before wrapping them into rpms ? Any suggested solution around this ?
If you are getting these errors, it is not due to prelinking, but due to 
files having been updated but not properly registered with RPM.  The 
suggestion to reinstall  kdebase-workspace (and other packages which are 
reporting size changes) is a probable solution.
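
For example (assuming the package is still available in your configured 
repositories, or you have the original .rpm file at hand):

# newer yum versions can reinstall (and so re-register) the files directly
yum reinstall kdebase-workspace
# or, given the original package file, force rpm to reinstall it
rpm -Uvh --replacepkgs kdebase-workspace-<version>.rpm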

rpm -V uses prelink --verify to check binaries and shared libraries 
that might be affected by prelinking; this is explained in the prelink 
manpage:

-y --verify
Verifies a prelinked binary or library. This option can be used only on 
a single binary or library. It first applies an --undo operation on the 
file, then prelinks just that file again and compares this with the 
original file. If both are identical, it prints the file after --undo 
operation on standard output and exits with zero status. Otherwise it 
exits with error status. Thus if --verify operation returns zero exit 
status and its standard output is equal to the content of the binary or 
library before prelinking, you can be sure that nobody modified the 
binaries or libraries after prelinking. Similarly with message digests 
and checksums (unless you trigger the improbable case of modified file 
and original file having the same digest or checksum).
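
So if you want to see exactly what rpm -V is comparing for one of those 
files, you can do it by hand with something like the following (I'm 
guessing kdebase-workspace is the owning package here):

# the size and md5 digest that rpm recorded at install time
rpm -q --dump kdebase-workspace | grep '^/usr/bin/kcminit '
# the digest of the binary with the prelinking undone, which is what
# rpm -V actually checks against
prelink --verify /usr/bin/kcminit | md5sum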

@alex

-- 
mailto:[EMAIL PROTECTED]



Re: [Server-devel] Notes on replacing bridging with bonding

2008-09-08 Thread Alexander Dupuy
Martin Langhoff wrote:
 This is roughly what I am doing:

 # mark the device as a bonding device
 # - for some reason TYPE does not work
   

I have placed TYPE=Bonding in the ifcfg-bond0 config files, but this is 
not needed for Fedora 7 or later (it doesn't hurt to have it, though).

 # cat /etc/modprobe.d/xs-bonding
 alias bond0 bonding
   

A modprobe.d/ directory - that's a nice trick!  I wasn't aware of this, 
so I just added some lines to the /etc/modprobe.conf file:

alias bond0 bonding
options bond0 miimon=100 mode=3 downdelay=1000 updelay=1000

As far as I know, these options are pretty much the same as the ones 
that can be specified with BONDING_OPTS - I guess it is probably better 
to put them there (that feature was added after Fedora Core 5, which is 
what I was running when I set mine up).  As for what might be 
appropriate for a bonded channel with only one interface expected to be 
enslaved, I would suggest something along the lines of:

# use active backup mode, allowing primary slave to be specified
mode=1
# set multicast only on primary
multicast=1
# set primary slave
primary=wlan1
# set status monitoring to 1000msec
miimon=1000

I'm not 100% sure about the last one.  I believe that without 
monitoring, the link status of the bond interface won't reflect the 
link status of the underlying device, but I haven't confirmed this (or, 
for that matter, that the bond's link status does reflect the 
underlying device(s) when monitoring is enabled).  There is presumably 
some overhead to the monitoring, but at once per second (or even ten 
times per second) it is negligible.  Thinking about this more, since 
these are wireless devices (active antenna via USB) I don't even know 
whether they report a link status at all - for those you might want to 
omit this option.  It could theoretically be useful for the wired 
Ethernet interfaces, though.
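
For what it's worth, the easiest way I know of to see what the bond 
thinks about link status is the bonding driver's status file (nothing 
here is specific to your setup):

# shows the bond's state plus a per-slave "MII Status" line, which is
# what miimon keeps up to date
cat /proc/net/bonding/bond0
# the carrier state of the bond device itself, as ifup/routing sees it
cat /sys/class/net/bond0/carrier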

Other than these options, which aren't strictly necessary (your setup 
with the defaults should work fine), everything you have looks 
perfectly reasonable.  The only advantage of specifying active-backup 
mode is a little bit of misconfiguration protection: if somehow a 
second interface gets enslaved to the bond, it won't be used for 
transmission (packets will still be received on it, however).  
Explicitly specifying the primary slave in the master's BONDING_OPTS 
duplicates some of the configuration, which introduces the possibility 
of inconsistency - I'm not sure how you feel about that.  If the 
primary slave is not explicitly specified, the first enslaved device 
becomes the primary.
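
For reference, expressing the same options the BONDING_OPTS way would 
look roughly like this (file names and the wlan1 slave just follow your 
example; the address configuration for bond0 is omitted):

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
TYPE=Bonding
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="mode=1 miimon=1000 primary=wlan1"

# /etc/sysconfig/network-scripts/ifcfg-wlan1
DEVICE=wlan1
MASTER=bond0
SLAVE=yes
ONBOOT=yes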

@alex

-- 
mailto:[EMAIL PROTECTED]
