Re: [Server-devel] Upgrade many machines

2014-01-09 Thread James Cameron
On Thu, Jan 09, 2014 at 02:40:31PM -0500, Adam Gordon wrote:
 The problem is USB key supply. Is there any way to update firmware
 without them?

Sure.  But it is not a good way.  It is complex and time-consuming,
with a higher possibility of error, and it doesn't scale up.

You don't say what the problem is with USB drive supply.  If you can
supply SD cards instead of USB drives, then the operating system can be
updated from them.  Let me know if this is possible.

For upgrading to an unsigned custom build with only a server, the
general sequence, which you should test and refine further, is:

- configure an open access point, DHCP server, and BIND server (this
  you may already have),

- configure the server to hold the latest firmware and operating
  system for each model of laptop to be supported, for example at
  http://server/path/ (this you may already have; a server-side sketch
  appears after these steps),

- quieten the wireless network, removing as many clients from it as
  possible,

- position the laptop close to the access point,

- lock the battery into the laptop,

- connect external power to the laptop,

- boot the laptop to the ok prompt, which is the firmware of the laptop,

- associate with the access point, for example by typing:

  ok essid mynetworkname

- download the latest firmware and reflash (Q3C16 for XO-1.5, Q4D34
  for XO-1.75), for example by typing:

  ok flash http:\\server\path\q3c16.rom

- wait for the firmware to be updated,

- boot the laptop to the ok prompt again, and verify the firmware
  version is current,

- download the latest operating system, for example:

  ok fs-update http:\\server\path\32013o2.zd

- reboot the laptop, for example by typing:

  ok bye

- repeat from the second step on the next laptop,

- avoid having more than one laptop running fs-update at the same
  time, or experiment until you find the point of failure.

Note that the firmware uses the backslash character \ in these URLs
instead of the forward slash / normally used in operating systems.
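
As a sketch of the server-side preparation only (the web root, path,
and filenames here are examples; adjust them to your HTTP server and
your builds):

  # copy the firmware and operating system images under the web
  # server's document root so they are reachable at http://server/path/
  mkdir -p /var/www/html/path
  cp q3c16.rom q4d34.rom 32013o2.zd /var/www/html/path/

  # check that a laptop-visible URL resolves before trying it at the
  # ok prompt
  curl -I http://server/path/q3c16.rom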

CC: as before.

-- 
James Cameron
http://quozl.linux.org.au/


Re: [Server-devel] Upgrade many machines

2014-01-09 Thread Martin Langhoff
On Tue, Dec 31, 2013 at 2:06 AM, James Cameron qu...@laptop.org wrote:
 Summary: use a few USB drives or NANDblaster, not the server.

Note that if preserving user data and apps is desired, USB drives /
NANDblaster aren't that good.

Any reason you are not mentioning the olpc-update path?



m
-- 
 martin.langh...@gmail.com
 -  ask interesting questions
 - don't get distracted with shiny stuff  - working code first
 ~ http://docs.moodle.org/en/User:Martin_Langhoff


Re: [Server-devel] Upgrade many machines

2014-01-09 Thread Martin Langhoff
On Thu, Jan 9, 2014 at 7:33 PM, James Cameron qu...@laptop.org wrote:
 No reason, thanks for reminding me.  I guess xs-rsync on the local
 server?

yep

 - oatslite to send the upgrade msg
 - XOs need to be registered
 - xs-rsync to host the new img

There are several possible gotchas I have not thought through, and I
no longer remember in clear enough detail: signed vs unsigned builds,
OATs keys (master or delegated), whether olpc-update has any
showstopper bugs in this specific configuration, etc.

So it might not be feasible with the given configuration. However, it
might be worthwhile to upgrade to a state where the next upgrade can
be handled with this mechanism...
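
For reference, the client side of that path is a single command run as
root on each XO; a minimal sketch, assuming the laptop is already
registered with the school server and xs-rsync is serving the named
build (the build name below is only an example):

  # on the XO, as root; fetches the named build from the school server
  olpc-update 13.2.0

  # reboot into the new build once the update completes
  reboot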



m
-- 
 martin.langh...@gmail.com
 -  ask interesting questions
 - don't get distracted with shiny stuff  - working code first
 ~ http://docs.moodle.org/en/User:Martin_Langhoff


Re: [Sugar-devel] The quest for data

2014-01-09 Thread Anish Mangal
Sorry for being late to the party. Clearly the quest for data is a
commonly shared one, with many different approaches, questions, and
reporting/results.

One of the already mentioned solutions is the sugar-stats package,
originally developed by Aleksey, which has now been part of
dextrose-sugar builds for over a year, with the server side in XSCE.

http://wiki.sugarlabs.org/go/Platform_Team/Usage_Statistics

The approach we followed was to collect as much data as possible
without interfering with the Sugar APIs or code. The project has made
slow progress on the visualization front, but the data collection
front has already been field-tested.


I for one think there are a few technical trade-offs, which lead to
larger strategy decisions:
* Context vs. Universality ... Ideally we'd like to collect (activity)
context-specific data, but that requires tinkering with the Sugar API
itself and with each activity. The other side is that we might be
ignoring the other types of data a server might be collecting ...
internet usage and the various other logfiles in /var/log

* Static vs. Dynamic ... Analyzing Journal backups is great, but they
are ultimately limited in time resolution due to the datastore's
design itself. So the key question is: what's valuable? a) Frequency
counts of activities? b) Data at up-to-the-minute resolution of what
activities are running, which activity is active (visible and when),
collaborators over time ... etc ...

In my humble opinion, the next steps could be:
1. Get better on the visualization front.
2. Search for more context. Maybe arm the sugar-datastore to collect
higher-resolution data.



On Tue, Jan 7, 2014 at 12:24 PM, Christophe Guéret 
christophe.gue...@dans.knaw.nl wrote:

 Dear Sameer, all,

 That's a very interesting blog post and discussion. I agree that
 collecting data is important, but knowing what questions are to be
 answered with that data is even more so. If you need help with that
 last bit, I could propose to use the Journal data as a use-case for
 the project KnowEscape ( http://knowescape.org/ ). This project is
 about getting insights out of large knowledge spaces via
 visualisation. There is a wide (European) community of experts behind
 it, coming from different research fields (humanities, physics,
 computer science, ...). Something useful could maybe come out...

 I would also like to refer you to the project ERS, which we have now
 almost finished. This project is an extension of the ideas behind
 SemanticXO, which some of you may remember. We developed a
 decentralised entity registry system with the XO as a primary
 platform for coding and testing. There is a description of the
 implementation and links to code on http://ers-devs.github.io/ers/ .
 We also had a poster at OLPC SF (thanks for that!).

 In a nutshell, ERS creates global and shared knowledge spaces through
 a series of statements. For instance, "Amsterdam is in the
 Netherlands" is a statement made about the entity Amsterdam, relating
 it to the entity the Netherlands. Every user of ERS may want to
 either de-reference an entity (e.g., asking for all pieces of
 information about Amsterdam) or contribute to the content of the
 shared space by adding new statements.

 This is made possible via "Contributor" nodes, one of the three types
 of node defined in our system. Contributors can interact freely with
 the knowledge base. They themselves take care of publishing their own
 statements but cannot edit third-party statements. Every set of
 statements about a given entity contributed by one single author is
 wrapped into a document in CouchDB to avoid conflicts and enable
 provenance tracking. Every single XO is a Contributor. Two
 Contributors in a closed P2P network can freely create and share
 Linked Open Data.

 In order for them to share data with another closed group of
 Contributors, we have "Bridges". A Bridge is a relay between two
 closed networks using the internet or any other form of direct
 connection to share data. Two closed communities, for example two
 schools, willing to share data can each set up one Bridge and connect
 these two nodes to each other. The Bridges will then collect and
 exchange data coming from the Contributors. These Bridges are not
 Contributors themselves; they are just used to ship data (named
 graphs) around and can be shut down or replaced without any data
 loss.

 Lastly, the third component we define in our architecture is the
 "Aggregator". This is a special node every Bridge may push content to
 and get updated content from. As its name suggests, an Aggregator is
 used to aggregate entity descriptions that are otherwise scattered
 among all the Contributors. When deployed, an Aggregator can be used
 to access and expose the global content of the knowledge space or a
 subset thereof.
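
 Purely as an illustration of that idea (this is not ERS's actual
 document schema; the database name, document id, and field names
 below are invented for this example), a per-author set of statements
 about one entity could be pushed to CouchDB over its HTTP API like
 this:

   # illustrative only -- wraps one author's statements about the
   # entity Amsterdam in a single document; assumes a database named
   # "ers" already exists
   curl -X PUT http://localhost:5984/ers/amsterdam_by_contributor1 \
        -H 'Content-Type: application/json' \
        -d '{"entity": "Amsterdam",
             "author": "contributor1",
             "statements": [{"predicate": "locatedIn",
                             "object": "the Netherlands"}]}'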

 One could use ERS to store (part of) the content of the Journal on an
 XO (Contributor), cluster information at the school level (a Bridge
 put on the XS) and provide higher level analysis


Re: [Server-devel] [XSCE] Upgraded VirtualBox XSCE 0.4 in my Google Drive to 0.5

2014-01-09 Thread James Cameron
On Thu, Jan 09, 2014 at 06:34:38PM -0800, Curt Thompson wrote:
 I have this set up now in appliance mode, considering people may want
 to try it out without running a DHCP server and possibly breaking
 their network(s).

Does this really happen?  Wouldn't it be better to fix XSCE so that
the DHCP server is not authoritative?

dhcpd.conf(5) on a system here says:

   The authoritative statement

     authoritative;

     not authoritative;

     The DHCP server will normally assume that the configuration
     information about a given network segment is not known to be
     correct and is not authoritative.  This is so that if a naive
     user installs a DHCP server not fully understanding how to
     configure it, it does not send spurious DHCPNAK messages to
     clients that have obtained addresses from a legitimate DHCP
     server on the network.
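
A sketch of the non-authoritative case for XSCE might then be a
dhcpd.conf fragment like this (the subnet values are placeholders, not
XSCE's actual configuration):

  not authoritative;

  subnet 192.168.100.0 netmask 255.255.255.0 {
    range 192.168.100.10 192.168.100.200;
    option routers 192.168.100.1;
  }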

-- 
James Cameron
http://quozl.linux.org.au/

