Hi, Jerry

/library/user/<some_id_here> refers to the directory on the school server.

Currently, the store on the school server holds a metadata file and a document file in the Journal directory for each Journal object. The Log directory contains the metadata file for Journal objects that have no document.

The script stores and retrieves objects using sftp, with authentication by public/private key.
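To make the transfer step concrete, here is a minimal sketch, assuming the script is Python and uses paramiko; the host name, user name, key path, and helper names are placeholders rather than the actual deployment values.

    import os
    import paramiko

    SERVER = 'schoolserver'                       # placeholder host name
    KEY = os.path.expanduser('~/.ssh/id_rsa')     # private key; the public key lives on the server
    REMOTE_BASE = '/library/user/<some_id_here>'  # per-user directory on the school server

    def open_sftp():
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect(SERVER, username='backup', key_filename=KEY)
        return client, client.open_sftp()

    def store(sftp, object_id, metadata_path, document_path=None):
        # Upload the metadata file (and the document, if there is one) for one object.
        subdir = 'journal' if document_path else 'log'
        sftp.put(metadata_path, '%s/%s/%s.metadata' % (REMOTE_BASE, subdir, object_id))
        if document_path:
            sftp.put(document_path, '%s/%s/%s' % (REMOTE_BASE, subdir, object_id))

    def retrieve(sftp, object_id, local_path):
        # Download the document for one object back onto the XO.
        sftp.get('%s/journal/%s' % (REMOTE_BASE, object_id), local_path)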

Day-by-day control is through the 'favorite' star in the Journal (the 'keep' flag in the metadata). It is set if there is a local (on the XO) copy of the document and clear if there is no local copy. The user can set the flag, which causes the script to download the document and make a local copy, or clear it to request that the local copy be deleted.

There will be a need for a 'recovery' mode, for example in the case of a reflashed XO. Currently, that is envisioned as a 'sync' option which forces the script to download any metadata files in the server 'Journal' directory that are not present in the local copy. The keep star would be cleared for these objects, but the user could then selectively download copies of the document files. I am not sure of the performance cost, but this could be done by the script as a normal step.
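A rough sketch of that sync pass, continuing the Python/paramiko sketch above and assuming one JSON metadata file per object both locally and on the server; the local cache path and file naming are assumptions.

    import json
    import os

    LOCAL_METADATA = os.path.expanduser('~/.journal-backup')   # assumed local metadata cache
    REMOTE_JOURNAL = '/library/user/<some_id_here>/journal'

    def sync_missing(sftp):
        # Recovery pass: fetch metadata that exists on the server but not on the XO.
        for name in sftp.listdir(REMOTE_JOURNAL):
            if not name.endswith('.metadata'):
                continue
            local_path = os.path.join(LOCAL_METADATA, name)
            if os.path.exists(local_path):
                continue
            sftp.get('%s/%s' % (REMOTE_JOURNAL, name), local_path)
            with open(local_path) as f:
                metadata = json.load(f)
            metadata['keep'] = '0'       # cleared: the document stays on the server
            with open(local_path, 'w') as f:
                json.dump(metadata, f)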

Tony

On 04/15/2016 11:36 PM, Jerry Vonau wrote:

On April 15, 2016 at 3:37 AM Tony Anderson <tony_ander...@usa.net> wrote:


Hi, Jerry

I am not sure what you mean by 'remote network datastore'.

The objects that are not stored as part of the local journal would need to
live somewhere; you have been referring to '/library/user/<some_id_here>'
as the place for that. I view that location as a 'remote network
datastore' because it holds all the objects that are not on the client XO
but can be retrieved. The older objects can be retrieved, right? Or are the
objects being stored in more of a warehouse-like fashion with no access
from the clients?

Jerry


What I am doing now is looking at each Journal object. If it is new
(it doesn't have 'journal' in the metadata; 'journal' indicates the
item has been backed up to the school server), the script uses sftp to
upload it to /library/user/journal (create_user adds two directories,
journal and log) and saves 'journal' in the metadata.

Actually, there is an additional check to make sure the document in the
new object has a 'user-supplied' title. If not, the object is saved to
the log (without the document).

A new object which does not have an associated document with a
user-supplied title has its metadata file uploaded to the
/library/user/serial-number/log directory. The local copy of the object
is then deleted.
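Putting the last few paragraphs together, the decision path might look roughly like this; it reuses the store() helper from the earlier sketch, and has_user_supplied_title() is a hypothetical stand-in for however the script actually distinguishes a user-typed title from an activity's default title.

    def back_up_object(sftp, object_id, metadata, metadata_path, document_path):
        # Upload one new Journal object following the rules described above.
        if 'journal' in metadata:
            return                       # already backed up to the school server

        if document_path and has_user_supplied_title(metadata):
            # metadata + document go to the journal directory
            store(sftp, object_id, metadata_path, document_path)
        else:
            # metadata alone goes to the log directory
            # (the script then deletes the local copy of the object)
            store(sftp, object_id, metadata_path)

        metadata['journal'] = '1'        # mark the object as backed up

    def has_user_supplied_title(metadata):
        # Hypothetical check; the real script decides whether the title was
        # typed by the user rather than assigned by the activity.
        return bool(metadata.get('title'))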

Currently, there is a design flaw in Sugar: when an activity is
resumed, the same object_id is reused, which means that the activity
overwrites the original object. I don't remember if I deal with an
updated object which should be saved again to the server.

The script checks the 'keep' flag in the metadata. If it is set and the
document is not in the local datastore, the document is downloaded (sftp).
If it is not set and the document is in the local datastore, it is
deleted.
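That check could look like the following, again reusing REMOTE_BASE and the sftp handle from the first sketch and assuming the document's local path is known:

    import os

    def apply_keep_flag(sftp, object_id, metadata, local_document_path):
        # Download or delete the local document according to the 'keep' flag.
        keep = metadata.get('keep') == '1'
        have_local = os.path.exists(local_document_path)

        if keep and not have_local:
            # starred but missing locally: fetch the document from the server
            sftp.get('%s/journal/%s' % (REMOTE_BASE, object_id), local_document_path)
        elif not keep and have_local:
            # unstarred but present locally: remove the local copy only
            os.remove(local_document_path)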

The idmgr 'create_user' does need to be tweaked. For now, I have modified
it to create the extra directories.

One concern is with the 'small' servers. Will they have space to back up
Journal objects? At one point OLPC estimated 2GB per XO as the space
required to back up the Journal. For a 100-XO deployment, 200GB seems
reasonable against a 1TB drive. Fortunately, the industry enables the use of
larger drives with only a small penalty in price. (Actually, on Amazon,
the cost per GB seems constant.)

Tony

On 04/15/2016 04:12 PM, Jerry Vonau wrote:
Or something like a remote network datastore if I understand what Tony is in favor of.