[Server-devel] Wikiserver on XS

2008-06-26 Thread Philipp Kocher
Hi,

Is there a wikiserver installed on the XS?
Apart from this old page, http://wiki.laptop.org/go/Trial1_Server_Software, I
cannot find any hints.
If so, is it MoinMoin, MediaWiki, or maybe the wiki plugin for Moodle?

I have to write some documentation and I want to make sure that I
install the right product - that makes it easier to transfer the data
to the XS later.

Thanks and best regards,
Philipp
ENN Primary School Reaksmy
Cambodia


___
Server-devel mailing list
Server-devel@lists.laptop.org
http://lists.laptop.org/listinfo/server-devel


Re: [Server-devel] [PATCH] postprocess.py: an incrond-triggered script to cleanup file transfers

2008-06-26 Thread Michael Stone
On Thu, Jun 26, 2008 at 07:10:18PM -0400, [EMAIL PROTECTED] wrote:

> diff --git a/server/postprocess.py b/server/postprocess.py
> new file mode 100755
> index 000..82a2418
> --- /dev/null
> +++ b/server/postprocess.py
> +homebasepath = '/library/users'
> +dirpath  = sys.argv[1]
> +fname= sys.argv[2]
> +fpath= dirpath + '/' + fname

Are we content with the exceptions that might result from
postprocess.py if run with fewer than two arguments? Do we ever
expect that postprocess.py will be run manually?
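As a sketch of the kind of guard that would make manual runs fail gracefully (the helper name and usage text are illustrative, not from the patch):

```python
import sys

def parse_args(argv):
    # incrond passes exactly two extra arguments ($@ and $#);
    # fail fast with a usage message rather than an IndexError
    # when a human invokes the script by hand.
    if len(argv) != 3:
        sys.stderr.write("usage: postprocess.py <dirpath> <fname>\n")
        sys.exit(1)
    return argv[1], argv[2]
```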

> +# Sanity checks:
> +# - must be a file
> +# - username must be ^\w+$
> +# - uid must match username

Do you mean that the file must be owned by the correct user?

> +# - must exist in /library/users
> +#
> +# Note: there are race conditions here.
> +#   We will drop privs before doing
> +#   potentially damanging stuff.

s/damanging/damaging

If we check the path name, then open the file, then fstat the resulting
descriptor, would we avoid the races?

> +# we'll hit a KeyError exception and exit 1
> +# if the user is not found
> +user  = pwd.getpwnam(fname)

Might be worth a remark at the beginning of the file that uncaught
exceptions are intended to kill the process. Do we ensure that the
resulting tracebacks are logged appropriately?
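One hypothetical way to guarantee that - keep the die-on-exception behaviour but capture the traceback first (the logging setup here is an assumption, not something the patch does):

```python
import logging
import sys
import traceback

# In production this would point at a real log file; with no
# filename, basicConfig logs to stderr.
logging.basicConfig(level=logging.ERROR)

def log_uncaught(exc_type, exc_value, exc_tb):
    # Record the traceback, then defer to the default hook, which
    # prints it; the interpreter still exits non-zero afterwards.
    logging.error("uncaught exception:\n%s",
                  "".join(traceback.format_exception(exc_type, exc_value, exc_tb)))
    sys.__excepthook__(exc_type, exc_value, exc_tb)

sys.excepthook = log_uncaught
```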

> +# match with /library/users
> +if not re.match(homebasepath, user[5]):
> +    exit(1)

Note that re.match already anchors at the start of the string, so this
behaves as a prefix match - though any regex metacharacters in
homebasepath would be interpreted. Python has a builtin test for this
which is fast and which doesn't require regexen. Consider:

  user[5].startswith(homebasepath)

> +# user uid must match file owner uid
> +if not (user[3] == os.stat(fpath)[4]):
> +    exit(1)

Correctness analysis:

You seem to have implemented:

  1. Read <dirpath> and <fname> from argv.
  2. stat64(fpath); check that fpath resolves to a file.
  3. lstat64(fpath); check that fpath resolves to a not-link.
  4. Look up the passwd data for <fname>.
  5. Check that <fname>'s home directory starts with '/library/users'.
  6. stat64(fpath); check that fpath resolves to a file owned by <fname>.
  7. Assume the identity of <fname>.
  8. ...

This is not correct for two reasons, the most important being that we
have a classic TOCTTOU race - i.e. any attacker with write access
to /library/users and with read access to a dirent pointing to an inode
owned by <fname>'s uid can link() and rename() that inode into
/library/users in order to pass your check, then replace the resulting
dirent at any time before you re-use the path.

Now, as it happens, your present code merely unlinks fpath. Since
unlink() doesn't follow symlinks, why not just strengthen the conditions
on fpath to something like:

  "Check that fpath begins with /library/users/ and contains exactly 3
   slashes - i.e. that '/' not in fpath[len(prefix):]."

and then unlink fpath without any other conditions? 
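That strengthened check is tiny in python - a sketch (the helper name is mine, not from the patch):

```python
def safe_to_unlink(fpath):
    # fpath must name something directly under /library/users:
    # right prefix, and no further '/' in the remainder, which rules
    # out deeper paths and '..' components; '.' and '..' themselves
    # are rejected explicitly.
    prefix = '/library/users/'
    rest = fpath[len(prefix):]
    return (fpath.startswith(prefix)
            and rest not in ('', '.', '..')
            and '/' not in rest)
```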

If, for some reason, you still want to check the ownership on the inode
pointed to by fpath and you don't care about the race I described above,
then you might:

  1. Read <dirpath> and <fname> from argv.
  2. Check that fpath begins with /library/users/ and contains exactly 3
     slashes - i.e. that '/' not in fpath[len(prefix):].
  3. Open fpath with O_NOFOLLOW.
  4. fstat64() the resulting descriptor. Verify from the stat data that
     the descriptor points to a file and that the file is owned by
     <fname>'s uid.
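A sketch of those four steps (illustrative names; assumes Linux's O_NOFOLLOW; in the real script fname arrives from incrond):

```python
import os
import pwd
import stat

def open_checked(dirpath, fname):
    # Constrain the path: no slashes in fname means
    # dirpath + '/' + fname cannot escape the watched directory.
    if '/' in fname or fname in ('', '.', '..'):
        raise ValueError('bad fname')
    fpath = dirpath + '/' + fname
    uid = pwd.getpwnam(fname).pw_uid      # KeyError if no such user
    # O_NOFOLLOW makes open() fail if fpath is a symlink.
    fd = os.open(fpath, os.O_RDONLY | os.O_NOFOLLOW)
    # Verify the inode we actually hold open, not the path.
    st = os.fstat(fd)
    if not (stat.S_ISREG(st.st_mode) and st.st_uid == uid):
        os.close(fd)
        raise RuntimeError('not a regular file owned by ' + fname)
    return fd
```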

Michael

P.S. - If you haven't already filled in the rsync logic alluded to by
your final comment, then poke me when you do so and we can attempt to
construct a correct guard for it.



[Server-devel] [PATCH] ds-backup client - fix rsync paths to the XS.

2008-06-26 Thread martin . langhoff
From: Martin Langhoff <[EMAIL PROTECTED]>

---
 client/ds_backup.py |8 +---
 client/ds_backup.sh |2 +-
 2 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/client/ds_backup.py b/client/ds_backup.py
index 60d0be2..089a2e9 100755
--- a/client/ds_backup.py
+++ b/client/ds_backup.py
@@ -83,9 +83,11 @@ def rsync_to_xs(from_path, to_path, keyfile, user):
 
     # Transfer an empty file marking completion
     # so the XS can see we are done.
+    # Note: the dest dir on the XS is watched via
+    # inotify - so we avoid creating tempfiles there.
     tmpfile = tempfile.mkstemp()
-    rsync = ("/usr/bin/rsync --timeout 10 -e '%s' '%s' '%s' "
-             % (ssh, tmpfile[1], to_path+'/.transfer_complete'))
+    rsync = ("/usr/bin/rsync --timeout 10 -T /tmp -e '%s' '%s' '%s' "
+             % (ssh, tmpfile[1], 'schoolserver:/var/lib/ds-backup/completion/'+user))
     rsync_p = popen2.Popen3(rsync, True)
     rsync_exit = os.WEXITSTATUS(rsync_p.wait())
     if rsync_exit != 0:
@@ -130,7 +132,7 @@ if __name__ == "__main__":
     sstatus = check_server_available(backup_url, sn)
     if (sstatus == 200):
         # cleared to run
-        rsync_to_xs(ds_path, 'schoolserver:datastore', pk_path, sn)
+        rsync_to_xs(ds_path, 'schoolserver:datastore-current', pk_path, sn)
         # this marks success to the controlling script...
         os.system('touch ~/.sugar/default/ds_backup-done')
         exit(0)
diff --git a/client/ds_backup.sh b/client/ds_backup.sh
index 9916334..334e4eb 100755
--- a/client/ds_backup.sh
+++ b/client/ds_backup.sh
@@ -121,7 +121,7 @@ fi
 #  We use this to stagger client machines in the 30 minute
 #  slots between cron invocations...
 #  (yes we need all the parentheses)
-(sleep $(($RANDOM % 1200)));
+#(sleep $(($RANDOM % 1200)));
 
 # After the sleep, check again. Perhaps something triggered
 # another invocation that got the job done while we slept
-- 
1.5.6.dirty



[Server-devel] [PATCH] postprocess.py gets fleshed out, and incrontab tweaks

2008-06-26 Thread martin . langhoff
From: Martin Langhoff <[EMAIL PROTECTED]>

postprocess.py now provides hardlinked timestamped
directories, and maintains a symlink pointing to
the latest transferred directory.

The timestamped directories are named to the nearest minute -
we don't expect to have more than one transfer per hour.
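A sketch of what such a rotation might look like (directory and helper names are my assumptions, not necessarily what the patch does; 'cp -al' is the GNU coreutils hardlink-copy):

```python
import os
import time

def publish_snapshot(userdir):
    # Hardlink-copy the freshly transferred datastore into a
    # minute-resolution timestamped directory, then repoint a
    # 'datastore-latest' symlink at it.
    src = os.path.join(userdir, 'datastore-current')
    stamp = time.strftime('%Y%m%d-%H%M')              # nearest minute
    dst = os.path.join(userdir, 'datastore-' + stamp)
    if os.system("cp -al '%s' '%s'" % (src, dst)) != 0:
        raise RuntimeError('hardlink copy failed')
    link = os.path.join(userdir, 'datastore-latest')
    if os.path.islink(link):
        os.unlink(link)
    os.symlink(os.path.basename(dst), link)
    return dst
```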

The incrond crontab gets updated to deal well with rsync
clients too. To work well with rsync, the rsync client must
use the -T option to avoid creating tempfiles in the monitored
directory.

Strangely, incrond cannot handle comments in its tab files, so
we work around that for the moment. This is known as bug 173 in the
incrond bugtracker (bts.aiken.cz).
---
 server/incron-ds-backup.comments |   11 +++
 1 files changed, 11 insertions(+), 0 deletions(-)
 create mode 100644 server/incron-ds-backup.comments

diff --git a/server/incron-ds-backup.comments b/server/incron-ds-backup.comments
new file mode 100644
index 000..6edb8d8
--- /dev/null
+++ b/server/incron-ds-backup.comments
@@ -0,0 +1,11 @@
+## NOTE: incrond v0.5.5 cannot cope with
+## comments in its "tab" files. So we
+## put them here, until incrontab gets
+## better at this :-)
+
+# We monitor create and move to, because
+# while touch/echo will trigger IN_CREATE,
+# rsync transfers (use the -T flag!) will
+# create the file in a tempdir and then mv
+# it into place. Again: use rsync -T /tmp
+# : [EMAIL PROTECTED]
-- 
1.5.6.dirty



[Server-devel] [PATCH] postprocess.py: an incrond-triggered script to cleanup file transfers

2008-06-26 Thread martin . langhoff
From: Martin Langhoff <[EMAIL PROTECTED]>

After a complete rsync transfer, clients will touch a file in
a "trigger" directory. incrond will call postprocess.py to
perform any actions required. With this commit we have

 - a paranoid initial part of postprocess, that performs
   sanity checks and then drops privs to the matching user

 - an incron config
---
 server/incron-ds-backup.conf |1 +
 server/postprocess.py|   73 ++
 2 files changed, 74 insertions(+), 0 deletions(-)
 create mode 100644 server/incron-ds-backup.conf
 create mode 100755 server/postprocess.py

diff --git a/server/incron-ds-backup.conf b/server/incron-ds-backup.conf
new file mode 100644
index 000..5bb4dff
--- /dev/null
+++ b/server/incron-ds-backup.conf
@@ -0,0 +1 @@
+/var/lib/ds-backup/completion IN_CREATE /usr/local/ds-backup/server/postprocess.py $@ $#
\ No newline at end of file
diff --git a/server/postprocess.py b/server/postprocess.py
new file mode 100755
index 000..82a2418
--- /dev/null
+++ b/server/postprocess.py
@@ -0,0 +1,73 @@
+#!/usr/bin/python
+#
+# Once a backup from a client is complete,
+# postprocess.py is called by incrond to finish off
+# the task. It should be run as root, and it will
+# drop privileges to the user it processes.
+#
+# The incrond invocation line should be
+# /path/to/dir IN_CREATE /path/to/postprocess.py $@ $#
+#
+# (in other words, we expect 2 parameters: dirpath and filename)
+#
+import sys
+import os
+import re
+import pwd
+
+homebasepath = '/library/users'
+dirpath  = sys.argv[1]
+fname= sys.argv[2]
+fpath= dirpath + '/' + fname
+
+#
+# Sanity checks:
+# - must be a file
+# - username must be ^\w+$
+# - uid must match username
+# - must exist in /library/users
+#
+# Note: there are race conditions here.
+#   We will drop privs before doing
+#   potentially damanging stuff.
+#
+
+if not os.path.isfile(fpath):
+    exit(1)
+if os.path.islink(fpath):
+    exit(1)
+if not re.match('\w+$', fname):
+    exit(1)
+
+# we'll hit a KeyError exception and exit 1
+# if the user is not found
+user  = pwd.getpwnam(fname)
+# match with /library/users
+if not re.match(homebasepath, user[5]):
+    exit(1)
+# user uid must match file owner uid
+if not (user[2] == os.stat(fpath)[4]):
+    exit(1)
+
+# Checks done - now we drop privs and
+# - remove flag file
+# - hardlink files as appropriate
+#
+try:
+    os.setgid(user[3])
+    os.setuid(user[2])
+except OSError, e:
+    sys.stderr.write('Could not set gid %s uid %s\n' % (user[3], user[2]))
+    exit(1)
+
+# rm the flagfile
+os.unlink(fpath)
+
+# os.system() rsync 
+#print 
+
+
+
+
+
+
+
-- 
1.5.6.dirty



Re: [Server-devel] Yummy incron

2008-06-26 Thread Martin Langhoff
On Thu, Jun 26, 2008 at 12:45 AM, Bill Bogstad <[EMAIL PROTECTED]> wrote:
> Sure but any process that is waiting for a file to appear in the
> filesystem seem more like a batch process to me. There is no way to
> know how long it will take (and thus your timeouts).

Bill,

everything you say makes sense, except that perhaps a key concept is missing.

Most of our scripts and processes are in Python. If I have 20 such
scripts idling in memory, waiting for something to happen (a dbus
event?), the footprint is huge - clearly this does not scale. Even if
they get paged out.

Incron, OTOH, is a single process weighing 600K, and can have a
config file listing 20 scripts that might be triggered by an
(inotify) event. Make that 200 scripts. The memory footprint doesn't change.

> You don't seem to like DBUS

Why do you repeat that? I have no problem with DBus where it makes
sense: processes that are guaranteed to be running in memory all the
time.

The key problem with paging out is that if you have a dozen idle
processes in memory that are network daemons, and a dozen that are
batch processes waiting for a dbus signal, the kernel will page them
indiscriminately - the batch ones will be able to do their job ok when
called, the network ones will timeout on their users. The kernel has
no way of knowing.

This isn't theoretical stuff - these are very practical concerns about
offering network services in real life. Memory usage matters.

OTOH, I'm open to seeing facts that contradict my analysis - if you or
anyone has a strategy that is better than incron - say, a way to keep 20
to 200 python scripts in memory in 600K of (safely swappable) memory -
I'd love to see it.

cheers,



m
-- 
 [EMAIL PROTECTED] -- School Server Architect
 - ask interesting questions
 - don't get distracted with shiny stuff - working code first
 - http://wiki.laptop.org/go/User:Martinlanghoff