[Zeitgeist] [Bug 660307] Re: zeitgeist fails to run if its database structure is not complete

2011-05-16 Thread J.P. Lacerda
** Branch linked: lp:~jplacerda/zeitgeist/660307

-- 
You received this bug notification because you are a member of Zeitgeist
Framework Team, which is subscribed to Zeitgeist Framework.
https://bugs.launchpad.net/bugs/660307

Title:
  zeitgeist fails to run if its database structure is not complete

Status in Zeitgeist Framework:
  Triaged
Status in “zeitgeist” package in Ubuntu:
  Confirmed

Bug description:
  Binary package hint: zeitgeist

  If zeitgeist's database (~/.local/share/zeitgeist/activity.sqlite) is
  incomplete, e.g. missing the events table, zeitgeist fails to run. And
  because the GUI does not report that zeitgeist failed to run,
  applications that rely on zeitgeist simply fail to work without any
  relevant reason given.

  I ran into this problem on upgrading an installation from Ubuntu 10.04
  to 10.10. After the upgrade, the dockbarx applet failed to run. The
  error message from gnome-panel just said it had failed to run, and
  .xsession-errors said the child process did not report any specific
  error. Running in debug mode (i.e. with the command dockbarx-factory.py
  run-in-window) gave:

  ERROR:dbus.proxies:Introspect error on :1.134:/org/gnome/zeitgeist/log/activity:
  dbus.exceptions.DBusException: org.freedesktop.DBus.Error.NoReply: Message did
  not receive a reply (timeout by message bus)
  DEBUG:dbus.proxies:Executing introspect queue due to error
  Traceback (most recent call last):
    File "/usr/bin/dockbarx_factory.py", line 26, in <module>
      import dockbarx.dockbar
    ...
    File "/usr/lib/pymodules/python2.6/dbus/proxies.py", line 140, in __call__
      **keywords)
    File "/usr/lib/pymodules/python2.6/dbus/connection.py", line 620, in call_blocking
      message, timeout)
  dbus.exceptions.DBusException: org.freedesktop.DBus.Error.ServiceUnknown: The
  name :1.134 was not provided by any .service files

  The error appeared to be a dbus error, but in fact was a problem with
  zeitgeist, which was failing to run because its database apparently
  was corrupted during the upgrade. I fixed the problem (eventually) by
  deleting the zeitgeist database file and restarting the zeitgeist-
  daemon manually.

  
  What I would expect to happen is:

  1) The GUI should report that zeitgeist has failed to run.

  2) Better yet, zeitgeist could create the necessary tables if its
  database is invalid, or perhaps back up the old database and create a
  new one so it can run properly.

  It would of course be nice if dockbarx reported better error
  information, but since there are other applications that depend on
  zeitgeist, it would be good if zeitgeist could recover from this
  situation.

  ProblemType: Bug
  DistroRelease: Ubuntu 10.10
  Package: zeitgeist 0.5.2-0ubuntu1
  ProcVersionSignature: Ubuntu 2.6.35-22.34-generic 2.6.35.4
  Uname: Linux 2.6.35-22-generic i686
  Architecture: i386
  Date: Thu Oct 14 11:52:41 2010
  InstallationMedia: Ubuntu 10.10 Maverick Meerkat - Alpha i386 (20100602.2)
  PackageArchitecture: all
  ProcEnviron:
   PATH=(custom, no user)
   LANG=en_AU.UTF-8
   SHELL=/bin/bash
  SourcePackage: zeitgeist

___
Mailing list: https://launchpad.net/~zeitgeist
Post to : zeitgeist@lists.launchpad.net
Unsubscribe : https://launchpad.net/~zeitgeist
More help   : https://help.launchpad.net/ListHelp


[Zeitgeist] [Bug 783688] [NEW] make sql connection more modular

2011-05-16 Thread J.P. Lacerda
Public bug reported:

In order to make handling cursors / connections in sql.py easier, we should
create a function _connect_to_db which does exactly that.

19:34 thekorn I think we should make these lines a new method, like 
connect_to_database:
19:34 thekorn   conn = sqlite3.connect(file_path)
19:34 thekorn   conn.row_factory = sqlite3.Row
19:34 thekorn   cursor = conn.cursor(UnicodeCursor)
19:34 thekorn in sql.py that is
19:35 jplacerda thekorn I agree
19:35 thekorn and call it in create_db and in the restore code
19:35 jplacerda I ran into a similar problem (while working on a fix for 
sqlite memory) because I couldn't easily manipulate cursors / connection
19:35 jplacerda Seems *very* sane to me
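
For illustration, a minimal sketch of what such a helper could look like
(using the names from the transcript above; the actual code in sql.py may
differ, and UnicodeCursor is stubbed here so the snippet runs standalone):

    import sqlite3

    class UnicodeCursor(sqlite3.Cursor):
        """Stand-in for the UnicodeCursor defined in sql.py."""

    def _connect_to_db(file_path):
        # Open and configure the database connection in exactly one place,
        # so create_db() and the restore code can share the same code path.
        conn = sqlite3.connect(file_path)
        conn.row_factory = sqlite3.Row
        cursor = conn.cursor(UnicodeCursor)
        return cursor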

** Affects: zeitgeist
 Importance: Undecided
 Status: New

** Branch linked: lp:~jplacerda/zeitgeist/660307

-- 
You received this bug notification because you are a member of Zeitgeist
Framework Team, which is subscribed to Zeitgeist Framework.
https://bugs.launchpad.net/bugs/783688

Title:
  make sql connection more modular

Status in Zeitgeist Framework:
  New

Bug description:
  In order to make handling cursors / connections in sql.py easier, we
  should create a function _connect_to_db which does exactly that.

  19:34 thekorn I think we should make these lines a new method, like 
connect_to_database:
  19:34 thekorn   conn = sqlite3.connect(file_path)
  19:34 thekorn   conn.row_factory = sqlite3.Row
  19:34 thekorn   cursor = conn.cursor(UnicodeCursor)
  19:34 thekorn in sql.py that is
  19:35 jplacerda thekorn I agree
  19:35 thekorn and call it in create_db and in the restore code
  19:35 jplacerda I ran into a similar problem (while working on a fix for 
sqlite memory) because I couldn't easily manipulate cursors / connection
  19:35 jplacerda Seems *very* sane to me

___
Mailing list: https://launchpad.net/~zeitgeist
Post to : zeitgeist@lists.launchpad.net
Unsubscribe : https://launchpad.net/~zeitgeist
More help   : https://help.launchpad.net/ListHelp


[Zeitgeist] [Bug 784011] [NEW] clean up sql.py

2011-05-17 Thread J.P. Lacerda
Public bug reported:

There are quite a few clean-ups that are possible in sql.py.
I've already made a few of these cleanups, but some more complex ones would
have to rely on better sql tests:

10:16  jplacerda thekorn: back to sql.py... :) In create_db if the database 
is not new, and it is up to date, then the cursor is returned immediately
10:17  jplacerda However, the code in check_core_schema_upgrade only returns 
True if it is already upgraded prior to entering the function
10:17  jplacerda False is returned if: 1) the database does not exist 2) the
database is not up to date prior to entering the function
10:18  jplacerda The latter poses a problem, as check_core_schema_upgrade 
calls do_schema_upgrade, and the database is upgraded to CORE_SCHEMA_VERSION
10:18  jplacerda *but* False is still returned
10:18  jplacerda Which means that create_db then tries to insert the same 
things again into the table
10:19  jplacerda This is a bug, I assume? 
10:27  thekorn let me have a closer look to the code
10:28  jplacerda thekorn: sure. it looks particularly suspicious, as at the 
end of create_db you have: _set_schema_version (cursor, constants.CORE_SCHEMA, 
constants.CORE_SCHEMA_VERSION)
10:41  thekorn jplacerda: in short: it is not a problem at all, as all SQL 
statements in create_db have sth. along IF NOT EXIST
10:41  thekorn or similar
10:42  jplacerda thekorn: agreed, but does it really make sense to go through 
all those now that we have upgrade checking?
10:43  jplacerda At the end of _do_schema_upgrade it will be good to go
10:43  jplacerda Is there a compelling reason not to?
10:49  jplacerda I mean, if it is not a problem, then we shouldn't even have 
do_schema_upgrade :P
10:50  jplacerda It's just that do_schema_upgrade provides a better 
incremental picture of what's going on, and is probably faster than running 
through all of create_db
10:52  thekorn I know we had a good reason to do it this way, but I cannot 
remember which, let me think about it a bit
10:53  jplacerda thekorn: sure :)
10:55  jplacerda thekorn: the only reason I can think of (from looking at the 
code) is in case an upgrade fails -- you still get the same db structure after 
going through the statements in create_db
10:56  thekorn yeah, sth. alog the lines
10:56  thekorn along
10:57  jplacerda well, now that we have a method of ensuring n -> n+1 works,
we can do away with the repetition, and have that code only for new dbs :)
10:58  thekorn jplacerda: it's also nicer to maintain, because we know that
the sql statements in create_db() represent the up-to-date version of our db 
scheme
10:58  thekorn and we don't have to look at many files to understand how the 
current db structure looks like
10:59  thekorn plus I really don't think it has significant performance 
issues this way,
10:59  thekorn e.g. impact on startup time
10:59  jplacerda thekorn: sure :) I think that the statements in create_db 
can be left unchanged, but I think it now makes sense to only have them being 
reached by new databases
10:59  jplacerda would you agree with this?
11:00  thekorn jplacerda: yeah, but this would mean the upgrade scripts would 
get more complicated
11:00  thekorn e.g. we have to find out which indices were added (or
removed) at which point
11:00  thekorn etc.
11:01  jplacerda hmmm
11:01  thekorn as we have it right now, the upgrade scripts are mostly all 
about upgrading the data
11:01  jplacerda But don't you do that already?
11:02  jplacerda I mean, the statements in create_db give you an absolute 
picture of the current schema
11:02  thekorn sorry, what I mean is: if we change it the way you suggest, we 
have to go back in history and adjust each upgrade script
11:02  thekorn and see if they are really compatible
11:02  jplacerda the ones in core_n_n+1 are relative to previous versions
11:02  jplacerda I see.
11:02  jplacerda What is the best way to test this, then?
11:02  thekorn and given that we have no good way to test our upgrade paths
it might get some really big pain
11:03  thekorn jplacerda: I thought about how we can test it intensively, and 
I failed miserably
11:03  thekorn the best is to use some sample data
11:03  jplacerda ok
11:03  jplacerda Should the tests be done in sql.py?
11:03  thekorn at different db scheme versions,
11:03  jplacerda woops
11:03  jplacerda i mean
11:03  jplacerda in tests/sql
11:04  thekorn jplacerda: I think it would be worth adding a new file
t/sql_upgrade
11:04  thekorn to get some more structure
11:04  jplacerda thekorn: agreed
11:05  jplacerda would any tests designed previously have to be moved there?
11:06  thekorn but honestly, if you want to work on it, I think having a kind
of testing framework, which generates dbs at a specified version, and tests
upgrades to each other version would be *the most awesome solution* (tm)
11:06  thekorn which would scale in a good way for the future
11:07  thekorn jplacerda: I don't think so, because we don't have tests 

[Zeitgeist] [Bug 660307] Re: zeitgeist fails to run if its database structure is not complete

2011-05-17 Thread J.P. Lacerda
Things have changed slightly:

I implemented Mikkel's suggestion of setting the version to -1 just
before an upgrade, and back to its correct value afterwards. This is
only useful if an upgrade is killed; the upgrade can also fail due to a
raised OperationalError. If that is the case (regardless of whether or
not the corruption comes from a bad database creation), we allow the
code to fall through the statements in create_db, which safely restores
the database.
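
A rough sketch of that flow (helper names follow the ones used in this
thread; the argument list, the schema_version table layout and the exact
call sites in sql.py are assumptions, not the actual implementation):

    import sqlite3

    CORE_SCHEMA = "core"
    CORE_SCHEMA_VERSION = 4  # illustrative value only

    def _set_schema_version(cursor, schema, version):
        # Assumed layout: a schema_version table keyed by schema name.
        cursor.execute("INSERT OR REPLACE INTO schema_version (schema, version) "
                       "VALUES (?, ?)", (schema, version))

    def _upgrade_schema(cursor, old_version, do_upgrade):
        # Mark the schema as mid-upgrade (-1) so a killed upgrade is detectable,
        # then restore the real version once the upgrade scripts have run.
        _set_schema_version(cursor, CORE_SCHEMA, -1)
        try:
            do_upgrade(cursor, old_version, CORE_SCHEMA_VERSION)
        except sqlite3.OperationalError:
            # On failure, fall through to create_db's CREATE ... IF NOT EXISTS
            # statements, which safely restore the expected structure.
            return False
        _set_schema_version(cursor, CORE_SCHEMA, CORE_SCHEMA_VERSION)
        return True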

-- 
You received this bug notification because you are a member of Zeitgeist
Framework Team, which is subscribed to Zeitgeist Framework.
https://bugs.launchpad.net/bugs/660307

Title:
  zeitgeist fails to run if its database structure is not complete

Status in Zeitgeist Framework:
  Triaged
Status in “zeitgeist” package in Ubuntu:
  Confirmed

Bug description:
  Binary package hint: zeitgeist

  If zeitgeist's database (~/.local/share/zeitgeist/activity.sqlite) is
  incomplete, e.g. missing the events table, zeitgeist fails to run. And
  because the GUI does not report that zeitgeist failed to run,
  applications that rely on zeitgeist simply fail to work without any
  relevant reason given.

  I ran into this problem on upgrading an installation from Ubuntu 10.04
  to 10.10. After the upgrade, the dockbarx applet failed to run. The
  error message from gnome-panel just said it had failed to run, and
  .xsession-errors said the child process did not report any specific
  error. Running in debug mode (i.e. with the command dockbarx-factory.py
  run-in-window) gave:

  ERROR:dbus.proxies:Introspect error on :1.134:/org/gnome/zeitgeist/log/activity:
  dbus.exceptions.DBusException: org.freedesktop.DBus.Error.NoReply: Message did
  not receive a reply (timeout by message bus)
  DEBUG:dbus.proxies:Executing introspect queue due to error
  Traceback (most recent call last):
    File "/usr/bin/dockbarx_factory.py", line 26, in <module>
      import dockbarx.dockbar
    ...
    File "/usr/lib/pymodules/python2.6/dbus/proxies.py", line 140, in __call__
      **keywords)
    File "/usr/lib/pymodules/python2.6/dbus/connection.py", line 620, in call_blocking
      message, timeout)
  dbus.exceptions.DBusException: org.freedesktop.DBus.Error.ServiceUnknown: The
  name :1.134 was not provided by any .service files

  The error appeared to be a dbus error, but in fact was a problem with
  zeitgeist, which was failing to run because its database apparently
  was corrupted during the upgrade. I fixed the problem (eventually) by
  deleting the zeitgeist database file and restarting the zeitgeist-
  daemon manually.

  
  What I would expect to happen is:

  1) The GUI should report that zeitgeist has failed to run.

  2) Better yet, zeitgeist could create the necessary tables if its
  database is invalid, or perhaps back up the old database and create a
  new one so it can run properly.

  It would of course be nice if dockbarx reported better error
  information, but since there are other applications that depend on
  zeitgeist, it would be good if zeitgeist could recover from this
  situation.

  ProblemType: Bug
  DistroRelease: Ubuntu 10.10
  Package: zeitgeist 0.5.2-0ubuntu1
  ProcVersionSignature: Ubuntu 2.6.35-22.34-generic 2.6.35.4
  Uname: Linux 2.6.35-22-generic i686
  Architecture: i386
  Date: Thu Oct 14 11:52:41 2010
  InstallationMedia: Ubuntu 10.10 Maverick Meerkat - Alpha i386 (20100602.2)
  PackageArchitecture: all
  ProcEnviron:
   PATH=(custom, no user)
   LANG=en_AU.UTF-8
   SHELL=/bin/bash
  SourcePackage: zeitgeist

___
Mailing list: https://launchpad.net/~zeitgeist
Post to : zeitgeist@lists.launchpad.net
Unsubscribe : https://launchpad.net/~zeitgeist
More help   : https://help.launchpad.net/ListHelp


[Zeitgeist] [Bug 784850] [NEW] create a default test sandbox

2011-05-18 Thread J.P. Lacerda
Public bug reported:

21:43  RainCT jplacerda: So there's a problem with the test system. It's 
supposed to set ZEITGEIST_DATA_PATH to a temporary directory, but it's only 
doing so for doctests (of which we don't have any anymore)
21:44  jplacerda I was just looking at engine-test and something similar to 
what I was doing happens there
21:46  RainCT jplacerda: If you're friends with unittests, feel free to 
figure out how to fix that. I can only think of wrapping it around
everything (just like ZEITGEIST_DEFAULT_EXTENSIONS -- but then all 
tests get the same temp directory) or subclassing 
unittest.TestCase and having all tests use that; not happy with either option :(
21:46  RainCT jplacerda: Yeah, engine-test is also affected. The problem is 
in run-all-tests.py.

This clearly needs to be fixed :)

** Affects: zeitgeist
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Zeitgeist
Framework Team, which is subscribed to Zeitgeist Framework.
https://bugs.launchpad.net/bugs/784850

Title:
  create a default test sandbox

Status in Zeitgeist Framework:
  New

Bug description:
  21:43  RainCT jplacerda: So there's a problem with the test system. It's 
supposed to set ZEITGEIST_DATA_PATH to a temporary directory, but it's only 
doing so for doctests (of which we don't have any anymore)
  21:44  jplacerda I was just looking at engine-test and something similar to 
what I was doing happens there
  21:46  RainCT jplacerda: If you're friends with unittests, feel free to 
figure out how to fix that. I can only think of wrapping it around
everything (just like ZEITGEIST_DEFAULT_EXTENSIONS -- but then all 
  tests get the same temp directory) or subclassing 
unittest.TestCase and having all tests use that; not happy with either option :(
  21:46  RainCT jplacerda: Yeah, engine-test is also affected. The problem is 
in run-all-tests.py.

  This clearly needs to be fixed :)

___
Mailing list: https://launchpad.net/~zeitgeist
Post to : zeitgeist@lists.launchpad.net
Unsubscribe : https://launchpad.net/~zeitgeist
More help   : https://help.launchpad.net/ListHelp


[Zeitgeist] [Bug 784011] Re: clean up sql.py

2011-05-18 Thread J.P. Lacerda
This still needs more work :)
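
For context, a compressed sketch of the create_db() / schema-upgrade interplay
discussed below (table, column and index names are invented for illustration;
the real statements live in zeitgeist's sql.py):

    import sqlite3

    CORE_SCHEMA_VERSION = 4  # illustrative value only

    def create_db(conn, current_version):
        cursor = conn.cursor()
        if current_version == CORE_SCHEMA_VERSION:
            # Already up to date: nothing to (re)create.
            return cursor
        # Safe to re-run even on an existing, upgraded database, because every
        # statement is guarded by IF NOT EXISTS -- this is the redundancy the
        # discussion debates keeping versus reaching only for new databases.
        cursor.execute("""
            CREATE TABLE IF NOT EXISTS event (
                id INTEGER PRIMARY KEY AUTOINCREMENT,
                timestamp INTEGER
            )""")
        cursor.execute(
            "CREATE INDEX IF NOT EXISTS event_timestamp ON event (timestamp)")
        return cursor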

-- 
You received this bug notification because you are a member of Zeitgeist
Framework Team, which is subscribed to Zeitgeist Framework.
https://bugs.launchpad.net/bugs/784011

Title:
  clean up sql.py

Status in Zeitgeist Framework:
  New

Bug description:
  There are quite a few clean-ups that are possible in sql.py.
  I've already made a few of these cleanups, but some more complex ones would
  have to rely on better sql tests:

  10:16  jplacerda thekorn: back to sql.py... :) In create_db if the database 
is not new, and it is up to date, then the cursor is returned immediately
  10:17  jplacerda However, the code in check_core_schema_upgrade only 
returns True if it is already upgraded prior to entering the function
  10:17  jplacerda False is returned if: 1) the database does not exist 2)
the database is not up to date prior to entering the function
  10:18  jplacerda The latter poses a problem, as check_core_schema_upgrade 
calls do_schema_upgrade, and the database is upgraded to CORE_SCHEMA_VERSION
  10:18  jplacerda *but* False is still returned
  10:18  jplacerda Which means that create_db then tries to insert the same 
things again into the table
  10:19  jplacerda This is a bug, I assume? 
  10:27  thekorn let me have a closer look to the code
  10:28  jplacerda thekorn: sure. it looks particularly suspicious, as at the 
end of create_db you have: _set_schema_version (cursor, constants.CORE_SCHEMA, 
constants.CORE_SCHEMA_VERSION)
  10:41  thekorn jplacerda: in short: it is not a problem at all, as all SQL 
statements in create_db have sth. along IF NOT EXIST
  10:41  thekorn or similar
  10:42  jplacerda thekorn: agreed, but does it really make sense to go 
through all those now that we have upgrade checking?
  10:43  jplacerda At the end of _do_schema_upgrade it will be good to go
  10:43  jplacerda Is there a compelling reason not to?
  10:49  jplacerda I mean, if it is not a problem, then we shouldn't even 
have do_schema_upgrade :P
  10:50  jplacerda It's just that do_schema_upgrade provides a better 
incremental picture of what's going on, and is probably faster than running 
through all of create_db
  10:52  thekorn I know we had a good reason to do it this way, but I cannot 
remember which, let me think about it a bit
  10:53  jplacerda thekorn: sure :)
  10:55  jplacerda thekorn: the only reason I can think of (from looking at 
the code) is in case an upgrade fails -- you still get the same db structure 
after going through the statements in create_db
  10:56  thekorn yeah, sth. alog the lines
  10:56  thekorn along
  10:57  jplacerda well, now that we have a method of ensuring n -> n+1
works, we can do away with the repetition, and have that code only for new dbs
:)
  10:58  thekorn jplacerda: it's also nicer to maintain, because we know that
the sql statements in create_db() represent the up-to-date version of our db 
scheme
  10:58  thekorn and we don't have to look at many files to understand how 
the current db structure looks like
  10:59  thekorn plus I really don't think it has significant performance 
issues this way,
  10:59  thekorn e.g. impact on startup time
  10:59  jplacerda thekorn: sure :) I think that the statements in create_db 
can be left unchanged, but I think it now makes sense to only have them being 
reached by new databases
  10:59  jplacerda would you agree with this?
  11:00  thekorn jplacerda: yeah, but this would mean the upgrade scripts 
would get more complicated
  11:00  thekorn e.g. we have to find out which indices were added (or
removed) at which point
  11:00  thekorn etc.
  11:01  jplacerda hmmm
  11:01  thekorn as we have it right now, the upgrade scripts are mostly all 
about upgrading the data
  11:01  jplacerda But don't you do that already?
  11:02  jplacerda I mean, the statements in create_db give you an absolute 
picture of the current schema
  11:02  thekorn sorry, what I mean is: if we change it the way you suggest, 
we have to go back in history and adjust each upgrade script
  11:02  thekorn and see if they are really compatible
  11:02  jplacerda the ones in core_n_n+1 are relative to previous versions
  11:02  jplacerda I see.
  11:02  jplacerda What is the best way to test this, then?
  11:02  thekorn and given that we have no good way to test our upgrade
paths it might get some really big pain
  11:03  thekorn jplacerda: I thought about how we can test it intensively, 
and I failed miserably
  11:03  thekorn the best is to use some sample data
  11:03  jplacerda ok
  11:03  jplacerda Should the tests be done in sql.py?
  11:03  thekorn at different db scheme versions,
  11:03  jplacerda woops
  11:03  jplacerda i mean
  11:03  jplacerda in tests/sql
  11:04  thekorn jplacerda: I think it would be worth adding a new file
t/sql_upgrade
  11:04  thekorn to get some more structure
  11:04  jplacerda thekorn: agreed
  11:05  jplacerda would any tests designed previously have to be moved there?
  

[Zeitgeist] [Bug 784850] Re: create a default test sandbox

2011-05-24 Thread J.P. Lacerda
12:46  thekorn jplacerda: wow, great. the main question is: do we want to be
the cool kids on the block, or is it enough for us to do it the old school way?
12:47  thekorn if we want to be on the cool side of life, and use the state
of the art python way to handle such situations, we probably want to use the
fixtures module
12:48  thekorn ...and testtools, as we want to support python 2.5 too
12:49  thekorn the good thing about fixtures is: it already has a Fixture 
which sets env vars temporarily on board
12:49  thekorn also a fixture to create temp dirs
12:49  thekorn and to start external scripts/processes
12:49  thekorn which is basically all we need
12:50  thekorn so with some syntactic sugar, a test would simply look like:
12:50  jplacerda It's not part of the standard lib though, is it?
12:50  thekorn def bootest(self):
12:50  thekorn ...   with EngineFixture() as engine:
12:51  thekorn ...  self.assert(engine.do_something(), 1)
12:51  thekorn jplacerda: no, but I'm fine with adding these dependencies
12:51  jplacerda okay, that looks quite neat :)
12:55  jplacerda thekorn: I'm thinking that most of the stuff could actually 
be updated using Fixtures
12:57  thekorn one good thing about using testtools + fixtures is that we 
don't need our custom test runner anymore, we can just run the tests, and the 
'sandbox' (esp. the private session bus) are just there
12:57  thekorn so no one will ever run the tests directly and destroy their
system's activity log
12:57  thekorn etc
12:58  jplacerda sounds awesome
12:59  jplacerda thekorn: you're essentially proposing a re-write of the test 
system (which I like, as nothing is handled in a standardized manner atm)
12:59  thekorn exactly
12:59  thekorn it's some work, but might be worth it
13:00  thekorn maybe we can even write a generic DBusSessionBus Fixture which 
can land upstream in the fixture package
13:00  thekorn so other people with similar dbus server/client setups can 
reuse it
13:01  thekorn which moves some complex code out of zeitgeist
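
A minimal sketch of what such a fixture-based test could look like, using
the real fixtures and testtools packages; EngineFixture and the attribute
names here are hypothetical stand-ins, not existing Zeitgeist test code:

    import fixtures
    import testtools

    class EngineFixture(fixtures.Fixture):
        """Give each test its own temporary Zeitgeist data directory."""

        def setUp(self):
            super(EngineFixture, self).setUp()
            tempdir = self.useFixture(fixtures.TempDir())
            self.data_path = tempdir.path
            self.useFixture(fixtures.EnvironmentVariable(
                "ZEITGEIST_DATA_PATH", self.data_path))

    class EngineTest(testtools.TestCase):
        def test_data_path_is_sandboxed(self):
            engine = self.useFixture(EngineFixture())
            self.assertNotEqual("", engine.data_path)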

-- 
You received this bug notification because you are a member of Zeitgeist
Framework Team, which is subscribed to Zeitgeist Framework.
https://bugs.launchpad.net/bugs/784850

Title:
  create a default test sandbox

Status in Zeitgeist Framework:
  Confirmed

Bug description:
  21:43  RainCT jplacerda: So there's a problem with the test system. It's 
supposed to set ZEITGEIST_DATA_PATH to a temporary directory, but it's only 
doing so for doctests (of which we don't have any anymore)
  21:44  jplacerda I was just looking at engine-test and something similar to 
what I was doing happens there
  21:46  RainCT jplacerda: If you're friends with unittests, feel free to 
figure out how to fix that. I can only think of wrapping it around
everything (just like ZEITGEIST_DEFAULT_EXTENSIONS -- but then all 
  tests get the same temp directory) or subclassing 
unittest.TestCase and having all tests use that; not happy with either option :(
  21:46  RainCT jplacerda: Yeah, engine-test is also affected. The problem is 
in run-all-tests.py.

  This clearly needs to be fixed :)

___
Mailing list: https://launchpad.net/~zeitgeist
Post to : zeitgeist@lists.launchpad.net
Unsubscribe : https://launchpad.net/~zeitgeist
More help   : https://help.launchpad.net/ListHelp


[Zeitgeist] [Bug 787868] Re: Encryption of database

2011-05-26 Thread J.P. Lacerda
First off, thanks to Jacob for taking the time to provide such complete
use cases / examples / API suggestions, these are much appreciated! I
agree that this is an issue which needs to be tackled. The crux of the
matter, as outlined by Jacob, is that Zeitgeist is enabled by default in
11.04, while encryption of /home is optional. As thekorn and RainCT
said, encrypting activity.sqlite isn't the solution to all of our
problems, but at least it rules out some attack vectors. I feel like
this would be a good start (I'll package sqlcipher tonight).

-- 
You received this bug notification because you are a member of Zeitgeist
Framework Team, which is subscribed to Zeitgeist Framework.
https://bugs.launchpad.net/bugs/787868

Title:
  Encryption of database

Status in Zeitgeist Framework:
  New

Bug description:
  I think that Zeitgeist should encrypt databases in
  ~/.local/share/zeitgeist/* for anti-forensics reasons.

  While someone may happen to use an encrypted disk, Zeitgeist may serve
  as the ultimate accidental spyware to an unsuspecting user. One
  possible mitigation is to randomly generate a reasonable key, tie it
  into the login keychain and then use that key with something like
  http://sqlcipher.net/ rather than straight sqlite.

  In theory, a user will never know that this encryption/decryption is
  happening - no underlying assumptions about the disk need to be made
  to maintain any security guarantees. This should prevent anyone from
  learning the contents of the database without also learning the login
  password. Modern Ubuntu machines disallow non-root ptracing (
  https://wiki.ubuntu.com/SecurityTeam/Roadmap/KernelHardening#ptrace )
  and if the gnome keyring is locked, an attacker would have a much
  harder time grabbing meaningful Zeitgeist data without interacting
  with the user or bruteforcing the login keychain.

___
Mailing list: https://launchpad.net/~zeitgeist
Post to : zeitgeist@lists.launchpad.net
Unsubscribe : https://launchpad.net/~zeitgeist
More help   : https://help.launchpad.net/ListHelp


[Zeitgeist] [Bug 787868] Re: Encryption of database

2011-05-30 Thread J.P. Lacerda
Here's an update on our recent efforts:

1) RainCT and I have been packaging sqlcipher -- I don't think it should take 
us much longer to finish (RainCT can probably give a better estimate);
2) I have confirmed (via LD_PRELOAD) that sqlcipher somewhat works with the
existing pysqlite bindings; we will need to focus on a non-hacky solution after
we have finished the packaging;
3) It is trivial to support keyring integration (this can be done by using the
external keyring module, which provides cross-platform support for
GnomeKeyring, KDEKWallet, OSXKeychain, and Win32CryptoKeyring);
4) I have already written the code that integrates sqlcipher with Zeitgeist;
it's just a matter of testing once packaging is complete.

Regarding the generation of a raw key, I'm currently using the following
method:

base64.b64encode(hashlib.sha256(str(random.getrandbits(256))).digest())

Perhaps Jacob could comment on the viability of this method?
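
For illustration only, the key handling could look roughly like this (the
service/account names are made up, and the snippet simply mirrors the
generation method quoted above rather than endorsing it):

    import base64
    import hashlib
    import random

    import keyring  # the external cross-platform keyring module

    def get_or_create_db_key():
        key = keyring.get_password("zeitgeist", "activity.sqlite")
        if key is None:
            # Same construction as above: 256 random bits, hashed and encoded.
            key = base64.b64encode(
                hashlib.sha256(str(random.getrandbits(256))).digest())
            keyring.set_password("zeitgeist", "activity.sqlite", key)
        return key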

-- 
You received this bug notification because you are a member of Zeitgeist
Framework Team, which is subscribed to Zeitgeist Framework.
https://bugs.launchpad.net/bugs/787868

Title:
  Encryption of database

Status in Zeitgeist Framework:
  New

Bug description:
  I think that Zeitgeist should encrypt databases in
  ~/.local/share/zeitgeist/* for anti-forensics reasons.

  While someone may happen to use an encrypted disk, Zeitgeist may serve
  as the ultimate accidental spyware to an unsuspecting user. One
  possible mitigation is to randomly generate a reasonable key, tie it
  into the login keychain and then use that key with something like
  http://sqlcipher.net/ rather than straight sqlite.

  In theory, a user will never know that this encryption/decryption is
  happening - no underlying assumptions about the disk need to be made
  to maintain any security guarantees. This should prevent anyone from
  learning the contents of the database without also learning the login
  password. Modern Ubuntu machines disallow non-root ptracing (
  https://wiki.ubuntu.com/SecurityTeam/Roadmap/KernelHardening#ptrace )
  and if the gnome keyring is locked, an attacker would have a much
  harder time grabbing meaningful Zeitgeist data without interacting
  with the user or bruteforcing the login keychain.

___
Mailing list: https://launchpad.net/~zeitgeist
Post to : zeitgeist@lists.launchpad.net
Unsubscribe : https://launchpad.net/~zeitgeist
More help   : https://help.launchpad.net/ListHelp


[Zeitgeist] [Bug 787868] Re: Encryption of database

2011-06-05 Thread J.P. Lacerda
I now have the python bindings working  -- all that remains is
packaging.

** Branch linked: lp:~jplacerda/+junk/python-sqlcipher

-- 
You received this bug notification because you are a member of Zeitgeist
Framework Team, which is subscribed to Zeitgeist Framework.
https://bugs.launchpad.net/bugs/787868

Title:
  Encryption of database

Status in Zeitgeist Framework:
  In Progress

Bug description:
  I think that Zeitgeist should encrypt databases in
  ~/.local/share/zeitgeist/* for anti-forensics reasons.

  While someone may happen to use an encrypted disk, Zeitgeist may serve
  as the ultimate accidental spyware to an unsuspecting user. One
  possible mitigation is to randomly generate a reasonable key, tie it
  into the login keychain and then use that key with something like
  http://sqlcipher.net/ rather than straight sqlite.

  In theory, a user will never know that this encryption/decryption is
  happening - no underlying assumptions about the disk need to be made
  to maintain any security guarantees. This should prevent anyone from
  learning the contents of the database without also learning the login
  password. Modern Ubuntu machines disallow non-root ptracing (
  https://wiki.ubuntu.com/SecurityTeam/Roadmap/KernelHardening#ptrace )
  and if the gnome keyring is locked, an attacker would have a much
  harder time grabbing meaningful Zeitgeist data without interacting
  with the user or bruteforcing the login keychain.

___
Mailing list: https://launchpad.net/~zeitgeist
Post to : zeitgeist@lists.launchpad.net
Unsubscribe : https://launchpad.net/~zeitgeist
More help   : https://help.launchpad.net/ListHelp


[Zeitgeist] [Bug 787868] Re: Encryption of database

2011-06-05 Thread J.P. Lacerda
** Branch linked: lp:~jplacerda/zeitgeist/encryption

-- 
You received this bug notification because you are a member of Zeitgeist
Framework Team, which is subscribed to Zeitgeist Framework.
https://bugs.launchpad.net/bugs/787868

Title:
  Encryption of database

Status in Zeitgeist Framework:
  In Progress

Bug description:
  I think that Zeitgeist should encrypt databases in
  ~/.local/share/zeitgeist/* for anti-forensics reasons.

  While someone may happen to use an encrypted disk, Zeitgeist may serve
  as the ultimate accidental spyware to an unsuspecting user. One
  possible mitigation is to randomly generate a reasonable key, tie it
  into the login keychain and then use that key with something like
  http://sqlcipher.net/ rather than straight sqlite.

  In theory, a user will never know that this encryption/decryption is
  happening - no underlying assumptions about the disk need to be made
  to maintain any security guarantees. This should prevent anyone from
  learning the contents of the database without also learning the login
  password. Modern Ubuntu machines disallow non-root ptracing (
  https://wiki.ubuntu.com/SecurityTeam/Roadmap/KernelHardening#ptrace )
  and if the gnome keyring is locked, an attacker would have a much
  harder time grabbing meaningful Zeitgeist data without interacting
  with the user or bruteforcing the login keychain.

___
Mailing list: https://launchpad.net/~zeitgeist
Post to : zeitgeist@lists.launchpad.net
Unsubscribe : https://launchpad.net/~zeitgeist
More help   : https://help.launchpad.net/ListHelp


[Zeitgeist] [Bug 787868] Re: Encryption of database

2011-07-06 Thread J.P. Lacerda
After talking with the maintainer of sqlcipher, I'm now using the v2beta
branch, which provides an sqlcipher_export function, making my life much
easier. I'm exposing this function to python-sqlcipher (the bindings),
and then making use of it within Zeitgeist. Basically, this allows us to
attach an unencrypted db to an encrypted db with a simple subroutine
call, rather than having to specify the entire schema manually. Someone
could start packaging this; I believe that most of the packaging for
sqlcipher has been done.
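
Roughly, the flow described above looks like this through a pysqlite-style
binding (the module name "sqlcipher" and the file/key handling are
placeholders for illustration, not the actual python-sqlcipher API):

    import sqlcipher  # hypothetical import name for the bindings

    def encrypt_database(plain_path, encrypted_path, key):
        conn = sqlcipher.connect(plain_path)
        cur = conn.cursor()
        # Attach a new encrypted database and copy schema + data into it
        # in one call, via SQLCipher's sqlcipher_export().
        cur.execute("ATTACH DATABASE ? AS encrypted KEY ?",
                    (encrypted_path, key))
        cur.execute("SELECT sqlcipher_export('encrypted')")
        cur.execute("DETACH DATABASE encrypted")
        conn.close()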

J.P.

-- 
You received this bug notification because you are a member of Zeitgeist
Framework Team, which is subscribed to Zeitgeist Framework.
https://bugs.launchpad.net/bugs/787868

Title:
  Encryption of database

Status in Zeitgeist Framework:
  In Progress

Bug description:
  I think that Zeitgeist should encrypt databases in
  ~/.local/share/zeitgeist/* for anti-forensics reasons.

  While someone may happen to use an encrypted disk, Zeitgeist may serve
  as the ultimate accidental spyware to an unsuspecting user. One
  possible mitigation is to randomly generate a reasonable key, tie it
  into the login keychain and then use that key with something like
  http://sqlcipher.net/ rather than straight sqlite.

  In theory, a user will never know that this encryption/decryption is
  happening - no underlying assumptions about the disk need to be made
  to maintain any security guarantees. This should prevent anyone from
  learning the contents of the database without also learning the login
  password. Modern Ubuntu machines disallow non-root ptracing (
  https://wiki.ubuntu.com/SecurityTeam/Roadmap/KernelHardening#ptrace )
  and if the gnome keyring is locked, an attacker would have a much
  harder time grabbing meaningful Zeitgeist data without interacting
  with the user or bruteforcing the login keychain.

To manage notifications about this bug go to:
https://bugs.launchpad.net/zeitgeist/+bug/787868/+subscriptions

___
Mailing list: https://launchpad.net/~zeitgeist
Post to : zeitgeist@lists.launchpad.net
Unsubscribe : https://launchpad.net/~zeitgeist
More help   : https://help.launchpad.net/ListHelp