Re: [Evolution-hackers] Semantic Desktop Evolution Plugin

2009-01-05 Thread Thomas Franz

Thanks for the pointer.
The Evolution metadata API will be helpful for our work.

We have to see now how to add/overload GUI elements in Evolution
to display some additional metadata within the Evolution user interface.

Best regards,
Thomas


Philip Van Hoof wrote:

On Tue, 2008-12-23 at 14:42 +0100, Thomas Franz wrote:

  

I'm doing research on personal information management
and have been working on a semantic desktop implementation
and extensions for the KDE Dolphin file manager, Gnome Nautilus,
and Thunderbird (see [1] for a video of what I'm talking about).




You might be interested in this:

http://live.gnome.org/Evolution/Metadata

It's being developed as we speak.

  

___
Evolution-hackers mailing list
Evolution-hackers@gnome.org
http://mail.gnome.org/mailman/listinfo/evolution-hackers


[Evolution-hackers] camel-folder-summary.c - 64bit-ness ...

2009-01-05 Thread Michael Meeks
Hi guys,

I was just trying to reproduce some migration performance tests with my
mbox and summary data rsync'd from a 32bit machine to a 64bit machine.

Surprisingly this appears to crash immediately. Looking at the
camel-file-utils.c code I was surprised to see simultaneously an
apparent concern for network byte ordering:

gint
camel_file_util_encode_fixed_int32 (FILE *out, gint32 value)
{
	guint32 save;

	save = g_htonl (value);
	if (fwrite (&save, sizeof (save), 1, out) != 1)
		return -1;
	return 0;
}

and also things like:

CFU_ENCODE_T(time_t)

that appear to generate data based on the sizeof the platform's time_t
- on my 64bit machine time_t is 8 bytes, on 32bit it is only 4.
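A portable fix would be to serialize time values at a fixed width in network order, regardless of the platform's sizeof (time_t). A minimal sketch under that assumption (hypothetical names, not the actual Camel API):

```c
#include <stdint.h>
#include <stdio.h>
#include <time.h>

/* Hypothetical replacement for CFU_ENCODE_T(time_t): always store time
 * values as a fixed 8-byte big-endian quantity, so a summary written on
 * a 32bit box decodes identically on a 64bit one. */
static int
encode_fixed_time64 (FILE *out, time_t value)
{
	uint64_t v = (uint64_t) value;
	unsigned char buf[8];
	int i;

	for (i = 7; i >= 0; i--) {	/* big-endian (network) order */
		buf[i] = (unsigned char) (v & 0xff);
		v >>= 8;
	}
	return fwrite (buf, sizeof (buf), 1, out) == 1 ? 0 : -1;
}

static int
decode_fixed_time64 (FILE *in, time_t *value)
{
	unsigned char buf[8];
	uint64_t v = 0;
	int i;

	if (fread (buf, sizeof (buf), 1, in) != 1)
		return -1;
	for (i = 0; i < 8; i++)
		v = (v << 8) | buf[i];
	*value = (time_t) v;
	return 0;
}
```

The price is that existing on-disk summaries would still need a one-time migration keyed on which format a file was written in.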

Presumably this summary code is made obsolete by the new SQLite summary
code - and, short of some data recording which architecture a file was
written by, it's perhaps less than obvious how to fix this. Also - why
we're not using fgetc_unlocked in these tight loops I don't know.

I guess I need an old evo. version to re-build all my summaries for
64bit now; or am I barking up the wrong tree?

Thanks,

Michael.

-- 
 michael.me...@novell.com  , Pseudo Engineer, itinerant idiot



Re: [Evolution-hackers] Plans for GroupDAV/CalDAV/CardDAV/plain WebDAV?

2009-01-05 Thread Adam Tauno Williams
 I'd like to ask: what are the plans for fully supporting
 GroupDAV / CalDAV / CardDAV, or simply
 calendars/todos/journals/contacts over plain WebDAV?
 AFAIK the current stable version supports:
 • CalDAV: now complete (calendar, todo, journal) (since 2.25)
 • CardDAV: not implemented
 • GroupDAV/plain WebDAV: only addressbook (since 2.24)
 (Plus there was once some GroupDAV plugin, but I wasn't able to 
 get it to work, see http://www.groupdav.org/implementations.html ...)

The GroupDAV connector has been unmaintained for a long time.  It did work
reasonably well with Evo 2.8, but the Evo API seems to change with every
release.  I believe the last work on the GroupDAV connector was 2+ years
ago.  The SVN URL is
http://developer.opengroupware.org/OGoProjects/evolution-groupdav/trunk

But I wasn't aware that 2.24 could create a WebDAV (GroupDAV)
addressbook;  that is interesting.

 My question is: is there a plan for implementing
 GroupDAV/WebDAV for calendar/todo/journal? And: is it supposed
 to be a separate connector for GroupDAV and CalDAV/CardDAV?
 Wouldn't it be better to have a generic one that would be able
 to handle both simpler GroupDAV servers and
 full-fledged CalDAV/CardDAV?





Re: [Evolution-hackers] A Camel API to get the filename of the cache, also a proposal to have one format to rule them all

2009-01-05 Thread Philip Van Hoof
On Mon, 2009-01-05 at 08:25 -0500, Jeffrey Stedfast wrote:

 migrating away from the IMAP specific data cache would be good.

Yes. I think IMAP and the local providers are the only ones that are
still using a specialized datacache.

The IMAP4 one, for example, ain't using a specialized one.

  b) migrate away the mbox data cache (the all-in-one file crap)
  
  I'm all for it. Once I thought of doing this, but the options were like
  Maildir or a format of one mbox file per mail in a distributed folder
  [CamelDataCache sort of format, like imap4/GW/Exchange]. But IIRC Fejj,
  had some concern like, Local still might be good to be held in a
  'standards' way. I know it hurts us on expunge/mailbox rewrite etc.

 
 what mbox data cache? CamelDataCache would probably be the best cache to
 use for IMAP.

Although I would change CamelDataCache to store individual MIME parts as
separate files instead of files that look like a single-mail MBox file.

I would also decode the separate MIME parts before storing if the
original E-mail had them encoded (which is usually the case, and always
for binary attachments). This would make it easier for metadata engines
to index the MIME parts, and allow them to do so efficiently.

Perhaps also to reduce disk space, as encoded data consumes more of it,
but for me that is just a nice side-effect.

So my format would create a directory for each E-mail, or prefix each
MIME part with the uid. Perhaps:

INBOX/subfolders/temp/1.  // headers+multipart container
INBOX/subfolders/temp/1.1 // multipart container
INBOX/subfolders/temp/1.1.1   // text/plain
INBOX/subfolders/temp/1.1.2   // text/html
INBOX/subfolders/temp/1.2.1   // inline JPeg attachment
INBOX/subfolders/temp/1.BODYSTRUCTURE // Bodystructure of the E-mail
INBOX/subfolders/temp/1.ENVELOPE  // Top envelope of the E-mail
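With a layout like that, finding a part on disk is a plain string operation; a hypothetical helper (the names are mine, not Camel's) to build the cache path for a part spec:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical helper: map a folder directory, a message uid and an
 * IMAP-style part spec ("2.1", "BODYSTRUCTURE", ...) to the on-disk
 * cache path, e.g. "INBOX/subfolders/temp" + "1" + "2.1" gives
 * "INBOX/subfolders/temp/1.2.1". Caller frees the result. */
static char *
part_cache_path (const char *folder_dir, const char *uid, const char *part_spec)
{
	/* folder + '/' + uid + '.' + part + '\0' */
	size_t len = strlen (folder_dir) + strlen (uid) + strlen (part_spec) + 3;
	char *path = malloc (len);

	if (path)
		snprintf (path, len, "%s/%s.%s", folder_dir, uid, part_spec);
	return path;
}
```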

ps. Perhaps I would store 1.BODYSTRUCTURE in the database instead. I
would probably store 1.ENVELOPE in the database (like how it is now).

On top of storing BODYSTRUCTURE and ENVELOPE in the database, I would
probably also store them in separate files, even if most filesystems
will consume 4k or more (sector or block size) for those mini files.

To get the JPeg attachment:

$ cp INBOX/subfolders/temp/1.2.1 ~/mommy.jpeg

$ exif INBOX/subfolders/temp/1.2.1
EXIF tags in 'INBOX/subfolders/temp/1.2.1' ('Intel' byte order):
--------------------+------------------------------
Tag                 |Value
--------------------+------------------------------
Image Description   |Mommy with cake at birthday
Manufacturer        |SONY
Model               |DSC-T33
...

$ tracker-search -s EMails birthday
Results:
  email://u...@server/INBOX/temp/1
  email://u...@server/INBOX/temp/1#2.1
  ~/mommy.jpeg


[CUT]

 this can cause problems if you need to verify signed parts because
 re-encoding them might not result in the same output.

Ok, for signatures I guess we can make an exception and keep them
encoded in their original format then.

  For Maildir I recommend wasting diskspace by storing both the original
  Maildir format and in parallel store the attachments separately.
 
  Maildir ain't accessible from the current Evolution UI, by the way.
 
  For MBox I recommend TO STOP USING THIS BROKEN FORMAT. It's insane with
  today's mailboxes that easily grow to 3 gigabytes in size per user.
  
  I second your thoughts for MBox stuff. 

 
 Eh, I think mbox works fine but I can understand wanting to move to
 Maildir which is also fine :-)

Maildir doesn't store individual MIME parts separately. So Maildir is
equally hard to handle for metadata engines as MBox is. The only
difference from MBox is that we need to seek() to some location.

So Maildir doesn't make it possible for us to let app developers
implement indexing plugins easily, like a typical exif extractor.

We would have to Base64-decode image attachments before extracting exif,
for example, instead of just saying: here's a stream, or here's a FILE*,
go ahead and extract the info you want. (With a stream we could make
auto-Base64-decoding relatively easy, but these extractors are often
still FILE*-based, not stream-based.)

There's IMO not really a good reason to keep the attachments stored in
their encoded version. Except the signatures, perhaps, but we don't
really need those in decoded form anyway. So it would be fine to have an
exception on signatures (to keep them encoded-stored).
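Where decoding is still needed, it amounts to one streaming Base64 pass before handing the bytes to a FILE*-based extractor. A self-contained sketch (not Camel's actual MIME decoder, which in practice one would use instead):

```c
#include <string.h>

/* Minimal streaming Base64 decoder: turns an encoded MIME part into raw
 * bytes so a FILE*-based extractor (exif, etc.) can consume them.
 * `out` must be large enough (3/4 of the input length suffices). */
static int
b64val (char c)
{
	if (c >= 'A' && c <= 'Z') return c - 'A';
	if (c >= 'a' && c <= 'z') return c - 'a' + 26;
	if (c >= '0' && c <= '9') return c - '0' + 52;
	if (c == '+') return 62;
	if (c == '/') return 63;
	return -1;
}

static size_t
base64_decode (const char *in, unsigned char *out)
{
	size_t n = 0;
	int acc = 0, bits = 0;

	for (; *in; in++) {
		int v = b64val (*in);
		if (v < 0)
			continue;	/* skip '=', CRLF, whitespace */
		acc = (acc << 6) | v;
		bits += 6;
		if (bits >= 8) {
			bits -= 8;
			out[n++] = (unsigned char) ((acc >> bits) & 0xff);
		}
	}
	return n;
}
```

The point of storing parts pre-decoded is exactly that this step disappears from every extractor.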

Hmm, maybe someday having the fingerprint information about a person
might be useful to verify the identity of an individual before linking
the person with a contact in our RDF triple store.


-- 
Philip Van Hoof, freelance software developer
home: me at pvanhoof dot be 
gnome: pvanhoof at gnome dot org 
http://pvanhoof.be/blog
http://codeminded.be