The database currently contains roughly 350,000 articles with a full
word index (not a partial one).

The word index is spread out over "virtual remotes", i.e. they are not
really on remote machines; it's more a way of splitting up the physical
database files on disk (I've written elsewhere about how that is done).
I have no way of knowing how many words are mapped to their articles
like this, but most of the database is occupied by these indexes, and
all in all it currently occupies some 30GB.
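The "virtual remote" idea can be sketched roughly like this (a hypothetical Python illustration, not VizReader's actual PicoLisp code; the file count and naming are assumptions): each word is hashed to pick one of N physical index files, so the index is partitioned on disk without any real remote machines being involved.

```python
# Hypothetical sketch of partitioning a word index across N database
# files on disk ("virtual remotes") -- not VizReader's real code.
import hashlib

N_FILES = 8  # assumed number of physical index files

def index_file_for(word: str) -> str:
    """Pick a stable index file for a word by hashing it."""
    h = int(hashlib.md5(word.encode("utf-8")).hexdigest(), 16)
    return f"index-{h % N_FILES}.db"

# Every lookup or insert for a given word goes to the same file:
target = index_file_for("google")
```

The point is only that the mapping is deterministic, so reads and writes for one word always hit the same file.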

A search for the word "Google" just took 22 seconds.

No other part of the application is lagging significantly, except when
listing new articles in my news category, simply because there are so
many articles in that category. The fetching method is highly
inefficient, though: I first fetch all feeds in a category, then all
their articles, and then take (tail) on the result to get, for
instance, the 50 newest. Walking the index and loading only the wanted
articles into memory would of course be the best way, and is something
I will look into.
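The difference between the two approaches can be sketched like this (a Python illustration with made-up article records; the real application is PicoLisp and stores articles in its own database, so all names here are assumptions):

```python
# Sketch of the two fetching strategies over hypothetical feed data --
# the real application does this against a PicoLisp database.
import heapq

def newest_naive(feeds, n=50):
    """Current approach: load every article of every feed, sort, take the tail."""
    articles = [a for feed in feeds for a in feed["articles"]]
    articles.sort(key=lambda a: a["date"])
    return articles[-n:][::-1]          # the n newest, newest first

def newest_streaming(feeds, n=50):
    """Better: stream the articles and keep only the n newest in a bounded heap."""
    it = (a for feed in feeds for a in feed["articles"])
    return heapq.nlargest(n, it, key=lambda a: a["date"])

feeds = [
    {"articles": [{"date": d, "title": f"a{d}"} for d in range(100)]},
    {"articles": [{"date": d + 50, "title": f"b{d}"} for d in range(100)]},
]
top = newest_streaming(feeds, 10)
```

The streaming version never holds more than n articles in memory at once, which is the property that matters when a category has tens of thousands of articles.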

Why don't you try out the application yourself, now that you know how
big the database is and so on? If you use Google Reader you can just
export your subscriptions as an OPML file and import it into VizReader.

Henrik Sarvell

On Mon, Jul 19, 2010 at 4:39 PM, Mateusz Jan Przybylski
<> wrote:
> On Monday 19 July 2010 16:23:27 you wrote:
>> if anybody would be so kind to share how they have experienced running
>> picolisp in production.
> None yet, unfortunately.
> However, a (quick'n'dirty) HTML & HTTP application in PicoLisp got me a very
> good grade for `Programming languages & paradigms' course at Uni.
> The lecturer never heard of Lisp before; after listening to my explanations he
> wrapped it up with:
>   ``So this Lisp is a newfangled language, quite like Ruby, right?''
> Geez...
> --
> Mateusz Jan Przybylski
> ``One can't proceed from the informal to the formal by formal means.''
> --