Rob Shortt wrote:
> I also agree that fetching the data before trying to show the EPG is
> best.  What I'd like to avoid is a problem we're having right now.  The
> "local" cached EPG data will be incorrect if the EPG changes in the
> database / server.  Will we use some callbacks for when the server updates?

My idea is that the main database announces a version number and each
change increases it. When the local db sees a new version, it can sync
very quickly, so it will only be outdated for a few seconds.
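
In code this could look roughly like the following (just a sketch; the
get_version()/get_changes() calls are made-up names, not the real
kaa.epg interface):

  class LocalEPGCache:
      def __init__(self, server):
          self.server = server
          self.version = -1
          self.programs = {}

      def sync(self):
          """Pull only the changes made since our last known version."""
          remote = self.server.get_version()
          if remote == self.version:
              return                    # cache is already up to date
          for prog in self.server.get_changes(since=self.version):
              self.programs[prog['id']] = prog
          self.version = remote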

> Ideally I'd like to see everything done realtime, from the client, to
> the server, to the database, but I'm not sure we can get the performance
> there that we really want.  Is it bad to dream?  A small buffer on the
> client and prefetching could hide potential lag.

We must compromise because of speed. You are right, the best way would
be one db in the LAN. But when every request goes over the network,
including pickling and unpickling the data, it will be slow. On the
other hand, reading the local db in a thread means no network
communication and no pickle/unpickle.
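
For illustration, reading the local db in a thread is not much more
than this (the file name and table layout here are just assumptions):

  import sqlite3
  import threading

  def fetch_programs(channel, start, stop, callback):
      def worker():
          # every thread needs its own sqlite connection
          db = sqlite3.connect('epgdb.sqlite')
          rows = db.execute(
              'SELECT title, start, stop FROM programs '
              'WHERE channel = ? AND stop > ? AND start < ?',
              (channel, start, stop)).fetchall()
          db.close()
          callback(rows)
      threading.Thread(target=worker).start()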

>>>Back to client / server. When we want to add data to the epg, we spawn
>>>a child. First we check if a write child is already running (try to
>>>connect to the ipc, if it doesn't work, start child, I have some test
>>>code for that). Same for kaa.vfs. One reading thread in each app, one
>>>writing app on each machine. 
>
> Again I'd prefer a single point of entry...

One single point for writing, many for reading would give us the best
performance. 
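
The "check if a write child is already running" part quoted above is
simple: try to connect to the writer's socket and spawn the writer if
that fails. Something like this (socket path and script name are
invented, not what the test code actually uses):

  import socket
  import subprocess
  import time

  WRITER_SOCKET = '/tmp/kaa-epg-writer'

  def get_writer_connection(retries=10):
      for attempt in range(retries):
          sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
          try:
              sock.connect(WRITER_SOCKET)
              return sock
          except socket.error:
              sock.close()
              if attempt == 0:
                  # no writer running yet, start one and wait for its socket
                  subprocess.Popen(['python', 'epg_writer.py'])
              time.sleep(0.5)
      raise RuntimeError('could not reach or start the writer process')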

>> Copying the epgdb.sqlite file straight over would be a big win, of
>> course.  We could implement that eventually.
>
> I don't like the idea of syncing the databases.  I'd rather that there
> was one database, one controlling process with a client/server interface
> (for reads AND writes (including large writes like tv_grab results)).
>
> I would be happy with something like this:
>
>               +---------------------+
>               |epg_server.py        |
>               +---------------------+
>               |   kaa.epg   +-------+
>               |             | DB    |
>               |-------------+-------+
>               |object interface/IPC |
>               +---------------------+
>                       /\
>                      /  \
>                     /    \
>             UNIX or TCP/IP sockets
>                   /        \
>                  /          \
>      +--------------+    +--------------+
>      |epg_client.py |    |epg_client.py |
>      +--------------+    +--------------+
>      |  client 1    |    |  client n    |
>      +--------------+    +--------------+
>      | cached OR    |
>      | realtime view|
>      +--------------+

So when caching doesn't work because the user does something we didn't
expect, we need to pickle our request and send it over the network
(which may be slow right now because we are streaming many live tv
channels). The server will unpickle the request, query the db, pickle
the answer and send it back. The client (in the best case sleeping in
select right now, in the worst case busy doing something else) must
read from the socket, unpickle the data and give it to the tv
guide. A clean but slow design.
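
Spelled out, one cache miss on the client looks like this (port and
wire format are invented here, just to show all the steps involved):

  import pickle
  import socket

  def query_server(host, request):
      sock = socket.create_connection((host, 12345))
      sock.sendall(pickle.dumps(request))   # pickle + send over the network
      sock.shutdown(socket.SHUT_WR)         # tell the server we are done
      data = b''
      while True:                           # block until the answer arrives
          chunk = sock.recv(4096)
          if not chunk:
              break
          data += chunk
      sock.close()
      return pickle.loads(data)             # unpickle the server's answer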



Dischi

-- 
In the Beginning

It was a nice day.
        -- (Terry Pratchett & Neil Gaiman, Good Omens)
