On 04/03/13 18:44, Simon Slavin wrote:
On 4 Mar 2013, at 4:13pm, Eleytherios Stamatogiannakis <est...@gmail.com> wrote:
Is there a way in SQLite to have a full covering index on a table without also
storing the duplicate table?
Can we ask why you care about this? Do you have a huge table which is taking a huge amount of
space, and you're trying to fit it on a flash drive? Can you distinguish between "I think it
could be smaller" and "It's just a little too big, and that means I can't use SQLite for
this"?
We are creating a distributed processing system in the spirit of Hadapt
[1], but instead of using PostgreSQL we are using SQLite.
For the intermediate result tables (each one inside an SQLite DB) whose
access patterns we know in advance (and so can prepare their indexes), it
is very wasteful to have to transfer the data twice (index + full table).
Systems of this kind live and die by their I/O.
The most compact way of carrying SQLite databases around is to use the shell
tool to dump the database to a SQL text file, then use a compression utility
(e.g. ZIP) to compress that text file. But without knowing your situation I
can't tell if that would help you.
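The round trip described above (dump to SQL text, compress, ship, replay) can also be sketched programmatically; a minimal illustration using Python's built-in sqlite3 and gzip modules, with made-up example data (the table name and contents are purely illustrative):

```python
import gzip
import sqlite3

# Build a small example database in memory (illustrative data).
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE t(a, b, c)")
src.executemany("INSERT INTO t VALUES (?, ?, ?)",
                [(i, 2 * i, 3 * i) for i in range(100)])
src.commit()

# Dump to SQL text and compress it, like `sqlite3 db .dump | gzip`.
sql_text = "\n".join(src.iterdump())
compressed = gzip.compress(sql_text.encode("utf-8"))
print(len(compressed) < len(sql_text))  # True: the SQL dump compresses well

# "Ship" the compressed dump, then restore by replaying the SQL.
dst = sqlite3.connect(":memory:")
dst.executescript(gzip.decompress(compressed).decode("utf-8"))
print(dst.execute("SELECT count(*) FROM t").fetchone()[0])  # 100
```

The point of dumping to SQL text rather than shipping the database file is that the text form contains each value only once, so generic compression gets more traction on it.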
For streaming processing we have our own serialization format that is
compressed on the fly with LZ4. These streams are opened on the other
side as SQLite Virtual Tables. For store-and-forward processing,
we use SQLite DBs also compressed on the fly with LZ4. On the other side
we simply "attach" these DBs.
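The "attach" step on the receiving side can be sketched with Python's sqlite3 module; the file name, table name, and data below are illustrative, and the on-the-fly LZ4 layer is omitted:

```python
import os
import sqlite3
import tempfile

# Illustrative: a shipped intermediate-result database on disk.
tmpdir = tempfile.mkdtemp()
shipped = os.path.join(tmpdir, "intermediate.db")
con = sqlite3.connect(shipped)
con.execute("CREATE TABLE results(k, v)")
con.executemany("INSERT INTO results VALUES (?, ?)", [(1, "a"), (2, "b")])
con.commit()
con.close()

# On the receiving side: attach the shipped DB and query it in place,
# with no import step and no second copy of the data.
main = sqlite3.connect(":memory:")
main.execute("ATTACH DATABASE ? AS shipped", (shipped,))
rows = main.execute("SELECT k, v FROM shipped.results ORDER BY k").fetchall()
print(rows)  # [(1, 'a'), (2, 'b')]
```

ATTACH is what makes this pattern cheap: the receiver reads the shipped file directly instead of re-inserting its rows.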
A first shot toward a partial solution would be to declare all the columns of
the table as the primary key:
create table t(a,b,c, primary key(a,b,c));
Sorry, but it doesn't help. Even the primary-key fields are stored twice: once in the table itself and once in the automatic index that implements the PRIMARY KEY.
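The duplication is visible in the schema: on an ordinary rowid table, a multi-column PRIMARY KEY is implemented as a separate automatic index alongside the table, each holding its own copy of (a, b, c). A small illustration with Python's sqlite3 module:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(a, b, c, PRIMARY KEY(a, b, c))")

# List the schema objects backing table t: the table itself plus the
# automatic index SQLite created to enforce the PRIMARY KEY.
objs = con.execute(
    "SELECT type, name FROM sqlite_master WHERE tbl_name = 't'"
).fetchall()
print(objs)  # the automatic index 'sqlite_autoindex_t_1' appears alongside 't'
```

Both objects store the key columns, which is why declaring everything as the primary key doesn't avoid the double storage.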
I'm saddened to hear that. I thought that at least we had a partial
solution with declaring all columns as the primary key...
Thank you for answering.
l.
[1] http://hadapt.com
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users