On 4 Jun 2013, at 3:09pm, Eleytherios Stamatogiannakis <est...@gmail.com> wrote:

> Is there any way to go beyond the SQLITE_MAX_ATTACHED limit for *read only* 
> attached DBs?

See section 11 of

<http://www.sqlite.org/limits.html>

Internally it's a 64-bit bitmask, and two of the bits are already taken up (by the main and TEMP databases), so the ceiling is 62.
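
If you want to check what a particular build and connection actually allows, a quick way is to ask sqlite3_limit().  This is just a sketch of my own, assuming an already-open connection; the hard maximum itself can only be changed at compile time with -DSQLITE_MAX_ATTACHED.

#include <stdio.h>
#include <sqlite3.h>

/* Print how many ATTACHed databases this connection allows.  Passing -1
** as the new value queries the current limit without changing it; a
** non-negative value would try to change it, capped at the compile-time
** SQLITE_MAX_ATTACHED setting. */
static void show_attach_limit(sqlite3 *db){
  int n = sqlite3_limit(db, SQLITE_LIMIT_ATTACHED, -1);
  printf("attached databases allowed on this connection: %d\n", n);
}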

You can attach databases, copy data from them to the main database, then detach 
those and attach some others.  Or you can create a hierarchy of shards (each of 
62 shards can point to up to 62 others).  Or you can rewrite your code so it 
never uses more than 62 shards no matter how many nodes are available.

None of these are good solutions, I'm afraid.
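
For the first of those options, the loop is roughly as follows.  This is only a sketch: it assumes each shard file holds a table t with the same schema as main.t, and the names gather_shards, shard_paths and n_shards are my own placeholders for whatever your code uses.

#include <sqlite3.h>

/* Pull rows from many shard files into the main database, attaching at
** most one shard at a time so the attached-database limit never bites.
** Error handling is kept minimal for brevity. */
static int gather_shards(sqlite3 *db, const char **shard_paths, int n_shards){
  int i;
  for(i=0; i<n_shards; i++){
    char *sql = sqlite3_mprintf(
      "ATTACH DATABASE %Q AS shard;"
      "INSERT INTO main.t SELECT * FROM shard.t;"
      "DETACH DATABASE shard;",
      shard_paths[i]);
    int rc;
    if( sql==0 ) return SQLITE_NOMEM;
    rc = sqlite3_exec(db, sql, 0, 0, 0);
    sqlite3_free(sql);
    if( rc!=SQLITE_OK ) return rc;
  }
  return SQLITE_OK;
}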

> Also, is there any way for SQLite to create an automatic index on a view (or 
> Virtual Table), without having to first materialize the view (or VT)?

I believe SQLite needs the data gathered into one place (a real table, or at 
the very least a virtual table it can scan) before its indexing routines can 
work on it.
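
So as far as I know the workaround is exactly the materialisation you're trying to avoid: snapshot the view into a TEMP table and index that.  A sketch of my own, with my_view and the column k standing in for your schema:

#include <sqlite3.h>

/* Materialise a view into a TEMP table, then build a real index on it.
** "my_view" and the column "k" are placeholders for your own schema. */
static int materialise_and_index(sqlite3 *db){
  return sqlite3_exec(db,
    "CREATE TEMP TABLE mat AS SELECT * FROM my_view;"
    "CREATE INDEX temp.mat_k ON mat(k);",
    0, 0, 0);
}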

If you're willing to put in a bit of SQLite-only effort, you could write your 
own virtual table implementation that consults the data on each of your nodes.  
It would be quite highly customised to your own application's requirements, but 
it would mean you didn't have to do any attaching or detaching at all.  Your 
SQLite API calls could address your data as if it were all in one database file, 
but SQLite would understand how the data is partitioned between nodes and 
automatically gather it from all the necessary nodes.

<http://www.sqlite.org/vtab.html>
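
To give a feel for the shape of it, here is a bare skeleton of my own (not code from the documentation): it registers a module and serves a single dummy row, just enough to compile and run.  All the interesting work for your case would go into xBestIndex, xFilter, xNext and xColumn, which is where you would talk to your nodes.  The module name "nodetable" and the one-column schema are made-up placeholders.

#include <string.h>
#include <sqlite3.h>

/* Cursor for the skeleton: serves a single dummy row.  A real cursor
** would hold per-node scan state instead of a "done" flag. */
typedef struct { sqlite3_vtab_cursor base; int done; } node_cursor;

static int nodeConnect(sqlite3 *db, void *pAux, int argc,
                       const char *const *argv,
                       sqlite3_vtab **ppVtab, char **pzErr){
  int rc = sqlite3_declare_vtab(db, "CREATE TABLE x(value)");
  if( rc!=SQLITE_OK ) return rc;
  *ppVtab = sqlite3_malloc(sizeof(sqlite3_vtab));
  if( *ppVtab==0 ) return SQLITE_NOMEM;
  memset(*ppVtab, 0, sizeof(sqlite3_vtab));
  return SQLITE_OK;
}
static int nodeDisconnect(sqlite3_vtab *pVtab){
  sqlite3_free(pVtab);
  return SQLITE_OK;
}
static int nodeOpen(sqlite3_vtab *pVtab, sqlite3_vtab_cursor **ppCursor){
  node_cursor *pCur = sqlite3_malloc(sizeof(node_cursor));
  if( pCur==0 ) return SQLITE_NOMEM;
  memset(pCur, 0, sizeof(node_cursor));
  *ppCursor = &pCur->base;
  return SQLITE_OK;
}
static int nodeClose(sqlite3_vtab_cursor *cur){
  sqlite3_free(cur);
  return SQLITE_OK;
}
static int nodeFilter(sqlite3_vtab_cursor *cur, int idxNum, const char *idxStr,
                      int argc, sqlite3_value **argv){
  /* A real implementation starts its scan here, pushing any constraints
  ** chosen in xBestIndex down to the nodes. */
  ((node_cursor*)cur)->done = 0;
  return SQLITE_OK;
}
static int nodeNext(sqlite3_vtab_cursor *cur){
  ((node_cursor*)cur)->done = 1;            /* only one dummy row */
  return SQLITE_OK;
}
static int nodeEof(sqlite3_vtab_cursor *cur){
  return ((node_cursor*)cur)->done;
}
static int nodeColumn(sqlite3_vtab_cursor *cur, sqlite3_context *ctx, int i){
  /* Here you would return the value fetched from whichever node holds
  ** the current row. */
  sqlite3_result_text(ctx, "value from some node", -1, SQLITE_STATIC);
  return SQLITE_OK;
}
static int nodeRowid(sqlite3_vtab_cursor *cur, sqlite3_int64 *pRowid){
  *pRowid = 1;
  return SQLITE_OK;
}
static int nodeBestIndex(sqlite3_vtab *pVtab, sqlite3_index_info *pInfo){
  pInfo->estimatedCost = 1000000;           /* scanning every node is costly */
  return SQLITE_OK;
}

static sqlite3_module nodeModule = {
  0,               /* iVersion */
  nodeConnect,     /* xCreate: same as xConnect, no persistent state to set up */
  nodeConnect,     /* xConnect */
  nodeBestIndex,
  nodeDisconnect,  /* xDisconnect */
  nodeDisconnect,  /* xDestroy */
  nodeOpen, nodeClose, nodeFilter, nodeNext, nodeEof,
  nodeColumn, nodeRowid,
  0, 0, 0, 0, 0, 0, 0   /* xUpdate..xRename: read-only, nothing extra */
};

/* Call once per connection, then:
**   CREATE VIRTUAL TABLE mydata USING nodetable;
**   SELECT * FROM mydata; */
int register_nodetable(sqlite3 *db){
  return sqlite3_create_module(db, "nodetable", &nodeModule, 0);
}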

Another way to do it would be to implement your own VFS, which would distribute 
the data over the nodes not at the row level but at the storage level, as if 
they were all one huge storage medium (i.e. like a RAID).

<http://www.sqlite.org/vfs.html>
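
Registering a VFS is mechanically simple; the hard part is what goes behind it.  The sketch below is my own (the name "farm" is made up): it just clones the default VFS under a new name, which changes nothing by itself.  A real distributed VFS would replace xOpen, and the sqlite3_io_methods it hands back would do the cross-node reads and writes.

#include <sqlite3.h>

static sqlite3_vfs farm_vfs;   /* filled in by register_farm_vfs() */

/* Register a VFS named "farm" that simply delegates everything to the
** default VFS.  A real implementation would install its own xOpen and
** its own read/write methods so that page I/O is spread across the
** nodes as if they were one big disk. */
int register_farm_vfs(void){
  sqlite3_vfs *pDefault = sqlite3_vfs_find(0);   /* 0 = default VFS */
  if( pDefault==0 ) return SQLITE_ERROR;
  farm_vfs = *pDefault;          /* start as an exact copy */
  farm_vfs.zName = "farm";       /* but under our own name */
  return sqlite3_vfs_register(&farm_vfs, 0);     /* 0 = not the default */
}

/* Open a database through it with:
**   sqlite3_open_v2("mydb", &db,
**                   SQLITE_OPEN_READWRITE|SQLITE_OPEN_CREATE, "farm");
*/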

I don't know which, if either, to recommend.  This kind of programming is 
beyond me, but someone into C and with a good understanding of your farm should 
be able to do it.

Simon.