Re: [sqlite] [EXTERNAL] Common index for multiple databases

2018-08-05 Thread Jim Callahan
Back off from the index semantics for a second.

If Gunter Hick has correctly captured, at the application level, what you
are trying to do (remote databases), then I think the name of the concept
we are looking for is "Eventual Consistency".
https://en.wikipedia.org/wiki/Eventual_consistency

SQL databases (as opposed to Xbase) are built around the concept of ACID
transactions, which implies a centralized database where everything can be
reconciled immediately. The delayed processing of "Eventual Consistency" is
implemented at the application level; for example, there are applications
written for PostgreSQL to make complete replica databases "eventually
consistent". That would be overkill in your case, but the point remains:
"eventual consistency" has to be implemented in the application space above
the core SQL level.

So, at the application level, what you want to do is create subset tables
for a particular purpose.
Good news! SQL is fantastic at creating subsets of rows and columns of
data.

In a new SQLite database, ATTACH the main database, write a query that
describes the subset of data you need, and then wrap that query in a
"CREATE TABLE ... AS query;" statement, as sketched below. That will create
a subset of the data (without indexes). Rebuild the indexes you need on the
local table (do not attempt to copy indexes!).
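
A minimal sketch of that step; the database file, table, and column names
(main.db, customers, cust_no, region) are illustrative, not taken from the
thread:

    -- run inside the new (subset) SQLite database; all names are illustrative
    ATTACH DATABASE 'main.db' AS office;

    -- copy only the rows and columns this user needs (indexes do not come along)
    CREATE TABLE local_customers AS
        SELECT cust_no, name, balance
        FROM office.customers
        WHERE region = 'FIELD-OFFICE-1';

    -- rebuild the index you need on the local copy
    CREATE INDEX idx_local_customers_cust_no ON local_customers(cust_no);

    DETACH DATABASE office;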

The application logic needs to use the subset database to build
time-stamped transactions to run against the main database; a sketch of
one possible log table follows.
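
One possible shape for that time-stamped log, kept in the subset database;
the thread does not prescribe a schema, so every name here is illustrative:

    -- hypothetical application-level transaction log
    CREATE TABLE worklog (
        log_id    INTEGER PRIMARY KEY,
        logged_at TEXT NOT NULL DEFAULT (strftime('%Y-%m-%dT%H:%M:%fZ','now')),
        cust_no   INTEGER NOT NULL,
        action    TEXT NOT NULL,   -- e.g. 'UPDATE_BALANCE', 'NEW_NOTE'
        payload   TEXT,            -- serialized change data (JSON, etc.)
        synced    INTEGER NOT NULL DEFAULT 0   -- set to 1 once uploaded
    );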

Your application needs a module that accepts all the remote time-stamped
transactions and queues them up to feed into the main database. You have to
decide how your application should handle conflicting transactions (see the
"eventual consistency" article); one possible policy is sketched below.
https://en.wikipedia.org/wiki/Eventual_consistency
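
Purely as an illustration of timestamp ordering and one possible conflict
rule (last-write-wins), assuming the hypothetical worklog rows above have
already been copied into the main database and that customers carries an
updated_at column (both are my assumptions):

    -- feed queued changes into the main database oldest-first
    SELECT log_id, logged_at, cust_no, action, payload
    FROM worklog
    ORDER BY logged_at, log_id;

    -- last-write-wins: an older remote edit never overwrites a newer row
    UPDATE customers
    SET balance    = :new_balance,
        updated_at = :logged_at
    WHERE cust_no = :cust_no
      AND updated_at < :logged_at;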

Then run your consistent, application-level transaction log against the
main database.

Don't worry about "copying" indexes. As Dr. Hipp suggests, copying indexes
is a non-starter in the SQL world. Just copy the data and rebuild your
indexes on the subset data. If you want to make sure you don't create a
duplicate customer number, copy the column of customer numbers to a
separate table, index it, and join it to your subset table (a sketch
follows).
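
One way to sketch that duplicate-number guard; the names are illustrative,
and the UNIQUE index is my reading of the intent:

    -- copy just the existing account numbers from the attached office database
    CREATE TABLE known_cust_nos AS
        SELECT DISTINCT cust_no FROM office.customers;

    -- assumption: a UNIQUE index is what enforces "no duplicate customer number"
    CREATE UNIQUE INDEX idx_known_cust_nos ON known_cust_nos(cust_no);

    -- check a proposed new account number against the known ones before using it
    SELECT EXISTS(
        SELECT 1 FROM known_cust_nos WHERE cust_no = :proposed_no
    ) AS already_taken;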

So, in short, you can't copy indexes, but you can copy any subset of data
and re-index that subset.
"Eventual consistency" has to be handled at the application level above the
SQL core (which only
handles "ACID consistency").

HTH

Jim Callahan
Callahan Data Science LLC
Orlando, FL




On Fri, Aug 3, 2018 at 5:41 AM, Hick Gunter  wrote:

> This is what I think you are asking:
>
> - You have a "main office" computer that holds the current information on
> "everything"
> - You have several different categories of users (technicians, accountant,
> ...) that require different subsets of the data
> - Each user has his own computer, that may be disconnected from the "main
> office", e.g. for "field work"
> - When a user's computer is "attached" to the "main office", it needs to
> be "synchronized".
>
> If this is correct, then you require either a "distributed" DBMS that
> handles synchronization by itself, or you need to do some programming both
> inside and outside  of SQLite.
>
> This may be appropriate for you:
>
> - As already stated, SQLite has just 1 file to hold all tables and indexes
> of the schema. Make this identical for all users. You can always leave the
> tables empty with just minimal overhead.
> - Downloading from "office" to "user" is accomplished by using ATTACH to
> make the "user" and "office" databases accessible. Just run the appropriate
> INSERT ... INTO statements. Check the authorizer callback to allow
> different users to access only the tables/fields that they are allowed to
> see. Limiting the rows requires an appropriate WHERE clause.
> - "Work" done by the user while offline needs to be saved in a worklog
> table.
> - Uploading the "work" of a user would copy the new worklog records into
> the "office" worklog table, just another INSERT ... INTO, to be processed
> by a dedicated sync application.
>
> -----Original Message-----
> From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org]
> On Behalf Of John R. Sowden
> Sent: Thursday, 02 August 2018 19:12
> To: sqlite-users@mailinglists.sqlite.org
> Subject: [EXTERNAL] [sqlite] Common index for multiple databases
>
> I have been reviewing sqlite for a couple of years, but still use foxpro.
> I have a question regarding an index issue.
>
> Currently I have several types of databases (in foxpro, one per file) that
> all point to an index of a common field, a customer account number.  The
> databases are for accounting, technical, general info lookup, etc.

Re: [sqlite] [EXTERNAL] Common index for multiple databases

2018-08-03 Thread Hick Gunter
This is what I think you are asking:

- You have a "main office" computer that holds the current information on 
"everything"
- You have several different categories of users (technicians, accountant, ...) 
that require different subsets of the data
- Each user has his own computer, that may be disconnected from the "main 
office", e.g. for "field work"
- When a user's computer is "attached" to the "main office", it needs to be 
"synchronized".

If this is correct, then you require either a "distributed" DBMS that handles 
synchronization by itself, or you need to do some programming both inside and 
outside  of SQLite.

This may be appropriate for you:

- As already stated, SQLite has just 1 file to hold all tables and indexes of 
the schema. Make this identical for all users. You can always leave the tables 
empty with just minimal overhead.
- Downloading from "office" to "user" is accomplished by using ATTACH to make 
the "user" and "office" databases accessible. Just run the appropriate INSERT 
... INTO statements. Check the authorizer callback to allow different users to 
access only the tables/fields that they are allowed to see. Limiting the rows 
requires an appropriate WHERE clause.
- "Work" done by the user while offline needs to be saved in a worklog table.
- Uploading the "work" of a user would copy the new worklog records into the
"office" worklog table, just another INSERT ... INTO, to be processed by a
dedicated sync application (see the sketch after this list).
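
A rough illustration of that upload step, assuming a worklog table with the
same illustrative columns in both databases and a synced flag on the user's
copy; none of these names come from the thread:

    -- run on the user's machine; 'office.db' and all column names are illustrative
    ATTACH DATABASE 'office.db' AS office;

    BEGIN;

    -- push new offline work into the office worklog for the sync application
    INSERT INTO office.worklog (logged_at, cust_no, action, payload)
        SELECT logged_at, cust_no, action, payload
        FROM worklog
        WHERE synced = 0;

    -- mark the local rows as uploaded so the next sync skips them
    UPDATE worklog SET synced = 1 WHERE synced = 0;

    COMMIT;

    DETACH DATABASE office;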

-----Original Message-----
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On
Behalf Of John R. Sowden
Sent: Thursday, 02 August 2018 19:12
To: sqlite-users@mailinglists.sqlite.org
Subject: [EXTERNAL] [sqlite] Common index for multiple databases

I have been reviewing sqlite for a couple of years, but still use foxpro.  I 
have a question regarding an index issue.

Currently I have several types of databases (in foxpro, one per file) that all 
point to an index of a common field, a customer account number.  The databases 
are for accounting, technical, general info lookup, etc.

I do not want these databases to all reside in one sqlite file.  How do I index 
each database on this customer account number when each database and associated 
index are in separate files?  Is this what seems to be referred to as an 
external file?  I assume that I would have to reindex each database each time 
it is opened, since a record could have been edited, etc.

tia,

John



___
 Gunter Hick | Software Engineer | Scientific Games International GmbH | 
Klitschgasse 2-4, A-1130 Vienna | FN 157284 a, HG Wien, DVR: 0430013 | (O) +43 
1 80100 - 0
