[sqlite] 64bit DLL vs 32bit

2016-05-06 Thread Keith Medcalf

> I have found that the Windows 32-bit DLL runs slower on a 64-bit machine
> than on a 32-bit one.  I would have thought that the calls from the
> application would have the same response on both machines since the
> application is a 32-bit application.  Does anyone think otherwise?  Thanks.

I presume you mean that running a 32-bit application on a 64-bit OS is slower 
than the same application run on a 32-bit OS.  Generally speaking this is not 
the case unless you are spending a significant portion of the execution time in 
non-optimized thunking code, which only exists when transitioning between 
"user" mode (32-bit) and "supervisor" mode (64-bit).

What in particular is slower?  And what versions of Windows are you comparing?






[sqlite] SQLite workshop

2016-05-06 Thread Gert Van Assche
Cecil,

I like the idea but I can't give an outline. It will depend on your
audience. Tables, Views, Indexes, Triggers, Joins...

One piece of advice I can give you: use an example anyone can understand. I
once had a db training where I got lost because the example was about
football, and the only thing I know about football is that there are 2 teams
of 11 players. That was enough to understand the first 10 minutes of the
training. And then I got lost. Not because the DB stuff was too complicated
but because football is too complicated ;-)

One of the things you should probably also explain is how to work in a
shell and how to work in a tool like SQLite Expert. Maybe also cover how
to use an SQLite db from a server-side environment like PHP, and how to
protect yourself against SQL injection. How to use extensions might be
useful as well.
You could ask your future audience what they need.

gert

2016-05-05 17:17 GMT+02:00 Cecil Westerhof :

> Last year I gave a presentation at T-DOSE about SQLite. It was received
> enthusiastically, so I am thinking about giving a set of workshops about SQLite
> this year. What are good subjects to treat? And what are the pitfalls to
> expect?
> I am thinking of doing it on several levels. At least for people who have never
> worked with databases but would benefit from using something better than
> just files, and for people who have experience with databases but would like
> to switch to SQLite for certain cases.
>
> --
> Cecil Westerhof
> ___
> sqlite-users mailing list
> sqlite-users at mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>


[sqlite] Incremental backup/sync facility?

2016-05-06 Thread Hick Gunter
No, you just have to log the bound parameters and a reference to the prepared 
statement (so the other side will know which statement to prepare).
Or just log the statement and the parameters each time.

-Original Message-
From: sqlite-users-bounces at mailinglists.sqlite.org 
[mailto:sqlite-users-boun...@mailinglists.sqlite.org] On Behalf Of Jim Morris
Sent: Freitag, 06. Mai 2016 20:14
To: sqlite-users at mailinglists.sqlite.org
Subject: Re: [sqlite] Incremental backup/sync facility?

Doesn't this eliminate the use of prepared statements?


On 5/6/2016 11:10 AM, Jeffrey Mattox wrote:
> As an aside, this is how Apple syncs Core Data to iCloud (and then to 
> multiple iOS devices) if the backing store uses SQLite (the default).  When a 
> small amount of data changes (which is common), the changes get sent out, not 
> the entire (mostly unchanged and potentially huge) database.
>
> Jeff
>
>
>> On May 6, 2016, at 7:43 AM, Simon Slavin  wrote:
>>
>> Believe it or not, the fastest way to synchronise the databases is not to 
>> synchronise the databases.  Instead you keep a log of the instructions used 
>> to modify the database.  You might, for example, modify the library that you 
>> use for INSERT, DELETE and UPDATE commands to execute those commands and 
>> also save the command to another 'commandLog' table


___
 Gunter Hick
Software Engineer
Scientific Games International GmbH
FN 157284 a, HG Wien
Klitschgasse 2-4, A-1130 Vienna, Austria
Tel: +43 1 80100 0
E-Mail: hick at scigames.at

This communication (including any attachments) is intended for the use of the 
intended recipient(s) only and may contain information that is confidential, 
privileged or legally protected. Any unauthorized use or dissemination of this 
communication is strictly prohibited. If you have received this communication 
in error, please immediately notify the sender by return e-mail message and 
delete all copies of the original communication. Thank you for your cooperation.




[sqlite] 64bit DLL vs 32bit

2016-05-06 Thread Simon Slavin

On 6 May 2016, at 5:36pm, Jose I. Cabrera  wrote:

> I have found that the Windows 32-bit DLL runs slower on a 64-bit machine than 
> on a 32-bit one.  I would have thought that the calls from the application would 
> have the same response on both machines since the application is a 32-bit 
> application. Does anyone think otherwise?

Yes.  When you call a 32-bit DLL or run a 32-bit program from a 64-bit operating 
system, constant translation between the two word sizes is taking place.  Not 
everything needs to be translated, but enough of it does that you not only have 
the bottleneck of the DLL's 32-bit speed, you also have to spend extra time 
getting data to it and from it.

Simon.


[sqlite] Incremental backup/sync facility?

2016-05-06 Thread Jean-Luc Hainaut
At 14:43 on 06/05/2016, Simon Slavin wrote:

>On 6 May 2016, at 1:32pm, Stephan Buchert  wrote:
>
>> The largest database file has now grown to about 180 GB. I need to have
>> copies of the files at at least two different places. The databases are
>> updated regularly as new data from the satellites become available.
>> 
>> Having the copies of the file synced becomes increasingly tedious
>> as their sizes increase. Ideal would be some kind of
>> incremental backup/sync facility.
>
>Believe it or not, the fastest way to synchronise the databases is not to 
>synchronise the databases.  Instead you keep a log of the instructions used to 
>modify the database.  You might, for example, modify the library that you use 
>for INSERT, DELETE and UPDATE commands to execute those commands and also save 
>the command to another 'commandLog' table.  Or perhaps just append those 
>commands to a plain text file.
>
>Then instead of sending any data to the other sites you send this list of 
>commands to the other sites and have them execute them.
>
>Once you start implementing this you'll see that it's more complicated than I 
>have described but the text of your post suggests that you're a good enough 
>programmer to do it properly.

This does not need to be so complicated: the problem can be solved by three 
triggers per table that insert the components of the three data-modification 
commands into a log table. At defined points in time, the contents of this table 
are used to generate the SQL data-modification commands for the other DB.
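A minimal sketch of that trigger-based log, using Python's stdlib sqlite3 and a hypothetical data/change_log schema (the real tables and columns would of course differ):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE data(id INTEGER PRIMARY KEY, value TEXT);
CREATE TABLE change_log(
    op TEXT, tbl TEXT, row_id INTEGER, value TEXT,
    logged_at TEXT DEFAULT CURRENT_TIMESTAMP
);
-- One trigger per data-modification command.
CREATE TRIGGER data_ins AFTER INSERT ON data BEGIN
    INSERT INTO change_log(op, tbl, row_id, value)
    VALUES ('I', 'data', NEW.id, NEW.value);
END;
CREATE TRIGGER data_upd AFTER UPDATE ON data BEGIN
    INSERT INTO change_log(op, tbl, row_id, value)
    VALUES ('U', 'data', NEW.id, NEW.value);
END;
CREATE TRIGGER data_del AFTER DELETE ON data BEGIN
    INSERT INTO change_log(op, tbl, row_id, value)
    VALUES ('D', 'data', OLD.id, OLD.value);
END;
""")

conn.execute("INSERT INTO data(value) VALUES ('a')")
conn.execute("UPDATE data SET value = 'b' WHERE id = 1")
conn.execute("DELETE FROM data WHERE id = 1")
ops = [row[0] for row in conn.execute("SELECT op FROM change_log ORDER BY rowid")]
print(ops)  # ['I', 'U', 'D']
```

A periodic job would then read change_log, emit the equivalent INSERT/UPDATE/DELETE statements for the other DB, and clear the rows it has shipped.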

J-L Hainaut




[sqlite] 64bit DLL vs 32bit

2016-05-06 Thread Jose I. Cabrera
Greetings!

I have found that the Windows 32-bit DLL runs slower on a 64-bit machine than on 
a 32-bit one.  I would have thought that the calls from the application would have 
the same response on both machines since the application is a 32-bit 
application.  Does anyone think otherwise?  Thanks.

josé


[sqlite] Incremental backup/sync facility?

2016-05-06 Thread Simon Slavin

On 6 May 2016, at 3:40pm, Gerry Snyder  wrote:

> One feature of SQLite -- the whole database in one file -- is normally an 
> advantage but becomes less so when the file is huge.

Believe me.  It's still a huge advantage.  Have you ever tried to copy a MySQL 
database off a non-working server by figuring out which files contained it ?  
That's three days of my life I wish I could forget.

Simon.


[sqlite] The Session Extension (future SQLite extension)

2016-05-06 Thread Simon Slavin

On 6 May 2016, at 1:51pm, Richard Hipp  wrote:

> I bungled the copy/paste of that URL.  Should be
> https://www.sqlite.org/draft/sessionintro.html

That's very interesting.  I note that



says "This function does not change the order in which changes appear within 
the changeset. It merely reverses the sense of each individual change.".

This will not be useful to anyone trying to undo a set of changes to one row.

CREATE TABLE testTable (theText TEXT);
INSERT INTO testTable VALUES ('A');
-- session starts here
UPDATE testTable SET theText='B';   -- changes A to B
UPDATE testTable SET theText='C';   -- changes B to C
-- session ends here

Inverting that without reversing the order will yield

-- changes B to A (the documentation suggests that this might be ignored ?)
-- changes C to B

The result would be to leave the value at B, which was true neither at the 
beginning nor end of the session.

Are we meant to iterate backwards through the inverted changeset ?  I don't see 
a way to do it.

Simon.


[sqlite] Incremental backup/sync facility?

2016-05-06 Thread Stephan Buchert

We are using SQLite for data from satellite Earth observations. It
works very well. Thanks to everybody contributing to SQLite, above all
Dr. Hipp.

The largest database file has now grown to about 180 GB. I need to have
copies of the files in at least two different places. The databases are
updated regularly as new data from the satellites become available.

Having the copies of the file synced becomes increasingly tedious
as their sizes increase. Ideal would be some kind of
incremental backup/sync facility.

I looked at https://www.sqlite.org/backup.html, but, as it does
a complete backup, this takes a long time for such large
databases (files are mounted over networks), and other work gets
blocked.

An alternative is perhaps https://www.sqlite.org/sqldiff.html, but it
is also slow for large databases. Most of the data are binary, and the
SQL input/output of sqldiff/sqlite3 is probably not very efficient.

A solution might be to add an extension to the standard VFS
http://www.sqlite.org/vfs.html. Each page would need a counter which
increments when the page is modified (and there seems to
be spare space for such counters). Then the backup could test which
pages need to be updated. Is there any chance that such an extension
could be added?

A kind of hack-ish solution might be to update the primary database
files in WAL mode, copy only the WAL file to the secondary place,
and force a WAL checkpoint there. Would this work?

/Stephan


[sqlite] Incremental backup/sync facility?

2016-05-06 Thread Richard Hipp
On 5/6/16, Hick Gunter  wrote:
> No, you just have to log the bound parameters and a reference to the prepared
> statement (so the other side will know which statement to prepare).
> Or just log the statement & the parameters each time.

The sqlite3_trace() interface fills in the values for the parameters
automatically.
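For illustration, Python's stdlib sqlite3 exposes the same tracing hook via Connection.set_trace_callback. Whether the bound parameter values appear expanded in the reported text depends on the Python/SQLite version, so this sketch only asserts that statements are captured:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
statements = []
# The callback receives each SQL statement SQLite actually executes,
# including implicit BEGINs issued by the Python wrapper.
conn.set_trace_callback(statements.append)

conn.execute("CREATE TABLE t(x)")
conn.execute("INSERT INTO t VALUES (?)", (42,))
print(statements)
```

Shipping the captured list to a replica and re-executing it is the statement-log scheme discussed above.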
-- 
D. Richard Hipp
drh at sqlite.org


[sqlite] Is it faster with descending index

2016-05-06 Thread Ertan Küçükoğlu
I prepared a test application on Windows, using the SQLite 3.12.2 32-bit DLL.

- Just one table per database.
- The table in database1 was created with the SQL below; for database2 I added 
WITHOUT ROWID at the end when creating the test table.
- The application inserted 500,000 randomly generated records into database1.
- For each field the application generated random strings of exactly the field 
size (a Char(30) field got 30-character values, a Char(60) field 60-character 
values, and so on).
- Generated characters ranged from Chr(32) to Chr(90) in the ASCII table, so 
nothing non-printable was inserted.
- The application then selected all rows from database1 and inserted them into 
database2.
- The test showed that having the insert values ready is faster than generating 
them.

As a result, database2 is about 5.2% smaller. I do not know how much faster 
queries will be on the smaller database, but it is still a gain.

Table SQL:
CREATE TABLE URUN(
 Barkod Char(30) NOT NULL PRIMARY KEY,
 UrunKodu Char(50),
 UrunAciklamasi Char(60),
 UrunGrubu Char(30),
 Renk Char(20),
 Beden Char(20),
 Cup Char(20),
 OlcuBirimi Char(10),
 AlisFiyat Char(20),
 SatisFiyat Char(20)
); 

Thanks.

Ertan Küçükoğlu


-Original Message-
From: sqlite-users-bounces at mailinglists.sqlite.org 
[mailto:sqlite-users-boun...@mailinglists.sqlite.org] On Behalf Of Richard Hipp
Sent: Friday, May 6, 2016 12:51 AM
To: SQLite mailing list 
Subject: Re: [sqlite] Is it faster with descending index

On 5/5/16, Ertan Küçükoğlu  wrote:
>
> Table create SQL:
> CREATE TABLE URUN(
>  Barkod Char(30) NOT NULL PRIMARY KEY,  ...
> );
>
> My question is, if I add following index, will it make my searches faster?
>
> CREATE UNIQUE INDEX IDX_BARKOD_DESC ON URUN(BARKOD);

No.

But if you change your table schema to read:

  CREATE TABLE URUN(
 Barkod char(30) PRIMARY KEY,
 
  ) WITHOUT ROWID;

Then it might well generate a smaller and faster database.  It seems like it is 
worth a try, anyhow.
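The suggested change can be tried in a few lines of stdlib Python (a hypothetical two-column cut of the URUN schema, not the full table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# WITHOUT ROWID stores rows directly in the primary-key B-tree, so there
# is no separate rowid tree plus PK index -- hence the smaller file.
conn.execute("""
CREATE TABLE URUN(
    Barkod TEXT NOT NULL PRIMARY KEY,
    UrunKodu TEXT
) WITHOUT ROWID
""")
conn.execute("INSERT INTO URUN VALUES ('123', 'abc')")
row = conn.execute("SELECT UrunKodu FROM URUN WHERE Barkod = '123'").fetchone()
print(row[0])  # abc
```

Lookups by Barkod go straight through the primary-key tree; no extra index is needed.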
--
D. Richard Hipp
drh at sqlite.org



[sqlite] Incremental backup/sync facility?

2016-05-06 Thread Simon Slavin

On 6 May 2016, at 1:32pm, Stephan Buchert  wrote:

> The largest database file has now grown to about 180 GB. I need to have
> copies of the files at at least two different places. The databases are
> updated regularly as new data from the satellites become available.
> 
> Having the copies of the file synced becomes increasingly tedious
> as their sizes increase. Ideal would be some kind of
> incremental backup/sync facility.

Believe it or not, the fastest way to synchronise the databases is not to 
synchronise the databases.  Instead you keep a log of the instructions used to 
modify the database.  You might, for example, modify the library that you use 
for INSERT, DELETE and UPDATE commands to execute those commands and also save 
the command to another 'commandLog' table.  Or perhaps just append those 
commands to a plain text file.

Then instead of sending any data to the other sites you send this list of 
commands to the other sites and have them execute them.

Once you start implementing this you'll see that it's more complicated than I 
have described but the text of your post suggests that you're a good enough 
programmer to do it properly.

This assumes that the structure and primary keys of the tables which hold data 
are constructed in such a way that the order in which new data is entered 
doesn't matter.
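A toy sketch of that library change in stdlib Python, with a hypothetical LoggingDB wrapper and commandLog schema (the real version would live inside the existing data-access layer):

```python
import json
import sqlite3

class LoggingDB:
    """Execute a statement and append it, with its parameters, to a
    commandLog table for later replay at the other sites."""

    def __init__(self, path):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS commandLog(sql TEXT, params TEXT)")

    def execute(self, sql, params=()):
        cur = self.conn.execute(sql, params)
        # Log the statement itself; replay = read commandLog in rowid order.
        self.conn.execute("INSERT INTO commandLog VALUES (?, ?)",
                          (sql, json.dumps(list(params))))
        return cur

db = LoggingDB(":memory:")
db.execute("CREATE TABLE t(x)")
db.execute("INSERT INTO t VALUES (?)", (1,))
log = db.conn.execute("SELECT sql FROM commandLog").fetchall()
print(len(log))  # 2
```

A replica applies the log by executing each (sql, params) pair in order, which is why the ordering caveat above matters.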

Simon.


[sqlite] Incremental backup/sync facility?

2016-05-06 Thread Jeffrey Mattox
As an aside, this is how Apple syncs Core Data to iCloud (and then to multiple 
iOS devices) if the backing store uses SQLite (the default).  When a small 
amount of data changes (which is common), the changes get sent out, not the 
entire (mostly unchanged and potentially huge) database.

Jeff


> On May 6, 2016, at 7:43 AM, Simon Slavin  wrote:
> 
> Believe it or not, the fastest way to synchronise the databases is not to 
> synchronise the databases.  Instead you keep a log of the instructions used 
> to modify the database.  You might, for example, modify the library that you 
> use for INSERT, DELETE and UPDATE commands to execute those commands and also 
> save the command to another 'commandLog' table


[sqlite] Incremental backup/sync facility?

2016-05-06 Thread Hick Gunter
May I suggest using a statement log of the "primary" database to update the 
secondary sites? If you have a sequence counter (separate or embedded), then the 
state of the database could be recovered from a backup (at sequence counter x) 
plus the log files for all statements from x+1 to "current".

-Original Message-
From: sqlite-users-bounces at mailinglists.sqlite.org 
[mailto:sqlite-users-boun...@mailinglists.sqlite.org] On Behalf Of Stephan 
Buchert
Sent: Freitag, 06. Mai 2016 14:32
To: sqlite-users at mailinglists.sqlite.org
Subject: [sqlite] Incremental backup/sync facility?


We are using Sqlite for data from satellite Earth observations. It works very 
well. Thanks to everybody contributing to Sqlite, uppermost Dr. Hipp.

The largest database file has now grown to about 180 GB. I need to have copies 
of the files at at least two different places. The databases are updated 
regularly as new data from the satellites become available.

Having the copies of the file synced becomes increasingly tedious as their 
sizes increase. Ideal would be some kind of incremental backup/sync facility.

I looked at https://www.sqlite.org/backup.html, but, as it does a complete 
backup, this takes a long time for such large databases (files are mounted over 
networks), and other work gets blocked.

An alternative is perhaps https://www.sqlite.org/sqldiff.html, but it is also 
slow for large databases. Most of the data are binary, and the SQL input/output 
of sqldiff/sqlite3 is probably not very efficient.

A solution might be to add an extension to the standard VFS 
http://www.sqlite.org/vfs.html. Each page would need a counter which increments 
when the page is modified (and there seems to be spare place for such 
counters). Then the backup could test which pages need to be updated. Is there 
any chance that such an extension could be added?

A kind of hack-ish solution might be to update the primary database files in 
WAL mode, copy only the WAL file to the secondary place, and force a WAL 
checkpoint there. Would this work?

/Stephan






[sqlite] Fastest way to backup/copy database?

2016-05-06 Thread Rowan Worth
On 4 May 2016 at 20:39, Rob Willett  wrote:

> Clemens,
>
> We have 8GB of memory which is the most our VPS provider allows. We'd like
> 32GB but it's not an option for us. Our desktops have more than that, but
> the VPS provider is reasonably priced :)
>
> We hadn't considered WAL mode; my conclusion is that a simple change
> to our workflow is actually simpler: we stop the database updates for 15
> mins out of hours, cp, and then restart. It's not ideal, but we're not running
> a nuclear power station or a 24/7 medical facility. Users *may* not get
> traffic updates for 15 mins at 03:00 in the morning. The world will keep
> spinning.


Are users really forced to wait for a database update to receive traffic?
Assuming you run a BEGIN IMMEDIATE to prevent updates while the copy is in
progress -- which you must no matter how fast you can copy the database,
otherwise a transaction committing during the copy will likely result in a
corrupted backup -- connections that are reading the database can still
proceed.

Note that you can prevent the sqlite3_backup process from restarting when
the database changes by passing -1 as the number of pages to
sqlite3_backup_step ("copy all pages at once"). If you take this approach
you don't need BEGIN IMMEDIATE because sqlite will take care of the
locking, but I'm not sure how the i/o performance compares to cp or rsync.
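Python's stdlib sqlite3 wraps the same online-backup API, and its pages argument maps onto sqlite3_backup_step: a value <= 0 copies the whole database in one step, so a concurrent writer cannot force a restart. A minimal sketch:

```python
import sqlite3

src = sqlite3.connect(":memory:")
src.executescript("CREATE TABLE t(x); INSERT INTO t VALUES (1), (2);")

dst = sqlite3.connect(":memory:")
# pages=-1 == "copy all pages at once", the stdlib analogue of calling
# sqlite3_backup_step() with -1; SQLite handles the locking itself.
src.backup(dst, pages=-1)

count = dst.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)  # 2
```

Passing a small positive pages value instead copies incrementally, allowing readers in between steps at the cost of possible restarts when the source changes.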

-Rowan


[sqlite] Incremental backup/sync facility?

2016-05-06 Thread Jim Morris
Doesn't this eliminate the use of prepared statements?


On 5/6/2016 11:10 AM, Jeffrey Mattox wrote:
> As an aside, this is how Apple syncs Core Data to iCloud (and then to 
> multiple iOS devices) if the backing store uses SQLite (the default).  When a 
> small amount of data changes (which is common), the changes get sent out, not 
> the entire (mostly unchanged and potentially huge) database.
>
> Jeff
>
>
>> On May 6, 2016, at 7:43 AM, Simon Slavin  wrote:
>>
>> Believe it or not, the fastest way to synchronise the databases is not to 
>> synchronise the databases.  Instead you keep a log of the instructions used 
>> to modify the database.  You might, for example, modify the library that you 
>> use for INSERT, DELETE and UPDATE commands to execute those commands and 
>> also save the command to another 'commandLog' table



[sqlite] Incremental backup/sync facility?

2016-05-06 Thread Stephen Chrzanowski
Gerry;

I trashed the email I was going to send.  You had the same line of thought
as me regarding chopping the file on a per-day basis, but what made me
trash it was that any auto-numbered PKs would be a hassle in new files,
unless that information was put into the new DB upon creation.

I agree that when files get large, how the data is stored has to be
re-examined once backing up the beast becomes a logistical nightmare.  With
MySQL, data replication like that is something of a breeze.  With SQLite, the
convenience and portability it grants applications die when files become too
large to deal with for backup purposes.

On Fri, May 6, 2016 at 10:40 AM, Gerry Snyder 
wrote:

On 5/6/2016 5:32 AM, Stephan Buchert wrote:
>
>> We are using Sqlite for data from satellite Earth observations. It
>> works very well. Thanks to everybody contributing to Sqlite, uppermost
>> Dr. Hipp.
>>
>> The largest database file has now grown to about 180 GB.
>>
>
> One feature of SQLite -- the whole database in one file -- is normally an
> advantage but becomes less so when the file is huge.
>
>
>   I need to have
>> copies of the files at at least two different places. The databases are
>> updated regularly as new data from the satellites become available.
>>
>
> Others have suggested keeping track of changes using a logging file. That
> allows all the SELECT statements to remain unchanged, while complicating
> the input side of things.
>
> Another approach could be to have new observations go into a separate file
> (one day's worth, or whatever makes sense). This much smaller file could be
> shipped to the other site(s) and then merged into each copy of the main
> database. Almost no changes to the input logic, but every SELECT would have
> to use a JOIN. No idea how painful a process that would be.
> Gerry
>


[sqlite] Incremental backup/sync facility?

2016-05-06 Thread Roger Binns
On 06/05/16 05:32, Stephan Buchert wrote:
> Having the copies of the file synced becomes increasingly tedious
> as their sizes increase. Ideal would be some kind of
> incremental backup/sync facility.

Out of curiosity, would an approach of using multiple databases,
using ATTACH to "unify" them, work?  The individual databases could
represent shorter amounts of time (e.g. a week) and, as you state, smaller
files are easier to deal with.

Roger




[sqlite] Incremental backup/sync facility?

2016-05-06 Thread Richard Hipp
On 5/6/16, Richard Hipp  wrote:
> On 5/6/16, Simon Slavin  wrote:
>>
>> Believe it or not, the fastest way to synchronise the databases is not to
>> synchronise the databases.  Instead you keep a log of the instructions
>> used
>> to modify the database.
>
> Or, this might be an even better solution.  Note that the
> sqlite3_trace() function (see
> https://www.sqlite.org/c3ref/profile.html) can help here.
>
> A third solution would be to use the session extension, which will be
> merged to trunk on the next release.  See
> https://www.sqlite.org/c3ref/profile.html

I bungled the copy/paste of that URL.  Should be
https://www.sqlite.org/draft/sessionintro.html

> for details.  One advantage
> of using the session extension is that it will work even if the two
> database files are not identical at the onset.  A disadvantage is that
> the session extension only works with rowid tables, not WITHOUT ROWID
> tables.
>
> --
> D. Richard Hipp
> drh at sqlite.org
>


-- 
D. Richard Hipp
drh at sqlite.org


[sqlite] Incremental backup/sync facility?

2016-05-06 Thread Richard Hipp
On 5/6/16, Simon Slavin  wrote:
>
> Believe it or not, the fastest way to synchronise the databases is not to
> synchronise the databases.  Instead you keep a log of the instructions used
> to modify the database.

Or, this might be an even better solution.  Note that the
sqlite3_trace() function (see
https://www.sqlite.org/c3ref/profile.html) can help here.

A third solution would be to use the session extension, which will be
merged to trunk on the next release.  See
https://www.sqlite.org/c3ref/profile.html for details.  One advantage
of using the session extension is that it will work even if the two
database files are not identical at the onset.  A disadvantage is that
the session extension only works with rowid tables, not WITHOUT ROWID
tables.

-- 
D. Richard Hipp
drh at sqlite.org


[sqlite] Incremental backup/sync facility?

2016-05-06 Thread Richard Hipp
On 5/6/16, Stephan Buchert  wrote:
>
> A kind of hack-ish solution might be to update the primary database
> files in WAL mode, copy only the WAL file to the secondary place,
> and force there WAL checkpoint. Would this work?
>

This sounds like the most promising solution to me.  We'll think on it some
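The moving parts of that scheme can be exercised on one machine with stdlib Python (a sketch of the primary side plus the forced checkpoint, not the actual two-site copy step):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "primary.db")
conn = sqlite3.connect(path)
conn.execute("PRAGMA journal_mode=WAL")   # changes now land in primary.db-wal
conn.execute("CREATE TABLE t(x)")
conn.execute("INSERT INTO t VALUES (1)")
conn.commit()

# The secondary site would copy primary.db-wal next to its copy of the
# main file and then force a checkpoint like this; TRUNCATE moves the
# frames into the main file and resets the WAL to zero length.
busy, wal_frames, moved = conn.execute(
    "PRAGMA wal_checkpoint(TRUNCATE)").fetchone()
print(busy, os.path.getsize(path + "-wal"))
```

Note the caveat implicit in the proposal: the secondary's main file must be byte-identical to the primary's at the moment the copied WAL was started, or the replayed frames will corrupt it.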
-- 
D. Richard Hipp
drh at sqlite.org


[sqlite] Incremental backup/sync facility?

2016-05-06 Thread Gerry Snyder
On 5/6/2016 5:32 AM, Stephan Buchert wrote:
> We are using Sqlite for data from satellite Earth observations. It
> works very well. Thanks to everybody contributing to Sqlite, uppermost
> Dr. Hipp.
>
> The largest database file has now grown to about 180 GB.

One feature of SQLite -- the whole database in one file -- is normally 
an advantage but becomes less so when the file is huge.


>   I need to have
> copies of the files at at least two different places. The databases are
> updated regularly as new data from the satellites become available.

Others have suggested keeping track of changes using a logging file. 
That allows all the SELECT statements to remain unchanged, while 
complicating the input side of things.

Another approach could be to have new observations go into a separate 
file (one day's worth, or whatever makes sense). This much smaller file 
could be shipped to the other site(s) and then merged into each copy of 
the main database. Almost no changes to the input logic, but every 
SELECT would have to use a JOIN. No idea how painful a process that would 
be.
>
> Having the copies of the file synced becomes increasingly tedious
> as their sizes increase. Ideal would be some kind of
> incremental backup/sync facility.
>
> I looked at https://www.sqlite.org/backup.html, but, as it does
> a complete backup, this takes a long time for such large
> databases (files are mounted over networks), and other work gets
> blocked.
>
> An alternative is perhaps https://www.sqlite.org/sqldiff.html, but it
> is also slow for large databases. Most of the data are binary, and the
> SQL input/output of sqldiff/sqlite3 is probably not very efficient.
>
> A solution might be to add an extension to the standard VFS
> http://www.sqlite.org/vfs.html. Each page would need a counter which
> increments when the page is modified (and there seems to
> be spare place for such counters). Then the backup could test which
> pages need to be updated. Is there any chance that such an extension
> could be added?
>
> A kind of hack-ish solution might be to update the primary database
> files in WAL mode, copy only the WAL file to the secondary place,
> and force there WAL checkpoint. Would this work?
>
> /Stephan
>
Gerry


[sqlite] strftime accepts an illegal time string

2016-05-06 Thread Hick Gunter
>
>For instance, do you care if someone enters a time which is skipped by the 
>clocks going forward ?  If at 1am your clocks skip straight to 2am, do you 
>care if someone enters a time of 1:30am on that >day ?
>
>
>Simon.
>

Our local time skips from 2am to 3am and from 3am back to 2am for DST. This 
caused a job scheduled to run at 2:30 am to be skipped going into DST. But we 
made up for it by executing it twice when setting the clocks back ;)






[sqlite] .DUMP output compatibility

2016-05-06 Thread Tony Papadimitriou
Windows!

-Original Message- 
From: Richard Hipp
Sent: Friday, May 06, 2016 1:55 AM
To: SQLite mailing list
Subject: Re: [sqlite] .DUMP output compatibility

On 5/5/16, Tony Papadimitriou  wrote:
> Is it possible for .DUMP to produce table/field names quoted with `
> (backquote) instead of " (double quote) for compatibility with MySQL?
> Or is this already adjustable by some setting I missed?

Pipe the output through sed (https://en.wikipedia.org/wiki/Sed)
-- 
D. Richard Hipp
drh at sqlite.org
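For a sed-less Windows setup, roughly the same pipe can be written with stdlib Python's Connection.iterdump(). Like the sed approach, the quote swap below is a blunt instrument: it would also mangle double quotes inside string literals, so it is only safe on dumps that contain none.

```python
import re
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE "my table"(x)')
conn.execute('INSERT INTO "my table" VALUES (1)')

# Rewrite "double-quoted" identifiers as MySQL-style `backquoted` ones.
dump = [re.sub(r'"([^"]*)"', r'`\1`', line) for line in conn.iterdump()]
print(dump)
```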



[sqlite] Is it faster with descending index

2016-05-06 Thread Ertan Küçükoğlu
Hi,

I am using SQLite version 3.9.2 on WinCE devices. I have the below table in one
of my databases. The table sometimes holds 1.5 to 3 million records. The
database is accessed read-only on the device and is created from scratch on a PC
whenever its contents need to be updated. All database files, the application,
and some application-related files are on a microSD card, whose brand and size
vary from device to device.

The database is opened using the pragmas below:
PRAGMA page_size=4096;
PRAGMA journal_mode=MEMORY;
PRAGMA temp_store=2; -- memory temp store
PRAGMA locking_mode=EXCLUSIVE;

Table create SQL:
CREATE TABLE URUN(
 Barkod Char(30) NOT NULL PRIMARY KEY,
 UrunKodu Char(50),
 UrunAciklamasi Char(60),
 UrunGrubu Char(30),
 Renk Char(20),
 Beden Char(20),
 Cup Char(20),
 OlcuBirimi Char(10),
 SeriNo Char(1),
 Lot Char(1),
 SKT Char(1)
);

My question is, if I add following index, will it make my searches faster?

CREATE UNIQUE INDEX IDX_BARKOD_DESC ON URUN(BARKOD);

Application is developed using Lazarus 1.6. Table is opened in a TSQLQuery
and searches coded using "TSQLQuery.Locate('Barkod', Search_Value, []);"

Thanks & Regards,
Ertan Küçükoğlu




[sqlite] Version of the database

2016-05-06 Thread R Smith
The file header contains the version of the SQLite library that most recently 
modified the database file. You can see this using the CLI, but there is no 
pragma for it.

If you are willing to dig a bit, you can retrieve it by reading the 
first 100 bytes or so of the file and examining the 4 bytes at offset 
96. It's a big-endian 32-bit integer containing the version. Another 
big-endian 32-bit integer just prior, at offset 92, holds the value of the 
file change counter at the moment that version was stored, so together 
the two can tell you whether the recorded version is still current. 
Quite useful information.

The integer value of the version would be a number like 3012034: 
always starting with a 3 for SQLite3, then the next 3 digits are the 
minor version (12 in the example) and the last 3 the release (34 in the 
example).

For more information on values stored in the header - see here:
https://www.sqlite.org/fileformat2.html#database_header
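Reading those header bytes takes only a few lines of stdlib Python (a sketch; the offsets are from the file-format document):

```python
import os
import sqlite3
import struct
import tempfile

path = os.path.join(tempfile.mkdtemp(), "test.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE t(x)")
conn.commit()
conn.close()

with open(path, "rb") as f:
    header = f.read(100)
assert header[:16] == b"SQLite format 3\x00"   # magic string at offset 0

# Offset 92: change-counter value when the version was stored;
# offset 96: version of the library that last wrote the file.
valid_for, version = struct.unpack(">II", header[92:100])
major = version // 1_000_000
minor = version // 1_000 % 1_000
release = version % 1_000
print(major, minor, release)
```

From Java the same thing is a FileInputStream read of 100 bytes and a big-endian int at offset 96.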

Cheers,
Ryan


On 2016/05/05 6:22 PM, Cecil Westerhof wrote:
> 2016-05-05 18:15 GMT+02:00 Cecil Westerhof :
>
>> I know how to get the version of the running version of SQLite, but is
>> there a way to get the version with which the database was created?
>>
> With the command line program I can get it with:
>  .dbinfo
> one of the things it gives is:
>  software version:3008010
> so it is 3.8.10, but would it also be possible to get it in my Java program?
>



[sqlite] .DUMP output compatibility

2016-05-06 Thread Tony Papadimitriou
Is it possible for .DUMP to produce table/field names quoted with ` (backquote) 
instead of " (double quote) for compatibility with MySQL?
Or is this already adjustable by some setting I missed?

Thank you.