Re: [sqlite] Safe saving of in-memory db to disk file

2019-07-15 Thread Adrian Ho
On 15/7/19 11:25 PM, Nelson, Erik - 2 wrote:
> Dominique Devienne wrote on Monday, July 15, 2019 2:41 AM
>
>> That's when you reach for virtual tables (and their "virtual indices").
>> I.e. you keep your data in native data-structures (Boost.MultiIndex in my 
>> case),
>> and just provide a SQLite view of it. Much faster than "pure-in-Memory" with
>> SQLite-managed pager-backed B-tree tables. Steep learning curve, especially
>> for the indexing part, but the results are well worth it IMHO.
>> Which can be freely mixed with "real" tables having "real" indexes (in the 
>> in-memory DB).
> That sounds really intriguing - does it significantly speed up queries coming
> in through the SQLite engine?  Or is the speedup only when accessing from the
> C++-native side?

The whole point of virtual tables is to make something outside an actual
SQLite DB accessible via normal SQL queries within SQLite; you'd still
use the normal access methods (pointer dereferencing, lookups, etc.)
from the native side.

Speedup depends entirely on the efficiency of your implementation, of
course.

> Is there any literature out there or tips you can share that can flatten the 
> learning curve?

The definitive documentation on SQLite virtual tables is here:
https://sqlite.org/vtab.html

A list of implementation examples is here: https://sqlite.org/vtablist.html

Of which, a (relatively) simple example of exposing a C array is
described here: https://sqlite.org/carray.html

And whose source code is in your SQLite source distribution under
ext/misc/carray.c, or online here:
https://www.sqlite.org/src/artifact?ci=trunk&filename=ext/misc/carray.c
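
For reference, a small usage sketch of carray() (my illustration, not part of
the post above), assuming the extension is compiled in or otherwise loaded;
error checking is omitted:

#include "sqlite3.h"

/* Sum the elements of a C array by querying it through carray().
** The array pointer is passed with sqlite3_bind_pointer() using the
** type string "carray"; the second parameter is the element count. */
static int sum_array(sqlite3 *db, int *aData, int nData, sqlite3_int64 *pSum){
  sqlite3_stmt *pStmt;
  int rc = sqlite3_prepare_v2(db,
      "SELECT sum(value) FROM carray(?1, ?2)", -1, &pStmt, 0);
  if( rc!=SQLITE_OK ) return rc;
  sqlite3_bind_pointer(pStmt, 1, aData, "carray", 0);
  sqlite3_bind_int(pStmt, 2, nData);
  if( sqlite3_step(pStmt)==SQLITE_ROW ){
    *pSum = sqlite3_column_int64(pStmt, 0);
  }
  return sqlite3_finalize(pStmt);
}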


Best Regards,
Adrian



Re: [sqlite] [EXTERNAL] Re: how to delete BLOB object from the data base

2019-07-15 Thread Keith Medcalf
>-Original Message-
>From: Keith Medcalf [mailto:kmedc...@dessus.com]
>Sent: Monday, 15 July, 2019 19:13
>To: 'SQLite mailing list'
>Subject: RE: [sqlite] [EXTERNAL] Re: how to delete BLOB object from
>the data base
>
>
>Use one connection.  Use one thread.  As follows:
>
>sqlite *db;
>sqlite_stmt *beginstatement;
>sqlite_stmt *commitstatement;
>sqlite_stmt *insertstatement;
>sqlite_stmt *deletestatement;
>int i = 0;
>
>sqlite_open_v2('database.db', &db, ...);
>sqlite_prepare_v2(db, "begin immediate", &beginstatement, ...);
>sqlite_prepare_v2(db, "insert into blah (_rowid_, data) values (?,
>?);", &insertstatement, ...);
>sqlite_prepare_v2(db, "delete from blah where _rowid_ < ?;",
>&deletestatement ... );
>sqlite_prepare_v2(db, "commit;", &commitstatement, ...);
>while 1:
>  char *data = WaitForDataToArrive()
>  if (!data) break;
>  int64 ts = GetTheTimeInMSsinceTheUnixEpochAsInt64()
>  if (!i) {
> sqlite_step(beginstatement);
> sqlite_reset(beginstatement);
>  }
>  sqlite_bind_int64(deletestatement, 0, ts -
>(numberofmillisecondsofdatatoretainindatabase));
>  sqlite_step(deletestatement);
>  sqlite_reset(deletestatement);
>  sqlite_bind_int64(insertstatement, 0, ts);
>  sqlite_bind_blob(insertstatement, 1, data);
>  sqlite_step(insertstatement);
>  sqlite_reset(insertstatement);
>  i++;
>  if (i>NumberOfRecordsInEachBatch) {
> sqlite_step(commitstatement);
> sqlite_reset(commitstatement);
  i = 0;  /* Add this cuz you are going to start another batch */
>  }
>}
>if (i) {
>  sqlite_step(commitstatement);
>  sqlite_reset(commitstatement);
>}
>sqlite_finalize(beginstatement);
>sqlite_finalize(commitstatement);
>sqlite_finalize(insertstatement);
>sqlite_finalize(deletestatement);
>sqlite_close_v2(db);
>
>Use a separate connection for the data reading on its own
>thread/process.  Make sure the database is in WAL mode.  Don't bother
>with auto vacuum (not needed).  Space freed will be re-used
>automatically.  Assuming that the data rate is constant you will
>eventually end up with a stable database size.  Reading will not
>interfere with writing.  Make sure you do your reads in a deferred
>transaction if they require a REPEATABLE READ across multiple
>statements.  Be aware that your rowid (the timestamp) will overflow
>in a couple of hundred million years, so you will have to devise a
>different timestamp calculation method before then.  You are probably
>fine with a NumberOfRecordsInEachBatch being 1, but you might want to
>set it larger, particularly if your persistent storage is old and
>slow.
>
>--
>The fact that there's a Highway to Hell but only a Stairway to Heaven
>says a lot about anticipated traffic volume.
>
>>-Original Message-
>>From: sqlite-users [mailto:sqlite-users-
>>boun...@mailinglists.sqlite.org] On Behalf Of Zhu, Liang
>>[AUTOSOL/ASSY/US]
>>Sent: Monday, 15 July, 2019 15:11
>>To: SQLite mailing list
>>Subject: Re: [sqlite] [EXTERNAL] Re: how to delete BLOB object from
>>the data base
>>
>>We use an increment of 1000: when the record count reaches a number
>>divisible by 1000, we delete the record.
>>
>>-Original Message-
>>From: sqlite-users  On
>>Behalf Of Robert Hairgrove
>>Sent: Monday, July 15, 2019 4:00 PM
>>To: sqlite-users@mailinglists.sqlite.org
>>Subject: [EXTERNAL] Re: [sqlite] how to delete BLOB object from the
>>data base
>>
>>On 15.07.19 21:38, Zhu, Liang [AUTOSOL/ASSY/US] wrote:
>>> ... I am inserting to the BLOB data into the table at every 250ms,
>>I
>>> delete the oldest row at every 600ms, also I am reading the data
>>from
>>> the database at every 10ms...
>>
>>How do you determine the "oldest" row? I believe the timestamps
>>generated by SQLite are only accurate to the nearest second.





Re: [sqlite] [EXTERNAL] Re: how to delete BLOB object from the data base

2019-07-15 Thread Keith Medcalf

Use one connection.  Use one thread.  As follows:

sqlite *db;
sqlite_stmt *beginstatement;
sqlite_stmt *commitstatement;
sqlite_stmt *insertstatement;
sqlite_stmt *deletestatement;
int i = 0;

sqlite_open_v2('database.db', &db, ...);
sqlite_prepare_v2(db, "begin immediate", &beginstatement, ...);
sqlite_prepare_v2(db, "insert into blah (_rowid_, data) values (?, ?);", 
&insertstatement, ...);
sqlite_prepare_v2(db, "delete from blah where _rowid_ < ?;", &deletestatement 
... );
sqlite_prepare_v2(db, "commit;", &commitstatement, ...);
while 1:
  char *data = WaitForDataToArrive()
  if (!data) break;
  int64 ts = GetTheTimeInMSsinceTheUnixEpochAsInt64()
  if (!i) {
 sqlite_step(beginstatement);
 sqlite_reset(beginstatement);
  }
  sqlite_bind_int64(deletestatement, 0, ts - 
(numberofmillisecondsofdatatoretainindatabase));
  sqlite_step(deletestatement);
  sqlite_reset(deletestatement);
  sqlite_bind_int64(insertstatement, 0, ts);
  sqlite_bind_blob(insertstatement, 1, data);
  sqlite_step(insertstatement);
  sqlite_reset(insertstatement);
  i++;
  if (i>NumberOfRecordsInEachBatch) {
 sqlite_step(commitstatement);
 sqlite_reset(commitstatement);
  }
}
if (i) {
  sqlite_step(commitstatement);
  sqlite_reset(commitstatement);
}
sqlite_finalize(beginstatement);
sqlite_finalize(commitstatement);
sqlite_finalize(insertstatement);
sqlite_finalize(deletestatement);
sqlite_close_v2(db);

Use a separate connection for the data reading on its own thread/process.  Make 
sure the database is in WAL mode.  Don't bother with auto vacuum (not needed).  
Space freed will be re-used automatically.  Assuming that the data rate is 
constant you will eventually end up with a stable database size.  Reading will 
not interfere with writing.  Make sure you do your reads in a deferred 
transaction if they require a REPEATABLE READ across multiple statements.  Be 
aware that your rowid (the timestamp) will overflow in a couple of hundred 
million years, so you will have to devise a different timestamp calculation 
method before then.  You are probably fine with a NumberOfRecordsInEachBatch 
being 1, but you might want to set it larger, particularly if your persistent 
storage is old and slow.
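
For reference, a minimal reader-side sketch of this arrangement (my
illustration, not part of the post above); the filename and timeout are
placeholders and error checking is omitted:

#include "sqlite3.h"

/* Open a second, read-only connection and run several SELECTs inside one
** deferred transaction so they all see the same snapshot.  Assumes the
** writer connection has already set "PRAGMA journal_mode=WAL;". */
static void read_consistent_snapshot(void){
  sqlite3 *db;
  sqlite3_open_v2("database.db", &db, SQLITE_OPEN_READONLY, 0);
  sqlite3_busy_timeout(db, 5000);               /* retry instead of failing */

  sqlite3_exec(db, "BEGIN DEFERRED;", 0, 0, 0); /* snapshot starts at first read */
  /* ... step through the SELECT statements that must agree with each other ... */
  sqlite3_exec(db, "COMMIT;", 0, 0, 0);         /* end the read transaction */

  sqlite3_close_v2(db);
}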

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.

>-Original Message-
>From: sqlite-users [mailto:sqlite-users-
>boun...@mailinglists.sqlite.org] On Behalf Of Zhu, Liang
>[AUTOSOL/ASSY/US]
>Sent: Monday, 15 July, 2019 15:11
>To: SQLite mailing list
>Subject: Re: [sqlite] [EXTERNAL] Re: how to delete BLOB object from
>the data base
>
>We use an increment of 1000: when the record count reaches a number
>divisible by 1000, we delete the record.
>
>-Original Message-
>From: sqlite-users  On
>Behalf Of Robert Hairgrove
>Sent: Monday, July 15, 2019 4:00 PM
>To: sqlite-users@mailinglists.sqlite.org
>Subject: [EXTERNAL] Re: [sqlite] how to delete BLOB object from the
>data base
>
>On 15.07.19 21:38, Zhu, Liang [AUTOSOL/ASSY/US] wrote:
>> ... I am inserting to the BLOB data into the table at every 250ms,
>I
>> delete the oldest row at every 600ms, also I am reading the data
>from
>> the database at every 10ms...
>
>How do you determine the "oldest" row? I believe the timestamps
>generated by SQLite are only accurate to the nearest second.





Re: [sqlite] how to delete BLOB object from the data base

2019-07-15 Thread Simon Slavin
On 15 Jul 2019, at 8:38pm, Zhu, Liang [AUTOSOL/ASSY/US]  
wrote:

> I am getting the  SQLite_locked error on delete, and my data from the select 
> statement are junk.

I think you are saying that you get the result SQLITE_LOCKED.  For that to 
happen you must be either

A) using one database connection for two or more accesses (e.g. 
multi-threading), or
B) using shared-cache mode.



In these situations, SQLite has no way to prevent multiple threads from 
accessing the database at the same time.

To prevent it, make sure each thread uses a separate connection, and do not use 
shared-cache mode.  If you do this, SQLite uses its busy/wait system and will 
continually retry access until it reaches the timeout you've set:



> We do the increment of 1000,  when the record reaches number which dividable 
> by 1000, we delete the record.

This does not seem to be a good way to delete old rows.  Can you not check the 
rowid instead?
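
For reference, a minimal sketch of the per-thread connection setup described
above (my illustration, not Simon's code); the filename and timeout value are
placeholders:

#include "sqlite3.h"

/* Each thread opens its own connection (shared cache stays off, which is
** the default) and sets a busy timeout so SQLite's busy/wait handler
** retries instead of returning SQLITE_BUSY immediately. */
static sqlite3 *open_for_thread(const char *zFile){
  sqlite3 *db = 0;
  if( sqlite3_open_v2(zFile, &db,
        SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE, 0)!=SQLITE_OK ){
    sqlite3_close(db);
    return 0;
  }
  sqlite3_busy_timeout(db, 5000);   /* retry for up to 5 seconds */
  return db;
}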


Re: [sqlite] [EXTERNAL] Re: how to delete BLOB object from the data base

2019-07-15 Thread Zhu, Liang [AUTOSOL/ASSY/US]
We use an increment of 1000: when the record count reaches a number divisible by 
1000, we delete the record.

-Original Message-
From: sqlite-users  On Behalf Of 
Robert Hairgrove
Sent: Monday, July 15, 2019 4:00 PM
To: sqlite-users@mailinglists.sqlite.org
Subject: [EXTERNAL] Re: [sqlite] how to delete BLOB object from the data base

On 15.07.19 21:38, Zhu, Liang [AUTOSOL/ASSY/US] wrote:
> ... I am inserting to the BLOB data into the table at every 250ms, I 
> delete the oldest row at every 600ms, also I am reading the data from 
> the database at every 10ms...

How do you determine the "oldest" row? I believe the timestamps generated by 
SQLite are only accurate to the nearest second.


Re: [sqlite] how to delete BLOB object from the data base

2019-07-15 Thread David Raymond
The questions coming to mind at the moment:

What is the schema of the table holding the BLOBs?
What is the normal size for the blobs?
How are you doing the inserts and deletes?
What journal mode are you using?


I would think the normal way to delete a record is the simple
delete from blob_table where primary_key_id = ?;
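
For reference, a hedged C sketch of running that simple keyed delete with a
prepared statement (the table and column names are the placeholders above,
not the original poster's schema; error checking is minimal):

#include "sqlite3.h"

/* Delete one row by its key.  Bind parameters are 1-based. */
static int delete_blob_row(sqlite3 *db, sqlite3_int64 id){
  sqlite3_stmt *pStmt;
  int rc = sqlite3_prepare_v2(db,
      "DELETE FROM blob_table WHERE primary_key_id = ?1;", -1, &pStmt, 0);
  if( rc!=SQLITE_OK ) return rc;
  sqlite3_bind_int64(pStmt, 1, id);
  sqlite3_step(pStmt);
  return sqlite3_finalize(pStmt);
}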


My understanding is that incremental vacuum basically says: "I want to shrink 
the file size by filling free pages at the front with data from the back to be 
able to truncate the file." It doesn't do any re-ordering or sorting, it just 
moves data from the end of the file into any free space closer to the front.
Since you're adding every 250ms and only deleting every 600ms, the file size 
should be progressively increasing, and any free pages should be used up 
relatively quickly, so I don't think incremental vacuum would do a lot.



-Original Message-
From: sqlite-users  On Behalf Of 
Zhu, Liang [AUTOSOL/ASSY/US]
Sent: Monday, July 15, 2019 3:39 PM
To: sqlite-users@mailinglists.sqlite.org
Subject: [sqlite] how to delete BLOB object from the data base


Sqlite Experts,

I have a table that contains BLOB objects.  I am inserting BLOB data into the 
table every 250ms, I delete the oldest row every 600ms, and I am reading data 
from the database every 10ms.  After almost 100,000 insert, delete and select 
operations, I am getting the SQLite_locked error on delete, and my data from 
the select statement are junk.  To prevent database fragmentation, I tried 
PRAGMA incremental_vacuum(1000) and PRAGMA incremental_vacuum;
Performance improved somewhat when I started using PRAGMA incremental_vacuum, 
but I am still getting junk data.  Am I using incremental_vacuum correctly?  
And is there an optimal way to delete BLOB objects from the database table?

Thank you,

Liang Zhu | Lead Software Engineer | Branson Ultrasonics
Emerson Automation Solutions | 41 Eagle Road | Danbury, CT 06810 | USA
T (203) 796-2235 | F (203) 796-0380
liang@emerson.com

The information contained in this message is confidential or protected by law. 
If you are not the intended recipient, please contact the sender and delete 
this message. Any unauthorized copying of this message or unauthorized 
distribution of the information contained herein is prohibited.



Re: [sqlite] how to delete BLOB object from the data base

2019-07-15 Thread Robert Hairgrove

On 15.07.19 21:38, Zhu, Liang [AUTOSOL/ASSY/US] wrote:
... I am inserting to the BLOB data into the table at every 250ms, I 
delete the oldest row at every 600ms, also I am reading the data from 
the database at every 10ms...


How do you determine the "oldest" row? I believe the timestamps 
generated by SQLite are only accurate to the nearest second.



[sqlite] how to delete BLOB object from the data base

2019-07-15 Thread Zhu, Liang [AUTOSOL/ASSY/US]

Sqlite Experts,

I have a table that contains BLOB objects.  I am inserting BLOB data into the 
table every 250ms, I delete the oldest row every 600ms, and I am reading data 
from the database every 10ms.  After almost 100,000 insert, delete and select 
operations, I am getting the SQLite_locked error on delete, and my data from 
the select statement are junk.  To prevent database fragmentation, I tried 
PRAGMA incremental_vacuum(1000) and PRAGMA incremental_vacuum;
Performance improved somewhat when I started using PRAGMA incremental_vacuum, 
but I am still getting junk data.  Am I using incremental_vacuum correctly?  
And is there an optimal way to delete BLOB objects from the database table?

Thank you,

Liang Zhu | Lead Software Engineer | Branson Ultrasonics
Emerson Automation Solutions | 41 Eagle Road | Danbury, CT 06810 | USA
T (203) 796-2235 | F (203) 796-0380
liang@emerson.com

The information contained in this message is confidential or protected by law. 
If you are not the intended recipient, please contact the sender and delete 
this message. Any unauthorized copying of this message or unauthorized 
distribution of the information contained herein is prohibited.



Re: [sqlite] SQLiteDataReader has already read all rows in first result set

2019-07-15 Thread Guy St-Denis
Issue is RESOLVED.

After more experimentation, I realized that I was causing the Read() operations 
in the first resultset by doing this after the ExecuteReader() call:
PS D:\temp> $dataReader
FieldCount
--
 2
 2
 2

It would seem that asking PowerShell to 'output' an SQLiteDataReader instance 
forces it to Read() through the current resultset.

This seems to be what is observed here after moving to the second resultset.
PS D:\temp> $dataReader.NextResult()
True
PS D:\temp> $dataReader.HasRows
True
PS D:\temp> $dataReader
FieldCount
--
 2
 2
 2

PS D:\temp> $dataReader.HasRows
False

--
Guy St-Denis

‐‐‐ Original Message ‐‐‐
On Monday, July 15, 2019 12:34 PM, Guy St-Denis  wrote:

> New SQLite user... trying to understand the following issue.
> (Spent several hours on Google, Stack Overflow, SQLite mailing list 
> archive... found nothing satisfactory.)
>
> [ISSUE]
> As far as I can tell, after calling ExecuteReader(), I cannot Read() any rows 
> in the *first* result set because the returned DataReader has *already* 
> Read() the rows.
> Calling NextResult() moves to the next result set, and then Read() allows me 
> to step through the rows in the *second* (and subsequent) result sets as 
> expected.
>
> [CONFIGURATION:SQLITE]
> sqlite-netFx46-binary-x64-2015-1.0.111.0\System.Data.SQLite.dll
> SQLiteSourceId=2019-04-16 19:49:53 
> 884b4b7e502b4e991677b53971277adfaf0a04a284f8e483e2553d0f83156b50
> SQLiteVersion=3.28.0
> InteropVersion=1.0.111.0
> ProviderSourceId=767b97f17029698d929f3fd9be563f51942b1805 2019-05-16 03:23:41 
> UTC
> ProviderVersion=1.0.111.0
>
> [CONFIGURATION:OTHER]
> Windows 10 Pro
> PROCESSOR_ARCHITECTURE=AMD64
> PROCESSOR_IDENTIFIER=Intel64 Family 6 Model 58 Stepping 9, GenuineIntel
> PROCESSOR_LEVEL=6
> PROCESSOR_REVISION=3a09
> .NET Framework=4.7.2
> CLRVersion=4.0.30319.42000
> PSVersion=5.1.17134.858
> PSBuildVersion=10.0.17134.858
>
> [STEPS TO REPRODUCE]
> ### SETUP ###
> PS D:\temp> Add-Type -Path 
> 'D:\temp\sqlite-netFx46-binary-x64-2015-1.0.111.0\System.Data.SQLite.dll'
> PS D:\temp> $dbConnection = New-Object -TypeName 
> System.Data.SQLite.SQLiteConnection
> PS D:\temp> $dbConnection.ConnectionString = 'Data Source=:memory:'
> PS D:\temp> $dbConnection.Open()
> PS D:\temp> $dbCommand = New-Object -TypeName System.Data.SQLite.SQLiteCommand
> PS D:\temp> $dbCommand.Connection = $dbConnection
> PS D:\temp> $dbCommand.CommandText = 'CREATE TABLE T1(C1,C2);'
> PS D:\temp> $dbNonQuery = $dbCommand.ExecuteNonQuery()
> PS D:\temp> $dbCommand.CommandText = 'INSERT INTO T1 VALUES 
> (101,102),(201,202),(301,302);'
> PS D:\temp> $dbNonQuery = $dbCommand.ExecuteNonQuery()
> PS D:\temp> $dbCommand.CommandText = 'SELECT C1 AS X1,C2 AS X2 FROM T1;SELECT 
> C1 AS Y1,C2 AS Y2 FROM T1;'
> PS D:\temp> $dataReader = $dbCommand.ExecuteReader()
>
> ### ACTUAL RESULTS: *FIRST* RESULT SET ###
> PS D:\temp> $dataReader.GetName(0)
> X1
> PS D:\temp> $dataReader.GetName(1)
> X2
> PS D:\temp> $dataReader.StepCount
> 3
> PS D:\temp> $dataReader.HasRows
> False
> PS D:\temp> $dataReader.Read()
> False
> PS D:\temp> $dataReader.GetValue(0)
> Exception calling "GetValue" with "1" argument(s): "No current row"
> (...)
> PS D:\temp> $dataReader.GetValue(1)
> Exception calling "GetValue" with "1" argument(s): "No current row"
> (...)
> ### ACTUAL RESULTS: *SECOND* RESULT SET ###
> PS D:\temp> $dataReader.NextResult()
> True
> PS D:\temp> $dataReader.StepCount
> 4
> PS D:\temp> $dataReader.GetName(0)
> Y1
> PS D:\temp> $dataReader.GetName(1)
> Y2
> PS D:\temp> $dataReader.HasRows
> True
> PS D:\temp> $dataReader.GetValue(0)
> Exception calling "GetValue" with "1" argument(s): "No current row"
> (...)
> PS D:\temp> $dataReader.Read()
> True
> PS D:\temp> $dataReader.GetValue(0)
> 101
> PS D:\temp> $dataReader.GetValue(1)
> 102
> PS D:\temp> $dataReader.HasRows
> True
> PS D:\temp> $dataReader.Read()
> True
> PS D:\temp> $dataReader.GetValue(0)
> 201
> PS D:\temp> $dataReader.GetValue(1)
> 202
> PS D:\temp> $dataReader.HasRows
> True
> PS D:\temp> $dataReader.Read()
> True
> PS D:\temp> $dataReader.GetValue(0)
> 301
> PS D:\temp> $dataReader.GetValue(1)
> 302
> PS D:\temp> $dataReader.Read()
> False
> PS D:\temp> $dataReader.HasRows
> False
> PS D:\temp> $dataReader.GetValue(0)
> Exception calling "GetValue" with "1" argument(s): "No current row"
> (...)
>
> --
> Guy St-Denis


Re: [sqlite] Safe saving of in-memory db to disk file

2019-07-15 Thread Jens Alfke


> On Jul 15, 2019, at 3:57 AM, Richard Hipp  wrote:
> 
> Yeah there is.  SQLite has a high-level query language (SQL) that can
> radically simplify application development.

I guess it depends on your programming style. Most apps whose architecture I’m 
aware of* either wrap something like an ORM (e.g. Core Data) around the 
database to expose it as native objects, or they just hand-write a CRUD layer 
to persist their objects. Which is much more complexity than just the classes 
themselves with a bit of export/import code.

I agree that SQL queries are powerful, but LINQ demonstrates that you can do 
the same kind of queries using functional-programming operations like `map`, 
`filter`, `fold`, etc. without having the impedance-mismatch of gluing two 
extremely different languages together.

I love SQLite! And I have been using & evangelizing it since 2004. But the fact 
that it’s a very elegant and powerful hammer doesn’t mean that it’s the best 
tool for pushing thumbtacks into cork boards. :)

(Even if you disagree with the above, I don’t see how one can think that it’s a 
good idea to read the entire database into memory, work with it there, and then 
write it all back out to disk. That’s a lot of unnecessary I/O and RAM usage. 
The only exception I can think of is if the storage medium can’t be counted on 
to remain online during operation.)

—Jens

* my background is in macOS and iOS, btw


Re: [sqlite] Safe saving of in-memory db to disk file

2019-07-15 Thread ardi
On Mon, Jul 15, 2019 at 12:57 PM Richard Hipp  wrote:
>
> On 7/15/19, Jens Alfke  wrote:
> >
> > If you’re going to keep your data in memory, there’s no good reason to use
> > SQLite at all.
>
> Yeah there is.  SQLite has a high-level query language (SQL) that can
> radically simplify application development.

In fact, the "idea" of adopting it for app I/O came precisely from reading
this page on the SQLite website: https://www.sqlite.org/whentouse.html
That page made me realize this is the standard serialization I want for
all (or at least most) of my applications from today on.

Thanks a lot, a lot, a lot!! for all the useful comments posted in
this thread... I didn't know the backup API had the atomic safety. I
need to study all the information you gave me.

Anyway, if I decide to go the "easy way" and perform a complete file
overwrite whenever the application saves the document, would you first
generate a file database with a temporary name without removing the
older file, and then when saving is finished remove the old one and
rename the new? Or wouldn't it be necessary?

Thanks a lot!!

ardi


[sqlite] SQLiteDataReader has already read all rows in first result set

2019-07-15 Thread Guy St-Denis
New SQLite user... trying to understand the following issue.
(Spent several hours on Google, Stack Overflow, SQLite mailing list archive... 
found nothing satisfactory.)

[ISSUE]
As far as I can tell, after calling ExecuteReader(), I cannot Read() any rows 
in the *first* result set because the returned DataReader has *already* Read() 
the rows.
Calling NextResult() moves to the next result set, and then Read() allows me to 
step through the rows in the *second* (and subsequent) result sets as expected.

[CONFIGURATION:SQLITE]
sqlite-netFx46-binary-x64-2015-1.0.111.0\System.Data.SQLite.dll
SQLiteSourceId=2019-04-16 19:49:53 
884b4b7e502b4e991677b53971277adfaf0a04a284f8e483e2553d0f83156b50
SQLiteVersion=3.28.0
InteropVersion=1.0.111.0
ProviderSourceId=767b97f17029698d929f3fd9be563f51942b1805 2019-05-16 03:23:41 
UTC
ProviderVersion=1.0.111.0

[CONFIGURATION:OTHER]
Windows 10 Pro
PROCESSOR_ARCHITECTURE=AMD64
PROCESSOR_IDENTIFIER=Intel64 Family 6 Model 58 Stepping 9, GenuineIntel
PROCESSOR_LEVEL=6
PROCESSOR_REVISION=3a09
.NET Framework=4.7.2
CLRVersion=4.0.30319.42000
PSVersion=5.1.17134.858
PSBuildVersion=10.0.17134.858

[STEPS TO REPRODUCE]
### SETUP ###
PS D:\temp> Add-Type -Path 
'D:\temp\sqlite-netFx46-binary-x64-2015-1.0.111.0\System.Data.SQLite.dll'
PS D:\temp> $dbConnection = New-Object -TypeName 
System.Data.SQLite.SQLiteConnection
PS D:\temp> $dbConnection.ConnectionString = 'Data Source=:memory:'
PS D:\temp> $dbConnection.Open()
PS D:\temp> $dbCommand = New-Object -TypeName System.Data.SQLite.SQLiteCommand
PS D:\temp> $dbCommand.Connection = $dbConnection
PS D:\temp> $dbCommand.CommandText = 'CREATE TABLE T1(C1,C2);'
PS D:\temp> $dbNonQuery = $dbCommand.ExecuteNonQuery()
PS D:\temp> $dbCommand.CommandText = 'INSERT INTO T1 VALUES 
(101,102),(201,202),(301,302);'
PS D:\temp> $dbNonQuery = $dbCommand.ExecuteNonQuery()
PS D:\temp> $dbCommand.CommandText = 'SELECT C1 AS X1,C2 AS X2 FROM T1;SELECT 
C1 AS Y1,C2 AS Y2 FROM T1;'
PS D:\temp> $dataReader = $dbCommand.ExecuteReader()

### ACTUAL RESULTS: *FIRST* RESULT SET ###
PS D:\temp> $dataReader.GetName(0)
X1
PS D:\temp> $dataReader.GetName(1)
X2
PS D:\temp> $dataReader.StepCount
3
PS D:\temp> $dataReader.HasRows
False
PS D:\temp> $dataReader.Read()
False
PS D:\temp> $dataReader.GetValue(0)
Exception calling "GetValue" with "1" argument(s): "No current row"
(...)
PS D:\temp> $dataReader.GetValue(1)
Exception calling "GetValue" with "1" argument(s): "No current row"
(...)
### ACTUAL RESULTS: *SECOND* RESULT SET ###
PS D:\temp> $dataReader.NextResult()
True
PS D:\temp> $dataReader.StepCount
4
PS D:\temp> $dataReader.GetName(0)
Y1
PS D:\temp> $dataReader.GetName(1)
Y2
PS D:\temp> $dataReader.HasRows
True
PS D:\temp> $dataReader.GetValue(0)
Exception calling "GetValue" with "1" argument(s): "No current row"
(...)
PS D:\temp> $dataReader.Read()
True
PS D:\temp> $dataReader.GetValue(0)
101
PS D:\temp> $dataReader.GetValue(1)
102
PS D:\temp> $dataReader.HasRows
True
PS D:\temp> $dataReader.Read()
True
PS D:\temp> $dataReader.GetValue(0)
201
PS D:\temp> $dataReader.GetValue(1)
202
PS D:\temp> $dataReader.HasRows
True
PS D:\temp> $dataReader.Read()
True
PS D:\temp> $dataReader.GetValue(0)
301
PS D:\temp> $dataReader.GetValue(1)
302
PS D:\temp> $dataReader.Read()
False
PS D:\temp> $dataReader.HasRows
False
PS D:\temp> $dataReader.GetValue(0)
Exception calling "GetValue" with "1" argument(s): "No current row"
(...)

--
Guy St-Denis


Re: [sqlite] Safe saving of in-memory db to disk file

2019-07-15 Thread Nelson, Erik - 2
Dominique Devienne wrote on Monday, July 15, 2019 2:41 AM

>That's when you reach for virtual tables (and their "virtual indices").

>I.e. you keep your data in native data-structures (Boost.MultiIndex in my 
>case),
>and just provide a SQLite view of it. Much faster than "pure-in-Memory" with
> SQLite-managed pager-backed B-tree tables. Steep learning curve, especially
>for the indexing part, but the results are well worth it IMHO.

> Which can be freely mixed with "real" tables having "real" indexes (in the 
> in-memory DB).

That sounds really intriguing - does it significantly speed up queries coming 
in through the SQLite engine?  Or is the speedup only when accessing from the 
C++-native side?

Is there any literature out there or tips you can share that can flatten the 
learning curve?

Thanks

Erik

--
This message, and any attachments, is for the intended recipient(s) only, may 
contain information that is privileged, confidential and/or proprietary and 
subject to important terms and conditions available at 
http://www.bankofamerica.com/emaildisclaimer.   If you are not the intended 
recipient, please delete this message.


Re: [sqlite] Bug report: crash when close blob handle after close_v2 db

2019-07-15 Thread Dan Kennedy


On 14/7/62 17:18, Chaoji Li wrote:

This problem is only present for 3.28+. A sample test case is attached.


Thanks for reporting this. We think it's fixed here:

  https://sqlite.org/src/info/52f463d29407fad6

The mailing list stripped off your test case, so if you could either run 
it with the latest SQLite from fossil or else post it inline here so 
that we can run it, that would be very helpful.


Thanks,

Dan.





Basically, the flow is:

1. Open  in-memory db A (we don't do anything about it).
2. Open db B  from file test.db
3. Create a blob handle from B
4. close_v2 A
5. close_v2 B
6. close blob handle -> Segmentation fault

The problem seems to go away if A is not created.
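
Since the attachment was stripped, here is a minimal C sketch of the six-step
flow described above (a reconstruction, not the original test case; the table
name and schema are assumptions, and error checking is omitted):

#include "sqlite3.h"

int main(void){
  sqlite3 *dbA, *dbB;
  sqlite3_blob *blob;

  sqlite3_open(":memory:", &dbA);                    /* 1. in-memory db A */
  sqlite3_open("test.db", &dbB);                     /* 2. db B from test.db */
  sqlite3_exec(dbB, "CREATE TABLE IF NOT EXISTS t(b BLOB);"
                    "INSERT OR REPLACE INTO t(rowid,b) VALUES(1, zeroblob(16));",
               0, 0, 0);
  sqlite3_blob_open(dbB, "main", "t", "b", 1, 0, &blob);  /* 3. blob handle on B */
  sqlite3_close_v2(dbA);                             /* 4. close A */
  sqlite3_close_v2(dbB);                             /* 5. close B */
  sqlite3_blob_close(blob);                          /* 6. reported crash */
  return 0;
}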


Re: [sqlite] "unable to use function highlight in the requested context" in group by

2019-07-15 Thread Jake Thaw
Dan Kennedy explained why this limitation exists:

>On 27/2/62 05:47, Jake Thaw wrote:
>>This may not strictly be a bug, but currently (3.27.2) a vtab cannot overload 
>>scalar functions in aggregate queries.
>>
>>Adding a check for TK_AGG_COLUMN in sqlite3VtabOverloadFunction makes my use 
>>case function as expected.
>
>I think it has always been this way.
>
>The tricky case is when there is a GROUP BY with an expression list that the 
>virtual table cannot optimize as an ORDER BY. In that case the virtual table 
>cursor will be closed before the overloaded function is invoked, which breaks 
>the implementation of most overloaded functions (including the built-in ones).
>
>Dan.

-Jake

On Sat, Jul 13, 2019 at 7:36 PM Damian Adrian  wrote:
>
> Thank you, that works!
>
> I still think highlight() not working in this case may be a bug, though,
> since it only operates on a single row and in theory shouldn't be affected
> by the group by (but there may be limitations not obvious to me).
>
> Thanks again,
> Adrian


Re: [sqlite] Safe saving of in-memory db to disk file

2019-07-15 Thread Richard Hipp
On 7/15/19, Jens Alfke  wrote:
>
> If you’re going to keep your data in memory, there’s no good reason to use
> SQLite at all.

Yeah there is.  SQLite has a high-level query language (SQL) that can
radically simplify application development.
-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] Safe saving of in-memory db to disk file

2019-07-15 Thread Dan Kennedy


On 15/7/62 00:05, ardi wrote:

Hi!

I'm going to use sqlite as a means of file-format I/O for
applications. One of the critical points in file I/O is saving the
file in a safe way, so that data loss cannot happen (or at least the
risk of happening is minimized as much as possible). Traditionally,
some applications save their files with a temporary name, so that in
the case of system failure you don't lose the old file and the new
file at the same time, and then, when the file saving is finished, the
old file is deleted, and the temporary one is renamed to replace it.

I have read the backup API page (https://www.sqlite.org/backup.html)
that shows how to read a sqlite db from disk to memory, and how to
save it back to disk, but it doesn't talk about the topic of
performing the save in a safe way.


It's safe by default.

When you use the online backup API, the destination is written using an 
SQLite transaction. So if your app or the system crashes before the 
backup is complete, the transaction is rolled back following recovery.
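
For reference, a minimal sketch of such a backup-API save (an illustration,
not code from this thread); it follows the pattern documented at
https://www.sqlite.org/backup.html:

#include "sqlite3.h"

/* Copy the whole in-memory database into the named file.  The destination
** is written inside a transaction, so a crash part-way through leaves the
** previous contents recoverable. */
static int save_to_disk(sqlite3 *pInMem, const char *zFilename){
  sqlite3 *pFile;
  sqlite3_backup *pBackup;
  int rc = sqlite3_open(zFilename, &pFile);
  if( rc==SQLITE_OK ){
    pBackup = sqlite3_backup_init(pFile, "main", pInMem, "main");
    if( pBackup ){
      sqlite3_backup_step(pBackup, -1);   /* -1 copies all remaining pages */
      sqlite3_backup_finish(pBackup);
    }
    rc = sqlite3_errcode(pFile);
  }
  sqlite3_close(pFile);
  return rc;
}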


Dan.




Do you have any recommendation for saving the in-memory db in a safe
way? (By "safe" I mean I don't want to lose both the old db file and
the in-memory one -- however, losing the in-memory db would be
reasonable, as that is what obviously happens in a power outage if you
didn't save before.)

Would you do it with the sqlite API, or with the OS system calls?

Another scenario of interest would be if the db is really huge and you
consider not overwriting the whole old file, but just committing the
changes, in order to avoid unnecessary disk writes.  The FAQ explains
that atomic sqlite writes into the db also prevent data loss... but...
can you do that with an in-memory db? How?

Thanks in advance!!

ardi


Re: [sqlite] Link errors with SQLITE_OMIT_VIRTUALTABLE

2019-07-15 Thread Dan Kennedy


On 14/7/62 15:59, Orgad Shaneh wrote:

Hi,

In reply to 
https://www.mail-archive.com/sqlite-users@mailinglists.sqlite.org/msg113512.html.

Can you please accept this patch?


The patch doesn't seem all that intrusive, but is there a reason you 
can't build from canonical sources instead of sqlite3.c?


If you build from the full source package with SQLITE_OMIT_VIRTUALTABLE 
defined, things work as is.  And you will save a bit more code space than 
just building sqlite3.c with the same option.


  https://www.sqlite.org/compile.html#_options_to_omit_features

Dan.





Thanks,
- Orgad
This email and any files transmitted with it are confidential material. They 
are intended solely for the use of the designated individual or entity to whom 
they are addressed. If the reader of this message is not the intended 
recipient, you are hereby notified that any dissemination, use, distribution or 
copying of this communication is strictly prohibited and may be unlawful.

If you have received this email in error please immediately notify the sender 
and delete or destroy any copy of this message