Re: [sqlite] Test failures on GPFS

2020-01-11 Thread Richard Hipp
On 1/11/20, J. King  wrote:
>
> WAL mode does not work over the network, so the test failures are presumably
> to be expected.
>

WAL mode should work on a network filesystem, as long as all of the
clients are on the same host computer, and as long as mmap()-ing the
*-shm file gives all the clients shared memory.  Dunno if GPFS does
that or not, though.  Maybe not.  Or, maybe not reliably.
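
Whether a given mount gives coherent shared memory through mmap() can be probed directly. The sketch below is only an illustration (the file name and function are invented, and this is not anything SQLite itself runs): it maps a scratch file in two processes on the same host and checks that a write in one is visible in the other, which is the property WAL's *-shm file relies on.

```python
import mmap
import os

def mmap_is_coherent(directory):
    """Return True if two processes on this host see each other's
    writes through mmap() of a file in `directory`."""
    path = os.path.join(directory, "shm-probe")
    with open(path, "wb") as f:
        f.write(b"\x00" * mmap.PAGESIZE)
    pid = os.fork()
    if pid == 0:
        # Child: write a marker through its own shared mapping.
        with open(path, "r+b") as f:
            m = mmap.mmap(f.fileno(), mmap.PAGESIZE)
            m[0] = 0xAA
            m.flush()
            m.close()
        os._exit(0)
    os.waitpid(pid, 0)
    # Parent: map the file again and look for the child's marker.
    with open(path, "r+b") as f:
        m = mmap.mmap(f.fileno(), mmap.PAGESIZE)
        coherent = m[0] == 0xAA
        m.close()
    os.remove(path)
    return coherent
```

On a local Linux filesystem this returns True; on a network filesystem whose mappings are not coherent between processes, the marker can be missed.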

-- 
D. Richard Hipp
d...@sqlite.org
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Test failures on GPFS

2020-01-11 Thread J. King
On January 11, 2020 5:57:31 p.m. EST, T J  wrote:

>I was interested in using sqlite over GPFS.  I've seen a few useful
>threads
>on this:
>
> [...]
>
>Overall, it looks pretty good, but there were some WAL
>failures.
>Could someone comment on the precise implication of those test
>failures?

WAL mode does not work over the network, so the test failures are presumably to 
be expected. 

-- 
J. King


Re: [sqlite] Test failures on GPFS

2020-01-11 Thread Jens Alfke

> On Jan 11, 2020, at 2:58 PM, T J  wrote:
> 
> I was interested in using sqlite over GPFS.

The standard advice on using SQLite over a network file system is “don’t do 
it.” Even if you find the rare file system that handles locks properly, you’ll 
likely have performance issues.

A client/server database like Postgres or MySQL is a better fit for a 
distributed use case. If you’re sending everything over the network, it makes 
more sense to send just the queries & results, not the innards of the b-tree 
too. Is there a reason you can’t use one of those?

—Jens


Re: [sqlite] Unexplained table bloat

2020-01-11 Thread Kevin Youren

Hi,

I checked the download mentioned in the original email. Not sure if the
table changed since the previous posts.


It seems LUTFullString holds BLOB data in its 3 rows, but LENGTH() treats
the values as strings and stops at the first NUL byte.

I'm in Melbourne, Oz, so I added the UTC datetime.

regs, Kev

kevin@KCYDell:~$ cd /mnt/KCY/KCYDocs/
kevin@KCYDell:/mnt/KCY/KCYDocs$ sqlite3 /mnt/KCY/KCYDocs/200k-per-row.sqlite
SQLite version 3.30.1 2019-10-10 20:19:45
Enter ".help" for usage hints.
sqlite> .schema
CREATE TABLE copied(
  id_local INT,
  LUTFullString,
  LUTHash
);
sqlite> 
sqlite> select '1',rowid,LENGTH(HEX(LUTFullString))/2 from copied
   ...> UNION ALL
   ...> select '2',rowid,INSTR(HEX(LUTFullString),'00') from copied
   ...> UNION ALL
   ...> select '3',rowid,substr(HEX(LUTFullString),0,10) from copied
   ...> UNION ALL
   ...> select '4',rowid,INSTR(SUBSTR(HEX(LUTFullString),4),'00') from copied
   ...> UNION ALL
   ...> select '5',rowid,LENGTH(LUTFullString) from copied;
1|1|194238
1|2|183050
1|3|193908
2|1|3
2|2|3
2|3|3
3|1|2C0003007
3|2|2C0003007
3|3|2C0003007
4|1|1
4|2|1
4|3|1
5|1|1
5|2|1
5|3|1
sqlite> .quit
kevin@KCYDell:/mnt/KCY/KCYDocs$ date -u
Sat 11 Jan 23:39:43 UTC 2020
kevin@KCYDell:/mnt/KCY/KCYDocs$ 
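
Kevin's results fit SQLite's documented length() behavior: for a TEXT value, length() counts characters only up to the first NUL byte, while hex() sees every byte. A tiny illustration of the same difference (not Kevin's data, just a four-byte value with a NUL in the second position):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# CAST a blob with an embedded 00 byte to TEXT: x'2C004142' is ",\0AB".
char_len, byte_len = con.execute(
    "SELECT LENGTH(CAST(x'2C004142' AS TEXT)),"
    "       LENGTH(HEX(CAST(x'2C004142' AS TEXT))) / 2"
).fetchone()
print(char_len, byte_len)  # length() stops at the NUL; hex() does not
```

This mirrors the transcript above, where query 5 (LENGTH) returns 1 for every row while query 1 (LENGTH of the HEX, halved) returns the full byte count.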


Message: 6
Date: Fri, 10 Jan 2020 08:48:21 -0500
From: Ryan Mack 
To: sqlite-users@mailinglists.sqlite.org
Subject: [sqlite] Unexplained table bloat
Message-ID:
<
CABhGdGRbR1kT+3_BU6ob9L7tpSPZ09HJn=ofPyK6OXvgQK=_...@mail.gmail.com>
Content-Type: text/plain; charset="UTF-8"

Hi list,

I'm trying to understand unexplained table bloat I found in what should be a
very small table in an old database file. If you dump/restore the database,
the problem goes away. If you duplicate the table, the problem propagates.
Schema:

CREATE TABLE copied(
  id_local INT,
  LUTFullString,
  LUTHash
);

Test Data:
85960605|,|0DE19F8AA100D2962FF22C60B3FA8940
85960465|,|A44615408E8B3E48A684C60CA2967031
85960451|,|33C8804354984071A740788AD308B279

In my test database file, each of these 3 rows is allocating about 40 4k
overflow pages for a total database size of about 500k. The full database
has about 4MB of actual data which takes up over 500MB on disk. If you want
to see/reproduce the problem you'll need my test database file which I've
uploaded here: https://mackman.net/200k-per-row.sqlite.zip (500kb
download). I don't know why it doesn't compress better; those extra
overflow pages must be getting filled with random garbage.

My uninformed guess is there was a bug in the version of sqlite used at
database creation time that computed an incorrect overflow threshold and is
storing each byte of the row to its own page. Since the problem goes away
with a dump/restore, I'm considering releasing a script to do that and
mitigate the problem for affected users. Before doing that I would like to
understand the problem better.

Thanks for reading, Ryan

PS: Here's some output from my debug session showing that the 123 bytes of
data occupy 582k of space on disk in a freshly created table.

% sqlite3 200k-per-row.sqlite
SQLite version 3.28.0 2019-04-15 14:49:49
Enter ".help" for usage hints.
sqlite> .tables
copied

sqlite> .schema copied
CREATE TABLE copied(
  id_local INT,
  LUTFullString,
  LUTHash
);

sqlite> select * from copied;
85960605|,|0DE19F8AA100D2962FF22C60B3FA8940
85960465|,|A44615408E8B3E48A684C60CA2967031
85960451|,|33C8804354984071A740788AD308B279

sqlite> select sum(length(id_local) + length(lutfullstring) +
length(luthash)) from copied;
123

sqlite> create table copied2 as select * from copied;
sqlite> SELECT name, SUM("pgsize") as sz FROM dbstat group by name;
copied|581632
copied2|581632
sqlite_master|4096





[sqlite] Test failures on GPFS

2020-01-11 Thread T J
Hi,

I was interested in using sqlite over GPFS.  I've seen a few useful threads
on this:

   - Network file system that support sqlite3 well

   
https://www.mail-archive.com/sqlite-users@mailinglists.sqlite.org/msg117085.html

   - disable file locking mechanism over the network

   
https://www.mail-archive.com/sqlite-users@mailinglists.sqlite.org/msg116846.html

From these, I can see that there are some performance issues, even if I were
willing (which I am not) to make all access (read+write) sequential. [I
don't expect to need many, if any, concurrent writers, but I will typically
have concurrent readers.]

To get a better sense of things, I downloaded 3.31.0 and ran the test suite
on GPFS.  Overall, it looks pretty good, but there were some WAL failures.
Could someone comment on the precise implication of those test failures?
I'm interested to know what usage patterns are likely to cause problems,
and which are likely safe.  Also, which other tests can I run (
https://www.sqlite.org/testing.html)? Perhaps more tests around concurrent
read/writes?

!Failures on these tests: e_walauto-1.1.2 e_walauto-1.1.3
e_walauto-1.1.5 e_walauto-1.1.7 e_walauto-1.1.12.3 e_walauto-1.1.12.5
e_walauto-1.2.2 e_walauto-1.2.3 e_walauto-1.2.5 e_walauto-1.2.7
e_walauto-1.2.12.3 e_walauto-1.2.12.5 zipfile-2.4a.2.1
zipfile-2.4a.2.2


Thanks in advance.  The `make test` output log snippet is below.
---

e_walauto-1.1.0... Ok
e_walauto-1.1.1... Ok
e_walauto-1.1.2...
! e_walauto-1.1.2 expected: [1]
! e_walauto-1.1.2 got:  [0]
e_walauto-1.1.3...
! e_walauto-1.1.3 expected: [1]
! e_walauto-1.1.3 got:  [0]
e_walauto-1.1.4... Ok
e_walauto-1.1.5...
! e_walauto-1.1.5 expected: [1]
! e_walauto-1.1.5 got:  [0]
e_walauto-1.1.6... Ok
e_walauto-1.1.7...
! e_walauto-1.1.7 expected: [1]
! e_walauto-1.1.7 got:  [0]
e_walauto-1.1.7... Ok
e_walauto-1.1.8... Ok
e_walauto-1.1.9... Ok
e_walauto-1.1.10.1... Ok
e_walauto-1.1.10.2... Ok
e_walauto-1.1.11.1... Ok
e_walauto-1.1.11.2... Ok
e_walauto-1.1.11.3... Ok
e_walauto-1.1.12.1... Ok
e_walauto-1.1.12.2... Ok
e_walauto-1.1.12.3...
! e_walauto-1.1.12.3 expected: [2]
! e_walauto-1.1.12.3 got:  [0]
e_walauto-1.1.12.4... Ok
e_walauto-1.1.12.5...
! e_walauto-1.1.12.5 expected: [1559]
! e_walauto-1.1.12.5 got:  [0]
e_walauto-1.2.0... Ok
e_walauto-1.2.1... Ok
e_walauto-1.2.2...
! e_walauto-1.2.2 expected: [1]
! e_walauto-1.2.2 got:  [0]
e_walauto-1.2.3...
! e_walauto-1.2.3 expected: [1]
! e_walauto-1.2.3 got:  [0]
e_walauto-1.2.4... Ok
e_walauto-1.2.5...
! e_walauto-1.2.5 expected: [1]
! e_walauto-1.2.5 got:  [0]
e_walauto-1.2.6... Ok
e_walauto-1.2.7...
! e_walauto-1.2.7 expected: [1]
! e_walauto-1.2.7 got:  [0]
e_walauto-1.2.7... Ok
e_walauto-1.2.8... Ok
e_walauto-1.2.9... Ok
e_walauto-1.2.10.1... Ok
e_walauto-1.2.10.2... Ok
e_walauto-1.2.11.1... Ok
e_walauto-1.2.11.2... Ok
e_walauto-1.2.11.3... Ok
e_walauto-1.2.12.1... Ok
e_walauto-1.2.12.2... Ok
e_walauto-1.2.12.3...
! e_walauto-1.2.12.3 expected: [2]
! e_walauto-1.2.12.3 got:  [0]
e_walauto-1.2.12.4... Ok
e_walauto-1.2.12.5...
! e_walauto-1.2.12.5 expected: [1559]
! e_walauto-1.2.12.5 got:  [0]
e_walauto.test-closeallfiles... Ok
e_walauto.test-sharedcachesetting... Ok
Time: e_walauto.test 92703 ms
...
zipfile2.test-closeallfiles... Ok
zipfile2.test-sharedcachesetting... Ok
Time: zipfile2.test 14 ms
Memory used:  now 24  max9283664  max-size   16908288
Allocation count: now  1  max1311131
Page-cache used:  now  0  max 13  max-size  65800
Page-cache overflow:  now  0  max   20640016
SQLite 2020-01-10 01:05:49 0a500da6aa659a8e73206e6d22ddbf2da5e4f1d1d551eeb66433163a3e13109d
14 errors out of 249964 tests on localhost Linux 64-bit little-endian
!Failures on these tests: e_walauto-1.1.2 e_walauto-1.1.3
e_walauto-1.1.5 e_walauto-1.1.7 e_walauto-1.1.12.3 e_walauto-1.1.12.5
e_walauto-1.2.2 e_walauto-1.2.3 e_walauto-1.2.5 e_walauto-1.2.7
e_walauto-1.2.12.3 e_walauto-1.2.12.5 zipfile-2.4a.2.1
zipfile-2.4a.2.2
All memory allocations freed - no leaks
Memory used:  now  0  max9283664  max-size   16908288
Allocation count: now  0  max1311131
Page-cache used:  now  0  max 13  max-size  65800
Page-cache overflow:  now  0  max   20640016
Maximum memory usage: 9283664 bytes
Current memory usage: 0 bytes
Number of malloc()  : -1 calls


Re: [sqlite] Best way to store key,value pairs

2020-01-11 Thread Jens Alfke

> On Jan 8, 2020, at 3:13 AM, Simon Slavin  wrote:
> 
> I believe Jens' point is valid, as long as you don't have to search/scan 
> headers.

You can even do that. To search for a specific header's value, create an index
on json_extract(headers, '$."Header-Name"'), then use that same expression in
an equality or relational comparison in your queries.

(You'll need to normalize the case of header names during the JSON conversion
while inserting, since JSON keys are case-sensitive but RFC 822 header names
aren't.)
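
A minimal sketch of that suggestion follows; the table, column, and index names are invented, and it assumes an SQLite build with the JSON1 functions (standard in modern builds):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE messages(id INTEGER PRIMARY KEY, headers TEXT)")
# Normalize header names to lower case when building the JSON.
con.execute(
    "INSERT INTO messages(headers) "
    "VALUES (json_object('content-type', 'text/plain'))"
)
# Expression index; a query using the same expression can use it.
con.execute(
    """CREATE INDEX msg_ctype ON messages(
           json_extract(headers, '$."content-type"'))"""
)
row = con.execute(
    """SELECT id FROM messages
       WHERE json_extract(headers, '$."content-type"') = 'text/plain'"""
).fetchone()
print(row)
```

Note the double quotes around the key inside the JSON path: hyphens in header names require the quoted-label form.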

—Jens


Re: [sqlite] Database speed comparison

2020-01-11 Thread Andy
And a huge speedup from wrapping the loop in a transaction: exec("BEGIN"), the loop, exec("COMMIT").

On Sat, 11 Jan 2020 at 07:30, Andy wrote:

> I had sqlite3_prepare_v2 without sqlite3_finalize, and sqlite3_close does
> not close the databases while statements remain unfinalized.
>
> On Sat, 11 Jan 2020 at 07:17, Andy wrote:
>
>> These tests are a lot faster than calling SQLite from my program, so I must
>> use the same conventions: for example, prepare first, then sqlite3_bind_int64,
>> sqlite3_bind_text, and sqlite3_step. My calls were far from optimal.
>>
>> On Fri, 10 Jan 2020 at 20:13, Simon Slavin wrote:
>>
>>> Are you aware of
>>>
>>> 
>>>
>>> ?  Now all you have to do is write comparative tests for other SQL
>>> implementations.  However, from what I've seen, realistic speed tests tend
>>> to come down to cache sizes.
>>> ___
>>> sqlite-users mailing list
>>> sqlite-users@mailinglists.sqlite.org
>>> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>>>
>>
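
The two ideas in this thread (prepare once and rebind, and wrap the loop in one transaction) can be sketched with Python's sqlite3 wrapper, which caches the prepared statement when the same SQL string repeats; the schema here is invented for the example:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "bench.db")
con = sqlite3.connect(path, isolation_level=None)  # autocommit mode
con.execute("CREATE TABLE t(v INTEGER)")

# Without BEGIN/COMMIT each INSERT is its own transaction and must be
# synced to disk separately; one explicit transaction amortizes that.
con.execute("BEGIN")
for i in range(1000):
    # Same SQL string each time, so the statement is prepared once
    # and only the bound value changes.
    con.execute("INSERT INTO t(v) VALUES (?)", (i,))
con.execute("COMMIT")

count = con.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)
```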


[sqlite] sqldiff bug report: wrong real rounding.

2020-01-11 Thread forceps millenium

Greetings!
 
In short: sqldiff is unable to formulate correct SQL to convert slave.db into
host.db.

$ sqldiff slave.db host.db
UPDATE balancelog SET total=1500.74122892 WHERE id=1418;
$ echo "UPDATE balancelog SET total=1500.74122892 WHERE id=1418;" | sqlite3 slave.db
$ sqldiff slave.db host.db
UPDATE balancelog SET total=1500.74122892 WHERE id=1418;

The problem is the `real` column. In host.db, the `total` value was generated
by a TRIGGER and carries 16 significant digits, but sqldiff prints only 8
digits after the decimal point. The two values differ only in digits sqldiff
does not print, so the rounded UPDATE cannot remove the difference and sqldiff
keeps reporting it.
 
In more detail, this is how host.db looks in Midnight Commander (F3):

PRAGMA foreign_keys=OFF;
BEGIN TRANSACTION;
CREATE TABLE IF NOT EXISTS "balancelog" ( id integer primary key, timestamp int, coin text, total real );
INSERT INTO balancelog VALUES(1418,1578700800,'WAVES',1500.741228921395);
COMMIT;
 
 
But slave.db:

PRAGMA foreign_keys=OFF;
BEGIN TRANSACTION;
CREATE TABLE IF NOT EXISTS "balancelog" ( id integer primary key, timestamp int, coin text, total real );
INSERT INTO balancelog VALUES(1418,1578700800,'WAVES',1500.741228919122);
COMMIT;
 
host:  1500.741228921395
slave: 1500.741228919122
 
That is it. slave.db and host.db are attached; see for yourself.
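
The rounding can be reproduced without sqldiff. Printing either stored double with only 12 significant digits yields the literal shown in the sqldiff output, and that literal round-trips to neither original value (the %.12g format here just mimics the output shown; which format sqldiff actually uses would need checking in its source):

```python
host = 1500.741228921395   # value in host.db
slave = 1500.741228919122  # value in slave.db

short = f"{host:.12g}"     # 12 significant digits, as in the sqldiff output
print(short)               # 1500.74122892
# The truncated literal matches neither stored double, so after applying
# the UPDATE the databases still differ and sqldiff repeats itself.
assert float(short) != host and float(short) != slave
# Seventeen significant digits are enough to round-trip any double:
assert float(f"{host:.17g}") == host
```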
 
System: Debian sid
ii  sqlite3        3.30.1-1     amd64        Command line interface for SQLite 3
$ sqlite3 -version
3.30.1 2019-10-10 20:19:45 
18db032d058f1436ce3dea84081f4ee5a0f2259ad97301d43c426bc7f3dfalt1
 
 
--
forceps millenium


Re: [sqlite] using System.Data.SQLite on Linux / ".Net Core 2"

2020-01-11 Thread tcowan
Did you ever get SQLite to work on PowerShell on Linux?

...ted



--
Sent from: http://sqlite.1065341.n5.nabble.com/