[sqlite] update on Ticket 3437

2010-01-24 Thread nick
Hi,
   I submitted this some time ago, see
   http://www.sqlite.org/cvstrac/tktview?tn=3437,39.

I've just been messing about with this again and have found out what was
happening. The following script demonstrates the problem and resolution:
[begin bug2.sql]

/*
this command line will demonstrate the problem:
del bug.db or rm bug.db, to taste
sqlite3 bug.db < bug2.sql
*/

BEGIN TRANSACTION;
CREATE TABLE races (ID, Name, Date, Distance, Climb, Comment);
INSERT INTO "races" VALUES('20','totley
moor','2009-05-19',NULL,NULL,NULL);
CREATE TABLE times (RaceID, RaceNo, RaceTime, Name, Team, Cat,
RacePosition);
INSERT INTO "times" VALUES(20,'804','01:08:51','CLARKE,
Helen','','FV40','233');
COMMIT;

SELECT "looking for totley moor 09 results";
SELECT RaceTime, RaceNo, Name, Team, Cat, RacePosition FROM times 
WHERE RaceId = (SELECT ID FROM Races WHERE Name = 'totley moor' AND Date
LIKE "2009%");
SELECT "";

/* the sub queries are ok */
SELECT "race id for totley moor 09";
SELECT ID FROM Races WHERE Name = 'totley moor' AND Date LIKE "2009%";
SELECT "";

/* the data is actually there */
SELECT "totley moor 09 data";
SELECT * FROM times WHERE RaceId = 20;

/* now do it so it works */
SELECT "now doing it properly";
SELECT "";
SELECT "";
DROP TABLE races;
DROP TABLE times;

BEGIN TRANSACTION;
CREATE TABLE races (ID, Name, Date, Distance, Climb, Comment);
/* note 20, as opposed to '20' in line below */
INSERT INTO "races" VALUES(20,'totley
moor','2009-05-19',NULL,NULL,NULL);
CREATE TABLE times (RaceID, RaceNo, RaceTime, Name, Team, Cat,
RacePosition);
INSERT INTO "times" VALUES(20,'804','01:08:51','CLARKE,
Helen','','FV40','233');
COMMIT;

SELECT "looking for totley moor 09 results";
SELECT RaceTime, RaceNo, Name, Team, Cat, RacePosition FROM times 
WHERE RaceId = (SELECT ID FROM Races WHERE Name = 'totley moor' AND Date
LIKE "2009%");
SELECT "";

/* the sub queries are ok */
SELECT "race id for totley moor 09";
SELECT ID FROM Races WHERE Name = 'totley moor' AND Date LIKE "2009%";
SELECT "";

/* the data is actually there */
SELECT "totley moor 09 data";
SELECT * FROM times WHERE RaceId = 20;

[end bug2.sql]

My feeling is that this behaviour is arguably correct, but confusing. I
had the idea that sqlite isn't strongly typed?
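For anyone else bitten by this, here is a minimal sketch (not part of
bug2.sql, and the races2/times2 names are made up) of how declared column
types change the comparison; the CAST variant is an alternative if the
columns stay typeless:

/* with declared types the columns get INTEGER affinity, so the text '20'
   is stored as the integer 20 and the subquery comparison matches */
CREATE TABLE races2 (ID INTEGER, Name TEXT, Date TEXT);
CREATE TABLE times2 (RaceID INTEGER, RaceNo TEXT, RaceTime TEXT);
INSERT INTO races2 VALUES('20','totley moor','2009-05-19');
INSERT INTO times2 VALUES(20,'804','01:08:51');
SELECT RaceTime FROM times2
 WHERE RaceID = (SELECT ID FROM races2 WHERE Name = 'totley moor');
/* or, keeping the original typeless schema, force the type explicitly:
   ... WHERE RaceId = (SELECT CAST(ID AS INTEGER) FROM Races WHERE ...) */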

I'm not exactly sure how I created this mix up; I use my program,
sqlite3 and the sqlite database browser to manipulate the DB, so there's
plenty of scope for error :-)

BTW, I enjoy using sqlite, does just what I need.

Cheers

NickJ
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Transaction speed too slow?

2011-05-08 Thread Nick

From http://www.sqlite.org/faq.html#q19 it says "A transaction normally
requires two complete rotations of the disk platter, which on a 7200RPM disk
drive limits you to about 60 transactions per second."

Using Linux/Ubuntu 10.04 on an otherwise idle Atom powered Nettop with a 
5400RPM disk drive I was expecting circa 45 transactions per second. However 
using the basic python script below I seem to be getting a lot slower results, 
namely:

ext3 partition: 16 transactions per second
jfs partition: 9 transactions per second
:memory: 15798 transactions per second

Does anyone have an explanation for this?

The script used is below. I'd be grateful if people could confirm whether it
is just my hardware or a common result.
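For anyone wanting to experiment, here is a minimal sketch of the settings
that change how many disk syncs each COMMIT costs (WAL needs SQLite 3.7.0 or
later; the values are only examples, and relaxing synchronous trades
durability for raw commit speed):

-- write-ahead logging needs fewer syncs per commit than the default
-- rollback journal
PRAGMA journal_mode = WAL;
-- with WAL, NORMAL syncs at checkpoints rather than at every commit
PRAGMA synchronous = NORMAL;
-- the script below forces the strictest setting, which is one reason it
-- tracks the platter speed so closely
-- PRAGMA synchronous = FULL;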

FULL RESULTS:

nick@Haribo:~$ sudo hdparm -W 0 /dev/sda

/dev/sda:
 setting drive write-caching to 0 (off)
 write-caching =  0 (off)
nick@Haribo:~$ python write-transactions-1.py 
Run Number: 1, Location: :memory:
0:00:00.000108
0:00:00.000058
0:00:00.000057
0:00:00.000057
0:00:00.000056
0:00:00.000056
0:00:00.000057
0:00:00.000057
0:00:00.000057
0:00:00.000056
Time Avg: 0.000062
Trans/sec Avg: 16155
Run Number: 2, Location: write-transactions-1.db
0:00:00.099678
0:00:00.121630
0:00:00.110672
0:00:00.099599
0:00:00.110782
0:00:00.099542
0:00:00.121776
0:00:00.099599
0:00:00.121794
0:00:00.099624
Time Avg: 0.108470
Trans/sec Avg: 9

---

nick@Haribo:~$ sudo hdparm -W 1 /dev/sda

/dev/sda:
 setting drive write-caching to 1 (on)
 write-caching =  1 (on)
nick@Haribo:~$ python write-transactions-1.py 
Run Number: 1, Location: :memory:
0:00:00.000113
0:00:00.000057
0:00:00.000056
0:00:00.000056
0:00:00.000056
0:00:00.000057
0:00:00.000057
0:00:00.000056
0:00:00.000056
0:00:00.000056
Time Avg: 0.000062
Trans/sec Avg: 16129
Run Number: 2, Location: write-transactions-1.db
0:00:00.001438
0:00:00.000898
0:00:00.000989
0:00:00.000984
0:00:00.000982
0:00:00.001945
0:00:00.001059
0:00:00.001169
0:00:00.000914
0:00:00.001063
Time Avg: 0.001144
Trans/sec Avg: 874


--

SCRIPT

# Test benchmark for transaction speed per second using the built-in SQLite
import sqlite3
import datetime

contype = [":memory:", "write-transactions-1.db"]

for runidx, runtype in enumerate(contype):

    # Heading
    print "Run Number: %d, Location: %s" % (runidx + 1, runtype)

    con = sqlite3.connect(runtype)
    con.isolation_level = None
    con.execute("PRAGMA synchronous = FULL")
    times = []

    # Create the table
    con.execute("drop table if exists person")
    con.execute("create table person(firstname, lastname)")

    # Measure 10 points
    for run in range(10):

        # BEGIN transaction
        con.execute("begin")

        # Fill the table
        con.execute("insert into person(firstname, lastname) values (?, ?)",
                    ("alex", "bob"))

        # Start the clock
        begin = datetime.datetime.now()

        # COMMIT transaction (this is the part being timed)
        con.execute("commit")

        # Stop the clock
        end = datetime.datetime.now()

        # Record the time
        times.append(end - begin)

    # Display the 10 measured times
    for run in range(10):
        print times[run]

    # Print averages
    import operator
    td = reduce(operator.add, times)
    totalresultsec = (float(td.microseconds)
                      + (td.seconds + td.days * 24 * 3600) * 10**6) / 10**6
    avgsec = totalresultsec / 10.0
    print "Time Avg: %f" % avgsec
    print "Trans/sec Avg: %0.0f" % (1.0/avgsec)
    con.close()

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction speed too slow?

2011-05-09 Thread Nick
Thanks for the suggestion and link. It seems that noatime has already been set 
for both partition types on my test setup. I turned atime back on and re-ran 
the tests. 

The results are within error margins and therefore arguably the same as 
previous.

I'd be interested in the results if others would run the script below.

Thanks in advance
Nick


On 9 May 2011, at 13:31, Black, Michael (IS) wrote:

> You do realize the number they quote is a MAX ... not necessarily what
> you'll get.
>
> With 16 transactions per second you're writing one transaction every 63ms.
> My understanding is that the hard drive in the netbook is 15ms access time
> so you're seeing 4 accesses per transaction with the way you're doing this
> test (write caching off, full sync).
>
> When you write a record you have to update last access times for example.
>
> Might be interesting to turn off last access updates and see if it changes
> your results much.
>
> http://www.cyberciti.biz/faq/linux-noatime-ext3-ext4-fstab-configuration/
>
> Michael D. Black
> Senior Scientist
> NG Information Systems
> Advanced Analytics Directorate
> From: sqlite-users-boun...@sqlite.org [sqlite-users-boun...@sqlite.org] on 
> behalf of Nick [maill...@css-uk.net]
> Sent: Sunday, May 08, 2011 3:52 PM
> To: sqlite-users@sqlite.org
> Subject: EXT :[sqlite] Transaction speed too slow?
> 
> 
> From http://www.sqlite.org/faq.html#q19 it says "A transaction normally 
> requires two complete rotations of the disk platter, which on a 7200RPM disk 
> drive limits you to about 60 transactions per second."
> 
> Using Linux/Ubuntu 10.04 on an otherwise idle Atom powered Nettop with a 
> 5400RPM disk drive I was expecting circa 45 transactions per second. However 
> using the basic python script below I seem to be getting a lot slower 
> results, namely:
> 
> ext3 partition: 16 transactions per second
> jfs partition: 9 transactions per second
> :memory: 15798 transaction per second
> 
> Does anyone have an explanation for this?
> 
> The script used is below. I'd be grateful if people could confirm whether it 
> is just my hardware or a common result

[sqlite] A coredump when select with index

2018-06-22 Thread Nick
My query is
"SELECT x,y FROM t1 WHERE z=? COLLATE NOCASE".

sqlite3Select-> sqlite3WhereBegin-> sqlite3WhereCodeOneLoopStart->
codeAllEqualityTerms-> sqlite3IndexAffinityStr

And I found "Cannot access memory at address" when running
pTab->aCol[x].affinity  //in sqlite3IndexAffinityStr()

x = 29043 while in fact it has only 8394 records.

I am wondering if there is something wrong with my DISK file? Or is it
possible that the aCol[x] is in MEMORY?
Is there any way to know what happened?
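For what it's worth, a minimal sketch of the checks I'd run first from the
sqlite3 shell to tell an on-disk problem from an in-memory one (nothing here
is specific to my schema):

-- quick structural scan of the database file
PRAGMA quick_check;
-- slower full check, also verifies index content against the tables
PRAGMA integrity_check;
-- if a particular index is reported as bad, it can be rebuilt
REINDEX;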

Thanks.



--
Sent from: http://sqlite.1065341.n5.nabble.com/
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Corrupted FTS5 index? disk image is malformed

2018-07-10 Thread Nick
Using the sqlite cli version 3.13 I have a simple schema with a virtual FTS5
table providing full-text index searching. It is accessed by a python
application using apsw==3.13.0.post1.

I could successfully use the full-text index functionality during manual
testing of the db at creation time (probably a year ago now); however,
recently I've been getting "Error: database disk image is malformed" messages
when running queries on the FTS5 virtual table.

In an attempt to explore further I downloaded the latest 3.24 version. With 
this latest version I used the ".backup" command to create a copy of the file 
in the hope of eliminating HDD errors being a culprit.

Running pragma quick_check and integrity_check on the copied db both return ok. 

The schema of the FTS5 table is:

CREATE VIRTUAL TABLE IF NOT EXISTS [i_epg] USING fts5 (
[mangled_title],
[mangled_subtitle],
[mangled_summary],
content=[t_epg],
content_rowid=[tid]
);

The table is exclusively kept up to date using triggers:

-- Triggers to keep the FTS index up to date.

CREATE TRIGGER IF NOT EXISTS i_epg_ai AFTER INSERT ON [t_epg] BEGIN
  INSERT INTO [i_epg](rowid, [mangled_title], [mangled_subtitle], 
[mangled_summary]) VALUES (new.[tid], new.[mangled_title], 
new.[mangled_subtitle], new.[mangled_summary]);
END;
CREATE TRIGGER IF NOT EXISTS i_epg_ad AFTER DELETE ON [t_epg] BEGIN
  INSERT INTO [i_epg]([i_epg], rowid, [mangled_title], [mangled_subtitle], 
[mangled_summary]) VALUES('delete', old.[tid], old.[mangled_title], 
old.[mangled_subtitle], old.[mangled_summary]);
END;
CREATE TRIGGER IF NOT EXISTS i_epg_au AFTER UPDATE ON [t_epg] BEGIN
  INSERT INTO [i_epg]([i_epg], rowid, [mangled_title], [mangled_subtitle], 
[mangled_summary]) VALUES('delete', old.[tid], old.[mangled_title], 
old.[mangled_subtitle], old.[mangled_summary]);
  INSERT INTO [i_epg](rowid, [mangled_title], [mangled_subtitle], 
[mangled_summary]) VALUES (new.[tid], new.[mangled_title], 
new.[mangled_subtitle], new.[mangled_summary]);
END;

Running SQL queries on the normal tables all work as expected. Digging further 
on the FTS5 queries I noticed the following behaviour:

SELECT * FROM [i_epg] WHERE [i_epg] MATCH '{ mangled_title } : big + ban*';
- expect results - actually returns "Error: database disk image is malformed" 
immediately

SELECT * FROM [i_epg] WHERE [i_epg] MATCH '{ mangled_title } : big + ban*';
- expect no results - returns no results

SELECT [mangled_title] FROM [i_epg] WHERE [i_epg] MATCH '{ mangled_title } : 
black + adder';
- expect results - returns results not matching request
The Black Adder
The Black Adder
The Black Adder
The Black Adder
The Black Adder
Morning Show Exclusives
Deal of the Day
Four in a Bed
The Black Adder
The Black Adder
The Black Adder
The Black Adder
Denim & Co
The Shoe Stylist
Our World: Crisis in Catalonia
The Black Adder
The Black Adder
The Black Adder

I've never come across a disk image malformed error in my years of using
sqlite3, so I'm not sure where to turn next. My questions are:

1. Is this a known issue with FTS5 tables and if so is there a workaround?

2. It appears the FTS5 virtual table is corrupt. Is there a way to rebuild the 
FTS5 (drop table and recreate?) from just the sqlite cli tool?

Regards
Nick


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Corrupted FTS5 index? disk image is malformed

2018-07-10 Thread Nick

On 10 Jul 2018, at 21:17, Dan Kennedy wrote:
>> 
> 
> Try running the FTS5 integrity-check command with the 3.24.0 command line to 
> ensure it really is corrupt:
> 
>  https://www.sqlite.org/fts5.html#the_integrity_check_command
> 
> The index can be rebuilt using the rebuild command:
> 
>  https://www.sqlite.org/fts5.html#the_rebuild_command
> 
> 3.13.0 was about 2 years ago. There have been a couple of fixes for fts5 
> corruption bugs since then. This one, for example:
> 
>  https://www.sqlite.org/src/info/9a2de4f05fabf7e7
> 
> So you may have hit a known issue. Hard to say.
> 
> Dan.
> 

Thanks Dan.

Reading the webpage, it says it doesn't work for contentless FTS5 tables, but
I ran the commands anyway:

sqlite> INSERT INTO [i_epg]([i_epg]) VALUES('integrity-check');
Error: database disk image is malformed
sqlite> INSERT INTO [i_epg]([i_epg]) VALUES('rebuild');
sqlite> INSERT INTO [i_epg]([i_epg]) VALUES('integrity-check');
sqlite> 

Running the previous commands also seems to show it's been fixed:

sqlite> SELECT * FROM [i_epg] WHERE [i_epg] MATCH '{ mangled_title } : big + 
ban';
sqlite> SELECT * FROM [i_epg] WHERE [i_epg] MATCH '{ mangled_title } : big + 
ban*';

sqlite> SELECT [mangled_title] FROM [i_epg] WHERE [i_epg] MATCH '{ 
mangled_title } : black + adder';
The Black Adder
The Black Adder
The Black Adder
The Black Adder
The Black Adder
The Black Adder
The Black Adder
The Black Adder
The Black Adder
The Black Adder
The Black Adder
The Black Adder
The Black Adder
sqlite> 

Thanks again Dan.

Regards
Nick

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Corrupted FTS5 index? disk image is malformed

2018-07-11 Thread Nick
 
 
 
> On 11 Jul 2018 at 9:28 am, wrote:
>
> Yours is not a contentless table. It is an "external content" table.
>
> Dan.

Noted. Thanks for the clarification.

Regards
Nick
 
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Corrupted FTS5 index? disk image is malformed - Part II

2018-07-17 Thread Nick

On 2018-07-10 21:17, Dan Kennedy wrote:

Try running the FTS5 integrity-check command with the 3.24.0 command
line to ensure it really is corrupt:

  https://www.sqlite.org/fts5.html#the_integrity_check_command

The index can be rebuilt using the rebuild command:

  https://www.sqlite.org/fts5.html#the_rebuild_command

3.13.0 was about 2 years ago. There have been a couple of fixes for
fts5 corruption bugs since then. This one, for example:

  https://www.sqlite.org/src/info/9a2de4f05fabf7e7

So you may have hit a known issue. Hard to say.

Dan.



Part II

With the help from Dan the FTS5 table was fixed and subsequently worked as
expected. For belt and braces, using the 3.24 sqlite cli client, I created a
new db with the below PRAGMA statements and then ran ".dump" to copy over the
records from the previous db.


PRAGMA legacy_file_format = off;
PRAGMA page_size = 4096;
PRAGMA auto_vacuum = 2;
PRAGMA foreign_keys = on;
PRAGMA journal_mode = wal;
PRAGMA application_id = 19;

Both the PRAGMA and FTS integrity checks returned ok and manual testing
showed the new db worked as expected. At the same time I upgraded apsw to the
latest version (I saw it download the 3.24 file during compiling).


A number of days later I went back and ran the INSERT INTO
[i_epg]([i_epg]) VALUES('integrity-check') command and disappointingly it
returned "Error: database disk image is malformed".


However, unlike my first report above, the same FTS5 queries are all
working and returning results as expected.


I'm at a loss.

Re: [sqlite] Corrupted FTS5 index? disk image is malformed - Part II

2018-07-18 Thread Nick

On 18 Jul 2018, at 14:09, Dan Kennedy wrote:
> 
> 
> Easiest explanation is that something is writing directly to the FTS5 table, 
> bypassing the external content table.
> 
> Otherwise, it may be a bug in fts5. How large is the corrupted db? Are you 
> able to share it with us?
> 
> Dan.

FTS5 table is exclusively modified with triggers. 

If I dropped all tables except the FTS5 table and external content table would 
that still be useful for you? 

It would be around 500MB uncompressed. Have you got a way to upload it?

Regards
Nick
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Corrupted FTS5 index? disk image is malformed - Part II

2018-07-23 Thread Nick
 
 
Hi Dan

Did you receive below? Would the extracted db be useful for debugging?

Regards
Nick


> On 18 Jul 2018 at 22:41, wrote:
>
> On 18 Jul 2018, at 14:09, Dan Kennedy wrote:
> >
> > Easiest explanation is that something is writing directly to the FTS5
> > table, bypassing the external content table.
> >
> > Otherwise, it may be a bug in fts5. How large is the corrupted db? Are
> > you able to share it with us?
> >
> > Dan.
>
> FTS5 table is exclusively modified with triggers.
>
> If I dropped all tables except the FTS5 table and external content table
> would that still be useful for you?
>
> It would be around 500MB uncompressed. Have you got a way to upload it?
>
> Regards
> Nick
 
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] The performance of indexed select

2018-01-05 Thread Nick
I am trying to analyse the performance of an indexed select.

CREATE TABLE t2(a INTEGER, b INTEGER, c TEXT);
CREATE INDEX t2c ON t2(c); 

I think there may be many more leaf index b-tree pages (header byte 0x0A) if
the content of index key 'c' is always 20-25 bytes long, as I notice the
index record format inside sqlite consists of the index key and the rowid.

I can establish a mapping relation between column 'c' and a new INTEGER
column 'd'. Then I am wondering if it is reasonable to create a new index on
t2(d) to get better performance, as sqlite stores INTEGERs in a
variable-length way, which means there will be fewer index pages.
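For illustration, a minimal sketch of the kind of mapping I mean (the c_map
table and column names here are made up, not my real schema):

CREATE TABLE t2(a INTEGER, b INTEGER, c TEXT);
-- each distinct text key gets a small integer id
CREATE TABLE c_map(d INTEGER PRIMARY KEY, c TEXT UNIQUE);
INSERT OR IGNORE INTO c_map(c) SELECT c FROM t2;
-- store the integer id alongside the row and index that instead of 'c'
ALTER TABLE t2 ADD COLUMN d INTEGER;
UPDATE t2 SET d = (SELECT d FROM c_map WHERE c_map.c = t2.c);
CREATE INDEX t2d ON t2(d);
-- a lookup by the original text key then goes through the mapping
SELECT count(*) FROM t2 WHERE d = (SELECT d FROM c_map WHERE c = 'xxxxx');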

So is it correct that the performance of an indexed select depends on the
number of index pages fetched in getPageNormal() during the select? I think
there is a positive correlation but I do not know if it is the major
constraint.

And does sqlite have a profiling tool to get the call tree or the execution
time of each function? All I know of is VDBE_PROFILE.

Thanks for any light you can shed.





--
Sent from: http://sqlite.1065341.n5.nabble.com/
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] The performance of indexed select

2018-01-05 Thread Nick
Thank you Simon. 

But I am still uncertain if it is a good way to replace column 'c'. 

CREATE TABLE t2(a INTEGER, b INTEGER, d INTEGER); 
or:
CREATE TABLE t2(a INTEGER, b INTEGER, c TEXT, d INTEGER); 
and then
CREATE INDEX t2d ON t2(d);
SELECT count(*) FROM t2 WHERE d = xx;

I find it is indeed faster than t2(c). 

Or in other words, if a TEXT column has a similar meaning to an INTEGER
column in my applications (such as using a userID instead of a userName,
which is still the way that the data works in my head :) ), is it recommended
to use the INTEGER one in order to get fewer index pages?


One more small question:
> For instance, once SQLite has found the right entry in the index it might
> need to look up that entry in the table to retrieve values which are not
> in the index.

I understand the execution process you described. And in my opinion, sqlite
should fetch pages when looking up the entry both in the index and then in
the table. But I only found pages with headers '0x0A' and '0x02' when
getPageNormal() is called while running the SELECT. Could you give me any
advice on finding the code where sqlite fetches the '0x0D' pages?

Thanks.




--
Sent from: http://sqlite.1065341.n5.nabble.com/
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] The performance of indexed select

2018-01-05 Thread Nick
Some simple SQLs:
SELECT count(*) FROM t2 WHERE c = xx; (or d = xx) 



--
Sent from: http://sqlite.1065341.n5.nabble.com/
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] The performance of indexed select

2018-01-07 Thread Nick
Thank you Keith for your useful advice. I am considering to organize the
columns based on BCNF.

I guess that table t3 is needed to remove functional dependency, which means
I should use table t2 and t3 instead of one table t2 with 4 columns a-d. Is
that right?

I am not familiar with the concept BCNF, and I want to make sure that if it
is recommended to create my tables in the way you wrote.

Thanks



--
Sent from: http://sqlite.1065341.n5.nabble.com/
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Speed issue of SELECT in my application

2018-01-16 Thread Nick
I have a table below in my application:

CREATE TABLE t1 (
a INTEGER PRIMARY KEY AUTOINCREMENT, 
b INTEGER NOT NULL UNIQUE, 
c INTEGER NOT NULL, 
d INTEGER, 
e TEXT, 
f INTEGER, 
g INTEGER, 
h TEXT, 
i INTEGER, 
UNIQUE(b, i)
);
And I’ve got some speed issues when I query the db:
SELECT b, c, d, e, f, g, h FROM t1 WHERE b >= 1;

It needs almost 60ms as there are about 100 records with some long TEXT data
in the TEXT columns.  

I am wondering if it is needed to add ANY INDEX to improve the performance
of the SELECT?
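For reference, a minimal sketch of what I can check from the shell; the
covering index below is just one experiment to try (its name is made up),
and it trades insert speed and file size for not having to visit the table
rows at all:

-- see which index the query uses today
EXPLAIN QUERY PLAN SELECT b, c, d, e, f, g, h FROM t1 WHERE b >= 1;
-- one possible experiment: an index that contains every selected column
CREATE INDEX t1_covering ON t1(b, c, d, e, f, g, h);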

Thanks.



--
Sent from: http://sqlite.1065341.n5.nabble.com/
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Speed issue of SELECT in my application

2018-01-16 Thread Nick
Thank you Simon.

As you said, UNIQUE(b, i) is redundant, but I guess it will not affect the
performance of the SELECT.

I find "SEARCH TABLE t1 USING INDEX sqlite_autoindex_t1_1 (b>?)" when I use
EXPLAIN QUERY PLAN, so I do not need to add any index, right?

Um, I guess there is nothing more I can do to improve the performance.
Thanks.



--
Sent from: http://sqlite.1065341.n5.nabble.com/
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Speed issue of SELECT in my application

2018-01-16 Thread Nick
OK. Thank you for your help.



--
Sent from: http://sqlite.1065341.n5.nabble.com/
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Speed issue of SELECT in my application

2018-01-17 Thread Nick
Thank you Smith.
The table is preset with some data in my Android system. And I guess my
APPLICATION is more like a SERVICE which will be started at system boot time,
and the speed issue happens at that time.
For some other reasons, I have to use WAL journal mode and NORMAL synchronous
mode.

>3. Start a transaction and hold the DB read locks for the duration of 
your application (again, if it won't need writing)
I only have several single SELECTs at boot time and I guess it is not
necessary to use a TRANSACTION? And there are still some write ops in my
service.

>4. Ensure there are no other time-consuming bits of code in the sqlite 
api RESET()-STEP() loop.
I think I only use sqlite api in some normal ways. 

And I do not know more details about what happens during the boot time.




--
Sent from: http://sqlite.1065341.n5.nabble.com/
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Speed issue of SELECT in my application

2018-01-17 Thread Nick
I am confused about your table t2.
It will be faster to query the table t1, but I need the content of columns e
and h when I query the data, which means I need an extra SELECT from the
table t2, is that right?
At the same time, I guess it is more complicated to INSERT data into both t1
and t2.

What is more important is that I think it is better to establish my tables
according to how the source data is shaped.



--
Sent from: http://sqlite.1065341.n5.nabble.com/
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Speed issue of SELECT in my application

2018-01-17 Thread Nick
Jens, I totally agree with your opinion on profiling. I have tried to find
some useful tools to profile applications using sqlite and all I found is
sqlite3_profile() (gettimeofday) and VDBE_PROFILE (hardware timing) inside
sqlite.

I also know a little about the Time Profiler in Instruments, but I am using
Android.

So, what is the tool you mentioned, such as the 'sample' tool?
And do you mean the CPU profiler (gperftools from Google) is useful for
profiling sqlite? I am not familiar with this tool.

Thanks.  



--
Sent from: http://sqlite.1065341.n5.nabble.com/
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Speed issue of SELECT in my application

2018-01-17 Thread Nick
Thank you all.
As Simon said, 60ms may be a reasonable figure, and I am trying to focus on
the details of my service according to all your suggestions.



--
Sent from: http://sqlite.1065341.n5.nabble.com/
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] About test_vfs.c in sqlite test

2018-01-19 Thread Nick
I find a file named test_vfs.c when I run the tcl tests in the source tree.
When I open a wal-file with a sqlite3_file* file descriptor pFile1, it calls
sqlite3OsOpen(). The call tree is like the diagram below:

sqlite3OsOpen(pFile1)
  |
  |
 pVfs->xOpen ==> tvfsOpen
  |
  |
 sqlite3OsOpen(pFile2)
  |
  |
  pVfs->xOpen ==> unixOpen(pFile2)

In some tests, test_vfs.c is involved and pVfs->xOpen() will bind to
tvfsOpen() instead of unixOpen() directly.
And I find the address of pFile has changed to pFile2 when sqlite3OsOpen()
is called the second time.
Then unixOpen() will initialize pFile2, e.g. setting nFetchOut and some other
member elements in struct unixFile to 0. But the nFetchOut of pFile1 may not
be 0 when sqlite3OsOpen(pFile1) returns.

It makes me confused, as I find the db will not crash even if all the member
elements of the unixFile are not correct. Could it be said that these
elements will not be used before they are set to a correct value?
And what is test_vfs.c for?

I am a newbie to sqlite testing and VFS. Could anyone explain to me whether
it is correct that the nFetchOut of pFile1 is not 0 when
sqlite3OsOpen(pFile1) returns?

I am really confused. Thanks for any light you can shed. 



--
Sent from: http://sqlite.1065341.n5.nabble.com/
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Make some changes to the source code

2018-01-24 Thread Nick
I am trying to make some changes to the source code of sqlite, as I found
there will be a little improvement if I add mmap support for the wal file. I
guess the optimization is micro, but it is useful in my test and this is a
good way for me to study the code of sqlite :D

1. wal+mmap
I could use unixMapfile() to map the wal file, while mmap may cause SIGBUS if
the mapped file is truncated. This could happen when resetting the wal file,
in other words, if journal_size_limit is reached or
SQLITE_CHECKPOINT_TRUNCATE is called. But I guess it works if these two APIs
are never called in my application.
So, I want to create file holes to get a 4M wal-file in sqlite3WalOpen(), and
always set journal_size_limit to 4M. Then mmap will be supported by simply
calling unixMapfile(4M) in sqlite3WalOpen(). After that, memcpy() instead of
read() will be used when reading the first 4M of the wal file.
I am wondering if this is all right in my Android applications?

2. Furthermore, I know mmap is supported when fetching db pages:
To map the file:
In getPageMMap(), sqlite3OsFetch() mmaps the whole db file and returns the
mapped page through *pData. Then pagerAcquireMapPage() will obtain a page
reference PgHdr based on the pData.
(A small question here: why is pData needed? As xRead() will always use
memcpy() instead of read() after unixMapfile(-1) is called.)

sqlite3OsFileControlHint is called to remap the db file when the db grows as
a result of a checkpoint.

To avoid SIGBUS:
Process will catch the CHANGE of other processes by comparing
pWal->hdr.iChange and the corresponding number in wal-index. Whenever a
read, write or checkpoint operation happens, unixUnmapfile() will be called
if there is a CHANGE.

3. Thus another way of wal+mmap:
I want to use pWal->hdr.unused to catch the change when another process
truncates the wal file (journal_size_limit or SQLITE_CHECKPOINT_TRUNCATE).
Then I will check hdr.unused and call unixMapfile(-1) whenever
sqlite3OsRead(pWal->pWalFd) is called.
Is there a better time to remap the file? Just like
sqlite3WalBeginReadTransaction() and walCheckpoint() in db+mmap.

I ran the sqlite tests to check my code, but I find pVfs->szOsFile is 16 when
test_vfs.c is used, which means pRet->pWalFd is no longer a unixFile struct.
At this time, sqlite3OsOpen() binds to tvfsOpen() instead of unixOpen(). So I
cannot use unixMapfile() and the tests that use test_vfs.c will not pass.
Could you give me some advice on how to pass the tests?

Thank you.




--
Sent from: http://sqlite.1065341.n5.nabble.com/
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Make some changes to the source code

2018-01-24 Thread Nick
I use sqlite in my Android application. And I tried to run the sqlite tests
on my MacOS PC. Some cases failed, but I cannot figure out whether it is
indeed a corruption.

Do you mean the corruption problems you mentioned will happen in db+mmap? I
guess it should happen in both wal+mmap and db+mmap if it exists. But I have
not run into it so far, even though I have heard about the mmap+OS problem
from the community.

And Simon, do you have any idea about the test_vfs problem? And is it OK to
change the code as described in "1. wal+mmap" above, setting the mmap problem
aside?



--
Sent from: http://sqlite.1065341.n5.nabble.com/
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Make some changes to the source code

2018-01-24 Thread Nick
I ran tests in my MacOS 10.12.6.



--
Sent from: http://sqlite.1065341.n5.nabble.com/
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Make some changes to the source code

2018-01-24 Thread Nick
Yup, I guess I understand you correctly about the mmap problem in the OS. I
have seen some threads about it before.
But I think wal+mmap is still worth considering, as db+mmap is already
supported even though it is disabled by default. At least I think I could use
it in my own application until I hit the mmap problem on my system.

The thing that bothers me the most is that I have no way to check my code,
because of the testvfs used in the sqlite tests. So could you please review
my train of thought about wal+mmap? (Map the file when opening it and do not
truncate the file.)

By the way, is there possibly a way to submit a patch to sqlite?

Thank you Simon.



--
Sent from: http://sqlite.1065341.n5.nabble.com/
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Make some changes to the source code

2018-01-25 Thread Nick
Thank you Simon, I totally understand you. 

And I still hope someone can give me some advice about my wal+mmap approach
(map the file when opening it and do not truncate the file).

Thanks.



--
Sent from: http://sqlite.1065341.n5.nabble.com/
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Performance issue in different versions

2018-02-01 Thread Nick
I updated sqlite on my Android mobile phone from 3.9.2 to 3.16.2.
And I find the default page_size in 3.9.2 is 1024, while in 3.16.2 it is 4096
(changed since 3.12.0).
I think SQLITE_DEFAULT_PAGE_SIZE has a great effect on performance, so I used
speedtest1.c to test it.
There are many test cases in speedtest1.c and case 270 is a DELETE case
which is the most time-consuming one.
There is a result.  (different version + different page_size)

 3.16.2+4096 3.16.2+1024  3.9.2+4096   
3.9.2+1024
Case 270:   5.695s 5.908s   2.307s  

6.130s
TOTAL  75.182s79.811s 58.723s   
 
81.732s

It is easy to see that 3.9.2+4096 is much faster than the others.
And page_size has a great effect on 3.9.2 but only a small effect on 3.16.2.
But why?


I think 3.16.2 should be faster than 3.9.2 according to sqlite's own
measurements (https://www.sqlite.org/cpu.html), but I guess 3.9.2 used 1024
at that time.

Could someone give me any explanations? The result is strange but I think it
is reproducible if we set SQLITE_DEFAULT_PAGE_SIZE=4096 in 3.9.2. 

Thank you.





--
Sent from: http://sqlite.1065341.n5.nabble.com/
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Performance issue in different versions

2018-02-01 Thread Nick
Um, I am an OS application developer and we just upgraded the source code in
our development engine.

I am sure I used the same compile-options. 
SQLITE_SECURE_DELETE is not set.



--
Sent from: http://sqlite.1065341.n5.nabble.com/
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Performance issue in different versions

2018-02-01 Thread Nick
Yup, absolutely you are right.
I just ran a new test using the same upper bound on the amount of memory
used for the page cache, then I found a reasonable result.

Thank you, Dan.
I did notice the cache_size change before but you made me realize it.

Thanks a lot.



--
Sent from: http://sqlite.1065341.n5.nabble.com/
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Performance issue in different versions

2018-02-01 Thread Nick
I realized that the amount of memory used for the page cache is different.
And I found that is the root cause.
Sorry for my careless mistake.

Thank you.



--
Sent from: http://sqlite.1065341.n5.nabble.com/
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Question about threadsafe

2018-02-06 Thread Nick
I use sqlite 3.16.2 in Android system and I have configured journal_mode=WAL,
threadsafe=1.

My understanding is that:
WAL => readers and only one writer can run at the same time.
threadsafe=1 => mutexes are used in serialized mode so that two writers are
supported.
Is that correct?

But I ran a simple test:
Two processes each run sqlite3_open() to open the same db. Then both
processes insert 10000 records (in a transaction) into the db
simultaneously.
But I find that:

Process A begin
Process A insert
Process B begin
Process B insert
Process A end
Process B end

Which I guess means that Process B did not sleep at all?
And the count of records is less than 20000 at the end.

So I think multiple write threads are not allowed even though I use
threadsafe=1 in WAL mode?
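For context, a minimal sketch of the setting discussed later in this thread
(the 5000 ms value is only an example): each connection can ask sqlite to
wait for a busy lock instead of failing straight away.

-- run once per connection, right after opening it
PRAGMA busy_timeout = 5000;
-- equivalent to calling sqlite3_busy_timeout(db, 5000) from C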



--
Sent from: http://sqlite.1065341.n5.nabble.com/
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] [EXTERNAL] Question about threadsafe

2018-02-06 Thread Nick
Yep, Hick. We have the same understanding. 
But all I found is that process B did not wait for the lock and began to run
directly.



--
Sent from: http://sqlite.1065341.n5.nabble.com/
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Question about threadsafe

2018-02-06 Thread Nick
>  (a) an error result of some kind or (b) a corrupt database. 
I did not see any info about errmsg.

>  Are your processes using the same database connection or does each one
> have its own ? 
Two processes have two sqlite3_open(). So each one has its own.

>  Are you checking the result codes returned by all the API calls ? 
Yes. I use speedtest1.c as model code. 
  speedtest1_exec("BEGIN");
  speedtest1_prepare("INSERT INTO t1 VALUES(?1,?2,?3); --  %d times",
n);
  for(i=1; i<=n; i++){
rc = sqlite3_bind_int64(g.pStmt, 1, i);
rc = sqlite3_bind_int(g.pStmt, 2, i);
rc = sqlite3_bind_text(g.pStmt, 3, "hello", -1, SQLITE_STATIC);
speedtest1_run();
  }
  speedtest1_exec("COMMIT");
And I have checked rc = SQLITE_OK.

> Can you reliably get less than 20000 rows?
Yes, always less than 20000.
Process A inserts 1-10000 and process B inserts 10001-20000. I found that
the first few rows are missing in the result. I mean there is no 10001-10xxx.

>  Does the problem go away if you use threadsafe = 2 ? 
The problem is still here.



--
Sent from: http://sqlite.1065341.n5.nabble.com/
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] [EXTERNAL] Question about threadsafe

2018-02-07 Thread Nick
Yes. But

Process A begin 
Process A insert 
Process B begin 
Process B insert 
Process A end 
Process B end 

In fact, begin means "BEGIN" and end means "COMMIT". 
So I think the result is strange. 

And I guess the difference between Serialized and Multithread is whether it
is allowed to share the structure sqlite3 *db (together with prepared
statements) among threads. If I use Serialized mode, then I could run
sqlite3_open() only once and all the threads could use the unique "db". Is
that right?



--
Sent from: http://sqlite.1065341.n5.nabble.com/
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Question about threadsafe

2018-02-12 Thread Nick
I ran several multi-thread tests these days and I want to confirm that my
understanding is correct.
I use WAL mode, and I think whether or not to use the same connection with
THREADSAFE=1 or 2 is the key to my question.

Mode 1, threadsafe=2 + multiple threads use the same connection:
It is not threadsafe;

Mode 2, threadsafe=2 + each thread runs a sqlite3_open():
It is threadsafe which means reading and writing can proceed concurrently,
but only one writer at a time. PRAGMA busy_timeout() may avoid “db is
locked” when writing.

Mode 3, threadsafe=1 + each thread runs a sqlite3_open():
Same as mode 2, as threadsafe=1 only adds the ability for a handle to be used
by more than one thread.

Mode 4, threadsafe=1 + multiple threads use the same connection:
Reading and writing can proceed concurrently; two writers can start at the
same time and sqlite will make them serialized (but how? I guess some threads
will be blocked and retry, but I cannot find it in the source code).


In general, WAL makes reading and writing concurrent - not just serial - but
writing and writing can only be serial. So writers should use busy_timeout()
to retry (Mode 2), or use the same connection and the retry will be done by
sqlite (Mode 4).

I think "sqlite is threadsafe" means the integrity of the database is
guaranteed, and there will not be any crash or corruption if applications use
sqlite in the way of modes 2 and 4 above.

Is it right?

Thanks.



--
Sent from: http://sqlite.1065341.n5.nabble.com/
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Question about threadsafe

2018-02-12 Thread Nick
Thank you Keith. There are some things I want to make sure of.

>THREADSAFE=1 means that the sqlite3 library enforces the re-entrancy
requirements via mutexes attached to the CONNECTION object.  This means that
the library will serialize access to the sqlite3 engine for you so that only
one call (entrance) per connection is permitted to proceed.  
Yes. That's the reason why I think my applications can use "threadsafe=1 +
share one connection" directly, and sqlite will meet the entrance
requirements by itself.

>Other entrances (calls) will *wait* until the in-progress call is complete
before proceeding. 
I see the word "wait". But could you give me any advises to find the code
when sqlite waiting or retrying?

>THREADSAFE=2 means that the sqlite3 library WILL NOT ENFORCE the
re-entrancy requirements via mutexes attatched to the CONNECTION object.
Yes. So I can not use "threadsafe=2 + share one connection" in my apps, as I
may have more than one call at a time.

>The limitation of only ONE entrance per connection object at one time is
still in effect however, so if you violate the rules then AHWBL. 
I want to make sure that it is not OK to read and write concurrently within
the same transaction (two threads sharing one connection), but it is OK to
read and write concurrently if I have two different connections and use WAL.
Is that correct?

>Therefore IF you use THREADSAFE=2 *AND* open a new connection dedicated to
each thread, THEN you can be sure that you are meeting the entrance
requirements provided that only calls against that connection (or objects
derived therefrom) are made on the thread which owns that connection, and
from NO OTHER THREAD. 
Sorry, I still can not understand. Um, or, is it OK to use "threadsafe=2 and
2 connections" in my apps if the 2 threads may write at the same time?

thanks.







--
Sent from: http://sqlite.1065341.n5.nabble.com/
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Question about threadsafe

2018-02-13 Thread Nick
>> is it OK to use "threadsafe=2 and 
>> 2 connections" in my apps if the 2 threads may write at the same time? 

>Yes.

So I think "threadsafe=2 + more than 1 connection + busy_handler" is a good
way to use.

Another possible way is "threadsafe=1 and share 1 connection", but if thread
1 begins a transaction, then the SQL of thread 2 will also be executed
within the transaction I guess. That may cause some unpredictable problems.

BTW, if I use "threadsafe=0 and more than 1 connection", there will not be
"database is locked" any more even if two threads writing at the same time,
as mutex is disabled on core. Is it correct? 



--
Sent from: http://sqlite.1065341.n5.nabble.com/
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Question about threadsafe

2018-02-13 Thread Nick
>> So I think "threadsafe=2 + more than 1 connection + busy_handler" is a
good 
>> way to use. 

>This is the normal way to use SQLite. 

I ran a test and I can still find "database is locked" even if I use
busy_handler(threadsafe=2, 2 connections).
When thread 1 is executing a write transaction, thread 2 runs the code below
at the same time:
sqlite3_exec("BEGIN")
//SELECT
sqlite3_prepare_v2("SELECT * FROM t1;");
sqlite3_step;
sqlite3_reset;
//INSERT
sqlite3_exec("INSERT INTO t1 VALUES(1, 1, \"aweagsr\")");//database is
locked
sqlite3_exec("COMMIT");

Writing in thread 1 will not block SELECTs in thread 2 as I use WAL. But the
INSERT within the transaction of thread 2 still returns SQLITE_BUSY.
I think I have used sqlite3_busy_timeout() in the right way, and I find that
sqliteDefaultBusyCallback() did not get called.

Is it expected? 




--
Sent from: http://sqlite.1065341.n5.nabble.com/
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Dealing with SQLITE_BUSY

2018-02-22 Thread Nick
I use sqlite3_open() to open two connections, and I have configured
journal_mode=WAL, threadsafe=2.

Connection 1 is doing:
sqlite3_exec(db1, "BEGIN", 0, 0, &zErrMsg);
sqlite3_exec(db1, "SELECT * FROM t1;", 0, 0, &zErrMsg);
sqlite3_exec(db1, "INSERT INTO t1 VALUES(1, 1, \"aa\”)”, 0, 0, &zErrMsg);
//SQLITE_BUSY
sqlite3_exec(db1, "COMMIT", 0, 0, &zErrMsg);

I got that SQLITE_BUSY as connection 2 was writing to the db at the same
time. I have called sqlite3_busy_timeout() but I find that it does not take
effect if the INSERT runs after a SELECT within BEGIN and COMMIT.

Is it expected? 
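A minimal sketch of the variation that avoids this, on my understanding:
starting the transaction as a writer means there is no deferred read-to-write
upgrade, which is the case where sqlite returns SQLITE_BUSY (strictly
SQLITE_BUSY_SNAPSHOT) immediately without invoking the busy handler, because
retrying cannot help once the snapshot read by the earlier SELECT is stale.

PRAGMA busy_timeout = 5000;   -- still useful while waiting to become writer
BEGIN IMMEDIATE;              -- take the write lock up front
SELECT * FROM t1;
INSERT INTO t1 VALUES(1, 1, 'aa');
COMMIT;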



--
Sent from: http://sqlite.1065341.n5.nabble.com/
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Does mmap increase PSS?

2018-04-10 Thread Nick
Hi,
I guess that "cache_size=2000" means the PSS of my process will always be
less than 2M.
But when I use PRAGMA mmap_size to enable mmap, I found the PSS of my process
is almost the same size as my db.
Is that correct?



--
Sent from: http://sqlite.1065341.n5.nabble.com/
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] [EXTERNAL] Does mmap increase PSS?

2018-04-12 Thread Nick
Thanks for your explanation.
I want to get a confirmation that my understanding is correct: if I use
mmap_size=256M and I have only 1 process, then the PSS of the process will
always be the same as the size of my db file, as unixMapfile(-1) means
mapping the whole file. (A big db file means 256M of PSS.) Is that correct?

In fact I had expected mmap to only take up virtual memory, not PSS.



--
Sent from: http://sqlite.1065341.n5.nabble.com/
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] [EXTERNAL] Does mmap increase PSS?

2018-04-12 Thread Nick
Thanks a lot, Hick.
So, if 
- mmap_size=256M
- run only one copy of my program (has no other process to split PSS)
- have a large enough amount of main memory (bigger than 256M)
- a big db file (bigger than 256M)
Then the PSS of my program will be about 256M. 

Is that correct?



--
Sent from: http://sqlite.1065341.n5.nabble.com/
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] [EXTERNAL] Does mmap increase PSS?

2018-04-12 Thread Nick
OK, I understand.

I ran a simple program to test if mmap will cause the increasing of PSS. But
I did not find the PSS increase according to showmap: 
addr = (char *)mmap(NULL, length, PROT_READ, MAP_SHARED, fd, 0);
for(i=0; i < ...

--
Sent from: http://sqlite.1065341.n5.nabble.com/
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] [EXTERNAL] Does mmap increase PSS?

2018-04-13 Thread Nick
I find I confused several concepts of memory. Sorry for that.
And I guess I finally understand what my question really is:

Still there is only one process doing a SELECT * in a 256M db file.
Then 256M physical memory should be used when doing the query. (Ignore the
cache_size.)
So the PSS of my program should be 256M at that time. That is OK.

But from now on, the PSS will stay at 256M for a long time, as my process
will be active for hours doing insert-select-insert-select without closing.
My system cannot afford a 256M-PSS program.
In other words, the most important thing is that there is no opportunity to
call munmap() in my program.

Can I find a reasonable time to call munmap() or sqlite3OsUnfetch()? Or is
there any way to solve the problem other than pragma mmap_size=2M?

Really thanks for your help.



--
Sent from: http://sqlite.1065341.n5.nabble.com/
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Any operation to trigger osMunmap?

2018-04-18 Thread Nick
Hi,
I have one process using sqlite with "pragma mmap_size=30M". The operations
of my process are insert-select-insert-select. So the PSS (private clean)
will increase along with the growth of the db, which is treated as a memory
leak by the Mem-Analysor tool.

I guess calling sqlite3_close() or pragma mmap_size=0 after queries may free
the PSS, but that is not a good way for my process.
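For reference, a minimal sketch of that per-connection knob (the 2M figure is
only an example); as I understand it, the old mapping is only released once
no mapped pages are still referenced, so exactly when that happens depends on
the connection's state:

-- cap memory-mapped I/O on this connection at about 2 MB
PRAGMA mmap_size = 2097152;
-- or turn memory-mapped I/O off entirely for this connection
PRAGMA mmap_size = 0;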

So I am wondering: is there any other way to free the PSS? I find
unixUnmapfile() will be called when nFetchOut goes back to 0, but I do not
know what operation may trigger that.

Thanks for any light you can shed.



--
Sent from: http://sqlite.1065341.n5.nabble.com/
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Any operation to trigger osMunmap?

2018-04-18 Thread Nick
So the PSS will not decrease even if the db becomes smaller after some
DELETE/vacuum operations?

I think it would be better to free the mmap memory automatically inside
sqlite after every query. Why not?



--
Sent from: http://sqlite.1065341.n5.nabble.com/
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] SQLite SELECT performance problem

2012-05-24 Thread Nick
Hello!
I have a program that does some math in an SQL query. There are
hundreds of thousands of rows (some device measurements) in an SQLite
table, and using this query, the application breaks these measurements
into groups of, for example, 10000 records, and calculates the average
for each group. Then it returns the average value for each of these
groups.

The query looks like this:

SELECT strftime('%s',Min(Stamp)) AS DateTimeStamp,
AVG(P) AS MeasuredValue,
((100 * (strftime('%s', [Stamp]) - 1334580095)) /
(1336504574 - 1334580095)) AS SubIntervalNumber
FROM LogValues
WHERE ((DeviceID=1) AND (Stamp >=  datetime(1334580095, 'unixepoch')) AND
(Stamp <= datetime(1336504574, 'unixepoch')))
GROUP BY ((100 * (strftime('%s', [Stamp]) - 1334580095)) /
(1336504574 - 1334580095)) ORDER BY MIN(Stamp)

The numbers in this request are substituted by my application with
some values. I don't know if I can optimize this request more (if
anyone could help me to do so, I'd really appreciate it).
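In case it helps, a minimal sketch of the index I would try first (the index
name is made up; it only helps the WHERE clause, the GROUP BY expression
still has to be computed per row):

-- lets SQLite jump straight to the DeviceID=1 rows in the requested Stamp
-- range instead of scanning the whole table
CREATE INDEX IF NOT EXISTS idx_logvalues_device_stamp
    ON LogValues(DeviceID, Stamp);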

This SQL query can be executed using an SQLite command line shell
(sqlite3.exe). On my Intel Core i5 machine it takes 4 seconds to
complete (there are 100000 records in the database that are being
processed).

Now, if I write a C program using the sqlite3.h C interface, I have to wait
14 seconds for exactly the same query to complete. The C program "waits" for
those 14 seconds on the first sqlite3_step() call (any following
sqlite3_step() calls execute immediately).

From the SQLite download page I downloaded the command line shell's source
code and built it using Visual Studio 2008. I ran it and executed the query.
Again 14 seconds.

So why does the prebuilt command line tool downloaded from the sqlite website
take only 4 seconds, while the same tool built by me takes 4 times longer to
execute?

I am running 64-bit Windows. The prebuilt tool is an x86 process. It also does
not seem to be multicore optimized - in Task Manager, during query execution,
I can see only one core busy, for both my own build and the prebuilt SQLite
shell.
I have tried different Visual Studio optimization options, and tried to match
the "PRAGMA compile_options;" output of the downloaded sqlite3.exe by defining
the corresponding preprocessor directives when building sqlite3.c. To no avail.

Any way I could make my C program execute this query as fast as the
prebuilt command line tool does it?
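For reference, the timing I am describing boils down to measuring the first
sqlite3_step() call - a minimal sketch, where "db" and "zSql" stand for my open
handle and the query text above:

#include <sqlite3.h>
#include <stdio.h>
#include <time.h>

/* Sketch: time the first sqlite3_step() call, which is where the whole
** GROUP BY / sort cost of the query above is paid. */
static void time_first_step(sqlite3 *db, const char *zSql)
{
    sqlite3_stmt *pStmt = 0;
    clock_t t0, t1;
    if (sqlite3_prepare_v2(db, zSql, -1, &pStmt, 0) != SQLITE_OK) {
        fprintf(stderr, "prepare failed: %s\n", sqlite3_errmsg(db));
        return;
    }
    t0 = clock();
    sqlite3_step(pStmt);                         /* first row (or SQLITE_DONE) */
    t1 = clock();
    printf("first step took %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);
    while (sqlite3_step(pStmt) == SQLITE_ROW) {} /* drain the remaining rows */
    sqlite3_finalize(pStmt);
}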

Thanks much!
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite SELECT performance problem

2012-05-24 Thread Nick
Yes, I have tried it. Here's what the sqlite3.exe I just built returns:
sqlite> pragma  compile_options ;
TEMP_STORE=1
THREADSAFE=0
sqlite>

I am still getting those 14 seconds.
I am building with Visual Studio 2008.

2012/5/24 Richard Hipp :
> On Thu, May 24, 2012 at 3:59 PM, Nick  wrote:
>
>>
>> Any way I could make my C program execute this query as fast as the
>> prebuilt command line tool does it?
>>
>
>
> Have you tried compiling with the -DSQLITE_THREADSAFE=0 option?
>
> --
> D. Richard Hipp
> d...@sqlite.org
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite SELECT performance problem

2012-05-24 Thread Nick
In my initial message I described some proof-of-concept that I've done.

I downloaded sqlite3.exe (An SQLite command line tool) from the
SQLite's website. I executed my query and I had to wait 4 seconds for
it to complete.

Then I downloaded sqlite3.c, sqlite3.h and shell.c, compiled them together
(producing the command line tool again, but this time built by myself with
Visual Studio) and executed the same query. It took 15 seconds this time.

I can't understand why. I set the code optimization level to "max" in the
Visual Studio settings.
I also ran the following command in the original (downloaded from sqlite.org)
sqlite3.exe:

  pragma compile_options;

and made sure all those options (#defines) were set in my own build of
sqlite3.exe.



2012/5/25 Simon Slavin :
>
> On 24 May 2012, at 8:59pm, Nick  wrote:
>
>> So why does a prebuilt, downloaded from the sqlite website, command
>> line tool takes only 4 seconds, while the same tool, built by me,
>> takes 4 times longer time to execute?
>
> I'm wondering whether the speed increase is related to figuring out the 
> parameters in the command.  Purely out of interest, and not because I know 
> it's faster, have you tried executing the query using sqlite3_exec() rather 
> than _prepare(), _step(), _finalize() ?  Don't do a lot of reworking of your 
> code unless it turns out to be faster.
>
> Simon.
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite SELECT performance problem

2012-05-24 Thread Nick
I tried defining "SQLITE_TEMP_STORE 2" as well. Unfortunately it
doesn't influence the speed much, in my case...

2012/5/25 Jonas Malaco Filho :
> Why TEMP_STORE=1 (file by default) and not TEMP_STORE=2 (memory by default)?
>
> *Jonas Malaco Filho*
>
> 2012/5/24 Simon Slavin 
>
>>
>> On 24 May 2012, at 8:59pm, Nick  wrote:
>>
>> > So why does a prebuilt, downloaded from the sqlite website, command
>> > line tool takes only 4 seconds, while the same tool, built by me,
>> > takes 4 times longer time to execute?
>>
>> I'm wondering whether the speed increase is related to figuring out the
>> parameters in the command.  Purely out of interest, and not because I know
>> it's faster, have you tried executing the query using sqlite3_exec() rather
>> than _prepare(), _step(), _finalize() ?  Don't do a lot of reworking of
>> your code unless it turns out to be faster.
>>
>> Simon.
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite SELECT performance problem

2012-05-24 Thread Nick
The sizes of the executable files are almost identical - there's only a few
kilobytes' difference.
I have attached the original sqlite3.exe (downloaded from sqlite.org), the
sqlite3console.exe I compiled myself, and the source code. There is also a
dump of the import tables (they are very similar for both executables). I hope
you can see the difference in speed:

http://dl.dropbox.com/u/74970714/Custom%20built%20SQLite3.exe%20Performance%20problem.rar

There is also a database file, but it is quite large (71 MB). You can download
it here to be able to test the SQL query:

http://dl.dropbox.com/u/74970714/database.sqlite

Thanks to anyone who can help!

2012/5/25 Simon Slavin :
>
> On 24 May 2012, at 11:13pm, Nick  wrote:
>
>> In my initial message I described some proof-of-concept that I've done.
>>
>> I downloaded sqlite3.exe (An SQLite command line tool) from the
>> SQLite's website. I executed my query and I had to wait 4 seconds for
>> it to complete.
>>
>> Then I downloaded sqlite3.c, sqlite3.h and shell.c, compiled them
>> altogether (and got again the command line tool, but now i've built it
>> by myself using Visual Studio) and executed the same query. It took 15
>> seconds this time.
>
> I'm very sorry I missed that.  I have now re-read your original post.
>
> Can you compare the size of the two executable files for us ?  Are the 
> various versions you're compiling (I understand you've tried several 
> different compilation options) all definitely bigger than the one supplied on 
> the SQLite site ?  It might give us something to investigate.  Also, I don't 
> know how to do this under Windows, but do you have a way to check whether the 
> versions made by Visual Studio address any DLLs or other libraries ?
>
> Simon.
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite SELECT performance problem

2012-05-25 Thread Nick
Hi Michael, Kreith, Jonas,

Thanks for your response. I just tried launching the binary manually, and it
ran very fast. I didn't figure out by myself that I should have tried running
the application outside of Visual Studio (I assumed that in "Release" mode VS
doesn't slow execution down with its debugging instrumentation). Sorry for
raising dust over all this.


2012/5/25 Black, Michael (IS) :
> Using your sqlite3.exe
> CPU Time: user 2.156250 sys 2.078125
>
> Using your sqlite3.console.exe
> CPU Time: user 1.375000 sys 0.140625
>
> I'm afraid I don't see the problem since the pre-built is slower than your 
> executable for me.
>
>
> Michael D. Black
> Senior Scientist
> Advanced Analytics Directorate
> Advanced GEOINT Solutions Operating Unit
> Northrop Grumman Information Systems
>
>
>
> From: sqlite-users-boun...@sqlite.org [sqlite-users-boun...@sqlite.org] on 
> behalf of Nick [eveningn...@gmail.com]
> Sent: Thursday, May 24, 2012 5:49 PM
> To: General Discussion of SQLite Database
> Subject: EXT :Re: [sqlite] SQLite SELECT performance problem
>
>
> The sizes of the executable files are almost identical - there's a few
> kilobytes difference.
> I have attached the original (downloaded from sqlite.org) sqlite3.exe,
> a compiled-by-myself sqlite3console.exe. And the source code. Also
> there's import tables dump (import tables are also very similar for
> both executables). I hope you can see the difference in the speed:
>
> http://dl.dropbox.com/u/74970714/Custom%20built%20SQLite3.exe%20Performance%20problem.rar
>
> There is also a database file. but it is quite large (71 Mb). You can
> download it here, to be able to test the SQL query:
>
> http://dl.dropbox.com/u/74970714/database.sqlite
>
> Thanks to anyone who can help!
>
> 2012/5/25 Simon Slavin :
>>
>> On 24 May 2012, at 11:13pm, Nick  wrote:
>>
>>> In my initial message I described some proof-of-concept that I've done.
>>>
>>> I downloaded sqlite3.exe (An SQLite command line tool) from the
>>> SQLite's website. I executed my query and I had to wait 4 seconds for
>>> it to complete.
>>>
>>> Then I downloaded sqlite3.c, sqlite3.h and shell.c, compiled them
>>> altogether (and got again the command line tool, but now i've built it
>>> by myself using Visual Studio) and executed the same query. It took 15
>>> seconds this time.
>>
>> I'm very sorry I missed that.  I have now re-read your original post.
>>
>> Can you compare the size of the two executable files for us ?  Are the 
>> various versions you're compiling (I understand you've tried several 
>> different compilation options) all definitely bigger than the one supplied 
>> on the SQLite site ?  It might give us something to investigate.  Also, I 
>> don't know how to do this under Windows, but do you have a way to check 
>> whether the versions made by Visual Studio address any DLLs or other 
>> libraries ?
>>
>> Simon.
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Max page count not being remembered across close/open in SQLite 3.7.6.3

2011-08-30 Thread Nick
Hi

While performing some testing I noticed that SQLite was not enforcing the 
max_page_count pragma. Using the test case below, it appears SQLite does not 
remember the max_page_count across closing and opening a database file. 
Instead of the expected 12800 from PRAGMA max_page_count; I got 1073741823.

Is anyone else affected by this?

Environment:

Ubuntu Linux 10.04
SQLite shell 3.7.6.3 downloaded from SQlite.org

The following is copy-n-paste from the command line

rm page_count_test.db 
rm: cannot remove `page_count_test.db': No such file or directory
./sqlite3 page_count_test.db
SQLite version 3.7.6.3
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
PRAGMA encoding = "UTF-8";
PRAGMA legacy_file_format = off;
PRAGMA auto_vacuum = 0;
PRAGMA page_size = 4096;
PRAGMA max_page_count = 12800;
12800
sqlite> PRAGMA journal_mode = WAL;
wal
sqlite> PRAGMA user_version = 20;
sqlite> 
sqlite> 
sqlite> PRAGMA page_size;
4096
sqlite> PRAGMA max_page_count;
12800
sqlite> .quit

./sqlite3 page_count_test.db
SQLite version 3.7.6.3
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> PRAGMA page_size;
4096
sqlite> PRAGMA max_page_count;
1073741823
sqlite> 

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Max page count not being remembered across close/open in SQLite 3.7.6.3

2011-08-30 Thread Nick

On 30 Aug 2011, at 16:53, Simon Slavin wrote:

> 
> Most PRAGMAs, including others which modify engine behaviour like 
> recursive_triggers, are not stored in the database but have to be restated 
> every time you open the database file.
> 

Ok, thanks. I did not realise that was the case for this PRAGMA.

Similarly to the page_count PRAGMA, I would have intuitively expected that 
max_page_count would not need restating every time the database file is opened.

Nick

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Online/Hot backup of WAL journalling mode database

2014-12-09 Thread Nick
Hi,

I'd like to check my understanding of SQLite in WAL journalling mode. With 
automatic checkpointing turned off, would the following pseudo-code result in an 
online backup approach that allows robust restore of the database with data 
fresh up to the last checkpoint?

Environment is Linux with multiple (c. 4-6) processes accessing a single sqlite 
database named "test.db".

Backup:
- New process started using cronjob to initiate application checkpoint until 
completion.
- rsync diff the file "test.db" to another drive/location (specifically 
ignoring the "-shm" and "-wal" file).
- exit process

Restore:
- rsync the file "test.db" from another drive/location.

Regards
Nick



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Online/Hot backup of WAL journalling mode database

2014-12-09 Thread Nick

On 9 Dec 2014, at 22:06, Simon Slavin wrote:

> 
> On 9 Dec 2014, at 8:57pm, Nick  wrote:
> 
>> Environment is Linux with multiple (c. 4-6) processes accessing a single 
>> sqlite database named "test.db".
>> 
>> Backup:
>> - New process started using cronjob to initiate application checkpoint until 
>> completion.
>> - rsync diff the file "test.db" to another drive/location (specifically 
>> ignoring the "-shm" and "-wal" file).
>> - exit process
>> 
>> Restore:
>> - rsync the file "test.db" from another drive/location.
> 
> Will not be trustworthy if the database is being written to during the rsync 
> operations.  Recommend either of the following:
> 
> A) Ensure all processes besides the backup process have the database closed 
> while it is being copied. Establish some kind of semaphore so they can tell 
> when it's safe to open the database again.
> 
> B) Use the SQLite Backup API which was invented to do what you want.
> 
> Simon.

That's interesting, Simon - I didn't expect the database not to be trustworthy. 
In WAL mode I thought the database file is only written to when checkpointing. 
Have I misunderstood this journalling mode?

Again, I may have misunderstood the docs around the Backup API: doesn't it 
start copying pages again from the beginning if another process writes to the 
database during the backup? In practice, could it successfully back up a 2GB 
database that is being written to once a second?

Regards
Nick
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Online/Hot backup of WAL journalling mode database

2014-12-10 Thread Nick

On 10 Dec 2014, at 02:36, Simon Slavin wrote:

> 
> On 10 Dec 2014, at 12:30am, Nick  wrote:
> 
>> That's interesting Simon I didn't expect the database not to be trustworthy.
> 
> The database will be trustworthy at any instant.  Your copy of it will be 
> corrupt because the file will be changing while you are copying it.
> 
>> In WAL mode I thought the database file is only written to when 
>> checkpointing. Have I misunderstood this journaling mode?
> 
> How do you intend to prevent your other processes from checkpointing while 
> you take the backup ?  You can disable checkpointing for your own connection 
> to the database but not for the connections other processes have.

All the processes would have automatic checkpointing disabled. Just the backup 
process would perform the checkpoint.

>> Again I may have misunderstood the docs around the Backup API, does it not 
>> start again from the beginning copying pages if another process writes to 
>> the database during the process? In practice could it successfully backup a 
>> 2GB database that is being written to once a second?
> 
> Not if the writing never stopped.  But there's no way to take a copy of a 
> file which is constantly being rewritten.  rsync can't do it either.  How can 
> anything copy a file which is constantly being modified ?
> 
> You can BEGIN EXCLUSIVE and then END once your backup is finished.  That 
> should prevent other processes writing to the file.  You will have to deal 
> with what happens if your BEGIN EXCLUSIVE times out, and you will have to put 
> long timeouts in your other processes so they can handle the file being 
> locked long enough for the entire copy to be taken.  That's the only way I 
> can think of to do it.  And yes, it will prevent writing to the database 
> while it's being copied.
> 
> On the other hand, there's a different way to clone a database: log the 
> changes.
> 
> When something issues an INSERT/DELETE/UPDATE command, execute the command 
> but also append a copy of that command to a text file somewhere.  When you 
> want to bring your backup copy up-to-date, take a copy of the log file, then 
> execute all the commands in it to your out-of-date copy.
> 
> You need a method of zeroing out the log file, or knowing where you got to on 
> your last backup.

Thanks for the info, Simon.

Regards
Nick

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Online/Hot backup of WAL journalling mode database

2014-12-10 Thread Nick

On 10 Dec 2014, at 07:35, Dan Kennedy wrote:

> Strictly speaking the database file may not be well-formed even if there is 
> no ongoing checkpoint. If:
> 
>  a) process A opens a read transaction,
>  b) process B opens and commits a write transaction to the database,
>  c) process C checkpoints the db,
> 
> then the db file considered without the *-wal file may be corrupt. The 
> problem comes about because process C can only checkpoint frames up until the 
> start of B's transaction. And there is an optimization that will prevent it 
> from copying any earlier frames for which there exists a frame in B's 
> transaction that corresponds to the same database page. So it effectively 
> copies only a subset of the modifications made by earlier transactions into 
> the db file - not necessarily creating a valid db file.

Can this corruption be detected by running PRAGMA quick_check / 
integrity_check? Having the occasional backup db corrupted would be tolerable.

Regards
Nick 
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Online/Hot backup of WAL journalling mode database

2014-12-11 Thread Nick

On 11 Dec 2014, at 10:08, Dan Kennedy wrote:

> On 12/11/2014 05:49 AM, Nick wrote:
>> On 10 Dec 2014, at 07:35, Dan Kennedy wrote:
>> 
>>> Strictly speaking the database file may not be well-formed even if there is 
>>> no ongoing checkpoint. If:
>>> 
>>>  a) process A opens a read transaction,
>>>  b) process B opens and commits a write transaction to the database,
>>>  c) process C checkpoints the db,
>>> 
>>> then the db file considered without the *-wal file may be corrupt. The 
>>> problem comes about because process C can only checkpoint frames up until 
>>> the start of B's transaction. And there is an optimization that will 
>>> prevent it from copying any earlier frames for which there exists a frame 
>>> in B's transaction that corresponds to the same database page. So it 
>>> effectively copis only a subset of the modifications made by earlier 
>>> transactions into the db file - not necessarily creating a valid db file.
>> Can this corruption be detected by running PRAGMA quick_check / 
>> integrity_check? Having the occasional backup db corrupted would be 
>> tolerable.
> 
> In many cases, but not generally. There would exist cases where a part of a 
> committed transaction was lost, or the values in unindexed columns where 
> replaced, that sort of thing.

Ok. Presumably a SQLITE_CHECKPOINT_FULL or SQLITE_CHECKPOINT_RESTART 
checkpoint mode would ensure the db file is valid?

Regards
Nick
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Online/Hot backup of WAL journalling mode database

2014-12-11 Thread Nick

On 11 Dec 2014, at 10:43, Simon Slavin wrote:

> 
> I don't know enough about the internals of SQLite to be sure, but various 
> parts of me are concerned that this is a bad idea.  I don't know what WAL 
> mode would be like without checkpointing but there has to be a reason for 
> checkpointing and disabling it between backups sounds bad.
> 

I would say the docs (https://www.sqlite.org/wal.html#how_to_checkpoint) do not 
imply that application-initiated checkpointing is a bad idea.

Regards
Nick
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Online/Hot backup of WAL journalling mode database

2014-12-11 Thread Nick

On 11 Dec 2014, at 20:39, David King wrote:

> Why are you trying to hard to avoid using the backup API? It sounds like it 
> does exactly what you want

The Backup API works great if you have periods of no writing. However, if a 
process writes during the backup then the API stops and starts over again, so 
with frequent writes the backup could in theory never complete.

In an ideal world the backup API would only re-copy the pages altered during 
the write, rather than starting over.
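For reference, the usage pattern I am describing is roughly this minimal sketch
of the backup API as I understand it (file names and the per-step page count are
just placeholders):

#include <sqlite3.h>

/* Sketch: copy a live database to a backup file with the online backup
** API.  If another connection writes to the source mid-copy, the next
** sqlite3_backup_step() restarts the copy from the beginning, which is
** the behaviour discussed above. */
static int backup_db(const char *zSrc, const char *zDest)
{
    sqlite3 *pSrc = 0, *pDest = 0;
    sqlite3_backup *pBk;
    int rc = SQLITE_ERROR;
    if (sqlite3_open(zSrc, &pSrc) == SQLITE_OK &&
        sqlite3_open(zDest, &pDest) == SQLITE_OK) {
        pBk = sqlite3_backup_init(pDest, "main", pSrc, "main");
        if (pBk) {
            do {
                rc = sqlite3_backup_step(pBk, 100);   /* 100 pages at a time */
                if (rc == SQLITE_BUSY || rc == SQLITE_LOCKED) sqlite3_sleep(50);
            } while (rc == SQLITE_OK || rc == SQLITE_BUSY || rc == SQLITE_LOCKED);
            sqlite3_backup_finish(pBk);
        }
    }
    sqlite3_close(pSrc);
    sqlite3_close(pDest);
    return rc;   /* SQLITE_DONE on success */
}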

Regards
Nick
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] can sqlite result be sorted by using "prepare-step" API

2009-08-23 Thread nick huang

hi all,

 

I am kind of new to SQLite and just wonder whether the query result rows can be 
sorted when using sqlite3_prepare followed by sqlite3_step.

For example, the query statement is something like "select * from sometable 
order by somefield;" and we call prepare followed by step. Are all results 
returned from "step" in order of "somefield"?

 

 

As I read the documentation, it seems that step returns the first available row 
as soon as possible. That is why I wonder whether sorting is possible, since, 
from what we learn in books, the sorting of a dataset is done at the last stage 
of a SQL query, when the whole result set is available.

However, this also seems to contradict the fact that all the other query APIs 
like "exec" and "getTable", which all support sorting, are based on 
prepare-step. The only conclusion would then be that "exec", "getTable" etc. 
retrieve the dataset and sort it themselves after they call prepare-step.

 

Does anybody have any idea about this?

Nick Huang/Qingzhe Huang 


_
Stay in the loop and chat with friends, right from your inbox!
http://go.microsoft.com/?linkid=9671354
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] can sqlite result be sorted by using "prepare-step" API

2009-08-24 Thread nick huang

hi, 

 

Thanks for your reply. 

 
> nick huang wrote:
> > For example, query statement is something like "select * from
> > sometable order by somefield;" and we call prepare followed by step.
> > And are all results returned from "step" in order of "somefield"?
> 
> Of course. What would be the point of specifying "order by somefield" 
> otherwise?
> 

 

I couldn't agree with you more.


> > As I read the document, it seems the step will return the first
> > awailable row ASAP. That is why I wonder the sorting is not possible
> > as according to what we learned from books the sorting of dataset is
> > done at the last stage of SQL query when all result set is available.
> 
> ASAP stands for "as soon as possible". For a query with ORDER BY clause, 
> "possible" is after the whole resultset is retrieved and sorted 
> (assuming the order cannot be satisfied using an index).
> 

 

This is just the usual case, when all query results are retrieved and then 
sorted. What I am interested in is whether there is any method to get the 
sorted rows via "step", as I am working on a mobile phone system where a 
time-consuming operation would probably reset the system. If sqlite's prepare 
cannot do better than "execute" in this respect, then what is the point of 
using prepare/step? The documentation says sqlite is especially suitable for 
embedded systems, and that is why I wonder if sqlite has some revolutionary way 
to solve this problem.

 


> > However, this also seems to contradictive to that all other query API
> > like "exec", "getTable" etc. which all support "sorting" are all
> > based on prepare-step. Therefore the only conclusion is that "exec",
> > "getTable" etc. retrieve dataset and sort by themselves after they
> > call "prepare-step".
> 
> You can look at the source code for sqlite3_exec and sqlite3_get_table, 
> and convince yourself that they do no such thing.
> 

The source code is a bit complex, and at this stage I am still doing a 
feasibility study for evaluation.


> Igor Tandetnik 
> 


nick


_
Stay in the loop and chat with friends, right from your inbox!
http://go.microsoft.com/?linkid=9671354
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] sqlite-users Digest, Vol 20, Issue 63

2009-08-24 Thread nick huang

> Message: 9
> Date: Sun, 23 Aug 2009 21:33:06 -0500
> From: "Jay A. Kreibich" 
> Subject: Re: [sqlite] can sqlite result be sorted by using
> "prepare-step" API
> To: General Discussion of SQLite Database 
> Message-ID: <20090824023306.gb1...@uiuc.edu>
> Content-Type: text/plain; charset=us-ascii

 

hi Jay,


> 
> On Mon, Aug 24, 2009 at 01:55:41AM +, nick huang scratched on the wall:
> > 
> > I am a kind of new to Sqlite and just wonder if the query result row
> > could be sorted by using Sqlite_prepare followed by Sqlite_Step.
> 
> Prepare/step is the only way to get data out of the database, so yes.
> 

 

Thanks.

 


> > For example, query statement is something like "select * from
> > sometable order by somefield;" and we call prepare followed by step.
> > And are all results returned from "step" in order of "somefield"?
> 
> Yes. Did you try it and see?

 

Sorry, I am a bit lazy and busy right now, which is why I chose to post this 
question instead of trying the experiment myself.

 

> 
> > As I read the document, it seems the step will return the first awailable 
> > row ASAP. That is why I wonder the sorting is not possible as according 
> > to what we learned from books the sorting of dataset is done at the 
> > last stage of SQL query when all result set is available.
> 
> It depends. If SQLite is sorting based off an indexed column, it may
> be able to start returning rows right away before the full result set
> has been computed. There are plenty of cases when the query
> optimizer can rearrange the query pipeline to produce "presorted"
> in-order results without the whole result set at hand.
> 
> On the other hand, if you're sorting on a non-indexed column or
> computed result-set column, then the database engine has to compute
> the entire result, sort it, and then start to return it.
> 
> In the first case, the cost of doing the query will be spread across
> each call to sqlite3_step(). In the second case, the first call to
> sqlite3_step() may be quite long, but all calls after that should be
> quite fast.

 

Your explanation is really convincing and helpful!

If SQLite can return the sorted result without waiting to retrieve the whole 
dataset, that would be a great help, as I am currently involved in porting 
sqlite to a mobile phone where a time-consuming query would probably reset the 
phone. That is why I am interested in whether the first available row can be 
returned ASAP. And if "prepare-step" cannot do better than "execute" or 
"GetTable", then what is the point of using "prepare-step"?

And here comes my little question: suppose my query needs to be sorted by some 
field, and I create a view with an "order by" clause to ask sqlite to sort on 
that field. Can I get a sorted result by querying the view with prepare-step? 
The documentation doesn't say "order by" is disallowed in CREATE VIEW. I know 
most databases, like MS SQL Server, don't allow an "order by" clause in a 
CREATE VIEW statement. However, sqlite is a unique database that has given me a 
lot of surprises, and I hope it can do so again.

 

 

 


> 
> > However, this also seems to contradictive to that all other query API 
> > like "exec", "getTable" etc. which all support "sorting" are all
> > based on prepare-step. Therefore the only conclusion is that "exec",
> > "getTable" etc. retrieve dataset and sort by themselves after they
> > call "prepare-step". 
> 
> No, the short-cut functions are not that smart. The much simpler and
> more logical conclusion is that the database engine does the sorting.
> 
> sqlite3_step() returns rows as they become available. You're reading
> too deeply into "as they become available", however. The database
> engine is still required to return the correct result set. If the
> query needs to be sorted, it needs to be sorted. That may or may not
> require computing the full result set before returning the first row.
> It depends on the query. But either way, the database will do the
> right thing and return the correct results.
> 

 

According to your explanation, it seems the sorting prevents "prepare-step" 
from returning faster than "execute". If that understanding is correct, the 
implication is that without an "order by" clause the prepare-step API would 
return the first row much more quickly than "execute", because it doesn't have 
to wait for the whole result set to be available. And that would be especially 
useful on an embedded system.

 

Thanks again,

 

nick


> -j
> 
> -- 
> Jay A. Kreibi

Re: [sqlite] sqlite-users Digest, Vol 20, Issue 65

2009-08-25 Thread nick huang



> 
> > I am currently involved in porting sqlite on mobile phone
> 
> As an aside, you do realize that most smartphone OSes already have
> SQLite available? WinCE, iPhone OS, Symbian, PalmOS, Pre, and Android
> either have SQLite libs built-in to the SDK, or have a version of SQLite
> that compiles without modification. If you're using one of those
> environments, there is no reason to re-port SQLite unless you need a
> cutting-edge feature that just appeared in a newer version.
> 

 

Actually we are planning to use a Qualcomm solution for CDMA which wraps sqlite 
in its SDK.


> > And that's why I am interested to see if the first awailable row
> > can return ASAP.
> 
> This gets back to the original presumption that if you want it
> sorted, you want it sorted, and all the wishing and hoping isn't
> going to change the fact that sorting is often more computationally
> complex than returning rows in random order. The database engine
> will do what it takes to return a sorted result. If that takes more
> time, the database engine doesn't care. The database engine will not
> return an incorrect unsorted result if you asked for a sorted result,
> regardless of the API.
> 
> If you have an extremely tight reset timer you have to deal with, you
> can just have the rows returned and sort them yourself. The overall
> time is not likely to be much faster, but if you don't do the sort in
> one pass, you can keep the phone from resetting.
> 

 

BTW, I just happened to read the documentation about the progress callback on 
the sqlite website (http://www.sqlite.org/c3ref/progress_handler.html) and it 
seems this is the best solution for an embedded system, as it provides a chance 
to kick the watchdog periodically without resetting the system.

Any comment on this callback?
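For clarity, this is roughly how I imagine using it - a minimal sketch where
watchdog_kick() is a made-up placeholder for whatever the phone platform
provides:

#include <sqlite3.h>

/* Sketch: kick a watchdog from inside long-running queries.  Returning
** non-zero from the callback aborts the current statement with
** SQLITE_INTERRUPT; returning 0 lets it keep running. */
static int progress_cb(void *pArg)
{
    (void)pArg;
    /* watchdog_kick(); */     /* platform-specific placeholder */
    return 0;
}

static void install_progress_handler(sqlite3 *db)
{
    /* invoke the callback roughly every 1000 virtual-machine opcodes */
    sqlite3_progress_handler(db, 1000, progress_cb, 0);
}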


> > And if "prepare-step" can not do better than "execute" or "GetTable",
> 
> As I said before, prepare/step is the only way to get data out of the
> database. sqlite3_exec() and sqlite3_get_table() call prepare and step
> internally. exec and get_table are just wrapper functions. There is
> nothing special about them. You could write identical functions yourself
> with the public prepare/step API.
> 

 

agree.


> > then what is meaning to use "prepare-step"? 
> 
> That's a long discussion. 
> 
> I'd suggest you start here: http://sqlite.org/cintro.html
> 
> A quick summary:
> 
> 1) The "prepare" process is fairly expensive. Using bound variables,
> you can prepare a statement once and use it over and over without
> having to re-prepare it.
> 

understand.
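To spell point 1 out for myself, the pattern would be something like the
following minimal sketch (the table name and column are made up):

#include <sqlite3.h>

/* Sketch of point 1: prepare the statement once, then bind/step/reset in
** a loop instead of re-preparing the SQL text for every value. */
static int insert_many(sqlite3 *db, const int *aVal, int nVal)
{
    sqlite3_stmt *pStmt = 0;
    int i;
    int rc = sqlite3_prepare_v2(db, "INSERT INTO t(x) VALUES(?)", -1,
                                &pStmt, 0);
    for (i = 0; rc == SQLITE_OK && i < nVal; i++) {
        sqlite3_bind_int(pStmt, 1, aVal[i]);
        if (sqlite3_step(pStmt) != SQLITE_DONE) rc = SQLITE_ERROR;
        sqlite3_reset(pStmt);          /* make the statement reusable */
    }
    sqlite3_finalize(pStmt);
    return rc;
}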


> 2) Prepare/step is required to use bound variables. Bound variables
> prevent SQL injection attacks and a number of other issues. In
> general, you should be using bound variables and not doing string
> manipulations on SQL statements. If it was up to me, functions
> like sqlite3_exec, sqlite3_get_table, and sqlite3_vmprintf
> wouldn't be included in the API unless you compiled SQLite with
> SQLITE_BIG_SECURITY_HOLE or some similar flag.
> 

Interesting.


> 3) Prepare/step is generally faster and uses less memory. All the
> other APIs return results as strings, which requires further
> processing and memory management.
> 

good.

 


> 4) The step API allows you to break out of queries mid-way through
> and allows better processing and flow-control.
> 


> I'd also point out that the prepare/step paradigm is not unique to
> SQLite. Most database APIs have a similar construction, even if they
> use different terms.
> 

Yes. Are there similar ones in MySQL? I am not very sure.


> > if I create a view with clause of "order by" to ask sqlite to sort
> > on that index. Can I get sorted result by querying the view with
> > prepare-step?
> 
> Sure, but it is going to take the same amount of processing as just
> issuing whatever the underlying query is. Views don't pre-sort or
> cache results, they're more like named sub-SELECTs.
> 

 

This is very helpful.

 


> > > sqlite3_step() returns rows as they become available. You're reading
> > > too deeply into "as they become available", however. The database
> > > engine is still required to return the correct result set. If the
> > > query needs to be sorted, it needs to be sorted. That may or may not
> > > require computing the full result set before returning the first row.
> > > It depends on the query. But either way, the database will do the
> > > right thing and return the correct results.
> 
> > According to your explanation, it seems the sorting prevents 
> > "prepare-step" from returning faster than "execute".
> 
> If you're talking about "time to first row returned", then yes.

 

good.


> 
> The time it takes for the first row to be available via step when using
> prepare/step and the time it takes for your first callback using exec
> is going to be the same. 
> 

> exec is actually a pretty thin wrapper. All it does is call prepare
> on the SQL you've provided, then calls step. Each time it calls step,
> it extracts the results and calls your callback. T

[sqlite] sqlite3 performance

2008-01-17 Thread Philip Nick
Greetings,

Currently I am using sqlite3 in a multi-process/multi-threaded server
setting.
I use a Mutex to ensure only one process/thread can access the database at
one time.

The current flow of events:
Get Mutex
Open Database connection
Run Query
Close Database connection
Release Mutex

This seems to work well, except that I have noticed some performance issues
when the database grows beyond a megabyte.

I have noticed my program basically reads the whole database every time the
query is run: the IO read bytes increase by 900k for every query we run. We
also chew a good chunk of the CPU. I have tried both keeping the database
connection open and opening/closing it for every query, but there was no
change.

If I launch the CLI and run the query it runs instantly, and monitoring the IO
read bytes I see only ~20 bytes read to execute the query, while my code reads
over 900k for every call. I have been looking into the CLI source to see what
is done differently, but was hoping someone on here might have some insight.

Thanks
Phil


--


Re: [sqlite] sqlite3 performance

2008-01-17 Thread Philip Nick
Thanks for replying,

I have tried moving the Open/Close outside the mutex no change.

As for using our own mutex: we started with early versions of sqlite and had
to come up with our own solution. I was planning to look into using the
built-in mutexes, but first I need to solve the performance issues.

The only difference I can see between our code and the CLI is it uses
sqlite3_exec and a call back, while we do

sqlite3_prepare
Then we call sqlite3_column_count.
We loop through 0-num_col and call:
   sqlite3_column_name
   sqlite3_column_decltype

Then we do  while(sqlite3_step() == SQLITE_ROW)

From my understanding, sqlite3_exec() is doing the same thing and sending the
info to the callback.

Any ideas?

Thanks

On Jan 17, 2008 5:09 PM, <[EMAIL PROTECTED]> wrote:

> "Philip Nick" <[EMAIL PROTECTED]> wrote:
> > Greetings,
> >
> > Currently I am using sqlite3 in a multi-process/multi-threaded server
> > setting.
> > I use a Mutex to ensure only one process/thread can access the database
> at
> > one time.
> >
> > The current flow of events:
> > Get Mutex
> > Open Database connection
> > Run Query
> > Close Database connection
> > Release Mutex
> >
>
> SQLite does the mutexing automatically (assuming you are using
> version 3.5.0 or later).
> --
> D. Richard Hipp <[EMAIL PROTECTED]>
>
>
>
> -----
> To unsubscribe, send email to [EMAIL PROTECTED]
>
> -
>
>


-- 
Philip Nick
E-Mail: [EMAIL PROTECTED]
Cell: 352-262-9067
Web: http://www.ruffasdagut.com


Re: [sqlite] sqlite3 performance

2008-01-18 Thread Philip Nick
I tracked down the problem: I was linking against an old version of sqlite3
(3.0.8), the version we used previously. It apparently doesn't cache between
calls.

My code now performs IO very similarly to the CLI. Once the db is open, each
subsequent query does not require re-reading the whole db.

My next task is keeping the database open for longer. The basic application is
an RPC server: it spins up a thread, does some work, sends a reply and closes
the thread. Ideally we want to open the db when we start the server, close it
when we shut down, and pass the connection into each thread so we don't have to
keep opening the db. In the past we had lots of issues doing this, hence the
open-for-each-query model.

Any advice would be appreciated.
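In case it helps the discussion, this is the shape I have in mind - a minimal
sketch assuming a thread-safe build of SQLite (SQLITE_THREADSAFE=1), with
made-up helper names:

#include <sqlite3.h>

/* Sketch: open one connection at server start-up and share it across the
** worker threads, closing it at shutdown.  Relies on a thread-safe build,
** where SQLite serialises access to the handle automatically (3.5.0+,
** as mentioned earlier in the thread). */
static sqlite3 *g_db = 0;

int server_db_open(const char *zPath)
{
    int rc = sqlite3_open(zPath, &g_db);
    if (rc == SQLITE_OK) sqlite3_busy_timeout(g_db, 5000);
    return rc;
}

void server_db_close(void)
{
    sqlite3_close(g_db);
    g_db = 0;
}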

Phil

On Jan 18, 2008 8:46 AM, Jay Sprenkle <[EMAIL PROTECTED]> wrote:

> On Jan 17, 2008 3:48 PM, Philip Nick <[EMAIL PROTECTED]> wrote:
> > Greetings,
> >
> > Currently I am using sqlite3 in a multi-process/multi-threaded server
> > setting.
>
> opening a file is very slow. You need to move it out of your loop.
>
> > I have noticed my program basically reads the whole database every time
> the
> > query is run. The IO Read bytes increases by 900k for every query we
> run. We
> > also chew a good chunch of the cpu  I have tried keeping the database
> > connection open and open/closing for every query. But there was no
> change.
>
> > if I launch the CLI and run the query it runs instantly and monitoring
> the
> > IO read bytes is see only ~20 bytes of read to execute the query, when
> my
> > code is using over 900k for every call. I have been looking into the CLI
> > source to see what is done differently, but was hoping someone on here
> might
> > have some insight.
>
> The operating system will cache files it reads in memory.
> If your process reads the file and then you open the CLI
> it will still be in memory from before and will not give you
> comparable times.
>
> If your process reads the whole database for every call you didn't
> optimize
> your sql. You need to create indexes to optimize your query
>
> --
> The PixAddixImage Collector suite:
> http://groups-beta.google.com/group/pixaddix
>
> SqliteImporter and SqliteReplicator: Command line utilities for Sqlite
> http://www.reddawn.net/~jsprenkl/Sqlite<http://www.reddawn.net/%7Ejsprenkl/Sqlite>
>
> Cthulhu Bucks!
> http://www.cthulhubucks.com
>
>
> -
> To unsubscribe, send email to [EMAIL PROTECTED]
>
> -
>
>


-- 
Philip Nick
E-Mail: [EMAIL PROTECTED]
Cell: 352-262-9067
Web: http://www.ruffasdagut.com


[sqlite] Re: SQLite tests failures on Linux IA64

2006-04-12 Thread Nick Brown
On Saturday 25 March 2006 13:33:47 someone scribbled:
> > > > types3-1.3...
> > > > Expected: [wideInt integer]
> > > >  Got: [int integer]
> 
> The types3-1.3 error occurs because on ia64, a regular
> old integer is sufficient to hold the value whereas on
> an ix86 machine, a long long int is required.  No biggie.

Is there a patch that fixes this test failure, such that "make test" will pass 
on both x86_64 and x86?

Cheers,
Nick



[sqlite] INTEGER store size

2008-09-18 Thread Nick Shaforostoff
Hi. I'm deciding between having several INTEGER fields and only one field that
is a bit combination (I'd access it using e.g. main.bits&0x0011,
main.bits&0x0100 and so on).

The docs say "INTEGER. The value is a signed integer, stored in 1, 2,
3, 4, 6, or 8 bytes depending on the magnitude of the value."
So: which value is used to determine the magnitude?
Will the database size be larger if I use several fields (each containing
numbers smaller than, say, 255) instead of a single bit-field column?

Please add this clarification to the documentation.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Running out of space

2009-11-25 Thread Nick Shaw
Deleting data may not free enough space in the database file to allow new
records to be added [the new records may contain more data].  You could
continually delete old records until an INSERT succeeds (indicating there is
now enough space)?  Otherwise, I'd say you'll just have to monitor the hard
disk space and ensure you have enough free.  Even if you detect the disk
becoming full, you then have to decide what to do: delete old records, or raise
some kind of alert to the user?
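As a rough illustration of the first option (the table and column names are
made up; SQLITE_FULL is the error returned when the disk or page limit is hit):

#include <sqlite3.h>

/* Sketch: keep pruning the oldest rows until the INSERT goes through,
** giving up after a few attempts.  "log" and "ts" are placeholder names. */
static int insert_with_pruning(sqlite3 *db, const char *zInsertSql)
{
    int attempts = 10;
    while (attempts--) {
        int rc = sqlite3_exec(db, zInsertSql, 0, 0, 0);
        if (rc != SQLITE_FULL) return rc;     /* success or some other error */
        sqlite3_exec(db,
            "DELETE FROM log WHERE ts IN "
            "(SELECT ts FROM log ORDER BY ts LIMIT 100);", 0, 0, 0);
    }
    return SQLITE_FULL;
}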

Don't forget you can always VACUUM the database (provided there's sufficient
disk space for SQLite to write out the vacuumed DB file). If you do that
regularly, as well as deleting unwanted records, you shouldn't run out of disk
space.  (Unless of course you really DO need all the data - in which case your
disk needs to be bigger!)

Thanks,
Nick.


-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Antti Nietosvaara
Sent: 25 November 2009 09:41
To: sqlite-users@sqlite.org
Subject: [sqlite] Running out of space

Hello,

I have an application which keeps an index of data in an SQLite database.  I'm
trying to figure out the best way to handle the possible scenario of the
database filling up the entire hard disk.  I could just delete some of the
oldest rows, but I wonder if it's possible that even DELETE statements fail due
to insufficient disk space for the journal file.

Is there a robust way to handle these situations?


Thanks
- Antti
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] sqlite3 is blocked by transaction when wanting to close the sqlite3 *

2009-11-26 Thread Nick Shaw
By "other process" do you mean a separate DLL or similar?  You can't
free memory allocated in a DLL from an application, even when that
application has the DLL loaded - Windows will complain.  This could be
what's happening.

Could you instead write the database close call within this other
process, and call it from the main process when you shut down?

Nick.

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of liubin liu
Sent: 26 November 2009 02:29
To: sqlite-users@sqlite.org
Subject: [sqlite] sqlite3 is blocked by transaction when wanting to
close the sqlite3 *


My application includes a main process and some other processes. I open the
database in another process, but at the end I close the database in the main
process.

The problem happens when I close the database: the main process is blocked.
And I can see the journal file is still there, so I guess there are still some
open transactions.

How can I resolve the problem?

Thanks in advance!
-- 
View this message in context:
http://old.nabble.com/sqlite3-is-blocked-by-transaction-when-wanting-to-
close-the-sqlite3-*-tp26523551p26523551.html
Sent from the SQLite mailing list archive at Nabble.com.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Huge Table with only one field -- efficient way to create index ?

2009-11-26 Thread Nick Shaw
Out of interest, is all the data in the artnr field numeric?  If so, why
are you storing it as text and not an integer?  Integer searching is
much, much faster.

Nick.

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Thomas Lenders
Sent: 26 November 2009 14:50
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Huge Table with only one field -- efficient way to
create index ?

Simon Slavin schrieb:
> On 26 Nov 2009, at 2:04pm, Thomas Lenders wrote:
>
>   
>> I am using SQLite on a mobile device.
>>
>> I have this one table which has only one field, but I need to search
in 
>> the table very quickly.
>> When creating an index on this field the size of the database is
doubled 
>> - which makes sense.
>>
>> However, is there some way to implement this scenario more
efficiently 
>> using SQLite ?
>> 
>
> When you talk about searching, are you talking about searching in an
ordered way (e.g. all the records in alphabetical order) or are you
matching on content using something like LIKE '%fred%' ?  For the LIKE
matching, no INDEX is used.
>
> If you have just one column in the field, and don't need it indexed,
do you really need SQL at all ?  Could you not just store the data as a
text file ?
>
> Simon.
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
>   

Thanks for your reply.

I am looking for an exact match, as in I need to know whether a number exists 
in the table or not. Something like "select count(*) from ART where 
artnr='0123456789'".

I could store the data in a text file instead, but then I would have to search 
the text file on "disk", e.g. with a binary search algorithm or something. I 
cannot just load the text file into memory because in this case the storage 
space on "disk" and the available memory actually come from the same pool, so I 
would still have to store the data twice.

Plus, there are other tables I use as well, so I will use SQLite anyway. It 
would be rather nice to be able to use it for this special "table" too.

The catch is, if I search without an index it takes 5+ seconds to find the 
record, which sadly isn't fast enough.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Huge Table with only one field -- efficientway to create index ?

2009-11-26 Thread Nick Shaw
As Igor says, it's most likely the way you're creating your SQL query.
For example, if you're using C/C++ and you're trying to write a 64-bit
number to a string using the %d flag in sprintf(), it won't work
properly - you'd need to use %I64d for 64-bit numbers (with the Microsoft
runtime).

32-bit integers go up to (unsigned) 4,294,967,295, or (signed)
+2,147,483,647 - any higher and it's a 64-bit number.

For info, 64-bit integers go up to (unsigned) 18,446,744,073,709,551,615
(20 digits) or (signed) +9,223,372,036,854,775,807 (19 digits).
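An alternative that avoids the formatting issue altogether is to bind the value
rather than printing it into the SQL text - a minimal sketch, reusing the
ART/artnr names from earlier in the thread:

#include <sqlite3.h>

/* Sketch: bind a 64-bit value directly instead of formatting it into the
** SQL string, so no %d / %I64d formatting is involved at all. */
static int insert_artnr(sqlite3 *db, sqlite3_int64 artnr)
{
    sqlite3_stmt *pStmt = 0;
    int rc = sqlite3_prepare_v2(db,
                 "INSERT INTO ART(artnr) VALUES(?)", -1, &pStmt, 0);
    if (rc == SQLITE_OK) {
        sqlite3_bind_int64(pStmt, 1, artnr);
        rc = (sqlite3_step(pStmt) == SQLITE_DONE) ? SQLITE_OK : SQLITE_ERROR;
    }
    sqlite3_finalize(pStmt);
    return rc;
}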

Thanks,
Nick.

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Igor Tandetnik
Sent: 26 November 2009 15:22
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Huge Table with only one field -- efficientway to
create index ?

Thomas Lenders wrote:
> I tried creating the field as INTEGER, but 10 digit numbers where
> imported as 0.

SQLite uses 64-bit integers, for about 19 decimal digits. The problem
must be with the software that populates the table.

> I assume the field was created as 32 bit integer and invalid entries
got
> a default value.

SQLite doesn't distinguish between 32-bit and 64-bit integers. There's
nothing special you have to put into CREATE TABLE statement to enable
64-bit integers - they just work.

> I also tried LONGINT, BIGINT and INT64 but no joy. What would be the
> correct syntax for
> longer ints ?

Like I said, no special syntax is needed.

Igor Tandetnik

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] sqlite3 bind and insert double values

2009-11-30 Thread Nick Shaw
How different are they?  Could this be [unavoidable] binary floating
point storage limitations?

Nick.

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Michael Lippautz
Sent: 30 November 2009 13:50
To: sqlite-users@sqlite.org
Subject: [sqlite] sqlite3 bind and insert double values

Hej,

I am using sqlite_bind_double on a prepared statement (C API). The
insert completes, however, the value stored in the sqlite table is
different from the output of a casual printf("%f",..)

Am I wrong when assuming that they should be the same. (double values
are gathered by a gps and are in range of %1.6f up to %3.9f

Thanks in advance!
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] sqlite3 bind and insert double values

2009-11-30 Thread Nick Shaw
Agreed - the difference in coordinates between the two values amounts to
3/10,000's of an arcsecond, which is about 9 millimeters.  Most GPS devices
can't give accuracy better than 5 meters!

It's also probably nicer storing GPS coordinates as numeric instead of
text, as then you can use some useful equations on your data set to work
out such things as which GPS coordinates fall within a certain radius of
a certain position (as many shop websites use on their "find your
nearest store" page).  Google API's website has some example functions
to do just this on SQL data stored as GPS floats.

Nick.

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Simon Slavin
Sent: 30 November 2009 14:59
To: General Discussion of SQLite Database
Subject: Re: [sqlite] sqlite3 bind and insert double values


On 30 Nov 2009, at 2:05pm, Michael Lippautz wrote:

> 47.824669 / 47.824669167

Same number.  If you need better precision than that, declare the column
type as TEXT and bind your data as text.

But since you're using GPS coordinates I can tell you it's not
necessary.  That seventh digit in a GPS coordinate gives you more
precision than a GPS device can actually deliver.  No consumer GPS
device is going to quote you 47.8246690 in one place and 47.8246691 to
mean a different place.  So you don't need to worry about your rounding
error.

Simon.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Possibly a bug in SQLite?

2009-12-02 Thread Nick Shaw
You don't need to define the PRIMARY KEY as NOT NULL - it's implied.
The column constraint flow diagram in the documentation in fact doesn't
allow it:
http://www.sqlite.org/syntaxdiagrams.html#column-constraint

Maybe that's the problem?  Try recreating the table without the NOT NULL
constraint on the "id" column and see if it works after that.

Thanks,
Nick.

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Brandon Wang
Sent: 01 December 2009 17:06
To: sqlite-users@sqlite.org
Subject: [sqlite] Possibly a bug in SQLite?

Hello,

I've come upon an interesting scenario.

.sqlite> .schema rg_configuration
CREATE TABLE 'rg_configuration' (
"id" INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
"configurationName" TEXT NOT NULL,
"host" TEXT,
"user" TEXT,
"parentArch" INTEGER NOT NULL,
"parentJob" INTEGER NOT NULL,
"parentSubblock" INTEGER NOT NULL,
"parentBlock" INTEGER NOT NULL,
"canBeRun" INTEGER DEFAULT (1)
);
[Addititonal indices, triggers, etc. here]

One of my scripts attempts to execute the following:

INSERT INTO main.rg_configuration (configurationName, parentArch,
parentJob, parentSubblock, parentBlock, canBeRun) VALUES
('full_chip.nofeedthru', 9565, 3014, 33, 8, 1);

Upon which I get the error:

SQL error: PRIMARY KEY must be unique

I'm not specifying the primary key, id. Is there some error on my part?

Thanks!

-Brandon
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Possibly a bug in SQLite?

2009-12-02 Thread Nick Shaw
Ok, my mistake.  But would AUTOINCREMENT imply NOT NULL?  Could you have
an AUTOINCREMENT field with post-updated null values?

Nick.

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Jay A. Kreibich
Sent: 02 December 2009 15:02
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Possibly a bug in SQLite?

On Wed, Dec 02, 2009 at 09:38:54AM -, Nick Shaw scratched on the
wall:
> You don't need to define the PRIMARY KEY as NOT NULL - it's implied.

  Yes, you do.  You shouldn't, but you do:

http://sqlite.org/lang_createtable.html

According to the SQL standard, PRIMARY KEY should imply NOT NULL.
Unfortunately, due to a long-standing coding oversight, this is not
the case in SQLite. SQLite allows NULL values in a PRIMARY KEY
column.

> The column constraint flow diagram in the documentation in fact
doesn't
> allow it:
> http://www.sqlite.org/syntaxdiagrams.html#column-constraint

  The diagrams are for clear human readability, not to define the
  accepted language.

-j

-- 
Jay A. Kreibich < J A Y  @  K R E I B I.C H >

"Our opponent is an alien starship packed with atomic bombs.  We have
 a protractor."   "I'll go home and see if I can scrounge up a ruler
 and a piece of string."  --from Anathem by Neal Stephenson
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Effect of VACUUM when very low on disk space

2009-12-07 Thread Nick Shaw
Hi all,

 

I'm currently writing some code that attempts to keep an SQLite database
file below a certain size (the embedded PC it runs on has the wonderful
side effect that when it runs out of disk space it blue-screens Windows
and you can't boot the device after that - how helpful!).

 

When the file exceeds a certain size, I DELETE a specific number of
records, then VACUUM the file to get the size back below the required
limit.  This works fine; however, what happens to the VACUUM command if
there is insufficient disk space for SQLite to write out the cleaned-up
copy of the database?  I assume it will fail, but the documentation
doesn't specifically say how much disk space is required during a VACUUM
operation.  The newly vacuumed file should end up equal to or smaller than
the existing file, so I assume I'll need at least the current database's
size in free disk space, but will VACUUM ever require more space than that
(e.g. for other temporary files)?

 

Also, is there any SqLite command I can use to get the database file's
size?  I'm currently using the Win32 API call GetFileSizeEx() to get the
file's size which works fine, but does SqLite itself know the database
file's size when it has the database open?  It would seem more
gracefully coded if I didn't have to open a separate handle to the
database file just to get the file size out.
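
Regarding the file-size question: SQLite can report the size itself, since the
database size in bytes is page_count * page_size, and both are available as
pragmas on the open connection. A minimal sketch, assuming a build recent enough
to support PRAGMA page_count:

#include <stdio.h>
#include <sqlite3.h>

/* return the main database file's size in bytes, as seen by SQLite itself */
static sqlite3_int64 db_size_bytes(sqlite3 *db)
{
    sqlite3_stmt *stmt = NULL;
    sqlite3_int64 pages = 0, page_size = 0;

    sqlite3_prepare_v2(db, "PRAGMA page_count", -1, &stmt, NULL);
    if (sqlite3_step(stmt) == SQLITE_ROW)
        pages = sqlite3_column_int64(stmt, 0);
    sqlite3_finalize(stmt);

    sqlite3_prepare_v2(db, "PRAGMA page_size", -1, &stmt, NULL);
    if (sqlite3_step(stmt) == SQLITE_ROW)
        page_size = sqlite3_column_int64(stmt, 0);
    sqlite3_finalize(stmt);

    return pages * page_size;
}

PRAGMA freelist_count, read the same way, gives the number of free pages, which
is roughly how much a VACUUM could reclaim. As for the space needed during the
VACUUM itself, the usual rule of thumb is to allow up to about twice the current
database size, since the compacted copy is built in a temporary file (plus a
journal) before it replaces the original.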

 

Thanks,

Nick.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] custom fts3 tokenizer, used in read-only?

2009-12-16 Thread Nick Hodapp
I wrote a custom tokenizer for fts3, and I intend it to be used when I
populate my database.

I'd rather not ship the tokenizer in my embedded application which accesses
the database in read-only mode.  But it seems that fts3 checks for the
existence of the tokenizer even when accessing a fts3 table for a read-only
query.

Is it possible to change this behavior (or, is the tokenizer actually
required for a read-only query?)

If I register a dummy tokenizer with the same name in my read-only
application, would that work?

Nick Hodapp
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] custom fts3 tokenizer, used in read-only?

2009-12-16 Thread Nick Hodapp

Ah, Thanks.  That makes sense.

My tokenizer extracts text-node values from XHTML content and delegates the
tokenizing of each node value to the Porter tokenizer.  Since the RHS of the
MATCH operator will never be XHTML, I should be able to just defer the
tokenizing of that input directly to the Porter tokenizer?  And I can do
that by registering a custom tokenizer-module in my read-only application
that specifies the Porter functions, but has the name of the custom
tokenizer I used to index the XHTML data.

Does that seem reasonable?
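
Assuming that plan, the read-only side doesn't need a copy of the XHTML-aware
code at all: it can ask FTS3 for the built-in porter module and register that
same module pointer again under the custom tokenizer's name, via the
fts3_tokenizer() scalar function. A rough sketch (pass whatever name the custom
tokenizer was originally registered under):

#include <string.h>
#include <sqlite3.h>

/* register the built-in porter tokenizer under the custom tokenizer's name,
   so MATCH queries against the existing fts3 table still work read-only */
static int alias_porter_as(sqlite3 *db, const char *custom_name)
{
    const void *porter = NULL;   /* pointer to the porter sqlite3_tokenizer_module */
    sqlite3_stmt *stmt = NULL;
    int rc;

    /* one-argument form returns the registered module pointer as a blob */
    rc = sqlite3_prepare_v2(db, "SELECT fts3_tokenizer('porter')", -1, &stmt, NULL);
    if (rc != SQLITE_OK) return rc;
    if (sqlite3_step(stmt) == SQLITE_ROW)
        memcpy(&porter, sqlite3_column_blob(stmt, 0), sizeof(porter));
    sqlite3_finalize(stmt);
    if (porter == NULL) return SQLITE_ERROR;

    /* two-argument form registers that pointer under a new name */
    rc = sqlite3_prepare_v2(db, "SELECT fts3_tokenizer(?, ?)", -1, &stmt, NULL);
    if (rc != SQLITE_OK) return rc;
    sqlite3_bind_text(stmt, 1, custom_name, -1, SQLITE_STATIC);
    sqlite3_bind_blob(stmt, 2, &porter, sizeof(porter), SQLITE_STATIC);
    sqlite3_step(stmt);
    return sqlite3_finalize(stmt);
}

(As the later message in this thread points out, FTS3 may re-tokenize stored
content as well as the query term, so the alias only behaves identically when
the porter output matches what was originally indexed.)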

Nick Hodapp




D. Richard Hipp wrote:
> 
> 
> On Dec 16, 2009, at 11:15 AM, D. Richard Hipp wrote:
> 
>>
>> On Dec 16, 2009, at 11:12 AM, Nick Hodapp wrote:
>>
>>> I wrote a custom tokenizer for fts3, and I intend it to be used  
>>> when I
>>> populate my database.
>>>
>>> I'd rather not ship the tokenizer in my embedded application which
>>> accesses
>>> the database in read-only mode.  But it seems that fts3 checks for  
>>> the
>>> existence of the tokenizer even when accessing a fts3 table for a
>>> read-only
>>> query.
>>>
>>> Is it possible to change this behavior (or, is the tokenizer actually
>>> required for a read-only query?)
>>>
>>> If I register a dummy tokenizer with the same name in my read-only
>>> application, would that work?
>>
>>
>> The tokenizer is needed for both reading and writing.  It is needing
>> to parse the LHS of the MATCH operator when reading.
> 
> Correction:  ...the RHS of the MATCH operator
> 
>>
>> You *must* register exactly the same tokenizer when reading as you
>> used for writing or your queries will not work.
>>
>> D. Richard Hipp
>> d...@hwaci.com
>>
>>
>>
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 
> D. Richard Hipp
> d...@hwaci.com
> 
> 
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 
> 

-- 
View this message in context: 
http://old.nabble.com/custom-fts3-tokenizer%2C-used-in-read-only--tp26813756p26817624.html
Sent from the SQLite mailing list archive at Nabble.com.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] a possible bug of select min(id) from...

2009-12-16 Thread nick huang

hi,

 

I am somewhat new to sqlite. I include sqlite in my project and have run into 
the following problem.

 

Say a table named "url" is created with "id" as primary key. Then I run a query 
like "select min(id) from url".

Quite unexpectedly, the query returns "SQLITE_ROW" and min(id) is 0. However, my 
table is still empty, which confused me for a while. Please note my project is a 
C/C++ program and I just include the sqlite source code in it. I am not sure of 
the exact version number; in "sqlite3.c" there is the line "This amalgamation 
was generated on 2009-08-10 13:49:19 UTC."

 

In sqlite3.h, there is a line @(#) $Id: sqlite.h.in,v 1.462 2009/08/06 17:40:46 
drh Exp $

 

Some other details as following:

Open DB with sqlite3_open.

Open table with sqlite3_prepare/sqlite3_step

 

 

Best regards,

Nick Huang/Qingzhe Huang nickhuan...@hotmail.com 
(Chinese) http://www.staroceans.com/ The dirt and dust from my pilgrimage form 
oceans of stars... (English) http://www.staroceans.com/english.htm 


  
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] a possible bug of select min(id) from...

2009-12-17 Thread nick huang

Hi Igor,

 

You are absolutely right that its type is NULL and min(id) returns 0 because I use 
sqlite3_column_int64 to retrieve the data. However, isn't it confusing that 
"sqlite3_step" returns SQLITE_ROW, which usually means some dataset is returned? 
I mean, if it returns one NULL row, why doesn't sqlite simply return 
"SQLITE_DONE" or something instead of "SQLITE_ROW"? You see, as a user I have to 
double-check the column type even after I get "SQLITE_ROW". In this sense, I 
think it is a bug.
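
For reference, the extra check is a single call on the result column; a minimal
sketch of the pattern being discussed, using the table from this thread:

#include <stdio.h>
#include <sqlite3.h>

/* prints min(id) from the url table, or notes that the table is empty */
static void print_min_id(sqlite3 *db)
{
    sqlite3_stmt *stmt = NULL;

    if (sqlite3_prepare_v2(db, "SELECT min(id) FROM url", -1, &stmt, NULL) != SQLITE_OK)
        return;

    if (sqlite3_step(stmt) == SQLITE_ROW) {
        /* min() over an empty table yields one row whose single column is NULL */
        if (sqlite3_column_type(stmt, 0) == SQLITE_NULL)
            printf("table is empty\n");
        else
            printf("min(id) = %lld\n", (long long)sqlite3_column_int64(stmt, 0));
    }

    sqlite3_finalize(stmt);
}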

 

thank you,

 

> To: sqlite-users@sqlite.org
> From: itandet...@mvps.org
> Date: Wed, 16 Dec 2009 21:00:43 -0500
> Subject: Re: [sqlite] a possible bug of select min(id) from...
> 
> nick huang  wrote:
> > Say a table named "url" is created with "id" as primary key. Then a
> > query like this "select min(id) from url". 
> > 
> > Quite unexpectedly, the result of query returns "SQLITE_ROW" and the
> > min(id) is 0. However, my table is still empty which confuses me for
> > a while.
> 
> This statement should in fact return one row, with the value of NULL. How do 
> you retrieve the value? If you use sqlite3_column_int, it'll convert NULL to 
> 0 for you. Check with sqlite3_column_type.
> 
> Igor Tandetnik
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

  
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Vaccum with VB,NET

2009-12-24 Thread Nick Shaw
VACUUM is an SqLite SQL command, so just use it inside an sqlite3_exec()
statement using "VACUUM" as the SQL string parameter.
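
At the C API level it amounts to something like this (a minimal sketch, with the
error handling kept short):

#include <stdio.h>
#include <sqlite3.h>

/* run VACUUM on an already-open database handle */
static int vacuum_db(sqlite3 *db)
{
    char *errmsg = NULL;

    /* rebuilds the database file, reclaiming free pages */
    int rc = sqlite3_exec(db, "VACUUM;", NULL, NULL, &errmsg);
    if (rc != SQLITE_OK) {
        fprintf(stderr, "VACUUM failed: %s\n", errmsg);
        sqlite3_free(errmsg);
    }
    return rc;
}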

Thanks,
Nick.

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Ernany
Sent: 24 December 2009 01:17
To: General Discussion of SQLite Database
Subject: [sqlite] Vaccum with VB,NET

Hello guys,

How do I run SQLite with VB2005 .NET? I need to run *"VACUUM"*.

Thanks a lot

Ernany
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] speed, writing spends 30ms, reading spends 2ms, still slow

2009-12-24 Thread Nick Shaw
The read/write speed will depend quite a lot on disk access speed (unless 
you're using an in-memory DB of course), not just the database 
size/complexity/query.

Nick.

-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of liubin liu
Sent: 24 December 2009 02:35
To: sqlite-users@sqlite.org
Subject: [sqlite] speed, writing spends 30ms, reading spends 2ms, still slow


the process runs on a board with Atmel AT91SAM9260 (Clocks: CPU 198 MHz,
...), Memory: 16MB, yaffs on NAND 256MB.

The time to write one row is more than 20 ms, and the time to read one
row is more than 3 ms.

Is this the ceiling for SQLite3? If not, how can I improve it, especially
the writing routine?

Thanks in advance. :)

code___

// Test how long sqlite3 takes to write, read and modify the database
// (each run does a single open/close of the database), 20091223
//
// arm-linux-gcc -Wall -g -lpthread -lsqlite3 sqlite3speed.c -o sqlite3speed

#include <stdio.h>
#include <sqlite3.h>
#include <stdlib.h>   // for system(*)
#include <sys/time.h> // for gettimeofday(*)

void test_sqlite3_time_write (void);
void test_sqlite3_time_read (void);

sqlite3 *g_db = NULL;
struct timeval  g_tv1;
struct timeval  g_tv2;
struct timezone g_tz;

int main (void)
{
int ret = -1;
int c = 0;
char *ct = "CREATE TABLE testspeed (id INTEGER PRIMARY KEY, type
INTEGER, content TEXT)";

ret = sqlite3_open ("testspeed.db", &g_db);
ret = sqlite3_exec (g_db, ct, NULL, NULL, NULL);

printf ("test sqlite3's writing、reading time\n");
printf ("1、 writing\n");
printf ("2、 reading\n");
while ((c = getchar()) != 'q')
{
switch (c)
{
case '1':
test_sqlite3_time_write ();
break;
case '2':
test_sqlite3_time_read ();
break;
default:
break;
}
}

ret = sqlite3_close (g_db);

return 0;
}

void test_sqlite3_time_write (void)
{
int  ret  = -1;
char *i_f = "INSERT OR REPLACE INTO testspeed VALUES (%d, %d, %Q)";
char *i_s = NULL;

/*
gettimeofday (&g_tv1, &g_tz);
i_s = sqlite3_mprintf (i_f, 1, 22, "reliable测试");
ret = sqlite3_exec (g_db, i_s, NULL, NULL, NULL);
gettimeofday (&g_tv2, &g_tz);
ret = system ("sqlite3 testspeed.db \"SELECT * FROM testspeed\"");
if (0 == ret)
{
printf ("using sqlite3_exec() writing one row spends: %d us\n\n",
(int) ((g_tv2.tv_sec - g_tv1.tv_sec)*100 + g_tv2.tv_usec
- g_tv1.tv_usec));
system ("sqlite3 testspeed.db \"DELETE FROM testspeed\"");
}
*/


sqlite3_stmt *stmt = NULL;


gettimeofday (&g_tv1, &g_tz);

i_s = sqlite3_mprintf (i_f, 1, 22, "test可靠吗?");
ret = sqlite3_prepare_v2 (g_db, i_s, -1, &stmt, NULL);
if (SQLITE_OK == ret)
{
ret = sqlite3_step (stmt);
if (SQLITE_DONE == ret)
{
sqlite3_finalize (stmt);

gettimeofday (&g_tv2, &g_tz);

ret = system ("sqlite3 testspeed.db \"SELECT * FROM
testspeed\"");
if (0 == ret)
{
printf ("using sqlite3_prepare_v2() + sqlite3_step() writing
one row spends: %d uS\n\n",
(int) ((g_tv2.tv_sec - g_tv1.tv_sec)*100 +
g_tv2.tv_usec - g_tv1.tv_usec));
system ("sqlite3 testspeed.db \"DELETE FROM testspeed\"");
}
}
else
sqlite3_finalize (stmt);
}
}

void test_sqlite3_time_read (void)
{
int  ret  = -1;
char *i_f = "SELECT * FROM testspeed WHERE id=%d";
char *i_s = NULL;


sqlite3_exec (g_db, "INSERT OR REPLACE INTO testspeed VALUES (1, 22,
\"测试, yes\")", NULL,NULL,NULL);

/*
char **sgt_medi = NULL;
char **sgt_resu = NULL;
int column = 0;
int row = 0;

gettimeofday (&g_tv1, &g_tz);
i_s = sqlite3_mprintf (i_f, 1);
ret = sqlite3_get_table (g_db, i_s, &sgt_medi, &row, &column, NULL);
if (SQLITE_OK == ret)
{
sgt_resu = sgt_medi + column;
printf ("%d, %d, %d, %s\n", row, atoi (sgt_resu[0]), atoi
(sgt_resu[1]), sgt_resu[2]);

gettimeofday (&g_tv2, &g_tz);
printf ("using sqlite3_get_table() reading one row spends: %d
uS\n\n",
(int) ((g_tv2.tv_sec - g_tv1.tv_sec)*100 + g_tv2.tv_usec
- g_tv1.tv_usec));

system ("sqlite3 testspeed.db \"DELETE FROM testspeed\"");
}
*/


sqlite3_stmt *stmt = NULL;

gettimeofday (&g_tv1, &g_tz);

i_s = sqlite3_mprintf (i_f, 1);
ret = sqlite3_prepare_v2 (g_db, i_s, -1, &stmt, NULL);

Re: [sqlite] sqlite3_prepare_v2

2009-12-24 Thread Nick Shaw
I'd suggest having your own sqlite3.dll in your own application's working 
folder, rather than relying on an existing version somewhere on the PC already 
which could be any version!

If you created your own DLL, why would you then want to statically link to it 
(defeating a lot of the point of a *dynamic* load library)?  That doesn't 
provide you with any back/forwards compatibility.  Why not dynamically load the 
sqlite3.dll and the required functions at runtime instead?  If the functions 
can't be found on the load, you would then be able to handle it gracefully and 
inform the user, rather than having the prompt of a procedure entry point 
error, which means nothing to non-programmer users.

From a C program, this is the kind of thing I'd do (the example just loads the 
function sqlite3_libversion_number from sqlite3.dll - note that I've 
hand-coded this so it hasn't been checked for typos!):

//...
// include necessary windows headers for LoadLibrary() API etc.
#include <windows.h>
#include <sqlite3.h>    // for sqlite-specific typedefs/structures/etc
//...

// a typedef'd version of the function definition of the
// sqlite3_libversion_number() prototype from sqlite3.h
typedef int (SQLITE_API * MYPROC)(void);

static HMODULE HLib = NULL;               // handle to sqlite3 library, once opened
static MYPROC MySqlite3LibVersion = NULL; // will contain a pointer to our
                                          // imported sqlite3_libversion_number() function

BOOL LoadSqliteLibrary()
{
    BOOL Ret = TRUE;

    if (HLib == NULL)
    {
        // dynamically load sqlite3.dll
        HLib = LoadLibrary( "sqlite3.dll" );

        // if library loaded okay, get the required procedures' address pointers out
        if (HLib != NULL)
        {
            // get pointer to sqlite3_libversion_number from the dll for our own use
            MySqlite3LibVersion = (MYPROC)GetProcAddress( HLib, "sqlite3_libversion_number" );

            // if we couldn't find the required procedure's entry point...
            if (MySqlite3LibVersion == NULL)
            {
                // procedure entry point may not exist in loaded library!
                Ret = FALSE;
                // free library on procedure load error, as not much use any more
                FreeLibrary( HLib );
                HLib = NULL;
                // ... possibly inform user of the error at this point?
            }
        }
        else
        {
            // library failed to load - sqlite3.dll file may not exist
            Ret = FALSE;
            // ... possibly inform user of the error at this point?
        }
    }
    // else library is already loaded

    return (Ret);
}

//...
// As long as LoadSqliteLibrary() returns TRUE, you can now use
// MySqlite3LibVersion() to return the library version number.
//...

Thanks,
Nick.

-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of Teg
Sent: 24 December 2009 00:57
To: General Discussion of SQLite Database
Subject: Re: [sqlite] sqlite3_prepare_v2


I statically link for exactly this reason. If I WAS going to dynamic
link though, I'd make my own DLL from the sources, give it a unique
name, link against it's import lib and keep it in the same folder as
my EXE.

C


Wednesday, December 23, 2009, 4:50:08 PM, you wrote:

DRH> On Dec 23, 2009, at 4:12 PM, Dr. Robert N. Cleaves wrote:

>> Dear Sir:
>>
>> On startup, I get a message that the procedure entry point  
>> sqlite3_prepare_v2 is missing from the dynamic link library  
>> SQLite3.dll. How can I download the missing item?
>>
>> Thank you…

DRH> I'm guessing you are running windows.  Probably you have two or more  
DRH> applications installed that use SQLite.  (Candidates applications  
DRH> include Skype, Google Gears, Firefox, McAfee Antivirus, Adobe  
DRH> Acroread, Adobe Photoshop, iTunes, DropBox, and many others.)   
DRH> Probably when one of these applications was installing itself, it  
DRH> overwrote the system SQLite3.dll with an older version that does not  
DRH> support the sqlite3_prepare_v2() interface.  Then when one of the  
DRH> other applications that needs the new interface tries to run, it gets
DRH> the older DLL that lacks the necessary entry point.

DRH> I beg and plead with application vendors to statically link against  
DRH> SQLite to avoid this kind of problem, but nobody listens to me about  
DRH> that

DRH> I don't use windows myself and so I won't be much help in  
DRH> troubleshooting this.  But I have forwarde

[sqlite] fts3 issue with tokenizing of content during a query

2009-12-29 Thread Nick Hodapp
Hi -

I'm using sqlite 3.6.21 with this
patch<http://www.sqlite.org/src/ci/6cbbae849990d99b7ffe252b642d6be49d0c7235>,
which I found in this forum a few weeks ago.  I'm also using a custom
tokenizer which I wrote.

My scenario is this:  I am storing XHTML in the database, and I want to
FTS-enable this content.  I only want to index the text contained within the
XHTML elements, not the element names or attributes.  (e.g. "index this")  My tokenizer skips over element names and
attributes, then delegates the element textual content to the Porter
tokenizer.  On return from the Porter tokenizer, I correct the token offset
and length values to be the actual offsets within the document (Porter
tokenizer doesn't ever see the whole document, just a string within a tag).

I didn't want to ship my tokenizer with my app for two reasons.  1 - I wrote
it using an API not available to my client app, 2 - it doesn't make sense
because on the client the user will be entering search terms that aren't
surrounded by xml tags, which is what my tokenizer expects.  Instead, my
client registers a tokenizer with the same name as my custom tokenizer, but
in fact it is registering a copy of the porter tokenizer.

I expected this to work fine - and it appeared to, until I discovered that
it was pulling out text in some of the xml attributes - which shouldn't be
indexed.

It turns out that FTS3 is re-tokenizing the content (not just the search
term) on the client (using my copy of the Porter tokenizer) and returning
those results.  I don't understand why - is this a bug or is this normal
behavior?  I expected the fts index to retain all of the token offsets/sizes
such that they wouldn't have to be recomputed on the client.

My workaround is to port my tokenizer so that it runs on the client, and to
wrap search terms in dummy xml tags like this.   But I feel I
shouldn't have to do this...

Any feedback appreciated...

Nick Hodapp
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] mode is not changed

2010-01-02 Thread Nick Atty
Depending on how you are calling sqlite3, you should be able to feed any 
list of commands into it.

For example, under a typical Unix shell you could do
sqlite3 mydb <<EOF
 > .mode csv
 > .show
 > EOF
  echo: off
   explain: off
   headers: off
  mode: csv
nullvalue: ""
output: stdout
separator: ","
 width:
/var/www/canal$

But this is really about scripting on whatever system you're using, not 
SQLite as such.

Wensui Liu wrote:
> wonderful question, it is also what i'd like to know.
> 
> On Thu, Dec 31, 2009 at 1:03 PM, Simon Slavin  wrote:
>> On 31 Dec 2009, at 4:56pm, Kees Nuyt wrote:
>>
>>> On Thu, 31 Dec 2009 04:24:51 +, Simon Slavin
>>>  wrote:
>>>
 On 31 Dec 2009, at 2:13am, Wensui Liu wrote:

> i am trying to change the output by 'sqlite3 mydb ".mode csv"'.
> however, when i type 'sqlite3 mydb .show', I couldn't see the change
> of mode at all.
 Each new 'sqlite3' command starts a new instance of the application, and 
 the new instance starts with all settings set to the defaults.
>>> Correct.
>>>
>>> Additionally, if Wensui Liu wants to have his own defaults,
>>> he can put a   .sqliterc   file in his home directory
>> How would I string two commands together in the command-line ?  Is there a 
>> way to do something like
>>
>> sqlite3 mydb ".mode csv.show"
>>
>> ?
>>
>> Simon.
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
> 
> 
> 

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLITE_ERROR with system usage

2010-01-06 Thread Nick Atty
Have you run a memory test on your system?  Random failings when working 
hard is a classic symptom of dodgy memory (launching Firefox may cause 
memory to be allocated for SQLite elsewhere than when there's nothing 
else running).

I once had a system that would segfault in the C compiler in similar 
circumstances.  It was faulty memory.

Christopher Sansone wrote:
> Hi everyone,
> 
> I'm hoping someone can help with a strange issue I'm having.  I have a 
> SQLite application that runs a some modest SQL scripts (~750KB) to 
> create and populate a database.  For the same exact script, sometimes it 
> works perfectly and other times it fails with a SQLITE_ERROR.  When it 
> fails, it fails on a different statement each time, so it does not seem 
> to be a problem with the SQL script itself.
> 
> Upon further testing, I noticed that it seems to work when my system is 
> otherwise idle, but if something else is going on, it fails.  For 
> instance, if I perform a task like launching Firefox while the script is 
> running, it errors out every single time.
> 
> My application is single-threaded, and the basic pseudo-code is as follows:
> 
> repeat
>   sqlite3_prepare(db, sql, -1, p, sql);
>   repeat
> s := sqlite3_step(p);
>   until
> s in (SQLITE_DONE, SQLITE_ERROR, SQLITE_MISUSE);
>   sqlite3_finalize(p);
> until
>   sql is null;
> 
> As it should, sqlite3_prepare() provides a single statement to be 
> executed, and sqlite3_step() performs the execution of the single 
> statement.  The error specifically occurs in sqlite3_step().
> 
> Does anyone have any ideas of why this is occurring?  My best guess is 
> that it is a thread-safety issue inside SQLite... the statements are 
> being executed asynchronously and failing when one statement does not 
> execute fast enough.  I am running the pre-compiled build for Windows 
> that I downloaded right from the sqlite website.  Does anyone know what 
> threading mode it was compiled with and whether or not that would be a 
> problem?  Any other ideas?
> 
> Thanks so much in advance!
> 
> Cheers,
> Christopher
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] graphs and sql

2010-01-10 Thread Nick Atty
Robert Citek wrote:
> On Sun, Jan 10, 2010 at 8:06 PM, Dan Bishop  wrote:
>> Robert Citek wrote:
>>> Does anyone have any recommendations for books or other resources that
>>> deal with working with graphs (i.e. vertexes and edges) using sql?
>>>
>> I don't think that SQL is the best language for working with graphs, but:
>>
>> CREATE TABLE Graph (
>> NodeFrom INTEGER,
>> NodeTo INTEGER
>> );
> 
> Yes, the Koenigsberg bridge problem is just one example of what I am
> referring to.
> 
> I was working on creating a more general model initially with just two
> tables: one for vertexes and one for edges, which is a pairing of
> vertexes.  For example:
> 
> create table vertexes ( vertex integer ) ;
> create table edges ( v1 integer, v2 integer ) ;
> BEGIN;
> INSERT INTO vertexes VALUES (1);
> INSERT INTO vertexes VALUES (2);
> INSERT INTO vertexes VALUES (3);
> INSERT INTO vertexes VALUES (4);
> COMMIT;

I store all the data for my waterways route planner in SQLite, but I 
load it into memory for running Dijkstra's algorithm on it to find the 
shortest (when weighted) paths.   It's at canalplan.eu if anyone wants a 
play.

One problem you rapidly run into when storing graphs in SQL, in my 
limited and non-expert experience, is that - as in this example - you 
end up with edge records each of which refers to two vertices.  My 
database maintenance and update code is riddled with:

SELECT ... FROM link WHERE place1=x AND place2=y OR place1=y AND place2=x;

and similar.  Apart from imposing a condition (such as always having v1 
< v2 in the example code) is there any sensible way round this?
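
For what it's worth, one common way to live with the v1 < v2 convention is to
normalise the pair once at the edge of the code rather than in every query; a
small sketch against the link/place1/place2 naming above, purely illustrative:

#include <sqlite3.h>

/* test whether an undirected edge exists, normalising the pair so that
   place1 <= place2 -- assumes every edge was stored the same way round */
static int link_exists(sqlite3 *db, int x, int y)
{
    sqlite3_stmt *stmt = NULL;
    int a = (x < y) ? x : y;
    int b = (x < y) ? y : x;
    int found = 0;

    if (sqlite3_prepare_v2(db,
            "SELECT 1 FROM link WHERE place1 = ?1 AND place2 = ?2",
            -1, &stmt, NULL) != SQLITE_OK)
        return 0;

    sqlite3_bind_int(stmt, 1, a);
    sqlite3_bind_int(stmt, 2, b);
    found = (sqlite3_step(stmt) == SQLITE_ROW);
    sqlite3_finalize(stmt);
    return found;
}
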
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] sqlite3_close ==> " library routine called out of sequence"

2010-02-23 Thread Nick Shaw
What's the actual code output from your program?  i.e. which
sqlite3_close() is it failing on?

Can you call sqlite3_errmsg() when sqlite3_open() failed to initialise
the db struct?  I would have thought "db" would be invalid at that
point.  Try setting db to NULL when declaring it, as otherwise you may
be calling sqlite3_errmsg() with a non-null invalid pointer (it'll be
some random memory address if it's not nullified by sqlite3_open() on
open failure), which sqlite3 attempts to use but fails, resulting in
your "library routine called out of sequence" error?

Nick.

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of John
Sent: 23 February 2010 10:17
To: sqlite-users@sqlite.org
Subject: [sqlite] sqlite3_close ==> " library routine called out of
sequence"



Hi All,

The very simple C-code below returns error message " library routine
called out of sequence" .
I really don't understand why  ?

I'm using SQLite 3.6.22 under Windows XP

Any idea/suggestion would be really appreciated !


-
 sqlite3        *db;
 sqlite3_stmt   *statement;
 wxString       TMPstring;


  if( sqlite3_open( "Test3.db", &db) )
  { TMPstring << "Failed opening databse file:\n" <<
sqlite3_errmsg(db);
wxMessageBox( TMPstring );
sqlite3_close(db);
return;
  }


if( sqlite3_close( db ) != SQLITE_OK );
{   wxMessageBox( wxString("Database can not be closed
properly !\n\n")   << sqlite3_errmsg( db ));
return( -1 );
}

-


Thanx for your help,


John



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] [VB.Net] Working with SQLite?

2010-02-24 Thread Nick Shaw
You could just import the required DLL functions from sqlite3.dll?  Not
very object-oriented, I admit - you'd need to write your own class
wrapper around the functions if you wanted that - but I'd think that
importing the functions was the quickest method, and then you're not
reliant on 3rd party interfaces.

Nick.


-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Gilles Ganault
Sent: 24 February 2010 12:30
To: sqlite-users@sqlite.org
Subject: [sqlite] [VB.Net] Working with SQLite?

Hello

I'm only getting started with VB.Net, and I need a way to work with an
SQLite 3 database.

According to the wiki, the following solutions seem to be currently
supported:

=
System.Data.SQLite
http://sqlite.phxsoftware.com/

Devart dotConnect for SQLite
www.devart.com/dotconnect/sqlite/

.NET Mono ADO.NET provider for SQLite
www.mono-project.com/SQLite

myGeneration
www.mygenerationsoftware.com/portal/default.aspx

LightSpeed - .NET O/R Mapper with full support for SQLite
www.mindscape.co.nz/products/lightspeed/
=

If someone can recommend one of them, and ideally, have some code
handy to connect/SELECT/disconnect so I can be up and running soon...

Thank you.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Fwd: Sqlite files in temp folder

2010-02-24 Thread Nick Shaw
I'd suggest Anita uses a file monitoring app (like SysInternals'
DiskMon, if she's using Windows) to see what is writing those files.
Then stop the app from doing it.

Nick.

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of D. Richard Hipp
Sent: 24 February 2010 12:51
To: General Discussion of SQLite Database
Subject: [sqlite] Fwd: Sqlite files in temp folder



Begin forwarded message:

> From: "Anita Asken" 
> Date: February 24, 2010 6:04:35 AM EST
> To: 
> Subject: Sqlite files in temp folder
>
> Dear Sir,
> I have been contacted by a friend who has the above  
> files appearing in her Temp folder on a daily basis.  These files  
> are up to and including 40GB in size (when compressed) and are  
> filling the Hard drive at an  alarming rate.  Can you offer any  
> advice on how to stop the files being written, and remove the  
> programme(s) that are writing these files.
>
> Your help in this matter is very much appreciated.
>
> Kind regards,
>
> Mark Asken

D. Richard Hipp
d...@hwaci.com



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] [VB.Net] Working with SQLite?

2010-02-24 Thread Nick Shaw
Ah ok, in that case a 3rd party interface would probably be quickest for
you. :)

Good luck!

Nick.

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Gilles Ganault
Sent: 24 February 2010 13:20
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] [VB.Net] Working with SQLite?

On Wed, 24 Feb 2010 13:17:23 -, "Nick Shaw"
 wrote:
>You could just import the required DLL functions from sqlite3.dll?  Not
>very object-oriented, I admit - you'd need to write your own class
>wrapper around the functions if you wanted that - but I'd think that
>importing the functions was the quickest method, and then you're not
>reliant on 3rd party interfaces.

Thanks for the tip but as I said, I'm only getting started with
VB.Net, so I need something that'll get me up and running ASAP.

I'm giving System.Data.SQLite a try, and see how it goes.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] [newbie/VB.Net + SQLite] Reliable file hashing?

2010-03-05 Thread Nick Shaw
Small aside: Why do you select the id, name AND hash in the select?  You
don't appear to use them.

I can't see anything specifically wrong, but try doing "SELECT COUNT(id)
FROM <your table> WHERE hash='@hash'" instead, and see whether the returned
count > 0 or not.

You could also maybe test your routine by passing in a fixed string as
the hash='@hash' value each time and check that it actually returns
rows.

My guess is there's something subtle wrong with the VB.NET code, but my
VB.NET syntax knowledge is not up to the job of finding any problem I
can see. :)

Thanks,
Nick.

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Gilles Ganault
Sent: 05 March 2010 14:35
To: sqlite-users@sqlite.org
Subject: [sqlite] [newbie/VB.Net + SQLite] Reliable file hashing?

Hello,

I'm having the following issue while looping through a
directory:
for each file, I need to hash its content, check if this file is
already in an DB, add a record if it isn't.
The goal of this application is to check a whole drive for UltraEdit
temp files, check for duplicates, and save any unique file into a
backup directory.

In the following code, a record is added everytime, although this file
is already in the SQLite database (I checked by opening it with a
stand-alone application after running the program once):

http://pastebin.ca/1823757

The problem occurs around line 63.

I'm using TEXT to hold the hash column: Could it be that, for some
reason, this data isn't reliably saved or read, which would explain
why a new record is INSERTed every time, even though this item is
already in the database?

Thank you for any hint.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Executing SELECT and INSERT from C under Linux

2010-04-12 Thread Nick Shaw
I assume you're using the C API calls to open/close the database?  If
so, it's just another API call to execute some simple SQL:
sqlite3_exec().  If you need something more advanced, like prepare,
bind, step etc, then see the appropriate APIs for those.  All listed in
http://www.sqlite.org/c3ref/funclist.html.
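
A minimal sketch of both routes (the table and column names are invented for
illustration):

#include <stdio.h>
#include <sqlite3.h>

int main(void)
{
    sqlite3 *db = NULL;
    sqlite3_stmt *stmt = NULL;
    char *err = NULL;

    if (sqlite3_open("mydb.db", &db) != SQLITE_OK) return 1;

    /* simple one-shot statements: sqlite3_exec() */
    sqlite3_exec(db, "CREATE TABLE IF NOT EXISTS people (name TEXT, age INTEGER)", NULL, NULL, &err);
    sqlite3_exec(db, "INSERT INTO people VALUES ('Erik', 42)", NULL, NULL, &err);

    /* queries where you want the rows back (or bound parameters): prepare/step/finalize */
    if (sqlite3_prepare_v2(db, "SELECT name, age FROM people WHERE age > ?", -1, &stmt, NULL) == SQLITE_OK) {
        sqlite3_bind_int(stmt, 1, 21);
        while (sqlite3_step(stmt) == SQLITE_ROW)
            printf("%s is %d\n", (const char *)sqlite3_column_text(stmt, 0),
                   sqlite3_column_int(stmt, 1));
        sqlite3_finalize(stmt);
    }

    sqlite3_close(db);
    return 0;
}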

Thanks,
Nick.

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Erik Kvernstad
Sent: 12 April 2010 14:02
To: sqlite-users@sqlite.org
Subject: [sqlite] Executing SELECT and INSERT from C under Linux

I am writing a C program under Linux to perform operations on an
sqlite-created (manually) database.  I have successfully compiled and
linked with the sqlite3 library and my program opens and closes the Db
successfully.
However, it is unclear to me how I in a simple way may perform
operations like SELECT, INSERT etc. on the database from the C
program.

Grateful for hints.

Erik Kvernstad
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] File locking with BEGIN/END

2010-05-25 Thread Nick Shaw
Hi all,

 

I've got a database that is accessed by two processes on the same PC.
When I add a large number of records to a table from one process (all
records wrapped in a BEGIN DEFERRED / END so it's committed to disk all
at once; and we're talking about adding 500,000 recs to a DB containing
around 3 million recs, so it takes some time), the other process gets
stuck in the sqlite3_step() call when doing a SELECT call.  It doesn't
return SQLITE_BUSY, it just sits there waiting.  It only continues after
the first process calls END to complete the transaction.

 

According to http://www.sqlite.org/lang_transaction.html, in deferred
mode, sqlite creates a RESERVED lock on the DB when it starts a write
operation, but this should allow other SHARE locks at the same time,
implying other processes should still be able to query the database
during the write transaction.

 

Does this sound like correct behaviour, or not?  If correct, should my
second process be getting SQLITE_BUSY returned from sqlite3_step()?

(Note that the first process is adding data to the same table that the
second process wants to do a SELECT query on).

 

If this is all correct... what would be the best way to allow me to add
(or delete!) lots of records from a table in one process whilst still
allowing the other process to query that table's data during the insert
time?  An in-memory database is obviously an option for the second
process, but that has an impact on RAM usage and requiring the second
process to know when the database has changed on disk.

 

Thanks,

Nick.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] File locking with BEGIN/END

2010-05-25 Thread Nick Shaw
> Is it possible you have a busy-handler installed? Are you using SQLite
directly or via some wrapper API?

Yes and no.  I set a busy handler of 1 minute, but it takes longer than
this and doesn't timeout, plus I tried without the busy timeout; made no
difference.

Thanks,
Nick.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] File locking with BEGIN/END

2010-05-25 Thread Nick Shaw
> You need to add your records in smaller batches and sleep just a
little between iterations.
>  
> Your batches will have to be small enough to make the user response
time reasonable.
>  
> You'll find that your loop of inserting records won't take long.  It's
when you do the "COMMIT" that it locks everything and takes all the time
to write to disk.
>  
> Something like this to  COMMIT in batches:
>  
> while(records) {
>   insert records
>   if ((recnum % 10000) == 0) { COMMIT; BEGIN DEFERRED }
> }
> COMMIT
>  

Thanks Michael, that's what I thought.  I have tried this and it did
improve things considerably.  What I've got now is a mutex check between
each record insert: if the second process is waiting on the mutex, the
first process does a commit, waits on the mutex until the second process
has finished, then carries on with the inserts.  It seems to be faster
doing it this way (though not by much).  I've got the delay in the second
process down to around 200ms, which is much improved from the original
delay of over a minute. :)
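
For anyone hitting the same issue, the batched-commit loop described above looks
roughly like this (the table name and batch size are placeholders):

#include <sqlite3.h>

/* insert many rows without holding one giant write transaction open:
   commit every `batch` rows so readers get a window to run their SELECTs */
static int bulk_insert(sqlite3 *db, int nrows, int batch)
{
    sqlite3_stmt *stmt = NULL;
    int i, rc;

    rc = sqlite3_prepare_v2(db, "INSERT INTO records(value) VALUES (?)", -1, &stmt, NULL);
    if (rc != SQLITE_OK) return rc;

    sqlite3_exec(db, "BEGIN DEFERRED", NULL, NULL, NULL);
    for (i = 0; i < nrows; i++) {
        sqlite3_bind_int(stmt, 1, i);
        sqlite3_step(stmt);
        sqlite3_reset(stmt);

        if ((i + 1) % batch == 0) {
            sqlite3_exec(db, "COMMIT", NULL, NULL, NULL);          /* release the write lock */
            sqlite3_exec(db, "BEGIN DEFERRED", NULL, NULL, NULL);  /* start the next batch */
        }
    }
    sqlite3_exec(db, "COMMIT", NULL, NULL, NULL);
    sqlite3_finalize(stmt);
    return SQLITE_OK;
}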

Thanks all,
Nick.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


  1   2   >