Re: [sqlite] FTS4: Datatypes

2011-11-09 Thread Richard Hipp
On Wed, Nov 9, 2011 at 8:02 PM, Fabian  wrote:

>
> "As of SQLite version 3.7.9, FTS4 supports a new option - "content"
> -designed to extend FTS4 to support the creation of full-text indexes
> where:
>
> I wonder why this new option wasn't mentioned in the changelog of
> 3.7.9, but maybe because it's still experimental?
>


Because it is experimental.  Also, it comes with no safety-belts:  you can
do a lot of damage by misusing it, and it is quite fussy and easy to
misuse.

-- 
D. Richard Hipp
d...@sqlite.org
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] inserts, performance, file lock...

2011-11-09 Thread Richard Hipp
On Wed, Nov 9, 2011 at 6:17 PM, yqpl  wrote:

>
> no matter how big my database is inserts starts with the same speed and
> keep
> getting slower.
> thats why it is better to split 1000 of files into 10x 100 files cause all
> of those x100 packages will be imported fast. but i also gets this file
> lock
> error /
>

First create your tables without indices.  Then do your inserts.  After all
the data is in the table, then do your CREATE INDEX statements.

Also avoid UNIQUE constraints since they generate indices.  Instead add
CREATE UNIQUE INDEX after all inserts are done.

Inserting content into a table is fast because it can simply append.
Inserting content into an index is slower since it has to look up the
correct place in the middle of the index and insert it there, which
involves a binary search, etc.

The latest release of SQLite contains an optimization that makes CREATE
INDEX go up to 100x faster for very large tables.  So be sure to use SQLite
version 3.7.8 or 3.7.9 to do the CREATE INDEX.
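The recipe above can be sketched with Python's built-in sqlite3 module (illustrative only; the original poster is using C#, and the table/index names here are made up):

```python
import sqlite3

def bulk_load(rows):
    """Insert everything first, build the index afterwards."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE t (a TEXT)")   # no index, no UNIQUE constraint yet
    with db:                                # one transaction around all the inserts
        db.executemany("INSERT INTO t VALUES (?)", ((r,) for r in rows))
    # One bulk CREATE INDEX (and CREATE UNIQUE INDEX, if needed) at the end
    db.execute("CREATE INDEX t_a ON t(a)")
    db.commit()
    return db

db = bulk_load(f"row{i}" for i in range(10_000))
n = db.execute("SELECT COUNT(*) FROM t").fetchone()[0]
```

The same pattern applies in any language binding: defer all index creation until the table is fully populated.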





-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] INDEX Types

2011-11-09 Thread Jay A. Kreibich
On Thu, Nov 10, 2011 at 12:28:24AM +0100, GB scratched on the wall:

> Ok, just thought it worth mentioning. But a VACUUMed database may be
> more efficient if you have multiple columns with multiple indexes
> because you get a mixed sequence of data and index pages while
> inserting data. VACUUM rearranges them so that pages belonging to an
> object are grouped together which in turn helps drawing benefit from
> prefetching and caching.

  While that's true, the difference is more limited for indexes.
  VACUUM rebuilds tables in ROWID order, hence re-packing both 
  records into a page, and pages into the database file.
  
  Indexes, on the other hand, are rebuilt with a table scan, essentially
  the same as CREATE INDEX on an existing table.  This means that if
  the table rows (in ROWID order) are not already more or less in-order
  (according to the index) the rebuilt index will suffer from
  inserted values and re-balanced nodes.

   -j

-- 
Jay A. Kreibich < J A Y  @  K R E I B I.C H >

"Intelligence is like underwear: it is important that you have it,
 but showing it to the wrong people has the tendency to make them
 feel uncomfortable." -- Angela Johnson


Re: [sqlite] INDEX Types

2011-11-09 Thread Black, Michael (IS)
Doing a file copy has similar behavior.  So as long as the file is cached 
everything is copacetic.  That's what leads me to believe it's head thrashing 
causing this behavior.



./sqlite3 index2.db < gendat2.sql

> However, reboot again and add "select count(*) from a;" as the first line of
> gendat2.sql
>
> time sqlite3 index.db < gendat2.sql
> 110
> 2.389u 1.123s 0:07.39 47.3% 0+0k 0+0io 0pf+0w
> That's faster than the 2nd run before rebooting.

Out of interest, if you have the time, instead of doing "select count(*) from 
a;" can you just copy the file to another file ?  Possibly 
read-a-block-write-a-block ?  I'm curious to know whether this makes a 
difference.

I have tried some of these things on a Mac, but rebooting doesn't make anything 
like so much difference on a Mac.  It's slower without the reboot but faster 
with the reboot.  However, I don't have a Mac to play with which isn't also 
acting as a server.

Simon.


Re: [sqlite] FTS4: Datatypes

2011-11-09 Thread Fabian
It seems I finally have some sort of a solution:

"As of SQLite version 3.7.9, FTS4 supports a new option - "content"
-designed to extend FTS4 to support the creation of full-text indexes
where:

+
+* The indexed documents are not stored within the SQLite database
+  at all (a "contentless" FTS4 table), or
+
+* The indexed documents are stored in a database table created and
+  managed by the user (an "external content" FTS4 table).


Using this new option I can keep all data together in a regular table,
and use FTS purely as an index.
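A minimal sketch of that arrangement (Python's sqlite3 here; assumes the SQLite build includes FTS4 and is version 3.7.9 or later, and the table/column names are invented for illustration):

```python
import sqlite3

db = sqlite3.connect(":memory:")
# The regular table holds all the data, with real datatypes...
db.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, body TEXT, score INTEGER)")
# ...and the FTS4 table is an external-content index over it (content= option)
db.execute('CREATE VIRTUAL TABLE ft USING fts4(content="docs", body)')

db.execute("INSERT INTO docs VALUES (1, 'sqlite full text search', 10)")
# With external content, the FTS index must be kept in sync by hand
# (or via triggers on the content table)
db.execute("INSERT INTO ft(docid, body) SELECT id, body FROM docs")

hit = db.execute("SELECT docid FROM ft WHERE ft MATCH 'search'").fetchone()
```

The caveat from the thread applies: keeping the index and content table in sync is entirely the user's responsibility, which is exactly the "no safety-belts" risk mentioned above.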

I wonder why this new option wasn't mentioned in the changelog of
3.7.9, but maybe because it's still experimental?


Re: [sqlite] inserts, performance, file lock...

2011-11-09 Thread yqpl

And the files are small: maybe 10-50 lines of text. The original lines are
stored, and they are also parsed into different tables.





Re: [sqlite] INDEX Types

2011-11-09 Thread GB



Fabian schrieb am 09.11.2011 23:10:

I'm running these tests on a very simple database: 1 table and 1 column, so
ANALYZE shouldn't have any effect. And I already tested running VACUUM on
the database, but it didn't help (which seems logical, because I start with
a fresh db each time).

Ok, just thought it worth mentioning. But a VACUUMed database may be
more efficient if you have multiple columns with multiple indexes,
because you get a mixed sequence of data and index pages while inserting
data. VACUUM rearranges them so that pages belonging to an object are
grouped together, which in turn helps draw benefit from prefetching
and caching.



Re: [sqlite] inserts, performance, file lock...

2011-11-09 Thread Nico Williams
What's your page size?


Re: [sqlite] inserts, performance, file lock...

2011-11-09 Thread yqpl

No matter how big my database is, inserts start at the same speed and keep
getting slower.
That's why it is better to split 1000 files into 10 batches of 100: each
batch of 100 imports fast. But I also get this file-lock error.





Re: [sqlite] inserts, performance, file lock...

2011-11-09 Thread yqpl

Yep, I'm sure.

30 per second makes sense here:
parsing time is included, and in reality 30 files per second are inserted into
the database, with several inserts per file. I didn't want to make the
explanation more complicated with that.






Re: [sqlite] FTS4: Datatypes

2011-11-09 Thread Petite Abeille

On Nov 9, 2011, at 11:59 PM, Fabian wrote:

> So would it be an idea to have a simple flag (NOINDEX for example) which
> dictates that a certain column shouldn't be indexed by FTS, just stored? It
> may be a lot simpler to implement than actual datatypes, and I could work
> around the other limitations myself.

Yes, it's called a plain, old table (POT).



[sqlite] FTS4: Datatypes

2011-11-09 Thread Fabian
Ever since I started using FTS, I'm always confronted with the problem that
I need two tables: one FTS table with the TEXT columns, and one normal
table with the INTEGER columns for numerical values. This causes all kinds
of problems (keeping the rowids in sync, complex queries, etc.).

From a previous post by the author of FTS, I understand that it will be
hard to implement datatypes for the virtual tables that FTS uses, so I
already ruled that out as a solution.

I could just store the numerical values as TEXT in the FTS table, but even
the simplest operation (sorting, for example) will be a challenge, since
I have to format the numbers in such a way that they sort correctly
(prepending zeroes to make them all the same length). But suppose I take all
that into account; there is still another problem: FTS will unnecessarily
index all those extra columns.
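The zero-padding scheme described above can be sketched in a few lines (Python here, purely illustrative; the function name and width are made up):

```python
def sort_key(n, width=12):
    """Zero-pad a non-negative integer so TEXT (lexicographic) order
    matches numeric order."""
    return str(n).zfill(width)

values = [5, 1000, 42, 7]
# Sorting the padded strings gives the same order as sorting the numbers
as_text = sorted(sort_key(v) for v in values)
numeric = [sort_key(v) for v in sorted(values)]
```

Note the scheme only works for non-negative integers that fit in the chosen width; negative numbers would need an offset or an inverted encoding.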

So would it be an idea to have a simple flag (NOINDEX for example) which
dictates that a certain column shouldn't be indexed by FTS, just stored? It
may be a lot simpler to implement than actual datatypes, and I could work
around the other limitations myself.


Re: [sqlite] inserts, performance, file lock...

2011-11-09 Thread Simon Slavin

On 9 Nov 2011, at 10:21pm, yqpl wrote:

> im starting a transaction
> then make a lot of inserts and commit.
> ive got about 30 inserts per second but after a while it is dropping to
> about 1-2 inserts per second. it takse about ~500 inserts to drop to this
> 1-2 insert per sec.

Here is a guess, but it is just a guess.

Your computer has a certain amount of memory, or disk cache, free.  While your 
whole SQLite database fits in that space, your process is fast.  Once the 
database gets bigger than that amount of free space, the computer has to keep 
swapping bits of file in and out, and this makes the process slower.  So my 
guess is that the slowdown will be related not to the number of inserts so 
far, but to the total size of the database file so far.

Something that will help to clarify this is to get an idea of how big the files 
are that you are inserting.  There was a recent post to this list where the 
SQLite team had tested the relative speeds of holding entire image files in a 
SQLite database compared to holding just the paths of those files.

Simon.


Re: [sqlite] inserts, performance, file lock...

2011-11-09 Thread Richard Hipp
On Wed, Nov 9, 2011 at 5:21 PM, yqpl  wrote:

>
> Hi,
>
> my task is to parse a lot of files and then insert them to sqlite database.
> it could be thousands of files. i use c#.
>
> im starting a transaction
> then make a lot of inserts and commit.
> ive got about 30 inserts per second


I typically get 100,000 rows per second on a modern workstation, from
inside a transaction.  Are you *sure* you are using a transaction?
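One way to check what an explicit transaction buys is a sketch like this (Python's sqlite3, not the poster's C# wrapper; absolute timings vary widely by machine and disk):

```python
import os
import sqlite3
import tempfile
import time

def insert_rows(path, n, one_transaction):
    """Insert n rows either inside one explicit transaction or in autocommit mode."""
    db = sqlite3.connect(path)
    db.isolation_level = None            # autocommit; we manage transactions ourselves
    db.execute("CREATE TABLE IF NOT EXISTS t (a INTEGER)")
    t0 = time.perf_counter()
    if one_transaction:
        db.execute("BEGIN")
    for i in range(n):
        db.execute("INSERT INTO t VALUES (?)", (i,))
    if one_transaction:
        db.execute("COMMIT")
    elapsed = time.perf_counter() - t0
    count = db.execute("SELECT COUNT(*) FROM t").fetchone()[0]
    db.close()
    return elapsed, count

path = os.path.join(tempfile.mkdtemp(), "tx.db")
elapsed, count = insert_rows(path, 5000, one_transaction=True)
```

Without the transaction, each INSERT becomes its own commit with its own sync to disk, which is where per-row insert rates collapse to disk-rotation speed.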






-- 
D. Richard Hipp
d...@sqlite.org


[sqlite] inserts, performance, file lock...

2011-11-09 Thread yqpl

Hi,

my task is to parse a lot of files and then insert them into an SQLite
database. It could be thousands of files. I use C#.

I'm starting a transaction, then making a lot of inserts, and committing.
I get about 30 inserts per second, but after a while it drops to
about 1-2 inserts per second. It takes about ~500 inserts to drop to this
1-2 inserts per second.

I've got indexes on this database - but this doesn't make a difference, as I
checked on a copy without indexes.

1) Why does it get tired and slow down? How do I fix it?

I tried doing this in a loop of 100 inserts to keep the speed reasonable,
then closing the database and reopening it.
After the close and reopen, the next commit gets SQLITE_BUSY /* The database
file is locked */

2) wtf?

Please help me, I'm stuck.

The code is like:

//done by background worker
void import(object sender, DoWorkEventArgs e)
{
    DataTable tab;
    tab = PST_POSTGRES.Postgres.Query(textBox6.Text, textBox1.Text,
        textBox3.Text, textBox4.Text, textBox5.Text, textBox2.Text);

    SQLiteWrapper.SQLite db = new SQLiteWrapper.SQLite();
    db.OpenDatabase(sqlite_db);
    db.BeginTransaction();

    foreach (DataRow r in tab.Rows)
    {
        if (bw.CancellationPending == true)
        {
            e.Cancel = true;
            break;
        }
        //import here

        foreach (object o in imported)
        {
            doinserts(o);
        }

        //here is the reopen condition - entered when inserts get slower
        if ((((long)(imported * 1000)) / stoper.ElapsedMilliseconds) < next)
        {
            db.CommitTransaction();
            db.CloseDatabase();
            db = new SQLiteWrapper.SQLite();
            db.OpenDatabase(sqlite_db);
            db.BeginTransaction();
        }
        next = ((long)(imported * 1000)) / stoper.ElapsedMilliseconds;
    }

    db.CommitTransaction();
    db.CloseDatabase();
    stoper.Stop();
}


Re: [sqlite] INDEX Types

2011-11-09 Thread Fabian
2011/11/9 GB 

> Maybe you could try to use a pagesize that matches the size of a disk
> allocation unit or memory page. For Windows since a typical NTFS partition
> has a clustersize of 4KiB - which happens to also be the size of a memory
> page - a pagesize of 4096 Byte seems to be a good compromise between
> pleasing the cache manager and waste of space.


I already used a page_size of 4096 for all the benchmarks, together with a
large (300MB) cache_size, but thanks for the suggestion anyway!
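For reference, that setup looks like this in Python's sqlite3 (page_size must be set before the database file is initialized; a negative cache_size means KiB, so -307200 is roughly the 300 MB mentioned above):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "bench.db")
db = sqlite3.connect(path)
db.execute("PRAGMA page_size = 4096")       # only effective before the first write
db.execute("PRAGMA cache_size = -307200")   # negative = KiB, so ~300 MB page cache
db.execute("CREATE TABLE t (a TEXT)")       # first write fixes the page size
db.commit()
page_size = db.execute("PRAGMA page_size").fetchone()[0]
cache_size = db.execute("PRAGMA cache_size").fetchone()[0]
```

cache_size is per-connection and not persisted, so it has to be set again on every new connection, unlike page_size, which is a property of the file.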


> You could also investigate the effect of VACUUMing and ANALYZEing the
> Database. If you have multiple indexes on tables try compiling sqlite
> defining the SQLITE_ENABLE_STAT3 switch to get even better results from
> ANALYZE.
>

I'm running these tests on a very simple database: 1 table and 1 column, so
ANALYZE shouldn't have any effect. And I already tested running VACUUM on
the database, but it didn't help (which seems logical, because I start with
a fresh db each time).


Re: [sqlite] INDEX Types

2011-11-09 Thread Petite Abeille

On Nov 9, 2011, at 10:24 PM, Fabian wrote:

> And I'd like to avoid to have some fuzzy logic
> that tries to predicts which of the two methods is going to be faster.

Perhaps an alternative to your conundrum is the good, old "divide and conquer" 
approach. In other words, you could partition your data among multiple database 
files, using a simple hash to write into one partition or another. That, in 
combination with some astute use of 'attach', is as good an approximation as 
any of table partitioning in SQLite.

http://en.wikipedia.org/wiki/Partition_(database)
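A toy sketch of the hash-partitioning idea (Python; the partition count and hash choice are arbitrary, and ":memory:" stands in for the separate database files you would really create):

```python
import sqlite3
import zlib

N_PARTS = 4  # number of partition files (hypothetical; tune for your data)

def partition(key):
    """A stable hash of the key selects the partition."""
    return zlib.crc32(key.encode()) % N_PARTS

# One connection per partition "file"
parts = [sqlite3.connect(":memory:") for _ in range(N_PARTS)]
for db in parts:
    db.execute("CREATE TABLE t (k TEXT PRIMARY KEY, v TEXT)")

def put(k, v):
    with parts[partition(k)] as db:
        db.execute("INSERT OR REPLACE INTO t VALUES (?, ?)", (k, v))

def get(k):
    row = parts[partition(k)].execute(
        "SELECT v FROM t WHERE k = ?", (k,)).fetchone()
    return row[0] if row else None

put("alpha", "1")
put("omega", "2")
```

For queries that span partitions, ATTACH the other files to one connection and UNION over them, as suggested above.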



Re: [sqlite] INDEX Types

2011-11-09 Thread GB
Maybe you could try to use a page size that matches the size of a disk 
allocation unit or memory page. On Windows, a typical NTFS partition 
has a cluster size of 4 KiB - which happens to also be the size of a 
memory page - so a page size of 4096 bytes seems to be a good compromise 
between pleasing the cache manager and wasting space. You could also 
investigate the effect of VACUUMing and ANALYZEing the database. If you 
have multiple indexes on tables, try compiling SQLite with the 
SQLITE_ENABLE_STAT3 switch defined to get even better results from ANALYZE.


hth
gerd


Re: [sqlite] INDEX Types

2011-11-09 Thread Nico Williams
On Wed, Nov 9, 2011 at 3:24 PM, Fabian  wrote:
> 2011/11/9 Nico Williams 
>> I don't get it.  You're reading practically the whole file in a random
>> manner, which is painfully slow, so why can't you read the file in one
>> fell swoop (i.e., sequential reads)??
>
> I'm only reading the whole file when the number of additional inserts is
> high enough to cause the whole index to be read from disk. But if I always
> pre-cache the database, it will downgrade performance for cases when only
> 10 inserts need to be done. And I'd like to avoid to have some fuzzy logic
> that tries to predicts which of the two methods is going to be faster.

I don't see how to avoid that.  Set N=100 inserts before you read the
whole thing into memory.  You'll need to be able to cache, somewhere,
whether the DB has been read since last reboot (you could use a table
in the same DB for this).

> Besides, pre-caching the file sounds easier than it is to accomplish,
> because all methods suggested on this list did not work on Windows (for
> example copying the file to null). Windows and the harddrive have their own
> logic to decide which data to cache, and I haven't found a simple way to
> force a certain file into cache.

On many operating systems copying a file to /dev/null or equivalent
can fail to read the file into cache.

On a Unix the cp(1) utility might mmap() in the file then write(2) the
file a page at a time to /dev/null, with the page fault deferred until
the last minute, but since /dev/null doesn't use the data, the page
fault never comes, thus the file is never read into memory.

If you want to read the file reliably you may need to use a SELECT, or
actually *read* the file into memory, not just mmap() it.
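A sketch of "actually read the file" (as opposed to mmap-and-hope), in Python; the scratch file below merely stands in for the database file:

```python
import os
import tempfile

def warm_cache(path, block_size=1 << 20):
    """Sequentially read() every byte so the OS page cache is actually populated."""
    total = 0
    with open(path, "rb") as f:
        while True:
            chunk = f.read(block_size)
            if not chunk:
                break
            total += len(chunk)
    return total

# Demo on a scratch file standing in for the database file
demo = os.path.join(tempfile.mkdtemp(), "scratch.db")
with open(demo, "wb") as f:
    f.write(b"\0" * (3 * 1024 * 1024))
nbytes = warm_cache(demo)
```

Because read() forces the data through the kernel, it cannot be optimized away the way a deferred mmap page fault can.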

>> Or, if FTS really works better, then use that.
>
> I will, but I'm trying to understand the issue that i'm facing, not just
> workaround it. It seems that FTS doesn't need to read the whole index from
> disk, so I'm trying to pinpoint the difference. My best guess is that it
> creates a fresh b-tree for the additional inserts, causing the boost in
> performance.

Yes, it'd be nice to understand what FTS is doing.  I can imagine many
ways to implement an index that has the performance characteristic
you've observed, but with various trade-offs.

Nico
--


Re: [sqlite] INDEX Types

2011-11-09 Thread Black, Michael (IS)
I'm imagining that the index is only loading what it has to for each insert.

That results in head thrashing the disk when the file isn't cached.



I'm going to profile this and see what pops out.  I'm also going to use 3.7.9.





Michael D. Black

Senior Scientist

Advanced Analytics Directorate

Advanced GEOINT Solutions Operating Unit

Northrop Grumman Information Systems




Re: [sqlite] INDEX Types

2011-11-09 Thread Petite Abeille

On Nov 9, 2011, at 10:24 PM, Fabian wrote:

> It seems that FTS doesn't need to read the whole index from
> disk, so I'm trying to pinpoint the difference. My best guess is that it
> creates a fresh b-tree for the additional inserts, causing the boost in
> performance.

Indeed.

Quoting the fine manual:

"Multiple b-tree structures are used instead of a single b-tree to reduce the 
cost of inserting records into FTS tables. When a new record is inserted into 
an FTS table that already contains a lot of data, it is likely that many of the 
terms in the new record are already present in a large number of existing 
records. If a single b-tree were used, then large doclist structures would have 
to be loaded from the database, amended to include the new docid and 
term-offset list, then written back to the database. Using multiple b-tree 
tables allows this to be avoided by creating a new b-tree which can be merged 
with the existing b-tree (or b-trees) later on. Merging of b-tree structures 
can be performed as a background task, or once a certain number of separate 
b-tree structures have been accumulated. Of course, this scheme makes queries 
more expensive (as the FTS code may have to look up individual terms in more 
than one b-tree and merge the results), but it has been found that in 
 practice this overhead is often negligible."

http://www.sqlite.org/fts3.html#section_8
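The manual also documents a way to force that background merge by hand: inserting the special token 'optimize' into the FTS table. A minimal sketch (Python's sqlite3; assumes an SQLite build with FTS4):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE ft USING fts4(body)")
db.executemany("INSERT INTO ft(body) VALUES (?)",
               [("the quick brown fox",), ("jumps over the lazy dog",)])
# Merge the accumulated b-tree segments into one (documented FTS special command)
db.execute("INSERT INTO ft(ft) VALUES ('optimize')")
rows = db.execute("SELECT rowid FROM ft WHERE ft MATCH 'fox'").fetchall()
```

Running 'optimize' periodically trades some write-time work for cheaper queries, which is the trade-off the quoted passage describes.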


Re: [sqlite] INDEX Types

2011-11-09 Thread Fabian
2011/11/9 Black, Michael (IS) 

OK...you're right...a reboot kills it.
>

I'm glad someone was able to reproduce this on Linux, ruling out the
possibility that it's a Windows issue.


> However, reboot again and add "select count(*) from a;" as the first line
> of gendat2.sql
>

So if a simple SELECT COUNT(*) can speed up the same insert from 90 seconds
to 7 seconds (including the count), does this confirm it's a sequential vs
random reads problem?


Re: [sqlite] INDEX Types

2011-11-09 Thread Fabian
2011/11/9 Nico Williams 

>
> I don't get it.  You're reading practically the whole file in a random
> manner, which is painfully slow, so why can't you read the file in one
> fell swoop (i.e., sequential reads)??
>

I'm only reading the whole file when the number of additional inserts is
high enough to cause the whole index to be read from disk. But if I always
pre-cache the database, it will downgrade performance for cases when only
10 inserts need to be done. And I'd like to avoid having some fuzzy logic
that tries to predict which of the two methods is going to be faster.

Besides, pre-caching the file sounds easier than it is to accomplish,
because all methods suggested on this list did not work on Windows (for
example copying the file to null). Windows and the harddrive have their own
logic to decide which data to cache, and I haven't found a simple way to
force a certain file into cache.

> Or, if FTS really works better, then use that.


I will, but I'm trying to understand the issue I'm facing, not just work
around it. It seems that FTS doesn't need to read the whole index from
disk, so I'm trying to pinpoint the difference. My best guess is that it
creates a fresh b-tree for the additional inserts, which causes the boost
in performance.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] INDEX Types

2011-11-09 Thread Black, Michael (IS)
OK...you're right...a reboot kills it.  Here's the program I generated the data 
with:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char key[9];
    long lkey;
    int i;
    FILE *fp1,*fp2;
    fp1=fopen("gendat1.sql","w");
    fp2=fopen("gendat2.sql","w");
    fputs("CREATE TABLE t (a text);",fp1);
    fputs("CREATE INDEX tx on t(a);",fp1);
    fputs("BEGIN;",fp1);
    fputs("BEGIN;",fp2);
    for(i=0;i<1100000;++i) {
        int j;
        for(j=0;j<8;++j) {
            key[j]=(random()%26)+'a';
        }
        key[8]='\0';  /* terminate the 8-character key */
        if (i < 1000000) {
            fprintf(fp1,"insert into t values('%s');\n",key);
        }
        else {
            /* reinterpret the 8 key bytes as a long without aliasing issues */
            memcpy(&lkey,key,sizeof lkey);
            fprintf(fp2,"insert into t values(%ld);\n",lkey);
        }
    }
    fputs("COMMIT;",fp1);
    fputs("COMMIT;",fp2);
    fclose(fp1);
    fclose(fp2);
    return 0;
}


time sqlite3 index.db < gendat1.sql
21.594u 5.497s 0:27.87 97.1%0+0k 0+0io 0pf+0w
time sqlite3 index.db < gendat2.sql
12.302u 1.035s 0:04.46 74.6% 0+0k 0+0io 0pf+0w

rm index.db
time sqlite3 index.db < gendat1.sql
20.641u 6.477s 0:27.91 97.1%0+0k 0+0io 0pf+0w

reboot
time sqlite3 index.db < gendat2.sql
2.474u 1.219s 1:28.56 4.1%  0+0k 0+0io 0pf+0w

However, reboot again and add "select count(*) from a;" as the first line of 
gendat2.sql

time sqlite3 index.db < gendat2.sql

echo 3 > /proc/sys/vm/drop_caches

You don't see the same performance drop.

So I'm wondering if this is the disk cache instead of the Linux cache.


Michael D. Black
Senior Scientist
Advanced Analytics Directorate
Advanced GEOINT Solutions Operating Unit
Northrop Grumman Information Systems



From: sqlite-users-boun...@sqlite.org [sqlite-users-boun...@sqlite.org] on 
behalf of Fabian [fabianpi...@gmail.com]
Sent: Wednesday, November 09, 2011 1:36 PM
To: General Discussion of SQLite Database
Subject: EXT :Re: [sqlite] INDEX Types


2011/11/9 Luuk 

>
> no, i did not reboot, I created two NEW databases, so rebooting should not
> make a big difference because its not cached in both cases.


If you just created it, why wouldn't it be in cache? You just wrote all
the data, so it's highly likely to be in some kind of cache (OS, HD, etc.).

As already explained to Michael, I experience no problems when I create the
database, close it, re-open it, and add some more rows. The problem only
appears if I reboot in between those steps.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] INDEX Types

2011-11-09 Thread Nico Williams
On Wed, Nov 9, 2011 at 1:53 PM, Fabian  wrote:
> 2011/11/9 Nico Williams 
>> What's wrong with reading the whole file into memory at boot time as a
>> way to prime the cache?  Rebooting always takes some time, mostly the
>> time to read all sorts of files.
>>
> It's a desktop application, I cannot pre-cache anything before the user
> launches the app. After the launch a couple of inserts need to be done, but
> having to read the whole database file into memory, just to be able to
> perform those inserts, doesn't seem a scalable solution.

I don't get it.  You're reading practically the whole file in a random
manner, which is painfully slow, so why can't you read the file in one
fell swoop (i.e., sequential reads)??

You've picked a technology that scales only vertically with the
hardware.  If you've reached the limit of that hardware (no, you've
not)  then it's time to upgrade the hardware or switch to a technology
that scales horizontally.  But you're not there yet.  And this is for
a desktop app, so horizontal scaling would seem out of the question
(or else you'd have built a client/server app in the first place).
This leaves you with pre-fetching the file.  You could defer the
pre-fetching as much as possible (e.g., wait until the Nth operation,
for N<<10,000), but that's about as much as you can do.  Or, if FTS
really works better, then use that.

nico
--
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] [bug] LIKE operator ignores rest of string after NUL character

2011-11-09 Thread Yuriy Kaminskiy
Roger Andersson wrote:
>  On 11/09/11 19:42, Yuriy Kaminskiy wrote:
>> Paul Corke wrote:
>>> On 09 November 2011 15:32, hmas wrote:
>>>
 sqlite>  select hex(foocol)  from footable where foocol like
 '98012470700566';
 39393939393830313234373037303035363600
>>> It looks like there's an extra 00 on the end.
>>>
>>> x'3900' != x'39'
>> That said, it seems LIKE operator is buggy.
>> sqlite>  SELECT X'1245005679' LIKE X'1245001234';
>> 1
>>
> 
> On Windows
> 
> sqlite3.exe
> SQLite version 3.7.9 2011-11-01 00:52:41
> Enter ".help" for instructions
> Enter SQL statements terminated with a ";"
> sqlite> SELECT X'1245005679',X'1245001234';
> ↕E|↕E

These strings are (intentionally) not printable, no surprise here.

Look at:
SELECT hex(X'1245005679'),hex(X'1245001234');

And compare:
SELECT X'1245005679' LIKE X'1245001234';
1 -- incorrect
SELECT X'1245005679' = X'1245001234';
0 -- correct
SELECT X'1245005679' > X'1245001234';
1 -- correct

"LIKE" (in both the "native" and "icu" implementations) ignores the value
length (sqlite3_value_bytes) and stops at the NUL character. Compare that
with "=" and ">", which compare the full value.

Arguably, SQLITE_BLOB should be an illegal operand type for LIKE/GLOB, but
then this should be documented, and they should return an error when called
with a BLOB argument instead of returning nonsense.

Well, I'm not sure; maybe shell.c is also buggy with respect to printing
blobs (only ".mode insert" handles SQLITE_BLOB properly [IIRC,
NUL-in-middle is only legal in a BLOB]; the other .modes treat everything
as text and ignore the length, and so are arguably buggy), but that's a
different issue.
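The trailing-NUL symptom from the original thread is easy to reproduce. A minimal sketch using Python's stdlib sqlite3 module (an assumption; the original poster was binding from C), storing the value with its terminating NUL via a BLOB-to-TEXT cast:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE footable (foocol TEXT)")
# Store '98012470700566' plus a trailing NUL byte, as a C program that
# binds strlen(s)+1 bytes would.
con.execute(
    "INSERT INTO footable VALUES (CAST(X'393830313234373037303035363600' AS TEXT))"
)

hexv = con.execute("SELECT hex(foocol) FROM footable").fetchone()[0]
eq = con.execute(
    "SELECT count(*) FROM footable WHERE foocol = '98012470700566'"
).fetchone()[0]

print(hexv)  # 393830313234373037303035363600 (note the trailing 00)
print(eq)    # 0: '=' compares the full value, so the extra NUL makes it fail
```

Whether LIKE still matches here depends on the SQLite build; the "=" behavior is the reliable one, which is why binding exactly strlen(s) bytes (without the terminator) is the real fix.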

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] INDEX Types

2011-11-09 Thread Fabian
2011/11/9 Nico Williams 

>
> What's wrong with reading the whole file into memory at boot time as a
> way to prime the cache?  Rebooting always takes some time, mostly the
> time to read all sorts of files.
>
>
It's a desktop application, I cannot pre-cache anything before the user
launches the app. After the launch a couple of inserts need to be done, but
having to read the whole database file into memory, just to be able to
perform those inserts, doesn't seem a scalable solution.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] INDEX Types

2011-11-09 Thread Nico Williams
Fabian,

What's wrong with reading the whole file into memory at boot time as a
way to prime the cache?  Rebooting always takes some time, mostly the
time to read all sorts of files.

Nico
--
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] INDEX Types

2011-11-09 Thread Nico Williams
On Wed, Nov 9, 2011 at 12:04 PM, Fabian  wrote:
> 2011/11/9 Simon Slavin 
>> Didn't someone recently note that entering the first million records was
>> fast, but if he then closed and reopened the database, entering the next
>> 100,000 records was slow ?
>>
> Yes, and there is still no real explanation for it, other than slow disk
> reads. But even with very slow random disk I/O, 30 seconds seems still way
> too slow for a 100MB file.

I posted some math on this assuming typical disk seek times.  30
seconds is very much less than the worst case scenario.

I don't have an answer regarding FTS.

Nico
--
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] SQLite Error (19)

2011-11-09 Thread Fabian
I'm in the process of converting a normal table to a FTS virtual table, and
I'm noticing different behaviour if I try to insert duplicate rowid's. I'm
using the 'INSERT OR IGNORE' statement, and on the normal table I can try
to insert millions of duplicates very quickly, but on the FTS table it's
very slow, and for every failed insert I get:

SQLite error (19): abort at 10 in [INSERT INTO 'main'.'table_content'
VALUES(?,(?))]: PRIMARY KEY must be unique

So a possible explanation for the performance difference would be that the
virtual table is sending a TRACE message for each row, but how can I
disable/prevent that? Or is there another reason the virtual table is slow
with duplicates?
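On an ordinary (non-virtual) table, INSERT OR IGNORE resolves a rowid collision silently, without raising the constraint error seen in the FTS trace. A minimal sketch of that baseline behavior using Python's stdlib sqlite3 module (an assumption of this sketch):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, body TEXT)")

con.execute("INSERT OR IGNORE INTO t VALUES (1, 'first')")
con.execute("INSERT OR IGNORE INTO t VALUES (1, 'duplicate')")  # silently skipped

rows = con.execute("SELECT id, body FROM t").fetchall()
print(rows)  # [(1, 'first')]
```

With the FTS shadow tables the same conflict is detected inside the virtual-table layer, which is where the logged "abort at 10" message and the extra cost come from.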
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] INDEX Types

2011-11-09 Thread Luuk

On 09-11-2011 20:14, Fabian wrote:

2011/11/9 Luuk


On 09-11-2011 17:23, Black, Michael (IS) wrote:


  time sqlite3

$ time sqlite3
Did you reboot before the second insert? Because the difference I'm seeing
is much larger than 38%. Did you test it on Linux or Windows?


Linux (Linux opensuse 2.6.27.56-0.1-default #1 SMP 2010-12-01 16:57:58 
+0100 x86_64 x86_64 x86_64 GNU/Linux)


no, I did not reboot. I created two NEW databases, so rebooting should
not make a big difference because it's not cached in either case.





___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] INDEX Types

2011-11-09 Thread Fabian
2011/11/9 Luuk 

> On 09-11-2011 17:23, Black, Michael (IS) wrote:
>
>> time sqlite3 < index1.sql
>> 19.307u 0.082s 0:19.40 99.8%    0+0k 0+0io 0pf+0w
>>
>> time sqlite3<  index2.sql
>> 19.266u 0.092s 0:19.37 99.8%0+0k 0+0io 0pf+0w
>>
>>
>
> $ time sqlite3 < gendat1.sql
> real    0m21.094s
> user    0m20.989s
> sys     0m0.104s
> $ time sqlite3 < gendat2.sql
> real    0m20.898s
> user    0m20.813s
> sys     0m0.084s
> $ time sqlite3 gendat1.sqlite < gendat1.sql
> real    2m32.701s
> user    0m26.038s
> sys     0m5.256s
> $ time sqlite3 gendat2.sqlite < gendat2.sql
> real    1m50.452s
> user    0m25.534s
> sys     0m5.360s
>
>
> The difference when creating a db on disk (size: 35Mb) is (152-110)/110=
> 38%
>
>
Did you reboot before the second insert? Because the difference I'm seeing
is much larger than 38%. Did you test it on Linux or Windows?
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] INDEX Types

2011-11-09 Thread Luuk

On 09-11-2011 17:23, Black, Michael (IS) wrote:

  time sqlite3


$ time sqlite3
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite

2011-11-09 Thread Doug Currie
The PIC32MX664F064L has
 64 KiB Program Memory Size
 32 KiB RAM

SQLite as it presently stands will not fit within these constraints.

e

On Nov 9, 2011, at 7:47 AM, Parthiv Shah wrote:

> Respected Sir,
> 
> We want to use DB SQLite in our product.
> 
> We are using PIC32MX664F064L microcontroller from microchip.
> 
> Is it possible to embedded SQLite into it?
> 
> Do we need any Operating system for SQLite ?
> 
> Product is data acquisition system. 
> 
> For data storage we are using SD Card.
> 
> We are using FAT32 file system.
> 
> Please guide us how we can test it?
> 
> For more information about us, please visit our website:
> www.promptsoftech.com
> 
> Best Regards
> Parthiv Shah
> 
> Prompt Softech
> Ahmedabad
> India
> 
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] [bug] LIKE operator ignores rest of string after NUL character

2011-11-09 Thread Roger Andersson

 On 11/09/11 19:42, Yuriy Kaminskiy wrote:

Paul Corke wrote:

On 09 November 2011 15:32, hmas wrote:


sqlite>  select hex(foocol)  from footable where foocol like
'98012470700566';
39393939393830313234373037303035363600

It looks like there's an extra 00 on the end.

x'3900' != x'39'

That said, it seems LIKE operator is buggy.
sqlite>  SELECT X'1245005679' LIKE X'1245001234';
1

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users



On Windows

sqlite3.exe
SQLite version 3.7.9 2011-11-01 00:52:41
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> SELECT X'1245005679',X'1245001234';
↕E|↕E

Roger
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] [bug] LIKE operator ignores rest of string after NUL character (was: select ... where [=] or [like])

2011-11-09 Thread Yuriy Kaminskiy
Paul Corke wrote:
> On 09 November 2011 15:32, hmas wrote:
> 
>> sqlite> select hex(foocol)  from footable where foocol like
>> '98012470700566';
>> 39393939393830313234373037303035363600
> 
> It looks like there's an extra 00 on the end.
> 
> x'3900' != x'39'

That said, it seems LIKE operator is buggy.
sqlite> SELECT X'1245005679' LIKE X'1245001234';
1

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] INDEX Types

2011-11-09 Thread Fabian
2011/11/9 Black, Michael (IS) 

> Are you sure you're using BEGIN/COMMIT on your transactions?
>

Yes


> I just used my benchmark data and inserted another 100,000 rows into the
> database in 2.3 seconds.


That is because you insert those additional rows immediately after
creating the database. I get the same excellent performance as you when I
do that. But if you make sure the database is not in cache (for example,
reboot the machine), you will see that the additional inserts take
MUCH longer than reasonably expected.

Could you repeat your benchmark, with a reboot between the two transactions?
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] INDEX Types

2011-11-09 Thread Black, Michael (IS)
Are you sure you're using BEGIN/COMMIT on your transactions?

I just used my benchmark data and inserted another 100,000 rows into the 
database in 2.3 seconds.

I made 1,100,000 records and cut the last 100,000 into a seperate file with 
BEGIN/COMMIT on both.

time sqlite3 index.db 

>
> Didn't someone recently note that entering the first million records was
> fast, but if he then closed and reopened the database, entering the next
> 100,000 records was slow ?
>
>
Yes, and there is still no real explanation for it, other than slow disk
reads. But even with very slow random disk I/O, 30 seconds seems still way
too slow for a 100MB file.

But today I made a new observation: if I create the same table as a virtual
FTS4 table, I can add the additional rows within 1 second (even on an
un-cached database file).

So if the reason for the slowness is disk reads, does the FTS4 way of
creating/updating the index require far fewer reads? Maybe because it
allows for separate tree-branches?

FTS is overkill for my simple requirements, but if it's so much faster than
a regular index, why not? The only things that's holding me back from
switching to FTS for this table is:

 - I cannot use the UNIQUE constraint, to disallow duplicate values.
 - I cannot search efficiently for rows that DON'T match a certain value,
because FTS doesn't allow a single NOT operator.

So can someone explain what FTS is doing behind the scenes that makes these
additional inserts so much faster?
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] INDEX Types

2011-11-09 Thread Fabian
2011/11/9 Simon Slavin 

>
> Didn't someone recently note that entering the first million records was
> fast, but if he then closed and reopened the database, entering the next
> 100,000 records was slow ?
>
>
Yes, and there is still no real explanation for it, other than slow disk
reads. But even with very slow random disk I/O, 30 seconds seems still way
too slow for a 100MB file.

But today I made a new observation: if I create the same table as a virtual
FTS4 table, I can add the additional rows within 1 second (even on an
un-cached database file).

So if the reason for the slowness is disk reads, does the FTS4 way of
creating/updating the index require far fewer reads? Maybe because it
allows for separate tree-branches?

FTS is overkill for my simple requirements, but if it's so much faster than
a regular index, why not? The only things that's holding me back from
switching to FTS for this table is:

 - I cannot use the UNIQUE constraint, to disallow duplicate values.
 - I cannot search efficiently for rows that DON'T match a certain value,
because FTS doesn't allow a single NOT operator.

So can someone explain what FTS is doing behind the scenes that makes these
additional inserts so much faster?
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] select ... where [=] or [like]

2011-11-09 Thread hmas



Pavel Ivanov-2 wrote:
> 
> On Wed, Nov 9, 2011 at 10:31 AM, hmas  wrote:
 Thanks for your answer.
 foocol contains a string of numbers.
 The result of typeof(foocol) is text.
>>>
>>> And what's the result of hex(foocol) ? My guess is you have some extra
>>> non-printable characters in there.
>>>
>> Hi,
>> Here are the results:
>> sqlite> select hex(foocol)  from footable where foocol like
>> '98012470700566';
>> 39393939393830313234373037303035363600
> 
> Just as I said - extra '\0' character at the end which is not visible
> when you try to see the contents of the field, but is crucial for
> equality to fail.
> 
> 
> Pavel
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 
> 

Thanks a lot, that's exactly it.
I was binding the text with the null character terminating the string
I took it off and it works now
Herve

-- 
View this message in context: 
http://old.nabble.com/select-...-where--%3D--or--like--tp32802547p32812483.html
Sent from the SQLite mailing list archive at Nabble.com.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] INDEX Types

2011-11-09 Thread Simon Slavin

On 9 Nov 2011, at 4:42pm, Fabian wrote:

> 2011/11/9 Black, Michael (IS) 
> 
>> Hmmm...appears to be the same for this case which, I must say, I find
>> surprising.
> 
> Thanks for actually benchmarking it. I'm also a bit surprised, because I
> always thought SQLite handled INTEGER more efficiently than TEXT.

In order to save filespace, SQLite stores integers not as a fixed number of 
bytes per value, but as a different number of bytes depending on the value.  
This would make things slower than being able to read a fixed number of bytes 
from disk and using double-word maths for the values you've read.

> I also did some new benchmarks on the inserts-slowdown I experience, and to
> rule out VirtualBox, I performed them on real hardware, but the results are
> the same:
> 
> 10 secs to create a database with 1 million rows, and 30 secs to add an
> additional 10,000 rows, if the database file is un-cached.
> 
> Maybe this is normal behaviour, but how can it possibly be that it's faster
> to delete the whole database and re-create it, than just adding a
> (relatively) small amount of additional rows?

Didn't someone recently note that entering the first million records was fast, 
but if he then closed and reopened the database, entering the next 100,000 
records was slow ?

Simon.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] INDEX Types

2011-11-09 Thread Fabian
2011/11/9 Black, Michael (IS) 

> Hmmm...appears to be the same for this case which, I must say, I find
> surprising.
>
>
Thanks for actually benchmarking it. I'm also a bit surprised, because I
always thought SQLite handled INTEGER more efficiently than TEXT.

I also did some new benchmarks on the inserts-slowdown I experience, and to
rule out VirtualBox, I performed them on real hardware, but the results are
the same:

10 secs to create a database with 1 million rows, and 30 secs to add an
additional 10,000 rows, if the database file is un-cached.

Maybe this is normal behaviour, but how can it possibly be that it's faster
to delete the whole database and re-create it, than just adding a
(relatively) small amount of additional rows?
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] INDEX Types

2011-11-09 Thread Pavel Ivanov
> But I'm wondering if SQLite can deal more efficiently with a INTEGER index
> (64bits) VS an 8-byte TEXT column (also 64bits). I know the INTEGERs
> require less disk-space because SQLite can store smaller values in fewer
> bytes, but are there any other differences that make them more preferable
> as an INDEX? If there is no difference in performance, I could just take
> the first 8 characters of the TEXT column as a hash-value, instead of
> calculating a CRC64 each time.

My first thought was "sure, text will be compared byte-by-byte, while
64-bit integers on a 64-bit CPU will be compared in one operation". But
then SQLite has to read the integer from the database first, and I'm not
sure whether it does any optimization to read the whole integer in one CPU
operation; maybe it does that byte-by-byte as well. So you probably won't
notice much difference in performance between the two ways of indexing.


Pavel


On Wed, Nov 9, 2011 at 10:23 AM, Fabian  wrote:
> I'm having an issue where inserts on an un-cached database are very slow.
> The reason probably is that a large part of the existing index needs to be
> read from disk, to be able to insert new rows to the index. The length of
> the values in the indexed column are around 60 bytes, so I'm thinking about
> adding an extra column, containing a shorter hash (64bits) of the actual
> value, and move the index to that column instead. This way the total size
> of the index (and the database) will be much smaller, hopefully resulting
> in faster inserts.
>
> But I'm wondering if SQLite can deal more efficiently with a INTEGER index
> (64bits) VS an 8-byte TEXT column (also 64bits). I know the INTEGERs
> require less disk-space because SQLite can store smaller values in fewer
> bytes, but are there any other differences that make them more preferable
> as an INDEX? If there is no difference in performance, I could just take
> the first 8 characters of the TEXT column as a hash-value, instead of
> calculating a CRC64 each time.
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] INDEX Types

2011-11-09 Thread Simon Slavin

On 9 Nov 2011, at 3:23pm, Fabian wrote:

> I'm having an issue where inserts on an un-cached database are very slow.
> The reason probably is that a large part of the existing index needs to be
> read from disk, to be able to insert new rows to the index.

Are you doing a lot of INSERT commands one after another ?  Things will go far 
faster if you wrap them in a transaction:

BEGIN TRANSACTION;
INSERT ...;
INSERT ...;
INSERT ...;
END TRANSACTION;
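The same batching pattern, sketched with Python's stdlib sqlite3 module (an assumption of this sketch; Simon's advice is SQL-level and applies to any binding). With `isolation_level=None` the module leaves transaction control to explicit BEGIN/COMMIT, so the whole batch is committed, and fsynced, once instead of per-INSERT:

```python
import sqlite3

# isolation_level=None means the module does not start transactions for us;
# we issue BEGIN/COMMIT ourselves, exactly as in Simon's SQL sketch.
con = sqlite3.connect(":memory:", isolation_level=None)
con.execute("CREATE TABLE footable (foocol TEXT)")

con.execute("BEGIN")
con.executemany("INSERT INTO footable VALUES (?)",
                (("row-%d" % i,) for i in range(10000)))
con.execute("COMMIT")

n = con.execute("SELECT count(*) FROM footable").fetchone()[0]
print(n)  # 10000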

> The length of
> the values in the indexed column are around 60 bytes, so I'm thinking about
> adding an extra column, containing a shorter hash (64bits) of the actual
> value, and move the index to that column instead. This way the total size
> of the index (and the database) will be much smaller, hopefully resulting
> in faster inserts.
> 
> But I'm wondering if SQLite can deal more efficiently with a INTEGER index
> (64bits) VS an 8-byte TEXT column (also 64bits). I know the INTEGERs
> require less disk-space because SQLite can store smaller values in fewer
> bytes, but are there any other differences that make them more preferable
> as an INDEX?

Yes.  It is far faster to compare two integers than it is to compare two 
strings.  This will speed up everything to do with the index, and make the 
index take up less space on disk.  Of course, you will have to take the time to 
calculate a hash value for each string.
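A sketch of the hash-column idea, using Python's stdlib sqlite3 module (an assumption of this sketch) and the poster's cheaper variant, the first 8 bytes of the value interpreted as a 64-bit integer, in place of a real CRC64. Because a truncated prefix is not unique, the query must probe the small integer index first and then confirm against the full text:

```python
import sqlite3

def hash64(text):
    # Cheap stand-in for CRC64: interpret the first 8 bytes of the
    # value as a big-endian integer (zero-padded if shorter).
    b = text.encode("utf-8")[:8].ljust(8, b"\x00")
    return int.from_bytes(b, "big")

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (val TEXT, h INTEGER)")
con.execute("CREATE INDEX t_h ON t(h)")

vals = ["some-sixty-byte-identifier-%04d" % i for i in range(100)]
con.executemany("INSERT INTO t VALUES (?, ?)", [(v, hash64(v)) for v in vals])

# Probe the short integer index first, then confirm against the full text;
# here all rows share the same 8-byte prefix, so the val check is essential.
target = vals[42]
row = con.execute("SELECT val FROM t WHERE h = ? AND val = ?",
                  (hash64(target), target)).fetchone()
print(row[0] == target)  # True
```

Note the deliberate worst case in the demo: every value shares the same first 8 bytes, so a prefix "hash" collides constantly, which is why a real CRC64 (or similar) distributes index keys far better than truncation.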

> If there is no difference in performance, I could just take
> the first 8 characters of the TEXT column as a hash-value, instead of
> calculating a CRC64 each time.

This would be a very fast hashing calculation.  If it's easy to program, then 
it might be worth a try.

Simon.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] select ... where [=] or [like]

2011-11-09 Thread Pavel Ivanov
On Wed, Nov 9, 2011 at 10:31 AM, hmas  wrote:
>>> Thanks for your answer.
>>> foocol contains a string of numbers.
>>> The result of typeof(foocol) is text.
>>
>> And what's the result of hex(foocol) ? My guess is you have some extra
>> non-printable characters in there.
>>
> Hi,
> Here are the results:
> sqlite> select hex(foocol)  from footable where foocol like
> '98012470700566';
> 39393939393830313234373037303035363600

Just as I said - extra '\0' character at the end which is not visible
when you try to see the contents of the field, but is crucial for
equality to fail.


Pavel
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] select ... where [=] or [like]

2011-11-09 Thread Paul Corke
On 09 November 2011 15:32, hmas wrote:

> sqlite> select hex(foocol)  from footable where foocol like
> '98012470700566';
> 39393939393830313234373037303035363600

It looks like there's an extra 00 on the end.

x'3900' != x'39'

Paul.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] select ... where [=] or [like]

2011-11-09 Thread hmas



Pavel Ivanov-2 wrote:
> 
>> Thanks for your answer.
>> foocol contains a string of numbers.
>> The result of typeof(foocol) is text.
> 
> And what's the result of hex(foocol) ? My guess is you have some extra
> non-printable characters in there.
> 
> 
> Pavel
> 
> 
> On Tue, Nov 8, 2011 at 12:11 PM, hmas  wrote:
>>
>>
>> Simon Slavin-3 wrote:
>>>
>>>
>>> On 8 Nov 2011, at 10:58am, hmas wrote:
>>>
 My application inserts into a table a string of numbers (for example
 '98011295918220').
 if I do (either from command line or from my c++ program)
 select foocol from footable where foocol = '98011295918220'
 the result set is empty
>>>
>>> I'm trying to figure out whether that string of digits was stored as a
>>> string or a number.  Please do
>>>
>>> SELECT foocol,typeof(foocol) FROM footable WHERE foocol LIKE
>>> '98011295918220'
>>>
>>> and tell us what the results are.
>>>
>>> Simon.
>>>
>>> Thanks for your answer.
>>> foocol contains a string of numbers.
>>> The result of typeof(foocol) is text.
>>> Herve.
>>>
>>> ___
>>> sqlite-users mailing list
>>> sqlite-users@sqlite.org
>>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>>
>>>
>>
>> --
>> View this message in context:
>> http://old.nabble.com/select-...-where--%3D--or--like--tp32802547p32803113.html
>> Sent from the SQLite mailing list archive at Nabble.com.
>>
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 
> 
Hi,
Here are the results:
sqlite> select hex(foocol)  from footable where foocol like
'98012470700566';
39393939393830313234373037303035363600
sqlite> select hex(foocol)  from footable where foocol =
'98012470700566';
sqlite>
select ... where ... =
returns nothing
The characters seem correctly encoded
herve
-- 
View this message in context: 
http://old.nabble.com/select-...-where--%3D--or--like--tp32802547p32811722.html
Sent from the SQLite mailing list archive at Nabble.com.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] INDEX Types

2011-11-09 Thread Fabian
I'm having an issue where inserts on an un-cached database are very slow.
The reason probably is that a large part of the existing index needs to be
read from disk, to be able to insert new rows to the index. The length of
the values in the indexed column are around 60 bytes, so I'm thinking about
adding an extra column, containing a shorter hash (64bits) of the actual
value, and move the index to that column instead. This way the total size
of the index (and the database) will be much smaller, hopefully resulting
in faster inserts.

But I'm wondering if SQLite can deal more efficiently with a INTEGER index
(64bits) VS an 8-byte TEXT column (also 64bits). I know the INTEGERs
require less disk-space because SQLite can store smaller values in fewer
bytes, but are there any other differences that make them more preferable
as an INDEX? If there is no difference in performance, I could just take
the first 8 characters of the TEXT column as a hash-value, instead of
calculating a CRC64 each time.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Vacuum on sqlite file fills the .wal file and it never shriks since

2011-11-09 Thread Yuriy Kaminskiy
Richard Hipp wrote:
> On Tue, Nov 8, 2011 at 10:04 AM, Orit Alul  wrote:
> 
>> Hi,
>>
>> I've performed a vacuuming operation (I ran the following command:
>> sqlite3.exe  VACUUM;).
>> It caused the WAL file to be the same size as the db file and it never
>> shrink back.
>>
>> For example I had a db file in the size of 1.8GB, wal file in the size of
>> 1.7MB, shm file in the size of 32KB.
>> I ran the VACUUM; command on the db file, now the db file got down to
>> 1.2GB, wal file got up to 1.2GB and the shm file got up to 9.5MB.
>>
>> Do you have  a clue what is the cause for this strange behavior? Do you
>> know if there's a command that can shrink the files back?
>>
> 
> The WAL file is not truncated (prior to the last database connection
> closing) because overwriting an existing file is faster than appending to a
> file.  Hence reusing an existing WAL is faster than truncating and
> appending.
> 
> But if the file is larger than you like, you can set an upper bound on its
> size using the journal_size_limit pragma.
> http://www.sqlite.org/pragma.html#pragma_journal_size_limit  With this
> pragma setting, the WAL will be truncated down to the requested size after
> each successful checkpoint operation.

IMO, that should be mentioned in the documentation. Currently it says this
pragma affects only journal_mode=persist and locking_mode=exclusive.
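The pragma Richard describes is easy to check. A minimal sketch with Python's stdlib sqlite3 module (an assumption of this sketch): switch a file database to WAL, set journal_size_limit, and the pragma echoes back the limit that will cap the WAL after each successful checkpoint:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "wal.db")
con = sqlite3.connect(path, isolation_level=None)

mode = con.execute("PRAGMA journal_mode=WAL").fetchone()[0]
# 1 MiB cap; after each successful checkpoint the WAL is truncated
# back down to at most this size.
limit = con.execute("PRAGMA journal_size_limit=1048576").fetchone()[0]

con.execute("CREATE TABLE t (x)")
con.execute("PRAGMA wal_checkpoint(TRUNCATE)")

print(mode, limit)  # wal 1048576
```

The pragma is per-connection, so an application that cares about WAL growth should set it right after opening each connection.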

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] sqlite3_open() failed with file name containsjapanesecharacters

2011-11-09 Thread Sireesha Jakku
Thank you so much for all your replies and pointers.
I was able to make it work on Japanese OS.

Regards,
Sireesha J



DISCLAIMER
==
This e-mail may contain privileged and confidential information which is the 
property of Persistent Systems Ltd. It is intended only for the use of the 
individual or entity to which it is addressed. If you are not the intended 
recipient, you are not authorized to read, retain, copy, print, distribute or 
use this message. If you have received this communication in error, please 
notify the sender and delete all copies of this message. Persistent Systems 
Ltd. does not accept any liability for virus infected mails.


Re: [sqlite] Cannot compile extension-functions.c

2011-11-09 Thread Simon Davies
On 7 November 2011 21:58, Richard D Watkins
 wrote:
> I am trying to compile extension-functions.c so that I may use the
> stdev function in SQLite queries. However, I keep getting errors that
> indicate failure to compile.
>
> First, I compiled the code for pysqlite, which created a directory that
> contains the following files:
>
> amalgamation  build  cross_bdist_wininst.py  cross_bdist_wininst.pyc  doc
> lib  LICENSE  MANIFEST.in  PKG-INFO  setup.cfg  setup.py   src
>
> Then, I fetched and added these two files to the above directory:
> sqlite3ext.h  and  extension-functions.c .
> Then, I made this directory the current working directory, and executed the
> following command
>
> gcc -fPIC -lm -shared extension-functions.c -o libsqlitefunctions.so
>
> This displays a long list of error messages, some of which are copied
> below.  Pysqlite works fine. I can create tables, insert data and fetch it
> back out. However, I cannot compile the extension-functions.c file.   I am
> running gcc version 3.4.5.
>
> Does any one know what is going wrong?
>
>
> # gcc -fPIC -lm -shared extension-functions.c -o libsqlitefunctions.so
> In file included from extension-functions.c:124:
> sqlite3ext.h:22:21: sqlite3.h: No such file or directory  <  
> sqlite3.h is required
> In file included from extension-functions.c:124:
> sqlite3ext.h:37: error: syntax error before '*' token
.
.
.
>
>
> Regards,
> Richard Watkins
> Product Development & Systems Engineering
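
A hedged sketch of the likely fix, since every quoted error stems from the
first one (the missing sqlite3.h): tell gcc where that header lives with -I.
The ./amalgamation path below is only a guess based on the poster's
directory listing; adjust it to wherever sqlite3.h actually is.

```shell
# Add -I so the compiler can find sqlite3.h; ./amalgamation is an assumed
# location taken from the directory listing in the original message.
gcc -fPIC -shared -I./amalgamation extension-functions.c \
    -o libsqlitefunctions.so -lm
```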

Regards,
Simon


Re: [sqlite] sqlite3_open() failed with file name contains japanese characters

2011-11-09 Thread Black, Michael (IS)
See if these routines and examples help you...

http://www.codeproject.com/KB/string/utfConvert.aspx





Michael D. Black

Senior Scientist

Advanced Analytics Directorate

Advanced GEOINT Solutions Operating Unit

Northrop Grumman Information Systems


From: sqlite-users-boun...@sqlite.org [sqlite-users-boun...@sqlite.org] on 
behalf of Sireesha Jakku [sireesha_ja...@persistent.co.in]
Sent: Wednesday, November 09, 2011 6:02 AM
To: General Discussion of SQLite Database
Subject: EXT :Re: [sqlite] sqlite3_open() failed with file name contains 
japanese characters

Hi Simon,

That's what I am puzzled about.

I am using C++ in my test program and the file name is in a CString, which is 
Unicode - I am not sure whether it is UTF-16.
I tried converting it from Unicode to CP_ACP (which produces multi-byte 
characters in the current code page) and passed that value to sqlite3_open().

Regards,
Sireesha J



Re: [sqlite] Vacuum on sqlite file fills the .wal file and it never shrinks since

2011-11-09 Thread Richard Hipp
On Tue, Nov 8, 2011 at 10:04 AM, Orit Alul  wrote:

> Hi,
>
> I've performed a vacuuming operation (I ran the following command:
> sqlite3.exe  VACUUM;).
> It caused the WAL file to be the same size as the db file and it never
> shrinks back.
>
> For example I had a db file in the size of 1.8GB, wal file in the size of
> 1.7MB, shm file in the size of 32KB.
> I ran the VACUUM; command on the db file, now the db file got down to
> 1.2GB, wal file got up to 1.2GB and the shm file got up to 9.5MB.
>
> Do you have  a clue what is the cause for this strange behavior? Do you
> know if there's a command that can shrink the files back?
>

The WAL file is not truncated (prior to the last database connection
closing) because overwriting an existing file is faster than appending to a
file.  Hence reusing an existing WAL is faster than truncating and
appending.

But if the file is larger than you like, you can set an upper bound on its
size using the journal_size_limit pragma.
http://www.sqlite.org/pragma.html#pragma_journal_size_limit  With this
pragma setting, the WAL will be truncated down to the requested size after
each successful checkpoint operation.
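
For anyone wanting to try the pragma, here is a minimal sketch using
Python's built-in sqlite3 module. The file name and the 1 MB limit are
illustrative, not taken from this thread.

```python
import os
import sqlite3
import tempfile

# Illustrative sketch: cap the WAL file so each successful checkpoint
# truncates it back down to (at most) the limit.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
con = sqlite3.connect(path)
con.execute("PRAGMA journal_mode=WAL")            # switch to WAL mode
con.execute("PRAGMA journal_size_limit=1048576")  # 1 MB upper bound, in bytes
limit = con.execute("PRAGMA journal_size_limit").fetchone()[0]
print(limit)  # 1048576
con.close()
```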



>
> Thanks,
> Orit
>
>
>



-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] null NULL nUlL

2011-11-09 Thread Simon Slavin

On 9 Nov 2011, at 1:08pm, Igor Tandetnik wrote:

> Simon Slavin  wrote:
>> Is SQLite case-sensitive when parsing for NULL ?  Does it identify things 
>> like 'nUlL' as NULL ?  Can I rely on this behaviour in
>> future versions ? 
> 
> NULL as a keyword is parsed case-insensitively. 'nUlL', however, is not
> NULL, but a literal representing a 4-character string.

Sorry, Igor.  I was using apostrophes in a way that doesn't work well on this 
list because of how SQLite uses them.  Perhaps I should have written the above 
as

Does it identify things like nUlL as NULL ?

Simon.


Re: [sqlite] null NULL nUlL

2011-11-09 Thread Igor Tandetnik
Simon Slavin  wrote:
> Is SQLite case-sensitive when parsing for NULL ?  Does it identify things 
> like 'nUlL' as NULL ?  Can I rely on this behaviour in
> future versions ? 

NULL as a keyword is parsed case-insensitively. 'nUlL', however, is not NULL, 
but a literal representing a 4-character string.
-- 
Igor Tandetnik



[sqlite] SQLite

2011-11-09 Thread Parthiv Shah
Respected Sir,

We want to use DB SQLite in our product.

We are using PIC32MX664F064L microcontroller from microchip.

Is it possible to embed SQLite into it?

Do we need any Operating system for SQLite ?

The product is a data acquisition system. 

For data storage we are using SD Card.

We are using FAT32 file system.

Please guide us how we can test it?

For more information about us, please visit our website:
www.promptsoftech.com

Best Regards
Parthiv Shah

Prompt Softech
Ahmedabad
India




[sqlite] Vacuum on sqlite file fills the .wal file and it never shrinks since

2011-11-09 Thread Orit Alul
Hi,

I've performed a vacuuming operation (I ran the following command: sqlite3.exe 
 VACUUM;).
It caused the WAL file to be the same size as the db file and it never shrinks 
back.

For example I had a db file in the size of 1.8GB, wal file in the size of 
1.7MB, shm file in the size of 32KB.
I ran the VACUUM; command on the db file, now the db file got down to 1.2GB, 
wal file got up to 1.2GB and the shm file got up to 9.5MB.

Do you have  a clue what is the cause for this strange behavior? Do you know if 
there's a command that can shrink the files back?

Thanks,
Orit





[sqlite] Cannot compile extension-functions.c

2011-11-09 Thread Richard D Watkins
I am trying to compile extension-functions.c so that I may use the
stdev function in SQLite queries. However, I keep getting errors that
indicate failure to compile.

First, I compiled the code for pysqlite, which created a directory that
contains the following files:

amalgamation  build  cross_bdist_wininst.py  cross_bdist_wininst.pyc  doc
lib  LICENSE  MANIFEST.in  PKG-INFO  setup.cfg  setup.py   src

Then, I fetched and added these two files to the above directory:
sqlite3ext.h  and  extension-functions.c .
Then, I made this directory the current working directory, and executed the
following command

gcc -fPIC -lm -shared extension-functions.c -o libsqlitefunctions.so

This displays a long list of error messages, some of which are copied
below.  Pysqlite works fine. I can create tables, insert data and fetch it
back out. However, I cannot compile the extension-functions.c file.   I am
running gcc version 3.4.5.

Does any one know what is going wrong?


# gcc -fPIC -lm -shared extension-functions.c -o libsqlitefunctions.so
In file included from extension-functions.c:124:
sqlite3ext.h:22:21: sqlite3.h: No such file or directory
In file included from extension-functions.c:124:
sqlite3ext.h:37: error: syntax error before '*' token
sqlite3ext.h:38: error: syntax error before '*' token
sqlite3ext.h:39: error: syntax error before '*' token
sqlite3ext.h:39: error: `bind_blob' declared as function returning a
function
sqlite3ext.h:39: warning: no semicolon at end of struct or union
sqlite3ext.h:39: error: syntax error before ')' token
sqlite3ext.h:40: error: syntax error before '*' token
sqlite3ext.h:41: error: syntax error before '*' token
sqlite3ext.h:42: error: syntax error before '*' token
sqlite3ext.h:43: error: syntax error before '*' token
sqlite3ext.h:44: error: syntax error before '*' token
sqlite3ext.h:45: error: syntax error before '*' token
sqlite3ext.h:46: error: syntax error before '*' token
sqlite3ext.h:47: error: syntax error before '*' token
sqlite3ext.h:47: error: syntax error before ')' token
sqlite3ext.h:48: error: syntax error before '*' token
sqlite3ext.h:48: error: syntax error before ')' token
sqlite3ext.h:49: error: syntax error before '*' token
sqlite3ext.h:50: error: syntax error before '*' token
sqlite3ext.h:50: error: `busy_handler' declared as function returning a
function
sqlite3ext.h:50: error: syntax error before "void"
sqlite3ext.h:51: error: syntax error before '*' token
sqlite3ext.h:52: error: syntax error before '*' token
sqlite3ext.h:53: error: syntax error before '*' token
sqlite3ext.h:54: error: syntax error before '*' token
sqlite3ext.h:54: error: syntax error before "sqlite3"
sqlite3ext.h:55: error: syntax error before '*' token
sqlite3ext.h:55: error: syntax error before "sqlite3"
sqlite3ext.h:56: error: syntax error before '*' token
sqlite3ext.h:57: error: syntax error before '*' token
sqlite3ext.h:58: error: syntax error before '*' token
sqlite3ext.h:59: error: syntax error before '*' token
sqlite3ext.h:60: error: syntax error before '*' token
sqlite3ext.h:61: error: syntax error before '*' token
sqlite3ext.h:62: error: syntax error before '*' token


sqlite3ext.h:187: error: syntax error before '*' token
sqlite3ext.h:190: error: syntax error before '*' token
sqlite3ext.h:190: error: syntax error before '*' token
sqlite3ext.h:190: warning: data definition has no type or storage class
In file included from extension-functions.c:134:
/usr/include/stdio.h:355: error: 'snprintf' redeclared as different kind of
symbol
sqlite3ext.h:130: error: previous declaration of 'snprintf' was here
In file included from extension-functions.c:137:
/usr/include/stdlib.h:584: error: 'malloc' redeclared as different kind of
symbol
sqlite3ext.h:105: error: previous declaration of 'malloc' was here
/usr/include/stdlib.h:595: error: 'realloc' redeclared as different kind of
symbol
sqlite3ext.h:113: error: previous declaration of 'realloc' was here
/usr/include/stdlib.h:597: error: 'free' redeclared as different kind of
symbol
sqlite3ext.h:95: error: previous declaration of 'free' was here
extension-functions.c: In function `sqlite3StrDup':
extension-functions.c:205: error: dereferencing pointer to incomplete type
extension-functions.c: At top level:
extension-functions.c:375: error: syntax error before '*' token
extension-functions.c: In function `sqrtFunc':
extension-functions.c:375: error: `argc' undeclared (first use in this
function)
extension-functions.c:375: error: (Each undeclared identifier is reported
only once
extension-functions.c:375: error: for each function it appears in.)
extension-functions.c:375: error: dereferencing pointer to incomplete type
extension-functions.c:375: error: `argv' undeclared (first use in this
function)
extension-functions.c:375: error: `SQLITE_NULL' undeclared (first use in
this function)
extension-functions.c:375: error: dereferencing pointer to incomplete type
extension-functions.c:375: error: `context' undeclared (first use in this
function)

Re: [sqlite] null NULL nUlL

2011-11-09 Thread Simon Slavin

On 9 Nov 2011, at 11:41am, Luuk wrote:

> On 09-11-2011 11:15, Simon Slavin wrote:
>> Is SQLite case-sensitive when parsing for NULL ?  Does it identify things 
>> like 'nUlL' as NULL ?  Can I rely on this behaviour in future versions ?
> 
> Keywords should be case-insensitive, and NULL is a keyword

Good.

> But why this question? I mean why do you want to rely on it?
> 
> In many cases it is much clearer to write every keyword in capitals,
> avoiding typos

I'll be doing an automated import inside a tiny hand-held device.  A cut-down 
version of SQLite is very useful because the device itself needs to resort the 
data.  But the device is extremely tiny and every byte I can save will be 
important.  The data arrives as a comma-delimited list, like

31,7345,null,339
83,null,112,412
14,null,null,339

and I can turn this into INSERT commands very simply:

INSERT INTO myTable VALUES(  +dataline+   )

(I'm aware that 

Not having to parse and rearrange the incoming data makes my code extremely 
small and fast.  Unfortunately, the detector which produces the data has gone 
through several firmware versions, and one expresses bad readings as 'Null' !  
So I'm checking to see that I can get away without doing any preprocessing.
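
The splice-the-line-in approach described above can be sketched like this
(table name and data are made up; the mixed null/Null/nUlL spellings mimic
the firmware variations mentioned, and splicing untrusted text into SQL is
unsafe in general):

```python
import sqlite3

# Sketch: each incoming CSV line is spliced into an INSERT unchanged,
# relying on SQLite parsing the bare keyword null/Null/nUlL
# case-insensitively.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE readings(a, b, c, d)")
for dataline in ["31,7345,null,339", "83,Null,112,412", "14,NULL,nUlL,339"]:
    # Only safe because the detector output is trusted/controlled;
    # splicing arbitrary text into SQL invites injection.
    con.execute("INSERT INTO readings VALUES(" + dataline + ")")
rows = con.execute("SELECT * FROM readings ORDER BY rowid").fetchall()
print(rows)  # every null spelling comes back as Python None
```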

Thanks for the answer.  I have written code over the years which does things 
like

IF A$ = "NULL" OR A$ = "null" THEN ...

and wanted to make sure SQLite didn't have the same assumptions in.

Simon.


Re: [sqlite] sqlite3_open() failed with file name contains japanese characters

2011-11-09 Thread Sireesha Jakku
Hi Simon,

That's what I am puzzled about.

I am using C++ in my test program and the file name is in a CString, which is 
Unicode - I am not sure whether it is UTF-16.
I tried converting it from Unicode to CP_ACP (which produces multi-byte 
characters in the current code page) and passed that value to sqlite3_open().

Regards,
Sireesha J

-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of Simon Slavin
Sent: Wednesday, November 09, 2011 4:19 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] sqlite3_open() failed with file name contains japanese 
characters


On 9 Nov 2011, at 10:17am, Sireesha Jakku wrote:

> Thank you for your quick reply.
> I tried placing the same file in another location, where the file path 
> contains only English characters, and it worked fine.

Okay, good test.  When you specify a foldername in non-Roman characters, are 
you handling the string as UTF-16 ?  You say you use 'WideCharToMultiByte()' 
but I cannot tell from this whether it produces text in UTF-16 or not.

Simon.



Re: [sqlite] sqlite3_open() failed with file name contains japanese characters

2011-11-09 Thread Richard Hipp
On Wed, Nov 9, 2011 at 5:48 AM, Simon Slavin  wrote:

>
> On 9 Nov 2011, at 10:17am, Sireesha Jakku wrote:
>
> > Thank you for your quick reply.
> > I tried placing the same file in another location, where the file path
> > contains only English characters, and it worked fine.
>
> Okay, good test.  When you specify a foldername in non-Roman characters,
> are you handling the string as UTF-16 ?  You say you use
> 'WideCharToMultiByte()' but I cannot tell from this whether it produces
> text in UTF-16 or not.
>

If the filename is in UTF16, then you need to use sqlite3_open16().  If you
are using sqlite3_open() or sqlite3_open_v2() then the filename must be in
UTF-8.

In no case is it acceptable to give a filename in MBCS.  You must convert
the filename to UTF8 or UTF16 before passing it to SQLite.
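
The same rule can be demonstrated from Python, whose sqlite3 module hands
the path to SQLite properly encoded rather than in a local code page. The
path below is illustrative, and the sketch assumes a filesystem whose
encoding accepts Unicode names (typically UTF-8).

```python
import os
import sqlite3
import tempfile

# Illustrative: a database path containing Japanese characters.  Python
# encodes the str path for the platform, so SQLite receives a properly
# encoded name instead of an MBCS/code-page string.
folder = os.path.join(tempfile.mkdtemp(), "サーガル")
os.makedirs(folder)
path = os.path.join(folder, "test.sqlite")
con = sqlite3.connect(path)
con.execute("CREATE TABLE t(x)")
con.close()
print(os.path.exists(path))  # True
```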





>
> Simon.



-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] Documentation in PDF

2011-11-09 Thread Konrad J Hambrick

Luuk wrote, On 11/09/2011 05:29 AM:

On 08-11-2011 14:04, Arbol One wrote:

Any one knows where I can get the documentation in
http://www.sqlite.org/download.html, but in PDF format?


The main problem with this is that PDF is very much based on a page layout,
and HTML is not

I tried to 'convert' the website into one large HTML
(it's a simple attempt ;)
see: http://q.x4all.nl/sqlite.html (1.5 Mb)

and converted that HTML to PDF
see: http://q.xs4all.nl/sqlite.pdf (2.6 Mb)
(I used LibreOffice to convert it)


Very nice PDF, Luuk!

One note: the HTML version is at http://q.xs4all.nl/sqlite.html (the link 
above has a typo).

Thank you.

-- kjh


Re: [sqlite] null NULL nUlL

2011-11-09 Thread Luuk
On 09-11-2011 11:15, Simon Slavin wrote:
> Is SQLite case-sensitive when parsing for NULL ?  Does it identify things 
> like 'nUlL' as NULL ?  Can I rely on this behaviour in future versions ?
> 
> Simon.

Keywords should be case-insensitive, and NULL is a keyword

But why this question? I mean why do you want to rely on it?

In many cases it is much clearer to write every keyword in capitals,
avoiding typos

;)

-- 
Luuk


Re: [sqlite] Documentation in PDF

2011-11-09 Thread Luuk
On 08-11-2011 14:04, Arbol One wrote:
> Any one knows where I can get the documentation in
> http://www.sqlite.org/download.html, but in PDF format?
> 
>  
> 
> TIA 
> 


The main problem with this is that PDF is very much based on a page layout,
and HTML is not

I tried to 'convert' the website into one large HTML
(it's a simple attempt ;)
see: http://q.x4all.nl/sqlite.html (1.5 Mb)

and converted that HTML to PDF
see: http://q.xs4all.nl/sqlite.pdf (2.6 Mb)
(I used LibreOffice to convert it)

-- 
Luuk
P.S. It was a 1-time-event...


Re: [sqlite] sqlite3_open() failed with file name contains japanese characters

2011-11-09 Thread Simon Slavin

On 9 Nov 2011, at 10:17am, Sireesha Jakku wrote:

> Thank you for your quick reply.
> I tried placing the same file in another location, where the file path 
> contains only English characters, and it worked fine.

Okay, good test.  When you specify a foldername in non-Roman characters, are 
you handling the string as UTF-16 ?  You say you use 'WideCharToMultiByte()' 
but I cannot tell from this whether it produces text in UTF-16 or not.

Simon.


Re: [sqlite] sqlite3_open() failed with file name contains japanese characters

2011-11-09 Thread Sireesha Jakku
Hi Simon,

Thank you for your quick reply.
I tried placing the same file in another location, where the file path 
contains only English characters, and it worked fine.

-Sireesha

-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of Simon Slavin
Sent: Wednesday, November 09, 2011 3:43 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] sqlite3_open() failed with file name contains japanese 
characters


On 9 Nov 2011, at 9:55am, Sireesha Jakku wrote:

> I am using sqlite3 dll in my application.
> While testing on a Japanese OS (Win-XP), I am getting a problem reading 
> from the sqlite file.
> 
> My filename is something like this: “C:\サーガル\test.sqlite”. When I am passing 
> this to sqlite3_open(), it is failing.

I  am not certain that the Japanese characters are what's causing your problem. 
 Please make yourself a folder, just for testing, with only Roman characters in 
the name.  Then try the same thing with that.  Perhaps you might like to try

C:\test\test.sqlite

Simon.



[sqlite] null NULL nUlL

2011-11-09 Thread Simon Slavin
Is SQLite case-sensitive when parsing for NULL ?  Does it identify things like 
'nUlL' as NULL ?  Can I rely on this behaviour in future versions ?

Simon.


Re: [sqlite] sqlite3_open() failed with file name contains japanese characters

2011-11-09 Thread Simon Slavin

On 9 Nov 2011, at 9:55am, Sireesha Jakku wrote:

> I am using sqlite3 dll in my application.
> While testing on a Japanese OS (Win-XP), I am getting a problem reading 
> from the sqlite file.
> 
> My filename is something like this: “C:\サーガル\test.sqlite”. When I am passing 
> this to sqlite3_open(), it is failing.

I  am not certain that the Japanese characters are what's causing your problem. 
 Please make yourself a folder, just for testing, with only Roman characters in 
the name.  Then try the same thing with that.  Perhaps you might like to try

C:\test\test.sqlite

Simon.


[sqlite] sqlite3_open() failed with file name contains japanese characters

2011-11-09 Thread Sireesha Jakku
Hi,

I am using sqlite3 dll in my application.
While testing on a Japanese OS (Win-XP), I am getting a problem reading from 
the sqlite file.

My filename is something like this: “C:\サーガル\test.sqlite”. When I am passing 
this to sqlite3_open(), it is failing.


1.   I first tried converting my file name to a multi-byte string (using 
WideCharToMultiByte()) and passed it to sqlite3_open(). I got error 1 -> 
SQLITE_ERROR - SQL error or missing database

2.   Then I tried using sqlite3_open16(), still no luck - I am getting  
SQLITE_CANTOPEN error -> Unable to open the database file

Somewhere on the internet, I read that I should include ICU support from: 
http://icu-project.org/home.php

Can anyone help me regarding this?
And if ICU is required, can anyone tell me how I can compile it on Windows? 
(I use VS2005.)

Thank you in advance.

Regards,
Sireesha J
