Re: [sqlite] Problem with FTS4 - Floating point error.

2011-10-24 Thread Dan Kennedy

On 10/25/2011 12:49 PM, Mohd Radzi Ibrahim wrote:


On 18-Oct-2011, at 6:52 PM, Dan Kennedy wrote:




On 10/18/2011 05:02 PM, Mohd Radzi Ibrahim wrote:

Hi,

This is my table schema:

CREATE VIRTUAL TABLE LocationFTS using FTS4
(
name text,
address text,
email text,
remark text,
telno text,
contact text,
isDeleted text
);

This select statement gives a floating-point error in both sqlite3.exe and
my program (both my Ubuntu and Windows builds fail):

select docid, name, address, email, remark, telNo, contact
from locationFTS
where locationFTS match 'suai plantation sdn bhd, p.o.box 730 97008 bintulu 
swk, tel 2 07-6637460 ah leng   06-9766061 a3.0';

PharmacyServer.exe!GenericKedaiRuncitDB::GetLocationFTS(LocationRecord& r)  Line 147 + 0x14 bytes  C++
PharmacyServer.exe!wxSQLite3Statement::ExecuteQuery(bool transferStatementOwnership)  Line 1486 + 0xc bytes  C++
PharmacyServer.exe!sqlite3_step(sqlite3_stmt * pStmt)  Line 62029 + 0x9 bytes  C
PharmacyServer.exe!sqlite3Step(Vdbe * p)  Line 61954 + 0x9 bytes  C
PharmacyServer.exe!sqlite3VdbeExec(Vdbe * p)  Line 3856 + 0x30 bytes  C
PharmacyServer.exe!fts3FilterMethod(sqlite3_vtab_cursor * pCursor, int idxNum, const char * idxStr, int nVal, Mem * * apVal)  Line 51378 + 0x9 bytes  C
PharmacyServer.exe!fts3EvalStart(Fts3Cursor * pCsr)  Line 52714 + 0x13 bytes  C
PharmacyServer.exe!fts3EvalSelectDeferred(Fts3Cursor * pCsr, Fts3Expr * pRoot, Fts3TokenAndCost * aTC, int nTC)  Line 52634 + 0x2a bytes  C

Does anybody know what's going on here?


Can you send me the database by email? Not via the list,
as it will strip the attachment.

Dan Kennedy.



Hi,

I've been chasing this bug, tried a couple of options, and found that these
two changes fix my problem:

1. Cap the number of words in the match string at 16.
2. Replace these characters in the match string with spaces:
 case ',':
 case '.':
 case '/':
 case '-':
 case ':':
 case '\'':
 case '"':
 case '(':
 case ')':
 case '\\':
 case '@':

Was it a bug, or a limitation? Or perhaps I missed something in the FTS4
documentation.


This was fixed and then I forgot to follow up here. I should have.
Sorry about that. It's fixed here:

  http://www.sqlite.org/src/ci/3126754c72?sbs=0

Either updating to the latest trunk or just applying the linked
patch to fts3.c should fix your crash.

Dan.



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users




Re: [sqlite] EXT :Re: Concurrent readonly access to a large database.

2011-10-24 Thread Frank Missel
> boun...@sqlite.org] On Behalf Of Bo Peng
> 
> I will do this multiple times, with different conditions (e.g. SELECT
> MAX(c) FROM TABLE_X WHERE b > 1.0) so maintaining the number of rows would
> not help. I intentionally avoided TRIGGERs because of the large amount
> (billions) of data inserted.
> 
> Other than using a SSD to speed up random access, I hope a VACUUM
> operation would copy tables one by one so content of the tables would not
> scatter around the whole database. If this is the case, disk caching
should
> work much better after VACUUM... fingers crossed.

Doing a VACUUM on a 288 GB database is probably going to take some time. I
wonder if it would be better to organize the data before loading it, so that
the records in each of the 5000 tables are stored contiguously. Of course,
that also depends on how much new data will be added to the tables.

Having worked with large databases and reading through this mail thread,
some questions and ideas come to mind:

How many new tables/records will be added per day/month?

Are records divided amongst the 5000 tables based on time, so that new
records will go into new tables rather than be inserted evenly among all
5000?

How many fields are in the tables (I understand the 5000 tables are identical
in structure)?
What type of data is it?

Are there any indexes besides the primary key?
Unless your calculations are always or mostly on the same fields it is
probably best not to have any indexes.

Is there any redundancy in the data, e.g. character values that could be
normalized into separate tables using an integer key reference, thus reducing
the size of the data-carrying tables? Converting field contents to integer
or bigint wherever possible may give improvements in both size and
performance.

Is the primary key an integer, so that it is in fact the rowid, thus saving
some space?
(see http://sqlite.org/lang_createtable.html, the section: "ROWIDs and the
INTEGER PRIMARY KEY".)
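As a quick sketch of that last point, using Python's bundled sqlite3 module: when a column is declared INTEGER PRIMARY KEY it becomes an alias for the rowid, so the key is stored once in the table's B-tree rather than in a separate index.

```python
import sqlite3

# INTEGER PRIMARY KEY is an alias for the rowid, so the key lives in the
# table's B-tree itself rather than in an extra index structure.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE t(id INTEGER PRIMARY KEY, val REAL)")
db.execute("INSERT INTO t(id, val) VALUES (42, 1.5)")
row = db.execute("SELECT rowid, id FROM t").fetchone()
print(row)  # → (42, 42): rowid and id are the same value
```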

If there are several calculations to be done for each table it would perhaps
be better to work with several threads concentrating on a single table at a
time. This would be particularly effective if your cache_size was set large
enough for a table to be contained entirely in memory as the threads could
share the same connection and (as I understand it) memory cache (see
http://www.sqlite.org/sharedcache.html) . Hereby the CPU and memory can be
put to good use rather than working with several independent connections
which do not utilize CPU and memory efficiently but just push the disk heads
around.

One might even consider a different strategy:
Do not use the statistical function of SQLite (count, average, max, etc.)
but access the records individually in your code. This requires that data
are loaded according to the primary key and that the threads accessing them
do so in a turn based fashion, using the modulo function to decide which
thread gets to handle the record, e.g.:

Key   Thread
100   1
102   2
103   3
104   4
105   1
106   2
Etc.

(Sorry if the explanation is too explicit :-).
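A toy sketch of the turn-based assignment above (Python used purely for illustration; the worker count, the key range, and the name `worker_for` are all assumptions, not anything from SQLite):

```python
# Sketch of the modulo-based distribution: records are read in key
# order, and each worker only processes the keys the modulo rule
# assigns to it, so all workers walk the same database pages together.
NUM_WORKERS = 4

def worker_for(key):
    # Map a record key to a worker id in 1..NUM_WORKERS.
    return (key - 1) % NUM_WORKERS + 1

assignments = {key: worker_for(key) for key in range(101, 107)}
print(assignments)  # → {101: 1, 102: 2, 103: 3, 104: 4, 105: 1, 106: 2}
```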

Thus you can have an army of threads using CPU and memory most efficiently
as the threads will be handling the same pages of the database which will be
cached in memory. Again, this requires that the records are physically
stored according to the key.
There is perhaps some overhead incurred by accessing the individual records
rather than letting SQLite do the count, max, etc. However, if there are
several calculations to be done on the same data, it may prove more
efficient this way.

You could even consider having fewer tables, so that the threads can work for
longer without having to switch to the next table.


/Frank Missel



[sqlite] Potential bug: "insert into X select * from Y" ignores the "ON CONFLICT REPLACE" conflict-clause

2011-10-24 Thread Simon L
To reproduce this problem, enter the following 5 SQL statements at the
SQLite command line.

create table X(id INTEGER primary key ON CONFLICT REPLACE);
create table Y(id INTEGER primary key ON CONFLICT REPLACE);
insert into X values (1);
insert into Y select * from X;
insert into Y select * from X;


When I tried to run the last SQL statement twice, SQLite produced the
following error message:

Error: PRIMARY KEY must be unique


Is this a bug? Please advise. Thank you.
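Whether the column's conflict clause should carry through INSERT ... SELECT is for the developers to say, but as a hedged sketch (using Python's bundled sqlite3 module), stating the resolution on the INSERT statement itself avoids the error:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE X(id INTEGER PRIMARY KEY ON CONFLICT REPLACE);
    CREATE TABLE Y(id INTEGER PRIMARY KEY ON CONFLICT REPLACE);
    INSERT INTO X VALUES (1);
""")
# Spelling out the conflict resolution on the statement makes the
# second copy replace the first instead of raising a constraint error.
db.execute("INSERT OR REPLACE INTO Y SELECT * FROM X")
db.execute("INSERT OR REPLACE INTO Y SELECT * FROM X")
count = db.execute("SELECT count(*) FROM Y").fetchone()[0]
print(count)  # → 1
```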


Re: [sqlite] C API docs

2011-10-24 Thread Simon Slavin

On 25 Oct 2011, at 2:40am, Richard Hipp wrote:

> On Mon, Oct 24, 2011 at 9:05 AM, Baruch Burstein wrote:
> 
>> How are the C API documents auto-generated? Which tool is used?
>> I see that they are all in the comments in the code, but couldn't find a
>> tool in the source that is used to extract them and make the links.
> 
> The SQLite website, including all the documentation, is generated using a
> TCL script.  That script pulls information from both hand-written files and
> from the sqlite3.h header file to create the documentation.
> 
> The source code to the documentation can be found here:
> http://www.sqlite.org/docsrc/timeline
> 
> Sadly, the documentation source code is not itself particularly well
> documented.  :-(

What about the source code to Richard Hipp ?  Is that documented ?

Simon.


Re: [sqlite] sqlite3 Performance Linux x86_64, MacOSX Snow Leopard

2011-10-24 Thread Roger Binns

On 24/10/11 13:23, Volker Jahns wrote:
> Any help is greatly appreciated on how to solve this issue.

You have given no way for anyone else to try to repeat the issue.  We
can't see your source code, and it is unlikely your benchmark is the first
to have no bugs whatsoever.  Two SQL scripts, one to populate the
databases with identical content and one to run the query, would help.  It
isn't obvious whether the timings are from running the query once or from a
large number of runs.  If the former, then you really will get something
close to random numbers.

In any event this test is completely bogus in many ways.  You are altering
the processor speed & cores, RAM, operating system, SQLite version, hard
drive and other programs that are running for each one.  They will all
affect the performance of a query.

Unless the Dell machine you are running MacOS on (!?) is using an SSD, its
timing is nonsense and certainly not repeatable, as it is less than the
seek time of even current hard drives - i.e. something is lying about the
data being on disk platters.

Roger



Re: [sqlite] C API docs

2011-10-24 Thread Richard Hipp
On Mon, Oct 24, 2011 at 9:05 AM, Baruch Burstein wrote:

> How are the C API documents auto-generated? Which tool is used?
> I see that they are all in the comments in the code, but couldn't find a
> tool in the source that is used to extract them and make the links.
>

The SQLite website, including all the documentation, is generated using a
TCL script.  That script pulls information from both hand-written files and
from the sqlite3.h header file to create the documentation.

The source code to the documentation can be found here:
http://www.sqlite.org/docsrc/timeline

Sadly, the documentation source code is not itself particularly well
documented.  :-(


>
> --
> Programming today is a race between software engineers striving to build
> bigger and better idiot-proof programs, and the Universe trying to produce
> bigger and better idiots. So far, the Universe is winning.  - Rich Cook
>



-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] Order by creating function is not sorting for input order and alphabet

2011-10-24 Thread David Garfield
Suggestions:

1) Try:

   select cli_id, antok(cli_id) as antok_cli_id .. order by antok_cli_id

   This should at least show you exactly what it thinks it sorted.

2) Don't do regcomp() more than once.  It can be very costly.  And
   cache input.c_str() and input.length() while you are at it.
   (Actually... should it really be a "string" in the first place?)

3) digits[] may be insufficient in some cases (though not in your
   samples).

--David Garfield

ChingChang Hsiao writes:
> 
> The purpose of function antok is that solve the problem of "order by cli_id".
> 
> Function antok converts,
> 
> X86-1  -> x186-01
> X86-2  -> x186-02
> X86-10 -> x186-110
> X86-111 -> x186-2111
> X86- -> x186-3
> 1.1/12-ds3 -> 01.01/112-ds03
> 
> Examples,
> Input cli_id   x86-2, x86-10, x86-1
> 
> Order by cli_id
> X86-1
> X86-10
> X86-2
> 
> Order by antok(cli_id)
> X86-2
> X86-1
> X86-10
> 
> Antok solves the problem of x86-1,x86-10 sorting, but it didn't sort x86-2, 
> x86-1(input order).
> 
> My solution is,
> 
> order by antok(cli_id) asc,cli_id asc
> 
> X86-1
> X86-2
> X86-10
> 
> 
> But It solves the same category, there are still problems for alphabet shown 
> as below.
> 
> 0.1
> 0.2
> 0.3
> 0.4
> 0.5
> x86-1
> x86-2
> chdlc-1
> chdlc-2
> chdlc-3
> x86-10
> 
> It should be,
> 
> 0.1
> 0.2
> 0.3
> 0.4
> 0.5
> chdlc-1
> chdlc-2
> chdlc-3
> x86-1
> x86-2
> x86-10
> 
> Do you know why x86-1,x86-2 jump ahead chdlc-1 when I use "order by 
> antok(cli_id)" or "order by antok(cli_id) asc,cli_id asc"?
> 
> 
> ChingChang
> 
> Here are the source code shown as below.
> 
> int32
> DbHandle::registerOvnFuncs( sqlite3* db ) {
> 
>   // Add a alpha numeric token generator
>   sqlite3_create_function(db, "antok", 1, SQLITE_UTF8, NULL,
>   &dbAnTokenFunc, NULL, NULL);
>   return 0;
> }
> 
> static void
> dbAnTokenFunc( sqlite3_context *context, int argc, sqlite3_value **argv )
> {
>   assert( argc==1 );
>   switch( sqlite3_value_type(argv[0]) ){
>   case SQLITE_INTEGER:
> sqlite3_result_int64( context, sqlite3_value_int64(argv[0]) );
> break;
>   case SQLITE_NULL:
> sqlite3_result_null( context );
> break;
>   case SQLITE_TEXT:
>   default:
> SqlSortHelper sqlSortHelper;
> string token;
> token = sqlSortHelper.GetAlphaNumericOrderToken( 
> (char*)sqlite3_value_text(argv[0]) );
> sqlite3_result_text( context, token.c_str(), token.length(), NULL );
> break;
>   }
> }
> 
> #define POTENTIAL_MATCHES 1
> 
> string SqlSortHelper::GetAlphaNumericOrderToken( string input ) {
>   regex_t re;
>   regmatch_t pmatch[POTENTIAL_MATCHES];
>   string token;
>   uint32 pos = 0;
>   char digits[10] = "012345678";
> 
>   memset( pmatch, -1, sizeof(pmatch) );
>   regcomp( &re, "[0-9]+", REG_EXTENDED);
>   while (pos < input.length())
>   {
> regexec( &re, input.c_str()+pos, 1, pmatch, 0);
> if ( pmatch[0].rm_so != -1 )
> {
> token.append( (input.c_str() + pos), pmatch[0].rm_so);
> token.push_back( digits[((pmatch[0].rm_eo-pmatch[0].rm_so)-1)] );
> token.append( (input.c_str() + pos + pmatch[0].rm_so), 
> pmatch[0].rm_eo - pmatch[0].rm_so );
> pos = pos + pmatch[0].rm_eo;
> }
> else
> {
> if (pos == 0) {
> regfree( &re);
> return input;
> }
> else
> break;
> }
>   }
>   if (pos < input.length())
> token.append( (input.c_str() + pos), input.length()-pos );
>   regfree( &re);
>   return token;
> }
> 



Re: [sqlite] Order by creating function is not sorting for input order and alphabet

2011-10-24 Thread Igor Tandetnik

On 10/24/2011 8:19 PM, ChingChang Hsiao wrote:

   case SQLITE_INTEGER:
 sqlite3_result_int64( context, sqlite3_value_int64(argv[0]) );
 break;
   case SQLITE_NULL:
 sqlite3_result_null( context );
 break;


If you want to just return an argument unchanged, you can use 
sqlite3_result_value. In this case, you can probably do this whenever 
the type is anything other than SQLITE_TEXT.



 string token;
 token = sqlSortHelper.GetAlphaNumericOrderToken( 
(char*)sqlite3_value_text(argv[0]) );
 sqlite3_result_text( context, token.c_str(), token.length(), NULL );


NULL as the last parameter of sqlite3_result_text is the same as 
SQLITE_STATIC, telling SQLite that the string will persist sufficiently 
long beyond the sqlite3_result_text call. That is not the case here - the 
string is destroyed and becomes garbage soon after the sqlite3_result_text 
call. You should be passing SQLITE_TRANSIENT as the last parameter.

--
Igor Tandetnik



[sqlite] Order by creating function is not sorting for input order and alphabet

2011-10-24 Thread ChingChang Hsiao

The purpose of the function antok is to solve the problem of "order by cli_id".

Function antok converts,

X86-1  -> x186-01
X86-2  -> x186-02
X86-10 -> x186-110
X86-111 -> x186-2111
X86- -> x186-3
1.1/12-ds3 -> 01.01/112-ds03
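If it helps to reason about the transform, here is a hedged Python re-implementation of my reading of it (each digit run gets prefixed with its length minus one), registered as an SQL function via Python's bundled sqlite3 module. This is an illustration only, not the original C code, and like the original digits[] table it would break for runs of 10 or more digits.

```python
import re
import sqlite3

def antok(s):
    # Prefix every digit run with (run length - 1), so longer numbers
    # sort after shorter ones under plain string comparison.
    return re.sub(r"[0-9]+",
                  lambda m: str(len(m.group()) - 1) + m.group(), s)

db = sqlite3.connect(":memory:")
db.create_function("antok", 1, antok)
db.execute("CREATE TABLE t(cli_id TEXT)")
db.executemany("INSERT INTO t VALUES (?)",
               [("x86-2",), ("x86-10",), ("x86-1",)])
rows = [r[0] for r in
        db.execute("SELECT cli_id FROM t ORDER BY antok(cli_id)")]
print(rows)  # → ['x86-1', 'x86-2', 'x86-10']
```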

Examples,
Input cli_id   x86-2, x86-10, x86-1

Order by cli_id
X86-1
X86-10
X86-2

Order by antok(cli_id)
X86-2
X86-1
X86-10

Antok solves the problem of sorting x86-1 and x86-10, but it didn't sort
x86-2 and x86-1 (the input order).

My solution is,

order by antok(cli_id) asc,cli_id asc

X86-1
X86-2
X86-10


But it only solves ordering within the same category; there are still problems
with the alphabetic ordering, shown below.

0.1
0.2
0.3
0.4
0.5
x86-1
x86-2
chdlc-1
chdlc-2
chdlc-3
x86-10

It should be,

0.1
0.2
0.3
0.4
0.5
chdlc-1
chdlc-2
chdlc-3
x86-1
x86-2
x86-10

Do you know why x86-1 and x86-2 jump ahead of chdlc-1 when I use "order by 
antok(cli_id)" or "order by antok(cli_id) asc, cli_id asc"?


ChingChang

Here is the source code:

int32
DbHandle::registerOvnFuncs( sqlite3* db ) {

  // Add a alpha numeric token generator
  sqlite3_create_function(db, "antok", 1, SQLITE_UTF8, NULL,
  &dbAnTokenFunc, NULL, NULL);
  return 0;
}

static void
dbAnTokenFunc( sqlite3_context *context, int argc, sqlite3_value **argv )
{
  assert( argc==1 );
  switch( sqlite3_value_type(argv[0]) ){
  case SQLITE_INTEGER:
sqlite3_result_int64( context, sqlite3_value_int64(argv[0]) );
break;
  case SQLITE_NULL:
sqlite3_result_null( context );
break;
  case SQLITE_TEXT:
  default:
SqlSortHelper sqlSortHelper;
string token;
token = sqlSortHelper.GetAlphaNumericOrderToken( 
(char*)sqlite3_value_text(argv[0]) );
sqlite3_result_text( context, token.c_str(), token.length(), NULL );
break;
  }
}

#define POTENTIAL_MATCHES 1

string SqlSortHelper::GetAlphaNumericOrderToken( string input ) {
  regex_t re;
  regmatch_t pmatch[POTENTIAL_MATCHES];
  string token;
  uint32 pos = 0;
  char digits[10] = "012345678";

  memset( pmatch, -1, sizeof(pmatch) );
  regcomp( &re, "[0-9]+", REG_EXTENDED);
  while (pos < input.length())
  {
regexec( &re, input.c_str()+pos, 1, pmatch, 0);
if ( pmatch[0].rm_so != -1 )
{
token.append( (input.c_str() + pos), pmatch[0].rm_so);
token.push_back( digits[((pmatch[0].rm_eo-pmatch[0].rm_so)-1)] );
token.append( (input.c_str() + pos + pmatch[0].rm_so), pmatch[0].rm_eo 
- pmatch[0].rm_so );
pos = pos + pmatch[0].rm_eo;
}
else
{
if (pos == 0) {
regfree( &re);
return input;
}
else
break;
}
  }
  if (pos < input.length())
token.append( (input.c_str() + pos), input.length()-pos );
  regfree( &re);
  return token;
}



Re: [sqlite] sqlite3 Performance Linux x86_64, MacOSX Snow Leopard

2011-10-24 Thread Volker Jahns
Updating timestamp attributes 32-Bit vs. 64-Bit performance

There is an issue concerning the 32-bit vs. 64-bit performance of SQLite
on UPDATE SQL. Ordinary UPDATE statements have been used to update
the timestamp attributes of a database which holds sensor data from
meteorological instruments. An exemplary statement is e.g.
--
update sensordata set dataset_date = '2011-03-26 22:33:21' where
dataset_no = 5284488;
--
In the test case, 1098 different rows of a table were updated
using this SQL statement.

Hardware/software configurations used in the test were

(1) Lenovo S10e Intel(R) Atom(TM) CPU N270   @ 1.60GHz 1.5GB RAM
  Ubuntu 10.04 i686
  sqlite 2.8.17

(2) Dell 745 Intel Core2Duo, 2.13 GHz 4GB RAM
  Snow Leopard x86_64
  sqlite 3.6.12

(3) Scenic Esprimo E5615 Athlon 64 3800+, 4GB RAM
  OpenSuSE 11.4 x86_64
  sqlite 3.7.5


(4) Lenovo T61, Intel Core 2 Duo 2.0 GHz, 2GB RAM
  Ubuntu 10.04, x86_64
  sqlite3 3.6.22

These 4 configurations gave the following performance results (the time in
milliseconds is the average time for one of the update statements).

(1)  12.2 msec (real)
(2)   1.4 msec (real)
(3) 150.0 msec (real)
(4) 117.0 msec (real)

The performance difference between Ubuntu 10.04 i686 on an Atom N270 1.6 GHz 
vs. Ubuntu 10.04 x86_64 on a Core 2 Duo 2.0 GHz is approx. a factor of 10, but 
in favor of the 32-bit netbook.
The performance difference for the same SQL on Mac OS X Snow Leopard x86_64 vs. 
Ubuntu 10.04 x86_64 is approx. a factor of 100. 

Identical databases with identical SQL statements have been used in the 4
configurations. Sqlite has been installed from standard software repositories.

Any help is greatly appreciated on how to solve this issue.
-- 
Volker Jahns, vol...@thalreit.de


Re: [sqlite] C API docs

2011-10-24 Thread Alek Paunov
I do not know the answer, but I am thinking of attempting to extract 
them as a clang+lpeg exercise. Why are you asking...?


On 24.10.2011 16:05, Baruch Burstein wrote:

How are the C API documents auto-generated? Which tool is used?
I see that they are all in the comments in the code, but couldn't find a
tool in the source that is used to extract them and make the links.





Re: [sqlite] SELECT on aggregate

2011-10-24 Thread Pete
Pete
Molly's Revenge 

Thanks Kees, that explains it.



>
> Message: 6
> Date: Sun, 23 Oct 2011 21:02:07 +0200
> From: Kees Nuyt 
> To: sqlite-users@sqlite.org
> Subject: Re: [sqlite] sqlite-users Digest, Vol 46, Issue 23
> Message-ID: 
> Content-Type: text/plain; charset=us-ascii
>
> On Sun, 23 Oct 2011 10:26:14 -0700, Pete 
> wrote:
>
> >Apologies, I omitted what is the real cause of the problem.  This
> simplified
> >SELECT illustrates the error:
> >
> >SELECT sum( colc * cold ) as total from tst where total > 1000
> >
> >The error message is "misuse of aggregate: sum()".  No error if I remove
> the
> >where clause.
>
> A condition on an aggregate is expressed with a HAVING clause, not
> a WHERE clause.
>
> That is because WHERE and HAVING work on different stages of the
> SELECT statement: WHERE decides which rows to include in the
> aggregate, HAVING decides which results to present after
> aggregation.
> --
>  (  Kees Nuyt
>  )
> c[_]
>
>
>
> 
>
>
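A minimal sketch of the distinction, using Python's bundled sqlite3 module. The table and column names follow the thread; the grouping column `grp` and the sample data are added for illustration, since SQLite requires a GROUP BY clause before HAVING:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE tst(grp TEXT, colc REAL, cold REAL)")
db.executemany("INSERT INTO tst VALUES (?, ?, ?)",
               [("a", 10, 200), ("a", 5, 100), ("b", 1, 100)])
# WHERE filters input rows before aggregation;
# HAVING filters the aggregated results afterwards.
rows = db.execute(
    "SELECT grp, sum(colc * cold) AS total FROM tst "
    "GROUP BY grp HAVING total > 1000"
).fetchall()
print(rows)  # → [('a', 2500.0)]
```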


Re: [sqlite] Problem with binding parameters to LIKE

2011-10-24 Thread Navaneeth.K.N
On Sun, Oct 23, 2011 at 2:21 PM, Baruch Burstein  wrote:
> I have done something similar and it worked for me, but there is an issue
> with indexes you should take into account, as discussed here:
> http://sqlite.org:8080/cgi-bin/mailman/private/sqlite-users/2011-July/031470.html

Thanks a lot for that link. SQLite was not using the index in my
case either. So I removed the concatenation in the query and passed a single
parameter with % appended. I can see SQLite uses the index
now. Much better!
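A sketch of the pattern that ended up working, using Python's bundled sqlite3 module (the schema and data are invented for illustration):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE words(word TEXT)")
db.execute("CREATE INDEX words_word ON words(word)")
db.executemany("INSERT INTO words VALUES (?)",
               [("hello",), ("help",), ("world",)])
# Bind a single parameter with the wildcard appended in code, rather
# than concatenating ? || '%' inside the SQL text, so the LIKE prefix
# optimization has a chance to use the index on word.
prefix = "hel"
rows = db.execute(
    "SELECT word FROM words WHERE word LIKE ? ORDER BY word",
    (prefix + "%",)).fetchall()
print(rows)  # → [('hello',), ('help',)]
```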

> .
> Out of curiosity (since this query and it's field names seem very similar to
> one I am using), what are you using this for?

I am developing a text editor for Indic languages. It has some amount
of artificial intelligence built in. I use the scheme described above to
remember words entered into the editor.

> On Sun, Oct 23, 2011 at 7:36 PM, Igor Tandetnik  wrote:
>
> It should. Check the value of the "data" variable - you are probably passing 
> something other than what you think you are. I don't think there is anything 
> wrong with the code you've shown - the problem must lie in the code you haven't.

My bad. I was passing an incorrectly encoded string. Corrected the
encoding and all started working.

> On Sun, Oct 23, 2011 at 4:28 PM, Richard Hipp  wrote:
>
> sqlite3_trace() does, since version 3.6.21 (2009-12-07).  What version of
> SQLite did you say you were using?

I got it working. I was resetting the parameters at the wrong location.
That is why the trace was not showing the parameter value. All
works well.

Thanks for the help.

-- 
-n


Re: [sqlite] C API docs

2011-10-24 Thread Eugene N
Hi,
Perhaps it is the widely popular Doxygen, although that's just a guess.

Eugene

2011/10/24 Baruch Burstein 

> How are the C API documents auto-generated? Which tool is used?
> I see that they are all in the comments in the code, but couldn't find a
> tool in the source that is used to extract them and make the links.
>
> --
> Programming today is a race between software engineers striving to build
> bigger and better idiot-proof programs, and the Universe trying to produce
> bigger and better idiots. So far, the Universe is winning.  - Rich Cook
>


Re: [sqlite] Using modifiers in julianday function

2011-10-24 Thread Dilip Ranganathan
On Mon, Oct 24, 2011 at 11:47 AM, Doug Currie  wrote:

>
> On Oct 24, 2011, at 11:07 AM, Dilip Ranganathan wrote:
>
> > But as you all know, this doesn't work:
> >
> > select datetime(time) from table where time >=
> > julianday(datetime(max(time)),'-2 hour','localtime') order by time desc
>
> Try replacing datetime(max(time)) with (select datetime(max(time)) from
> table)
>
> as in
>
> sqlite> select datetime(time) from table
>   ...> where time >=
>   ...> julianday((select datetime(max(time)) from t),'-2 hour','localtime')
> order by time desc;
> 2011-10-24 15:43:45
> 2011-10-24 15:43:39
> sqlite>
>
>
That worked like a charm. Thanks! I should've just tried this but somehow
the idea of a query going inside a sqlite function didn't seem all that
obvious to me.


Re: [sqlite] Using modifiers in julianday function

2011-10-24 Thread Doug Currie

On Oct 24, 2011, at 11:07 AM, Dilip Ranganathan wrote:

> But as you all know, this doesn't work:
> 
> select datetime(time) from table where time >=
> julianday(datetime(max(time)),'-2 hour','localtime') order by time desc

Try replacing datetime(max(time)) with (select datetime(max(time)) from table)

as in 

sqlite> select datetime(time) from table
   ...> where time >=
   ...> julianday((select datetime(max(time)) from t),'-2 hour','localtime') 
order by time desc;
2011-10-24 15:43:45
2011-10-24 15:43:39
sqlite> 
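For completeness, the suggestion runs like this as a self-contained sketch (Python's bundled sqlite3 module; the table is named t here, and the 'localtime' modifier is dropped so the result does not depend on the machine's time zone):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE t(time REAL)")
db.executemany("INSERT INTO t VALUES (julianday(?))",
               [("2011-10-24 10:00:00",),
                ("2011-10-24 15:43:39",),
                ("2011-10-24 15:43:45",)])
# The aggregate max(time) is not allowed directly in the WHERE clause,
# but a scalar subquery computing it is.
rows = db.execute(
    "SELECT datetime(time) FROM t "
    "WHERE time >= julianday((SELECT datetime(max(time)) FROM t),"
    "                        '-2 hour') "
    "ORDER BY time DESC"
).fetchall()
print(rows)
```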


e



Re: [sqlite] EXT :Re: Concurrent readonly access to a large database.

2011-10-24 Thread Simon Slavin

On 24 Oct 2011, at 4:13pm, Bo Peng wrote:

>> Can I ask which file-system you were using on the SSD drive when you
>> obtained this result?
> 
> It is ext4 on a 512G SSD on a Ubuntu system.

Wow.  I don't know what hard disk hardware or driver you were using originally, 
but it sucks.  Even for data scattered all over a 288 GB file, that's bad.  
Linux itself is quite efficient at file handling, so I'm guessing it's your 
storage device.

Alternatively, you have an incredibly good SSD drive.

Simon.


[sqlite] malformed database schema - no such table

2011-10-24 Thread aramati
Hello @all!

I have a problem which I cannot solve:

When trying to run a statement on my running db via JDBC I get the following
error:

java.sql.SQLException: malformed database schema (B_TEST_IDX) - no such
table: main.BREATHS
at org.sqlite.DB.throwex(DB.java:288)
at org.sqlite.NativeDB.prepare(Native Method)
at org.sqlite.DB.prepare(DB.java:114)
at org.sqlite.Stmt.executeQuery(Stmt.java:89)
at eu.aceos.aeroscan.db.DBDAO.getPersons(DBDAO.java:74)
at eu.aceos.aeroscan.db.DBDAO.main(DBDAO.java:258)
Exception in thread "main" java.lang.NullPointerException
at eu.aceos.aeroscan.db.DBDAO.getPersons(DBDAO.java:83)
at eu.aceos.aeroscan.db.DBDAO.main(DBDAO.java:258)

The table has nothing to do with my request, as you will see:

public class DBDAO implements DBInterface {

private static Connection connection = null;

// filename is currently the path + filename, currently
// C:\\develop\\aeroscan.sqlite
public static Connection connect(String filename) {

if (connection == null) {

String url = "jdbc:sqlite:"
   // +  SettingsManager.getInstance().getDBDirectory()
+ filename;

try {
Class.forName("org.sqlite.JDBC"); // load the driver
} catch (Exception e) {
   // todo: log that the driver could not be loaded
return null;
}

// open the database connection (using the constants
// defined above)
try {
// todo: add the user and password to the url!
connection = DriverManager.getConnection(url);

} catch (SQLException e) {

   e.printStackTrace();
   return null;
}
}
return connection;
}



public ArrayList getPersons() {
ResultSet rs = null;
// an ordinary SQL statement is built as a string
String query = "SELECT * FROM TEST_PERSON;";
  // run the query and print a message on error
try {
Statement stmt = connection.createStatement();
rs = stmt.executeQuery(query);  // run the query
} catch (SQLException e) {
   e.printStackTrace();
} catch (NullPointerException e) {
e.printStackTrace();
}


try {
while (rs.next()) {
System.out.println(rs.getString("NAME")); // NAME is the column name
}
} catch (SQLException e) {
e.printStackTrace();
}

return new ArrayList();
}

}

When I drop the table BREATHS, which actually exists, nothing changes.
All that happens is that a cascade of these SQL errors appears, moving on to
the next table, then the next, ...
What is wrong, here?

Thanks in advance,

Tamara
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] EXT :Re: Concurrent readonly access to a large database.

2011-10-24 Thread Bo Peng
> Can I ask which file-system you were using on the SSD drive when you
> obtained this result?

It is ext4 on a 512G SSD on a Ubuntu system.

Bo


[sqlite] Using modifiers in julianday function

2011-10-24 Thread Dilip Ranganathan
Lets say I have a column named 'time' that stores timestamps as juliandays.
Suppose I only want to extract rows for the past 2 hours from now, I could
do:

select datetime(time) from table where time >= julianday('now','-2 hour','localtime') order by time desc

That works as expected. However what if I want to go 2 hours backwards from
a *specific* julianday value? say, 2 hours from the most recent datetime
value recorded in the table. This gets me the latest timestamped record:

select datetime(max(time)) from table

But as you all know, this doesn't work:

select datetime(time) from table where time >= julianday(datetime(max(time)),'-2 hour','localtime') order by time desc

I am sure I am missing something here. Any suggestions?
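The usual catch here is that an aggregate like max() cannot be used directly in a WHERE clause; it has to move into a scalar subquery. Since SQLite's date functions accept a julian day number as their time-value, the modifier can then be applied to that value directly. A minimal sketch using Python's sqlite3 module (the table name `log` and the sample rows are made up for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE log(time REAL)")  # 'time' holds julian day numbers
# Three sample rows: now, one hour ago, three hours ago.
con.execute("INSERT INTO log VALUES (julianday('now'))")
con.execute("INSERT INTO log VALUES (julianday('now','-1 hour'))")
con.execute("INSERT INTO log VALUES (julianday('now','-3 hours'))")

# max(time) moves into a scalar subquery; julianday() accepts the stored
# julian day number as its time-value, so the '-2 hours' modifier applies.
rows = con.execute(
    "SELECT datetime(time) FROM log "
    "WHERE time >= (SELECT julianday(max(time), '-2 hours') FROM log) "
    "ORDER BY time DESC"
).fetchall()
print(len(rows))  # only the rows within two hours of the newest one
```

Note that 'localtime' is left out above: the stored values are already on one consistent timeline, and applying 'localtime' to them would shift them by the UTC offset a second time.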


Re: [sqlite] EXT :Re: Concurrent readonly access to a large database.

2011-10-24 Thread Dan Kennedy

On 10/24/2011 09:20 PM, Bo Peng wrote:

>>> Other than using a SSD to speed up random access, I hope a VACUUM
>>> operation would copy tables one by one so content of the tables would
>>> not scatter around the whole database. If this is the case, disk
>>> caching should work much better after VACUUM... fingers crossed.
>>
>> VACUUM will defragment, too.  Unless your free space is fragmented.  So yes,
>> probably a good move.
>
> Dear all,
>
> I moved the database to another (faster) machine with a SSD and a
> regular HD. The performance of the query improved dramatically on the
> SSD drive. More specifically, the time to sequentially execute 'select
> count(*) from table_XX' on two tables took 17s instead of 7m, and
> running the query concurrently on two and four tables took the same 9s
> instead of 5m and 13m before. This firmly proved that random disk
> access speed was the bottleneck.


Can I ask which file-system you were using on the SSD drive when you
obtained this result?

Thanks,
Dan.


Re: [sqlite] EXT :Re: Concurrent readonly access to a large database.

2011-10-24 Thread Bo Peng
>> Other than using a SSD to speed up random access, I hope a VACUUM
>> operation would copy tables one by one so content of the tables would
>> not scatter around the whole database. If this is the case, disk
>> caching should work much better after VACUUM... fingers crossed.
>
> VACUUM will defragment, too.  Unless your free space is fragmented.  So yes, 
> probably a good move.

Dear all,

I moved the database to another (faster) machine with a SSD and a
regular HD. The performance of the query improved dramatically on the
SSD drive. More specifically, the time to sequentially execute 'select
count(*) from table_XX' on two tables took 17s instead of 7m, and
running the query concurrently on two and four tables took the same 9s
instead of 5m and 13m before. This firmly proved that random disk
access speed was the bottleneck.

Because the vacuum operation does not provide a progress bar, I wrote
a script to copy tables one by one to another database on the regular
HD. The time to count the number of rows for a table decreased from
about 4m to within a minute (53s). Disk access is still a bottleneck
but this is already far better than before.
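The copy script itself is not shown, but the approach is straightforward to sketch with Python's sqlite3 module and ATTACH (the file names are placeholders, and note that CREATE TABLE ... AS SELECT copies only the data, not indexes or constraints):

```python
import sqlite3

def copy_tables(src_path, dst_path):
    """Copy every table from src_path into a fresh database at dst_path,
    one table at a time, so each table's pages end up contiguous."""
    con = sqlite3.connect(src_path)
    con.execute("ATTACH DATABASE ? AS dst", (dst_path,))
    names = [row[0] for row in con.execute(
        "SELECT name FROM sqlite_master "
        "WHERE type = 'table' AND name NOT LIKE 'sqlite_%'")]
    for i, name in enumerate(names, start=1):
        con.execute('CREATE TABLE dst."%s" AS SELECT * FROM main."%s"'
                    % (name, name))
        con.commit()
        print("%d/%d: %s" % (i, len(names), name))  # poor man's progress bar
    con.close()
```

Committing after each table keeps the per-transaction memory bounded and makes the printed table count a usable progress indicator, which plain VACUUM does not offer.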

In summary, the problem with my database was that, because all tables
were created at the same time and filled evenly, records of the tables
were spread all over the 288G database. It was very slow to read
pieces of a table from a regular HD to run a query. Copying the
database to a SSD driver with much faster random access speed, or
copying all tables one by one to a new database dramatically improved
the query performance.

Many thanks again for all the help from the list,
Bo


Re: [sqlite] How is the amalgamation produced?

2011-10-24 Thread Dan Kennedy

On 10/24/2011 08:32 PM, Baruch Burstein wrote:

I found what seem to be two options for producing an amalgamation:
1. "./configure" and "make sqlite3.c" (something like that, it is not in
front of me at the moment, but I have done this and know it works)
2. There is a Tcl script in the tool directory mksqlite3c.tcl that seems to
do this too (I didn't test this)

How is the official amalgamation produced?


Using "make clean sqlite3.c". Using a version of Makefile.linux-gcc
slightly customized for the local machine (in order to find the local
tcl installation).

The makefile invokes the mksqlite3c.tcl script.






[sqlite] How is the amalgamation produced?

2011-10-24 Thread Baruch Burstein
I found what seem to be two options for producing an amalgamation:
1. "./configure" and "make sqlite3.c" (something like that, it is not in
front of me at the moment, but I have done this and know it works)
2. There is a Tcl script in the tool directory mksqlite3c.tcl that seems to
do this too (I didn't test this)

How is the official amalgamation produced?

-- 
Programming today is a race between software engineers striving to build
bigger and better idiot-proof programs, and the Universe trying to produce
bigger and better idiots. So far, the Universe is winning.  - Rich Cook


[sqlite] C API docs

2011-10-24 Thread Baruch Burstein
How are the C API documents auto-generated? Which tool is used?
I see that they are all in the comments in the code, but couldn't find a
tool in the source that is used to extract them and make the links.

-- 
Programming today is a race between software engineers striving to build
bigger and better idiot-proof programs, and the Universe trying to produce
bigger and better idiots. So far, the Universe is winning.  - Rich Cook