[sqlite] fts sqlite-3_5_1 ?

2007-10-15 Thread Andre du Plessis
I'm sorry if I am missing something obvious, but I see the precompiled
fts2.dll is no longer available for download alongside 3.5.1. If this
indeed means I am not blind, what does this mean?

 

Thank you.



[sqlite] ATTACH Limit

2007-10-11 Thread Andre du Plessis
I know that the limits page says that the limit is 10 by default; I
would just like to know if the limit is per connection handle or global
for the library?

 

Thanks.



RE: [sqlite] PRAGMA synchronous=0; and crash recovery

2007-10-02 Thread Andre du Plessis
I think everyone has this requirement. I was hitting many slowness
problems with synchronous = ON, so I started tweaking the app with
memory tables; I recommend you look into them. You can create a memory
database by specifying :memory: as the filename.

From the connection you receive you can attach to the file database
(see the ATTACH command), and then select data from one into the other.
For speed, do everything in memory until the critical point where your
process must save the data or changes it made; then write all of the
information from the memory table with a simple INSERT ... SELECT,
clear the memory table, and continue. This also helps because you're
not locking other threads/processes out of the file DB for too long.

If you search the list you can find lots of help on ATTACH and :memory:
as you need, but I'd still leave my file DB as ACID as possible.
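A minimal Python sketch of that flow, using the sqlite3 module (the file path, table name, and data are invented for illustration):

```python
import os
import sqlite3
import tempfile

# Hypothetical file-DB path, just for this sketch
path = os.path.join(tempfile.mkdtemp(), "data.db")

# isolation_level=None = autocommit, so ATTACH is not blocked by an
# implicit open transaction
conn = sqlite3.connect(":memory:", isolation_level=None)

# Do the fast work in the in-memory database...
conn.execute("CREATE TABLE work (id INTEGER, payload TEXT)")
conn.execute("INSERT INTO work VALUES (1, 'hello')")

# ...then, at the critical save point, flush to the file DB via ATTACH
conn.execute("ATTACH DATABASE ? AS filedb", (path,))
conn.execute("CREATE TABLE IF NOT EXISTS filedb.work (id INTEGER, payload TEXT)")
conn.execute("INSERT INTO filedb.work SELECT * FROM work")

# Clear the memory table and continue working
conn.execute("DELETE FROM work")
conn.execute("DETACH DATABASE filedb")
```

The file DB is only locked for the duration of the INSERT ... SELECT, which is the point of the technique.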

-Original Message-
From: Scott Gilbert [mailto:[EMAIL PROTECTED] 
Sent: 01 October 2007 11:01 PM
To: sqlite-users@sqlite.org
Subject: [sqlite] PRAGMA synchronous=0; and crash recovery

A quick introduction:  I'm a software developer in Tucson AZ, and I'm a
new
sqlite user (version 3.4.2).  I'm really very impressed with this
software.
Thank you very much to everyone who has contributed to it!

I have an application that requires separate processes (under Linux) to
concurrently update records in some shared tables (multiple SELECTs and
UPDATEs inside of a transaction).  Unfortunately, when I have PRAGMA
synchronous = 1 or 2, it is too slow for my needs, while synchronous = 0
is plenty fast enough.

It is very likely that I *will* lose power when my application is
running.  When this happens, I can live with losing the last few
minutes of transactions, but a corrupted database file that lost all
data would cause much grief to my users (and therefore me).

So my questions are:

With synchronous = 0, are transactions still atomic across processes?
Is running with synchronous = 0 likely to non-recoverably corrupt the
database file on power failure?
Are there any recovery tools that can restore consistency?

I apologise if this has been answered somewhere else (an hour or two of
googling didn't find anything that specifically covered synchronous = 0).
Thank you in advance for any replies!

Cheers,
-Scott

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



RE: [sqlite] FTS3 where ?

2007-09-21 Thread Andre du Plessis
I am indeed using Delphi, but I have quite a few programs which use the
library I put together for Delphi; it would be quite a large change to
convert to DISQLite3 at this point, plus this same program is also used
from Python and ASP.NET.

Which means that fts3.dll as a standalone would definitely be preferred.

-Original Message-
From: Ralf Junker [mailto:[EMAIL PROTECTED] 
Sent: 21 September 2007 12:50 PM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] FTS3 where ?

Hello Andre du Plessis,

If you are using Delphi, FTS3 is already included in the latest
DISQLite3 (Pro and Personal). Download is available from
http://www.yunqa.de/delphi/.

The source code is available from CVS. You will find FTS3 in the /ext/
directory.

Ralf

>Regarding FTS3, which everyone is talking about: I cannot see any
>mention of it on the download page. Does that mean it's just the
>development source code which people are compiling at this point, or
>are there some prebuilt DLLs available, or has it not been officially
>released yet?






[sqlite] FTS3 where ?

2007-09-21 Thread Andre du Plessis
Regarding FTS3, which everyone is talking about: I cannot see any
mention of it on the download page. Does that mean it's just the
development source code which people are compiling at this point, or are
there some prebuilt DLLs available, or has it not been officially
released yet?

 

Thanks.



[sqlite] ANSI order by

2007-09-07 Thread Andre du Plessis
Sorry if this is actually a noob question: how do I do an ANSI-style
ORDER BY in SQLite?

For example

 

A

b

a

B

 

Would be sorted as

 

A

B

a

b

 

but what you want is

 

a

A

b

B

 

I can do ORDER BY upper(column)

But then things like

 

__new__

 

goes to the bottom when it should go to the top.

 

Thanks.
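For what it's worth, one way to sketch this ordering is SQLite's built-in NOCASE collation, which compares ASCII letters case-insensitively; since the underscore sorts below lowercase letters once A-Z is folded, `__new__` stays at the top. A minimal Python illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (name TEXT)")
conn.executemany("INSERT INTO t VALUES (?)",
                 [("A",), ("b",), ("a",), ("B",), ("__new__",)])

# NOCASE folds A-Z to a-z for comparison, so 'a'/'A' sort together
# (their relative order among equals is unspecified) and '__new__'
# sorts before the letters
rows = [r[0] for r in conn.execute(
    "SELECT name FROM t ORDER BY name COLLATE NOCASE")]
```

This is only a sketch of the desired grouping; NOCASE folds ASCII only, so non-ASCII text would need a custom collation.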



RE: [sqlite] SQLite or MS Access

2007-09-07 Thread Andre du Plessis
Yeah, I have on many more than one occasion lost information in MS
Access when the program is terminated; that is why so many people hate
it and don't want to go near it for any important data.

I have heard, however, that MS Access from 2000 onwards is based on the
SQL Server codebase, so it is more stable, but apparently not designed
to be very durable; once again I could be totally wrong, it's just a
rumour I have heard.

SQL Server, on the other hand, should be ACID by default, and remains
fast even if you commit on each insert. When I say fast, probably about
as fast as SQLite doing the same thing; but when SQLite is batched in
transactions it becomes exponentially faster, whereas for SQL Server and
the other giants the speed is much closer whether you do many inserts in
one transaction or one per transaction.

Then again, you never know how much system and disk resource is taken by
things like SQL Server to help them do faster commits; I'm not sure.

Nothing can get away from the fact that at some point disk buffers have
to be flushed, and that is up to hardware. I think many programmers are
probably oblivious to how ACID their transactions are and have probably
simply not experienced a critical failure where their 'cached' inserts
were never committed. I'm assuming that SQL Server probably keeps open
log files that can obviously be written much faster, with separate
processes/threads that commit those to disk. SQLite does not create
extra processes or threads that I know of, and I think many who are
looking for fast lightweight solutions prefer this.

SQLite does get a lot faster when you turn synchronous off and stop
flushing buffers, but then again, do you want to take that gamble?

-Original Message-
From: Nuno Lucas [mailto:[EMAIL PROTECTED] 
Sent: 07 September 2007 02:46 PM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] SQLite or MS Access

On 9/7/07, Andre du Plessis <[EMAIL PROTECTED]> wrote:
[..]
> What I don't understand is how Access and other DBs are able to still
> operate much faster (maybe not as fast), and still flush file buffers
> to disk. Maybe it really still caches it; I would not be surprised if
> you pull the plug on an MS Access db and end up with missing records
> even if you committed. I could be wrong...

MS Access has several modes of operation, but the default one is not
ACID, so it seems fast.
There are ways to set the transaction mode to be ACID (which I don't
recall what/how they are), which will make the performance suffer
behind any hope, making the comparison more fair.

I don't recall now, but I seem to remember the default mode doesn't
even guarantee the data is on disc when finished using it, which makes
inserting a single record an order of magnitude slower on SQLite
(because it waits for the data to get to the disk controller) than for
Access (which just gives the data to the OS, not caring if it goes to
disk or not).

In a nutshell, benchmarks are not easy...


Regards,
~Nuno Lucas





RE: [sqlite] SQLite or MS Access

2007-09-07 Thread Andre du Plessis
Well, here are my test results (I'm using Delphi for this one).

This is my insert statement:

INSERT INTO TEST (TEST_ID_NO_IDX, NO_INDEX, TEST_DATA, TEST_ID) values
(%d, %s, %s, %d)
This table deliberately has NO index.


1000 inserts took:
Inserting MS Access - 4,043.273 ms
Inserting SQLite - 249.329 ms

In my sample the key is doing the following

BEGIN TRANSACTION
Loop inside transaction:
Do inserts

COMMIT TRANSACTION

I suspect that you are falling into the trap of not doing most of your
work in a transaction. In SQLite that is bad: SQLite tries to be as
durable as possible, so it writes to the journal and flushes file
buffers each time a transaction commits. If you don't specify a
transaction, this will automatically happen on each insert and will
kill your speed.
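A sketch of that pattern with Python's sqlite3 module (table and data invented for illustration; autocommit mode so the BEGIN/COMMIT are explicit):

```python
import sqlite3

# isolation_level=None puts the connection in autocommit mode, so we
# control the transaction boundaries ourselves
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE test (test_id INTEGER, test_data TEXT)")

# One transaction around the whole loop: the journal write and buffer
# flush happen once at COMMIT, instead of once per INSERT
conn.execute("BEGIN")
for i in range(1000):
    conn.execute("INSERT INTO test VALUES (?, ?)", (i, "some data"))
conn.execute("COMMIT")

count = conn.execute("SELECT count(*) FROM test").fetchone()[0]
```

Without the explicit BEGIN/COMMIT, each INSERT would be its own implicit transaction and pay the full sync cost.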


What I don't understand is how Access and other DBs are able to still
operate much faster (maybe not as fast) and still flush file buffers to
disk. Maybe it really still caches the data; I would not be surprised
if, when you pull the plug on an MS Access db, you end up with missing
records even if you committed. I could be wrong...

Used correctly, SQLite should be the fastest there is, obviously for
single-user (desktop DB) style operations rather than multiuser.

I've done inserts at up to 40-50K/sec in SQLite with the right kind of
tweaks.



-Original Message-
From: Michael Martin [mailto:[EMAIL PROTECTED] 
Sent: 07 September 2007 10:06 AM
To: sqlite-users@sqlite.org
Subject: [sqlite] SQLite or MS Access

Hi All,

 

I've done some benchmark tests and I wonder where I've made a mistake.

 

In C# code with SQLite.NET.2.0.1 wrapper with sqlite 3.0:  1000 inserts
in a table of two columns -> 168 seconds

 

In C# code with Jet.Oledb.4.0 with MS Access:  1000 inserts in a table
of two columns ->  1.14 seconds

 

Could someone help me please

 

Thanks in advance

 





RE: [sqlite] SQL approach to Import only new items, delete other items

2007-09-06 Thread Andre du Plessis
Thank you for the suggestion.

I'm about to set up a test, as soon as I have the time, where I create a
test table and populate it with a high number of rows, then perform an
update to a temp table, and then delete with an IN statement.
My concern is this: I don't know how SQLite will execute

Delete from table where col not in (select from large temp dataset)

i.e. how the delete will actually be walked. If it creates a server-side
cursor and walks the values in the IN statement then it will be fine and
fast; if, however, it loads all the values into memory and then walks
the dataset, it would require a large amount of memory.

Your suggestion to drop the table and recreate it from the temp table,
although a good idea, will probably modify the database, which means
that if I version control it, it will create a large number of changes
each time an import is run, even though nothing might have changed.

I guess I could write another framework for versioning the database;
.dump would probably work better, but that requires user intervention.

But thanks for the suggestion, I might be able to implement something
that works.
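A rough Python sketch of the temp-table import being discussed (table and column names are invented for illustration; with a primary key on the MD5 columns, SQLite can satisfy the NOT IN subqueries from an index rather than application memory):

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE imports "
             "(import_id INTEGER PRIMARY KEY, md5 TEXT UNIQUE)")
conn.executemany("INSERT INTO imports (md5) VALUES (?)",
                 [("aaa",), ("bbb",)])

incoming = ["bbb", "ccc"]  # MD5 values seen in this import run

conn.execute("BEGIN")
conn.execute("CREATE TEMP TABLE temp_md5 (md5 TEXT PRIMARY KEY)")
conn.executemany("INSERT INTO temp_md5 VALUES (?)",
                 [(m,) for m in incoming])

# Insert only the records not already present...
conn.execute("INSERT INTO imports (md5) SELECT md5 FROM temp_md5 "
             "WHERE md5 NOT IN (SELECT md5 FROM imports)")
# ...and delete everything that was not part of this import
conn.execute("DELETE FROM imports "
             "WHERE md5 NOT IN (SELECT md5 FROM temp_md5)")
conn.execute("COMMIT")

rows = [r[0] for r in conn.execute("SELECT md5 FROM imports ORDER BY md5")]
```

If nothing changed between runs, this touches no rows at all, which matches the version-control requirement above.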

-Original Message-
From: Tom Briggs [mailto:[EMAIL PROTECTED] 
Sent: 06 September 2007 03:33 PM
To: sqlite-users@sqlite.org
Subject: RE: [sqlite] SQL approach to Import only new items, delete
other items


   Your suggested temp table approach is how I would solve this; if
everything is properly indexed it won't be too bad.  Even if it is bad,
it'll be better than updating columns within the table and then deleting
rows based on that.

   Another potential alternative is to:

   1. Load all new rows into a temp table
   2. Select the old matching rows into a second temp table
   3. Insert all the remaining new rows to that second temp table
   4. Drop the original table and rename the second temp table

   That's likely to be slower on small data sets and faster on larger
datasets, I think.  Depends on how much data is already in the database
vs. the amount of data being loaded.

   -Tom  

> -Original Message-
> From: Andre du Plessis [mailto:[EMAIL PROTECTED] 
> Sent: Thursday, September 06, 2007 5:41 AM
> To: sqlite-users@sqlite.org
> Subject: [sqlite] SQL approach to Import only new items, 
> delete other items
> 
> I'm importing data.
> 
> The data has a unique value, call it MD5 for now, that could be a
> unique value for the data.
> 
>  
> 
> Each record that gets imported is converted to MD5, a lookup 
> is done on
> the table for that MD5, 
> 
> if found it must leave it alone, if not found it must insert a new
> record...
> 
>  
> 
> All the items in the table that was not imported must be deleted...
> 
>  
> 
> The only feasible approach I have is to add a column to the 
> table, like
> UPDATE_FLAG for example, 
> 
> During the import update_flag gets set to 0,
> 
> Once a record is found update_flag gets set to 1,
> 
>  
> 
> At the end of the import all records with update_flag = 0 gets
> deleted...
> 
>  
> 
> However I did not want to add a column to the table, REASON being, I'm
> also version controlling the DB, and when an import occurs and nothing
> 
> Has been added or removed,  I don't want modifications to the 
> DB, as the
> import can run many times over.
> 
>  
> 
> I was considering the following:
> 
> Create a temp table call it,
> 
> Temp_ids for example
> 
> Then insert into the temp table for each record that was found...
> 
>  
> 
> At the end do something like
> 
> Delete from imports where import_id not in (select id from temp_ids)
> 
>  
> 
> But that might be horribly slow and memory expensive remembering that
> the import table may have millions of records..
> 
>  
> 
> What could the people here suggest to me,
> 
>  
> 
> Thanks.
> 
> 





[sqlite] SQL approach to Import only new items, delete other items

2007-09-06 Thread Andre du Plessis
I'm importing data.

The data has a unique value, call it MD5 for now, that could serve as a
unique value for the data.

 

Each record that gets imported is converted to MD5, and a lookup is done
on the table for that MD5:

if found, it must be left alone; if not found, a new record must be
inserted...

 

All the items in the table that were not imported must be deleted...

 

The only feasible approach I have is to add a column to the table, like
UPDATE_FLAG for example:

during the import update_flag gets set to 0,

and once a record is found update_flag gets set to 1.

 

At the end of the import all records with update_flag = 0 get
deleted...

 

However, I did not want to add a column to the table. The reason is that
I'm also version controlling the DB, and when an import runs and nothing

has been added or removed, I don't want modifications to the DB, as the
import can run many times over.

 

I was considering the following:

create a temp table, call it

temp_ids for example,

then insert into the temp table for each record that was found...

 

At the end, do something like

Delete from imports where import_id not in (select id from temp_ids)

 

But that might be horribly slow and memory-expensive, remembering that
the import table may have millions of records...

 

What could the people here suggest to me?

 

Thanks.



RE: [sqlite] ColType lost

2007-09-05 Thread Andre du Plessis
I don't want to send any of the developers on a wild goose chase; the
problem might be somewhere else. Since you suggested
sqlite3_column_decltype(), I'm thinking that it might be somewhere else
in determining the column type initially. I use sqlite3_column_type, so
the SQLITE_NULL may actually be on a different column than the one that
actually has data. I will keep investigating and report back; I might
have to use 

-Original Message-
From: Andre du Plessis [mailto:[EMAIL PROTECTED] 
Sent: 05 September 2007 10:08 AM
To: sqlite-users@sqlite.org
Subject: RE: [sqlite] ColType lost

Should I also try to create a sample database that illustrates the
problem? Creating a new ticket at:

http://www.sqlite.org/cvstrac/tktnew

does not allow me to attach any file. I wonder whether a .dump followed
by an import would fix the problem, since it would reinsert everything
from text; so rather than provide SQL inserts I should attach the
database in binary form.

Or should I just send a mail to this group with a file attached?

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: 04 September 2007 08:11 PM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] ColType lost

"Andre du Plessis" <[EMAIL PROTECTED]> wrote:
> 
> I use the API as always:
> 
> ColType := sqlite3_column_type(Pointer(FHandle), I);
> 
> This has always worked fine for me and type returned was SQLITE_TEXT
> 

If sqlite3_column_type() returns SQLITE_NULL when in fact
the column really contains a text value and not a NULL, then
that is a bug.  You should report it.

Please note, however, that sqlite3_column_type() returns
the actual datatype of the content of a particular row
of a particular column.  It does *not* report on the
declared datatype of a column.  If you are really asking
for the declared datatype of a column, use
sqlite3_column_decltype() instead.
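This per-row behaviour can be seen from SQL with typeof(), which reports the same storage class that sqlite3_column_type() returns at the C level; a quick Python illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (somecolumn TEXT)")
conn.execute("INSERT INTO t VALUES ('hello')")
conn.execute("INSERT INTO t VALUES (NULL)")

# typeof() is per-row: the declared type of the column stays TEXT,
# but the second row's stored value really is a NULL
types = [r[0] for r in conn.execute("SELECT typeof(somecolumn) FROM t")]
```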

--
D. Richard Hipp <[EMAIL PROTECTED]>






RE: [sqlite] ColType lost

2007-09-05 Thread Andre du Plessis
Should I also try to create a sample database that illustrates the
problem? Creating a new ticket at:

http://www.sqlite.org/cvstrac/tktnew

does not allow me to attach any file. I wonder whether a .dump followed
by an import would fix the problem, since it would reinsert everything
from text; so rather than provide SQL inserts I should attach the
database in binary form.

Or should I just send a mail to this group with a file attached?

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: 04 September 2007 08:11 PM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] ColType lost

"Andre du Plessis" <[EMAIL PROTECTED]> wrote:
> 
> I use the API as always:
> 
> ColType := sqlite3_column_type(Pointer(FHandle), I);
> 
> This has always worked fine for me and type returned was SQLITE_TEXT
> 

If sqlite3_column_type() returns SQLITE_NULL when in fact
the column really contains a text value and not a NULL, then
that is a bug.  You should report it.

Please note, however, that sqlite3_column_type() returns
the actual datatype of the content of a particular row
of a particular column.  It does *not* report on the
declared datatype of a column.  If you are really asking
for the declared datatype of a column, use
sqlite3_column_decltype() instead.

--
D. Richard Hipp <[EMAIL PROTECTED]>






[sqlite] ColType lost

2007-09-04 Thread Andre du Plessis
Good Evening.

 

SCHEMA:

 

SOMECOLUMN TEXT

 

I use the API as always:

 

ColType := sqlite3_column_type(Pointer(FHandle), I);

 

 

This has always worked fine for me and type returned was SQLITE_TEXT

 

however lately I noticed that for a particular table this is sometimes
returned to me as SQLITE_NULL, even though I can clearly see there is
text in there,

which has caused me to proceed, even if the type is NULL, to get the
size and load it into a string for now. The problem is just that if the
field is a blob and I am not supposed to read it as text, I would have a
problem...

 

I could use the pragma API on the table to determine the type it was
declared with, but the problem is that with SQL statements I don't
always know what table the column belongs to.

 

Can someone shed some light on this matter, please?



[sqlite] More on Column types

2007-09-04 Thread Andre du Plessis
What is the difference between:

 

MYCOLUMN NUMERIC

 

Vs

 

MYCOLUMN INTEGER

 

Or does it really mean nothing? I remember reading somewhere that the
default column type could be integer; after that it defaults to
whatever you insert.

 

But I was wondering... what would the preferred type be to

 

A: Store Int32 - Int64 values

B: Floating point values.

 

Is there a way to tell SQLite that a select should always return a
column as a type you want? Either in the DB file or as part of the
select.
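As a sketch of how the declared type (column affinity) interacts with what you insert, and how CAST in the SELECT can force the type on the way out, using Python's sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (i INTEGER, r REAL)")

# Text input is coerced by each column's affinity where possible:
# '42' becomes an integer, '3.5' becomes a real
conn.execute("INSERT INTO t VALUES ('42', '3.5')")
stored = conn.execute("SELECT typeof(i), typeof(r) FROM t").fetchone()

# For reads, CAST in the SELECT forces the type regardless of storage
as_int = conn.execute("SELECT CAST(r AS INTEGER) FROM t").fetchone()[0]
```

INTEGER affinity covers the Int32-Int64 case (SQLite integers are 64-bit), and REAL covers floating point; NUMERIC behaves almost identically to INTEGER for whole numbers.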

 

Thank you.



RE: [sqlite] How to generate Unique ID?

2007-08-31 Thread Andre du Plessis
Hi, how about the following:


CREATE TABLE puids (ID INTEGER PRIMARY KEY AUTOINCREMENT)


In Python:

def get_unique_puid():
    # Optional if you already have a transaction open
    _Conn.cursor().execute("BEGIN EXCLUSIVE")
    try:
        _Conn.cursor().execute("INSERT INTO puids (id) VALUES (NULL)")
        puid = _Conn.cursor().execute(
            "SELECT last_insert_rowid()").fetchone()[0]
        _Conn.cursor().execute("COMMIT")
        return puid
    except:
        _Conn.cursor().execute("ROLLBACK")
        raise

-Original Message-
From: B V, Phanisekhar [mailto:[EMAIL PROTECTED] 
Sent: 31 August 2007 08:54 AM
To: sqlite-users@sqlite.org
Subject: RE: [sqlite] How to generate Unique ID?


Assume I have a table 
	Create table T1 (id INTEGER PRIMARY KEY not null, puid INTEGER UNIQUE
not null, format INTEGER not null);

Now some values given below

Id  puid  format
1   8000  123
2   9000  169
3   8001  178
4   8002  165
5   9001  180
6   8003  123

What I wanted was to categorize the format values:
formats 123, 178, 165, 190, 118, 623, 789, and 234, and likewise other
values, are to be grouped into one category.

Similarly, another category holds another set of different formats, and
likewise for many categories.

Now if I want to retrieve all objects of category 1, I can't do where
format = 123 or format = 178 or format = 190 ...

Hence I wanted to categorize them using puid: all those that belong to
category 1 will have puids from 8000-9000, and likewise the others.
That's why I wanted to use some generator which will produce a unique
puid. The problem is that after reaching the max value 9000, I don't
have a method to reuse the puids that have been deleted.

Regards,
Phani







-Original Message-
From: John Stanton [mailto:[EMAIL PROTECTED] 
Sent: Thursday, August 30, 2007 9:00 PM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] How to generate Unique ID?

Why do you have a unique primary key as an integer to hold your other 
unique integer?  Why not just use the unique integer as a primary key?

If you want to have a limit on the maximum unique ID you can store your 
next to allocate and next to replace keys in another table.

B V, Phanisekhar wrote:
> Assume I have a table:
> 
> Create table YYY (id INTEGER PRIMARY KEY, puid UNIQUE INTEGER)
> 
> Id is the primary key.
> 
> Puid is a unique integer, whose values need to be assigned by the
> user.
> 
>  
> 
> Currently my approach is to get the maximum value of puid stored in
> the table, add 1 to it, and use this value as the puid for any new row
> that needs to be added. The problem occurs when I reach the max value.
> Meanwhile, some rows might have been deleted. In that case, when I
> reach the maximum value I want to reuse the puids of the deleted rows
> for new rows that are to be added. Currently SQLite uses some
> algorithm to generate a unique rowid (even when it reaches the limit).
> I want to use the same algorithm here also. I tried to understand the
> algorithm but couldn't. I need a simple way by which I can generate a
> unique puid without writing the algorithm.
>  
> 
>  
> 
> Regards,
> 
> Phani
> 
>  
> 
>  
> 
> 






RE: [sqlite] Date comparison on UTC

2007-08-27 Thread Andre du Plessis

 select date_modified from table where julianday(date_modified) >
julianday(CURRENT_TIMESTAMP) - 1;

This seems to work perfectly. For some reason the function can simply
convert the text successfully; I'm not sure how it's able to do that,
but impressive.

Is there any way to round off the datestamp to the beginning of the
day? Or do you know if there is a list of built-in date functions
somewhere?

Thanks.
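On the rounding question, SQLite's built-in date functions accept modifiers such as 'start of day' and '-1 month' (the full list is documented on the SQLite date/time functions page); a small Python sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# 'start of day' truncates a timestamp to 00:00:00
start = conn.execute(
    "SELECT datetime('2007-08-27 08:58:16', 'start of day')"
).fetchone()[0]

# The same modifiers shift dates, e.g. rows newer than one month:
#   WHERE julianday(date_modified) > julianday('now', '-1 month')
```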

-Original Message-
From: John Stanton [mailto:[EMAIL PROTECTED] 
Sent: 27 August 2007 11:39 AM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Date comparison on UTC

If you use the Sqlite floating point number format (function julianday 
will do the conversion) you can use functions to compare days and dates.

Andre du Plessis wrote:
> I have a UTC date stored in text column in this format:
> 
>  
> 
> 2007-08-27 08:58:16.601000
> 
>  
> 
> I want to do some selects for all that is newer than one day for
> example, or then all that is older than 1 month, etc,
> 
>  
> 
> Any ideas?
> 
>  
> 
> Thanks.
> 
> 






[sqlite] Date comparison on UTC

2007-08-27 Thread Andre du Plessis
I have a UTC date stored in a text column in this format:

 

2007-08-27 08:58:16.601000

 

I want to do some selects for all that is newer than one day for
example, or then all that is older than 1 month, etc,

 

Any ideas?

 

Thanks.



[sqlite] SQLite Build in M$ Visual Studio 250+ warnings

2007-08-23 Thread Andre du Plessis
Just wanted to check with you guys whether my build is actually stable.
In Visual Studio 2005 I get about 250+ warnings when building SQLite; I
can come back to you with more details if needed, I just want to make
sure that seems normal. I've been getting the occasional weird SQLite
error, and I was wondering if my build is a bit buggy. I have added
THREADSAFE to the defines, though, so I don't think it is; I just don't
want to corrupt my DBs.

 

Thanks.



[sqlite] Feedback: Enabling FTS2 in Python + .net or anything else

2007-08-22 Thread Andre du Plessis
Ok, this is how I got FTS working in Python and .NET...

 

In Python 2.5 sqlite3 is automatically included as a builtin. However,
I could not find any method which loads the extension or enables
loading.

I COULD do it in SQL by using:

 

conn.cursor().execute("SELECT load_extension('fts2')")

 

however this gives a 'not authorized' SQL error, so you MUST call
enable_load_extension. That is not available in Python, so I recompiled
sqlite3.dll with the directive SQLITE_ENABLE_LOAD_EXTENSION=1 set.

Notice I had to set this to =1 because this directive is still an older
#ifdef instead of #if defined(), which would otherwise work with just
SQLITE_ENABLE_LOAD_EXTENSION.

 

Now this is enabled by default, and the above can work in Python or
.NET or anything else, I'm guessing, whether it exposes a
load-extension call or not, as long as fts2.dll is present.

 

I obviously understand that for SQLite to enable this by default is a
security issue, which could allow attackers to load malicious DLLs.
Maybe it would be nice if the authors could do the following:

 

Provide a precompiled DLL with extension loading enabled, OR with FTS1
and FTS2 compiled in.

 

What I tried, and what would be nice, is to compile FTS1 and FTS2 into
the DLL so they just work by default without having to load extensions
or anything, by enabling SQLITE_ENABLE_FTS1 and SQLITE_ENABLE_FTS2.

But I could NOT get this to work in M$ Visual Studio; I was getting
weird memory corruption in my DLL when I used it. Maybe because I'm a C
noob? Could be; I may not have done something I was supposed to.

 

Just thought others trying to do the same thing might find this
helpful, or could provide feedback.



RE: [sqlite] FTS2 Question again Python + .NET

2007-08-21 Thread Andre du Plessis
I'm using Finishar; can't remember where I got it or why I chose it, it
came up when I googled :)

It seems quite complete, except for the extensions.

-Original Message-
From: Samuel R. Neff [mailto:[EMAIL PROTECTED] 
Sent: 20 August 2007 06:23 PM
To: sqlite-users@sqlite.org
Subject: RE: [sqlite] FTS2 Question again Python + .NET


"
I'm having the same problem with .net, cant find a function which does
this... 
"

Which .NET wrapper are you using?

System.Data.SQLite has FTS2 precompiled.  http://sqlite.phxsoftware.com

Sam


---
We're Hiring! Seeking a passionate developer to join our team building
Flex
based products. Position is in the Washington D.C. metro area. If
interested
contact [EMAIL PROTECTED]
 
-Original Message-----
From: Andre du Plessis [mailto:[EMAIL PROTECTED] 
Sent: Monday, August 20, 2007 11:50 AM
To: sqlite-users@sqlite.org
Subject: [sqlite] FTS2 Question again Python + .NET







[sqlite] FTS2 Question again Python + .NET

2007-08-20 Thread Andre du Plessis
Schema:

 

create table files (FILE_NAME TEXT, DATA TEXT);

create index idx_file_name on files (FILE_NAME);

create virtual table fts_files using fts2('file_name', 'data');

 

Ok, I just can't seem to figure out how to load fts2.dll in Python:

 

The standard sqlite3 module that comes with Python does not even run a
query:

 

>>> row = conn.cursor().execute("select * from sqlite_master where type
= 'table'").fetchone()

Traceback (most recent call last):

  File "<stdin>", line 1, in <module>

OperationalError: malformed database schema - near "VIRTUAL": syntax
error

 

I tried APSW (Another Python SQLite Wrapper); this one gets further,
but the error remains:

 

>>> conn2.cursor().execute("select file_name from fts_files where data
match 'BEGIN'")

Traceback (most recent call last):

  File "<stdin>", line 1, in <module>

  File "apsw.c", line 4168, in Cursor_execute.sqlite3_prepare_v2

SQLError: SQLError: no such module: fts2

 

 

Neither sqlite3 nor apsw has a function call I can find to set
enable_load_extension, so I tried just loading it in SQL:

 

>>> conn2.cursor().execute("SELECT load_extension('fts2')")

Traceback (most recent call last):

  File "<stdin>", line 1, in <module>

  File "apsw.c", line 3518, in resetcursor

SQLError: SQLError: not authorized

 

Obviously this gives an error; load_extension is off and I can't turn
it on.

 

APSW claims in its documentation that there is a function called
enableloadextension, but for the life of me I can't find it.

 

I'm having the same problem with .NET; I can't find a function which
does this...

 

Just wondering if anyone who has done this in Python or C# .NET can be
of assistance.

 

Thanks.

 

 

 



RE: [sqlite] Dump with where clause

2007-08-17 Thread Andre du Plessis
But then I have to create an actual table in the database?

I suppose I can do the following:
BEGIN;
create table image_temp as select * from file_folder_data;
.dump image_temp
Rollback;

That seems to work as expected

Just another question, though: how can I script this and run it from
the command line, using sqlite3.exe?

Thanks.

-Original Message-
From: Simon Davies [mailto:[EMAIL PROTECTED] 
Sent: 17 August 2007 01:41 PM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Dump with where clause

Updating Igor's suggestion, just remove 'temp' from the line
create temp table image_temp as select * from file_folder_data;

For me .dump then works as expected.
Rgds,
Simon

On 17/08/07, Andre du Plessis <[EMAIL PROTECTED]> wrote:
> Hi Dennis this seems like a good idea, but there is a problem:
>
> I use the following statement:
>
> .mode insert
> select * from FILE_FOLDER_DATA;
>
>
> This is a snippet of what I get
> INSERT INTO table VALUES(1285,4323,2,'╪α','Thumb','JPEG');
> INSERT INTO table VALUES(1286,4324,2,'╪α','Thumb','JPEG');
> INSERT INTO table VALUES(1287,4325,2,'╪α','Thumb','JPEG');
> INSERT INTO table VALUES(1288,4326,2,'╪α','Thumb','JPEG');
>
> As you can see, first of all the insert statement inserts into
> 'table', which is obviously not the right name.
>
> Secondly, the image data should be a hex string, as .dump produces.
>
> .schema file_folder_data
>
> CREATE TABLE FILE_FOLDER_DATA (ID INTEGER PRIMARY KEY AUTOINCREMENT, 
> FOLDER_ID I
> NTEGER, FOLDER_TYPE INTEGER,
>  CONTENT BLOB, CONTENT_NAME TEXT, CONTENT_EXT TEXT);
> CREATE INDEX IDX_FILE_FOLDER_DATA ON FILE_FOLDER_DATA(FOLDER_ID, FOLDER_TYPE 
> ASC
> );
>
>
> Also the .dump with temp table as Igor suggested does not work.
>
> This works:
> .dump file_folder_data
>
> This does NOT
> create temp table image_temp as select * from file_folder_data;
> .dump image_temp
>
> All I get is:
>
> BEGIN TRANSACTION;
> COMMIT;
>
>
>
> -Original Message-
> From: Dennis Cote [mailto:[EMAIL PROTECTED]
> Sent: 16 August 2007 05:49 PM
> To: sqlite-users@sqlite.org
> Subject: Re: [sqlite] Dump with where clause
>
> Andre du Plessis wrote:
> > HI, how can I use .dump or something similar but specify a where clause,
> > I cant see that the .dump command allows this,
> >
> > Without any arguments it seems to dump the whole db, the only argument
> > supported is the table name,
> >
> >
> >
> > I would like to be able to do something like:
> >
> > .dump table1 where ID > 1000
> >
> >
> >
> > I don't have a problem with the INSERT into statements, in fact I think
> > I prefer it because the main idea is to extract parts of the db
> > (revisions),
> >
> > And then to be able to rebuild the db in case of corruption...
> >
> >
> >
> > I know there is also the COPY command in SQL I have not really tried it
> > by the documentation it seems to be able to dump the table in comma or
> > tab delimited, but Preferably I don't want to write too much code to do
> > this.
> >
> >
> >
> >
> Andre,
>
> You can use the insert mode in the shell to do what you want. It will
> format the select output as insert statements.
>
>.mode insert
>select * from table1 where ID > 1000;
>
> This doesn't generate the transaction wrapper, or the table's create
> statement, but you can add those yourself if needed.
>
> HTH
> Dennis Cote
>
> -
> To unsubscribe, send email to [EMAIL PROTECTED]
> -
>
>


RE: [sqlite] Dump with where clause

2007-08-17 Thread Andre du Plessis
Hi Dennis this seems like a good idea, but there is a problem:

I use the following statement:

.mode insert
select * from FILE_FOLDER_DATA;


This is a snippet of what I get
INSERT INTO table VALUES(1285,4323,2,' ╪ α','Thumb','JPEG');
INSERT INTO table VALUES(1286,4324,2,' ╪ α','Thumb','JPEG');
INSERT INTO table VALUES(1287,4325,2,' ╪ α','Thumb','JPEG');
INSERT INTO table VALUES(1288,4326,2,' ╪ α','Thumb','JPEG');

As you can see, first of all, the insert statement inserts into 'table',
which is obviously not the right name.

Secondly, the image data should be a hex string, as .dump produces.

.schema file_folder_data

CREATE TABLE FILE_FOLDER_DATA (ID INTEGER PRIMARY KEY AUTOINCREMENT, FOLDER_ID I
NTEGER, FOLDER_TYPE INTEGER,
  CONTENT BLOB, CONTENT_NAME TEXT, CONTENT_EXT TEXT);
CREATE INDEX IDX_FILE_FOLDER_DATA ON FILE_FOLDER_DATA(FOLDER_ID, FOLDER_TYPE ASC
);


Also the .dump with temp table as Igor suggested does not work.

This works:
.dump file_folder_data 

This does NOT
create temp table image_temp as select * from file_folder_data;
.dump image_temp

All I get is:

BEGIN TRANSACTION;
COMMIT;



-Original Message-
From: Dennis Cote [mailto:[EMAIL PROTECTED] 
Sent: 16 August 2007 05:49 PM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Dump with where clause

Andre du Plessis wrote:
> HI, how can I use .dump or something similar but specify a where clause,
> I cant see that the .dump command allows this,
>
> Without any arguments it seems to dump the whole db, the only argument
> supported is the table name,
>
>  
>
> I would like to be able to do something like:
>
> .dump table1 where ID > 1000
>
>  
>
> I don't have a problem with the INSERT into statements, in fact I think
> I prefer it because the main idea is to extract parts of the db
> (revisions),
>
> And then to be able to rebuild the db in case of corruption...
>
>  
>
> I know there is also the COPY command in SQL I have not really tried it
> by the documentation it seems to be able to dump the table in comma or
> tab delimited, but Preferably I don't want to write too much code to do
> this.
>
>  
>
>   
Andre,

You can use the insert mode in the shell to do what you want. It will 
format the select output as insert statements.

.mode insert
select * from table1 where ID > 1000;

This doesn't generate the transaction wrapper, or the table's create 
statement, but you can add those yourself if needed.

HTH
Dennis Cote




RE: [sqlite] Re: Dump with where clause

2007-08-16 Thread Andre du Plessis
That's a very neat little trick, 
Once again why didn't I think of that :)

Thank you very much.

-Original Message-
From: Igor Tandetnik [mailto:[EMAIL PROTECTED] 
Sent: 16 August 2007 01:52 PM
To: SQLite
Subject: [sqlite] Re: Dump with where clause

Andre du Plessis <[EMAIL PROTECTED]> wrote:
> HI, how can I use .dump or something similar but specify a where
> clause, I cant see that the .dump command allows this,
>
> I would like to be able to do something like:
> 
> .dump table1 where ID > 1000

create temp table tmp as
select * from table1 where ID > 1000;
.dump tmp
drop table tmp;

Igor Tandetnik







[sqlite] DISQLite FTS

2007-08-16 Thread Andre du Plessis
I did not want to hijack the existing FTS thread, but Ralf,

 

Does DISQLite have its own implementation of FTS, so not using FTS2 at
all?

 

Does it use the same mechanism as FTS2 with virtual tables?

 

And have you compared speed and functionality to FTS2,

Maybe I'm actually just looking for some more information on how DISQLite
does things.

 

I guess what it comes down to is knowing the options available; however, I
think the FTS2 project is great, and I hope it continues to grow, as
it can be used on all platforms.

 

I guess what might be a problem is that I would not be able to use
DISQLite's FTS implementation in Python or .NET, for example, or would I?

 

Thanks.

 

 



RE: RE: [sqlite] SQLITE_BUSY error in multi-threaded environment

2007-08-16 Thread Andre du Plessis
If they are different files then you should not have any of these
problems.

-Original Message-
From: RaghavendraK 70574 [mailto:[EMAIL PROTECTED] 
Sent: 16 August 2007 11:21 AM
To: sqlite-users@sqlite.org
Subject: Re: RE: [sqlite] SQLITE_BUSY error in multi-threaded
environment

hi,

Am not clear.
Suppose I have 2 databases and a process spawns 2 threads, and each thread
opens its own db; will it result in any problem? They are independent files.


thx
ragha


**
 This email and its attachments contain confidential information from
HUAWEI, which is intended only for the person or entity whose address is
listed above. Any use of the information contained herein in any way
(including, but not limited to, total or partial disclosure,
reproduction, or dissemination) by persons other than the intended
recipient(s) is prohibited. If you receive this e-mail in error, please
notify the sender by phone or email immediately and delete it!
 

*

- Original Message -
From: Andre du Plessis <[EMAIL PROTECTED]>
Date: Thursday, August 16, 2007 4:36 pm
Subject: RE: [sqlite] SQLITE_BUSY error in multi-threaded environment

> Ok, well I guess I forgot to mention this is what has made me want to
> pull my hair out a few times :) the fact that you have to worry about
> both scenarios for two different reasons. If multiple threads are
> working with the same connection handle, then SQLite will have a better
> understanding of the state of your connection and inform you of busy
> errors better. If you are using different DB handles, what will happen
> is that SQLite may not care that some other thread is busy with another
> handle, and all will work fine until one point: the connection handle
> needs to commit data and enter exclusive mode, so it has to get an
> exclusive lock on the DB file, and no matter that other connections
> have their own handles, if they have any locks on the db, sqlite will
> go into busy-handler mode and eventually time out, depending on how
> long you wait. If a query keeps a read cursor open for some reason,
> inevitably this will result in a "database is locked" error.
> The problem to watch out for is a deadlock, for example:
>
> THREAD1                  THREAD2
> BEGIN                    BEGIN
> INSERT SOME              INSERT SOME
> COMMIT (busy handler)    COMMIT (busy handler)
>
> As you can see, thread1 waits for thread2; they will deadlock, and
> unless you have a limit in your busy handler you will wait forever.
>
> As recommended, BEGIN IMMEDIATE should prevent thread2 from even
> starting a transaction if thread1 did so first; however, I think this
> will only work correctly if the same connection handle is used in
> both, else they still may not know about each other.
>
> So yes, there are two ways to do this. One is to make sure your busy
> handler works properly and then let your applications just try, fail
> on busy, throw the exception, and let the application try again until
> all locks are gone.
> Or two, use a global mutex (IF your application runs in more than one
> process space).
> Or three (use a global critical section - this will be faster) if your
> application is just in one process space.
> Make sure that inserts/queries finish their business, including begin
> and commit transaction, inside the critical section.
>
> If your application ONLY does queries, for example, you should have NO
> problem.
>
> Additionally, if you are using the same DB handle across threads,
> EVERY CALL to the library, no matter what, should be serialized
> (locked in a critical section).
>
> I've used these principles in an implementation that is running fine
> now, so I will stick to this design.
>
> Hope this helps
> 
> -Original Message-
> From: Mark Brown [mailto:[EMAIL PROTECTED] 
> Sent: 15 August 2007 04:34 PM
> To: sqlite-users@sqlite.org
> Subject: RE: [sqlite] SQLITE_BUSY error in multi-threaded environment
> 
> Hi Andre-
> 
> After rereading your post, I wanted to confirm something.  In your
> example
> below, are thread1 and thread2 connected to the same database, or
> different
> databases?  In my scenario, the threads are connected to different
> databases, so I'm not sure if it is the same situation.
> 
> Thanks,
> Mark
> 
> 
> > -Original Message-
> > From: Andre du Plessis [mailto:[EMAIL PROTECTED] 
> > Sent: Wednesday, August 15, 2007 5:05 AM
> > To: sqlite-users@sqlite.org
> > Subject: RE: [sqlite] SQLITE_BUSY error in multi-threaded 
> environment> 
> > 
> > Being a newbie to SQLite I've had the same problems working 
> > wi

RE: [sqlite] SQLITE_BUSY error in multi-threaded environment

2007-08-16 Thread Andre du Plessis
Ok, well I guess I forgot to mention this is what has made me want to
pull my hair out a few times :) the fact that you have to worry about
both scenarios for two different reasons. If multiple threads are
working with the same connection handle, then SQLite will have a better
understanding of the state of your connection and inform you of busy
errors better. If you are using different DB handles, what will happen is
that SQLite may not care that some other thread is busy with another
handle, and all will work fine until one point: the connection handle
needs to commit data and enter exclusive mode, so it has to get an
exclusive lock on the DB file, and no matter that other connections have
their own handles, if they have any locks on the db, sqlite will go into
busy-handler mode and eventually time out,
depending on how long you wait. If a query keeps a read cursor open for
some reason, inevitably this will result in a "database is locked" error.
The problem to watch out for is a deadlock, for example:

THREAD1                  THREAD2
BEGIN                    BEGIN
INSERT SOME              INSERT SOME
COMMIT (busy handler)    COMMIT (busy handler)

As you can see, thread1 waits for thread2; they will deadlock, and unless
you have a limit in your busy handler you will wait forever.

As recommended, BEGIN IMMEDIATE should prevent thread2 from even
starting a transaction if thread1 did so first; however, I think this
will only work correctly if the same connection handle is used in both,
else they still may not know about each other.

So yes, there are two ways to do this. One is to make sure your busy
handler works properly and then let your applications just try, fail on
busy, throw the exception, and let the application try again until
all locks are gone.
Or two, use a global mutex (IF your application runs in more than one
process space).
Or three (use a global critical section - this will be faster) if your
application is just in one process space.
Make sure that inserts/queries finish their business, including begin and
commit transaction, inside the critical section.

If your application ONLY does queries, for example, you should have NO
problem.

Additionally, if you are using the same DB handle across threads, EVERY
CALL to the library, no matter what, should be serialized (locked in a
critical section).

I've used these principles in an implementation that is running fine now,
so I will stick to this design.

Hope this helps
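The BEGIN IMMEDIATE point is easy to see with two connections to the same file. A sketch using Python's stdlib sqlite3 module (file name, table, and timeouts are invented; isolation_level=None simply hands transaction control over to the SQL statements):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
a = sqlite3.connect(path, timeout=0.1, isolation_level=None)
b = sqlite3.connect(path, timeout=0.1, isolation_level=None)
a.execute("CREATE TABLE t (x)")

a.execute("BEGIN IMMEDIATE")          # writer A grabs the RESERVED lock now
a.execute("INSERT INTO t VALUES (1)")
try:
    b.execute("BEGIN IMMEDIATE")      # writer B fails fast, no deadlock
    blocked = False
except sqlite3.OperationalError:      # "database is locked" after the timeout
    blocked = True
a.execute("COMMIT")

b.execute("BEGIN IMMEDIATE")          # now that A is done, B gets the lock
b.execute("ROLLBACK")
```

The second writer errors out immediately instead of waiting until COMMIT time, which is exactly the behavior the post describes.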

-Original Message-
From: Mark Brown [mailto:[EMAIL PROTECTED] 
Sent: 15 August 2007 04:34 PM
To: sqlite-users@sqlite.org
Subject: RE: [sqlite] SQLITE_BUSY error in multi-threaded environment

Hi Andre-

After rereading your post, I wanted to confirm something.  In your
example
below, are thread1 and thread2 connected to the same database, or
different
databases?  In my scenario, the threads are connected to different
databases, so I'm not sure if it is the same situation.

Thanks,
Mark


> -Original Message-
> From: Andre du Plessis [mailto:[EMAIL PROTECTED] 
> Sent: Wednesday, August 15, 2007 5:05 AM
> To: sqlite-users@sqlite.org
> Subject: RE: [sqlite] SQLITE_BUSY error in multi-threaded environment
> 
> 
> Being a newbie to SQLite I've had the same problems working 
> with SQLite
> so maybe I can help, 
> It does not matter how well your database is synchronized, a common
> pitfall I had was that I would have a query object with an open cursor
> which prevents any other statement from committing to the database.
> 
> So for example:
> THREAD1                 THREAD2
> LOCK
> QUERY
> UNLOCK                  LOCK
> (Step through query)    BEGIN TRANSACTION
>                         INSERTS
>                         COMMIT  <- SQLite busy error here
>                         UNLOCK
>  









[sqlite] Dump with where clause

2007-08-16 Thread Andre du Plessis
Hi, how can I use .dump or something similar but specify a where clause?
I can't see that the .dump command allows this.

Without any arguments it seems to dump the whole db, the only argument
supported is the table name,

 

I would like to be able to do something like:

.dump table1 where ID > 1000

 

I don't have a problem with the INSERT into statements, in fact I think
I prefer it because the main idea is to extract parts of the db
(revisions),

And then to be able to rebuild the db in case of corruption...

 

I know there is also the COPY command in SQL. I have not really tried it;
from the documentation it seems to be able to dump the table comma- or
tab-delimited, but preferably I don't want to write too much code to do
this.

 

Thanks.



RE: [sqlite] SQLITE_BUSY error in multi-threaded environment

2007-08-15 Thread Andre du Plessis
Being a newbie to SQLite I've had the same problems working with SQLite
so maybe I can help, 
It does not matter how well your database is synchronized, a common
pitfall I had was that I would have a query object with an open cursor
which prevents any other statement from committing to the database.

So for example:
THREAD1                 THREAD2
LOCK
QUERY
UNLOCK                  LOCK
(Step through query)    BEGIN TRANSACTION
                        INSERTS
                        COMMIT  <- SQLite busy error here
                        UNLOCK
 

As you can see here, even though there are global critical sections
or mutexes that lock completely at a global level, without any other
interference (external connections), the query is busy stepping and has
an open cursor, so a commit or (spillover of) inserts will fail.
In situations where this can be expected, I fetch all data into memory
inside the lock and reset the query (sqlite3_reset releases the cursor
lock), then step through the data in memory.
The other solution you may hear of is to use BEGIN IMMEDIATE before
performing an operation. This will give any thread an immediate error
when trying to begin a transaction at the same level; however, I think
that if you have separate database connections, they might not know this
until they try to get an exclusive lock on the file for committing.

Solution:

THREAD1                    THREAD2
LOCK
QUERY
(Read rows into memory)
sqlite3_reset
UNLOCK                     LOCK
                           BEGIN TRANSACTION
                           INSERTS
                           COMMIT (no error)
                           UNLOCK


Hope this helps. My implementation is running smoothly; it's not as
concurrent as I would like it to be, but because SQLite is so fast you
can lock globally, get in and out as quickly as you can, and you should
still be happy with the speed.

-Original Message-
From: Mark Brown [mailto:[EMAIL PROTECTED] 
Sent: 14 August 2007 10:25 PM
To: sqlite-users@sqlite.org
Subject: [sqlite] SQLITE_BUSY error in multi-threaded environment

Hi-

I've got an application that has many different SQLite databases.  Each
database connection is opened in its own thread.  Each database has only
one
connection.

I created some test cases that create a database and schema on the fly
and
perform various SELECT, INSERTS, UPDATES on it.  The tests execute while
the
rest of the system is running normally.

What I am seeing is that while I only have one database connection to my
test case database, and my operations on this database are done
sequentially, I have seen at random times a return of SQLITE_BUSY on
either
a prepare or execute of a statement.

On a guess, I decided to stop all other database activity going on in
the
system (db activity on different threads on different databases), and so
far, my test cases pass just fine.

What I was wondering is if there is any chance that database activity
into
SQLite from other db connections could somehow influence my db activity
on
my test database in returning a SQLITE_BUSY error.

I'm using SQLite 3.4.1 with the dotlock mechanism for thread protection
on a
vxWorks custom hardware configuration.  With other problems I have had,
they
turned out to be some file i/o method failing due to our custom h/w, so
most
likely this is the problem, but just thought I would ask.

Thanks,
Mark









RE: [sqlite] Problem loading extension in Delphi (FTS2)

2007-08-14 Thread Andre du Plessis
No it is all cdecl, which is what I am seeing in the message? :)

-Original Message-
From: Roberto [mailto:[EMAIL PROTECTED] 
Sent: 13 August 2007 04:42 PM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Problem loading extension in Delphi (FTS2)

Might be a typo, but your declaration defines the calling convention
as 'stdcall' (instead of 'cdecl'), was your sqlite3.dll custom built
with this calling convention? I don't think much of sqlite would work
with stdcall.

On 13/08/07, Andre du Plessis <[EMAIL PROTECTED]> wrote:
>
> sqlite3_enable_load_extension : function (sqlite3_value: Pointer; AOnOff: 
> Integer): Integer; cdecl;
> sqlite3_load_extension : function (sqlite3_value: Pointer; AFileName, 
> AProcedure : PChar;
> var AErrorMsg : PChar): Integer; cdecl;
>
> var
>   Error: PChar;
>   ConnectionHandle: Pointer; // Handle you got from call to sqlite3_open
> begin
> sqlite3_load_extension(ConnectionHandle, 'fts2.dll', nil, Error);
>
> This all worked very well for me.
>
> the problem, as you say, is that the Aducom component does not expose this
> property for you, but as far as I know the source for the component is
> available, so it's a small change for you to expose it.
>
> The problem with DISQLite3 is that it is not free and the sources for the
> component are not available.
>
> Whereas FTS and SQLite are, and there is good documentation for FTS.
>
>
> -Original Message-
> From: Ralf Junker [mailto:[EMAIL PROTECTED]
> Sent: 10 August 2007 03:14 PM
> To: sqlite-users@sqlite.org
> Subject: Re: [sqlite] Problem loading extension in Delphi (FTS2)
>
> Hello Henrik Ræder,
>
> >I'm trying to load the FTS2 extension in Delphi, using the Aducom
> >components. Am really close, but still stuck, and thinking it's a problem
> >with the parameter to sqlite3_enable_load_extension().
>
> DISQLite3 (http://www.yunqa.de/delphi/sqlite3/) does not expose such problems.
>
> Look at the full text search demo project which incorporates both FTS1 and 
> FTS2 into a single *.exe application, with _no_ DLLs or external files needed.
>
> The new customizable tokenizer interface will be demonstrated by a 
> Unicode-aware Pascal tokenizer with the upcoming version of DISQLite3 as soon 
> as the FTS vacuum fix is officially released.
>
> Ralf
>
>
>
>
>
>




RE: [sqlite] Problem loading extension in Delphi (FTS2)

2007-08-13 Thread Andre du Plessis

sqlite3_enable_load_extension : function (sqlite3_value: Pointer; AOnOff: 
Integer): Integer; cdecl;
sqlite3_load_extension : function (sqlite3_value: Pointer; AFileName, 
AProcedure : PChar;
var AErrorMsg : PChar): Integer; cdecl;

var
  Error: PChar;
  ConnectionHandle: Pointer; // Handle you got from call to sqlite3_open
begin
sqlite3_load_extension(ConnectionHandle, 'fts2.dll', nil, Error);

This all worked very well for me.

the problem, as you say, is that the Aducom component does not expose this
property for you, but as far as I know the source for the component is
available, so it's a small change for you to expose it.

The problem with DISQLite3 is that it is not free and the sources for the
component are not available.

Whereas FTS and SQLite are, and there is good documentation for FTS.


-Original Message-
From: Ralf Junker [mailto:[EMAIL PROTECTED] 
Sent: 10 August 2007 03:14 PM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Problem loading extension in Delphi (FTS2)

Hello Henrik Ræder,

>I'm trying to load the FTS2 extension in Delphi, using the Aducom
>components. Am really close, but still stuck, and thinking it's a problem
>with the parameter to sqlite3_enable_load_extension().

DISQLite3 (http://www.yunqa.de/delphi/sqlite3/) does not expose such problems. 

Look at the full text search demo project which incorporates both FTS1 and FTS2 
into a single *.exe application, with _no_ DLLs or external files needed.

The new customizable tokenizer interface will be demonstrated by a 
Unicode-aware Pascal tokenizer with the upcoming version of DISQLite3 as soon 
as the FTS vacuum fix is officially released.

Ralf 







RE: [sqlite] Delete all other distinct rows

2007-08-08 Thread Andre du Plessis
The solution was actually so simple, thanks.

-Original Message-
From: Simon Davies [mailto:[EMAIL PROTECTED] 
Sent: 08 August 2007 12:01 PM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Delete all other distinct rows

Andre,

C:\Joinerysoft\JMS\dev\trunk>sqlite3 tst.db
SQLite version 3.4.0
Enter ".help" for instructions
sqlite> create table tmp( id integer, name text );
sqlite> insert into tmp values( 1, 'some name' );
sqlite> insert into tmp values( 2, 'some name' );
sqlite> insert into tmp values( 3, 'some name' );
sqlite> insert into tmp values( 4, 'another name' );
sqlite> insert into tmp values( 5, 'another name' );
sqlite>
sqlite> delete from tmp where id not in ( select min(id) from tmp
group by name );
sqlite>
sqlite> select * from tmp;
1|some name
4|another name
sqlite>

Rgds,
Simon

On 08/08/07, Andre du Plessis <[EMAIL PROTECTED]> wrote:
> How to delete all other distinct rows except first one.
>
>
>
> If I have a table with rows
>
>
>
> ID, NAME
>
>
>
> 1, SOME NAME
>
> 2, SOME NAME
>
> 3, SOME NAME
>
> 4, ANOTHER NAME
>
> 5, ANOTHER NAME
>
>
>
>
>
> The delete should work even if you don't know what the value of name
is,
> so simply for anything that is duplicate.
>
>
>
> The distinct delete should delete rows 2, 3, 5 and just keep 1 and 4,
is
> there a single SQL statement that can achieve this?
>
>
>
> Thanks
>
>







[sqlite] Delete all other distinct rows

2007-08-08 Thread Andre du Plessis
How do I delete all duplicate rows except the first one?

 

If I have a table with rows

 

ID, NAME

 

1, SOME NAME

2, SOME NAME

3, SOME NAME

4, ANOTHER NAME

5, ANOTHER NAME

 

 

The delete should work even if you don't know what the value of name is,
so simply for anything that is duplicate.

 

The distinct delete should delete rows 2, 3, 5 and just keep 1 and 4, is
there a single SQL statement that can achieve this?

 

Thanks
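Simon Davies' answer earlier in this digest (DELETE ... WHERE id NOT IN (SELECT MIN(id) ... GROUP BY name)) can be checked end to end with Python's stdlib sqlite3 module:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE tmp (id INTEGER, name TEXT);
    INSERT INTO tmp VALUES
        (1, 'SOME NAME'), (2, 'SOME NAME'), (3, 'SOME NAME'),
        (4, 'ANOTHER NAME'), (5, 'ANOTHER NAME');
""")
# Keep the first (minimum) id per name, delete every other duplicate.
con.execute("DELETE FROM tmp WHERE id NOT IN "
            "(SELECT MIN(id) FROM tmp GROUP BY name)")
rows = con.execute("SELECT id, name FROM tmp ORDER BY id").fetchall()
```

Only rows 1 and 4 survive, matching the behavior shown in the reply's shell transcript.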



[sqlite] Enum user defined functions from code

2007-07-17 Thread Andre du Plessis
Hi all 

 

Is there any way to get the list of registered user defined functions
from code or SQL, and their parameters?

 

I need to provide our users with some GUIs to generate SQL, and I would
like to make the user-defined functions available.

I know that, as they are user-defined functions, I should know what they
are, but it's simply a case of twice the work: adding the function and
then coding it into the GUI, versus, if possible, just adding them once
and having the GUI update dynamically (and then obviously the possibility
of the two being out of sync).

 

Thanks.



RE: [sqlite] Re: inner join

2007-07-16 Thread Andre du Plessis
The Id piece works great, thanks! If I want to update 2 fields on table
A, the following syntax doesn't work as expected, as it updates Flag to
'DONE' on all records.
 
update A set Flag = 'DONE',
Id = coalesce(
(select Id from B
 where A.Field1 = B.Field_Temp and B.Price > 0),
Id);
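The all-rows-to-'DONE' behavior happens because the UPDATE has no WHERE clause: coalesce() protects Id on non-matching rows, but Flag is assigned unconditionally. One fix (a sketch with toy tables standing in for A and B) is to restrict the whole statement with WHERE EXISTS, after which the coalesce is no longer needed:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE A (Field1, Id, Flag);
    CREATE TABLE B (Field_Temp, Price, Id);
    INSERT INTO A VALUES ('x', 0, ''), ('y', 0, '');
    INSERT INTO B VALUES ('x', 10, 99);
""")
# WHERE EXISTS limits the update to rows with a matching B row, so the
# non-matching rows keep both their Flag and their Id untouched.
con.execute("""
    UPDATE A SET
        Flag = 'DONE',
        Id = (SELECT B.Id FROM B
              WHERE A.Field1 = B.Field_Temp AND B.Price > 0)
    WHERE EXISTS (SELECT 1 FROM B
                  WHERE A.Field1 = B.Field_Temp AND B.Price > 0)
""")
rows = con.execute("SELECT Field1, Id, Flag FROM A ORDER BY Field1").fetchall()
```

Only the matched row is flagged 'DONE'; the other row is left alone.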

-Original Message-
From: Andre du Plessis 
Sent: 16 July 2007 02:30 PM
To: Gavin McGuinness
Subject: FW: [sqlite] Re: inner join

Here you go
It was answered by the mighty Igor; he's like THE guru, gurus turn to
him for help, so don't question, just follow :)

-Original Message-
From: Igor Tandetnik [mailto:[EMAIL PROTECTED] 
Sent: 16 July 2007 01:49 PM
To: SQLite
Subject: [sqlite] Re: inner join

Andre du Plessis <[EMAIL PROTECTED]> wrote:
> I would like to be able to accomplish the following but don't see any
> support for inner joins on update queries.
> 
> update A
> set Id = B.Id
> from A inner join B
> on A.Field1 = B.Field_Temp
> where B.Price > 0

update A set Id = coalesce(
(select Id from B
 where A.Field1 = B.Field_Temp and B.Price > 0),
Id);

Igor Tandetnik







[sqlite] inner join

2007-07-16 Thread Andre du Plessis
 

I would like to be able to accomplish the following but don't see any
support for inner joins on update queries.

 

update A

set Id = B.Id

from A inner join B

on A.Field1 = B.Field_Temp

where B.Price > 0

 

Is this possible? What is the syntax of this query please?

 

 



[sqlite] Suggestions Add/Remove columns from Table

2007-07-06 Thread Andre du Plessis
I've been reading up a bit and I understand SQLite has limited support
for ALTER TABLE: columns can be added but not dropped.

 

Some suggestions are to create a temp table and copy the data to it drop
the old table create the new one and then select the data back.

But there appears to be more to this. What I decided to do so far is:

 

Lets say the table I am working with is TEMP1

 

BEGIN;

CREATE TEMP TABLE TEMP1_TEMP as SELECT * FROM TEMP1;

DROP TABLE TEMP1;

 

CREATE TABLE TEMP1 (NEW FIELDS);

Now at this point select all the fields that still exist after the
alteration

 

INSERT INTO TEMP1 (FIELD1, FIELD2) SELECT FIELD1, FIELD2 from TEMP1_TEMP

 

But the problem is it appears that sqlite automatically drops associated
triggers and indexes.

You can probably query the index schema by doing a select from
sqlite_master.

When the indexes are determined, do a PRAGMA index_info(idx_temp1..) for
instance to get the associated fields.

If those still exist, recreate the index using the same schema.

 

Maybe the same can be done with the trigger.

 

If the table has a sequence then it appears this is taken care of
automatically in the select?

 

Any foreign key constraints or references to the table in other triggers
should be ok as long as you don't delete any primary key columns?

As the select back and forth from the temp keeps the values of the
primary keys as long as they remain?

 

I'm trying to figure out if I am missing something, or maybe exactly what
others have done.

 

 

Thanks in advance.
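The recipe above (temp copy, drop, recreate, copy back, then reapply the index SQL saved from sqlite_master) can be sketched end to end with Python's stdlib sqlite3 module; the table, column, and index names here are invented for the demo:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE temp1 (field1, field2, field3);
    CREATE INDEX idx_temp1 ON temp1(field1);
    INSERT INTO temp1 VALUES (1, 2, 3);
""")

# Save index definitions before DROP TABLE wipes them from sqlite_master.
index_sql = [row[0] for row in con.execute(
    "SELECT sql FROM sqlite_master"
    " WHERE type = 'index' AND tbl_name = 'temp1' AND sql IS NOT NULL")]

con.executescript("""
    BEGIN;
    CREATE TEMP TABLE temp1_temp AS SELECT * FROM temp1;
    DROP TABLE temp1;
    CREATE TABLE temp1 (field1, field2);  -- field3 dropped
    INSERT INTO temp1 SELECT field1, field2 FROM temp1_temp;
    DROP TABLE temp1_temp;
    COMMIT;
""")

# Recreate the saved indexes; in a real tool you would skip any whose
# columns no longer exist after the alteration.
for sql in index_sql:
    con.execute(sql)
```

Triggers can be carried over the same way by selecting `type = 'trigger'` from sqlite_master before the drop.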



RE: [sqlite] Database Level Unique Sequence

2007-07-03 Thread Andre du Plessis
Thanks to everyone for the suggestions.
I have considered most of these options:

1. I don't really want to create a hash (MD5) or GUID; this will be too
large.
2. I could create a trigger for the table that, as you suggest, gets it
from one sequence table, but I'm not sure what the speed impact would be.

I think I will indeed go with Igor's suggestion, it may indeed offer
some advantages in the future.

So the link table will be
TABLE_ID INTEGER, TABLE_TYPE INTEGER
And the primary key is on those two.

I might, however, need to change this approach: add LINK_ID as a
sequence and rather use a unique index on those fields, as I just always
feel a single primary key in a table is a better design.

But this comes down to one of those things where there are many ways to
do it but finding the BEST way to do it.

As I said, though, generator or sequence support in a future version of
SQLite would be a great addition to an already great project.

Cheers.
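Israel's single-sequence-table pattern below can be sketched end to end with Python's stdlib sqlite3 module, where cursor.lastrowid plays the role of sqlite3_last_insert_rowid():

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE indextable (id INTEGER PRIMARY KEY AUTOINCREMENT, t TEXT);
    CREATE TABLE temptable1 (id INTEGER PRIMARY KEY, info TEXT);
    CREATE TABLE temptable2 (id INTEGER PRIMARY KEY, info TEXT);
""")

def next_id(con):
    # One row in the sequence table per issued id; AUTOINCREMENT guarantees
    # the ids keep climbing even if old rows are deleted later.
    cur = con.execute("INSERT INTO indextable (t) VALUES ('void')")
    return cur.lastrowid

id1 = next_id(con)
con.execute("INSERT INTO temptable1 VALUES (?, 'info1')", (id1,))
id2 = next_id(con)
con.execute("INSERT INTO temptable2 VALUES (?, 'info2')", (id2,))
```

The two rows land in different tables but draw their ids from the one counter, so no id is ever reused across the database.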

-Original Message-
From: Israel Figueroa [mailto:[EMAIL PROTECTED] 
Sent: 03 July 2007 04:06 AM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Database Level Unique Sequence

I think that what you need is a sequence... but sqlite doesn't implement
sequences. A workaround could be to create a table with no data on it,
used only to generate a unique id which can be used to insert data into
the other tables. To know the id you should use the
sqlite3_last_insert_rowid function.


that should look like:

create table indextable (id integer primary key autoincrement, t text);

create table temptable1 (id integer primary key, info text);
create table temptable2 (id integer primary key , info text);

insert into indextable (t) values ('void');
UNIQUE_ID=sqlite3_last_insert_rowid();
insert into temptable1 (id,info) values (UNIQUE_ID,'info1');

insert into indextable (t) values ('void');
UNIQUE_ID=sqlite3_last_insert_rowid();
insert into temptable2 (id,info) values (UNIQUE_ID,'info2');

It doesn't look nice... but it should work. You can add a fourth
statement where you delete the inserted data in indextable... so you
don't have that extra space in the database file. But eventually it can
be used to store some useful data.


2007/7/2, Clark Christensen <[EMAIL PROTECTED]>:
>
> http://www.mail-archive.com/sqlite-users@sqlite.org/msg10803.html
>
> Describes a patch that implements a sequence table, and functions to
deal
> with it.  You could use something like that to implement a
> unique-across-all-tables ID scheme.  Though I think someone else
(Igor?)
> already suggested something similar.
>
> -Clark
>
> ----- Original Message 
> From: Andre du Plessis <[EMAIL PROTECTED]>
> To: sqlite-users@sqlite.org
> Sent: Monday, July 2, 2007 9:36:02 AM
> Subject: [sqlite] Database Level Unique Sequence
> Good day everyone.
>
>
>
> I would like to know how to create an autoincrement field and ensure
> that it is unique across the database. I tested this and it does not
> seem to work:
>
>
>
> c:\Temp>sqlite3 temp.db

>
> SQLite version 3.3.17
>
> Enter ".help" for instructions
>
> sqlite> create table temptable (id integer primary key autoincrement,
> info text)
>
> ;
>
> sqlite> create table temptable2 (id integer primary key autoincrement,
> info text
>
> );
>
> sqlite> insert into temptable (info) values ('info1');
>
> sqlite> insert into temptable2 (info) values ('info2');
>
> sqlite> select * from temptable;
>
> 1|info1
>
> sqlite> select * from temptable2;
>
> 1|info2
>
> sqlite>
>
>
>
> as you can see both have id = 1
>
>
>
> I need this because I need a link table that wont know which table the
> id comes from, and I cant add all the fields to make a compound key as
> some of the values would then be blank.
>
>
>
> Any suggestions is greatly appreciated.
>
>
>
>
>
>
>

> -----------------------------------------------------------------------------
> To unsubscribe, send email to [EMAIL PROTECTED]
> -----------------------------------------------------------------------------
>
>


-- 
Thanks God




RE: [sqlite] Re: Database Level Unique Sequence

2007-07-02 Thread Andre du Plessis
That could work, but it is extra work; I had just hoped it was available,
as it would make life easier and not need extra columns in an already
very large link table, nor extra conditions in the select.

But if there is no other way I'll go this route.

I guess one nice feature they may consider for a future version of
SQLite is generators, as in InterBase, along with the ability to query
and set their values.


But thank you for the suggestion.

-Original Message-
From: Igor Tandetnik [mailto:[EMAIL PROTECTED] 
Sent: 02 July 2007 06:53 PM
To: SQLite
Subject: [sqlite] Re: Database Level Unique Sequence

Andre du Plessis <[EMAIL PROTECTED]> wrote:
> I would like to know how to create an autoincrement field and ensure
> that it is unique across the database. I tested this and it does not
> seem to work:
>
> I need this because I need a link table that won't know which table the
> id comes from, and I can't add all the fields to make a compound key, as
> some of the values would then be blank.

I don't understand this last statement. What again prevents you from 
having a table like this:

create table linkTable (infoId integer, inWhichTable integer, ...);

You can also play arithmetic tricks, like adding a large constant to all
IDs that come from temptable2 when storing them in linkTable.

If you insist on having non-repeating IDs through both tables, you can 
write a set of triggers to achieve that.
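As a quick illustration of the arithmetic trick (a sketch, not code from this thread; the 10^9 offset and the single-column linkTable are arbitrary choices), in Python's sqlite3:

```python
import sqlite3

OFFSET = 1_000_000_000  # arbitrary large constant reserved for temptable2 ids

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE temptable  (id INTEGER PRIMARY KEY AUTOINCREMENT, info TEXT);
    CREATE TABLE temptable2 (id INTEGER PRIMARY KEY AUTOINCREMENT, info TEXT);
    CREATE TABLE linkTable  (infoId INTEGER PRIMARY KEY);
""")

def link_id(table, rowid):
    # Map (table, rowid) to a single integer key for linkTable.
    return rowid + OFFSET if table == "temptable2" else rowid

id1 = conn.execute("INSERT INTO temptable (info) VALUES ('info1')").lastrowid
id2 = conn.execute("INSERT INTO temptable2 (info) VALUES ('info2')").lastrowid

conn.execute("INSERT INTO linkTable VALUES (?)", (link_id("temptable", id1),))
conn.execute("INSERT INTO linkTable VALUES (?)", (link_id("temptable2", id2),))

rows = [r[0] for r in conn.execute("SELECT infoId FROM linkTable ORDER BY infoId")]
print(rows)  # [1, 1000000001]
```

The PRIMARY KEY on linkTable.infoId then rejects any collision outright; the trade-off is that decoding which table an id came from relies on the offset.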

Igor Tandetnik 






[sqlite] Database Level Unique Sequence

2007-07-02 Thread Andre du Plessis
Good day everyone.

 

I would like to know how to create an autoincrement field and ensure
that it is unique across the database. I tested this and it does not
seem to work:

 

c:\Temp>sqlite3 temp.db

SQLite version 3.3.17

Enter ".help" for instructions

sqlite> create table temptable (id integer primary key autoincrement,
info text)

;

sqlite> create table temptable2 (id integer primary key autoincrement,
info text

);

sqlite> insert into temptable (info) values ('info1');

sqlite> insert into temptable2 (info) values ('info2');

sqlite> select * from temptable;

1|info1

sqlite> select * from temptable2;

1|info2

sqlite>

 

as you can see both have id = 1

 

I need this because I need a link table that won't know which table the
id comes from, and I can't add all the fields to make a compound key, as
some of the values would then be blank.

 

Any suggestions is greatly appreciated.



[sqlite] LoadExtentions can't open DB

2007-06-26 Thread Andre du Plessis
I have been testing FTS2 and it is awesome, I must say; I hope the
project will keep going.

I have this problem though:

 

Once load extensions is enabled and fts2 is loaded, I cannot see
anything in the DB anymore when I open it in SQLite Database Browser.

I CAN open it, though; I just can't see anything.

 

It is the application I use to administer the DB.

 

Any idea why or how to get it to work?



[sqlite] Index size

2007-06-25 Thread Andre du Plessis
I have a question on index size. I have a table where, after some
testing, the DB has grown to 250MB, which might not seem too large, as
the table has 4 million records in it, but for this DB the table would
grow a lot larger than that. So I did some tests with a simple table of
3 fields:

FIELD1 is an integer with an autogenerated primary key; the other two
fields are integers. Inserting 1 million records, the DB ended up being
around 12MB, which seems correct; after adding indexes to the other 2
fields the DB has grown to about 27MB. So my question is this:

 

First what makes the indexes so large and second

Is there any way to reduce the size of the indexes?

My sql looks like this:

 

CREATE INDEX IDX_FIELD2 ON TEST(FIELD2 ASC);

 

 

Thanks.
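The growth described above is easy to reproduce: each index is essentially a second b-tree storing every (value, rowid) pair, so two single-column indexes on an integer table can roughly double the file. A small sketch (smaller row count than in the question; file path and row count are illustrative) measuring the file before and after:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "test.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE TEST (FIELD1 INTEGER PRIMARY KEY AUTOINCREMENT,"
             " FIELD2 INTEGER, FIELD3 INTEGER)")
with conn:  # commit the bulk insert
    conn.executemany("INSERT INTO TEST (FIELD2, FIELD3) VALUES (?, ?)",
                     ((i, i * 2) for i in range(100_000)))
size_before = os.path.getsize(path)

with conn:  # add the two secondary indexes and commit
    conn.execute("CREATE INDEX IDX_FIELD2 ON TEST(FIELD2 ASC)")
    conn.execute("CREATE INDEX IDX_FIELD3 ON TEST(FIELD3 ASC)")
size_after = os.path.getsize(path)
print(size_before, size_after)
```

To shrink indexes, the usual options are indexing fewer columns, or running VACUUM to repack pages after bulk changes.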



[sqlite] FTS2 Module

2007-06-25 Thread Andre du Plessis
I saw this post on FTS2, and looked on the site and saw the FTS2 binary,
where can I find more information on this I searched the documentation
and can't find anything, it would be appreciated if someone can point me
to a doc that explains what it is and how to use it.

 

Thank you.
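For anyone finding this thread later: FTS2 has since been superseded by FTS5, which ships enabled in most SQLite builds. A minimal sketch of the same idea (assuming an FTS5-enabled build; the table and query are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# An FTS5 virtual table indexes its text columns for full-text MATCH queries.
conn.execute("CREATE VIRTUAL TABLE docs USING fts5(body)")
conn.executemany("INSERT INTO docs (body) VALUES (?)",
                 [("the quick brown fox",), ("lazy dogs sleep",)])

hits = [r[0] for r in conn.execute(
    "SELECT body FROM docs WHERE docs MATCH 'fox'")]
print(hits)  # only the row containing 'fox'
```

The official documentation for the FTS family now lives on sqlite.org under "FTS5".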



[sqlite] Trigger on Attached Database

2007-06-21 Thread Andre du Plessis
Is it possible to do this:

 

   Open DB1

   Attatch DB2

 

 

In DB1 have a trigger that does

  Insert into DB2. ?

 

 

Theoretically it seems possible, but we couldn't get it to work. Before I
investigate further, I just want to know if it is possible.

 

Thanks.
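For what it's worth, SQLite rejects this at CREATE TRIGGER time: statements inside a trigger body may not use qualified (database-prefixed) table names, so a trigger in DB1 cannot INSERT into an attached DB2 directly. A quick check in Python (file and table names are illustrative):

```python
import os
import sqlite3
import tempfile

tmp = tempfile.mkdtemp()
db2_path = os.path.join(tmp, "db2.sqlite")
c2 = sqlite3.connect(db2_path)
c2.execute("CREATE TABLE log (msg TEXT)")
c2.commit()
c2.close()

conn = sqlite3.connect(":memory:")  # plays the role of DB1
conn.execute("CREATE TABLE events (msg TEXT)")
conn.execute("ATTACH DATABASE ? AS db2", (db2_path,))

try:
    conn.execute("""
        CREATE TRIGGER copy_to_db2 AFTER INSERT ON events
        BEGIN
            INSERT INTO db2.log VALUES (NEW.msg);
        END
    """)
    failed = False
except sqlite3.OperationalError as e:
    failed = True
    print(e)  # complains about qualified table names within triggers

print(failed)
```

A common workaround is to have the trigger write into a staging table in the same database and move the rows across in application code, or to keep both tables in one database file.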



RE: [sqlite] More SQLite Misuse, sorted i think

2007-06-20 Thread Andre du Plessis

In my sample that I supplied, I illustrated how two threads do the
following:

Lock (Global Critical Section)
Queryobject.Prepare (Sqlite3_prepare)
QueryObject.Step (Sqlite3_step)
QueryObject.Reset (Sqlite3_reset)
Unlock
QueryObject.Free;  (Sqlite3_reset (the missing piece of the puzzle))

In the above example the calls to these 3 functions are locked in a
global critical section, so none of them can be executed at the same
time;

however:

The last line of code is what I did not see: I had an object that, when
destroyed, called sqlite3_reset. This is where the problem lay; the
destructor of the object did something as follows:

Destructor
   if FHandle <> nil then begin
      Sqlite3_reset(FHandle);
      Sqlite3_finalize(FHandle);
      FHandle := nil;
   end;

I understand that the call to sqlite3_reset is a bit pointless in the
destructor here, as sqlite3_finalize takes care of all that, but it is
interesting to note that removing sqlite3_reset, OR locking the call to
sqlite3_reset, made it work; locking the call to sqlite3_finalize did
not seem to be necessary and did not produce the SQLITE_MISUSE error.

Hope that is more clear.
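The lock-every-call discipline described above maps directly onto Python's sqlite3 when one connection is shared between threads (check_same_thread=False plus a single global lock; names here are illustrative, not from the thread):

```python
import sqlite3
import threading

conn = sqlite3.connect(":memory:", check_same_thread=False)
conn.execute("CREATE TABLE threads (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO threads VALUES (1, 'main')")
conn.commit()

global_lock = threading.Lock()   # the "global critical section"
results = []

def worker():
    for _ in range(100):
        # Every library call happens inside the critical section.
        with global_lock:
            row = conn.execute(
                "SELECT * FROM threads WHERE id = 1").fetchone()
        results.append(row)

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(results), results[0])
```

With the lock held across the whole prepare/step/reset cycle (which execute/fetchone perform internally), both threads complete all iterations without errors.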

-Original Message-

I'm not sure I completely understand, but anyway... :)




RE: [sqlite] More SQLite Misuse, sorted i think

2007-06-20 Thread Andre du Plessis
Sorry if I created any confusion. There was some code that seemed to
have called sqlite3_reset simultaneously from more than one thread;
even though the statements were unique to each thread, the call into the
library was not locked. I know assumptions are bad, but I thought that
reset on a unique statement should not have to be locked and serialized.
Now I think it might, so every single call to the library gets locked in
a critical section, and it seems to work.

However, finalize worked because it seems that finalize can be called
without synchronizing.

-Original Message-
From: Andre du Plessis [mailto:[EMAIL PROTECTED] 
Sent: 19 June 2007 07:21 PM
To: sqlite-users@sqlite.org
Subject: [sqlite] More SQLite Misuse




[sqlite] More SQLite Misuse

2007-06-19 Thread Andre du Plessis
DLL version

  Sqlite3.3.17

 

The os is windows

 

Following on from my last question about sqlite3_step:

 

I decided to do some more tests with threads. If synchronized properly,
it seems that you can use more than one thread without any problem, as
long as sqlite3_finalize is called. Is this correct?

 

Please note that this is a very simple query being executed :  "select *
from threads where id = 1"

 

Imagine in the following scenarios both threads are executing
simultaneously and will lock on the global critical section (so they are
synchronized)

Using the same DB handle.

 

Scenario 1

 

Each of THREAD1 and THREAD2 (same DB handle, serialized by the global
critical section) runs:

    LockGlobalCriticalSection
    Sqlite3_prepare
    Sqlite3_step      <   THREAD2 gets SQLITE_MISUSE: library routine
                          called out of sequence here
    Sqlite3_reset
    UnLockGlobalCriticalSection

 

// The following code works fine though

 

Each of THREAD1 and THREAD2 (same DB handle, serialized by the global
critical section) runs:

    LockGlobalCriticalSection
    Sqlite3_prepare
    Sqlite3_step
    Sqlite3_finalize
    UnLockGlobalCriticalSection

 

 

If my tests are correct, it is not possible to retain a prepared
statement across threads, and it has to be re-prepared each time?

 

 

 

 

 

 

 

 

 



RE: [sqlite] Step Query

2007-06-19 Thread Andre du Plessis
I had lots of problems here when starting with SQLite, and painstakingly
I think I've figured it out.

You have sqlite3_prepare, which compiles the SQL into byte code, then
sqlite3_step to execute the query or update; if it is an update there is
no row, if it is a query you call step until there are no more rows.

Once done stepping you must call either sqlite3_reset or
sqlite3_finalize. I believe that finalize does a reset and frees the
resources; reset is designed to let you reuse the query or update.

It is important, though, to reset (if you don't finalize), because if
you don't reset you may hold an open lock on the table, and this will
lock out other processes, which will then get a SQLITE_BUSY error;
depending on what the SQL is doing, it may have a cursor that locks the
table.

So your code is fine, but at the end of your rows you don't have to call
finalize; you must call reset. You don't have to call finalize right
away, but maybe on object destruction, to free the resources; after
reset is called you won't hold a lock on the table anymore.

This is how I understand things but would like for someone to tell me if
I'm wrong.
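The open-lock point can be seen from Python's sqlite3 (rollback-journal mode, a sketch under those assumptions): a still-active SELECT statement on one connection holds a shared lock, so another connection's COMMIT fails until the reader is exhausted, i.e. the statement is reset.

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "busy.db")
reader = sqlite3.connect(path)
reader.execute("CREATE TABLE t (x INTEGER)")
reader.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(10)])
reader.commit()

cur = reader.execute("SELECT x FROM t")
cur.fetchone()                      # statement still active: shared lock held

writer = sqlite3.connect(path, timeout=0)
writer.execute("INSERT INTO t VALUES (99)")
try:
    writer.commit()                 # needs an exclusive lock
    blocked = False
except sqlite3.OperationalError:    # "database is locked" == SQLITE_BUSY
    blocked = True

cur.fetchall()                      # exhausting the cursor resets the statement
writer.commit()                     # now the commit goes through
print(blocked)
```

This mirrors the SQLITE_BUSY behaviour described above: the writer only succeeds once the reader's statement has been reset.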


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
anand chugh
Sent: 19 June 2007 07:28 AM
To: sqlite-users@sqlite.org
Subject: [sqlite] Step Query

Hi

I am having code like this:

   rc = sqlite3_prepare(db, zSql, -1, &pStmt, 0);
   if( rc!=SQLITE_OK ){
 return rc;
   }
   sqlite3_bind_text(pStmt, 1, zKey, -1, SQLITE_STATIC);
   sqlite3_bind_blob(pStmt, 2, zBlob, nBlob, SQLITE_STATIC);

   while( sqlite3_step(pStmt)==SQLITE_ROW )
 {
 *pnBlob = sqlite3_column_bytes(pStmt, 0);
 *pzBlob = (unsigned char *)malloc(*pnBlob);
 memcpy(*pzBlob, sqlite3_column_blob(pStmt, 0), *pnBlob);
   }

  sqlite3_finalize(pStmt);

My question here is: do I need to call sqlite3_finalize(pStmt) after
every sqlite3_step() to free all memory allocated by sqlite3_step()?
Does calling finalize at the end free all the memory allocated by all
the step statements?

The example shown at http://www.sqlite.org/cvstrac/wiki?p=BlobExample
does the same; it calls finalize after every step.

My Program shows some Memory Leaks(Virtual Bytes).

Please clarify.

Anand





[sqlite] Journal File Optimization

2007-06-18 Thread Andre du Plessis
How can one optimize the creation of the journal file? The problem is
this: for our system, which is an event-based one, each message needs to
be inserted and committed to the database (guaranteed). This results in
a commit per insert, which was obviously unacceptably slow, and
according to the lists, by design you need to do more bulk transactions.
With some effort I modified it to use a combination of bulk inserts and
temporary tables to do quite a few at a time, but the problem remains
that committing is terribly slow.

 

I'm thinking it is the journal file, the fact that it is created,
written and deleted each time, which is slowing it down. Is there not a
way to create and set up a permanent journal file, pre-allocated to a
certain size, so SQLite does not have to go through the OS each time to
create and delete this file?

 

Along the same lines, is there a way to fix the initial size of the DB,
i.e. set it to pre-allocate 700MB for instance, so that no growing of
the DB file is needed until the space is exceeded? That may also speed
things up.

 

 

Thank you very much in advance.
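For readers of the archive: SQLite later grew exactly this knob. From version 3.6.0, PRAGMA journal_mode=PERSIST keeps the journal file in place between transactions (only its header is zeroed), avoiding the per-commit create/delete. A sketch in Python's sqlite3 (file name and row count are illustrative):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "events.db")
conn = sqlite3.connect(path)
conn.execute("PRAGMA journal_mode=PERSIST")  # reuse one journal file
conn.execute("CREATE TABLE events (msg TEXT)")
conn.commit()

with conn:  # one bulk transaction instead of a commit per insert
    conn.executemany("INSERT INTO events VALUES (?)",
                     (("event %d" % i,) for i in range(1000)))

journal = path + "-journal"
print(os.path.exists(journal))  # the journal survives the commit
```

This pragma appeared after the 3.3.x/3.5.x releases discussed here, so it was not available to the original poster; WAL mode (added in 3.7.0) is the other common answer to this problem today.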