Re: [sqlite] NOP INSERT still writes to the DB/journal

2014-05-06 Thread Christian Smith
On Mon, May 05, 2014 at 05:00:08PM -0400, Richard Hipp wrote:
> On Mon, May 5, 2014 at 4:53 PM, Patrick Donnelly wrote:
> 
> > Hi,
> >
> > I have an INSERT that looks like
> >
> > INSERT INTO T
> > SELECT ...
> >
> > which I'm running numerous times a second that generally does nothing
> > because the SELECT returns no rows. Unfortunately, I've found that
> > SQLite still does numerous disk writes anyway in this situation.
> >
> 
> I'm unable to reproduce this behavior.  Here is my test script:
> 
> 
> No writes.
> 

I suspect the culprit is file access time updates. Using something
like the relatime mount option under Linux would eliminate atime updates
for files that have not been modified.

Christian
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Any interest for open source multi-user 'SQLite database server' application?

2014-04-18 Thread Christian Smith
On Tue, Apr 01, 2014 at 01:08:59PM +, Harmen de Jong - CoachR Group B.V. 
wrote:
> We have built our own SQLite database server application and are considering 
> making this open source. Since there will be some time involved in making 
> it an open source project and maintaining it, we would first like to take an 
> inventory to find out if there is any interest in this server application. 
> 
> 
> ==> Advantages and features of this server application:
> 
> > * Semi-automatically takes care of concurrent writes (1*).
> > * Very stable server application (eliminates long startup times in case of 
> > client application crash with large DBs).
> > * Database connections are cached.
> > * Multiple parameter sets to prepare a query and execute it multiple times 
> > with a different parameter set.
> > * The result of a previous select query can be used in the next query.
> > * Auto-checkpointing is disabled and manual checkpointing is done in a 
> > separate thread, preventing transaction commits (read and write) from being 
> > occasionally slowed down by a checkpoint event.
> > * Built-in LZMA2 compressed and SHA256 encrypted backup and restore.
> > * Built-in asynchronous 'continuous data protection' (real-time backup) to 
> > secondary location (4*).
> > * Built-in slow query log and email notifications (2* 3*).
> > * Built-in query error log and email notifications (3*).
> 
> 1.) A programmer can mark a transaction to be splittable and set splitting 
> points. If set, the server application will automatically split write queries 
> that take longer than the time set in milliseconds, thus allowing other write 
> queries to be executed in between.


Is this level of complexity worth it? If you really have transactions so long 
that you need this level of control, perhaps a more concurrent database server 
would suit you better?


> 2.) The application can automatically log slow queries (that exceed the set 
> maximum query execution time) and send notifications by email. Maximum query 
> execution time can be set separately for read and write queries.

That's a good plan. I like this idea.


> 3.) All logging is done into separate databases, to prevent eating up costly 
> writing time from the main database.
> 4.) When a full backup has been made successfully, the real-time backup is 
> cleared. 

How much overhead does this add? Is this a high availability feature?


> 
> 
> ==> Cons:
> 
> * It has been built as a Windows service and therefore only runs on Microsoft 
> platforms

Boo. I'm sure we can "fix" that.


> * Transactions that cannot be split will block all writers.


For a typical SQLite use case, this is probably not a problem, else you'd be 
using a more scalable database.


> 
> 
> ==> How it works:
> 
> Clients can communicate with this server over TCP/IP sockets. Queries are 
> submitted and returned in XML format (BLOB results are returned in binary 
> format to prevent CPU intensive encoding and decoding). The server 
> application is written in native Visual C++ (without using MFC). If we make 
> this project open source, we will also include a client example.


I dare say that parsing and encoding XML would be more processor intensive than 
BLOB encoding.


> 
> A few examples of the XML the client sends to perform a query:
> 
> [XML request examples elided: the element markup was stripped by the list 
> archive. The surviving fragments show parameterized queries using named 
> parameters such as @MinClickTime.]


What form does a result set take? Is it all enclosed in an XML element, or do 
you get a cursor "object" returned and step through the result set one or more 
rows at a time? I ask because the former might not be scalable for large 
result sets. Does it handle multiple result sets per query?


> 
> [Second XML example elided: the element markup was stripped by the list 
> archive; it performed an insert followed by a select returning the new 
> row's ID.]
> 
> Because the result of the last select query is returned to the client, the 
> client can read the new ID from the result.
> 
> 
> Please let us know if you would be interested in this project!
> 

Would certainly be curious.

IMO, you'd also increase your potential target audience if you could also 
provide reasonably functional JDBC, ADO.NET, PHP and/or Python database 
drivers. Providing any of these would allow existing users to plug your new 
database into existing applications with a minimum of fuss. Personally, I 
don't like the idea of XML as the protocol, largely because of the parsing 
overhead, and have been looking at a similar server based on RPC, though for 
debugging purposes XML would be great.

Regards,
Christian
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite on flash (was: [PATCH 00/16] f2fs: introduce flash-friendly file system)

2012-10-12 Thread Christian Smith
On Wed, Oct 10, 2012 at 08:47:02AM -0400, Richard Hipp wrote:
> [snip]
> 
> We would also love to have guidance on alternative techniques for obtaining
> memory shared across multiple processes that does not involve mmap() of
> temporary files.

shmget/shmat - Part of the SysV IPC primitives and portable to most (all?) 
UNIX-alikes.

shm_open - Part of the POSIX realtime extensions.
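
For illustration, a minimal sketch of the shm_open route (the region name
"/myapp-shm", size and error handling are placeholders; link with -lrt on
older glibc):

#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Obtain memory shared across processes without mmap() of a temp file. */
void *get_shared_region(size_t len)
{
    int fd = shm_open("/myapp-shm", O_CREAT | O_RDWR, 0600);
    if (fd < 0) return 0;
    if (ftruncate(fd, (off_t)len) != 0) { close(fd); return 0; }
    void *p = mmap(0, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd); /* the mapping stays valid after the descriptor is closed */
    return p == MAP_FAILED ? 0 : p;
}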

Regards,
Christian
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Windows I/O (was: Initial read speed greater than subsequent)

2012-08-01 Thread Christian Smith
On Sat, Jul 14, 2012 at 03:17:07PM +0100, Simon Slavin wrote:
> 
> On 14 Jul 2012, at 3:12pm, Udi Karni  wrote:
> 
> > I know
> > nothing about writing DB engines - so I don't know whether adding a 2nd
> > parallel process adds 10K or 10M to the code base.
> 
> You've reached the limit of what I know about parallelization.  I hope 
> someone else can chime in.


Using SQLite's VM architecture, I would guess that adding this sort of 
parallelization would be non-trivial. You would need a parallel VM, significantly 
different to the current sequential VM, and at least a way of managing 
asynchronous IO, with perhaps a callback mechanism into the VM to handle IO 
completion. 

While not certain, I guess other databases handle this by using tree based 
execution plans, where any single execution node can easily be split into 
branches to another thread/process/machine, then merged in the parent tree 
node, with each branch handling a certain key range.

This might make sense, for example, with a partitioned table, where each 
partition is on its own spindle, so a full table scan can be executed in 
parallel on each spindle and merged as a final step. So, for a table scan 
between k0 and k3, find intermediate keys to split the query between spindles:

        (k0-k3)
         / | \
        /  |  \
       /   |   \
(k0-k1] (k1-k2] (k2-k3)
   |       |       |
 disk1   disk2   disk3

I sat through an Oracle internals course once, and the instructor gave us an 
example of a setup such as this where data was partitioned across 24 disks, and 
the resulting full table scans were in fact quicker than index based scans for 
the data set they were using.

Of course, the above would be useless for SQLite anyway, being a single file 
database. And even with the likes of Oracle, Stripe And Mirror Everything 
(SAME) might also largely defeat parallel scans.

All in all, the added bloat would be measured in MB, rather than KB.

Christian

disclaimer: Not a practical DB implementation expert.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Queries on SQLite

2012-06-15 Thread Christian Smith
On Thu, Jun 14, 2012 at 06:27:17AM -0400, Richard Hipp wrote:
> On Thu, Jun 14, 2012 at 2:41 AM, vallur raghava reddy <
> vallur.raghavare...@gmail.com> wrote:
> 
> 
> >   2. Does SQLite create a any threads? If YES, how many will be created?
> >
> 
> SQLite does not create threads.
> 

Except when the asynchronous IO module is used:
http://www.sqlite.org/asyncvfs.html

This VFS module will create a single background IO thread.
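
For reference, a minimal sketch of wiring that up, using the function names
documented on that page (this assumes the optional ext/async extension is
compiled into your build):

#include <pthread.h>
#include "sqlite3.h"
#include "sqlite3async.h"

static void *async_io_thread(void *unused)
{
    sqlite3async_run(); /* services queued writes until shutdown */
    return 0;
}

int start_async_vfs(void)
{
    pthread_t tid;
    int rc = sqlite3async_initialize(0, 1); /* register as the default VFS */
    if (rc != SQLITE_OK) return rc;
    return pthread_create(&tid, 0, async_io_thread, 0);
}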

Christian
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Triggers in Sqlite using C

2012-06-15 Thread Christian Smith
On Fri, Jun 15, 2012 at 11:59:33AM +0530, Bageesh.M.Bose wrote:
> Can anyone tell "How to use Triggers in Sqlite using C"
> 

Triggers in SQLite can only contain SQL DML statements. But you can
interface with C by defining functions, then calling those functions
using the SQL SELECT command.

For example:

CREATE TRIGGER mytrigger AFTER INSERT ON mytable
FOR EACH ROW
BEGIN
SELECT myfunction(NEW.field1, NEW.field2);
END;

where myfunction would be defined using:

http://sqlite.org/c3ref/create_function.html
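
As a rough sketch, assuming a two-argument function (the body here is
purely illustrative):

#include <stdio.h>
#include "sqlite3.h"

static void myfunction(sqlite3_context *ctx, int argc, sqlite3_value **argv)
{
    /* React to the inserted row; here we just print the two fields. */
    printf("inserted: %s, %s\n",
           (const char *)sqlite3_value_text(argv[0]),
           (const char *)sqlite3_value_text(argv[1]));
    sqlite3_result_null(ctx); /* the trigger's SELECT discards the result */
}

/* Call once on each connection, after sqlite3_open(): */
int register_myfunction(sqlite3 *db)
{
    return sqlite3_create_function(db, "myfunction", 2, SQLITE_UTF8,
                                   0, myfunction, 0, 0);
}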

However, such usage would result in a database that can only be used
in your application. Using the database in the SQLite shell or some
other application might result in function not found errors if the
trigger is fired.

Hope that helps,
Christian
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] integrity constraint that is equivalent to the following trigger:

2012-06-13 Thread Christian Smith
On Sun, Jun 10, 2012 at 02:35:20PM +0200, Wolfgang Meiners wrote:
> Hello,
> 
> i have written a sqlite3-db for a small books library. Within this db
> there is a table:
> sqlite> .schema ausleihen
> CREATE TABLE ausleihen (
>   aid INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT,
>   pid INTEGER NOT NULL,
>   beid INTEGER NOT NULL,
>   ausleihdatum DATE NOT NULL,
>   rueckgabe DATE,
>   FOREIGN KEY(pid) REFERENCES personen (pid) ON DELETE CASCADE,
>   FOREIGN KEY(beid) REFERENCES buchexemplare (beid) ON DELETE CASCADE
> );
> CREATE INDEX ix_ausleihen_ausleihdatum ON ausleihen (ausleihdatum);
> CREATE TRIGGER insertausleihe
> BEFORE INSERT ON ausleihen
> FOR EACH ROW
> BEGIN
> SELECT CASE
> WHEN EXISTS (SELECT * FROM ausleihen
>  WHERE (beid = NEW.beid) AND (rueckgabe IS NULL))
> THEN RAISE(ROLLBACK, 'Buch ist ausgeliehen')
> END;
> SELECT CASE
> WHEN NOT EXISTS (SELECT * FROM personen
>  WHERE pid = NEW.pid)
> THEN RAISE(ROLLBACK, 'Person existiert nicht')
> END;
> SELECT CASE
> WHEN NOT EXISTS (SELECT * FROM buchexemplare
>  WHERE beid = NEW.beid)
> THEN RAISE(ROLLBACK, 'Buchexemplar existiert nicht')
> END;
> END;
> sqlite>
> 
> In this table, every row belongs to a person (pid) which has an exemplar
> of a certain book on loan (beid). If the bookexemplar is on loan, the
> value rueckgabe is equal to NULL. The first SELECT statement in the
> trigger raises a rollback, if the book is already on loan.
> 
> This works, but there is an issue with the orm-part of sqlalchemy: Since
> sqlalchemy.orm does not 'see' this trigger, the following python
> function returns true, even if the trigger is violated:
> 
> def buchausleihen(session, pid, beid, ausleihdatum=None):
>     try:
>         session.add(Ausleihe(pid=pid, beid=beid,
>                              ausleihdatum=ausleihdatum))
>         return True
>     except:
>         return False
> 
> This means:
> (python code:)
> print(buchausleihen(session,pid=1,beid=1)) # True
> print(buchausleihen(session,pid=2,beid=1)) # True
> 
> session.commit() # raises an error
> 
> This is not what i expected. So i think, if i could write an integrity
> constraint - maybe a CHECK clause - this would be seen by sqlalchemy.
> 
> So my question is: (how) can i write an integrity constraint in sqlite3
> (version 3.6.12) which is equivalent to the part
> SELECT CASE
> WHEN EXISTS (SELECT * FROM ausleihen
>  WHERE (beid = NEW.beid) AND (rueckgabe IS NULL))
> THEN RAISE(ROLLBACK, 'Buch ist ausgeliehen')
> END;
> of the trigger?


Why not add a unique index on ausleihen(beid, rueckgabe)? That way, the book 
identified by beid cannot be inserted twice into the ausleihen table while the 
current copy is on loan. However, you'd need a non-NULL sentinel value for 
rueckgabe for this to work, perhaps some epoch that is before any valid return 
time, such that a book is considered on loan if rueckgabe = sentinel value.

Your foreign key constraints take care of ensuring the book and person exist. 
The unique index deals with the book only being loaned once at a time, so you 
can discard the trigger completely.

So you'd have:

CREATE TABLE ausleihen (
    aid INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT,
    pid INTEGER NOT NULL,
    beid INTEGER NOT NULL,
    ausleihdatum DATE NOT NULL,
    rueckgabe DATE NOT NULL DEFAULT 0, -- 0 is a sentinel for "not returned", instead of NULL
    CONSTRAINT Leihgabe_Scheck UNIQUE (beid,rueckgabe),
    FOREIGN KEY(pid) REFERENCES personen (pid) ON DELETE CASCADE,
    FOREIGN KEY(beid) REFERENCES buchexemplare (beid) ON DELETE CASCADE
);

Hope that helps,
Christian
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] sqlite Commit C API

2012-05-18 Thread Christian Smith
On Thu, May 17, 2012 at 04:47:29PM +0100, Simon Slavin wrote:
> 
> On 17 May 2012, at 4:34pm, Rajesh Kumar  wrote:
> 
> > No am not using any PRAGMAs. I just cross compiled sqlite source to
> > ARM architecture. Can't I forcefully do commit on my transaction
> 
> If you are correctly using _open() and _close(), and haven't disabled 
> synchrony with PRAGMAs or compilation settings, then you should be getting 
> COMMIT automatically.  There's no way to force SQLite to flush changes to 
> disk because SQLite flushes changes to disk as part of normal working.
> 
> As for use of COMMIT: every SQL command executed with SQLite is part of a 
> transaction.  So normal programming tends to look like
> 
> BEGIN;
> ... any number of INSERT, DELETE, SELECT, UPDATE commands here ...
> COMMIT;
> 
> By the time SQLite has returned from the COMMIT, your changes should have 
> made it to disk.  There's no flushing to do, SQLite already did it.
> 
> If you do happen to issue a SQL command without having issued a 'BEGIN' 
> first, then SQLite helpfully wraps it in its own transaction for you, 
> executing all of these
> 
> BEGIN  (<-- I helpfully do this for you because you forgot)
> Your command here
> COMMIT  (<-- since you forgot the BEGIN you'll probably forget the COMMIT too)
> 
> with the single call to sqlite3_exec() you made, and doing the COMMIT before 
> it returns from processing that call.  So even if all you do is use 
> sqlite3_exec('UPDATE command here'), SQLite has already done all the flushing 
> to disk that needs doing.
> 
> However, the code in SQLite that does the COMMIT includes the C function 
> fsync() as part of its working.  (I think this is right.  I haven't read the 
> source code.)  And it's the fsync() that does the flushing.  So if that's not 
> correctly implemented on your platform (which is depressingly common) then 
> things like application crashes or power loss can lead to a corrupt database. 
>  Which is why Richard pointed to
> 
> http://www.sqlite.org/atomiccommit.html and
> http://www.sqlite.org/howtocorrupt.html
> 


Note, also, if you do a BEGIN, but don't get as far as a COMMIT before
losing power, sqlite *should* be able to recover without corrupting
the database if fsync() is working, as it saves recovery information
on the fly.
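
To make that concrete, a minimal sketch of an explicit transaction from C
(the "accounts" table is hypothetical); the COMMIT does not return until
SQLite has done whatever fsync() work the synchronous setting requires:

#include "sqlite3.h"

int transfer(sqlite3 *db)
{
    if (sqlite3_exec(db, "BEGIN", 0, 0, 0) != SQLITE_OK) return 1;
    if (sqlite3_exec(db, "UPDATE accounts SET bal=bal-10 WHERE id=1",
                     0, 0, 0) != SQLITE_OK ||
        sqlite3_exec(db, "UPDATE accounts SET bal=bal+10 WHERE id=2",
                     0, 0, 0) != SQLITE_OK) {
        sqlite3_exec(db, "ROLLBACK", 0, 0, 0);
        return 1;
    }
    /* Power loss before this point is rolled back on the next open. */
    return sqlite3_exec(db, "COMMIT", 0, 0, 0) == SQLITE_OK ? 0 : 1;
}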

BTW, what embedded OS are you using? Perhaps check with the vendor on
their fsync() implementation?

Christian
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] INSERT OR REPLACE

2012-03-12 Thread Christian Smith
On Fri, Mar 09, 2012 at 09:49:22AM +1100, BareFeetWare wrote:
> I suggest only using "insert or replace" if you genuinely want to delete and 
> replace with a new row. This is generally not what you want if there are any 
> foreign keys.
> 
> Only use "insert or ignore" if you are inserting exactly the same row as what 
> might already be there.
> 
> Otherwise use a combination of insert and update.
> 
> So, for example, if you are wanting to add a person that may not already be 
> in your table:
> 
> insert or ignore into "Person" ("First Name", "Last Name", "Company", "Email")
> select 'Tom', 'Brodhurst-Hill', 'BareFeetWare', 'develo...@barefeetware.com')
> ;
> update "Person"
> set "Email" = 'develo...@barefeetware.com'
> where "First Name" = 'Tom' and "Last Name" = 'Brodhurst-Hill'
> ;

I've had a similar problem in the past, and solved it by using a pre-insert
trigger to do the desired update.

Thus, the insert/update from above becomes just an insert (or ignore),
with an implied update in the trigger, which appears to be semantically
closer to what people want in the above case (though not in the original
subject matter).

In my case, it was inserting "event" data if not already existing, else
updating an existing event record from the new event data (such as incrementing
an event count and updating timestamps):

create table events (
    identifier text primary key,
    count integer default 1,
    firstoccurrence date not null,
    lastoccurrence date not null,
    ...
);

create trigger event_dedup
before insert on events
for each row
begin
    update events set count=count+1, lastoccurrence=NEW.lastoccurrence
    where identifier=NEW.identifier;
end;

insert or ignore into events (identifier,firstoccurrence,lastoccurrence)
    values ('Some event 1',datetime('now'),datetime('now'));
insert or ignore into events (identifier,firstoccurrence,lastoccurrence)
    values ('Some event 2',datetime('now'),datetime('now'));
insert or ignore into events (identifier,firstoccurrence,lastoccurrence)
    values ('Some event 1',datetime('now'),datetime('now'));

-- the same three statements run a second time:
insert or ignore into events (identifier,firstoccurrence,lastoccurrence)
    values ('Some event 1',datetime('now'),datetime('now'));
insert or ignore into events (identifier,firstoccurrence,lastoccurrence)
    values ('Some event 2',datetime('now'),datetime('now'));
insert or ignore into events (identifier,firstoccurrence,lastoccurrence)
    values ('Some event 1',datetime('now'),datetime('now'));

select * from events;
Some event 1|4|2012-03-12 13:34:43|2012-03-12 13:34:48|...
Some event 2|2|2012-03-12 13:34:43|2012-03-12 13:34:48|...

It must be noted as well that the above "insert or ignore" data can be
generated from a select, so in the original question, the "update" could
be written as:

insert or ignore into t1 (tid, a, b)
    select t1.tid, t1.a+t2.a, t1.b+t2.b from t1 join t2 on (t1.tid=t2.tid);

This works so long as the update in the pre-insert trigger updates all the
required fields. The select with join is only evaluated once, so it will be
more efficient if the join was the dominant performance bottleneck, and it
emulates the "update from select" noted for Oracle elsewhere in the thread.

Christian
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite on armv7l GNU/Linux - JDBC issues - any suggestions?!

2011-09-25 Thread Christian Smith
On Sun, Sep 25, 2011 at 09:52:00PM +0100, Christian Smith wrote:
> On Sun, Sep 25, 2011 at 11:08:38AM +0100, Katie Blake wrote:
> > 
> > I hope that this is the correct list to send this question.
> > I am trying to use SQLite on a Gumstix Linux module running Angstrom armv7l 
> > GNU/Linux (32-bit).
> > 
> > I have installed the sqlite packages and can happily create and query a 
> > database using the sqlite command line.
> > 
> > I would like to access my database from some Java code, and I am hitting 
> > some problems!
> > 
> > I see the error:
> > 
> > java: codegen.c:2036: codegen_emit: Assertion `(15) != 15' failed.
> > Aborted
> > 
> 
> This looks like a JVM error when trying to output JIT code.
>
> You might also want to investigate the error within the JVM from your JRE 
> vendor.

Oh, and try running the JVM in non-JIT mode. That will isolate the problem to 
the JVM JIT compiler, and at least allow you to continue testing:
java -Djava.compiler=NONE ...

Christian
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite on armv7l GNU/Linux - JDBC issues - any suggestions?!

2011-09-25 Thread Christian Smith
On Sun, Sep 25, 2011 at 11:08:38AM +0100, Katie Blake wrote:
> Hello,

Hello, Katie!

> 
> I hope that this is the correct list to send this question.
> I am trying to use SQLite on a Gumstix Linux module running Angstrom armv7l 
> GNU/Linux (32-bit).
> 
> I have installed the sqlite packages and can happily create and query a 
> database using the sqlite command line.
> 
> I would like to access my database from some Java code, and I am hitting 
> some problems!
> 
> I see the error:
> 
> java: codegen.c:2036: codegen_emit: Assertion `(15) != 15' failed.
> Aborted
> 

This looks like a JVM error when trying to output JIT code.


> Whenever I try to make a connection to the database.
> 
> I have been trying out various jdbc providers and seem to always see the same 
> error.  Is this a common problem ?!?!
> 
> At the moment I am using the jdbc jar file downloaded from 
> http://www.xerial.org/trac/Xerial/wiki/SQLiteJDBC#RuninPure-Javamode
> and am running in 'Pure Java' mode.
> 

I've had no problems running the JDBC driver from here:
http://www.zentus.com/sqlitejdbc/

But your driver appears to be an extension of this driver, hence you may get 
the same issue. I've tested on AIX with the IBM JRE, so it will definitely be 
using the NestedVM version of the driver. If you haven't tried this driver, 
give it a go.

This driver:
http://www.ch-werner.de/javasqlite/

is a JNI based wrapper and may not be affected by the error you're seeing, 
which may be related to the NestedVM based implementation.

For the Xerial project driver, you may also want to post on its mailing list. 
You might also want to try (cross-)compiling the Xerial driver from source for 
the armv7l target, so that you use the JNI interface, which should also improve 
performance.

You might also want to investigate the error within the JVM from your JRE 
vendor.

Hope that helps,
Christian (from IBM:)
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite disc and I/O usage

2011-09-19 Thread Christian Smith
On Mon, Sep 19, 2011 at 02:42:42PM +0100, Jaco Breitenbach wrote:
> Hi Simon,
> 
> Thanks for the reply.
> 
> On 19 September 2011 13:23, Simon Slavin  wrote:
> 
> > > I run the database (3.7.7.1) in WAL mode, with checkpointing performed at
> > 10
> > > minute intervals.
> >
> > You turned 'PRAGMA wal_autocheckpoint' off, right ?
> >
> Correct.  The frequency of checkpointing seems to have a significant impact
> on overall performance.  I've disabled auto-checkpointing and introduced a
> time-based manual checkpoint frequency similar to that of TimesTen.  In my
> tests I've set the interval to 10 minutes.
> 
> If you know you are going to do lots of INSERTs in a row you can put
> > BEGIN/END around them.  This will dramatically speed up the operation.  On
> > the other hand, if your benchmark is simulating lots of separate logging
> > entries you will not want to do this.
> >
> That is indeed how my application works.  Records are processed in
> transactions of approximately 20,000 records each.
> 
> > 
> >
> > If, on the other hand you aren't concerned about using too much space, and
> > are simply exploring to find out how things are likely to work in real life,
> > carry on.
> >
> My application is rather simple.  It simply inserts random 22-character
> strings into a single-column, unique-indexed table as fast as possible.  The
> main concern is application performance and I've spent many hours optimising
> as much as possible.  In terms of hardware, the resource I am most concerned
> about is I/O bandwidth and I am trying to estimate the I/O requirement as a
> function of record size and record processing rate.


Is this the actual application, or just some test application that is
representative of the final application?

I ask because I can't see random 22-character string inserts as representative
of anything other than worst case performance, so the test might not actually
be valid.

You'd be better off testing with real data with real insert patterns, then
evaluating the IO demands of that. Your test above will result in lots of
random IO, which is the worst case for spinning disk media.

For insert-mostly workloads in big batches, WAL mode might not be optimal.
The data will be written twice, once to the WAL, once to the database file,
whereas in rollback journal mode, most data will be written just once (with
smaller amounts of rollback data written to the journal.)

Once you introduce updates and deletes, the performance patterns can change
as more updates become in place updates, which might favour WAL mode more.

Either way, the IO requirements are not likely to reflect record size and
processing rate per se. You're more likely to be limited to transaction
rate, transaction rate being limited by the cache flush rate of your
IO stack and bound in the spinning platter case to the write latency
of the disk. Such limits are in the order of 10s of transactions per second
max for non-NVRAM cached spinning disks. 
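
Put another way, the fsync() cost is paid per transaction, so batching is
the main lever. A minimal sketch, with a hypothetical table t(key):

#include "sqlite3.h"

int insert_batch(sqlite3 *db, const char **keys, int n)
{
    sqlite3_stmt *st;
    int i;
    if (sqlite3_prepare_v2(db, "INSERT INTO t(key) VALUES(?)",
                           -1, &st, 0) != SQLITE_OK) return 1;
    sqlite3_exec(db, "BEGIN", 0, 0, 0);
    for (i = 0; i < n; i++) {
        sqlite3_bind_text(st, 1, keys[i], -1, SQLITE_STATIC);
        sqlite3_step(st);  /* the disk pays one sync per COMMIT, not per row */
        sqlite3_reset(st);
    }
    sqlite3_finalize(st);
    return sqlite3_exec(db, "COMMIT", 0, 0, 0) != SQLITE_OK;
}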

Of course the first rule of programming is "make it work", followed 
afterwards by "make it work fast". And well designed schemas and indexes
are likely to have more impact on performance (orders of magnitude
potentially) than the micro-benchmark based optimizations you're
looking at at the moment.

Hope that helps,
Christian
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite on AIX

2011-09-19 Thread Christian Smith
On Thu, Sep 15, 2011 at 05:47:47AM +, Grice, Lynton (L) wrote:
> Hi Ross,
> 
> Many thanks for your response, I really appreciate it and will definitely 
> drop you a quick email if I have any issues with the build.
> 
> I am going to install GCC onto the AIX 5.3 box and will give it a try. I have 
> never used the IBM C compiler... do you prefer it to GCC?

[disclaimer - I am an IBM employee, but this is not an IBM policy statement]

You might want to browse the IBM developerworks site for information on GCC
and AIX.

Some things you might want to bear in mind:
- If using C++, mixing IBM and GCC compilers is not a good idea. If you have
  any third party libraries compiled using the IBM C++ compiler, you might be
  better off using just the IBM compiler.
- You must use the AIX linker, which is the default in GCC I believe.


This page:
http://www.ibm.com/developerworks/aix/library/au-gnu.html

gives useful information on GCC on AIX, as well as links to other useful
resources such as evaluation links to XL C/C++ for AIX.

My teams' products are built using a variety of compilers anyway, including
GCC, IBM, HP and Oracle development suites. If your code is sufficiently
portable, and you stick to C, you should have no problems whatever compiler
you use. We don't use GCC on AIX, however.

I can't see performance being much of an issue on modern machines, in these
days of multi-GHz cores. The odd micro-optimizations done by the XL compiler
may look good in benchmarks, but the real world performance will more
likely be dominated by IO.

The last SQLite version I regularly used on AIX was 2.8.17, which is long
obsolete, but we had no problems with that code.

Christian

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] fsync and ext3 ordered mode, data integrity.

2011-08-26 Thread Christian Smith
On Tue, Aug 23, 2011 at 09:28:35AM +0200, Sébastien Escudier wrote:
> Hi,
> 
> I am currently using sqlite on an ext3 file system with ordered mode.
> But we have serious performance issues when sqlite calls fsync,
> especially on RAID devices.
> 
> We noticed that disabling fsync in sqlite OR removing ordered mode
> solves this performance issue.
> 
> But I couldn't find if there is a risk of corrupted database in one of
> these solutions.
> 
> in http://www.sqlite.org/howtocorrupt.html 3.2, I read that disabling
> sync is risky if the system reorder writes. 
> So can I assume that :
> - I can safely disable sqlite syncs in ext3 ordered mode ?

No. For appending data, ordered mode provides you with the guarantee that
the file will either have the appended data in its entirety or not at all.
For in-place updates, all bets are off; data must be fsync'd.

> - I can safely store my databases on an ext3 filesystem without ordered
> mode if I keep sqlite syncs ?

Yes. 

I found that I can increase SQLite performance on ext3 by using data=journal.

This way, all synchronous writes become a big sequential write to the journal,
at which point it is considered safely written. You do increase the volume
of data written, but the in place writes to the files can be deferred, removing
latency. I found I could double the number of transactions SQLite could
write using this. Details of my non-scientific test here:
http://www.osnews.com/thread?184137

Another option you might want to investigate is using WAL mode. Similar to
the data=journal mode above, data is written sequentially to a Write Ahead
Log, and copied from there to its final destination at a later time.
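
If you want to try WAL, a minimal sketch of switching it on from C (the
journal_mode pragma returns a result row, which sqlite3_exec discards here):

#include "sqlite3.h"

int enable_wal(sqlite3 *db)
{
    int rc = sqlite3_exec(db, "PRAGMA journal_mode=WAL;", 0, 0, 0);
    if (rc == SQLITE_OK)
        /* Optional: NORMAL skips some syncs; in WAL mode a crash can lose
           the last few transactions but should not corrupt the database. */
        rc = sqlite3_exec(db, "PRAGMA synchronous=NORMAL;", 0, 0, 0);
    return rc;
}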

Cheers,
Christian
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Potential problem with ON CONFLICT REPLACE?

2011-03-02 Thread Christian Smith
Inline...

On Mon, Feb 28, 2011 at 04:37:13PM -0700, Carl Baldwin wrote:
> Hello,
> 
> I've used sqlite for a number of years now.  I ran into something over the
> week-end like nothing I've experienced before.  An application linked
> against 3.6.18 corrupted one of my databases.  The application did not crash
> or misbehave, the only reason that I discovered the problem is that it runs
> "PRAGMA integrity_check;" periodically.  I got the following errors:
>  Attempts to reproduce this scenario from a healthy database have failed.
> 
> sqlite> PRAGMA integrity_check;
> rowid 13 missing from index sqlite_autoindex_MetricDefn_1
> rowid 20 missing from index sqlite_autoindex_MetricDefn_1
> rowid 21 missing from index sqlite_autoindex_MetricDefn_1
> rowid 22 missing from index sqlite_autoindex_MetricDefn_1
> rowid 23 missing from index sqlite_autoindex_MetricDefn_1
> rowid 24 missing from index sqlite_autoindex_MetricDefn_1
> rowid 25 missing from index sqlite_autoindex_MetricDefn_1
> rowid 26 missing from index sqlite_autoindex_MetricDefn_1
> rowid 28 missing from index sqlite_autoindex_MetricDefn_1
> rowid 29 missing from index sqlite_autoindex_MetricDefn_1
> rowid 30 missing from index sqlite_autoindex_MetricDefn_1
> rowid 31 missing from index sqlite_autoindex_MetricDefn_1
> rowid 32 missing from index sqlite_autoindex_MetricDefn_1
> rowid 33 missing from index sqlite_autoindex_MetricDefn_1
> rowid 34 missing from index sqlite_autoindex_MetricDefn_1
> wrong # of entries in index sqlite_autoindex_MetricDefn_1
> 
> I dumped the database using the command line .dump utility and tried to
> create a the database fresh with the data.  This table had trouble because a
> UNIQUE constraint on the table was violated.  The table definition looks
> roughly like this:
> 
> CREATE TABLE IF NOT EXISTS MetricDefn
> (
>   Id INTEGER NOT NULL PRIMARY KEY ON CONFLICT REPLACE,
>   Owner INTEGER NOT NULL,
>   Name TEXT NOT NULL,
>   Type TEXT NOT NULL,
>   ... data fields with no constraints or indexes ...
>   UNIQUE(Owner, Name, Type) ON CONFLICT ABORT
> );
> 
> I only perform simple INSERT INTO ... (...) VALUES (...) and DELETE FROM ...
> WHERE Id = ... operations on this table.  I actually don't think that my
> DELETE code is ever called anywhere yet.  The INSERT operation on this table
> is the only operation modifying it.
> 
> In the dump file, I found that rows 13 and 20-25 (all mentioned in the
> errors above) conflicted with records later in the file with higher row id
> numbers.  On further inspection, all of the rows mentioned above have newer
> versions with higher ids.  It just happens that in some of the rows, the
> Type field changed and so the UNIQUE constraint was not violated.
>  Typically, if some of the data in a record changes, I perform an INSERT
> with the same Id and the record gets replaced.  In this case, it appears
> that this didn't happen correctly and new rows got inserted with new rowids.
> 
> Could my use of the ON CONFLICT REPLACE algorithm have triggered a bug in
> sqlite?  Could there be anything else going on here?


I find it unlikely that the ON CONFLICT REPLACE algorithm has triggered a
bug; otherwise it is likely you'd be able to reproduce the problem.
The ON CONFLICT ABORT on the UNIQUE constraint is the default
and therefore redundant, so it will be a well tested code path with a low
probability of non-reproducible failure.

Do the problem Id rows have similar values for Owner, Name, Type? Is it likely
they were stored in the same index page, for example? Or perhaps correlated
by time?

Of course, bugs do happen, and SQLite has had corruption problems in
the past with corner cases. But without a reproducible test case, you 
may be out of luck.

Some information on the underlying platform might help, such as OS, hardware.
Have you seen other random looking errors on this machine? Machine crashes?
Check also that your library is correctly compiled for threading if your
application is threaded, and that your hard disks don't have any underlying
errors (check for SMART errors) and PSU issues.

Check out:
http://www.sqlite.org/lockingv3.html#how_to_corrupt

Christian
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Bus error on sqlite3_close on Sun Sparc w/SQLite 3.7.4

2011-01-31 Thread Christian Smith
On Wed, Jan 26, 2011 at 02:31:05PM -0500, Joe Mokos wrote:
> I've written an application that uses SQLite 3.7.4.  The application runs
> fine on WinXP, AIX, HPUX and Linux.  When I try to run on Solaris 5.8 SPARC
> I get a bus error when I call sqlite3_close.  I get the following stack
> trace:
> 
>  
> 
> (/opt/SUNWspro/bin/../WS6U2/bin/sparcv9/dbx) where
> 
> current thread: t@1
> 
> =>[1] t_delete(0x1b5ed0, 0xfefbc008, 0x40, 0x1b5e90, 0x1a3840, 0x38), at
> 0xfef427bc
> 
>   [2] realfree(0x1b5ec8, 0xfefc2858, 0xfefbc008, 0x1b5e90, 0x3b, 0x1b5e98),
> at 0xfef423dc
> 
>   [3] _free_unlocked(0xfefc27cc, 0xfefbc008, 0x2a57c8, 0xfefbc008,
> 0xffbede4a, 0x0), at 0xfef42c1c
> 
>   [4] free(0x2a57c8, 0xfefc3a54, 0xfefbfc78, 0xfefbfc78, 0xe9, 0x2a57c8), at
> 0xfef42b6c
> 
>   [5] sqlite3MemFree(0x2a57d0, 0x, 0xc, 0xfef91bf8, 0x1b5d40,
> 0x2a57c8), at 0x5a8cc


Faults from within the libc memory allocator are almost always the result of
heap corruption, typically caused by buffer overruns. Check your code for
potential buffer overruns.

Solaris has good default memory error detection available within libumem. Check
out:
http://developers.sun.com/solaris/articles/libumem_library.html

In combination with mdb, you can find the source of the memory allocation
that is corrupted, which may give you a clue about where to look for overruns.

Differences in malloc implementation between different OSes can mask bugs, as 
some implementations are quite tolerant of subtle bugs like off-by-one 
overruns. As Solaris has a slab based allocator by default, overrunning one 
buffer puts you straight into the next buffer in the slab, corrupting memory 
that under Linux, Windows or AIX might be dead space between blocks.

Hope that helps,
Christian
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] assert crash in wal

2010-12-16 Thread Christian Smith
On Thu, Dec 16, 2010 at 09:41:27AM +0200, Yoni Londner wrote:
> 
> > The underlying error here is that you are attempting to use threads in the
> > first place.  You should never do that.  Threads are evil and should be
> > avoided wherever possible.  Use separate processes for concurrency.  Threads
> > in application programs always result in subtle bugs (such as this one) that
> > are hard to reproduce and waste countless hours of developer time.  Just say
> > "no" to threads.
> Could not agree with you more. Threads are evil. This is exactly the 
> reason I cannot EVER do long operations
> in the main thread (and by long I mean more than 20-30 ms). If I do 
> long operations, my system won't be
> responsive enough.
> When is it allowed to use threads? Only to do long, standalone 
> operations, that would block the main thread (such
> as wal checkpoints).
> That is exactly what I did in my example program.
> 
> What I really wanted to do, is to use long transactions in the main 
> thread and checkpoint in a background thread.
> long transaction increase the performance (and responsiveness) 
> dramatically, since most of the work is in memory,
> and when committing, much less pages needs to be written to the DB 
> (since many sql statements update the same pages),
> so I/O is reduced by order of magnitude. In addition there is no 
> overhead of begin/end transaction for every statement.

In WAL mode, any data manipulated will not be visible outside the
transaction until the final commit. Therefore, why not collect
data into an internal buffer, and write it out periodically
to SQLite in a background thread. That background thread can
do all the inserts and checkpointing in a big transaction,
without worrying about blocking the foreground data collection
thread.

Once the foreground thread has collected enough data for a big
transaction, it passes it off to the background SQLite thread,
then continues on with a new list of data.

Using the above scheme, you'll have all the durability guarantees
as your existing code, and with no restrictions on the checkpoint
time length you currently have.
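
A minimal sketch of that structure, assuming pthreads (the Event type and
the insert step are placeholders):

#include <pthread.h>
#include "sqlite3.h"

typedef struct Event { struct Event *next; /* payload fields... */ } Event;

static pthread_mutex_t mu = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cv = PTHREAD_COND_INITIALIZER;
static Event *queue;

void collect(Event *e)        /* foreground: never touches the disk */
{
    pthread_mutex_lock(&mu);
    e->next = queue; queue = e;
    pthread_cond_signal(&cv);
    pthread_mutex_unlock(&mu);
}

void *writer(void *arg)       /* background: owns the database handle */
{
    sqlite3 *db = (sqlite3 *)arg;
    for (;;) {
        Event *batch, *e;
        pthread_mutex_lock(&mu);
        while (queue == 0) pthread_cond_wait(&cv, &mu);
        batch = queue; queue = 0;
        pthread_mutex_unlock(&mu);

        sqlite3_exec(db, "BEGIN", 0, 0, 0);
        for (e = batch; e; e = e->next) {
            /* bind and step an INSERT for e here, then free it */
        }
        sqlite3_exec(db, "COMMIT", 0, 0, 0);
        sqlite3_wal_checkpoint(db, 0); /* checkpoint; collect() keeps running */
    }
    return 0;
}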

I understand that this may change the structure of your code
somewhat, but that is preferable to large scale messing with the WAL
code that no one else needs.

Christian
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] WAL index in memory - multiple connections

2010-12-15 Thread Christian Smith
On Mon, Dec 13, 2010 at 12:29:20PM +0200, Yoni Londner wrote:
> Hi,
> 
> In general I think that SQLite should have an in-memory VFS, which is OS 
> independent.


A laudable goal, where mmap is not available.


> 
> I am going to implement proc_exclusive now, and would love to get any 
> directions/tips from sqlite developers (if you think its not a good 
> idea, please let me know, or better - suggest another way :-))


While I'm not a SQLite developer, I have to say I think you're going
down the wrong path. Are you sure the WAL index is your bottleneck?
I think it is unlikely.

From your previous descriptions of your application, you might be better
off rethinking how big you make your batches, and committing more often
than you currently do, and using the default checkpoint mechanism in a
single thread. With smaller batches, checkpointing will take less time,
so you'll see less jitter when inserting data.

Or, have smaller batches with a background checkpointing thread.

As it is at the moment, your batches sound like they're too big, 
and the checkpoint thread can never hope to catch up, hence your
huge checkpoint file.

In order to help further, you'd need to specify more details
of the event source, such as how big and how often data is
coming in.

Christian
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Lemon behavior

2010-12-15 Thread Christian Smith
On Tue, Dec 07, 2010 at 08:09:53PM +0100, Begelman, Jamie wrote:
> I'm using Lemon for a non-sqlite related project and it is exiting with an 
> assertion failure that I would like to understand. I have extracted the 
> following small set of productions from a larger grammar. The "list" 
> production happens to be the start symbol in the larger grammar.
> 
> list::= list DELIMITER command.
> list::= command.
> command ::= TERMINAL1.
> command ::= TERMINAL2.
> 
> When I place these in a .y file by themselves and build the file, Lemon fails 
> with:
> 
> Assertion failed: apx->type==SH_RESOLVED || apx->type==RD_RESOLVED || 
> apx->type==SSCONFLICT || apx->type==SRCONFLICT || apx->type==RRCONFLICT || 
> apy->type==SH_RESOLVED || apy->type==RD_RESOLVED || apy->type==SSCONFLICT || 
> apy->type==SRCONFLICT || apy->type==RRCONFLICT, file lemon.c, line 1065
> 
> The odd thing is I use this pattern to parse lists of things elsewhere in the 
> grammar without issue. Any insight appreciated.
> 


I observed that adding a single rule to the beginning of the grammar:

main::= list.

Fixed things for me. Try that. I've no idea why it makes a difference, mind.
This was tried with lemon in SQLite 3.5.9, from Debian lenny.

Christian
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] WAL index in memory - multiple connections

2010-12-10 Thread Christian Smith
On Fri, Dec 10, 2010 at 09:49:46AM -0500, Pavel Ivanov wrote:
> > Given that the WAL index is mmap'ed, you're unlikely to see improvement
> > in performance by storing it in heap memory. Reads/writes will go at
> > main memory speeds once mapped into your address space, and under memory
> > pressure, it will be no slower than if the heap was pushed to the swapfile.
> 
> Still I think pushing actual memory to the swap file has a bigger memory
> pressure threshold than pushing cache pages that are backed by actual
> file data out of physical memory. Also, writing to an mmaped file will
> still force the OS to write it to disk from time to time, and that brings
> additional pressure on the system overall.
> 

Once you're pushing working memory to disk, you've basically lost the
performance battle either way.

Given the OP problem, it doesn't sound like memory is the limiting
factor anyway.

From the past posts, it appears Yoni is after predictable performance
with high throughput (logging system?) but without the durability
guarantees provided by SQLite by default.

Perhaps using async VFS mode would better suit Yoni's application?

http://www.sqlite.org/asyncvfs.html

This way, the foreground thread handles writes to the SQLite IO queue,
and the background SQLite IO thread handles any latencies that result
from the commits. Yoni's already mentioned in other threads that
durability is not the main priority.

I'm not sure how this async VFS fits in with WAL. Might be that normal
rollback journalling only is supported, but from a performance 
standpoint, that's probably not a problem.

Christian 
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] WAL index in memory - multiple connections

2010-12-10 Thread Christian Smith
On Thu, Dec 09, 2010 at 12:17:34PM +0200, Yoni Londner wrote:
> 
> > The alternative is to create your own VFS that stores the WAL index in
> > heap memory.
> I guess I have to do it, but I think this should be available in sqlite, 
> since it is a common case to use sqlite from single process (but 
> multithreaded), and using WAL index in memory will improve system 
> performance.


Given that the WAL index is mmap'ed, you're unlikely to see improvement
in performance by storing it in heap memory. Reads/writes will go at
main memory speeds once mapped into your address space, and under memory
pressure, it will be no slower than if the heap was pushed to the swapfile.

Christian
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Oracle joins the SQLite Consortium

2010-06-21 Thread Christian Smith
On Mon, Jun 21, 2010 at 04:47:12AM -0700, Igor Sereda wrote:
> 
> Wow, that's interesting news. Berkeley DB is still GPL/commercial, I guess? I
> hope SQLite will keep on going under public domain, including its B-tree
> level. 
> 
> Also, here's an interesting statement in the BDB/SQLite announcement:
> 
> 
> > Thus, applications written to the SQLite version 3 API can switch to using
> > Oracle Berkeley DB with no code changes, by re-linking against the
> > Berkeley DB SQLite library. The application-level behavior of the two
> > products is identical, but the advanced features of Berkeley DB provide
> > SQLite applications improved performance, concurrency, scalability, and
> > reliability.
> > 
> 
> Could you please comment on that? 
> Does that mean SQLite storage level is less reliable than BDB? 

[My own opinion, not that of DRH]

If you look at the features of BDB, it includes functionality such as 
replication, enabling high availability across multiple hosts. From
an integrity point of view, SQLite is generally as reliable as the
hardware underneath it. From an outside point of view, SQLite is only
as reliable as the box it is running on, and should the box be 
unavailable, so is your SQLite data.


> Are there any performance measurements and comparison of SQLite vs.
> SQLite/BDB? 
> Does SQLite/BDB really provide more concurrency, while maintaining
> SERIALIZABLE isolation level?

BDB provides MVCC, though I've no idea if this spans process
boundaries. MVCC is well known for allowing increased concurrency
but is difficult to implement without extra IPC primitives that
SQLite doesn't have access to.

That said, the WAL work currently in progress appears to offer
simple MVCC conceptually similar to how Oracle implements MVCC
(keeping old copies of pages about), so this should increase
concurrency somewhat in the cases where WAL can be used.

BDB can probably enable multiple writers if it uses versioned
rows or page/row level MVCC, as opposed to SQLite's table level
locking.

I'd be surprised if Oracle post any SQLite vs. SQLite/BDB 
benchmarks, and I'd also be surprised if BDB licensees are
allowed to publish such data.

Regards,
Christian
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] very large SQLite tables

2009-07-18 Thread Christian Smith
On Wed, Jun 24, 2009 at 02:21:09PM -0500, Matthew O'Keefe wrote:
> 
> 
> We are using SQLite for indexing a huge number (i.e., 100 million to 1
> billion) of key pairs
> that are represented by an 88-byte key. We are using a single table with a
> very large number of rows (one for each data chunk), and two columns.
> 
> The table has two columns.  One is of type "text" and the other is type
> "integer".
> > 
> > The table is created with:
> > 
> > CREATE TABLE chunks
> > (
> >   name text primary key,
> >   pid integer not null
> );
> 
> As expected, as the table grows, the underlying B-tree implementation for 
> SQLite means that the number of disk accesses to (a) find, and (b) add a 
> chunk, grows larger and larger. We've tested up to 20 million chunks 
> represented in the table: as expected, performance exponentially decreases 
> as the number of table entries grows.
> 
> We wanted to post to the mailing list to see if there are any obvious,
> first-order things
> we can try to improve performance for such a large table.

Bit late to the game...

Try increasing your page size. The larger page size will result in greater
fan out of the btree, resulting in a shallower tree and fewer IO requests.
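
A minimal sketch: page_size only takes effect on a brand-new, empty database
(or when followed by VACUUM on an existing one):

#include "sqlite3.h"

/* Run against a freshly created, still-empty database file. */
int create_with_big_pages(sqlite3 *db)
{
    return sqlite3_exec(db,
        "PRAGMA page_size=32768;" /* the current maximum */
        "CREATE TABLE chunks(name TEXT PRIMARY KEY, pid INTEGER NOT NULL);",
        0, 0, 0);
}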

Christian
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Slow Transaction Speed?

2009-06-08 Thread Christian Smith
On Wed, May 27, 2009 at 08:05:00AM -0400, pyt...@bdurham.com wrote:
> Dr. Hipp,
> 
> > Your OS and filesystem configuration have a big impact too. I've noticed, 
> > for example, that transactions are really slow on ReiserFS on Linux 
> > compared to Ext3.
> 
> In your experience, which Linux file system(s) provide the high
> performance platform for SQLite?


I can't speak for DRH, but I have found that ext3 with the option
"data=journal" gives a massive speed improvement over the default ext3
options, mainly because the journal is contiguous and ext3 can avoid seeks
while still ensuring data is written safely to disk. This is a big win for
rotating platter disks.

I did an informal benchmark of various filesystem types on
Linux (note this is 2 1/2 years ago) as part of an OSNews thread here:
http://www.osnews.com/permalink?184137

I'd be interested in how ext4 compares to ext3. Perhaps an evening project
beckons.

> 
> Which Linux file systems do you recommend avoiding for SQLite use?


Anything with FAT in the name...

Plus, avoid NFS due to possible locking issues.

> 
> Thank you,
> Malcolm
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite version 3.6.14

2009-05-08 Thread Christian Smith
On Fri, May 08, 2009 at 05:32:45PM +0700, Dan wrote:
> 
> On May 8, 2009, at 5:21 PM, Christian Smith wrote:
> 
> > On Wed, May 06, 2009 at 10:36:50PM -0400, D. Richard Hipp wrote:
> >>
> >> A new optional extension is included that implements an  
> >> asynchronous I/
> >> O backend for SQLite on either windows or unix.  The asynchronous I/O
> >> backend processes all writes using a background thread.  This gives
> >> the appearance of faster response time at the cost of durability and
> >> additional memory usage.  See http://www.sqlite.org/asyncvfs.html for
> >> additional information.
> >
> >
> > What are the benefits of using async I/O over "PRAGMA synchronous = OFF"?
> > If AIO is used for the rollback journal as well, you've lost your ACID
> > properties already, so you may as well just use "PRAGMA synchronous = OFF"
> > anyway and keep the code simpler.
> 
> That's not the case. You lose the Durability property, in that a COMMIT
> statement may return before a transaction is stored on the persistent  
> media,
> but transactions are still Atomic, Consistent and Isolated.
> 
> When using the "PRAGMA synchronous=off" your database might be corrupted
> by a power failure or OS crash. When using asynchronous IO this should  
> not
> be possible (assuming the hardware is not being untruthful - just as  
> when
> using regular "PRAGMA synchronous=full" mode without the async IO VFS).


Ah, the bulb has lit. Because the writes and syncs are processed by the
single queue in order, journal writes are guaranteed to be synced and
consistent before main in-place updates to the db file.

Might be worth mentioning this in the documentation, as this is not
clear without examining the source.

In that case, I like it :)

Is this something that might be made the default in the future, with the
addition of some synchronization between foreground and background threads
on the xSync messages to emulate the existing "PRAGMA synchronous=full" 
behaviour?

> 
> Dan.

Christian
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite version 3.6.14

2009-05-08 Thread Christian Smith
On Wed, May 06, 2009 at 10:36:50PM -0400, D. Richard Hipp wrote:
>
> A new optional extension is included that implements an asynchronous I/ 
> O backend for SQLite on either windows or unix.  The asynchronous I/O  
> backend processes all writes using a background thread.  This gives  
> the appearance of faster response time at the cost of durability and  
> additional memory usage.  See http://www.sqlite.org/asyncvfs.html for  
> additional information.


What are the benefits of using async I/O over "PRAGMA synchronous = OFF"?
If AIO is used for the rollback journal as well, you've lost your ACID
properties already, so you may as well just use "PRAGMA synchronous = OFF"
anyway and keep the code simpler.

Where I might be able to see the benefit of this background thread is if
the background thread grouped all pending write requests into a single
writev (or win32 equiv), which would reduce the system call count, but
this may be offset by all the extra memory buffer copying that is occurring
when copying a write request to the write queue. We now have 2 buffer
copies when writing a buffer (once to the AIO queue, plus the copy to the
OS.)

Are there any benchmark numbers that indicate AIO is better than the async
PRAGMA?

Cheers,
Christian
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Storage of blobs: to encode or not to encode?

2009-04-14 Thread Christian Smith
On Mon, Apr 13, 2009 at 11:14:17AM -0700, Julian Bui wrote:
> Hi all,
> 
> I have a question about encoding blob bytes before inserting a record into
> SQLite.
> 
> CONTEXT:
> I have a base java (I'm using JDBC) class that has a many children.  Each
> child may store a different data type (such as String, int, long, etc.) in
> its VALUE field.  Since this value field could be many different data types,
> I will be storing the field's bytes into the database as a blob.
> 
> QUESTIONS:
> 
> -I have seen a couple of places on the internet saying that SQLite cannot
> insert binary data unless it has been encoded.  Can someone confirm this
> or preferably point me to an OFFICIAL sqlite statement/page saying that
> this is true?


As pointed out by others, SQLite is perfectly capable of storing unencoded
BLOBS since SQLite 3.0 was released.


> 
> -Will I need to encode my data?  I do not think the String.getBytes()
> command returns bytes including a terminator.  However, since the terminator
> is just 0, I assume that even storing the integer 0 as bytes in the blob
> would be a problem.  Is it not?


Given your Java context, the most portable way to encode Java objects is
to make them implement the Serializable interface, and store the 
serialized byte representation in the column.

However, if all you're doing is storing simple types and not complex
data structures, this is overkill and you're best off just exploiting
SQLite's manifest type system as pointed out in other replies.

Cheers,
Christian
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite JDBC driver performance

2009-01-27 Thread Christian Smith
On Wed, Jan 21, 2009 at 03:48:50PM -0800, Jim Dodgen wrote:
> are you running the pure version of the sqlite driver?


>From the profile output below, I'd say the pure Java version was being
used.

My guess is that the Java version is slower due to the fact that the Java
version is based on NestedVM, which basically converts MIPS binaries into
Java class files, allowing C code to be used in Java without expensive and
proprietary compilers, via compilation with a MIPS-targeted compiler.

Such a scheme, while portable, will not be as fast as a direct C to Java byte
code compiler.

Notice, also, that BLOBS are being accessed. This is probably causing cache
thrashing. It'd be interesting to see relative times for queries that do not
access BLOBS.

I guess the original poster needs to retry his query with the JNI based
JDBC driver. MacOS support should be provided in the download jar on the
project front page, so perhaps the OP has downloaded the wrong jar?

Christian

> 
> On Tue, Jan 20, 2009 at 3:10 AM, Christopher Mason <
> christopher.ma...@proteomesoftware.com> wrote:
> 
> > [Apologies for posting this here, but the sqlitejdbc list appears to be
> > defunct.]
> >
> > Hello.
> >
> > I'm noticing fairly severe performance difference between running a
> > complex query from the sqlite 3.6.7 command line and from within the
> > sqlite jdbc driver version 0.54 available at [1].
> >
> > Sqlite command line:
> > real0m44.708s
> > user0m5.211s
> > sys 0m1.994s
> >
> > java:
> > 2.7min
> >
> > I've tried a number of things:
> >
> >  * I've tried both sqlite 3.6.1 and 3.6.7 (I upgraded the jdbc driver
> > to  .7);
> >  * I've tried with and without the shared_cache option; this has no
> > effect for me.
> >  * I've tried changing the optimization settings for sqlitejdbc to
> > match those of the command line tool I build and installed (-O2).
> >  * Explicitly set the cache size of the
> >
> >
> > I've done some profiling [2] and the prevalence of
> > 'sqlite3_enable_shared_cache' in the java version is surprising.  The
> > java version also uses significantly less CPU than the command line
> > version.
> >
> > I've copied and pasted the SQL out of the prepared statement, so I'm
> > confident that I'm running the same SQL.  It's a single query, so I
> > doubt that JDBC/JNI overhead is to blame.
> >
> > Any help or ideas would be most appreciated!
> >
> > Thanks so much,
> >
> > -c
> >
> >
> > [1] http://zentus.com/sqlitejdbc/
> >
> >
> > [2]
> >
> > Profiling with shark on Mac OS X 10.5.6:
> >
> > java:
> >self  total
> >   0.0%  50.2%   sqlite3_step
> >   5.5%  48.4%sqlite3_blob_open
> >   9.2%  40.0% sqlite3_randomness
> >   8.4%  25.1%  sqlite3_enable_shared_cache
> >   2.1%  16.0%   sqlite3_config
> >   5.4%  7.6% sqlite3_malloc
> >   1.1%  1.2%  sqlite3_config
> >   0.0%  0.1%   sqlite3_free
> >   0.1%  0.1%sqlite3_vfs_unregister
> >   0.8%  1.0%  sqlite3_mutex_try
> >   0.1%  0.1%  sqlite3_mutex_leave
> >   0.2%  3.3% sqlite3_value_type
> >   0.9%  2.7% sqlite3_os_end
> >   0.2%  0.2% sqlite3_mutex_try
> >   0.1%  0.1% sqlite3_snprintf
> >   0.3%  0.3%sqlite3_free
> >   0.2%  0.2%sqlite3_malloc
> >   0.1%  0.1%sqlite3_snprintf
> >
> > command line:
> >   self total
> >   0.0%  64.1% sqlite3_exec
> >   0.0%  64.0%  sqlite3_step
> >   8.6%  63.3%   sqlite3Step
> >   0.3%  21.0%sqlite3VdbeCursorMoveto
> >   6.0%  20.6% sqlite3BtreeMovetoUnpacked
> >   0.3%  10.6%  moveToChild
> >   1.0%  10.2%   getAndInitPage
> >   0.5%  5.1%  sqlite3PcacheFetch
> >   3.0%  4.5%   pcache1Fetch
> >   0.6%  0.9%pthreadMutexEnter
> >   0.2%  0.2% dyld_stub_pthread_self
> >   0.1%  0.1% dyld_stub_pthread_mutex_lock
> >   0.2%  0.2%pcache1PinPage
> >   0.2%  0.2%sqlite3_mutex_leave
> >   0.1%  0.1%sqlite3_mutex_enter
> >   0.1%  0.1%pthreadMutexLeave
> >   0.1%  0.1%dyld_stub_pthread_mutex_unlock
> >   0.1%  0.1%   pthreadMutexLeave
> >   0.0%  3.3%  sqlite3BtreeGetPage
> >   0.3%  0.3%  btreePageFromDbPage
> >   0.0%  0.2%  pagerPagecount
> >   0.1%  0.1%  sqlite3BtreeInitPage
> >   0.1%  0.1%  sqlite3PagerGetData
> >   0.1%  0.1%  sqlite3PagerGetExtra
> >   0.1%  0.1% btreePageFromDbPage
> >   0.1%  0.1% sqlite3PcacheFetch
> >   0.3%  2.0%moveToRoot
> >   1.9%  1.9%sqlite3GetVarint
> >   0.1%  0.1%sqlite3Get4byte
> >   0.1%  0.1%sqlite3PagerUnref
> >   0.1%  0.1%   sqlite3GetVarint
> >   0.1%  0.1%   sqlite3Get4byte
> >   1.5%  15.1%sqlite3BtreeMovetoUnpacked
> >
> >
> >
> > --
> > Christopher Mason   Proteome Software(503) 244-6027
> > ___
> > sqlite-users mailing list
> > sqlite-users@sqlite.org
> > 

Re: [sqlite] Compressed dump SQLite3 database

2009-01-19 Thread Christian Smith
On Mon, Jan 19, 2009 at 06:22:33PM +0100, vlema...@ausy.org wrote:
> Hello,
> 
> We need to produce copies of our databases for archive.
> It is a requirement that the size of those copies being as small as
> possible, without having to perform an external compression.
> vacuum doesn't seem to perform a compression (it works on fragmented
> data), is there any other way to do that ?


If you're taking snapshots of your databases while live, be careful
not to just copy the database files, as changes may be occurring in the
database while you're taking a copy, leaving you with an inconsistent
file.

In which case, you'll probably want to script some sort of archiving:
either use the sqlite shell .dump command to take a text dump of the
database (which will lock it correctly), or start an exclusive
transaction while you copy the raw database file, rolling back to
unlock the raw database once you've finished copying.

Either way, by scripting it, you'll have the opportunity to also
compress the file (dump or raw). An arbitrary no-compression policy
without good justification seems like a poor one to implement.
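
For example, something along these lines gives you a consistent, compressed
archive in one step (file names purely illustrative):

sqlite3 mydb.db .dump | gzip > archive/mydb.sql.gz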


> 
> Vincent
> 

Christian
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Database file size

2008-12-03 Thread Christian Smith
On Thu, Nov 27, 2008 at 08:12:02AM +, Simon Bulman wrote:
> Morning,
> 
> Table 1
> 
> BIGINT (index),  VARCHAR(30), VARCHAR(10)
> 
>  
> 
> Table 2
> 
> BIGINT (index), FLOAT


For the second table, the index will contain the BIGINT value and the table
rowid, which is almost as big as the actual row, so a separate index will
roughly double the storage used for the second table, index included.

If your BIGINT fields are just record ids, then you are better off using
an "integer primary key" for these fields, as this is used as the table
rowid, and no separate index will be required. The "integer primary key"
type is good for 64 bit ids, so should be adequate.
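
For example (column names invented):

CREATE TABLE table2 (id INTEGER PRIMARY KEY, value FLOAT);

Here id aliases the internal rowid, so lookups by id go straight through the
table's own BTree and no extra index storage is needed.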

Regards,
Christian
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Why is SQLite so slow across a network (afp Mac OS X)?

2008-05-24 Thread Christian Smith
On Fri, May 23, 2008 at 12:55:47PM -0600, Peter K. Stys wrote:
> On Fri, May 23, 2008 at 4:31 AM, Christian Smith <
> [EMAIL PROTECTED]> wrote:
> 
> > On Tue, May 13, 2008 at 11:15:51PM -0600, Peter K. Stys wrote:
> > > Hi Folks:
> > > I'm new to this list and just came across a major issue so I thought I'd
> > > post here: I'm using SQLite (from REALbasic which uses the SQLite DB
> > engine
> > > for SQL support) to store image files, and while R/W to local files is
> > > excellent, the same operations on files residing on a remote volume
> > mounted
> > > via afp or smb (all Mac OS X) suffer a 20-fold performance hit.
> > >
> >
> > Try playing with synchronous writes turned off. The reduced synchronous
> > requirements may allow you to make more optimum use of the network file
> > protocols, which operate best asynchronously.
> >
> > Try, in order:
> > PRAGMA synchronous = NORMAL;
> > PRAGMA synchronous = OFF;
> >
> > And measure performance of each. But SQLite is simply not designed to work
> > efficiently across a network based file system, so manage your
> > expectations.
> >
> 
> BTW, those PRAGMAs made little difference.  I resorted to caching the remote
> file to the local drive via a fast OS-level file copy then doing the SQL
> R/W, then copying back to the remote in a bkgnd thread.  A programming
> headache to keep everything in sync, but very acceptable performance.


Actually, you might want to try using a larger page size. SQLite uses, by
default, 1KB pages. Increasing that to 16KB or perhaps larger will not
only reduce the overhead of BLOBs, but also increase performance 
significantly, as each page will be going across the network one by one.
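
The page size must be set before the database is populated, so when creating
the database do something like (16KB is just an example):

PRAGMA page_size = 16384;

then create your tables and import the images.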

Christian
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Why is SQLite so slow across a network (afp Mac OS X)?

2008-05-23 Thread Christian Smith
On Tue, May 13, 2008 at 11:15:51PM -0600, Peter K. Stys wrote:
> Hi Folks:
> I'm new to this list and just came across a major issue so I thought I'd
> post here: I'm using SQLite (from REALbasic which uses the SQLite DB engine
> for SQL support) to store image files, and while R/W to local files is
> excellent, the same operations on files residing on a remote volume mounted
> via afp or smb (all Mac OS X) suffer a 20-fold performance hit.
> 
> Why is this, and is there a workaround?
> 
> To be clear, while the remote files are on a server, this is still single
> user access to a single file at a time, just remotely.
> 
> Any input greatly appreciated!


A local disk is on a >1Gb/s connection, probably at the end of a wire <0.5m
long. The remote volume will have that, but also a <1Gb/s connection, on top
of a >10m length of cabling to implement the network.

Laws of physics, such as the speed of light, limit the turn-around of
synchronous writes across a network. Your hard disk has probably an order
of magnitude quicker synchronous write than your network share.

Try playing with synchronous writes turned off. The reduced synchronous
requirements may allow you to make more optimum use of the network file
protocols, which operate best asynchronously.

Try, in order:
PRAGMA synchronous = NORMAL;
PRAGMA synchronous = OFF;

And measure performance of each. But SQLite is simply not designed to work
efficiently across a network based file system, so manage your expectations.

> 
> Peter.
> 

Christian
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Major memory leak

2008-03-23 Thread Christian Smith
On Fri, Mar 21, 2008 at 10:41:10AM -0400, Rob Richardson wrote:
> My SQLite library is built from the single translation unit
> sqlite.c/sqlite.h.  That file contains the version number 3.3.17.
> 
> I do not have valgrind, but circumstantial evidence that this is a
> SQLite problem is strong.  When stepping through my code, I see that
> my application's memory jumps by over 2.5 megabytes when the
> sqlite3_step() method is called when using either the sorted query or
> the query using max().  The unsorted query doesn't show any memory
> jump.  Also, the difference in memory consumption before this part of
> the code is executed and after it is left is the same size as the jump
> in memory when sqlite3_step() is called.


When doing a sorted query, the result set is formed in a temporary database
somewhere defined by the environment. In your case, it sounds like the
temporary database is memory based. Once the result set is done with, SQLite
frees the memory, but most allocators keep it for reuse rather than return
it to the OS, so it still shows under the process's virtual memory footprint.

You can tell SQLite to use a disk based temporary database using:
http://sqlite.org/pragma.html#pragma_temp_store

Using this, your memory usage will probably be more stable.
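
For example:

PRAGMA temp_store = FILE;  -- equivalently, temp_store = 1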

However, this certainly isn't a memory leak.


> 
> RobR
> 

Christian
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Where is the database file created?

2008-03-22 Thread Christian Smith
On Sat, Mar 22, 2008 at 06:44:45PM +0900, Fred Janon wrote:
> Hi,
> 
> I read the documentation, features and faq and could not find anything that
> specifies where (which directory) the database file is stored. I launched
> sqlite3.exe on windows without a database name, using the '.databases'
> command, I get:
> 
> sqlite> .databases
> seq  name             file
> ---  ---------------  ----------
> 0    main
> sqlite>
> 
> and since it doesn't show a file, I presume that sqlite does actually
> support in-memory temporary databases? Where is this documented?


SQLite supports file and/or in-memory databases. The file is wherever you
tell it to be. Start the sqlite3.exe command with an (unused) filename and
you'll see that file created when you do any writes to this new database.

You might want to start here:
http://sqlite.org/quickstart.html

> 
> Thanks
> 
> Fred
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Performance

2008-03-06 Thread Christian Smith
On Wed, Mar 05, 2008 at 09:02:17PM -0500, James Kimble wrote:
> 
> > One thing I can highly recommend on embedded systems, especially flash
> > based ones, is turn pragma synchronous to off. Having sqlite write every
> > record modification to the flash, is a stunningly expensive process,
> > even when it's all encapsulated in a large transaction. Let linux handle
> > the synchronisation and write caching for you. A lot less robust, but,
> > the benefits more than outweighed the cost. If you need guaranteed write
> > to disk, then perform the synch yourself.
> 
> > Cost vs benefit and all that guff.
> 
> 
> That's sounds like good advice. I'll do that.
> 
> Working with flash in this way is going to be a challenge. With limited 
> number of writes in a lifetime (this device needs to last approx 20
> years...) I will have to make some major design decisions around how
> I handle the writes.

How important is the persistent data? Is it kept for audit, statistical 
analysis, what? Basically, can you afford to lose it, or at least a subset of
it? If so, then I'd say maintain the data in an in-memory database, and write
out the data to disk (using safe synchronous writes) at whatever intervals
you desire.

I say use safe synchronous writes, as recovery may be an issue if you don't
write safely. Not what you need on an embedded system where user interaction
may be required.

Christian
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Performance.....

2008-03-05 Thread Christian Smith
On Wed, Mar 05, 2008 at 10:21:58AM -0500, [EMAIL PROTECTED] wrote:
> 
> 
> I'm in the process of architecting the software for an embedded Linux system
> that functions as a remote and local user interface to a control system.
> There
> will be a lot of analog (fast) data arriving via SPI bus and I'm thinking of
> using SQLite to store this data in a well organized and easy to access
> manner.
> 
> My main concern is performance. Has anyone had any similar application
> experience they could comment on? I want to be able to insert data arriving
> on the SPI bus and then query the data to update a GUI at a very high rate
> (less than 250ms). This is not real time, so 250ms is desirable but does not
> have to be guaranteed.
> 
> Any thoughts or experience would be appreciated...


We'd need more details for definitive answers, such as whether you're
using disk or FLASH based storage, your data and schema format.

Some things to consider:
- Batch inserts. Given your 250ms update requirement, you could perhaps
  batch data 4 times a second. That'll give you a very high insert rate
  (see the sketch after this list).
- If using disk based storage, mount ext3 with the "data=journal" option.
  The journal is contiguous, so it can be written and sync'ed very quickly.
- Experiment with indexing. Indexes will slow insertions, but improve
  querying.
- If the above is still too slow, and you're happy risking the database in
  the event of a system crash, then you can turn off synchronous updates.
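
To illustrate the batching point, an untested sketch of a periodic flush;
sample_t, the plumbing and the table name are invented for the example:

#include <sqlite3.h>

typedef struct { int channel; double value; } sample_t;  /* invented */

/* Called, say, 4 times a second with whatever arrived on the SPI bus. */
static void flush_batch(sqlite3 *db, const sample_t *buf, int n)
{
    int i;
    sqlite3_exec(db, "BEGIN", 0, 0, 0);
    for (i = 0; i < n; i++) {
        char *sql = sqlite3_mprintf("INSERT INTO samples VALUES(%d, %f)",
                                    buf[i].channel, buf[i].value);
        sqlite3_exec(db, sql, 0, 0, 0);
        sqlite3_free(sql);
    }
    sqlite3_exec(db, "COMMIT", 0, 0, 0);  /* one sync per batch, not per row */
}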

HTH,
Christian
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] using lemon to create a c++ parser class

2007-12-09 Thread Christian Smith

Wilson, Ron uttered:

It has been a very long time since I have tinkered with lex/yacc but my 
current project requires a parser.  I'm thinking of learning lemon. 
Frankly, the sqlite code base is far more complex than what I will 
implement.  Is anyone willing to share a lemon parse.y code example for 
something less complex than SQL?



There are tutorials on the net that might be worth looking at, for 
example:

http://freshmeat.net/articles/view/1270/


  Also, i'm looking for advice on using 
lemon to make a c++ parser class instead of a global c parser function. 
Is it as simple as declaring the following?


%name MyParserClass::Parse

I'm pretty sure I can create the right c++ preamble with %include. 
Also, is there a cheap way to make lemon output a .cpp file besides 
renaming the output file?


Feel free to tell me I'm on a foolish quest if I am.



The C++ quest might be unnecessary, but I wouldn't say foolish. There is 
no problem linking C and C++ code. I'd say just leave the Parse function 
as a C function. You might even be able to make it static, thus limiting 
its scope, and wrap that static function in a class, but why bother?
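
For what it's worth, the C driver for a lemon-generated parser typically
looks something like this (untested sketch; get_next_token() and TokenType
stand in for your lexer and whatever %token_type declares):

void *pParser = ParseAlloc( malloc );
TokenType tokval;
int tok;

while( (tok = get_next_token( &tokval )) != 0 ){
    Parse( pParser, tok, tokval );
}
Parse( pParser, 0, tokval );  /* token code 0 signals end of input */
ParseFree( pParser, free );

Wrapping that in a C++ class is then just a constructor/destructor around
ParseAlloc/ParseFree, if you really want it.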





RW



Christian


--
/"\
\ /ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
 X   - AGAINST MS ATTACHMENTS
/ \

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] multiple connections

2007-09-10 Thread Christian Smith

Joe Wilson uttered:


--- "D. Richard Hipp" <[EMAIL PROTECTED]> wrote:

In 3.5, cache can be shared between
all threads, but shared cache is still disabled by default.  You have to
invoke sqlite3_enable_shared_cache() to turn it on.  I put a comment in
the documentation that we might turn shared cache on by default in
future
releases.  But until I better understand the backwards compatibility
issues,
I think it is probably better to leave it off for now.


There's no quicker way to see if there's a problem than enabling it
in a release by default. ;-)




As we saw when 3.3.0 was released with a non-backward compatible change in 
storing booleans...


Christian

--
/"\
\ /ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
 X   - AGAINST MS ATTACHMENTS
/ \

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Difference between OR and IN [http://www.sqlite.org/php2004/page-052.html]

2007-09-02 Thread Christian Smith
Once you get your first row back (corresponding to a==1), simply halt 
there and sqlite3_finalize() or sqlite3_reset() the statement. You control 
the execution and how many rows you want back.
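
In C API terms, something like this (untested; table and values invented):

sqlite3_stmt *stmt;
sqlite3_prepare_v2(db, "SELECT * FROM t WHERE a = 1 OR b = 10", -1, &stmt, 0);
if( sqlite3_step(stmt) == SQLITE_ROW ){
    /* first matching row - use it, then simply stop stepping */
}
sqlite3_finalize(stmt);  /* or sqlite3_reset(stmt) to run it again later */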



RaghavendraK 70574 uttered:


Hi,

Ok.

Is there any way to tell the VDBE to stop execution the moment it gets a
record from the IN list, rather than continuing to query for all the parameters?

I mean can it work like the C if clause
a = 1;
b = 100;
if( a == 1 or b == 10)
{
}

in the above case a is evaluated but not b. Is this possible in SQL or SQLite?
Pls suggest.

regards
ragha



- Original Message -
From: Trevor Talbot <[EMAIL PROTECTED]>
Date: Sunday, September 2, 2007 1:03 am
Subject: Re: [sqlite] Difference between OR and IN 
[http://www.sqlite.org/php2004/page-052.html]


On 9/1/07, RaghavendraK 70574 <[EMAIL PROTECTED]> wrote:


I want to know why
prepareStatement: select * from xxx where IN (?);
stmt.bind("abc,xyz,123"); is not supported for multiple
values.


It's not supported because it doesn't make sense.  The parametric
binding mechanism is for single values; it's not a macro-like text
replacement system.  With your syntax, how do I bind a set of
integers?  Strings?  Blobs?

One common use for parametric binding (besides convenience) is to
avoid SQL injection attacks.  The example you posted doesn't do that;
you have to manually escape each individual value to make sure it's
valid syntax for the IN() group in text form.  Why even use parameters
in that case?  It's the same amount of work whether you build the
entire SQL statement or not.

All common databases I'm aware of work exactly the same way.

-
To unsubscribe, send email to [EMAIL PROTECTED]
-




-
To unsubscribe, send email to [EMAIL PROTECTED]
-



--
/"\
\ /ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
 X   - AGAINST MS ATTACHMENTS
/ \

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] version 3.5.0 - Segv

2007-09-02 Thread Christian Smith

[EMAIL PROTECTED] uttered:


[EMAIL PROTECTED] wrote:

Ken <[EMAIL PROTECTED]> wrote:

Recompiled with:
gcc -DSQLITE_THREADSAFE -I. -I../src

^^^

Should be -DSQLITE_THREADSAFE=1

The =1 is important in this case.



This problem will likely come up again.  To try and work
around it, I have added a new (experimental) API to the
latest version in CVS.  Call

   sqlite3_threadsafe()

To get back a boolean to indicate whether or not your
build is threadsafe.

May I suggest adding a call to this routine at the
beginning of sqlitetest_thrd35.c and printing an error
message and aborting if the library is not threadsafe?



Is it not worth simply making the library threadsafe by default? There is 
basically no platform supported today that doesn't have some form of 
thread abstraction, the overhead of mutex locking is probably negligible, 
and if someone wants an absolutely no holds barred fastest single threaded 
implementation, then they can provide their own platform abstraction with 
no-op mutexes.
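
For reference, the guard suggested above could be as simple as (sketch):

#include <stdio.h>
#include <stdlib.h>
#include <sqlite3.h>

int main(void)
{
    if( !sqlite3_threadsafe() ){
        fprintf(stderr, "SQLite build is not threadsafe - aborting test\n");
        return EXIT_FAILURE;
    }
    /* ... run the threading tests ... */
    return 0;
}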


Christian

--
/"\
\ /ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
 X   - AGAINST MS ATTACHMENTS
/ \

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] sqlite as server queries

2007-08-07 Thread Christian Smith

Edwin Eyan Moragas uttered:


On 8/6/07, Christian Smith <[EMAIL PROTECTED]> wrote:


2) anybody ever implemented something like a single
process of sqlite doing queries for a lot of networked
clients?



A few people have implemented such a solution. It loses one of the
benefits of SQLite, however, in that SQLite is no longer admin free.


how so?



Because now you need to manage port numbers, multiple processes on 
potentially multiple machines. Not massive overhead, but still not as easy 
as starting or stopping your one process.









3) how big has your sqlite database grown? have you had any trouble
managing the db? any bad experiences as to stability of
the db file?



Stability of the file? In what sense? Compatibility? Or resistance to
corruption? Or size, perhaps?


resistance to corruption in particular. thinking about it, this
may be an OS issue but given that the OS is ok, how
does sqlite handle it?



SQLite uses a rollback journal along with timely OS level syncs to ensure 
the database is always in a consistent or recoverable state. SQLite can 
survive OS or hardware failure so long as the filesystem remains intact.





thank you for the response.

./e




Christian

--
/"\
\ /ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
 X   - AGAINST MS ATTACHMENTS
/ \

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] [SQLite improve productivity]

2007-08-06 Thread Christian Smith

Igor Mironchick uttered:

Thx, very helpful reply. One more question: do I need to issue "END" after 
"BEGIN", or is "COMMIT" enough?




You can use "COMMIT". Probably should do, as it is more descriptive about 
what is happening. Check the docs for transaction commands:

http://www.sqlite.org/lang_transaction.html


Christian

--
/"\
\ /ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
 X   - AGAINST MS ATTACHMENTS
/ \

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] sqlite as server queries

2007-08-06 Thread Christian Smith

Edwin Eyan Moragas uttered:


hi group,

i have several small questions for the group any
experiences or thoughts shared would be greatly
appreciated.

1) anybody used sqlite as a sql server? i'm thinking
of say using the embedded sqlite in PHP5 or similar.

2) anybody ever implemented something like a single
process of sqlite doing queries for a lot of networked
clients?



A few people have implemented such a solution. It loses one of the 
benefits of SQLite, however, in that SQLite is no longer admin free.





3) how big has your sqlite database grown? have you had any trouble
managing the db? any bad experiences as to stability of
the db file?



Stability of the file? In what sense? Compatibility? Or resistance to 
corruption? Or size, perhaps?





i am asking all of this because i'm seriously considering
sqlite as the sql server for a project.

how i'll implement it looks something like this:

components of app:
embedded web server
sqlite
some scripting language

there will only be one sqlite process which will be forked
when the web server is launched. queries to sqlite will
be sent to the sqlite process via sockets.



You can do this, but you may find it easier to embed SQLite right into 
your app, using a binding for whatever language your app is written in. 
That is how it's designed to be used. It'll also be faster that way as 
well, and easier to manage.





i can see that the queries will be sequential. no problems there. i'm 
not worried with speed at the moment. i just want to know if this has 
been done before and i'd like to solicit wisdom from the group.



I have thoughts on wrapping SQLite with FreeTDS, in order to provide 
networked access to legacy apps that expect a TDS server to talk to. But 
that is more for legacy reasons. You don't have this legacy burden by the 
sounds of it, so just embed SQLite.





thank you.

./e




Christian

--
/"\
\ /ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
 X   - AGAINST MS ATTACHMENTS
/ \

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] [SQLite improve productivity]

2007-08-05 Thread Christian Smith
This is the common issue of high-latency transactions. SQLite has a high 
per-transaction overhead, which can be amortized across multiple INSERTs 
or UPDATEs to improve the average INSERT rate. You are doing a single 
INSERT per transaction, so wrap multiple INSERTs inside a single "BEGIN" 
... "END" transaction.


See:
http://www.sqlite.org/cvstrac/wiki?p=PerformanceConsiderations
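
Applied to the code quoted below, that means starting the transaction before
the loop and committing after it; an untested sketch of the shape, reusing
the m_db and errors variables from the quoted code:

sqlite3_exec( m_db, "BEGIN", 0, 0, &errors );

/* ... the existing per-row INSERT loop, unchanged ... */

sqlite3_exec( m_db, "COMMIT", 0, 0, &errors );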

Christian

Igor Mironchick uttered:


Hi.

How can I improve productivity of this code:

a_messenger_t::data_buff_t --> std::deque< some_struct >

//

  char * errors = 0;

  for( a_messenger_t::data_buff_t::const_iterator it = msg.m_buff.begin(),
  last = msg.m_buff.end(); it != last; ++it )
  {
  // Converting long to std::string...
  std::string sec( itos( it->m_time.sec() ) );
  std::string usec( itos( it->m_time.usec() ) );

  // Columns in 'data' table defined like (INTEGER, INTEGER, INTEGER, TEXT)
  char * sql = sqlite3_mprintf( "INSERT INTO data VALUES( '%q', '%q', '%q', '%q' )",
  m_sources_map[ it->m_source ].m_sid.c_str(),
  sec.c_str(), usec.c_str(), it->m_value.c_str() );

  // m_db --> sqlite3*
  int ret = sqlite3_exec( m_db, sql, 0, 0, &errors );

  if( ret != SQLITE_OK )
  {
  ACE_DEBUG( ( LM_INFO, ACE_TEXT( "[%D] %M -- %C\n" ), errors ) );
  sqlite3_free( errors );
  }

  sqlite3_free( sql );
  }

Any idea? This method is very slow - about 1 kB of new data per second in my DB.




--
/"\
\ /ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
 X   - AGAINST MS ATTACHMENTS
/ \

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] In-Memory Database: Delete rows on a Table increases the memory usage.

2007-08-03 Thread Christian Smith

Scott Derrick uttered:

are you saying this is a memory leak? 
sqlite never gives back the unused memory?



No, libc never gives back the memory.

It is not leaked because the malloc implementation keeps a reference to 
all the free'd heap memory in tracking it for future requests.





Christian Smith wrote:

Lokesh Babu uttered:


Hello Folks,

When I perform the DELETE operation on a Table using In-Memory Database
(":memory:"), the memory usage increases.

I tried using PRAGMA auto_vacuum=1; /* result - nothing works */
Even I tried using VACUUM table_name; /* this too isn't work */

if I perform a DROP TABLE operation, memory usage drops. Deleting the
rows doesn't help.

Anybody please help me in resolving this.



Memory usage goes up because SQLite must temporarily store copies of the 
old pages that store the deleted rows, in order to allow a future rollback 
if required.


Once a commit is done, the old pages are free'd, but by that time the 
memory footprint has already increased. Not many libc implementations 
release heap memory back to the OS once it's allocated.





Thanks in advance,

Lokee



--
/"\
\ /ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
 X   - AGAINST MS ATTACHMENTS
/ \


-
To unsubscribe, send email to [EMAIL PROTECTED]
-










--
/"\
\ /ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
 X   - AGAINST MS ATTACHMENTS
/ \

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] In-Memory Database: Delete rows on a Table increases the memory usage.

2007-08-02 Thread Christian Smith

Lokesh Babu uttered:


Hi Smith,

Is there any way I can free the old pages without using the rollback
feature?
Because I'm much concerned about memory usage.

As soon as I delete some records, it should free up the memory.



Use a libc that has a malloc implementation that releases excess memory 
back to the operating system. I don't know of specific instances of libc 
that do this, so I can't help further, sorry.





Thanks


On 8/1/07, Christian Smith <[EMAIL PROTECTED]> wrote:


Lokesh Babu uttered:


Hello Folks,

When I perform the DELETE operation on a Table using In-Memory Database
(":memory:"), the memory usage increases.

I tried using PRAGMA auto_vacuum=1; /* result - nothing works */
Even I tried using VACUUM table_name; /* this too isn't work */

if I perform a DROP TABLE operation, memory usage drops. Deleting the
rows doesn't help.

Anybody please help me in resolving this.



Memory usage goes up because SQLite must temporarily store copies of the
old pages that store the deleted rows, in order to allow a future rollback
if required.

Once a commit is done, the old pages are free'd, but by that time the
memory footprint has already increased. Not many libc implementations
release heap memory back to the OS once it's allocated.

--
/"\
\ /ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
 X   - AGAINST MS ATTACHMENTS
/ \


-
To unsubscribe, send email to [EMAIL PROTECTED]

-






--
/"\
\ /ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
 X   - AGAINST MS ATTACHMENTS
/ \

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] In-Memory Database: Delete rows on a Table increases the memory usage.

2007-08-01 Thread Christian Smith

Lokesh Babu uttered:


Hello Folks,

When I perform the DELETE operation on a Table using In-Memory Database
(":memory:"), the memory usage increases.

I tried using PRAGMA auto_vacuum=1; /* result - nothing works */
Even I tried using VACUUM table_name; /* this too isn't work */

if I perform a DROP TABLE operation, memory usage drops. Deleting the rows
doesn't help.

Anybody please help me in resolving this.



Memory usage goes up because SQLite must temporarily store copies of the 
old pages that store the deleted rows, in order to allow a future rollback 
if required.


Once a commit is done, the old pages are free'd, but by that time the 
memory footprint has already increased. Not many libc implementations 
release heap memory back to the OS once it's allocated.





Thanks in advance,

Lokee



--
/"\
\ /ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
 X   - AGAINST MS ATTACHMENTS
/ \

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Milliseconds

2007-07-15 Thread Christian Smith

Scott Baker uttered:


Christian Smith wrote:

If you use the julianday representation, the integer component is the
number of days since "noon in Greenwich on November 24, 4714 B.C", with
the fractional part being the fraction of that day. Hence, the
resolution is determined by the fractional component of the real number.
Now, in the UK, I get the following:
sqlite> select julianday('now');
2454295.1407767

The integer component consumes probably 21 bits of the available 52 bits
mantissa of an IEEE-754 64-bit real. That leaves 31 bits for the
fractions of a day, giving a resolution of 1/24855 of a second:
2^31/(60*60*24) = 24855.134814814814814814814814815

Plenty enough for milli-second resolution.

Probably not very good for embedded applications if an FPU is not
available.


I'm a little confused by the math... help me work this out.

sqlite> SELECT julianday('now');
2454295.20404931

That gives me days since the Julian epoch. If I multiply by 86400 I
should get seconds since the Julian epoch.

sqlite> SELECT julianday('now') * 86400;
212051105903.613

That leaves me three decimal points of precision for seconds. So
that's thousandths of a second? Where do you get 24000ths of a second?



The floating point representation used by SQLite maps to the IEEE 754 
64-bit representation, which has 1 bit for sign, 11 bits for for the 
exponent, leaving 52 bits (effectively 53 bits including the implied 
leading 1 binary digit) for the precision.


Given that, the 2454295.20404931 uses 21 bits for the integral part of the 
number (before the floating point) including the implied initial 1 digit. 
That leaves 52-21 bits of precision, or 31 bits for the fraction of a day.


So, you have 1/2^31 days resolution, or 86400/2^31 seconds resolution. 
That is 1/24855.134814814814814814814814815 second resolution.


Christian


--
/"\
\ /ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
 X   - AGAINST MS ATTACHMENTS
/ \

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Get the data from previous months: please, help me to optimize the query...

2007-07-13 Thread Christian Smith

Joe Wilson uttered:


CREATE TABLE 'Months'
(
  IDMonth  INTEGER PRIMARY KEY NOT NULL,
  MonthRef INTEGER
);

(where MonthRef is the date of the first day of the month - created in the code)



Using what epoch?




CustomerData
--
CREATE TABLE 'CustomerData'
(
IDCustomerData  INTEGER PRIMARY KEY NOT NULL,
IDMonth INTEGER,
NdgSingolo TEXT NOT NULL DEFAULT '0',
NdgCliente TEXT NOT NULL DEFAULT '0',
FatturatoNdg REAL DEFAULT 0 ,
FatturatoGruppo REAL DEFAULT 0  ,
MargineIntermediazioneLordo REAL DEFAULT 0  ,
MargineInteresse REAL DEFAULT 0 ,
MargineServizi REAL DEFAULT 0   ,
RaccoltaDirettaSM REAL DEFAULT 0,
RaccoltaIndirettaSM REAL DEFAULT 0  ,
ImpieghiSM REAL DEFAULT 0   ,
RaccoltaDirettaSP REAL DEFAULT 0
);

(where IDMonth is the foreign key to the Months table).

CustomerData contains the data of a single Customer (NdgSingolo), for 
the selected month ID. What I need to do is to get "some" data in a 
record from the previous year, and from the end of the previous year. 
For instance, if the current month is March 2007, then I need the data 
of March 2006, and of December 2006. To accomplish this, I created 
these two views:


_VCustDataMonths
--
CREATE VIEW _VCustDataMonths AS
SELECT * FROM CustomerData A LEFT OUTER JOIN Months B ON A.IDMonth = B.IDMonth;

_VCustomerData_1
--
CREATE VIEW _VCustomerData_1 AS
SELECT AC.*,
   M1.MargineIntermediazioneLordo AS MargineIntermediazioneLordo_m1,
   AP.MargineIntermediazioneLordo AS MargineIntermediazioneLordo_ap,
   M1.MargineInteresse            AS MargineInteresse_m1,
   AP.MargineInteresse            AS MargineInteresse_ap
FROM _VCustDataMonths AC
 LEFT OUTER JOIN _VCustDataMonths M1 ON AC.NdgSingolo = M1.NdgSingolo AND 
AC.NdgCliente =
M1.NdgCliente AND M1.MonthRef = date( AC.MonthRef, '-1 year' )
 LEFT OUTER JOIN _VCustDataMonths AP ON AC.NdgSingolo = AP.NdgSingolo AND 
AC.NdgCliente =
AP.NdgCliente AND AP.MonthRef = date( AC.MonthRef, 'start of year', '-1 month' 
);

Now, the query _VCustomerData_1 (that is the one that I need) takes 
*145.23 seconds* to run!! (with about 4000 records in the CustomerData 
table). This is really too much...


I have indexes in the Months and CustomerData tables for the fields 
NdgSingolo and NdgCliente...


How could I increase the performance of this query to get reasonable 
results??


Much faster - add 3 new fields in CustomerData which you can populate
via SQLite's trigger mechanism, or an explicit UPDATE prior to your
SELECT:

 MonthRef    -- populate from Months table
 MonthRef2   -- date(Months.MonthRef, '-1 year')
 MonthRef3   -- date(Months.MonthRef, 'start of year', '-1 month')

This way you can avoid several joins with the Months table
and avoid the use of the slow view.



This is leaving you open to data errors. Better to use a single IDMonth 
and calculate the join values at run time. Even better, avoid the MonthRef 
table completely, and use the first day of the month directly.



My take:

CREATE TABLE 'CustomerData'
(
IDCustomerData  INTEGER PRIMARY KEY NOT NULL,
IDMonth INTEGER,
NdgSingolo TEXT NOT NULL DEFAULT '0',
NdgCliente TEXT NOT NULL DEFAULT '0',
FatturatoNdg REAL DEFAULT 0 ,
FatturatoGruppo REAL DEFAULT 0  ,
MargineIntermediazioneLordo REAL DEFAULT 0  ,
MargineInteresse REAL DEFAULT 0 ,
MargineServizi REAL DEFAULT 0   ,
RaccoltaDirettaSM REAL DEFAULT 0,
RaccoltaIndirettaSM REAL DEFAULT 0  ,
ImpieghiSM REAL DEFAULT 0   ,
RaccoltaDirettaSP REAL DEFAULT 0
);
CREATE INDEX CustomerDataByMonth ON CustomerData(IDMonth,NdgCliente,NdgSingolo);

DROP VIEW IF EXISTS _VCustomerData_1;
CREATE VIEW _VCustomerData_1 AS
SELECT AC.*,
M1.MargineIntermediazioneLordo AS MargineIntermediazioneLordo_m1,
AP.MargineIntermediazioneLordo AS MargineIntermediazioneLordo_ap,
M1.MargineInteresse AS MargineInteresse_m1,
AP.MargineInteresse AS MargineInteresse_ap
FROM CustomerData AC
LEFT OUTER JOIN CustomerData M1
ON  AC.NdgSingolo = M1.NdgSingolo
AND AC.NdgCliente = M1.NdgCliente
AND M1.IDMonth = date(AC.IDMonth,'-1 year')
LEFT OUTER JOIN CustomerData AP
ON  AC.NdgSingolo = AP.NdgSingolo
AND AC.NdgCliente = AP.NdgCliente
AND AP.IDMonth = date(AC.IDMonth,'start of year', '-1 month');


Now you have the same speed as Joe's solution (similar query plan):
sqlite> explain query plan select * from _VCustomerData_1 ;
0|0|TABLE CustomerData AS AC
1|1|TABLE CustomerData AS M1 WITH INDEX 

Re: [sqlite] Milliseconds

2007-07-13 Thread Christian Smith

John Stanton uttered:

The Sqlite date/time routines have a resolution of seconds, not milliseconds. 
If you want milliseconds from SQL implement your own user defined functions 
which give you milliseconds.  You would access the time functions using the 
API of the underlying OS.


You might choose to implement your underlying storage as a 64 bit integer.



If you use the julianday representation, the integer component is the 
number of days since "noon in Greenwich on November 24, 4714 B.C", with 
the fractional part being the fraction of that day. Hence, the resolution 
is determined by the fractional component of the real number. Now, in the 
UK, I get the following:

sqlite> select julianday('now');
2454295.1407767

The integer component consumes probably 21 bits of the available 52 bits 
mantissa of an IEEE-754 64-bit real. That leaves 31 bits for the fractions 
of a day, giving a resolution of 1/24855 of a second:

2^31/(60*60*24) = 24855.134814814814814814814814815

Plenty enough for milli-second resolution.

Probably not very good for embedded applications if an FPU is not 
available.


Christian

--
/"\
\ /ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
 X   - AGAINST MS ATTACHMENTS
/ \

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Converting from 2.8.x to 3.x?

2007-06-20 Thread Christian Smith

Gilles Ganault uttered:


Hello

As we move from a 2.8.x file-based solution to a 3.x c/s solution, we'll have 
to convert databases from one format to the other.


What's the easiest way to do this?



sqlite olddb .dump | sqlite3 newdb




Thank you
G.



Christian

--
/"\
\ /ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
 X   - AGAINST MS ATTACHMENTS
/ \

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Cache invalidation after insert statements.

2007-06-19 Thread Christian Smith

pompomJuice uttered:



I suspected something like this, as it makes sense.

I have multiple binaries/different connections ( and I cannot make them
share a connection ) using this one lookup table and depending on which
connection checks first, it will update the table.



What is your working set like? Are all processes on the same machine?

Sounds like you might benefit from increasing the amount of RAM on this 
machine. You may be thrashing the OS cache, as your lookup process hogs 
memory for its own cache, pushing out old pages from the OS filesystem 
cache.


If RAM upgrade is not feasible, then try reducing the cache of the lookup 
process, so that the OS cache isn't forced out of memory so easily. Then, 
when the lookup process has a cache miss, its missed page is more likely 
to be in the OS memory cache, and copied to the lookup process at memory 
copy speed.


As you may have guessed, choosing the correct cache size for the lookup 
process may involve several tuning iterations.
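
As a starting point for that tuning (the value is only an example; the
default is 2000 pages):

PRAGMA cache_size = 500;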



Christian

--
/"\
\ /ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
 X   - AGAINST MS ATTACHMENTS
/ \

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Lemon Parser - Modular & Extensible ?

2007-06-19 Thread Christian Smith

Uma Krishnan uttered:

Hey, there's no need to be offensive. I did not mean to be critical. Far 
from it, it does a great job (far more than I'm capable of producing). 
What I was trying to find out was whether it is possible for a .y file to 
be broken up such that it can be built on top of other .y files.



Sorry if I came across as offensive. That was not the intention. I was 
just a little confused about the question.


I think lemon can only handle single input files. But you can include 
C source into your output C file using the %include directive. Check out 
the documentation, if you haven't already done so, at:

http://www.hwaci.com/sw/lemon/lemon.html

Now, what may draw some critical analysis is top posting and hijacking an 
existing thread for a new topic... [snip]





 Not sure if this is the right group. But could not find a lemon parser 
user group.



This is the best group to ask. While not tied to SQLite, it appears to be 
maintained as part of SQLite (but I may be wrong.)


Christian

--
/"\
\ /ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
 X   - AGAINST MS ATTACHMENTS
/ \

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Lemon Parser - Modular & Extensible ?

2007-06-18 Thread Christian Smith

Uma Krishnan uttered:


Hello:

 Is lemon parser modular and extensible?



Extensible to do what? It generates parsers, and is self contained. It 
does a single job, and does it well. What more could you ask for?





 Thanks

 Uma

Asif Lodhi <[EMAIL PROTECTED]> wrote:
 Hi Everybody,

I have just joined this mailing list as Sqlite looks like a good
software solution to my needs. What I need right now is RE-assurance
of "crash-recovery" that is mentioned on your front page. So, I would
be thankful if you experts would give me an "accurate" and fair
picture of the crash-recovery aspects of SQLite - without any hype.

--
Best regards,

Asif

-
To unsubscribe, send email to [EMAIL PROTECTED]
-





--
/"\
\ /ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
 X   - AGAINST MS ATTACHMENTS
/ \

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Recovery After Crash

2007-06-18 Thread Christian Smith

Asif Lodhi uttered:


Hi Kees,

Thanks for replying.

On 6/17/07, Kees Nuyt <[EMAIL PROTECTED]> wrote:

>... thankful if you experts would give me an "accurate" and fair
>picture of the crash-recovery aspects of SQLite - without any hype.

I'm not sure if you would qualify this as hype, but sqlite is
used in many end-user products, ranging from operating systems ..


Basically, I intend to use sqlite's data capacity as well - I mean
2^41 bytes - for reasonably sized databases. Well, not as much as 2^41
but somewhere around 2^32 to 2^36 bytes. I would like to know if the
"crash-recovery" feature will still work and the high-performance
mentioned will be valid even if I have this kind of a data volume. And
yes, I am talking about highly normalized database schemas with the number
of tables exceeding 80. Please reply assuming I tend to come up with
optimized db & query designs - keeping in view general rules for
database/query optimizations.



SQLite is not optimised for large datasets. Data recovery will work, as 
advertised, in the general case including large datasets, but the memory 
footprint of the library increases as the size of the database grows.


Consider using larger pages than the default 1024 bytes to limit the 
number of pages SQLite must track.


Other than that, the performance should degrade predictably with 
increasing datasets, given that SQLite uses the same BTree(+) based 
algorithms used by most database engines.





--
Thanks again and best regards,

Asif

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



--
/"\
\ /ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
 X   - AGAINST MS ATTACHMENTS
/ \

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Journal File Optimization

2007-06-18 Thread Christian Smith

Andre du Plessis uttered:


How can one optimize the creation of the journal file? The problem is
this: our system is an event based one, and each message needs to be
inserted and committed to the database (guaranteed), which results in a
commit per insert. This was obviously unacceptably slow, and according
to the lists you need, by design, to do more bulk transactions. With
some effort I modified it with a combination of bulk and temporary
tables to do quite a few at a time, but the problem remains that
committing is terribly slow.




What sort of latency is acceptable for commits? How many event sources are 
there? How do you track commits against the event source?


If you require guaranteed single inserts, a client/server database may be 
better performing, as writes can be better optimised in a redo/undo
journal used by client/server systems, rather than the undo journal used 
by SQLite.






I'm thinking the journal file, the fact that it is created written and
deleted each time which is slowing it down, Is there not a way to create
and set up a permanent journal file pre-allocated to a certain size, so
sqlite does not have to go through the OS each time to create and delete
this file?



If you require multi-process access to the database, then Dan's suggestion 
may not work, and you'll have to optimise the sync case. You can do this, 
as suggested elsewhere, by turning off synchronous updates. This leaves you 
at the mercy of the OS to guard against crashes.


Depending on your OS, you may be able to optimise OS sync. Solaris ZFS is 
a tree based FS, a bit like WAFL by NetApp. Synchronous writes are 
aggregated and written to minimize seeks. I've not done any benchmarking 
on ZFS, so YMMV (time to power up the Solaris Express partition, me 
thinks.)


Linux ext3 can write data to the same journal that FS meta-data is written 
to, which can greatly enhance single insert speeds due to the journal 
being written at disk IO speed without seeks. Tests I've done indicate a 
doubling of performance over regular ordered data writing that is the 
default for ext3.


Finally, on NetBSD (where LFS is still actively developed) you may see 
performance improvements using LFS, for similar reasons to the ext3 case 
above. I've not, however, tried that recently, so again YMMV.


Of course, if you're not running Solaris, Linux or NetBSD, you may be 
stuck as not many other OS/FS support such optimisations.







Along the same lines of this question, is there a way to fix the initial
size of the DB, ie set it to pre-allocate 700mb for instance so that no
growing of the db file is needed until the space is exceeded, may also
speed things up.



Probably won't improve speed that much, especially as you approach your 
working database size. Avoid vacuuming your database, so that free pages 
are recycled and the database size will stabilise. If your dataset is 
likely to constantly grow without bounds, then SQLite may not be your 
optimal choice in the long run, and a client/server database may provide 
better performance over the long term.


You might also try increasing your page size, up to the maximum of 32768, 
so that new page allocations are required less.









Thank you very much in advance.






--
/"\
\ /ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
 X   - AGAINST MS ATTACHMENTS
/ \

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Proper way to transfer a live sqlite database

2007-06-18 Thread Christian Smith

Rich Rattanni uttered:


The databases will be in flux, and I didn't necessarily want to suspend
the application that performs reads and writes on the database.
A simple copy worries me because it seems like messing with SQLITE on
the file level is dangerous since you circumvent all the protection
mechanisms that provide fault tolerance.  I didn't want to have to
worry about whether the database has a journal file that needs copying, or
any other situation like that.  I figured using the SQLITE API to do
the copy would afford me some protection against corruption.



You're right to be cautious. Never copy an in use database if that 
database could possibly be updated.


If you open the database and obtain a SQLite read lock on it, you can be 
sure it is not going to be modified, and it can be safely copied at the OS 
level.


The best way of doing this is to execute a 'BEGIN IMMEDIATE', copy the 
database file, then execute a 'ROLLBACK' to end the transaction.


To limit the time the database is locked, I suggest copying the file to a 
local filesystem first, then transferring across the network after the 
lock is released.
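
An untested sketch of the whole sequence (paths and the copy command are
purely illustrative):

#include <stdlib.h>
#include <sqlite3.h>

void snapshot(void)
{
    sqlite3 *db;
    sqlite3_open("app.db", &db);
    sqlite3_exec(db, "BEGIN IMMEDIATE", 0, 0, 0);  /* blocks writers */
    system("cp app.db /tmp/app.db.snapshot");      /* raw copy while locked */
    sqlite3_exec(db, "ROLLBACK", 0, 0, 0);         /* release the lock */
    sqlite3_close(db);
}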


Christian



--
/"\
\ /ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
 X   - AGAINST MS ATTACHMENTS
/ \

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Does Attach improve Concurrency ?

2007-06-05 Thread Christian Smith

Eduardo Morras uttered:


At 19:32 01/06/2007, you wrote:


When you have a connection with multiple attached databases and the
connection acquires an exclusive lock, does it always lock all attached
databases or does it keep track of which databases require the lock?  Does
using separate databases and attaching them improve concurrency (by
providing finer-grained locking)?


It locks all attached databases. No, it does not improve concurrency, but 
it can improve speed if the database files are on more than one physical 
disk. The sqlite bottleneck is I/O access in most cases.




Tip for performance on Linux and ext3.

Mount your database filesystem with the "data=journal" option so that data 
is written to the journal before being written in place. This reduces latency 
drastically, as the journal is contiguous and written at the full IO rate 
of the device without seeks. An example of the effect it has on a test I 
did can be found here:

http://www.osnews.com/permalink.php?news_id=16522_id=184137
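
For reference, the corresponding /etc/fstab entry might look like this
(device and mount point are examples):

/dev/sdb1  /data  ext3  data=journal  0 2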


Christian

--
/"\
\ /ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
 X   - AGAINST MS ATTACHMENTS
/ \

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Does Attach improve Concurrency ?

2007-06-05 Thread Christian Smith

[EMAIL PROTECTED] uttered:




When you have a connection with multiple attached databases and the
connection acquires an exclusive lock, does it always lock all attached
databases or does it keep track of which databases require the lock?


1st process:
C:\Documents and Settings\Administrator\Pulpit>sqlite3.exe a.db
SQLite version 3.3.17
Enter ".help" for instructions
sqlite> attach 'b.db' as b;
sqlite> begin exclusive;

2nd:
C:\Documents and Settings\Administrator\Pulpit>sqlite3.exe b.db
SQLite version 3.3.17
Enter ".help" for instructions
sqlite> create table tab(col);
SQL error: database is locked

So it locks all attached databases.



Not by default. BEGIN EXCLUSIVE is not the default transaction locking 
mode. By default, BEGIN will not lock anything until needed, in which case 
you can have different sessions locking different attached databases in 
different ways. SQLite will not block readers until it gets an EXCLUSIVE 
lock, which is usually upon committal of a transaction, or the spillage of 
pages from a full page cache. But that will only be on the database that 
is being updated. BEGIN EXCLUSIVE is explicit and applied to all attached 
databases. But upgrading an attached database to an EXCLUSIVE lock does 
not upgrade locks on other attached databases.
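
So, repeating the transcript above with a plain BEGIN (locking behaviour as
I understand it):

sqlite> attach 'b.db' as b;
sqlite> begin;
sqlite> insert into b.tab values(1);   -- assuming some table tab exists in b.db

At this point only b.db should be locked against other writers; a second
process can still update a.db.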


So, by default, using multiple attached databases may increase 
concurrency, but watch for deadlock conditions.








Does
using separate databases and attaching them improve concurrency (by
providing finer-grained locking)?


Yes it does. You can open a connection to a memory database, store each table
in a separate database and attach them if needed. I've already changed
sqlite to do it transparently. It still has a lot of bugs, but it has
already proven to work. But if you don't mind attaching databases manually,
original sqlite will work perfectly.



As stated above, not needed if you avoid using BEGIN EXCLUSIVE.


Christian

--
/"\
\ /ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
 X   - AGAINST MS ATTACHMENTS
/ \

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



RE: [sqlite] How to restrict the peak heap usage during multiple inserts and updates?

2007-05-29 Thread Christian Smith

Kalyani Tummala uttered:


I am planning to use sqlite as a database for storing and retrieving
media data of about 5-10k records in a device whose main memory is
extremely small. A sequence of insert statements increases the heap
usage to nearly 70K (almost the saturation point), which crashes my
application. I want to restrict this to 30K.

I tried closing the database and reopening it after some inserts, but to no avail.

I have observed that, when I open the database with about 1K to 2K
records in it, inserts and updates take more heap, and usage grows
faster, than with a database of less than 1k records in it.



When updating the database, SQLite will keep a bitmap representing 
modified pages in memory, so as to manage the rollback journal. Therefore, 
making your page size smaller will require more bits to track 
all the potentially modified pages in the database file. Instead, using 
the stock SQLite parameters, increase the page size and reduce the number 
of buffers. Increasing the page size will reduce the number of pages being 
tracked, as well as increasing the number of rows in each page. But, 
depending on how big your database is, this may not be a significant 
amount of memory. How big is a typical database?
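
As a concrete starting point (values are examples only, and page_size must
be set before the database is first created):

PRAGMA page_size = 4096;   -- fewer, larger pages to track
PRAGMA cache_size = 10;    -- fewer in-memory buffers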





My objective is to reduce the peak heap usage during inserts, updates
and also deletes with little or no performance degradation.



You'll get a certain amount of slowdown when reducing the number of 
available buffers, as you'll be spilling dirty buffers to disk more often.





Please suggest me if I can do anything to do so.



You could try profiling memory usage before randomly changing parameters. 
In the source, perhaps on a test machine rather than the target platform, 
replace sqliteMalloc with a macro to log memory allocation, along with 
source file and line number information, something like what is done now 
with memory debugging turned on (see src/malloc.c and src/sqliteInt.h).





Thank you in advance
Kalyani





-Original Message-
From: John Stanton [mailto:[EMAIL PROTECTED]
Sent: Tuesday, May 29, 2007 6:51 PM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] How to restrict the peak heap usage during
multiple inserts and updates?

Since you are only using part of Sqlite have you considered using a much

smaller footprint storage system which only implements the functions you

are using?

Kalyani Tummala wrote:

Hi joe,

Thanks for your response.

In order to reduce the footprint size, I have bypassed the parser completely
and use byte codes directly, as my schema and queries are almost fixed at
compile time. Hence I am not using sqlite3_prepare().

The following is the schema and inserts I am using.
CREATE TABLE OBJECT(

PUOI    INTEGER  PRIMARY KEY,
Storage_Id  INTEGER,
Object_Format   INTEGER,
Protection_Status   INTEGER,
Object_Size INTEGER,
Parent_Object   INTEGER,
Non_Consumable  INTEGER,
Object_file_name    TEXT,
Name    TEXT,
File_Path   TEXT
);

CREATE TABLE AUDIO(

PUOI    INTEGER PRIMARY KEY,
Use_Count   INTEGER,
Audio_Bit_Rate  INTEGER,
Sample_Rate INTEGER,
Audio_Codec_Type    INTEGER,
Number_of_Channels  INTEGER,
Track   INTEGER,
Artist  TEXT,
Title   TEXT,
Genre   TEXT,
Album_Name  TEXT,
File_Path   TEXT
);

INSERT INTO OBJECT VALUES (
 7, 65537, 12297, 0,
 475805, 6, 0,
 'ANJANEYASTOTRAM.mp3', NULL,
'C:\\MTPSim\\Store0\\Music\\Artist\\Album\\ANJANEYASTOTRAM.mp3'
);


INSERT INTO AUDIO VALUES (
 7, 6, 144100, 0,
 0, 0, 6,
 NULL, NULL, NULL, NULL,
'C:\\MTPSim\\Store0\\Music\\Artist\\Album\\ANJANEYASTOTRAM.mp3'
);

INSERT INTO OBJECT VALUES (
 8, 65537, 12297, 0,
 387406, 6, 0,
 'BHADRAM.mp3', NULL,
'C:\\MTPSim\\Store0\\Music\\Artist\\Album\\BHADRAM.mp3'
);


 INSERT INTO AUDIO VALUES (
 8, 6, 144100, 0,
 0, 0, 6,
 NULL, NULL, NULL, NULL,
'C:\\MTPSim\\Store0\\Music\\Artist\\Album\\BHADRAM.mp3'
);


Warm regards
Kalyani

-Original Message-
From: Joe Wilson [mailto:[EMAIL PROTECTED]
Sent: Tuesday, May 29, 2007 9:42 AM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] How to restrict the peak heap usage during
multiple inserts and updates?



I am working at porting sqlite ( ver 3.3.8 ) on an embedded device with
extremely low main memory.

I tried running select queries on the tables (with about 2k records each
having about 5 strings) and they do well within 20kB of runtime heap
usage.

But, when I try new insertions, the heap usage grows tremendously (about
70 kB at peak).



Perhaps preparing the statements (sqlite3_prepare) might decrease RAM
use somewhat.

Can you post an example of your schema and these insert statements?









Re: [sqlite] Auto Vacuum and fragmentation?

2007-04-16 Thread Christian Smith

Ludvig Strigeus uttered:


Assuming I have an autovacuum database that primarily stores 32k blobs. If I
add/remove lots of rows, will this lead to excessive fragmentation of the
overflow chains, or does Sqlite do anything to try to unfragment the pages
belonging to a single row?



I believe auto-vacuum will simply shuffle data so as to compact free 
space, rather than ordering table and overflow data into contiguous 
regions. Hence, you'll still have fragmentation.





Thanks,
Ludvig






Re: [sqlite] Strange performance behavior

2007-03-19 Thread Christian Smith

Hubertus uttered:


Dear list,
sorry to just come up with another performance question. I have built a
still fairly small database with one table. It has about 650,000 rows, 75
columns and is at the moment about 650 MB. It runs on an Intel Pentium M
with 2 GHz. The laptop runs Suse 10.2 and does basically nothing but this
database. sqlite3 -version is 3.3.8

This is the schema:
CREATE TABLE 'data'('nr' INTEGER PRIMARY KEY,
 'year' INTEGER,
 'month' INTEGER,
 'day' INTEGER,
 'sec' REAL,
 'campId' TEXT,
 'flightNr' INTEGER,
 '1' REAL,
 ...
 '71' REAL
 );
CREATE INDEX sec on data(year,month,day,sec);



What a nasty schema! What exactly do the '1'...'71' fields represent? Are 
they all used in each row? If not, you might be better off putting the 
data in a separate table and joining the data.





I experience a big variability of time a query needs:
~database> time sqlite3  data.db 'select "14" from data where campId="polstar98" and 
"14">-;' >/dev/null
real0m3.115s
user0m1.748s
sys 0m1.368s
~/database> time sqlite3  data.db 'select "14" from data where campId="polstar98" and 
"14">-;' >/dev/null
real0m3.139s
user0m1.756s
sys 0m1.380s
~/database> time sqlite3  data.db 'select "42" from data where campId="stream94" and 
"14">-;' >/dev/null
real0m50.227s
user0m4.692s
sys 0m3.028s

I've tried it with and without index and at different times. Most of the 
time such simple queries take about 35 sec. Why is that so and what can 
be done? I can live with 3 sec of response but not up to one minute and 
the database is still not complete. Would pytables with hdf5 be an 
alternative? Knowing that this is probably not the right place to ask...



The rows are probably quite big (each real value is 8 bytes), and would 
not fit in a single low-level SQLite BTree cell, but instead overflow 
using an overflow page per row. As well as making access to columns in the 
overflow page inefficient, this is also massively space inefficient, as 
the overflow page is not shared and most of its space is probably wasted.





Tips, suggestions, recommendation are gratefuly appreciated!



If you can't change the schema, your best bet is to increase the page size 
of the database, which will hopefully allow you to keep entire rows 
together without using overflow pages. Create a new database, and use:

PRAGMA page_size=4096;

then import your existing data from your old database. Something like:

$ rm new.db
$ sqlite3 new.db
sqlite> PRAGMA page_size=4096;
sqlite> ATTACH 'old.db' AS old;
sqlite> CREATE TABLE data AS SELECT * FROM old.data;



Thanks in advance

Hubertus



Christian





Re: [sqlite] Question about speed of CASE WHEN

2007-03-18 Thread Christian Smith

RB Smissaert uttered:


Looking at the fastest way to convert a field in a table and wonder if in
general an update with a CASE WHEN construction or an update with a join to
a lookup table is faster.



My guess is that the CASE WHEN form will be faster for small number of 
possibilities. It compiles into a straight list of compare instructions.





These conversions are of this type:

UPDATE A3SQL77D_J
SET ENTRY_TYPE = (case
when ENTRY_TYPE = 9  then 'Issue'
when ENTRY_TYPE = 2  then 'Note'
when ENTRY_TYPE = 1  then 'Encounter'
when ENTRY_TYPE = 8  then 'Authorisation'
when ENTRY_TYPE = 11  then 'Prescription'
when ENTRY_TYPE = 5  then 'Treatment'
when ENTRY_TYPE = 3  then 'Problem'
when ENTRY_TYPE = 13  then 'Discontinuation'
when ENTRY_TYPE = 6  then 'Reminder'
when ENTRY_TYPE = 14  then 'Adverse reaction'
when ENTRY_TYPE = -1  then 'Unknown'
when ENTRY_TYPE = 4  then 'Sub-problem'
when ENTRY_TYPE = 7  then 'Battery'
when ENTRY_TYPE = 10  then 'Return-Script'
else ENTRY_TYPE end)

So, an integer value to be converted to a limited number of strings.
I could figure this out with some testing, but maybe there are some general
rules that apply to this.



Testing is probably the best way. However, the difference in speed may not 
be significant given the increased maintainability of the table lookup 
based solution.
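
For comparison, a lookup-table version of the same conversion might look
something like this (the entry_types table is made up for illustration;
since SQLite's UPDATE has no join syntax, a correlated subquery does the
work):

CREATE TABLE entry_types(id INTEGER PRIMARY KEY, name TEXT);
INSERT INTO entry_types VALUES(9, 'Issue');
INSERT INTO entry_types VALUES(2, 'Note');
-- ... one row per code ...

UPDATE A3SQL77D_J
SET ENTRY_TYPE = (SELECT name FROM entry_types
                  WHERE entry_types.id = A3SQL77D_J.ENTRY_TYPE)
WHERE EXISTS (SELECT 1 FROM entry_types
              WHERE entry_types.id = A3SQL77D_J.ENTRY_TYPE);

The WHERE EXISTS guard leaves unknown codes unchanged, mirroring the
'else ENTRY_TYPE' branch of the CASE expression.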





Also, would the order of the CASE WHEN options make a difference,
speed-wise, so would it be faster to have the WHEN options in decreasing
order of frequency?



The sequence of code generated compares the cases in the order written. So 
the common cases should go first.





RBS



Christian




Re: [sqlite] SQLite db lock problem

2007-02-20 Thread Christian Smith

Allan, Mark uttered:


Hi,

A little while back I submitted a query as to whether SQLite would be a 
good alternative to using MS Access as an internal database for a PC 
application. I received many replies for which I was grateful. Mostly 
people thought that SQLite was a far superior option to Access; the 
main concern I was warned about, however, was that SQLite does not work 
particularly well in a multiuser environment. We will need (in some 
instances) for the database file to be located on a network drive and 
there is the requirement to support up to 10 concurrent users. The 
actual traffic in most cases will be very light and the likelihood of 
writes actually taking place at the same time actually very slim. 
However we do need the database engine to be able to handle this.


Basically my questions are thus:-


1) What are peoples experiences with SQLite under the scenario I have 
described above?



SQLite has been known to be problematic using NFS drives, mainly due to 
flaky NFS locking implementations. I believe SMB has better integrated 
locking in the protocol, and so may be better than NFS with regards to 
locking reliability.



2) What are peoples opinions on whether SQLite is a good choice for this 
project? Any other alternatives?



If the locking issues above are non-issues under SMB, SQLite should be at 
least as good as Access, and likely much better.



3) How severe is the database locking problem? How easy is it to unlock 
a locked database? How often on average will it occur?



Given that you will have writers rarely updating the database, you should 
have little contention on the database.


Locks under SQLite are advisory OS locks. If a SQLite application crashes 
while holding a lock, the OS should automatically clear that lock when the 
application exits. A crashed machine may be more problematic, as the 
server holding the SQLite database will likely have to wait for the client 
connection to time out before releasing the lock.


This is all OS level, though, and the same issues would apply to Access as 
well. So if you're currently happy with Access's locking, then you should 
be OK with SQLite. This assumes Access's locking is based on the OS's 
locking mechanisms.





It is worth noting that the project is a complete upgrade from an older 
version; the old version used Access in the same environment as 
described above and we had no complaints of problems in multiuser usage. 
However we are finding Access old technology and too slow, and the 2GB 
limit is now too small for some of our customers.



Sounds like SQLite should fit your needs well.




Any help/suggestions will be gratefully received.

Mark




Christian




Re: [sqlite] Backing up a SQlite database

2007-02-09 Thread Christian Smith

Rich Shepard uttered:


On Fri, 9 Feb 2007, Mikey C wrote:


This might be a dumb question, but is taking a backup of a live database
simply a matter of copying the file to a backup device/drive?


 Yes. It's a regular file to your OS. As a matter of fact, you can copy the
file to another name and open that other name to see the same tables and
data as with the original. That's how I keep a backup of the database I'm
developing.



No, no, no! Copying the file is not atomic, and a live database may be 
updated part way through the copy.


Use the sqlite shell .dump command, which will implement the necessary 
locking:

$ sqlite3 db.file .dump  > backup.sql

The backed up file is a SQL script that will restore the database.
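
To restore from such a backup (file name illustrative):

$ sqlite3 restored.db < backup.sql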


Christian




Re: [sqlite] Shared Lock Transactions

2007-01-23 Thread Christian Smith

Jay Sprenkle uttered:


On 1/21/07, Andrew Teirney <[EMAIL PROTECTED]> wrote:
>
> > What is a 'shared' lock? You can't share a lock, only one process 
> > can own it at a time.

> >
>
> As per this document
>
> http://www.sqlite.org/lockingv3.html
>
> SHARED The database may be read but not written. Any number of 
> processes can hold SHARED locks at the same time, hence there can be 
> many simultaneous readers. But no other thread or process is allowed 
> to write to the database file while one or more SHARED locks are 
> active.

>

A file lock is used to serialize write access. When reading, the file 
is not locked. This makes little sense. There are no 'shared' locks.
Sorry, accidentally hit send before I was finished. This documentation 
needs to be updated so it makes sense. Making up this conceptual 'shared 
lock' thing instead of explaining how the locking mechanism really works 
is not helpful. A 'shared lock' in reality is just opening the disk file 
that makes up the database without locking. Any number of other 
processes can read the file. You can't acquire a shared lock, you just 
don't acquire a write lock.



No, you do acquire a lock. You must lock out any writer, but you do not 
require locking out other readers, hence you can share the lock with other 
readers, but it is a lock. It is a conceptual lock, and an OS level lock.


Without the shared lock, the writer would have no way of knowing whether 
any other process is reading the file, thus making it unsafe to change the 
file.


Christian






RE: [sqlite] Is there a way to turn off -journal files?

2007-01-16 Thread Christian Smith

Dave Gierok uttered:

Yes, that's exactly what we do: use the in-memory db until it comes time 
to save, and then we open a file db, attach, and copy it to disk.  At 
that time, the journal files are created.


If there is a nice way to serialize an in-memory db to disk without 
having to open/attach to a file db, I'm all ears.  Anyone know if this 
is possible?



Look at the source of shell.c. This implements the sqlite3.exe shell, and 
contains the ".dump" command to serialise a database into SQL CREATE and 
INSERT commands. Just use similar logic when saving state, then replay the 
saved file to recreate your memory database. You might also want to 
compress the resulting file.





Thanks,
Dave

-Original Message-
From: Will Leshner [mailto:[EMAIL PROTECTED]
Sent: Monday, January 15, 2007 12:42 PM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Is there a way to turn off -journal files?

On 1/15/07, Dave Gierok <[EMAIL PROTECTED]> wrote:

I'm building the Sqlite lib using the preprocessor definition TEMP_STORE=3, but 
I still notice that temporary '-journal' files are being created/deleted when 
running queries that will modify the DB.  I thought that the TEMP_STORE=3 was 
supposed to disable these temporary files that get created.


-journal files are an integral part of SQLite's ACID-ity, and as such,
they can't be turned off or otherwise avoided.


How can I build/use Sqlite such that these temporary files do not get created?  
I am running on Xbox360 and perf/storage are critical because the DB can be 
stored to a memory card (not much storage).


I wonder if this might work: keep the database on disk, but when using
it, always use it from an in-memory version. You could open an
in-memory database, attach the on-disk version, and then copy all the
data from the on-disk version to the in-memory version. Hmm.
Unfortunately, I'm not sure how to copy the data back out to the
on-disk version without the -journal file getting created, so maybe
this isn't such a great idea after all. If it were possible to
serialize the in-memory database as save it as a raw binary stream,
then it might work. But I don't think there is a way to do that.







Re: [sqlite] Pager modification question

2007-01-12 Thread Christian Smith

Ken uttered:

Would it be possible to implement a Page level locking system using 
sqlite?


Deadlock detection/recovery and/or prevention is much more complex for 
page level locking. As there is no shared data between SQLite clients 
other than the database file itself, tracking locks in other clients is 
impossible beyond the OS level locking.


A single process with multiple threads could feasibly track page level 
locking, but it would be intra-process only, and would complicate the 
locking significantly for relatively little benefit.


Note that a writer can already operate concurrently with readers until it 
fills its cache, and only has to have exclusive use of the database when 
synching its cache on spill or commit.




I seem to recall a commercial product, Ingres, that utilized a page level 
locking scheme that would escalate to a table level lock when over a 
certain % of a table's pages were locked.



If you want multiple concurrent writers, use PostgreSQL instead. It uses 
MVCC to avoid even row level locks. And it has an embedded SQL 
pre-processor to boot.





This obviously would be very beneficial for those of us who are 
implementing threaded applications using sqlite. The shared cache 
concept would need to be extended so that a global context for the 
shared cache would be maintained, allowing each individual thread that 
connects (sqlite3_open) to utilize the shared cache for that DB.


The benefit would be to move the Locking from the DB level down to the 
page level. Keep a list of the before image pages in memory so that 
readers are not blocked by the writers (lock holders)  of the pages.


When the writer txn completes, mark the txn as completed in the journal.

I think it would be beneficial to visit the journal design as well for 
this. Mainly don't open/close journals, since a journal is always part 
of the DB pre-build the journal and allow multiple threads of writers 
into the journal. This should also help with some of the areas inside 
the pager/journal code where a random checksum is used to validate the 
journal upon crash recovery.


Just some ideas to improve concurrency that I had and wanted to jot 
down.



The current journal implementation is simple and fast. Using a single 
rollback journal, rather than a redo-undo journal, minimizes the amount of 
data that has to be written, and removes the need for the asynchronous 
commits that would be required for any other journal design. Consider the 
alternatives for a minute:


- Current journal design:
  Pros: simple, fast for single writer big transactions.
  Cons: Single writer only. Writer block readers. D in ACID requires
multiple file syncs.

- Write Ahead Redo-Undo log:
  Pros: Potentially multiple writers, ACID with single file sync.
  Cons: Difficult to coordinate between multiple processes. Requires
async process to write log entries to database file.

- Write Ahead Redo log:
  Pros: Simpler than Redo-Undo log. ACID with single file sync.
  Cons: No better write concurrency than current journal. Still requires
async process to write log entries to database file.



DRH, just reviewed the tokenize.c and pager.c; I concur that Yacc/bison 
have the token passing incorrect. Your implementation of tokenize is 
very interesting; what an excellent technique, embedding the tokens using 
overlapping strings.


Can you send me an email address? I have some code that you might find 
interesting to utilize in pager.c for the checksums.



Just post it to the list. Perhaps use the ticket tracker and attach your 
patch(es) to a ticket.


Christian





Re: [sqlite] delayed (batch) transactions

2006-12-24 Thread Christian Smith

[EMAIL PROTECTED] uttered:



It takes at least two complete rotations of the disk platter
to do an atomic and durable commit.  On a 7200 RPM disk, that
means 60 transactions per second is your speed of light.

Your question boils down to this:  Can you speed up transactions
by dropping the durable property - the D in ACID.  Yes you
can.  Actually, most client/server database engines already
do this for you without telling you. Very few client/server
databases are really ACID - they are usually only ACI when
confronted with a power failure.



Are you sure about this? I'm pretty sure PostgreSQL is ACID by default, 
and as certain as I can be without proof that the commercial (Oracle etc.) 
are as ACID as the hardware they run on.


They achieve this by not requiring seeks to overwrite data, as the write 
ahead log is sequential and so can be written at the sequential IO rate of 
the underlying device in the best case.


The client/server databases are also not limited by the filesystem 
synchronous metadata updates, and so are more likely to achieve the peak 
IO rate of the underlying device.




[snip]

A second approach would be to overload the OS drivers on
the backend of SQLite to support asynchronous I/O.  Mozilla
does this in Firefox in order to boost performance on NFS.
There is well-commented sample code showing how to do this
in the SQLite source file "test_async.c".  That sample code
does not combine multiple transactions, but you could probably
tweak it to make that happen.



When doing some performance testing between different FS on Linux, I found 
that Ext3 with "data=journal" option was about 50% faster than the next 
fastest options (XFS and ext3 with "data=writeback"). Again, this is 
because synchronous meta-data, as well as normal synchronous file data, 
can be written sequentially in the FS journal at peak IO speed. If I can 
find the figures, I'll post them. However, I mislaid the test results and 
haven't got a suitable machine to reproduce at the moment.




--
D. Richard Hipp  <[EMAIL PROTECTED]>



Christian

PS. If people are after maximum performance, I can recommend ext3 with
"data=journal" option for SQLite. The performance is stunning!

PPS. Merry Christmas to those so inclined.




Re: [sqlite] SQLiteJDBC is now 100% Pure Java

2006-12-14 Thread Christian Smith

David Crawshaw uttered:


For several months now I have been providing a Java JDBC driver for
SQLite 3. Now I am happy to announce with v030, the completion of a
100% Pure Java version.

http://www.zentus.com/sqlitejdbc

This is accomplished with the support of the great NestedVM project.
The SQLite C source is compiled by gcc to a MIPS binary, that is then
dynamically interpreted on top of the Java runtime. This involves a
performance hit but greatly simplifies inclusion in current projects
and delivery cross-platform.



Is locking now handled by NestedVM so that the driver is useable with 
native SQLite processes?





The native JNI version has not been abandoned, and binaries are still
provided for Windows and Mac OS X. Linux can be directly compiled.

The new native version has been extensively tested, with the only
major caveat at the moment being the size of the binary. I believe it
can be greatly reduced from the current ~ 800kb, I just do not have
the time.



Still, it's smaller than Derby. A head to head might be nice.

Christian




Re: [sqlite] building sqlite on windows in Unicode

2006-12-14 Thread Christian Smith

Check the requirements in:
http://www.sqlite.org/copyright.html

for patches and other submissions to SQLite. This could be what is holding 
up inclusion of the patch.


Christian

Brodie Thiesfield uttered:


Hi,

Building sqlite on windows in Unicode mode broke with the addition of
the loadable extensions. I found a bug matching this problem and
attached a patch to it to fix it a while ago, however I haven't seen any
other comments or movement in the bug. I'm not sure what else needs to
be done to have a patch accepted, so I'm posting here in the hope to
prod it along for review or acceptance.

The problem is that the dlopen/LoadLibrary code looks like it was hacked
in instead of being added to the platform abstraction API and it doesn't
support windows unicode builds out of the box. The patch fixes that
problem and silences a warning generated by the new index format.

bug...
http://www.sqlite.org/cvstrac/tktview?tn=2023

patch...
http://www.sqlite.org/cvstrac/attach_get/309/sqlite3.patch

Regards,
Brodie







Re: [sqlite] Re: creating own pragmas

2006-12-07 Thread Christian Smith

Dennis Cote uttered:


Igor Tandetnik schrieb:

Günter Greschenz <[EMAIL PROTECTED]> wrote:

i'm using "sqlite3_create_function()" to create own functions in my
database.
this is a really nice possibility to extend the database with powerful
functions.
but i did not find a way to create own pragmas like "pragma foo=bar"
to pass some global information to my application.


Pass global information from where? Why would your application use such
a roundabout way to pass global information to itself? If you need to
store some global settings in the database, why not just create a table
for them?


Gunter,

Igor is right, you should store your password in a table. If you are 
concerned about the password being saved in the database, then you could use 
a temporary table, or even attach a :memory: database to hold the table that 
contains the password. In any case the saved password is available to your 
encrypt and decrypt functions.



Or, even define a custom function to set the password:
SELECT set_password('password');




HTH
Dennis Cote



Christian



Re: [sqlite] a question about muticonnection

2006-12-07 Thread Christian Smith

Mario Frasca uttered:


Christian Smith wrote:

You can use the rowid to track the row version, and a unique constraint on 
your key to track conflicts. When updating a record, read the row, 
including the rowid. Update any columns.


When writing the row out, delete the existing rowid, then insert the new 
updated row, all within a transaction.


If noone has updated the row since you read it, the delete should delete 
one row and the insert should succeed. If someone else has updated the row 
using this protocol, the delete should delete no rows, and the insert 
should fail with a unique constraint violation on your key.


maybe I'm missing something, but how does this behave if the row is updated a 
second time in the meanwhile (something like C acting as B between A-read and 
A-write)?  I've the impression that the insert will not fail...



If the row is subsequently updated, it doesn't matter, as the original row 
I based my record on has vanished due to the first update. My delete will 
delete no rows, and my insert will fail if the key clashes with an updated 
record.


Basically, I'm using the rowid as a row version number, and assuming it 
is monotonically increasing (though that assumption may not always hold). 
The key identifies the unique record.


It's a way of detecting conflicts, not preventing them. SQL UPDATE, OTOH, 
is atomic and thus prevents conflicts.





just a doubt...

regards,
MF


Christian




Re: [sqlite] a question about muticonnection

2006-12-07 Thread Christian Smith

hongdong uttered:


I just have a basic question:
assume users A and B are both connected to the same database and both of
them want to update the same record, but only one is allowed. Consider
this situation:
A begins to browse the data in a client application, loads the rowid into
the GUI and keeps it in memory.
B updates the record which A wants to update.
A now uses the id stored in memory to update the same record;
now it's wrong.
Can anyone give me advice on how to avoid this while keeping the operation
efficient?



You can use the rowid to track the row version, and a unique constraint on 
your key to track conflicts. When updating a record, read the row, 
including the rowid. Update any columns.


When writing the row out, delete the existing rowid, then insert the new 
updated row, all within a transaction.


If noone has updated the row since you read it, the delete should delete 
one row and the insert should succeed. If someone else has updated the row 
using this protocol, the delete should delete no rows, and the insert 
should fail with a unique constraint violation on your key.
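
A minimal sketch of the protocol, assuming a hypothetical table t with a
UNIQUE key:

CREATE TABLE t(key TEXT UNIQUE, val TEXT);

-- read phase: remember the rowid along with the data
SELECT rowid, val FROM t WHERE key = 'k1';

-- write phase, all in one transaction (42 = the rowid read earlier)
BEGIN;
DELETE FROM t WHERE rowid = 42 AND key = 'k1';
INSERT INTO t(key, val) VALUES('k1', 'new value');
COMMIT;

If the DELETE removes no rows (changes() returns 0), or the INSERT fails
with a constraint violation, another writer got there first: ROLLBACK and
retry from the read phase.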




thanks!



Christian




Re: [sqlite] database permanently locked

2006-12-02 Thread Christian Smith

Max Barry uttered:



My database is permanently locked, and I've spent two fruitless days
trying to unlock it.



You haven't said what sort of box this is. I guess a generic Unix. If 
Linux, you'll probably have fuser installed. Run fuser against the 
database file, and it'll tell you the PID of any process that has the file 
open. I'm not sure if other Unix have fuser installed by default.


If the file is owned by a trac process, you can kill it using "fuser -k", 
assuming you have permission.





The problem:

$ sqlite trac.db
SQLite version 3.3.6
Enter ".help" for instructions
sqlite> .databases
Error: database is locked

I believe this was caused by something bad happening in Trac 0.9.6. More
than that I don't know.

What I've tried so far:

* Killing every process I own
* Getting the machine owner to restart Apache
* Using lsof to verify that no process I own is locking trac.db, or
using any SQL libs
* Moving/renaming the trac.db file

Because it's a shared box I don't have root access and can't reboot it.
What else can I try?

Thanks for any help,

Max.







RE: [sqlite] C++ SQLite

2006-11-29 Thread Christian Smith

RB Smissaert uttered:


Yes, that looks to be the way to do it. Basically will have to learn how to
translate C defines to VB declares.



Why not just use ODBC?

http://www.sqlite.org/cvstrac/wiki?p=SqliteOdbc
http://www.ch-werner.de/sqliteodbc/

That way, you're not even tied to SQLite.




RBS

-Original Message-
From: John Stanton [mailto:[EMAIL PROTECTED]
Sent: 28 November 2006 21:58
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] C++ SQLite

Maybe all you have to do is to make yourself a list of VB types cross
referenced to the fundamental types used in the Win32 and Sqlite APIs.
Then you could link any library into your VB programs.

RB Smissaert wrote:

You might be right, but with the Win32 API you have loads of nice
documents/programs (I like the API guide from KPD) that help you out.
All I have to do is copy their declares straight to VB and look at the

data

types I have to provide. Is the same available for the SQLite API?

RBS

-Original Message-
From: John Stanton [mailto:[EMAIL PROTECTED]
Sent: 28 November 2006 18:43
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] C++ SQLite

If you can use the Win32 API you can use the Sqlite API.  Just because
they can be called from C programs does not make them "all to do with C".

RB Smissaert wrote:


Will have a look, but I was looking for a text geared towards VB. I take


it


the documentation that comes with SQLite is all to do with C.
In fact I already have a wrapper that seems to work well, the one from
TerraInformatica, but maybe there was more control if I could write my


own.


RBS

-Original Message-
From: Clay Dowling [mailto:[EMAIL PROTECTED]
Sent: 28 November 2006 18:19
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] C++ SQLite


[EMAIL PROTECTED] said:



Thanks. Would you know any VB source code that wraps all the possible
declares in a class? Or do you know where to find the documentation
to make all the possible declares in VB/VBA?

RBS



The SQLite documentation will give you everything you need to write the
wrapper.  The sqlite.h file in the source bundle would also be a great
help.  You may also be able to find a wrapper already written linked on
the SQLite web site.

Clay Dowling




Yes.  It's a regular windows DLL, so it will behave like all other
Windows
DLLs.

Clay Dowling

[EMAIL PROTECTED] said:



Can I call the SQLite API (as in the dll SQLite.dll) directly from
VB or do I need the wrapper? So, could it work from VB with declares
as I use for the Windows API?

RBS




sebcity wrote:



How would one go about using c++ (Visual Studio.NET) to call and
display
SQLite tables. C++ wrappers?


You should be able to call the Sqlite3 API directly.















Re: [sqlite] Saving tables

2006-11-29 Thread Christian Smith

John Stanton uttered:

If you use an extension greater than 3 chars you violate rules for some 
existing file systems and reduce portability.  You can also make it difficult 
to parse the extension when it is used to type the file.



An extension of arbitrary length should be just as easy to parse as an 
extension of up to 3 characters. "Up to" implies that the extension can 
also be 0, 1 or 2 characters long, so the parsing code should be 
sufficiently flexible to handle 0..3 characters, and by extension 0..n 
characters is not much more difficult (where n is the length of the 
filename).

With VFAT, there is no filesystem in common use that can't handle 
extensions more than 3 characters long.



Christian





Re: [sqlite] Sqlite books

2006-11-16 Thread Christian Smith

Michael Young uttered:

I've had PHP5 for two months now and have not been able to compile it 
successfully on my Mac so that SQLite 3.x files can be accessed. I had hoped 
to find help in compiling in Mike Owens' book.


Does the book offer any details on compiling that might make the purchase 
worthwhile? I've tried in vain to find Mike Owens' email address to ask this 
simple question. Can't find it though.



I've not got the book to hand, so can't answer, but my guess is no. Try 
posting your problems to this or a PHP list, someone may be able to help.





Mike


On Nov 15, 2006, at 8:38 AM, Christian Smith wrote:


John Gunnarsson uttered:


Hi,

I'm about to buy a book about Sqlite. I'm coding in C++ so I'm looking
for the best book to suit my needs.

In my local bookstore I have:
The Definitive Guide to SQLite, APress
http://www.apress.com/book/bookDisplay.html?bID=10130

and

SQLite, Sams
http://safari.samspublishing.com/067232685X

Have anyone of you a opinion of the books above, or maybe an even
better recommendation?



The former is more up to date, covering SQLite 3.x. The latter covers only 
2.x, which is unfortunate.


I have both books, and the former is more comprehensive IMO. If you want 
just the one book, go for the former.





//John




Christian




RE: [sqlite] SQLITE readonly Performance on a CD DB

2006-11-16 Thread Christian Smith

Vis Naicker uttered:


[EMAIL PROTECTED] uttered:


the blob database... within 1.5sec or less from the CD. ... LED on the

CD rom blinking only occasionally, not

constantly like when I copy a file.




[If the LED only lights when data is being transferred, then what you're
seeing is the latency of individual reads. This is not surprising, as
cdroms have horrendously slow seek times. When copying regular files, the
file itself is likely to be contiguous on the CD, therefore no seeks are
required. A SQLite database, on the other hand, has tree structure
scattered around the file. Seeks are common and slow. Vacuuming, as DRH
suggests, will reduce the seeks as tables will be more contiguous.]

* The database is only populated once , I posted search fields into one
database which performs badly, and posted the text as blobs which
performs well even on CD. I also have a system in place where I can post
the blobs as either sqlite or zip or raw files and I am happy with that
performance



The blobs are most likely largely contiguous.





[The OS should shield you from this. One way to possibly increase
performance is to prime the OS cache by reading in the CD file in its
entirety (just read the raw file). Hopefully, the file should fit in the
OS's memory cache, and subsequent SQLite reads can be satisfied from the
OS cache, though that may be unfeasible with a database of your size.]

* The records db is 20MB indexed, and the blob is 140MB.



So it may be feasible to prime your machine cache with the records db. Not 
necessary for the blob database if it's working fine.






[You might also want to increase the database page size when creating the
database in the first place, as you'll have fewer actual pages and hence
fewer seeks for a given database size.]

* I am using a Delphi wrapper, I need to investigate further.



Execute "PRAGMA page_size=;" to set the page size, BEFORE you put 
any data in the database. Try, off the bat, a page size of 16384.





[* sorry first time with outlook I have to set it up later properly]




You have no hope of setting it up properly:)





Re: [sqlite] Sqlite books

2006-11-15 Thread Christian Smith

John Gunnarsson uttered:


Hi,

I'm about to buy a book about Sqlite. I'm coding in C++ so I'm looking
for the best book to suit my needs.

In my local bookstore I have:
The Definitive Guide to SQLite, APress
http://www.apress.com/book/bookDisplay.html?bID=10130

and

SQLite, Sams
http://safari.samspublishing.com/067232685X

Have anyone of you a opinion of the books above, or maybe an even
better recommendation?



The former is more up to date, covering SQLite 3.x. The latter covers only 
2.x, which is unfortunate.


I have both books, and the former is more comprehensive IMO. If you want 
just the one book, go for the former.





//John




Christian




Re: [sqlite] SQLITE readonly Performance on a CD DB

2006-11-15 Thread Christian Smith

[EMAIL PROTECTED] uttered:

I have 300K records in my database and the blob text file (invoices) in 
a separate database. I query the database over a network or USB stick 
and the performance is good. However, when I moved my database to CD 
media, the performance dropped drastically. It sometimes takes 2 minutes 
to retrieve the result. The CPU is not busy during the time, and the 
memory usage increases slowly.



Is the cdrom noisy at this point? Making intermittent whirring noises? 
Some cdroms are noisier than others.





When the user selects a row, the system queries the blob database, and 
here I am able to retrieve the text, convert it into a tiff file, and 
load it into my program within 1.5 sec or less from the CD. Copying the 
database to the hard drive and querying that restores normal 
performance. Querying the CD version (immediately) makes the performance 
lag again. The LED on the CD-ROM blinks only occasionally, not 
constantly like when I copy a file.



If the LED only lights when data is being transferred, then what you're 
seeing is the latency of individual reads. This is not surprising, as 
cdroms have horrendously slow seek times. When copying regular files, the 
file itself is likely to be contiguous on the CD, therefore no seeks are 
required. A SQLite database, on the other hand, has tree structure 
scattered around the file. Seeks are common and slow. Vacuuming, as DRH 
suggests, will reduce the seeks as tables will be more contiguous.


The OS should shield you from this. One way to possibly increase 
performance is to prime the OS cache by reading in the CD file in its 
entirety (just read the raw file). Hopefully, the file should fit in the 
OS's memory cache, and subsequent SQLite reads can be satisfied from the 
OS cache, though that may be unfeasible with a database of your size.


You might also want to increase the database page size when creating the 
database in the first place, as you'll have fewer actual pages and hence 
fewer seeks for a given database size.





Both databases have 300k records; the first (invoice no indexed) has 8 
retrieval fields, while the second has just the ID and blob. The average 
row size is 60-80 bytes, while the blob averages 500 bytes. I am 
looking to improve the performance, and for the reasons for the failings. 
It does sound like the caching mentioned in the thread ...





Only in that the required data is not in the cache, and has to be brought 
in from high latency media.



Christian





Re: [sqlite] SQL tree and deepness level

2006-11-14 Thread Christian Smith

Robert Zajda uttered:


Hello

I have a table
CREATE TABLE acategories (
  id integer not null primary key,
  category string,
  parent_category integer references acategories(id)
)

query:
SELECT
  a.id AS 'id',
  a.category AS 'Category',
  b.category AS 'Parent category'
FROM acategories AS a
LEFT JOIN acategories AS b ON a.parent_category=b.id

everything works ok, but i need to add new field with deepness level
of each row.
Is it possible ?



Yes, just add the field and maintain it yourself, but you've now created 
redundant data that must be maintained, going against the point of 
normalisation.


If you need the depth of a node, then simply do an iterative count using 
SELECT to walk from the node to the root category.


You can represent tree data using a method such as Modified Preorder Tree 
Traversal:

http://www.sitepoint.com/article/hierarchical-data-database/2

Inserting and deleting nodes is more involved, but retrieving sub-trees is 
faster and doable with a single query.
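
As a rough sketch of the MPTT approach from that article (table and column
names here are illustrative):

CREATE TABLE acat(id INTEGER PRIMARY KEY, category TEXT,
                  lft INTEGER, rgt INTEGER);

-- the entire sub-tree under node 5, in a single query
SELECT child.category
FROM acat AS child, acat AS parent
WHERE parent.id = 5
  AND child.lft BETWEEN parent.lft AND parent.rgt;

-- deepness level of node 5 = number of strictly enclosing ancestors
SELECT COUNT(*) AS depth
FROM acat AS anc, acat AS node
WHERE node.id = 5
  AND anc.lft < node.lft AND anc.rgt > node.rgt;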




Thanks
Robert



Christian




Re: [sqlite] so many warnings on vc6 with warning level 4

2006-11-13 Thread Christian Smith

Gunnar Roth uttered:


Hello,
i want to use sqlite3 in a sub-project at work.
when compiling with warning level 4 with vc6 ( internal philosophy ) , i
get over 480 warnings. With warning level 3 there are still 119 left.
This leads to problems to convince the project lead of the quality of
sqlite3 ( which i do not doubt, as i have used it for my private project
since some years).



I'd be more worried about using VC6, period. This is a really old release, 
and has been unsupported by MS since something like 2003.




Is there any chance this warnings will be fixed sometime in the future?
How are others dealing with this problem at work?



Use a current, supported compiler first, then check again.




regards,
gunnar









RE: [sqlite] Insert statement taking too long

2006-11-09 Thread Christian Smith

Unit 5 uttered:


--- Robert Simpson <[EMAIL PROTECTED]> wrote:

You need to create an index on the columns you're
joining.  Otherwise I
believe 100,000,000 rows (10k x 10k) in table 2 will
be scanned while SQLite
looks for matches against the 10,000 rows in table
1.


While that makes sense, I suspect there is something
else going on.

I did a few more tests.  For example, if I remove the
INSERT but keep the exact same SELECT statement with
the joins, it is fast again.  So, it seems that it is
quite slow when doing the insert's.  I was thinking
that perhaps the statement was not in a transaction,
but I tried that too.



Try both the INSERT and the plain SELECT using EXPLAIN QUERY PLAN, which 
will give an indication of which indexes are being used. I'd hazard a 
guess that the INSERT case is not using the same query plan as the plain 
select case.


sqlite> EXPLAIN QUERY PLAN INSERT ... SELECT ... FROM table-1 JOIN table-2 ...
sqlite> EXPLAIN QUERY PLAN SELECT ... FROM table-1 JOIN table-2 ...


Christian




Re: [sqlite] indexes in memory

2006-11-09 Thread Christian Smith

chetana bhargav uttered:


Hi,

I have a question regrading indexes,

When I open a connection,

 Will indexes be loaded into memory. If one of the tables in the DB, the 
connection for which I have opened, has an index.

  If, so is there any way to selectively load/unload that from memory.



Indexes will be loaded into the cache as needed. The whole SQLite database 
is page based, and the cache caches the pages. The tables and indexes are 
implemented as page based btrees, with nodes represented by pages.


The cache is unaware of the higher level structure of the btrees, and 
there is no way to selectively bring load/unload tables or indexes from 
memory. The page cache will manage itself on an LRU basis.
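
The only knob available is the overall size of that cache, e.g. (value
illustrative):

PRAGMA cache_size=5000;  -- maximum number of pages this connection caches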



...
Chetana.


Christian




Re: [sqlite] Q about new SQLite API

2006-11-08 Thread Christian Smith

[EMAIL PROTECTED] uttered:


I'm working on a new API routine for SQLite and I have
questions for the community.

The working name of the new api is sqlite3_prepare_v2().
sqlite3_prepare_v2() works like sqlite3_prepare() in that
it generates a prepared statement in an sqlite3_stmt
structure.  The difference is in the behavior of the
resulting sqlite3_stmt and in particular a difference in
the way sqlite3_step() responds to the sqlite3_stmt.  The
differences are these:

 * You never get an SQLITE_SCHEMA error.  sqlite3_prepare_v2
   retains the original SQL and automatically reprepares and
   rebinds it following a schema change.



Good.




 * sqlite3_step() returns the correct error code right
   away, rather than just returning SQLITE_ERROR and making
   you call sqlite3_reset() to find the true reason for the
   error.








In this way, I am hoping that sqlite3_prepare_v2() will work
around two of the most visible warts in the current API.

QUESTION 1: sqlite3_prepare_v2 is the merely the working name
for the new function.  What should the official name be?
Some possibilities include:

   sqlite3_prepare_ex1
   sqlite3_prepare_ng
   sqlite3_new_prepare
   sqlite3_compile



I'd leave it as sqlite3_prepare, with default as the old behaviour, then 
add a new function to switch an existing sqlite3_stmt to the new 
behaviour, such as:


 #define SQLITE3_STMT_RECOMPILE   (1<<0)
 #define SQLITE3_STMT_STEP_WITH_ERRORCODE (1<<1)
 int sqlite3_stmt_setflags( sqlite3_stmt *pStmt, int flags );

My personal opinion is that functions appended with _ex or 2, or obviously 
extending an older version of a function, look sloppy. You just have to 
look at Win32 for examples of bad APIs extended. It's not an entirely 
rational argument, I admit.





QUESTION 2: Are there any other API warts that need to be
worked around that can be fixed by this same change?



I trust that the return codes from sqlite3_step will now be able to be 
arbitrarily extended, to cover such cases as:

http://www.sqlite.org/cvstrac/tktview?tn=1837,2




QUESTION 3: Suppose there is a schema change and the SQL
statement is automatically reprepared. But the schema change
is such that the SQL is no longer valid.  (Perhaps one of the
tables mentioned in a SELECT statement was dropped.) What
error code should sqlite3_step() return in that case?



How about the error code that would be returned from sqlite3_prepare with 
the given SQL?





--
D. Richard Hipp  <[EMAIL PROTECTED]>






Re: [sqlite] serious performance problems with indexes

2006-11-03 Thread Christian Smith

Peter De Rijk uttered:


--On Friday 27 October 2006 15:31, [EMAIL PROTECTED] wrote:

When a table is indexed, INSERT performance is logorithmic in the
number of rows in the table and linear in the number of indices.
This is because entries have to be inserted into the index in
sorted order (otherwise it wouldn't be an index).  And each
insert thus requires a binary search.

If your index becomes very large so that it no longer fits
in your disk cache, then the binary search can involve a lot
of disk I/O which can make things really slow.  The usual
work-around here is to keep a much smaller staging table into
which you do your inserts and then periodically merge the
staging table into the main table.  This makes your queries
more complex (and slower) since you are now having to
search multiple tables.  But it does speed up inserts.


I have checked at a finer grain (results and code at the end of the
mail), and it seems indeed that the extreme growth of time needed
for inserts only starts at around 16 rows (on my system). Before
that, it also rises, but much much more slowly. As this indeed
looked like a cache effect, I tried setting the default_cache_size
to something big (2). This was enough to remove the bend (at
least till 200 rows), so that the increase in time kept going
at the same slower pace as for the smaller data sizes.



You might want to try increasing the page size as well as the size of the
page cache. For index pages, this'll increase the btree fanout, reducing 
the btree depth for a given number of entries, and reducing the pressure 
on the cache when inserting new entries.
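
Something along these lines when creating the database (the values are only
a starting point, and page_size must be set before any data is written):

PRAGMA page_size=8192;            -- larger pages give higher btree fanout
PRAGMA default_cache_size=20000;  -- in pages; persisted in the db header

then re-import the data into the new database.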


Christian





Re: [sqlite] Query Execution speed.

2006-11-03 Thread Christian Smith

Manzoor Ilahi Tamimy uttered:



I found that version 2.8 was much faster than 3.6 using ":memory:": 30
seconds and 60 seconds in the case of 2.8 and 3.6 respectively.
Can I use 2.8 in my project when I have a huge amount of data to handle?



Version 2 used a Red/Black balanced tree in :memory:, whereas version 3 
appears to have done away with this optimisation. I'm curious as to why?





Thanks
Regards,

Manzoor Ilahi Tamimy




Christian




Re: [sqlite] any data access and retrieval engine?

2006-10-25 Thread Christian Smith

Sarah uttered:


Hi, John Stanton

I really really appreciate your warm help.
That's great if you can send me the codes of B tree and B+ tree.
Many thanks in advance.

My requirements for data access are as follows:
-all the data are stored in non-volatile memory instead of volatile memory
-the footprint of the DARE should preferably be less than 100KB
-when executing, the memory occupation should preferably be less than 20KB
-no need for relational access, just key-value retrieval is ok
-all the create, insert, update work can be done outside; however, pretty fast 
retrieval is needed

If there is some open-source DARE (as excellent as SQLite) suitable for my 
platform, that will be great.
Otherwise, I will try to write a simple one.



There is, of course, the BTree code in SQLite itself. It is small and 
fast. btree.h is pretty self-documenting.


Not to knock John's work, mind:)




Sarah


- Original Message -
From: "John Stanton" <[EMAIL PROTECTED]>
To: 
Sent: Monday, October 23, 2006 5:54 AM
Subject: Re: [sqlite] any data access and retrieval engine?



Clay Dowling wrote:

Sarah wrote:


Hi,all

First of all, I want to thank all the guys on this mailing list for their warm 
help.

After 1 more month of work, I finally make SQLite work on my embedded 
environment. SQLite is really great! Many thanks,Drh.

But, due to the extremely heavy hardware constraints, I have to give up SQLite 
finally.

So I'm trying to find a much simpler data access and retrieval engine.

Could anyone give me some help on this issue?(some guidance on how to make a 
DARE or is there any open-source one available?)

thanks in advance.



The Berkeley DB engine and its related engines might be suitable for
your situation.  They don't give relational access, but they do give
fast key=>value retrieval and that might be suitable.  The SleepyCat DB
engine from SleepyCat Software is probably the best, but for a
commercial application the licensing fees mean that you have to be well
funded and expect a good return on the product.

Clay Dowling

Berkeley DB is still quite bloated.  What do you require for data access?
For an embedded system you might find something which matches your
needs very well and has a tiny footprint.

I can give you some B* Tree code which is suitable for a high
performance simple and lightweight embedded application or some AVL tree
code which would suit a simpler smaller scale memory resident embedded
data access application.  You would have to adapt it to your
application, but could expect to get your database access in 20K or less
of executable image.  Of course you have no SQL.







Re: [sqlite] Trouble with ALTER TABLE/ADD

2006-10-25 Thread Christian Smith

Isaac Raway uttered:


Thanks, I will look at that (away from my dev machine for the day).

One other related question, are there plans to expand the functionality of
ALTER TABLE? I am working on a feature that could benefit greatly from
REMOVE/RENAME COLUMN. As it stands, I am going to have to simulate this by
using generic column names and mapping them to a list of the "actual" names.
It would be *very* nice to see these features added to sqlite before I
finish this feature, but I imagine this has been requested before...



A better solution would be to transfer the contents of the table being 
updated to a temporary table, then recreate the original tables sans the 
surplus columnn:


  BEGIN;
  CREATE TABLE temp_topic AS SELECT <columns to keep> FROM topic;
  DROP TABLE topic;
  CREATE TABLE topic ...;
  INSERT INTO topic SELECT * FROM temp_topic;
  DROP TABLE temp_topic;
  COMMIT;

It may not be quick for large tables, but how often are you going to be 
updating the table definition? If often, then you probably have a more 
fundamental problem on your hands.


If your data format is by definition user defined, then you might be best 
off with an <attribute, value> table instead, and use views to map 
<attribute, value> instances into a virtual table.





Isaac




Christian




On 10/23/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:


"Isaac Raway" <[EMAIL PROTECTED]> wrote:
>
> ALTER TABLE topic ADD COLUMN type_id integer;
>
> This works fine when I run it on the sqlite3 command line, but fails in
the
> Delphi units. Any thoughts? Has anyone ahd trouble running ALTER TABLE
from
> the Delphi bindings?
>

Perhaps the delphi code is statically linked against an
older version of SQLite.  ADD COLUMN was added in version
3.2.0.  You can find what version delphi uses by executing

   SELECT sqlite_version();

--
D. Richard Hipp  <[EMAIL PROTECTED]>















Re: [sqlite] SQLite Performance Issues

2006-10-23 Thread Christian Smith

James Mills uttered:


Hi Richard,

By high-traffic I mean more than 1 hit/s.

I do want to clear something up though (if you could):

If a site using sqlite takes 700ms to load and there are
two simultaneous (as simultaneous as one can get) hits
to the site, say user A and user B. What happens ?

Does user A get the site displayed while user B waits ?



If you have two simultaneous sessions loading, so long as the sessions are 
read only, they will open in parallel (assuming you have a threaded 
server.)


This is probably not a SQLite issue, rather a webapp issue.




Obviously this is a really simple scenario and with high
volume websites this could become a problem.

Also can you suggest any tips and things to think about
when designing a web app with SQLite ?



What are you using the database for? Caching? Session state management?

If you want per-session writable data, you might want to partition 
session data from global data, and have the session data in separate 
per-session databases. The webapp opens the global database, which 
hopefully will be mostly read-only, and then ATTACHes a per-session 
database for session updates.
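
A sketch of that arrangement, with made-up file and table names:

ATTACH DATABASE '/tmp/sess_1234.db' AS sess;
CREATE TABLE IF NOT EXISTS sess.state(key TEXT PRIMARY KEY, val TEXT);
INSERT OR REPLACE INTO sess.state VALUES('cart', '...');
DETACH DATABASE sess;

Writes to the session database then contend only for that session's file,
not for the shared global database.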


If your global data is going to be the write bottleneck, then you may be 
boned and will have to look at something like PostgreSQL. One possible way 
around this is to use group commit to marshal multiple sessions' writes 
to the database in a single transaction.


For further suggestions, you'll have to let people know exactly how you 
intend to use the database.






Thanks again,

cheers
James




Christian





Re: [sqlite] Optimistic concurrency control

2006-09-19 Thread Christian Smith

Mikey C uttered:



Hi,

Maybe I didn't make the question clear.  I'm not talking about locking and
multiple writers.  I'm talking about optimistic concurrency control in a
disconnected environment.

Two processes (say a webserver).  One reads some data and presents it to a
user (open - read - close).  The other reads the same same data and presents
it to another user (open - read - close).  The first user updates the data
(open - write - close).  Several seconds/minutes later the second user
updates the same data (open - write - close).  Result: the first user's
changes are lost.



Aha. OK, I see where you're coming from.




ALL I am asking is could SQLite give each table a special column that
increases its value for each row whenever data changes?

I could implement it with a trigger on each table BUT it would be nice if
SQLite supported this natively.

Anyhow, I get from the tone of the answers that this is not likely to
happen, so I'll code it up myself.



Sorry if I sounded arsy. I wasn't trying to, I was just trying to be 
succinct.





Cheers,

Mike



Christian
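
For the record, a minimal version of the trigger approach mentioned above 
might look like this (a sketch only; table and column names are invented):

  CREATE TABLE doc (
    id      INTEGER PRIMARY KEY,
    body    TEXT,
    version INTEGER NOT NULL DEFAULT 0
  );

  -- Bump the version on every change. SQLite triggers are non-recursive
  -- by default, so the inner UPDATE won't re-fire the trigger.
  CREATE TRIGGER doc_version AFTER UPDATE ON doc
  BEGIN
    UPDATE doc SET version = version + 1 WHERE id = NEW.id;
  END;

An UPDATE that carries the previously read version in its WHERE clause 
will then change zero rows if another writer got there first.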





Re: [sqlite] how sqlite works?

2006-09-19 Thread Christian Smith

Cesar David Rodas Maldonado uttered:


I mean, if SQLite has two indexes, each very large (about 10,000,000
entries), how do I merge them? I.e. (index1 = index2 for each one, and
limit it to a thousand).

Understand?



I guess not.

Are you trying to do an INNER JOIN merge on the two indexes? SQLite does 
this by having a nested loop, with the equality test as the action of the 
inner loop:


for each row in x {
  for each row in y matching x.row {
# Do whatever for the inner join
  }
}

If you're trying to do the equivalent of a UNION select, then this is just a 
standard merge sort using the two indexes as the sources.

http://en.wikipedia.org/wiki/Merge_sort
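
In SQL terms, the two interpretations look like this (assuming the indexed 
columns are t1.k and t2.k):

  -- Inner-join merge of the two indexes, capped at a thousand rows:
  SELECT t1.k FROM t1 JOIN t2 ON t1.k = t2.k LIMIT 1000;

  -- UNION-style merge of the two indexes:
  SELECT k FROM t1 UNION SELECT k FROM t2 LIMIT 1000;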





On 9/15/06, Dennis Cote <[EMAIL PROTECTED]> wrote:


Cesar David Rodas Maldonado wrote:
> Is there a document on how the SQLite Virtual Machine works (papers)? I
> would like to do something similar with B-Tree, B+ Tree but I don't know
> how to merge a select with two indexes. Understand my question?
>
> Please answer me
>
see the links VDBE Tutorial and VDBE Opcodes near the bottom of the
documentation page http://www.sqlite.org/docs.html

Dennis Cote












Re: [sqlite] Optimistic concurrency control

2006-09-19 Thread Christian Smith

Mikey C uttered:



What are peoples thoughts on implementing optimistic concurrency control in
SQLite?



Not an option. SQLite has a single-writer database locking protocol which 
can't handle multiple writers, so the issue of concurrency control is 
moot.





One way is to modify the WHERE clause to compare every column being updated,
old value to new value.  This makes the SQL cumbersome.

Microsoft SQL Server has a column data type named TIMESTAMP/ROWVERSION,
which is a database-wide unique number that gets updated every time a row
gets updated.  Effectively it is a hash of all the current values in every
row of the table and is updated automatically.

Hence to see if any row has been updated by another person you just have to
compare the  TIMESTAMP/ROWVERSION  value you read with the one currently in
the table in the UPDATE where clause.



Q. Does SQLite have such a capability?  Can we have one please? If not, is it
easy to simulate one?  If not, how do people manage concurrency in
applications such as web sites?



A. No. Probably not. Probably not. Use a client/server DB such as
   PostgreSQL, which already has multi-version concurrency control.

Right tool for the job. If it's multiple concurrent writers, SQLite isn't 
it.
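
That said, the guarded-update half of the pattern is plain SQL and runs 
fine in SQLite; what is missing is only the automatic version bump. A 
sketch, with an invented table and values:

  -- Read, remembering the version:
  SELECT body, version FROM doc WHERE id = 1;

  -- Write back only if nobody updated the row in the meantime:
  UPDATE doc SET body = 'new text', version = version + 1
   WHERE id = 1 AND version = 7;   -- 7 = the version read earlier

If sqlite3_changes() then reports zero rows changed, another writer got 
there first and the application must re-read and retry.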



Christian






Re: [sqlite] How to speed up the performance in LIMIT & OFFSET?

2006-09-19 Thread Christian Smith

PY uttered:


Hi All,
I have a problem with LIMIT & OFFSET performance.
Due to memory limitations, I cannot fetch the whole query result at once.
In our solution, we use LIMIT and OFFSET to work around the memory issue.

We observed the performance of LIMIT & OFFSET; the response time appears
to grow linearly. Our table only has 300~500 records.



Here is the dummy script
Create Table foo(id INTEGER PRIMARY KEY AUTOINCREMENT, x TEXT);
create index idx_foo on foo(x);

insert into foo(x) values('text001');
:
:
insert into foo(x) values('text300');


Select distinct x from foo LIMIT 20 OFFSET 0;
Select distinct x from foo LIMIT 20 OFFSET 20;
Select distinct x from foo LIMIT 20 OFFSET 40;
Select distinct x from foo LIMIT 20 OFFSET 60;
:
:
Select distinct x from foo LIMIT 20 OFFSET 280;



Would you please help to tell me how to improve that?



I could only reproduce the issue on the slowest machine I have (a SUN 
SPARCclassic). I guess yours is an embedded device, given that you're also 
limited in memory.


Using your query, the index for x is ignored. To do the "DISTINCT" 
processing, SQLite creates a temporary table which is populated with 
distinct rows from the base table. This is done using a full table scan, 
so the index wouldn't help anyway.


Your best bet, if you always want distinct x, is to add a UNIQUE 
constraint to x:

CREATE TABLE foo (id INTEGER PRIMARY KEY, x TEXT UNIQUE);

Now your selects, minus the distinct clause, no longer need the temporary 
table and can walk the index directly:
SELECT x FROM foo ORDER BY x LIMIT 20 OFFSET 60;

Note, using "ORDER BY" will make SQLite use the implicit index on x, and 
also ensure the ordering of the result set, which is not defined 
otherwise.
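
If the linear cost of large OFFSETs ever becomes visible, a common 
alternative (not from the original reply) is to remember the last value 
seen and seek past it, so skipped rows are never scanned:

  -- First page:
  SELECT x FROM foo ORDER BY x LIMIT 20;

  -- Later pages: seek past the last x of the previous page:
  SELECT x FROM foo WHERE x > :last_x ORDER BY x LIMIT 20;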





Thanks for your great help.


Thanks,
VK






Re: [sqlite] no TRUNCATE

2006-09-13 Thread Christian Smith

P Kishor uttered:


looking at the SQL supported by SQLite, I see no mention of TRUNCATE.
Not in the SQL clauses supported, nor in the clauses omitted. Is
TRUNCATE a weird beast?




I don't think it's standard SQL. At least not SQL92, which SQLite aims to 
implement.


TRUNCATE is an optimised version of DELETE for tables, which dispenses 
with deleting row by row. SQLite already optimises this case anyway, by 
DROPping a table and recreating it when a DELETE with no WHERE clause is 
executed.
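
So the portable spelling in SQLite is simply:

  DELETE FROM tablename;   -- no WHERE clause: cleared without row-by-row work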


Christian





Re: [sqlite] SQLite3 working with Visual C++.net 2003

2006-09-12 Thread Christian Smith

Abhilash Menon uttered:


After a lot of reading and experimenting, I was able to get SQLite3
working with Visual C++.net 2003.
Here is what I did

...

Built the project and it worked without any issues. If you have a
database ready you are ready to test and it works fine



Wow! You poor soul!

Thanks for reminding me why I hate developing on Windows:)





--
Abhi Menon






Re: Re: [sqlite] met "ARM7TDMI raised an exception,data abort" when executing sqlite3Parser() in ARM environment

2006-09-12 Thread Christian Smith

[EMAIL PROTECTED] uttered:


I don't think the NULL callback and error pointer are the reason, because 
I've seen this kind of usage on the list several times.

Anyway, I'll try later and report the result to the list.

If it is a misaligned-pointer problem, what can I do?



You say you are using an OS-less platform. Check the alignment of pointers 
returned by your own malloc() functions. Not sure of the alignment 
requirements of ARM, but I presume it's 32-bit.


Do you handle the pointer exception? Try dumping the contents of the 
pointer. If it's not 32-bit aligned, it's likely an alignment issue.





I didn't make any changes to the sqlite source code; how could this happen? 
I can run it correctly on a PC (Windows XP), so why not in an embedded 
environment? I'm confused...




On 9/11/06, Sarah <[EMAIL PROTECTED]> wrote:

During the execution of a test program in an embedded environment,
after it goes into sqlite3_exec()-->sqlite3_prepare()-->sqlite3RunParser(, 
zSql, ),
the debugger pops "the ARM7TDMI raised an exception, data abort" when the 
following statement is executed (bold and underlined):


It can be a misaligned pointer.
Try checking the value of r0.

Also, try giving it a real callback and error pointer, just to check that
it's not a bug where sqlite fails to check whether the parameters are NULL
(although I would not bet that it's the reason).


Regards,
~Nuno Lucas








Re: [sqlite] Re: met "ARM7TDMI raised an exception,data abort" when executing sqlite3Parser() in ARM environment

2006-09-11 Thread Christian Smith

Sarah uttered:


plus, I'm using sqlite-source-3_3_5.

I also found a weird phenomenon:

When I trace into sqlite3Parser(pEngine, TK_SEMI, pParse->sLastToken, 
pParse);


the next statement pointed to by the debugger does not match the real one; 
it seems to be off by one or two lines. That is to say, the debugger arrow 
doesn't point to the statement that is actually going to be executed next.




This can be a problem if the binary is compiled with optimisations of any 
level. You should turn optimisations off when compiling for debugging, as 
the compiler often rearranges the output assembler for optimal use of 
pipeline resources.






 - Original Message -
 From: Sarah
 To: sqlite-users@sqlite.org
 Sent: Monday, September 11, 2006 7:22 PM
 Subject: met "ARM7TDMI raised an exception,data abort" when executing 
sqlite3Parser() in ARM environment


 Hi,all

 During the execution of a test program in an embedded environment,
 after it goes into sqlite3_exec()-->sqlite3_prepare()-->sqlite3RunParser(, 
zSql, ),
the debugger pops "the ARM7TDMI raised an exception, data abort" when the 
following statement is executed (bold and underlined):

 ...
 abort_parse:
...
sqlite3Parser(pEngine, TK_SEMI, pParse->sLastToken, pParse);
...

 After "data abort" happens, the debugger stops at assembly languages like 
this(pointed by an arrow):
 ...
 [0xbc38]   pop  {r3-r5}
 [0xbc08]   pop  {r3}
 [0x4718]   bx   r3
 [0x6868]   ldr  r0,[r5,#4]
 [0x1c29]   mov  r1,r5
 [0x2800]   cmp  r0,#0
 [0xd018]   beq  0x42fc4  ; (__Heap_Alloc + 0x5a)
 --> [0x6802]   ldr  r2,[r0,#0]
 [0x42a2]   cmp  r2,r4
 [0xd311]   bcc  0x42fbc  ; (__Heap_Alloc + 0x52)
 [0x1c23]   mov  r3,r4
 [0x6802]   ldr  r2,[r0,#0]
 ...

 When I trace into sqlite3Parser(), it seems like the error is caused by 
"yy_reduce(yypParser,yyact-YYNSTATE);"

 My test program is like this:
 {
  char * database = ":memory:";
  sqlite3 * db;

  sqlite3_open(database, &db);

  sqlite3_exec(db, "create table myt(age smallint)", NULL, NULL, NULL);
  
 }

 sqlite3_open() runs successfully.
 And CPU type is ARM7TDMI, no operating system is used, such as linux.

 I'm totally confused. Please help me diagnose this error.
 Any little help will be appreciated.

 Sarah






Re: [sqlite] Sqlite 3.3.7 intel compiler warnings

2006-08-31 Thread Christian Smith

Jay Sprenkle uttered:


hidden variables might very well not be a problem, if it was done that
way on purpose.



It should never be done on purpose. It is obfuscation pure and simple. 
There is never a good reason to do this, other than perhaps hiding a 
global variable (and even that is not a good idea).


Just use a different variable name if it has a different use.

MHO.




On 8/30/06, Miguel Angel Latorre Díaz <[EMAIL PROTECTED]> wrote:

I already filtered out all the "common" warnings but these:

build.c(1969): remark #1599: declaration hides variable "v" (declared at
line 1883)
Vdbe *v = sqlite3GetVdbe(pParse);
  ^






