Re: [sqlite] Multithreading. Again.

2006-06-05 Thread Joe Wilson
--- Joe Wilson <[EMAIL PROTECTED]> wrote:
> Let's assume that, for a given OS, fcntl() file locks
> work perfectly well on any thread. Would it then be safe
> from an SQLite perspective to finalize statements that were 
> prepared in one thread in a different thread? (where the SQLite 
> connection would only be used on one thread at a time).

s/prepared/last stepped/

Please let me restate the question.
Is fcntl()'s reliability from any thread the sole issue preventing
the transfer of SQLite database connections between threads?

> --- [EMAIL PROTECTED] wrote:
> > It is possible to construct a case where finalizing a statement
> > from a different thread than the one where it was last stepped
> > would cause a problem.
> > 
> > Remember, that the operating system bug that is causing all the
> > multithreading grief is that file locks created by one thread
> > cannot be reliably removed or modified by a different thread.
> > So if a statement acquires a lock on the database file in one
> > thread and you try to finalize the statement in a different
> > thread, the finalization would involve releasing the lock in
> > a different thread from which it was acquired - an operation
> > that silently fails on certain Linux kernels.
> > 
> > On the other hand, if you sqlite3_reset() all statements in 
> > the thread where they were last run, then all locks are released
> > by the reset.  Then you are free to finalize the statements
> > from any thread you want.
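(A minimal sketch of the pattern described above -- reset in the last
thread that stepped the statement, finalize anywhere -- assuming pStmt
is an sqlite3_stmt* shared between the two threads:)

/* In the thread where the statement was last stepped: */
sqlite3_reset(pStmt);     /* releases any database locks the statement holds */

/* Later, in any other thread: */
sqlite3_finalize(pStmt);  /* safe now: no locks are left to release */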




Re: [sqlite] Multithreading. Again.

2006-06-05 Thread Joe Wilson
Let's assume that, for a given OS, fcntl() file locks
work perfectly well on any thread. Would it then be safe
from an SQLite perspective to finalize statements that were 
prepared in one thread in a different thread? (where the SQLite 
connection would only be used on one thread at a time).

--- [EMAIL PROTECTED] wrote:
> It is possible to construct a case where finalizing a statement
> from a different thread than the one where it was last stepped
> would cause a problem.
> 
> Remember, that the operating system bug that is causing all the
> multithreading grief is that file locks created by one thread
> cannot be reliably removed or modified by a different thread.
> So if a statement acquires a lock on the database file in one
> thread and you try to finalize the statement in a different
> thread, the finalization would involve releasing the lock in
> a different thread from which it was acquired - an operation
> that silently fails on certain Linux kernels.
> 
> On the other hand, if you sqlite3_reset() all statements in 
> the thread where they were last run, then all locks are released
> by the reset.  Then you are free to finalize the statements
> from any thread you want.




Re: [sqlite] autonum primary key

2006-06-05 Thread Derrell . Lipman
"[EMAIL PROTECTED]" <[EMAIL PROTECTED]> writes:

> I need help in generating a unique integer for the table's primary key. I am
> more familiar with MS Access, which has a data type called "Autonum" that
> generates the integer. Do I need to find the last record to know what the
> next number should be?

In SQLite, if you declare a column as INTEGER PRIMARY KEY and insert NULL into
that column, a unique value will be automagically inserted in that column for
you.  The value may be the "next" unused value, or could also be a
currently-unused value due to a "hole" in the sequence (perhaps because you
deleted a record).  If you need to guarantee that each new value is larger
than any previously used value (so that "holes" are never reused), you can
use INTEGER PRIMARY KEY AUTOINCREMENT instead of INTEGER PRIMARY KEY.

Derrell
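(A short sketch of both variants, assuming an already-open sqlite3 *db
handle; the table names are illustrative:)

/* rowid may reuse holes left by deletes */
sqlite3_exec(db, "CREATE TABLE t1(id INTEGER PRIMARY KEY, name TEXT)", 0, 0, 0);

/* rowid is always larger than any previously used value */
sqlite3_exec(db, "CREATE TABLE t2(id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)", 0, 0, 0);

/* inserting NULL generates the key; fetch it right afterwards */
sqlite3_exec(db, "INSERT INTO t1(id, name) VALUES(NULL, 'first')", 0, 0, 0);
printf("new key = %lld\n", sqlite3_last_insert_rowid(db));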


Re: [sqlite] autonum primary key

2006-06-05 Thread rbundy

See http://www.sqlite.org/faq.html, question 1.



From: "[EMAIL PROTECTED]"
To: sqlite-users@sqlite.org
Subject: [sqlite] autonum primary key




Hi,
I need help in generating a unique integer for the table's primary key. I
am more familiar with MS Access, which has a data type called "Autonum" that
generates the integer. Do I need to find the last record to know what the
next number should be?
Thanks in advance,
Bill








[sqlite] Re: autonum primary key

2006-06-05 Thread Igor Tandetnik

[EMAIL PROTECTED] wrote:

I need help in generating a unique integer for the table's primary
key.


See http://sqlite.org/lang_createtable.html, particularly the part about 
ROWID, PRIMARY KEY and AUTOINCREMENT


Igor Tandetnik 



[sqlite] autonum primary key

2006-06-05 Thread [EMAIL PROTECTED]
Hi,
I need help in generating a unique integer for the table's primary key. I am 
more familiar with MS Access, which has a data type called "Autonum" that 
generates the integer. Do I need to find the last record to know what the next 
number should be?
Thanks in advance,
Bill
 


Re: [sqlite] Re: Query performance issues - index selection

2006-06-05 Thread Steve Green

Joe,

Thanks for the info...

Unfortunately, we don't have the option of moving back to v3.2.1.  However,
I'm curious as well to see the difference in performance, so when (if?) I
get my current performance issues under control, I'll run the test and
post the results.

Steve

Joe Wilson wrote:

If you test against 3.2.1, just use your original where clause 
and original index pk_data:


 where utime >= 1146441600 and utime < 114912

as I don't know what effect the unary +'s would have on it.

--- Joe Wilson <[EMAIL PROTECTED]> wrote:


For my databases, GROUP BY is slower in recent SQLite releases
because my queries are not able to use indexed columns for 
GROUP BY items by design:


http://www.mail-archive.com/sqlite-users%40sqlite.org/msg15563.html

I'm curious what sort of timings you'd get with SQLite 3.2.1.
http://sqlite.org/sqlite-3.2.1.tar.gz

To convert to the older database format:

sqlite334 334.db .dump | sqlite321 321.db

And then perform your timings with both SQLite versions against 
the 321.db file to level the playing field.

Just having an index on utime should be optimal for SQLite 3.2.1.

--- Steve Green <[EMAIL PROTECTED]> wrote:


Using Richard's suggestion of changing the where clause of my query
to

where +utime >= 1146441600 and +utime < 114912000

did force sqlite to use the index that gave better performance.

However, I'm seeing some strange behavior that I'm hoping someone can
shed some light on.

With the time period mentioned below, the data set is about 2.5 million
rows and 86K distinct u_id values.  Using the index on (u_id, utime),
the query time was reduced from 13.5 minutes to 26 seconds (not great,
but at least the browser won't timeout waiting for a response).

However, with a different time period, I have a much smaller data set of
about 150K rows and 20K distinct u_id values.  Using the index on (u_id,
utime), the query still takes about 20 seconds.  However, if the primary
index on (utime, r_id, u_id) is used, the query only takes 0.5 seconds.
Unfortunately, at query time I have no way of knowing how much data is
going to have to be traversed, so the idea of modifying the query to force
the use of different indexes is not possible.  Can anyone explain why the
performance is so poor with the smaller data set and the "optimal" index?
Any suggestions on a workaround so that optimal performance can be achieved
with all data set sizes?

Thanks for your help,

Steve

Steve Green wrote:



Sorry, I forgot to mention that I'm using SQLite v3.3.4 on Red Hat Linux
7.3

Steve

Steve Green wrote:



Hi,

I have the following schema

CREATE TABLE data(
   utime int4,
   r_id int2,
   u_id int4,
   data1 int8,
   data2 int8
);

Each row is uniquely defined by utime, r_id, and u_id, so I have the
following index

CREATE UNIQUE INDEX pk_data on data( utime, r_id, u_id );

This index is also needed because at update time, I use "insert or
replace" to update the data table.

The type of query that I'm attempting to perform is similar to

select u_id, sum( data1 ), sum( data2 )
from data where utime >= 1146441600 and utime < 114912
group by u_id
order by sum( data1 ) desc
limit 10

My current table has about 2.5 million rows and about 86,000 distinct
u_id values in the time period selected, and the query takes about 13.5
minutes.

Performing an explain query plan reveals

0|0|TABLE data WITH INDEX pk_data

so the primary index is being used...

Based on some past experiences, I added the following index to the table

CREATE INDEX ix_data_ut on data( u_id, utime );

Note that the utime is the last parameter in the index.  With the primary
index in place, I was not able to convince sqlite to use this index.  To
test the index, I was forced to drop the primary index (which I can't do in
my production environment).  After dropping the primary index, an explain
query plan revealed

0|0|TABLE data WITH INDEX ix_data_ut

and the query ran in 26 seconds...

Subsequent tests using the following indexes provided no performance
improvement over the unique index, although sqlite's query planner chose
these indexes over the unique index

CREATE INDEX ix_data_tu on data( utime, u_id );
CREATE INDEX ix_data_t on data( utime );

So, is there a way that I can get sqlite to use the optimal index without
having to remove my unique index?  Perhaps this would involve rewriting the
query, but I'm at a loss as to how that could be done.

Thanks for your time,

Steve






--
Steve Green
SAVVIS
Transforming Information Technology SM


RE: [sqlite] Memory DB: Load from file

2006-06-05 Thread Dave Gierok
Thank you for the help Andrew and D. Richard Hipp.  But how would I do
this (page 24 & 25 of the slides) using the C/C++ interface?

{
   sqlite3* pFileDB;
   sqlite3* pMemoryDB;

   sqlite3_open(fileName, &pFileDB);
   sqlite3_open(":memory:", &pMemoryDB);

   //
   //WHAT DO I DO NOW?
   //
}

Thanks,
Dave Gierok

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Monday, June 05, 2006 12:49 PM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Memory DB: Load from file

Andrew Piskorski <[EMAIL PROTECTED]> wrote:
> On Mon, Jun 05, 2006 at 12:05:52PM -0700, Dave Gierok wrote:
> 
> > Is there a way to load a Sqlite file based DB and then specify we
want
> > that to go into memory?  Or is there a more efficient way to create
our
> > memory DB?
> 
> You could use attach to copy from an on-disk SQLite database:
> 
>   http://www.sqlite.org/google-talk-slides/page-024.html
>   http://www.sqlite.org/google-talk-slides/page-025.html
> 

Andrew's suggestion above is better than the sqlite3_raw_write()
routine of my previous post because Andrew's idea is portable.
Use this idea if it is fast enough for you.  The raw_write()
thing will be a little bit faster, but as I said, it is subject
to break without warning.
--
D. Richard Hipp   <[EMAIL PROTECTED]>
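
(A hedged sketch of the attach-and-copy idea from those slides, done
through the C interface; 'data.db' and the table name t1 are illustrative,
and error checking is omitted:)

/* pMemoryDB is the already-open ":memory:" connection */
sqlite3_exec(pMemoryDB, "ATTACH DATABASE 'data.db' AS src", 0, 0, 0);
sqlite3_exec(pMemoryDB, "BEGIN", 0, 0, 0);
/* NB: CREATE TABLE ... AS SELECT copies rows but not indexes or
** constraints; recreate those separately if you need them */
sqlite3_exec(pMemoryDB, "CREATE TABLE t1 AS SELECT * FROM src.t1", 0, 0, 0);
sqlite3_exec(pMemoryDB, "COMMIT", 0, 0, 0);
sqlite3_exec(pMemoryDB, "DETACH DATABASE src", 0, 0, 0);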



Re: [sqlite] Multithreading. Again.

2006-06-05 Thread John Stanton
In a multi-threaded approach all threads are co-operative since they 
belong to the same process, and therefore synchronization is possible in 
multiple ways.  Synchronization of multiple processes is only 
possible if they co-operate, such as by using file locks or semaphores. 
Synchronization over a network is more of a problem, handled by "lock 
managers" or daemons using inter-process communication.


I would suggest that people who want to distribute a database over a 
network use a network database like PostgreSQL, not a library like 
Sqlite which is crafted to be embedded into applications.  As Dr Hipp 
pointed out, it is a replacement for fopen(), not Oracle or DB2.


Doug Nebeker wrote:

The problem with that solution is that it assumes all database access
happens from within a single process.  As far as I understand it, SQLite
allows database access from multiple processes (and even from remote
processes I assume) and thus the locking has to happen outside of the
process.  In-process locking would be a nice conditionally compiled
optimization, but you'd have to guarantee that no other process would
ever touch the database.

-Original Message-
From: John Stanton [mailto:[EMAIL PROTECTED] 
Sent: Saturday, June 03, 2006 8:14 PM

To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Multithreading. Again.

Joe Wilson wrote:

Remember, that the operating system bug that is causing all the 
multithreading grief is that file locks created by one thread cannot 
be reliably removed or modified by a different thread.



You could have a single thread that exclusively performs file
locking/unlocking.  This thread would wait on a threadsafe work queue
(using a POSIX condition variable mechanism) and execute file
locking/unlocking tasks, otherwise it would use zero CPU.  Functions
could be provided to put file lock/unlock operations on to this work
queue and wait for the result.

Such file locking/unlocking functions could be called safely and
reliably from any thread.




That sounds like good practice to me.  When one discovers that a
particular function is flaky and variable between implementations, O/S's,
and versions, prudence requires that it be bypassed.  In our Sqlite usage
we want platform independence and reliability, and accordingly never rely
on file locking for synchronization.

A bonus of such an approach is simpler code and logic and better
execution efficiency.
JS







Re: [sqlite] Flashing memory database.

2006-06-05 Thread John Stanton
I would be tempted to create triggers to maintain a "dirty table" list 
and use that to drive the copy.  If you were to organize your copies in 
transactions with COMMITs you would be protected from hardware failures.


If you find a way to do it without a performance hit, expect a Nobel Prize.
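
(A minimal sketch of that trigger-driven dirty list, assuming an open
:memory: connection mem and an in-memory table named config -- the table
and trigger names are illustrative:)

sqlite3_exec(mem,
  "CREATE TABLE IF NOT EXISTS dirty(tbl TEXT PRIMARY KEY);"
  "CREATE TRIGGER config_i AFTER INSERT ON config BEGIN "
  "  INSERT OR REPLACE INTO dirty VALUES('config'); END;"
  "CREATE TRIGGER config_u AFTER UPDATE ON config BEGIN "
  "  INSERT OR REPLACE INTO dirty VALUES('config'); END;",
  0, 0, 0);
/* at flash time: copy only the tables listed in dirty, then DELETE FROM dirty */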

[EMAIL PROTECTED] wrote:
This is an implementation question but I am pretty sure it applies to many 
people out there.


I am using a memory database which I want to "flash" periodically. I have 
logically broken down the database into three pieces which have 3 degrees 
of importance (critical, high and low). I have written 3 functions that 
can be called to "flash" them in 3 different flash databases. 

Upon system startup I am checking if these 3 databases exist and if so I 
do a "create table xxx as select * from db1.xxx". 

What I am doing conceptually works although I hate the way I do it. What I 
am looking for is the optimal solution based on the following criteria:


- I want to copy a certain number of known tables to each database
- I do not want to write to flash if the memory table is identical to the 
flash table, in fact I would like to keep flash access to a minimum.
- I would like to not lose my configuration if the phone is powered down 
during flash operation

- I do not want a performance hit when I flash any of these

Any suggestion, any application notes out there ?

Mario




Re: [sqlite] Large DB Performance Questions

2006-06-05 Thread drh
Mark Drago <[EMAIL PROTECTED]> wrote:
> 
> The query I'm running is the following:
> select count(host), host from log group by host;
> 
> The results include the first time the query is run (when the file is
> not cached) and then the times of a few runs after that (when the file
> is cached).
> 
> SQLite: 221.9s, 1.6s, 1.6s, 1.6s
>  MySQL:   2.2s, 1.8s, 1.8s, 1.8s
> 
> It is apparent that SQLite is reading the entire database off of the
> disk and MySQL somehow is not.  The MySQL query cache is not in use on
> this machine and MySQL does not claim very much memory for itself before
> the test is conducted (maybe 30M).
> 
> I've tried looking in to the output from 'explain' to see if SQLite was
> using the index that I have on the 'host' column, but I don't think it
> is.  The output from 'explain' is included below.  Note that the
> 'explain' output is from a different machine which is running SQLite
> 3.3.5 compiled from source as the SQLite on FC5 kept Segfaulting when I
> tried to use 'explain'.
> 

From the EXPLAIN output, I see that your query is using
the HOST index only.  It is never even looking at the LOG
table.  And it is doing a single straight pass through the
HOST table - no binary searchs or anything like that to
slow it down.  This is all working correctly.

I'm guessing that the HOST index must be really badly
fragmented within the database file and that this is
preventing the read-ahead logic in your filesystem from
working well (or at all, probably).  Hence, each page
read from the disk is requiring one full revolution of
the disk platter and maybe some head movement too.  That
is what is slowing things down when you have a cold
cache.

MySQL is trouncing SQLite in this case probably for three
separate reasons:  (1) MySQL keeps each index in a separate
disk file so that all the information is closer together.
Hence the filesystem read-ahead logic is able to do a better
job.  (2) MySQL probably implements prefix compression on
its indices, which will make the indices much smaller.  This
is something that we need to add to SQLite (prefix compression)
but is not currently available.  And (3) MySQL may well do
a better job of keeping pages of the index in logical order.
But it does so at the expense of disk space.  MySQL and
SQLite have different objectives here.

If you wait to do the

   CREATE INDEX host ON log(host);

until right before you run your query (instead of creating
the index at the very beginning) that will tend to increase
the locality of the index and might make a big difference
in cold-cache performance.  Other than that, I cannot think
of anything to help, other than for me to get busy and
implement prefix compression on indices - and that is not
likely to happen this week  :-)  The other thing you might
try is using a large page size for the database: 32K instead
of 1K.  That will slow down insert performance somewhat but
should increase locality of reference and make cold-cache
queries run faster.  Maybe experiment with intermediate
page size values too:  4K or 8K.
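
(A short sketch of both suggestions, assuming an sqlite3 *db opened on a
fresh database file; the page size must be set before the first table is
created:)

sqlite3_exec(db, "PRAGMA page_size=32768", 0, 0, 0);
/* ... CREATE TABLE log(...); bulk-load all the rows ... */

/* build the index only when it is about to be used, so its pages are
** written out together and stay contiguous on disk */
sqlite3_exec(db, "CREATE INDEX host ON log(host)", 0, 0, 0);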

If I think of anything else I will let you know.  Thanks for
bringing this case to my attention.  I will be working on it
over the coming months.

--
D. Richard Hipp   <[EMAIL PROTECTED]>




[sqlite] Large DB Performance Questions

2006-06-05 Thread Mark Drago
Hello,

I'm writing a web cache and I want to use SQLite to store the log of all
of the accesses made through the web cache.  The idea is to install this
web cache in large institutions (1000-5000 workstations).  The log
database can grow in size very quickly and can reach in to the gigabytes
after just a few days.

Writing to the database is speedy enough that I haven't seen much of a
problem.  I collect the data for 1000 web requests and then insert them
all in a single transaction using a prepared statement.  This works
rather well.
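
(A sketch of that batching scheme, assuming an open sqlite3 *db and an
array rec[] of collected requests; the rec[] fields are illustrative and
the column list is abbreviated:)

sqlite3_stmt *ins;
int i;
sqlite3_prepare(db, "INSERT INTO log(add_dte, host, url) VALUES(?,?,?)",
                -1, &ins, 0);
sqlite3_exec(db, "BEGIN", 0, 0, 0);
for(i = 0; i < 1000; i++){
  sqlite3_bind_text(ins, 1, rec[i].dte, -1, SQLITE_STATIC);
  sqlite3_bind_text(ins, 2, rec[i].host, -1, SQLITE_STATIC);
  sqlite3_bind_text(ins, 3, rec[i].url, -1, SQLITE_STATIC);
  sqlite3_step(ins);
  sqlite3_reset(ins);
}
sqlite3_exec(db, "COMMIT", 0, 0, 0);
sqlite3_finalize(ins);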

The problem that I'm encountering has to do with generating reports on
the data in the log database.  SQLite is showing good performance on
some simple queries, but that is not the case once something more
advanced is involved, like an aggregate function for example.  More
over, once the SQLite file is cached in memory it is really quick.
However, I can't count on this file being cached at all when a user goes
to run the report.  So, I've been clearing my file cache before running
a test, and then running the same test again now that everything has
been loaded in to the cache.  Like I said, for most cases SQLite is
fine, but here is one example where it doesn't fare as well.

The system that I'm running these tests on is a P4 2.8GHz HT with 1 GB
of RAM running Fedora Core 5 and using SQLite version 3.3.3 (being as
that is what comes with FC5).  I'm doing my tests with a database that
is 732M in size and contains 1,280,881 records (the DB schema is
included below).

I clear the file cache by running the following command. I wait until it
consumes all of memory and then I kill it:
perl -e '@f[0..1]=0'

I'm running the tests by running the following script:
#!/bin/bash
echo "$1;" | sqlite3 log.db > /dev/null

The query I'm running is the following:
select count(host), host from log group by host;

The results include the first time the query is run (when the file is
not cached) and then the times of a few runs after that (when the file
is cached).

SQLite: 221.9s, 1.6s, 1.6s, 1.6s
 MySQL:   2.2s, 1.8s, 1.8s, 1.8s

The MySQL tests were done with the following script:
#!/bin/bash
mysql -u root --database=log -e "$1" > /dev/null

It is apparent that SQLite is reading the entire database off of the
disk and MySQL somehow is not.  The MySQL query cache is not in use on
this machine and MySQL does not claim very much memory for itself before
the test is conducted (maybe 30M).

I've tried looking in to the output from 'explain' to see if SQLite was
using the index that I have on the 'host' column, but I don't think it
is.  The output from 'explain' is included below.  Note that the
'explain' output is from a different machine which is running SQLite
3.3.5 compiled from source as the SQLite on FC5 kept Segfaulting when I
tried to use 'explain'.

Any information or ideas on how to speed up this query are greatly
appreciated.  The only un-implemented idea I have right now is to remove
some of the duplicated data from the schema in an attempt to reduce the
size of the average row in the table.  In some cases I can store just an
integer where I'm storing both the integer and a descriptive string
(category_name and category_no for example).  Some of the other
information in the schema holds data about things that are internal to
the web cache (profile*, ad*, etc.).

Thank you very much for any ideas,
Mark.

TABLE SCHEMA:
CREATE TABLE log(
log_no integer primary key,
add_dte datetime,
profile_name varchar(255),
workstation_ip integer,
workstation_ip_txt varchar(20),
verdict integer,
verdict_reason varchar(255),
category_name varchar(80),
category_no integer,
set_user_name varchar(255),
profile_zone varchar(40),
profile_zone_no integer,
author_user_name varchar(255),
workstation_name varchar(255),
workstation_group_name varchar(255),
profile_association varchar(255),
profile_association_no integer,
protocol varchar(40),
connection_type varchar(255),
connection_type_no integer,
host varchar(255),
url text,
ad_username varchar(255),
ad_groups text,
ad_domain varchar(255),
ad_workstation_name varchar(255),
ad_last_update_dte datetime);

INDEXES:
CREATE INDEX add_dte ON log (add_dte);
CREATE INDEX profile_name ON log(profile_name);
CREATE INDEX workstation_ip ON log(workstation_ip);
CREATE INDEX verdict ON log (verdict);
CREATE INDEX research_zone_no ON log(research_zone_no);
CREATE INDEX profile_zone_no ON log(profile_zone_no);
CREATE INDEX workstation_name ON log(workstation_name);
CREATE INDEX workstation_group_name ON log(workstation_group_name);
CREATE INDEX profile_association_no ON log(profile_association_no);
CREATE INDEX connection_type_no ON log(connection_type_no);
CREATE INDEX host ON log(host);
CREATE INDEX ad_username on log(ad_username);
CREATE INDEX ad_domain on log(ad_domain);
CREATE INDEX ad_workstation_name on log(ad_workstation_name);

EXPLAIN OUTPUT:
sqlite> explain select count(host), host from log group by host;
0|Noop|0|0|
1|MemInt|0|3|
2|MemInt|0|2|
3|Goto|0|16|

[sqlite] sqlite3HashInsert issue converting from sqlite-2.8.13 to sqlite-3.3.5

2006-06-05 Thread Sleigher, William H.
I am using tcl8.4.6 and have upgraded my sqlite from sqlite-2.8.13 to
sqlite-3.3.5.  I originally tried to update my tcl8.4.6 to tcl8.4.11 and
could not get my application to run.  I can get my application to come
up with tcl8.4.6 and the sqlite-3.3.5 with some problems.  After
changing application calls from sqlite to sqlite3 where needed, the
application crashes with a signal 11 (segfault) after calling
sqlite3HashInsert.  I have placed fprintf statements in the
sqlite3HashInsert, so it gets there, it just doesn't return.  It appears
that the arguments are correct in calling the sqlite3HashInsert in
hash.c.  This procedure is called in my application after some parsing
and detecting a semi-colon.  

 

Has anyone experienced problems in their application(s) upgrading to
sqlite-3.3.5 when making a call to sqlite3HashInsert and if so, is there
a solution to this problem?

 

Thanks

-Bill

 



Re: [sqlite] SQlite

2006-06-05 Thread Stefan de Konink
On Mon, 5 Jun 2006, Cesar David Rodas Maldonado wrote:

> I am developing something for that.
>
> Where can I send you my script? (It is done in C.)

I think more people will be interested in it. So if it is not too big, send
it to the list; otherwise put it on a website :) (or mail me privately)



Stefan



Re: [sqlite] SQlite

2006-06-05 Thread Cesar David Rodas Maldonado

I am developing something for that.

Where can I send you my script? (It is done in C.)

On 6/5/06, Stefan de Konink <[EMAIL PROTECTED]> wrote:


On Mon, 5 Jun 2006, Cesar David Rodas Maldonado wrote:

> I am interested in doing something for SQLite, and I was thinking of
> trying to do multithreading for reading and writing at the same time,
> like MySQL, or I would like to do something for full-text search.
>
> Please people, answer me giving me ideas!!!

Full-text search would be sweet :) Currently my database is too big to do a
'like' in reasonable time.


Stefan




Re: [sqlite] SQlite

2006-06-05 Thread Stefan de Konink
On Mon, 5 Jun 2006, Cesar David Rodas Maldonado wrote:

> I am interested in doing something for SQLite, and I was thinking of
> trying to do multithreading for reading and writing at the same time,
> like MySQL, or I would like to do something for full-text search.
>
> Please people, answer me giving me ideas!!!

Full-text search would be sweet :) Currently my database is too big to do a
'like' in reasonable time.


Stefan



[sqlite] SQlite

2006-06-05 Thread Cesar David Rodas Maldonado

hello

I am interested in doing something for SQLite, and I was thinking of trying
to do multithreading for reading and writing at the same time, like MySQL,
or I would like to do something for full-text search.

Please people, answer me giving me ideas!!!


Re: [sqlite] Memory DB: Load from file

2006-06-05 Thread drh
Andrew Piskorski <[EMAIL PROTECTED]> wrote:
> On Mon, Jun 05, 2006 at 12:05:52PM -0700, Dave Gierok wrote:
> 
> > Is there a way to load a Sqlite file based DB and then specify we want
> > that to go into memory?  Or is there a more efficient way to create our
> > memory DB?
> 
> You could use attach to copy from an on-disk SQLite database:
> 
>   http://www.sqlite.org/google-talk-slides/page-024.html
>   http://www.sqlite.org/google-talk-slides/page-025.html
> 

Andrew's suggestion above is better than the sqlite3_raw_write()
routine of my previous post because Andrew's idea is portable.
Use this idea if it is fast enough for you.  The raw_write()
thing will be a little bit faster, but as I said, it is subject
to break without warning.
--
D. Richard Hipp   <[EMAIL PROTECTED]>



Re: [sqlite] Memory DB: Load from file

2006-06-05 Thread drh
"Dave Gierok" <[EMAIL PROTECTED]> wrote:
> I am developing a game on Xbox360 using Sqlite for a lot of our data
> manipulation.

That's good to know.  I'll be shortly adding a Microsoft logo
to the image of companies and projects using SQLite at

   http://www.sqlite.org/google-talk-slides/logos4.gif

:-)

> We use the ':memory:' functionality to create our DB in
> RAM, because we need our DB manipulations to be very fast - we can't
> afford for our queries to cause seeks/loads on the DVD.  
> 
> This works very well for us with one exception:  It takes a long time to
> load the DB.  Since we need the DB to be in memory, we create an empty
> ':memory:' DB, then load up a file which contains all the SQL (CREATE
> TABLE, INSERT INTO) we need to create our tables and execute them on the
> memory-DB.  This process currently takes 10 seconds (all running the SQL
> statements to create the tables), which is not acceptable for our game.
> 

The code below should do what you need.  First create your :memory:
database.  Then open the file that contains the initialization
database.  Then read a chunk of the file and write that same chunk
into the :memory: database by calling sqlite3_raw_write().

The code below messes with internal data structures of SQLite and
is not guaranteed to work in future SQLite releases.  So beware.


/*
** This routine is called to write data directly into the database image
** (presumably taken from a database file created externally).
**
** The database from which the supplied data is taken must have a page-size
** equal to the value of the SQLITE_DEFAULT_PAGE_SIZE macro this file is
** compiled with.
*/
int sqlite3_raw_write(
  sqlite3 *db, 
  int nData, 
  int iOffset, 
  unsigned const char *zData
){
  Pager *pPager;
  int pageSize;  /* Copy of pPager->pageSize */
  int psAligned; /* Copy of pPager->psAligned */
  int n; /* Remaining bytes to copy into database pages */
  unsigned const char *z;/* Bytes to copy into database pages */
  int iPage; /* Page number to copy into */
  int iOff;  /* Offset on iPage to write to */
  int rc = SQLITE_OK;/* Result code */
  unsigned char *p1 = 0; /* First page of database */

  assert( db );
  assert( nData>=0 );
  assert( iOffset>=0 );
  assert( db->aDb[0].pBt );
  assert( zData || nData==0 );

  pPager = sqlite3BtreePager(db->aDb[0].pBt);
  psAligned = pPager->psAligned;
  pageSize = pPager->pageSize;
  z = zData;
  n = nData;

  /* This routine may not be used if any statements or a transaction are
  ** currently active. If this is not the case and we can proceed, grab a
  ** reference to page 1 and hold it until the end of this function so that
  ** the transaction is not rolled back because the page reference count 
  ** reaches zero.
  */
  if( pPager->nRef>1 ){
rc = SQLITE_MISUSE;
  }else{
rc = sqlite3pager_get(pPager, 1, (void **)&p1);
  }

  iPage = (iOffset / psAligned) + 1;
  iOff = (iOffset % psAligned);
  while( n>0 && rc==SQLITE_OK ){
int bytes;   /* Number of bytes to write */
unsigned char *p = 0;/* Page iPage */
bytes = pageSize-iOff;
bytes = (bytes<n ? bytes : n);

/* The remainder of this routine was truncated in the archive; the
** completion below is a reconstruction from the surrounding comments
** and the 3.3-era pager API, not drh's verbatim code. */
rc = sqlite3pager_get(pPager, iPage, (void**)&p);
if( rc==SQLITE_OK ){
  rc = sqlite3pager_write(p);
  if( rc==SQLITE_OK ){
    memcpy(&p[iOff], z, bytes);
  }
  sqlite3pager_unref(p);
}
z += bytes;
n -= bytes;
iOff = 0;
iPage++;
  }

  if( p1 ){
sqlite3pager_unref(p1);
  }
  return rc;
}
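
(A hedged sketch of the caller side, assuming the routine above is compiled
into SQLite; "init.db" is an illustrative file name and error handling is
minimal:)

FILE *in = fopen("init.db", "rb");
unsigned char buf[8192];
int n, off = 0;
while( (n = fread(buf, 1, sizeof(buf), in))>0 ){
  if( sqlite3_raw_write(pMemoryDB, n, off, buf)!=SQLITE_OK ) break;
  off += n;
}
fclose(in);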



Re: [sqlite] Memory DB: Load from file

2006-06-05 Thread Andrew Piskorski
On Mon, Jun 05, 2006 at 12:05:52PM -0700, Dave Gierok wrote:

> Is there a way to load a Sqlite file based DB and then specify we want
> that to go into memory?  Or is there a more efficient way to create our
> memory DB?

You could use attach to copy from an on-disk SQLite database:

  http://www.sqlite.org/google-talk-slides/page-024.html
  http://www.sqlite.org/google-talk-slides/page-025.html

-- 
Andrew Piskorski <[EMAIL PROTECTED]>
http://www.piskorski.com/


Re: [sqlite] Multithreading. Again.

2006-06-05 Thread Nikki Locke
Doug Nebeker wrote:
> The problem with that solution is that it assumes all database access
> happens from within a single process.  As far as I understand it, SQLite
> allows database access from multiple processes (and even from remote
> processes I assume) and thus the locking has to happen outside of the
> process.  In process locking would be a nice conditionally compiled
> optimization, but you'd have to guarantee that no other process would
> never touch the database. 
>  
> -Original Message- 
> You could have a single thread that exclusively performs file 
> locking/unlocking. This
> thread would wait on a threadsafe work queue (using a POSIX 
> condition variable mechanism) and execute file locking/unlocking 
> tasks, otherwise it would use zero CPU. Functions could be provided to
> put file lock/unlock operations on to this work queue and wait for
> the result. 
> Such file locking/unlocking functions could be called
> safely and reliably from any thread. 

I read the original post to mean that the file locking/unlocking was still 
done, but that it was all done from a single worker thread, so that it was 
safe to do stuff that requires locks from any thread. Which solves your 
objection.


-- 
Nikki Locke, Trumphurst Ltd.  PC & Unix consultancy & programming
http://www.trumphurst.com/




[sqlite] Memory DB: Load from file

2006-06-05 Thread Dave Gierok
I am developing a game on Xbox360 using Sqlite for a lot of our data
manipulation.  We use the ':memory:' functionality to create our DB in
RAM, because we need our DB manipulations to be very fast - we can't
afford for our queries to cause seeks/loads on the DVD.  

 

This works very well for us with one exception:  It takes a long time to
load the DB.  Since we need the DB to be in memory, we create an empty
':memory:' DB, then load up a file which contains all the SQL (CREATE
TABLE, INSERT INTO) we need to create our tables and execute them on the
memory-DB.  This process currently takes 10 seconds (all running the SQL
statements to create the tables), which is not acceptable for our game.

 

Is there a way to load a Sqlite file based DB and then specify we want
that to go into memory?  Or is there a more efficient way to create our
memory DB?

 

Thank you very much for your help,

Dave Gierok



Re: [sqlite] Random error SQLITE_CANTOPEN on Windows XP SP2 with sqlite3_exec

2006-06-05 Thread Bogusław Brandys

[EMAIL PROTECTED] wrote:

Dave Dyer <[EMAIL PROTECTED]> wrote:

The real problem is that sqlite assumes it "owns" the temporary
transaction file that it created, and can do anything it wants with it; 
for example read, rename, or delete it.


I think this is a very reasonable assumption.

Any other process which 
gets its hooks into the file can obviously invalidate this assumption.




This is not at all obvious to someone who does not
use or program for Windows.

The obvious quick fix is to retry these file operations that must 
succeed,...


Can some windows programmers suggest patches to os_win.c to
implement this.

but the underlying problem is a fundamental one that deserves 
to be addressed. 


Yes, I agree.  But microsoft is unlikely to address the 
gross deficiencies in their filesystem.  So I suppose it 
will fall to me to find a work-around.


I'll look into the matter when I have a chance.

--
D. Richard Hipp   <[EMAIL PROTECTED]>



The solution used by others (for example Firebird) is to create an
exclusive lock on the file (in Firebird's case, the database file) to
prevent this situation.  However, in the case of sqlite.dll it is not so
obvious, because there is no database engine running all the time.



Regards
Boguslaw Brandys




Re: [sqlite] Random error SQLITE_CANTOPEN on Windows XP SP2 with sqlite3_exec

2006-06-05 Thread Bogusław Brandys

A. Pagaltzis wrote:

* Roger Binns <[EMAIL PROTECTED]> [2006-06-05 00:05]:

I don't know if anyone has tracked how well Spotlight (Mac) or
Beagle (Linux) work with SQLite files.


They cause no problems. The semantics of the filesystem API on
*nix systems differ from those of Windows; in short, a file and
its name are separate entities on *nix and locking is advisory.

One effect is that you can delete the name of a file without the
file being (immediately) affected. It is only "garbage collected"
once there are no other names for it elsewhere in the filesystem
nor any processes holding it open. There are several other
effects that cumulatively make concurrent modifications of the
filesystem by multiple processes a non-issue in most cases where
it requires hacky workarounds on Windows. (It does mean you have
to be more careful and diligent about some things, as well.)

This is what makes it a reasonable assumption for SQLite that it
"owns" any temporary transaction file it created. On *nix, that
Just Works with no further effort or any thought about action at
a distance.
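
(A tiny illustration of that unlink-while-open behaviour on *nix; the file
name is arbitrary:)

#include <fcntl.h>
#include <unistd.h>

int main(void){
  int fd = open("scratch.tmp", O_RDWR|O_CREAT|O_EXCL, 0600);
  unlink("scratch.tmp");          /* the name is gone immediately... */
  write(fd, "still usable", 12);  /* ...but the open handle keeps working */
  close(fd);                      /* storage is reclaimed only here */
  return 0;
}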

Regards,


Sqlite could create the journal with exclusive RW access under Windows.
That way no other process can read from it.  The only problem is that the
other process could be just another sqlite-driven application ;-)

In this case it must use the same lock (which probably means a single
sqlite instance?), which further complicates the issue (shared memory
needed?) - I don't even know if this is possible.


Anyway - it is a bad design of other applications (like TortoiseSVN) -
which should never exclusively lock files they do not own.


Just my 2 cents.

Regards
Boguslaw Brandys


[sqlite] case insensitive joins and comparisons

2006-06-05 Thread Mikey C

What is the best/most efficient way to perform comparisons or joins on data
where case sensitivity is not important?

e.g. join two tables where the primary and foreign key values have different
case?

Is it best to use LIKE or upper() or lower() or some other way of ignoring case?

Thanks,

Mike
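
(One common approach, sketched here with SQLite's built-in NOCASE
collation; the table names a and b and an open sqlite3 *db handle are
assumptions for illustration:)

/* declare the key case-insensitive once, at table-creation time... */
sqlite3_exec(db, "CREATE TABLE a(k TEXT COLLATE NOCASE PRIMARY KEY)", 0, 0, 0);
/* ...or request it per comparison in the join itself */
sqlite3_exec(db, "SELECT * FROM a JOIN b ON a.k = b.k COLLATE NOCASE",
             0, 0, 0);
/* NB: NOCASE folds case for ASCII characters only */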



RE: [sqlite] Multithreading. Again.

2006-06-05 Thread Doug Nebeker
The problem with that solution is that it assumes all database access
happens from within a single process.  As far as I understand it, SQLite
allows database access from multiple processes (and even from remote
processes I assume) and thus the locking has to happen outside of the
process.  In-process locking would be a nice conditionally compiled
optimization, but you'd have to guarantee that no other process would
ever touch the database.

-Original Message-
From: John Stanton [mailto:[EMAIL PROTECTED] 
Sent: Saturday, June 03, 2006 8:14 PM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Multithreading. Again.

Joe Wilson wrote:
>>Remember, that the operating system bug that is causing all the 
>>multithreading grief is that file locks created by one thread cannot 
>>be reliably removed or modified by a different thread.
> 
> 
> You could have a single thread that exclusively performs file
locking/unlocking.
> This thread would wait on a threadsafe work queue (using a POSIX 
> condition variable mechanism) and execute file locking/unlocking 
> tasks, otherwise it would use zero CPU. Functions could be provided to

> put file lock/unlock operations on to this work queue and wait for the
result.
> Such file locking/unlocking functions could be called safely and 
> reliably from any thread.
> 
That sounds like good practice to me.  When one discovers that a
particular function is flaky and variable between implementations, O/S's,
and versions, prudence requires that it be bypassed.  In our Sqlite usage
we want platform independence and reliability, and accordingly never rely
on file locking for synchronization.

A bonus of such an approach is simpler code and logic and better
execution efficiency.
JS
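
(A minimal sketch of the single-lock-thread design Joe Wilson describes,
using POSIX threads; all names here are illustrative, and perform_fcntl()
is a hypothetical helper that issues the actual fcntl() call:)

#include <pthread.h>

typedef struct LockJob LockJob;
struct LockJob {
  int done;               /* set by the worker when the job is finished */
  int result;             /* result of the fcntl() call */
  LockJob *pNext;
  pthread_cond_t ready;   /* signalled when done */
  /* ... file descriptor, lock type, byte range, etc. ... */
};

static pthread_mutex_t qMutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t qNotEmpty = PTHREAD_COND_INITIALIZER;
static LockJob *qHead = 0;

int perform_fcntl(LockJob *pJob);   /* hypothetical: does the real locking */

/* Called from any thread: queue a job and wait for the worker to run it. */
int run_lock_job(LockJob *pJob){
  pthread_cond_init(&pJob->ready, 0);
  pthread_mutex_lock(&qMutex);
  pJob->done = 0;
  pJob->pNext = qHead;
  qHead = pJob;
  pthread_cond_signal(&qNotEmpty);
  while( !pJob->done ) pthread_cond_wait(&pJob->ready, &qMutex);
  pthread_mutex_unlock(&qMutex);
  return pJob->result;
}

/* The dedicated worker: the only thread that ever touches fcntl() locks. */
void *lock_thread(void *unused){
  for(;;){
    LockJob *pJob;
    pthread_mutex_lock(&qMutex);
    while( qHead==0 ) pthread_cond_wait(&qNotEmpty, &qMutex);
    pJob = qHead;
    qHead = pJob->pNext;
    pthread_mutex_unlock(&qMutex);
    pJob->result = perform_fcntl(pJob);
    pthread_mutex_lock(&qMutex);
    pJob->done = 1;
    pthread_cond_signal(&pJob->ready);
    pthread_mutex_unlock(&qMutex);
  }
  return 0;
}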





[sqlite] Flashing memory database.

2006-06-05 Thread Mario . Hebert
This is an implementation question but I am pretty sure it applies to many 
people out there.

I am using a memory database which I want to "flash" periodically. I have 
logically broken down the database into three pieces which have 3 degrees 
of importance (critical, high and low). I have written 3 functions that 
can be called to "flash" them in 3 different flash databases. 

Upon system startup I am checking if these 3 databases exist and if so I 
do a "create table xxx as select * from db1.xxx". 

What I am doing conceptually works although I hate the way I do it. What I 
am looking for is the optimal solution based on the following criteria:

- I want to copy a certain number of known tables to each database
- I do not want to write to flash if the memory table is identical to the 
flash table, in fact I would like to keep flash access to a minimum.
- I would like to not lose my configuration if the phone is powered down 
during flash operation
- I do not want a performance hit when I flash any of these

Any suggestion, any application notes out there ?

Mario

[sqlite] Re: Vacuum command

2006-06-05 Thread Igor Tandetnik

Anish Enos Mathew <[EMAIL PROTECTED]>
wrote:

I want to do a vacuum on my table. Can any one tell me
whether the below statement is the valid syntax for the vacuum
command.

 sqlite3_exec (db, "vacuum (data_table)", NULL, NULL, NULL);


The syntax is

VACUUM;
-- or
VACUUM table_name;

but the table name is ignored. See http://sqlite.org/lang_vacuum.html


How can we make sure that the command is really executing or not.


I don't understand this question. What makes you mistrust the engine?


Will
there be any time difference if we do the vacuum for a table?


No. The syntax allowing for a table name is for backward compatibility 
only.



I tried
inserting some records with and without using the vacuum command, but
there was not much time difference between the two.


VACUUM has nothing to do with execution time. Deleting records may 
result in empty pages that are kept in the database, to be reused later 
for new records. Under some usage patterns, this may result in lots of 
unused space in the database file. VACUUM command frees this space and 
makes the file smaller.


Igor Tandetnik 
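
(To see whether the command really ran, check the return code -- a sketch,
assuming an open sqlite3 *db handle:)

char *zErr = 0;
if( sqlite3_exec(db, "VACUUM", 0, 0, &zErr)!=SQLITE_OK ){
  fprintf(stderr, "vacuum failed: %s\n", zErr);
  sqlite3_free(zErr);
}
/* comparing the database file's size before and after is another check */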



Re: [sqlite] SQLite incompatible with NFS?

2006-06-05 Thread drh
Halim Issa <[EMAIL PROTECTED]> wrote:
> Hi,
> 
> I have been trying to get an application that utilizes SQLite to run on our 
> system, but according to the application developers it fails because SQLite 
> is incompatible with NFS and thus cannot work on NFS-mounted volumes.
> Attempts at locating this bug in the SQLite database has been futile, so I 
> wondered if someone could kindly point me towards the bug ID of this problem?
> 

The bug is in NFS, not in SQLite.

SQLite uses fcntl() file locks, which many NFS implementations 
do incorrectly or not at all.

--
D. Richard Hipp   <[EMAIL PROTECTED]>
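
(A quick way to probe a mount for working fcntl() locks -- a sketch, not
part of SQLite; pass it a path on the NFS volume:)

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv){
  struct flock lk;
  int fd = open(argv[1], O_RDWR|O_CREAT, 0644);
  if( fd<0 ){ perror("open"); return 1; }
  lk.l_type = F_WRLCK;
  lk.l_whence = SEEK_SET;
  lk.l_start = 0;
  lk.l_len = 1;
  if( fcntl(fd, F_SETLK, &lk)==-1 ){
    perror("fcntl(F_SETLK)");   /* locking broken or unsupported here */
    return 1;
  }
  printf("write lock acquired - fcntl locking appears to work\n");
  return 0;
}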



Re: [sqlite] SQLite incompatible with NFS?

2006-06-05 Thread Clay Dowling

Halim Issa wrote:

Hi,

I have been trying to get an application that utilizes SQLite to run on our 
system, but according to the application developers it fails because SQLite 
is incompatible with NFS and thus cannot work on NFS-mounted volumes.
Attempts at locating this bug in the SQLite database has been futile, so I 
wondered if someone could kindly point me towards the bug ID of this problem?


The application in question is KDE's digiKam, and the bugs are described in 


Yes this is a main sqlite problem. Look here :

http://bugs.kde.org/show_bug.cgi?id=116252
http://article.gmane.org/gmane.comp.kde.digikam.devel/2166/


I'm a bit puzzled by this, as all our clients are diskless and thus rely on 
NFS for both /home and other volumes, and we run multiple other SQLite-based 
applications without trouble.


Any insight regarding the status and progress of this bug would be greatly 
appreciated!


That will depend entirely on your OS vendor's plans to upgrade NFS and 
file locking.  This is an old and known problem with NFS, and is not a 
problem with SQLite.  SQLite relies on filesystem locking to control 
database access during writes, and many (most?) network filesystems 
don't have very fine-grained locking.


You may get away with using SQLite just fine on a networked file system, 
but don't be shocked if the database gets corrupted during writes.  It's 
mentioned in the documentation, and again, isn't an SQLite bug.


Clay
--
CeaMuS, Simple Content Management
http://www.ceamus.com


[sqlite] How to cross-compile sqlite for ARM9

2006-06-05 Thread somebodydudu
How to cross-compile sqlite for ARM9?
 
 
I have read some articles about that,
but it doesn't work.
 
Could someone send the image to me,
or send the modified package to me?
 
I really appreciate it.
My address: [EMAIL PROTECTED]
 
 
 

Re: [sqlite] reasonable assumptions

2006-06-05 Thread René Tegel

[EMAIL PROTECTED] schreef:

See http://www.sqlite.org/cvstrac/chngview?cn=3200 for the fix.

  


In the changes I don't see calls to GetLastError() as MS suggests. 
Error logging might help trace the error. Maybe an sqlite error log is 
a useful feature request..


Another hint from the DeleteFile API documentation 
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/fileio/fs/deletefile.asp 
reads:
"If you request delete permission at the time you create a file, you can 
delete or rename the file with that handle, but not with any other 
handle. For more information, see File Security and Access Rights."
However I am unable to find how to delete a file based on this handle; 
all MSDN docs say you should close the/all handle(s) prior to deleting.
Another suggestion from MSDN (and a known workaround) is renaming the 
file prior to deleting. Renaming will succeed even if the file is open 
for reading, and at least a new journal can be created. However, if it 
works it will put additional overhead on each transaction.


The pause and retries in the patch seem very arbitrary; they may not 
work correctly if the journal file is larger (several MB or so), but it 
may be good for tracing the issue.
Since setting synchronous to OFF seems to help, it suggests a timing 
error. You could try to close the handle, delete the file right after, 
and only then flush buffers to disk - no flushing after the file handle 
is closed, since those few hundred ms allow other applications to 
quickly open the file.


However, to be honest, personally I do not favour creating workarounds 
for things that are supposed to work but are affected by other software. 
In this case, I think it is reasonable to say the cause is outside 
sqlite, namely a virus scanner or TortoiseSVN.
Any self-respecting virus scanner will have kernel hooks to monitor I/O, 
acting transparently to applications (= no unexpected 'locking' issues), 
which leaves Tortoise.


If TortoiseSVN uses ReadDirectoryChangesA/W() to monitor the disk, opens 
files to scan them for changes, and thereby locks sqlite, I am inclined 
to say it is a bug in Tortoise, not in SQLite. Maybe configuring 
Tortoise to skip journal files would already fix it; if such a feature is 
not available they could easily implement it (just skip all files that 
SVN is supposed to ignore anyhow - it is unlikely one would want to have 
journal files in the SVN repository).
I've tried using such a drive-monitoring technique in the past, see 
http://www.howtodothings.com/computers/a841-how-to-create-a-pre-caching-system.html
and encountered several issues with applications that look a lot like 
this very issue.
If this is the case, there would be no 'uniform' fix, since you will not 
know what the remote application is doing, only hope it will release 
its handle soon.


Apologies for not having a straightforward fix. The changes made to 
os_win.c do not seem a real solution but probably won't do any harm either.
It seems NT allows more control over file and directory access than 
Win9x, but splitting the os_win layer may not be desired. As far as I 
can see the cause is outside sqlite, so I think it probably should be 
fixed outside sqlite.


kind regards,

Rene











RE: [sqlite] reasonable assumptions

2006-06-05 Thread Costas Stergiou
> The code compiles and runs for me.  I am running the regression
> test suite as I type this and there are no problems so far.  However,
> let me restate that I do not have the ability to test the behavior
> of SQLite in the presence of virus scanners and Tortoise SVN, so
> I am looking for people who do have that capability to help me
> by testing these changes thoroughly.  Please let me know if the
> changes implemented by check-in [3200] fail to fix the random
> SQLITE_CANTOPEN problem.
> 
I got the changes and recompiled the sqlite3.dll.
The problem is not there any more: I tested on 2 different machines (both
with WinXP and SVN) and I never got any error, while testing at the same
time with the previous version I got the error.

Although I cannot say that I am 100% certain that this solution is complete, I
can see that the error is not happening any more (and reading the changes I
can confirm that the patch approaches the problem properly).
I will do more tests later and report back should I find any failure.

Thank you Richard for your prompt response to this serious issue! 

Costas





[sqlite] Vacuum command

2006-06-05 Thread Anish Enos Mathew

Hi all,
I want to do a vacuum on my table. Can anyone tell me
whether the below statement is the valid syntax for the vacuum command.
How can we make sure that the command is really executing or not. Will
there be any time difference if we do the vacuum for a table? I tried
inserting some records with and without using the vacuum command, but
there was not much time difference between the two.

 sqlite3_exec (db, "vacuum (data_table)", NULL, NULL, NULL);
   
Regards,
Anish





Re: [sqlite] Random error SQLITE_CANTOPEN on Windows XP SP2 with sqlite3_exec

2006-06-05 Thread Roger Binns

* Roger Binns <[EMAIL PROTECTED]> [2006-06-05 00:05]:

I don't know if anyone has tracked how well Spotlight (Mac) or
Beagle (Linux) work with SQLite files.


They cause no problems. The semantics of the filesystem API on
*nix systems differ from those of Windows; in short, a file and
its name are separate entities on *nix and locking is advisory.


Having authored fileservers in the past I am fully aware of
all the semantic differences.  My point is that the world
is no longer as simple today as it used to be.  For example
every operating system that has users sitting in front of it
supports FAT/FAT32 since that is the default filesystem for
USB memory sticks.  FAT requires a directory entry for a file
(there isn't the layer of indirection like an inode as you were
discussing).  If an operating system is maintaining the FAT
filesystem in a stable state (eg because they allow for users
ripping the memory stick out at any point) then the filesystem
driver may choose to enforce FAT semantics even if the operating
system by default has different semantics.


There are several elements that combine to produce the final
behaviour.  It is no longer just Windows that you have to
worry about for broadest compatibility and interoperability.

- API semantics (eg Win32, libc, Core Foundation, stdio)
- Kernel syscall semantics (eg NTapi, Linux, Mac etc)
- Filesystem semantics (NTFS, ext3, HPFS+, FAT, NFS, SMB)
- Tag-along programs (Indexing, Beagle, Spotlight, Virus, Backup,
  Policy checkers)

Roger


Re: [sqlite] SQLite incompatible with NFS?

2006-06-05 Thread Roger Binns
> system, but according to the application developers it fails because SQLite
> is incompatible with NFS and thus cannot work on NFS-mounted volumes.


The bug is in the NFS implementations out there.  They do not perform
locking completely and correctly.  This cannot be fixed in SQLite - it
has to be fixed in the NFS implementations.

 http://sqlite.org/faq.html#q7

Roger