[sqlite] Sqlite Locking and Busy

2006-03-08 Thread John Stanton
For the person having difficulty with locking and busy with Sqlite it is 
important to recognize the LITE in the name.  A single file is used to 
hold the entire database, and each user process has that file open, which 
means that the locking is restricted to file locks, which essentially 
lock the entire file and hence the entire database.


The locking is essential if you don't want one process stomping all over 
the changes to the database being simultaneously made by another process.


If you want to have a great deal of concurrency, don't use SQLite; use 
PostgreSQL.  If you want the simplicity, ease of management and the 
small footprint of Sqlite, you need to work out a process whereby 
multiple user processes and threads can co-operate to access the Sqlite 
database without clashing.


If all the processes are co-operative you can sync with a semaphore or a 
mutex-like device.  If they are not, you must rely on the file locking and 
handle the BUSY and LOCKED messages to allocate control of the database 
to one process or thread at a time, by having all bar one process yield 
when there is contention.  If two processes want to submit a 
transaction, one must basically wait until the other finishes.


I find that syncing processes with a semaphore and threads with events 
provides an efficient and speedy control of Sqlite concurrent access, 
subject to the entire application obeying the rules.  Polling is avoided 
as is BUSY logic in the API calls.  Waits are eliminated and blocking is 
reduced to the minimum necessary to eliminate clashes.


[sqlite] "not an error"

2006-03-08 Thread Carl Jacobs
Hello,

I have the following process (roughly) on a 3.xx database:

== Start of process ==
1)  sqlite_prepare (an insert statement which is then used many times)
2)  loop to insert data (thousands of iterations)
  sqlite_reset / sqlite_bindxxx / sqlite_step
3)  'ATTACH DATABASE secondary'
  do some processing on attached database
'DETACH DATABASE secondary'
4)  sqlite_reset on prepared query from step 1
  at this point I get an SQLITE_SCHEMA error so I do an sqlite_finalize of
the old prepared statement, and sqlite_prepare a new statement - which is
the same as step 1.
5) sqlite_reset - return value is SQLITE_OK
6) sqlite_bindxxx - they all seem to work OK.
7) sqlite_step - ERROR!!  Return code is 1, and a call to sqlite_errmsg
gives the result "not an error".
== End of process ==

== More information ==
=>  In step 1 three inserts are prepared.
=>  In step 2 the three prepared statements are interleaved. (reset1, bind1,
step1, reset2, bind2, step2 etc)
=>  Steps 4, 5, 6, 7 are an expansion of step 2 - it's the same function
call.
=>  In step 4 only the first of the three queries is re-prepared before the
error in step 7.

BUT, the error is sort-of self-fixing!
So this is my program, and what happens:
{
  sqlite_open()
  Step1() - Success
  Step2() - Success
  Step2() - Success
  Step3() - Success
  Step2() - SQLITE_SCHEMA on reset1, so finalize1 prepare1 then error_code 1
on step1 and exit
  Step2() - SQLITE_SCHEMA on reset2, so finalize2 prepare2 then error_code 1
on step2 and exit
  Step2() - SQLITE_SCHEMA on reset3, so finalize3 prepare3 then error_code 1
on step3 and exit
  Step2() - Success
  Step2() - Success
  Step3() - Success
  Step2() - SQLITE_SCHEMA on reset1, so finalize1 prepare1 then error_code 1
on step1 and exit
  Step2() - SQLITE_SCHEMA on reset2, so finalize2 prepare2 then error_code 1
on step2 and exit
  Step2() - SQLITE_SCHEMA on reset3, so finalize3 prepare3 then error_code 1
on step3 and exit
  Step2() - Success
  Step2() - Success
}

Step2 inserts the same data every time.

So what "officially" should I do with an error code of 1 as the result of an
sqlite_step?

Regards,
Carl.
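In the C API, the ATTACH/DETACH of step 3 is what invalidates the statements prepared in step 1, so the SQLITE_SCHEMA in step 4 is expected; finalize and re-prepare is the usual recovery (and the later sqlite3_prepare_v2 interface does the retry internally). The same sequence can be sketched with Python's stdlib sqlite3 binding, which re-prepares behind the scenes, so it runs cleanly end to end (file names and row counts are arbitrary):

```python
import os
import sqlite3
import tempfile

d = tempfile.mkdtemp()
main_db = os.path.join(d, "main.db")
second_db = os.path.join(d, "second.db")

con = sqlite3.connect(main_db, isolation_level=None)  # autocommit mode
con.execute("PRAGMA synchronous = OFF")  # speed up this demo only

con.execute("CREATE TABLE t (x)")

# step 2: many inserts through one (internally cached) statement
con.execute("BEGIN")
con.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1000)])
con.execute("COMMIT")

# step 3: attach a secondary database, use it, detach it again
con.execute("ATTACH DATABASE ? AS secondary", (second_db,))
con.execute("CREATE TABLE secondary.s (y)")
con.execute("DETACH DATABASE secondary")

# step 4: reusing the insert re-prepares transparently after the
# schema change, so no "not an error" surfaces here
con.execute("BEGIN")
con.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1000)])
con.execute("COMMIT")

count = con.execute("SELECT COUNT(*) FROM t").fetchone()[0]
```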




Re: [sqlite] sqlite performance with sizeable tables

2006-03-08 Thread spaminos-sqlite
- Original Message 
From: [EMAIL PROTECTED]
To: sqlite-users@sqlite.org

> SQLite inserts in records in primary key order.  (That is not
> strictly true - but it is close enough to being true for the
> purposes of what follows.)  So when you insert records that
> are already in fname order, they can be appended to the file
> and the insert goes relatively quickly. This is because the
> file does not need to be reorganized to make space for the
> new row (it is simply appended) and because nearby information
> is already in cache and can be fetched quickly.  But if you 
> insert with randomly ordered fnames, then records are constantly 
> being inserted into different places in the middle of the file.
> This takes more time because the file has to be reorganized 
> to make space for the new record and because each record is 
> in a different spot you are have no locality of reference 
> and the cache is much less effective.

OK, now I understand better why there is such a difference in performance. It 
seems that SQLite may not be an option for me, as I am updating records pretty 
often (roughly 50% of the time: access some data, perform some task with it and 
later write back the updated version).

One question though: are the file accesses "sorted", so that seeks are minimised 
when performing a transaction (making the assumption that the file is not 
fragmented on disk)?

Nicolas
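The quoted explanation can be observed directly: inserting the same keys in primary-key order and then in shuffled order shows the locality effect. A sketch using Python's stdlib sqlite3 (in-memory databases, so page reorganization rather than disk seeks is measured; timings vary by machine and are not asserted):

```python
import random
import sqlite3
import time

def load(con, keys):
    # one transaction so per-statement commit cost doesn't dominate
    con.execute("CREATE TABLE t (k INTEGER PRIMARY KEY, v)")
    t0 = time.perf_counter()
    with con:
        con.executemany("INSERT INTO t VALUES (?, 'x')", ((k,) for k in keys))
    return time.perf_counter() - t0

n = 100_000
keys = list(range(n))

con_sorted = sqlite3.connect(":memory:")
t_sorted = load(con_sorted, keys)        # keys arrive in primary-key order

random.shuffle(keys)
con_random = sqlite3.connect(":memory:")
t_random = load(con_random, keys)        # keys arrive in random order

rows_sorted = con_sorted.execute("SELECT COUNT(*) FROM t").fetchone()[0]
rows_random = con_random.execute("SELECT COUNT(*) FROM t").fetchone()[0]
# t_random is usually noticeably larger than t_sorted; exact numbers vary
```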






RE: [sqlite] File locking additions

2006-03-08 Thread Marian Olteanu
I fully agree with you. This would be an external tool. But I underlined
that building such a tool is not a big enterprise. It can be done by a good
programmer in a reasonable amount of time.

Also, I would say that perfect sync over network file systems done by sqlite
is out of sqlite's scope. Especially if this would involve custom sync
mechanisms designed for each file system.

-Original Message-
From: Jay Sprenkle [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, March 08, 2006 8:18 AM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] File locking additions

On 3/7/06, Marian Olteanu <[EMAIL PROTECTED]> wrote:
> I would say that the best way to access a sqlite database mounted from a
> remote file server, concurrently with other processes is through a database
> server. My opinion is that the overhead of file sync and file locking for a
> remote file system is higher than simple TCP/IP communication overhead. The
> server would be able to use also the same cache, would have very fast access
> to the file (local file), etc.
>
> Building a server for sqlite is not a very complicated task. It can take as
> few as a couple hours.

I'd like to see this option built as a separate project from sqlite.
Sqlite is a great tool when you don't need a server, and I'd hate to lose
that.
Let's add more tools to our toolset instead of just morphing one into
whatever is needed at the moment.



Re: [sqlite] Deadlock when using transactions?

2006-03-08 Thread Dennis Brakhane
On 3/8/06, John Stanton <[EMAIL PROTECTED]> wrote:
> Try an EXCLUSIVE TRANSACTION.

This wouldn't help, as other processes need to read from the database.

I still think it's a bug: A cannot COMMIT its changes, because B's
INSERT failed!
That just doesn't make any sense to me. Why does B need to ROLLBACK
changes that were unsuccessful in any case?


The problem I'm trying to solve:
I have several worker threads that modify some parts of the database.
Each of these threads can take a long time to finish (They are
indexing directories of servers).

What I could do is the following:

CREATE TABLE foo (server_id INT, bar TEXT, newRevision BOOL)
- The search engine only looks for Data with newRevision = FALSE
- Each worker thread (responsible for one specific server_id) writes
(without transactions) with newRevision set to TRUE
- At the end, each worker thread does DELETE FROM foo where
newRevision==false AND server_id == x, followed by UPDATE foo SET
newRevision==false where server_id == x

But I find it kinda pointless to do a "poor man's transaction". Do I
have to switch to PostgreSQL to get "independent" transactions?
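Dennis's newRevision scheme can be sketched end to end; Python's stdlib sqlite3 is used for a runnable illustration, with server_id 1 and made-up row values:

```python
import sqlite3

con = sqlite3.connect(":memory:", isolation_level=None)
con.execute("CREATE TABLE foo (server_id INT, bar TEXT, newRevision BOOL)")
con.execute("INSERT INTO foo VALUES (1, 'old-a', 0), (1, 'old-b', 0)")

# the worker for server_id = 1 writes its new revision alongside the old one
con.execute("INSERT INTO foo VALUES (1, 'new-a', 1), (1, 'new-b', 1)")

# the "commit" step: drop the old revision, then promote the new one
con.execute("DELETE FROM foo WHERE newRevision == 0 AND server_id == 1")
con.execute("UPDATE foo SET newRevision = 0 WHERE server_id == 1")

# the search engine only ever sees newRevision = 0 rows
rows = sorted(r[0] for r in con.execute(
    "SELECT bar FROM foo WHERE newRevision == 0 AND server_id == 1"))
```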


Re: [sqlite] Deadlock when using transactions?

2006-03-08 Thread John Stanton

Try an EXCLUSIVE TRANSACTION.

Dennis Brakhane wrote:

(In case it matters, I'm running SQLite 3.2.8)

Hi. I have a problem with multiple processes that use transactions on
the same database:

Suppose I have two processes that access the same database
Now, both do a "begin transaction", which succeeds
Process A now issues "insert into foo values ('bar')", which succeeds
Process B now also issues "insert into foo values ('baz')", which
fails - database is locked
Now, neither can commit the changes. So, at least one has to do a
rollback; this however means that I will need to store the data (which
is generated on-the-fly during the transaction) somewhere, which seems
kinda pointless to me. I'm storing data so that I can store them in a
database.

So, is there a way to solve this problem?




Re: [sqlite] Busy management

2006-03-08 Thread John Stanton
Perhaps your best general approach is to establish some form of 
synchronization so that BUSYs never happen.  That would simplify your logic.

JS
Ludovic Ferrandis wrote:

I'm currently writing a C wrapper (yes, I know, yet another ... ;) for my
applications. But it's a good training to learn.

So I was looking at the best generic way to manage busy statement.
My first idea was to use the callback option as it seems to be much simpler
and it would benefit all functions (step, ...).
The loop option is very simple, but that implies writing the same loop code
for every function that could return SQLITE_BUSY.

The only issue that remains for now is how to manage the SQLITE_LOCKED
error. It's not very clear in the documentation when this error could be
returned (instead of SQLITE_BUSY).

I'm not sure the busy callback will be called before returning
SQLITE_LOCKED, like for SQLITE_BUSY.

So, for now, I will implement the loop option. I will see later how to
manage the SQLITE_LOCKED error.

Thanks

Ludovic

On 3/7/06, John Stanton <[EMAIL PROTECTED]> wrote:


It depends upon your application.  What are you trying to do?










Re: [sqlite] File locking additions

2006-03-08 Thread John Stanton
We use the server approach, with a simple daemon on the remote machine 
which delivers its result in XML in accordance with the schema.  It 
works well.


Marian Olteanu wrote:

I would say that the best way to access a sqlite database mounted from a
remote file server, concurrently with other processes is through a database
server. My opinion is that the overhead of file sync and file locking for a
remote file system is higher than simple TCP/IP communication overhead. The
server would be able to use also the same cache, would have very fast access
to the file (local file), etc.

Building a server for sqlite is not a very complicated task. It can take as
few as a couple hours.


-Original Message-
From: Adam Swift [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, March 07, 2006 6:07 PM

To: sqlite-users@sqlite.org
Subject: [sqlite] File locking additions

All,

In order to provide locking support for database files mounted from  
remote file systems (NFS, SMB, AFP, etc) as well as provide  
compatible locking support between database clients both local and  
remote, I would like to propose some additions to the existing  
database file locking behavior.


I have discussed this briefly with D. Richard Hipp and he has  
expressed interest in pursuing it.  We would appreciate feedback/ 
suggestions/concerns on the proposed solutions below.


1. Support a variety of locking mechanisms, from the lowest  
granularity (using a .lock file)  to the highest (the  
existing advisory locks).  A preliminary list: .lock files, flock,  
afp locks, posix advisory locks.


2. Allow clients to specify type of locking (on database open)  
manually or ask sqlite to automatically detect the best locking  
supported by the file system hosting the database file (automatic  
detection would not work in a mixed local/remote file system  
situation, all clients of a single database file need to use the same  
type of locking to avoid conflicts).


3. Extend the sqlite3_open commands to support URI style path  
references in order to specify the file system locking type (as  
opposed to modifying the arguments list).  After a little poking  
around on RFC 3986  I'm inclined  
to specify the locking choice via the query part of the URI.  For  
example:


file:///mounts/myhost/dbfile.db?locktype=flock
file:///localdisk/otherdbfile.db?locktype=automatic

Thanks in advance for your input.

Adam Swift
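The locktype query parameter is only a proposal here, but extracting it from such a URI is straightforward; a sketch with Python's urllib.parse (the function name and default value are invented for illustration):

```python
from urllib.parse import parse_qs, urlsplit

def locking_choice(uri, default="automatic"):
    """Return (filesystem path, locktype) from a file: URI as proposed."""
    parts = urlsplit(uri)
    if parts.scheme != "file":
        raise ValueError("expected a file: URI")
    qs = parse_qs(parts.query)
    # fall back to automatic detection when no locktype is given
    return parts.path, qs.get("locktype", [default])[0]

path, lock = locking_choice("file:///mounts/myhost/dbfile.db?locktype=flock")
```

The non-URI form of sqlite3_open could then be kept by treating any string without a `file:` scheme as a plain path.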






[sqlite] VxWorks

2006-03-08 Thread Eric Evenson
Does anyone have experience using sqlite with VxWorks?
I see references to VxWorks in the configure scripts, but it does not seem
to be implemented.
Any help would be appreciated.
--
Eric


Re: [sqlite] Read row data into array?

2006-03-08 Thread Jay Sprenkle
On 3/8/06, Pam Greene <[EMAIL PROTECTED]> wrote:
> On 3/8/06, cstrader <[EMAIL PROTECTED]> wrote:
> >
> > Would someone be willing to share with me c++ code that reads the result
> > of a select query into an array representing the data of the j rows in a
> > selected column?  I understand that callback() is executed once for each row
> > of the data.  But what is the best and fastest way to iteratively write
> > these row data into a single array that I can pass to my host program?
>
>
> Have you looked at sqlite3_get_table()?

Useful, but it doesn't support bound parameters,
which leaves you vulnerable to SQL injection attacks.
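Jay's point is easy to demonstrate. A sketch with Python's stdlib sqlite3, contrasting string-built SQL with a bound parameter (the table and values are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (name TEXT)")
con.executemany("INSERT INTO users VALUES (?)", [("alice",), ("bob",)])

hostile = "alice' OR '1'='1"

# string-built SQL: the input is parsed as SQL, so the OR matches every row
built = con.execute(
    "SELECT COUNT(*) FROM users WHERE name = '%s'" % hostile).fetchone()[0]

# bound parameter: the input is treated purely as data, matching nothing
bound = con.execute(
    "SELECT COUNT(*) FROM users WHERE name = ?", (hostile,)).fetchone()[0]
```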


Re: [sqlite] Read row data into array?

2006-03-08 Thread Pam Greene
On 3/8/06, cstrader <[EMAIL PROTECTED]> wrote:
>
> Would someone be willing to share with me c++ code that reads the result
> of a select query into an array representing the data of the j rows in a
> selected column?  I understand that callback() is executed once for each row
> of the data.  But what is the best and fastest way to iteratively write
> these row data into a single array that I can pass to my host program?


Have you looked at sqlite3_get_table()?

- Pam


Re: [sqlite] File locking additions

2006-03-08 Thread Christian Smith
On Tue, 7 Mar 2006, Adam Swift wrote:

>All,
>
>In order to provide locking support for database files mounted from
>remote file systems (NFS, SMB, AFP, etc) as well as provide
>compatible locking support between database clients both local and
>remote, I would like to propose some additions to the existing
>database file locking behavior.


Would be useful, especially for the Mozilla work.


>
>I have discussed this briefly with D. Richard Hipp and he has
>expressed interest in pursuing it.  We would appreciate feedback/
>suggestions/concerns on the proposed solutions below.
>
>1. Support a variety of locking mechanisms, from the lowest
>granularity (using a .lock file)  to the highest (the
>existing advisory locks).  A preliminary list: .lock files, flock,
>afp locks, posix advisory locks.


Why not start with just .lock and the existing locks? MacOS provides posix
advisory locks, I assume?


>
>2. Allow clients to specify type of locking (on database open)
>manually or ask sqlite to automatically detect the best locking
>supported by the file system hosting the database file (automatic
>detection would not work in a mixed local/remote file system
>situation, all clients of a single database file need to use the same
>type of locking to avoid conflicts).


So long as the better locking recognises the more primitive .lock files,
this should be workable. The use of .lock files is easy to test for at
database open time.

>
>3. Extend the sqlite3_open commands to support URI style path
>references in order to specify the file system locking type (as
>opposed to modifying the arguments list).  After a little poking
>around on RFC 3986  I'm inclined
>to specify the locking choice via the query part of the URI.  For
>example:
>
>   file:///mounts/myhost/dbfile.db?locktype=flock
>   file:///localdisk/otherdbfile.db?locktype=automatic


I'd be more inclined to add a PRAGMA. URIs are ugly and a pain to type in,
especially if you're lazy and rely on filename completion. Of course, I
assume the non-URI form would still work, but a PRAGMA still makes more
sense to me (and can be queried without parsing the URI.)

Perhaps multiple .lock files could be used to implement read/write locks.
Locks files of the form ".rd.lock." would be individual read
locks, which would be all hard links to the same ".rd.lock"
file, and the file reference count would be the read lock count.

A reserved lock would be indicated by the presence of both the
".rd.lock" and ".wr.lock" files, and once the last
reader has finished, the ".rd.lock" file is removed and the lock
is promoted to exclusive.

Such use of read/write lock files might slow the library down, though, as
directory operations are generally synchronous.
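Christian's link-count idea can be sketched with plain filesystem calls (no SQLite involved; the file names and reader pids are hypothetical):

```python
import os
import tempfile

d = tempfile.mkdtemp()
master = os.path.join(d, "dbfile.rd.lock")  # the shared read-lock file
open(master, "w").close()                   # created alongside the database

# each reader adds a hard link named with its pid; the inode's link
# count then tracks the number of readers (plus the shared name itself)
for pid in (101, 102):                      # hypothetical reader pids
    os.link(master, os.path.join(d, "dbfile.rd.lock.%d" % pid))

readers = os.stat(master).st_nlink          # master + two reader links

# a reader releases its lock by unlinking its own link
os.unlink(os.path.join(d, "dbfile.rd.lock.101"))
readers_after = os.stat(master).st_nlink
```

As the post notes, every link/unlink is a synchronous directory operation, which is the cost of this approach.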


>
>Thanks in advance for your input.
>
>Adam Swift
>

Christian


-- 
/"\
\ /ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
 X   - AGAINST MS ATTACHMENTS
/ \


Re: [sqlite] File locking additions

2006-03-08 Thread Helmut Tschemernjak

Hello Adam, all,

Cross-platform locking is definitely very important. The goal should be 
to use the identical sqlite DB via UNIX, AFP, SMB, NFS and others. My 
opinion is that no additional open parameters are needed; 
just use the locking features which are common to all platforms.


Using an additional TCP/IP server is more complex in terms of cross-platform 
compatibility, may be overkill in terms of performance, 
and will introduce additional problems.


At present the os_unix.c has some problems, I don't recall all its 
details so here is just an estimate:


Multiple locks are getting unlocked with one call
- This does not work on AFP and SMB

A read lock gets turned into a write lock (and/or back?)
- This does not work on AFP; SMB has limited support for it.

The current lock offset will not allow open DB files to be copied if the 
database gets larger than 0x4000 bytes. This is because locked 
regions cannot be copied under Windows; we changed it to:


#define PENDING_BYTE  I64CONST(0x7fff)
#define RESERVED_BYTE I64CONST(0x7fff0001)
#define SHARED_FIRST  I64CONST(0x7fff0002)

Advisory versus mandatory record locking:
AFP and SMB do mandatory locking, which means other clients cannot 
read locked file areas; advisory locking means that we can read any 
data, therefore we must call lock first before reading bytes.


The benefit of supporting OS-based locking is that after a program 
exits, all its locks are automatically cleaned up. Windows has some "oplock" 
features which will handle remote locks completely on the Windows SMB 
clients as long as only one writer uses the file.


It should not be that difficult to use only locking features which are 
common across major platforms and network file systems.



Helmut Tschemernjak

Adam Swift wrote:

All,

In order to provide locking support for database files mounted from 
remote file systems (NFS, SMB, AFP, etc) as well as provide compatible 
locking support between database clients both local and remote, I would 
like to propose some additions to the existing database file locking 
behavior.


I have discussed this briefly with D. Richard Hipp and he has expressed 
interest in pursuing it.  We would appreciate 
feedback/suggestions/concerns on the proposed solutions below.


1. Support a variety of locking mechanisms, from the lowest granularity 
(using a .lock file)  to the highest (the existing 
advisory locks).  A preliminary list: .lock files, flock, afp locks, 
posix advisory locks.


2. Allow clients to specify type of locking (on database open) manually 
or ask sqlite to automatically detect the best locking supported by the 
file system hosting the database file (automatic detection would not 
work in a mixed local/remote file system situation, all clients of a 
single database file need to use the same type of locking to avoid 
conflicts).


3. Extend the sqlite3_open commands to support URI style path references 
in order to specify the file system locking type (as opposed to 
modifying the arguments list).  After a little poking around on RFC 3986 
 I'm inclined to specify the 
locking choice via the query part of the URI.  For example:


file:///mounts/myhost/dbfile.db?locktype=flock
file:///localdisk/otherdbfile.db?locktype=automatic

Thanks in advance for your input.

Adam Swift




Re: [sqlite] Re: Deadlock when using transactions?

2006-03-08 Thread Dennis Brakhane
On 3/8/06, Igor Tandetnik <[EMAIL PROTECTED]> wrote:
> BEGIN IMMEDIATE. Once BEGIN IMMEDIATE succeeds on one connection, it
> will fail on other connections.

Thanks


[sqlite] Re: Deadlock when using transactions?

2006-03-08 Thread Igor Tandetnik

Dennis Brakhane 
wrote:

(In case it matters, I'm running SQLite 3.2.8)

Hi. I have a problem with multiple processes that use transactions on
the same database:

Suppose I have two processes that access the same database
Now, both do a "begin transaction", which succeeds
Process A now issues "insert into foo values ('bar')", which succeeds
Process B now also issues "insert into foo values ('baz')", which
fails - database is locked
Now, neither can commit the changes. So, at least one has to do a
rollback; this however means that I will need to store the data (which
is generated on-the-fly during the transaction) somewhere, which seems
kinda pointless to me. I'm storing data so that I can store them in a
database.

So, is there a way to solve this problem?


BEGIN IMMEDIATE. Once BEGIN IMMEDIATE succeeds on one connection, it 
will fail on other connections.


Igor Tandetnik 
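Igor's suggestion can be demonstrated with two connections to one database file. Python's stdlib sqlite3 is used as a runnable stand-in for the C API; timeout=0 disables the busy wait so the second BEGIN IMMEDIATE fails immediately instead of blocking:

```python
import os
import sqlite3
import tempfile

db = os.path.join(tempfile.mkdtemp(), "demo.db")
a = sqlite3.connect(db, isolation_level=None, timeout=0)
b = sqlite3.connect(db, isolation_level=None, timeout=0)
a.execute("CREATE TABLE foo (v TEXT)")

a.execute("BEGIN IMMEDIATE")           # A takes the write lock up front
a.execute("INSERT INTO foo VALUES ('bar')")

try:
    b.execute("BEGIN IMMEDIATE")       # B learns immediately that A holds it
    b_failed = False
except sqlite3.OperationalError:       # "database is locked"
    b_failed = True

a.execute("COMMIT")                    # A can still commit; B lost no work
b.execute("BEGIN IMMEDIATE")           # now it is B's turn
b.execute("INSERT INTO foo VALUES ('baz')")
b.execute("COMMIT")

rows = a.execute("SELECT COUNT(*) FROM foo").fetchone()[0]
```

Because B's BEGIN fails before B has generated any data, nothing has to be thrown away and rolled back, which addresses Dennis's original complaint.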



Re: [sqlite] Read row data into array?

2006-03-08 Thread Jay Sprenkle
On 3/8/06, cstrader <[EMAIL PROTECTED]> wrote:
> Would someone be willing to share with me c++ code that reads the result of a 
> select query into an array representing the data of the j rows in a selected 
> column?  I understand that callback() is executed once for each row of the 
> data.  But what is the best and fastest way to iteratively write these row 
> data into a single array that I can pass to my host program?


Callbacks are problematic in C++. It's tricky to get the address of a
class member
and provide the 'this' pointer correctly. Here's a quick example of
reading a single
record (I have a wrapper around the sqlite calls, but I think it gets
the message across):


// open database
dbOpen();

// read the configuration values from the Setup table
string sql = "SELECT CookieUser,CookieVisit,Timezone,CookieLife,CGI,URL,"
             "ContactName,ContactMail,Description FROM Setup";

// prepare statement instead of building it to avoid sql injection attacks
if ( ! dbPrep( sql ) )
  throw ConException( string("Cannot prepare sql: ") + sql +
                      string(", ") + sqlite3_errmsg(db) );

switch ( dbStep() )
  {
case SQLITE_ROW:
  // get results ( 0 based index!!! )
  CookieUser  = dbColumn( 0 );
  CookieVisit = dbColumn( 1 );
  Timezone= dbColumn( 2 );
  CookieLife  = atol( dbColumn( 3 ) );
  CGI = dbColumn( 4 );
  URL = dbColumn( 5 );
  ContactName = dbColumn( 6 );
  ContactMail = dbColumn( 7 );
  Description = dbColumn( 8 );
  break;
case SQLITE_DONE:
  if ( CookieUser.empty() )
throw ConException( string("Invalid configuration") );
  break;
default:
  throw ConException( string("Cannot execute sql: ") + sql );
  break;
  }

// clean up when finished
dbFinalize();
dbClose();


dbColumn looks like this:

//---
// return a pointer to the text of a result row
// 0 based index!
//---
char* Convention::dbColumn( const unsigned int Index )
  {
char* p = (char*) sqlite3_column_text( pStmt, Index );
return (char*) ( p ? p : "" );
  }


[sqlite] Deadlock when using transactions?

2006-03-08 Thread Dennis Brakhane
(In case it matters, I'm running SQLite 3.2.8)

Hi. I have a problem with multiple processes that use transactions on
the same database:

Suppose I have two processes that access the same database
Now, both do a "begin transaction", which succeeds
Process A now issues "insert into foo values ('bar')", which succeeds
Process B now also issues "insert into foo values ('baz')", which
fails - database is locked
Now, neither can commit the changes. So, at least one has to do a
rollback; this however means that I will need to store the data (which
is generated on-the-fly during the transaction) somewhere, which seems
kinda pointless to me. I'm storing data so that I can store them in a
database.

So, is there a way to solve this problem?


Re: [sqlite] sqlite_get_table performance problems

2006-03-08 Thread Rob Lohman

Hi,

I suspect "your network" (solution) is to blame. The client
probably needs to (re)connect to your server or (re-)
authenticate, etc.

This can easily be tested by running the application and
database on the same system. If the first query does not
take 2 minutes, you have your answer.

Keep in mind that SQLite is not a client-server system,
it is more like an embedded database system.

To me it feels like you are trying to use a product in a
way it was not designed for, and I would expect problems
like these as a result.

Good luck,

Rob

- Original Message - 
From: "Patrik Svensson" <[EMAIL PROTECTED]>

To: 
Sent: Wednesday, March 08, 2006 2:40 PM
Subject: [sqlite] sqlite_get_table performance problems



Hi,

I have BIG BIG problems with my database (sqlite2).

I have a 50Mb database located on a server.
From my client I run a quite large select statement using the
sqlite_get_table function.

The first time this select is issued it takes about 2 minutes to get the
result.
If I run the same select again it will take approx 1 second.
Then after a while it suddenly can take 2 minutes again.

I guess this must be some caching problem but I'm stuck and 2 minutes is NOT
acceptable even for the first select.

Can someone PLEASE PLEASE PLEASE help me?

Cheers,
Patrik







Re: [sqlite] Busy management

2006-03-08 Thread Ludovic Ferrandis
I'm currently writing a C wrapper (yes, I know, yet another ... ;) for my
applications. But it's a good training to learn.

So I was looking at the best generic way to manage busy statement.
My first idea was to use the callback option as it seems to be much simpler
and it would benefit all functions (step, ...).
The loop option is very simple, but that implies writing the same loop code
for every function that could return SQLITE_BUSY.

The only issue that remains for now is how to manage the SQLITE_LOCKED
error. It's not very clear in the documentation when this error could be
returned (instead of SQLITE_BUSY).

I'm not sure the busy callback will be called before returning
SQLITE_LOCKED, like for SQLITE_BUSY.

So, for now, I will implement the loop option. I will see later how to
manage the SQLITE_LOCKED error.
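The loop option described here amounts to a small retry wrapper; a sketch using Python's stdlib sqlite3 as a runnable stand-in for the C API (retry count and backoff values are arbitrary):

```python
import sqlite3
import time

def step_with_retry(con, sql, params=(), retries=5, delay=0.01):
    """Retry a statement while the database reports busy/locked."""
    for attempt in range(retries):
        try:
            return con.execute(sql, params)
        except sqlite3.OperationalError as e:
            if "locked" not in str(e) and "busy" not in str(e):
                raise                          # a real error, not contention
            time.sleep(delay * (attempt + 1))  # back off before retrying
    raise sqlite3.OperationalError("gave up after %d retries" % retries)

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (x)")
step_with_retry(con, "INSERT INTO t VALUES (?)", (1,))
count = con.execute("SELECT COUNT(*) FROM t").fetchone()[0]
```

Putting the loop in one wrapper avoids duplicating it around every call that could return SQLITE_BUSY, which was the drawback noted above.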

Thanks

Ludovic

On 3/7/06, John Stanton <[EMAIL PROTECTED]> wrote:
>
> It depends upon your application.  What are you trying to do?
>
>
>


[sqlite] Read row data into array?

2006-03-08 Thread cstrader
Would someone be willing to share with me c++ code that reads the result of a 
select query into an array representing the data of the j rows in a selected 
column?  I understand that callback() is executed once for each row of the 
data.  But what is the best and fastest way to iteratively write these row data 
into a single array that I can pass to my host program?  


Thanks in advance!




[sqlite] OT: patch for SQLite ODBC

2006-03-08 Thread Jarl Friis
Sorry for being OT, but I could not find a better place.

Since SQLite ODBC driver postings are very infrequent, it
shouldn't hurt to (ab)use this list for that.

Christian Werner <[EMAIL PROTECTED]> writes:

> Version 0.66 of the SQLite ODBC Driver is available
> on http://www.ch-werner.de/sqliteodbc
> It supports now SQLite 3.3.4 and 2.8.17.

Dear Christian.

I have fixed a bug in the SQLite ODBC driver. The problem is that the
ODBC driver does not use the info from "PRAGMA table_info(...)" when
there are no columns declared as "integer".

Further, I wonder what's behind the decision to use
PRAGMA short_column_names = off
PRAGMA full_column_names = on

When using short_column_names instead (which is the default), the
behaviour seems closer to standard SQL database engines.

Could you elaborate on this decision?

Index: sqlite3odbc.c
===
--- sqlite3odbc.c
+++ sqlite3odbc.c
@@ -1157,9 +1157,6 @@
 	s->dyncols[i].autoinc = 0;
 	}
 }
-if (!doautoinc) {
-	return;
-}
 if (s->dcols > array_size(flags)) {
 	flagp = xmalloc(sizeof (flags[0]) * s->dcols);
 	if (flagp == NULL) {
@@ -1170,9 +1167,6 @@
 }
 memset(flagp, 0, sizeof (flags[0]) * s->dcols);
 for (i = 0; i < s->dcols; i++) {
-	s->dyncols[i].autoinc = 0;
-}
-for (i = 0; i < s->dcols; i++) {
 	int ret, lastpk = -1, autoinccount = 0;
 	char *sql;
 

Jarl

-- 
Jarl Friis
Softace ApS
Omøgade 8, 2.sal
2100 København Ø.
Denmark
Phone:  +45 26 13 20 90
E-mail: [EMAIL PROTECTED]
LinkedIn: https://www.linkedin.com/in/jarlfriis




Re: [sqlite] File locking additions

2006-03-08 Thread Jay Sprenkle
On 3/7/06, Marian Olteanu <[EMAIL PROTECTED]> wrote:
> I would say that the best way to access a sqlite database mounted from a
> remote file server, concurrently with other processes is through a database
> server. My opinion is that the overhead of file sync and file locking for a
> remote file system is higher than simple TCP/IP communication overhead. The
> server would be able to use also the same cache, would have very fast access
> to the file (local file), etc.
>
> Building a server for sqlite is not a very complicated task. It can take as
> few as a couple hours.

I'd like to see this option built as a separate project from sqlite.
Sqlite is a great tool when you don't need a server, and I'd hate to lose that.
Let's add more tools to our toolset instead of just morphing one into
whatever is needed at the moment.


RE: [sqlite] sqlite_get_table performance problems

2006-03-08 Thread Griggs, Donald
-Original Message-
From: Patrik Svensson [mailto:[EMAIL PROTECTED] 
Subject: [sqlite] sqlite_get_table performance problems
...
Can someone PLEASE PLEASE PLEASE help me?

=

Hi Patrik,

The good folks on this list likely cannot help without many more details
about your setup.

You may wish to post again and include details such as:
  -- The schema (including indexes) of the tables involved in the problem
query.
  -- The problem query itself.
  -- The type of environment you're using (language, wrappers, etc., cpu,
os)
  -- Approximate number of rows you get returned by these long queries
  -- Precise version of sqlite 2 that you're using (and perhaps the reason
you are using Sqlite 2 rather than a more current version).

Respectfully,

Donald Griggs


Opinions are not necessarily those of Misys Healthcare Systems nor its board
of directors.



Re: [sqlite] sqlite_get_table performance problems

2006-03-08 Thread Jay Sprenkle
On 3/8/06, Patrik Svensson <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I have BIG BIG problems with my database (sqlite2).
>
> I have a 50Mb database located on a server.
> From my client I run a quite large select statement using the
> sqlite_get_table function.
>
> The first time this select is issued it takes about 2 minutes to get the
> result.
> If I run the same select again it will take approx 1 second.
> Then after a while it suddenly can take 2 minutes again.
>
> I guess this must be some caching problem but I'm stuck and 2 minutes is NOT
> acceptable even for the first select.


have you looked at the indexes to see if they're correct for the query
you're running?
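Jay's check can be automated with EXPLAIN QUERY PLAN; a sketch using Python's stdlib sqlite3, with a hypothetical fname column echoing the earlier thread:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE files (fname TEXT, size INT)")
con.executemany("INSERT INTO files VALUES (?, ?)",
                (("f%06d" % i, i) for i in range(1000)))

def plan(sql):
    # the last column of each EXPLAIN QUERY PLAN row is the detail text
    return " ".join(row[-1] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT size FROM files WHERE fname = 'f000500'"
before = plan(query)                                  # full table scan
con.execute("CREATE INDEX idx_fname ON files(fname)")
after = plan(query)                                   # index search
```

If the plan for a slow query never mentions an index, that is usually the first thing to fix.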


[sqlite] sqlite_get_table performance problems

2006-03-08 Thread Patrik Svensson
Hi,

I have BIG BIG problems with my database (sqlite2).

I have a 50Mb database located on a server.
From my client I run a quite large select statement using the
sqlite_get_table function.

The first time this select is issued it takes about 2 minutes to get the
result.
If I run the same select again it will take approx 1 second.
Then after a while it suddenly can take 2 minutes again.

I guess this must be some caching problem but I'm stuck and 2 minutes is NOT
acceptable even for the first select.

Can someone PLEASE PLEASE PLEASE help me?

Cheers,
Patrik





Re: [sqlite] File locking additions

2006-03-08 Thread Manfred Bergmann

Hi.

You guys already distribute a SQLite version with locking support for  
remote databases with Mac OSX.

What would be the difference to that mechanism?
To have this in the original SQLite would be great.


Best regards,
Manfred


Am 08.03.2006 um 11:06 schrieb Adam Swift:


All,

In order to provide locking support for database files mounted from  
remote file systems (NFS, SMB, AFP, etc) as well as provide  
compatible locking support between database clients both local and  
remote, I would like to propose some additions to the existing  
database file locking behavior.


I have discussed this briefly with D. Richard Hipp and he has  
expressed interest in pursuing it.  We would appreciate feedback/ 
suggestions/concerns on the proposed solutions below.


1. Support a variety of locking mechanisms, from the lowest  
granularity (using a .lock file)  to the highest  
(the existing advisory locks).  A preliminary list: .lock files,  
flock, afp locks, posix advisory locks.


2. Allow clients to specify type of locking (on database open)  
manually or ask sqlite to automatically detect the best locking  
supported by the file system hosting the database file (automatic  
detection would not work in a mixed local/remote file system  
situation, all clients of a single database file need to use the  
same type of locking to avoid conflicts).


3. Extend the sqlite3_open commands to support URI style path  
references in order to specify the file system locking type (as  
opposed to modifying the arguments list).  After a little poking  
around on RFC 3986  I'm  
inclined to specify the locking choice via the query part of the  
URI.  For example:


file:///mounts/myhost/dbfile.db?locktype=flock
file:///localdisk/otherdbfile.db?locktype=automatic

Thanks in advance for your input.

Adam Swift








