Re: [sqlite] shared cache/ test_server.c

2007-07-25 Thread John Stanton
Now you have made me worry.  If cache can only be shared by connections 
created in one thread then there is no shared cache.  I must investigate 
this more closely.  Perhaps my reading of the documentation included a 
dose of wishful thinking and a belief that "shared" meant shared! 
Looking through the code shows that shared cache mode introduces table 
locking, and gives the impression that "shared cache mode" is actually an 
implementation of finer-granularity locking, down to table level, with the 
multiple connections merely storing cursor context state.


I shall need to write some test programs to be certain, but at this 
stage it does look like there is no shared cache mode as such, and that a 
server needs to single-stream SQLite access in one thread to avoid 
having large amounts of data replicated in memory, at the cost of 
restricting concurrent read access.


Shared cache mode would be better named "persistent cache mode", because 
its main effect is to let one thread avoid flushing the cache after 
each transaction.  The people at Mozilla report that they use it and get 
better throughput on small transactions.


Thank you to those people who contributed to this discussion.

Ken wrote:

John,
 
 According to the Sqlite documentation on sqlite3_enable_shared_cache:

 "There is no mechanism for sharing cache between database connections running in 
different threads."
 
 This means exactly what I said in the first place: You cannot have a "shared cache" access across threads.  I really wish that you could have multiple threads each with a database connection using shared cache running concurrently.
 
 Can you provide sample code showing the concept you are describing? 
 
 I totally understand what you are getting at with the locking. Indeed, handling locking internally in memory will always be faster (assuming RAM access is faster than disk I/O). 
 
 John Stanton <[EMAIL PROTECTED]> wrote:  I think that you misunderstood the shared cache description.  Cache is 
shared by many connections, but connections may not be passed between 
threads.  Each thread must maintain and use its own connection.  In 
our case a thread has an associated control block, and the connection 
handle resides there.


As long as you only access the Sqlite connection from the thread which 
created it you share the cache and it works fine.
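Later versions of SQLite (3.5.0 onward) lifted the restriction on sharing cache across threads, but the one-connection-per-thread pattern described above remains sound. A minimal sketch in Python's stdlib sqlite3 module, using a URI with cache=shared (the database name "demo" and the worker layout are illustrative only):

```python
import sqlite3
import threading

# One shared in-memory database; every connection that opens this URI in
# the same process shares a single page cache.
URI = "file:demo?mode=memory&cache=shared"

def worker(results, i):
    # Each thread opens and uses its OWN connection -- never passed
    # between threads -- but the page cache behind it is shared.
    conn = sqlite3.connect(URI, uri=True)
    results[i] = conn.execute("SELECT count(*) FROM t").fetchone()[0]
    conn.close()

anchor = sqlite3.connect(URI, uri=True)  # keeps the shared db alive
anchor.execute("CREATE TABLE t(x)")
anchor.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(5)])
anchor.commit()

results = {}
threads = [threading.Thread(target=worker, args=(results, i)) for i in range(3)]
for th in threads:
    th.start()
for th in threads:
    th.join()
print(results)  # every thread sees all 5 rows through the shared cache
```

Each connection is created and used entirely within its own thread, which is exactly the discipline the 2007 documentation required.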


The locking is a separate issue and is aimed at avoiding the dreaded 
"busy wait".  We use a leisurely busy wait to handle multi-process 
SQLite using file locks.  The technique is not to delay after a busy is 
intercepted but to force a process time-slice yield; in a server our 
intention is to avoid these inefficiencies by using the more efficient 
synchronization features.  As you would appreciate, a few percent better 
efficiency on your server means a corresponding increase in the number 
of possible users.
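The yield-rather-than-sleep busy handling described above can be sketched at the application level. This is an illustrative helper (the name step_with_yield is made up), with time.sleep(0) standing in for a time-slice yield:

```python
import sqlite3
import time

def step_with_yield(conn, sql, params=(), retries=1000):
    """Retry on SQLITE_BUSY/LOCKED by yielding the time slice instead of
    sleeping for a fixed interval -- the 'leisurely busy wait' idea."""
    for _ in range(retries):
        try:
            return conn.execute(sql, params).fetchall()
        except sqlite3.OperationalError as e:
            if "locked" not in str(e) and "busy" not in str(e):
                raise  # not a contention error; propagate
            time.sleep(0)  # give up the rest of this time slice, then retry
    raise sqlite3.OperationalError("database is locked (gave up)")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t(x)")
conn.execute("INSERT INTO t VALUES (1)")
print(step_with_yield(conn, "SELECT x FROM t"))  # [(1,)]
```

In a real server, as the message notes, proper synchronization primitives (mutexes, condition variables) are preferable to any form of busy waiting.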


Ken wrote:

John, 
The server can maintain a "shared cache", but if a thread also opens the DB then that execution line will not have a "shared cache" but rather a cache per thread. Only the server thread may open and act upon the connection utilizing a shared cache on behalf of the client. The client may not do things to the connection handle such as open, prepare, step, reset, close, or finalize.


At least that's my understanding of the "shared_cache" mode.

Using a locking primitive internally does simplify the code. But I'll contend 
that if you are using multiple threads, each having a connection to a DB 
with a locking structure for internal synchronization, then you are not using 
the SQLite shared cache, and you will not benefit from SQLite's locking 
internals (reader/writer starvation handling). And if the load is write-intensive 
and concurrent, you might as well have a single connection that is shared across 
all threads.

I guess my point was that inside the server thread, once a transaction is 
entered on behalf of a client, only that activity may continue and no 
others. So in my design I had only two choices: re-enqueue the message inside 
the server until the transactional thread completed, or return an error to the 
client. I preferred keeping the message on the queue waiting to be serviced. 
This is also programmatically a pain in the arse, since you must guarantee the 
client doesn't abandon its responsibilities and exit without sending a close 
command into the server thread, resulting in a permanently blocked server queue.

You can test this behaviour using the src/test_server.c code and some client 
connections into the test_server thread.

Or I may just be totally off my rocker... and that's OK too. 
Ken



John Stanton  wrote: That is why the Sqlite locking is not a good fit for a threaded server. 
 Why not use thread locks instead and achieve the synchronization with 
minimum overhead and latency?  You do miss out on a couple of Sqlite 
features doing that (the pending and reserved locks which help with 
concurrency and write starvation) so you need to balance 

[sqlite] Re: 3.4.1: select with no qualifying rows returns a row of all NULLs

2007-07-25 Thread Andrew Finkenstadt

...snip...

http://www.sqlite.org/cvstrac/tktview?tn=2536

I am working on building a test case.  The problem remained when I reverted
to 3.3.17 using the identical database files.


[sqlite] Re: 3.4.1: select with no qualifying rows returns a row of all NULLs

2007-07-25 Thread Andrew Finkenstadt

On further research, it appears to be related to max_page_count being
exceeded during insertion of a row, leaving behind a primary key reference
in the (primary key) index on the BLOB instead of being cleaned up during
rollback.



On 7/24/07, Andrew Finkenstadt <[EMAIL PROTECTED]> wrote:


Using 3.4.0 or 3.4.1 compiled for Windows with -DMULTITHREAD, I seem to
have a problem where sqlite3_step() will return SQLITE_ROW even when there
are, in fact, no rows that meet the WHERE clause criteria.  I'm currently
attempting a reversion to 3.3.17 to see how it behaved.


create table file_data
( rod blob not null primary key /* always 20 bytes long based on a
collision-resistant hash function */
, file_size integer not null
, compression_method integer not null
, the_data blob not null
);

sqlite3_prepare(..., "select file_size, compression_method from file_data
where rod = ?");
sqlite3_reset(statement);
sqlite3_bind_blob(...,0, ptr, len);
sqlite3_step(...)


sqlite3_step returned SQLITE_ROW, which was my indication that the result
set would be non-empty.  So I construct a query_result(statement, bool
at_end=false), and then execute

sqlite3_get_int64(..., first_column, null=-1)

and I get back -1.  Before passing in null=-1 I was passing in null=0,
which was a permissible value for file_size.

** keep in mind that my exact sqlite3_api calls might be slightly
different, since I have a C++ framework that isolates me from having to
continually remember the same API calls.  Suffice it to say this code worked
(so far as I knew) under 3.3.17.

--andy
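For contrast, the behaviour Andrew expected -- no row at all when nothing matches the WHERE clause -- can be demonstrated with a minimal sketch against his file_data schema (the sample key and values are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE file_data
    ( rod BLOB NOT NULL PRIMARY KEY
    , file_size INTEGER NOT NULL
    , compression_method INTEGER NOT NULL
    , the_data BLOB NOT NULL )""")
conn.execute("INSERT INTO file_data VALUES (?, ?, ?, ?)",
             (b"\x01" * 20, 123, 0, b"payload"))

# A key that is present yields a row...
hit = conn.execute("SELECT file_size, compression_method FROM file_data "
                   "WHERE rod = ?", (b"\x01" * 20,)).fetchone()
print(hit)   # (123, 0)

# ...and a key that is absent yields no row at all -- the analogue of
# sqlite3_step() returning SQLITE_DONE rather than SQLITE_ROW.
miss = conn.execute("SELECT file_size, compression_method FROM file_data "
                    "WHERE rod = ?", (b"\x02" * 20,)).fetchone()
print(miss)  # None
```

A phantom SQLITE_ROW, as in the report above, would correspond to fetchone() returning a row of NULLs here.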






Re: [sqlite] Re: Re: SQL Challenge, select stack

2007-07-25 Thread Ken
Igor, 
 
 You were indeed correct: the SQL-only version, when the stack has say 8000 rows, 
was terrible. It took 20+ seconds in my application.
 
 I re-wrote it using a programmatic approach, and the resulting timing was astounding: 
0.1 second.
 
 I suppose this is one of those odd cases where SQL iteration in a loop is 
better than a single SQL statement.
 
 Thanks again,
 Ken
 

Ken <[EMAIL PROTECTED]> wrote: Yes that is much better.
 
 Thanks again!
 Ken
 

Igor Tandetnik  wrote: Ken  wrote:
> The 22 is kind of like a time stamp..  (or you could just as easily
> add another column indicative of a timestamp.
>
> insert into stack values (1, 1234);
> insert into stack values (2, 1234);
> insert into stack values (6, 1234);
> insert into stack values (9, 1234);
> insert into stackpop values (12, 1234) ;
> insert into stackpop values (14, 1234) ;
> insert into stackpop values (18, 1234) ;
> insert into stack values (22, 1234);
>
> so  that 12 should pop 9, 14 pops 6 and 18 pops 2  leaving the stack
> with 1 and 22.

Ah, I didn't realize the ids are supposed to act like timestamps. In 
this case, you would need something like this:

select p.id, max(s.id) from stack s, stackpop p
where s.value = p.value and s.id < p.id and
(select count(*) from stack s2 where s2.id between s.id and p.id) =
(select count(*) from stackpop p2 where p2.id between s.id and p.id)
group by p.id;
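Igor's query can be checked against Ken's sample data. This sketch runs the same statement, with only an ORDER BY added for deterministic output:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE stack(id INTEGER PRIMARY KEY, value INTEGER);
    CREATE TABLE stackpop(id INTEGER PRIMARY KEY, value INTEGER);
    INSERT INTO stack VALUES (1,1234),(2,1234),(6,1234),(9,1234),(22,1234);
    INSERT INTO stackpop VALUES (12,1234),(14,1234),(18,1234);
""")

# For each pop, find the matching pushed row: the latest push such that the
# number of pushes and pops in between balance out (LIFO order).
rows = conn.execute("""
    SELECT p.id, max(s.id) FROM stack s, stackpop p
    WHERE s.value = p.value AND s.id < p.id AND
      (SELECT count(*) FROM stack s2 WHERE s2.id BETWEEN s.id AND p.id) =
      (SELECT count(*) FROM stackpop p2 WHERE p2.id BETWEEN s.id AND p.id)
    GROUP BY p.id
    ORDER BY p.id""").fetchall()
print(rows)  # [(12, 9), (14, 6), (18, 2)]
```

That matches the expected pairing: 12 pops 9, 14 pops 6, 18 pops 2, leaving 1 and 22 on the stack. The correlated count subqueries scan the tables once per candidate pair, which is consistent with Ken's finding that this degrades badly at thousands of rows.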

Igor Tandetnik 


-
To unsubscribe, send email to [EMAIL PROTECTED]
-





[sqlite] SQLite Templates

2007-07-25 Thread Meftah Tayeb
Hello

Please help me:
I am looking for SQLite project templates for Dev-C++ or Visual C++ 6.
Thank you

Re: [sqlite] Is VACUUM safe from power loss/crashing

2007-07-25 Thread scoobyjh

I didn't realise VACUUM could be so memory-hungry, but that should still be OK
for us because:

My system is an embedded ARM9 running an RTOS with a flash file system, so
there is no disk.
Database files will reside in a flash-based r/w partition. /var is
currently mounted on a RAM disk, which should mean faster VACUUM cleaning
than flash.

We should have more than enough RAM for the VACUUM operation and it won't
matter if the unit dies because of the rollback mechanism you and Dan
mention.

Thanks again,

Scooby


drh-2 wrote:
> 
> scoobyjh <[EMAIL PROTECTED]> wrote:
>> When VACUUM cleaning a database it creates a temporary file by default in
>> /var/tmp. What happens when power is lost or the unit crashes? Can the
>> database be corrupted or lose data?
>> 
> 
> VACUUM generates a rollback journal.  So losing power
> in the middle of a VACUUM will just cause the database
> to revert to its original state when power is restored.
> 
> Note, however, that because of the temporary file and the
> journal, VACUUM can use temporary disk space that is a
> little more than twice the size of the original database
> file.
> 
> --
> D. Richard Hipp <[EMAIL PROTECTED]>
> 
> 
> 
> 
> 

-- 
View this message in context: 
http://www.nabble.com/Is-VACUUM-safe-from-power-loss-crashing-tf4136949.html#a11777692
Sent from the SQLite mailing list archive at Nabble.com.
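The rollback-journal safety and the space reclamation drh describes can both be seen in a small sketch (file name and row sizes are arbitrary; the assertion only checks that VACUUM shrinks the file after a mass delete):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "vac.db")
conn = sqlite3.connect(path, isolation_level=None)  # autocommit mode

conn.execute("CREATE TABLE t(x)")
conn.execute("BEGIN")
conn.executemany("INSERT INTO t VALUES (?)",
                 [(b"x" * 500,) for _ in range(1000)])
conn.execute("COMMIT")

conn.execute("DELETE FROM t")  # leaves free pages behind in the file
before = conn.execute("PRAGMA page_count").fetchone()[0]

# VACUUM rebuilds the database into a temporary file and swaps it in
# under the protection of a rollback journal, so a crash mid-VACUUM
# reverts to the original database on the next open.
conn.execute("VACUUM")
after = conn.execute("PRAGMA page_count").fetchone()[0]
print(before, after)  # page count drops once free pages are reclaimed
```

This also illustrates drh's sizing point: while VACUUM runs, the temporary copy plus the journal can briefly need a bit more than twice the original file's space.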





Re: [sqlite] Saving an in-memory database to file

2007-07-25 Thread miguel manese

Or, attach then INSERT-SELECT
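The attach-then-copy suggestion can be sketched as follows; note that CREATE TABLE ... AS SELECT copies rows but not constraints or indexes, and the file name is illustrative:

```python
import os
import sqlite3
import tempfile

mem = sqlite3.connect(":memory:", isolation_level=None)  # autocommit mode
mem.execute("CREATE TABLE t(a, b)")
mem.executemany("INSERT INTO t VALUES (?, ?)", [(1, "x"), (2, "y")])

path = os.path.join(tempfile.mkdtemp(), "saved.db")
mem.execute("ATTACH DATABASE ? AS disk", (path,))
# Copies the rows only; recreate constraints/indexes explicitly if the
# schema matters.
mem.execute("CREATE TABLE disk.t AS SELECT * FROM t")
mem.execute("DETACH DATABASE disk")

check = sqlite3.connect(path)
print(check.execute("SELECT * FROM t").fetchall())  # [(1, 'x'), (2, 'y')]
```

Later SQLite versions also provide the online backup API (exposed in Python as Connection.backup()), which copies the entire database, schema and all, in one step.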

On 7/25/07, Mohd Radzi Ibrahim <[EMAIL PROTECTED]> wrote:

How about dumping and importing into a new db?


- Original Message -
From: "Colin Manning" <[EMAIL PROTECTED]>
To: 
Sent: Wednesday, July 25, 2007 7:05 AM
Subject: [sqlite] Saving an in-memory database to file


> Hi
>
> If I create an in-memory database (by opening with ":memory:"), then add
> tables etc to it, is it possible to then write it to a disk file so it can
> subsequently be used as a file-based db?
>
> Thanks
>
>
>
>
>
>
>
>






