Re: [sqlite] suggestions for avoiding "database locked" on ajax

2009-10-19 Thread Douglas E. Fajardo

While the idea of uniquely identifying a user is interesting, the assumption 
that there is only one user for each IP address is questionable at best - 
Consider assigning a 'session id' of some kind instead. In *theory* (and yes, 
I've seen it happen in reality) the source address can change between two 
successive TCP connections with a multi-homed host.
*** Doug F.


-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of Keith Roberts
Sent: Saturday, October 17, 2009 7:53 AM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] suggestions for avoiding "database locked" on ajax

On Fri, 16 Oct 2009, Roger Binns wrote:

> To: General Discussion of SQLite Database 
> From: Roger Binns 
> Subject: Re: [sqlite] suggestions for avoiding "database locked" on ajax
> 
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> P Kishor wrote:
>> So, what suggestion might you all have for getting around this?
>
> Why not set a busy timeout?
>
> Roger
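For reference, Roger's busy-timeout suggestion needs no schema changes at all; it can be set from the C API (sqlite3_busy_timeout) or from SQL. A minimal sketch using Python's sqlite3 module (the database name and timeout value are just examples):

```python
import sqlite3

# Open the database with a 5-second busy timeout: when another
# connection holds a conflicting lock, SQLite retries internally
# instead of returning SQLITE_BUSY immediately.
conn = sqlite3.connect("app.db", timeout=5.0)

# The same setting is available as a pragma on any connection.
conn.execute("PRAGMA busy_timeout = 5000")  # milliseconds

timeout_ms = conn.execute("PRAGMA busy_timeout").fetchone()[0]
print(timeout_ms)  # 5000
```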

Since each user's IP address is unique, you could add a LOCK 
column covering the whole database, or a particular table.

You could then store the IP address of the user initiating 
the transaction.

When a transaction starts, you can use a trigger to test 
whether the LOCK column is NULL, i.e. empty.

If so, set the LOCK column to the value of the user's IP address.

If another user from a different IP address attempts a 
similar transaction, then block that transaction from 
continuing, until the LOCK on the other IP address has been 
removed.

Once the transaction has finished, release the LOCK by 
clearing the user's IP address from the LOCK column by 
setting it back to NULL.

I'm not sure how you would go about dealing with multiple 
transactions being stalled, waiting for the LOCK to be 
freed, so each one can continue.

Maybe multiple retries, until each user can get access to 
the db?
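A sketch of the idea (the table and column names are invented for illustration, and it uses a plain UPDATE with a retry loop rather than a trigger; note also the caveat elsewhere in this thread that an IP address is a poor session identifier):

```python
import sqlite3
import time

def acquire_lock(conn, owner, retries=5, delay=0.1):
    """Try to claim the LOCK row by writing our identifier into it.
    Succeeds only if the column is currently NULL (i.e. unlocked)."""
    for _ in range(retries):
        cur = conn.execute(
            "UPDATE app_lock SET holder = ? WHERE holder IS NULL", (owner,))
        conn.commit()
        if cur.rowcount == 1:     # we won the race and now hold the lock
            return True
        time.sleep(delay)         # someone else holds it; retry
    return False

def release_lock(conn, owner):
    """Clear the LOCK column, but only if we are the current holder."""
    conn.execute("UPDATE app_lock SET holder = NULL WHERE holder = ?", (owner,))
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE app_lock (holder TEXT)")  # single-row lock table
conn.execute("INSERT INTO app_lock VALUES (NULL)")

got_first = acquire_lock(conn, "10.0.0.1")                        # lock free
got_second = acquire_lock(conn, "10.0.0.2", retries=2, delay=0.01)  # held
release_lock(conn, "10.0.0.1")
got_third = acquire_lock(conn, "10.0.0.2")                        # free again
print(got_first, got_second, got_third)  # True False True
```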

Kind Regards,

Keith Roberts

-
Websites:
http://www.php-debuggers.net
http://www.karsites.net
http://www.raised-from-the-dead.org.uk

All email addresses are challenge-response protected with
TMDA [http://tmda.net]
-
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

__
This email has been scanned by the MessageLabs Email Security System.
For more information please visit http://www.messagelabs.com/email 
__


Re: [sqlite] Database Diff libs or applications

2009-10-01 Thread Douglas E. Fajardo
If your application is creating/deleting tables, then it (the program) can 
insert its own 'I am creating/deleting a table' record into the replay table, 
at the same time it creates the triggers.
*** Doug


-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of Shaun Seckman (Firaxis)
Sent: Thursday, October 01, 2009 8:10 AM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Database Diff libs or applications

I've been looking into using triggers to create a replay table.  This seems to 
be pretty restrictive in that it requires a trigger to be created for each 
table and cannot track when tables are dropped or created (documentation says 
that triggers will not be applied to tables like sqlite_master) 

Is there a way around this?

-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of Shaun Seckman (Firaxis)
Sent: Thursday, October 01, 2009 10:13 AM
To: punk...@eidesis.org; General Discussion of SQLite Database
Subject: Re: [sqlite] Database Diff libs or applications

Are there any limitations to this?  Will this track any and all SQL actions 
done to the database or just table data changes?  I'll hunt down the post now :)

-Shaun

-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of P Kishor
Sent: Thursday, October 01, 2009 10:03 AM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Database Diff libs or applications

On Thu, Oct 1, 2009 at 8:44 AM, Shaun Seckman (Firaxis)
 wrote:
> Hello,
>
>                I'm looking to externally track the actions made to a
> database so that I can apply those same actions to another database
> (assuming the other database has a similar schema).   I've searched the
> documentation and there doesn't seem to be an easy way to extract this
> data so my only option seems to be utilizing some sort of library or
> application to diff the two databases and generate a SQL script based on
> the changes.
>
>

Store the "actions made to a database," in other words, the SQL, in a
log and then replay that log against the other database.

You can use TRIGGERs to store the SQL in a replay table. Search the
archives. Dennis Cote has contributed some code for that in one of the
threads.
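The trigger-based replay table might look something like the following minimal sketch, which covers only INSERTs on a single hypothetical table (Dennis Cote's code referenced above presumably generalizes this to updates and deletes):

```python
import sqlite3

src = sqlite3.connect(":memory:")
src.executescript("""
    CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE replay (sql TEXT);        -- log of statements to re-run
    -- Reconstruct an equivalent INSERT and append it to the log.
    CREATE TRIGGER items_ins AFTER INSERT ON items
    BEGIN
        INSERT INTO replay VALUES (
            'INSERT INTO items VALUES (' || NEW.id || ',' ||
            quote(NEW.name) || ');');
    END;
""")
src.execute("INSERT INTO items VALUES (1, 'widget')")

# Replay the log against a second database with the same schema.
dst = sqlite3.connect(":memory:")
dst.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
for (stmt,) in src.execute("SELECT sql FROM replay"):
    dst.execute(stmt)
rows = dst.execute("SELECT * FROM items").fetchall()
print(rows)  # [(1, 'widget')]
```

As noted above, this cannot capture CREATE TABLE or DROP TABLE, since triggers cannot be attached to sqlite_master.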


>
> Has anyone done this before?  Does such a library or application exist?
> I'm sure I could write my own if needed.
>
>
>
> -Shaun
>
>
>
>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>



-- 
Puneet Kishor http://www.punkish.org
Carbon Model http://carbonmodel.org
Charter Member, Open Source Geospatial Foundation http://www.osgeo.org
Science Commons Fellow, http://sciencecommons.org/about/whoweare/kishor
Nelson Institute, UW-Madison http://www.nelson.wisc.edu
---
Assertions are politics; backing up assertions with evidence is science
===
Sent from Madison, WI, United States



Re: [sqlite] Temporal SQLite database

2009-09-16 Thread Douglas E. Fajardo
I'm not sure if you are looking to make an entry unique, or determine the order 
in which the entries occurred. In either case, be aware - time can go 
*backwards* on a system, especially if it is being synced to an outside source 
such as with NTP.

Normally the 'jitter' is under a second, but exceptions do occur (including the 
one where the Sysadmin changes the system clock!). Note that a backward 
"jitter" could (conceivably) result in the same timestamp occurring twice. 
Also, depending on the resolution of the clock (which may vary depending on 
installation options) it may be possible for two entries to occur at the same 
'time'. As a result, I've sworn off using the time for anything more than a 
logging label.

I've had good success with the auto-increment approach.  As for code to hide 
it, just set auto-incrementing up on a field in the schema, and there is no 
extra code involved.
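A sketch of that approach, with the AUTOINCREMENT sequence as the ordering key and the wall-clock time kept only as an auxiliary label (the table and column names are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# seq gives a strictly increasing, collision-free ordering even if the
# clock jumps backwards; ts is kept only as a human-readable label.
conn.execute("""
    CREATE TABLE history (
        seq INTEGER PRIMARY KEY AUTOINCREMENT,
        key TEXT NOT NULL,
        ts  TEXT DEFAULT (datetime('now')),
        val TEXT
    )
""")
conn.execute("INSERT INTO history (key, val) VALUES ('k', 'v1')")
conn.execute("INSERT INTO history (key, val) VALUES ('k', 'v2')")

# Latest version of a key = the row with the greatest seq for that key.
latest = conn.execute(
    "SELECT val FROM history WHERE key = 'k' ORDER BY seq DESC LIMIT 1"
).fetchone()[0]
print(latest)  # v2
```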

*** Doug F.
 

-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of Alberto Simões
Sent: Tuesday, September 15, 2009 4:38 AM
To: General Discussion of SQLite Database
Subject: [sqlite] Temporal SQLite database

Dear SQLite users,

I am preparing a temporal database. Basically, entries will have a compound
key composed of the real key and some kind of time stamp. This is not
complicated to do, but I would like to know if anybody did something similar
and if:
 - used the number of non-leap seconds since whatever time the system
considers to be the epoch, as time stamp
 - used an auto-increment value as the time-stamp, and adding the time stamp
as an auxiliary column,
 - knows of any Perl module that hides this time management.

Thanks for your ideas.
ambs


Re: [sqlite] very large SQLite tables

2009-06-26 Thread Douglas E. Fajardo
No, I admit I haven't tried this under SQLITE.

Whether this approach will help for the specific application will depend on 
data usage patterns, which we haven't delved into for this application. Call me 
simple: since the main issue is degraded performance with larger groupings of 
data, it seemed to make sense that breaking the data into smaller groupings 
would help. 

Of course it's very possible that the size of the database in question may mean 
that the number of hash buckets needed to reap significant benefits makes this 
approach counter-productive. That's why it is only a suggestion :-)

*** Doug Fajardo

-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of Kosenko Max
Sent: Friday, June 26, 2009 4:06 AM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] very large SQLite tables


Have you ever tested such proposal?
I believe that doesn't work.


Doug Fajardo wrote:
> 
> One approach might be to split the big, monolithic table into some number
> of hash buckets, where each 'bucket' is a separate table. When doing a
> search, the program calculates the hash and reads only the bucket
> that is needed.
> 
> This approach also has the potential for allowing multiple databases,
> where tables would be spread across the different databases. The databases
> could be spread across multiple drives to improve performance.
> 
-- 
View this message in context: 
http://www.nabble.com/very-large-SQLite-tables-tp24201098p24218386.html
Sent from the SQLite mailing list archive at Nabble.com.



Re: [sqlite] very large SQLite tables

2009-06-25 Thread Douglas E. Fajardo
One approach might be to split the big, monolithic table into some number of 
hash buckets, where each 'bucket' is a separate table. When doing a search, the 
program calculates the hash and reads only the bucket that is needed.

This approach also has the potential for allowing multiple databases, where 
tables would be spread across the different databases. The databases could be 
spread across multiple drives to improve performance.
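A sketch of the bucketing scheme (the bucket count, hash function, and naming are arbitrary choices for illustration; whether this actually beats a single indexed table is exactly the open question in this thread):

```python
import sqlite3
import zlib

NBUCKETS = 16

def bucket(name):
    # A stable hash of the key decides which table holds the row.
    return "chunks_%02d" % (zlib.crc32(name.encode()) % NBUCKETS)

conn = sqlite3.connect(":memory:")
for i in range(NBUCKETS):
    conn.execute(
        "CREATE TABLE chunks_%02d (name TEXT PRIMARY KEY,"
        " pid INTEGER NOT NULL)" % i)

def insert(name, pid):
    conn.execute("INSERT INTO %s VALUES (?, ?)" % bucket(name), (name, pid))

def lookup(name):
    # Only the one bucket that can contain the key is read.
    row = conn.execute(
        "SELECT pid FROM %s WHERE name = ?" % bucket(name), (name,)).fetchone()
    return row[0] if row else None

insert("key-A", 7)
print(lookup("key-A"))  # 7
print(lookup("key-B"))  # None
```

Pointing different buckets at different databases (and drives) via ATTACH would follow the same routing logic.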

*** Doug Fajardo


-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of Matthew O'Keefe
Sent: Wednesday, June 24, 2009 12:21 PM
To: sqlite-users@sqlite.org
Subject: [sqlite] very large SQLite tables



We are using SQLite for indexing a huge number (i.e., 100 million to 1
billion) of key pairs
that are represented by an 88-byte key. We are using a single table with a
very large number of rows (one for each data chunk), and two columns.

The table has two columns.  One is of type "text" and the other is type
"integer".

The table is created with:

CREATE TABLE chunks
(
  name text primary key,
  pid integer not null
);

As expected, as the table grows, the underlying B-tree implementation for
SQLite means that the number of disk accesses to (a) find, and (b) add a
chunk, grows larger and larger. We've tested up to 20 million chunks
represented in the table: as expected, performance exponentially decreases
as the number of table entries grows.

We wanted to post to the mailing list to see if there are any obvious,
first-order things
we can try to improve performance for such a large table.

We really appreciate the efforts of the SQLite developer community!

Matt O'Keefe

sqlite-users@sqlite.org



Re: [sqlite] Database inserts gradually slowing down

2009-06-16 Thread Douglas E. Fajardo
This may be a red herring, but how do you generate the 'random' characters for 
the test? Could entropy exhaustion be affecting the results?

Just a thought...
*** Doug Fajardo

-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of P Kishor
Sent: Tuesday, June 16, 2009 12:29 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Database inserts gradually slowing down

On Tue, Jun 16, 2009 at 1:59 PM, Jens Páll
Hafsteinsson wrote:
> Ok, but you might have to run this a bit longer (or more iterations).

I ran the code a bit longer, and yes, I do notice an increase. Here
are my results after 610 iterations... I have provided the numbers for
10 iterations every hundred (my numbers are lower this time than the
earlier posted 8 seconds or so per iteration because of a different
way of generating random strings).

1.  3 wallclock secs ( 0.87 usr +  0.24 sys =  1.11 CPU)
2.  2 wallclock secs ( 0.88 usr +  0.26 sys =  1.14 CPU)
3.  3 wallclock secs ( 0.89 usr +  0.26 sys =  1.15 CPU)
4.  2 wallclock secs ( 0.88 usr +  0.27 sys =  1.15 CPU)
5.  3 wallclock secs ( 0.88 usr +  0.26 sys =  1.14 CPU)
6.  2 wallclock secs ( 0.88 usr +  0.27 sys =  1.15 CPU)
7.  3 wallclock secs ( 0.89 usr +  0.27 sys =  1.16 CPU)
8.  2 wallclock secs ( 0.88 usr +  0.26 sys =  1.14 CPU)
9.  3 wallclock secs ( 0.89 usr +  0.27 sys =  1.16 CPU)
10.  2 wallclock secs ( 0.88 usr +  0.27 sys =  1.15 CPU)
..
200.  4 wallclock secs ( 0.89 usr +  0.30 sys =  1.19 CPU)
201.  4 wallclock secs ( 0.89 usr +  0.29 sys =  1.18 CPU)
202.  4 wallclock secs ( 0.89 usr +  0.30 sys =  1.19 CPU)
203.  3 wallclock secs ( 0.88 usr +  0.31 sys =  1.19 CPU)
204.  4 wallclock secs ( 0.89 usr +  0.30 sys =  1.19 CPU)
205.  4 wallclock secs ( 0.88 usr +  0.31 sys =  1.19 CPU)
206.  4 wallclock secs ( 0.90 usr +  0.30 sys =  1.20 CPU)
207.  4 wallclock secs ( 0.89 usr +  0.29 sys =  1.18 CPU)
208.  4 wallclock secs ( 0.88 usr +  0.30 sys =  1.18 CPU)
209.  4 wallclock secs ( 0.89 usr +  0.30 sys =  1.19 CPU)
210.  3 wallclock secs ( 0.89 usr +  0.30 sys =  1.19 CPU)
..
300.  5 wallclock secs ( 0.89 usr +  0.32 sys =  1.21 CPU)
301.  4 wallclock secs ( 0.90 usr +  0.33 sys =  1.23 CPU)
302.  5 wallclock secs ( 0.89 usr +  0.34 sys =  1.23 CPU)
303.  6 wallclock secs ( 0.89 usr +  0.33 sys =  1.22 CPU)
304.  5 wallclock secs ( 0.89 usr +  0.32 sys =  1.21 CPU)
305.  6 wallclock secs ( 0.90 usr +  0.33 sys =  1.23 CPU)
306.  5 wallclock secs ( 0.89 usr +  0.33 sys =  1.22 CPU)
307.  5 wallclock secs ( 0.89 usr +  0.31 sys =  1.20 CPU)
308.  8 wallclock secs ( 0.89 usr +  0.32 sys =  1.21 CPU)
309. 13 wallclock secs ( 0.90 usr +  0.35 sys =  1.25 CPU)
310.  6 wallclock secs ( 0.90 usr +  0.33 sys =  1.23 CPU)
..
400.  5 wallclock secs ( 0.89 usr +  0.33 sys =  1.22 CPU)
401.  5 wallclock secs ( 0.89 usr +  0.32 sys =  1.21 CPU)
402.  5 wallclock secs ( 0.89 usr +  0.33 sys =  1.22 CPU)
403.  5 wallclock secs ( 0.89 usr +  0.33 sys =  1.22 CPU)
404.  5 wallclock secs ( 0.89 usr +  0.32 sys =  1.21 CPU)
405.  6 wallclock secs ( 0.89 usr +  0.34 sys =  1.23 CPU)
406.  5 wallclock secs ( 0.89 usr +  0.33 sys =  1.22 CPU)
407.  5 wallclock secs ( 0.89 usr +  0.33 sys =  1.22 CPU)
408.  5 wallclock secs ( 0.89 usr +  0.33 sys =  1.22 CPU)
409.  5 wallclock secs ( 0.89 usr +  0.33 sys =  1.22 CPU)
410.  5 wallclock secs ( 0.89 usr +  0.33 sys =  1.22 CPU)
..
500.  5 wallclock secs ( 0.89 usr +  0.34 sys =  1.23 CPU)
501.  6 wallclock secs ( 0.89 usr +  0.34 sys =  1.23 CPU)
502.  6 wallclock secs ( 0.89 usr +  0.34 sys =  1.23 CPU)
503.  6 wallclock secs ( 0.90 usr +  0.36 sys =  1.26 CPU)
504.  6 wallclock secs ( 0.88 usr +  0.37 sys =  1.25 CPU)
505. 11 wallclock secs ( 0.89 usr +  0.36 sys =  1.25 CPU)
506.  9 wallclock secs ( 0.89 usr +  0.34 sys =  1.23 CPU)
507.  5 wallclock secs ( 0.89 usr +  0.34 sys =  1.23 CPU)
508.  6 wallclock secs ( 0.89 usr +  0.35 sys =  1.24 CPU)
509.  6 wallclock secs ( 0.89 usr +  0.34 sys =  1.23 CPU)
510.  6 wallclock secs ( 0.88 usr +  0.34 sys =  1.22 CPU)
..
600.  6 wallclock secs ( 0.89 usr +  0.36 sys =  1.25 CPU)
601.  7 wallclock secs ( 0.89 usr +  0.35 sys =  1.24 CPU)
602.  6 wallclock secs ( 0.88 usr +  0.35 sys =  1.23 CPU)
603.  7 wallclock secs ( 0.89 usr +  0.35 sys =  1.24 CPU)
604.  6 wallclock secs ( 0.89 usr +  0.35 sys =  1.24 CPU)
605.  6 wallclock secs ( 0.89 usr +  0.35 sys =  1.24 CPU)
606.  7 wallclock secs ( 0.89 usr +  0.35 sys =  1.24 CPU)
607.  6 wallclock secs ( 0.88 usr +  0.35 sys =  1.23 CPU)
608.  7 wallclock secs ( 0.89 usr +  0.35 sys =  1.24 CPU)
609.  6 wallclock secs ( 0.89 usr +  0.36 sys =  1.25 CPU)
610.  7 wallclock secs ( 0.89 usr +  0.35 sys =  1.24 CPU)


> My first 10 runs give the following results (in seconds):
>
> 1       15,681
> 2       16,010
> 3       16,093
> 4       16,168
> 5       16,057
> 6       16,585
> 7       16,114
> 8       16,596
> 9       16,115
> 10      16,270
>
> Jumping around 

Re: [sqlite] Sqlite as a FIFO buffer?

2009-05-19 Thread Douglas E. Fajardo
One-per-second sounds *very* slow - I think I was getting around 10 per second 
in my application, although usage patterns may account for the difference.

The main cause of the slowdown (IMHO) is inherent in ensuring that changes are 
written to physical disk. There are some pragmas that change the behavior of 
SQLITE in this respect, but performance improvements in this area are at some 
(finite) expense in data integrity. The only other solution for this might be 
fast disk drives or a solid-state disk.
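The pragmas alluded to here are presumably the sync settings; a small sketch of the trade-off (the database name is an example, and relaxing synchronous really does trade durability for speed, as the comment notes):

```python
import sqlite3

conn = sqlite3.connect("fifo.db")
# The default, FULL, fsyncs at every commit. NORMAL syncs less often,
# and OFF hands durability entirely to the OS -- faster, but an OS or
# power failure can lose (or corrupt) the most recent commits.
conn.execute("PRAGMA synchronous = NORMAL")
sync_level = conn.execute("PRAGMA synchronous").fetchone()[0]
print(sync_level)  # 1 (0 = OFF, 1 = NORMAL, 2 = FULL)
```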
 
There is also the issue of attempting to synchronize writes to two (or more) 
different tasks. My own implementation was pretty primitive - it simply used 
the transaction controls in SQLITE. This approach worked, but I believe that a 
semaphore or other (external to the SQL language) signaling mechanism would be 
a more efficient means of synchronizing database access between tasks.

*** Doug F.


-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of Christopher Taylor
Sent: Monday, May 18, 2009 9:36 AM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Sqlite as a FIFO buffer?

--
From: "Douglas E. Fajardo" <dfaja...@symark.com>
Sent: Monday, May 18, 2009 12:25 PM
To: "General Discussion of SQLite Database" <sqlite-users@sqlite.org>; 
"AllenFowler" <allen.fow...@yahoo.com>
Subject: Re: [sqlite] Sqlite as a FIFO buffer?

> One thing to watch out for - using SQLITE for a FIFO will have limited 
> throughput, because commits will have to be done after inserting or 
> removing each entry. This might not be an issue in some situations - I 
> have actually implemented a FIFO to communicate between two tasks where 
> the work-per-entry was significant, the transaction rate was low, and the 
> protection against accidental loss was paramount.
>
> *** Doug F.
>
>
I have seen this.  I have two other tables where the inserts can take up to 
1 second to do.  For my application this is acceptable.  The events 
themselves are queued and there is enough idle time between jobs to catch up 
if needed.  Is there a better way to do this?

Chris 



Re: [sqlite] Sqlite as a FIFO buffer?

2009-05-18 Thread Douglas E. Fajardo
One thing to watch out for - using SQLITE for a FIFO will have limited 
throughput, because commits will have to be done after inserting or removing 
each entry. This might not be an issue in some situations - I have actually 
implemented a FIFO to communicate between two tasks where the work-per-entry 
was significant, the transaction rate was low, and the protection against 
accidental loss was paramount.

*** Doug F.


-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of Christopher Taylor
Sent: Monday, May 18, 2009 9:13 AM
To: Allen Fowler; General Discussion of SQLite Database
Subject: Re: [sqlite] Sqlite as a FIFO buffer?

The way this table works is that when an insert is made, the trigger is 
activated.  The trigger looks to see if there are more than a set number of 
records.  If so, the oldest record(s) are deleted by the trigger.  This is 
accomplished using an auto increment field.  This number increases by one on 
each insert.  Therefore I know that if the key field is greater than my 
limit I can safely delete any record whose key field is less than the max 
key minus the limit.  Hope this helps.
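That scheme can be sketched as follows (the table name and the limit of 5 are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE fifo (id INTEGER PRIMARY KEY AUTOINCREMENT, payload TEXT);
    -- After each insert, drop everything older than the newest 5 rows.
    -- NEW.id is strictly increasing (AUTOINCREMENT), so "id <= NEW.id - 5"
    -- is exactly "everything but the most recent 5".
    CREATE TRIGGER fifo_cap AFTER INSERT ON fifo
    BEGIN
        DELETE FROM fifo WHERE id <= NEW.id - 5;
    END;
""")
for i in range(20):
    conn.execute("INSERT INTO fifo (payload) VALUES (?)", ("msg %d" % i,))

count, oldest = conn.execute("SELECT COUNT(*), MIN(id) FROM fifo").fetchone()
print(count, oldest)  # 5 16
```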

Chris 



Re: [sqlite] mutex and begin/end transaction

2009-05-01 Thread Douglas E. Fajardo
Gentlefolk:

Sorry to be a 'wet blanket' here, but while this thread is an interesting 
debate, *this* list is about SQLITE, not the Scientific Method... As an 
observer on the sidelines, I would suggest that this discussion has strayed a 
little bit off topic :-)

 Uh... does anyone remember what this thread was *really* about? Oh yes, 
something about Mutexes and transactions...

*** Doug Fajardo


-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of James Gregurich
Sent: Friday, May 01, 2009 10:02 AM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] mutex and begin/end transaction


I describe reality.

Someone has to be the arbiter of "better." Generally, that arbiter is  
the guy handing out the research grants.

On May 1, 2009, at 5:33 AM, John Stanton wrote:

>
> Science is the Scientific Method - observation, hypothesis and
> skepticism.  The antithesis of politics.  There are no facts in  
> science,
> only observations and any hypothesis is only valid until a better one
> replaces it.
>
> You describe bad, politicized science.
>
> James Gregurich wrote:
>> With all due respect,  science itself is a set of
>> "positions" (opinions) which are endorsed by small group of people as
>> official doctrine after appropriate study. Saying "A 'position' is
>> politics, not science" is not a particularly meaningful statement.   
>> If
>> you want to argue that point, feel free to send me a private email.
>>
>> My threaded application works pretty darn well. I can process
>> thousands of print industry files on an 8-core system keeping the
>> cores busy without lagging the GUI for other applications. Just
>> because many people create ill conceived programs doesn't mean
>> threaded programs are inherently doomed to be ill-conceived. The
>> development tools and techniques for building concurrent systems are
>> advancing and making concurrency quite feasible.
>>
>> James Gregurich
>> Engineering Manager
>> Markzware
>>
>> On Apr 30, 2009, at 5:01 AM, John Stanton wrote:
>>
>>> A "position" is politics, not science.  Warnings about the use of
>>> threads are based on science, and advise you to avoid them if  
>>> possible
>>> for your own protection.
>>>
>>> I see ill conceived programs using threads which go to complex
>>> synchronization to achieve the equivalent of single stream execution
>>> but
>>> with much greater overhead.  A KISS situation.
>>>
>>> James Gregurich wrote:
 thanks for the info. That should work for me.

 Given the industry is going multicore and 16-core macintoshes for
 your
 grand-mother are  just a few years away, I recommend you rethink  
 your
 position on the use of threading. Apple is heavily pushing
 parallelism
 on its developers.  NSOperation is a major part of that effort.  
 As I
 understand it, MS is developing their copy of NSOperation for  
 VS2010.
 The development landscape is only going to get more threaded as  
 time
 goes on.

 -James



> On Apr 29, 2009, at 10:03 PM, James Gregurich wrote:
>
>
>> howdy!
>>
>> question:
>>
>> for an in-memory db with the threading mode set to serialized, is
>>
> the
>
>> internal mutex held for an entire transaction so that one thread
>>
> won't
>
>> access the db while another one is in the middle of a transaction
>>
> with
>
>> multiple insert statements?
>>
> No.  But the mutex is recursive.  So you can get a copy of it  
> using
> sqlite3_db_mutex() then lock it yourself using
> sqlite3_mutex_enter()/
> leave().
>
> Also remember:  You should not be using threads.  Threads will  
> bring
> only grief and woe.  On your own head be it.
>
>
>
> D. Richard Hipp
> drh at hwaci.com
>
>
 ___
 sqlite-users mailing list
 sqlite-users@sqlite.org
 http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

>>> ___
>>> sqlite-users mailing list
>>> sqlite-users@sqlite.org
>>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users



Re: [sqlite] Distinguishing between sqlite3_stmts

2008-11-10 Thread Douglas E. Fajardo

   Roger and John both, of course have good points -

   John Stanton has correctly pointed out that there is a programming model 
where an application effectively does the cacheing itself by precompiling 
statements at startup. In this situation, the proposed cacheing feature 
represents double work, and double memory (or worse).

   My proposal of a statement cache, which Roger Binns supports, is based on 
the idea that an application does not store prepared statements. In this case, 
the performance benefits are clear.

   I have no doubt that both approaches are in common use, and neither is, 
IMHO,  'wrong'.

   It seems to me that a run time control (a pragma?) to enable/disable the 
feature would accommodate both program models. In addition, a compile-time 
feature macro to prevent the bloatware problems for embedded applications is in 
order.

   I believe the pragma should enable cacheing, so the default is with cacheing 
disabled. This makes the feature transparent to *all* existing programs.

   The compile-time Macro should default to include the cacheing feature. This 
makes the feature available normally. Those who are memory conscious will 
probably be rebuilding the library anyway, since it is likely that other 
compile-time changes would already be necessary to reduce the size of sqlite.

*** Doug Fajardo


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of John Stanton
Sent: Monday, November 10, 2008 8:15 AM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Distinguishing between sqlite3_stmts

You make an argument for Bloatware. It is not persuasive.
JS
Roger Binns wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> John Stanton wrote:
>
>>Perhaps this featrure could be reserved for "Sqlheavy", a replacement
>>for Oracle.
>
>
> Or a #if OMIT_STATEMENT_CACHE like all sorts of other functionality that
> can be omitted.
>
>
>>We have actually implemented the cacheing of prepared statements, and
>>add it in the form of a local library which extends the Sqlite API.
>
>
> If almost everyone is doing a statement cache then there is a stronger
> argument for it being part of the core library.
>
>
>>Cacheing compiled SQL is only helpful in certain types of application.
>
>
> I'd argue that it is helpful in most applications that use bindings and
> don't implement their own statement caching.
>
>
>>In a typical emdedded application where all SQL is compiled as the
>>application intializes and subsequent usage involves the binding of
>>variables cacheing would be detrimental to performance and footprint.
>
>
> Caching would increase memory for one off queries since unused
> statements would be in the cache.  In your scenario there would actually
> be no difference since the cache would be empty.  A SQLite hash table
> with no entries is approximately 28 bytes.
>
> Roger
> -BEGIN PGP SIGNATURE-
> Version: GnuPG v1.4.9 (GNU/Linux)
>
> iEYEARECAAYFAkkV6qIACgkQmOOfHg372QTsOgCggRTY1TSkxHLznv95G64N+PjH
> p34AoMHMjgGsxZTgufEVUz7hP6uM8/14
> =0HbU
> -END PGP SIGNATURE-
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users



Re: [sqlite] Distinguishing between sqlite3_stmts

2008-11-10 Thread Douglas E. Fajardo
Roger:
   Thanks for creating the ticket. I will be interested to see the result when 
(if? :-) it gets implemented.

Dan:
  It was my thought that a single, well thought-out implementation of caching 
(as opposed to something that each programmer implements on an 'ad-hoc' basis) 
would offer a speed advantage to all, and would even be source-code compatible 
for existing programs.

   As for internal vs. external, do you mean 'external' as an add-on library 
that 'wraps' the prepare* statements? It could be done, but I'm not 
(personally) fond of the idea. Either some fancy linking is needed to replace 
the affected calls with 'new' versions (which doesn't sound like fun when you 
want to do it across a wide range of platforms), or a new function call name 
(which is not source-compatible) would be necessary.

If by 'external' you meant 'in the API' as opposed to 'in the engine itself', I 
don't feel I have enough background in sqlite internals to express an opinion.

*** Doug Fajardo

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Dan
Sent: Friday, November 07, 2008 9:20 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Distinguishing between sqlite3_stmts


On Nov 8, 2008, at 3:25 AM, Roger Binns wrote:

> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> Douglas E. Fajardo wrote:
>>   ( To the 'powers that be'... I wonder if some form of 'cache' for
>> prepared statements might be built in to the 'sqlite3_prepare*'
>> functions as a performance enhancement? )
>
> I couldn't find an existing ticket so created a new one:
>
>  http://www.sqlite.org/cvstrac/tktview?tn=3483

Are there advantages to implementing this internally instead of
externally?




Re: [sqlite] Distinguishing between sqlite3_stmts

2008-11-07 Thread Douglas E. Fajardo
If I am understanding you correctly, you are attempting to 'cache' prepared 
statements.

   My solution was to implement a wrapper function around the sqlite3_prepare 
call that stored pointers to the SQL statement, the database handle, and the 
'prepared' statement in a linked list. On subsequent calls the function would 
return the already-prepared statement if one was present. The search of the 
list was based on the SQL text of the query. Various techniques, such as using 
a hash or checksum of the SQL text as the initial search key, will speed up 
the search.

   ( To the 'powers that be'... I wonder if some form of 'cache' for prepared 
statements might be built in to the 'sqlite3_prepare*' functions as a 
performance enhancement? )
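A toy model of that wrapper, keyed on the SQL text as described above (Python's sqlite3 module keeps its own internal statement cache, so this only illustrates the C-level idea; a cursor stands in for a prepared statement handle):

```python
import sqlite3

class StatementCache:
    """'Prepare' once per distinct SQL text; reuse the handle afterwards."""
    def __init__(self, conn):
        self.conn = conn
        self.cache = {}     # SQL text -> prepared handle
        self.misses = 0     # how many times we really had to prepare

    def execute(self, sql, params=()):
        handle = self.cache.get(sql)
        if handle is None:                  # cache miss: prepare, remember
            self.misses += 1
            handle = self.conn.cursor()     # stand-in for sqlite3_prepare
            self.cache[sql] = handle
        return handle.execute(sql, params)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
cache = StatementCache(conn)
for i in range(100):
    cache.execute("INSERT INTO t VALUES (?)", (i,))  # prepared only once

total = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(cache.misses, total)  # 1 100
```

A real C implementation would also need an eviction policy and care around finalizing cached statements when the connection closes.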

Hope this helps.
*** Doug Fajardo


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of D. Richard Hipp
Sent: Thursday, November 06, 2008 4:54 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Distinguishing between sqlite3_stmts


On Nov 6, 2008, at 7:47 PM, Brown, Daniel wrote:

> Good afternoon list,
>
> Are there any methods for distinguishing between sqlite3_stmt
> structures
> prepared by sqlite3_prepare_v2?  I'd like to be able to tell if a
> statement structure has been finalized and then prepared with a
> different query programmatically.  Is there any sort of unique
> identifier in those prepared structures?


I'm not sure I understand the question.  Is http://www.sqlite.org/c3ref/sql.html
  what you are asking for?

D. Richard Hipp
[EMAIL PROTECTED]


