RE: [sqlite] Strange error in sqlite 3.4.2 Win32 version

2007-10-31 Thread Dan Petitt
I compiled your code and ran it on Windows using VC6 and got:
a
98

Hope this helps
Dan


-Original Message-
From: Marco Bambini [mailto:[EMAIL PROTECTED] 
Sent: 31 October 2007 09:33
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Strange error in sqlite 3.4.2 Win32 version

No, the database is created by the code; it doesn't already exist.
I was able to reproduce the issue only on Windows; Mac and Linux worked
fine.

---
Marco Bambini
http://www.sqlabs.net
http://www.sqlabs.net/blog/
http://www.sqlabs.net/realsqlserver/



On Oct 31, 2007, at 10:25 AM, Stephan Beal wrote:

> On 10/31/07, Marco Bambini <[EMAIL PROTECTED]> wrote:
>>
>> ...
>> On Windows (not on Mac!) it returns 99 instead of the correct 98 
>> value.
>> Can anyone confirm this on Windows?
>
>
> Hi, Marco! While i can't confirm how it behaves under Windows, i can 
> confirm that it returns 98 on Linux:
>
> [EMAIL PROTECTED]:~/tmp$ gcc -o win -I$HOME/include win.c -L$HOME/lib -
> lsqlite3
> [EMAIL PROTECTED]:~/tmp$ ./win
> a
> 98
> simple test finished!
>
> i quickly scanned through your code and found no reason that 99 should 
> come up.
>
> One thing to check: does your test.sqlite DB already exist on your 
> windows box, with a record already in it? That would explain the 
> discrepancy (but if that were the case, the CREATE TABLE call should 
> fail, so that's probably not the problem).
>
> --
> - stephan beal
> http://wanderinghorse.net/home/stephan/
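[A quick way to act on Stephan's check is to delete any stale file before the run. This is only a sketch of the same test shape in Python's stdlib sqlite3 module; the table name and columns are guesses reconstructed from the transcript ("a" / 98), not Marco's actual code:]

```python
import os
import sqlite3

# Rule out a leftover database file from an earlier run, which would
# explain a count that is one too high.
if os.path.exists("test.sqlite"):
    os.remove("test.sqlite")

db = sqlite3.connect("test.sqlite")
db.execute("CREATE TABLE t (a TEXT, b INTEGER)")   # hypothetical schema
db.execute("INSERT INTO t VALUES ('a', 98)")
db.commit()

row = db.execute("SELECT a, b FROM t").fetchone()
print(row[0])
print(row[1])

db.close()
os.remove("test.sqlite")
```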



-
To unsubscribe, send email to [EMAIL PROTECTED]

-








RE: [sqlite] Your Concurrency Idea

2007-10-30 Thread Dan Petitt
The journal file sounds like the wrong place to put it; the journal seems to
be the place to store information that needs writing to the database on
completion. Storing read-only information in the same file seems at odds
with its current purpose.

Perhaps a separate file (or files) would be more appropriate for storing this
information. That might resolve the file format issues and also improve
(though not eliminate) the performance concerns; perhaps the information
could even, optionally, be held in memory to improve things further.

Dan



-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: 30 October 2007 17:15
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Your Concurrency Idea

"Dan Petitt" <[EMAIL PROTECTED]> wrote:
> Richard, I noticed this ticket in the system:
> http://www.sqlite.org/cvstrac/tktview?tn=2417,8
> 
> And wondered if it's something that is getting any serious thought, or
> something that is just a faint possibility?
> 

Seems to be an incompatible file format change, which more or less rules it
out for any consideration.  Also unaddressed in the proposal is how to
locate a particular page within the journal file without doing a
(performance-killing) sequential scan of a possibly very large file.

--
D. Richard Hipp <[EMAIL PROTECTED]>






[sqlite] Your Concurrency Idea

2007-10-30 Thread Dan Petitt
Richard, I noticed this ticket in the system:
http://www.sqlite.org/cvstrac/tktview?tn=2417,8

And wondered if it's something that is getting any serious thought, or
something that is just a faint possibility?






RE: [sqlite] Proposed sqlite3_initialize() interface

2007-10-30 Thread Dan Petitt
> Alternatively, you don't actually need the interface for 99.99% of users
> out there (Windows, Linux, Mac), so you could make it unnecessary for
> them, but do require it for the various esoteric embedded systems.
> That would justify still calling it SQLite version 3.
That was my first thought: just require it for the OSes that need it; all
other systems are unchanged and work as before.






RE: [sqlite] improving query performance

2006-03-30 Thread Dan Petitt
I had a similar problem: importing a lot of data into a database (imported
very infrequently but read a lot) and then accessing it. With about 6 million
rows it was taking 12 hours to get halfway through the import, so I gave up.

These are the things that massively helped me:
* Increased default_page_cache to 6 (people have a lot of RAM now on
non-embedded systems, so why not use it)
* Increased page_size to 32768 (the maximum)
* Set synchronous = OFF (big difference; data integrity was not important to
us)
* Set temp_store = MEMORY (not sure of the usefulness of this, but used it
anyway)
* Set auto_vacuum = 0 (not deleting anything, so probably not useful; set it
anyway)
* Using prepared statements cut the import time by more than 50%
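[The steps above look roughly like this with Python's stdlib sqlite3 module; the schema is hypothetical, not the poster's actual tables, and `executemany` stands in for reusing a prepared statement in C:]

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("PRAGMA page_size = 32768")   # must run before the first table exists
db.execute("PRAGMA synchronous = OFF")   # durability traded away, as in the post
db.execute("PRAGMA temp_store = MEMORY")
db.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, word TEXT)")

rows = [(i, "w%d" % i) for i in range(10000)]
with db:                                  # one transaction around the whole batch
    # executemany compiles the INSERT once and reuses the prepared statement
    db.executemany("INSERT INTO items VALUES (?, ?)", rows)

imported = db.execute("SELECT COUNT(*) FROM items").fetchone()[0]
print(imported)  # → 10000
```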

I also found that when inserting into a multi-indexed table with "on conflict
ignore", it was actually a lot quicker to search for the record and insert
only if it didn't exist than to rely on the insert failing and being ignored.
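[The two approaches compared above, sketched on a hypothetical single-key table; both paths leave the first value in place:]

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE t (k TEXT UNIQUE, v INTEGER)")

def insert_if_absent(k, v):
    # the "search first" variant: probe the index, insert only on a miss
    if db.execute("SELECT 1 FROM t WHERE k = ?", (k,)).fetchone() is None:
        db.execute("INSERT INTO t VALUES (?, ?)", (k, v))

insert_if_absent("a", 1)
insert_if_absent("a", 2)   # duplicate: skipped without a constraint failure
# the conflict-clause route the post found slower on a heavily indexed table:
db.execute("INSERT OR IGNORE INTO t VALUES ('a', 3)")

kept = db.execute("SELECT v FROM t WHERE k = 'a'").fetchone()[0]
print(kept)  # → 1
```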

Hope this helps.




-Original Message-
From: Christian Smith [mailto:[EMAIL PROTECTED] 
Sent: 30 March 2006 15:54
To: sqlite-users@sqlite.org
Cc: Subhash Mangipudi; Herc Silverstein
Subject: Re: [sqlite] improving query performance

On Wed, 29 Mar 2006, Andy Spencer wrote:

>I have a sqlite database with about 3 GB of data, most of which is stored
>in a data table with about 75 million records, having three columns
>(EntryId INTEGER, PropertyId INTEGER, Value NUMERIC) and
>PRIMARY KEY(EntryId, PropertyId).
>
>This table is not indexed, to allow faster updates.


It is indexed. The primary key clause creates an implied index on
(EntryId,PropertyId).


>
>The problem is that it takes over an hour to access all Values, for a
>specified PropertyId, when the value is obtained for each EntryId
>separately (using "SELECT Value FROM Data WHERE PropertyId=? AND
>EntryId=?", bound to the specified PropertyId and EntryId) and
>the EntryId values for successive database queries are in essentially
>random order (taken from an external list of entries that has been
>sorted by property values).
>
>This same query (getting the property value for each EntryId,
>separately) only takes about 7 minutes when the EntryId values for
>successive database queries are in the same ascending order as
>the data originally inserted into the table.


Yes. You're accessing the database in about as inefficient a way as is
possible with your data, resulting in much thrashing of caches. Under
UNIX, if you're thrashing the OS cache, you can monitor this using vmstat.


>
>I assume that this has to do with better pager caching of successive
>records in the database, whereas random access may re-read the same
>page multiple times (due to the limited cache).


If you're not thrashing the OS cache (do you have lots of RAM?) try
increasing the size of your SQLite cache. Use:

PRAGMA cache_size=20000;

This will make your cache 10x bigger than the default of 2000 pages, and may
increase the hit rate.
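[For reference, the pragma can be set and read back per connection; a quick check with Python's stdlib sqlite3 on a scratch connection (note that on a modern build the default may be reported differently than the 2000 pages assumed here):]

```python
import sqlite3

db = sqlite3.connect(":memory:")
before = db.execute("PRAGMA cache_size").fetchone()[0]  # build-dependent default
db.execute("PRAGMA cache_size = 20000")
after = db.execute("PRAGMA cache_size").fetchone()[0]
print(before, after)
```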


>
>My question is whether it should be faster to
>
>A) create an index for the table before the query,
>   query the value (for the specified PropertyId) for each EntryId
>   (in essentially random order, from external list of entries),
>   and delete the index after the queries (for each EntryId) are done


Won't help. You already have an index from the primary key.


>
>or
>
>B) issue a single "SELECT EntryId, Value FROM Data WHERE PropertyId=?"
query
>   (bound to the specified PropertyId) and step through the results,
>   using something like a hash table lookup to map the EntryId values
>   (returned from the query) back to an index into the external list of
>   entries.


This may help, as you'll not be using the primary key index, and thus the
index pages will not be competing with the table pages for memory.
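[Option B can be sketched like this with Python's stdlib sqlite3 module; the data and external entry list are made up for illustration:]

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE Data (EntryId INTEGER, PropertyId INTEGER, "
           "Value NUMERIC, PRIMARY KEY (EntryId, PropertyId))")
db.executemany("INSERT INTO Data VALUES (?, ?, ?)",
               [(e, p, e * 10 + p) for e in range(5) for p in range(2)])

entry_order = [3, 0, 4, 1, 2]                      # external list, arbitrary order
slot = {e: i for i, e in enumerate(entry_order)}   # EntryId -> position in the list
values = [None] * len(entry_order)

# one sequential scan of the property instead of per-entry random lookups
for entry_id, value in db.execute(
        "SELECT EntryId, Value FROM Data WHERE PropertyId = ?", (1,)):
    values[slot[entry_id]] = value

print(values)  # values in external-list order
```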


>
>The values extracted from the database are to be copied into an entry
>property data structure, having the same order as the external list of
>entries.
>

If you must group the values by PropertyId rather than EntryId, then
insert them into the database in that order. Is that possible?

That, or increase the amount of RAM you have.

Christian


-- 
/"\
\ /ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
 X   - AGAINST MS ATTACHMENTS
/ \




RE: [sqlite] READ UNCOMMITTED isolation?

2006-01-30 Thread Dan Petitt
I think (looking at the source) that it's a pragma, but I don't know when
you set it: once when the DB is opened, on each write, or on each read.

You are the third to ask (including me); maybe Richard or someone else can
throw some light on it for us.
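[For reference, as the feature later stabilized it is a per-connection pragma, issued after opening; it only has an effect when connections share a cache (sqlite3_enable_shared_cache in the C API), which this scratch connection does not, so the sketch just shows where it is set:]

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("PRAGMA read_uncommitted = 1")  # set once on the connection
flag = db.execute("PRAGMA read_uncommitted").fetchone()[0]
print(flag)  # → 1
```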




Dan Petitt

DigiGuide TV Guide 
First Floor Office Suite 
17 The Strand 
Exmouth 
Devon. EX8 1AF 
Tel / Fax: 01395 272555



-Original Message-
From: Jack Pan [mailto:[EMAIL PROTECTED] On Behalf Of Cecilia Chen
Sent: 29 January 2006 15:33
To: sqlite-users@sqlite.org
Subject: [sqlite] READ UNCOMMITTED isolation?

Does anyone know how to use the new READ UNCOMMITTED isolation? It would be
great to have this isolation level when one thread reads and another writes.
My program doesn't worry too much about read consistency.
 
Thanks,
 
Jack Pan





[sqlite] Read Uncommitted

2006-01-23 Thread Dan Petitt
Hi

I searched the mail archive and documentation but could not find any
answer about the new "READ UNCOMMITTED" ability.

Is it a pragma? If it is, is it set when the database is opened, each time
you start a write transaction, or in some other way?

Any assistance would be appreciated. Thanks,
Dan




RE: [sqlite] Locking

2005-12-24 Thread Dan Petitt
> Isolation in SQLite is SERIALIZABLE.  Note that SERIALIZABLE
> implies that locking can be no more fine-grained than table-level.
> You can obtain table-level locking in SQLite now.  Just put each
> table in a separate database file and ATTACH as many tables to
> your connection as you require. 
Yes, I did think of that, but it's a bit messy, and things like relationships
no longer work. I have a question on that as well: would attaching databases
make queries noticeably slower? Does SQLite maintain a cache of connections
for each of these ATTACHes, or does it, on each query, have to make a
connection, retrieve the info, and then close it again? That would be quite
an overhead, would it not?
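[For reference, the ATTACH layout Richard describes looks like this; the sketch uses Python's stdlib sqlite3, ":memory:" stands in for the per-table file names a real app would use, and the schema names are invented:]

```python
import sqlite3

db = sqlite3.connect(":memory:")
# each table lives in its own database, so each would carry its own file lock
db.execute("ATTACH DATABASE ':memory:' AS orders")
db.execute("ATTACH DATABASE ':memory:' AS customers")
db.execute("CREATE TABLE orders.o (id INTEGER PRIMARY KEY)")
db.execute("CREATE TABLE customers.c (id INTEGER PRIMARY KEY)")
db.execute("INSERT INTO orders.o VALUES (1)")

n = db.execute("SELECT COUNT(*) FROM orders.o").fetchone()[0]
print(n)  # → 1
```

On the overhead question: an attached file is opened once at ATTACH time and stays open on that connection until DETACH or close, so there is no per-query reconnect.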

> Beginning with version 3.3.0, you will be able to configure SQLite
> so that multiple connections running in the same thread will be
> able to select READ UNCOMMITTED isolation relative to one another.
This sounds really interesting. I think it would help some of our tasks, but
we do have multiple threads accessing the database abstraction layer, so
those areas wouldn't be able to use this, which is a shame. I would be
interested to know whether improving concurrency is an ongoing effort, with
more and more support being added as versions are released.

Maybe we could assist development in that area if required, though as it may
be a core area you would rather control that part of development yourself.
What are your thoughts?

Thanks for your answers, and thanks a lot for your hard work on the
development of SQLite. I have done *a lot* of investigation into databases
for my testing, and there is a lot of *rubbish* out there, a lot of
*expensive* solutions, and a lot of *slow* solutions. SQLite is by far one
of the quickest, easiest to use and integrate, excellently documented, with
good user support (through these lists), small/light, and it's *free*!

Well done and Merry Christmas to you.




RE: [sqlite] Locking

2005-12-24 Thread Dan Petitt
> Does your flat file support ACID transactions? That's the killer feature for
> my app. I want to store financial transactions and I don't trust normal flat
> files.
No, it isn't ACID; it doesn't hold critical information, but it does need
very fast read access, and write access that doesn't block reads.

But this is one of the reasons we are investigating SQLite: we want somewhat
more resilient data storage, but also the flexibility that a quick query
engine will give us. Searching for a specific word in all of the records
(300k) isn't very quick now, but with SQLite it would be much quicker and
more flexible.




[sqlite] Locking

2005-12-23 Thread Dan Petitt
Are there any plans for SQLite to support row- or table-level locking, or
possibly even Multiversion Concurrency Control (MVCC)? Its definition:
 
~~
While querying a database each transaction sees a snapshot of data (a
database version) as it was some time ago, regardless of the current state
of the underlying data. This protects the transaction from viewing
inconsistent data that could be caused by (other) concurrent transaction
updates on the same data rows, providing transaction isolation for each
database session. 
 
The main advantage to using the MVCC model of concurrency control rather
than locking is that in MVCC locks acquired for querying (reading) data do
not conflict with locks acquired for writing data, and so reading never
blocks writing and writing never blocks reading
~~
 
The reason is that we are inserting large amounts of data into the
database while we need to read it at the same time, and we need to do this
quickly.
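[The stall can be demonstrated with two connections on one file; this sketch uses Python's stdlib sqlite3, with BEGIN EXCLUSIVE standing in for a bulk insert holding the write lock (under the default rollback-journal mode):]

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "lock.db")
# autocommit mode (isolation_level=None) so BEGIN is issued explicitly;
# timeout=0 makes a busy reader fail immediately instead of retrying
writer = sqlite3.connect(path, timeout=0, isolation_level=None)
reader = sqlite3.connect(path, timeout=0, isolation_level=None)
writer.execute("CREATE TABLE t (x INTEGER)")

writer.execute("BEGIN EXCLUSIVE")          # writer grabs the exclusive lock
writer.execute("INSERT INTO t VALUES (1)")
try:
    reader.execute("SELECT COUNT(*) FROM t").fetchone()
    blocked = False
except sqlite3.OperationalError:           # "database is locked"
    blocked = True
writer.execute("COMMIT")                   # lock released

after = reader.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(blocked, after)  # → True 1

writer.close()
reader.close()
os.remove(path)
```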
 
Some background info:
 
Currently our indexed flat-file system works at speeds in excess of
SQLite (or any DB we have found), but SQLite is very close; the locking
issue, however, effectively makes the GUI stall while the inserts are
occurring.

Yes, they are wrapped up in transactions, and we only have a couple of
indexes (there are only 5 fields anyway).

CPU usage also seems to be very high while this is going on.
 
We want to use SQLite (if possible) for the flexibility it offers in
producing better queries and results than we currently can.
 
Thanks for your time.
 
Dan Petitt