Re: [sqlite] Multi-thread support

2005-07-12 Thread Weijers

Roushan,

No, SQLite 3 uses database-level locking: either one process can
write, or many processes can read. That is good enough for most embedded uses.


Gé

On Jul 12, 2005, at 3:12 AM, Roushan Ali wrote:


Hi all,
  Can anybody tell me how to handle concurrent writes using an sqlite3
database? Does sqlite3 use table-level locking?


Regards,
Roushan






--
Gé Weijers
e-mail: [EMAIL PROTECTED]




Re: [sqlite] Error 21, "library routine called out of sequence"

2005-07-09 Thread Weijers

On an almost pedantic note:

declare the variable as:

  static volatile sig_atomic_t bKilled = 0;

"volatile" keeps the compiler from caching the value in a register
and missing its change, and "sig_atomic_t" is an integer type that is
guaranteed to be read and written atomically.


Some processors can only access certain integer sizes atomically; a
char, for example, may require separate read and write cycles and so is not atomic.


Alternatively, you can mask the signal before you read 'bKilled'.
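The flag-from-a-handler pattern discussed in this thread can be sketched as a runnable example. This is Python (whose stdlib signal module makes the handler/main-loop interplay easy to demonstrate); in C you would declare the flag exactly as above, with volatile sig_atomic_t. The loop body stands in for the INSERT statements:

```python
import signal

killed = False  # in C: static volatile sig_atomic_t bKilled = 0;

def handler(signum, frame):
    global killed
    killed = True  # only set a flag; do no real work inside the handler

signal.signal(signal.SIGTERM, handler)

work_done = []
for i in range(5):
    work_done.append(i)  # stands in for one INSERT
    if i == 2:
        signal.raise_signal(signal.SIGTERM)  # simulate an external kill
    if killed:
        break  # the main loop notices the flag ...

# ... and the COMMIT happens here, outside the handler
print(work_done)  # [0, 1, 2]
```

The handler does nothing but set the flag; all database work stays in the main control flow, which is what keeps the library's internal state consistent.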

Gé

On Jul 8, 2005, at 7:50 AM, Ben Clewett wrote:


Derrell,

Thanks for the idea and the excellent coding example.

This works perfectly, thanks!

Regards,

Ben.


[EMAIL PROTECTED] wrote:


Ben Clewett <[EMAIL PROTECTED]> writes:


Dear SQLite,

I am running a sequence of inserts:

BEGIN
INSERT INTO table ...
INSERT INTO table ...
INSERT INTO table ...
INSERT INTO table ...
INSERT INTO table ...

I am catching the 'kill -1' signal (aka CTRL-C) and executing a  
final query:


COMMIT

When I execute the 'COMMIT' I get:

"library routine called out of sequence"

Every other query command after this returns the same.

My guess is that the interrupt kicks in while SQLite is completing
the previous query, so SQLite is halfway through something when it
occurs.


Can any person suggest a possible solution, as I am out of
options. For instance, some command to tidy up SQLite so that
the next statement does not fail. Otherwise I lose all my
inserted data :)


Instead of issuing the COMMIT from the signal handler, set a global
flag in the signal handler which you check in your main code. If the
flag has been set, then COMMIT and exit.

You can do something akin to this (untested code):
------------------------------------------------------------

static int  bKilled = 0;

static void
sigHandler(int signum)
{
    if (signum == SIGTERM)
    {
        bKilled = 1;
    }
}

static void
doInserts()
{
    char ** ppQueries;
    char *  queries[] =
    {
        "INSERT INTO table ...",
        "INSERT INTO table ...",
        "INSERT INTO table ...",
        NULL
    };

    /* Start a transaction */
    issueQuery("BEGIN;");

    /* For each query... */
    for (ppQueries = queries; *ppQueries != NULL; ppQueries++)
    {
        /* Issue the query */
        issueQuery(*ppQueries);

        /* If we've been signaled, exit loop */
        if (bKilled)
        {
            break;
        }
    }

    /*
     * Commit the transaction.
     *
     * Note that the signal could have occurred *before* the BEGIN.
     * You'll need to handle that case as well (or ignore the error
     * from COMMIT).
     */
    issueQuery("COMMIT;");
}








--
Gé Weijers
e-mail: [EMAIL PROTECTED]




Re: [sqlite] Reads and Writes

2005-06-14 Thread Weijers
Sean Heber wrote:

>
>
> My database file is only around 4MB and I have set the
> default_cache_size to 50000.  From what I've read, that should
> translate to almost 50MB of cache, which would be more than
> enough to keep the entire database in memory, I'd think.  Yet it
> doesn't seem to actually do that, since it is reading from the file so
> often.

SQLite cannot know whether the database file has been modified by
another process once it releases its file locks, so it needs to read
everything again. If you wrap multiple queries in a transaction, the file
stays locked and the cache remains valid, so the number of seeks/reads
should go down.

After SQLite obtains a read lock it has to perform at least the
following:
- check the file header for corruption
- check for a hot journal, and roll back changes if necessary
- read the database schema from the sqlite_master table

This is the most likely cause of the repetitive behavior you're seeing.

That said: the OS will of course cache the database file as well, and
many of the 'read' calls will be very fast.
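The transaction-wrapping advice above can be sketched concretely. This uses Python's stdlib sqlite3 bindings (table and column names here are made up for illustration): one BEGIN/COMMIT pair around the whole batch means the lock is held, and the page cache stays valid, across all the statements instead of per statement.

```python
import sqlite3

con = sqlite3.connect(":memory:", isolation_level=None)  # manual transactions
con.execute("CREATE TABLE readings (id INTEGER PRIMARY KEY, value REAL)")

# One transaction around many statements: the lock is held (and the
# cache stays warm) from BEGIN to COMMIT instead of per-statement.
con.execute("BEGIN")
for i in range(1000):
    con.execute("INSERT INTO readings (value) VALUES (?)", (i * 0.5,))
con.execute("COMMIT")

count = con.execute("SELECT COUNT(*) FROM readings").fetchone()[0]
print(count)  # 1000
```

On an on-disk database the same change also avoids one fsync per statement, which is usually the dominant cost.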

I can imagine a protocol that will improve performance by keeping track
of whether the database was updated, which would allow a quick
determination whether a program's cache was still valid. The idea is to
increment a 'version number' in the database file as soon as a program
obtains a write lock. A reader could then determine whether the file was
written to and flush the cache only when necessary, speeding things up a
little. A 128 bit counter should never wrap around.


Gé

-- 
Ge' Weijers
e-mail: [EMAIL PROTECTED]
tel:  (520)623-8542



Re: [sqlite] Does sqlite really support transaction?

2005-05-12 Thread Weijers
John Buck wrote:

> MySql works like you described.  Frankly I'm surprised Postgres doesn't.
> I'd imagine there must be a "continue transaction" command or something.

You can define a 'savepoint' inside a transaction. If something goes
wrong you roll back to the savepoint and continue from there.

You basically roll back to a known-good point. Sqlite implicitly rolls
back to the state that existed before a problematic statement.
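(SQLite itself gained SAVEPOINT support later, in release 3.6.8, so with a current build the roll-back-to-a-known-good-point pattern works there too.) A sketch via Python's stdlib bindings:

```python
import sqlite3

con = sqlite3.connect(":memory:", isolation_level=None)
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")

con.execute("BEGIN")
con.execute("INSERT INTO t VALUES (1)")
con.execute("SAVEPOINT sp1")                 # known-good point
try:
    con.execute("INSERT INTO t VALUES (1)")  # duplicate key: fails
except sqlite3.IntegrityError:
    con.execute("ROLLBACK TO sp1")           # undo back to the savepoint ...
con.execute("INSERT INTO t VALUES (2)")      # ... and continue the transaction
con.execute("COMMIT")

print([row[0] for row in con.execute("SELECT id FROM t ORDER BY id")])
# [1, 2]
```

The failed INSERT is undone, but the surrounding transaction survives and commits.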

Gé


> 
> --
> JB
> 
> -Original Message-
> From: Thomas Briggs [mailto:[EMAIL PROTECTED] 
> Sent: Thursday, May 12, 2005 12:25 PM
> To: sqlite-users@sqlite.org
> Subject: RE: [sqlite] Does sqlite really support transaction?
> 
>  
> 
>>> This isn't an SQLite thing either... All databases work
>>> this way, as far as I'm aware.
>>
>> Postgres refuses to process any further sql statements in a
>> transaction after an error occurs with one of the sql statements.
> 
> 
>Heh.  I should have said that "all databases with which I am familiar
> work this way".  Postgres is obviously not one of the databases with
> which I'm familiar. :)  I did try MS SQL Server, Oracle and DB2 and they
> all function this way.  Didn't try MySQL though... Hrm.
> 
>-Tom
> 
> 



Re: [sqlite] Does sqlite really support transaction?

2005-05-12 Thread Weijers
Vladimir,

When you execute individual statements and sqlite3_step or sqlite3_exec
returns an error code, you should execute a 'ROLLBACK' instead of a
'COMMIT'. So the logic is:

exec "BEGIN"

perform a bunch of statements

if(all statements successful)
   exec "COMMIT"
else
   exec "ROLLBACK"

Ideally you'd quit executing statements as soon as one fails.
BTW: the 'END' statement is not needed.
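Applied to the example below, the logic looks like this (a Python-sqlite3 sketch; the table is created here so the snippet is self-contained):

```python
import sqlite3

con = sqlite3.connect(":memory:", isolation_level=None)
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, end_date)")
con.execute("INSERT INTO t VALUES (9, NULL)")
con.execute("INSERT INTO t VALUES (10, NULL)")

statements = [
    "INSERT INTO t(id) VALUES (10)",        # fails: duplicate key
    "UPDATE t SET end_date=20 WHERE id=9",  # must not survive the failure
]

con.execute("BEGIN")
try:
    for sql in statements:
        con.execute(sql)        # stop at the first failure ...
    con.execute("COMMIT")
except sqlite3.Error:
    con.execute("ROLLBACK")     # ... and undo everything

end_date = con.execute("SELECT end_date FROM t WHERE id=9").fetchone()[0]
print(end_date)  # None: the UPDATE never ran, nothing was committed
```

The key point is that the caller checks each statement's result and issues the ROLLBACK itself; feeding a script of statements to the shell unconditionally commits whatever happened to succeed.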

Gé

Vladimir Zelinski wrote:
> Of course, I read the documentation and saw that
> transactions are a supported feature. But I can't get them to
> work. To make sure that we talk about the
> same thing: I understand a transaction as an atomic
> operation. One or more data-modification language
> statements can be placed within a transaction, and only two
> outcomes are legal:
> 1) all statements execute successfully, OR 2) none of
> them do. There should be no situation where some
> statements succeed and some fail within the
> same transaction.
> 
> I downloaded sqlite3 for Windows. My program
> dynamically builds a file with sql statements like
> this bellow:
> 
> 
> BEGIN TRANSACTION;
> 
> -- This SQL statement fails because it's trying
> -- to insert a duplicate key.
> INSERT INTO t(id) VALUES(10);
> 
> -- This SQL statement always succeeds despite the fact
> -- that the first SQL statement failed.
> UPDATE t SET end_date=20 WHERE id=9;
>  
> END TRANSACTION;
> COMMIT TRANSACTION;
> 
> 
> The UPDATE statement always succeeds, even when the INSERT
> fails. So some DML statements succeed and some fail within the same
> transaction. I could not believe that. I tried dozens
> of times using the syntax UPDATE OR ROLLBACK SET ..., but
> I had no luck.
> I expect that the UPDATE will never change data in the
> table if the INSERT fails.
> 
> Please explain to me what I am doing wrong. I'm a
> database person, but I could not figure out what to do
> in this situation.
> I need to get a number of SQL statements to run in
> transactional mode: all or nothing succeeds.
> 
> Is there any way to check the result of executing an SQL statement?
> Is it possible to select a system variable that would
> give me the status of the last SQL statement?
> 
> Thank you very much,
> Vladimir



Re: [sqlite] sqlite3_bind_text() and SQLITE_STATIC question

2005-05-02 Thread Weijers
Greg Miller wrote:
> Thomas Briggs wrote:
> 
>>  
>>
>>> From the looks of this warning, I would guess that you could redefine
>>> SQLITE_STATIC like this (or some variation of this that is legal C++)
>>> to solve
>>> the problem:
>>>
>>>  #define SQLITE_STATIC ((extern "C" void(*)(void*)) 0)

what about

typedef void (*sqlite_static_t)(void *);

#define SQLITE_STATIC ((sqlite_static_t)0)

In C++ the typedef 'sqlite_static_t' would sit inside "sqlite3.h",
between the

  extern "C" {

and

  }

markers, so it would refer to C linkage, not C++ linkage. That way you
don't need any #ifdef sections.


Gé


Re: [sqlite] Accessing locked databases

2005-04-29 Thread Weijers
Tomas Franzén wrote:
> Hi,
> 
> I'm using SQLite to access a database that is created and used by
> another application. Sometimes when I try to access it, I get an error
> back that the database is locked. This lock seems to be pretty
> long-lasting, so I don't think I can wait until it's unlocked. How can I
> bypass this lock? I only need to read from the database, so there is no
> risk of me overwriting something.

When the database is locked for writing you can assume the file is
actually in an inconsistent state. The application holding the lock has
started to modify the file and until the lock is released there is no
guarantee that another application could do anything useful with the
contents of the file.

Solution: modify the other application to make sure it does not hold the
lock too long, or switch to a database that allows reading during
transactions.

Gé



> 
> Any ideas? Solutions? Workarounds?
> 
> Thanks.
> 
> Tomas Franzén
> Lighthead Software
> http://www.lightheadsw.com/
> 



Re: [sqlite] sqlite3_bind_text() and SQLITE_STATIC question

2005-04-27 Thread Weijers
Ulrik Petersen wrote:
> Hi Gerry,
> 
> Gerry Blanchette wrote:
> 
>> Greetings All,
>> In general, is passing NULL to sqlite3_bind_text() as parameter 5 valid,
>> instead of using either SQLITE_STATIC or SQLITE_TRANSIENT? (My bind
>> value is a heap pointer which I manage).
>>
>> I ask because on SOLARIS, compiling (C++ compiler, .cpp module) when
>> passing SQLITE_STATIC produces this rather annoying warning:
>>
>> Warning (Anachronism): Formal argument 5 of type extern "C"
>> void(*)(void*) in call to sqlite3_bind_text(sqlite3_stmt*, int, const
>> char*, int, extern "C" void(*)(void*)) is being passed void(*)(void*).
>>
>> Thanks for your help,
>>
>> -- Gerry Blanchette
>>  
>>
> 
> When you #include <sqlite3.h>, you should enclose that #include in this
> construct:
> 
> extern "C" {
> #include <sqlite3.h>
> }
> 
> That may solve your problem.

"sqlite3.h" contains:

#ifdef __cplusplus
extern "C" {
#endif

/* ... declarations ... */

#ifdef __cplusplus
}  /* End of the 'extern "C"' block */
#endif

so it sets up the `extern "C"' block automatically.



> 
> Likewise, the function that you pass should be declared 'extern "C"' if
> it's one of your own functions.

I agree with this one.

extern "C" void foo (void*);
  and
void foo (void *);

are two different types.


Gé

> 
> HTH
> 
> Ulrik Petersen
> 



Re: [sqlite] Problem storing integers

2005-04-14 Thread Weijers
Same thing on Mac OSX. Must be a platform-independent issue.

Gé

Richard Boulton wrote:

>Hi,
>
>I'm running the latest sqlite 3.2.1 command line tool on Windows XP and have
>noticed that I don't seem to be able to store 48bit integers anymore :-S
>
>CREATE TABLE test (a INTEGER);
>INSERT INTO test VALUES(4294967295);
>INSERT INTO test VALUES(1099511627775);
>INSERT INTO test VALUES(281474976710655);
>INSERT INTO test VALUES(72057594037927935);
>SELECT * FROM test;
>
>Results in:
>
>4294967295
>1099511627775
>-1
>72057594037927935
>
>i.e. 281474976710655 is stored as -1
>
>Regards,
>Rich
>
>  
>
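For what it's worth, this looks like a bug in that particular release rather than a file-format limit: SQLite's format stores integers up to 64 bits, and with a current build (here via Python's stdlib bindings) the same values round-trip intact:

```python
import sqlite3

con = sqlite3.connect(":memory:", isolation_level=None)
con.execute("CREATE TABLE test (a INTEGER)")

values = [4294967295, 1099511627775, 281474976710655, 72057594037927935]
con.executemany("INSERT INTO test VALUES (?)", [(v,) for v in values])

stored = [row[0] for row in con.execute("SELECT a FROM test")]
print(stored == values)  # True: all four values survive, including 48-bit
```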



Re: [sqlite] sqlite performance problem

2005-04-12 Thread Weijers
Maksim Yevmenkin wrote:

>
>>>so, just with plain ascii file i get four times the speed i get with
>>>sqlite. note that my c program will scale linearly with the size of
>>>dataset (just like i see with sqlite).
>>>  
>>>
>>   With anything related to computers, there are always tradeoffs - most
>>commonly power for complexity, and flexibility for speed.  Your C
>>program *should* be faster than anything SQLite can do - it's simpler
>>and more specific to the problem you're trying to solve.  On the flip
>>side, it'll never do anything other than what it already does - it can
>>never be used to solve any other problem.
>>
>>
>
>what you say is correct, but four (!) times performance increase?
>please, you have got to admit that something is not right here.
>
>  
>
I have to agree with Thomas; your expectations are too high. If I were
using a relational database and could get within a factor of 4 of what
I can concoct in C, I'd declare victory. Relational databases are often
far from speedy, even on simple queries. You pay for:

* variable record formats
* integrity checks
* duplicate storage of keys in the BTree
* duplicate storage of keys in multiple tables
* the ACID properties, even if you're not using them in your samples
* the ability to perform queries in a flexible way
* ...

If your database is simple you may be better off performance wise by
rolling your own solution, or using another database. MySQL is pretty
fast if you run it using ISAM tables, but you pay with data corruption
if the DB or system crashes.

If your queries generally produce a sizeable percentage of the records
stored, you might as well do a sequential scan over a file; if written
with care, performance will be completely I/O bound. Use the 'mmap'
system call or equivalent to map the DB into memory, and you can read
your DB using pointer arithmetic and use 'memmove' for updates.
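The roll-your-own sequential scan can be sketched like this (Python's stdlib mmap module standing in for the C mmap call; the fixed-size record layout is an assumption for illustration):

```python
import mmap, os, struct, tempfile

# Build a file of fixed-size records: (uint32 key, uint32 value)
path = os.path.join(tempfile.mkdtemp(), "records.bin")
rec = struct.Struct("<II")
with open(path, "wb") as f:
    for k in range(1000):
        f.write(rec.pack(k, k * 2))

# Sequentially scan via mmap; the OS pages data in as needed, and
# access is plain pointer arithmetic (offsets) rather than read() calls.
matches = []
with open(path, "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
        for off in range(0, len(m), rec.size):
            k, v = rec.unpack_from(m, off)
            if k <= 10:   # the "WHERE" clause, hard-coded
                matches.append(v)

print(matches)  # values for keys 0..10
```

A scan like this is completely I/O bound and gives up everything in the bullet list above: no variable records, no integrity checks, no flexible queries.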

Gé



Re: [sqlite] sqlite performance problem

2005-04-12 Thread Weijers
Maksim,

Some things you could try:

1) increase cache memory

You may be causing a lot of cache misses if the size of the query result
is very large compared to the size of the cache. Index-based searches
can cause multiple reloads of the same page because of a lack of
locality in the cache. An index-less search will just load each page once.

As an experiment, try 'PRAGMA cache_size = 300000' before you run the
query. 1 GB of RAM should be able to support 300MB of cache.

2) use 8192-byte pages

Larger pages seem to improve performance quite a bit, in my experience.

Do 'PRAGMA page_size = 8192' before you create the database.

Doing both may cause excessive memory use (300,000 pages * 8K = 2.4 GB). I've
never tried that.
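Both pragmas can be sketched together (Python-sqlite3; the cache_size here is scaled down so that pages-times-page-size stays near the same 300MB budget):

```python
import sqlite3, os, tempfile

path = os.path.join(tempfile.mkdtemp(), "tuned.db")
con = sqlite3.connect(path, isolation_level=None)

# page_size must be set before the database file is created (first write)
con.execute("PRAGMA page_size = 8192")
# cache_size is measured in pages: 300000 x 1KB default pages ~ 300MB,
# so with 8KB pages use ~37500 pages for the same memory budget
con.execute("PRAGMA cache_size = 37500")

con.execute("CREATE TABLE t (x)")  # the first write fixes the page size

page_size = con.execute("PRAGMA page_size").fetchone()[0]
print(page_size)  # 8192
```

Note the ordering: once any page has been written, PRAGMA page_size is silently ignored for that file, so it has to come before the schema is created.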


Gé



Maksim Yevmenkin wrote:

>Robert,
>
>  
>
>>[snip]
>>
>>
>>
>>>i said i print these rows to /dev/null too in my perl code. plus the
>>>perl code does some other things such as joining these rows with other
>>>hashes and summing the numbers.
>>>  
>>>
>>That's fine.  I was merely trying to account for the 50% speed difference
>>between the two differing column tests, which has been accomplished.
>>
>>
>>
>>>>As for the temp table ... I haven't tried this, but isn't "temp" a reserved
>>>>word in SQLite?  More importantly, you should be doing this statement inside
>>>
>>>yes, it is. i really want to create 'temporary table' in memory. i was
>>>really hoping it would speed things up.
>>>  
>>>
>>I misread the statement, so ignore me on that part.  However, 339,000 rows
>>into a temporary in-memory table ... I tried some experiments locally here
>>and none of them took more than 2 seconds to execute.  Are you sure you're
>>not using up all available memory, which is causing the system to hit the
>>swapfile?  What does this same query look like when you drop the "temp" from
>>the query?
>>
>>
>
>the system has 1G of ram. i was "monitoring" sqlite3 memory usage with
>'top'. the SIZE and RES did not exceed 30M. so i do not think the
>memory is the issue here.
>
>  
>
>time sqlite3 db 'create table foo as select * from data where a <= 18234721' > /dev/null
>22.06u 1.39s 0:27.75 84.5%
>
>so pretty much the same time without 'temp'.
>
>i'm starting to suspect disk. here is what i did. i created a separate
>database with only one table. this table contains subset of 92
>rows from original data table. it also has the same index on "a"
>column, i.e. i did
>
>  
>
>sqlite3 db1
>sqlite> attach db as s;
>sqlite> create table data as select * from s.data where a <= 18234721;
>sqlite> create index data_by_a on data (a);
>
>full scan
>
>  
>
>time sqlite3 db1 'select n1 from data' > /dev/null
>17.19u 0.55s 0:19.06 93.0%
>
>"bad" index scan, because it is guaranteed that the table only has
>keys that match the "where" clause
>
>  
>
>time sqlite3 db1 'select n1 from data where a <= 18234721' > /dev/null
>25.73u 0.59s 0:28.37 92.7%
>
>+10 seconds! is this the overhead of "indexed" scan? is this what it
>really takes to seek back and forth between index and data? what am i
>missing here?
>
>thanks,
>max
>  
>



Re: [sqlite] 50MB Size Limit?

2005-04-11 Thread Weijers
Jonathan Zdziarski wrote:

>
> D. Richard Hipp wrote:
>
>> Are you sure your users are not, in fact, filling up their disk
>> drives?
>
>
> nope, plenty of free space on the drives. The 50MB limit seems to be
> very exact as well...exactly 51,200,000 bytes. I'm stumped too.

Assuming your application is called by the mail system: your mail
delivery system may limit the size of files to 50MB using the
'setrlimit' system call. If you have source code you can check that easily.

The 'bash' shell shows all resource limits with the command 'ulimit -a',
maybe you can insert it in a script somewhere and find out what it's
actually set to.

BTW: sendmail (8.13.1) explicitly tries to set the file resource limit
to 'infinity', which may fail if the hard limit was lowered.
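The resource limit can also be inspected and reproduced from inside a process (a Python sketch of the same 'setrlimit'/'ulimit -f' machinery, using the stdlib resource module; Unix only):

```python
import resource

# What 'ulimit -f' reports, seen from inside the process:
soft, hard = resource.getrlimit(resource.RLIMIT_FSIZE)

# Impose a 50MB cap on ourselves, as a mail delivery agent might do to
# its children. Any write that would grow a file past the cap then
# fails with SIGXFSZ (or EFBIG), which would look like a hard 50MB
# limit to an application such as an SQLite-backed mail filter.
cap = 51_200_000
new_soft = cap if hard == resource.RLIM_INFINITY else min(cap, hard)
resource.setrlimit(resource.RLIMIT_FSIZE, (new_soft, hard))

print(resource.getrlimit(resource.RLIMIT_FSIZE)[0] == new_soft)  # True
```

A child process inherits these limits, which is why checking `ulimit -a` from a script run in the same context as the failing application reveals the cap.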

Gé