Re: [sqlite] rollback/replay journals and durability of the most recent transaction

2008-07-02 Thread Karl Tomlinson
Igor Tandetnik writes:

> Karl Tomlinson wrote:
>
>> I really meant, as a first possibility, that writing the pages
>> to the database itself would be performed during the commit
>> process after syncing the replay journal (but the database need
>> not be synced until the journal is about to be removed).
>
> So if I insert large amounts of data, this data would be written
> to disk twice: once into a replay journal, and then again into
> the database file on commit.

Right.

> With rollback journal, it is written once into the database
> (with rollback journal merely noting which new pages have been
> created), and commit is nearly instantaneous.

Thanks.  That is an advantage of the rollback journal approach,
which could be twice as fast when limited by async IO bandwidth.
(But if the rollback is really instantaneous, the replay would
only take two instants.)
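
The commit ordering being discussed (sync the replay journal first, copy pages into the database, defer the database sync until the journal is about to be removed) can be sketched in a few lines. Everything here is invented for illustration (toy page size, file names, journal record layout), written in Python for brevity rather than SQLite's own C:

```python
import os, struct, tempfile

PAGE_SIZE = 32  # toy page size for illustration

def commit(db_path, journal_path, dirty_pages):
    """dirty_pages: dict mapping page number -> new page bytes."""
    offsets = {}  # page number -> offset in journal: the in-memory map
                  # D. Richard Hipp mentions for reloading spilled pages
    with open(journal_path, "wb") as j:
        for pgno, data in dirty_pages.items():
            offsets[pgno] = j.tell()
            j.write(struct.pack(">I", pgno) + data)
        j.flush()
        os.fsync(j.fileno())      # journal durable: this is the commit point
    with open(db_path, "r+b") as db:
        for pgno, data in dirty_pages.items():
            db.seek(pgno * PAGE_SIZE)
            db.write(data)
        db.flush()
        os.fsync(db.fileno())     # db synced only before journal removal
    os.remove(journal_path)       # transaction fully applied
    return offsets

d = tempfile.mkdtemp()
db_file = os.path.join(d, "test.db")
with open(db_file, "wb") as f:
    f.write(b"\0" * (PAGE_SIZE * 4))
offs = commit(db_file, os.path.join(d, "test.db-journal"),
              {1: b"x" * PAGE_SIZE, 3: b"y" * PAGE_SIZE})
print(sorted(offs))  # -> [1, 3]
```

A crash before the journal fsync loses nothing committed; a crash after it is recovered by replaying the journal records into the database, which is the sense in which the journal "knows the data it needs to write" without reading old content.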

On my initial reading of "writes into the rollback journal the
original content of the database pages that are to be altered" at
http://www.sqlite.org/atomiccommit.html I assumed the advantage
would swing the other way on deletion of pages, but of course the
deleted pages need not be completely erased, and so their content
need not be recorded in the rollback journal.

Perhaps, the rollback journal could also have this advantage even
on page modifications when the complete page is being written.  By
writing to a new page instead of over an old page (and not
removing the data from the old page until a later transaction),
the rollback journal need not record the content of the old page.

Note that, if there are situations where the original content of a
page needs to be recorded in the rollback journal, then there may
be an advantage in the replay journal approach, as it knows the
data that it needs to write without needing to read the old
content.

> I'd rather optimize for the common case (transactions
> successfully committed) than the exceptional one.

I'm not actually trying to optimize for the filesystem
interruption case.

I'm trying to optimize for a situation where IO bandwidth is not
high and so async IO is cheap, but there are many small
transactions and the filesystem is shared by many other apps and
so fsyncs can be very expensive.

There will be different "best" solutions for different situations,
but maybe it is possible to disable sqlite's standard journaling
and use the vfs sqlite3_io_methods to implement a virtual
filesystem with its own customized (low-level) journaling system.

Ken writes:

> Ok, I'll argue why write the entire page, why not just write
> what changed on the page? Allowing more information to be
> written to a redo journal (ie more than one modification) per
> redo page 

Potentially a good optimization, thank you.

> How often does a write actually modify the entire page?

I don't know the answer to this question.


Igor Tandetnik writes:

>>> SQLite would have to keep track of where in the replay journal each
>>> new page was written, so that if it needed to reload that page for
>>> any reason, it could.  As long as the transaction (and thus the
>>> replay journal) is not too big, this is not a problem.  A hash table
>>> in memory would suffice.  But sometimes transactions do get very
>>> large. For example, with the VACUUM command.  And in those cases,
>>> the amount of memory needed to record the mapping from pages to
>>> offsets into the replay journal could become too large to fit into
>>> memory, especially in memory-constrained devices such as cellphones.
>>
>> The size of the transaction would be something to consider in the
>> decision on when to sync the journal and write its pages to the
>> database.
>
> So presumably, as the transaction grows, at some point you may decide to 
> dump the replay journal into the database file even while the 
> transaction is still open. What if the user decides to roll it back soon 
> afterwards? Wouldn't you need both a replay and a rollback journal 
> then?

Ken writes:

> Yes you would, but then again you'd also want to put the changes
> for the undo journal into the redo journal so that if the "DB"
> crashed it would be able to recover not just the data files but
> the undo as well.

I didn't consider uncommitted transactions being written to the
database until synced to the replay journal.

I imagined that if the transaction got too large to keep in
memory, then it would need to be reread from the journal when it
is written to the database.  This would of course impact
performance, so this is another situation where the rollback
implementation for cache spill would perform better.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Attempting to merge large databases

2008-07-02 Thread John
Gene Allen wrote:
[SNIP]

> Enter ".help" for instructions
> 
> sqlite> attach 'c:\test\b.db3' as toMerge;   
> 
try this:

BEGIN;
> sqlite> insert into AuditRecords select * from toMerge.AuditRecords;
COMMIT;
> 
> sqlite> detach database toMerge;
> 
John

-- 
Regards
John McMahon
   [EMAIL PROTECTED]
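
John's recipe can be run end to end with Python's stdlib sqlite3 module; here the two ':memory:' databases stand in for the files in Gene's example (in Python, isolation_level=None is needed so the explicit BEGIN/COMMIT pass through untouched):

```python
import sqlite3

# autocommit mode, so we control the transaction with explicit BEGIN/COMMIT
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE AuditRecords(id INTEGER PRIMARY KEY, msg TEXT)")

# stands in for: attach 'c:\test\b.db3' as toMerge
conn.execute("ATTACH ':memory:' AS toMerge")
conn.execute("CREATE TABLE toMerge.AuditRecords(id INTEGER PRIMARY KEY, msg TEXT)")
conn.executemany("INSERT INTO toMerge.AuditRecords VALUES(?,?)",
                 [(1, "a"), (2, "b"), (3, "c")])

# one transaction around the bulk copy = one journal cycle, not one per row
conn.execute("BEGIN")
conn.execute("INSERT INTO AuditRecords SELECT * FROM toMerge.AuditRecords")
conn.execute("COMMIT")
conn.execute("DETACH DATABASE toMerge")

print(conn.execute("SELECT count(*) FROM AuditRecords").fetchone()[0])  # -> 3
```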




Re: [sqlite] rollback/replay journals and durability of the most recent transaction

2008-07-02 Thread Ken


Igor Tandetnik <[EMAIL PROTECTED]> wrote:
Karl Tomlinson wrote:
> Thank you for your comments.
>
>>> Karl Tomlinson wrote:
>>>
 One thing I'm trying to understand is whether there was a reason
 for SQLite choosing to use a rollback journal (of the steps to
 undo a transaction) rather than a replay journal (of the steps to
 perform a transaction).
>
> I didn't make this quite clear. I really meant, as a first
> possibility, that writing the pages to the database itself would
> be performed during the commit process after syncing the replay
> journal (but the database need not be synced until the journal is
> about to be removed).

So if I insert large amounts of data, this data would be written to disk 
twice: once into a replay journal, and then again into the database file 
on commit. With rollback journal, it is written once into the database 
(with rollback journal merely noting which new pages have been created), 
and commit is nearly instantaneous. I'd rather optimize for the common 
case (transactions successfully committed) than the exceptional one.

  Ok, I'll argue: why write the entire page? Why not just write what changed 
on the page? That would allow more information (i.e. more than one 
modification) to be written per redo-journal page. How often does a write 
actually modify the entire page?
  Definitely agree about optimizing for the common case of successfully 
committed transactions.

>> SQLite would have to keep track of where in the replay journal each
>> new page was written, so that if it needed to reload that page for
>> any reason, it could. As long as the transaction (and thus the
>> replay journal) is not too big, this is not a problem. A hash table
>> in memory would suffice. But sometimes transactions do get very
>> large. For example, with the VACUUM command. And in those cases,
>> the amount of memory needed to record the mapping from pages to
>> offsets into the replay journal could become too large to fit into
>> memory, especially in memory-constrained devices such as cellphones.
>
> The size of the transaction would be something to consider in the
> decision on when to sync the journal and write its pages to the
> database.

So presumably, as the transaction grows, at some point you may decide to 
dump the replay journal into the database file even while the 
transaction is still open. What if the user decides to roll it back soon 
afterwards? Wouldn't you need both a replay and a rollback journal 
then?

  Yes you would, but then again you'd also want to put the changes for the undo 
journal into the redo journal, so that if the "DB" crashed it would be able to 
recover not just the data files but the undo as well. Doing this will naturally 
lead you to multiple transactions, concurrency, multiple redo journals, and a 
whole lot of other features.
  Which seems to me to be a pretty large scope creep.


[sqlite] validate SQL Statement

2008-07-02 Thread Umaa Krishnan
Hello,

I was wondering if there is a way in SQLite to validate an SQL statement (for 
correct grammar and resource names: column name, table name, etc.) without 
having to do a prepare.

Thanks in advance
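
In the C API there is no separate validation entry point: sqlite3_prepare_v2() *is* the grammar and name check, and the statement can be finalized without ever being stepped. Python's stdlib sqlite3 module has no bare "prepare", but prefixing the statement with EXPLAIN is a commonly used approximation: it compiles the statement (surfacing syntax errors and unknown table/column names) without executing its effects. A sketch:

```python
import sqlite3

def validate(conn, sql):
    """Return None if sql compiles against conn's schema, else the error text."""
    try:
        # EXPLAIN compiles the statement and lists VDBE opcodes,
        # but does not perform the statement's own work
        conn.execute("EXPLAIN " + sql)
        return None
    except sqlite3.Error as e:
        return str(e)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t(a, b)")
print(validate(conn, "SELECT a FROM t"))       # -> None (valid)
print(validate(conn, "SELECT nosuch FROM t"))  # mentions "no such column"
print(validate(conn, "SELEC a FROM t"))        # syntax error
```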





Re: [sqlite] Table Level Locking

2008-07-02 Thread Alex Katebi
This is the way I hoped it should work, and it does.
Thanks so much Igor!
-Alex

On Wed, Jul 2, 2008 at 9:39 PM, Igor Tandetnik <[EMAIL PROTECTED]> wrote:

> "Alex Katebi" <[EMAIL PROTECTED]>
> wrote in message
> news:[EMAIL PROTECTED]<[EMAIL PROTECTED]>
> > Below is a section from The Definitive Guide to SQLite book
> > Is this not valid any more for the newer releases of SQLite.
>
> This is not valid anymore. See the message from Dr. Hipp in this thread:
>
>
> http://archives.devshed.com/forums/databases-124/database-table-is-locked-again-sorry-2068902.html
>
> Igor Tandetnik
>
>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>


Re: [sqlite] Table Level Locking

2008-07-02 Thread Igor Tandetnik
"Alex Katebi" <[EMAIL PROTECTED]>
wrote in message
news:[EMAIL PROTECTED]
> Below is a section from The Definitive Guide to SQLite book
> Is this not valid any more for the newer releases of SQLite.

This is not valid anymore. See the message from Dr. Hipp in this thread:

http://archives.devshed.com/forums/databases-124/database-table-is-locked-again-sorry-2068902.html

Igor Tandetnik





Re: [sqlite] Building ft3?

2008-07-02 Thread Stephen Woodbridge
Stephen Woodbridge wrote:
> Hi all,
> 
> I'm on Linux and I would like to build sqlite3 with rtree and ft3 
> support. How do I do that? I have read through the docs, website, the 
> wiki and have evidently missed the needed page(s).
> 
> For example:
> http://www.sqlite.org/compile.html
> talks about compilation options but does not say where/how to use these 
> options and it does not discuss rtree of fts3.
> 
> Help or pointers appreciated.

OK, for the record: after searching the archive and various wiki pages that 
sort of hinted at how to do it, here is what worked for me:


#
rm -rf bld lib

mkdir bld lib
cd bld
CFLAGS="-Os -DSQLITE_ENABLE_FTS3=1 -DSQLITE_ENABLE_RTREE=1" LDFLAGS=-ldl \
../sqlite/configure \
 --prefix=$HOME \
 --disable-tcl \
 --enable-load-extension \
 --with-pic \
 --enable-threadsafe \
 --enable-tempstore

make
make install

exit



Re: [sqlite] Table Level Locking

2008-07-02 Thread Alex Katebi
Below is a section from The Definitive Guide to SQLite book.
Is this no longer valid for the newer releases of SQLite?
==
Table Locks
Even if you are using just one connection, there is a special edge case that
sometimes trips
people up. While you would think that two statements from the same
connection could work
on the database with impunity, there is one important exception.
When you execute a SELECT command on a table, the resulting statement object
creates a
B-tree cursor on that table. As long as there is a B-tree cursor active on a
table, other statements—
even in the same connection—cannot modify it. If they try, they will get
SQLITE_BUSY. Consider
the following example:
c = sqlite.open("foods.db")
stmt1 = c.compile('SELECT * FROM episodes LIMIT 10')
while stmt1.step() do
    # Try to update the row
    row = stmt1.row()
    stmt2 = c.compile('UPDATE episodes SET …')
    # Uh oh: ain't gonna happen
    stmt2.step()
end
stmt1.finalize()
stmt2.finalize()
c.close()
We are only using one connection here. Regardless, when stmt2.step() is
called, it won't
work because stmt1 has a cursor on the episodes table. In this case,
stmt2.step() may actually
succeed in promoting the connection's database lock to EXCLUSIVE, but it
will still return
SQLITE_BUSY. The cursor on episodes prevents it from modifying the table. In
order to get around
this, you can do one of two things:
‧ Iterate over the results with one statement, storing the information you
need in memory.
Then finalize the reading statement, and then do the updates.
‧ Store the SELECT results in a temporary table (as described in a moment)
and open the
read cursor on it. In this case you can have both a reading statement and a
writing statement
working at the same time. The reading statement's cursor will be on a
different
table—the temporary table—and won't block the updates on the main table from
the
second statement. Then when you are done, simply drop the temporary table.
When a statement is open on a table, its B-tree cursor will be removed from
the table when
one of two things happens:
‧ The statement reaches the end of the result set. When this happens, step()
will automatically
close the statement's cursor(s). In VDBE terms, when the end of the results
set is
reached, the VDBE encounters a Close instruction, which causes all
associated cursors
to be closed.
‧ The statement is finalized. The program explicitly calls finalize(),
thereby removing all
associated cursors.
In many extensions, the call to sqlite3_finalize() is done automatically in
the statement
object's close() function, or something similar.
■Note As a matter of interest, there are exceptions to these scenarios where
you could theoretically get
away with reading and writing to the same table at the same time. In order
to do so, you would have to
convince the optimizer to use a temporary table, using something like an
ORDER BY, for example. When this
happens, the optimizer will automatically create a temporary table for the
SELECT statement and place the
reading statement's cursor on it rather than the actual table itself. In
this case, it is technically possible for a
writer to then modify the real table because the reader's cursor is on a
temporary table. The problem with this
approach is that the decision to use temporary tables is made by the
optimizer. It is not safe to presume what
the optimizer will and will not do. Unless you like to gamble, or are just
intimately acquainted with the ins and
outs of the optimizer, it is best to just follow the general rule of thumb:
don't read and write to the same table
at the same time.
Fun with Temporary Tables
Temporary tables let you bend the rules. If you absolutely have to have two
connections going
in the same block of code, or two statements operating on the same table,
you can safely do so
if you use temporary tables. When a connection creates or writes to a
temporary table, it does
not have to get a RESERVED lock, because temporary tables are maintained
outside of the database


On Wed, Jul 2, 2008 at 7:25 PM, Igor Tandetnik <[EMAIL PROTECTED]> wrote:

> Alex Katebi <[EMAIL PROTECTED]> wrote:
> >   Notice that I have multiple stmts stepping over the same table at
> > the
> > same time.
> >   Why is this OK?
>
> Why shouldn't it be?
>
> > There isn't a table level lock?
>
> A file level lock, even. It happily locks out other connections (of
> which you have none). But a connection cannot lock _itself_ out.
>
> Igor Tandetnik
>
>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>


Re: [sqlite] Table Level Locking

2008-07-02 Thread Igor Tandetnik
Alex Katebi <[EMAIL PROTECTED]> wrote:
>   Notice that I have multiple stmts stepping over the same table at
> the
> same time.
>   Why is this OK?

Why shouldn't it be?

> There isn't a table level lock?

A file level lock, even. It happily locks out other connections (of 
which you have none). But a connection cannot lock _itself_ out.

Igor Tandetnik 
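
Igor's point (a connection never blocks itself) is easy to confirm with Python's stdlib sqlite3 module: hold a read cursor open on a table and insert into the same table over the same connection mid-scan. With a current SQLite this raises no "database is locked" error, which is also why the book excerpt quoted elsewhere in this thread no longer applies:

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE t(x)")
conn.executemany("INSERT INTO t VALUES(?)", [(1,), (2,), (3,)])

seen = []
# LIMIT bounds the scan so the rows we insert below can't extend it
for (x,) in conn.execute("SELECT x FROM t LIMIT 3"):   # open read cursor on t
    seen.append(x)
    # write to the same table, same connection, while the cursor is open
    conn.execute("INSERT INTO t VALUES(?)", (x + 100,))

print(seen)  # -> [1, 2, 3]; no SQLITE_BUSY was ever returned
```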





Re: [sqlite] Table Level Locking

2008-07-02 Thread Alex Katebi
Igor,

   Notice that I have multiple stmts stepping over the same table at the
same time.
   Why is this OK? There isn't a table level lock?
   When is a table locked?

Thanks,
-Alex

On Wed, Jul 2, 2008 at 5:12 PM, Igor Tandetnik <[EMAIL PROTECTED]> wrote:

> Alex Katebi <[EMAIL PROTECTED]> wrote:
> >> I created a test file. It is attached in this email. I can not see
> >> any
> >> locking happening at all.
>
> Which part of "you should never experience any locking at all in this
> scenario" did you find unclear the first time round? Why exactly are you
> surprised?
>
> Igor Tandetnik
>
>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>


Re: [sqlite] rollback/replay journals and durability of the most recent transaction

2008-07-02 Thread Igor Tandetnik
Karl Tomlinson <[EMAIL PROTECTED]>
wrote:
> Thank you for your comments.
>
>>> Karl Tomlinson wrote:
>>>
 One thing I'm trying to understand is whether there was a reason
 for SQLite choosing to use a rollback journal (of the steps to
 undo a transaction) rather than a replay journal (of the steps to
 perform a transaction).
>
> I didn't make this quite clear.  I really meant, as a first
> possibility, that writing the pages to the database itself would
> be performed during the commit process after syncing the replay
> journal (but the database need not be synced until the journal is
> about to be removed).

So if I insert large amounts of data, this data would be written to disk 
twice: once into a replay journal, and then again into the database file 
on commit. With rollback journal, it is written once into the database 
(with rollback journal merely noting which new pages have been created), 
and commit is nearly instantaneous. I'd rather optimize for the common 
case (transactions successfully committed) than the exceptional one.

>> SQLite would have to keep track of where in the replay journal each
>> new page was written, so that if it needed to reload that page for
>> any reason, it could.  As long as the transaction (and thus the
>> replay journal) is not too big, this is not a problem.  A hash table
>> in memory would suffice.  But sometimes transactions do get very
>> large. For example, with the VACUUM command.  And in those cases,
>> the amount of memory needed to record the mapping from pages to
>> offsets into the replay journal could become too large to fit into
>> memory, especially in memory-constrained devices such as cellphones.
>
> The size of the transaction would be something to consider in the
> decision on when to sync the journal and write its pages to the
> database.

So presumably, as the transaction grows, at some point you may decide to 
dump the replay journal into the database file even while the 
transaction is still open. What if the user decides to roll it back soon 
afterwards? Wouldn't you need both a replay and a rollback journal 
then?

Igor Tandetnik 





Re: [sqlite] rollback/replay journals and durability of the most recent transaction

2008-07-02 Thread Karl Tomlinson
Thank you for your comments.

>> Karl Tomlinson wrote:
>>
>>> One thing I'm trying to understand is whether there was a reason for
>>> SQLite choosing to use a rollback journal (of the steps to undo a
>>> transaction) rather than a replay journal (of the steps to perform a
>>> transaction).

I didn't make this quite clear.  I really meant, as a first
possibility, that writing the pages to the database itself would
be performed during the commit process after syncing the replay
journal (but the database need not be synced until the journal is
about to be removed).

In this situation, the replay journal would merely contain the
steps to _re-perform_ a transaction on recovery from filesystem
interruption (in much the same way as a rollback journal is used).

This makes the following easier to implement:

> On Jul 2, 2008, at 10:56 AM, Igor Tandetnik wrote:
>
>> begin;
>> insert into mytable values ('xyz');
>> select * from mytable;
>> end;
>>
>> I would expect the select to include the row I've just inserted.

There is also the option of not writing to the database
immediately (and possibly even not syncing the journal
immediately if durability is not required).

>> But with your scheme, the record is not in the database, but is
>> still sitting in the replay journal.

Yes, this scheme would require maintaining a buffer of page
numbers in the journal as D. Richard Hipp describes:

> SQLite would have to keep track of where in the replay journal each  
> new page was written, so that if it needed to reload that page for any  
> reason, it could.  As long as the transaction (and thus the replay  
> journal) is not too big, this is not a problem.  A hash table in  
> memory would suffice.  But sometimes transactions do get very large.   
> For example, with the VACUUM command.  And in those cases, the amount  
> of memory needed to record the mapping from pages to offsets into the  
> replay journal could become too large to fit into memory, especially  
> in memory-constrained devices such as cellphones.

The size of the transaction would be something to consider in the
decision on when to sync the journal and write its pages to the
database.


Re: [sqlite] Table Level Locking

2008-07-02 Thread Igor Tandetnik
Alex Katebi <[EMAIL PROTECTED]> wrote:
>> I created a test file. It is attached in this email. I can not see
>> any
>> locking happening at all.

Which part of "you should never experience any locking at all in this 
scenario" did you find unclear the first time round? Why exactly are you 
surprised?

Igor Tandetnik 





Re: [sqlite] Update static database with records from memory database.

2008-07-02 Thread Dennis Cote
Pejayuk wrote:
> I still can't get this to work.
> Can someone help please.
> 
> 
> 
> Pejayuk wrote:
>> That is fantastic Igor.
>> Thankyou.
>> All I need to do now is work out how to get an update query to use a
>> select query to update the records from stats_memory to stats_static after
>> doing the link.
>> I think google may be able to help me with that.
>> Thankyou for your time.
>> =)
>>
>> EDIT:
>> I have just had a look around google. I think the following is what I need
>> to do but would like to know if someone can confirm I have this correct.
>>
>> I have the static_stats database and :memory: database open.
>> I then attach as follows.
>>
>> ATTACH ':memory:' as db1;
>> ATTACH 'stats_static' as db2;
>>
>> I then update the records in stats_static stats as follows from the
>> :memory: stats table.
>> INSERT OR REPLACE INTO db2.stats (rank, steamid, playername, zkills,
>> hkills, deaths, degrade, prounds, lastvisit, points) SELECT rank, steamid,
>> playername, zkills, hkills, deaths, degrade, prounds, lastvisit, points
>> from db1.stats
>>
>> I then detach the databases as follows.
>> DETACH 'stats_static'
>> DETACH ':memory:' 
>>
>> a) Would the insert or replace query as shown above, update the
>> stats_static database with the fields from :memory: ?.
>> b) After the detach command, would both the databases still be open as
>> they were before the attach command?.
>>
>> Many thanks in advance.
>>
>> EDIT2:
>> Another thought.
>> If I executed the following querys on the stats_static database.
>>
>> ATTACH ':memory:' as db1;
>>
>> INSERT OR REPLACE INTO stats SELECT rank, steamid, playername, zkills,
>> hkills, deaths, degrade, prounds, lastvisit, points from db1.stats
>>
>> DETACH ':memory:'
>>
>> Does that give me what I want.
>> Think this is more what I should be doing.
>> Can someone please confirm.
>>
>> Sorry for all the edits.
>>

You would be better served if you took a little more time to format your 
questions so they are clear to the reader. Rambling edits and pasting 
content as quotations doesn't help.

You need to open your permanent database as you normally would and then 
execute an SQL ATTACH command to attach the memory database. I don't 
know the syntax for your eventscripts language, but if you can execute 
queries, this would be much the same.

   es_xsql open stats_static |zmstats
   stats_static.execute "ATTACH ':memory:' as stats_temp"
   stats_static.execute "create table stats_temp.stats (...)"

Now copy data from the permanent database to the temp or vice versa. 
Note, "main" is the implicit name of the database that was opened as 
stats_static (i.e. by a call to sqlite3_open() by your eventscripts).

   stats_static.execute "INSERT into stats_temp.stats select * from 
main.stats"

Once the temp data is loaded it can be manipulated using normal SQL 
commands and the table name "stats_temp.stats".

When it is time to save the temp data simply copy it back.

   stats_static.execute "INSERT OR REPLACE into main.stats select * from 
stats_temp.stats"

Now you can detach the memory database and close the permanent database.

   stats_static.execute "DETACH stats_temp"
   es_xsql close stats_static

I'm doing a lot of reading between the lines, but I hope you get the 
gist of it.

HTH
Dennis Cote
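
Dennis's recipe, transcribed to Python's stdlib sqlite3 module as a sanity check (the eventscripts calls are paraphrased and the stats columns reduced to two for brevity; "main" is, as he says, the implicit name of the database the connection was opened on):

```python
import sqlite3, tempfile, os

path = os.path.join(tempfile.mkdtemp(), "stats_static.db")
conn = sqlite3.connect(path, isolation_level=None)   # "main" = permanent db
conn.execute("CREATE TABLE main.stats(steamid TEXT PRIMARY KEY, points INT)")
conn.execute("INSERT INTO main.stats VALUES('STEAM_1', 10)")

# attach an in-memory working database and give it a table of the same shape
conn.execute("ATTACH ':memory:' AS stats_temp")
conn.execute("CREATE TABLE stats_temp.stats(steamid TEXT PRIMARY KEY, points INT)")

# load permanent -> temp, work on the temp copy, then save temp -> permanent
conn.execute("INSERT INTO stats_temp.stats SELECT * FROM main.stats")
conn.execute("UPDATE stats_temp.stats SET points = points + 5")
conn.execute("INSERT OR REPLACE INTO main.stats SELECT * FROM stats_temp.stats")
conn.execute("DETACH stats_temp")

print(conn.execute("SELECT points FROM main.stats").fetchone()[0])  # -> 15
```

INSERT OR REPLACE keys the replacement off the PRIMARY KEY, so existing rows are updated in place rather than duplicated, which is what the original poster was asking to confirm.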


Re: [sqlite] Table Level Locking

2008-07-02 Thread Alex Katebi
I created a test file. It is attached in this email. I can not see any
locking happening at all.


On Wed, Jul 2, 2008 at 4:25 PM, Igor Tandetnik <[EMAIL PROTECTED]> wrote:

> Alex Katebi <[EMAIL PROTECTED]> wrote:
> > Do I need to enable shared cache mode plus read uncommitted option?
>
> You only have one connection (one call to sqlite3_open), right? Then it
> doesn't matter. "Shared" only makes a difference if there are at least
> two connections to share between.
>
> > Also you mentioned earlier:
> > "(but you will experience "dirty reads" with all the attendant
> > problems)."
> >
> > What is a dirty read?  What problems does it cause?
>
> Dirty read is another term for read uncommitted. Your select statement
> may see changes made to the database while the statement is still
> active. This is especially "interesting" if you have a query that may
> scan the same table several times. For example:
>
> select * from table1
> where exists (select * from table2 where table2.value = table1.value);
>
> Suppose you have two records in table1 both having value=1 - let's call
> them A and B. Looking at the statement, one would think that, regardless
> of what's in table2, it should always return both A and B, or neither.
>
> So, you step through the statement above. For each record in table1, it
> scans table2 in search of a matching record. At some point, a call to
> sqlite3_step returns record A. Then you run another statement that
> deletes one and only record from table2 that had value=1. A subsequent
> sqlite3_step call won't find record B anymore. So you get A but not B,
> which may be surprising.
>
> Igor Tandetnik
>
>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>

#include <stdio.h>
#include <stdlib.h>
#include <sqlite3.h>

int busy(void* arg, int cnt)
{
  printf("%s: arg %p, cnt %d\n", __func__, arg, cnt);
  return 1;
}

int main()
{
  const char* zSql;
  sqlite3* db;
  sqlite3_stmt* stmt, *stmt2, *stmt3;
  int rc;

  if((rc = sqlite3_open(":memory:", &db)))
{
  printf("open rc = %d\n", rc);
  exit(1);
}

  rc = sqlite3_busy_handler(db, busy, 0);
  printf("busy_handler rc = %d\n", rc);

  zSql = "create table t(a,b)"; 
  rc = sqlite3_prepare_v2(db, zSql, -1, &stmt, 0);
  printf("prepare rc = %d, %s\n", rc, zSql);
  rc = sqlite3_step(stmt);
  printf("step rc = %d\n", rc);
  rc = sqlite3_finalize(stmt);
  printf("finalize rc = %d\n", rc);

  zSql = "insert into t values('alex','katebi')"; 
  rc = sqlite3_prepare_v2(db, zSql, -1, &stmt, 0);
  printf("prepare rc = %d, %s\n", rc, zSql);
  rc = sqlite3_step(stmt);
  printf("step rc = %d\n", rc);
  rc = sqlite3_finalize(stmt);
  printf("finalize rc = %d\n", rc);


  zSql = "select * from t";

  rc = sqlite3_prepare_v2(db, zSql, -1, &stmt2, 0);
  printf("stmt2: prepare rc = %d, %s\n", rc, zSql);
  rc = sqlite3_step(stmt2);
  printf("stmt2: step rc = %d\n", rc);

  zSql = "insert into t values('alex','katebi')"; 
  rc = sqlite3_prepare_v2(db, zSql, -1, &stmt, 0);
  printf("prepare rc = %d, %s\n", rc, zSql);
  rc = sqlite3_step(stmt);
  printf("step rc = %d\n", rc);
  rc = sqlite3_finalize(stmt);
  printf("finalize rc = %d\n", rc);

  rc = sqlite3_prepare_v2(db, zSql, -1, &stmt3, 0);
  printf("stmt3: prepare rc = %d, %s\n", rc, zSql);
  rc = sqlite3_step(stmt3);
  printf("stmt3: step rc = %d\n", rc);

}


Re: [sqlite] Update static database with records from memory database.

2008-07-02 Thread Igor Tandetnik
Pejayuk <[EMAIL PROTECTED]> wrote:
> I still can't get this to work.

Can't get what to work? What specifically seems to be the problem?

Igor Tandetnik 





Re: [sqlite] Update static database with records from memory database.

2008-07-02 Thread Pejayuk

I still can't get this to work.
Can someone please help.
I don't understand how to do this.
Many thanks in advance.



Pejayuk wrote:
> 
> That is fantastic Igor.
> Thankyou.
> All I need to do now is work out how to get an update query to use a
> select query to update the records from stats_memory to stats_static after
> doing the link.
> I think google may be able to help me with that.
> Thankyou for your time.
> =)
> 
> EDIT:
> I have just had a look around google. I think the following is what I need
> to do but would like to know if someone can confirm I have this correct.
> 
> I have the static_stats database and :memory: database open.
> I then attach as follows.
> 
> ATTACH ':memory:' as db1;
> ATTACH 'stats_static' as db2;
> 
> I then update the records in stats_static stats as follows from the
> :memory: stats table.
> INSERT OR REPLACE INTO db2.stats (rank, steamid, playername, zkills,
> hkills, deaths, degrade, prounds, lastvisit, points) SELECT rank, steamid,
> playername, zkills, hkills, deaths, degrade, prounds, lastvisit, points
> from db1.stats
> 
> I then detach the databases as follows.
> DETACH 'stats_static'
> DETACH ':memory:' 
> 
> a) Would the insert or replace query as shown above update the
> stats_static database with the fields from :memory:?
> b) After the detach command, would both the databases still be open as
> they were before the attach command?
> 
> Many thanks in advance.
> 
> EDIT2:
> Another thought.
> If I executed the following queries on the stats_static database.
> 
> ATTACH ':memory:' as db1;
> 
> INSERT OR REPLACE INTO stats SELECT rank, steamid, playername, zkills,
> hkills, deaths, degrade, prounds, lastvisit, points from db1.stats
> 
> DETACH ':memory:'
> 
> Does that give me what I want?
> I think this is more what I should be doing.
> Can someone please confirm.
> 
> Sorry for all the edits.
> 
> Many thanks.
> 
> 
> 
> Igor Tandetnik wrote:
>> 
>> "Pejayuk" <[EMAIL PROTECTED]> wrote in
>> message news:[EMAIL PROTECTED]
>>> Is it possible to have a single SQL query execute on the stats_static
>>> stats database and have it update from the records in stats_memory
>>> stats database?.
>> 
>> http://www.sqlite.org/lang_attach.html
>> 
>> Igor Tandetnik
>> 
>> 
>> 
>> 
>> 
> 
> 

-- 
View this message in context: 
http://www.nabble.com/Update-static-database-with-records-from-memory-database.-tp18187288p18246157.html
Sent from the SQLite mailing list archive at Nabble.com.
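Pejayuk's EDIT2 is the workable shape: keep the one connection you already have and ATTACH the other database file to it (two independently opened connections cannot be joined together). A minimal sketch of the copy step using Python's built-in sqlite3 module, with a hypothetical two-column stats schema standing in for the real one:

```python
import os
import sqlite3
import tempfile

# Hypothetical on-disk database standing in for stats_static.
path = os.path.join(tempfile.mkdtemp(), "stats_static.db")
disk = sqlite3.connect(path)
disk.execute("CREATE TABLE stats (steamid TEXT PRIMARY KEY, points INTEGER)")
disk.execute("INSERT INTO stats VALUES ('alex', 1)")   # stale row
disk.commit()
disk.close()

# In-memory working database holding the fresher numbers.
mem = sqlite3.connect(":memory:")
mem.execute("CREATE TABLE stats (steamid TEXT PRIMARY KEY, points INTEGER)")
mem.execute("INSERT INTO stats VALUES ('alex', 10)")
mem.commit()                     # ATTACH is not allowed inside a transaction

# Attach the disk file and copy rows across with INSERT OR REPLACE.
mem.execute("ATTACH ? AS db2", (path,))
mem.execute("INSERT OR REPLACE INTO db2.stats SELECT steamid, points FROM stats")
mem.commit()                     # DETACH is not allowed inside one either
mem.execute("DETACH db2")

val = sqlite3.connect(path).execute(
    "SELECT points FROM stats WHERE steamid = 'alex'").fetchone()[0]
print(val)                       # the stale on-disk row was replaced
```

After DETACH, the connection is back to its pre-ATTACH state; the attached file is simply no longer visible through it.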




Re: [sqlite] Table Level Locking

2008-07-02 Thread Igor Tandetnik
Alex Katebi <[EMAIL PROTECTED]> wrote:
> Do I need to enable shared cache mode plus read uncommitted option?

You only have one connection (one call to sqlite3_open), right? Then it 
doesn't matter. "Shared" only makes a difference if there are at least 
two connections to share between.

> Also you mentioned earlier:
> "(but you will experience "dirty reads" with all the attendant
> problems)."
>
> What is a dirty read?  What problems does it cause?

Dirty read is another term for read uncommitted. Your select statement 
may see changes made to the database while the statement is still 
active. This is especially "interesting" if you have a query that may 
scan the same table several times. For example:

select * from table1
where exists (select * from table2 where table2.value = table1.value);

Suppose you have two records in table1 both having value=1 - let's call 
them A and B. Looking at the statement, one would think that, regardless 
of what's in table2, it should always return both A and B, or neither.

So, you step through the statement above. For each record in table1, it 
scans table2 in search of a matching record. At some point, a call to 
sqlite3_step returns record A. Then you run another statement that 
deletes the one and only record from table2 that had value=1. A subsequent 
sqlite3_step call won't find record B anymore. So you get A but not B, 
which may be surprising.
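The all-or-nothing expectation is easy to verify under ordinary committed-read semantics (Python's built-in sqlite3; the interleaving Igor describes additionally requires shared-cache mode with read-uncommitted enabled, which this sketch does not turn on):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table1 (value INTEGER);
    CREATE TABLE table2 (value INTEGER);
    INSERT INTO table1 VALUES (1), (1);   -- records A and B
    INSERT INTO table2 VALUES (1);        -- the one matching record
""")

query = """SELECT * FROM table1
           WHERE EXISTS (SELECT * FROM table2 WHERE table2.value = table1.value)"""

before = conn.execute(query).fetchall()    # both A and B match
conn.execute("DELETE FROM table2 WHERE value = 1")
after = conn.execute(query).fetchall()     # now neither matches

print(len(before), len(after))             # all-or-nothing: 2, then 0
```

Under read-uncommitted, the DELETE could land between two sqlite3_step calls of the outer scan, yielding A without B.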

Igor Tandetnik 





Re: [sqlite] Table Level Locking

2008-07-02 Thread Alex Katebi
Do I need to enable shared cache mode plus read uncommitted option?
Also you mentioned earlier:
"(but you will experience "dirty reads" with all the attendant problems)."

What is a dirty read?  What problems does it cause?

Thanks,
-Alex

On Wed, Jul 2, 2008 at 2:55 PM, Igor Tandetnik <[EMAIL PROTECTED]> wrote:

> Alex Katebi <[EMAIL PROTECTED]> wrote:
> > Just to make myself clearer.  I have one memory connection and many
> > stmts.
> > Each stmt multiplexes the thread. This means that a stmt could give
> > up the
> > thread without finalizing itself.
>
> That's OK. It used to be that, say, a SELECT statement in progress (not
> yet finalized or reset) would block an UPDATE statement on the same
> connection. This has not been the case for a long time now. Just use a
reasonably recent version of SQLite.
>
> Igor Tandetnik
>
>
>


Re: [sqlite] rollback/replay journals and durability of the most recenttransaction

2008-07-02 Thread D. Richard Hipp

On Jul 2, 2008, at 10:56 AM, Igor Tandetnik wrote:

> Karl Tomlinson <[EMAIL PROTECTED]>
> wrote:
>> I've done a little looking into journals, fsyncs, and filesystems
>> recently.
>>
>> One thing I'm trying to understand is whether there was a reason for
>> SQLite choosing to use a rollback journal (of the steps to undo a
>> transaction) rather than a replay journal (of the steps to perform a
>> transaction).
>
> It seems to me that with a replay journal, it would be rather  
> difficult
> to make this work:
>
> begin;
> insert into mytable values ('xyz');
> select * from mytable;
> end;
>
> I would expect the select to include the row I've just inserted. But
> with your scheme, the record is not in the database, but is still
> sitting in the replay journal.
>

Right.  In order to get transactions like the above to work correctly,  
SQLite would have to keep track of where in the replay journal each  
new page was written, so that if it needed to reload that page for any  
reason, it could.  As long as the transaction (and thus the replay  
journal) is not too big, this is not a problem.  A hash table in  
memory would suffice.  But sometimes transactions do get very large.   
For example, with the VACUUM command.  And in those cases, the amount  
of memory needed to record the mapping from pages to offsets into the  
replay journal could become too large to fit into memory, especially  
in memory-constrained devices such as cellphones.

Keeping track of changes is not a problem with a rollback journal, you  
will notice.  The current value of any page can always be obtained  
directly from the database file using the page number to compute the  
offset.
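The bookkeeping described above can be sketched as a plain page-number-to-offset map (purely illustrative; it is the size of this map, one entry per distinct page written, that becomes the problem for a huge transaction):

```python
PAGE_SIZE = 1024
journal = bytearray()   # stands in for the replay-journal file
page_offset = {}        # pgno -> byte offset of the latest copy; this is the
                        # map that could outgrow memory on a huge VACUUM

def journal_write(pgno: int, data: bytes) -> None:
    assert len(data) == PAGE_SIZE
    page_offset[pgno] = len(journal)   # later writes shadow earlier ones
    journal.extend(data)

def journal_read(pgno: int) -> bytes:
    off = page_offset[pgno]            # needed to reload a page mid-transaction
    return bytes(journal[off:off + PAGE_SIZE])

journal_write(7, b"a" * PAGE_SIZE)
journal_write(7, b"b" * PAGE_SIZE)     # page 7 rewritten within the transaction
print(journal_read(7)[:1])             # the most recent copy wins
```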

D. Richard Hipp
[EMAIL PROTECTED]





Re: [sqlite] Table Level Locking

2008-07-02 Thread Alexey Pechnikov
In a message dated Wednesday 02 July 2008 22:42:48, Alex Katebi wrote:
>    Do you mean the sqlite3_busy_timeout( ) ?
> I never thought I could use it for simulating this.
> I will give that a shot.

The client doesn't get a database-busy error; it just sleeps for some time and
executes the query later.
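In Python's built-in sqlite3, the connect(timeout=...) argument maps onto this busy timeout: the blocked connection sleeps and retries, and only raises once the timeout expires. A small sketch:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

writer = sqlite3.connect(path)
writer.execute("CREATE TABLE t (x INTEGER)")
writer.commit()
writer.execute("BEGIN IMMEDIATE")            # take and hold the write lock

reader = sqlite3.connect(path, timeout=0.1)  # busy timeout of ~100 ms
try:
    reader.execute("BEGIN IMMEDIATE")        # retries quietly, then gives up
    locked = False
except sqlite3.OperationalError:             # "database is locked"
    locked = True
print(locked)
```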


Re: [sqlite] Index and ORDER BY

2008-07-02 Thread D. Richard Hipp

On Jul 1, 2008, at 3:53 PM, Alexey Pechnikov wrote:

> In a message dated Tuesday 01 July 2008 23:47:50,
> [EMAIL PROTECTED] wrote:
>> On Tue, 1 Jul 2008, Alexey Pechnikov wrote:
>>> Is any difference between "CREATE INDEX ev_idx ON events(type,eid)"
>>> and "CREATE INDEX ev_idx ON events(type,eid desc)"? What is "desc"
>>> keyword for index?
>>
>> The DESC keyword creates the index in descending collation order,  
>> rather
>> than ascending order (default). I believe this sort order may not be
>> observed in older versions, but more recent ones do so.
>
> I'm using SQLite 3.5.9 and there are no differences in my tests between
> DESC and the default indexes. I tried creating an index with the DESC
> keyword to optimize DESC sorting, but it doesn't work for me. You can see
> my tests above.
>

Production tests for the descending index feature are found here:

 http://www.sqlite.org/cvstrac/fileview?f=sqlite/test/descidx1.test
 http://www.sqlite.org/cvstrac/fileview?f=sqlite/test/descidx2.test
 http://www.sqlite.org/cvstrac/fileview?f=sqlite/test/descidx3.test

Perhaps you can look at those tests and figure out what the difference  
is between them and your tests.


D. Richard Hipp
[EMAIL PROTECTED]





Re: [sqlite] Float, numeric and query (Continued)

2008-07-02 Thread Dominique
Igor Tandetnik <[EMAIL PROTECTED]> writes:

> 
> Dom Dom <[EMAIL PROTECTED]> wrote:
> > Igor Tandetnik <[EMAIL PROTECTED]> writes:
> >
> >> Would it be possible to get SQLAlchemy to cast to REAL, FLOAT or
> >> DOUBLE rather than NUMERIC?
> >

Hi Igor,

Thanks for answering.

It seems SQLAlchemy had a different numeric type than SQLite (no REAL in
SQLAlchemy). It's being corrected so that the two correspond and SQLAlchemy
connects to SQLite without problems in this particular case.
I will look into this further later (tomorrow!).
Thanks very much for your help

Dominique






Re: [sqlite] Table Level Locking

2008-07-02 Thread Igor Tandetnik
Alex Katebi <[EMAIL PROTECTED]> wrote:
> Just to make myself clearer.  I have one memory connection and many
> stmts.
> Each stmt multiplexes the thread. This means that a stmt could give
> up the
> thread without finalizing itself.

That's OK. It used to be that, say, a SELECT statement in progress (not 
yet finalized or reset) would block an UPDATE statement on the same 
connection. This has not been the case for a long time now. Just use a 
reasonably recent version of SQLite.

Igor Tandetnik 





Re: [sqlite] Table Level Locking

2008-07-02 Thread Alex Katebi
Alexey,

   Do you mean the sqlite3_busy_timeout( ) ?
I never thought I could use it for simulating this.
I will give that a shot.

Thanks,
-Alex


On Wed, Jul 2, 2008 at 11:40 AM, Alexey Pechnikov <[EMAIL PROTECTED]>
wrote:

> In a message dated Wednesday 02 July 2008 19:11:58, Alex Katebi wrote:
> >I have an in memory database and a single multiplexed thread for all
> > readers and writes.
> > I like to be able to read tables without locking out other readers and
> > writers.
> > Is this possible? I don't mind writers using locks but some of my readers
> > are slow and I don't want them to hold locks for long periods.
>
> You can simulate this. See "db timeout" function.


Re: [sqlite] Table Level Locking

2008-07-02 Thread Alex Katebi
Just to make myself clearer.  I have one memory connection and many stmts.
Each stmt multiplexes the thread. This means that a stmt could give up the
thread without finalizing itself.



On Wed, Jul 2, 2008 at 11:19 AM, Igor Tandetnik <[EMAIL PROTECTED]> wrote:

> Alex Katebi <[EMAIL PROTECTED]> wrote:
> >   I have an in memory database and a single multiplexed thread for all
> > readers and writes.
> > I like to be able to read tables without locking out other readers and
> > writers.
>
> As far as I can tell, you do everything on a single thread using a
> single connection. You should never experience any locking at all in
> this scenario (but you will experience "dirty reads" with all the
> attendant problems).
>
> Igor Tandetnik
>
>
>


[sqlite] Building ft3?

2008-07-02 Thread Stephen Woodbridge
Hi all,

I'm on Linux and I would like to build sqlite3 with rtree and fts3
support. How do I do that? I have read through the docs, the website, and the
wiki, and have evidently missed the needed page(s).

For example:
http://www.sqlite.org/compile.html
talks about compilation options but does not say where/how to use these
options, and it does not discuss rtree or fts3.

Help or pointers appreciated.

-Steve


[sqlite] PRAGMA synchronous = OFF on transaction-safe file system TFAT WinCE

2008-07-02 Thread [EMAIL PROTECTED]
Hello

Is it safe to set PRAGMA synchronous = OFF when a transaction-safe file system 
is used?

We are working on WinCE with TFAT (see below) - but this might be a general 
question.

Regards
Daniel

TFAT:
The original file allocation table (FAT) file system enabled file modification 
operations to be interrupted before 
completion. In this way, actions such as sudden power loss or sudden removal of 
a storage card frequently resulted in 
data loss and file system corruption. By making file operations 
transaction-safe, TFAT stabilizes the file system and 
ensures that the file system is not corrupted when an interruption occurs.


Re: [sqlite] Float, numeric and query (Continued)

2008-07-02 Thread Igor Tandetnik
Dom Dom <[EMAIL PROTECTED]> wrote:
> Igor Tandetnik <[EMAIL PROTECTED]> writes:
>
>> Would it be possible to get SQLAlchemy to cast to REAL, FLOAT or
>> DOUBLE rather than NUMERIC?
>
> Hello Igor,
> Thanks for answering.
>
>> Casting to NUMERIC leaves integers as integers;
>> basically, it's a no-op in your query.
>
> Are you sure ?

Yes.

>
> When I do in my Query1 10/2, it gives 5. Fine
> If I do 11/2, it gives 5 and not 5.5. It seems it's a classic problem
> of division.

I don't see a contradiction between this and my statement.

Compare and contrast:

select
11 / 2,
11.0 / 2,
11 / 2.0,
cast(11 as real) / 2,
cast(11 as numeric) / 2;

Exercise for the reader: run this statement in SQLite, explain its 
output. Discuss.
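For reference, here is the statement run through Python's built-in sqlite3; the only surprise is that the NUMERIC cast leaves 11 as an integer, so that division stays integer division:

```python
import sqlite3

row = sqlite3.connect(":memory:").execute("""
    SELECT 11 / 2,
           11.0 / 2,
           11 / 2.0,
           CAST(11 AS REAL) / 2,
           CAST(11 AS NUMERIC) / 2
""").fetchone()
print(row)   # (5, 5.5, 5.5, 5.5, 5): NUMERIC kept 11 an integer
```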

Igor Tandetnik 





Re: [sqlite] Table Level Locking

2008-07-02 Thread Igor Tandetnik
Alex Katebi <[EMAIL PROTECTED]> wrote:
>   I have an in memory database and a single multiplexed thread for all
> readers and writes.
> I like to be able to read tables without locking out other readers and
> writers.

As far as I can tell, you do everything on a single thread using a 
single connection. You should never experience any locking at all in 
this scenario (but you will experience "dirty reads" with all the 
attendant problems).

Igor Tandetnik 





Re: [sqlite] Table Level Locking

2008-07-02 Thread Alex Katebi
Hi Igor,

   I have an in memory database and a single multiplexed thread for all
readers and writes.
I like to be able to read tables without locking out other readers and
writers.
Is this possible? I don't mind writers using locks but some of my readers
are slow and I don't want them to hold locks for long periods.

Thanks,
-Alex


On Wed, Jul 2, 2008 at 10:51 AM, Igor Tandetnik <[EMAIL PROTECTED]> wrote:

> Joanne Pham <[EMAIL PROTECTED]> wrote:
> > I read the online document regarding "Table Level Locking" as below:
> > At any one time, a single table may have any number of active
> > read-locks or a single active write lock. To read data a table, a
> > connection must first obtain a read-lock. To write to a table, a
> > connection must obtain a write-lock on that table. If a required
> > table lock cannot be obtained, the query fails and SQLITE_LOCKED is
> > returned to the caller
> > So the question that I had is while writing the data to table(write
> > lock) another process can read the data from same table without any
> > problem?
>
> The article you quote applies to connections that have opted into shared
> cache. Only connections in the same process can share cache. Connections
> from different process use the traditional file-level locking.
>
> Even connections with shared cache cannot read and write the same table
> simultaneously (unless you also turn on "read uncommitted" option).
> Consider the passage you yourself have just quoted: "at any one time, a
> single table may have any number of active read-locks *OR* a single
> active write lock" (emphasis mine).
>
> Igor Tandetnik
>
>
>


Re: [sqlite] rollback/replay journals and durability of the most recenttransaction

2008-07-02 Thread Igor Tandetnik
Karl Tomlinson <[EMAIL PROTECTED]>
wrote:
> I've done a little looking into journals, fsyncs, and filesystems
> recently.
>
> One thing I'm trying to understand is whether there was a reason for
> SQLite choosing to use a rollback journal (of the steps to undo a
> transaction) rather than a replay journal (of the steps to perform a
> transaction).

It seems to me that with a replay journal, it would be rather difficult 
to make this work:

begin;
insert into mytable values ('xyz');
select * from mytable;
end;

I would expect the select to include the row I've just inserted. But 
with your scheme, the record is not in the database, but is still 
sitting in the replay journal.
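The visibility Igor expects is indeed what SQLite provides, and any journaling scheme has to preserve it. A quick check with Python's built-in sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (v TEXT)")

conn.execute("INSERT INTO mytable VALUES ('xyz')")   # opens a transaction
# The uncommitted row is already visible within the same transaction.
rows = conn.execute("SELECT * FROM mytable").fetchall()
conn.commit()
print(rows)
```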

Igor Tandetnik 





Re: [sqlite] Table Level Locking

2008-07-02 Thread Igor Tandetnik
Joanne Pham <[EMAIL PROTECTED]> wrote:
> I read the online document regarding "Table Level Locking" as below:
> At any one time, a single table may have any number of active
> read-locks or a single active write lock. To read data a table, a
> connection must first obtain a read-lock. To write to a table, a
> connection must obtain a write-lock on that table. If a required
> table lock cannot be obtained, the query fails and SQLITE_LOCKED is
> returned to the caller
> So the question that I had is while writing the data to table(write
> lock) another process can read the data from same table without any
> problem?

The article you quote applies to connections that have opted into shared 
cache. Only connections in the same process can share cache. Connections 
from different process use the traditional file-level locking.

Even connections with shared cache cannot read and write the same table 
simultaneously (unless you also turn on "read uncommitted" option). 
Consider the passage you yourself have just quoted: "at any one time, a 
single table may have any number of active read-locks *OR* a single 
active write lock" (emphasis mine).

Igor Tandetnik 





Re: [sqlite] Limiting the time a query takes

2008-07-02 Thread Graeme
Thanks for having the patience to answer. I really deserved an RTFM for that.

On Wednesday 02 July 2008 13:51:29 Igor Tandetnik wrote:
> sqlite3_progress_handler



-- 
Graeme Pietersz
http://moneyterms.co.uk/
http://pietersz.co.uk/


Re: [sqlite] Multiple constraints per table?

2008-07-02 Thread Harold Wood & Meyuni Gani
Lol. Thanks. If you want a schema I can attach and send to you.

Woody
from his pda

-Original Message-
From: flakpit <[EMAIL PROTECTED]>
Sent: Tuesday, July 01, 2008 11:30 PM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Multiple constraints per table?



Harold Wood & Meyuni Gani wrote:
> 
> U, hmm. The tips I gave you were from my pda based shopping program
> that will be selling for 9.99 soon.
> 

Good on you Woody, hope you sell a lot! If my eyes were up to the challenge
of reading my PDA's small screen, I'd buy a copy and save myself work (LOL!)

With the amount of junk I've churned out over the years, I've yet to get
anyone to buy anything (ROFL).  Actually, not quite true. One
registration out of 6,500 downloads of my address book.
-- 
View this message in context: 
http://www.nabble.com/Multiple-constraints-per-table--tp18209309p18231100.html
Sent from the SQLite mailing list archive at Nabble.com.



Re: [sqlite] How to connect the SQLite with DBDesigner4?

2008-07-02 Thread Mihai Limbasan
winstonma wrote:
> Then I wonder how can I "export" my DB4Designer work to the SQLite database?
>
>   
I have no idea - again, please consult the DB4Designer documentation to 
find out the available export options.


Re: [sqlite] Limiting the time a query takes

2008-07-02 Thread Igor Tandetnik
"Graeme" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> Is there any way to limit the time a query takes? i.e. tell sqlite to
> give up and return an error if the query is not done within a certain
> time.

sqlite3_progress_handler
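Python's built-in sqlite3 exposes this as set_progress_handler; returning a nonzero value from the callback interrupts the running statement, which surfaces as an exception. A sketch of using it as a crude query limiter:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

calls = 0
def give_up():
    global calls
    calls += 1
    return 1 if calls > 5 else 0   # nonzero => interrupt the query

conn.set_progress_handler(give_up, 1000)   # invoked every 1000 VM opcodes

try:
    # A deliberately long-running query (large recursive CTE).
    conn.execute("""
        WITH RECURSIVE c(x) AS
          (SELECT 1 UNION ALL SELECT x + 1 FROM c WHERE x < 1000000)
        SELECT count(*) FROM c
    """).fetchone()
    interrupted = False
except sqlite3.OperationalError:           # "interrupted"
    interrupted = True
print(interrupted)
```

For a wall-clock limit, the callback would compare time.monotonic() against a deadline instead of counting invocations.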

Igor Tandetnik 





[sqlite] Limiting the time a query takes

2008-07-02 Thread Graeme
Is there any way to limit the time a query takes? i.e. tell sqlite to give up 
and return an error if the query is not done within a certain time.

From the limits page of the documentation, it appears not to be possible, but 
also not to matter too much: the explanation of limits of LIKE and GLOB 
patterns discusses maliciously constructed patterns, but DOS by a malicious 
user is not discussed elsewhere, so I guess it is unlikely to be an issue in 
other circumstances and I should not be too bothered about this.

Graeme
-- 
Graeme Pietersz
http://moneyterms.co.uk/
http://pietersz.co.uk/


Re: [sqlite] Can't get MAX value from table

2008-07-02 Thread qinligeng
MAX( ColumnName ) produces a result column with no column name of its own;
give the expression an alias and read the result by that alias.
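qinligeng's point is that the result column of MAX( ColumnName ) has no usable column name, so fetching it by "ColumnName" finds nothing; aliasing the expression fixes the lookup. A sketch in Python's built-in sqlite3 (the MaxValue alias is a hypothetical name):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row   # allows access by column name
conn.execute("CREATE TABLE TableName (ColumnName INTEGER)")
conn.executemany("INSERT INTO TableName VALUES (?)", [(3,), (7,), (5,)])

row = conn.execute(
    "SELECT MAX(ColumnName) AS MaxValue FROM TableName").fetchone()
result = row["MaxValue"]         # read the result by its alias
print(result)
```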

- Original Message - 
From: "Bob Dennis" <[EMAIL PROTECTED]>
To: "SQLite user gorup" 
Sent: Wednesday, July 02, 2008 4:01 PM
Subject: [sqlite] Can't get MAX value from table


> 
> Hi, I am fairly new to SQLite, and using it to replace Microsoft db in
> PocketPC applications.
> 
> I am having trouble getting a MAX value from a table as follows:-
> 
> SQL = SELECT MAX( ColumnName ) FROM  TableName
> 
> Set Recs = db.Execute(sql) 
> 
> Result = recs(1)(ColumnName) 
> 
> I get nothing in the Result value.
> 
> I have tried putting an index on the column but it made no difference.
> 
> Any ideas would be greatly appreciated
> 
> Thanks
> 
> Bob Dennis


Re: [sqlite] Float, numeric and query

2008-07-02 Thread Igor Tandetnik
"Dom Dom" <[EMAIL PROTECTED]> wrote in
message
news:[EMAIL PROTECTED]
>> The query, translated into sql by SQLAlchemy, is (it returns tuple
>> objects): SELECT mytable.id AS mytable_id, mytable."colA" AS
>> "mytable_colA", mytable."colB" AS "mytable_colB", mytable."colC" AS
>> "mytable_colC", CAST(mytable."colB" AS NUMERIC(10, 2)) /
>> CAST(mytable."colC" AS NUMERIC(10, 2)) AS anon_1
>> FROM mytable ORDER BY mytable.oid
>>
>> This query does not return correct results: 1/10 should be 0.1 and
>> not 0

Would it be possible to get SQLAlchemy to cast to REAL, FLOAT or DOUBLE 
rather than NUMERIC? Casting to NUMERIC leaves integers as integers; 
basically, it's a no-op in your query. For more details, see 
http://www.sqlite.org/datatype3.html
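The no-op nature of the NUMERIC cast on integers is visible side by side (Python's built-in sqlite3):

```python
import sqlite3

r = sqlite3.connect(":memory:").execute(
    "SELECT CAST(1 AS NUMERIC) / 10, CAST(1 AS REAL) / 10").fetchone()
print(r)   # (0, 0.1): NUMERIC kept 1 an integer, REAL made it 1.0
```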
-- 
With best wishes,
Igor Tandetnik

With sufficient thrust, pigs fly just fine. However, this is not 
necessarily a good idea. It is hard to be sure where they are going to 
land, and it could be dangerous sitting under them as they fly 
overhead. -- RFC 1925 





[sqlite] Float, numeric and query

2008-07-02 Thread Dom Dom
Hi,

I am using SQLAlchemy which is an very nice ORM under python:
http://www.sqlalchemy.org/
I am only an amateur.
I attach a file, which will be helpful for people willing to help me.

I am trying to make a query with a simple table containing integers and
floats.
The purpose of the query is to divide colB by colC.
the result of the division is between 0 and 1, since 0 =< colB =http://groups.google.com/group/sqlalchemy/browse_thread/thread/a3cc437e0db6059a#

Therefore, my question is:
Can the above problem be due to sqlite ? I doubt since it returns correct
results with my Query2 example.
If not, does anybody know if my Query1 is wrong and why ?

Sorry not to be as clear as I would have liked to.
Thanks a lot in advance

Dominique


Re: [sqlite] How to connect the SQLite with DBDesigner4?

2008-07-02 Thread winstonma

Then I wonder how can I "export" my DB4Designer work to the SQLite database?


winstonma wrote:
> 
> I tried to export it, but the SQL command exported from DBDesigner4 is not
> going to run on SQLite (it works on MySQL). However, I saw that
> DBDesigner4 can connect to an SQLite server.
> 
> I tried to download the source code from SQLite CVS. However I have no
> clue how to build the SQLite server from the source code. I also tried the
> binary on the SQLite webpage; I can't find a server option there either. So
> is there a way that I can export my tables from DBDesigner4 to SQLite?
> 

-- 
View this message in context: 
http://www.nabble.com/How-to-connect-the-SQLite-with-DBDesigner4--tp18189489p18235177.html
Sent from the SQLite mailing list archive at Nabble.com.



[sqlite] Temp Files are not closed during a select call.

2008-07-02 Thread Naganathan Rajesh

We are executing a query which does a select from two different tables and does
a union. For example:
select phonename,uid from contact_primary_info union select name,itemId from
Contact_SIM order by 1 ASC;

We are seeing that the SQLite library calls openFile twice with the same file
name during this select operation. The second time the openFile call is

The first time, the SQLite library passed the following flag values:
isExclusive: 16  isDelete: 8  isCreate: 4  isReadOnly: 0  isReadWrite: 2

Before closing the file, the SQLite library again passed the same flag values
with the same file name. Can anyone please explain whether SQLite can do this
without closing the previously opened file?
 

Best Regards,
N.Rajesh
Courage is the knowledge of how to fear what ought to be feared and how not to 
fear what ought not to be feared.


Re: [sqlite] SQLite and updating VIEWs

2008-07-02 Thread Dan

On Jul 2, 2008, at 2:12 AM, Shawn Wilsher wrote:

> Hey all,
>
> I'm working with a partitioned table setup with a permanent table and
> a temp table with the same columns and indexes.  Every X time we dump
> all the records in the temp table over to the permanent one.  In order
> to make selection queries easier to manage, I've gone and created a
> view like so:
> CREATE TEMPORARY VIEW table_view AS
> SELECT * FROM table_temp
> UNION
> SELECT * FROM table
>
> This was all going well, until I realized that updating was going to
> be very hard (insertion always goes to the temporary table).  That
> seemed easy enough to manage if I use an INSTEAD OF trigger on the
> view for UPDATE statements.  The problem is what I want to do in the
> trigger, which is this:
> 1) if the data is in the temporary table, update that
> 2) if the data is not in the temporary table, copy the data from the
> permanent table into the temp one, and then update the temp table
> Sadly, my SQL-fu isn't strong enough to know how to do this, and I'm
> starting to wonder if it's even possible.  If someone could tell me if
> I can do it, and then provide a pointer as to how to go about it, I'd
> really appreciate it.

(Please excuse the rambling nature of this post. Thinking out loud here.
None of the sql is tested either.)

The idea here is to use a temp table as a cache for write operations on
the real table, correct? To minimize the number of writes (and therefore
fsync() operations) on the real table?

And the cache has to support both UPDATE and INSERT operations? How about
DELETE?

Is the following correct?

   * When inserting a new row into the system, it should be added to the
     temporary table.

   * When updating a row, if it is not already in the temp table, it
     should be copied from the real table into the temp table and then
     the temp table copy updated.

   * When reading from the system, we want to return all the records
     from the temp table, and all those records from the "real" table
     that do not have corresponding temp table records.

   * (not sure how a delete, if it is required, should work).
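For the DELETE question left open in the last bullet, one possibility (not
from the original post - a sketch using a hypothetical "uris_deleted"
tombstone table, written against Python's sqlite3 so it can actually be
run) is to record deleted keys and filter them out of the view:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE uris_real(host PRIMARY KEY, hits INTEGER);
    INSERT INTO uris_real VALUES('sqlite.org', 100);
    CREATE TABLE uris_temp(host PRIMARY KEY, hits INTEGER);

    -- hypothetical tombstone table: keys deleted but not yet flushed
    CREATE TABLE uris_deleted(host PRIMARY KEY);

    CREATE VIEW uris AS
      SELECT * FROM uris_temp
      UNION ALL
      SELECT * FROM uris_real
       WHERE host NOT IN (SELECT host FROM uris_temp)
         AND host NOT IN (SELECT host FROM uris_deleted);

    -- a DELETE on the view removes any cached copy and records a tombstone
    CREATE TRIGGER uris_delete INSTEAD OF DELETE ON uris BEGIN
      DELETE FROM uris_temp WHERE host = old.host;
      INSERT OR IGNORE INTO uris_deleted VALUES(old.host);
    END;
""")

db.execute("DELETE FROM uris WHERE host = 'sqlite.org'")
print(db.execute("SELECT count(*) FROM uris").fetchone()[0])  # 0
```

At flush time the tombstones would presumably be applied first (DELETE
FROM uris_real WHERE host IN (SELECT host FROM uris_deleted)) before the
INSERT OR REPLACE; re-inserting a previously deleted key would also need
extra handling in the INSERT trigger.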

I think that supporting the UPDATE operation makes it more difficult to
arrange all this using SQL triggers and views than it would be if each
record existed in either the temp or real tables (but not both).

For example, say we're working with:

 CREATE TABLE uris_real(host PRIMARY KEY, hits INTEGER);
 INSERT INTO uris_real VALUES('sqlite.org', 100);
 INSERT INTO uris_real VALUES('slashdot.org', 200);
 INSERT INTO uris_real VALUES('mozilla.org', 300);

So to create the cache table, we could do:

 CREATE TEMP TABLE uris_temp(host PRIMARY KEY, hits INTEGER);

Records are identified by their primary key, and at some point in the
future the cache will be flushed through by doing something like:

   BEGIN;
 INSERT OR REPLACE INTO uris_real SELECT * FROM uris_temp;
 DELETE FROM uris_temp; /* Optional step */
   COMMIT;

Ok, then:

 CREATE TEMP VIEW uris AS
   SELECT * FROM uris_temp
   UNION ALL
   SELECT * FROM uris_real
 ;

A trigger to support INSERT operations seems easy enough:

 CREATE TRIGGER uris_insert INSTEAD OF INSERT ON uris BEGIN
   INSERT INTO uris_temp VALUES(new.host, new.hits);
 END;

Of course, that would allow me to insert a new record ('sqlite.org', 1),
which violates the PRIMARY KEY uniqueness constraint. So I could adjust
the trigger to deal with that:

 CREATE TRIGGER uris_insert INSTEAD OF INSERT ON uris BEGIN
   SELECT CASE
     WHEN EXISTS (SELECT host FROM uris_real WHERE host = new.host) THEN
       RAISE(ABORT, 'primary key constraint violated')
   END;
   INSERT INTO uris_temp VALUES(new.host, new.hits);
 END;

Now the UPDATE trigger.

 CREATE TRIGGER uris_update INSTEAD OF UPDATE ON uris BEGIN
   INSERT OR REPLACE INTO uris_temp VALUES(new.host, new.hits);
 END;

The constraint again... The trigger above would let me do something like
"UPDATE uris SET host = 'sqlite.org'". And besides, if the primary key
is updated, how will we tell which records to copy over in the real
table when the cache is flushed through? So maybe we disallow updates on
the primary key:

 CREATE TRIGGER uris_update INSTEAD OF UPDATE ON uris BEGIN
   SELECT CASE
     WHEN new.host != old.host THEN
       RAISE(ABORT, 'cannot update primary key')
   END;
   INSERT OR REPLACE INTO uris_temp VALUES(new.host, new.hits);
 END;

One more problem - after an UPDATE, the view won't work properly anymore,
as duplicate records may be returned. So we can modify it to:

 CREATE TEMP VIEW uris AS
   SELECT * FROM uris_temp
   UNION ALL
   SELECT * FROM uris_real WHERE host NOT IN (SELECT host FROM uris_temp)
 ;
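
Since none of this SQL has been tested, the whole scheme can be exercised
end-to-end. Below is a sketch using Python's sqlite3, with the TEMP table
and view created as ordinary objects so everything lives in one in-memory
database:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE uris_real(host PRIMARY KEY, hits INTEGER);
    INSERT INTO uris_real VALUES('sqlite.org', 100);
    INSERT INTO uris_real VALUES('slashdot.org', 200);
    INSERT INTO uris_real VALUES('mozilla.org', 300);

    CREATE TABLE uris_temp(host PRIMARY KEY, hits INTEGER);

    CREATE VIEW uris AS
      SELECT * FROM uris_temp
      UNION ALL
      SELECT * FROM uris_real
       WHERE host NOT IN (SELECT host FROM uris_temp);

    CREATE TRIGGER uris_insert INSTEAD OF INSERT ON uris BEGIN
      SELECT CASE
        WHEN EXISTS (SELECT host FROM uris_real WHERE host = new.host) THEN
          RAISE(ABORT, 'primary key constraint violated')
      END;
      INSERT INTO uris_temp VALUES(new.host, new.hits);
    END;

    CREATE TRIGGER uris_update INSTEAD OF UPDATE ON uris BEGIN
      SELECT CASE
        WHEN new.host != old.host THEN
          RAISE(ABORT, 'cannot update primary key')
      END;
      INSERT OR REPLACE INTO uris_temp VALUES(new.host, new.hits);
    END;
""")

# INSERT goes to the cache; UPDATE copies the row into the cache.
db.execute("INSERT INTO uris VALUES('example.org', 1)")
db.execute("UPDATE uris SET hits = 101 WHERE host = 'sqlite.org'")
print(sorted(db.execute("SELECT host, hits FROM uris")))
# [('example.org', 1), ('mozilla.org', 300), ('slashdot.org', 200), ('sqlite.org', 101)]

# Flush the cache through to the real table.
with db:
    db.execute("INSERT OR REPLACE INTO uris_real SELECT * FROM uris_temp")
    db.execute("DELETE FROM uris_temp")
print(sorted(db.execute("SELECT host, hits FROM uris_real")))
# same four rows as the view reported before the flush
```

The RAISE(ABORT) paths surface on the Python side as sqlite3.IntegrityError,
e.g. when re-inserting 'sqlite.org' or updating a host value.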

So we now have a system that supports UPDATE and INSERT, so long as one
doesn't UPDATE the table's primary key. Still not sure how to 

Re: [sqlite] Index and ORDER BY

2008-07-02 Thread Alexey Pechnikov
In a message on Wednesday 02 July 2008 08:25:10, Dan wrote:
> > I'm using SQLite 3.5.9 and there are no differences in my tests
> > between DESC and default indices. I tried creating an index with the
> > keyword DESC to optimize DESC sorting, but it doesn't work for me.
> > You can see my tests above.
>
> Have you seen the notes on file-format and "pragma legacy_file_format"
> in the documentation for CREATE INDEX?
>
>    http://www.sqlite.org/lang_createindex.html

I tried testing the new file format with "PRAGMA legacy_file_format = off":

#!/usr/bin/tclsh
package require sqlite3
sqlite3 db index_order.db
db eval {PRAGMA legacy_file_format = off}
db eval {DROP TABLE IF EXISTS events}
db eval {CREATE TABLE events (eid INTEGER PRIMARY KEY,type INTEGER)}
db transaction {
    for {set i 0} {$i<1} {incr i} {
        set type [expr {$i % 50}]
        db eval {insert into events values ($i,$type)}
    }
}
db eval {CREATE INDEX ev_desc_idx ON events(type asc,eid desc)}
db close

$ ls -lh
total 4.0G
-rw-r--r-- 1 veter veter 4.0G Jul  2 12:44 index_order.db

The database size is larger than with the default "PRAGMA legacy_file_format = on".

sqlite> SELECT events.* FROM events WHERE eid<=32619760 and type=22 ORDER BY 
eid DESC LIMIT 1;
32619722|22
CPU Time: user 0.572035 sys 0.232014

sqlite> EXPLAIN QUERY PLAN SELECT events.* FROM events WHERE eid<=32619760 and 
type=22 ORDER BY eid DESC LIMIT 1;
0|0|TABLE events WITH INDEX ev_desc_idx ORDER BY

===
Result: with the new file format, an index with the keyword "desc" still does not work.
===
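
The query's expected answer can at least be checked at a smaller scale - a
sketch in Python's sqlite3 with the row count cut to 1000 so it runs
instantly (whether the DESC index is actually used for the scan still
depends on the SQLite version and file format):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE events (eid INTEGER PRIMARY KEY, type INTEGER)")
with db:
    # same data pattern as the Tcl script: type cycles 0..49
    db.executemany("INSERT INTO events VALUES (?, ?)",
                   ((i, i % 50) for i in range(1000)))
db.execute("CREATE INDEX ev_desc_idx ON events(type ASC, eid DESC)")

# scaled-down analogue of the original query
row = db.execute("""SELECT * FROM events
                     WHERE eid <= 760 AND type = 22
                     ORDER BY eid DESC LIMIT 1""").fetchone()
print(row)  # (722, 22)
```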



[sqlite] Can't get MAX value from table

2008-07-02 Thread Bob Dennis

Hi, I am fairly new to SQLite, and using it to replace Microsoft db in
PocketPC applications.

I am having trouble getting a MAX value from a table as follows:-

SQL = SELECT MAX( ColumnName ) FROM  TableName

Set Recs = db.Execute(sql) 

Result = recs(1)(ColumnName) 

I get nothing in the Result value.

I have tried putting an index on the column but it made no difference.

Any ideas would be greatly appreciated

Thanks

Bob Dennis


Re: [sqlite] Multiple constraints per table?

2008-07-02 Thread flakpit


Harold Wood & Meyuni Gani wrote:
> 
> U, hmm. The tips I gave you were from my pda based shopping program
> that will be selling for 9.99 soon.
> 

Good on you Woody, hope you sell a lot! If my eyes were up to the challenge
of reading my PDA's small screen, I'd buy a copy and save myself work (LOL!)

With the amount of junk I've churned out over the years, I've yet to get
anyone to buy anything (ROFL). Actually, not quite true. One
registration out of 6,500 downloads of my address book.
-- 
View this message in context: 
http://www.nabble.com/Multiple-constraints-per-table--tp18209309p18231100.html
Sent from the SQLite mailing list archive at Nabble.com.



Re: [sqlite] Multiple constraints per table?

2008-07-02 Thread Harold Wood & Meyuni Gani
Uh, hmm. The tips I gave you were from my pda based shopping program that 
will be selling for 9.99 soon. It's six of one, half a dozen of the other. 
You can design the db so it does the work for you, or you can code the 
program to do the work for you.

Either way, you will get things to work, it just depends upon how you want to 
partition your code.

Woody
from his pda

-Original Message-
From: flakpit <[EMAIL PROTECTED]>
Sent: Tuesday, July 01, 2008 11:00 PM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Multiple constraints per table?


I understand your solution Woody, but it isn't practical for me, not for my
home shopping list program. A full on relational database like that is an
awful lot of work and it's only for home use (and any other family I can con
into using it and testing it).

I'd go for the relational route if I were designing an enterprise-wide
product, it's only sensible (as you intimated), but Igor's solution (that I
also found independently) will work fine for a small system for now.

Thanks for the ideas, will keep proper design in mind if I ever get talked
into doing something for a company (something I try mightily to avoid,
believe me!!!)
-- 
View this message in context: 
http://www.nabble.com/Multiple-constraints-per-table--tp18209309p18230807.html
Sent from the SQLite mailing list archive at Nabble.com.



Re: [sqlite] Multiple constraints per table?

2008-07-02 Thread flakpit

I understand your solution Woody, but it isn't practical for me, not for my
home shopping list program. A full on relational database like that is an
awful lot of work and it's only for home use (and any other family I can con
into using it and testing it).

I'd go for the relational route if I were designing an enterprise-wide
product, it's only sensible (as you intimated), but Igor's solution (that I
also found independently) will work fine for a small system for now.

Thanks for the ideas, will keep proper design in mind if I ever get talked
into doing something for a company (something I try mightily to avoid,
believe me!!!)
-- 
View this message in context: 
http://www.nabble.com/Multiple-constraints-per-table--tp18209309p18230807.html
Sent from the SQLite mailing list archive at Nabble.com.
