Re: [sqlite] disable file locking mechanism over the network

2019-09-30 Thread Rowan Worth
On Sat, 28 Sep 2019 at 06:59, Roman Fleysher 
wrote:

> ( somewhat related to Re: [sqlite] Safe to use SQLite over a sketchy
> network?)
>
> Dear SQLiters,
>
> I am using SQLite over the GPFS distributed file system. I was told it
> faithfully implements file locking. I never experienced corruption. But it is
> slow, in the sense that when many jobs from many compute nodes try to access
> the same database, things slow down considerably.
>
> I suspect, from the point of view of the file system, there is a lot of
> pressure to develop fast lock acquisition and slow release. I think this
> is because the key to a fast network file system in general is making it as
> independent as possible, thus distributed. Avoid bottlenecks. But locking
> is by definition a bottleneck.
>


> From experience, it seems that because SQLite still requests file locks,
> the performance increase is not that big. I wonder if there is a way to
> disable SQLite's internal file locking mechanism.


In my experience with SQLite over network file systems, the biggest
bottleneck has nothing to do with locking and everything to do with
synchronous I/O, journalling, and the single-writer model.

Disabling locking in your scenario is _guaranteed_ to break your jobs.
SQLite on the compute nodes will at some point read a half-committed change
to the database and return SQLITE_CORRUPT (best case), or silently compute
a garbage result (worst case). Unless, that is, the database in question is
read-only and never updated. But if that were the case there would be no
scaling issue with the number of compute nodes, as read-locks do not
conflict with each other.

The best thing you can do to improve concurrency for SQLite over a
networked file system is to carefully manage your transaction lifetimes.
There are several patterns to avoid:

1. Lots of small write transactions
2. Transactions which are open for a long time
3. Write transactions which do a lot of work before taking the RESERVED lock

All of these apply to SQLite on a local filesystem too, but network latency
magnifies the effects. To elaborate quickly: synchronous I/O and data being
written twice¹ impose a significant constant-time cost per transaction,
which is why lots of small writes are not efficient (a sketch follows the
footnote below). Avoiding long-running transactions applies to both read and
write transactions, because during a DB update there is a period where the
writer needs exclusive access to the DB. If a long-running read transaction
is active at that point, the writer must wait for it to finish, and the
effect is that _every_ node wanting to access the DB has to wait for this
one read transaction.

¹once to the journal, once to the main DB
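
A minimal sketch of pattern 1 versus the batched alternative (not from the
original post; the `results` table and its columns are hypothetical):

    -- one implicit transaction per statement: pays the journal write + sync N times
    --   INSERT INTO results(job_id, value) VALUES (?, ?);
    -- batched: one journal write and one sync for the whole group
    BEGIN;
    INSERT INTO results(job_id, value) VALUES (101, 0.25);
    INSERT INTO results(job_id, value) VALUES (102, 0.50);
    INSERT INTO results(job_id, value) VALUES (103, 0.75);
    COMMIT;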

Somewhat related is a transaction which reads a bunch of data before doing
any DB updates - the problem here is that another node may take the
RESERVED lock during the read phase. SQLite only supports a single writer
at a time, so when the transaction tries to move on to its write phase it
will be unable to get the lock; you end up having to abort it and redo the
read phase. This pattern is avoided by phrasing the transaction with "BEGIN
IMMEDIATE", which causes SQLite to take the RESERVED lock at the start
of the transaction.
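
A minimal sketch of that read-then-write shape (not from the original post;
the `stats` table is hypothetical):

    -- with a plain deferred BEGIN, another node could take RESERVED during the
    -- SELECT below, and the UPDATE would then fail with SQLITE_BUSY, forcing a
    -- rollback and a redo of the read phase
    BEGIN IMMEDIATE;                                     -- take RESERVED up front
    SELECT total FROM stats WHERE node = 17;             -- read phase
    UPDATE stats SET total = total + 1 WHERE node = 17;  -- write phase
    COMMIT;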


I think WAL journal mode can improve concurrency, but of course it doesn't
work in a network context. Anyway, trying to shortcut SQLite's mechanisms
is almost certainly the wrong approach. If you don't need
locking then you don't need consistency, and you should consider whether a
DB is the right tool or whether regular files would suffice.

-Rowan
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Lookup join

2019-09-30 Thread Keith Medcalf
Or this, which is even simpler:

   select t1.key,
          max(t2.rev) as maxrev,
          t2.data
     from t1
left join t2
       on t1.key == t2.key
      and rev < :rev
 group by t1.key
 order by t1.key;
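
(Editorial note, not part of the original reply: the bare t2.data column above
is well defined because SQLite documents that, with a single max() aggregate,
bare columns are taken from the row that produced the maximum. A tiny sketch
with hypothetical data:)

    CREATE TABLE t1(key INTEGER PRIMARY KEY);
    CREATE TABLE t2(key INTEGER, rev INTEGER, data TEXT);
    INSERT INTO t1(key) VALUES (1);
    INSERT INTO t2(key, rev, data) VALUES (1, 1, 'old'), (1, 2, 'new'), (1, 7, 'future');
    -- with :rev = 5 the query above returns:  1 | 2 | new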

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.

>-Original Message-
>From: sqlite-users  On
>Behalf Of Keith Medcalf
>Sent: Monday, 30 September, 2019 19:31
>To: SQLite mailing list 
>Subject: Re: [sqlite] Lookup join
>
>
>You mean something like this:
>
> select key,
>        maxrev,
>        data
>   from (
>         select key,
>                null as maxrev,
>                null as data
>           from t1
>          where key not in (select key
>                              from t2
>                             where rev < :rev)
>          union all
>         select t1.key,
>                max(rev) as maxrev,
>                data
>           from t1, t2
>          where t1.key == t2.key
>            and rev < :rev
>          group by t2.key
>        )
>  order by key;
>
>t1 should have an index on key
>t2 should have an index on key, rev
>
>--
>The fact that there's a Highway to Hell but only a Stairway to Heaven
>says a lot about anticipated traffic volume.
>
>>-Original Message-
>>From: sqlite-users  On
>>Behalf Of Fredrik Larsen
>>Sent: Monday, 30 September, 2019 04:12
>>To: SQLite mailing list 
>>Subject: [sqlite] Lookup join
>>
>>Consider query below;
>>
>>SELECT key
>>FROM t1
>>LEFT JOIN (
>>  SELECT key,max(rev),data
>>  FROM t2
>>  WHERE rev < ?
>>  GROUP BY key
>>) USING (key)
>>ORDER BY key ?
>>LIMIT ?
>>
>>In the above query SQLite will materialize the t2 sub-query and then start
>>working on the outer query. I have a lot of data in t2, so this consumes a
>>lot of time.
>>
>>To overcome this I perform the above query manually in two stages: I fetch
>>the t1 data, then for each row I do a lookup and join manually. This is very
>>fast but makes it hard to reuse this base query in other queries.
>>
>>So my question is: can SQLite do lookup-type joins, like I do manually in
>>code, to avoid the overhead of materializing the full t2 query over all
>>keys, when only a fraction of that work is needed?
>>
>>I suspect the answer is no. If so, maybe this is solvable through a custom
>>virtual table? I have looked at ext/misc/eval.c, and this custom function
>>could be used if a function were allowed to return multiple columns.
>>
>>Fredrik
>>___
>>sqlite-users mailing list
>>sqlite-users@mailinglists.sqlite.org
>>http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>
>
>
>___
>sqlite-users mailing list
>sqlite-users@mailinglists.sqlite.org
>http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users



___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Lookup join

2019-09-30 Thread Keith Medcalf

You mean something like this:

 select key,
        maxrev,
        data
   from (
         select key,
                null as maxrev,
                null as data
           from t1
          where key not in (select key
                              from t2
                             where rev < :rev)
          union all
         select t1.key,
                max(rev) as maxrev,
                data
           from t1, t2
          where t1.key == t2.key
            and rev < :rev
          group by t2.key
        )
  order by key;

t1 should have an index on key
t2 should have an index on key, rev
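
As DDL, those suggested indexes would look like this (a sketch; column types
and the existing schema are assumed):

    CREATE INDEX IF NOT EXISTS t1_key     ON t1(key);
    CREATE INDEX IF NOT EXISTS t2_key_rev ON t2(key, rev);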

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.

>-Original Message-
>From: sqlite-users  On
>Behalf Of Fredrik Larsen
>Sent: Monday, 30 September, 2019 04:12
>To: SQLite mailing list 
>Subject: [sqlite] Lookup join
>
>Consider query below;
>
>SELECT key
>FROM t1
>LEFT JOIN (
>  SELECT key,max(rev),data
>  FROM t2
>  WHERE rev < ?
>  GROUP BY key
>) USING (key)
>ORDER BY key ?
>LIMIT ?
>
>In the above query SQLite will materialize the t2 sub-query and then start
>working on the outer query. I have a lot of data in t2, so this consumes a
>lot of time.
>
>To overcome this I perform the above query manually in two stages: I fetch
>the t1 data, then for each row I do a lookup and join manually. This is very
>fast but makes it hard to reuse this base query in other queries.
>
>So my question is: can SQLite do lookup-type joins, like I do manually in
>code, to avoid the overhead of materializing the full t2 query over all
>keys, when only a fraction of that work is needed?
>
>I suspect the answer is no. If so, maybe this is solvable through a custom
>virtual table? I have looked at ext/misc/eval.c, and this custom function
>could be used if a function were allowed to return multiple columns.
>
>Fredrik
>___
>sqlite-users mailing list
>sqlite-users@mailinglists.sqlite.org
>http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users



___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Calling a java function from a trigger

2019-09-30 Thread Jens Alfke


> On Sep 29, 2019, at 5:05 PM, Faria  wrote:
> 
> Is it possible to call a java function from a trigger in SQLite?

Yes, but you'd have to implement a C function that calls the Java function via 
JNI, then register the C function with SQLite, then call that function in your 
trigger.
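
A sketch of the trigger side only (not from the original reply; it assumes a
scalar function my_java_hook() has already been registered on the connection
via sqlite3_create_function() and forwards to Java through JNI; the table and
columns are hypothetical):

    CREATE TRIGGER orders_after_insert AFTER INSERT ON orders
    BEGIN
      -- SELECT is the usual way to invoke a function for its side effects in a trigger
      SELECT my_java_hook(NEW.id, NEW.status);
    END;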

—Jens
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Calling a java function from a trigger

2019-09-30 Thread Faria
Hi,

Is it possible to call a java function from a trigger in SQLite? If so, is
there an example showing how to do it?

Thanks,
Faria



--
Sent from: http://sqlite.1065341.n5.nabble.com/
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Lookup join

2019-09-30 Thread Fredrik Larsen
Consider query below;

SELECT key
FROM t1
LEFT JOIN (
  SELECT key,max(rev),data
  FROM t2
  WHERE rev < ?
  GROUP BY key
) USING (key)
ORDER BY key ?
LIMIT ?

In the above query SQLite will materialize the t2 sub-query and then start
working on the outer query. I have a lot of data in t2, so this consumes a
lot of time.

To overcome this I perform the above query manually in two stages: I fetch
the t1 data, then for each row I do a lookup and join manually. This is very
fast but makes it hard to reuse this base query in other queries.

So my question is: can SQLite do lookup-type joins, like I do manually in
code, to avoid the overhead of materializing the full t2 query over all
keys, when only a fraction of that work is needed?

I suspect the answer is no. If so, maybe this is solvable through a custom
virtual table? I have looked at ext/misc/eval.c, and this custom function
could be used if a function were allowed to return multiple columns.

Fredrik
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] PRAGMA table_info fails with "no such tokenizer"

2019-09-30 Thread Gwendal Roué
Hello,

The fts5 table in your database was created with a custom FTS5 tokenizer
named "TrackerTokenizer" (see
https://www.sqlite.org/fts5.html#custom_tokenizers).

Custom FTS5 tokenizers must be registered in each SQLite connection to the
database file, or you get an error like "no such tokenizer:
TrackerTokenizer".

You thus have two options:

1. Find out how your Python SQLite bindings support custom FTS5 tokenizers,
and register a tokenizer named "TrackerTokenizer" on your connection. For
example, check https://pypi.org/project/sqlitefts/
2. Ask for a database file that does not use any custom FTS5 tokenizer.
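
For contrast, an FTS5 table declared with a built-in tokenizer needs no
registration and opens fine from a stock connection (a minimal sketch,
assuming FTS5 is compiled in):

    CREATE VIRTUAL TABLE docs USING fts5(body, tokenize = 'unicode61');
    INSERT INTO docs(body) VALUES ('the quick brown fox');
    SELECT * FROM docs WHERE docs MATCH 'quick';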

Gwendal Roué

On Sun, Sep 29, 2019 at 12:24 PM Anatoli Babenia 
wrote:

> Python 3.7.4 (default, Jul  9 2019, 16:32:37)
> [GCC 9.1.1 20190503 (Red Hat 9.1.1-1)] on linux
> Type "help", "copyright", "credits" or "license" for more information.
> >>> import sqlite3
> >>> conn = sqlite3.connect("/home/anatoli/.cache/tracker/meta.db")
> >>> cursor = conn.cursor()
> >>> cursor.execute('PRAGMA main.table_info("fts5")')
> Traceback (most recent call last):
>   File "", line 1, in 
> sqlite3.OperationalError: no such tokenizer: TrackerTokenizer
>
>
> Running `sqlite3 meta.db .dump` works without errors. The `fts5` table is
> referenced in the dump.
>
> INSERT INTO
> sqlite_master(type,name,tbl_name,rootpage,sql)VALUES('table','fts5','fts5',0,'CREATE
> VIRTUAL TABLE fts5 USING fts5(content="fts_view", "nfo:fileName",
> "nco:title", "nfo:genre", "nmm:genre", "mtp:creator",
> "nco:nameFamily", "nco:nameGiven", "nco:nameAdditional",
> "nco:phoneNumber", "nao:prefLabel", "nao:description", "nco:fullname",
> "nco:nickname", "nco:contactGroupName", "nfo:tableOfContents",
> "ncal:comment", "ncal:location", "ncal:summary", "ncal:contact",
> "ncal:description", "nmo:messageSubject", "nmm:albumTitle",
> "nmm:artistName", "nco:department", "nco:role", "nco:note",
> "nie:keyword", "nco:imID", "nco:imNickname", "nco:emailAddress",
> "nmm:category", "nfo:fontFamily", "nco:region", "nco:country",
> "nco:extendedAddress", "nco:streetAddress", "nco:postalcode",
> "nco:locality", "nco:county", "nco:district", "nco:pobox",
> "nie:title", "nie:subject", "nie:plainTextContent", "nie:description",
> "nie:comment", "nid3:title", "nid3:albumTitle", "nid3:contentType",
> tokenize=TrackerTokenizer)');
>
>
> --
> Anatoli Babenia
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Conflict between snapshots and checkpoints

2019-09-30 Thread Dominique Devienne
On Mon, Sep 30, 2019 at 2:07 PM Keith Medcalf  wrote:

> On Monday, 30 September, 2019 02:06, Dominique Devienne <
> ddevie...@gmail.com> wrote:
> >On Sun, Sep 29, 2019 at 2:13 PM Keith Medcalf 
> wrote:
> >> On Sunday, 29 September, 2019 01:28, Gwendal Roué <
> gwendal.r...@gmail.com> wrote:
>
> >> >Those N reader connections allow concurrent database reads. Those
> >> "reads" are generally wrapped in a deferred transaction which provides
> >> snapshot isolation.
>
> >> No, it provides REPEATABLE-READ isolation.  There is no actual
> >> "snapshot" taken and no snapshot exists.
>
> > You are merely not seeing data written to the WAL transaction log at a
> > point-in-time subsequent to the point in time at which you commenced
> > the "repeatable-read".
>
> >I don't see where you are going with this Keith.
> >Repeatable-reads "in the past" *are* snapshots IMHO.
>
> Generally no.  "Snapshot Isolation" does not really exist, though some
> RDBMS have created it to permit more opportunistic updates.  Reads are at
> the Repeatable-Read isolation level (both for WAL and DELETE journal
> modes), and writes are Serialized.  Snapshot Isolation is an invention of
> the MVCC folks to theoretically permit greater update concurrency at the
> expense of serializable isolation and introduces anomalies into the
> database read and write processing that cannot occur when the updates are
> serialized, and which generally requires the application programmer to take
> extra steps to ensure database consistency.
>
> >The WAL file *does* contain enough information combined with the main
> >database file pages, to logically *and* physically represent a "snapshot"
> >of the DB at that point-in-time.
>
> For the purposes of reading only yes, I suppose you could call it a
> "snapshot", except that it isn't.  It is just a point-in-time
> repeatable-read.  You can only upgrade a transaction from read to write if
> you are holding the "top" snapshot (that is, you must be seeing the entire
> database, not a point-in-time version of it).
>
> >So not calling it a "snapshot" is a stretch at the very least. What is a
> "snapshot"
> >according to you, if that's not it?
>
> Snapshot Isolation is implemented by a bunch of different databases that
> do not conform to the SQL Standard and it introduces anomalies into the
> update process that cannot be introduced when using Serializable
> Isolation.
>
> https://en.wikipedia.org/wiki/Isolation_(database_systems)
> https://en.wikipedia.org/wiki/Snapshot_isolation
>
> >And also, why do you think Richard, who knows a thing or two about
> >databases, called these APIs *snapshot* then?
>
> Because it is a convenient descriptor, perhaps?  They may in fact be
> considered to be a snapshot of the database as it existed at some point in
> the past (without full view of all committed transactions) however only the
> "top" snapshot, the one that has a view of all committed transactions is
> permitted to update/write to the database.
>
> >I'm genuinely curious here. I think I disagree with you, but most time I
> >do, I'm wrong, so I'd like to understand, really. --DD
>
> The isolation is either repeatable-read for read transactions, or
> serializable for writes.  It is not Snapshot Isolation.
> So although one may consider that what you are looking at is a "snapshot"
> of the database that existed at a particular point-in-time, it should not
> be confused with "snapshot isolation" which is an entirely different beast
> altogether.
>

So I guess our main difference here is that I have no qualms at all with a
point-in-time *read-only* repeatable-read transaction being called a
"Snapshot".
MVCC (which to me is synonymous with snapshots) is more about
read-consistency across statements (i.e. a read transaction) that does
*not* prevent writes.
Using snapshots, I can parallelize access to several tables across
connections (and threads), ensuring read-consistency in several separate
transactions and connections.
I used to do that in Oracle, and I'm glad that I can try to do it in SQLite
too now. There are caveats of course, like controlling checkpointing, but I
can live with that. --DD

PS: Note that I never used "Snapshot *Isolation*" myself. For me, Snapshot
= point-in-time read-consistency.
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Conflict between snapshots and checkpoints

2019-09-30 Thread Keith Medcalf
On Monday, 30 September, 2019 02:06, Dominique Devienne  
wrote:

>On Sun, Sep 29, 2019 at 2:13 PM Keith Medcalf  wrote:

>> On Sunday, 29 September, 2019 01:28, Gwendal Roué  
>> wrote:

>> >Those N reader connections allow concurrent database reads. Those
>> "reads" are generally wrapped in a deferred transaction which provides 
>> snapshot isolation.

>> No, it provides REPEATABLE-READ isolation.  There is no actual
>> "snapshot" taken and no snapshot exists.

> You are merely not seeing data written to the WAL transaction log at a
> point-in-time subsequent to the point in time at which you commenced
> the "repeatable-read".

>I don't see where you are going with this Keith.
>Repeatable-reads "in the past" *are* snapshots IMHO.

Generally no.  "Snapshot Isolation" does not really exist, though some RDBMS 
have created it to permit more opportunistic updates.  Reads are at the 
Repeatable-Read isolation level (both for WAL and DELETE journal modes), and 
writes are Serialized.  Snapshot Isolation is an invention of the MVCC folks to 
theoretically permit greater update concurrency at the expense of serializable 
isolation and introduces anomalies into the database read and write processing 
that cannot occur when the updates are serialized, and which generally requires 
the application programmer to take extra steps to ensure database consistency.

>The WAL file *does* contain enough information combined with the main
>database file pages, to logically *and* physically represent a "snapshot" 
>of the DB at that point-in-time. 

For the purposes of reading only yes, I suppose you could call it a "snapshot", 
except that it isn't.  It is just a point-in-time repeatable-read.  You can 
only upgrade a transaction from read to write if you are holding the "top" 
snapshot (that is, you must be seeing the entire database, not a point-in-time 
version of it).  

>So not calling it a "snapshot" is a stretch at the very least. What is a 
>"snapshot"
>according to you, if that's not it?

Snapshot Isolation is implemented by a bunch of different databases that do not 
conform to the SQL Standard and it introduces anomalies into the update process 
that cannot be introduced when using Serializable Isolation.  

https://en.wikipedia.org/wiki/Isolation_(database_systems)
https://en.wikipedia.org/wiki/Snapshot_isolation

>And also, why do you think Richard, who knows a thing or two about
>databases, called these APIs *snapshot* then?

Because it is a convenient descriptor, perhaps?  They may in fact be considered 
to be a snapshot of the database as it existed at some point in the past 
(without full view of all committed transactions) however only the "top" 
snapshot, the one that has a view of all committed transactions is permitted to 
update/write to the database.

>I'm genuinely curious here. I think I disagree with you, but most time I
>do, I'm wrong, so I'd like to understand, really. --DD

The isolation is either repeatable-read for read transactions, or serializable 
for writes.  It is not Snapshot Isolation.

So although one may consider that what you are looking at is a "snapshot" of 
the database that existed at a particular point-in-time, it should not be 
confused with "snapshot isolation" which is an entirely different beast 
altogether.

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.



___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Conflict between snapshots and checkpoints

2019-09-30 Thread Richard Damon
On 9/30/19 4:28 AM, Gwendal Roué wrote:
> According to
> http://dbmsmusings.blogspot.com/2019/06/correctness-anomalies-under.html,
> SNAPSHOT ISOLATION is stronger than REPEATABLE READ, in that it prevents
> "phantom reads" (
> http://dbmsmusings.blogspot.com/2019/05/introduction-to-transaction-isolation.html).
> I think SQLite prevents phantom reads, and so we actually get actual
> SNAPSHOT ISOLATION (as written in https://www.sqlite.org/isolation.html).

Based on a bit of research with things like
https://en.wikipedia.org/wiki/Snapshot_isolation

Snapshot Isolation appears to be a Term of Art, not a Standardized Term,
so it can be forgiven if it isn't used in the documentation. Also, it
seems to imply non-serialized writes, which SQLite does NOT provide, so it
isn't really applicable here.

-- 
Richard Damon

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Conflict between snapshots and checkpoints

2019-09-30 Thread Gwendal Roué
According to
http://dbmsmusings.blogspot.com/2019/06/correctness-anomalies-under.html,
SNAPSHOT ISOLATION is stronger than REPEATABLE READ, in that it prevents
"phantom reads" (
http://dbmsmusings.blogspot.com/2019/05/introduction-to-transaction-isolation.html).
I think SQLite prevents phantom reads, and so we actually get
SNAPSHOT ISOLATION (as written in https://www.sqlite.org/isolation.html).
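
A minimal sketch of that phantom-read test in WAL mode (not from the original
post; two connections and a hypothetical table t):

    -- Connection A (reader):
    BEGIN;                                            -- deferred; view fixed at first read
    SELECT count(*) FROM t WHERE x BETWEEN 1 AND 10;  -- say this returns 3
    -- Connection B (writer) now commits: INSERT INTO t(x) VALUES (5);
    SELECT count(*) FROM t WHERE x BETWEEN 1 AND 10;  -- still 3: no phantom row appears
    COMMIT;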

On Mon, Sep 30, 2019 at 10:06 AM Dominique Devienne 
wrote:

> On Sun, Sep 29, 2019 at 2:13 PM Keith Medcalf  wrote:
>
> > On Sunday, 29 September, 2019 01:28, Gwendal Roué <
> gwendal.r...@gmail.com>
> > wrote:
> > >Those N reader connections allow concurrent database reads. Those
> "reads"
> > are
> > >generally wrapped in a deferred transaction which provides snapshot
> > >isolation.
> >
> > No, it provides REPEATABLE-READ isolation.  There is no actual "snapshot"
> > taken and no snapshot exists.
>
> You are merely not seeing data written to the WAL transaction log at a
> > point-in-time subsequent to the point in time at which you commenced the
> > "repeatable-read".
> >
>
> I don't see where you are going with this Keith.
> Repeatable-reads "in the past" *are* snapshots IMHO.
>
> The WAL file *does* contain enough information combined with the main
> database file pages,
> to logically *and* physically represent a "snapshot" of the DB at that
> point-in-time. So not calling
> it a "snapshot" is a stretch at the very least. What is a "snapshot"
> according to you, if that's not it?
>
> And also, why do you think Richard, who knows a thing or two about
> databases, called these APIs *snapshot* then?
>
> I'm genuinely curious here. I think I disagree with you, but most time I
> do, I'm wrong, so I'd like to understand, really. --DD
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Conflict between snapshots and checkpoints

2019-09-30 Thread Dominique Devienne
On Sun, Sep 29, 2019 at 2:13 PM Keith Medcalf  wrote:

> On Sunday, 29 September, 2019 01:28, Gwendal Roué 
> wrote:
> >Those N reader connections allow concurrent database reads. Those "reads"
> are
> >generally wrapped in a deferred transaction which provides snapshot
> >isolation.
>
> No, it provides REPEATABLE-READ isolation.  There is no actual "snapshot"
> taken and no snapshot exists.

You are merely not seeing data written to the WAL transaction log at a
> point-in-time subsequent to the point in time at which you commenced the
> "repeatable-read".
>

I don't see where you are going with this Keith.
Repeatable-reads "in the past" *are* snapshots IMHO.

The WAL file *does* contain enough information combined with the main
database file pages,
to logically *and* physically represent a "snapshot" of the DB at that
point-in-time. So not calling
it a "snapshot" is a stretch at the very least. What is a "snapshot"
according to you, if that's not it?

And also, why do you think Richard, who knows a thing or two about
databases, called these APIs *snapshot* then?

I'm genuinely curious here. I think I disagree with you, but most time I
do, I'm wrong, so I'd like to understand, really. --DD
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users