Re: [sqlite] blob incremental i/o constraints

2008-02-13 Thread drh
"Igor Sereda" <[EMAIL PROTECTED]> wrote:
> The questions around the sqlite3_blob_xxx methods that Roger brought up a couple
> of months ago are very interesting to me too, and I haven't seen any reply
> to Roger's message. (Roger - do you have any update?)
> 
> As far as I can gather from the cited description of the problem, we should
> manually acquire a SHARED db lock before reading a blob, and a RESERVED lock
> before writing a blob. Can someone confirm that?

Transactions and locks are created and released automatically
by sqlite3_blob_open() and sqlite3_blob_close().  No other
interaction by the application is required.
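
For reference, here is a minimal sketch of an incremental read under that
model; the application never acquires a lock itself.  The table and column
names (docs.body) are just for illustration:

    /* Read a blob in chunks.  sqlite3_blob_open() starts the read
    ** transaction and takes the lock; sqlite3_blob_close() releases both. */
    #include "sqlite3.h"

    int read_blob(sqlite3 *db, sqlite3_int64 rowid){
      sqlite3_blob *pBlob;
      char buf[4096];
      int nTotal, iOff, n, rc, rc2;

      rc = sqlite3_blob_open(db, "main", "docs", "body", rowid,
                             0 /* read-only */, &pBlob);
      if( rc!=SQLITE_OK ) return rc;

      nTotal = sqlite3_blob_bytes(pBlob);
      for(iOff=0; iOff<nTotal && rc==SQLITE_OK; iOff+=n){
        n = nTotal-iOff < (int)sizeof(buf) ? nTotal-iOff : (int)sizeof(buf);
        rc = sqlite3_blob_read(pBlob, buf, n, iOff);
        /* process buf[0..n) here */
      }

      rc2 = sqlite3_blob_close(pBlob);  /* ends the transaction, drops the lock */
      return rc!=SQLITE_OK ? rc : rc2;
    }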

> 
> I guess I should have run some experiments
> 

That sounds like a good plan...

--
D. Richard Hipp <[EMAIL PROTECTED]>



Re: [sqlite] blob incremental i/o constraints

2008-02-13 Thread Igor Sereda
The questions around the sqlite3_blob_xxx methods that Roger brought up a couple
of months ago are very interesting to me too, and I haven't seen any reply
to Roger's message. (Roger - do you have any update?)

As far as I can gather from the cited description of the problem, we should
manually acquire a SHARED db lock before reading a blob, and a RESERVED lock
before writing a blob. Can someone confirm that?

Also, how blob i/o operations deal with the transactional context is indeed
not very clear. Based on a few words in the description of sqlite3_blob_close,
I assume blob i/o is transactional, but this also brings up a more subtle point:

What happens with the cache when a really large BLOB (larger than the cache
size) is read or written? I guess I should have run some experiments, and I
probably will, but it seems quite likely that everything will be paged out
of the cache by the blob. Considering that the reason for incremental i/o is
(supposedly) to avoid lots of memory allocation, we can expect that every
BLOB going through sqlite3_blob_read/write will be quite large, so the cache
will be thrashing. Please tell me it is not so :)

What I'd suggest is perhaps to check whether there is an EXCLUSIVE lock and
write directly to the filesystem, as happens when a cache spill occurs.
Otherwise, I'm thinking of creating a separate connection with a very small
cache specifically to handle blob i/o, along the lines of the sketch below.
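
A rough sketch of that second idea (the database path and the 16-page cache
size are arbitrary, for illustration only):

    /* Open a second connection to the same database file and shrink its
    ** page cache, so blob traffic through this connection can only thrash
    ** these few pages rather than the main connection's cache. */
    #include "sqlite3.h"

    int open_blob_connection(const char *zPath, sqlite3 **ppDb){
      int rc = sqlite3_open(zPath, ppDb);
      if( rc!=SQLITE_OK ) return rc;
      return sqlite3_exec(*ppDb, "PRAGMA cache_size=16;", 0, 0, 0);
    }

Blob handles would then be opened with sqlite3_blob_open() on this second
connection, and only its small cache would be churned by the large reads
and writes.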

Thanks!
Igor



> -----Original Message-----
> From: Roger Binns [mailto:[EMAIL PROTECTED]
> Sent: Sunday, December 02, 2007 10:16 AM
> To: sqlite-users@sqlite.org
> Subject: [sqlite] blob incremental i/o constraints
> 
> There isn't any documentation or wiki page about the blob i/o
> functionality beyond the API reference.  I am curious whether the
> behaviour I am seeing was intentional.
> 
> Open a blob for reading.  Insert into the same table (not affecting the
> blob or its row).  On calling sqlite3_blob_read, I get SQLITE_ABORT
> returned and the errmsg is "not an error".  This also happens if I do a
> few reads first and then the insert: the next read fails the same way.
> (This also happens with blobs open for writing, doing a read or write
> after the table is changed.)
> 
> Open a blob for reading.  Call sqlite3_blob_write, and get back
> SQLITE_READONLY.  On next calling sqlite3_blob_close, I also get back
> SQLITE_READONLY.  If sqlite3_blob_close is going to return any errors
> from prior reads/writes, then it should also do so for the case in the
> previous paragraph (which it doesn't).
> 
> You can delete (via SQL) a blob that is open.  The next
> sqlite3_blob_read on the blob gets SQLITE_ABORT.  (Heck, you can even
> change the contents via SQL.)
> 
> It isn't stated anywhere what the transactional nature of blob i/o is.
> For example, is data committed after each write, or does it happen on
> the close?
> 
> Roger




[sqlite] blob incremental i/o constraints

2007-12-01 Thread Roger Binns
There isn't any documentation or wiki page about the blob i/o
functionality beyond the API reference.  I am curious whether the
behaviour I am seeing was intentional.

Open a blob for reading.  Insert into the same table (not affecting the
blob or its row).  On calling sqlite3_blob_read, I get SQLITE_ABORT
returned and the errmsg is "not an error".  This also happens if I do a
few reads first and then the insert: the next read fails the same way.
(This also happens with blobs open for writing, doing a read or write
after the table is changed.)
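
An illustrative call sequence (the table t(id INTEGER PRIMARY KEY, data BLOB)
and its rowid are made up; the results in the comments are the ones just
described):

    #include "sqlite3.h"

    /* db is an open connection; table t with rowid 1 is assumed to exist. */
    static void show_expired_handle(sqlite3 *db){
      sqlite3_blob *pBlob;
      char c;
      int rc = sqlite3_blob_open(db, "main", "t", "data", 1, 0, &pBlob);
      /* rc==SQLITE_OK here */

      /* Insert into the same table; the blob's own row is untouched. */
      sqlite3_exec(db, "INSERT INTO t(data) VALUES (zeroblob(10))", 0, 0, 0);

      rc = sqlite3_blob_read(pBlob, &c, 1, 0);
      /* observed: SQLITE_ABORT, and sqlite3_errmsg(db) says "not an error" */

      sqlite3_blob_close(pBlob);
    }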

Open a blob for reading.  Call sqlite3_blob_write, and get back
SQLITE_READONLY.  On next calling sqlite3_blob_close, I also get back
SQLITE_READONLY.  If sqlite3_blob_close is going to return any errors
from prior reads/writes, then it should also do so for the case in the
previous paragraph (which it doesn't).
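
And the second observation as a sketch (same assumed table t as above):

    /* The handle is opened read-only (flags==0); then a write is attempted. */
    static void show_readonly(sqlite3 *db){
      sqlite3_blob *pBlob;
      int rc = sqlite3_blob_open(db, "main", "t", "data", 1,
                                 0 /* read-only */, &pBlob);
      if( rc!=SQLITE_OK ) return;
      rc = sqlite3_blob_write(pBlob, "x", 1, 0); /* observed: SQLITE_READONLY */
      rc = sqlite3_blob_close(pBlob);            /* observed: SQLITE_READONLY again */
    }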

You can delete (via SQL) a blob that is open.  The next
sqlite3_blob_read on the blob gets SQLITE_ABORT.  (Heck, you can even
change the contents via SQL.)

It isn't stated anywhere what the transactional nature of blob i/o is.
For example, is data committed after each write, or does it happen on
the close?

Roger