Re: [sqlite] problem with blobs (perl code)

2005-11-30 Thread Nathan Kurz
On Wed, Nov 30, 2005 at 05:10:19PM -0600, Jim Dodgen wrote:
> > What do you get back (using the command-line client) when you
> > ask for LENGTH(x) or QUOTE(x) instead of just the column x?
> 
> sqlite> select length(val) from foo;
> 3
> sqlite> select quote(val) from foo;
> 'MZP'
> 
> strange, reports a length of 3 but the database is the size of the
> file i put into the blob.  other than the record in foo the database
> is fresh and empty.

Although the problem is definitely with Perl here, SQLite's response
_is_ pretty odd.  The whole value of the BLOB gets inserted (since the
length is set correctly by Perl), but since it is inserted as type
TEXT, the retrieval (and the length) depend on the first NUL even
though the entire blob is actually entered in the database.  Probably
not quite a bug, but maybe worth a spare thought.
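
As an aside, the NUL behavior is easy to reproduce outside Perl.  A short
illustration (using Python's stdlib sqlite3 module, not part of the thread's
own code): binding the same payload as TEXT versus BLOB changes what
length() reports, because SQLite's text functions stop at the first NUL.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE foo (nm, val)")

payload = "MZP\x00rest-of-file"  # 16 characters, with a NUL after "MZP"

# Bound as TEXT: the whole value is stored, but length() only counts
# the characters before the first NUL.
conn.execute("INSERT INTO foo VALUES ('as_text', ?)", (payload,))

# Bound as BLOB (bytes): length() counts every byte.
conn.execute("INSERT INTO foo VALUES ('as_blob', ?)", (payload.encode(),))

for nm, n in conn.execute("SELECT nm, length(val) FROM foo"):
    print(nm, n)  # as_text 3, as_blob 16
```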

> I tried one of the workarounds noted in
> http://rt.cpan.org/NoAuth/Bug.html?id=14595 which has you force the
> data type to SQL_BLOB. This makes things work!! Hooray!!

I'm glad something worked for you!  In some ways that is probably a
better solution than my patch, since it is more explicit.

D. Richard Hipp <[EMAIL PROTECTED]> writes:
> Matt Sergeant monitors this mailing list and should see your post.

Thanks!

--nate



Re: [sqlite] problem with blobs (perl code) workaround

2005-11-30 Thread Jim Dodgen
thanks for the help, 

I tried one of the workarounds noted in 
http://rt.cpan.org/NoAuth/Bug.html?id=14595 
which has you force the data type to SQL_BLOB.
This makes things work!! Hooray!!

I have included the complete test program below as a reference for others.



--- code snippet ---
use DBI qw(:sql_types);

print "original slurped file size $fn=".length($data)."\n";
{
  my $sth = $dbh->prepare("INSERT or replace INTO foo (nm, val) VALUES (?, ?)");
  $sth->bind_param(1, $fn);
  $sth->bind_param(2, $data, {TYPE => SQL_BLOB});
  $sth->execute();
  $sth->finish;
}


--- complete test program ---
#!/usr/bin/perl -w
#
# jim dodgen 2005
use Carp;
use DBI;
use DBI qw(:sql_types);
use strict;

my %attr = (PrintError => 1,
            RaiseError => 0,
            AutoCommit => 0);
my $dbh = DBI->connect("dbi:SQLite:slurp.db","","",\%attr);
if (!defined($dbh))
{
  croak("could not connect to db");
}
{
  my $sth = $dbh->prepare("drop table foo");
  if (defined $sth)
  {
$sth->execute();
$sth->finish;
  }
}
{
  my $sth = $dbh->prepare("create table foo (nm, val, primary key (nm))");
  $sth->execute();
  $sth->finish;
}

undef $/;         # slurp mode
my ($fn) = @ARGV; # name of big file
open (IN, $fn) or croak("could not open $fn");
binmode IN;       # read the file as raw bytes
my $data = <IN>;  # slurp complete file into variable
close IN;

print "original slurped file size $fn=".length($data)."\n";
{
  my $sth = $dbh->prepare("INSERT or replace INTO foo (nm, val) VALUES (?, ?)");
  $sth->bind_param(1, $fn);
  $sth->bind_param(2, $data, {TYPE => SQL_BLOB});
  $sth->execute();
  $sth->finish;
}
undef $data; # JUST TO FREE SOME SPACE

my $max = $dbh->selectrow_array("SELECT MAX(LENGTH(val)) FROM foo");
$dbh->{LongReadLen} = $max+10;
print "largest BLOB $max\n";

{
  my $sth = $dbh->prepare("select nm, val from foo where nm = ?");
  my $stat = $sth->execute($fn);
  my ($nm, $out) = $sth->fetchrow_array;
  $sth->finish;
  
  print "size of $nm BLOB returned from query ".length($out)."\n";
  open (NEW1, ">out_".$fn);
  binmode NEW1;
  print NEW1 $out;
  close NEW1;
}
$dbh->commit;
$dbh->disconnect;



[sqlite] Feature request

2005-11-30 Thread Cariotoglou Mike
Hi all. I would like to propose an enhancement in sqlite. I am not sure 
whether the issue has come up before; if it has, and has been shot down, 
please let me know.

The concept of an in-memory database would be greatly enhanced if there 
were a mechanism to serialize the internal cache to/from a stream (not 
necessarily a disk stream). In-memory databases are extremely useful for 
a number of applications, but are a PITA to initialize/persist.

One might argue why such a mechanism would be useful. If I need a 
persistent database, you could say, then I would make it a disk-based 
one. Not so. Consider the case where you would like to store the 
contents of an entire "database" as part of another storage structure. 
Consider the case where an entire sqlite database was a field in another 
RDBMS. Consider OO databases. Consider CASE tool implementations. The 
applications would be endless.

I took a look at the ":memory:" implementation, and it seems to me that 
this enhancement would be trivial, consisting of maybe 10-20 lines of C 
code for somebody who understands the internal structures involved. 
Unfortunately, I do not do C, so I cannot do this myself. OTOH, given 
the extreme usefulness of what I propose (IMHO), could you consider this 
as a permanent addition to sqlite?

the design I have in mind would be something like this:

int sqlite3_loadMemDb(sqlite3 *db, reader)
int sqlite3_saveMemDb(sqlite3 *db, writer)

where "reader" and "writer" are function pointers with a signature like:
 int reader(void *mem, int size)

(excuse my attempt at C syntax, this is meant only as a tip)

I suspect that the "load" function might need to know the number of 
pages beforehand, so some kind of overloaded definition of "reader" 
would be required that would return this information. If we agree that 
the idea has merit, the details can be worked out easily.
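
For what it is worth, the effect being asked for (persisting an in-memory
database through a stream) can be approximated today at the SQL level rather
than the page level. A sketch using Python's stdlib sqlite3 module, purely as
an illustration of the round trip, not of the proposed C API:

```python
import sqlite3

# Build a throwaway in-memory database.
mem = sqlite3.connect(":memory:")
mem.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, txt)")
mem.execute("INSERT INTO t (txt) VALUES ('hello')")
mem.commit()

# "Save": dump the entire database as a stream of SQL text.
script = "\n".join(mem.iterdump())

# "Load": replay that stream into a fresh in-memory database.
mem2 = sqlite3.connect(":memory:")
mem2.executescript(script)
print(mem2.execute("SELECT txt FROM t").fetchone())  # ('hello',)
```

This is a textual dump, so it would be slower than the page-level copy
proposed above, but it demonstrates the same save/load effect.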

what does the community, and especially DRH, think about this ?




Re: [sqlite] problem with blobs (perl code)

2005-11-30 Thread drh
Nathan Kurz <[EMAIL PROTECTED]> wrote:
> On Wed, Nov 30, 2005 at 04:36:30PM -0600, Jim Dodgen wrote:
> 
> The Perl interface through DBD-SQLite-1.09 is broken with regard to
> blobs.  It binds the result as text, thus doesn't handle NUL's.  
> 
> I've submitted a patch (http://rt.cpan.org/NoAuth/Bug.html?id=14595),
> but never heard back about it. 
> 
> DRH -- perhaps if you have contact with the maintainer of the Perl
> interface you could pass them on?
> 

Matt Sergeant monitors this mailing list and should see your post.

--
D. Richard Hipp <[EMAIL PROTECTED]>



Re: [sqlite] sqlite3_progress_handler

2005-11-30 Thread drh
Marco Bambini <[EMAIL PROTECTED]> wrote:
> From the documentation: "If the progress callback returns a result  
> other than 0, then the current query is immediately terminated...",  
> so next time I call sqlite3_step what is the error returned?  
> SQLITE_ERROR or SQLITE_ABORT or something else?
> 

SQLITE_ABORT.
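
A minimal illustration of the abort behavior, sketched here with Python's
stdlib sqlite3 wrapper (which surfaces the SQLITE_ABORT/SQLITE_INTERRUPT
result as an OperationalError rather than a raw result code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Call the handler every 10 VM opcodes; a non-zero return value aborts
# the currently running statement.
conn.set_progress_handler(lambda: 1, 10)

aborted = False
try:
    # A query long enough to guarantee the handler fires.
    conn.execute(
        "WITH RECURSIVE c(x) AS "
        "(SELECT 1 UNION ALL SELECT x + 1 FROM c WHERE x < 100000) "
        "SELECT count(*) FROM c"
    ).fetchone()
except sqlite3.OperationalError as e:
    aborted = True
    print("query aborted:", e)
```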

--
D. Richard Hipp <[EMAIL PROTECTED]>



[sqlite] sqlite3_progress_handler

2005-11-30 Thread Marco Bambini
From the documentation: "If the progress callback returns a result  
other than 0, then the current query is immediately terminated...",  
so next time I call sqlite3_step what is the error returned?  
SQLITE_ERROR or SQLITE_ABORT or something else?


Thanks a lot.
---
Marco Bambini
http://www.sqlabs.net
http://www.sqlabs.net/blog/


[sqlite] problem with blobs (perl code)

2005-11-30 Thread Jim Dodgen
I am having a problem with blobs: I seem to insert OK but only get three (3) 
bytes back when I query. Yes, I am setting LongReadLen. Any ideas?

thanks

Jim 

I'm using sqlite 3.2.7 and version 1.09 of the Perl module; code and test 
results are below.

Also, when I use the command-line sqlite3 I only get 3 characters back.


-- test perl code

#!/usr/bin/perl -w
#
use Carp;
use DBI;
use strict;

my %attr = (PrintError => 1,
            RaiseError => 0,
            AutoCommit => 0);
my $dbh = DBI->connect("dbi:SQLite:slurp.db","","",\%attr);
if (!defined($dbh))
{
  croak("could not connect to db");
}
{
  my $sth = $dbh->prepare("drop table foo");
  if (defined $sth)
  {
$sth->execute();
$sth->finish;
  }
}
{
  my $sth = $dbh->prepare("create table foo (nm, val, primary key (nm))");
  $sth->execute();
  $sth->finish;
}

undef $/;         # slurp mode
my ($fn) = @ARGV; # name of big file
open (IN, $fn) or croak("could not open $fn");
binmode IN;       # read the file as raw bytes
my $data = <IN>;  # slurp complete file into variable
close IN;

print "original slurped file size $fn=".length($data)."\n";
{
  my $sth = $dbh->prepare(qq{
  INSERT or replace INTO foo (nm, val) VALUES (?, ?)
});
  $sth->execute($fn, $data);
  $sth->finish;
}

undef $data; # JUST TO FREE SOME SPACE
$dbh->{LongReadLen} = $dbh->selectrow_array(qq{
  SELECT MAX(LENGTH(val))
  FROM foo
  });

{
  my $sth = $dbh->prepare("select nm, val from foo where nm = ?");
  my $stat = $sth->execute($fn);
  my ($nm, $out) = $sth->fetchrow_array;
  $sth->finish;
  
  print "size of $nm BLOB returned from query ".length($out)."\n";
  open (NEW1, ">out_".$fn);
  binmode NEW1;
  print NEW1 $out;
  close NEW1;
}
$dbh->commit;
$dbh->disconnect;



--- results of running slurp_test.pl 

[EMAIL PROTECTED] dev]# time ./slurp_test.pl gorp.foo; ls -l *.foo slurp.db
original slurped file size gorp.foo=4166070
size of gorp.foo BLOB returned from query 3

real  0m0.317s
user  0m0.070s
sys   0m0.150s
-rw-r--r--1 root root  4166070 Nov 30 10:33 gorp.foo
-rw-r--r--1 root root3 Nov 30 14:22 out_gorp.foo
-rw-r--r--1 root root  4185088 Nov 30 14:22 slurp.db





[sqlite] Re: Concurrency handling question

2005-11-30 Thread Igor Tandetnik

Doug Nebeker wrote:

I've written a C++ wrapper for SQLite which has been working great.
Lately though I've started getting what might be deadlocks when running
two threads against the same database.  One thread is a reader and the
other is a writer (at the moment, I'm getting 'database locked' errors,
but that's after many hours of failures so not sure where it started).
It's possible to have two or three readers and/or writers, but not more
than that.

Each thread has its own database handle; however, I keep the database
handle open for many calls (where each 'call' opens a transaction,
executes SQL statements, then closes the transaction).

I've read about concurrency, and it sounds like the best way to work is
just to retry the last call if SQLITE_BUSY is ever returned, but that
doesn't seem to be working for me in this case.


Do you, by any chance, have two or more concurrent writers where at 
least one performs transactions with the following access pattern:


begin
select
modify (insert, update or delete)
end

If you do, you may end up in a "deadlock" situation. The "funny" writer 
starts its read part, acquiring a SHARED lock. At this point, another 
writer attempts to write, acquires RESERVED and PENDING locks and waits 
for the readers to clear. The "funny" writer now reaches its modify 
part, tries to acquire the RESERVED lock and fails (one is already held 
by the other writer). At this point, neither thread can progress - no 
matter how many times they retry, they'll always get SQLITE_BUSY. The 
only way out of this situation is for one of the threads to roll back 
its transaction.


One possible way out is to use "BEGIN IMMEDIATE" to start the writer 
transactions, especially the "funny" mixed ones.
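
The pattern can be seen in a small two-connection sketch (Python's stdlib
sqlite3 module here, not the poster's C++ wrapper; the file name is
illustrative, and timeout=0 makes a busy lock fail immediately instead of
blocking):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

# isolation_level=None puts the wrapper in autocommit mode so we issue
# BEGIN ourselves; timeout=0 turns SQLITE_BUSY into an immediate error
# instead of a blocking retry loop.
w1 = sqlite3.connect(path, isolation_level=None, timeout=0)
w2 = sqlite3.connect(path, isolation_level=None, timeout=0)
w1.execute("CREATE TABLE t (x)")

# The "funny" writer declares its intent up front: BEGIN IMMEDIATE takes
# the RESERVED lock before the read part instead of after it.
w1.execute("BEGIN IMMEDIATE")
w1.execute("SELECT count(*) FROM t")    # read part
w1.execute("INSERT INTO t VALUES (1)")  # modify part

# A second writer cannot slip into the read-to-write window: its own
# BEGIN IMMEDIATE fails up front with SQLITE_BUSY ("database is locked").
blocked = False
try:
    w2.execute("BEGIN IMMEDIATE")
except sqlite3.OperationalError:
    blocked = True

w1.execute("COMMIT")

# Once the first writer commits, the second proceeds normally.
w2.execute("BEGIN IMMEDIATE")
w2.execute("INSERT INTO t VALUES (2)")
w2.execute("COMMIT")
```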


Igor Tandetnik 



Re: [sqlite] Concurrency handling question

2005-11-30 Thread drh
Doug Nebeker <[EMAIL PROTECTED]> wrote:
> Thanks for the input Ned--I thought there had to be more going on.  
> 
> After reading about BEGIN TRANSACTION some more, it seems like
> sometimes my retry strategy works (a reader gets SQLITE_BUSY because a
> writer is holding an exclusive lock, in which case retrying the same
> sqlite3_ call later in the reader would work--and does from what I've
> seen).  But once the deadlock case is hit, one must abort/rollback as
> you point out.  
> 
> Is there a way to detect the deadlock situation (will SQLite return
> SQLITE_LOCKED), or would it be better to retry a couple of times and
> then rollback?  
> 
> Is it best to bail out by closing the database handle, or calling
> sqlite3_prepare16 on a "rollback" statement and trying to execute it?
> 

If you always do "BEGIN EXCLUSIVE" instead of just "BEGIN" to
start your transaction, you will never get into the deadlock in
the first place.
--
D. Richard Hipp <[EMAIL PROTECTED]>



RE: [sqlite] Concurrency handling question

2005-11-30 Thread Doug Nebeker
Thanks for the input Ned--I thought there had to be more going on.  

After reading about BEGIN TRANSACTION some more, it seems like
sometimes my retry strategy works (a reader gets SQLITE_BUSY because a
writer is holding an exclusive lock, in which case retrying the same
sqlite3_ call later in the reader would work--and does from what I've
seen).  But once the deadlock case is hit, one must abort/rollback as
you point out.  

Is there a way to detect the deadlock situation (will SQLite return
SQLITE_LOCKED), or would it be better to retry a couple of times and
then rollback?  

Is it best to bail out by closing the database handle, or calling
sqlite3_prepare16 on a "rollback" statement and trying to execute it?

Thanks again for your time.



-Original Message-
From: Ned Batchelder [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, November 30, 2005 11:54 AM
To: sqlite-users@sqlite.org
Subject: RE: [sqlite] Concurrency handling question

You can't simply loop on retrying the last sqlite3_* call.  You need to
rollback one of the transactions and retry the entire transaction.  Your
two threads are deadlocked because (roughly) each has acquired a lock
that the other needs.  One needs to release the locks it holds.  Rolling
back is the way to do that.

--Ned.
http://nedbatchelder.com
 

-Original Message-
From: Doug Nebeker [mailto:[EMAIL PROTECTED]
Sent: Wednesday, 30 November, 2005 11:04 AM
To: sqlite-users@sqlite.org
Subject: [sqlite] Concurrency handling question

I've written a C++ wrapper for SQLite which has been working great.
Lately though I've started getting what might be deadlocks when running
two threads against the same database.  One thread is a reader and the
other is a writer (at the moment, I'm getting 'database locked' errors,
but that's after many hours of failures so not sure where it started).
It's possible to have two or three readers and/or writers, but not more
than that.

Each thread has its own database handle; however, I keep the database
handle open for many calls (where each 'call' opens a transaction,
executes SQL statements, then closes the transaction).

I've read about concurrency, and it sounds like the best way to work is
just to retry the last call if SQLITE_BUSY is ever returned, but that
doesn't seem to be working for me in this case.  I've stripped my main
processing loop down to make it concise (it is not completely correct
nor will it compile as shown, but it shows the flow).

Can anyone see what I'm doing wrong?  Is the fact that I keep the
database handle open between calls and between retries the problem?  If
the retry loops finally fails, sqlite3_finalize gets called,
sqlite3_close and then sqlite3_open16 to re-initialize the state.

Thanks in advance for any insight.

do
{
    long prepareLoopCount = 0;

PrepareLoop:
    if(SQLITE_BUSY == (res = sqlite3_prepare16(m_hDB,
        nextStatement, -1, &pStmt, NULL)))
    {
        if(prepareLoopCount++ < 200)
        {
            Sleep(300); /* ms */
            goto PrepareLoop;
        }
    }

    if((SQLITE_OK == res) && (NULL != pStmt))
    {
        long stepLoopCount = 0;
StepLoop:
        res = sqlite3_step(pStmt);
        if(SQLITE_BUSY == res)
        {
            if(stepLoopCount++ < 200)
            {
                Sleep(300); /* ms */
                goto StepLoop;
            }
        }
        /* else do other processing, like fetching the rows */
    }

    if(NULL != pStmt)
    {
        sqlite3_finalize(pStmt);
        pStmt = NULL;
    }
}
while(NULL != nextStatement);


To find out more about Reuters visit www.about.reuters.com

Any views expressed in this message are those of the individual sender,
except where the sender specifically states them to be the views of
Reuters Ltd.







RE: [sqlite] Concurrency handling question

2005-11-30 Thread Ned Batchelder
You can't simply loop on retrying the last sqlite3_* call.  You need to
rollback one of the transactions and retry the entire transaction.  Your two
threads are deadlocked because (roughly) each has acquired a lock that the
other needs.  One needs to release the locks it holds.  Rolling back is the
way to do that.
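
The "retry the entire transaction" advice can be packaged as a small helper.
A Python sketch (stdlib sqlite3, autocommit connection, names illustrative)
that rolls back and reruns the whole unit of work on SQLITE_BUSY rather than
spinning on a single call:

```python
import random
import sqlite3
import time

def run_transaction(conn, work, retries=5):
    """Run work(conn) inside a transaction.  On SQLITE_BUSY, roll the whole
    transaction back (releasing our locks so the other side can finish)
    and retry it from the top.  conn must be in autocommit mode
    (isolation_level=None) so BEGIN/COMMIT are ours to issue."""
    for attempt in range(retries):
        try:
            conn.execute("BEGIN")
            result = work(conn)
            conn.execute("COMMIT")
            return result
        except sqlite3.OperationalError:
            conn.execute("ROLLBACK")
            # Back off a little, with jitter, before retrying.
            time.sleep(0.05 * (attempt + 1) + random.random() * 0.05)
    raise RuntimeError("transaction kept failing with SQLITE_BUSY")

conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE t (x)")
run_transaction(conn, lambda c: c.execute("INSERT INTO t VALUES (1)"))
```

A production version would distinguish an actual SQLITE_BUSY from other
OperationalErrors before deciding to retry.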

--Ned.
http://nedbatchelder.com
 

-Original Message-
From: Doug Nebeker [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, 30 November, 2005 11:04 AM
To: sqlite-users@sqlite.org
Subject: [sqlite] Concurrency handling question

I've written a C++ wrapper for SQLite which has been working great.
Lately though I've started getting what might be deadlocks when running
two threads against the same database.  One thread is a reader and the
other is a writer (at the moment, I'm getting 'database locked' errors,
but that's after many hours of failures so not sure where it started).
It's possible to have two or three readers and/or writers, but not more
than that.

Each thread has its own database handle; however, I keep the database
handle open for many calls (where each 'call' opens a transaction,
executes SQL statements, then closes the transaction).

I've read about concurrency, and it sounds like the best way to work is
just to retry the last call if SQLITE_BUSY is ever returned, but that
doesn't seem to be working for me in this case.  I've stripped my main
processing loop down to make it concise (it is not completely correct
nor will it compile as shown, but it shows the flow).

Can anyone see what I'm doing wrong?  Is the fact that I keep the
database handle open between calls and between retries the problem?  If
the retry loops finally fails, sqlite3_finalize gets called,
sqlite3_close and then sqlite3_open16 to re-initialize the state.

Thanks in advance for any insight.

do
{
    long prepareLoopCount = 0;

PrepareLoop:
    if(SQLITE_BUSY == (res = sqlite3_prepare16(m_hDB,
        nextStatement, -1, &pStmt, NULL)))
    {
        if(prepareLoopCount++ < 200)
        {
            Sleep(300); /* ms */
            goto PrepareLoop;
        }
    }

    if((SQLITE_OK == res) && (NULL != pStmt))
    {
        long stepLoopCount = 0;
StepLoop:
        res = sqlite3_step(pStmt);
        if(SQLITE_BUSY == res)
        {
            if(stepLoopCount++ < 200)
            {
                Sleep(300); /* ms */
                goto StepLoop;
            }
        }
        /* else do other processing, like fetching the rows */
    }

    if(NULL != pStmt)
    {
        sqlite3_finalize(pStmt);
        pStmt = NULL;
    }
}
while(NULL != nextStatement);






[sqlite] Concurrency handling question

2005-11-30 Thread Doug Nebeker
I've written a C++ wrapper for SQLite which has been working great.
Lately though I've started getting what might be deadlocks when running
two threads against the same database.  One thread is a reader and the
other is a writer (at the moment, I'm getting 'database locked' errors,
but that's after many hours of failures so not sure where it started).
It's possible to have two or three readers and/or writers, but not more
than that.

Each thread has its own database handle; however, I keep the database
handle open for many calls (where each 'call' opens a transaction,
executes SQL statements, then closes the transaction).

I've read about concurrency, and it sounds like the best way to work is
just to retry the last call if SQLITE_BUSY is ever returned, but that
doesn't seem to be working for me in this case.  I've stripped my main
processing loop down to make it concise (it is not completely correct
nor will it compile as shown, but it shows the flow).

Can anyone see what I'm doing wrong?  Is the fact that I keep the
database handle open between calls and between retries the problem?  If
the retry loops finally fails, sqlite3_finalize gets called,
sqlite3_close and then sqlite3_open16 to re-initialize the state.

Thanks in advance for any insight.

do
{
    long prepareLoopCount = 0;

PrepareLoop:
    if(SQLITE_BUSY == (res = sqlite3_prepare16(m_hDB,
        nextStatement, -1, &pStmt, NULL)))
    {
        if(prepareLoopCount++ < 200)
        {
            Sleep(300); /* ms */
            goto PrepareLoop;
        }
    }

    if((SQLITE_OK == res) && (NULL != pStmt))
    {
        long stepLoopCount = 0;
StepLoop:
        res = sqlite3_step(pStmt);
        if(SQLITE_BUSY == res)
        {
            if(stepLoopCount++ < 200)
            {
                Sleep(300); /* ms */
                goto StepLoop;
            }
        }
        /* else do other processing, like fetching the rows */
    }

    if(NULL != pStmt)
    {
        sqlite3_finalize(pStmt);
        pStmt = NULL;
    }
}
while(NULL != nextStatement);

