RE: [sqlite] Step Query

2007-06-18 Thread B V, Phanisekhar
> My question here is do I need to do sqlite3_finalize(pStmt); after
> every sqlite3_step() to free all memory allocated by
> sqlite3_step().Does calling finalize at end will free all memory
> allocated by all steps statements?

No, you don't need to call sqlite3_finalize() after every sqlite3_step().

> *pzBlob = (unsigned char *)malloc(*pnBlob);

This memory, allocated by your own malloc(), must be freed by you with free().


Regards,
Phani



-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Step Query

2007-06-18 Thread Dan Kennedy
On Tue, 2007-06-19 at 10:58 +0530, anand chugh wrote:
> Hi
> 
> I am having code like this:
> 
>rc = sqlite3_prepare(db, zSql, -1, &pStmt, 0);
>if( rc!=SQLITE_OK ){
>  return rc;
>}
>sqlite3_bind_text(pStmt, 1, zKey, -1, SQLITE_STATIC);
>sqlite3_bind_blob(pStmt, 2, zBlob, nBlob, SQLITE_STATIC);
> 
>while( sqlite3_step(pStmt)==SQLITE_ROW )
>  {
>  *pnBlob = sqlite3_column_bytes(pStmt, 0);
>  *pzBlob = (unsigned char *)malloc(*pnBlob);
>  memcpy(*pzBlob, sqlite3_column_blob(pStmt, 0), *pnBlob);
>}
> 
>   sqlite3_finalize(pStmt);
> 
> My question here is do I need to do sqlite3_finalize(pStmt); after
> every sqlite3_step() to free all memory allocated by
> sqlite3_step().

No. Exactly one sqlite3_finalize() for each sqlite3_prepare(). In
this respect the code above is fine.

It's not SQLite related, but if the SQL statement returns more 
than one row, the malloc() in the while loop will cause a memory 
leak.

Dan.

> Does calling finalize at end will free all memory
> allocated by all steps statements?
> 
>  Example shown http://www.sqlite.org/cvstrac/wiki?p=BlobExample does
> same , it calls finalize after  every step.
> 
> My Program shows some Memory Leaks(Virtual Bytes).
> 
> Please clarify.
> 
> Anand
> 
> 





[sqlite] Step Query

2007-06-18 Thread anand chugh

Hi

I am having code like this:

  rc = sqlite3_prepare(db, zSql, -1, &pStmt, 0);
  if( rc!=SQLITE_OK ){
return rc;
  }
  sqlite3_bind_text(pStmt, 1, zKey, -1, SQLITE_STATIC);
  sqlite3_bind_blob(pStmt, 2, zBlob, nBlob, SQLITE_STATIC);

  while( sqlite3_step(pStmt)==SQLITE_ROW )
{
*pnBlob = sqlite3_column_bytes(pStmt, 0);
*pzBlob = (unsigned char *)malloc(*pnBlob);
memcpy(*pzBlob, sqlite3_column_blob(pStmt, 0), *pnBlob);
  }

 sqlite3_finalize(pStmt);

My question is: do I need to call sqlite3_finalize(pStmt) after every
sqlite3_step() to free the memory allocated by sqlite3_step()? Or will
calling finalize once at the end free all memory allocated by all of the
step calls?

The example at http://www.sqlite.org/cvstrac/wiki?p=BlobExample does the
same; it calls finalize after every step.

My program shows some memory leaks (virtual bytes).

Please clarify.

Anand




Re: [sqlite] Recovery After Crash

2007-06-18 Thread Asif Lodhi

Hi Christian,

On 6/19/07, Christian Smith <[EMAIL PROTECTED]> wrote:

SQLite is not optimised for large datasets. 
..
Consider using larger pages than the default 1024 bytes to limit the
number of pages SQLite must track. ..


Thank you for replying. I think performance can only be judged by testing
against live data, so I'll structure my application so that I can replace
the SQLite-specific code with something else in case I run into major
problems.

--
Thanks again,

Asif




Re: [sqlite] Proper way to transfer a live sqlite database

2007-06-18 Thread Dan Kennedy
On Tue, 2007-06-19 at 00:46 +0100, Michael Hooker wrote:
> Christian wrote:
> 
> >>Best way of doing this is to execute a 'BEGIN IMMEDIATE', copying the
> database file, then executing a 'ROLLBACK' to end the transaction.<<
> 
> >>and can be safely copied at the OS level<<
> 
> I also have a need to copy a live database which is constantly being updated 
> 24/7 by software which decodes radio signals.  I need to safely interrogate 
> and process a static version of the data without risk of messing up the 
> original. Once midnight has passed, the copy can be taken and the previous 
> day's data extracted from the copy.But as a raw beginner I don't clearly 
> understand what is being said here.
> 
> When you say >>and can be safely copied at the OS level<<, I guess you must 
> mean something more than right-click the file icon and select "Copy" ;)   In 
> any case I would much prefer the copy to be taken programmatically, which 
> would be neater and faster, and could be done automatically in the middle of 
> the night when the data flow is much less intense.   I use, as best I can, 
> Delphi 7 and Ralf Junker's DisqLite3, so can I safely:-
> 
> (1) send a BEGIN IMMEDIATE command,
> 
> (2) issue Delphi Windows API command "CopyFile(PChar(SourceFileName), 
> PChar(DestFileName), FALSE);
> 
> (3) send a ROLLBACK command.
> 
> The destination would be the same folder as the source, so no network delay. 
> The file is about 55 Megabytes.
> 
> My data is of no commercial value, but I have a few hundred people looking 
> forward to my reports every day and don't want to mess it up;  I have no 
> other SQLite3 databases to experiment with, so please forgive me for asking 
> you experts what is probably a very basic question.  It would also be very 
> helpful if someone could explain in jargon-free terms what ROLLBACK means in 
> this context and why it apparently serves the purpose of finishing the 
> transaction which has not attempted to change anything(why not END?)

"ROLLBACK" means abandon the current transaction, and put the database
back the way it was before the transaction started. To "roll back" all
changes so far.

In this specific context, the important part is that the "BEGIN 
IMMEDIATE" locks the database file and the "ROLLBACK" releases the
lock. A "COMMIT" or "END" would be logically identical - it releases
the lock, and since there were no database changes made in this
transaction, it doesn't matter if they are rolled back or not.

Under the hood, there is a minor difference - a COMMIT will update
the database change-counter, meaning that all other connections
will need to discard their caches. A ROLLBACK does not update the
change-counter, so caches held by other connections will remain
valid.

Dan.
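For readers wanting a concrete version of the BEGIN IMMEDIATE / copy /
ROLLBACK sequence, here is a sketch using Python's standard sqlite3 module
rather than Delphi/DISQLite3 (the snapshot function name is invented):

```python
import shutil
import sqlite3

def snapshot(db_path, dest_path):
    """Copy a live SQLite database while holding a read lock on it."""
    # isolation_level=None puts the wrapper in autocommit mode, so the
    # explicit BEGIN below is passed straight through to SQLite.
    con = sqlite3.connect(db_path, isolation_level=None)
    try:
        con.execute("BEGIN IMMEDIATE")  # take the lock; writers now block
        try:
            # The file cannot change underneath us, so a plain copy is safe.
            shutil.copyfile(db_path, dest_path)
        finally:
            # Release the lock without bumping the change counter.
            con.execute("ROLLBACK")
    finally:
        con.close()
```

The three steps map directly onto the Delphi sequence in the question:
BEGIN IMMEDIATE, CopyFile, ROLLBACK.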


>  - I 
> keep coming across the word and I'm sure it means something fairly simple, 
> but I have not encountered it until I started looking at SQLite.  I've let 
> Delphi and VisualDB handle all my database work through the BDE until now 
> and never had any need to worry about locking or contentions.
> 
> Thanks
> 
> Michael Hooker
> 
> - Original Message - 
> From: "Christian Smith" <[EMAIL PROTECTED]>
> To: 
> Sent: Monday, June 18, 2007 6:39 PM
> Subject: Re: [sqlite] Proper way to transfer a live sqlite database
> 
> 
> > Rich Rattanni uttered:
> >
> >> The databases will be in flux, and I didnt necessairly want to suspend
> >> the application that is performs reads and writes into the database.
> >> A simple copy worries me because it seems like messing with SQLITE on
> >> the file level is dangerous since you circumvent all the protection
> >> mechanisms that provide fault tolerance.  I didnt want to have to
> >> worry about if the database has a journal file that needs copied, or
> >> any other situation like that.  I figured using the SQLITE API to do
> >> the copy would award me some protection against corruption.
> >
> >
> > You're right to be cautious. Never copy an in use database if that 
> > database could possibly be updated.
> >
> > If you open the database, and obtain a SQLite read lock on it, you can be 
> > sure it is not going to be modified, and can be safely copied at the OS 
> > level.
> >
> > Best way of doing this is to execute a 'BEGIN IMMEDIATE', copying the 
> > database file, then executing a 'ROLLBACK' to end the transaction.
> >
> > To limit the time the database is locked, I suggest copying the file to a 
> > local filesystem first, then transferring across the network after the 
> > lock is released.
> >
> > Christian
> >
> >
> >
> > --
> > /"\
> > \ /ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
> >  X   - AGAINST MS ATTACHMENTS
> > / \
> >

[sqlite] Custom Aggregation Not Working

2007-06-18 Thread wcmadness

Hey, Folks: 

I'm writing a financial application and MUST have exact math decimals (no
floats). So, I'm using Python's decimal module. 

My database is Sqlite (and my language is Python with Pysqlite); Sqlite
doesn't offer a non-floating point decimal type. But, it does have adapters
and converters to store data as a native Sqlite type (string / text) in the
database and bring it out of the database and into memory as something else
(Python Decimal, in my case). That works great, but it does NOT seem to
apply to aggregation operations. I need it to. 

So, I tried using another Sqlite feature, custom aggregation functions, but
to no avail. Does anyone know how to fix this? What am I doing wrong? 

I am submitting all of my test code, below. Following that, I am submitting
my results. 

Thanks for your time in helping with this! 

Here's the code: 

import sqlite3
import decimal

# This way will store the value as float, potentially losing precision.

print '-' * 25
print 'Testing native data types, no adapters / converters.'
con = sqlite3.connect('test1.db')
cur = con.cursor()
cur.execute("create table test (pkey integer primary key, somenumber Decimal);")
cur.execute("insert into test values (null, .1);")
cur.execute("insert into test values (null, .2);")
cur.execute("select * from test;")
rows = cur.fetchall()
for row in rows:
    print row[0], type(row[0])
    print row[1], type(row[1])
cur.close()
con.close()
print '-' * 25

# This way will store the value as decimal, keeping exact precision.

def AdaptDecimal(pdecValue):
    return str(pdecValue)

def ConvertDecimal(pstrValue):
    return decimal.Decimal(pstrValue)

decimal.getcontext().prec = 50
sqlite3.register_adapter(decimal.Decimal, AdaptDecimal)
sqlite3.register_converter("Decimal", ConvertDecimal)

print 'Testing data type with adapters / converters. Decimal numbers should be Python Decimal types.'
con = sqlite3.connect('test2.db', detect_types=sqlite3.PARSE_DECLTYPES)
cur = con.cursor()
cur.execute("create table test (pkey integer primary key, somenumber Decimal);")
cur.execute("insert into test values (null, .1);")
cur.execute("insert into test values (null, .2);")
cur.execute("select * from test;")
rows = cur.fetchall()
for row in rows:
    print row[0], type(row[0])
    print row[1], type(row[1])
cur.close()
con.close()
print '-' * 25

# OK. That works. Now for the real test. Let's try an equality test.
# Classic float equality failure: .1 added 10 times should NOT equal 1.
# As predicted, this will FAIL the equality test.

print 'Testing Sum aggregation on native data types. Should be float and should fail equality test.'
con = sqlite3.connect('test3.db')
cur = con.cursor()
cur.execute("create table test (pkey integer primary key, somenumber Decimal);")
for x in range(10):
    cur.execute("insert into test values (null, .1);")
cur.execute("select sum(somenumber) as total from test;")
rows = cur.fetchall()
print rows[0][0], type(rows[0][0])
if rows[0][0] == 1:
    print 'equal'
else:
    print 'NOT equal'
cur.close()
con.close()
print '-' * 25

# Now, we try the exact same equality test, using adapters and converters,
# substituting the Python exact-precision decimal type for float.

# Probably don't need to re-register. We did that above. We probably just
# need to parse declared types when we open the connection.
# Hmm... This fails whether I re-register or not.
# sqlite3.register_adapter(decimal.Decimal, AdaptDecimal)
# sqlite3.register_converter("Decimal", ConvertDecimal)
print "Testing Sum aggregation with adapters / converters registered. Result SHOULD BE Python Decimal type, but is NOT. Should PASS equality test, but doesn't."
con = sqlite3.connect('test4.db', detect_types=sqlite3.PARSE_DECLTYPES)
cur = con.cursor()
cur.execute("create table test (pkey integer primary key, somenumber Decimal);")
for x in range(10):
    cur.execute("insert into test values (null, .1);")
cur.execute("select sum(somenumber) as total from test;")
rows = cur.fetchall()
print rows[0][0], type(rows[0][0])
if rows[0][0] == 1:
    print 'equal'
else:
    print 'NOT equal'
cur.close()
con.close()
print '-' * 25

## OK. Let's try the exact same equality test, using manual summation.
## First for floats.

print 'Testing manual summation against native data types, no adapters / converters. Should FAIL equality test, because the sum is a float.'
con = sqlite3.connect('test5.db')
cur = con.cursor()
cur.execute("create table test (pkey integer primary key, somenumber Decimal);")
for x in range(10):
    cur.execute("insert into test values (null, .1);")
cur.execute("select * from test;")
rows = cur.fetchall()
total = 0.0
for row in rows:
    total += row[1]
print total, type(total)
if total == 1:
    print 'equal'
else:
    print 'NOT equal'
cur.close()
con.close()
print '-' * 25

# Now, using adapters and converters with manual summation.

print 'Testing manual summ
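SQLite's built-in sum() never sees the registered converter: converters fire
only for result columns with a declared type, while an aggregate works on the
raw stored values and its result has no declared type. A custom aggregate
registered through create_aggregate can keep the whole summation in Decimal;
a minimal sketch under that assumption (the aggregate name decsum is
invented, and the column is declared plain text for simplicity):

```python
import sqlite3
from decimal import Decimal

class DecimalSum:
    """Exact-sum aggregate: values arrive as the stored TEXT strings."""
    def __init__(self):
        self.total = Decimal(0)

    def step(self, value):
        # Convert each stored string to Decimal before adding.
        self.total += Decimal(str(value))

    def finalize(self):
        # Return TEXT so the exact value survives the trip back out.
        return str(self.total)

con = sqlite3.connect(":memory:")
con.create_aggregate("decsum", 1, DecimalSum)
con.execute("create table test (pkey integer primary key, somenumber text)")
con.executemany("insert into test (somenumber) values (?)", [(".1",)] * 10)
(total,) = con.execute("select decsum(somenumber) from test").fetchone()
print(total, Decimal(total) == 1)  # → 1.0 True
```

Registered per connection, this sidesteps the float round trip entirely:
the values travel as TEXT in and out, and the equality test is exact.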

Re: [sqlite] Proper way to transfer a live sqlite database

2007-06-18 Thread Michael Hooker

Christian wrote:


>>Best way of doing this is to execute a 'BEGIN IMMEDIATE', copying the 
database file, then executing a 'ROLLBACK' to end the transaction.<<

>>and can be safely copied at the OS level<<


I also have a need to copy a live database which is constantly being updated 
24/7 by software which decodes radio signals.  I need to safely interrogate 
and process a static version of the data without risk of messing up the 
original. Once midnight has passed, the copy can be taken and the previous 
day's data extracted from the copy. But as a raw beginner I don't clearly 
understand what is being said here.


When you say >>and can be safely copied at the OS level<<, I guess you must 
mean something more than right-click the file icon and select "Copy" ;)   In 
any case I would much prefer the copy to be taken programmatically, which 
would be neater and faster, and could be done automatically in the middle of 
the night when the data flow is much less intense.   I use, as best I can, 
Delphi 7 and Ralf Junker's DisqLite3, so can I safely:-


(1) send a BEGIN IMMEDIATE command,

(2) issue the Delphi Windows API command "CopyFile(PChar(SourceFileName), 
PChar(DestFileName), FALSE);"


(3) send a ROLLBACK command.

The destination would be the same folder as the source, so no network delay. 
The file is about 55 Megabytes.


My data is of no commercial value, but I have a few hundred people looking 
forward to my reports every day and don't want to mess it up;  I have no 
other SQLite3 databases to experiment with, so please forgive me for asking 
you experts what is probably a very basic question.  It would also be very 
helpful if someone could explain in jargon-free terms what ROLLBACK means in 
this context and why it apparently serves the purpose of finishing the 
transaction which has not attempted to change anything (why not END?) - I 
keep coming across the word and I'm sure it means something fairly simple, 
but I have not encountered it until I started looking at SQLite.  I've let 
Delphi and VisualDB handle all my database work through the BDE until now 
and never had any need to worry about locking or contentions.


Thanks

Michael Hooker

- Original Message - 
From: "Christian Smith" <[EMAIL PROTECTED]>

To: 
Sent: Monday, June 18, 2007 6:39 PM
Subject: Re: [sqlite] Proper way to transfer a live sqlite database



Rich Rattanni uttered:


The databases will be in flux, and I didn't necessarily want to suspend
the application that performs reads and writes into the database.
A simple copy worries me because it seems like messing with SQLite at
the file level is dangerous, since you circumvent all the protection
mechanisms that provide fault tolerance.  I didn't want to have to
worry about whether the database has a journal file that needs copying, or
any other situation like that.  I figured using the SQLite API to do
the copy would afford me some protection against corruption.



You're right to be cautious. Never copy an in use database if that 
database could possibly be updated.


If you open the database, and obtain a SQLite read lock on it, you can be 
sure it is not going to be modified, and can be safely copied at the OS 
level.


Best way of doing this is to execute a 'BEGIN IMMEDIATE', copying the 
database file, then executing a 'ROLLBACK' to end the transaction.


To limit the time the database is locked, I suggest copying the file to a 
local filesystem first, then transferring across the network after the 
lock is released.


Christian



--
/"\
\ /ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
 X   - AGAINST MS ATTACHMENTS
/ \








Re: [sqlite] Version 3.4.0

2007-06-18 Thread Gerry Snyder

[EMAIL PROTECTED] wrote:

yes.  I messed up the build again.  Please try one more
time.
--

Thanks for the quick responses.

More better now.  ;-)


Gerry




Re: [sqlite] FTS2 Experiences?

2007-06-18 Thread John Stanton
We have just started to use it.  So far it is performing well, but we 
have not subjected it to high volume and large data sets yet.


I have written a simple function which helps in our application.  The 
function concatenates columns to produce a block of text, then strips out 
punctuation, prepositions, etc., and upshifts the case.  The resulting 
string of words is used to create the FTS2 index.  Search strings go 
through the same function.  The FTS2 index is maintained by triggers and 
is transparent to the applications, thanks to the user function.
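The kind of helper John describes can be sketched as a user-defined SQL
function, here with Python's sqlite3 module (the stop-word list and the
function name are invented; the real implementation and the FTS2 triggers
are not shown):

```python
import re
import sqlite3

# Illustrative stop-word list; a real application would use its own.
STOPWORDS = {"A", "AN", "THE", "OF", "AND", "TO", "IN", "ON"}

def normalize(*cols):
    """Concatenate columns, strip punctuation, drop stop words, upshift."""
    text = " ".join(c for c in cols if c is not None)
    words = re.findall(r"[A-Za-z0-9]+", text.upper())
    return " ".join(w for w in words if w not in STOPWORDS)

con = sqlite3.connect(":memory:")
# narg=-1 lets the SQL function accept any number of columns.
con.create_function("normalize", -1, normalize)
row = con.execute(
    "select normalize('Performance of FTS2,', 'and general impressions!')"
).fetchone()
print(row[0])  # → PERFORMANCE FTS2 GENERAL IMPRESSIONS
```

Because search strings pass through the same function, queries and the
indexed text agree on case and punctuation.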


Russell Leighton wrote:


Could folks that have used fts2 in production apps/systems relate their 
experiences to the group?


I would very much be interested in how folks are using it, how well it 
performs with large data and general impressions.


Thanks in advance.

Russ











Re: [sqlite] Proper way to transfer a live sqlite database

2007-06-18 Thread John Stanton
One of the most endearing features of SQLite is that a database is a single 
file, which makes copying simple.  If the database is in use while you are 
copying, launch an exclusive transaction first to block other users; then 
you can copy it and be assured of its state.


Rich Rattanni wrote:

The databases will be in flux, and I didn't necessarily want to suspend
the application that performs reads and writes into the database.
A simple copy worries me because it seems like messing with SQLite at
the file level is dangerous, since you circumvent all the protection
mechanisms that provide fault tolerance.  I didn't want to have to
worry about whether the database has a journal file that needs copying, or
any other situation like that.  I figured using the SQLite API to do
the copy would afford me some protection against corruption.

On 6/18/07, Fred Williams <[EMAIL PROTECTED]> wrote:


It would most likely be much quicker (and simpler) just to utilize the
OS's file coping feature to copy the table.  What would be gained with
the attaching databases approach over just a straight file copy?

Fred

> -Original Message-
> From: Rich Rattanni [mailto:[EMAIL PROTECTED]
> Sent: Monday, June 18, 2007 10:20 AM
> To: sqlite-users@sqlite.org
> Subject: [sqlite] Proper way to transfer a live sqlite database
>
>
> I was writing to ask some opinions on how to perform a download of a
> live sqlite database.  Basically I have a device, which stores all
> manner of data in a sqlite database.  Periodically I want to download
> the data to a central server for viewing.  I discussed it with my
> colleagues, and they felt that I should just copy the file to the
> server.  However I was thinking of having a blank database with
> identical schema to the database I am copying.  Then when the download
> occurs, I would ATTACH the live database to the blank database, and
> query the data from one to the other.  Then I would close the cloned
> version and offload that to the server.
>
> The standard questions now follow...
> Is this an acceptable way?
> Is there a better/best way?
>
> Thanks for any input,
> Rich Rattanni
>
> --
> ---
> To unsubscribe, send email to [EMAIL PROTECTED]
> --
> ---
>

















Re: [sqlite] Trigger update of multiple columns

2007-06-18 Thread Kees Nuyt
On Mon, 18 Jun 2007 23:54:10 +1000, you wrote:

>So, my question remains, is it possible to update multiple columns  
>from a single related row in another table, without having to perform  
>multiple redundant WHERE clauses?

You may want to introduce a sold_products table, one row per
product-incarnation, only containing products you actually sold,
1 : n with the sales_products table, which would only contain
sale_id and a foreign key pointing to sold_products, not the
product properties at the moment of sale.
The same row in sold_products could be used by other sales where
the exact same product incarnation is sold. sold_products
removes the redundancy your solution still has.

I wouldn't worry too much about multiple redundant WHERE
clauses, because the row would still be in cache and found
immediately by product_id.

But I agree: what we seem to miss sometimes is an expression to
transfer a column list from a subquery to some outer SQL
construct for SET or comparison purposes.

If they existed, your trigger action

   UPDATE sale_products
      SET buy  = (SELECT buy  FROM products
                  WHERE products.product_id = NEW.product_id)
        , sell = (SELECT sell FROM products
                  WHERE products.product_id = NEW.product_id)
        , desc = (SELECT desc FROM products
                  WHERE products.product_id = NEW.product_id)
    WHERE ...

could be expressed by:

   UPDATE sale_products
      SET (buy, sell, desc) = (
          SELECT buy, sell, desc
            FROM products
           WHERE products.product_id = NEW.product_id
          )
    WHERE ...

See also:
http://www.sqlite.org/cvstrac/wiki?p=UnsupportedSql
item 2005.10.06
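Until such syntax exists, the repeated-subquery form does work end to end; a
runnable sketch via Python's sqlite3 module (schema cut down, "desc" renamed
"descr" here to sidestep the SQL keyword; names otherwise assumed from the
thread):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
create table products (product_id integer primary key, buy, sell, descr);
create table sale_products (sale_id, product_id, buy, sell, descr);

-- Re-copy the product columns into matching sale rows on every change.
create trigger products_upd after update on products
begin
    update sale_products set
        buy   = (select buy   from products where product_id = new.product_id),
        sell  = (select sell  from products where product_id = new.product_id),
        descr = (select descr from products where product_id = new.product_id)
    where product_id = new.product_id;
end;
""")
con.execute("insert into products values (1, 10, 15, 'widget')")
con.execute("insert into sale_products values (100, 1, 10, 15, 'widget')")
con.execute("update products set sell = 18 where product_id = 1")
print(con.execute("select sell from sale_products").fetchone()[0])  # → 18
```

As Kees notes, the repeated WHERE lookups are cheap because the product row
stays in the page cache between the three subqueries.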
-- 
  (  Kees Nuyt
  )
c[_]




Re: [sqlite] Lemon Parser - Modular & Extensible ?

2007-06-18 Thread Uma Krishnan
Hey, there's no need to be offensive. I did not mean to be critical. Far from 
it: Lemon does a great job (far more than I'm capable of producing). What I was 
trying to find out was whether a .y file can be broken up so that it can be 
built on top of other .y files.
   
  Not sure if this is the right group, but I could not find a Lemon parser user 
group.
   
  

Christian Smith <[EMAIL PROTECTED]> wrote:
  Uma Krishnan uttered:

> Hello:
>
> Is lemon parser modular and extensible?


Extensible to do what? It generates parsers, and is self contained. It 
does a single job, and does it well. What more could you ask for?


>
> Thanks
>
> Uma
>
> Asif Lodhi wrote:
> Hi Everybody,
>
> I have just joined this mailing list as Sqlite looks like a good
> software solution to my needs. What I need right now is RE-assurance
> of "crash-recovery" that is mentioned on your front page. So, I would
> be thankful if you experts would give me an "accurate" and fair
> picture of the crash-recovery aspects of SQLite - without any hype.
>
> --
> Best regards,
>
> Asif
>
>
>
>

--
/"\
\ / ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
X - AGAINST MS ATTACHMENTS
/ \





Re: [sqlite] SQL query help

2007-06-18 Thread Jeff Godfrey


- Original Message - 
From: "P Kishor" <[EMAIL PROTECTED]>

To: 
Sent: Monday, June 18, 2007 2:55 PM
Subject: Re: [sqlite] SQL query help



On 6/18/07, Jeff Godfrey <[EMAIL PROTECTED]> wrote:



I have a table which contains (among other things), a "name" column
and a "version" column (a software asset table).  I need a query 
that

will group all like "names" together in a single record, and return
the latest "version" (the largest value) for each group.  What I 
have



Jeff, how about something like

SELECT name, MAX(version) AS latest
FROM asset
GROUP BY name


Thanks Puneet - that's just what I needed.

Side note - the timing on the list seems to be whacky right now (at 
least for me), so some of my responses seem to be coming out of order. 
Sorry if that causes some confusion...


Thanks again.

Jeff 
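For reference, the suggested GROUP BY / MAX(version) query can be
sanity-checked in a few lines (sample rows invented; note that MAX on a TEXT
column compares lexically, which may matter for real-world version strings
like '10.0' vs '9.0'):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table asset (name text, version text)")
con.executemany("insert into asset values (?, ?)",
                [("app", "1.0"), ("app", "2.0"), ("tool", "0.9")])
# One row per name, carrying the largest version in each group.
rows = con.execute(
    "select name, max(version) as latest from asset group by name"
).fetchall()
print(rows)
```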






[sqlite] Re: 3.4.0 make

2007-06-18 Thread Ken
DRH,

Maybe the following suggestion for the Makefile:

Allow two targets:

  1. The default "make all", which generates the .a and .so from individual .o files.
  2. A separate target, "make amalgamated", which would build tsrc and sqlite3.c,
     then create the .a, .so and sqlite3 (exe) from the amalgamated sources.

Regards and thanks for a great product,
Ken
  
 

Ken <[EMAIL PROTECTED]> wrote: 
 make distclean
   does not seem to clear off the sqlite3.c file nor the tsrc directory.
 
 
 My current build script:
 
 configure
 make distclean
 configure --enable-threadsafe --disable-tcl --enable-tempstore
 make sqlite3.c
 make
 
 
 How does one now build libraries static and shared based upon the amalgamated 
source? 
 
 Thanks for any help.
 
 





Re: [sqlite] Lemon Parser - Modular & Extensible ?

2007-06-18 Thread Christian Smith

Uma Krishnan uttered:


Hello:

 Is lemon parser modular and extensible?



Extensible to do what? It generates parsers, and is self contained. It 
does a single job, and does it well. What more could you ask for?





 Thanks

 Uma

Asif Lodhi <[EMAIL PROTECTED]> wrote:
 Hi Everybody,

I have just joined this mailing list as Sqlite looks like a good
software solution to my needs. What I need right now is RE-assurance
of "crash-recovery" that is mentioned on your front page. So, I would
be thankful if you experts would give me an "accurate" and fair
picture of the crash-recovery aspects of SQLite - without any hype.

--
Best regards,

Asif






--
/"\
\ /ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
 X   - AGAINST MS ATTACHMENTS
/ \




Re: [sqlite] Version 3.4.0

2007-06-18 Thread drh
Gerry Snyder <[EMAIL PROTECTED]> wrote:
> 
> (bin) 1 % load /sqlite/tclsqlite3.dll
> couldn't find procedure Tclsqlite_Init
> (bin) 2 %
> 

yes.  I messed up the build again.  Please try one more
time.
--
D. Richard Hipp <[EMAIL PROTECTED]>





Re: [sqlite] FTS2 Experiences?

2007-06-18 Thread Jos van den Oever

2007/6/18, Russell Leighton <[EMAIL PROTECTED]>:

Could folks that have used fts2 in production apps/systems relate their
experiences to the group?

I would very much be interested in how folks are using it, how well it
performs with large data and general impressions.


I had a look at it for a desktop search app, but the fact that you
must store data + index and cannot keep only the index made the db too
large. So I'm hoping for a version where you can use only the index.

As an alternative, I might use CLucene + virtual tables.

Cheers,
Jos




Re: [sqlite] Recovery After Crash

2007-06-18 Thread Christian Smith

Asif Lodhi uttered:


Hi Kees,

Thanks for replying.

On 6/17/07, Kees Nuyt <[EMAIL PROTECTED]> wrote:

>... thankful if you experts would give me an "accurate" and fair
>picture of the crash-recovery aspects of SQLite - without any hype.

I'm not sure if you would qualify this as hype, but sqlite is
used in many end-user products, ranging from operating systems ..


Basically, I intend to use sqlite's data capacity as well - I mean
2^41 bytes - for reasonably sized databases. Well, not as much as 2^41
but somewhere around 2^32 to 2^36 bytes. I would like to know if the
"crash-recovery" feature will still work and the high-performance
mentioned will be valid even if I have this kind of a data volume. And
yes, I am talking about highly normalized database schemas with number
of tables exceeding 80. Please reply assuming I will come up with
optimized db & query designs - keeping in view general rules for
database/query optimization.



SQLite is not optimised for large datasets. Data recovery will work, as 
advertised, in the general case including large datasets, but the memory 
footprint of the library increases as the size of the database grows.


Consider using larger pages than the default 1024 bytes to limit the 
number of pages SQLite must track.


Other than that, the performance should degrade predictably with 
increasing datasets, given that SQLite uses the same BTree(+) based 
algorithms used by most database engines.





--
Thanks again and best regards,

Asif




--
/"\
\ /ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
 X   - AGAINST MS ATTACHMENTS
/ \




Re: [sqlite] Re: SQL query help

2007-06-18 Thread Jeff Godfrey


- Original Message - 
From: "Igor Tandetnik" <[EMAIL PROTECTED]>

To: "SQLite" 
Sent: Monday, June 18, 2007 1:17 PM
Subject: [sqlite] Re: SQL query help



Jeff Godfrey <[EMAIL PROTECTED]>
wrote:

I have a table which contains (among other things), a "name" column
and a "version" column (a software asset table).  I need a query 
that

will group all like "names" together in a single record, and return
the latest "version" (the largest value) for each group.


select name, max(version)
from asset
group by name;

Igor Tandetnik


Igor,

Perfect - thank you.  It's so simple once you see it done... ;^)

Jeff 






Re: [sqlite] Journal File Optimization

2007-06-18 Thread Christian Smith

Andre du Plessis uttered:


How can one optimize the creation of the journal file? The problem is
this: for our system, which is event-based, each message needs to
be inserted and committed to the database (guaranteed). This results in
a commit per insert, which was obviously unacceptably slow, and according
to the lists, by design you need to do more bulk transactions. With
some effort I modified it with a combination of bulk and temporary
tables to do quite a few at a time, but the problem remains that
committing is terribly slow.




What sort of latency is acceptable for commits? How many event sources are 
there? How do you track commits against the event source?


If you require guaranteed single inserts, a client/server database may be 
better performing, as writes can be better optimised in a redo/undo
journal used by client/server systems, rather than the undo journal used 
by SQLite.






I'm thinking the journal file, the fact that it is created written and
deleted each time which is slowing it down, Is there not a way to create
and set up a permanent journal file pre-allocated to a certain size, so
sqlite does not have to go through the OS each time to create and delete
this file?



If you require multi-process access to the database, then Dan's suggestion 
may not work, and you'll have to optimise the sync case. You can do this, 
as suggested elsewhere, by turning off synchronous updates. This leaves you 
at the mercy of the OS to guard against crashes.


Depending on your OS, you may be able to optimise OS sync. Solaris ZFS is 
a tree based FS, a bit like WAFL by NetApp. Synchronous writes are 
aggregated and written to minimize seeks. I've not done any benchmarking 
on ZFS, so YMMV (time to power up the Solaris Express partition, me 
thinks.)


Linux ext3 can write data to the same journal that FS meta-data is written 
to, which can greatly enhance single insert speeds due to the journal 
being written at disk IO speed without seeks. Tests I've done indicate a 
doubling of performance over regular ordered data writing that is the 
default for ext3.


Finally, on NetBSD (where LFS is still actively developed) you may see 
performance improvements using LFS, for similar reasons to the ext3 case 
above. I've not, however, tried that recently, so again YMMV.


Of course, if you're not running Solaris, Linux or NetBSD, you may be 
stuck as not many other OS/FS support such optimisations.







Along the same lines, is there a way to fix the initial size of the DB,
i.e. pre-allocate 700 MB for instance, so that no growing of the db file
is needed until that space is exceeded? That may also speed things up.



Probably won't improve speed that much, especially as you approach your 
working database size. Avoid vacuuming your database, so that free pages 
are recycled and the database size will stabilise. If your dataset is 
likely to constantly grow without bounds, then SQLite may not be your 
optimal choice in the long run, and a client/server database may provide 
better performance over the long term.


You might also try increasing your page size, up to the maximum of 32768, 
so that new page allocations are required less often.
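The batching advice from earlier in the thread can be illustrated in a few lines. Here is a hedged sketch using Python's sqlite3 module (file name and schema invented): one explicit transaction pays the journal create/write/delete cost once for the whole batch, and the page size is set before the database file is first written, which is when SQLite fixes it.

```python
import sqlite3, tempfile, os

path = os.path.join(tempfile.mkdtemp(), "events.db")
conn = sqlite3.connect(path, isolation_level=None)  # autocommit; we manage transactions

# page_size only takes effect before the first page is written.
conn.execute("PRAGMA page_size = 32768")
conn.execute("CREATE TABLE event (id INTEGER PRIMARY KEY, payload TEXT)")

# One COMMIT (and one journal cycle) for 1000 inserts, instead of 1000.
conn.execute("BEGIN")
for i in range(1000):
    conn.execute("INSERT INTO event (payload) VALUES (?)", ("msg %d" % i,))
conn.execute("COMMIT")

count = conn.execute("SELECT COUNT(*) FROM event").fetchone()[0]
print(count)  # 1000
```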









Thank you very much in advance.






--
/"\
\ /ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
 X   - AGAINST MS ATTACHMENTS
/ \




Re: [sqlite] SQL query help

2007-06-18 Thread P Kishor

On 6/18/07, Jeff Godfrey <[EMAIL PROTECTED]> wrote:

Not seeing this on the list 1.5 hrs after posting, I thought I'd try
again.  Sorry if this is a duplicate...

Jeff

=

Hi All,

I have a table which contains (among other things), a "name" column
and a "version" column (a software asset table).  I need a query that
will group all like "names" together in a single record, and return
the latest "version" (the largest value) for each group.  What I have
so far is this:

SELECT name, version
FROM asset
GROUP BY name
ORDER BY name ASC, version DESC

While the above seems to return the expected results, I'm not
convinced that I'm actually controlling the sort order, as changing
"version DESC" to "version ASC" does not return the *earliest* version
as I'd expect.  I assume the record that will be returned has already
been selected at the "GROUP BY" stage and therefore I have no control
over it at the "ORDER BY" stage?  I know, I need to do some more
reading... ;^)

Thanks for any input.


Jeff, how about something like

SELECT name, MAX(version) AS latest
FROM asset
GROUP BY name

--
Puneet Kishor http://punkish.eidesis.org/
Nelson Inst. for Env. Studies, UW-Madison http://www.nelson.wisc.edu/
Open Source Geospatial Foundation http://www.osgeo.org/education/
S&T Policy Fellow, National Academy of Sciences http://www.nas.edu/
-
collaborate, communicate, compete
=




[sqlite] SQL query help

2007-06-18 Thread Jeff Godfrey
Not seeing this on the list 1.5 hrs after posting, I thought I'd try 
again.  Sorry if this is a duplicate...


Jeff

=

Hi All,

I have a table which contains (among other things), a "name" column 
and a "version" column (a software asset table).  I need a query that 
will group all like "names" together in a single record, and return 
the latest "version" (the largest value) for each group.  What I have 
so far is this:


SELECT name, version
FROM asset
GROUP BY name
ORDER BY name ASC, version DESC

While the above seems to return the expected results, I'm not 
convinced that I'm actually controlling the sort order, as changing 
"version DESC" to "version ASC" does not return the *earliest* version 
as I'd expect.  I assume the record that will be returned has already 
been selected at the "GROUP BY" stage and therefore I have no control 
over it at the "ORDER BY" stage?  I know, I need to do some more 
reading... ;^)


Thanks for any input.

Jeff 






[sqlite] 3.4.0 make

2007-06-18 Thread Ken

 make distclean 
   does not seem to clear off the sqlite3.c file or the tsrc directory.
 
 
 My current build script:
 
 configure
 make distclean
 configure --enable-threadsafe --disable-tcl --enable-tempstore
 make sqlite3.c
 make
 
 
 How does one now build static and shared libraries based upon the amalgamated
source? 
 
 Thanks for any help.
 
 


[sqlite] Need help linking into Delphi Application

2007-06-18 Thread John Elrick
I've been using the Delphi ASGSqlite components with static linking for 
some time with version 3.3.13.  I'd like to move up to 3.4.0; however, 
no one seems to have documented how to do this yet.


I tried compiling the Amalgamation with Borland C++ 5.0 and it generates 
the obj file nicely.  However, when I attempt to link the obj into my 
application, I am getting an "unsatisfied forward declaration __streams".


I'm a Delphi programmer and it is more than frustrating attempting to 
figure out what libraries are missing and how to even find them in the 
wide, wonderful world.  I programmed in C back in the '80's, so my skill 
set there is beyond rusty.


Can someone point me to resources so I can learn enough to solve these 
types of issues on my own in the future?  I've tried Google and it 
hasn't given me anything of value, but I could be asking the wrong 
questions.



John Elrick




[sqlite] Re: SQL query help

2007-06-18 Thread Igor Tandetnik

Jeff Godfrey <[EMAIL PROTECTED]>
wrote: 

I have a table which contains (among other things), a "name" column
and a "version" column (a software asset table).  I need a query that
will group all like "names" together in a single record, and return
the latest "version" (the largest value) for each group. 


select name, max(version)
from asset
group by name;

Igor Tandetnik




Re: [sqlite] Proper way to transfer a live sqlite database

2007-06-18 Thread Rich Rattanni

Thank you Igor and Christian.  I appreciate your help.


On 6/18/07, Christian Smith <[EMAIL PROTECTED]> wrote:

Rich Rattanni uttered:

> The databases will be in flux, and I didn't necessarily want to suspend
> the application that performs reads and writes on the database.
> A simple copy worries me because it seems like messing with SQLite on
> the file level is dangerous, since you circumvent all the protection
> mechanisms that provide fault tolerance.  I didn't want to have to
> worry about whether the database has a journal file that needs to be
> copied, or any other situation like that.  I figured using the SQLite
> API to do the copy would afford me some protection against corruption.


You're right to be cautious. Never copy an in use database if that
database could possibly be updated.

If you open the database, and obtain a SQLite read lock on it, you can be
sure it is not going to be modified, and can be safely copied at the OS
level.

Best way of doing this is to execute a 'BEGIN IMMEDIATE', copying the
database file, then executing a 'ROLLBACK' to end the transaction.

To limit the time the database is locked, I suggest copying the file to a
local filesystem first, then transferring across the network after the
lock is released.

Christian



--
 /"\
 \ /ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
  X   - AGAINST MS ATTACHMENTS
 / \








[sqlite] SQL query help

2007-06-18 Thread Jeff Godfrey
Hi All,

I have a table which contains (among other things), a "name" column and a 
"version" column (a software asset table).  I need a query that will group all 
like "names" together in a single record, and return the latest "version" (the 
largest value) for each group.  What I have so far is this:

SELECT name, version 
FROM asset
GROUP BY name
ORDER BY name ASC, version DESC

While the above seems to return the expected results, I'm not convinced that 
I'm actually controlling the sort order, as changing "version DESC" to "version 
ASC" does not return the *earliest* version as I'd expect.  I assume the record 
that will be returned has already been selected at the "GROUP BY" stage and 
therefore I have no control over it at the "ORDER BY" stage?  I know, I need to 
do some more reading... ;^)

Thanks for any input.

Jeff

RE: [sqlite] Version 3.4.0

2007-06-18 Thread Andreas Kupries

> > Is the incremental blob I/O available at the level of the Tcl binding ?
> > From the online information this seems to be currently available only
> > at the C level.
>
> Yes.  There is a new (undocumented!) "incrblob" method on the
> database object that opens a channel to BLOB.  The syntax is
> like this:
>
> db incrblob ?-readonly? ?DBNAME? TABLENAME COLUMN ROWID
>
> I neglected to update the documentation to describe how this
> works.  Please open a ticket for me so that I do not forget
> again...

Done.
http://www.sqlite.org/cvstrac/tktview?tn=2424

Oh. Oh. Will/does this work with fts1/2/... etc as well ?
Adding question to the ticket.

--
Andreas Kupries <[EMAIL PROTECTED]>
Developer @ http://www.ActiveState.com
Tel: +1 778-786-1122





Re: [sqlite] Optimization of equality comparison when NULL involved

2007-06-18 Thread Joe Wilson
--- Sean Cunningham <[EMAIL PROTECTED]> wrote:
> I have very large datasets and have found that the built-in union,
> intersect, and except operations do not seem to use indices
> (would be happy to be proven wrong here).  As such, they
> are not very speedy with very large data sets.

A patch to speed up queries on a view (or a subquery) with 
compound selects:

  http://www.sqlite.org/cvstrac/tktview?tn=1924
  http://marc.info/?l=sqlite-users&m=117958960408282



   





Re: [sqlite] Proper way to transfer a live sqlite database

2007-06-18 Thread Christian Smith

Rich Rattanni uttered:


The databases will be in flux, and I didn't necessarily want to suspend
the application that performs reads and writes on the database.
A simple copy worries me because it seems like messing with SQLite on
the file level is dangerous, since you circumvent all the protection
mechanisms that provide fault tolerance.  I didn't want to have to
worry about whether the database has a journal file that needs to be
copied, or any other situation like that.  I figured using the SQLite
API to do the copy would afford me some protection against corruption.



You're right to be cautious. Never copy an in use database if that 
database could possibly be updated.


If you open the database, and obtain a SQLite read lock on it, you can be 
sure it is not going to be modified, and can be safely copied at the OS 
level.


Best way of doing this is to execute a 'BEGIN IMMEDIATE', copying the 
database file, then executing a 'ROLLBACK' to end the transaction.


To limit the time the database is locked, I suggest copying the file to a 
local filesystem first, then transferring across the network after the 
lock is released.


Christian



--
/"\
\ /ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
 X   - AGAINST MS ATTACHMENTS
/ \




[sqlite] Re: Proper way to transfer a live sqlite database

2007-06-18 Thread Igor Tandetnik

Rich Rattanni <[EMAIL PROTECTED]> wrote:

The databases will be in flux, and I didn't necessarily want to suspend
the application that performs reads and writes on the database.
A simple copy worries me because it seems like messing with SQLite on
the file level is dangerous, since you circumvent all the protection
mechanisms that provide fault tolerance.  I didn't want to have to
worry about whether the database has a journal file that needs to be
copied, or any other situation like that.  I figured using the SQLite
API to do the copy would afford me some protection against corruption.


Open a transaction with BEGIN IMMEDIATE, copy the file over, then close 
the transaction. You would have the exact same protection as a regular 
database reader (a SELECT statement) is afforded.


Igor Tandetnik 






Re: [sqlite] Version 3.4.0

2007-06-18 Thread drh
"Andreas Kupries" <[EMAIL PROTECTED]> wrote:
> > SQLite version 3.4.0 is now available for download from the SQLite
> > website and from the back-up site:
> >
> > Version 3.4.0 also includes support for new features such as:
> >
> >*  Incremental BLOB I/O
> >
> > Additional information about these new features is available online.
> 
> Is the incremental blob I/O available at the level of the Tcl binding ?
> From the online information this seems to be currently available only at the
> C level.
> 

Yes.  There is a new (undocumented!) "incrblob" method on the
database object that opens a channel to BLOB.  The syntax is
like this:

db incrblob ?-readonly? ?DBNAME? TABLENAME COLUMN ROWID

I neglected to update the documentation to describe how this
works.  Please open a ticket for me so that I do not forget
again...
--
D. Richard Hipp <[EMAIL PROTECTED]>





[sqlite] FTS2 Experiences?

2007-06-18 Thread Russell Leighton


Could folks that have used fts2 in production apps/systems relate their 
experiences to the group?


I would very much be interested in how folks are using it, how well it 
performs with large data and general impressions.


Thanks in advance.

Russ





Re: [sqlite] Proper way to transfer a live sqlite database

2007-06-18 Thread Rich Rattanni

The databases will be in flux, and I didn't necessarily want to suspend
the application that performs reads and writes on the database.
A simple copy worries me because it seems like messing with SQLite on
the file level is dangerous, since you circumvent all the protection
mechanisms that provide fault tolerance.  I didn't want to have to
worry about whether the database has a journal file that needs to be
copied, or any other situation like that.  I figured using the SQLite
API to do the copy would afford me some protection against corruption.

On 6/18/07, Fred Williams <[EMAIL PROTECTED]> wrote:

It would most likely be much quicker (and simpler) just to utilize the
OS's file coping feature to copy the table.  What would be gained with
the attaching databases approach over just a straight file copy?

Fred

> -Original Message-
> From: Rich Rattanni [mailto:[EMAIL PROTECTED]
> Sent: Monday, June 18, 2007 10:20 AM
> To: sqlite-users@sqlite.org
> Subject: [sqlite] Proper way to transfer a live sqlite database
>
>
> I was writing to ask some opinions on how to perform a download of a
> live sqlite database.  Basically I have a device, which stores all
> manner of data in a sqlite database.  Periodically I want to download
> the data to a central server for viewing.  I discussed it with my
> colleagues, and they felt that I should just copy the file to the
> server.  However I was thinking of having a blank database with
> identical schema to the database I am copying.  Then when the download
> occurs, I would ATTACH the live database to the blank database, and
> query the data from one to the other.  Then I would close the cloned
> version and offload that to the server.
>
> The standard questions now follow...
> Is this an acceptable way?
> Is there a better/best way?
>
> Thanks for any input,
> Rich Rattanni
>
>









RE: [sqlite] Version 3.4.0

2007-06-18 Thread Andreas Kupries

> SQLite version 3.4.0 is now available for download from the SQLite
> website and from the back-up site:
>
> Version 3.4.0 also includes support for new features such as:
>
>*  Incremental BLOB I/O
>
> Additional information about these new features is available online.

Is the incremental blob I/O available at the level of the Tcl binding ?
From the online information this seems to be currently available only at the
C level.

--
Andreas Kupries <[EMAIL PROTECTED]>
Developer @ http://www.ActiveState.com
Tel: +1 778-786-1122






Re: [sqlite] Version 3.4.0

2007-06-18 Thread Joe Wilson
--- [EMAIL PROTECTED] wrote:
> Gerry Snyder <[EMAIL PROTECTED]> wrote:
> > [EMAIL PROTECTED] wrote:
> > > SQLite version 3.4.0 is now available for download
> > >   
> > 
> > The tcl bindings for windows appear to be missing. Is this deliberate?
> > 
> 
> Build-script bug.  Now fixed.  Try again, please.

Check-in [4086] http://www.sqlite.org/cvstrac/chngview?cn=4086 
does not make sense to me.

It now overwrites the previously created tclsqlite3.dll file in
the same script and produces no sqlite3.dll file.
It seemed to be correct before in cvs version 1.9.


   




Re: [sqlite] Version 3.4.0

2007-06-18 Thread Gerry Snyder

[EMAIL PROTECTED] wrote:

Gerry Snyder <[EMAIL PROTECTED]> wrote:
  

The tcl bindings for windows appear to be missing. Is this deliberate?



Build-script bug.  Now fixed.  Try again, please.

(Note that I have observed that the DLL is now generated in my
cross-compiler environment, but I did not actually attempt to use
that DLL.  I trust you will let me know if anything is amiss ;-))
  
I hate to be so predictable, but I think there is a problem. The new 
.zip file is about 60 KB smaller than the 3.3 series, and the .dll file 
contained in it is about 200 KB smaller. A brief test with 3.3.17 shows:


(bin) 1 % load /sqlite/tclsqlite3.dll
(bin) 2 % sqlite3 db gigo.db3
(bin) 3 % db
wrong # args: should be "db SUBCOMMAND ..."
(bin) 4 %

while with 3.4.0 I get:

(bin) 1 % load /sqlite/tclsqlite3.dll
couldn't find procedure Tclsqlite_Init
(bin) 2 %

Thank you,

Gerry




Re: [sqlite] Update of multiple columns

2007-06-18 Thread John Elrick

T&B wrote:
In case the original question has been clouded by discussions of 
running a general store and the arson habits of the Hand Crafted 
Guild, allow me to crystalize my purely SQLite question:


I know I can update via:

  update Table1
set
  c1 = (select d1 from Table2 where Table2.id = desired_id)
, c2 = (select d2 from Table2 where Table2.id = desired_id)
, c3 = (select d3 from Table2 where Table2.id = desired_id)
, cn = (select dn from Table2 where Table2.id = desired_id)
  where
rowid = desired_rowid

But that executes the same where clause n times, so scans through 
Table2 to find the same matching row n times.


Is it possible to construct an update that executes a where clause 
once to locate all of the desired columns?


The obvious solution would be to do it programatically, however, I'm 
guessing that doesn't meet your needs?



John




Re: [sqlite] Optimization of equality comparison when NULL involved

2007-06-18 Thread Scott Hess

On 6/18/07, Sean Cunningham <[EMAIL PROTECTED]> wrote:

There was talk in the mailing list a while back about creating a new
operator that would act as a superset of '==' which would treat
NULL==NULL as True.  I have seen this in some other database.
Anybody know if this is on the roadmap?


It wouldn't be hard to write a user-defined function to accomplish
this.  It naturally wouldn't be an infix operator.

-scott
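Here is a hedged sketch of Scott's suggestion, using Python's sqlite3 module (table names and data invented): a user-defined eq_null function that treats NULL as equal to NULL. Note that SQLite's IS operator (a.value IS b.value) also compares NULLs as equal, so the same join can be written without a custom function.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# A user-defined function that treats NULL as equal to NULL.
# SQLite passes NULL to Python as None, and None == None is True.
conn.create_function("eq_null", 2, lambda a, b: 1 if a == b else 0)

conn.executescript("""
CREATE TABLE a (path TEXT, value TEXT);
CREATE TABLE b (path TEXT, value TEXT);
INSERT INTO a VALUES ('p1', 'v'), ('p2', NULL);
INSERT INTO b VALUES ('p1', 'v'), ('p2', NULL);
""")
rows = conn.execute("""
    SELECT a.path FROM a JOIN b ON a.path = b.path
    WHERE eq_null(a.value, b.value) ORDER BY a.path
""").fetchall()
print(rows)  # [('p1',), ('p2',)] -- the plain = operator would drop p2
```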




Re: [sqlite] Version 3.4.0

2007-06-18 Thread Andrew Finkenstadt

On 6/18/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:


SQLite version 3.4.0 is now available for download from the SQLite



and it's in production use here as of this morning.

Version 3.4.0 also includes support for new features such as:


   *  Incremental BLOB I/O
   *  The zeroblob() SQL function
   *  Incremental vacuum

Additional information about these new features is available online.




Much thanks!  Just this morning I was wishing for an "indirect blob"
interface to avoid having to blow through megabytes of memory when loading
blobs from the database into my own container classes.

By popular request, the preprocessed source code is now available

again as individual C code files.  The amalgamation is also still
available but has a different filename for downloading.




Excellent news!


Re: [sqlite] Optimization of equality comparison when NULL involved

2007-06-18 Thread Sean Cunningham


On Jun 15, 2007, at 5:27 PM, Scott Hess wrote:



select tableA.path, tableA.value from tableA,tableB where
tableA.path=tableB.path and tableA.value=tableB.value union
select tableA.path, tableA.value from tableA,tableB where
tableA.path=tableB.path and tableA.value IS NULL AND tableB.value IS
NULL;




I think I oversimplified the problem a little bit.  In the example I
gave, there were just two columns.  In the problem I have to solve,
there are n columns, and any item in any column might be NULL.  The
above approach will work well with just two columns, but with multiple
columns you would have to take an iterative approach, building
intermediate results and intersecting with the first column.

What I am trying to do is build efficient set operations.  I have very
large datasets and have found that the built-in union, intersect, and
except operations do not seem to use indices (would be happy to be
proven wrong here).  As such, they are not very speedy with very large
data sets.


If you really can treat null as '', then you might be better off
defining the column as NOT NULL DEFAULT ''.


I am leaning toward avoiding NULL and using some other token which I
can interpret as "no data".  Not an ideal solution, but it should scale.

There was talk on the mailing list a while back about creating a new
operator that would act as a superset of '==' and would treat
NULL==NULL as True.  I have seen this in some other databases.  Anybody
know if this is on the roadmap?







Re: [sqlite] Version 3.4.0

2007-06-18 Thread drh
Gerry Snyder <[EMAIL PROTECTED]> wrote:
> [EMAIL PROTECTED] wrote:
> > SQLite version 3.4.0 is now available for download
> >   
> 
> The tcl bindings for windows appear to be missing. Is this deliberate?
> 

Build-script bug.  Now fixed.  Try again, please.

(Note that I have observed that the DLL is now generated in my
cross-compiler environment, but I did not actually attempt to use
that DLL.  I trust you will let me know if anything is amiss ;-))

--
D. Richard Hipp <[EMAIL PROTECTED]>





[sqlite] Capturing data at a point in time

2007-06-18 Thread

Subject was: [sqlite] Trigger update of multiple columns

I've change the subject since this thread is discussing a tangent to  
my original query.


Hi Fred,

Therefore, the data items contained in each row of this table  
should be a permanent reflection of
the master tables' data content at an exact point in time and must  
not remain linked to the related dynamic master tables.


Exactly what I'm after here :-) Thanks for cutting through my haze ;-)

Thanks,
Tom





[sqlite] Update of multiple columns

2007-06-18 Thread
In case the original question has been clouded by discussions of  
running a general store and the arson habits of the Hand Crafted  
Guild, allow me to crystalize my purely SQLite question:


I know I can update via:

  update Table1
set
  c1 = (select d1 from Table2 where Table2.id = desired_id)
, c2 = (select d2 from Table2 where Table2.id = desired_id)
, c3 = (select d3 from Table2 where Table2.id = desired_id)
, cn = (select dn from Table2 where Table2.id = desired_id)
  where
rowid = desired_rowid

But that executes the same where clause n times, so scans through  
Table2 to find the same matching row n times.


Is it possible to construct an update that executes a where clause  
once to locate all of the desired columns?


Thanks,
Tom





Re: [sqlite] Trigger update of multiple columns

2007-06-18 Thread

Hi Ed,


I tried to update a list of columns:
UPDATE t SET (c1, c2, c3) = (SELECT c1, c2, c3) FROM t2 WHERE ..
but this syntax is not accepted as you probably already know.


Thanks for your very thoughtful reply. That is exactly the type of  
syntax I'm after, with only one executed WHERE clause for multiple  
columns. But, as you say, it doesn't work in SQLite.


I may promote [INSERT OR] REPLACE then. It is syntactically  
described in the SQLite documentation but for the semantics you may  
see the original MySQL doc.

 http://dev.mysql.com/doc/refman/5.0/en/replace.html
It is the only way that I see to do the update with only a single  
scan of the product table.


Yes, a single scan is the objective.

Hmmm, yes, I can see how REPLACE might be useful. Perhaps something  
like:
REPLACE INTO t ( id, c1, c2, c3 ) SELECT id, c1, c2, c3 FROM t2
WHERE id = new.id


REPLACE, as I understand it, does rely on the id field being created  
as a PRIMARY KEY, but that should be fine (and most likely already  
the case).


But may be REPLACE causes troubles in combination with triggers.  
Because indirectly it performs a DELETE and a new INSERT.


I guess in a trigger it would look something like:

CREATE TRIGGER trigger
AFTER UPDATE OF id
ON t
BEGIN
  REPLACE INTO t ( id, c1, c2, c3 )
    SELECT id, c1, c2, c3 FROM t2 WHERE id = new.id;
END

I'll have to check if that's allowed.


Other suggestions should be welcome.


Yes, other suggestions would be very welcome.

Thanks,
Tom
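A trigger along the lines Tom sketches does appear to work. Below is a hedged, runnable version using Python's sqlite3 module (tables and data invented, and an AFTER INSERT trigger is used for the demonstration); the single REPLACE ... SELECT locates the matching t2 row once, and with recursive triggers off (the default) the REPLACE does not re-fire the trigger:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t  (id INTEGER PRIMARY KEY, c1, c2, c3);
CREATE TABLE t2 (id INTEGER PRIMARY KEY, c1, c2, c3);
INSERT INTO t2 VALUES (7, 'a', 'b', 'c');

-- One REPLACE with one SELECT: the WHERE clause runs once,
-- unlike a correlated subquery per column.
CREATE TRIGGER copy_cols AFTER INSERT ON t
BEGIN
  REPLACE INTO t (id, c1, c2, c3)
    SELECT id, c1, c2, c3 FROM t2 WHERE id = new.id;
END;
""")
conn.execute("INSERT INTO t (id) VALUES (7)")
row = conn.execute("SELECT id, c1, c2, c3 FROM t WHERE id = 7").fetchone()
print(row)  # (7, 'a', 'b', 'c')
```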





Re: [sqlite] Trigger update of multiple columns

2007-06-18 Thread Trey Mack
I want to insert the transactions data  (product_id, buy, sell, desc) into 
the sale_products table. But I  want a mechanism whereby if I enter the 
product_id, then the buy,  sell, desc columns are auto entered (copied) 
from their corresponding  row in the products table.


Given:

CREATE TABLE products ( -- I believe this is a view of UNIONs, but this 
should be a close approximation

 product_id INTEGER PRIMARY KEY,
 buy REAL,
 sell REAL,
 desc TEXT
);

CREATE TABLE sales (
 sale_id INTEGER PRIMARY KEY,
 product_id INTEGER,   -- FK to products table
 customer_id INTEGER   -- FK to customes table
);

CREATE TABLE sale_products (
 sale_id INTEGER,  -- FK to sales table
 product_id INTEGER,   -- FK to products table
 buy REAL,
 sell REAL,
 desc TEXT
);

Yes, getting your database closer to 3NF would be better. But as a quick 
fix, could you do this on insert just after entering the sale record into 
the sales table?


INSERT INTO sale_products (sale_id, product_id, buy, sell, desc)
   SELECT s.sale_id, s.product_id, p.buy, p.sell, p.desc
   FROM sales s INNER JOIN products p
   ON s.product_id = p.product_id
   WHERE s.sale_id = @sale_id; -- sqlite3_last_insert_rowid()

Or, if you know sale_id and product_id, save the JOIN:

INSERT INTO sale_products (sale_id, product_id, buy, sell, desc)
   SELECT @sale_id, @product_id, buy, sell, desc
   FROM products
   WHERE product_id = @product_id;

- Trey
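Trey's second form can be exercised directly. Here is a sketch with Python's sqlite3 module (values invented); note that desc is an SQL keyword, so it is quoted in this version of the schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# "desc" is an SQL keyword, so it must be quoted as a column name.
conn.executescript("""
CREATE TABLE products (product_id INTEGER PRIMARY KEY,
                       buy REAL, sell REAL, "desc" TEXT);
CREATE TABLE sale_products (sale_id INTEGER, product_id INTEGER,
                            buy REAL, sell REAL, "desc" TEXT);
INSERT INTO products VALUES (1, 2.50, 4.00, 'widget');
""")

sale_id, product_id = 100, 1   # both known at insert time, so no JOIN needed
conn.execute("""
    INSERT INTO sale_products (sale_id, product_id, buy, sell, "desc")
    SELECT ?, product_id, buy, sell, "desc" FROM products WHERE product_id = ?
""", (sale_id, product_id))

snap = conn.execute("SELECT * FROM sale_products").fetchone()
print(snap)  # (100, 1, 2.5, 4.0, 'widget') -- a snapshot of the product row
```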






RE: [sqlite] Proper way to transfer a live sqlite database

2007-06-18 Thread Fred Williams
It would most likely be much quicker (and simpler) just to utilize the
OS's file coping feature to copy the table.  What would be gained with
the attaching databases approach over just a straight file copy?

Fred

> -Original Message-
> From: Rich Rattanni [mailto:[EMAIL PROTECTED]
> Sent: Monday, June 18, 2007 10:20 AM
> To: sqlite-users@sqlite.org
> Subject: [sqlite] Proper way to transfer a live sqlite database
>
>
> I was writing to ask some opinions on how to perform a download of a
> live sqlite database.  Basically I have a device, which stores all
> manner of data in a sqlite database.  Periodically I want to download
> the data to a central server for viewing.  I discussed it with my
> colleagues, and they felt that I should just copy the file to the
> server.  However I was thinking of having a blank database with
> identical schema to the database I am copying.  Then when the download
> occurs, I would ATTACH the live database to the blank database, and
> query the data from one to the other.  Then I would close the cloned
> version and offload that to the server.
>
> The standard questions now follow...
> Is this an acceptable way?
> Is there a better/best way?
>
> Thanks for any input,
> Rich Rattanni
>
>





RE: [sqlite] Trigger update of multiple columns

2007-06-18 Thread Fred Williams
I think maybe "Normalization" could be one of the best underutilized
features of modern database design by most practicing "Database
Consultants."  Never have I seen such a rule with so many exceptions!
Each normalization decision must be prefaced with "It Depends" in every
instance.  Ah the potential billable time that can be spent with each
normalization effort boggles the mind :-)

If the subject record is to be looked on as a "history" record documenting
the exact circumstances of a "sales" transaction, then it is no longer a
dynamic entity subject to the variegations of the ever-changing data
items within the related "master" tables.  Therefore, the data items
contained in each row of this table should be a permanent reflection of
the master tables' data content at an exact point in time and must not
remain linked to the related dynamic master tables.  (I.e. the subject
table is no longer a "transaction" table in the true sense but a static
snapshot of a given physical activity: the exact definition of a
particular "sales" event, not subject to change.)

But then again...

Fred


> -Original Message-
> From: John Stanton [mailto:[EMAIL PROTECTED]
> Sent: Monday, June 18, 2007 9:58 AM
> To: sqlite-users@sqlite.org
> Subject: Re: [sqlite] Trigger update of multiple columns
>
>
> If you can automatically enter data then you are violating the
> normalization rules.  Maybe you should get a book on database
> design and
> become familiar with some of the fundamentals.
>
> T&B wrote:
> > Hi John,
> >
> >> You have a reference data set which is accessed to get the
> current
> >> value of reference elements and store transactions to
> record  events.
> >> The transaction trails provide event history.
> >
> >
> > Yes, agreed.
> >
> >> A price is in the reference data, its value transferred to a
> >> transaction is no longer a price, it is a sale which
> represents the
> >> value of the price at the time the event occurred.
> >
> >
> > Yes.
> >
> >> How about reading your price data etc and just inserting a
> >> transaction into your sales table?
> >
> >
> > Yes, that's what I'm doing. I just want to make it more efficient.
> >
> > Technically it's the sale_products table (since each sale has many
> > products etc), but yes, I want to insert the transactions data
> > (product_id, buy, sell, desc) into the sale_products table.
> But I  want
> > a mechanism whereby if I enter the product_id, then the
> buy,  sell, desc
> > columns are auto entered (copied) from their corresponding
> row in the
> > products table.
> >
> > Tom
> >
> >
> >
> >
> --
> ---
> >
> > To unsubscribe, send email to [EMAIL PROTECTED]
> >
> --
> ---
> >
> >
>
>
> --
> ---
> To unsubscribe, send email to [EMAIL PROTECTED]
> --
> ---
>


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Trigger update of multiple columns

2007-06-18 Thread

Hi Gerry,

> keep each version of each product's description in the products
> table, along with the date that description became valid. Then the
> product ID and date in each transaction would reference the
> appropriate product table data.


In certain circumstances, I can see how that would be useful.  
However, it's a problem here because:


1. The products table has many more rows and columns than I need to  
copy into the occasional sale. So maintaining the products table  
beyond this need is cumbersome.


2. When a product changes (such as price increase), I don't want to  
add a whole new product_id and mostly duplicate information, into the  
already huge products database.


3. 99% of the items in the Products database get deleted without any  
sale being made against them. So keeping them all around for  
historical reasons would multiply the size of the database many fold.  
It also raises issues of tracking what products can be deleted and  
what needs to be kept since a sale used it.


4. I only need to "track" a change in a product if a sale is made  
against it. So it makes sense to capture the details of that product  
in the sale when the sale is made.


So, what I need is, that when (and only when) a product is added to a  
sale, then that product's buy price, sell price and description are  
stored against that sale.


I hope that clarifies the situation. Thanks for your thoughts and time.

Tom





Re: [sqlite] SQLITE_CORRUPT recover

2007-06-18 Thread drh
"Sabyasachi Ruj" <[EMAIL PROTECTED]> wrote:
> I still fail to understand what should I synchronize on. I am *not* sharing
> sqlite* across multiple threads.
> 

If you compile SQLite so that it is threadsafe (-DTHREADSAFE=1) and
if you do not share sqlite3* pointers across threads, then you should
not have to do any synchronization.  

If you are failing to synchronize threads correctly, then you should
expect to see SQLITE_MISUSE errors, not SQLITE_CORRUPT errors.

--
D. Richard Hipp <[EMAIL PROTECTED]>





Re: [sqlite] Version 3.4.0

2007-06-18 Thread Gerry Snyder

[EMAIL PROTECTED] wrote:

SQLite version 3.4.0 is now available for download
  


The tcl bindings for windows appear to be missing. Is this deliberate?

Thanking you for a wonderful product,

Gerry




Re: [sqlite] SQLITE_CORRUPT recover

2007-06-18 Thread John Stanton
You said that you were sharing Sqlite between threads by opening a 
connection in each thread.  Sqlite is a single resource and must be 
sync'd somehow if it has multiple users.


Complaining about it is like complaining that the sky is blue.  It is 
blue because of Rayleigh scattering, a fundamental phenomenon which 
cannot be changed by wishes.  You just have to get used to it.


We successfully share Sqlite between threads.  Sometimes we use a 
connection per thread and sometimes a single connection.  The single 
connection is optimal because it enhances the cache usage.  We sync with 
mutexes.  The logic is very simple.


Sabyasachi Ruj wrote:

I still fail to understand what should I synchronize on. I am *not* sharing
sqlite* across multiple threads.

On 6/18/07, John Stanton <[EMAIL PROTECTED]> wrote:



It is fundamental computer science, CS101 you might say.  Pick up a
textbook on basic computing.

Sabyasachi Ruj wrote:
> But can you tell me where is this documented please?
>
> On 6/18/07, John Stanton <[EMAIL PROTECTED]> wrote:
>
>>
>> If you knew the answer then why did you ask the question?  You can
apply
>> regular CS or look for miracles.
>>
>> You might be better off using a server based RDBMS like Oracle or
>> PostgreSQL where the application programmer is insulated from
>> synchronization issues.  Sqlite has the "lite" in its name for a very
>> good reason.  It is designed for embedded applications, not as an
>> enterprise DBMS.
>>
>> Sabyasachi Ruj wrote:
>> > But the following link
>> > http://www.sqlite.org/cvstrac/wiki/wiki?p=MultiThreading
>> > says nothing that I have to synchronize at the application level to
>> create
>> > multiple connections, until the same database connection is being
>> shared!
>> > Ref: The four points in 'Short Answer' section.
>> >
>> > BTW: for a DBMS it does not make sense if the application programmer
>> has
>> to
>> > synchronize to create multiple connection.
>> > And synhing will have considerable performance drop also.
>> >
>> >
>> > On 6/18/07, John Stanton <[EMAIL PROTECTED]> wrote:
>> >
>> >>
>> >> Threadsafe only means that threads do not access global data
elements
>> or
>> >> that they synchronize (serialize) access to global data.  It does
>> >> nothing to synchronize threads.  That is up to the application
>> >> programmer.  Sqlite uses POSIX file locks for synchronixation 
but if

>> you
>> >> are in a totally threaded environment you can use thread sync
>> functions
>> >> like mutexes or the finer grained read and write lock thread
>> primitives.
>> >>
>> >> If you are accessing Sqlite across a network file locks are the way
to
>> >> go, but do depend upon network implementations and settings.  If 
you

>> >> have multiple processes on one OS you can sync using semaphores.
>> >>
>> >> Using textbook style synchronization ensures that you have minimal
>> >> problems and optimal performance.
>> >>
>> >> Sabyasachi Ruj wrote:
>> >> > But I think we do not have to take care of synchronizing sqlite
>> access.
>> >> > sqlite internally does if it is compiled with THREADSAFE=1.
>> >> >
>> >> > On 6/18/07, John Stanton <[EMAIL PROTECTED]> wrote:
>> >> >
>> >> >>
>> >> >> The problem is fairly straight forward.  Sqlite is a single
>> resource
>> >> >> being shared by multiple thyreads so you just use fundamental
>> >> >> synchronization logic as you would when sharing any resource
>> between
>> >> >> competing threads.
>> >> >>
>> >> >> Sabyasachi Ruj wrote:
>> >> >> > Hi,
>> >> >> >
>> >> >> > I am using sqlite in a multithreaded environment. I have take
>> >> >> > reasonable like not sharing sqlite* handles. I am creating a
new
>> >> >> sqlite*
>> >> >>
>> >> >> > for
>> >> >> > every thread.
>> >> >> >
>> >> >> > Where can we get more info on working with SQLite in a
>> multithreaded
>> >> >> > environment?
>> >> >> >
>> >> >> > The application is working as a service in windows.
>> >> >> >
>> >> >> > sqlite3_step() is failing with the following error message:
>> >> >> >
>> >> >> > * SQL error or missing database
>> >> >> > SQL logic error or missing database*
>> >> >> >
>> >> >> > I am getting this message after running the application for
quite
>> a
>> >> >> long
>> >> >> > time (few days).
>> >> >> > And then if I execute *PRAGMA INTEGRITY_CHECK* on that 
table, I

>> get
>> >> the
>> >> >> > same
>> >> >> > error message.
>> >> >> >
>> >> >> > My application updates the database very 2 mins and the
>> corruption
>> >> >> happend
>> >> >> > randomly
>> >> >> >
>> >> >> > I dont have any clue how to debug this!
>> >> >> >
>> >> >> > Thanks.
>> >> >> >
>> >> >> > On 5/11/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>> >> >> >
>> >> >> >> "Sabyasachi Ruj" < [EMAIL PROTECTED]> wrote:
>> >> >> >> > Hi,
>> >> >> >> > Is there any way to programmatically fix a corrupted sqlite
>> >> >> database?
>> >> >> >> > I am using sqlite version 3.3.8 with C APIs
>> >> >> >> >
>> >> >> >>
>> >> >> >> Sometimes VACUUM or REINDEX will help, but usually not.
>> 

[sqlite] Proper way to transfer a live sqlite database

2007-06-18 Thread Rich Rattanni

I was writing to ask some opinions on how to perform a download of a
live sqlite database.  Basically I have a device, which stores all
manner of data in a sqlite database.  Periodically I want to download
the data to a central server for viewing.  I discussed it with my
colleagues, and they felt that I should just copy the file to the
server.  However I was thinking of having a blank database with
identical schema to the database I am copying.  Then when the download
occurs, I would ATTACH the live database to the blank database, and
query the data from one to the other.  Then I would close the cloned
version and offload that to the server.

The standard questions now follow...
Is this an acceptable way?
Is there a better/best way?

Thanks for any input,
Rich Rattanni




Re: [sqlite] SQLITE_CORRUPT recover

2007-06-18 Thread Andrew Finkenstadt

  - Make sure you're compiling SQLite with *-DTHREADSAFE=1*.
  - Make sure that each thread opens the database file and keeps its own
  sqlite structure.
  - Make sure you handle the likely possibility that one or more threads
  collide when they access the db file at the same time: handle *
  SQLITE_BUSY* appropriately.
  - Make sure you enclose within transactions the commands that modify
  the database file, like *INSERT*, *UPDATE*, *DELETE*, and others.


Whenever I get SQLITE_MISUSE or SQLITE_BUSY errors when "it should
work", I discover that I have left a statement in execution mode
without calling sqlite3_reset() on it.  Perhaps the same is true in your
case.

andy

On 6/18/07, Sabyasachi Ruj <[EMAIL PROTECTED]> wrote:


But the following link
http://www.sqlite.org/cvstrac/wiki/wiki?p=MultiThreading
says nothing that I have to synchronize at the application level to create
multiple connections, until the same database connection is being shared!
Ref: The four points in 'Short Answer' section.

BTW: for a DBMS it does not make sense if the application programmer has
to synchronize to create multiple connections.
And syncing will cause a considerable performance drop as well.


On 6/18/07, John Stanton <[EMAIL PROTECTED]> wrote:
>
> Threadsafe only means that threads do not access global data elements or
> that they synchronize (serialize) access to global data.  It does
> nothing to synchronize threads.  That is up to the application
> programmer.  Sqlite uses POSIX file locks for synchronization but if you
> are in a totally threaded environment you can use thread sync functions
> like mutexes or the finer grained read and write lock thread primitives.
>
> If you are accessing Sqlite across a network file locks are the way to
> go, but do depend upon network implementations and settings.  If you
> have multiple processes on one OS you can sync using semaphores.
>
> Using textbook style synchronization ensures that you have minimal
> problems and optimal performance.
>
> Sabyasachi Ruj wrote:
> > But I think we do not have to take care of synchronizing sqlite
access.
> > sqlite internally does if it is compiled with THREADSAFE=1.
> >
> > On 6/18/07, John Stanton <[EMAIL PROTECTED]> wrote:
> >
> >>
> >> The problem is fairly straight forward.  Sqlite is a single resource
> >> being shared by multiple thyreads so you just use fundamental
> >> synchronization logic as you would when sharing any resource between
> >> competing threads.
> >>
> >> Sabyasachi Ruj wrote:
> >> > Hi,
> >> >
> >> > I am using sqlite in a multithreaded environment. I have take
> >> > reasonable like not sharing sqlite* handles. I am creating a new
> >> sqlite*
> >>
> >> > for
> >> > every thread.
> >> >
> >> > Where can we get more info on working with SQLite in a
multithreaded
> >> > environment?
> >> >
> >> > The application is working as a service in windows.
> >> >
> >> > sqlite3_step() is failing with the following error message:
> >> >
> >> > * SQL error or missing database
> >> > SQL logic error or missing database*
> >> >
> >> > I am getting this message after running the application for quite a
> >> long
> >> > time (few days).
> >> > And then if I execute *PRAGMA INTEGRITY_CHECK* on that table, I get
> the
> >> > same
> >> > error message.
> >> >
> >> > My application updates the database very 2 mins and the corruption
> >> happend
> >> > randomly
> >> >
> >> > I dont have any clue how to debug this!
> >> >
> >> > Thanks.
> >> >
> >> > On 5/11/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> >> >
> >> >> "Sabyasachi Ruj" < [EMAIL PROTECTED]> wrote:
> >> >> > Hi,
> >> >> > Is there any way to programmatically fix a corrupted sqlite
> >> database?
> >> >> > I am using sqlite version 3.3.8 with C APIs
> >> >> >
> >> >>
> >> >> Sometimes VACUUM or REINDEX will help, but usually not.
> >> >> You can also try to recover using:
> >> >>
> >> >>sqlite3 OLD.DB .dump | sqlite3 NEW.DB
> >> >>
> >> >> But that doesn't always work either.  The best approach
> >> >> is to avoid corruption in the first place.
> >> >> --
> >> >> D. Richard Hipp <[EMAIL PROTECTED]>
> >> >>
> >> >>
> >> >>
> >> >>
> >>
>
-
> >>
> >>
> >> >>
> >> >> To unsubscribe, send email to [EMAIL PROTECTED]
> >> >>
> >> >>
> >>
>
-
> >>
> >>
> >> >>
> >> >>
> >> >>
> >> >
> >> >
> >>
> >>
> >>
> >>
>
-
> >>
> >> To unsubscribe, send email to [EMAIL PROTECTED]
> >>
> >>
>
-
> >>
> >>
> >>
> >
> >
>
>
>
>
-
> To unsubscribe, send email to [EMAIL PROTECTED]
>
>
-
>
>


--
Sabyasachi



Re: [sqlite] Trigger update of multiple columns

2007-06-18 Thread Gerry Snyder

T&B wrote:


Yes, that's what I'm doing. I just want to make it more efficient.

Technically it's the sale_products table (since each sale has many 
products etc), but yes, I want to insert the transactions data 
(product_id, buy, sell, desc) into the sale_products table. But I want 
a mechanism whereby if I enter the product_id, then the buy, sell, 
desc columns are auto entered (copied) from their corresponding row in 
the products table.


Tom


I have probably misunderstood much of this thread, but I think how I 
would handle this situation is to keep each version of each product's 
description in the products table, along with the date that description 
became valid. Then the product ID and date in each transaction would 
reference the appropriate product table data.


That seems to fit "normal" database usage better than what you describe.

HTH,

Gerry




Re: [sqlite] SQLITE_CORRUPT recover

2007-06-18 Thread Sabyasachi Ruj

I still fail to understand what should I synchronize on. I am *not* sharing
sqlite* across multiple threads.

On 6/18/07, John Stanton <[EMAIL PROTECTED]> wrote:


It is fundamental computer science, CS101 you might say.  Pick up a
textbook on basic computing.

Sabyasachi Ruj wrote:
> But can you tell me where is this documented please?
>
> On 6/18/07, John Stanton <[EMAIL PROTECTED]> wrote:
>
>>
>> If you knew the answer then why did you ask the question?  You can
apply
>> regular CS or look for miracles.
>>
>> You might be better off using a server based RDBMS like Oracle or
>> PostgreSQL where the application programmer is insulated from
>> synchronization issues.  Sqlite has the "lite" in its name for a very
>> good reason.  It is designed for embedded applications, not as an
>> enterprise DBMS.
>>
>> Sabyasachi Ruj wrote:
>> > But the following link
>> > http://www.sqlite.org/cvstrac/wiki/wiki?p=MultiThreading
>> > says nothing that I have to synchronize at the application level to
>> create
>> > multiple connections, until the same database connection is being
>> shared!
>> > Ref: The four points in 'Short Answer' section.
>> >
>> > BTW: for a DBMS it does not make sense if the application programmer
>> has
>> to
>> > synchronize to create multiple connection.
>> > And synhing will have considerable performance drop also.
>> >
>> >
>> > On 6/18/07, John Stanton <[EMAIL PROTECTED]> wrote:
>> >
>> >>
>> >> Threadsafe only means that threads do not access global data
elements
>> or
>> >> that they synchronize (serialize) access to global data.  It does
>> >> nothing to synchronize threads.  That is up to the application
>> >> programmer.  Sqlite uses POSIX file locks for synchronixation but if
>> you
>> >> are in a totally threaded environment you can use thread sync
>> functions
>> >> like mutexes or the finer grained read and write lock thread
>> primitives.
>> >>
>> >> If you are accessing Sqlite across a network file locks are the way
to
>> >> go, but do depend upon network implementations and settings.  If you
>> >> have multiple processes on one OS you can sync using semaphores.
>> >>
>> >> Using textbook style synchronization ensures that you have minimal
>> >> problems and optimal performance.
>> >>
>> >> Sabyasachi Ruj wrote:
>> >> > But I think we do not have to take care of synchronizing sqlite
>> access.
>> >> > sqlite internally does if it is compiled with THREADSAFE=1.
>> >> >
>> >> > On 6/18/07, John Stanton <[EMAIL PROTECTED]> wrote:
>> >> >
>> >> >>
>> >> >> The problem is fairly straight forward.  Sqlite is a single
>> resource
>> >> >> being shared by multiple thyreads so you just use fundamental
>> >> >> synchronization logic as you would when sharing any resource
>> between
>> >> >> competing threads.
>> >> >>
>> >> >> Sabyasachi Ruj wrote:
>> >> >> > Hi,
>> >> >> >
>> >> >> > I am using sqlite in a multithreaded environment. I have take
>> >> >> > reasonable like not sharing sqlite* handles. I am creating a
new
>> >> >> sqlite*
>> >> >>
>> >> >> > for
>> >> >> > every thread.
>> >> >> >
>> >> >> > Where can we get more info on working with SQLite in a
>> multithreaded
>> >> >> > environment?
>> >> >> >
>> >> >> > The application is working as a service in windows.
>> >> >> >
>> >> >> > sqlite3_step() is failing with the following error message:
>> >> >> >
>> >> >> > * SQL error or missing database
>> >> >> > SQL logic error or missing database*
>> >> >> >
>> >> >> > I am getting this message after running the application for
quite
>> a
>> >> >> long
>> >> >> > time (few days).
>> >> >> > And then if I execute *PRAGMA INTEGRITY_CHECK* on that table, I
>> get
>> >> the
>> >> >> > same
>> >> >> > error message.
>> >> >> >
>> >> >> > My application updates the database very 2 mins and the
>> corruption
>> >> >> happend
>> >> >> > randomly
>> >> >> >
>> >> >> > I dont have any clue how to debug this!
>> >> >> >
>> >> >> > Thanks.
>> >> >> >
>> >> >> > On 5/11/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>> >> >> >
>> >> >> >> "Sabyasachi Ruj" < [EMAIL PROTECTED]> wrote:
>> >> >> >> > Hi,
>> >> >> >> > Is there any way to programmatically fix a corrupted sqlite
>> >> >> database?
>> >> >> >> > I am using sqlite version 3.3.8 with C APIs
>> >> >> >> >
>> >> >> >>
>> >> >> >> Sometimes VACUUM or REINDEX will help, but usually not.
>> >> >> >> You can also try to recover using:
>> >> >> >>
>> >> >> >>sqlite3 OLD.DB .dump | sqlite3 NEW.DB
>> >> >> >>
>> >> >> >> But that doesn't always work either.  The best approach
>> >> >> >> is to avoid corruption in the first place.
>> >> >> >> --
>> >> >> >> D. Richard Hipp <[EMAIL PROTECTED]>
>> >> >> >>
>> >> >> >>
>> >> >> >>
>> >> >> >>
>> >> >>
>> >>
>>
-
>>
>> >>
>> >> >>
>> >> >>
>> >> >> >>
>> >> >> >> To unsubscribe, send email to
>> [EMAIL PROTECTED]
>> >> >> >>
>> >> >> >>
>> >> >>
>> >>
>>

Re: [sqlite] SQLITE_CORRUPT recover

2007-06-18 Thread John Stanton
It is fundamental computer science, CS101 you might say.  Pick up a 
textbook on basic computing.


Sabyasachi Ruj wrote:

But can you tell me where is this documented please?

On 6/18/07, John Stanton <[EMAIL PROTECTED]> wrote:



If you knew the answer then why did you ask the question?  You can apply
regular CS or look for miracles.

You might be better off using a server based RDBMS like Oracle or
PostgreSQL where the application programmer is insulated from
synchronization issues.  Sqlite has the "lite" in its name for a very
good reason.  It is designed for embedded applications, not as an
enterprise DBMS.

Sabyasachi Ruj wrote:
> But the following link
> http://www.sqlite.org/cvstrac/wiki/wiki?p=MultiThreading
> says nothing that I have to synchronize at the application level to
create
> multiple connections, until the same database connection is being
shared!
> Ref: The four points in 'Short Answer' section.
>
> BTW: for a DBMS it does not make sense if the application programmer 
has

to
> synchronize to create multiple connection.
> And synhing will have considerable performance drop also.
>
>
> On 6/18/07, John Stanton <[EMAIL PROTECTED]> wrote:
>
>>
>> Threadsafe only means that threads do not access global data elements
or
>> that they synchronize (serialize) access to global data.  It does
>> nothing to synchronize threads.  That is up to the application
>> programmer.  Sqlite uses POSIX file locks for synchronixation but if
you
>> are in a totally threaded environment you can use thread sync 
functions

>> like mutexes or the finer grained read and write lock thread
primitives.
>>
>> If you are accessing Sqlite across a network file locks are the way to
>> go, but do depend upon network implementations and settings.  If you
>> have multiple processes on one OS you can sync using semaphores.
>>
>> Using textbook style synchronization ensures that you have minimal
>> problems and optimal performance.
>>
>> Sabyasachi Ruj wrote:
>> > But I think we do not have to take care of synchronizing sqlite
access.
>> > sqlite internally does if it is compiled with THREADSAFE=1.
>> >
>> > On 6/18/07, John Stanton <[EMAIL PROTECTED]> wrote:
>> >
>> >>
>> >> The problem is fairly straight forward.  Sqlite is a single 
resource

>> >> being shared by multiple thyreads so you just use fundamental
>> >> synchronization logic as you would when sharing any resource 
between

>> >> competing threads.
>> >>
>> >> Sabyasachi Ruj wrote:
>> >> > Hi,
>> >> >
>> >> > I am using sqlite in a multithreaded environment. I have take
>> >> > reasonable like not sharing sqlite* handles. I am creating a new
>> >> sqlite*
>> >>
>> >> > for
>> >> > every thread.
>> >> >
>> >> > Where can we get more info on working with SQLite in a
multithreaded
>> >> > environment?
>> >> >
>> >> > The application is working as a service in windows.
>> >> >
>> >> > sqlite3_step() is failing with the following error message:
>> >> >
>> >> > * SQL error or missing database
>> >> > SQL logic error or missing database*
>> >> >
>> >> > I am getting this message after running the application for quite
a
>> >> long
>> >> > time (few days).
>> >> > And then if I execute *PRAGMA INTEGRITY_CHECK* on that table, I
get
>> the
>> >> > same
>> >> > error message.
>> >> >
>> >> > My application updates the database very 2 mins and the 
corruption

>> >> happend
>> >> > randomly
>> >> >
>> >> > I dont have any clue how to debug this!
>> >> >
>> >> > Thanks.
>> >> >
>> >> > On 5/11/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>> >> >
>> >> >> "Sabyasachi Ruj" < [EMAIL PROTECTED]> wrote:
>> >> >> > Hi,
>> >> >> > Is there any way to programmatically fix a corrupted sqlite
>> >> database?
>> >> >> > I am using sqlite version 3.3.8 with C APIs
>> >> >> >
>> >> >>
>> >> >> Sometimes VACUUM or REINDEX will help, but usually not.
>> >> >> You can also try to recover using:
>> >> >>
>> >> >>sqlite3 OLD.DB .dump | sqlite3 NEW.DB
>> >> >>
>> >> >> But that doesn't always work either.  The best approach
>> >> >> is to avoid corruption in the first place.
>> >> >> --
>> >> >> D. Richard Hipp <[EMAIL PROTECTED]>
>> >> >>
>> >> >>
>> >> >>
>> >> >>
>> >>
>>
- 


>>
>> >>
>> >>
>> >> >>
>> >> >> To unsubscribe, send email to 
[EMAIL PROTECTED]

>> >> >>
>> >> >>
>> >>
>>
- 


>>
>> >>
>> >>
>> >> >>
>> >> >>
>> >> >>
>> >> >
>> >> >
>> >>
>> >>
>> >>
>> >>
>>
- 


>>
>> >>
>> >> To unsubscribe, send email to [EMAIL PROTECTED]
>> >>
>> >>
>>
- 


>>
>> >>
>> >>
>> >>
>> >
>> >
>>
>>
>>
>>
- 


>>
>> To unsubscribe, send email to [EMAIL PROTECTED]
>>
>>

Re: [sqlite] Trigger update of multiple columns

2007-06-18 Thread John Stanton
If you can automatically enter data then you are violating the 
normalization rules.  Maybe you should get a book on database design and 
become familiar with some of the fundamentals.


T&B wrote:

Hi John,

You have a reference data set which is accessed to get the current  
value of reference elements and store transactions to record  events. 
The transaction trails provide event history.



Yes, agreed.

A price is in the reference data, its value transferred to a  
transaction is no longer a price, it is a sale which represents the  
value of the price at the time the event occurred.



Yes.

How about reading your price data etc and just inserting a  
transaction into your sales table?



Yes, that's what I'm doing. I just want to make it more efficient.

Technically it's the sale_products table (since each sale has many  
products etc), but yes, I want to insert the transactions data  
(product_id, buy, sell, desc) into the sale_products table. But I  want 
a mechanism whereby if I enter the product_id, then the buy,  sell, desc 
columns are auto entered (copied) from their corresponding  row in the 
products table.


Tom











Re: [sqlite] SQLITE_CORRUPT recover

2007-06-18 Thread Sabyasachi Ruj

But can you tell me where is this documented please?

On 6/18/07, John Stanton <[EMAIL PROTECTED]> wrote:


If you knew the answer then why did you ask the question?  You can apply
regular CS or look for miracles.

You might be better off using a server based RDBMS like Oracle or
PostgreSQL where the application programmer is insulated from
synchronization issues.  Sqlite has the "lite" in its name for a very
good reason.  It is designed for embedded applications, not as an
enterprise DBMS.

Sabyasachi Ruj wrote:
> But the following link
> http://www.sqlite.org/cvstrac/wiki/wiki?p=MultiThreading
> says nothing that I have to synchronize at the application level to
create
> multiple connections, until the same database connection is being
shared!
> Ref: The four points in 'Short Answer' section.
>
> BTW: for a DBMS it does not make sense if the application programmer has
to
> synchronize to create multiple connection.
> And synhing will have considerable performance drop also.
>
>
> On 6/18/07, John Stanton <[EMAIL PROTECTED]> wrote:
>
>>
>> Threadsafe only means that threads do not access global data elements
or
>> that they synchronize (serialize) access to global data.  It does
>> nothing to synchronize threads.  That is up to the application
>> programmer.  Sqlite uses POSIX file locks for synchronixation but if
you
>> are in a totally threaded environment you can use thread sync functions
>> like mutexes or the finer grained read and write lock thread
primitives.
>>
>> If you are accessing Sqlite across a network file locks are the way to
>> go, but do depend upon network implementations and settings.  If you
>> have multiple processes on one OS you can sync using semaphores.
>>
>> Using textbook style synchronization ensures that you have minimal
>> problems and optimal performance.
>>
>> Sabyasachi Ruj wrote:
>> > But I think we do not have to take care of synchronizing sqlite
access.
>> > sqlite internally does if it is compiled with THREADSAFE=1.
>> >
>> > On 6/18/07, John Stanton <[EMAIL PROTECTED]> wrote:
>> >
>> >>
>> >> The problem is fairly straight forward.  Sqlite is a single resource
>> >> being shared by multiple thyreads so you just use fundamental
>> >> synchronization logic as you would when sharing any resource between
>> >> competing threads.
>> >>
>> >> Sabyasachi Ruj wrote:
>> >> > Hi,
>> >> >
>> >> > I am using sqlite in a multithreaded environment. I have take
>> >> > reasonable like not sharing sqlite* handles. I am creating a new
>> >> sqlite*
>> >>
>> >> > for
>> >> > every thread.
>> >> >
>> >> > Where can we get more info on working with SQLite in a
multithreaded
>> >> > environment?
>> >> >
>> >> > The application is working as a service in windows.
>> >> >
>> >> > sqlite3_step() is failing with the following error message:
>> >> >
>> >> > * SQL error or missing database
>> >> > SQL logic error or missing database*
>> >> >
>> >> > I am getting this message after running the application for quite
a
>> >> long
>> >> > time (few days).
>> >> > And then if I execute *PRAGMA INTEGRITY_CHECK* on that table, I
get
>> the
>> >> > same
>> >> > error message.
>> >> >
>> >> > My application updates the database very 2 mins and the corruption
>> >> happend
>> >> > randomly
>> >> >
>> >> > I dont have any clue how to debug this!
>> >> >
>> >> > Thanks.
>> >> >
>> >> > On 5/11/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>> >> >
>> >> >> "Sabyasachi Ruj" < [EMAIL PROTECTED]> wrote:
>> >> >> > Hi,
>> >> >> > Is there any way to programmatically fix a corrupted sqlite
>> >> database?
>> >> >> > I am using sqlite version 3.3.8 with C APIs
>> >> >> >
>> >> >>
>> >> >> Sometimes VACUUM or REINDEX will help, but usually not.
>> >> >> You can also try to recover using:
>> >> >>
>> >> >>sqlite3 OLD.DB .dump | sqlite3 NEW.DB
>> >> >>
>> >> >> But that doesn't always work either.  The best approach
>> >> >> is to avoid corruption in the first place.
>> >> >> --
>> >> >> D. Richard Hipp <[EMAIL PROTECTED]>
>> >> >>
>> >> >>
>> >> >>
>> >> >>
>> >>
>>
-
>>
>> >>
>> >>
>> >> >>
>> >> >> To unsubscribe, send email to [EMAIL PROTECTED]
>> >> >>
>> >> >>
>> >>
>>
-
>>
>> >>
>> >>
>> >> >>
>> >> >>
>> >> >>
>> >> >
>> >> >
>> >>
>> >>
>> >>
>> >>
>>
-
>>
>> >>
>> >> To unsubscribe, send email to [EMAIL PROTECTED]
>> >>
>> >>
>>
-
>>
>> >>
>> >>
>> >>
>> >
>> >
>>
>>
>>
>>
-
>>
>> To unsubscribe, send email to [EMAIL PROTECTED]
>>
>>
-
>>
>>
>>
>
>




Re: [sqlite] SQLITE_CORRUPT recover

2007-06-18 Thread John Stanton
If you knew the answer then why did you ask the question?  You can apply 
regular CS or look for miracles.


You might be better off using a server based RDBMS like Oracle or 
PostgreSQL where the application programmer is insulated from 
synchronization issues.  Sqlite has the "lite" in its name for a very 
good reason.  It is designed for embedded applications, not as an 
enterprise DBMS.


Sabyasachi Ruj wrote:

But the following link
http://www.sqlite.org/cvstrac/wiki/wiki?p=MultiThreading
says nothing that I have to synchronize at the application level to create
multiple connections, until the same database connection is being shared!
Ref: The four points in 'Short Answer' section.

BTW: for a DBMS it does not make sense if the application programmer has to
synchronize to create multiple connections.
And syncing will cause a considerable performance drop as well.


On 6/18/07, John Stanton <[EMAIL PROTECTED]> wrote:



Threadsafe only means that threads do not access global data elements or
that they synchronize (serialize) access to global data.  It does
nothing to synchronize threads.  That is up to the application
programmer.  Sqlite uses POSIX file locks for synchronization but if you
are in a totally threaded environment you can use thread sync functions
like mutexes or the finer grained read and write lock thread primitives.

If you are accessing Sqlite across a network file locks are the way to
go, but do depend upon network implementations and settings.  If you
have multiple processes on one OS you can sync using semaphores.

Using textbook style synchronization ensures that you have minimal
problems and optimal performance.

Sabyasachi Ruj wrote:
> But I think we do not have to take care of synchronizing sqlite access.
> sqlite internally does if it is compiled with THREADSAFE=1.
>
> On 6/18/07, John Stanton <[EMAIL PROTECTED]> wrote:
>
>>
>> The problem is fairly straight forward.  Sqlite is a single resource
>> being shared by multiple threads so you just use fundamental
>> synchronization logic as you would when sharing any resource between
>> competing threads.
>>
>> Sabyasachi Ruj wrote:
>> > Hi,
>> >
>> > I am using sqlite in a multithreaded environment. I have taken
>> > reasonable care, like not sharing sqlite* handles. I am creating a new
>> sqlite*
>>
>> > for
>> > every thread.
>> >
>> > Where can we get more info on working with SQLite in a multithreaded
>> > environment?
>> >
>> > The application is working as a service in windows.
>> >
>> > sqlite3_step() is failing with the following error message:
>> >
>> > * SQL error or missing database
>> > SQL logic error or missing database*
>> >
>> > I am getting this message after running the application for quite a
>> long
>> > time (few days).
>> > And then if I execute *PRAGMA INTEGRITY_CHECK* on that table, I get
the
>> > same
>> > error message.
>> >
>> > My application updates the database every 2 mins and the corruption
>> happened
>> > randomly
>> >
>> > I don't have any clue how to debug this!
>> >
>> > Thanks.
>> >
>> > On 5/11/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>> >
>> >> "Sabyasachi Ruj" < [EMAIL PROTECTED]> wrote:
>> >> > Hi,
>> >> > Is there any way to programmatically fix a corrupted sqlite
>> database?
>> >> > I am using sqlite version 3.3.8 with C APIs
>> >> >
>> >>
>> >> Sometimes VACUUM or REINDEX will help, but usually not.
>> >> You can also try to recover using:
>> >>
>> >>sqlite3 OLD.DB .dump | sqlite3 NEW.DB
>> >>
>> >> But that doesn't always work either.  The best approach
>> >> is to avoid corruption in the first place.
>> >> --
>> >> D. Richard Hipp <[EMAIL PROTECTED]>
>> >>
>> >>
>> >>
>> >>
>> >> -
>> >> To unsubscribe, send email to [EMAIL PROTECTED]
>> >> -
>> >
>> >
>> -
>> To unsubscribe, send email to [EMAIL PROTECTED]
>> -
>
>



-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Trigger update of multiple columns

2007-06-18 Thread

Hi John,

You have a reference data set which is accessed to get the current  
value of reference elements and store transactions to record  
events. The transaction trails provide event history.


Yes, agreed.

A price is in the reference data, its value transferred to a  
transaction is no longer a price, it is a sale which represents the  
value of the price at the time the event occurred.


Yes.

How about reading your price data etc and just inserting a  
transaction into your sales table?


Yes, that's what I'm doing. I just want to make it more efficient.

Technically it's the sale_products table (since each sale has many  
products etc), but yes, I want to insert the transactions data  
(product_id, buy, sell, desc) into the sale_products table. But I  
want a mechanism whereby if I enter the product_id, then the buy,  
sell, desc columns are auto entered (copied) from their corresponding  
row in the products table.


Tom



-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] SQLITE_CORRUPT recover

2007-06-18 Thread Sabyasachi Ruj

But the following link
http://www.sqlite.org/cvstrac/wiki/wiki?p=MultiThreading
says nothing about having to synchronize at the application level when creating
multiple connections, unless the same database connection is being shared!
Ref: The four points in 'Short Answer' section.

BTW: for a DBMS it does not make sense if the application programmer has to
synchronize to create multiple connections.
And synchronizing will also cause a considerable performance drop.


On 6/18/07, John Stanton <[EMAIL PROTECTED]> wrote:


Threadsafe only means that threads do not access global data elements or
that they synchronize (serialize) access to global data.  It does
nothing to synchronize threads.  That is up to the application
programmer.  Sqlite uses POSIX file locks for synchronization, but if you
are in a totally threaded environment you can use thread sync functions
like mutexes or the finer grained read and write lock thread primitives.

If you are accessing Sqlite across a network file locks are the way to
go, but do depend upon network implementations and settings.  If you
have multiple processes on one OS you can sync using semaphores.

Using textbook style synchronization ensures that you have minimal
problems and optimal performance.

Sabyasachi Ruj wrote:
> But I think we do not have to take care of synchronizing sqlite access.
> sqlite internally does if it is compiled with THREADSAFE=1.
>
> On 6/18/07, John Stanton <[EMAIL PROTECTED]> wrote:
>
>>
>> The problem is fairly straight forward.  Sqlite is a single resource
>> being shared by multiple threads, so you just use fundamental
>> synchronization logic as you would when sharing any resource between
>> competing threads.
>>
>> Sabyasachi Ruj wrote:
>> > Hi,
>> >
>> > I am using sqlite in a multithreaded environment. I have taken
>> > reasonable precautions like not sharing sqlite* handles. I am creating a new
>> sqlite*
>>
>> > for
>> > every thread.
>> >
>> > Where can we get more info on working with SQLite in a multithreaded
>> > environment?
>> >
>> > The application is working as a service in windows.
>> >
>> > sqlite3_step() is failing with the following error message:
>> >
>> > * SQL error or missing database
>> > SQL logic error or missing database*
>> >
>> > I am getting this message after running the application for quite a
>> long
>> > time (few days).
>> > And then if I execute *PRAGMA INTEGRITY_CHECK* on that table, I get
the
>> > same
>> > error message.
>> >
> My application updates the database every 2 mins and the corruption
>> happened
> randomly
>> >
> I don't have any clue how to debug this!
>> >
>> > Thanks.
>> >
>> > On 5/11/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>> >
>> >> "Sabyasachi Ruj" < [EMAIL PROTECTED]> wrote:
>> >> > Hi,
>> >> > Is there any way to programmatically fix a corrupted sqlite
>> database?
>> >> > I am using sqlite version 3.3.8 with C APIs
>> >> >
>> >>
>> >> Sometimes VACUUM or REINDEX will help, but usually not.
>> >> You can also try to recover using:
>> >>
>> >>sqlite3 OLD.DB .dump | sqlite3 NEW.DB
>> >>
>> >> But that doesn't always work either.  The best approach
>> >> is to avoid corruption in the first place.
>> >> --
>> >> D. Richard Hipp <[EMAIL PROTECTED]>
>> >>
>> >>
>> >>
>> >>
>>
-
>>
>>
>> >>
>> >> To unsubscribe, send email to [EMAIL PROTECTED]
>> >>
>> >>
>>
-
>>
>>
>> >>
>> >>
>> >>
>> >
>> >
>>
>>
>>
>>
-
>>
>> To unsubscribe, send email to [EMAIL PROTECTED]
>>
>>
-
>>
>>
>>
>
>



-
To unsubscribe, send email to [EMAIL PROTECTED]

-





--
Sabyasachi


Re: [sqlite] Trigger update of multiple columns

2007-06-18 Thread John Stanton
I mean something else.  You have a reference data set which is accessed 
to get the current value of reference elements and store transactions to 
record events. The transaction trails provide event history.


A price is in the reference data; its value transferred to a transaction 
is no longer a price, it is a sale which represents the value of the 
price at the time the event occurred.

How about reading your price data etc and just inserting a transaction 
into your sales table?  You do not need to update anything.  You also 
have integrity for multiple users.


T&B wrote:

Hi John,

A general rule of database design is to separate reference and  
transactional data.  Then you can have a normalized database in a  
dynamic environment.



Yes, I think that's what I am designing.

The reference data is the products table (and potentially customer  
table etc)


The transactional data is the sale_products table which lists what  
products went with each sale.


Or do you mean something else?

Thanks,
Tom


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] SQLITE_CORRUPT recover

2007-06-18 Thread John Stanton
Threadsafe only means that threads do not access global data elements or 
that they synchronize (serialize) access to global data.  It does 
nothing to synchronize threads.  That is up to the application 
programmer.  Sqlite uses POSIX file locks for synchronization, but if you 
are in a totally threaded environment you can use thread sync functions 
like mutexes or the finer grained read and write lock thread primitives.


If you are accessing Sqlite across a network file locks are the way to 
go, but do depend upon network implementations and settings.  If you 
have multiple processes on one OS you can sync using semaphores.


Using textbook style synchronization ensures that you have minimal 
problems and optimal performance.
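
The pattern described above -- one private connection per thread, with SQLite's
own file locking serializing the writers -- can be sketched as follows. This is
an illustrative sketch using Python's stdlib `sqlite3`; the table and function
names are made up:

```python
import os
import sqlite3
import tempfile
import threading

def worker(path, n, tid):
    # Each thread opens its OWN connection -- handles are never shared.
    con = sqlite3.connect(path, timeout=10)  # timeout handles SQLITE_BUSY
    for i in range(n):
        with con:  # one transaction per insert, serialized by SQLite's locks
            con.execute("INSERT INTO events(tid, seq) VALUES (?, ?)", (tid, i))
    con.close()

path = os.path.join(tempfile.mkdtemp(), "events.db")
con = sqlite3.connect(path)
con.execute("CREATE TABLE events(tid INTEGER, seq INTEGER)")
con.commit()
con.close()

threads = [threading.Thread(target=worker, args=(path, 20, t)) for t in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

con = sqlite3.connect(path)
total = con.execute("SELECT COUNT(*) FROM events").fetchone()[0]
con.close()
print(total)  # 80
```

The key design point is that no sqlite* handle ever crosses a thread boundary;
contention is resolved by the database's file locks plus a busy timeout, not by
application mutexes.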


Sabyasachi Ruj wrote:

But I think we do not have to take care of synchronizing sqlite access.
sqlite internally does if it is compiled with THREADSAFE=1.

On 6/18/07, John Stanton <[EMAIL PROTECTED]> wrote:



The problem is fairly straight forward.  Sqlite is a single resource
being shared by multiple threads, so you just use fundamental
synchronization logic as you would when sharing any resource between
competing threads.

Sabyasachi Ruj wrote:
> Hi,
>
> I am using sqlite in a multithreaded environment. I have taken
> reasonable precautions like not sharing sqlite* handles. I am creating a new 
sqlite*


> for
> every thread.
>
> Where can we get more info on working with SQLite in a multithreaded
> environment?
>
> The application is working as a service in windows.
>
> sqlite3_step() is failing with the following error message:
>
> * SQL error or missing database
> SQL logic error or missing database*
>
> I am getting this message after running the application for quite a 
long

> time (few days).
> And then if I execute *PRAGMA INTEGRITY_CHECK* on that table, I get the
> same
> error message.
>
> My application updates the database every 2 mins and the corruption
happened
> randomly
>
> I don't have any clue how to debug this!
>
> Thanks.
>
> On 5/11/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>
>> "Sabyasachi Ruj" < [EMAIL PROTECTED]> wrote:
>> > Hi,
>> > Is there any way to programmatically fix a corrupted sqlite 
database?

>> > I am using sqlite version 3.3.8 with C APIs
>> >
>>
>> Sometimes VACUUM or REINDEX will help, but usually not.
>> You can also try to recover using:
>>
>>sqlite3 OLD.DB .dump | sqlite3 NEW.DB
>>
>> But that doesn't always work either.  The best approach
>> is to avoid corruption in the first place.
>> --
>> D. Richard Hipp <[EMAIL PROTECTED]>
>>
>>
>>
>>
>> -
>> To unsubscribe, send email to [EMAIL PROTECTED]
>> -
>
>



-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Trigger update of multiple columns

2007-06-18 Thread

Hi John,

A general rule of database design is to separate reference and  
transactional data.  Then you can have a normalized database in a  
dynamic environment.


Yes, I think that's what I am designing.

The reference data is the products table (and potentially customer  
table etc)


The transactional data is the sale_products table which lists what  
products went with each sale.


Or do you mean something else?

Thanks,
Tom


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] SQLITE_CORRUPT recover

2007-06-18 Thread Sabyasachi Ruj

But I think we do not have to take care of synchronizing sqlite access.
sqlite internally does if it is compiled with THREADSAFE=1.

On 6/18/07, John Stanton <[EMAIL PROTECTED]> wrote:


The problem is fairly straight forward.  Sqlite is a single resource
being shared by multiple threads, so you just use fundamental
synchronization logic as you would when sharing any resource between
competing threads.

Sabyasachi Ruj wrote:
> Hi,
>
> I am using sqlite in a multithreaded environment. I have taken
> reasonable precautions like not sharing sqlite* handles. I am creating a new sqlite*

> for
> every thread.
>
> Where can we get more info on working with SQLite in a multithreaded
> environment?
>
> The application is working as a service in windows.
>
> sqlite3_step() is failing with the following error message:
>
> * SQL error or missing database
> SQL logic error or missing database*
>
> I am getting this message after running the application for quite a long
> time (few days).
> And then if I execute *PRAGMA INTEGRITY_CHECK* on that table, I get the
> same
> error message.
>
> My application updates the database every 2 mins and the corruption
happened
> randomly
>
> I don't have any clue how to debug this!
>
> Thanks.
>
> On 5/11/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>
>> "Sabyasachi Ruj" < [EMAIL PROTECTED]> wrote:
>> > Hi,
>> > Is there any way to programmatically fix a corrupted sqlite database?
>> > I am using sqlite version 3.3.8 with C APIs
>> >
>>
>> Sometimes VACUUM or REINDEX will help, but usually not.
>> You can also try to recover using:
>>
>>sqlite3 OLD.DB .dump | sqlite3 NEW.DB
>>
>> But that doesn't always work either.  The best approach
>> is to avoid corruption in the first place.
>> --
>> D. Richard Hipp <[EMAIL PROTECTED]>
>>
>>
>>
>>
-

>>
>> To unsubscribe, send email to [EMAIL PROTECTED]
>>
>>
-

>>
>>
>>
>
>



-
To unsubscribe, send email to [EMAIL PROTECTED]

-





--
Sabyasachi


Re: [sqlite] Trigger update of multiple columns

2007-06-18 Thread

Hi Puneet,


ok, your problem makes a bit more sense


Great :-)


but still, much in it doesn't make sense.


OK, I'll see if I can clarify further.

even if the products table is changing, the buy, sell, and  
description of a given item shouldn't change.


The buy and sell price of products always change, due to inflation,  
sales, competition and so on. Pick up just about any grocery or  
electronic goods catalog today and next month and compare prices on  
the same products.


Further, even if buy and sell change, then at least the description  
will not be different.


True, the description changes less often than prices. However, it  
does change. It may be as small as a spelling correction, or have  
some promotion attached eg "Now 20% bigger", or just a refinement of  
the description without a change to the actual product. In all cases,  
it is important that a particular sale shows the price and  
description (and any other details) that were listed for the product  
at the moment it was bought/sold.


If none of that apply as you imply above, then it is no longer the  
same product... it is a different product altogether. In other  
words, you can still identify each product with a unique id


What you say is true if it is a completely different or even updated  
model/product. You would expect that the product_id would also change  
for such a major change. However, this type of change is not what I'm  
catering for. And, in any case, the product_id of each product is set  
by the suppliers, so I have no control over it. And yes, I could  
introduce my own product_id to override theirs but that's getting way  
off track and doesn't really occur here anyway.


and one of the basic rules of normalization is that all related  
things should be together.


Yes, I understand that. But I also hope I'm getting the point through  
that the price is not static for a given product, and to a lesser  
degree the description may change slightly, and I need to keep a  
record of the product price and features as it was when purchased.


So, all information about a product should be in the products  
table, not in the sale_products table.


The sales_products table is a many to many table, linking many sales  
with many products. One sale may contain many products. And one  
product may appear in many sales. To my knowledge the only way to set  
up a many to many relationship is to create an intermediate table in  
this way, even without consideration of the changing price etc.
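
The many-to-many layout described above can be sketched with a cut-down,
hypothetical version of the schema (Python stdlib `sqlite3`, illustrative
data only):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE sales(sale_id INTEGER PRIMARY KEY, customer TEXT);
CREATE TABLE products(product_id INTEGER PRIMARY KEY, desc TEXT);
-- Intermediate table: one row per (sale, product) pair, with the price
-- and description frozen as they were at the time of sale.
CREATE TABLE sale_products(sale_id INTEGER, product_id INTEGER,
                           sell REAL, desc TEXT);
INSERT INTO sales VALUES (1001, 'Bill'), (1042, 'Neo');
INSERT INTO products VALUES (243, 'soap'), (176, 'red pill');
INSERT INTO sale_products VALUES
  (1001, 243, 2.40, 'soap'),
  (1042, 243, 2.10, 'soap'),
  (1042, 176, 5.10, 'red pill');
""")

# One sale lists many products; one product appears in many sales.
rows = con.execute("""
SELECT s.customer, sp.desc, sp.sell
FROM sales s JOIN sale_products sp ON sp.sale_id = s.sale_id
ORDER BY s.sale_id, sp.product_id
""").fetchall()
print(rows)
```

Note how the same product_id (243) appears under two different sales with
two different captured prices, which is exactly the history Tom wants.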


Take a bar of soap -- bought it for $2, sold it for $2.40,  
description is "woodsy, honey dew cantaloupe smelling hand crafted  
soap." A product_id of 243 identifies that entity uniquely. If you  
change its attributes, say now it is, "viscous, tar-based paste  
guaranteed to get motor grease off" then it is a completely  
different product, and should have a different product_id.


Good example, I'll use it. I sell that soap today to Bill for the  
price shown in today's catalog, at $2.40. But next week we're  
overstocked, so I sell it to Ted for $2.10. And next week I revise my  
catalog after having realized that people in my state don't know what  
cantaloupe is, but instead call it "water melon" (true story). The  
following week the "Hand Crafted" guild tells me that the name is  
trademarked, so I change its description again before selling it to  
Neo. And a month later, the product is completely removed from my  
catalog since I can no longer source it (Hand Crafted Guild under  
suspicion of arson).


So, in each case, I need to capture the current product details for  
the invoice/sale. I need an exact record of what I invoiced each  
customer for. I need to know that I sold it to Bill for $2.40 when it  
was called "cantaloupe", even though the last product catalog shows  
it at $2.10 and called it "water melon", and especially since it no  
longer appears in my current catalog in a month's time.


So, my sale_products table looks like this, in part:

sale_id  product_id  buy  sell   desc
1001     243         $2   $2.40  woodsy, honey dew cantaloupe smelling hand crafted soap
1013     243         $2   $2.10  woodsy, honey dew cantaloupe smelling hand crafted soap
1042     145         $5   $6.00  white rabbit tattoo remover
1042     243         $2   $2.10  woodsy, honey dew water melon scented hand crafted soap
1042     176         $4   $5.10  red pill
1058     243         $2   $2.10  woodsy, honey dew water melon scented home made soap


And sales contains (simplified):

sale_id  customer   date
1001 Bill   2007-06-18
1013 Ted2007-06-24
1042 Neo2007-06-30
1058 Morpheus   2007-07-04

Even if the product table is updated to capture new items from the  
different catalogs, it will forever store the attributes of each  
product, creating a unique history right there.


No. The product catalog/table is orders of magnitude larger than the  
s

Re: [sqlite] SQLITE_CORRUPT recover

2007-06-18 Thread John Stanton
The problem is fairly straight forward.  Sqlite is a single resource 
being shared by multiple threads, so you just use fundamental 
synchronization logic as you would when sharing any resource between 
competing threads.


Sabyasachi Ruj wrote:

Hi,

I am using sqlite in a multithreaded environment. I have taken
reasonable precautions like not sharing sqlite* handles. I am creating a new sqlite* 
for

every thread.

Where can we get more info on working with SQLite in a multithreaded
environment?

The application is working as a service in windows.

sqlite3_step() is failing with the following error message:

* SQL error or missing database
SQL logic error or missing database*

I am getting this message after running the application for quite a long
time (few days).
And then if I execute *PRAGMA INTEGRITY_CHECK* on that table, I get the 
same

error message.

My application updates the database every 2 mins and the corruption happened
randomly

I don't have any clue how to debug this!

Thanks.

On 5/11/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:


"Sabyasachi Ruj" <[EMAIL PROTECTED]> wrote:
> Hi,
> Is there any way to programmatically fix a corrupted sqlite database?
> I am using sqlite version 3.3.8 with C APIs
>

Sometimes VACUUM or REINDEX will help, but usually not.
You can also try to recover using:

   sqlite3 OLD.DB .dump | sqlite3 NEW.DB

But that doesn't always work either.  The best approach
is to avoid corruption in the first place.
--
D. Richard Hipp <[EMAIL PROTECTED]>



-
To unsubscribe, send email to [EMAIL PROTECTED]
-



[sqlite] Version 3.4.0

2007-06-18 Thread drh
SQLite version 3.4.0 is now available for download from the SQLite
website and from the back-up site:

   http://www.sqlite.org/
   http://www.hwaci.com/sw/sqlite/

Version 3.4.0 fixes two bugs which can cause database corruption.
Upgrading is recommended for all users.  If you are compelled to
continue using an older version of SQLite, then please at least
read the documentation on how to avoid hitting these bugs.  See

   http://www.sqlite.org/cvstrac/wiki?p=CorruptionFollowingBusyError
   http://www.sqlite.org/cvstrac/tktview?tn=2418

The consequences of hitting either of these bugs is severe, but
both bugs are obscure and relatively difficult to hit.  So the
impact on most applications is likely to be minimal.  Even so,
we think it is better to be safe than sorry and recommend that
everybody upgrade as soon as possible.

In addition to the critical bug fixes above, Version 3.4.0 also
includes explicit, well-defined, and tested upper bounds for many
properties of SQLite.  The new upper-bounds might cause some legacy
applications that use excessively large strings, tables, or SQL
statements to break unless the default upper-bounds are increased
using compile-time options.  This is the reason that the version
of the new release is 3.4.0 instead of 3.3.18.  For more information
about the new upper bounds, see

   http://www.sqlite.org/limits.html

Version 3.4.0 also includes support for new features such as:

   *  Incremental BLOB I/O
   *  The zeroblob() SQL function
   *  Incremental vacuum 

Additional information about these new features is available online.
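
The new zeroblob() SQL function reserves space for a BLOB of a given size
without building the whole value in the SQL text; the reserved region can
then be filled piecewise through the incremental BLOB I/O API
(sqlite3_blob_open and friends). A minimal sketch of zeroblob(), shown
through Python's stdlib `sqlite3` for brevity:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE images(id INTEGER PRIMARY KEY, data BLOB)")

# zeroblob(n) allocates an n-byte blob of zeros; useful as a placeholder
# to be filled later via incremental BLOB I/O.
con.execute("INSERT INTO images(id, data) VALUES (1, zeroblob(1024))")

size = con.execute("SELECT length(data) FROM images WHERE id = 1").fetchone()[0]
print(size)  # 1024
```

In C, the placeholder would then be opened with sqlite3_blob_open() and
written in chunks with sqlite3_blob_write(), avoiding one large allocation.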

By popular request, the preprocessed source code is now available
again as individual C code files.  The amalgamation is also still
available but has a different filename for downloading.

Do not be deceived by the ".0" at the end of the version number.
This release is one of the most extensively tested and debugged
releases of SQLite in a long time.  We expect it to be stable.
We will be more confident in that assertion after we have had some
additional real-world experience with it, but we are going into
this release with the expectation that it will prove to be 
production ready.  I will send follow-up messages to this list
if that expectation proves unfounded.  The key point is that 
version 3.4.0 is *not* a beta release.

As always, please report any issues that you find either to this
mailing list, to the SQLite ticket system, or directly to me and/or
Dan.

--
D. Richard Hipp <[EMAIL PROTECTED]>


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Trigger update of multiple columns

2007-06-18 Thread John Stanton
A general rule of database design is to separate reference and 
transactional data.  Then you can have a normalized database in a 
dynamic environment.


T&B wrote:

Hi Puneet and John,

You each respectively said:

Why are you repeating the Code, Buy, Sell, and Description columns  in 
the Sale_Products table when they already exist in the Products  table?



A traditional goal in database design is to place data in "Third  
Normal Form" which means in essence that each data element is only  
stored in one place



Good question/point. Sorry I thought from my introverted world that  
this would be obvious, so didn't elaborate. So let me clarify:


The Products table is changing all the time. What might be listed  today 
when a Sale is made, might no longer be listed in Products in a  couple 
of weeks (but needs to retain the details in the Sale). And  even if it 
is still listed in Products, it's very likely that the  price and 
probably description will have changed.


So the Sale_Products table needs to capture the values of the Buy  
Price, Sell Price and Description (and others) when the sale is made,  
and cannot simply be dynamically related to their occurrence in the  
Products table (since it will change, but the Sale_Products extract  for 
the current sale must remain unchanged).


assume that the "buy" column is the price I paid for the item, and  
"sell" column is the price I get for it) --



Yes, that's right.


CREATE TABLE products (
 product_id INTEGER PRIMARY KEY,
 buy REAL,
 sell REAL,
 desc TEXT
);

CREATE TABLE customers (
 customer_id INTEGER PRIMARY KEY,
 .. other customer info columns ..
);

CREATE TABLE sales (
 sale_id INTEGER PRIMARY KEY,
 product_id INTEGER,   -- FK to products table
 customer_id INTEGER   -- FK to customers table
);



One sale may involve several products, so it's more like this:

CREATE TABLE sales (
 sale_id INTEGER PRIMARY KEY,
 product_id INTEGER,   -- FK to products table
 customer_id INTEGER   -- FK to customers table
);

CREATE TABLE sale_products (
 sale_id INTEGER,  -- FK to sales table
 product_id INTEGER,   -- FK to products table
 buy REAL,
 sell REAL,
 desc TEXT
);

Now, when a new product is added to a Sale, I do this:

INSERT INTO sale_products(sale_id, product_id) VALUES( ? , ? )

And what I need is some kind of trigger that will automatically fill  in 
the buy, sell and desc columns for me.


Something like:

CREATE TRIGGER update_sale_products
AFTER UPDATE OF product_id
on sale_products
BEGIN
  UPDATE sale_products
SET
  buy  = (SELECT buy  FROM products WHERE products.product_id =  
NEW.product_id)
, sell = (SELECT sell FROM products WHERE products.product_id =  
NEW.product_id)
, desc = (SELECT desc FROM products WHERE products.product_id =  
NEW.product_id)

  WHERE
ROWID=NEW.ROWID
  ;
END

which is basically just a rewrite of my original post, but using your  
capitalization and entity names.


But I want something without the multiple lookups on the products  table 
of the same thing, ie the:

WHERE products.product_id = NEW.product_id

Any ideas?

Thanks,
Tom
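
One possible answer to Tom's question -- not proposed in the thread itself --
is to skip the trigger and do the copy at insert time with a single
`INSERT ... SELECT`, which looks the product up only once. A sketch against
a cut-down version of the schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE products(product_id INTEGER PRIMARY KEY,
                      buy REAL, sell REAL, desc TEXT);
CREATE TABLE sale_products(sale_id INTEGER, product_id INTEGER,
                           buy REAL, sell REAL, desc TEXT);
INSERT INTO products VALUES (243, 2.0, 2.40, 'hand crafted soap');
""")

# Single lookup: copy the product's current columns in one statement,
# instead of three correlated sub-selects in a trigger.
con.execute("""
INSERT INTO sale_products(sale_id, product_id, buy, sell, desc)
SELECT ?, product_id, buy, sell, desc FROM products WHERE product_id = ?
""", (1001, 243))

row = con.execute("SELECT sale_id, sell, desc FROM sale_products").fetchone()
print(row)  # (1001, 2.4, 'hand crafted soap')
```

This also matches John Stanton's advice: read the reference data and insert a
transaction record, rather than updating rows after the fact.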


- 


To unsubscribe, send email to [EMAIL PROTECTED]
- 






-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Journal File Optimization

2007-06-18 Thread Dan Kennedy
On Mon, 2007-06-18 at 06:04 -0500, John Stanton wrote:
> Andre du Plessis wrote:
> > How can one optimize the creation of the journal file. The problem is
> > this, for our system which is an event based one each message needs to
> > be inserted and committed to the database (guaranteed), this results in
> > a commit per insert, this was obviously unacceptably slow and according
> > to the lists by design you need to do more bulk transactions, and with
> > some effort I modified it with a combination of bulk and temporary
> > tables to do quite a few at a time, but the problem remains that
> > committing is terribly slow. 
> > 
> >  
> > 
> > I'm thinking the journal file, the fact that it is created written and
> > deleted each time which is slowing it down, Is there not a way to create
> > and set up a permanent journal file pre-allocated to a certain size, so
> > sqlite does not have to go through the OS each time to create and delete
> > this file?
> > 
> >  
> > 
> > Along the same lines of this question, is there a way to fix the initial
> > size of the DB, ie set it to pre-allocate 700mb for instance so that no
> > growing of the db file is needed until the space is exceeded, may also
> > speed things up.
> > 
> >  
> > 
> >  
> > 
> > Thank you very much in advance.
> > 
> I suspect that you will find that the sync process is the logjam.
> 

This is almost certainly right.

But for trivia's sake: in exclusive mode, sqlite truncates the journal
file at the end of each transaction instead of deleting it. Because
file creation/deletion is slow on some systems.

You can turn on exclusive mode using:

   PRAGMA locking_mode = EXCLUSIVE;


Dan. 
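
Dan's two points -- batch many inserts into one transaction so there is one
sync per batch, and use exclusive locking mode so the journal is truncated
rather than created/deleted -- can be sketched together like this (Python
stdlib `sqlite3`, illustrative table names):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "log.db")
con = sqlite3.connect(path, isolation_level=None)  # manage transactions manually

# In exclusive mode the journal file is truncated at commit, not deleted,
# which avoids file create/delete overhead on each transaction.
con.execute("PRAGMA locking_mode = EXCLUSIVE")
con.execute("CREATE TABLE log(msg TEXT)")

con.execute("BEGIN")
for i in range(1000):
    con.execute("INSERT INTO log(msg) VALUES (?)", (f"event {i}",))
con.execute("COMMIT")  # one commit (one sync) for the whole batch

n = con.execute("SELECT COUNT(*) FROM log").fetchone()[0]
print(n)  # 1000
```

For a system that must guarantee every event, the batch size becomes a
trade-off between throughput and how much is at risk between commits.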



-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] SQLITE_CORRUPT recover

2007-06-18 Thread Sabyasachi Ruj

Hi,

I am using sqlite in a multithreaded environment. I have taken
reasonable precautions like not sharing sqlite* handles. I am creating a new sqlite* for
every thread.

Where can we get more info on working with SQLite in a multithreaded
environment?

The application is working as a service in windows.

sqlite3_step() is failing with the following error message:

* SQL error or missing database
SQL logic error or missing database*

I am getting this message after running the application for quite a long
time (few days).
And then if I execute *PRAGMA INTEGRITY_CHECK* on that table, I get the same
error message.

My application updates the database every 2 mins and the corruption happened
randomly

I don't have any clue how to debug this!

Thanks.

On 5/11/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:


"Sabyasachi Ruj" <[EMAIL PROTECTED]> wrote:
> Hi,
> Is there any way to programmatically fix a corrupted sqlite database?
> I am using sqlite version 3.3.8 with C APIs
>

Sometimes VACUUM or REINDEX will help, but usually not.
You can also try to recover using:

   sqlite3 OLD.DB .dump | sqlite3 NEW.DB

But that doesn't always work either.  The best approach
is to avoid corruption in the first place.
--
D. Richard Hipp <[EMAIL PROTECTED]>
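
The `.dump | sqlite3` recovery pipeline above can also be driven
programmatically. A sketch using Python's stdlib `sqlite3` and its
`iterdump()`, which is equivalent in spirit to the shell pipeline (not the
CLI itself):

```python
import sqlite3

# Stand-in for OLD.DB -- in practice this would be the damaged file.
src = sqlite3.connect(":memory:")
src.executescript("CREATE TABLE t(x); INSERT INTO t VALUES (1),(2);")

# Equivalent of: sqlite3 OLD.DB .dump | sqlite3 NEW.DB
# iterdump() yields the SQL text of the whole database; replaying it
# into a fresh database salvages whatever is still readable.
dst = sqlite3.connect(":memory:")
dst.executescript("\n".join(src.iterdump()))

vals = [r[0] for r in dst.execute("SELECT x FROM t ORDER BY x")]
print(vals)  # [1, 2]
```

As drh notes, this does not always succeed against real corruption; it only
recovers as much as the dump can still read.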



-
To unsubscribe, send email to [EMAIL PROTECTED]

-





--
Sabyasachi


Re: [sqlite] Trigger update of multiple columns

2007-06-18 Thread P Kishor

ok, your problem makes a bit more sense, but still, much in it doesn't
make sense.

On 6/18/07, T&B <[EMAIL PROTECTED]> wrote:

Hi Puneet and John,

You each respectively said:

> Why are you repeating the Code, Buy, Sell, and Description columns
> in the Sale_Products table when they already exist in the Products
> table?

> A traditional goal in database design is to place data in "Third
> Normal Form" which means in essence that each data element is only
> stored in one place

Good question/point. Sorry I thought from my introverted world that
this would be obvious, so didn't elaborate. So let me clarify:

The Products table is changing all the time. What might be listed
today when a Sale is made, might no longer be listed in Products in a
couple of weeks (but needs to retain the details in the Sale). And
even if it is still listed in Products, it's very likely that the
price and probably description will have changed.


even if the products table is changing, the buy, sell, and description
of a given item shouldn't change. Further, even if buy and sell
change, then at least the description will not be different. If
none of that apply as you imply above, then it is no longer the same
product... it is a different product altogether. In other words, you
can still identify each product with a unique id, and one of the basic
rules of normalization is that all related things should be together.
So, all information about a product should be in the products table,
not in the sale_products table. Take a bar of soap -- bought it for
$2, sold it for $2.40, description is "woodsy, honey dew cantaloupe
smelling hand crafted soap." A product_id of 243 identifies that
entity uniquely. If you change its attributes, say now it is,
"viscous, tar-based paste guaranteed to get motor grease off" then it
is a completely different product, and should have a different
product_id.

Even if the product table is updated to capture new items from the
different catalogs, it will forever store the attributes of each
product, creating a unique history right there.

If you want to track inventory, then you can add columns
appropriately, have unique rows in the sales table for each sale, and
decrement the inventory column for every sale.



So the Sale_Products table needs to capture the values of the Buy
Price, Sell Price and Description (and others) when the sale is made,
and cannot simply be dynamically related to their occurrence in the
Products table (since it will change, but the Sale_Products extract
for the current sale must remain unchanged).



right... it can still capture the price, sell, and desc of each sale
because it is referring to the unique product_id from the products
table. Now, it is likely that you buy two units of a particular soap
today for $2, and sell one for $2.40 today, and the other for
$2.35 tomorrow. In that case, you can move the sell column from the
products table to the sales table. However, you are still identifying
the same product from the products table. On the other hand, if you
buy a soap today for $2, and the same soap tomorrow for $2.15, then
make it a different product. See, your buy happens before your sell,
so you need to store the buy price somewhere when you get the item.
You can't wait to store the buy price till you sell it.

List out all your application requirements, group all the related
items together so that each unique item occupies one and only one row
in a table, and then refer to that using ids.



> assume that the "buy" column is the price I paid for the item, and
> "sell" column is the price I get for it) --

Yes, that's right.

> CREATE TABLE products (
>  product_id INTEGER PRIMARY KEY,
>  buy REAL,
>  sell REAL,
>  desc TEXT
> );
>
> CREATE TABLE customers (
>  customer_id INTEGER PRIMARY KEY,
>  .. other customer info columns ..
> );
>
> CREATE TABLE sales (
>  sale_id INTEGER PRIMARY KEY,
>  product_id INTEGER,   -- FK to products table
>  customer_id INTEGER   -- FK to customers table
> );

One sale may involve several products, so it's more like this:

CREATE TABLE sales (
  sale_id INTEGER PRIMARY KEY,
  product_id INTEGER,   -- FK to products table
  customer_id INTEGER   -- FK to customers table
);

CREATE TABLE sale_products (
  sale_id INTEGER,  -- FK to sales table
  product_id INTEGER,   -- FK to products table
  buy REAL,
  sell REAL,
  desc TEXT
);

Now, when a new product is added to a Sale, I do this:

INSERT INTO sale_products(sale_id, product_id) VALUES( ? , ? )

And what I need is some kind of trigger that will automatically fill
in the buy, sell and desc columns for me.

Something like:

CREATE TRIGGER update_sale_products
AFTER UPDATE OF product_id
on sale_products
BEGIN
   UPDATE sale_products
 SET
   buy  = (SELECT buy  FROM products WHERE products.product_id =
NEW.product_id)
 , sell = (SELECT sell FROM products WHERE products.product_id =
NEW.product_id)
 , desc = (SELECT desc FROM products WHERE products.product_id =
NEW.product_id)

Re: [sqlite] Trigger update of multiple columns

2007-06-18 Thread

Hi Puneet and John,

You each respectively said:

Why are you repeating the Code, Buy, Sell, and Description columns  
in the Sale_Products table when they already exist in the Products
table?


A traditional goal in database design is to place data in "Third  
Normal Form" which means in essence that each data element is only  
stored in one place


Good question/point. Sorry I thought from my introverted world that  
this would be obvious, so didn't elaborate. So let me clarify:


The Products table is changing all the time. What might be listed  
today when a Sale is made, might no longer be listed in Products in a  
couple of weeks (but needs to retain the details in the Sale). And  
even if it is still listed in Products, it's very likely that the  
price and probably description will have changed.


So the Sale_Products table needs to capture the values of the Buy  
Price, Sell Price and Description (and others) when the sale is made,  
and cannot simply be dynamically related to their occurrence in the  
Products table (since it will change, but the Sale_Products extract  
for the current sale must remain unchanged).


assume that the "buy" column is the price I paid for the item, and  
"sell" column is the price I get for it) --


Yes, that's right.


CREATE TABLE products (
 product_id INTEGER PRIMARY KEY,
 buy REAL,
 sell REAL,
 desc TEXT
);

CREATE TABLE customers (
 customer_id INTEGER PRIMARY KEY,
 .. other customer info columns ..
);

CREATE TABLE sales (
 sale_id INTEGER PRIMARY KEY,
 product_id INTEGER,   -- FK to products table
 customer_id INTEGER   -- FK to customers table
);


One sale may involve several products, so it's more like this:

CREATE TABLE sales (
 sale_id INTEGER PRIMARY KEY,
 product_id INTEGER,   -- FK to products table
 customer_id INTEGER   -- FK to customers table
);

CREATE TABLE sale_products (
 sale_id INTEGER,  -- FK to sales table
 product_id INTEGER,   -- FK to products table
 buy REAL,
 sell REAL,
 desc TEXT
);

Now, when a new product is added to a Sale, I do this:

INSERT INTO sale_products(sale_id, product_id) VALUES( ? , ? )

And what I need is some kind of trigger that will automatically fill  
in the buy, sell and desc columns for me.


Something like:

CREATE TRIGGER update_sale_products
AFTER UPDATE OF product_id
on sale_products
BEGIN
  UPDATE sale_products
SET
  buy  = (SELECT buy  FROM products WHERE products.product_id =  
NEW.product_id)
, sell = (SELECT sell FROM products WHERE products.product_id =  
NEW.product_id)
, desc = (SELECT desc FROM products WHERE products.product_id =  
NEW.product_id)

  WHERE
ROWID=NEW.ROWID
  ;
END

which is basically just a rewrite of my original post, but using your  
capitalization and entity names.


But I want something without the multiple lookups of the same thing on
the products table, i.e. the:

WHERE products.product_id = NEW.product_id

Any ideas?

Thanks,
Tom
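One way to avoid the repeated per-column subqueries (a sketch using Python's sqlite3 module, not an answer given in the thread) is to fill the columns at INSERT time with a single INSERT ... SELECT, so the products row is located only once and all of its columns are copied in the same statement:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE products (product_id INTEGER PRIMARY KEY,
                       buy REAL, sell REAL, "desc" TEXT);
CREATE TABLE sale_products (sale_id INTEGER, product_id INTEGER,
                            buy REAL, sell REAL, "desc" TEXT);
INSERT INTO products VALUES (243, 2.0, 2.4, 'hand crafted soap');
""")
# "desc" is quoted because DESC is an SQL keyword.

# One lookup: the products row is read once and all of its columns
# are copied into the new sale_products row.
con.execute("""
INSERT INTO sale_products (sale_id, product_id, buy, sell, "desc")
SELECT ?, product_id, buy, sell, "desc"
FROM products WHERE product_id = ?
""", (1, 243))

row = con.execute('SELECT buy, sell, "desc" FROM sale_products').fetchone()
print(row)  # (2.0, 2.4, 'hand crafted soap')
```

This replaces both the trigger and its correlated subqueries; the sale_products row is born with its snapshot of the catalog values.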


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Tomcat crashes with SQLite

2007-06-18 Thread Frederic de la Goublaye

Hi all.

Lucy thanks for your answer :+)
http://lucy.ysalaya.org

My issue is not closed because the site crashes these days.

I do not think I use the same connection in different threads at the same
time.

I am still using the javasqlite JDBC driver.
But I did not put the JDBC settings into the Tomcat server.xml file
as I did for MySQL, because as far as I know the JDBC driver needs the path
to the database file, and under my project all the chatbot databases are
different: /user_idr/bot_base_type.db

Here is my source code for the Java class connection.
Maybe you can have a look (it is simple, maybe too much)

_
Copyright 2006-2007 Frederic de la Goublaye
_
public class SQLite {
   private static Logger logger = Logger.getLogger(SQLite.class.getName());
    private final static String pilote = "SQLite.JDBCDriver"; // no leading space in the class name
   private Connection con = null;
   private PreparedStatement s;
   private  static SQLite instance = new SQLite();
   public static SQLite getInstance() {return instance;}
   private static String sDBPath;
private SQLite() {
   try {
   Class.forName(pilote);
   con = null;
   s = null;
   } catch (Exception e) {
   logger.error(e.getMessage());
   }
}
public String getDBPath() {return sDBPath;}
public void setDBPath(String pDBPath) {sDBPath= pDBPath;}
public void connection(String pDBPath) {
   try {
sDBPath = pDBPath;
con = DriverManager.getConnection(sDBPath);
con.setAutoCommit ( true );
   }
   catch (SQLException e) {
   logger.error(e.getMessage());
   logger.error("sDBPath="+sDBPath);
   if (con != null) {
   try {
   con.close();
   } catch (Exception ex) {
   logger.error(ex.getMessage());
   }
   finally {
   con = null;
   }
   }
   }
}
public void deconnection() {
   if (s != null) {
   try {
   s.close();
   } catch (Exception e) {
   logger.error(e.getMessage());
   }
   finally {
   s = null;
   }
   }
   if (con != null) {
   try {
   con.close();
   } catch (Exception e) {
   logger.error(e.getMessage());
   }
   finally {
   con = null;
   }
   }
}
public ResultSet execQuery(String pRequest) {
   try
   {
   s = con.prepareStatement(pRequest);
   ResultSet rset = s.executeQuery();
   return rset;
   }
  catch(SQLException e) {
  logger.error(e.getMessage());
  logger.error("sSql="+pRequest);
  if (s != null) {
  try {
  s.close();
  } catch (Exception ex) {
  logger.error(ex.getMessage());
  }
  finally {
  s = null;
  }
  }
  return null;
   }
}
public void finalize() {
   try {
 deconnection();
   }
   catch (Exception e) {
   logger.error(e.getMessage());
   }
}
public String getStringForSQL(String str) {
    // SQLite escapes a single quote by doubling it; backslash is not
    // an escape character in standard SQL string literals.
    if (str == null) return str;
    StringBuffer sb = new StringBuffer(str.length() + 16);
    char[] cc = str.toCharArray();
    for (int i = 0; i < cc.length; i++) {
        if (cc[i] == '\'') {
            sb.append('\'');
        }
        sb.append(cc[i]);
    }
    return sb.toString();
}
}
_
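As an aside on getStringForSQL: binding parameters avoids manual escaping entirely. A minimal sketch with Python's sqlite3 module for brevity; the same idea applies to the JDBC PreparedStatement placeholders that the class above already uses in execQuery:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE notes (body TEXT)")

# The driver passes the value out of band, so quotes inside the data
# never need escaping and cannot break out of the SQL statement.
tricky = "it's a \"tricky\" string with a backslash \\"
con.execute("INSERT INTO notes (body) VALUES (?)", (tricky,))

stored = con.execute("SELECT body FROM notes").fetchone()[0]
print(stored == tricky)  # True
```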

Thanks again for your attention.

Frederic de la Goublaye


On 6/3/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:



> My project is working with Tomcat, SQLite and javasqlite.
> http://www.ch-werner.de/javasqlite/
>
> http://www.ysalaya.org
>
> Since a few weeks Tomcat server crashes very often: error 505
> and I need to restart it manually. It is installed on FreeBSD 5.4.
>
> Please see the Tomcat log file below.
> It seems to be an error in SQLite outside the Java Virtual Machine.
>
> ANY IDEA?

I see 2 possible explanations:
1. You use the same connection in different threads at the same time.
2. There are bugs in the JDBC driver. If this is the case, try the driver from
http://www.zentus.com/sqlitejdbc. It surely has bugs too, but they may be
different ones and you may not even notice them. You may use the pure Java
version; it will probably be slower than the JNI-based one, but it should
never crash the VM.



--







Re: [sqlite] Journal File Optimization

2007-06-18 Thread Andrew Finkenstadt

On 6/18/07, John Stanton <[EMAIL PROTECTED]> wrote:


Andre du Plessis wrote:
> How can one optimize the creation of the journal file. The problem is
> this, for our system which is an event based one each message needs to
> be inserted and committed to the database (guaranteed), this results in
> a commit per insert, this was obviously unacceptably slow and according
> to the lists by design you need to do more bulk transactions, and with
> some effort I modified it with a combination of bulk and temporary
> tables to do quite a few at a time, but the problem remains that
> committing is terribly slow.



I, too, discovered this. Commits were at least 50 to 100 times slower on my
current hardware (a stripe-and-mirror hardware card with software drivers)
than when running in "PRAGMA SYNCHRONOUS=OFF;" mode.

In my case I chose to accept the risk that the OPERATING SYSTEM or MACHINE
will crash before the data is truly written to the hard drive platters.

My alternative, which I may still implement, was to have an interior journaling
table which recorded each operation, and then apply the operations in bulk
within one transaction to the desired tables.

I am not a SQLite expert by any stretch of the imagination, but these
techniques worked for me.

--andy
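The two techniques above, relaxed syncing and batching many inserts into one transaction, might be sketched like this (a minimal illustration with Python's sqlite3 module, not code from the thread; PRAGMA synchronous=OFF accepts data loss if the OS or machine crashes before the data reaches the platters):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "events.db")
con = sqlite3.connect(path)
con.execute("PRAGMA synchronous=OFF")  # trade durability for speed
con.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, msg TEXT)")

# One transaction around the whole batch: one journal create/delete
# cycle and one sync instead of one per insert.
with con:  # BEGIN ... COMMIT around the batch
    con.executemany("INSERT INTO events (msg) VALUES (?)",
                    (("event %d" % i,) for i in range(1000)))

count = con.execute("SELECT count(*) FROM events").fetchone()[0]
print(count)  # 1000
con.close()
```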


Re: [sqlite] Journal File Optimization

2007-06-18 Thread John Stanton

Andre du Plessis wrote:

How can one optimize the creation of the journal file. The problem is
this, for our system which is an event based one each message needs to
be inserted and committed to the database (guaranteed), this results in
a commit per insert, this was obviously unacceptably slow and according
to the lists by design you need to do more bulk transactions, and with
some effort I modified it with a combination of bulk and temporary
tables to do quite a few at a time, but the problem remains that
committing is terribly slow. 

 


I'm thinking the journal file, the fact that it is created written and
deleted each time which is slowing it down, Is there not a way to create
and set up a permanent journal file pre-allocated to a certain size, so
sqlite does not have to go through the OS each time to create and delete
this file?

 


Along the same lines of this question, is there a way to fix the initial
size of the DB, ie set it to pre-allocate 700mb for instance so that no
growing of the db file is needed until the space is exceeded, may also
speed things up.

 

 


Thank you very much in advance.


I suspect that you will find that the sync process is the logjam.





Re: [sqlite] Trigger update of multiple columns

2007-06-18 Thread P Kishor

On 6/17/07, T&B <[EMAIL PROTECTED]> wrote:

Hi All,

I have a pretty standard sales tracking database consisting of tables:

Products  - Each row is a product available for sale.
 Includes fields: Code, Buy, Sell, Description

Sales - Each row is a sale made to a customer.
 Includes fields: Ref, Customer

Sale_Products - Each row is a product (many) included in a sale (one).
 Includes fields: Sale_Ref, Code, Buy, Sell, Description

Now, when I add a new Sale_Products row and assign a product Code to
it, I want to trigger it to auto enter the Buy and Sell prices, and
the description, by looking up the related Product (ie where
Sale_Products.Code = Products.Code)

How can I do this?

I have something like this:

create trigger Update_Sale_Products_Code
after update of Code
on Sale_Products
begin
   update Sale_Products
 set
   Buy = (select Buy from Products where Products.Code = new.Code)
 , Sell = (select Sell from Products where Products.Code = new.Code)
 , Description = (select Description from Products where
Products.Code = new.Code)
   where
 rowid=new.rowid
   ;
end

It works, but it's unnecessarily slow, since it takes a while to look
up the huge Products table (which is actually a UNION ALL of various
supplier catalogs), and it's looking it up for each updating field
(and I have more fields to lookup than shown in this example). It
would be more efficient to look it up once to find the corresponding
product (according to Products.Code = new.Code), but I'm stumped as
to how to do that.

I tried:

create trigger Update_Sale_Products_Code
after update of Code
on Sale_Products
begin
   update Sale_Products
 set
   Buy = (select Buy from Products)
 , Sell = (select Sell from Products)
 , Description = (select Description from Products)
   where
 rowid=new.rowid
 and Products.Code = new.Code
   ;
end

But that fails, and seems a bit ambiguous anyway. It seems to need
some kind of JOIN, but I can't see provision for it in the UPDATE
syntax.

There must be a much simpler way that I'm overlooking. Please
enlighten me.




Why are you repeating the Code, Buy, Sell, and Description columns in
the Sale_Products table when they already exist in the Products
table?

I can't decipher what the Buy and the Sell columns are supposed to
hold in your scenario, but here is what I would do (you can always add
other columns, but these would be the minimum necessary, and these
assume that the "buy" column is the price I paid for the item, and
"sell" column is the price I get for it) --

CREATE TABLE products (
 product_id INTEGER PRIMARY KEY,
 buy REAL,
 sell REAL,
 desc TEXT
);

CREATE TABLE customers (
 customer_id INTEGER PRIMARY KEY,
 .. other customer info columns ..
);

CREATE TABLE sales (
 sale_id INTEGER PRIMARY KEY,
 product_id INTEGER,   -- FK to products table
 customer_id INTEGER   -- FK to customers table
);

Then, as a sale is made to a customer, I would

INSERT INTO sales (product_id, customer_id) VALUES (?, ?);

and fill up the bindvals with the customer_id I picked up from the
customer's log in or however you identified the customer, and the
product_id from my application.
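The normalized flow above might look like this with Python's sqlite3 module (a sketch; the customers table's name column and the sample values are illustrative assumptions):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE products (product_id INTEGER PRIMARY KEY,
                       buy REAL, sell REAL, "desc" TEXT);
CREATE TABLE customers (customer_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE sales (sale_id INTEGER PRIMARY KEY,
                    product_id INTEGER, customer_id INTEGER);
INSERT INTO products VALUES (243, 2.0, 2.4, 'soap');
INSERT INTO customers VALUES (7, 'Tom');
""")

# Record a sale exactly as described: bind the two ids, duplicate nothing.
con.execute("INSERT INTO sales (product_id, customer_id) VALUES (?, ?)",
            (243, 7))

# A join recovers the product details without storing them twice.
sold = con.execute("""
SELECT p."desc", p.sell, c.name
FROM sales s
JOIN products p ON p.product_id = s.product_id
JOIN customers c ON c.customer_id = s.customer_id
""").fetchone()
print(sold)  # ('soap', 2.4, 'Tom')
```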

There are no duplicates above, everything is normalized properly. Also
notice the naming of the columns... there is a consistency: the _id column
is always the first column in a table, and is always INTEGER PRIMARY KEY.
To another reader, it is very clear what is happening without
requiring lots of explanations. And, no TRIGGERs are involved.

--
Puneet Kishor http://punkish.eidesis.org/
Nelson Inst. for Env. Studies, UW-Madison http://www.nelson.wisc.edu/
Open Source Geospatial Foundation http://www.osgeo.org/education/
S&T Policy Fellow, National Academy of Sciences http://www.nas.edu/
-
collaborate, communicate, compete
=




Re: [sqlite] Help with compiling 3.3.17 version for WinCE

2007-06-18 Thread Nuno Lucas

On 6/16/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:



I've downloaded the amalgamation (single .c file) version of SQLite 3.3.17 and
I'm trying to compile it using Embedded Visual C++ 3.0, but I'm getting some
compile errors such as:

fatal error C1083: Cannot open include file: 'assert.h': No such file or directory

Are there any special settings I need to make to compile the amalgamation
version of the code? Thanks for your help.

Dave


You are compiling it using an old SDK. Only the SDKs for WinCE 4.0 and
above include "assert.h".

If you want to use SQLite on WinCE 3.0, you may want to look at the
old sqlite-wince.sf.net code, which includes compatibility headers for
assert.h and time.h (and support for WinCE 2.x using the legacy 2.8.x
branch).


Regards,
~Nuno Lucas




Re: [sqlite] Trigger update of multiple columns

2007-06-18 Thread John Stanton
A traditional goal in database design is to place data in "Third Normal
Form", which means in essence that each data element is only stored in
one place. Accesses join the rows to deliver data.


A normalized database does not hold redundant data, and changing the
value of one element changes its value everywhere.


You are proposing a database which is more like "First Normal Form", and
that approach is deprecated.


T&B wrote:

Hi John,

Thanks for your reply.

You would have a better database if you normalize it and not do  what 
you propose.



Meaning what, exactly?

Obviously, I've only given rough indications as to my data source,  such 
as:


the huge Products table (which is actually a UNION ALL of various  
supplier catalogs)



without detailing where all the data in there comes from, since that  
would distract too much from my question. But suffice to say that  
"normalizing" beyond the current structure is not straightforward.


In any case, at least in my ignorance of what you propose, it's  beside 
the point.


In essence, I'm asking: is it possible to update multiple columns in  a 
row, where all those values come from a single related row, without  
SQLite having to find (eg via SELECT) that related row multiple times?


Or, put another way, I want to get SQLite to:

1. Locate the related row.

2. Grab the desired columns from that row, putting each into the row
being updated.


I hope that clarifies.

Thanks,
Tom











[sqlite] Journal File Optimization

2007-06-18 Thread Andre du Plessis
How can one optimize the creation of the journal file. The problem is
this, for our system which is an event based one each message needs to
be inserted and committed to the database (guaranteed), this results in
a commit per insert, this was obviously unacceptably slow and according
to the lists by design you need to do more bulk transactions, and with
some effort I modified it with a combination of bulk and temporary
tables to do quite a few at a time, but the problem remains that
committing is terribly slow. 

 

I'm thinking the journal file, the fact that it is created written and
deleted each time which is slowing it down, Is there not a way to create
and set up a permanent journal file pre-allocated to a certain size, so
sqlite does not have to go through the OS each time to create and delete
this file?

 

Along the same lines of this question, is there a way to fix the initial
size of the DB, ie set it to pre-allocate 700mb for instance so that no
growing of the db file is needed until the space is exceeded, may also
speed things up.

 

 

Thank you very much in advance.