Re: [sqlite] Difficulty with sqlite3.3.6 under VisualStudio.net

2006-09-19 Thread thomas . l
Hi Dixon

Dixon Hutchinson wrote:
> I am getting lots of errors of the sort:
>
> 'size_t' to 'int', possible loss of data
>
> I could just turn off the warnings, but that seems kind of reckless.
> The output from the compile is attached.

Try this (though it is not strictly necessary):
#pragma warning( disable : 4267)

Take a look at my homepage. There is a working sample for building the libraries.

Best Regards
Thomas

-- 
www.thlu.de


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] How to speed up the performance in LIMIT & OFFSET?

2006-09-19 Thread Michael Scharf

Hi,

I had a similar problem at a larger scale. One trick is to create
a temporary table from where you can fetch the rows quickly:

  Create Table foo(id INTEGER PRIMARY KEY AUTOINCREMENT, x TEXT);
  create index idx_foo on foo(x);

  insert into foo(x) values('text001');
  :
  :
  insert into foo(x) values('text300');

Now create a temp table on your query:

  Create Table tmp as select distinct x from foo;

This table has the nice property that the (implicitly
created) OID is in order; the OID starts at 1.
But now you can say:

  Select x from tmp where OID>0 and OID<=20;
  Select x from tmp where OID>20 and OID<=30;
  ...

The calculation of the size of the table is
also very fast:

  Select max(OID) from tmp;

This is *much* faster for huge tables than

  Select count(*) from tmp;

Max simply jumps to the last record, whereas
count has to touch every record.

Don't modify the temp table: drop it and re-create it
instead. Otherwise the trick with the ordered OID
won't work.
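
A minimal sketch of the trick from Python's sqlite3 module (table and data are invented to match the example; `tmp`'s rowid plays the role of the OID):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE foo (id INTEGER PRIMARY KEY AUTOINCREMENT, x TEXT)")
# 600 rows, 300 distinct values of x.
con.executemany("INSERT INTO foo (x) VALUES (?)",
                [("text%03d" % (i % 300),) for i in range(600)])

# Materialize the DISTINCT result once; the temp table's rowid (OID) is
# assigned 1..N in insertion order, so it can be used for cheap paging.
con.execute("CREATE TEMP TABLE tmp AS SELECT DISTINCT x FROM foo")

# max(rowid) just reads the last entry -- no full scan like count(*).
total = con.execute("SELECT max(rowid) FROM tmp").fetchone()[0]
page = con.execute("SELECT x FROM tmp WHERE rowid > 20 AND rowid <= 40").fetchall()
print(total, len(page))  # 300 20
```

Each page is then a plain rowid range scan rather than a re-run of the DISTINCT query.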

BTW: how can 500 records be a memory problem?

Michael



I have a problem with LIMIT & OFFSET performance.
Due to the limitation of memory, I could not get all of the query result
at a time. In our solution, we use LIMIT and OFFSET to avoid the memory
issue.

We observed that the performance of LIMIT & OFFSET looks like a linear
growth of the response time. In our table, it only has 300~500 records.



Here is the dummy script
Create Table foo(id INTEGER PRIMARY KEY AUTOINCREMENT, x TEXT);
create index idx_foo on foo(x);

insert into foo(x) values('text001');
:
:
insert into foo(x) values('text300');


Select distinct x from foo LIMIT 20 OFFSET 0;
Select distinct x from foo LIMIT 20 OFFSET 20;
Select distinct x from foo LIMIT 20 OFFSET 40;
Select distinct x from foo LIMIT 20 OFFSET 60;
:
:
Select distinct x from foo LIMIT 20 OFFSET 280;



Would you please help to tell me how to improve that?

Thanks for your great help.


Thanks,
VK




--
http://MichaelScharf.blogspot.com/





RE: [sqlite] Difficulty with sqlite3.3.6 under VisualStudio.net

2006-09-19 Thread Richard Stern
The warnings are normal; unless you want to rewrite sqlite, there's not much
you can do about them.

Not sure about the run-time check failure.

 

 


From: Dixon Hutchinson [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, 20 September 2006 8:41 AM
To: sqlite-users@sqlite.org
Subject: [sqlite] Difficulty with sqlite3.3.6 under VisualStudio.net

 

I am having difficulty getting a clean build under Visual Studio.  I believe
I have followed the advice given at
http://www.sqlite.org/cvstrac/wiki?p=VsNetSolution.

I am getting lots of errors of the sort:

...\\sqlite3\\vdbemem.c(194) : warning C4267: '=' : conversion from 'size_t'
to 'int', possible loss of data
...\\sqlite3\\vdbemem.c(319) : warning C4244: '=' : conversion from 'double'
to 'i64', possible loss of data
...\\sqlite3\\vdbemem.c(779) : warning C4311: 'type cast' : pointer
truncation from 'char *' to 'int'
...\\sqlite3\\vdbeaux.c(531) : warning C4267: 'initializing' : conversion
from 'size_t' to 'int', possible loss of data
...\\sqlite3\\vdbeaux.c(564) : warning C4018: '<=' : signed/unsigned
mismatch
...\\sqlite3\\vdbeaux.c(659) : warning C4267: '=' : conversion from 'size_t'
to 'int', possible loss of data


I could just turn off the warnings, but that seems kind of reckless. The
output from the compile is attached.

I am also getting:

Run-Time Check Failure #2 - Stack around the variable 'ts' was corrupted.

This is in vdbe.c, although it is not at the same variable every time.  I am
running a single-threaded app.  This is what has led me to want to fix the
compiler warnings.

Any suggestions?  I'm hoping for a recommended change in my preprocessor
settings.  Drastic modifications to sqlite do not seem appropriate.
Current preprocessor definitions are:

WIN32
WINVER=0x0500
_WIN32_WINNT=0x0500
_DEBUG
_WINDOWS
_USRDLL
NO_TCL
SQLITE3_EXPORTS
_WINDLL
_UNICODE
UNICODE



[sqlite] Difficulty with sqlite3.3.6 under VisualStudio.net

2006-09-19 Thread Dixon Hutchinson




I am having difficulty getting a clean build under Visual Studio.  I
believe I have followed the advice given at
http://www.sqlite.org/cvstrac/wiki?p=VsNetSolution.

I am getting lots of errors of the sort:
...\\sqlite3\\vdbemem.c(194) : warning C4267: '=' :
conversion from 'size_t' to 'int', possible loss of data
  ...\\sqlite3\\vdbemem.c(319) : warning C4244: '=' : conversion
from 'double' to 'i64', possible loss of data
  ...\\sqlite3\\vdbemem.c(779) : warning C4311: 'type cast' :
pointer truncation from 'char *' to 'int'
  ...\\sqlite3\\vdbeaux.c(531) : warning C4267: 'initializing' :
conversion from 'size_t' to 'int', possible loss of data
  ...\\sqlite3\\vdbeaux.c(564) : warning C4018: '<=' :
signed/unsigned mismatch
  ...\\sqlite3\\vdbeaux.c(659) : warning C4267: '=' : conversion
from 'size_t' to 'int', possible loss of data


I could just turn off the warnings, but that seems kind of reckless.
The output from the compile is attached.

I am also getting:
Run-Time Check Failure #2 - Stack around the variable
'ts' was corrupted.

This is in vdbe.c, although it is not at the same variable every time.
I am running a single-threaded app.  This is what has led me to want
to fix the compiler warnings.

Any suggestions?  I'm hoping for a recommended change in my
preprocessor settings.  Drastic modifications to sqlite do not seem
appropriate.  Current preprocessor definitions are:

WIN32
  WINVER=0x0500
  _WIN32_WINNT=0x0500
  _DEBUG
  _WINDOWS
  _USRDLL
  NO_TCL
  SQLITE3_EXPORTS
  _WINDLL
  _UNICODE
  UNICODE



Deleting intermediate files and output files for project 'sqlite3', 
configuration 'Debug|Win32'.
Compiling...
where.c
vdbemem.c
...\\sqlite3\\vdbemem.c(194) : warning C4267: '=' : conversion from 'size_t' to 
'int', possible loss of data
...\\sqlite3\\vdbemem.c(319) : warning C4244: '=' : conversion from 'double' to 
'i64', possible loss of data
...\\sqlite3\\vdbemem.c(491) : warning C4267: '=' : conversion from 'size_t' to 
'int', possible loss of data
...\\sqlite3\\vdbemem.c(557) : warning C4244: '=' : conversion from 'const i64' 
to 'double', possible loss of data
...\\sqlite3\\vdbemem.c(562) : warning C4244: '=' : conversion from 'const i64' 
to 'double', possible loss of data
...\\sqlite3\\vdbemem.c(779) : warning C4311: 'type cast' : pointer truncation 
from 'char *' to 'int'
...\\sqlite3\\vdbemem.c(859) : warning C4267: '=' : conversion from 'size_t' to 
'int', possible loss of data
vdbefifo.c
vdbeaux.c
...\\sqlite3\\vdbeaux.c(482) : warning C4267: '=' : conversion from 'size_t' to 
'int', possible loss of data
...\\sqlite3\\vdbeaux.c(527) : warning C4267: '=' : conversion from 'size_t' to 
'int', possible loss of data
...\\sqlite3\\vdbeaux.c(531) : warning C4267: 'initializing' : conversion from 
'size_t' to 'int', possible loss of data
...\\sqlite3\\vdbeaux.c(564) : warning C4018: '<=' : signed/unsigned mismatch
...\\sqlite3\\vdbeaux.c(659) : warning C4267: '=' : conversion from 'size_t' to 
'int', possible loss of data
...\\sqlite3\\vdbeaux.c(676) : warning C4267: '=' : conversion from 'size_t' to 
'int', possible loss of data
...\\sqlite3\\vdbeaux.c(862) : warning C4244: 'function' : conversion from 
'__w64 int' to 'int', possible loss of data
...\\sqlite3\\vdbeaux.c(1034) : warning C4267: 'function' : conversion from 
'size_t' to 'int', possible loss of data
...\\sqlite3\\vdbeaux.c(1595) : warning C4244: 'return' : conversion from 'i64' 
to 'u32', possible loss of data
...\\sqlite3\\vdbeaux.c(1651) : warning C4244: '=' : conversion from 'u64' to 
'unsigned char', possible loss of data
...\\sqlite3\\vdbeaux.c(1813) : warning C4018: '>=' : signed/unsigned mismatch
...\\sqlite3\\vdbeaux.c(1815) : warning C4018: '>=' : signed/unsigned mismatch
...\\sqlite3\\vdbeaux.c(1841) : warning C4018: '<' : signed/unsigned mismatch
...\\sqlite3\\vdbeaux.c(1843) : warning C4018: '<' : signed/unsigned mismatch
...\\sqlite3\\vdbeaux.c(1887) : warning C4244: 'function' : conversion from 
'i64' to 'int', possible loss of data
...\\sqlite3\\vdbeaux.c(1926) : warning C4244: 'function' : conversion from 
'i64' to 'int', possible loss of data
vdbeapi.c
...\\sqlite3\\vdbeapi.c(55) : warning C4244: 'return' : conversion from 'i64' 
to 'int', possible loss of data
...\\sqlite3\\vdbeapi.c(152) : warning C4047: 'initializing' : 'DWORD' differs 
in levels of indirection from 'HANDLE'
...\\sqlite3\\vdbeapi.c(162) : warning C4047: '==' : 'HANDLE' differs in levels 
of indirection from 'DWORD'
...\\sqlite3\\vdbeapi.c(208) : warning C4244: '=' : conversion from 'double' to 
'i64', possible loss of data
...\\sqlite3\\vdbeapi.c(245) : warning C4244: '=' : conversion from 'double' to 
'u64', possible loss of data
vdbe.c
...\\sqlite3\\vdbe.c(665) : warning C4267: '=' : conversion from 'size_t' to 
'int', possible loss of data
...\\sqlite3\\vdbe.c(680) : warning C4267: '=' : conversion from 'size_t' to 
'int', possible loss of data
...\\sqlite3\\vdbe.c(696) : warning C4267: '=' : conversion from 'size_t' to 

Re: [sqlite] Optimistic concurrency control

2006-09-19 Thread Christian Smith

Mikey C uttered:



Hi,

Maybe I didn't make the question clear.  I'm not talking about locking and
multiple writers.  I'm talking about optimistic concurrency control in a
disconnected environment.

Two processes (say a webserver).  One reads some data and presents it to a
user (open - read - close).  The other reads the same data and presents
it to another user (open - read - close).  The first user updates the data
(open - write - close).  Several seconds/minutes later the second user
updates the same data (open - write - close).  The result is that the first
user's changes are lost.



Aha. OK, I see where you're coming from.




ALL I am asking is could SQLite give each table a special column that
increases its value for each row whenever data changes?

I could implement it with a trigger on each table BUT it would be nice if
SQLite supported this natively.

Anyhow, I get from the tone of the answers that this is not likely to
happen, so I'll code it up myself.



Sorry if I sounded arsy. I wasn't trying to, I was just trying to be 
succinct.





Cheers,

Mike



Christian


--
/"\
\ /ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
 X   - AGAINST MS ATTACHMENTS
/ \




Re: [sqlite] Optimistic concurrency control

2006-09-19 Thread Thomas . L
Hello

On Tue, 19 Sep 2006 11:24:02 -0700 (PDT), you wrote:

>Maybe I didn't make the question clear.  I'm not talking about locking and
>multiple writers.  I'm talking about optimistic concurrency control in a
>disconnected environment.  
>
>IF anyone has changed the data since you last read it, the UPDATE affects no
>rows and you know your update failed due to optimistic concurrency failure.

I solved this problem with logical record locking. I connect to a
SQL Server DB always in optimistic mode, and I open my SQLite DB
(and any other DB) as if the same were true there. The solution is
logical record locking, which is what I do before I try to alter a record.

I check whether the attempt to write is permitted, to avoid one user
overwriting the changes of another user.

To do this, after fetching the wanted record I store a timestamp in a
specific table in an external DB (a special key collection).
The timestamp contains:
- the user's ID (the user's domain ID)
- the workstation ID (the network computer name)
- the operating system's process ID
- an initial timestamp with the date and the seconds elapsed that day
- a heartbeat timestamp (every process refreshes its own locks every
   10 minutes)
- a class or data ID, and
- the record's ID (such as a personnel number, not the RecID). This is
   needed to also lock records in several child tables with the same
   personnel number.

Once this full timestamp has been successfully written to the
record-locking table, I search for another record with the same tokens.
If I find one whose initial timestamp is earlier than mine, only reading
is allowed to me. If my own timestamp is the earliest, I can write and
every other user can only read.

When this "edit class" is closed, or the program ends, the lock is
removed. When moving to another record, the timestamps and the record
ID are updated; when changing to another class, the class ID is updated
as well.

From time to time I search for dead or lost locks and remove them. Dead
or lost locks are locks whose heartbeat is older than 2 hours.

In this way I also lock whole tables or the full DB. This works really
well in a network environment with up to 1500 concurrent users.
The real advantage is that permitted jobs (started by an
admin) can get and hold full rights.
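
A minimal sketch of such a lock table in Python's sqlite3 (all table, column, and user names are invented here; the post describes keeping the lock collection in a separate DB):

```python
import os
import socket
import sqlite3
import time

con = sqlite3.connect(":memory:")
# Hypothetical lock table holding the fields described above.
con.execute("""
    CREATE TABLE record_lock (
        user_id     TEXT,     -- the user's domain ID
        workstation TEXT,     -- network computer name
        pid         INTEGER,  -- OS process ID
        acquired_at REAL,     -- initial timestamp
        heartbeat   REAL,     -- refreshed periodically by the holder
        class_id    TEXT,     -- class or data ID
        record_key  TEXT      -- e.g. a personnel number, not the rowid
    )""")

def try_lock(con, user, class_id, record_key):
    """Write our timestamp, then check whether an earlier lock exists:
    the earliest timestamp wins write permission, everyone else reads."""
    now = time.time()
    con.execute("INSERT INTO record_lock VALUES (?,?,?,?,?,?,?)",
                (user, socket.gethostname(), os.getpid(), now, now,
                 class_id, record_key))
    earlier = con.execute(
        "SELECT count(*) FROM record_lock"
        " WHERE class_id = ? AND record_key = ? AND acquired_at < ?",
        (class_id, record_key, now)).fetchone()[0]
    return earlier == 0

first = try_lock(con, "alice", "personnel", "4711")
time.sleep(0.01)  # ensure the second attempt gets a later timestamp
second = try_lock(con, "bob", "personnel", "4711")
print(first, second)  # True False
```

Stale-lock cleanup would then be a periodic DELETE of rows whose heartbeat is older than the chosen cutoff.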

Best Regards
Thomas

www.thlu.de





Re: [sqlite] how sqlite works?

2006-09-19 Thread Cesar David Rodas Maldonado

Thanks for your answer Ms. Smith!

On 9/19/06, Christian Smith <[EMAIL PROTECTED]> wrote:


Cesar David Rodas Maldonado uttered:

> I mean, if SQLite has two very large indexes (about 10.000.000 entries
> each), how do I merge them? I mean (index1 = index2 for every one, and
> limit it to a thousand).
>
> Understand?


I guess not.

Are you trying to do an INNER JOIN merger on the two indexes? SQLite does
this by having a nested loop, with the equality test as the action of the
inner loop:

for each row in x {
   for each row in y matching x.row {
 # Do whatever for the inner join
   }
}

If you're trying to do the equivalent of a UNION select, then this is just a
standard merge sort using the two indexes as the sources.
http://en.wikipedia.org/wiki/Merge_sort


>
>
> On 9/15/06, Dennis Cote <[EMAIL PROTECTED]> wrote:
>>
>> Cesar David Rodas Maldonado wrote:
>> > Is there a document about how the SQLite Virtual Machine works (papers)? I
>> > would
>> > like to do something similar with B-Tree, B+ Tree but I don't know how to
>> > merge
>> > a select with two indexes? Understand my question??
>> >
>> > Please answer me
>> >
>> see the links VDBE Tutorial and VDBE Opcodes near the bottom of the
>> documentation page http://www.sqlite.org/docs.html
>>
>> Dennis Cote
>>
>

--
 /"\
 \ /ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
  X   - AGAINST MS ATTACHMENTS
 / \






Re: [sqlite] Optimistic concurrency control

2006-09-19 Thread Mikey C

Hi,

Maybe I didn't make the question clear.  I'm not talking about locking and
multiple writers.  I'm talking about optimistic concurrency control in a
disconnected environment.  

Two processes (say a webserver).  One reads some data and presents it to a
user (open - read - close).  The other reads the same data and presents
it to another user (open - read - close).  The first user updates the data
(open - write - close).  Several seconds/minutes later the second user
updates the same data (open - write - close).  The result is that the first
user's changes are lost.

You can of course create a complex WHERE clause in all your SQL UPDATE
statements so that an update only succeeds in changing a row if all the
column values match the original values, e.g.

UPDATE ...
SET col1 = new_value_1,
    col2 = new_value_2
WHERE col1 = old_value_1
  AND col2 = old_value_2
etc.


BUT (and here's what I'm asking) many databases (e.g. MS SQL Server) have a
special data type or column in each table for each row which is an
incrementing value.  Whenever a row is written to, this value
changes/increases.

Hence your where clause need only include:

WHERE row_version_column = old_row_version_value

IF anyone has changed the data since you last read it, the UPDATE affects no
rows and you know your update failed due to optimistic concurrency failure.

ALL I am asking is could SQLite give each table a special column that
increases its value for each row whenever data changes?

I could implement it with a trigger on each table BUT it would be nice if
SQLite supported this natively.

Anyhow, I get from the tone of the answers that this is not likely to
happen, so I'll code it up myself.
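
For what it's worth, the trigger version is only a few lines; a hedged sketch via Python's sqlite3 (table and trigger names invented; relies on SQLite's default of non-recursive triggers, so the trigger's own UPDATE does not re-fire it):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE doc (
        id          INTEGER PRIMARY KEY,
        body        TEXT,
        row_version INTEGER NOT NULL DEFAULT 0
    );
    -- Bump the version on every update of a row.
    CREATE TRIGGER doc_version AFTER UPDATE ON doc
    BEGIN
        UPDATE doc SET row_version = row_version + 1 WHERE id = NEW.id;
    END;
    INSERT INTO doc (id, body) VALUES (1, 'first draft');
""")

# Both "users" read version 0 while disconnected.
ver = con.execute("SELECT row_version FROM doc WHERE id = 1").fetchone()[0]

# First writer succeeds: the stored version still matches.
a = con.execute("UPDATE doc SET body = 'edit A' WHERE id = 1 AND row_version = ?",
                (ver,)).rowcount
# Second writer loses: the trigger already bumped the version, so no row matches.
b = con.execute("UPDATE doc SET body = 'edit B' WHERE id = 1 AND row_version = ?",
                (ver,)).rowcount
print(a, b)  # 1 0
```

Checking the affected-row count after the UPDATE is what tells the application an optimistic-concurrency conflict occurred.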

Cheers,

Mike

Christian Smith-4 wrote:
> 
> Mikey C uttered:
> 
>>
>> What are people's thoughts on implementing optimistic concurrency control
>> in
>> SQLite?
> 
> 
> Not an option. SQLite has a single writer database locking protocol which 
> can't handle multiple writers, so the issue of concurrency control is 
> moot.
> 
> 
>>
>> One way is to modify the where clause to compare every column being updated,
>> old value to new value.  This makes the SQL cumbersome.
>>
>> Microsoft SQL Server has a column data type named TIMESTAMP/ROWVERSION
>> which
>> is a database-wide unique number that gets updated every time a row gets
>> updated.  Effectively it is a hash of all the current values in every row
>> of
>> the table and is updated automatically.
>>
>> Hence to see if any row has been updated by another person you just have
>> to
>> compare the  TIMESTAMP/ROWVERSION  value you read with the one currently
>> in
>> the table in the UPDATE where clause.
>>
>>
>>
>> Q. Does SQLite have such a capability?  Can we have one please? If not, is
>> it
>> easy to simulate one?  If not, how do people manage concurrency in
>> applications such as web sites?
>>
> 
> A. No. Probably not. Probably not. Use a client/server DB such as
> PostgreSQL which already has multiple version concurrency control.
> 
> Right tool for the job. If it's multiple concurrent writers, SQLite isn't 
> it.
> 
> 
> Christian
> 
> 
> 
> --
>  /"\
>  \ /ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
>   X   - AGAINST MS ATTACHMENTS
>  / \
> 
> 
> 
> 

-- 
View this message in context: 
http://www.nabble.com/Optimistic-concurrency-control-tf2299903.html#a6394076
Sent from the SQLite mailing list archive at Nabble.com.





[sqlite] speeding SQLite on a cross-join over two tables

2006-09-19 Thread P Kishor

I have the following two tables and related indexes --

   CREATE TABLE IF NOT EXISTS pt (
     id   INTEGER PRIMARY KEY,
     x    REAL,
     y    REAL,
     attr TEXT
   )

   CREATE INDEX ix_pt_x ON pt (x)
   CREATE INDEX ix_pt_y ON pt (y)

   CREATE TABLE IF NOT EXISTS py (
 id  INTEGER PRIMARY KEY,
 xmin  REAL,
 ymin  REAL,
 xmax  REAL,
 ymax  REAL,
 attr  TEXT
   )

   CREATE INDEX ix_py ON py (xmin, ymin, xmax, ymax)

I want to UPDATE pt SETting pt.attr = py.attr WHEREver the Count of
(pt.x  BETWEEN py.xmin AND py.xmax  AND pt.y  BETWEEN py.ymin AND
py.ymax) is 1.

So, I have come up with the following way -- I have created a view to
do my SELECTs

   CREATE VIEW v_attr AS
     SELECT pt.id AS pt_id, Count(pt.id) AS pt_id_count, py.attr AS py_attr
     FROM pt JOIN py ON
       (
         (pt.x BETWEEN py.xmin AND py.xmax) AND
         (pt.y BETWEEN py.ymin AND py.ymax)
       )
     GROUP BY pt_id
     HAVING pt_id_count = 1

Well, any SELECTs from the view v_attr take forever. Anyway to speed it up?

If all of the above works, then I can do the following UPDATE --

   UPDATE pt
   SET attr = (
     SELECT py_attr
     FROM v_attr
     WHERE v_attr.pt_id = pt.id
   )

And so I ask you, does this make sense? Is my JOIN okay? One local wag
was saying that JOINs can only be done using '=', not BETWEEN.
Nevertheless, perhaps a SQL guru on this list can guide me on this
task.
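
On the narrower question of whether BETWEEN is allowed in a join condition: it is. A tiny sketch via Python's sqlite3 (toy data, invented values) that joins points to bounding boxes:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE pt (id INTEGER PRIMARY KEY, x REAL, y REAL, attr TEXT);
    CREATE TABLE py (id INTEGER PRIMARY KEY,
                     xmin REAL, ymin REAL, xmax REAL, ymax REAL, attr TEXT);
    INSERT INTO pt (x, y) VALUES (1.5, 1.5), (5.0, 5.0);
    INSERT INTO py (xmin, ymin, xmax, ymax, attr) VALUES (1, 1, 2, 2, 'inside');
""")
# The ON clause is an arbitrary boolean expression; '=' is not required.
rows = con.execute("""
    SELECT pt.id, py.attr
      FROM pt JOIN py
        ON (pt.x BETWEEN py.xmin AND py.xmax)
       AND (pt.y BETWEEN py.ymin AND py.ymax)
""").fetchall()
print(rows)  # only the point inside the box matches
```

Only the point (1.5, 1.5) falls inside the single box, so one row comes back.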


--
Puneet Kishor http://punkish.eidesis.org/
Nelson Inst. for Env. Studies, UW-Madison http://www.nelson.wisc.edu/
Open Source Geospatial Foundation https://edu.osgeo.org/




Re: [sqlite] how sqlite works?

2006-09-19 Thread Christian Smith

Cesar David Rodas Maldonado uttered:


I mean, if SQLite has two very large indexes (about 10.000.000 entries
each), how do I merge them? I mean (index1 = index2 for every one, and
limit it to a thousand).

Understand?



I guess not.

Are you trying to do an INNER JOIN merger on the two indexes? SQLite does 
this by having a nested loop, with the equality test as the action of the 
inner loop:


for each row in x {
  for each row in y matching x.row {
# Do whatever for the inner join
  }
}

If you're trying to do the equivalent of a UNION select, then this is just a 
standard merge sort using the two indexes as the sources.

http://en.wikipedia.org/wiki/Merge_sort
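
Both ideas fit in a few lines of plain Python (a toy sketch, not SQLite internals):

```python
def nested_loop_join(xs, ys):
    # Nested-loop inner join: outer loop over x, inner loop over rows of y
    # matching x, with the equality test as the inner action.
    out = []
    for x in xs:
        for y in ys:
            if x == y:
                out.append((x, y))
    return out

def merge_sorted(a, b):
    # The merge step of merge sort: one linear pass over two sorted inputs,
    # which is how a UNION over two ordered indexes can be produced.
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    return out + a[i:] + b[j:]

joined = nested_loop_join([1, 2, 3], [2, 3, 4])
merged = merge_sorted([1, 3, 5], [2, 3, 6])
print(joined)  # [(2, 2), (3, 3)]
print(merged)  # [1, 2, 3, 3, 5, 6]
```

In a real engine the inner loop would be an index probe rather than a full scan, but the control flow is the same.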





On 9/15/06, Dennis Cote <[EMAIL PROTECTED]> wrote:


Cesar David Rodas Maldonado wrote:
> Is there a document about how the SQLite Virtual Machine works (papers)? I
> would
> like to do something similar with B-Tree, B+ Tree but I don't know how to
> merge
> a select with two indexes? Understand my question??
>
> Please answer me
>
see the links VDBE Tutorial and VDBE Opcodes near the bottom of the
documentation page http://www.sqlite.org/docs.html

Dennis Cote









--
/"\
\ /ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
 X   - AGAINST MS ATTACHMENTS
/ \




Re: [sqlite] Optimistic concurrency control

2006-09-19 Thread Christian Smith

Mikey C uttered:



What are people's thoughts on implementing optimistic concurrency control in
SQLite?



Not an option. SQLite has a single writer database locking protocol which 
can't handle multiple writers, so the issue of concurrency control is 
moot.





One way is to modify the where clause to compare every column being updated,
old value to new value.  This makes the SQL cumbersome.

Microsoft SQL Server has a column data type named TIMESTAMP/ROWVERSION which
is a database-wide unique number that gets updated every time a row gets
updated.  Effectively it is a hash of all the current values in every row of
the table and is updated automatically.

Hence to see if any row has been updated by another person you just have to
compare the  TIMESTAMP/ROWVERSION  value you read with the one currently in
the table in the UPDATE where clause.



Q. Does SQLite have such a capability?  Can we have one please? If not, is it
easy to simulate one?  If not, how do people manage concurrency in
applications such as web sites?



A. No. Probably not. Probably not. Use a client/server DB such as
   PostgreSQL which already has multiple version concurrency control.

Right tool for the job. If it's multiple concurrent writers, SQLite isn't 
it.



Christian



--
/"\
\ /ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
 X   - AGAINST MS ATTACHMENTS
/ \




Re: [sqlite] How to speed up the performance in LIMIT & OFFSET?

2006-09-19 Thread Christian Smith

PY uttered:


Hi All,
I have a problem with LIMIT & OFFSET performance.
Due to the limitation of memory, I could not get all of the query result
at a time. In our solution, we use LIMIT and OFFSET to avoid the memory
issue.

We observed that the performance of LIMIT & OFFSET looks like a linear
growth of the response time. In our table, it only has 300~500 records.



Here is the dummy script
Create Table foo(id INTEGER PRIMARY KEY AUTOINCREMENT, x TEXT);
create index idx_foo on foo(x);

insert into foo(x) values('text001');
:
:
insert into foo(x) values('text300');


Select distinct x from foo LIMIT 20 OFFSET 0;
Select distinct x from foo LIMIT 20 OFFSET 20;
Select distinct x from foo LIMIT 20 OFFSET 40;
Select distinct x from foo LIMIT 20 OFFSET 60;
:
:
Select distinct x from foo LIMIT 20 OFFSET 280;



Would you please help to tell me how to improve that?



I could only observe the issue on the slowest machine I have (SUN 
SPARCclassic). I guess this is an embedded device, based on the fact 
you're also limited in memory.


Using your query, the index for x is ignored. To do the "DISTINCT" 
processing, SQLite creates a temporary table which is populated with 
distinct rows from the base table. This is done using a full table scan, 
so the index wouldn't help anyway.


Your best bet, if you always want distinct x, is to add a UNIQUE 
constraint to x:

CREATE TABLE foo (id INTEGER PRIMARY KEY, x TEXT UNIQUE);

Now your selects, minus the distinct clause, will run in constant time:
SELECT x FROM foo ORDER BY x LIMIT 20 OFFSET 60;

Note, using "ORDER BY" will make SQLite use the implicit index on x, and 
also ensure the ordering of the result set, which is not defined 
otherwise.
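
A quick check of this advice from Python's sqlite3 module (data invented to mirror the dummy script):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# UNIQUE on x gives an implicit index and makes DISTINCT unnecessary.
con.execute("CREATE TABLE foo (id INTEGER PRIMARY KEY, x TEXT UNIQUE)")
con.executemany("INSERT INTO foo (x) VALUES (?)",
                [("text%03d" % i,) for i in range(300)])

# ORDER BY x walks the implicit UNIQUE index; no DISTINCT pass, no temp table.
rows = con.execute("SELECT x FROM foo ORDER BY x LIMIT 20 OFFSET 60").fetchall()
print(rows[0][0], len(rows))  # text060 20
```

The ORDER BY also pins down the paging order, which a bare SELECT would leave undefined.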





Thanks for your great help.


Thanks,
VK



--
/"\
\ /ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
 X   - AGAINST MS ATTACHMENTS
/ \




[sqlite] Optimistic concurrency control

2006-09-19 Thread Mikey C

What are people's thoughts on implementing optimistic concurrency control in
SQLite?

One way is to modify the where clause to compare every column being updated,
old value to new value.  This makes the SQL cumbersome.

Microsoft SQL Server has a column data type named TIMESTAMP/ROWVERSION which
is a database-wide unique number that gets updated every time a row gets
updated.  Effectively it is a hash of all the current values in every row of
the table and is updated automatically.

Hence to see if any row has been updated by another person you just have to
compare the  TIMESTAMP/ROWVERSION  value you read with the one currently in
the table in the UPDATE where clause.



Q. Does SQLite have such a capability?  Can we have one please? If not, is it
easy to simulate one?  If not, how do people manage concurrency in
applications such as web sites?

-- 
View this message in context: 
http://www.nabble.com/Optimistic-concurrency-control-tf2299903.html#a6391291
Sent from the SQLite mailing list archive at Nabble.com.





Re: [sqlite] reg:blob data reading

2006-09-19 Thread Dennis Jenkins

Dennis Jenkins wrote:

Teg wrote:

Hello Dennis,
  I'm probably going to be offering optional encryption too. Why did
you choose to use the SQLite encryption extensions versus just
encrypting the blobs after you read them back in and before you write
them out?
  


1) We wanted the entire database encrypted.  There is sensitive 
non-blob data too.


2) Dr. Hipp's encryption extension is well tested and already 
integrated into sqlite.


3) The encryption is very transparent to the rest of our application.  
I don't have to manually call functions to look up keys and encrypt or 
decrypt blocks of data.




4) Updates to the blobs can now take advantage of the ACIDity of the 
sqlite engine.  (Is that a valid use of the acronym 'ACID'?  Gotta love 
the English language.  We can conjugate anything any way we want to.)






Re: [sqlite] reg:blob data reading

2006-09-19 Thread Dennis Jenkins

Teg wrote:

Hello Dennis,
  
I'm probably going to be offering optional encryption too. Why did
you choose to use the SQLite encryption extensions versus just
encrypting the blobs after you read them back in and before you write
them out?
  


1) We wanted the entire database encrypted.  There is sensitive non-blob 
data too.


2) Dr. Hipp's encryption extension is well tested and already integrated 
into sqlite.


3) The encryption is very transparent to the rest of our application.  I 
don't have to manually call functions to look up keys and encrypt or 
decrypt blocks of data.






Re: [sqlite] How to speed up the performance in LIMIT & OFFSET?

2006-09-19 Thread Jay Sprenkle

On 9/18/06, PY <[EMAIL PROTECTED]> wrote:

Hi All,
I have a problem with LIMIT & OFFSET performance.


Is there any chance you can use an index on your select?
If it has an index it might be able to calculate the position of the
offset and not have to read the entire result set. This is just a
guess though. I'm not sure if sqlite is able to do that.




Re: [sqlite] How to speed up the performance in LIMIT & OFFSET?

2006-09-19 Thread Chandrashekar H S

Hi,
Are you using precompiled queries? If not, try using them.
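
In Python's sqlite3 module, for instance, the idea looks like this (the module keeps a cache of compiled statements keyed by the SQL text, so a constant query string with a bound parameter is compiled once and reused):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE foo (id INTEGER PRIMARY KEY AUTOINCREMENT, x TEXT)")
con.executemany("INSERT INTO foo (x) VALUES (?)",
                [("text%03d" % i,) for i in range(300)])

# One SQL string, re-executed with a new bound offset per page, instead of
# building fresh SQL text (and re-parsing it) for every page.
query = "SELECT x FROM foo LIMIT 20 OFFSET ?"
pages = [con.execute(query, (offset,)).fetchall()
         for offset in range(0, 300, 20)]
print(len(pages), len(pages[-1]))  # 15 20
```

This saves the parse/compile cost per page, though it does not change the linear cost of skipping OFFSET rows.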


On 9/19/06, PY <[EMAIL PROTECTED]> wrote:


Hi All,
I have a problem with LIMIT & OFFSET performance.
Due to the limitation of memory, I could not get all of the query result
at a time. In our solution, we use LIMIT and OFFSET to avoid the memory
issue.

We observed that the performance of LIMIT & OFFSET looks like a linear
growth of the response time. In our table, it only has 300~500 records.



Here is the dummy script
Create Table foo(id INTEGER PRIMARY KEY AUTOINCREMENT, x TEXT);
create index idx_foo on foo(x);

insert into foo(x) values('text001');
:
:
insert into foo(x) values('text300');


Select distinct x from foo LIMIT 20 OFFSET 0;
Select distinct x from foo LIMIT 20 OFFSET 20;
Select distinct x from foo LIMIT 20 OFFSET 40;
Select distinct x from foo LIMIT 20 OFFSET 60;
:
:
Select distinct x from foo LIMIT 20 OFFSET 280;



Would you please help to tell me how to improve that?

Thanks for your great help.


Thanks,
VK




Re: [sqlite] Quotation handling bug?

2006-09-19 Thread He Shiming
- Original Message - 
From: "Robert Simpson" <[EMAIL PROTECTED]>

To: 
Sent: Tuesday, September 19, 2006 2:19 PM
Subject: RE: [sqlite] Quotation handling bug?

Single quotes are supposed to be used for string literals, and double
quotes/brackets for identifiers such as table names and column names.

SELECT 'ID' FROM 'MYTABLE' is selecting the literal string 'ID', not the
column.

I am not positive, but I think if you use single quotes around something,
SQLite will first try and treat it like a literal -- and if the SQL parser
is expecting an identifier where you've placed a literal, it will try and
re-evaluate it as an identifier instead.  So since the statement CREATE
TABLE 'MYTABLE' ('ID' INTEGER PRIMARY KEY) contains literals in places
identifiers are expected, SQLite treats them as identifiers instead of
literals.

Conversely, SELECT 'ID' FROM 'MYTABLE' is ambiguous in that 'ID' could mean
the literal string 'ID' or could mean an identifier.  In a SELECT clause the
string is first evaluated as a literal, and since literals are allowed in
the return columns of a SELECT, the literal code path is taken and there is
no need to try and evaluate it as an identifier.  The FROM 'MYTABLE' portion
is parsed later, but literals aren't allowed as a target in a FROM clause,
so 'MYTABLE' is treated as an identifier.

In short ... Don't use single-quotes around identifiers.  Use single-quotes
for string literals, and use double-quotes or brackets around identifiers so
your code is more readable and explicit.

Robert



That's very helpful. Thanks.

Best regards,
He Shiming 






RE: [sqlite] Quotation handling bug?

2006-09-19 Thread Robert Simpson
> -Original Message-
> From: He Shiming [mailto:[EMAIL PROTECTED] 
> Sent: Monday, September 18, 2006 10:02 PM
> To: sqlite-users@sqlite.org
> Subject: [sqlite] Quotation handling bug?
> 
> Hi,
> 
> I think I found a bug in sqlite version 3.3.7. The steps to 
> reproduce it are 
> as follows. I've tested it on Windows only.
> 
> C:\Something>sqlite3 newdb.db
> CREATE TABLE 'MYTABLE' ('ID' INTEGER PRIMARY KEY);
> INSERT INTO 'MYTABLE' ('ID') VALUES(1);
> INSERT INTO 'MYTABLE' ('ID') VALUES(2);
> INSERT INTO 'MYTABLE' ('ID') VALUES(3);
> 
> This is pretty straightforward. But when I try to fetch the data out...
> SELECT 'ID' FROM 'MYTABLE' WHERE 'ID'=2;  // no result
> SELECT 'ID' FROM 'MYTABLE' WHERE ID=2; // result is ID
> SELECT ID FROM 'MYTABLE' WHERE ID=2; // result is 2
> SELECT 'MYTABLE'.'ID' FROM 'MYTABLE' WHERE 'MYTABLE'.'ID'=2; 
> // result is 2
> 
> I guess, to make it safer, I'll have to use the last one. 
> However, the 
> behavior of the first one and the second one looks like 
> malfunctioning. The 
> four queries should produce completely equivalent results, 
> which is "2". Or 
> is it something I did wrong?

Single quotes are supposed to be used for string literals, and double
quotes/brackets for identifiers such as table names and column names.  

SELECT 'ID' FROM 'MYTABLE' is selecting the literal string 'ID', not the
column.  

I am not positive, but I think if you use single quotes around something,
SQLite will first try and treat it like a literal -- and if the SQL parser
is expecting an identifier where you've placed a literal, it will try and
re-evaluate it as an identifier instead.  So since the statement CREATE
TABLE 'MYTABLE' ('ID' INTEGER PRIMARY KEY) contains literals in places
identifiers are expected, SQLite treats them as identifiers instead of
literals.

Conversely, SELECT 'ID' FROM 'MYTABLE' is ambiguous in that 'ID' could mean
the literal string 'ID' or could mean an identifier.  In a SELECT clause the
string is first evaluated as a literal, and since literals are allowed in
the return columns of a SELECT, the literal code path is taken and there is
no need to try and evaluate it as an identifier.  The FROM 'MYTABLE' portion
is parsed later, but literals aren't allowed as a target in a FROM clause,
so 'MYTABLE' is treated as an identifier.

In short ... Don't use single-quotes around identifiers.  Use single-quotes
for string literals, and use double-quotes or brackets around identifiers so
your code is more readable and explicit.
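
The behavior is easy to verify, e.g. from Python's sqlite3 module (table name invented to match the report):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE mytable (id INTEGER PRIMARY KEY);
    INSERT INTO mytable (id) VALUES (1);
    INSERT INTO mytable (id) VALUES (2);
    INSERT INTO mytable (id) VALUES (3);
""")
# Single quotes: a string literal.  'id' = 2 is never true, so no rows.
literal = con.execute("SELECT 'id' FROM mytable WHERE 'id' = 2").fetchall()
# Double quotes: an identifier.  This really compares the column.
column = con.execute('SELECT "id" FROM mytable WHERE "id" = 2').fetchall()
print(literal, column)  # [] [(2,)]
```

The first query returns nothing, the second returns the matching row, exactly as described above.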

Robert


