Re: [sqlite] SQLite 3.2.5 and Mac OSX network folders

2005-10-27 Thread vidushi tandon
hi all
   can you please tell me, is there any SQLite
source code available for Java?

rgds 
vidushi tandon

--- Aaron Burghardt <[EMAIL PROTECTED]> wrote:

> You can take a look at what Apple has done for OS X
> here:
> 
> http://www.opensource.apple.com/darwinsource/10.4/SQLite-28/
> 
> Don't be misled by the project name--it is SQLite
> 3.  I haven't  
> studied it closely, but it should allow you to build
> exactly what  
> Apple shipped.  The Makefile has references to
> locking callbacks, so  
> this might be relevant:
> 
> # add -DSQLITE_DEBUG=1 to enable lock tracing and
> other debugging  
> features
> # you also need to set sqlite3_os_trace to 1 in
> SQLite3/src/os_common.h
> Extra_CC_Flags += -DASSERT_VIA_CALLBACK=1
> -DENABLE_LOCKING_CALLBACKS=1
> 
> HTH,
> 
> Aaron
> 
> On Oct 25, 2005, at 12:45 PM, Steve Palmer wrote:
> 
> > What are those reasons and is there any
> expectation that they can be
> > made available as patches for folks who build
> SQLite privately? I
> > cannot use the libsqlite3.dylib that comes with
> Mac OSX 10.4 since my
> > application needs to run on 10.3.9 too and there
> is no equivalent
> > static version that I can find.
> >
> > - Steve
> >
> >
> > On Oct 25, 2005, at 4:57am, [EMAIL PROTECTED] wrote:
> >
> >> Steve Palmer <[EMAIL PROTECTED]> wrote:
> >>
> >>> I'm currently investigating a problem with my
> application, SQLite
> >>> 3.2.5 and a database located on a Mac OSX Server
> network share that
> >>> does not seem to repro with the SQLite 3.1.3
> that ships with Mac OS X
> >>> 10.4.1. Specifically if I place a SQLite database
> file on a folder on a
> >>> remote network share and attempt to access it
> using the sqlite3
> >>> utility, I get different results.
> >>>
> >>>
> >>
> >> Apple added special hacks to their release of
> SQLite 3.1.3 that
> >> allow it to work on remote filesystems with
> broken file locking.
> >> For various reasons, those hacks have not been
> incorporated into
> >> the SQLite core, yet.
> >>
> >> --
> >> D. Richard Hipp <[EMAIL PROTECTED]>
> >>
> >>
> >
> 
> 
> 






Re: [sqlite] Re: Number of rows in a query result

2005-10-27 Thread Dennis Cote

Puneet Kishor wrote:


Igor Tandetnik wrote:


Alfredo Cole <[EMAIL PROTECTED]> wrote:


In order to update a progress bar, I need to know the total number of
rows returned by a query, similar to MySQL's mysql_num_rows. Is there
a function like that in the C API? I may have overlooked it, but have
not found it so far.



There is no such function. Most of the time, SQLite does not know how 
many rows there are in the resultset until they are all actually 
retrieved. It just produces rows one by one as it finds them.


The only way I know to achieve what you want is to run the query 
twice, first as "select count(*)" to obtain the count, then again 
with the desired column list. Depending on the query, "select 
count(*)" may take as long as the full query, and in some rare cases 
it may even be slower. Bottom line, it is impractical to try and 
produce an accurate progress indicator for SQLite queries.




one suggestion has been to create another table with a single row 
holding the number of rows in your table of interest. Just query that 
one table for its one value.


Use TRIGGERs  on INSERT, UPDATE, and DELETE to automatically adjust 
the value in the table with the row count.



This only works if your select query is returning all the records in the 
table. If you use where conditions to select a subset of the rows this 
will fail because any query could return a number of rows that won't 
match your carefully maintained count.


Re: [sqlite] Number of rows in a query result

2005-10-27 Thread Alfredo Cole
El Jueves, 27 de Octubre de 2005 15:44, Radu Lodina escribió:
 > Hi Alfredo,
 >
 > I suppose you don't use the
 > sqlite3_get_table function, right?
 >
 > Call another query with:
 > SELECT COUNT(*) AS NrRecords FROM ( )
 >
 > Radu Lodina

I read about the sqlite3_get_table function, which the manual refers to as 
"really just a wrapper around sqlite3_exec()", which in turn "is just a 
wrapper around calls to the prepared statement interface". Since I'm using 
prepare, step and finalize as the recommended procedure, I thought there 
might be a method to get the number of rows directly. I will try the 
get_table function.
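For what it's worth, the reason sqlite3_get_table can report a row count up front is that it materializes the entire result set in memory first. A minimal sketch of that trade-off using Python's stdlib sqlite3 (the thread concerns the C API, but fetchall() plays the role of get_table here; the table and data are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (v INTEGER)")
conn.executemany("INSERT INTO t (v) VALUES (?)", [(i,) for i in range(10)])

# sqlite3_get_table materializes the whole result in memory and reports
# the row count up front; fetchall() is the closest Python analogue.
rows = conn.execute("SELECT v FROM t WHERE v >= 5").fetchall()
print(len(rows))  # row count, known only after fetching everything
```

The cost is memory: for a large result set the whole thing is resident before the count is available, which is why the step-at-a-time interface does not offer a count.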

Thank you.

-- 
Alfredo J. Cole
Grupo ACyC


Re: [sqlite] Re: Number of rows in a query result

2005-10-27 Thread Puneet Kishor

Igor Tandetnik wrote:

Alfredo Cole <[EMAIL PROTECTED]> wrote:

In order to update a progress bar, I need to know the total number of
rows returned by a query, similar to MySQL's mysql_num_rows. Is there
a function like that in the C API? I may have overlooked it, but have
not found it so far.


There is no such function. Most of the time, SQLite does not know how 
many rows there are in the resultset until they are all actually 
retrieved. It just produces rows one by one as it finds them.


The only way I know to achieve what you want is to run the query twice, 
first as "select count(*)" to obtain the count, then again with the 
desired column list. Depending on the query, "select count(*)" may take 
as long as the full query, and in some rare cases it may even be slower. 
Bottom line, it is impractical to try and produce an accurate progress 
indicator for SQLite queries.



one suggestion has been to create another table with a single row 
holding the number of rows in your table of interest. Just query that 
one table for its one value.


Use TRIGGERs  on INSERT, UPDATE, and DELETE to automatically adjust the 
value in the table with the row count.
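A minimal sketch of the trigger-maintained counter, driven from Python's stdlib sqlite3 (table and trigger names are invented; an UPDATE trigger is unnecessary since updates do not change the row count, and the counter only answers unfiltered COUNT(*) queries):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE items (id INTEGER PRIMARY KEY);
CREATE TABLE row_count (n INTEGER);
INSERT INTO row_count VALUES (0);

CREATE TRIGGER items_ins AFTER INSERT ON items
BEGIN
    UPDATE row_count SET n = n + 1;
END;

CREATE TRIGGER items_del AFTER DELETE ON items
BEGIN
    UPDATE row_count SET n = n - 1;
END;
""")

conn.executemany("INSERT INTO items (id) VALUES (?)", [(i,) for i in range(7)])
conn.execute("DELETE FROM items WHERE id < 2")

# Reading the counter is a constant-time single-row lookup.
(n,) = conn.execute("SELECT n FROM row_count").fetchone()
print(n)  # 5: seven inserts, two deletes
```

Reading the counter is cheap, but as noted in this thread it cannot substitute for a count over an arbitrary WHERE clause.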




[sqlite] Re: Number of rows in a query result

2005-10-27 Thread Igor Tandetnik

Alfredo Cole <[EMAIL PROTECTED]> wrote:

In order to update a progress bar, I need to know the total number of
rows returned by a query, similar to MySQL's mysql_num_rows. Is there
a function like that in the C API? I may have overlooked it, but have
not found it so far.


There is no such function. Most of the time, SQLite does not know how 
many rows there are in the resultset until they are all actually 
retrieved. It just produces rows one by one as it finds them.


The only way I know to achieve what you want is to run the query twice, 
first as "select count(*)" to obtain the count, then again with the 
desired column list. Depending on the query, "select count(*)" may take 
as long as the full query, and in some rare cases it may even be slower. 
Bottom line, it is impractical to try and produce an accurate progress 
indicator for SQLite queries.
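Igor's two-pass approach can be sketched with Python's built-in sqlite3 module (the thread concerns the C API, but the flow is identical; table and column names here are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE exams (id INTEGER PRIMARY KEY, status INTEGER)")
conn.executemany("INSERT INTO exams (status) VALUES (?)",
                 [(i % 3,) for i in range(100)])

# Pass 1: count the matching rows so the progress bar has a denominator.
(total,) = conn.execute(
    "SELECT COUNT(*) FROM exams WHERE status = ?", (1,)).fetchone()

# Pass 2: the real query; report progress as rows arrive.
done = 0
for row in conn.execute("SELECT id, status FROM exams WHERE status = ?", (1,)):
    done += 1
    progress = done / total  # fraction complete for the progress bar

print(total, done)
```

The key point is that both statements must share the identical WHERE clause (and see the same data), otherwise the denominator will not match what the second query actually returns.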


Igor Tandetnik 



Re: [sqlite] Number of rows in a query result

2005-10-27 Thread Radu Lodina
Hi Alfredo,


I suppose you don't use the
sqlite3_get_table function,
right?


Call another query with:
SELECT COUNT(*) AS NrRecords FROM ( )


Radu Lodina

On 10/28/05, Alfredo Cole <[EMAIL PROTECTED]> wrote:
>
> Hi:
>
> In order to update a progress bar, I need to know the total number of rows
> returned by a query, similar to MySQL's mysql_num_rows. Is there a
> function
> like that in the C API? I may have overlooked it, but have not found it so
> far.
>
> Thank you.
>
> --
> Alfredo J. Cole
> Grupo ACyC
>


[sqlite] Number of rows in a query result

2005-10-27 Thread Alfredo Cole
Hi:

In order to update a progress bar, I need to know the total number of rows 
returned by a query, similar to MySQL's mysql_num_rows. Is there a function 
like that in the C API? I may have overlooked it, but have not found it so 
far.

Thank you.

-- 
Alfredo J. Cole
Grupo ACyC


Re: [sqlite] Size of INSERT and UPDATE in TRANSACTION

2005-10-27 Thread R S
It would be the other way around, no?
The larger the number of inserts within a transaction, the better the
performance.


On 10/27/05, Hannes Ricklefs <[EMAIL PROTECTED]> wrote:
>
> Hello,
>
> I was wondering if anyone has any experience with the number of INSERT and
> UPDATE statements in one TRANSACTION. For example, I have the situation that
> I have to do around 20,000 INSERTs in one TRANSACTION, so I am wondering if
> it brings any performance improvement if I split these up into smaller
> TRANSACTIONS of 1000 each?
>
> Thanks,
> Hannes
>


Re: [sqlite] Time scale on INSERT and UPDATE

2005-10-27 Thread Dennis Cote

Hannes Ricklefs wrote:


Hello,

does anyone have experience with how much the time for an UPDATE, 
INSERT, or SELECT increases in relation to the size of the actual database?


Is it proportional or exponential, for example?


Regards,
Hannes

 


Hannes,

The execution time for inserts depends upon the number of existing 
records and the number of indexes. For each insert the engine must 
locate the correct btree block to add a new record to in both the table 
and each index. This is an O(log N) operation for both indexes and 
tables. The base of the log depends upon the fanout of the btree blocks, 
which in turn depends on the size of the records and index keys, but 
usually isn't important. So for inserts you have a time T that is 
proportional to:


T ~= (number of indexes + 1) O(log number of records)

Note that this depends primarily upon the number of records in the 
table, not the size of the database. The size of the database depends 
upon the number of records in all the tables. The database size may have 
a minor impact on the insert speed by causing the btree blocks to be 
spread out further across the disk, therefore requiring additional disk 
seeks to read the blocks if they aren't already in the disk cache.


Similar arguments apply to updates. Each update needs to locate the 
existing record, and then store the modified values. This may move the 
records in the table (if it changes an integer primary key) and in each 
of the indexes that were affected by the update. Updates often use 
indexes to locate the records to be changed, or are performing a full 
table scan if all records are being modified or there is no index. For a 
table scan the amortized cost of locating the record is constant, for an 
indexed lookup the cost is O(log N), for an unindexed lookup the time is 
O(N). After the record is changed it is written back and then all 
affected indexes have to be updated. This requires locating and deleting 
the existing index record, and inserting a new index record. Both of 
these are O(log N) operations on each index. So we have a time 
proportional to:


T~= choose_one_of(O(1), O(log N), O(N)) + if_int_pk(O(log N)) + (2 * 
number of indexes)O(log N)
  
where N is the number of records in the table.


Selects are generally too complicated to be analyzed in this manner 
especially considering joins, grouping and sorting. But the same general 
principles apply. A full table scan lookup has a constant amortized 
cost, an indexed lookup has an O(log N) cost, and an unindexed lookup has 
an O(N) cost. Joins generally multiply these values together. For 
example a full table scan joined to a second table using an index 
requires a time proportional to


T ~= O(1) + O(log number of records in the second table) = O(log N2)

for each record returned. For the entire select the time is proportional to

T~= O(N1) * O(log N2)

Sorting a select generally takes O(N log N) where N is the number of 
result records, but if an appropriate index is available that can be 
used to retrieve the records in the correct order and eliminate the need 
for a sort.


As you can see, the short answer is "it depends...".

HTH
Dennis Cote
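Dennis's O(log N) claim for inserts can be probed empirically. A rough sketch using Python's stdlib sqlite3 (an in-memory database, so only the btree work is measured, not disk seeks; the schema, extra index, and batch size are arbitrary choices for illustration):

```python
import sqlite3
import time

def time_inserts(conn, start, count):
    """Insert `count` rows and return the elapsed wall time for the batch."""
    t0 = time.perf_counter()
    with conn:  # one transaction per batch
        conn.executemany("INSERT INTO t (id) VALUES (?)",
                         ((i,) for i in range(start, start + count)))
    return time.perf_counter() - t0

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")
conn.execute("CREATE INDEX t_idx ON t (id)")  # one extra index to maintain

# Time equal-sized batches as the table grows; per the analysis above the
# per-batch cost should grow roughly like log N, not linearly with N.
batch = 20000
times = [time_inserts(conn, n, batch) for n in range(0, 5 * batch, batch)]
print(["%.3f" % t for t in times])
```

On a typical run the later batches are only modestly slower than the first, consistent with logarithmic rather than linear growth; absolute numbers vary by machine, and on-disk databases add the cache and seek effects Dennis mentions.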




Re: [sqlite] Time scale on INSERT and UPDATE

2005-10-27 Thread John Stanton
This depends on the number of indices.  Inserting N rows into a B-Tree 
essentially follows an N log(N) law, so the best you can expect is 
O(N log N).  Similarly for deletions.


A SELECT depends upon the query and indices.  A row search is 
essentially linear with size, O(N), whereas a single B-Tree index search is 
enormously faster, following a basic O(log N) law.


The conclusion is that small N is always faster in a tree or list 
structure.  A table for each data type may be preferable to all types 
being lumped into one table and selected on type.

JS

Hannes Ricklefs wrote:

Hello,

does anyone have experience with how much the time for an UPDATE, 
INSERT, or SELECT increases in relation to the size of the actual database?


Is it proportional or exponential, for example?


Regards,
Hannes




Re: [sqlite] SQLite 3.2.5 and Mac OSX network folders

2005-10-27 Thread Aaron Burghardt

You can take a look at what Apple has done for OS X here:

http://www.opensource.apple.com/darwinsource/10.4/SQLite-28/

Don't be misled by the project name--it is SQLite 3.  I haven't  
studied it closely, but it should allow you to build exactly what  
Apple shipped.  The Makefile has references to locking callbacks, so  
this might be relevant:


# add -DSQLITE_DEBUG=1 to enable lock tracing and other debugging  
features

# you also need to set sqlite3_os_trace to 1 in SQLite3/src/os_common.h
Extra_CC_Flags += -DASSERT_VIA_CALLBACK=1 -DENABLE_LOCKING_CALLBACKS=1

HTH,

Aaron

On Oct 25, 2005, at 12:45 PM, Steve Palmer wrote:


What are those reasons and is there any expectation that they can be
made available as patches for folks who build SQLite privately? I
cannot use the libsqlite3.dylib that comes with Mac OSX 10.4 since my
application needs to run on 10.3.9 too and there is no equivalent
static version that I can find.

- Steve


On Oct 25, 2005, at 4:57am, [EMAIL PROTECTED] wrote:


Steve Palmer <[EMAIL PROTECTED]> wrote:


I'm currently investigating a problem with my application, SQLite
3.2.5 and a database located on a Mac OSX Server network share that
does not seem to repro with the SQLite 3.1.3 that ships with Mac OS X
10.4.1. Specifically if I place a SQLite database file on a folder on a
remote network share and attempt to access it using the sqlite3
utility, I get different results.




Apple added special hacks to their release of SQLite 3.1.3 that
allow it to work on remote filesystems with broken file locking.
For various reasons, those hacks have not been incorporated into
the SQLite core, yet.

--
D. Richard Hipp <[EMAIL PROTECTED]>









Re[2]: [sqlite] Time scale on INSERT and UPDATE

2005-10-27 Thread Wilfried Mestdagh
Hi,

> with updates and selects, i would expect that the time of finding the
> record(s) (to update or select) (the where-clause) depends on the size
> of the table and on whether indexes are used.

With indexing, if the database has e.g. 65536 records, then at most 17
comparisons are needed. The maximum grows by one every time the record
count doubles, e.g. 128 KRecords need at most 18 comparisons.
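Wilfried's figure is the standard worst case for a binary-style search, floor(log2 N) + 1 comparisons, which can be checked directly:

```python
import math

def max_comparisons(n_records):
    """Worst-case comparisons for a binary/B-tree-style search:
    floor(log2 N) + 1, growing by one each time N doubles."""
    return math.floor(math.log2(n_records)) + 1

print(max_comparisons(65536))   # 17
print(max_comparisons(131072))  # 18
```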

---
Rgds, Wilfried
http://www.mestdagh.biz



Re: [sqlite] Time scale on INSERT and UPDATE

2005-10-27 Thread Martin Engelschalk

Hello Hannes,

In my experience, the time of inserts does not grow with the size of the 
database.
With updates and selects, I would expect that the time of finding the 
record(s) to update or select (i.e. evaluating the where-clause) depends 
on the size of the table and on whether indexes are used.


Martin

Hannes Ricklefs schrieb:


Hello,

does anyone have experience with how much the time for an UPDATE, 
INSERT, or SELECT increases in relation to the size of the actual database?


Is it proportional or exponential, for example?


Regards,
Hannes
 



[sqlite] Time scale on INSERT and UPDATE

2005-10-27 Thread Hannes Ricklefs
Hello,

does anyone have experience with how much the time for an UPDATE, 
INSERT, or SELECT increases in relation to the size of the actual database?

Is it proportional or exponential, for example?

Regards,
Hannes


Re: [sqlite] Size of INSERT and UPDATE in TRANSACTION

2005-10-27 Thread Martin Engelschalk

Hello Hannes,

I think not; you need not split your transaction. I do millions of inserts 
in one transaction and it works really fast.


Martin

Hannes Ricklefs schrieb:


Hello,

I was wondering if anyone has any experience with the number of INSERT and UPDATE
statements in one TRANSACTION. For example, I have the situation that I have to
do around 20,000 INSERTs in one TRANSACTION, so I am wondering if it brings any
performance improvement if I split these up into smaller TRANSACTIONS of 1000
each?

Thanks,
Hannes
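Martin's advice matches the usual guidance: the per-COMMIT cost (an fsync on disk), not the transaction size, dominates bulk-insert time. A sketch comparing one big transaction against batches of 1000, using Python's stdlib sqlite3 (an in-memory database, where the commit cost is far smaller than on disk, so the gap shrinks; schema and row counts are invented):

```python
import sqlite3
import time

def insert_rows(conn, rows, per_txn):
    """Insert `rows` rows, committing every `per_txn` statements."""
    cur = conn.cursor()
    t0 = time.perf_counter()
    for i in range(0, rows, per_txn):
        cur.execute("BEGIN")
        cur.executemany("INSERT INTO t (v) VALUES (?)",
                        ((j,) for j in range(i, min(i + per_txn, rows))))
        cur.execute("COMMIT")
    return time.perf_counter() - t0

conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # manage transactions explicitly
conn.execute("CREATE TABLE t (v INTEGER)")

# 20000 inserts as one transaction vs. twenty transactions of 1000 each.
one_txn = insert_rows(conn, 20000, 20000)
batched = insert_rows(conn, 20000, 1000)
(count,) = conn.execute("SELECT COUNT(*) FROM t").fetchone()
print(count)  # 40000 rows inserted across both runs
```

On a disk-backed database the single-transaction run is typically much faster because it pays for only one commit; splitting into smaller transactions mainly helps bound memory for the rollback journal and limit lock-hold time, not raw speed.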
 



Re: [sqlite] Are DELETE TRIGGERS recursive or not?

2005-10-27 Thread Ralf Junker
Hello [EMAIL PROTECTED],

Thank you! I am very much looking forward to recursive delete triggers for just 
the very purpose you mentioned!

Regards,

Ralf

>Not at this time.  Though work is underway to change this.
>We need recursive delete triggers in order to implement
>cascading deletes for referential integrity.



[sqlite] Size of INSERT and UPDATE in TRANSACTION

2005-10-27 Thread Hannes Ricklefs
Hello,

I was wondering if anyone has any experience with the number of INSERT and UPDATE
statements in one TRANSACTION. For example, I have the situation that I have to
do around 20,000 INSERTs in one TRANSACTION, so I am wondering if it brings any
performance improvement if I split these up into smaller TRANSACTIONS of 1000
each?

Thanks,
Hannes


RE: [sqlite] Very Slow delete times on larger databases, please help!

2005-10-27 Thread Allan, Mark
Yes, we found this out too. We never vacuum the file; it is acceptable for it to 
just get larger, not smaller. We derive an indication of the current database 
capacity not from the file size but from the total number of pages in the 
database file minus the number of free pages in the database.

So these times are not affected by vacuum.
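The page-based capacity figure Mark describes can be read through two pragmas, page_count and freelist_count. A sketch with Python's stdlib sqlite3 (the blob sizes and row count are arbitrary, chosen just to allocate and then free a visible number of pages):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (payload BLOB)")
conn.executemany("INSERT INTO t VALUES (?)",
                 [(b"x" * 4000,) for _ in range(50)])
conn.execute("DELETE FROM t")
conn.commit()

# Without VACUUM the database keeps its pages; deleted pages go on the
# freelist instead of shrinking the file.
(page_count,) = conn.execute("PRAGMA page_count").fetchone()
(free_pages,) = conn.execute("PRAGMA freelist_count").fetchone()
used = page_count - free_pages  # the capacity figure Mark describes
print(page_count, free_pages, used)
```

This is why the reported capacity is unaffected by skipping VACUUM: used pages are counted directly, regardless of how large the file itself has grown.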

Thanks

Mark


> -Original Message-
> From: Brett Wilson [mailto:[EMAIL PROTECTED]
> Sent: 26 October 2005 19:22
> To: sqlite-users@sqlite.org
> Subject: Re: [sqlite] Very Slow delete times on larger 
> databases, please
> help!
> 
> 
> Vacuuming is just slow. I don't think there is much you can do except
> don't do it unless you really need it, and don't turn on autovacuum.
> 
> Brett
> 
> On 10/26/05, R S <[EMAIL PROTECTED]> wrote:
> > In my case Delete happens reasonably OK but Vacuuming
> > takes incredibly
> > long?
> >
> >
> > On 10/21/05, Allan, Mark <[EMAIL PROTECTED]> wrote:
> > >
> > >
> > > Thanks to both Christian Smith and John Stanton for your posts.
> > >
> > > > On Wed, 19 Oct 2005, Christian Smith wrote:
> > > > From the VDBE output you originally posted, you are doing a
> > > > fair amount of
> > > > work for each deleted row:
> > > > - Index search to find the next row from EXAMINATIONS to delete
> > > > - Removing the row from 3 indexes on EXAMINATIONS
> > > > - (trigger) Remove related row in SPIRO_TEST from 2 indexes
> > > > on SPIRO_TEST
> > > > - (trigger) Remove related row in SPIRO_TEST
> > > > - Remove the row from EXAMINATIONS
> > > >
> > > > Check your cache size. If the above work is causing the 
> 75 page entry
> > > > cache to thrash, you're likely to hit worst case 
> performance as the
> > > > thrashing pages may be being accessed in a cyclical fashion.
> > > > Not sure how
> > > > likely it is that your page cache is not big enough. How 
> big is a row of
> > > > data, typically?
> > >
> > > Our cache size is 75 pages of 8192 bytes = 600Kb.
> > >
> > > The maximum size of an EXAMINATIONS record is about 500 
> bytes, 450 bytes
> > > of this is a varchar field. In the test example the 
> average size of an
> > > EXAMINATIONS record is 60 bytes as not much text is saved.
> > >
> > > The maximum size of a SPIRO_TEST record is about 5Kb, 
> these will vary from
> > > test to test, but for the test example the size of each 
> SPIRO_TEST record is
> > > fixed to approx 1Kb.
> > >
> > > Based on this I don't think that we should be thrashing 
> the cache. I am
> > > however unsure how SQlite works here.
> > >
> > > > Also, a 60x slowdown is not to be unexpected. The PC 
> version, while
> > > > probably having the same SQLite page cache size in the SQLite
> > > > app itself,
> > > > will most likely be reading and writing to the OSes 
> cache at memory to
> > > > memory copy speed most of the time, with synchronous writes
> > > > only done when
> > > > needed. The embedded platform you're using probably 
> writes straight to
> > > > FLASH, which is necessarily a synchronous operation if 
> your OS doesn't
> > > > have a cache between your app and the FLASH FS. While flash
> > > > writes are low
> > > > latency, they are also low bandwidth, and won't be 
> within an order of
> > > > magnitude of performance when compared to a desktop PC 
> write to OS
> > > > filesystem cache.
> > > >
> > > > Finally, you give no indication on the actual CPU speed of
> > > > the embedded
> > > > platform. It's quite reasonable to assume a development 
> PC could be an
> > > > order of magnitude faster on sheer integer throughput. I'm
> > > > amazed how slow
> > > > my 50MHz microSPARC based SPARCclassic is. Such a platform
> > > > would not be
> > > > much, if at all, slower than a modern embedded 
> platform, and has the
> > > > benefit of gobs of RAM, but still runs the same code 
> two orders of
> > > > magnitude slower at least than my Athlon XP 1700 based
> > > > desktop. You have
> > > > to keep your performance expectations realistic. You 
> are, after all,
> > > > running a complete, ACID transaction, SQL relational database.
> > >
> > >
> > > The maximum CPU speed of our ARM7 chip is 71Mhz.
> > >
> > > > Others have indicated that dropping indexes might help when
> > > > deleting or
> > > > inserting records. However, have you tried simply not having
> > > > indexes at
> > > > all? Would that cause unacceptable slowdown? Perhaps, for the
> > > > demo query
> > > > from the original post, just keep the DATE index on
> > > > EXAMINATIONS, and use
> > > > full table scans for queries based on EXAM_TYPE and
> > > > STATUS_FLAG. Truth is,
> > > > given the small number of EXAM_TYPE and STATUS_FLAG values (I
> > > > presume),
> > > > you're as well just doing table scans when looking for
> > > > specific exam types
> > > > and statuses. Indexes only really help when you have a large
> > > > variation in
> > > > values with few collisions. Doing this will leave a single
> > > > index update in
> > > > addition to the actual row removals, which should 
>