[sqlite] Sqlite fd problem!

2015-11-16 Thread Marcus Grimm
You are probably missing a call to sqlite3_finalize() on the
statement prior to calling sqlite3_close().
Additionally, you may check the return value of
sqlite3_close() - it will tell you something.
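
A minimal sketch of the shutdown order (names are illustrative):

  /* finalize every prepared statement first ... */
  sqlite3_finalize(stmt);
  /* ... then close; SQLITE_BUSY here means statements are still open */
  int rc = sqlite3_close(db);
  if( rc != SQLITE_OK )
      fprintf(stderr, "close failed (%d): %s\n", rc, sqlite3_errmsg(db));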

marcus

On 16.11.2015 12:02, Nader Lavasani wrote:
> Hi all,
>
> This happens on iOS, with Objective-C.
>
> When we open a database (sqlite3_open()), a file descriptor (fd) is
> created, but when we close the database (sqlite3_close()) only the
> connection is closed and the fd remains open. So when the user opens
> and closes many databases (or one database many times), many fds stay
> open, and once the fd count reaches 255 the app cannot open any file
> or database anymore.
>
> Here is a video showing it: https://www.youtube.com/watch?v=uvKwko4LhWo
>
>
> Bug? or my silly mistake?!
>
>
> Thanks
> ___
> sqlite-users mailing list
> sqlite-users at mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>



[sqlite] ATTACH DATABASE statement speed

2015-08-18 Thread Marcus Grimm
Just another guess:
Have you tried increasing the page cache drastically?
I can remember that "PRAGMA quick_check" is pretty slow
for bigger DBs without an increased page cache.
Maybe something like:
PRAGMA cache_size=500000;
PRAGMA quick_check;

Marcus

On 18.08.2015 12:38, Paolo Bolzoni wrote:
> It really seems something strange happens at filesystem level.
>
> This is a simple copy of slightly less than 1gb. It needs 9 seconds
> including sync.
> % date && sudo rsync -Pr italy-latest.osm.pbf / && sync && date
> Tue Aug 18 19:22:23 JST 2015
> sending incremental file list
> italy-latest.osm.pbf
>  946,976,283 100%  123.88MB/s0:00:07 (xfr#1, to-chk=0/1)
> Tue Aug 18 19:22:32 JST 2015
>
>
> However, when I start sqlite3 db 'PRAGMA quick_check;' the IO looks
> normal for a while.
> (I hope gmail doesn't mess up the formatting...)
>
> 60, 90, 80 MB/s is kinda expected:
> 08/18/2015 07:27:38 PM
> Device:  rrqm/s  wrqm/s     r/s    w/s  rMB/s  wMB/s avgrq-sz avgqu-sz  await r_await w_await  svctm  %util
> encplate
>            0.00    0.00  820.00  13.00  62.11   0.26   153.34     1.87   2.27    1.14   73.46   1.20  99.80
>            0.00    0.00 1214.50   0.00  94.58   0.00   159.49     0.96   0.78    0.78    0.00   0.78  95.20
>            0.00    0.00 1008.50  22.00  78.09   0.41   155.99     1.50   1.46    0.96   24.16   0.93  95.80
>
> but after some seconds it drops terribly to less than 10MB/s
> 08/18/2015 07:29:04 PM
> Device:  rrqm/s  wrqm/s     r/s    w/s  rMB/s  wMB/s avgrq-sz avgqu-sz  await r_await w_await  svctm  %util
> encplate
>            0.00    0.00  124.00   3.50   9.88   0.12   160.72     1.67  12.99   11.21   76.14   7.65  97.50
>            0.00    0.00   69.00  18.00   5.68   0.29   140.55     1.81  20.92   14.15   46.86  11.38  99.00
>            0.00    0.00   86.00   0.00   7.05   0.00   167.91     1.04  12.03   12.03    0.00  11.24  96.70
>
> And so, going at 10MB per second it can easily require a few hours...
>
>
> I am out of ideas, but thanks for all the support.
>
>
>
> On Tue, Aug 18, 2015 at 7:26 PM, Simon Slavin  wrote:
>>
>> On 18 Aug 2015, at 7:30am, Paolo Bolzoni  
>> wrote:
>>
>>> Any other idea of what I can try? Perhaps my filesystem is misconfigured?
>>
>> The long time you quote is not standard for SQLite and I don't think anyone 
>> can help you solve it by knowing picky details of SQLite.  I'm even 
>> surprised that it changed with your -O0 compilation since this suggests 
>> features of your compiler I didn't know about.
>>
>> It's possible one of the developer team can help; they're reading this 
>> and can pitch in if they think so.
>>
>> Simon.
>> ___
>> sqlite-users mailing list
>> sqlite-users at mailinglists.sqlite.org
>> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
> ___
> sqlite-users mailing list
> sqlite-users at mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>


[sqlite] Appropriate Uses For SQLite

2015-02-18 Thread Marcus Grimm
We use sqlite as the db engine inside a server application
with a number of clients that connect to the server.
Sqlite works just beautifully here, and I wish statements like
"sqlite shall not be used for client/server things" were
worded less generally. In fact, when we mention sqlite as our
db engine, customers point to this restriction and we end up
making excuses.
Bottom line: Sqlite CAN very well serve as the DB
engine for client/server applications; it just depends on how
the api is used.

Marcus

On 2015-02-18 15:34, Richard Hipp wrote:
> In a feeble effort to do "marketing", I have revised the "Appropriate
> Uses For SQLite" webpage to move trendy buzzwords like "Internet of
> Things" and "Edge of the Network" above the break.  See:
> 
> https://www.sqlite.org/whentouse.html
> 
> Please be my "focus group", and provide feedback, comments,
> suggestions, and/or criticism about the revised document.   Send your
> remarks back to this mailing list, or directly to me at the email in
> the signature.
> 
> Thank you for your help.


Re: [sqlite] Bug in sqlite.exe?

2013-11-30 Thread Marcus Grimm

On 2013-11-30 20:04, Eric Teutsch wrote:

Sorry, should have pointed out that the "missing" tables are:
CollectionDevices and CollectionChannels


You have the DB file in ProgramData; maybe you are a victim of
Windows file virtualization?
Try a different folder and see if that changes anything...

Marcus



-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Eric Teutsch
Sent: November-30-13 14:04
To: 'General Discussion of SQLite Database'
Subject: Re: [sqlite] Bug in sqlite.exe?

I've taken a picture of the odbc tool (showing the tables and the 
database
name) and sqlite.exe with .database and .table showing.  You can see 
it

here: http://www.powersoft.ca/files/sqlite.png
It's the same file...


-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Igor Tandetnik
Sent: November-30-13 13:13
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Bug in sqlite.exe?

On 11/30/2013 11:55 AM, Eric Teutsch wrote:

.Tables doesn't show the 2 new tables.  A select statement on one of
those tables says "No such table".  And "select * from 
sqlite_master"

shows the 8 tables and 1 trigger.  And ends there.  But when using
sqliteodbc, I can run the select statement on a new table.


My educated guess is, you are not looking at the file you think you 
are
looking at. Somehow, in the shell you are opening a different 
database file
than the one you are opening in other tools, and that file does 
indeed have

8 tables and 1 trigger.
--
Igor Tandetnik

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] how to move around records in sqlite as the way as in CDaoRecordset of MFC??

2012-12-27 Thread Marcus Grimm

please be aware that you will run into quite some programming effort
if you attempt to convert the MFC CRecordSet methods like MovePrev()
into an sqlite equivalent. In MFC one often uses statements like

myTable.Open();
while( !myTable.IsEOF() )
{
  // do something ...

  myTable.MoveNext();
}

I tend to argue that it is impossible to add a MovePrev() inside such a
loop without restarting the whole processing - IF you intend to
maintain the MFC style.
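
For illustration, a minimal sketch of the sqlite counterpart of the
loop above (table and column names invented):

  sqlite3_stmt *stmt;
  sqlite3_prepare_v2(db, "SELECT name FROM myTable", -1, &stmt, NULL);
  while( sqlite3_step(stmt) == SQLITE_ROW )  /* MoveNext() equivalent */
  {
      /* do something with sqlite3_column_text(stmt, 0) ...
         there is no MovePrev(): to go back you must sqlite3_reset()
         and step again from the start, or re-run the query */
  }
  sqlite3_finalize(stmt);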

Some time ago I created a CRecordSet wrapper for sqlite while porting an
application to use sqlite instead - but I skipped the MovePrev() method.
Usually such calls can be avoided with a little restructuring of the MFC
style of coding - you might consider this as well.

Anyway, back to your question - it is not clear what you are asking, since
you already quote the relevant chapters from the sqlite doc. So the answer
is yes, the ROWID can obviously be accessed faster than a standard
column. But I think this is not really your problem here...
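
To illustrate the rowid shortcut anyway (using the social_num field
from your post; the rest of the schema is invented): with a table
person(social_num INTEGER PRIMARY KEY, name TEXT), social_num aliases
the rowid, so a lookup like this is effectively a rowid search:

  sqlite3_stmt *stmt;
  sqlite3_prepare_v2(db,
      "SELECT name FROM person WHERE social_num = ?", -1, &stmt, NULL);
  sqlite3_bind_int64(stmt, 1, 12345);        /* rowid-style lookup */
  if( sqlite3_step(stmt) == SQLITE_ROW )
      printf("%s\n", (const char*)sqlite3_column_text(stmt, 0));
  sqlite3_finalize(stmt);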

Marcus


On 26.12.2012 17:28, genliu777 wrote:

the reason that i talk about how to get the ROWID of a record is that i
want to access a specific record in the FASTEST way!

i read the document of sqlite again, and found that if i define a field
(e.g. social_num) with the feature of "*integer primary key*", i can
access a record by the value of that field in the fastest way.

is it true!?

the original description follows,
"Searching for a record with a specific rowid, or for all records with
rowids within a specified range is around twice as fast as a similar search
made by specifying any other PRIMARY KEY or indexed value."

"*With one exception*, if a table has a primary key that consists of a
single column, and the declared type of that column is "INTEGER" in any
mixture of upper and lower case, then the column becomes an alias for the
rowid."

"Rowid values may be modified using an UPDATE statement in the same way as
any other column value can, either using one of the built-in aliases
("rowid", "oid" or "_rowid_") or *by using an alias* created by an integer
primary key. "



--
View this message in context: 
http://sqlite.1065341.n5.nabble.com/how-to-move-around-records-in-sqlite-as-the-way-as-in-CDaoRecordset-of-MFC-tp66191p66276.html
Sent from the SQLite mailing list archive at Nabble.com.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users



--
Marcus Grimm
MedCom GmbH Darmstadt, Rundeturmstr. 12, 64283 Darmstadt
Tel: +49(0)6151-95147-10
Fax: +49(0)6151-95147-20
web: www.medcom-online.de
--
MedCom slogans of the month:
"Vacation ? -- Every day at MedCom is a paid vacation!"
"Friday I have monday in my mind."
"MedCom -- Every week a vacation, every day an event, every hour a cliffhanger,
   every minute a climax."
"Damned, it's weekend again!"
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] CREATE INDEX on huge DB fails with SQLITE_NOMEM

2012-10-09 Thread Marcus Grimm


On 09.10.2012 12:46, Clemens Ladisch wrote:

Marcus Grimm wrote:

I can see that the application increases memory up to appx. 600 MB
while the database is populated - That is the cache_size=500000,
I assume.


cache_size=500000 corresponds to:
  244 MB (page size 512 bytes)
  488 MB (page size 1 KB)
  977 MB (page size 2 KB)
  1.9 GB (page size 4 KB)

What does "PRAGMA page_size;" say in your database?


I think it is the default value (1 KB).



And are you running in a 32-bit or 64-bit process?


It is a 32 bit process on a 64 bit Win7 machine.

During the process of inserting the test data the
memory usage increases steadily to around 650 MB and
remains there - no problem with that. Sqlite just fills the cache
that I specified.

It is interesting, though, that CREATE INDEX appears to require
memory beyond the specified page cache. Not a problem as such,
but one needs to be aware of it.
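
A sketch of the resulting sequence (cache values as discussed in this
thread, not a tuned recommendation; table/index names invented):

  /* big page cache for the bulk inserts */
  sqlite3_exec(db, "PRAGMA cache_size=500000;", NULL, NULL, NULL);
  /* ... populate the tables ... */
  /* shrink the cache before CREATE INDEX: per Dan, its merge-sort may
     malloc up to cache_size * page_size bytes on top of the cache */
  sqlite3_exec(db, "PRAGMA cache_size=2000;", NULL, NULL, NULL);
  sqlite3_exec(db, "CREATE INDEX idxBig ON bigTable(col1);",
               NULL, NULL, NULL);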

Anyway, everything works now and I can proceed.

Marcus




Regards,
Clemens
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] CREATE INDEX on huge DB fails with SQLITE_NOMEM

2012-10-09 Thread Marcus Grimm

On 09.10.2012 11:00, Marcus Grimm wrote:

On 09.10.2012 10:44, Dan Kennedy wrote:

On 10/09/2012 03:30 PM, Marcus Grimm wrote:

Hello,

I'm currently investigating how far I can go with my
favorite DB engine. For that purpose I'm testing
my application with an artificial database that is
appx. 50 times bigger than the maximum I have
seen in the field so far.
The test creates a database from scratch and just fills
the tables with random data. To speed up this operation
(which takes 3 hours) I drop all irrelevant indices
prior to running the inserts.
Afterwards I need to create these indices because they are
necessary for the regular database operations.
Now, this (CREATE INDEX) fails after a few minutes with an error code
of 7 (malloc failed). I'm using the native C-Api...
I also specify: "PRAGMA cache_size=500000;" if that matters.


Try changing that to 2000 or something before executing the
CREATE INDEX statement.

Assuming you don't have SQLite configured to store temp files
in memory, a CREATE INDEX statement starts by sorting the data
using a merge-sort before building the b-tree structure to
represent the index. This is much faster than just inserting keys
into the new b-tree in arbitrary order. But, the merge-sort code
assumes that it can plausibly allocate up to (cache-size * page-size)
bytes of memory using malloc() (not as a single chunk - in total).
And if a malloc() fails, you get this SQLITE_NOMEM error. In this
case, with "PRAGMA cache_size=500000", that might end up being too
much.


Hi Dan,

thanks for the quick reply. Of course I'm not using temp-memory files... :-)

I'll give it a try - However, I can see that the application increases
memory up to appx. 600 MB while the database is populated - That
is the cache_size=500000, I assume. So sqlite was able to allocate
that memory - Does the create index allocate that amount in
addition ?

Anyway, I now re-run the test as suggested and will report how it goes...



Bravo... It finished! :-)

Cool...

I don't quite understand why sqlite didn't run out of memory
during the heavy insert operations but only later, when
creating the indices.
Also it is interesting that the application reaches about
650 MB memory usage during the insertion; after that loop I reduce
the page cache to "PRAGMA cache_size=5;" and the memory usage
increases slightly to 750 MB and remains there while the various
create index commands take place. I would expect the memory usage
to drop after issuing the reduced cache size pragma.

Anyway, thanks again Dan! - now I can proceed with my tests..

Marcus



Marcus




Dan.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] CREATE INDEX on huge DB fails with SQLITE_NOMEM

2012-10-09 Thread Marcus Grimm

On 09.10.2012 10:44, Dan Kennedy wrote:

On 10/09/2012 03:30 PM, Marcus Grimm wrote:

Hello,

I'm currently investigating how far I can go with my
favorite DB engine. For that purpose I'm testing
my application with an artificial database that is
appx. 50 times bigger than the maximum I have
seen in the field so far.
The test creates a database from scratch and just fills
the tables with random data. To speed up this operation
(which takes 3 hours) I drop all irrelevant indices
prior to running the inserts.
Afterwards I need to create these indices because they are
necessary for the regular database operations.
Now, this (CREATE INDEX) fails after a few minutes with an error code
of 7 (malloc failed). I'm using the native C-Api...
I also specify: "PRAGMA cache_size=500000;" if that matters.


Try changing that to 2000 or something before executing the
CREATE INDEX statement.

Assuming you don't have SQLite configured to store temp files
in memory, a CREATE INDEX statement starts by sorting the data
using a merge-sort before building the b-tree structure to
represent the index. This is much faster than just inserting keys
into the new b-tree in arbitrary order. But, the merge-sort code
assumes that it can plausibly allocate up to (cache-size * page-size)
bytes of memory using malloc() (not as a single chunk - in total).
And if a malloc() fails, you get this SQLITE_NOMEM error. In this
case, with "PRAGMA cache_size=500000", that might end up being too
much.


Hi Dan,

thanks for the quick reply. Of course I'm not using temp-memory files... :-)

I'll give it a try - However, I can see that the application increases
memory up to appx. 600 MB while the database is populated - That
is the cache_size=500000, I assume. So sqlite was able to allocate
that memory - Does the create index allocate that amount in
addition ?

Anyway, I now re-run the test as suggested and will report how it goes...

Marcus




Dan.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] CREATE INDEX on huge DB fails with SQLITE_NOMEM

2012-10-09 Thread Marcus Grimm

Hello,

I'm currently investigating how far I can go with my
favorite DB engine. For that purpose I'm testing
my application with an artificial database that is
appx. 50 times bigger than the maximum I have
seen in the field so far.
The test creates a database from scratch and just fills
the tables with random data. To speed up this operation
(which takes 3 hours) I drop all irrelevant indices
prior to running the inserts.
Afterwards I need to create these indices because they are
necessary for the regular database operations.
Now, this (CREATE INDEX) fails after a few minutes with an error code
of 7 (malloc failed). I'm using the native C-Api...
I also specify: "PRAGMA cache_size=500000;" if that matters.

The table/index in question has appx. 300 million rows...

Is there a workaround, other than having the indices defined
from the beginning ? Haven't tried yet... though.

It could be that I'll need to add indices in future versions
of the application and I'm concerned that sqlite will not be able
to do so if the database exceeds a certain size.

Please note that sqlite can (I think) very well handle that DB size,
it's just the CREATE INDEX that is, so far, a bit disappointing.

Any comment on this ?

I tried with sqlite 3.7.14.1 and 3.7.8 - no difference.


Kind regards

Marcus
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] C++ - WHERE clause - 2nd update

2012-09-07 Thread Marcus Grimm


On 07.09.2012 08:58, Arbol One wrote:

I got this code to work; however, I am getting a segmentation fault in
it.

I pass SQLite only one statement [db->setStmt(apstr);] and I read the
first 'fname', but I don't know how to get to the second 'fname' in the
database.
I am not sure how to tell the program to read the next row
until there are no more rows to read [ read_str until SQLITE_DONE ].


well.. you already answered your question:
you step through the result list until you reach SQLITE_DONE.

In your example you re-prepare the statement every time, and thus
you will always get the first hit in your data.
The sequence should be:

sqlite3_prepare_v2

while( sqlite3_step(mystmt) == SQLITE_ROW )
{
/** read the data .. **/
}

sqlite3_finalize
...
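
Spelled out with error handling, a minimal sketch (statement text
shortened from your example):

  sqlite3_stmt *mystmt = NULL;
  int rc = sqlite3_prepare_v2(db, "SELECT fname FROM myTable", -1,
                              &mystmt, NULL);
  if( rc != SQLITE_OK )
  {
      /* report sqlite3_errmsg(db) */
  }
  while( (rc = sqlite3_step(mystmt)) == SQLITE_ROW )
  {
      const char *fname = (const char*)sqlite3_column_text(mystmt, 0);
      /* use fname ... */
  }
  /* rc is SQLITE_DONE here once all rows have been read */
  sqlite3_finalize(mystmt);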



Help?

 Glib::ustring apstr;
 Glib::ustring sName;
 int apint;
 mySQLite3* db;
 try {
 db = new mySQLite3(db_name.c_str());
 } catch(somexception&  e) {
 //do something
 }

 // SQL statement
 Glib::ustring sName;
 apstr = "SELECT fname FROM ";
 apstr += this->db_table_name;
 apstr += " WHERE title = \'";
 apstr += token;
 apstr += "\' ";

 apint = 0;
 db->setStmt(apstr);
 do{
 try {
 sName = db->read_str(apint);
 } catch(jme::Exception&  e ) {
 e.Display();
 }
         apex.setException(sName, __FILE__, __METHOD__, __LINE__);
 apex.Display();
 }while(sName != "finished");



const Glib::ustring&  mySQLite3::read_str(const int pos)
throw(somexception) {

     rc = sqlite3_prepare_v2(db, this->SQLStatement.c_str(), -1, &mystmt,
NULL);
 if(rc != SQLITE_OK) {
 // do something
 }
 rc = sqlite3_step(mystmt);
 if(rc == SQLITE_ROW ) {
 apstr = (const char*)sqlite3_column_text(mystmt,pos);
 }else{
 apstr = "finished"; // a small modification
 }
 try {
 this->finalize();
 } catch(somexception&  e) {
 throw e;
 }
 return apstr;
}

What am I doing wrong?

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] sqlite3_trace() threadsafe

2012-09-05 Thread Marcus Grimm

On 2012-09-05 18:53, esum wrote:

Thank you so much for all help.  I really appreciate it.

/"All the same, I'm guessing that
http://www.sqlite.org/src/info/39f763bfc0will fix your problem."/

I gave this new src code from the src tree a shot, but I seem to be
getting the same behavior.

Interestingly though, I tried enforcing my own serialization of the
execution of the insert operations being called from the different
threads with a mutex, such that the code that performs the inserts is
called sequentially from each thread, and this seemed to fix things.
I did not see any duplicate sql statements.  (This is with the sqlite
src provided in the link, compiled with --enable-threadsafe and
--enable-cross-thread-connections, which I assume sets the threading
mode to serialized.)


It is not mentioned in the docs that your trace callback will be called
encapsulated by a mutex - in other words: are you sure that your
trace callback is threadsafe? I can recall strange effects when doing
"fprintf(stderr, ..." from different threads concurrently.
Just a guess.. though.
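
A sketch of what I mean (pthreads assumed; the callback body is
illustrative):

  #include <pthread.h>
  #include <stdio.h>

  static pthread_mutex_t trace_mtx = PTHREAD_MUTEX_INITIALIZER;

  static void trace_cb(void *ctx, const char *sql)
  {
      /* serialize the output ourselves - nothing promises that trace
         callbacks from different threads won't interleave */
      pthread_mutex_lock(&trace_mtx);
      fprintf(stderr, "SQL: %s\n", sql);
      pthread_mutex_unlock(&trace_mtx);
  }

  /* registered once per connection: sqlite3_trace(db, trace_cb, NULL); */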

Marcus





This leads me to believe that it has to do with contention/blocking
within sqlite, as Simon explains:

"What I am guessing is happening is that sometimes sqlite3_trace() is
called, but is eventually held up because of access contention, a
situation where it can't go ahead because the database is locked.
Eventually the database becomes free again and this thread can go
ahead, at which point _trace() gets called again."

I also tried using the sqlite3_profile() function, but this seems to
not keep the bound parameter values in the const char* sql statement
text, and this also replicated statements (I could tell because an
incorrect number of INSERTs were printed).

I greatly appreciate any other thoughts on what my problem could be.



--
View this message in context:

http://sqlite.1065341.n5.nabble.com/sqlite3-trace-threadsafe-tp64004p64030.html
Sent from the SQLite mailing list archive at Nabble.com.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] using the SQLITE_BUSY and BEGIN IMMEDIATE

2012-07-10 Thread Marcus Grimm

On 2012-07-10 20:50, deltagam...@gmx.net wrote:

In http://ideone.com/eWPWD is my example code.

I would like to know if this would be an appropriate implementation/use
of
BEGIN IMMEDIATE
and sqlite3_busy_timeout.


sqlite3_busy_timeout is specified once per db connection,
probably at the beginning, after sqlite3_open.

However, I think you slightly misinterpret the story about
SQLITE_BUSY:
First of all: any sql operation that works on the DB might
return SQLITE_BUSY - no matter if it's a SELECT, INSERT, BEGIN,
COMMIT, etc.

It is in the hands of the programmer to deal with that:

a) with some help from sqlite by using the busy timeout, which
   internally causes sqlite to retry a statement until the timeout;
b) by manually retrying the operation that returns
   the busy state (see the sketch below).

Be aware that you may not only have to deal with SQLITE_BUSY but also
with SQLITE_LOCKED.

By surrounding your sql commands with a BEGIN IMMEDIATE you just
move the issue from the INSERT command one level up to the BEGIN
statement - the principal handling of SQLITE_BUSY remains untouched.
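
To make (b) concrete, a manual retry around BEGIN IMMEDIATE might look
roughly like this (a sketch; the retry/backoff policy is up to you):

  int rc;
  do {
      rc = sqlite3_exec(db, "BEGIN IMMEDIATE;", NULL, NULL, NULL);
      if( rc == SQLITE_BUSY )
          sqlite3_sleep(10);          /* wait 10 ms, then try again */
  } while( rc == SQLITE_BUSY );

  if( rc == SQLITE_OK )
  {
      /* ... INSERTs etc.; these may still return SQLITE_BUSY/LOCKED ... */
      sqlite3_exec(db, "COMMIT;", NULL, NULL, NULL);
  }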

I think the tip to use BEGIN IMMEDIATE concerns the deadlock situation
rather than simple busy handling: it is used to make sure that a
typical cursor operation on a table that might change data in
the DB doesn't deadlock because another process has acquired a pending
lock in parallel. But that's another story anyway...

Marcus



As I understood from the documentation and literature about
transactions and locks, sqlite3_step is the acting part, so I enclosed
sqlite3_step with BEGIN IMMEDIATE and COMMIT, but then errors occurred
from sqlite3_prepare_v2.

In http://www.mail-archive.com/sqlite-users@sqlite.org/msg17107.html
it is explained why sqlite3_prepare_v2
can cause a SQLITE_BUSY.

So I now enclosed all db-concerning steps with BEGIN IMMEDIATE
and COMMIT.
With this construction some database lock errors still appear, but
fewer than with a
sqlite3_busy_timeout(db, 23) e.g.

As I understood, BEGIN IMMEDIATE would try to get the RESERVED lock
once, and if it fails it gets a SQLITE_BUSY.

BEGIN IMMEDIATE will however try to get the RESERVED lock many times
if a sqlite3_busy_timeout is defined, but all tries happen within the
time defined by the sqlite3_busy_timeout.
This behaviour is described here:
http://www.mail-archive.com/sqlite-users@sqlite.org/msg10447.html

Or is it like this: BEGIN IMMEDIATE tries again exactly once after the
timeout?
In the literature it is also mentioned that the BEGIN IMMEDIATE has to
be reset.
How and where do I have to do this in my example code?


I would appreciate some suggestions

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] An interesting (strange) issue with selects [solved]

2012-06-30 Thread Marcus Grimm

On 2012-06-30 12:06, Dennis Volodomanov wrote:

On 30/06/2012 7:47 PM, Marcus Grimm wrote:

Here is another theory:

Maybe you run into an issue with Windows 7 Virtualization,
I did run into a similar effect with the registry access/write
sometime go - mainly with Windows 7 Home Editions,
by reading this:
http://support.microsoft.com/kb/927387/EN-US

It suggests that something similar can also apply on files in
certain programdata folders.



I think you just nailed it! From that link, Scenario 4 - I go into
that Virtual folder and lo and behold - there's only one file there and
it's my app's database! I've killed that and then tried the usual
sqlite shell routine and it's all back to normal (e.g. there is no
data in the database file, which is all correct). Wow :)


Cool... :-)

Boy... isn't it crazy what Windows occasionally does just to follow
some overzealous security constraints?

Sometimes I really glorify the good old Windows NT / Win2000 times.

Marcus



Thank you for solving this and hopefully this helps someone in the
future facing the same problem - if you are using Windows 7 (Ultimate
in my case) and you see a file which shouldn't be there, check the
link: http://support.microsoft.com/kb/927387/EN-US and see if any of
the scenarios listed there apply to you.

   Dennis

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] An interesting (strange) issue with selects

2012-06-30 Thread Marcus Grimm

Here is another theory:

Maybe you run into an issue with Windows 7 Virtualization,
I did run into a similar effect with the registry access/write
sometime go - mainly with Windows 7 Home Editions,
by reading this:
http://support.microsoft.com/kb/927387/EN-US

It suggests that something similar can also apply on files in
certain programdata folders.

Marcus

On 2012-06-30 10:32, Dennis Volodomanov wrote:

On 30/06/2012 12:57 PM, Kees Nuyt wrote:


Is the database file in a protected folder (that is, "\Program 
Files",

or somewhere in the Windows system software tree) ?

It shouldn't be. Data belongs somewhere else. Either in your
userprofile/appdata or in a completely separate dirtree that 
Microsoft

doesn't try to manage.

HTH



No, the database is not in a protected folder, it's in the common
appdata folder (ProgramData on Windows7).

Dennis
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] An interesting (strange) issue with selects

2012-06-29 Thread Marcus Grimm



On 29.06.2012 13:42, Dennis Volodomanov wrote:

On 29/06/2012 9:25 PM, Simon Slavin wrote:

Your app or the shell tool is running while you delete the file, or do you 
quit, delete, then restart them ?

When you specify the database file to open are you specifying a full path, from 
the 'C:\' on down, or are you relying
on some default folder being specified by something ? The usual trick is that 
your application is opening a database
from one folder, the shell tool is opening the database from another folder, 
and the one you're deleting is one or
none of them. Make sure everything specifies the full path, just until you've 
figured out this problem.


It's all fully-qualified paths, no default folders :) And yes, the db is 
deleted when everything is closed (and checked
using Task Manager).


It's probably best to start off by assuming that the SQLite shell tool does 
exactly what it's documented to do 100% of
the time. There are thousands of users of it out there and a bug like you 
describe would have been reported to this
list many times by now.

You should be able to use a database file as a messaging system. Put a row in 
it using the shell tool, then read it
out using your app and make sure it has the right value. Then put a row in it 
using your app and read it out using the
shell tool. If that's not working, they're opening different files, your app is 
buggy, or you have a hardware failure
of some sort.




What's weird here (I'll just restate it) is that when a DB is created from 
within the app, it seems to inherit some
entries from an already deleted DB. I'm not sure how that's possible, but it 
appears to be what I'm seeing (if the .dump
command in sqlite shell is reporting the truth, which I do assume it does).


Just a guess: are you using wal mode?
And how do you "delete" the DB? Are you also removing any journal or -wal
files (if any)?

Marcus



I'll try to reproduce this using the shell tool (creating the DB) and see what 
happens.

Dennis

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users



--
Marcus Grimm
MedCom GmbH Darmstadt, Rundeturmstr. 12, 64283 Darmstadt
Tel: +49(0)6151-95147-10
Fax: +49(0)6151-95147-20
web: www.medcom-online.de
--

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Problem with select

2012-06-20 Thread Marcus Grimm

On 20.06.2012 15:03, deltagam...@gmx.net wrote:

Hello,

I have a problem with the selection of the row, error code 21.


You are missing a sqlite3_prepare_v2 for your select.
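
I.e., a sketch of the missing step before binding select_stmt - without
it, select_stmt stays NULL and the bind/step calls return
SQLITE_MISUSE (21):

  rc = sqlite3_prepare_v2(db, select_sql, -1, &select_stmt, NULL);
  if( SQLITE_OK != rc )
  {
      fprintf(stderr, "Can't prepare select statement (%i): %s\n",
              rc, sqlite3_errmsg(db));
  }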



Indeed I would like to retrieve all rows, but as far as I understand it, this 
construction will retrieve only the last
row/insert.


Well.. if it worked, then yes, because you ask for a specific ID, don't you?



What happens if the db is closed after the insert and then reopened for
the select statement?



What do you expect will happen?





===

#include <iostream>
#include <fstream>
#include "sqlite3.h"

#include <string>
#include <cstring>
#include <cstdio>
#include <cstdlib>

using namespace std;


int main() {



string filename = "errorlog.txt";
std::ofstream outfile(filename);

char eventtype[]="zweiter Anlauf/Versuch";
int zaehler = strlen(eventtype);


int rc;
char *exec_errmsg;

const char dbname[] = "ef.db";
sqlite3 *db = NULL;

rc = sqlite3_open(dbname, &db);
if(SQLITE_OK != rc) {
outfile << "Can't open database "<< dbname << " (" << rc << "): " << 
sqlite3_errmsg(db) << std::endl;
//fprintf(stderr, "Can't open database %s (%i): %s\n", dbname, rc, 
sqlite3_errmsg(db));
sqlite3_close(db);
exit(1);
}

//////////////// maybe discard this part ////////////////
const char create_sql[] = "CREATE TABLE eventlog ("
"id INTEGER PRIMARY KEY,"
"eventdate DATETIME default current_timestamp,"
"eventtype TEXT,"
"counter INTEGER"
")";

// Use exec to run simple statements that can only fail/succeed
rc = sqlite3_exec(db, create_sql, NULL, NULL, &exec_errmsg);
if(SQLITE_OK != rc) {
outfile << "Error creating table (" << rc << "): " << sqlite3_errmsg(db) << 
std::endl;
//fprintf(stderr, "Error creating table (%i): %s\n", rc, exec_errmsg);
sqlite3_free(exec_errmsg);
//sqlite3_close(db);
//exit(1);
}

//////////////// maybe discard this part ////////////////


const char insert_sql[] = "INSERT INTO eventlog (eventtype, counter) VALUES 
(?,?)";
sqlite3_stmt *insert_stmt = NULL;



rc = sqlite3_prepare_v2(db, insert_sql, -1, &insert_stmt, NULL);
if(SQLITE_OK != rc) {
outfile << "Can't prepare insert statment " << insert_sql << " (" << rc << "): " << 
sqlite3_errmsg(db) << std::endl;
//fprintf(stderr, "Can't prepare insert statment %s (%i): %s\n", insert_sql, 
rc, sqlite3_errmsg(db));
sqlite3_close(db);
exit(1);
}



rc = sqlite3_bind_text(insert_stmt, 1, eventtype, strlen(eventtype), NULL);
if(SQLITE_OK != rc) {
outfile << "Error binding value in insert (%i): %s\n", rc, sqlite3_errmsg(db);
//fprintf(stderr, "Error binding value in insert (%i): %s\n", rc, 
sqlite3_errmsg(db));
sqlite3_close(db);
exit(1);
} else {
printf("Successfully bound string for insert: '%s'\n", eventtype);
}

rc = sqlite3_bind_int(insert_stmt, 2, zaehler);
if(SQLITE_OK != rc) {
outfile << "Error binding value in insert (%i): %s\n", rc, sqlite3_errmsg(db);
//fprintf(stderr, "Error binding value in insert (%i): %s\n", rc, 
sqlite3_errmsg(db));
sqlite3_close(db);
exit(1);
} else {
printf("Successfully bound real for insert: %d\n", zaehler);
}


rc = sqlite3_step(insert_stmt);
if(SQLITE_DONE != rc) {
outfile << "insert statement didn't return DONE (%i): %s\n", rc, 
sqlite3_errmsg(db);
//fprintf(stderr, "insert statement didn't return DONE (%i): %s\n", rc, 
sqlite3_errmsg(db));
} else {
printf("INSERT completed\n\n");
}


// start selecting ///
const char select_sql[] = "SELECT eventtype, counter FROM eventlog WHERE id=?";
//const char insert_sql[] = "INSERT INTO eventlog (eventtype, counter) VALUES 
(?,?)";

sqlite3_stmt *select_stmt = NULL;

// Now attempt to get that row out
sqlite3_int64 id = sqlite3_last_insert_rowid(db);
sqlite3_bind_int64(select_stmt, 1, id);


// This is your standard pattern
while(SQLITE_ROW == (rc = sqlite3_step(select_stmt))) {
int col;
printf("Found row\n");
for(col=0; col

Re: [sqlite] Using "attach database" to work around DB locking

2012-04-11 Thread Marcus Grimm
> Converting variable name to variable ID (with separate lookup table)
> was one of my first ideas, but turns out that the lookup itself was a
> bigger hit in performance than the indexing.  I'll revisit that and
> see if I failed to tweak something properly.

To me it sounds like it will just move the performance
problem from the indexed column to another, separate table
which might as well need an index on "variable name".
I also don't see why a lookup will be faster via
attached databases: doesn't this just move the lookup
to sqlite's attach command?
The locking issue of a SELECT might be improved by using
wal mode.

Anyway, you didn't specify what these names look like - in case
they are chars with, say, more than 8 bytes, you might
try adding a CRC integer hash of the variable name
and indexing that value. Your selects might then look
like SELECT * FROM table WHERE VarName = 'abc' AND VarCRC=N;
This approach avoids string comparisons during table
lookup and indexing. It also reduces the size of the index.
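
A sketch of that layout (schema and hash function invented for
illustration):

  /* CREATE TABLE vars(VarCRC INTEGER, VarName TEXT, Value REAL);
     CREATE INDEX varsCrcIdx ON vars(VarCRC);                     */
  unsigned crc = my_crc32("abc");   /* hypothetical integer hash */
  sqlite3_stmt *stmt;
  sqlite3_prepare_v2(db,
      "SELECT * FROM vars WHERE VarName = ? AND VarCRC = ?",
      -1, &stmt, NULL);
  sqlite3_bind_text(stmt, 1, "abc", -1, SQLITE_STATIC);
  sqlite3_bind_int(stmt, 2, (int)crc);
  while( sqlite3_step(stmt) == SQLITE_ROW ) { /* ... */ }
  sqlite3_finalize(stmt);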

Marcus

>
> -Ian
>
> On Wed, Apr 11, 2012 at 1:22 PM, Pavel Ivanov  wrote:
>>> Am I missing any features of SQLite that would solve this problem in a
>>> different/better way?
>>
>> If you didn't try it I would suggest to try a single table (timestamp,
>> variable id, value). Index on integer variable id will work faster
>> than text variable name.
>> Other than that I'd say your design of one table per database is well
>> justified. Especially if you won't attach all databases together while
>> inserting data, but will connect to necessary database directly.
>>
>>
>> Pavel
>>
>>
>> On Wed, Apr 11, 2012 at 1:11 PM, Ian Katz  wrote:
>>> These are all good points, and introduce some features of sqlite that
>>> I didn't know existed!
>>>
>>> The database system that I'm designing is for an autonomous vehicle;
>>> it collects a lot of data (which is currently stored as a flat text
>>> file).  So, it's going to write a LOT of data into many tables
>>> independently, occasionally do single-table reads, and at the end of
>>> the day create a report that joins all the tables together.  So, my
>>> main goal is to give maximum speed to writes by compartmentalizing the
>>> locks that the incoming reads will cause.  It sounds like this use
>>> case won't hit any of the disadvantages mentioned above.
>>>
>>> In the past, I've tried to do this in SQLite and MySQL by putting all
>>> the data in a single table (timestamp, variable name, value).  But, if
>>> you index the variable name then writes become too slow (by the end of
>>> a day's worth of data collection, the next round of data comes in
>>> before the previous round is written); if you don't index then the
>>> table is impossible to select from in any reasonable amount of time.
>>> So, the solution seems to be splitting every variable into its own
>>> table -- not very good normalization, but retaining good read
>>> performance without having write performance degrade over time.  The
>>> join-all-tables-together query would be used for generating a logfile
>>> in the old format, just in case we need it.
>>>
>>> Am I missing any features of SQLite that would solve this problem in a
>>> different/better way?
>>>
>>> -Ian
>>>
>>> On Wed, Apr 11, 2012 at 12:20 PM, Pavel Ivanov 
>>> wrote:
 On Wed, Apr 11, 2012 at 12:01 PM, Ian Katz 
 wrote:
> The Sqlite3 manual says that any locking operations affect the entire
> database, not individual tables.
> http://www.sqlite.org/lockingv3.html
>
> I was wondering if this effect could be compensated for by splitting
> tables into separate databases and using the "attach database" option
> outlined here:
> http://stackoverflow.com/questions/6671678/objective-c-sqlite-join-tables-from-multiple-database
>
> I would assume that the databases will not become locked until the
> statement is executed (i.e., preparing the statement won't lock it).
> Is that correct?

 Yes, that's correct, although I don't see a link between this
 statement and "attache database" discussion above.

> If so, is there a significant disadvantage or
> performance hit to using this workaround?

 The first performance hit that comes to mind is either you won't be
 able to use WAL mode (which is a significant performance hit) or you
 lose overall atomicity of transactions (see disadvantage point 3 here
 http://www.sqlite.org/wal.html).

 So I wouldn't do that if I were you.


 Pavel
 ___
 sqlite-users mailing list
 sqlite-users@sqlite.org
 http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>>
>>>
>>>
>>> --
>>> Ian Katz
>>> Research Software Engineer, MIT LAMSS
>>> i...@mit.edu
>>> ___
>>> sqlite-users mailing list
>>> 

Re: [sqlite] Backup API handling of corrupt source DB

2012-03-07 Thread Marcus Grimm

>>> You can do the backup and after that do an integrity check on the
>>> backup. Surely you're backing up on a different
>>> server, aren't you? If the backup passes the integrity check it is a
>>> real backup; if not, raise a warning.
>>
>> Yeah, that's a good idea.
>> Ohh boy, why didn't I think about that myself? :-)
>
> In this process, you have to make sure that you don't overwrite your
> last "working" backup! Which database will you use if you realize that
> the backup is a corrupt database?

Yes, it creates a few backups over the last N days before starting
to overwrite the oldest one.

It is not an embedded environment, so no problem.
I could keep backups of a few months if necessary..

Thanks

Marcus

>
> On a webserver you should have enough space to make additional copies.
> In an embedded environment, this could be difficult.
>
> Thomas
>
>>
>> Thanks
>>
>> Marcus
>>
>>>
 Sure I could try to simulate that, but probably somebody here
 knows the answer.

 Thank you.

 Marcus
>>>
>>>
>>> ___
>>> sqlite-users mailing list
>>> sqlite-users@sqlite.org
>>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>>
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Backup API handling of corrupt source DB

2012-03-07 Thread Marcus Grimm


On 07.03.2012 13:13, Eduardo Morras wrote:

At 12:22 07/03/2012, you wrote:

Dear list,

I'm using the backup api to frequently backup
a running sqlite database.

I'm wondering if the backup API is able to detect a corrupt
database or will it simply also backup a corrupt DB ?

I'm evaluating the need to issue a (time-consuming) "pragma
integrity_check" prior to running the backup, to avoid that a backup
will be overwritten with an invalid database.


You can do the backup and after that do an integrity check on the backup.
Surely you're backing up on a different server, aren't you? If the backup
passes the integrity check it is a real backup; if not, raise a warning.


Yeah, that's a good idea.
Ohh boy, why didn't I think about that myself? :-)
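
Roughly like this, I guess (a sketch; error handling trimmed, backup
file name invented):

  sqlite3 *bak;
  sqlite3_open("backup.db", &bak);
  sqlite3_backup *b = sqlite3_backup_init(bak, "main", db, "main");
  if( b )
  {
      sqlite3_backup_step(b, -1);     /* copy the whole database */
      sqlite3_backup_finish(b);
  }
  /* verify the copy, not the live DB */
  sqlite3_stmt *chk;
  sqlite3_prepare_v2(bak, "PRAGMA integrity_check;", -1, &chk, NULL);
  if( sqlite3_step(chk) == SQLITE_ROW &&
      strcmp((const char*)sqlite3_column_text(chk, 0), "ok") != 0 )
  {
      /* warn, and keep the previous backups instead of rotating */
  }
  sqlite3_finalize(chk);
  sqlite3_close(bak);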

Thanks

Marcus




Sure I could try to simulate that, but probably somebody here
knows the answer.

Thank you.

Marcus



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Backup API handling of corrupt source DB

2012-03-07 Thread Marcus Grimm


On 07.03.2012 12:53, Simon Slavin wrote:


On 7 Mar 2012, at 11:22am, Marcus Grimm<mgr...@medcom-online.de>  wrote:


I'm evaluating the need to issue a (time-consuming) "pragma
integrity_check" prior to running the backup, to avoid that a backup
will be overwritten with an invalid database.


Did you know about

<http://www.sqlite.org/pragma.html#pragma_quick_check>

?  It misses out a check, but the check it misses out is for data which can be 
recovered simply by recreating corrupt indexes.


Yes, I'm aware about that check.



On the other hand, if you are using this as a warning system that corruption 
has occurred and therefore you may have faulty hardware, then the more thorough 
the check, the better.


Yes, it is just about safe design. I'm currently reviewing my
implementation and thought about the unlikely event of a DB corruption -
in that case my current backup mechanism easily becomes useless,
depending on whether the corruption is small enough not to be detected
during the normal program flow.
(The application is a server that runs for weeks.)

However, it is so far a theoretical issue - we haven't yet seen any
corruption in the field. :-)

Thanks also to Dan for the clarification.

Kind regards

Marcus



Simon.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Backup API handling of corrupt source DB

2012-03-07 Thread Marcus Grimm

Dear list,

I'm using the backup api to frequently backup
a running sqlite database.

I'm wondering if the backup API is able to detect a corrupt
database or will it simply also backup a corrupt DB ?

I'm evaluating the need to issue a (time-consuming) "pragma
integrity_check" prior to running the backup, to avoid that a backup
will be overwritten with an invalid database.

Sure I could try to simulate that, but probably somebody here
knows the answer.

Thank you.

Marcus
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] sqlite3_step getting core dumbed.

2012-02-09 Thread Marcus Grimm


On 09.02.2012 13:12, bhaskarReddy wrote:


Thank you. It's working.

But why am I getting sqlite3_column_type returning 5?



I can only guess:

a) because you have no data in that table;
b) probably you have no known type defined in the table
   definition and sqlite does a conversion anyhow;
c) is sqlite3_step(stmt); really required here?
   What is the result of the step - is it SQLITE_ROW?
   If not, sqlite3_column_type probably doesn't make much sense
   (see the sketch below).
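
A minimal sketch of what I mean in c):

  int rc = sqlite3_step(stmt);
  if( rc == SQLITE_ROW )
  {
      int t = sqlite3_column_type(stmt, 0);  /* SQLITE_INTEGER, ... */
      printf("Column type = %d\n", t);
  }
  else
  {
      /* SQLITE_DONE: no row to look at - column_type would just
         report 5 (SQLITE_NULL) here */
  }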


Marcus






Marcus Grimm wrote:


try changing this:

colNames = (char **)malloc(sizeof(char));

into something like

colNames = (char **)malloc(sizeof(char * ) * noOfColumns);


On 09.02.2012 11:21, bhaskarReddy wrote:


HI Friends,

I dont know why the sqlite3_step getting core dumped.

   #include <stdio.h>
   #include <stdlib.h>
   #include "sqlitedb1.h"
   #include <string.h>
   #include <sqlite3.h>

   int db_retrieve_row(char *db_name, char *table_name, int num_of_keys,
char
* col_names[],column_value_t * col_values);

int main(void)
{
column_type_t enm[2];
 //   int retVal;
char *col_name[3];
char *db_name = "LR6002.db";
char *table_name = "ONTTable";
column_value_t col_val[3];

enm[0] = COLUMN_TYPE_INT;   // slotId
enm[1] = COLUMN_TYPE_STRING;

col_val[0].number = 1;
col_val[1].number = 2;
col_val[2].number = 3;
  /* Array of Column Names. */
   col_name[0] = "slotId";
   col_name[1] = "ponChannelId";
   col_name[2] = "onuType";

 db_retrieve_row(db_name, table_name, 3,  col_name,col_val);
 return 0;
   }
   int db_retrieve_row(char *db_name, char *table_name, int num_of_keys,
char
* col_names[],column_value_t * col_values)
   {
   sqlite3 *db;
   sqlite3_stmt *stmt;
   int status = 0,loopVar,noOfColumns;
   char *query = NULL,**colNames,*str;
   int  retVal,*colType;

   retVal = sqlite3_open(db_name, &db);
   if(retVal) {
   fprintf(stderr,"Can't  open database:
%s\n",sqlite3_errmsg(db));
   sqlite3_close(db);
   exit(1);
   }
  /* A Query to find the number  of columns in the table. */
  query = (char *)malloc(sizeof(char) * 255);
  memset(query,0,255);
  strcat(query, "SELECT * FROM ");
  strcat(query, table_name);
  strcat(query, ";");

  status = sqlite3_prepare_v2(db,query,strlen(query) + 1, &stmt, NULL);
  if(status != SQLITE_OK) {
  printf("Prepare error: %s\n", sqlite3_errmsg(db));
  exit(1);
  }
  noOfColumns = sqlite3_column_count(stmt);
  if(SQLITE_OK != sqlite3_finalize(stmt))
  {
   printf("The prepared statement is Not deleted.\n");
  }
  free(query);
  query = NULL;

/* A Query to find the Names of each column. */
query = (char *)malloc(sizeof(char) * 255);
   memset(query,0,255);
strcat(query, "SELECT * FROM ");
strcat(query, table_name);
strcat(query, ";");

//stmt = NULL;
    status = sqlite3_prepare_v2(db,query,strlen(query) + 1, &stmt,
NULL);
   if(status != SQLITE_OK) {
   printf("Prepare error: %s\n", sqlite3_errmsg(db));
exit(1);
}

colNames = (char **)malloc(sizeof(char));
memset(colNames,0,1);

 for(loopVar = 0; loopVar<   noOfColumns; loopVar++)
 {
 colNames[loopVar] = (char *)malloc(sizeof(char) * 20);
 memset(colNames[loopVar], 0, 20);
  strcat(colNames[loopVar],sqlite3_column_name(stmt,
loopVar));
 }
 if(SQLITE_OK != sqlite3_finalize(stmt))
 {
printf("The prepared statement is Not deleted.\n");
 }
 free(query);
query = NULL;

/*  A Query to find types of each column. */
str = (char *)malloc(sizeof(char) * 255);
 memset(str, 0, 255);
 strcat(str, "SELECT ");
 for(loopVar = 0; loopVar<   noOfColumns; loopVar++)
 {
   strcat(str, colNames[loopVar]);
   if(loopVar<   (noOfColumns -1))
 strcat(str,",");
 }
 strcat(str, " FROM ");
 strcat(str, table_name);
 strcat(str, ";");

 status=sqlite3_prepare_v2(db,str,strlen(str)+1, &stmt, NULL);
 if (status != SQLITE_OK) {
printf("prepare error:%s\n",sqlite3_errmsg(db));
exit(1);
}
sqlite3_step(stmt);   //Causing Segmentation Fault. //
colType = (int *)malloc(sizeof(int) * noOfColumns);
for(loopVar = 0; loopVar<   noOfColumns; loopVar++){
colType[loopVar] = sqlite3_column_type(stmt, loopVar);
printf("Column Types = %d\n",colType[loopVar]);
 }
if(SQLITE_OK != sqlite3_finalize(stmt))
{
  

Re: [sqlite] sqlite3_step getting core dumbed.

2012-02-09 Thread Marcus Grimm

try changing this:

colNames = (char **)malloc(sizeof(char));

into something like

colNames = (char **)malloc(sizeof(char * ) * noOfColumns);


On 09.02.2012 11:21, bhaskarReddy wrote:


HI Friends,

   I dont know why the sqlite3_step getting core dumped.

  #include <stdio.h>
  #include <stdlib.h>
  #include "sqlitedb1.h"
  #include <string.h>
  #include <sqlite3.h>

  int db_retrieve_row(char *db_name, char *table_name, int num_of_keys, char
* col_names[],column_value_t * col_values);

int main(void)
{
   column_type_t enm[2];
//   int retVal;
   char *col_name[3];
   char *db_name = "LR6002.db";
   char *table_name = "ONTTable";
   column_value_t col_val[3];

   enm[0] = COLUMN_TYPE_INT;   // slotId
   enm[1] = COLUMN_TYPE_STRING;

   col_val[0].number = 1;
   col_val[1].number = 2;
   col_val[2].number = 3;
 /* Array of Column Names. */
  col_name[0] = "slotId";
  col_name[1] = "ponChannelId";
  col_name[2] = "onuType";

db_retrieve_row(db_name, table_name, 3,  col_name,col_val);
return 0;
  }
  int db_retrieve_row(char *db_name, char *table_name, int num_of_keys, char
* col_names[],column_value_t * col_values)
  {
  sqlite3 *db;
  sqlite3_stmt *stmt;
  int status = 0,loopVar,noOfColumns;
  char *query = NULL,**colNames,*str;
  int  retVal,*colType;

  retVal = sqlite3_open(db_name, &db);
  if(retVal) {
  fprintf(stderr,"Can't  open database: %s\n",sqlite3_errmsg(db));
  sqlite3_close(db);
  exit(1);
  }
 /* A Query to find the number  of columns in the table. */
 query = (char *)malloc(sizeof(char) * 255);
 memset(query,0,255);
 strcat(query, "SELECT * FROM ");
 strcat(query, table_name);
 strcat(query, ";");

 status = sqlite3_prepare_v2(db,query,strlen(query) + 1, &stmt, NULL);
 if(status != SQLITE_OK) {
 printf("Prepare error: %s\n", sqlite3_errmsg(db));
 exit(1);
 }
 noOfColumns = sqlite3_column_count(stmt);
 if(SQLITE_OK != sqlite3_finalize(stmt))
 {
  printf("The prepared statement is Not deleted.\n");
 }
 free(query);
 query = NULL;

   /* A Query to find the Names of each column. */
   query = (char *)malloc(sizeof(char) * 255);
  memset(query,0,255);
   strcat(query, "SELECT * FROM ");
   strcat(query, table_name);
   strcat(query, ";");

   //stmt = NULL;
   status = sqlite3_prepare_v2(db,query,strlen(query) + 1, &stmt, NULL);
  if(status != SQLITE_OK) {
  printf("Prepare error: %s\n", sqlite3_errmsg(db));
   exit(1);
   }

   colNames = (char **)malloc(sizeof(char));
   memset(colNames,0,1);

for(loopVar = 0; loopVar<  noOfColumns; loopVar++)
{
colNames[loopVar] = (char *)malloc(sizeof(char) * 20);
memset(colNames[loopVar], 0, 20);
 strcat(colNames[loopVar],sqlite3_column_name(stmt, loopVar));
}
if(SQLITE_OK != sqlite3_finalize(stmt))
{
   printf("The prepared statement is Not deleted.\n");
}
free(query);
   query = NULL;

   /*  A Query to find types of each column. */
   str = (char *)malloc(sizeof(char) * 255);
memset(str, 0, 255);
strcat(str, "SELECT ");
for(loopVar = 0; loopVar<  noOfColumns; loopVar++)
{
  strcat(str, colNames[loopVar]);
  if(loopVar<  (noOfColumns -1))
strcat(str,",");
}
strcat(str, " FROM ");
strcat(str, table_name);
strcat(str, ";");

status=sqlite3_prepare_v2(db,str,strlen(str)+1, &stmt, NULL);
if (status != SQLITE_OK) {
   printf("prepare error:%s\n",sqlite3_errmsg(db));
   exit(1);
   }
   sqlite3_step(stmt);   //Causing Segmentation Fault. //
   colType = (int *)malloc(sizeof(int) * noOfColumns);
   for(loopVar = 0; loopVar<  noOfColumns; loopVar++){
   colType[loopVar] = sqlite3_column_type(stmt, loopVar);
   printf("Column Types = %d\n",colType[loopVar]);
}
   if(SQLITE_OK != sqlite3_finalize(stmt))
   {
  printf("The prepared statement is Not deleted.\n");
   }
   free(query);
   query = NULL;

}

And the API sqlite3_column_type is always returning 5, i.e., NULL.


Can anyone tell me what the problem is with what I did?


Regards,
Bhaskar.


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Segmentation fault on sqlite3_create_module

2012-02-05 Thread Marcus Grimm
>> > change this code:
>> >  typedef struct NiuRoutingStruct {
>> >  sqlite3_vtab vtab;
>> >  } NiuRouting;
>> > to this:
>> >  struct NiuRouting : public sqlite3_vtab vtab {
>> >   ... // your implementation, eventually
>> >  };
>> >
>> > This way, the casting is unnecessary and any pointer offsetting will
>> be
>> > done correctly.
>>
>> Just for my curiosity:
>> Is the above really equivalent to the C definition like:
>> typedef struct NiuRoutingStruct {
>>sqlite3_vtab vtab;
>> ... // your implementation, eventually
>> } NiuRouting;
>>
>> ?
>>
>> Reason I'm asking: sqlite uses a pointer to that structure to access
>> the "sqlite3_vtab vtab" member - in machine code that would be
>> memory offset 0; all other private members start at
>> +sizeof(sqlite3_vtab) plus optional alignment.
>> Now, I'm wondering if a c++ style of inheritance is defined
>> in the same way - note that sqlite relies on that, or better
>> said, "the compiler that produced the sqlite machine code relies
>> on that".
>> Just in case the c++ compiler arranges the byte offsets differently,
>> it will crash right away, I think.
>>
>> Thanks.
>
> Without my copy/paste error, (the 'vtab' should be gone, leaving just the
> struct name), it would be equivalent provided that the alignment of a
> sqlite3_vtab was no less restrictive than the alignment of
> NiuRoutingStruct as it is finally defined.  However, static_casting from
> derived to base automatically incorporates the correct offset, (as would
> taking the address of an embedded sqlite3_vtab member, instead of using
> the OP's reinterpret cast), so the crash you worry about (with good
> reason) would  be made less likely.

okay, thank you for the explanation. Interesting subject.

Kind regards

Marcus

>
> I think all SQLite relies upon is getting correct pointers to its portion
> of whatever object it is passing around.  That virtual table API is
> designed to make it agnostic as to any other, tag-along content.
>
> Best regards,
> --
> Larry Brasfield
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Segmentation fault on sqlite3_create_module

2012-02-03 Thread Marcus Grimm
> Stephan Beal wrote:
>> 2012/2/3 Jorge Eliécer Osorio Caro 
>>
>> >*ppVTab = (sqlite3_vtab*) nr;
>> > ((sqlite3_vtab*) nr)->zErrMsg = NULL;
>> >
>>
>> Please try changing those to the variants from my previous post. i'm not
>> 100% convinced that that cast is strictly legal in C++.
>
> It will work with all the C/C++ implementations I've seen.  However, it
> would be better, and more informative, and ultimately more robust, to
> change this code:
>  typedef struct NiuRoutingStruct {
>  sqlite3_vtab vtab;
>  } NiuRouting;
> to this:
>  struct NiuRouting : public sqlite3_vtab vtab {
>   ... // your implementation, eventually
>  };
>
> This way, the casting is unnecessary and any pointer offsetting will be
> done correctly.

Just for my curiosity:
Is the above really equivalent to the C definition like:
typedef struct NiuRoutingStruct {
   sqlite3_vtab vtab;
... // your implementation, eventually
} NiuRouting;

?

Reason I'm asking: sqlite uses a pointer to that structure to access
the "sqlite3_vtab vtab" member - in machine code that would be
memory offset 0; all other private members start at
+sizeof(sqlite3_vtab) plus optional alignment.
Now, I'm wondering if a c++ style of inheritance is defined
in the same way - note that sqlite relies on that, or better
said, "the compiler that produced the sqlite machine code relies
on that".
Just in case the c++ compiler arranges the byte offsets differently,
it will crash right away, I think.
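
For comparison, a sketch of the plain C layout sqlite expects (the
private member is invented):

  /* sqlite3_vtab is the first member, so a pointer to NiuRouting is
     also a valid pointer to its sqlite3_vtab at memory offset 0 */
  typedef struct NiuRoutingStruct {
      sqlite3_vtab vtab;       /* base part, offset 0 */
      int myPrivateData;       /* private members follow */
  } NiuRouting;

  NiuRouting *nr = (NiuRouting *)sqlite3_malloc(sizeof(NiuRouting));
  *ppVTab = &nr->vtab;         /* same address as nr, no cast needed */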

Thanks.

Marcus


>
> The use of C-style casts in C++ is bad practice.  Use static_cast where
> you can.  The problem with C-style casting is that it can become a
> reinterpret_cast when that is going to produce problems.
>
> --
> Larry Brasfield
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Segmentation fault on sqlite3_create_module

2012-02-03 Thread Marcus Grimm
> Marcus wrote:
>> Just a guess:
>> It looks like you are compiling/using c++; on Windows that wouldn't work
>> if the sqlite library is compiled as C (which is the usual case).
>>
>> So, you might have a calling convention issue...
>>
>> Just a guess, though.
>
> The sqlite3.h header has proper, "extern "C" {" wrappers around its
> declarations, guarded by "#ifdef __cplusplus", so this is not an issue.
>   It is perfectly reasonable to call into the SQLite code from C++.
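
The guard Larry refers to is the standard pattern for making a C header
usable from C++ (a generic sketch - mylib_add is a hypothetical function,
not part of sqlite3.h):

/* mylib.h - a C API callable from both C and C++ */
#ifdef __cplusplus
extern "C" {    /* give the declarations C linkage when compiled as C++ */
#endif

int mylib_add(int a, int b);

#ifdef __cplusplus
}
#endif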

Right, you can call sqlite from c++ - no problem.
But here we define function pointers (compiled as c++) and
provide them via the sqlite3_module structure to a C library.
That is why I was guessing that it might not be clear to the
C library that the function pointers were compiled within a
c++ frame.

Again, just guessing...

Marcus

>
> --
> Larry Brasfield
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Segmentation fault on sqlite3_create_module

2012-02-03 Thread Marcus Grimm
> Hi,
>
> I do this:
>
> const char *table_structure = "CREATE TABLE network (vertex_id INTEGER);";
> if (sqlite3_declare_vtab(db, table_structure) == SQLITE_OK) {
> nr = (NiuRouting*) sqlite3_malloc(sizeof (NiuRouting));
>
> if (nr == NULL) {
> //*pzErr = sqlite3_mprintf("[NiuRouting module] CREATE
> VIRTUAL: No memory \n");
> return SQLITE_NOMEM;
> }
>
> *ppVTab = (sqlite3_vtab*) nr;
> ((sqlite3_vtab*) nr)->zErrMsg = NULL;
>
> } else {
> //*pzErr = sqlite3_mprintf("[NiuRouting module] CREATE
> VIRTUAL:
> invalid SQL statement \n");
> return SQLITE_ERROR;
> }
>
> and continue.
>
> i did something more: i commented out all the code inside the functions and got
> the same error. when i try to load the extension i just see the first debug
> message and nothing more; i think the program does not enter any method.

Just a guess:
It looks like you are compiling/using c++; on Windows that wouldn't work
if the sqlite library is compiled as C (which is the usual case).

So, you might have a calling convention issue...

Just a guess, though.

Marcus


>
> 2012/2/3 Stephan Beal 
>
>> 2012/2/3 Jorge Eliécer Osorio Caro 
>>
>> > Yes, that's my code:
>> >
>> > http://paste.ideaslabs.com/show/OPNHBY7xPG
>> >
>> > this the backtrace:
>> >
>> > #0  0x0001001701d3 in initialize_niurouting ()
>> >
>>
>> Can you try the following (but i'm just guessing here);
>>
>> std::cerr << "virtual_table_name=" << virtual_table_name
>>   << ", network_structure_table=" << network_structure_table << '\n';
>> after those vars are initialized, and change:
>>
>> std::string table_structure(...)
>>
>> to
>> const static std::string table_structure(...)
>>
>> sqlite3_declare_vtab()[1] does not document the lifetime requirements of
>> the string passed to it, and it's "conceivable" that the lifetime of the
>> table_structure string is the problem.
>>
>> :-?
>>
>> [1] = http://www.sqlite.org/c3ref/declare_vtab.html
>>
>> --
>> - stephan beal
>> http://wanderinghorse.net/home/stephan/
>> http://gplus.to/sgbeal
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Speeding up Sqlite reads after DML

2012-02-02 Thread Marcus Grimm
> Marcus - thanks. I will experiment with those 2 PRAGMAs.
>
> Meanwhile - I was trying to update some columns in a table with 130
> million
> rows and got this error -
>
> "Error: out of memory"

Did you get that after you increased the cache ?

Strange, I never got that during my huge DB tests.
But I'm not sure what will happen if a cache is
specified that would exceed the hardware (RAM) capabilities.
Interesting question, though.

Be careful: The cache pragma works in units of pages, that is, with a
1K page size the value is effectively given in 1K blocks.
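
As a small sketch of that arithmetic (file name and page count made up):

#include "sqlite3.h"

int main(void)
{
    sqlite3 *db;
    if (sqlite3_open("test.db", &db) == SQLITE_OK) {
        /* cache_size counts pages, not bytes: with 1K pages,
           200000 pages is roughly 200 MB of page cache */
        sqlite3_exec(db, "PRAGMA cache_size=200000;", NULL, NULL, NULL);
        sqlite3_close(db);
    }
    return 0;
}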

>
> I am not sure why. I thought the "UPDATE" just writes out new rows and
> maybe a journal of the old rows for recoverability - but I am not sure why
> it would need very much memory.

It may depend on whether the update involves index loads and updates.

>
> Be that as it may - and with regard to your suggestion - and in light of
> this error message - given that I only have 4GB of RAM on my PC - is this
> really enough RAM to handle tables of this size ? Or am I giving Sqlite an
> unreasonably small amount of resources and it's time for a serious
> hardware
> upgrade?

I don't think that you will be able to specify more than 2-3 GB
cache, depending on your OS, unless you compile a 64 bit version
of sqlite (I never did that).
However, a 100 GB sqlite DB file should not be any problem, except for
the reported slowdown.

Marcus

>
> Thanks,
>
> Udi
>
> On Thu, Feb 2, 2012 at 10:03 PM, Marcus Grimm
> <mgr...@medcom-online.de>wrote:
>
>> > Given how clever and compelling Sqlite is - I am testing how it scales
>> to
>> > tables in the 100GB / 200 million row range. This is for a strictly
>> "read
>> > only" application - but first the tables must be populated in a one
>> time
>> > process. As is often the case with Big Data - the data is a little
>> dirty
>> -
>> > so the process involves importing - selecting - counting - inspecting
>> -
>> > updating some rows - deleting some rows - selecting - counting -
>> > inspecting, etc. until clean.
>> >
>> > Placing the Sqlite database on a traditional C: drive - IO was too
>> slow.
>> > At
>> > 15 MB/sec - reading a 50GB table would take an hour. So I moved it to
>> > external Raid array where I ran across an interesting find. IO wasn't
>> that
>> > much faster - until I vacuumed the database - which increased IO 10X
>> to
>> > 150
>> > MB/sec - with the same CPU utilization.
>> >
>> > This is good news for the final implementation of this read-only
>> database
>> > -
>> > but still a dilemma at the data load phase. After a ".vaccuum" -
>> issueing
>> > a
>> > single DML against a table - even a DELETE which deletes no rows at
>> all -
>> > causes IO to drop back down to 15 MB/sec - on the table I'm selecting
>> /
>> > DMLing - which makes the data loading / cleansing phase very long.
>> >
>> > So I have 2 questions -
>> >
>> > (1) Why would simple DML cause such an extreme slowdown as compared
>> with
>> > "post vaccuum" speeds ?
>> >
>> > (2) Any knobs to turn to try and maintain the higher speeds post DML -
>> > without resorting to ".vacuum" ?
>>
>>
>> You didn't say whether you already applied the usual tricks to speed up
>> your load phase. That would be to increase the page cache and try
>> with reduced syncs.
>> See
>> PRAGMA cache_size
>> PRAGMA synchronous
>>
>> In particular the page cache should be increased dramatically
>> for huge DB files.
>>
>> Marcus
>>
>> >
>> > Thanks,
>> >
>> > Udi
>> > ___
>> > sqlite-users mailing list
>> > sqlite-users@sqlite.org
>> > http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>> >
>>
>>
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
>


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Speeding up Sqlite reads after DML

2012-02-02 Thread Marcus Grimm
> Given how clever and compelling Sqlite is - I am testing how it scales to
> tables in the 100GB / 200 million row range. This is for a strictly "read
> only" application - but first the tables must be populated in a one time
> process. As is often the case with Big Data - the data is a little dirty -
> so the process involves importing - selecting - counting - inspecting -
> updating some rows - deleting some rows - selecting - counting -
> inspecting, etc. until clean.
>
> Placing the Sqlite database on a traditional C: drive - IO was too slow.
> At
> 15 MB/sec - reading a 50GB table would take an hour. So I moved it to
> external Raid array where I ran across an interesting find. IO wasn't that
> much faster - until I vacuumed the database - which increased IO 10X to
> 150
> MB/sec - with the same CPU utilization.
>
> This is good news for the final implementation of this read-only database
> -
> but still a dilemma at the data load phase. After a ".vacuum" - issuing
> a
> single DML against a table - even a DELETE which deletes no rows at all -
> causes IO to drop back down to 15 MB/sec - on the table I'm selecting /
> DMLing - which makes the data loading / cleansing phase very long.
>
> So I have 2 questions -
>
> (1) Why would simple DML cause such an extreme slowdown as compared with
> "post vaccuum" speeds ?
>
> (2) Any knobs to turn to try and maintain the higher speeds post DML -
> without resorting to ".vacuum" ?


You didn't say whether you already applied the usual tricks to speed up
your load phase. That would be to increase the page cache and try
with reduced syncs.
See
PRAGMA cache_size
PRAGMA synchronous

In particular the page cache should be increased dramatically
for huge DB files.
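
A condensed sketch of how that advice typically looks during a load phase
(table, file name and sizes are made up):

#include "sqlite3.h"

int main(void)
{
    sqlite3 *db;
    sqlite3_stmt *stmt;
    int i;

    if (sqlite3_open("load.db", &db) != SQLITE_OK) return 1;
    sqlite3_exec(db, "PRAGMA cache_size=500000;", NULL, NULL, NULL); /* big page cache */
    sqlite3_exec(db, "PRAGMA synchronous=OFF;", NULL, NULL, NULL);   /* fewer syncs, unsafe on crash */
    sqlite3_exec(db, "CREATE TABLE t(i INTEGER);", NULL, NULL, NULL);

    sqlite3_exec(db, "BEGIN;", NULL, NULL, NULL);   /* one big transaction */
    sqlite3_prepare_v2(db, "INSERT INTO t VALUES(?)", -1, &stmt, NULL);
    for (i = 0; i < 1000000; i++) {
        sqlite3_bind_int(stmt, 1, i);
        sqlite3_step(stmt);
        sqlite3_reset(stmt);
    }
    sqlite3_finalize(stmt);
    sqlite3_exec(db, "COMMIT;", NULL, NULL, NULL);
    sqlite3_close(db);
    return 0;
}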

Marcus

>
> Thanks,
>
> Udi
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] PRAGMA journal_mode=WAL;

2012-02-01 Thread Marcus Grimm


On 01.02.2012 11:10, Guy Terreault wrote:

On 12-02-01 04:32 AM, Marcus Grimm wrote:


On 01.02.2012 09:32, Guy Terreault wrote:

On 12-02-01 03:03 AM, Larry Brasfield wrote:


I cannot answer why Simon does not write demo code at every opportunity, but I 
think I speak for more than just myself
by revealing that I have other work, and a life, and consider time a scarce 
resource.


Thanks Larry for taking some of your precious time to not just answer the 
question.
And I have read shell.c and it is gibberish to me.

I would write the code myself if I knew it. I guess I will have to wait until
someone has more time to answer properly.


What we don't understand is how one can use sqlite without knowing
such a rather basic operation as executing an sql command...
Anyway, if you look again at
http://www.sqlite.org/quickstart.html
you may extract something like:

int main(int argc, char **argv)
{
sqlite3 *db;

if( sqlite3_open("mysqlite.db", &db) == SQLITE_OK )
{
sqlite3_exec(db, "PRAGMA journal_mode = WAL;", NULL, 0, 0);
sqlite3_close(db);
}
return(0);
}

pretty easy, isn't it ?

hth
Marcus



Thanks Marcus
this is what should have been done as an answer to the initial question.

My intervention was to point out that this mailing list is read by everybody,
and yes, some are new to sqlite and some
are also new to c code, and it is sometimes difficult to distinguish the two.


OK, then your request was unluckily misleading, since you asked for wal
mode, which implies that you already know something about sqlite.
Wal mode is something to worry about only after your usage of sqlite has
reached a certain point.



sqlite3_exec and all the other functions you use in your example are from the
sqlite API
and yes consulting it is a good thing. And now with your example we will all be 
interested to consult it to understand
what all the parameters do.


I think sqlite3_exec is not a good starting point - it is a wrapper around some
basic sqlite API functions. I rather don't use it for standard sql queries as it
requires some "surrounding" work to make it usable.
What you might look at to start are these (basic) functions:

 sqlite3_open_v2, sqlite3_prepare_v2, sqlite3_step, sqlite3_finalize, 
sqlite3_close.
 then sqlite3_column_XXX to retrieve values, etc.
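
For instance, a minimal query loop built from exactly those calls (a sketch,
assuming a table t with an integer column i):

#include <stdio.h>
#include "sqlite3.h"

int main(void)
{
    sqlite3 *db;
    sqlite3_stmt *stmt;
    int rc;

    rc = sqlite3_open_v2("mysqlite.db", &db, SQLITE_OPEN_READONLY, NULL);
    if (rc != SQLITE_OK) return 1;

    rc = sqlite3_prepare_v2(db, "SELECT i FROM t;", -1, &stmt, NULL);
    if (rc == SQLITE_OK) {
        while (sqlite3_step(stmt) == SQLITE_ROW)         /* one row per step */
            printf("%d\n", sqlite3_column_int(stmt, 0)); /* column 0 of that row */
        sqlite3_finalize(stmt);
    }
    sqlite3_close(db);
    return 0;
}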

These functions are explained quite well, however if you run into trouble
using these or others this list will for sure assist.
Actually, I'm happy to see some more API related questions rather than
"I'm using .NET ADO DLL stuff, DLL not found, VS2005 How to add a DB connection ?, 
..."
sort of questions, which are increasing - but that's another story... :-)

Have fun with sqlite, it's a cool piece of software.

Marcus



If I look at this http://www.sqlite.org/c3ref/funclist.html there are so many
functions. Finding the one a new guy needs
is scary. And http://www.sqlite.org/c3ref/exec.html does explain all, but your 
example does it in 3 or 4 lines.

And we all thank you for the simple example that we can build on.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] PRAGMA journal_mode=WAL;

2012-02-01 Thread Marcus Grimm


On 01.02.2012 09:32, Guy Terreault wrote:

On 12-02-01 03:03 AM, Larry Brasfield wrote:


I cannot answer why Simon does not write demo code at every opportunity, but I 
think I speak for more than just myself
by revealing that I have other work, and a life, and consider time a scarce 
resource.


Thanks Larry for taking some of your precious time to not just answer the 
question.
And I have read shell.c and it is gibberish to me.

I would write the code myself if I knew it. I guess I will have to wait until
someone has more time to answer properly.


What we don't understand is how one can use sqlite without knowing
such a rather basic operation as executing an sql command...
Anyway, if you look again at
http://www.sqlite.org/quickstart.html
you may extract something like:

int main(int argc, char **argv)
{
sqlite3 *db;

if( sqlite3_open("mysqlite.db", &db) == SQLITE_OK )
{
   sqlite3_exec(db, "PRAGMA journal_mode = WAL;", NULL, 0, 0);
   sqlite3_close(db);
}
return(0);
}

pretty easy, isn't it ?

hth
Marcus


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Bug: SQLite Encryption Extension And Backup API

2011-04-07 Thread Marcus Grimm
Hi,

sorry.. my fault. It should be:


if (zKey != NULL)
{
sqlite3_key(pBackupDb, NULL, 0);
sqlite3_rekey(pBackupDb, zKey, strlen(zKey));
}

That works for me...

Marcus
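
Folded into the backup routine from the thread, the corrected sequence would
look roughly like this (a sketch: the helper name BackupEncrypted is made up,
error handling is minimal, and a SEE build providing sqlite3_key/sqlite3_rekey
is assumed):

#include <string.h>
#include "sqlite3.h"

/* copy pDb into zFilename, encrypted with zKey */
static int BackupEncrypted(sqlite3 *pDb, const char *zFilename, const char *zKey)
{
    sqlite3 *pBackupDb;
    sqlite3_backup *pBackup;
    int rc = sqlite3_open(zFilename, &pBackupDb);
    if (rc == SQLITE_OK) {
        if (zKey != NULL) {
            sqlite3_key(pBackupDb, NULL, 0);              /* new file: empty key first */
            sqlite3_rekey(pBackupDb, zKey, strlen(zKey)); /* then set the real key */
        }
        pBackup = sqlite3_backup_init(pBackupDb, "main", pDb, "main");
        if (pBackup) {
            while (sqlite3_backup_step(pBackup, 100) == SQLITE_OK)
                ;                                         /* copy 100 pages per step */
            sqlite3_backup_finish(pBackup);
        }
        rc = sqlite3_errcode(pBackupDb);
    }
    sqlite3_close(pBackupDb);
    return rc;
}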


On 07.04.2011 15:58, Ulric Auger wrote:
> It didn't change anything to call sqlite3_key twice as you suggested.
>
> I'm using SQLite 3.7.4 with SEE (SEE v3.7.5 is not available from the SEE
> download page, that is strange)
>
> I hope SQLite team can look into this.
>
> Ulric
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>

-- 
Marcus Grimm
MedCom GmbH Darmstadt, Rundeturmstr. 12, 64283 Darmstadt
Tel: +49(0)6151-95147-10
Fax: +49(0)6151-95147-20
web: www.medcom-online.de
--
MedCom slogans of the month:
"Vacation ? -- Every day at MedCom is a paid vacation!"
"Friday I have monday in my mind."
"MedCom -- Every week a vacation, every day an event, every hour a cliffhanger,
every minute a climax."
"Damned, it's weekend again!"
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Bug: SQLite Encryption Extension And Backup API

2011-04-07 Thread Marcus Grimm
On 06.04.2011 14:21, Ulric Auger wrote:
> Hi,
>
> I think I found a bug creating an encrypted backup.

..

>
>
>
> When I use the following function to create a backup of the database, the
> backup database becomes unreadable.

...
>
>
>
>rc = sqlite3_open(zFilename, &pBackupDb);
>
>if( rc==SQLITE_OK )
>
>{
>
>  if (zKey != NULL)
>
>  sqlite3_key(pBackupDb, zKey, strlen(zKey));

I can't recall why but when you create a new DB you need to call
sqlite3_key twice. Try making it:
if (zKey != NULL)
{
sqlite3_key(pBackupDb, NULL, 0);
sqlite3_key(pBackupDb, zKey, strlen(zKey));
}

However, I'm surprised to see that the backup API doesn't
create an exact copy of the source DB when SEE is involved.
I tend to argue that a backup should be encrypted automatically
if the source DB uses SEE as well. In my case I generated some
nice backup DBs which are now not encrypted at the customer's side.
Sigh...

Anyway, hope this helps.

Marcus


>
>
>
>  pBackup = sqlite3_backup_init(pBackupDb, "main", pDb, "main");
>
>  if( pBackup )
>
>  {
>
>do
>
>{
>
>  rc = sqlite3_backup_step(pBackup, 100);
>
>  if( rc==SQLITE_OK || rc==SQLITE_BUSY || rc==SQLITE_LOCKED ){
>
>sqlite3_sleep(10);
>
>  }
>
>}
>
>while( rc==SQLITE_OK || rc==SQLITE_BUSY || rc==SQLITE_LOCKED );
>
>
>
>sqlite3_backup_finish(pBackup);
>
>  }
>
>  rc = sqlite3_errcode(pBackupDb);
>
>}
>
>
>
>sqlite3_close(pBackupDb);
>
>return rc;
>
> }
>
>
>
> Ulric Auger
> --
> Groupe Techna Inc.
>     ul...@gtechna.com
>
>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] UPDATE/INSERTing 1-2k rows slower than expected

2011-02-12 Thread Marcus Grimm
>>
>> Interesting, I did a test on a 7200 file and the best I could do was 50
>> commits per second (a simple base/table with only id, journalling off
>> and
>> no
>> extra code since the tool I use has "a repeated query" option with
>> accurate
>> timing). You mentioned 3 syncs per commit, but I tried to look at the
>> log
>> of
>> Process Monitor for the tool process and I saw only one entry with
>> 'FlushBuffersFile' that is as I suppose was a mirror name for
>> FlushFileBuffers in winSync of sqlite so probably 3 syncs was a too big
>> an
>> estimate, wasn't it? In this case 50 commits per second looks reasonable
>> limit
>
> I found the number of 3 syncs some time ago, I can't prove it
> right now. As far as I remember I simply activated the debug log output in
> sqlite and have seen these syncs. sqlite needs to sync the journal file
> once it has copied the pages from the DB, then it needs to sync the
> DB file itself after writing pages to the main DB, then it probably needs
> to clear and sync the journal file to indicate that the operation is
> finished. This sums up to 3 -- but I'm guessing here, maybe DRH is willing
> to confirm... :-)
>
> Note that this handling is changed via PRAGMA synchronous;
>
> I'm talking about windows, not sure if that applies on unix or mac.
>
> Marcus
>

okay, to make sure I'm not talking nuts I repeated the
debug log. The following output is generated by sqlite in
debug mode after a "INSERT INTO T VALUES(7);":

INSERT COMMAND...
LOCK 4072 1 was 0(0)
READ 4072 lock=1
LOCK 4072 2 was 1(0)
OPEN 4084 C:\batch.db-journal 0xc000 ok
WRITE 4084 lock=0
WRITE 4084 lock=0
WRITE 4084 lock=0
WRITE 4084 lock=0
LOCK 4072 4 was 2(0)
unreadlock = 1
WRITE 4084 lock=0
WRITE 4084 lock=0
WRITE 4084 lock=0
READ 4084 lock=0
SYNC 4084 lock=0
WRITE 4084 lock=0
SYNC 4084 lock=0
WRITE 4072 lock=4
WRITE 4072 lock=4
SYNC 4072 lock=4
CLOSE 4084
CLOSE 4084 ok
DELETE "C:\batch.db-journal" ok
UNLOCK 4072 to 1 was 4(0)
UNLOCK 4072 to 0 was 1(0)
INSERT COMMAND END.

So my brain isn't that lazy, we count 3 syncs :-)

When you turn journaling off you will most likely
see fewer syncs, probably 2 in your case. So that is all
in line...

Marcus


>
>>
>> Max Vlasov
>>
>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] UPDATE/INSERTing 1-2k rows slower than expected

2011-02-12 Thread Marcus Grimm
> On Sat, Feb 12, 2011 at 9:48 PM, Marcus Grimm
> <mgr...@medcom-online.de>wrote:
>
>> > I should've realized it wasn't running this fast but the small 5000
>> record
>> > size got me.
>> > Test it yourself.
>> > I do have a 7200RPM drive.  My 261.4 numer is still 2+X your
>> theoretical.
>>
>> I don't want to be a smart-arse, but I still think your 261.4 is too
>> fast.
>> On a 7200 RPM drive one will have 120 chances/sec to see a sector to be
>> written. Since sqlite, under normal journal mode, will need 3 syncs
>> per commit as far as I can recall, the maximum number drops further
>> down to 40 commits/sec. This is theoretical; in reality one will see
>> maybe 20 commits/sec. Not sure if a disc write-cache will interfere
>> with that calculation, though.
>> Am I wrong ? :-)
>>
>
>
> Interesting, I did a test on a 7200 file and the best I could do was 50
> commits per second (a simple base/table with only id, journalling off and
> no
> extra code since the tool I use has "a repeated query" option with
> accurate
> timing). You mentioned 3 syncs per commit, but I tried to look at the log
> of
> Process Monitor for the tool process and I saw only one entry with
> 'FlushBuffersFile' that is as I suppose was a mirror name for
> FlushFileBuffers in winSync of sqlite so probably 3 syncs was a too big an
> estimate, wasn't it? In this case 50 commits per second looks like a
> reasonable limit

I found the number of 3 syncs some time ago, I can't prove it
right now. As far as I remember I simply activated the debug log output in
sqlite and have seen these syncs. sqlite needs to sync the journal file
once it has copied the pages from the DB, then it needs to sync the
DB file itself after writing pages to the main DB, then it probably needs
to clear and sync the journal file to indicate that the operation is
finished. This sums up to 3 -- but I'm guessing here, maybe DRH is willing
to confirm... :-)

Note that this handling is changed via PRAGMA synchronous;

I'm talking about windows, not sure if that applies on unix or mac.

Marcus


>
> Max Vlasov
>


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] UPDATE/INSERTing 1-2k rows slower than expected

2011-02-12 Thread Marcus Grimm
> OK...I added your trigger example as option 8.  And I had pasted the wrong
> version in my last email.  My timings were correct.  Your example also did
> sqlite3_exec instead of using prepare so it will run slower.

Yes, but that should be marginal.

When I sent my code the trigger version wasn't finished.
The intention is to compare a trigger update vs. a manual
UPDATE statement within your loop.
I was also quite alarmed about the issue recently raised
here about the performance penalty a trigger introduces.
So far my experiments based on your code don't confirm
this: Using a trigger is slightly faster vs. a manual update
if everything is within implicit transactions only - makes sense.
The trigger is slightly slower when compared to a manual UPDATE
when using bulk commits - here the overhead of the trigger
appears slightly higher than a manual UPDATE, but: the difference
is maybe 10% and not 6 times slower as reported. Strange...

Anyway, I would like to add my trigger test to your version
but I can't compile due to the missing 'gettimeofday' function.
Are you using MSVS ? Can you advise me where you got
gettimeofday from ?

Sqlite does use fsync() to sync the file content prior to
proceeding with journal/db file updating. Some months ago
I looked into that and found it is done 3 times per
"INSERT". I'm not speaking about sector write performance, it is
related to the file syncs;
I don't see what's wrong with my arguments, please advise.

Thanks for the code by the way - I like the approach to create
a little test tool to have a common basis to experiment.

Marcus

> I also made this compilable on Unix too.
>
> On Unix my timing matches the run time and there are the right # of
> records in the database with the right values.  Unix is faster than
> Windows (8-core 2.6GHz Unix system using SAS drives; I'm not sure of the
> model but I'm quite sure they are 7200RPM).
> Perhaps this is write caching on the disk?
>
> time ./batch 1600 1 0
> 1543.1
> real0m1.042s
> user0m0.046s
> sys 0m0.216s
>
> Something tells me your theory is wrong.  All 1600 records get inserted
> into the database in approx 1 second so it matches independent timing
> quite well.
>
> Here's my results with your trigger.  I'll maintain the timings are
> correct until somebody can prove this wrong.  Your theory is simply
> incomplete and is not taking into account SQLite's behavior.
>
> D:\SQLite>batch 504 1 0 -- simple single insert no batch
> 442.1
> D:\SQLite>batch 504 1 8 -- with trigger insert on 2nd table -- single row
> 307.3
> D:\SQLite>batch 504 1 9 -- trigger + in memory
> 5378.5
> D:\SQLite>batch 504 1 10 -- trigger +  index
> 212.3
> D:\SQLite>batch 504 1 12 -- trigger + WAL mode
> 2482.4
> D:\SQLite>batch 504 1 14 -- trigger + WAL mode + index
> 461.0
> D:\SQLite>batch 504 1 6 -- WAL mode + index
> 4608.0
> D:\SQLite>batch 504 504 6 -- WAL mode + index in one transaction
> 32256.0
> Of course you can't do your trigger (i.e. update) inside a transaction so
> there's some error checking for that now.
> You trigger is a factor of 10 slower.  I think we determined before that
> doing that update directly instead of a trigger would be faster.  I should
> add a method for that too.
>
> I also added a SYNCHRONOUS=OFF option#16
> That speeds it up a bit too.  I didn't see where FULL made much of a
> difference here so I just made OFF the option.
> D:\SQLite>batch 1600 1 0
> 414.6
> D:\SQLite>batch 1600 1 0
> 279.8
> D:\SQLite>batch 1600 1 16
> 602.4
> D:\SQLite>batch 1600 1 16
> 572.1
> D:\SQLite>batch 1600 1600 16
> 102400.0
>
> #include <stdio.h>
> #include <stdlib.h>
> #ifdef _WIN32
> #include <windows.h>
> #include <winsock2.h>   /* struct timeval */
> #else
> #include <sys/time.h>
> #endif
> #include <string.h>
> #include "sqlite3.h"
> double elapsed()
> {
> double t1;
> struct timeval tv;
> static long base_seconds;
> gettimeofday(&tv,NULL);
> if (base_seconds==0) base_seconds=tv.tv_sec;
> return (tv.tv_sec-base_seconds)+tv.tv_usec/(double)1000000;
> }
> int AddTrigger(sqlite3 *db)
> {
> char SqlTxt[256];
> int rc;
> strcpy(SqlTxt,"CREATE TRIGGER XX AFTER INSERT ON t FOR EACH ROW
> BEGIN ");
> strcat(SqlTxt,"UPDATE t2 SET n = NEW.i+1;");
> strcat(SqlTxt,"END;");
>  rc = sqlite3_exec(db,SqlTxt,NULL,NULL,NULL);
> if (rc != SQLITE_OK) {
> puts(sqlite3_errmsg(db));
> }
> return(0);
> }
> int main(int argc, char *argv[])
> {
>  sqlite3 *db;
>  sqlite3_stmt *stmt=NULL;
>  int rc;
>  int n=0;
>  int nrec=0;
>  int interval=0;
>  int flags=0;
>  double t1;
>  if (argc != 4) {
>   fprintf(stderr,"Usage: %s nrecords commit_interval flags\n",argv[0]);
>   fprintf(stderr,"Flag 0 = simple table\n");
>   fprintf(stderr,"Flag 1 = in-memory database\n");
>   fprintf(stderr,"Flag 2 = add index\n");
>   fprintf(stderr,"Flag 4 = WAL mode\n");
>   fprintf(stderr,"Flag 8 = Add a trigger\n");
>   fprintf(stderr,"Flag 16 = Synchronous=Off\n");
>   fprintf(stderr,"Add flags to combine features\n");
>   exit(-1);
>  }
>  

Re: [sqlite] UPDATE/INSERTing 1-2k rows slower than expected

2011-02-12 Thread Marcus Grimm
> I fixed a couple of bugs in my program...I had converted from clock() to
> the more appropriate gettimeofday and forgot to remove the CLOCKS_PER_SEC
> factor (what's a few orders of magnitude between friends :-).  Plus I added
> a 3rd argument so you can in-memory, index, and WAL mode too (or combine
> them).
>
> I should've realized it wasn't running this fast but the small 5000 record
> size got me.
> Test it yourself.
> I do have a 7200RPM drive.  My 261.4 numer is still 2+X your theoretical.

I don't want to be a smart-arse, but I still think your 261.4 is too fast.
On a 7200 RPM drive one will have 120 chances/sec to see a sector to be
written. Since sqlite, under normal journal mode, will need 3 syncs
per commit as far as I can recall, the maximum number drops further
down to 40 commits/sec. This is theoretical; in reality one will see
maybe 20 commits/sec. Not sure if a disc write-cache will interfere
with that calculation, though.
Am I wrong ? :-)
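
The arithmetic behind those numbers, as a tiny sketch:

#include <stdio.h>

int main(void)
{
    double revs_per_sec = 7200.0 / 60.0;  /* 120 chances/sec to catch a sector */
    int syncs_per_commit = 3;             /* journal sync, db sync, journal clear */
    printf("max commits/sec: %.0f\n", revs_per_sec / syncs_per_commit); /* 40 */
    return 0;
}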

Your loop omits the sqlite3_reset call between the bind statements:
This is allowed since some sqlite versions but I'm not sure if
you are really resetting the implicit transaction. Just an idea..

Your new code still uses the clocks scaling:
printf("%.1f\n",nrec/((elapsed()-t1)/CLOCKS_PER_SEC));
That looks strange to me as you are probably scaling seconds
down to milliseconds, or something.

Marcus


>
> batch 500 500 0
> 274207.4
> batch 500 500 1 -- memory
> 540540.5
> batch 500 500 2 -- w/index
> 160481.4
> batch 500 500 3 -- memory+index
> 220689.7
> batch 500 500 4 -- WAL mode
> 441989.0
> batch 500 500 5 -- WAL mode+memory
> 541455.2
> batch 500 500 6 -- WAL mode+index
> 188902.0
> batch 500 500 7 -- WAL mode+index+memory
> 219478.7
>
> And doing the 5000 record example and testing commit intervals
> batch 5000 5000 0
> 32.0
> batch 5000 2500 0
> 32.0
> batch 5000 1200 0
> 16.0
> batch 5000 500 0
> 16.0
> batch 5000 200 0
> 8.0
> batch 5000 100 0
> 3.6
> batch 5000 500 0
> 16.0
> batch 5000 50 0
> 2.0
> batch 5000 25 0
> 11034.5
> batch 5000 12 0
> 5333.3
> batch 5000 6 0
> 2461.5
> batch 5000 3 0
> 682.3
> batch 5000 2 0
> 509.6
> batch 5000 1 0
> 261.4
>
> #include <stdio.h>
> #include <stdlib.h>
> #include <string.h>
> #include <sys/time.h>
> #include "sqlite3.h"
> double elapsed()
> {
> double t1;
> struct timeval tv;
> static long base_seconds;
> gettimeofday(&tv,NULL);
> if (base_seconds==0) base_seconds=tv.tv_sec;
> return (tv.tv_sec-base_seconds)+tv.tv_usec/(double)1000000;
> }
> int main(int argc, char *argv[])
> {
>  sqlite3 *db;
>  sqlite3_stmt *stmt=NULL;
>  int rc;
>  int n=0;
>  int nrec=0;
>  int interval=0;
>  int flags=0;
>  double t1;
>  if (argc != 4) {
>   fprintf(stderr,"Usage: %s nrecords commit_interval flags\n",argv[0]);
>   fprintf(stderr,"Flag 0 = simple table\n");
>   fprintf(stderr,"Flag 1 = in-memory database\n");
>   fprintf(stderr,"Flag 2 = add index\n");
>   fprintf(stderr,"Flag 4 = WAL mode\n");
>   fprintf(stderr,"Add flags to combine features\n");
>   exit(-1);
>  }
>  nrec=atoi(argv[1]);
>  interval=atoi(argv[2]);
>  flags=atoi(argv[3]);
>  if (flags & 1) {
>   rc=sqlite3_open(":memory:",&db);
>  }
>  else {
>   remove("batch.db");
>   rc=sqlite3_open("batch.db",&db);
>  }
>  if (rc != SQLITE_OK) {
>   puts(sqlite3_errmsg(db));
>  }
>  sqlite3_exec(db,"create table t (i integer)",NULL,NULL,NULL);
>  if (flags & 2) {
>   sqlite3_exec(db,"create index tx on t (i)",NULL,NULL,NULL);
>  }
>  if (flags & 4) {
>   sqlite3_exec(db,"pragma journal_mode=WAL",NULL,NULL,NULL);
>  }
>  sqlite3_prepare_v2(db,"insert into t values(?)",-1,&stmt,NULL);
>  t1=elapsed();
>  if (interval != 1) rc=sqlite3_exec(db,"begin",NULL,NULL,NULL);
>  while(n < nrec) {
>   ++n;
>   if (interval != 1 && (n% interval)==0) {
>sqlite3_exec(db,"commit",NULL,NULL,NULL);
>sqlite3_exec(db,"begin",NULL,NULL,NULL);
>   }
>   sqlite3_bind_int(stmt,1,n);
>   rc = sqlite3_step(stmt);
>   if (rc != SQLITE_DONE) {
>puts(sqlite3_errmsg(db));
>   }
>   sqlite3_reset(stmt);
>  }
>  sqlite3_exec(db,"commit",NULL,NULL,NULL);
>  //printf("%.1f inserts per sec\n",nrec/((elapsed()-t1)/CLOCKS_PER_SEC));
>  printf("%.1f\n",nrec/((elapsed()-t1)/CLOCKS_PER_SEC));
>  sqlite3_close(db);
> }
>
>
>
> Michael D. Black
> Senior Scientist
> NG Information Systems
> Advanced Analytics Directorate
>
>
>
> 
> From: sqlite-users-boun...@sqlite.org [sqlite-users-boun...@sqlite.org] on
> behalf of Jim Wilcoxson [pri...@gmail.com]
> Sent: Saturday, February 12, 2011 10:11 AM
> To: General Discussion of SQLite Database
> Subject: EXT :Re: [sqlite] UPDATE/INSERTing 1-2k rows slower than expected
>
> On Sat, Feb 12, 2011 at 9:10 AM, Black, Michael (IS)
> > wrote:
>
>> Here's a little benchmark program I wrote to test a super-simple
>> one-integer insert to test sql speed and commit interval behavior.
>>
>> Running it on my machine (Windows XP64 

Re: [sqlite] EXT :Re: UPDATE/INSERTing 1-2k rows slower than expected

2011-02-12 Thread Marcus Grimm
> We already determined that triggers kill you on inserts so you can't use
> triggers (other than as another example of what NOT to do for speed).

that's why I added that to your test code... :-)

Surprisingly I'm not able to reproduce a dramatic slowdown using my
simple trigger test. Using a trigger does slow things down compared to
a manual update, but not more than expected; however, I'm not sure
if my trigger test is a valid use case.
Wal mode speeds things up noticeably. That is exciting.

Marcus


>
> Michael D. Black
> Senior Scientist
> NG Information Systems
> Advanced Analytics Directorate
>
>
>
> 
> From: sqlite-users-boun...@sqlite.org [sqlite-users-boun...@sqlite.org] on
> behalf of Marcus Grimm [mgr...@medcom-online.de]
> Sent: Saturday, February 12, 2011 10:23 AM
> To: General Discussion of SQLite Database
> Subject: EXT :Re: [sqlite] UPDATE/INSERTing 1-2k rows slower than expected
>
>> On Sat, Feb 12, 2011 at 9:10 AM, Black, Michael (IS)
>> <michael.bla...@ngc.com
>>> D:\SQLite>batch 5000 1
>>> 360766.6 inserts per sec
>>>
>>
>> Unless I'm missing something, SQLite has to update the first page of the
>> database on every commit, to update the change counter.  Assuming you
>> are
>> using rotating media, that record can only be updated 120 times per
>> second,
>> maximum, on a 7200RPM drive.
>>
>> I don't understand how you can do 360K commits per second if your system
>> is
>> actually doing "to the platter" writes on every commit.  Can someone
>> clue
>> me
>> in?
>
> I think the time measuring is not correct, as sqlite can't in fact
> do more than approx. 10-20 commits/sec.
>
> here is a slightly modified version, ignore the trigger stuff:
> --
> #include <stdio.h>
> #include <stdlib.h>
> #include <string.h>
> #include <time.h>
> #include "sqlite3.h"
>
> int AddTrigger(sqlite3 *db)
> {
> char SqlTxt[256];
> int rc;
>
> strcpy(SqlTxt,"CREATE TRIGGER XX AFTER INSERT ON t FOR EACH ROW
> BEGIN ");
> strcat(SqlTxt,"UPDATE t2 SET n = NEW.i;");
> strcat(SqlTxt,"END;");
> rc = sqlite3_exec(db,SqlTxt,NULL,NULL,NULL);
> if (rc != SQLITE_OK) {
> puts(sqlite3_errmsg(db));
> }
> return(0);
> }
>
> int main(int argc, char *argv[])
> {
>  sqlite3 *db;
>  sqlite3_stmt *stmt=NULL;
>  int rc;
>  int n=0;
>  int nrec=0;
>  int interval=0;
>  int    AddTr = 0;
>  double t1;
>  char   SqlTxt[256];
>
>  if (argc < 3) {
>   fprintf(stderr,"Usage: %s nrecords commit_interval\n",argv[0]);
>   exit(-1);
>  }
>  nrec=atoi(argv[1]);
>  interval=atoi(argv[2]);
>  if( argc == 4 )
> AddTr = atoi(argv[3]);
>  remove("C:\\batch.db");
>  rc=sqlite3_open("C:\\batch.db",&db);
>  if (rc != SQLITE_OK) {
>   puts(sqlite3_errmsg(db));
>  }
>  sqlite3_exec(db,"create table t (i integer)",NULL,NULL,NULL);
>  sqlite3_exec(db,"create table t2 (n integer)",NULL,NULL,NULL);
>  sqlite3_exec(db,"insert into t2 values(7);",NULL,NULL,NULL);
>  if(AddTr)
>  AddTrigger(db);
>
>  // turn on WAL mode if you want to test it
>  //sqlite3_exec(db,"pragma journal_mode=WAL",NULL,NULL,NULL);
>  //sqlite3_prepare_v2(db,"insert into t values(?)",-1,&stmt,NULL);
>  t1=clock();
>  if (interval != 1) rc=sqlite3_exec(db,"begin",NULL,NULL,NULL);
>  while(n < nrec)
>  {
>   ++n;
>   if (interval != 1 && (n% interval)==0) {
>sqlite3_exec(db,"commit",NULL,NULL,NULL);
>sqlite3_exec(db,"begin",NULL,NULL,NULL);
>   }
>   sprintf(SqlTxt, "insert into t values(%d);", n);
>   sqlite3_exec(db,SqlTxt,NULL,NULL,NULL);
>  }
>  sqlite3_exec(db,"commit",NULL,NULL,NULL);
>  printf("%.1f inserts per sec\n",nrec/((clock()-t1)/CLOCKS_PER_SEC));
>  sqlite3_close(db);
> }
> --
>
> Marcus
>
>>
>> Jim
>> --
>> HashBackup: easy onsite and offsite Unix backup
>> http://www.hashbackup.com
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] UPDATE/INSERTing 1-2k rows slower than expected

2011-02-12 Thread Marcus Grimm
> On Sat, Feb 12, 2011 at 9:10 AM, Black, Michael (IS)
> > D:\SQLite>batch 5000 1
>> 360766.6 inserts per sec
>>
>
> Unless I'm missing something, SQLite has to update the first page of the
> database on every commit, to update the change counter.  Assuming you are
> using rotating media, that record can only be updated 120 times per
> second,
> maximum, on a 7200RPM drive.
>
> I don't understand how you can do 360K commits per second if your system
> is
> actually doing "to the platter" writes on every commit.  Can someone clue
> me
> in?

I think the time measuring is not correct, as sqlite can't in fact
do more than approx. 10-20 commits/sec.

here is a slightly modified version, ignore the trigger stuff:
--
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include "sqlite3.h"

int AddTrigger(sqlite3 *db)
{
char SqlTxt[256];
int rc;

strcpy(SqlTxt,"CREATE TRIGGER XX AFTER INSERT ON t FOR EACH ROW BEGIN 
");
strcat(SqlTxt,"UPDATE t2 SET n = NEW.i;");
strcat(SqlTxt,"END;");
rc = sqlite3_exec(db,SqlTxt,NULL,NULL,NULL);
if (rc != SQLITE_OK) {
puts(sqlite3_errmsg(db));
}
return(0);
}

int main(int argc, char *argv[])
{
 sqlite3 *db;
 sqlite3_stmt *stmt=NULL;
 int rc;
 int n=0;
 int nrec=0;
 int interval=0;
 int    AddTr = 0;
 double t1;
 char   SqlTxt[256];

 if (argc < 3) {
  fprintf(stderr,"Usage: %s nrecords commit_interval\n",argv[0]);
  exit(-1);
 }
 nrec=atoi(argv[1]);
 interval=atoi(argv[2]);
 if( argc == 4 )
AddTr = atoi(argv[3]);
 remove("C:\\batch.db");
 rc=sqlite3_open("C:\\batch.db",&db);
 if (rc != SQLITE_OK) {
  puts(sqlite3_errmsg(db));
 }
 sqlite3_exec(db,"create table t (i integer)",NULL,NULL,NULL);
 sqlite3_exec(db,"create table t2 (n integer)",NULL,NULL,NULL);
 sqlite3_exec(db,"insert into t2 values(7);",NULL,NULL,NULL);
 if(AddTr)
 AddTrigger(db);

 // turn on WAL mode if you want to test it
 //sqlite3_exec(db,"pragma journal_mode=WAL",NULL,NULL,NULL);
 //sqlite3_prepare_v2(db,"insert into t values(?)",-1,&stmt,NULL);
 t1=clock();
 if (interval != 1) rc=sqlite3_exec(db,"begin",NULL,NULL,NULL);
 while(n < nrec)
 {
  ++n;
  if (interval != 1 && (n% interval)==0) {
   sqlite3_exec(db,"commit",NULL,NULL,NULL);
   sqlite3_exec(db,"begin",NULL,NULL,NULL);
  }
  sprintf(SqlTxt, "insert into t values(%d);", n);
  sqlite3_exec(db,SqlTxt,NULL,NULL,NULL);
 }
 sqlite3_exec(db,"commit",NULL,NULL,NULL);
 printf("%.1f inserts per sec\n",nrec/((clock()-t1)/CLOCKS_PER_SEC));
 sqlite3_close(db);
}
--

Marcus

>
> Jim
> --
> HashBackup: easy onsite and offsite Unix backup
> http://www.hashbackup.com
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Help on DELETE FROM...

2011-01-18 Thread Marcus Grimm
ok, just to close this. Both versions of course do work, thanks again to
Michael and Igor for the help. I currently use the NOT IN variant but
I think both are equivalent, maybe with a speed difference, but since that table
is small I didn't bother to test it.

For those non-Sql folks like me, here are the two versions:

delete from THI where ID in (
   select t2.ID from THI t2 where t2.UserID=THI.UserID
   order by t2.TimeStamp desc limit 100 offset 10);

delete from THI where ID not in (
   select t2.ID from THI t2 where t2.UserID=THI.UserID
   order by t2.TimeStamp desc limit 10);
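
Wrapped in C, the NOT IN variant becomes a one-call helper (a sketch; the
name PruneHistory is made up and db is assumed to be an open connection):

#include <stdio.h>
#include "sqlite3.h"

/* prune THI so each user keeps only the 10 newest rows */
static int PruneHistory(sqlite3 *db)
{
    char *zErr = NULL;
    int rc = sqlite3_exec(db,
        "DELETE FROM THI WHERE ID NOT IN ("
        "  SELECT t2.ID FROM THI t2 WHERE t2.UserID=THI.UserID"
        "  ORDER BY t2.TimeStamp DESC LIMIT 10);",
        NULL, NULL, &zErr);
    if (rc != SQLITE_OK) {
        fprintf(stderr, "prune failed: %s\n", zErr);
        sqlite3_free(zErr);
    }
    return rc;
}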

Best

Marcus

On 17.01.2011 18:38, Black, Michael (IS) wrote:
> Competing solutions...gotta' love it...I prefer the postive solution as it's 
> a list of what WILL be deleted.  Though there's some merit in "tell me what 
> will be left".
>
> delete from THI where ID in (
> select t2.ID from THI t2 where t2.UserID=UserID
> order by t2.TimeStamp desc limit 100 offset 10);
>
> Michael D. Black
> Senior Scientist
> Advanced Analytics Directorate
> Northrop Grumman Information Systems
>
>
> 
>
> From: sqlite-users-boun...@sqlite.org on behalf of Marcus Grimm
> Sent: Mon 1/17/2011 11:18 AM
> To: General Discussion of SQLite Database
> Subject: EXTERNAL:Re: [sqlite] Help on DELETE FROM...
>
>
>
>
> On 17.01.2011 17:26, Black, Michael (IS) wrote:
>> I came up with the logical opposite which I think does what you want
>>
>> select a.id  from thi as a where a.id in (select thi.id from thi where 
>> a.userid=thi.userid order by timestamp limit 100 offset 10);
>>
>> You just need to set the limit value to some ridiculous number.
>
> ahh... nice trick!! Thank you!  ;)
>
> Hm.. but how do I place that into a "DELETE FROM ..." statement ?
> Here I can't use the alias syntax...
>
>
>
>
>>
>> Michael D. Black
>> Senior Scientist
>> Advanced Analytics Directorate
>> Northrop Grumman Information Systems
>>
>>
>> 
>>
>> From: sqlite-users-boun...@sqlite.org on behalf of Marcus Grimm
>> Sent: Mon 1/17/2011 10:24 AM
>> To: General Discussion of SQLite Database
>> Subject: EXTERNAL:Re: [sqlite] Help on DELETE FROM...
>>
>>
>>
>>
>> On 17.01.2011 17:14, Igor Tandetnik wrote:
>>> Marcus Grimm<mgr...@medcom-online.de>wrote:
>>>> I have a table to record some history data, for example
>>>> items a user recently selected:
>>>>
>>>> CREATE TABLE THI(ID INTEGER PRIMARY KEY, TimeStamp INTEGER, UserID 
>>>> INTEGER, DataID INTEGER);
>>>>
>>>> That table needs to trace only the last 10 events, thus I would like
>>>> to remove entries from all users until each user has only 10 recent entries
>>>> in that table.
>>>>
>>>> I can delete for a specific user (42), using this:
>>>>
>>>> DELETE FROM THI WHERE
>>>> (UserID=42) AND
>>>> (ID NOT IN (SELECT ID FROM THI WHERE UserID=42 ORDER BY TimeStamp DESC 
>>>> LIMIT 10));
>>>>
>>>> But how do I do this for all users without using a surrounding loop
>>>> on application level ?
>>>
>>> delete from THI where ID not in (
>>>select ID from THI t2 where t2.UserID = UserId
>>>order by t2.TimeStamp desc limit 10);
>>
>> Thanks Igor!
>> but a stupid question: Doesn't this basically remove
>> all entries from the table because of the "NOT IN" condition ?
>>
>> Marcus
>>
>>>
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
>>
>>
>>
>>
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Help on DELETE FROM...

2011-01-17 Thread Marcus Grimm
Thank you very much Michael and Igor for taking your time!

I'm still not sure if Igor's last version will work
but I'll not start to argue with Igor about Sql! :-)

Nor do I know how to put Michael's solution in a DELETE statement.

I have to leave for today, but I'll come back to this
after I've tried the two solutions and report back.

Thanks again for an enlightening afternoon.

Marcus


On 17.01.2011 18:04, Black, Michael (IS) wrote:
> The "NOT IN" approach doesn't work..here's sample data using select rather 
> than delete to show the result.
>
> sqlite>  .dump
> PRAGMA foreign_keys=OFF;
> BEGIN TRANSACTION;
> CREATE TABLE THI(ID INTEGER PRIMARY KEY, TimeStamp INTEGER, UserID INTEGER, 
> DataID INTEGER);
> INSERT INTO "THI" VALUES(0,10,10,0);
> INSERT INTO "THI" VALUES(1,11,10,0);
> INSERT INTO "THI" VALUES(2,12,10,0);
> INSERT INTO "THI" VALUES(3,13,10,0);
> INSERT INTO "THI" VALUES(4,14,10,0);
> INSERT INTO "THI" VALUES(5,15,10,0);
> INSERT INTO "THI" VALUES(6,16,10,0);
> INSERT INTO "THI" VALUES(7,17,10,0);
> INSERT INTO "THI" VALUES(8,18,10,0);
> INSERT INTO "THI" VALUES(9,19,10,0);
> INSERT INTO "THI" VALUES(10,20,10,0);
> INSERT INTO "THI" VALUES(11,21,10,0);
> INSERT INTO "THI" VALUES(12,22,10,0);
> INSERT INTO "THI" VALUES(13,21,20,0);
> INSERT INTO "THI" VALUES(14,22,20,0);
> INSERT INTO "THI" VALUES(15,23,20,0);
> INSERT INTO "THI" VALUES(16,24,20,0);
> INSERT INTO "THI" VALUES(17,25,20,0);
> INSERT INTO "THI" VALUES(18,26,20,0);
> INSERT INTO "THI" VALUES(19,27,20,0);
> INSERT INTO "THI" VALUES(20,28,20,0);
> INSERT INTO "THI" VALUES(21,29,20,0);
> INSERT INTO "THI" VALUES(22,30,20,0);
> INSERT INTO "THI" VALUES(23,31,20,0);
> INSERT INTO "THI" VALUES(24,32,20,0);
> INSERT INTO "THI" VALUES(25,33,20,0);
> COMMIT;
> sqlite>  select * from THI where ID not in (
> ...>  select ID from THI t2 where t2.UserID = UserId
> ...>  order by t2.TimeStamp desc limit 10);
> 0|10|10|0
> 1|11|10|0
> 2|12|10|0
> 3|13|10|0
> 4|14|10|0
> 5|15|10|0
> 6|16|10|0
> 7|17|10|0
> 8|18|10|0
> 9|19|10|0
> 10|20|10|0
> 11|21|10|0
> 12|22|10|0
> 13|21|20|0
> 14|22|20|0
> 15|23|20|0
>
> Whereas the inclusion approach does work
> sqlite>  select *  from thi as a where a.id in
> ...>  (select thi.id from thi where a.userid=thi.userid
> ...>  order by timestamp limit 100 offset 10);
> 10|20|10|0
> 11|21|10|0
> 12|22|10|0
> 23|31|20|0
> 24|32|20|0
> 25|33|20|0
>
>
> Michael D. Black
> Senior Scientist
> Advanced Analytics Directorate
> Northrop Grumman Information Systems
>
>
> 
>
> From: sqlite-users-boun...@sqlite.org on behalf of Igor Tandetnik
> Sent: Mon 1/17/2011 10:57 AM
> To: sqlite-users@sqlite.org
> Subject: EXTERNAL:Re: [sqlite] Help on DELETE FROM...
>
>
>
> Marcus Grimm<mgr...@medcom-online.de>  wrote:
>>> delete from THI where ID not in (
>>>   select ID from THI t2 where t2.UserID = UserId
>>>   order by t2.TimeStamp desc limit 10);
>
> A correction: I think the subselect should say "select t2.ID from ..."
>
>> Thanks Igor!
>> but a stupid question: Doesn't this basically remove
>> all entries from the table because of the "NOT IN" condition ?
>
> It removes all records from the table that meet the condition in the WHERE 
> clause - in other words, those whose IDs don't appear in the subselect.
>
> Have you tested this statement and found that it didn't satisfy your 
> requirements?
> --
> Igor Tandetnik
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
>
>
>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Help on DELETE FROM...

2011-01-17 Thread Marcus Grimm

On 17.01.2011 17:26, Black, Michael (IS) wrote:
> I came up with the logical opposite which I think does what you want
>
> select a.id  from thi as a where a.id in (select thi.id from thi where 
> a.userid=thi.userid order by timestamp limit 100 offset 10);
>
> You just need to set the limit value to some ridiculous number.

ahh... nice trick!! Thank you!  ;)

Hm.. but how do I place that into a "DELETE FROM ..." statement ?
Here I can't use the alias syntax...




>
> Michael D. Black
> Senior Scientist
> Advanced Analytics Directorate
> Northrop Grumman Information Systems
>
>
> 
>
> From: sqlite-users-boun...@sqlite.org on behalf of Marcus Grimm
> Sent: Mon 1/17/2011 10:24 AM
> To: General Discussion of SQLite Database
> Subject: EXTERNAL:Re: [sqlite] Help on DELETE FROM...
>
>
>
>
> On 17.01.2011 17:14, Igor Tandetnik wrote:
>> Marcus Grimm<mgr...@medcom-online.de>   wrote:
>>> I have a table to record some history data, for example
>>> items a user recently selected:
>>>
>>> CREATE TABLE THI(ID INTEGER PRIMARY KEY, TimeStamp INTEGER, UserID INTEGER, 
>>> DataID INTEGER);
>>>
>>> That table needs to trace only the last 10 events, thus I would like
>>> to remove entries from all users until each user has only 10 recent entries
>>> in that table.
>>>
>>> I can delete for a specific user (42), using this:
>>>
>>> DELETE FROM THI WHERE
>>>(UserID=42) AND
>>>(ID NOT IN (SELECT ID FROM THI WHERE UserID=42 ORDER BY TimeStamp DESC 
>>> LIMIT 10));
>>>
>>> But how do I do this for all users without using a surrounding loop
>>> on application level ?
>>
>> delete from THI where ID not in (
>>   select ID from THI t2 where t2.UserID = UserId
>>   order by t2.TimeStamp desc limit 10);
>
> Thanks Igor!
> but a stupid question: Doesn't this basically remove
> all entries from the table because of the "NOT IN" condition ?
>
> Marcus
>
>>
> _______
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
>
>
>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

-- 
Marcus Grimm
MedCom GmbH Darmstadt, Rundeturmstr. 12, 64283 Darmstadt
Tel: +49(0)6151-95147-10
Fax: +49(0)6151-95147-20
web: www.medcom-online.de
--
MedCom slogans of the month:
"Vacation ? -- Every day at MedCom is a paid vacation!"
"Friday I have monday in my mind."
"MedCom -- Every week a vacation, every day an event, every hour a cliffhanger,
every minute a climax."
"Damned, it's weekend again!"
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Help on DELETE FROM...

2011-01-17 Thread Marcus Grimm

On 17.01.2011 17:14, Igor Tandetnik wrote:
> Marcus Grimm<mgr...@medcom-online.de>  wrote:
>> I have a table to record some history data, for example
>> items a user recently selected:
>>
>> CREATE TABLE THI(ID INTEGER PRIMARY KEY, TimeStamp INTEGER, UserID INTEGER, 
>> DataID INTEGER);
>>
>> That table needs to trace only the last 10 events, thus I would like
>> to remove entries from all users until each user has only 10 recent entries
>> in that table.
>>
>> I can delete for a specific user (42), using this:
>>
>> DELETE FROM THI WHERE
>>   (UserID=42) AND
>>   (ID NOT IN (SELECT ID FROM THI WHERE UserID=42 ORDER BY TimeStamp DESC 
>> LIMIT 10));
>>
>> But how do I do this for all users without using a surrounding loop
>> on application level ?
>
> delete from THI where ID not in (
>  select ID from THI t2 where t2.UserID = UserId
>  order by t2.TimeStamp desc limit 10);

Thanks Igor!
but a stupid question: Doesn't this basically remove
all entries from the table because of the "NOT IN" condition ?

Marcus

>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Help on DELETE FROM...

2011-01-17 Thread Marcus Grimm
Thanks Venkat and Michael,

actually I'm doing this only once per day during
administration hours (i.e. at night).
Thanks for the trigger advice; I hadn't yet considered a trigger
for that purpose, wanting to keep the schema simple and reduce the (little)
overhead involved when inserts are made into the DB. But I might change that.
However, from the replies I conclude that it is not so easy
doing this in pure sql; I wasn't sure about that as I'm not really good in sql. 
;)

Marcus


On 17.01.2011 15:24, Black, Michael (IS) wrote:
> Seem to me that putting it inside a trigger would be the best choice...then 
> you don't have to worry about it unless you have speed concerns and only want 
> to do this once a day or such.
>
> CREATE TABLE THI(ID INTEGER PRIMARY KEY, TimeStamp INTEGER, UserID INTEGER, 
> DataID INTEGER);
> create trigger t1 after insert on thi
> begin
> delete from thi WHERE (UserID=new.UserID) AND (ID NOT IN (SELECT ID FROM THI 
> WHERE UserID=new.UserID ORDER BY TimeStamp DESC LIMIT 10));
> end;
>
> As long as you have an index on UserID this should be pretty fast.
>
> Michael D. Black
> Senior Scientist
> Advanced Analytics Directorate
> Northrop Grumman Information Systems
>
>
> ____
>
> From: sqlite-users-boun...@sqlite.org on behalf of Marcus Grimm
> Sent: Mon 1/17/2011 6:17 AM
> To: General Discussion of SQLite Database
> Subject: EXTERNAL:[sqlite] Help on DELETE FROM...
>
>
>
> Hi List,
>
> sorry for not being very sqlite specific here but I would
> like some advice on a delete operation for which
> I can't find the right sql command.
> Currently I do it at the C programming level using a loop
> but I think there must be a better sql way.
>
> Anyway, here is the story:
>
> I have a table to record some history data, for example
> items a user recently selected:
>
> CREATE TABLE THI(ID INTEGER PRIMARY KEY, TimeStamp INTEGER, UserID INTEGER, 
> DataID INTEGER);
>
> That table needs to trace only the last 10 events, thus I would like
> to remove entries from all users until each user has only 10 recent entries
> in that table.
>
> I can delete for a specific user (42), using this:
>
> DELETE FROM THI WHERE
>(UserID=42) AND
>(ID NOT IN (SELECT ID FROM THI WHERE UserID=42 ORDER BY TimeStamp DESC 
> LIMIT 10));
>
> But how do I do this for all users without using a surrounding loop
> on application level ?
>
> Thank you
>
> Marcus
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
>
>
>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Help on DELETE FROM...

2011-01-17 Thread Marcus Grimm
Hi List,

sorry for not being very sqlite specific here but I would
like some advice on a delete operation for which
I can't find the right sql command.
Currently I do it at the C programming level using a loop
but I think there must be a better sql way.

Anyway, here is the story:

I have a table to record some history data, for example
items a user recently selected:

CREATE TABLE THI(ID INTEGER PRIMARY KEY, TimeStamp INTEGER, UserID INTEGER, 
DataID INTEGER);

That table needs to trace only the last 10 events, thus I would like
to remove entries from all users until each user has only 10 recent entries
in that table.

I can delete for a specific user (42), using this:

DELETE FROM THI WHERE
  (UserID=42) AND
  (ID NOT IN (SELECT ID FROM THI WHERE UserID=42 ORDER BY TimeStamp DESC LIMIT 
10));

But how do I do this for all users without using a surrounding loop
on application level ?

Thank you

Marcus
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Windows performance problems associated with malloc()

2010-12-18 Thread Marcus Grimm
> Apparently set_sbh_threshold doesn't work in a DLL.
> It uses different heap pointers.
> Just beware...
> I had found a reference where the guy said to watch out as subtle bugs
> were
> introduced putting this call inside a DLL.
> http://www.kuro5hin.org/story/2001/1/4/173637/9948

Interesting. Thanks for pointing to this.
Please note that this reference is 9 years old, we observed
that problem by moving to VS2008, not in older versions.
Anyway, I agree that this threshold issue is weird -
Microsoft silently turned it off and claims it is no longer
needed, but the fact is it was needed to get the same exe
running smoothly on XP. I guess only Microsoft can tell
what the story behind it is... :-)

I would rather not suggest putting something like set_sbh_threshold
in sqlite; my suggestion would be to just compile the sqlite shell
using this call one time for test purposes, see if it changes
the reported malloc issue, and if so: take it as further
evidence that the memsys5 which Richard appears to implement
in fact is a useful extension to the sqlite core, at least for windows.

Marcus

>
>
>
> -Original Message-
> From: sqlite-users-boun...@sqlite.org
> [mailto:sqlite-users-boun...@sqlite.org]
> On Behalf Of Marcus Grimm
> Sent: Saturday, December 18, 2010 5:31 AM
> To: General Discussion of SQLite Database
> Subject: EXTERNAL:Re: [sqlite] Windows performance problems associated
> with
> malloc()
>
>> I believe the Windows default is to use the LFH on Vista and newer
>> versions of Windows.
>> The suggestion by Marcus Grimm to use  _set_sbh_threshold() to enable
>> use
>> of the SBH (small block heap) may help under some usage scenarios on
>> those platforms.
>
> Just to be precise: The _set_sbh_threshold() setting was not related
> to sqlite but to our application in general. A part of the application
> used to malloc/free a few thousand small items (a linked pointer list)
> and this operation surprisingly slowed down dramatically on XP, not on
> Win-7. Reactivating the small block heap by using _set_sbh_threshold(512)
> did the trick, although it remains strange why it is necessary.
> Anyway, Sqlite used to be much more advanced when dealing
> with memory and thus it might not be applicable, however fts3
> might be another story...
>
> The new exe fixes the stupid dll issue but I'm not able to see
> any difference on XP using standard queries; haven't tried fts3 yet.
>
> Marcus
>
>
>
>>
>> -Shane
>>
>>
>> On Fri, Dec 17, 2010 at 6:29 PM, Doug <pa...@poweradmin.com> wrote:
>>> I wonder if HeapSetInformation (which can enable a low-fragmentation
>>> heap)
>>> would be helpful too.  You can set it on the process
>>> and the CRT heaps.  Note that it's not available in Win2K and earlier.
>>>
>>> Doug
>>>
>>> -Original Message-
>>> From: sqlite-users-boun...@sqlite.org
>>> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Marcus Grimm
>>> Sent: Friday, December 17, 2010 9:21 AM
>>> To: General Discussion of SQLite Database
>>> Subject: Re: [sqlite] Windows performance problems associated with
>>> malloc()
>>>
>>>> An SQLite user has brought to our attention a performance issue in
>>>> SQLite that seems to be associated with malloc().  If you have
>>>> insights or corroborating experience with this issue please let me
>>>> know.
>>>
>>> We recently had a malloc/free slowdown issue after changing to VS2008
>>> in
>>> combination with XP.
>>> Not sure if it applies in your case but for us this helps:
>>> --
>>> if( _get_sbh_threshold() < 512 )
>>> {
>>>  _set_sbh_threshold(512);
>>> }
>>> ---
>>>
>>> I'm unable to run your sqlite3.exe: MSVCR100.dll not found.
>>>
>>> Anyway, maybe the above helps.
>>>
>>> Kind regards
>>>
>>> Marcus
>>>
>>>>
>>>> SQLite supports a "zero-malloc option" (see
>>>> http://www.sqlite.org/malloc.html#memsys5 for details) which uses its
>>>> own internal memory allocator rather than system malloc().  Earlier
>>>> today, we patched the command-line shell to allow the zero-malloc
>>>> option to be turned on.  If you do:
>>>>
>>>>  sqlite3 DATABASE
>>>>
>>>> then the regular system memory allocator is used, but if you say:
>>>>
>>>>  sqlite3 -heap 100M DATABASE
>>>>
>>>> then the MEMSYS5 memory allocator will be used with a pool of 100MB of
>>

Re: [sqlite] Windows performance problems associated with malloc()

2010-12-18 Thread Marcus Grimm
> I believe the Windows default is to use the LFH on Vista and newer
> versions of Windows.
> The suggestion by Marcus Grimm to use  _set_sbh_threshold() to enable use
> of the SBH (small block heap) may help under some usage scenarios on
> those platforms.

Just to be precise: The _set_sbh_threshold() setting was not related
to sqlite but to our application in general. A part of the application
used to malloc/free a few thousand small items (a linked pointer list)
and this operation surprisingly slowed down dramatically on XP, not on
Win-7. Reactivating the small block heap by using _set_sbh_threshold(512)
did the trick, although it remains strange why it is necessary.
Anyway, sqlite is already much more advanced when dealing
with memory and thus it might not be applicable; however, fts3
might be another story...

The new exe fixes the stupid dll issue but I'm not able to see
any difference on XP using standard queries; haven't tried fts3 yet.

Marcus



>
> -Shane
>
>
> On Fri, Dec 17, 2010 at 6:29 PM, Doug <pa...@poweradmin.com> wrote:
>> I wonder if HeapSetInformation (which can enable a low-fragmentation
>> heap)
>> would be helpful too.  You can set it on the process
>> and the CRT heaps.  Note that it's not available in Win2K and earlier.
>>
>> Doug
>>
>> -Original Message-
>> From: sqlite-users-boun...@sqlite.org
>> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Marcus Grimm
>> Sent: Friday, December 17, 2010 9:21 AM
>> To: General Discussion of SQLite Database
>> Subject: Re: [sqlite] Windows performance problems associated with
>> malloc()
>>
>>> An SQLite user has brought to our attention a performance issue in
>>> SQLite that seems to be associated with malloc().  If you have
>>> insights or corroborating experience with this issue please let me
>>> know.
>>
>> We recently had a malloc/free slowdown issue after changing to VS2008 in
>> combination with XP.
>> Not sure if it applies in your case but for us this helps:
>> --
>> if( _get_sbh_threshold() < 512 )
>> {
>>  _set_sbh_threshold(512);
>> }
>> ---
>>
>> I'm unable to run your sqlite3.exe: MSVCR100.dll not found.
>>
>> Anyway, maybe the above helps.
>>
>> Kind regards
>>
>> Marcus
>>
>>>
>>> SQLite supports a "zero-malloc option" (see
>>> http://www.sqlite.org/malloc.html#memsys5 for details) which uses its
>>> own internal memory allocator rather than system malloc().  Earlier
>>> today, we patched the command-line shell to allow the zero-malloc
>>> option to be turned on.  If you do:
>>>
>>>      sqlite3 DATABASE
>>>
>>> then the regular system memory allocator is used, but if you say:
>>>
>>>      sqlite3 -heap 100M DATABASE
>>>
>>> then the MEMSYS5 memory allocator will be used with a pool of 100MB of
>>> memory to work with.  (You can adjust the size of your memory pool for
>>> whatever you need.)
>>>
>>> There are win32 and win64 builds of this updated command-line shell
>>> compiled using vs2010 here:
>>>
>>>      http://www.sqlite.org/draft/download.html
>>>
>>> For certain full-text search queries against a large database, we are
>>> seeing speeds which are 3x faster when using "-heap 300M" (the memsys5
>>> memory
>>> allocator) versus omitting the -heap option and thus using system
>>> malloc().
>>> This is on windows7.  Similar results are seen with both gcc and
>>> vs2010 builds.
>>>
>>> If you have any large queries that you can run on windows using the
>>> command-line shell, I would appreciate you timing those queries using
>>> the new shells from the download page, both with "-heap 300M" and
>>> without it, and letting me know about any performance differences you
>>> see.
>>>
>>> I also observe that compiling for 64-bit using vs2010 (not an option
>>> with my ancient version 2.95.3 gcc cross-compiler) that the queries
>>> are an additional 2x faster.  I was surprised at the dramatic
>>> performance increase in going from 32-bit to 64-bit.  Is such a
>>> speed-up typical?
>>>
>>> The use of "-heap 300M" seems to not make any performance difference
>>> on Linux.
>>>
>>> Any insights into why this is, what we are doing wrong, or what we can
>>> do to improve the performance of malloc() on windows will be
>>> appreciated.

Re: [sqlite] Windows performance problems associated with malloc()

2010-12-17 Thread Marcus Grimm
> An SQLite user has brought to our attention a performance issue in SQLite
> that seems to be associated with malloc().  If you have  insights or
> corroborating experience with this issue please let me know.

We recently had a malloc/free slowdown issue after
changing to VS2008 in combination with XP.
Not sure if it applies in your case but for us
this helps:
--
/* re-enable the CRT small-block heap for allocations up to
** 512 bytes (newer CRTs leave it disabled by default) */
if( _get_sbh_threshold() < 512 )
{
  _set_sbh_threshold(512);
}
---

I'm unable to run your sqlite3.exe: MSVCR100.dll not found.

Anyway, maybe the above helps.

Kind regards

Marcus

>
> SQLite supports a "zero-malloc option" (see
> http://www.sqlite.org/malloc.html#memsys5 for details) which uses its own
> internal memory allocator rather than system malloc().  Earlier today, we
> patched the command-line shell to allow the zero-malloc option to be
> turned
> on.  If you do:
>
>  sqlite3 DATABASE
>
> then the regular system memory allocator is used, but if you say:
>
>  sqlite3 -heap 100M DATABASE
>
> then the MEMSYS5 memory allocator will be used with a pool of 100MB of
> memory to work with.  (You can adjust the size of your memory pool for
> whatever you need.)
>
> There are win32 and win64 builds of this updated command-line shell
> compiled
> using vs2010 here:
>
>  http://www.sqlite.org/draft/download.html
>
> For certain full-text search queries against a large database, we are
> seeing
> speeds which are 3x faster when using "-heap 300M" (the memsys5 memory
> allocator) versus omitting the -heap option and thus using system
> malloc().
> This is on windows7.  Similar results are seen with both gcc and vs2010
> builds.
>
> If you have any large queries that you can run on windows using the
> command-line shell, I would appreciate you timing those queries using the
> new shells from the download page, both with "-heap 300M" and without it,
> and letting me know about any performance differences you see.
>
> I also observe that compiling for 64-bit using vs2010 (not an option with
> my
> ancient version 2.95.3 gcc cross-compiler) that the queries are an
> additional 2x faster.  I was surprised at the dramatic performance
> increase
> in going from 32-bit to 64-bit.  Is such a speed-up typical?
>
> The use of "-heap 300M" seems to not make any performance difference on
> Linux.
>
> Any insights into why this is, what we are doing wrong, or what we can do
> to
> improve the performance of malloc() on windows will be appreciated.
>
> --
> D. Richard Hipp
> d...@sqlite.org
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] No "PRAGMA default_cache_size" in 3.7.4 ?

2010-12-10 Thread Marcus Grimm
Hello,

just read in the docs that the pragma to set the default
cache size is deprecated.
I'm wondering why...
I found it quite useful as it allows me to set the cache size
once for all connections rather than having to set it on
each individual open connection within a server application.
The docs don't mention what to use instead, or did I miss
something?
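
For now I set it per connection right after opening, roughly
like this (a sketch; the value is just an example):

  sqlite3_exec(db, "PRAGMA cache_size=10000;", 0, 0, 0);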

Kind regards

Marcus Grimm

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] WAL mode and backup API

2010-11-30 Thread Marcus Grimm

On 30.11.2010 11:50, Dan Kennedy wrote:
>
>> BTW:
>> Is there any chance that in the future the wal mode
>> will avoid that the backup API will restart on DB
>> changes during the backup loop ?
>> Currently, even in wal mode, it does restart when changes
>> are detected while the backup is ongoing.
>
> What happens if you open a read transaction on the source database
> before starting the backup? i.e.
>
> sqlite3_exec(pSource, "BEGIN; SELECT * FROM sqlite_master", 0, 0, 0);
>
> ... do backup ...
>
> sqlite3_exec(pSource, "COMMIT", 0, 0, 0);
>
> Does that help things any?

wow... what a trick!
Using your hint I'm not able to reproduce any restarts!
I can imagine what your hint does in wal mode, but without your input
I would never have come up with that idea... :-)

Thanks Dan! This makes the wal mode and backup API even more sexy.
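
For the archives, the complete pattern then looks roughly like this
(a minimal sketch, error handling omitted; pSrc and pDst are assumed
to be open connections):

  /* hold a read transaction on the source so the backup
  ** does not restart when the DB changes underneath it */
  sqlite3_exec(pSrc, "BEGIN; SELECT * FROM sqlite_master;", 0, 0, 0);
  sqlite3_backup *pBackup = sqlite3_backup_init(pDst, "main", pSrc, "main");
  if( pBackup ){
    while( sqlite3_backup_step(pBackup, 100)==SQLITE_OK ){}
    sqlite3_backup_finish(pBackup);
  }
  sqlite3_exec(pSrc, "COMMIT;", 0, 0, 0);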

Marcus


> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] WAL mode and backup API

2010-11-30 Thread Marcus Grimm
On 30.11.2010 11:32, Dan Kennedy wrote:
> On 11/30/2010 05:03 PM, Marco Bambini wrote:
>> Hello,
>>
>> if a running sqlite database is in WAL mode and a backup is performed on
>> that db using the sqlite3_backup API, is that process considered like a
>> reader, and can it proceed concurrently with other readers and with the
>> other writer?
>>
>
> Yes. The connection to the backup source acts like any other reader.

BTW:
Is there any chance that in the future the wal mode
will avoid that the backup API will restart on DB
changes during the backup loop ?
Currently, even in wal mode, it does restart when changes
are detected while the backup is ongoing.

Marcus

> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Speed up DELETE of a lot of records

2010-10-08 Thread Marcus Grimm

Michele Pradella wrote:
>   ok, I'll try with 3.7.3.
> DELETE is a little bit faster, and the -wal is reintegrated when I close
> the connection.
> Changing cache_size (I tried 1) DELETE takes the same time to complete.

1 doesn't sound very big; I define it to e.g. 32 instead when
working with a GB-sized DB (mainly for index creation and "PRAGMA
integrity_check;").

Maybe playing with temporarily setting PRAGMA synchronous=OFF; would be useful,
although I think the wal mode should already avoid too much syncing.

Marcus


> Was my fault, because to close the shell connection I used Ctrl+C but 
> this leave the -wal file. If I close with .quit the -wal file is 
> reintegrated.
> I thought that Ctrl+C is like a ".quit " but it's not.
> Anyway if I close the DB connection with Ctrl+C and than reopen 
> connection and close it with .quit the -wal file is not reintegrated.
> 
> Il 08/10/2010 9.56, Michele Pradella ha scritto:
>>I'll try to increase cache size, and I'll try operation on my Db with
>> the 3.7.3 anyway I already ported the Fix of the WAL issue from recent
>> snapshot. I'll try and let you know
>>
>> Il 08/10/2010 9.52, Marcus Grimm ha scritto:
>>> Michele Pradella wrote:
>>>> As I explain in previews email, I think that recreating the index is
>>>> the slowest operation I can do on my DB.
>>>> Anyway in my first email I ask another question about -wal file
>>>> Tryin to DELETE the (5 millions) records with the shell SQLITE interface
>>>> I can see the -wal grow up till 600MB. I can not understand why the -wal
>>>> is no more reintegrated, and even when I close the connection (closing
>>>> the shell) -wal file still exist.
>>>> Trying for example the statement "create table new as select * from
>>>> current where condition = keep;" the -wal file grow up till 1,5GB and
>>>> than the same, after closing shell the -wal remain.
>>>>
>>>> Moreover the operation above "create new" terminate with a "Error:
>>>> disk I/O error"
>>>> The hard disk I use has a lot of free space and it's SATA2 hard disk, so
>>>> is internal
>>> You may try with the new 3.7.3 version, the 3.7.2 doesn't operate
>>> very well on win32 when doing huge transactions in wal mode.
>>>
>>> 2nd, when running sqlite with a gigabyte sized database it is useful
>>> to heavily increase the cache size, not sure if that helps for delete
>>> statements though, but it does in particular when creating indices.
>>>
>>> Marcus
>>>
>>>> Il 07/10/2010 20.38, Petite Abeille ha scritto:
>>>>> On Oct 7, 2010, at 6:05 PM, Michele Pradella wrote:
>>>>>
>>>>>> I have a DB of about 3GB: the DB has about 23 millions of records. [..]
>>>>>> the statement is trying to delete about 5 millions records and it takes
>>>>>> about 4-5minutes.
>>>>>> Is there a way to try to speed up the DELETE?
>>>>> Considering that you want to delete about a quarter of the records, 
>>>>> perhaps it would be more efficient to recreate that table altogether, no?
>>>>>
>>>>> Pseudocode:
>>>>>
>>>>> create table new as select * from current where condition = keep;
>>>>> create index on new;
>>>>> drop table current;
>>>>> alter table rename new to current;
>>>>>
>>>>> ___
>>>>> sqlite-users mailing list
>>>>> sqlite-users@sqlite.org
>>>>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>>>>
>>>>>
>>> ___
>>> sqlite-users mailing list
>>> sqlite-users@sqlite.org
>>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>>
>>>
> 
> 
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Speed up DELETE of a lot of records

2010-10-08 Thread Marcus Grimm

Michele Pradella wrote:
>   As I explained in a previous email, I think that recreating the index is
> the slowest operation I can do on my DB.
> Anyway, in my first email I asked another question about the -wal file.
> Trying to DELETE the (5 million) records with the shell SQLite interface
> I can see the -wal grow up to 600MB. I cannot understand why the -wal
> is not reintegrated any more, and even when I close the connection (closing
> the shell) the -wal file still exists.
> Trying for example the statement "create table new as select * from
> current where condition = keep;" the -wal file grows up to 1.5GB and
> then the same: after closing the shell the -wal remains.
> 
> Moreover the operation above ("create new") terminates with "Error:
> disk I/O error".
> The hard disk I use has a lot of free space and it's a SATA2 hard disk, so
> it is internal.

You may try with the new 3.7.3 version, the 3.7.2 doesn't operate
very well on win32 when doing huge transactions in wal mode.

2nd, when running sqlite with a gigabyte sized database it is useful
to heavily increase the cache size, not sure if that helps for delete
statements though, but it does in particular when creating indices.
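
Spelled out, the recreate-the-table route from the pseudocode quoted
below would be roughly (a sketch; table and column names are
placeholders):

  BEGIN;
  CREATE TABLE newtab AS SELECT * FROM curtab WHERE condition='keep';
  CREATE INDEX newtab_idx ON newtab(condition);
  DROP TABLE curtab;
  ALTER TABLE newtab RENAME TO curtab;
  COMMIT;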

Marcus

> 
> Il 07/10/2010 20.38, Petite Abeille ha scritto:
>> On Oct 7, 2010, at 6:05 PM, Michele Pradella wrote:
>>
>>> I have a DB of about 3GB: the DB has about 23 millions of records. [..]
>>> the statement is trying to delete about 5 millions records and it takes
>>> about 4-5minutes.
>>> Is there a way to try to speed up the DELETE?
>> Considering that you want to delete about a quarter of the records, perhaps 
>> it would be more efficient to recreate that table altogether, no?
>>
>> Pseudocode:
>>
>> create table new as select * from current where condition = keep;
>> create index on new;
>> drop table current;
>> alter table rename new to current;
>>
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
>>
> 
> 
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] memory wasted shm mapped file (3.7.2)

2010-09-02 Thread Marcus Grimm

Michele Pradella wrote:
>   ok, I'll wait for the workaround.
> I always use a BEGIN; COMMIT; transaction but often, after a COMMIT; the
> -wal file does not change in size; it seems it's not checkpointed.
> Anyway, do you think that with WAL journal mode I should continue to use
> BEGIN; COMMIT; statements, or not?

as Richard mentioned, the wal mode is not intended to work well
for bulk-insert kind of actions. You may try to split your insert
cycles into smaller pieces.
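
E.g. commit every bunch of rows, roughly like this (a sketch;
insert_row() and nRows are placeholders):

  int i;
  sqlite3_exec(db, "BEGIN", 0, 0, 0);
  for(i=0; i<nRows; i++){
    insert_row(db, i);                      /* placeholder for the real insert */
    if( (i % 10000)==9999 ){                /* commit every 10000 rows so the   */
      sqlite3_exec(db, "COMMIT", 0, 0, 0);  /* wal never grows too large        */
      sqlite3_exec(db, "BEGIN", 0, 0, 0);
    }
  }
  sqlite3_exec(db, "COMMIT", 0, 0, 0);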

However, that might not help if you do sql statements which involve
a huge implicit transaction, for example "CREATE INDEX .." on a huge table.
At least on windows it can fail with IO error on a GB sized db.

Btw, I think the wal file doesn't shrink because sqlite doesn't truncate
that file after completing the checkpoint. That's by design I guess.

Marcus



> 
> 
> Il 02/09/2010 14.43, Richard Hipp ha scritto:
>> On Thu, Sep 2, 2010 at 8:34 AM, Michele Pradella wrote:
>>>   Hi,
>>> I found a strange behavior of the sqlite 3.7.2 with WAL journal mode.
>>> Yesterday I found my application DB with a -wal file of 1,5GB and a -shm
>>> file of few MB (about 9MB) with a DB file of 1,2GB: in this
>>> situation I got the process memory wasted by "mapped file" of the -shm
>>> file. It seams that the file is mapped a lot of times in memory so the
>>> process memory become 2GB and it can't allocate more memory. In that
>>> situation operation made on the DB cause I/O disk errors probably due to
>>> the wasted memory.
>>>
>> By coincidence, the SQLite developers were just discussing this problem
>> earlier this morning.  There are technical issues with windows that make a
>> solution difficult.  We are trying to come up with a work-around.  (The
>> problem you describe is specific to the windows backend and does not come up
>> in unix.)
>>
>>
>>> I'm doing some other test to reproduce the problem, but I think that
>>> could be when I got a lot of operation between a BEGIN; COMMIT;
>>> So is it ok to use the BEGIN; COMMIT; with the WAL journal activated?
>>> is there some kind of limit in the number of operation between a BEGIN;
>>> COMMIT; statement?
>>>
>> SQLite will not checkpoint the journal until you commit your transaction.
>> So if you leave the transaction open too long, the WAL file and the -shm
>> file will grow excessively large.  WAL works best with many smaller
>> transactions.  If you have one or two big transactions, then using a
>> traditional rollback-journal mode works better.
>>
>>
>>
>>> I try to use the PRAGMA wal_checkpoint; to try resolve this situation,
>>> but seams that command was ignored by sqlite because the -wal file does
>>> not change in size, even the DB file.
>>> ___
>>> sqlite-users mailing list
>>> sqlite-users@sqlite.org
>>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>>
>>
> 
> 
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Implementation check request

2010-08-28 Thread Marcus Grimm
>
> On 28 Aug 2010, at 4:24pm, Max Vlasov wrote:
>
>> I don't know whether my posts have a delay with delivery (replied
>> several
>> hours ago to the discussion you mentioned), but actually I tested 100
>> rows
>> before and after with a similar query (ORDER BY LIMIT) and it definitely
>> shows that sqlite saves time and doesn't go further. Do you have reason
>> to
>> think that it should go to the end?
>
> Thanks Max.  There was a thread where someone suggested the opposite: that
> the LIMIT clause operated to trim results which had already been found.
> It worried me because I have been assuming that LIMIT meant that SQLite
> didn't have to search the entire table.

I think it depends of course on whether there is an index on the column
that is specified in the ORDER BY. Without an index there is no
way to limit the searched rows prior to applying the sorting.

So a simple SELECT * FROM T ORDER BY A LIMIT 10
will be fast with an index on A and slow without.

Even if ORDER BY A DESC is specified, sqlite can still
scan the index. At least that's what a quick experiment
using the sqlite3.exe shows.
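
For anyone who wants to check, something along these lines in the
shell shows it (a sketch):

  CREATE TABLE T(A INTEGER);
  CREATE INDEX idx_a ON T(A);
  EXPLAIN QUERY PLAN SELECT * FROM T ORDER BY A LIMIT 10;
  -- with idx_a in place the plan should use the index
  -- instead of a temporary b-tree for sorting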

Marcus


>
> Simon.
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite 3.7.0 coming soon....

2010-07-01 Thread Marcus Grimm
I would be pleased to try that version, however
I'm not sure whether you already support the SEE extension.
At least I can't find it yet on the SEE Software Configuration System.

Marcus Grimm

D. Richard Hipp wrote:
> We are in the final phases of development for SQLite version 3.7.0.   
> The key enhancement over version 3.6.23.1 is support for the use of  
> write-ahead logs for transaction control.  See 
> http://www.sqlite.org/draft/wal.html 
>   for additional information on how this might enhance your  use of  
> SQLite.
> 
> The transition from 3.6.23.1 to 3.7.0 is a large one, and we would  
> appreciate beta testing.  Recent snapshots of SQLite can be found at 
> http://www.sqlite.org/draft/download.html 
>   and those snapshots pass all of our extensive tests.  But testing  
> can only prove the existence of bugs, not their absence and so we are  
> eager for others to try out the latest code changes, and in particular  
> the new write-ahead log feature, and let us know your experiences,  
> before we make the next release.
> 
> Please provide feedback - positive, negative, or indifferent - to this  
> mailing list.
> 
> We are currently aiming to release 3.7.0 sometime during July.  Thank  
> you in advance for your help in testing this important new release.
> 
> D. Richard Hipp
> d...@hwaci.com
> 
> 
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 

-- 
Marcus Grimm
MedCom GmbH Darmstadt, Rundeturmstr. 12, 64283 Darmstadt
Tel: +49(0)6151-95147-10
Fax: +49(0)6151-95147-20
web: www.medcom-online.de
--
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Insert large data question ??

2010-05-11 Thread Marcus Grimm
> Thanks Kishor, I will note it!!
>
> I already used a transaction for doing this job.
> I tried removing all of the indexes; this time the job took about 31600 seconds.
>
> ps. I did use "PRAGMA synchronous=OFF" in front of my transaction.
>
> Can someone help me do this job faster??

have you tried increasing the page cache ?

eg. --> PRAGMA cache_size = 40;

occasionally it helps when dealing with large tables..

Marcus

>
> thank everybody
>
>
>
> 2010/5/11 P Kishor :
>> On Tue, May 11, 2010 at 12:47 AM, ­·ºå  wrote:
>>> Dear
>>>
>>> I have about 9 million data insert string need to insert into an table
>>> ,each
>>> row data is unique
>>>
>>> this is a sample:
>>> insert into mydata
>>> VALUES(38824801,56888,'AABBCC',4.999,157,'2009/9/10
>>> 19:55:50');
>>>
>>> this is my schema:
>>> table|mydata|mydata|2|CREATE TABLE mydata
>>> (
>>>itno VARCHAR(20),
>>>lcno VARCHAR(20),
>>>srno VARCHAR(10),
>>>ran  VARCHAR(20),
>>>pp INTEGER,
>>>cdate VARCHAR(20),
>>>PRIMARY KEY (itno DESC, lcno ASC)
>>> )
>> ..
>>> but i have question about performance,everytime doing the job takes
>>> about
>>> 63000 seconds
>>
>> Use transactions. But, do you also realize that most of your columns
>> are defined as VARCHAR, but you are inserting stuff that doesn't look
>> like TEXT. You will be/should be surprised by the results. From the
>> example above,
>>
>>>itno VARCHAR(20),
>>>lcno VARCHAR(20),
>>>srno VARCHAR(10),
>>>ran  VARCHAR(20),
>>>pp INTEGER,
>>>cdate VARCHAR(20),
>>
>> ltno VARCHAR(20): 38824801 <-- if it had leading zeroes, they would
>> vanish
>> lcno VARCHAR(10): 56888 <-- will become 56888
>> srno VARCHAR(10): 'AABBCC' <-- inserted correctly
>> ran  VARCHAR(20): 4.999 <-- are you expecting this to remain a REAL?
>> pp INTEGER: 157 <-- inserted correctly
>> cdate VARCHAR: '2009/9/10 19:55:50' <-- inserted correctly
>>
>> enclose your VARCHARs in single quotes.
>>
>> --
>> Puneet Kishor http://www.punkish.org
>> Carbon Model http://carbonmodel.org
>> Charter Member, Open Source Geospatial Foundation http://www.osgeo.org
>> Science Commons Fellow, http://sciencecommons.org/about/whoweare/kishor
>> Nelson Institute, UW-Madison http://www.nelson.wisc.edu
>> ---
>> Assertions are politics; backing up assertions with evidence is science
>> ===
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Is there any memory leak in the code while being busy?

2010-04-23 Thread Marcus Grimm
it is not necessary to send your question multiple times... ;)

to answer: what makes you think that sqlite3_finalize can't
free the prepared statement?

liubin liu wrote:
> Is there any memory leak in the code? 
> 
> Below is the code. Is there any memory leak in the pthread2? 
> 
> While pthread1 is using test.db exclusively, the sqlite3_prepare_v2() of
> pthread2 still prepares the p_stmt pointer to a piece of memory malloced by
> sqlite3_prepare_v2(). And then the sqlite3_finalize() still can't free the
> memory because pthread1 is using test.db exclusively. Does it cause a memory
> leak?
> 
> 
> 
> __code__
> 
> #include <stdio.h>
> #include <unistd.h>    // for usleep()
> #include <sys/time.h>  // for gettimeofday()
> #include <pthread.h>
> #include <sqlite3.h>
> 
> void pthread1 (void);
> void pthread2 (void);
> 
> 
> int main (void)
> {
>     int ret = -1;
> 
>     /* create the test database and table once, then close it */
>     sqlite3 *g_db = NULL;
>     ret = sqlite3_open ("test.db", &g_db);
>     ret = sqlite3_exec (g_db, "CREATE TABLE t1 (id INTEGER PRIMARY KEY, d1 TEXT)",
>                         NULL, NULL, NULL);
>     ret = sqlite3_close (g_db);
> 
>     usleep (50);
> 
>     /* run the bulk writer and the concurrent writer in parallel */
>     pthread_t pthr1, pthr2;
>     ret = pthread_create (&pthr1, NULL, (void *(*)(void *)) pthread1, NULL);
>     ret = pthread_create (&pthr2, NULL, (void *(*)(void *)) pthread2, NULL);
> 
>     ret = pthread_join (pthr1, NULL);
>     printf ("thread1 end\n");
>     ret = pthread_join (pthr2, NULL);
>     printf ("thread2 end\n");
> 
>     return 0;
> }
> 
> 
> /* inserts many rows inside one big explicit transaction */
> void pthread1 (void)
> {
>     int ret = -1;
>     sqlite3 *db1 = NULL;
>     ret = sqlite3_open ("test.db", &db1);
> 
>     char *sql_f = "INSERT OR REPLACE INTO t1 VALUES (%d, 'aaabbbcccddd1122')";
>     char *sql = NULL;
> 
>     sqlite3_stmt *p_stmt = NULL;
> 
>     struct timeval tv1, tv2;
>     ret = gettimeofday (&tv1, NULL);
> 
>     ret = sqlite3_prepare_v2 (db1, "BEGIN TRANSACTION", -1, &p_stmt, NULL);
>     ret = sqlite3_step (p_stmt);
>     ret = sqlite3_finalize (p_stmt);
> 
>     // n=100, test.db is 25843712 bytes, 25M;
>     int i=0, n=100;
>     for (i=0; i<n; i++)
>     {
>         /* note: the buffer returned by sqlite3_mprintf() is never
>         ** passed to sqlite3_free() - that is a real leak here */
>         sql = sqlite3_mprintf (sql_f, i);
>         ret = sqlite3_prepare_v2 (db1, sql, -1, &p_stmt, NULL);
>         ret = sqlite3_step (p_stmt);
>         ret = sqlite3_finalize (p_stmt);
>     }
> 
>     printf ("pthread1:   ret: %d\n", ret);
>     ret = sqlite3_prepare_v2 (db1, "COMMIT TRANSACTION", -1, &p_stmt, NULL);
>     ret = sqlite3_step (p_stmt);
>     ret = sqlite3_finalize (p_stmt);
> 
>     ret = gettimeofday (&tv2, NULL);
>     printf ("time is : %ds\n", (int) (tv2.tv_sec - tv1.tv_sec));
> 
>     ret = sqlite3_close (db1);
> }
> 
> 
> /* tries to insert rows while pthread1 holds its big transaction */
> void pthread2 (void)
> {
>     int ret = -1;
> 
>     sqlite3 *db2 = NULL;
>     ret = sqlite3_open ("test.db", &db2);
> 
>     usleep (200);
> 
>     sqlite3_stmt *p_stmt = NULL;
>     int i=0, n=10;
>     for (i=0; i<n; i++)
>     {
>         char *sql_f = "INSERT OR REPLACE INTO t1 VALUES (%d, '1122')";
>         char *sql = NULL;
> 
>         sql = sqlite3_mprintf (sql_f, i);
>         ret = sqlite3_prepare_v2 (db2, sql, -1, &p_stmt, NULL);
>         printf ("pthread2:   prepare:  %d,   p_stmt: %p, errmsg: %s\n", ret,
>                 p_stmt, sqlite3_errmsg(db2));
>         ret = sqlite3_step (p_stmt);
>         printf ("pthread2:   step: %d,   p_stmt: %p, errmsg: %s\n", ret,
>                 p_stmt, sqlite3_errmsg(db2));
>         ret = sqlite3_finalize (p_stmt);
>         printf ("pthread2:   finalize: %d,   p_stmt: %p, errmsg: %s\n", ret,
>                 p_stmt, sqlite3_errmsg(db2));
>         printf ("\n");
> 
>         usleep (30);
>     }
> 
>     ret = sqlite3_close (db2);
>     printf ("pthread2:   close: %d,  p_stmt: %p, errmsg: %s\n", ret,
>             p_stmt, sqlite3_errmsg(db2));
> }
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Fastest concurrent read-only performance (+ threads vsprocesses) [BUG?]

2010-03-12 Thread Marcus Grimm
Hi again,

I have played again with some variations of the speed test:

I'm not able to reproduce the reported behavior that a single process
queries faster than a single thread. During my tests I don't get a
big difference, even when I turn off some of the locking
via SQLITE_OPEN_NOMUTEX.

I have also played once more with different cache sizes:
Using a very low cache size of 2000 resulted in a query
speed of 15 sec; changing to 15 reduced it to 10 sec.
So this is expected.

Shared cache mode still doesn't really affect the query speed,
this is still puzzling to me but as I already mentioned:
For me that is only a theoretical issue since in practice
sqlite is doing very fast anyhow.

Increasing the number of threads does affect the overall
read performance slightly, example:
1 Thread: 11.2 sec
16 Threads: 8.2 sec
Single Process: 11 sec.

I would still expect better scaling, but timing measurements on
the other hand are very difficult and I might be doing them wrong.

In case the test program is of interest to anybody, here
is the latest version:

http://www.exomio.de/SqliteSpeedTest2.c
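
The core of each reader thread is roughly this (a sketch; error
checks omitted):

  sqlite3 *db = NULL;
  sqlite3_enable_shared_cache(1);   /* before any connection is opened */
  sqlite3_open_v2("test.db", &db,
                  SQLITE_OPEN_READWRITE | SQLITE_OPEN_NOMUTEX, NULL);
  sqlite3_exec(db, "PRAGMA read_uncommitted = True;", 0, 0, 0);
  /* ... prepare/step/finalize the SELECT ... WHERE ID=? queries ... */
  sqlite3_close(db);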

Marcus

Marcus Grimm wrote:
> Hi Luke,
> 
> 
> Luke Evans wrote:
>> Hi Marcus,
>>
>> Well, I'd certainly be interested in looking at your code.  Can you mail a 
>> zip, or post to a web or file hosting site?  Thanks.
> 
> ok, please try this:
> http://www.exomio.de/SqliteSpeedTest.c
> 
> I haven't tried yet to compare the numbers when using one main
> process rather than one single thread. Would be in fact really
> strange if that makes any difference. I'll try to do this using my
> test code soon.
> 
> In any case: yes, if we all understand shared-cache correctly we could
> assume a nicer effect when using more than one reader thread, but
> since DRH hates threads anyhow it is unlikely that there will be
> any change... :-)
> 
> 
> Marcus
> 
> 
>> Your results seem to broadly agree with mine: multithreaded querying can 
>> save a percentage of time (10-30%?) over the same queries issued serially 
>> with no wait interval.
>> My queries are a little more complicated, so that may explain why I come in 
>> nearer a 13% saving, whereas your best case is more like 28%.  It's 
>> interesting how little the shared cache size seems to affect actual 
>> throughput, and even really whether it is on at all makes a relatively small 
>> difference here (ignoring the perceived benefits on memory pressure).
>>
>> I guess, for me the question boils down to why running these queries in 
>> their own process (which admittedly will use more memory resources) is so 
>> much more beneficial to overall throughput - when there (probably) ought to 
>> be a way to get the same partitioning between threads/connections in the 
>> multithreaded model.  In other words, I'm expecting/hoping for a way to get 
>> each thread to behave as if the query was in a separate process - getting 
>> its own private resources so that there's absolutely no need for 
>> synchronisation with activity on other threads - particularly for this 
>> read-only case.
>> For this small number of threads/queries (relative to the number of 
>> processing cores I have), the difference is between 1.6s and 8.34s, which is 
>> really quite significant. 
>>
>> -- Luke
>>
>>
>>
>> On 2010-03-11, at 5:58 AM, Marcus Grimm wrote:
>>
>>> I have followed the discussion about this issue with interest since
>>> my usage of sqlite involves threads and sharedcache mode as well.
>>> I have also generated a little speed test program that uses
>>> multible threads and shared cache to read *some* rows out of the
>>> sqlite DB. It might not help here but here are my results:
>>>
>>> Test cenario:
>>> + DB is has two tables and one index, DB size is 1.5Gb, main
>>>   table contains 150 rows, a child table has 20 * 150 rows.
>>> + Windows 7, intel i7 processor
>>> + Query is simple like "SELECT * FROM TABLE WHERE ID = ?",
>>>   ID is the primary key and thus has an index on it.
>>>   The result is used to query in a 2nd table.
>>> + page size is not changed, thus 1024 bytes on windows
>>> + Each threads opens its own DB connection, thus time
>>>   for sqlite3_open_v2 is included in the running time.
>>> + PRAGMA read_uncommitted = True;
>>> + Uses 3.6.22 - SEE Version
>>>
>>>
>>> Test-1: shared cache on, cache size = 10
>>> a) 8 queries, distributed over 8 threads: 4.6 sec
>>> b) 8 queries, distributed over 4 threads: 5.8 sec
>>> c) 8 queries, single thread: 6.3 sec
>>>
>>> T

Re: [sqlite] Fastest concurrent read-only performance (+ threads vsprocesses) [BUG?]

2010-03-11 Thread Marcus Grimm
Hi Luke,


Luke Evans wrote:
> Hi Marcus,
> 
> Well, I'd certainly be interested in looking at your code.  Can you mail a 
> zip, or post to a web or file hosting site?  Thanks.

ok, please try this:
http://www.exomio.de/SqliteSpeedTest.c

I haven't tried yet to compare the numbers when using one main
process rather than one single thread. Would be in fact really
strange if that makes any difference. I'll try to do this using my
test code soon.

In any case: yes, if we all understand shared-cache correctly we could
assume a nicer effect when using more than one reader thread, but
since DRH hates threads anyhow it is unlikely that there will be
any change... :-)


Marcus


> 
> Your results seem to broadly agree with mine: multithreaded querying can save 
> a percentage of time (10-30%?) over the same queries issued serially with no 
> wait interval.
> My queries are a little more complicated, so that may explain why I come in 
> nearer a 13% saving, whereas your best case is more like 28%.  It's 
> interesting how little the shared cache size seems to affect actual 
> throughput, and even really whether it is on at all makes a relatively small 
> difference here (ignoring the perceived benefits on memory pressure).
> 
> I guess, for me the question boils down to why running these queries in their 
> own process (which admittedly will use more memory resources) is so much more 
> beneficial to overall throughput - when there (probably) ought to be a way to 
> get the same partitioning between threads/connections in the multithreaded 
> model.  In other words, I'm expecting/hoping for a way to get each thread to 
> behave as if the query was in a separate process - getting its own private 
> resources so that there's absolutely no need for synchronisation with 
> activity on other threads - particularly for this read-only case.
> For this small number of threads/queries (relative to the number of 
> processing cores I have), the difference is between 1.6s and 8.34s, which is 
> really quite significant. 
> 
> -- Luke
> 
> 
> 
> On 2010-03-11, at 5:58 AM, Marcus Grimm wrote:
> 
>> I have followed the discussion about this issue with interest since
>> my usage of sqlite involves threads and shared-cache mode as well.
>> I have also generated a little speed test program that uses
>> multiple threads and shared cache to read *some* rows out of the
>> sqlite DB. It might not help here but here are my results:
>>
>> Test scenario:
>> + DB has two tables and one index, DB size is 1.5Gb, main
>>   table contains 150 rows, a child table has 20 * 150 rows.
>> + Windows 7, intel i7 processor
>> + Query is simple like "SELECT * FROM TABLE WHERE ID = ?",
>>   ID is the primary key and thus has an index on it.
>>   The result is used to query in a 2nd table.
>> + page size is not changed, thus 1024 bytes on windows
>> + Each threads opens its own DB connection, thus time
>>   for sqlite3_open_v2 is included in the running time.
>> + PRAGMA read_uncommitted = True;
>> + Uses 3.6.22 - SEE Version
>>
>>
>> Test-1: shared cache on, cache size = 10
>> a) 8 queries, distributed over 8 threads: 4.6 sec
>> b) 8 queries, distributed over 4 threads: 5.8 sec
>> c) 8 queries, single thread: 6.3 sec
>>
>> Test-2: shared cache off, cache size = 10
>> a) 8 queries, distributed over 8 threads: 5.6 sec
>> b) 8 queries, distributed over 4 threads: 6.0 sec
>> c) 8 queries, single thread: 6.3 sec
>>
>> Personally, I wouldn't call it a bug (bugs in sqlite are extremely
>> rare), but it looks a bit disappointing how sqlite scales when
>> multiple threads/processors are involved. I was expecting a much
>> higher effect on read speed when shared-cache on/off is compared.
>>
>> I tried to play with different cache sizes - the differences are
>> marginal, even down to cache size = 5000, I have similar numbers
>> in Test-1.
>>
>> On the bottom line: I have the impression that the major
>> benefit of shared-cache is the reduced memory requirement
>> when opening a number of connections to the same DB, and
>> the different (optional) locking style in that mode.
>> It does not dramatically affect the speed of fetching data.
>>
>> For me not an issue at all, sqlite is doing extremely well anyhow,
>> but it is worthwhile to understand why.
>>
>> I'm not able to attach my c-code of the test program, but if
>> one is interested to take a look, let me know. Maybe I made
>> a mistake or the test procedure is not reasonable, who knows...
>>
>> Marcus
>>
>>
>> Luke Evan

Re: [sqlite] Fastest concurrent read-only performance (+ threads vsprocesses) [BUG?]

2010-03-11 Thread Marcus Grimm
I have followed the discussion about this issue with interest since
my usage of sqlite involves threads and shared-cache mode as well.
I have also generated a little speed test program that uses
multiple threads and shared cache to read *some* rows out of the
sqlite DB. It might not help here but here are my results:

Test scenario:
+ DB has two tables and one index, DB size is 1.5Gb, main
   table contains 150 rows, a child table has 20 * 150 rows.
+ Windows 7, intel i7 processor
+ Query is simple like "SELECT * FROM TABLE WHERE ID = ?",
   ID is the primary key and thus has an index on it.
   The result is used to query in a 2nd table.
+ page size is not changed, thus 1024 bytes on windows
+ Each threads opens its own DB connection, thus time
   for sqlite3_open_v2 is included in the running time.
+ PRAGMA read_uncommitted = True;
+ Uses 3.6.22 - SEE Version


Test-1: shared cache on, cache size = 10
a) 8 queries, distributed over 8 threads: 4.6 sec
b) 8 queries, distributed over 4 threads: 5.8 sec
c) 8 queries, single thread: 6.3 sec

Test-2: shared cache off, cache size = 10
a) 8 queries, distributed over 8 threads: 5.6 sec
b) 8 queries, distributed over 4 threads: 6.0 sec
c) 8 queries, single thread: 6.3 sec

Personally, I wouldn't call it a bug (bugs in sqlite are extremely
rare), but it looks a bit disappointing how sqlite scales when
multiple threads/processors are involved. I was expecting a much
higher effect on read speed when shared-cache on/off is compared.

I tried to play with different cache sizes - the differences are
marginal, even down to cache size = 5000, I have similar numbers
in Test-1.

On the bottom line: I have the impression that the major
benefit of shared-cache is the reduced memory requirement
when opening a number of connections to the same DB, and
the different (optional) locking style in that mode.
It does not dramatically affect the speed of fetching data.

For me not an issue at all, sqlite is doing extremely well anyhow,
but it is worthwhile to understand why.

I'm not able to attach my C code of the test program, but if
anyone is interested in taking a look, let me know. Maybe I made
a mistake or the test procedure is not reasonable, who knows...

Marcus


Luke Evans wrote:
> Well, I guess this is looking more an more like a bug then.
> 
> I just went to the sqlite.org site to find out how I log a bug and it 
> referred me back to here.  Perhaps this means I just need to *DECLARE BUG* 
> here ;-)
> Actually, I'll probably have some time soon to try out a profiler to see if I 
> can figure out which mutexes are involved in causing the synchonising 
> behaviour I seem to be experiencing.
> 
> 
> 
>> I'm out of ideas about this. I understand that it shouldn't work this
>> way and I don't understand why it does.
>> Is it possible for you to run application in some sort of profiler?
>> Obviously it should show that a lot of time is spent in waiting on
>> some mutex and it will be able to show where this mutex is held
>> from...
>>
>>
>> Pavel
>>
>> On Tue, Mar 9, 2010 at 1:02 PM, Luke Evans  wrote:
>>> Hi guys,
>>>
>>> Had to take a break for a couple of days from my SQLite experiments, but 
>>> back on it now.
>>>
>>> Pavel, regarding the question about VFS, I'm not using one to my knowledge 
>>> and have set the "name of VFS module" to NULL in sqlite3_open_v2.  Maybe 
>>> NULL means I'm using the standard VFS, but in any case, not a 
>>> "non-standard" one.
>>> I'm selecting from a real table.
>>>
>>> Here are some more timings...
>>>
>>> Shared cache ON.  6 of the same query issued (as before) within the 
>>> multithreaded scenario.  All the queries bunch up and report complete more 
>>> or less together.
>>>
>>> 2010-03-09 09:32:33.013 SQLiteTest[16208:a0f] SQLite says multithreading is 
>>> ON
>>> 2010-03-09 09:32:33.015 SQLiteTest[16208:a0f] SQLite's multithreading value 
>>> is 2
>>> 2010-03-09 09:32:33.015 SQLiteTest[16208:a0f] SHARED CACHE IS ON
>>> 2010-03-09 09:32:33.017 SQLiteTest[16208:1b03] Starting query on thread 
>>> {name = (null), num = 3}
>>> 2010-03-09 09:32:33.018 SQLiteTest[16208:5603] Starting query on thread 
>>> {name = (null), num = 4}
>>> 2010-03-09 09:32:33.017 SQLiteTest[16208:4403] Starting query on thread 
>>> {name = (null), num = 2}
>>> 2010-03-09 09:32:33.018 SQLiteTest[16208:5503] Starting query on thread 
>>> {name = (null), num = 5}
>>> 2010-03-09 09:32:33.019 SQLiteTest[16208:5703] Starting query on thread 
>>> {name = (null), num = 6}
>>> 2010-03-09 09:32:33.019 SQLiteTest[16208:5c03] Starting query on thread 
>>> {name = (null), num = 7}
>>> 2010-03-09 09:32:41.359 SQLiteTest[16208:5603] Finished query on thread 
>>> {name = (null), num = 4} in 8.34s
>>> 2010-03-09 09:32:41.359 SQLiteTest[16208:1b03] Finished query on thread 
>>> {name = (null), num = 3} in 8.34s
>>> 2010-03-09 09:32:41.359 SQLiteTest[16208:4403] Finished query on thread 
>>> {name = (null), num = 2} in 8.34s
>>> 

Re: [sqlite] Prohibitive indexing time with large tables

2010-02-14 Thread Marcus Grimm
> Marcus,
>
> although increasing cache size is a good method, it may sometimes give
> unpredictable results (in terms of performance).

hm... why ?

> I looked at the vdbe code (EXPLAIN CREATE INDEX ... ) of the index
> creation
> and it seems like there is no special sorting algorithm (CMIIW please).

I guess you are right, doesn't make sense to assume a sorting. Sorry.

> Excluding all "make once" lines from vdbe code left this loop lines:
>
> "19""Rowid"
> "20""Column"
> "21""MakeRecord"
> "22""IdxInsert"
> "23""Next"  .. 19
>
> that leads to conlusion that either single "indexed" insert or a step from
> index creation is actually just an insert of data inside B-tree without
> special preprocessing.

But to insert that index data sqlite needs to at least partly
compare existing values with the one that it is about to insert.
Here, I guess, is where the cache comes into account: when sqlite is not able
to hold the running index in memory and/or the entries to be inserted
are unsorted, it has to free and reload pages randomly.
Anyway, I had better stop guessing and leave this to people
who are more into the sqlite internal code... :-)

>
> So we just consider mass insert into B-tree. As I suppose the cost of such
> insert is sometimes low, sometimes high, depends on different factors
> including occasional need for rebalancing. As long as all operations are
> in
> memory and all pages are "dirty" (not saved to disk) the results looks
> very
> promising. I tried to change cache size to 500,000 (x1024) for the same

This is appx. 500MB of cache; why not try with a 2,000,000 cache size? :-)

> base
> (10M records, 100 bytes record size) and adding from scrach was very fast
> until about 3M records (taskman showing about 800Mb of memory used at the
> moment) and dropped significally after that. Will this bad point be 3M or
> not 3M depends on the size of data used in index. So if any search inside
> the tree is a parsing through the cached pages (either by sqlite or
> underlying file system) and any page write operation involved in new tree
> operations are dirty pages that will be write later, there is no problem.
> Otherwise there is. The other problem is when total datasize exceeds 32bit
> address space, the cache stops giving significant improvement.

Yes, I was assuming that the index would more or less
fit into RAM. In my experiments that appeared to be the case, but I had
an index on the hash value, not on the text column itself, thus reducing
the memory required.

>
> I know that quicksorting I suggested have disadvantages, but in some way
> its
> results is more predictable. For example, we postpone any sqlite writing
> operation until the data is ready. Unfortunately, for index data size
> bigger
> than 32bit address space the file-mapping doesn't work, but it can be
> changed to simple Seek acces and at least the file system can do a decent
> job keeping frequentely access data (currently) in cache.
>
> Marcus, I didn't understand your comment about frequent reloading. As I
> read
> from the initial post, he has large data chunk of unsorted data, 20M
> records
> that needed to be accessed on a daily basis. So in his case any way that
> lead to data placed inside sqlite db indexed properly is ok. The problem
> is
> that on a daily basis couple of hours is a big price to pay. Jerome can
> correct me, but he still didn't add anything new to this discussion, hope
> he
> will.

His original question was about the todo list found at the wiki.
Not sure if any of the core developers will answer, but I would
be interested as well...

Anyway, thanks for discussing.

best

Marcus

>
> Nice approach with the hash and collision resolving inside query, will
> keep
> it in mind for future use )
>
> Max
>
> On Sun, Feb 14, 2010 at 4:03 PM, Marcus Grimm
> <mgr...@medcom-online.de>wrote:
>
>> Just for my curiosity:
>>
>> Have you tried to increase the cache as already suggested ?
>>
>> I ran into a similar problem while playing with a artificial
>> test database with appx. 10 Mio records and creating an index.
>> Without drastically increasing the cache size sqlite appears
>> not to be able to create an index on a certain field - it
>> never stops within, say,  2 hours.
>> Sounds quite dissapointing keeping in mind how fast sqlite usually
>> operates, but it becomes clear when we consider that sqlite
>> is told to use only a few MB of memory per default. ALso your
>> quicksort mentioned below will be very slow if he needs
>> reload data from disk all the time.
>> So in my case it helps 

Re: [sqlite] Prohibitive indexing time with large tables

2010-02-14 Thread Marcus Grimm
Just for my curiosity:

Have you tried to increase the cache as already suggested ?

I ran into a similar problem while playing with an artificial
test database with appx. 10 Mio records and creating an index.
Without drastically increasing the cache size sqlite appears
not to be able to create an index on a certain field - it
never finishes within, say, 2 hours.
Sounds quite disappointing keeping in mind how fast sqlite usually
operates, but it becomes clear when we consider that sqlite
is told to use only a few MB of memory by default. Also your
quicksort mentioned below will be very slow if it needs to
reload data from disk all the time.
So in my case it helps to tell sqlite to use appx 500MB of memory
via pragma cache_size.
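
I.e. something like this before the CREATE INDEX (a sketch; 500000
pages at the default 1024-byte page size is about 500MB):

  PRAGMA cache_size=500000;
  CREATE INDEX idx_text ON TestTable(Text);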

Please note that if you create an index on a text field sqlite
will basically make a copy of the whole table in your case.

Depending on how you are using that Text field in a filter statement
you may consider adding an integer hash (e.g. CRC32) column to your
table, creating an index on that, and slightly changing your queries,
like:
SELECT * FROM TestTable WHERE TextHash=12312 AND Text='Text to search';
Unfortunately that works only for that simple form of "=" statements.
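
The setup would be along these lines (a sketch; crc32() stands for an
application-defined hash function, it is not built into sqlite):

  ALTER TABLE TestTable ADD COLUMN TextHash INTEGER;
  UPDATE TestTable SET TextHash = crc32(Text);
  CREATE INDEX idx_texthash ON TestTable(TextHash);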

Marcus

> Jerome,
>
> It's an an interesting challenge, thanks for the post
> I tried to research more and did some tests.
> My test database contains a table with 10,000,000 records of the text 100
> chars in length
>
> CREATE TABLE [TestTable] (
> [Id] INTEGER PRIMARY KEY AUTOINCREMENT,
> [Text] TEXT
> )
>
> I suppose your data is different, but at least this one has more than 1M
> records and not so small record size. My final base was about 1G in size.
>
> The default index creation
>
> CREATE INDEX [idx_TestTable] ON [TestTable] ([Text] )
>
> took very long time (about two hours or so).
> Having the index before data insert didn't change anything, first records
> had a good speed of append (about 10,000 records/sec significantly slowing
> when the number of records exceeded 1-2M).
>
> So there was no way to ignore some external files approach and I did it
> filing memory-mapped file with the contents of Text field, while filling
> also array in memory saving offsets and length of the strings. After that
> quicksort of that offset array took about 5 minutes and inserting the
> textes
> in sorted order to sqlite base other 5 minutes, so about 10 minutes it
> total. First 5 minutes was possible since we exchange only offsets, not
> data
> and other 5 minutes since inserting sorted data into B -tree is really a
> fast operation.
>
> Although real life data can be different, the things that worked might be
> the same. So anyone can use this method occupying not more than the sqlite
> file itself for temporary storage and ending up with the data in necessary
> order inside sqlite database after that. I know that there are many things
> to take into account like the memory size and the size of the actual data
> but it's just a proof of concept.
>
> Also I think sqlite could use the same approach internally for creating
> index for existing data. The db is probably already exclusively locked
> while
> CREATE INDEX is in process so having temporary array accessing and storing
> for example file offsets of particular records should not be a problem.
>
> Max
>
> On Sat, Feb 13, 2010 at 5:00 PM, Jérôme Magnin
> wrote:
>
>> Hi,
>>
>> This post is a question directed to D. Richard Hipp :
>>
>> I have been using SQLite for 3 years in a records linkage software
>> package I have developed. My organization recently had to use the
>> package to perform daily linkage of large administrative governmental
>> registers (up to 20 million records each). During the linkage process,
>> auxiliary tables containing records "fingerprints" must be created, and
>> two columns be indexed in them.
>>
>> SQLite provides decent indexing times for such tables with up to 1M
>> rows, but beyond this size the (already well-discussed) critical slowing
>> down of indexing performance due to disk nonlocality kicks in. The only
>> workaround I could imagine to ease the problem would be to duplicate the
>> auxiliary table and load pre-sorted rows in it, with sort key being the
>> column I intend to index on. This is unfortunately too costly in terms
>> of disk space used.
>>
>> I therefore had to develop an alternate datasource type (based on flat
>> files) in order for my system to be able to efficiently handle big
>> files. Which is a pity since SQLite provides great features I still
>> would like to be able to rely upon when dealing with large files.
>>
>> Now my question: in the "To do" section of SQLite's wiki, you mention
>> "Develop a new sort implementation that does much less disk seeking. Use
>> to improve indexing performance on large tables.". I have been seeing
>> this entry for 3 years but nothing concrete seems to have happened on
>> this front in the meantime. Do you have any idea about if (and when) you
>> will work on 

Re: [sqlite] Nesting Read/Write within Transaction?

2010-02-12 Thread Marcus Grimm
>
>>The only thing that can bite you is if
>>you are in process of fetching rows from some select statement and you
>>update row that was just fetched or update/insert rows that would have
>>been fetched later by the select statement.
>
> As I understand it, simply wrapping every batch operation (Read, Write
> or Read_Modify_Write) inside an IMMEDIATE transaction is all that's
> needed to prevent such bad things from happening, or am I misguided?

It doesn't matter if you are in a transaction or not, changing
rows while inside a fetch loop on the same table may lead
to problems.
DRH gave a nice explanation about this issue:

http://www.mail-archive.com/sqlite-users@sqlite.org/msg44025.html

Marcus

>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] german documentation

2009-11-14 Thread Marcus Grimm
> Hi,
> that's bad if i sound like a google translator. At the beginning i tried
> to translate it as literally as i could; now i translate it how it sounds
> good to me. Can you tell me which pages are badly translated?

For example the chapter:
Vorgeschlagene Benutzungsarten für SQLite

this one is very strange/funny to read.

Best

Marcus

>
> Yeah, it is a double exercise for me. In SQLite and in English. It helps
> me to improve both.
>
> with best wishes
>
> Artur Reilin
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] german documentation

2009-11-14 Thread Marcus Grimm
Hi,

I mean a google-like translator.
As I said, partly, not all of it.
I think you should not try to translate
word by word, preserving even the original English
sentence structure. The result sounds machine-like
to a native German.
But it is interesting as an exercise, if you define
it this way.

kind regards

Marcus

>
>>> Hello,
>>>
>>> sorry for the double mail, but i cannot post to my old post.
>>>
>>> The german documentation is now on:
>>>
>>> http://sqlite.yuedream.de
>>>
>>> What do you think? Is the design good?
>>
>> this will be a lot of work and I'm wondering why
>> you do this ?
>> Despite being a German with rather poor English knowledge,
>> I guess a programmer should still be able to understand
>> the english sqlite documentation, right ? :-)
>>
>> 2nd: Your german style sounds strange, partly like a
>> software translator...
>>
>> sorry, but you asked for feedback.
>>
>> best wishes
>>
>> Marcus
>>
>>
>>>
>>> with best wishes
>>>
>>> Artur Reilin
>
> Hi,
>
> why do i do this? There are more reasons why i do this. Because i translate
> this, i read all pages of the documentation and that's like learning
> sqlite while translating.
>
> I sound like a software translator? You mean like google translator or
> like someone who normally translates programs into other languages?
>
> with best wishes
>
> Artur Reilin
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] german documentation

2009-11-14 Thread Marcus Grimm
> Hello,
>
> sorry for the double mail, but i cannot post to my old post.
>
> The german documentation is now on:
>
> http://sqlite.yuedream.de
>
> What do you think? Is the design good?

this will be a lot of work and I'm wondering why
you do this ?
Despite being a German with rather poor English knowledge,
I guess a programmer should still be able to understand
the English sqlite documentation, right ? :-)

2nd: Your German style sounds strange, partly like a
software translator...

sorry, but you asked for feedback.

best wishes

Marcus


>
> with best wishes
>
> Artur Reilin
>
> -
>
> Artur Reilin
> sqlite.yuedream.de
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] execute or prepare+step+finalize

2009-11-11 Thread Marcus Grimm
you may also take a look at:

http://www.sqlite.org/cvstrac/wiki?p=SimpleCode

it explains how to use the step mechanism.
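
A minimal sketch of that prepare/step/finalize cycle (query and table name
are placeholders):

#include <stdio.h>
#include <sqlite3.h>

int PrintNames(sqlite3 *db)
{
    sqlite3_stmt *stmt;
    int rc = sqlite3_prepare_v2(db, "SELECT name FROM t;", -1, &stmt, 0);
    if( rc!=SQLITE_OK ) return rc;

    while( (rc = sqlite3_step(stmt))==SQLITE_ROW ){
        printf("%s\n", (const char*)sqlite3_column_text(stmt, 0));
    }
    sqlite3_finalize(stmt);    /* always finalize, even after an error */
    return rc==SQLITE_DONE ? SQLITE_OK : rc;
}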

hth
Marcus

>
>
> why not:
> int smth = sqlite3_step (statement2);
> while( smth == SQLITE_ROW )
> {
> printf( "\n command= %s result code = %d \n", command, smth );
> smth = sqlite3_step (statement2);
> }
> so that it will work no matter how many rows
>
>
>
> because I don't have much experience :)
> that works great, thank you
>
>
>
>> and finally I get SQLITE_DONE but it still doesn't show me the table I have
>> selected...
>
> See http://www.sqlite.org/c3ref/column_blob.html
>
>
> I don't understand what exactly you mean..
>
> combination of these:
>
> sqlite3_value *sqlite3_column_value(sqlite3_stmt*, int iCol);
> typedef struct Mem sqlite3_value;
> void sqlite3_result_value(sqlite3_context*, sqlite3_value*);
>
> maybe?
>
>
>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
>
>
> --
> View this message in context:
> http://old.nabble.com/execute-or-prepare%2Bstep%2Bfinalize-tp26299247p26300548.html
> Sent from the SQLite mailing list archive at Nabble.com.
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Deadlock with two local instances

2009-10-27 Thread Marcus Grimm
> On Tue, 27 Oct 2009 21:50:12 +0100, "Marcus Grimm"
> <mgr...@medcom-online.de> wrote:
>
>>PS: Does anybody know how I can edit this
>>example code ? I recently attempted to add a clear
>>PD statement and also add some comments but when
>>I try to edit I allways end up in the wiki index page...
>
> Last time I edited the wiki, it went like this:
>
> You're looking at a page you want to edit.
> Press edit button, the mathematical skill testing question
> page pops up, follow directions to login.
> Indeed, now the wiki returns to its start page.
> Navigate to the page you want to edit again, now you can
> press the edit button again and start editing.

uhh... yes! that did the trick! :-)
Thanks!

Marcus

> --
>   (  Kees Nuyt
>   )
> c[_]
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Deadlock with two local instances

2009-10-27 Thread Marcus Grimm
> Thanks!  I ran the example code and it seems like every UPDATE fails
> with errors like the following:
>
> SqlStep Timeout on handle: 8 (rc = 6)
> SqlStep tries on handle 8: 200
> BeginTrans Timeout/Error on handle:  8, Errorcode = 6
> Write Thread: DB is busy! tries = 142 handle = 8
>
> Looking at the database contents it looks like none of the updates
> were successful at all (though I didn't look extremely carefully).
>
> Are these errors normal?

A moderate number of timeout messages is "normal"; they were
also added to better understand how much load the sqlite db
can handle in a multi-threaded application.

However, this example was running with almost no final error messages
like "DB is busy! tries = 142", at least when I did it some months
ago. The 142 tries mean that the DB was busy for more than 1420 sec
(each retry waits about 10 sec). I can't remember achieving this,
at least not with only two writer threads.

You may reduce the number of writer threads by one using:

#define NUM_WRITE_THREADS  1

If that still gives you an endless number of the above
timeout messages, then something is definitely wrong with your
system.

Please be aware also of:

a) This exe is assumed to be the only instance running
   on a system.

b) I'm still confident that the loop-redo style of
   busy handling is one way to do it, but the
   sqlite developers included a more clever approach
   using a notifier to handle this:
   http://www.sqlite.org/unlock_notify.html
   Haven't tried it, though.

c) Maybe a virus scanner is interfering here ?

d) Maybe your system is slow ? :-)
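
For reference, a minimal sketch of the loop-redo style mentioned in b)
(retry count and sleep interval are arbitrary):

#include <sqlite3.h>

int SqlStepRetry(sqlite3_stmt *pStmt, int maxTries)
{
    int rc, tries = 0;
    do {
        rc = sqlite3_step(pStmt);
        if( rc!=SQLITE_BUSY ) break;
        sqlite3_sleep(10);     /* back off briefly, then redo the step */
    } while( ++tries<maxTries );
    return rc;
}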

Marcus

PS: Does anybody know how I can edit this
example code ? I recently attempted to add a clear
PD statement and also add some comments but when
I try to edit I always end up in the wiki index page...

>
> On Tue, Oct 27, 2009 at 12:55 AM, Marcus Grimm <mgr...@medcom-online.de>
> wrote:
>>> Another odd thing is that when I call sqlite3_reset on the prepared
>>> statement, it also returns SQLITE_BUSY.  Should I only reset the
>>> statement when it has been executed successfully?
>>
>> one possible approach when getting SQLITE_BUSY is to
>> retry the sqlite3_step call until it finally gets thru.
>>
>> note that sqlite3_reset just returns the same error
>> code as the previous sqlite3_step call.
>>
>> attachments don't work on the list, you will need
>> find another way to provide your example code.
>>
>> you may also take a look at
>> http://www.sqlite.org/cvstrac/wiki?p=SampleCode
>> for the busy handling.
>>
>> hth
>>
>> Marcus Grimm
>>
>>>
>>> On Mon, Oct 26, 2009 at 2:40 PM, Chris T <citrus...@gmail.com> wrote:
>>>> I'm new to sqlite (and sql in general, actually) and came across
>>>> something puzzling.
>>>>
>>>> I wrote a test program statically linked with the amalgamated sqlite
>>>> code.  When I run a single instance, everything is fine.  When I start
>>>> a second instance in the same directory they both deadlock.  Every
>>>> call to sqlite3_step returns SQLITE_BUSY.
>>>>
>>>> The source code to my test program is attached.  It was written in
>>>> Visual Studio, so feel free to remove the reference to windows.h and
>>>> change the calls to Sleep( ) if you don't use Windows.
>>>>
>>> ___
>>> sqlite-users mailing list
>>> sqlite-users@sqlite.org
>>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>>
>>
>>
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Deadlock with two local instances

2009-10-27 Thread Marcus Grimm
> Another odd thing is that when I call sqlite3_reset on the prepared
> statement, it also returns SQLITE_BUSY.  Should I only reset the
> statement when it has been executed successfully?

one possible approach when getting SQLITE_BUSY is to
retry the sqlite3_step call until it finally gets through.

note that sqlite3_reset just returns the same error
code as the previous sqlite3_step call.

attachments don't work on the list; you will need to
find another way to provide your example code.

you may also take a look at http://www.sqlite.org/cvstrac/wiki?p=SampleCode
for the busy handling.

hth

Marcus Grimm

>
> On Mon, Oct 26, 2009 at 2:40 PM, Chris T <citrus...@gmail.com> wrote:
>> I'm new to sqlite (and sql in general, actually) and came across
>> something puzzling.
>>
>> I wrote a test program statically linked with the amalgamated sqlite
>> code.  When I run a single instance, everything is fine.  When I start
>> a second instance in the same directory they both deadlock.  Every
>> call to sqlite3_step returns SQLITE_BUSY.
>>
>> The source code to my test program is attached.  It was written in
>> Visual Studio, so feel free to remove the reference to windows.h and
>> change the calls to Sleep( ) if you don't use Windows.
>>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite Suitability for Shopping Cart

2009-09-25 Thread Marcus Grimm
I would also like to add a comment here:

Sqlite works perfectly as a backend for multi-user
applications/servers IF they are implemented accordingly.
That is, avoid multiple processes/instances that access
the same db file; use one main process and threads instead.
Take care with busy handling etc.
Against this background my application (a client server
database) works very well using sqlite. I have installations
with 20 users connected to my server, no problem. -- stop.

I hope that sentences like "sqlite is not intended
for client/server applications" will be removed from
the sqlite pages, as that is too "shy" and too general.
One example: I mentioned using sqlite to one of our partners
and of course they looked at the sqlite web pages and
found the above statement -- now I run into a defensive
issue and have to explain and argue for sqlite.
As a result, I stopped explaining that we are using sqlite,
because I don't want customers to start worrying
about the sql back end.

Just to share my feeling.. sqlite is a wonderful tool; stop
making it smaller than it is, it doesn't deserve that.

Marcus

>
>
> Fred Williams-5 wrote:
>>
>>  no further need to ask and answer, "Will SQLite support
>> multi users?
>>
> Maybe it should be covered a bit more specifically on the SQLite site,
> bearing in mind that new people would naturally have the mindset that
> databases are for shared use normally. SQLite does have sharing
> capabilities
> so it would be helpful to profile these for novices.
> --
> View this message in context:
> http://www.nabble.com/SQLite-Suitability-for-Shopping-Cart-tp25530589p25615724.html
> Sent from the SQLite mailing list archive at Nabble.com.
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Serious locking/blocking issue

2009-09-17 Thread Marcus Grimm
> Igor Tandetnik wrote:
>> Angus March  wrote:
>>> After the callback has finished, what will happen with that original
>>> call to sqlite3_reset()?
>>>
>>
>> Since sqlite3_reset doesn't take any locks, a busy callback would never
>> be invoked for it.
>>
>
> Well someone should tell the library on my machine that, because
> it's returning locking errors

yes, it may return that... but as a result of a previous
call to sqlite3_step which returned a lock error.
so it basically just passes the last error code through.

Marcus

> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Memory leak on inserting from sqlite c/c++ interfaces

2009-07-22 Thread Marcus Grimm
just a point:
In your pseudocode you precreate statements for the
BEGIN and COMMIT and try to reuse them later on.
If your real code is like this, I would suggest
taking a look at that. I think it is not possible
to "re-step" a BEGIN or COMMIT; you may try to
recreate these statements completely.
Maybe that is a reason you are seeing a lot of file syncs:
the begin/commit mechanism doesn't actually
work as you intended?
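
A minimal sketch of what I mean (the INSERT and its single text parameter
are placeholders): issue BEGIN/COMMIT freshly via sqlite3_exec each time,
and reuse only the prepared INSERT via sqlite3_reset:

#include <sqlite3.h>

int InsertBatch(sqlite3 *db, sqlite3_stmt *pIns, const char **azVal, int n)
{
    int i, rc = sqlite3_exec(db, "BEGIN;", 0, 0, 0);
    if( rc!=SQLITE_OK ) return rc;
    for(i=0; i<n; i++){
        sqlite3_bind_text(pIns, 1, azVal[i], -1, SQLITE_TRANSIENT);
        sqlite3_step(pIns);
        sqlite3_reset(pIns);   /* the INSERT itself can safely be reused */
    }
    return sqlite3_exec(db, "COMMIT;", 0, 0, 0);
}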

Marcus

Zachary Turner wrote:
> On Tue, Jul 21, 2009 at 6:45 PM, Pavel Ivanov wrote:
>>> If I remove or comment out the
>>> three lines with //* in the code above, I get no memory leaks.
>> So basically you're saying that if you don't insert any data into your
>> database, and thus effectively don't do anything with your database, and
>> thus SQLite doesn't have to cache anything from the database, then you
>> observe no memory leaks, right? Is it by any chance a growing database
>> cache that you see? What if you execute at the very beginning of
>> your application this:
>> pragma cache_size = 1;
>>
>> Pavel
> 
> Thanks for your suggestion.  I didn't have too much luck with it
> unfortunately, but I did learn a few more things about the problem.
> Just to make sure I understood you though, after creating the database
> with sqlite3_open_v2 I ran the command
> 
> sqlite3_exec(db_, "pragma cache_size=1;");
> 
> The return value was successful so I assume it worked.
> 
> This doesn't appear to change the allocation behavior.  I feel like it
> might be related to me issuing INSERT queries too fast for sqlite to
> deal with.  This is asynchronous (not using the experimental Sqlite
> asynchronous vfs but a completely different async model I wrote
> myself) and the thread that issues inserts to sqlite is different than
> the thread that reads data from the file.
> 
> I changed the code a little so that it permanently stops reading more
> data from the file after some fixed amount (I chose about 500MB) but
> continues writing until all data has been committed to the database.
> when I do this, I notice that after I stop my reads, it takes a *long
> time* before all the writes are finished in sqlite.  99% of this time
> is spent in the winSync() function in the sqlite source code
> constantly calling FlushFileBuffers().
> 
> Upon reflection it makes sense that this would happen in an
> asynchronous environment, but I thought that after a certain threshold
> running an insert statement on a non-in-memory table would generate a
> sync and simply block my sqlite background thread until there was no
> more buffered data.
> 
> In my async model I post messages between threads, so for example the
> main thread posts a message to the sqlite thread saying "write this
> data", which runs the pseudocode I posted in the OP, and after that's
> done it posts a message to the main thread saying "I'm done writing
> that data".  the main thread only issues new read requests as a result
> of getting that message, so if sqlite were syncing every so often then
> I could guarantee that I wouldn't be in this situation because the
> main thread wouldn't be able to issue any more reads until the sync
> was done.
> 
> 
> Am I misunderstanding sqlite's filesystem sync'ing algorithm?
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Repost: Accessing a DB while copying it causes Windows to eat virtual memory

2009-07-08 Thread Marcus Grimm
I might have missed that, but: what are you trying to
accomplish by using explorer to copy the
database file ?
I guess you are doing it for backup purposes.
If so, be aware that it might be dangerous to
do a simple file copy on a running database, since
you might forget some temporary files used by
sqlite - in particular the journal file.
As another reply already mentioned, you should
try to put an exclusive transaction around the
file copy. However, as you found, this will lock
the file as seen by external processes, but the file
is not locked for your application itself.
I have a similar approach in my application to
back up the database on a daily basis;
pseudocode:
DB.Execute("BEGIN EXCLUSIVE TRANSACTION;");
CopyFile(DB.SqlFile, BackUpFile, FALSE);
DB.Execute("COMMIT;");

By doing so inside the application, I haven't
seen any particular memory increase for
a week now, but my db file is not as big
as yours.

I would be curious whether that still increases
memory usage in your case...

HTH

Marcus

> On Tue, Jul 7, 2009 at 1:10 PM, Mark Spiegel wrote:
>
>> This should give you a picture of who is opening the file and with what
>> flags.  In particular, how is your SQLite app and Explorer opening the
>> file?
>
> Thanks for the pointer to Filemon; I'm always happy to learn about a
> new tool. Apparently its functionality has been integrated into
> Procmon, so that's what I used to follow your debugging steps.
>
> To address some of the points you mention earlier:
>
> * I'm running as Administrator, and I can see SYSTEM-owned processes
> in Task Manager. Not sure if that means I can see all processes or
> not, but I suspect that I can.
>
> * I was actually wrong about VM going up. In fact, the Pagefile
> doesn't seem to grow much. Physical Memory (as reported by Resource
> Monitor) is what's growing. Once it peaks (at around 99%) the page
> fault rate goes through the roof.
>
> * When this situation occurs, the system is most definitely not
> CPU-bound, as the CPU usage hovers around 1%. The high page fault rate
> is what seems to be making other applications unresponsive.
>
> * Working Set Size and Peak Working Set size of all processes in Task
> Manager are very modest; the memory appears to be going to some sort
> of caching mechanism in the OS.
>
> Using procmon, I filtered on "Path" for my particular sqlite file and
> repeated my experiments from the original post. I verified that
> sqlite3.exe and Explorer.EXE are the only processes opening the
> file.The thing that stands out in the pathological cases is the
> presence of SHARING VIOLATION result code. Whichever one of sqlite or
> Explorer attempts to open the file second will throw this error, and
> that's when I see physical memory begin to grow.
>
> Here's a trace of sqlite accessing the file, then Explorer copying it
> (CSV format):
>
> "Time of Day","Process Name","PID","Operation","Path","Result","Detail"
> "6:01:46.1439881
> PM","sqlite3.exe","11088","IRP_MJ_CREATE","C:\inetpub\wwwroot\Vivisimo\bin\fake-cache.sqlt","SUCCESS","Desired
> Access: Read Attributes, Disposition: Open, Options: Open For Backup,
> Open Reparse Point, Attributes: n/a, ShareMode: Read, Write, Delete,
> AllocationSize: n/a, Impersonating: TESTBED9\Administrator,
> OpenResult: Opened"
> "6:01:46.1446133
> PM","sqlite3.exe","11088","IRP_MJ_CREATE","C:\inetpub\wwwroot\Vivisimo\bin\fake-cache.sqlt","SUCCESS","Desired
> Access: Read Attributes, Disposition: Open, Options: Open For Backup,
> Open Reparse Point, Attributes: n/a, ShareMode: Read, Write, Delete,
> AllocationSize: n/a, Impersonating: TESTBED9\Administrator,
> OpenResult: Opened"
> "6:01:46.1448474
> PM","sqlite3.exe","11088","IRP_MJ_CREATE","C:\inetpub\wwwroot\Vivisimo\bin\fake-cache.sqlt","SUCCESS","Desired
> Access: Generic Read/Write, Disposition: OpenIf, Options: Synchronous
> IO Non-Alert, Non-Directory File, Random Access, Attributes: N,
> ShareMode: Read, Write, AllocationSize: 0, Impersonating:
> TESTBED9\Administrator, OpenResult: Opened"
> "6:01:56.1737399
> PM","Explorer.EXE","3272","IRP_MJ_CREATE","C:\inetpub\wwwroot\Vivisimo\bin\fake-cache.sqlt","SHARING
> VIOLATION","Desired Access: Generic Read, Write Data/Add File,
> Disposition: Open, Options: Sequential Access, Synchronous IO
> Non-Alert, Non-Directory File, Attributes: n/a, ShareMode: Read,
> AllocationSize: n/a"
> "6:01:56.1738486
> PM","Explorer.EXE","3272","IRP_MJ_CREATE","C:\inetpub\wwwroot\Vivisimo\bin\fake-cache.sqlt","SUCCESS","Desired
> Access: Generic Read, Write Data/Add File, Disposition: Open, Options:
> Sequential Access, Synchronous IO Non-Alert, Non-Directory File,
> Attributes: n/a, ShareMode: Read, Write, Delete, AllocationSize: n/a,
> OpenResult: Opened"
>
> And Explorer copying the file then sqlite3 opening it:
>
> "6:30:23.1071085
> PM","Explorer.EXE","3272","IRP_MJ_CREATE","C:\inetpub\wwwroot\Vivisimo\bin\fake-cache.sqlt","SUCCESS","Desired
> Access: Generic Read, Write Data/Add File, Disposition: Open, Options:
> 

Re: [sqlite] How can a COMMIT attempt result in SQLITE_BUSY ?

2009-07-04 Thread Marcus Grimm
Thanks Igor,

uhh... I should have read this more carefully.
I understand now that the commit may in fact
need to wait for some other shared locks...

sorry for the noise

Marcus

> Marcus Grimm wrote:
>> I'm wondering how it can happen that after a successfull
>> "BEGIN TRANSACTION;" statement and some inserts or update
>> statements, that the final COMMIT failes with a busy error code ?
>>
>> Shouldn't that be impossible because there can be only one running
>> transaction ?
>
> There may be multiple connections to the same database, each of which
> could start a transaction. For more details, see
>
> http://sqlite.org/lockingv3.html
> http://sqlite.org/lang_transaction.html
>
>> Everthing was working perfect until the user started a maintenance
>> tool that opens a single connection to the same sqlite database
>> while the server application was still running on the same hardware.
>> Now it happends that the server was not able to get a
>> COMMIT statement through
>
> You cannot commit any changes while another connection is reading from
> the database.
>
> Igor Tandetnik
>
>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] How can a COMMIT attempt result in SQLITE_BUSY ?

2009-07-03 Thread Marcus Grimm
Hello List,

I'm wondering how it can happen that after a successful
"BEGIN TRANSACTION;" statement and some insert or update
statements, the final COMMIT fails with a busy error code ?

Shouldn't that be impossible because there can be only one running
transaction ?

I'm using shared cache and read_uncommitted = True inside a
server application that opens multiple connections to the same
DB (similar to what can be found on the wiki pages under SampleCode).

Everything was working perfectly until the user started a maintenance
tool that opens a single connection to the same sqlite database
while the server application was still running on the same hardware.
Then it happened that the server was not able to get a
COMMIT statement through (i.e. it timed out while trying this
via sqlite3_step), with the result that it left a transaction
open in one thread and was no longer able to do any writes
to the sqlite database.
After rebooting the server, of course, everything worked again.

I'm able to reproduce this with a modified version of the
SampleCode by just starting it two times: usually a COMMIT will
time out after a while.

I know some workarounds for this (mainly to really make sure that
a COMMIT can't be timed out anymore) but I'm curious if there is
an explanation for this.
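
(The workaround boils down to something like this sketch: keep retrying the
COMMIT instead of letting it time out; the sleep interval is arbitrary.)

#include <sqlite3.h>

int CommitWithRetry(sqlite3 *db)
{
    int rc;
    while( (rc = sqlite3_exec(db, "COMMIT;", 0, 0, 0))==SQLITE_BUSY ){
        sqlite3_sleep(10);     /* wait for the readers, then try again */
    }
    return rc;
}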

Thank you

Marcus Grimm


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite3: Database is sometimes locked when a reader is attached with open_v2

2009-07-02 Thread Marcus Grimm

Tino Lange wrote:
> Hi Marcus,
> 
> I have no problem when the reading application gets a lock error because the 
> writing application has a lock.
> 
> But the problem is the other way around:
> -> The writing application gets a lock error because someone reads!

well... since the writer blocks the database for read and write access, it has
to verify that no read is ongoing before it will be able to
lock the database. Hence, just as a write locks the database, an
ongoing read indirectly locks the db for writers as well.

So the only operations that do not block each other are reads against reads.

As I said, the shared cache and the read_uncommitted pragma
increase this sort of concurrency, and I use them regularly in my
server application.

> 
> This is what I can't understand and what I didn't expect. I would expect 
> that the writing application is undisturbed by any readers that open with 
> SQLITE_OPEN_READONLY.

I'm not familiar with that option, but as far as I understand it will only
prevent that connection from applying any write operation, and as I tried to
explain: a simple read may very well cause a timeout on a write attempt.

Marcus


> 
> Thanks
> 
> Tino
> 
> --
> 
> 
> Marcus Grimm wrote:
> 
>> I'm afraid this is by design of sqlite: Sqlite will lock
>> the database during a writing transaction, I think no matter
>> if you open a 2nd connection using the readonly flag.
>>
>> the typical solutions are:
>> a) retry your read attempt after you receive a busy error code until
>> it proceeds.
>> b) consider shared cache mode and pragma read_uncommitted = True;
>>
>> hope this helps
>>
>> Marcus
> 
> 
> _______
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 

-- 
Marcus Grimm, MedCom GmbH Darmstadt, Rundeturmstr. 12, 64283 Darmstadt
Tel: +49(0)6151-95147-10
Fax: +49(0)6151-95147-20
--
MedCom slogans of the month:
"Vacation ? -- Every day at MedCom is a paid vacation!"
"Friday I have monday in my mind."
"MedCom -- Every week a vacation, every day an event, every hour a cliffhanger,
every minute a climax."
"Damned, it's weekend again!"
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite3: Database is sometimes locked when a reader is attached with open_v2

2009-07-02 Thread Marcus Grimm
I'm afraid this is by design of sqlite: Sqlite will lock
the database during a writing transaction, I think no matter
if you open a 2nd connection using the readonly flag.

the typical solutions are:
a) retry your read attempt after you receive a busy error code until
it proceeds.
b) consider shared cache mode and pragma read_uncommitted = True;
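
A minimal sketch of option b) -- note that both connections must live in
the same process for the shared cache to apply:

#include <sqlite3.h>

int OpenUncommittedReader(const char *zFile, sqlite3 **pDb)
{
    int rc;
    sqlite3_enable_shared_cache(1);    /* before opening the connections */
    rc = sqlite3_open(zFile, pDb);
    if( rc!=SQLITE_OK ) return rc;
    /* let this connection read rows not yet committed by a writer */
    return sqlite3_exec(*pDb, "PRAGMA read_uncommitted = 1;", 0, 0, 0);
}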

hope this helps

Marcus

Tino Lange wrote:
> Hi all,
> 
> I have written a program that opens a SQLIte3 database and writes in it most 
> of the time via replace/update.
> 
> If I do select like (no writes, really only reads) statements from some 
> other process that carefully opens the database with "sqlite3_open_v2(..., 
> SQLITE_OPEN_READONLY, ...)" the main (writing) application might get 
> errcode=5 ermmsg='database is locked' errors when it tries to write while 
> the other application (only!) reads.
> 
> How is that possible? How to prevent?
> 
> I would expect that the reading application might (of course) sometimes get 
> "database is locked" errors, but the writing application should never.
> 
> [ I have also realized that the sqlite3 commandline tool uses sqlite_open() 
> and always opens in read/write mode. That's why I wrote my own reading 
> application using sqlite3_open_v2 ]
> 
> Thanks
> 
> Tino
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Python sqlite binding: commit inside select loop

2009-06-30 Thread Marcus Grimm
thank you very much Richard! very interesting to
get some info about these rather internal issues. This
gives us a good background to review some potentially
dangerous loops. Your hint with the temp table is extremely
useful.

Best wishes

Marcus Grimm

>
> On Jun 30, 2009, at 2:34 PM, Marcus Grimm wrote:
>>
>> I'm not sure what will happend if you for example
>> delete a table row that would be the next row that
>> sqlite3_step would see, but doing so to me sounds
>> a bad approach anyhow.
>
> The official policy is that if you modify a table (via INSERT, UPDATE,
> or DELETE) in the middle of a SELECT, then what you get from the
> remainder of the SELECT is undefined.
>
> Sometimes, modifying a table has no effect on the remainder of the
> SELECT.  For example, suppose you are running
>
> SELECT x FROM table1 WHERE y>10 ORDER BY x;
>
> where there is no index on x.  The way SQLite implements this is to
> load all the rows that have y>10 into a temporary table that is sorted
> by x.  Then it starts returning rows of the temporary table one by
> one.  By the time the first row is returned, everything that will ever
> be read from table1 by this query has already been read, so modifying
> table1 at that point will have no effect on subsequent SELECT output.
>
> On the other hand, if you have a query like this:
>
>SELECT x FROM table1 WHERE y>10;
>
> where there is no index on y, then the query will be implemented as a
> full table scan.  SQLite will read the rows starting at the beginning
> of table1 and working toward the end.  As each row is read, it will be
> tested to see if y>10.  If y>10, then the row is returned.  Otherwise,
> SQLite advances to the next row.  In such a scenario, if you modify
> one of the later rows in the table, the modifications will be seen
> when the scan reaches the corresponding position in the table.
>
> Things can get tricky when you start looking at joins.  Consider:
>
> SELECT t1.x, t2.y FROM t1 JOIN t2;
>
> This will typically be implemented as a nested loop:
>
>  for each row in t1:
>   for each row in t2:
>output t1.x, t2.y
>
> If you delete the current row from t1 while there are still more rows
> to go on t2, what output do you get for t1.x on subsequent rows?  The
> answer is "it depends".  Sometimes you will continue to get the
> original value of t1.x.  Other times you will get a NULL.  Whether you
> get the original value of t1.x or NULL might change from one point
> release to the next.
>
> Bottom line:  It is best not to depend on the behavior of a SELECT
> after any row in the table being SELECTed is modified.  SQLite tries
> to do something "sensible".  But sometimes it is not clear what the
> sensible behavior is.  If you need to modify a table as you are
> reading it from a SELECT, consider storing the results of the SELECT
> in a TEMP table, then reading the TEMP table as you modify the
> original table:
>
>   CREATE TEMP TABLE xyz AS SELECT * FROM table1 WHERE ...;
>   SELECT * FROM xyz;   -- make changes to table1 as xyz is read
>   DROP TABLE xyz;
>
>
> D. Richard Hipp
> d...@hwaci.com
>
>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Python sqlite binding: commit inside select loop

2009-06-30 Thread Marcus Grimm
> Marcus Grimm <mgr...@medcom-online.de> wrote:
>> what is possible I guess would be to start a transaction
>> inside the loop, do something, and commit and procced
>> with stepping
>
> No, this is not possible either. By starting the select, you also start
> an implicit transaction. SQLite doesn't support nested transactions, so
> you can't start another one in the middle of the loop.

hm... just read again some docs... you are right I guess,
as usual... but reading the docs
(http://www.sqlite.org/lang_transaction.html):
"..The explicit COMMIT command runs immediately, even if there are pending
SELECT statements..."

is still a bit contradictory; it sounds like it should be possible
to do what I claimed, since I assume only a reading select
statement is pending.

anyway, it would be simple to test with a little test code;
I guess I'll do this in the next few days just to see what happens.
I have never done such loops with an explicit transaction inside so far,
only guessing..

>
>> I'm still curious what would be the error code of sqlite3_step
>> if I would delete a row that would be one of the rows
>> reached by the running statement, but that's another issue...;)
>
> There wouldn't be any errors. The row just wouldn't be returned, as if
> it never existed.

I was expecting it to be more complicated, because sqlite runs
a sort of machine code while doing the select stepping. For example,
if that uses a temporary internal table which now loses an entry in
a source table, sqlite would need to re-build the temporary
internal table during each step... or how about a "SELECT ... LIMIT N"
statement that would now need to run one item further than
originally seen by the prepare function?
anyway, I'm afraid I'm about to hijack this thread... ;)

Marcus

>
> Igor Tandetnik
>
>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Python sqlite binding: commit inside select loop

2009-06-30 Thread Marcus Grimm
> Marcus Grimm <mgr...@medcom-online.de> wrote:
>> I'm a bit confused about Igors replies because
>> you can very well do a select, step throu the
>> results and even delete or update the table row that you
>> are currently inspecting and procceed with stepping.
>
> ... but you cannot COMMIT your changes while still stepping through the
> select. This is the crux of the OP's problem.

ahh, okay.. yes. I missed that this is the intention of the OP:
to start a transaction outside the select loop, COMMIT
somewhere inside, and proceed with stepping.
What is possible, I guess, would be to start a transaction
inside the loop, do something, commit, and proceed
with stepping (actually sqlite does this internally anyhow).
I'm still curious what the error code of sqlite3_step would be
if I deleted a row that would be one of the rows
reached by the running statement, but that's another issue...;)

Best

Marcus

>
> Igor Tandetnik
>
>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Python sqlite binding: commit inside select loop

2009-06-30 Thread Marcus Grimm
I'm a bit confused by Igor's replies, because
you can very well do a select, step through the
results, and even delete or update the table row that you
are currently inspecting, and proceed with stepping -
as long as you use the same database connection for this.

I'm not sure what will happen if you, for example,
delete a table row that would be the next row that
sqlite3_step would see, but doing so sounds like
a bad approach to me anyhow.
I do this directly using the C api; I don't know why the
python stuff is different here...

Marcus

> Jim Wilcoxson  wrote:
>> I'm using the Python sqlite3 (pysqlite) bindings.  I'd like to be able
>> to do a select, and while fetching those rows and inserting new ones,
>> periodically do a commit.
>
> You can't do this with SQLite. Commit fails if there are outstanding
> statements.
>
>> So I have a couple of questions/observations:
>>
>> 1. Is the thing about resetting cursors an SQLite requirement, or
>> something the Python binding is doing to protect people from the
>> confusion of seeing the updated rows.
>
> It is SQLite requirement that all statements be reset or finalized
> before a transaction can commit. SQLite doesn't automatically reset
> those statements - it fails the COMMIT operation instead. Apparently,
> Python binding is trying to be helpful and resetting all the statements
> for you. One way or the other, your design ain't gonna fly.
>
>> 2. If I were deleting rows, I can see where that would be a problem:
>> what if some of the rows satisfying the query were deleted?  Once the
>> commit happens, the rows are gone unless there is a delete/add with
>> the same rowid.
>
> None of that is the problem. Realize that your select already sees
> changes made on the same connection, even before they are committed.
>
> The issue has to do with locking. COMMIT needs to release all locks the
> connection holds - but it can't if you still have outstanding
> statements.
>
> Igor Tandetnik
>
>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Slow Transaction Speed?

2009-05-31 Thread Marcus Grimm
>
> On 31 May 2009, at 9:24pm, Jay A. Kreibich wrote:
>
>> The whole point of a transaction is that once it
>>  returns "success" on a commit, it is committed to permanent storage,
>>  no matter what -- including crashes, power failures, and other major
>>  problems.
>
> My impression is that COMMIT/ROLLBACK is for data integrity: don't
> update the account balance if the transaction didn't work.  That sort
> of stuff.  Not to deal with hardware failure.

do not mix the sql COMMIT with hardware sync/commit functionality;
the subject of these posts refers mainly to the fsync/commit
issue at the hardware level.

You are right with the statements below, but this is not the
point here:
The developers of sqlite put major effort into making it
practically impossible that sqlite will ever end up with
a corrupt database, even considering a power loss
or OS crash.
One issue here is the requirement to know and define
exactly when data has reached the disk surface.
To achieve this, sqlite relies on the correct low-level
implementation of the fsync functions. The goal can't be
reached if these functions do not behave as originally
expected by the sqlite developers; in that case sqlite behaves
as if the synchronous pragma were set to OFF.
That's nothing to hold against sqlite; it is just something
users should be aware of, imo.
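
To make the trade-off concrete, a minimal sketch of switching the pragma
("db" is an open connection):

#include <sqlite3.h>

void SetDurability(sqlite3 *db, int full)
{
    /* FULL: sqlite syncs at the critical moments and survives power loss;
       OFF:  no syncs -- fast, but a crash can corrupt the database file */
    sqlite3_exec(db, full ? "PRAGMA synchronous = FULL;"
                          : "PRAGMA synchronous = OFF;", 0, 0, 0);
}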


>
> I think that these days your computer can never know when things that
> reached permanent storage.  Permanent storage is too far away, behind
> too many levels of caching for the CPU to ever know about it.  Your
> CPU may be virtual.  Your storage medium may be virtual: a disk image,
> a RAID controller, emulated hardware, or something like that.  If you
> talk to a hard disk by IDE it will be using onboard caching and not
> admitting it to the motherboard (most of them do this these days).
>
> Unless you're going to disable caching all the way down the line from
> CPU to physical hard disk, you're never going to get any real idea of
> what the hardware's doing.  And if you did that it would low things
> down too much anyway.  Cached writes do a really good job of speeding
> things up.
>
> Simon.
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Sqlite on a Virtual Machine of Microsoft's Hyper-V ?

2009-05-29 Thread Marcus Grimm
okay, thanks to all for the info. I'm relieved..

best

Marcus


Sam Carleton wrote:
> Well, Microsoft's Hyper-V is just like VMWare.  It allows multiple OS's 
> to be installed and running on one machine at one time.  Since sqlite 
> simply reads/writes a file, not tracks/sectors, it should be fine.  
> SQLite won't even know it is running on a virtual machine.
> 
> Marcus Grimm wrote:
>> Hello List,
>>
>> I've been asked if my server application will run
>> in a virtual machine: Microsoft's Hyper-V
>> I have no idea what this is, nor if it affects sqlite.
>>
>> My main concern would be if the (essential) file locking
>> and sync/commit commands will still do the intended job
>> within a virtual machine.
>>
>> Any experience with this ?
>>
>> Thank you in advance.
>>
>> Marcus Grimm
>>
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
>>   
> 
> 
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Sqlite on a Virtual Machine of Microsoft's Hyper-V ?

2009-05-29 Thread Marcus Grimm
Hello List,

I've been asked if my server application will run
in a virtual machine: Microsoft's Hyper-V
I have no idea what this is, nor if it affects sqlite.

My main concern would be if the (essential) file locking
and sync/commit commands will still do the intended job
within a virtual machine.

Any experience with this ?

Thank you in advance.

Marcus Grimm

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Slow Transaction Speed?

2009-05-29 Thread Marcus Grimm
> On Thu, May 28, 2009 at 03:12:55PM -0700, Allen Fowler scratched on the
> wall:
>>
>> > just for anybody who is interested:
>>
>> >
>> > I translated Jim's function into window code and added
>> > a page of 1024 that will be written, instead of a single byte.
>> > On my Win-XP system I got 55 TPS, much faster than sqlite
>> > seems to write a page but that might be related to the
>> > additional overhead sqlite needs to do.

just to add: I traced a little what sqlite does when a
simple UPDATE is done within a transaction: it does
two syncs on the journal file and one final sync on the
db itself. Three syncs per transaction on a 7200 rpm disk
(roughly 120 revolutions per second) cap the rate well below
40 TPS, so achieving something like 15 TPS is reasonable.


>> >
>> > This brings me to a nice to have feature request:
>> > How about adding similar test function in the sqlite API ?
>> > This might use the vfs to write pages and gives some feedback
>> > on the performance of the system where sqlite runs on.
>> > It might also detect problems with the commit function...
>> > Just an idea...
>> >
>>
>> Interesting idea.
>
>   It would make a lot more sense to make this an external utility
>   or an extension of the sqlite3 shell.  Adding it to the core library
>   is a definite case of code bloat.

Adding it to the API would allow my application to
easily run the test, for example the first time it runs
on a system. But it might be a problem for users that
go through a sqlite wrapper.
However, having it in the sqlite3 shell would be very useful as
well.

Marcus

>
>   Actually, a whole suite of performance related tests might be
>   interesting.
>
>-j
>
> --
> Jay A. Kreibich < J A Y  @  K R E I B I.C H >
>
> "Our opponent is an alien starship packed with atomic bombs.  We have
>  a protractor."   "I'll go home and see if I can scrounge up a ruler
>  and a piece of string."  --from Anathem by Neal Stephenson
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Slow Transaction Speed?

2009-05-28 Thread Marcus Grimm
just for anybody who is interested:

I translated Jim's function into Windows code and added
a page of 1024 bytes that is written, instead of a single byte.
On my Win-XP system I got 55 TPS, much faster than sqlite
seems to write a page, but that might be related to the
additional overhead sqlite needs to do.

This brings me to a nice-to-have feature request:
How about adding a similar test function to the sqlite API ?
This might use the vfs to write pages and give some feedback
on the performance of the system that sqlite runs on.
It might also detect problems with the commit function...
Just an idea...

Anyway, here is my win32 version of Jim's test function:

---
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <io.h>
#include <time.h>

#define TESTWRITES  1000

int TestDisk ()
{
int fd;
int loops;
unsigned char   Page[1024];
time_t elap;
time_t start;

if ((fd=_open("C:\\testfile.db", O_RDWR|O_CREAT, 0777)) == -1)
{
  fprintf(stderr, "Error opening file");
  exit(1);
}

start = time(NULL);
for(loops=0; loops<TESTWRITES; loops++)
{
  /* reconstructed -- the loop body was truncated in the archive;
     presumably: rewind, write one page, force it to disk */
  _lseek(fd, 0L, SEEK_SET);
  _write(fd, Page, sizeof(Page));
  _commit(fd);   /* Win32 counterpart of fsync() */
}
elap = time(NULL) - start;
if (elap > 0)
  printf("%d TPS\n", (int)(TESTWRITES / elap));
_close(fd);
return 0;
}

> I'm running on Linux with ext3 and just wrote a Python test program to
> insert rows into a table with a single column, no indexing, and doing
> a commit after each insert.  When I first ran it, I got around 440
> inserts per second, which is clearly impossible.  This is a 7200rpm
> drive, so even if I could write a row on every revolution, the maximum
> insert rate would be 120 per second.  I tried adding "pragma
> sychronous=normal", and it sped up to 520 TPS.  With synchronous=full
> it slowed back to 420-460, so this must be the Python default.  Adding
> synchronous=off increased the rate to over 6000 TPS -- basically
> writing to memory instead of disk.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] corrupt database recovery

2009-05-27 Thread Marcus Grimm
I might have missed something, but:
What are you compressing?
ehm... Are you compressing the database file while sqlite is running ?


> The problem is happening in the field and the program deals with a corrupt
> database by renaming it, logging the error and creating a new database.
> So
> I only end up with the corrupt one.
>
> I would GUESS that the database doesn't have any free pages except those
> at
> the end since it's an auditing product so we never delete anything...but I
> could be wrong.
>
> The only reason I think I'm 'dropping a byte' is that all the
> compression/encryption code is byte based and the database gets corrupt
> (invalid pages) but I'm able to .dump it successfully. That, and I try
> to take the blame first, since it's usually my fault. (just ask my wife!)
>
> Right now I'm running a large test on the encryption/compression code to
> see
> if I can get it to break.
>
> -Original Message-
> From: sqlite-users-boun...@sqlite.org
> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Marcus Grimm
> Sent: Wednesday, May 27, 2009 5:09 PM
> To: 'General Discussion of SQLite Database'
> Subject: Re: [sqlite] corrupt database recovery
>
> sorry, I don't know where sqlite stores any kind of data like
> index or raw table, my suspect is that, since sqlite
> usually looks for a free page, the pages are randomly
> distributed over the file or added at the end if no free
> page has been found.
>
> what I don't understand:
> what makes you thinking that you may miss some bytes of a tree
> (and thus of a page) ?
> That is sqlite internal, except you are using your own
> vfs, I guess, ...do you ?
>
> by the way: did your application crash prior or nearby you had this
> corrupted DB ?
>
>> I've reviewed the code over and over again, but am yet to find it.  But
>> it
>> is a good bit of very complicated code (blowfish and compression code).
>>
>> That's why I was asking about the format of the database.  Since I'm
>> able
>> to
>> successfully do a .dump but the integrity_check whines, I'm thinking
>> that
>> maybe if I focus on where what could cause that, it might give me a
>> clue.
>>
>> For example, if the tree is stored at the end of file maybe I'm falling
>> out
>> the loop without writing the final bytes.  But if the tree is not
>> localized
>> to one area of the file, I'll have to rethink my approach.
>>
>>
>> -Original Message-
>> From: sqlite-users-boun...@sqlite.org
>> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Marcus Grimm
>> Sent: Wednesday, May 27, 2009 2:44 PM
>> To: General Discussion of SQLite Database
>> Subject: Re: [sqlite] corrupt database recovery
>>
>> so, if you think it is a coding error on your side it will
>> be a hard work to find the source. I can only image side effects
>> on an used API like sqlite3 with the classics:
>>
>> - stack frame overload in a function that calls sqlite functions
>> - using a local stack variable as a return pointer and reuse
>>   this external.
>> - memory overwriting on malloc'ed pointers or reusing of allready
>>   free'd memory pages.
>>
>> one and two might be found using a code review and identify
>> local variables or particular arrays that are written:
>> So wherever you call anything from sqlite check the stack
>> declaration and review the usage of these variables...
>> of course just don't do number two... ;)
>>
>> memory errors might be detected using special debug versions of the
>> malloc/free library, by code review, or manually by adding some
>> test variables whereever you malloc or free a pointer.
>>
>> I'm sure you allready did some of these... anyway, good luck
>>
>> are you using threads? would be another beautiful issue to trace...;)
>>
>> Marcus
>>
>>
>>
>>> It is a server class machine running Windows 2003 with 4 GB.  No, it's
>>> a
>>> local drive with 20GB free on it.
>>>
>>> I'm sure that it's a coding error on my part.  SQLite is very stable,
>>> in
>>> my
>>> opinion.  I'm just trying to get a rough idea on where I'm screwing up
>>> the
>>> database.
>>>
>>>
>>> -Original Message-
>>> From: sqlite-users-boun...@sqlite.org
>>> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of John Elrick
>>> Sent: Wednesday, May 27, 2009 12:58 PM
>>> To: General Discussion of SQLite Database
>>> Subject: Re: [sqlite] corrupt database recovery
>>>
>>> What platform?  Any chance they are using a network drive?

Re: [sqlite] corrupt database recovery

2009-05-27 Thread Marcus Grimm
sorry, I don't know where sqlite stores any particular kind of data like
an index or a raw table; my suspicion is that, since sqlite
usually looks for a free page, the pages are randomly
distributed over the file, or added at the end if no free
page has been found.

what I don't understand:
what makes you think that you may be missing some bytes of a tree
(and thus of a page) ?
That is sqlite-internal, unless you are using your own
vfs, I guess ...do you ?

by the way: did your application crash shortly before you got this
corrupted DB ?

> I've reviewed the code over and over again, but am yet to find it.  But it
> is a good bit of very complicated code (blowfish and compression code).
>
> That's why I was asking about the format of the database.  Since I'm able
> to
> successfully do a .dump but the integrity_check whines, I'm thinking that
> maybe if I focus on where what could cause that, it might give me a clue.
>
> For example, if the tree is stored at the end of file maybe I'm falling
> out
> the loop without writing the final bytes.  But if the tree is not
> localized
> to one area of the file, I'll have to rethink my approach.
>
>
> -Original Message-
> From: sqlite-users-boun...@sqlite.org
> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Marcus Grimm
> Sent: Wednesday, May 27, 2009 2:44 PM
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] corrupt database recovery
>
> so, if you think it is a coding error on your side it will
> be a hard work to find the source. I can only image side effects
> on an used API like sqlite3 with the classics:
>
> - stack frame overload in a function that calls sqlite functions
> - using a local stack variable as a return pointer and reuse
>   this external.
> - memory overwriting on malloc'ed pointers or reusing of allready
>   free'd memory pages.
>
> one and two might be found using a code review and identify
> local variables or particular arrays that are written:
> So wherever you call anything from sqlite check the stack
> declaration and review the usage of these variables...
> of course just don't do number two... ;)
>
> memory errors might be detected using special debug versions of the
> malloc/free library, by code review, or manually by adding some
> test variables whereever you malloc or free a pointer.
>
> I'm sure you allready did some of these... anyway, good luck
>
> are you using threads? would be another beautiful issue to trace...;)
>
> Marcus
>
>
>
>> It is a server class machine running Windows 2003 with 4 GB.  No, it's a
>> local drive with 20GB free on it.
>>
>> I'm sure that it's a coding error on my part.  SQLite is very stable, in
>> my
>> opinion.  I'm just trying to get a rough idea on where I'm screwing up
>> the
>> database.
>>
>>
>> -Original Message-
>> From: sqlite-users-boun...@sqlite.org
>> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of John Elrick
>> Sent: Wednesday, May 27, 2009 12:58 PM
>> To: General Discussion of SQLite Database
>> Subject: Re: [sqlite] corrupt database recovery
>>
>> What platform?  Any chance they are using a network drive?
>>
>>
>> John
>>
>> Gene wrote:
>>> My code is outside the database layer.  So I do all my database work,
>>> then
>>> compress and encrypt it.  No errors are returned anywhere.  I'm
>>> guessing
>>> that it's going to be an uninitialized variable or byte alignment
>>> problems
>>> somewhere.
>>>
>>> This code is running on hundreds of machines without a problem and I've
>>> never reproduced it but every now and again I get a support ticket
>>> showing
>> a
>>> corrupt database.  So I'm trying to figure out WHERE to look.
>>>
>>> -Original Message-
>>> From: sqlite-users-boun...@sqlite.org
>>> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of John Elrick
>>> Sent: Wednesday, May 27, 2009 10:59 AM
>>> To: General Discussion of SQLite Database
>>> Subject: Re: [sqlite] corrupt database recovery
>>>
>>> Gene Allen wrote:
>>>
>>>> Ok...it's happened again and I've decided that I need to track this
>>>> down
>>>> once and for all!
>>>>
>>>> Here is what I'm seeing: I get errors when I do a integrity_check (see
>>>> below), but I can .dump it to a text file and then .read it into
>>>> another
>>>> database ok.
>>>>
>>>> It seems to me that I'm screwing up an index or something.  Are
>>>> indexes
>>>> stored at the end of the database file?

Re: [sqlite] corrupt database recovery

2009-05-27 Thread Marcus Grimm
so, if you think it is a coding error on your side, it will
be hard work to find the source. I can only imagine side effects
on a used API like sqlite3, with the classics:

- stack frame overrun in a function that calls sqlite functions
- using a local stack variable as a return pointer and reusing
  it externally.
- memory overwrites on malloc'ed pointers, or reuse of already
  free'd memory pages.

One and two might be found by a code review that identifies
local variables or particular arrays that are written to:
wherever you call anything from sqlite, check the stack
declarations and review the usage of these variables...
and of course just don't do number two... ;)

memory errors might be detected using special debug versions of the
malloc/free library, by code review, or manually by adding some
test variables wherever you malloc or free a pointer.

I'm sure you already did some of these... anyway, good luck

are you using threads? that would be another beautiful issue to trace...;)

Marcus



> It is a server class machine running Windows 2003 with 4 GB.  No, it's a
> local drive with 20GB free on it.
>
> I'm sure that it's a coding error on my part.  SQLite is very stable, in
> my
> opinion.  I'm just trying to get a rough idea on where I'm screwing up the
> database.
>
>
> -Original Message-
> From: sqlite-users-boun...@sqlite.org
> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of John Elrick
> Sent: Wednesday, May 27, 2009 12:58 PM
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] corrupt database recovery
>
> What platform?  Any chance they are using a network drive?
>
>
> John
>
> Gene wrote:
>> My code is outside the database layer.  So I do all my database work,
>> then
>> compress and encrypt it.  No errors are returned anywhere.  I'm guessing
>> that it's going to be an uninitialized variable or byte alignment
>> problems
>> somewhere.
>>
>> This code is running on hundreds of machines without a problem and I've
>> never reproduced it but every now and again I get a support ticket
>> showing
> a
>> corrupt database.  So I'm trying to figure out WHERE to look.
>>
>> -Original Message-
>> From: sqlite-users-boun...@sqlite.org
>> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of John Elrick
>> Sent: Wednesday, May 27, 2009 10:59 AM
>> To: General Discussion of SQLite Database
>> Subject: Re: [sqlite] corrupt database recovery
>>
>> Gene Allen wrote:
>>
>>> Ok...it's happened again and I've decided that I need to track this
>>> down
>>> once and for all!
>>>
>>> Here is what I'm seeing: I get errors when I do a integrity_check (see
>>> below), but I can .dump it to a text file and then .read it into
>>> another
>>> database ok.
>>>
>>> It seems to me that I'm screwing up an index or something.  Are indexes
>>> stored at the end of the database file?  All I can think of is that my
>>> compression/encryption routines are messing something up and I'm trying
>>> to figure out 'where' to look.
>>>
>>> I guess the real question is, what would I have to do to make an
>>> integrity_check fail, but still let a dump work correctly?
>>>
>>> Many thanks for any advice on tracking down this ugliness.
>>>
>>>
>>
>> SNIP
>>
>> Personally, I'd refactor the code to allow me to verify the operation of
>> the compression/encryption routines independently of the database
>> operation.  How are you injecting the compression/encryption into the
>> database layer?
>>
>>
>> John
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Some index questions

2009-05-27 Thread Marcus Grimm
Thanks again Dan,

the phonebook is an excellent example to use the compound index.
So far, for such cases, I was thinking it is enough to
have one index and let sqlite test the other value against
the original table, like:

CREATE TABLE T1(A, B, C);
CREATE INDEX T1x ON T1(B);

SELECT * FROM T1 WHERE (B=4) AND (C=5);

if the temporary result for B=4 is small, compared to
the number of rows in that table, the additional test against
C=5 will not harm very much, I guess.
Of course, as I have learned now, a compound index would be
better, and even better if the fetch could concentrate on
index data only (as DRH mentioned)...
This sounds reasonable if the data size of C is not very big.
In case C is a string like a UID it might be worse since the
index will become big (in terms of bytes) -- then it is time to
make experiments, I guess.
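One cheap experiment (my own sketch, assuming a reasonably recent
sqlite with PRAGMA page_count) is to compare the page count before and
after creating the index:

    #include <stdio.h>
    #include "sqlite3.h"

    /* Report database size in pages; run before and after CREATE INDEX. */
    static void show_pages(sqlite3 *db)
    {
        sqlite3_stmt *pStmt = 0;
        if( sqlite3_prepare_v2(db, "PRAGMA page_count;", -1, &pStmt, 0)==SQLITE_OK
         && sqlite3_step(pStmt)==SQLITE_ROW ){
            printf("pages: %d\n", sqlite3_column_int(pStmt, 0));
        }
        sqlite3_finalize(pStmt);
    }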

Thanks again

Marcus

Dan wrote:
> On May 27, 2009, at 10:13 PM, Marcus Grimm wrote:
> 
...
>>>
>>>   WHERE b = 3 AND c = 4;
>>>   WHERE b = 3 AND c > 4;
>>>
>>> but cannot be used to optimize:
>>>
>>>   WHERE c = 4;
>> Ahh... I picked up the wrong example... Great, this is exactly in line
>> with how I was thinking it might work.
>> I guess such compound indexes only have a benefit for specific
>> queries, like the above that use exactly these two columns all the
>> time; otherwise two separate indexes will do the same job, but maybe
>> a little slower due to the additional overhead of scanning two index tables.
> 
> If a single column index is like the index found in textbooks,
> a compound index with two fields is like the phone book. Sorted first by
> surname, then by first name. The "rowid", if you like, is the phone  
> number.
> 
> So, it's easy to find the set of phone numbers for everybody with the
> surname "Jones". It's easy to find the set of phone numbers for people
> called "Barry Jones". Quite difficult to find all the people called  
> "Barry"
> though.
> 
> If you had two separate indexes, one on surname and one on first-name,
> you could quickly find all the Jones's using the first index, or all the
> Barry's using the second index. But to find the set of people called
> "Barry Jones" would be much more work than it was with the compound  
> index.
> 
> Dan.
> 
> 
> 
>> Thanks again for the useful advice.
>>
>> kind regards
>>
>> Marcus
>>
>>
>>
>>
>>> Dan.
>>>
>>>
>>> ___
>>> sqlite-users mailing list
>>> sqlite-users@sqlite.org
>>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>>
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Some index questions

2009-05-27 Thread Marcus Grimm
Richard,

thanks you very much for the example and advice.
Ahh, yes. I missed the point that sqlite might also use an
index as a kind of buffer to fetch data from.
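(A side note from me, not from Richard's mail: you can ask sqlite
itself whether an index gets used, via EXPLAIN QUERY PLAN; a sketch --
the exact output wording varies between sqlite versions:)

    #include <stdio.h>
    #include "sqlite3.h"

    /* Print the query plan; the detail column names the chosen index,
       e.g. something like "... USING COVERING INDEX t1abc". */
    static void show_plan(sqlite3 *db, const char *zSql)
    {
        sqlite3_stmt *pStmt = 0;
        char *zExplain = sqlite3_mprintf("EXPLAIN QUERY PLAN %s", zSql);
        if( sqlite3_prepare_v2(db, zExplain, -1, &pStmt, 0)==SQLITE_OK ){
            while( sqlite3_step(pStmt)==SQLITE_ROW ){
                printf("%s\n", (const char*)sqlite3_column_text(pStmt, 3));
            }
        }
        sqlite3_finalize(pStmt);
        sqlite3_free(zExplain);
    }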

kind regards

Marcus

D. Richard Hipp wrote:
> On May 27, 2009, at 11:13 AM, Marcus Grimm wrote:
>> I guess such compound indexes only have a benefit for specific
>> queries, like the above that use exactly these two columns all the
>> time; otherwise two separate indexes will do the same job, but maybe
>> a little slower due to the additional overhead of scanning two index tables.
>>
> 
> Indices can be used for sorting as well as for lookup.  And sometimes
> content can be pulled from indices to avoid an additional search to
> find the original table entry.  For these reasons, compound indices
> can be useful.
> 
> For example:
> 
>   CREATE TABLE t1(a,b,c);
>   CREATE INDEX t1abc ON t1(a,b,c);
> 
> Then if you say:
> 
>   SELECT c FROM t1 WHERE a=5 ORDER BY b;
> 
> The query above will run faster with the index on a,b,c than it would  
> with any other index.  The first column of the index is used to do the  
> search.  Then the index is scanned in order to output rows in order by  
> b.  Finally, the c value is pulled directly from the index, avoiding a  
> lookup of the original table.
> 
> 
> 
> D. Richard Hipp
> d...@hwaci.com
> 
> 
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 

-- 
Marcus Grimm, MedCom GmbH Darmstadt, Rundeturmstr. 12, 64283 Darmstadt
Tel: +49(0)6151-95147-10
Fax: +49(0)6151-95147-20
--
MedCom slogans of the month:
"Vacation ? -- Every day at MedCom is a paid vacation!"
"Friday I have monday in my mind."
"MedCom -- Every week a vacation, every day an event, every hour a cliffhanger,
every minute a climax."
"Damned, it's weekend again!"
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Some index questions

2009-05-27 Thread Marcus Grimm
Hi Dan,

thanks for your answers: this was exactly what I was looking for.

Dan wrote:
> On May 27, 2009, at 9:08 PM, Marcus Grimm wrote:
>>
>> The difference, I guess, to a "real" sql table is that it is sorted
>> with respect to the indexed column and not by row_id -- am I right ?
> 
> Pretty much. Obviously there are a ton of details, but what you have is
> a good mental model for practical purposes.

I can imagine that you folks have a bunch of issues with indexes...
Thanks for confirming my "mental model".

> 
>> I often have to deal with columns which are UIDs that have a length
>> of say 128 bytes and that will be the majority (in terms of byte-size)
>> of that table. If I were now to create an index on such a UID I would
>> basically double the database size, correct ?
>> (I know I can experiment with this myself, but maybe a sql-guru here
>> already has the answer)
> 
> If you have a schema:
> 
>CREATE TABLE t1(a INTEGER PRIMARY KEY, b UUID);
> 
> And then create an index on column b, you will probably double the size
> of the database, yes.

Okay, that was the key question for me.

>>
>> CREATE TABLE T1(A, B, C);
>> CREATE INDEX T1Idx ON T1(B,C);
> 
> It's like a table that contains columns B and C, and the rowid. The
> table is sorted in the same order as the results of:
> 
>SELECT b, c, rowid FROM t1 ORDER BY b, c;
> 

An interesting way to express how this index is built. Thank you.

> 
>> ...
>> SELECT * FROM T1 WHERE B=3;
>>
>> as far as I know this will most likely not use the index, but then  
>> I'm curious what
>> is the benefit or application of a compound index compared to two  
>> single indexes ?
> 
> Such a query can use the compound index T1Idx above. It can do the  
> following
> too:
> 
>WHERE b = 3 AND c = 4;
>WHERE b = 3 AND c > 4;
> 
> but cannot be used to optimize:
> 
>WHERE c = 4;

Ahh... I picked up the wrong example... Great, this is exactly in line
with how I was thinking it might work.
I guess such compound indexes only have a benefit for specific
queries, like the above that use exactly these two columns all the
time; otherwise two separate indexes will do the same job, but maybe
a little slower due to the additional overhead of scanning two index tables.

Thanks again for the useful advice.

kind regards

Marcus




> 
> Dan.
> 
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Some index questions

2009-05-27 Thread Marcus Grimm
Thanks Puneet,

I know Igor's nice textbook example... :-)

My questions are a little more technical,
like "if I were writing a DB engine"... so I can judge better
when or when not to use or create an index, etc.

I think it is useful to understand what is behind all this,
similar to how I would recommend every young programmer take
a little course in assembly programming to understand what
a compiler does and does not do, why it is fast or slow, etc. --
but maybe I'm old-fashioned.

I know, maybe my post is a bit too unspecific. :-)

Marcus

P Kishor wrote:
> On Wed, May 27, 2009 at 7:38 PM, Marcus Grimm <mgr...@medcom-online.de> wrote:
>> Hi List,
>>
>> this is not very sqlite specific but hopefully somebody will give
>> me some info on this, as I haven't yet found a nice description of this:
>>
>> I'm curious how an index works internally; my suspicion is that an index
>> can be seen as a kind of table that has two columns which hold a copy of
>> a) the row_ids of the indexed table.
>> b) the value of the indexed column.
> 
> See http://www.mail-archive.com/sqlite-users@sqlite.org/msg37474.html
> for a very, very nice explanation from Igor of how indexes work. The
> explanation uses the same metaphor of an index in a text book.
> 
> 
>> The difference, I guess, to a "real" sql table is that it is sorted
>> with respect to the indexed column and not by row_id -- am I right ?
>>
>> I often have to deal with columns which are UIDs that have a length
>> of say 128 bytes and that will be the majority (in terms of byte-size)
>> of that table. If I were now to create an index on such a UID I would
>> basically double the database size, correct ?
>> (I know I can experiment with this myself, but maybe a sql-guru here
>> already has the answer)
>>
>> How does a compound index work:
>>
>> Is it the same if I create two indexes compared to a single but compound 
>> index ?
>> I guess no, because reading the optimizer hints from the sqlite doc I 
>> understand
>> that sqlite will not use that index if I ask only for one of the column 
>> names, like:
>>
>> CREATE TABLE T1(A, B, C);
>> CREATE INDEX T1Idx ON T1(B,C);
>> ...
>> SELECT * FROM T1 WHERE B=3;
>>
>> as far as I know this will most likely not use the index, but then I'm 
>> curious what
>> is the benefit or application of a compound index compared to two single 
>> indexes ?
>>
>> Again sorry for not being very specific in the questions.
>>
>> Thanks
>>
>> Marcus Grimm
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
> 
> 
> 

-- 
Marcus Grimm, MedCom GmbH Darmstadt, Rundeturmstr. 12, 64283 Darmstadt
Tel: +49(0)6151-95147-10
Fax: +49(0)6151-95147-20
--
MedCom slogans of the month:
"Vacation ? -- Every day at MedCom is a paid vacation!"
"Friday I have monday in my mind."
"MedCom -- Every week a vacation, every day an event, every hour a cliffhanger,
every minute a climax."
"Damned, it's weekend again!"
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Some index questions

2009-05-27 Thread Marcus Grimm
Hi List,

this is not very sqlite specific but hopefully somebody will give
me some info on this, as I haven't yet found a nice description of this:

I'm curious how an index works internally; my suspicion is that an index
can be seen as a kind of table that has two columns which hold a copy of
a) the row_ids of the indexed table.
b) the value of the indexed column.

The difference, I guess, to a "real" sql table is that it is sorted
with respect to the indexed column and not by row_id -- am I right ?

I often have to deal with columns which are UIDs that have a length
of say 128 bytes and that will be the majority (in terms of byte-size)
of that table. If I were now to create an index on such a UID I would
basically double the database size, correct ?
(I know I can experiment with this myself, but maybe a sql-guru here
already has the answer)

How does a compound index work:

Is it the same if I create two indexes compared to a single but compound index ?
I guess no, because reading the optimizer hints from the sqlite doc I understand
that sqlite will not use that index if I ask only for one of the column names, 
like:

CREATE TABLE T1(A, B, C);
CREATE INDEX T1Idx ON T1(B,C);
...
SELECT * FROM T1 WHERE B=3;

as far as I know this will most likely not use the index, but then I'm curious
what is the benefit or application of a compound index compared to two single
indexes ?

Again sorry for not being very specific in the questions.

Thanks

Marcus Grimm
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Slow Transaction Speed?

2009-05-27 Thread Marcus Grimm
Thanks Nick,

good point. ahh yes, I've read about this somewhere...

My extension is currently ".db", a quick check indicates
that using ".abc" gives a slight speed improvement, maybe 10%.
But that is already very close to the variation I get between
different test runs, so I'm not really sure if I have this
"microsoft effect" here.

Anyway, thanks for the reminder.

Well, I think now there is nothing wrong here,
it is just as it is... Currently I achieve about 10 transactions/second, maybe
not that bad... still slower than the "few dozen" that are mentioned on the
www pages, but I agree that there are too many issues that may affect the disk
speed.
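
For completeness, the "wrapping in a transaction" that gets around this
limit looks roughly like the following (a minimal sketch; table name
made up, error handling omitted):

    #include "sqlite3.h"

    /* Pay the disk-sync cost once per batch instead of once per row. */
    static void insert_batch(sqlite3 *db, int n)
    {
        int i;
        sqlite3_exec(db, "BEGIN", 0, 0, 0);
        for(i=0; i<n; i++){
            sqlite3_exec(db, "INSERT INTO t1(a) VALUES(1)", 0, 0, 0);
        }
        sqlite3_exec(db, "COMMIT", 0, 0, 0);
    }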


Marcus


Brandon, Nicholas (UK) wrote:
>> my development system is a Win XP, with of course NTFS, I
>> don't know which drive it has, I guess a standard 7200 rpm.
>>
> 
> What file extension (i.e. the letters after the dot in the filename) do
> you give the database?
> 
> I faintly recall there is a windows peculiarity with system restore or
> something similar that archives certain file extensions in the
> background. That may contribute to your slow down.
> 
> Nick
> 
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Slow Transaction Speed?

2009-05-27 Thread Marcus Grimm
Thank you for the hint.
My current approach to copying the database file is even more
restrictive -- it is encapsulated in an EXCLUSIVE transaction. ;)

In my case it is unlikely that it will block anybody from using
the server since it is a typical "working-hours" application and
the backup will usually take place around midnight.
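
By the way, newer sqlite versions (3.6.11 and later, if I'm not
mistaken) also offer an online backup API that avoids holding the
exclusive lock for the whole copy; a sketch, assuming both database
handles are already open:

    #include "sqlite3.h"

    /* Copy the "main" database of pSrc into pDst via the backup API. */
    static int backup_db(sqlite3 *pSrc, sqlite3 *pDst)
    {
        sqlite3_backup *pBackup = sqlite3_backup_init(pDst, "main", pSrc, "main");
        if( pBackup ){
            sqlite3_backup_step(pBackup, -1);   /* -1 = copy everything */
            sqlite3_backup_finish(pBackup);
        }
        return sqlite3_errcode(pDst);
    }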

kind regards

Marcus


Nuno Lucas wrote:
> On Wed, May 27, 2009 at 12:51 PM, Marcus Grimm <mgr...@medcom-online.de> 
> wrote:
>> Yes, I understood that this is unsafe and I'll not use it right now.
>> But my feeling is that it will be an option for the future. My databases
>> are not so big (say in megabyte range) and currently my server application
>> will do a backup every day (more or less a simple file copy). So even if
>> I'll see a corrupt database the customer will loose only one day work,
>> as maximum. This should be tolerated since I estimate the liklyhood
>> of such an event extremely low (powerfailure or OS crash at exactly the
>> "wrong" time), powerfailure by the way can be workaround by using a
>> battery pack which is common on server hardware, I guess.
> 
> Ok,  but note that backing up sqlite database files could get a
> corrupted database on restore if you don't back up the sqlite database
> file AND the journal file as an atomic operation.
> 
> A work around for this situation is to start an immediate transaction
> [1] (which will assure no other writes are pending, although allowing
> reads to proceed), backup the database (copy the file) and only then
> end the transaction (a simple "BEGIN IMMEDIATE" --> "END" operation).
> 
> 
> Regards,
> ~Nuno Lucas
> 
> [1] http://www.sqlite.org/lang_transaction.html
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Slow Transaction Speed?

2009-05-27 Thread Marcus Grimm
my development system is a Win XP, with of course NTFS, I don't
know which drive it has, I guess a standard 7200 rpm.

I also tried on Win 2000 and Vista; as expected, Vista appears
to be the slowest even though that machine has the best hardware, theoretically.

All measurements vary +- 20%.

Just out of curiosity: Is Linux considered to be faster than Windows here ?

kind regards

Marcus

D. Richard Hipp wrote:
> On May 27, 2009, at 7:51 AM, Marcus Grimm wrote:
> 
>>
>> Nuno Lucas wrote:
>>> On Wed, May 27, 2009 at 5:58 AM, Marcus Grimm <mgr...@medcom-online.de> wrote:
>>>>> On Tue, May 26, 2009 at 9:47 PM, Marcus Grimm <mgr...@medcom-online.de>
>>>>> http://www.sqlite.org/faq.html#q19
>>>> the faq as well as the speed comparison speak about a few dozen
>>>> transactions per second... that's why I'm wondering why I'm
>>>> almost ten times slower on windows...
>>> You don't say much about your hardware or how you are timing the
>>> transaction speed, so 10 times slower for those numbers are within  
>>> the
>>> error margin.
>> my hardware is just a standard desktop computer, not the fastest...
>> Okay, if my measurements are within the typical speed range, then that
>> explains why I see such slower speed.
> 
> 
> Your OS and filesystem configuration have a big impact too.  I've
> noticed, for example, that transactions are really slow on ReiserFS on
> Linux compared to Ext3.  What OS are you using?  And what filesystem?
> 
> 
> D. Richard Hipp
> d...@hwaci.com
> 
> 
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Slow Transaction Speed?

2009-05-27 Thread Marcus Grimm


Nuno Lucas wrote:
> On Wed, May 27, 2009 at 5:58 AM, Marcus Grimm <mgr...@medcom-online.de> wrote:
>>> On Tue, May 26, 2009 at 9:47 PM, Marcus Grimm <mgr...@medcom-online.de>
>>> http://www.sqlite.org/faq.html#q19
>> the faq as well as the speed comparison speak about a few dozen
>> transactions per second... that's why I'm wondering why I'm
>> almost ten times slower on windows...
> 
> You don't say much about your hardware or how you are timing the
> transaction speed, so 10 times slower for those numbers are within the
> error margin.

my hardware is just a standard desktop computer, not the fastest...
Okay, if my measurements are within the typical speed range, then that
explains why I see such slower speed.

> 
> Do the math yourself. It's more about disk rotation speed than any
> other factor, as you have already seen when wrapping those INSERTs
> within a transaction.

Yes, I know... there was also a funny proof here some weeks ago that
referred to the speed of light and the distance from the disk to the
memory, as far as I can remember.
I'm very well aware of the speed limitations, it was just that I was puzzled
by my results compared to the indication given on the sqlite pages.

> 
> For comparison, use PRAGMA SYNCHRONOUS=OFF, which will return as soon
> as the OS lets you (doesn't depend on the disk access).

Correct. I made some experiments today and switching synchronous to off
gives a dramatic speed improvement. Using the "normal" sync mode
I achieve an approx. 20% improvement as well.
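
For reference, a sketch of how I switch the mode per connection (the
handle db is assumed to be already open):

    #include "sqlite3.h"

    /* OFF    = no syncs at all (fast, unsafe on power loss / OS crash)
       NORMAL = syncs less often than FULL
       FULL   = the safe default */
    static void set_sync_mode(sqlite3 *db)
    {
        sqlite3_exec(db, "PRAGMA synchronous=NORMAL;", 0, 0, 0);
    }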

Yes, I understood that this is unsafe and I'll not use it right now.
But my feeling is that it will be an option for the future. My databases
are not so big (say in the megabyte range) and currently my server application
will do a backup every day (more or less a simple file copy). So even if
I see a corrupt database the customer will lose only one day's work,
at maximum. This should be tolerable since I estimate the likelihood
of such an event to be extremely low (power failure or OS crash at exactly the
"wrong" time); power failure, by the way, can be worked around by using a
battery pack, which is common on server hardware, I guess.

I agree that the new async vfs might help as well; I haven't tried it yet.

Thanks for sharing your thoughts

kind regards

Marcus


> 
> But note that this option is not safe and I would discourage people
> from using it until they understand very well what they are doing
> (it's not easy, and many times impossible, to recover a corrupt SQLite
> database). If you really want speed, you can try the new async VFS,
> which will do the atomic writes in a background thread.


> 
> 
> Regards,
> ~Nuno Lucas
> 
>> thanks for the reply
>>
>> Marcus
> 

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

