[sqlite] Will sqlite flush the change to the disk when page cache is full?

2009-11-12 Thread pierr

Hi,

  I have the impression that SQLite will flush changes to the disk when the
page cache is full, even if the "COMMIT" command has not been issued for the
transaction. If this is true, does it violate the isolation property of a
transaction, since other parallel transactions would be able to see the
uncommitted change? However, IMO this should not affect the atomicity
property: we can still roll back if the transaction fails after the flush
but before the "COMMIT", because the rollback journal is there.
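
For what it's worth, here is a minimal sketch (not from this thread) of how one
might check the isolation behaviour: a writer connection with a deliberately
tiny cache inserts many rows inside an open transaction while a second
connection tries to read. The file name, table, and row count are made up; the
expectation is that the reader sees either the old data or SQLITE_BUSY, never
the uncommitted rows.

/* Sketch: a spilling write transaction is never visible to another reader. */
#include <stdio.h>
#include <sqlite3.h>

int main(void)
{
    sqlite3 *wr, *rd;
    sqlite3_stmt *st;
    char sql[128];
    int i, rc;

    sqlite3_open("spill_demo.db", &wr);
    sqlite3_open("spill_demo.db", &rd);

    sqlite3_exec(wr, "CREATE TABLE IF NOT EXISTS t(x INTEGER PRIMARY KEY, pad TEXT);", 0, 0, 0);
    sqlite3_exec(wr, "PRAGMA cache_size=10;", 0, 0, 0);   /* tiny cache: forces a spill */

    sqlite3_exec(wr, "BEGIN;", 0, 0, 0);
    for (i = 0; i < 10000; i++) {                          /* enough rows to exceed 10 pages */
        sqlite3_snprintf(sizeof sql, sql,
            "INSERT INTO t VALUES(%d, 'some padding text to fill pages');", i);
        sqlite3_exec(wr, sql, 0, 0, 0);
    }

    /* Reader: either sees the pre-transaction row count (0) or gets
     * SQLITE_BUSY, because the spilling writer had to take an exclusive
     * lock before writing dirty pages. It never sees the uncommitted rows. */
    rc = sqlite3_prepare_v2(rd, "SELECT count(*) FROM t;", -1, &st, 0);
    if (rc == SQLITE_OK && sqlite3_step(st) == SQLITE_ROW)
        printf("reader sees %d rows\n", sqlite3_column_int(st, 0));
    else
        printf("reader blocked: %s\n", sqlite3_errmsg(rd));
    sqlite3_finalize(st);

    sqlite3_exec(wr, "ROLLBACK;", 0, 0, 0);
    sqlite3_close(wr);
    sqlite3_close(rd);
    return 0;
}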


Re: [sqlite] suggestion on the database design

2009-08-20 Thread pierr


John Machin wrote:
> 
> On 21/08/2009 1:29 PM, pierr wrote:
>> 
>> Simon Slavin-2 wrote:
>>>
>>> On 21 Aug 2009, at 3:26am, pierr wrote:
>>>
>>>> I did not know the sequence in defining the field matters. This is
>>>> what I should have done.
>>> Sorry, I should have explained better.  You were right: there is no  
>>> difference.  I was just rearranging the fields in the classic way:  
>>> with the primary key column as the first column.  It helps me think  
>>> about how the database works.  You did nothing wrong.
>>>
>> Hi Simon,
>>   It do make a difference. 
>>   With this schema, 
>>   CREATE TABLE IF NOT EXISTS tblIndex(
>>  frame_type INTEGER,
>>  pts VARCHAR(5)
>>  ts_start INTEGER PRIMARY KEY,
>>  ts_end INTEGER,
>>  ) 
>>  There will be a rowid field in the database ; and there is a
>> sqlite_autoindex_tblIndex_1 build on ts_start automatically ; so
>> 1,800,000
>> records (16bytes each) takes 62M . 
> 
> Please don't type from memory -- what you have above has TWO syntax 
> errors, and doesn't (with those random comma placements fixed) produce 
> the result that you say -- and do read my previous message.
> 
> Here is an (annotated) copy/paste of an actual session:
> 
> sqlite>   CREATE TABLE IF NOT EXISTS tblIndex(
> ...>  frame_type INTEGER,
> ...>  pts VARCHAR(5)<<<=== missing comma
> ...>  ts_start INTEGER PRIMARY KEY,
> ...>  ts_end INTEGER,
> ...>  )
> ...> ;
> SQL error: near "ts_start": syntax error
> sqlite>   CREATE TABLE IF NOT EXISTS tblIndex(
> ...>  frame_type INTEGER,
> ...>  pts VARCHAR(5),
> ...>  ts_start INTEGER PRIMARY KEY,
> ...>  ts_end INTEGER, <<<=== superfluous comma
> ...>  );
> SQL error: near ")": syntax error
> sqlite>   CREATE TABLE IF NOT EXISTS tblIndex(
> ...>  frame_type INTEGER,
> ...>  pts VARCHAR(5),
> ...>  ts_start INTEGER PRIMARY KEY,
> ...>  ts_end INTEGER
> ...>  );
> sqlite> select * from sqlite_master;
> table|tblIndex|tblIndex|2|CREATE TABLE tblIndex(
>   frame_type INTEGER,
>   pts VARCHAR(5),
>   ts_start INTEGER PRIMARY KEY,
>   ts_end INTEGER
>   ) <<<=== no index !!
> sqlite> drop table tblIndex;
> sqlite>   CREATE TABLE IF NOT EXISTS tblIndex(
> ...>  frame_type INTEGER,
> ...>  pts VARCHAR(5),
> ...>  ts_start INT PRIMARY KEY,  <<<=== using your original INT 
> instead of Simon's INTEGER
> ...>  ts_end INTEGER
> ...>  );
> sqlite> select * from sqlite_master;
> table|tblIndex|tblIndex|2|CREATE TABLE tblIndex(
>   frame_type INTEGER,
>   pts VARCHAR(5),
>   ts_start INT PRIMARY KEY,
>   ts_end INTEGER
>   )
> index|sqlite_autoindex_tblIndex_1|tblIndex|3| <<<=== index!!
> sqlite>
> 
> 
After reading your previous message, I now understand the difference
between "INT PRIMARY KEY" and "INTEGER PRIMARY KEY", which I had not noticed
before. And sorry for "typing from memory" :)
 



Re: [sqlite] suggestion on the database design

2009-08-20 Thread pierr


Simon Slavin-2 wrote:
> 
> 
> On 21 Aug 2009, at 3:26am, pierr wrote:
> 
>> I did not know the sequence in defining the field matters. This is
>> what I should have done.
> 
> Sorry, I should have explained better.  You were right: there is no  
> difference.  I was just rearranging the fields in the classic way:  
> with the primary key column as the first column.  It helps me think  
> about how the database works.  You did nothing wrong.
> 
Hi Simon,
  It does make a difference.
  With this schema, 
  CREATE TABLE IF NOT EXISTS tblIndex(
 frame_type INTEGER,
 pts VARCHAR(5)
 ts_start INTEGER PRIMARY KEY,
 ts_end INTEGER,
 ) 
 There will be a rowid field in the database, and a
sqlite_autoindex_tblIndex_1 index is built on ts_start automatically, so
1,800,000 records (16 bytes each) take 62M.

  With your schema, which is much better:
  CREATE TABLE IF NOT EXISTS tblIndex(
 ts_start INTEGER PRIMARY KEY,
 ts_end INTEGER,
 frame_type INTEGER,
 pts VARCHAR(5)
 ) 
 There will be NO separate rowid field in the database (ts_start itself is
the rowid), and 1,800,000 records (16 bytes each) take only 35M.
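
For anyone reproducing this comparison, one convenient way to measure the
on-disk size from code is to multiply PRAGMA page_count by PRAGMA page_size.
A small sketch follows; the database file name is made up.

/* Sketch: report database size as page_count * page_size. */
#include <stdio.h>
#include <sqlite3.h>

static long long pragma_int(sqlite3 *db, const char *pragma)
{
    sqlite3_stmt *st;
    long long v = -1;
    if (sqlite3_prepare_v2(db, pragma, -1, &st, 0) == SQLITE_OK) {
        if (sqlite3_step(st) == SQLITE_ROW)
            v = sqlite3_column_int64(st, 0);
        sqlite3_finalize(st);
    }
    return v;
}

int main(void)
{
    sqlite3 *db;
    if (sqlite3_open("tblIndex.db", &db) != SQLITE_OK) return 1;
    long long pages = pragma_int(db, "PRAGMA page_count;");
    long long psize = pragma_int(db, "PRAGMA page_size;");
    printf("database size: %lld pages * %lld bytes = %lld bytes\n",
           pages, psize, pages * psize);
    sqlite3_close(db);
    return 0;
}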




>>> Whichever one of these is at fault, a delete command selecting on an
>>> indexed column and deleting 90,000 records from a five-column table
>>> should not take 17 seconds.
>>
>> I am sorry, I should have mentioned that it (17 seconds to delete 90,000
>> records) was tested on a 350M MIPS CPU. And after changing to the schema
>> you suggested, it still takes 17 seconds to delete 90,000 records.
>> On my 1.8 GHz Ubuntu desktop it took 800ms. So is this delete performance
>> the limit we can achieve? Are there any other options to improve it?
> 
> 
>> BTW: I used the following options to build the library. Is there
>> anything I can explore here? (-O2 and -Os seem to make no big
>> difference to performance.)
>> mips24k-linux-gcc -Os -fPIC -c *.c
>> mips24k-linux-gcc -shared  -o mips_libsqlite3.so.1 sqlite3.o
> 
> 
> Putting these together, your 350 MIPS CPU machine is a MIPS 24K  
> machine.  There are various things to consider: not only CPU speed but  
> also memory bandwidth, memory speed, hard disk throughput, and other  
> things I have no idea about.  And I have no idea what part threading  
> would pay on such a platform.  Sorry but I have no experience with  
> these.  Perhaps someone else here does.
> 
> Simon.
> 
Thanks for your insight, Simon.



Re: [sqlite] suggestion on the database design

2009-08-20 Thread pierr


Simon Slavin-2 wrote:
> 
> If ts_start is an always increasing integer (i.e. no two records have  
> the same value) then I would see it as your primary key and as your  
> row id.  In other words, your structure is more like
> 
>   CREATE TABLE IF NOT EXISTS tblIndex(
>  ts_start INTEGER PRIMARY KEY,
>  ts_end   INTEGER,
>  frame_type   INTEGER,
>  pts  VARCHAR(5))
> 
> 
Thanks. I did not know the sequence in defining the field matters. This is
what I should have done.


Simon Slavin-2 wrote:
> 
> inserting 50 records a second shouldn't be a problem.  If they're  
> generated in batches rather than individually, use BEGIN ... COMMIT to  
> speed up the insertion dramatically but it should be fast enough  
> either way.
> 
> What interface or API to SQLite are you using ?  Or are you writing C  
> code and using the native library commands ?  17 seconds is far longer  
> than expected if you did everything as you described above.  It should  
> be far faster even without multiple threads/connections.
> 
> To test it, get your database ready to do one of the DELETE FROM  
> commands, and have your program quit or abort.  Then open the database  
> in the sqlite3 command-line application, and issue the exact DELETE  
> FROM command your application would execute.  If the command-line  
> program takes 17 seconds then the problem is in your data somewhere.   
> If it doesn't, the problem is in your own application or in your use  
> of the API you're using.
> 
>> Any way to reduce this ? We don't care if the records is synced to  
>> the hard
>> disk immediately. We are thinking start a seperated thread to do the  
>> delete
>> so to make this call to be async. The things I am not sure is  
>> whether the
>> (long time )delete will impact the insert performance if they share  
>> the same
>> connection? Or should I use a seperated connection for insert and  
>> delete?
>> But In this way, do they need be synced in application level?
> 
> Whichever one of these is at fault, a delete command selecting on an  
> indexed column and deleting 90,000 records from a five-column table  
> should not take 17 seconds.
> 
> 

I am sorry, I should have mentioned that it (17 seconds to delete 90,000
records) was tested on a 350M MIPS CPU. And after changing to the schema you
suggested, it still takes 17 seconds to delete 90,000 records.
On my 1.8 GHz Ubuntu desktop it took 800ms. So is this delete performance
the limit we can achieve? Are there any other options to improve it?

BTW: I used the following options to build the library. Is there anything I
can explore here? (-O2 and -Os seem to make no big difference to performance.)
mips24k-linux-gcc -Os -fPIC -c *.c
mips24k-linux-gcc -shared  -o mips_libsqlite3.so.1 sqlite3.o 
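
On the insert side, here is a minimal sketch of the BEGIN ... COMMIT batching
Simon mentions above, using the tblIndex schema from this thread. The values
and the batch shape are made up, not part of the original application.

/* Sketch: insert a batch of records inside one transaction so the
 * per-row journal/fsync overhead is paid once per batch. */
#include <sqlite3.h>

int insert_batch(sqlite3 *db, int ts_base, int nrec)
{
    sqlite3_stmt *st;
    int i, rc;

    sqlite3_exec(db, "BEGIN;", 0, 0, 0);
    rc = sqlite3_prepare_v2(db,
        "INSERT INTO tblIndex(ts_start, ts_end, frame_type, pts) VALUES(?,?,?,?);",
        -1, &st, 0);
    if (rc != SQLITE_OK) { sqlite3_exec(db, "ROLLBACK;", 0, 0, 0); return rc; }

    for (i = 0; i < nrec; i++) {
        sqlite3_bind_int(st, 1, ts_base + i);
        sqlite3_bind_int(st, 2, ts_base + i + 1);
        sqlite3_bind_int(st, 3, 0);
        sqlite3_bind_text(st, 4, "00000", -1, SQLITE_STATIC);
        rc = sqlite3_step(st);               /* expect SQLITE_DONE */
        sqlite3_reset(st);
        if (rc != SQLITE_DONE) break;
    }
    sqlite3_finalize(st);

    /* Commit if every row went in, otherwise roll the whole batch back. */
    if (rc == SQLITE_DONE || rc == SQLITE_OK) {
        sqlite3_exec(db, "COMMIT;", 0, 0, 0);
        return SQLITE_OK;
    }
    sqlite3_exec(db, "ROLLBACK;", 0, 0, 0);
    return rc;
}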



[sqlite] suggestion on the database design

2009-08-20 Thread pierr

Hi, 
  We have a database with a very simple schema:
CREATE TABLE IF NOT EXISTS tblIndex(
 frame_type  INT,
 pts  VARCHAR(5),
 ts_start  INT primary key,
 ts_end  INT)

And the application scenario is:
1. Every second the user will insert 2 ~ 50 records, and the ts_start field
of those records is always increasing. After 8 hours there are at most
1_800_000 records. By setting the sync mode to off, the performance seems OK
so far. And because each record is only 16 bytes, we could buffer them in
the application even if the insert speed were not fast enough.

2. After 8 hours, the user will tell me to delete the oldest data by giving
the upper bound on ts_start, so I will do DELETE FROM tblIndex WHERE
ts_start < upper_bound_ts_start. Deleting 90_000 records (half an hour's
worth) out of the 1_800_000 now takes 17 seconds, a little longer than
expected. Is there any way to reduce this? We don't care whether the deleted
records are synced to the hard disk immediately. We are thinking of starting
a separate thread to do the delete so that the call becomes asynchronous
(see the chunked-delete sketch below). What I am not sure about is whether
the (long-running) delete will impact insert performance if they share the
same connection. Or should I use separate connections for insert and delete?
But in that case, do they need to be synchronized at the application level?

3. Search: SELECT ts_start FROM tblIndex WHERE ts_start BETWEEN ? AND ?
As ts_start is the primary key, the performance is OK for our needs now. I
am thinking I should use a separate connection for search, right?

Configuration of sqlite:
hard disk database (USB interface)
cache size is 2000
page size is 1024
sync mode is 0 (off)
journal_mode is truncate


Thanks for any suggestion to improve the delete performance or about the
overall design.
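
As a starting point for item 2, here is a rough sketch that deletes the old
rows in short chunks, so each implicit write transaction stays small and a
concurrent inserter is not blocked for the whole 17 seconds. The chunk size of
1000 and the loop shape are my own assumptions, not something SQLite requires.

/* Sketch: incremental delete in small chunks. */
#include <sqlite3.h>

int delete_before(sqlite3 *db, int upper_bound_ts_start)
{
    sqlite3_stmt *st;
    int rc = sqlite3_prepare_v2(db,
        "DELETE FROM tblIndex WHERE ts_start IN "
        "  (SELECT ts_start FROM tblIndex WHERE ts_start < ? LIMIT 1000);",
        -1, &st, 0);
    if (rc != SQLITE_OK) return rc;

    do {
        sqlite3_bind_int(st, 1, upper_bound_ts_start);
        rc = sqlite3_step(st);               /* one short implicit transaction */
        sqlite3_reset(st);
        /* sleep or yield here if the insert path needs the connection */
    } while (rc == SQLITE_DONE && sqlite3_changes(db) > 0);

    sqlite3_finalize(st);
    return rc == SQLITE_DONE ? SQLITE_OK : rc;
}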


Re: [sqlite] search time is non-deterministic in multi-thread environment

2009-07-28 Thread pierr


Jay A. Kreibich-2 wrote:
> 
> 
>   THREADSAFE=1 is "Serialize mode" and THREADSAFE=2 is "Multithread"
>   mode.  THREADSAFE=2 (basic Multithread) actually offers less protection,
>   requiring the application to provide its own locks to prevent multiple
>   threads from accessing the DB at the same time, while THREADSAFE=1
>   (Serialize) handles all that for you.  Generally, unless you're doing
>   a lot of thread and lock management yourself, you want to use =1.
> 

http://www.sqlite.org/threadsafe.html
"Multi-thread. In this mode, SQLite can be safely used by multiple threads 
provided that no single database connection is used simulataneously in two
or more threads." 

If I have one (different) connection for each thread, it seems there is no
need to synchronize the threads' access to the database at the application
level, because they go through different connections. Did I read it wrongly?
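
For reference, a minimal sketch of that one-connection-per-thread pattern in
multi-thread mode. The file name, table, and statements are made up; it
assumes the library was compiled threadsafe, and a real program would retry
on SQLITE_BUSY.

/* Sketch: each thread opens its own connection; SQLite still serializes
 * access to the database file with its normal locking. */
#include <pthread.h>
#include <stdio.h>
#include <sqlite3.h>

static void *worker(void *arg)
{
    const char *sql = arg;                 /* each thread runs its own statement */
    sqlite3 *db;
    if (sqlite3_open("demo.db", &db) == SQLITE_OK) {   /* private connection */
        char *err = 0;
        /* Either statement may see SQLITE_BUSY while the other connection
         * holds the file lock; a real program would retry or set a timeout. */
        if (sqlite3_exec(db, sql, 0, 0, &err) != SQLITE_OK) {
            fprintf(stderr, "%s\n", err ? err : "error");
            sqlite3_free(err);
        }
        sqlite3_close(db);
    }
    return 0;
}

int main(void)
{
    pthread_t a, b;
    sqlite3 *setup;

    sqlite3_config(SQLITE_CONFIG_MULTITHREAD);  /* must precede other SQLite calls */
    sqlite3_initialize();

    /* Create the schema up front on a throwaway connection. */
    sqlite3_open("demo.db", &setup);
    sqlite3_exec(setup, "CREATE TABLE IF NOT EXISTS t(x);", 0, 0, 0);
    sqlite3_close(setup);

    pthread_create(&a, 0, worker, "INSERT INTO t VALUES(1);");
    pthread_create(&b, 0, worker, "SELECT count(*) FROM t;");
    pthread_join(a, 0);
    pthread_join(b, 0);
    return 0;
}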


Re: [sqlite] search time is non-deterministic in multi-thread environment

2009-07-28 Thread pierr


Igor Tandetnik wrote:
> 
> pierr wrote:
>> Hi,
>>   My sqlite is configure as serialized (THREADSAFE=1). My application
>> has only one connection but it is shared by two thread. One thread
>> will do insert/update/delete in the background , another thread will
>> do the select upon request of the user from gui.
>>  Typically, we will call following function 12 times (with different
>> parameters ,of course)before showing the pages to user. In the
>> standalone test (single  thread), a call to following function will
>> take less than 40ms, so 12 times will take less than 500 ms and it is
>> acceptable. However, in the real application, sometimes this function
>> took 1000ms to return the result which make the gui slow.
> 
> When the second thread makes SQLite calls, it takes a lock on the 
> connection. If the UI thread runs a SELECT at this moment, it also tries 
> to take that lock, and so has to wait until the second thread releases 
> it.
> 
1. I expected the second thread to impact the SELECT thread, but not by that
much: the search time increases as much as 6 times on average. In my
experiment, when I add a usleep(1) (10ms) in the INSERT thread, the search
time goes back to normal. I am not sure if this is the POINT. I will try
this in my real application to see what happens.

2. I also suspect the increase in search time is caused by the SELECT thread
not being scheduled to run in time, either because of interrupts that happen
very frequently in my system, or because other threads use too much CPU
time, like the other_thread in my experiment, which does nothing but
while(1). (Actually, I don't quite understand why this would impact the
SELECT thread that much. Although other_thread has a while(1), I thought the
kernel (Linux 2.6) would use a time-sharing scheduling policy, as both
other_thread and select_thread are normal threads, i.e. not real-time
threads and with no priority set explicitly, so priority-based preemption
should not apply here. Any idea?)

3. I think the way I use SQLite (one thread for writes, another for reads)
is not uncommon. Do we have any best practice for the question I am asking
here?


Igor Tandetnik wrote:
> 
> Further, you cannot open two connections to the same in-memory database.
> 

This would be the thing that prevents me from going to THREADSAFE=2.



Re: [sqlite] search time is non-deterministic in multi-thread environment

2009-07-28 Thread pierr



pierr wrote:
> 
> Hi,
>My sqlite is configure as serialized (THREADSAFE=1). My application has
> only one connection but it is shared by two thread. One thread will do
> insert/update/delete in the background , another thread will do the select
> upon request of the user from gui. 
>   Typically, we will call following function 12 times (with different
> parameters ,of course)before showing the pages to user. In the standalone
> test (single  thread), a call to following function will take less than
> 40ms, so 12 times will take less than 500 ms and it is acceptable.
> However, in the real application, sometimes this function took 1000ms to
> return the result which make the gui slow.
> 
>  int search_eit_event_in_time_inteval(...)
>  86 {
>  93 
>  94 sql = (char *)" SELECT *,rowid FROM tblEvent_basic "
>  95   " WHERE "
>  96   " service_id = ? AND "
>  97   " start_time < ? AND end_time > ? "
>  98   " ORDER by start_time ASC";
>  }
> 
>  I did some experiment try to understand why this happend? Here is the
> code I used :
>  http://pastebin.ca/1509723
> 
>  I found :
>  1. When  i create a write_thread
> a. with usleep(1) in while(1) 
> search time always < 50ms
> b. without usleep(1) in while(1) 
>  17.7 % search time > 50ms , usually 200ms
>   
>  2. When  i create another_thread (no database access at all)
> a. with usleep(1) in while(1) 
> search time always < 50ms
> b. without usleep(1) in while(1) 
>  21.7 % search time > 50ms , usually 200ms.
> 
>Any insight? Should I go with THREADSAFE=2 and two connections: one for
> read ,another for write? Thanks. 
> 
I was using an in-memory database. This is important background information
:)


[sqlite] search time is non-deterministic in multi-thread environment

2009-07-28 Thread pierr

Hi,
   My SQLite is configured as serialized (THREADSAFE=1). My application has
only one connection, but it is shared by two threads. One thread does
insert/update/delete in the background; the other thread does the selects
upon user requests from the GUI.
  Typically, we call the following function 12 times (with different
parameters, of course) before showing the pages to the user. In a standalone
test (single thread), a call to the function takes less than 40ms, so 12
calls take less than 500ms, which is acceptable. However, in the real
application this function sometimes takes 1000ms to return the result, which
makes the GUI slow.

 int search_eit_event_in_time_inteval(...)
 {
     sql = (char *)" SELECT *,rowid FROM tblEvent_basic "
                   " WHERE "
                   " service_id = ? AND "
                   " start_time < ? AND end_time > ? "
                   " ORDER by start_time ASC";
 }

 I did some experiments to try to understand why this happens. Here is the
code I used:
 http://pastebin.ca/1509723

 I found:
 1. When I create a write_thread
    a. with usleep(1) in while(1):
       search time is always < 50ms
    b. without usleep(1) in while(1):
       17.7% of searches take > 50ms, usually 200ms

 2. When I create another_thread (no database access at all)
    a. with usleep(1) in while(1):
       search time is always < 50ms
    b. without usleep(1) in while(1):
       21.7% of searches take > 50ms, usually 200ms

   Any insight? Should I go with THREADSAFE=2 and two connections, one for
reads and another for writes? Thanks.
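
Since the pastebin link above may not stay available, here is a rough sketch
of the kind of experiment described: a writer thread and a reader sharing one
serialized connection, with the reader timing each SELECT. The table, the row
counts, and the loop lengths are made-up stand-ins, not the original code.

/* Sketch: shared serialized connection, writer thread + timed reads. */
#include <pthread.h>
#include <stdio.h>
#include <sys/time.h>
#include <sqlite3.h>

static sqlite3 *db;                          /* one shared, serialized handle */

static void *write_thread(void *unused)
{
    int i = 0;
    while (1) {
        char sql[64];
        sqlite3_snprintf(sizeof sql, sql, "INSERT INTO t VALUES(%d);", i++);
        sqlite3_exec(db, sql, 0, 0, 0);
        /* usleep(10000); */                 /* the short sleep from the experiment */
    }
    return 0;
}

static long long now_us(void)
{
    struct timeval tv;
    gettimeofday(&tv, 0);
    return (long long)tv.tv_sec * 1000000 + tv.tv_usec;
}

int main(void)
{
    pthread_t wr;
    int i;

    sqlite3_open(":memory:", &db);           /* in-memory DB, as in this thread */
    sqlite3_exec(db, "CREATE TABLE t(x);", 0, 0, 0);
    pthread_create(&wr, 0, write_thread, 0);

    for (i = 0; i < 100; i++) {
        long long t0 = now_us();
        sqlite3_exec(db, "SELECT count(*) FROM t;", 0, 0, 0);
        printf("select took %lld us\n", now_us() - t0);
    }
    return 0;
}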


Re: [sqlite] how to break a subquery into 2 simple queries

2009-07-10 Thread pierr



Igor Tandetnik wrote:
> 
> pierr wrote:
>> Following 2 statements took 400ms to be excuted on a 350M MIPS CPU
>> and it is a memory database:
>>
>> "DELETE FROM tblEvent_type WHERE eguid in (SELECT rowid FROM
>> tblEvent_basic WHERE sguid=11);";
>> "DELETE FROM tblEvent_group WHERE eguid in (SELECT rowid FROM
>> tblEvent_basic WHERE sguid=11);";
>> (An index has been created on tblEvent_group(eguid) , no other index
>> so far.)
> 
> You might benefit from an index on tblEvent_basic(sguid). An index on 
> tblEvent_group(eguid) may actually hurt performance.
> 
> 
Igor,
That is exactly what I found. Why would an index on tblEvent_group(eguid)
hurt the performance?
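
One way to see what changes is to print the query plan for the DELETE with
and without the index. A small helper sketch follows; the exact EXPLAIN QUERY
PLAN output format varies between SQLite versions.

/* Sketch: dump the rows returned by EXPLAIN QUERY PLAN for a statement. */
#include <stdio.h>
#include <sqlite3.h>

static void show_plan(sqlite3 *db, const char *sql)
{
    sqlite3_stmt *st;
    printf("plan for: %s\n", sql);
    if (sqlite3_prepare_v2(db, sql, -1, &st, 0) != SQLITE_OK) {
        printf("  error: %s\n", sqlite3_errmsg(db));
        return;
    }
    while (sqlite3_step(st) == SQLITE_ROW) {
        int i, n = sqlite3_column_count(st);
        printf(" ");
        for (i = 0; i < n; i++)
            printf(" %s", (const char *)sqlite3_column_text(st, i));
        printf("\n");
    }
    sqlite3_finalize(st);
}

/* Usage (db already open, schema loaded):
 *   show_plan(db, "EXPLAIN QUERY PLAN DELETE FROM tblEvent_group "
 *                 "WHERE eguid IN (SELECT rowid FROM tblEvent_basic WHERE sguid=11);");
 */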




Re: [sqlite] SQLITE is not working in pthread

2009-07-09 Thread pierr



Pramoda M. A wrote:
> 
> Hi All
> 
>Sqlite is not running in pthread. At first time, sqlite_open will
> execute in pthread. From next time, no one API is executing.
> Please guide me to make SQLite run in pthread.
> Please anybody help me.
> 
> 
Pramoda,
 Could you please provide a code snippet so that we can see exactly what you
are doing?


[sqlite] how to break a subquery into 2 simple queries

2009-07-09 Thread pierr

The following 2 statements take 400ms to execute on a 350M MIPS CPU, against
an in-memory database:

 "DELETE FROM tblEvent_type WHERE eguid in (SELECT rowid FROM tblEvent_basic
WHERE sguid=11);";
 "DELETE FROM tblEvent_group WHERE eguid in (SELECT rowid FROM
tblEvent_basic WHERE sguid=11);";
(An index has been created on tblEvent_group(eguid); no other index so
far.)

I also notice that "(SELECT rowid FROM tblEvent_basic WHERE sguid=11)" is
executed twice, so I'd like to try splitting the subquery out, roughly like
the pseudo-code below, to see if there is any performance gain:

result =  exe_sql "(SELECT rowid FROM tblEvent_basic WHERE sguid=11);";
 exe_sql "DELETE FROM tblEvent_type WHERE eguid in  (result)
 exe_sql "DELETE FROM tblEvent_group WHERE eguid in (result)

How do I get the parameter (result) bound into the following statement in
sqlite?
"DELETE FROM tblEvent_group WHERE eguid in (?)"  -- how to bind result here?
I am using the sqlite3 C API directly.
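
A single "?" can only be bound to one value, not to a whole result set. One
common workaround (a sketch, not the only answer; error handling is trimmed)
is to materialize the subquery once into a TEMP table and let both DELETEs
read from it.

/* Sketch: run the shared subquery once into a temp table, reuse it twice. */
#include <sqlite3.h>

int delete_for_sguid(sqlite3 *db, int sguid)
{
    sqlite3_stmt *st;
    int rc;

    sqlite3_exec(db, "BEGIN;", 0, 0, 0);
    sqlite3_exec(db,
        "CREATE TEMP TABLE IF NOT EXISTS tmp_eguid(eguid INTEGER);"
        "DELETE FROM tmp_eguid;", 0, 0, 0);

    /* Materialize the shared subquery once, with the parameter bound here. */
    rc = sqlite3_prepare_v2(db,
        "INSERT INTO tmp_eguid SELECT rowid FROM tblEvent_basic WHERE sguid=?;",
        -1, &st, 0);
    if (rc != SQLITE_OK) goto fail;
    sqlite3_bind_int(st, 1, sguid);
    sqlite3_step(st);
    sqlite3_finalize(st);

    /* Both DELETEs reuse the materialized set. */
    sqlite3_exec(db,
        "DELETE FROM tblEvent_type  WHERE eguid IN (SELECT eguid FROM tmp_eguid);"
        "DELETE FROM tblEvent_group WHERE eguid IN (SELECT eguid FROM tmp_eguid);",
        0, 0, 0);

    return sqlite3_exec(db, "COMMIT;", 0, 0, 0);
fail:
    sqlite3_exec(db, "ROLLBACK;", 0, 0, 0);
    return rc;
}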





Re: [sqlite] concurrency differences between in-memory and on disk?

2009-06-29 Thread pierr


Igor Tandetnik wrote:
> 
> Daniel Watrous wrote:
>> I've developed an application that has very high concurrency.  In my
>> initial testing we used SQLite 3 from python, but we experienced too
>> many locks and the database always fell behind.
> 
> What precisely is the nature of the concurrency? Are you opening 
> multiple connections to the same database, or sharing a single 
> connection between threads? Are you trying to write concurrently, and if 
> so, is it to the same table or to different tables?
> 
> Basically, there are three ways SQLite can be used (they can also be 
> combined):
> 
> 1. Many connections to the same database. In this case, there's a 
> many-readers-single-writer lock at the database level, so at any point 
> in time only one connection can write.
> 
> 2. A single connection shared by multiple threads. A connection 
> maintains a mutex that every API call acquires on entry and releases on 
> return, so all calls are serialized. However, one thread can, say, step 
> through a select resultset row-by-row, while another inserts row after 
> row into some table: these calls can interleave, and would appear almost 
> concurrent.
>  
Hi Igor,
So, for the "A single connection shared by multiple threads" case, is the
statement "there's a many-readers-single-writer lock at the database level"
still true? At least the reads and writes would appear concurrent, as you
said.

Are there any examples showing the difference between these 2 usage models?
Thanks.

- Pierr



Re: [sqlite] first few characters of varchar() corrupted when SELECTing from a C++ program?

2009-06-29 Thread pierr



uralmazamog wrote:
> 
> Greetings,
> 
> maybe it's just me being stupid, I'll best jump right to the code:
> 
> sqlite3_open_v2( "testdat", &sqlDB, SQLITE_OPEN_READWRITE |
> SQLITE_OPEN_CREATE, NULL );
> sqlite3_prepare_v2( sqlDB, "SELECT b FROM whee WHERE a='bing';", -1,
> &sqlStat, NULL );
> sqlite3_step( sqlStat );
> const unsigned char *testValue = sqlite3_column_text( sqlStat, 0 );
> 
> both a and b are varchar(20)s
> 
> calling the query from the command-line tool returns the proper result
> "bang", however, running this code the value testValue shows up as ""
> for longer strings only the first four characters are corrupted, and the
> rest reads okay, what am I doing wrong?
> 
> 
Try this: copy the text out while the row is still current (before the next
sqlite3_step/sqlite3_reset/sqlite3_finalize invalidates the pointer returned
by sqlite3_column_text), and null-terminate it:
char testValue[21];
int n = sqlite3_column_bytes(sqlStat, 0); if (n > 20) n = 20;
memcpy(testValue, sqlite3_column_text(sqlStat, 0), n);
testValue[n] = '\0';




Re: [sqlite] How to know how much memory is used by sqlite

2009-06-18 Thread pierr



pierr wrote:
> 
> 
> pierr wrote:
>> 
>> 
>> D. Richard Hipp wrote:
>>> 
>>> 
>>> On Jun 15, 2009, at 9:04 AM, pierr wrote:
>>> 
>>>>
>>>> Hi all,
>>>>   I am using an memory type sqlite database in our embedded
>>>> application. After the applicate run a while ,the system will run  
>>>> ourt of
>>>> out of memory. So, is there a way to know how many memory is used by  
>>>> sqlite?
>>>> That would include the memory for page cache , any other other  
>>>> internal
>>>> memory used the sqlite ,as well as the memory for database itself.  
>>>> Thanks
>>>> for your help.
>>> 
>>> http://www.sqlite.org/c3ref/memory_highwater.html
>>> http://www.sqlite.org/malloc.html
>>> 
>>> 
>> 
>> By using the memory_highwater api above , I found that most of the memory
>> is used for page cache. By limiting the cache size , I can have a good
>> control of how many heap memory will be used.
>> 
>> INSERT Record NumbersPAGE SiZE   CACHE SIZE HIGH WATER
>> MARK
>> (110bytes per record)
>> 
>> 10001024   20
>> 98,256
>> 10001024   2000
>> 271,040
>> 80001024   2000
>> 1,562,144
>> 80001024   20
>> 99,200
>> 
>> And as Simon suggest ,when I close the connection, the memory used by
>> sqlite will be Zero.
>> 
>> I am more clear about how sqlite use memory now , however ,as I am using
>> memory database ,I also cares about the database size. That is what is
>> proportion of database size to the raw data size.
>> 
>> I insert a record of 112 bytes 10,1000,and 1 times repectively and
>> here list the output database size. For The last case ,it has a propotion
>> about 1.45. What cost the extra 0.45 here beside the btree? 
>> 
>>  10240  2009-06-17 14:38 /etc/eit.flash.sqlite.10
>>  171008 2009-06-17 14:32 /etc/eit.flash.sqlite.1000
>>  1634304   2009-06-17 14:42 /etc/eit.flash.sqlite.1
>> 
>> (I am using the Flash database instead of the memory data here as I think
>> the result should be the same.)
>> 
>> 
> 
> 
> I was wrong here.
> 
> In my application (Not the test enviroment mentiond above), after the
> sqlite eating up more than 8M bytes in the memory , I dumpded the database
> to the Flash but it is only 360K which is very reasonable for my
> application. The database was configured as cache_szie = 20
> ,page_size=1024 ,so the page cache would not take too much memory ,it
> should be less than 100K according to above measurement. 
> 
> So, for the non-memory based data base ,the total storage needed is :
> generated database size + memory used by sqlite3 internal.
> 
> But for the memory database, the total storage needed will be much bigger
> than that. What will eat the extra buck of memory here?  I can think of
> the journey file ,but it should be K level memory..
> 
> Thanks for help me out.
> 

I made a mistake.
Before inserting into the database, I did a query but forgot to finalize the
statement, which ate a lot of memory. After adding the sqlite3_finalize()
call, the memory consumption is reasonable now.



Re: [sqlite] How to know how much memory is used by sqlite

2009-06-17 Thread pierr


pierr wrote:
> 
> 
> D. Richard Hipp wrote:
>> 
>> 
>> On Jun 15, 2009, at 9:04 AM, pierr wrote:
>> 
>>>
>>> Hi all,
>>>   I am using an memory type sqlite database in our embedded
>>> application. After the applicate run a while ,the system will run  
>>> ourt of
>>> out of memory. So, is there a way to know how many memory is used by  
>>> sqlite?
>>> That would include the memory for page cache , any other other  
>>> internal
>>> memory used the sqlite ,as well as the memory for database itself.  
>>> Thanks
>>> for your help.
>> 
>> http://www.sqlite.org/c3ref/memory_highwater.html
>> http://www.sqlite.org/malloc.html
>> 
>> 
> 
> By using the memory_highwater api above , I found that most of the memory
> is used for page cache. By limiting the cache size , I can have a good
> control of how many heap memory will be used.
> 
> INSERT Record NumbersPAGE SiZE   CACHE SIZE HIGH WATER
> MARK
> (110bytes per record)
> 
> 10001024   20
> 98,256
> 10001024   2000
> 271,040
> 80001024   2000
> 1,562,144
> 80001024   20
> 99,200
> 
> And as Simon suggest ,when I close the connection, the memory used by
> sqlite will be Zero.
> 
> I am more clear about how sqlite use memory now , however ,as I am using
> memory database ,I also cares about the database size. That is what is
> proportion of database size to the raw data size.
> 
> I insert a record of 112 bytes 10,1000,and 1 times repectively and
> here list the output database size. For The last case ,it has a propotion
> about 1.45. What cost the extra 0.45 here beside the btree? 
> 
>  10240  2009-06-17 14:38 /etc/eit.flash.sqlite.10
>  171008 2009-06-17 14:32 /etc/eit.flash.sqlite.1000
>  1634304   2009-06-17 14:42 /etc/eit.flash.sqlite.1
> 
> (I am using the Flash database instead of the memory data here as I think
> the result should be the same.)
> 
> 


I was wrong here.

In my application (not the test environment mentioned above), after SQLite
had eaten up more than 8M bytes of memory, I dumped the database to the
Flash and it was only 360K, which is very reasonable for my application. The
database was configured with cache_size = 20 and page_size = 1024, so the
page cache should not take too much memory; it should be less than 100K
according to the measurements above.

So, for a non-memory database, the total storage needed is: generated
database size + memory used internally by sqlite3.

But for a memory database, the total memory needed is much bigger than that.
What eats the extra bulk of memory here? I can think of the journal file,
but that should only be on the order of kilobytes.

Thanks for helping me out.


Re: [sqlite] How to know how much memory is used by sqlite

2009-06-17 Thread pierr


D. Richard Hipp wrote:
> 
> 
> On Jun 15, 2009, at 9:04 AM, pierr wrote:
> 
>>
>> Hi all,
>>   I am using an memory type sqlite database in our embedded
>> application. After the applicate run a while ,the system will run  
>> ourt of
>> out of memory. So, is there a way to know how many memory is used by  
>> sqlite?
>> That would include the memory for page cache , any other other  
>> internal
>> memory used the sqlite ,as well as the memory for database itself.  
>> Thanks
>> for your help.
> 
> http://www.sqlite.org/c3ref/memory_highwater.html
> http://www.sqlite.org/malloc.html
> 
> 

By using the memory_highwater API above, I found that most of the memory is
used for the page cache. By limiting the cache size, I can keep good control
of how much heap memory is used.

INSERT record count     PAGE SIZE   CACHE SIZE   HIGH WATER MARK
(110 bytes per record)
1000                    1024        20           98,256
1000                    1024        2000         271,040
8000                    1024        2000         1,562,144
8000                    1024        20           99,200

And as Simon suggested, when I close the connection, the memory used by
sqlite drops to zero.

I am clearer now about how sqlite uses memory. However, as I am using a
memory database, I also care about the database size itself, that is, the
proportion of database size to raw data size.

I inserted a record of 112 bytes 10, 1,000, and 10,000 times respectively,
and the resulting database sizes are listed below. For the last case the
proportion is about 1.45. What costs the extra 0.45 here besides the btree?

 10240  2009-06-17 14:38 /etc/eit.flash.sqlite.10
 171008 2009-06-17 14:32 /etc/eit.flash.sqlite.1000
 1634304   2009-06-17 14:42 /etc/eit.flash.sqlite.1

(I am using the Flash database instead of the memory database here, as I
think the result should be the same.)

Here is the table structure. Of the 112 bytes in each record, 100 bytes go
to the event_name field.
CREATE TABLE IF NOT EXISTS tblEvent_basic(
sguid  INT,
service_id  INT,
event_id  INT,
start_time  VARCHAR(5),
duration  VARCHAR(3),
running_status  INT,
free_ca_mode  INT,
event_name  VARCHAR(256),
text  VARCHAR(256))

Thanks for your comment.
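
For reference, the high-water numbers above come from the C-level memory
interfaces. A minimal sketch of that measurement follows; the PRAGMA values
mirror the ones discussed here, and the workload placeholder is made up.

/* Sketch: report SQLite's current heap usage and the high-water mark. */
#include <stdio.h>
#include <sqlite3.h>

int main(void)
{
    sqlite3 *db;
    sqlite3_open(":memory:", &db);
    sqlite3_exec(db, "PRAGMA page_size=1024; PRAGMA cache_size=20;", 0, 0, 0);
    sqlite3_exec(db, "CREATE TABLE t(x TEXT);", 0, 0, 0);

    /* ... run the insert workload here ... */

    printf("currently used:  %lld bytes\n", (long long)sqlite3_memory_used());
    printf("high-water mark: %lld bytes\n",
           (long long)sqlite3_memory_highwater(0 /* don't reset */));

    sqlite3_close(db);
    return 0;
}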


[sqlite] How to know how much memory is used by sqlite

2009-06-15 Thread pierr

Hi all,
   I am using an in-memory sqlite database in our embedded application.
After the application runs for a while, the system runs out of memory. So,
is there a way to know how much memory is used by sqlite? That would include
the memory for the page cache, any other internal memory used by sqlite, as
well as the memory for the database itself. Thanks for your help.


Re: [sqlite] SQL error: disk I/O error

2009-06-12 Thread pierr


Nuno Lucas-2 wrote:
> 
> On 5/23/07, Shilpa Sheoran  wrote:
>> Linux 2.6.9
>> and the media is Nand Flash memory.
>> /dir1/dir2/dir3
>>
>> /dir1/dir2 is readonly (cramfs)
>> dir3 is read write (Flash mem). and I'm creating the database in dir3.
> 
> There is your problem. The file system on that directory (I would
> guess JFS2 or similar), doesn't support the fsync() call.
> 
> 
This might not be true. See below:
http://www.linux-mtd.infradead.org/faq/jffs2.html
"
On NOR FLASH each write goes directly into the FLASH.

On NAND FLASHM and NOR ECC FLASH we have a write-buffer for writing only
full pages to the chips. There could be a loss of data, when the
write-buffer is not flushed before power down. There are some mechanisms to
ensure, that the write-buffer is flushed. You can force the flush of the
write-buffer by using fsync() or sync() in your application. JFFS2 has a
timed flush of the write buffer, which forces the flush of the buffer to
flash, if there are no writes to the filesystem for more than about 5
seconds. The time depends on the cycle-time of kupdated, which can be
adjusted via /proc/sys/vm/dirty_expire_centisecs."



Re: [sqlite] help,low RAM problem

2009-06-10 Thread pierr

You have to configure the page size and the cache size (number of pages) to
fit within your 30K of RAM. To be specific: page size * cache size < 30K. A
small sketch of that configuration follows below.
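
For illustration only (the numbers are examples, not a tested configuration
for that platform): open the database and shrink the page cache before
anything is written, so the cache stays within the budget.

/* Sketch: cap the page cache for a ~30K RAM budget (512 * 20 = 10K here). */
#include <sqlite3.h>

int open_small(const char *path, sqlite3 **pdb)
{
    int rc = sqlite3_open(path, pdb);
    if (rc != SQLITE_OK) return rc;
    /* page_size only takes effect before the first table is created */
    sqlite3_exec(*pdb, "PRAGMA page_size=512;", 0, 0, 0);
    sqlite3_exec(*pdb, "PRAGMA cache_size=20;", 0, 0, 0);
    /* optionally also sqlite3_soft_heap_limit(20*1024); */
    return SQLITE_OK;
}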


徐中华 wrote:
> 
> 
> 
> 
> hi, 
>   I have a project based on an embedded platform.
> the platform just has 30K RAM and a large NAND Flash,
> so I want to use SQLite on this platform. 
>   Can SQLite 3.6 be used on this platform . And if it 's possible, 
> how can I configure the SQlite ?
>Thanks. 



[sqlite] journal mode TRUNCATE is append, not overwrite?

2009-06-01 Thread pierr

Hi all,

Section 7.9 of http://www.sqlite.org/atomiccommit.html mentions:

"On embedded systems with synchronous filesystems, TRUNCATE results in
slower behavior than PERSIST. The commit operation is the same speed. But
subsequent transactions are slower following a TRUNCATE because it is faster
to overwrite existing content than to append to the end of a file. New
journal file entries will always be appended following a TRUNCATE but will
usually overwrite with PERSIST."

why " New journal file entries will always be appended following a TRUNCATE
but will usually overwrite with PERSIST"?  I think if we trancate the
journey file to _zero_ , and the following write to that file is going to
overwrite the old data. Why append?

Thanks.




Re: [sqlite] about sqlite on line back up API

2009-05-24 Thread pierr chen
Hi Dan,
  I checked the implementation of the backup API and did not notice anywhere
that it turns on "PRAGMA locking_mode=exclusive" or any other similar PRAGMA
explicitly.
  Another thing that puzzles me is why I did not see the journal file before
the power-off.


>
> >  b. There always a journey file exist when I reset the system. I
> > am
> > thinking that If I reset the system after the loadOrSaveDb()
> > returned ,as
> > shown below,there should be no journey file exist as the write has
> > finished.
>
> Did you open the flash database with "PRAGMA locking_mode=exclusive", or
> some other option that would turn on persistent journals?
>
> Dan.
>
>
>


-- 
Pierr Chen


[sqlite] about sqlite on line back up API

2009-05-22 Thread pierr chen
Hi,
   I am using sqlite on a Flash-based embedded system in the following way,
intended to 1) improve the insert/update/delete performance and 2) reduce
the number of Flash writes:

  a. When the system comes up, load the flash database into memory.
  b. Do the delete/update/insert operations on the memory database.
  c. Sync the memory database to the flash periodically (say every 10
seconds) using the online backup API. I am using the loadOrSaveDb() function
listed here:
  http://www.sqlite.org/backup.html

 Compared with interacting directly with the Flash-based database, the
insert/update/delete performance is improved, no doubt. However, I found the
following problems:
 a. It seems that when using loadOrSaveDb(), the backup engine always copies
the pages from the memory database to the flash database, even if the two
databases hold exactly the same data. If that is the case, then I cannot
achieve goal 2, reducing the number of Flash writes.
 b. There is always a journal file left when I reset the system. I would
think that if I reset the system after loadOrSaveDb() has returned, as shown
below, there should be no journal file, as the write has finished.

// a background thread
while (1) {
    call loadOrSaveDb() and wait for it to return
    sleep(10)    // <-- reset the system somewhere in between,
                 //     say at the 5th second
}

 Thanks for your comments.
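
For readers without the backup.html page handy, the periodic sync boils down
to the standard online-backup loop, roughly like this (simplified; the real
loadOrSaveDb() example on that page adds more error handling):

/* Sketch: copy the in-memory database to a flash-resident database file. */
#include <sqlite3.h>

int sync_to_flash(sqlite3 *mem_db, const char *flash_path)
{
    sqlite3 *flash_db;
    sqlite3_backup *bk;
    int rc = sqlite3_open(flash_path, &flash_db);
    if (rc == SQLITE_OK) {
        bk = sqlite3_backup_init(flash_db, "main", mem_db, "main");
        if (bk) {
            sqlite3_backup_step(bk, -1);     /* copy every page in one pass */
            sqlite3_backup_finish(bk);
        }
        rc = sqlite3_errcode(flash_db);
    }
    sqlite3_close(flash_db);
    return rc;
}

Note that sqlite3_backup_step() copies source pages without comparing them to
what the destination already holds, which is consistent with observation (a)
above.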



Pierr Chen