Your design trade-off is memory against development time, and it depends upon your product. If you are making millions of units, spend the time and save memory; otherwise, add memory.

If I used the method described earlier, I would memory-map it to a disk file if the underlying OS supports that. That would leave you needing a power-fail strategy, which could be journalling. If you have unreliable hardware which corrupts data, you might look to better equipment rather than error-correcting algorithms.
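
A minimal sketch of that idea, assuming POSIX mmap is available (the store size and function names here are illustrative, not a worked implementation):

#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

#define STORE_SIZE (64 * 1024)   /* fixed-size store file (assumed for the sketch) */

/* Map the store file into memory; the tree and its records then live
   in the file and the OS pages them in and out on demand. */
static void *store_open(const char *path, int *fd_out)
{
    int fd = open(path, O_RDWR | O_CREAT, 0644);
    if (fd < 0) return NULL;
    if (ftruncate(fd, STORE_SIZE) < 0) { close(fd); return NULL; }
    void *base = mmap(NULL, STORE_SIZE, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);
    if (base == MAP_FAILED) { close(fd); return NULL; }
    *fd_out = fd;
    return base;
}

/* Flush dirty pages at each commit point; after a power failure only
   the in-flight update is at risk, and that is what the journal covers. */
static int store_sync(void *base)
{
    return msync(base, STORE_SIZE, MS_SYNC);
}

Between commit points no heap is consumed at all; the working set is whatever pages of the file you touch.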

Kalyani Tummala wrote:
Hi John,

My main memory is very limited, but I have a large disk to keep the
database. I need to serialize the data when the device is switched off,
or when it is in a different application mode where the database is not
required. I need to take care of power failure, data corruption, etc.

I will consider your advice, but how extensible and flexible is it for
future modifications?
Regards
Kalyani

-----Original Message-----
From: John Stanton [mailto:[EMAIL PROTECTED]
Sent: Tuesday, May 29, 2007 9:25 PM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] How to restrict the peak heap usage during multiple inserts and updates?

In your case we would not use Sqlite, and would instead use a much simpler storage method. Since your storage appears to be RAM-resident, that approach is indicated a fortiori.

We have had success with storage based on AVL trees. It is very fast and remains so despite repeated insertions and deletions. The code footprint is tiny (10K) and there is no heap usage, so memory leakage can never be a problem. You do not have SQL in that environment, but it would appear that you are not using it anyway. Since your data is memory-resident, ACID compliance and logging are not an issue.

Even with quite detailed data manipulation you would be hard pressed to have a footprint greater than 30K. You could cut that down by defining code like VDBE with a high information density and using a simple engine to interpret that metacode. We have successfully used that approach at times.
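
The absence of heap usage comes from drawing every tree node out of a fixed pool sized for the worst case. A minimal sketch of that scheme (the pool size, node layout and names are illustrative, not our actual code):

#include <stddef.h>

#define MAX_RECORDS 2048   /* worst-case record count (assumed for the sketch) */

struct avl_node {
    int              key;       /* record key, e.g. a rowid */
    int              balance;   /* AVL balance factor: -1, 0 or +1 */
    struct avl_node *left, *right;
    void            *record;    /* pointer into the record store */
};

static struct avl_node  node_pool[MAX_RECORDS];
static struct avl_node *free_list;

/* Chain the whole pool into a free list once at startup. */
static void pool_init(void)
{
    for (size_t i = 0; i + 1 < MAX_RECORDS; i++)
        node_pool[i].left = &node_pool[i + 1];
    node_pool[MAX_RECORDS - 1].left = NULL;
    free_list = &node_pool[0];
}

/* O(1) allocation; returns NULL when the pool is exhausted. */
static struct avl_node *node_alloc(void)
{
    struct avl_node *n = free_list;
    if (n) free_list = n->left;
    return n;
}

/* O(1) release back to the pool, so a leak is structurally impossible. */
static void node_release(struct avl_node *n)
{
    n->left = free_list;
    free_list = n;
}

Since malloc is never called, the peak memory requirement is known exactly at link time.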

Kalyani Tummala wrote:

Hi John,
I could not understand your query properly. Let me tell you my
application scenario.
I am planning to use sqlite as a database for storing and retrieving
media data of about 5-10K records in a device whose main memory is
extremely small. A sequence of insert statements increases the heap
usage to nearly 70K (almost the saturation point), which crashes my
application. I want to restrict this to 30K. I tried closing the
database and reopening it after some inserts, but to no avail.
I have observed that when I open a database with about 1K to 2K
records in it, inserts and updates take more heap, and the usage
gradually increases, compared with a database of fewer than 1K records.
My objective is to reduce the peak heap usage during inserts, updates
and deletes, with little or no performance degradation.

Please suggest anything I can do to achieve this.

Thank you in advance
Kalyani



-----Original Message-----
From: John Stanton [mailto:[EMAIL PROTECTED]
Sent: Tuesday, May 29, 2007 6:51 PM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] How to restrict the peak heap usage during multiple inserts and updates?

Since you are only using part of Sqlite, have you considered using a much smaller footprint storage system which only implements the functions you are using?

Kalyani Tummala wrote:


Hi Joe,

Thanks for your response.
In order to reduce the footprint size, I have bypassed the parser
completely and am using byte codes directly, since my schema and
queries are almost fixed at compile time. Hence I am not using
sqlite3_prepare(). The following is the schema and inserts I am using.

CREATE TABLE OBJECT(
PUOI                    INTEGER  PRIMARY KEY,
Storage_Id              INTEGER,
Object_Format           INTEGER,
Protection_Status       INTEGER,
Object_Size             INTEGER,
Parent_Object           INTEGER,
Non_Consumable          INTEGER,
Object_file_name        TEXT,
Name                    TEXT,
File_Path               TEXT
);

CREATE TABLE AUDIO(

PUOI                    INTEGER PRIMARY KEY,
Use_Count               INTEGER,
Audio_Bit_Rate          INTEGER,
Sample_Rate             INTEGER,
Audio_Codec_Type        INTEGER,
Number_of_Channels      INTEGER,
Track                   INTEGER,
Artist                  TEXT,
Title                   TEXT,
Genre                   TEXT,
Album_Name              TEXT,
File_Path               TEXT
);

INSERT INTO OBJECT VALUES (
7, 65537, 12297, 0,
475805, 6, 0, 'ANJANEYASTOTRAM.mp3', NULL,
'C:\\MTPSim\\Store0\\Music\\Artist\\Album\\ANJANEYASTOTRAM.mp3'
);


INSERT INTO AUDIO VALUES (
7, 6, 144100, 0,
0, 0, 6, NULL, NULL, NULL, NULL,
'C:\\MTPSim\\Store0\\Music\\Artist\\Album\\ANJANEYASTOTRAM.mp3'
);

INSERT INTO OBJECT VALUES (
8, 65537, 12297, 0,
387406, 6, 0, 'BHADRAM.mp3', NULL,
'C:\\MTPSim\\Store0\\Music\\Artist\\Album\\BHADRAM.mp3'
);


INSERT INTO AUDIO VALUES (
8, 6, 144100, 0,
0, 0, 6, NULL, NULL, NULL, NULL,
'C:\\MTPSim\\Store0\\Music\\Artist\\Album\\BHADRAM.mp3'
);


Warm regards
Kalyani

-----Original Message-----
From: Joe Wilson [mailto:[EMAIL PROTECTED]
Sent: Tuesday, May 29, 2007 9:42 AM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] How to restrict the peak heap usage during multiple inserts and updates?




I am working at porting sqlite (ver 3.3.8) on an embedded device with
extremely low main memory.
I tried running select queries on the tables (about 2K records each,
with about 5 strings per record) and they do well within 20 kB of
runtime heap usage.
But when I try new insertions, the heap usage grows tremendously
(about 70 kB at peak).


Perhaps preparing the statements (sqlite3_prepare) might decrease RAM use somewhat.
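
Prepare the INSERT once, then bind and reset for each row, so the SQL is parsed a single time and the literal strings never enter the statement text. A minimal sketch (the OBJECT table and values here are placeholders; plain sqlite3_prepare is used because 3.3.8 predates sqlite3_prepare_v2, and error handling is trimmed):

#include <sqlite3.h>

static int insert_one_object(sqlite3 *db)
{
    sqlite3_stmt *stmt;
    const char *tail;
    int rc = sqlite3_prepare(db,
        "INSERT INTO OBJECT VALUES (?,?,?,?,?,?,?,?,?,?)",
        -1, &stmt, &tail);
    if (rc != SQLITE_OK) return rc;

    /* Bind one row; in practice this block runs in a loop over records. */
    sqlite3_bind_int (stmt, 1, 7);       /* PUOI */
    sqlite3_bind_int (stmt, 2, 65537);   /* Storage_Id */
    sqlite3_bind_int (stmt, 3, 12297);   /* Object_Format */
    sqlite3_bind_int (stmt, 4, 0);       /* Protection_Status */
    sqlite3_bind_int (stmt, 5, 475805);  /* Object_Size */
    sqlite3_bind_int (stmt, 6, 6);       /* Parent_Object */
    sqlite3_bind_int (stmt, 7, 0);       /* Non_Consumable */
    sqlite3_bind_text(stmt, 8, "song.mp3", -1, SQLITE_STATIC);
    sqlite3_bind_null(stmt, 9);          /* Name */
    sqlite3_bind_text(stmt, 10, "C:\\Store0\\Music\\song.mp3",
                      -1, SQLITE_STATIC); /* bound raw, no SQL quoting */

    rc = sqlite3_step(stmt);             /* expect SQLITE_DONE */
    sqlite3_reset(stmt);                 /* ready for the next row's binds */
    sqlite3_finalize(stmt);
    return rc == SQLITE_DONE ? SQLITE_OK : rc;
}

Each row then costs only the bind calls and one step; the parser and code generator run once per statement rather than once per row.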

Can you post an example of your schema and these insert statements?




