"Rich Rattanni" <[EMAIL PROTECTED]> writes:
Hello Rich,
Thanks for your proposal, but my purpose is to rotate the database files
and remove the oldest ones to avoid filling my file system, so I can't
keep everything in the same table: I need to switch to a new file.
The compromise that works for me at the moment looks something like
this:
while (1) {
    read_informations_from_several_sources();
    INSERT(informations);

    if ($count > $max) {
        /* Rotate: copy only the newest $min rows into a fresh
         * database; the old file keeps the oldest rows, becomes the
         * archive, and can later be deleted to free space. */
        open_new_database();
        attach_old_database_and_fiddle_with_database_handle();
        INSERT INTO new_table
            SELECT * FROM old_table ORDER BY rowid DESC LIMIT $min;
        close_old_db();
    }
}
With my current values ($max ~100000 and $min ~1000), the switch
operation takes around 150 ms, which is acceptable for me.
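For reference, the attach/copy step above can be spelled out in plain
SQL, roughly like this (a sketch only; 'old.db' and the table names are
placeholders for my real ones):

    -- run against the freshly created database; ATTACH and DETACH
    -- must happen outside the transaction
    ATTACH DATABASE 'old.db' AS old;
    BEGIN;
    INSERT INTO new_table
        SELECT * FROM old.old_table ORDER BY rowid DESC LIMIT 1000;
    COMMIT;
    DETACH DATABASE old;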
> Replying to myself.... sorry, it's kinda lame, huh? Could you add a
> column to your schema, such as "LOG #" or so, and do all your work in
> the same table? So if your data max limit is 3 you would have...
>
> rowid   data   logNum
>     1      x        1
>     2      y        1
>     3      z        1
>     4      a        2
>     5      b        2
>
> Just thinking out of my fingertips.
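> To make that concrete, a rough sketch (untested; names are made up):
>
>     CREATE TABLE logs (data TEXT, logNum INTEGER);
>     -- prune by dropping whole generations instead of moving rows:
>     -- here, keep only the two newest logNum values
>     DELETE FROM logs
>     WHERE logNum <= (SELECT max(logNum) - 2 FROM logs);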
>
>
>
> On Sun, Jun 15, 2008 at 10:20 PM, Rich Rattanni <[EMAIL PROTECTED]> wrote:
>> I am working with SQLite in an embedded environment. With synchronous
>> = FULL, I can say large inserts are abysmal (of course, I need the
>> protection that full synchronous offers). And, as always, what I
>> call large may not be what you call large. Keep in mind that SQLite
>> will create a journal file roughly equal in size to the data you
>> will be moving. Instead of moving the data to a backup, could you
>> create a new table and start dumping data there? You know, have your
>> program remember the current table (DataLogX). When it comes time to
>> roll over the log .... "CREATE TABLE DataLog(X+1)" .....Just one
>> man's opinion.
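>> A rough sketch of that rollover (untested; names are made up):
>>
>>     -- suppose DataLog1 is full; start the next generation
>>     CREATE TABLE DataLog2 (ts INTEGER, data TEXT);
>>     -- new inserts go to DataLog2; once DataLog1 is no longer
>>     -- needed, reclaim its space
>>     DROP TABLE DataLog1;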
>>
>>
>> On Fri, Jun 13, 2008 at 5:25 AM, Al <[EMAIL PROTECTED]> wrote:
>>> Hello,
>>>
>>> I'm using SQLite to implement a fast logging system in an embedded
>>> system. Mainly for space, but also for performance reasons, I need
>>> to rotate the databases.
>>>
>>> The database is queried regularly, and I need to keep at least $min
>>> rows in it.
>>>
>>> Inside my logging loop, I plan to do something like this:
>>>
>>> while (1) {
>>>     read_informations_from_several_sources();
>>>     INSERT(informations);
>>>
>>>     if ($count > $max) {
>>>         /* I want to move all the oldest rows into another database */
>>>         BEGIN;
>>>         INSERT INTO logs_backup
>>>             SELECT * FROM logs ORDER BY rowid LIMIT ($max - $min);
>>>         DELETE FROM logs WHERE rowid IN
>>>             (SELECT rowid FROM logs ORDER BY rowid LIMIT ($max - $min));
>>>         COMMIT;
>>>     }
>>> }
>>>
>>> rowid is an autoincremented field.
>>> I am not an SQL expert and would like to find the fastest way to
>>> move the oldest rows into another database. Am I doing anything
>>> silly? Can it be improved?
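>>> For what it's worth, here is how I picture the cross-database move,
>>> assuming logs_backup already exists in 'backup.db' (file and table
>>> names are placeholders):
>>>
>>>     ATTACH DATABASE 'backup.db' AS bkp;
>>>     BEGIN;
>>>     INSERT INTO bkp.logs_backup
>>>         SELECT * FROM logs ORDER BY rowid LIMIT ($max - $min);
>>>     DELETE FROM logs WHERE rowid IN
>>>         (SELECT rowid FROM logs ORDER BY rowid LIMIT ($max - $min));
>>>     COMMIT;
>>>     DETACH DATABASE bkp;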
>>>
>>> Thanks in advance.
>>>
--
Alain ENOUT, Phone: +33 4 93 00 16 82, Fax : +33 4 93 00 16 61
UDcast, 2455 Route des Dolines, BP 355, F-06906 Sophia Antipolis, France
UDcast: IP - Broadcast - Wireless http://www.UDcast.com