Re: [sqlite] Client/Server SQLite

2008-06-17 Thread Alex Katebi
Hi John,

I was writing each message into the socket in several partial writes. I
had done this to avoid copying, but it caused many transmissions for a
single message. At the other end I was using the MSG_PEEK flag, which was
wasteful; I had done that to dump the message for debugging.

   By building the whole message in memory first, I was able to send the
entire message in one socket write. At the other end I used the MSG_WAITALL
flag with the recv() function, so the receiver becomes active only once a
usable piece of the message has been received.
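
As an illustration, here is a minimal sketch of such a receive loop; the
4-byte length prefix and the size guard are assumptions for the example,
not necessarily Alex's actual protocol:

    #include <stdint.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <arpa/inet.h>

    /* Read one complete length-prefixed message; MSG_WAITALL makes recv()
       block until the full requested byte count has arrived. */
    static ssize_t recv_message(int fd, char *buf, size_t bufsz)
    {
        uint32_t len;
        if (recv(fd, &len, sizeof(len), MSG_WAITALL) != (ssize_t)sizeof(len))
            return -1;
        len = ntohl(len);
        if (len > bufsz)
            return -1;   /* reject oversized messages */
        return recv(fd, buf, len, MSG_WAITALL);
    }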

One might object that a misbehaving client could still slow down the server.
The server should send a notification before dropping such a client for
sending runt messages.

What is your design like?

Thanks,
-Alex


On Tue, Jun 17, 2008 at 11:57 AM, John Stanton <[EMAIL PROTECTED]> wrote:

> What did you change?  What was causing the lag?
>
> Alex Katebi wrote:
> > The slowness is fixed. Can't tell the difference in speed between the
> > client/server version and the library.
> >
> > On Sat, Jun 14, 2008 at 8:32 PM, Alex Katebi <[EMAIL PROTECTED]>
> wrote:
> >
> >
> >>Hi All,
> >>
> >>   Looks like there is some interest. I will announce when I release it.
> >>Currently I am developing an interactive user shell client. This shell is
> >>used for my client/server development. It can also be embedded for any
> >>client's user interface.
> >>The request/response is a little slow for some reason. I need to fix this
> >>issue.
> >>
> >>Thanks,
> >>-Alex
> >>
> >>
> >>
> >>On Mon, Jun 2, 2008 at 11:40 AM, Alex Katebi <[EMAIL PROTECTED]>
> >>wrote:
> >>
> >>
> >>>Hi All,
> >>>
> >>>  I am using remote procedure calls (RPC) for SQLite in my application.
> >>>I have successfully implemented a few SQLite RPC functions that I needed.
> >>>I am wondering if there are other people like me who need this.
> >>>If there are enough people who could benefit from this I can make it
> >>>available as open source, public domain software.
> >>>Then people can add more functions as needed.
> >>>
> >>>Thanks,
> >>>-Alex
> >>>
> >>
> >>
>  > ___
> > sqlite-users mailing list
> > sqlite-users@sqlite.org
> > http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Problem using Attach to insert data from another table

2008-06-17 Thread Dan

On Jun 18, 2008, at 9:07 AM, danjenkins wrote:

>
> Hi.  I'm battling my way through the Attach command and I can't get my
> simple test to work.
>
> Given a database named "ultra2008.sql" that contains a table named  
> "ultra"
> and a database named "archive2007.sql" that also contains a table  
> named
> "ultra" where the databases and tables have identical structures,  
> shouldn't
> these statements insert ultra's data from archive2007.sql into
> ultra2008.sql?
> (For brevity I've removed the error checking lines for this posting.)
>
> sqlite3 *pDB = NULL;
> sqlite3_open("ultra2008.sql", &pDB);
> sqlite3_exec(pDB, "attach 'archive2007.sql' as arc", NULL, NULL,  
> NULL);
> sqlite3_exec(pDB, "insert into main.ultra select * from arc.ultra",  
> NULL,
> NULL, NULL);
> sqlite3_close(pDB);
>
> After this function runs, the databases are left with the same data  
> inside
> of them so something isn't right.

Check the error codes returned by the two sqlite3_exec calls. If
one is not SQLITE_OK, examine the string returned by sqlite3_errmsg().
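
A minimal sketch of that check, reusing the pDB handle and SQL from the
post (this assumes <stdio.h>; the reporting style is just one option):

    /* drop-in replacement for one of the unchecked calls above */
    int rc = sqlite3_exec(pDB, "insert into main.ultra select * from arc.ultra",
                          NULL, NULL, NULL);
    if (rc != SQLITE_OK)
        fprintf(stderr, "insert failed (rc=%d): %s\n", rc, sqlite3_errmsg(pDB));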

Dan.


>
> I appreciate your input.
> -Dan
>
> -- 
> View this message in context: http://www.nabble.com/Problem-using- 
> Attach-to-insert-data-from-another-table-tp17958315p17958315.html
> Sent from the SQLite mailing list archive at Nabble.com.
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Importing table

2008-06-17 Thread Jim Dodgen
don't forget to wrap inserts in begin; commit; pairs
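
For example, the generated file would then look something like this (table
name and values are hypothetical):

    BEGIN;
    INSERT INTO t VALUES('k1','k2','k3','k4','data1');
    INSERT INTO t VALUES('k1','k2','k3','k5','data2');
    -- ... millions more rows ...
    COMMIT;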


On 6/17/08, Dennis Cote <[EMAIL PROTECTED]> wrote:
> Alberto Simões wrote:
> >
> > I have a text file with 18 399 392 lines. Each line contains five
> > fields. Four form a compound key; the other is just data.
> > What is the best way to import this into sqlite?
> >
> > I am thinking of creating another text file with 18 399 392 INSERT
> > commands.
> > Would that be the best method?
> >
>
> You might be able to do it more directly using the .import command. It
> can read a fixed number of columns separated by fixed delimiters. It
> works with CSV files for example.
>
> All data are inserted as strings, but the strings may be converted to
> integer or real values if the columns have suitable affinity (see
> http://www.sqlite.org/datatype3.html for more details).
>
> HTH
> Dennis Cote
>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>


-- 
Jim Dodgen
[EMAIL PROTECTED]
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Problem using Attach to insert data from another table

2008-06-17 Thread danjenkins

Hi.  I'm battling my way through the Attach command and I can't get my simple
test to work.

Given a database named "ultra2008.sql" that contains a table named "ultra"
and a database named "archive2007.sql" that also contains a table named
"ultra" where the databases and tables have identical structures, shouldn't
these statements insert ultra's data from archive2007.sql into
ultra2008.sql?
(For brevity I've removed the error checking lines for this posting.)

sqlite3 *pDB = NULL; 
sqlite3_open("ultra2008.sql", &pDB);
sqlite3_exec(pDB, "attach 'archive2007.sql' as arc", NULL, NULL, NULL);
sqlite3_exec(pDB, "insert into main.ultra select * from arc.ultra", NULL,
NULL, NULL);
sqlite3_close(pDB);

After this function runs, the databases are left with the same data inside
of them so something isn't right.

I appreciate your input.
-Dan

-- 
View this message in context: 
http://www.nabble.com/Problem-using-Attach-to-insert-data-from-another-table-tp17958315p17958315.html
Sent from the SQLite mailing list archive at Nabble.com.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQL question

2008-06-17 Thread Gregor Brandt
I did change it to:

UPDATE `table` SET `id` = `id` + 32768 WHERE `id` >= x

and then I decrement everything over 32768 by 32767 to get it back in  
line.

This was required because a single update on a primary key returned an
error about a key conflict when incrementing by only 1!
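
Spelled out, the two steps look like this (x is the insertion point, and the
sketch assumes all existing ids are below 32768):

    UPDATE `table` SET `id` = `id` + 32768 WHERE `id` >= x;
    -- every shifted row now sits above 32768, clear of the untouched ids
    UPDATE `table` SET `id` = `id` - 32767 WHERE `id` > 32768;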

Thanks,
Gregor


On 16-Jun-08, at 11:53 AM  , Darren Duncan wrote:

> Gregor, why did you do that more complicated version with the  
> subquery and
> sorting et al?  The short version from RBS would have worked a lot
> better;
> you just need to say:
>
>   update binary_report_fmt
>   set column_id = column_id + 1
>   where column_id > 1;
>
> ... and then insert a new row with column_id = 2.  All that other  
> stuff you
> did just makes things unnecessarily more complicated, and possibly  
> buggy.
>
> On a related matter, UPDATE statements are atomic operations, so the  
> fact
> that the id is a primary key doesn't matter.  Since you're  
> incrementing all
> the id values simultaneously, there are no duplicate values at any  
> time, so
> the primary key constraint would remain happy.
>
> -- Darren Duncan
>
> Gregor Brandt wrote:
>> Hi, thanks this is great.  Of course I forgot to mention that the id
>> is a primary key, so I get an error about duplicate primary keys.
>>
>> I tried this:
>>
>> update binary_report_fmt set column_id = column_id + 1 where  
>> column_id
>> = (select column_id from binary_report_fmt where column_id >= 3 order
>> by column_id desc);
>>
> >> but it only updates the last item.   I guess I can make it a
> >> non-primary key... then it works perfectly.
>>
>> Gregor
>>
>> On 16-Jun-08, at 10:04 AM  , [EMAIL PROTECTED] wrote:
>>
>>> update
>>> table
>>> set id = id + 1
>>> WHERE
>>> id > 1
>>>
>>> RBS
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

-
Gregor Brandt

Marionette Software Group
Suite 202, 124 - 26 Ave SW
Calgary, AB, Canada T2S 3G5
Tel: +1 403 401 4784
EMail: [EMAIL PROTECTED]

www.marionette.ca





___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite3 to SQLite2?

2008-06-17 Thread Gilles Ganault
On Mon, 16 Jun 2008 18:32:12 -0600, "David Baird"
<[EMAIL PROTECTED]> wrote:
>echo .dump | sqlite3 input.db | sqlite output.db

Thank you.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] splitting field data

2008-06-17 Thread Rich Shepard
On Tue, 17 Jun 2008, [EMAIL PROTECTED] wrote:

> As shown, ITEMS is comprised of individual data items separated by
> semicolons. For some purposes, this concatenated form suits the purpose,
> but for other purposes I need to split these items into individual data
> points. That is, I want to split the ITEMS (at the semicolons) into
> separate rows in a new table FOO2, having fields ID and SINGLEITEM that
> looks like this:
>
> ID, SINGLEITEM
> 1,item1
> 1,item2
> 1,item3
> 2,item1
> 2,item4
> 3,item5
> 3,item3
> 3,item7
>
> Any suggestions of how to do this in SQLite?

Chris,

   You ought to normalize your data. Consider (and this is just off the top
of my head) two tables: one to hold categories and their descriptions, the
other to hold categories and items. Then you can select either specific
items, all items, or groups of items collected as you wish.
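
A rough sketch of that normalization, with illustrative names only:

    -- one row per distinct item
    CREATE TABLE items (item_id INTEGER PRIMARY KEY, name TEXT);
    -- one row per (record, item) pair, replacing the semicolon lists
    CREATE TABLE foo_items (id INTEGER, item_id INTEGER);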

Rich

-- 
Richard B. Shepard, Ph.D.   |  Integrity - Credibility
Applied Ecosystem Services, Inc.   |  Innovation
 Voice: 503-667-4517  Fax: 503-667-8863
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] splitting field data

2008-06-17 Thread cmartin
I have a table that contains 2 fields: ID, ITEMS, as:

CREATE TABLE foo (ID INTEGER PRIMARY KEY, ITEMS);
INSERT INTO foo VALUES(1,'item1;item2;item3');
INSERT INTO foo VALUES(2,'item1;item4');
INSERT INTO foo VALUES(3,'item5;item3;item7');

As shown, ITEMS is comprised of individual data items separated by 
semicolons. For some purposes, this concatenated form suits the purpose, 
but for other purposes I need to split these items into individual data 
points. That is, I want to split the ITEMS (at the semicolons) into 
separate rows in a new table FOO2, having fields ID and SINGLEITEM that 
looks like this:

ID, SINGLEITEM
1,item1
1,item2
1,item3
2,item1
2,item4
3,item5
3,item3
3,item7

Any suggestions of how to do this in SQLite?

Chris Martin
UNC-CH


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Firefox Download Day

2008-06-17 Thread D. Richard Hipp
Firefox version 3.0 is now available for download at http://www.mozilla.com/

Mozilla is trying to set a world record for the most software  
downloads in 24 hours.  So if you are able to download a copy of  
firefox 3.0 today, please do so.  If you didn't know already, Firefox  
3.0 makes heavy use of SQLite.  And Mozilla is a sponsor of the SQLite  
project.  So by supporting firefox, you are also supporting SQLite!

You have a little less than 24 hours.  Go download your copy now.


D. Richard Hipp
[EMAIL PROTECTED]



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Importing table

2008-06-17 Thread Dennis Cote
Alberto Simões wrote:
> 
> I have a text file with 18 399 392 lines. Each line contains five
> fields. Four form a compound key; the other is just data.
> What is the best way to import this into sqlite?
> 
> I am thinking of creating another text file with 18 399 392 INSERT commands.
> Would that be the best method?
> 

You might be able to do it more directly using the .import command. It 
can read a fixed number of columns separated by fixed delimiters. It 
works with CSV files for example.
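
For instance, with a comma-separated version of the file (all names here
are hypothetical):

    sqlite> CREATE TABLE mytable (k1, k2, k3, k4, data);
    sqlite> .separator ,
    sqlite> .import data.csv mytable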

All data are inserted as strings, but the strings may be converted to 
integer or real values if the columns have suitable affinity (see 
http://www.sqlite.org/datatype3.html for more details).

HTH
Dennis Cote


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Importing table

2008-06-17 Thread Alberto Simões
On Tue, Jun 17, 2008 at 6:33 PM, P Kishor <[EMAIL PROTECTED]> wrote:
> On 6/17/08, Alberto Simões <[EMAIL PROTECTED]> wrote:
>> Hi, Folks
>>  I am thinking of creating another text file with 18 399 392 INSERT
>> commands.
>>  Would that be the best method?
>
> Look at the .read command in the sqlite3 command line program.
>

OK, then INSERTs are the way to go.
Thanks

-- 
Alberto Simões
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Segmentation Fault when using mod_python / mod_wsgi

2008-06-17 Thread Eric Holmberg
> Eric Holmberg wrote:
> > PyDict_GetItem (op=0x0, key=0xb7ce82cc) at Objects/dictobject.c:571
> > 571 if (!PyDict_Check(op))
> 
> You would need to supply more of the backtrace since the 
> calling routine is supplying a null pointer instead of a 
> dictionary.  Nothing points to SQLite being involved so far ...

The full backtrace doesn't provide much info at this point.  I am
attempting to recompile Apache with debugging symbols, but have not
succeeded yet.

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread -1208731424 (LWP 20956)]
PyDict_GetItem (op=0x0, key=0xb7c25e4c) at Objects/dictobject.c:571
571 if (!PyDict_Check(op))
(gdb) bt full
#0  PyDict_GetItem (op=0x0, key=0xb7c25e4c) at Objects/dictobject.c:571
hash = 5854612
ep = (dictentry *) 0x595594
tstate = (PyThreadState *) 0x0
#1  0x0058da34 in ?? ()
No symbol table info available.
#2  0x in ?? ()
No symbol table info available.

Since SQLite works fine at the command line and fine with the Python
interpreter, but fails when invoked with mod_python and mod_wsgi through
Apache, then I'm assuming that there is either an odd threading issue or
something is not being setup correctly in the Apache/mod_* chain.

I was just hoping that if this was a thread related issue, that somebody
may have seen this before and have a quick solution.


Build environment / configuration

export CFLAGS='-g '
export LDFLAGS='-g '

SQLite 3.5.9:
./configure --disable-tcl --enable-threadsafe
--enable-cross-thread-connections --enable-debug

Python 2.5.2:
./configure --with-pydebug --enable-shared

Mod_python:
./configure


Thanks,

Eric
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Importing table

2008-06-17 Thread P Kishor
On 6/17/08, Alberto Simões <[EMAIL PROTECTED]> wrote:
> Hi, Folks
>
>  I have a text file with 18 399 392 lines. Each line contains five
>  fields. Four form a compound key; the other is just data.
>  What is the best way to import this into sqlite?
>
>  I am thinking of creating another text file with 18 399 392 INSERT
> commands.
>  Would that be the best method?

Look at the .read command in the sqlite3 command line program.
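
For example (file name hypothetical):

    $ sqlite3 mydata.db
    sqlite> .read inserts.sql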

>
>  (by the way, I am expecting to have bigger tables in the near future)
>
>  Thanks
>  Alberto
>
>
>  --
>  Alberto Simões
>  ___
>  sqlite-users mailing list
>  sqlite-users@sqlite.org
>  http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>


-- 
Puneet Kishor http://punkish.eidesis.org/
Nelson Institute for Environmental Studies http://www.nelson.wisc.edu/
Open Source Geospatial Foundation (OSGeo) http://www.osgeo.org/
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Importing table

2008-06-17 Thread Alberto Simões
Hi, Folks

I have a text file with 18 399 392 lines. Each line contains five
fields. Four form a compound key; the other is just data.
What is the best way to import this into sqlite?

I am thinking of creating another text file with 18 399 392 INSERT commands.
Would that be the best method?

(by the way, I am expecting to have bigger tables in the near future)

Thanks
Alberto

-- 
Alberto Simões
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Long delay for delete/select row in database

2008-06-17 Thread Dennis Cote
Raphaël KINDT wrote:
> 
> I'm working on a video monitoring program that uses a frame grabber (12
> cameras).
> This program records videos and detects some (input/output) events such as:
> motion detection, blue screens (cut camera cable), I/Os 1 to 24, and much
> more...
> 
> I save all events in a database (sqlite3) with an INSERT request directly
> after their detection.
> Here is the table I use:
> 
> CREATE TABLE events(
>instantDATETIME,
>instant_msec   SMALLINT UNSIGNED,
>events_int0INT UNSIGNED,
>events_int1INT UNSIGNED,
>events_int2INT UNSIGNED,
>events_int3INT UNSIGNED
> );
> 
> As you can see I don't use PRIMARY KEY and INDEX...
> Each bit of events_intx is a particular detected event (1 = detected).
> 
> When a video is too old, I delete it.
> But I must delete all events detected during this video record.
> To do that I use a SQL request described hereunder:
> 
> DELETE FROM events WHERE ( (instant < datetime('2008-06-16 10:21:55.806'))
> OR ((instant = datetime('2008-06-16 10:21:55.806')) AND (instant_msec <=
> 806)) );
> 
> Sometimes a 'client' wants some information of "what's happened yesterday"
> and he sends a request such as:
> 
> SELECT * FROM events WHERE ( (instant < datetime('2008-06-16 23:59:59.999'))
> OR ((instant = datetime('2008-06-16 23:59:59.999')) AND (instant_msec <=
> 999)) ) AND ( (instant > datetime('2008-06-16 00:00:00.000')) OR ((instant =
> datetime('2008-06-16 00:00:00.000')) AND (instant_msec >= 0)) ) ORDER BY
> instant, instant_msec;
> 
> All this works well for a 'light' database.
> But I have forced the application to detect a lot of motion detection events
> (each second during 10 days).
> Thus, each second, my application is using INSERT to add a row to the event
> table.
> Now my database's size is about 120 MB... Very big...
> When the DELETE (or SELECT) request starts, it takes about 75 seconds to be
> executed (very, very lng)!!!
> 
> How can I reduce this time for a big database?
> I must have a delay that doesn't go up to 40 msec to be efficient.
> Do I have to split my database into several small databases to reduce this
> delay?
> Do I have to create another table and use PRIMARY KEY, INDEX etc?
> 

I would recommend that you change your database schema and combine your 
timestamp string and subsecond integer field into a single floating 
point julian day number that provides both pieces of information in 
units of days (since the julian day epoch). The integer portion is the 
number of days, the fractional part is the point in the day to very high 
resolution (because SQLite uses 64 bit double precision floating point 
values). See http://www.sqlite.org/cvstrac/wiki?p=DateAndTimeFunctions 
for more info on the julian date support.

 CREATE TABLE events(
time   REAL, -- julian day number
events_int0INT UNSIGNED,
events_int1INT UNSIGNED,
events_int2INT UNSIGNED,
events_int3INT UNSIGNED
 );

You may want to combine your event ints into a blob field as well if you 
really need more than 64 event bits (since SQLite uses 64 bit integer 
values).

With the table using real time stamps, you can create an index on the 
time field to greatly speed up your time based comparisons.

 CREATE INDEX event_time on events(time);

Now your queries can be simplified to:

 DELETE FROM events
 WHERE time < julianday('2008-06-16 10:21:55.806');

and

 SELECT * FROM events
 WHERE time between julianday('2008-06-16 00:00:00.000')
and julianday('2008-06-16 23:59:59.999');

These queries will use the index on time to quickly select the records 
of interest and ignore all the other records.

HTH
Dennis Cote
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQlite and C works with "like" but not with "="

2008-06-17 Thread Wilson, Ron P
Also, don't forget trailing white space, e.g. "Hexion " != "Hexion".
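
A quick, hypothetical way to check for that: length() makes invisible
trailing characters visible.

    SELECT SongTitle, length(SongTitle) FROM songs
    WHERE SongTitle LIKE 'hexion%';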

RW

Ron Wilson, S/W Systems Engineer III, Tyco Electronics, 434.455.6453

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Igor Tandetnik
Sent: Saturday, June 14, 2008 10:55 AM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] SQlite and C works with "like" but not with "="

"Daniel White" <[EMAIL PROTECTED]>
wrote in message news:[EMAIL PROTECTED]
>> Which way is it stored in the database? Show the output of this
>> statement:
>>
>> SELECT SongTitle FROM songs WHERE SongTitle like 'hexion';
>
> There are 8 records of Hexion in the database, so after a printout
> to the console with a carriage return after each value, I basically
> get:
> Hexion
> Hexion
> Hexion
> Hexion
> Hexion
> Hexion
> Hexion
> Hexion

And you are saying the statement

SELECT SongTitle FROM songs WHERE SongTitle='Hexion';

doesn't return any rows? With all due respect, I find it difficult to 
believe. Would it be possible for you to email a copy of the database 
file to me at [EMAIL PROTECTED] ? Perhaps a stripped-down version, 
with just enough data to reproduce the problem.

Igor Tandetnik 



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Long delay for delete/select row in database

2008-06-17 Thread barabbas
Hi Raphaël,

Since you are dealing with ranges of numbers, the recent discussion of
r-tree indexing on this list may help.

Besides, splitting instant into the most frequently queried units may also
be useful; for example, instant_date and instant_time.

Another, rougher method is to apply PRAGMA synchronous = OFF and/or
PRAGMA journal_mode = OFF.

Regards,
/Mike/

Raphaël KINDT wrote:
> Hi everybody,
>
> I'm working on a video monitoring program that uses a frame grabber (12
> cameras).
> This program records videos and detects some (input/output) events such as:
> motion detection, blue screens (cut camera cable), I/Os 1 to 24, and much
> more...
>
> I save all events in a database (sqlite3) with an INSERT request directly
> after their detection.
> Here is the table I use:
>
> CREATE TABLE events(
>instantDATETIME,
>instant_msec   SMALLINT UNSIGNED,
>events_int0INT UNSIGNED,
>events_int1INT UNSIGNED,
>events_int2INT UNSIGNED,
>events_int3INT UNSIGNED
> );
>
> As you can see I don't use PRIMARY KEY and INDEX...
> Each bits of events_intx is a particular event detected (1 = detected).
>
> When a video is too old, I delete it.
> But I must delete all events detected during this video record.
> To do that I use a SQL request described hereunder:
>
> DELETE FROM events WHERE ( (instant < datetime('2008-06-16 10:21:55.806'))
> OR ((instant = datetime('2008-06-16 10:21:55.806')) AND (instant_msec <=
> 806)) );
>
> Sometimes a 'client' wants some information of "what's happened yesterday"
> and he sends a request such as:
>
> SELECT * FROM events WHERE ( (instant < datetime('2008-06-16 23:59:59.999'))
> OR ((instant = datetime('2008-06-16 23:59:59.999')) AND (instant_msec <=
> 999)) ) AND ( (instant > datetime('2008-06-16 00:00:00.000')) OR ((instant =
> datetime('2008-06-16 00:00:00.000')) AND (instant_msec >= 0)) ) ORDER BY
> instant, instant_msec;
>
> All this works well for a 'light' database.
> But I have forced the application to detect a lot of motion detection events
> (each second during 10 days).
> Thus, each second, my application is using INSERT to add a row to the event
> table.
> Now my database's size is about 120 MB... Very big...
> When the DELETE (or SELECT) request starts, it takes about 75 seconds to be
> executed (very, very long)!!!
>
> How can I reduce this time for a big database?
> I must have a delay that doesn't go up to 40 msec to be efficient.
> Do I have to split my database into several small databases to reduce this
> delay?
> Do I have to create another table and use PRIMARY KEY, INDEX etc?
>
> Perhaps I can create a database only in memory and another database in a
> file.
> The database in memory is used to do the seeking and the database in file is
> used for the backup.
> Those databases must be synchronized...
>
>
> Thanks for all your advice.
>
>
> KINDT Raphaël
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
>   

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Client/Server SQLite

2008-06-17 Thread John Stanton
What did you change?  What was causing the lag?

Alex Katebi wrote:
> The slowness is fixed. Can't tell the difference in speed between the
> client/server version and the library.
> 
> On Sat, Jun 14, 2008 at 8:32 PM, Alex Katebi <[EMAIL PROTECTED]> wrote:
> 
> 
>>Hi All,
>>
>>   Looks like there is some interest. I will announce when I release it.
>>Currently I am developing an interactive user shell client. This shell is
>>used for my client/server development. It can also be embedded for any
>>client's user interface.
>>The request/response is a little slow for some reason. I need to fix this
>>issue.
>>
>>Thanks,
>>-Alex
>>
>>
>>
>>On Mon, Jun 2, 2008 at 11:40 AM, Alex Katebi <[EMAIL PROTECTED]>
>>wrote:
>>
>>
>>>Hi All,
>>>
>>>  I am using remote procedure calls (RPC) for SQLite in my application.
>>>I have successfully implemented a few SQLite RPC functions that I needed.
>>>I am wondering if there are other people like me who need this.
>>>If there are enough people who could benefit from this I can make it
>>>available as open source, public domain software.
>>>Then people can add more functions as needed.
>>>
>>>Thanks,
>>>-Alex
>>>
>>
>>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to import CSV data if strings NULL?

2008-06-17 Thread Dennis Cote
Gilles Ganault wrote:
> 
>   It's probably trivial but I'm having a difficult time using Python
> to import a tab-delimited file into SQLite because some columns might
> be empty, so the INSERT string should contain NULL instead of "NULL".
> 

One thing you can do is let the import utility insert the string "NULL" 
for the null records and then change them to real nulls after the fact 
using an update statement.

   update table
   set column = null
   where column = 'NULL';


> Does someone have working code handy that can parse through each line,
> check each column, and if found empty, build the right INSERT?
> 
> Here's my newbie, yucckie, non-working code:
> 
> ==
> import csv
> 
> cr = csv.reader(open("test.tsv","rb"),dialect='excel-tab')
> 
> for row in cr:
>   #INSERT INTO mytable (col1,col2) VALUES (NULL,"My string")
> 
>   sql = "INSERT INTO mytable (col1,col2) VALUES ("
>   for col in row:
>       if col=="":
>           sql = sql + "NULL"
>       else:
>           sql = sql + col
>       if not last col:
>           sql = sql + ","
>       else:
>           #remove trailing comma
> ==
> 

You probably need to add quotes around the col value here to make it a 
literal string. I seem to remember that the csv reader strips the quotes 
from the strings.

if col=="":
sql = sql + "NULL"
else:
sql = sql + "'" + col + "'"

I've always found it easier to skip the first comma instead of trying to 
remove the last one.

 first = True
 for col in row:
     if not first:
         sql = sql + ","
     first = False
     if col=="":
         sql = sql + "NULL"
     else:
         sql = sql + "'" + col + "'"

HTH
Dennis Cote

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite 3.5.9 build problem - token expected in btree.c

2008-06-17 Thread Dennis Cote
Klemens Friedl wrote:
> While upgrading from an older SQLite 3.4.2, I stumbled upon the
> following build problem.
> 
> me\sqlite3\btree.c:61: error: expected ';', ',' or ')' before '-' token
> mingw32-make: *** [obj-i386\me\sqlite3\btree_sqlite3.o] Error 1
> 
> Has someone experienced similar build problems with 3.5.9 or more
> generally during the upgrade process from 3.4.x to 3.5.x?
> 

I built version 3.5.9 from the CVS source files using MinGW/MSYS without 
any issues last week.

> 
> 
> I am using mingw (gcc 4.1.3 20071015; GNU Binutils 2.18.50.20071015)
> on Win32 (WinNT 5.1) and the following source package
> http://sqlite.org/sqlite-source-3_5_9.zip (source preprocessed for
> win32); btw. the amalgamation package is not an option in my case, and
> I hadn't had any problems with the normal preprocessed package with
> SQLite 3.4.x.
> 
> I have written my own build script, as my environment provides a
> custom xml based build script language, which gets generated to a
> makefile.
> 
> 
> I have set the following preprocessor defines:
> "__USE_W32API"
> "SQLITE_DEFAULT_AUTOVACUUM" = 1
> "SQLITE_CORE" = 1
> "SQLITE_AMALGAMATION" = 0
> "disable-tcl"
> "disable-debug"
> "enable-threadsafe"
> "enable-releasemode"
> "disable-static"
> 

I'm still using the older 3.4.5 version of GCC in MinGW, and I used the 
following set of commands to build SQLite from the sqlite source 
directory available from CVS (or download at 
http://www.sqlite.org/sqlite-3.5.9.tar.gz).

mkdir build
cd build
../sqlite/configure 
make

HTH
Dennis Cote
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] help with Dates please

2008-06-17 Thread Dennis Cote
John wrote:
> 
> You might also want to look at using triggers to auto-insert the
> date/time stamp as your entries are inserted/updated.
> 

For completeness, you may also want to look at the DEFAULT time and date 
codes; CURRENT_TIME, CURRENT_DATE, and CURRENT_TIMESTAMP at 
http://www.sqlite.org/lang_createtable.html.
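
For example (a hypothetical table):

    CREATE TABLE entries (
      body     TEXT,
      created  DATETIME DEFAULT CURRENT_TIMESTAMP
    );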

HTH
Dennis Cote
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Structural detection of AUTOINCREMENT

2008-06-17 Thread Dennis Cote
Csaba wrote:
> 
>> BTW, the autoincrement keyword must appear after the integer primary key
>> phrase, not in the middle as you have shown it.
> 
> This was an interesting note.  I am using the SQLite that came with my
> PHP v. 5.2.6 (built Feb 13, 2008), which is SQLite 2.8.17.
> Turns out that the AUTOINCREMENT keyword was accepted in the position
> as I had stated and was autoincrementing just fine.  However, it
> turned out that I got the same autoincrementing functionality when
> I removed the AUTOINCREMENT keyword!  No surprise that I couldn't
> find a structural difference between the two in this version of SQLite.
> 

The autoincrement keyword has only been used in SQLite since version 
3.1.0. For a very long time SQLite has supported special functionality 
for an "integer primary key" column as an alias of the table's btree 
key. This column will also increment by default, but behaves differently 
than an autoincrement column in the face of deletes.
See http://www.sqlite.org/lang_createtable.html and 
http://www.sqlite.org/autoinc.html for more details.
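
A sketch of the distinction:

    -- plain rowid alias: after the row with the largest id is deleted,
    -- a later insert may reuse that id
    CREATE TABLE t1 (id INTEGER PRIMARY KEY, data);

    -- AUTOINCREMENT: ids are never reused; the high-water mark is kept
    -- in the sqlite_sequence table
    CREATE TABLE t2 (id INTEGER PRIMARY KEY AUTOINCREMENT, data);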


> 
> Useful example, thanks.  PHP provides a way to ask sqlite
> to carry out a direct command (via http://php.net/sqlite_exec )
> but they did not provide for retrieving any data from this
> function.  In light of this, is there any way to direct output from
> an sqlite command into a table?
> 

Not in SQL as far as I know, but I don't know what the PHP wrapper does 
with pragma commands.

> Though the sqlite_sequence table seems not to exist in
> my version of SQLite, should you be able to see it in yours
> by querying the sqlite_master table ala:
> SELECT * FROM sqlite_master WHERE type='table';
> I ask this because sqlite_master itself is not listed in sqlite_master
> 

Yes, the sqlite_sequence table is listed in the sqlite_master table.

HTH
Dennis Cote
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Long delay for delete/select row in database

2008-06-17 Thread Raphaël KINDT
Hi everybody,

I'm working on a video monitoring program that uses a frame grabber (12
cameras).
This program records videos and detects some (input/output) events such as:
motion detection, blue screens (cut camera cable), I/Os 1 to 24, and much
more...

I save all events in a database (sqlite3) with an INSERT request directly
after their detection.
Here is the table I use:

CREATE TABLE events(
   instantDATETIME,
   instant_msec   SMALLINT UNSIGNED,
   events_int0INT UNSIGNED,
   events_int1INT UNSIGNED,
   events_int2INT UNSIGNED,
   events_int3INT UNSIGNED
);

As you can see I don't use PRIMARY KEY and INDEX...
Each bit of events_intx is a particular detected event (1 = detected).

When a video is too old, I delete it.
But I must delete all events detected during this video record.
To do that I use a SQL request described hereunder:

DELETE FROM events WHERE ( (instant < datetime('2008-06-16 10:21:55.806'))
OR ((instant = datetime('2008-06-16 10:21:55.806')) AND (instant_msec <=
806)) );

Sometimes a 'client' wants some information of "what's happened yesterday"
and he sends a request such as:

SELECT * FROM events WHERE ( (instant < datetime('2008-06-16 23:59:59.999'))
OR ((instant = datetime('2008-06-16 23:59:59.999')) AND (instant_msec <=
999)) ) AND ( (instant > datetime('2008-06-16 00:00:00.000')) OR ((instant =
datetime('2008-06-16 00:00:00.000')) AND (instant_msec >= 0)) ) ORDER BY
instant, instant_msec;

All this works well for a 'light' database.
But I have forced the application to detect a lot of motion detection events
(each second during 10 days).
Thus, each second, my application is using INSERT to add a row to the event
table.
Now my database's size is about 120 MB... Very big...
When the DELETE (or SELECT) request starts, it takes about 75 seconds to be
executed (very, very long)!!!

How can I reduce this time for a big database?
I must have a delay that doesn't go up to 40 msec to be efficient.
Do I have to split my database into several small databases to reduce this
delay?
Do I have to create another table and use PRIMARY KEY, INDEX etc?

Perhaps I can create a database only in memory and another database in a
file.
The database in memory is used to do the seeking and the database in file is
used for the backup.
Those databases must be synchronized...


Thanks for all your advice.


KINDT Raphaël

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] DB does not get updated after many writes

2008-06-17 Thread sethuarun


My Environment details:
OS: Montavista kernel version 2.4
compiler: gcc (GCC) 3.3.1 (MontaVista 3.3.1-3.0.10.0300532 2003-12-24)

Regards,
Sethu


Mihai Limbasan wrote:
> 
> I doubt that anyone will be able to help you unless you provide more 
> (or, for that matter, *any*) details about your environment, such as 
> operating system, hardware constraints, compiler brand and version, and 
> so on and so forth.
> 
> Mihai Limbasan
> 

-- 
View this message in context: 
http://www.nabble.com/DB-does-not-get-updated-after-many-writes-tp17866059p17881249.html
Sent from the SQLite mailing list archive at Nabble.com.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Implementing fast database rotation

2008-06-17 Thread Al
Dennis Cote <[EMAIL PROTECTED]> writes:

> Al wrote:
>> 
>> I'm using sqlite to implement a fast logging system in an embedded system.
>> Mainly for space but also for performance reasons, I need to rotate the
>> databases.
>> 
>> The database is queried regularly and I need to keep at least $min rows in 
>> it. 
>> 
>> What I plan, is inside my logging loop, to do something like this.
>> 
>> while(1) {
>> read_informations_from_several_sources();
>> INSERT(informations);
>> 
>> if(count > max) {
>>/* I want to move all oldest rows in another database */ 
>>BEGIN;
>>INSERT INTO logs_backup
>> SELECT * FROM logs order by rowid limit ($max - $min);
>> 
>>DELETE FROM logs WHERE rowid IN (SELECT rowid FROM logs ORDER BY rowid
>> LIMIT ($max - $min));
>>COMMIT;
>> }
>> }
>> 
>> rowid is an autoincremented field.
>> I am not an SQL expert, and would like to find the fastest solution to
>> move the oldest rows into another database. Am I doing silly things? Can
>> it be improved?
>> 
>
> Al,
>
> There have been a few past discussions of using SQLite to create a FIFO 
> table with a fixed maximum size. See 
> http://www.nabble.com/limiting-table-size--to2035232.html#a2063150 for 
> one way to do this. You would have to modify the insert trigger to make 
> the backup copy of the old row before it is deleted.
>

Hello Dennis,

Thank you very much for the link. I find the solution quite elegant
and I will compare its performance with my current implementation.


> HTH
> Dennis Cote
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>

-- 
Alain ENOUT,  Phone: +33 4 93 00 16 82,  Fax : +33 4 93 00 16 61
UDcast, 2455 Route des Dolines, BP 355, F-06906 Sophia Antipolis, France
UDcast: IP - Broadcast - Wireless  http://www.UDcast.com

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Implementing fast database rotation

2008-06-17 Thread Al
"Rich Rattanni" <[EMAIL PROTECTED]> writes:

Hello Rich,

Thanks for your proposal, but my purpose is to rotate the databases
and remove the oldest files to avoid filling my file system, so I
can't use the same table, since I need to switch files.

The solution which is a good compromise for me for the moment is
something like this:

while(1) {
   read_informations_from_several_sources();
   INSERT(informations);

   if($count > $max) {
  /* I want to move all oldest rows in another database */
  open_new_database();
  attach_old_database_and_fiddle_with_database_handle();
  INSERT INTO new_table SELECT * FROM old_table ORDER BY rowid DESC LIMIT $min;
  close_old_db()
   }
}

With my current values (max ~10 and min ~1000), the switch
operation takes around 150 ms, which is acceptable for me.
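
A rough sketch of that switch in C, with hypothetical file and table names
(error checking omitted, as in the pseudocode above):

    #include "sqlite3.h"

    /* open a fresh database, pull the newest rows across, retire the old one */
    static sqlite3 *rotate(sqlite3 *old)
    {
        sqlite3 *db = NULL;
        sqlite3_close(old);
        sqlite3_open("logs_new.db", &db);
        sqlite3_exec(db, "CREATE TABLE logs (id INTEGER PRIMARY KEY AUTOINCREMENT,"
                         " data TEXT)", 0, 0, 0);
        sqlite3_exec(db, "ATTACH 'logs_old.db' AS old", 0, 0, 0);
        /* keep only the newest rows; they arrive newest-first, as in the
           pseudocode, so order by id descending when reading them back */
        sqlite3_exec(db, "INSERT INTO main.logs (data) SELECT data FROM old.logs"
                         " ORDER BY id DESC LIMIT 1000", 0, 0, 0);
        sqlite3_exec(db, "DETACH old", 0, 0, 0);
        return db;
    }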

> Replying to myself, sorry, it's kinda lame huh?  Could you add a column to
> your schema such as "LOG #" or so, and do all your work in the same
> table?  So if your data max limit is 3 you would have...
>
> rowid  data   logNum
> 1  x   1
> 2  y   1
> 3  z   1
> 4  a   2
> 5  b   2
>
> Just thinking out of my finger tips.
>
>
>
> On Sun, Jun 15, 2008 at 10:20 PM, Rich Rattanni <[EMAIL PROTECTED]> wrote:
>> I am working with SQLite in an embedded environment.  With synchronous
>> = full, I can say large inserts are abysmal (of course I need the
>> protection that full synchronous offers).  Of course, as always what I
>> call large may not be what you call large.  Keep in mind that sqlite
>> will make a journal file equal to roughly the size of the data you
>> will be moving.  Instead of moving the data to a backup, could you
>> create a new table and start dumping data there?  You know, in your
>> program remember the current table (DataLogX).  When it comes time to
>> roll over the log  "CREATE TABLE DataLog(X+1) .Just one man's
>> opinion.
>>
>>
>> On Fri, Jun 13, 2008 at 5:25 AM, Al <[EMAIL PROTECTED]> wrote:
>>> Hello,
>>>
>>> I'm using sqlite to implement a fast logging system in an embedded system.
>>> Mainly for space but also for performance reasons, I need to rotate the
>>> databases.
>>>
>>> The database is queried regularly and I need to keep at least $min rows in 
>>> it.
>>>
>>> What I plan, is inside my logging loop, to do something like this.
>>>
>>> while(1) {
>>>read_informations_from_several_sources();
>>>INSERT(informations);
>>>
>>>if(count > max) {
>>>   /* I want to move all oldest rows in another database */
>>>   BEGIN;
>>>   INSERT INTO logs_backup
>>>SELECT * FROM logs order by rowid limit ($max - $min);
>>>
>>>   DELETE FROM logs WHERE rowid IN (SELECT rowid FROM logs ORDER BY rowid
>>>LIMIT ($max - $min));
>>>   COMMIT;
>>>}
>>> }
>>>
>>> rowid is an autoincremented field.
>>> I am not an SQL expert, and would like to find the fastest solution to
>>> move the oldest rows into another database. Am I doing silly things? Can
>>> it be improved?
>>>
>>> Thanks in advance.
>>>
>>> ___
>>> sqlite-users mailing list
>>> sqlite-users@sqlite.org
>>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>>
>>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>

-- 
Alain ENOUT,  Phone: +33 4 93 00 16 82,  Fax : +33 4 93 00 16 61
UDcast, 2455 Route des Dolines, BP 355, F-06906 Sophia Antipolis, France
UDcast: IP - Broadcast - Wireless  http://www.UDcast.com

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Segmentation Fault when using mod_python / mod_wsgi

2008-06-17 Thread Roger Binns
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Eric Holmberg wrote:
> PyDict_GetItem (op=0x0, key=0xb7ce82cc) at Objects/dictobject.c:571
> 571 if (!PyDict_Check(op))

You would need to supply more of the backtrace since the calling routine
is supplying a null pointer instead of a dictionary.  Nothing points to
SQLite being involved so far ...

Roger
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.6 (GNU/Linux)

iD8DBQFIV3BbmOOfHg372QQRAvwiAJ0eeuVG4XhPNvuvLEj/hO0/hZgZSwCdF2ja
gnIQ5EsuA2ElAE02ZSlw9WU=
=tphF
-----END PGP SIGNATURE-----
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users