Re: [sqlite] suggestion on the database design

2009-08-20 Thread pierr


John Machin wrote:
> 
> On 21/08/2009 1:29 PM, pierr wrote:
>> 
>> Simon Slavin-2 wrote:
>>>
>>> On 21 Aug 2009, at 3:26am, pierr wrote:
>>>
>>>> I did not know the sequence in defining the field matters. This is
>>>> what I should have done.
>>> Sorry, I should have explained better.  You were right: there is no  
>>> difference.  I was just rearranging the fields in the classic way:  
>>> with the primary key column as the first column.  It helps me think  
>>> about how the database works.  You did nothing wrong.
>>>
>> Hi Simon,
>>   It does make a difference. 
>>   With this schema, 
>>   CREATE TABLE IF NOT EXISTS tblIndex(
>>  frame_type INTEGER,
>>  pts VARCHAR(5)
>>  ts_start INTEGER PRIMARY KEY,
>>  ts_end INTEGER,
>>  ) 
>>  There will be a rowid field in the database, and a
>> sqlite_autoindex_tblIndex_1 is built on ts_start automatically, so
>> 1,800,000 records (16 bytes each) take 62 MB.
> 
> Please don't type from memory -- what you have above has TWO syntax 
> errors, and doesn't (with those random comma placements fixed) produce 
> the result that you say -- and do read my previous message.
> 
> Here is an (annotated) copy/paste of an actual session:
> 
> sqlite>   CREATE TABLE IF NOT EXISTS tblIndex(
> ...>  frame_type INTEGER,
> ...>  pts VARCHAR(5)<<<=== missing comma
> ...>  ts_start INTEGER PRIMARY KEY,
> ...>  ts_end INTEGER,
> ...>  )
> ...> ;
> SQL error: near "ts_start": syntax error
> sqlite>   CREATE TABLE IF NOT EXISTS tblIndex(
> ...>  frame_type INTEGER,
> ...>  pts VARCHAR(5),
> ...>  ts_start INTEGER PRIMARY KEY,
> ...>  ts_end INTEGER, <<<=== superfluous comma
> ...>  );
> SQL error: near ")": syntax error
> sqlite>   CREATE TABLE IF NOT EXISTS tblIndex(
> ...>  frame_type INTEGER,
> ...>  pts VARCHAR(5),
> ...>  ts_start INTEGER PRIMARY KEY,
> ...>  ts_end INTEGER
> ...>  );
> sqlite> select * from sqlite_master;
> table|tblIndex|tblIndex|2|CREATE TABLE tblIndex(
>   frame_type INTEGER,
>   pts VARCHAR(5),
>   ts_start INTEGER PRIMARY KEY,
>   ts_end INTEGER
>   ) <<<=== no index !!
> sqlite> drop table tblIndex;
> sqlite>   CREATE TABLE IF NOT EXISTS tblIndex(
> ...>  frame_type INTEGER,
> ...>  pts VARCHAR(5),
> ...>  ts_start INT PRIMARY KEY,  <<<=== using your original INT 
> instead of Simon's INTEGER
> ...>  ts_end INTEGER
> ...>  );
> sqlite> select * from sqlite_master;
> table|tblIndex|tblIndex|2|CREATE TABLE tblIndex(
>   frame_type INTEGER,
>   pts VARCHAR(5),
>   ts_start INT PRIMARY KEY,
>   ts_end INTEGER
>   )
> index|sqlite_autoindex_tblIndex_1|tblIndex|3| <<<=== index!!
> sqlite>
> 
> 
After reading your previous message, I now understand the difference
between "INT PRIMARY KEY" and "INTEGER PRIMARY KEY", which I had not noticed
before. And sorry for "typing from memory" :)
 

-- 
View this message in context: 
http://www.nabble.com/suggestion-on-the-database-design-tp25062134p25074149.html
Sent from the SQLite mailing list archive at Nabble.com.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] suggestion on the database design

2009-08-20 Thread John Machin
On 21/08/2009 1:29 PM, pierr wrote:
> 
> Simon Slavin-2 wrote:
>>
>> On 21 Aug 2009, at 3:26am, pierr wrote:
>>
>>> I did not know the sequence in defining the field matters. This is
>>> what I should have done.
>> Sorry, I should have explained better.  You were right: there is no  
>> difference.  I was just rearranging the fields in the classic way:  
>> with the primary key column as the first column.  It helps me think  
>> about how the database works.  You did nothing wrong.
>>
> Hi Simon,
>   It does make a difference. 
>   With this schema, 
>   CREATE TABLE IF NOT EXISTS tblIndex(
>  frame_type INTEGER,
>  pts VARCHAR(5)
>  ts_start INTEGER PRIMARY KEY,
>  ts_end INTEGER,
>  ) 
>  There will be a rowid field in the database, and a
> sqlite_autoindex_tblIndex_1 is built on ts_start automatically, so 1,800,000
> records (16 bytes each) take 62 MB.

Please don't type from memory -- what you have above has TWO syntax 
errors, and doesn't (with those random comma placements fixed) produce 
the result that you say -- and do read my previous message.

Here is an (annotated) copy/paste of an actual session:

sqlite>   CREATE TABLE IF NOT EXISTS tblIndex(
...>  frame_type INTEGER,
...>  pts VARCHAR(5)<<<=== missing comma
...>  ts_start INTEGER PRIMARY KEY,
...>  ts_end INTEGER,
...>  )
...> ;
SQL error: near "ts_start": syntax error
sqlite>   CREATE TABLE IF NOT EXISTS tblIndex(
...>  frame_type INTEGER,
...>  pts VARCHAR(5),
...>  ts_start INTEGER PRIMARY KEY,
...>  ts_end INTEGER, <<<=== superfluous comma
...>  );
SQL error: near ")": syntax error
sqlite>   CREATE TABLE IF NOT EXISTS tblIndex(
...>  frame_type INTEGER,
...>  pts VARCHAR(5),
...>  ts_start INTEGER PRIMARY KEY,
...>  ts_end INTEGER
...>  );
sqlite> select * from sqlite_master;
table|tblIndex|tblIndex|2|CREATE TABLE tblIndex(
  frame_type INTEGER,
  pts VARCHAR(5),
  ts_start INTEGER PRIMARY KEY,
  ts_end INTEGER
  ) <<<=== no index !!
sqlite> drop table tblIndex;
sqlite>   CREATE TABLE IF NOT EXISTS tblIndex(
...>  frame_type INTEGER,
...>  pts VARCHAR(5),
...>  ts_start INT PRIMARY KEY,  <<<=== using your original INT 
instead of Simon's INTEGER
...>  ts_end INTEGER
...>  );
sqlite> select * from sqlite_master;
table|tblIndex|tblIndex|2|CREATE TABLE tblIndex(
  frame_type INTEGER,
  pts VARCHAR(5),
  ts_start INT PRIMARY KEY,
  ts_end INTEGER
  )
index|sqlite_autoindex_tblIndex_1|tblIndex|3| <<<=== index!!
sqlite>




Re: [sqlite] suggestion on the database design

2009-08-20 Thread pierr


Simon Slavin-2 wrote:
> 
> 
> On 21 Aug 2009, at 3:26am, pierr wrote:
> 
>> I did not know the sequence in defining the field matters. This is
>> what I should have done.
> 
> Sorry, I should have explained better.  You were right: there is no  
> difference.  I was just rearranging the fields in the classic way:  
> with the primary key column as the first column.  It helps me think  
> about how the database works.  You did nothing wrong.
> 
Hi Simon,
  It does make a difference. 
  With this schema, 
  CREATE TABLE IF NOT EXISTS tblIndex(
 frame_type INTEGER,
 pts VARCHAR(5)
 ts_start INTEGER PRIMARY KEY,
 ts_end INTEGER,
 ) 
 There will be a rowid field in the database, and a
sqlite_autoindex_tblIndex_1 is built on ts_start automatically, so 1,800,000
records (16 bytes each) take 62 MB.

  With your schema, which is much better:
  CREATE TABLE IF NOT EXISTS tblIndex(
 ts_start INTEGER PRIMARY KEY,
 ts_end INTEGER,
 frame_type INTEGER,
 pts VARCHAR(5)
 ) 
 There will be NO separate rowid field in the database (ts_start serves as
the rowid), and 1,800,000 records (16 bytes each) take only 35 MB.




>>> Whichever one of these is at fault, a delete command selecting on an
>>> indexed column and deleting 90,000 records from a five-column table
>>> should not take 17 seconds.
>>
>> I am sorry, I should have mentioned that it (17 seconds to delete
>> 90,000) was tested on a 350M MIPS CPU. And after changing to the
>> schema you suggested, it still takes 17 seconds to delete 90,000
>> records. On my 1.8 GHz Ubuntu desktop it took 800 ms. So is this
>> delete performance the limit we can achieve? Are there any other
>> options to improve this?
> 
> 
>> BTW: I used the following options to build the library. Is there
>> anything I can explore here? (-O2 and -Os seem to make no big
>> difference in performance.)
>> mips24k-linux-gcc -Os -fPIC -c *.c
>> mips24k-linux-gcc -shared -o mips_libsqlite3.so.1 sqlite3.o
> 
> 
> Putting these together, your 350 MIPS CPU machine is a MIPS 24K  
> machine.  There are various things to consider: not only CPU speed but  
> also memory bandwidth, memory speed, hard disk throughput, and other  
> things I have no idea about.  And I have no idea what part threading  
> would play on such a platform.  Sorry but I have no experience with  
> these.  Perhaps someone else here does.
> 
> Simon.
> 
Thanks for your insight, Simon.

-- 
View this message in context: 
http://www.nabble.com/suggestion-on-the-database-design-tp25062134p25073602.html
Sent from the SQLite mailing list archive at Nabble.com.



Re: [sqlite] suggestion on the database design

2009-08-20 Thread John Machin
On 21/08/2009 12:59 PM, Simon Slavin wrote:
> On 21 Aug 2009, at 3:26am, pierr wrote:
> 
>> I did not know the sequence in defining the field matters. This is
>> what I should have done.
> 
> Sorry, I should have explained better.  You were right: there is no  
> difference.  I was just rearranging the fields in the classic way:  
> with the primary key column as the first column.  It helps me think  
> about how the database works.  You did nothing wrong.

Pierr had "INT primary key". Simon suggested "INTEGER primary key" which 
is *better* -- it means that the PK is also the rowid (saves space in 
the table) and you don't need a separate index for the PK. See 
http://sqlite.org/lang_createtable.html#rowid
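
The distinction is easy to observe from any binding. A quick sketch using
Python's stdlib sqlite3 module (hypothetical table names, column names from
the thread):

```python
import sqlite3

con = sqlite3.connect(":memory:")

# INTEGER PRIMARY KEY: the column becomes an alias for the rowid,
# so no separate index is needed to enforce the primary key.
con.execute("CREATE TABLE t_integer (ts_start INTEGER PRIMARY KEY, ts_end INTEGER)")

# INT PRIMARY KEY: the column is NOT a rowid alias, so SQLite builds an
# automatic index (sqlite_autoindex_...) to enforce uniqueness.
con.execute("CREATE TABLE t_int (ts_start INT PRIMARY KEY, ts_end INTEGER)")

indexes = [name for (name,) in con.execute(
    "SELECT name FROM sqlite_master WHERE type = 'index'")]
print(indexes)  # only t_int gets an automatic index

# The alias also means the primary key value IS the rowid.
con.execute("INSERT INTO t_integer (ts_start, ts_end) VALUES (100, 200)")
rowid = con.execute("SELECT rowid FROM t_integer").fetchone()[0]
print(rowid)
```

This is exactly the 62 MB vs 35 MB difference pierr measured: the INT
variant stores every key twice (once in the table, once in the automatic
index).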

Perhaps Pierr's problem is related to the "usb interface" on the hard drive?




Re: [sqlite] suggestion on the database design

2009-08-20 Thread Simon Slavin

On 21 Aug 2009, at 3:26am, pierr wrote:

> I did not know the sequence in defining the field matters. This is
> what I should have done.

Sorry, I should have explained better.  You were right: there is no  
difference.  I was just rearranging the fields in the classic way:  
with the primary key column as the first column.  It helps me think  
about how the database works.  You did nothing wrong.

>> Whichever one of these is at fault, a delete command selecting on an
>> indexed column and deleting 90,000 records from a five-column table
>> should not take 17 seconds.
>
> I am sorry, I should have mentioned that it (17 seconds to delete
> 90,000) was tested on a 350M MIPS CPU. And after changing to the
> schema you suggested, it still takes 17 seconds to delete 90,000
> records. On my 1.8 GHz Ubuntu desktop it took 800 ms. So is this
> delete performance the limit we can achieve? Are there any other
> options to improve this?

Ah.  Okay, less than 1 second is reasonable for a standard computer.   
If your code performs at this speed then there's no evidence of any  
problem with your code.

> BTW: I used the following options to build the library. Is there
> anything I can explore here? (-O2 and -Os seem to make no big
> difference in performance.)
> mips24k-linux-gcc -Os -fPIC -c *.c
> mips24k-linux-gcc -shared -o mips_libsqlite3.so.1 sqlite3.o

Putting these together, your 350 MIPS CPU machine is a MIPS 24K  
machine.  There are various things to consider: not only CPU speed but  
also memory bandwidth, memory speed, hard disk throughput, and other  
things I have no idea about.  And I have no idea what part threading  
would play on such a platform.  Sorry but I have no experience with  
these.  Perhaps someone else here does.

Simon.


Re: [sqlite] suggestion on the database design

2009-08-20 Thread pierr


Simon Slavin-2 wrote:
> 
> If ts_start is an always increasing integer (i.e. no two records have  
> the same value) then I would see it as your primary key and as your  
> row id.  In other words, your structure is more like
> 
>   CREATE TABLE IF NOT EXISTS tblIndex(
>  ts_start INTEGER PRIMARY KEY,
>  ts_end   INTEGER,
>  frame_type   INTEGER,
>  pts  VARCHAR(5))
> 
> 
Thanks. I did not know that the sequence in which the fields are defined
matters. This is what I should have done.


Simon Slavin-2 wrote:
> 
> inserting 50 records a second shouldn't be a problem.  If they're  
> generated in batches rather than individually, use BEGIN ... COMMIT to  
> speed up the insertion dramatically but it should be fast enough  
> either way.
> 
> What interface or API to SQLite are you using ?  Or are you writing C  
> code and using the native library commands ?  17 seconds is far longer  
> than expected if you did everything as you described above.  It should  
> be far faster even without multiple threads/connections.
> 
> To test it, get your database ready to do one of the DELETE FROM  
> commands, and have your program quit or abort.  Then open the database  
> in the sqlite3 command-line application, and issue the exact DELETE  
> FROM command your application would execute.  If the command-line  
> program takes 17 seconds then the problem is in your data somewhere.   
> If it doesn't, the problem is in your own application or in your use  
> of the API you're using.
> 
>> Any way to reduce this? We don't care if the records are synced to
>> the hard disk immediately. We are thinking of starting a separate
>> thread to do the delete so as to make this call async. The thing I
>> am not sure about is whether the (long-running) delete will impact
>> the insert performance if they share the same connection. Or should
>> I use a separate connection for insert and delete? But in that case,
>> do they need to be synced at the application level?
> 
> Whichever one of these is at fault, a delete command selecting on an  
> indexed column and deleting 90,000 records from a five-column table  
> should not take 17 seconds.
> 
> 

I am sorry, I should have mentioned that it (17 seconds to delete 90,000) was
tested on a 350M MIPS CPU. And after changing to the schema you suggested,
it still takes 17 seconds to delete 90,000 records.
On my 1.8 GHz Ubuntu desktop it took 800 ms. So is this delete performance
the limit we can achieve? Are there any other options to improve this?

BTW: I used the following options to build the library. Is there anything I
can explore here? (-O2 and -Os seem to make no big difference in performance.)
mips24k-linux-gcc -Os -fPIC -c *.c
mips24k-linux-gcc -shared -o mips_libsqlite3.so.1 sqlite3.o

-- 
View this message in context: 
http://www.nabble.com/suggestion-on-the-database-design-tp25062134p25073115.html
Sent from the SQLite mailing list archive at Nabble.com.



[sqlite] (no subject)

2009-08-20 Thread erik
Hi guys,

This is my first post. I am creating a simple document archiving program
for small businesses. I am writing it in a scripting language called
AutoHotkey (www.autohotkey.com). I intend to place the SQLite database file
on a network share and use sqlite.dll to access and manipulate it.

In general, everything is on a relatively small scale: there will be fewer
than 10 users who will occasionally interact with the database, there will
be around 4 tables and, based on experience with a similar system, I don't
expect a total of more than 5 records after a few years of use. The
client computers will be Windows XP or newer and the database file will be
located on a network share on a Windows 2000 server or newer.

1. I have read that the file locking mechanisms on older windows networks
are not very reliable and that it is not advisable to use SQLite on NFS or
network shares. Given the robustness and efficiency of SQLite and the low
frequency of use of my application, do I still need to worry about placing
the database on a network share?

2. Should I modify any of the default settings to better suit this
environment?

3. I am having problems reading and writing international characters to
and from the database, specifically the norwegian characters æ, ø and å.
If I use sqlite.exe to create a record containing æ, ø or å, I can read
the record using sqlite.exe without any problems. Likewise, if I use
SQLiteSpy to create a record containing ø, æ or å I can read the record
using SQLiteSpy without any problems. But if I create a record in
sqlite.exe and try to read it with SQLiteSpy or vice versa, it doesn't
work as expected and the special characters are converted to all sorts of
oddball symbols like squares and question marks. I assume this is somehow
due to different ASCII/UTF encodings, but how can these problems be
avoided?

4. Select commands are case sensitive with æ, ø and å. Is there a simple
workaround for this?
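
[For questions 3 and 4, it helps to separate SQLite's behaviour from the
tools': SQLite stores TEXT as UTF-8 (or UTF-16) and round-trips Norwegian
characters losslessly as long as every client talks UTF-8 to it; the
oddball symbols usually mean one tool is writing the Windows ANSI code page
into the file instead. Case folding, on the other hand, is ASCII-only by
default. A sketch with Python's stdlib sqlite3 — the table and strings are
made up for illustration:]

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE docs (title TEXT)")
con.execute("INSERT INTO docs VALUES (?)", ("Årsrapport for Brønnøysund",))

# Round trip is lossless when the client consistently encodes UTF-8.
row = con.execute("SELECT title FROM docs").fetchone()
print(row[0])

# upper()/lower() (and LIKE) only case-fold ASCII by default,
# so 'å' and 'Å' are treated as different characters.
ascii_fold = con.execute("SELECT upper('abc æøå')").fetchone()[0]
print(ascii_fold)  # the Norwegian letters are left untouched
```

[Working around the ASCII-only folding generally means registering a custom
collation or user function in whichever binding is used, or storing a
pre-lowercased shadow column to search against.]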


Regards,
Erik



[sqlite] ANN: C#-SQLite 3.6.17

2009-08-20 Thread Noah Hart

C#-SQLite has been updated to release 3.6.17 and is now ready for review.  

It currently runs 30992 tests with 0 errors, but still has issues with 4
tests.

The project is located at http://code.google.com/p/csharp-sqlite/

Please keep in mind the following:

* C#-SQLite is an independent reimplementation of the SQLite software
library
* This is not an official version of SQLite
* Bugs should not be reported to the SQLite.org ticket tracking system

SQLite® is a registered trademark of Hipp, Wyrick & Company, Inc


Enjoy,

Noah Hart 
-- 
View this message in context: 
http://www.nabble.com/ANN%3A-C--SQLite-3.6.17-tp25070237p25070237.html
Sent from the SQLite mailing list archive at Nabble.com.



Re: [sqlite] Copying an open db file

2009-08-20 Thread Igor Tandetnik
Angus March  wrote:
> Igor Tandetnik wrote:
>> Angus March  wrote:
>>
>>> I want to copy a db file while it is still open, and I'm wondering
>>> how safe that is. It would go something like this:
>>>
>>>   1. Lock exclusively with PRAGMA locking_mode=EXCLUSIVE; many
>>>  processes are accessing the db, after all
>>>   2. UPDATE a_table SET a_column=0;
>>>   3. After finalizing (I'm using the C API) the queries in 1 and 2,
>>>  but before closing the connection to the db, I would copy the
>>> db file.
>>>
>>
>> Why not just run BEGIN EXCLUSIVE before copying, and ROLLBACK (or
>> COMMIT, doesn't matter since you made no changes) afterward?
>>
>
>Who knows? SQLite might cache things in userspace, or make
> important changes that only a connection close would trigger.

If BEGIN EXCLUSIVE succeeds, it means there are no outstanding 
transactions on any other connection and all changes are committed to 
the disk surface. I'm not sure I understand the scenario you are 
concerned about.

Igor Tandetnik 





Re: [sqlite] Copying an open db file

2009-08-20 Thread Kit
2009/8/20 Angus March :
> I want to copy a db file while it is still open, and I'm wondering how
> safe that is.

.dump
-- 
Kit


Re: [sqlite] Linking all the column fields together (or something like this)

2009-08-20 Thread Kit
2009/8/20 ZB :
> I need to search a table, which has - say - about 20 columns. Each column
> should be checked against given string ("phrase"), in usual manner:
>
>  SELECT something FROM table WHERE col1 LIKE '%phrase%' OR col2 LIKE
>  '%phrase%' OR col3 LIKE '%phrase%' ... OR col20 LIKE '%phrase%'
>
> I'm afraid it will be inefficient. Perhaps it would be better to concatenate
> all the fields together and search for the first occurrence of "phrase" only
> in that big string? Is there any way to do it like that?
>
> Or perhaps there's other, better solution?
>                                pozdrawiam / regards
>                                                Zbigniew

SELECT something FROM table WHERE (col1||';'||col2||...||col20) LIKE '%phrase%'
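
[One caveat worth adding to this trick — my note, not Kit's: || propagates
NULL, so a single NULL column hides matches in all the others unless each
column is wrapped in ifnull()/coalesce(). A sketch with Python's stdlib
sqlite3 and a made-up two-column table:]

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (col1 TEXT, col2 TEXT)")
con.execute("INSERT INTO t VALUES ('phrase here', NULL)")

# Naive concatenation: col2 is NULL, so the whole expression is NULL
# and the LIKE never matches.
naive = con.execute(
    "SELECT count(*) FROM t WHERE (col1 || ';' || col2) LIKE '%phrase%'"
).fetchone()[0]

# Guarding each column with ifnull() restores the expected behaviour.
guarded = con.execute(
    "SELECT count(*) FROM t WHERE (ifnull(col1,'') || ';' || ifnull(col2,'')) "
    "LIKE '%phrase%'"
).fetchone()[0]
print(naive, guarded)
```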
-- 
Kit


Re: [sqlite] Copying an open db file

2009-08-20 Thread Angus March
Igor Tandetnik wrote:
> Angus March  wrote:
>   
>> I want to copy a db file while it is still open, and I'm wondering how
>> safe that is. It would go something like this:
>>
>>   1. Lock exclusively with PRAGMA locking_mode=EXCLUSIVE; many processes
>>  are accessing the db, after all
>>   2. UPDATE a_table SET a_column=0;
>>   3. After finalizing (I'm using the C API) the queries in 1 and 2,
>>  but before closing the connection to the db, I would copy the db
>> file.
>> 
>
> Why not just run BEGIN EXCLUSIVE before copying, and ROLLBACK (or 
> COMMIT, doesn't matter since you made no changes) afterward?
>   

Who knows? SQLite might cache things in userspace, or make
important changes that only a connection close would trigger. I guess a
partial solution would involve keeping the synchronization on.


Re: [sqlite] Copying an open db file

2009-08-20 Thread Igor Tandetnik
Angus March  wrote:
> I want to copy a db file while it is still open, and I'm wondering how
> safe that is. It would go something like this:
>
>   1. Lock exclusively with PRAGMA locking_mode=EXCLUSIVE; many processes
>  are accessing the db, after all
>   2. UPDATE a_table SET a_column=0;
>   3. After finalizing (I'm using the C API) the queries in 1 and 2,
>  but before closing the connection to the db, I would copy the db
> file.

Why not just run BEGIN EXCLUSIVE before copying, and ROLLBACK (or 
COMMIT, doesn't matter since you made no changes) afterward?

Igor Tandetnik 





[sqlite] Copying an open db file

2009-08-20 Thread Angus March
I want to copy a db file while it is still open, and I'm wondering how
safe that is. It would go something like this:

   1. Lock exclusively with PRAGMA locking_mode=EXCLUSIVE; many processes
  are accessing the db, after all
   2. UPDATE a_table SET a_column=0;
   3. After finalizing (I'm using the C API) the queries in 1 and 2, but
  before closing the connection to the db, I would copy the db file.

Is there any danger that the copy would be corrupt? Would the update I
performed be reflected in the new db?


[sqlite] Linking all the column fields together (or something like this)

2009-08-20 Thread ZB
I need to search a table which has - say - about 20 columns. Each column
should be checked against a given string ("phrase"), in the usual manner:

  SELECT something FROM table WHERE col1 LIKE '%phrase%' OR col2 LIKE
  '%phrase%' OR col3 LIKE '%phrase%' ... OR col20 LIKE '%phrase%'

I'm afraid it will be inefficient. Perhaps it would be better to concatenate
all the fields together and search for the first occurrence of "phrase" only
in that big string? Is there any way to do it like that?

Or perhaps there's other, better solution?
-- 
pozdrawiam / regards

Zbigniew


Re: [sqlite] suggestion on the database design

2009-08-20 Thread Simon Slavin

On 20 Aug 2009, at 2:33pm, pierr wrote:

>  We have a database with a very simple schema:
>CREATE TABLE IF NOT EXISTS tblIndex(
> frame_type  INT,
> pts  VARCHAR(5),
> ts_start  INT primary key,
> ts_end  INT)
>
> And the application scenario is :
> 1. Every second usr will insert 2 ~ 50 records ,and the ts_start  
> fields of
> those records is always increasing.

If ts_start is an always increasing integer (i.e. no two records have  
the same value) then I would see it as your primary key and as your  
row id.  In other words, your structure is more like

  CREATE TABLE IF NOT EXISTS tblIndex(
 ts_start   INTEGER PRIMARY KEY,
 ts_end INTEGER,
 frame_type INTEGER,
 ptsVARCHAR(5))

Inserting 50 records a second shouldn't be a problem.  If they're  
generated in batches rather than individually, use BEGIN ... COMMIT to  
speed up the insertion dramatically, but it should be fast enough  
either way.
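
[Simon's BEGIN ... COMMIT advice, sketched with Python's stdlib sqlite3;
the schema is Simon's from earlier in the thread, with the row count
scaled down from 1,800,000:]

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE IF NOT EXISTS tblIndex(
    ts_start   INTEGER PRIMARY KEY,
    ts_end     INTEGER,
    frame_type INTEGER,
    pts        VARCHAR(5))""")

# One transaction around the whole batch: SQLite syncs once at COMMIT
# instead of once per INSERT.
rows = ((ts, ts + 1, 0, "p") for ts in range(10_000))
with con:  # emits BEGIN, then COMMIT on success
    con.executemany(
        "INSERT INTO tblIndex (ts_start, ts_end, frame_type, pts) "
        "VALUES (?, ?, ?, ?)", rows)

n = con.execute("SELECT count(*) FROM tblIndex").fetchone()[0]
print(n)
```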

> After 8 hours, there are at most
> 1_800_000 records. By setting the sync mode to off, the performance
> seems OK so far. And because the data of each record is only 16 bytes,
> we may use some buffering even if the insert speed is not too fast.


Won't do any harm, but might not be worth doing either.  Test it with  
minimal programming first.

> 2. After 8 hours, the user will tell me to delete the oldest data by
> giving the upper bound ts_start, so I will do DELETE FROM tblIndex
> WHERE ts_start < upper_bound_ts_start. Deleting 90_000 records (half
> an hour's worth) out of the 1_800_000 now takes 17 seconds. A little
> bit longer than expected.

What interface or API to SQLite are you using ?  Or are you writing C  
code and using the native library commands ?  17 seconds is far longer  
than expected if you did everything as you described above.  It should  
be far faster even without multiple threads/connections.

To test it, get your database ready to do one of the DELETE FROM  
commands, and have your program quit or abort.  Then open the database  
in the sqlite3 command-line application, and issue the exact DELETE  
FROM command your application would execute.  If the command-line  
program takes 17 seconds then the problem is in your data somewhere.   
If it doesn't, the problem is in your own application or in your use  
of the API you're using.

> Any way to reduce this? We don't care if the records are synced to
> the hard disk immediately. We are thinking of starting a separate
> thread to do the delete so as to make this call async. The thing I
> am not sure about is whether the (long-running) delete will impact
> the insert performance if they share the same connection. Or should
> I use a separate connection for insert and delete? But in that case,
> do they need to be synced at the application level?

Whichever one of these is at fault, a delete command selecting on an  
indexed column and deleting 90,000 records from a five-column table  
should not take 17 seconds.

> 3. Search: SELECT ts_start FROM tblIndex WHERE ts_start BETWEEN ? AND ?
> As ts_start is the primary key, the performance is OK for our needs
> now. I am thinking I should use a separate connection for search, right?

Your data requirements are not huge.  The system should be fast enough  
no matter whether you use one connection or many.

Simon.



Re: [sqlite] Problem with sqlite3_column_origin_name and AS Clause

2009-08-20 Thread Dinu Scheppelmann (DAISY)
"Igor Tandetnik"  wrote in the newsgroup message
news:h6bg82$td...@ger.gmane.org...
> Dinu Scheppelmann (DAISY) wrote:
> > Unfortunately when I get the column names by function
> > sqlite3_column_origin_name(), the result columns have the names
> > "DokId", "Name" and "Name" again - instead of "Id", "PatientName" and
> > "Name"!!
>
> You want sqlite3_column_name
>
> Igor Tandetnik
>
>
>
>

Thanks for the fast help ;-)



Re: [sqlite] suggestion on the database design

2009-08-20 Thread P Kishor
On Thu, Aug 20, 2009 at 8:33 AM, pierr wrote:
>
> Hi,
>  We have a database with a very simple schema:
>    CREATE TABLE IF NOT EXISTS tblIndex(
>     frame_type  INT,
>     pts  VARCHAR(5),
>     ts_start  INT primary key,
>     ts_end  INT)
>
> And the application scenario is :
> 1. Every second the user will insert 2 ~ 50 records, and the ts_start field
> of those records is always increasing. After 8 hours, there are at most
> 1_800_000 records. By setting the sync mode to off, the performance seems OK
> so far. And because the data of each record is only 16 bytes, we may use
> some buffering even if the insert speed is not too fast.
>
> 2. After 8 hours, the user will tell me to delete the oldest data by giving
> the upper bound ts_start, so I will do DELETE FROM tblIndex WHERE ts_start <
> upper_bound_ts_start. Deleting 90_000 records (half an hour's worth) out of
> the 1_800_000 now takes 17 seconds. A little bit longer than expected.
> Any way to reduce this?


With a simple schema as below (no indexes other than the stock INTEGER
PRIMARY KEY on ts_start)

CREATE TABLE tblIndex (
ts_start INTEGER PRIMARY KEY,
frame_type INTEGER,
pts VARCHAR(5),
ts_end INTEGER
)

and using Perl DBD::SQLite on my Macbook, I get the following times

65 seconds to insert 1,800,000 records with a commit every 100,000

1 second to delete the first 90,000 records


> We don't care if the records are synced to the hard
> disk immediately. We are thinking of starting a separate thread to do the
> delete so as to make this call async. The thing I am not sure about is
> whether the (long-running) delete will impact the insert performance if they
> share the same connection. Or should I use a separate connection for insert
> and delete? But in that case, do they need to be synced at the application
> level?
>
> 3. Search: SELECT ts_start FROM tblIndex WHERE ts_start BETWEEN ? AND ?
> As ts_start is the primary key, the performance is OK for our needs now. I
> am thinking I should use a separate connection for search, right?
>
> Configuration of sqlite :
> hard disk database (usb interface)
> cache size is 2000
> page size is 1024
> sync mode is 0 (off)
> journal_mode mode is truncate
>
>
> Thanks for any suggestion to improve the delete performance or about the
> overall design.
> --
> View this message in context: 
> http://www.nabble.com/suggestion-on-the-database-design-tp25062134p25062134.html
> Sent from the SQLite mailing list archive at Nabble.com.
>
>



-- 
Puneet Kishor http://www.punkish.org
Carbon Model http://carbonmodel.org
Charter Member, Open Source Geospatial Foundation http://www.osgeo.org
Science Commons Fellow, http://sciencecommons.org/about/whoweare/kishor
Nelson Institute, UW-Madison http://www.nelson.wisc.edu
---
Assertions are politics; backing up assertions with evidence is science
===


[sqlite] suggestion on the database design

2009-08-20 Thread pierr

Hi, 
  We have a database with a very simple schema:
CREATE TABLE IF NOT EXISTS tblIndex(
 frame_type  INT,
 pts  VARCHAR(5),
 ts_start  INT primary key,
 ts_end  INT)

And the application scenario is :
1. Every second the user will insert 2 ~ 50 records, and the ts_start field of
those records is always increasing. After 8 hours, there are at most
1_800_000 records. By setting the sync mode to off, the performance seems OK
so far. And because the data of each record is only 16 bytes, we may use
some buffering even if the insert speed is not too fast.

2. After 8 hours, the user will tell me to delete the oldest data by giving
the upper bound ts_start, so I will do DELETE FROM tblIndex WHERE ts_start <
upper_bound_ts_start. Deleting 90_000 records (half an hour's worth) out of
the 1_800_000 now takes 17 seconds. A little bit longer than expected.
Any way to reduce this? We don't care if the records are synced to the hard
disk immediately. We are thinking of starting a separate thread to do the
delete so as to make this call async. The thing I am not sure about is
whether the (long-running) delete will impact the insert performance if they
share the same connection. Or should I use a separate connection for insert
and delete? But in that case, do they need to be synced at the application
level?

3. Search: SELECT ts_start FROM tblIndex WHERE ts_start BETWEEN ? AND ?
As ts_start is the primary key, the performance is OK for our needs now. I
am thinking I should use a separate connection for search, right?

Configuration of sqlite :
hard disk database (usb interface)
cache size is 2000 
page size is 1024 
sync mode is 0 (off)
journal_mode mode is truncate 


Thanks for any suggestion to improve the delete performance or about the
overall design.
-- 
View this message in context: 
http://www.nabble.com/suggestion-on-the-database-design-tp25062134p25062134.html
Sent from the SQLite mailing list archive at Nabble.com.



[sqlite] Kevin Smekens is out of the office.

2009-08-20 Thread Kevin Smekens

I will be out of the office starting  2009/08/20 and will not return until
2009/08/31.


Re: [sqlite] update - select

2009-08-20 Thread Pavel Ivanov
If you have a synonym for rowid (i.e. a column declared INTEGER PRIMARY KEY)
or some other unique column combination, then you can do something like
this:

INSERT OR REPLACE INTO table_1 (rowid, field_a, field_b, field_c, field_d)
SELECT table_1.rowid, table_2.field_a, table_2.field_b,
  table_2.field_c, table_2.field_d
FROM table_1, table_2 WHERE …
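
[A sketch of Pavel's approach using Python's stdlib sqlite3, with a made-up
two-table schema based on Gerald's field names. One caveat: REPLACE rewrites
the entire row, so any column of table_1 not named in the INSERT is reset to
its default:]

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE table_1 (id INTEGER PRIMARY KEY, field_a TEXT, field_b TEXT);
    CREATE TABLE table_2 (id INTEGER, field_a TEXT, field_b TEXT);
    INSERT INTO table_1 VALUES (1, 'old_a', 'old_b');
    INSERT INTO table_2 VALUES (1, 'new_a', 'new_b');
""")

# REPLACE deletes the conflicting row and inserts the SELECT result,
# effectively updating several columns from table_2 in one statement.
con.execute("""
    INSERT OR REPLACE INTO table_1 (id, field_a, field_b)
    SELECT table_1.id, table_2.field_a, table_2.field_b
    FROM table_1 JOIN table_2 ON table_2.id = table_1.id
""")
row = con.execute("SELECT field_a, field_b FROM table_1 WHERE id = 1").fetchone()
print(row)
```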


Pavel

On Thu, Aug 20, 2009 at 6:01 AM, Gerald Ebner wrote:
> Dear all,
>
> it seems that sqlite does not allow update statements of this kind:
>
> UPDATE table_1 SET (field_a, field_b, field_c, field_d) = (
>    SELECT field_a, field_b, field_c, field_d FROM table_2 WHERE …
> )
> WHERE 
>
> I succeeded only with
>
> UPDATE table_1 SET
>   field_a = (SELECT field_a FROM table_2 WHERE … ),
>   field_b = (SELECT field_b FROM table_2 WHERE … ),
>   field_c = (SELECT field_c FROM table_2 WHERE … ),
>   field_d = (SELECT field_d FROM table_2 WHERE … )
> WHERE 
>
> and that may not be the optimal way when dealing with tables with a
> dozen fields.
>
> thanks in advance
> Geraldo
>
>


Re: [sqlite] update - select

2009-08-20 Thread Jonas Sandman
On Thu, Aug 20, 2009 at 12:01 PM, Gerald Ebner wrote:
> Dear all,
>
> it seems that sqlite does not allow update statements of this kind:
>
> UPDATE table_1 SET (field_a, field_b, field_c, field_d) = (
>    SELECT field_a, field_b, field_c, field_d FROM table_2 WHERE …
> )
> WHERE 
>

Is this really proper standard SQL?

Regards,
Jonas


[sqlite] update - select

2009-08-20 Thread Gerald Ebner
Dear all,

it seems that sqlite does not allow update statements of this kind:

UPDATE table_1 SET (field_a, field_b, field_c, field_d) = (
SELECT field_a, field_b, field_c, field_d FROM table_2 WHERE …
)
WHERE 

I succeeded only with

UPDATE table_1 SET
   field_a = (SELECT field_a FROM table_2 WHERE … ),
   field_b = (SELECT field_b FROM table_2 WHERE … ),
   field_c = (SELECT field_c FROM table_2 WHERE … ),
   field_d = (SELECT field_d FROM table_2 WHERE … )
WHERE 

and that may not be the optimal way when dealing with tables with a dozen
fields.

thanks in advance
Geraldo
