Yes, that makes sense: the DB has to store roughly the same amount of
data either way, but now you have only 1 table to deal with.
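
A minimal sketch of what that single table might look like (the table
and column names here are my own invention, chosen to match the query
further down; the index is the per-ticker index Jeff suggests in the
quoted message below):

   -- hypothetical schema: names are illustrative only
   create table stock (
     ticker      text,
     quote_date  text,
     day_open    text,
     day_high    text,
     day_low     text,
     close       text
   );

   -- index for fast lookups by ticker
   create index stock_ticker_idx on stock (ticker);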

But at least now, when you need to add a column, you only need to do it
on 1 table.
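
With one table that's a single statement (a sketch; the column name is
made up):

   -- hypothetical new column; with 2400 tables you'd have to run
   -- the same alter table against every one of them in a loop
   alter table stock add column adj_close text;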

And it reduces the amount of parsing: instead of 2400 different selects,
there is only 1.
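
That matters because one statement can be prepared once and reused with
a bound parameter (a sketch, using SQLite's positional '?' placeholder),
instead of preparing 2400 statements that differ only in the table name:

   -- prepare once, bind a different ticker value on each run
   select quote_date, day_high, day_low, close
     from stock
    where ticker = ? ;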

And you won't need to write custom code to get, say, the stock prices
for one day across all stocks. Instead you could write a single SELECT
statement as follows (of course I'm making up my own column names):

   select ticker, quote_date, day_high, day_low, close
     from stock
    where quote_date = '20080510';

The above assumes you have only 1 row per day; if you have more than 1,
you'll need to add more WHERE clause fields or use MAX, MIN, etc.
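
For example (again with my made-up column names), something like this
would collapse multiple rows per day back down to one row per ticker:

   -- aggregate duplicates for the day into a single row per ticker
   select ticker, quote_date, max(day_high), min(day_low)
     from stock
    where quote_date = '20080510'
    group by ticker, quote_date;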



--- On Wed, 8/20/08, cstrader <[EMAIL PROTECTED]> wrote:
From: cstrader <[EMAIL PROTECTED]>
Subject: Re: [sqlite] 2 Questions from a newbie
To: "General Discussion of SQLite Database" <sqlite-users@sqlite.org>
Date: Wednesday, August 20, 2008, 8:03 AM

Well it turns out that the db created this way (including the index) is no 
smaller than the one with 2400 tables, and the read is no faster!  Does that 
make any sense?

Thanks


----- Original Message ----- 
From: "Jeff Hamilton" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>; "General Discussion of SQLite Database" 
<sqlite-users@sqlite.org>
Sent: Tuesday, August 19, 2008 12:51 PM
Subject: Re: [sqlite] 2 Questions from a newbie


> If all 2,400 tables have the same 9 columns you could try using a
> single table and adding a column for the ticker of the stock and then
> add an index to that column to allow quick lookups based on the
> ticker.
>
> -Jeff
>
> On Tue, Aug 19, 2008 at 9:44 AM, .:UgumugU:. <[EMAIL PROTECTED]> wrote:
>> Hi cstrader,
>>
>> Just send the table structure and explain what you are trying to do.
>> In my opinion it is not a good way to deal with 2400 tables :)
>>
>> ugumugu
>>
>> cstrader wrote:
>>> I'm just starting with SQLite (from vb.net) and could use some
>>> advice.
>>>
>>> I have a set of 2400 (could get substantially higher) tables in a
>>> single database (each table contains daily stock prices for a
>>> different stock).
>>> Each table has 9 columns (all text for now) and several thousand
>>> rows. So far so good.
>>>
>>> I need to read those tables sequentially and operate on them. It
>>> takes about 18 seconds to open each of the tables in sequence (i.e.
>>> loop through 2400 tables with a select command opening each one).
>>>
>>> First question... does that sound about right in terms of speed? Is
>>> there a way to store the data that might be faster? The data are
>>> primarily single precision -- I'm using text format, but perhaps
>>> that's not best?
>>>
>>> Second question:
>>>
>>> When I open each table I need to add some blank columns to it. So I
>>> include some "0 as NewCol" lines to create the new columns with
>>> initial 0s.
>>>
>>> However, adding 20 new columns in this manner increases the total
>>> time for the loop from 19 seconds to 49 seconds. This seems like a
>>> long time. Is there a faster way to get these empty columns in?
>>>
>>> More questions later I'm sure...
>>>
>>> Thanks in advance
>>>
>>> cs
>>>
