Third normal form says something like this (rough SQL below):

        Table:  Stock
                        ID              Integer Primary Key,
                        Symbol  Varchar(n),
                        ... Other stuff ....

        Table: DailyPrice (I'd call it "Price")
                        StockID         Integer (foreign key: Stock.ID)
                        Instance        DateTime
                        Price           Integer (price times 100/1000/10000, your choice)
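
In SQLite that schema would look roughly like the sketch below -- just my
own suggestion for names and types, not the only way to do it:

        -- One row per stock
        CREATE TABLE Stock (
            ID      INTEGER PRIMARY KEY,
            Symbol  TEXT NOT NULL
            -- ... other stuff ...
        );

        -- One row per stock per sample time
        CREATE TABLE Price (
            StockID  INTEGER NOT NULL REFERENCES Stock(ID),
            Instance TEXT    NOT NULL,   -- ISO-8601, e.g. '2008-08-19 16:00:00'
            Price    INTEGER NOT NULL,   -- price times 100/1000/10000, your choice
            PRIMARY KEY (StockID, Instance)
        );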

Then you can have virtually unlimited stocks with virtually unlimited
prices.  Price sampling can be done at any rate (e.g. seconds, hourly,
daily, weekly, etc.), and your queries would determine the "closing price"
hourly, daily, monthly, etc. (see the query sketch below).
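
For example, a daily "closing" price (the last sample for each stock on
each day) could come from a query along these lines, assuming Instance is
stored as ISO-8601 text so SQLite's date() function works on it:

        SELECT StockID, date(Instance) AS Day, Price
        FROM   Price AS p
        WHERE  Instance = (SELECT MAX(Instance)
                           FROM   Price
                           WHERE  StockID = p.StockID
                             AND  date(Instance) = date(p.Instance));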

Thoughts to ponder: how would you handle stock splits, mergers, and the like?

Fred

-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] Behalf Of cstrader
Sent: Tuesday, August 19, 2008 11:53 AM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] 2 Questions from a newbie


OK, cool idea... let me try that.


----- Original Message -----
From: "Jeff Hamilton" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>; "General Discussion of SQLite Database"
<sqlite-users@sqlite.org>
Sent: Tuesday, August 19, 2008 12:51 PM
Subject: Re: [sqlite] 2 Questions from a newbie


> If all 2,400 tables have the same 9 columns, you could try using a
> single table instead: add a column for the stock's ticker, then put an
> index on that column to allow quick lookups by ticker.
>
> -Jeff
>
> On Tue, Aug 19, 2008 at 9:44 AM, .:UgumugU:. <[EMAIL PROTECTED]> wrote:
>> Hi cstrader,
>>
>> Just send your table structure and then explain what you are trying to do.
>> In my opinion it is not a good idea to deal with 2,400 tables :)
>>
>> ugumugu
>>
>> cstrader wrote:
>>> I'm just starting with SQLite (from vb.net) and could use some advice.
>>>
>>> I have a set of 2,400 (could get substantially higher) tables in a single
>>> database (each table contains daily stock prices for a different stock).
>>> Each table has 9 columns (all text for now) and several thousand rows.
>>> So far so good.
>>>
>>> I need to read those tables sequentially and operate on them.  It takes
>>> about 18 seconds to open all of the tables in sequence (i.e. to loop
>>> through the 2,400 tables, opening each one with a SELECT).
>>>
>>> First question... does that sound about right in terms of speed?  Is
>>> there a way to store the data that might be faster?  The data are
>>> primarily single precision -- I'm using text format, but perhaps that's
>>> not best?
>>>
>>> Second question:
>>>
>>> When I open each table I need to add some blank columns to it.  So I
>>> include some "0 as NewCol" lines to create the new columns with initial
>>> 0s.
>>>
>>> However, adding 20 new columns in this manner increases the total time
>>> for the loop from 19 seconds to 49 seconds.  This seems like a long
>>> time.  Is there a faster way to get these empty columns in?
>>>
>>> More questions later I'm sure...
>>>
>>> Thanks in advance
>>>
>>> cs
>>>