Greetings list!
I am pretty new to postgresql, coming from mysql. I did a fairly extensive
search of the list and came up with a few good ones, but I didn't find
the exact same situation as I have now, so I am venturing to ask here.
I have daily per-minute stock price data from 2005 on, and each day's file has
columns
On 20 March 2012 19:45, Michael Nolan wrote:
>
>>
>> right now I have about 7000 tables, one per individual stock, and I use
>> perl to do the inserts; it's very slow. I would like to use COPY or another
>> bulk-loading tool to load the daily raw gz data, but I need to split
>> the file into per-stock files
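(For illustration, a sketch of the single wide table that several replies below suggest, loaded with \copy instead of per-row inserts; every table, column and file name here is invented:)

  -- one wide table instead of one table per stock
  CREATE TABLE minute_bars (
      symbol text,
      ts     timestamptz,
      open   numeric,
      high   numeric,
      low    numeric,
      close  numeric,
      volume bigint
  );
  -- then a whole day's file goes in with a single COPY from psql:
  \copy minute_bars from '20120320.csv' with csv
  -- or stream it without unpacking to disk first:
  --   zcat 20120320.csv.gz | psql mydb -c "\copy minute_bars from stdin with csv"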
On 20 March 2012 20:19, Steve Crawford wrote:
> On 03/20/2012 04:27 PM, Jim Green wrote:
>>
>> Greetings list!
>> I am pretty new to postgresql from mysql
>
> Welcome.
>
>> I have daily minute stock price data from 2005 on and each day with
>> columns
On 20 March 2012 21:40, David Kerr wrote:
> On 03/20/2012 04:27 PM, Jim Green wrote:
>
> Greetings list!
> I am pretty new to postgresql from mysql and did a fairly extensive
> search of the list and came up with a few good ones but didn't find
> the exact same situation
On 20 March 2012 21:54, Brent Wood wrote:
>
> Also look at a clustered index on timestamp
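(Roughly, Brent's suggestion might look like the following, here on (symbol, timestamp) since the data is read per stock; the names are the invented ones from the sketch above:)

  CREATE INDEX minute_bars_sym_ts ON minute_bars (symbol, ts);
  -- CLUSTER rewrites the table in index order; it is a one-off operation,
  -- so it has to be re-run after large loads to keep the physical ordering
  CLUSTER minute_bars USING minute_bars_sym_ts;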
Thanks, this looks very helpful. What do you think about the thousands of
tables vs one table partitioned by month? I guess if I go with one table,
the index would be too big to fit in RAM?
Jim.
On 20 March 2012 22:03, David Kerr wrote:
> \copy on 1.2 million rows should only take a minute or two, you could make
> that table "unlogged" as well to speed it up more. If you could
> truncate / drop / create / load / then index the table each time
> then you'll get the best throughput.
Thanks!
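(A rough sketch of that load pattern, reusing the invented names from above:)

  -- unlogged tables skip WAL, so bulk loads run faster (contents are lost on a crash)
  CREATE UNLOGGED TABLE daily_load (LIKE minute_bars);
  -- each day: empty it (or drop and recreate it), load the ~1.2M rows, then index
  TRUNCATE daily_load;
  \copy daily_load from '20120320.csv' with csv
  CREATE INDEX daily_load_sym_ts ON daily_load (symbol, ts);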
On 20 March 2012 22:08, Jim Green wrote:
> On 20 March 2012 22:03, David Kerr wrote:
>
>> \copy on 1.2 million rows should only take a minute or two, you could make
>> that table "unlogged"
>> as well to speed it up more. If you could truncate / drop / create
On 20 March 2012 22:21, David Kerr wrote:
> I'm imagining that you're loading the raw file into a temporary table that
> you're going to use to
> process / slice new data into your 7000+ actual per-stock tables.
Thanks! would "slice new data data into your 7000+ actual tables per
stock." be
On 20 March 2012 22:22, John R Pierce wrote:
> your 7000 tables all contain the exact same information, with the only
> difference being the stock ticker symbol, right? then really, the single
> table, perhaps partitioned by month or whatever, is the right way to go.
> Any schema that makes yo
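(A minimal sketch of month partitioning as it worked at the time, via inheritance plus CHECK constraints, with the invented minute_bars table above as the parent:)

  CREATE TABLE minute_bars_2012_03 (
      CHECK (ts >= '2012-03-01' AND ts < '2012-04-01')
  ) INHERITS (minute_bars);
  CREATE INDEX minute_bars_2012_03_sym_ts ON minute_bars_2012_03 (symbol, ts);
  -- load each day straight into the current month's child table; with
  -- constraint_exclusion = partition, a query on a ts range only scans the
  -- matching months, so no single index has to cover all seven years at once.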
On 20 March 2012 22:25, Andy Colson wrote:
> I think the decisions:
>
> 1) one big table
> 2) one big partitioned table
> 3) many little tables
>
> would probably depend on how you want to read the data. Writing would be
> very similar.
>
> I tried to read through the thread but didn't see how you
On 20 March 2012 22:43, Andy Colson wrote:
> Here is some copy/pasted parts:
>
> my @list = glob('*.gz');
> for my $fname (@list)
> {
> $db->do('copy access from stdin');
> open my $fh, "-|", "/usr/bin/zcat $fname" or die "$fname: $!";
> while (<$fh>)
> {
>     $db->pg_putcopydata($_);
> }
> close $fh;
> $db->pg_putcopyend();
> }
On 20 March 2012 22:57, John R Pierce wrote:
> avg() in the database is going to be a lot faster than copying the data into
> memory for an application to process.
I see..
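(For example, a per-day average computed entirely in the database, using the invented names from the sketches above:)

  SELECT symbol, ts::date AS day, avg(close) AS avg_close
  FROM minute_bars
  GROUP BY symbol, ts::date
  ORDER BY symbol, day;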
>
> Also, you know there's a plR for postgres that lets you embed R functions in
> the database server and invoke them in SQL
On 20 March 2012 23:01, Andy Colson wrote:
> Of course, there are probably other usage patterns I'm not aware of. And I
> also am assuming some things based on what I've heard -- not on actual
> experience.
I am not an expert in SQL, so what I get out of postgresql is probably
mostly selects, but as
On 21 March 2012 11:01, Steve Crawford wrote:
> Something sounds wrong, here. XID wraps around somewhere around 4-billion
> transactions which is a substantial multiple of the entire number of records
> you are trying to insert. Do you have any unusual vacuum settings?
I have autovacuum=off in my postgresql.conf
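(For what it's worth, a quick way to keep an eye on that:)

  -- how far each database is from xid wraparound
  SELECT datname, age(datfrozenxid) AS xid_age
  FROM pg_database
  ORDER BY xid_age DESC;
  -- with autovacuum off, periodic manual VACUUM (or VACUUM FREEZE) is what
  -- keeps this number from growing toward the wraparound limit.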