On Mon, Feb 22, 2016 at 3:20 PM, Andrew Smith wrote:
Hello,
I am setting up a proof of concept database to store some historical data.
Whilst I've used PostgreSQL a bit in the past, this is the first time I've
looked into disk usage, due to the amount of data that could potentially be
stored. I've done a quick test and I'm a little confused as to why
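For this kind of disk-usage test, PostgreSQL's built-in size functions are the usual starting point. A minimal sketch — the table name `history` is a placeholder, not from the message above:

```sql
-- Total on-disk size of a table including its indexes and TOAST data
-- ('history' is a hypothetical name for the test table).
SELECT pg_size_pretty(pg_total_relation_size('history')) AS total_size,
       pg_size_pretty(pg_relation_size('history'))       AS heap_only;

-- Size of the whole database
SELECT pg_size_pretty(pg_database_size(current_database()));
```

Comparing `pg_total_relation_size` with `pg_relation_size` separates index and TOAST overhead from the heap itself, which is often where the surprise lies.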
Thanks for everyone's help.
I have changed effective_cache_size to 24GB and it now runs well.
Tuan Hoang Anh
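For anyone following along, effective_cache_size is a planner setting and does not require a restart. A minimal sketch of applying it (assumes PostgreSQL 9.4 or later for ALTER SYSTEM):

```sql
-- Persist the setting (written to postgresql.auto.conf);
-- a reload is enough, no restart needed.
ALTER SYSTEM SET effective_cache_size = '24GB';
SELECT pg_reload_conf();

-- Verify the active value
SHOW effective_cache_size;
```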
On 02/21/2016 02:22 PM, Kevin Waterson wrote:
I do not understand why I am getting this error.
I have joined the table correctly; is this not sufficient?

forge=> SELECT * FROM generate_series('2016-01-22', '2017-12-31', '1 day'::interval) AS day
LEFT JOIN (
    SELECT *, generate_series(c.start_time, c.end_time, '2 wee
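The message is cut off, so the exact error is unknown, but one working shape for this kind of query is a subquery that expands each row into a series, joined back to the day series. A hedged sketch — the table `c` and its columns are assumptions based on the fragment above, and the '2 weeks' step is a guess from the truncated literal:

```sql
-- Assumed table, inferred from the c.start_time / c.end_time references:
-- CREATE TABLE c (id int, start_time timestamptz, end_time timestamptz);

SELECT day, ev.id
FROM generate_series('2016-01-22'::timestamptz,
                     '2017-12-31'::timestamptz,
                     '1 day'::interval) AS day
LEFT JOIN (
    SELECT c.id,
           generate_series(c.start_time, c.end_time, '2 weeks'::interval) AS occurs_at
    FROM c
) AS ev
  ON ev.occurs_at::date = day::date;
```

Note that the subquery needs a FROM clause of its own to supply `c`; a bare reference to `c.start_time` inside the parenthesised subquery, without `FROM c` or the LATERAL keyword, would fail.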
On Fri, Feb 19, 2016 at 6:24 PM, Ashish Chauhan wrote:
> Below is recovery.conf on slave:
>
> #---------------------------------------------------------------------
> # STANDBY SERVER PARAMETERS
> #---------------------------------------------------------------------
> #
> # standb
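The quoted file is cut off, but for comparison, a minimal set of standby parameters in a 9.x-era recovery.conf looks like the following sketch — the host, user, and trigger path are placeholders, not values from the message:

```
# recovery.conf on the standby (PostgreSQL 9.x)
standby_mode = 'on'
primary_conninfo = 'host=primary.example.com port=5432 user=replicator'
# Optional: promote the standby when this file appears
trigger_file = '/tmp/postgresql.trigger'
```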
On 2016-02-18 13:37:37 -0500, Tom Smith wrote:
> It is for reducing index size as the table becomes huge.
> Sorry for the confusion; by timestamp, I meant a time-series number, not the SQL
> timestamp type.
> I need the unique constraint on the column to ensure no duplicates, but the btree index
> is getting h
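Reading the description as a bigint sequence number with a uniqueness requirement, the setup could look like this sketch — all names are hypothetical:

```sql
-- Hypothetical time-series table: 'ts' is a numeric sequence number,
-- not a SQL timestamp type.
CREATE TABLE readings (
    ts    bigint NOT NULL,
    value double precision
);

-- The unique constraint is backed by a btree index,
-- which is what grows along with the table.
ALTER TABLE readings ADD CONSTRAINT readings_ts_key UNIQUE (ts);

-- Watch the index's on-disk size as rows accumulate.
SELECT pg_size_pretty(pg_relation_size('readings_ts_key'));
```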