Hi All,
I have a project where I wrote custom plpgsql functions to run specialized
queries against my dataset. These functions dynamically generate SQL and then
RETURN QUERY EXECUTE the generated statement. From the client's perspective
the usage looks like:
SELECT * FROM exec_query(new_query_object(param1 => 'blah', ...));
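
A minimal sketch of that pattern (the composite type, table name, and
function bodies here are my guesses, not the actual code) might look like:

```sql
-- Hypothetical sketch of the dynamic-SQL pattern described above.
-- Type, table, and column names are illustrative only.
CREATE TYPE query_object AS (param1 text);

CREATE OR REPLACE FUNCTION new_query_object(param1 text)
RETURNS query_object AS $$
  SELECT ROW(param1)::query_object;
$$ LANGUAGE sql;

CREATE OR REPLACE FUNCTION exec_query(q query_object)
RETURNS SETOF record AS $$
BEGIN
  -- Build the SQL string dynamically, quoting literals safely with
  -- format()/%L, then hand the result set straight back to the caller.
  RETURN QUERY EXECUTE format(
    'SELECT * FROM my_table WHERE col = %L', q.param1);
END;
$$ LANGUAGE plpgsql;
```

One caveat with RETURNS SETOF record: callers must attach a column
definition list (SELECT * FROM exec_query(...) AS t(col text, ...)),
so a real implementation would more likely return SETOF some_type or
use OUT parameters to keep the client call as simple as shown above.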
This will run on EC2 (or another cloud provider) on SSD-backed machines.
Right now it runs on an m4.4xlarge with 64 GiB of RAM.
I'm willing to pay for beefier instances if that means better performance.
On Mon, Mar 28, 2016 at 4:49 PM, Rob Sargent wrote:
>
>
> On 03/28/2016 02:41 PM, Mat Arye wrote:
Hi All,
I am writing a program with a time-series, insert-mostly workload.
I need the system to scale to many thousands of inserts per second. One of
the techniques I plan to use is time-based table partitioning, and I am
trying to figure out how large to make each time partition.
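
As a sketch of what I mean by time-based partitioning (table and column
names here are just placeholders): in PostgreSQL 9.x this is typically done
with inheritance, CHECK constraints, and an insert-routing trigger, e.g.:

```sql
-- Hypothetical inheritance-based time partitioning (the standard
-- approach before declarative partitioning arrived in PostgreSQL 10).
CREATE TABLE measurements (
  time  timestamptz NOT NULL,
  value double precision
);

-- One child table per time slice; the CHECK constraint lets the
-- planner skip irrelevant partitions via constraint exclusion.
CREATE TABLE measurements_2016_03 (
  CHECK (time >= '2016-03-01' AND time < '2016-04-01')
) INHERITS (measurements);

CREATE TABLE measurements_2016_04 (
  CHECK (time >= '2016-04-01' AND time < '2016-05-01')
) INHERITS (measurements);

-- A trigger (or application-side logic) routes each insert to the
-- child table covering its timestamp.
CREATE OR REPLACE FUNCTION measurements_insert() RETURNS trigger AS $$
BEGIN
  IF NEW.time >= '2016-04-01' THEN
    INSERT INTO measurements_2016_04 VALUES (NEW.*);
  ELSE
    INSERT INTO measurements_2016_03 VALUES (NEW.*);
  END IF;
  RETURN NULL;  -- the row is already stored in the child table
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER route_inserts BEFORE INSERT ON measurements
  FOR EACH ROW EXECUTE PROCEDURE measurements_insert();
```

The sizing question is essentially how wide each CHECK range (here one
month) should be.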
Does anybod