Yes, Postgres has partitioning:

https://www.postgresql.org/docs/9.6/static/ddl-partitioning.html 

But this is not going to help much in the scenario you have. 
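
For reference, in 9.6 "partitioning" means table inheritance plus CHECK constraints (declarative PARTITION BY only arrives in version 10). A minimal sketch, with made-up table and column names:

    CREATE TABLE measurements (
        id      bigint NOT NULL,
        logdate date   NOT NULL,
        payload text
    );

    -- one child table per month; rows are routed by a trigger or by the application
    CREATE TABLE measurements_2017_04 (
        CHECK (logdate >= DATE '2017-04-01' AND logdate < DATE '2017-05-01')
    ) INHERITS (measurements);

    -- with constraint_exclusion = partition (the default), queries against
    -- the parent skip children whose CHECK constraint cannot match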

Postgres can ingest data very fast: 100M records in seconds to minutes, faster 
than Oracle can serve it in many scenarios (all I have tested).

Especially if you use the COPY command 

https://www.postgresql.org/docs/9.6/static/sql-copy.html 
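
For example, with a made-up table and file name:

    -- server-side file (needs superuser and filesystem access, so not an
    -- option on RDS):
    COPY app_table (id, payload) FROM '/tmp/batch_0001.csv' WITH (FORMAT csv);

    -- from a client, psql's \copy streams the same file over the connection:
    \copy app_table (id, payload) FROM 'batch_0001.csv' WITH (FORMAT csv)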

and even faster if you use the unlogged feature 

https://www.postgresql.org/docs/9.6/static/sql-altertable.html 
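
Something like this, again with a made-up table name. Keep in mind an unlogged 
table skips WAL, so it is truncated after a crash and is not replicated:

    ALTER TABLE app_table SET UNLOGGED;
    -- ... bulk load with COPY ...
    ALTER TABLE app_table SET LOGGED;   -- available since 9.5; rewrites the table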

You can tune Postgres to make it even faster, but it’s not normally necessary; 
the two pieces of advice above are more than enough. If I remember correctly, 
you can move 100M records in ~2 minutes.

https://www.postgresql.org/docs/current/static/populate.html 
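
If you do want to tune, the settings from that page that matter most for a bulk 
load look roughly like this (values are only illustrative, not a recommendation 
for your hardware):

    SET maintenance_work_mem = '1GB';   -- faster index builds after the load
    SET synchronous_commit = off;       -- don't wait for WAL flush at each commit
    ALTER TABLE app_table SET (autovacuum_enabled = false);  -- re-enable afterwards

    -- and if you can, create indexes after the data is loaded, not before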


But if you are going to move one record at a time you are going to be limited by 
the fastest transaction rate you can achieve, which is going to be a few 
hundred per second, bounded in the end by the disk hardware you have. Out 
of the box and on commodity hardware it can take you up to ten days to move 
100M records.
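
Even without COPY, simply grouping many rows into one transaction (and one 
multi-row INSERT) amortizes the per-commit WAL flush. Something along these 
lines, with a made-up table:

    BEGIN;
    INSERT INTO app_table (id, payload) VALUES
        (1, 'first message'),
        (2, 'second message'),
        (3, 'third message');
    COMMIT;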

So, my recommendation is to find a way to batch record insertions using COPY; 
the benefits you can achieve by tuning Postgres are going to be marginal compared 
with COPY.
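
In your case that could mean the loading process buffers messages from the queue 
and flushes them every few thousand rows with COPY ... FROM STDIN, which most 
drivers expose (psycopg2's copy_expert, the JDBC driver's CopyManager, etc.). 
A flush is just:

    COPY app_table (id, payload) FROM STDIN WITH (FORMAT csv);
    1,first message
    2,second message
    3,third message
    \.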

Regards

Daniel Blanch.
ww.translatetopostgres.com

> On 18 Apr 2017, at 4:55, ROBERT PRICE <rprice...@hotmail.com> wrote:
> 
> I come from an Oracle background and am porting an application to postgres. 
> App has a table that will contain 100 million rows and has to be loaded by a 
> process that reads messages off a SQS queue and makes web service calls to 
> insert records one row at a time in a postgres RDS instance. I know slow by 
> slow is not the ideal approach but I was wondering if postgres had 
> partitioning or other ways to tune concurrent insert statements. Process will 
> run 50 - 100 concurrent threads.
