Tim Uckun wrote:
> The database is functioning fine now, but I am anticipating a much higher
> workload in the future. The table in question will probably have a few
> million rows per day inserted into it when it gets busy; if it gets very
> busy, it might be in the tens of millions per day, but that's speculation
> at this point.
On 06/26/2014 05:31 PM, Tim Uckun wrote:
> 1. If there is a very large set of data in the table that needs to be
> moved, this will be slow and might take locks, which would impact the
> performance of the inserts and the updates.
Well, the locks would only affect the rows being moved. If this is old,
mostly static data, those row locks should not block the inserts and
updates hitting the recent rows.
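If the backlog is large, the move can also be done in bounded batches so
that each transaction locks only a small slice of rows. A minimal sketch,
assuming the my_table/my_table_stable inheritance pair discussed below and
a date_col timestamp column; the cutoff and batch size are illustrative:

-- Move one bounded batch of old rows; repeat until no rows remain.
-- Batching on ctid means only the selected physical rows are locked.
WITH moved AS (
    DELETE FROM ONLY my_table
    WHERE ctid IN (
        SELECT ctid FROM ONLY my_table
        WHERE date_col < now() - INTERVAL '15 minutes'  -- illustrative cutoff
        LIMIT 10000)                                    -- illustrative batch size
    RETURNING *
)
INSERT INTO my_table_stable
SELECT * FROM moved;

Repeating this until it moves zero rows drains the backlog without ever
holding one long transaction open.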
The database is functioning fine now, but I am anticipating a much higher
workload in the future. The table in question will probably have a few
million rows per day inserted into it when it gets busy; if it gets very
busy, it might be in the tens of millions per day, but that's speculation
at this point.
This is what I was thinking, but I am worried about two things.
1. If there is a very large set of data in the table that needs to be
moved, this will be slow and might take locks, which would impact the
performance of the inserts and the updates.
2. Constantly deleting large chunks of data might cause table bloat and
extra vacuum work.
On 06/26/2014 10:47 AM, Marti Raudsepp wrote:
> This deserves a caveat: in the default "read committed" isolation
> level, this example can delete more rows than it inserts.
This is only true because I accidentally inverted the date comparison.
It should have been:
BEGIN;
INSERT INTO my_table_stable
SELECT * FROM ONLY my_table
WHERE date_col < now() - INTERVAL '15 minutes';
DELETE FROM ONLY my_table
WHERE date_col < now() - INTERVAL '15 minutes';
COMMIT;
With the comparison flipped to <, and date_col set at insertion time,
freshly inserted rows are newer than the cutoff, so the DELETE cannot
remove anything the INSERT did not copy.
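The window can also be closed structurally: a data-modifying CTE
(PostgreSQL 9.1+) performs the delete and the insert as a single statement
over a single row set, so nothing can slip in between them. A sketch with
the same assumed names and illustrative cutoff:

WITH moved AS (
    DELETE FROM ONLY my_table
    WHERE date_col < now() - INTERVAL '15 minutes'
    RETURNING *
)
INSERT INTO my_table_stable
SELECT * FROM moved;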
On Thu, Jun 26, 2014 at 5:49 PM, Shaun Thomas wrote:
> Then you create a job that runs however often you want, and all that job
> does is move old rows from my_table to my_table_stable. Like so:
>
> BEGIN;
> INSERT INTO my_table_stable
> SELECT * FROM ONLY my_table
> WHERE date_col >= now() - INTERVAL '15 minutes';
> DELETE FROM ONLY my_table
> WHERE date_col >= now() - INTERVAL '15 minutes';
> COMMIT;
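The FROM ONLY here assumes my_table_stable is an inheritance child of
my_table, so plain queries against my_table keep seeing both the hot and
the moved rows. A minimal sketch of that setup; the id and payload columns
are hypothetical stand-ins:

CREATE TABLE my_table (
    id       bigserial PRIMARY KEY,   -- hypothetical columns
    payload  text,
    date_col timestamptz NOT NULL DEFAULT now()
);

-- Rows moved into the child still show up in SELECT ... FROM my_table,
-- but not in SELECT ... FROM ONLY my_table. Indexes are not inherited
-- and must be created on the child separately.
CREATE TABLE my_table_stable () INHERITS (my_table);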
On 06/26/2014 02:29 AM, Tim Uckun wrote:
> I have a use case in which the most recent data experiences a lot of
> transactions (inserts and updates) and then the churn kind of calms
> down. Eventually the data is relatively static and will only be
> updated in special and sporadic events.
> I was thinking about keeping the high-churn data in a different table
> and moving it to a more stable table once it settles down.
On 06/26/2014 04:29 AM, Tim Uckun wrote:
> I don't think partitioning is a good idea in this case because the
> partitions will be for small time periods (5 to 15 minutes).
Actually, partitioning might be exactly what you want, but not in the
way you might think. What you've run into is actually a very common
pattern: a small, hot set of recent rows and a much larger, mostly
static remainder.
I have a use case in which the most recent data experiences a lot of
transactions (inserts and updates) and then the churn kind of calms down.
Eventually the data is relatively static and will only be updated in
special and sporadic events.
I was thinking about keeping the high-churn data in a different table
and moving it to a more stable table once it settles down.