On Tue, Mar 21, 2017 at 6:55 PM, Robert Haas <robertmh...@gmail.com> wrote:
> On Tue, Mar 21, 2017 at 8:41 AM, Pavan Deolasee
> <pavan.deola...@gmail.com> wrote:
>>> Yeah.  So what's the deal with this?  Is somebody working on figuring
>>> out a different approach that would reduce this overhead?  Are we
>>> going to defer WARM to v11?  Or is the intent to just ignore the 5-10%
>>> slowdown on a single column update and commit everything anyway?
>> I think I should clarify something. The test case does a single-column
>> update, but it also has columns which are very wide, has an index on many
>> columns (and it updates a column early in the list). In addition, in the
>> test Mithun updated all 10 million rows of the table in a single
>> transaction, used an UNLOGGED table, and had fsync turned off.
>> TBH I see many artificial scenarios here. It will be very useful if he can
>> rerun the query with some of these restrictions lifted. I'm all for
>> addressing whatever we can, but I am not sure if this test demonstrates a
>> real world usage.
> That's a very fair point, but if these patches - or some of them - are
> going to get committed, then these things need to be discussed.  Let's
> not go from nothing, nothing, nothing straight to a giant unagreed code
> drop.
> I think that very wide columns and highly indexed tables are not
> particularly unrealistic, nor do I think updating all the rows is
> particularly unrealistic.  Sure, it's not everything, but it's
> something.  Now, I would agree that all of that PLUS unlogged tables
> with fsync=off is not too realistic.  What kind of regression would we
> observe if we eliminated those last two variables?

Sure, we can try that.  I think we need to try it with
synchronous_commit = off; otherwise, WAL writes completely overshadow
everything else.
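For anyone wanting to reproduce the shape of the test being discussed, a
rough sketch might look like the following. The table, column, and index
names are invented for illustration; the exact widths, column count, and
data are assumptions, not taken from Mithun's actual benchmark:

```sql
-- Hypothetical reconstruction of the reported test shape; all names and
-- sizes are invented.  UNLOGGED table with several wide columns, as in
-- the original run.
CREATE UNLOGGED TABLE warm_test (
    id     bigint PRIMARY KEY,
    c1     text, c2 text, c3 text, c4 text,  -- wide columns
    filler text                              -- padding to widen the row
);

-- An index on many columns; the update below touches a column early in
-- the index's column list.
CREATE INDEX warm_test_many_cols ON warm_test (c1, c2, c3, c4);

INSERT INTO warm_test
SELECT g,
       repeat('x', 200), repeat('y', 200),
       repeat('z', 200), repeat('w', 200),
       repeat('f', 500)
FROM generate_series(1, 10000000) AS g;

-- Single-column update of all 10 million rows in one transaction.
UPDATE warm_test SET c1 = c1 || '!';
```

Per the discussion above, a re-run could lift the fsync = off and
UNLOGGED restrictions and instead use synchronous_commit = off, e.g.
`SET synchronous_commit = off;` in the benchmarking session, so that WAL
flush latency does not dominate the measurement.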

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
