Thanks for the version info. I'm not sure about what's included in CDH's packaging -- maybe someone else knows.

I understand that your question was about the read load. However, the old guideposts likely needed to be updated with the new files' stats. Thus, even though you're writing data to your table, updating that table's stats would involve a read of the *existing* stats data. But, again, that's just a guess :)
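To illustrate that guess: if Phoenix folds guideposts derived from the newly loaded files into the ones already stored, it has to read the current guidepost rows first. Here's a minimal Python model of that idea -- the function and variable names (`merge_guideposts`, `existing`, `new_from_bulk_load`) are hypothetical and not Phoenix internals:

```python
# Illustrative model only -- not Phoenix's actual stats code.
# Guideposts are row keys that split a table into roughly equal-sized chunks.
# Merging guideposts for new data into the stored set requires *reading* the
# existing set first, which could explain read load on SYSTEM.STATS during a
# bulk load even though the workload is write-heavy.

def merge_guideposts(existing, new_from_bulk_load):
    """Return the sorted union of stored and newly derived guidepost keys."""
    return sorted(set(existing) | set(new_from_bulk_load))

existing = [b"row0100", b"row0200", b"row0300"]    # read back from SYSTEM.STATS
new_from_bulk_load = [b"row0150", b"row0250"]      # derived from the new files

merged = merge_guideposts(existing, new_from_bulk_load)
print(merged)
# [b'row0100', b'row0150', b'row0200', b'row0250', b'row0300']
```

The point of the sketch is only the data flow: a write-side operation (bulk load) still triggers reads of the stats table before the updated guideposts can be written back.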

Mac Fang wrote:
Josh,

Thanks for the reply,
It is a CDH vendor version (4.7.0_1.3.0). Mutations are expected when
we do a bulk load. The question is why we see such a high read load on
the SYSTEM.STATS table.


On Thu, Apr 13, 2017 at 11:42 AM, Josh Elser <josh.el...@gmail.com> wrote:

    What version of Phoenix are you using? (an Apache release? some
    vendor's packaging?)

    Academically speaking, when you bulk load some data, the stats table
    should get updated (otherwise the stats are wrong until a compaction
    occurs), but I can't specifically point you at a line of code that
    is doing this (nor am I 100% positive it happens).


    On Wed, Apr 12, 2017 at 3:12 AM, Mac Fang
    <mac.fangzhen....@gmail.com> wrote:

        Hi, Guys,


        We are noticing some unusually high read throughput in HBase.

        [chart: the HBase read rate]


        [chart: the SYSTEM.STATS read rate]


        During that time frame, the system did not have a high QPS.
        However, it did import several million rows via the Phoenix
        bulk load.

        The question is: what does Phoenix do with the SYSTEM.STATS
        table when we do a bulk load?

        We did not find any clues when we looked into the code. Any
        hints?


        --
        regards
        macf





--
regards
macf
