Yep -- see avatica-metrics[1], avatica-dropwizard-metrics3[2], and my dropwizard-hadoop-metrics2[3] project for what Nick is referring to.

What I ended up doing in Calcite/Avatica was a step beyond your #3, Enis. Instead of choosing a subset of some standard metrics library to expose, I "re-built" the actual API that I wanted to expose. At the end of the day, the API I "built" was nearly identical to the Dropwizard Metrics API. I like the Dropwizard Metrics API; however, we wanted to avoid the strong coupling to a single metrics implementation.
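To give a flavor of what that looks like, here's a minimal sketch in the spirit of avatica-metrics (the interface and method names below are illustrative, not copied from the actual Avatica classes):

// Illustrative only -- not the real avatica-metrics API. The shape mirrors
// Dropwizard Metrics, but the types live in a package we own, so the backing
// implementation can change without breaking callers.
public interface MetricsSystem {
  Counter counter(String name);
  Timer timer(String name);

  interface Counter {
    void increment();
    void increment(long delta);
  }

  interface Timer {
    Context start();

    interface Context extends AutoCloseable {
      @Override
      void close(); // stops the timer; no checked exception
    }
  }
}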

My current feeling is that an external API should never include classes/interfaces which you don't "own". Re-building an API that already exists is pedantic, but I think it's a really good way to keep the maintenance debt down for whenever the next metrics-library "hotness" takes off.
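To be concrete about where that "debt" actually lives, here's what a Dropwizard-backed implementation of the facade above might look like, roughly in the spirit of avatica-dropwizard-metrics3 (again a sketch, not the shipped code):

// Sketch only. This is the one module that depends on com.codahale.metrics;
// swapping in the next "hotness" later means rewriting this class, not every
// caller of the facade.
import com.codahale.metrics.MetricRegistry;

public class DropwizardMetricsSystem implements MetricsSystem {
  private final MetricRegistry registry = new MetricRegistry();

  @Override
  public Counter counter(String name) {
    final com.codahale.metrics.Counter delegate = registry.counter(name);
    return new Counter() {
      @Override public void increment() { delegate.inc(); }
      @Override public void increment(long delta) { delegate.inc(delta); }
    };
  }

  @Override
  public Timer timer(String name) {
    final com.codahale.metrics.Timer delegate = registry.timer(name);
    return () -> {
      final com.codahale.metrics.Timer.Context ctx = delegate.time();
      return () -> ctx.stop(); // Context.close() just stops the Dropwizard timer
    };
  }
}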

If it's amenable to you, Enis, I'm happy to work with you on whatever decoupling is needed to pull this metrics abstraction out of the "core" of Avatica (e.g. presently, a new release of the metrics library would also require a full release of Avatica, which is no good for HBase). I think a lot of the lifting I've already done would be reusable by you and would help make a better product at the end of the day.

- Josh

[1] https://github.com/apache/calcite/tree/master/avatica/metrics
[2] https://github.com/apache/calcite/tree/master/avatica/metrics-dropwizardmetrics3
[3] https://github.com/joshelser/dropwizard-hadoop-metrics2
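For what it's worth, [3] is essentially the translation piece Nick mentions below: it republishes a Dropwizard MetricRegistry through Hadoop Metrics2. A stripped-down sketch of the idea (the class name and the single-record, counters-only layout are made up for illustration; see the repo for the real thing):

// Sketch of bridging Dropwizard Metrics into Hadoop Metrics2: counters only,
// one flat record, purely to show the direction of the translation.
import com.codahale.metrics.Counter;
import com.codahale.metrics.MetricRegistry;
import java.util.Map;
import org.apache.hadoop.metrics2.MetricsCollector;
import org.apache.hadoop.metrics2.MetricsRecordBuilder;
import org.apache.hadoop.metrics2.MetricsSource;
import org.apache.hadoop.metrics2.lib.Interns;

public class DropwizardToMetrics2Source implements MetricsSource {
  private final MetricRegistry registry;

  public DropwizardToMetrics2Source(MetricRegistry registry) {
    this.registry = registry;
  }

  @Override
  public void getMetrics(MetricsCollector collector, boolean all) {
    MetricsRecordBuilder builder = collector.addRecord("DropwizardBridge");
    for (Map.Entry<String, Counter> e : registry.getCounters().entrySet()) {
      builder.addCounter(Interns.info(e.getKey(), e.getKey()), e.getValue().getCount());
    }
  }
}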

Nick Dimiduk wrote:
IIRC, the plan is to get off of Hadoop Metrics2, so I am in favor of either
(2) or (3). Specifically for (3), I believe there is an implementation for
translating Dropwizard Metrics to Hadoop Metrics2, in or around Avatica
and/or Phoenix Query Server.

On Fri, Nov 11, 2016 at 3:15 PM, Enis Söztutar <e...@apache.org> wrote:

HBase / Phoenix devs,

I would like to solicit early feedback on the design approach that we would pursue for exposing coprocessor metrics. It has implications for our compatibility, so let's try to reach some consensus. I've added the Phoenix devs as well, since this will affect how coprocessors can emit metrics via the region server metrics bus.

The issue is HBASE-9774 [1].


We have a couple of options:

(1) Expose Hadoop Metrics2 + HBase internal classes (like BaseSourceImpl, MutableFastCounter, FastLongHistogram, etc). This option is the least amount of work in terms of defining the API. We would mark the important classes with LimitedPrivate(Coprocessor) and have each coprocessor write its own metrics source classes. The disadvantage is that some of the internal APIs become public and have to be evolved with regard to coprocessor API compatibility. It also makes it easier to break coprocessors across minor releases.
(2) Build a metrics subset API in HBase to abstract away both the HBase metrics classes and the Hadoop Metrics2 classes, and expose only this API. The API would probably be limited to a small subset. HBase internals do not need to change much, but the API has to be kept LimitedPrivate(Coprocessor), with the same compatibility implications.
(3) Expose (a limited subset of) a third-party API to the coprocessors (like Yammer metrics) and never expose the internal HBase / Hadoop implementation. Build a translation layer between the Yammer metrics and our Hadoop Metrics2 implementation so that things still work. If we end up changing the implementation, existing coprocessors will not be affected. The downside is that whatever API we agree to expose becomes our compatibility point: we cannot change that dependency's version unless doing so is acceptable under our compatibility guidelines.
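For illustration, the coprocessor-facing side of (3) could look roughly like the sketch below. How the registry reaches the coprocessor is a made-up detail here; the point is just that the coprocessor codes against Dropwizard types and never sees the Hadoop Metrics2 plumbing:

// Hypothetical sketch of option (3) from a coprocessor's point of view. The
// translation to Hadoop Metrics2 happens inside the region server, out of the
// coprocessor's sight.
import com.codahale.metrics.Counter;
import com.codahale.metrics.MetricRegistry;

public class ScanCountingObserver {
  private final Counter scansOpened;

  // The MetricRegistry would be handed over by the region server; the exact
  // hand-off mechanism is not part of this sketch.
  public ScanCountingObserver(MetricRegistry registry) {
    this.scansOpened = registry.counter("coprocessor.scansOpened");
  }

  public void onScannerOpen() {
    scansOpened.inc();
  }
}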

Personally, I would like to pursue option (3), especially with Yammer metrics, since we would not have to build yet another API. Hadoop's metrics API is not the best, and we do not know whether we will end up changing that dependency. What do you guys think?


[1] https://issues.apache.org/jira/browse/HBASE-9774

