Re: [DISCUSS] HBASE-20904 Prometheus /metrics http endpoint for monitoring integration

2020-06-06 Thread ckai2480
Also please let me know if you have any other suggestions.

Thanks,
Madhusoodan

On Sat, 2020-06-06 at 16:21 +0530, ckai2480 wrote:
> Hi,
> 
> I am working on HBASE-20904 (Prometheus /metrics http endpoint for
> monitoring integration) and have created a patch (
> https://github.com/apache/hbase/pull/1814). @busbey and @saintstack
> suggested some good changes, which I have incorporated in my local
> branch, and I wanted to get others' suggestions before I push to
> the remote.
> 
> Prometheus
> 1. Prometheus is monitoring software that uses HTTP to pull metrics
> from the monitored processes.
> 2. The collected data can be used for anomaly detection, alerting, etc.
> 
> The problem HBASE-20904 solves:
> 1. Implement a servlet to expose these metrics.
> 
> Currently, I have implemented this as follows:
> 1. Expose two endpoints:
>   /prometheus:
>   exposes the metrics captured by the native HBase metrics API
>   /prometheus-hadoop2:
>   exposes the metrics captured using the hadoop2 metrics API.
> 
> The latter is planned to be removed once all the metric sources start
> using the native metrics API.
> 
> **Do the endpoint names look OK?**
> 
> 2. Make the /jmx, /prometheus-hadoop2, /prometheus, and /metrics
> servlets optional by providing a multivalued config key, with the
> first two servlets registered by default (values will be classnames
> or aliases; **which one do you think is a good idea?**).
> 
> Out of curiosity: why do the hadoop2 metrics still exist in HBase? I
> see HBASE-14282 is open while its linked task issues are closed. Are
> there any unlinked issues here?
> 
> Thanks,
> Madhusoodan
> 



[DISCUSS] HBASE-20904 Prometheus /metrics http endpoint for monitoring integration

2020-06-06 Thread ckai2480
Hi,

I am working on HBASE-20904 (Prometheus /metrics http endpoint for
monitoring integration) and have created a patch (
https://github.com/apache/hbase/pull/1814). @busbey and @saintstack
suggested some good changes, which I have incorporated in my local
branch, and I wanted to get others' suggestions before I push to
the remote.

Prometheus
1. Prometheus is monitoring software that uses HTTP to pull metrics
from the monitored processes (a sample of the format it scrapes
appears below).
2. The collected data can be used for anomaly detection, alerting, etc.
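
For reference, the text format Prometheus scrapes over HTTP looks like
this (the metric name here is illustrative, not one from the patch):

# HELP region_server_read_request_count Total read requests received.
# TYPE region_server_read_request_count counter
region_server_read_request_count 12345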

The problem HBASE-20904 solves:
1. Implement a servlet to expose these metrics (a rough sketch of such
a servlet appears after point 2 below).

Currently, I have implemented this as follows:
1. Expose two endpoints:
  /prometheus:
  exposes the metrics captured by the native HBase metrics API
  /prometheus-hadoop2:
  exposes the metrics captured using the hadoop2 metrics API.

The latter is planned to be removed once all the metric sources start
using the native metrics API.

**Do the endpoint names look OK?**

2. Make the /jmx, /prometheus-hadoop2, /prometheus, and /metrics
servlets optional by providing a multivalued config key, with the first
two servlets registered by default (values will be classnames or
aliases; **which one do you think is a good idea?**). A placeholder
snippet of such a key follows the servlet sketch below.
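
For illustration, here is a minimal sketch of what such a servlet could
look like. This is not the code from the PR; the class name and the
getMetrics() accessor are hypothetical stand-ins for the native metrics
API.

import java.io.IOException;
import java.io.PrintWriter;
import java.util.Map;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hedged sketch, not the PR's code: render metrics in the Prometheus
// text exposition format (version 0.0.4).
public class PrometheusMetricsServlet extends HttpServlet {
  @Override
  protected void doGet(HttpServletRequest req, HttpServletResponse resp)
      throws IOException {
    resp.setContentType("text/plain; version=0.0.4; charset=utf-8");
    PrintWriter out = resp.getWriter();
    for (Map.Entry<String, Long> metric : getMetrics().entrySet()) {
      out.println("# TYPE " + metric.getKey() + " gauge");
      out.println(metric.getKey() + " " + metric.getValue());
    }
  }

  // Hypothetical stand-in for reading HBase's native metric sources.
  private Map<String, Long> getMetrics() {
    return Map.of("region_server_read_request_count", 12345L);
  }
}

And the multivalued config key could look roughly like this in
hbase-site.xml (the key name below is a placeholder, not necessarily
what the patch uses):

<property>
  <!-- Placeholder key: comma-separated aliases (or classnames) of the
       HTTP servlets to enable; /jmx and /prometheus-hadoop2 would be
       the defaults per the proposal. -->
  <name>hbase.http.metrics.servlets</name>
  <value>jmx,prometheus-hadoop2</value>
</property>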

Out of curiosity: why do the hadoop2 metrics still exist in HBase? I see
HBASE-14282 is open while its linked task issues are closed. Are there
any unlinked issues here?

Thanks,
Madhusoodan



[jira] [Created] (HBASE-24515) batch Increment/Append fails when retrying the RPC

2020-06-06 Thread Toshihiro Suzuki (Jira)
Toshihiro Suzuki created HBASE-24515:


 Summary: batch Increment/Append fails when retrying the RPC
 Key: HBASE-24515
 URL: https://issues.apache.org/jira/browse/HBASE-24515
 Project: HBase
  Issue Type: Bug
Reporter: Toshihiro Suzuki
Assignee: Toshihiro Suzuki


When a client hits an RPC timeout and sends a second RPC request for batch 
Increment/Append, but the first RPC has actually already been processed, the 
nonce of the RPC is saved in the RS.
In this case, for the second RPC, the RS just reads the previous result and 
returns it to the client, to avoid processing the Increment/Append twice.

At that time, for batch Increment/Append, we try to create a Get object from a 
CellScanner object in the following code:
 
[https://github.com/apache/hbase/blob/66452afc09d8b82927e5e58565f97939faa22c7b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java#L773-L776]

However, the CellScanner object has already been consumed to create the 
Increment/Append object, as follows, so creating the Get fails:
 
[https://github.com/apache/hbase/blob/66452afc09d8b82927e5e58565f97939faa22c7b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java#L757]
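
In other words, the failure pattern is roughly the following. This is a 
simplified sketch with abbreviated helper names, not the actual RSRpcServices 
code; see the links above for the real lines.

{code:java}
// The CellScanner is a one-shot iterator over the request's cell payload.
// Building the mutation consumes it:
Increment increment = toIncrement(mutationProto, cellScanner);
// ...so when the nonce hit makes the RS re-read the previous result,
// building a Get from the same, already-consumed scanner finds no cells
// and fails:
Get get = toGet(mutationProto, cellScanner);
{code}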



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (HBASE-24514) Backport HBASE-24305 to branch-2

2020-06-06 Thread Jan Hentschel (Jira)
Jan Hentschel created HBASE-24514:
-

 Summary: Backport HBASE-24305 to branch-2
 Key: HBASE-24514
 URL: https://issues.apache.org/jira/browse/HBASE-24514
 Project: HBase
  Issue Type: Task
Affects Versions: 2.4.0
Reporter: Jan Hentschel


Backport the changes from HBASE-24305 that are not related to the removed 
deprecated methods.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (HBASE-24513) The default readRpcTimeout and writeRpcTimeout are incorrectly calculated in AsyncConnectionConfiguration

2020-06-06 Thread Duo Zhang (Jira)
Duo Zhang created HBASE-24513:
-

 Summary: The default readRpcTimeout and writeRpcTimeout are 
incorrectly calculated in AsyncConnectionConfiguration
 Key: HBASE-24513
 URL: https://issues.apache.org/jira/browse/HBASE-24513
 Project: HBase
  Issue Type: Bug
  Components: asyncclient, Client
Reporter: Duo Zhang
Assignee: Duo Zhang
 Fix For: 3.0.0-alpha-1, 2.3.0, 2.2.6






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (HBASE-24512) ITBLL, ChaosMonkey log message shear/interleave

2020-06-06 Thread Nick Dimiduk (Jira)
Nick Dimiduk created HBASE-24512:


 Summary: ITBLL, ChaosMonkey log message shear/interleave
 Key: HBASE-24512
 URL: https://issues.apache.org/jira/browse/HBASE-24512
 Project: HBase
  Issue Type: Bug
  Components: integration tests
Affects Versions: 2.3.0
Reporter: Nick Dimiduk


Running {{IntegrationTestBigLinkedList}} with {{ServerKillingChaosMonkey}}, 
I've noticed that some log messages are sheared/interwoven. This is 
particularly visible in the output of the {{DumpClusterStatusAction}}. I 
suspect we are running two logging instances simultaneously, rather than all 
loggers going through the same instance, and they're not coordinating on the 
output stream.

I'm running ITBLL against a distributed/external cluster, launched via the 
{{bin/hbase}} script.
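
A toy demonstration of the suspected mechanism (not HBase code): two 
independently buffered writers over the same output stream, flushing at 
different times, produce exactly this kind of sheared output.

{code:java}
import java.io.BufferedWriter;
import java.io.FileDescriptor;
import java.io.FileOutputStream;
import java.io.OutputStreamWriter;
import java.io.Writer;

public class LogShearDemo {
  public static void main(String[] args) throws Exception {
    Thread t1 = new Thread(task("AAAA"));
    Thread t2 = new Thread(task("BBBB"));
    t1.start(); t2.start();
    t1.join(); t2.join();
  }

  private static Runnable task(String tag) {
    // Each task gets its own tiny-buffered writer over the same stdout fd,
    // so buffers flush mid-line and fragments from both threads interleave.
    Writer w = new BufferedWriter(
        new OutputStreamWriter(new FileOutputStream(FileDescriptor.out)), 8);
    return () -> {
      try {
        for (int i = 0; i < 50; i++) {
          w.write(tag + " cluster status dump line " + i + "\n");
        }
        w.flush();
      } catch (Exception e) {
        e.printStackTrace();
      }
    };
  }
}
{code}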



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (HBASE-24508) Why ProtobufUtil does not set scan's limit

2020-06-06 Thread yukunpeng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yukunpeng resolved HBASE-24508.
---
Resolution: Not A Bug

> Why ProtobufUtil does not set scan's limit
> 
>
> Key: HBASE-24508
> URL: https://issues.apache.org/jira/browse/HBASE-24508
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.2.5
>Reporter: yukunpeng
>Priority: Trivial
>
> {code:java}
> //ProtobufUtil
> /**
>  * Convert a client Scan to a protocol buffer Scan
>  *
>  * @param scan the client Scan to convert
>  * @return the converted protocol buffer Scan
>  * @throws IOException
>  */
> public static ClientProtos.Scan toScan(
> final Scan scan) throws IOException {
>   ClientProtos.Scan.Builder scanBuilder =
> ClientProtos.Scan.newBuilder();
>   scanBuilder.setCacheBlocks(scan.getCacheBlocks());
>   if (scan.getBatch() > 0) {
> scanBuilder.setBatchSize(scan.getBatch());
>   }
>   if (scan.getMaxResultSize() > 0) {
> scanBuilder.setMaxResultSize(scan.getMaxResultSize());
>   }
>   if (scan.isSmall()) {
> scanBuilder.setSmall(scan.isSmall());
>   }
>   if (scan.getAllowPartialResults()) {
> scanBuilder.setAllowPartialResults(scan.getAllowPartialResults());
>   }
>   Boolean loadColumnFamiliesOnDemand = 
> scan.getLoadColumnFamiliesOnDemandValue();
>   if (loadColumnFamiliesOnDemand != null) {
> scanBuilder.setLoadColumnFamiliesOnDemand(loadColumnFamiliesOnDemand);
>   }
>   scanBuilder.setMaxVersions(scan.getMaxVersions());
>   scan.getColumnFamilyTimeRange().forEach((cf, timeRange) -> {
> scanBuilder.addCfTimeRange(HBaseProtos.ColumnFamilyTimeRange.newBuilder()
>   .setColumnFamily(UnsafeByteOperations.unsafeWrap(cf))
>   .setTimeRange(toTimeRange(timeRange))
>   .build());
>   });
>   scanBuilder.setTimeRange(ProtobufUtil.toTimeRange(scan.getTimeRange()));
>   Map<String, byte[]> attributes = scan.getAttributesMap();
>   if (!attributes.isEmpty()) {
>     NameBytesPair.Builder attributeBuilder = NameBytesPair.newBuilder();
>     for (Map.Entry<String, byte[]> attribute: attributes.entrySet()) {
>       attributeBuilder.setName(attribute.getKey());
>       attributeBuilder.setValue(UnsafeByteOperations.unsafeWrap(attribute.getValue()));
>       scanBuilder.addAttribute(attributeBuilder.build());
>     }
>   }
>   byte[] startRow = scan.getStartRow();
>   if (startRow != null && startRow.length > 0) {
>     scanBuilder.setStartRow(UnsafeByteOperations.unsafeWrap(startRow));
>   }
>   byte[] stopRow = scan.getStopRow();
>   if (stopRow != null && stopRow.length > 0) {
>     scanBuilder.setStopRow(UnsafeByteOperations.unsafeWrap(stopRow));
>   }
>   if (scan.hasFilter()) {
>     scanBuilder.setFilter(ProtobufUtil.toFilter(scan.getFilter()));
>   }
>   if (scan.hasFamilies()) {
>     Column.Builder columnBuilder = Column.newBuilder();
>     for (Map.Entry<byte[], NavigableSet<byte[]>>
>         family: scan.getFamilyMap().entrySet()) {
>       columnBuilder.setFamily(UnsafeByteOperations.unsafeWrap(family.getKey()));
>       NavigableSet<byte[]> qualifiers = family.getValue();
>       columnBuilder.clearQualifier();
>       if (qualifiers != null && qualifiers.size() > 0) {
>         for (byte [] qualifier: qualifiers) {
>           columnBuilder.addQualifier(UnsafeByteOperations.unsafeWrap(qualifier));
>         }
>       }
>       scanBuilder.addColumn(columnBuilder.build());
>     }
>   }
>   if (scan.getMaxResultsPerColumnFamily() >= 0) {
> scanBuilder.setStoreLimit(scan.getMaxResultsPerColumnFamily());
>   }
>   if (scan.getRowOffsetPerColumnFamily() > 0) {
> scanBuilder.setStoreOffset(scan.getRowOffsetPerColumnFamily());
>   }
>   if (scan.isReversed()) {
> scanBuilder.setReversed(scan.isReversed());
>   }
>   if (scan.getConsistency() == Consistency.TIMELINE) {
> scanBuilder.setConsistency(toConsistency(scan.getConsistency()));
>   }
>   if (scan.getCaching() > 0) {
> scanBuilder.setCaching(scan.getCaching());
>   }
>   long mvccReadPoint = PackagePrivateFieldAccessor.getMvccReadPoint(scan);
>   if (mvccReadPoint > 0) {
> scanBuilder.setMvccReadPoint(mvccReadPoint);
>   }
>   if (!scan.includeStartRow()) {
> scanBuilder.setIncludeStartRow(false);
>   }
>   scanBuilder.setIncludeStopRow(scan.includeStopRow());
>   if (scan.getReadType() != Scan.ReadType.DEFAULT) {
> scanBuilder.setReadType(toReadType(scan.getReadType()));
>   }
>   if (scan.isNeedCursorResult()) {
> scanBuilder.setNeedCursorResult(true);
>   }
>   return scanBuilder.build();
> }
> {code}
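
For the record, a likely reason this is Not A Bug (hedged, since the 
resolution above does not explain): the row limit set via Scan.setLimit(int) 
is, to my understanding, carried per-RPC on the ScanRequest protobuf (its 
limit_of_rows field) rather than on the ClientProtos.Scan that toScan() 
builds, because the remaining limit changes from one scan RPC to the next as 
rows are returned.

{code:java}
// Hedged sketch of the client-side call; the limit travels with each
// ScanRequest rather than inside the serialized Scan itself.
Scan scan = new Scan().setLimit(100);
{code}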



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (HBASE-24496) The tab of Base Stats is not active by default in table.jsp

2020-06-06 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani resolved HBASE-24496.
--
Fix Version/s: 2.3.0
   3.0.0-alpha-1
 Hadoop Flags: Reviewed
   Resolution: Fixed

Pushed to master, branch-2 and branch-2.3.

> The tab of Base Stats is not active by default in table.jsp
> -
>
> Key: HBASE-24496
> URL: https://issues.apache.org/jira/browse/HBASE-24496
> Project: HBase
>  Issue Type: Bug
>  Components: UI
>Affects Versions: 3.0.0-alpha-1
>Reporter: Zheng Wang
>Assignee: Zheng Wang
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.3.0
>
> Attachments: HBASE-24496-afterpatch.png, HBASE-24496-beforepatch.png
>
>
> This bug was introduced by HBASE-21404, which was expected to resolve the 
> active-tab issue of the nav bar but impacted other areas.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: On org.apache.hadoop.hbase.constraint

2020-06-06 Thread Duo Zhang
The related classes are marked as IA.Private, which means they are not part
of our public API...

That's why I checked for shell support: if there is no shell support, then
users have no way to make use of it without breaking the InterfaceAudience
rules...
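
For readers following along, the API-shape problem described in the quoted
thread below looks roughly like this. Constraints.add is the real entry
point in that package; the rest of the sketch is illustrative, and the
constraint class is taken as a parameter to avoid inventing one here.

import java.io.IOException;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.constraint.Constraint;
import org.apache.hadoop.hbase.constraint.Constraints;

public class ConstraintApiSketch {
  static void example(Class<? extends Constraint> userConstraint)
      throws IOException {
    // Old, mutating style: Constraints.add writes the constraint config
    // directly into the passed-in descriptor.
    HTableDescriptor desc = new HTableDescriptor(TableName.valueOf("example"));
    Constraints.add(desc, userConstraint);
    // With the immutable TableDescriptor there is nothing to mutate, so an
    // equivalent API would have to return a new descriptor (e.g. built via
    // TableDescriptorBuilder), hence the redesign discussed below.
  }
}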

Jesse Yates wrote on Sat, Jun 6, 2020 at 1:04 AM:

> Not particularly. Just because there is no shell integration, though,
> doesn't mean it isn't used; it has been around for 5 years, which means
> someone has likely picked it up. You should probably ask on the user list
> and/or do a deprecation cycle before just removing it.
> ---
> Jesse Yates
> @jesse_yates
> jesseyates.com 
>
>
> On Fri, Jun 5, 2020 at 8:50 AM 张铎(Duo Zhang) 
> wrote:
>
> > It seems only this issue has been finished:
> >
> > https://issues.apache.org/jira/browse/HBASE-4605
> >
> > That issue brought in these classes, but the later one on adding shell
> > support was resolved as incomplete:
> >
> > https://issues.apache.org/jira/browse/HBASE-4879
> >
> > So I guess there is no actual use in HBase yet.
> >
> > Do you still want to finish this feature?
> >
> > Thanks.
> >
> > Jesse Yates wrote on Fri, Jun 5, 2020 at 11:29 PM:
> >
> > > Here is the original JIRA for the constraint work:
> > > https://issues.apache.org/jira/browse/HBASE-4999
> > >
> > > It's a common DB feature, but I'm not sure if folks actually use it
> > > in HBase.
> > > ---
> > > Jesse Yates
> > > @jesse_yates
> > > jesseyates.com 
> > >
> > >
> > > On Fri, Jun 5, 2020 at 4:06 AM 张铎(Duo Zhang) 
> > > wrote:
> > >
> > > > When removing HTableDescriptor on the master branch, I found that it
> > > > is referenced by the org.apache.hadoop.hbase.constraint package.
> > > >
> > > > The problem here is that the API design is to pass in an
> > > > HTableDescriptor and modify it directly, but now TableDescriptor is
> > > > immutable, so we need to redesign the API.
> > > >
> > > > But the problem is that all the classes are marked as IA.Private,
> > > > and the only references to these classes are in the test code. And I
> > > > cannot find any useful information through the git log; the earliest
> > > > commit is
> > > >
> > > > commit 390f32d79fd0c0464fcab008150ad182f4c0abef
> > > > Author: Michael Stack 
> > > > Date:   Sat May 26 05:56:04 2012 +
> > > >
> > > > HBASE-4336 Convert source tree into maven modules
> > > >
> > > > git-svn-id: https://svn.apache.org/repos/asf/hbase/trunk@1342856
> > > > 13f79535-47bb-0310-9956-ffa450edef68
> > > >
> > > > Does anyone still use this feature? Or does anyone have some
> > > > background on how this feature works?
> > > >
> > > > Thanks.
> > > >
> > >
> >
>