[jira] [Commented] (HBASE-18882) Run MR branch-1 jobs against hbase2 cluster
[ https://issues.apache.org/jira/browse/HBASE-18882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16182031#comment-16182031 ] ramkrishna.s.vasudevan commented on HBASE-18882: Linking the related JIRA. > Run MR branch-1 jobs against hbase2 cluster > --- > > Key: HBASE-18882 > URL: https://issues.apache.org/jira/browse/HBASE-18882 > Project: HBase > Issue Type: Task > Components: mapreduce, test >Reporter: stack > Fix For: 2.0.0-beta-1 > > > Ensure this works still. Run all our bundled MR tools at least. Find some > custom if time. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (HBASE-18889) Different namenode URLs but the same location are treated differently
Vishal Khandelwal created HBASE-18889: - Summary: Different namenode URLs but the same location are treated differently Key: HBASE-18889 URL: https://issues.apache.org/jira/browse/HBASE-18889 Project: HBase Issue Type: Bug Reporter: Vishal Khandelwal I tried to create a full backup like this, which passed: {code} ./bin/hbase backup create full hdfs://localhost:8020/backup1 -t test1 {code} and an incremental backup as follows, which failed: {code} ./bin/hbase backup create incremental hdfs://:8020/backup1 -t test1 {code} 2017-09-27 10:34:45,010 ERROR [main] backup.BackupDriver: Error running command-line tool java.io.IOException: Incremental backup table set contains no tables. You need to run full backup first on test1 at org.apache.hadoop.hbase.backup.impl.BackupAdminImpl.backupTables(BackupAdminImpl.java:536) at org.apache.hadoop.hbase.backup.impl.BackupCommands$CreateCommand.execute(BackupCommands.java:336) at org.apache.hadoop.hbase.backup.BackupDriver.parseAndRun(BackupDriver.java:137) at org.apache.hadoop.hbase.backup.BackupDriver.doWork(BackupDriver.java:170) at org.apache.hadoop.hbase.backup.BackupDriver.run(BackupDriver.java:203) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) at org.apache.hadoop.hbase.backup.BackupDriver.main(BackupDriver.java:178) In production, in the case of multiple namenodes, we can use different ways to start -- This message was sent by Atlassian JIRA (v6.4.14#64029)
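The failure above suggests the backup root is compared by its full URI rather than by the filesystem location it names. A minimal sketch with plain java.net.URI (not HBase's actual backup code; `BackupRootCompare` and its helpers are hypothetical names) shows how two URLs for the same namenode location compare unequal:

```java
import java.net.URI;

public class BackupRootCompare {
    // Naive comparison: equality over the whole URI, authority included.
    static boolean sameByUri(String a, String b) {
        return URI.create(a).equals(URI.create(b));
    }

    // Path-only comparison: ignores how the namenode authority was spelled.
    static boolean samePath(String a, String b) {
        return URI.create(a).getPath().equals(URI.create(b).getPath());
    }

    public static void main(String[] args) {
        String full = "hdfs://localhost:8020/backup1";
        String incr = "hdfs://127.0.0.1:8020/backup1";
        // Same physical location, but the authorities differ, so a naive
        // URI comparison treats them as two different backup destinations.
        System.out.println(sameByUri(full, incr)); // false
        System.out.println(samePath(full, incr));  // true
    }
}
```

A real fix would need to normalize or resolve the authority (nameservice IDs, HA aliases) before comparing, which the path-only check above deliberately oversimplifies.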
[jira] [Commented] (HBASE-18090) Improve TableSnapshotInputFormat to allow more multiple mappers per region
[ https://issues.apache.org/jira/browse/HBASE-18090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16182026#comment-16182026 ] Ashu Pachauri commented on HBASE-18090: --- +1 on the patch V5. I'll commit it tomorrow if no one objects by then (given the tests pass) > Improve TableSnapshotInputFormat to allow more multiple mappers per region > -- > > Key: HBASE-18090 > URL: https://issues.apache.org/jira/browse/HBASE-18090 > Project: HBase > Issue Type: Improvement > Components: mapreduce >Affects Versions: 1.4.0 >Reporter: Mikhail Antonov >Assignee: xinxin fan > Attachments: HBASE-18090-branch-1.3-v1.patch, > HBASE-18090-branch-1.3-v2.patch, HBASE-18090-V3-master.patch, > HBASE-18090-V4-master.patch, HBASE-18090-V5-master.patch > > > TableSnapshotInputFormat runs one map task per region in the table snapshot. > This places unnecessary restriction that the region layout of the original > table needs to take the processing resources available to MR job into > consideration. Allowing to run multiple mappers per region (assuming > reasonably even key distribution) would be useful. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
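The improvement under review lets one region feed several mappers. The idea can be sketched by carving a region's key range into contiguous sub-ranges, one per mapper; this is an illustrative sketch over long keys, not the patch's actual byte[]-row-key split logic, and `RegionSplitter` here is a hypothetical name:

```java
import java.util.ArrayList;
import java.util.List;

public class RegionSplitter {
    // Divide one region's key range [start, end) into n contiguous
    // sub-ranges, one per mapper. The last range absorbs any remainder
    // so the full region is always covered.
    static List<long[]> split(long start, long end, int n) {
        List<long[]> splits = new ArrayList<>();
        long width = (end - start) / n;
        for (int i = 0; i < n; i++) {
            long s = start + i * width;
            long e = (i == n - 1) ? end : s + width;
            splits.add(new long[] { s, e });
        }
        return splits;
    }

    public static void main(String[] args) {
        // One region keyed [0, 100) handed to 4 mappers.
        for (long[] r : split(0, 100, 4)) {
            System.out.println(r[0] + " - " + r[1]);
        }
    }
}
```

As the quoted description notes, this only balances work under a reasonably even key distribution; skewed keys would leave some sub-ranges much heavier than others.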
[jira] [Updated] (HBASE-18298) RegionServerServices Interface cleanup for CP expose
[ https://issues.apache.org/jira/browse/HBASE-18298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-18298: -- Release Note: We used to pass the RegionServerServices (RSS), which gave Coprocessors (CP) all sorts of access to internal Server machinery. We now only allow the CP a subset of the RSS in the form of the CPRSS Interface. Particulars: Removed method getRegionServerServices from the CP-exposed RegionCoprocessorEnvironment and RegionServerCoprocessorEnvironment and replaced it with getCoprocessorRegionServerServices. This returns a new interface, CoprocessorRegionServerServices, which is only a subset of RegionServerServices. With that, the below methods are no longer exposed to CPs: WAL getWAL(HRegionInfo regionInfo) List getWALs() FlushRequester getFlushRequester() RegionServerAccounting getRegionServerAccounting() RegionServerRpcQuotaManager getRegionServerRpcQuotaManager() SecureBulkLoadManager getSecureBulkLoadManager() RegionServerSpaceQuotaManager getRegionServerSpaceQuotaManager() void postOpenDeployTasks(final PostOpenDeployContext context) void postOpenDeployTasks(final Region r) boolean reportRegionStateTransition(final RegionStateTransitionContext context) boolean reportRegionStateTransition(TransitionCode code, long openSeqNum, HRegionInfo... hris) boolean reportRegionStateTransition(TransitionCode code, HRegionInfo... hris) RpcServerInterface getRpcServer() ConcurrentMap getRegionsInTransitionInRS() Leases getLeases() ExecutorService getExecutorService() Map getRecoveringRegions() public ServerNonceManager getNonceManager() boolean registerService(Service service) HeapMemoryManager getHeapMemoryManager() double getCompactionPressure() ThroughputController getFlushThroughputController() double getFlushPressure() MetricsRegionServer getMetrics() EntityLock regionLock(List regionInfos, String description, Abortable abort) void unassign(byte[] regionName) Configuration getConfiguration() ZooKeeperWatcher getZooKeeper() ClusterConnection getClusterConnection() MetaTableLocator getMetaTableLocator() CoordinatedStateManager getCoordinatedStateManager() ChoreService getChoreService() void stop(String why) void abort(String why, Throwable e) boolean isAborted() void updateRegionFavoredNodesMapping(String encodedRegionName, List favoredNodes) InetSocketAddress[] getFavoredNodesForRegion(String encodedRegionName) void addToOnlineRegions(Region region) boolean removeFromOnlineRegions(final Region r, ServerName destination) Also, 3 method names have been changed: List getOnlineRegions(TableName tableName) -> List getRegions(TableName tableName) List getOnlineRegions() -> List getRegions() Region getFromOnlineRegions(final String encodedRegionName) -> Region getRegion(final String encodedRegionName) was: Removed method getRegionServerServices from CP exposed RegionCoprocessorEnvironment and RegionServerCoprocessorEnvironment and replaced with getCoprocessorRegionServerServices. This returns a new interface CoprocessorRegionServerServices which is only a subset of RegionServerServices.
With that below methods are no longer exposed for CPs WAL getWAL(HRegionInfo regionInfo) List getWALs() FlushRequester getFlushRequester() RegionServerAccounting getRegionServerAccounting() RegionServerRpcQuotaManager getRegionServerRpcQuotaManager() SecureBulkLoadManager getSecureBulkLoadManager() RegionServerSpaceQuotaManager getRegionServerSpaceQuotaManager() void postOpenDeployTasks(final PostOpenDeployContext context) void postOpenDeployTasks(final Region r) boolean reportRegionStateTransition(final RegionStateTransitionContext context) boolean reportRegionStateTransition(TransitionCode code, long openSeqNum, HRegionInfo... hris) boolean reportRegionStateTransition(TransitionCode code, HRegionInfo... hris) RpcServerInterface getRpcServer() ConcurrentMap getRegionsInTransitionInRS() Leases getLeases() ExecutorService getExecutorService() Map getRecoveringRegions() public ServerNonceManager getNonceManager() boolean registerService(Service service) HeapMemoryManager getHeapMemoryManager() double getCompactionPressure() ThroughputController getFlushThroughputController() double getFlushPressure() MetricsRegionServer getMetrics() EntityLock regionLock(List regionInfos, String description, Abortable abort) void unassign(byte[] regionName) Configuration getConfiguration() ZooKeeperWatcher getZooKeeper() ClusterConnection getClusterConnection() MetaTableLocator getMetaTableLocator() CoordinatedStateManager getCoordinatedStateManager() ChoreService getChoreService() void stop(String why) void abort(String why, Throwable e) boolean isAborted() void updateRegionFavoredNodesMapping(String encodedRegionName, List favoredNodes) InetSocketAddress[] getFavoredNodesForRegion(String encodedRegionName) void addToOnlineRegions(Region region) boolean removeFromOnlineRegions(final
[jira] [Updated] (HBASE-18298) RegionServerServices Interface cleanup for CP expose
[ https://issues.apache.org/jira/browse/HBASE-18298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anoop Sam John updated HBASE-18298: --- Release Note: Removed method getRegionServerServices from the CP-exposed RegionCoprocessorEnvironment and RegionServerCoprocessorEnvironment and replaced it with getCoprocessorRegionServerServices. This returns a new interface, CoprocessorRegionServerServices, which is only a subset of RegionServerServices. With that, the below methods are no longer exposed to CPs: WAL getWAL(HRegionInfo regionInfo) List getWALs() FlushRequester getFlushRequester() RegionServerAccounting getRegionServerAccounting() RegionServerRpcQuotaManager getRegionServerRpcQuotaManager() SecureBulkLoadManager getSecureBulkLoadManager() RegionServerSpaceQuotaManager getRegionServerSpaceQuotaManager() void postOpenDeployTasks(final PostOpenDeployContext context) void postOpenDeployTasks(final Region r) boolean reportRegionStateTransition(final RegionStateTransitionContext context) boolean reportRegionStateTransition(TransitionCode code, long openSeqNum, HRegionInfo... hris) boolean reportRegionStateTransition(TransitionCode code, HRegionInfo... hris) RpcServerInterface getRpcServer() ConcurrentMap getRegionsInTransitionInRS() Leases getLeases() ExecutorService getExecutorService() Map getRecoveringRegions() public ServerNonceManager getNonceManager() boolean registerService(Service service) HeapMemoryManager getHeapMemoryManager() double getCompactionPressure() ThroughputController getFlushThroughputController() double getFlushPressure() MetricsRegionServer getMetrics() EntityLock regionLock(List regionInfos, String description, Abortable abort) void unassign(byte[] regionName) Configuration getConfiguration() ZooKeeperWatcher getZooKeeper() ClusterConnection getClusterConnection() MetaTableLocator getMetaTableLocator() CoordinatedStateManager getCoordinatedStateManager() ChoreService getChoreService() void stop(String why) void abort(String why, Throwable e) boolean isAborted() void updateRegionFavoredNodesMapping(String encodedRegionName, List favoredNodes) InetSocketAddress[] getFavoredNodesForRegion(String encodedRegionName) void addToOnlineRegions(Region region) boolean removeFromOnlineRegions(final Region r, ServerName destination) Also, 3 method names have been changed: List getOnlineRegions(TableName tableName) -> List getRegions(TableName tableName) List getOnlineRegions() -> List getRegions() Region getFromOnlineRegions(final String encodedRegionName) -> Region getRegion(final String encodedRegionName) > RegionServerServices Interface cleanup for CP expose > > > Key: HBASE-18298 > URL: https://issues.apache.org/jira/browse/HBASE-18298 > Project: HBase > Issue Type: Sub-task > Components: Coprocessors >Reporter: Anoop Sam John >Assignee: Anoop Sam John >Priority: Critical > Fix For: 2.0.0-alpha-4 > > Attachments: HBASE-18298.patch, HBASE-18298_V2.patch, > HBASE-18298_V3.patch, HBASE-18298_V4.patch, HBASE-18298_V5.patch, > HBASE-18298_V6.patch, HBASE-18298_V7.patch, HBASE-18298_V7.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
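For CP authors migrating, the release note's pattern is a narrowing facade: the environment hands out a subset interface, and the surviving methods are renamed. A hedged sketch of that shape (`CoprocessorServices`, `FullServices`, and `FacadeDemo` are illustrative stand-ins, not the real HBase types):

```java
import java.util.Collections;
import java.util.List;

// CP-facing subset: stand-in for CoprocessorRegionServerServices.
interface CoprocessorServices {
    List<String> getRegions();   // renamed from getOnlineRegions()
}

// Full server-internal interface: the extra methods are hidden from CPs.
interface FullServices extends CoprocessorServices {
    void stop(String why);       // internal-only; no longer CP-visible
}

public class FacadeDemo {
    static class Server implements FullServices {
        public List<String> getRegions() { return Collections.singletonList("r1"); }
        public void stop(String why) { /* internal shutdown path */ }
    }

    public static void main(String[] args) {
        CoprocessorServices cp = new Server(); // CPs receive only the narrow view
        System.out.println(cp.getRegions());
        // cp.stop("x"); // would not compile: stop() is not on the CP-facing type
    }
}
```

The compile error in the commented line is the point of the cleanup: coprocessor code that relied on internal methods fails at build time rather than at runtime.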
[jira] [Created] (HBASE-18888) StealJobQueue should call super() to init the PriorityBlockingQueue
ramkrishna.s.vasudevan created HBASE-18888: -- Summary: StealJobQueue should call super() to init the PriorityBlockingQueue Key: HBASE-18888 URL: https://issues.apache.org/jira/browse/HBASE-18888 Project: HBase Issue Type: Bug Affects Versions: 2.0.0-alpha-3 Reporter: ramkrishna.s.vasudevan Assignee: ramkrishna.s.vasudevan Fix For: 2.0.0-alpha-4 {code} ERROR: java.io.IOException: org.apache.hadoop.hbase.regionserver.CompactSplit$CompactionRunner cannot be cast to java.lang.Comparable at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:465) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:278) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:258) Caused by: java.lang.ClassCastException: org.apache.hadoop.hbase.regionserver.CompactSplit$CompactionRunner cannot be cast to java.lang.Comparable at java.util.concurrent.PriorityBlockingQueue.siftUpComparable(PriorityBlockingQueue.java:357) at java.util.concurrent.PriorityBlockingQueue.offer(PriorityBlockingQueue.java:489) at org.apache.hadoop.hbase.util.StealJobQueue.offer(StealJobQueue.java:103) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1361) at org.apache.hadoop.hbase.regionserver.CompactSplit.requestCompactionInternal(CompactSplit.java:291) at org.apache.hadoop.hbase.regionserver.CompactSplit.requestCompactionInternal(CompactSplit.java:248) at org.apache.hadoop.hbase.regionserver.CompactSplit.requestCompaction(CompactSplit.java:236) at org.apache.hadoop.hbase.regionserver.RSRpcServices.compactRegion(RSRpcServices.java:1591) at org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:26856) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:406) {code} Seems to be a simple miss. 
StealJobQueue does not init the PriorityBlockingQueue that it extends and so major compaction/compaction just fails with the above stack trace. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
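The miss is easy to reproduce outside HBase: a PriorityBlockingQueue subclass that accepts a Comparator but never hands it to super() falls back to natural ordering, so offering a non-Comparable element throws exactly the ClassCastException in the stack trace above. A sketch (class names here are illustrative, not the real StealJobQueue):

```java
import java.util.Comparator;
import java.util.concurrent.PriorityBlockingQueue;

public class StealQueueSketch {
    // A task type that is NOT Comparable, like CompactionRunner in the trace.
    static class Task {
        final int priority;
        Task(int priority) { this.priority = priority; }
    }

    // Buggy: the comparator never reaches super(), so the queue expects
    // natural ordering and offer() throws ClassCastException.
    static class BuggyQueue extends PriorityBlockingQueue<Task> {
        BuggyQueue(Comparator<Task> cmp) { super(); /* cmp ignored: the bug */ }
    }

    // Fixed: super() is given the initial capacity and the comparator.
    static class FixedQueue extends PriorityBlockingQueue<Task> {
        FixedQueue(Comparator<Task> cmp) { super(11, cmp); }
    }

    public static void main(String[] args) {
        Comparator<Task> byPriority = (a, b) -> Integer.compare(a.priority, b.priority);
        try {
            new BuggyQueue(byPriority).offer(new Task(1));
        } catch (ClassCastException e) {
            System.out.println("buggy queue: " + e); // Task cannot be cast to Comparable
        }
        FixedQueue q = new FixedQueue(byPriority);
        q.offer(new Task(2));
        q.offer(new Task(1));
        System.out.println("fixed queue head priority: " + q.peek().priority); // 1
    }
}
```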
[jira] [Commented] (HBASE-18298) RegionServerServices Interface cleanup for CP expose
[ https://issues.apache.org/jira/browse/HBASE-18298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16182015#comment-16182015 ] stack commented on HBASE-18298: --- This is excellent! Needs a nice release note. Include any suggestions that might help a CP migrate? Good stuff. > RegionServerServices Interface cleanup for CP expose > > > Key: HBASE-18298 > URL: https://issues.apache.org/jira/browse/HBASE-18298 > Project: HBase > Issue Type: Sub-task > Components: Coprocessors >Reporter: Anoop Sam John >Assignee: Anoop Sam John >Priority: Critical > Fix For: 2.0.0-alpha-4 > > Attachments: HBASE-18298.patch, HBASE-18298_V2.patch, > HBASE-18298_V3.patch, HBASE-18298_V4.patch, HBASE-18298_V5.patch, > HBASE-18298_V6.patch, HBASE-18298_V7.patch, HBASE-18298_V7.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-18846) Accommodate the hbase-indexer/lily/SEP consumer deploy-type
[ https://issues.apache.org/jira/browse/HBASE-18846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16182014#comment-16182014 ] stack commented on HBASE-18846: --- Oh, the objective is getting Lily out of our guts by having it use a bog-standard HRegionServer with a bit of specialized config with a single, ordained plugin point for catching the replication stream; getting Lily to this point will make it so it doesn't break every time we tinker in our internals (currently it breaks whenever we sneeze). > Accommodate the hbase-indexer/lily/SEP consumer deploy-type > --- > > Key: HBASE-18846 > URL: https://issues.apache.org/jira/browse/HBASE-18846 > Project: HBase > Issue Type: Bug >Reporter: stack > Attachments: HBASE-18846.master.001.patch, javadoc.txt > > > This is a follow-on from HBASE-10504, Define a Replication Interface. There > we defined a new, flexible replication endpoint for others to implement but > it did little to help the case of the lily hbase-indexer. This issue takes up > the case of the hbase-indexer. > The hbase-indexer poses to hbase as a 'fake' peer cluster (For why > hbase-indexer is implemented so, the advantage to having the indexing done in > a separate process set that can be independently scaled, can participate in > the same security realm, etc., see discussion in HBASE-10504). The > hbase-indexer will start up cut-down "RegionServer" processes that are just > instances of hbase RpcServer hosting an AdminProtos Service. They make > themselves 'appear' to the Replication Source by hoisting up an ephemeral > znode 'registering' as a RegionServer. 
The source cluster then streams > WALEdits to the Admin Protos method: > {code} > public ReplicateWALEntryResponse replicateWALEntry(final RpcController > controller, > final ReplicateWALEntryRequest request) throws ServiceException { > {code} > The hbase-indexer relies on other hbase internals like Server so it can get a > ZooKeeperWatcher instance and know the 'name' to use for this cut-down server. > Thoughts on how to proceed include: > > * Better formalize its current digestion of hbase internals; make it so > rpcserver is allowed to be used by others, etc. This would be hard to do > given they use basics like Server, Protobuf serdes for WAL types, and > AdminProtos Service. Any change in this wide API breaks (again) > hbase-indexer. We have made a 'channel' for Coprocessor Endpoints so they > continue to work though they use 'internal' types. They can use protos in > hbase-protocol. hbase-protocol protos are in a limbo currently where they are > sort-of 'public'; a TODO. Perhaps the hbase-indexer could do similar relying > on the hbase-protocol (pb2.5) content and we could do something to reveal > rpcserver and zk for hbase-indexer safe use. > * Start an actual RegionServer only have it register the AdminProtos Service > only -- not ClientProtos and the Service that does Master interaction, etc. > [I checked, this is not as easy to do as I at first thought -- St.Ack] Then > have the hbase-indexer implement an AdminCoprocessor to override the > replicateWALEntry method (the Admin CP implementation may need work). This > would narrow the hbase-indexer exposure to that of the Admin Coprocessor > Interface > * Over in HBASE-10504, [~enis] suggested "... if we want to provide > isolation for the replication services in hbase, we can have a simple host as > another daemon which hosts the ReplicationEndpoint implementation. RS's will > use a built-in RE to send the edits to this layer, and the host will delegate > it to the RE implementation. 
The flow would be something like: RS --> RE > inside RS --> Host daemon for RE --> Actual RE implementation --> third party > system..." > > Other crazy notions occur including the setup of an Admin Interface > Coprocessor Endpoint. A new ReplicationEndpoint would feed the replication > stream to the remote cluster via the CPEP registered channel. > But time is short. Hopefully we can figure something that will work in 2.0 > timeframe w/o too much code movement. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
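The "single, ordained plugin point" objective above can be sketched as a host that owns replicateWALEntry and delegates every incoming edit to a configurable sink; `ReplicationSink` and `SinkHost` below are hypothetical stand-ins (the real path would carry WALEntry protos, not strings):

```java
import java.util.List;

// Hypothetical stand-in for a pluggable replication sink: the one
// interface a consumer like the hbase-indexer would implement.
interface ReplicationSink {
    void replicate(List<String> walEntries);
}

public class SinkHost {
    private final ReplicationSink sink;

    SinkHost(ReplicationSink sink) { this.sink = sink; }

    // The one ordained plugin point: all incoming edits pass through here,
    // so internals can change without breaking the consumer.
    void replicateWALEntry(List<String> entries) { sink.replicate(entries); }

    public static void main(String[] args) {
        StringBuilder indexed = new StringBuilder();
        // A Lily-style sink would feed a Lucene index; here we just record.
        SinkHost host = new SinkHost(entries -> entries.forEach(indexed::append));
        host.replicateWALEntry(List.of("edit1", "edit2"));
        System.out.println(indexed); // prints edit1edit2
    }
}
```

The design point is the inversion: instead of the consumer faking a RegionServer and absorbing internal types, the server exposes one narrow callback and keeps everything else private.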
[jira] [Commented] (HBASE-18846) Accommodate the hbase-indexer/lily/SEP consumer deploy-type
[ https://issues.apache.org/jira/browse/HBASE-18846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16182012#comment-16182012 ] stack commented on HBASE-18846: --- .001 is a hack to stand up a RegionServer with nothing but the basic threads and 'services' (chores, sleepers, managers) running. Adds config so you can turn on/off Admin+Client Services. Adds code so if a Service/Thread/Chore has not been initialized, it keeps going. Adds a 'masterless' config so we press on if there is no Master to check in with. Includes a config which turns off facilities such as the WAL. So, the idea was to stand up a RegionServer whose only purpose in life is receiving a replication stream, and then allow Lily to override the single replicateWALEntry method to catch the replication stream and feed a Lucene index (as it does currently, only it manually sets up an RpcServer, registers an Admin Service, and overrides all but the one method to throw NotImplementedException). The thought was to allow override w/ a Coprocessor BUT preReplicateLogEntries just removed the passing of WALEntries... to CPs (no PBs in CP API), which messes up this tack. But we have ReplicationSinkService. It is IA.Private currently and defaults to calling RS methods. Let me make it so you can insert your own. The machinery is sort-of there. We can do this for the Lily crew. TODO: Add means of inserting a custom ReplicationSinkService and cleanup of service/thread/chore startup in RS. Doc. > Accommodate the hbase-indexer/lily/SEP consumer deploy-type > --- > > Key: HBASE-18846 > URL: https://issues.apache.org/jira/browse/HBASE-18846 > Project: HBase > Issue Type: Bug >Reporter: stack > Attachments: HBASE-18846.master.001.patch, javadoc.txt
[jira] [Updated] (HBASE-16290) Dump summary of callQueue content; can help debugging
[ https://issues.apache.org/jira/browse/HBASE-16290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sreeram Venkatasubramanian updated HBASE-16290: --- Attachment: HBASE-16290.master.001.patch HBASE-16290.master.002.patch HBASE-16290.master.003.patch HBASE-16290.master.004.patch > Dump summary of callQueue content; can help debugging > - > > Key: HBASE-16290 > URL: https://issues.apache.org/jira/browse/HBASE-16290 > Project: HBase > Issue Type: Bug > Components: Operability >Affects Versions: 2.0.0 >Reporter: stack >Assignee: Sreeram Venkatasubramanian > Labels: beginner > Fix For: 2.0.0 > > Attachments: DebugDump_screenshot.png, HBASE-16290.master.001.patch, > HBASE-16290.master.002.patch, HBASE-16290.master.003.patch, > HBASE-16290.master.004.patch, HBASE-16290.master.005.patch, Sample Summary.txt > > > Being able to get a clue what is in a backedup callQueue could give insight > on what is going on on a jacked server. Just needs to summarize count, sizes, > call types. Useful debugging. In a servlet? -- This message was sent by Atlassian JIRA (v6.4.14#64029)
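The summary the issue asks for (count, sizes, call types) amounts to a group-by over the queued calls. A hedged sketch, with `Call` and `CallQueueSummary` as hypothetical stand-ins for the server's real queue of RpcCall objects:

```java
import java.util.Collection;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class CallQueueSummary {
    // Hypothetical stand-in for one queued RPC call.
    record Call(String method, int sizeBytes) {}

    // Aggregate count and total payload size per call type, which is the
    // kind of summary the issue wants dumped for a backed-up queue.
    static Map<String, int[]> summarize(Collection<Call> queue) {
        Map<String, int[]> summary = new TreeMap<>();
        for (Call c : queue) {
            int[] agg = summary.computeIfAbsent(c.method(), k -> new int[2]);
            agg[0]++;                 // call count
            agg[1] += c.sizeBytes();  // total bytes
        }
        return summary;
    }

    public static void main(String[] args) {
        List<Call> queue = List.of(new Call("Scan", 512), new Call("Get", 128),
                                   new Call("Scan", 256));
        summarize(queue).forEach((m, agg) ->
            System.out.println(m + ": count=" + agg[0] + " bytes=" + agg[1]));
    }
}
```

Rendered through a servlet or the debug dump page, this per-type breakdown would show at a glance whether a jacked server is drowning in many small Gets or a few huge Scans.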
[jira] [Updated] (HBASE-18298) RegionServerServices Interface cleanup for CP expose
[ https://issues.apache.org/jira/browse/HBASE-18298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anoop Sam John updated HBASE-18298: --- Resolution: Fixed Hadoop Flags: Incompatible change, Reviewed Status: Resolved (was: Patch Available) Pushed to branch-2 and master. Thanks for the reviews. > RegionServerServices Interface cleanup for CP expose > > > Key: HBASE-18298 > URL: https://issues.apache.org/jira/browse/HBASE-18298 > Project: HBase > Issue Type: Sub-task > Components: Coprocessors >Reporter: Anoop Sam John >Assignee: Anoop Sam John >Priority: Critical > Fix For: 2.0.0-alpha-4 > > Attachments: HBASE-18298.patch, HBASE-18298_V2.patch, > HBASE-18298_V3.patch, HBASE-18298_V4.patch, HBASE-18298_V5.patch, > HBASE-18298_V6.patch, HBASE-18298_V7.patch, HBASE-18298_V7.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-16290) Dump summary of callQueue content; can help debugging
[ https://issues.apache.org/jira/browse/HBASE-16290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16182007#comment-16182007 ] Sreeram Venkatasubramanian commented on HBASE-16290: Added earlier patches :) Sorry! > Dump summary of callQueue content; can help debugging > - > > Key: HBASE-16290 > URL: https://issues.apache.org/jira/browse/HBASE-16290 > Project: HBase > Issue Type: Bug > Components: Operability >Affects Versions: 2.0.0 >Reporter: stack >Assignee: Sreeram Venkatasubramanian > Labels: beginner > Fix For: 2.0.0 > > Attachments: DebugDump_screenshot.png, HBASE-16290.master.001.patch, > HBASE-16290.master.002.patch, HBASE-16290.master.003.patch, > HBASE-16290.master.004.patch, HBASE-16290.master.005.patch, Sample Summary.txt > > > Being able to get a clue what is in a backedup callQueue could give insight > on what is going on on a jacked server. Just needs to summarize count, sizes, > call types. Useful debugging. In a servlet? -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-18846) Accommodate the hbase-indexer/lily/SEP consumer deploy-type
[ https://issues.apache.org/jira/browse/HBASE-18846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-18846: -- Attachment: HBASE-18846.master.001.patch > Accommodate the hbase-indexer/lily/SEP consumer deploy-type > --- > > Key: HBASE-18846 > URL: https://issues.apache.org/jira/browse/HBASE-18846 > Project: HBase > Issue Type: Bug >Reporter: stack > Attachments: HBASE-18846.master.001.patch, javadoc.txt -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-18298) RegionServerServices Interface cleanup for CP expose
[ https://issues.apache.org/jira/browse/HBASE-18298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181996#comment-16181996 ] Hadoop QA commented on HBASE-18298: ---
(/) +1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 21s | Docker mode activated. |
| +1 | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 65 new or modified test files. |
| 0 | mvndep | 0m 33s | Maven dependency ordering for branch |
| +1 | mvninstall | 4m 12s | master passed |
| +1 | compile | 2m 50s | master passed |
| +1 | checkstyle | 3m 37s | master passed |
| +1 | mvneclipse | 2m 2s | master passed |
| +1 | shadedjars | 9m 54s | branch has no errors when building our shaded downstream artifacts. |
| +1 | findbugs | 7m 35s | master passed |
| +1 | javadoc | 2m 27s | master passed |
| 0 | mvndep | 0m 18s | Maven dependency ordering for patch |
| +1 | mvninstall | 2m 52s | the patch passed |
| +1 | compile | 2m 40s | the patch passed |
| +1 | javac | 2m 40s | the patch passed |
| +1 | checkstyle | 3m 8s | the patch passed |
| +1 | mvneclipse | 1m 52s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedjars | 4m 9s | patch has no errors when building our shaded downstream artifacts. |
| +1 | hadoopcheck | 39m 57s | Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. |
| +1 | findbugs | 7m 35s | the patch passed |
| +1 | javadoc | 2m 13s | the patch passed |
| +1 | unit | 2m 36s | hbase-client in the patch passed. |
| +1 | unit | 96m 23s | hbase-server in the patch passed. |
| +1 | unit | 6m 23s | hbase-mapreduce in the patch passed. |
| +1 | unit | 1m 55s | hbase-thrift in the patch passed. |
| +1 | unit | 0m 32s | hbase-rsgroup in the patch passed. |
| +1 | unit | 2m 14s | hbase-endpoint in the patch passed. |
| +1 | unit | 0m 55s | hbase-examples in the patch passed. |
| +1 | asflicense | 1m 42s | The patch does not generate ASF License warnings. |
| | | 202m 58s | |
[jira] [Commented] (HBASE-17732) Coprocessor Design Improvements
[ https://issues.apache.org/jira/browse/HBASE-17732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181990#comment-16181990 ] Hadoop QA commented on HBASE-17732: ---
(/) +1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 4m 49s | Docker mode activated. |
| +1 | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 91 new or modified test files. |
| 0 | mvndep | 0m 30s | Maven dependency ordering for branch |
| +1 | mvninstall | 3m 50s | master passed |
| +1 | compile | 3m 42s | master passed |
| +1 | checkstyle | 4m 51s | master passed |
| +1 | mvneclipse | 2m 51s | master passed |
| +1 | shadedjars | 11m 43s | branch has no errors when building our shaded downstream artifacts. |
| +1 | findbugs | 7m 28s | master passed |
| +1 | javadoc | 2m 51s | master passed |
| 0 | mvndep | 0m 18s | Maven dependency ordering for patch |
| +1 | mvninstall | 3m 28s | the patch passed |
| +1 | compile | 3m 12s | the patch passed |
| +1 | javac | 3m 12s | the patch passed |
| +1 | checkstyle | 4m 1s | the patch passed |
| +1 | mvneclipse | 2m 16s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedjars | 4m 6s | patch has no errors when building our shaded downstream artifacts. |
| +1 | hadoopcheck | 37m 34s | Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. |
| +1 | findbugs | 7m 34s | the patch passed |
| +1 | javadoc | 2m 19s | the patch passed |
| +1 | unit | 2m 25s | hbase-client in the patch passed. |
| +1 | unit | 117m 45s | hbase-server in the patch passed. |
| +1 | unit | 9m 28s | hbase-mapreduce in the patch passed. |
| +1 | unit | 2m 13s | hbase-thrift in the patch passed. |
| +1 | unit | 0m 30s | hbase-rsgroup in the patch passed. |
| +1 | unit | 4m 19s | hbase-endpoint in the patch passed. |
| +1 | unit | 10m 13s | hbase-backup in the patch passed. |
| +1 | unit | 0m 29s | hbase-it in the patch passed. |
| +1 | unit | 1m 5s |
[jira] [Commented] (HBASE-16290) Dump summary of callQueue content; can help debugging
[ https://issues.apache.org/jira/browse/HBASE-16290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181988#comment-16181988 ] Chia-Ping Tsai commented on HBASE-16290: You don't have to delete the old patches.:) > Dump summary of callQueue content; can help debugging > - > > Key: HBASE-16290 > URL: https://issues.apache.org/jira/browse/HBASE-16290 > Project: HBase > Issue Type: Bug > Components: Operability >Affects Versions: 2.0.0 >Reporter: stack >Assignee: Sreeram Venkatasubramanian > Labels: beginner > Fix For: 2.0.0 > > Attachments: DebugDump_screenshot.png, HBASE-16290.master.005.patch, > Sample Summary.txt > > > Being able to get a clue what is in a backedup callQueue could give insight > on what is going on on a jacked server. Just needs to summarize count, sizes, > call types. Useful debugging. In a servlet? -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-16290) Dump summary of callQueue content; can help debugging
[ https://issues.apache.org/jira/browse/HBASE-16290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sreeram Venkatasubramanian updated HBASE-16290: --- Attachment: HBASE-16290.master.005.patch
[jira] [Updated] (HBASE-16290) Dump summary of callQueue content; can help debugging
[ https://issues.apache.org/jira/browse/HBASE-16290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sreeram Venkatasubramanian updated HBASE-16290: --- Attachment: (was: HBASE-16290.master.004.patch)
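A rough sketch of the kind of per-call-type aggregation such a callQueue dump could produce — "summarize count, sizes, call types". Class and method names here are illustrative only, not the classes in the attached patch:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical queued-call record; the real RPC call runner carries
// the method descriptor, request size, client, and receive time.
class QueuedCall {
    final String methodName;
    final long sizeBytes;
    QueuedCall(String methodName, long sizeBytes) {
        this.methodName = methodName;
        this.sizeBytes = sizeBytes;
    }
}

class CallQueueSummary {
    // Returns method -> {count, totalBytes}, preserving first-seen order
    // so the dump reads in queue order.
    static Map<String, long[]> summarize(List<QueuedCall> queue) {
        Map<String, long[]> byMethod = new LinkedHashMap<>();
        for (QueuedCall c : queue) {
            long[] agg = byMethod.computeIfAbsent(c.methodName, k -> new long[2]);
            agg[0] += 1;            // call count
            agg[1] += c.sizeBytes;  // cumulative request size
        }
        return byMethod;
    }
}
```

A backed-up queue dominated by one method name with a large cumulative size is exactly the clue the issue asks for when debugging a jacked server.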
[jira] [Commented] (HBASE-18806) VerifyRep by snapshot need not restore snapshot for each mapper
[ https://issues.apache.org/jira/browse/HBASE-18806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181984#comment-16181984 ] Hadoop QA commented on HBASE-18806: ---
(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 19s | Docker mode activated. |
| +1 | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
| 0 | mvndep | 0m 31s | Maven dependency ordering for branch |
| +1 | mvninstall | 4m 4s | master passed |
| +1 | compile | 0m 57s | master passed |
| +1 | checkstyle | 1m 32s | master passed |
| +1 | mvneclipse | 0m 31s | master passed |
| +1 | shadedjars | 6m 6s | branch has no errors when building our shaded downstream artifacts. |
| +1 | findbugs | 3m 5s | master passed |
| +1 | javadoc | 0m 43s | master passed |
| 0 | mvndep | 0m 18s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 7s | the patch passed |
| +1 | compile | 0m 58s | the patch passed |
| +1 | javac | 0m 58s | the patch passed |
| +1 | checkstyle | 1m 33s | the patch passed |
| +1 | mvneclipse | 0m 30s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedjars | 4m 9s | patch has no errors when building our shaded downstream artifacts. |
| +1 | hadoopcheck | 43m 48s | Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. |
| +1 | findbugs | 3m 59s | the patch passed |
| +1 | javadoc | 0m 57s | the patch passed |
| -1 | unit | 133m 37s | hbase-server in the patch failed. |
| +1 | unit | 7m 42s | hbase-mapreduce in the patch passed. |
| +1 | asflicense | 0m 32s | The patch does not generate ASF License warnings. |
| | | 211m 28s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.security.access.TestCoprocessorWhitelistMasterObserver |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:5d60123 |
| JIRA Issue | HBASE-18806 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12888986/HBASE-18806.v3.patch |
| Optional Tests | asflicense shadedjars javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile |
| uname | Linux e01e098eebee 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality |
[jira] [Commented] (HBASE-17732) Coprocessor Design Improvements
[ https://issues.apache.org/jira/browse/HBASE-17732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181981#comment-16181981 ] Appy commented on HBASE-17732: -- Yay, Hadoop QA passed! Here's the list of minor followup improvements: HBASE-18884. [~stack] [~apurtell] What do you say? Is this good to go in? > Coprocessor Design Improvements > --- > > Key: HBASE-17732 > URL: https://issues.apache.org/jira/browse/HBASE-17732 > Project: HBase > Issue Type: Improvement > Reporter: Appy > Assignee: Appy > Priority: Critical > Fix For: 2.0.0-alpha-4 > > Attachments: HBASE-17732.master.001.patch, > HBASE-17732.master.002.patch, HBASE-17732.master.003.patch, > HBASE-17732.master.004.patch, HBASE-17732.master.005.patch, > HBASE-17732.master.006.patch, HBASE-17732.master.007.patch, > HBASE-17732.master.008.patch, HBASE-17732.master.009.patch, > HBASE-17732.master.010.patch, HBASE-17732.master.011.patch, > HBASE-17732.master.012.patch, HBASE-17732.master.013.patch, > HBASE-17732.master.014.patch > > > The two main changes are: > * *Adding a template for the coprocessor type to CoprocessorEnvironment, i.e. > {{interface CoprocessorEnvironment}}* > ** Enables us to load only relevant coprocessors in hosts. Right now each > type of host loads all types of coprocs, and it's only during execOperation > that it checks whether the coproc is of the correct type, i.e. XCoprocessorHost will > load XObserver, YObserver, and all others, and will check in execOperation if > {{coproc instanceOf XObserver}} and ignore the rest. > ** Allows sharing of a bunch of functions/classes which are currently > duplicated in each host, e.g. CoprocessorOperations, > CoprocessorOperationWithResult, execOperations(). > * *Introduce 4 coprocessor classes and use composition between these new > classes and the old observers* > ** The real gold here is, moving forward, we'll be able to break down giant > everything-in-one observers (MasterObserver has 100+ functions) into smaller, > more focused observers.
These smaller observers can then have different compat > guarantees!! > Here's a more detailed design doc: > https://docs.google.com/document/d/1mPkM1CRRvBMZL4dBQzrus8obyvNnHhR5it2yyhiFXTg/edit?usp=sharing
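The templated-environment idea in the description above can be sketched with plain Java generics. The names below (CoprocessorHost, load) are illustrative stand-ins, not the actual HBase classes or the patch's API: pinning the coprocessor type on the host lets it reject wrong-kind coprocessors once at load time instead of instanceOf-checking on every execOperation call.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical marker hierarchy; real HBase observers differ.
interface Coprocessor {}
interface RegionObserver extends Coprocessor {}
interface MasterObserver extends Coprocessor {}

// The type parameter pins what kind of coprocessor this host accepts.
class CoprocessorHost<C extends Coprocessor> {
    private final Class<C> type;
    private final List<C> loaded = new ArrayList<>();
    CoprocessorHost(Class<C> type) { this.type = type; }

    boolean load(Coprocessor cp) {
        if (!type.isInstance(cp)) {
            return false;             // wrong kind: skipped at load time
        }
        loaded.add(type.cast(cp));    // safe, checked cast
        return true;
    }

    int loadedCount() { return loaded.size(); }
}
```

With this shape, a region host is a `CoprocessorHost<RegionObserver>` and shared machinery like execOperation can live once in the generic base instead of being duplicated per host.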
[jira] [Created] (HBASE-18887) Full backup passed on hdfs root but incremental failed. Not able to clean full backup
Vishal Khandelwal created HBASE-18887: - Summary: Full backup passed on hdfs root but incremental failed. Not able to clean full backup Key: HBASE-18887 URL: https://issues.apache.org/jira/browse/HBASE-18887 Project: HBase Issue Type: Bug Reporter: Vishal Khandelwal
{code} ./bin/hbase backup create full hdfs://localhost:8020/ -t test1 {code}
2017-09-27 10:19:38,885 INFO [main] impl.BackupManifest: Manifest file stored to hdfs://localhost:8020/backup_1506487766386/.backup.manifest
2017-09-27 10:19:38,937 INFO [main] impl.TableBackupClient: Backup backup_1506487766386 completed. Backup session backup_1506487766386 finished. Status: SUCCESS
{code} ./bin/hbase backup create incremental hdfs://localhost:8020/ -t test1 {code}
2017-09-27 10:20:48,211 INFO [main] mapreduce.JobSubmitter: Cleaning up the staging area /tmp/hadoop-yarn/staging/vkhandelwal/.staging/job_1506419443344_0045
2017-09-27 10:20:48,215 ERROR [main] impl.TableBackupClient: Unexpected exception in incremental-backup: incremental copy backup_1506487845361 Can not convert from directory (check Hadoop, HBase and WALPlayer M/R job logs)
java.io.IOException: Can not convert from directory (check Hadoop, HBase and WALPlayer M/R job logs)
at org.apache.hadoop.hbase.backup.impl.IncrementalTableBackupClient.walToHFiles(IncrementalTableBackupClient.java:363)
at org.apache.hadoop.hbase.backup.impl.IncrementalTableBackupClient.convertWALsToHFiles(IncrementalTableBackupClient.java:322)
at org.apache.hadoop.hbase.backup.impl.IncrementalTableBackupClient.execute(IncrementalTableBackupClient.java:232)
at org.apache.hadoop.hbase.backup.impl.BackupAdminImpl.backupTables(BackupAdminImpl.java:601)
at org.apache.hadoop.hbase.backup.impl.BackupCommands$CreateCommand.execute(BackupCommands.java:336)
at org.apache.hadoop.hbase.backup.BackupDriver.parseAndRun(BackupDriver.java:137)
at org.apache.hadoop.hbase.backup.BackupDriver.doWork(BackupDriver.java:170)
at org.apache.hadoop.hbase.backup.BackupDriver.run(BackupDriver.java:203)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.hbase.backup.BackupDriver.main(BackupDriver.java:178)
Caused by: java.lang.IllegalArgumentException: Can not create a Path from an empty string
at org.apache.hadoop.fs.Path.checkPathArg(Path.java:126)
at org.apache.hadoop.fs.Path.<init>(Path.java:134)
at org.apache.hadoop.util.StringUtils.stringToPath(StringUtils.java:245)
at org.apache.hadoop.hbase.mapreduce.WALInputFormat.getInputPaths(WALInputFormat.java:301)
at org.apache.hadoop.hbase.mapreduce.WALInputFormat.getSplits(WALInputFormat.java:274)
at org.apache.hadoop.hbase.mapreduce.WALInputFormat.getSplits(WALInputFormat.java:264)
at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:301)
at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:318)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:196)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
at org.apache.hadoop.hbase.mapreduce.WALPlayer.run(WALPlayer.java:380)
at org.apache.hadoop.hbase.backup.impl.IncrementalTableBackupClient.walToHFiles(IncrementalTableBackupClient.java:354)
... 9 more
2017-09-27 10:20:48,216 ERROR [main] impl.TableBackupClient: BackupId=backup_1506487845361,startts=1506487846725,failedts=1506487848216,failedphase=PREPARE_INCREMENTAL,failedmessage=Can not
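One hedged reading of the "Can not create a Path from an empty string" failure above: when the backup destination is the bare filesystem root, stripping that prefix from a derived location can leave an empty string, which Hadoop's Path constructor rejects in checkPathArg. A minimal stand-alone illustration — stripRoot is a made-up helper for this sketch, not the WALInputFormat code:

```java
// Illustrative only: mimics how prefix-stripping against a destination
// that is the bare filesystem root can yield the empty string that
// Path.checkPathArg later rejects with IllegalArgumentException.
class RootPathDemo {
    static String stripRoot(String fullUri, String backupRoot) {
        return fullUri.startsWith(backupRoot)
            ? fullUri.substring(backupRoot.length())
            : fullUri;
    }
}
```

If this reading holds, either rejecting a root-only destination up front or special-casing it when composing input paths would avoid handing an empty string to the Path constructor.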
[jira] [Updated] (HBASE-18090) Improve TableSnapshotInputFormat to allow multiple mappers per region
[ https://issues.apache.org/jira/browse/HBASE-18090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] xinxin fan updated HBASE-18090: --- Attachment: HBASE-18090-V5-master.patch > Improve TableSnapshotInputFormat to allow multiple mappers per region > -- > > Key: HBASE-18090 > URL: https://issues.apache.org/jira/browse/HBASE-18090 > Project: HBase > Issue Type: Improvement > Components: mapreduce > Affects Versions: 1.4.0 > Reporter: Mikhail Antonov > Assignee: xinxin fan > Attachments: HBASE-18090-branch-1.3-v1.patch, > HBASE-18090-branch-1.3-v2.patch, HBASE-18090-V3-master.patch, > HBASE-18090-V4-master.patch, HBASE-18090-V5-master.patch > > > TableSnapshotInputFormat runs one map task per region in the table snapshot. > This places an unnecessary restriction that the region layout of the original > table needs to take the processing resources available to the MR job into > consideration. Allowing multiple mappers per region (assuming > reasonably even key distribution) would be useful.
[jira] [Updated] (HBASE-18090) Improve TableSnapshotInputFormat to allow multiple mappers per region
[ https://issues.apache.org/jira/browse/HBASE-18090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] xinxin fan updated HBASE-18090: --- Attachment: (was: HBASE-18090-V5-master.patch)
[jira] [Updated] (HBASE-18090) Improve TableSnapshotInputFormat to allow multiple mappers per region
[ https://issues.apache.org/jira/browse/HBASE-18090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] xinxin fan updated HBASE-18090: --- Attachment: HBASE-18090-V5-master.patch
[jira] [Comment Edited] (HBASE-18886) Backup command once cancelled not able to re-trigger backup again
[ https://issues.apache.org/jira/browse/HBASE-18886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181973#comment-16181973 ] Vishal Khandelwal edited comment on HBASE-18886 at 9/27/17 4:43 AM: Used the following command to recover: {code} delete 'backup:system', 'activesession' {code} But this should be part of repair. Here is the scenario: as soon as the backup cmd started, I killed it and tried to restart. was (Author: vishk): {code} delete 'backup:system', 'activesession' {code} But these should be part of repair. Here is the scenario : As soon as backup cmd started i killed and tried to restart. > Backup command once cancelled not able to re-trigger backup again > --- > > Key: HBASE-18886 > URL: https://issues.apache.org/jira/browse/HBASE-18886 > Project: HBase > Issue Type: Bug > Reporter: Vishal Khandelwal > > {code} ./bin/hbase backup repair {code} > Please make sure that backup is enabled on the cluster. To enable backup, in > hbase-site.xml, set: > hbase.backup.enable=true > hbase.master.logcleaner.plugins=YOUR_PLUGINS,org.apache.hadoop.hbase.backup.master.BackupLogCleaner > hbase.procedure.master.classes=YOUR_CLASSES,org.apache.hadoop.hbase.backup.master.LogRollMasterProcedureManager > hbase.procedure.regionserver.classes=YOUR_CLASSES,org.apache.hadoop.hbase.backup.regionserver.LogRollRegionServerProcedureManager > and restart the cluster > 2017-09-27 09:28:38,119 INFO [main] metrics.MetricRegistries: Loaded > MetricRegistries class > org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl > REPAIR status: no failed sessions found. Checking failed delete backup > operation ...
> No failed backup DELETE operation found > 2017-09-27 09:28:38,680 ERROR [main] impl.BackupSystemTable: Snapshot > snapshot_backup_system does not exists > No failed backup MERGE operation found > 2017-09-27 09:28:38,682 ERROR [main] impl.BackupSystemTable: Snapshot > snapshot_backup_system does not exists > {code} ./bin/hbase backup create full hdfs://localhost:8020/ -t test1 {code} > Please make sure that backup is enabled on the cluster. To enable backup, in > hbase-site.xml, set: > hbase.backup.enable=true > hbase.master.logcleaner.plugins=YOUR_PLUGINS,org.apache.hadoop.hbase.backup.master.BackupLogCleaner > hbase.procedure.master.classes=YOUR_CLASSES,org.apache.hadoop.hbase.backup.master.LogRollMasterProcedureManager > hbase.procedure.regionserver.classes=YOUR_CLASSES,org.apache.hadoop.hbase.backup.regionserver.LogRollRegionServerProcedureManager > and restart the cluster > 2017-09-27 09:29:00,837 INFO [main] metrics.MetricRegistries: Loaded > MetricRegistries class > org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl > 2017-09-27 09:29:01,458 ERROR [main] impl.BackupAdminImpl: There is an active > session already running > Backup session finished. 
Status: FAILURE > 2017-09-27 09:29:01,460 ERROR [main] backup.BackupDriver: Error running > command-line tool > java.io.IOException: There is an active backup exclusive operation > at > org.apache.hadoop.hbase.backup.impl.BackupSystemTable.startBackupExclusiveOperation(BackupSystemTable.java:584) > at > org.apache.hadoop.hbase.backup.impl.BackupManager.startBackupSession(BackupManager.java:373) > at > org.apache.hadoop.hbase.backup.impl.TableBackupClient.init(TableBackupClient.java:100) > at > org.apache.hadoop.hbase.backup.impl.TableBackupClient.<init>(TableBackupClient.java:78) > at > org.apache.hadoop.hbase.backup.impl.FullTableBackupClient.<init>(FullTableBackupClient.java:61) > at > org.apache.hadoop.hbase.backup.BackupClientFactory.create(BackupClientFactory.java:51) > at > org.apache.hadoop.hbase.backup.impl.BackupAdminImpl.backupTables(BackupAdminImpl.java:595) > at > org.apache.hadoop.hbase.backup.impl.BackupCommands$CreateCommand.execute(BackupCommands.java:336) > at > org.apache.hadoop.hbase.backup.BackupDriver.parseAndRun(BackupDriver.java:137) > at > org.apache.hadoop.hbase.backup.BackupDriver.doWork(BackupDriver.java:170) > at > org.apache.hadoop.hbase.backup.BackupDriver.run(BackupDriver.java:203) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) > at > org.apache.hadoop.hbase.backup.BackupDriver.main(BackupDriver.java:178) > {code} ./bin/hbase backup history {code} > Please make sure that backup is enabled on the cluster. To enable backup, in > hbase-site.xml, set: > hbase.backup.enable=true > hbase.master.logcleaner.plugins=YOUR_PLUGINS,org.apache.hadoop.hbase.backup.master.BackupLogCleaner > hbase.procedure.master.classes=YOUR_CLASSES,org.apache.hadoop.hbase.backup.master.LogRollMasterProcedureManager >
[jira] [Commented] (HBASE-18886) Backup command once cancelled not able to re-trigger backup again
[ https://issues.apache.org/jira/browse/HBASE-18886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181973#comment-16181973 ] Vishal Khandelwal commented on HBASE-18886: --- {code} delete 'backup:system', 'activesession' {code} But these should be part of repair. Here is the scenario : As soon as backup cmd started i killed and tried to restart. > Backup command onces cancelled not able re-trigger backup again > --- > > Key: HBASE-18886 > URL: https://issues.apache.org/jira/browse/HBASE-18886 > Project: HBase > Issue Type: Bug >Reporter: Vishal Khandelwal > > {code} ./bin/hbase backup repair {code} > Please make sure that backup is enabled on the cluster. To enable backup, in > hbase-site.xml, set: > hbase.backup.enable=true > hbase.master.logcleaner.plugins=YOUR_PLUGINS,org.apache.hadoop.hbase.backup.master.BackupLogCleaner > hbase.procedure.master.classes=YOUR_CLASSES,org.apache.hadoop.hbase.backup.master.LogRollMasterProcedureManager > hbase.procedure.regionserver.classes=YOUR_CLASSES,org.apache.hadoop.hbase.backup.regionserver.LogRollRegionServerProcedureManager > and restart the cluster > 2017-09-27 09:28:38,119 INFO [main] metrics.MetricRegistries: Loaded > MetricRegistries class > org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl > REPAIR status: no failed sessions found. Checking failed delete backup > operation ... > No failed backup DELETE operation found > 2017-09-27 09:28:38,680 ERROR [main] impl.BackupSystemTable: Snapshot > snapshot_backup_system does not exists > No failed backup MERGE operation found > 2017-09-27 09:28:38,682 ERROR [main] impl.BackupSystemTable: Snapshot > snapshot_backup_system does not exists > {code} ./bin/hbase backup create full hdfs://localhost:8020/ -t test1 {code} > Please make sure that backup is enabled on the cluster. 
To enable backup, in > hbase-site.xml, set: > hbase.backup.enable=true > hbase.master.logcleaner.plugins=YOUR_PLUGINS,org.apache.hadoop.hbase.backup.master.BackupLogCleaner > hbase.procedure.master.classes=YOUR_CLASSES,org.apache.hadoop.hbase.backup.master.LogRollMasterProcedureManager > hbase.procedure.regionserver.classes=YOUR_CLASSES,org.apache.hadoop.hbase.backup.regionserver.LogRollRegionServerProcedureManager > and restart the cluster > 2017-09-27 09:29:00,837 INFO [main] metrics.MetricRegistries: Loaded > MetricRegistries class > org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl > 2017-09-27 09:29:01,458 ERROR [main] impl.BackupAdminImpl: There is an active > session already running > Backup session finished. Status: FAILURE > 2017-09-27 09:29:01,460 ERROR [main] backup.BackupDriver: Error running > command-line tool > java.io.IOException: There is an active backup exclusive operation > at > org.apache.hadoop.hbase.backup.impl.BackupSystemTable.startBackupExclusiveOperation(BackupSystemTable.java:584) > at > org.apache.hadoop.hbase.backup.impl.BackupManager.startBackupSession(BackupManager.java:373) > at > org.apache.hadoop.hbase.backup.impl.TableBackupClient.init(TableBackupClient.java:100) > at > org.apache.hadoop.hbase.backup.impl.TableBackupClient.(TableBackupClient.java:78) > at > org.apache.hadoop.hbase.backup.impl.FullTableBackupClient.(FullTableBackupClient.java:61) > at > org.apache.hadoop.hbase.backup.BackupClientFactory.create(BackupClientFactory.java:51) > at > org.apache.hadoop.hbase.backup.impl.BackupAdminImpl.backupTables(BackupAdminImpl.java:595) > at > org.apache.hadoop.hbase.backup.impl.BackupCommands$CreateCommand.execute(BackupCommands.java:336) > at > org.apache.hadoop.hbase.backup.BackupDriver.parseAndRun(BackupDriver.java:137) > at > org.apache.hadoop.hbase.backup.BackupDriver.doWork(BackupDriver.java:170) > at > org.apache.hadoop.hbase.backup.BackupDriver.run(BackupDriver.java:203) > at 
org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) > at > org.apache.hadoop.hbase.backup.BackupDriver.main(BackupDriver.java:178) > {code} ./bin/hbase backup history {code} > Please make sure that backup is enabled on the cluster. To enable backup, in > hbase-site.xml, set: > hbase.backup.enable=true > hbase.master.logcleaner.plugins=YOUR_PLUGINS,org.apache.hadoop.hbase.backup.master.BackupLogCleaner > hbase.procedure.master.classes=YOUR_CLASSES,org.apache.hadoop.hbase.backup.master.LogRollMasterProcedureManager > hbase.procedure.regionserver.classes=YOUR_CLASSES,org.apache.hadoop.hbase.backup.regionserver.LogRollRegionServerProcedureManager > and restart the cluster > 2017-09-27 09:30:22,218 INFO [main] metrics.MetricRegistries: Loaded > MetricRegistries class > org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl > {ID=backup_1506427470546,Type=FULL,Tables={test2,test1},State=COMPLETE,Start >
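The failure pattern in this report reads like a lock-file problem: starting a backup writes an exclusive-session marker ('activesession', value 'yes', as the scan output elsewhere in this thread shows) into the backup:system table, a client killed mid-run never removes it, and every later backup create then fails with "There is an active backup exclusive operation". A toy Python model of that life cycle (names and mechanics are illustrative only; the real logic lives in BackupSystemTable.startBackupExclusiveOperation and is not reproduced here):

```python
class BackupSystemTableModel:
    """Toy model of the exclusive-session marker described in this issue.

    A real cluster stores the marker as row 'activesession' in the
    backup:system table; here a dict stands in for that table.
    """

    ACTIVE_SESSION_ROW = "activesession"

    def __init__(self):
        self.rows = {}

    def start_backup_exclusive_operation(self):
        # Fail if a marker already exists, mirroring the reported error.
        if self.ACTIVE_SESSION_ROW in self.rows:
            raise IOError("There is an active backup exclusive operation")
        self.rows[self.ACTIVE_SESSION_ROW] = "yes"

    def finish_backup_session(self):
        # Normal completion removes the marker.
        self.rows.pop(self.ACTIVE_SESSION_ROW, None)

    def repair(self):
        # What the reporter asks for: 'backup repair' should also clear
        # a marker orphaned by a killed client (today the manual fix is
        # delete 'backup:system', 'activesession' in the shell).
        self.finish_backup_session()


table = BackupSystemTableModel()
table.start_backup_exclusive_operation()      # backup starts...
# ...client is killed here, so finish_backup_session() never runs
try:
    table.start_backup_exclusive_operation()  # re-trigger fails
except IOError as e:
    print(e)                                  # prints the exclusive-operation error
table.repair()                                # clears the orphaned marker
table.start_backup_exclusive_operation()      # now succeeds
```

This is only a sketch of the observed behavior, but it shows why deleting the row unblocks the next session and why folding that cleanup into the repair command would fix the reported scenario.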
[jira] [Commented] (HBASE-18843) Add DistCp support to incremental backup with bulk loading
[ https://issues.apache.org/jira/browse/HBASE-18843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181970#comment-16181970 ] Hadoop QA commented on HBASE-18843: --- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 57s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 10s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 3m 52s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 27s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 3m 22s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 32m 30s{color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 2s{color} | {color:green} hbase-backup in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 10s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 52m 35s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:5d60123 | | JIRA Issue | HBASE-18843 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12889215/HBASE-18843-v5.patch | | Optional Tests | asflicense shadedjars javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux 6c9d7baf3b7e 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / 845b83b | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC3 | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/8813/testReport/ | | modules | C: hbase-backup U: hbase-backup | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/8813/console | | Powered by | Apache Yetus 0.4.0 http://yetus.apache.org | This message was automatically generated. > Add DistCp support to incremental backup with bulk loading >
[jira] [Commented] (HBASE-18886) Backup command once cancelled not able to re-trigger backup again
[ https://issues.apache.org/jira/browse/HBASE-18886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181957#comment-16181957 ] Vishal Khandelwal commented on HBASE-18886: --- Done. Here is the scan output: {code} hbase(main):019:0> scan 'backup:system', {COLUMNS => ['session:c'], CACHE_BLOCKS => false} {code} ROW COLUMN+CELL activesession: column=session:c, timestamp=1506484603467, value=yes 1 row(s) Took 0.0038 seconds > Backup command onces cancelled not able re-trigger backup again > --- > > Key: HBASE-18886 > URL: https://issues.apache.org/jira/browse/HBASE-18886 > Project: HBase > Issue Type: Bug >Reporter: Vishal Khandelwal
[jira] [Updated] (HBASE-18886) Backup command once cancelled not able to re-trigger backup again
[ https://issues.apache.org/jira/browse/HBASE-18886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vishal Khandelwal updated HBASE-18886: -- Description: {code} ./bin/hbase backup repair {code} Please make sure that backup is enabled on the cluster. To enable backup, in hbase-site.xml, set: hbase.backup.enable=true hbase.master.logcleaner.plugins=YOUR_PLUGINS,org.apache.hadoop.hbase.backup.master.BackupLogCleaner hbase.procedure.master.classes=YOUR_CLASSES,org.apache.hadoop.hbase.backup.master.LogRollMasterProcedureManager hbase.procedure.regionserver.classes=YOUR_CLASSES,org.apache.hadoop.hbase.backup.regionserver.LogRollRegionServerProcedureManager and restart the cluster 2017-09-27 09:28:38,119 INFO [main] metrics.MetricRegistries: Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl REPAIR status: no failed sessions found. Checking failed delete backup operation ... No failed backup DELETE operation found 2017-09-27 09:28:38,680 ERROR [main] impl.BackupSystemTable: Snapshot snapshot_backup_system does not exists No failed backup MERGE operation found 2017-09-27 09:28:38,682 ERROR [main] impl.BackupSystemTable: Snapshot snapshot_backup_system does not exists {code} ./bin/hbase backup create full hdfs://localhost:8020/ -t test1 {code} Please make sure that backup is enabled on the cluster. 
To enable backup, in hbase-site.xml, set: hbase.backup.enable=true hbase.master.logcleaner.plugins=YOUR_PLUGINS,org.apache.hadoop.hbase.backup.master.BackupLogCleaner hbase.procedure.master.classes=YOUR_CLASSES,org.apache.hadoop.hbase.backup.master.LogRollMasterProcedureManager hbase.procedure.regionserver.classes=YOUR_CLASSES,org.apache.hadoop.hbase.backup.regionserver.LogRollRegionServerProcedureManager and restart the cluster 2017-09-27 09:29:00,837 INFO [main] metrics.MetricRegistries: Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl 2017-09-27 09:29:01,458 ERROR [main] impl.BackupAdminImpl: There is an active session already running Backup session finished. Status: FAILURE 2017-09-27 09:29:01,460 ERROR [main] backup.BackupDriver: Error running command-line tool java.io.IOException: There is an active backup exclusive operation at org.apache.hadoop.hbase.backup.impl.BackupSystemTable.startBackupExclusiveOperation(BackupSystemTable.java:584) at org.apache.hadoop.hbase.backup.impl.BackupManager.startBackupSession(BackupManager.java:373) at org.apache.hadoop.hbase.backup.impl.TableBackupClient.init(TableBackupClient.java:100) at org.apache.hadoop.hbase.backup.impl.TableBackupClient.(TableBackupClient.java:78) at org.apache.hadoop.hbase.backup.impl.FullTableBackupClient.(FullTableBackupClient.java:61) at org.apache.hadoop.hbase.backup.BackupClientFactory.create(BackupClientFactory.java:51) at org.apache.hadoop.hbase.backup.impl.BackupAdminImpl.backupTables(BackupAdminImpl.java:595) at org.apache.hadoop.hbase.backup.impl.BackupCommands$CreateCommand.execute(BackupCommands.java:336) at org.apache.hadoop.hbase.backup.BackupDriver.parseAndRun(BackupDriver.java:137) at org.apache.hadoop.hbase.backup.BackupDriver.doWork(BackupDriver.java:170) at org.apache.hadoop.hbase.backup.BackupDriver.run(BackupDriver.java:203) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) at 
org.apache.hadoop.hbase.backup.BackupDriver.main(BackupDriver.java:178) {code} ./bin/hbase backup history {code} Please make sure that backup is enabled on the cluster. To enable backup, in hbase-site.xml, set: hbase.backup.enable=true hbase.master.logcleaner.plugins=YOUR_PLUGINS,org.apache.hadoop.hbase.backup.master.BackupLogCleaner hbase.procedure.master.classes=YOUR_CLASSES,org.apache.hadoop.hbase.backup.master.LogRollMasterProcedureManager hbase.procedure.regionserver.classes=YOUR_CLASSES,org.apache.hadoop.hbase.backup.regionserver.LogRollRegionServerProcedureManager and restart the cluster 2017-09-27 09:30:22,218 INFO [main] metrics.MetricRegistries: Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl {ID=backup_1506427470546,Type=FULL,Tables={test2,test1},State=COMPLETE,Start time=Tue Sep 26 17:34:31 IST 2017,End time=Tue Sep 26 17:34:54 IST 2017,Progress=100%} {ID=backup_1506427304040,Type=FULL,Tables={test2,test1},State=COMPLETE,Start time=Tue Sep 26 17:31:45 IST 2017,End time=Tue Sep 26 17:32:08 IST 2017,Progress=100%} {ID=backup_1506426863567,Type=INCREMENTAL,Tables={test2,test1},State=FAILED,Start time=Tue Sep 26 17:24:24 IST 2017,Failed message=Failed copy from hdfs://localhost:8020/backup/.tmp/backup_1506426863567 to hdfs://localhost:8020/backup/backup_1506426863567/WALs,Progress=0%} {ID=backup_1506426677165,Type=INCREMENTAL,Tables={test2,test1},State=FAILED,Start time=Tue Sep 26 17:21:18 IST 2017,Failed
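The history records above follow a brace-delimited key=value layout, with nested braces only in the Tables field. For quick triage of failed sessions it can be split mechanically; here is a throwaway parser whose format is inferred from this printed output, not from any official HBase API:

```python
def parse_backup_record(line: str) -> dict:
    """Parse one '{ID=...,Type=...,Tables={t1,t2},...}' history record."""
    body = line.strip()
    assert body.startswith("{") and body.endswith("}")
    body = body[1:-1]
    # Split on commas that are not nested inside an inner {...} group
    # such as Tables={test2,test1}.
    fields, depth, start = [], 0, 0
    for i, ch in enumerate(body):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
        elif ch == "," and depth == 0:
            fields.append(body[start:i])
            start = i + 1
    fields.append(body[start:])
    record = {}
    for field in fields:
        key, _, value = field.partition("=")
        # Brace-wrapped values (the table list) become Python lists.
        record[key] = value[1:-1].split(",") if value.startswith("{") else value
    return record


rec = parse_backup_record(
    "{ID=backup_1506427470546,Type=FULL,Tables={test2,test1},"
    "State=COMPLETE,Start time=Tue Sep 26 17:34:31 IST 2017,Progress=100%}"
)
print(rec["State"], rec["Tables"])
```

Keys keep their printed spelling, including embedded spaces such as "Start time"; a real tool would of course query the backup system table instead of scraping console output.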
[jira] [Updated] (HBASE-18886) Backup command once cancelled not able to re-trigger backup again
[ https://issues.apache.org/jira/browse/HBASE-18886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vishal Khandelwal updated HBASE-18886: -- Description: {code} ./bin/hbase backup repair {code} Please make sure that backup is enabled on the cluster. To enable backup, in hbase-site.xml, set: hbase.backup.enable=true hbase.master.logcleaner.plugins=YOUR_PLUGINS,org.apache.hadoop.hbase.backup.master.BackupLogCleaner hbase.procedure.master.classes=YOUR_CLASSES,org.apache.hadoop.hbase.backup.master.LogRollMasterProcedureManager hbase.procedure.regionserver.classes=YOUR_CLASSES,org.apache.hadoop.hbase.backup.regionserver.LogRollRegionServerProcedureManager and restart the cluster 2017-09-27 09:28:38,119 INFO [main] metrics.MetricRegistries: Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl REPAIR status: no failed sessions found. Checking failed delete backup operation ... No failed backup DELETE operation found 2017-09-27 09:28:38,680 ERROR [main] impl.BackupSystemTable: Snapshot snapshot_backup_system does not exists No failed backup MERGE operation found 2017-09-27 09:28:38,682 ERROR [main] impl.BackupSystemTable: Snapshot snapshot_backup_system does not exists **./bin/hbase backup create full hdfs://localhost:8020/ -t test1** Please make sure that backup is enabled on the cluster. 
To enable backup, in hbase-site.xml, set: hbase.backup.enable=true hbase.master.logcleaner.plugins=YOUR_PLUGINS,org.apache.hadoop.hbase.backup.master.BackupLogCleaner hbase.procedure.master.classes=YOUR_CLASSES,org.apache.hadoop.hbase.backup.master.LogRollMasterProcedureManager hbase.procedure.regionserver.classes=YOUR_CLASSES,org.apache.hadoop.hbase.backup.regionserver.LogRollRegionServerProcedureManager and restart the cluster 2017-09-27 09:29:00,837 INFO [main] metrics.MetricRegistries: Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl 2017-09-27 09:29:01,458 ERROR [main] impl.BackupAdminImpl: There is an active session already running Backup session finished. Status: FAILURE 2017-09-27 09:29:01,460 ERROR [main] backup.BackupDriver: Error running command-line tool java.io.IOException: There is an active backup exclusive operation at org.apache.hadoop.hbase.backup.impl.BackupSystemTable.startBackupExclusiveOperation(BackupSystemTable.java:584) at org.apache.hadoop.hbase.backup.impl.BackupManager.startBackupSession(BackupManager.java:373) at org.apache.hadoop.hbase.backup.impl.TableBackupClient.init(TableBackupClient.java:100) at org.apache.hadoop.hbase.backup.impl.TableBackupClient.(TableBackupClient.java:78) at org.apache.hadoop.hbase.backup.impl.FullTableBackupClient.(FullTableBackupClient.java:61) at org.apache.hadoop.hbase.backup.BackupClientFactory.create(BackupClientFactory.java:51) at org.apache.hadoop.hbase.backup.impl.BackupAdminImpl.backupTables(BackupAdminImpl.java:595) at org.apache.hadoop.hbase.backup.impl.BackupCommands$CreateCommand.execute(BackupCommands.java:336) at org.apache.hadoop.hbase.backup.BackupDriver.parseAndRun(BackupDriver.java:137) at org.apache.hadoop.hbase.backup.BackupDriver.doWork(BackupDriver.java:170) at org.apache.hadoop.hbase.backup.BackupDriver.run(BackupDriver.java:203) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) at 
org.apache.hadoop.hbase.backup.BackupDriver.main(BackupDriver.java:178) *History* Please make sure that backup is enabled on the cluster. To enable backup, in hbase-site.xml, set: hbase.backup.enable=true hbase.master.logcleaner.plugins=YOUR_PLUGINS,org.apache.hadoop.hbase.backup.master.BackupLogCleaner hbase.procedure.master.classes=YOUR_CLASSES,org.apache.hadoop.hbase.backup.master.LogRollMasterProcedureManager hbase.procedure.regionserver.classes=YOUR_CLASSES,org.apache.hadoop.hbase.backup.regionserver.LogRollRegionServerProcedureManager and restart the cluster 2017-09-27 09:30:22,218 INFO [main] metrics.MetricRegistries: Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl {ID=backup_1506427470546,Type=FULL,Tables={test2,test1},State=COMPLETE,Start time=Tue Sep 26 17:34:31 IST 2017,End time=Tue Sep 26 17:34:54 IST 2017,Progress=100%} {ID=backup_1506427304040,Type=FULL,Tables={test2,test1},State=COMPLETE,Start time=Tue Sep 26 17:31:45 IST 2017,End time=Tue Sep 26 17:32:08 IST 2017,Progress=100%} {ID=backup_1506426863567,Type=INCREMENTAL,Tables={test2,test1},State=FAILED,Start time=Tue Sep 26 17:24:24 IST 2017,Failed message=Failed copy from hdfs://localhost:8020/backup/.tmp/backup_1506426863567 to hdfs://localhost:8020/backup/backup_1506426863567/WALs,Progress=0%} {ID=backup_1506426677165,Type=INCREMENTAL,Tables={test2,test1},State=FAILED,Start time=Tue Sep 26 17:21:18 IST 2017,Failed message=Failed copy from
[jira] [Commented] (HBASE-18886) Backup command once cancelled not able to re-trigger backup again
[ https://issues.apache.org/jira/browse/HBASE-18886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181950#comment-16181950 ] Ted Yu commented on HBASE-18886: Can you separate commands from command output? You can enclose command output with \{code\} > Backup command onces cancelled not able re-trigger backup again > --- > > Key: HBASE-18886 > URL: https://issues.apache.org/jira/browse/HBASE-18886 > Project: HBase > Issue Type: Bug >Reporter: Vishal Khandelwal
[jira] [Commented] (HBASE-17732) Coprocessor Design Improvements
[ https://issues.apache.org/jira/browse/HBASE-17732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181949#comment-16181949 ] Hadoop QA commented on HBASE-17732: --- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 91 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 33s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 38s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 5s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 4m 5s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 24s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 10m 0s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 15s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 24s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 3m 27s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 32m 50s{color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 7m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 35s{color} | {color:green} hbase-client in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 89m 50s{color} | {color:green} hbase-server in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 44s{color} | {color:green} hbase-mapreduce in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 53s{color} | {color:green} hbase-thrift in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 30s{color} | {color:green} hbase-rsgroup in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 16s{color} | {color:green} hbase-endpoint in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m 7s{color} | {color:green} hbase-backup in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 29s{color} | {color:green} hbase-it in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 57s{color} | {color:green}
[jira] [Updated] (HBASE-18826) Use HStore instead of Store in our own code base and remove unnecessary methods in Store interface
[ https://issues.apache.org/jira/browse/HBASE-18826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-18826: -- Attachment: HBASE-18826-v3.patch Added a UT for the age-related methods of Store: TestHStore.testAge. > Use HStore instead of Store in our own code base and remove unnecessary > methods in Store interface > -- > > Key: HBASE-18826 > URL: https://issues.apache.org/jira/browse/HBASE-18826 > Project: HBase > Issue Type: Sub-task > Components: Coprocessors >Reporter: Duo Zhang >Assignee: Duo Zhang > Fix For: 2.0.0-alpha-4 > > Attachments: HBASE-18826.patch, HBASE-18826-v1.patch, > HBASE-18826-v1.patch, HBASE-18826-v2.patch, HBASE-18826-v3.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (HBASE-18886) Backup command once cancelled is not able to re-trigger backup again
Vishal Khandelwal created HBASE-18886: - Summary: Backup command once cancelled is not able to re-trigger backup again Key: HBASE-18886 URL: https://issues.apache.org/jira/browse/HBASE-18886 Project: HBase Issue Type: Bug Reporter: Vishal Khandelwal *Repair* Please make sure that backup is enabled on the cluster. To enable backup, in hbase-site.xml, set: hbase.backup.enable=true hbase.master.logcleaner.plugins=YOUR_PLUGINS,org.apache.hadoop.hbase.backup.master.BackupLogCleaner hbase.procedure.master.classes=YOUR_CLASSES,org.apache.hadoop.hbase.backup.master.LogRollMasterProcedureManager hbase.procedure.regionserver.classes=YOUR_CLASSES,org.apache.hadoop.hbase.backup.regionserver.LogRollRegionServerProcedureManager and restart the cluster 2017-09-27 09:28:38,119 INFO [main] metrics.MetricRegistries: Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl REPAIR status: no failed sessions found. Checking failed delete backup operation ... No failed backup DELETE operation found 2017-09-27 09:28:38,680 ERROR [main] impl.BackupSystemTable: Snapshot snapshot_backup_system does not exists No failed backup MERGE operation found 2017-09-27 09:28:38,682 ERROR [main] impl.BackupSystemTable: Snapshot snapshot_backup_system does not exists **./bin/hbase backup create full hdfs://localhost:8020/ -t test1** Please make sure that backup is enabled on the cluster. 
To enable backup, in hbase-site.xml, set: hbase.backup.enable=true hbase.master.logcleaner.plugins=YOUR_PLUGINS,org.apache.hadoop.hbase.backup.master.BackupLogCleaner hbase.procedure.master.classes=YOUR_CLASSES,org.apache.hadoop.hbase.backup.master.LogRollMasterProcedureManager hbase.procedure.regionserver.classes=YOUR_CLASSES,org.apache.hadoop.hbase.backup.regionserver.LogRollRegionServerProcedureManager and restart the cluster 2017-09-27 09:29:00,837 INFO [main] metrics.MetricRegistries: Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl 2017-09-27 09:29:01,458 ERROR [main] impl.BackupAdminImpl: There is an active session already running Backup session finished. Status: FAILURE 2017-09-27 09:29:01,460 ERROR [main] backup.BackupDriver: Error running command-line tool java.io.IOException: There is an active backup exclusive operation at org.apache.hadoop.hbase.backup.impl.BackupSystemTable.startBackupExclusiveOperation(BackupSystemTable.java:584) at org.apache.hadoop.hbase.backup.impl.BackupManager.startBackupSession(BackupManager.java:373) at org.apache.hadoop.hbase.backup.impl.TableBackupClient.init(TableBackupClient.java:100) at org.apache.hadoop.hbase.backup.impl.TableBackupClient.<init>(TableBackupClient.java:78) at org.apache.hadoop.hbase.backup.impl.FullTableBackupClient.<init>(FullTableBackupClient.java:61) at org.apache.hadoop.hbase.backup.BackupClientFactory.create(BackupClientFactory.java:51) at org.apache.hadoop.hbase.backup.impl.BackupAdminImpl.backupTables(BackupAdminImpl.java:595) at org.apache.hadoop.hbase.backup.impl.BackupCommands$CreateCommand.execute(BackupCommands.java:336) at org.apache.hadoop.hbase.backup.BackupDriver.parseAndRun(BackupDriver.java:137) at org.apache.hadoop.hbase.backup.BackupDriver.doWork(BackupDriver.java:170) at org.apache.hadoop.hbase.backup.BackupDriver.run(BackupDriver.java:203) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) at 
org.apache.hadoop.hbase.backup.BackupDriver.main(BackupDriver.java:178) *History* Please make sure that backup is enabled on the cluster. To enable backup, in hbase-site.xml, set: hbase.backup.enable=true hbase.master.logcleaner.plugins=YOUR_PLUGINS,org.apache.hadoop.hbase.backup.master.BackupLogCleaner hbase.procedure.master.classes=YOUR_CLASSES,org.apache.hadoop.hbase.backup.master.LogRollMasterProcedureManager hbase.procedure.regionserver.classes=YOUR_CLASSES,org.apache.hadoop.hbase.backup.regionserver.LogRollRegionServerProcedureManager and restart the cluster 2017-09-27 09:30:22,218 INFO [main] metrics.MetricRegistries: Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl {ID=backup_1506427470546,Type=FULL,Tables={test2,test1},State=COMPLETE,Start time=Tue Sep 26 17:34:31 IST 2017,End time=Tue Sep 26 17:34:54 IST 2017,Progress=100%} {ID=backup_1506427304040,Type=FULL,Tables={test2,test1},State=COMPLETE,Start time=Tue Sep 26 17:31:45 IST 2017,End time=Tue Sep 26 17:32:08 IST 2017,Progress=100%} {ID=backup_1506426863567,Type=INCREMENTAL,Tables={test2,test1},State=FAILED,Start time=Tue Sep 26 17:24:24 IST 2017,Failed message=Failed copy from hdfs://localhost:8020/backup/.tmp/backup_1506426863567 to hdfs://localhost:8020/backup/backup_1506426863567/WALs,Progress=0%}
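The enablement steps the CLI keeps printing above map onto hbase-site.xml as follows; a minimal sketch, with the YOUR_PLUGINS/YOUR_CLASSES placeholders (taken verbatim from the output) standing in for whatever values the site file already carries:

```xml
<!-- hbase-site.xml fragment enabling the backup feature, per the CLI output
     above. Replace YOUR_PLUGINS / YOUR_CLASSES with any plugins or classes
     already configured, then restart the cluster. -->
<property>
  <name>hbase.backup.enable</name>
  <value>true</value>
</property>
<property>
  <name>hbase.master.logcleaner.plugins</name>
  <value>YOUR_PLUGINS,org.apache.hadoop.hbase.backup.master.BackupLogCleaner</value>
</property>
<property>
  <name>hbase.procedure.master.classes</name>
  <value>YOUR_CLASSES,org.apache.hadoop.hbase.backup.master.LogRollMasterProcedureManager</value>
</property>
<property>
  <name>hbase.procedure.regionserver.classes</name>
  <value>YOUR_CLASSES,org.apache.hadoop.hbase.backup.regionserver.LogRollRegionServerProcedureManager</value>
</property>
```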
[jira] [Updated] (HBASE-18843) Add DistCp support to incremental backup with bulk loading
[ https://issues.apache.org/jira/browse/HBASE-18843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-18843: --- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 2.0.0-alpha-4 Status: Resolved (was: Patch Available) Thanks for the patch, Vlad. > Add DistCp support to incremental backup with bulk loading > -- > > Key: HBASE-18843 > URL: https://issues.apache.org/jira/browse/HBASE-18843 > Project: HBase > Issue Type: Improvement >Reporter: Vladimir Rodionov >Assignee: Vladimir Rodionov > Fix For: 2.0.0-alpha-4 > > Attachments: HBASE-18843-v1.patch, HBASE-18843-v2.patch, > HBASE-18843-v4.patch, HBASE-18843-v5.patch > > > Currently, we copy bulk loaded files to backup one-by-one on the client side > (where backup create runs). This has to be replaced with DistCp copying. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
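The change described above (one distributed copy job instead of per-file client-side copies) can be sketched in miniature. The helper below only assembles the source/target argument list a DistCp-style invocation takes; class and method names are illustrative, not the patch's actual code, and the real org.apache.hadoop.tools.DistCp API is not reproduced here:

```java
import java.util.Arrays;
import java.util.List;

public class BulkLoadCopyPlan {
    // Build one argument list (sources..., target) for a single DistCp-style
    // job, replacing a loop that copies each bulk-loaded HFile separately
    // from the client where "backup create" runs.
    static String[] distcpArgs(List<String> sources, String target) {
        String[] args = new String[sources.size() + 1];
        for (int i = 0; i < sources.size(); i++) {
            args[i] = sources.get(i);
        }
        args[sources.size()] = target; // DistCp convention: target path last
        return args;
    }

    public static void main(String[] args) {
        String[] a = distcpArgs(
            Arrays.asList("hdfs://nn/bulk/f1", "hdfs://nn/bulk/f2"),
            "hdfs://nn/backup/backup_123/");
        System.out.println(String.join(" ", a));
    }
}
```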
[jira] [Commented] (HBASE-18843) Add DistCp support to incremental backup with bulk loading
[ https://issues.apache.org/jira/browse/HBASE-18843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181940#comment-16181940 ] Ted Yu commented on HBASE-18843: +1 > Add DistCp support to incremental backup with bulk loading > -- > > Key: HBASE-18843 > URL: https://issues.apache.org/jira/browse/HBASE-18843 > Project: HBase > Issue Type: Improvement >Reporter: Vladimir Rodionov >Assignee: Vladimir Rodionov > Attachments: HBASE-18843-v1.patch, HBASE-18843-v2.patch, > HBASE-18843-v4.patch, HBASE-18843-v5.patch > > > Currently, we copy bulk loaded files to backup one-by-one on the client side > (where backup create runs). This has to be replaced with DistCp copying. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-18880) Failed to start rest server if the value of hbase.rest.threads.max is too small.
[ https://issues.apache.org/jira/browse/HBASE-18880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181939#comment-16181939 ] Guangxu Cheng commented on HBASE-18880: --- Thanks boss. Thanks [~yuzhih...@gmail.com] [~busbey] for reviewing. :) > Failed to start rest server if the value of hbase.rest.threads.max is too > small. > > > Key: HBASE-18880 > URL: https://issues.apache.org/jira/browse/HBASE-18880 > Project: HBase > Issue Type: Bug > Components: REST >Affects Versions: 3.0.0, 2.0.0-alpha-4 >Reporter: Guangxu Cheng >Assignee: Guangxu Cheng >Priority: Critical > Fix For: 2.0.0-alpha-4 > > Attachments: HBASE-18880.master.v0.patch, > hbase-hbase-rest-locolhost.log, hbase-hbase-rest-locolhost.out, > jstack-5750.log > > > After HBASE-18224, Jetty has been updated to 9.4.6, and it requires more > threads to start up. > If the value of hbase.rest.threads.max is too small, the rest server will > fail to start. > What I observed was as follows: > 1. The process did not exit. (At the beginning, I thought the rest server had > started normally because the process exists.) > 2. Can't connect to the rest server, and I didn't find any exception log in > ***.log. > 3. The main thread has exited (jstack log). > 4. Found the exception information from ***.out. > {code} > java.lang.IllegalStateException: Insufficient threads: max=5 < > needed(acceptors=1 + selectors=8 + request=1) > at org.eclipse.jetty.server.Server.doStart(Server.java:414) > at > org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68) > at org.apache.hadoop.hbase.rest.RESTServer.main(RESTServer.java:360) > {code} > I think the process should exit and log the information in ***.log when this > happens, so that the user can directly discover that the rest server is abnormal. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
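The arithmetic behind Jetty's refusal is visible in the exception text quoted above; a small sketch of that check, with the formula and sample numbers taken from the logged message rather than from Jetty's source:

```java
public class JettyThreadBudget {
    // Jetty 9.4 aborts startup when the pool's max threads is below the
    // budget it reports: acceptors + selectors + 1 request thread. The
    // sample numbers mirror the logged message:
    //   "Insufficient threads: max=5 < needed(acceptors=1 + selectors=8 + request=1)"
    static int neededThreads(int acceptors, int selectors) {
        return acceptors + selectors + 1; // +1 for a thread to serve a request
    }

    public static void main(String[] args) {
        int needed = neededThreads(1, 8);
        System.out.println("needed=" + needed + " vs hbase.rest.threads.max=5: "
            + (5 >= needed ? "starts" : "fails to start"));
    }
}
```

So with hbase.rest.threads.max=5 the budget of 10 cannot be met, which is exactly the failure the reporter hit.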
[jira] [Updated] (HBASE-18885) HFileOutputFormat2 hardcodes default FileOutputCommitter
[ https://issues.apache.org/jira/browse/HBASE-18885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shaofeng SHI updated HBASE-18885: - Attachment: HBASE-18885.branch-1.001.patch > HFileOutputFormat2 hardcodes default FileOutputCommitter > > > Key: HBASE-18885 > URL: https://issues.apache.org/jira/browse/HBASE-18885 > Project: HBase > Issue Type: Bug > Components: mapreduce >Reporter: Shaofeng SHI >Assignee: Shaofeng SHI > Attachments: HBASE-18885.branch-1.001.patch, > HBASE-18885.master.001.patch > > > Apache Kylin uses HBase's HFileOutputFormat2.java to configure the MR job. > The original report is in KYLIN-2788 [1]. After some investigation, we > found this class always uses the default "FileOutputCommitter", see [2], > regardless of the job's configuration; so it always writes to the "_temporary" > folder. Since AWS EMR is configured to use DirectOutputCommitter for S3, > this problem occurs: Hadoop expects to see the files directly under the output > path, while the RecordWriter generates them in the "_temporary" folder. This > caused no data to be loaded into the HTable. > This problem seems to exist in all versions so far. > [1] https://issues.apache.org/jira/browse/KYLIN-2788 > [2] > https://github.com/apache/hbase/blob/master/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat2.java#L193 -- This message was sent by Atlassian JIRA (v6.4.14#64029)
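The fix direction described in the report, reading the committer from the job configuration rather than hardcoding FileOutputCommitter, can be sketched with a plain Map standing in for the Hadoop Configuration. The property key below is a hypothetical illustration, not necessarily the one the patch consults:

```java
import java.util.HashMap;
import java.util.Map;

public class CommitterChoice {
    // Default that HFileOutputFormat2 currently hardcodes, per the report above.
    static final String DEFAULT_COMMITTER =
        "org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter";
    // Hypothetical key name; the real fix would honor the MR job's configured
    // committer (e.g. DirectOutputCommitter on EMR/S3) instead of ignoring it.
    static final String COMMITTER_KEY = "mapreduce.job.outputcommitter.class";

    static String committerClass(Map<String, String> conf) {
        return conf.getOrDefault(COMMITTER_KEY, DEFAULT_COMMITTER);
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        System.out.println(committerClass(conf)); // falls back to the default
        conf.put(COMMITTER_KEY, "com.example.DirectOutputCommitter");
        System.out.println(committerClass(conf)); // respects the job's setting
    }
}
```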
[jira] [Commented] (HBASE-18885) HFileOutputFormat2 hardcodes default FileOutputCommitter
[ https://issues.apache.org/jira/browse/HBASE-18885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181936#comment-16181936 ] Shaofeng SHI commented on HBASE-18885: -- 'branch-1' is a little different from the master branch; let me generate another patch for it. > HFileOutputFormat2 hardcodes default FileOutputCommitter > > > Key: HBASE-18885 > URL: https://issues.apache.org/jira/browse/HBASE-18885 > Project: HBase > Issue Type: Bug > Components: mapreduce >Reporter: Shaofeng SHI >Assignee: Shaofeng SHI > Attachments: HBASE-18885.master.001.patch > > > Apache Kylin uses HBase's HFileOutputFormat2.java to configure the MR job. > The original report is in KYLIN-2788 [1]. After some investigation, we > found this class always uses the default "FileOutputCommitter", see [2], > regardless of the job's configuration; so it always writes to the "_temporary" > folder. Since AWS EMR is configured to use DirectOutputCommitter for S3, > this problem occurs: Hadoop expects to see the files directly under the output > path, while the RecordWriter generates them in the "_temporary" folder. This > caused no data to be loaded into the HTable. > This problem seems to exist in all versions so far. > [1] https://issues.apache.org/jira/browse/KYLIN-2788 > [2] > https://github.com/apache/hbase/blob/master/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat2.java#L193 -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-18880) Failed to start rest server if the value of hbase.rest.threads.max is too small.
[ https://issues.apache.org/jira/browse/HBASE-18880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-18880: -- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: (was: 2.0.0-beta-1) 2.0.0-alpha-4 Status: Resolved (was: Patch Available) Pushed to branch-2 and master. Thank you for the patch [~andrewcheng] > Failed to start rest server if the value of hbase.rest.threads.max is too > small. > > > Key: HBASE-18880 > URL: https://issues.apache.org/jira/browse/HBASE-18880 > Project: HBase > Issue Type: Bug > Components: REST >Affects Versions: 3.0.0, 2.0.0-alpha-4 >Reporter: Guangxu Cheng >Assignee: Guangxu Cheng >Priority: Critical > Fix For: 2.0.0-alpha-4 > > Attachments: HBASE-18880.master.v0.patch, > hbase-hbase-rest-locolhost.log, hbase-hbase-rest-locolhost.out, > jstack-5750.log > > > After HBASE-18224, Jetty has been updated to 9.4.6, and it requires more > threads to start up. > If the value of hbase.rest.threads.max is too small, the rest server will > fail to start. > What I observed was as follows: > 1. The process did not exit. (At the beginning, I thought the rest server had > started normally because the process exists.) > 2. Can't connect to the rest server, and I didn't find any exception log in > ***.log. > 3. The main thread has exited (jstack log). > 4. Found the exception information from ***.out. > {code} > java.lang.IllegalStateException: Insufficient threads: max=5 < > needed(acceptors=1 + selectors=8 + request=1) > at org.eclipse.jetty.server.Server.doStart(Server.java:414) > at > org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68) > at org.apache.hadoop.hbase.rest.RESTServer.main(RESTServer.java:360) > {code} > I think the process should exit and log the information in ***.log when this > happens, so that the user can directly discover that the rest server is abnormal. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-18843) Add DistCp support to incremental backup with bulk loading
[ https://issues.apache.org/jira/browse/HBASE-18843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Rodionov updated HBASE-18843: -- Attachment: HBASE-18843-v5.patch v5 adds an annotation to the new class > Add DistCp support to incremental backup with bulk loading > -- > > Key: HBASE-18843 > URL: https://issues.apache.org/jira/browse/HBASE-18843 > Project: HBase > Issue Type: Improvement >Reporter: Vladimir Rodionov >Assignee: Vladimir Rodionov > Attachments: HBASE-18843-v1.patch, HBASE-18843-v2.patch, > HBASE-18843-v4.patch, HBASE-18843-v5.patch > > > Currently, we copy bulk loaded files to backup one-by-one on the client side > (where backup create runs). This has to be replaced with DistCp copying. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-18880) Failed to start rest server if the value of hbase.rest.threads.max is too small.
[ https://issues.apache.org/jira/browse/HBASE-18880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181931#comment-16181931 ] stack commented on HBASE-18880: --- Oh, there are non-daemon threads in the mix. We need to spend the time to figure out where they come from. Meantime let me commit this. > Failed to start rest server if the value of hbase.rest.threads.max is too > small. > > > Key: HBASE-18880 > URL: https://issues.apache.org/jira/browse/HBASE-18880 > Project: HBase > Issue Type: Bug > Components: REST >Affects Versions: 3.0.0, 2.0.0-alpha-4 >Reporter: Guangxu Cheng >Assignee: Guangxu Cheng >Priority: Critical > Fix For: 2.0.0-beta-1 > > Attachments: HBASE-18880.master.v0.patch, > hbase-hbase-rest-locolhost.log, hbase-hbase-rest-locolhost.out, > jstack-5750.log > > > After HBASE-18224, Jetty has been updated to 9.4.6, and it requires more > threads to start up. > If the value of hbase.rest.threads.max is too small, the rest server will > fail to start. > What I observed was as follows: > 1. The process did not exit. (At the beginning, I thought the rest server had > started normally because the process exists.) > 2. Can't connect to the rest server, and I didn't find any exception log in > ***.log. > 3. The main thread has exited (jstack log). > 4. Found the exception information from ***.out. > {code} > java.lang.IllegalStateException: Insufficient threads: max=5 < > needed(acceptors=1 + selectors=8 + request=1) > at org.eclipse.jetty.server.Server.doStart(Server.java:414) > at > org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68) > at org.apache.hadoop.hbase.rest.RESTServer.main(RESTServer.java:360) > {code} > I think the process should exit and log the information in ***.log when this > happens, so that the user can directly discover that the rest server is abnormal. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-18880) Failed to start rest server if the value of hbase.rest.threads.max is too small.
[ https://issues.apache.org/jira/browse/HBASE-18880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181930#comment-16181930 ] stack commented on HBASE-18880: --- +1 then [~andrewcheng] Thank you. > Failed to start rest server if the value of hbase.rest.threads.max is too > small. > > > Key: HBASE-18880 > URL: https://issues.apache.org/jira/browse/HBASE-18880 > Project: HBase > Issue Type: Bug > Components: REST >Affects Versions: 3.0.0, 2.0.0-alpha-4 >Reporter: Guangxu Cheng >Assignee: Guangxu Cheng >Priority: Critical > Fix For: 2.0.0-beta-1 > > Attachments: HBASE-18880.master.v0.patch, > hbase-hbase-rest-locolhost.log, hbase-hbase-rest-locolhost.out, > jstack-5750.log > > > After HBASE-18224, Jetty has been updated to 9.4.6, and it requires more > threads to start up. > If the value of hbase.rest.threads.max is too small, the rest server will > fail to start. > What I observed was as follows: > 1. The process did not exit. (At the beginning, I thought the rest server had > started normally because the process exists.) > 2. Can't connect to the rest server, and I didn't find any exception log in > ***.log. > 3. The main thread has exited (jstack log). > 4. Found the exception information from ***.out. > {code} > java.lang.IllegalStateException: Insufficient threads: max=5 < > needed(acceptors=1 + selectors=8 + request=1) > at org.eclipse.jetty.server.Server.doStart(Server.java:414) > at > org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68) > at org.apache.hadoop.hbase.rest.RESTServer.main(RESTServer.java:360) > {code} > I think the process should exit and log the information in ***.log when this > happens, so that the user can directly discover that the rest server is abnormal. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-14247) Separate the old WALs into different regionserver directories
[ https://issues.apache.org/jira/browse/HBASE-14247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guanghao Zhang updated HBASE-14247: --- Attachment: HBASE-14247.master.004.patch > Separate the old WALs into different regionserver directories > - > > Key: HBASE-14247 > URL: https://issues.apache.org/jira/browse/HBASE-14247 > Project: HBase > Issue Type: Improvement > Components: wal >Reporter: Liu Shaohui >Assignee: Guanghao Zhang >Priority: Critical > Fix For: 2.0.0 > > Attachments: HBASE-14247.master.001.patch, > HBASE-14247.master.002.patch, HBASE-14247.master.003.patch, > HBASE-14247.master.004.patch, HBASE-14247-v001.diff, HBASE-14247-v002.diff, > HBASE-14247-v003.diff > > > Currently all old WALs of regionservers are archived into the single > directory of oldWALs. In big clusters, because of a long WAL TTL or disabled > replication, the number of files under oldWALs may reach the > max-directory-items limit of HDFS, which will make the hbase cluster crash. > {quote} > Caused by: > org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.FSLimitException$MaxDirectoryItemsExceededException): > The directory item limit of /hbase/lgprc-xiaomi/.oldlogs is exceeded: > limit=1048576 items=1048576 > {quote} > A simple solution is to separate the old WALs into different directories > according to the server name of the WAL. > Suggestions are welcome~ Thanks -- This message was sent by Atlassian JIRA (v6.4.14#64029)
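The proposal above can be sketched as a path-layout change; the directory shapes below are illustrative and may differ from the attached patch's actual naming:

```java
public class OldWalLayout {
    // Sketch of the proposed layout: archive each region server's old WALs
    // under a per-server subdirectory, so no single directory approaches
    // HDFS's max-directory-items cap (limit=1048576 in the quoted exception,
    // governed by dfs.namenode.fs-limits.max-directory-items).
    static String archivedWalDir(String oldWalsRoot, String serverName) {
        return oldWalsRoot + "/" + serverName;
    }

    public static void main(String[] args) {
        // Server names are unique per region server, so archived files fan
        // out across one subdirectory per server instead of piling into the
        // oldWALs directory itself.
        System.out.println(
            archivedWalDir("/hbase/oldWALs", "rs1.example.com,16020,1506427470546"));
    }
}
```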
[jira] [Commented] (HBASE-18843) Add DistCp support to incremental backup with bulk loading
[ https://issues.apache.org/jira/browse/HBASE-18843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181915#comment-16181915 ] Hadoop QA commented on HBASE-18843: --- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 55s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 12s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 25s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 29s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 21s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 39m 40s{color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 50s{color} | {color:green} hbase-backup in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 11s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 60m 38s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:5d60123 | | JIRA Issue | HBASE-18843 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12889195/HBASE-18843-v4.patch | | Optional Tests | asflicense shadedjars javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux 9b028d3901e6 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / 91e1f83 | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC3 | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/8810/testReport/ | | modules | C: hbase-backup U: hbase-backup | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/8810/console | | Powered by | Apache Yetus 0.4.0 http://yetus.apache.org | This message was automatically generated. > Add DistCp support to incremental backup with bulk loading >
[jira] [Commented] (HBASE-18885) HFileOutputFormat2 hardcodes default FileOutputCommitter
[ https://issues.apache.org/jira/browse/HBASE-18885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181912#comment-16181912 ] Ted Yu commented on HBASE-18885: +1 > HFileOutputFormat2 hardcodes default FileOutputCommitter > > > Key: HBASE-18885 > URL: https://issues.apache.org/jira/browse/HBASE-18885 > Project: HBase > Issue Type: Bug > Components: mapreduce >Reporter: Shaofeng SHI >Assignee: Shaofeng SHI > Attachments: HBASE-18885.master.001.patch > > > Apache Kylin uses HBase's HFileOutputFormat2.java to configure the MR job. > The original report is in KYLIN-2788 [1]. After some investigation, we > found this class always uses the default "FileOutputCommitter", see [2], > regardless of the job's configuration; so it always writes to the "_temporary" > folder. Since AWS EMR is configured to use DirectOutputCommitter for S3, > this problem occurs: Hadoop expects to see the files directly under the output > path, while the RecordWriter generates them in the "_temporary" folder. This > caused no data to be loaded into the HTable. > This problem seems to exist in all versions so far. > [1] https://issues.apache.org/jira/browse/KYLIN-2788 > [2] > https://github.com/apache/hbase/blob/master/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat2.java#L193 -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-18885) HFileOutputFormat2 hardcodes default FileOutputCommitter
[ https://issues.apache.org/jira/browse/HBASE-18885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181908#comment-16181908 ] Hadoop QA commented on HBASE-18885: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 30s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 20s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 16s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 32s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 36s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 8s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 41m 18s{color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 53s{color} | {color:green} hbase-mapreduce in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 10s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 61m 27s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:5d60123 | | JIRA Issue | HBASE-18885 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12889196/HBASE-18885.master.001.patch | | Optional Tests | asflicense shadedjars javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux c070837673a7 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / 91e1f83 | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC3 | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/8808/testReport/ | | modules | C: hbase-mapreduce U: hbase-mapreduce | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/8808/console | | Powered by | Apache Yetus 0.4.0
[jira] [Updated] (HBASE-18880) Failed to start rest server if the value of hbase.rest.threads.max is too small.
[ https://issues.apache.org/jira/browse/HBASE-18880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guangxu Cheng updated HBASE-18880: -- Attachment: hbase-hbase-rest-locolhost.out jstack-5750.log hbase-hbase-rest-locolhost.log > Failed to start rest server if the value of hbase.rest.threads.max is too > small. > > > Key: HBASE-18880 > URL: https://issues.apache.org/jira/browse/HBASE-18880 > Project: HBase > Issue Type: Bug > Components: REST >Affects Versions: 3.0.0, 2.0.0-alpha-4 >Reporter: Guangxu Cheng >Assignee: Guangxu Cheng >Priority: Critical > Fix For: 2.0.0-beta-1 > > Attachments: HBASE-18880.master.v0.patch, > hbase-hbase-rest-locolhost.log, hbase-hbase-rest-locolhost.out, > jstack-5750.log > > > After HBASE-18224, Jetty has been updated to 9.4.6, and it requires more > threads to start up. > If the value of hbase.rest.threads.max is too small, the rest server will > fail to start. > What I observed was as follows: > 1. The process did not exit. (At the beginning, I thought the rest server had > started normally because the process exists.) > 2. Can't connect to the rest server, and I didn't find any exception log in > ***.log. > 3. The main thread has exited (jstack log). > 4. Found the exception information from ***.out. > {code} > java.lang.IllegalStateException: Insufficient threads: max=5 < > needed(acceptors=1 + selectors=8 + request=1) > at org.eclipse.jetty.server.Server.doStart(Server.java:414) > at > org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68) > at org.apache.hadoop.hbase.rest.RESTServer.main(RESTServer.java:360) > {code} > I think the process should exit and log the information in ***.log when this > happens, so that the user can directly discover that the rest server is abnormal. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-18880) Failed to start rest server if the value of hbase.rest.threads.max is too small.
[ https://issues.apache.org/jira/browse/HBASE-18880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181907#comment-16181907 ] Guangxu Cheng commented on HBASE-18880: --- bq. Should the catch block only enclose start()? join() may also throw an exception; I think both of them should be enclosed. Thanks. bq. I was going to mark this as minor since the .out has the exception that caused the server crash – and .out is where the 'unexpected' emissions go – but I notice that you say that the server stays up. Are there threads that should be marked daemon threads that currently are not? The patch works? +1 if it does; i.e. the server is shut down on an unexpected exception out of start() (or join()). Uploaded a jstack log. From the log we can see that the main thread has exited. After this patch, the process exits directly and the server doesn't stay up any longer. > Failed to start rest server if the value of hbase.rest.threads.max is too > small. > > > Key: HBASE-18880 > URL: https://issues.apache.org/jira/browse/HBASE-18880 > Project: HBase > Issue Type: Bug > Components: REST >Affects Versions: 3.0.0, 2.0.0-alpha-4 >Reporter: Guangxu Cheng >Assignee: Guangxu Cheng >Priority: Critical > Fix For: 2.0.0-beta-1 > > Attachments: HBASE-18880.master.v0.patch, > hbase-hbase-rest-locolhost.log, hbase-hbase-rest-locolhost.out, > jstack-5750.log > > > After HBASE-18224, Jetty has been updated to 9.4.6, and it requires more > threads to start up. > If the value of hbase.rest.threads.max is too small, the rest server will > fail to start. > What I observed was as follows: > 1. The process did not exit. (At first, I thought the rest server had > started normally because the process exists.) > 2. Can't connect to the rest server, and I didn't find any exception log in > ***.log. > 3. The main thread has exited (jstack log). > 4. Found the exception information in ***.out. 
> {code} > java.lang.IllegalStateException: Insufficient threads: max=5 < > needed(acceptors=1 + selectors=8 + request=1) > at org.eclipse.jetty.server.Server.doStart(Server.java:414) > at > org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68) > at org.apache.hadoop.hbase.rest.RESTServer.main(RESTServer.java:360) > {code} > I think the process should exit and log the information in ***.log when this > happens, so that the user can directly discover that the rest server is > abnormal. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
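The fail-fast behavior discussed in this thread can be sketched in plain Java. The `Server` interface below is an illustrative stand-in for Jetty's `org.eclipse.jetty.server.Server`, and `runServer` is a hypothetical helper, not the actual RESTServer code; it shows the pattern of enclosing both start() and join() in one try/catch and turning an unexpected exception into a logged error plus a non-zero exit code:

```java
// Sketch of the fail-fast pattern: wrap BOTH start() and join() in one
// try/catch, log the failure to the normal log (not only stdout/.out),
// and return a non-zero status so the launcher sees the startup failure.
public class RestStartup {

  // Illustrative stand-in for org.eclipse.jetty.server.Server.
  interface Server {
    void start() throws Exception;
    void join() throws Exception;
  }

  /** Returns the intended process exit code instead of calling System.exit, for testability. */
  static int runServer(Server server) {
    try {
      server.start(); // may throw IllegalStateException: Insufficient threads
      server.join();  // may also throw, so it is enclosed in the same block
      return 0;
    } catch (Exception e) {
      // In the real fix this would go through the logging framework so the
      // error lands in *.log rather than only in *.out.
      System.err.println("Failed to start REST server: " + e.getMessage());
      return 1;
    }
  }

  public static void main(String[] args) {
    // Simulate the reported failure mode with a server that cannot start.
    int code = runServer(new Server() {
      public void start() {
        throw new IllegalStateException(
            "Insufficient threads: max=5 < needed(acceptors=1 + selectors=8 + request=1)");
      }
      public void join() { }
    });
    System.out.println("exit code: " + code);
  }
}
```

The key point is that the exit code propagates: a supervising script can then detect that the rest server is abnormal instead of seeing a lingering process.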
[jira] [Commented] (HBASE-18105) [AMv2] Split/Merge need cleanup; currently they diverge and do not fully embrace AMv2 world
[ https://issues.apache.org/jira/browse/HBASE-18105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181903#comment-16181903 ] Hadoop QA commented on HBASE-18105: --- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 5 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 21s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 5s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 8s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 17s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 25s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 51s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 45s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 36s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 25s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 4s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 39m 10s{color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. 
{color} | | {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 28s{color} | {color:green} hbase-protocol-shaded in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green}100m 29s{color} | {color:green} hbase-server in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 34s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}167m 11s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:5d60123 | | JIRA Issue | HBASE-18105 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12889177/HBASE-14350-V1-master.patch | | Optional Tests | asflicense shadedjars cc unit hbaseprotoc javac javadoc findbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux 0a27a06c7c41 3.13.0-129-generic
[jira] [Commented] (HBASE-12260) MasterServices - remove from coprocessor API (Discuss)
[ https://issues.apache.org/jira/browse/HBASE-12260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181889#comment-16181889 ] Anoop Sam John commented on HBASE-12260: Will work on this after Region cleanup subjira. > MasterServices - remove from coprocessor API (Discuss) > -- > > Key: HBASE-12260 > URL: https://issues.apache.org/jira/browse/HBASE-12260 > Project: HBase > Issue Type: Sub-task > Components: master >Reporter: ryan rawson >Priority: Critical > Fix For: 2.0.0-alpha-4 > > > A major issue with MasterServices is the MasterCoprocessorEnvironment exposes > this class even though MasterServices is tagged with > @InterfaceAudience.Private > This means that the entire internals of the HMaster is essentially part of > the coprocessor API. Many of the classes returned by the MasterServices API > are highly internal, extremely powerful, and subject to constant change. > Perhaps a new API to replace MasterServices that is use-case focused, and > justified based on real world co-processors would suit things better. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-18770) We should not allow RegionObserver.preBulkLoadHFile to bypass the default behavior
[ https://issues.apache.org/jira/browse/HBASE-18770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181878#comment-16181878 ] Anoop Sam John commented on HBASE-18770: I mean the calling code should simply ignore whether bypass is set from this CP hook. We do it this way in some other hooks, e.g. MasterObserver#preCreateTable > We should not allow RegionObserver.preBulkLoadHFile to bypass the default > behavior > -- > > Key: HBASE-18770 > URL: https://issues.apache.org/jira/browse/HBASE-18770 > Project: HBase > Issue Type: Sub-task > Components: Coprocessors >Reporter: Duo Zhang > Fix For: 2.0.0-alpha-4 > > > As we do not allow users to create a StoreFile instance now, users can still > select the files to be bulk loaded by modifying the familyPaths passed in. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-18298) RegionServerServices Interface cleanup for CP expose
[ https://issues.apache.org/jira/browse/HBASE-18298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anoop Sam John updated HBASE-18298: --- Attachment: HBASE-18298_V7.patch Retry > RegionServerServices Interface cleanup for CP expose > > > Key: HBASE-18298 > URL: https://issues.apache.org/jira/browse/HBASE-18298 > Project: HBase > Issue Type: Sub-task > Components: Coprocessors >Reporter: Anoop Sam John >Assignee: Anoop Sam John >Priority: Critical > Fix For: 2.0.0-alpha-4 > > Attachments: HBASE-18298.patch, HBASE-18298_V2.patch, > HBASE-18298_V3.patch, HBASE-18298_V4.patch, HBASE-18298_V5.patch, > HBASE-18298_V6.patch, HBASE-18298_V7.patch, HBASE-18298_V7.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-18843) Add DistCp support to incremental backup with bulk loading
[ https://issues.apache.org/jira/browse/HBASE-18843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-18843: --- Status: Patch Available (was: Open) > Add DistCp support to incremental backup with bulk loading > -- > > Key: HBASE-18843 > URL: https://issues.apache.org/jira/browse/HBASE-18843 > Project: HBase > Issue Type: Improvement >Reporter: Vladimir Rodionov >Assignee: Vladimir Rodionov > Attachments: HBASE-18843-v1.patch, HBASE-18843-v2.patch, > HBASE-18843-v4.patch > > > Currently, we copy bulk loaded files to backup one-by-one on a client side > (where backup create runs). This has to be replaced with DistCp copying. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-18885) HFileOutputFormat2 hardcodes default FileOutputCommitter
[ https://issues.apache.org/jira/browse/HBASE-18885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181862#comment-16181862 ] Ted Yu commented on HBASE-18885: {code} Applying: HBASE-18885 HFileOutputFormat2 hardcodes default FileOutputCommitter error: hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat2.java: does not exist in index Patch failed at 0001 HBASE-18885 HFileOutputFormat2 hardcodes default FileOutputCommitter {code} Please attach a patch for branch-1 > HFileOutputFormat2 hardcodes default FileOutputCommitter > > > Key: HBASE-18885 > URL: https://issues.apache.org/jira/browse/HBASE-18885 > Project: HBase > Issue Type: Bug > Components: mapreduce >Reporter: Shaofeng SHI >Assignee: Shaofeng SHI > Attachments: HBASE-18885.master.001.patch > > > Apache Kylin uses HBase's HFileOutputFormat2.java to configure the MR job. > The original report is in KYLIN-2788 [1]. After some investigation, we > found that this class always uses the default "FileOutputCommitter", see [2], > regardless of the job's configuration, so it always writes to the "_temporary" > folder. Since AWS EMR is configured to use DirectOutputCommitter for S3, > this problem occurs: Hadoop expects to see the files directly under the > output path, while the RecordWriter generates them in the "_temporary" > folder. This causes no data to be loaded into the HTable. > It seems this problem exists in all versions so far. > [1] https://issues.apache.org/jira/browse/KYLIN-2788 > [2] > https://github.com/apache/hbase/blob/master/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat2.java#L193 -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Assigned] (HBASE-18885) HFileOutputFormat2 hardcodes default FileOutputCommitter
[ https://issues.apache.org/jira/browse/HBASE-18885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu reassigned HBASE-18885: -- Assignee: Shaofeng SHI > HFileOutputFormat2 hardcodes default FileOutputCommitter > > > Key: HBASE-18885 > URL: https://issues.apache.org/jira/browse/HBASE-18885 > Project: HBase > Issue Type: Bug > Components: mapreduce >Reporter: Shaofeng SHI >Assignee: Shaofeng SHI > Attachments: HBASE-18885.master.001.patch > > > Apache Kylin uses HBase's HFileOutputFormat2.java to configure the MR job. > The original report is in KYLIN-2788 [1]. After some investigation, we > found that this class always uses the default "FileOutputCommitter", see [2], > regardless of the job's configuration, so it always writes to the "_temporary" > folder. Since AWS EMR is configured to use DirectOutputCommitter for S3, > this problem occurs: Hadoop expects to see the files directly under the > output path, while the RecordWriter generates them in the "_temporary" > folder. This causes no data to be loaded into the HTable. > It seems this problem exists in all versions so far. > [1] https://issues.apache.org/jira/browse/KYLIN-2788 > [2] > https://github.com/apache/hbase/blob/master/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat2.java#L193 -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-18885) HFileOutputFormat2 hardcodes default FileOutputCommitter
[ https://issues.apache.org/jira/browse/HBASE-18885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-18885: --- Status: Patch Available (was: Open) > HFileOutputFormat2 hardcodes default FileOutputCommitter > > > Key: HBASE-18885 > URL: https://issues.apache.org/jira/browse/HBASE-18885 > Project: HBase > Issue Type: Bug > Components: mapreduce >Reporter: Shaofeng SHI > Attachments: HBASE-18885.master.001.patch > > > Apache Kylin uses HBase's HFileOutputFormat2.java to configure the MR job. > The original report is in KYLIN-2788 [1]. After some investigation, we > found that this class always uses the default "FileOutputCommitter", see [2], > regardless of the job's configuration, so it always writes to the "_temporary" > folder. Since AWS EMR is configured to use DirectOutputCommitter for S3, > this problem occurs: Hadoop expects to see the files directly under the > output path, while the RecordWriter generates them in the "_temporary" > folder. This causes no data to be loaded into the HTable. > It seems this problem exists in all versions so far. > [1] https://issues.apache.org/jira/browse/KYLIN-2788 > [2] > https://github.com/apache/hbase/blob/master/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat2.java#L193 -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-18806) VerifyRep by snapshot need not to restore snapshot for each mapper
[ https://issues.apache.org/jira/browse/HBASE-18806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guanghao Zhang updated HBASE-18806: --- Status: Patch Available (was: Open) > VerifyRep by snapshot need not to restore snapshot for each mapper > -- > > Key: HBASE-18806 > URL: https://issues.apache.org/jira/browse/HBASE-18806 > Project: HBase > Issue Type: Improvement > Components: Replication >Affects Versions: 2.0.0-alpha-2 >Reporter: Zheng Hu >Assignee: Zheng Hu > Attachments: HBASE-18806.v1.patch, HBASE-18806.v2.patch, > HBASE-18806.v3.patch, HBASE-18806.v3.patch > > > In the following method stack, it seems each mapper task will restore the > snapshot. If we verify replication from a snapshot which has many hfiles, > it will take a long time to restore the snapshot. In our cluster, the > snapshot restore took ~30min when verifying a big table. > {code} > Verifier.map > |> replicatedScanner = new TableSnapshotScanner(...) > |> > TableSnapshotScanner.init() > > |-> RestoreSnapshotHelper.copySnapshotForScanner > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-18885) HFileOutputFormat2 hardcodes default FileOutputCommitter
[ https://issues.apache.org/jira/browse/HBASE-18885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shaofeng SHI updated HBASE-18885: - Attachment: HBASE-18885.master.001.patch > HFileOutputFormat2 hardcodes default FileOutputCommitter > > > Key: HBASE-18885 > URL: https://issues.apache.org/jira/browse/HBASE-18885 > Project: HBase > Issue Type: Bug > Components: mapreduce >Reporter: Shaofeng SHI > Attachments: HBASE-18885.master.001.patch > > > Apache Kylin uses HBase's HFileOutputFormat2.java to configure the MR job. > The original report is in KYLIN-2788 [1]. After some investigation, we > found that this class always uses the default "FileOutputCommitter", see [2], > regardless of the job's configuration, so it always writes to the "_temporary" > folder. Since AWS EMR is configured to use DirectOutputCommitter for S3, > this problem occurs: Hadoop expects to see the files directly under the > output path, while the RecordWriter generates them in the "_temporary" > folder. This causes no data to be loaded into the HTable. > It seems this problem exists in all versions so far. > [1] https://issues.apache.org/jira/browse/KYLIN-2788 > [2] > https://github.com/apache/hbase/blob/master/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat2.java#L193 -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (HBASE-18885) HFileOutputFormat2 hardcodes default FileOutputCommitter
Shaofeng SHI created HBASE-18885: Summary: HFileOutputFormat2 hardcodes default FileOutputCommitter Key: HBASE-18885 URL: https://issues.apache.org/jira/browse/HBASE-18885 Project: HBase Issue Type: Bug Components: mapreduce Reporter: Shaofeng SHI Apache Kylin uses HBase's HFileOutputFormat2.java to configure the MR job. The original report is in KYLIN-2788 [1]. After some investigation, we found that this class always uses the default "FileOutputCommitter", see [2], regardless of the job's configuration, so it always writes to the "_temporary" folder. Since AWS EMR is configured to use DirectOutputCommitter for S3, this problem occurs: Hadoop expects to see the files directly under the output path, while the RecordWriter generates them in the "_temporary" folder. This causes no data to be loaded into the HTable. It seems this problem exists in all versions so far. [1] https://issues.apache.org/jira/browse/KYLIN-2788 [2] https://github.com/apache/hbase/blob/master/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat2.java#L193 -- This message was sent by Atlassian JIRA (v6.4.14#64029)
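The bug pattern described in this issue, a hardcoded committer that ignores the job configuration, can be illustrated with a minimal, self-contained plain-Java sketch. No Hadoop classes are used and all names below, including the configuration key, are hypothetical stand-ins, not Hadoop's real API:

```java
import java.util.HashMap;
import java.util.Map;

// Plain-Java illustration: the "buggy" path always instantiates one committer
// regardless of configuration, while the "fixed" path instantiates whatever
// committer class the configuration names (like DirectOutputCommitter on S3).
public class CommitterDemo {

  interface OutputCommitter { String name(); }

  static class FileOutputCommitter implements OutputCommitter {
    public String name() { return "FileOutputCommitter"; } // stages under _temporary
  }

  static class DirectOutputCommitter implements OutputCommitter {
    public String name() { return "DirectOutputCommitter"; } // writes output directly
  }

  /** Buggy behavior: the job configuration is ignored entirely. */
  static OutputCommitter hardcoded(Map<String, String> conf) {
    return new FileOutputCommitter();
  }

  /** Fixed behavior: honor the committer class named in the configuration. */
  static OutputCommitter fromConf(Map<String, String> conf) throws Exception {
    String cls = conf.getOrDefault("outputcommitter.class",
        FileOutputCommitter.class.getName());
    return (OutputCommitter) Class.forName(cls)
        .getDeclaredConstructor().newInstance();
  }

  public static void main(String[] args) throws Exception {
    Map<String, String> conf = new HashMap<>();
    conf.put("outputcommitter.class", DirectOutputCommitter.class.getName());
    System.out.println(hardcoded(conf).name()); // the bug: config is ignored
    System.out.println(fromConf(conf).name());  // the fix: config is honored
  }
}
```

This mirrors the Kylin-on-EMR symptom: the hardcoded committer stages files under "_temporary" while the rest of the pipeline expects output directly under the output path.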
[jira] [Updated] (HBASE-18843) Add DistCp support to incremental backup with bulk loading
[ https://issues.apache.org/jira/browse/HBASE-18843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Rodionov updated HBASE-18843: -- Attachment: HBASE-18843-v4.patch Patch v4. cc : [~te...@apache.org] > Add DistCp support to incremental backup with bulk loading > -- > > Key: HBASE-18843 > URL: https://issues.apache.org/jira/browse/HBASE-18843 > Project: HBase > Issue Type: Improvement >Reporter: Vladimir Rodionov >Assignee: Vladimir Rodionov > Attachments: HBASE-18843-v1.patch, HBASE-18843-v2.patch, > HBASE-18843-v4.patch > > > Currently, we copy bulk loaded files to backup one-by-one on a client side > (where backup create runs). This has to be replaced with DistCp copying. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-18090) Improve TableSnapshotInputFormat to allow more multiple mappers per region
[ https://issues.apache.org/jira/browse/HBASE-18090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181839#comment-16181839 ] Ashu Pachauri commented on HBASE-18090: --- Just noticed that HBASE-16894 is about adding this support for the TableInputFormat. Somehow I missed the fact that it's linked to this jira. You can ignore my comment regarding adding the support for TableInputFormat. > Improve TableSnapshotInputFormat to allow more multiple mappers per region > -- > > Key: HBASE-18090 > URL: https://issues.apache.org/jira/browse/HBASE-18090 > Project: HBase > Issue Type: Improvement > Components: mapreduce >Affects Versions: 1.4.0 >Reporter: Mikhail Antonov >Assignee: xinxin fan > Attachments: HBASE-18090-branch-1.3-v1.patch, > HBASE-18090-branch-1.3-v2.patch, HBASE-18090-V3-master.patch, > HBASE-18090-V4-master.patch > > > TableSnapshotInputFormat runs one map task per region in the table snapshot. > This places an unnecessary restriction: the region layout of the original > table needs to take the processing resources available to the MR job into > consideration. Allowing multiple mappers per region (assuming a > reasonably even key distribution) would be useful. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-17732) Coprocessor Design Improvements
[ https://issues.apache.org/jira/browse/HBASE-17732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Appy updated HBASE-17732: - Attachment: HBASE-17732.master.014.patch > Coprocessor Design Improvements > --- > > Key: HBASE-17732 > URL: https://issues.apache.org/jira/browse/HBASE-17732 > Project: HBase > Issue Type: Improvement >Reporter: Appy >Assignee: Appy >Priority: Critical > Fix For: 2.0.0-alpha-4 > > Attachments: HBASE-17732.master.001.patch, > HBASE-17732.master.002.patch, HBASE-17732.master.003.patch, > HBASE-17732.master.004.patch, HBASE-17732.master.005.patch, > HBASE-17732.master.006.patch, HBASE-17732.master.007.patch, > HBASE-17732.master.008.patch, HBASE-17732.master.009.patch, > HBASE-17732.master.010.patch, HBASE-17732.master.011.patch, > HBASE-17732.master.012.patch, HBASE-17732.master.013.patch, > HBASE-17732.master.014.patch > > > The two main changes are: > * *Adding a template for the coprocessor type to CoprocessorEnvironment, i.e. > {{interface CoprocessorEnvironment<C extends Coprocessor>}}* > ** Enables us to load only relevant coprocessors in hosts. Right now each > type of host loads all types of coprocs, and it's only during execOperation > that it checks if the coproc is of the correct type, i.e. XCoprocessorHost will > load XObserver, YObserver, and all others, and will check in execOperation if > {{coproc instanceOf XObserver}} and ignore the rest. > ** Allows sharing of a bunch of functions/classes which are currently > duplicated in each host, e.g. CoprocessorOperations, > CoprocessorOperationWithResult, execOperations(). > * *Introduce 4 coprocessor classes and use composition between these new > classes and old observers* > ** The real gold here is, moving forward, we'll be able to break down giant > everything-in-one observers (MasterObserver has 100+ functions) into smaller, > more focused observers. These smaller observers can then have different compat > guarantees!! 
> Here's a more detailed design doc: > https://docs.google.com/document/d/1mPkM1CRRvBMZL4dBQzrus8obyvNnHhR5it2yyhiFXTg/edit?usp=sharing -- This message was sent by Atlassian JIRA (v6.4.14#64029)
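The "template for coprocessor type" idea in the description above can be sketched with a tiny generic host. All names here are illustrative, not HBase's actual API: a host parameterized by its observer type keeps only matching coprocessors at load time, instead of loading everything and instanceof-checking during every execOperation:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a type-parameterized coprocessor host: the host's type token
// filters out non-matching coprocessors once, at load time, so execution
// paths never need to re-check the type of each loaded coprocessor.
public class TypedHostDemo {

  interface Coprocessor { }
  interface MasterObserver extends Coprocessor { }
  interface RegionObserver extends Coprocessor { }

  static class CoprocessorHost<C extends Coprocessor> {
    private final Class<C> type;
    private final List<C> loaded = new ArrayList<>();

    CoprocessorHost(Class<C> type) { this.type = type; }

    /** Keep only coprocessors of this host's type; silently skip the rest. */
    void load(Coprocessor cp) {
      if (type.isInstance(cp)) {
        loaded.add(type.cast(cp));
      }
    }

    List<C> getLoaded() { return loaded; }
  }

  public static void main(String[] args) {
    CoprocessorHost<MasterObserver> masterHost =
        new CoprocessorHost<>(MasterObserver.class);
    masterHost.load(new MasterObserver() { });
    masterHost.load(new RegionObserver() { }); // ignored: wrong observer type
    System.out.println(masterHost.getLoaded().size()); // prints 1
  }
}
```

The same pattern also explains the claimed code-sharing benefit: the generic host can hold the duplicated load/exec machinery once, with `C` supplying the per-host specialization.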
[jira] [Commented] (HBASE-18830) TestCanaryTool does not check Canary monitor's error code
[ https://issues.apache.org/jira/browse/HBASE-18830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181821#comment-16181821 ] Hudson commented on HBASE-18830: FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #3785 (See [https://builds.apache.org/job/HBase-Trunk_matrix/3785/]) Amend HBASE-18830 TestCanaryTool does not check Canary monitor's error (apurtell: rev 91e1f834bf93e4a33c253ad6bc86260add5bd6d9) * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/tool/Canary.java > TestCanaryTool does not check Canary monitor's error code > - > > Key: HBASE-18830 > URL: https://issues.apache.org/jira/browse/HBASE-18830 > Project: HBase > Issue Type: Bug >Reporter: Chinmay Kulkarni >Assignee: Chinmay Kulkarni > Fix For: 2.0.0, 3.0.0, 1.4.0, 1.5.0 > > Attachments: HBASE-18830.001.patch > > > None of the tests inside TestCanaryTool check Canary monitor's error code. > Thus, it is possible that the monitor has registered an error and yet the > tests pass. We should check the value returned by the _ToolRunner.run()_ > method inside each unit test. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-18883) Upgrade to Curator 4.0
[ https://issues.apache.org/jira/browse/HBASE-18883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181800#comment-16181800 ] Duo Zhang commented on HBASE-18883: --- And maybe we do not need curator-recipes anymore, as we do not set any watchers on the client side. Will verify later. > Upgrade to Curator 4.0 > -- > > Key: HBASE-18883 > URL: https://issues.apache.org/jira/browse/HBASE-18883 > Project: HBase > Issue Type: Bug > Components: Client >Reporter: Mike Drob >Assignee: Mike Drob > Fix For: 2.0.0 > > Attachments: HBASE-18883.patch > > > While we're doing a dependency pass for HBase 2, we should see if we can bump > Curator to 4.0 -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-18883) Upgrade to Curator 4.0
[ https://issues.apache.org/jira/browse/HBASE-18883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181799#comment-16181799 ] Duo Zhang commented on HBASE-18883: --- Oh, one problem: I think the exclusion should be for curator-client, not curator-recipes? > Upgrade to Curator 4.0 > -- > > Key: HBASE-18883 > URL: https://issues.apache.org/jira/browse/HBASE-18883 > Project: HBase > Issue Type: Bug > Components: Client >Reporter: Mike Drob >Assignee: Mike Drob > Fix For: 2.0.0 > > Attachments: HBASE-18883.patch > > > While we're doing a dependency pass for HBase 2, we should see if we can bump > Curator to 4.0 -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-18568) Correct metric of numRegions
[ https://issues.apache.org/jira/browse/HBASE-18568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181797#comment-16181797 ] huaxiang sun commented on HBASE-18568: -- [~zhangshibin], this is a very good catch! > Correct metric of numRegions > -- > > Key: HBASE-18568 > URL: https://issues.apache.org/jira/browse/HBASE-18568 > Project: HBase > Issue Type: Bug > Components: metrics >Affects Versions: 3.0.0 >Reporter: Shibin Zhang >Assignee: Shibin Zhang >Priority: Critical > Fix For: 3.0.0, 1.4.0, 1.5.0, 2.0.0-alpha-3 > > Attachments: HBASE-18568-V1.patch > > > I found that the value of the metric numRegions in Regions is not correct. > The metric does not add or remove regions correctly as regions close or open. > The metric is as follows: > "name" : "Hadoop:service=HBase,name=RegionServer,sub=Regions", > "numRegions" : 2, > After troubleshooting, I found the reason is in > MetricsRegionSourceImpl#MetricsRegionSourceImpl > {code:java} > agg.register(this); > ... > hashCode = regionWrapper.getRegionHashCode(); > {code} > When the MetricsRegionSource is added to the set, the hashCode has not yet > been initialized, so the setFromMap can not put or remove the object correctly. > It would be better like this: > {code:java} > hashCode = regionWrapper.getRegionHashCode(); > agg.register(this); > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
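The ordering bug identified in this issue does not need HBase to reproduce; a plain HashSet shows why the hash value must be fixed before registration (the `Source` class below is a hypothetical stand-in for MetricsRegionSourceImpl):

```java
import java.util.HashSet;
import java.util.Set;

// Demonstrates why the hashCode field must be computed *before* registering
// the object in a hash-based collection: if the hash changes after insertion,
// the set files the object in the wrong bucket and can no longer remove it.
public class HashOrderDemo {

  static class Source {
    int hash; // 0 until "initialized", like the uninitialized hashCode field

    @Override public int hashCode() { return hash; }
    // default identity equals(), like comparing distinct region sources
  }

  public static void main(String[] args) {
    // Buggy order: register first, initialize the hash afterwards.
    Set<Source> agg = new HashSet<>();
    Source buggy = new Source();
    agg.add(buggy);   // stored in the bucket for hash 0
    buggy.hash = 42;  // hashCode now disagrees with the stored bucket
    System.out.println(agg.remove(buggy)); // prints false: cannot be removed

    // Fixed order: initialize the hash, then register.
    Set<Source> agg2 = new HashSet<>();
    Source fixed = new Source();
    fixed.hash = 42;
    agg2.add(fixed);
    System.out.println(agg2.remove(fixed)); // prints true
  }
}
```

This is exactly the symptom in the report: the "register then initialize hashCode" order leaves stale entries behind, so numRegions drifts as regions open and close.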
[jira] [Updated] (HBASE-17732) Coprocessor Design Improvements
[ https://issues.apache.org/jira/browse/HBASE-17732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Appy updated HBASE-17732: - Attachment: HBASE-17732.master.013.patch > Coprocessor Design Improvements > --- > > Key: HBASE-17732 > URL: https://issues.apache.org/jira/browse/HBASE-17732 > Project: HBase > Issue Type: Improvement >Reporter: Appy >Assignee: Appy >Priority: Critical > Fix For: 2.0.0-alpha-4 > > Attachments: HBASE-17732.master.001.patch, > HBASE-17732.master.002.patch, HBASE-17732.master.003.patch, > HBASE-17732.master.004.patch, HBASE-17732.master.005.patch, > HBASE-17732.master.006.patch, HBASE-17732.master.007.patch, > HBASE-17732.master.008.patch, HBASE-17732.master.009.patch, > HBASE-17732.master.010.patch, HBASE-17732.master.011.patch, > HBASE-17732.master.012.patch, HBASE-17732.master.013.patch > > > The two main changes are: > * *Adding a template for the coprocessor type to CoprocessorEnvironment, i.e. > {{interface CoprocessorEnvironment<C extends Coprocessor>}}* > ** Enables us to load only relevant coprocessors in hosts. Right now each > type of host loads all types of coprocs, and it's only during execOperation > that it checks if the coproc is of the correct type, i.e. XCoprocessorHost will > load XObserver, YObserver, and all others, and will check in execOperation if > {{coproc instanceOf XObserver}} and ignore the rest. > ** Allows sharing of a bunch of functions/classes which are currently > duplicated in each host, e.g. CoprocessorOperations, > CoprocessorOperationWithResult, execOperations(). > * *Introduce 4 coprocessor classes and use composition between these new > classes and old observers* > ** The real gold here is, moving forward, we'll be able to break down giant > everything-in-one observers (MasterObserver has 100+ functions) into smaller, > more focused observers. These smaller observers can then have different compat > guarantees!! 
> Here's a more detailed design doc: > https://docs.google.com/document/d/1mPkM1CRRvBMZL4dBQzrus8obyvNnHhR5it2yyhiFXTg/edit?usp=sharing -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-18884) Coprocessor Design Improvements 2 (Follow up of HBASE-17732)
[ https://issues.apache.org/jira/browse/HBASE-18884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Appy updated HBASE-18884: - Description: Creating new jira to track suggestions that came in review (https://reviews.apache.org/r/62141/) but are not blocker and can be done separately. Suggestions by [~apurtell] - Change {{Service Coprocessor#getService()}} to {{List<Service> Coprocessor#getServices()}} - I think we overstepped by offering [table resource management via this interface|https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/CoprocessorEnvironment.java#L57]. There are a lot of other internal resource types which could/should be managed this way but they are all left up to the implementor. Perhaps we should remove the table ref management and leave it up to them as well. - Checkin the finalized design doc into repo (https://docs.google.com/document/d/1mPkM1CRRvBMZL4dBQzrus8obyvNnHhR5it2yyhiFXTg/edit) (fyi: [~stack]) was: Creating new jira to track suggestions that came in review (https://reviews.apache.org/r/62141/) but are not blocker and can be done separately. Suggestions by [~apurtell] - Change {{Service Coprocessor#getService()}} to {{List<Service> Coprocessor#getServices()}} - I think we overstepped by offering [table resource management via this interface|https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/CoprocessorEnvironment.java#L57]. There are a lot of other internal resource types which could/should be managed this way but they are all left up to the implementor. Perhaps we should remove the table ref management and leave it up to them as well. 
- Checkin the finalized design doc into repo (https://docs.google.com/document/d/1mPkM1CRRvBMZL4dBQzrus8obyvNnHhR5it2yyhiFXTg/edit) (fyi: [~stack]) > Coprocessor Design Improvements 2 (Follow up of HBASE-17732) > > > Key: HBASE-18884 > URL: https://issues.apache.org/jira/browse/HBASE-18884 > Project: HBase > Issue Type: Bug >Reporter: Appy >Assignee: Appy > > Creating new jira to track suggestions that came in review > (https://reviews.apache.org/r/62141/) but are not blocker and can be done > separately. > Suggestions by [~apurtell] > - Change {{Service Coprocessor#getService()}} to {{List > Coprocessor#getServices()}} > - I think we overstepped by offering [table resource management via this > interface|https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/CoprocessorEnvironment.java#L57]. > There are a lot of other internal resource types which could/should be > managed this way but they are all left up to the implementor. Perhaps we > should remove the table ref management and leave it up to them as well. > > - Checkin the finalized design doc into repo > (https://docs.google.com/document/d/1mPkM1CRRvBMZL4dBQzrus8obyvNnHhR5it2yyhiFXTg/edit) > (fyi: [~stack]) -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-18884) Coprocessor Design Improvements 2 (Follow up of HBASE-17732)
[ https://issues.apache.org/jira/browse/HBASE-18884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Appy updated HBASE-18884: - Description: Creating new jira to track suggestions that came in review (https://reviews.apache.org/r/62141/) but are not blocker and can be done separately. Suggestions by [~apurtell] - Change {{Service Coprocessor#getService()}} to {{List Coprocessor#getServices()}} - I think we overstepped by offering [table resource management via this interface|https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/CoprocessorEnvironment.java#L57]. There are a lot of other internal resource types which could/should be managed this way but they are all left up to the implementor. Perhaps we should remove the table ref management and leave it up to them as well. - Checkin the finalized design doc into repo (https://docs.google.com/document/d/1mPkM1CRRvBMZL4dBQzrus8obyvNnHhR5it2yyhiFXTg/edit) (fyi: [~stack]) was: Creating new jira to track suggestions that came in review (https://reviews.apache.org/r/62141/) but are not blocker and can be done separately. Suggestions by [~apurtell] - Change {{Service Coprocessor#getService()}} to {{List Coprocessor#getServices()}} - I think we overstepped by offering [table resource management via this interface|https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/CoprocessorEnvironment.java#L57]. There are a lot of other internal resource types which could/should be managed this way but they are all left up to the implementor. Perhaps we should remove the table ref management and leave it up to them as well. 
> Coprocessor Design Improvements 2 (Follow up of HBASE-17732) > > > Key: HBASE-18884 > URL: https://issues.apache.org/jira/browse/HBASE-18884 > Project: HBase > Issue Type: Bug >Reporter: Appy >Assignee: Appy > > Creating new jira to track suggestions that came in review > (https://reviews.apache.org/r/62141/) but are not blocker and can be done > separately. > Suggestions by [~apurtell] > - Change {{Service Coprocessor#getService()}} to {{List > Coprocessor#getServices()}} > - I think we overstepped by offering [table resource management via this > interface|https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/CoprocessorEnvironment.java#L57]. > There are a lot of other internal resource types which could/should be > managed this way but they are all left up to the implementor. Perhaps we > should remove the table ref management and leave it up to them as well. > - Checkin the finalized design doc into repo > (https://docs.google.com/document/d/1mPkM1CRRvBMZL4dBQzrus8obyvNnHhR5it2yyhiFXTg/edit) > (fyi: [~stack]) -- This message was sent by Atlassian JIRA (v6.4.14#64029)
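The first suggestion — changing {{Service Coprocessor#getService()}} to a list-returning {{getServices()}} — amounts to letting one coprocessor register several RPC endpoints. A hedged sketch (Service here is a stand-in for the real protobuf Service type, and the class names are invented):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Stand-in for the real com.google.protobuf.Service type.
interface Service {
  String getName();
}

interface Coprocessor {
  // Old shape was roughly: Service getService();
  // Suggested shape: zero or more services, empty by default.
  default List<Service> getServices() {
    return Collections.emptyList();
  }
}

// A single coprocessor can now expose multiple endpoints.
class SecurityCoprocessor implements Coprocessor {
  @Override
  public List<Service> getServices() {
    return Arrays.asList(
        (Service) () -> "AccessControlService",
        (Service) () -> "VisibilityService");
  }
}

public class MultiServiceDemo {
  public static void main(String[] args) {
    Coprocessor cp = new SecurityCoprocessor();
    for (Service s : cp.getServices()) {
      System.out.println(s.getName());
    }
  }
}
```

The default method keeps the common no-service case zero-boilerplate, while implementations like the sketch above return as many services as they need.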
[jira] [Commented] (HBASE-18883) Upgrade to Curator 4.0
[ https://issues.apache.org/jira/browse/HBASE-18883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181796#comment-16181796 ] Duo Zhang commented on HBASE-18883: --- +1. And for moving to hbase-thirdparty, I think we need to find out the root cause of the failing map reduce job. It seems that we have two versions of curator in our classpath. Thanks. > Upgrade to Curator 4.0 > -- > > Key: HBASE-18883 > URL: https://issues.apache.org/jira/browse/HBASE-18883 > Project: HBase > Issue Type: Bug > Components: Client >Reporter: Mike Drob >Assignee: Mike Drob > Fix For: 2.0.0 > > Attachments: HBASE-18883.patch > > > While we're doing a dependency pass for HBase 2, we should see if we can bump > Curator to 4.0 -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-18568) Correct metric of numRegions
[ https://issues.apache.org/jira/browse/HBASE-18568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181794#comment-16181794 ] huaxiang sun commented on HBASE-18568: -- ping [~busbey] and [~mantonov]. > Correct metric of numRegions > -- > > Key: HBASE-18568 > URL: https://issues.apache.org/jira/browse/HBASE-18568 > Project: HBase > Issue Type: Bug > Components: metrics >Affects Versions: 3.0.0 >Reporter: Shibin Zhang >Assignee: Shibin Zhang >Priority: Critical > Fix For: 3.0.0, 1.4.0, 1.5.0, 2.0.0-alpha-3 > > Attachments: HBASE-18568-V1.patch > > > I found the value of the metric numRegions in Regions is not correct. > The metric does not add or remove regions correctly as regions close or open. > The metric is as follows: > "name" : "Hadoop:service=HBase,name=RegionServer,sub=Regions", > "numRegions" : 2, > After troubleshooting, I found the reason is in > MetricsRegionSourceImpl#MetricsRegionSourceImpl: > {code:java} > agg.register(this); > ... > hashCode = regionWrapper.getRegionHashCode(); > {code} > The MetricsRegionSource is added to the set before its hashCode has been > initialized, so the setFromMap cannot put or remove the object correctly. > It would be better like this: > {code:java} > hashCode = regionWrapper.getRegionHashCode(); > agg.register(this); > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-18568) Correct metric of numRegions
[ https://issues.apache.org/jira/browse/HBASE-18568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181792#comment-16181792 ] huaxiang sun commented on HBASE-18568: -- [~psomogyi] and I are debugging a memory leak related to region open/close. We traced it back to metrics being held in this aggregate data structure. I think this needs to go into 1.2 and 1.3 as well. > Correct metric of numRegions > -- > > Key: HBASE-18568 > URL: https://issues.apache.org/jira/browse/HBASE-18568 > Project: HBase > Issue Type: Bug > Components: metrics >Affects Versions: 3.0.0 >Reporter: Shibin Zhang >Assignee: Shibin Zhang >Priority: Critical > Fix For: 3.0.0, 1.4.0, 1.5.0, 2.0.0-alpha-3 > > Attachments: HBASE-18568-V1.patch > > > I found the value of the metric numRegions in Regions is not correct. > The metric does not add or remove regions correctly as regions close or open. > The metric is as follows: > "name" : "Hadoop:service=HBase,name=RegionServer,sub=Regions", > "numRegions" : 2, > After troubleshooting, I found the reason is in > MetricsRegionSourceImpl#MetricsRegionSourceImpl: > {code:java} > agg.register(this); > ... > hashCode = regionWrapper.getRegionHashCode(); > {code} > The MetricsRegionSource is added to the set before its hashCode has been > initialized, so the setFromMap cannot put or remove the object correctly. > It would be better like this: > {code:java} > hashCode = regionWrapper.getRegionHashCode(); > agg.register(this); > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
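The ordering bug described in the issue — registering an object in a hash-keyed set before its hashCode field is initialized — can be reproduced outside HBase with a few lines. This is a self-contained illustration, not the real MetricsRegionSourceImpl:

```java
import java.util.HashSet;
import java.util.Set;

// If the object registers itself in a hash-based set while its hashCode
// field is still the default 0, the set files it under the wrong bucket
// and can never find or remove it once the field is set — exactly the
// leak pattern traced in the comment above.
class MetricsRegionSource {
  private int hashCode; // stays 0 until initialized

  MetricsRegionSource(Set<MetricsRegionSource> agg, int regionHash, boolean buggyOrder) {
    if (buggyOrder) {
      agg.add(this);              // registered while hashCode() returns 0
      this.hashCode = regionHash; // too late: the set bucketed us under 0
    } else {
      this.hashCode = regionHash; // the fix: initialize first...
      agg.add(this);              // ...then register
    }
  }

  @Override
  public int hashCode() { return hashCode; }
}

public class RegistrationOrderDemo {
  public static void main(String[] args) {
    Set<MetricsRegionSource> agg = new HashSet<>();
    MetricsRegionSource leaked = new MetricsRegionSource(agg, 42, true);
    System.out.println("buggy order, removable: " + agg.remove(leaked));  // false

    Set<MetricsRegionSource> agg2 = new HashSet<>();
    MetricsRegionSource ok = new MetricsRegionSource(agg2, 42, false);
    System.out.println("fixed order, removable: " + agg2.remove(ok));     // true
  }
}
```

In the buggy ordering the entry stays in the set forever even after the region closes, which is why this surfaces as a memory leak on region open/close churn.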
[jira] [Created] (HBASE-18884) Coprocessor Design Improvements 2 (Follow up of HBASE-17732)
Appy created HBASE-18884: Summary: Coprocessor Design Improvements 2 (Follow up of HBASE-17732) Key: HBASE-18884 URL: https://issues.apache.org/jira/browse/HBASE-18884 Project: HBase Issue Type: Bug Reporter: Appy Assignee: Appy Creating new jira to track suggestions that came in review (https://reviews.apache.org/r/62141/) but are not blocker and can be done separately. Suggestions by [~apurtell] - Change {{Service Coprocessor#getService()}} to {{List Coprocessor#getServices()}} - I think we overstepped by offering [table resource management via this interface|https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/CoprocessorEnvironment.java#L57]. There are a lot of other internal resource types which could/should be managed this way but they are all left up to the implementor. Perhaps we should remove the table ref management and leave it up to them as well. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-18883) Upgrade to Curator 4.0
[ https://issues.apache.org/jira/browse/HBASE-18883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181785#comment-16181785 ] Hadoop QA commented on HBASE-18883: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 10s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 47s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 5s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 14m 28s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 51s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 6s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 37m 32s{color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green}192m 59s{color} | {color:green} root in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 46s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}260m 45s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:5d60123 | | JIRA Issue | HBASE-18883 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12889140/HBASE-18883.patch | | Optional Tests | asflicense shadedjars javac javadoc unit xml compile | | uname | Linux 85705f62e444 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / 91e1f83 | | Default Java | 1.8.0_144 | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/8803/testReport/ | | modules | C: . U: . | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/8803/console | | Powered by | Apache Yetus 0.4.0 http://yetus.apache.org | This message was automatically generated. > Upgrade to Curator 4.0 > -- > > Key: HBASE-18883 > URL: https://issues.apache.org/jira/browse/HBASE-18883 > Project: HBase > Issue Type: Bug > Components: Client >Reporter: Mike Drob >Assignee: Mike Drob > Fix For: 2.0.0 > > Attachments: HBASE-18883.patch > > > While we're doing a dependency pass for HBase 2, we should see if we can bump > Curator to 4.0 -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-18845) TestReplicationSmallTests fails after HBASE-14004
[ https://issues.apache.org/jira/browse/HBASE-18845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181766#comment-16181766 ] Ted Yu commented on HBASE-18845: Currently working on customer issue. I don't have much bandwidth in completely validating the fix. I ran the test with patch a few times yesterday which passed. > TestReplicationSmallTests fails after HBASE-14004 > - > > Key: HBASE-18845 > URL: https://issues.apache.org/jira/browse/HBASE-18845 > Project: HBase > Issue Type: Bug > Components: Replication >Affects Versions: 3.0.0, 2.0.0-alpha-3 >Reporter: Duo Zhang >Assignee: Duo Zhang > Fix For: 3.0.0, 2.0.0-alpha-4 > > Attachments: HBASE-18845.patch > > > testEmptyWALRecovery and testVerifyRepJob -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-18105) [AMv2] Split/Merge need cleanup; currently they diverge and do not fully embrace AMv2 world
[ https://issues.apache.org/jira/browse/HBASE-18105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Liang updated HBASE-18105: - Status: Patch Available (was: Open) > [AMv2] Split/Merge need cleanup; currently they diverge and do not fully > embrace AMv2 world > --- > > Key: HBASE-18105 > URL: https://issues.apache.org/jira/browse/HBASE-18105 > Project: HBase > Issue Type: Sub-task > Components: Region Assignment >Affects Versions: 2.0.0 >Reporter: stack >Assignee: Yi Liang > Fix For: 2.0.0 > > Attachments: HBASE-14350-V1-master.patch > > > Region Split and Merge work on the new AMv2 but they work differently. This > issue is about bringing them back together and fully embracing the AMv2 > program. > They both have issues mostly the fact that they carry around baggage no > longer necessary in the new world of assignment. > Here are some of the items: > Split and Merge metadata modifications are done by the Master now but we have > vestige of Split/Merge on RS still; e.g. when we SPLIT, we ask the Master > which asks the RS, which turns around, and asks the Master to run the > operation. Fun. MERGE is all done Master-side. > > Clean this up. Remove asking RS to run SPLIT and remove RegionMergeRequest, > etc. on RS-side. Also remove PONR. We don’t Points-Of-No-Return now we are up > on Pv2. Remove all calls in Interfaces; they are unused. Make RS still able > to detect when split, but have it be a client of Master like anyone else. > Split is Async but does not return procId > Split is async. Doesn’t return the procId though. Merge does. Fix. Only hard > part here I think is the Admin API does not allow procid return. > Flags > Currently OFFLINE is determined by looking either at the master instance of > HTD (isOffline) and/or at the RegionState#state. Ditto for SPLIT. For MERGE, > we rely on RegionState#state. Related is a note above on how split works -- > there is a split flag in HTD when there should not be. 
> > TODO is move to rely on RegionState#state exclusively in Master. > From Split/Merge Procedures need finishing in > https://docs.google.com/document/d/1eVKa7FHdeoJ1-9o8yZcOTAQbv0u0bblBlCCzVSIn69g/edit#heading=h.4b60dc1h4m1f -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-18105) [AMv2] Split/Merge need cleanup; currently they diverge and do not fully embrace AMv2 world
[ https://issues.apache.org/jira/browse/HBASE-18105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Liang updated HBASE-18105: - Attachment: HBASE-14350-V1-master.patch > [AMv2] Split/Merge need cleanup; currently they diverge and do not fully > embrace AMv2 world > --- > > Key: HBASE-18105 > URL: https://issues.apache.org/jira/browse/HBASE-18105 > Project: HBase > Issue Type: Sub-task > Components: Region Assignment >Affects Versions: 2.0.0 >Reporter: stack >Assignee: Yi Liang > Fix For: 2.0.0 > > Attachments: HBASE-14350-V1-master.patch > > > Region Split and Merge work on the new AMv2 but they work differently. This > issue is about bringing them back together and fully embracing the AMv2 > program. > They both have issues mostly the fact that they carry around baggage no > longer necessary in the new world of assignment. > Here are some of the items: > Split and Merge metadata modifications are done by the Master now but we have > vestige of Split/Merge on RS still; e.g. when we SPLIT, we ask the Master > which asks the RS, which turns around, and asks the Master to run the > operation. Fun. MERGE is all done Master-side. > > Clean this up. Remove asking RS to run SPLIT and remove RegionMergeRequest, > etc. on RS-side. Also remove PONR. We don’t Points-Of-No-Return now we are up > on Pv2. Remove all calls in Interfaces; they are unused. Make RS still able > to detect when split, but have it be a client of Master like anyone else. > Split is Async but does not return procId > Split is async. Doesn’t return the procId though. Merge does. Fix. Only hard > part here I think is the Admin API does not allow procid return. > Flags > Currently OFFLINE is determined by looking either at the master instance of > HTD (isOffline) and/or at the RegionState#state. Ditto for SPLIT. For MERGE, > we rely on RegionState#state. Related is a note above on how split works -- > there is a split flag in HTD when there should not be. 
> > TODO is move to rely on RegionState#state exclusively in Master. > From Split/Merge Procedures need finishing in > https://docs.google.com/document/d/1eVKa7FHdeoJ1-9o8yZcOTAQbv0u0bblBlCCzVSIn69g/edit#heading=h.4b60dc1h4m1f -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-18105) [AMv2] Split/Merge need cleanup; currently they diverge and do not fully embrace AMv2 world
[ https://issues.apache.org/jira/browse/HBASE-18105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181757#comment-16181757 ] Yi Liang commented on HBASE-18105: -- {quote}It is better that the procedure 'succeed' even if the end-point is not exactly what was asked for rather than rollback and fail.{quote} I think split/merge should not support rollback after updating META. It is complicated to roll back a merge/split operation that the region server is working on; rollback is not supported and we should let the merge/split operation complete. In the patch, I changed the name of PONR, and also wrote test cases for that. > [AMv2] Split/Merge need cleanup; currently they diverge and do not fully > embrace AMv2 world > --- > > Key: HBASE-18105 > URL: https://issues.apache.org/jira/browse/HBASE-18105 > Project: HBase > Issue Type: Sub-task > Components: Region Assignment >Affects Versions: 2.0.0 >Reporter: stack >Assignee: Yi Liang > Fix For: 2.0.0 > > Attachments: HBASE-14350-V1-master.patch > > > Region Split and Merge work on the new AMv2 but they work differently. This > issue is about bringing them back together and fully embracing the AMv2 > program. > They both have issues mostly the fact that they carry around baggage no > longer necessary in the new world of assignment. > Here are some of the items: > Split and Merge metadata modifications are done by the Master now but we have > vestige of Split/Merge on RS still; e.g. when we SPLIT, we ask the Master > which asks the RS, which turns around, and asks the Master to run the > operation. Fun. MERGE is all done Master-side. > > Clean this up. Remove asking RS to run SPLIT and remove RegionMergeRequest, > etc. on RS-side. Also remove PONR. We don’t Points-Of-No-Return now we are up > on Pv2. Remove all calls in Interfaces; they are unused. Make RS still able > to detect when split, but have it be a client of Master like anyone else. 
> Split is Async but does not return procId > Split is async. Doesn’t return the procId though. Merge does. Fix. Only hard > part here I think is the Admin API does not allow procid return. > Flags > Currently OFFLINE is determined by looking either at the master instance of > HTD (isOffline) and/or at the RegionState#state. Ditto for SPLIT. For MERGE, > we rely on RegionState#state. Related is a note above on how split works -- > there is a split flag in HTD when there should not be. > > TODO is move to rely on RegionState#state exclusively in Master. > From Split/Merge Procedures need finishing in > https://docs.google.com/document/d/1eVKa7FHdeoJ1-9o8yZcOTAQbv0u0bblBlCCzVSIn69g/edit#heading=h.4b60dc1h4m1f -- This message was sent by Atlassian JIRA (v6.4.14#64029)
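The "split is async but does not return procId" item boils down to the submit path handing a procedure id back to the caller so it can poll for completion, as merge already allows. A hypothetical sketch (invented names, not the real Admin/ProcedureExecutor API):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;

// Toy procedure executor: submitting returns the procId immediately;
// the actual work would run asynchronously in the background.
class ProcedureExecutor {
  private final AtomicLong nextProcId = new AtomicLong(1);
  private final Map<Long, Boolean> finished = new HashMap<>();

  long submit(String description) {
    long procId = nextProcId.getAndIncrement();
    finished.put(procId, false); // queued, not yet done
    return procId;               // caller gets the id right away
  }

  void markFinished(long procId) { finished.put(procId, true); }

  boolean isFinished(long procId) { return finished.getOrDefault(procId, false); }
}

public class AsyncSplitDemo {
  public static void main(String[] args) {
    ProcedureExecutor exec = new ProcedureExecutor();
    // An async split now yields something the client can wait on.
    long procId = exec.submit("SPLIT region-a");
    System.out.println("submitted procId=" + procId
        + " finished=" + exec.isFinished(procId));
    exec.markFinished(procId);
    System.out.println("finished=" + exec.isFinished(procId));
  }
}
```

The hard part the comment flags — the Admin API not allowing a procId return — is a compatibility question, not an implementation one; the sketch only shows the shape of the desired contract.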
[jira] [Comment Edited] (HBASE-18845) TestReplicationSmallTests fails after HBASE-14004
[ https://issues.apache.org/jira/browse/HBASE-18845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181746#comment-16181746 ] Duo Zhang edited comment on HBASE-18845 at 9/26/17 11:39 PM: - No, this is not a behavior change, it is just a UT trick. Obviously you do not understand what I'm saying... Maybe my poor English... So please just help verifying if the fix works for you. Thanks. was (Author: apache9): No, this is not a behavior change, it is just a UT trick. Obviously you do not understand what I'm saying... So please just help verifying if the fix works for you. Thanks. > TestReplicationSmallTests fails after HBASE-14004 > - > > Key: HBASE-18845 > URL: https://issues.apache.org/jira/browse/HBASE-18845 > Project: HBase > Issue Type: Bug > Components: Replication >Affects Versions: 3.0.0, 2.0.0-alpha-3 >Reporter: Duo Zhang >Assignee: Duo Zhang > Fix For: 3.0.0, 2.0.0-alpha-4 > > Attachments: HBASE-18845.patch > > > testEmptyWALRecovery and testVerifyRepJob -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-18830) TestCanaryTool does not check Canary monitor's error code
[ https://issues.apache.org/jira/browse/HBASE-18830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181751#comment-16181751 ] Hudson commented on HBASE-18830: FAILURE: Integrated in Jenkins build HBase-2.0 #584 (See [https://builds.apache.org/job/HBase-2.0/584/]) Amend HBASE-18830 TestCanaryTool does not check Canary monitor's error (apurtell: rev ede916af5a62ac5b10a2a2911dc47b580299b239) * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/tool/Canary.java > TestCanaryTool does not check Canary monitor's error code > - > > Key: HBASE-18830 > URL: https://issues.apache.org/jira/browse/HBASE-18830 > Project: HBase > Issue Type: Bug >Reporter: Chinmay Kulkarni >Assignee: Chinmay Kulkarni > Fix For: 2.0.0, 3.0.0, 1.4.0, 1.5.0 > > Attachments: HBASE-18830.001.patch > > > None of the tests inside TestCanaryTool check Canary monitor's error code. > Thus, it is possible that the monitor has registered an error and yet the > tests pass. We should check the value returned by the _ToolRunner.run()_ > method inside each unit test. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
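The pattern the issue asks for — asserting on the exit code that ToolRunner.run returns instead of ignoring it — can be shown with a self-contained mimic (stand-in types, not Hadoop's real Tool/ToolRunner):

```java
// A Tool-like interface whose int return value is the exit code.
interface Tool {
  int run(String[] args);
}

// Stand-in for the Canary tool: nonzero when the monitor saw an error.
class CanaryStub implements Tool {
  private final boolean monitorError;
  CanaryStub(boolean monitorError) { this.monitorError = monitorError; }

  @Override
  public int run(String[] args) {
    return monitorError ? 1 : 0;
  }
}

public class ExitCodeDemo {
  public static void main(String[] args) {
    // A test that only checks "run() did not throw" passes even when the
    // monitor failed; the exit code must be asserted explicitly.
    int ok = new CanaryStub(false).run(new String[0]);
    int failed = new CanaryStub(true).run(new String[0]);
    System.out.println("ok=" + ok + " failed=" + failed);
  }
}
```

In the real tests this would be something like asserting that org.apache.hadoop.util.ToolRunner.run(conf, canary, args) returns 0 in each unit test.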
[jira] [Commented] (HBASE-18845) TestReplicationSmallTests fails after HBASE-14004
[ https://issues.apache.org/jira/browse/HBASE-18845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181746#comment-16181746 ] Duo Zhang commented on HBASE-18845: --- No, this is not a behavior change, it is just a UT trick. Obviously you do not understand what I'm saying... So please just help verifying if the fix works for you. Thanks. > TestReplicationSmallTests fails after HBASE-14004 > - > > Key: HBASE-18845 > URL: https://issues.apache.org/jira/browse/HBASE-18845 > Project: HBase > Issue Type: Bug > Components: Replication >Affects Versions: 3.0.0, 2.0.0-alpha-3 >Reporter: Duo Zhang >Assignee: Duo Zhang > Fix For: 3.0.0, 2.0.0-alpha-4 > > Attachments: HBASE-18845.patch > > > testEmptyWALRecovery and testVerifyRepJob -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-18839) Apply RegionInfo to code base
[ https://issues.apache.org/jira/browse/HBASE-18839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chia-Ping Tsai updated HBASE-18839: --- Summary: Apply RegionInfo to code base (was: Region#getRegionInfo should return RegionInfo instead of HRegionInfo) > Apply RegionInfo to code base > - > > Key: HBASE-18839 > URL: https://issues.apache.org/jira/browse/HBASE-18839 > Project: HBase > Issue Type: Sub-task > Components: Coprocessors >Reporter: Chia-Ping Tsai >Assignee: Chia-Ping Tsai > Fix For: 2.0.0-alpha-4 > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-18883) Upgrade to Curator 4.0
[ https://issues.apache.org/jira/browse/HBASE-18883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181676#comment-16181676 ] stack commented on HBASE-18883: --- bq. It's Guava 20.0, not netty.. Thanks for the correction. bq. If so, we could probably do some hacks to exclude the guava stuff they've already shaded so that we don't end up with org.apache.hbase.shaded.curator.org.apache.curator.shaded.com.google? What you thinking? (That would be a pretty package name). > Upgrade to Curator 4.0 > -- > > Key: HBASE-18883 > URL: https://issues.apache.org/jira/browse/HBASE-18883 > Project: HBase > Issue Type: Bug > Components: Client >Reporter: Mike Drob >Assignee: Mike Drob > Fix For: 2.0.0 > > Attachments: HBASE-18883.patch > > > While we're doing a dependency pass for HBase 2, we should see if we can bump > Curator to 4.0 -- This message was sent by Atlassian JIRA (v6.4.14#64029)
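The "hacks to exclude the guava stuff they've already shaded" idea would, in maven-shade-plugin terms, look roughly like the fragment below. This is a hedged sketch only — the shadedPattern prefix and the exclude path are assumptions for illustration, not HBase's actual thirdparty build config:

```xml
<!-- Sketch: relocate Curator, but leave its already-shaded Guava packages
     alone so classes don't end up double-prefixed like
     org.apache.hbase.shaded.curator.org.apache.curator.shaded.com.google. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <relocations>
      <relocation>
        <pattern>org.apache.curator</pattern>
        <shadedPattern>org.apache.hbase.thirdparty.org.apache.curator</shadedPattern>
        <excludes>
          <!-- Curator 4.0 bundles Guava under this prefix already. -->
          <exclude>org.apache.curator.shaded.**</exclude>
        </excludes>
      </relocation>
    </relocations>
  </configuration>
</plugin>
```

The relocation-level excludes element is the shade plugin's mechanism for carving a sub-package out of a relocation pattern.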
[jira] [Commented] (HBASE-12081) Considering Java 9
[ https://issues.apache.org/jira/browse/HBASE-12081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181639#comment-16181639 ] Mike Drob commented on HBASE-12081: --- On the downloads page, it claims that "Builds for platforms other than Linux/x64 will be published at a later date." So those of us developing on OSX are still stuck with beta builds. > Considering Java 9 > -- > > Key: HBASE-12081 > URL: https://issues.apache.org/jira/browse/HBASE-12081 > Project: HBase > Issue Type: Umbrella >Reporter: Andrew Purtell >Assignee: Sean Busbey >Priority: Blocker > Fix For: 2.0.0, 1.4.0, 1.5.0 > > > Java 9 will ship in 2016. This will be the first Java release that makes a > significant compatibility departure from earlier runtimes. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-14451) Move on to htrace-4.0.1 (from htrace-3.2.0) and tell a couple of good trace stories
[ https://issues.apache.org/jira/browse/HBASE-14451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mike Drob updated HBASE-14451: -- Resolution: Duplicate Status: Resolved (was: Patch Available) > Move on to htrace-4.0.1 (from htrace-3.2.0) and tell a couple of good trace > stories > --- > > Key: HBASE-14451 > URL: https://issues.apache.org/jira/browse/HBASE-14451 > Project: HBase > Issue Type: Task > Components: Operability, Performance >Reporter: stack >Assignee: stack >Priority: Critical > Attachments: 14451.txt, 14451.v10.txt, 14451.v10.txt, 14451v11.patch, > 14451v13.txt, 14451v15.patch, 14451v2.txt, 14451v3.txt, 14451v4.txt, > 14451v5.txt, 14451v6.txt, 14451v7.txt, 14451v8.txt, 14451v9.txt, > 14451.wip.v16.patch, 14451.wip.v17.patch, 14451.wip.v18.patch, > 14551v12.patch, 14888v14.txt > > > htrace-4.0.0 was just release with a new API. Get up on it. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-6218) Add dynamic on/off tracing facility to regionserver; lightweight(?) record of read/write load
[ https://issues.apache.org/jira/browse/HBASE-6218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mike Drob updated HBASE-6218: - Status: In Progress (was: Patch Available) No patch attached, kicking out of patch available. > Add dynamic on/off tracing facility to regionserver; lightweight(?) record of > read/write load > - > > Key: HBASE-6218 > URL: https://issues.apache.org/jira/browse/HBASE-6218 > Project: HBase > Issue Type: New Feature > Components: test >Reporter: stack >Assignee: stack > > It'd be sweet if we could kick a regionserver and have it start recording the > read/write load. Then after we'd taken a sample, we could turn off the > recording. > Chatting at the meetup today, replaying the WALs would give you the write > side (though missing would be the rate at which the client should play the > edits -- perhaps we could add this to the WALEdit if its not already there?). > Read side we'd need something new recording the read load (Perhaps we'd have > a single trace for read and write but somehow you could get the write from > the WAL logs). It would be nice too if we could verify that we read the > right thing somehow (hash of the return when the trace switch is thrown? > Would need to cater to differences in timestamp possibly?) -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-18601) Update Htrace to 4.2
[ https://issues.apache.org/jira/browse/HBASE-18601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mike Drob updated HBASE-18601: -- Status: Open (was: Patch Available) Moving back to In-Progress pending feedback. > Update Htrace to 4.2 > > > Key: HBASE-18601 > URL: https://issues.apache.org/jira/browse/HBASE-18601 > Project: HBase > Issue Type: Task >Affects Versions: 2.0.0, 3.0.0 >Reporter: Tamas Penzes >Assignee: Tamas Penzes > Fix For: 2.0.0-alpha-4 > > Attachments: HBASE-18601.master.001.patch, > HBASE-18601.master.002.patch, HBASE-18601.master.003 (3).patch, > HBASE-18601.master.003.patch, HBASE-18601.master.004.patch, > HBASE-18601.master.004.patch > > > HTrace is not perfectly integrated into HBase, the version 3.2.0 is buggy, > the upgrade to 4.x is not trivial and would take time. It might not worth to > keep it in this state, so would be better to remove it. > Of course it doesn't mean tracing would be useless, just that in this form > the use of HTrace 3.2 might not add any value to the project and fixing it > would be far too much effort. > - > Based on the decision of the community we keep htrace now and update version -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-18405) Track scope for HBase-Spark module
[ https://issues.apache.org/jira/browse/HBASE-18405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181623#comment-16181623 ] Mike Drob commented on HBASE-18405: --- What do we need to mark this done, [~busbey]? > Track scope for HBase-Spark module > -- > > Key: HBASE-18405 > URL: https://issues.apache.org/jira/browse/HBASE-18405 > Project: HBase > Issue Type: Task > Components: spark >Reporter: Sean Busbey >Assignee: Sean Busbey > Fix For: 3.0.0, 2.1.0, 1.5.0 > > Attachments: Apache HBase - Apache Spark Integration Scope.pdf, > Apache HBase - Apache Spark Integration Scope - update 1.pdf > > > Start with [\[DISCUSS\] status of and plans for our hbase-spark integration > |https://lists.apache.org/thread.html/fd74ef9b9da77abf794664f06ea19c839fb3d543647fb29115081683@%3Cdev.hbase.apache.org%3E] > and formalize into a scope document for bringing this feature into a release. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-18874) HMaster abort message will be skipped if Throwable is passed null
[ https://issues.apache.org/jira/browse/HBASE-18874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181620#comment-16181620 ] Ted Yu commented on HBASE-18874: I am waiting for branch-1 patch. > HMaster abort message will be skipped if Throwable is passed null > - > > Key: HBASE-18874 > URL: https://issues.apache.org/jira/browse/HBASE-18874 > Project: HBase > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Pankaj Kumar >Assignee: Pankaj Kumar >Priority: Minor > Attachments: HBASE-18874.patch > > > In HMaster class, we are logging abort message only in case when Throwable is > not null, > {noformat} > if (t != null) LOG.fatal(msg, t); > {noformat} > We will miss the abort message in this case. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
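The one-line guard quoted above drops the abort message entirely when the Throwable is null; the fix is to log the message in both branches. A minimal, self-contained sketch of the pattern (the `fatal` logger and `lastLogged` field below are stand-ins for illustration, not HBase's actual `Log` API):

```java
// Sketch of the null-safe abort-logging pattern discussed in HBASE-18874.
public class AbortLogSketch {
    static String lastLogged; // records what the stand-in logger last saw

    // Stand-in for Log.fatal(msg, t); tolerates a null Throwable.
    static void fatal(String msg, Throwable t) {
        lastLogged = (t == null) ? msg : msg + " " + t;
    }

    // Buggy shape from the report: the message is lost when t is null.
    static void abortBuggy(String msg, Throwable t) {
        if (t != null) fatal(msg, t);
    }

    // Fixed shape: the abort message is always logged.
    static void abortFixed(String msg, Throwable t) {
        if (t != null) {
            fatal(msg, t);
        } else {
            fatal(msg, null);
        }
    }

    public static void main(String[] args) {
        lastLogged = null;
        abortBuggy("aborting master", null);
        System.out.println("buggy logged: " + lastLogged);  // prints "buggy logged: null"

        abortFixed("aborting master", null);
        System.out.println("fixed logged: " + lastLogged);  // prints "fixed logged: aborting master"
    }
}
```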
[jira] [Commented] (HBASE-18298) RegionServerServices Interface cleanup for CP expose
[ https://issues.apache.org/jira/browse/HBASE-18298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181605#comment-16181605 ] Hadoop QA commented on HBASE-18298: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 1s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 65 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 55s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 59s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 25s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 45s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 24s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 11m 0s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 8m 25s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 42s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 21s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 47s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 50m 31s{color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 11m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 41s{color} | {color:green} hbase-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 79m 46s{color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 33s{color} | {color:green} hbase-mapreduce in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 25s{color} | {color:green} hbase-thrift in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 48s{color} | {color:green} hbase-rsgroup in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 36s{color} | {color:green} hbase-endpoint in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 21s{color} | {color:green} hbase-examples in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 3m 8s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}219m 57s{color} | {color:black}
[jira] [Commented] (HBASE-18874) HMaster abort message will be skipped if Throwable is passed null
[ https://issues.apache.org/jira/browse/HBASE-18874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181590#comment-16181590 ] Mike Drob commented on HBASE-18874: --- [~tedyu] - This is done? Can you add fix versions and resolve the JIRA if that is the case? > HMaster abort message will be skipped if Throwable is passed null > - > > Key: HBASE-18874 > URL: https://issues.apache.org/jira/browse/HBASE-18874 > Project: HBase > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Pankaj Kumar >Assignee: Pankaj Kumar >Priority: Minor > Attachments: HBASE-18874.patch > > > In HMaster class, we are logging abort message only in case when Throwable is > not null, > {noformat} > if (t != null) LOG.fatal(msg, t); > {noformat} > We will miss the abort message in this case. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-18883) Upgrade to Curator 4.0
[ https://issues.apache.org/jira/browse/HBASE-18883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181585#comment-16181585 ] Mike Drob commented on HBASE-18883: --- bq. Last time I looked curator has its own shaded netty so there'd be two shaded netty's in hbase-thirdparty... It's Guava 20.0, not netty, and also excludes Function, Predicate, and TypeToken because those leaked into their public API at some point. Maybe we do need to re-shade Curator to avoid exposing those? If so, we could probably do some hacks to exclude the guava stuff they've already shaded so that we don't end up with org.apache.hbase.shaded.curator.org.apache.curator.shaded.com.google? > Upgrade to Curator 4.0 > -- > > Key: HBASE-18883 > URL: https://issues.apache.org/jira/browse/HBASE-18883 > Project: HBase > Issue Type: Bug > Components: Client >Reporter: Mike Drob >Assignee: Mike Drob > Fix For: 2.0.0 > > Attachments: HBASE-18883.patch > > > While we're doing a dependency pass for HBase 2, we should see if we can bump > Curator to 4.0 -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-18874) HMaster abort message will be skipped if Throwable is passed null
[ https://issues.apache.org/jira/browse/HBASE-18874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181552#comment-16181552 ] Hudson commented on HBASE-18874: FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #3784 (See [https://builds.apache.org/job/HBase-Trunk_matrix/3784/]) HBASE-18874, HMaster abort message will be skipped if Throwable is (tedyu: rev 9e7b16b88ed6e9c2c3f53743bfe5c74098f2b8a4) * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java > HMaster abort message will be skipped if Throwable is passed null > - > > Key: HBASE-18874 > URL: https://issues.apache.org/jira/browse/HBASE-18874 > Project: HBase > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Pankaj Kumar >Assignee: Pankaj Kumar >Priority: Minor > Attachments: HBASE-18874.patch > > > In HMaster class, we are logging abort message only in case when Throwable is > not null, > {noformat} > if (t != null) LOG.fatal(msg, t); > {noformat} > We will miss the abort message in this case. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-18830) TestCanaryTool does not check Canary monitor's error code
[ https://issues.apache.org/jira/browse/HBASE-18830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181506#comment-16181506 ] Hudson commented on HBASE-18830: SUCCESS: Integrated in Jenkins build HBase-1.5 #78 (See [https://builds.apache.org/job/HBase-1.5/78/]) Amend HBASE-18830 TestCanaryTool does not check Canary monitor's error (apurtell: rev 3abc0458e9ccd5b529581dcff4b6a0dba594fcdc) * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/tool/Canary.java > TestCanaryTool does not check Canary monitor's error code > - > > Key: HBASE-18830 > URL: https://issues.apache.org/jira/browse/HBASE-18830 > Project: HBase > Issue Type: Bug >Reporter: Chinmay Kulkarni >Assignee: Chinmay Kulkarni > Fix For: 2.0.0, 3.0.0, 1.4.0, 1.5.0 > > Attachments: HBASE-18830.001.patch > > > None of the tests inside TestCanaryTool check Canary monitor's error code. > Thus, it is possible that the monitor has registered an error and yet the > tests pass. We should check the value returned by the _ToolRunner.run()_ > method inside each unit test. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
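The change the report asks for is to assert on the exit code that `ToolRunner.run()` hands back, instead of discarding it. A self-contained sketch of the pattern, where `Tool` and `runTool` are minimal stand-ins for the Hadoop API rather than the real classes:

```java
// Why ignoring the tool's exit code hides failures (stand-in API, not Hadoop's).
public class CanaryExitCodeSketch {
    interface Tool {
        int run(String[] args) throws Exception;
    }

    // Mimics ToolRunner.run(): propagates the tool's exit code to the caller.
    static int runTool(Tool tool, String... args) throws Exception {
        return tool.run(args);
    }

    public static void main(String[] args) throws Exception {
        Tool failingMonitor = a -> 4; // the monitor registered an error

        int code = runTool(failingMonitor);
        // A test that merely runs the tool passes even though the monitor
        // failed; asserting on `code` is what surfaces the error.
        if (code != 0) {
            System.out.println("canary monitor failed with code " + code);
        }
    }
}
```

In the real tests this means each call site captures `ToolRunner.run(...)`'s return value and asserts it is zero, rather than invoking it as a statement.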
[jira] [Commented] (HBASE-18883) Upgrade to Curator 4.0
[ https://issues.apache.org/jira/browse/HBASE-18883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181502#comment-16181502 ] stack commented on HBASE-18883: --- bq. no please don't update one of our dependencies to a beta. we call out not supporting hadoop versions because of similar not-production warnings. Smile. Sure. > Upgrade to Curator 4.0 > -- > > Key: HBASE-18883 > URL: https://issues.apache.org/jira/browse/HBASE-18883 > Project: HBase > Issue Type: Bug > Components: Client >Reporter: Mike Drob >Assignee: Mike Drob > Fix For: 2.0.0 > > Attachments: HBASE-18883.patch > > > While we're doing a dependency pass for HBase 2, we should see if we can bump > Curator to 4.0 -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-18883) Upgrade to Curator 4.0
[ https://issues.apache.org/jira/browse/HBASE-18883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181494#comment-16181494 ] Sean Busbey commented on HBASE-18883: - bq. Oh, we could go to zk 3.5 (BETA!) too.. if that helps. no please don't update one of our dependencies to a beta. we call out not supporting hadoop versions because of similar not-production warnings. > Upgrade to Curator 4.0 > -- > > Key: HBASE-18883 > URL: https://issues.apache.org/jira/browse/HBASE-18883 > Project: HBase > Issue Type: Bug > Components: Client >Reporter: Mike Drob >Assignee: Mike Drob > Fix For: 2.0.0 > > Attachments: HBASE-18883.patch > > > While we're doing a dependency pass for HBase 2, we should see if we can bump > Curator to 4.0 -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-18883) Upgrade to Curator 4.0
[ https://issues.apache.org/jira/browse/HBASE-18883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181483#comment-16181483 ] stack commented on HBASE-18883: --- Oh, we could go to zk 3.5 (BETA!) too.. if that helps. > Upgrade to Curator 4.0 > -- > > Key: HBASE-18883 > URL: https://issues.apache.org/jira/browse/HBASE-18883 > Project: HBase > Issue Type: Bug > Components: Client >Reporter: Mike Drob >Assignee: Mike Drob > Fix For: 2.0.0 > > Attachments: HBASE-18883.patch > > > While we're doing a dependency pass for HBase 2, we should see if we can bump > Curator to 4.0 -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-18874) HMaster abort message will be skipped if Throwable is passed null
[ https://issues.apache.org/jira/browse/HBASE-18874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181481#comment-16181481 ] Hudson commented on HBASE-18874: FAILURE: Integrated in Jenkins build HBase-2.0 #583 (See [https://builds.apache.org/job/HBase-2.0/583/]) HBASE-18874, HMaster abort message will be skipped if Throwable is (tedyu: rev 6d0eb0eef019a524e5226df6f8a4bee69908ea3c) * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java > HMaster abort message will be skipped if Throwable is passed null > - > > Key: HBASE-18874 > URL: https://issues.apache.org/jira/browse/HBASE-18874 > Project: HBase > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Pankaj Kumar >Assignee: Pankaj Kumar >Priority: Minor > Attachments: HBASE-18874.patch > > > In HMaster class, we are logging abort message only in case when Throwable is > not null, > {noformat} > if (t != null) LOG.fatal(msg, t); > {noformat} > We will miss the abort message in this case. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-18762) Canary sink type cast error
[ https://issues.apache.org/jira/browse/HBASE-18762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16181480#comment-16181480 ] Chinmay Kulkarni commented on HBASE-18762: -- Thanks [~apurtell]. > Canary sink type cast error > --- > > Key: HBASE-18762 > URL: https://issues.apache.org/jira/browse/HBASE-18762 > Project: HBase > Issue Type: Bug >Reporter: Chinmay Kulkarni >Assignee: Chinmay Kulkarni > Fix For: 2.0.0, 3.0.0, 1.4.0, 1.5.0 > > Attachments: HBASE-18762.001.patch, HBASE-18830-addendum.patch > > > When running the main method of Canary.java, we see the following error: > Exception in thread "main" java.lang.ClassCastException: > org.apache.hadoop.hbase.tool.Canary$RegionServerStdOutSink cannot be cast to > org.apache.hadoop.hbase.tool.Canary$RegionStdOutSink > at org.apache.hadoop.hbase.tool.Canary.newMonitor(Canary.java:911) > at org.apache.hadoop.hbase.tool.Canary.run(Canary.java:796) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) > at org.apache.hadoop.hbase.tool.Canary.main(Canary.java:1571) > This happens because we typecast the sink depending on the mode (zookeeper > mode/region server mode) that Canary is configured in. In case no mode is > specified, we typecast the sink into _RegionStdOutSink_. In general, it is > possible to provide inconsistent mode and sink types while running Canary. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
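The cast failure described above can be reproduced and guarded in miniature. `Sink`, `RegionStdOutSink`, and `RegionServerStdOutSink` below are minimal stand-ins for Canary's inner classes, not the real implementations:

```java
// Reproduces the ClassCastException pattern from HBASE-18762 with stand-in types.
public class SinkCastSketch {
    interface Sink {}
    static class RegionStdOutSink implements Sink {}
    static class RegionServerStdOutSink implements Sink {}

    // Unchecked cast, as in Canary.newMonitor(): throws ClassCastException
    // at runtime when the configured sink does not match the monitor mode.
    static RegionStdOutSink unsafe(Sink s) {
        return (RegionStdOutSink) s;
    }

    // Defensive variant: validate the sink type before casting.
    static RegionStdOutSink safe(Sink s) {
        if (!(s instanceof RegionStdOutSink)) {
            throw new IllegalArgumentException("sink "
                + s.getClass().getSimpleName() + " does not match region mode");
        }
        return (RegionStdOutSink) s;
    }

    public static void main(String[] args) {
        Sink configured = new RegionServerStdOutSink(); // mode/sink mismatch
        try {
            unsafe(configured);
        } catch (ClassCastException e) {
            System.out.println("unchecked cast failed: ClassCastException");
        }
        try {
            safe(configured);
        } catch (IllegalArgumentException e) {
            System.out.println("validated cast rejected: " + e.getMessage());
        }
    }
}
```

Validating before casting turns a confusing `ClassCastException` deep in `newMonitor` into an actionable configuration error about the mode/sink mismatch.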