[jira] [Created] (HDFS-14438) Fix typo in HDFS for OfflineEditsVisitorFactory.java

2019-04-17 Thread bianqi (JIRA)
bianqi created HDFS-14438:
-

 Summary: Fix typo in HDFS for OfflineEditsVisitorFactory.java
 Key: HDFS-14438
 URL: https://issues.apache.org/jira/browse/HDFS-14438
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Affects Versions: 3.1.2
Reporter: bianqi


https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/OfflineEditsVisitorFactory.java#L68
proccesor -> processor



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-14437) Exception happened when rollEditLog expects empty EditsDoubleBuffer.bufCurrent but not

2019-04-17 Thread angerszhu (JIRA)
angerszhu created HDFS-14437:


 Summary: Exception happened when rollEditLog expects empty 
EditsDoubleBuffer.bufCurrent but not
 Key: HDFS-14437
 URL: https://issues.apache.org/jira/browse/HDFS-14437
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: angerszhu


For the problem mentioned in https://issues.apache.org/jira/browse/HDFS-10943, 
I traced the EditLog write/flush path and some of its important methods. I 
found that in the FSEditLog class, the close() method runs the following 
sequence:

 
{code:java}
waitForSyncToFinish();
endCurrentLogSegment(true);{code}
Since close() holds the object lock, when waitForSyncToFinish() returns, all 
logSync work has completed and all data in bufReady has been flushed out. And 
because the current thread still holds the lock when it calls 
endCurrentLogSegment(), no other thread can acquire it, so no new edits can be 
written into bufCurrent.

But if waitForSyncToFinish() is not called before endCurrentLogSegment(), an 
auto-scheduled logSync() flush may still be in progress, because that flush 
step deliberately runs without synchronization, as noted in the comment of the 
logSync() method:

 
{code:java}
/**
 * Sync all modifications done by this thread.
 *
 * The internal concurrency design of this class is as follows:
 *   - Log items are written synchronized into an in-memory buffer,
 * and each assigned a transaction ID.
 *   - When a thread (client) would like to sync all of its edits, logSync()
 * uses a ThreadLocal transaction ID to determine what edit number must
 * be synced to.
 *   - The isSyncRunning volatile boolean tracks whether a sync is currently
 * under progress.
 *
 * The data is double-buffered within each edit log implementation so that
 * in-memory writing can occur in parallel with the on-disk writing.
 *
 * Each sync occurs in three steps:
 *   1. synchronized, it swaps the double buffer and sets the isSyncRunning
 *  flag.
 *   2. unsynchronized, it flushes the data to storage
 *   3. synchronized, it resets the flag and notifies anyone waiting on the
 *  sync.
 *
 * The lack of synchronization on step 2 allows other threads to continue
 * to write into the memory buffer while the sync is in progress.
 * Because this step is unsynchronized, actions that need to avoid
 * concurrency with sync() should be synchronized and also call
 * waitForSyncToFinish() before assuming they are running alone.
 */
public void logSync() {
  long syncStart = 0;

  // Fetch the transactionId of this thread. 
  long mytxid = myTransactionId.get().txid;
  
  boolean sync = false;
  try {
EditLogOutputStream logStream = null;
synchronized (this) {
  try {
printStatistics(false);

// if somebody is already syncing, then wait
while (mytxid > synctxid && isSyncRunning) {
  try {
wait(1000);
  } catch (InterruptedException ie) {
  }
}

//
// If this transaction was already flushed, then nothing to do
//
if (mytxid <= synctxid) {
  numTransactionsBatchedInSync++;
  if (metrics != null) {
// Metrics is non-null only when used inside name node
metrics.incrTransactionsBatchedInSync();
  }
  return;
}
   
// now, this thread will do the sync
syncStart = txid;
isSyncRunning = true;
sync = true;

// swap buffers
try {
  if (journalSet.isEmpty()) {
throw new IOException("No journals available to flush");
  }
  editLogStream.setReadyToFlush();
} catch (IOException e) {
  final String msg =
  "Could not sync enough journals to persistent storage " +
  "due to " + e.getMessage() + ". " +
  "Unsynced transactions: " + (txid - synctxid);
  LOG.fatal(msg, new Exception());
  synchronized(journalSetLock) {
IOUtils.cleanup(LOG, journalSet);
  }
  terminate(1, msg);
}
  } finally {
// Prevent RuntimeException from blocking other log edit write 
doneWithAutoSyncScheduling();
  }
  //editLogStream may become null,
  //so store a local variable for flush.
  logStream = editLogStream;
}

// do the sync
long start = now();
try {
  if (logStream != null) {
logStream.flush();
  }
} catch (IOException ex) {
  synchronized (this) {
final String msg =
"Could not sync enough journals to persistent storage. "
+ "Unsynced transactions: " + (txid - synctxid);
LOG.fatal(msg, new Exception());
synchronized(journalSetLock) {
  IOUtils.cleanup(LOG, journalSet);
}
terminate(1, msg);
  }
}
{code}
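A minimal sketch of the ordering concern above, with the two method names 
quoted from FSEditLog but everything else an illustration (not the actual 
Hadoop source):

{code:java}
// Holding the object lock, close() first waits out any in-flight
// unsynchronized flush, then ends the segment:
synchronized void close() {
  waitForSyncToFinish();      // without this, bufCurrent may not be empty
  endCurrentLogSegment(true); // expects EditsDoubleBuffer.bufCurrent empty
}
// If waitForSyncToFinish() is skipped, an auto-scheduled logSync() may still
// be in its unsynchronized flush (step 2 above), so endCurrentLogSegment()
// can observe a non-empty bufCurrent and fail with the reported exception.
{code}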

[jira] [Created] (HDDS-1447) Fix CheckStyle warnings

2019-04-17 Thread Wanqiang Ji (JIRA)
Wanqiang Ji created HDDS-1447:
-

 Summary: Fix CheckStyle warnings 
 Key: HDDS-1447
 URL: https://issues.apache.org/jira/browse/HDDS-1447
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Wanqiang Ji
Assignee: Wanqiang Ji


We had a full acceptance test + unit test build 
([https://ci.anzix.net/job/ozone/16677/]) that gave 3 new CheckStyle warnings 
belonging to Ozone.

*Modules:*
 * [Apache Hadoop Ozone 
Client|https://ci.anzix.net/job/ozone/16677/checkstyle/new/moduleName.1350159737/]
 ** KeyOutputStream.java:319
 ** KeyOutputStream.java:622
 * [Apache Hadoop Ozone Integration 
Tests|https://ci.anzix.net/job/ozone/16677/checkstyle/new/moduleName.-1713756601/]
 ** ContainerTestHelper.java:731



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-14436) Configuration#getTimeDuration is not consistent between default value and manual settings.

2019-04-17 Thread star (JIRA)
star created HDFS-14436:
---

 Summary: Configuration#getTimeDuration is not consistent between 
default value and manual settings.
 Key: HDFS-14436
 URL: https://issues.apache.org/jira/browse/HDFS-14436
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: star
Assignee: star


When calling getTimeDuration like this:
{quote}conf.getTimeDuration("property", 10, TimeUnit.SECONDS, 
TimeUnit.MILLISECONDS);
{quote}
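For illustration, a minimal sketch of the two code paths the summary contrasts 
(a hedged example; the comments state the report's claim, not verified 
output):

{code:java}
import java.util.concurrent.TimeUnit;

import org.apache.hadoop.conf.Configuration;

public class GetTimeDurationExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();

    // Property unset: the default (10) is interpreted in SECONDS and
    // converted to MILLISECONDS, i.e. 10000.
    long fromDefault = conf.getTimeDuration("property", 10,
        TimeUnit.SECONDS, TimeUnit.MILLISECONDS);

    // Property set manually to the same bare number "10".
    conf.set("property", "10");
    long fromSetting = conf.getTimeDuration("property", 10,
        TimeUnit.SECONDS, TimeUnit.MILLISECONDS);

    // Per the report, the two results are not consistent: the bare "10"
    // is not interpreted in the same unit as the default value.
    System.out.println(fromDefault + " vs " + fromSetting);
  }
}
{code}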



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-14435) ObserverReadProxyProvider is unable to properly fetch HAState from Standby NNs

2019-04-17 Thread Erik Krogen (JIRA)
Erik Krogen created HDFS-14435:
--

 Summary: ObserverReadProxyProvider is unable to properly fetch 
HAState from Standby NNs
 Key: HDFS-14435
 URL: https://issues.apache.org/jira/browse/HDFS-14435
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: ha, nn
Affects Versions: 3.3.0
Reporter: Erik Krogen
Assignee: Erik Krogen


We have been seeing issues during testing of the Consistent Read from Standby 
feature that indicate that ORPP is unable to call {{getHAServiceState}} on 
Standby NNs, as they are rejected with a {{StandbyException}}. Upon further 
investigation, we realized that although the Standby allows the 
{{getHAServiceState()}} call, reading a delegation token is not allowed in 
Standby state, thus the call will fail when using DT-based authentication. This 
hasn't caused issues in practice, since ORPP assumes that the state is Standby 
if it is unable to fetch the state, but we should fix the logic to properly 
handle this scenario.
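For context, a hedged sketch of the probe-and-fallback behavior described 
above (names approximate ObserverReadProxyProvider; this is an illustration, 
not the actual code or the eventual fix):

{code:java}
// Sketch: ORPP probes each NN for its HA state and treats any failure as
// Standby. This hides the StandbyException thrown when DT-based
// authentication forces a delegation token read on a Standby NN.
private HAServiceState getHAServiceState(NNProxyInfo<T> proxyInfo) {
  try {
    return proxyInfo.proxy.getHAServiceState();
  } catch (IOException e) {
    LOG.debug("Failed to fetch HA state from " + proxyInfo.getAddress(), e);
    return HAServiceState.STANDBY; // fallback assumption on any failure
  }
}
{code}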



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-04-17 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1109/

[Apr 16, 2019 2:18:21 AM] (aajisaka) HADOOP-16249. Make CallerContext 
LimitedPrivate scope to Public.
[Apr 16, 2019 3:13:49 AM] (tasanuma) YARN-8943. Upgrade JUnit from 4 to 5 in 
hadoop-yarn-api.
[Apr 16, 2019 3:53:52 AM] (tasanuma) HADOOP-16253. Update AssertJ to 3.12.2.
[Apr 16, 2019 12:28:18 PM] (weichiu) HADOOP-15014. KMS should log the IP 
address of the clients. Contributed
[Apr 16, 2019 1:22:07 PM] (shashikant) HDDS-1380. Add functonality to write 
from multiple clients in
[Apr 16, 2019 4:52:14 PM] (billie) YARN-8530. Add SPNEGO filter to application 
catalog. Contributed by Eric
[Apr 16, 2019 5:04:27 PM] (billie) YARN-9466. Fixed application catalog 
navigation bar height in Safari.
[Apr 16, 2019 5:34:31 PM] (inigoiri) HDFS-14418. Remove redundant super user 
priveledge checks from namenode.
[Apr 16, 2019 6:06:25 PM] (weichiu) YARN-9123. Clean up and split testcases in 
TestNMWebServices for GPU
[Apr 16, 2019 7:35:49 PM] (arp) HDDS-1432. Ozone client list command truncates 
response without any
[Apr 16, 2019 8:49:29 PM] (github) HDDS-1374. ContainerStateMap cannot find 
container while allocating
[Apr 16, 2019 8:51:39 PM] (github) HDDS-1376. Datanode exits while executing 
client command when scmId is
[Apr 16, 2019 11:53:45 PM] (eyang) YARN-9349.  Improved log level practices for




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 
   Null passed for non-null parameter of 
com.google.common.util.concurrent.SettableFuture.set(Object) in 
org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$UpdateAppTransition.transition(RMStateStore, RMStateStoreEvent) At RMStateStore.java:[line 291] 
   Null passed for non-null parameter of 
com.google.common.util.concurrent.SettableFuture.set(Object) in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.updateApplicationPriority(Priority, ApplicationId, SettableFuture, UserGroupInformation) At CapacityScheduler.java:[line 2650] 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-documentstore
 
   Unread field:TimelineEventSubDoc.java:[line 56] 
   Unread field:TimelineMetricSubDoc.java:[line 44] 

Failed junit tests :

   hadoop.hdfs.server.namenode.ha.TestBootstrapAliasmap 
   hadoop.hdfs.server.balancer.TestBalancerRPCDelay 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.server.balancer.TestBalancer 
   hadoop.yarn.client.api.impl.TestTimelineClientV2Impl 
   
hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerOvercommit 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapreduce.v2.app.TestRuntimeEstimators 
   hadoop.yarn.sls.appmaster.TestAMSimulator 
   hadoop.ozone.container.common.TestDatanodeStateMachine 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1109/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1109/artifact/out/diff-compile-javac-root.txt
  [336K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1109/artifact/out/diff-checkstyle-root.txt
  [17M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1109/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1109/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1109/artifact/out/diff-patch-pylint.txt
  [84K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1109/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   

[jira] [Resolved] (HDFS-14432) dfs.datanode.shared.file.descriptor.paths duplicated in hdfs-default.xml

2019-04-17 Thread Masatake Iwasaki (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki resolved HDFS-14432.
-
   Resolution: Fixed
 Assignee: puleya7
Fix Version/s: 3.1.3
   3.2.1
   3.3.0
   3.0.4
   2.10.0

Thanks for the contribution, [~puleya7]. I committed this.

> dfs.datanode.shared.file.descriptor.paths duplicated in hdfs-default.xml
> 
>
> Key: HDFS-14432
> URL: https://issues.apache.org/jira/browse/HDFS-14432
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: puleya7
>Assignee: puleya7
>Priority: Minor
> Fix For: 2.10.0, 3.0.4, 3.3.0, 3.2.1, 3.1.3
>
>
> The property "dfs.datanode.shared.file.descriptor.paths" appears twice in 
> hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml (since 
> HDFS-6007 in 2.5.0).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-04-17 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/294/

No changes




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   hadoop-build-tools/src/main/resources/checkstyle/checkstyle.xml 
   hadoop-build-tools/src/main/resources/checkstyle/suppressions.xml 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle.xml 
   hadoop-tools/hadoop-resourceestimator/src/config/checkstyle.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   module:hadoop-common-project/hadoop-common 
   Class org.apache.hadoop.fs.GlobalStorageStatistics defines non-transient 
non-serializable instance field map In GlobalStorageStatistics.java 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result, byte[], byte[], KeyConverter, ValueConverter, boolean) At ColumnRWHelper.java:[line 335] 

Failed junit tests :

   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.TestSecureEncryptionZoneWithKMS 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.hdfs.server.balancer.TestBalancerRPCDelay 
   hadoop.hdfs.server.federation.router.TestRouterAllResolver 
   hadoop.registry.secure.TestSecureLogins 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
   hadoop.yarn.client.api.impl.TestAMRMProxy 
   hadoop.yarn.sls.nodemanager.TestNMSimulator 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/294/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/294/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt
  [328K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/294/artifact/out/diff-compile-cc-root-jdk1.8.0_191.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/294/artifact/out/diff-compile-javac-root-jdk1.8.0_191.txt
  [308K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/294/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/294/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/294/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/294/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/294/artifact/out/diff-patch-shellcheck.txt
  [72K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/294/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/294/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/294/artifact/out/whitespace-tabs.txt
  [1.2M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/294/artifact/out/xml.txt
  [20K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/294/artifact/out/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/294/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/294/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt
  [16K]
   

[jira] [Created] (HDFS-14434) webhdfs that connect secure hdfs should not use user.name parameter

2019-04-17 Thread KWON BYUNGCHANG (JIRA)
KWON BYUNGCHANG created HDFS-14434:
--

 Summary: webhdfs that connect secure hdfs should not use user.name 
parameter
 Key: HDFS-14434
 URL: https://issues.apache.org/jira/browse/HDFS-14434
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 3.1.2
Reporter: KWON BYUNGCHANG


I have two secure Hadoop clusters. Both clusters use cross-realm 
authentication.

use...@a.com can access the HDFS of the B.COM realm.

However, the Hadoop username of use...@a.com in the B.COM realm is 
cross_realm_a_com_user_a.

The hdfs dfs command of use...@a.com against the B.COM webhdfs fails:

{{$ hdfs dfs -ls webhdfs://b.com:50070/}}

{{ls: Usernames not matched: name=user_a != expected=cross_realm_a_com_usera}}

{{$ curl -u : --negotiate 
'http://b.com:50070/webhdfs/v1/?op=GETDELEGATIONTOKEN&user.name=user_a'}}

{"RemoteException":{"exception":"SecurityException","javaClassName":"java.lang.SecurityException","message":"Failed 
to obtain user group information: java.io.IOException: Usernames not matched: 
name=user_a != expected=cross_realm_a_com_user_a"}}

{{$ curl -u : --negotiate 
'http://b.com:50070/webhdfs/v1/?op=GETDELEGATIONTOKEN'}}

{"Token":{"urlString":"XgA."}}

The root cause is that webhdfs sends the user.name parameter even when 
connecting to a secure HDFS.

According to the webhdfs spec, insecure webhdfs authenticates with user.name, 
while secure webhdfs authenticates with SPNEGO.

I think webhdfs should not send the user.name parameter when connecting to a 
secure HDFS.

I will attach a patch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-1446) Grpc channels are leaked in XceiverClientGrpc

2019-04-17 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDDS-1446:
---

 Summary: Grpc channels are leaked in XceiverClientGrpc
 Key: HDDS-1446
 URL: https://issues.apache.org/jira/browse/HDDS-1446
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Client
Affects Versions: 0.3.0
Reporter: Mukul Kumar Singh


Grpc Channels are leaked in MiniOzoneChaosCluster runs.

{code}
SEVERE: *~*~*~ Channel ManagedChannelImpl{logId=522, target=10.200.4.160:52415} 
was not shutdown properly!!! ~*~*~*
Make sure to call shutdown()/shutdownNow() and wait until 
awaitTermination() returns true.
java.lang.RuntimeException: ManagedChannel allocation site
at 
org.apache.ratis.thirdparty.io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference.(ManagedChannelOrphanWrapper.java:103)
at 
org.apache.ratis.thirdparty.io.grpc.internal.ManagedChannelOrphanWrapper.(ManagedChannelOrphanWrapper.java:53)
at 
org.apache.ratis.thirdparty.io.grpc.internal.ManagedChannelOrphanWrapper.(ManagedChannelOrphanWrapper.java:44)
at 
org.apache.ratis.thirdparty.io.grpc.internal.AbstractManagedChannelImplBuilder.build(AbstractManagedChannelImplBuilder.java:411)
at 
org.apache.hadoop.hdds.scm.XceiverClientGrpc.connectToDatanode(XceiverClientGrpc.java:165)
at 
org.apache.hadoop.hdds.scm.XceiverClientGrpc.reconnect(XceiverClientGrpc.java:389)
at 
org.apache.hadoop.hdds.scm.XceiverClientGrpc.sendCommandAsync(XceiverClientGrpc.java:340)
at 
org.apache.hadoop.hdds.scm.XceiverClientGrpc.sendCommandWithRetry(XceiverClientGrpc.java:268)
at 
org.apache.hadoop.hdds.scm.XceiverClientGrpc.sendCommandWithTraceIDAndRetry(XceiverClientGrpc.java:236)
at 
org.apache.hadoop.hdds.scm.XceiverClientGrpc.sendCommand(XceiverClientGrpc.java:210)
at 
org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.getBlock(ContainerProtocolCalls.java:119)
at 
org.apache.hadoop.ozone.client.io.KeyInputStream.getFromOmKeyInfo(KeyInputStream.java:302)
at 
org.apache.hadoop.ozone.client.rpc.RpcClient.createInputStream(RpcClient.java:993)
at 
org.apache.hadoop.ozone.client.rpc.RpcClient.getKey(RpcClient.java:653)
at 
org.apache.hadoop.ozone.client.OzoneBucket.readKey(OzoneBucket.java:325)
at 
org.apache.hadoop.ozone.MiniOzoneLoadGenerator.load(MiniOzoneLoadGenerator.java:112)
at 
org.apache.hadoop.ozone.MiniOzoneLoadGenerator.lambda$startIO$0(MiniOzoneLoadGenerator.java:147)
at 
java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1626)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
{code}
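A hedged sketch of the cleanup the warning asks for (the channel map and 
timeout are assumptions, not the actual XceiverClientGrpc fix):

{code:java}
// Sketch: every ManagedChannel built in connectToDatanode()/reconnect()
// must eventually be shut down, per the gRPC orphan-channel warning.
ManagedChannel channel = channels.remove(datanodeId); // hypothetical map
if (channel != null) {
  channel.shutdown();
  try {
    // Give in-flight RPCs a moment to drain, then force termination.
    if (!channel.awaitTermination(5, TimeUnit.SECONDS)) {
      channel.shutdownNow();
    }
  } catch (InterruptedException e) {
    Thread.currentThread().interrupt();
    channel.shutdownNow();
  }
}
{code}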



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org