[jira] [Created] (HADOOP-14493) YARN distributed shell application fails when RM fails over or restarts

2017-06-05 Thread Sathishkumar Manimoorthy (JIRA)
Sathishkumar Manimoorthy created HADOOP-14493:
-

 Summary: YARN distributed shell application fails when RM fails over or restarts
 Key: HADOOP-14493
 URL: https://issues.apache.org/jira/browse/HADOOP-14493
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Sathishkumar Manimoorthy
Priority: Minor


The YARN distributed shell application fails during RM failover or RM restart.

Exception trace:

17/05/30 11:57:38 DEBUG security.UserGroupInformation: PrivilegedAction as:mapr (auth:SIMPLE) from:org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.renameScriptFile(ApplicationMaster.java:1032)
17/05/30 11:57:38 DEBUG security.UserGroupInformation: PrivilegedActionException as:mapr (auth:SIMPLE) cause:java.io.IOException: Invalid source or target
17/05/30 11:57:38 ERROR distributedshell.ApplicationMaster: Not able to add suffix (.bat/.sh) to the shell script filename
java.io.IOException: Invalid source or target
at com.mapr.fs.MapRFileSystem.rename(MapRFileSystem.java:1132)
at org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster$2.run(ApplicationMaster.java:1036)
at org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster$2.run(ApplicationMaster.java:1032)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1595)
at org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.renameScriptFile(ApplicationMaster.java:1032)
at org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.access$1400(ApplicationMaster.java:167)
at org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster$LaunchContainerRunnable.run(ApplicationMaster.java:953)
at java.lang.Thread.run(Thread.java:748)

The DS ApplicationMaster tries to launch an additional container and fails to rename the ExecScript.sh path, because it was already renamed for previous containers in the filesystem path.
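
A minimal sketch of the kind of idempotent guard that would avoid this, with purely illustrative class/field names (this is not the actual ApplicationMaster code or a proposed patch):

{code}
import java.io.IOException;
import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

// Illustrative only: skip the rename when the suffixed script already exists,
// e.g. because a previous container launch (before the RM failover) renamed it.
class ScriptRenamer {
  private final Configuration conf = new Configuration();

  void renameScriptFileIfNeeded(UserGroupInformation ugi,
      final Path scriptPath, final Path renamedScriptPath)
      throws IOException, InterruptedException {
    ugi.doAs(new PrivilegedExceptionAction<Void>() {
      @Override
      public Void run() throws IOException {
        FileSystem fs = renamedScriptPath.getFileSystem(conf);
        if (!fs.exists(renamedScriptPath)) {
          // First launch: add the .sh/.bat suffix by renaming.
          fs.rename(scriptPath, renamedScriptPath);
        }
        // Otherwise an earlier container launch already renamed it; nothing to do.
        return null;
      }
    });
  }
}
{code}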

I will upload the logs and path details soon.






[jira] [Created] (HADOOP-14492) RpcDetailedMetrics and NameNodeMetrics use different rate metrics abstractions, making the *AvgTime values confusing

2017-06-05 Thread Lantao Jin (JIRA)
Lantao Jin created HADOOP-14492:
---

 Summary: RpcDetailedMetrics and NameNodeMetrics use different rate metrics abstractions, making the *AvgTime values confusing
 Key: HADOOP-14492
 URL: https://issues.apache.org/jira/browse/HADOOP-14492
 Project: Hadoop Common
  Issue Type: Bug
  Components: metrics
Affects Versions: 2.8.0, 2.7.4
Reporter: Lantao Jin
Priority: Minor


For performance reasons, 
[HADOOP-13782|https://issues.apache.org/jira/browse/HADOOP-13782] changed the 
metrics behaviour in {{RpcDetailedMetrics}}.
In 2.7.4:
{code}
public class RpcDetailedMetrics {

  @Metric MutableRatesWithAggregation rates;
{code}
In the old version:
{code}
public class RpcDetailedMetrics {

  @Metric MutableRates rates;
{code}

But {{NameNodeMetrics}} still uses {{MutableRate}} in both the new and the old 
version:
{code}
public class NameNodeMetrics {
  @Metric("Block report") MutableRate blockReport;
{code}

This causes the corresponding JMX metrics to differ significantly between the two.
{quote}
{
name: "Hadoop:service=NameNode,name=RpcDetailedActivityForPort8030",
modelerType: "RpcDetailedActivityForPort8030",
tag.port: "8030",
tag.Context: "rpcdetailed",
...
BlockReportNumOps: 237634,
BlockReportAvgTime: 1382,
...
}
{
name: "Hadoop:service=NameNode,name=NameNodeActivity",
modelerType: "NameNodeActivity",
tag.ProcessName: "NameNode",
...
BlockReportNumOps: 2592932,
BlockReportAvgTime: 19.258064516129032,
...
}
{quote}
In the old version, both were correct.
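
For context, a rough sketch of how the two abstractions are driven (the registry and metric names here are illustrative, not the real {{RpcDetailedMetrics}}/{{NameNodeMetrics}} code):

{code}
import org.apache.hadoop.metrics2.lib.MetricsRegistry;
import org.apache.hadoop.metrics2.lib.MutableRate;
import org.apache.hadoop.metrics2.lib.MutableRatesWithAggregation;

// Sketch: the same elapsed time is recorded through both abstractions, but the
// snapshots they publish to JMX (NumOps/AvgTime) can end up looking very different.
class RateMetricsSketch {
  private final MetricsRegistry registry = new MetricsRegistry("sketch");
  // RpcDetailedMetrics style since HADOOP-13782: per-name rates with aggregation.
  private final MutableRatesWithAggregation rates =
      registry.newRatesWithAggregation("detailed");
  // NameNodeMetrics style: a single MutableRate per operation.
  private final MutableRate blockReport = registry.newRate("BlockReport");

  void recordBlockReport(long elapsedMillis) {
    rates.add("blockReport", elapsedMillis);
    blockReport.add(elapsedMillis);
  }
}
{code}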






[jira] [Created] (HADOOP-14491) Azure has a messed-up doc structure

2017-06-05 Thread Mingliang Liu (JIRA)
Mingliang Liu created HADOOP-14491:
--

 Summary: Azure has a messed-up doc structure
 Key: HADOOP-14491
 URL: https://issues.apache.org/jira/browse/HADOOP-14491
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation, fs/azure
Reporter: Mingliang Liu
Assignee: Mingliang Liu


# The _WASB Secure mode and configuration_ and _Authorization Support in WASB_ sections are missing from the navigation
# _Authorization Support in WASB_ should be a level-3 header instead of level 2
# Some code blocks do not specify their format
# Sample code indentation is not consistent.

Let's use the auto-generated navigation instead of manually updating it, just 
as other documents do.







[jira] [Created] (HADOOP-14490) Upgrade azure-storage sdk version

2017-06-05 Thread Mingliang Liu (JIRA)
Mingliang Liu created HADOOP-14490:
--

 Summary: Upgrade azure-storage sdk version
 Key: HADOOP-14490
 URL: https://issues.apache.org/jira/browse/HADOOP-14490
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Mingliang Liu


As required by [HADOOP-14478], we expect {{BlobInputStream}} to support an 
advanced {{readFully()}} that takes mark() hints. This can only be done by 
bumping the SDK version.
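
Roughly the read pattern this is about, as a hedged sketch against a plain {{InputStream}} (the helper name and structure are hypothetical; the point is that marking ahead of a short forward seek lets a mark-capable stream such as {{BlobInputStream}} serve the read without aborting and re-opening the connection):

{code}
import java.io.IOException;
import java.io.InputStream;

// Hypothetical helper sketching a forward-seeking readFully that hints the
// stream via mark() about how far it intends to read.
final class ForwardReadFully {
  static void readFully(InputStream in, long bytesToSkip, byte[] buf)
      throws IOException {
    if (in.markSupported()) {
      // Hint: we will read at most this far past the current position.
      in.mark((int) Math.min(Integer.MAX_VALUE, bytesToSkip + buf.length));
    }
    long skipped = 0;
    while (skipped < bytesToSkip) {
      long n = in.skip(bytesToSkip - skipped);
      if (n <= 0) {
        throw new IOException("Unable to skip " + bytesToSkip + " bytes");
      }
      skipped += n;
    }
    int off = 0;
    while (off < buf.length) {
      int read = in.read(buf, off, buf.length - off);
      if (read < 0) {
        throw new IOException("Premature end of stream");
      }
      off += read;
    }
  }
}
{code}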

cc: [~rajesh.balamohan].






[jira] [Resolved] (HADOOP-14489) ITestS3GuardConcurrentOps requires explicit DynamoDB table name to be configured

2017-06-05 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory resolved HADOOP-14489.

Resolution: Fixed

Resolving as a duplicate. Thanks [~liuml07]

> ITestS3GuardConcurrentOps requires explicit DynamoDB table name to be 
> configured
> 
>
> Key: HADOOP-14489
> URL: https://issues.apache.org/jira/browse/HADOOP-14489
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>
> testConcurrentTableCreations fails with this: 
> {quote}java.lang.IllegalArgumentException: No DynamoDB table name 
> configured!{quote}
> I don't think that's necessary - we should be able to shuffle things around 
> and either use the bucket name by default (like other DynamoDB tests do) or 
> use the table name that's configured later in the test.






Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le

2017-06-05 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/336/

[Jun 4, 2017 4:35:14 PM] (sunilg) YARN-6458. Use yarn package manager to lock 
down dependency versions for




-1 overall


The following subsystems voted -1:
compile mvninstall unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.fs.sftp.TestSFTPFileSystem 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 
   hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer 
   hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations 
   hadoop.hdfs.server.datanode.metrics.TestDataNodeOutlierDetectionViaMetrics 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 
   hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.yarn.server.nodemanager.recovery.TestNMLeveldbStateStoreService 
   hadoop.yarn.server.nodemanager.TestNodeManagerShutdown 
   hadoop.yarn.server.timeline.TestRollingLevelDB 
   hadoop.yarn.server.timeline.TestTimelineDataManager 
   hadoop.yarn.server.timeline.TestLeveldbTimelineStore 
   hadoop.yarn.server.timeline.recovery.TestLeveldbTimelineStateStore 
   hadoop.yarn.server.timeline.TestRollingLevelDBTimelineStore 
   hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer 
   hadoop.yarn.server.resourcemanager.ahs.TestRMApplicationHistoryWriter 
   hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore 
   hadoop.yarn.server.resourcemanager.TestRMAdminService 
   hadoop.yarn.server.resourcemanager.TestOpportunisticContainerAllocatorAMService 
   hadoop.yarn.server.TestDiskFailures 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.client.api.impl.TestAMRMClient 
   hadoop.yarn.client.api.impl.TestNMClient 
   hadoop.yarn.server.timeline.TestLevelDBCacheTimelineStore 
   hadoop.yarn.server.timeline.TestOverrideTimelineStoreYarnClient 
   hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapred.TestShuffleHandler 
   hadoop.mapreduce.v2.hs.TestHistoryServerLeveldbStateStoreService 
   hadoop.hdfs.TestNNBench 
   hadoop.yarn.sls.appmaster.TestAMSimulator 

Timed out junit tests :

   org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean 
   org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache 
   org.apache.hadoop.yarn.server.resourcemanager.TestRMStoreCommands 
   org.apache.hadoop.yarn.server.resourcemanager.recovery.TestZKRMStateStore 
   org.apache.hadoop.yarn.server.resourcemanager.TestReservationSystemWithRMHA 
   org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA 
   org.apache.hadoop.yarn.server.resourcemanager.TestKillApplicationWithRMHA 
   org.apache.hadoop.yarn.server.resourcemanager.TestRMHAForNodeLabels 
  

   mvninstall:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/336/artifact/out/patch-mvninstall-root.txt
  [496K]

   compile:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/336/artifact/out/patch-compile-root.txt
  [20K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/336/artifact/out/patch-compile-root.txt
  [20K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/336/artifact/out/patch-compile-root.txt
  [20K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/336/artifact/out/patch-unit-hadoop-assemblies.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/336/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [144K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/336/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [896K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/336/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
  [56K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/336/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt
  [52K]
   

Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2017-06-05 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/425/

[Jun 4, 2017 4:35:14 PM] (sunilg) YARN-6458. Use yarn package manager to lock 
down dependency versions for




-1 overall


The following subsystems voted -1:
findbugs unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   module:hadoop-common-project/hadoop-minikdc 
   Possible null pointer dereference in 
org.apache.hadoop.minikdc.MiniKdc.delete(File) due to return value of called 
method Dereferenced at 
MiniKdc.java:org.apache.hadoop.minikdc.MiniKdc.delete(File) due to return value 
of called method Dereferenced at MiniKdc.java:[line 368] 

FindBugs :

   module:hadoop-common-project/hadoop-auth 
   
org.apache.hadoop.security.authentication.server.MultiSchemeAuthenticationHandler.authenticate(HttpServletRequest,
 HttpServletResponse) makes inefficient use of keySet iterator instead of 
entrySet iterator At MultiSchemeAuthenticationHandler.java:of keySet iterator 
instead of entrySet iterator At MultiSchemeAuthenticationHandler.java:[line 
192] 

FindBugs :

   module:hadoop-common-project/hadoop-common 
   org.apache.hadoop.crypto.CipherSuite.setUnknownValue(int) 
unconditionally sets the field unknownValue At CipherSuite.java:unknownValue At 
CipherSuite.java:[line 44] 
   org.apache.hadoop.crypto.CryptoProtocolVersion.setUnknownValue(int) 
unconditionally sets the field unknownValue At 
CryptoProtocolVersion.java:unknownValue At CryptoProtocolVersion.java:[line 67] 
   Possible null pointer dereference in 
org.apache.hadoop.fs.FileUtil.fullyDeleteOnExit(File) due to return value of 
called method Dereferenced at 
FileUtil.java:org.apache.hadoop.fs.FileUtil.fullyDeleteOnExit(File) due to 
return value of called method Dereferenced at FileUtil.java:[line 118] 
   Possible null pointer dereference in 
org.apache.hadoop.fs.RawLocalFileSystem.handleEmptyDstDirectoryOnWindows(Path, 
File, Path, File) due to return value of called method Dereferenced at 
RawLocalFileSystem.java:org.apache.hadoop.fs.RawLocalFileSystem.handleEmptyDstDirectoryOnWindows(Path,
 File, Path, File) due to return value of called method Dereferenced at 
RawLocalFileSystem.java:[line 387] 
   Return value of org.apache.hadoop.fs.permission.FsAction.or(FsAction) 
ignored, but method has no side effect At FTPFileSystem.java:but method has no 
side effect At FTPFileSystem.java:[line 421] 
   Useless condition:lazyPersist == true at this point At 
CommandWithDestination.java:[line 502] 
   org.apache.hadoop.io.DoubleWritable.compareTo(DoubleWritable) 
incorrectly handles double value At DoubleWritable.java: At 
DoubleWritable.java:[line 78] 
   org.apache.hadoop.io.DoubleWritable$Comparator.compare(byte[], int, int, 
byte[], int, int) incorrectly handles double value At DoubleWritable.java:int) 
incorrectly handles double value At DoubleWritable.java:[line 97] 
   org.apache.hadoop.io.FloatWritable.compareTo(FloatWritable) incorrectly 
handles float value At FloatWritable.java: At FloatWritable.java:[line 71] 
   org.apache.hadoop.io.FloatWritable$Comparator.compare(byte[], int, int, 
byte[], int, int) incorrectly handles float value At FloatWritable.java:int) 
incorrectly handles float value At FloatWritable.java:[line 89] 
   Possible null pointer dereference in 
org.apache.hadoop.io.IOUtils.listDirectory(File, FilenameFilter) due to return 
value of called method Dereferenced at 
IOUtils.java:org.apache.hadoop.io.IOUtils.listDirectory(File, FilenameFilter) 
due to return value of called method Dereferenced at IOUtils.java:[line 351] 
   org.apache.hadoop.io.erasurecode.ECSchema.toString() makes inefficient 
use of keySet iterator instead of entrySet iterator At ECSchema.java:keySet 
iterator instead of entrySet iterator At ECSchema.java:[line 193] 
   Possible bad parsing of shift operation in 
org.apache.hadoop.io.file.tfile.Utils$Version.hashCode() At 
Utils.java:operation in 
org.apache.hadoop.io.file.tfile.Utils$Version.hashCode() At Utils.java:[line 
398] 
   
org.apache.hadoop.metrics2.lib.DefaultMetricsFactory.setInstance(MutableMetricsFactory)
 unconditionally sets the field mmfImpl At DefaultMetricsFactory.java:mmfImpl 
At DefaultMetricsFactory.java:[line 49] 
   
org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.setMiniClusterMode(boolean) 
unconditionally sets the field miniClusterMode At 
DefaultMetricsSystem.java:miniClusterMode At DefaultMetricsSystem.java:[line 
100] 
   Useless object stored in variable seqOs of method 
org.apache.hadoop.security.token.delegation.ZKDelegationTokenSecretManager.addOrUpdateToken(AbstractDelegationTokenIdentifier,
 

[jira] [Created] (HADOOP-14489) ITestS3GuardConcurrentOps requires explicit DynamoDB table name to be configured

2017-06-05 Thread Sean Mackrory (JIRA)
Sean Mackrory created HADOOP-14489:
--

 Summary: ITestS3GuardConcurrentOps requires explicit DynamoDB 
table name to be configured
 Key: HADOOP-14489
 URL: https://issues.apache.org/jira/browse/HADOOP-14489
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Sean Mackrory
Assignee: Sean Mackrory


testConcurrentTableCreations fails with this: 
{quote}java.lang.IllegalArgumentException: No DynamoDB table name 
configured!{quote}

I don't think that's necessary - we should be able to shuffle things around and 
either use the bucket name by default (like other DynamoDB tests do) or use 
the table name that's configured later in the test.
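
A sketch of the fallback this suggests, with a purely illustrative helper (the config key below matches the usual S3Guard DynamoDB table setting, but treat it as an assumption here):

{code}
import org.apache.hadoop.conf.Configuration;

// Illustrative only: when no DynamoDB table name is configured for the test,
// fall back to the bucket name, as other S3Guard DynamoDB tests effectively do.
final class TableNameFallback {
  // Assumed key for the S3Guard DynamoDB table name.
  static final String S3GUARD_DDB_TABLE_NAME_KEY = "fs.s3a.s3guard.ddb.table";

  static String tableNameFor(Configuration conf, String bucketName) {
    String table = conf.getTrimmed(S3GUARD_DDB_TABLE_NAME_KEY, "");
    return table.isEmpty() ? bucketName : table;
  }
}
{code}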






[jira] [Created] (HADOOP-14488) s3guard localdynamo listStatus fails after renaming file into directory

2017-06-05 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-14488:
---

 Summary: s3guard localdynamo listStatus fails after renaming file 
into directory
 Key: HADOOP-14488
 URL: https://issues.apache.org/jira/browse/HADOOP-14488
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Reporter: Steve Loughran


Running a Scala integration test with the inconsistent S3 client & local DDB enabled:

{code}
fs.rename("work/task-00/part-00", work)
fs.listStatus(work)
{code}

The listStatus call on {{work}} fails with a message about the childStatus not 
being a child of the parent. 

Hypothesis: rename isn't updating the child path entry
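
To restate the invariant the failure points at, as a sketch (this mirrors the intent of the listing precondition rather than reproducing the real code): every status returned for {{work}} must have {{work}} as its direct parent, so a stale entry still carrying the old {{work/task-00/part-00}} path would trip it.

{code}
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.Path;

// Sketch of the invariant behind the failure: every child status returned for
// a directory listing must have that directory as its direct parent.
final class ListingInvariant {
  static void checkChild(Path dir, FileStatus childStatus) {
    Path parent = childStatus.getPath().getParent();
    if (parent == null || !parent.equals(dir)) {
      throw new IllegalArgumentException("childPath " + childStatus.getPath()
          + " must be a child of " + dir);
    }
  }
}
{code}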






[jira] [Resolved] (HADOOP-14484) Ensure deleted parent directory tombstones are overwritten when implicitly recreated

2017-06-05 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory resolved HADOOP-14484.

Resolution: Duplicate

Resolving this as a duplicate, since I did end up doing it as part of the first 
patch, and it makes sense to continue to do so.

> Ensure deleted parent directory tombstones are overwritten when implicitly 
> recreated
> 
>
> Key: HADOOP-14484
> URL: https://issues.apache.org/jira/browse/HADOOP-14484
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>
> As discussed on HADOOP-13998, there may be a test missing (and possibly 
> broken metadata store implementations) for the case where a directory is 
> deleted but is later implicitly recreated by creating a file inside it, where 
> the tombstone is not overwritten. In such a case, listing the parent 
> directory would result in an error.
> This may also be happening because of HADOOP-14457, but we should add a test 
> for this other possibility anyway and fix it if it fails with any 
> implementations.






[jira] [Created] (HADOOP-14487) DirListingMetadata precondition failure messages to include path at fault

2017-06-05 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-14487:
---

 Summary: DirListingMetadata precondition failure messages to 
include path at fault
 Key: HADOOP-14487
 URL: https://issues.apache.org/jira/browse/HADOOP-14487
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: HADOOP-13345
Reporter: Steve Loughran
Priority: Minor


I've done something wrong in my code and am getting "childPath must be a child 
of path", which is all very well, but it doesn't include the paths.

The precondition checks all need to include the relevant path info for users to 
start working out what has gone wrong.
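
A sketch of the sort of message improvement being asked for, using Guava's {{Preconditions}} with illustrative method and parameter names:

{code}
import com.google.common.base.Preconditions;

import org.apache.hadoop.fs.Path;

// Sketch: include both paths in the precondition message so the failure
// explains itself instead of just saying "childPath must be a child of path".
final class ChildPathCheck {
  static void checkChildPath(Path path, Path childPath) {
    Preconditions.checkNotNull(childPath, "childPath must not be null");
    Preconditions.checkArgument(
        childPath.getParent() != null && childPath.getParent().equals(path),
        "childPath %s must be a child of %s", childPath, path);
  }
}
{code}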






[jira] [Created] (HADOOP-14486) TestSFTPFileSystem#testGetAccessTime test failure

2017-06-05 Thread Sonia Garudi (JIRA)
Sonia Garudi created HADOOP-14486:
-

 Summary: TestSFTPFileSystem#testGetAccessTime test failure
 Key: HADOOP-14486
 URL: https://issues.apache.org/jira/browse/HADOOP-14486
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0-alpha4
 Environment: Ubuntu 14.04 
x86, ppc64le
$ java -version
openjdk version "1.8.0_111"
OpenJDK Runtime Environment (build 1.8.0_111-8u111-b14-3~14.04.1-b14)
OpenJDK 64-Bit Server VM (build 25.111-b14, mixed mode)
Reporter: Sonia Garudi


The TestSFTPFileSystem#testGetAccessTime test fails consistently with the error 
below:

{code}
java.lang.AssertionError: expected:<1496496040072> but was:<149649604>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.hadoop.fs.sftp.TestSFTPFileSystem.testGetAccessTime(TestSFTPFileSystem.java:319)
{code}
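
The reported value looks like a truncated form of the expected millisecond timestamp. If it does turn out to be a precision/granularity issue, one purely illustrative way for a test to compare timestamps at a coarser granularity (not the actual fix):

{code}
import static org.junit.Assert.assertEquals;

// Illustrative only: compare access times at second granularity, in case the
// underlying layer does not preserve millisecond precision.
final class AccessTimeAssertion {
  static void assertSameAccessTime(long expectedMillis, long actualMillis) {
    assertEquals(expectedMillis / 1000L, actualMillis / 1000L);
  }
}
{code}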


