[jira] [Reopened] (HADOOP-16193) add extra S3A MPU test to see what happens if a file is created during the MPU

2019-08-26 Thread Ewan Higgs (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewan Higgs reopened HADOOP-16193:
-

I must have made a mistake in testing this. Re-reviewing it, it's not possible 
for this to work and indeed it fails when I test it again.

When writing to S3, the 'winner' is the last-started upload that completed 
successfully. For an MPU, that start is the initiation time, not the completion 
time. So when the transient file comes in later and is then deleted, the MPU's 
data becomes moot.

In a versioned bucket, deleting the transient file doesn't roll the latest 
version back to the MPU's data either, because the 'latest' knowledge is again 
based on initiation time, not completion time.

I will revert the commit.
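
For anyone who wants to reproduce the ordering behaviour outside the S3A test 
suite, here is a minimal sketch against the raw AWS SDK v1 rather than the S3A 
MultipartUploader API; the bucket/key names and client setup are placeholders, 
not code from the patch:

{code:java}
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.*;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.Collections;

public class MpuVsTransientFile {
  public static void main(String[] args) {
    AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
    String bucket = "example-bucket";
    String key = "example/file";
    byte[] mpuData = "mpu data".getBytes(StandardCharsets.UTF_8);

    // 1. Initiate the MPU; per the comment above, this timestamp is what
    //    decides the eventual "winner".
    String uploadId = s3.initiateMultipartUpload(
        new InitiateMultipartUploadRequest(bucket, key)).getUploadId();
    PartETag part = s3.uploadPart(new UploadPartRequest()
        .withBucketName(bucket).withKey(key).withUploadId(uploadId)
        .withPartNumber(1)
        .withInputStream(new ByteArrayInputStream(mpuData))
        .withPartSize(mpuData.length)).getPartETag();

    // 2. A transient file is created and then deleted while the MPU is open.
    s3.putObject(bucket, key, "transient data");
    s3.deleteObject(bucket, key);

    // 3. Complete the MPU. Per the observation above, the later-started
    //    PUT/DELETE wins, so the MPU data does not become the current object.
    s3.completeMultipartUpload(new CompleteMultipartUploadRequest(
        bucket, key, uploadId, Collections.singletonList(part)));
  }
}
{code}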

> add extra S3A MPU test to see what happens if a file is created during the MPU
> --
>
> Key: HADOOP-16193
> URL: https://issues.apache.org/jira/browse/HADOOP-16193
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 3.1.3
>
>
> Proposed extra test for the S3A MPU: if you create and then delete a file 
> while an MPU is in progress, when you finally complete the MPU the new data 
> is present.
> This verifies that the other FS operations don't somehow cancel the 
> in-progress upload, and that eventual consistency brings the latest value out.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: About supporting upcoming java versions

2018-11-08 Thread Ewan Higgs
Hi all,
Reporting bugs to Java is a bit weird/nontrivial, so I'm sympathetic to Owen's 
situation. The openjdk bug tracker requires users to be committers, so no one 
can comment unless they're already contributing.

Their tracker is here:
https://bugs.openjdk.java.net/projects/JDK/issues

To actually file a bug, the form is here:
https://bugreport.java.com/bugreport

Yours,
Ewan

On 07/11/2018, 11:18, "Steve Loughran"  wrote:


If there are problems w/ JDK 11 then we should be talking to Oracle about 
them to have them fixed. Is there an ASF JIRA on this issue yet?

As usual, the large physical clusters will be slow to upgrade, but the smaller 
cloud ones can get away with being agile, and as I believe YARN lets you run 
code with a different path to the JVM, people can mix things. This makes it 
possible for people to run Java 11+ apps even if Hadoop itself is on Java 8.

And this time we may want to think about which release we declare "ready 
for Java 11", being proactive rather than lagging behind the public releases by 
many years (6=>7, 7=>8). Of course, we'll have to stay with the Java 8 language 
for a while, but there's a lot more we can do in our own code. I'm currently 
(HADOOP-14556) embracing Optional, as it makes explicit when things are 
potentially null, and while it's crippled by the Java language itself 
(http://steveloughran.blogspot.com/2018/10/javas-use-of-checked-exceptions.html 
), it's still something we can embrace (*)
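
As a tiny illustration of that point (a hypothetical snippet, not code from 
the HADOOP-14556 branch; conf, LOG and the config key are assumed):

// the return type itself tells the caller the value may be absent
Optional<String> endpoint = Optional.ofNullable(conf.getTrimmed("fs.example.endpoint"));
endpoint.ifPresent(e -> LOG.debug("using endpoint {}", e));
String target = endpoint.orElse("default.example.org");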


Takanobu,

I've been watching the work you, Akira and others have been putting in for 
Java 9+ support and it's wonderful. If we had an annual award for "persevering 
in the presence of extreme suffering" it'd be the top candidate for this year's 
work.

It means we are lined up to let people run Hadoop on Java 11 if they want, and 
gives us the option of moving to Java 11 sooner rather than later. I'm also 
looking at JUnit 5, wondering when I can embrace it fully (i.e. not worry about 
cherry-picking code into JUnit 4 tests).

Thanks for all your work

-Steve

(*) I also have in the test code of that branch a binding of UGI.doAs which 
takes closures:


https://github.com/steveloughran/hadoop/blob/s3/HADOOP-14556-delegation-token/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/LambdaTestUtils.java#L865


lets me do things like

assertEquals("FS username in doAs()",
ALICE,
doAs(bobUser, () -> fs.getUsername()))

If someone wants to actually pull this support into UGI itself, I'm happy to 
review, as moving our doAs code to things like bobUser.doAs(() -> 
fs.create(path)) will transform all those UGI code users.
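
For reference, the wrapper being suggested could be as small as this (a sketch 
of the shape only, not the LambdaTestUtils code linked above):

// hypothetical helper: run a closure as the given UGI and return its result
public static <T> T doAs(UserGroupInformation ugi, Callable<T> eval) throws Exception {
    return ugi.doAs((PrivilegedExceptionAction<T>) eval::call);
}

With something like that in UserGroupInformation, the assertEquals example 
above compiles with no PrivilegedExceptionAction boilerplate at the call site.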

On 6 Nov 2018, at 05:57, Takanobu Asanuma <tasan...@apache.org> wrote:

Thanks for your reply, Owen.

> That said, I’d be surprised if the work items for JDK 9 and 10 aren’t a
> strict subset of the issues getting to JDK 11.

Most of the issues that we have fixed are a subset of those for JDK 11, but 
there seem to be some exceptions. HADOOP-15905 is a bug in JDK 9/10 which has 
been fixed in JDK 11; it is difficult to address since JDK 9/10 are already 
EOL. I wonder how we should treat that kind of error going forward.

> I've hit at least one pretty serious JVM bug in JDK 11

Could you please share the details?

In any case, we should be careful about which version of Hadoop we declare 
ready for JDK 11. It will take some time yet, and we also need to keep 
supporting JDK 8 for a while.

Regards,
- Takanobu







[jira] [Created] (HADOOP-15667) FileSystemMultipartUploader should verify that UploadHandle has non-0 length

2018-08-10 Thread Ewan Higgs (JIRA)
Ewan Higgs created HADOOP-15667:
---

 Summary: FileSystemMultipartUploader should verify that 
UploadHandle has non-0 length
 Key: HADOOP-15667
 URL: https://issues.apache.org/jira/browse/HADOOP-15667
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ewan Higgs
Assignee: Ewan Higgs


The S3AMultipartUploader has a good check on the length of the UploadHandle. 
This should be moved to MultipartUploader, made protected, and called in the 
various implementations.
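
A minimal sketch of what the shared, protected check could look like in the 
abstract base class (method name and messages are assumptions, not the 
committed patch):

{code:java}
// Hypothetical shared helper in MultipartUploader, called by each
// implementation before it uses the handle's payload.
protected void checkUploadId(byte[] uploadId) {
  Preconditions.checkArgument(uploadId != null,
      "null uploadId");
  Preconditions.checkArgument(uploadId.length > 0,
      "Empty UploadId is not valid");
}
{code}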



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15623) Compiling hadoop-azure fails with jdk10 (javax javascript)

2018-07-20 Thread Ewan Higgs (JIRA)
Ewan Higgs created HADOOP-15623:
---

 Summary: Compiling hadoop-azure fails with jdk10 (javax javascript)
 Key: HADOOP-15623
 URL: https://issues.apache.org/jira/browse/HADOOP-15623
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Ewan Higgs


{code}
$ java -version
java version "10.0.1" 2018-04-17
Java(TM) SE Runtime Environment 18.3 (build 10.0.1+10)
Java HotSpot(TM) 64-Bit Server VM 18.3 (build 10.0.1+10, mixed mode)
{code}

{code}
$ mvn install -DskipShade -Dmaven.javadoc.skip=true -Djava.awt.headless=true 
-DskipTests -rf :hadoop-azure

... 

[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-antrun-plugin:1.7:run 
(create-parallel-tests-dirs) on project hadoop-azure: An Ant BuildException has 
occured: Unable to create javax script engine for javascript
[ERROR] around Ant part ...

[jira] [Created] (HADOOP-15445) TestCryptoAdminCLI test failure when upgrading to JDK8 patch 171.

2018-05-03 Thread Ewan Higgs (JIRA)
Ewan Higgs created HADOOP-15445:
---

 Summary: TestCryptoAdminCLI test failure when upgrading to JDK8 
patch 171.
 Key: HADOOP-15445
 URL: https://issues.apache.org/jira/browse/HADOOP-15445
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ewan Higgs


JDK8 patch 171 introduces a new feature:
{quote}
h3. New Features
security-libs/javax.crypto: *Enhanced KeyStore Mechanisms* 
([8u171 release notes|http://www.oracle.com/technetwork/java/javase/8u171-relnotes-430.html#JDK-8189997])
A new security property named {{jceks.key.serialFilter}} has been introduced. 
If this filter is configured, the JCEKS KeyStore uses it during the 
deserialization of the encrypted Key object stored inside a SecretKeyEntry. If 
it is not configured or if the filter result is UNDECIDED (for example, none of 
the patterns match), then the filter configured by {{jdk.serialFilter}} is 
consulted.

If the system property {{jceks.key.serialFilter}} is also supplied, it 
supersedes the security property value defined here.

The filter pattern uses the same format as {{jdk.serialFilter}}. The default 
pattern allows {{java.lang.Enum}}, {{java.security.KeyRep}}, 
{{java.security.KeyRep$Type}}, and {{javax.crypto.spec.SecretKeySpec}} but 
rejects all the others.

Customers storing a SecretKey that does not serialize to the above types must 
modify the filter to make the key extractable.
{quote}
We believe this causes some test failures:

 
{quote}
java.io.IOException: Can't recover key for myKey from keystore 
file:/home/jenkins/workspace/hadoopFullBuild/hadoop-hdfs-project/hadoop-hdfs/target/test/data/53406117-0132-401e-a67d-6672f1b6a14a/test.jks
 at org.apache.hadoop.crypto.key.JavaKeyStoreProvider.getMetadata(JavaKeyStoreProvider.java:424)
 at org.apache.hadoop.crypto.key.KeyProviderExtension.getMetadata(KeyProviderExtension.java:100)
 at org.apache.hadoop.hdfs.server.namenode.FSDirEncryptionZoneOp.ensureKeyIsInitialized(FSDirEncryptionZoneOp.java:124)
 at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.createEncryptionZone(FSNamesystem.java:7227)
 at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.createEncryptionZone(NameNodeRpcServer.java:2082)
 at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.createEncryptionZone(ClientNamenodeProtocolServerSideTranslatorPB.java:1524)
 at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
 at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
 at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
 at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:422)
 at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1965)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
Caused by: java.security.UnrecoverableKeyException: Rejected by the 
jceks.key.serialFilter or jdk.serialFilter property
 at com.sun.crypto.provider.KeyProtector.unseal(KeyProtector.java:352)
 at com.sun.crypto.provider.JceKeyStore.engineGetKey(JceKeyStore.java:136)
 at java.security.KeyStore.getKey(KeyStore.java:1023)
{quote}
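
A possible test-environment workaround (an assumption, not the committed fix) 
is to widen the filter so that the Hadoop key-metadata class is accepted, e.g. 
by setting the {{jceks.key.serialFilter}} system property before the keystore 
is read:

{code:java}
// Hypothetical workaround sketch: extend the JDK default pattern with the
// Hadoop JavaKeyStoreProvider key-metadata class; everything else stays rejected.
System.setProperty("jceks.key.serialFilter",
    "java.lang.Enum;java.security.KeyRep;java.security.KeyRep$Type;"
        + "javax.crypto.spec.SecretKeySpec;"
        + "org.apache.hadoop.crypto.key.JavaKeyStoreProvider$KeyMetadata;!*");
{code}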
 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15376) Remove double semi colons on imports that make Clover fall over.

2018-04-10 Thread Ewan Higgs (JIRA)
Ewan Higgs created HADOOP-15376:
---

 Summary: Remove double semi colons on imports that make Clover 
fall over.
 Key: HADOOP-15376
 URL: https://issues.apache.org/jira/browse/HADOOP-15376
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Ewan Higgs


Clover will fall over if there are double semicolons on imports.

The error looks like:
{code:java}
[INFO] Clover free edition.
[INFO] Updating existing database at 
'/Users/ehiggs/src/hadoop/hadoop-common-project/hadoop-common/target/clover/clover.db'.
[INFO] Processing files at 1.8 source level.
[INFO] 
/Users/ehiggs/src/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestIOUtils.java:43:1:expecting
 EOF, found 'import'
[INFO] Instrumentation error
com.atlassian.clover.api.CloverException: 
/Users/ehiggs/src/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestIOUtils.java:43:1:expecting
 EOF, found 'import'{code}
 

Thankfully we only have one location with this:
{code:java}
$ find . -name \*.java -exec grep '^import .*;;' {} +
./hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestIOUtils.java:import
 org.apache.commons.io.FileUtils;;{code}
 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: Integration Tests build line

2017-08-09 Thread Ewan Higgs
Hi all,
Sorry to be annoying, but I’m still unable to build integration tests and 
wasn’t sure if anyone had seen this message.

Is Jenkins even building these? I don’t see org.apache.hadoop.example listed 
here:
https://builds.apache.org/job/PreCommit-HDFS-Build/20614/testReport/

Thanks
Ewan

From: Ewan Higgs <ewan.hi...@wdc.com>
Date: Monday, 7 August 2017 at 18:19
To: "common-dev@hadoop.apache.org" <common-dev@hadoop.apache.org>
Subject: Integration Tests build line


Integration Tests build line

2017-08-07 Thread Ewan Higgs
Hi all,
I’m having trouble getting the integration tests to run successfully. It 
hasn’t really affected me so far, but it bothers me that I can’t get them 
working, so maybe I’m doing something wrong. I tried looking in Jenkins but I 
didn’t see the integration tests being run anywhere.


mvn install -Dtest=**/ITUseMiniCluster.java -DforkMode=never -Pnoshade

Running org.apache.hadoop.example.ITUseMiniCluster
Tests run: 4, Failures: 0, Errors: 4, Skipped: 0, Time elapsed: 35.641 sec <<< 
FAILURE! - in org.apache.hadoop.example.ITUseMiniCluster
useWebHDFS(org.apache.hadoop.example.ITUseMiniCluster)  Time elapsed: 21.097 
sec  <<< ERROR!
java.lang.NoClassDefFoundError: 
org/apache/hadoop/shaded/org/mockito/stubbing/Answer
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at 
org.apache.maven.surefire.booter.IsolatedClassLoader.loadClass(IsolatedClassLoader.java:97)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.shouldWait(MiniDFSCluster.java:2668)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2564)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2607)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1667)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:874)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:494)
at 
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:453)
at 
org.apache.hadoop.example.ITUseMiniCluster.clusterUp(ITUseMiniCluster.java:74)

useWebHDFS(org.apache.hadoop.example.ITUseMiniCluster)  Time elapsed: 21.1 sec  
<<< ERROR!
java.lang.NullPointerException: null
at 
org.apache.hadoop.example.ITUseMiniCluster.clusterDown(ITUseMiniCluster.java:80)

useHdfsFileSystem(org.apache.hadoop.example.ITUseMiniCluster)  Time elapsed: 
14.486 sec  <<< ERROR!
java.lang.NoClassDefFoundError: 
org/apache/hadoop/shaded/org/mockito/stubbing/Answer
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at 
org.apache.maven.surefire.booter.IsolatedClassLoader.loadClass(IsolatedClassLoader.java:97)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.shouldWait(MiniDFSCluster.java:2668)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2564)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2607)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1667)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:874)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:494)
at 
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:453)
at 
org.apache.hadoop.example.ITUseMiniCluster.clusterUp(ITUseMiniCluster.java:74)

useHdfsFileSystem(org.apache.hadoop.example.ITUseMiniCluster)  Time elapsed: 
14.486 sec  <<< ERROR!
java.lang.NullPointerException: null
at 
org.apache.hadoop.example.ITUseMiniCluster.clusterDown(ITUseMiniCluster.java:80)



Thanks for any help!

-Ewan

Western Digital Corporation (and its subsidiaries) E-mail Confidentiality 
Notice & Disclaimer:

This e-mail and any files transmitted with it may contain confidential or 
legally privileged information of WDC and/or its affiliates, and are intended 
solely for the use of the individual or entity to which they are addressed. If 
you are not the intended recipient, any disclosure, copying, distribution or 
any action taken or omitted to be taken in reliance on it, is prohibited. If 
you have received this e-mail in error, please notify the sender immediately 
and delete the e-mail in its entirety from your system.


[jira] [Created] (HADOOP-13514) Upgrade surefire to 2.19.1

2016-08-18 Thread Ewan Higgs (JIRA)
Ewan Higgs created HADOOP-13514:
---

 Summary: Upgrade surefire to 2.19.1
 Key: HADOOP-13514
 URL: https://issues.apache.org/jira/browse/HADOOP-13514
 Project: Hadoop Common
  Issue Type: Task
Reporter: Ewan Higgs
Priority: Minor


A lot of people working on Hadoop don't want to run all the tests when they 
develop; only the bits they're working on. Surefire 2.19 introduced more useful 
test filters which let us run a subset of the tests, bringing the build time 
down from 'come back tomorrow' to 'grab a coffee'.

For instance, if I only care about the S3 adaptor, I might run:

{code}
mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true 
\"-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, 
org.apache.hadoop.fs.s3a.*\"
{code}

We can work around the older default by specifying the surefire version on the 
command line, but it would be better, imo, to just update the default surefire 
version used.

{code}
mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true 
\"-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, 
org.apache.hadoop.fs.s3a.*\" -Dmaven-surefire-plugin.version=2.19.1
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-11999) yarn.resourcemanager.webapp.address and friends are converted to fqdn in the href urls of the web app.

2015-05-19 Thread Ewan Higgs (JIRA)
Ewan Higgs created HADOOP-11999:
---

 Summary: yarn.resourcemanager.webapp.address and friends are 
converted to fqdn in the href urls of the web app.
 Key: HADOOP-11999
 URL: https://issues.apache.org/jira/browse/HADOOP-11999
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.3.0
 Environment: Linux.
Reporter: Ewan Higgs


I am setting up a Hadoop cluster where the nodes have FQDNs inside
the cluster, but the DNS where these names are registered is behind some
login nodes. So any user who tries to access the web interface needs to
use the IPs instead.

I set the 'yarn.nodemanager.webapp.address' and
'yarn.resourcemanager.webapp.address' to the appropriate IP:port. I
don't give it the FQDN in this config field.

Each web app works fine on its own. However, when I cross from the Resource
Manager to the Node Manager web app, the href URL uses the FQDN that I don't
want. Obviously this is a dead link for the user and can only be fixed if they
copy and paste the appropriate IP address for the node (not a pleasant user
experience).

I suppose it makes sense to use the FQDN for the link text in the web app, but
not for the actual URL when the IP was specified.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


MAPREDUCE-5528

2015-03-18 Thread Ewan Higgs

Hi all,
MAPREDUCE-5528 has been open for about 18 months - the whole time with a 
working patch. Can someone please [take a look and verify it themselves 
and] merge it? I've tested it on GPFS and it worked for me. If you need 
me to mail a PR to a particular address, let me know and I'll send it.


https://issues.apache.org/jira/browse/MAPREDUCE-5528

MAPREDUCE-5050 is a duplicate of this ticket albeit sans patch.

Kind regards,
Ewan Higgs