[jira] [Created] (HADOOP-13910) TestKMS Unable to parse:includedir /etc/krb5.conf.d/

2016-12-14 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-13910:
---

 Summary: TestKMS Unable to parse:includedir /etc/krb5.conf.d/
 Key: HADOOP-13910
 URL: https://issues.apache.org/jira/browse/HADOOP-13910
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0-alpha2
 Environment: $ cat /etc/centos-release
CentOS Linux release 7.2.1511 (Core) 
$ mvn -version
Apache Maven 3.0.5 (Red Hat 3.0.5-17)
Maven home: /usr/share/maven
Java version: 1.8.0_111, vendor: Oracle Corporation
Java home: /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.111-2.b15.el7_3.x86_64/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "3.10.0-327.36.3.el7.x86_64", arch: "amd64", family: "unix"
Reporter: John Zhuge


Saw the following error when testing {{TestKMS}}:
{noformat}
testSpecialKeyNames(org.apache.hadoop.crypto.key.kms.server.TestKMS)  Time elapsed: 1.175 sec  <<< ERROR!
java.lang.RuntimeException: Unable to parse:includedir /etc/krb5.conf.d/
    at org.apache.kerby.kerberos.kerb.common.Krb5Parser.load(Krb5Parser.java:72)
    at org.apache.kerby.kerberos.kerb.common.Krb5Conf.addKrb5Config(Krb5Conf.java:47)
    at org.apache.kerby.kerberos.kerb.client.ClientUtil.getDefaultConfig(ClientUtil.java:94)
    at org.apache.kerby.kerberos.kerb.client.KrbClientBase.<init>(KrbClientBase.java:51)
    at org.apache.kerby.kerberos.kerb.client.KrbClient.<init>(KrbClient.java:38)
    at org.apache.kerby.kerberos.kerb.server.SimpleKdcServer.<init>(SimpleKdcServer.java:54)
    at org.apache.hadoop.minikdc.MiniKdc.start(MiniKdc.java:280)
    at org.apache.hadoop.crypto.key.kms.server.TestKMS.setUpMiniKdc(TestKMS.java:265)
    at org.apache.hadoop.crypto.key.kms.server.TestKMS.setUpMiniKdc(TestKMS.java:285)
    at org.apache.hadoop.crypto.key.kms.server.TestKMS.setUp(TestKMS.java:99)

testKMSRestartKerberosAuth(org.apache.hadoop.crypto.key.kms.server.TestKMS)  Time elapsed: 1.004 sec  <<< ERROR!
java.lang.RuntimeException: Unable to parse:includedir /etc/krb5.conf.d/
    at org.apache.kerby.kerberos.kerb.common.Krb5Parser.load(Krb5Parser.java:72)
    at org.apache.kerby.kerberos.kerb.common.Krb5Conf.addKrb5Config(Krb5Conf.java:47)
    at org.apache.kerby.kerberos.kerb.client.ClientUtil.getDefaultConfig(ClientUtil.java:94)
    at org.apache.kerby.kerberos.kerb.client.KrbClientBase.<init>(KrbClientBase.java:51)
    at org.apache.kerby.kerberos.kerb.client.KrbClient.<init>(KrbClient.java:38)
    at org.apache.kerby.kerberos.kerb.server.SimpleKdcServer.<init>(SimpleKdcServer.java:54)
    at org.apache.hadoop.minikdc.MiniKdc.start(MiniKdc.java:280)
    at org.apache.hadoop.crypto.key.kms.server.TestKMS.setUpMiniKdc(TestKMS.java:265)
    at org.apache.hadoop.crypto.key.kms.server.TestKMS.setUpMiniKdc(TestKMS.java:285)
    at org.apache.hadoop.crypto.key.kms.server.TestKMS.setUp(TestKMS.java:99)
{noformat}
{{TestKDiag}} fails with the same error. After removing the {{includedir}} directive from {{/etc/krb5.conf}}, the tests passed.

See http://bugs.java.com/bugdatabase/view_bug.do?bug_id=8029994.






[jira] [Reopened] (HADOOP-13709) Ability to clean up subprocesses spawned by Shell when the process exits

2016-12-14 Thread Wei-Chiu Chuang (JIRA)

 [ https://issues.apache.org/jira/browse/HADOOP-13709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wei-Chiu Chuang reopened HADOOP-13709:
--

[~ebadger] I am sorry to reopen this issue. Would you please help fix the 
compilation issue with Java 7? Or should we consider reverting this commit?

> Ability to clean up subprocesses spawned by Shell when the process exits
> 
>
> Key: HADOOP-13709
> URL: https://issues.apache.org/jira/browse/HADOOP-13709
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.2.0
>Reporter: Eric Badger
>Assignee: Eric Badger
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13709.001.patch, HADOOP-13709.002.patch, 
> HADOOP-13709.003.patch, HADOOP-13709.004.patch, HADOOP-13709.005.patch, 
> HADOOP-13709.006.patch, HADOOP-13709.007.patch, HADOOP-13709.008.patch, 
> HADOOP-13709.009.patch, HADOOP-13709.009.patch
>
>
> The runCommand code in Shell.java can get into a situation where it 
> ignores InterruptedExceptions and refuses to shut down because it is blocked 
> on I/O, waiting for the return value of the subprocess that was spawned. We 
> need to allow the subprocess to be interrupted and killed when the shell 
> process gets killed. Currently the JVM will shut down and all of the 
> subprocesses will be orphaned rather than killed.
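
The general technique (a sketch only, under the assumption that Shell tracks spawned processes in a static set; this is not the committed patch) is to register a JVM shutdown hook that destroys whatever is still running:

{code}
// Sketch: CHILD_PROCESSES and its registration points are hypothetical.
private static final java.util.Set<Process> CHILD_PROCESSES =
    java.util.Collections.newSetFromMap(
        new java.util.concurrent.ConcurrentHashMap<Process, Boolean>());

static {
  Runtime.getRuntime().addShutdownHook(new Thread() {
    @Override
    public void run() {
      // Kill any subprocess still running at JVM exit so children
      // are no longer orphaned.
      for (Process p : CHILD_PROCESSES) {
        p.destroy();
      }
    }
  });
}
{code}

Processes would be added to the set right after {{ProcessBuilder#start()}} and removed once they exit.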






[jira] [Created] (HADOOP-13909) Add serviceRestart function to the class AbstractService so that we could hot start some service or thread.

2016-12-14 Thread zhengchenyu (JIRA)
zhengchenyu created HADOOP-13909:


 Summary: Add serviceRestart function to the class AbstractService 
so that we could hot start some service or thread.
 Key: HADOOP-13909
 URL: https://issues.apache.org/jira/browse/HADOOP-13909
 Project: Hadoop Common
  Issue Type: Wish
  Components: common
Reporter: zhengchenyu
Priority: Minor


In our cluster, we found that the web service was not available; the Jetty 
server probably did not close its sockets properly, so many connections were 
stuck in the CLOSE_WAIT state. We decided to restart the ResourceManager, and 
the problem was solved. But note that the other services of the 
ResourceManager were normal.
In this case, we had to restart the whole ResourceManager process just because 
of the web service. The other services were paused by the restart even though 
they were working normally.
I wish to add a serviceRestart function so that we could restart only the web 
service thread when it is not working. This operation would not influence the 
other services.
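
A rough sketch of the idea (hypothetical only: the current {{AbstractService}} state model forbids starting a STOPPED service again, so this assumes a relaxed lifecycle; it is not an existing API):

{code}
// Hypothetical addition to AbstractService, assuming the state model
// is relaxed to permit STOPPED -> STARTED transitions.
public synchronized void serviceRestart() throws Exception {
  serviceStop();   // release sockets, threads and other resources
  serviceStart();  // bring the service back up in place
}
{code}

A composite service such as the ResourceManager could then restart a single child (e.g. the web server) without touching its siblings.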






Re: Maven build: YARN timeline service downloading maven-metadata from personal repository?

2016-12-14 Thread Sangjin Lee
Wait. The errors you're seeing have nothing to do with this or HBASE-16749.
You seem to be hitting an issue with connecting to repository.apache.org.
The original issue was with hitting http://people.apache.org/~garyh/mvn.
This seems to be a different issue. Could you please drill into what's
happening?

On Wed, Dec 14, 2016 at 11:01 AM, Wangda Tan  wrote:

> Thanks folks for the reply. I tried changing some Maven parameters (like
> useCache=true), but it still doesn't work.
>
> The output looks like:
>
> [INFO] ------------------------------------------------------------------------
> [INFO] Building Apache Hadoop YARN Timeline Service 3.0.0-alpha2-SNAPSHOT
> [INFO] ------------------------------------------------------------------------
> Downloading: https://repository.apache.org/content/repositories/releases/org/apache/hadoop/hadoop-client/3.0.0-alpha2-SNAPSHOT/maven-metadata.xml
> Downloading: https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-client/3.0.0-alpha2-SNAPSHOT/maven-metadata.xml
> Downloading: http://conjars.org/repo/org/apache/hadoop/hadoop-client/3.0.0-alpha2-SNAPSHOT/maven-metadata.xml
> Downloading: https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-client/3.0.0-alpha2-SNAPSHOT/maven-metadata.xml
> Downloading: https://oss.sonatype.org/content/repositories/snapshots/org/apache/hadoop/hadoop-client/3.0.0-alpha2-SNAPSHOT/maven-metadata.xml
> Downloading: http://repository.apache.org/snapshots/org/apache/hadoop/hadoop-client/3.0.0-alpha2-SNAPSHOT/maven-metadata.xml
> Downloading: http://people.apache.org/~garyh/mvn/org/apache/hadoop/hadoop-client/3.0.0-alpha2-SNAPSHOT/maven-metadata.xml
> [WARNING] Could not transfer metadata org.apache.hadoop:hadoop-client:3.0.0-alpha2-SNAPSHOT/maven-metadata.xml from/to apache.snapshots.https (https://repository.apache.org/content/repositories/snapshots): Connect to repository.apache.org:443 [repository.apache.org/207.244.88.143] failed: Operation timed out
> [WARNING] Could not transfer metadata org.apache.hadoop:hadoop-client:3.0.0-alpha2-SNAPSHOT/maven-metadata.xml from/to apache release (https://repository.apache.org/content/repositories/releases/): Connect to repository.apache.org:443 [repository.apache.org/207.244.88.143] failed: Operation timed out
> [WARNING] Could not transfer metadata org.apache.hadoop:hadoop-client:3.0.0-alpha2-SNAPSHOT/maven-metadata.xml from/to apache snapshot (https://repository.apache.org/content/repositories/snapshots/): Connect to repository.apache.org:443 [repository.apache.org/207.244.88.143] failed: Operation timed out
> [WARNING] Could not transfer metadata org.apache.hadoop:hadoop-client:3.0.0-alpha2-SNAPSHOT/maven-metadata.xml from/to apache.snapshots (http://repository.apache.org/snapshots): Connect to repository.apache.org:80 [repository.apache.org/207.244.88.143] failed: Operation timed out
> Downloading: https://repository.apache.org/content/repositories/releases/org/apache/hadoop/hadoop-mapreduce-client-app/3.0.0-alpha2-SNAPSHOT/maven-metadata.xml
> Downloading: http://conjars.org/repo/org/apache/hadoop/hadoop-mapreduce-client-app/3.0.0-alpha2-SNAPSHOT/maven-metadata.xml
> Downloading: https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-mapreduce-client-app/3.0.0-alpha2-SNAPSHOT/maven-metadata.xml
> Downloading: https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-mapreduce-client-app/3.0.0-alpha2-SNAPSHOT/maven-metadata.xml
> Downloading: https://oss.sonatype.org/content/repositories/snapshots/org/apache/hadoop/hadoop-mapreduce-client-app/3.0.0-alpha2-SNAPSHOT/maven-metadata.xml
> Downloading: http://repository.apache.org/snapshots/org/apache/hadoop/hadoop-mapreduce-client-app/3.0.0-alpha2-SNAPSHOT/maven-metadata.xml
>
> Are there any other configurations I could try to work around the issue?
>
> Thanks,
> Wangda
>
>
> On Wed, Dec 14, 2016 at 9:22 AM, Vrushali Channapattan <vrush...@twitter.com> wrote:
>
>> Yes, let's continue the discussion on jira https://issues.apache.org/jira
>> /browse/YARN-5976
>>
>> I have captured this thread there as well.
>>
>> On Tue, Dec 13, 2016 at 11:27 PM, Joep Rottinghuis wrote:
>>
>>> What I'm concerned we can remove the 

[jira] [Created] (HADOOP-13908) Existing tables may not be initialized correctly in DynamoDBMetadataStore

2016-12-14 Thread Mingliang Liu (JIRA)
Mingliang Liu created HADOOP-13908:
--

 Summary: Existing tables may not be initialized correctly in 
DynamoDBMetadataStore
 Key: HADOOP-13908
 URL: https://issues.apache.org/jira/browse/HADOOP-13908
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: HADOOP-13345
Reporter: Mingliang Liu
Assignee: Mingliang Liu


This is based on discussion in [HADOOP-13455]. Though we should not create the 
table unless the config {{fs.s3a.s3guard.ddb.table.create}} is set to true, we 
still have to retrieve the existing table in {{DynamoDBMetadataStore#initialize()}} 
and wait for it to become active before any table/item operations.
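
A minimal sketch of the intended {{initialize()}} behaviour (assuming the AWS SDK v1 document API; the class and method names here are illustrative, not the final patch):

{code}
import com.amazonaws.services.dynamodbv2.document.DynamoDB;
import com.amazonaws.services.dynamodbv2.document.Table;

class TableInit {
  // Look up the existing table and block until it is ACTIVE, rather
  // than creating it, when fs.s3a.s3guard.ddb.table.create is false.
  static Table initExistingTable(DynamoDB dynamoDB, String tableName)
      throws InterruptedException {
    Table table = dynamoDB.getTable(tableName);  // no network call yet
    table.waitForActive();  // polls DescribeTable until status is ACTIVE
    return table;
  }
}
{code}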






[jira] [Created] (HADOOP-13907) Fix KerberosUtil#getDefaultRealm() on Windows

2016-12-14 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HADOOP-13907:
---

 Summary: Fix KerberosUtil#getDefaultRealm() on Windows
 Key: HADOOP-13907
 URL: https://issues.apache.org/jira/browse/HADOOP-13907
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Xiaoyu Yao


Running the unit test 
{{TestWebDelegationToken#testKerberosDelegationTokenAuthenticator}} on Windows 
fails with {{java.lang.IllegalArgumentException: Can't get Kerberos realm}}.
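
For reference, the usual defensive pattern around this call (a sketch only; whether the eventual fix looks anything like this is an assumption) tolerates hosts with no Kerberos configuration instead of propagating the failure:

{code}
import org.apache.hadoop.security.authentication.util.KerberosUtil;

class RealmLookup {
  // Sketch: fall back to an empty realm instead of propagating the
  // reflective failure on hosts without a krb5 configuration.
  static String getRealmOrEmpty() {
    try {
      return KerberosUtil.getDefaultRealm();
    } catch (Exception e) {
      return "";  // e.g. Windows test hosts with no Kerberos setup
    }
  }
}
{code}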






Re: Maven build: YARN timeline service downloading maven-metadata from personal repository?

2016-12-14 Thread Li Lu
Let’s remove the Phoenix dependency if that’s blocking builds now. The offline 
aggregator can be added later.

Li Lu

On Dec 13, 2016, at 23:27, Joep Rottinghuis wrote:

As far as I'm concerned, we can remove the Phoenix dependency, since it isn't 
really used right now, and deal with adding it back in later.
However, there is PhoenixOfflineAggregationWriterImpl.java, which does 
import org.apache.phoenix.util.PropertiesUtil;

If everybody is cool to remove that for now, then we can deal with Phoenix as 
and when we're ready to add the offline aggregation.

Cheers,

Joep

On Tue, Dec 13, 2016 at 9:01 PM, Vrushali Channapattan wrote:
Hmm, using HBase 1.2.4 may not be possible since Phoenix 4.8.1 needs 1.2.0. Or 
we could think about adding in the Phoenix dependency later.

Thanks
Vrushali


On Dec 13, 2016, at 8:54 PM, Ted Yu wrote:

bq. out of which 1.2.4 is released

Actually 1.2.4 has already been released. 1.1.8 RC is being voted upon.

FYI

On Tue, Dec 13, 2016 at 5:48 PM, Sangjin Lee wrote:
According to HBASE-16749, the fix went into HBase 1.2.4 and 1.1.8 (out of
which 1.2.4 is released). To resolve this issue, we'd need to upgrade to
1.2.4 or later.


Sangjin

On Tue, Dec 13, 2016 at 3:41 PM, Vrushali Channapattan <vchannapat...@twitter.com> wrote:

> Yes, I think bumping up the HBase version to 1.2 should help with this
> build-time issue. I will start looking into this upgrade right away.
>
> Thanks
> Vrushali
>
> > On Dec 13, 2016, at 3:02 PM, Li Lu wrote:
> >
> > I could not reproduce this issue locally, but it may be related to some
> > local Maven repos. Could it be related to the private repo issue of HBase?
> > If so, bumping up the HBase dependency version of the YARN timeline
> > module might help.
> >
> > +Sangjin, Vrushali, and Joep: In YARN-5976 we’re proposing to bump the
> > HBase dependency version up to 1.2. Shall we prioritize that JIRA? Thanks!
> >
> > Li Lu
> >
> >> On Dec 13, 2016, at 14:43, Wangda Tan wrote:
> >>
> >> Hi folks,
> >>
> >> It looks like HBASE-16749 is fixed, and Phoenix version is updated (per
> >> Li). But I'm still experiencing slow build of ATSv2 component:
> >>
> >> [INFO] Apache Hadoop YARN ................................. SUCCESS [  1.378 s]
> >> [INFO] Apache Hadoop YARN API ............................. SUCCESS [ 10.559 s]
> >> [INFO] Apache Hadoop YARN Common .......................... SUCCESS [  6.993 s]
> >> [INFO] Apache Hadoop YARN Server .......................... SUCCESS [  0.057 s]
> >> [INFO] Apache Hadoop YARN Server Common ................... SUCCESS [  2.266 s]
> >> [INFO] Apache Hadoop YARN NodeManager ..................... SUCCESS [  4.075 s]
> >> [INFO] Apache Hadoop YARN Web Proxy ....................... SUCCESS [  0.924 s]
> >> [INFO] Apache Hadoop YARN ApplicationHistoryService ....... SUCCESS [  1.549 s]
> >> [INFO] Apache Hadoop YARN Timeline Service ................ SUCCESS [05:14 min]
> >> [INFO] Apache Hadoop YARN ResourceManager ................. SUCCESS [  8.554 s]
> >> [INFO] Apache Hadoop YARN Server Tests .................... SUCCESS [  1.561 s]
> >> [INFO] Apache Hadoop YARN Client .......................... SUCCESS [  1.321 s]
> >> [INFO] Apache Hadoop YARN SharedCacheManager .............. SUCCESS [  0.843 s]
> >> [INFO] Apache Hadoop YARN Timeline Plugin Storage ......... SUCCESS [  0.949 s]
> >> [INFO] Apache Hadoop YARN Timeline Service HBase tests .... SUCCESS [  3.137 s]
> >> [INFO] Apache Hadoop YARN Applications .................... SUCCESS [  0.055 s]
> >> [INFO] Apache Hadoop YARN DistributedShell ................ SUCCESS [  0.807 s]
> >> [INFO] Apache Hadoop YARN Unmanaged Am Launcher ........... SUCCESS [  0.602 s]
> >> [INFO] Apache Hadoop YARN Site ............................ SUCCESS [  0.060 s]
> >> [INFO] Apache Hadoop YARN Registry ........................ SUCCESS [  0.910 s]
> >> [INFO] Apache Hadoop YARN UI .............................. SUCCESS [  0.072 s]
> >> [INFO] Apache Hadoop YARN Project ......................... SUCCESS [  0.749 s]
> >> [INFO]
> >> ------------------------------------------------------------------------
> >> [INFO] BUILD SUCCESS
> >> [INFO]
> >> ------------------------------------------------------------------------
> >> [INFO] Total time: 06:02 min
> >>
> >> This doesn't happen every time I run a build on the latest Hadoop trunk,
> >> but I can often see it happen.
> >>
> >> Thoughts about how to solve it?
> >>
> >> Thanks,
> >> Wangda
> >>
> >>
> >>
> >>> On Tue, Oct 4, 2016 at 6:50 PM, Sangjin Lee 
> >>> 

Re: Maven build: YARN timeline service downloading maven-metadata from personal repository?

2016-12-14 Thread Wangda Tan
Thanks folks for the reply. I tried changing some Maven parameters (like
useCache=true), but it still doesn't work.

The output looks like:

[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop YARN Timeline Service 3.0.0-alpha2-SNAPSHOT
[INFO] ------------------------------------------------------------------------
Downloading: https://repository.apache.org/content/repositories/releases/org/apache/hadoop/hadoop-client/3.0.0-alpha2-SNAPSHOT/maven-metadata.xml
Downloading: https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-client/3.0.0-alpha2-SNAPSHOT/maven-metadata.xml
Downloading: http://conjars.org/repo/org/apache/hadoop/hadoop-client/3.0.0-alpha2-SNAPSHOT/maven-metadata.xml
Downloading: https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-client/3.0.0-alpha2-SNAPSHOT/maven-metadata.xml
Downloading: https://oss.sonatype.org/content/repositories/snapshots/org/apache/hadoop/hadoop-client/3.0.0-alpha2-SNAPSHOT/maven-metadata.xml
Downloading: http://repository.apache.org/snapshots/org/apache/hadoop/hadoop-client/3.0.0-alpha2-SNAPSHOT/maven-metadata.xml
Downloading: http://people.apache.org/~garyh/mvn/org/apache/hadoop/hadoop-client/3.0.0-alpha2-SNAPSHOT/maven-metadata.xml
[WARNING] Could not transfer metadata org.apache.hadoop:hadoop-client:3.0.0-alpha2-SNAPSHOT/maven-metadata.xml from/to apache.snapshots.https (https://repository.apache.org/content/repositories/snapshots): Connect to repository.apache.org:443 [repository.apache.org/207.244.88.143] failed: Operation timed out
[WARNING] Could not transfer metadata org.apache.hadoop:hadoop-client:3.0.0-alpha2-SNAPSHOT/maven-metadata.xml from/to apache release (https://repository.apache.org/content/repositories/releases/): Connect to repository.apache.org:443 [repository.apache.org/207.244.88.143] failed: Operation timed out
[WARNING] Could not transfer metadata org.apache.hadoop:hadoop-client:3.0.0-alpha2-SNAPSHOT/maven-metadata.xml from/to apache snapshot (https://repository.apache.org/content/repositories/snapshots/): Connect to repository.apache.org:443 [repository.apache.org/207.244.88.143] failed: Operation timed out
[WARNING] Could not transfer metadata org.apache.hadoop:hadoop-client:3.0.0-alpha2-SNAPSHOT/maven-metadata.xml from/to apache.snapshots (http://repository.apache.org/snapshots): Connect to repository.apache.org:80 [repository.apache.org/207.244.88.143] failed: Operation timed out
Downloading: https://repository.apache.org/content/repositories/releases/org/apache/hadoop/hadoop-mapreduce-client-app/3.0.0-alpha2-SNAPSHOT/maven-metadata.xml
Downloading: http://conjars.org/repo/org/apache/hadoop/hadoop-mapreduce-client-app/3.0.0-alpha2-SNAPSHOT/maven-metadata.xml
Downloading: https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-mapreduce-client-app/3.0.0-alpha2-SNAPSHOT/maven-metadata.xml
Downloading: https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-mapreduce-client-app/3.0.0-alpha2-SNAPSHOT/maven-metadata.xml
Downloading: https://oss.sonatype.org/content/repositories/snapshots/org/apache/hadoop/hadoop-mapreduce-client-app/3.0.0-alpha2-SNAPSHOT/maven-metadata.xml
Downloading: http://repository.apache.org/snapshots/org/apache/hadoop/hadoop-mapreduce-client-app/3.0.0-alpha2-SNAPSHOT/maven-metadata.xml

Are there any other configurations I could try to work around the issue?

Thanks,
Wangda


On Wed, Dec 14, 2016 at 9:22 AM, Vrushali Channapattan  wrote:

> Yes, let's continue the discussion on jira https://issues.apache.org/
> jira/browse/YARN-5976
>
> I have captured this thread there as well.
>
> On Tue, Dec 13, 2016 at 11:27 PM, Joep Rottinghuis wrote:
>
>> As far as I'm concerned, we can remove the Phoenix dependency, since it
>> isn't really used right now, and deal with adding it back in later.
>> However, there is PhoenixOfflineAggregationWriterImpl.java, which does
>> import org.apache.phoenix.util.PropertiesUtil;
>>
>> If everybody is cool to remove that for now, then we can deal with
>> Phoenix as and when we're ready to add the offline aggregation.
>>
>> Cheers,
>>
>> Joep
>>
>> On Tue, Dec 13, 2016 at 9:01 PM, Vrushali Channapattan <vchannapat...@twitter.com> wrote:
>>
>>> Hmm, using hbase 1.2.4 may not be possible since Phoenix 4.8.1 needs
>>> 1.2.0. Or 

Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le

2016-12-14 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/186/

[Dec 13, 2016 6:34:32 PM] (liuml07) HADOOP-13900. Remove snapshot version of SDK dependency from Azure Data
[Dec 13, 2016 10:55:09 PM] (jlowe) HADOOP-13709. Ability to clean up subprocesses spawned by Shell when the
[Dec 14, 2016 1:09:58 AM] (uma.gangumalla) HDFS-11164: Mover should avoid unnecessary retries if the block is
[Dec 14, 2016 2:01:31 AM] (wang) HDFS-10684. WebHDFS DataNode calls fail without parameter createparent.
[Dec 14, 2016 4:49:54 AM] (jianhe) Revert YARN-4126. RM should not issue delegation tokens in unsecure
[Dec 14, 2016 6:50:50 AM] (kai.zheng) HDFS-8411. Add bytes count metrics to datanode for ECWorker. Contributed
[Dec 14, 2016 9:50:43 AM] (aajisaka) HDFS-11204. Document the missing options of hdfs zkfc command.




-1 overall


The following subsystems voted -1:
compile unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.security.token.delegation.web.TestWebDelegationToken 
   hadoop.crypto.key.kms.server.TestKMS 
   hadoop.hdfs.TestTrashWithSecureEncryptionZones 
   hadoop.hdfs.TestSecureEncryptionZoneWithKMS 
   hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer 
   hadoop.tracing.TestTracing 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.yarn.server.nodemanager.recovery.TestNMLeveldbStateStoreService 
   hadoop.yarn.server.nodemanager.TestNodeManagerShutdown 
   hadoop.yarn.server.timeline.TestRollingLevelDB 
   hadoop.yarn.server.timeline.TestTimelineDataManager 
   hadoop.yarn.server.timeline.TestLeveldbTimelineStore 
   hadoop.yarn.server.timeline.webapp.TestTimelineWebServices 
   hadoop.yarn.server.timeline.recovery.TestLeveldbTimelineStateStore 
   hadoop.yarn.server.timeline.TestRollingLevelDBTimelineStore
   hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer
   hadoop.yarn.server.timelineservice.storage.common.TestRowKeys
   hadoop.yarn.server.timelineservice.storage.common.TestKeyConverters
   hadoop.yarn.server.timelineservice.storage.common.TestSeparator
   hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore
   hadoop.yarn.server.resourcemanager.TestRMRestart
   hadoop.yarn.server.resourcemanager.TestResourceTrackerService
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization
   hadoop.yarn.server.TestContainerManagerSecurity
   hadoop.yarn.server.timeline.TestLevelDBCacheTimelineStore
   hadoop.yarn.server.timeline.TestOverrideTimelineStoreYarnClient
   hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore
   hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageApps
   hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRunCompaction
   hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageEntities
   hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun
   hadoop.yarn.server.timelineservice.storage.TestPhoenixOfflineAggregationWriterImpl
   hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage
   hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowActivity
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapred.TestShuffleHandler 
   hadoop.mapreduce.v2.hs.TestHistoryServerLeveldbStateStoreService 
   hadoop.mapreduce.TestMRJobClient 

Timed out junit tests :

   org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache 
  

   compile:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/186/artifact/out/patch-compile-root.txt
  [164K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/186/artifact/out/patch-compile-root.txt
  [164K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/186/artifact/out/patch-compile-root.txt
  [164K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/186/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [128K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/186/artifact/out/patch-unit-hadoop-common-project_hadoop-kms.txt
  [56K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/186/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [416K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/186/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
  [52K]
   

[jira] [Created] (HADOOP-13906) Azure TestFileSystemOperationExceptionMessage needlessly reruns all tests in NativeAzureFileSystemBaseTest

2016-12-14 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13906:
---

 Summary: Azure TestFileSystemOperationExceptionMessage needlessly 
reruns all tests in NativeAzureFileSystemBaseTest
 Key: HADOOP-13906
 URL: https://issues.apache.org/jira/browse/HADOOP-13906
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/azure, test
Affects Versions: 3.0.0-alpha2
Reporter: Steve Loughran
Priority: Minor


Azure tests are taking too long. One factor is the test 
{{TestFileSystemOperationExceptionMessage}}:
{code}
Running org.apache.hadoop.fs.azure.TestFileSystemOperationExceptionMessage
Tests run: 47, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 248.301 sec - in org.apache.hadoop.fs.azure.TestFileSystemOperationExceptionMessage
{code}

All this test is trying to do is add one new test method, 
{{testAnonymouseCredentialExceptionMessage}}, but because it subclasses 
{{NativeAzureFileSystemBaseTest}} it picks up the entire base set of tests, 
50+ of them. Hence the 4+ minutes of execution time.
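
A minimal sketch of the obvious fix (illustrative only; the final patch may instead hoist any shared setup into a lighter-weight helper): make the class a standalone JUnit test so that only its own methods run.

{code}
import org.junit.Test;

// Sketch: stop extending NativeAzureFileSystemBaseTest so the 50+
// inherited test cases no longer run with this class.
public class TestFileSystemOperationExceptionMessage {

  @Test
  public void testAnonymouseCredentialExceptionMessage() throws Exception {
    // create the anonymous-credential filesystem and assert on the
    // exception message here, with only this test's own setup
  }
}
{code}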






[jira] [Created] (HADOOP-13905) Cannot run wordcount example when there's a mounttable configured with a link to s3a.

2016-12-14 Thread Oleg Khaschansky (JIRA)
Oleg Khaschansky created HADOOP-13905:
-

 Summary: Cannot run wordcount example when there's a mounttable 
configured with a link to s3a.
 Key: HADOOP-13905
 URL: https://issues.apache.org/jira/browse/HADOOP-13905
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 2.8.0
Reporter: Oleg Khaschansky


Have a 3-node setup: nn/slave/client. The client's default fs is viewfs with 
the following mount table:



<property>
  <name>fs.viewfs.mounttable.hadoopDemo.homedir</name>
  <value>/home</value>
</property>
<property>
  <name>fs.viewfs.mounttable.hadoopDemo.link./home</name>
  <value>hdfs://namenode:9000</value>
</property>
<property>
  <name>fs.viewfs.mounttable.hadoopDemo.link./tmp</name>
  <value>hdfs://namenode:9000/tmp</value>
</property>
<property>
  <name>fs.viewfs.mounttable.hadoopDemo.link./user</name>
  <value>hdfs://namenode:9000/user</value>
</property>
<property>
  <name>fs.viewfs.mounttable.hadoopDemo.link./s3a</name>
  <value>s3a://cloudply-hadoop-demo/</value>
</property>



s3a credentials are configured in core-site.xml on the client node. I am able 
to view/modify the /s3a mount contents with hdfs commands. When I run the 
wordcount example with this command (even without accessing the s3a share):

hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/sources/hadoop-mapreduce-examples-2.8.0-SNAPSHOT-sources.jar org.apache.hadoop.examples.WordCount /home/input /home/output

it fails with the following exception:

16/12/14 16:08:33 INFO client.RMProxy: Connecting to ResourceManager at namenode/172.18.0.2:8032
16/12/14 16:08:33 INFO mapreduce.Cluster: Failed to use org.apache.hadoop.mapred.YarnClientProtocolProvider due to error:
java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
    at org.apache.hadoop.fs.AbstractFileSystem.newInstance(AbstractFileSystem.java:136)
    at org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:165)
    at org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:250)
    at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:342)
    at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:339)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1711)
    at org.apache.hadoop.fs.FileContext.getAbstractFileSystem(FileContext.java:339)
    at org.apache.hadoop.fs.FileContext.getFileContext(FileContext.java:456)
    at org.apache.hadoop.fs.FileContext.getFileContext(FileContext.java:482)
    at org.apache.hadoop.mapred.YARNRunner.<init>(YARNRunner.java:148)
    at org.apache.hadoop.mapred.YARNRunner.<init>(YARNRunner.java:132)
    at org.apache.hadoop.mapred.YARNRunner.<init>(YARNRunner.java:122)
    at org.apache.hadoop.mapred.YarnClientProtocolProvider.create(YarnClientProtocolProvider.java:34)
    at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:111)
    at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:98)
    at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:91)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1311)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1307)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1711)
    at org.apache.hadoop.mapreduce.Job.connect(Job.java:1307)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1335)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1359)
    at org.apache.hadoop.examples.WordCount.main(WordCount.java:87)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:234)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.fs.AbstractFileSystem.newInstance(AbstractFileSystem.java:134)
    ... 32 more
Caused by: org.apache.hadoop.HadoopIllegalArgumentException: FileSystem implementation error -  default port -1 is not valid
    at org.apache.hadoop.fs.AbstractFileSystem.getUri(AbstractFileSystem.java:306)
    at 

Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2016-12-14 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/255/

[Dec 13, 2016 6:34:32 PM] (liuml07) HADOOP-13900. Remove snapshot version of SDK dependency from Azure Data
[Dec 13, 2016 10:55:09 PM] (jlowe) HADOOP-13709. Ability to clean up subprocesses spawned by Shell when the
[Dec 14, 2016 1:09:58 AM] (uma.gangumalla) HDFS-11164: Mover should avoid unnecessary retries if the block is
[Dec 14, 2016 2:01:31 AM] (wang) HDFS-10684. WebHDFS DataNode calls fail without parameter createparent.
[Dec 14, 2016 4:49:54 AM] (jianhe) Revert YARN-4126. RM should not issue delegation tokens in unsecure




-1 overall


The following subsystems voted -1:
asflicense compile findbugs unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.net.TestDNS 
   hadoop.security.token.delegation.web.TestWebDelegationToken 
   hadoop.crypto.key.kms.server.TestKMS 
   hadoop.hdfs.TestTrashWithSecureEncryptionZones 
   hadoop.hdfs.TestSecureEncryptionZoneWithKMS 
   hadoop.yarn.server.timeline.webapp.TestTimelineWebServices 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.server.TestContainerManagerSecurity 
  

   compile:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/255/artifact/out/patch-compile-root.txt
  [220K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/255/artifact/out/patch-compile-root.txt
  [220K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/255/artifact/out/patch-compile-root.txt
  [220K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/255/artifact/out/diff-checkstyle-root.txt
  [16M]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/255/artifact/out/diff-patch-pylint.txt
  [20K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/255/artifact/out/diff-patch-shellcheck.txt
  [28K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/255/artifact/out/diff-patch-shelldocs.txt
  [16K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/255/artifact/out/whitespace-eol.txt
  [11M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/255/artifact/out/whitespace-tabs.txt
  [1.3M]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/255/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [4.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/255/artifact/out/diff-javadoc-javadoc-root.txt
  [2.2M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/255/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [128K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/255/artifact/out/patch-unit-hadoop-common-project_hadoop-kms.txt
  [56K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/255/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [152K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/255/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/255/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [316K]

   asflicense:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/255/artifact/out/patch-asflicense-problems.txt
  [4.0K]

Powered by Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org




[jira] [Created] (HADOOP-13904) DynamoDBMetadataStore to handle DDB throttling failures through retry policy

2016-12-14 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13904:
---

 Summary: DynamoDBMetadataStore to handle DDB throttling failures 
through retry policy
 Key: HADOOP-13904
 URL: https://issues.apache.org/jira/browse/HADOOP-13904
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: HADOOP-13345
Reporter: Steve Loughran


When you overload DDB, you get error messages warning of throttling, [as 
documented by 
AWS|http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Programming.Errors.html#Programming.Errors.MessagesAndCodes].

Reduce load on DDB by doing a table lookup before the create; then, in table 
create/delete operations and in get/put actions, recognise the throttling 
error codes and retry using an appropriate retry policy (exponential backoff 
+ ultimate failure).
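
A minimal sketch of the retry loop (illustrative only; the constants and the helper name are assumptions, and the real patch would likely reuse Hadoop's {{RetryPolicies}} instead of hand-rolling the loop):

{code}
import java.util.concurrent.Callable;
import com.amazonaws.AmazonServiceException;

class DdbRetry {
  // Illustrative constants, not the final defaults.
  private static final int MAX_RETRIES = 9;
  private static final long BASE_DELAY_MS = 100;

  // Run a DDB operation, retrying throttled calls with exponential
  // backoff and rethrowing once retries are exhausted (ultimate failure).
  static <T> T retryOnThrottle(Callable<T> op) throws Exception {
    for (int attempt = 0; ; attempt++) {
      try {
        return op.call();
      } catch (AmazonServiceException e) {
        boolean throttled =
            "ProvisionedThroughputExceededException".equals(e.getErrorCode());
        if (!throttled || attempt >= MAX_RETRIES) {
          throw e;
        }
        Thread.sleep(BASE_DELAY_MS << attempt);  // 100ms, 200ms, 400ms, ...
      }
    }
  }
}
{code}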







[jira] [Resolved] (HADOOP-9208) Fix release audit warnings

2016-12-14 Thread Akira Ajisaka (JIRA)

 [ https://issues.apache.org/jira/browse/HADOOP-9208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Akira Ajisaka resolved HADOOP-9208.
---
Resolution: Invalid

The issue no longer occurs. Closing this.

> Fix release audit warnings
> --
>
> Key: HADOOP-9208
> URL: https://issues.apache.org/jira/browse/HADOOP-9208
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ted Yu
>
> The following files should be excluded from rat check:
> ./hadoop-common-project/hadoop-common/src/main/docs/src/documentation/resources/images/hdfsarchitecture.odg
> ./hadoop-common-project/hadoop-common/src/main/docs/src/documentation/resources/images/hdfsdatanodes.odg
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/resources/images/FI-framework.odg
> ./hadoop-hdfs-project/hadoop-hdfs/src/site/resources/images/hdfsarchitecture.odg
> ./hadoop-hdfs-project/hadoop-hdfs/src/site/resources/images/hdfsdatanodes.odg






[jira] [Created] (HADOOP-13903) KMS does not provide any useful debug information

2016-12-14 Thread Tristan Stevens (JIRA)
Tristan Stevens created HADOOP-13903:


 Summary: KMS does not provide any useful debug information
 Key: HADOOP-13903
 URL: https://issues.apache.org/jira/browse/HADOOP-13903
 Project: Hadoop Common
  Issue Type: Bug
  Components: kms
Affects Versions: 2.9.0, 3.0.0-alpha2
Reporter: Tristan Stevens


At the moment there are no debug- or trace-level logs generated for KMS 
authorisation decisions. For users to understand what is going on in a given 
scenario, these would be invaluable.

The code should endeavour to keep as much work as possible off the sunny-day 
code path.
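
A sketch of the kind of logging being asked for (illustrative only; the logger name and the authorisation hook are assumptions): SLF4J parameterized logging behind an {{isDebugEnabled()}} guard keeps the sunny-day path essentially free.

{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class AuthzLogSketch {
  private static final Logger LOG = LoggerFactory.getLogger("kms-authz");

  // Hypothetical hook invoked on every authorisation decision.
  void logDecision(String user, String op, String key, boolean allowed) {
    // The guard skips argument boxing and varargs allocation when debug
    // logging is off, so the common path does almost no extra work.
    if (LOG.isDebugEnabled()) {
      LOG.debug("authz user={} op={} key={} allowed={}",
          user, op, key, allowed);
    }
  }
}
{code}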


