Re: When are incompatible changes acceptable (HDFS-12990)

2018-01-11 Thread Chris Douglas
On Thu, Jan 11, 2018 at 6:34 PM Tsz Wo Sze  wrote:

> The question is: how are we going to fix it?
>

What do you propose? -C

> No incompatible changes are allowed between 3.0.0 and 3.0.1. Dot releases
> only allow bug fixes.
>
> We may not like the statement above but it is our compatibility policy.
> We should either follow the policy or revise it.
>
> Some more questions:
>
>- What if someone is already using 3.0.0 and has changed all the
>scripts to 9820?  Just let them fail?
>- Compared to 2.x, 3.0.0 has many incompatible changes. Are we going
>to have other incompatible changes in future minor and dot releases?
>What are the criteria for deciding which incompatible changes are allowed?
>- I hate that we have prematurely released 3.0.0 and would make 3.0.1
>incompatible with 3.0.0. If the "bug" is that serious, why not fix it in
>4.0.0 and declare 3.x dead?
>- It seems obvious that no one seriously tested this, so the problem
>was not uncovered until now. Are there bugs in our current release
>procedure?
>
>
> Thanks
> Tsz-Wo
>
>
>
> On Thursday, January 11, 2018, 11:36:33 AM GMT+8, Chris Douglas <
> cdoug...@apache.org> wrote:
>
>
> Isn't this limited to reverting the 8020 -> 9820 change? -C
>
> On Wed, Jan 10, 2018 at 6:13 PM Eric Yang  wrote:
>
> > The fix in HDFS-9427 can potentially bring in new customers because there
> > is less chance of a newcomer encountering the “port already in use”
> > problem.  If we make the change according to HDFS-12990, this incompatible
> > change does not make the earlier incompatible change compatible, since the
> > other ports are not reverted by HDFS-12990.  Users will still encounter
> > the bad taste in the mouth that HDFS-9427 attempted to solve.  Please do
> > consider the negative side effects of reverting as well as those of an
> > incompatible minor-release change.  Thanks
> >
> > Regards,
> > Eric
> >
> > From: larry mccay 
> > Date: Wednesday, January 10, 2018 at 10:53 AM
> > To: Daryn Sharp 
> > Cc: "Aaron T. Myers" , Eric Yang  >,
> > Chris Douglas , Hadoop Common <
> > common-dev@hadoop.apache.org>
> > Subject: Re: When are incompatible changes acceptable (HDFS-12990)
> >
> > On Wed, Jan 10, 2018 at 1:34 PM, Daryn Sharp <da...@oath.com> wrote:
> >
> > I fully agree the port changes should be reverted.  Although
> > "incompatible", the potential impact on existing 2.x deploys is huge.
> > I'd rather inconvenience 3.0 deploys, which comprise <1% of customers.
> > An incompatible change to revert an incompatible change is called
> > compatibility.
> >
> > +1
> >
> >
> >
> >
> > Most importantly, consider that there is no good upgrade path for
> > existing deploys, esp. large and/or multi-cluster environments.  It's
> > only feasible for first-time deploys or simple single-cluster upgrades
> > willing to take downtime.  Let's consider a few reasons why:
> >
> >
> >
> > 1. Rolling upgrade (RU) is completely broken.  Running jobs will fail.
> > If MR on HDFS bundles the configs, there's no way to transparently
> > coordinate the switch to the new bundle with the changed port.  Job
> > submissions will fail.
> >
> >
> >
> > 2. Users generally do not add the RPC port number to URIs, so unless
> > their configs are updated they will contact the wrong port.  Seamlessly
> > coordinating the conf change without massive failures is impossible.
> >
> >
> >
> > 3. Even if client confs are updated, they will break in a multi-cluster
> > env with NNs using different ports.  Users/services will be forced to
> > add the port.  The cited Hive "issue" is not a bug, since adding the
> > port is the only way to work in a multi-port env.
> >
> >
> >
> > 4. Coordinating the port addition/change in URIs in systems everywhere
> > (you know something will be missed), updating confs, restarting all
> > services, and requiring customers to redeploy their workflows in sync
> > with the NN upgrade will cause mass disruption and downtime that is
> > unacceptable for production environments.
> >
> >
> >
> > This is a solution to a non-existent problem.  Ports can be bound by
> > multiple processes, but only one can listen.  Maybe multiple listeners
> > are an issue for compute nodes, but not for responsibly managed service
> > nodes.  I.e., who runs arbitrary services on the NNs that bind to
> > random ports?  Besides, the default port is and was ephemeral, so it
> > solved nothing.
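[Editor's note: the bind/listen behavior and the "port already in use" failure mode debated in this thread can be demonstrated with plain JDK sockets. A minimal sketch, nothing Hadoop-specific; the class name is illustrative:]

```java
import java.io.IOException;
import java.net.ServerSocket;

public class PortConflictDemo {
    public static void main(String[] args) throws IOException {
        // Bind and listen on an OS-assigned ephemeral port (0), standing in
        // for whichever daemon grabbed the port first.
        try (ServerSocket first = new ServerSocket(0)) {
            int port = first.getLocalPort();
            boolean conflict = false;
            // A second attempt to bind the same fixed port fails with
            // java.net.BindException ("Address already in use") -- the
            // failure mode HDFS-9427 aimed to make less likely by moving
            // defaults out of the ephemeral range.
            try (ServerSocket second = new ServerSocket(port)) {
                // not reached on a typical host
            } catch (IOException e) {
                conflict = true;
            }
            System.out.println(conflict);
        }
    }
}
```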
> >
> >
> >
> > This either standardizes ports to a particular customer's ports or is a
> > poorly thought-out whim.  In either case, the needs of the many outweigh
> > the needs of the few/none (3.0 users).  The only logical conclusion is
> > to revert.  If a particular site wants to change default ports and deal
> > with the massive fallout, they can explicitly change the ports
> > themselves.
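[Editor's note: in practice, "explicitly change the ports themselves" means pinning the NameNode RPC port in the site configs. A hedged sketch below; `fs.defaultFS` and `dfs.namenode.rpc-address` are the standard Hadoop keys, while `nn.example.com` is a placeholder host and `8020` the pre-3.0 default used for illustration:]

```xml
<!-- core-site.xml: clients and services resolve the NameNode through this URI. -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://nn.example.com:8020</value>
</property>

<!-- hdfs-site.xml: the NameNode RPC listener; must agree with the URI above. -->
<property>
  <name>dfs.namenode.rpc-address</name>
  <value>nn.example.com:8020</value>
</property>
```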
> >
> >
> >
> > Daryn
> >
> > On Tue, Jan 9, 2018 at 11:22 PM, Aaron T. Myers  

Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-01-11 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/654/

[Jan 10, 2018 6:52:26 PM] (szegedim) HADOOP-15060.
[Jan 11, 2018 6:59:27 AM] (aajisaka) YARN-7735. Fix typo in YARN documentation. 
Contributed by Takanobu
[Jan 11, 2018 10:47:50 AM] (stevel) HADOOP-15033. Use java.util.zip.CRC32C for 
Java 9 and above Contributed




-1 overall


The following subsystems voted -1:
findbugs unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
   org.apache.hadoop.yarn.api.records.Resource.getResources() may expose 
internal representation by returning Resource.resources At Resource.java:by 
returning Resource.resources At Resource.java:[line 234] 
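[Editor's note: the warning above is FindBugs' standard EI_EXPOSE_REP pattern. A generic, hedged illustration of why it is flagged and the usual defensive-copy remedy; the names below are illustrative and do not reflect Hadoop's actual `Resource` internals:]

```java
import java.util.Arrays;

class Holder {
    private final long[] resources = {1024, 2};

    // Flagged pattern: returning the internal array lets callers mutate
    // private state.
    long[] getResourcesUnsafe() {
        return resources;
    }

    // Conventional remedy: hand out a defensive copy.
    long[] getResources() {
        return resources.clone();
    }
}

public class ExposeRepDemo {
    public static void main(String[] args) {
        Holder h = new Holder();
        h.getResourcesUnsafe()[0] = 0;  // mutates Holder's private state
        h.getResources()[1] = 0;        // mutates only the returned copy
        System.out.println(Arrays.toString(h.getResources()));
    }
}
```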

Failed junit tests :

   hadoop.hdfs.TestBlocksScheduledCounter 
   hadoop.yarn.server.nodemanager.webapp.TestContainerLogsPage 
   
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerResizing 
   hadoop.yarn.server.TestContainerManagerSecurity 
   
hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageSchema 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/654/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/654/artifact/out/diff-compile-javac-root.txt
  [280K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/654/artifact/out/diff-checkstyle-root.txt
  [17M]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/654/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/654/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/654/artifact/out/diff-patch-shelldocs.txt
  [12K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/654/artifact/out/whitespace-eol.txt
  [9.2M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/654/artifact/out/whitespace-tabs.txt
  [292K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/654/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/654/artifact/out/diff-javadoc-javadoc-root.txt
  [760K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/654/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [292K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/654/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
  [44K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/654/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [80K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/654/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [376K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/654/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
  [20K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/654/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase-tests.txt
  [28K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/654/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/654/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/654/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-unmanaged-am-launcher.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/654/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/654/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-common.txt
  [4.0K]
   

Re: When are incompatible changes acceptable (HDFS-12990)

2018-01-11 Thread Aaron T. Myers
Yes indeed, that's the proposal being discussed on HDFS-12990 - just to
revert the default NN RPC port change, and none of the other port changes.
The other default port changes actually do have some technical benefit, and
I believe are far less likely to be embedded in databases, scripts, tests,
etc. in real deployments.

Best,
Aaron

On Thu, Jan 11, 2018 at 11:42 AM, larry mccay  wrote:

> No, the proposal was to only fix the NN port change - as I understood it.

Re: When are incompatible changes acceptable (HDFS-12990)

2018-01-11 Thread larry mccay
No, the proposal was to only fix the NN port change - as I understood it.


Re: When are incompatible changes acceptable (HDFS-12990)

2018-01-11 Thread Eric Yang
If I am reading this correctly, Daryn and Larry are in favor of a complete
revert instead of a namenode-only one.  Please chime in if I am wrong.  This
is why I am trying to explore each perspective: to understand the cost of
each option.  It appears that opinions are fragmented, and only one choice
will serve the needs of the majority of the community.  It would be good for
a PMC member to call a vote at a reasonable pace, to address this issue and
reduce the pain for either side.

Regards,
Eric

From: Chris Douglas 
Date: Wednesday, January 10, 2018 at 7:36 PM
To: Eric Yang 
Cc: "Aaron T. Myers" , Daryn Sharp , Hadoop 
Common , larry mccay 
Subject: Re: When are incompatible changes acceptable (HDFS-12990)

Isn't this limited to reverting the 8020 -> 9820 change? -C


[jira] [Created] (HADOOP-15168) Add kdiag and HadoopKerberosName tools to hadoop command

2018-01-11 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HADOOP-15168:
---

 Summary: Add kdiag and HadoopKerberosName tools to hadoop command
 Key: HADOOP-15168
 URL: https://issues.apache.org/jira/browse/HADOOP-15168
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15167) [viewfs] ViewFileSystem.InternalDirOfViewFs#getFileStatus shouldn't depend on UGI#getPrimaryGroupName

2018-01-11 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HADOOP-15167:
-

 Summary: [viewfs] ViewFileSystem.InternalDirOfViewFs#getFileStatus 
shouldn't depend on UGI#getPrimaryGroupName
 Key: HADOOP-15167
 URL: https://issues.apache.org/jira/browse/HADOOP-15167
 Project: Hadoop Common
  Issue Type: Bug
  Components: viewfs
Affects Versions: 2.7.0
Reporter: Brahma Reddy Battula



Have a secure federated cluster with at least two nameservices
Configure the viewfs-related configs

When we run the {{ls}} cmd in the HDFS client, we call the method
org.apache.hadoop.fs.viewfs.ViewFileSystem.InternalDirOfViewFs#getFileStatus,
which tries to get the group of the Kerberos user.  If the node does not have
this user, it fails.

UserGroupInformation#getPrimaryGroupName throws the following and exits:

{code}
if (groups.isEmpty()) {
  throw new IOException("There is no primary group for UGI " + this);
}
{code}
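[Editor's note: one possible direction — a hypothetical sketch, not the committed fix — is to degrade to the user name when no group resolves, instead of propagating the IOException quoted above. The UGI surface is reduced to a stub here:]

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class GroupFallbackDemo {
    // Hypothetical guard: mirrors the empty-group check quoted above, but
    // falls back to the user name rather than throwing.
    static String primaryGroupOrFallback(String user, List<String> groups) {
        return groups.isEmpty() ? user : groups.get(0);
    }

    public static void main(String[] args) {
        // User unknown to the local node: no groups resolve.
        System.out.println(primaryGroupOrFallback("hdfs", Collections.emptyList()));
        // Normal case: first resolved group wins.
        System.out.println(primaryGroupOrFallback("hdfs", Arrays.asList("hadoop", "users")));
    }
}
```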



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15166) CLI MiniCluster fails with ClassNotFoundException o.a.h.yarn.server.timelineservice.collector.TimelineCollectorManager

2018-01-11 Thread Gera Shegalov (JIRA)
Gera Shegalov created HADOOP-15166:
--

 Summary: CLI MiniCluster fails with ClassNotFoundException 
o.a.h.yarn.server.timelineservice.collector.TimelineCollectorManager
 Key: HADOOP-15166
 URL: https://issues.apache.org/jira/browse/HADOOP-15166
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Gera Shegalov
Assignee: Gera Shegalov


Following CLIMiniCluster.md.vm to start the minicluster fails due to:
{code}
Caused by: java.lang.ClassNotFoundException: 
org.apache.hadoop.yarn.server.timelineservice.collector.TimelineCollectorManager
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:338)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 62 more
{code}





--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org