Hadoop 3.3 Release Plan Proposal

2020-01-07 Thread Brahma Reddy Battula
Hi All,



To continue a faster cadence of releases to accommodate more features, we could 
plan a Hadoop 3.3 release around mid-March.



To start the process sooner and to establish a timeline, I propose to target 
the Hadoop 3.3.0 release by mid-March 2020 (about two months from now).



I would also like to take this opportunity to come up with a detailed plan.



Feature Freeze Date: all features should be merged by Feb 28, 2020.



Code Freeze Date: March 10, 2020. After this date, only blocker/critical 
fixes; no improvements or non-blocker/critical bug fixes.



Release Date: March 15, 2020



I have compiled a list of features on my radar which could be candidates:



1. Merged & Completed features:

  *   HDFS-13891: HDFS RBF stabilization phase 1 (owner: Brahma)
  *   HDFS-12345: Scale testing HDFS NameNode with real metadata and workloads 
(Dynamometer) (owner: Erik Krogen)
  *   HDFS-13762: Support non-volatile storage class memory (SCM) in HDFS 
cache directives (owner: Feilong He)
  *   HADOOP-16095: Support impersonation for AuthenticationFilter (owner: 
Eric Yang)
  *   YARN-7129: Application Catalog for YARN applications (owner: Eric Yang)
  *   YARN-5542: Scheduling of opportunistic containers (owner: Konstantinos 
Karanasos)
  *   YARN-9473: Support Vector Engine (a new accelerator hardware) based on 
the pluggable device framework (owner: Peter Bacsko)
  *   YARN-9264: [Umbrella] Follow-up on IntelOpenCL FPGA plugin (owner: Peter 
Bacsko)
  *   YARN-9145: [Umbrella] Dynamically add or remove auxiliary services



2. Features close to finish:



  *   HADOOP-13363: Upgrade protobuf from 2.5.0 to something newer (owner: 
Vinay)
  *   YARN-1011: Schedule containers based on utilization of currently 
allocated containers (owner: Haibo Chen)
  *   YARN-9698: [Umbrella] Tools to help migration from Fair Scheduler to 
Capacity Scheduler (owner: Weiwei Yang)
  *   YARN-9050: [Umbrella] Usability improvements for scheduler activities 
(owner: Tao Yang)
  *   YARN-8851: [Umbrella] A pluggable device plugin framework to ease vendor 
plugin development (owner: Zhankun Tang)
  *   YARN-9014: runC container runtime (owner: Eric Badger)
  *   HADOOP-15620: Über-JIRA: S3A phase VI: Hadoop 3.3 features (owner: Steve 
Loughran)
  *   HADOOP-15763: Über-JIRA: ABFS phase II: Hadoop 3.3 features & fixes 
(owner: Steve Loughran)
  *   HADOOP-15619: Über-JIRA: S3Guard Phase IV: Hadoop 3.3 features (owner: 
Steve Loughran)
  *   HADOOP-15338: Support Java 11 LTS in Hadoop (owner: Akira Ajisaka)



3. Summary of Issues Status



There are 1,781 issues fixed in 3.3.0 (1), which is a very big number.



13 blocker and critical issues are open (2). I will follow up with the owners 
to get the status of each so they can land by the code freeze date.



Please let me know if I missed any features targeted to 3.3 per this timeline. 
I would like to volunteer as release manager for the 3.3.0 release.



Please let me know if you have any suggestions.

Reference:


   1) project in (YARN, HADOOP, MAPREDUCE, HDFS) AND resolution = Fixed AND 
fixVersion = 3.3.0

   2) project in (YARN, HADOOP, MAPREDUCE, HDFS) AND priority in (Blocker, 
Critical) AND resolution = Unresolved AND "TargetVersion/s" = 3.3.0 
ORDER BY priority DESC



Note:

i) I added the owners based on the JIRA assignee and reporter; please correct 
me if I got any wrong.
ii) I will update the cwiki.


Regards,
Brahma Reddy Battula




Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2020-01-07 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1375/

[Jan 7, 2020 2:10:32 AM] (tasanuma) HDFS-15066. HttpFS: Implement 
setErasureCodingPolicy ,
[Jan 7, 2020 11:17:37 AM] (stevel) HADOOP-16645. S3A Delegation Token extension 
point to use StoreContext.
[Jan 7, 2020 11:49:24 AM] (aajisaka) HADOOP-16773. Fix duplicate assertj-core 
dependency in hadoop-common
[Jan 7, 2020 6:05:47 PM] (stevel) HADOOP-16699. Add verbose TRACE logging to 
ABFS.
[Jan 7, 2020 11:46:14 PM] (hanishakoneru) HADOOP-16727. KMS Jetty server does 
not startup if trust store password

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org

[jira] [Created] (YARN-10073) Intraqueue preemption doesn't work across partitions

2020-01-07 Thread Paul Jones (Jira)
Paul Jones created YARN-10073:
-

 Summary: Intraqueue preemption doesn't work across partitions
 Key: YARN-10073
 URL: https://issues.apache.org/jira/browse/YARN-10073
 Project: Hadoop YARN
  Issue Type: Bug
  Components: capacity scheduler, capacityscheduler, scheduler 
preemption
Affects Versions: 2.8.5
Reporter: Paul Jones


Cluster:

1 Node with label "A"

yarn.scheduler.capacity.root.accessible-node-labels=*

yarn.resourcemanager.monitor.capacity.preemption.intra-queue-preemption.enabled=true

yarn.scheduler.capacity.root.default.minimum-user-limit-percent=50

 

User 1: submit job Y, requiring 10x the cluster resources, to queue "default" 
using label ""

User 2: (after job Y starts) submit job Z to queue "default" using label ""

 

What we see: job Z doesn't start until job Y releases resources. This happens 
because the pending requests for jobs Y and Z are in partition "". However, 
queue default is using resources in partition "A". Pending requests in 
partition "" don't trigger intra-queue preemption in partition "A".
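For reference, the settings above map to standard YARN configuration entries 
like the following (a minimal excerpt; file placement and the other properties 
a working cluster needs are assumed, not part of this report):

```xml
<!-- yarn-site.xml: enable intra-queue preemption monitoring -->
<property>
  <name>yarn.resourcemanager.monitor.capacity.preemption.intra-queue-preemption.enabled</name>
  <value>true</value>
</property>

<!-- capacity-scheduler.xml -->
<property>
  <name>yarn.scheduler.capacity.root.accessible-node-labels</name>
  <value>*</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.default.minimum-user-limit-percent</name>
  <value>50</value>
</property>
```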



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




Re: [DISCUSS] About creation of Hadoop Thirdparty repository for shaded artifacts

2020-01-07 Thread Brahma Reddy Battula
Hi Sree Vaddi, Owen, stack, Duo Zhang,

We can move forward based on your comments; just waiting for your reply. I 
hope all of your comments have been answered. (Unification we can discuss in 
a parallel thread, as Vinay mentioned.)



On Mon, 6 Jan 2020 at 6:21 PM, Vinayakumar B 
wrote:

> Hi Sree,
>
> > apache/hadoop-thirdparty, How would it fit into ASF ? As an Incubating
> Project ? Or as a TLP ?
> > Or as a new project definition ?
> As already mentioned by Ayush, this will be a subproject of Hadoop.
> Releases will be voted by Hadoop PMC as per ASF process.
>
>
> > The effort to streamline and put in an accepted standard for the
> dependencies that require shading,
> > seems beyond the siloed efforts of hadoop, hbase, etc
>
> >I propose, we bring all the decision makers from all these artifacts in
> one room and decide best course of action.
> > I am looking at, no projects should ever had to shade any artifacts
> except as an absolute necessary alternative.
>
> This is the ideal proposal for any project. But unfortunately some projects
> take their own course based on need.
>
> In the current case of protobuf in Hadoop,
> Protobuf upgrade from 2.5.0 (which is already EOL) was not taken up to
> avoid downstream failures. Since Hadoop is a platform, its dependencies
> will get added to downstream projects' classpath. So any change in Hadoop's
> dependencies will directly affect downstreams. Hadoop strictly follows
> backward compatibility as far as possible.
> Though protobuf provides wire compatibility b/w versions, it doesn't
> provide compatibility for generated sources.
> Now, to support ARM, a protobuf upgrade is mandatory. Using the shading
> technique, Hadoop can internally upgrade to shaded protobuf 3.x and
> still ship the (deprecated) 2.5.0 protobuf for downstreams.
>
> This shading is necessary to have both versions of protobuf supported.
> (2.5.0 (non-shaded) for downstream's classpath and 3.x (shaded) for
> hadoop's internal usage).
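The shading described above is typically done with the maven-shade-plugin's 
package relocation; a minimal sketch follows (plugin configuration only; the 
relocated package name and surrounding settings are illustrative, not 
necessarily what hadoop-thirdparty ships):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <relocations>
          <!-- Rewrite protobuf 3.x classes into a private package so the
               unshaded 2.5.0 artifact can coexist on downstream classpaths. -->
          <relocation>
            <pattern>com.google.protobuf</pattern>
            <shadedPattern>org.apache.hadoop.thirdparty.protobuf</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```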
> And this entire work to be done before 3.3.0 release.
>
> So, though its ideal to make a common approach for all projects, I suggest
> for Hadoop we can go ahead as per current approach.
> We can also start the parallel effort to address these problems in a
> separate discussion/proposal. Once the solution is available we can revisit
> and adopt new solution accordingly in all such projects (ex: HBase, Hadoop,
> Ratis).
>
> -Vinay
>
> On Mon, Jan 6, 2020 at 12:39 AM Ayush Saxena  wrote:
>
> > Hey Sree
> >
> > > apache/hadoop-thirdparty, How would it fit into ASF ? As an Incubating
> > > Project ? Or as a TLP ?
> > > Or as a new project definition ?
> > >
> > A sub project of Apache Hadoop, having its own independent release
> cycles.
> > May be you can put this into the same column as ozone or as
> > submarine(couple of months ago).
> >
> > Unifying for all seems interesting, but each project is independent and
> > has its own limitations and way of thinking. I don't think it would be an
> > easy task to bring everyone to the same table and get them to agree on
> > common tooling.
> >
> > I guess this has been under discussion for quite a while, and no other
> > alternative has been suggested. Still, we can hold up for a week; if
> > someone comes up with a better solution, we can adopt it, else we can
> > continue in the present direction.
> >
> > -Ayush
> >
> >
> >
> > On Sun, 5 Jan 2020 at 05:03, Sree Vaddi  .invalid>
> > wrote:
> >
> > > apache/hadoop-thirdparty, How would it fit into ASF ? As an Incubating
> > > Project ? Or as a TLP ?
> > > Or as a new project definition ?
> > >
> > > The effort to streamline and put in an accepted standard for the
> > > dependencies that require shading seems beyond the siloed efforts of
> > > hadoop, hbase, etc.
> > >
> > > I propose we bring all the decision makers from all these artifacts into
> > > one room and decide the best course of action. I am looking at it as: no
> > > project should ever have to shade any artifacts except as an absolutely
> > > necessary alternative.
> > >
> > >
> > > Thank you./Sree
> > >
> > >
> > >
> > > On Saturday, January 4, 2020, 7:49:18 AM PST, Vinayakumar B <
> > > vinayakum...@apache.org> wrote:
> > >
> > >  Hi,
> > > Sorry for the late reply.
> > > >>> To be exact, how can we better use the thirdparty repo? Looking at
> > > HBase as an example, it looks like everything that is known to break a
> > > lot after an update gets shaded into the hbase-thirdparty artifact:
> > > guava, netty, etc.
> > > Is it the purpose to isolate these naughty dependencies?
> > > Yes, shading is to isolate these naughty dependencies from the
> > > downstream classpath and to have independent control over these
> > > upgrades without breaking downstreams.
> > >
> > > First PR https://github.com/apache/hadoop-thirdparty/pull/1 to create
> > the
> > > protobuf shaded jar is ready to merge.
> > >
> > > Please take a look if interested; it will be merged, maybe after two
> > > days, if there are no objections.
> > >
> > > -Vinay
> > >
> > >
> > > 

Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2020-01-07 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1374/

[Jan 6, 2020 3:31:13 AM] (aajisaka) HDFS-15089. RBF: SmallFix for RBFMetrics in 
doc (#1786)
[Jan 6, 2020 3:36:11 AM] (aajisaka) MAPREDUCE-7255. Fix typo in MapReduce 
documentaion example (#1793)
[Jan 6, 2020 9:09:59 AM] (tasanuma) HDFS-15090. RBF: MountPoint Listing Should 
Return Flag Values Of
[Jan 6, 2020 3:26:33 PM] (snemeth) YARN-10035. Add ability to filter the 
Cluster Applications API request
[Jan 6, 2020 4:16:11 PM] (snemeth) YARN-10026. Pull out common code pieces from 
ATS v1.5 and v2.
[Jan 6, 2020 6:24:16 PM] (eyang) YARN-9956. Improved connection error message 
for YARN ApiServerClient.  
[Jan 6, 2020 7:10:39 PM] (stevel) HDFS-14788. Use dynamic regex filter to 
ignore copy of source files in




-1 overall


The following subsystems voted -1:
asflicense findbugs pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
 
   Class org.apache.hadoop.applications.mawo.server.common.TaskStatus 
implements Cloneable but does not define or use clone method At 
TaskStatus.java:does not define or use clone method At TaskStatus.java:[lines 
39-346] 
   Equals method for 
org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument 
is of type WorkerId At WorkerId.java:the argument is of type WorkerId At 
WorkerId.java:[line 114] 
   
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does 
not check for null argument At WorkerId.java:null argument At 
WorkerId.java:[lines 114-115] 
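For illustration, the null/type-safe equals pattern FindBugs is asking for 
looks like the sketch below (the fields of the real WorkerId class are 
assumptions; this is not the mawo code):

```java
import java.util.Objects;

// Sketch of a defensive equals/hashCode; field names are illustrative.
class WorkerId {
    private final String hostname;
    private final String ipAddress;

    WorkerId(String hostname, String ipAddress) {
        this.hostname = hostname;
        this.ipAddress = ipAddress;
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj) {
            return true;
        }
        // instanceof is false for null and for other types, covering both
        // FindBugs complaints (null argument, assumed argument type).
        if (!(obj instanceof WorkerId)) {
            return false;
        }
        WorkerId other = (WorkerId) obj;
        return Objects.equals(hostname, other.hostname)
            && Objects.equals(ipAddress, other.ipAddress);
    }

    @Override
    public int hashCode() {
        return Objects.hash(hostname, ipAddress);
    }
}
```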

FindBugs :

   module:hadoop-cloud-storage-project/hadoop-cos 
   Redundant nullcheck of dir, which is known to be non-null in 
org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at 
BufferPool.java:is known to be non-null in 
org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at 
BufferPool.java:[line 66] 
   org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may 
expose internal representation by returning CosNInputStream$ReadBuffer.buffer 
At CosNInputStream.java:by returning CosNInputStream$ReadBuffer.buffer At 
CosNInputStream.java:[line 87] 
   Found reliance on default encoding in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, 
byte[]):in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, 
File, byte[]): new String(byte[]) At CosNativeFileSystemStore.java:[line 199] 
   Found reliance on default encoding in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, 
InputStream, byte[], long):in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, 
InputStream, byte[], long): new String(byte[]) At 
CosNativeFileSystemStore.java:[line 178] 
   org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, 
String, String, int) may fail to clean up java.io.InputStream Obligation to 
clean up resource created at CosNativeFileSystemStore.java:fail to clean up 
java.io.InputStream Obligation to clean up resource created at 
CosNativeFileSystemStore.java:[line 252] is not discharged 
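The default-encoding findings come from `new String(byte[])`, whose result 
depends on the JVM's platform charset. A sketch of the usual fix (class and 
method names here are illustrative, not the hadoop-cos code):

```java
import java.nio.charset.StandardCharsets;

class EncodingFix {
    // Flagged pattern: decoding depends on the platform default charset.
    static String decodeWithDefaultCharset(byte[] raw) {
        return new String(raw);
    }

    // Fix: name the charset explicitly so behavior is deterministic.
    static String decodeUtf8(byte[] raw) {
        return new String(raw, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        byte[] raw = "object-key".getBytes(StandardCharsets.UTF_8);
        System.out.println(decodeUtf8(raw)); // prints "object-key"
    }
}
```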

Failed junit tests :

   hadoop.hdfs.server.namenode.TestRedudantBlocks 
   hadoop.hdfs.TestDFSInotifyEventInputStreamKerberized 
   hadoop.hdfs.TestDeadNodeDetection 
   
hadoop.yarn.server.resourcemanager.reservation.TestCapacityOverTimePolicy 
   
hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageDomain 
   hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun 
   
hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowActivity 
   

[jira] [Created] (YARN-10072) TestCSAllocateCustomResource failures

2020-01-07 Thread Jim Brennan (Jira)
Jim Brennan created YARN-10072:
--

 Summary: TestCSAllocateCustomResource failures
 Key: YARN-10072
 URL: https://issues.apache.org/jira/browse/YARN-10072
 Project: Hadoop YARN
  Issue Type: Test
  Components: yarn
Affects Versions: 2.10.0
Reporter: Jim Brennan


This test is failing for us consistently in our internal 2.10 based branch.






Re: [DISCUSS] Hadoop 2019 Release Planning

2020-01-07 Thread Steve Loughran
I'm thinking of doing a backport of most of the hadoop-aws changes to
branch-3.2, for the next 3.2.x release; they are all self contained and
will benefit many (they will need to cope with the older mockito version,
but I have to deal with that in-house already).

One change is the new openFile() builder API. I'd like to wrap that up with
a little improvement, https://issues.apache.org/jira/browse/HADOOP-16759;
that way, for all releases with the API, it's consistent.

(that withStatus() feature gives extra performance and ensures that
etag/version can be used to get the explicit version you want.)

On Tue, Jan 7, 2020 at 2:18 AM Akira Ajisaka  wrote:

> > I am interested in the 3.3 release and will act as RM; I will update the
> > wiki as well.
>
> Thanks Brahma for your reply. I'll help you as co-RM.
> We will send announcements (cutting branches, code freeze, and so on) in
> another thread.
>
> Thanks,
> Akira
>
> On Tue, Jan 7, 2020 at 4:32 AM Wangda Tan  wrote:
>
> > Hi guys,
> >
> > Thanks for the update and for volunteering to be RM.
> >
> > I just did a quick check:
> > 3.1.4 has 52 patches resolved. (3.1.3 Released on Oct 21)
> > 3.2.2 has 46 patches resolved. (3.2.1 Released on Sep 22)
> > 3.3.0 has .. many patches sitting here so we definitely need a release.
> >
> > If Akira and Brahma you guys can be co-RMs for 3.3.0 that would be great.
> >
> > Hadoop 3.2.1 was released on Sep 22, 3+ months ago, and I have seen the
> > community start to run large prod deployments on 3.2.x. Gabor, if you
> > have bandwidth to help with releases, I think we can do 3.2.2 first and
> > then 3.1.4.
> >
> > Thoughts?
> > - Wangda
> >
> > On Mon, Jan 6, 2020 at 5:50 AM Brahma Reddy Battula 
> > wrote:
> >
> >> Thanks Akira for resuming this..
> >>
> >> I am interested in the 3.3 release and will act as RM; I will update the
> >> wiki as well.
> >>
> >>
> >>
> >> On Mon, 6 Jan 2020 at 6:08 PM, Gabor Bota  .invalid>
> >> wrote:
> >>
> >>> I'm interested in doing a release of hadoop.
> >>> The version we need an RM for is 3.1.3, right? What's the target date
> >>> for that?
> >>>
> >>> Thanks,
> >>> Gabor
> >>>
> >>> On Mon, Jan 6, 2020 at 8:31 AM Akira Ajisaka 
> >>> wrote:
> >>>
> >>> > Thank you Wangda.
> >>> >
> >>> > Now it's 2020. Let's release Hadoop 3.3.0.
> >>> > I created a wiki page for tracking blocker/critical issues for 3.3.0
> >>> and
> >>> > I'll check the issues in the list.
> >>> >
> https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+3.3+Release
> >>> > If you find blocker/critical issues in trunk, please set the target
> >>> version
> >>> > to 3.3.0 for tracking.
> >>> >
> >>> > > We still need RM for 3.3.0 and 3.1.3.
> >>> > I can work as a release manager for 3.3.0. Is there anyone who wants
> >>> to be
> >>> > a RM?
> >>> >
> >>> > Thanks and regards,
> >>> > Akira
> >>> >
> >>> > On Fri, Aug 16, 2019 at 9:28 PM zhankun tang 
> >>> > wrote:
> >>> >
> >>> > > Thanks Wangda for bringing this up!
> >>> > >
> >>> > > I ran the submarine 0.2.0 release before with a lot of help from
> >>> > > folks, especially Sunil. :D
> >>> > > And this time I would like to help release 3.1.4. Thanks!
> >>> > >
> >>> > > BR,
> >>> > > Zhankun
> >>> > >
> >>> > > Hui Fei 于2019年8月16日 周五下午7:19写道:
> >>> > >
> >>> > > > Hi Wangda,
> >>> > > > Thanks for bringing this up!
> >>> > > > Looking forward to seeing HDFS 3.x widely used, but RollingUpgrade
> >>> > > > is a problem.
> >>> > > > Hope committers watch and review these issues. Thanks!
> >>> > > > https://issues.apache.org/jira/browse/HDFS-13596
> >>> > > > https://issues.apache.org/jira/browse/HDFS-14396
> >>> > > >
> >>> > > > Wangda Tan  于2019年8月10日周六 上午10:59写道:
> >>> > > >
> >>> > > > > Hi all,
> >>> > > > >
> >>> > > > > Hope this email finds you well
> >>> > > > >
> >>> > > > > I want to hear your thoughts about what should be the release
> >>> plan
> >>> > for
> >>> > > > > 2019.
> >>> > > > >
> >>> > > > > In 2018, we released:
> >>> > > > > - 1 maintenance release of 2.6
> >>> > > > > - 3 maintenance releases of 2.7
> >>> > > > > - 3 maintenance releases of 2.8
> >>> > > > > - 3 releases of 2.9
> >>> > > > > - 4 releases of 3.0
> >>> > > > > - 2 releases of 3.1
> >>> > > > >
> >>> > > > > Total 16 releases in 2018.
> >>> > > > >
> >>> > > > > In 2019, by far we only have two releases:
> >>> > > > > - 1 maintenance release of 3.1
> >>> > > > > - 1 minor release of 3.2.
> >>> > > > >
> >>> > > > > However, the community put a lot of efforts to stabilize
> >>> features of
> >>> > > > > various release branches.
> >>> > > > > There're:
> >>> > > > > - 217 fixed patches in 3.1.3 [1]
> >>> > > > > - 388 fixed patches in 3.2.1 [2]
> >>> > > > > - 1172 fixed patches in 3.3.0 [3] (OMG!)
> >>> > > > >
> >>> > > > > I think it is the time to do maintenance releases of 3.1/3.2
> and
> >>> do a
> >>> > > > minor
> >>> > > > > release for 3.3.0.
> >>> > > > >
> >>> > > > > In addition, I saw community discussion to do a 2.8.6 release
> for
> >>> > > > security
> >>> > > > > fixes.
> 

[jira] [Created] (YARN-10071) Sync Mockito version with other modules

2020-01-07 Thread Akira Ajisaka (Jira)
Akira Ajisaka created YARN-10071:


 Summary: Sync Mockito version with other modules
 Key: YARN-10071
 URL: https://issues.apache.org/jira/browse/YARN-10071
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: build, test
Reporter: Akira Ajisaka


YARN-8551 introduced a Mockito 1.x dependency; update it to sync with other 
modules.






[jira] [Created] (YARN-10070) NPE if no rule is defined and application-tag-based-placement is enabled

2020-01-07 Thread Kinga Marton (Jira)
Kinga Marton created YARN-10070:
---

 Summary: NPE if no rule is defined and 
application-tag-based-placement is enabled
 Key: YARN-10070
 URL: https://issues.apache.org/jira/browse/YARN-10070
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Kinga Marton
Assignee: Kinga Marton


If there is no rule defined for a user, an NPE is thrown by the following line:
{code:java}
String queue = placementManager
 .placeApplication(context, usernameUsedForPlacement).getQueue();{code}
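A null-safe version of that lookup might look like the sketch below. The stub 
interfaces and the fallback-to-default behavior are assumptions for 
illustration, not the actual YARN classes or the eventual fix:

```java
// Minimal, self-contained sketch of guarding the placement lookup.
class PlacementSketch {
    interface PlacementRuleResult {
        String getQueue();
    }

    interface PlacementManager {
        // Assumed to return null when no placement rule matches the user.
        PlacementRuleResult placeApplication(Object context, String user);
    }

    static final String DEFAULT_QUEUE = "default"; // assumed fallback

    static String resolveQueue(PlacementManager pm, Object context, String user) {
        PlacementRuleResult result = pm.placeApplication(context, user);
        // Guard against the no-rule case that currently throws an NPE.
        return (result == null) ? DEFAULT_QUEUE : result.getQueue();
    }
}
```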
 


