Re: Unassigned Hadoop jiras with patch available

2019-08-15 Thread Wei-Chiu Chuang
Interestingly enough, I think I can create a new wiki page, but I don't see
a button to edit an existing one.

On Thu, Aug 15, 2019 at 6:24 AM Masatake Iwasaki <iwasak...@oss.nttdata.co.jp> wrote:

> Hi Wei-Chiu Chuang,
>
> Thanks for doing this and sorry for the late reply.
>
> The HowToCommit wiki [1] describes how to do this in the "Adding
> Contributors role" section, but it does not explicitly say when it
> should be done.
>
> [1] https://cwiki.apache.org/confluence/display/HADOOP2/HowToCommit
>
> This could be stated as part of the "Review" process. Adding a link to
> your JIRA filter to the wiki and encouraging housekeeping would also be
> nice.
>
> # I don't have permission to edit the page.
>
> Regards,
> Masatake Iwasaki
>
> On 8/2/19 06:41, Wei-Chiu Chuang wrote:
> > I assigned all jiras with patch available created since 2019. If
> > you have jiras that you are actively working on but can't assign to
> > yourself, please let me know.
> >
> > On Wed, Jul 31, 2019 at 3:11 PM Wei-Chiu Chuang wrote:
> >
> >> I was told the filter is private. I am sorry.
> >>
> >> This one should be good:
> >>
> https://issues.apache.org/jira/issues/?jql=project%20in%20(HADOOP%2C%20HDFS%2CYARN%2CMAPREDUCE%2CHDDS%2CSUBMARINE)%20AND%20status%20%3D%20%22Patch%20Available%22%20AND%20assignee%20%3D%20EMPTY%20ORDER%20BY%20created%20DESC%2C%20updated%20DESC
> >>
> >> On Wed, Jul 31, 2019 at 3:02 PM Wei-Chiu Chuang wrote:
> >>
> >>> I am using this jira filter to find jiras with patch available but
> >>> unassigned.
> >>>
> >>>
> https://issues.apache.org/jira/issues/?filter=12346814=project%20in%20(HADOOP%2C%20HDFS%2CYARN%2CMAPREDUCE%2CHDDS%2CSUBMARINE)%20AND%20status%20%3D%20%22Patch%20Available%22%20AND%20assignee%20%3D%20EMPTY%20ORDER%20BY%20created%20DESC%2C%20updated%20DESC
> >>>
> >>> In most cases, these jiras are unassigned because the contributors who
> >>> posted the patches are first-timers and do not have the contributor role
> >>> in JIRA. It's very common for those folks to get overlooked.
> >>>
> >>> Hadoop PMC members, if you have JIRA administrator permission, please help
> >>> grant contributor access to these contributors. You help keep the project
> >>> friendly to newcomers.
> >>>
> >>> You can do so by going to JIRA --> (upper right, click on the gear next
> >>> to your profile avatar) --> Projects --> click on the project (say Hadoop
> >>> HDFS) --> Roles --> View Project Roles --> Add users to a role --> add to
> >>> the Contributor list, or if the Contributor list is full, add to the
> >>> Contributor1 list.
> >>>
> >>> Or you can go to
> >>> https://issues.apache.org/jira/plugins/servlet/project-config/HDFS/roles
> >>> to add contributor access for HDFS. The same goes for Hadoop Common and
> >>> other sub-projects.
> >>>
>
>


Re: [DISCUSS] Hadoop 2019 Release Planning

2019-08-15 Thread Masatake Iwasaki

Hi Wangda,

Thanks for bringing this up.

> I think it is time to do maintenance releases of 3.1/3.2 and a minor
> release of 3.3.0.

3.3.0 seems to have some blocker issues.

  project in ("Hadoop Common", "Hadoop HDFS", "Hadoop Map/Reduce", 
"Hadoop YARN") AND "Target Version/s" = 3.3.0 AND priority = Blocker AND 
status != Resolved


I'm reviewing HADOOP-15958 now.


On 8/13/19 02:33, Jonathan Hung wrote:

Hi Wangda, thanks for starting the discussion. We would also like to
release 2.10.0, which was discussed previously and at various contributor
meetups. I'm interested in being release manager for that.

Thanks,

Jonathan Hung


On Fri, Aug 9, 2019 at 7:59 PM Wangda Tan  wrote:


Hi all,

Hope this email finds you well.

I want to hear your thoughts about what should be the release plan for
2019.

In 2018, we released:
- 1 maintenance release of 2.6
- 3 maintenance releases of 2.7
- 3 maintenance releases of 2.8
- 3 releases of 2.9
- 4 releases of 3.0
- 2 releases of 3.1

Total 16 releases in 2018.

In 2019, so far we have had only two releases:
- 1 maintenance release of 3.1
- 1 minor release of 3.2.

However, the community has put a lot of effort into stabilizing features on
the various release branches.
There are:
- 217 fixed patches in 3.1.3 [1]
- 388 fixed patches in 3.2.1 [2]
- 1172 fixed patches in 3.3.0 [3] (OMG!)

I think it is time to do maintenance releases of 3.1/3.2 and a minor
release of 3.3.0.

In addition, I saw community discussion about doing a 2.8.6 release for
security fixes.

Any other releases? I think there are release plans for Ozone as well.
Please add your thoughts.

Volunteers welcome! If you are interested in running a release as Release
Manager (or co-Release Manager), please respond to this email thread so we
can coordinate.

Thanks,
Wangda Tan

[1] project in (YARN, HADOOP, MAPREDUCE, HDFS) AND resolution = Fixed AND
fixVersion = 3.1.3
[2] project in (YARN, HADOOP, MAPREDUCE, HDFS) AND resolution = Fixed AND
fixVersion = 3.2.1
[3] project in (YARN, HADOOP, MAPREDUCE, HDFS) AND resolution = Fixed AND
fixVersion = 3.3.0







[jira] [Created] (YARN-9749) TestAppLogAggregatorImpl#testDFSQuotaExceeded fails on trunk

2019-08-15 Thread Peter Bacsko (JIRA)
Peter Bacsko created YARN-9749:
--

 Summary: TestAppLogAggregatorImpl#testDFSQuotaExceeded fails on 
trunk
 Key: YARN-9749
 URL: https://issues.apache.org/jira/browse/YARN-9749
 Project: Hadoop YARN
  Issue Type: Bug
  Components: yarn
Reporter: Peter Bacsko
Assignee: Adam Antal


TestAppLogAggregatorImpl#testDFSQuotaExceeded currently fails on trunk. The 
failure was most likely introduced by YARN-9676 (after resetting HEAD to the 
previous commit, re-running the test passes).

{noformat}
[INFO] Running 
org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.TestAppLogAggregatorImpl
[ERROR] Tests run: 4, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 2.781 s 
<<< FAILURE! - in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.TestAppLogAggregatorImpl
[ERROR] 
testDFSQuotaExceeded(org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.TestAppLogAggregatorImpl)
  Time elapsed: 2.361 s  <<< FAILURE!
java.lang.AssertionError: The set of paths for deletion are not the same as 
expected: actual size: 0 vs expected size: 1
at org.junit.Assert.fail(Assert.java:88)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.TestAppLogAggregatorImpl.verifyFilesToDelete(TestAppLogAggregatorImpl.java:344)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.TestAppLogAggregatorImpl.access$000(TestAppLogAggregatorImpl.java:82)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.TestAppLogAggregatorImpl$1.answer(TestAppLogAggregatorImpl.java:330)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.TestAppLogAggregatorImpl$1.answer(TestAppLogAggregatorImpl.java:319)
at 
org.mockito.internal.stubbing.StubbedInvocationMatcher.answer(StubbedInvocationMatcher.java:39)
at 
org.mockito.internal.handler.MockHandlerImpl.handle(MockHandlerImpl.java:96)
at 
org.mockito.internal.handler.NullResultGuardian.handle(NullResultGuardian.java:29)
at 
org.mockito.internal.handler.InvocationNotifierHandler.handle(InvocationNotifierHandler.java:35)
at 
org.mockito.internal.creation.bytebuddy.MockMethodInterceptor.doIntercept(MockMethodInterceptor.java:61)
at 
org.mockito.internal.creation.bytebuddy.MockMethodInterceptor.doIntercept(MockMethodInterceptor.java:49)
at 
org.mockito.internal.creation.bytebuddy.MockMethodInterceptor$DispatcherDefaultingToRealMethod.interceptSuperCallable(MockMethodInterceptor.java:108)
at 
org.apache.hadoop.yarn.server.nodemanager.DeletionService$MockitoMock$1879282050.delete(Unknown
 Source)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl.doAppLogAggregationPostCleanUp(AppLogAggregatorImpl.java:556)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl.run(AppLogAggregatorImpl.java:476)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.TestAppLogAggregatorImpl.testDFSQuotaExceeded(TestAppLogAggregatorImpl.java:469)
...
{noformat}






Re: Unassigned Hadoop jiras with patch available

2019-08-15 Thread Masatake Iwasaki

Hi Wei-Chiu Chuang,

Thanks for doing this and sorry for the late reply.

The HowToCommit wiki [1] describes how to do this in the "Adding
Contributors role" section, but it does not explicitly say when it
should be done.

[1] https://cwiki.apache.org/confluence/display/HADOOP2/HowToCommit

This could be stated as part of the "Review" process. Adding a link to
your JIRA filter to the wiki and encouraging housekeeping would also be
nice.

# I don't have permission to edit the page.

Regards,
Masatake Iwasaki

On 8/2/19 06:41, Wei-Chiu Chuang wrote:

I assigned all jiras with patch available created since 2019. If
you have jiras that you are actively working on but can't assign to
yourself, please let me know.

On Wed, Jul 31, 2019 at 3:11 PM Wei-Chiu Chuang  wrote:


I was told the filter is private. I am sorry.

This one should be good:
https://issues.apache.org/jira/issues/?jql=project%20in%20(HADOOP%2C%20HDFS%2CYARN%2CMAPREDUCE%2CHDDS%2CSUBMARINE)%20AND%20status%20%3D%20%22Patch%20Available%22%20AND%20assignee%20%3D%20EMPTY%20ORDER%20BY%20created%20DESC%2C%20updated%20DESC

On Wed, Jul 31, 2019 at 3:02 PM Wei-Chiu Chuang wrote:


I am using this jira filter to find jiras with patch available but
unassigned.

https://issues.apache.org/jira/issues/?filter=12346814=project%20in%20(HADOOP%2C%20HDFS%2CYARN%2CMAPREDUCE%2CHDDS%2CSUBMARINE)%20AND%20status%20%3D%20%22Patch%20Available%22%20AND%20assignee%20%3D%20EMPTY%20ORDER%20BY%20created%20DESC%2C%20updated%20DESC
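Decoded, the JQL behind that link is shown in the small sketch below, together
with one way to rebuild such a link using only the JDK (the class name is just
illustrative):

import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class UnassignedPatchAvailableFilter {
  public static void main(String[] args) throws Exception {
    // The JQL decoded from the link above.
    String jql = "project in (HADOOP, HDFS, YARN, MAPREDUCE, HDDS, SUBMARINE) "
        + "AND status = \"Patch Available\" AND assignee = EMPTY "
        + "ORDER BY created DESC, updated DESC";
    // URLEncoder encodes spaces as '+'; JIRA accepts that as well as the %20
    // form used in the link above.
    String url = "https://issues.apache.org/jira/issues/?jql="
        + URLEncoder.encode(jql, StandardCharsets.UTF_8.name());
    System.out.println(url);
  }
}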

In most cases, these jiras are unassigned because the contributors who
posted the patches are first-timers and do not have the contributor role
in JIRA. It's very common for those folks to get overlooked.

Hadoop PMC members, if you have JIRA administrator permission, please help
grant contributor access to these contributors. You help keep the project
friendly to newcomers.

You can do so by going to JIRA --> (upper right, click on the gear next
to your profile avatar) --> Projects --> click on the project (say Hadoop
HDFS) --> Roles --> View Project Roles --> Add users to a role --> add to
the Contributor list, or if the Contributor list is full, add to the
Contributor1 list.

Or you can go to
https://issues.apache.org/jira/plugins/servlet/project-config/HDFS/roles
to add contributor access for HDFS. The same goes for Hadoop Common and
other sub-projects.
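If you prefer scripting this, below is a rough sketch that assumes the Jira
Server REST endpoint for adding actors to a project role
(POST /rest/api/2/project/{key}/role/{roleId}); the role id, username, and
credentials are placeholders, so please verify against the REST docs for our
Jira version before using it:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class AddContributorRole {
  public static void main(String[] args) throws Exception {
    String project = "HDFS";
    String roleId = "10010";                 // placeholder: look up the Contributor role id first
    String contributor = "new-contributor";  // placeholder: the contributor's JIRA username
    String auth = Base64.getEncoder()
        .encodeToString("pmc-user:password".getBytes(StandardCharsets.UTF_8));

    URL url = new URL("https://issues.apache.org/jira/rest/api/2/project/"
        + project + "/role/" + roleId);
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("POST");
    conn.setRequestProperty("Authorization", "Basic " + auth);
    conn.setRequestProperty("Content-Type", "application/json");
    conn.setDoOutput(true);
    byte[] body = ("{\"user\": [\"" + contributor + "\"]}")
        .getBytes(StandardCharsets.UTF_8);
    try (OutputStream os = conn.getOutputStream()) {
      os.write(body);
    }
    // 200/201 means the user was added to the role.
    System.out.println("HTTP " + conn.getResponseCode());
  }
}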







Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-08-15 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/414/

[Aug 14, 2019 3:35:14 AM] (iwasakims) HDFS-14423. Percent (%) and plus (+) 
characters no longer work in


[Error replacing 'FILE' - Workspace is not accessible]


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-08-15 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1229/

[Aug 14, 2019 3:24:03 AM] (github) HADOOP-16495. Fix invalid metric types in 
PrometheusMetricsSink (#1244)
[Aug 14, 2019 3:27:37 AM] (aengineer) HDDS-1920. Place ozone.om.address config 
key default value in
[Aug 14, 2019 3:34:06 AM] (aengineer) HDDS-1956. Aged IO Thread exits on first 
read
[Aug 14, 2019 5:02:54 AM] (aengineer) HDDS-1915. Remove hadoop script from 
ozone distribution
[Aug 14, 2019 5:04:31 AM] (aengineer) HDDS-1832 : Improve logging for 
PipelineActions handling in SCM and
[Aug 14, 2019 6:07:02 AM] (aengineer) HDDS-1947. fix naming issue for 
ScmBlockLocationTestingClient.
[Aug 14, 2019 6:12:44 AM] (aengineer) HDDS-1929. OM started on recon host in 
ozonesecure compose
[Aug 14, 2019 6:26:47 AM] (aengineer) HDDS-1914. Ozonescript example 
docker-compose cluster can't be started
[Aug 14, 2019 8:16:23 AM] (bibinchundatt) YARN-9747. Reduce additional namenode 
call by
[Aug 14, 2019 8:50:26 AM] (gabor.bota) HADOOP-16500 S3ADelegationTokens to only 
log at debug on startup
[Aug 14, 2019 1:09:45 PM] (nanda) HDDS-1965. Compile error due to leftover 
ScmBlockLocationTestIngClient
[Aug 14, 2019 2:30:35 PM] (weichiu) HDFS-14595. HDFS-11848 breaks API 
compatibility. Contributed by Siyao
[Aug 14, 2019 2:58:22 PM] (snemeth) YARN-9140. Code cleanup in 
ResourcePluginManager.initialize and in
[Aug 14, 2019 3:07:15 PM] (nanda) HDDS-1955. 
TestBlockOutputStreamWithFailures#test2DatanodesFailure
[Aug 14, 2019 3:13:54 PM] (snemeth) YARN-9133. Make tests more easy to 
comprehend in TestGpuResourceHandler.
[Aug 14, 2019 3:14:31 PM] (ayushsaxena) HDFS-14713. RBF: RouterAdmin supports 
refreshRouterArgs command but not
[Aug 14, 2019 3:35:16 PM] (954799+szilard-nemeth) YARN-9676. Add DEBUG and 
TRACE level messages to AppLogAggregatorImpl…
[Aug 14, 2019 4:34:11 PM] (aengineer) HDDS-1966. Wrong expected key ACL in 
acceptance test
[Aug 14, 2019 4:53:31 PM] (aengineer) HDDS-1964. TestOzoneClientProducer fails 
with ConnectException
[Aug 14, 2019 5:07:22 PM] (ayushsaxena) SUBMARINE-107. Reduce the scope of 
mockito-core in submarine to test.
[Aug 14, 2019 5:42:29 PM] (weichiu) YARN-9683. Remove reapDockerContainerNoPid 
left behind by YARN-9074




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
 
   Class org.apache.hadoop.applications.mawo.server.common.TaskStatus 
implements Cloneable but does not define or use clone method At 
TaskStatus.java:does not define or use clone method At TaskStatus.java:[lines 
39-346] 
   Equals method for 
org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument 
is of type WorkerId At WorkerId.java:the argument is of type WorkerId At 
WorkerId.java:[line 114] 
   
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does 
not check for null argument At WorkerId.java:null argument At 
WorkerId.java:[lines 114-115] 

Failed junit tests :

   hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup 
   hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken 
   
hadoop.yarn.server.nodemanager.containermanager.logaggregation.TestAppLogAggregatorImpl
 
   hadoop.mapreduce.v2.hs.webapp.TestHSWebApp 
   hadoop.tools.dynamometer.TestDynamometerInfra 
   hadoop.tools.dynamometer.TestDynamometerInfra 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1229/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1229/artifact/out/diff-compile-javac-root.txt
  [332K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1229/artifact/out/diff-checkstyle-root.txt
  [17M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1229/artifact/out/diff-patch-hadolint.txt

[jira] [Created] (YARN-9750) TestAppLogAggregatorImpl.verifyFilesToDelete fails

2019-08-15 Thread Prabhu Joseph (JIRA)
Prabhu Joseph created YARN-9750:
---

 Summary: TestAppLogAggregatorImpl.verifyFilesToDelete fails
 Key: YARN-9750
 URL: https://issues.apache.org/jira/browse/YARN-9750
 Project: Hadoop YARN
  Issue Type: Bug
  Components: log-aggregation, test
Affects Versions: 3.3.0
Reporter: Prabhu Joseph
Assignee: Prabhu Joseph


*TestAppLogAggregatorImpl.verifyFilesToDelete fails*

{code}
[ERROR] 
testDFSQuotaExceeded(org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.TestAppLogAggregatorImpl)
  Time elapsed: 2.252 s  <<< FAILURE!
java.lang.AssertionError: The set of paths for deletion are not the same as 
expected: actual size: 0 vs expected size: 1
at org.junit.Assert.fail(Assert.java:88)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.TestAppLogAggregatorImpl.verifyFilesToDelete(TestAppLogAggregatorImpl.java:344)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.TestAppLogAggregatorImpl.access$000(TestAppLogAggregatorImpl.java:82)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.TestAppLogAggregatorImpl$1.answer(TestAppLogAggregatorImpl.java:330)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.TestAppLogAggregatorImpl$1.answer(TestAppLogAggregatorImpl.java:319)
at 
org.mockito.internal.stubbing.StubbedInvocationMatcher.answer(StubbedInvocationMatcher.java:39)
at 
org.mockito.internal.handler.MockHandlerImpl.handle(MockHandlerImpl.java:96)
at 
org.mockito.internal.handler.NullResultGuardian.handle(NullResultGuardian.java:29)
at 
org.mockito.internal.handler.InvocationNotifierHandler.handle(InvocationNotifierHandler.java:35)
at 
org.mockito.internal.creation.bytebuddy.MockMethodInterceptor.doIntercept(MockMethodInterceptor.java:61)
at 
org.mockito.internal.creation.bytebuddy.MockMethodInterceptor.doIntercept(MockMethodInterceptor.java:49)
at 
org.mockito.internal.creation.bytebuddy.MockMethodInterceptor$DispatcherDefaultingToRealMethod.interceptSuperCallable(MockMethodInterceptor.java:108)
at 
org.apache.hadoop.yarn.server.nodemanager.DeletionService$MockitoMock$1136724178.delete(Unknown
 Source)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl.doAppLogAggregationPostCleanUp(AppLogAggregatorImpl.java:556)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl.run(AppLogAggregatorImpl.java:476)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.TestAppLogAggregatorImpl.testDFSQuotaExceeded(TestAppLogAggregatorImpl.java:469)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
at 

[jira] [Created] (YARN-9751) Separate queue and app ordering policy capacity scheduler configs

2019-08-15 Thread Jonathan Hung (JIRA)
Jonathan Hung created YARN-9751:
---

 Summary: Separate queue and app ordering policy capacity scheduler 
configs
 Key: YARN-9751
 URL: https://issues.apache.org/jira/browse/YARN-9751
 Project: Hadoop YARN
  Issue Type: Task
Reporter: Jonathan Hung


Right now it's not possible to specify distinct app and queue ordering policies 
since they share the same {{ordering-policy}} suffix.

There's already a TODO in CapacitySchedulerConfiguration for this. This Jira 
intends to fix it.
{noformat}
// TODO (wangda): We need to better distinguish app ordering policy and queue
// ordering policy's classname / configuration options, etc. And dedup code
// if possible.{noformat}
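To make the collision concrete, here is a small sketch: the current keys use the
shared {{ordering-policy}} suffix for both the app ordering policy of a leaf
queue and the queue ordering policy of a parent queue, while the separated key
names below are only hypothetical examples, not an agreed design.

{code:java}
// Illustrative sketch of the shared suffix; the "separated" keys are hypothetical.
import org.apache.hadoop.conf.Configuration;

public class OrderingPolicyKeyCollision {
  public static void main(String[] args) {
    Configuration conf = new Configuration(false);

    // Current state: one suffix, two meanings depending on the queue type.
    conf.set("yarn.scheduler.capacity.root.default.ordering-policy", "fair");
    conf.set("yarn.scheduler.capacity.root.ordering-policy", "priority-utilization");

    // Hypothetical separation (not an agreed design):
    conf.set("yarn.scheduler.capacity.root.default.app-ordering-policy", "fair");
    conf.set("yarn.scheduler.capacity.root.queue-ordering-policy", "priority-utilization");

    System.out.println(conf.get("yarn.scheduler.capacity.root.ordering-policy"));
  }
}
{code}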






Thoughts about moving submarine to a separate git repo?

2019-08-15 Thread Xun Liu
Dear Submarine developers,

My name is Xun Liu; I am a member of the Hadoop Submarine development team
and have been one of the major contributors to Submarine since June 2018.

I want to hear your thoughts about creating a separate GitHub repo under
Apache for Submarine development. This effort is independent of the Submarine
spin-off from the Hadoop project [
https://lists.apache.org/thread.html/3fab657f905d081b536d9081dc404f7fd20c80eb824c857bc8e16e3b@].
However, once the spin-off is approved, this effort can benefit the
follow-up processes as well.

The Submarine dev community has a total of 8 developers and submits an
average of 4 to 5 PRs per day.
But only a limited number of Hadoop committers actively help review and
merge patches, which delays development progress.

So we created an external GitHub repo [
https://github.com/hadoopsubmarine/submarine] and moved all the code for
the Hadoop Submarine project into it.
In this way, everyone can review each other's code, and the development of
Hadoop Submarine is now progressing very fast.

Also, Submarine now has little dependency on Hadoop, and we want to have a
separate CI/CD pipeline to release and test Submarine instead of building
the whole of Hadoop every time. Keeping Submarine under Hadoop also
introduces unnecessary dependencies into Hadoop's top-level pom.xml.

Our development process still complies with the development rules of the
Hadoop community: first create a ticket in the Submarine JIRA, then develop
in the external GitHub repo; the title of each PR is accompanied by the
JIRA ID number.

Once the Apache GitHub repo is created, we are going to move all external
commits to it.

Any suggestions are welcome!

Best Regards
Xun Liu


[jira] [Created] (YARN-9752) Add support for allocation id in SLS.

2019-08-15 Thread Abhishek Modi (JIRA)
Abhishek Modi created YARN-9752:
---

 Summary: Add support for allocation id in SLS.
 Key: YARN-9752
 URL: https://issues.apache.org/jira/browse/YARN-9752
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Abhishek Modi
Assignee: Abhishek Modi









[jira] [Created] (YARN-9753) Cache Pre-Priming

2019-08-15 Thread Akash R Nilugal (JIRA)
Akash R Nilugal created YARN-9753:
-

 Summary: Cache Pre-Priming
 Key: YARN-9753
 URL: https://issues.apache.org/jira/browse/YARN-9753
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Akash R Nilugal


Currently, we have an index server which helps with distributed caching of the 
datamaps in a separate Spark application.

Caching of the datamaps in the index server starts once a query is fired on the 
table for the first time: all the datamaps are loaded if a count(*) is fired, 
and only the required ones are loaded for a filter query.



The problem or bottleneck here is that unless a query is fired on the table, the 
caching is not done for the table's datamaps.

Consider a scenario where we just load data into the table for a whole day and 
then query it the next day: all the segments will start loading into the cache, 
so the first query will be slow.



What if we load the datamaps into the cache, i.e. pre-prime the cache, without 
waiting for any query on the table?

For example, we could load the cache after every data load completes, or load 
the cache for all the segments at once, so that the first query does not have 
to do all this work and is therefore faster.



I have attached the design document for pre-priming the cache in the index 
server. Please have a look at it.
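
To make the idea concrete, here is a minimal generic sketch (class and method 
names are illustrative only, not the actual index server code):

{code:java}
// Generic sketch of cache pre-priming (illustrative names only): instead of
// populating the cache lazily on the first query, eagerly load the entry for a
// newly added segment right after the data load finishes.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class SegmentCache {
  private final Map<String, byte[]> cache = new ConcurrentHashMap<>();

  // Lazy path: the first query pays the loading cost.
  byte[] getOrLoad(String segmentId) {
    return cache.computeIfAbsent(segmentId, this::loadFromStore);
  }

  // Pre-priming path: call this from the data-load completion hook so the
  // first query after a load already finds the cache warm.
  void primeAfterLoad(String segmentId) {
    cache.computeIfAbsent(segmentId, this::loadFromStore);
  }

  private byte[] loadFromStore(String segmentId) {
    // Placeholder for reading the segment's datamap/index from storage.
    return new byte[0];
  }
}
{code}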






[jira] [Resolved] (YARN-9753) Cache Pre-Priming

2019-08-15 Thread Akash R Nilugal (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akash R Nilugal resolved YARN-9753.
---
Resolution: Invalid

Created issue in the wrong project.

> Cache Pre-Priming
> -
>
> Key: YARN-9753
> URL: https://issues.apache.org/jira/browse/YARN-9753
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Akash R Nilugal
>Priority: Major
>
> Currently, we have an index server which helps with distributed caching of
> the datamaps in a separate Spark application.
> Caching of the datamaps in the index server starts once a query is fired on
> the table for the first time: all the datamaps are loaded if a count(*) is
> fired, and only the required ones are loaded for a filter query.
> The problem or bottleneck here is that unless a query is fired on the table,
> the caching is not done for the table's datamaps.
> Consider a scenario where we just load data into the table for a whole day
> and then query it the next day: all the segments will start loading into the
> cache, so the first query will be slow.
> What if we load the datamaps into the cache, i.e. pre-prime the cache,
> without waiting for any query on the table?
> For example, we could load the cache after every data load completes, or
> load the cache for all the segments at once, so that the first query does
> not have to do all this work and is therefore faster.
> I have attached the design document for pre-priming the cache in the index
> server. Please have a look at it.


