[jira] [Created] (HDFS-14660) [SBN Read] ObserverNameNode should throw StandbyException for requests not from ObserverProxyProvider

2019-07-17 Thread Chao Sun (JIRA)
Chao Sun created HDFS-14660:
---

 Summary: [SBN Read] ObserverNameNode should throw StandbyException 
for requests not from ObserverProxyProvider
 Key: HDFS-14660
 URL: https://issues.apache.org/jira/browse/HDFS-14660
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Chao Sun
Assignee: Chao Sun


In an HDFS HA cluster with consistent reads enabled (HDFS-12943), clients could 
be using {{ObserverReadProxyProvider}}, {{ConfiguredProxyProvider}}, or 
something else. Since an observer is just a special type of SBN and we allow 
transitions between them, a client NOT using {{ObserverReadProxyProvider}} will 
need to have {{dfs.ha.namenodes.}} include all NameNodes in the 
cluster, and therefore it may send requests to an observer node.

For this case, we should check whether the {{stateId}} in the incoming RPC 
header is set, and throw a {{StandbyException}} when it is not.
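As a hedged sketch of the proposed check (plain Java, not the actual NameNode
RPC path; the class name and the unchecked exception standing in for Hadoop's
{{StandbyException}} are illustrative assumptions):

```java
// Simplified illustration of the proposed guard, not real HDFS code.
// In HDFS the client state id travels in the RPC request header; here it
// is modeled as a plain long, with 0 meaning "not set".
class ObserverCheck {
    static final long STATE_ID_UNSET = 0L;

    // Reject reads that reach an observer without a state id, i.e. from
    // clients not using ObserverReadProxyProvider.
    static void checkObserverRead(boolean isObserver, long clientStateId) {
        if (isObserver && clientStateId == STATE_ID_UNSET) {
            // Stand-in for throwing StandbyException in real HDFS.
            throw new IllegalStateException(
                "Observer requires a client state id; retry another NameNode");
        }
    }
}
```

A client using {{ObserverReadProxyProvider}} always carries a state id, so only
requests from other proxy providers would be rejected and fail over to an
active or standby node.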



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: [VOTE] Force "squash and merge" option for PR merge on github UI

2019-07-17 Thread Da Zhou
sounds good!  +1

Regards,
Da




Re: [VOTE] Force "squash and merge" option for PR merge on github UI

2019-07-17 Thread Dinesh Chitlangia
+1, this is certainly useful.

Thank you,
Dinesh






Re: [VOTE] Force "squash and merge" option for PR merge on github UI

2019-07-17 Thread Akira Ajisaka
Makes sense, +1




Re: [VOTE] Force "squash and merge" option for PR merge on github UI

2019-07-17 Thread Sangjin Lee
+1. Sounds good to me.



[jira] [Created] (HDDS-1820) Fix numKeys metrics in OM HA

2019-07-17 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-1820:


 Summary: Fix numKeys metrics in OM HA
 Key: HDDS-1820
 URL: https://issues.apache.org/jira/browse/HDDS-1820
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


When we commit a key, we should increment numKeys in Ozone. This metric shows 
the current count of keys in Ozone. The increment is missing from the 
OMKeyCommitRequest logic.
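A minimal sketch of the intended fix (hypothetical class, not the real
OMKeyCommitRequest code): bump the metric only when the key commit actually
succeeds.

```java
import java.util.concurrent.atomic.AtomicLong;

// Hedged illustration of the metrics fix described in the JIRA; the real
// OM metrics class and commit path differ.
class OmMetricsSketch {
    final AtomicLong numKeys = new AtomicLong();

    // Called at the end of a key commit; only successful commits count.
    void commitKey(boolean success) {
        if (success) {
            numKeys.incrementAndGet(); // the step the JIRA says is missed
        }
    }
}
```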






[jira] [Resolved] (HDDS-1721) Client Metrics are not being pushed to the configured sink while running a hadoop command to write to Ozone.

2019-07-17 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-1721.
--
Resolution: Fixed

> Client Metrics are not being pushed to the configured sink while running a 
> hadoop command to write to Ozone.
> 
>
> Key: HDDS-1721
> URL: https://issues.apache.org/jira/browse/HDDS-1721
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Client Metrics are not being pushed to the configured sink while running a 
> hadoop command to write to Ozone.






Re: [VOTE] Force "squash and merge" option for PR merge on github UI

2019-07-17 Thread Iñigo Goiri
+1



Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-07-17 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1200/

[Jul 16, 2019 2:44:27 AM] (ayushsaxena) HDFS-14642. processMisReplicatedBlocks 
does not return correct processed
[Jul 16, 2019 4:51:59 AM] (github) HDDS-1736. Cleanup 2phase old HA code for 
Key requests. (#1038)
[Jul 16, 2019 8:33:22 AM] (bibinchundatt) YARN-9645. Fix Invalid event 
FINISHED_CONTAINERS_PULLED_BY_AM at NEW on
[Jul 16, 2019 9:36:41 AM] (msingh) HDDS-1756. DeleteContainerCommandHandler 
fails with NPE. Contributed by
[Jul 16, 2019 12:31:13 PM] (shashikant) HDDS-1492. Generated chunk size name 
too long. Contributed  by
[Jul 16, 2019 2:52:14 PM] (elek) HDDS-1793. Acceptance test of ozone-topology 
cluster is failing
[Jul 16, 2019 7:52:29 PM] (xyao) HDDS-1787. NPE thrown while trying to find DN 
closest to client.
[Jul 16, 2019 7:58:59 PM] (aengineer) HDDS-1544. Support default Acls for 
volume, bucket, keys and prefix.
[Jul 16, 2019 8:47:51 PM] (github) HDDS-1813. Fix false warning from ozones3 
acceptance test. Contributed
[Jul 16, 2019 11:59:57 PM] (github) HDDS-1775. Make OM KeyDeletingService 
compatible with HA model (#1063)
[Jul 17, 2019 12:14:23 AM] (github) HADOOP-15729. [s3a] Allow core threads to 
time out. (#1075)
[Jul 17, 2019 12:36:49 AM] (haibochen) YARN-9646. DistributedShell tests failed 
to bind to a local host name.




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
 
   Class org.apache.hadoop.applications.mawo.server.common.TaskStatus 
implements Cloneable but does not define or use clone method At 
TaskStatus.java:does not define or use clone method At TaskStatus.java:[lines 
39-346] 
   Equals method for 
org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument 
is of type WorkerId At WorkerId.java:the argument is of type WorkerId At 
WorkerId.java:[line 114] 
   
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does 
not check for null argument At WorkerId.java:null argument At 
WorkerId.java:[lines 114-115] 

Failed junit tests :

   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.server.datanode.TestDirectoryScanner 
   hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup 
   hadoop.hdfs.server.federation.router.TestRouterRpc 
   hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken 
   
hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption 
   hadoop.tools.TestDistCpSystem 
   hadoop.ozone.container.ozoneimpl.TestOzoneContainer 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1200/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1200/artifact/out/diff-compile-javac-root.txt
  [332K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1200/artifact/out/diff-checkstyle-root.txt
  [17M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1200/artifact/out/diff-patch-hadolint.txt
  [8.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1200/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1200/artifact/out/diff-patch-pylint.txt
  [212K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1200/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1200/artifact/out/diff-patch-shelldocs.txt
  [44K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1200/artifact/out/whitespace-eol.txt
  [9.6M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1200/artifact/out/whitespace-tabs.txt
  [1.1M]

   xml:

   

[jira] [Created] (HDDS-1818) Instantiate Ozone Containers using Factory pattern

2019-07-17 Thread Supratim Deka (JIRA)
Supratim Deka created HDDS-1818:
---

 Summary: Instantiate Ozone Containers using Factory pattern
 Key: HDDS-1818
 URL: https://issues.apache.org/jira/browse/HDDS-1818
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Datanode
Reporter: Supratim Deka
Assignee: Supratim Deka


Introduce a factory to instantiate Containers in Ozone.

This will be useful in different ways:
 # to test higher-level functionality, for example, error handling for 
situations like HDDS-1798
 # to create a simulated container that does no disk IO for data and is used 
to run targeted max-throughput tests, as in HDDS-1094
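A hedged illustration of the factory idea (the class names here are
hypothetical, not the real Ozone datanode types): the factory becomes the
single point where tests can swap in a simulated container that skips disk IO.

```java
// Sketch of the proposed factory pattern; names are invented for
// illustration and do not match Ozone's actual container classes.
interface Container {
    String write(byte[] data);
}

class DiskBackedContainer implements Container {
    public String write(byte[] data) {
        return "disk:" + data.length;   // real impl would persist chunks
    }
}

class SimulatedContainer implements Container {
    public String write(byte[] data) {
        return "noop:" + data.length;   // no disk IO, for throughput tests
    }
}

class ContainerFactory {
    static Container create(boolean simulated) {
        return simulated ? new SimulatedContainer() : new DiskBackedContainer();
    }
}
```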






[jira] [Created] (HDDS-1817) GetKey fails with IllegalArgumentException

2019-07-17 Thread Nanda kumar (JIRA)
Nanda kumar created HDDS-1817:
-

 Summary: GetKey fails with IllegalArgumentException
 Key: HDDS-1817
 URL: https://issues.apache.org/jira/browse/HDDS-1817
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Client, SCM
Affects Versions: 0.4.0
Reporter: Nanda kumar


During a get-key call, the client intermittently fails with 
{{java.lang.IllegalArgumentException}}:
{noformat}
E   AssertionError: Ozone get Key failed with 
output=[java.lang.IllegalArgumentException
E   at 
com.google.common.base.Preconditions.checkArgument(Preconditions.java:72)
E   at 
org.apache.hadoop.hdds.scm.XceiverClientManager.acquireClient(XceiverClientManager.java:150)
E   at 
org.apache.hadoop.hdds.scm.XceiverClientManager.acquireClientForReadData(XceiverClientManager.java:143)
E   at 
org.apache.hadoop.hdds.scm.storage.BlockInputStream.getChunkInfos(BlockInputStream.java:154)
E   at 
org.apache.hadoop.hdds.scm.storage.BlockInputStream.initialize(BlockInputStream.java:118)
E   at 
org.apache.hadoop.hdds.scm.storage.BlockInputStream.read(BlockInputStream.java:222)
E   at 
org.apache.hadoop.ozone.client.io.KeyInputStream.read(KeyInputStream.java:171)
E   at 
org.apache.hadoop.ozone.client.io.OzoneInputStream.read(OzoneInputStream.java:47)
E   at java.base/java.io.InputStream.read(InputStream.java:205)
E   at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:94)
E   at 
org.apache.hadoop.ozone.web.ozShell.keys.GetKeyHandler.call(GetKeyHandler.java:98)
E   at 
org.apache.hadoop.ozone.web.ozShell.keys.GetKeyHandler.call(GetKeyHandler.java:48)
E   at picocli.CommandLine.execute(CommandLine.java:1173)
E   at picocli.CommandLine.access$800(CommandLine.java:141)
E   at picocli.CommandLine$RunLast.handle(CommandLine.java:1367)
E   at picocli.CommandLine$RunLast.handle(CommandLine.java:1335)
E   at 
picocli.CommandLine$AbstractParseResultHandler.handleParseResult(CommandLine.java:1243)
E   at picocli.CommandLine.parseWithHandlers(CommandLine.java:1526)
E   at picocli.CommandLine.parseWithHandler(CommandLine.java:1465)
E   at 
org.apache.hadoop.hdds.cli.GenericCli.execute(GenericCli.java:65)
E   at 
org.apache.hadoop.ozone.web.ozShell.OzoneShell.execute(OzoneShell.java:60)
E   at org.apache.hadoop.hdds.cli.GenericCli.run(GenericCli.java:56)
E   at 
org.apache.hadoop.ozone.web.ozShell.OzoneShell.main(OzoneShell.java:53)]
{noformat}

This is happening when the pipeline returned by SCM doesn't have any datanode 
information.
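The stack trace points at a Guava {{Preconditions.checkArgument}} inside
{{XceiverClientManager.acquireClient}}, which raises
{{IllegalArgumentException}} when its condition fails. A hedged plain-Java
stand-in for that validation, assuming the pipeline is represented by its
datanode list:

```java
import java.util.List;

// Illustrative only: mimics the precondition that trips in
// XceiverClientManager when SCM returns a pipeline with no datanodes.
class PipelineCheck {
    static void validate(List<String> datanodes) {
        if (datanodes == null || datanodes.isEmpty()) {
            throw new IllegalArgumentException("pipeline has no datanodes");
        }
    }
}
```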






Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-07-17 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/385/

[Jul 16, 2019 2:22:45 PM] (xkrogen) HDFS-14547. Improve memory efficiency of 
quotas when storage type quotas
[Jul 16, 2019 10:50:24 PM] (iwasakims) HADOOP-16386. FindBugs warning in 
branch-2: GlobalStorageStatistics




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 

Failed junit tests :

   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.registry.secure.TestSecureLogins 
   hadoop.yarn.server.nodemanager.amrmproxy.TestFederationInterceptor 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
   hadoop.mapreduce.security.ssl.TestEncryptedShuffle 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/385/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/385/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt
  [328K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/385/artifact/out/diff-compile-cc-root-jdk1.8.0_212.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/385/artifact/out/diff-compile-javac-root-jdk1.8.0_212.txt
  [308K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/385/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/385/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/385/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/385/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/385/artifact/out/diff-patch-shellcheck.txt
  [72K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/385/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/385/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/385/artifact/out/whitespace-tabs.txt
  [1.2M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/385/artifact/out/xml.txt
  [12K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/385/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/385/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/385/artifact/out/diff-javadoc-javadoc-root-jdk1.8.0_212.txt
  [1.1M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/385/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [292K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/385/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-registry.txt
  [12K]
   

Re: [VOTE] Force "squash and merge" option for PR merge on github UI

2019-07-17 Thread Steve Loughran
+1 for squash and merge, with whoever does the merge adding the full commit
message for the logs, with JIRA, contributor(s) etc

One limit of the github process is that the author of the commit becomes
whoever hit the squash button, not whoever did the code, so it loses the
credit they are due. This is why I'm doing local merges (With some help
from smart-apply-patch). I think I'll have to explore smart-apply-patch to
see if I can do even more with it




On Wed, Jul 17, 2019 at 7:07 AM Elek, Marton  wrote:

> Hi,
>
> Github UI (ui!) helps to merge Pull Requests to the proposed branch.
> There are three different ways to do it [1]:
>
> 1. Keep all the different commits from the PR branch and create one
> additional merge commit ("Create a merge commit")
>
> 2. Squash all the commits and commit the change as one patch ("Squash
> and merge")
>
> 3. Keep all the different commits from the PR branch but rebase, merge
> commit will be missing ("Rebase and merge")
>
>
>
> As only the option 2 is compatible with the existing development
> practices of Hadoop (1 issue = 1 patch = 1 commit), I call for a lazy
> consensus vote: If no objections within 3 days, I will ask INFRA to
> disable the options 1 and 3 to make the process less error prone.
>
> Please let me know, what do you think,
>
> Thanks a lot
> Marton
>
> ps: Personally I prefer to merge from local as it enables to sign the
> commits and do a final build before push. But this is a different story,
> this proposal is only about removing the options which are obviously
> risky...
>
> ps2: You can always do any kind of merge / commits from CLI, for example
> to merge a feature branch together with keeping the history.
>
> [1]:
>
> https://help.github.com/en/articles/merging-a-pull-request#merging-a-pull-request-on-github
>
> -
> To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
>
>


[jira] [Created] (HDDS-1816) ContainerStateMachine should limit number of pending apply transactions

2019-07-17 Thread Lokesh Jain (JIRA)
Lokesh Jain created HDDS-1816:
-

 Summary: ContainerStateMachine should limit number of pending 
apply transactions
 Key: HDDS-1816
 URL: https://issues.apache.org/jira/browse/HDDS-1816
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Lokesh Jain
Assignee: Lokesh Jain


ContainerStateMachine should limit the number of pending apply transactions in 
order to avoid excessive heap usage by the pending transactions.
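One common way to impose such a limit is back-pressure via a counting semaphore: the submitter blocks once the cap is reached, so pending transactions cannot accumulate on the heap. The sketch below illustrates only that pattern; `BoundedApplyDemo`, `applyTransaction`, and `MAX_PENDING` are hypothetical names, not the actual ContainerStateMachine/Ratis API.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: cap the number of in-flight apply transactions with a
// counting semaphore so pending state cannot grow without bound on the heap.
// Class and method names are illustrative, not the real ContainerStateMachine API.
public class BoundedApplyDemo {
  static final int MAX_PENDING = 4;                  // illustrative limit
  static final Semaphore pending = new Semaphore(MAX_PENDING);
  static final AtomicInteger applied = new AtomicInteger();

  // Blocks the caller once MAX_PENDING transactions are in flight,
  // providing back-pressure instead of unbounded queueing.
  static CompletableFuture<Void> applyTransaction(ExecutorService pool)
      throws InterruptedException {
    pending.acquire();
    return CompletableFuture.runAsync(() -> {
      try {
        applied.incrementAndGet();                   // stand-in for the real apply work
      } finally {
        pending.release();                           // free a slot when the apply completes
      }
    }, pool);
  }

  public static void main(String[] args) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(2);
    CompletableFuture<?>[] futures = new CompletableFuture<?>[16];
    for (int i = 0; i < 16; i++) {
      futures[i] = applyTransaction(pool);
    }
    CompletableFuture.allOf(futures).join();
    pool.shutdown();
    System.out.println("applied=" + applied.get());  // prints applied=16
  }
}
```

The semaphore bounds concurrency without rejecting work: callers simply wait for a slot, which naturally throttles the producer.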



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-14658) Refine NameSystem lock usage during processing FBR

2019-07-17 Thread Chen Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Zhang resolved HDFS-14658.
---
Resolution: Abandoned

> Refine NameSystem lock usage during processing FBR
> --
>
> Key: HDFS-14658
> URL: https://issues.apache.org/jira/browse/HDFS-14658
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Zhang
>Priority: Major
>
> Disks with 12TB capacity are common today, which means full block reports 
> (FBRs) are much larger than before. BlockManager holds the NameSystem lock 
> while processing the block report for each storage, which can take quite a long time.
> In our production environment, processing a large FBR usually causes longer 
> RPC queue times, which impacts client latency, so we did some simple work on 
> refining the lock usage, which improved the p99 latency significantly.
> In our solution, BlockManager releases the NameSystem write lock and requests 
> it again every 5000 blocks (by default) while processing an FBR; with the 
> fair lock, all pending RPC requests can be processed before BlockManager 
> re-acquires the write lock.
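The chunked-locking pattern described above can be sketched as follows. `ChunkedReportDemo`, `processReport`, and `BLOCKS_PER_LOCK` are illustrative names only, not the actual FSNamesystem/BlockManager code:

```java
import java.util.List;
import java.util.concurrent.locks.ReentrantReadWriteLock;
import java.util.stream.Collectors;
import java.util.stream.LongStream;

// Hypothetical sketch: release and re-acquire a fair write lock every
// BLOCKS_PER_LOCK blocks so queued RPC handlers can run between chunks.
// Names are illustrative, not the real FSNamesystem API.
public class ChunkedReportDemo {
  static final int BLOCKS_PER_LOCK = 5000;           // chunk size from the proposal
  // fair=true: waiting threads acquire in arrival order, so RPCs queued while a
  // chunk was processed get the lock before the report processor re-acquires it.
  static final ReentrantReadWriteLock nsLock = new ReentrantReadWriteLock(true);

  static int processReport(List<Long> blocks) {
    int processed = 0;
    nsLock.writeLock().lock();
    try {
      for (long b : blocks) {
        processed++;                                 // stand-in for per-block processing
        if (processed % BLOCKS_PER_LOCK == 0) {
          nsLock.writeLock().unlock();               // yield to queued waiters...
          nsLock.writeLock().lock();                 // ...then continue with the next chunk
        }
      }
    } finally {
      nsLock.writeLock().unlock();
    }
    return processed;
  }

  public static void main(String[] args) {
    List<Long> report =
        LongStream.range(0, 12_345).boxed().collect(Collectors.toList());
    System.out.println("processed=" + processReport(report));  // prints processed=12345
  }
}
```

Fairness is the key design choice here: with an unfair lock, the report processor could immediately win the lock back and starve the queued RPCs it just released the lock for.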



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-14659) Refine NameSystem lock usage during processing FBR

2019-07-17 Thread Chen Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Zhang resolved HDFS-14659.
---
Resolution: Abandoned

Duplicate JIRA due to an Apache JIRA server error.

> Refine NameSystem lock usage during processing FBR
> --
>
> Key: HDFS-14659
> URL: https://issues.apache.org/jira/browse/HDFS-14659
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Zhang
>Priority: Major
>
> Disks with 12TB capacity are common today, which means full block reports 
> (FBRs) are much larger than before. BlockManager holds the NameSystem lock 
> while processing the block report for each storage, which can take quite a long time.
> In our production environment, processing a large FBR usually causes longer 
> RPC queue times, which impacts client latency, so we did some simple work on 
> refining the lock usage, which improved the p99 latency significantly.
> In our solution, BlockManager releases the NameSystem write lock and requests 
> it again every 5000 blocks (by default) while processing an FBR; with the 
> fair lock, all pending RPC requests can be processed before BlockManager 
> re-acquires the write lock.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: [VOTE] Force "squash and merge" option for PR merge on github UI

2019-07-17 Thread Gabor Bota
+1 Good idea.

On Wed, Jul 17, 2019 at 9:37 AM Ayush Saxena  wrote:

> Thanks Marton, Makes Sense +1
>
> > On 17-Jul-2019, at 11:37 AM, Elek, Marton  wrote:
> >
> > Hi,
> >
> > Github UI (ui!) helps to merge Pull Requests to the proposed branch.
> > There are three different ways to do it [1]:
> >
> > 1. Keep all the different commits from the PR branch and create one
> > additional merge commit ("Create a merge commit")
> >
> > 2. Squash all the commits and commit the change as one patch ("Squash
> > and merge")
> >
> > 3. Keep all the different commits from the PR branch but rebase, merge
> > commit will be missing ("Rebase and merge")
> >
> >
> >
> > As only option 2 is compatible with the existing development
> > practices of Hadoop (1 issue = 1 patch = 1 commit), I call for a lazy
> > consensus vote: if there are no objections within 3 days, I will ask INFRA to
> > disable options 1 and 3 to make the process less error prone.
> >
> > Please let me know what you think,
> >
> > Thanks a lot
> > Marton
> >
> > ps: Personally I prefer to merge from local as it enables me to sign the
> > commits and do a final build before push. But this is a different story,
> > this proposal is only about removing the options which are obviously
> > risky...
> >
> > ps2: You can always do any kind of merge / commits from CLI, for example
> > to merge a feature branch together with keeping the history.
> >
> > [1]:
> >
> https://help.github.com/en/articles/merging-a-pull-request#merging-a-pull-request-on-github
> >
> > -
> > To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
> > For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
> >
>
> -
> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>
>


Re: [VOTE] Force "squash and merge" option for PR merge on github UI

2019-07-17 Thread Ayush Saxena
Thanks Marton, Makes Sense +1

> On 17-Jul-2019, at 11:37 AM, Elek, Marton  wrote:
> 
> Hi,
> 
> Github UI (ui!) helps to merge Pull Requests to the proposed branch.
> There are three different ways to do it [1]:
> 
> 1. Keep all the different commits from the PR branch and create one
> additional merge commit ("Create a merge commit")
> 
> 2. Squash all the commits and commit the change as one patch ("Squash
> and merge")
> 
> 3. Keep all the different commits from the PR branch but rebase, merge
> commit will be missing ("Rebase and merge")
> 
> 
> 
> As only option 2 is compatible with the existing development
> practices of Hadoop (1 issue = 1 patch = 1 commit), I call for a lazy
> consensus vote: if there are no objections within 3 days, I will ask INFRA to
> disable options 1 and 3 to make the process less error prone.
> 
> Please let me know what you think,
> 
> Thanks a lot
> Marton
> 
> ps: Personally I prefer to merge from local as it enables me to sign the
> commits and do a final build before push. But this is a different story,
> this proposal is only about removing the options which are obviously
> risky...
> 
> ps2: You can always do any kind of merge / commits from CLI, for example
> to merge a feature branch together with keeping the history.
> 
> [1]:
> https://help.github.com/en/articles/merging-a-pull-request#merging-a-pull-request-on-github
> 
> -
> To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
> 

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: Re: Any thoughts making Submarine a separate Apache project?

2019-07-17 Thread dashuiguailu...@gmail.com
+1, good idea, we are very much looking forward to it.



dashuiguailu...@gmail.com
 
From: Szilard Nemeth
Date: 2019-07-17 14:55
To: runlin zhang
CC: Xun Liu; Hadoop Common; yarn-dev; Hdfs-dev; mapreduce-dev; submarine-dev
Subject: Re: Any thoughts making Submarine a separate Apache project?
+1, this is a great idea.
As the Hadoop repository has already grown huge and contains many projects, I
think it's generally a good idea to separate projects in an early phase.
 
 
On Wed, Jul 17, 2019, 08:50 runlin zhang  wrote:
 
> +1, that will be great!
>
> > On Jul 10, 2019, at 3:34 PM, Xun Liu  wrote:
> >
> > Hi all,
> >
> > This is Xun Liu contributing to the Submarine project for deep learning
> > workloads running with big data workloads together on Hadoop clusters.
> >
> > A number of integrations of Submarine with other projects are finished
> > or in progress, such as Apache Zeppelin, TonY, Azkaban. The next
> step
> > of Submarine is going to integrate with more projects like Apache Arrow,
> > Redis, MLflow, etc. & be able to handle end-to-end machine learning use
> > cases like model serving, notebook management, advanced training
> > optimizations (like auto parameter tuning, memory cache optimizations for
> > large datasets for training, etc.), and make it run on other platforms
> like
> > Kubernetes or natively on Cloud. LinkedIn also wants to donate TonY
> project
> > to Apache so we can put Submarine and TonY together to the same codebase
> > (Page #30.
> >
> https://www.slideshare.net/xkrogen/hadoop-meetup-jan-2019-tony-tensorflow-on-yarn-and-beyond#30
> > ).
> >
> > This expands the scope of the original Submarine project in exciting new
> > ways. Toward that end, would it make sense to create a separate Submarine
> > project at Apache? This can make faster adoption of Submarine, and allow
> > Submarine to grow to a full-blown machine learning platform.
> >
> > There will be lots of technical details to work out, but any initial
> > thoughts on this?
> >
> > Best Regards,
> > Xun Liu
>
>
> -
> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>
>


Re: Any thoughts making Submarine a separate Apache project?

2019-07-17 Thread Szilard Nemeth
+1, this is a great idea.
As the Hadoop repository has already grown huge and contains many projects, I
think it's generally a good idea to separate projects in an early phase.


On Wed, Jul 17, 2019, 08:50 runlin zhang  wrote:

> +1, that will be great!
>
> > On Jul 10, 2019, at 3:34 PM, Xun Liu  wrote:
> >
> > Hi all,
> >
> > This is Xun Liu contributing to the Submarine project for deep learning
> > workloads running with big data workloads together on Hadoop clusters.
> >
> > A number of integrations of Submarine with other projects are finished
> > or in progress, such as Apache Zeppelin, TonY, Azkaban. The next
> step
> > of Submarine is going to integrate with more projects like Apache Arrow,
> > Redis, MLflow, etc. & be able to handle end-to-end machine learning use
> > cases like model serving, notebook management, advanced training
> > optimizations (like auto parameter tuning, memory cache optimizations for
> > large datasets for training, etc.), and make it run on other platforms
> like
> > Kubernetes or natively on Cloud. LinkedIn also wants to donate TonY
> project
> > to Apache so we can put Submarine and TonY together to the same codebase
> > (Page #30.
> >
> https://www.slideshare.net/xkrogen/hadoop-meetup-jan-2019-tony-tensorflow-on-yarn-and-beyond#30
> > ).
> >
> > This expands the scope of the original Submarine project in exciting new
> > ways. Toward that end, would it make sense to create a separate Submarine
> > project at Apache? This can make faster adoption of Submarine, and allow
> > Submarine to grow to a full-blown machine learning platform.
> >
> > There will be lots of technical details to work out, but any initial
> > thoughts on this?
> >
> > Best Regards,
> > Xun Liu
>
>
> -
> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>
>


Re: [VOTE] Force "squash and merge" option for PR merge on github UI

2019-07-17 Thread Vinod Kumar Vavilapalli
Makes sense, +1.

Thanks
+Vinod

> On Jul 17, 2019, at 11:37 AM, Elek, Marton  wrote:
> 
> Hi,
> 
> Github UI (ui!) helps to merge Pull Requests to the proposed branch.
> There are three different ways to do it [1]:
> 
> 1. Keep all the different commits from the PR branch and create one
> additional merge commit ("Create a merge commit")
> 
> 2. Squash all the commits and commit the change as one patch ("Squash
> and merge")
> 
> 3. Keep all the different commits from the PR branch but rebase, merge
> commit will be missing ("Rebase and merge")
> 
> 
> 
> As only option 2 is compatible with the existing development
> practices of Hadoop (1 issue = 1 patch = 1 commit), I call for a lazy
> consensus vote: if there are no objections within 3 days, I will ask INFRA to
> disable options 1 and 3 to make the process less error prone.
> 
> Please let me know what you think,
> 
> Thanks a lot
> Marton
> 
> ps: Personally I prefer to merge from local as it enables me to sign the
> commits and do a final build before push. But this is a different story,
> this proposal is only about removing the options which are obviously
> risky...
> 
> ps2: You can always do any kind of merge / commits from CLI, for example
> to merge a feature branch together with keeping the history.
> 
> [1]:
> https://help.github.com/en/articles/merging-a-pull-request#merging-a-pull-request-on-github
> 
> -
> To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
> 


-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: [ANNOUNCE] New Apache Hadoop Committer - Tao Yang

2019-07-17 Thread runlin zhang


Congrats Tao! 

> On Jul 15, 2019, at 5:53 PM, Weiwei Yang  wrote:
> 
> Hi Dear Apache Hadoop Community
> 
> It's my pleasure to announce that Tao Yang has been elected as an Apache
> Hadoop committer, this is to recognize his contributions to Apache Hadoop
> YARN project.
> 
> Congratulations and welcome on board!
> 
> Weiwei
> (On behalf of the Apache Hadoop PMC)


-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: Any thoughts making Submarine a separate Apache project?

2019-07-17 Thread runlin zhang
+1, that will be great!

> On Jul 10, 2019, at 3:34 PM, Xun Liu  wrote:
> 
> Hi all,
> 
> This is Xun Liu contributing to the Submarine project for deep learning
> workloads running with big data workloads together on Hadoop clusters.
> 
> A number of integrations of Submarine with other projects are finished
> or in progress, such as Apache Zeppelin, TonY, Azkaban. The next step
> of Submarine is going to integrate with more projects like Apache Arrow,
> Redis, MLflow, etc. & be able to handle end-to-end machine learning use
> cases like model serving, notebook management, advanced training
> optimizations (like auto parameter tuning, memory cache optimizations for
> large datasets for training, etc.), and make it run on other platforms like
> Kubernetes or natively on Cloud. LinkedIn also wants to donate TonY project
> to Apache so we can put Submarine and TonY together to the same codebase
> (Page #30.
> https://www.slideshare.net/xkrogen/hadoop-meetup-jan-2019-tony-tensorflow-on-yarn-and-beyond#30
> ).
> 
> This expands the scope of the original Submarine project in exciting new
> ways. Toward that end, would it make sense to create a separate Submarine
> project at Apache? This can make faster adoption of Submarine, and allow
> Submarine to grow to a full-blown machine learning platform.
> 
> There will be lots of technical details to work out, but any initial
> thoughts on this?
> 
> Best Regards,
> Xun Liu


-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: [VOTE] Force "squash and merge" option for PR merge on github UI

2019-07-17 Thread Mukul Kumar Singh

+1, let's have "Squash and merge" as the default & only option on the GitHub UI.

Thanks,
Mukul


On 7/17/19 11:37 AM, Elek, Marton wrote:

Hi,

Github UI (ui!) helps to merge Pull Requests to the proposed branch.
There are three different ways to do it [1]:

1. Keep all the different commits from the PR branch and create one
additional merge commit ("Create a merge commit")

2. Squash all the commits and commit the change as one patch ("Squash
and merge")

3. Keep all the different commits from the PR branch but rebase, merge
commit will be missing ("Rebase and merge")



As only option 2 is compatible with the existing development
practices of Hadoop (1 issue = 1 patch = 1 commit), I call for a lazy
consensus vote: if there are no objections within 3 days, I will ask INFRA to
disable options 1 and 3 to make the process less error prone.

Please let me know what you think,

Thanks a lot
Marton

ps: Personally I prefer to merge from local as it enables me to sign the
commits and do a final build before push. But this is a different story,
this proposal is only about removing the options which are obviously
risky...

ps2: You can always do any kind of merge / commits from CLI, for example
to merge a feature branch together with keeping the history.

[1]:
https://help.github.com/en/articles/merging-a-pull-request#merging-a-pull-request-on-github

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: [VOTE] Force "squash and merge" option for PR merge on github UI

2019-07-17 Thread Weiwei Yang
Thanks Marton, +1 on this.

Weiwei

On Jul 17, 2019, 2:07 PM +0800, Elek, Marton , wrote:
> Hi,
>
> Github UI (ui!) helps to merge Pull Requests to the proposed branch.
> There are three different ways to do it [1]:
>
> 1. Keep all the different commits from the PR branch and create one
> additional merge commit ("Create a merge commit")
>
> 2. Squash all the commits and commit the change as one patch ("Squash
> and merge")
>
> 3. Keep all the different commits from the PR branch but rebase, merge
> commit will be missing ("Rebase and merge")
>
>
>
> As only option 2 is compatible with the existing development
> practices of Hadoop (1 issue = 1 patch = 1 commit), I call for a lazy
> consensus vote: if there are no objections within 3 days, I will ask INFRA to
> disable options 1 and 3 to make the process less error prone.
>
> Please let me know what you think,
>
> Thanks a lot
> Marton
>
> ps: Personally I prefer to merge from local as it enables me to sign the
> commits and do a final build before push. But this is a different story,
> this proposal is only about removing the options which are obviously
> risky...
>
> ps2: You can always do any kind of merge / commits from CLI, for example
> to merge a feature branch together with keeping the history.
>
> [1]:
> https://help.github.com/en/articles/merging-a-pull-request#merging-a-pull-request-on-github
>
> -
> To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
>


[VOTE] Force "squash and merge" option for PR merge on github UI

2019-07-17 Thread Elek, Marton
Hi,

Github UI (ui!) helps to merge Pull Requests to the proposed branch.
There are three different ways to do it [1]:

1. Keep all the different commits from the PR branch and create one
additional merge commit ("Create a merge commit")

2. Squash all the commits and commit the change as one patch ("Squash
and merge")

3. Keep all the different commits from the PR branch but rebase, merge
commit will be missing ("Rebase and merge")



As only option 2 is compatible with the existing development
practices of Hadoop (1 issue = 1 patch = 1 commit), I call for a lazy
consensus vote: if there are no objections within 3 days, I will ask INFRA to
disable options 1 and 3 to make the process less error prone.

Please let me know what you think,

Thanks a lot
Marton

ps: Personally I prefer to merge from local as it enables me to sign the
commits and do a final build before push. But this is a different story,
this proposal is only about removing the options which are obviously
risky...

ps2: You can always do any kind of merge / commits from CLI, for example
to merge a feature branch together with keeping the history.

[1]:
https://help.github.com/en/articles/merging-a-pull-request#merging-a-pull-request-on-github

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org