Re: [VOTE] Release Apache Hadoop 3.0.3 (RC0)

2018-05-31 Thread Shashikant Banerjee
The link with the filter seems to be private; I can't see the
blocker list.
https://issues.apache.org/jira/issues/?filter=12343997

Meanwhile, I will be working on testing the release.

Thanks
Shashi
On 6/1/18, 11:18 AM, "Yongjun Zhang"  wrote:

Greetings all,

I've created the first release candidate (RC0) for Apache Hadoop
3.0.3. This is our next maintenance release, following 3.0.2. It includes
about 249 important fixes and improvements, among which there are 8 blockers.
See https://issues.apache.org/jira/issues/?filter=12343997

The RC artifacts are available at:
https://dist.apache.org/repos/dist/dev/hadoop/3.0.3-RC0/

The maven artifacts are available via
https://repository.apache.org/content/repositories/orgapachehadoop-1126

Please try the release and vote; the vote will run for the usual 5 working
days, ending on 06/07/2018 (PST). I would really appreciate your
participation here.

I ran into quite a few issues along the way; many thanks to the
people who helped, especially Sammi Chen, Andrew Wang, Junping Du, Eddy Xu.

Thanks,

--Yongjun




Re: [VOTE] Release Apache Hadoop 3.0.3 (RC0)

2018-06-01 Thread Shashikant Banerjee
Hi Yongjun,

I am able to see the list after logging in now.

Thanks
Shashi

From: Yongjun Zhang 
Date: Friday, June 1, 2018 at 9:11 PM
To: Gabor Bota 
Cc: Shashikant Banerjee , Hadoop Common 
, Hdfs-dev , 
"mapreduce-...@hadoop.apache.org" , 
"yarn-...@hadoop.apache.org" 
Subject: Re: [VOTE] Release Apache Hadoop 3.0.3 (RC0)


Thanks for the feedback!

Hi Shashikant, I thought I had made the filter visible to JIRA users; I have now
changed it to be visible to all logged-in JIRA users. Please let me know if
you cannot see it.

Hi Gabor,

Good question. I forgot to mention that I tried to add a tag earlier, per steps
7, 8 and 9 in
https://wiki.apache.org/hadoop/HowToRelease, but these steps do not seem to push
anything to git. I suspect step 4 should have been run with --rc-label; I
checked with Andrew, and he said it doesn't matter and people often don't use
the rc label.

I probably should mention that the build is on commit id 
37fd7d752db73d984dc31e0cdfd590d252f5e075.

The source is also available at
https://dist.apache.org/repos/dist/dev/hadoop/3.0.3-RC0/

Thanks.

--Yongjun

On Fri, Jun 1, 2018 at 4:17 AM, Gabor Bota
<gabor.b...@cloudera.com> wrote:
Hi Yongjun,

Thank you for working on this release. Is there a git tag in the upstream repo 
which can be checked out? I'd like to build the release from source.

Regards,
Gabor

On Fri, Jun 1, 2018 at 7:57 AM Shashikant Banerjee
<sbaner...@hortonworks.com> wrote:
The link with the filter seems to be private; I can't see the
blocker list.
https://issues.apache.org/jira/issues/?filter=12343997

Meanwhile, I will be working on testing the release.

Thanks
Shashi
On 6/1/18, 11:18 AM, "Yongjun Zhang" <yjzhan...@apache.org> wrote:

Greetings all,

I've created the first release candidate (RC0) for Apache Hadoop
3.0.3. This is our next maintenance release, following 3.0.2. It includes
about 249 important fixes and improvements, among which there are 8 blockers.
See https://issues.apache.org/jira/issues/?filter=12343997

The RC artifacts are available at:
https://dist.apache.org/repos/dist/dev/hadoop/3.0.3-RC0/

The maven artifacts are available via
https://repository.apache.org/content/repositories/orgapachehadoop-1126

Please try the release and vote; the vote will run for the usual 5 working
days, ending on 06/07/2018 (PST). I would really appreciate your
participation here.

I ran into quite a few issues along the way; many thanks to the
people who helped, especially Sammi Chen, Andrew Wang, Junping Du, Eddy Xu.

Thanks,

--Yongjun




Re: [VOTE] Release Apache Hadoop 3.0.3 (RC0)

2018-06-02 Thread Shashikant Banerjee
Thanks for working on this Yongjun!

+1 (non-binding)
 - verified signatures and checksums
 - built from source and set up a single-node cluster
 - ran basic HDFS operations
 - basic sanity check of the NN
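For anyone repeating the signature and checksum checks above, the flow looks roughly like the following sketch. A throwaway artifact is created locally so the commands are self-contained; for a real RC you would download the tarball, its .asc signature and its checksum file from the dist area instead, and the file names here are placeholders:

```shell
# Work in a scratch directory with a stand-in artifact; replace this
# with the real tarball downloaded from dist.apache.org for an actual RC.
cd "$(mktemp -d)"
echo "release payload" > hadoop-3.0.3.tar.gz

# Checksum check: the .sha512 file normally ships next to the artifact.
sha512sum hadoop-3.0.3.tar.gz > hadoop-3.0.3.tar.gz.sha512
sha512sum -c hadoop-3.0.3.tar.gz.sha512

# Signature check (real RC only, after importing the project KEYS file):
#   gpg --import KEYS
#   gpg --verify hadoop-3.0.3.tar.gz.asc hadoop-3.0.3.tar.gz
```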

Thanks
Shashi

On 6/2/18, 7:31 PM, "Zsolt Venczel"  wrote:

Thanks Yongjun for working on this!

+1 (non-binding)

 - built from source with the native library
 - ran the hadoop-hdfs-client tests with the native library; all passed
 - set up a 3-node cluster and ran teragen, terasort and teravalidate
successfully
 - created two snapshots and produced a snapshot diff successfully
 - checked the web UI, looked at the file structure and double-checked
the created snapshots

Thanks and best regards,
Zsolt

On Sat, Jun 2, 2018 at 11:33 AM Ajay Kumar 
wrote:

> Thanks for working on this Yongjun!!
>
> +1 (non-binding)
> - verified signatures and checksums
> - built from source and setup single node cluster
> - ran basic hdfs operations
> - ran TestDFSIO(read/write), wordcount, pi jobs.
> - basic sanity check of NN, RM UI
>
> Thanks,
> Ajay
    >
    > On 6/2/18, 12:45 AM, "Shashikant Banerjee" 
> wrote:
>
> Hi Yongjun,
>
> I am able to see the list after logging in now.
>
> Thanks
> Shashi
>
> From: Yongjun Zhang 
>     Date: Friday, June 1, 2018 at 9:11 PM
> To: Gabor Bota 
> Cc: Shashikant Banerjee , Hadoop Common <
> common-dev@hadoop.apache.org>, Hdfs-dev , "
> mapreduce-...@hadoop.apache.org" , "
> yarn-...@hadoop.apache.org" 
> Subject: Re: [VOTE] Release Apache Hadoop 3.0.3 (RC0)
>
>
> Thanks for the feedback!
>
> Hi Shashikant, I thought I had made the filter visible to JIRA users;
> I have now changed it to be visible to all logged-in JIRA users. Please let
> me know if you cannot see it.
>
> Hi Gabor,
>
> Good question. I forgot to mention that I tried to add a tag earlier,
> per steps 7, 8 and 9 in
> https://wiki.apache.org/hadoop/HowToRelease, but these steps do not seem
> to push anything to git. I suspect step 4 should have been run with
> --rc-label; I checked with Andrew, and he said it doesn't matter and often
> people don't use the rc label.
>
> I probably should mention that the build is on commit id
> 37fd7d752db73d984dc31e0cdfd590d252f5e075.
>
> The source is also available at
> https://dist.apache.org/repos/dist/dev/hadoop/3.0.3-RC0/
>
> Thanks.
>
> --Yongjun
>
> On Fri, Jun 1, 2018 at 4:17 AM, Gabor Bota <gabor.b...@cloudera.com> wrote:
> Hi Yongjun,
>
> Thank you for working on this release. Is there a git tag in the
> upstream repo which can be checked out? I'd like to build the release from
> source.
>
> Regards,
> Gabor
>
> On Fri, Jun 1, 2018 at 7:57 AM Shashikant Banerjee <
> sbaner...@hortonworks.com> wrote:
> The link with the filter seems to be private; I can't see
> the blocker list.
> https://issues.apache.org/jira/issues/?filter=12343997
>
> Meanwhile, I will be working on testing the release.
>
> Thanks
> Shashi
> On 6/1/18, 11:18 AM, "Yongjun Zhang" <yjzhan...@apache.org> wrote:
>
> Greetings all,
>
> I've created the first release candidate (RC0) for Apache Hadoop
> 3.0.3. This is our next maintenance release, following 3.0.2. It
> includes about 249 important fixes and improvements, among which
> there are 8 blockers. See
> https://issues.apache.org/jira/issues/?filter=12343997
>
> The RC artifacts are available at:
> https://dist.apache.org/repos/dist/dev/hadoop/3.0.3-RC0/
>
> The maven artifacts are available via
>
> https://repository.apache.org/content/repositories/orgapachehadoop-1126
>
> Please try the release and vote; the vote will run for the usual 5
> working days, ending on 06/07/2018 (PST). I would really appreciate
> your participation here.
>
> I ran into quite a few issues along the way; many thanks to the
> people who helped, especially Sammi Chen, Andrew Wang, Junping Du,
> Eddy Xu.
>
> Thanks,
>
> --Yongjun
>
>
>
>
>




Re: [VOTE] Merge ContainerIO branch (HDDS-48) in to trunk

2018-06-30 Thread Shashikant Banerjee
+1(non-binding)

Thanks
Shashi

On 6/30/18, 11:19 AM, "Nandakumar Vadivelu"  wrote:

+1

On 6/30/18, 3:44 AM, "Bharat Viswanadham"  
wrote:

Fixing subject line of the mail.


Thanks,
Bharat



On 6/29/18, 3:10 PM, "Bharat Viswanadham" 
 wrote:

Hi All,

Given the positive response to the discussion thread [1], here is 
the formal vote thread to merge HDDS-48 into trunk.

Summary of code changes:
1. Code changes for this branch are done in the hadoop-hdds 
subproject and the hadoop-ozone subproject; there is no impact to hadoop-hdfs.
2. Added support for multiple container types in the datanode code 
path.
3. Added disk layout logic for the containers to support future 
upgrades.
4. Added support for a volume choosing policy to distribute 
containers across the disks on a datanode.
5. Changed the format of the .container file to a human-readable 
format (YAML).


 The vote will run for 7 days, ending Fri July 6th. I will start 
this vote with my +1.

Thanks,
Bharat

[1] 
https://lists.apache.org/thread.html/79998ebd2c3837913a22097102efd8f41c3b08cb1799c3d3dea4876b@%3Chdfs-dev.hadoop.apache.org%3E




-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org








Re: [DISCUSS]Merge ContainerIO branch (HDDS-48) in to trunk

2018-06-29 Thread Shashikant Banerjee
+1 (non-binding) for merging HDDS-48 to trunk.

Thanks
Shashi



On 6/29/18, 10:34 PM, "Xiaoyu Yao"  wrote:

+1 for merging HDDS-48 to trunk. 

On 6/29/18, 9:34 AM, "Arpit Agarwal"  wrote:

+1 for merging this branch.

Added common-dev@


On 6/28/18, 3:16 PM, "Bharat Viswanadham" 
 wrote:

Hi everyone,

I’d like to start a thread to discuss merging the HDDS-48 branch to 
trunk. The ContainerIO work refactors the HDDS Datanode IO path to enforce 
clean separation between the Container management and the Storage layers.

Note: HDDS/Ozone code is not compiled by default in trunk. The 
'hdds' maven profile must be enabled to compile the branch payload.
 
The merge payload includes the following key improvements:
1. Support for multiple container types on the datanode.
2. A new disk layout for the containers that supports future 
upgrades.
3. Support for a volume choosing policy for container data locations.
4. A new human-readable (YAML) format for the .container file.
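As a rough illustration of point 4, a YAML .container descriptor can be read and checked with ordinary text tools. The field names below are hypothetical stand-ins, not the actual HDDS schema (the design documents linked in this message describe the real layout):

```shell
# Write a hypothetical .container descriptor. The keys shown here are
# illustrative only; the real schema is defined by the HDDS-48 work.
cat > /tmp/example.container <<'EOF'
containerID: 1
containerType: KeyValueContainer
state: OPEN
layOutVersion: 1
EOF

# Being plain YAML, the descriptor is greppable and human-diffable,
# unlike the previous, non-human-readable format.
grep 'containerType' /tmp/example.container
```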
 
Below are the links for design documents attached to HDDS-48.
 

https://issues.apache.org/jira/secure/attachment/12923107/ContainerIO-StorageManagement-DesignDoc.pdf
https://issues.apache.org/jira/secure/attachment/12923108/HDDS 
DataNode Disk Layout.pdf
 
The branch is ready to merge. Over the next week we will clean up 
the unused classes, fix old integration tests and continue testing the changes.
 
Thanks to Hanisha Koneru, Arpit Agarwal, Anu Engineer, Jitendra 
Pandey,  Xiaoyu Yao, Ajay Kumar, Mukul Kumar Singh, Marton Elek and Shashikant 
Banerjee for their contributions in design, development and code reviews.

Thanks,
Bharat




-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org











Re: [VOTE] Release Apache Hadoop 3.1.1 - RC0

2018-08-07 Thread Shashikant Banerjee
+1(non-binding)

* checked out git tag release-3.1.1-RC0
* built from source
* deployed on a single-node cluster
* executed basic dfs commands
* executed some snapshot commands

Thank you very much for the work, Wangda.

Thanks
Shashi

On 8/7/18, 6:43 PM, "Elek, Marton"  wrote:


+1 (non-binding)

1. Built from the source package.
2. Checked the signature
3. Started docker based pseudo cluster and smoketested some basic 
functionality (hdfs cli, ec cli, viewfs, yarn examples, spark word count 
job)

Thank you very much for the work, Wangda.
Marton


On 08/02/2018 08:43 PM, Wangda Tan wrote:
> Hi folks,
> 
> I've created RC0 for Apache Hadoop 3.1.1. The artifacts are available 
here:
> 
> http://people.apache.org/~wangda/hadoop-3.1.1-RC0/
> 
> The RC tag in git is release-3.1.1-RC0:
> https://github.com/apache/hadoop/commits/release-3.1.1-RC0
> 
> The maven artifacts are available via repository.apache.org at
> https://repository.apache.org/content/repositories/orgapachehadoop-1139/
> 
> You can find my public key at
> http://svn.apache.org/repos/asf/hadoop/common/dist/KEYS
> 
> This vote will run 5 days from now.
> 
> 3.1.1 contains 435 [1] fixed JIRA issues since 3.1.0.
> 
> I have done testing with a pseudo cluster and distributed shell job. My +1
> to start.
> 
> Best,
> Wangda Tan
> 
> [1] project in (YARN, HADOOP, MAPREDUCE, HDFS) AND fixVersion in (3.1.1)
> ORDER BY priority DESC
> 







Re: HADOOP-14163 proposal for new hadoop.apache.org

2018-09-02 Thread Shashikant Banerjee
+1

Thanks
Shashi

On 9/3/18, 9:23 AM, "Mukul Kumar Singh"  wrote:

+1, Thanks for working on this Marton.

-Mukul

On 03/09/18, 9:02 AM, "John Zhuge"  wrote:

+1 Like the new site.

On Sun, Sep 2, 2018 at 7:02 PM Weiwei Yang  wrote:

> That's really nice, +1.
>
> --
> Weiwei
>
> On Sat, Sep 1, 2018 at 4:36 AM Wangda Tan  wrote:
>
> > +1, thanks for working on this, Marton!
> >
> > Best,
> > Wangda
> >
> > On Fri, Aug 31, 2018 at 11:24 AM Arpit Agarwal
> > wrote:
> >
> > > +1
> > >
> > > Thanks for initiating this Marton.
> > >
> > >
> > > On 8/31/18, 1:07 AM, "Elek, Marton"  wrote:
> > >
> > > Bumping this thread one last time.
> > >
> > > I have the following proposal:
> > >
> > > 1. I will request a new git repository hadoop-site.git and 
import
> the
> > > new site to there (which has exactly the same content as the
> existing
> > > site).
> > >
> > > 2. I will ask infra to use the new repository as the source of
> > > hadoop.apache.org
> > >
> > > 3. I will sync manually all of the changes in the next two 
months
> > back
> > > to the svn site from the git (release announcements, new
> committers)
> > >
> > > IN CASE OF ANY PROBLEM we can switch back to the svn without 
any
> > > problem.
> > >
> > > If no-one objects within three days, I'll assume lazy 
consensus and
> > > start with this plan. Please comment if you have objections.
> > >
> > > Again: it allows immediate fallback at any time as svn repo 
will be
> > > kept
> > > as is (+ I will keep it up-to-date in the next 2 months)
> > >
> > > Thanks,
> > > Marton
> > >
> > >
> > > On 06/21/2018 09:00 PM, Elek, Marton wrote:
> > > >
> > > > Thank you very much to bump up this thread.
> > > >
> > > >
> > > > About [2]: (Just for the clarification) the content of the
> proposed
> > > > website is exactly the same as the old one.
> > > >
> > > > About [1]. I believe that the "mvn site" is perfect for the
> > > > documentation but for website creation there are more 
simple and
> > > > powerful tools.
> > > >
> > > > Hugo is simpler than Jekyll: just one binary, without
> > > > dependencies, and it works everywhere (mac, linux, windows)
> > > > dependencies, works everywhere (mac, linux, windows)
> > > >
> > > > Hugo is much more powerful than "mvn site". It is easier to
> > > create/use
> > > > a more modern layout/theme, and easier to handle the content (for
> > > example,
> > > > new release announcements could be generated as part of the
> release
> > > > process)
> release
> > > > process)
> > > >
> > > > I think it's very low risk to try out a new approach for 
the site
> > > (and
> > > > easy to rollback in case of problems)
> > > >
> > > > Marton
> > > >
> > > > ps: I just updated the patch/preview site with the recent
> releases:
> > > >
> > > > ***
> > > > * http://hadoop.anzix.net *
> > > > ***
> > > >
> > > > On 06/21/2018 01:27 AM, Vinod Kumar Vavilapalli wrote:
> > > >> Got pinged about this offline.
> > > >>
> > > >> Thanks for keeping at it, Marton!
> > > >>
> > > >> I think there are two road-blocks here
> > > >>   (1) Is the mechanism using which the website is built 
good
> > enough
> > > -
> > > >> mvn-site / hugo etc?
> > > >>   (2) Is the new website good enough?
> > > >>
> > > >> For (1), I just think we need more committer attention and 
get
> > > >> feedback rapidly and get it in.
> > > >>
> > > >> For (2), how about we do it in a different way in the 
interest
> of
> > > >> progress?
> > > >>   - We create a hadoop.apache.org/new-site/ where this new 
site
> > > goes.
> > > >>   - We then modify the existing web-site to say that there 
is a
> > new
> > > >> site/experience that folks can click on a link and 
navigate to
> > > >>   - As this new website matures and gets feedback & fixes, 
we
> > > finally
> > > >> pull the 

Re: [VOTE] Release Apache Hadoop Ozone 0.2.1-alpha (RC0)

2018-09-25 Thread Shashikant Banerjee
Hi Marton,

+1 (non-binding)

1. Verified the signature.
2. Built from source.
3. Ran robot tests to verify all RPC and REST commands.
4. Deployed a pseudo Ozone cluster and verified basic commands.

Thanks
Shashi

On 9/26/18, 8:26 AM, "Bharat Viswanadham"  wrote:

Hi Marton,
Thank You for the first ozone release.
+1 (non-binding)

1. Verified signatures.
2. Built from source.
3. Ran a docker cluster using the docker files from the Ozone tarball. Tested 
ozone shell commands.
4. Ran an ozone-hdfs cluster and verified Ozone is started as a plugin when 
the datanode boots up.

Thanks,
Bharat




On 9/19/18, 2:49 PM, "Elek, Marton"  wrote:

Hi all,

After the recent discussion about the first Ozone release I've created 
the first release candidate (RC0) for Apache Hadoop Ozone 0.2.1-alpha.

This release is alpha quality: it’s not recommended for use in production, 
but we believe that it’s stable enough to try out the feature set and 
collect feedback.

The RC artifacts are available from: 
https://home.apache.org/~elek/ozone-0.2.1-alpha-rc0/

The RC tag in git is: ozone-0.2.1-alpha-RC0 (968082ffa5d)

Please try the release and vote; the vote will run for the usual 5 
working days, ending on September 26, 2018 10pm UTC time.

The easiest way to try it out is:

1. Download the binary artifact
2. Read the docs at ./docs/index.html
3. TLDR; cd compose/ozone && docker-compose up -d


Please try it out, vote, or just give us feedback.

Thank you very much,
Marton

ps: Next week we will have a BoF session at ApacheCon North America in 
Montreal on Monday evening. Please join if you are interested, need support 
to try out the package, or just have any feedback.











Re: [VOTE] - HDDS-4 Branch merge

2019-01-17 Thread Shashikant Banerjee
+1

Thanks,
Shashi

On 1/16/19, 10:25 AM, "Mukul Kumar Singh"  wrote:

+1

Thanks,
Mukul

On 1/13/19, 11:06 PM, "Xiaoyu Yao"  wrote:

+1 (binding), these will be useful features for enterprise adoption of 
Ozone/HDDS.

Thanks,
Xiaoyu

On 1/12/19, 3:43 AM, "Lokesh Jain"  wrote:

+1 (non-binding)

Thanks
Lokesh

> On 12-Jan-2019, at 3:00 PM, Sandeep Nemuri  
wrote:
> 
> +1 (non-binding)
> 
> On Sat, Jan 12, 2019 at 8:49 AM Bharat Viswanadham <
> bviswanad...@hortonworks.com 
> wrote:
> 
>> +1 (binding)
>> 
>> 
>> Thanks,
>> Bharat
>> 
>> 
>> On 1/11/19, 11:04 AM, "Hanisha Koneru"  
wrote:
>> 
>>+1 (binding)
>> 
>>Thanks,
>>Hanisha
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>>On 1/11/19, 7:40 AM, "Anu Engineer"  
wrote:
>> 
>>> Since I have not heard any concerns, I will start a VOTE thread 
now.
>>> This vote will run for 7 days and will end on Jan/18/2019 @ 
8:00 AM
>> PST.
>>> 
>>> I will start with my vote, +1 (Binding)
>>> 
>>> Thanks
>>> Anu
>>> 
>>> 
>>> -- Forwarded message -
>>> From: Anu Engineer 
>>> Date: Mon, Jan 7, 2019 at 5:10 PM
>>> Subject: [Discuss] - HDDS-4 Branch merge
>>> To: , 
>>> 
>>> 
>>> Hi All,
>>> 
>>> I would like to propose a merge of HDDS-4 branch to the Hadoop 
trunk.
>>> HDDS-4 branch implements the security work for HDDS and Ozone.
>>> 
>>> HDDS-4 branch contains the following features:
>>>   - Hadoop Kerberos and Tokens support
>>>   - A Certificate infrastructure used by Ozone and HDDS.
>>>   - Audit Logging and parsing support (Spread across trunk and
>> HDDS-4)
>>>   - S3 Security Support - AWS Signature Support.
>>>   - Apache Ranger Support for Ozone
>>> 
>>> I will follow up with a formal vote later this week if I hear no
>>> objections. AFAIK, the changes are isolated to HDDS/Ozone and 
should
>> not
>>> impact any other Hadoop project.
>>> 
>>> Thanks
>>> Anu
>> 
>>
>> 
>> 
>> 
> 
> -- 
> *  Regards*
> *  Sandeep Nemuri*













Re: [VOTE] Propose to start new Hadoop sub project "submarine"

2019-02-04 Thread Shashikant Banerjee
+1 (non-binding)

Thanks
Shashi

On 2/4/19, 8:27 PM, "Elek, Marton"  wrote:

+1 (non-binding)

(my arguments are in the discuss thread. small move, huge benefit)

Thanks,
Marton

On 2/1/19 11:15 PM, Wangda Tan wrote:
> Hi all,
> 
> According to positive feedbacks from the thread [1]
> 
> This is vote thread to start a new subproject named "hadoop-submarine"
> which follows the release process already established for ozone.
> 
> The vote runs for usual 7 days, which ends at Feb 8th 5 PM PDT.
> 
> Thanks,
> Wangda Tan
> 
> [1]
> 
https://lists.apache.org/thread.html/f864461eb188bd12859d51b0098ec38942c4429aae7e4d001a633d96@%3Cyarn-dev.hadoop.apache.org%3E
> 






Re: VOTE: Hadoop Ozone 0.4.0-alpha RC1

2019-04-21 Thread Shashikant Banerjee
+1 (non-binding)

- Verified checksums
- Built from source
- Ran smoke tests
- Ran basic ozone shell commands in a single-node cluster.

Thanks Ajay for all the work on the release.

Thanks
Shashi
On 4/20/19, 11:13 PM, "Dinesh Chitlangia"  wrote:

+1 (non-binding)

- Verified checksums
- Built from sources and ran all smoketests
- Repeated smoketest on the binary rc
- Verified audit logs and audit parser
- Toggled audit logging on/off selectively, without having to restart 
the microservices

Thanks Ajay for organizing the release.

Cheers,
Dinesh



On 4/20/19, 11:19 AM, "Elek, Marton"  wrote:

+1 (non-binding)

 - build from source
 - run all the smoketest (from the fresh build based on the src package)
 - run all the smoketest from the binary package
 - signature files are checked
 - sha512 checksums are verified
 - ozone version shows the right commit information

Thanks Ajay for all the release work,
Marton

ps: used archlinux, java 8, docker-compose 1.23.2, docker 18.09.4-ce

On 4/20/19 3:24 PM, Xiaoyu Yao wrote:
> 
> +1 (binding)
> 
> - Build from source
> - Misc security tests with docker compose
> - MR and Spark sample jobs with secure ozone cluster
> 
> —Xiaoyu
> 
>> On Apr 19, 2019, at 3:40 PM, Anu Engineer 
 wrote:
>>
>> +1 (Binding)
>>
>> -- Verified the checksums.
>> -- Built from sources.
>> -- Sniff tested the functionality.
>>
>> --Anu
>>
>>
>> On Mon, Apr 15, 2019 at 4:09 PM Ajay Kumar 

>> wrote:
>>
>>> Hi all,
>>>
>>> We have created the second release candidate (RC1) for Apache 
Hadoop Ozone
>>> 0.4.0-alpha.
>>>
>>> This release contains security payload for Ozone. Below are some 
important
>>> features in it:
>>>
>>>  *   Hadoop Delegation Tokens and Block Tokens supported for Ozone.
>>>  *   Transparent Data Encryption (TDE) Support - Allows data blocks 
to be
>>> encrypted-at-rest.
>>>  *   Kerberos support for Ozone.
>>>  *   Certificate Infrastructure for Ozone  - Tokens use PKI instead 
of
>>> shared secrets.
>>>  *   Datanode to Datanode communication secured via mutual TLS.
>>>  *   Ability to secure an Ozone cluster that works with Yarn, Hive, and 
>>> Spark.
>>>  *   Skaffold support to deploy Ozone clusters on K8s.
>>>  *   Support S3 Authentication Mechanisms like - S3 v4 
Authentication
>>> protocol.
>>>  *   S3 Gateway supports Multipart upload.
>>>  *   S3A file system is tested and supported.
>>>  *   Support for Tracing and Profiling for all Ozone components.
>>>  *   Audit Support - including Audit Parser tools.
>>>  *   Apache Ranger Support in Ozone.
>>>  *   Extensive failure testing for Ozone.
>>>
>>> The RC artifacts are available at
>>> https://home.apache.org/~ajay/ozone-0.4.0-alpha-rc1
>>>
>>> The RC tag in git is ozone-0.4.0-alpha-RC1 (git hash
>>> d673e16d14bb9377f27c9017e2ffc1bcb03eebfb)
>>>
>>> Please try out
>>> <https://cwiki.apache.org/confluence/display/HADOOP/Running+via+Apache+Release>,
>>> vote, or just give us feedback.
>>>
>>> The vote will run for 5 days, ending on April 20, 2019, 19:00 UTC.
>>>
>>> Thank you very much,
>>>
>>> Ajay
>>>
>>>
>>>
> 
> 










Re: [VOTE] Apache Hadoop Ozone 0.5.0-beta RC1

2020-03-09 Thread Shashikant Banerjee
I think https://issues.apache.org/jira/browse/HDDS-3116 is a blocker for
the release. Because of this, datanodes fail to communicate with SCM, get
marked dead, and don't seem to recover.
This has been observed in multiple test setups.

Thanks
Shashi

On Mon, Mar 9, 2020 at 9:20 PM Attila Doroszlai 
wrote:

> +1
>
> * Verified GPG signature and SHA512 checksum
> * Compiled sources
> * Ran ozone smoke test against both binary and locally compiled versions
>
> Thanks Dinesh for RC1.
>
> -Attila
>
> On Sun, Mar 8, 2020 at 2:34 AM Arpit Agarwal
>  wrote:
> >
> > +1 (binding)
> > Verified mds, sha512
> > Verified signatures
> > Built from source
> > Deployed to 3 node cluster
> > Tried a few ozone shell and filesystem commands
> > Ran freon load generator
> > Thanks Dinesh for putting the RC1 together.
> >
> >
> >
> > > On Mar 6, 2020, at 4:46 PM, Dinesh Chitlangia 
> wrote:
> > >
> > > Hi Folks,
> > >
> > > We have put together RC1 for Apache Hadoop Ozone 0.5.0-beta.
> > >
> > > The RC artifacts are at:
> > > https://home.apache.org/~dineshc/ozone-0.5.0-rc1/
> > >
> > > The public key used for signing the artifacts can be found at:
> > > https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> > >
> > > The maven artifacts are staged at:
> > >
> https://repository.apache.org/content/repositories/orgapachehadoop-1260
> > >
> > > The RC tag in git is at:
> > > https://github.com/apache/hadoop-ozone/tree/ozone-0.5.0-beta-RC1
> > >
> > > This release contains 800+ fixes/improvements [1].
> > > Thanks to everyone who put in the effort to make this happen.
> > >
> > > *The vote will run for 7 days, ending on March 13th 2020 at 11:59 pm
> PST.*
> > >
> > > Note: This release is beta quality, it’s not recommended to use in
> > > production but we believe that it’s stable enough to try out the
> feature
> > > set and collect feedback.
> > >
> > >
> > > [1] https://s.apache.org/ozone-0.5.0-fixed-issues
> > >
> > > Thanks,
> > > Dinesh Chitlangia
> >
>
>
>


Re: [VOTE] Apache Hadoop Ozone 0.5.0-beta RC2

2020-03-23 Thread Shashikant Banerjee
+1 (binding)

- Verified hashes and signatures
- Built from source
- Ran Freon Random key generator
- Verified unit tests


Thanks Dinesh for driving the release.

Thanks
Shashi

On Mon, Mar 23, 2020 at 8:58 AM Xiaoyu Yao 
wrote:

> +1 binding
> Download source and verify signature.
> Verify build and documents.
> Deployed an 11 node cluster (3 om with ha, 6 datanodes, 1 scm and 1 s3g)
> Verify multiple RATIS-3 pipelines are created as expected.
> Tried ozone shell commands via o3 and o3fs, focusing on security- and
> HA-related paths.
> Only found a few minor issues that we can fix in follow-up JIRAs.
> 1) ozone getconf -ozonemanagers does not return all the om instances
> bash-4.2$ ozone getconf -ozonemanagers
> 0.0.0.0
> 2) The document on specifying service/ID can be improved. More
> specifically, the URI should give examples for the Service ID in HA.
> Currently, it only mentions host/port.
>
> ozone sh vol create /vol1
> Service ID or host name must not be omitted when ozone.om.service.ids is
> defined.
> bash-4.2$ ozone sh vol create --help
> Usage: ozone sh volume create [-hV] [--root] [-q=] [-u=]
> 
> Creates a volume for the specified user
>  URI of the volume.
>   Ozone URI could start with o3:// or without
> prefix. URI
> may contain the host and port of the OM server.
> Both are
> optional. If they are not specified it will be
> identified from the config files.
> 3). ozone scmcli container list seems report incorrect numberOfKeys and
> usedBytes
> Also, container owner is set as the current leader om(om3), should we use
> the om service id here instead?
> bash-4.2$ ozone scmcli container list
> {
>   "state" : "OPEN",
>   "replicationFactor" : "THREE",
>   "replicationType" : "RATIS",
>   "usedBytes" : 3813,
>   "numberOfKeys" : 1,
> ...
> bash-4.2$ ozone sh key list o3://id1/vol1/bucket1/
> {
>   "volumeName" : "vol1",
>   "bucketName" : "bucket1",
>   "name" : "k1",
>   "dataSize" : 3813,
>   "creationTime" : "2020-03-23T03:23:30.670Z",
>   "modificationTime" : "2020-03-23T03:23:33.207Z",
>   "replicationType" : "RATIS",
>   "replicationFactor" : 3
> }
> {
>   "volumeName" : "vol1",
>   "bucketName" : "bucket1",
>   "name" : "k2",
>   "dataSize" : 3813,
>   "creationTime" : "2020-03-23T03:18:46.735Z",
>   "modificationTime" : "2020-03-23T03:20:15.005Z",
>   "replicationType" : "RATIS",
>   "replicationFactor" : 3
> }
>
>
> Run freon with random key generation.
>
> Thanks Dinesh for driving the release of Beta RC2.
>
> Xiaoyu
>
> On Sun, Mar 22, 2020 at 2:51 PM Aravindan Vijayan
>  wrote:
>
> > +1
> > Deployed a 3 node cluster
> > Tried ozone shell and filesystem commands
> > Ran freon load generator
> >
> > Thanks Dinesh for working on the RC2.
> >
> > On Sun, Mar 15, 2020 at 7:27 PM Dinesh Chitlangia  >
> > wrote:
> >
> > > Hi Folks,
> > >
> > > We have put together RC2 for Apache Hadoop Ozone 0.5.0-beta.
> > >
> > > The RC artifacts are at:
> > > https://home.apache.org/~dineshc/ozone-0.5.0-rc2/
> > >
> > > The public key used for signing the artifacts can be found at:
> > > https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> > >
> > > The maven artifacts are staged at:
> > >
> https://repository.apache.org/content/repositories/orgapachehadoop-1262
> > >
> > > The RC tag in git is at:
> > > https://github.com/apache/hadoop-ozone/tree/ozone-0.5.0-beta-RC2
> > >
> > > This release contains 800+ fixes/improvements [1].
> > > Thanks to everyone who put in the effort to make this happen.
> > >
> > > *The vote will run for 7 days, ending on March 22nd 2020 at 11:59 pm
> > PST.*
> > >
> > > Note: This release is beta quality; it is not recommended for production
> > > use, but we believe it is stable enough to try out the feature set and
> > > collect feedback.
> > >
> > >
> > > [1] https://s.apache.org/ozone-0.5.0-fixed-issues
> > >
> > > Thanks,
> > > Dinesh Chitlangia
> > >
> >
>


Re: [VOTE] Apache Hadoop Ozone 1.0.0 RC1

2020-08-31 Thread Shashikant Banerjee
+1(binding)

1. Verified checksums
2. Verified signatures
3. Verified the output of `ozone version`
4. Tried creating a volume and bucket, and writing and reading a key, via the Ozone shell
5. Verified basic Ozone Filesystem operations
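For anyone new to release verification, steps 1 and 2 above boil down to checking the published SHA-512 digests and GPG signatures against the downloaded artifacts. A minimal sketch of the checksum mechanics (the tarball and digest file here are stand-ins created locally, not the actual RC artifacts):

```shell
# Stand-in artifact: in a real verification you would download the
# release tarball and its .sha512 file from the RC staging area.
tmp=$(mktemp -d)
cd "$tmp"
echo "release-bits" > ozone-1.0.0.tar.gz
sha512sum ozone-1.0.0.tar.gz > ozone-1.0.0.tar.gz.sha512

# The actual check: reports "ozone-1.0.0.tar.gz: OK" when the digest matches.
sha512sum -c ozone-1.0.0.tar.gz.sha512

# Signatures are checked similarly after importing the project KEYS file:
#   gpg --import KEYS
#   gpg --verify ozone-1.0.0.tar.gz.asc ozone-1.0.0.tar.gz
```

If the tarball had been altered in transit, `sha512sum -c` would report `FAILED` and exit non-zero.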

Thank you very much, Sammi, for putting the release together.

Thanks
Shashi

On Mon, Aug 31, 2020 at 4:35 PM Elek, Marton wrote:

> +1 (binding)
>
>
> 1. verified signatures
>
> 2. verified checksums
>
> 3. verified the output of `ozone version` (includes the good git revision)
>
> 4. verified that the source package matches the git tag
>
> 5. verified source can be used to build Ozone without previous state
> (docker run -v ... -it maven ... --> built from the source with zero
> local maven cache in 16 minutes --> done on a server this time)
>
> 6. Verified Ozone can be used from binary package (cd compose/ozone &&
> test.sh --> all tests were passed)
>
> 7. Verified documentation is included in SCM UI
>
> 8. Deployed to Kubernetes and executed Teragen on Yarn [1]
>
> 9. Deployed to Kubernetes and executed Spark (3.0) Word count (local
> executor) [2]
>
> 10. Deployed to Kubernetes and executed Flink Word count [3]
>
> 11. Deployed to Kubernetes and executed Nifi
>
> Thanks very much, Sammi, for driving this release...
> Marton
>
> ps: NiFi setup requires some more testing. Counters were not updated on
> the UI and in some cases I saw DirNotFound exceptions when I used
> master. But during the last test with -rc1 it worked well.
>
> [1]: https://github.com/elek/ozone-perf-env/tree/master/teragen-ozone
>
> [2]: https://github.com/elek/ozone-perf-env/tree/master/spark-ozone
>
> [3]: https://github.com/elek/ozone-perf-env/tree/master/flink-ozone
>
>
> On 8/25/20 4:01 PM, Sammi Chen wrote:
> > RC1 artifacts are at:
> > https://home.apache.org/~sammichen/ozone-1.0.0-rc1/
> >
> > Maven artifacts are staged at:
> > https://repository.apache.org/content/repositories/orgapachehadoop-1278
> >
> > The public key used for signing the artifacts can be found at:
> > https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> >
> > The RC1 tag in github is at:
> > https://github.com/apache/hadoop-ozone/releases/tag/ozone-1.0.0-RC1
> >
> > Change log of RC1 adds:
> > 1. HDDS-4063. Fix InstallSnapshot in OM HA
> > 2. HDDS-4139. Update version number in upgrade tests.
> > 3. HDDS-4144. Update version info in hadoop client dependency readme
> >
> > *The vote will run for 7 days, ending on Aug 31st 2020 at 11:59 pm PST.*
> >
> > Thanks,
> > Sammi Chen
> >
>
> -
> To unsubscribe, e-mail: ozone-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: ozone-dev-h...@hadoop.apache.org
>
>


Re: [DISCUSS] Guidelines for Code cleanup JIRAs

2020-06-23 Thread Shashikant Banerjee
+1

Thanks
Shashi

On Mon, Jan 13, 2020 at 10:27 PM Ahmed Hussein wrote:

> +1
> Can we also make sure to add a label for the code cleanup JIRAs? At the
> very least, this will make it easier to search and filter them.
>
> On Mon, Jan 13, 2020 at 7:24 AM Wei-Chiu Chuang wrote:
>
> > +1
> >
> > On Thu, Jan 9, 2020 at 9:33 AM epa...@apache.org wrote:
> >
> > > There was some discussion on
> > > https://issues.apache.org/jira/browse/YARN-9052
> > > about concerns surrounding the costs/benefits of code cleanup JIRAs.
> > > This email is to get the discussion going within a wider audience.
> > >
> > > The positive points for code cleanup JIRAs:
> > > - Clean up tech debt
> > > - Make code more readable
> > > - Make code more maintainable
> > > - Make code more performant
> > >
> > > The concerns regarding code cleanup JIRAs are as follows:
> > > - If the changes only go into trunk, then contributors and committers
> > >   trying to backport to prior releases will have to create and test
> > >   multiple patch versions.
> > > - Some have voiced concerns that code cleanup JIRAs may not be tested
> > >   as thoroughly as features and bug fixes because functionality is not
> > >   supposed to change.
> > > - Any patches awaiting review that touch the same code will have to be
> > >   redone, re-tested, and re-reviewed.
> > > - JIRAs that are opened for code cleanup and not worked on right away
> > >   tend to clutter up the JIRA space.
> > >
> > > Here are my opinions:
> > > - Code changes of any kind force a non-trivial amount of overhead on
> > >   other developers. For code cleanup JIRAs, sometimes the usability,
> > >   maintainability, and performance gains are worth that overhead (as
> > >   in the case of YARN-9052).
> > > - Before opening any JIRA, please always consider whether or not the
> > >   added usability will outweigh the added pain you are causing other
> > >   developers.
> > > - If you believe the benefits outweigh the costs, please backport the
> > >   changes yourself to all active lines. My preference is to port all
> > >   the way back to 2.10.
> > > - Please don't run code analysis tools and then open many JIRAs that
> > >   document those findings. That activity does not put any thought into
> > >   this cost-benefit analysis.
> > >
> > > Thanks everyone. I'm looking forward to your thoughts. I appreciate
> > > all you do for the open source community, and it is always a pleasure
> > > to work with you.
> > > -Eric Payne
> > >
> > > -
> > > To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
> > > For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
> > >
> > >
> >
>


[jira] [Created] (HADOOP-15366) Add a helper shutDown routine in HadoopExecutor to ensure clean shutdown

2018-04-05 Thread Shashikant Banerjee (JIRA)
Shashikant Banerjee created HADOOP-15366:


 Summary: Add a helper shutDown routine in HadoopExecutor to ensure 
clean shutdown
 Key: HADOOP-15366
 URL: https://issues.apache.org/jira/browse/HADOOP-15366
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee


It is recommended to shut down an {{ExecutorService}} in two phases: first 
calling {{shutdown}} to reject incoming tasks, and then calling 
{{shutdownNow}}, if necessary, to cancel any lingering tasks. This Jira aims to 
add a helper shutdown routine in {{HadoopExecutor}} to achieve this.
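For context, the two-phase pattern described above is the one recommended in the `ExecutorService` javadoc; a helper like this would presumably wrap it. A sketch under that assumption (class and method names here are illustrative, not the actual HADOOP-15366 patch):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ShutdownHelper {

  /**
   * Two-phase shutdown: reject new tasks, wait for in-flight tasks,
   * then cancel any stragglers with shutdownNow() if needed.
   */
  static void shutdownAndAwaitTermination(ExecutorService pool) {
    pool.shutdown(); // phase 1: stop accepting new tasks
    try {
      if (!pool.awaitTermination(60, TimeUnit.SECONDS)) {
        pool.shutdownNow(); // phase 2: cancel lingering tasks
        if (!pool.awaitTermination(60, TimeUnit.SECONDS)) {
          System.err.println("Pool did not terminate");
        }
      }
    } catch (InterruptedException ie) {
      // (Re-)cancel if the current thread was interrupted while waiting,
      // and preserve the interrupt status.
      pool.shutdownNow();
      Thread.currentThread().interrupt();
    }
  }

  public static void main(String[] args) {
    ExecutorService pool = Executors.newFixedThreadPool(2);
    pool.submit(() -> System.out.println("task ran"));
    shutdownAndAwaitTermination(pool);
    System.out.println("terminated=" + pool.isTerminated());
  }
}
```

The key property is that callers never need to reason about which phase applies: the helper always leaves the pool fully terminated (or logs that it could not).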



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org