Re: [VOTE] Hadoop 3.1.x EOL

2021-06-03 Thread Bharat Viswanadham
+1

Thanks,
Bharat


On Thu, Jun 3, 2021 at 1:00 PM Viraj Jasani  wrote:

> +1 (non-binding)
>
> On Thu, 3 Jun 2021 at 12:21 PM, Wei-Chiu Chuang 
> wrote:
>
> > +1
> >
> > On Thu, Jun 3, 2021 at 2:14 PM Akira Ajisaka 
> wrote:
> >
> > > Dear Hadoop developers,
> > >
> > > Given the feedback from the discussion thread [1], I'd like to start
> > > an official vote
> > > thread for the community to vote and start the 3.1 EOL process.
> > >
> > > What this entails:
> > >
> > > (1) an official announcement that no further regular Hadoop 3.1.x
> > releases
> > > will be made after 3.1.4.
> > > (2) resolve JIRAs that specifically target 3.1.5 as won't fix.
> > >
> > > This vote will run for 7 days and conclude by June 10th, 16:00 JST [2].
> > >
> > > Committers are eligible to cast binding votes. Non-committers are
> > welcomed
> > > to cast non-binding votes.
> > >
> > > Here is my vote, +1
> > >
> > > [1] https://s.apache.org/w9ilb
> > > [2]
> > >
> >
> https://www.timeanddate.com/worldclock/fixedtime.html?msg=4=20210610T16=248
> > >
> > > Regards,
> > > Akira
> > >
> > > -
> > > To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> > > For additional commands, e-mail: common-dev-h...@hadoop.apache.org
> > >
> > >
> >
>


[jira] [Created] (HDFS-15897) SCM HA should be disabled in secure cluster

2021-03-15 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDFS-15897:
-

 Summary: SCM HA should be disabled in secure cluster
 Key: HDFS-15897
 URL: https://issues.apache.org/jira/browse/HDFS-15897
 Project: Hadoop HDFS
  Issue Type: Task
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


SCM HA security work is still in progress.

[~elek] brought up the point that, before the SCM HA branch is merged, we should 
add a safeguard check that fails cluster startup when SCM HA is enabled on a secure cluster.
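In sketch form, the safeguard could look like the following; the config key, utility class, and exception choice are assumptions for illustration, not the committed check:

{code:java}
// Hypothetical startup guard: refuse to bring up SCM when both security and
// SCM HA are enabled, since SCM HA security work is still in progress.
if (OzoneSecurityUtil.isSecurityEnabled(conf)
    && conf.getBoolean(ScmConfigKeys.OZONE_SCM_HA_ENABLE_KEY, false)) {
  throw new IllegalStateException(
      "SCM HA is not yet supported on a secure cluster; "
          + "disable SCM HA or security before starting SCM.");
}
{code}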



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: [VOTE] Moving Ozone to a separated Apache project

2020-09-29 Thread Bharat Viswanadham
+1
Thank You @Elek, Marton for driving this.


Thanks,
Bharat


On Mon, Sep 28, 2020 at 10:54 AM Vivek Ratnavel 
wrote:

> +1 for moving Ozone to a separated Top-Level Apache Project.
>
> Thanks,
> Vivek Subramanian
>
> On Mon, Sep 28, 2020 at 8:30 AM Hanisha Koneru
> 
> wrote:
>
> > +1
> >
> > Thanks,
> > Hanisha
> >
> > > On Sep 27, 2020, at 11:48 PM, Akira Ajisaka 
> wrote:
> > >
> > > +1
> > >
> > > Thanks,
> > > Akira
> > >
> > > On Fri, Sep 25, 2020 at 3:00 PM Elek, Marton  wrote:
> > >>
> > >> Hi all,
> > >>
> > >> Thank you for all the feedback and requests,
> > >>
> > >> As we discussed in the previous thread(s) [1], Ozone is proposed to
> be a
> > >> separated Apache Top Level Project (TLP)
> > >>
> > >> The proposal with all the details, motivation and history is here:
> > >>
> > >>
> >
> https://cwiki.apache.org/confluence/display/HADOOP/Ozone+Hadoop+subproject+to+Apache+TLP+proposal
> > >>
> > >> This voting runs for 7 days and will be concluded at 2nd of October,
> 6AM
> > >> GMT.
> > >>
> > >> Thanks,
> > >> Marton Elek
> > >>
> > >> [1]:
> > >>
> >
> https://lists.apache.org/thread.html/rc6c79463330b3e993e24a564c6817aca1d290f186a1206c43ff0436a%40%3Chdfs-dev.hadoop.apache.org%3E
> > >>
> > >> -
> > >> To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
> > >> For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org
> > 
> > >>
> > >
> > > -
> > > To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> > 
> > > For additional commands, e-mail: common-dev-h...@hadoop.apache.org
> > 
> >
>


Re: [VOTE] Apache Hadoop Ozone 1.0.0 RC1

2020-08-31 Thread Bharat Viswanadham
+1 (binding)

*  Built from the source tarball.
*  Verified the checksums and signatures.
*  Verified basic Ozone file system(o3fs) and S3 operations via AWS S3 CLI
on the OM HA un-secure cluster.
*  Verified ozone shell commands via CLI on the OM HA un-secure cluster.
*  Verified basic Ozone file system and S3 operations via AWS S3 CLI on the
OM HA secure cluster.
*  Verified ozone shell commands via CLI on the OM HA secure cluster.

Thanks, Sammi for driving the release.

Regards,
Bharat


On Mon, Aug 31, 2020 at 10:23 AM Xiaoyu Yao 
wrote:

> +1 (binding)
>
> * Verify the checksums and signatures.
> * Verify basic Ozone file system and S3 operations via CLI in secure docker
> compose environment
> * Run MR examples and teragen/terasort with ozone secure enabled.
> * Verify EN/CN document rendering with hugo serve
>
> Thanks Sammi for driving the release.
>
> Regards,
> Xiaoyu
>
> On Mon, Aug 31, 2020 at 8:55 AM Shashikant Banerjee
>  wrote:
>
> > +1(binding)
> >
> > 1. Verified checksums
> > 2. Verified signatures
> > 3. Verified the output of `ozone version`
> > 4. Tried creating a volume and bucket, and writing and reading a key via the Ozone shell
> > 5. Verified basic Ozone Filesystem operations
> >
> > Thank you very much Sammi for putting the release together.
> >
> > Thanks
> > Shashi
> >
> > On Mon, Aug 31, 2020 at 4:35 PM Elek, Marton  wrote:
> >
> > > +1 (binding)
> > >
> > >
> > > 1. verified signatures
> > >
> > > 2. verified checksums
> > >
> > > 3. verified the output of `ozone version` (includes the good git
> > revision)
> > >
> > > 4. verified that the source package matches the git tag
> > >
> > > 5. verified source can be used to build Ozone without previous state
> > > (docker run -v ... -it maven ... --> built from the source with zero
> > > local maven cache during 16 minutes --> did on a sever at this time)
> > >
> > > 6. Verified Ozone can be used from binary package (cd compose/ozone &&
> > > test.sh --> all tests were passed)
> > >
> > > 7. Verified documentation is included in SCM UI
> > >
> > > 8. Deployed to Kubernetes and executed Teragen on Yarn [1]
> > >
> > > 9. Deployed to Kubernetes and executed Spark (3.0) Word count (local
> > > executor) [2]
> > >
> > > 10. Deployed to Kubernetes and executed Flink Word count [3]
> > >
> > > 11. Deployed to Kubernetes and executed Nifi
> > >
> > > Thanks very much Sammi, for driving this release...
> > > Marton
> > >
> > > ps: NiFi setup requires some more testing. Counters were not updated on
> > > the UI, and in some cases I saw DirNotFound exceptions when I used
> > > master. But during the last test with -rc1 it worked well.
> > >
> > > [1]: https://github.com/elek/ozone-perf-env/tree/master/teragen-ozone
> > >
> > > [2]: https://github.com/elek/ozone-perf-env/tree/master/spark-ozone
> > >
> > > [3]: https://github.com/elek/ozone-perf-env/tree/master/flink-ozone
> > >
> > >
> > > On 8/25/20 4:01 PM, Sammi Chen wrote:
> > > > RC1 artifacts are at:
> > > > https://home.apache.org/~sammichen/ozone-1.0.0-rc1/
> > > >
> > > > Maven artifacts are staged at:
> > > >
> > https://repository.apache.org/content/repositories/orgapachehadoop-1278
> > > >
> > > >
> > > > The public key used for signing the artifacts can be found at:
> > > > https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> > > >
> > > > The RC1 tag in github is at:
> > > > https://github.com/apache/hadoop-ozone/releases/tag/ozone-1.0.0-RC1
> > > >
> > > > Change log of RC1, add
> > > > 1. HDDS-4063. Fix InstallSnapshot in OM HA
> > > > 2. HDDS-4139. Update version number in upgrade tests.
> > > > 3. HDDS-4144, Update version info in hadoop client dependency readme
> > > >
> > > > *The vote will run for 7 days, ending on Aug 31st 2020 at 11:59 pm PST.*
> > > >
> > > > Thanks,
> > > > Sammi Chen
> > > >
> > >
> > > -
> > > To unsubscribe, e-mail: ozone-dev-unsubscr...@hadoop.apache.org
> > > For additional commands, e-mail: ozone-dev-h...@hadoop.apache.org
> > >
> > >
> >
>


Re: [VOTE] Apache Hadoop Ozone 0.5.0-beta RC2

2020-03-21 Thread Bharat Viswanadham
I am seeing this issue when running hdfs commands on the hadoop27
docker-compose environment. I see the same test failing when running the smoke test.


$ docker exec -it c7fe17804044 bash

bash-4.4$ hdfs dfs -put /opt/hadoop/NOTICE.txt o3fs://bucket1.vol1/kk

2020-03-22 04:40:14 WARN  NativeCodeLoader:60 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2020-03-22 04:40:15 INFO  MetricsConfig:118 - Loaded properties from hadoop-metrics2.properties
2020-03-22 04:40:16 INFO  MetricsSystemImpl:374 - Scheduled Metric snapshot period at 10 second(s).
2020-03-22 04:40:16 INFO  MetricsSystemImpl:191 - XceiverClientMetrics metrics system started

-put: Fatal internal error
java.lang.NullPointerException: client is null
        at java.util.Objects.requireNonNull(Objects.java:228)
        at org.apache.hadoop.hdds.scm.XceiverClientRatis.getClient(XceiverClientRatis.java:201)
        at org.apache.hadoop.hdds.scm.XceiverClientRatis.sendRequestAsync(XceiverClientRatis.java:227)
        at org.apache.hadoop.hdds.scm.XceiverClientRatis.sendCommandAsync(XceiverClientRatis.java:305)
        at org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.writeChunkAsync(ContainerProtocolCalls.java:315)
        at org.apache.hadoop.hdds.scm.storage.BlockOutputStream.writeChunkToContainer(BlockOutputStream.java:599)
        at org.apache.hadoop.hdds.scm.storage.BlockOutputStream.writeChunk(BlockOutputStream.java:452)
        at org.apache.hadoop.hdds.scm.storage.BlockOutputStream.handleFlush(BlockOutputStream.java:463)
        at org.apache.hadoop.hdds.scm.storage.BlockOutputStream.close(BlockOutputStream.java:486)
        at org.apache.hadoop.ozone.client.io.BlockOutputStreamEntry.close(BlockOutputStreamEntry.java:144)
        at org.apache.hadoop.ozone.client.io.KeyOutputStream.handleStreamAction(KeyOutputStream.java:481)
        at org.apache.hadoop.ozone.client.io.KeyOutputStream.handleFlushOrClose(KeyOutputStream.java:455)
        at org.apache.hadoop.ozone.client.io.KeyOutputStream.close(KeyOutputStream.java:508)
        at org.apache.hadoop.fs.ozone.OzoneFSOutputStream.close(OzoneFSOutputStream.java:56)
        at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
        at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
        at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:62)
        at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:120)
        at org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.writeStreamToFile(CommandWithDestination.java:466)
        at org.apache.hadoop.fs.shell.CommandWithDestination.copyStreamToTarget(CommandWithDestination.java:391)
        at org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:328)
        at org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:263)
        at org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:248)
        at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
        at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
        at org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:243)
        at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
        at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
        at org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:220)
        at org.apache.hadoop.fs.shell.CopyCommands$Put.processArguments(CopyCommands.java:267)
        at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)
        at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
        at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
        at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)


The same command works fine when using ozone fs:

$ docker exec -it fe5d39cf6eed bash

bash-4.2$ ozone fs -put /opt/hadoop/NOTICE.txt o3fs://bucket1.vol1/kk

2020-03-22 04:41:10,999 [main] INFO impl.MetricsConfig: Loaded properties from hadoop-metrics2.properties
2020-03-22 04:41:11,123 [main] INFO impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s).
2020-03-22 04:41:11,127 [main] INFO impl.MetricsSystemImpl: XceiverClientMetrics metrics system started

bash-4.2$ ozone fs -ls o3fs://bucket1.vol1/

Found 1 items
-rw-rw-rw-   3 hadoop hadoop      17540 2020-03-22 04:41 o3fs://bucket1.vol1/kk


- Built from the source tarball.
- Verified md5 and sha256 checksums and signatures.
- Ran smoke tests; found the one issue above.
- Deployed to a 5-node docker cluster using the ozone compose definition (OM +
SCM + 3 Datanodes), and ran basic ozone shell and fs commands.

Thank You, Dinesh for driving the release.


Thanks,
Bharat




On Sat, Mar 21, 2020 at 8:48 PM Arpit Agarwal 
wrote:

> +1 binding.
>
> - Verified hashes and signatures
> - Built from source
> - Deployed to 5 node cluster
> - 

Re: [VOTE] EOL Hadoop branch-2.8

2020-03-03 Thread Bharat Viswanadham
+1

Thanks,
Bharat

On Tue, Mar 3, 2020 at 7:46 PM Zhankun Tang  wrote:

> Thanks, Wei-Chiu. +1.
>
> BR,
> Zhankun
>
> On Wed, 4 Mar 2020 at 08:03, Wilfred Spiegelenburg
>  wrote:
>
> > +1
> >
> > Wilfred
> >
> > > On 3 Mar 2020, at 05:48, Wei-Chiu Chuang  wrote:
> > >
> > > I am sorry I forgot to start a VOTE thread.
> > >
> > > This is the "official" vote thread to mark branch-2.8 End of Life. This
> > is
> > > based on the following thread and the tracking jira (HADOOP-16880
> > > ).
> > >
> > > This vote will run for 7 days and conclude on March 9th (Mon) 11am PST.
> > >
> > > Please feel free to share your thoughts.
> > >
> > > Thanks,
> > > Weichiu
> > >
> > > On Mon, Feb 24, 2020 at 10:28 AM Wei-Chiu Chuang
> > > wrote:
> > >
> > >> Looking at the EOL policy wiki:
> > >>
> >
> https://cwiki.apache.org/confluence/display/HADOOP/EOL+%28End-of-life%29+Release+Branches
> > >>
> > >> The Hadoop community can still elect to make security update for
> EOL'ed
> > >> releases.
> > >>
> > >> I think the EOL is to give more clarity to downstream applications
> (such
> > >> as HBase) the guidance of which Hadoop release lines are still active.
> > >> Additionally, I don't think it is sustainable to maintain 6 concurrent
> > >> release lines in this big project, which is why I wanted to start this
> > >> discussion.
> > >>
> > >> Thoughts?
> > >>
> > >> On Mon, Feb 24, 2020 at 10:22 AM Sunil Govindan 
> > wrote:
> > >>
> > >>> Hi Wei-Chiu
> > >>>
> > >>> Extremely sorry for the late reply here.
> > >>> Could you please help add more clarity on what will happen for
> > >>> branch-2.8 when we call EOL.
> > >>> Does this mean that no more releases will come out of this branch, or are
> > >>> there some additional guidelines?
> > >>>
> > >>> - Sunil
> > >>>
> > >>>
> > >>> On Mon, Feb 24, 2020 at 11:47 PM Wei-Chiu Chuang
> > >>>  wrote:
> > >>>
> >  This thread has been running for 7 days and no -1.
> > 
> >  Don't think we've established a formal EOL process, but to publicize
> > the
> >  EOL, I am going to file a jira, update the wiki and post the
> > >>> announcement
> >  to general@ and user@
> > 
> >  On Wed, Feb 19, 2020 at 1:40 PM Dinesh Chitlangia <
> > >>> dineshc@gmail.com>
> >  wrote:
> > 
> > > Thanks Wei-Chiu for initiating this.
> > >
> > > +1 for 2.8 EOL.
> > >
> > > On Tue, Feb 18, 2020 at 10:48 PM Akira Ajisaka <
> aajis...@apache.org>
> > > wrote:
> > >
> > >> Thanks Wei-Chiu for starting the discussion,
> > >>
> > >> +1 for the EoL.
> > >>
> > >> -Akira
> > >>
> > >> On Tue, Feb 18, 2020 at 4:59 PM Ayush Saxena 
> >  wrote:
> > >>
> > >>> Thanx Wei-Chiu for initiating this
> > >>> +1 for marking 2.8 EOL
> > >>>
> > >>> -Ayush
> > >>>
> >  On 17-Feb-2020, at 11:14 PM, Wei-Chiu Chuang <
> > >>> weic...@apache.org>
> > >> wrote:
> > 
> >  The last Hadoop 2.8.x release, 2.8.5, was GA on September 15th
> >  2018.
> > 
> >  It's been 17 months since the release and the community by and
> >  large
> > >> have
> >  moved up to 2.9/2.10/3.x.
> > 
> >  With Hadoop 3.3.0 over the horizon, is it time to start the EOL
> > >>> discussion
> >  and reduce the number of active branches?
> > >>>
> > >>>
> > >>> -
> > >>> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> > >>> For additional commands, e-mail:
> > >>> common-dev-h...@hadoop.apache.org
> > >>>
> > >>>
> > >>
> > >
> > 
> > >>>
> > >>
> >
> > Wilfred Spiegelenburg
> > Staff Software Engineer
> >  
> >
>


Re: [ANNOUNCE] New Apache Hadoop Committer - Stephen O'Donnell

2020-03-03 Thread Bharat Viswanadham
Congratulations Stephen!

Thanks,
Bharat


On Tue, Mar 3, 2020 at 12:12 PM Wei-Chiu Chuang  wrote:

> In bcc: general@
>
> It's my pleasure to announce that Stephen O'Donnell has been elected as
> committer on the Apache Hadoop project recognizing his continued
> contributions to the
> project.
>
> Please join me in congratulating him.
>
> Hearty Congratulations & Welcome aboard Stephen!
>
> Wei-Chiu Chuang
> (On behalf of the Hadoop PMC)
>


Re: [Discuss] Ozone moving to Beta tag

2020-02-23 Thread Bharat Viswanadham
+1 for Beta, given the major performance improvement work that went into
Ozone Manager and the Datanode pipeline.

I have been testing Teragen runs; we now have consistent runs, and
performance is nearly on par with HDFS on a disaggregated storage and
compute cluster.



Thanks,
Bharat


On Sun, Feb 23, 2020 at 6:35 PM Sammi Chen  wrote:

> +1,  Impressive performance achievement on OzoneManager, let's move to
> Beta.
>
> Bests,
> Sammi Chen
>
> On Thu, Feb 20, 2020 at 4:17 AM Anu Engineer  wrote:
>
> > Hi All,
> >
> >
> > I would like to propose moving Ozone from 'Alpha' tags to 'Beta' tags
> when
> > we do future releases. Here are a couple of reasons why I think we should
> > make this move.
> >
> >
> >
> >1. Ozone Manager or the Namenode for Ozone scales to more than 1
> billion
> >keys. We tested this in our labs in an organic fashion; that is, we
> were
> >able to create more than 1 billion keys from external clients with no
> > loss
> >in performance.
> >2. The ozone Manager meets the performance and resource constraints
> that
> >we set out to achieve. We were able to sustain the same throughput at
> > Ozone
> >manager for over three days that took us to get this 1 billion keys.
> > That
> >is, we did not have to shut down or resize memory for the namenode as
> we
> >went through this exercise.
> >3.  The most critical, we did this experiment with 64GB of memory
> >allocation in JVM and 64 GB of RAM off-heap allocation. That is, the
> > Ozone
> >Manager was able to achieve this scale with far less memory footprint
> > than
> >HDFS.
> >4. Ozone's performance is at par with HDFS when running workloads like
> >Hive (
> >
> >
> https://blog.cloudera.com/benchmarking-ozone-clouderas-next-generation-storage-for-cdp/
> >)
> >5. We have been able to run long-running clusters with Ozone.
> >
> >
> > Having achieved these goals, I propose that we move from the planned
> > 0.4.2-Alpha release to 0.5.0-Beta as our next release. If we hear no
> > concerns about this, we would like to move Ozone from Alpha to Beta
> > releases.
> >
> >
> > Thanks
> >
> > Anu
> >
> >
> > P.S. I am CC-ing HDFS dev since many people who are interested in Ozone
> > still have not subscribed to Ozone dev lists. My apologies if it feels
> like
> > spam, I promise that over time we will become less noisy in the HDFS
> > channel.
> >
> >
> > PPS. I know lots of you will want to know more specifics; Our blog
> presses
> > are working overtime and I promise you that you will get to see all the
> > details pretty soon.
> >
>


[jira] [Created] (HDFS-15165) In Du missed calling getAttributesProvider

2020-02-12 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDFS-15165:
-

 Summary: In Du missed calling getAttributesProvider
 Key: HDFS-15165
 URL: https://issues.apache.org/jira/browse/HDFS-15165
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


HDFS-12130 changed the behavior of DU.

During that change to getInodeAttributes, it missed calling 
getAttributesProvider().getAttributes() when an attributes provider is configured.

Because of this, when Sentry is configured for an HDFS path and an attributeProvider 
class is set, the provider is never consulted and the AclFeature from Sentry is 
missing. As a result, when the DU command is run on a Sentry-managed HDFS path, we 
see an AccessControlException.

 

This Jira is to fix this issue.
{code:java}
<property>
  <name>dfs.namenode.inode.attributes.provider.class</name>
  <value>org.apache.sentry.hdfs.SentryINodeAttributesProvider</value>
</property>
{code}
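In sketch form, the fix routes attribute lookups through the configured provider again; the method shape below follows FSDirectory's style but is an assumption, not the verbatim patch:

{code:java}
// Hypothetical sketch: the DU/getContentSummary path must also consult the
// configured INodeAttributeProvider, otherwise provider-supplied ACLs
// (e.g. Sentry's AclFeature) are silently dropped.
INodeAttributes getINodeAttributes(String fullPath, INode inode, int snapshotId) {
  INodeAttributes attrs = inode.getSnapshotINode(snapshotId);
  if (attributeProvider != null) {
    attrs = attributeProvider.getAttributes(fullPath, attrs);
  }
  return attrs;
}
{code}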



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: [DISCUSS] Ozone 0.4.2 release

2019-12-07 Thread Bharat Viswanadham
+1

Thanks,
Bharat


On Sat, Dec 7, 2019 at 1:18 PM Giovanni Matteo Fumarola <
giovanni.fumar...@gmail.com> wrote:

> +1
>
> Thanks for starting this.
>
> On Sat, Dec 7, 2019 at 1:13 PM Jitendra Pandey
>  wrote:
>
> > +1
> >
> >
> > > On Dec 7, 2019, at 9:13 AM, Arpit Agarwal
> 
> > wrote:
> > >
> > > +1
> > >
> > >
> > >
> > >> On Dec 6, 2019, at 5:25 PM, Dinesh Chitlangia 
> > wrote:
> > >>
> > >> All,
> > >> Since the Apache Hadoop Ozone 0.4.1 release, we have had significant
> > >> bug fixes towards performance & stability.
> > >>
> > >> With that in mind, 0.4.2 release would be good to consolidate all
> those
> > fixes.
> > >>
> > >> Pls share your thoughts.
> > >>
> > >>
> > >> Thanks,
> > >> Dinesh Chitlangia
> > >
> > >
> > > -
> > > To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> > > For additional commands, e-mail: common-dev-h...@hadoop.apache.org
> > >
> >
> > -
> > To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
> > For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
> >
> >
>


[jira] [Resolved] (HDDS-2536) Add ozone.om.internal.service.id to OM HA configuration

2019-11-20 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-2536.
--
Fix Version/s: 0.5.0
   Resolution: Fixed

> Add ozone.om.internal.service.id to OM HA configuration
> ---
>
> Key: HDDS-2536
> URL: https://issues.apache.org/jira/browse/HDDS-2536
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>    Reporter: Bharat Viswanadham
>    Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> This Jira is to add ozone.om.internal.serviceid to let an OM know it belongs to 
> a particular service.
>  
> As of now we have ozone.om.service.ids -> where we can define all the service IDs 
> in a cluster. (This can happen if the same config is shared across the cluster.)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-2594) S3 RangeReads failing with NumberFormatException

2019-11-20 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-2594:


 Summary: S3 RangeReads failing with NumberFormatException
 Key: HDDS-2594
 URL: https://issues.apache.org/jira/browse/HDDS-2594
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham


 
{code:java}
2019-11-20 15:32:04,684 WARN org.eclipse.jetty.servlet.ServletHandler:
javax.servlet.ServletException: java.lang.NumberFormatException: For input 
string: "3977248768"
        at 
org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:432)
        at 
org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:370)
        at 
org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:389)
        at 
org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:342)
        at 
org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:229)
        at 
org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:840)
        at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1780)
        at 
org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1609)
        at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1767)
        at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
        at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1767)
        at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:583)
        at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
        at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
        at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
        at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
        at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:513)
        at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
        at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
        at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
        at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
        at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
        at org.eclipse.jetty.server.Server.handle(Server.java:539)
        at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:333)
        at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
        at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
        at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
        at 
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
        at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
        at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
        at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
        at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
        at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
        at java.lang.Thread.run(Thread.java:748)
{code}
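The failing input is telling: 3977248768 exceeds Integer.MAX_VALUE (2147483647), so the range offset presumably needs to be parsed as a 64-bit value. A minimal illustration, not the actual S3 Gateway parsing code:

{code:java}
// 3977248768 does not fit in a 32-bit int (Integer.MAX_VALUE is 2147483647),
// so this throws java.lang.NumberFormatException: For input string: "3977248768"
int offset = Integer.parseInt("3977248768");    // fails
// Range offsets for objects larger than ~2 GB must be parsed as longs:
long safeOffset = Long.parseLong("3977248768"); // works
{code}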
 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-2241) Optimize the refresh pipeline logic used by KeyManagerImpl to obtain the pipelines for a key

2019-11-20 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-2241.
--
Fix Version/s: 0.5.0
   Resolution: Fixed

> Optimize the refresh pipeline logic used by KeyManagerImpl to obtain the 
> pipelines for a key
> 
>
> Key: HDDS-2241
> URL: https://issues.apache.org/jira/browse/HDDS-2241
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently, while looking up a key, the Ozone Manager gets the pipeline 
> information from SCM through an RPC for every block in the key. For large 
> files > 1GB, we may end up making a lot of RPC calls for this. This can be 
> optimized in a couple of ways
> * We can implement a batch getContainerWithPipeline API in SCM using which we 
> can get the pipeline info locations for all the blocks for a file. To keep 
> the number of containers passed in to SCM in a single call, we can have a 
> fixed container batch size on the OM side. _Here, Number of calls = 1 (or k 
> depending on batch size)_
> * Instead, a simpler change would be to have a map (method local) of 
> ContainerID -> Pipeline that we get from SCM so that we don't need to make 
> repeated calls to SCM for the same containerID for a key. _Here, Number of 
> calls = Number of unique containerIDs_
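A minimal sketch of the second, simpler option; the client and helper names are assumptions, not the committed patch:

{code:java}
// Hypothetical sketch of the method-local ContainerID -> Pipeline cache.
Map<Long, Pipeline> pipelineCache = new HashMap<>();
for (OmKeyLocationInfo location :
    keyInfo.getLatestVersionLocations().getLocationList()) {
  long containerId = location.getContainerID();
  Pipeline pipeline = pipelineCache.get(containerId);
  if (pipeline == null) {
    // One SCM RPC per unique container instead of one per block.
    pipeline = scmClient.getContainerWithPipeline(containerId).getPipeline();
    pipelineCache.put(containerId, pipeline);
  }
  location.setPipeline(pipeline);
}
{code}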



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-2247) Delete FileEncryptionInfo from KeyInfo when a Key is deleted

2019-11-19 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-2247.
--
Fix Version/s: 0.5.0
   Resolution: Fixed

> Delete FileEncryptionInfo from KeyInfo when a Key is deleted
> 
>
> Key: HDDS-2247
> URL: https://issues.apache.org/jira/browse/HDDS-2247
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> As part of HDDS-2174 we are deleting GDPR Encryption Key on delete file 
> operation.
> However, if KMS is enabled, we are skipping GDPR Encryption Key approach when 
> writing file in a GDPR enforced Bucket.
> {code:java}
> final FileEncryptionInfo feInfo = keyOutputStream.getFileEncryptionInfo();
> if (feInfo != null) {
>   KeyProvider.KeyVersion decrypted = getDEK(feInfo);
>   final CryptoOutputStream cryptoOut =
>   new CryptoOutputStream(keyOutputStream,
>   OzoneKMSUtil.getCryptoCodec(conf, feInfo),
>   decrypted.getMaterial(), feInfo.getIV());
>   return new OzoneOutputStream(cryptoOut);
> } else {
>   try{
> GDPRSymmetricKey gk;
> Map<String, String> openKeyMetadata =
> openKey.getKeyInfo().getMetadata();
> if(Boolean.valueOf(openKeyMetadata.get(OzoneConsts.GDPR_FLAG))){
>   gk = new GDPRSymmetricKey(
>   openKeyMetadata.get(OzoneConsts.GDPR_SECRET),
>   openKeyMetadata.get(OzoneConsts.GDPR_ALGORITHM)
>   );
>   gk.getCipher().init(Cipher.ENCRYPT_MODE, gk.getSecretKey());
>   return new OzoneOutputStream(
>   new CipherOutputStream(keyOutputStream, gk.getCipher()));
> }
>   }catch (Exception ex){
> throw new IOException(ex);
>   }
> {code}
> In such scenario, when KMS is enabled & GDPR enforced on a bucket, if user 
> deletes a file, we should delete the {{FileEncryptionInfo}} from KeyInfo, 
> before moving it to deletedTable, else we cannot guarantee Right to Erasure.
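A minimal sketch of the proposed delete path; table and helper names are assumptions for illustration:

{code:java}
// Hypothetical sketch: strip encryption info before the key lands in deletedTable.
OmKeyInfo keyToDelete = metadataManager.getKeyTable().get(objectKey);
if (keyToDelete.getFileEncryptionInfo() != null
    && Boolean.valueOf(keyToDelete.getMetadata().get(OzoneConsts.GDPR_FLAG))) {
  // Dropping FileEncryptionInfo leaves the deleted bytes undecryptable,
  // which is what Right to Erasure requires here.
  keyToDelete.clearFileEncryptionInfo(); // assumed helper
}
metadataManager.getDeletedTable().put(objectKey, keyToDelete);
{code}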



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-2581) Make OM Ha config to use Java Configs

2019-11-19 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-2581:


 Summary: Make OM Ha config to use Java Configs
 Key: HDDS-2581
 URL: https://issues.apache.org/jira/browse/HDDS-2581
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham


This Jira is created based on the comments from [~aengineer] during HDDS-2536 
review.

Can we please use the Java Configs instead of this old-style config to add a 
config?

 

This Jira is to move all OM HA config to the new style (Java-config-based 
approach).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-2536) Add ozone.om.internal.service.id to OM HA configuration

2019-11-18 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-2536:


 Summary: Add ozone.om.internal.service.id to OM HA configuration
 Key: HDDS-2536
 URL: https://issues.apache.org/jira/browse/HDDS-2536
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


This Jira is to add ozone.om.internal.serviceid to let an OM know it belongs to a 
particular service.

 

As of now we have ozone.om.service.ids -> where we can define all the service IDs in 
a cluster. (This can happen if the same config is shared across the cluster.)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-2461) Logging by ChunkUtils is misleading

2019-11-15 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-2461.
--
Fix Version/s: 0.5.0
   Resolution: Fixed

> Logging by ChunkUtils is misleading
> ---
>
> Key: HDDS-2461
> URL: https://issues.apache.org/jira/browse/HDDS-2461
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> During a k8s based test I found a lot of log message like:
> {code:java}
> 2019-11-12 14:27:13 WARN  ChunkManagerImpl:209 - Duplicate write chunk 
> request. Chunk overwrite without explicit request. 
> ChunkInfo{chunkName='A9UrLxiEUN_testdata_chunk_4465025, offset=0, len=1024} 
> {code}
> I was very surprised as at ChunkManagerImpl:209 there was no similar lines.
> It turned out that it's logged by ChunkUtils but it's used the logger of 
> ChunkManagerImpl.
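The fix, in sketch form, is simply to give ChunkUtils its own logger instead of borrowing ChunkManagerImpl's (a minimal illustration, not the verbatim patch):

{code:java}
// Each class declares its own logger so log lines point at the real source.
private static final Logger LOG = LoggerFactory.getLogger(ChunkUtils.class);
{code}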



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-2513) Remove this unused "COMPONENT" private field.

2019-11-15 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-2513.
--
Fix Version/s: 0.5.0
   Resolution: Fixed

> Remove this unused "COMPONENT" private field.
> -
>
> Key: HDDS-2513
> URL: https://issues.apache.org/jira/browse/HDDS-2513
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Abhishek Purohit
>Assignee: Abhishek Purohit
>Priority: Minor
>  Labels: pull-request-available, sonar
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Remove this unused "COMPONENT" private field in class 
> XceiverClientGrpc
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_AGKcVY8lQ4ZsWG=false]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-2502) Close ScmClient in RatisInsight

2019-11-15 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-2502.
--
Fix Version/s: 0.5.0
   Resolution: Fixed

> Close ScmClient in RatisInsight
> ---
>
> Key: HDDS-2502
> URL: https://issues.apache.org/jira/browse/HDDS-2502
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Attila Doroszlai
>Assignee: Siddharth Wagle
>Priority: Major
>  Labels: pull-request-available, sonar
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {{ScmClient}} in {{RatisInsight}} should be closed after use.
> https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-mYKcVY8lQ4Zr_s=AW5md-mYKcVY8lQ4Zr_s
> Also two other minor issues reported in the same file:
> https://sonarcloud.io/project/issues?fileUuids=AW5md-HeKcVY8lQ4ZrXL=hadoop-ozone=false



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-2507) Remove the hard-coded exclusion of TestMiniChaosOzoneCluster

2019-11-15 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-2507.
--
Fix Version/s: 0.5.0
   Resolution: Fixed

> Remove the hard-coded exclusion of TestMiniChaosOzoneCluster
> 
>
> Key: HDDS-2507
> URL: https://issues.apache.org/jira/browse/HDDS-2507
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> We excluded the execution of TestMiniChaosOzoneCluster from the 
> hadoop-ozone/dev-support/checks/integration.sh because it was not stable 
> enough.
> Unfortunately this exclusion makes it impossible to use custom exclusion 
> lists (-Dsurefire.excludesFile=) as excludesFile can't be used if 
> -Dtest=!... is already used.
> I propose to remove this exclusion to make it possible to use different 
> exclusion for different runs (pr check, daily, etc.)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-2511) Fix Sonar issues in OzoneManagerServiceProviderImpl

2019-11-15 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-2511.
--
Resolution: Fixed

> Fix Sonar issues in OzoneManagerServiceProviderImpl
> ---
>
> Key: HDDS-2511
> URL: https://issues.apache.org/jira/browse/HDDS-2511
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available, sonar
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Link to the list of issues : 
> https://sonarcloud.io/project/issues?fileUuids=AW5md-HdKcVY8lQ4ZrUn=hadoop-ozone=false



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-2515) No need to call "toString()" method as formatting and string conversion is done by the Formatter

2019-11-15 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-2515.
--
Fix Version/s: 0.5.0
   Resolution: Fixed

> No need to call "toString()" method as formatting and string conversion is 
> done by the Formatter
> 
>
> Key: HDDS-2515
> URL: https://issues.apache.org/jira/browse/HDDS-2515
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Abhishek Purohit
>Assignee: Abhishek Purohit
>Priority: Major
>  Labels: pull-request-available, sonar
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_AGKcVY8lQ4ZsV4=false]
> Class:  XceiverClientGrpc
> {code:java}
> if (LOG.isDebugEnabled()) {
>   LOG.debug("Nodes in pipeline : {}", pipeline.getNodes().toString());
> }
> {code}
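The fix drops the redundant call and lets the SLF4J formatter do the string conversion:

{code:java}
if (LOG.isDebugEnabled()) {
  LOG.debug("Nodes in pipeline : {}", pipeline.getNodes());
}
{code}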



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-2472) Use try-with-resources while creating FlushOptions in RDBStore.

2019-11-15 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-2472.
--
Resolution: Fixed

> Use try-with-resources while creating FlushOptions in RDBStore.
> ---
>
> Key: HDDS-2472
> URL: https://issues.apache.org/jira/browse/HDDS-2472
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.5.0
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available, sonar
> Fix For: 0.5.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Link to the sonar issue flag - 
> https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-zwKcVY8lQ4ZsJ4=AW5md-zwKcVY8lQ4ZsJ4.
>  
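For reference, a minimal sketch of the pattern with RocksDB's FlushOptions; illustrative, not the committed change:

{code:java}
// org.rocksdb.FlushOptions is AutoCloseable; try-with-resources guarantees the
// native handle is released even if flush() throws.
try (FlushOptions flushOptions = new FlushOptions().setWaitForFlush(true)) {
  db.flush(flushOptions);
}
{code}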



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-2494) Sonar - BigDecimal(double) should not be used

2019-11-14 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-2494.
--
Fix Version/s: 0.5.0
   Resolution: Fixed

> Sonar - BigDecimal(double) should not be used
> -
>
> Key: HDDS-2494
> URL: https://issues.apache.org/jira/browse/HDDS-2494
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Matthew Sharp
>Assignee: Matthew Sharp
>Priority: Minor
>  Labels: pull-request-available, sonar
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Sonar Issue:  
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-0AKcVY8lQ4ZsKR=AW5md-0AKcVY8lQ4ZsKR]
>  
>  
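The rule exists because the double constructor captures the binary floating-point rounding error, while valueOf goes through the decimal string form; a quick illustration:

{code:java}
import java.math.BigDecimal;

public class BigDecimalDemo {
  public static void main(String[] args) {
    // The double constructor preserves the double's rounding error:
    System.out.println(new BigDecimal(0.1));
    // prints 0.1000000000000000055511151231257827021181583404541015625
    // valueOf uses Double.toString and gives the expected value:
    System.out.println(BigDecimal.valueOf(0.1)); // prints 0.1
  }
}
{code}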



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-2477) TableCache cleanup issue for OM non-HA

2019-11-13 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-2477:


 Summary: TableCache cleanup issue for OM non-HA
 Key: HDDS-2477
 URL: https://issues.apache.org/jira/browse/HDDS-2477
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


In the OM non-HA case, the ratisTransactionLogIndex is generated by 
OmProtocolServersideTranslatorPB.java, and validateAndUpdateCache is called from 
multiple handler threads. So think of a case where one thread with index 10 has 
added to the doubleBuffer (indexes 0-9 have not been added yet). The DoubleBuffer 
flush thread flushes and calls cleanup, which removes all cache entries with an 
epoch less than 10. It should not clean up entries that were put into the cache 
later and are still in the process of being flushed to DB. This causes 
inconsistency for a few OM requests.

 

Example: 4 threads committing 4 parts.

1st thread - part 1 - ratis index - 3

2nd thread - part 2 - ratis index - 2

3rd thread - part 3 - ratis index - 1

 

The first thread gets the lock and puts OmMultipartInfo (with part 1) into the 
doubleBuffer and the cache, and cleanup is called to remove all cache entries with 
an epoch less than 3. In the meantime the 2nd and 3rd threads put parts 2 and 3 
into OmMultipartInfo in the cache and doubleBuffer. But the first thread's cleanup, 
called with index 3, might remove those entries.

 

Now when the 4th part upload arrives and commits the multipart upload, it reads the 
multipart info and sees only part 1 in OmMultipartInfo, as the OmMultipartInfo with 
parts 1, 2, 3 is still in the process of committing to DB. So after the 4th part 
upload completes, DB and cache will hold parts 1 and 4 only; the part 2 and 3 
information is lost.

 

So for the non-HA case, cleanup will be called with the list of epochs that need to 
be cleaned up.
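A minimal sketch of the difference; the cache shape and method names are assumptions modeled on TableCache, not the committed patch:

{code:java}
// Threshold-based eviction is unsafe when transaction indexes reach the
// double buffer out of order:
//   cache.entrySet().removeIf(e -> e.getValue().getEpoch() <= flushedIndex);

// Epoch-list-based eviction only removes entries whose exact transaction
// indexes have actually been flushed to DB:
void cleanup(List<Long> flushedEpochs) {
  Set<Long> flushed = new HashSet<>(flushedEpochs);
  cache.entrySet().removeIf(e -> flushed.contains(e.getValue().getEpoch()));
}
{code}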



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-2471) Improve exception message for CompleteMultipartUpload

2019-11-13 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-2471:


 Summary: Improve exception message for CompleteMultipartUpload
 Key: HDDS-2471
 URL: https://issues.apache.org/jira/browse/HDDS-2471
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham


When an InvalidPart error occurs, the exception message does not have any 
information about the partName and partNumber; it would be good to include this 
information.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-2470) Add partName, partNumber for CommitMultipartUpload

2019-11-13 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-2470:


 Summary: Add partName, partNumber for CommitMultipartUpload
 Key: HDDS-2470
 URL: https://issues.apache.org/jira/browse/HDDS-2470
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


Right now, complete Multipart Upload does not print the partName and 
partNumber into the audit log.

 

 

2019-11-13 15:14:10,191 | INFO  | OMAudit | user=root | ip=9.134.50.210 | op=COMMIT_MULTIPART_UPLOAD_PARTKEY {volume=s325d55ad283aa400af464c76d713c07ad, bucket=ozone-test, key=plc_1570850798896_2991, dataSize=5242880, replicationType=RATIS, replicationFactor=ONE, keyLocationInfo=[blockID {
  containerBlockID {
    containerID: 2
    localID: 103129366531867089
  }
  blockCommitSequenceId: 4978
}
offset: 0
length: 5242880
createVersion: 0
pipeline {
  leaderID: ""
  members {
    uuid: "5d03aed5-cfb3-4689-b168-0c9a94316551"
    ipAddress: "9.134.51.232"
    hostName: "9.134.51.232"
    ports {
      name: "RATIS"
      value: 9858
    }
    ports {
      name: "STANDALONE"
      value: 9859
    }
    networkName: "5d03aed5-cfb3-4689-b168-0c9a94316551"
    networkLocation: "/default-rack"
  }
  members {
    uuid: "a71462ae-7865-4ed5-b84e-60616df60a0d"
    ipAddress: "9.134.51.25"
    hostName: "9.134.51.25"
    ports {
      name: "RATIS"
      value: 9858
    }
    ports {
      name: "STANDALONE"
      value: 9859
    }
    networkName: "a71462ae-7865-4ed5-b84e-60616df60a0d"
    networkLocation: "/default-rack"
  }
  members {
    uuid: "79bf7bdf-ed29-49d4-bf7c-e88fdbd2ce03"
    ipAddress: "9.134.51.215"
    hostName: "9.134.51.215"
    ports {
      name: "RATIS"
      value: 9858
    }
    ports {
      name: "STANDALONE"
      value: 9859
    }
    networkName: "79bf7bdf-ed29-49d4-bf7c-e88fdbd2ce03"
    networkLocation: "/default-rack"
  }
  state: PIPELINE_OPEN
  type: RATIS
  factor: THREE
  id {
    id: "ec6b06c5-193f-4c30-879b-5a12284dc4f8"
  }
}
]} | ret=SUCCESS |



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-2465) S3 Multipart upload failing

2019-11-12 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-2465:


 Summary: S3 Multipart upload failing
 Key: HDDS-2465
 URL: https://issues.apache.org/jira/browse/HDDS-2465
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham


When I run the attached Java program, I am facing the below error during 
completeMultipartUpload.
{code:java}
ERROR StatusLogger No Log4j 2 configuration file found. Using default configuration (logging only errors to the console), or user programmatically provided configurations. Set system property 'log4j2.debug' to show Log4j 2 internal initialization logging. See https://logging.apache.org/log4j/2.x/manual/configuration.html for instructions on how to configure Log4j 2
Exception in thread "main" com.amazonaws.services.s3.model.AmazonS3Exception: Bad Request (Service: Amazon S3; Status Code: 400; Error Code: 400 Bad Request; Request ID: c7b87393-955b-4c93-85f6-b02945e293ca; S3 Extended Request ID: 7tnVbqgc4bgb), S3 Extended Request ID: 7tnVbqgc4bgb
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1712)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1367)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1113)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:770)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:744)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:726)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:686)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:668)
        at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:532)
        at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:512)
        at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4921)
        at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4867)
        at com.amazonaws.services.s3.AmazonS3Client.completeMultipartUpload(AmazonS3Client.java:3464)
        at org.apache.hadoop.ozone.freon.MPU.main(MPU.java:96)
{code}
When I debug, the request is not being received by the S3 Gateway, and I 
don't see any trace of it in the audit log.

 

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-2453) Add Freon tests for S3Bucket/MPU Keys

2019-11-08 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-2453:


 Summary: Add Freon tests for S3Bucket/MPU Keys
 Key: HDDS-2453
 URL: https://issues.apache.org/jira/browse/HDDS-2453
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham


This Jira is to create freon tests for 
 # S3Bucket creation.
 # S3 MPU Key uploads.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-2410) Ozoneperf docker cluster should use privileged containers

2019-11-08 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-2410.
--
Fix Version/s: 0.5.0
   Resolution: Fixed

> Ozoneperf docker cluster should use privileged containers
> -
>
> Key: HDDS-2410
> URL: https://issues.apache.org/jira/browse/HDDS-2410
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The profiler 
> [servlet|https://github.com/elek/hadoop-ozone/blob/master/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/ProfileServlet.java]
>  (which helps to run java profiler in the background and publishes the result 
> on the web interface) requires privileged docker containers.
>  
> This flag is missing from the ozoneperf docker-compose cluster (which is 
> designed to run performance tests).
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-2399) Update mailing list information in CONTRIBUTION and README files

2019-11-07 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-2399.
--
Fix Version/s: 0.5.0
   Resolution: Fixed

> Update mailing list information in CONTRIBUTION and README files
> 
>
> Key: HDDS-2399
> URL: https://issues.apache.org/jira/browse/HDDS-2399
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Marton Elek
>Priority: Major
>  Labels: newbie, pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> We have new mailing lists:
>  [ozone-...@hadoop.apache.org|mailto:ozone-...@hadoop.apache.org]
> [ozone-iss...@hadoop.apache.org|mailto:ozone-iss...@hadoop.apache.org]
> [ozone-comm...@hadoop.apache.org|mailto:ozone-comm...@hadoop.apache.org]
>  
> We need to update CONTRIBUTION.md and README.md to use ozone-dev instead of 
> hdfs-dev (optionally we can mention the issues/commits lists, but only in 
> CONTRIBUTION.md)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-2395) Handle Ozone S3 completeMPU to match with aws s3 behavior.

2019-11-07 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-2395.
--
Fix Version/s: 0.5.0
   Resolution: Fixed

> Handle Ozone S3 completeMPU to match with aws s3 behavior.
> --
>
> Key: HDDS-2395
> URL: https://issues.apache.org/jira/browse/HDDS-2395
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>    Reporter: Bharat Viswanadham
>    Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> # When 2 parts are uploaded and the complete request lists only 1 part: no error
>  # During complete multipart upload, when a part name/part number does not match an 
> uploaded part and part number: InvalidPart error
>  # When parts are not specified in sorted order: InvalidPartOrder
>  # During complete multipart upload, when no parts were uploaded but we specify 
> some parts: also InvalidPart
>  # Uploaded parts 1, 2, 3; during complete we can complete with parts 1, 3 (no error)
>  # When only part 3 is uploaded, completing with part 3 can be done



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-2427) Exclude webapps from hadoop-ozone-filesystem-lib-current uber jar

2019-11-07 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-2427:


 Summary: Exclude webapps from hadoop-ozone-filesystem-lib-current 
uber jar
 Key: HDDS-2427
 URL: https://issues.apache.org/jira/browse/HDDS-2427
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham


This has caused an issue with DN UI loading.

hadoop-ozone-filesystem-lib-current-xx.jar is on the classpath, which 
accidentally loads the Ozone datanode web application instead of the Hadoop datanode 
application. This leads to the reported error. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-2377) Speed up TestOzoneManagerHA#testOMRetryProxy and #testTwoOMNodesDown

2019-11-06 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-2377.
--
Fix Version/s: 0.5.0
   Resolution: Fixed

> Speed up TestOzoneManagerHA#testOMRetryProxy and #testTwoOMNodesDown
> 
>
> Key: HDDS-2377
> URL: https://issues.apache.org/jira/browse/HDDS-2377
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Marton's comment:
> https://github.com/apache/hadoop-ozone/pull/30#pullrequestreview-302465440
> Out of curiosity, I ran the entire TestOzoneManagerHA class locally. It
> finished in 10m 30s. I discovered that {{testOMRetryProxy}} and
> {{testTwoOMNodesDown}} take the most time (2m and 2m 30s respectively).
> Most of that time is wasted on retries and waits, which we could reasonably
> reduce.
> As I tested, with the patch {{testOMRetryProxy}} and {{testTwoOMNodesDown}}
> finish in 20 sec each, saving almost 4 min of runtime on those two tests
> alone. The whole TestOzoneManagerHA class finishes in 5m 44s with the patch.
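
For illustration (this is not the committed patch itself), the usual way to cut 
this kind of wasted time is to replace long fixed sleeps with a bounded polling 
wait:

{code}
// Illustrative sketch only, not the HDDS-2377 change: poll a cluster
// condition every 100 ms and give up after 20 s, instead of sleeping for
// minutes. GenericTestUtils comes from the hadoop-common test jar.
import java.util.concurrent.TimeoutException;

import org.apache.hadoop.test.GenericTestUtils;

public class PollingWaitExample {
  private volatile boolean leaderElected; // stand-in for a real cluster check

  void awaitLeader() throws TimeoutException, InterruptedException {
    GenericTestUtils.waitFor(() -> leaderElected, 100, 20_000);
  }
}
{code}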






[jira] [Resolved] (HDDS-1643) Send hostName also part of OMRequest

2019-11-05 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-1643.
--
Fix Version/s: 0.5.0
   Resolution: Fixed

> Send hostName also part of OMRequest
> 
>
> Key: HDDS-1643
> URL: https://issues.apache.org/jira/browse/HDDS-1643
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>    Reporter: Bharat Viswanadham
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> This Jira is created based on the comment from [~eyang] on HDDS-1600 jira.
> [~bharatviswa] can the hostname be used as part of the OM request? When
> running in a Docker container, the virtual private network address may not be
> routable or exposed to the outside world, so using the IP to identify the
> source client location may not be enough. It would be nice to support
> hostname-based requests too.






[jira] [Resolved] (HDDS-2064) Add tests for incorrect OM HA config when node ID or RPC address is not configured

2019-11-05 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-2064.
--
Fix Version/s: 0.5.0
   Resolution: Fixed

> Add tests for incorrect OM HA config when node ID or RPC address is not 
> configured
> --
>
> Key: HDDS-2064
> URL: https://issues.apache.org/jira/browse/HDDS-2064
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> -OM will NPE and crash when `ozone.om.service.ids=id1,id2` is configured but 
> `ozone.om.nodes.id1` doesn't exist; or `ozone.om.address.id1.omX` doesn't 
> exist.-
> -Root cause:-
> -`OzoneManager#loadOMHAConfigs()` didn't check the case where `found == 0`. 
> This happens when local OM doesn't match any `ozone.om.address.idX.omX` in 
> the config.-
> Due to the refactoring done in HDDS-2162. This fix has been included in that 
> commit. I will repurpose the jira to add some tests for the HA config.






[jira] [Resolved] (HDDS-2359) Seeking randomly in a key with more than 2 blocks of data leads to inconsistent reads

2019-11-05 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-2359.
--
Fix Version/s: 0.5.0
   Resolution: Fixed

> Seeking randomly in a key with more than 2 blocks of data leads to 
> inconsistent reads
> -
>
> Key: HDDS-2359
> URL: https://issues.apache.org/jira/browse/HDDS-2359
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Istvan Fajth
>Assignee: Shashikant Banerjee
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> During Hive testing we found the following exception:
> {code}
> TaskAttempt 3 failed, info=[Error: Error while running task ( failure ) : 
> attempt_1569246922012_0214_1_03_00_3:java.lang.RuntimeException: 
> org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: 
> java.io.IOException: error iterating
> at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:296)
> at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:250)
> at 
> org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374)
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1876)
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61)
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)
> at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
> at 
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:108)
> at 
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:41)
> at 
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:77)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: 
> java.io.IOException: java.io.IOException: error iterating
> at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:80)
> at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:426)
> at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:267)
> ... 16 more
> Caused by: java.io.IOException: java.io.IOException: error iterating
> at 
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121)
> at 
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77)
> at 
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:366)
> at 
> org.apache.hadoop.hive.ql.io.HiveRecordReader.doNext(HiveRecordReader.java:79)
> at 
> org.apache.hadoop.hive.ql.io.HiveRecordReader.doNext(HiveRecordReader.java:33)
> at 
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.next(HiveContextAwareRecordReader.java:116)
> at 
> org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.next(TezGroupedSplitsInputFormat.java:151)
> at 
> org.apache.tez.mapreduce.lib.MRReaderMapred.next(MRReaderMapred.java:116)
> at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:68)
> ... 18 more
> Caused by: java.io.IOException: error iterating
> at 
> org.apache.hadoop.hive.ql.io.orc.VectorizedOrcAcidRowBatchReader.next(VectorizedOrcAcidRowBatchReader.java:835)
> at 
> org.apache.hadoop.hive.ql.io.orc.VectorizedOrcAcidRowBatchReader.next(VectorizedOrcAcidRowBatchReader.java:74)
> at 
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAw

[jira] [Resolved] (HDDS-2255) Improve Acl Handler Messages

2019-11-04 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-2255.
--
Fix Version/s: 0.5.0
   Resolution: Fixed

> Improve Acl Handler Messages
> 
>
> Key: HDDS-2255
> URL: https://issues.apache.org/jira/browse/HDDS-2255
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: om
>Reporter: Hanisha Koneru
>Assignee: YiSheng Lien
>Priority: Minor
>  Labels: newbie, pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> In the Add/Remove/Set ACL Key/Bucket/Volume handlers, we print a message about
> whether the operation was successful. If we try to add an ACL which already
> exists, we convey the message that the operation failed. It would be better if
> the message conveyed more clearly why the operation failed, i.e. that the ACL
> already exists.






[jira] [Resolved] (HDDS-2398) Remove usage of LogUtils class from ratis-common

2019-11-02 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-2398.
--
Fix Version/s: 0.5.0
   Resolution: Fixed

> Remove usage of LogUtils class from ratis-common
> 
>
> Key: HDDS-2398
> URL: https://issues.apache.org/jira/browse/HDDS-2398
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>    Reporter: Bharat Viswanadham
>    Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> MiniOzoneChaosCluster.java uses LogUtils from ratis-common to set log
> levels, but that method was removed from LogUtils as part of RATIS-508.
> We can avoid depending on Ratis for this and use GenericTestUtils from the
> hadoop-common test jar instead.
> LogUtils.setLogLevel(GrpcClientProtocolClient.LOG, Level.WARN);






[jira] [Created] (HDDS-2398) Remove usage of LogUtils class from ratis-common

2019-11-01 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-2398:


 Summary: Remove usage of LogUtils class from ratis-common
 Key: HDDS-2398
 URL: https://issues.apache.org/jira/browse/HDDS-2398
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham


MiniOzoneChaosCluster.java uses LogUtils from ratis-common to set log levels, 
but that method was removed from LogUtils as part of RATIS-508.

We can avoid depending on Ratis for this and use GenericTestUtils from the 
hadoop-common test jar instead; a sketch follows the snippet below.

LogUtils.setLogLevel(GrpcClientProtocolClient.LOG, Level.WARN);
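
A minimal sketch of the replacement, assuming the slf4j overload of 
GenericTestUtils.setLogLevel available in the hadoop-common test jar:

{code}
// Sketch under the assumption that GenericTestUtils exposes an slf4j
// setLogLevel overload; same effect as the removed LogUtils call, without
// the Ratis dependency.
import org.apache.hadoop.test.GenericTestUtils;
import org.apache.ratis.grpc.client.GrpcClientProtocolClient;
import org.slf4j.event.Level;

public class LogLevelSetup {
  static void quietRatisClientLogs() {
    GenericTestUtils.setLogLevel(GrpcClientProtocolClient.LOG, Level.WARN);
  }
}
{code}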






[jira] [Created] (HDDS-2397) Fix calling cleanup for few missing tables in OM

2019-10-31 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-2397:


 Summary: Fix calling cleanup for few missing tables in OM
 Key: HDDS-2397
 URL: https://issues.apache.org/jira/browse/HDDS-2397
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham


After the DoubleBuffer flushes, we call cleanup on the table caches.

For a few tables, the cache cleanup is missed:
 # PrefixTable
 # S3SecretTable
 # DelegationTable






[jira] [Created] (HDDS-2395) Handle completeMPU scenarios to match with aws s3 behavior.

2019-10-31 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-2395:


 Summary: Handle completeMPU scenarios to match with aws s3 
behavior.
 Key: HDDS-2395
 URL: https://issues.apache.org/jira/browse/HDDS-2395
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Bharat Viswanadham


# When 2 parts are uploaded and the complete request lists only 1 part, there 
is no error.
 # During complete multipart upload, if a part name/part number does not match 
an uploaded part and its part number, an InvalidPart error is returned.
 # When parts are not specified in sorted order, InvalidPartOrder is returned.
 # During complete multipart upload, if no parts were uploaded but the request 
specifies some parts, InvalidPart is returned as well.
 # When parts 1, 2 and 3 are uploaded, completing with parts 1 and 3 succeeds 
(no error).
 # When only part 3 is uploaded, completing with part 3 succeeds (a sketch of 
these rules follows below).
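
A minimal sketch of these rules from the client's point of view, using the AWS 
SDK for Java v1 against an S3 endpoint (endpoint and credential configuration 
omitted; the bucket and key names are placeholders):

{code}
// Sketch, not the Ozone implementation: upload three parts, then complete
// with the subset {1, 3} in ascending order, which should succeed. Listing
// parts out of order should fail with InvalidPartOrder; referencing a part
// that was never uploaded should fail with InvalidPart.
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.*;

import java.io.ByteArrayInputStream;
import java.util.Arrays;

public class CompleteMpuOrdering {
  public static void main(String[] args) {
    AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
    String bucket = "test-bucket", key = "test-key";

    String uploadId = s3.initiateMultipartUpload(
        new InitiateMultipartUploadRequest(bucket, key)).getUploadId();

    PartETag p1 = uploadPart(s3, bucket, key, uploadId, 1);
    uploadPart(s3, bucket, key, uploadId, 2); // part 2 is left out on purpose
    PartETag p3 = uploadPart(s3, bucket, key, uploadId, 3);

    // Completing with parts {1, 3} in sorted order is accepted (rule 5).
    s3.completeMultipartUpload(new CompleteMultipartUploadRequest(
        bucket, key, uploadId, Arrays.asList(p1, p3)));
  }

  private static PartETag uploadPart(AmazonS3 s3, String bucket, String key,
      String uploadId, int partNumber) {
    byte[] data = new byte[5 * 1024 * 1024]; // 5 MB minimum for non-last parts
    return s3.uploadPart(new UploadPartRequest()
        .withBucketName(bucket).withKey(key).withUploadId(uploadId)
        .withPartNumber(partNumber)
        .withInputStream(new ByteArrayInputStream(data))
        .withPartSize(data.length)).getPartETag();
  }
}
{code}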






[jira] [Resolved] (HDDS-2355) Om double buffer flush termination with rocksdb error

2019-10-30 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-2355.
--
Resolution: Fixed

> Om double buffer flush termination with rocksdb error
> -
>
> Key: HDDS-2355
> URL: https://issues.apache.org/jira/browse/HDDS-2355
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>    Reporter: Bharat Viswanadham
>Assignee: Aravindan Vijayan
>Priority: Blocker
> Fix For: 0.5.0
>
>
> om_1    | java.io.IOException: Unable to write the batch.
> om_1    |   at org.apache.hadoop.hdds.utils.db.RDBBatchOperation.commit(RDBBatchOperation.java:48)
> om_1    |   at org.apache.hadoop.hdds.utils.db.RDBStore.commitBatchOperation(RDBStore.java:240)
> om_1    |   at org.apache.hadoop.ozone.om.ratis.OzoneManagerDoubleBuffer.flushTransactions(OzoneManagerDoubleBuffer.java:146)
> om_1    |   at java.base/java.lang.Thread.run(Thread.java:834)
> om_1    | Caused by: org.rocksdb.RocksDBException: WritePrepared/WriteUnprepared txn tag when write_after_commit_ is enabled (in default WriteCommitted mode). If it is not due to corruption, the WAL must be emptied before changing the WritePolicy.
> om_1    |   at org.rocksdb.RocksDB.write0(Native Method)
> om_1    |   at org.rocksdb.RocksDB.write(RocksDB.java:1421)
> om_1    |   at org.apache.hadoop.hdds.utils.db.RDBBatchOperation.commit(RDBBatchOperation.java:46)
> In a few of my test runs I see this error and OM is terminated.






[jira] [Resolved] (HDDS-2379) OM terminates with RocksDB error while continuously writing keys.

2019-10-30 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-2379.
--
Resolution: Fixed

> OM terminates with RocksDB error while continuously writing keys.
> -
>
> Key: HDDS-2379
> URL: https://issues.apache.org/jira/browse/HDDS-2379
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Aravindan Vijayan
>    Assignee: Bharat Viswanadham
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Exception trace after writing around 800,000 keys.
> {code}
> 2019-10-29 11:15:15,131 ERROR 
> org.apache.hadoop.ozone.om.ratis.OzoneManagerDoubleBuffer: Terminating with 
> exit status 1: During flush to DB encountered err
> or in OMDoubleBuffer flush thread OMDoubleBufferFlushThread
> java.io.IOException: Unable to write the batch.
> at 
> org.apache.hadoop.hdds.utils.db.RDBBatchOperation.commit(RDBBatchOperation.java:48)
> at 
> org.apache.hadoop.hdds.utils.db.RDBStore.commitBatchOperation(RDBStore.java:240)
> at 
> org.apache.hadoop.ozone.om.ratis.OzoneManagerDoubleBuffer.flushTransactions(OzoneManagerDoubleBuffer.java:146)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.rocksdb.RocksDBException: unknown WriteBatch tag
> at org.rocksdb.RocksDB.write0(Native Method)
> at org.rocksdb.RocksDB.write(RocksDB.java:1421)
> at 
> org.apache.hadoop.hdds.utils.db.RDBBatchOperation.commit(RDBBatchOperation.java:46)
> ... 3 more
> {code}
> Assigning to [~bharat] since he has already started work on this. 






[jira] [Created] (HDDS-2381) In ExcludeList, add if not exist only

2019-10-29 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-2381:


 Summary: In ExcludeList, add if not exist only
 Key: HDDS-2381
 URL: https://issues.apache.org/jira/browse/HDDS-2381
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


Created based on comment from [~chinseone] in HDDS-2356

https://issues.apache.org/jira/browse/HDDS-2356?focusedCommentId=16960796&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16960796
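
A hypothetical sketch of the intended behavior (the names here are 
illustrative, not the Ozone classes): back the list with a Set so a repeated 
failure does not add duplicate entries.

{code}
// Hypothetical sketch: Set.add is a no-op on duplicates, which gives
// "add if not exist" semantics for excluded datanodes/containers/pipelines.
import java.util.LinkedHashSet;
import java.util.Set;

public class ExcludeListSketch<T> {
  private final Set<T> excluded = new LinkedHashSet<>();

  public void exclude(T entry) {
    excluded.add(entry); // only added when not already present
  }
}
{code}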

 

 






[jira] [Resolved] (HDDS-2345) Add a UT for newly added clone() in OmBucketInfo

2019-10-28 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-2345.
--
Fix Version/s: 0.5.0
   Resolution: Fixed

> Add a UT for newly added clone() in OmBucketInfo
> 
>
> Key: HDDS-2345
> URL: https://issues.apache.org/jira/browse/HDDS-2345
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>    Reporter: Bharat Viswanadham
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: newbie, pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Add a UT for newly added clone() method in OMBucketInfo as part of HDDS-2333.






[jira] [Resolved] (HDDS-2361) Ozone Manager init & start command prints out unnecessary line in the beginning.

2019-10-26 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-2361.
--
Fix Version/s: 0.5.0
   Resolution: Fixed

> Ozone Manager init & start command prints out unnecessary line in the 
> beginning.
> 
>
> Key: HDDS-2361
> URL: https://issues.apache.org/jira/browse/HDDS-2361
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Aravindan Vijayan
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {code}
> [root@avijayan-om-1 ozone-0.5.0-SNAPSHOT]# bin/ozone --daemon start om
> Ozone Manager classpath extended by
> {code}
> We could probably print this line only when extra elements are added to OM 
> classpath or skip printing this line altogether.






[jira] [Created] (HDDS-2366) Remove ozone.enabled flag

2019-10-25 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-2366:


 Summary: Remove ozone.enabled flag
 Key: HDDS-2366
 URL: https://issues.apache.org/jira/browse/HDDS-2366
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Bharat Viswanadham


Currently, when Ozone is started, the start-ozone.sh/stop-ozone.sh scripts 
check whether this property is enabled before starting the Ozone services. 
This property and this check can now be removed.

They were needed when Ozone was part of Hadoop and we did not want to start 
the Ozone services by default.






[jira] [Resolved] (HDDS-2296) ozoneperf compose cluster shouldn't start freon by default

2019-10-25 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-2296.
--
Fix Version/s: 0.5.0
   Resolution: Fixed

> ozoneperf compose cluster shouldn't start freon by default
> -
>
> Key: HDDS-2296
> URL: https://issues.apache.org/jira/browse/HDDS-2296
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: docker
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> During the original creation of the compose/ozoneperf cluster we added an
> example freon execution to make it clear how the data can be generated. This
> freon process starts every time the ozoneperf cluster is started (usually I
> notice it when my CPU starts to use 100% of the available resources).
> Since the creation of this cluster definition we have implemented multiple
> types of freon tests and it's hard to predict which tests should be executed.
> I propose to remove the default execution of the random key generation but
> keep the opportunity to run any of the tests.






Re: [DISCUSS] Remove Ozone and Submarine from Hadoop repo

2019-10-24 Thread Bharat Viswanadham
+1

Thanks,
Bharat

On Thu, Oct 24, 2019 at 10:35 PM Jitendra Pandey
 wrote:

> +1
>
> On Thu, Oct 24, 2019 at 6:42 PM Ayush Saxena  wrote:
>
> > Thanx Akira for putting this up.
> > +1, Makes sense removing.
> >
> > -Ayush
> >
> > > On 25-Oct-2019, at 6:55 AM, Dinesh Chitlangia <
> dchitlan...@cloudera.com.invalid>
> > wrote:
> > >
> > > +1 and Anu's approach of creating a tag makes sense.
> > >
> > > Dinesh
> > >
> > >
> > >
> > >
> > >> On Thu, Oct 24, 2019 at 9:24 PM Sunil Govindan 
> > wrote:
> > >>
> > >> +1 on this to remove staleness.
> > >>
> > >> - Sunil
> > >>
> > >> On Thu, Oct 24, 2019 at 12:51 PM Akira Ajisaka 
> > >> wrote:
> > >>
> > >>> Hi folks,
> > >>>
> > >>> Both Ozone and Apache Submarine have separate repositories.
> > >>> Can we remove these modules from hadoop-trunk?
> > >>>
> > >>> Regards,
> > >>> Akira
> > >>>
> > >>
> >
> > -
> > To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
> > For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
> >
> >
>


[jira] [Resolved] (HDDS-1015) Cleanup snapshot repository settings

2019-10-24 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-1015.
--
Resolution: Won't Fix

> Cleanup snapshot repository settings
> 
>
> Key: HDDS-1015
> URL: https://issues.apache.org/jira/browse/HDDS-1015
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>    Reporter: Bharat Viswanadham
>    Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-1015.00.patch
>
>
> Now we can clean up the snapshot repository settings from hadoop-hdds/pom.xml
> and hadoop-ozone/pom.xml.
> Since we moved our dependencies from Hadoop 3.2.1-SNAPSHOT to 3.2.0 as part of
> HDDS-993, we no longer require them.






[jira] [Resolved] (HDDS-2360) Update Ratis snapshot to d6d58d0

2019-10-24 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-2360.
--
Fix Version/s: 0.5.0
   Resolution: Fixed

> Update Ratis snapshot to d6d58d0
> 
>
> Key: HDDS-2360
> URL: https://issues.apache.org/jira/browse/HDDS-2360
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Client, Ozone Datanode
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Update Ratis dependency version to snapshot 
> [d6d58d0|https://github.com/apache/incubator-ratis/commit/d6d58d0], to fix 
> memory issues (RATIS-726, RATIS-728).






[jira] [Resolved] (HDDS-2356) Multipart upload report errors while writing to ozone Ratis pipeline

2019-10-24 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-2356.
--
Resolution: Fixed

This will be fixed as part of HDDS-2322.

> Multipart upload report errors while writing to ozone Ratis pipeline
> 
>
> Key: HDDS-2356
> URL: https://issues.apache.org/jira/browse/HDDS-2356
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
> Environment: Env: 4 VMs in total: 3 Datanodes on 3 VMs, 1 OM & 1 SCM 
> on a separate VM
>Reporter: Li Cheng
>Assignee: Bharat Viswanadham
>Priority: Blocker
>
> Env: 4 VMs in total: 3 Datanodes on 3 VMs, 1 OM & 1 SCM on a separate VM, say 
> it's VM0.
> I use goofys as a FUSE client and enable the Ozone S3 gateway to mount Ozone
> to a path on VM0, reading data from VM0's local disk and writing to the mount
> path. The dataset has files of various sizes, from 0 bytes to GB-level, and
> contains around 50,000 files.
> The writing is slow (1 GB in ~10 mins) and it stops after around 4 GB. Looking
> at the hadoop-root-om-VM_50_210_centos.out log, I see OM throwing errors
> related to multipart upload. This error eventually causes the writing to
> terminate and OM to be shut down.
>  
> 2019-10-24 16:01:59,527 [OMDoubleBufferFlushThread] ERROR - Terminating with
> exit status 2: OMDoubleBuffer flush thread OMDoubleBufferFlushThread
> encountered Throwable error
> java.util.ConcurrentModificationException
>  at java.util.TreeMap.forEach(TreeMap.java:1004)
>  at 
> org.apache.hadoop.ozone.om.helpers.OmMultipartKeyInfo.getProto(OmMultipartKeyInfo.java:111)
>  at 
> org.apache.hadoop.ozone.om.codec.OmMultipartKeyInfoCodec.toPersistedFormat(OmMultipartKeyInfoCodec.java:38)
>  at 
> org.apache.hadoop.ozone.om.codec.OmMultipartKeyInfoCodec.toPersistedFormat(OmMultipartKeyInfoCodec.java:31)
>  at 
> org.apache.hadoop.hdds.utils.db.CodecRegistry.asRawData(CodecRegistry.java:68)
>  at 
> org.apache.hadoop.hdds.utils.db.TypedTable.putWithBatch(TypedTable.java:125)
>  at 
> org.apache.hadoop.ozone.om.response.s3.multipart.S3MultipartUploadCommitPartResponse.addToDBBatch(S3MultipartUploadCommitPartResponse.java:112)
>  at 
> org.apache.hadoop.ozone.om.ratis.OzoneManagerDoubleBuffer.lambda$flushTransactions$0(OzoneManagerDoubleBuffer.java:137)
>  at java.util.Iterator.forEachRemaining(Iterator.java:116)
>  at 
> org.apache.hadoop.ozone.om.ratis.OzoneManagerDoubleBuffer.flushTransactions(OzoneManagerDoubleBuffer.java:135)
>  at java.lang.Thread.run(Thread.java:745)
> 2019-10-24 16:01:59,629 [shutdown-hook-0] INFO - SHUTDOWN_MSG:






[jira] [Resolved] (HDDS-2297) Enable Opentracing for new Freon tests

2019-10-23 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-2297.
--
Fix Version/s: 0.5.0
   Resolution: Fixed

> Enable Opentracing for new Freon tests
> --
>
> Key: HDDS-2297
> URL: https://issues.apache.org/jira/browse/HDDS-2297
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: freon
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> HDDS-2022 introduced new freon tests, but the initial root span of
> opentracing is not created before the test execution. We need to enable
> opentracing to get a better view of the executions of the new freon tests.






[jira] [Created] (HDDS-2355) Om double buffer flush termination with rocksdb error

2019-10-23 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-2355:


 Summary: Om double buffer flush termination with rocksdb error
 Key: HDDS-2355
 URL: https://issues.apache.org/jira/browse/HDDS-2355
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Bharat Viswanadham


om_1    | java.io.IOException: Unable to write the batch.
om_1    |   at org.apache.hadoop.hdds.utils.db.RDBBatchOperation.commit(RDBBatchOperation.java:48)
om_1    |   at org.apache.hadoop.hdds.utils.db.RDBStore.commitBatchOperation(RDBStore.java:240)
om_1    |   at org.apache.hadoop.ozone.om.ratis.OzoneManagerDoubleBuffer.flushTransactions(OzoneManagerDoubleBuffer.java:146)
om_1    |   at java.base/java.lang.Thread.run(Thread.java:834)
om_1    | Caused by: org.rocksdb.RocksDBException: WritePrepared/WriteUnprepared txn tag when write_after_commit_ is enabled (in default WriteCommitted mode). If it is not due to corruption, the WAL must be emptied before changing the WritePolicy.
om_1    |   at org.rocksdb.RocksDB.write0(Native Method)
om_1    |   at org.rocksdb.RocksDB.write(RocksDB.java:1421)
om_1    |   at org.apache.hadoop.hdds.utils.db.RDBBatchOperation.commit(RDBBatchOperation.java:46)
In a few of my test runs I see this error and OM is terminated.






[jira] [Created] (HDDS-2354) SCM log is full of AllocateBlock logs

2019-10-23 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-2354:


 Summary: SCM log is full of AllocateBlock logs
 Key: HDDS-2354
 URL: https://issues.apache.org/jira/browse/HDDS-2354
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Bharat Viswanadham


2019-10-24 03:17:43,087 INFO server.SCMBlockProtocolServer: Allocating 1 blocks 
of size 268435456, with ExcludeList \{datanodes = [], containerIds = [], 
pipelineIds = []}

scm_1       | 2019-10-24 03:17:43,088 INFO server.SCMBlockProtocolServer: 
Allocating 1 blocks of size 268435456, with ExcludeList \{datanodes = [], 
containerIds = [], pipelineIds = []}

scm_1       | 2019-10-24 03:17:43,089 INFO server.SCMBlockProtocolServer: 
Allocating 1 blocks of size 268435456, with ExcludeList \{datanodes = [], 
containerIds = [], pipelineIds = []}

scm_1       | 2019-10-24 03:17:43,093 INFO server.SCMBlockProtocolServer: 
Allocating 1 blocks of size 268435456, with ExcludeList \{datanodes = [], 
containerIds = [], pipelineIds = []}
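
One possible fix (not necessarily the committed one) is to demote the per-call 
message from INFO to DEBUG, so steady-state allocation traffic does not flood 
the SCM log:

{code}
// Sketch: with slf4j parameterized logging at DEBUG, the message is neither
// formatted nor emitted in normal operation, but remains available when
// debugging block allocation.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class AllocateBlockLogging {
  private static final Logger LOG =
      LoggerFactory.getLogger(AllocateBlockLogging.class);

  void logAllocation(int num, long size, Object excludeList) {
    LOG.debug("Allocating {} blocks of size {}, with {}",
        num, size, excludeList);
  }
}
{code}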

 






[jira] [Created] (HDDS-2353) Cleanup old write-path code in OM

2019-10-23 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-2353:


 Summary: Cleanup old write-path code in OM
 Key: HDDS-2353
 URL: https://issues.apache.org/jira/browse/HDDS-2353
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Bharat Viswanadham


This Jira is to clean up the old write-path code in OM. The newly added 
request/response code is also used for non-HA, so the old code is no longer 
needed, and the integrated code has now been tested for a few days, which makes 
this a good time to remove it. Keeping the old code also causes trouble for 
patches that fix the write path, since they need to make every change in two 
places (changing a constructor, for example, requires updating both copies).






[jira] [Resolved] (HDDS-2131) Optimize replication type and creation time calculation in S3 MPU list call

2019-10-22 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-2131.
--
Fix Version/s: 0.5.0
   Resolution: Fixed

> Optimize replication type and creation time calculation in S3 MPU list call
> ---
>
> Key: HDDS-2131
> URL: https://issues.apache.org/jira/browse/HDDS-2131
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Marton Elek
>Assignee: Siddharth Wagle
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Based on the review from [~bharatviswa]:
> {code}
>  
> hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
>   metadataManager.getOpenKeyTable();
>   OmKeyInfo omKeyInfo =
>   openKeyTable.get(upload.getDbKey());
> {code}
> {quote}Here we are reading openKeyTable only for getting creation time. If we 
> can have this information in omMultipartKeyInfo, we could avoid DB calls for 
> openKeyTable.
> To do this, We can set creationTime in OmMultipartKeyInfo during 
> initiateMultipartUpload . In this way, we can get all the required 
> information from the MultipartKeyInfo table.
> And also StorageClass is missing from the returned OmMultipartUpload, as 
> listMultipartUploads shows StorageClass information. For this, if we can 
> return replicationType and depending on this value, we can set StorageClass 
> in the listMultipartUploads Response.
> {quote}






[jira] [Created] (HDDS-2345) Add a UT for newly added clone() in OmBucketInfo

2019-10-21 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-2345:


 Summary: Add a UT for newly added clone() in OmBucketInfo
 Key: HDDS-2345
 URL: https://issues.apache.org/jira/browse/HDDS-2345
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Bharat Viswanadham


Add a UT for newly added clone() method in OMBucketInfo as part of HDDS-2333.
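
A self-contained illustration of what the test should assert (a stand-in 
Bucket class is used here, since the exact OmBucketInfo builder API is not 
shown in this thread): mutating the original after clone() must not change the 
clone.

{code}
// Illustrative JUnit 4 test shape; 'Bucket' stands in for OmBucketInfo.
import static org.junit.Assert.assertEquals;

import java.util.ArrayList;
import java.util.List;
import org.junit.Test;

public class TestCloneIndependence {
  static final class Bucket implements Cloneable {
    final List<String> acls = new ArrayList<>();

    @Override
    public Bucket clone() {
      Bucket copy = new Bucket();
      copy.acls.addAll(this.acls); // deep-copy the mutable list
      return copy;
    }
  }

  @Test
  public void cloneIsIndependentOfOriginal() {
    Bucket original = new Bucket();
    original.acls.add("user:hadoop:rw");

    Bucket copy = original.clone();
    original.acls.add("user:ozone:rw"); // mutate the original afterwards

    assertEquals(1, copy.acls.size()); // the clone must be unaffected
  }
}
{code}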






[jira] [Created] (HDDS-2344) CLONE - Add immutable entries in to the DoubleBuffer for Volume requests.

2019-10-21 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-2344:


 Summary: CLONE - Add immutable entries in to the DoubleBuffer for 
Volume requests.
 Key: HDDS-2344
 URL: https://issues.apache.org/jira/browse/HDDS-2344
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham
 Fix For: 0.5.0


OMBucketCreateRequest.java L181:

omClientResponse =
 new OMBucketCreateResponse(omBucketInfo,
 omResponse.build());

 

We add this response to the double buffer, and the double-buffer flush thread 
running in the background picks it up, converts it to protobuf and then to a 
byte array, and writes it to the RocksDB tables. During this conversion (which 
is done without acquiring any lock), if any other request changes the internal 
structure of OmBucketInfo (such as the ACL list), we might get a 
ConcurrentModificationException; a sketch of the fix follows below.
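
A sketch of the fix direction, mirroring the snippet above (it assumes the 
clone() method on OmBucketInfo referenced elsewhere in this archive):

{code}
// Sketch: hand the double buffer its own deep copy so later mutations of
// the cached OmBucketInfo cannot race with the background serialization.
omClientResponse =
 new OMBucketCreateResponse(omBucketInfo.clone(),
 omResponse.build());
{code}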

 






[jira] [Resolved] (HDDS-2310) Add support to add ozone ranger plugin to Ozone Manager classpath

2019-10-21 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-2310.
--
Fix Version/s: 0.5.0
   Resolution: Fixed

> Add support to add ozone ranger plugin to Ozone Manager classpath
> -
>
> Key: HDDS-2310
> URL: https://issues.apache.org/jira/browse/HDDS-2310
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Manager
>Affects Versions: 0.5.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently, there is no way to add Ozone Ranger plugin to Ozone Manager 
> classpath. 
> We should be able to set an environment variable that will be respected by 
> ozone and added to Ozone Manager classpath.
>  
>  






[jira] [Resolved] (HDDS-2320) Negative value seen for OM NumKeys Metric in JMX.

2019-10-21 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-2320.
--
Fix Version/s: 0.5.0
   Resolution: Fixed

> Negative value seen for OM NumKeys Metric in JMX.
> -
>
> Key: HDDS-2320
> URL: https://issues.apache.org/jira/browse/HDDS-2320
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
> Attachments: Screen Shot 2019-10-17 at 11.31.08 AM.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> While running teragen/terasort on a cluster and verifying the number of keys
> created on Ozone Manager, I noticed that the value of the NumKeys counter
> metric was negative !Screen Shot 2019-10-17 at 11.31.08 AM.png!.






[jira] [Resolved] (HDDS-2340) Update RATIS snapshot version

2019-10-21 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-2340.
--
Resolution: Fixed

> Update RATIS snapshot version
> -
>
> Key: HDDS-2340
> URL: https://issues.apache.org/jira/browse/HDDS-2340
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 0.5.0
>Reporter: Siddharth Wagle
>Assignee: Siddharth Wagle
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Update RATIS version to incorporate fix that went into RATIS-707 among others.






[jira] [Created] (HDDS-2343) Add immutable entries in to the DoubleBuffer.

2019-10-21 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-2343:


 Summary: Add immutable entries in to the DoubleBuffer.
 Key: HDDS-2343
 URL: https://issues.apache.org/jira/browse/HDDS-2343
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


OMBucketCreateRequest.java L181:

omClientResponse =
 new OMBucketCreateResponse(omBucketInfo,
 omResponse.build());

 

We add this response to the double buffer, and the double-buffer flush thread 
running in the background picks it up, converts it to protobuf and then to a 
byte array, and writes it to the RocksDB tables. During this conversion, if any 
other request changes the internal structure of OmBucketInfo (such as the ACL 
list), we might get a ConcurrentModificationException.

 






[jira] [Resolved] (HDDS-2326) Http server of Freon is not started for new Freon tests

2019-10-20 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-2326.
--
Fix Version/s: 0.5.0
   Resolution: Fixed

> Http server of Freon is not started for new Freon tests
> ---
>
> Key: HDDS-2326
> URL: https://issues.apache.org/jira/browse/HDDS-2326
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>  Components: freon
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> HDDS-2022 introduced new Freon tests but the Freon http server is not started 
> for the new tests.
> Freon includes an HTTP server which can be turned on with the '--server'
> flag. It helps to monitor and profile Freon, as the HTTP server contains the
> Prometheus and profiler servlets by default.
> The server should be started if it's requested.






[jira] [Created] (HDDS-2333) Enable sync option for OM non-HA

2019-10-18 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-2333:


 Summary: Enable sync option for OM non-HA 
 Key: HDDS-2333
 URL: https://issues.apache.org/jira/browse/HDDS-2333
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


In OM non-HA, when the double buffer flushes, it should commit with sync turned 
on. Otherwise, on a power failure or system crash, operations already 
acknowledged by OM might be lost (in RocksDB with sync set to false, the flush 
is asynchronous and may not be persisted to the storage system).

In HA, this is not a problem because the guarantee is provided by Ratis and its 
logs.
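
A minimal sketch of the proposed non-HA commit using the RocksDB Java API: with 
sync=true, the WAL write is synced to storage before the call returns, so 
acknowledged operations survive a crash.

{code}
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;
import org.rocksdb.WriteBatch;
import org.rocksdb.WriteOptions;

public class SyncedCommit {
  // Commit a batch durably: the write is only acknowledged after the WAL
  // has been synced.
  static void commit(RocksDB db, WriteBatch batch) throws RocksDBException {
    try (WriteOptions opts = new WriteOptions().setSync(true)) {
      db.write(opts, batch);
    }
  }
}
{code}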






[jira] [Resolved] (HDDS-2318) Avoid proto::tostring in preconditions to save CPU cycles

2019-10-18 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-2318.
--
Fix Version/s: 0.5.0
   Resolution: Fixed

> Avoid proto::tostring in preconditions to save CPU cycles
> -
>
> Key: HDDS-2318
> URL: https://issues.apache.org/jira/browse/HDDS-2318
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Rajesh Balamohan
>    Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: performance, pull-request-available
> Fix For: 0.5.0
>
> Attachments: Screenshot 2019-10-17 at 6.10.22 PM.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> [https://github.com/apache/hadoop-ozone/blob/61f4aa30f502b34fd778d9b37b1168721abafb2f/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerProtocolServerSideTranslatorPB.java#L117]
>  
> This ends up converting the proto to a string in precondition checks and
> burns CPU cycles. {{request.toString()}} can instead be logged at debug level
> on a need basis.
>  
>  
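
A sketch of the pattern being fixed, using Guava Preconditions (the message 
string here is illustrative):

{code}
import com.google.common.base.Preconditions;

public class LazyPreconditionMessage {
  static void validate(boolean hasClientId, Object request) {
    // Eager: builds request.toString() on every invocation, pass or fail.
    //   Preconditions.checkArgument(hasClientId, "Missing clientId: " + request);

    // Lazy: the %s argument is only formatted when the check fails.
    Preconditions.checkArgument(hasClientId, "Missing clientId: %s", request);
  }
}
{code}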






[jira] [Created] (HDDS-2322) DoubleBuffer flush termination and OM is shutdown

2019-10-17 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-2322:


 Summary: DoubleBuffer flush termination and OM is shutdown
 Key: HDDS-2322
 URL: https://issues.apache.org/jira/browse/HDDS-2322
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Bharat Viswanadham


om1_1       | 2019-10-18 00:34:45,317 [OMDoubleBufferFlushThread] ERROR - Terminating with exit status 2: OMDoubleBuffer flush thread OMDoubleBufferFlushThread encountered Throwable error
om1_1       | java.util.ConcurrentModificationException
om1_1       |   at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1660)
om1_1       |   at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484)
om1_1       |   at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474)
om1_1       |   at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:913)
om1_1       |   at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
om1_1       |   at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:578)
om1_1       |   at org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfoGroup.getProtobuf(OmKeyLocationInfoGroup.java:65)
om1_1       |   at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195)
om1_1       |   at java.base/java.util.Collections$2.tryAdvance(Collections.java:4745)
om1_1       |   at java.base/java.util.Collections$2.forEachRemaining(Collections.java:4753)
om1_1       |   at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484)
om1_1       |   at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474)
om1_1       |   at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:913)
om1_1       |   at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
om1_1       |   at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:578)
om1_1       |   at org.apache.hadoop.ozone.om.helpers.OmKeyInfo.getProtobuf(OmKeyInfo.java:362)
om1_1       |   at org.apache.hadoop.ozone.om.codec.OmKeyInfoCodec.toPersistedFormat(OmKeyInfoCodec.java:37)
om1_1       |   at org.apache.hadoop.ozone.om.codec.OmKeyInfoCodec.toPersistedFormat(OmKeyInfoCodec.java:31)
om1_1       |   at org.apache.hadoop.hdds.utils.db.CodecRegistry.asRawData(CodecRegistry.java:68)
om1_1       |   at org.apache.hadoop.hdds.utils.db.TypedTable.putWithBatch(TypedTable.java:125)
om1_1       |   at org.apache.hadoop.ozone.om.response.key.OMKeyCreateResponse.addToDBBatch(OMKeyCreateResponse.java:58)
om1_1       |   at org.apache.hadoop.ozone.om.ratis.OzoneManagerDoubleBuffer.lambda$flushTransactions$0(OzoneManagerDoubleBuffer.java:139)
om1_1       |   at java.base/java.util.Iterator.forEachRemaining(Iterator.java:133)
om1_1       |   at org.apache.hadoop.ozone.om.ratis.OzoneManagerDoubleBuffer.flushTransactions(OzoneManagerDoubleBuffer.java:137)
om1_1       |   at java.base/java.lang.Thread.run(Thread.java:834)






[jira] [Created] (HDDS-2311) Fix logic in RetryPolicy in OzoneClientSideTranslatorPB

2019-10-15 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-2311:


 Summary: Fix logic in RetryPolicy in OzoneClientSideTranslatorPB
 Key: HDDS-2311
 URL: https://issues.apache.org/jira/browse/HDDS-2311
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Bharat Viswanadham


OzoneManagerProtocolClientSideTranslatorPB.java

L251: if (cause instanceof NotLeaderException) {
 NotLeaderException notLeaderException = (NotLeaderException) cause;
 omFailoverProxyProvider.performFailoverIfRequired(
 notLeaderException.getSuggestedLeaderNodeId());
 return getRetryAction(RetryAction.RETRY, retries, failovers);
 }

 

The suggested leader returned from the server is never used during failover, 
because the cause is actually a RemoteException wrapping the 
NotLeaderException, so the instanceof check above never matches. With the 
current code the suggested leader is ignored, and by default the client simply 
tries the maximum number of retries against each OM.
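
A sketch of the fix direction: unwrap the RemoteException before the instanceof 
check so the real cause (and its suggested leader) becomes visible.

{code}
// Sketch using org.apache.hadoop.ipc.RemoteException: unwrapRemoteException
// re-instantiates the wrapped exception class when possible, so the caller's
// instanceof NotLeaderException check can match.
import java.io.IOException;

import org.apache.hadoop.ipc.RemoteException;

public class RetryCauseUnwrap {
  static IOException unwrap(IOException cause) {
    if (cause instanceof RemoteException) {
      return ((RemoteException) cause).unwrapRemoteException();
    }
    return cause;
  }
}
{code}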

 






[jira] [Resolved] (HDDS-1230) Update OzoneServiceProvider in s3 gateway to handle OM ha

2019-10-10 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-1230.
--
Fix Version/s: 0.5.0
   Resolution: Fixed

Fixed as part of HDDS-2019.

> Update OzoneServiceProvider in s3 gateway to handle OM ha
> -
>
> Key: HDDS-1230
> URL: https://issues.apache.org/jira/browse/HDDS-1230
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Priority: Major
> Fix For: 0.5.0
>
>
> Update OzoneServiceProvider in s3 gateway to handle OM ha






[jira] [Resolved] (HDDS-1986) Fix listkeys API

2019-10-10 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-1986.
--
Fix Version/s: 0.5.0
   Resolution: Fixed

> Fix listkeys API
> 
>
> Key: HDDS-1986
> URL: https://issues.apache.org/jira/browse/HDDS-1986
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>    Reporter: Bharat Viswanadham
>    Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 7h 10m
>  Remaining Estimate: 0h
>
> This Jira is to fix listKeys API in HA code path.
> In HA, we have an in-memory cache: we put the result into the cache and
> return the response, and the double-buffer thread later picks it up and
> flushes it to disk. So listKeys should consult both the in-memory cache and
> the RocksDB key table when listing the keys in a bucket.
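
An illustrative sketch (simplified types, not the Ozone code) of overlaying 
the in-memory cache on the RocksDB key table so pending puts and deletes are 
both visible to the listing:

{code}
import java.util.Map;
import java.util.TreeMap;

public class ListKeysSketch {
  // A cache value of null models a delete marker that has not been flushed.
  static TreeMap<String, String> merged(
      TreeMap<String, String> dbTable, Map<String, String> cache) {
    TreeMap<String, String> view = new TreeMap<>(dbTable);
    cache.forEach((k, v) -> {
      if (v == null) {
        view.remove(k);  // deleted in cache, may still exist on disk
      } else {
        view.put(k, v);  // the cache wins over the on-disk version
      }
    });
    return view;         // iterate this for the bucket listing
  }
}
{code}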






[jira] [Created] (HDDS-2279) S3 commands not working on HA cluster

2019-10-09 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-2279:


 Summary: S3 commands not working on HA cluster
 Key: HDDS-2279
 URL: https://issues.apache.org/jira/browse/HDDS-2279
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


ozone s3 getSecret

ozone s3 path are not working on OM HA cluster

 

These commands do not take a URI as a parameter, and for the shell in HA, 
passing the URI is mandatory.






[jira] [Created] (HDDS-2278) Run S3 test suite on OM HA cluster

2019-10-09 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-2278:


 Summary: Run S3 test suite on OM HA cluster
 Key: HDDS-2278
 URL: https://issues.apache.org/jira/browse/HDDS-2278
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


This will add a new compose setup with 3 OMs and start SCM, S3G, and a Datanode.

Run the existing test suite against this new docker-compose cluster.






[jira] [Resolved] (HDDS-2191) Handle bucket create request in OzoneNativeAuthorizer

2019-10-09 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-2191.
--
Resolution: Fixed

> Handle bucket create request in OzoneNativeAuthorizer
> -
>
> Key: HDDS-2191
> URL: https://issues.apache.org/jira/browse/HDDS-2191
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Security
>Affects Versions: 0.5.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>
> OzoneNativeAuthorizer should handle bucket create request when the bucket 
> object is not yet created.






[jira] [Created] (HDDS-2269) Provide config for fair/non-fair for OM RW Lock

2019-10-08 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-2269:


 Summary: Provide config for fair/non-fair for OM RW Lock
 Key: HDDS-2269
 URL: https://issues.apache.org/jira/browse/HDDS-2269
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham


Provide config in OzoneManager Lock for fair/non-fair for OM RW Lock.
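
A minimal sketch using the JDK lock directly: ReentrantReadWriteLock takes the 
fairness flag at construction, so the config only needs to feed a boolean 
through.

{code}
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class OmLockFactory {
  // fair=true hands the lock to the longest-waiting thread (less starvation,
  // lower throughput); fair=false is the JDK default trade-off.
  static ReentrantReadWriteLock newLock(boolean fair) {
    return new ReentrantReadWriteLock(fair);
  }
}
{code}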






[jira] [Resolved] (HDDS-2244) Use new ReadWrite lock in OzoneManager

2019-10-08 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-2244.
--
Resolution: Fixed

> Use new ReadWrite lock in OzoneManager
> --
>
> Key: HDDS-2244
> URL: https://issues.apache.org/jira/browse/HDDS-2244
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>    Reporter: Bharat Viswanadham
>    Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> Use new ReadWriteLock added in HDDS-2223.






[jira] [Resolved] (HDDS-2260) Avoid evaluation of LOG.trace and LOG.debug statement in the read/write path (HDDS)

2019-10-08 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-2260.
--
Fix Version/s: 0.5.0
   Resolution: Fixed

> Avoid evaluation of LOG.trace and LOG.debug statement in the read/write path 
> (HDDS)
> ---
>
> Key: HDDS-2260
> URL: https://issues.apache.org/jira/browse/HDDS-2260
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client, Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Siddharth Wagle
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> Arguments to LOG.trace and LOG.debug statements are evaluated even when 
> debug/trace logging is disabled. This jira proposes wrapping all trace/debug 
> logging in LOG.isTraceEnabled/LOG.isDebugEnabled checks to prevent the 
> unnecessary evaluation.
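The proposed pattern, in a minimal standalone form (SLF4J); 
buildExpensiveDump() is an illustrative placeholder:

{code}
// Without a guard, the argument expression runs even when DEBUG is off:
LOG.debug("Container state: " + buildExpensiveDump());

// Guarded form: buildExpensiveDump() is evaluated only when DEBUG is on.
if (LOG.isDebugEnabled()) {
  LOG.debug("Container state: {}", buildExpensiveDump());
}
{code}

Note that with the {} placeholder form, the guard is only needed when an 
argument itself is expensive to compute; plain parameterized logging already 
avoids the string concatenation.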



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-2258) Fix checkstyle issues introduced by HDDS-2222

2019-10-04 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-2258.
--
Resolution: Duplicate

> Fix checkstyle issues introduced by HDDS-
> -
>
> Key: HDDS-2258
> URL: https://issues.apache.org/jira/browse/HDDS-2258
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-2244) Use new ReadWrite lock in OzoneManager

2019-10-03 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-2244:


 Summary: Use new ReadWrite lock in OzoneManager
 Key: HDDS-2244
 URL: https://issues.apache.org/jira/browse/HDDS-2244
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham
Assignee: Nanda kumar
 Fix For: 0.5.0


Currently {{LockManager}} uses an exclusive lock; it should support a 
{{ReadWrite}} lock instead.
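A rough sketch of the read/write split, with method names assumed for 
illustration (the actual lock API was added in HDDS-2223):

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical sketch: one read/write lock per resource, so concurrent
// readers of the same resource no longer block each other.
class LockManager<R> {
  private final Map<R, ReentrantReadWriteLock> locks = new ConcurrentHashMap<>();

  private ReentrantReadWriteLock lockFor(R resource) {
    return locks.computeIfAbsent(resource, r -> new ReentrantReadWriteLock());
  }

  void readLock(R resource)    { lockFor(resource).readLock().lock(); }
  void readUnlock(R resource)  { lockFor(resource).readLock().unlock(); }
  void writeLock(R resource)   { lockFor(resource).writeLock().lock(); }
  void writeUnlock(R resource) { lockFor(resource).writeLock().unlock(); }
}
{code}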



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-2239) Fix TestOzoneFsHAUrls

2019-10-03 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-2239:


 Summary: Fix TestOzoneFsHAUrls
 Key: HDDS-2239
 URL: https://issues.apache.org/jira/browse/HDDS-2239
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham


[https://github.com/elek/ozone-ci-q4/blob/master/pr/pr-hdds-2162-pj84x/integration/hadoop-ozone/ozonefs/org.apache.hadoop.fs.ozone.TestOzoneFsHAURLs.txt]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-2236) Remove default http-bind-host from ozone-default.xml

2019-10-02 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-2236.
--
Resolution: Not A Problem

Discussed offline with [~arp]; setting http.bind.host to 0.0.0.0 was an 
intentional choice so that, on multihomed environments and normal clusters 
alike, the server binds to all interfaces.

> Remove default http-bind-host from ozone-default.xml
> 
>
> Key: HDDS-2236
> URL: https://issues.apache.org/jira/browse/HDDS-2236
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>    Reporter: Bharat Viswanadham
>Priority: Major
>
> Right now, in the code to get HttpBindAddress:
> final Optional<String> bindHost =
>     getHostNameFromConfigKeys(conf, bindHostKey);
> final Optional<Integer> addressPort =
>     getPortNumberFromConfigKeys(conf, addressKey);
> final Optional<String> addressHost =
>     getHostNameFromConfigKeys(conf, addressKey);
> String hostName = bindHost.orElse(addressHost.orElse(bindHostDefault));
> return NetUtils.createSocketAddr(
>     hostName + ":" + addressPort.orElse(bindPortdefault));
> So even if http-address is configured with a hostname, the bind-host 
> (0.0.0.0) will still be used, because ozone-default.xml sets 
> http-bind-host to 0.0.0.0.
> Similarly, we need to delete the default 0.0.0.0 for recon, freon, and 
> datanode:
> <property>
>   <name>ozone.om.http-bind-host</name>
>   <value>0.0.0.0</value>
>   <tag>OM, MANAGEMENT</tag>
>   <description>
>     The actual address the OM web server will bind to. If this optional
>     address is set, it overrides only the hostname portion of
>     ozone.om.http-address.
>   </description>
> </property>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-2236) Remove default http-bind-host from ozone-default.xml

2019-10-02 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-2236:


 Summary: Remove default http-bind-host from ozone-default.xml
 Key: HDDS-2236
 URL: https://issues.apache.org/jira/browse/HDDS-2236
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham


Right now, in the code to get HttpBindAddress:

final Optional<String> bindHost =
    getHostNameFromConfigKeys(conf, bindHostKey);
final Optional<Integer> addressPort =
    getPortNumberFromConfigKeys(conf, addressKey);
final Optional<String> addressHost =
    getHostNameFromConfigKeys(conf, addressKey);
String hostName = bindHost.orElse(addressHost.orElse(bindHostDefault));
return NetUtils.createSocketAddr(
    hostName + ":" + addressPort.orElse(bindPortdefault));

So even if http-address is configured with a hostname, the bind-host (0.0.0.0) 
will still be used.

Similarly, we need to delete the default 0.0.0.0 for recon, freon, and 
datanode:

<property>
  <name>ozone.om.http-bind-host</name>
  <value>0.0.0.0</value>
  <tag>OM, MANAGEMENT</tag>
  <description>
    The actual address the OM web server will bind to. If this optional
    address is set, it overrides only the hostname portion of
    ozone.om.http-address.
  </description>
</property>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-2224) Fix loadup cache for cache cleanup policy NEVER

2019-10-01 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-2224:


 Summary: Fix loadup cache for cache cleanup policy NEVER
 Key: HDDS-2224
 URL: https://issues.apache.org/jira/browse/HDDS-2224
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


During initial startup/restart of the OM, if a table's cache cleanup policy is 
set to NEVER, we fill both the table cache and epochEntries. We do not need to 
add entries to epochEntries, since epochEntries is only used to evict entries 
from the cache once the double buffer flushes to disk.
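A minimal sketch of the intended distinction; the type and field names here 
are assumptions for illustration:

{code}
// Hypothetical sketch: entries loaded at startup are already persisted,
// so epoch bookkeeping (used only to evict entries after a double-buffer
// flush) can be skipped when the cleanup policy is NEVER.
void loadFromDB(CacheKey key, CacheValue value) {
  cache.put(key, value);
  if (cleanupPolicy != CacheCleanupPolicy.NEVER) {
    epochEntries.add(new EpochEntry(value.getEpoch(), key));
  }
}
{code}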



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-2193) Adding container related metrics in SCM

2019-09-26 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-2193:


 Summary: Adding container related metrics in SCM
 Key: HDDS-2193
 URL: https://issues.apache.org/jira/browse/HDDS-2193
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: SCM
Reporter: Bharat Viswanadham
Assignee: Supratim Deka


This jira aims to add more container-related metrics to SCM.
The following metrics will be added as part of this jira (a sketch of how 
they could be exposed follows the list):

* Number of containers
* Number of open containers
* Number of closed containers
* Number of quasi closed containers
* Number of closing containers
* Number of successful create container calls
* Number of failed create container calls
* Number of successful delete container calls
* Number of failed delete container calls
* Number of successful container report processing
* Number of failed container report processing
* Number of successful incremental container report processing
* Number of failed incremental container report processing
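For illustration, a rough sketch of how such counters are typically exposed 
through Hadoop metrics2; the class and field names are assumptions, and only 
two of the counters above are shown:

{code}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.MutableCounterLong;

// Hypothetical sketch of an SCM container metrics class.
@Metrics(about = "SCM container metrics", context = "ozone")
public class SCMContainerMetrics {
  private @Metric MutableCounterLong numSuccessfulCreateContainers;
  private @Metric MutableCounterLong numFailedCreateContainers;

  public void incNumSuccessfulCreateContainers() {
    numSuccessfulCreateContainers.incr();
  }

  public void incNumFailedCreateContainers() {
    numFailedCreateContainers.incr();
  }
}
{code}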




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-2168) TestOzoneManagerDoubleBufferWithOMResponse sometimes fails with out of memory error

2019-09-24 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-2168.
--
Fix Version/s: 0.5.0
   Resolution: Fixed

> TestOzoneManagerDoubleBufferWithOMResponse sometimes fails with out of memory 
> error
> ---
>
> Key: HDDS-2168
> URL: https://issues.apache.org/jira/browse/HDDS-2168
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> testDoubleBuffer() in TestOzoneManagerDoubleBufferWithOMResponse fails 
> intermittently with OutOfMemoryError on dev machines.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-2163) Add "Replication factor" to the output of list keys

2019-09-21 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-2163.
--
Fix Version/s: 0.5.0
   Resolution: Fixed

> Add "Replication factor" to the output of list keys 
> 
>
> Key: HDDS-2163
> URL: https://issues.apache.org/jira/browse/HDDS-2163
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone CLI
>Affects Versions: 0.4.1
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> The output of "ozone sh key list /vol1/bucket1" does not include the 
> replication factor; it would be good to have it in the output.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-2162) Make KeyTab configuration support HA config

2019-09-20 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-2162:


 Summary: Make KeyTab configuration support HA config
 Key: HDDS-2162
 URL: https://issues.apache.org/jira/browse/HDDS-2162
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


To have a single configuration that can be used across the OM cluster, a few 
of the configs, namely

OZONE_OM_KERBEROS_KEYTAB_FILE_KEY,
OZONE_OM_KERBEROS_PRINCIPAL_KEY,
OZONE_OM_HTTP_KERBEROS_KEYTAB_FILE, and
OZONE_OM_HTTP_KERBEROS_PRINCIPAL_KEY,

need to support key names suffixed with the service id and node id.

This Jira is to fix the above configs.
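For illustration, a rough sketch of the key-resolution pattern; the helper 
and the fallback behavior are assumptions, not the committed change:

{code}
// Hypothetical sketch: prefer "key.<serviceId>.<nodeId>" when HA ids are
// configured, falling back to the plain key otherwise.
static String addSuffix(String key, String serviceId, String nodeId) {
  if (serviceId == null || nodeId == null) {
    return key;
  }
  return key + "." + serviceId + "." + nodeId;
}

String keytab = conf.get(
    addSuffix(OZONE_OM_KERBEROS_KEYTAB_FILE_KEY, serviceId, nodeId),
    conf.get(OZONE_OM_KERBEROS_KEYTAB_FILE_KEY));
{code}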

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-2121) Create a shaded ozone filesystem (client) jar

2019-09-18 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-2121.
--
Fix Version/s: 0.5.0
   Resolution: Fixed

> Create a shaded ozone filesystem (client) jar
> -
>
> Key: HDDS-2121
> URL: https://issues.apache.org/jira/browse/HDDS-2121
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Reporter: Arpit Agarwal
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> We need a shaded Ozonefs jar that does not include Hadoop ecosystem 
> components (Hadoop, HDFS, Ratis, Zookeeper).
> A common expected use case for Ozone is Hadoop clients (3.2.0 and later) 
> wanting to access Ozone via the Ozone Filesystem interface. For these 
> clients, we want to add the Ozone file system jar to the classpath; however, 
> we want the Hadoop ecosystem dependencies to be `provided` and already 
> expected to be on the client classpath.
> Note that this is different from the legacy jar, which bundles a shaded 
> Hadoop 3.2.0.
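For illustration, one way to express this with the maven-shade-plugin; the 
exclude list below is an assumption, not the committed pom:

{code}
<!-- Hypothetical sketch: keep Hadoop ecosystem artifacts out of the
     shaded jar so the client's own (provided) copies are used. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <artifactSet>
          <excludes>
            <exclude>org.apache.hadoop:*</exclude>
            <exclude>org.apache.ratis:*</exclude>
            <exclude>org.apache.zookeeper:*</exclude>
          </excludes>
        </artifactSet>
      </configuration>
    </execution>
  </executions>
</plugin>
{code}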



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-2139) Update BeanUtils and Jackson Databind dependency versions

2019-09-17 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-2139.
--
Fix Version/s: 0.5.0
   Resolution: Fixed

Thank You [~hanishakoneru] for the contribution.

I have committed this to the trunk.

> Update BeanUtils and Jackson Databind dependency versions
> -
>
> Key: HDDS-2139
> URL: https://issues.apache.org/jira/browse/HDDS-2139
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The following Ozone dependencies have known security vulnerabilities. We 
> should update them to newer/ latest versions.
>  * Apache Common BeanUtils version 1.9.3
>  * Fasterxml Jackson version 2.9.5



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-2144) MR job failing on secure Ozone cluster

2019-09-17 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-2144:


 Summary: MR job failing on secure Ozone cluster
 Key: HDDS-2144
 URL: https://issues.apache.org/jira/browse/HDDS-2144
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham


Failing with the below error:
Caused by: Client cannot authenticate via:[TOKEN, KERBEROS]
org.apache.hadoop.security.AccessControlException: Client cannot authenticate 
via:[TOKEN, KERBEROS]
at 
org.apache.hadoop.security.SaslRpcClient.selectSaslClient(SaslRpcClient.java:173)
at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:390)
at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:617)
at org.apache.hadoop.ipc.Client$Connection.access$2300(Client.java:411)
at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:804)
at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:800)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:800)
at org.apache.hadoop.ipc.Client$Connection.access$3700(Client.java:411)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1572)
at org.apache.hadoop.ipc.Client.call(Client.java:1403)
at org.apache.hadoop.ipc.Client.call(Client.java:1367)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy79.submitRequest(Unknown Source)
at sun.reflect.GeneratedMethodAccessor17.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
at com.sun.proxy.$Proxy79.submitRequest(Unknown Source)
at 
org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.submitRequest(OzoneManagerProtocolClientSideTranslatorPB.java:332)
at 
org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.getServiceList(OzoneManagerProtocolClientSideTranslatorPB.java:1163)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hdds.tracing.TraceAllMethod.invoke(TraceAllMethod.java:66)
at com.sun.proxy.$Proxy80.getServiceList(Unknown Source)
at 
org.apache.hadoop.ozone.client.rpc.RpcClient.getScmAddressForClient(RpcClient.java:248)
at org.apache.hadoop.ozone.client.rpc.RpcClient.<init>(RpcClient.java:167)
at 
org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:256)
at 
org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:239)
at 
org.apache.hadoop.ozone.client.OzoneClientFactory.getRpcClient(OzoneClientFactory.java:203)
at 
org.apache.hadoop.fs.ozone.BasicOzoneClientAdapterImpl.<init>(BasicOzoneClientAdapterImpl.java:161)
at 
org.apache.hadoop.fs.ozone.OzoneClientAdapterImpl.<init>(OzoneClientAdapterImpl.java:50)
at 
org.apache.hadoop.fs.ozone.OzoneFileSystem.createAdapter(OzoneFileSystem.java:102)
at 
org.apache.hadoop.fs.ozone.BasicOzoneFileSystem.initialize(BasicOzoneFileSystem.java:155)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3303)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3352)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3320)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:479)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
at org.apache.hadoop.yarn.util.FSDownload.verifyAndCopy(FSDownload.java:268)
at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:67)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:414)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:411)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)

[jira] [Created] (HDDS-2143) Rename classes under package org.apache.hadoop.utils

2019-09-17 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-2143:


 Summary: Rename classes under package org.apache.hadoop.utils
 Key: HDDS-2143
 URL: https://issues.apache.org/jira/browse/HDDS-2143
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
 Environment: Rename classes under package org.apache.hadoop.utils -> 
org.apache.hadoop.hdds.utils in hadoop-hdds-common.

With the current layout, these classes might collide with Hadoop classes of 
the same name.
Reporter: Bharat Viswanadham






--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-2098) Ozone shell command prints out ERROR when the log4j file is not present.

2019-09-16 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-2098.
--
Resolution: Fixed

> Ozone shell command prints out ERROR when the log4j file is not present.
> 
>
> Key: HDDS-2098
> URL: https://issues.apache.org/jira/browse/HDDS-2098
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone CLI
>Affects Versions: 0.5.0
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> *Exception Trace*
> {code}
> log4j:ERROR Could not read configuration file from URL 
> [file:/etc/ozone/conf/ozone-shell-log4j.properties].
> java.io.FileNotFoundException: /etc/ozone/conf/ozone-shell-log4j.properties 
> (No such file or directory)
>   at java.io.FileInputStream.open0(Native Method)
>   at java.io.FileInputStream.open(FileInputStream.java:195)
>   at java.io.FileInputStream.<init>(FileInputStream.java:138)
>   at java.io.FileInputStream.<init>(FileInputStream.java:93)
>   at 
> sun.net.www.protocol.file.FileURLConnection.connect(FileURLConnection.java:90)
>   at 
> sun.net.www.protocol.file.FileURLConnection.getInputStream(FileURLConnection.java:188)
>   at 
> org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:557)
>   at 
> org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
>   at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
>   at org.slf4j.impl.Log4jLoggerFactory.<init>(Log4jLoggerFactory.java:66)
>   at org.slf4j.impl.StaticLoggerBinder.<init>(StaticLoggerBinder.java:72)
>   at 
> org.slf4j.impl.StaticLoggerBinder.<clinit>(StaticLoggerBinder.java:45)
>   at org.slf4j.LoggerFactory.bind(LoggerFactory.java:150)
>   at org.slf4j.LoggerFactory.performInitialization(LoggerFactory.java:124)
>   at org.slf4j.LoggerFactory.getILoggerFactory(LoggerFactory.java:412)
>   at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:357)
>   at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:383)
>   at org.apache.hadoop.ozone.web.ozShell.Shell.<clinit>(Shell.java:35)
> log4j:ERROR Ignoring configuration file 
> [file:/etc/ozone/conf/ozone-shell-log4j.properties].
> log4j:WARN No appenders could be found for logger 
> (io.jaegertracing.thrift.internal.senders.ThriftSenderFactory).
> log4j:WARN Please initialize the log4j system properly.
> log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
> info.
> {
>   "metadata" : { },
>   "name" : "vol-test-putfile-1567740142",
>   "admin" : "root",
>   "owner" : "root",
>   "creationTime" : 1567740146501,
>   "acls" : [ {
> "type" : "USER",
> "name" : "root",
> "aclScope" : "ACCESS",
> "aclList" : [ "ALL" ]
>   }, {
> "type" : "GROUP",
> "name" : "root",
> "aclScope" : "ACCESS",
> "aclList" : [ "ALL" ]
>   } ],
>   "quota" : 1152921504606846976
> }
> {code}
> *Fix*
> When the log4j properties file is not present, logging should default to 
> the console.
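One way to get the proposed fallback, sketched with log4j 1.x APIs; the exact 
fix may differ:

{code}
import java.io.File;
import org.apache.log4j.BasicConfigurator;
import org.apache.log4j.PropertyConfigurator;

// Hypothetical sketch: fall back to a console appender when the
// configured properties file is missing.
String log4jPath = "/etc/ozone/conf/ozone-shell-log4j.properties";
if (new File(log4jPath).exists()) {
  PropertyConfigurator.configure(log4jPath);
} else {
  BasicConfigurator.configure(); // attaches a console appender to root
}
{code}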



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-2007) Make ozone fs shell command work with OM HA service ids

2019-09-13 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-2007.
--
Fix Version/s: 0.5.0
   Resolution: Fixed

> Make ozone fs shell command work with OM HA service ids
> ---
>
> Key: HDDS-2007
> URL: https://issues.apache.org/jira/browse/HDDS-2007
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Client
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 6h 50m
>  Remaining Estimate: 0h
>
> Build an HDFS HA-like nameservice for OM HA so that the Ozone client can 
> access an Ozone HA cluster with ease.
> The majority of the work is already done in HDDS-972. The problem is that 
> the client would crash if more than one service id 
> (ozone.om.service.ids) is configured in ozone-site.xml. This needs to be 
> addressed on the client side.
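For illustration, the kind of HA client configuration this enables; the 
service id, node names, and addresses below are example values:

{code}
<property>
  <name>ozone.om.service.ids</name>
  <value>omservice</value>
</property>
<property>
  <name>ozone.om.nodes.omservice</name>
  <value>om1,om2,om3</value>
</property>
<property>
  <name>ozone.om.address.omservice.om1</name>
  <value>om1-host:9862</value>
</property>
{code}

With that in place, a path such as o3fs://bucket.volume.omservice/ resolves 
through the service id rather than a single OM host.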



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-2087) Remove the hard coded config key in ChunkManager

2019-09-08 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-2087.
--
Fix Version/s: 0.5.0
   Resolution: Fixed

> Remove the hard coded config key in ChunkManager
> 
>
> Key: HDDS-2087
> URL: https://issues.apache.org/jira/browse/HDDS-2087
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Anu Engineer
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> We have a hard-coded config key in {{ChunkManagerFactory.java}}:
> {code}
> boolean scrubber = config.getBoolean(
>  "hdds.containerscrub.enabled",
>  false);
> {code}
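The usual remedy, sketched with assumed constant names:

{code}
// Hypothetical sketch: move the literal into a config-keys class so the
// key and its default live in one place.
public static final String HDDS_CONTAINER_SCRUB_ENABLED =
    "hdds.containerscrub.enabled";
public static final boolean HDDS_CONTAINER_SCRUB_ENABLED_DEFAULT = false;

boolean scrubber = config.getBoolean(
    HDDS_CONTAINER_SCRUB_ENABLED,
    HDDS_CONTAINER_SCRUB_ENABLED_DEFAULT);
{code}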



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-2080) Document details regarding how to implement write request in OzoneManager

2019-09-03 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-2080:


 Summary: Document details regarding how to implement write request 
in OzoneManager
 Key: HDDS-2080
 URL: https://issues.apache.org/jira/browse/HDDS-2080
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham






--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-2079) Fix TestSecureOzoneManager

2019-09-03 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-2079:


 Summary: Fix TestSecureOzoneManager
 Key: HDDS-2079
 URL: https://issues.apache.org/jira/browse/HDDS-2079
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


[https://github.com/elek/ozone-ci/blob/master/pr/pr-hdds-1909-plfbr/integration/hadoop-ozone/integration-test/org.apache.hadoop.ozone.TestSecureOzoneCluster.txt]



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-2078) Fix TestSecureOzoneCluster

2019-09-03 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-2078:


 Summary: Fix TestSecureOzoneCluster
 Key: HDDS-2078
 URL: https://issues.apache.org/jira/browse/HDDS-2078
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham






--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-1941) Unused executor in SimpleContainerDownloader

2019-08-28 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-1941.
--
Fix Version/s: 0.5.0
   Resolution: Fixed

> Unused executor in SimpleContainerDownloader
> 
>
> Key: HDDS-1941
> URL: https://issues.apache.org/jira/browse/HDDS-1941
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> {{SimpleContainerDownloader}} has an {{executor}} that's created and shut 
> down, but never used.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-2038) Add Auditlog for ACL operations

2019-08-26 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-2038:


 Summary: Add Auditlog for ACL operations
 Key: HDDS-2038
 URL: https://issues.apache.org/jira/browse/HDDS-2038
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham


This is to add audit log support for ACL operations in the HA code path.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Reopened] (HDDS-1975) Implement default acls for bucket/volume/key for OM HA code

2019-08-23 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham reopened HDDS-1975:
--

> Implement default acls for bucket/volume/key for OM HA code
> ---
>
> Key: HDDS-1975
> URL: https://issues.apache.org/jira/browse/HDDS-1975
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>    Reporter: Bharat Viswanadham
>    Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> This Jira is to implement default ACLs for Ozone volume/bucket/key.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org


