[jira] [Created] (HDFS-13750) RBF: Router ID in RouterRpcClient is always null

2018-07-19 Thread Takanobu Asanuma (JIRA)
Takanobu Asanuma created HDFS-13750:
---

 Summary: RBF: Router ID in RouterRpcClient is always null
 Key: HDFS-13750
 URL: https://issues.apache.org/jira/browse/HDFS-13750
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Takanobu Asanuma
Assignee: Takanobu Asanuma






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: Hadoop 3.2 Release Plan proposal

2018-07-19 Thread Sunil G
Thanks Subru for the thoughts.
One of the main reasons for a major release is to push critical features
out to users at a faster cadence. If we keep pulling more and more
different types of features into a minor release, that branch becomes
destabilized, and it may be hard to say that 3.1.2 is more stable than
3.1.1, for example. We always tend to improve and stabilize features in
subsequent minor releases.
For some companies, it makes sense to push these new features out faster to
reach users sooner. On the backporting issues, I agree that it's a pain,
and we can work around it with some git scripts. If we make such scripts
available to committers, backporting will be seamless across branches and
we can also achieve the faster release cadence.

Thoughts?

- Sunil


On Fri, Jul 20, 2018 at 3:37 AM Subru Krishnan  wrote:

> Thanks Sunil for volunteering to lead the release effort. I am generally
> supportive of a release, but -1 on a 3.2 (I would prefer a 3.1.x), as I
> feel we already have too many branches to maintain. I already see many
> commits in different branches with no apparent rationale; e.g., 3.1 has
> commits that are absent in 3.0.
>
> Additionally, AFAIK 3.x has not been deployed in any major production
> setting, so the cost of adding features should be minimal.
>
> Thoughts?
>
> -Subru
>
> On Thu, Jul 19, 2018 at 12:31 AM, Sunil G  wrote:
>
> > Thanks Steve, Aaron, Wangda for sharing thoughts.
> >
> > Yes, important changes and features are much needed, hence we will be
> > keeping the door open for them as much as possible. Also, considering a
> > few more offline requests from other folks, I think extending the
> > timeframe by a couple of weeks makes sense (including a second RC
> > buffer), and this should ideally help us ship this by September.
> >
> > Revised dates (I will be updating same in Roadmap wiki as well)
> >
> > - Feature freeze date : all features to merge by August 21, 2018.
> >
> > - Code freeze date : blockers/critical only, no improvements and non
> > blocker/critical
> >
> > bug-fixes  August 31, 2018.
> >
> > - Release date: September 15, 2018
> >
> > Thanks Eric and Zian, I think Wangda has already answered your questions.
> >
> > Thanks
> > Sunil
> >
> >
> > On Thu, Jul 19, 2018 at 12:13 PM Wangda Tan  wrote:
> >
> > > Thanks Sunil for volunteering to be RM of 3.2 release, +1 for that.
> > >
> > > To concerns from Steve,
> > >
> > > It is a good idea to keep the door open to get important changes /
> > > features in before the cutoff. I would prefer to keep the proposed
> > > release date to make sure things happen earlier rather than at the
> > > last minute, and we all know that releases always get delayed :). I'm
> > > also fine if we want to take another few weeks.
> > >
> > > Regarding the 3.3 release, I would suggest doing that before
> > > Thanksgiving. Do you think that is good, or too early / late?
> > >
> > > Eric,
> > >
> > > YARN-8220 will be replaced by YARN-8135; if YARN-8135 can get merged
> > > in time, we probably won't need YARN-8220.
> > >
> > > Sunil,
> > >
> > > Could you update
> > > https://cwiki.apache.org/confluence/display/HADOOP/Roadmap
> > > with the proposed plan as well? We can fill in the feature list first
> > > before getting consensus on timing.
> > >
> > > Thanks,
> > > Wangda
> > >
> > > On Wed, Jul 18, 2018 at 6:20 PM Aaron Fabbri
>  > >
> > > wrote:
> > >
> > >> On Tue, Jul 17, 2018 at 7:21 PM Steve Loughran <
> ste...@hortonworks.com>
> > >> wrote:
> > >>
> > >> >
> > >> >
> > >> > On 16 Jul 2018, at 23:45, Sunil G  > >> > sun...@apache.org>> wrote:
> > >> >
> > >> > I would also like to take this opportunity to come up with a
> > >> > detailed plan.
> > >> >
> > >> > - Feature freeze date : all features should be merged by August 10,
> > >> 2018.
> > >> >
> > >> >
> > >> >
> > >> > 
> > >>
> > >> >
> > >> > Please let me know if I missed any features targeted to 3.2 per this
> > >> >
> > >> >
> > >> > Well, there are these big todo lists for S3 & S3Guard.
> > >> >
> > >> > https://issues.apache.org/jira/browse/HADOOP-15226
> > >> > https://issues.apache.org/jira/browse/HADOOP-15220
> > >> >
> > >> >
> > >> > There's a bigger bit of work coming on for Azure Datalake Gen 2
> > >> > https://issues.apache.org/jira/browse/HADOOP-15407
> > >> >
> > >> > I don't think this is quite ready yet; I've been doing work on it,
> > >> > but if we have a 3-week deadline, I'm going to expect some timely
> > >> > reviews on https://issues.apache.org/jira/browse/HADOOP-15546
> > >> >
> > >> > I've uprated that to a blocker feature; will review the S3 & S3Guard
> > >> JIRAs
> > >> > to see which of those are blocking. Then there are some pressing
> > >> > "guava, java 9 prep"
> > >> >
> > >> >
> > >>  I can help with this part if you like.
> > >>
> > >>
> > >>
> > >> >
> > >> >
> > >> >
> > >> > timeline. I would like to volunteer myself as release manager of
> > >> > the 3.2.0 release.
> > >> >
> 

[jira] [Created] (HDDS-271) Create a block iterator to iterate blocks in a container

2018-07-19 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-271:
---

 Summary: Create a block iterator to iterate blocks in a container
 Key: HDDS-271
 URL: https://issues.apache.org/jira/browse/HDDS-271
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Bharat Viswanadham


Create a block iterator to scan all blocks in a container.

This will be useful during implementation of the container scanner.
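To make the idea concrete, here is a minimal, hypothetical sketch of such a block iterator in Java. The class and method names are illustrative only (not the HDDS-271 API), a plain list of block IDs stands in for the container's block store, and a filter predicate models selecting a subset of blocks (e.g. only blocks pending deletion) during a scan.

```java
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;
import java.util.function.Predicate;

// Hypothetical sketch of a container block iterator (names are illustrative).
class BlockIteratorSketch implements Iterator<String> {
    private final Iterator<String> inner;   // stands in for a container DB cursor
    private final Predicate<String> filter; // e.g. "only blocks pending deletion"
    private String next;

    BlockIteratorSketch(List<String> blockIds, Predicate<String> filter) {
        this.inner = blockIds.iterator();
        this.filter = filter;
        advance();
    }

    // Move 'next' to the next block that passes the filter, or null at the end.
    private void advance() {
        next = null;
        while (inner.hasNext()) {
            String candidate = inner.next();
            if (filter.test(candidate)) {
                next = candidate;
                break;
            }
        }
    }

    @Override
    public boolean hasNext() {
        return next != null;
    }

    @Override
    public String next() {
        if (next == null) {
            throw new NoSuchElementException();
        }
        String result = next;
        advance();
        return result;
    }
}
```

A container scanner could then walk every block with a trivial `while (it.hasNext())` loop without knowing how block metadata is stored.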






Apache Hadoop qbt Report: trunk+JDK8 on Windows/x64

2018-07-19 Thread Apache Jenkins Server
For more details, see https://builds.apache.org/job/hadoop-trunk-win/532/

[Jul 18, 2018 6:05:42 PM] (xyao) HDDS-207. ozone listVolume command accepts 
random values as argument.
[Jul 18, 2018 6:46:26 PM] (xyao) HDDS-255. Fix TestOzoneConfigurationFields for 
missing
[Jul 19, 2018 12:09:43 AM] (eyang) HADOOP-15610.  Fixed pylint version for 
Hadoop docker image.




-1 overall


The following subsystems voted -1:
compile mvninstall pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h 00m 00s)
unit


Specific tests:

Failed junit tests :

   hadoop.crypto.TestCryptoStreamsWithOpensslAesCtrCryptoCodec 
   hadoop.fs.contract.rawlocal.TestRawlocalContractAppend 
   hadoop.fs.TestFileUtil 
   hadoop.fs.TestFsShellCopy 
   hadoop.fs.TestFsShellList 
   hadoop.http.TestHttpServer 
   hadoop.http.TestHttpServerLogs 
   hadoop.io.nativeio.TestNativeIO 
   hadoop.ipc.TestSocketFactory 
   hadoop.log.TestLogLevel 
   hadoop.metrics2.impl.TestStatsDMetrics 
   hadoop.security.TestSecurityUtil 
   hadoop.security.TestShellBasedUnixGroupsMapping 
   hadoop.security.token.delegation.TestZKDelegationTokenSecretManager 
   hadoop.security.token.TestDtUtilShell 
   hadoop.util.TestDiskCheckerWithDiskIo 
   hadoop.util.TestNativeCodeLoader 
   hadoop.fs.viewfs.TestViewFileSystemWithAcls 
   hadoop.hdfs.crypto.TestHdfsCryptoStreams 
   hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer 
   hadoop.hdfs.qjournal.client.TestQJMWithFaults 
   hadoop.hdfs.qjournal.server.TestJournalNode 
   hadoop.hdfs.qjournal.server.TestJournalNodeSync 
   hadoop.hdfs.qjournal.TestMiniJournalCluster 
   hadoop.hdfs.server.balancer.TestBalancer 
   hadoop.hdfs.server.balancer.TestBalancerRPCDelay 
   hadoop.hdfs.server.balancer.TestBalancerWithEncryptedTransfer 
   hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes 
   hadoop.hdfs.server.blockmanagement.TestBlockManager 
   hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks 
   hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped 
   
hadoop.hdfs.server.blockmanagement.TestReconstructStripedBlocksWithRackAwareness
 
   hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistPolicy 
   
hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement 
   
hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestProvidedImpl 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestSpaceReservation 
   hadoop.hdfs.server.datanode.TestBlockPoolSliceStorage 
   hadoop.hdfs.server.datanode.TestBlockScanner 
   hadoop.hdfs.server.datanode.TestCachingStrategy 
   hadoop.hdfs.server.datanode.TestDataNodeFaultInjector 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.server.datanode.TestDirectoryScanner 
   hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand 
   hadoop.hdfs.server.diskbalancer.TestDiskBalancer 
   hadoop.hdfs.server.diskbalancer.TestDiskBalancerRPC 
   hadoop.hdfs.server.mover.TestMover 
   hadoop.hdfs.server.mover.TestStorageMover 
   hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA 
   hadoop.hdfs.server.namenode.ha.TestFailoverWithBlockTokensEnabled 
   hadoop.hdfs.server.namenode.ha.TestHAAppend 
   hadoop.hdfs.server.namenode.ha.TestHASafeMode 
   hadoop.hdfs.server.namenode.ha.TestHAStateTransitions 
   hadoop.hdfs.server.namenode.ha.TestPipelinesFailover 
   hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA 
   hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints 
   hadoop.hdfs.server.namenode.ha.TestXAttrsWithHA 
   hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics 
   hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots 
   hadoop.hdfs.server.namenode.snapshot.TestSnapRootDescendantDiff 
   hadoop.hdfs.server.namenode.snapshot.TestSnapshot 
   hadoop.hdfs.server.namenode.snapshot.TestSnapshotBlocksMap 
   hadoop.hdfs.server.namenode.snapshot.TestSnapshotDeletion 
   hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport 
   hadoop.hdfs.server.namenode.snapshot.TestSnapshotRename 
   hadoop.hdfs.server.namenode.TestAclConfigFlag 
   hadoop.hdfs.server.namenode.TestAddOverReplicatedStripedBlocks 
   hadoop.hdfs.server.namenode.TestAuditLoggerWithCommands 
   hadoop.hdfs.server.namenode.TestAuditLogs 
   hadoop.hdfs.server.namenode.TestBlockPlacementPolicyRackFaultTolerant 
 

[jira] [Resolved] (HDDS-209) createVolume command throws error when user is not present locally but creates the volume

2018-07-19 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao resolved HDDS-209.
-
Resolution: Duplicate

This is a dup of HDDS-138

> createVolume command throws error when user is not present locally but 
> creates the volume
> -
>
> Key: HDDS-209
> URL: https://issues.apache.org/jira/browse/HDDS-209
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Nilotpal Nandi
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
>
> The user "test_user3" does not exist locally.
> When the -createVolume command is run for the user "test_user3", it prints an 
> error stack on standard output but still successfully creates the volume.
> The exit code for the command execution is 0.
>  
>  
> {noformat}
> [root@ozone-vm bin]# ./ozone oz -createVolume /testvolume121 -user test_user3
> 2018-07-02 06:01:37,020 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> 2018-07-02 06:01:37,605 WARN security.ShellBasedUnixGroupsMapping: unable to 
> return groups for user test_user3
> PartialGroupNameException The user name 'test_user3' is not found. id: 
> test_user3: no such user
> id: test_user3: no such user
> at 
> org.apache.hadoop.security.ShellBasedUnixGroupsMapping.resolvePartialGroupNames(ShellBasedUnixGroupsMapping.java:294)
>  at 
> org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getUnixGroups(ShellBasedUnixGroupsMapping.java:207)
>  at 
> org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getGroups(ShellBasedUnixGroupsMapping.java:97)
>  at 
> org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.getGroups(JniBasedUnixGroupsMappingWithFallback.java:51)
>  at 
> org.apache.hadoop.security.Groups$GroupCacheLoader.fetchGroupList(Groups.java:384)
>  at org.apache.hadoop.security.Groups$GroupCacheLoader.load(Groups.java:319)
>  at org.apache.hadoop.security.Groups$GroupCacheLoader.load(Groups.java:269)
>  at 
> com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3568)
>  at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2350)
>  at 
> com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2313)
>  at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2228)
>  at com.google.common.cache.LocalCache.get(LocalCache.java:3965)
>  at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3969)
>  at 
> com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4829)
>  at org.apache.hadoop.security.Groups.getGroups(Groups.java:227)
>  at 
> org.apache.hadoop.security.UserGroupInformation.getGroups(UserGroupInformation.java:1547)
>  at 
> org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1535)
>  at 
> org.apache.hadoop.ozone.client.rpc.RpcClient.createVolume(RpcClient.java:190)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at 
> org.apache.hadoop.ozone.client.OzoneClientInvocationHandler.invoke(OzoneClientInvocationHandler.java:54)
>  at com.sun.proxy.$Proxy11.createVolume(Unknown Source)
>  at 
> org.apache.hadoop.ozone.client.ObjectStore.createVolume(ObjectStore.java:77)
>  at 
> org.apache.hadoop.ozone.web.ozShell.volume.CreateVolumeHandler.execute(CreateVolumeHandler.java:98)
>  at org.apache.hadoop.ozone.web.ozShell.Shell.dispatch(Shell.java:395)
>  at org.apache.hadoop.ozone.web.ozShell.Shell.run(Shell.java:135)
>  at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>  at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>  at org.apache.hadoop.ozone.web.ozShell.Shell.main(Shell.java:114)
> 2018-07-02 06:01:37,611 [main] INFO - Creating Volume: testvolume121, with 
> test_user3 as owner and quota set to 1152921504606846976 bytes.
> {noformat}
>  
> {noformat}
> [root@ozone-vm bin]# ./ozone oz -listVolume / -user test_user3
> 2018-07-02 06:02:20,385 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> [ {
>  "owner" : {
>  "name" : "test_user3"
>  },
>  "quota" : {
>  "unit" : "TB",
>  "size" : 1048576
>  },
>  "volumeName" : "testvolume121",
>  "createdOn" : "Thu, 05 Jun +50470 19:07:00 GMT",
>  "createdBy" : "test_user3"
> } ]
> {noformat}
> Expectation:
> --
> The error stack should not be printed on standard output if the volume is 
> successfully created for a non-existent user.

[jira] [Created] (HDDS-270) Move generic container utils to ContainerUtils

2018-07-19 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HDDS-270:
---

 Summary: Move generic container utils to ContainerUtils
 Key: HDDS-270
 URL: https://issues.apache.org/jira/browse/HDDS-270
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Hanisha Koneru
Assignee: Hanisha Koneru


Some container util functions, such as getContainerFile(), are common to all 
ContainerTypes. These functions should be moved to ContainerUtils.

Also move some functions to KeyValueContainer as applicable.

 






[jira] [Resolved] (HDDS-131) Replace pipeline info from container info with a pipeline id

2018-07-19 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao resolved HDDS-131.
-
Resolution: Implemented

This has been implemented with HDDS-16 and HDDS-175. 

> Replace pipeline info from container info with a pipeline id
> 
>
> Key: HDDS-131
> URL: https://issues.apache.org/jira/browse/HDDS-131
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
>
> Currently, in the containerInfo object, the complete pipeline object is 
> stored. The idea here is to decouple the pipeline info from container info 
> and replace it with a pipeline Id.
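A tiny sketch of the decoupling described above, with illustrative names only (not the actual SCM classes): ContainerInfo holds just a pipeline ID, and the full pipeline record is resolved through a registry lookup, so many containers can share one pipeline entry instead of each embedding a copy.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Illustrative registry mapping pipeline IDs to pipeline records.
final class PipelineRegistry {
    private final Map<UUID, String> pipelines = new HashMap<>(); // id -> pipeline desc

    // Register a pipeline and hand back the ID containers will store.
    UUID register(String pipelineDescription) {
        UUID id = UUID.randomUUID();
        pipelines.put(id, pipelineDescription);
        return id;
    }

    String lookup(UUID pipelineId) {
        return pipelines.get(pipelineId);
    }
}

// Container info now carries only the ID, not the pipeline object itself.
final class ContainerInfoSketch {
    final long containerId;
    final UUID pipelineId;

    ContainerInfoSketch(long containerId, UUID pipelineId) {
        this.containerId = containerId;
        this.pipelineId = pipelineId;
    }
}
```

With this indirection, updating a pipeline's membership touches one registry entry rather than every containerInfo that references it.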






Re: Hadoop 3.2 Release Plan proposal

2018-07-19 Thread Subru Krishnan
Thanks Sunil for volunteering to lead the release effort. I am generally
supportive of a release, but -1 on a 3.2 (I would prefer a 3.1.x), as I feel
we already have too many branches to maintain. I already see many commits in
different branches with no apparent rationale; e.g., 3.1 has commits that
are absent in 3.0.

Additionally, AFAIK 3.x has not been deployed in any major production
setting, so the cost of adding features should be minimal.

Thoughts?

-Subru

On Thu, Jul 19, 2018 at 12:31 AM, Sunil G  wrote:

> Thanks Steve, Aaron, Wangda for sharing thoughts.
>
> Yes, important changes and features are much needed, hence we will be
> keeping the door open for them as much as possible. Also, considering a few
> more offline requests from other folks, I think extending the timeframe by
> a couple of weeks makes sense (including a second RC buffer), and this
> should ideally help us ship this by September.
>
> Revised dates (I will be updating same in Roadmap wiki as well)
>
> - Feature freeze date : all features to merge by August 21, 2018.
>
> - Code freeze date : blockers/critical only, no improvements and non
> blocker/critical
>
> bug-fixes  August 31, 2018.
>
> - Release date: September 15, 2018
>
> Thanks Eric and Zian, I think Wangda has already answered your questions.
>
> Thanks
> Sunil
>
>
> On Thu, Jul 19, 2018 at 12:13 PM Wangda Tan  wrote:
>
> > Thanks Sunil for volunteering to be RM of 3.2 release, +1 for that.
> >
> > To concerns from Steve,
> >
> > It is a good idea to keep the door open to get important changes /
> > features in before the cutoff. I would prefer to keep the proposed
> > release date to make sure things happen earlier rather than at the last
> > minute, and we all know that releases always get delayed :). I'm also
> > fine if we want to take another few weeks.
> >
> > Regarding the 3.3 release, I would suggest doing that before
> > Thanksgiving. Do you think that is good, or too early / late?
> >
> > Eric,
> >
> > YARN-8220 will be replaced by YARN-8135; if YARN-8135 can get merged
> > in time, we probably won't need YARN-8220.
> >
> > Sunil,
> >
> > Could you update https://cwiki.apache.org/confluence/display/HADOOP/Roadmap
> > with the proposed plan as well? We can fill in the feature list first
> > before getting consensus on timing.
> >
> > Thanks,
> > Wangda
> >
> > On Wed, Jul 18, 2018 at 6:20 PM Aaron Fabbri  >
> > wrote:
> >
> >> On Tue, Jul 17, 2018 at 7:21 PM Steve Loughran 
> >> wrote:
> >>
> >> >
> >> >
> >> > On 16 Jul 2018, at 23:45, Sunil G  >> > sun...@apache.org>> wrote:
> >> >
> >> > I would also like to take this opportunity to come up with a
> >> > detailed plan.
> >> >
> >> > - Feature freeze date : all features should be merged by August 10,
> >> 2018.
> >> >
> >> >
> >> >
> >> > 
> >>
> >> >
> >> > Please let me know if I missed any features targeted to 3.2 per this
> >> >
> >> >
> >> > Well, there are these big todo lists for S3 & S3Guard.
> >> >
> >> > https://issues.apache.org/jira/browse/HADOOP-15226
> >> > https://issues.apache.org/jira/browse/HADOOP-15220
> >> >
> >> >
> >> > There's a bigger bit of work coming on for Azure Datalake Gen 2
> >> > https://issues.apache.org/jira/browse/HADOOP-15407
> >> >
> >> > I don't think this is quite ready yet; I've been doing work on it,
> >> > but if we have a 3-week deadline, I'm going to expect some timely
> >> > reviews on https://issues.apache.org/jira/browse/HADOOP-15546
> >> >
> >> > I've uprated that to a blocker feature; will review the S3 & S3Guard
> >> JIRAs
> >> > to see which of those are blocking. Then there are some pressing
> >> > "guava, java 9 prep"
> >> >
> >> >
> >>  I can help with this part if you like.
> >>
> >>
> >>
> >> >
> >> >
> >> >
> >> > timeline. I would like to volunteer myself as release manager of the
> >> > 3.2.0 release.
> >> >
> >> >
> >> > well volunteered!
> >> >
> >> >
> >> >
> >> Yes, thank you for stepping up.
> >>
> >>
> >> >
> >> > I think this raises a good question: what timetable should we have
> >> > for the 3.2 & 3.3 releases? If we do want a faster cadence, then
> >> > having the outline time from the 3.2 to the 3.3 release means that
> >> > there's less concern about things not making the 3.2 deadline.
> >> >
> >> > -Steve
> >> >
> >> >
> >> Good idea to mitigate the short deadline.
> >>
> >> -AF
> >>
> >
>


[jira] [Created] (HDDS-268) Add SCM close container watcher

2018-07-19 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-268:
---

 Summary: Add SCM close container watcher
 Key: HDDS-268
 URL: https://issues.apache.org/jira/browse/HDDS-268
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Xiaoyu Yao
Assignee: Ajay Kumar









[jira] [Created] (HDDS-269) Refactor IdentifiableEventPayload to use a long ID

2018-07-19 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-269:
---

 Summary: Refactor IdentifiableEventPayload to use a long ID
 Key: HDDS-269
 URL: https://issues.apache.org/jira/browse/HDDS-269
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Xiaoyu Yao
Assignee: Ajay Kumar









Re: Apache Hadoop 3.1.1 release plan

2018-07-19 Thread Wangda Tan
Hi all,

After several 3.1.1 blockers landed, I think we're pretty close to a
clean 3.1.1 branch ready for RC.

So far we have two blockers targeting 3.1.1 [1], and there are 420 tickets
with fix version = 3.1.1 [2].

Relative to the previously communicated 3.1.1 release date (May 01), we
have been delayed by more than two months, so I want to get 3.1.1 released
as soon as possible. I just cut branch-3.1.1 for blockers only; branch-3.1
will be open for all bug fixes.

I'm going to create RC0 by the end of tomorrow or when the last blocker
gets resolved, whichever is later. Please let me know if there are any
other blockers that need to get into 3.1.1.

[1] project in (YARN, HADOOP, MAPREDUCE, HDFS) AND priority in (Blocker,
Critical) AND resolution = Unresolved AND "Target Version/s" = 3.1.1 ORDER
BY priority DESC
[2] project in (YARN, HADOOP, MAPREDUCE, HDFS) AND fixVersion in (3.1.1)
ORDER BY priority DESC

Thanks,
Wangda


On Thu, May 10, 2018 at 6:40 PM Weiwei Yang  wrote:

> Hi Wangda
>
> I would propose to have https://issues.apache.org/jira/browse/YARN-8015 
> included
> in 3.1.1.
>
> Once this is done, we get both intra and inter placement constraints
> covered, so users can start to explore this feature; otherwise the
> functionality is pretty limited. It has been Patch Available for a while; I
> just promoted it to target 3.1.1. Hope that makes sense.
>
> Thanks!
>
> --
> Weiwei
>
> On 11 May 2018, 9:02 AM +0800, Wangda Tan , wrote:
>
> Hi all,
>
> Per the previously proposed RC time (May 1st), we want to release 3.1.1
> sooner if possible. As of now, 3.1.1 has 187 fixes [1] on top of 3.1.0, and
> there are 10 open blockers/criticals targeting 3.1.1 [2]. I just posted
> comments on these open criticals/blockers asking the ticket owners about
> their status.
>
> If everybody agrees, I propose starting the code freeze of branch-3.1 from
> Saturday PDT this week; only blockers/criticals can then be committed to
> branch-3.1. To reduce the burden on committers, I want to delay cutting
> branch-3.1.1 as late as possible. If you have any major/minor tickets (for
> severe issues please update priorities) that should go into 3.1.1, please
> reply to this email thread and we can look at them and make a call together.
>
> Please feel free to share your comments and suggestions.
>
> Thanks,
> Wangda
>
> [1] project in (YARN, "Hadoop HDFS", "Hadoop Common", "Hadoop Map/Reduce")
> AND status = Resolved AND fixVersion = 3.1.1
> [2] project in (YARN, HADOOP, MAPREDUCE, "Hadoop Development Tools") AND
> priority in (Blocker, Critical) AND resolution = Unresolved AND "Target
> Version/s" = 3.1.1 ORDER BY priority DESC
>
>
> On Thu, May 10, 2018 at 5:48 PM, Wangda Tan  wrote:
>
> Thanks Brahma/Sunil,
>
> For YARN-8265, it is too big a change for 3.1.1, so I just removed 3.1.1
> from its target version.
> For YARN-8236, it is a severe issue and I think it is close to finished.
>
>
>
> On Thu, May 10, 2018 at 3:08 AM, Sunil G  wrote:
>
>
> Thanks Brahma.
> Yes, Billie is reviewing YARN-8265 and I am helping in YARN-8236.
>
> - Sunil
>
>
> On Thu, May 10, 2018 at 2:25 PM Brahma Reddy Battula <
> brahmareddy.batt...@huawei.com> wrote:
>
> Thanks Wangda Tan for driving the 3.1.1 release. Yes, this can be a good
> addition to the 3.1 release line for improving quality.
>
> It looks like only the following two are pending, both in review state.
> Hope you are monitoring these two.
>
> https://issues.apache.org/jira/browse/YARN-8265
> https://issues.apache.org/jira/browse/YARN-8236
>
>
>
> Note: https://issues.apache.org/jira/browse/YARN-8247 ==> committed to
> branch-3.1
>
>
> -Original Message-
> From: Wangda Tan [mailto:wheele...@gmail.com]
> Sent: 19 April 2018 17:49
> To: Hadoop Common ;
> mapreduce-...@hadoop.apache.org; Hdfs-dev ;
> yarn-...@hadoop.apache.org
> Subject: Apache Hadoop 3.1.1 release plan
>
> Hi, All
>
> We released Apache Hadoop 3.1.0 on Apr 06. To further improve the quality
> of the release, we plan to release 3.1.1 on May 06. The focus of 3.1.1 will
> be fixing blockers / critical bugs and other enhancements. So far there are
> 100 JIRAs [1] with fix version marked as 3.1.1.
>
> We plan to cut branch-3.1.1 on May 01 and vote on an RC the same day.
>
> Please feel free to share your insights.
>
> Thanks,
> Wangda Tan
>
> [1] project in (YARN, "Hadoop HDFS", "Hadoop Common", "Hadoop
> Map/Reduce") AND fixVersion = 3.1.1
>
>
>
>


[jira] [Created] (HDDS-267) Handle consistency issues during container update/close

2018-07-19 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HDDS-267:
---

 Summary: Handle consistency issues during container update/close
 Key: HDDS-267
 URL: https://issues.apache.org/jira/browse/HDDS-267
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Hanisha Koneru
Assignee: Hanisha Koneru


During container update and close, the .container file on disk is modified. We 
should make sure that the in-memory state and the on-disk state for a container 
are consistent. 
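The JIRA doesn't prescribe a mechanism, but one common way to keep the on-disk file consistent during an update is the write-temp-then-atomic-rename pattern: write the new contents to a temporary sibling file, then atomically rename it over the old one, so a reader (or a crash) sees either the complete old state or the complete new state, never a partial write. A minimal sketch, with illustrative names only:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Sketch: update a .container file without ever exposing a half-written file.
final class AtomicContainerFileUpdate {
    static void update(Path containerFile, String newContents) throws IOException {
        // Write the full new contents to a temp file in the same directory.
        Path tmp = containerFile.resolveSibling(containerFile.getFileName() + ".tmp");
        Files.write(tmp, newContents.getBytes(StandardCharsets.UTF_8));
        // Atomically replace the old file; readers see old or new, never both.
        Files.move(tmp, containerFile,
                StandardCopyOption.ATOMIC_MOVE, StandardCopyOption.REPLACE_EXISTING);
    }
}
```

The in-memory state would then be updated only after the rename succeeds, so memory never reflects a state that failed to reach disk.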






[jira] [Created] (HDDS-266) Integrate checksum into .container file

2018-07-19 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HDDS-266:
---

 Summary: Integrate checksum into .container file
 Key: HDDS-266
 URL: https://issues.apache.org/jira/browse/HDDS-266
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Hanisha Koneru
Assignee: Hanisha Koneru


Currently, each container's metadata consists of 2 files: a .container file and 
a .checksum file. In this Jira, we propose to integrate the checksum into the 
.container file itself. This will help with synchronization during container 
updates.
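The following is an illustrative sketch of the general technique (not the actual HDDS-266 file format): compute a digest over the payload and store it in the same file, so a single write covers both the data and its checksum, and verification just re-computes the digest and compares.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Illustrative embedded-checksum scheme: a "checksum: <hex>" header line
// followed by the payload it protects. Names and format are hypothetical.
final class EmbeddedChecksum {
    private static final String PREFIX = "checksum: ";

    private static MessageDigest sha256() {
        try {
            return MessageDigest.getInstance("SHA-256");
        } catch (NoSuchAlgorithmException e) {
            throw new AssertionError(e); // SHA-256 is mandatory in every JRE
        }
    }

    private static String hex(byte[] bytes) {
        StringBuilder sb = new StringBuilder();
        for (byte b : bytes) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    // Prepend a checksum line computed over the payload.
    static String withChecksum(String payload) {
        String sum = hex(sha256().digest(payload.getBytes(StandardCharsets.UTF_8)));
        return PREFIX + sum + "\n" + payload;
    }

    // Re-compute the digest over the payload and compare with the stored value.
    static boolean verify(String fileContents) {
        int nl = fileContents.indexOf('\n');
        if (nl < 0 || !fileContents.startsWith(PREFIX)) {
            return false;
        }
        String stored = fileContents.substring(PREFIX.length(), nl);
        String payload = fileContents.substring(nl + 1);
        return stored.equals(
                hex(sha256().digest(payload.getBytes(StandardCharsets.UTF_8))));
    }
}
```

Because data and checksum travel in one file, there is no window where one file has been updated and the other has not, which is the synchronization benefit the JIRA describes.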






[jira] [Created] (HDDS-265) Move numPendingDeletionBlocks and deleteTransactionId from ContainerData to KeyValueContainerData

2018-07-19 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HDDS-265:
---

 Summary: Move numPendingDeletionBlocks and deleteTransactionId 
from ContainerData to KeyValueContainerData
 Key: HDDS-265
 URL: https://issues.apache.org/jira/browse/HDDS-265
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Hanisha Koneru


The "numPendingDeletionBlocks" and "deleteTransactionId" fields are specific to 
KeyValueContainers. As such, they should be moved from ContainerData to 
KeyValueContainerData.

ContainerReport should also be refactored to reflect this change.

Please refer to [~ljain]'s comment in HDDS-250.
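Structurally, the proposal amounts to pushing the two fields down the class hierarchy. A minimal sketch (class names follow the JIRA description; everything else is illustrative, not the actual HDDS code):

```java
// Generic base: only state that applies to every container type.
abstract class ContainerDataSketch {
    private final long containerId;

    ContainerDataSketch(long containerId) {
        this.containerId = containerId;
    }

    long getContainerId() {
        return containerId;
    }
}

// Key-value-specific subclass: the two fields moved down per HDDS-265.
class KeyValueContainerDataSketch extends ContainerDataSketch {
    private long numPendingDeletionBlocks;
    private long deleteTransactionId;

    KeyValueContainerDataSketch(long containerId) {
        super(containerId);
    }

    void incrPendingDeletionBlocks(long delta) {
        numPendingDeletionBlocks += delta;
    }

    long getNumPendingDeletionBlocks() {
        return numPendingDeletionBlocks;
    }

    // Delete transaction IDs only ever move forward.
    void updateDeleteTransactionId(long txId) {
        deleteTransactionId = Math.max(deleteTransactionId, txId);
    }

    long getDeleteTransactionId() {
        return deleteTransactionId;
    }
}
```

Other container types then extend the base class without carrying deletion-tracking state they never use.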






[jira] [Created] (HDFS-13749) Implement a new client protocol method to get NameNode state

2018-07-19 Thread Chao Sun (JIRA)
Chao Sun created HDFS-13749:
---

 Summary: Implement a new client protocol method to get NameNode 
state
 Key: HDFS-13749
 URL: https://issues.apache.org/jira/browse/HDFS-13749
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Chao Sun
Assignee: Chao Sun


Currently {{HAServiceProtocol#getServiceStatus}} requires super user privilege. 
Therefore, as a temporary solution, in HDFS-12976 we discover NameNode state by 
calling {{reportBadBlocks}}. Here, we'll implement this properly by adding a 
new method to the client protocol to get the NameNode state.
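The shape of the idea: expose a cheap, non-privileged call that returns a NameNode's HA state, so a client can probe each configured NameNode directly instead of piggybacking on another RPC. A hedged sketch follows; the enum, class, and method names are illustrative stand-ins, not the proposed HDFS API.

```java
import java.util.List;
import java.util.Optional;
import java.util.function.Function;

// Illustrative HA states a NameNode could report.
enum HAState { ACTIVE, STANDBY, OBSERVER }

final class ActiveFinder {
    // 'probe' stands in for the proposed client-protocol call to one NameNode:
    // given an address, return that node's reported HA state.
    static Optional<String> findActive(List<String> nameNodes,
                                       Function<String, HAState> probe) {
        for (String nn : nameNodes) {
            if (probe.apply(nn) == HAState.ACTIVE) {
                return Optional.of(nn);
            }
        }
        return Optional.empty();
    }
}
```

With such a call, client-side failover logic reduces to "probe until one answers ACTIVE", with no super-user privilege required.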






[jira] [Created] (HDDS-264) 'oz' subcommand reference is not present in 'ozone' command help

2018-07-19 Thread Nilotpal Nandi (JIRA)
Nilotpal Nandi created HDDS-264:
---

 Summary: 'oz' subcommand reference is not present in 'ozone' 
command help
 Key: HDDS-264
 URL: https://issues.apache.org/jira/browse/HDDS-264
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Client
Reporter: Nilotpal Nandi
 Fix For: 0.2.1


'oz' subcommand is not present in ozone help.

 

ozone help:



 
{noformat}
hadoop@8ceb8dfccb36:~/bin$ ./ozone
Usage: ozone [OPTIONS] SUBCOMMAND [SUBCOMMAND OPTIONS]
OPTIONS is none or any of:
--buildpaths attempt to add class files from build tree
--config dir Hadoop config directory
--daemon (start|status|stop) operate on a daemon
--debug turn on shell script debug mode
--help usage information
--hostnames list[,of,host,names] hosts to use in worker mode
--hosts filename list of hosts to use in worker mode
--loglevel level set the log4j level for this command
--workers turn on worker mode
SUBCOMMAND is one of:

 Admin Commands:
jmxget get JMX exported values from NameNode or DataNode.
Client Commands:
classpath prints the class path needed to get the hadoop jar and the
 required libraries
envvars display computed Hadoop environment variables
freon runs an ozone data generator
genconf generate minimally required ozone configs and output to
 ozone-site.xml in specified path
genesis runs a collection of ozone benchmarks to help with tuning.
getozoneconf get ozone config values from configuration
noz ozone debug tool, convert ozone metadata into relational data
o3 command line interface for ozone
scmcli run the CLI of the Storage Container Manager
version print the version
Daemon Commands:
datanode run a HDDS datanode
om Ozone Manager
scm run the Storage Container Manager service
SUBCOMMAND may print help when invoked w/o parameters or with -h.
{noformat}
 

'oz' subcommand example:



 
{noformat}
hadoop@8ceb8dfccb36:~/bin$ ./ozone oz -listVolume /
2018-07-19 14:51:25 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
[ {
 "owner" : {
 "name" : "hadoop"
 },
 "quota" : {
 "unit" : "TB",
 "size" : 1048576
 },
 "volumeName" : "vol-0-01597",
 "createdOn" : "Sat, 20 Feb +50517 10:11:35 GMT",
 "createdBy" : "hadoop"
}, {
 "owner" : {
 "name" : "hadoop"
 },
 "quota" : {
 "unit" : "TB",
 "size" : 1048576
 },
 "volumeName" : "vol-0-19478",
 "createdOn" : "Thu, 03 Jun +50517 22:23:12 GMT",
 "createdBy" : "hadoop"
}, {
 "owner" : {
 "name" : "hadoop"
 },
 "quota" : {
 "unit" : "TB",
 "size" : 1048576
 }
 
{noformat}
 

 






[jira] [Created] (HDFS-13748) Hadoop 3.0.3 docker building when pip install pylint return typed_ast required py3 error

2018-07-19 Thread zhouhao (JIRA)
zhouhao created HDFS-13748:
--

 Summary: Hadoop 3.0.3 docker building when pip install pylint 
return typed_ast required py3 error
 Key: HDFS-13748
 URL: https://issues.apache.org/jira/browse/HDFS-13748
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.3
Reporter: zhouhao


When starting to build the docker image with the script "start-build-env.sh" inside 
the source package, the build stops at the Dockerfile command "pip2 install 
pylint" and returns the following error:

Complete output from command python setup.py egg_info:
 Error: typed_ast only runs on Python 3.3 and above.

typed_ast requires _pip_ version 10.0.1, however the default installation of 
python-pip is version 8.1.1.






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-07-19 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/841/

[Jul 18, 2018 4:38:43 PM] (xyao) HDDS-241. Handle Volume in inconsistent state. 
Contributed by Hanisha
[Jul 18, 2018 6:05:42 PM] (xyao) HDDS-207. ozone listVolume command accepts 
random values as argument.
[Jul 18, 2018 6:46:26 PM] (xyao) HDDS-255. Fix TestOzoneConfigurationFields for 
missing
[Jul 19, 2018 12:09:43 AM] (eyang) HADOOP-15610.  Fixed pylint version for 
Hadoop docker image.




-1 overall


The following subsystems voted -1:
asflicense findbugs pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed CTEST tests :

   test_test_libhdfs_threaded_hdfs_static 
   test_libhdfs_threaded_hdfspp_test_shim_static 

Failed junit tests :

   hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy 
   hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapred.TestMRTimelineEventHandling 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/841/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/841/artifact/out/diff-compile-javac-root.txt
  [332K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/841/artifact/out/diff-checkstyle-root.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/841/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/841/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/841/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/841/artifact/out/diff-patch-shelldocs.txt
  [16K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/841/artifact/out/whitespace-eol.txt
  [9.4M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/841/artifact/out/whitespace-tabs.txt
  [1.1M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/841/artifact/out/xml.txt
  [4.0K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/841/artifact/out/branch-findbugs-hadoop-hdds_client.txt
  [68K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/841/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt
  [56K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/841/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt
  [60K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/841/artifact/out/branch-findbugs-hadoop-hdds_tools.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/841/artifact/out/branch-findbugs-hadoop-ozone_client.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/841/artifact/out/branch-findbugs-hadoop-ozone_common.txt
  [28K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/841/artifact/out/branch-findbugs-hadoop-ozone_objectstore-service.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/841/artifact/out/branch-findbugs-hadoop-ozone_ozone-manager.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/841/artifact/out/branch-findbugs-hadoop-ozone_ozonefs.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/841/artifact/out/branch-findbugs-hadoop-ozone_tools.txt
  [4.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/841/artifact/out/diff-javadoc-javadoc-root.txt
  [760K]

   CTEST:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/841/artifact/out/patch-hadoop-hdfs-project_hadoop-hdfs-native-client-ctest.txt
  [116K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/841/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [276K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/841/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client.txt
  [112K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/841/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [80K]
   

Apache Hadoop qbt Report: trunk+JDK8 on Windows/x64

2018-07-19 Thread Apache Jenkins Server
For more details, see https://builds.apache.org/job/hadoop-trunk-win/531/

[Jul 14, 2018 1:17:53 AM] (aw) YETUS-639. hadoop: parallel tests on < 2.8.0 are 
not guarateed to work
[Jul 17, 2018 8:49:44 PM] (aw) YETUS-641. Hardcoded pylint version
[Jul 18, 2018 4:38:43 PM] (xyao) HDDS-241. Handle Volume in inconsistent state. 
Contributed by Hanisha




-1 overall


The following subsystems voted -1:
compile mvninstall pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h 00m 00s)
unit


Specific tests:

Failed junit tests :

   hadoop.crypto.TestCryptoStreamsWithOpensslAesCtrCryptoCodec 
   hadoop.fs.contract.rawlocal.TestRawlocalContractAppend 
   hadoop.fs.TestFileUtil 
   hadoop.fs.TestFsShellCopy 
   hadoop.fs.TestFsShellList 
   hadoop.fs.viewfs.TestViewFileSystemWithAuthorityLocalFileSystem 
   hadoop.http.TestHttpServer 
   hadoop.http.TestHttpServerLogs 
   hadoop.io.nativeio.TestNativeIO 
   hadoop.ipc.TestSocketFactory 
   hadoop.metrics2.impl.TestStatsDMetrics 
   hadoop.security.TestGroupsCaching 
   hadoop.security.TestSecurityUtil 
   hadoop.security.TestShellBasedUnixGroupsMapping 
   hadoop.security.token.TestDtUtilShell 
   hadoop.test.TestLambdaTestUtils 
   hadoop.util.TestDiskCheckerWithDiskIo 
   hadoop.util.TestNativeCodeLoader 
   hadoop.util.TestNodeHealthScriptRunner 
   hadoop.hdfs.client.impl.TestBlockReaderLocal 
   hadoop.hdfs.qjournal.server.TestJournalNode 
   hadoop.hdfs.qjournal.server.TestJournalNodeSync 
   hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistPolicy 
   
hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement 
   
hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestProvidedImpl 
   hadoop.hdfs.server.datanode.TestBlockPoolSliceStorage 
   hadoop.hdfs.server.datanode.TestDataNodeFaultInjector 
   hadoop.hdfs.server.datanode.TestDataNodeMXBean 
   hadoop.hdfs.server.datanode.TestDataNodeUUID 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.server.datanode.TestDirectoryScanner 
   hadoop.hdfs.server.datanode.TestStorageReport 
   hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand 
   hadoop.hdfs.server.diskbalancer.TestDiskBalancerRPC 
   hadoop.hdfs.server.mover.TestMover 
   hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA 
   hadoop.hdfs.server.namenode.ha.TestHASafeMode 
   hadoop.hdfs.server.namenode.ha.TestPipelinesFailover 
   hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA 
   hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics 
   hadoop.hdfs.server.namenode.TestCacheDirectives 
   hadoop.hdfs.TestDatanodeRegistration 
   hadoop.hdfs.TestDatanodeStartupFixesLegacyStorageIDs 
   hadoop.hdfs.TestDFSShell 
   hadoop.hdfs.TestDFSStripedInputStream 
   hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy 
   hadoop.hdfs.TestDFSUpgradeFromImage 
   hadoop.hdfs.TestFetchImage 
   hadoop.hdfs.TestHDFSFileSystemContract 
   hadoop.hdfs.TestMaintenanceState 
   hadoop.hdfs.TestPread 
   hadoop.hdfs.TestReadStripedFileWithDecodingDeletedData 
   hadoop.hdfs.TestReadStripedFileWithMissingBlocks 
   hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy 
   hadoop.hdfs.TestSecureEncryptionZoneWithKMS 
   hadoop.hdfs.TestTrashWithSecureEncryptionZones 
   hadoop.hdfs.tools.TestDFSAdmin 
   hadoop.hdfs.web.TestWebHDFS 
   hadoop.hdfs.web.TestWebHdfsFileSystemContract 
   hadoop.hdfs.web.TestWebHdfsUrl 
   hadoop.fs.http.server.TestHttpFSServerWebServer 
   
hadoop.yarn.logaggregation.filecontroller.ifile.TestLogAggregationIndexFileController
 
   
hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch 
   hadoop.yarn.server.nodemanager.containermanager.TestAuxServices 
   hadoop.yarn.server.nodemanager.containermanager.TestContainerManager 
   
hadoop.yarn.server.nodemanager.nodelabels.TestScriptBasedNodeLabelsProvider 
   hadoop.yarn.server.nodemanager.recovery.TestNMLeveldbStateStoreService 
   hadoop.yarn.server.nodemanager.TestContainerExecutor 
   hadoop.yarn.server.nodemanager.TestNodeManagerResync 
   hadoop.yarn.server.webproxy.amfilter.TestAmFilter 
   

[jira] [Created] (HDDS-263) Add retries in Ozone Client to handle BLOCK_NOT_COMMITTED Exception

2018-07-19 Thread Shashikant Banerjee (JIRA)
Shashikant Banerjee created HDDS-263:


 Summary: Add retries in Ozone Client to handle BLOCK_NOT_COMMITTED 
Exception
 Key: HDDS-263
 URL: https://issues.apache.org/jira/browse/HDDS-263
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Client
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee
 Fix For: 0.2.1


While Ozone client writes are going on, a container on a datanode can get 
closed because of node failures, disk out of space, etc. In such situations, a 
client write will fail with CLOSED_CONTAINER_IO. In this case, the ozone client 
should try to get the committed block length for the pending open blocks and 
update the OzoneManager. While trying to get the committed block length, it may 
fail with a BLOCK_NOT_COMMITTED exception because, as part of the transition from 
the CLOSING to the CLOSED state of the container, it commits all open blocks one by 
one. In such cases, the client needs to retry getting the committed block length for 
a fixed number of attempts, and eventually throw the exception to the application if 
it is not able to successfully get and update the length in the OzoneManager. 
This Jira aims to address this.
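The bounded-retry idea described above can be sketched as follows (class and method names are hypothetical, not the actual Ozone client code):

```java
// Hedged sketch: keep asking for the committed block length while the
// container finishes its CLOSING -> CLOSED transition; after a fixed number
// of attempts, give up and surface the error to the application.
class BlockNotCommittedException extends Exception {
    BlockNotCommittedException(String message) {
        super(message);
    }
}

interface CommittedLengthSource {
    long getCommittedBlockLength(long blockId) throws BlockNotCommittedException;
}

class CommittedLengthFetcher {
    // Assumes maxAttempts >= 1; sleepMillis spaces out the retries so the
    // container has time to commit its open blocks.
    static long fetchWithRetry(CommittedLengthSource source, long blockId,
                               int maxAttempts, long sleepMillis)
            throws BlockNotCommittedException, InterruptedException {
        BlockNotCommittedException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return source.getCommittedBlockLength(blockId);
            } catch (BlockNotCommittedException e) {
                last = e;                 // not committed yet; retry
                Thread.sleep(sleepMillis);
            }
        }
        throw last;                       // let the application handle it
    }
}
```

On success the client would then update the OzoneManager with the returned length; on exhaustion the exception propagates, as the description proposes.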






Re: Hadoop 3.2 Release Plan proposal

2018-07-19 Thread Sunil G
Thanks Steve, Aaron, Wangda for sharing thoughts.

Yes, important changes and features are much needed, hence we will be
keeping the door open for them as far as possible. Also, considering a few more
offline requests from other folks, I think extending the timeframe by a
couple of weeks makes sense (including a buffer for a second RC), and this should
still let us ship by September.

Revised dates (I will be updating same in Roadmap wiki as well)

- Feature freeze date: all features to merge by August 21, 2018.

- Code freeze date: blockers/criticals only (no improvements or
non-blocker/critical bug fixes), August 31, 2018.

- Release date: September 15, 2018

Thanks Eric and Zian; I think Wangda has already answered your questions.

Thanks
Sunil


On Thu, Jul 19, 2018 at 12:13 PM Wangda Tan  wrote:

> Thanks Sunil for volunteering to be RM of 3.2 release, +1 for that.
>
> To concerns from Steve,
>
> It is a good idea to keep the door open to get important changes /
> features in before cutoff. I would prefer to keep the proposed release date
> to make sure things can happen earlier instead of last minute and we all
> know that releases are always get delayed :). I'm also fine if we want get
> another several weeks time.
>
> Regarding of 3.3 release, I would suggest doing that before thanksgiving.
> Do you think is it good or too early / late?
>
> Eric,
>
> The YARN-8220 will be replaced by YARN-8135, if YARN-8135 can get merged
> in time, we probably not need the YARN-8220.
>
> Sunil,
>
> Could u update https://cwiki.apache.org/confluence/display/HADOOP/Roadmap
> with proposed plan as well? We can fill feature list first before getting
> consensus of time.
>
> Thanks,
> Wangda
>
> On Wed, Jul 18, 2018 at 6:20 PM Aaron Fabbri 
> wrote:
>
>> On Tue, Jul 17, 2018 at 7:21 PM Steve Loughran 
>> wrote:
>>
>> >
>> >
>> > On 16 Jul 2018, at 23:45, Sunil G > > sun...@apache.org>> wrote:
>> >
>> > I would also would like to take this opportunity to come up with a
>> detailed
>> > plan.
>> >
>> > - Feature freeze date : all features should be merged by August 10,
>> 2018.
>> >
>> >
>> >
>> > 
>>
>> >
>> > Please let me know if I missed any features targeted to 3.2 per this
>> >
>> >
>> > Well there these big todo lists for S3 & S3Guard.
>> >
>> > https://issues.apache.org/jira/browse/HADOOP-15226
>> > https://issues.apache.org/jira/browse/HADOOP-15220
>> >
>> >
>> > There's a bigger bit of work coming on for Azure Datalake Gen 2
>> > https://issues.apache.org/jira/browse/HADOOP-15407
>> >
>> > I don't think this is quite ready yet, I've been doing work on it, but
>> if
>> > we have a 3 week deadline, I'm going to expect some timely reviews on
>> > https://issues.apache.org/jira/browse/HADOOP-15546
>> >
>> > I've uprated that to a blocker feature; will review the S3 & S3Guard
>> JIRAs
>> > to see which of those are blocking. Then there are some pressing "guave,
>> > java 9 prep"
>> >
>> >
>>  I can help with this part if you like.
>>
>>
>>
>> >
>> >
>> >
>> > timeline. I would like to volunteer myself as release manager of 3.2.0
>> > release.
>> >
>> >
>> > well volunteered!
>> >
>> >
>> >
>> Yes, thank you for stepping up.
>>
>>
>> >
>> > I think this raises a good q: what timetable should we have for the
>> 3.2. &
>> > 3.3 releases; if we do want a faster cadence, then having the outline
>> time
>> > from the 3.2 to the 3.3 release means that there's less concern about
>> > things not making the 3.2 dealine
>> >
>> > -Steve
>> >
>> >
>> Good idea to mitigate the short deadline.
>>
>> -AF
>>
>


Re: Hadoop 3.2 Release Plan proposal

2018-07-19 Thread Wangda Tan
Thanks Sunil for volunteering to be RM of 3.2 release, +1 for that.

To concerns from Steve,

It is a good idea to keep the door open to get important changes / features
in before the cutoff. I would prefer to keep the proposed release date to make
sure things can happen earlier instead of at the last minute, and we all know
that releases always get delayed :). I'm also fine if we want to take another
few weeks.

Regarding the 3.3 release, I would suggest doing that before Thanksgiving.
Do you think that is good, or too early / late?

Eric,

YARN-8220 will be replaced by YARN-8135; if YARN-8135 can get merged in
time, we probably won't need YARN-8220.

Sunil,

Could you update https://cwiki.apache.org/confluence/display/HADOOP/Roadmap
with the proposed plan as well? We can fill in the feature list first before
getting consensus on the timing.

Thanks,
Wangda

On Wed, Jul 18, 2018 at 6:20 PM Aaron Fabbri 
wrote:

> On Tue, Jul 17, 2018 at 7:21 PM Steve Loughran 
> wrote:
>
> >
> >
> > On 16 Jul 2018, at 23:45, Sunil G  > sun...@apache.org>> wrote:
> >
> > I would also would like to take this opportunity to come up with a
> detailed
> > plan.
> >
> > - Feature freeze date : all features should be merged by August 10, 2018.
> >
> >
> >
> > 
>
> >
> > Please let me know if I missed any features targeted to 3.2 per this
> >
> >
> > Well there these big todo lists for S3 & S3Guard.
> >
> > https://issues.apache.org/jira/browse/HADOOP-15226
> > https://issues.apache.org/jira/browse/HADOOP-15220
> >
> >
> > There's a bigger bit of work coming on for Azure Datalake Gen 2
> > https://issues.apache.org/jira/browse/HADOOP-15407
> >
> > I don't think this is quite ready yet, I've been doing work on it, but if
> > we have a 3 week deadline, I'm going to expect some timely reviews on
> > https://issues.apache.org/jira/browse/HADOOP-15546
> >
> > I've uprated that to a blocker feature; will review the S3 & S3Guard
> JIRAs
> > to see which of those are blocking. Then there are some pressing "guave,
> > java 9 prep"
> >
> >
>  I can help with this part if you like.
>
>
>
> >
> >
> >
> > timeline. I would like to volunteer myself as release manager of 3.2.0
> > release.
> >
> >
> > well volunteered!
> >
> >
> >
> Yes, thank you for stepping up.
>
>
> >
> > I think this raises a good q: what timetable should we have for the 3.2.
> &
> > 3.3 releases; if we do want a faster cadence, then having the outline
> time
> > from the 3.2 to the 3.3 release means that there's less concern about
> > things not making the 3.2 dealine
> >
> > -Steve
> >
> >
> Good idea to mitigate the short deadline.
>
> -AF
>