[jira] [Created] (HDDS-700) Support Node selection based on network topology

2018-10-18 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-700:
---

 Summary: Support Node selection based on network topology
 Key: HDDS-700
 URL: https://issues.apache.org/jira/browse/HDDS-700
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Xiaoyu Yao






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-699) Detect Ozone Network topology

2018-10-18 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-699:
---

 Summary: Detect Ozone Network topology
 Key: HDDS-699
 URL: https://issues.apache.org/jira/browse/HDDS-699
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Xiaoyu Yao


Traditionally this has been implemented in Hadoop via a script or a customizable 
Java class. One thing we want to add here is flexible multi-level support 
instead of fixed levels like DC/Rack/NodeGroup/Node.
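
As a sketch of what flexible multi-level support could look like (the class and 
path layout here are hypothetical, not from an actual patch), a location path can 
be split into however many levels it happens to contain:

```java
import java.util.Arrays;
import java.util.List;

public class TopologyPathDemo {
    // Parse a node's network location into however many levels it contains,
    // instead of assuming a fixed DC/Rack/NodeGroup/Node depth.
    static List<String> levels(String location) {
        return Arrays.asList(location.substring(1).split("/"));
    }

    public static void main(String[] args) {
        System.out.println(levels("/dc1/rack2/node3").size());          // 3
        System.out.println(levels("/dc1/pod1/rack2/ng1/node3").size()); // 5
    }
}
```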






[jira] [Created] (HDDS-698) Support Topology Awareness for Ozone

2018-10-18 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-698:
---

 Summary: Support Topology Awareness for Ozone
 Key: HDDS-698
 URL: https://issues.apache.org/jira/browse/HDDS-698
 Project: Hadoop Distributed Data Store
  Issue Type: New Feature
Reporter: Xiaoyu Yao
Assignee: Junping Du


This is an umbrella JIRA to add topology aware support for Ozone Pipelines, 
Containers and Blocks.






[jira] [Created] (HDDS-697) update the BCSID for PutSmallFile command

2018-10-18 Thread Shashikant Banerjee (JIRA)
Shashikant Banerjee created HDDS-697:


 Summary: update the BCSID for PutSmallFile command
 Key: HDDS-697
 URL: https://issues.apache.org/jira/browse/HDDS-697
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee









[jira] [Created] (HDFS-14010) Pass correct DF usage to ReservedSpaceCalculator builder

2018-10-18 Thread Lukas Majercak (JIRA)
Lukas Majercak created HDFS-14010:
-

 Summary: Pass correct DF usage to ReservedSpaceCalculator builder
 Key: HDFS-14010
 URL: https://issues.apache.org/jira/browse/HDFS-14010
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Lukas Majercak
Assignee: Lukas Majercak









[jira] [Created] (HDFS-14009) HttpFS: FileStatus#setSnapShotEnabledFlag throws InvocationTargetException when attribute set is emptySet

2018-10-18 Thread Siyao Meng (JIRA)
Siyao Meng created HDFS-14009:
-

 Summary: HttpFS: FileStatus#setSnapShotEnabledFlag throws 
InvocationTargetException when attribute set is emptySet
 Key: HDFS-14009
 URL: https://issues.apache.org/jira/browse/HDFS-14009
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: httpfs
Affects Versions: 3.0.3
Reporter: Siyao Meng
Assignee: Siyao Meng


FileStatus#setSnapShotEnabledFlag throws InvocationTargetException when 
attribute set (attr) is Collections.emptySet().
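
A plausible minimal reproduction of the underlying issue (this is a hypothetical 
stand-in, not the actual HttpFS code): Collections.emptySet() is immutable, so a 
setter invoked reflectively that tries to mutate it throws 
UnsupportedOperationException, which reflection wraps in 
InvocationTargetException.

```java
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.util.Collections;
import java.util.Set;

public class EmptySetDemo {
    // Hypothetical stand-in for the HttpFS attribute handling:
    // mutating the attribute set passed in by the caller.
    public static void addFlag(Set<String> attrs) {
        attrs.add("SNAPSHOT_ENABLED"); // fails on Collections.emptySet()
    }

    public static void main(String[] args) throws Exception {
        Method m = EmptySetDemo.class.getMethod("addFlag", Set.class);
        try {
            m.invoke(null, Collections.emptySet()); // immutable set
        } catch (InvocationTargetException e) {
            // reflection wraps the real failure as the cause
            System.out.println(e.getCause().getClass().getSimpleName());
        }
    }
}
```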







[jira] [Created] (HDFS-14008) NN should log snapshotdiff report

2018-10-18 Thread Pranay Singh (JIRA)
Pranay Singh created HDFS-14008:
---

 Summary: NN should log snapshotdiff report
 Key: HDFS-14008
 URL: https://issues.apache.org/jira/browse/HDFS-14008
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 3.0.3, 3.1.1, 3.0.0
Reporter: Pranay Singh


It would be helpful to log a message for snapshotdiff so that snapshotdiff 
operations can be correlated with memory spikes in the NN heap. Logging the 
details below at the end of a snapshot diff operation would let us know the 
time spent in the snapshotdiff operation and the number of files/directories 
processed and compared.

a) Total dirs processed

b) Total dirs compared

c) Total files processed

d) Total files compared

e) Total children listing time
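
The counters above could be emitted as a single summary log line; a sketch 
(the field names, values, and format are illustrative, not from an actual patch):

```java
public class SnapshotDiffReportLog {
    public static void main(String[] args) {
        // Illustrative counter values gathered during a diff computation.
        long dirsProcessed = 120, dirsCompared = 95;
        long filesProcessed = 4800, filesCompared = 4100;
        long childrenListingTimeMs = 350;
        System.out.println(String.format(
            "SnapshotDiffReport: dirsProcessed=%d, dirsCompared=%d, "
                + "filesProcessed=%d, filesCompared=%d, childrenListingTimeMs=%d",
            dirsProcessed, dirsCompared, filesProcessed, filesCompared,
            childrenListingTimeMs));
    }
}
```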









[jira] [Created] (HDFS-14007) Incompatible layout when generating fs image

2018-10-18 Thread Íñigo Goiri (JIRA)
Íñigo Goiri created HDFS-14007:
--

 Summary: Incompatible layout when generating fs image
 Key: HDFS-14007
 URL: https://issues.apache.org/jira/browse/HDFS-14007
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Íñigo Goiri
Assignee: Virajith Jalaparti


Exception in thread "main" java.lang.IllegalStateException: Incompatible layout -65 (expected -64)
at org.apache.hadoop.hdfs.server.namenode.ImageWriter.<init>(ImageWriter.java:131)
at org.apache.hadoop.hdfs.server.namenode.FileSystemImage.run(FileSystemImage.java:139)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.hdfs.server.namenode.FileSystemImage.main(FileSystemImage.java:148)






[jira] [Created] (HDFS-14006) RBF: Support to get Router object from web context instead of Namenode

2018-10-18 Thread CR Hota (JIRA)
CR Hota created HDFS-14006:
--

 Summary: RBF: Support to get Router object from web context 
instead of Namenode
 Key: HDFS-14006
 URL: https://issues.apache.org/jira/browse/HDFS-14006
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: CR Hota
Assignee: CR Hota


The Router currently uses NameNode web resources to read and verify delegation 
tokens. This model doesn't work when the Router is deployed in secure mode. 
This change will introduce the Router's own UserProvider resource and dependencies.

In the current deployment, one can see this exception:

{"RemoteException":{"exception":"ClassCastException","javaClassName":"java.lang.ClassCastException","message":"org.apache.hadoop.hdfs.server.federation.router.Router
 cannot be cast to org.apache.hadoop.hdfs.server.namenode.NameNode"}}

In the proposed change, the Router will maintain its own web resource, similar 
to the current NameNode one, but modified to return a Router instance instead 
of a NameNode.

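The ClassCastException can be illustrated with a minimal sketch (the classes and 
the attribute name are hypothetical, not the actual HDFS web context code): the 
web context stores a Router, but the shared NameNode resource casts it to NameNode.

```java
import java.util.HashMap;
import java.util.Map;

public class WebContextCastDemo {
    static class NameNode {}
    static class Router {}

    public static void main(String[] args) {
        // In an RBF deployment the context attribute holds a Router,
        // but the reused NameNode web resource casts it to NameNode.
        Map<String, Object> context = new HashMap<>();
        context.put("current.daemon", new Router());
        try {
            NameNode nn = (NameNode) context.get("current.daemon");
            System.out.println(nn);
        } catch (ClassCastException e) {
            System.out.println("caught ClassCastException");
        }
    }
}
```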





[jira] [Created] (HDFS-14005) RBF: Web UI update to bootstrap-3.3.7

2018-10-18 Thread Íñigo Goiri (JIRA)
Íñigo Goiri created HDFS-14005:
--

 Summary: RBF: Web UI update to bootstrap-3.3.7
 Key: HDFS-14005
 URL: https://issues.apache.org/jira/browse/HDFS-14005
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Íñigo Goiri
Assignee: Íñigo Goiri


HADOOP-15483 upgraded Bootstrap to 3.3.7 but did not update the functions that 
use it. We need to use the new API.






[jira] [Created] (HDDS-696) Bootstrap genesis SCM(CA) with self-signed certificate.

2018-10-18 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-696:
---

 Summary: Bootstrap genesis SCM(CA) with self-signed certificate.
 Key: HDDS-696
 URL: https://issues.apache.org/jira/browse/HDDS-696
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


This can be done in the following two scenarios:

1) SCM has not been "--init"-ed: if Ozone security is enabled, we will 
bootstrap the genesis CA along with "scm --init".

2) SCM has been "--init"-ed but without security enabled: now we want to enable 
security on a non-secure SCM cluster. This can be done with 
"scm --init -security".






[jira] [Resolved] (HDDS-663) Lot of "Removed undeclared tags" logger while running commands

2018-10-18 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HDDS-663.

Resolution: Information Provided

Resolving as we can't fix this problem in HDDS. It needs Hadoop 3.2.0 or later.

This problem should go away once Hadoop 3.2.0 is released.

> Lot of "Removed undeclared tags" logger while running commands
> --
>
> Key: HDDS-663
> URL: https://issues.apache.org/jira/browse/HDDS-663
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Namit Maheshwari
>Priority: Major
>  Labels: newbie
>
> While running commands against OzoneFs see lot of logger like below:
> {code:java}
> -bash-4.2$ hdfs dfs -ls o3://bucket2.volume2/mr_jobEE
> 18/10/15 20:29:17 INFO conf.Configuration: Removed undeclared tags:
> 18/10/15 20:29:18 INFO conf.Configuration: Removed undeclared tags:
> Found 2 items
> rw-rw-rw 1 hdfs hdfs 0 2018-10-15 20:28 o3://bucket2.volume2/mr_jobEE/_SUCCESS
> rw-rw-rw 1 hdfs hdfs 5017 1970-07-23 04:33 
> o3://bucket2.volume2/mr_jobEE/part-r-0
> 18/10/15 20:29:19 INFO conf.Configuration: Removed undeclared tags:
> -bash-4.2$ {code}
>  






[jira] [Created] (HDDS-695) Introduce a new SCM Command to teardown a Pipeline

2018-10-18 Thread Nanda kumar (JIRA)
Nanda kumar created HDDS-695:


 Summary: Introduce a new SCM Command to teardown a Pipeline
 Key: HDDS-695
 URL: https://issues.apache.org/jira/browse/HDDS-695
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: SCM
Reporter: Nanda kumar
Assignee: Nanda kumar


We need to have a tear-down pipeline command in SCM so that an administrator 
can close/destroy a pipeline in the cluster.






[jira] [Created] (HDDS-694) Plugin new Pipeline management code in SCM

2018-10-18 Thread Lokesh Jain (JIRA)
Lokesh Jain created HDDS-694:


 Summary: Plugin new Pipeline management code in SCM
 Key: HDDS-694
 URL: https://issues.apache.org/jira/browse/HDDS-694
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: SCM
Reporter: Lokesh Jain
Assignee: Lokesh Jain


This Jira aims to plug in the new pipeline management code in SCM. It also 
removes the old pipeline-related classes.






[jira] [Created] (HDFS-14004) TestLeaseRecovery2#testCloseWhileRecoverLease fails intermittently in trunk

2018-10-18 Thread Ayush Saxena (JIRA)
Ayush Saxena created HDFS-14004:
---

 Summary: TestLeaseRecovery2#testCloseWhileRecoverLease fails 
intermittently in trunk
 Key: HDFS-14004
 URL: https://issues.apache.org/jira/browse/HDFS-14004
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ayush Saxena
Assignee: Ayush Saxena


Reference

https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/930/testReport/junit/org.apache.hadoop.hdfs/TestLeaseRecovery2/testCloseWhileRecoverLease/






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-10-18 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/930/

[Oct 17, 2018 2:33:55 AM] (xiao) HDFS-13662. 
TestBlockReaderLocal#testStatisticsForErasureCodingRead is
[Oct 17, 2018 8:27:38 AM] (nanda) HDDS-656. Add logic for pipeline report and 
action processing in new
[Oct 17, 2018 9:29:09 AM] (stevel) HADOOP-15854. AuthToken Use StringBuilder 
instead of StringBuffer.
[Oct 17, 2018 10:01:53 AM] (stevel) HADOOP-15861. Move DelegationTokenIssuer to 
the right path. Contributed
[Oct 17, 2018 10:35:08 AM] (sunilg) YARN-8759. Copy of resource-types.xml is 
not deleted if test fails,
[Oct 17, 2018 10:43:44 AM] (elek) HDDS-563. Support hybrid VirtualHost style 
URL. Contributed by Bharat
[Oct 17, 2018 10:54:01 AM] (elek) HDDS-527. Show SCM chill mode status in SCM 
UI. Contributed by Yiqun
[Oct 17, 2018 12:15:35 PM] (nanda) HDDS-662. Introduce ContainerReplicaState in 
StorageContainerManager.
[Oct 17, 2018 1:14:05 PM] (nanda) HDDS-661. When a volume fails in datanode, 
VersionEndpointTask#call ends
[Oct 17, 2018 6:34:50 PM] (xiao) HADOOP-11100. Support to configure 
ftpClient.setControlKeepAliveTimeout.
[Oct 17, 2018 7:38:42 PM] (jlowe) HADOOP-15859. ZStandardDecompressor.c 
mistakes a class for an instance.
[Oct 17, 2018 9:19:17 PM] (jitendra) HDDS-651. Rename o3 to o3fs for Filesystem.
[Oct 17, 2018 11:40:25 PM] (inigoiri) HDFS-14000. RBF: Documentation should 
reflect right scripts for v3.0 and
[Oct 17, 2018 11:46:13 PM] (bharat) HDDS-683. Add a shell command to provide 
ozone mapping for a S3Bucket.
[Oct 18, 2018 12:51:29 AM] (hanishakoneru) HDDS-670. Fix OzoneFS directory 
rename.




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   module:hadoop-hdfs-project/hadoop-hdfs 
   Dead store to state in 
org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Saver.save(OutputStream,
 INodeSymlink) At 
FSImageFormatPBINode.java:org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Saver.save(OutputStream,
 INodeSymlink) At FSImageFormatPBINode.java:[line 663] 

Failed CTEST tests :

   test_test_libhdfs_threaded_hdfs_static 
   test_libhdfs_threaded_hdfspp_test_shim_static 

Failed junit tests :

   hadoop.hdfs.TestPread 
   hadoop.hdfs.TestLeaseRecovery2 
   hadoop.hdfs.server.datanode.TestDataNodeUUID 
   hadoop.hdfs.protocol.TestLayoutVersion 
   hadoop.fs.contract.router.web.TestRouterWebHDFSContractAppend 
   hadoop.yarn.server.nodemanager.containermanager.TestContainerManager 
   hadoop.yarn.server.resourcemanager.security.TestRMDelegationTokens 
   
hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage
 
   hadoop.yarn.service.TestYarnNativeServices 
   hadoop.yarn.service.TestCleanupAfterKill 
   hadoop.streaming.TestFileArgs 
   hadoop.streaming.TestMultipleCachefiles 
   hadoop.streaming.TestMultipleArchiveFiles 
   hadoop.streaming.TestSymLink 
   hadoop.streaming.TestStreamingBadRecords 
   hadoop.mapred.gridmix.TestDistCacheEmulation 
   hadoop.mapred.gridmix.TestLoadJob 
   hadoop.mapred.gridmix.TestGridmixSubmission 
   hadoop.mapred.gridmix.TestSleepJob 
   hadoop.tools.TestDistCh 
   hadoop.yarn.sls.TestReservationSystemInvariants 
   hadoop.yarn.sls.TestSLSRunner 
   hadoop.yarn.sls.TestSLSStreamAMSynth 
   hadoop.yarn.sls.TestSLSGenericSynth 
   hadoop.yarn.sls.appmaster.TestAMSimulator 
   hadoop.yarn.sls.nodemanager.TestNMSimulator 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/930/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/930/artifact/out/diff-compile-javac-root.txt
  [296K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/930/artifact/out/diff-checkstyle-root.txt
  [17M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/930/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/930/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/930/artifact/out/diff-patch-pylint.txt
  [40K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/930/artifact/out/diff-patch-shellcheck.txt
  [68K]

   shelldocs:

   

[jira] [Created] (HDFS-14003) Fix findbugs warning in trunk

2018-10-18 Thread Yiqun Lin (JIRA)
Yiqun Lin created HDFS-14003:


 Summary: Fix findbugs warning in trunk
 Key: HDFS-14003
 URL: https://issues.apache.org/jira/browse/HDFS-14003
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Yiqun Lin
Assignee: Yiqun Lin


A findbugs warning has recently been generated in trunk.
 
[https://builds.apache.org/job/PreCommit-HDFS-Build/25298/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html]

It looks like this was introduced by this commit: 
[https://github.com/apache/hadoop/commit/b60ca37914b22550e3630fa02742d40697decb31#diff-116c9c55048a5e9df753f219c4b3f233]

We should clean this up.






Re: Hadoop 3.2 Release Plan proposal

2018-10-18 Thread Sunil G
Hi Folks,

As we previously communicated for the 3.2.0 release, it has been delayed due to a
few blockers in our gate.

I just cut branch-3.2.0 for release purpose. branch-3.2 will be open for
all bug fixes.

- Sunil


On Tue, Oct 16, 2018 at 8:59 AM Sunil G  wrote:

> Hi Folks,
>
> We are now close to RC as other blocker issues are now merged to trunk and
> branch-3.2. Last 2 critical issues are closer to merge and will be
> committed in few hours.
> With this, I will be creating 3.2.0 branch today and will go ahead with RC
> related process.
>
> - Sunil
>
> On Mon, Oct 15, 2018 at 11:43 PM Jonathan Bender 
> wrote:
>
>> Hello, were there any updates around the 3.2.0 RC timing? All I see in
>> the current blockers are related to the new Submarine subproject, wasn't
>> sure if that is what is holding things up.
>>
>> Cheers,
>> Jon
>>
>> On Tue, Oct 2, 2018 at 7:13 PM, Sunil G  wrote:
>>
>>> Thanks Robert and Haibo for quickly correcting same.
>>> Sigh, I somehow missed one file while committing the change. Sorry for
>>> the
>>> trouble.
>>>
>>> - Sunil
>>>
>>> On Wed, Oct 3, 2018 at 5:22 AM Robert Kanter 
>>> wrote:
>>>
>>> > Looks like there's two that weren't updated:
>>> > >> [115] 16:32 : hadoop-common (trunk) :: grep "3.2.0-SNAPSHOT" . -r
>>> > --include=pom.xml
>>> > ./hadoop-project/pom.xml:
>>> > 3.2.0-SNAPSHOT
>>> > ./pom.xml:3.2.0-SNAPSHOT
>>> >
>>> > I've just pushed in an addendum commit to fix those.
>>> > In the future, please make sure to do a sanity compile when updating
>>> poms.
>>> >
>>> > thanks
>>> > - Robert
>>> >
>>> > On Tue, Oct 2, 2018 at 11:44 AM Aaron Fabbri
>>> 
>>> > wrote:
>>> >
>>> >> Trunk is not building for me.. Did you miss a 3.2.0-SNAPSHOT in the
>>> >> top-level pom.xml?
>>> >>
>>> >>
>>> >> On Tue, Oct 2, 2018 at 10:16 AM Sunil G  wrote:
>>> >>
>>> >> > Hi All
>>> >> >
>>> >> > As mentioned in earlier mail, I have cut branch-3.2 and reset trunk
>>> to
>>> >> > 3.3.0-SNAPSHOT. I will share the RC details sooner once all
>>> necessary
>>> >> > patches are pulled into branch-3.2.
>>> >> >
>>> >> > Thank You
>>> >> > - Sunil
>>> >> >
>>> >> >
>>> >> > On Mon, Sep 24, 2018 at 2:00 PM Sunil G  wrote:
>>> >> >
>>> >> > > Hi All
>>> >> > >
>>> >> > > We are now down to the last Blocker and HADOOP-15407 is merged to
>>> >> trunk.
>>> >> > > Thanks for the support.
>>> >> > >
>>> >> > > *Plan for RC*
>>> >> > > 3.2 branch cut and reset trunk : *25th Tuesday*
>>> >> > > RC0 for 3.2: *28th Friday*
>>> >> > >
>>> >> > > Thank You
>>> >> > > Sunil
>>> >> > >
>>> >> > >
>>> >> > > On Mon, Sep 17, 2018 at 3:21 PM Sunil G 
>>> wrote:
>>> >> > >
>>> >> > >> Hi All
>>> >> > >>
>>> >> > >> We are down to 3 Blockers and 4 Critical now. Thanks all of you
>>> for
>>> >> > >> helping in this. I am following up on these tickets, once its
>>> closed
>>> >> we
>>> >> > >> will cut the 3.2 branch.
>>> >> > >>
>>> >> > >> Thanks
>>> >> > >> Sunil Govindan
>>> >> > >>
>>> >> > >>
>>> >> > >> On Wed, Sep 12, 2018 at 5:10 PM Sunil G 
>>> wrote:
>>> >> > >>
>>> >> > >>> Hi All,
>>> >> > >>>
>>> >> > >>> Inline with the original 3.2 communication proposal dated 17th
>>> July
>>> >> > >>> 2018, I would like to provide more updates.
>>> >> > >>>
>>> >> > >>> We are approaching previously proposed code freeze date
>>> (September
>>> >> 14,
>>> >> > >>> 2018). So I would like to cut 3.2 branch on 17th Sept and point
>>> >> > existing
>>> >> > >>> trunk to 3.3 if there are no issues.
>>> >> > >>>
>>> >> > >>> *Current Release Plan:*
>>> >> > >>> Feature freeze date : all features to merge by September 7,
>>> 2018.
>>> >> > >>> Code freeze date : blockers/critical only, no improvements and
>>> >> > >>> blocker/critical bug-fixes September 14, 2018.
>>> >> > >>> Release date: September 28, 2018
>>> >> > >>>
>>> >> > >>> If any critical/blocker tickets which are targeted to 3.2.0, we
>>> >> need to
>>> >> > >>> backport to 3.2 post branch cut.
>>> >> > >>>
>>> >> > >>> Here's an updated 3.2.0 feature status:
>>> >> > >>>
>>> >> > >>> 1. Merged & Completed features:
>>> >> > >>>
>>> >> > >>> - (Wangda) YARN-8561: Hadoop Submarine project for DeepLearning
>>> >> > >>> workloads Initial cut.
>>> >> > >>> - (Uma) HDFS-10285: HDFS Storage Policy Satisfier
>>> >> > >>> - (Sunil) YARN-7494: Multi Node scheduling support in Capacity
>>> >> > >>> Scheduler.
>>> >> > >>> - (Chandni/Eric) YARN-7512: Support service upgrade via YARN
>>> Service
>>> >> > API
>>> >> > >>> and CLI.
>>> >> > >>> - (Naga/Sunil) YARN-3409: Node Attributes support in YARN.
>>> >> > >>> - (Inigo) HDFS-12615: Router-based HDFS federation. Improvement
>>> >> works.
>>> >> > >>>
>>> >> > >>> 2. Features close to finish:
>>> >> > >>>
>>> >> > >>> - (Steve) S3Guard Phase III. Close to commit.
>>> >> > >>> - (Steve) S3a phase V. Close to commit.
>>> >> > >>> - (Steve) Support Windows Azure Storage. Close to commit.
>>> >> > >>>
>>> >> > >>> 3. Tentative/Cancelled features for 3.2:
>>> >> > >>> - (Rohith) YARN-5742: Serve aggregated logs of historical apps

[jira] [Created] (HDDS-693) Support multi-chunk signatures in s3g PUT object endpoint

2018-10-18 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-693:
-

 Summary: Support multi-chunk signatures in s3g PUT object endpoint
 Key: HDDS-693
 URL: https://issues.apache.org/jira/browse/HDDS-693
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: S3
Reporter: Elek, Marton
Assignee: Elek, Marton


I tried to execute the s3a unit tests against our s3 gateway, and in 
ITestS3AContractMkdir.testMkDirRmRfDir I got the following error: 

{code}
org.apache.hadoop.fs.FileAlreadyExistsException: Can't make directory for path 
's3a://buckettest/test' since it is a file.

at 
org.apache.hadoop.fs.s3a.S3AFileSystem.innerMkdirs(S3AFileSystem.java:2077)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.mkdirs(S3AFileSystem.java:2027)
at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:2274)
at 
org.apache.hadoop.fs.contract.AbstractContractMkdirTest.testMkDirRmRfDir(AbstractContractMkdirTest.java:55)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
{code}

Checking the created key, I found that its size is not zero (it's a directory 
entry) but 86 bytes. Checking the content of the key, I can see:

{code}
 cat /tmp/qwe2
0;chunk-signature=23abb2bd920ddeeaac78a63ed808bc59fa6e7d3ef0e356474b82cdc2f8c93c40
{code}

The reason is that it was uploaded with a multi-chunk signature.

When the header x-amz-content-sha256=STREAMING-AWS4-HMAC-SHA256-PAYLOAD is set, 
the body is special: multiple signed chunks follow each other, with 
additional signature lines.

See the documentation for more details:
https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-streaming.html

In this jira I would add initial support for this.
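
A minimal sketch of decoding one signed chunk header (the helper is illustrative; 
a real endpoint would stream through every chunk in the body): each header line 
has the form `<hex-size>;chunk-signature=<signature>`, so the 86-byte body seen 
above is just the final zero-length chunk header.

```java
public class ChunkSignatureDemo {
    // Extract the payload size from an aws-chunked header line of the form
    // "<hex-size>;chunk-signature=<signature>".
    static int chunkSize(String headerLine) {
        return Integer.parseInt(
            headerLine.substring(0, headerLine.indexOf(';')), 16);
    }

    public static void main(String[] args) {
        // The 86-byte key content observed in the bug report.
        String header = "0;chunk-signature="
            + "23abb2bd920ddeeaac78a63ed808bc59fa6e7d3ef0e356474b82cdc2f8c93c40";
        System.out.println("payload bytes: " + chunkSize(header));
    }
}
```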






[jira] [Created] (HDDS-692) Use the ProgressBar class in the RandomKeyGenerator freon test

2018-10-18 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-692:
-

 Summary: Use the ProgressBar class in the RandomKeyGenerator freon 
test
 Key: HDDS-692
 URL: https://issues.apache.org/jira/browse/HDDS-692
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Elek, Marton
Assignee: Zsolt Horvath


HDDS-443 provides a reusable progress bar to make it easier to add more freon 
tests, but the existing RandomKeyGenerator test 
(hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/RandomKeyGenerator.java)
 still doesn't use it. 

It would be good to switch to the new progress bar there.






[jira] [Created] (HDDS-691) Dependency convergence error for org.apache.hadoop:hadoop-annotations

2018-10-18 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HDDS-691:


 Summary: Dependency convergence error for 
org.apache.hadoop:hadoop-annotations
 Key: HDDS-691
 URL: https://issues.apache.org/jira/browse/HDDS-691
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze









[jira] [Created] (HDDS-690) Javadoc build fails in hadoop-ozone

2018-10-18 Thread Takanobu Asanuma (JIRA)
Takanobu Asanuma created HDDS-690:
-

 Summary: Javadoc build fails in hadoop-ozone
 Key: HDDS-690
 URL: https://issues.apache.org/jira/browse/HDDS-690
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
 Environment: JDK8
Reporter: Takanobu Asanuma
Assignee: Takanobu Asanuma





