[jira] [Created] (HDFS-13949) Correct the description of dfs.datanode.disk.check.timeout in hdfs-default.xml

2018-10-01 Thread Toshihiro Suzuki (JIRA)
Toshihiro Suzuki created HDFS-13949:
---

 Summary: Correct the description of 
dfs.datanode.disk.check.timeout in hdfs-default.xml
 Key: HDFS-13949
 URL: https://issues.apache.org/jira/browse/HDFS-13949
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: Toshihiro Suzuki


The description of dfs.datanode.disk.check.timeout in hdfs-default.xml is as 
follows:
{code}
<property>
  <name>dfs.datanode.disk.check.timeout</name>
  <value>10m</value>
  <description>
    Maximum allowed time for a disk check to complete during DataNode
    startup. If the check does not complete within this time interval
    then the disk is declared as failed. This setting supports
    multiple time unit suffixes as described in dfs.heartbeat.interval.
    If no suffix is specified then milliseconds is assumed.
  </description>
</property>
{code}

I don't think the value of this config is used only during DataNode startup; it appears to be used whenever volumes are checked.
The description is misleading, so we should correct it.
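A possible corrected description (a sketch of the fix only, assuming the timeout applies to every disk check rather than just the one at startup) could simply drop the startup qualifier:

{code}
<property>
  <name>dfs.datanode.disk.check.timeout</name>
  <value>10m</value>
  <description>
    Maximum allowed time for a disk check to complete. If the check
    does not complete within this time interval then the disk is
    declared as failed. This setting supports multiple time unit
    suffixes as described in dfs.heartbeat.interval. If no suffix is
    specified then milliseconds is assumed.
  </description>
</property>
{code}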




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: [ANNOUNCE] Apache Hadoop Ozone 0.2.1-alpha release

2018-10-01 Thread Elek, Marton



TL;DR: No artifacts for 0.2.1-alpha.

The problem is that Ozone depends on hadoop-3.2.0-SNAPSHOT. To upload 
hadoop artifacts to Maven we would also need to upload a custom 3.2.0 
Hadoop to the Nexus.


There are multiple ways to resolve this issue ([1.] reduce the 
dependencies between ozone/hadoop-core, [2.] use 3.1.0 as the fix 
version, [3.] upload a custom ozone-0.2.1 version from hadoop-common, ...).


For now we decided to release without the Maven publish (the main 
release is the src package, anyway).


You can also see the discussion here:
https://issues.apache.org/jira/browse/HDDS-214

Marton

ps: why do you need artifacts from ozone? How would you like to use them?


On 10/1/18 8:33 PM, Ted Yu wrote:

Are the artifacts published on maven ?

I did a quick search but didn't find anything.

Cheers




Re: [ANNOUNCE] Apache Hadoop Ozone 0.2.1-alpha release

2018-10-01 Thread Ted Yu
Are the artifacts published on maven ?

I did a quick search but didn't find anything.

Cheers



[ANNOUNCE] Apache Hadoop Ozone 0.2.1-alpha release

2018-10-01 Thread Elek, Marton



It gives me great pleasure to announce that the Apache Hadoop community 
has voted to release Apache Hadoop Ozone 0.2.1-alpha.


Apache Hadoop Ozone is an object store for Hadoop built using Hadoop 
Distributed Data Store.


For more information and to download, please check

https://hadoop.apache.org/ozone

Note: This release is alpha quality; it is not recommended for use in 
production.


Many thanks to everyone who contributed to the release, and everyone in 
the Apache Hadoop community! The release is the result of work from many 
contributors. Thank you to all of them.


On behalf of the Hadoop community,
Márton Elek


ps: Hadoop Ozone and HDDS are released separately from the main Hadoop 
releases; this release doesn't include new Hadoop Yarn/Mapreduce/Hdfs 
versions.





[jira] [Created] (HDDS-566) Move OzoneSecure docker-compose after HDDS-447

2018-10-01 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-566:
---

 Summary: Move OzoneSecure docker-compose after HDDS-447
 Key: HDDS-566
 URL: https://issues.apache.org/jira/browse/HDDS-566
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


After HDDS-447, the docker-compose files have been moved from hadoop-dist to 
hadoop-ozone/dist. This ticket is opened to move the secure docker-compose 
added in HDDS-547 to the new location.






Re: [IMPORTANT] Apache JIRA doesn't properly show sub-tickets from parent

2018-10-01 Thread Wangda Tan
Just checked the JIRA again; the issue is now gone. It might have been
caused by an intermittent JIRA system issue.

Thanks,
Wangda



[DISCUSS] Deprecate fuse-dfs

2018-10-01 Thread Wei-Chiu Chuang
Hi fellow Hadoop developers,

I want to start this thread to raise awareness of the quality of
fuse-dfs. It appears that this sub-component is not being developed or
maintained, and that not many people are using it.

In the past two years, only one bug has been fixed (HDFS-13322).


https://issues.apache.org/jira/issues/?jql=project%20in%20(HADOOP%2C%20HDFS)%20AND%20text%20~%20fuse%20ORDER%20BY%20created%20DESC%2C%20updated%20DESC

It doesn't support keytab login, ACL permissions, rename, ... a number of
POSIX semantics. We also recently realized fuse-dfs doesn't work under
heavyweight workloads (think running SQL applications on it).

So what's the status now? Is anyone still using fuse-dfs in
production? Should we start the deprecation process? Or at least document
that it is not meant for anything beyond simple data transfer? IIRC vim
would even complain if you try to edit a file in a fuse_dfs directory.
-- 
A very happy Hadoop contributor


[jira] [Created] (HDDS-565) TestContainerPersistence fails regularly in Jenkins

2018-10-01 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HDDS-565:
---

 Summary: TestContainerPersistence fails regularly in Jenkins
 Key: HDDS-565
 URL: https://issues.apache.org/jira/browse/HDDS-565
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: test
Reporter: Hanisha Koneru


TestContainerPersistence tests are regularly failing in Jenkins with the error 
"{{Unable to create directory /dfs/data}}". 
In {{#init()}}, we set HDDS_DATANODE_DIR_KEY to a test dir location, but in 
{{#setupPaths}} we use DFS_DATANODE_DATA_DIR_KEY as the data dir location. 
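The mismatch can be illustrated with a small standalone sketch (a plain Map stands in for Hadoop's Configuration class, and the literal key strings here are assumptions for illustration; only the constant names come from the report above):

```java
import java.util.HashMap;
import java.util.Map;

/** Illustrative sketch of the config-key mismatch described above:
 *  the test's init() sets one key, but setupPaths() reads another,
 *  so the configured test directory never takes effect. */
public class ConfigKeyMismatch {
    // Key strings are illustrative assumptions, not verified HDDS values.
    static final String HDDS_DATANODE_DIR_KEY = "hdds.datanode.dir";
    static final String DFS_DATANODE_DATA_DIR_KEY = "dfs.datanode.data.dir";

    /** Mimics init() setting the HDDS key, then setupPaths() reading the DFS key. */
    static String misconfiguredDataDir() {
        Map<String, String> conf = new HashMap<>();
        conf.put(HDDS_DATANODE_DIR_KEY, "/tmp/test-data"); // what init() sets
        // setupPaths() looks up a *different* key and falls back to the default:
        return conf.getOrDefault(DFS_DATANODE_DATA_DIR_KEY, "/dfs/data");
    }

    public static void main(String[] args) {
        System.out.println(misconfiguredDataDir()); // prints "/dfs/data", not the test dir
    }
}
```

Aligning the two constants (or setting both keys in the test) would make the intended test directory take effect.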






[jira] [Created] (HDDS-564) Update docker-hadoop-runner branch to reflect changes done in HDDS-490

2018-10-01 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDDS-564:
-

 Summary: Update docker-hadoop-runner branch to reflect changes 
done in HDDS-490
 Key: HDDS-564
 URL: https://issues.apache.org/jira/browse/HDDS-564
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Namit Maheshwari


starter.sh needs to be modified to reflect the changes done in HDDS-490

 






[jira] [Created] (HDFS-13948) Provide Regex Based Mount Point In Inode Tree

2018-10-01 Thread zhenzhao wang (JIRA)
zhenzhao wang created HDFS-13948:


 Summary: Provide Regex Based Mount Point In Inode Tree
 Key: HDFS-13948
 URL: https://issues.apache.org/jira/browse/HDFS-13948
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: fs
Reporter: zhenzhao wang
Assignee: zhenzhao wang


This jira is created to support regex-based mount points in the Inode Tree. We 
noticed that mount points only support fixed target paths. However, we might 
have use cases where the target needs to refer to some fields from the source. 
E.g., we might want a mapping of /cluster1/user1 => /cluster1-dc1/user-nn-user1, 
where we refer to the `cluster` and `user` fields in the source to construct the 
target. It's impossible to achieve this with the current link type. Though we 
could set up a one-to-one mapping, the mount table would become bloated if we 
have thousands of users. Besides, a regex mapping would give us more 
flexibility. So we are going to build a regex-based mount point whose target 
can refer to groups from the source regex mapping. 
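The idea can be sketched with a small standalone example (the class name, source pattern, and target template here are purely illustrative, not the proposed HDFS interface; they encode the /cluster1/user1 => /cluster1-dc1/user-nn-user1 example above):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Illustrative sketch of a regex-based mount point: the target path
 *  is built from named groups captured by the source pattern. The
 *  pattern and target template are hypothetical examples. */
public class RegexMountPoint {
    // Source pattern capturing the `cluster` and `user` fields.
    private static final Pattern SRC =
        Pattern.compile("^/(?<cluster>\\w+)/(?<user>\\w+)$");

    /** Resolve a source path to its target path, or null if it doesn't match. */
    static String resolve(String srcPath) {
        Matcher m = SRC.matcher(srcPath);
        if (!m.matches()) {
            return null;
        }
        // The target template refers to groups captured from the source path.
        return "/" + m.group("cluster") + "-dc1/user-nn-" + m.group("user");
    }

    public static void main(String[] args) {
        System.out.println(resolve("/cluster1/user1")); // prints "/cluster1-dc1/user-nn-user1"
    }
}
```

A single pattern like this can stand in for thousands of one-to-one mount table entries.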






[IMPORTANT] Apache JIRA doesn't properly show sub-tickets from parent

2018-10-01 Thread Wangda Tan
Hi all devs,

Today I found that many sub-tickets don't show properly under their parent.

For example, YARN-6875 is the parent of YARN-7072, but YARN-7072 isn't shown
in YARN-6875's sub-ticket list.

I found many sub-tickets are gone from their parent JIRA, such as YARN-6223
(there were ~20+ sub-tickets) and YARN-2492 (there were ~70-80 sub-tickets).

Are there any recent changes to Apache JIRA?

Thanks,
Wangda


[jira] [Created] (HDDS-563) Improve VirtualHoststyle filter

2018-10-01 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-563:
---

 Summary: Improve VirtualHoststyle filter
 Key: HDDS-563
 URL: https://issues.apache.org/jira/browse/HDDS-563
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Bharat Viswanadham


a) The host HTTP header sometimes contains the port and sometimes not (with the 
aws cli we have the port, with mitmproxy we don't). It would be easier to remove 
it anyway, to make configuration simpler.

b) I found that we need to support a URL scheme where the volume comes from 
the domain ([http://vol1.s3g/]...) but the bucket is used path-style 
([http://vol1.s3g/bucket]). It seems that both goofys and the existing s3a unit 
tests (not sure, but it seems) require this scheme.
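A minimal standalone sketch of both points (stripping the optional port from the Host header, then deriving the volume from a `vol1.s3g`-style domain; the class and method names are hypothetical, and `s3g` is taken from the example URLs, not from the actual filter code):

```java
/** Illustrative sketch: (a) strip an optional port from the Host
 *  header, and (b) derive the volume name from a virtual-host-style
 *  domain. Names are hypothetical, not the HDDS filter's API. */
public class HostHeaderNormalizer {

    /** (a) Remove an optional ":port" suffix, since clients differ on
     *  whether they send it. (Naive: would mis-handle IPv6 literals.) */
    static String stripPort(String host) {
        int colon = host.lastIndexOf(':');
        return colon >= 0 ? host.substring(0, colon) : host;
    }

    /** (b) Extract the volume from "vol1.s3g"-style hosts; returns
     *  null for a plain "s3g" host, i.e. a pure path-style request. */
    static String volumeOf(String host, String gatewayDomain) {
        String h = stripPort(host);
        if (h.endsWith("." + gatewayDomain)) {
            return h.substring(0, h.length() - gatewayDomain.length() - 1);
        }
        return null;
    }

    public static void main(String[] args) {
        System.out.println(volumeOf("vol1.s3g:9878", "s3g")); // prints "vol1"
    }
}
```

With the port stripped first, the same volume lookup works whether or not the client (aws cli vs. a proxy) includes the port in the Host header.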

 

This Jira is created from [~elek] comments on HDDS-525 jira.






[jira] [Resolved] (HDDS-214) HDDS/Ozone First Release

2018-10-01 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-214.
---
Resolution: Fixed

> HDDS/Ozone First Release
> 
>
> Key: HDDS-214
> URL: https://issues.apache.org/jira/browse/HDDS-214
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Anu Engineer
>Assignee: Elek, Marton
>Priority: Major
> Attachments: Ozone 0.2.1 release plan.pdf
>
>
> This is an umbrella JIRA that collects all work items, design discussions, 
> etc. for Ozone's release. We will post a design document soon to open the 
> discussion and nail down the details of the release.
> cc: [~xyao] , [~elek], [~arpitagarwal] [~jnp] , [~msingh] [~nandakumar131], 
> [~bharatviswa]






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-10-01 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/913/

No changes




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine
 
   
org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceUtils.getComponentArrayJson(String, int, String) concatenates strings using + in a loop at YarnServiceUtils.java:[line 123] 

Failed CTEST tests :

   test_test_libhdfs_threaded_hdfs_static 
   test_libhdfs_threaded_hdfspp_test_shim_static 

Failed junit tests :

   hadoop.security.TestRaceWhenRelogin 
   hadoop.hdfs.TestLeaseRecovery2 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.yarn.server.nodemanager.containermanager.TestNMProxy 
   
hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage
 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/913/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/913/artifact/out/diff-compile-javac-root.txt
  [300K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/913/artifact/out/diff-checkstyle-root.txt
  [17M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/913/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/913/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/913/artifact/out/diff-patch-pylint.txt
  [40K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/913/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/913/artifact/out/diff-patch-shelldocs.txt
  [12K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/913/artifact/out/whitespace-eol.txt
  [9.4M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/913/artifact/out/whitespace-tabs.txt
  [1.1M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/913/artifact/out/xml.txt
  [4.0K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/913/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-submarine-warnings.html
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/913/artifact/out/branch-findbugs-hadoop-hdds_client.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/913/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/913/artifact/out/branch-findbugs-hadoop-hdds_framework.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/913/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/913/artifact/out/branch-findbugs-hadoop-hdds_tools.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/913/artifact/out/branch-findbugs-hadoop-ozone_client.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/913/artifact/out/branch-findbugs-hadoop-ozone_common.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/913/artifact/out/branch-findbugs-hadoop-ozone_objectstore-service.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/913/artifact/out/branch-findbugs-hadoop-ozone_ozone-manager.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/913/artifact/out/branch-findbugs-hadoop-ozone_ozonefs.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/913/artifact/out/branch-findbugs-hadoop-ozone_s3gateway.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/913/artifact/out/branch-findbugs-hadoop-ozone_tools.txt
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/913/artifact/out/diff-javadoc-javadoc-root.txt
  [756K]

   CTEST: