[jira] [Commented] (HDDS-891) Create customized yetus personality for ozone

2019-02-25 Thread Allen Wittenauer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16777128#comment-16777128
 ] 

Allen Wittenauer commented on HDDS-891:
---

bq. Unfortunately yetus doesn't work very well for the patches for 
ozone/submarine.

No one is willing to do the work, apparently.  Of course, it doesn't help that 
ozone and submarine build integrations are full of bad decisions. 

> Create customized yetus personality for ozone
> -
>
> Key: HDDS-891
> URL: https://issues.apache.org/jira/browse/HDDS-891
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>
> Ozone pre-commit builds (such as 
> https://builds.apache.org/job/PreCommit-HDDS-Build/) use the official hadoop 
> personality that ships with yetus.
> Yetus personalities are bash scripts which contain customizations for 
> specific builds.
> The hadoop personality tries to identify which projects should be built and 
> uses a partial build to compile only the required subprojects, because the 
> full build is very time-consuming.
> But in Ozone:
> 1.) The build + unit tests are very fast
> 2.) We don't need all the checks (for example the hadoop-specific shading 
> test)
> 3.) We prefer to do a full build and a full unit test run for the hadoop-ozone 
> and hadoop-hdds subprojects (for example the hadoop-ozone integration tests 
> should always be executed as they contain many generic unit tests)
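
For context on the quoted description: a yetus personality is just a bash file 
that test-patch sources, and its hook functions decide which modules get built 
for each check. A minimal ozone-flavored sketch, hedged and hypothetical but 
following the hook names the stock hadoop personality uses, might look like:

{code}
#!/usr/bin/env bash
# Hedged sketch only: a personality that always queues a full build of
# the two ozone trees instead of hadoop's partial-build logic.
function personality_modules
{
  local repostatus=$1   # branch or patch
  local testtype=$2     # which check is asking (compile, unit, ...)

  clear_personality_queue
  personality_enqueue_module hadoop-hdds
  personality_enqueue_module hadoop-ozone
}
{code}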






[jira] [Commented] (HDDS-891) Create customized yetus personality for ozone

2019-02-20 Thread Allen Wittenauer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16773255#comment-16773255
 ] 

Allen Wittenauer commented on HDDS-891:
---

1.
bq. If I understood well, you suggest to double quote the 
$DOCKER_INTERACTIVE_RUN variable in the docker run line. 

It's not even that.  What you want to do should really just be an extra flag to 
set the extra options rather than a full-blown 'set it from the outside' 
variable.

bq.  But please let me know if I am wrong. 

Let me do one better and give you an example.

DOCKER_INTERACTIVE_RUN opens the door for users to set command line options to 
docker.  Most notably, -c and -v and a few others share one particular 
characteristic: they reference the file system.  As soon as shell code hits the 
file system, it is no longer safe to assume space-delimited options.  In other 
words, -c /My Cool Filesystem/Docker Files/config.json or -v /c_drive/Program 
Files/Data:/data may be something a user wants to do, but the script now breaks 
because of the IFS assumptions.

This bug is exactly why shellcheck is correctly flagging it as busted code.  
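
To make the failure concrete, a hedged sketch (the option value and image name 
are illustrative):

{code}
# Broken: the unquoted expansion gets word-split on IFS, so a path with
# spaces reaches docker as several separate arguments.
DOCKER_INTERACTIVE_RUN='-v /c_drive/Program Files/Data:/data'
docker run $DOCKER_INTERACTIVE_RUN hadoop-build-env

# Safer: keep the extra options in a bash array; each element survives
# as a single argument, spaces and all.
docker_extra_args=(-v '/c_drive/Program Files/Data:/data')
docker run "${docker_extra_args[@]}" hadoop-build-env
{code}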

2.
bq.  running docker based acceptance tests,

While it is not well tested, it should be doable with 0.9.0+ in --dockerind 
mode. If external volumes are required to be mounted, things might get wonky, 
though. Just be aware that users and other ASF build server patrons get annoyed 
when jobs take too long during peak hours: precommit should be quick checks, 
with better, more thorough checks happening post-commit at full build time. If 
the acceptance tests can't be triggered from maven test, then a custom test 
needs to be defined, either in the personality (recommended) or in the 
--user-plugins directory (not recommended, mainly because people will forget to 
set this option when they run test-patch interactively).
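
As a rough sketch of the in-personality route (the test name and file filter 
are hypothetical; the hook naming follows yetus's plugin convention):

{code}
# Declare a new test type so test-patch can schedule it.
add_test_type ozone_acceptance

# Queue the test only when the patch touches the ozone trees.
function ozone_acceptance_filefilter
{
  declare filename=$1

  if [[ ${filename} =~ hadoop-ozone|hadoop-hdds ]]; then
    add_test ozone_acceptance
  fi
}
{code}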

bq.   run ALL the hdds/ozone unit tests all the time, not just for the changed 
projects

See above. Full tests are run as part of the nightlies due to time constraints.

bq. check ALL the findbugs/checkstyle issues not just the new ones

For findbugs, that's the --findbugs-strict-precheck option which AFAIK most/all 
of the Hadoop jobs have enabled. It will fail the patch if there are 
pre-existing findbugs issues. Adding a similar option to checkstyle wouldn't be 
hard, but a reminder that this info is also presented in the nightlies.  Also, 
if the source tree is already clean, then new checkstyle failures should 
technically be 'all' already.  
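
For a local run, that is one more flag to test-patch (a hedged example; the 
patch file name is a placeholder):

{code}
test-patch --findbugs-strict-precheck --project=hadoop HDDS-891.00.patch
{code}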

Experience has shown, though, that users tend to blow right past precheck 
failures and commit code anyway.  [Hell, many PMC members ignore errors that 
their own patches generated, blaming the Jenkins nodes when it's pretty clear 
that their Java code has, e.g., javadoc errors.]

3. 
bq. I am convinced to run the stricter tests in addition to the existing 
yetus tests.

It sounds like everything you want is either already there or is fairly trivial 
to implement.

bq. Please let me know if I can do something to get Yetus results for the PRs.

I think [~ste...@apache.org] just needs to edit the user/pw for the 
hadoop-multibranch job credentials and get HADOOP-16035 committed. I do 
practically zero Java these days, so it may not be 100% and will probably need 
a few more tweaks after it is implemented.  (A definite flaw with Jenkins' 
multibranch pipelines.)




[jira] [Comment Edited] (HDDS-891) Create customized yetus personality for ozone

2019-02-16 Thread Allen Wittenauer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16770158#comment-16770158
 ] 

Allen Wittenauer edited comment on HDDS-891 at 2/16/19 5:33 PM:


I already answered your Yetus question: if ozone needs special handling for its 
submodules, it needs to go into the hadoop personality.  Cutting back on 
options just because you think they don't apply to Ozone is almost entirely 
incorrect.   It's still part of Hadoop and it's still very possible for people 
to modify core Hadoop through it.  e.g., HDDS-1115, filed by you, does exactly 
that.

The project doesn't have enough people working on it anymore to have wildly 
different build characteristics.  The fewer variances between Hadoop 
subprojects the easier it is for the handful of people still working on it.  

HADOOP-16035 really drives this point home:  there is, without a significant 
amount of hacks or--as the PR jobs you've already got set up 
demonstrate--massive confusion, only one precommit for github PRs for all 
source code sitting in the hadoop tree. Never mind that users running 
test-patch locally aren't going to know to use some oddball personality file 
that is off to the side.

As to the various bugs introduced by HDDS-146... well, [~anu] should never have 
+1'd without a precommit run and it definitely should not have been committed 
without one.  The shellcheck errors are through the roof.  If precommit had 
been run, or if Ozone were paying attention to the nightly qbt runs, the errors 
are all laid out there.  By far, the most problematic code is the use of 
unquoted variables for command line arguments.  TBF, it's a common pitfall that 
a lot of inexperienced shell programmers hit, but given that we have tools in 
our build system to specifically point these types of errors out, there's 
little excuse for _new_ code to have this problem.

[In hindsight, I should have incompatibly broken HADOOP_OPTS and friends in 
3.x, but it is what it is.  It's been a bug in Hadoop since Doug copied the 
original code from Tomcat or Maven or whatever.  Meanwhile, some users are 
pretty much required to do a lot of gymnastics to make the system work because 
of that long, long, long-standing bug.]






[jira] [Comment Edited] (HDDS-891) Create customized yetus personality for ozone

2019-02-15 Thread Allen Wittenauer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16770031#comment-16770031
 ] 

Allen Wittenauer edited comment on HDDS-891 at 2/16/19 7:14 AM:


From Precommit-HDDS-Admin:

{code}
https://gist.githubusercontent.com/elek/315f251b71bfb8d5f66e99eafbca7808/raw/a184384a5e13c345362fd15661584e5984886f51/ozone.sh
{code}

Why did you make this change despite the -1? 









[jira] [Commented] (HDDS-891) Create customized yetus personality for ozone

2018-12-09 Thread Allen Wittenauer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16714317#comment-16714317
 ] 

Allen Wittenauer commented on HDDS-891:
---

bq. Or is the problem to check a modification which is filed under HADOOP but 
modifies something under hadoop-ozone/hadoop-hdds? I don't think it's handled 
right now (so we are as good as now), and I didn't see any example for that.

We've already been seeing this fail in the nightly qbt since ozone got 
committed.  Whether we see changes happening anywhere else or not is 
irrelevant.  

bq. Can't see any problem here. A full (hadoop + ozone) checkstyle should 
execute exactly the same checkstyle rules which are checked by the ozone 
personality.

It currently does not.

bq. For me using hadoop + ozone personalities seems to be a more clean 
separation. 

Ozone is part of Hadoop.  The whole point of making it that way was, from what 
I can tell, to get co-bundled at some point in the future.  Making a separate 
personality goes in exactly the opposite direction, and for reasons which have 
yet to be justified as desirable.

My -1 remains.






[jira] [Commented] (HDDS-891) Create customized yetus personality for ozone

2018-12-09 Thread Allen Wittenauer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16714106#comment-16714106
 ] 

Allen Wittenauer commented on HDDS-891:
---

bq. Why is it not enough?

Because:

a) It's still possible to modify these components from the other JIRA 
categories.
b) This part of the project still needs the capability to modify other modules  
(e.g., hadoop-dist)
c) qbt runs over the entire source repository
d) It's incredibly short-sighted.

I'm sure I'm forgetting things, but it doesn't really matter. Fundamentally, 
this stuff is part of the Hadoop source. 




[jira] [Commented] (HDDS-891) Create customized yetus personality for ozone

2018-12-05 Thread Allen Wittenauer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16710786#comment-16710786
 ] 

Allen Wittenauer commented on HDDS-891:
---

bq.  Please note that hadoop-ozone and hadoop-hdds in the current trunk don't 
depend on the in-tree hadoop-hdfs/common projects. They depend on 
hadoop-3.2-SNAPSHOT as of now and we would like to switch to a stable release 
as soon as possible.

That won't always be the case unless Ozone becomes its own project.  There's no 
value in creating technical debt here.

bq. Technically it's possible

Yup: HDDS-146, for example, changed start-build-env.sh (and introduced a bug).  
So clearly there is still some dependence despite everything said above.

bq. b): I like the idea and I tried to implement it. Would you be so kind to 
review the v2 patch of YETUS-631?

YETUS-631 isn't an implementation of b at all.  [and, FWIW, I'm going to reject 
that patch.  I've got a better fix as part of  YETUS-723.]

bq. I checked the last two commits: If I understood well there was an 
additional property in the root pom.xml for ozone version (low risk) and with 
the last commit it was removed, so the parent pom.xml shouldn't be modified any 
more. 

Irrelevant.  A change is a change.  There is no way to guarantee that further 
changes won't leak outside these two modules short of not having any other code 
in the branch/repo.

My binding vote remains -1.




[jira] [Commented] (HDDS-891) Create customized yetus personality for ozone

2018-12-05 Thread Allen Wittenauer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16710384#comment-16710384
 ] 

Allen Wittenauer commented on HDDS-891:
---

-1

I'm going to stop you from shooting yourself in the foot here.  

bq. But in Ozone:

That list may be true for patches that only modify the hadoop-ozone or 
hadoop-hdds maven modules, but patches uploaded to the HDDS project sometimes 
hit more than just those modules (e.g., the last two changes to /pom.xml came 
from HDDS!). Plus, the union of those two modules is the root of the tree, 
which means it really is building everything.

It's important to also remember that hadoop-ozone and hadoop-hdds are fair game 
to be modified by other JIRA projects as well.

Consider some other things:

a) Re-arrange the source so that the two modules aren't both children off of /, 
so that modifying them won't trigger a full build.
b) Modify the existing personality to slap -Phdds on when the changed files 
list includes hadoop-ozone or hadoop-hdds (see the sketch below).
c) It should probably also trigger a native library build. (There's a 
tremendous amount of inconsistency with the test runs presently.)
d) Modify the personality to skip the shaded check if the patch ONLY modifies 
hadoop-ozone/hadoop-hdds.

With the exception of the first, these are all pretty trivial to do and have a 
way higher chance of success.
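
Option b, for instance, is only a few lines inside the existing personality. A 
hedged sketch (CHANGED_FILES is the changed-file list yetus tracks; the flags 
variable is hypothetical and stands in for however the personality passes 
extra maven arguments):

{code}
# Inside the hadoop personality: add -Phdds when the patch touches
# either ozone tree, rather than creating a whole new personality.
if [[ ${CHANGED_FILES} =~ hadoop-ozone || ${CHANGED_FILES} =~ hadoop-hdds ]]; then
  extra_maven_args="${extra_maven_args} -Phdds"
fi
{code}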




[jira] [Commented] (HDFS-14093) HDFS to pass new available() tests

2018-11-22 Thread Allen Wittenauer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16696404#comment-16696404
 ] 

Allen Wittenauer commented on HDFS-14093:
-

I don't think I understand the question... But I might be able to answer in a 
roundabout way. :)

Unless something has changed since I last touched it months and months ago, all 
of the Hadoop precommit jobs are configured exactly the same.  The only bearing 
the JIRA project has on how Yetus is testing something for Hadoop is what queue 
it goes into on Jenkins. (and all jobs are FIFO, so...)

> HDFS to pass new available() tests
> --
>
> Key: HDFS-14093
> URL: https://issues.apache.org/jira/browse/HDFS-14093
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-15870-002.patch
>
>
> submit patches of HADOOP-15920 to the HDFS yetus runs, see what they say, 
> tune tests/HDFS as appropriate






[jira] [Commented] (HDFS-14093) HDFS to pass new available() tests

2018-11-21 Thread Allen Wittenauer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16695300#comment-16695300
 ] 

Allen Wittenauer commented on HDFS-14093:
-

Yetus determines which tests to run based upon the contents of the patch.  So 
make a dummy patch that specifically touches something in HDFS if you want to 
run the HDFS unit tests.
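
Something like this, as a hedged example (the touched test file is arbitrary):

{code}
# Append a harmless line to anything under hadoop-hdfs so yetus
# schedules the HDFS unit tests, then attach the resulting patch.
echo '// trigger HDFS tests' >> \
  hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java
git diff > HDFS-14093.00.patch
{code}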




[jira] [Resolved] (HDFS-7033) dfs.web.authentication.filter should be documented

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-7033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-7033.

Resolution: Won't Fix

> dfs.web.authentication.filter should be documented
> --
>
> Key: HDFS-7033
> URL: https://issues.apache.org/jira/browse/HDFS-7033
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, security
>Affects Versions: 2.4.0
>Reporter: Allen Wittenauer
>Assignee: Srikanth Upputuri
>Priority: Major
>
> HDFS-5716 added dfs.web.authentication.filter but this doesn't appear to be 
> documented anywhere.






[jira] [Resolved] (HDFS-7307) Need 'force close'

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-7307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-7307.

Resolution: Won't Fix

> Need 'force close'
> --
>
> Key: HDFS-7307
> URL: https://issues.apache.org/jira/browse/HDFS-7307
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Allen Wittenauer
>Priority: Major
>
> Until HDFS-4882 and HDFS-7306 get real fixes, operations teams need a way to 
> force close files.  DNs are essentially held hostage by broken clients that 
> never close.  This situation will get worse as longer/permanently running 
> jobs start increasing.






[jira] [Resolved] (HDFS-7231) rollingupgrade needs some guard rails

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-7231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-7231.

Resolution: Won't Fix

> rollingupgrade needs some guard rails
> -
>
> Key: HDFS-7231
> URL: https://issues.apache.org/jira/browse/HDFS-7231
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Allen Wittenauer
>Priority: Critical
>
> See first comment.






[jira] [Resolved] (HDFS-7777) Consolidate the HA NN documentation down to one

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-7777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-7777.

Resolution: Won't Fix

> Consolidate the HA NN documentation down to one
> ---
>
> Key: HDFS-7777
> URL: https://issues.apache.org/jira/browse/HDFS-7777
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Allen Wittenauer
>Priority: Major
>
> These are nearly the same document now.  Let's consolidate.






[jira] [Resolved] (HDFS-7904) NFS hard codes ShellBasedIdMapping

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-7904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-7904.

Resolution: Won't Fix

> NFS hard codes ShellBasedIdMapping
> --
>
> Key: HDFS-7904
> URL: https://issues.apache.org/jira/browse/HDFS-7904
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Reporter: Allen Wittenauer
>Priority: Major
>
> The current NFS doesn't allow one to configure an alternative to the 
> shell-based id mapping provider.  






[jira] [Resolved] (HDFS-7850) distribute-excludes and refresh-namenodes update to new shell framework

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-7850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-7850.

  Resolution: Won't Fix
Target Version/s:   (was: )

> distribute-excludes and refresh-namenodes update to new shell framework
> ---
>
> Key: HDFS-7850
> URL: https://issues.apache.org/jira/browse/HDFS-7850
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Priority: Major
>
> These need to get updated to use new shell framework.






[jira] [Resolved] (HDFS-7983) HTTPFS proxy server needs pluggable-auth support

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-7983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-7983.

  Resolution: Won't Fix
Target Version/s:   (was: )

> HTTPFS proxy server needs pluggable-auth support
> 
>
> Key: HDFS-7983
> URL: https://issues.apache.org/jira/browse/HDFS-7983
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Priority: Blocker
>
> Now that WebHDFS has been fixed to support pluggable auth, the httpfs proxy 
> server also needs support.






[jira] [Updated] (HDFS-8251) Move the synthetic load generator into its own package

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-8251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-8251:
---
Resolution: Won't Fix
Status: Resolved  (was: Patch Available)

> Move the synthetic load generator into its own package
> --
>
> Key: HDFS-8251
> URL: https://issues.apache.org/jira/browse/HDFS-8251
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: J.Andreina
>Priority: Major
>  Labels: BB2015-05-RFC
> Attachments: HDFS-8251.1.patch
>
>
> It doesn't really make sense for the HDFS load generator to be a part of the 
> (extremely large) mapreduce jobclient package. It should be pulled out and 
> put in its own package, probably in hadoop-tools.






[jira] [Resolved] (HDFS-9056) add set/remove quota capability to webhdfs

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-9056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-9056.

  Resolution: Won't Fix
Target Version/s:   (was: )

> add set/remove quota capability to webhdfs
> --
>
> Key: HDFS-9056
> URL: https://issues.apache.org/jira/browse/HDFS-9056
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Priority: Major
>
> It would be nice to be able to set and remove quotas via WebHDFS.






[jira] [Resolved] (HDFS-9055) WebHDFS REST v2

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-9055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-9055.

  Resolution: Won't Fix
Target Version/s:   (was: )

> WebHDFS REST v2
> ---
>
> Key: HDFS-9055
> URL: https://issues.apache.org/jira/browse/HDFS-9055
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Priority: Major
>
> There's starting to be enough changes to fix and add missing functionality to 
> webhdfs that we should probably update to REST v2.  This also gives us an 
> opportunity to deal with some incompatible issues.






[jira] [Resolved] (HDFS-9031) libhdfs should use doxygen plugin to generate mvn site output

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-9031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-9031.

Resolution: Won't Fix

> libhdfs should use doxygen plugin to generate mvn site output
> -
>
> Key: HDFS-9031
> URL: https://issues.apache.org/jira/browse/HDFS-9031
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Allen Wittenauer
>Priority: Blocker
>
> Rather than point people to the hdfs.h file, we should take advantage of the 
> doxyfile and actually generate docs for mvn site so they show up on the website.






[jira] [Resolved] (HDFS-9058) enable find via WebHDFS

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-9058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-9058.

  Resolution: Won't Fix
Target Version/s:   (was: )

> enable find via WebHDFS
> ---
>
> Key: HDFS-9058
> URL: https://issues.apache.org/jira/browse/HDFS-9058
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Brahma Reddy Battula
>Priority: Major
>
> It'd be useful to implement find over webhdfs rather than forcing the client 
> to grab a lot of data.






[jira] [Resolved] (HDFS-9059) Expose lssnapshottabledir via WebHDFS

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-9059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-9059.

  Resolution: Won't Fix
Target Version/s:   (was: )

> Expose lssnapshottabledir via WebHDFS
> -
>
> Key: HDFS-9059
> URL: https://issues.apache.org/jira/browse/HDFS-9059
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Jagadesh Kiran N
>Priority: Major
>
> lssnapshottabledir should be exposed via WebHDFS.






[jira] [Resolved] (HDFS-9061) hdfs groups should be exposed via WebHDFS

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-9061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-9061.

  Resolution: Won't Fix
Target Version/s:   (was: )

> hdfs groups should be exposed via WebHDFS
> -
>
> Key: HDFS-9061
> URL: https://issues.apache.org/jira/browse/HDFS-9061
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Jagadesh Kiran N
>Priority: Major
>
> It would be extremely useful from a REST perspective to expose which groups 
> the NN says the user belongs to.






[jira] [Resolved] (HDFS-9464) Documentation needs to be exposed

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-9464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-9464.

Resolution: Won't Fix

> Documentation needs to be exposed
> -
>
> Key: HDFS-9464
> URL: https://issues.apache.org/jira/browse/HDFS-9464
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Allen Wittenauer
>Priority: Blocker
>
> From the few builds I've done, there doesn't appear to be any user-facing 
> documentation that is actually exposed when mvn site is built.  HDFS-8745 
> allegedly added doxygen support, but even those docs aren't tied into the 
> docs and/or site build. 






[jira] [Resolved] (HDFS-9465) No header files in mvn package

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-9465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-9465.

Resolution: Won't Fix

> No header files in mvn package
> --
>
> Key: HDFS-9465
> URL: https://issues.apache.org/jira/browse/HDFS-9465
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Allen Wittenauer
>Priority: Blocker
>
> The current build appears to only include the shared library and no header 
> files to actually use the library in the final maven binary build.






[jira] [Resolved] (HDFS-9778) Add liberasurecode support

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-9778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-9778.

Resolution: Won't Fix

> Add liberasurecode support
> --
>
> Key: HDFS-9778
> URL: https://issues.apache.org/jira/browse/HDFS-9778
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Reporter: Allen Wittenauer
>Priority: Major
>
> It would be beneficial to use liberasurecode either as a supplement to or in 
> lieu of ISA-L in order to provide the widest possible hardware/OS platform 
> and OOB support.  Major software platforms appear to be converging on this 
> library and we should too.






[jira] [Resolved] (HDFS-10509) httpfs generates docs in bin tarball

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-10509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-10509.
-
Resolution: Won't Fix

> httpfs generates docs in bin tarball 
> -
>
> Key: HDFS-10509
> URL: https://issues.apache.org/jira/browse/HDFS-10509
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build, documentation
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Priority: Major
>
> When building a release, httpfs generates a share/doc/hadoop/httpfs dir with 
> content when it shouldn't.






[jira] [Updated] (HDFS-11356) figure out what to do about hadoop-hdfs-project/hadoop-hdfs/src/main/native

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-11356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-11356:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> figure out what to do about hadoop-hdfs-project/hadoop-hdfs/src/main/native
> ---
>
> Key: HDFS-11356
> URL: https://issues.apache.org/jira/browse/HDFS-11356
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build, documentation
>Affects Versions: 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Attachments: HDFS-11356.001.patch
>
>
> The move of code to hdfs-client-native creation caused all sorts of loose 
> ends, and this is just another one.  






[jira] [Reopened] (HDFS-11356) figure out what to do about hadoop-hdfs-project/hadoop-hdfs/src/main/native

2018-09-01 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-11356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer reopened HDFS-11356:
-




[jira] [Commented] (HDFS-13822) speedup libhdfs++ build (enable parallel build)

2018-08-15 Thread Allen Wittenauer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16581735#comment-16581735
 ] 

Allen Wittenauer commented on HDFS-13822:
-

So we gained 30 minutes just in the patch phase.  Probably expect close to 
another 20 or so minutes after commit.

> speedup libhdfs++ build (enable parallel build)
> ---
>
> Key: HDFS-13822
> URL: https://issues.apache.org/jira/browse/HDFS-13822
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Pradeep Ambati
>Priority: Minor
> Attachments: HDFS-13382.000.patch, HDFS-13822.01.patch, 
> HDFS-13822.02.patch
>
>
> libhdfs++ has significantly increased clean build times for the native client 
> on trunk. The problem is that libhdfs++ isn't built in parallel. When I tried 
> to force a parallel build by specifying -Dnative_make_args=-j4, the build 
> failed due to dependencies.






[jira] [Commented] (HDFS-13822) speedup libhdfs++ build (enable parallel build)

2018-08-15 Thread Allen Wittenauer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16581590#comment-16581590
 ] 

Allen Wittenauer commented on HDFS-13822:
-

-02:
* cmake plugin for both builds
* tie the clang build to a specific native-clang profile that can be 
configured in test-patch's personality for when it is appropriate
* add dyld library paths for OS X




[jira] [Updated] (HDFS-13822) speedup libhdfs++ build (enable parallel build)

2018-08-15 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-13822:

Attachment: HDFS-13822.02.patch




[jira] [Commented] (HDFS-13822) speedup libhdfs++ build (enable parallel build)

2018-08-15 Thread Allen Wittenauer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16581423#comment-16581423
 ] 

Allen Wittenauer commented on HDFS-13822:
-

So I'm working on some Yetus bugs when I stumble back into this... mess.

I just noticed the altern code.  It triggers when the 'test-patch' profile is 
set, regardless of whether native is set.  This means it runs the clunky antrun 
version of cmake a few extra times during precommits that touch 
hdfs-native-client and any of its parents.

Let's add this up.

1 x mvn install branch
2 x mvn compile branch
1 x mvn install patch
2 x mvn compile patch

I'm flabbergasted.  Especially so since it doesn't even work correctly.   It's 
still using the same compilers for both compiles.




[jira] [Commented] (HDFS-13822) speedup libhdfs++ build (enable parallel build)

2018-08-14 Thread Allen Wittenauer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16580492#comment-16580492
 ] 

Allen Wittenauer commented on HDFS-13822:
-


Pre:
+1  compile 37m 44s trunk passed 

Post:
+1  compile 21m 57s the patch passed 




[jira] [Commented] (HDFS-13822) speedup libhdfs++ build (enable parallel build)

2018-08-14 Thread Allen Wittenauer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16580147#comment-16580147
 ] 

Allen Wittenauer commented on HDFS-13822:
-

One other thing: I keep meaning to optimize the OpenSSL handling code to be in 
one place instead of like 3-4 now (common, hdfs-native, pipes, one more I 
think?)

> speedup libhdfs++ build (enable parallel build)
> ---
>
> Key: HDFS-13822
> URL: https://issues.apache.org/jira/browse/HDFS-13822
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Pradeep Ambati
>Priority: Minor
> Attachments: HDFS-13382.000.patch, HDFS-13822.01.patch
>
>
> libhdfs++ has significantly increased clean build times for the native client 
> on trunk. The problem is that libhdfs++ isn't built in parallel. When I tried 
> to force a parallel build by specifying -Dnative_make_args=-j4, the build 
> fails due to dependencies.






[jira] [Commented] (HDFS-13822) speedup libhdfs++ build (enable parallel build)

2018-08-14 Thread Allen Wittenauer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16580136#comment-16580136
 ] 

Allen Wittenauer commented on HDFS-13822:
-

-01:
* is what I'm currently using (minus some changes for yarn and mr); it has 
fixes for OS X, openssl, and some other stuff.

It doesn't use the hadoop-maven-plugin code for ctest because the 
hadoop-maven-plugin TestMojo code is not really built for large numbers of 
tests.  It basically requires listing every single test binary in its own 
execution snippet in the pom, IIRC.

hadoop-maven-plugin should probably have a new mojo added that specifically 
calls ctest in a directory.  (It should also probably be fixed to call Windows 
cmake compatibly, especially now that cmake 3.1+ works in a sane way on 
Windows.)
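
Until such a mojo exists, the manual equivalent is roughly this (a hedged 
sketch; the build-tree location is an assumption and varies by module):

{code}
# run the native test suite directly with ctest, in parallel
cd hadoop-hdfs-project/hadoop-hdfs-native-client/target   # assumed cmake build tree
ctest --output-on-failure -j 4
{code}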

> speedup libhdfs++ build (enable parallel build)
> ---
>
> Key: HDFS-13822
> URL: https://issues.apache.org/jira/browse/HDFS-13822
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Pradeep Ambati
>Priority: Minor
> Attachments: HDFS-13382.000.patch, HDFS-13822.01.patch
>
>
> libhdfs++ has significantly increased clean build times for the native client 
> on trunk. The problem is that libhdfs++ isn't built in parallel. When I tried 
> to force a parallel build by specifying -Dnative_make_args=-j4, the build 
> fails due to dependencies.






[jira] [Updated] (HDFS-13822) speedup libhdfs++ build (enable parallel build)

2018-08-14 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-13822:

Status: Patch Available  (was: Open)

> speedup libhdfs++ build (enable parallel build)
> ---
>
> Key: HDFS-13822
> URL: https://issues.apache.org/jira/browse/HDFS-13822
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Pradeep Ambati
>Priority: Minor
> Attachments: HDFS-13382.000.patch, HDFS-13822.01.patch
>
>
> libhdfs++ has significantly increased clean build times for the native client 
> on trunk. The problem is that libhdfs++ isn't built in parallel. When I tried 
> to force a parallel build by specifying -Dnative_make_args=-j4, the build 
> fails due to dependencies.






[jira] [Updated] (HDFS-13822) speedup libhdfs++ build (enable parallel build)

2018-08-14 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-13822:

Attachment: HDFS-13822.01.patch

> speedup libhdfs++ build (enable parallel build)
> ---
>
> Key: HDFS-13822
> URL: https://issues.apache.org/jira/browse/HDFS-13822
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Pradeep Ambati
>Priority: Minor
> Attachments: HDFS-13382.000.patch, HDFS-13822.01.patch
>
>
> libhdfs++ has significantly increased clean build times for the native client 
> on trunk. The problem is that libhdfs++ isn't built in parallel. When I tried 
> to force a parallel build by specifying -Dnative_make_args=-j4, the build 
> fails due to dependencies.






[jira] [Commented] (HDFS-13822) speedup libhdfs++ build (enable parallel build)

2018-08-14 Thread Allen Wittenauer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16579892#comment-16579892
 ] 

Allen Wittenauer commented on HDFS-13822:
-

As reported by the qbt nightly runs, be aware that ctests for libhdfspp have 
been broken since ~ June 29th.  Likely caused by one of:

{code}
[Jun 28, 2018 5:37:22 AM] (aajisaka) HADOOP-15495. Upgrade commons-lang version to 3.7 in
[Jun 28, 2018 5:58:40 AM] (aajisaka) HADOOP-14313. Replace/improve Hadoop's byte[] comparator. Contributed by
[Jun 28, 2018 6:39:33 AM] (aengineer) HDDS-195. Create generic CommandWatcher utility. Contributed by Elek,
[Jun 28, 2018 4:21:56 PM] (Bharat) HDFS-13705:The native ISA-L library loading failure should be made
[Jun 28, 2018 4:39:49 PM] (eyang) YARN-8409.  Fixed NPE in ActiveStandbyElectorBasedElectorService.
[Jun 28, 2018 5:23:31 PM] (sunilg) YARN-8379. Improve balancing resources in already satisfied queues by
[Jun 28, 2018 10:41:39 PM] (nanda) HDDS-185: TestCloseContainerByPipeline#testCloseContainerViaRatis fail
[Jun 28, 2018 11:07:16 PM] (nanda) HDDS-178: DN should update transactionId on block delete. Contributed by
{code}

So be sure your failures are actually related to the patch.
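
One hedged way to check (assuming the native profile drives ctest for this 
module):

{code}
# run the native tests on clean trunk first; if they already fail there,
# the failure predates the patch under review
git checkout trunk
mvn -Pnative -pl hadoop-hdfs-project/hadoop-hdfs-native-client test
{code}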

> speedup libhdfs++ build (enable parallel build)
> ---
>
> Key: HDFS-13822
> URL: https://issues.apache.org/jira/browse/HDFS-13822
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Pradeep Ambati
>Priority: Minor
> Attachments: HDFS-13382.000.patch
>
>
> libhdfs++ has significantly increased clean build times for the native client 
> on trunk. The problem is that libhdfs++ isn't built in parallel. When I tried 
> to force a parallel build by specifying -Dnative_make_args=-j4, the build 
> fails due to dependencies.






[jira] [Commented] (HDDS-214) HDDS/Ozone First Release

2018-08-13 Thread Allen Wittenauer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578741#comment-16578741
 ] 

Allen Wittenauer commented on HDDS-214:
---

bq.  if you can point us to the correct Hadoop release process document

This pretty much answers all of my questions.  :( 


> HDDS/Ozone First Release
> 
>
> Key: HDDS-214
> URL: https://issues.apache.org/jira/browse/HDDS-214
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Anu Engineer
>Assignee: Elek, Marton
>Priority: Major
> Attachments: Ozone 0.2.1 release plan.pdf
>
>
> This is an umbrella JIRA that collects all work items, design discussions, 
> etc. for Ozone's release. We will post a design document soon to open the 
> discussion and nail down the details of the release.
> cc: [~xyao] , [~elek], [~arpitagarwal] [~jnp] , [~msingh] [~nandakumar131], 
> [~bharatviswa]






[jira] [Commented] (HDDS-214) HDDS/Ozone First Release

2018-08-13 Thread Allen Wittenauer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578692#comment-16578692
 ] 

Allen Wittenauer commented on HDDS-214:
---

* Has there been any attempt to actually cut the two releases to see if this 
plan is even feasible?

* Where are the proposed changes to the Hadoop release process documented?

* Where are the actual steps to build a release?  


> HDDS/Ozone First Release
> 
>
> Key: HDDS-214
> URL: https://issues.apache.org/jira/browse/HDDS-214
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Anu Engineer
>Assignee: Elek, Marton
>Priority: Major
> Attachments: Ozone 0.2.1 release plan.pdf
>
>
> This is an umbrella JIRA that collects all work items, design discussions, 
> etc. for Ozone's release. We will post a design document soon to open the 
> discussion and nail down the details of the release.
> cc: [~xyao] , [~elek], [~arpitagarwal] [~jnp] , [~msingh] [~nandakumar131], 
> [~bharatviswa]






[jira] [Commented] (HDDS-341) HDDS/Ozone bits are leaking into Hadoop release

2018-08-13 Thread Allen Wittenauer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578617#comment-16578617
 ] 

Allen Wittenauer commented on HDDS-341:
---

I'll comment on HDDS-214 then.  

> HDDS/Ozone bits are leaking into Hadoop release
> ---
>
> Key: HDDS-341
> URL: https://issues.apache.org/jira/browse/HDDS-341
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Anu Engineer
>Priority: Blocker
> Fix For: 0.2.1
>
>
> [~aw] in the Ozone release discussion reported that Ozone is leaking bits 
> into Hadoop. This has to be fixed before  Hadoop 3.2 or Ozone 0.2.1 release. 
> I will make this a release blocker for Ozone.
>  
> {noformat}
> >Has anyone verified that a Hadoop release doesn't have _any_ of the extra 
> >ozone bits that are sprinkled outside the maven modules?
> [aengineer] : As far as I know that is the state, we have had multiple Hadoop 
> releases after ozone has been merged. So far no one has reported Ozone bits 
> leaking into Hadoop. If we find something like that, it would be a bug.
> [aw]: There hasn't been a release from a branch where Ozone has been merged 
> yet. The first one will be 3.2.0.  Running create-release off of trunk 
> presently shows bits of Ozone in dev-support, hadoop-dist, and elsewhere in 
> the Hadoop source tar ball.
>   So, consider this as a report. IMHO, cutting an Ozone release prior to 
> a Hadoop release is ill-advised given the distribution impact and the 
> requirements of the merge vote.  
> {noformat}
>  






[jira] [Commented] (HDFS-12711) deadly hdfs test

2018-08-13 Thread Allen Wittenauer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578504#comment-16578504
 ] 

Allen Wittenauer commented on HDFS-12711:
-

bq. Was a follow up jira filed for this work? (and if so, which one was chosen)

Nope.  Few others seemed to care; patches go in regardless of what Jenkins says 
and/or how they may impact the build negatively.  

Yetus 0.7.0 and bumping up the surefire version (at least in trunk) stopped 
hadoop from crashing ASF Jenkins build nodes.  It's still horribly broken, just 
less obviously so. branch-2 nightlies were turned off months ago since they 
were failing at such a high rate as to be pointless. I don't think anyone really 
pays attention to the trunk nightlies, so they should probably be turned off too.

> deadly hdfs test
> 
>
> Key: HDFS-12711
> URL: https://issues.apache.org/jira/browse/HDFS-12711
> Project: Hadoop HDFS
>  Issue Type: Test
>Affects Versions: 2.9.0, 2.8.2
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Critical
> Attachments: HDFS-12711.branch-2.00.patch, fakepatch.branch-2.txt
>
>







[jira] [Commented] (HDDS-341) HDDS/Ozone bits are leaking into Hadoop release

2018-08-13 Thread Allen Wittenauer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16578450#comment-16578450
 ] 

Allen Wittenauer commented on HDDS-341:
---

bq. Is it a problem with binary or source distribution

Both. 

bq. Not sure if it's a problem to include some ozone specific source in a 
hadoop source release.

[TBF: Officially, the only ASF release is the source release. The rest are 
considered 'convenience artifacts'. While I disagree with the letter of the 
law, that's what it is.]

In order to fix the Hadoop source distribution, I'm fairly certain that the 
ozone dirs and related source will need to be re-arranged.  It doesn't make a 
lot of sense to me to release an ozone source distribution that won't even be 
close to matching what trunk looks like organizationally.

The binary release also has a pretty big issue: the maven dependencies are such 
that ozone's jars will depend upon 3.2.0-SNAPSHOT.  From a maven repository 
view of the world, this is extremely problematic.  Doing a maven deploy in just 
the ozone dirs means that the poms will point to non-existent dependencies.  
Doing a maven deploy at the root of hadoop means pushing a 3.2.0-SNAPSHOT 
release onto somewhere official and I have no idea what the impact of that 
would be. Specifically:

* is that different than what 
https://builds.apache.org/view/H-L/view/Hadoop/job/Hadoop-trunk-Commit/ does?  
* will those jars get overridden by that job?  
* what if they don't get overridden and all maven builds for 3.2.0 are horribly 
broken until the official release?
* what if they do get overridden in a subtle, incompatible way and now ozone 
jars are broken?

The only way out that I can see is:

a) move everything to parent ozone dir (just to make things easier)
b) change the pom for that to be non-relative and tie it directly to 3.1.1 or 
some other released hadoop-project pom prior to release

I have a hunch that the first release of ozone in a maven dependency context 
might be effectively blocked until the hadoop 3.2.0 release.  It just depends 
upon which changes it requires from trunk that come from outside of its source 
tree.
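
For anyone re-verifying the source tarball side of this, something along these 
lines should do it (a hedged sketch; the create-release output path is an 
assumption):

{code}
dev-support/bin/create-release --docker
tar -tzf target/artifacts/hadoop-3.2.0-SNAPSHOT-src.tar.gz | grep -iE 'ozone|hdds'
{code}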

> HDDS/Ozone bits are leaking into Hadoop release
> ---
>
> Key: HDDS-341
> URL: https://issues.apache.org/jira/browse/HDDS-341
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Anu Engineer
>Priority: Blocker
> Fix For: 0.2.1
>
>
> [~aw] in the Ozone release discussion reported that Ozone is leaking bits 
> into Hadoop. This has to be fixed before  Hadoop 3.2 or Ozone 0.2.1 release. 
> I will make this a release blocker for Ozone.
>  
> {noformat}
> >Has anyone verified that a Hadoop release doesn't have _any_ of the extra 
> >ozone bits that are sprinkled outside the maven modules?
> [aengineer] : As far as I know that is the state, we have had multiple Hadoop 
> releases after ozone has been merged. So far no one has reported Ozone bits 
> leaking into Hadoop. If we find something like that, it would be a bug.
> [aw]: There hasn't been a release from a branch where Ozone has been merged 
> yet. The first one will be 3.2.0.  Running create-release off of trunk 
> presently shows bits of Ozone in dev-support, hadoop-dist, and elsewhere in 
> the Hadoop source tar ball.
>   So, consider this as a report. IMHO, cutting an Ozone release prior to 
> a Hadoop release is ill-advised given the distribution impact and the 
> requirements of the merge vote.  
> {noformat}
>  






[jira] [Commented] (HDDS-280) Support ozone dist-start-stitching on openbsd/osx

2018-07-27 Thread Allen Wittenauer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16560398#comment-16560398
 ] 

Allen Wittenauer commented on HDDS-280:
---

There are probably a million ways to get a docker-compose yaml file (or 
anything else you'd need, for that matter) built with the version embedded.  
(maven-resources-plugin, maven-assembly-plugin, and/or a smarter CMD in the 
docker image that does directory detection are the first three that come to 
mind)  
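
As one hedged illustration of the maven-resources-plugin route (the template 
name and placeholder are assumptions; help:evaluate with -DforceStdout assumes 
a recent maven-help-plugin):

{code}
VERSION=$(mvn -q help:evaluate -Dexpression=project.version -DforceStdout)
sed "s|@project.version@|${VERSION}|g" docker-compose.yaml.template > docker-compose.yaml
{code}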


> Support ozone dist-start-stitching on openbsd/osx
> -
>
> Key: HDDS-280
> URL: https://issues.apache.org/jira/browse/HDDS-280
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Priority: Major
>
> {quote}Ozone is creating a symlink during the dist process.
> Using the "ozone" directory as the destination name, all the docker-based 
> acceptance tests and docker-compose files are simpler as they don't need 
> to have the version information in the path.
> But to keep the version-specific folder name in the tar file, we create a 
> symbolic link during the tar creation. With the symbolic link and the 
> '--dereference' tar argument we can create a tar file which includes a 
> versioned directory (ozone-0.2.1) but still use a dist directory without 
> the version in the name (hadoop-dist/target/ozone).
> {quote}
> This is the description of the current 
> dev-support/bin/ozone-dist-tar-stitching. [~aw] in a comment for HDDS-276 
> pointed to the problem that some bsd variants don't support the dereference 
> command line option of the ln command.
> The main reason to use this approach is to get a simplified destination name 
> without the version (hadoop-dist/target/ozone instead of 
> hadoop-dist/target/ozone-0.2.1). It simplifies the docker-compose based 
> environments and acceptance tests, therefore I prefer to keep the simplified 
> destination name.
> The issue is the tar file creation, if and only if we need the version number 
> in the name of the root directory inside the tar.
> Possible solutions:
>  # Use cp target/ozone target/ozone-0.2.1 + tar. It's simple but slower 
> and requires more space.
>  # Do the tar distribution from docker all the time in case 'dereference' 
> is not supported. Not very convenient.
>  # Accept that the tar will contain an ozone directory and not ozone-0.2.1. 
> This is the simplest and can be improved with an additional VERSION file in 
> the root of the distribution.
>  # (+1) Use hadoop-dist/target/ozone-0.2.1 instead of 
> hadoop-dist/target/ozone. This is more complex for the docker based testing 
> as we need the explicit names in the compose files (volume: 
> ../../../hadoop-dist/target/ozone-0.2.1). The structure is more complex when 
> using the version in the directory name.
> Please comment your preference.






[jira] [Commented] (HDDS-276) Fix symbolic link creation during Ozone dist process

2018-07-27 Thread Allen Wittenauer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16559793#comment-16559793
 ] 

Allen Wittenauer commented on HDDS-276:
---

Nope, -1 on this patch too.

This patch pretty much breaks a very common operational practice for the sole 
purpose of making testing easier. If I extract a tarball, I'm expecting to get 
'project-version'.  There is a very good chance users are going to do this in a 
dir with multiple versions extracted with their own symlink set.

Again, there seems to be a desire to trade simpler testing logic for a more 
difficult and surprising operational experience.  

> Fix symbolic link creation during Ozone dist process
> 
>
> Key: HDDS-276
> URL: https://issues.apache.org/jira/browse/HDDS-276
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.2.1
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Minor
> Fix For: 0.2.1
>
> Attachments: HDDS-276.001.patch
>
>
> Ozone is creating a symlink during the dist process.
> Using the "ozone" directory as the destination name, all the docker-based 
> acceptance tests and docker-compose files are simpler as they don't need 
> to have the version information in the path.
> But to keep the version-specific folder name in the tar file, we create a 
> symbolic link during the tar creation. With the symbolic link and the 
> '--dereference' tar argument we can create a tar file which includes a 
> versioned directory (ozone-0.2.1) but still use a dist directory without 
> the version in the name (hadoop-dist/target/ozone).
> Currently this symlink creation has an issue: it can't be run twice. You 
> need to do a 'mvn clean' before you can create a new dist.
> Fortunately this can be fixed easily by checking whether the destination 
> symlink exists.






[jira] [Commented] (HDDS-280) Support ozone dist-start-stitching on openbsd/osx

2018-07-27 Thread Allen Wittenauer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16559784#comment-16559784
 ] 

Allen Wittenauer commented on HDDS-280:
---

bq.  if and only if we need the version number in the name of the root 
directory inside of the tar.

Yes, you do.

bq.  It simplifies the docker-compose based environments and acceptance tests, 
therefore I prefer to keep the simplified destination name.

It sounds like a prioritization problem. Distribution standardization is way 
more important than testing. Besides, why can't you use a docker build arg to 
know what the version is when doing the tests?  This way you'll know what the 
fully versioned directory is.
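
A hedged sketch of that idea using compose's environment substitution (the 
variable name and volume path are assumptions):

{code}
export OZONE_VERSION=0.2.1
docker-compose up -d
# docker-compose.yaml would then reference the versioned dist dir, e.g.:
#   volumes:
#     - ../../../hadoop-dist/target/ozone-${OZONE_VERSION}:/opt/hadoop
{code}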

> Support ozone dist-start-stitching on openbsd/osx
> -
>
> Key: HDDS-280
> URL: https://issues.apache.org/jira/browse/HDDS-280
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Priority: Major
>
> {quote}Ozone is creating a symlink during the dist process.
> Using the "ozone" directory as the destination name, all the docker-based 
> acceptance tests and docker-compose files are simpler as they don't need 
> to have the version information in the path.
> But to keep the version-specific folder name in the tar file, we create a 
> symbolic link during the tar creation. With the symbolic link and the 
> '--dereference' tar argument we can create a tar file which includes a 
> versioned directory (ozone-0.2.1) but still use a dist directory without 
> the version in the name (hadoop-dist/target/ozone).
> {quote}
> This is the description of the current 
> dev-support/bin/ozone-dist-tar-stitching. [~aw] in a comment for HDDS-276 
> pointed to the problem that some bsd variants don't support the dereference 
> command line option of the ln command.
> The main reason to use this approach is to get a simplified destination name 
> without the version (hadoop-dist/target/ozone instead of 
> hadoop-dist/target/ozone-0.2.1). It simplifies the docker-compose based 
> environments and acceptance tests, therefore I prefer to keep the simplified 
> destination name.
> The issue is the tar file creation, if and only if we need the version number 
> in the name of the root directory inside the tar.
> Possible solutions:
>  # Use cp target/ozone target/ozone-0.2.1 + tar. It's simple but slower 
> and requires more space.
>  # Do the tar distribution from docker all the time in case 'dereference' 
> is not supported. Not very convenient.
>  # Accept that the tar will contain an ozone directory and not ozone-0.2.1. 
> This is the simplest and can be improved with an additional VERSION file in 
> the root of the distribution.
>  # (+1) Use hadoop-dist/target/ozone-0.2.1 instead of 
> hadoop-dist/target/ozone. This is more complex for the docker based testing 
> as we need the explicit names in the compose files (volume: 
> ../../../hadoop-dist/target/ozone-0.2.1). The structure is more complex when 
> using the version in the directory name.
> Please comment your preference.






[jira] [Commented] (HDDS-276) Fix symbolic link creation during Ozone dist process

2018-07-20 Thread Allen Wittenauer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16551500#comment-16551500
 ] 

Allen Wittenauer commented on HDDS-276:
---

-1

Only works on GNU tar.
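
A portable fallback is option 1 from HDDS-280, copy-then-tar; a hedged sketch 
(VERSION would come from the maven build in practice):

{code}
VERSION=0.2.1
cp -R hadoop-dist/target/ozone "hadoop-dist/target/ozone-${VERSION}"
tar -cf "hadoop-dist/target/ozone-${VERSION}.tar" -C hadoop-dist/target "ozone-${VERSION}"
{code}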

> Fix symbolic link creation during Ozone dist process
> 
>
> Key: HDDS-276
> URL: https://issues.apache.org/jira/browse/HDDS-276
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.2.1
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Minor
> Fix For: 0.2.1
>
> Attachments: HDDS-276.001.patch
>
>
> Ozone is creating a symlink during the dist process.
> Using the "ozone" directory as the destination name, all the docker-based 
> acceptance tests and docker-compose files are simpler as they don't need 
> to have the version information in the path.
> But to keep the version-specific folder name in the tar file, we create a 
> symbolic link during the tar creation. With the symbolic link and the 
> '--dereference' tar argument we can create a tar file which includes a 
> versioned directory (ozone-0.2.1) but still use a dist directory without 
> the version in the name (hadoop-dist/target/ozone).
> Currently this symlink creation has an issue: it can't be run twice. You 
> need to do a 'mvn clean' before you can create a new dist.
> Fortunately this can be fixed easily by checking whether the destination 
> symlink exists.






[jira] [Comment Edited] (HDFS-13734) Add Heapsize variables for HDFS daemons

2018-07-13 Thread Allen Wittenauer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543508#comment-16543508
 ] 

Allen Wittenauer edited comment on HDFS-13734 at 7/13/18 5:49 PM:
--

bq. While still possible through adding the -Xmx to HDFS_DAEMON_OPTS, this is 
not intuitive for this relatively common setting.

While I can appreciate the feeling, it leads to users configuring a lot more 
environment variables than they need since _OPTS is almost always configured as 
well.  (Especially with zero hints in the *-env.sh files that this is mostly 
unnecessary syntactic sugar.)  In addition, it adds Yet More Shell Code and 
increases the support burden.  There is also the slippery slope problem: if 
there is a dedicated var for heap, should there be a dedicated var for other 
java params as well? What is the barrier?

It was always my intent to deprecate the equivalent MR and YARN variables for 
the exact same reasons but I just never got around to it.


was (Author: aw):
> While still possible through adding the -Xmx to HDFS_DAEMON_OPTS, this is not 
> intuitive for this relatively common setting.

While I can appreciate the feeling, it leads to users configuring a lot more 
environment variables than they need since _OPTS is almost always configured as 
well.  (Especially with zero hints in the *-env.sh files that this is mostly 
unnecessary syntactic sugar.)  In addition, it adds Yet More Shell Code and 
increases the support burden.  There is also the slippery slope problem: if 
there is a dedicated var for heap, should there be a dedicated var for other 
java params as well? What is the barrier?

It was always my intent to deprecate the equivalent MR and YARN variables for 
the exact same reasons but I just never got around to it.

> Add Heapsize variables for HDFS daemons
> ---
>
> Key: HDFS-13734
> URL: https://issues.apache.org/jira/browse/HDFS-13734
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, journal-node, namenode
>Affects Versions: 3.0.3
>Reporter: Brandon Scheller
>Priority: Major
>
> Currently there are no variables to set HDFS daemon heapsize differently. 
> While still possible through adding the -Xmx to HDFS_*DAEMON*_OPTS, this is 
> not intuitive for this relatively common setting.
> YARN currently has these separate YARN_*DAEMON*_HEAPSIZE variables supported 
> so it seems natural for HDFS too.
> It also looks like HDFS used to have this for the namenode with 
> HADOOP_NAMENODE_INIT_HEAPSIZE
> This JIRA is to have these configurations added/supported






[jira] [Commented] (HDFS-13734) Add Heapsize variables for HDFS daemons

2018-07-13 Thread Allen Wittenauer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543508#comment-16543508
 ] 

Allen Wittenauer commented on HDFS-13734:
-

> While still possible through adding the -Xmx to HDFS_DAEMON_OPTS, this is not 
> intuitive for this relatively common setting.

While I can appreciate the feeling, it leads to users configuring a lot more 
environment variables than they need since _OPTS is almost always configured as 
well.  (Especially with zero hints in the *-env.sh files that this is mostly 
unnecessary syntactic sugar.)  In addition, it adds Yet More Shell Code and 
increases the support burden.  There is also the slippery slope problem: if 
there is a dedicated var for heap, should there be a dedicated var for other 
java params as well? What is the barrier?

It was always my intent to deprecate the equivalent MR and YARN variables for 
the exact same reasons but I just never got around to it.
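
For reference, the existing escape hatch looks like this (a hedged 
hadoop-env.sh example; the heap values are arbitrary):

{code}
# per-daemon JVM args, heap included, via the _OPTS pattern
export HDFS_NAMENODE_OPTS="-Xms4g -Xmx4g ${HDFS_NAMENODE_OPTS}"
{code}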

> Add Heapsize variables for HDFS daemons
> ---
>
> Key: HDFS-13734
> URL: https://issues.apache.org/jira/browse/HDFS-13734
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, journal-node, namenode
>Affects Versions: 3.0.3
>Reporter: Brandon Scheller
>Priority: Major
>
> Currently there are no variables to set HDFS daemon heapsize differently. 
> While still possible through adding the -Xmx to HDFS_*DAEMON*_OPTS, this is 
> not intuitive for this relatively common setting.
> YARN currently has these separate YARN_*DAEMON*_HEAPSIZE variables supported 
> so it seems natural for HDFS too.
> It also looks like HDFS used to have this for the namenode with 
> HADOOP_NAMENODE_INIT_HEAPSIZE
> This JIRA is to have these configurations added/supported






[jira] [Updated] (HDFS-13722) HDFS Native Client Fails Compilation on Ubuntu 18.04

2018-07-10 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-13722:

   Resolution: Fixed
Fix Version/s: 3.2.0
   Status: Resolved  (was: Patch Available)

Committed to trunk

> HDFS Native Client Fails Compilation on Ubuntu 18.04
> 
>
> Key: HDFS-13722
> URL: https://issues.apache.org/jira/browse/HDFS-13722
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jack Bearden
>Assignee: Jack Bearden
>Priority: Minor
>  Labels: trunk
> Fix For: 3.2.0
>
> Attachments: HDFS-13722.001.patch
>
>
> When compiling the hdfs-native-client on Ubuntu 18.04, the RPC request.cc 
> fails.
>  






[jira] [Assigned] (HDFS-13722) HDFS Native Client Fails Compilation on Ubuntu 18.04

2018-07-10 Thread Allen Wittenauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer reassigned HDFS-13722:
---

Assignee: Jack Bearden

> HDFS Native Client Fails Compilation on Ubuntu 18.04
> 
>
> Key: HDFS-13722
> URL: https://issues.apache.org/jira/browse/HDFS-13722
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jack Bearden
>Assignee: Jack Bearden
>Priority: Minor
>  Labels: trunk
> Attachments: HDFS-13722.001.patch
>
>
> When compiling the hdfs-native-client on Ubuntu 18.04, the RPC request.cc 
> fails.
>  






[jira] [Commented] (HDFS-13722) HDFS Native Client Fails Compilation on Ubuntu 18.04

2018-07-10 Thread Allen Wittenauer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16539116#comment-16539116
 ] 

Allen Wittenauer commented on HDFS-13722:
-

Those unit tests have been broken in trunk for a while.

LGTM.

+1

> HDFS Native Client Fails Compilation on Ubuntu 18.04
> 
>
> Key: HDFS-13722
> URL: https://issues.apache.org/jira/browse/HDFS-13722
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jack Bearden
>Priority: Minor
>  Labels: trunk
> Attachments: HDFS-13722.001.patch
>
>
> When compiling the hdfs-native-client on Ubuntu 18.04, the RPC request.cc 
> fails.
>  






[jira] [Commented] (HDFS-13501) Secure Datanode stop/start from cli does not throw a valid error if HDFS_DATANODE_SECURE_USER is not set

2018-04-28 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16457763#comment-16457763
 ] 

Allen Wittenauer commented on HDFS-13501:
-

Some important background:

One of my key goals with the rewrite was to reduce the amount of stuff that 
printed to the screen. With a few exceptions, output broke down into three 
buckets:

* stdout: vitally important information that the user either requested or can't 
act on but needs to know
* stderr: vitally important information that the user has an action they must 
take
* --debug: non-vital information that is only interesting when debugging
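
(A hedged illustration using the existing shell helpers; the messages 
themselves are invented:)

{code}
hadoop_error "ERROR: HDFS_DATANODE_SECURE_USER is not set"   # stderr: the user must act
hadoop_debug "pid file: ${pidfile}"                          # only visible with --debug
{code}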

As a result, there are lots of places where branch-2 has output that 3.x+ does 
not.  There's not a whole lot in the bash code where 'stdout' is appropriate. 
On the flip side, there is a lot more 'stderr' output because of significantly 
better error handling. 

That said...

The missing pid file is the case that caused me the most problems.  It's an 
error in a logical sense, but what is the user 
action?  If the daemon is still running, but the pid file is missing, then 
something likely catastrophic happened, including a very screwed up directory 
structure/config or multiple invocations of the --daemon flag.  Both of those 
are things that are really beyond the bash code to fix. Then there is the 
opposite situation:

{code}
$ hdfs --daemon stop namenode
$ hdfs --daemon stop namenode
{code}

The daemon isn't running, and so the pid file should be gone.  Is that an error 
worth disturbing the user?  Also, how common is that?  (Morgan Freeman voice: 
It is very common.)  Then there is the old ops habit of running ps even after 
issuing stop commands because no one trusts the system...

By comparison, branch-2 does

{code}
 echo no $command to stop
{code}

... which is mostly useless but does confirm the thinking that a missing pid 
file is primarily interpreted as "daemon is already down; no action required."

OK, fine. All of that was a bit of a dead end.  So then I thought about it from 
"what is the pid file anyway?".  Ultimately it's a file system lock for the 
bash code.  Nothing else that ships with Hadoop cares about it.  And with the 
introduction of '--daemon status,' there isn't much of a reason for anything 
else to be looking at them either. That mostly makes them private.

In the end, I opted to not print a message at all because I couldn't answer the 
"action" question.  There isn't anything for a user to do when the pid file is 
missing.  

FWIW: this also highlights the problem of what to do with the exit status.  
IIRC, it currently exits with 0 when the pid file isn't found because again, it 
is assumed that the daemon was stopped successfully, the same as branch-2.  In 
one sense that feels wrong, but I felt it was better to stay compatible in this 
instance.
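
Put together, a minimal sketch of the stop path described above (illustrative 
only, not the actual hadoop-functions.sh code):

{code}
# pid file missing => assume the daemon is already down; stay quiet, exit 0
if [[ ! -f "${pidfile}" ]]; then
  exit 0
fi
kill "$(cat "${pidfile}")" && rm -f "${pidfile}"
{code}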

> Secure Datanode stop/start from cli does not throw a valid error if 
> HDFS_DATANODE_SECURE_USER is not set
> 
>
> Key: HDFS-13501
> URL: https://issues.apache.org/jira/browse/HDFS-13501
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>
> Secure Datanode start/stop from the cli does not throw a valid error if 
> HADOOP_SECURE_DN_USER/HDFS_DATANODE_SECURE_USER is not set. If 
> HDFS_DATANODE_SECURE_USER and JSVC_HOME are not set, start/stop is expected 
> to fail (when privileged ports are used), but it should show a valid message.






[jira] [Commented] (HDFS-13510) Ozone: Fix precommit hook for Ozone/Hdds on trunk

2018-04-27 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16456594#comment-16456594
 ] 

Allen Wittenauer commented on HDFS-13510:
-

Start with the qbt findbugs logs from last night.

> Ozone: Fix precommit hook for Ozone/Hdds on trunk
> -
>
> Key: HDFS-13510
> URL: https://issues.apache.org/jira/browse/HDFS-13510
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>
> The current precommit doesn't work with the ozone projects as they are in an 
> optional profile.
> This jira may not have any code change, but I opened it to track the required 
> changes on builds.apache.org and make the changes more transparent.
> I think we need the following changes:
> 1. A separate jira subproject, as planned
> 2. After that we can create a new Precommit-OZONE-Build job which will be 
> triggered by PreCommit-Admin (the jira filter should be modified)
> 3. In Precommit-OZONE-Build we need to enable the hdds profile. It could 
> be done by modifying the yetus personality or by creating a .mvn/mvn.config
> 4. We need the ozone/hdds snapshot artifacts in apache nexus:
>   a.) One option is adding -P hdds to Hadoop-trunk-Commit. This is the 
> simpler one, but an Hdds/Ozone build failure will cause missing artifacts on 
> nexus (low chance, as the merge will be guarded by the PreCommit hook)
>   b.) The other option is to create a Hadoop-Ozone-trunk-Commit which does a 
> full compilation but deploys only the hdds and ozone artifacts (some sync 
> problem maybe here if different core artifacts are uploaded...)
> 5. And we also need a daily unit test run. (qbt) 






[jira] [Commented] (HDFS-13510) Ozone: Fix precommit hook for Ozone/Hdds on trunk

2018-04-27 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16456571#comment-16456571
 ] 

Allen Wittenauer commented on HDFS-13510:
-

I'd suggest fixing the problems with the trunk build first.  It's pretty 
obvious that the profile isn't providing 100% coverage.

> Ozone: Fix precommit hook for Ozone/Hdds on trunk
> -
>
> Key: HDFS-13510
> URL: https://issues.apache.org/jira/browse/HDFS-13510
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>
> The current precommit doesn't work with the ozone projects as they are in an 
> optional profile.
> This jira may not have any code change, but I opened it to track the required 
> changes on builds.apache.org and make the changes more transparent.
> I think we need the following changes:
> 1. A separate jira subproject, as planned
> 2. After that we can create a new Precommit-OZONE-Build job which will be 
> triggered by PreCommit-Admin (the jira filter should be modified)
> 3. In Precommit-OZONE-Build we need to enable the hdds profile. It could 
> be done by modifying the yetus personality or by creating a .mvn/mvn.config
> 4. We need the ozone/hdds snapshot artifacts in apache nexus:
>   a.) One option is adding -P hdds to Hadoop-trunk-Commit. This is the 
> simpler one, but an Hdds/Ozone build failure will cause missing artifacts on 
> nexus (low chance, as the merge will be guarded by the PreCommit hook)
>   b.) The other option is to create a Hadoop-Ozone-trunk-Commit which does a 
> full compilation but deploys only the hdds and ozone artifacts (some sync 
> problem maybe here if different core artifacts are uploaded...)
> 5. And we also need a daily unit test run. (qbt) 






[jira] [Commented] (HDFS-13501) Secure Datanode stop/start from cli does not throw a valid error if HDFS_DATANODE_SECURE_USER is not set

2018-04-25 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16453262#comment-16453262
 ] 

Allen Wittenauer commented on HDFS-13501:
-

JSVC_HOME is allowed to be configured independently of 
HDFS_DATANODE_SECURE_USER due to other services that may be using the secure 
starter code.  For example, it's possible to have NFS running in secure mode 
but not the datanode.

OOTB, the only way to tell the shell code if the datanode needs to use the 
secure daemon starter is via HDFS_DATANODE_SECURE_USER.  Since having it set 
and unset are legal, there's no real way to predict what the user intends 
without reading through hdfs-site.xml, looking at port numbers, rpc settings, 
and the like and even then, we might get it wrong. For example, if Hadoop 
is using authbind or pfexec or any number of other ways to give a process the 
ability to open reserved ports.  They are a little more complicated, but the 
shell code does support it via hadoop-user-functions.  This flexibility is 
exactly why the current shell doesn't enforce the strict rules that 2.x did. 

> Secure Datanode stop/start from cli does not throw a valid error if 
> HDFS_DATANODE_SECURE_USER is not set
> 
>
> Key: HDFS-13501
> URL: https://issues.apache.org/jira/browse/HDFS-13501
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>
> Secure Datanode start/stop from the cli does not throw a valid error if 
> HADOOP_SECURE_DN_USER/HDFS_DATANODE_SECURE_USER is not set. If 
> HDFS_DATANODE_SECURE_USER and JSVC_HOME are not set, start/stop is expected 
> to fail (when privileged ports are used), but it should show a valid message.






[jira] [Commented] (HDFS-13501) Secure Datanode stop/start from cli does not throw a valid error if HDFS_DATANODE_SECURE_USER is not set

2018-04-25 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16453085#comment-16453085
 ] 

Allen Wittenauer commented on HDFS-13501:
-

It's possible to run a secure datanode without using jsvc.  That's why the 
shell code doesn't check for those values.

> Secure Datanode stop/start from cli does not throw a valid error if 
> HDFS_DATANODE_SECURE_USER is not set
> 
>
> Key: HDFS-13501
> URL: https://issues.apache.org/jira/browse/HDFS-13501
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>
> Secure Datanode start/stop from the cli does not throw a valid error if 
> HADOOP_SECURE_DN_USER/HDFS_DATANODE_SECURE_USER is not set. If 
> HDFS_DATANODE_SECURE_USER and JSVC_HOME are not set, start/stop is expected 
> to fail (when privileged ports are used), but it should show a valid message.






[jira] [Commented] (HDFS-13272) DataNodeHttpServer to have configurable HttpServer2 threads

2018-04-25 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452535#comment-16452535
 ] 

Allen Wittenauer commented on HDFS-13272:
-

Increasing the timeout won't help.  branch-2's hdfs unit tests haven't been 
able to complete during the qbt run for a time period measured in months, and 
the timeout is set for (IIRC) 18 hours. No one really pays much attention to 
the nightlies.

The log probably provides some hints as to where to look to fix it:

https://builds.apache.org/job/PreCommit-HDFS-Build/24054/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt



> DataNodeHttpServer to have configurable HttpServer2 threads
> ---
>
> Key: HDFS-13272
> URL: https://issues.apache.org/jira/browse/HDFS-13272
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13272-branch-2.000.patch, 
> HDFS-13272-branch-2.001.patch
>
>
> In HDFS-7279, the Jetty server on the DataNode was hard-coded to use 10 
> threads. In addition to the possibility of this being too few threads, it is 
> much higher than necessary in resource constrained environments such as 
> MiniDFSCluster. To avoid compatibility issues, rather than using 
> {{HttpServer2#HTTP_MAX_THREADS}} directly, we can introduce a new 
> configuration for the DataNode's thread pool size.






[jira] [Updated] (HDFS-10276) HDFS should not expose path info that user has no permission to see.

2018-03-06 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-10276:

Labels: security  (was: )

> HDFS should not expose path info that user has no permission to see.
> 
>
> Key: HDFS-10276
> URL: https://issues.apache.org/jira/browse/HDFS-10276
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs, security
>Reporter: Kevin Cox
>Assignee: Yuanbo Liu
>Priority: Major
>  Labels: security
> Fix For: 2.8.0, 2.7.4, 3.0.0-alpha1
>
> Attachments: HDFS-10276.001.patch, HDFS-10276.002.patch, 
> HDFS-10276.003.patch, HDFS-10276.004.patch, HDFS-10276.005.patch, 
> HDFS-10276.006.patch
>
>
> The following issue is remedied by HDFS-5802.
> {quote}
> Given you have a file {{/file}} an existence check for the path 
> {{/file/whatever}} will give different responses for different 
> implementations of FileSystem.
> LocalFileSystem will return false while DistributedFileSystem will throw 
> {{org.apache.hadoop.security.AccessControlException: Permission denied: ..., 
> access=EXECUTE, ...}}
> {quote}
> However, HDFS-5802 may expose information about a path that the user doesn't 
> have permission to see. 
> For example, if the user asks for /a/b/c, but does not have permission to 
> list /a, we should not complain about /a/b






[jira] [Updated] (HDFS-10276) HDFS should not expose path info that user has no permission to see.

2018-03-06 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-10276:

Component/s: security
 fs

> HDFS should not expose path info that user has no permission to see.
> 
>
> Key: HDFS-10276
> URL: https://issues.apache.org/jira/browse/HDFS-10276
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs, security
>Reporter: Kevin Cox
>Assignee: Yuanbo Liu
>Priority: Major
>  Labels: security
> Fix For: 2.8.0, 2.7.4, 3.0.0-alpha1
>
> Attachments: HDFS-10276.001.patch, HDFS-10276.002.patch, 
> HDFS-10276.003.patch, HDFS-10276.004.patch, HDFS-10276.005.patch, 
> HDFS-10276.006.patch
>
>
> The following issue is remedied by HDFS-5802.
> {quote}
> Given you have a file {{/file}} an existence check for the path 
> {{/file/whatever}} will give different responses for different 
> implementations of FileSystem.
> LocalFileSystem will return false while DistributedFileSystem will throw 
> {{org.apache.hadoop.security.AccessControlException: Permission denied: ..., 
> access=EXECUTE, ...}}
> {quote}
> However, HDFS-5802 may expose information about a path that the user doesn't 
> have permission to see. 
> For example, if the user asks for /a/b/c, but does not have permission to 
> list /a, we should not complain about /a/b






[jira] [Resolved] (HDFS-7913) HADOOP_HDFS_LOG_DIR should be HDFS_LOG_DIR in deprecations

2018-02-06 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-7913.

Resolution: Won't Fix

> HADOOP_HDFS_LOG_DIR should be HDFS_LOG_DIR in deprecations
> --
>
> Key: HDFS-7913
> URL: https://issues.apache.org/jira/browse/HDFS-7913
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Critical
> Attachments: HDFS-7913-01.patch, HDFS-7913.patch
>
>
> The wrong variable is deprecated in hdfs-config.sh.  It should be 
> HDFS_LOG_DIR, not HADOOP_HDFS_LOG_DIR.  This is breaking backward 
> compatibility.
> It might be worthwhile to doublecheck the other dep's to make sure they are 
> correct as well.
> Also, release notes for the deprecation jira should be updated to reflect 
> this change.






[jira] [Assigned] (HDFS-12711) deadly hdfs test

2018-02-06 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer reassigned HDFS-12711:
---

Assignee: Allen Wittenauer

> deadly hdfs test
> 
>
> Key: HDFS-12711
> URL: https://issues.apache.org/jira/browse/HDFS-12711
> Project: Hadoop HDFS
>  Issue Type: Test
>Affects Versions: 2.9.0, 2.8.2
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Critical
> Attachments: HDFS-12711.branch-2.00.patch, fakepatch.branch-2.txt
>
>







[jira] [Updated] (HDFS-12711) deadly hdfs test

2018-02-06 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-12711:

Resolution: Incomplete
Status: Resolved  (was: Patch Available)

I think I'm going to close this as incomplete. Yetus 0.7.0 and up appear to be 
successfully preventing HDFS unit tests from killing nodes. It does not, 
however, make the HDFS unit tests work. They are still wildly unreliable (e.g., 
HDFS-12512).

There are two remediations that need to happen, both of which are out of my 
purview:

1. The Hadoop PMC needs to push on INFRA-15685 to get the max task count 
raised. Doing so raises the thread count, but that's only a *temporary* 
solution until HDFS is pushed over the edge again.

2. hadoop-hdfs-project either needs to get refactored into multiple maven 
modules or the simultaneous thread counts need to get greatly reduced. e.g., 
just changing the unit test's DN RPC thread count may work.
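
As a hedged illustration of one knob in that direction (surefire's forkCount 
user property; the value is an example, not a recommendation):

{code}
mvn test -pl hadoop-hdfs-project/hadoop-hdfs -DforkCount=0.5C
{code}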



> deadly hdfs test
> 
>
> Key: HDFS-12711
> URL: https://issues.apache.org/jira/browse/HDFS-12711
> Project: Hadoop HDFS
>  Issue Type: Test
>Affects Versions: 2.9.0, 2.8.2
>Reporter: Allen Wittenauer
>Priority: Critical
> Attachments: HDFS-12711.branch-2.00.patch, fakepatch.branch-2.txt
>
>







[jira] [Commented] (HDFS-12512) RBF: Add WebHDFS

2018-02-06 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16354482#comment-16354482
 ] 

Allen Wittenauer commented on HDFS-12512:
-

{quote}Splitting out hadoop-hdfs-project/hadoop-contracts may still have the 
issue, right? Code changes to the hadoop-hdfs component will still trigger test 
cases in hadoop-hdfs, and the run may still fail there. Or can hadoop-contracts 
test cases still run even if the hadoop-hdfs ones fail?
{quote}


maven is very hierarchical. Yetus tries to disrupt that for performance. Let's 
break this out:

||Directory||Change||Maven from root||Yetus Precommit hadoop-hdfs||Yetus Precommit hadoop-hdfs-contracts||
|hadoop-hdfs-project/hadoop-hdfs/hadoop-hdfs-contracts|hadoop-hdfs|Fail before hadoop-hdfs-contracts|Fail|Won't Execute because it's part of hadoop-hdfs|
|hadoop-hdfs-project/hadoop-hdfs/hadoop-hdfs-contracts|hadoop-hdfs-contracts|Fail before hadoop-hdfs-contracts|Won't Execute, irrelevant patch|Succeed|
|hadoop-hdfs-project/hadoop-hdfs/hadoop-hdfs-contracts|hadoop-hdfs and hadoop-hdfs-contracts|Fail before hadoop-hdfs-contracts|Fail|Won't Execute because it's part of hadoop-hdfs|
|hadoop-hdfs-project/hadoop-hdfs-contracts|hadoop-hdfs-contracts|Fail|Won't Execute, irrelevant patch|Succeed|
|hadoop-hdfs-project/hadoop-hdfs-contracts|hadoop-hdfs|Fail|Fail|Won't execute|
|hadoop-hdfs-project/hadoop-hdfs-contracts|hadoop-hdfs and hadoop-hdfs-contracts|Fail|Fail|Succeed|

 

> RBF: Add WebHDFS
> 
>
> Key: HDFS-12512
> URL: https://issues.apache.org/jira/browse/HDFS-12512
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Íñigo Goiri
>Assignee: Wei Yan
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-12512.000.patch, HDFS-12512.001.patch, 
> HDFS-12512.002.patch, HDFS-12512.003.patch, HDFS-12512.004.patch, 
> HDFS-12512.005.patch, HDFS-12512.006.patch
>
>
> The Router currently does not support WebHDFS. It needs to implement 
> something similar to {{NamenodeWebHdfsMethods}}.






[jira] [Issue Comment Deleted] (HDFS-12512) RBF: Add WebHDFS

2018-02-06 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-12512:

Comment: was deleted

(was: Yes.
)

> RBF: Add WebHDFS
> 
>
> Key: HDFS-12512
> URL: https://issues.apache.org/jira/browse/HDFS-12512
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Íñigo Goiri
>Assignee: Wei Yan
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-12512.000.patch, HDFS-12512.001.patch, 
> HDFS-12512.002.patch, HDFS-12512.003.patch, HDFS-12512.004.patch, 
> HDFS-12512.005.patch, HDFS-12512.006.patch
>
>
> The Router currently does not support WebHDFS. It needs to implement 
> something similar to {{NamenodeWebHdfsMethods}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12512) RBF: Add WebHDFS

2018-02-06 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16354457#comment-16354457
 ] 

Allen Wittenauer commented on HDFS-12512:
-

Yes.


> RBF: Add WebHDFS
> 
>
> Key: HDFS-12512
> URL: https://issues.apache.org/jira/browse/HDFS-12512
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Íñigo Goiri
>Assignee: Wei Yan
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-12512.000.patch, HDFS-12512.001.patch, 
> HDFS-12512.002.patch, HDFS-12512.003.patch, HDFS-12512.004.patch, 
> HDFS-12512.005.patch, HDFS-12512.006.patch
>
>
> The Router currently does not support WebHDFS. It needs to implement 
> something similar to {{NamenodeWebHdfsMethods}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12512) RBF: Add WebHDFS

2018-02-02 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16351092#comment-16351092
 ] 

Allen Wittenauer commented on HDFS-12512:
-

Yetus only runs a module's tests when the patch modifies code in that module.

> RBF: Add WebHDFS
> 
>
> Key: HDFS-12512
> URL: https://issues.apache.org/jira/browse/HDFS-12512
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Íñigo Goiri
>Assignee: Wei Yan
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-12512.000.patch, HDFS-12512.001.patch, 
> HDFS-12512.002.patch, HDFS-12512.003.patch, HDFS-12512.004.patch, 
> HDFS-12512.005.patch, HDFS-12512.006.patch
>
>
> The Router currently does not support WebHDFS. It needs to implement 
> something similar to {{NamenodeWebHdfsMethods}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12512) RBF: Add WebHDFS

2018-02-02 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16351079#comment-16351079
 ] 

Allen Wittenauer commented on HDFS-12512:
-

Parallelization is already set per module.  These tests are sitting in the 
hadoop-hdfs module.  Turning parallelization off in hadoop-hdfs will make the 
hadoop-hdfs tests run for 4 hours (on a good day).
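
For reference, this is the knob in question; a sketch assuming the stock 
parallel-tests profile and testsThreadCount property (the count itself is 
illustrative):

{code}
# per-module parallel run
mvn -pl hadoop-hdfs-project/hadoop-hdfs test -Pparallel-tests -DtestsThreadCount=8

# "turning it off" is just dropping the profile, and eating the runtime
mvn -pl hadoop-hdfs-project/hadoop-hdfs test
{code}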

> RBF: Add WebHDFS
> 
>
> Key: HDFS-12512
> URL: https://issues.apache.org/jira/browse/HDFS-12512
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Íñigo Goiri
>Assignee: Wei Yan
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-12512.000.patch, HDFS-12512.001.patch, 
> HDFS-12512.002.patch, HDFS-12512.003.patch, HDFS-12512.004.patch, 
> HDFS-12512.005.patch, HDFS-12512.006.patch
>
>
> The Router currently does not support WebHDFS. It needs to implement 
> something similar to {{NamenodeWebHdfsMethods}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12512) RBF: Add WebHDFS

2018-02-02 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16351032#comment-16351032
 ] 

Allen Wittenauer commented on HDFS-12512:
-

bq.  do you have any idea on what might be the issue?

This is basically HDFS-12711 (still) 

The HDFS unit tests are completely unreliable on the ASF infrastructure.  You 
can go to pretty much any HDFS patch and see this happening.

bq.  No idea how to get around this.

Breaking hadoop-hdfs into multiple modules is the full fix. Turning off 
parallelization would be a temporary workaround, but that just means the tests 
run 4 hours instead of 1 hour, which would just break the ASF infrastructure in 
other ways.  But other than that, no idea.  In the end, there are too many 
large tests with too many threads running at once. They hit the upper process 
count limits and will actually crash nodes if let loose.
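
A quick way to see the ceiling in question on any of the build hosts (the 
actual values vary per node):

{code}
ulimit -u        # the per-user process/thread cap the tests keep hitting
ps -eLf | wc -l  # rough count of live threads on the box right now
{code}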

Just to make matters worse, Jenkins, Yetus, etc, can't report the failures 
because surefire doesn't report them properly either (SUREFIRE-1447).  I've 
been contemplating workarounds in YETUS-587, but since Hadoop isn't a priority 
of my volunteer time, I haven't spent a lot of effort on it.



> RBF: Add WebHDFS
> 
>
> Key: HDFS-12512
> URL: https://issues.apache.org/jira/browse/HDFS-12512
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Íñigo Goiri
>Assignee: Wei Yan
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-12512.000.patch, HDFS-12512.001.patch, 
> HDFS-12512.002.patch, HDFS-12512.003.patch, HDFS-12512.004.patch, 
> HDFS-12512.005.patch, HDFS-12512.006.patch
>
>
> The Router currently does not support WebHDFS. It needs to implement 
> something similar to {{NamenodeWebHdfsMethods}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13059) Add pie chart in NN UI to show storage used

2018-01-25 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16339580#comment-16339580
 ] 

Allen Wittenauer commented on HDFS-13059:
-

bq. Our UI's are stateless and are not suitable for comparing any data point 
over period of time. 

I never said that the UI should show historical data[*].  I'm saying that 
people will generate their own form of historical data via the UI. I've lost 
track of how many conference presentations I've seen with screenshots of the NN 
UI.  

At this point, I'm bowing out of the topic. I've given my suggestion.

[*] - it could, however, given that the metrics system does keep a level of it 
available.  That again highlights why a bar chart or a stacked column chart would 
be significantly better. But that's a different set of improvements altogether.

> Add pie chart in NN UI to show storage used
> ---
>
> Key: HDFS-13059
> URL: https://issues.apache.org/jira/browse/HDFS-13059
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDFS-13059.001.patch, HDFS-13059.002.patch, 
> HDFS-13059.003.patch, Screen Shot 2018-01-24 at 1.58.28 PM.png, Screen Shot 
> 2018-01-24 at 1.58.33 PM.png, Screen Shot 2018-01-24 at 3.04.17 PM.png, 
> fed-capacity.png
>
>
> This jira proposes to add a pie chart in NN UI to show storage used by:
> * DFS Used (Tooltip : "Storage currently used for DFS.")
> * DFS available (Tooltip : "Storage available for DFS use.")
> * Non DFS Used (Tooltip : "Storage allocated for DFS but currently" +
>  " used by Non DFS storage.")
> Tooltip will help users better understand what these terms mean.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13059) Add pie chart in NN UI to show storage used

2018-01-25 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16339502#comment-16339502
 ] 

Allen Wittenauer commented on HDFS-13059:
-

I guess I've failed to communicate my point here. :(

When talking about storage there are always two extra data points:  time and 
growth. Administrators are going to use this pie chart in presentations.  Then 
use it again in 6 months. It ends up being a horrible comparison because, unless 
the usage of the cluster has changed dramatically, it doesn't really convey 
much information.  That the tooltip is required at all really underscores this 
point.

Additionally, if I have more than one cluster, I'm going to pull up both NN UIs 
and look at both charts simultaneously.  Again, this doesn't really convey much 
information other than maybe the usage patterns are similar.  It doesn't convey 
an actual size.

There's also a potential accessibility problem here, but we should probably 
consult with an expert.

As an alternative, I think I'd much rather have a bar chart where some actual 
numeric information can also be provided without requiring tool tips.  The 
units will also give a much better sense of quantity.  Comparing static bar 
charts over time is also significantly easier.

> Add pie chart in NN UI to show storage used
> ---
>
> Key: HDFS-13059
> URL: https://issues.apache.org/jira/browse/HDFS-13059
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDFS-13059.001.patch, HDFS-13059.002.patch, 
> HDFS-13059.003.patch, Screen Shot 2018-01-24 at 1.58.28 PM.png, Screen Shot 
> 2018-01-24 at 1.58.33 PM.png, Screen Shot 2018-01-24 at 3.04.17 PM.png, 
> fed-capacity.png
>
>
> This jira proposes to add a pie chart in NN UI to show storage used by:
> * DFS Used (Tooltip : "Storage currently used for DFS.")
> * DFS available (Tooltip : "Storage available for DFS use.")
> * Non DFS Used (Tooltip : "Storage allocated for DFS but currently" +
>  " used by Non DFS storage.")
> Tooltip will help users better understand what these terms mean.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13059) Add pie chart in NN UI to show storage used

2018-01-24 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338574#comment-16338574
 ] 

Allen Wittenauer commented on HDFS-13059:
-

bq. Is that our case here? I think so.

Some questions to think about:

a) How does one compare the pie chart from multiple clusters?

b) Given that there will be times when two or more of the slices of the pie 
charts will be nearly the same size, what does that tell the user?

c) When storage is added to a cluster, how is that reflected in the pie chart 
over time?

> Add pie chart in NN UI to show storage used
> ---
>
> Key: HDFS-13059
> URL: https://issues.apache.org/jira/browse/HDFS-13059
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDFS-13059.001.patch, HDFS-13059.002.patch, Screen Shot 
> 2018-01-24 at 1.58.28 PM.png, Screen Shot 2018-01-24 at 1.58.33 PM.png, 
> Screen Shot 2018-01-24 at 3.04.17 PM.png
>
>
> This jira proposes to add a pie chart in NN UI to show storage used by:
> * DFS Used (Tooltip : "Storage currently used for DFS.")
> * DFS available (Tooltip : "Storage available for DFS use.")
> * Non DFS Used (Tooltip : "Storage allocated for DFS but currently" +
>  " used by Non DFS storage.")
> Tooltip will help users better understand what these terms mean.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13059) Add pie chart in NN UI to show storage used

2018-01-24 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338479#comment-16338479
 ] 

Allen Wittenauer commented on HDFS-13059:
-

As a counterpoint:

http://www.businessinsider.com/pie-charts-are-the-worst-2013-6



> Add pie chart in NN UI to show storage used
> ---
>
> Key: HDFS-13059
> URL: https://issues.apache.org/jira/browse/HDFS-13059
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDFS-13059.001.patch, HDFS-13059.002.patch, Screen Shot 
> 2018-01-24 at 1.58.28 PM.png, Screen Shot 2018-01-24 at 1.58.33 PM.png, 
> Screen Shot 2018-01-24 at 3.04.17 PM.png
>
>
> This jira proposes to add a pie chart in NN UI to show storage used by:
> * DFS Used (Tooltip : "Storage currently used for DFS.")
> * DFS available (Tooltip : "Storage available for DFS use.")
> * Non DFS Used (Tooltip : "Storage allocated for DFS but currently" +
>  " used by Non DFS storage.")
> Tooltip will help users better understand what these terms mean.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12996) DataNode Replica Trash

2018-01-15 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16326698#comment-16326698
 ] 

Allen Wittenauer commented on HDFS-12996:
-

bq. Also the design looks very similar to Checkpointing/Snapshots.

The fact that this JIRA even exists suggests that snapshots are/were a failure.  
On other file systems, snapshots are exactly the recovery model for these types 
of deletes.

...

Reading through the doc, there are a handful of spots where I see the use cases 
are extremely limited.  But I'm really left with a basic question:

Why isn't there an option to just have the NN automatically take a snapshot for 
deletes over a certain size and then automatically delete these snapshots after 
X amount of time?  Wouldn't that add the protection that is being requested 
while avoiding the requirement to restart the NN?
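
For comparison, the manual version of that recovery model already exists with 
the snapshot commands (the paths and snapshot name here are purely illustrative):

{code}
hdfs dfsadmin -allowSnapshot /data
hdfs dfs -createSnapshot /data before-delete
hdfs dfs -rm -r /data/big-dir
# recovery is a copy back out of the read-only .snapshot directory
hdfs dfs -cp /data/.snapshot/before-delete/big-dir /data/big-dir
hdfs dfs -deleteSnapshot /data before-delete
{code}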



> DataNode Replica Trash
> --
>
> Key: HDFS-12996
> URL: https://issues.apache.org/jira/browse/HDFS-12996
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: DataNode_Replica_Trash_Design_Doc.pdf
>
>
> DataNode Replica Trash will allow administrators to recover from a recent 
> delete request that resulted in catastrophic loss of user data. This is 
> achieved by placing all invalidated blocks in a replica trash on the datanode 
> before completely purging them from the system. The design doc is attached 
> here.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12916) HDFS commands throws error, when only shaded clients in classpath

2017-12-14 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16291849#comment-16291849
 ] 

Allen Wittenauer commented on HDFS-12916:
-

bq. I have included Hadoop tools lib folder, so which will be adequate to get 
all its dependencies.

I think you've missed my point:  not all of the bits in tools are just Hadoop 
clients.  They have other dependencies that are outside of both tools and the 
runtime jar.

bq.  I have just ran distcp for sample, need to run other commands to know more 
if they cause issue.

distcp and a few others are literally YARN clients.  Try enabling something 
bigger like Azure or AWS.

bq. if I override all home variables(like hdfs_home, mapred_home) to same 
client location in hadoop-layout.sh, it will have only shaded client jars.

We haven't even talked about things that people run with 'hadoop jar'.

At this point, I'm changing this to a new feature and moving it to HADOOP since 
it isn't HDFS-specific.  It's not a bug.  Everything is working as intended.

> HDFS commands throws error, when only shaded clients in classpath
> -
>
> Key: HDFS-12916
> URL: https://issues.apache.org/jira/browse/HDFS-12916
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>
> [root@n001 hadoop]# bin/hdfs dfs -rm /
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/htrace/core/Tracer$Builder
>   at org.apache.hadoop.fs.FsShell.run(FsShell.java:303)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at org.apache.hadoop.fs.FsShell.main(FsShell.java:389)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.htrace.core.Tracer$Builder
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 4 more
> cc [~busbey]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12916) HDFS commands throws error, when only shaded clients in classpath

2017-12-14 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-12916:

Issue Type: New Feature  (was: Bug)

> HDFS commands throws error, when only shaded clients in classpath
> -
>
> Key: HDFS-12916
> URL: https://issues.apache.org/jira/browse/HDFS-12916
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>
> [root@n001 hadoop]# bin/hdfs dfs -rm /
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/htrace/core/Tracer$Builder
>   at org.apache.hadoop.fs.FsShell.run(FsShell.java:303)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at org.apache.hadoop.fs.FsShell.main(FsShell.java:389)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.htrace.core.Tracer$Builder
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 4 more
> cc [~busbey]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12916) HDFS commands throws error, when only shaded clients in classpath

2017-12-14 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16291834#comment-16291834
 ] 

Allen Wittenauer commented on HDFS-12916:
-

bq. Ya admin commands will break with this, but hadoop tools commands should 
work as I have copied tools/lib folder and changed hadoop-layout.sh accordingly

In hadoop 3.x when HADOOP_OPTIONAL_TOOLS are loaded, they expect their 
dependencies from the other directories to already be in the classpath. I'd be 
greatly surprised if they find them from the shaded jar since I'm pretty sure 
we do class hiding.
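
For context, the mechanism in question (a sketch; the tool list is illustrative):

{code}
# etc/hadoop/hadoop-env.sh
export HADOOP_OPTIONAL_TOOLS="hadoop-aws,hadoop-azure"
{code}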

bq. For backward compatibility we can have existing hdfs script as it is, and 
we can have 2 scripts 

-1 on having multiple entry points.  This is one of the key reasons why the 
shell code in branch-2 was a disaster area. I absolutely refuse to let the 
shell code regress again.

But no matter what you do, everyone is already trained to use 'hadoop 
classpath'.  Until that gets changed, anything else that is done is moot.
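
i.e., the pattern that client launch scripts embed today (a sketch; the main 
class is hypothetical):

{code}
CLASSPATH=$(hadoop classpath --glob)
java -cp "${CLASSPATH}" com.example.MyClient
{code}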


> HDFS commands throws error, when only shaded clients in classpath
> -
>
> Key: HDFS-12916
> URL: https://issues.apache.org/jira/browse/HDFS-12916
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>
> [root@n001 hadoop]# bin/hdfs dfs -rm /
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/htrace/core/Tracer$Builder
>   at org.apache.hadoop.fs.FsShell.run(FsShell.java:303)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at org.apache.hadoop.fs.FsShell.main(FsShell.java:389)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.htrace.core.Tracer$Builder
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 4 more
> cc [~busbey]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12916) HDFS commands throws error, when only shaded clients in classpath

2017-12-13 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16290409#comment-16290409
 ] 

Allen Wittenauer commented on HDFS-12916:
-

bq. Changed the hadoop-layout.sh mapred_home hadoop_hdfs_home etc. to point to 
only hdfs client jar location and copied all tools and shaded client jars .

That just broke all of the admin commands and a good chunk of hadoop-tools.

bq. Actually this is done as part of work to use shaded client jars for running 
hdfs commands and also not to expose other run time jars to clients. And also 
use same jars for running hdfs commands.

The vast majority of hadoop shell commands are *not* client-level commands and 
actually need all of those jars.  This means that classpath construction has to 
take place on a per-command basis if one truly wants to hide all of the extra 
jars.  That, in turn, means a ton of extra code in the shell scripts, along the 
lines of the sketch below for every single command.
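
A rough sketch of what that per-command construction would mean, reusing the 
existing hadoop_add_classpath helper (the command grouping and paths are 
illustrative, not a proposal):

{code}
case "${HADOOP_SUBCMD}" in
  dfs|getconf|envvars)
    # true client commands could get by on the shaded artifacts
    hadoop_add_classpath "${HADOOP_HOME}/share/hadoop/client/*"
    ;;
  *)
    # admin/server commands still need the full runtime tree
    hadoop_add_classpath "${HADOOP_HDFS_HOME}/share/hadoop/hdfs/*"
    hadoop_add_classpath "${HADOOP_COMMON_HOME}/share/hadoop/common/*"
    ;;
esac
{code}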

I brought up the idea of having the default classpath with shaded jars back in 
April (https://s.apache.org/LTzv). Given how much Hortonworks, Yahoo!, and 
Cloudera have been fighting against backwards incompatibilities breaking 
rolling upgrade despite this being a major release, it likely would have been a 
wasted effort anyway.


> HDFS commands throws error, when only shaded clients in classpath
> -
>
> Key: HDFS-12916
> URL: https://issues.apache.org/jira/browse/HDFS-12916
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>
> [root@n001 hadoop]# bin/hdfs dfs -rm /
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/htrace/core/Tracer$Builder
>   at org.apache.hadoop.fs.FsShell.run(FsShell.java:303)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at org.apache.hadoop.fs.FsShell.main(FsShell.java:389)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.htrace.core.Tracer$Builder
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 4 more
> cc [~busbey]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12916) HDFS commands throws error, when only shaded clients in classpath

2017-12-13 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16290364#comment-16290364
 ] 

Allen Wittenauer commented on HDFS-12916:
-

bq. HDFS commands throws error, when only shaded clients in classpath

a) What is the goal here? 

b) What exact surgery was performed to make that happen, since that's not how 
Hadoop ships out of the box?

bq. After adding below jars to classpath, commands started working

Correct.  Clients are expected to define their own logging.  That's one of the 
key features of using the shaded jars.  htrace is a bit of a surprise, but that 
might be expected too.  Thus far, I'm not really seeing a bug here.


> HDFS commands throws error, when only shaded clients in classpath
> -
>
> Key: HDFS-12916
> URL: https://issues.apache.org/jira/browse/HDFS-12916
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>
> [root@n001 hadoop]# bin/hdfs dfs -rm /
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/htrace/core/Tracer$Builder
>   at org.apache.hadoop.fs.FsShell.run(FsShell.java:303)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at org.apache.hadoop.fs.FsShell.main(FsShell.java:389)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.htrace.core.Tracer$Builder
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 4 more
> cc [~busbey]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12711) deadly hdfs test

2017-11-20 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16259536#comment-16259536
 ] 

Allen Wittenauer commented on HDFS-12711:
-

FYI HADOOP-13514.

> deadly hdfs test
> 
>
> Key: HDFS-12711
> URL: https://issues.apache.org/jira/browse/HDFS-12711
> Project: Hadoop HDFS
>  Issue Type: Test
>Affects Versions: 2.9.0, 2.8.2
>Reporter: Allen Wittenauer
>Priority: Critical
> Attachments: HDFS-12711.branch-2.00.patch, fakepatch.branch-2.txt
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12711) deadly hdfs test

2017-11-17 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257668#comment-16257668
 ] 

Allen Wittenauer commented on HDFS-12711:
-

Doing some quick math, my estimate is that we received 730 test results out 
of ~3000.  So yes, we lost roughly 75% of the test results in that run.

HDFS-12731's https://builds.apache.org/job/PreCommit-HDFS-Build/22132/ run only 
dropped ~34%.  So hey, that's an improvement...

> deadly hdfs test
> 
>
> Key: HDFS-12711
> URL: https://issues.apache.org/jira/browse/HDFS-12711
> Project: Hadoop HDFS
>  Issue Type: Test
>Affects Versions: 2.9.0, 2.8.2
>Reporter: Allen Wittenauer
>Priority: Critical
> Attachments: HDFS-12711.branch-2.00.patch, fakepatch.branch-2.txt
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12711) deadly hdfs test

2017-11-17 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257558#comment-16257558
 ] 

Allen Wittenauer commented on HDFS-12711:
-

bq. We usually try to rerun the failed tests locally to check if they are 
related to the patch. 

I think this may be the key to why not enough people are in panic mode. Let's 
take Erik's log as an example.  It's from HDFS-12823.  Precommit reported ~20 
tests that either failed or timed out.  It reaped 20 excess surefire jvms after 
mvn returned.  The asflicense check came back with 130 dump log files.  Those 
130 dump log files, in almost every case I looked at, were not reported to 
surefire.  That means we're probably looking at a minimum of 150 tests failed, 
not 20. Given that those 130 broken JVMs likely had more than one test each...

We're basically dropping a very large percentage (maybe even the majority) of 
test results on the ground.

> deadly hdfs test
> 
>
> Key: HDFS-12711
> URL: https://issues.apache.org/jira/browse/HDFS-12711
> Project: Hadoop HDFS
>  Issue Type: Test
>Affects Versions: 2.9.0, 2.8.2
>Reporter: Allen Wittenauer
>Priority: Critical
> Attachments: HDFS-12711.branch-2.00.patch, fakepatch.branch-2.txt
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11096) Support rolling upgrade between 2.x and 3.x

2017-11-17 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257197#comment-16257197
 ] 

Allen Wittenauer commented on HDFS-11096:
-

bq.  I actually feel pretty strongly about keeping set -e here

http://mywiki.wooledge.org/BashFAQ/105
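
The one-screen version of the problem described there, as a sketch (the file 
name is illustrative):

{code}
set -e
count=$(grep -c widget inventory.txt)  # grep exits 1 on zero matches, so set -e aborts right here...
echo "found ${count}"                  # ...and this never runs, even though 'found 0' was the intent
{code}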

bq. have I addressed the issues you pointed out to your satisfaction?

A quick glance:

{code}
+# shellcheck disable=SC1090
+source "${HADOOP_ROOT}/hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh"
{code}

You don't need to disable that.  You can tell shellcheck where to look for that 
file:

{code}
# shellcheck source=./hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
{code}

bq. There were no new shellcheck issues. 

That's only because of the massive amount of quoting.  There are a ton of 
problems that shellcheck can't catch as a result of that.


> Support rolling upgrade between 2.x and 3.x
> ---
>
> Key: HDFS-11096
> URL: https://issues.apache.org/jira/browse/HDFS-11096
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rolling upgrades
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Sean Mackrory
>Priority: Blocker
> Attachments: HDFS-11096.001.patch, HDFS-11096.002.patch, 
> HDFS-11096.003.patch, HDFS-11096.004.patch, HDFS-11096.005.patch, 
> HDFS-11096.006.patch, HDFS-11096.007.patch
>
>
> trunk has a minimum software version of 3.0.0-alpha1. This means we can't 
> rolling upgrade between branch-2 and trunk.
> This is a showstopper for large deployments. Unless there are very compelling 
> reasons to break compatibility, let's restore the ability to rolling upgrade 
> to 3.x releases.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12711) deadly hdfs test

2017-11-16 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256584#comment-16256584
 ] 

Allen Wittenauer commented on HDFS-12711:
-

Ignoring the hs_err_pid log files is pretty much just sticking our collective 
heads in the sand about actual, real problems with the unit tests. The unit 
tests themselves haven't been rock solid for a very long time, even before all 
of this started happening.   Entries have been put into the ignore pile so often 
that I wouldn't be surprised if the community is already at the point where most 
developers are ignoring precommit (e.g., commits with findbugs reported in 
the issues, javadoc compilation failures being treated as "environmental", etc., 
etc.).

If I were actually paying more attention to day-to-day Hadoop bits these days, 
I'd probably be ready to disable unit tests (at least HDFS) to specifically 
avoid the "cried wolf" condition.  The rest of the precommit tests work 
properly the vast majority of the time and are probably more important given 
the current state of things. (Never mind the massive speed up. QBT is hitting 
the 15 hour mark for a full run for branch-2 when it is actually allowed to 
complete.)  No one seems to actually care that the unit tests are a broken mess 
and I doubt they'd be missed.

My goal here was to prevent Hadoop from bringing down the rest of the ASF build 
infrastructure.  It's under enough stress without this project making things 
that much worse.  Achievement unlocked and other Yetus users will pick up those 
new safety features in the next release.  I should probably close this JIRA 
issue. Unless someone else plans to spend some effort on these bugs?  At least 
at this point in time, I view my work here as complete. 

Also:

{code}
/build/
{code}

ARGH.  That path hasn't been valid since Hadoop used ant.  A great example of 
"well, if we ignore it, it doesn't exist, right?"  Anything that is still 
using /build/ almost certainly isn't safe for parallel tests and is likely 
contributing to a whole host of problems.
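
Finding the stragglers is a one-liner from the source root (a sketch):

{code}
grep -rn --include='*.java' '/build/' . | grep -v '/target/'
{code}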

> deadly hdfs test
> 
>
> Key: HDFS-12711
> URL: https://issues.apache.org/jira/browse/HDFS-12711
> Project: Hadoop HDFS
>  Issue Type: Test
>Affects Versions: 2.9.0, 2.8.2
>Reporter: Allen Wittenauer
>Priority: Critical
> Attachments: HDFS-12711.branch-2.00.patch, fakepatch.branch-2.txt
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12711) deadly hdfs test

2017-11-16 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256283#comment-16256283
 ] 

Allen Wittenauer commented on HDFS-12711:
-

It's probably also worth pointing out that those files also represent tests 
that weren't actually executed.  So they aren't recorded in the fail/success 
output. 

> deadly hdfs test
> 
>
> Key: HDFS-12711
> URL: https://issues.apache.org/jira/browse/HDFS-12711
> Project: Hadoop HDFS
>  Issue Type: Test
>Affects Versions: 2.9.0, 2.8.2
>Reporter: Allen Wittenauer
>Priority: Critical
> Attachments: HDFS-12711.branch-2.00.patch, fakepatch.branch-2.txt
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12711) deadly hdfs test

2017-11-16 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256275#comment-16256275
 ] 

Allen Wittenauer commented on HDFS-12711:
-

Those files are the stack dumps from the unit tests that ran out of resources.  
Fix the unit tests, those files go away.

> deadly hdfs test
> 
>
> Key: HDFS-12711
> URL: https://issues.apache.org/jira/browse/HDFS-12711
> Project: Hadoop HDFS
>  Issue Type: Test
>Affects Versions: 2.9.0, 2.8.2
>Reporter: Allen Wittenauer
>Priority: Critical
> Attachments: HDFS-12711.branch-2.00.patch, fakepatch.branch-2.txt
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12711) deadly hdfs test

2017-11-14 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16251809#comment-16251809
 ] 

Allen Wittenauer commented on HDFS-12711:
-

With the kill code in place, I'm seeing wild fluctuations in the hdfs and mr unit 
tests.  Lots of unreaped processes.  Probably a hint that they are paused for 
some reason.  I have a hunch that we're pretty much bottlenecked on IO. Tests 
happen on a single disk that is shared among all the executors on that jenkins 
node.  If, say, two HDFS test runs are going at once, that could easily be 
thousands of threads doing IO to the same disk.

It might be smart to decrease the number of parallel tests, at least in HDFS. This 
obviously impacts runtime (which is already out of control) but will probably 
increase accuracy.  Or we could attempt to split up the tests so that the 
compute-heavy ones run in parallel and the IO-heavy ones run serially.

Of course, if no one is paying attention to the tests anyway, we could just 
disable them altogether I guess.

> deadly hdfs test
> 
>
> Key: HDFS-12711
> URL: https://issues.apache.org/jira/browse/HDFS-12711
> Project: Hadoop HDFS
>  Issue Type: Test
>Affects Versions: 2.9.0, 2.8.2
>Reporter: Allen Wittenauer
>Priority: Critical
> Attachments: HDFS-12711.branch-2.00.patch, fakepatch.branch-2.txt
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12734) Ozone: generate optional, version specific documentation during the build

2017-11-07 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242282#comment-16242282
 ] 

Allen Wittenauer commented on HDFS-12734:
-

bq. HADOOP-14163 is finished (by me)

I'll go comment over there I guess because I firmly disagree that 14163 is 
anywhere close to finished.

bq. docker images for development.

So basically, we're adding a bunch of stuff that will never see the light of 
day in an Apache Hadoop release?  Why does this even exist then?  We've got 
enough half-integrated bits hanging in the source tree.

bq. One of the reason why I prefer hugo is exactly the problems which are 
introduced by npm/bower/webpack/yarn.

... except this isn't replacing those problems.  Instead, it's adding another 
framework, so now we have the existing problems plus whatever new ones come 
with the additional framework.

> Ozone: generate optional, version specific documentation during the build
> -
>
> Key: HDFS-12734
> URL: https://issues.apache.org/jira/browse/HDFS-12734
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12734-HDFS-7240.001.patch, 
> HDFS-12734-HDFS-7240.002.patch
>
>
> HDFS-12664 susggested a new way to include documentation in the KSM web ui.
> This patch modifies the build lifecycle to automatically generate the 
> documentation *if* hugo is on the PATH. If hugo is not there  the 
> documentation won't be generated and it won't be displayed (see HDFS-12661)
> To test: Apply this patch on top of HDFS-12664 do a full build and check the 
> KSM webui.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12734) Ozone: generate version specific documentation during the build

2017-11-03 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16238499#comment-16238499
 ] 

Allen Wittenauer commented on HDFS-12734:
-

bq. The only thing this patch does is that it adds hugo automatically in case 
of Jenkins builds, It just makes it easy for us when we do releases. 

The current set of patches does neither of those things:

* Yetus isn't going to fail the build if hugo isn't there.
* Since hugo is being added below the CUT HERE line, it won't be part of the 
Docker image that create-release or Yetus uses.




> Ozone: generate version specific documentation during the build
> ---
>
> Key: HDFS-12734
> URL: https://issues.apache.org/jira/browse/HDFS-12734
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDFS-12734-HDFS-7240.001.patch
>
>
> HDFS-12664 susggested a new way to include documentation in the KSM web ui.
> This patch modifies the build lifecycle to automatically generate the 
> documentation *if* hugo is on the PATH. If hugo is not there  the 
> documentation won't be generated and it won't be displayed (see HDFS-12661)
> To test: Apply this patch on top of HDFS-12664 do a full build and check the 
> KSM webui.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12734) Ozone: generate version specific documentation during the build

2017-11-03 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16238467#comment-16238467
 ] 

Allen Wittenauer commented on HDFS-12734:
-

bq. from the comments on that thread, it looks to me that is the direction we 
want to go. 

It doesn't mean anything until it's been committed.  I can point to lots and 
lots of issues where this is true... and years later, still open.

bq. If you have a hugo based site, how do you want to generate it?

Why does it need to be hugo-based?  We've already got all of 
node/npm/bower/yarn sitting there due to the overly heavy yarn-ui.

bq. by asking people to install the build tool each time? 

What do you think happens for people who aren't using Docker?  Or, what about 
platforms where Go doesn't work at all?

In my mind, there is a very big difference between what gets posted on 
hadoop.apache.org and the requirements that we place on end users trying to 
build Hadoop.  Every additional dependency just makes it that much harder.

> Ozone: generate version specific documentation during the build
> ---
>
> Key: HDFS-12734
> URL: https://issues.apache.org/jira/browse/HDFS-12734
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDFS-12734-HDFS-7240.001.patch
>
>
> HDFS-12664 susggested a new way to include documentation in the KSM web ui.
> This patch modifies the build lifecycle to automatically generate the 
> documentation *if* hugo is on the PATH. If hugo is not there  the 
> documentation won't be generated and it won't be displayed (see HDFS-12661)
> To test: Apply this patch on top of HDFS-12664 do a full build and check the 
> KSM webui.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12734) Ozone: generate version specific documentation during the build

2017-11-03 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16238440#comment-16238440
 ] 

Allen Wittenauer commented on HDFS-12734:
-

HADOOP-14163 has been open for over 6 months and doesn't appear to be anywhere 
near completion. It also doesn't appear to impact any build-time dependencies.

That's a very different situation from what this patch is proposing.  It 
specifically adds another build-time dependency in the critical path. Worse, I 
think this may be something like the 5th website generator in the source tree.  
(I don't even know if I can name them all anymore.) To add insult to injury, 
BUILDING.txt wasn't even updated to list it as a dependency.


> Ozone: generate version specific documentation during the build
> ---
>
> Key: HDFS-12734
> URL: https://issues.apache.org/jira/browse/HDFS-12734
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDFS-12734-HDFS-7240.001.patch
>
>
> HDFS-12664 susggested a new way to include documentation in the KSM web ui.
> This patch modifies the build lifecycle to automatically generate the 
> documentation *if* hugo is on the PATH. If hugo is not there  the 
> documentation won't be generated and it won't be displayed (see HDFS-12661)
> To test: Apply this patch on top of HDFS-12664 do a full build and check the 
> KSM webui.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12734) Ozone: generate version specific documentation during the build

2017-11-03 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16238400#comment-16238400
 ] 

Allen Wittenauer commented on HDFS-12734:
-

bq. If hugo is not there the documentation won't be generated and it won't be 
displayed

-1

This is not an option. mvn site needs to be used to be consistent with the rest 
of Hadoop.  If you want to move Hadoop to something that isn't mvn site, that's 
a much bigger conversation and should definitely not be snuck into a patch.

> Ozone: generate version specific documentation during the build
> ---
>
> Key: HDFS-12734
> URL: https://issues.apache.org/jira/browse/HDFS-12734
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDFS-12734-HDFS-7240.001.patch
>
>
> HDFS-12664 susggested a new way to include documentation in the KSM web ui.
> This patch modifies the build lifecycle to automatically generate the 
> documentation *if* hugo is on the PATH. If hugo is not there  the 
> documentation won't be generated and it won't be displayed (see HDFS-12661)
> To test: Apply this patch on top of HDFS-12664 do a full build and check the 
> KSM webui.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12711) deadly hdfs test

2017-11-03 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16238107#comment-16238107
 ] 

Allen Wittenauer commented on HDFS-12711:
-

Thanks!

I'll have to play around with sending a SIGQUIT. The other thing is that some 
process types may need different signals.  It might be useful to be able to 
define the "signal path"... e.g., surefire processes get QUIT -> TERM -> KILL, 
along the lines of the sketch below.
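
Roughly this (the process pattern and the sleep interval are illustrative, not 
what Yetus actually does):

{code}
reap() {
  local pattern=$1 sig
  for sig in QUIT TERM KILL; do
    pgrep -f "${pattern}" > /dev/null || return 0   # nothing left to kill
    pkill "-${sig}" -f "${pattern}"                 # QUIT first so the JVMs dump their stacks
    sleep 10
  done
}
reap surefirebooter
{code}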

The other thing is for the archiver to save off the stack trace logs 
(hs_err_pidXX.log files) we do get.  That's just a settings thing that I've 
been too busy to set up in Jenkins.

For now, though, I'm sort of tired of looking at this problem and will go work 
on something else for a while.  It's at the point that the issues are firmly 
contained from an ASF build infra perspective; it rests solely in the hands of 
the Hadoop community to fix their unit tests (or even base code) to be less broken.
 

> deadly hdfs test
> 
>
> Key: HDFS-12711
> URL: https://issues.apache.org/jira/browse/HDFS-12711
> Project: Hadoop HDFS
>  Issue Type: Test
>Affects Versions: 2.9.0, 2.8.2
>Reporter: Allen Wittenauer
>Priority: Critical
> Attachments: HDFS-12711.branch-2.00.patch, fakepatch.branch-2.txt
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12711) deadly hdfs test

2017-11-03 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16237685#comment-16237685
 ] 

Allen Wittenauer commented on HDFS-12711:
-

Finally got a full qbt run on branch-2, thanks to YETUS-561 and YETUS-570:

https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/27/console

branch-2 is still a broken mess (those test times! argh!), but at least it 
won't kill nodes anymore.

> deadly hdfs test
> 
>
> Key: HDFS-12711
> URL: https://issues.apache.org/jira/browse/HDFS-12711
> Project: Hadoop HDFS
>  Issue Type: Test
>Affects Versions: 2.9.0, 2.8.2
>Reporter: Allen Wittenauer
>Priority: Critical
> Attachments: HDFS-12711.branch-2.00.patch, fakepatch.branch-2.txt
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12711) deadly hdfs test

2017-11-01 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-12711:

Attachment: fakepatch.branch-2.txt

> deadly hdfs test
> 
>
> Key: HDFS-12711
> URL: https://issues.apache.org/jira/browse/HDFS-12711
> Project: Hadoop HDFS
>  Issue Type: Test
>Affects Versions: 2.9.0, 2.8.2
>Reporter: Allen Wittenauer
>Priority: Critical
> Attachments: HDFS-12711.branch-2.00.patch, fakepatch.branch-2.txt
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12711) deadly hdfs test

2017-10-30 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16226299#comment-16226299
 ] 

Allen Wittenauer commented on HDFS-12711:
-

For those playing at home:

YETUS-570 (in development) changes how precommit handles tests.  It will now 
seek out and kill processes that match a certain pattern after unit tests are 
run.  It reports the number that it had to kill:

| Stuck Test Processes | hadoop-hdfs-project/hadoop-hdfs:21 |

and generates a log to show which processes those were:

| Stuck Test Processes Log | 
https://builds.apache.org/job/PreCommit-HDFS-Build2/9/artifact/out/reaper-hadoop-hdfs-project_hadoop-hdfs.log
 |

It's supposed to do this after every individual module, but I've got a (simple) 
bug to fix first.  In any case, this should help give a metric for just how 
broken a particular set of tests actually is.  Hopefully at some point we'll 
have the logic to pinpoint it to individual tests, but using the actual unit 
test log should be pretty helpful.

It's also worth pointing out that this change will also help the full qbt to 
actually complete.  But I need to fix that bug first.

> deadly hdfs test
> 
>
> Key: HDFS-12711
> URL: https://issues.apache.org/jira/browse/HDFS-12711
> Project: Hadoop HDFS
>  Issue Type: Test
>Affects Versions: 2.9.0, 2.8.2
>Reporter: Allen Wittenauer
>Priority: Critical
> Attachments: HDFS-12711.branch-2.00.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


