[jira] [Commented] (HDDS-891) Create customized yetus personality for ozone

2018-12-09 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16714382#comment-16714382
 ] 

Elek, Marton commented on HDDS-891:
---

bq. and for reasons which have yet to be justified as to why that is desirable. 

Sorry if it was not clear enough from my last paragraph, my fault. The reason 
is simplicity and maintainability. I believe that one project could be 
built with two separate personalities if: 1) it's easier to maintain, and 2) 
there are two separate build paths.

I agree with you: we need to rethink the whole build process when we 
deliver ozone together with the main hadoop release.

> Create customized yetus personality for ozone
> -
>
> Key: HDDS-891
> URL: https://issues.apache.org/jira/browse/HDDS-891
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>
> Ozone pre commit builds (such as 
> https://builds.apache.org/job/PreCommit-HDDS-Build/) use the official hadoop 
> personality from yetus.
> Yetus personalities are bash scripts which contain personalization for 
> specific builds.
> The hadoop personality tries to identify which project should be built and 
> uses a partial build to build only the required subprojects, because the full 
> build is very time consuming.
> But in Ozone:
> 1.) The build + unit tests are very fast
> 2.) We don't need all the checks (for example the hadoop specific shading 
> test)
> 3.) We prefer to do a full build and full unit test for the hadoop-ozone and 
> hadoop-hdds subprojects (for example the hadoop-ozone integration test should 
> always be executed as it contains many generic unit tests)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14096) [SPS] : Add Support for Storage Policy Satisfier in ViewFs

2018-12-09 Thread Surendra Singh Lilhore (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16714325#comment-16714325
 ] 

Surendra Singh Lilhore commented on HDFS-14096:
---

Thanks [~ayushtkn] for the patch.
 1. Please change the API description; this behavior changed in HDFS-12291:
{code:java}
+  /**
+   * Set the source path to satisfy storage policy. This API is non-recursive
+   * in nature, i.e., if the source path is a directory then all the files
+   * immediately under the directory would be considered for satisfying the
+   * policy and the sub-directories if any under this path will be skipped.
+   *
+   * @param path The source path referring to either a directory or a file.
+   */
{code}
2. TestFileContextSps is just verifying whether SPS is working or not; it's not 
checking anything about ViewFs.
 3. Implement the same API in {{ViewFileSystem}} (a rough sketch of the idea is 
below).
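
A hypothetical sketch of what item 3 asks for, following the usual 
{{ViewFileSystem}} mount-resolution pattern (the override below is an 
illustration of the idea, not the actual patch):
{code:java}
  // Hypothetical sketch: resolve the mount point and delegate SPS to the
  // target file system, mirroring other ViewFileSystem operations.
  @Override
  public void satisfyStoragePolicy(final Path src) throws IOException {
    InodeTree.ResolveResult<FileSystem> res =
        fsState.resolve(getUriPath(src), true);
    res.targetFileSystem.satisfyStoragePolicy(res.remainingPath);
  }
{code}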

> [SPS] : Add Support for Storage Policy Satisfier in ViewFs
> --
>
> Key: HDFS-14096
> URL: https://issues.apache.org/jira/browse/HDFS-14096
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14096-01.patch
>
>
> Add support for SPS in ViewFileSystem.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14096) [SPS] : Add Support for Storage Policy Satisfier in ViewFs

2018-12-09 Thread Surendra Singh Lilhore (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16714325#comment-16714325
 ] 

Surendra Singh Lilhore edited comment on HDFS-14096 at 12/10/18 6:13 AM:
-

Thanks [~ayushtkn] for the patch.
 1. Please change the API description; this behavior changed in HDFS-12291:
{code:java}
+  /**
+   * Set the source path to satisfy storage policy. This API is non-recursive
+   * in nature, i.e., if the source path is a directory then all the files
+   * immediately under the directory would be considered for satisfying the
+   * policy and the sub-directories if any under this path will be skipped.
+   *
+   * @param path The source path referring to either a directory or a file.
+   */
{code}
2. TestFileContextSps is just verifying whether SPS is working or not; it's not 
checking anything about ViewFs.
3. Implement the same API in ViewFileSystem.


was (Author: surendrasingh):
Thanks [~ayushtkn] for the patch.
 1. Please change the API description; this behavior changed in HDFS-12291:
{code:java}
+  /**
+   * Set the source path to satisfy storage policy. This API is non-recursive
+   * in nature, i.e., if the source path is a directory then all the files
+   * immediately under the directory would be considered for satisfying the
+   * policy and the sub-directories if any under this path will be skipped.
+   *
+   * @param path The source path referring to either a directory or a file.
+   */
{code}
2. TestFileContextSps is just verifying whether SPS is working or not; it's not 
checking anything about ViewFs.
 3. Implement the same API in \{{ViewFileSystem}}.

> [SPS] : Add Support for Storage Policy Satisfier in ViewFs
> --
>
> Key: HDFS-14096
> URL: https://issues.apache.org/jira/browse/HDFS-14096
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14096-01.patch
>
>
> Add support for SPS in ViewFileSystem.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14096) ViewFs : Add Support for Storage Policy Satisfier

2018-12-09 Thread Surendra Singh Lilhore (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-14096:
--
Issue Type: Sub-task  (was: Improvement)
Parent: HDFS-12226

> ViewFs : Add Support for Storage Policy Satisfier
> -
>
> Key: HDFS-14096
> URL: https://issues.apache.org/jira/browse/HDFS-14096
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14096-01.patch
>
>
> Add support for SPS in ViewFileSystem.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-891) Create customized yetus personality for ozone

2018-12-09 Thread Allen Wittenauer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16714317#comment-16714317
 ] 

Allen Wittenauer commented on HDDS-891:
---

bq. Or is the problem to check a modification which is filed under HADOOP but 
modifies something under hadoop-ozone/hadoop-hdds? I don't think it is 
handled right now (so we would be no worse off than now), and I didn't see any 
example of that.

We've already been seeing this fail in the nightly qbt since ozone got 
committed.  Whether we see changes happening anywhere else or not is 
irrelevant.  

bq. Can't see any problem here. A full (hadoop + ozone) checkstyle should 
execute exactly the same checkstyle rules which are checked by the ozone 
personality.

It currently does not.

bq. For me, using hadoop + ozone personalities seems to be a cleaner 
separation.

Ozone is part of Hadoop.  The whole point of making it that way was, from what I 
can tell, to get co-bundled at some point in the future.  Making a separate 
personality is exactly the opposite direction and for reasons which have yet to 
be justified as to why that is desirable.  

My -1 remains.



> Create customized yetus personality for ozone
> -
>
> Key: HDDS-891
> URL: https://issues.apache.org/jira/browse/HDDS-891
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>
> Ozone pre commit builds (such as 
> https://builds.apache.org/job/PreCommit-HDDS-Build/) use the official hadoop 
> personality from yetus.
> Yetus personalities are bash scripts which contain personalization for 
> specific builds.
> The hadoop personality tries to identify which project should be built and 
> uses a partial build to build only the required subprojects, because the full 
> build is very time consuming.
> But in Ozone:
> 1.) The build + unit tests are very fast
> 2.) We don't need all the checks (for example the hadoop specific shading 
> test)
> 3.) We prefer to do a full build and full unit test for the hadoop-ozone and 
> hadoop-hdds subprojects (for example the hadoop-ozone integration test should 
> always be executed as it contains many generic unit tests)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14096) [SPS] : Add Support for Storage Policy Satisfier in ViewFs

2018-12-09 Thread Surendra Singh Lilhore (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-14096:
--
Summary: [SPS] : Add Support for Storage Policy Satisfier in ViewFs  (was: 
ViewFs : Add Support for Storage Policy Satisfier)

> [SPS] : Add Support for Storage Policy Satisfier in ViewFs
> --
>
> Key: HDFS-14096
> URL: https://issues.apache.org/jira/browse/HDFS-14096
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14096-01.patch
>
>
> Add support for SPS in ViewFileSystem.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14129) RBF: Create new policy provider for router

2018-12-09 Thread Surendra Singh Lilhore (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16714310#comment-16714310
 ] 

Surendra Singh Lilhore edited comment on HDFS-14129 at 12/10/18 5:37 AM:
-

Thanks [~RANith] for the patch.

Some comments from my side

1. Change this property to "*security.router.admin.protocol.acl*".
{code:java}
+  public static final String SECURITY_ROUTERADMIN_PROTOCOL_ACL =
+  "security.routeradmin.protocol.acl";{code}

2. Please add {{InterfaceAudience}} for {{RouterPolicyProvider}}.

3. I think you gave the wrong protocol name here by mistake; please change 
{{ReconfigurationProtocol.class}} to {{RouterAdminProtocol.class}}:
{code:java}
+  new Service(
+CommonConfigurationKeys.SECURITY_ROUTERADMIN_PROTOCOL_ACL,
+ReconfigurationProtocol.class){code}

4. Change the policy provider object in {{RouterRpcServer}} as well.

5. Please fix the checkstyle, whitespace and findbugs warnings.
6. Please add a UT for the change.


was (Author: surendrasingh):
Thanks [~RANith] for the patch.

Some comments from my side

1. Change this property to "*security.router.admin.protocol.acl*".
{code:java}
+  public static final String SECURITY_ROUTERADMIN_PROTOCOL_ACL =
+  "security.routeradmin.protocol.acl";{code}

2. Please add {{InterfaceAudience}} for {{RouterPolicyProvider}}.

3. I think you gave the wrong protocol name here by mistake; please change 
{{ReconfigurationProtocol.class}} to {{RouterAdminProtocol.class}}:
{code:java}
+  new Service(
+CommonConfigurationKeys.SECURITY_ROUTERADMIN_PROTOCOL_ACL,
+ReconfigurationProtocol.class){code}

4. Change the policy provider object in {{RouterRpcServer}} as well.

5. Please fix the checkstyle, whitespace and findbugs warnings.

> RBF: Create new policy provider for router
> --
>
> Key: HDFS-14129
> URL: https://issues.apache.org/jira/browse/HDFS-14129
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS-13532
>Reporter: Surendra Singh Lilhore
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14129-HDFS-13891.001.patch
>
>
> Router is using *{{HDFSPolicyProvider}}*. We can't add a new protocol in this 
> class for the router; it's better to create a new policy provider for the Router.
> {code:java}
> // Set service-level authorization security policy
> if (conf.getBoolean(HADOOP_SECURITY_AUTHORIZATION, false)) {
> this.adminServer.refreshServiceAcl(conf, new HDFSPolicyProvider());
> }
> {code}
> I got this issue when I verified HDFS-14079 with a secure cluster.
> {noformat}
> ./bin/hdfs dfsrouteradmin -ls /
> ls: Protocol interface org.apache.hadoop.hdfs.protocolPB.RouterAdminProtocol 
> is not known.
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException):
>  Protocol interface org.apache.hadoop.hdfs.protocolPB.RouterAdminProtocol is 
> not known.
> at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1520)
> at org.apache.hadoop.ipc.Client.call(Client.java:1466)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14129) RBF: Create new policy provider for router

2018-12-09 Thread Surendra Singh Lilhore (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16714310#comment-16714310
 ] 

Surendra Singh Lilhore commented on HDFS-14129:
---

Thanks [~RANith] for the patch.

Some comments from my side

1. Change this property to "*security.router.admin.protocol.acl*".
{code:java}
+  public static final String SECURITY_ROUTERADMIN_PROTOCOL_ACL =
+  "security.routeradmin.protocol.acl";{code}

2. Please add {{InterfaceAudience}} for {{RouterPolicyProvider}}.

3. I think you gave the wrong protocol name here by mistake; please change 
{{ReconfigurationProtocol.class}} to {{RouterAdminProtocol.class}} (see the 
sketch after this list):
{code:java}
+  new Service(
+CommonConfigurationKeys.SECURITY_ROUTERADMIN_PROTOCOL_ACL,
+ReconfigurationProtocol.class){code}

4. Change the policy provider object in {{RouterRpcServer}} as well.

5. Please fix the checkstyle, whitespace and findbugs warnings.
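
A hypothetical sketch pulling comments 1-3 together. The ACL key and the 
protocol class follow the suggestions above; the constant name and everything 
else are assumptions for illustration, not the actual patch:
{code:java}
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.fs.CommonConfigurationKeys;
import org.apache.hadoop.hdfs.protocolPB.RouterAdminProtocol;
import org.apache.hadoop.security.authorize.PolicyProvider;
import org.apache.hadoop.security.authorize.Service;

/** Policy provider for the router, kept separate from HDFSPolicyProvider. */
@InterfaceAudience.Private
public class RouterPolicyProvider extends PolicyProvider {

  // Assumed constant for "security.router.admin.protocol.acl" (comment 1).
  private static final Service[] RBF_SERVICES = new Service[] {
      new Service(CommonConfigurationKeys.SECURITY_ROUTER_ADMIN_PROTOCOL_ACL,
          RouterAdminProtocol.class)
  };

  @Override
  public Service[] getServices() {
    return RBF_SERVICES;
  }
}
{code}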

> RBF: Create new policy provider for router
> --
>
> Key: HDFS-14129
> URL: https://issues.apache.org/jira/browse/HDFS-14129
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS-13532
>Reporter: Surendra Singh Lilhore
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14129-HDFS-13891.001.patch
>
>
> Router is using *{{HDFSPolicyProvider}}*. We can't add a new protocol in this 
> class for the router; it's better to create a new policy provider for the Router.
> {code:java}
> // Set service-level authorization security policy
> if (conf.getBoolean(HADOOP_SECURITY_AUTHORIZATION, false)) {
> this.adminServer.refreshServiceAcl(conf, new HDFSPolicyProvider());
> }
> {code}
> I got this issue when I verified HDFS-14079 with a secure cluster.
> {noformat}
> ./bin/hdfs dfsrouteradmin -ls /
> ls: Protocol interface org.apache.hadoop.hdfs.protocolPB.RouterAdminProtocol 
> is not known.
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException):
>  Protocol interface org.apache.hadoop.hdfs.protocolPB.RouterAdminProtocol is 
> not known.
> at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1520)
> at org.apache.hadoop.ipc.Client.call(Client.java:1466)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13443) RBF: Update mount table cache immediately after changing (add/update/remove) mount table entries.

2018-12-09 Thread Surendra Singh Lilhore (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16714298#comment-16714298
 ] 

Surendra Singh Lilhore commented on HDFS-13443:
---

{quote}I'm not sure I would consider this an incompatible change if making it 
{{true}} but definitely something to keep in mind.
{quote}
It's not an incompatible change; this is just a server-side sync-up behavior 
change which will not impact any client or upgrade operation.

> RBF: Update mount table cache immediately after changing (add/update/remove) 
> mount table entries.
> -
>
> Key: HDFS-13443
> URL: https://issues.apache.org/jira/browse/HDFS-13443
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Mohammad Arshad
>Assignee: Mohammad Arshad
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13443-012.patch, HDFS-13443-013.patch, 
> HDFS-13443-014.patch, HDFS-13443-015.patch, HDFS-13443-016.patch, 
> HDFS-13443-branch-2.001.patch, HDFS-13443-branch-2.002.patch, 
> HDFS-13443.001.patch, HDFS-13443.002.patch, HDFS-13443.003.patch, 
> HDFS-13443.004.patch, HDFS-13443.005.patch, HDFS-13443.006.patch, 
> HDFS-13443.007.patch, HDFS-13443.008.patch, HDFS-13443.009.patch, 
> HDFS-13443.010.patch, HDFS-13443.011.patch
>
>
> Currently the mount table cache is updated periodically; by default the cache 
> is updated every minute. After a change in the mount table, user operations 
> may still use the old mount table. This is a bit wrong.
> To update the mount table cache, maybe we can do the following:
>  * *Add a refresh API in MountTableManager which will update the mount table 
> cache.*
>  * *When there is a change in mount table entries, the router admin server can 
> update its cache and ask other routers to update their cache*. For example, if 
> there are three routers R1, R2, R3 in a cluster, then the add mount table 
> entry API, at the admin server side, will perform the following sequence of 
> actions:
>  ## the user submits an add mount table entry request on R1
>  ## R1 adds the mount table entry in the state store
>  ## R1 calls the refresh API on R2
>  ## R1 calls the refresh API on R3
>  ## R1 directly refreshes its own cache
>  ## the add mount table entry response is sent back to the user.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14135) TestWebHdfsTimeouts Fails intermittently in trunk

2018-12-09 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16714236#comment-16714236
 ] 

Ayush Saxena commented on HDFS-14135:
-

[~elgoiri] For testTwoStepWriteConnectTimeout() I thought it was a race 
between the main thread and startSingleTemporaryRedirectResponseThread(): the 
main thread gets executed before the latter sets things up, which is why there 
is no failure. But I don't think that is actually the problem. I suspect there 
is some problem with the method consumeConnectionBacklog(); it doesn't seem to 
work as intended on the Jenkins side, as almost all tests depending on it are 
failing. This issue was taken up earlier in HDFS-11043. Catching up on the 
discussion there, the suspect was also CLIENTS_TO_CONSUME_BACKLOG, but I guess 
the value set then wasn't what yetus expects, or it got changed later. If we 
could somehow read /proc/sys/net/ipv4/tcp_max_syn_backlog from there and set 
the same value here, I guess this might get solved.
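
For context, here is a minimal, self-contained sketch (not the actual test 
code; all names are illustrative) of what consuming the connection backlog 
means, and why the kernel-rounded backlog size matters:
{code:java}
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;
import java.util.ArrayList;
import java.util.List;

public class BacklogProbe {
  public static void main(String[] args) throws Exception {
    // Request the smallest accept backlog; the kernel may round it up
    // (depending on e.g. tcp_max_syn_backlog / somaxconn), which is why a
    // hardcoded CLIENTS_TO_CONSUME_BACKLOG can be wrong on another machine.
    ServerSocket server = new ServerSocket(0, 1);
    List<Socket> clients = new ArrayList<>();
    try {
      // Never call accept(): each successful connect() occupies a slot in
      // the accept queue until the backlog is exhausted.
      for (int i = 0; i < 64; i++) {
        Socket s = new Socket();
        s.connect(new InetSocketAddress("127.0.0.1", server.getLocalPort()),
            1000);
        clients.add(s);
      }
      System.out.println("backlog never filled within 64 connections");
    } catch (SocketTimeoutException expected) {
      // Once the queue is full, further connect() attempts stall and time
      // out, which is the condition the webhdfs timeout tests try to create.
      System.out.println("backlog exhausted after " + clients.size()
          + " queued connections");
    } finally {
      for (Socket s : clients) {
        s.close();
      }
      server.close();
    }
  }
}
{code}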

> TestWebHdfsTimeouts Fails intermittently in trunk
> -
>
> Key: HDFS-14135
> URL: https://issues.apache.org/jira/browse/HDFS-14135
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14135-01.patch, HDFS-14135-02.patch
>
>
> Reference to failure
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/982/testReport/junit/org.apache.hadoop.hdfs.web/TestWebHdfsTimeouts/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-99) Adding SCM Audit log

2018-12-09 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-99?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16714217#comment-16714217
 ] 

Dinesh Chitlangia commented on HDDS-99:
---

[~xyao] Thank you for reviewing the patch.

{quote}Line 159-161: can we avoid building the auditMap outside logWritexxx(); 
this might need refactoring of the AUDIT class, which we can handle in a 
follow-up JIRA.
{quote}
Sure. We can file a Jira to make that change in the framework and then across 
DN, SCM, OM, audit logging.

 
{quote}Line 162: auditSuccess variable can be removed if we remove the finally 
and move the logic out of finally{}. This applies to some other places too.
{quote}
The reason I took this approach is that we are returning an object (in this 
case an instance of AllocatedBlock). Since we don't really need to capture the 
value of AllocatedBlock, I wanted to avoid grabbing its instance, which is why 
I used the finally block.
{code:java}
boolean auditSuccess = true;
try {
  return scm.getScmBlockManager().allocateBlock(size, type, factor, owner);
} catch (Exception ex) {
  auditSuccess = false;
  AUDIT.logWriteFailure(
  buildAuditMessageForFailure(SCMAction.ALLOCATE_BLOCK, auditMap, ex)
  );
  throw ex;
} finally {
  if(auditSuccess) {
AUDIT.logWriteSuccess(
buildAuditMessageForSuccess(SCMAction.ALLOCATE_BLOCK, auditMap)
);
  }
}
{code}

If we remove the finally block, the code would look like:
{code:java}
try {
  AllocatedBlock allocatedBlock =
      scm.getScmBlockManager().allocateBlock(size, type, factor, owner);
  AUDIT.logWriteSuccess(
      buildAuditMessageForSuccess(SCMAction.ALLOCATE_BLOCK, auditMap)
  );
  // the local is needed only so it can be returned after logging success
  return allocatedBlock;
} catch (Exception ex) {
  AUDIT.logWriteFailure(
      buildAuditMessageForFailure(SCMAction.ALLOCATE_BLOCK, auditMap, ex)
  );
  throw ex;
}
{code}

Since allocatedBlock really has no use inside this method other than being 
returned, I wanted to avoid creating a local variable for it here.
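
A possible middle ground, as a hypothetical sketch (the helper and its names 
are not part of the patch): wrap the call so the success/failure audit logic 
lives in one place, without the finally block or a throwaway flag:
{code:java}
@FunctionalInterface
interface CheckedSupplier<T> {
  T get() throws IOException;
}

// Hypothetical helper: runs the operation, audits success/failure around it.
private <T> T withWriteAudit(SCMAction action, Map<String, String> auditMap,
    CheckedSupplier<T> op) throws IOException {
  try {
    T result = op.get();
    AUDIT.logWriteSuccess(buildAuditMessageForSuccess(action, auditMap));
    return result;
  } catch (IOException ex) {
    AUDIT.logWriteFailure(buildAuditMessageForFailure(action, auditMap, ex));
    throw ex;
  }
}

// usage:
// return withWriteAudit(SCMAction.ALLOCATE_BLOCK, auditMap,
//     () -> scm.getScmBlockManager().allocateBlock(size, type, factor, owner));
{code}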

Let me know your thoughts.

> Adding SCM Audit log
> 
>
> Key: HDDS-99
> URL: https://issues.apache.org/jira/browse/HDDS-99
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Xiaoyu Yao
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: alpha2
> Attachments: HDDS-99.001.patch, HDDS-99.002.patch
>
>
> This ticket is opened to add SCM audit log.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13843) RBF: When we add/update mount entry to multiple destinations, unable to see the order information in mount entry points and in federation router UI

2018-12-09 Thread Íñigo Goiri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16714196#comment-16714196
 ] 

Íñigo Goiri edited comment on HDFS-13843 at 12/10/18 1:05 AM:
--

Thanks [~zvenczel] for the patch.
* In the unit test you may want to extract the entry as it is used twice.
* Can we use {{j}} instead of {{ind}}? Maybe {{index}}?

BTW, the JIRA title and description don't match the patch much.
Do you want to make it: {{RBF: show the order when listing mount points}}?


was (Author: elgoiri):
Thanks [~zvenczel] for the patch.
* In the unit test you may want to extract the entry as it is used twice.
* Can we use {{j}} instead of {{ind}}? Maybe {{index}}?

> RBF: When we add/update mount entry to multiple destinations, unable to see 
> the order information in mount entry points and in federation router UI
> ---
>
> Key: HDFS-13843
> URL: https://issues.apache.org/jira/browse/HDFS-13843
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation
>Reporter: Soumyapn
>Assignee: Zsolt Venczel
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13843.01.patch
>
>
> *Scenario:*
> Execute the below add/update commands for a single mount entry for a single 
> nameservice pointing to multiple destinations. 
>  # hdfs dfsrouteradmin -add /apps1 hacluster /tmp1
>  # hdfs dfsrouteradmin -add /apps1 hacluster /tmp1,/tmp2,/tmp3
>  # hdfs dfsrouteradmin -update /apps1 hacluster /tmp1,/tmp2,/tmp3 -order 
> RANDOM
> *Actual:* With the above commands, the mount entry is successfully updated.
> But order information like HASH, RANDOM is not displayed in the mount entries 
> and also not displayed in the federation router UI. However, order information 
> is updated properly when there are multiple nameservices. This issue is with a 
> single nameservice having multiple destinations.
> *Expected:* 
> *Order information should be updated in the mount entries so that the user 
> will know which order has been set.*
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13843) RBF: When we add/update mount entry to multiple destinations, unable to see the order information in mount entry points and in federation router UI

2018-12-09 Thread Íñigo Goiri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16714196#comment-16714196
 ] 

Íñigo Goiri commented on HDFS-13843:


Thanks [~zvenczel] for the patch.
* In the unit test you may want to extract the entry as it is used twice.
* Can we use {{j}} instead of {{ind}}? Maybe {{index}}?

> RBF: When we add/update mount entry to multiple destinations, unable to see 
> the order information in mount entry points and in federation router UI
> ---
>
> Key: HDFS-13843
> URL: https://issues.apache.org/jira/browse/HDFS-13843
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation
>Reporter: Soumyapn
>Assignee: Zsolt Venczel
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13843.01.patch
>
>
> *Scenario:*
> Execute the below add/update commands for a single mount entry for a single 
> nameservice pointing to multiple destinations. 
>  # hdfs dfsrouteradmin -add /apps1 hacluster /tmp1
>  # hdfs dfsrouteradmin -add /apps1 hacluster /tmp1,/tmp2,/tmp3
>  # hdfs dfsrouteradmin -update /apps1 hacluster /tmp1,/tmp2,/tmp3 -order 
> RANDOM
> *Actual:* With the above commands, the mount entry is successfully updated.
> But order information like HASH, RANDOM is not displayed in the mount entries 
> and also not displayed in the federation router UI. However, order information 
> is updated properly when there are multiple nameservices. This issue is with a 
> single nameservice having multiple destinations.
> *Expected:* 
> *Order information should be updated in the mount entries so that the user 
> will know which order has been set.*
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14135) TestWebHdfsTimeouts Fails intermittently in trunk

2018-12-09 Thread Íñigo Goiri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16714186#comment-16714186
 ] 

Íñigo Goiri commented on HDFS-14135:


[~ayushtkn], any idea what the issue is?

> TestWebHdfsTimeouts Fails intermittently in trunk
> -
>
> Key: HDFS-14135
> URL: https://issues.apache.org/jira/browse/HDFS-14135
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14135-01.patch, HDFS-14135-02.patch
>
>
> Reference to failure
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/982/testReport/junit/org.apache.hadoop.hdfs.web/TestWebHdfsTimeouts/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-891) Create customized yetus personality for ozone

2018-12-09 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16714112#comment-16714112
 ] 

Elek, Marton commented on HDDS-891:
---

bq. a) It's still possible to modify these components from the other JIRA 
categories.

Yes, but for other jira categories a different pre-commit build will be 
executed. We can check the hdds/ozone changes with the ozone personality, and 
other changes (HADOOP/HDFS/MAPREDUCE) can be checked with the hadoop 
personality. I think we agreed that we shouldn't mix the two kinds of changes 
in one patch.

Or is the problem to check a modification which is filed under HADOOP but 
modifies something under hadoop-ozone/hadoop-hdds? I don't think it is 
handled right now (so we would be no worse off than now), and I didn't see any 
example of that. Worst case, we can also improve the hadoop personality to fail 
in case hadoop-ozone/hadoop-hdds changes are included.

bq. b) This part of the project still needs the capability to modify other 
modules (e.g., hadoop-dist)

Not sure. Ozone is packaged by the hadoop-ozone/dist maven project, not the 
hadoop-dist project. If we need to modify something outside the 
hadoop-ozone/hadoop-hdds projects, I propose to do this in a HADOOP jira, which 
is checked with the hadoop personality.
 
bq. c) qbt runs over the entire source repository

Can't see any problem here. A full (hadoop + ozone) checkstyle should execute 
exactly the same checkstyle rules which are checked by the ozone personality.

BTW, according to the original plan the hdds profile should be turned off by 
default, so qbt shouldn't check the ozone subproject. I am not sure which one 
is better and I am open to both: 1) check everything in one qbt, or 2) use two 
qbts: one for hadoop, one for ozone.

From a technical point of view I think all the requirements can be implemented 
either by only one (hadoop) or by two (hadoop + ozone) personalities. For me, 
using hadoop + ozone personalities seems to be a cleaner separation. To put 
everything in the hadoop personality makes it more complex and we need a lot of 
additional conditions. Don't we?

(And there is a bigger chance of breaking something on the pure hadoop side 
with an ozone modification in the hadoop personality.)


> Create customized yetus personality for ozone
> -
>
> Key: HDDS-891
> URL: https://issues.apache.org/jira/browse/HDDS-891
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>
> Ozone pre commit builds (such as 
> https://builds.apache.org/job/PreCommit-HDDS-Build/) use the official hadoop 
> personality from yetus.
> Yetus personalities are bash scripts which contain personalization for 
> specific builds.
> The hadoop personality tries to identify which project should be built and 
> uses a partial build to build only the required subprojects, because the full 
> build is very time consuming.
> But in Ozone:
> 1.) The build + unit tests are very fast
> 2.) We don't need all the checks (for example the hadoop specific shading 
> test)
> 3.) We prefer to do a full build and full unit test for the hadoop-ozone and 
> hadoop-hdds subprojects (for example the hadoop-ozone integration test should 
> always be executed as it contains many generic unit tests)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-891) Create customized yetus personality for ozone

2018-12-09 Thread Allen Wittenauer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16714106#comment-16714106
 ] 

Allen Wittenauer commented on HDDS-891:
---

bq. Why is it not enough?

Because:

a) It's still possible to modify these components from the other JIRA 
categories.
b) This part of the project still needs the capability to modify other modules  
(e.g., hadoop-dist)
c) qbt runs over the entire source repository
d) It's incredibly short-sighted.

I'm sure I'm forgetting things, but it doesn't really matter. Fundamentally, 
this stuff is part of the Hadoop source. 

> Create customized yetus personality for ozone
> -
>
> Key: HDDS-891
> URL: https://issues.apache.org/jira/browse/HDDS-891
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>
> Ozone pre commit builds (such as 
> https://builds.apache.org/job/PreCommit-HDDS-Build/) use the official hadoop 
> personality from yetus.
> Yetus personalities are bash scripts which contain personalization for 
> specific builds.
> The hadoop personality tries to identify which project should be built and 
> uses a partial build to build only the required subprojects, because the full 
> build is very time consuming.
> But in Ozone:
> 1.) The build + unit tests are very fast
> 2.) We don't need all the checks (for example the hadoop specific shading 
> test)
> 3.) We prefer to do a full build and full unit test for the hadoop-ozone and 
> hadoop-hdds subprojects (for example the hadoop-ozone integration test should 
> always be executed as it contains many generic unit tests)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13843) RBF: When we add/update mount entry to multiple destinations, unable to see the order information in mount entry points and in federation router UI

2018-12-09 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16714087#comment-16714087
 ] 

Hadoop QA commented on HDFS-13843:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 37s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 15s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch 
generated 1 new + 1 unchanged - 0 fixed = 2 total (was 1) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 41s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 17m  1s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 69m 54s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.fs.contract.router.web.TestRouterWebHDFSContractAppend |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-13843 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12951129/HDFS-13843.01.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux c26c7045c82c 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 1afba83 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25740/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25740/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25740/testReport/ |
| Max. process+thread count | 1416 (vs. 

[jira] [Commented] (HDFS-14135) TestWebHdfsTimeouts Fails intermittently in trunk

2018-12-09 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16714084#comment-16714084
 ] 

Hadoop QA commented on HDFS-14135:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 27s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 50s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 77m 40s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}132m 49s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.namenode.sps.TestBlockStorageMovementAttemptedItems |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14135 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12951128/HDFS-14135-02.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux eb75d400e4a5 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 1afba83 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25739/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25739/testReport/ |
| Max. process+thread count | 4249 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25739/console |
| 

[jira] [Updated] (HDFS-13843) RBF: When we add/update mount entry to multiple destinations, unable to see the order information in mount entry points and in federation router UI

2018-12-09 Thread Zsolt Venczel (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zsolt Venczel updated HDFS-13843:
-
Attachment: HDFS-13843.01.patch
Status: Patch Available  (was: In Progress)

> RBF: When we add/update mount entry to multiple destinations, unable to see 
> the order information in mount entry points and in federation router UI
> ---
>
> Key: HDFS-13843
> URL: https://issues.apache.org/jira/browse/HDFS-13843
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation
>Reporter: Soumyapn
>Assignee: Zsolt Venczel
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13843.01.patch
>
>
> *Scenario:*
> Execute the below add/update commands for a single mount entry for a single 
> nameservice pointing to multiple destinations. 
>  # hdfs dfsrouteradmin -add /apps1 hacluster /tmp1
>  # hdfs dfsrouteradmin -add /apps1 hacluster /tmp1,/tmp2,/tmp3
>  # hdfs dfsrouteradmin -update /apps1 hacluster /tmp1,/tmp2,/tmp3 -order 
> RANDOM
> *Actual:* With the above commands, the mount entry is successfully updated.
> But order information like HASH, RANDOM is not displayed in the mount entries 
> and also not displayed in the federation router UI. However, order information 
> is updated properly when there are multiple nameservices. This issue is with a 
> single nameservice having multiple destinations.
> *Expected:* 
> *Order information should be updated in the mount entries so that the user 
> will know which order has been set.*
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14135) TestWebHdfsTimeouts Fails intermittently in trunk

2018-12-09 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14135:

Attachment: HDFS-14135-02.patch

> TestWebHdfsTimeouts Fails intermittently in trunk
> -
>
> Key: HDFS-14135
> URL: https://issues.apache.org/jira/browse/HDFS-14135
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14135-01.patch, HDFS-14135-02.patch
>
>
> Reference to failure
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/982/testReport/junit/org.apache.hadoop.hdfs.web/TestWebHdfsTimeouts/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14135) TestWebHdfsTimeouts Fails intermittently in trunk

2018-12-09 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16714022#comment-16714022
 ] 

Hadoop QA commented on HDFS-14135:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 26s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 47s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 76m 34s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}131m 46s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.namenode.ha.TestBootstrapAliasmap |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14135 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12951122/HDFS-14135-01.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 86f91d01b89e 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 1afba83 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25738/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25738/testReport/ |
| Max. process+thread count | 4533 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25738/console |
| Powered by | 

[jira] [Updated] (HDFS-14135) TestWebHdfsTimeouts Fails intermittently in trunk

2018-12-09 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14135:

Status: Patch Available  (was: Open)

> TestWebHdfsTimeouts Fails intermittently in trunk
> -
>
> Key: HDFS-14135
> URL: https://issues.apache.org/jira/browse/HDFS-14135
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14135-01.patch
>
>
> Reference to failure



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14135) TestWebHdfsTimeouts Fails intermittently in trunk

2018-12-09 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14135:

Description: 
Reference to failure

https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/982/testReport/junit/org.apache.hadoop.hdfs.web/TestWebHdfsTimeouts/


  was:
Reference to failure




> TestWebHdfsTimeouts Fails intermittently in trunk
> -
>
> Key: HDFS-14135
> URL: https://issues.apache.org/jira/browse/HDFS-14135
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14135-01.patch
>
>
> Reference to failure
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/982/testReport/junit/org.apache.hadoop.hdfs.web/TestWebHdfsTimeouts/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14135) TestWebHdfsTimeouts Fails intermittently in trunk

2018-12-09 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14135:

Attachment: HDFS-14135-01.patch

> TestWebHdfsTimeouts Fails intermittently in trunk
> -
>
> Key: HDFS-14135
> URL: https://issues.apache.org/jira/browse/HDFS-14135
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14135-01.patch
>
>
> Reference to failure



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14135) TestWebHdfsTimeouts Fails intermittently in trunk

2018-12-09 Thread Ayush Saxena (JIRA)
Ayush Saxena created HDFS-14135:
---

 Summary: TestWebHdfsTimeouts Fails intermittently in trunk
 Key: HDFS-14135
 URL: https://issues.apache.org/jira/browse/HDFS-14135
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ayush Saxena
Assignee: Ayush Saxena


Reference to failure





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org