[jira] [Updated] (HDDS-138) createVolume verbose warning with non-existent user

2018-09-12 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-138:
---
Description: 
When createVolume is invoked for a non-existent user, it logs a verbose warning 
for {{PartialGroupNameException}}.
{code:java}
hadoop@9a70d9aa6bf9:~$ ozone oz volume create --user=nosuchuser vol4
2018-05-31 20:40:17 WARN  ShellBasedUnixGroupsMapping:210 - unable to return groups for user nosuchuser
PartialGroupNameException The user name 'nosuchuser' is not found. id: ‘nosuchuser’: no such user
id: ‘nosuchuser’: no such user

  at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.resolvePartialGroupNames(ShellBasedUnixGroupsMapping.java:294)
  at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getUnixGroups(ShellBasedUnixGroupsMapping.java:207)
  at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getGroups(ShellBasedUnixGroupsMapping.java:97)
  at org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.getGroups(JniBasedUnixGroupsMappingWithFallback.java:51)
  at org.apache.hadoop.security.Groups$GroupCacheLoader.fetchGroupList(Groups.java:384)
  at org.apache.hadoop.security.Groups$GroupCacheLoader.load(Groups.java:319)
  at org.apache.hadoop.security.Groups$GroupCacheLoader.load(Groups.java:269)
  at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3568)
  at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2350)
  at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2313)
  at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2228)
  at com.google.common.cache.LocalCache.get(LocalCache.java:3965)
  at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3969)
  at com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4829)
  at org.apache.hadoop.security.Groups.getGroups(Groups.java:227)
  at org.apache.hadoop.security.UserGroupInformation.getGroups(UserGroupInformation.java:1545)
  at org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1533)
  at org.apache.hadoop.ozone.client.rpc.RpcClient.createVolume(RpcClient.java:190)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at org.apache.hadoop.ozone.client.OzoneClientInvocationHandler.invoke(OzoneClientInvocationHandler.java:54)
  at com.sun.proxy.$Proxy11.createVolume(Unknown Source)
  at org.apache.hadoop.ozone.client.ObjectStore.createVolume(ObjectStore.java:77)
  at org.apache.hadoop.ozone.web.ozShell.volume.CreateVolumeHandler.execute(CreateVolumeHandler.java:98)
  at org.apache.hadoop.ozone.web.ozShell.Shell.dispatch(Shell.java:395)
  at org.apache.hadoop.ozone.web.ozShell.Shell.run(Shell.java:135)
  at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
  at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
  at org.apache.hadoop.ozone.web.ozShell.Shell.main(Shell.java:114)
2018-05-31 20:40:17 INFO  RpcClient:210 - Creating Volume: vol4, with nosuchuser as owner and quota set to 1152921504606846976 bytes.
{code}
However the volume is created:
{code}
$ ozone oz volume list --user=nosuchuser
[ {
  "owner" : {
"name" : "nosuchuser"
  },
  "quota" : {
"unit" : "TB",
"size" : 1048576
  },
  "volumeName" : "vol4",
  "createdOn" : "Thu, 31 May 2018 20:40:17 GMT",
  "createdBy" : "nosuchuser"
} ]
{code}

  was:
When createVolume is invoked for a non-existent user, it fails with 
{{PartialGroupNameException}}.
{code:java}
hadoop@9a70d9aa6bf9:~$ ozone oz -createVolume /vol4 -user nosuchuser
2018-05-31 20:40:17 WARN  ShellBasedUnixGroupsMapping:210 - unable to 
return groups for user nosuchuser
PartialGroupNameException The user name 'nosuchuser' is not found. id: 
‘nosuchuser’: no such user
id: ‘nosuchuser’: no such user

{code}

[jira] [Updated] (HDDS-138) createVolume verbose warning with non-existent user

2018-09-12 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-138:
---
Summary: createVolume verbose warning with non-existent user  (was: createVolume bug with non-existent user)

> createVolume verbose warning with non-existent user
> ---
>
> Key: HDDS-138
> URL: https://issues.apache.org/jira/browse/HDDS-138
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Shashikant Banerjee
>Priority: Blocker
>  Labels: newbie, usability
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-419) ChunkInputStream bulk read api does not read from all the chunks

2018-09-12 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh reassigned HDDS-419:
--

Assignee: Lokesh Jain  (was: Mukul Kumar Singh)

> ChunkInputStream bulk read api does not read from all the chunks
> 
>
> Key: HDDS-419
> URL: https://issues.apache.org/jira/browse/HDDS-419
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.2.1
>Reporter: Mukul Kumar Singh
>Assignee: Lokesh Jain
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-419.001.patch
>
>
> After enabling of bulk reads with HDDS-408, testDataValidate started failing 
> because the bulk read api does not read all the chunks from the block.






[jira] [Updated] (HDDS-429) StorageContainerManager lock optimization

2018-09-12 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-429:
---
Fix Version/s: (was: 0.2.1)

> StorageContainerManager lock optimization
> -
>
> Key: HDDS-429
> URL: https://issues.apache.org/jira/browse/HDDS-429
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Blocker
>
> Currently, {{StorageContainerManager}} uses {{ReentrantLock}} for 
> synchronization. We can replace this with {{ReentrantReadWriteLock}} to get 
> better performance.
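The proposed change can be sketched with plain {{java.util.concurrent}} primitives. The {{Registry}} class and its map are illustrative stand-ins, not actual SCM code; the point is that readers share the lock while writers still get exclusion:

```java
// Sketch of the proposed locking change: a ReentrantReadWriteLock lets many
// readers proceed concurrently, where a plain ReentrantLock serialized every
// access. Registry and its map are illustrative, not SCM internals.
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class Registry {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private final Map<String, String> state = new HashMap<>();

    // Any number of threads may hold the read lock at the same time.
    public String get(String key) {
        lock.readLock().lock();
        try {
            return state.get(key);
        } finally {
            lock.readLock().unlock();
        }
    }

    // Writers take the exclusive write lock, which also blocks readers.
    public void put(String key, String value) {
        lock.writeLock().lock();
        try {
            state.put(key, value);
        } finally {
            lock.writeLock().unlock();
        }
    }

    public static void main(String[] args) {
        Registry r = new Registry();
        r.put("vol4", "nosuchuser");
        System.out.println(r.get("vol4"));
    }
}
```

This helps only if reads dominate writes, which is the usual access pattern for container/node maps in a manager service.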



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-429) StorageContainerManager lock optimization

2018-09-12 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16613041#comment-16613041
 ] 

Arpit Agarwal commented on HDDS-429:


Removed Fix Version as we have Target Version field now.

> StorageContainerManager lock optimization
> -
>
> Key: HDDS-429
> URL: https://issues.apache.org/jira/browse/HDDS-429
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Blocker
>






[jira] [Updated] (HDDS-138) createVolume bug with non-existent user

2018-09-12 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-138:
---
Target Version/s: 0.3.0

> createVolume bug with non-existent user
> ---
>
> Key: HDDS-138
> URL: https://issues.apache.org/jira/browse/HDDS-138
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Shashikant Banerjee
>Priority: Blocker
>  Labels: newbie, usability
>






[jira] [Commented] (HDDS-138) createVolume bug with non-existent user

2018-09-12 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16613039#comment-16613039
 ] 

Arpit Agarwal commented on HDDS-138:


This is annoying and not easy to fix. The verbose exception is caught and 
logged deep inside ShellBasedUnixGroupsMapping.java. I'll move it out of 0.2.1.
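One way to tame the noise, sketched here with a hypothetical helper rather than the actual Hadoop code: log a one-line summary of the failure at WARN and keep the full stack trace at a debug level.

```java
// Hypothetical sketch (not the current ShellBasedUnixGroupsMapping code):
// summarize the exception in one WARN line and demote the stack trace to
// FINE/DEBUG, so a lookup of a non-existent user does not flood the console.
import java.util.logging.Level;
import java.util.logging.Logger;

public class GroupLookupLogging {
    private static final Logger LOG =
            Logger.getLogger("ShellBasedUnixGroupsMapping");

    // One-line summary: exception class plus its message, no stack trace.
    static String summarize(Throwable t) {
        return t.getClass().getSimpleName() + ": " + t.getMessage();
    }

    static void logGroupLookupFailure(String user, Throwable t) {
        LOG.warning("unable to return groups for user " + user
                + " (" + summarize(t) + ")");
        // Full trace only when debug logging is enabled.
        LOG.log(Level.FINE, "group lookup failure", t);
    }

    public static void main(String[] args) {
        Exception e =
                new Exception("The user name 'nosuchuser' is not found.");
        logGroupLookupFailure("nosuchuser", e);
        System.out.println(summarize(e));
    }
}
```

Since the catch site is shared by all group-mapping callers, any such change would affect every consumer of the mapping, which is part of why it is not a trivial fix.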

> createVolume bug with non-existent user
> ---
>
> Key: HDDS-138
> URL: https://issues.apache.org/jira/browse/HDDS-138
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Shashikant Banerjee
>Priority: Blocker
>  Labels: newbie, usability
>






[jira] [Updated] (HDDS-415) 'ozone om' with incorrect argument first logs all the STARTUP_MSG

2018-09-12 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-415:
---
Target Version/s: 0.2.1
   Fix Version/s: (was: 0.2.1)

> 'ozone om' with incorrect argument first logs all the STARTUP_MSG
> -
>
> Key: HDDS-415
> URL: https://issues.apache.org/jira/browse/HDDS-415
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Blocker
>
>  bin/ozone om with incorrect argument first logs all the STARTUP_MSG
> {code:java}
> ➜ ozone-0.2.1-SNAPSHOT bin/ozone om -hgfj
> 2018-09-07 12:56:12,391 [main] INFO - STARTUP_MSG:
> /
> STARTUP_MSG: Starting OzoneManager
> STARTUP_MSG: host = HW11469.local/10.22.16.67
> STARTUP_MSG: args = [-hgfj]
> STARTUP_MSG: version = 3.2.0-SNAPSHOT
> STARTUP_MSG: classpath = 
> 

[jira] [Updated] (HDDS-138) createVolume bug with non-existent user

2018-09-12 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-138:
---
Fix Version/s: (was: 0.2.1)

> createVolume bug with non-existent user
> ---
>
> Key: HDDS-138
> URL: https://issues.apache.org/jira/browse/HDDS-138
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Shashikant Banerjee
>Priority: Blocker
>  Labels: newbie, usability
>






[jira] [Updated] (HDDS-439) 'ozone oz volume create' should default to current Unix user

2018-09-12 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-439:
---
Priority: Blocker  (was: Major)

> 'ozone oz volume create' should default to current Unix user
> 
>
> Key: HDDS-439
> URL: https://issues.apache.org/jira/browse/HDDS-439
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Arpit Agarwal
>Assignee: Sree Vaddi
>Priority: Blocker
>  Labels: newbie
>
> Currently the user parameter appears to be mandatory. It should just default 
> to the current Unix user if missing.
> E.g.
> {code:java}
> $ ozone oz volume create vol32
> Missing required option '--user='{code}
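The requested fallback can be sketched as below. The {{resolveOwner}} helper is hypothetical; a real fix inside the CLI would more likely ask Hadoop's {{UserGroupInformation}} for the current user, with {{System.getProperty("user.name")}} standing in here to keep the sketch self-contained:

```java
// Hypothetical sketch of the requested behavior: when --user is not
// supplied, default the volume owner to the local user instead of failing.
// resolveOwner is an illustrative helper, not the actual CLI code.
public class VolumeOwner {
    static String resolveOwner(String userOption) {
        if (userOption == null || userOption.isEmpty()) {
            // Stand-in for the current Unix user; the real CLI would use
            // Hadoop's UserGroupInformation.
            return System.getProperty("user.name");
        }
        return userOption;
    }

    public static void main(String[] args) {
        System.out.println(resolveOwner(null));         // falls back to current user
        System.out.println(resolveOwner("nosuchuser")); // explicit value wins
    }
}
```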






[jira] [Updated] (HDDS-439) 'ozone oz volume create' should default to current Unix user

2018-09-12 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-439:
---
Target Version/s: 0.2.1

> 'ozone oz volume create' should default to current Unix user
> 
>
> Key: HDDS-439
> URL: https://issues.apache.org/jira/browse/HDDS-439
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Arpit Agarwal
>Assignee: Sree Vaddi
>Priority: Blocker
>  Labels: newbie
>






[jira] [Updated] (HDDS-439) 'ozone oz volume create' should default to current Unix user

2018-09-12 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-439:
---
Fix Version/s: 0.2.1

> 'ozone oz volume create' should default to current Unix user
> 
>
> Key: HDDS-439
> URL: https://issues.apache.org/jira/browse/HDDS-439
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Arpit Agarwal
>Assignee: Sree Vaddi
>Priority: Major
>  Labels: newbie
>






[jira] [Updated] (HDDS-439) 'ozone oz volume create' should default to current Unix user

2018-09-12 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-439:
---
Fix Version/s: (was: 0.2.1)

> 'ozone oz volume create' should default to current Unix user
> 
>
> Key: HDDS-439
> URL: https://issues.apache.org/jira/browse/HDDS-439
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Arpit Agarwal
>Assignee: Sree Vaddi
>Priority: Blocker
>  Labels: newbie
>






[jira] [Commented] (HDFS-13749) Use getServiceStatus to discover observer namenodes

2018-09-12 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16613035#comment-16613035
 ] 

Chao Sun commented on HDFS-13749:
-

Actually, using {{RetryProxy}} in {{createNonHAProxyWithHAServiceProtocol}} 
works. [~xkrogen]: I think you are right about using a retry policy. The retry 
policy used in {{Connection#getConnectionId}} applies only when initializing a 
new connection, not to an existing connection. Attached patch v3.
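For readers unfamiliar with the mechanism: Hadoop's {{RetryProxy}} wraps a protocol interface in a dynamic proxy that re-invokes failed calls according to a retry policy. A generic, self-contained sketch of that idea (Hadoop's real {{RetryProxy}}/{{RetryPolicies}} are considerably more elaborate, and the names below are illustrative):

```java
// Generic sketch of the retry-proxy idea: wrap an interface in a dynamic
// proxy that retries a failed call up to maxAttempts times. This illustrates
// the mechanism only; it is not Hadoop's RetryProxy implementation.
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Proxy;

public class RetryingProxy {
    public interface Status {
        String get();
    }

    @SuppressWarnings("unchecked")
    static <T> T wrap(Class<T> iface, T target, int maxAttempts) {
        InvocationHandler handler = (proxy, method, args) -> {
            Throwable last = null;
            for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                try {
                    return method.invoke(target, args);
                } catch (InvocationTargetException e) {
                    // In this sketch every failure is retryable; a real
                    // policy would inspect the exception and back off.
                    last = e.getCause();
                }
            }
            throw last;
        };
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(), new Class<?>[] {iface}, handler);
    }

    public static void main(String[] args) {
        int[] calls = {0};
        // A flaky service that fails twice before answering.
        Status flaky = () -> {
            if (++calls[0] < 3) throw new RuntimeException("not ready");
            return "active";
        };
        Status s = wrap(Status.class, flaky, 3);
        System.out.println(s.get()); // succeeds on the third attempt
    }
}
```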

> Use getServiceStatus to discover observer namenodes
> ---
>
> Key: HDFS-13749
> URL: https://issues.apache.org/jira/browse/HDFS-13749
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-13749-HDFS-12943.000.patch, 
> HDFS-13749-HDFS-12943.001.patch, HDFS-13749-HDFS-12943.002.patch, 
> HDFS-13749-HDFS-12943.003.patch
>
>
> In HDFS-12976 currently we discover NameNode state by calling 
> {{reportBadBlocks}} as a temporary solution. Here, we'll properly implement 
> this by using {{HAServiceProtocol#getServiceStatus}}.






[jira] [Updated] (HDFS-13749) Use getServiceStatus to discover observer namenodes

2018-09-12 Thread Chao Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HDFS-13749:

Attachment: HDFS-13749-HDFS-12943.003.patch

> Use getServiceStatus to discover observer namenodes
> ---
>
> Key: HDFS-13749
> URL: https://issues.apache.org/jira/browse/HDFS-13749
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-13749-HDFS-12943.000.patch, 
> HDFS-13749-HDFS-12943.001.patch, HDFS-13749-HDFS-12943.002.patch, 
> HDFS-13749-HDFS-12943.003.patch
>
>
> In HDFS-12976 currently we discover NameNode state by calling 
> {{reportBadBlocks}} as a temporary solution. Here, we'll properly implement 
> this by using {{HAServiceProtocol#getServiceStatus}}.






[jira] [Commented] (HDFS-8196) Erasure Coding related information on NameNode UI

2018-09-12 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-8196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16613010#comment-16613010
 ] 

Xiao Chen commented on HDFS-8196:
-

Thanks for picking up the ball here [~knanasi]. It seems the html/js changes 
are removed in the latest rev. Is that intentional?

> Erasure Coding related information on NameNode UI
> -
>
> Key: HDFS-8196
> URL: https://issues.apache.org/jira/browse/HDFS-8196
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: HDFS-7285
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Major
>  Labels: NameNode, WebUI, hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-8196.01.patch, HDFS-8196.02.patch, 
> HDFS-8196.03.patch, HDFS-8196.04.patch, HDFS-8196.05.patch, Screen Shot 
> 2017-02-06 at 22.30.40.png, Screen Shot 2017-02-12 at 20.21.42.png, Screen 
> Shot 2017-02-14 at 22.43.57.png
>
>
> NameNode WebUI shows EC related information and metrics. 
> This is depend on [HDFS-7674|https://issues.apache.org/jira/browse/HDFS-7674].






[jira] [Commented] (HDDS-362) Modify functions impacted by SCM chill mode in ScmBlockLocationProtocol

2018-09-12 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612980#comment-16612980
 ] 

Xiaoyu Yao commented on HDDS-362:
-

Thanks [~ajayydv] for working on this. The patch looks good to me overall. Here 
are a few comments:

Hdds.proto

Line 176: should we have an enum for the deleteBlock op?

 

BlockManagerImpl.java

Line 435: should we define an interface for PreChecks with a {{Boolean 
check()}} method instead of a hard-coded enum, for better extensibility? This 
would also avoid the big switch below.

 

 

SCMChillModeManager.java

Line 75: can we consolidate CHILL_MODE_STATUS with START_REPLICATION into 
EXIT_CHILL_MODE? And we might need ENTER_CHILL_MODE so that the Replication 
manager and the BlockManager can respond accordingly.

 

SCMEvents.java

Line 234: Same as above.

 

TestBlockManager.java

Line 79: scm can be removed.

 

 

TestSCMChillModeManager.java

Line 34/35/38: NIT: unused imports.
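The PreChecks suggestion above (BlockManagerImpl.java, line 435) can be sketched as follows; class and method names are illustrative, not the actual HDDS code:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of replacing the hard-coded enum plus big switch with a small
// PreCheck interface that each chill-mode rule implements.
public class ChillModePreChecks {
    interface PreCheck {
        boolean check();   // true => the operation may proceed
        String name();
    }

    static final List<PreCheck> PRE_CHECKS = new ArrayList<>();

    static void register(String name, java.util.function.BooleanSupplier rule) {
        PRE_CHECKS.add(new PreCheck() {
            public boolean check() { return rule.getAsBoolean(); }
            public String name() { return name; }
        });
    }

    /** Runs every registered check; replaces the per-enum switch. */
    static boolean allow() {
        return PRE_CHECKS.stream().allMatch(PreCheck::check);
    }

    public static void main(String[] args) {
        boolean[] chillMode = {true};
        register("notInChillMode", () -> !chillMode[0]);
        System.out.println(allow()); // false while chill mode is on
        chillMode[0] = false;
        System.out.println(allow()); // true once chill mode exits
    }
}
```

New rules then become one-line registrations instead of new enum constants plus switch cases.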

> Modify functions impacted by SCM chill mode in ScmBlockLocationProtocol
> ---
>
> Key: HDDS-362
> URL: https://issues.apache.org/jira/browse/HDDS-362
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-362.00.patch, HDDS-362.01.patch, HDDS-362.02.patch
>
>
> [HDDS-351] adds chill mode state to SCM. When SCM is in chill mode certain 
> operations will be restricted for end users. This jira intends to modify 
> functions impacted by SCM chill mode in ScmBlockLocationProtocol.






[jira] [Commented] (HDFS-13882) Set a maximum for the delay before retrying locateFollowingBlock

2018-09-12 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612971#comment-16612971
 ] 

Xiao Chen commented on HDFS-13882:
--

Thanks for the new patch Kitti. 60 secs as the default SGTM. No one really 
wants this to grow without bound.

Some comments:
 - Should add a test, probably in {{TestDFSClientRetries}}. I didn't 
scrutinize whether there is a real test on the backoff, but at a minimum we 
should add a test similar to 
{{testDFSClientConfigurationLocateFollowingBlockInitialDelay}}.
- {{DFSOutputStream}} has 2 places ({{addBlock}} and {{completeFile}}) where the 
retry happens. The new config should cover both, for consistency.
- The {{hdfs-default.xml}} default is missing a 0.
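The capped backoff under discussion (exponential delay clamped to a configurable maximum, 60 s by default per the comment above) can be sketched as follows; method and parameter names are illustrative, not the actual {{DFSOutputStream}} code:

```java
// Sketch of a capped exponential retry delay for locateFollowingBlock.
public class CappedBackoff {
    /** delay = min(maxDelayMs, initialDelayMs * 2^retries) */
    static long delayMs(long initialDelayMs, long maxDelayMs, int retries) {
        long delay = initialDelayMs;
        for (int i = 0; i < retries && delay < maxDelayMs; i++) {
            delay *= 2;   // doubling, as the client does today
        }
        return Math.min(delay, maxDelayMs);
    }

    public static void main(String[] args) {
        // initial 400 ms, cap 60 s: grows 400, 800, 1600, ... then flattens
        for (int r = 0; r <= 10; r++) {
            System.out.println(delayMs(400, 60_000, r));
        }
    }
}
```

Stopping the doubling once the cap is reached also avoids long overflow for large retry counts.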

> Set a maximum for the delay before retrying locateFollowingBlock
> 
>
> Key: HDFS-13882
> URL: https://issues.apache.org/jira/browse/HDFS-13882
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Major
> Attachments: HDFS-13882.001.patch, HDFS-13882.002.patch
>
>
> More and more we are seeing cases where customers are running into the 
> IOException "Unable to close file because the last block does not have 
> enough number of replicas" on client file closure. The common workaround is 
> to increase dfs.client.block.write.locateFollowingBlock.retries from 5 to 10. 






[jira] [Commented] (HDFS-13833) Failed to choose from local rack (location = /default); the second replica is not found, retry choosing ramdomly

2018-09-12 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612968#comment-16612968
 ] 

Xiao Chen commented on HDFS-13833:
--

Thanks all for the discussion / comments, and [~shwetayakkali] for the patch.

The approach looks good to me, some review comments:
 - To address Kitti's comment #1, I think we can move the test case to 
{{TestReplicationPolicy.java}}, if we do not want to create a new file. 
{{TestReplicationPolicy}} is parameterized, but because this is a very quick 
mock test I think that's ok.
 - Instead of creating a mock BPPD object in the test, we can actually 
instantiate a real one, using similar code from {{BlockPlacementPolicies}}. 
This gives us fewer things to mock, and lets us call {{initialize}} on the 
BPPD object.
 {code}
 final Class<? extends BlockPlacementPolicy> replicatorClass = conf
     .getClass(DFSConfigKeys.DFS_BLOCK_REPLICATOR_CLASSNAME_KEY,
         DFSConfigKeys.DFS_BLOCK_REPLICATOR_CLASSNAME_DEFAULT,
         BlockPlacementPolicy.class);
 replicationPolicy = ReflectionUtils.newInstance(replicatorClass, conf);
{code}
- Once the above is done, {{checkMaxLoad}} no longer needs the {{stats}} object 
passed in.
- Suggest covering a few other possibilities in the unit test. For example, 
mock the stats and dndescriptor so that {{checkMaxLoad}} would return 
false, and verify that with an assertion.
- Suggest renaming {{checkMaxLoad}} to a more self-explanatory name. 
{{excludeNodeByLoad}} or a similar name will be more readable because the 
reader doesn't have to look into the logic to see what the return value means.
- Similarly, we can add a simple javadoc to the new method.
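The suggestion to instantiate a real policy object from configuration can be sketched with plain JDK reflection, mirroring the {{ReflectionUtils.newInstance}} pattern quoted above. {{PolicyForTest}} here is a hypothetical stand-in for the real placement policy class:

```java
// Sketch: construct a policy instance from a configured class name instead
// of mocking it. PolicyForTest stands in for BlockPlacementPolicyDefault.
public class PolicyFactory {
    public static class PolicyForTest {
        /** A node whose load exceeds the threshold is excluded. */
        public boolean excludeNodeByLoad(int load, double maxLoad) {
            return load > maxLoad;
        }
    }

    static Object newPolicy(String className) throws Exception {
        return Class.forName(className).getDeclaredConstructor().newInstance();
    }

    public static void main(String[] args) throws Exception {
        PolicyForTest p = (PolicyForTest)
            newPolicy(PolicyForTest.class.getName());
        System.out.println(p.excludeNodeByLoad(8, 0.0)); // excluded: 8 > 0.0
    }
}
```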

> Failed to choose from local rack (location = /default); the second replica is 
> not found, retry choosing ramdomly
> 
>
> Key: HDFS-13833
> URL: https://issues.apache.org/jira/browse/HDFS-13833
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Henrique Barros
>Assignee: Shweta
>Priority: Critical
> Attachments: HDFS-13833.001.patch
>
>
> I'm having a random problem with blocks replication with Hadoop 
> 2.6.0-cdh5.15.0
> With Cloudera CDH-5.15.0-1.cdh5.15.0.p0.21
>  
> In my case we are getting this error very randomly (after some hours) and 
> with only one Datanode (for now, we are trying this cloudera cluster for a 
> POC)
> Here is the Log.
> {code:java}
> Choosing random from 1 available nodes on node /default, scope=/default, 
> excludedScope=null, excludeNodes=[]
> 2:38:20.527 PMDEBUG   NetworkTopology 
> Choosing random from 0 available nodes on node /default, scope=/default, 
> excludedScope=null, excludeNodes=[192.168.220.53:50010]
> 2:38:20.527 PMDEBUG   NetworkTopology 
> chooseRandom returning null
> 2:38:20.527 PMDEBUG   BlockPlacementPolicy
> [
> Node /default/192.168.220.53:50010 [
>   Datanode 192.168.220.53:50010 is not chosen since the node is too busy 
> (load: 8 > 0.0).
> 2:38:20.527 PMDEBUG   NetworkTopology 
> chooseRandom returning 192.168.220.53:50010
> 2:38:20.527 PMINFOBlockPlacementPolicy
> Not enough replicas was chosen. Reason:{NODE_TOO_BUSY=1}
> 2:38:20.527 PMDEBUG   StateChange 
> closeFile: 
> /mobi.me/development/apps/flink/checkpoints/a5a6806866c1640660924ea1453cbe34/chk-2118/eef8bff6-75a9-43c1-ae93-4b1a9ca31ad9
>  with 1 blocks is persisted to the file system
> 2:38:20.527 PMDEBUG   StateChange 
> *BLOCK* NameNode.addBlock: file 
> /mobi.me/development/apps/flink/checkpoints/a5a6806866c1640660924ea1453cbe34/chk-2118/1cfe900d-6f45-4b55-baaa-73c02ace2660
>  fileId=129628869 for DFSClient_NONMAPREDUCE_467616914_65
> 2:38:20.527 PMDEBUG   BlockPlacementPolicy
> Failed to choose from local rack (location = /default); the second replica is 
> not found, retry choosing ramdomly
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy$NotEnoughReplicasException:
>  
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:784)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:694)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseLocalRack(BlockPlacementPolicyDefault.java:601)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseLocalStorage(BlockPlacementPolicyDefault.java:561)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTargetInOrder(BlockPlacementPolicyDefault.java:464)
>   at 
> 

[jira] [Commented] (HDDS-420) putKey failing with KEY_ALLOCATION_ERROR

2018-09-12 Thread Mukul Kumar Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612966#comment-16612966
 ] 

Mukul Kumar Singh commented on HDDS-420:


Thanks for working on this [~shashikant]. 
+1, patch v2 looks good to me. I will commit this shortly.

> putKey failing with KEY_ALLOCATION_ERROR
> 
>
> Key: HDDS-420
> URL: https://issues.apache.org/jira/browse/HDDS-420
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Nilotpal Nandi
>Assignee: Shashikant Banerjee
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-420.000.patch, HDDS-420.001.patch, 
> HDDS-420.002.patch, all-node-ozone-logs-1536607597.tar.gz
>
>
> Here are the commands run :
> {noformat}
> [root@ctr-e138-1518143905142-468367-01-02 bin]# ./ozone oz -putKey 
> /fs-volume/fs-bucket/nn1 -file /etc/passwd
> 2018-09-09 15:39:31,131 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> Create key failed, error:KEY_ALLOCATION_ERROR
> [root@ctr-e138-1518143905142-468367-01-02 bin]#
> [root@ctr-e138-1518143905142-468367-01-02 bin]# ./ozone fs -copyFromLocal 
> /etc/passwd /
> 2018-09-09 15:40:16,879 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> 2018-09-09 15:40:23,632 [main] ERROR - Try to allocate more blocks for write 
> failed, already allocated 0 blocks for this write.
> copyFromLocal: Message missing required fields: keyLocation
> [root@ctr-e138-1518143905142-468367-01-02 bin]# ./ozone oz -putKey 
> /fs-volume/fs-bucket/nn2 -file /etc/passwd
> 2018-09-09 15:44:55,912 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> Create key failed, error:KEY_ALLOCATION_ERROR{noformat}
>  
> hadoop version :
> ---
> {noformat}
> [root@ctr-e138-1518143905142-468367-01-02 bin]# ./hadoop version
> Hadoop 3.2.0-SNAPSHOT
> Source code repository git://git.apache.org/hadoop.git -r 
> bf8a1750e99cfbfa76021ce51b6514c74c06f498
> Compiled by root on 2018-09-08T10:22Z
> Compiled with protoc 2.5.0
> From source with checksum c5bbb375aed8edabd89c377af83189d
> This command was run using 
> /root/hadoop_trunk/ozone-0.3.0-SNAPSHOT/share/hadoop/common/hadoop-common-3.2.0-SNAPSHOT.jar{noformat}
>  
> scm log :
> ---
> {noformat}
> 2018-09-09 15:45:00,907 INFO 
> org.apache.hadoop.hdds.scm.pipelines.ratis.RatisManagerImpl: Allocating a new 
> ratis pipeline of size: 3 id: pipelineId=f210716d-ba7b-4adf-91d6-da286e5fd010
> 2018-09-09 15:45:00,973 INFO org.apache.ratis.conf.ConfUtils: raft.rpc.type = 
> GRPC (default)
> 2018-09-09 15:45:01,007 INFO org.apache.ratis.conf.ConfUtils: 
> raft.grpc.message.size.max = 33554432 (custom)
> 2018-09-09 15:45:01,011 INFO org.apache.ratis.conf.ConfUtils: 
> raft.client.rpc.retryInterval = 300 ms (default)
> 2018-09-09 15:45:01,012 INFO org.apache.ratis.conf.ConfUtils: 
> raft.client.async.outstanding-requests.max = 100 (default)
> 2018-09-09 15:45:01,012 INFO org.apache.ratis.conf.ConfUtils: 
> raft.client.async.scheduler-threads = 3 (default)
> 2018-09-09 15:45:01,020 INFO org.apache.ratis.conf.ConfUtils: 
> raft.grpc.flow.control.window = 1MB (=1048576) (default)
> 2018-09-09 15:45:01,020 INFO org.apache.ratis.conf.ConfUtils: 
> raft.grpc.message.size.max = 33554432 (custom)
> 2018-09-09 15:45:01,102 INFO org.apache.ratis.conf.ConfUtils: 
> raft.client.rpc.request.timeout = 3000 ms (default)
> 2018-09-09 15:45:01,667 ERROR org.apache.hadoop.hdds.scm.XceiverClientRatis: 
> Failed to reinitialize 
> RaftPeer:bfe9c5f2-da9b-4a8f-9013-7540cbbed1c9:172.27.12.96:9858 datanode: 
> bfe9c5f2-da9b-4a8f-9013-7540cbbed1c9{ip: 172.27.12.96, host: 
> ctr-e138-1518143905142-468367-01-07.hwx.site}
> org.apache.ratis.protocol.GroupMismatchException: 
> bfe9c5f2-da9b-4a8f-9013-7540cbbed1c9: The group (group-7347726F7570) of 
> client-409D68EB500F does not match the group (group-2041ABBEE452) of the 
> server bfe9c5f2-da9b-4a8f-9013-7540cbbed1c9
>  at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>  at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>  at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>  at 
> org.apache.ratis.util.ReflectionUtils.instantiateException(ReflectionUtils.java:222)
>  at 
> org.apache.ratis.grpc.RaftGrpcUtil.tryUnwrapException(RaftGrpcUtil.java:79)
>  at 

[jira] [Commented] (HDDS-369) Remove the containers of a dead node from the container state map

2018-09-12 Thread LiXin Ge (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612943#comment-16612943
 ] 

LiXin Ge commented on HDDS-369:
---

Thanks for the comments [~elek].
Actually I found this doubt while adding a unit test for HDDS-401 (you can take 
a look at the 
[patch|https://issues.apache.org/jira/secure/attachment/12939217/HDDS-401.002.patch]
 if interested). I found that I must call {{registerReplicas}} on the target 
datanode (which is not related to my test case), or I get an exception like 
this:
{noformat}
[ERROR] Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.535 s 
<<< FAILURE! - in org.apache.hadoop.hdds.scm.node.TestDeadNodeHandler
[ERROR] 
testStatisticsUpdate(org.apache.hadoop.hdds.scm.node.TestDeadNodeHandler)  Time 
elapsed: 0.33 s  <<< ERROR!
java.lang.NullPointerException
at 
org.apache.hadoop.hdds.scm.node.DeadNodeHandler.onMessage(DeadNodeHandler.java:68)
at 
org.apache.hadoop.hdds.scm.node.TestDeadNodeHandler.testStatisticsUpdate(TestDeadNodeHandler.java:179)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
{noformat}
The exception is not thrown by {{node2ContainerMap}} but by 
{{DeadNodeHandler#onMessage}}. Even so, if it only impacts the unit test as 
you said, we can just let it be.

> Remove the containers of a dead node from the container state map
> -
>
> Key: HDDS-369
> URL: https://issues.apache.org/jira/browse/HDDS-369
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-369.001.patch, HDDS-369.002.patch, 
> HDDS-369.003.patch, HDDS-369.004.patch, HDDS-369.005.patch, HDDS-369.006.patch
>
>
> In case of a node is dead we need to update the container replicas 
> information of the containerStateMap for all the containers from that 
> specific node.
> With removing the replica information we can detect the under replicated 
> state and start the replication.






[jira] [Commented] (HDFS-13880) Add mechanism to allow certain RPC calls to bypass sync

2018-09-12 Thread Konstantin Shvachko (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612932#comment-16612932
 ] 

Konstantin Shvachko commented on HDFS-13880:


Hey Chen, this looks good, except for the unused {{import FsAction}} in 
{{TestObserverNode}}.
+1, you can remove the import on commit.

> Add mechanism to allow certain RPC calls to bypass sync
> ---
>
> Key: HDFS-13880
> URL: https://issues.apache.org/jira/browse/HDFS-13880
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-13880-HDFS-12943.001.patch, 
> HDFS-13880-HDFS-12943.002.patch, HDFS-13880-HDFS-12943.003.patch, 
> HDFS-13880-HDFS-12943.004.patch, HDFS-13880-HDFS-12943.005.patch
>
>
> Currently, every single call to NameNode will be synced, in the sense that 
> NameNode will not process it until state id catches up. But in certain cases, 
> we would like to bypass this check and allow the call to return immediately, 
> even when the server state id is not up to date. One case could be the 
> to-be-added API in HDFS-13749 that requests the current state id. Others may include 
> calls that do not promise real time responses such as {{getContentSummary}}. 
> This Jira is to add the mechanism to allow certain calls to bypass sync.






[jira] [Commented] (HDFS-13778) In TestStateAlignmentContextWithHA replace artificial AlignmentContextProxyProvider with real ObserverReadProxyProvider.

2018-09-12 Thread Konstantin Shvachko (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612927#comment-16612927
 ] 

Konstantin Shvachko commented on HDFS-13778:


Plamen, I like your changes. Looks good.
* I see you updated the package name, but didn't move the file in the directory 
tree. Glad you didn't, otherwise I wouldn't see the diff. I am actually fine 
with the test being in {{org.apache.hadoop.hdfs}} because we are testing mostly 
the client logic here rather than the server. Well, both, but from the client 
perspective. I suggest we leave the package as is for now to see the diff in 
the commit. If you decide to move it under {{org.apache.hadoop.hdfs.server.ha}} 
let's do a separate jira just for the move.
* Jenkins says there is a whitespace issue in {{runClientsWithFailover()}}
* I agree chasing {{testClientSendsState}} with reflection etc. is not 
productive. Time is better spent implementing some of the test cases from the 
test plan.

> In TestStateAlignmentContextWithHA replace artificial 
> AlignmentContextProxyProvider with real ObserverReadProxyProvider.
> 
>
> Key: HDFS-13778
> URL: https://issues.apache.org/jira/browse/HDFS-13778
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Konstantin Shvachko
>Assignee: Sherwood Zheng
>Priority: Major
> Attachments: HDFS-13778-HDFS-12943.001.patch, 
> HDFS-13778-HDFS-12943.002.patch, HDFS-13778-HDFS-12943.003.patch
>
>
> TestStateAlignmentContextWithHA uses an artificial 
> AlignmentContextProxyProvider, which was temporarily needed for testing. Now 
> that we have the real ObserverReadProxyProvider, it can take over from ACPP. 
> This is also useful for testing the ORPP.






[jira] [Comment Edited] (HDFS-13778) In TestStateAlignmentContextWithHA replace artificial AlignmentContextProxyProvider with real ObserverReadProxyProvider.

2018-09-12 Thread Konstantin Shvachko (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612927#comment-16612927
 ] 

Konstantin Shvachko edited comment on HDFS-13778 at 9/13/18 1:44 AM:
-

Plamen, I like your changes. Looks good.
 * -I see you updated the package name, but didn't move the file in the 
directory tree. Glad you didn't, otherwise I wouldn't see the diff.-
You did move the file; my IDE messed up. Still, I am actually fine with the 
test being in {{org.apache.hadoop.hdfs}} because we are testing mostly the 
client logic here rather than the server. Well, both, but from the client 
perspective. I suggest we leave the package as is for now to see the diff in 
the commit. If you decide to move it under {{org.apache.hadoop.hdfs.server.ha}} 
let's do a separate jira just for the move.
 * Jenkins says there is a whitespace issue in {{runClientsWithFailover()}}
 * I agree chasing {{testClientSendsState}} with reflection etc. is not 
productive. Time is better spent implementing some of the test cases from the 
test plan.


was (Author: shv):
Plamen, like your changes. Looks good.
* I see you updated the package name, but didn't move the file in the directory 
tree. Glad you didn't, otherwise I wouldn't see the diff. I am actually fine 
with the test being in {{org.apache.hadoop.hdfs}} because we are testing mostly 
the client logic here rather the server. Well both, but from client 
perspective. I suggest we leave the package as is for now to see the diff in 
the commit. If you decide to move it under {{org.apache.hadoop.hdfs.server.ha}} 
let's do a separate jira just for the move.
* Jenkins says there is a space in {{runClientsWithFailover()}}
* I agree chasing {{testClientSendsState}} with reflections etc. is not 
productive. Better time spent implementing some of the test cases from the test 
plan.

> In TestStateAlignmentContextWithHA replace artificial 
> AlignmentContextProxyProvider with real ObserverReadProxyProvider.
> 
>
> Key: HDFS-13778
> URL: https://issues.apache.org/jira/browse/HDFS-13778
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Konstantin Shvachko
>Assignee: Sherwood Zheng
>Priority: Major
> Attachments: HDFS-13778-HDFS-12943.001.patch, 
> HDFS-13778-HDFS-12943.002.patch, HDFS-13778-HDFS-12943.003.patch
>
>
> TestStateAlignmentContextWithHA uses an artificial 
> AlignmentContextProxyProvider, which was temporarily needed for testing. Now 
> that we have the real ObserverReadProxyProvider, it can take over from ACPP. 
> This is also useful for testing the ORPP.






[jira] [Comment Edited] (HDDS-439) 'ozone oz volume create' should default to current Unix user

2018-09-12 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612922#comment-16612922
 ] 

Arpit Agarwal edited comment on HDDS-439 at 9/13/18 1:27 AM:
-

-Thanks for picking this up [~sreevaddi]!- Let me know if I can help in any way.

Edit: Just realized [~anu] assigned this to you. :)


was (Author: arpitagarwal):
Thanks for picking this up [~sreevaddi]! Let me know if I can help in any way.

> 'ozone oz volume create' should default to current Unix user
> 
>
> Key: HDDS-439
> URL: https://issues.apache.org/jira/browse/HDDS-439
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Arpit Agarwal
>Assignee: Sree Vaddi
>Priority: Major
>  Labels: newbie
>
> Currently the user parameter appears to be mandatory. It should just default 
> to the current Unix user if missing.
> E.g.
> {code:java}
> $ ozone oz volume create vol32
> Missing required option '--user='{code}






[jira] [Commented] (HDDS-439) 'ozone oz volume create' should default to current Unix user

2018-09-12 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612922#comment-16612922
 ] 

Arpit Agarwal commented on HDDS-439:


Thanks for picking this up [~sreevaddi]! Let me know if I can help in any way.

> 'ozone oz volume create' should default to current Unix user
> 
>
> Key: HDDS-439
> URL: https://issues.apache.org/jira/browse/HDDS-439
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Arpit Agarwal
>Assignee: Sree Vaddi
>Priority: Major
>  Labels: newbie
>
> Currently the user parameter appears to be mandatory. It should just default 
> to the current Unix user if missing.
> E.g.
> {code:java}
> $ ozone oz volume create vol32
> Missing required option '--user='{code}






[jira] [Assigned] (HDDS-440) Datanode loops forever if it cannot create directories

2018-09-12 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer reassigned HDDS-440:
-

Assignee: Sree Vaddi

> Datanode loops forever if it cannot create directories
> --
>
> Key: HDDS-440
> URL: https://issues.apache.org/jira/browse/HDDS-440
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Arpit Agarwal
>Assignee: Sree Vaddi
>Priority: Blocker
>  Labels: newbie
>
> Datanode starts but runs in a tight loop forever if it cannot create the 
> DataNode ID directory e.g. due to permissions issues. I encountered this by 
> having a typo in my ozone-site.xml for {{ozone.scm.datanode.id}}.
> In just a few minutes the DataNode had generated over 20GB of log+out files 
> with the following exception:
> {code:java}
> 2018-09-12 17:28:20,649 WARN 
> org.apache.hadoop.util.concurrent.ExecutorHelper: Caught exception in thread 
> Datanode State Machine Thread - 2
> 63:
> java.io.IOException: Unable to create datanode ID directories.
> at 
> org.apache.hadoop.ozone.container.common.helpers.ContainerUtils.writeDatanodeDetailsTo(ContainerUtils.java:211)
> at 
> org.apache.hadoop.ozone.container.common.states.datanode.InitDatanodeState.persistContainerDatanodeDetails(InitDatanodeState.java:131)
> at 
> org.apache.hadoop.ozone.container.common.states.datanode.InitDatanodeState.call(InitDatanodeState.java:111)
> at 
> org.apache.hadoop.ozone.container.common.states.datanode.InitDatanodeState.call(InitDatanodeState.java:50)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> 2018-09-12 17:28:20,648 WARN 
> org.apache.hadoop.util.concurrent.ExecutorHelper: Execution exception when 
> running task in Datanode State Mach
> ine Thread - 160
> 2018-09-12 17:28:20,650 WARN 
> org.apache.hadoop.util.concurrent.ExecutorHelper: Caught exception in thread 
> Datanode State Machine Thread - 1
> 60:
> java.io.IOException: Unable to create datanode ID directories.
> at 
> org.apache.hadoop.ozone.container.common.helpers.ContainerUtils.writeDatanodeDetailsTo(ContainerUtils.java:211)
> at 
> org.apache.hadoop.ozone.container.common.states.datanode.InitDatanodeState.persistContainerDatanodeDetails(InitDatanodeState.java:131)
> at 
> org.apache.hadoop.ozone.container.common.states.datanode.InitDatanodeState.call(InitDatanodeState.java:111)
> at 
> org.apache.hadoop.ozone.container.common.states.datanode.InitDatanodeState.call(InitDatanodeState.java:50)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748){code}
> We should just exit since this is a fatal issue.
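The proposed fail-fast behavior can be sketched like this; class and method names are illustrative, not the actual {{InitDatanodeState}} code:

```java
import java.io.File;
import java.io.IOException;

// Sketch: treat an unwritable datanode ID directory as fatal and stop,
// instead of resubmitting the init task forever and flooding the logs.
public class FailFastInit {
    /** Returns normally if the directory exists or can be created. */
    static void ensureDatanodeIdDir(File dir) throws IOException {
        if (!dir.isDirectory() && !dir.mkdirs()) {
            throw new IOException("Unable to create datanode ID directory: " + dir);
        }
    }

    public static void main(String[] args) throws Exception {
        // A child path of a regular file can never be created as a directory,
        // which portably simulates the misconfigured ozone.scm.datanode.id.
        File notADir = File.createTempFile("dnid", ".tmp");
        try {
            ensureDatanodeIdDir(new File(notADir, "ids"));
        } catch (IOException fatal) {
            // Fatal and unrecoverable: a real daemon should terminate here
            // (ExitUtil.terminate style) rather than loop and retry.
            System.out.println("fatal: " + fatal.getMessage());
        }
    }
}
```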






[jira] [Assigned] (HDDS-439) 'ozone oz volume create' should default to current Unix user

2018-09-12 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer reassigned HDDS-439:
-

Assignee: Sree Vaddi

> 'ozone oz volume create' should default to current Unix user
> 
>
> Key: HDDS-439
> URL: https://issues.apache.org/jira/browse/HDDS-439
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Arpit Agarwal
>Assignee: Sree Vaddi
>Priority: Major
>  Labels: newbie
>
> Currently the user parameter appears to be mandatory. It should just default 
> to the current Unix user if missing.
> E.g.
> {code:java}
> $ ozone oz volume create vol32
> Missing required option '--user='{code}






[jira] [Commented] (HDFS-13566) Add configurable additional RPC listener to NameNode

2018-09-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612903#comment-16612903
 ] 

Hadoop QA commented on HDFS-13566:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
20m 33s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
9s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
31s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
4m 53s{color} | {color:orange} root: The patch generated 3 new + 676 unchanged 
- 1 fixed = 679 total (was 677) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
48s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 43s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
40s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
54s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs generated 3 new + 1 
unchanged - 0 fixed = 4 total (was 1) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
11s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
41s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 44s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}138m 25s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDFS-13566 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12939477/HDFS-13566.005.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 0aaa64950af8 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 

[jira] [Commented] (HDDS-174) Shell error messages are often cryptic

2018-09-12 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612901#comment-16612901
 ] 

Arpit Agarwal commented on HDDS-174:


Also, we should print the exception stack trace on the client side when 
possible. I hit an issue where the DataNode was not registered. Key creation 
failed with {{KEY_ALLOCATION_ERROR}}. The SCM logs showed the exception to be:
{code}
2018-09-12 17:39:27,842 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 
on 9863, call Call#4 Retry#0 
org.apache.hadoop.ozone.protocol.ScmBlockLocationProtocol.allocateScmBlock from 
127.0.0.1:50389
org.apache.hadoop.hdds.scm.exceptions.SCMException: Unable to create block 
while in chill mode
at 
org.apache.hadoop.hdds.scm.block.BlockManagerImpl.allocateBlock(BlockManagerImpl.java:207)
at 
org.apache.hadoop.hdds.scm.server.SCMBlockProtocolServer.allocateBlock(SCMBlockProtocolServer.java:143)
at 
org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.allocateScmBlock(ScmBlockLocationProtocolServerSideTranslatorPB.java:74)
at 
org.apache.hadoop.hdds.protocol.proto.ScmBlockLocationProtocolProtos$ScmBlockLocationProtocolService$2.callBlockingMethod(ScmBlockLocationProtocolProtos.java:6271)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
2018-09-12 17:43:52,806 INFO 
org.apache.hadoop.hdds.scm.server.StorageContainerManager: STARTUP_MSG: 
{code}
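One way to surface that server-side detail on the client (a sketch only, not the actual Ozone shell code; the `ErrorReporter` class and `describe` method are hypothetical names) is to flatten the cause chain of the failure into one readable line, so the user sees "Unable to create block while in chill mode" instead of a bare {{KEY_ALLOCATION_ERROR}}:

```java
// Hypothetical client-side helper: flattens the cause chain of a failure
// into one readable line, so server-reported detail (e.g. the SCMException
// message) is surfaced instead of only an opaque error code.
public final class ErrorReporter {
  private ErrorReporter() { }

  public static String describe(Throwable t) {
    StringBuilder sb = new StringBuilder();
    for (Throwable c = t; c != null; c = c.getCause()) {
      if (sb.length() > 0) {
        sb.append(" <- ");
      }
      sb.append(c.getClass().getSimpleName());
      if (c.getMessage() != null) {
        sb.append(": ").append(c.getMessage());
      }
    }
    return sb.toString();
  }
}
```

For example, describing an IOException("Create key failed") caused by an IllegalStateException("Unable to create block while in chill mode") yields a single line containing both messages.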



> Shell error messages are often cryptic
> --
>
> Key: HDDS-174
> URL: https://issues.apache.org/jira/browse/HDDS-174
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Nanda kumar
>Priority: Blocker
>  Labels: newbie
> Fix For: 0.3.0
>
>
> Error messages in the Ozone shell are often too cryptic. e.g.
> {code}
> $ ozone oz -putKey /vol1/bucket1/key1 -file foo.txt
> Command Failed : Create key failed, error:INTERNAL_ERROR
> {code}






[jira] [Commented] (HDDS-440) Datanode loops forever if it cannot create directories

2018-09-12 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612900#comment-16612900
 ] 

Anu Engineer commented on HDDS-440:
---

Great find. Thanks.

> Datanode loops forever if it cannot create directories
> --
>
> Key: HDDS-440
> URL: https://issues.apache.org/jira/browse/HDDS-440
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Arpit Agarwal
>Priority: Blocker
>  Labels: newbie
>
> Datanode starts but runs in a tight loop forever if it cannot create the 
> DataNode ID directory e.g. due to permissions issues. I encountered this by 
> having a typo in my ozone-site.xml for {{ozone.scm.datanode.id}}.
> In just a few minutes the DataNode had generated over 20GB of log+out files 
> with the following exception:
> {code:java}
> 2018-09-12 17:28:20,649 WARN 
> org.apache.hadoop.util.concurrent.ExecutorHelper: Caught exception in thread 
> Datanode State Machine Thread - 263:
> java.io.IOException: Unable to create datanode ID directories.
> at 
> org.apache.hadoop.ozone.container.common.helpers.ContainerUtils.writeDatanodeDetailsTo(ContainerUtils.java:211)
> at 
> org.apache.hadoop.ozone.container.common.states.datanode.InitDatanodeState.persistContainerDatanodeDetails(InitDatanodeState.java:131)
> at 
> org.apache.hadoop.ozone.container.common.states.datanode.InitDatanodeState.call(InitDatanodeState.java:111)
> at 
> org.apache.hadoop.ozone.container.common.states.datanode.InitDatanodeState.call(InitDatanodeState.java:50)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> 2018-09-12 17:28:20,648 WARN 
> org.apache.hadoop.util.concurrent.ExecutorHelper: Execution exception when 
> running task in Datanode State Machine Thread - 160
> 2018-09-12 17:28:20,650 WARN 
> org.apache.hadoop.util.concurrent.ExecutorHelper: Caught exception in thread 
> Datanode State Machine Thread - 160:
> java.io.IOException: Unable to create datanode ID directories.
> at 
> org.apache.hadoop.ozone.container.common.helpers.ContainerUtils.writeDatanodeDetailsTo(ContainerUtils.java:211)
> at 
> org.apache.hadoop.ozone.container.common.states.datanode.InitDatanodeState.persistContainerDatanodeDetails(InitDatanodeState.java:131)
> at 
> org.apache.hadoop.ozone.container.common.states.datanode.InitDatanodeState.call(InitDatanodeState.java:111)
> at 
> org.apache.hadoop.ozone.container.common.states.datanode.InitDatanodeState.call(InitDatanodeState.java:50)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748){code}
> We should just exit since this is a fatal issue.






[jira] [Commented] (HDDS-394) Rename *Key Apis in DatanodeContainerProtocol to *Block apis

2018-09-12 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612899#comment-16612899
 ] 

Anu Engineer commented on HDDS-394:
---

This will be a good Jira to fix for Acadia.

 

> Rename *Key Apis in DatanodeContainerProtocol to *Block apis
> 
>
> Key: HDDS-394
> URL: https://issues.apache.org/jira/browse/HDDS-394
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Mukul Kumar Singh
>Assignee: Dinesh Chitlangia
>Priority: Major
>
> All the block APIs in the client-datanode interaction are named *Key APIs 
> (e.g. PutKey). These can be renamed to *Block APIs (e.g. PutBlock).






[jira] [Created] (HDDS-440) Datanode loops forever if it cannot create directories

2018-09-12 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDDS-440:
--

 Summary: Datanode loops forever if it cannot create directories
 Key: HDDS-440
 URL: https://issues.apache.org/jira/browse/HDDS-440
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Datanode
Reporter: Arpit Agarwal


Datanode starts but runs in a tight loop forever if it cannot create the 
DataNode ID directory e.g. due to permissions issues. I encountered this by 
having a typo in my ozone-site.xml for {{ozone.scm.datanode.id}}.

In just a few minutes the DataNode had generated over 20GB of log+out files 
with the following exception:
{code:java}
2018-09-12 17:28:20,649 WARN org.apache.hadoop.util.concurrent.ExecutorHelper: 
Caught exception in thread Datanode State Machine Thread - 263:
java.io.IOException: Unable to create datanode ID directories.
at 
org.apache.hadoop.ozone.container.common.helpers.ContainerUtils.writeDatanodeDetailsTo(ContainerUtils.java:211)
at 
org.apache.hadoop.ozone.container.common.states.datanode.InitDatanodeState.persistContainerDatanodeDetails(InitDatanodeState.java:131)
at 
org.apache.hadoop.ozone.container.common.states.datanode.InitDatanodeState.call(InitDatanodeState.java:111)
at 
org.apache.hadoop.ozone.container.common.states.datanode.InitDatanodeState.call(InitDatanodeState.java:50)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2018-09-12 17:28:20,648 WARN org.apache.hadoop.util.concurrent.ExecutorHelper: 
Execution exception when running task in Datanode State Machine Thread - 160
2018-09-12 17:28:20,650 WARN org.apache.hadoop.util.concurrent.ExecutorHelper: 
Caught exception in thread Datanode State Machine Thread - 160:
java.io.IOException: Unable to create datanode ID directories.
at 
org.apache.hadoop.ozone.container.common.helpers.ContainerUtils.writeDatanodeDetailsTo(ContainerUtils.java:211)
at 
org.apache.hadoop.ozone.container.common.states.datanode.InitDatanodeState.persistContainerDatanodeDetails(InitDatanodeState.java:131)
at 
org.apache.hadoop.ozone.container.common.states.datanode.InitDatanodeState.call(InitDatanodeState.java:111)
at 
org.apache.hadoop.ozone.container.common.states.datanode.InitDatanodeState.call(InitDatanodeState.java:50)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748){code}

We should just exit since this is a fatal issue.
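A minimal sketch of the proposed fix (a hypothetical helper, not the actual {{InitDatanodeState}} code): treat the directory-creation failure as fatal and terminate once with a clear message, rather than letting the state machine retry in a tight loop:

```java
import java.io.File;

// Hypothetical sketch: if the datanode ID directory cannot be created
// (bad path or permissions), retrying cannot succeed, so fail fast with
// one clear message instead of logging the same IOException forever.
public final class DatanodeIdDir {
  private DatanodeIdDir() { }

  /** Returns true if the parent directory of idFile exists or was created. */
  public static boolean ensureParentDir(File idFile) {
    File dir = idFile.getParentFile();
    return dir == null || dir.isDirectory() || dir.mkdirs();
  }

  /** Exits the process on failure; this is a fatal configuration error. */
  public static void ensureParentDirOrExit(File idFile) {
    if (!ensureParentDir(idFile)) {
      System.err.println("Fatal: unable to create datanode ID directory for "
          + idFile + "; check ozone.scm.datanode.id and permissions.");
      System.exit(1);
    }
  }
}
```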






[jira] [Updated] (HDDS-440) Datanode loops forever if it cannot create directories

2018-09-12 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-440:
---
Labels: newbie  (was: )

> Datanode loops forever if it cannot create directories
> --
>
> Key: HDDS-440
> URL: https://issues.apache.org/jira/browse/HDDS-440
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Arpit Agarwal
>Priority: Blocker
>  Labels: newbie
>
> Datanode starts but runs in a tight loop forever if it cannot create the 
> DataNode ID directory e.g. due to permissions issues. I encountered this by 
> having a typo in my ozone-site.xml for {{ozone.scm.datanode.id}}.
> In just a few minutes the DataNode had generated over 20GB of log+out files 
> with the following exception:
> {code:java}
> 2018-09-12 17:28:20,649 WARN 
> org.apache.hadoop.util.concurrent.ExecutorHelper: Caught exception in thread 
> Datanode State Machine Thread - 263:
> java.io.IOException: Unable to create datanode ID directories.
> at 
> org.apache.hadoop.ozone.container.common.helpers.ContainerUtils.writeDatanodeDetailsTo(ContainerUtils.java:211)
> at 
> org.apache.hadoop.ozone.container.common.states.datanode.InitDatanodeState.persistContainerDatanodeDetails(InitDatanodeState.java:131)
> at 
> org.apache.hadoop.ozone.container.common.states.datanode.InitDatanodeState.call(InitDatanodeState.java:111)
> at 
> org.apache.hadoop.ozone.container.common.states.datanode.InitDatanodeState.call(InitDatanodeState.java:50)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> 2018-09-12 17:28:20,648 WARN 
> org.apache.hadoop.util.concurrent.ExecutorHelper: Execution exception when 
> running task in Datanode State Machine Thread - 160
> 2018-09-12 17:28:20,650 WARN 
> org.apache.hadoop.util.concurrent.ExecutorHelper: Caught exception in thread 
> Datanode State Machine Thread - 160:
> java.io.IOException: Unable to create datanode ID directories.
> at 
> org.apache.hadoop.ozone.container.common.helpers.ContainerUtils.writeDatanodeDetailsTo(ContainerUtils.java:211)
> at 
> org.apache.hadoop.ozone.container.common.states.datanode.InitDatanodeState.persistContainerDatanodeDetails(InitDatanodeState.java:131)
> at 
> org.apache.hadoop.ozone.container.common.states.datanode.InitDatanodeState.call(InitDatanodeState.java:111)
> at 
> org.apache.hadoop.ozone.container.common.states.datanode.InitDatanodeState.call(InitDatanodeState.java:50)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748){code}
> We should just exit since this is a fatal issue.






[jira] [Commented] (HDDS-138) createVolume bug with non-existent user

2018-09-12 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612885#comment-16612885
 ] 

Arpit Agarwal commented on HDDS-138:


Looks like this bug still exists. I am tentatively making it an Acadia blocker 
since users are sure to hit it if they specify the user name incorrectly. cc 
[~anu].

> createVolume bug with non-existent user
> ---
>
> Key: HDDS-138
> URL: https://issues.apache.org/jira/browse/HDDS-138
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Shashikant Banerjee
>Priority: Blocker
>  Labels: newbie, usability
> Fix For: 0.2.1
>
>
> When createVolume is invoked for a non-existent user, it fails with 
> {{PartialGroupNameException}}.
> {code:java}
> hadoop@9a70d9aa6bf9:~$ ozone oz -createVolume /vol4 -user nosuchuser
> 2018-05-31 20:40:17 WARN  ShellBasedUnixGroupsMapping:210 - unable to 
> return groups for user nosuchuser
> PartialGroupNameException The user name 'nosuchuser' is not found. id: 
> ‘nosuchuser’: no such user
> id: ‘nosuchuser’: no such user
>   at 
> org.apache.hadoop.security.ShellBasedUnixGroupsMapping.resolvePartialGroupNames(ShellBasedUnixGroupsMapping.java:294)
>   at 
> org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getUnixGroups(ShellBasedUnixGroupsMapping.java:207)
>   at 
> org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getGroups(ShellBasedUnixGroupsMapping.java:97)
>   at 
> org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.getGroups(JniBasedUnixGroupsMappingWithFallback.java:51)
>   at 
> org.apache.hadoop.security.Groups$GroupCacheLoader.fetchGroupList(Groups.java:384)
>   at 
> org.apache.hadoop.security.Groups$GroupCacheLoader.load(Groups.java:319)
>   at 
> org.apache.hadoop.security.Groups$GroupCacheLoader.load(Groups.java:269)
>   at 
> com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3568)
>   at 
> com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2350)
>   at 
> com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2313)
>   at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2228)
>   at com.google.common.cache.LocalCache.get(LocalCache.java:3965)
>   at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3969)
>   at 
> com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4829)
>   at org.apache.hadoop.security.Groups.getGroups(Groups.java:227)
>   at 
> org.apache.hadoop.security.UserGroupInformation.getGroups(UserGroupInformation.java:1545)
>   at 
> org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1533)
>   at 
> org.apache.hadoop.ozone.client.rpc.RpcClient.createVolume(RpcClient.java:190)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.hadoop.ozone.client.OzoneClientInvocationHandler.invoke(OzoneClientInvocationHandler.java:54)
>   at com.sun.proxy.$Proxy11.createVolume(Unknown Source)
>   at 
> org.apache.hadoop.ozone.client.ObjectStore.createVolume(ObjectStore.java:77)
>   at 
> org.apache.hadoop.ozone.web.ozShell.volume.CreateVolumeHandler.execute(CreateVolumeHandler.java:98)
>   at org.apache.hadoop.ozone.web.ozShell.Shell.dispatch(Shell.java:395)
>   at org.apache.hadoop.ozone.web.ozShell.Shell.run(Shell.java:135)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at org.apache.hadoop.ozone.web.ozShell.Shell.main(Shell.java:114)
> 2018-05-31 20:40:17 INFO  RpcClient:210 - Creating Volume: vol4, with 
> nosuchuser as owner and quota set to 1152921504606846976 bytes.
> {code}
> However the volume appears to be created:
> {code:json}
> ozone oz -listVolume o3:/// -user nosuchuser -root
> [ {
>   "owner" : {
> "name" : "nosuchuser"
>   },
>   "quota" : {
> "unit" : "TB",
> "size" : 1048576
>   },
>   "volumeName" : "vol4",
>   "createdOn" : "Thu, 31 May 2018 20:40:17 GMT",
>   "createdBy" : "nosuchuser"
> } ]
> {code}
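One possible shape for the fix (a hypothetical sketch only, not the actual {{RpcClient}} code; {{OwnerValidator}} and the pluggable {{groupsLookup}} stand in for Hadoop's {{Groups.getGroups()}}): resolve the owner's groups up front and reject a non-existent user with one clear error, instead of logging a verbose {{PartialGroupNameException}} warning and creating the volume anyway:

```java
import java.util.Collections;
import java.util.List;
import java.util.function.Function;

// Hypothetical sketch: fail createVolume early with a single clear error
// when the requested owner is not a known user.
public final class OwnerValidator {
  private OwnerValidator() { }

  /**
   * groupsLookup stands in for Hadoop's Groups.getGroups(); it is assumed
   * to return an empty list (or throw) for an unknown user.
   */
  public static void requireKnownUser(String user,
      Function<String, List<String>> groupsLookup) {
    List<String> groups;
    try {
      groups = groupsLookup.apply(user);
    } catch (RuntimeException e) {
      groups = Collections.emptyList();
    }
    if (groups.isEmpty()) {
      throw new IllegalArgumentException(
          "Volume owner '" + user + "' is not a known user");
    }
  }
}
```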






[jira] [Updated] (HDDS-138) createVolume bug with non-existent user

2018-09-12 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-138:
---
Fix Version/s: 0.2.1

> createVolume bug with non-existent user
> ---
>
> Key: HDDS-138
> URL: https://issues.apache.org/jira/browse/HDDS-138
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: newbie, usability
> Fix For: 0.2.1
>
>
> When createVolume is invoked for a non-existent user, it fails with 
> {{PartialGroupNameException}}.
> {code:java}
> hadoop@9a70d9aa6bf9:~$ ozone oz -createVolume /vol4 -user nosuchuser
> 2018-05-31 20:40:17 WARN  ShellBasedUnixGroupsMapping:210 - unable to 
> return groups for user nosuchuser
> PartialGroupNameException The user name 'nosuchuser' is not found. id: 
> ‘nosuchuser’: no such user
> id: ‘nosuchuser’: no such user
>   at 
> org.apache.hadoop.security.ShellBasedUnixGroupsMapping.resolvePartialGroupNames(ShellBasedUnixGroupsMapping.java:294)
>   at 
> org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getUnixGroups(ShellBasedUnixGroupsMapping.java:207)
>   at 
> org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getGroups(ShellBasedUnixGroupsMapping.java:97)
>   at 
> org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.getGroups(JniBasedUnixGroupsMappingWithFallback.java:51)
>   at 
> org.apache.hadoop.security.Groups$GroupCacheLoader.fetchGroupList(Groups.java:384)
>   at 
> org.apache.hadoop.security.Groups$GroupCacheLoader.load(Groups.java:319)
>   at 
> org.apache.hadoop.security.Groups$GroupCacheLoader.load(Groups.java:269)
>   at 
> com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3568)
>   at 
> com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2350)
>   at 
> com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2313)
>   at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2228)
>   at com.google.common.cache.LocalCache.get(LocalCache.java:3965)
>   at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3969)
>   at 
> com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4829)
>   at org.apache.hadoop.security.Groups.getGroups(Groups.java:227)
>   at 
> org.apache.hadoop.security.UserGroupInformation.getGroups(UserGroupInformation.java:1545)
>   at 
> org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1533)
>   at 
> org.apache.hadoop.ozone.client.rpc.RpcClient.createVolume(RpcClient.java:190)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.hadoop.ozone.client.OzoneClientInvocationHandler.invoke(OzoneClientInvocationHandler.java:54)
>   at com.sun.proxy.$Proxy11.createVolume(Unknown Source)
>   at 
> org.apache.hadoop.ozone.client.ObjectStore.createVolume(ObjectStore.java:77)
>   at 
> org.apache.hadoop.ozone.web.ozShell.volume.CreateVolumeHandler.execute(CreateVolumeHandler.java:98)
>   at org.apache.hadoop.ozone.web.ozShell.Shell.dispatch(Shell.java:395)
>   at org.apache.hadoop.ozone.web.ozShell.Shell.run(Shell.java:135)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at org.apache.hadoop.ozone.web.ozShell.Shell.main(Shell.java:114)
> 2018-05-31 20:40:17 INFO  RpcClient:210 - Creating Volume: vol4, with 
> nosuchuser as owner and quota set to 1152921504606846976 bytes.
> {code}
> However the volume appears to be created:
> {code:json}
> ozone oz -listVolume o3:/// -user nosuchuser -root
> [ {
>   "owner" : {
> "name" : "nosuchuser"
>   },
>   "quota" : {
> "unit" : "TB",
> "size" : 1048576
>   },
>   "volumeName" : "vol4",
>   "createdOn" : "Thu, 31 May 2018 20:40:17 GMT",
>   "createdBy" : "nosuchuser"
> } ]
> {code}






[jira] [Updated] (HDDS-138) createVolume bug with non-existent user

2018-09-12 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-138:
---
Priority: Blocker  (was: Major)

> createVolume bug with non-existent user
> ---
>
> Key: HDDS-138
> URL: https://issues.apache.org/jira/browse/HDDS-138
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Arpit Agarwal
>Assignee: Shashikant Banerjee
>Priority: Blocker
>  Labels: newbie, usability
> Fix For: 0.2.1
>
>
> When createVolume is invoked for a non-existent user, it fails with 
> {{PartialGroupNameException}}.
> {code:java}
> hadoop@9a70d9aa6bf9:~$ ozone oz -createVolume /vol4 -user nosuchuser
> 2018-05-31 20:40:17 WARN  ShellBasedUnixGroupsMapping:210 - unable to 
> return groups for user nosuchuser
> PartialGroupNameException The user name 'nosuchuser' is not found. id: 
> ‘nosuchuser’: no such user
> id: ‘nosuchuser’: no such user
>   at 
> org.apache.hadoop.security.ShellBasedUnixGroupsMapping.resolvePartialGroupNames(ShellBasedUnixGroupsMapping.java:294)
>   at 
> org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getUnixGroups(ShellBasedUnixGroupsMapping.java:207)
>   at 
> org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getGroups(ShellBasedUnixGroupsMapping.java:97)
>   at 
> org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.getGroups(JniBasedUnixGroupsMappingWithFallback.java:51)
>   at 
> org.apache.hadoop.security.Groups$GroupCacheLoader.fetchGroupList(Groups.java:384)
>   at 
> org.apache.hadoop.security.Groups$GroupCacheLoader.load(Groups.java:319)
>   at 
> org.apache.hadoop.security.Groups$GroupCacheLoader.load(Groups.java:269)
>   at 
> com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3568)
>   at 
> com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2350)
>   at 
> com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2313)
>   at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2228)
>   at com.google.common.cache.LocalCache.get(LocalCache.java:3965)
>   at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3969)
>   at 
> com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4829)
>   at org.apache.hadoop.security.Groups.getGroups(Groups.java:227)
>   at 
> org.apache.hadoop.security.UserGroupInformation.getGroups(UserGroupInformation.java:1545)
>   at 
> org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1533)
>   at 
> org.apache.hadoop.ozone.client.rpc.RpcClient.createVolume(RpcClient.java:190)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.hadoop.ozone.client.OzoneClientInvocationHandler.invoke(OzoneClientInvocationHandler.java:54)
>   at com.sun.proxy.$Proxy11.createVolume(Unknown Source)
>   at 
> org.apache.hadoop.ozone.client.ObjectStore.createVolume(ObjectStore.java:77)
>   at 
> org.apache.hadoop.ozone.web.ozShell.volume.CreateVolumeHandler.execute(CreateVolumeHandler.java:98)
>   at org.apache.hadoop.ozone.web.ozShell.Shell.dispatch(Shell.java:395)
>   at org.apache.hadoop.ozone.web.ozShell.Shell.run(Shell.java:135)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at org.apache.hadoop.ozone.web.ozShell.Shell.main(Shell.java:114)
> 2018-05-31 20:40:17 INFO  RpcClient:210 - Creating Volume: vol4, with 
> nosuchuser as owner and quota set to 1152921504606846976 bytes.
> {code}
> However the volume appears to be created:
> {code:json}
> ozone oz -listVolume o3:/// -user nosuchuser -root
> [ {
>   "owner" : {
> "name" : "nosuchuser"
>   },
>   "quota" : {
> "unit" : "TB",
> "size" : 1048576
>   },
>   "volumeName" : "vol4",
>   "createdOn" : "Thu, 31 May 2018 20:40:17 GMT",
>   "createdBy" : "nosuchuser"
> } ]
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-439) 'ozone oz volume create' should default to current Unix user

2018-09-12 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDDS-439:
--

 Summary: 'ozone oz volume create' should default to current Unix 
user
 Key: HDDS-439
 URL: https://issues.apache.org/jira/browse/HDDS-439
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: Tools
Reporter: Arpit Agarwal


Currently the {{--user}} parameter appears to be mandatory. It should default to 
the current Unix user when omitted.

E.g.
{code:java}
$ ozone oz volume create vol32
Missing required option '--user='{code}
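A minimal sketch of the proposed fallback (the class and method names here are invented for illustration; the real CLI would presumably resolve the owner through Hadoop's UserGroupInformation rather than the {{user.name}} system property):

```java
public final class OwnerResolver {
    private OwnerResolver() { }

    /**
     * Returns the explicitly supplied owner if present, otherwise falls
     * back to the current Unix user running the CLI process.
     */
    public static String resolveOwner(String userOption) {
        if (userOption != null && !userOption.isEmpty()) {
            return userOption;
        }
        // Fallback: the JVM exposes the current OS user as "user.name".
        return System.getProperty("user.name");
    }

    public static void main(String[] args) {
        System.out.println(resolveOwner(args.length > 0 ? args[0] : null));
    }
}
```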






[jira] [Commented] (HDFS-13882) Set a maximum for the delay before retrying locateFollowingBlock

2018-09-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612875#comment-16612875
 ] 

Hadoop QA commented on HDFS-13882:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 34s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
31s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}100m 36s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}170m 21s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.tools.TestHdfsConfigFields |
|   | hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDFS-13882 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12939468/HDFS-13882.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux f68ff2c2b82c 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 

[jira] [Commented] (HDDS-394) Rename *Key Apis in DatanodeContainerProtocol to *Block apis

2018-09-12 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612867#comment-16612867
 ] 

Arpit Agarwal commented on HDDS-394:


Should we target this for Acadia (0.2.1)? cc [~anu].

> Rename *Key Apis in DatanodeContainerProtocol to *Block apis
> 
>
> Key: HDDS-394
> URL: https://issues.apache.org/jira/browse/HDDS-394
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Mukul Kumar Singh
>Assignee: Dinesh Chitlangia
>Priority: Major
>
> All the block APIs in the client-datanode interaction are named *Key APIs 
> (e.g. PutKey). These can be renamed to *Block APIs (e.g. PutBlock).






[jira] [Work started] (HDDS-438) 'ozone oz' should print usage when command or sub-command is missing

2018-09-12 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-438 started by Dinesh Chitlangia.
--
> 'ozone oz' should print usage when command or sub-command is missing
> 
>
> Key: HDDS-438
> URL: https://issues.apache.org/jira/browse/HDDS-438
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Arpit Agarwal
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
>
> When invoked without the command or sub-command, _ozone oz_ prints the 
> following error:
> {code:java}
> $ ozone oz
> Please select a subcommand
> {code}
> and
> {code:java}
> $ ozone oz volume
> Please select a subcommand
> {code}
> For most users familiar with Unix utilities it is obvious that they should 
> rerun the command with _--help_. However, we can simply print the usage 
> instead to avoid guesswork.
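The suggested behavior can be sketched with a hand-rolled dispatcher (purely illustrative; the class name and usage text below are invented, and the actual Ozone shell would print the help generated by its own CLI framework rather than a hard-coded string):

```java
import java.util.Arrays;

public final class Shell {
    private static final String USAGE =
        "Usage: ozone oz <command> <sub-command> [options]\n"
      + "Commands: volume, bucket, key\n"
      + "Run 'ozone oz <command> --help' for sub-command details.";

    /** Returns the message to print for the given argument list. */
    static String dispatch(String[] args) {
        // Print the full usage instead of a bare "Please select a subcommand"
        // whenever the command or sub-command is missing.
        if (args.length < 2) {
            return USAGE;
        }
        return "dispatching: " + Arrays.toString(args);
    }

    public static void main(String[] args) {
        System.out.println(dispatch(args));
    }
}
```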






[jira] [Comment Edited] (HDDS-438) 'ozone oz' should print usage when command or sub-command is missing

2018-09-12 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612852#comment-16612852
 ] 

Anu Engineer edited comment on HDDS-438 at 9/12/18 11:22 PM:
-

Changing the target version to the Acadia release; it is a small but extremely 
beneficial change. [~dineshchitlangia]


was (Author: anu):
Changing the target version to the Acadia release; it is a small but extremely 
beneficial change. [~dineshchitlangia], [~GeLiXin]

> 'ozone oz' should print usage when command or sub-command is missing
> 
>
> Key: HDDS-438
> URL: https://issues.apache.org/jira/browse/HDDS-438
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Arpit Agarwal
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
>
> When invoked without the command or sub-command, _ozone oz_ prints the 
> following error:
> {code:java}
> $ ozone oz
> Please select a subcommand
> {code}
> and
> {code:java}
> $ ozone oz volume
> Please select a subcommand
> {code}
> For most users familiar with Unix utilities it is obvious that they should 
> rerun the command with _--help_. However, we can simply print the usage 
> instead to avoid guesswork.






[jira] [Commented] (HDDS-438) 'ozone oz' should print usage when command or sub-command is missing

2018-09-12 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612853#comment-16612853
 ] 

Dinesh Chitlangia commented on HDDS-438:


[~anu] Thanks. I have taken ownership.

> 'ozone oz' should print usage when command or sub-command is missing
> 
>
> Key: HDDS-438
> URL: https://issues.apache.org/jira/browse/HDDS-438
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Arpit Agarwal
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
>
> When invoked without the command or sub-command, _ozone oz_ prints the 
> following error:
> {code:java}
> $ ozone oz
> Please select a subcommand
> {code}
> and
> {code:java}
> $ ozone oz volume
> Please select a subcommand
> {code}
> For most users familiar with Unix utilities it is obvious that they should 
> rerun the command with _--help_. However, we can simply print the usage 
> instead to avoid guesswork.






[jira] [Comment Edited] (HDDS-438) 'ozone oz' should print usage when command or sub-command is missing

2018-09-12 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612852#comment-16612852
 ] 

Anu Engineer edited comment on HDDS-438 at 9/12/18 11:21 PM:
-

Changing the target version to the Acadia release; it is a small but extremely 
beneficial change. [~dineshchitlangia], [~GeLiXin]


was (Author: anu):
Changing the target version to the Acadia release; it is a small but extremely 
beneficial change. [~dineshchitlangia]

> 'ozone oz' should print usage when command or sub-command is missing
> 
>
> Key: HDDS-438
> URL: https://issues.apache.org/jira/browse/HDDS-438
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Arpit Agarwal
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
>
> When invoked without the command or sub-command, _ozone oz_ prints the 
> following error:
> {code:java}
> $ ozone oz
> Please select a subcommand
> {code}
> and
> {code:java}
> $ ozone oz volume
> Please select a subcommand
> {code}
> For most users familiar with Unix utilities it is obvious that they should 
> rerun the command with _--help_. However, we can simply print the usage 
> instead to avoid guesswork.






[jira] [Assigned] (HDDS-438) 'ozone oz' should print usage when command or sub-command is missing

2018-09-12 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia reassigned HDDS-438:
--

Assignee: Dinesh Chitlangia

> 'ozone oz' should print usage when command or sub-command is missing
> 
>
> Key: HDDS-438
> URL: https://issues.apache.org/jira/browse/HDDS-438
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Arpit Agarwal
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
>
> When invoked without the command or sub-command, _ozone oz_ prints the 
> following error:
> {code:java}
> $ ozone oz
> Please select a subcommand
> {code}
> and
> {code:java}
> $ ozone oz volume
> Please select a subcommand
> {code}
> For most users familiar with Unix utilities it is obvious that they should 
> rerun the command with _--help_. However, we can simply print the usage 
> instead to avoid guesswork.






[jira] [Commented] (HDDS-438) 'ozone oz' should print usage when command or sub-command is missing

2018-09-12 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612852#comment-16612852
 ] 

Anu Engineer commented on HDDS-438:
---

Changing the target version to the Acadia release; it is a small but extremely 
beneficial change. [~dineshchitlangia]

> 'ozone oz' should print usage when command or sub-command is missing
> 
>
> Key: HDDS-438
> URL: https://issues.apache.org/jira/browse/HDDS-438
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Arpit Agarwal
>Priority: Major
>  Labels: newbie
>
> When invoked without the command or sub-command, _ozone oz_ prints the 
> following error:
> {code:java}
> $ ozone oz
> Please select a subcommand
> {code}
> and
> {code:java}
> $ ozone oz volume
> Please select a subcommand
> {code}
> For most users familiar with Unix utilities it is obvious that they should 
> rerun the command with _--help_. However, we can simply print the usage 
> instead to avoid guesswork.






[jira] [Updated] (HDDS-438) 'ozone oz' should print usage when command or sub-command is missing

2018-09-12 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-438:
--
Target Version/s: 0.2.1  (was: 0.3.0)

> 'ozone oz' should print usage when command or sub-command is missing
> 
>
> Key: HDDS-438
> URL: https://issues.apache.org/jira/browse/HDDS-438
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Arpit Agarwal
>Priority: Major
>  Labels: newbie
>
> When invoked without the command or sub-command, _ozone oz_ prints the 
> following error:
> {code:java}
> $ ozone oz
> Please select a subcommand
> {code}
> and
> {code:java}
> $ ozone oz volume
> Please select a subcommand
> {code}
> For most users familiar with Unix utilities it is obvious that they should 
> rerun the command with _--help_. However, we can simply print the usage 
> instead to avoid guesswork.






[jira] [Updated] (HDDS-438) 'ozone oz' should print usage when command or sub-command is missing

2018-09-12 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-438:
---
Description: 
When invoked without the command or sub-command, _ozone oz_ prints the 
following error:
{code:java}
$ ozone oz
Please select a subcommand
{code}
and
{code:java}
$ ozone oz volume
Please select a subcommand
{code}
For most users familiar with Unix utilities it is obvious that they should 
rerun the command with _--help_. However, we can simply print the usage 
instead to avoid guesswork.

  was:
When invoked without the command or sub-command, _ozone oz_ prints the 
following error:
{code:java}
$ ozone oz
Picked up _JAVA_OPTIONS: -Djava.awt.headless=true
Please select a subcommand
{code}
and
{code:java}
$ ozone oz volume
Picked up _JAVA_OPTIONS: -Djava.awt.headless=true
Please select a subcommand
{code}
For most users familiar with Unix utilities it is obvious that they should 
rerun the command with _--help_. However, we can simply print the usage 
instead to avoid guesswork.


> 'ozone oz' should print usage when command or sub-command is missing
> 
>
> Key: HDDS-438
> URL: https://issues.apache.org/jira/browse/HDDS-438
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Arpit Agarwal
>Priority: Major
>  Labels: newbie
>
> When invoked without the command or sub-command, _ozone oz_ prints the 
> following error:
> {code:java}
> $ ozone oz
> Please select a subcommand
> {code}
> and
> {code:java}
> $ ozone oz volume
> Please select a subcommand
> {code}
> For most users familiar with Unix utilities it is obvious that they should 
> rerun the command with _--help_. However, we can simply print the usage 
> instead to avoid guesswork.






[jira] [Resolved] (HDDS-437) Disable native-hadoop library warning

2018-09-12 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HDDS-437.

Resolution: Duplicate

Thanks [~anu]. Resolved as dup.

> Disable native-hadoop library warning
> -
>
> Key: HDDS-437
> URL: https://issues.apache.org/jira/browse/HDDS-437
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Arpit Agarwal
>Priority: Major
>  Labels: newbie
>
> The following warning is always printed when running Ozone commands without 
> native library.
> {code}
> 2018-09-12 16:10:32,438 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> {code}
> AFAIK no functionality is broken without native library support, and the log 
> message is noise. Let's disable it for now; we can re-enable it if we add a 
> native library dependency.






[jira] [Created] (HDDS-438) 'ozone oz' should print usage when command or sub-command is missing

2018-09-12 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDDS-438:
--

 Summary: 'ozone oz' should print usage when command or sub-command 
is missing
 Key: HDDS-438
 URL: https://issues.apache.org/jira/browse/HDDS-438
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: Tools
Reporter: Arpit Agarwal


When invoked without the command or sub-command, _ozone oz_ prints the 
following error:
{code:java}
$ ozone oz
Picked up _JAVA_OPTIONS: -Djava.awt.headless=true
Please select a subcommand
{code}
and
{code:java}
$ ozone oz volume
Picked up _JAVA_OPTIONS: -Djava.awt.headless=true
Please select a subcommand
{code}
For most users familiar with Unix utilities it is obvious that they should 
rerun the command with _--help_. However, we can simply print the usage 
instead to avoid guesswork.






[jira] [Commented] (HDFS-13833) Failed to choose from local rack (location = /default); the second replica is not found, retry choosing ramdomly

2018-09-12 Thread Kitti Nanasi (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612849#comment-16612849
 ] 

Kitti Nanasi commented on HDFS-13833:
-

[~shwetayakkali], you're right, I didn't realise that they are in different 
packages.

> Failed to choose from local rack (location = /default); the second replica is 
> not found, retry choosing ramdomly
> 
>
> Key: HDFS-13833
> URL: https://issues.apache.org/jira/browse/HDFS-13833
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Henrique Barros
>Assignee: Shweta
>Priority: Critical
> Attachments: HDFS-13833.001.patch
>
>
> I'm having a random problem with block replication on Hadoop 
> 2.6.0-cdh5.15.0, with Cloudera CDH-5.15.0-1.cdh5.15.0.p0.21.
>  
> In my case we are getting this error very randomly (after some hours) and 
> with only one Datanode (for now; we are trying this Cloudera cluster for a 
> POC).
> Here is the log.
> {code:java}
> Choosing random from 1 available nodes on node /default, scope=/default, 
> excludedScope=null, excludeNodes=[]
> 2:38:20.527 PMDEBUG   NetworkTopology 
> Choosing random from 0 available nodes on node /default, scope=/default, 
> excludedScope=null, excludeNodes=[192.168.220.53:50010]
> 2:38:20.527 PMDEBUG   NetworkTopology 
> chooseRandom returning null
> 2:38:20.527 PMDEBUG   BlockPlacementPolicy
> [
> Node /default/192.168.220.53:50010 [
>   Datanode 192.168.220.53:50010 is not chosen since the node is too busy 
> (load: 8 > 0.0).
> 2:38:20.527 PMDEBUG   NetworkTopology 
> chooseRandom returning 192.168.220.53:50010
> 2:38:20.527 PMINFOBlockPlacementPolicy
> Not enough replicas was chosen. Reason:{NODE_TOO_BUSY=1}
> 2:38:20.527 PMDEBUG   StateChange 
> closeFile: 
> /mobi.me/development/apps/flink/checkpoints/a5a6806866c1640660924ea1453cbe34/chk-2118/eef8bff6-75a9-43c1-ae93-4b1a9ca31ad9
>  with 1 blocks is persisted to the file system
> 2:38:20.527 PMDEBUG   StateChange 
> *BLOCK* NameNode.addBlock: file 
> /mobi.me/development/apps/flink/checkpoints/a5a6806866c1640660924ea1453cbe34/chk-2118/1cfe900d-6f45-4b55-baaa-73c02ace2660
>  fileId=129628869 for DFSClient_NONMAPREDUCE_467616914_65
> 2:38:20.527 PMDEBUG   BlockPlacementPolicy
> Failed to choose from local rack (location = /default); the second replica is 
> not found, retry choosing ramdomly
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy$NotEnoughReplicasException:
>  
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:784)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:694)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseLocalRack(BlockPlacementPolicyDefault.java:601)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseLocalStorage(BlockPlacementPolicyDefault.java:561)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTargetInOrder(BlockPlacementPolicyDefault.java:464)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:395)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:270)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:142)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:158)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1715)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3505)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:694)
>   at 
> org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:219)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:507)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
>   at 

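The exclusion in the log above ("is not chosen since the node is too busy (load: 8 > 0.0)") comes from the block placement policy's load check. Roughly sketched (simplified and with invented names; the real logic lives in BlockPlacementPolicyDefault and uses per-node xceiver counts and cluster statistics):

```java
public final class LoadCheck {
    /**
     * A node is considered "too busy" when its active transceiver count
     * exceeds twice the cluster-wide average load. With a single datanode
     * whose stats have not yet been aggregated, the average can be 0.0,
     * so any node with load > 0 is rejected -- matching "load: 8 > 0.0"
     * in the log above.
     */
    static boolean isTooBusy(int nodeLoad, double clusterAvgLoad) {
        double maxLoad = 2.0 * clusterAvgLoad;
        return nodeLoad > maxLoad;
    }

    public static void main(String[] args) {
        // The situation from the log: load 8 against a 0.0 average.
        System.out.println(isTooBusy(8, 0.0));
    }
}
```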
[jira] [Commented] (HDDS-437) Disable native-hadoop library warning

2018-09-12 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612848#comment-16612848
 ] 

Anu Engineer commented on HDDS-437:
---

https://issues.apache.org/jira/browse/HDDS-423 might fix that.

> Disable native-hadoop library warning
> -
>
> Key: HDDS-437
> URL: https://issues.apache.org/jira/browse/HDDS-437
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Arpit Agarwal
>Priority: Major
>  Labels: newbie
>
> The following warning is always printed when running Ozone commands without 
> native library.
> {code}
> 2018-09-12 16:10:32,438 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> {code}
> AFAIK no functionality is broken without native library support, and the log 
> message is noise. Let's disable it for now; we can re-enable it if we add a 
> native library dependency.






[jira] [Created] (HDDS-437) Disable native-hadoop library warning

2018-09-12 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDDS-437:
--

 Summary: Disable native-hadoop library warning
 Key: HDDS-437
 URL: https://issues.apache.org/jira/browse/HDDS-437
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: Tools
Reporter: Arpit Agarwal


The following warning is always printed when running Ozone commands without 
native library.
{code}
2018-09-12 16:10:32,438 WARN util.NativeCodeLoader: Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable
{code}

AFAIK no functionality is broken without native library support, and the log 
message is noise. Let's disable it for now; we can re-enable it if we add a 
native library dependency.
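Pending a code change, the warning can typically be silenced through the log4j configuration that ships with the distribution (assuming the standard log4j.properties format is in use):

```properties
# Raise the NativeCodeLoader threshold so the "Unable to load
# native-hadoop library" WARN line is no longer emitted.
log4j.logger.org.apache.hadoop.util.NativeCodeLoader=ERROR
```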






[jira] [Commented] (HDDS-434) Provide an s3 compatible REST api for ozone objects

2018-09-12 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612843#comment-16612843
 ] 

Anu Engineer commented on HDDS-434:
---

[~elek] Thanks for filing this JIRA. I will file some work items under this 
umbrella JIRA; it is a good place for new contributors to start, since this is 
a clear, well-defined and very important feature for Ozone. 
cc: [~GeLiXin], [~candychencan], [~dchitlangia], [~Sandeep Nemuri]. I will file 
JIRAs to break down this work clearly and post a design document too. 
[~gabor.bota] I see that you are watching this JIRA; if you would like to work 
on this feature, please feel free to ping me or grab work items.

> Provide an s3 compatible REST api for ozone objects
> ---
>
> Key: HDDS-434
> URL: https://issues.apache.org/jira/browse/HDDS-434
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>
> The S3 REST API is the de facto standard for object stores, and many 
> external tools already support it.
> This issue is about creating a new s3gateway component which implements 
> (most of) the S3 API using the internal RPC calls.
> Some parts of the implementation are straightforward: we need a new service 
> with the usual REST stack, and we need to implement the most common 
> GET/POST/PUT calls. Others (authorization, multi-part upload) are trickier.
> I suggest an evaluation first: we can implement a skeleton service which 
> supports read-only requests without authorization, and define a proper 
> specification for the upload and authorization parts during the work.
> For now the gateway service could be a new standalone application (e.g. 
> ozone s3g start); later we can modify it to work as a DatanodePlugin, 
> similar to the existing object store plugin. 
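A read-only skeleton of the kind proposed could start as small as the sketch below (purely illustrative: the class name, port, and XML payload are invented here, and a real gateway would follow the documented S3 REST semantics rather than this toy handler):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public final class S3GatewaySkeleton {

    /** Starts a server answering GET with an empty bucket listing. */
    public static HttpServer start(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/", exchange -> {
            byte[] body;
            int status;
            if ("GET".equals(exchange.getRequestMethod())) {
                // Minimal ListAllMyBuckets-style response with no buckets.
                body = ("<?xml version=\"1.0\"?>"
                      + "<ListAllMyBucketsResult><Buckets/>"
                      + "</ListAllMyBucketsResult>")
                      .getBytes(StandardCharsets.UTF_8);
                status = 200;
            } else {
                // Write paths are out of scope for the skeleton.
                body = new byte[0];
                status = 501;
            }
            exchange.sendResponseHeaders(status,
                body.length == 0 ? -1 : body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }

    public static void main(String[] args) throws Exception {
        start(9878);
        System.out.println("s3 gateway skeleton listening on 9878");
    }
}
```

The authorization and multi-part-upload pieces mentioned above would then be layered onto this handler as the specification firms up.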






[jira] [Commented] (HDFS-13833) Failed to choose from local rack (location = /default); the second replica is not found, retry choosing ramdomly

2018-09-12 Thread Shweta (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612838#comment-16612838
 ] 

Shweta commented on HDFS-13833:
---

Thanks for reviewing my patch [~knanasi] and providing suggestions. 
IMO, since the test is in a different package, the method is best left public, 
as we have extracted the code to test a corner case. I agree that with such 
limited usage it should be package-private, but that would require a new test 
file in the block management package that contains the BPPDefault class.
I will post a patch addressing your other comments.

> Failed to choose from local rack (location = /default); the second replica is 
> not found, retry choosing ramdomly
> 
>
> Key: HDFS-13833
> URL: https://issues.apache.org/jira/browse/HDFS-13833
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Henrique Barros
>Assignee: Shweta
>Priority: Critical
> Attachments: HDFS-13833.001.patch
>
>
> I'm having a random problem with block replication on Hadoop 
> 2.6.0-cdh5.15.0, with Cloudera CDH-5.15.0-1.cdh5.15.0.p0.21.
>  
> In my case we are getting this error very randomly (after some hours) and 
> with only one Datanode (for now; we are trying this Cloudera cluster for a 
> POC).
> Here is the log.
> {code:java}
> Choosing random from 1 available nodes on node /default, scope=/default, 
> excludedScope=null, excludeNodes=[]
> 2:38:20.527 PMDEBUG   NetworkTopology 
> Choosing random from 0 available nodes on node /default, scope=/default, 
> excludedScope=null, excludeNodes=[192.168.220.53:50010]
> 2:38:20.527 PMDEBUG   NetworkTopology 
> chooseRandom returning null
> 2:38:20.527 PMDEBUG   BlockPlacementPolicy
> [
> Node /default/192.168.220.53:50010 [
>   Datanode 192.168.220.53:50010 is not chosen since the node is too busy 
> (load: 8 > 0.0).
> 2:38:20.527 PMDEBUG   NetworkTopology 
> chooseRandom returning 192.168.220.53:50010
> 2:38:20.527 PMINFOBlockPlacementPolicy
> Not enough replicas was chosen. Reason:{NODE_TOO_BUSY=1}
> 2:38:20.527 PMDEBUG   StateChange 
> closeFile: 
> /mobi.me/development/apps/flink/checkpoints/a5a6806866c1640660924ea1453cbe34/chk-2118/eef8bff6-75a9-43c1-ae93-4b1a9ca31ad9
>  with 1 blocks is persisted to the file system
> 2:38:20.527 PMDEBUG   StateChange 
> *BLOCK* NameNode.addBlock: file 
> /mobi.me/development/apps/flink/checkpoints/a5a6806866c1640660924ea1453cbe34/chk-2118/1cfe900d-6f45-4b55-baaa-73c02ace2660
>  fileId=129628869 for DFSClient_NONMAPREDUCE_467616914_65
> 2:38:20.527 PMDEBUG   BlockPlacementPolicy
> Failed to choose from local rack (location = /default); the second replica is 
> not found, retry choosing ramdomly
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy$NotEnoughReplicasException:
>  
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:784)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:694)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseLocalRack(BlockPlacementPolicyDefault.java:601)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseLocalStorage(BlockPlacementPolicyDefault.java:561)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTargetInOrder(BlockPlacementPolicyDefault.java:464)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:395)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:270)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:142)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:158)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1715)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3505)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:694)
>   at 
> org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:219)
>   at 
> 

[jira] [Commented] (HDDS-233) Update ozone to latest ratis snapshot build

2018-09-12 Thread Tsz Wo Nicholas Sze (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612837#comment-16612837
 ] 

Tsz Wo Nicholas Sze commented on HDDS-233:
--

+1 the 03 patch looks good.

> Update ozone to latest ratis snapshot build
> ---
>
> Key: HDDS-233
> URL: https://issues.apache.org/jira/browse/HDDS-233
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.2.1
>Reporter: Mukul Kumar Singh
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-233.01.patch, HDDS-233.02.patch, HDDS-233.03.patch, 
> HDDS-233_20180911.patch
>
>
> This jira proposes to update Ozone to the latest Ratis snapshot build. It will 
> also add configuration to set the append entry timeout and to control the 
> number of entries in the retry cache.






[jira] [Commented] (HDDS-362) Modify functions impacted by SCM chill mode in ScmBlockLocationProtocol

2018-09-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612820#comment-16612820
 ] 

Hadoop QA commented on HDDS-362:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
53s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
36s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 36s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 20s{color} | {color:orange} hadoop-hdds: The patch generated 3 new + 0 
unchanged - 0 fixed = 3 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
58s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
38s{color} | {color:green} server-scm in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m 47s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-362 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12939472/HDDS-362.02.patch |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  javadoc  
mvninstall  shadedclient  findbugs  checkstyle  |
| uname | Linux 6d93c5e2e569 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c18eb97 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |

[jira] [Updated] (HDFS-13566) Add configurable additional RPC listener to NameNode

2018-09-12 Thread Chen Liang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-13566:
--
Attachment: HDFS-13566.005.patch

> Add configurable additional RPC listener to NameNode
> 
>
> Key: HDFS-13566
> URL: https://issues.apache.org/jira/browse/HDFS-13566
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ipc
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-13566.001.patch, HDFS-13566.002.patch, 
> HDFS-13566.003.patch, HDFS-13566.004.patch, HDFS-13566.005.patch
>
>
> This Jira aims to add the capability for the NameNode to run additional 
> listener(s), so that the NameNode can be accessed from multiple ports. 
> Fundamentally, this Jira tries to extend ipc.Server to allow it to be configured 
> with more listeners, binding to different ports but sharing the same call queue 
> and handlers. This is useful when different clients are only allowed to access 
> certain ports. Combined with HDFS-13547, this also allows different 
> ports to have different SASL security levels. 
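The listener/call-queue design described above can be sketched as a toy illustration (this is not the actual ipc.Server code; class and field names are invented): several listener threads, each standing in for a separate bound port, feed calls into one shared queue that a common handler pool drains.

```java
import java.util.concurrent.*;

// Toy sketch: multiple "port" listeners enqueue into one shared call queue,
// drained by a shared handler pool, mirroring the design described above.
public class MultiListenerSketch {
    static final BlockingQueue<String> callQueue = new LinkedBlockingQueue<>();
    static final CopyOnWriteArrayList<String> handled = new CopyOnWriteArrayList<>();

    // One listener per conceptual port; all feed the same queue.
    static Thread listener(String name, int calls) {
        return new Thread(() -> {
            for (int i = 0; i < calls; i++) {
                callQueue.add(name + "-call-" + i);
            }
        });
    }

    public static void main(String[] args) throws Exception {
        ExecutorService handlers = Executors.newFixedThreadPool(2); // shared handlers
        for (int i = 0; i < 2; i++) {
            handlers.submit(() -> {
                try {
                    while (true) {
                        // Stop once the queue stays empty briefly.
                        String call = callQueue.poll(200, TimeUnit.MILLISECONDS);
                        if (call == null) break;
                        handled.add(call);
                    }
                } catch (InterruptedException ignored) { }
            });
        }
        Thread a = listener("port9000", 3), b = listener("port9001", 3);
        a.start(); b.start(); a.join(); b.join();
        handlers.shutdown();
        handlers.awaitTermination(2, TimeUnit.SECONDS);
        System.out.println(handled.size()); // all 6 calls handled by the shared pool
    }
}
```

The point of the sketch is that the handlers never know which "port" a call arrived on, which is what lets the real change add ports without adding handler capacity.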






[jira] [Commented] (HDDS-395) TestOzoneRestWithMiniCluster fails with "Unable to read ROCKDB config"

2018-09-12 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612808#comment-16612808
 ] 

Dinesh Chitlangia commented on HDDS-395:


[~anu] thanks for the commit

> TestOzoneRestWithMiniCluster fails with "Unable to read ROCKDB config"
> --
>
> Key: HDDS-395
> URL: https://issues.apache.org/jira/browse/HDDS-395
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Mukul Kumar Singh
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.2.1
>
> Attachments: HDDS-395.001.patch, HDDS-395.002.patch
>
>
> Ozone datanode initialization is failing with the following exception.
> This was noted in the following precommit build result.
> https://builds.apache.org/job/PreCommit-HDDS-Build/935/testReport/org.apache.hadoop.ozone.web/TestOzoneRestWithMiniCluster/org_apache_hadoop_ozone_web_TestOzoneRestWithMiniCluster_2/
> {code}
> 2018-09-02 20:56:33,501 INFO  db.DBStoreBuilder 
> (DBStoreBuilder.java:getDbProfile(176)) - Unable to read ROCKDB config
> java.io.IOException: Unable to find the configuration directory. Please make 
> sure that HADOOP_CONF_DIR is setup correctly 
>   at 
> org.apache.hadoop.utils.db.DBConfigFromFile.getConfigLocation(DBConfigFromFile.java:62)
>   at 
> org.apache.hadoop.utils.db.DBConfigFromFile.readFromFile(DBConfigFromFile.java:118)
>   at 
> org.apache.hadoop.utils.db.DBStoreBuilder.getDbProfile(DBStoreBuilder.java:170)
>   at 
> org.apache.hadoop.utils.db.DBStoreBuilder.build(DBStoreBuilder.java:122)
>   at 
> org.apache.hadoop.ozone.om.OmMetadataManagerImpl.(OmMetadataManagerImpl.java:133)
>   at org.apache.hadoop.ozone.om.OzoneManager.(OzoneManager.java:146)
>   at 
> org.apache.hadoop.ozone.om.OzoneManager.createOm(OzoneManager.java:295)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.createOM(MiniOzoneClusterImpl.java:357)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.build(MiniOzoneClusterImpl.java:304)
>   at 
> org.apache.hadoop.ozone.web.TestOzoneRestWithMiniCluster.init(TestOzoneRestWithMiniCluster.java:73)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413)
> {code}






[jira] [Commented] (HDDS-395) TestOzoneRestWithMiniCluster fails with "Unable to read ROCKDB config"

2018-09-12 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612805#comment-16612805
 ] 

Hudson commented on HDDS-395:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14938 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14938/])
HDDS-395. TestOzoneRestWithMiniCluster fails with "Unable to read ROCKDB 
(aengineer: rev c18eb9780163f8995c21b7d1b7b2b04140e4bc0a)
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/DBConfigFromFile.java


> TestOzoneRestWithMiniCluster fails with "Unable to read ROCKDB config"
> --
>






[jira] [Commented] (HDDS-233) Update ozone to latest ratis snapshot build

2018-09-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612802#comment-16612802
 ] 

Hadoop QA commented on HDDS-233:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  9m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
29s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m 47s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
38s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 20m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  5s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
24s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
14s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
56s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
34s{color} | {color:green} client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  6m 10s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
44s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} 

[jira] [Assigned] (HDDS-394) Rename *Key Apis in DatanodeContainerProtocol to *Block apis

2018-09-12 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia reassigned HDDS-394:
--

Assignee: Dinesh Chitlangia

> Rename *Key Apis in DatanodeContainerProtocol to *Block apis
> 
>
> Key: HDDS-394
> URL: https://issues.apache.org/jira/browse/HDDS-394
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Mukul Kumar Singh
>Assignee: Dinesh Chitlangia
>Priority: Major
>
> All the block APIs in the client-datanode interaction are named *Key APIs 
> (e.g. PutKey). These can be renamed to *Block APIs (e.g. PutBlock).






[jira] [Commented] (HDDS-362) Modify functions impacted by SCM chill mode in ScmBlockLocationProtocol

2018-09-12 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612795#comment-16612795
 ] 

Ajay Kumar commented on HDDS-362:
-

Rebased the patch.

> Modify functions impacted by SCM chill mode in ScmBlockLocationProtocol
> ---
>
> Key: HDDS-362
> URL: https://issues.apache.org/jira/browse/HDDS-362
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-362.00.patch, HDDS-362.01.patch, HDDS-362.02.patch
>
>
> [HDDS-351] adds a chill mode state to SCM. When SCM is in chill mode, certain 
> operations will be restricted for end users. This jira intends to modify the 
> functions impacted by SCM chill mode in ScmBlockLocationProtocol.
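The restriction described above can be illustrated with a toy guard (the class and method names here are hypothetical, not the actual SCM or ScmBlockLocationProtocol code): restricted calls check the chill-mode flag and refuse service until SCM exits chill mode.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch of a chill-mode precondition check. Names are
// illustrative only; the real SCM logic lives elsewhere.
public class ChillModeGuard {
    private final AtomicBoolean inChillMode = new AtomicBoolean(true);

    public void exitChillMode() { inChillMode.set(false); }

    public boolean isInChillMode() { return inChillMode.get(); }

    // A restricted operation: refused while SCM is still in chill mode.
    public String allocateBlock(long size) {
        if (inChillMode.get()) {
            throw new IllegalStateException(
                "SCM is in chill mode, block allocation is restricted");
        }
        return "block-of-" + size;
    }

    public static void main(String[] args) {
        ChillModeGuard scm = new ChillModeGuard();
        try {
            scm.allocateBlock(64);          // rejected: still in chill mode
        } catch (IllegalStateException e) {
            System.out.println("rejected: " + e.getMessage());
        }
        scm.exitChillMode();
        System.out.println(scm.allocateBlock(64)); // block-of-64
    }
}
```

A usage note: centralizing the check in one guard keeps every protocol method's behavior consistent while SCM is starting up.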






[jira] [Commented] (HDDS-407) ozone logs are written to ozone.log. instead of ozone.log

2018-09-12 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612791#comment-16612791
 ] 

Dinesh Chitlangia commented on HDDS-407:


[~nilotpalnandi] - I tried replicating this problem but have not yet been 
successful.

Can you share the repro steps if you can still replicate the problem?

 

I started Ozone locally today and shut it down.

I will start it again tomorrow to trigger the log file rollover and will let 
you know if I can reproduce the issue.

> ozone logs are written to ozone.log. instead of ozone.log
> ---
>
> Key: HDDS-407
> URL: https://issues.apache.org/jira/browse/HDDS-407
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Nilotpal Nandi
>Assignee: Dinesh Chitlangia
>Priority: Major
> Fix For: 0.3.0
>
>
> Please refer to the details below. 
> Ozone-related logs are written to ozone.log.2018-09-05 instead of ozone.log. 
> Also, please check the timestamps of the logs. The cluster was created 
> {noformat}
> [root@ctr-e138-1518143905142-459606-01-02 logs]# ls -lhart 
> /root/hadoop_trunk/ozone-0.2.1-SNAPSHOT/logs/
> total 968K
> drwxr-xr-x 9 root root 4.0K Sep 5 10:04 ..
> -rw-r--r-- 1 root root 0 Sep 5 10:04 fairscheduler-statedump.log
> -rw-r--r-- 1 root root 17K Sep 5 10:05 
> hadoop-root-om-ctr-e138-1518143905142-459606-01-02.hwx.site.out.1
> -rw-r--r-- 1 root root 16K Sep 5 10:10 
> hadoop-root-om-ctr-e138-1518143905142-459606-01-02.hwx.site.out
> -rw-r--r-- 1 root root 11K Sep 5 10:10 
> hadoop-root-om-ctr-e138-1518143905142-459606-01-02.hwx.site.log
> -rw-r--r-- 1 root root 17K Sep 6 05:42 
> hadoop-root-datanode-ctr-e138-1518143905142-459606-01-02.hwx.site.out
> -rw-r--r-- 1 root root 2.1K Sep 6 13:20 ozone.log
> -rw-r--r-- 1 root root 67K Sep 6 13:22 
> hadoop-root-datanode-ctr-e138-1518143905142-459606-01-02.hwx.site.log
> drwxr-xr-x 2 root root 4.0K Sep 6 13:31 .
> -rw-r--r-- 1 root root 811K Sep 6 13:39 ozone.log.2018-09-05
> [root@ctr-e138-1518143905142-459606-01-02 logs]# date
> Thu Sep 6 13:39:47 UTC 2018{noformat}
>  
> tail of ozone.log
> {noformat}
> [root@ctr-e138-1518143905142-459606-01-02 logs]# tail -f ozone.log
> 2018-09-06 10:51:56,616 [IPC Server handler 13 on 9889] DEBUG 
> (KeyManagerImpl.java:255) - Key 0file allocated in volume test-vol2 bucket 
> test-bucket2
> 2018-09-06 10:52:18,570 [IPC Server handler 9 on 9889] DEBUG 
> (KeyManagerImpl.java:255) - Key 0file1 allocated in volume test-vol2 bucket 
> test-bucket2
> 2018-09-06 10:52:32,256 [IPC Server handler 12 on 9889] DEBUG 
> (KeyManagerImpl.java:255) - Key 0file2 allocated in volume test-vol2 bucket 
> test-bucket2
> 2018-09-06 10:53:11,008 [IPC Server handler 14 on 9889] DEBUG 
> (KeyManagerImpl.java:255) - Key 0file2 allocated in volume test-vol2 bucket 
> test-bucket2
> 2018-09-06 10:53:28,316 [IPC Server handler 10 on 9889] DEBUG 
> (KeyManagerImpl.java:255) - Key 0file2 allocated in volume test-vol2 bucket 
> test-bucket2
> 2018-09-06 10:53:39,509 [IPC Server handler 17 on 9889] DEBUG 
> (KeyManagerImpl.java:255) - Key 0file3 allocated in volume test-vol2 bucket 
> test-bucket2
> 2018-09-06 11:31:02,388 [IPC Server handler 19 on 9889] DEBUG 
> (KeyManagerImpl.java:255) - Key 2GBFILE allocated in volume test-vol2 bucket 
> test-bucket2
> 2018-09-06 11:32:44,269 [IPC Server handler 12 on 9889] DEBUG 
> (KeyManagerImpl.java:255) - Key 2GBFILE_1 allocated in volume test-vol2 
> bucket test-bucket2
> 2018-09-06 13:17:33,408 [IPC Server handler 16 on 9889] DEBUG 
> (KeyManagerImpl.java:255) - Key FILEWITHZEROS allocated in volume test-vol2 
> bucket test-bucket2
> 2018-09-06 13:20:13,897 [IPC Server handler 15 on 9889] DEBUG 
> (KeyManagerImpl.java:255) - Key FILEWITHZEROS1 allocated in volume test-vol2 
> bucket test-bucket2{noformat}
>  
> tail of ozone.log.2018-09-05:
> {noformat}
> root@ctr-e138-1518143905142-459606-01-02 logs]# tail -50 
> ozone.log.2018-09-05
> 2018-09-06 13:28:57,866 [BlockDeletingService#8] DEBUG 
> (TopNOrderedContainerDeletionChoosingPolicy.java:79) - Stop looking for next 
> container, there is no pending deletion block contained in remaining 
> containers.
> 2018-09-06 13:29:07,816 [Datanode State Machine Thread - 0] DEBUG 
> (DatanodeStateMachine.java:145) - Executing cycle Number : 3266
> 2018-09-06 13:29:13,687 [Datanode ReportManager Thread - 0] DEBUG 
> (ContainerSet.java:191) - Starting container report iteration.
> 2018-09-06 13:29:37,816 [Datanode State Machine Thread - 0] DEBUG 
> (DatanodeStateMachine.java:145) - Executing cycle Number : 3267
> 2018-09-06 13:29:57,866 [BlockDeletingService#8] DEBUG 
> (TopNOrderedContainerDeletionChoosingPolicy.java:79) - Stop looking for next 
> container, there is no pending 

[jira] [Assigned] (HDDS-407) ozone logs are written to ozone.log. instead of ozone.log

2018-09-12 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia reassigned HDDS-407:
--

Assignee: Dinesh Chitlangia

> ozone logs are written to ozone.log. instead of ozone.log
> ---
>
> Key: HDDS-407
> URL: https://issues.apache.org/jira/browse/HDDS-407
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Nilotpal Nandi
>Assignee: Dinesh Chitlangia
>Priority: Major
> Fix For: 0.3.0
>
>
> Please refer to the details below.
> Ozone-related logs are written to ozone.log.2018-09-05 instead of ozone.log. 
> Also, please check the timestamps of the logs. The cluster was created 
> {noformat}
> [root@ctr-e138-1518143905142-459606-01-02 logs]# ls -lhart 
> /root/hadoop_trunk/ozone-0.2.1-SNAPSHOT/logs/
> total 968K
> drwxr-xr-x 9 root root 4.0K Sep 5 10:04 ..
> -rw-r--r-- 1 root root 0 Sep 5 10:04 fairscheduler-statedump.log
> -rw-r--r-- 1 root root 17K Sep 5 10:05 
> hadoop-root-om-ctr-e138-1518143905142-459606-01-02.hwx.site.out.1
> -rw-r--r-- 1 root root 16K Sep 5 10:10 
> hadoop-root-om-ctr-e138-1518143905142-459606-01-02.hwx.site.out
> -rw-r--r-- 1 root root 11K Sep 5 10:10 
> hadoop-root-om-ctr-e138-1518143905142-459606-01-02.hwx.site.log
> -rw-r--r-- 1 root root 17K Sep 6 05:42 
> hadoop-root-datanode-ctr-e138-1518143905142-459606-01-02.hwx.site.out
> -rw-r--r-- 1 root root 2.1K Sep 6 13:20 ozone.log
> -rw-r--r-- 1 root root 67K Sep 6 13:22 
> hadoop-root-datanode-ctr-e138-1518143905142-459606-01-02.hwx.site.log
> drwxr-xr-x 2 root root 4.0K Sep 6 13:31 .
> -rw-r--r-- 1 root root 811K Sep 6 13:39 ozone.log.2018-09-05
> [root@ctr-e138-1518143905142-459606-01-02 logs]# date
> Thu Sep 6 13:39:47 UTC 2018{noformat}
>  
> tail of ozone.log
> {noformat}
> [root@ctr-e138-1518143905142-459606-01-02 logs]# tail -f ozone.log
> 2018-09-06 10:51:56,616 [IPC Server handler 13 on 9889] DEBUG 
> (KeyManagerImpl.java:255) - Key 0file allocated in volume test-vol2 bucket 
> test-bucket2
> 2018-09-06 10:52:18,570 [IPC Server handler 9 on 9889] DEBUG 
> (KeyManagerImpl.java:255) - Key 0file1 allocated in volume test-vol2 bucket 
> test-bucket2
> 2018-09-06 10:52:32,256 [IPC Server handler 12 on 9889] DEBUG 
> (KeyManagerImpl.java:255) - Key 0file2 allocated in volume test-vol2 bucket 
> test-bucket2
> 2018-09-06 10:53:11,008 [IPC Server handler 14 on 9889] DEBUG 
> (KeyManagerImpl.java:255) - Key 0file2 allocated in volume test-vol2 bucket 
> test-bucket2
> 2018-09-06 10:53:28,316 [IPC Server handler 10 on 9889] DEBUG 
> (KeyManagerImpl.java:255) - Key 0file2 allocated in volume test-vol2 bucket 
> test-bucket2
> 2018-09-06 10:53:39,509 [IPC Server handler 17 on 9889] DEBUG 
> (KeyManagerImpl.java:255) - Key 0file3 allocated in volume test-vol2 bucket 
> test-bucket2
> 2018-09-06 11:31:02,388 [IPC Server handler 19 on 9889] DEBUG 
> (KeyManagerImpl.java:255) - Key 2GBFILE allocated in volume test-vol2 bucket 
> test-bucket2
> 2018-09-06 11:32:44,269 [IPC Server handler 12 on 9889] DEBUG 
> (KeyManagerImpl.java:255) - Key 2GBFILE_1 allocated in volume test-vol2 
> bucket test-bucket2
> 2018-09-06 13:17:33,408 [IPC Server handler 16 on 9889] DEBUG 
> (KeyManagerImpl.java:255) - Key FILEWITHZEROS allocated in volume test-vol2 
> bucket test-bucket2
> 2018-09-06 13:20:13,897 [IPC Server handler 15 on 9889] DEBUG 
> (KeyManagerImpl.java:255) - Key FILEWITHZEROS1 allocated in volume test-vol2 
> bucket test-bucket2{noformat}
>  
> tail of ozone.log.2018-09-05:
> {noformat}
> [root@ctr-e138-1518143905142-459606-01-02 logs]# tail -50 
> ozone.log.2018-09-05
> 2018-09-06 13:28:57,866 [BlockDeletingService#8] DEBUG 
> (TopNOrderedContainerDeletionChoosingPolicy.java:79) - Stop looking for next 
> container, there is no pending deletion block contained in remaining 
> containers.
> 2018-09-06 13:29:07,816 [Datanode State Machine Thread - 0] DEBUG 
> (DatanodeStateMachine.java:145) - Executing cycle Number : 3266
> 2018-09-06 13:29:13,687 [Datanode ReportManager Thread - 0] DEBUG 
> (ContainerSet.java:191) - Starting container report iteration.
> 2018-09-06 13:29:37,816 [Datanode State Machine Thread - 0] DEBUG 
> (DatanodeStateMachine.java:145) - Executing cycle Number : 3267
> 2018-09-06 13:29:57,866 [BlockDeletingService#8] DEBUG 
> (TopNOrderedContainerDeletionChoosingPolicy.java:79) - Stop looking for next 
> container, there is no pending deletion block contained in remaining 
> containers.
> 2018-09-06 13:30:07,816 [Datanode State Machine Thread - 0] DEBUG 
> (DatanodeStateMachine.java:145) - Executing cycle Number : 3268
> 2018-09-06 13:30:19,186 [Datanode ReportManager Thread - 0] DEBUG 
> (ContainerSet.java:191) - Starting container report iteration.
> 2018-09-06 
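The symptom above — current entries landing in ozone.log.2018-09-05 while ozone.log stays small — is characteristic of a log4j DailyRollingFileAppender whose file name or date pattern is misconfigured. The sketch below shows the intended rollover behavior; the appender name `OZONE` and the property values are assumptions modeled on stock Hadoop log4j.properties, not the shipped configuration:

```properties
# Sketch only; appender name and values are illustrative assumptions.
# Current log lines should go to ozone.log; at midnight the file is renamed
# to ozone.log.yyyy-MM-dd and a fresh ozone.log is started.
log4j.appender.OZONE=org.apache.log4j.DailyRollingFileAppender
log4j.appender.OZONE.File=${hadoop.log.dir}/ozone.log
log4j.appender.OZONE.DatePattern='.'yyyy-MM-dd
log4j.appender.OZONE.layout=org.apache.log4j.PatternLayout
log4j.appender.OZONE.layout.ConversionPattern=%d{ISO8601} [%t] %-5p (%F:%L) - %m%n
```

If the `File` property (or a date pattern embedded in it) already contains the date, new entries are appended to the dated file directly, which would match the listing above.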

[jira] [Updated] (HDDS-407) ozone logs are written to ozone.log. instead of ozone.log

2018-09-12 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-407:
---
Summary: ozone logs are written to ozone.log. instead of ozone.log  
(was: ozone logs are wriiten to ozone.log. instead of ozone.log)

> ozone logs are written to ozone.log. instead of ozone.log
> ---
>
> Key: HDDS-407
> URL: https://issues.apache.org/jira/browse/HDDS-407
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Nilotpal Nandi
>Priority: Major
> Fix For: 0.3.0
>
>
> Please refer to the details below.
> Ozone-related logs are written to ozone.log.2018-09-05 instead of ozone.log. 
> Also, please check the timestamps of the logs. The cluster was created 
> {noformat}
> [root@ctr-e138-1518143905142-459606-01-02 logs]# ls -lhart 
> /root/hadoop_trunk/ozone-0.2.1-SNAPSHOT/logs/
> total 968K
> drwxr-xr-x 9 root root 4.0K Sep 5 10:04 ..
> -rw-r--r-- 1 root root 0 Sep 5 10:04 fairscheduler-statedump.log
> -rw-r--r-- 1 root root 17K Sep 5 10:05 
> hadoop-root-om-ctr-e138-1518143905142-459606-01-02.hwx.site.out.1
> -rw-r--r-- 1 root root 16K Sep 5 10:10 
> hadoop-root-om-ctr-e138-1518143905142-459606-01-02.hwx.site.out
> -rw-r--r-- 1 root root 11K Sep 5 10:10 
> hadoop-root-om-ctr-e138-1518143905142-459606-01-02.hwx.site.log
> -rw-r--r-- 1 root root 17K Sep 6 05:42 
> hadoop-root-datanode-ctr-e138-1518143905142-459606-01-02.hwx.site.out
> -rw-r--r-- 1 root root 2.1K Sep 6 13:20 ozone.log
> -rw-r--r-- 1 root root 67K Sep 6 13:22 
> hadoop-root-datanode-ctr-e138-1518143905142-459606-01-02.hwx.site.log
> drwxr-xr-x 2 root root 4.0K Sep 6 13:31 .
> -rw-r--r-- 1 root root 811K Sep 6 13:39 ozone.log.2018-09-05
> [root@ctr-e138-1518143905142-459606-01-02 logs]# date
> Thu Sep 6 13:39:47 UTC 2018{noformat}
>  
> tail of ozone.log
> {noformat}
> [root@ctr-e138-1518143905142-459606-01-02 logs]# tail -f ozone.log
> 2018-09-06 10:51:56,616 [IPC Server handler 13 on 9889] DEBUG 
> (KeyManagerImpl.java:255) - Key 0file allocated in volume test-vol2 bucket 
> test-bucket2
> 2018-09-06 10:52:18,570 [IPC Server handler 9 on 9889] DEBUG 
> (KeyManagerImpl.java:255) - Key 0file1 allocated in volume test-vol2 bucket 
> test-bucket2
> 2018-09-06 10:52:32,256 [IPC Server handler 12 on 9889] DEBUG 
> (KeyManagerImpl.java:255) - Key 0file2 allocated in volume test-vol2 bucket 
> test-bucket2
> 2018-09-06 10:53:11,008 [IPC Server handler 14 on 9889] DEBUG 
> (KeyManagerImpl.java:255) - Key 0file2 allocated in volume test-vol2 bucket 
> test-bucket2
> 2018-09-06 10:53:28,316 [IPC Server handler 10 on 9889] DEBUG 
> (KeyManagerImpl.java:255) - Key 0file2 allocated in volume test-vol2 bucket 
> test-bucket2
> 2018-09-06 10:53:39,509 [IPC Server handler 17 on 9889] DEBUG 
> (KeyManagerImpl.java:255) - Key 0file3 allocated in volume test-vol2 bucket 
> test-bucket2
> 2018-09-06 11:31:02,388 [IPC Server handler 19 on 9889] DEBUG 
> (KeyManagerImpl.java:255) - Key 2GBFILE allocated in volume test-vol2 bucket 
> test-bucket2
> 2018-09-06 11:32:44,269 [IPC Server handler 12 on 9889] DEBUG 
> (KeyManagerImpl.java:255) - Key 2GBFILE_1 allocated in volume test-vol2 
> bucket test-bucket2
> 2018-09-06 13:17:33,408 [IPC Server handler 16 on 9889] DEBUG 
> (KeyManagerImpl.java:255) - Key FILEWITHZEROS allocated in volume test-vol2 
> bucket test-bucket2
> 2018-09-06 13:20:13,897 [IPC Server handler 15 on 9889] DEBUG 
> (KeyManagerImpl.java:255) - Key FILEWITHZEROS1 allocated in volume test-vol2 
> bucket test-bucket2{noformat}
>  
> tail of ozone.log.2018-09-05:
> {noformat}
> [root@ctr-e138-1518143905142-459606-01-02 logs]# tail -50 
> ozone.log.2018-09-05
> 2018-09-06 13:28:57,866 [BlockDeletingService#8] DEBUG 
> (TopNOrderedContainerDeletionChoosingPolicy.java:79) - Stop looking for next 
> container, there is no pending deletion block contained in remaining 
> containers.
> 2018-09-06 13:29:07,816 [Datanode State Machine Thread - 0] DEBUG 
> (DatanodeStateMachine.java:145) - Executing cycle Number : 3266
> 2018-09-06 13:29:13,687 [Datanode ReportManager Thread - 0] DEBUG 
> (ContainerSet.java:191) - Starting container report iteration.
> 2018-09-06 13:29:37,816 [Datanode State Machine Thread - 0] DEBUG 
> (DatanodeStateMachine.java:145) - Executing cycle Number : 3267
> 2018-09-06 13:29:57,866 [BlockDeletingService#8] DEBUG 
> (TopNOrderedContainerDeletionChoosingPolicy.java:79) - Stop looking for next 
> container, there is no pending deletion block contained in remaining 
> containers.
> 2018-09-06 13:30:07,816 [Datanode State Machine Thread - 0] DEBUG 
> (DatanodeStateMachine.java:145) - Executing cycle Number : 3268
> 2018-09-06 13:30:19,186 [Datanode ReportManager Thread - 0] DEBUG 
> (ContainerSet.java:191) 

[jira] [Updated] (HDDS-395) TestOzoneRestWithMiniCluster fails with "Unable to read ROCKDB config"

2018-09-12 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-395:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

[~elek], [~sreevaddi] Thanks for the reviews, comments, and earlier patches. 
[~dineshchitlangia] Thanks for the new patch. I have committed this to trunk 
and ozone-2.0. This should clear some of the Jenkins failures we have been seeing.

> TestOzoneRestWithMiniCluster fails with "Unable to read ROCKDB config"
> --
>
> Key: HDDS-395
> URL: https://issues.apache.org/jira/browse/HDDS-395
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Mukul Kumar Singh
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.2.1
>
> Attachments: HDDS-395.001.patch, HDDS-395.002.patch
>
>
> Ozone datanode initialization is failing with the following exception.
> This was noted in the following precommit build result.
> https://builds.apache.org/job/PreCommit-HDDS-Build/935/testReport/org.apache.hadoop.ozone.web/TestOzoneRestWithMiniCluster/org_apache_hadoop_ozone_web_TestOzoneRestWithMiniCluster_2/
> {code}
> 2018-09-02 20:56:33,501 INFO  db.DBStoreBuilder 
> (DBStoreBuilder.java:getDbProfile(176)) - Unable to read ROCKDB config
> java.io.IOException: Unable to find the configuration directory. Please make 
> sure that HADOOP_CONF_DIR is setup correctly 
>   at 
> org.apache.hadoop.utils.db.DBConfigFromFile.getConfigLocation(DBConfigFromFile.java:62)
>   at 
> org.apache.hadoop.utils.db.DBConfigFromFile.readFromFile(DBConfigFromFile.java:118)
>   at 
> org.apache.hadoop.utils.db.DBStoreBuilder.getDbProfile(DBStoreBuilder.java:170)
>   at 
> org.apache.hadoop.utils.db.DBStoreBuilder.build(DBStoreBuilder.java:122)
>   at 
> org.apache.hadoop.ozone.om.OmMetadataManagerImpl.(OmMetadataManagerImpl.java:133)
>   at org.apache.hadoop.ozone.om.OzoneManager.(OzoneManager.java:146)
>   at 
> org.apache.hadoop.ozone.om.OzoneManager.createOm(OzoneManager.java:295)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.createOM(MiniOzoneClusterImpl.java:357)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.build(MiniOzoneClusterImpl.java:304)
>   at 
> org.apache.hadoop.ozone.web.TestOzoneRestWithMiniCluster.init(TestOzoneRestWithMiniCluster.java:73)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-362) Modify functions impacted by SCM chill mode in ScmBlockLocationProtocol

2018-09-12 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-362:

Attachment: HDDS-362.02.patch

> Modify functions impacted by SCM chill mode in ScmBlockLocationProtocol
> ---
>
> Key: HDDS-362
> URL: https://issues.apache.org/jira/browse/HDDS-362
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-362.00.patch, HDDS-362.01.patch, HDDS-362.02.patch
>
>
> [HDDS-351] adds a chill mode state to SCM. When SCM is in chill mode, certain 
> operations are restricted for end users. This jira intends to modify the 
> functions in ScmBlockLocationProtocol that are impacted by SCM chill mode.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-434) Provide an s3 compatible REST api for ozone objects

2018-09-12 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-434:
--
Target Version/s: 0.3.0
   Fix Version/s: (was: 0.3.0)

> Provide an s3 compatible REST api for ozone objects
> ---
>
> Key: HDDS-434
> URL: https://issues.apache.org/jira/browse/HDDS-434
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>
> The S3 REST API is the de facto standard for object stores, and many external 
> tools already support it.
> This issue is about creating a new s3gateway component which implements (most 
> of) the S3 API using the internal RPC calls.
> Some parts of the implementation are straightforward: we need a new service 
> with the usual REST stack, implementing the most common GET/POST/PUT calls. 
> Others (authorization, multi-part upload) are trickier.
> I suggest an incremental approach: first implement a skeleton service that 
> supports read-only requests without authorization, then define a proper 
> specification for the upload and authorization parts as the work progresses.
> For now the gateway service could be a new standalone application (e.g. ozone 
> s3g start); later we can modify it to run as a DatanodePlugin, similar to the 
> existing object store plugin. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-395) TestOzoneRestWithMiniCluster fails with "Unable to read ROCKDB config"

2018-09-12 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612736#comment-16612736
 ] 

Anu Engineer commented on HDDS-395:
---

I see a few failures in the acceptance test path, but I am pretty sure they are 
not due to this patch, so I am going to commit this shortly. +1
{code:java}

Acceptance.Ozonefs.Ozonesinglenode :: Ozonefs Single Node Test
==
Create volume and bucket  | PASS |
--
Check volume from ozonefs | FAIL |
1 != 0
--
Create directory from ozonefs | FAIL |
1 != 0
--
Test key handling | FAIL |
2 != 0
--
Acceptance.Ozonefs.Ozonesinglenode :: Ozonefs Single Node Test    | FAIL |
4 critical tests, 1 passed, 3 failed
4 tests total, 1 passed, 3 failed
==
Acceptance.Ozonefs    | FAIL |
7 critical tests, 4 passed, 3 failed
7 tests total, 4 passed, 3 failed
==
Acceptance    | FAIL |
16 critical tests, 13 passed, 3 failed
16 tests total, 13 passed, 3 failed
=={code}

> TestOzoneRestWithMiniCluster fails with "Unable to read ROCKDB config"
> --
>
> Key: HDDS-395
> URL: https://issues.apache.org/jira/browse/HDDS-395
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Mukul Kumar Singh
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.2.1
>
> Attachments: HDDS-395.001.patch, HDDS-395.002.patch
>
>
> Ozone datanode initialization is failing with the following exception.
> This was noted in the following precommit build result.
> https://builds.apache.org/job/PreCommit-HDDS-Build/935/testReport/org.apache.hadoop.ozone.web/TestOzoneRestWithMiniCluster/org_apache_hadoop_ozone_web_TestOzoneRestWithMiniCluster_2/
> {code}
> 2018-09-02 20:56:33,501 INFO  db.DBStoreBuilder 
> (DBStoreBuilder.java:getDbProfile(176)) - Unable to read ROCKDB config
> java.io.IOException: Unable to find the configuration directory. Please make 
> sure that HADOOP_CONF_DIR is setup correctly 
>   at 
> org.apache.hadoop.utils.db.DBConfigFromFile.getConfigLocation(DBConfigFromFile.java:62)
>   at 
> org.apache.hadoop.utils.db.DBConfigFromFile.readFromFile(DBConfigFromFile.java:118)
>   at 
> org.apache.hadoop.utils.db.DBStoreBuilder.getDbProfile(DBStoreBuilder.java:170)
>   at 
> org.apache.hadoop.utils.db.DBStoreBuilder.build(DBStoreBuilder.java:122)
>   at 
> org.apache.hadoop.ozone.om.OmMetadataManagerImpl.(OmMetadataManagerImpl.java:133)
>   at org.apache.hadoop.ozone.om.OzoneManager.(OzoneManager.java:146)
>   at 
> org.apache.hadoop.ozone.om.OzoneManager.createOm(OzoneManager.java:295)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.createOM(MiniOzoneClusterImpl.java:357)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.build(MiniOzoneClusterImpl.java:304)
>   at 
> org.apache.hadoop.ozone.web.TestOzoneRestWithMiniCluster.init(TestOzoneRestWithMiniCluster.java:73)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> 

[jira] [Commented] (HDFS-13846) Safe blocks counter is not decremented correctly if the block is striped

2018-09-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612725#comment-16612725
 ] 

Hadoop QA commented on HDFS-13846:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 21s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 53s{color} 
| {color:red} hadoop-hdfs-project_hadoop-hdfs generated 1 new + 416 unchanged - 
0 fixed = 417 total (was 416) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 25s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}106m 40s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}160m 46s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.client.impl.TestBlockReaderLocal |
|   | hadoop.hdfs.TestLeaseRecovery2 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDFS-13846 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12939446/HDFS-13846.005.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 83b517181aa8 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 1f6c454 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| javac | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25051/artifact/out/diff-compile-javac-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25051/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 

[jira] [Commented] (HDFS-13882) Set a maximum for the delay before retrying locateFollowingBlock

2018-09-12 Thread Kitti Nanasi (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612724#comment-16612724
 ] 

Kitti Nanasi commented on HDFS-13882:
-

I uploaded a patch with the new config. I just realised that in one of the 
comments I wrote "the overall timeout will be more than 409 minutes"; it was 
supposed to be seconds, not minutes.
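The 409-second figure follows from the client's roughly-doubling retry delays (about 400 ms before the first retry). A minimal sketch of the capped exponential backoff this issue proposes — the class name, constant, and cap value are illustrative, not the actual HDFS implementation:

```java
// Sketch of capped exponential backoff for locateFollowingBlock retries.
// Uncapped, 10 retries starting at 400 ms and doubling sum to
// 400 * (2^10 - 1) = 409,200 ms -- the "409 seconds" mentioned above.
// A cap bounds the worst single wait between attempts.
public final class CappedBackoff {
    static final long INITIAL_DELAY_MS = 400L;  // assumed initial delay

    /** Delay before the given retry (0-based), capped at maxDelayMs. */
    static long delayMs(int retry, long maxDelayMs) {
        long uncapped = INITIAL_DELAY_MS << retry;  // 400, 800, 1600, ...
        return Math.min(uncapped, maxDelayMs);
    }

    public static void main(String[] args) {
        long total = 0;
        for (int r = 0; r < 10; r++) {
            total += delayMs(r, Long.MAX_VALUE);  // uncapped
        }
        System.out.println(total);  // 409200
    }
}
```

With an 8-second cap, for example, delays beyond the fifth retry stop growing, so the total wait scales roughly linearly with the retry count instead of doubling.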

> Set a maximum for the delay before retrying locateFollowingBlock
> 
>
> Key: HDFS-13882
> URL: https://issues.apache.org/jira/browse/HDFS-13882
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Major
> Attachments: HDFS-13882.001.patch, HDFS-13882.002.patch
>
>
> More and more we are seeing cases where customers run into the Java 
> IOException "Unable to close file because the last block does not have 
> enough number of replicas" on client file closure. The common workaround is 
> to increase dfs.client.block.write.locateFollowingBlock.retries from 5 to 10. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13882) Set a maximum for the delay before retrying locateFollowingBlock

2018-09-12 Thread Kitti Nanasi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kitti Nanasi updated HDFS-13882:

Attachment: HDFS-13882.002.patch

> Set a maximum for the delay before retrying locateFollowingBlock
> 
>
> Key: HDFS-13882
> URL: https://issues.apache.org/jira/browse/HDFS-13882
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Major
> Attachments: HDFS-13882.001.patch, HDFS-13882.002.patch
>
>
> More and more we are seeing cases where customers run into the Java 
> IOException "Unable to close file because the last block does not have 
> enough number of replicas" on client file closure. The common workaround is 
> to increase dfs.client.block.write.locateFollowingBlock.retries from 5 to 10. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13882) Set a maximum for the delay before retrying locateFollowingBlock

2018-09-12 Thread Kitti Nanasi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kitti Nanasi updated HDFS-13882:

Summary: Set a maximum for the delay before retrying locateFollowingBlock  
(was: Change dfs.client.block.write.locateFollowingBlock.retries default from 5 
to 10)

> Set a maximum for the delay before retrying locateFollowingBlock
> 
>
> Key: HDFS-13882
> URL: https://issues.apache.org/jira/browse/HDFS-13882
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Major
> Attachments: HDFS-13882.001.patch
>
>
> More and more we are seeing cases where customers run into the Java 
> IOException "Unable to close file because the last block does not have 
> enough number of replicas" on client file closure. The common workaround is 
> to increase dfs.client.block.write.locateFollowingBlock.retries from 5 to 10. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13532) RBF: Adding security

2018-09-12 Thread CR Hota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

CR Hota updated HDFS-13532:
---
Attachment: RBF _ Security delegation token thoughts_updated_2.pdf

> RBF: Adding security
> 
>
> Key: HDFS-13532
> URL: https://issues.apache.org/jira/browse/HDFS-13532
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Íñigo Goiri
>Assignee: CR Hota
>Priority: Major
> Attachments: RBF _ Security delegation token thoughts.pdf, RBF _ 
> Security delegation token thoughts_updated.pdf, RBF _ Security delegation 
> token thoughts_updated_2.pdf, RBF-DelegationToken-Approach1b.pdf, 
> Security_for_Router-based Federation_design_doc.pdf
>
>
> HDFS Router based federation should support security. This includes 
> authentication and delegation tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13532) RBF: Adding security

2018-09-12 Thread CR Hota (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612667#comment-16612667
 ] 

CR Hota commented on HDFS-13532:


Thanks everyone for all the reviews so far.

MoM
 # Everyone leaned towards Approach 1 based on the pros and cons outlined.
 # Anu raised a valid point about multi-domain cluster set-ups and would like 
us to update the document with thoughts on this area.
 # Inigo felt we should start prototyping Approach 1.
 # Brahma also felt Approach 1 would be better.
 # Everyone more or less agreed that Approach 1 is also easy to implement.

Attaching an updated document which contains some initial information around 
multi-domain set-ups. [~anu], could you please add more context around it? As 
of now, a secured Router could act as a facade for multiple HDFS clusters that 
all work in the same domain.

> RBF: Adding security
> 
>
> Key: HDFS-13532
> URL: https://issues.apache.org/jira/browse/HDFS-13532
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Íñigo Goiri
>Assignee: CR Hota
>Priority: Major
> Attachments: RBF _ Security delegation token thoughts.pdf, RBF _ 
> Security delegation token thoughts_updated.pdf, 
> RBF-DelegationToken-Approach1b.pdf, Security_for_Router-based 
> Federation_design_doc.pdf
>
>
> HDFS Router based federation should support security. This includes 
> authentication and delegation tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13777) [PROVIDED Phase 2] Scheduler in the NN for distributing DNA_BACKUP work.

2018-09-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612659#comment-16612659
 ] 

Hadoop QA commented on HDFS-13777:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 35 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-12090 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
29s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
45s{color} | {color:green} HDFS-12090 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m 
16s{color} | {color:green} HDFS-12090 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  4m 
34s{color} | {color:green} HDFS-12090 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  6m  
4s{color} | {color:green} HDFS-12090 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  2s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  8m 
59s{color} | {color:green} HDFS-12090 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
18s{color} | {color:green} HDFS-12090 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
37s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
45s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
28s{color} | {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  1m  
0s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red}  1m  0s{color} | 
{color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  1m  0s{color} 
| {color:red} root in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
4m 46s{color} | {color:orange} root: The patch generated 863 new + 2807 
unchanged - 14 fixed = 3670 total (was 2821) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
39s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
44s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
32s{color} | {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
29s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
22s{color} | {color:green} The patch generated 0 new + 114 unchanged - 286 
fixed = 114 total (was 400) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 3 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  0m 
48s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
21s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
19s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client generated 2 new 
+ 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:red}-1{color} | 

[jira] [Comment Edited] (HDDS-233) Update ozone to latest ratis snapshot build

2018-09-12 Thread Shashikant Banerjee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612644#comment-16612644
 ] 

Shashikant Banerjee edited comment on HDDS-233 at 9/12/18 7:35 PM:
---

Fixed the test failure in testBlockWriteViaRatis by adjusting the leader 
election minimum timeout and the ozone client retry interval configurations so 
that the client keeps retrying until a leader election completes, before the 
client times out. The rest of the test failures are unrelated.


was (Author: shashikant):
Fix the test failure related  to Test to testBlockWriteViaRatis by adjusting 
the configurations leader election minimum timeout and ozone retryInterval so 
that, it keeps on retrying till a leader election completes before the client 
time out. Rest of the test failures are unrelated.

> Update ozone to latest ratis snapshot build
> ---
>
> Key: HDDS-233
> URL: https://issues.apache.org/jira/browse/HDDS-233
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.2.1
>Reporter: Mukul Kumar Singh
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-233.01.patch, HDDS-233.02.patch, HDDS-233.03.patch, 
> HDDS-233_20180911.patch
>
>
> This jira proposes to update ozone to the latest ratis snapshot build. It 
> will also add configs to set the append-entry timeout and to control the 
> number of entries in the retry cache.






[jira] [Commented] (HDDS-233) Update ozone to latest ratis snapshot build

2018-09-12 Thread Shashikant Banerjee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612644#comment-16612644
 ] 

Shashikant Banerjee commented on HDDS-233:
--

Fixed the test failure in testBlockWriteViaRatis by adjusting the leader 
election minimum timeout and the ozone client retry interval configurations so 
that the client keeps retrying until a leader election completes, before the 
client times out. The rest of the test failures are unrelated.

> Update ozone to latest ratis snapshot build
> ---
>
> Key: HDDS-233
> URL: https://issues.apache.org/jira/browse/HDDS-233
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.2.1
>Reporter: Mukul Kumar Singh
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-233.01.patch, HDDS-233.02.patch, HDDS-233.03.patch, 
> HDDS-233_20180911.patch
>
>
> This jira proposes to update ozone to the latest ratis snapshot build. It 
> will also add configs to set the append-entry timeout and to control the 
> number of entries in the retry cache.






[jira] [Updated] (HDDS-233) Update ozone to latest ratis snapshot build

2018-09-12 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-233:
-
Attachment: HDDS-233.03.patch

> Update ozone to latest ratis snapshot build
> ---
>
> Key: HDDS-233
> URL: https://issues.apache.org/jira/browse/HDDS-233
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.2.1
>Reporter: Mukul Kumar Singh
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-233.01.patch, HDDS-233.02.patch, HDDS-233.03.patch, 
> HDDS-233_20180911.patch
>
>
> This jira proposes to update ozone to the latest ratis snapshot build. It 
> will also add configs to set the append-entry timeout and to control the 
> number of entries in the retry cache.






[jira] [Updated] (HDDS-415) 'ozone om' with incorrect argument first logs all the STARTUP_MSG

2018-09-12 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-415:
---
Summary: 'ozone om' with incorrect argument first logs all the STARTUP_MSG  
(was:  bin/ozone om with incorrect argument first logs all the STARTUP_MSG)

> 'ozone om' with incorrect argument first logs all the STARTUP_MSG
> -
>
> Key: HDDS-415
> URL: https://issues.apache.org/jira/browse/HDDS-415
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Blocker
> Fix For: 0.2.1
>
>
>  bin/ozone om with incorrect argument first logs all the STARTUP_MSG
> {code:java}
> ➜ ozone-0.2.1-SNAPSHOT bin/ozone om -hgfj
> 2018-09-07 12:56:12,391 [main] INFO - STARTUP_MSG:
> /
> STARTUP_MSG: Starting OzoneManager
> STARTUP_MSG: host = HW11469.local/10.22.16.67
> STARTUP_MSG: args = [-hgfj]
> STARTUP_MSG: version = 3.2.0-SNAPSHOT
> STARTUP_MSG: classpath = 
> 

[jira] [Commented] (HDDS-415) bin/ozone om with incorrect argument first logs all the STARTUP_MSG

2018-09-12 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612631#comment-16612631
 ] 

Arpit Agarwal commented on HDDS-415:


If it is a straightforward fix to move the error message to the start, then 
let's do that first and adopt picocli separately later.

>  bin/ozone om with incorrect argument first logs all the STARTUP_MSG
> 
>
> Key: HDDS-415
> URL: https://issues.apache.org/jira/browse/HDDS-415
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Blocker
> Fix For: 0.2.1
>
>
>  bin/ozone om with incorrect argument first logs all the STARTUP_MSG
> {code:java}
> ➜ ozone-0.2.1-SNAPSHOT bin/ozone om -hgfj
> 2018-09-07 12:56:12,391 [main] INFO - STARTUP_MSG:
> /
> STARTUP_MSG: Starting OzoneManager
> STARTUP_MSG: host = HW11469.local/10.22.16.67
> STARTUP_MSG: args = [-hgfj]
> STARTUP_MSG: version = 3.2.0-SNAPSHOT
> STARTUP_MSG: classpath = 
> 

[jira] [Commented] (HDFS-13846) Safe blocks counter is not decremented correctly if the block is striped

2018-09-12 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612624#comment-16612624
 ] 

Hudson commented on HDFS-13846:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14936 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14936/])
HDFS-13846. Safe blocks counter is not decremented correctly if the (templedf: 
rev 78bd3b1db9dc9eb533c2379ee71f133ecfc5cdeb)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManagerSafeMode.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManagerSafeMode.java


> Safe blocks counter is not decremented correctly if the block is striped
> 
>
> Key: HDFS-13846
> URL: https://issues.apache.org/jira/browse/HDFS-13846
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-13846.001.patch, HDFS-13846.002.patch, 
> HDFS-13846.003.patch, HDFS-13846.004.patch, HDFS-13846.005.patch
>
>
> In the BlockManagerSafeMode class, the "safe blocks" counter is incremented 
> if the number of nodes containing the block equals the number of data units 
> specified by the erasure coding policy, which looks like this in the code:
> {code:java}
> final int safe = storedBlock.isStriped() ?
> ((BlockInfoStriped)storedBlock).getRealDataBlockNum() : 
> safeReplication;
> if (storageNum == safe) {
>   this.blockSafe++;
> {code}
> But when it is decremented, the code does not check whether the block is 
> striped; it just compares the number of nodes containing the block with 0 
> (safeReplication - 1) if the block is complete, which is not correct.
> {code:java}
> if (storedBlock.isComplete() &&
> blockManager.countNodes(b).liveReplicas() == safeReplication - 1) {
>   this.blockSafe--;
>   assert blockSafe >= 0;
>   checkSafeMode();
> }
> {code}
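The asymmetry above suggests the decrement should reuse the same striped-aware threshold as the increment. A minimal standalone sketch of that logic (hypothetical class and method names, not the actual patch; the real code lives in BlockManagerSafeMode):

```java
// Standalone model of the striped-aware safe-block threshold.
public class SafeBlockThreshold {
    static final int SAFE_REPLICATION = 1; // default safe replication

    // Threshold at which a block counts as "safe": the real data block
    // count for a striped (erasure-coded) block, safeReplication otherwise.
    static int safeThreshold(boolean striped, int realDataBlockNum) {
        return striped ? realDataBlockNum : SAFE_REPLICATION;
    }

    // Decrement condition mirroring the increment: the block stops being
    // safe when live replicas/data units drop just below the threshold.
    static boolean shouldDecrement(boolean striped, int realDataBlockNum,
                                   int liveReplicas) {
        return liveReplicas == safeThreshold(striped, realDataBlockNum) - 1;
    }

    public static void main(String[] args) {
        // Replicated block: stops being safe when replicas drop to 0.
        System.out.println(shouldDecrement(false, 0, 0)); // true
        // RS-6-3 striped block: stops being safe when live data units
        // drop to 5, which a hard-coded "== 0" comparison would miss.
        System.out.println(shouldDecrement(true, 6, 5));  // true
        System.out.println(shouldDecrement(true, 6, 6));  // false
    }
}
```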






[jira] [Commented] (HDDS-436) Allow SCM chill mode to be disabled by configuration.

2018-09-12 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612625#comment-16612625
 ] 

Hudson commented on HDDS-436:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14936 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14936/])
HDDS-436. Allow SCM chill mode to be disabled by configuration. (aengineer: rev 
64c7a12b5775a330a04087adda25f85bd08b8366)
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMChillModeManager.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManager.java
* (edit) hadoop-hdds/common/src/main/resources/ozone-default.xml
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsConfigKeys.java
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/server/TestSCMChillModeManager.java


> Allow SCM chill mode to be disabled by configuration.
> -
>
> Key: HDDS-436
> URL: https://issues.apache.org/jira/browse/HDDS-436
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-436.00.patch
>
>
> [HDDS-351] adds chill mode state to SCM. As 
> [suggested|https://issues.apache.org/jira/browse/HDDS-362?focusedCommentId=16611334=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16611334]
>  by [~jnp] there are cases when we need to disable it via config. 






[jira] [Commented] (HDDS-395) TestOzoneRestWithMiniCluster fails with "Unable to read ROCKDB config"

2018-09-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612622#comment-16612622
 ] 

Hadoop QA commented on HDDS-395:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 20s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 39s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
56s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 52m 52s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-395 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12939447/HDDS-395.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e103447be886 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 1f6c454 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1045/testReport/ |
| Max. process+thread count | 399 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/common U: hadoop-hdds/common |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1045/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestOzoneRestWithMiniCluster fails with "Unable to read ROCKDB config"
> 

[jira] [Commented] (HDFS-13882) Change dfs.client.block.write.locateFollowingBlock.retries default from 5 to 10

2018-09-12 Thread Kitti Nanasi (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612618#comment-16612618
 ] 

Kitti Nanasi commented on HDFS-13882:
-

Thanks to [~arpitagarwal] and [~xiaochen] for the discussion!

I will not change the default for the retry number then, but I will add a 
config for the maximum sleep between retries. I think the default for that 
maximum sleep could be 60 seconds: if 7 retries with at most 50 seconds of 
waiting usually solves the problem for you, 60 seconds seems like a reasonable 
maximum to me. What do you think?
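A capped sleep of the kind proposed here can be sketched as follows (illustrative only; the 400 ms base, the doubling policy, and the cap are assumptions for this sketch, not the actual HDFS client implementation):

```java
// Sketch of a retry sleep that doubles per attempt but is clamped to a
// configurable maximum.
public class CappedBackoff {
    static long sleepMillis(int attempt, long baseMs, long maxMs) {
        long sleep = baseMs << Math.min(attempt, 20); // cap shift to avoid overflow
        return Math.min(sleep, maxMs);
    }

    public static void main(String[] args) {
        long base = 400;   // assumed ms before the first retry
        long max = 60_000; // proposed 60-second ceiling
        for (int attempt = 0; attempt <= 9; attempt++) {
            System.out.println("attempt " + attempt + ": "
                + sleepMillis(attempt, base, max) + " ms");
        }
    }
}
```

With these assumed numbers, the sleep grows to 51.2 s by the 8th attempt and is clamped to 60 s thereafter, which matches the "about 50 seconds over 7 retries" observation in the thread.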

> Change dfs.client.block.write.locateFollowingBlock.retries default from 5 to 
> 10
> ---
>
> Key: HDFS-13882
> URL: https://issues.apache.org/jira/browse/HDFS-13882
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Major
> Attachments: HDFS-13882.001.patch
>
>
> We are increasingly seeing cases where customers run into the java.io 
> exception "Unable to close file because the last block does not have 
> enough number of replicas" on client file closure. The common workaround is 
> to increase dfs.client.block.write.locateFollowingBlock.retries from 5 to 10. 






[jira] [Commented] (HDDS-436) Allow SCM chill mode to be disabled by configuration.

2018-09-12 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612613#comment-16612613
 ] 

Ajay Kumar commented on HDDS-436:
-

[~anu] thanks for review and commit.

> Allow SCM chill mode to be disabled by configuration.
> -
>
> Key: HDDS-436
> URL: https://issues.apache.org/jira/browse/HDDS-436
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-436.00.patch
>
>
> [HDDS-351] adds chill mode state to SCM. As 
> [suggested|https://issues.apache.org/jira/browse/HDDS-362?focusedCommentId=16611334=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16611334]
>  by [~jnp] there are cases when we need to disable it via config. 






[jira] [Commented] (HDDS-395) TestOzoneRestWithMiniCluster fails with "Unable to read ROCKDB config"

2018-09-12 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612612#comment-16612612
 ] 

Anu Engineer commented on HDDS-395:
---

+1, looks good to me. I will test this locally and commit since Jenkins is not 
going to catch the error even if there is one. I will do both unit tests and 
integration tests to make sure that this works as expected. Will take a little 
while to get this committed since these tests take time.

> TestOzoneRestWithMiniCluster fails with "Unable to read ROCKDB config"
> --
>
> Key: HDDS-395
> URL: https://issues.apache.org/jira/browse/HDDS-395
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Mukul Kumar Singh
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.2.1
>
> Attachments: HDDS-395.001.patch, HDDS-395.002.patch
>
>
> Ozone datanode initialization is failing with the following exception.
> This was noted in the following precommit build result.
> https://builds.apache.org/job/PreCommit-HDDS-Build/935/testReport/org.apache.hadoop.ozone.web/TestOzoneRestWithMiniCluster/org_apache_hadoop_ozone_web_TestOzoneRestWithMiniCluster_2/
> {code}
> 2018-09-02 20:56:33,501 INFO  db.DBStoreBuilder 
> (DBStoreBuilder.java:getDbProfile(176)) - Unable to read ROCKDB config
> java.io.IOException: Unable to find the configuration directory. Please make 
> sure that HADOOP_CONF_DIR is setup correctly 
>   at 
> org.apache.hadoop.utils.db.DBConfigFromFile.getConfigLocation(DBConfigFromFile.java:62)
>   at 
> org.apache.hadoop.utils.db.DBConfigFromFile.readFromFile(DBConfigFromFile.java:118)
>   at 
> org.apache.hadoop.utils.db.DBStoreBuilder.getDbProfile(DBStoreBuilder.java:170)
>   at 
> org.apache.hadoop.utils.db.DBStoreBuilder.build(DBStoreBuilder.java:122)
>   at 
> org.apache.hadoop.ozone.om.OmMetadataManagerImpl.(OmMetadataManagerImpl.java:133)
>   at org.apache.hadoop.ozone.om.OzoneManager.(OzoneManager.java:146)
>   at 
> org.apache.hadoop.ozone.om.OzoneManager.createOm(OzoneManager.java:295)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.createOM(MiniOzoneClusterImpl.java:357)
>   at 
> org.apache.hadoop.ozone.MiniOzoneClusterImpl$Builder.build(MiniOzoneClusterImpl.java:304)
>   at 
> org.apache.hadoop.ozone.web.TestOzoneRestWithMiniCluster.init(TestOzoneRestWithMiniCluster.java:73)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413)
> {code}






[jira] [Updated] (HDDS-373) Ozone genconf tool must generate ozone-site.xml with sample values instead of a template

2018-09-12 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-373:
--
Status: Open  (was: Patch Available)

> Ozone genconf tool must generate ozone-site.xml with sample values instead of 
> a template
> 
>
> Key: HDDS-373
> URL: https://issues.apache.org/jira/browse/HDDS-373
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-373.001.patch
>
>
> As discussed with [~anu], currently, the genconf tool generates a template 
> ozone-site.xml. This is not very useful for new users as they would have to 
> understand what values should be set for the minimal configuration properties.
> This Jira proposes to modify the ozone-default.xml, which is leveraged by the 
> genconf tool to generate ozone-site.xml.






[jira] [Updated] (HDDS-387) Remove hadoop-ozone-filesystem dependency on hadoop-ozone-integration-test

2018-09-12 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-387:
--
Status: Open  (was: Patch Available)

> Remove hadoop-ozone-filesystem dependency on hadoop-ozone-integration-test
> --
>
> Key: HDDS-387
> URL: https://issues.apache.org/jira/browse/HDDS-387
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Namit Maheshwari
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-387.001.patch, HDDS-387.002.patch
>
>
> hadoop-ozone-filesystem has a dependency on hadoop-ozone-integration-test. 
> Ideally, filesystem modules should not depend on test modules.
> This also causes issues when developing unit tests that instantiate an 
> OzoneFileSystem object inside hadoop-ozone-integration-test, as that creates 
> a circular dependency.






[jira] [Updated] (HDDS-390) Add method to check for valid key name based on URI characters

2018-09-12 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-390:
--
Status: Open  (was: Patch Available)

> Add method to check for valid key name based on URI characters
> --
>
> Key: HDDS-390
> URL: https://issues.apache.org/jira/browse/HDDS-390
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-390.001.patch
>
>
> As per design, key names composed of characters from the valid URI character 
> set must be treated as valid key names.
> For the URI character set, see: [https://tools.ietf.org/html/rfc2396#appendix-A]
> This Jira proposes to define validateKeyName(), similar to the 
> validateResourceName() that validates bucket/volume names.
>  
> A valid key name must:
>  * conform to the URI character set
>  * allow /
> TBD whether key names must impose other rules similar to volume/bucket names, 
> e.g.:
>  * should not start with a period or dash
>  * should not end with a period or dash
>  * should not have contiguous periods
>  * should not have a period after a dash and vice versa
>  
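An illustrative validateKeyName() along these lines (hypothetical sketch; the character class below only approximates the RFC 2396 set, and the real validation may differ):

```java
import java.util.regex.Pattern;

// Hypothetical sketch: accept key names built from an approximation of
// the RFC 2396 URI character set, plus '/'.
public class KeyNameValidator {
    // Alphanumerics, the RFC 2396 "mark" set (-_.!~*'()), a subset of
    // reserved characters, and '/'. The exact class is an assumption.
    private static final Pattern VALID_KEY =
        Pattern.compile("^[a-zA-Z0-9\\-_.!~*'()/:@&=+$,;?]+$");

    static boolean isValidKeyName(String key) {
        return key != null && !key.isEmpty()
            && VALID_KEY.matcher(key).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValidKeyName("dir1/dir2/file.txt"));  // true
        System.out.println(isValidKeyName("bad key with spaces")); // false
    }
}
```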






[jira] [Commented] (HDFS-13777) [PROVIDED Phase 2] Scheduler in the NN for distributing DNA_BACKUP work.

2018-09-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612602#comment-16612602
 ] 

Hadoop QA commented on HDFS-13777:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  9s{color} 
| {color:red} HDFS-13777 does not apply to HDFS-12090. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-13777 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12939451/HDFS-13777-HDFS-12090.007.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25052/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> [PROVIDED Phase 2] Scheduler in the NN for distributing DNA_BACKUP work.
> 
>
> Key: HDFS-13777
> URL: https://issues.apache.org/jira/browse/HDFS-13777
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Major
> Attachments: HDFS-13777-HDFS-12090.001.patch, 
> HDFS-13777-HDFS-12090.002.patch, HDFS-13777-HDFS-12090.003.patch, 
> HDFS-13777-HDFS-12090.005.patch, HDFS-13777-HDFS-12090.006.patch, 
> HDFS-13777-HDFS-12090.007.patch
>
>
> When the SyncService is running, it should periodically take snapshots, make 
> a snapshotdiff, and then distribute DNA_BACKUP work to the Datanodes (See 
> HDFS-13421). Upon completion of the work, the NN should update the AliasMap.
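The snapshot/diff/distribute loop described above can be sketched roughly as follows; all class and method names are hypothetical stand-ins, not the actual HDFS-13777 patch:

```java
import java.util.*;

// Illustrative sketch of the backup scheduling loop: diff the current
// snapshot against the previous one, then distribute the resulting
// DNA_BACKUP work across datanodes. Names are hypothetical.
public class BackupScheduler {

    // Blocks present in the current snapshot but not the previous one.
    static List<Long> snapshotDiff(Set<Long> previous, Set<Long> current) {
        List<Long> added = new ArrayList<>(current);
        added.removeAll(previous);
        Collections.sort(added);
        return added;
    }

    // Round-robin the backup work across the available datanodes.
    static Map<String, List<Long>> distribute(List<Long> work, List<String> datanodes) {
        Map<String, List<Long>> plan = new LinkedHashMap<>();
        for (String dn : datanodes) plan.put(dn, new ArrayList<>());
        for (int i = 0; i < work.size(); i++) {
            plan.get(datanodes.get(i % datanodes.size())).add(work.get(i));
        }
        return plan;
    }

    public static void main(String[] args) {
        Set<Long> prev = Set.of(1L, 2L);
        Set<Long> curr = Set.of(1L, 2L, 3L, 4L, 5L);
        Map<String, List<Long>> plan =
            distribute(snapshotDiff(prev, curr), List.of("dn1", "dn2"));
        // Once a datanode reports completion, the NN would update the AliasMap.
        System.out.println(plan);  // {dn1=[3, 5], dn2=[4]}
    }
}
```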






[jira] [Commented] (HDDS-233) Update ozone to latest ratis snapshot build

2018-09-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612595#comment-16612595
 ] 

Hadoop QA commented on HDDS-233:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
38s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
4s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 23s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
40s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
42s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 57s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
20s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
6s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
59s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
25s{color} | {color:green} client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 11s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
36s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} 

[jira] [Updated] (HDFS-13777) [PROVIDED Phase 2] Scheduler in the NN for distributing DNA_BACKUP work.

2018-09-12 Thread Ewan Higgs (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewan Higgs updated HDFS-13777:
--
Status: Patch Available  (was: Open)

> [PROVIDED Phase 2] Scheduler in the NN for distributing DNA_BACKUP work.
> 
>
> Key: HDFS-13777
> URL: https://issues.apache.org/jira/browse/HDFS-13777
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Major
> Attachments: HDFS-13777-HDFS-12090.001.patch, 
> HDFS-13777-HDFS-12090.002.patch, HDFS-13777-HDFS-12090.003.patch, 
> HDFS-13777-HDFS-12090.005.patch, HDFS-13777-HDFS-12090.006.patch, 
> HDFS-13777-HDFS-12090.007.patch
>
>
> When the SyncService is running, it should periodically take snapshots, make 
> a snapshotdiff, and then distribute DNA_BACKUP work to the Datanodes (See 
> HDFS-13421). Upon completion of the work, the NN should update the AliasMap.






[jira] [Updated] (HDFS-13777) [PROVIDED Phase 2] Scheduler in the NN for distributing DNA_BACKUP work.

2018-09-12 Thread Ewan Higgs (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewan Higgs updated HDFS-13777:
--
Status: Open  (was: Patch Available)

> [PROVIDED Phase 2] Scheduler in the NN for distributing DNA_BACKUP work.
> 
>
> Key: HDFS-13777
> URL: https://issues.apache.org/jira/browse/HDFS-13777
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Major
> Attachments: HDFS-13777-HDFS-12090.001.patch, 
> HDFS-13777-HDFS-12090.002.patch, HDFS-13777-HDFS-12090.003.patch, 
> HDFS-13777-HDFS-12090.005.patch, HDFS-13777-HDFS-12090.006.patch, 
> HDFS-13777-HDFS-12090.007.patch
>
>
> When the SyncService is running, it should periodically take snapshots, make 
> a snapshotdiff, and then distribute DNA_BACKUP work to the Datanodes (See 
> HDFS-13421). Upon completion of the work, the NN should update the AliasMap.






[jira] [Updated] (HDFS-13777) [PROVIDED Phase 2] Scheduler in the NN for distributing DNA_BACKUP work.

2018-09-12 Thread Ewan Higgs (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewan Higgs updated HDFS-13777:
--
Attachment: HDFS-13777-HDFS-12090.007.patch

> [PROVIDED Phase 2] Scheduler in the NN for distributing DNA_BACKUP work.
> 
>
> Key: HDFS-13777
> URL: https://issues.apache.org/jira/browse/HDFS-13777
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Major
> Attachments: HDFS-13777-HDFS-12090.001.patch, 
> HDFS-13777-HDFS-12090.002.patch, HDFS-13777-HDFS-12090.003.patch, 
> HDFS-13777-HDFS-12090.005.patch, HDFS-13777-HDFS-12090.006.patch, 
> HDFS-13777-HDFS-12090.007.patch
>
>
> When the SyncService is running, it should periodically take snapshots, make 
> a snapshotdiff, and then distribute DNA_BACKUP work to the Datanodes (See 
> HDFS-13421). Upon completion of the work, the NN should update the AliasMap.






[jira] [Updated] (HDDS-436) Allow SCM chill mode to be disabled by configuration.

2018-09-12 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-436:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

[~ajayydv] thanks for the contribution. I have committed this to trunk and ozone-2.0.

> Allow SCM chill mode to be disabled by configuration.
> -
>
> Key: HDDS-436
> URL: https://issues.apache.org/jira/browse/HDDS-436
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-436.00.patch
>
>
> [HDDS-351] adds chill mode state to SCM. As 
> [suggested|https://issues.apache.org/jira/browse/HDDS-362?focusedCommentId=16611334=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16611334]
>  by [~jnp], there are cases where we need to be able to disable it via configuration. 






[jira] [Updated] (HDDS-429) StorageContainerManager lock optimization

2018-09-12 Thread Jitendra Nath Pandey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HDDS-429:
--
Priority: Blocker  (was: Major)

> StorageContainerManager lock optimization
> -
>
> Key: HDDS-429
> URL: https://issues.apache.org/jira/browse/HDDS-429
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Blocker
> Fix For: 0.2.1
>
>
> Currently, {{StorageContainerManager}} uses {{ReentrantLock}} for 
> synchronization. We can replace this with {{ReentrantReadWriteLock}} to get 
> better performance.
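The proposed change can be illustrated with a minimal sketch: a read-write lock lets concurrent readers proceed in parallel while keeping writers exclusive. The class and field names below are hypothetical, not actual SCM code:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative sketch: guard SCM metadata with a ReentrantReadWriteLock so
// concurrent readers no longer serialize. Names are hypothetical, not the
// actual StorageContainerManager code.
public class ScmMetadataStore {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private final Map<String, String> containers = new HashMap<>();

    public String getContainer(String id) {
        lock.readLock().lock();     // many readers may hold this at once
        try {
            return containers.get(id);
        } finally {
            lock.readLock().unlock();
        }
    }

    public void putContainer(String id, String info) {
        lock.writeLock().lock();    // writers remain exclusive
        try {
            containers.put(id, info);
        } finally {
            lock.writeLock().unlock();
        }
    }

    public static void main(String[] args) {
        ScmMetadataStore store = new ScmMetadataStore();
        store.putContainer("c1", "OPEN");
        System.out.println(store.getContainer("c1")); // OPEN
    }
}
```

With a plain ReentrantLock, every lookup blocks every other lookup; with the read-write variant, only writes contend, which is the win for a read-heavy service like SCM.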






[jira] [Commented] (HDFS-13697) DFSClient should instantiate and cache KMSClientProvider using UGI at creation time for consistent UGI handling

2018-09-12 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612575#comment-16612575
 ] 

Xiao Chen commented on HDFS-13697:
--

Thanks for the update [~daryn].

As far as I can find, there were similar discussions initially on HADOOP-10771 
/ HADOOP-10880. I agree it's a day-0 issue that's rather fundamental to 
authentication, and more difficult to fix. However, it seems to be a somewhat 
separate problem from what we are tackling so far. The existing code seems to 
have met the requirements of all use cases so far, except the oozie one 
[~zvenczel] reported here. I believe Zsolt is setting up a cluster with the 
latest patch to test the various cases, to be sure. Assuming that goes well, 
would you be ok with this jira focusing on the kms client cache issue, and 
another jira focusing on the hadoop-auth family improvements (which 
presumably impact kms, httpfs, and maybe oozie)?

I may not fully understand your intention, and the fix may turn out to be easy 
enough to come in together here; please elaborate if that's the case.
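The failure mode under discussion (the proxy user being dropped when no privileged context is on the call stack) can be modeled in miniature. This is a toy stand-in for Hadoop's UserGroupInformation, not real Hadoop code:

```java
// Toy model of the proxy-user loss: outside a privileged (doAs) context,
// resolving the current user falls back to the login user, so the proxy
// identity (example_user) is lost. Caching the UGI at client creation time
// preserves it. All names here are illustrative, not Hadoop code.
public class UgiModel {
    static final String LOGIN_USER = "oozie";
    static final ThreadLocal<String> doAsContext = new ThreadLocal<>();

    // Stand-in for UGI.getCurrentUser(): proxy user if inside a doAs
    // context, otherwise a re-login to the plain login user.
    static String getCurrentUser() {
        String proxy = doAsContext.get();
        return proxy != null ? proxy : LOGIN_USER;
    }

    public static void main(String[] args) {
        doAsContext.set("example_user");
        String atCreation = getCurrentUser();  // captured inside doAs
        doAsContext.remove();                  // later call: context is gone
        String atCallTime = getCurrentUser();
        System.out.println(atCreation + " vs " + atCallTime); // example_user vs oozie
    }
}
```

Resolving the user once at client creation (the "atCreation" value) and reusing it is the pattern this jira's patch applies to the KMS client cache.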

> DFSClient should instantiate and cache KMSClientProvider using UGI at 
> creation time for consistent UGI handling
> ---
>
> Key: HDFS-13697
> URL: https://issues.apache.org/jira/browse/HDFS-13697
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Zsolt Venczel
>Assignee: Zsolt Venczel
>Priority: Major
> Attachments: HDFS-13697.01.patch, HDFS-13697.02.patch, 
> HDFS-13697.03.patch, HDFS-13697.04.patch, HDFS-13697.05.patch, 
> HDFS-13697.06.patch, HDFS-13697.07.patch, HDFS-13697.08.patch, 
> HDFS-13697.09.patch, HDFS-13697.10.patch, HDFS-13697.11.patch, 
> HDFS-13697.12.patch, HDFS-13697.prelim.patch
>
>
> While calling KeyProviderCryptoExtension.decryptEncryptedKey, the call stack 
> might not include a doAs privileged execution call (in the DFSClient, for 
> example). This results in losing the proxy user from the UGI, as 
> UGI.getCurrentUser finds no AccessControllerContext and re-logs in as the 
> login user only.
> For example: if the oozie user is entitled to perform actions on behalf of 
> example_user, but oozie itself is forbidden to decrypt any EDEK (for 
> security reasons), then, due to the above issue, the example_user 
> entitlements are lost from the UGI and the following error is reported:
> {code}
> [0] 
> SERVER[xxx] USER[example_user] GROUP[-] TOKEN[] APP[Test_EAR] 
> JOB[0020905-180313191552532-oozie-oozi-W] 
> ACTION[0020905-180313191552532-oozie-oozi-W@polling_dir_path] Error starting 
> action [polling_dir_path]. ErrorType [ERROR], ErrorCode [FS014], Message 
> [FS014: User [oozie] is not authorized to perform [DECRYPT_EEK] on key with 
> ACL name [encrypted_key]!!]
> org.apache.oozie.action.ActionExecutorException: FS014: User [oozie] is not 
> authorized to perform [DECRYPT_EEK] on key with ACL name [encrypted_key]!!
>  at 
> org.apache.oozie.action.ActionExecutor.convertExceptionHelper(ActionExecutor.java:463)
>  at 
> org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:441)
>  at 
> org.apache.oozie.action.hadoop.FsActionExecutor.touchz(FsActionExecutor.java:523)
>  at 
> org.apache.oozie.action.hadoop.FsActionExecutor.doOperations(FsActionExecutor.java:199)
>  at 
> org.apache.oozie.action.hadoop.FsActionExecutor.start(FsActionExecutor.java:563)
>  at 
> org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:232)
>  at 
> org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:63)
>  at org.apache.oozie.command.XCommand.call(XCommand.java:286)
>  at 
> org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:332)
>  at 
> org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:261)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>  at 
> org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:179)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  at java.lang.Thread.run(Thread.java:744)
> Caused by: org.apache.hadoop.security.authorize.AuthorizationException: User 
> [oozie] is not authorized to perform [DECRYPT_EEK] on key with ACL name 
> [encrypted_key]!!
>  at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>  at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>  at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>  at 

[jira] [Updated] (HDFS-13846) Safe blocks counter is not decremented correctly if the block is striped

2018-09-12 Thread Daniel Templeton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated HDFS-13846:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.2.0
   Status: Resolved  (was: Patch Available)

> Safe blocks counter is not decremented correctly if the block is striped
> 
>
> Key: HDFS-13846
> URL: https://issues.apache.org/jira/browse/HDFS-13846
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-13846.001.patch, HDFS-13846.002.patch, 
> HDFS-13846.003.patch, HDFS-13846.004.patch, HDFS-13846.005.patch
>
>
> In the BlockManagerSafeMode class, the "safe blocks" counter is incremented 
> when the number of nodes containing the block equals the number of data 
> units specified by the erasure coding policy, which looks like this in the code:
> {code:java}
> final int safe = storedBlock.isStriped() ?
> ((BlockInfoStriped)storedBlock).getRealDataBlockNum() : 
> safeReplication;
> if (storageNum == safe) {
>   this.blockSafe++;
> {code}
> But when the counter is decremented, the code does not check whether the 
> block is striped; it just compares the number of nodes containing the block 
> with safeReplication - 1 (i.e. 0) if the block is complete, which is not 
> correct for striped blocks.
> {code:java}
> if (storedBlock.isComplete() &&
> blockManager.countNodes(b).liveReplicas() == safeReplication - 1) {
>   this.blockSafe--;
>   assert blockSafe >= 0;
>   checkSafeMode();
> }
> {code}
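A self-contained model of the symmetric fix implied by the description: the decrement should use the same striped-aware threshold as the increment. This is a simplified sketch with stand-in names, not the committed patch:

```java
// Simplified model of BlockManagerSafeMode's safe-block accounting. The fix
// is that onStorageRemoved uses the same striped-aware threshold as
// onStorageAdded. Names are stand-ins, not the actual Hadoop code.
public class SafeBlockCounter {
    private long blockSafe = 0;

    // realDataBlockNum for striped blocks, safeReplication otherwise.
    static int safeThreshold(boolean striped, int realDataBlockNum, int safeReplication) {
        return striped ? realDataBlockNum : safeReplication;
    }

    void onStorageAdded(boolean striped, int realDataBlockNum,
                        int safeReplication, int storageNum) {
        if (storageNum == safeThreshold(striped, realDataBlockNum, safeReplication)) {
            blockSafe++;
        }
    }

    void onStorageRemoved(boolean striped, int realDataBlockNum,
                          int safeReplication, int liveReplicas) {
        // Fixed: compare against the striped-aware threshold, not just
        // safeReplication - 1.
        if (liveReplicas == safeThreshold(striped, realDataBlockNum, safeReplication) - 1) {
            blockSafe--;
        }
    }

    long count() { return blockSafe; }
}
```

For a striped block with six data units, the counter is bumped when the sixth storage reports in, and with the fix it is correctly decremented when live replicas drop to five, rather than waiting for zero.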





