[jira] [Commented] (HDFS-14900) Fix build failure of hadoop-hdfs-native-client

2019-10-07 Thread Masatake Iwasaki (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16946505#comment-16946505
 ] 

Masatake Iwasaki commented on HDFS-14900:
-

{quote}
If I understand it correctly now, you are saying the build fails when no protobuf 
is installed explicitly?
{quote}

Yes. Sorry for the unclear problem statement.

{quote}
Anyway, if so, do you have a fix for it?
{quote}

I'm looking for a way to build without manual protobuf installation.


> Fix build failure of hadoop-hdfs-native-client
> --
>
> Key: HDFS-14900
> URL: https://issues.apache.org/jira/browse/HDFS-14900
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Major
>
> The build with the native profile failed due to missing protobuf resources. I did 
> not notice this because the build succeeds if protobuf-2.5.0 is installed. 
> protobuf-3.7.1 is the correct version after HADOOP-16558.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2260) Avoid evaluation of LOG.trace and LOG.debug statement in the read/write path (HDDS)

2019-10-07 Thread Siddharth Wagle (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16946504#comment-16946504
 ] 

Siddharth Wagle commented on HDDS-2260:
---

Creating a separate Jira for the hdds and ozone modules since the volume of code 
changes is large.

> Avoid evaluation of LOG.trace and LOG.debug statement in the read/write path 
> (HDDS)
> ---
>
> Key: HDDS-2260
> URL: https://issues.apache.org/jira/browse/HDDS-2260
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client, Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Siddharth Wagle
>Priority: Major
>
> Arguments to LOG.trace and LOG.debug statements are evaluated even when 
> debug/trace logging is disabled. This jira proposes to wrap all the 
> trace/debug logging in 
> LOG.isDebugEnabled and LOG.isTraceEnabled checks to avoid that cost.
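The cost being avoided can be illustrated with a minimal, self-contained sketch; the FakeLogger below is a hypothetical stand-in for org.slf4j.Logger, used only to count how often the expensive message argument is built:

```java
import java.util.ArrayList;
import java.util.List;

public class GuardedLoggingDemo {
  /** Hypothetical stand-in for org.slf4j.Logger. */
  static class FakeLogger {
    private final boolean debugEnabled;
    final List<String> lines = new ArrayList<>();
    FakeLogger(boolean debugEnabled) { this.debugEnabled = debugEnabled; }
    boolean isDebugEnabled() { return debugEnabled; }
    void debug(String msg) { if (debugEnabled) lines.add(msg); }
  }

  static int expensiveCalls = 0;

  /** Stands in for a costly toString()/serialization on the write path. */
  static String expensiveSummary() {
    expensiveCalls++;
    return "chunk=...";
  }

  public static void main(String[] args) {
    FakeLogger log = new FakeLogger(false);  // debug logging disabled

    // Unguarded: the message is concatenated even though debug is off.
    log.debug("state: " + expensiveSummary());

    // Guarded, as the jira proposes: the argument is never evaluated.
    if (log.isDebugEnabled()) {
      log.debug("state: " + expensiveSummary());
    }

    // Only the unguarded call paid the evaluation cost.
    if (expensiveCalls != 1) {
      throw new AssertionError("expected 1 call, got " + expensiveCalls);
    }
    System.out.println("expensive evaluations: " + expensiveCalls);
  }
}
```

With debug disabled, only the unguarded call evaluates its argument, so the program prints "expensive evaluations: 1".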






[jira] [Work logged] (HDDS-2217) Remove log4j and audit configuration from the docker-config files

2019-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2217?focusedWorklogId=324860&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-324860
 ]

ASF GitHub Bot logged work on HDDS-2217:


Author: ASF GitHub Bot
Created on: 08/Oct/19 05:45
Start Date: 08/Oct/19 05:45
Worklog Time Spent: 10m 
  Work Description: christeoh commented on issue #1582: HDDS-2217. Removed 
redundant LOG4J lines from docker configurations
URL: https://github.com/apache/hadoop/pull/1582#issuecomment-539341389
 
 
   Looks like there are some issues with Ratis? Is this related to the patch?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 324860)
Time Spent: 2h 50m  (was: 2h 40m)

> Remove log4j and audit configuration from the docker-config files
> -
>
> Key: HDDS-2217
> URL: https://issues.apache.org/jira/browse/HDDS-2217
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: docker
>Reporter: Marton Elek
>Assignee: Chris Teoh
>Priority: Major
>  Labels: newbie, pull-request-available
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> Log4j configuration lines are added to the docker-config under 
> hadoop-ozone/dist/src/main/compose/...
> mainly to make it easier to reconfigure the log level of any component.
> As we already have an "ozone insight" tool which can help us modify the log 
> level at runtime, we don't need these lines any more.
> {code:java}
> LOG4J.PROPERTIES_log4j.rootLogger=INFO, stdout
> LOG4J.PROPERTIES_log4j.appender.stdout=org.apache.log4j.ConsoleAppender
> LOG4J.PROPERTIES_log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
> LOG4J.PROPERTIES_log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd 
> HH:mm:ss} %-5p %c{1}:%L - %m%n
> LOG4J.PROPERTIES_log4j.logger.org.apache.hadoop.util.NativeCodeLoader=ERROR
> LOG4J.PROPERTIES_log4j.logger.org.apache.ratis.conf.ConfUtils=WARN
> LOG4J.PROPERTIES_log4j.logger.org.apache.hadoop.security.ShellBasedUnixGroupsMapping=ERROR
> LOG4J.PROPERTIES_log4j.logger.org.apache.ratis.grpc.client.GrpcClientProtocolClient=WARN
> LOG4J.PROPERTIES_log4j.logger.http.requests.s3gateway=INFO,s3gatewayrequestlog
> LOG4J.PROPERTIES_log4j.appender.s3gatewayrequestlog=org.apache.hadoop.http.HttpRequestLogAppender
> LOG4J.PROPERTIES_log4j.appender.s3gatewayrequestlog.Filename=/tmp/jetty-s3gateway-yyyy_mm_dd.log
> LOG4J.PROPERTIES_log4j.appender.s3gatewayrequestlog.RetainDays=3 {code}
> We can remove them together with the audit log entries, as we already have a 
> default log4j.properties / audit log4j2 config.
> After the removal, the clusters should be tested: the Ozone CLI should not print 
> any confusing log messages (such as NativeLib is missing or anything else). 
> AFAIK they are already turned off in the etc/hadoop/etc log4j.properties.
>  
>  






[jira] [Commented] (HDFS-14814) RBF: RouterQuotaUpdateService supports inherited rule.

2019-10-07 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16946496#comment-16946496
 ] 

Ayush Saxena commented on HDFS-14814:
-

Thanx [~LiJinglun]
v011 LGTM +1

> RBF: RouterQuotaUpdateService supports inherited rule.
> --
>
> Key: HDFS-14814
> URL: https://issues.apache.org/jira/browse/HDFS-14814
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: HDFS-14814.001.patch, HDFS-14814.002.patch, 
> HDFS-14814.003.patch, HDFS-14814.004.patch, HDFS-14814.005.patch, 
> HDFS-14814.006.patch, HDFS-14814.007.patch, HDFS-14814.008.patch, 
> HDFS-14814.009.patch, HDFS-14814.010.patch, HDFS-14814.011.patch
>
>
> I want to add a rule *'The quota should be set the same as the nearest 
> parent'* to Global Quota. Suppose we have the mount table below.
> M1: /dir-a                  ns0->/dir-a    \{nquota=10,squota=20}
> M2: /dir-a/dir-b            ns1->/dir-b    \{nquota=-1,squota=30}
> M3: /dir-a/dir-b/dir-c      ns2->/dir-c    \{nquota=-1,squota=-1}
> M4: /dir-d                  ns3->/dir-d    \{nquota=-1,squota=-1}
>  
> The quota for the remote locations on the namespaces should be:
>  ns0->/dir-a     \{nquota=10,squota=20}
>  ns1->/dir-b     \{nquota=10,squota=30}
>  ns2->/dir-c      \{nquota=10,squota=30}
>  ns3->/dir-d     \{nquota=-1,squota=-1}
>  
> The quota of a remote location is set the same as its corresponding MountTable, 
> and if the MountTable has no quota, the quota is inherited from the nearest 
> parent MountTable that has one.
>  
> It's easy to implement. In RouterQuotaUpdateService, each time we compute 
> the currentQuotaUsage we can get the quota info for each MountTable, then 
> check and fix every MountTable whose quota doesn't match the rule above.
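The "nearest parent" lookup described above can be sketched as a plain path walk; the resolve helper and the quota map below are hypothetical illustrations of the rule, not the actual RouterQuotaUpdateService API:

```java
import java.util.HashMap;
import java.util.Map;

public class QuotaInheritanceDemo {
  /**
   * Resolve a quota by walking up from the mount path to the nearest
   * ancestor with a quota set; -1 means "not set", as in the example.
   */
  static long resolve(Map<String, Long> quotas, String path) {
    for (String p = path; !p.isEmpty(); p = parent(p)) {
      Long q = quotas.get(p);
      if (q != null && q != -1L) {
        return q;
      }
    }
    return -1L;
  }

  static String parent(String path) {
    int i = path.lastIndexOf('/');
    return i <= 0 ? "" : path.substring(0, i);
  }

  public static void main(String[] args) {
    Map<String, Long> nquota = new HashMap<>();
    nquota.put("/dir-a", 10L);             // M1
    nquota.put("/dir-a/dir-b", -1L);       // M2
    nquota.put("/dir-a/dir-b/dir-c", -1L); // M3
    nquota.put("/dir-d", -1L);             // M4

    // M2 and M3 inherit nquota=10 from /dir-a; M4 has no ancestor with a quota.
    if (resolve(nquota, "/dir-a/dir-b") != 10L
        || resolve(nquota, "/dir-a/dir-b/dir-c") != 10L
        || resolve(nquota, "/dir-d") != -1L) {
      throw new AssertionError("inheritance rule not satisfied");
    }
    System.out.println("nquota(/dir-a/dir-b/dir-c) = "
        + resolve(nquota, "/dir-a/dir-b/dir-c"));
  }
}
```

Running the same walk on the squota map from the example would give ns2->/dir-c squota=30, inherited from /dir-a/dir-b, matching the expected table.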






[jira] [Comment Edited] (HDFS-14900) Fix build failure of hadoop-hdfs-native-client

2019-10-07 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16946489#comment-16946489
 ] 

Ayush Saxena edited comment on HDFS-14900 at 10/8/19 5:15 AM:
--

bq.  I did not notice this because the build succeeds if protobuf-2.5.0 is 
installed. protobuf-3.7.1

Oh, I thought you were saying that with protobuf 2.5.0 installed the build passed, 
and with 3.7.1 it didn't.
If I understand it correctly now, you are saying the build fails when no protobuf 
is installed explicitly?
I tried removing protobuf and compiling; the build does seem to fail now. 
Anyway, if so, do you have a fix for it?


was (Author: ayushtkn):
bq.  I did not notice this because the build succeeds if protobuf-2.5.0 is 
installed. protobuf-3.7.1

Oh, I thought you were saying that with protobuf 2.5.0 installed the build passed, 
and with 3.7.1 it didn't.
If I understand it correctly now, you are saying the build fails when no protobuf 
is installed explicitly?
Anyway, if so, do you have a fix for it?

> Fix build failure of hadoop-hdfs-native-client
> --
>
> Key: HDFS-14900
> URL: https://issues.apache.org/jira/browse/HDFS-14900
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Major
>
> The build with the native profile failed due to missing protobuf resources. I did 
> not notice this because the build succeeds if protobuf-2.5.0 is installed. 
> protobuf-3.7.1 is the correct version after HADOOP-16558.






[jira] [Created] (HDDS-2266) Avoid evaluation of LOG.trace and LOG.debug statement in the read/write path (Ozone)

2019-10-07 Thread Siddharth Wagle (Jira)
Siddharth Wagle created HDDS-2266:
-

 Summary: Avoid evaluation of LOG.trace and LOG.debug statement in 
the read/write path (Ozone)
 Key: HDDS-2266
 URL: https://issues.apache.org/jira/browse/HDDS-2266
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: Ozone CLI, Ozone Manager
Affects Versions: 0.5.0
Reporter: Siddharth Wagle
Assignee: Siddharth Wagle
 Fix For: 0.5.0


Arguments to LOG.trace and LOG.debug statements are evaluated even when debug/trace 
logging is disabled. This jira proposes to wrap all the trace/debug logging in 
LOG.isDebugEnabled and LOG.isTraceEnabled checks to avoid that cost.






[jira] [Updated] (HDDS-2260) Avoid evaluation of LOG.trace and LOG.debug statement in the read/write path (HDDS)

2019-10-07 Thread Siddharth Wagle (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Wagle updated HDDS-2260:
--
Summary: Avoid evaluation of LOG.trace and LOG.debug statement in the 
read/write path (HDDS)  (was: Avoid evaluation of LOG.trace and LOG.debug 
statement in the read/write path)

> Avoid evaluation of LOG.trace and LOG.debug statement in the read/write path 
> (HDDS)
> ---
>
> Key: HDDS-2260
> URL: https://issues.apache.org/jira/browse/HDDS-2260
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client, Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Siddharth Wagle
>Priority: Major
>
> Arguments to LOG.trace and LOG.debug statements are evaluated even when 
> debug/trace logging is disabled. This jira proposes to wrap all the 
> trace/debug logging in 
> LOG.isDebugEnabled and LOG.isTraceEnabled checks to avoid that cost.






[jira] [Commented] (HDFS-14859) Prevent unnecessary evaluation of costly operation getNumLiveDataNodes when dfs.namenode.safemode.min.datanodes is not zero

2019-10-07 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16946490#comment-16946490
 ] 

Ayush Saxena commented on HDFS-14859:
-

Thanx [~smajeti] for the patch.
Seems all comments have been addressed.
v007 LGTM +1

> Prevent unnecessary evaluation of costly operation getNumLiveDataNodes when 
> dfs.namenode.safemode.min.datanodes is not zero
> ---
>
> Key: HDFS-14859
> URL: https://issues.apache.org/jira/browse/HDFS-14859
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.0, 3.3.0, 3.1.4
>Reporter: Srinivasu Majeti
>Assignee: Srinivasu Majeti
>Priority: Major
>  Labels: block
> Attachments: HDFS-14859.001.patch, HDFS-14859.002.patch, 
> HDFS-14859.003.patch, HDFS-14859.004.patch, HDFS-14859.005.patch, 
> HDFS-14859.006.patch, HDFS-14859.007.patch
>
>
> There have been improvements like HDFS-14171 and HDFS-14632 to the 
> performance issue caused by getNumLiveDataNodes calls per block. The 
> improvement has only been made w.r.t. whether the 
> dfs.namenode.safemode.min.datanodes parameter is set to 0 or not.
> {code}
>private boolean areThresholdsMet() {
>  assert namesystem.hasWriteLock();
> -int datanodeNum = 
> blockManager.getDatanodeManager().getNumLiveDataNodes();
> +// Calculating the number of live datanodes is time-consuming
> +// in large clusters. Skip it when datanodeThreshold is zero.
> +int datanodeNum = 0;
> +if (datanodeThreshold > 0) {
> +  datanodeNum = blockManager.getDatanodeManager().getNumLiveDataNodes();
> +}
>  synchronized (this) {
>return blockSafe >= blockThreshold && datanodeNum >= datanodeThreshold;
>  }
> {code}
> I feel the above logic would create a similar situation of unnecessary evaluations 
> of getNumLiveDataNodes when the dfs.namenode.safemode.min.datanodes parameter is 
> set > 0, even though "blockSafe >= blockThreshold" is false most of the 
> time in NN startup safe mode. We could do something like the below to avoid this:
> {code}
> private boolean areThresholdsMet() {
>   assert namesystem.hasWriteLock();
>   synchronized (this) {
>     // Only count live datanodes once the block threshold is met and a
>     // datanode threshold is actually configured.
>     return blockSafe >= blockThreshold
>         && (datanodeThreshold <= 0
>             || blockManager.getDatanodeManager().getNumLiveDataNodes()
>                 >= datanodeThreshold);
>   }
> }
> {code}
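The key point is that the costly getNumLiveDataNodes call must sit on the right-hand side of a short-circuiting condition, with the datanode check parenthesized so the ternary/OR doesn't swallow the whole expression. A minimal, self-contained sketch of that evaluation order, with the NN internals replaced by hypothetical plain fields:

```java
public class ThresholdDemo {
  static int liveCalls = 0;

  /** Hypothetical stand-in for blockManager.getDatanodeManager().getNumLiveDataNodes(). */
  static int numLiveDataNodes() {
    liveCalls++;
    return 5;
  }

  /**
   * && short-circuits left to right, so the datanode count only runs when
   * the block threshold is met and a datanode threshold is configured.
   */
  static boolean areThresholdsMet(long blockSafe, long blockThreshold,
                                  int datanodeThreshold) {
    return blockSafe >= blockThreshold
        && (datanodeThreshold <= 0
            || numLiveDataNodes() >= datanodeThreshold);
  }

  public static void main(String[] args) {
    // Block threshold not met: the costly datanode count is never computed.
    boolean met = areThresholdsMet(0, 100, 3);
    if (met || liveCalls != 0) {
      throw new AssertionError("datanode count should have been skipped");
    }

    // Block threshold met and a datanode threshold configured: counted once.
    met = areThresholdsMet(100, 100, 3);
    if (!met || liveCalls != 1) {
      throw new AssertionError("datanode count should run exactly once");
    }
    System.out.println("live-datanode evaluations: " + liveCalls);
  }
}
```

During startup safe mode, "blockSafe >= blockThreshold" is usually false, so this ordering skips the expensive call on almost every invocation.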






[jira] [Commented] (HDFS-14900) Fix build failure of hadoop-hdfs-native-client

2019-10-07 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16946489#comment-16946489
 ] 

Ayush Saxena commented on HDFS-14900:
-

bq.  I did not notice this because the build succeeds if protobuf-2.5.0 is 
installed. protobuf-3.7.1

Oh, I thought you were saying that with protobuf 2.5.0 installed the build passed, 
and with 3.7.1 it didn't.
If I understand it correctly now, you are saying the build fails when no protobuf 
is installed explicitly?
Anyway, if so, do you have a fix for it?

> Fix build failure of hadoop-hdfs-native-client
> --
>
> Key: HDFS-14900
> URL: https://issues.apache.org/jira/browse/HDFS-14900
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Major
>
> The build with the native profile failed due to missing protobuf resources. I did 
> not notice this because the build succeeds if protobuf-2.5.0 is installed. 
> protobuf-3.7.1 is the correct version after HADOOP-16558.






[jira] [Updated] (HDFS-14655) [SBN Read] Namenode crashes if one of The JN is down

2019-10-07 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14655:

Attachment: HDFS-14655-branch-2-02.patch

> [SBN Read] Namenode crashes if one of The JN is down
> 
>
> Key: HDFS-14655
> URL: https://issues.apache.org/jira/browse/HDFS-14655
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Harshakiran Reddy
>Assignee: Ayush Saxena
>Priority: Critical
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-14655-01.patch, HDFS-14655-02.patch, 
> HDFS-14655-03.patch, HDFS-14655-04.patch, HDFS-14655-05.patch, 
> HDFS-14655-06.patch, HDFS-14655-07.patch, HDFS-14655-08.patch, 
> HDFS-14655-branch-2-01.patch, HDFS-14655-branch-2-02.patch, 
> HDFS-14655.poc.patch
>
>
> {noformat}
> 2019-07-04 17:35:54,064 | INFO  | Logger channel (from parallel executor) to 
> XXX/XXX | Retrying connect to server: XXX/XXX. Already tried 
> 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, 
> sleepTime=1000 MILLISECONDS) | Client.java:975
> 2019-07-04 17:35:54,087 | FATAL | Edit log tailer | Unknown error encountered 
> while tailing edits. Shutting down standby NN. | EditLogTailer.java:474
> java.lang.OutOfMemoryError: unable to create new native thread
>   at java.lang.Thread.start0(Native Method)
>   at java.lang.Thread.start(Thread.java:717)
>   at 
> java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:957)
>   at 
> java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1378)
>   at 
> com.google.common.util.concurrent.MoreExecutors$ListeningDecorator.execute(MoreExecutors.java:440)
>   at 
> com.google.common.util.concurrent.AbstractListeningExecutorService.submit(AbstractListeningExecutorService.java:56)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel.getJournaledEdits(IPCLoggerChannel.java:565)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.getJournaledEdits(AsyncLoggerSet.java:272)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.selectRpcInputStreams(QuorumJournalManager.java:533)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.selectInputStreams(QuorumJournalManager.java:508)
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.selectInputStreams(JournalSet.java:275)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1681)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1714)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:307)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:460)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$300(EditLogTailer.java:410)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:427)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:360)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
>   at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:483)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:423)
> 2019-07-04 17:35:54,112 | INFO  | Edit log tailer | Exiting with status 1: 
> java.lang.OutOfMemoryError: unable to create new native thread | 
> ExitUtil.java:210
> {noformat}






[jira] [Updated] (HDFS-14891) RBF: namenode links in NameFederation Health page (federationhealth.html) cannot use https scheme

2019-10-07 Thread Xieming Li (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xieming Li updated HDFS-14891:
--
Attachment: HDFS-14891.patch
Status: Patch Available  (was: Open)

> RBF: namenode links in NameFederation Health page (federationhealth.html)  
> cannot use https scheme
> --
>
> Key: HDFS-14891
> URL: https://issues.apache.org/jira/browse/HDFS-14891
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf, ui
>Reporter: Xieming Li
>Assignee: Xieming Li
>Priority: Major
> Attachments: HDFS-14891.patch
>
>
> The scheme of the links in federationhealth.html is hard-coded as 'http'.
> It should be set to 'https' when dfs.http.policy is HTTPS_ONLY 
> (and maybe HTTP_AND_HTTPS as well).
>  
> [https://github.com/apache/hadoop/blob/c99a12167ff9566012ef32104a3964887d62c899/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/webapps/router/federationhealth.html#L168-L169]
> [https://github.com/apache/hadoop/blob/c99a12167ff9566012ef32104a3964887d62c899/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/webapps/router/federationhealth.html#L236]
>  






[jira] [Commented] (HDFS-14900) Fix build failure of hadoop-hdfs-native-client

2019-10-07 Thread Masatake Iwasaki (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16946484#comment-16946484
 ] 

Masatake Iwasaki commented on HDFS-14900:
-

> Tried on my system, seems working..

[~ayushtkn] If you have protobuf installed on your system, the native build works. 
I think this jira is a follow-up of HADOOP-16620, which removed protobuf from the 
build env.


> Fix build failure of hadoop-hdfs-native-client
> --
>
> Key: HDFS-14900
> URL: https://issues.apache.org/jira/browse/HDFS-14900
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Major
>
> The build with the native profile failed due to missing protobuf resources. I did 
> not notice this because the build succeeds if protobuf-2.5.0 is installed. 
> protobuf-3.7.1 is the correct version after HADOOP-16558.






[jira] [Commented] (HDFS-14900) Fix build failure of hadoop-hdfs-native-client

2019-10-07 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16946449#comment-16946449
 ] 

Ayush Saxena commented on HDFS-14900:
-

Thanx [~iwasakims] for the report.
Tried on my system, seems working..

{noformat}
[INFO] Reactor Summary for Apache Hadoop HDFS Project 3.3.0-SNAPSHOT:
[INFO] 
[INFO] Apache Hadoop HDFS Client .. SUCCESS [ 26.240 s]
[INFO] Apache Hadoop HDFS . SUCCESS [ 39.070 s]
[INFO] Apache Hadoop HDFS Native Client ... SUCCESS [02:14 min]
[INFO] Apache Hadoop HttpFS ... SUCCESS [ 16.890 s]
[INFO] Apache Hadoop HDFS-NFS . SUCCESS [ 11.721 s]
[INFO] Apache Hadoop HDFS-RBF . SUCCESS [ 11.887 s]
[INFO] Apache Hadoop HDFS Project . SUCCESS [  0.274 s]
[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time:  04:01 min
[INFO] Finished at: 2019-10-08T08:51:43+05:30
[INFO] 
ayush@ayushpc:~/hadoop/trunk/hadoop-hdfs-project$ protoc --version
libprotoc 3.7.1
ayush@ayushpc:~/hadoop/trunk/hadoop-hdfs-project$ 

{noformat}

Can you share your OS details, or does something extra need to be done?

> Fix build failure of hadoop-hdfs-native-client
> --
>
> Key: HDFS-14900
> URL: https://issues.apache.org/jira/browse/HDFS-14900
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Major
>
> The build with the native profile failed due to missing protobuf resources. I did 
> not notice this because the build succeeds if protobuf-2.5.0 is installed. 
> protobuf-3.7.1 is the correct version after HADOOP-16558.






[jira] [Commented] (HDFS-14509) DN throws InvalidToken due to inequality of password when upgrade NN 2.x to 3.x

2019-10-07 Thread Yuxuan Wang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16946443#comment-16946443
 ] 

Yuxuan Wang commented on HDFS-14509:


Sorry for the delay. I uploaded the 003 patch. Feel free to take over this jira.
[~shv] Thanks for your patch.
[~vagarychen] Thanks for your review.

> DN throws InvalidToken due to inequality of password when upgrade NN 2.x to 
> 3.x
> ---
>
> Key: HDFS-14509
> URL: https://issues.apache.org/jira/browse/HDFS-14509
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yuxuan Wang
>Priority: Blocker
>  Labels: release-blocker
> Attachments: HDFS-14509-001.patch, HDFS-14509-002.patch, 
> HDFS-14509-003.patch
>
>
> According to the doc, if we want to upgrade a cluster from 2.x to 3.x, we need 
> to upgrade the NN first, and there will be an intermediate state where the NN is 
> 3.x and the DN is 2.x. At that moment, if a client reads (or writes) a block, it 
> will get a block token from the NN and then deliver the token to the DN, which 
> can verify the token. But the verification in the code now is:
> {code:title=BlockTokenSecretManager.java|borderStyle=solid}
> public void checkAccess(...)
> {
> ...
> id.readFields(new DataInputStream(new 
> ByteArrayInputStream(token.getIdentifier())));
> ...
> if (!Arrays.equals(retrievePassword(id), token.getPassword())) {
>   throw new InvalidToken("Block token with " + id.toString()
>   + " doesn't have the correct token password");
> }
> }
> {code} 
> And {{retrievePassword(id)}} is:
> {code} 
> public byte[] retrievePassword(BlockTokenIdentifier identifier)
> {
> ...
> return createPassword(identifier.getBytes(), key.getKey());
> }
> {code} 
> So, if the NN's identifier adds new fields, the DN will lose those fields and 
> compute the wrong password.
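The mismatch can be illustrated with a small sketch; HmacSHA1 over plain strings stands in for the real createPassword over the Writable-serialized BlockTokenIdentifier, and the field names are hypothetical:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class TokenPasswordDemo {
  /**
   * Stand-in for createPassword(identifier.getBytes(), key): an HMAC over
   * the serialized identifier bytes.
   */
  static byte[] password(String serializedIdentifier, byte[] key) throws Exception {
    Mac mac = Mac.getInstance("HmacSHA1");
    mac.init(new SecretKeySpec(key, "HmacSHA1"));
    return mac.doFinal(serializedIdentifier.getBytes(StandardCharsets.UTF_8));
  }

  public static void main(String[] args) throws Exception {
    byte[] key = "shared-secret".getBytes(StandardCharsets.UTF_8);

    // The 3.x NN serializes an identifier that carries a new field.
    byte[] nnPassword = password("userId,blockId,newField", key);

    // The 2.x DN deserializes the token, silently drops the unknown field,
    // and re-serializes before recomputing the password in retrievePassword.
    byte[] dnPassword = password("userId,blockId", key);

    // The HMACs differ, so checkAccess throws InvalidToken.
    if (Arrays.equals(nnPassword, dnPassword)) {
      throw new AssertionError("passwords unexpectedly matched");
    }
    System.out.println("passwords match: false");
  }
}
```

Because the HMAC covers the exact serialized bytes, any field the 2.x DN drops during round-tripping changes the recomputed password and fails the Arrays.equals check in checkAccess.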






[jira] [Work logged] (HDDS-2034) Async RATIS pipeline creation and destroy through heartbeat commands

2019-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2034?focusedWorklogId=324833&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-324833
 ]

ASF GitHub Bot logged work on HDDS-2034:


Author: ASF GitHub Bot
Created on: 08/Oct/19 03:15
Start Date: 08/Oct/19 03:15
Worklog Time Spent: 10m 
  Work Description: ChenSammi commented on pull request #1469: HDDS-2034. 
Async RATIS pipeline creation and destroy through heartbea…
URL: https://github.com/apache/hadoop/pull/1469#discussion_r332321239
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/CreatePipelineCommandHandler.java
 ##
 @@ -0,0 +1,226 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+package org.apache.hadoop.ozone.container.common.statemachine.commandhandler;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.
+StorageContainerDatanodeProtocolProtos.CreatePipelineCommandProto;
+import org.apache.hadoop.hdds.protocol.proto.
+StorageContainerDatanodeProtocolProtos.SCMCommandProto;
+import org.apache.hadoop.hdds.protocol.proto.
+StorageContainerDatanodeProtocolProtos.CreatePipelineACKProto;
+import org.apache.hadoop.hdds.ratis.RatisHelper;
+import org.apache.hadoop.hdds.scm.ScmConfigKeys;
+import org.apache.hadoop.hdds.scm.pipeline.PipelineID;
+import org.apache.hadoop.hdds.security.x509.SecurityConfig;
+import org.apache.hadoop.io.MultipleIOException;
+import org.apache.hadoop.ozone.container.common.statemachine
+.SCMConnectionManager;
+import org.apache.hadoop.ozone.container.common.statemachine.StateContext;
+import org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer;
+import org.apache.hadoop.ozone.protocol.commands.CommandStatus;
+import org.apache.hadoop.ozone.protocol.commands.CreatePipelineCommand;
+import org.apache.hadoop.ozone.protocol.commands.CreatePipelineCommandStatus;
+import org.apache.hadoop.ozone.protocol.commands.SCMCommand;
+import org.apache.hadoop.util.Time;
+import org.apache.ratis.client.RaftClient;
+import org.apache.ratis.grpc.GrpcTlsConfig;
+import org.apache.ratis.protocol.NotLeaderException;
+import org.apache.ratis.protocol.RaftClientReply;
+import org.apache.ratis.protocol.RaftGroup;
+import org.apache.ratis.protocol.RaftGroupId;
+import org.apache.ratis.protocol.RaftPeer;
+import org.apache.ratis.retry.RetryPolicy;
+import org.apache.ratis.rpc.SupportedRpcType;
+import org.apache.ratis.util.TimeDuration;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.List;
+import java.util.concurrent.RejectedExecutionException;
+import java.util.function.Consumer;
+import java.util.stream.Collectors;
+
+/**
+ * Handler for create pipeline command received from SCM.
+ */
+public class CreatePipelineCommandHandler implements CommandHandler {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(CreatePipelineCommandHandler.class);
+
+  private int invocationCount;
+  private long totalTime;
+
+  /**
+   * Constructs a createPipelineCommand handler.
+   */
+  public CreatePipelineCommandHandler() {
+  }
+
+  /**
+   * Handles a given SCM command.
+   *
+   * @param command   - SCM Command
+   * @param ozoneContainer- Ozone Container.
+   * @param context   - Current Context.
+   * @param connectionManager - The SCMs that we are talking to.
+   */
+  @Override
+  public void handle(SCMCommand command, OzoneContainer ozoneContainer,
+  StateContext context, SCMConnectionManager connectionManager) {
+invocationCount++;
+final long startTime = Time.monotonicNow();
+final DatanodeDetails dn = context.getParent()
+.getDatanodeDetails();
+final CreatePipelineCommandProto createCommand =
+((CreatePipelineCommand)command).getProto();
+final PipelineID pipelineID = PipelineID.getFromProtobuf(
+createCommand.getPipelineID());
+

[jira] [Updated] (HDFS-14509) DN throws InvalidToken due to inequality of password when upgrade NN 2.x to 3.x

2019-10-07 Thread Yuxuan Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuxuan Wang updated HDFS-14509:
---
Attachment: HDFS-14509-003.patch

> DN throws InvalidToken due to inequality of password when upgrade NN 2.x to 
> 3.x
> ---
>
> Key: HDFS-14509
> URL: https://issues.apache.org/jira/browse/HDFS-14509
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yuxuan Wang
>Priority: Blocker
>  Labels: release-blocker
> Attachments: HDFS-14509-001.patch, HDFS-14509-002.patch, 
> HDFS-14509-003.patch
>
>
> According to the doc, if we want to upgrade a cluster from 2.x to 3.x, we need 
> to upgrade the NN first. There will be an intermediate state in which the NN is 
> 3.x and the DN is 2.x. At that moment, if a client reads (or writes) a block, it 
> will get a block token from the NN and then deliver the token to the DN, which 
> verifies the token. But the verification in the code now is:
> {code:title=BlockTokenSecretManager.java|borderStyle=solid}
> public void checkAccess(...)
> {
> ...
> id.readFields(new DataInputStream(new 
> ByteArrayInputStream(token.getIdentifier())));
> ...
> if (!Arrays.equals(retrievePassword(id), token.getPassword())) {
>   throw new InvalidToken("Block token with " + id.toString()
>   + " doesn't have the correct token password");
> }
> }
> {code} 
> And {{retrievePassword(id)}} is:
> {code} 
> public byte[] retrievePassword(BlockTokenIdentifier identifier)
> {
> ...
> return createPassword(identifier.getBytes(), key.getKey());
> }
> {code} 
> So, if the NN's identifier adds new fields, the DN will lose those fields and 
> compute the wrong password.
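The root cause can be demonstrated in miniature. The sketch below is illustrative, not Hadoop code: it only assumes a password function that, like the quoted {{createPassword(identifier.getBytes(), key.getKey())}}, computes an HMAC over the serialized identifier bytes. A 2.x DN that deserializes a 3.x identifier, drops the unknown fields, and re-serializes it derives a different password:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.util.Arrays;

// Illustrative sketch only: the field layout and key are made up; the
// HMAC-over-identifier-bytes scheme mirrors what the quoted code does.
public class BlockTokenMismatchDemo {

  // Stand-in for createPassword(identifier.getBytes(), key.getKey()).
  static byte[] createPassword(byte[] identifier, byte[] key) {
    try {
      Mac mac = Mac.getInstance("HmacSHA1");
      mac.init(new SecretKeySpec(key, "HmacSHA1"));
      return mac.doFinal(identifier);
    } catch (GeneralSecurityException e) {
      throw new IllegalStateException(e);
    }
  }

  public static void main(String[] args) {
    byte[] key = "shared-block-key".getBytes(StandardCharsets.UTF_8);
    // A 3.x NN serializes the identifier with an extra trailing field...
    byte[] nnIdentifier =
        "expiry|user|blockId|newField".getBytes(StandardCharsets.UTF_8);
    // ...but a 2.x DN re-serializes only the fields it knows about.
    byte[] dnIdentifier =
        "expiry|user|blockId".getBytes(StandardCharsets.UTF_8);

    byte[] tokenPassword = createPassword(nnIdentifier, key); // in the token
    byte[] dnPassword = createPassword(dnIdentifier, key);    // retrievePassword(id)

    // This inequality is exactly what triggers the InvalidToken above.
    System.out.println(Arrays.equals(tokenPassword, dnPassword)); // prints false
  }
}
```

Since an HMAC differs whenever its input bytes differ, verifying against the raw identifier bytes carried in the token, rather than against a re-serialized copy, avoids the mismatch.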



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2034) Async RATIS pipeline creation and destroy through heartbeat commands

2019-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2034?focusedWorklogId=324828=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-324828
 ]

ASF GitHub Bot logged work on HDDS-2034:


Author: ASF GitHub Bot
Created on: 08/Oct/19 03:09
Start Date: 08/Oct/19 03:09
Worklog Time Spent: 10m 
  Work Description: ChenSammi commented on issue #1469: HDDS-2034. Async 
RATIS pipeline creation and destroy through heartbea…
URL: https://github.com/apache/hadoop/pull/1469#issuecomment-539294113
 
 
   > > I think the purpose of safemode is to guarantee that Ozone cluster is 
ready to provide service to Ozone client once safemode is exited.
   > 
   > @ChenSammi I agree with that. I think the problem occurs with 
OneReplicaPipelineSafeModeRule. This rule makes sure that at least one datanode 
in the old pipeline is reported so that reads for OPEN containers can go 
through. Here I think that old pipelines need to be tracked separately.
   
   OK, I will try to separate the old pipelines from the new ones. 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 324828)
Time Spent: 11.5h  (was: 11h 20m)

> Async RATIS pipeline creation and destroy through heartbeat commands
> 
>
> Key: HDDS-2034
> URL: https://issues.apache.org/jira/browse/HDDS-2034
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Sammi Chen
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 11.5h
>  Remaining Estimate: 0h
>
> Currently, pipeline creation and destroy are synchronous operations. SCM 
> connects directly to each datanode of the pipeline through a gRPC channel to 
> create or destroy the pipeline.
> This task is to remove the gRPC channel and send pipeline creation and destroy 
> actions through heartbeat commands to each datanode.
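The async flow described above can be sketched as a toy command-queue model. All names here are hypothetical, not the actual SCM or datanode APIs: the SCM side enqueues per-datanode commands, and each heartbeat drains and executes whatever is pending.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

// Toy model of piggybacking commands on heartbeats; not Ozone code.
public class HeartbeatCommandSketch {

  interface Command {
    String apply();  // e.g. create or close a pipeline locally
  }

  // "SCM side": commands queued per datanode instead of pushed over gRPC.
  private static final Map<String, Queue<Command>> QUEUES =
      new ConcurrentHashMap<>();

  static void queueCommand(String datanode, Command command) {
    QUEUES.computeIfAbsent(datanode, k -> new ConcurrentLinkedQueue<>())
        .add(command);
  }

  // "Datanode side": a heartbeat drains and executes pending commands.
  static List<String> heartbeat(String datanode) {
    List<String> results = new ArrayList<>();
    Queue<Command> queue =
        QUEUES.getOrDefault(datanode, new ConcurrentLinkedQueue<>());
    for (Command c; (c = queue.poll()) != null; ) {
      results.add(c.apply());
    }
    return results;
  }

  public static void main(String[] args) {
    queueCommand("dn1", () -> "createPipeline(p1)");
    queueCommand("dn1", () -> "closePipeline(p0)");
    System.out.println(heartbeat("dn1"));
  }
}
```

The trade-off, as the safemode discussion above shows, is that SCM can no longer assume a pipeline exists the moment it issues the command; creation completes only after the datanodes act on a later heartbeat.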






[jira] [Commented] (HDFS-14898) Use Relative URLS in Hadoop HDFS HTTP FS

2019-10-07 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16946428#comment-16946428
 ] 

Ayush Saxena commented on HDFS-14898:
-

Thanx [~belugabehr] for the patch. Makes sense.
Can you confirm testing it both ways as in HDFS-12961

> Use Relative URLS in Hadoop HDFS HTTP FS
> 
>
> Key: HDFS-14898
> URL: https://issues.apache.org/jira/browse/HDFS-14898
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
> Attachments: HDFS-14898.1.patch
>
>







[jira] [Work logged] (HDDS-2217) Remove log4j and audit configuration from the docker-config files

2019-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2217?focusedWorklogId=324823=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-324823
 ]

ASF GitHub Bot logged work on HDDS-2217:


Author: ASF GitHub Bot
Created on: 08/Oct/19 02:47
Start Date: 08/Oct/19 02:47
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1582: HDDS-2217. 
Removed redundant LOG4J lines from docker configurations
URL: https://github.com/apache/hadoop/pull/1582#issuecomment-539289204
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 10 | https://github.com/apache/hadoop/pull/1582 does not 
apply to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/1582 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1582/3/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 324823)
Time Spent: 2h 40m  (was: 2.5h)

> Remove log4j and audit configuration from the docker-config files
> -
>
> Key: HDDS-2217
> URL: https://issues.apache.org/jira/browse/HDDS-2217
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: docker
>Reporter: Marton Elek
>Assignee: Chris Teoh
>Priority: Major
>  Labels: newbie, pull-request-available
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> Log4j configuration lines are added to the docker-config under 
> hadoop-ozone/dist/src/main/compose/...
> They are there mainly to make it easier to reconfigure the log level of any 
> component.
> As we already have an "ozone insight" tool which can help us modify the log 
> level at runtime, we don't need these lines any more.
> {code:java}
> LOG4J.PROPERTIES_log4j.rootLogger=INFO, stdout
> LOG4J.PROPERTIES_log4j.appender.stdout=org.apache.log4j.ConsoleAppender
> LOG4J.PROPERTIES_log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
> LOG4J.PROPERTIES_log4j.appender.stdout.layout.ConversionPattern=%d{-MM-dd 
> HH:mm:ss} %-5p %c{1}:%L - %m%n
> LOG4J.PROPERTIES_log4j.logger.org.apache.hadoop.util.NativeCodeLoader=ERROR
> LOG4J.PROPERTIES_log4j.logger.org.apache.ratis.conf.ConfUtils=WARN
> LOG4J.PROPERTIES_log4j.logger.org.apache.hadoop.security.ShellBasedUnixGroupsMapping=ERROR
> LOG4J.PROPERTIES_log4j.logger.org.apache.ratis.grpc.client.GrpcClientProtocolClient=WARN
> LOG4J.PROPERTIES_log4j.logger.http.requests.s3gateway=INFO,s3gatewayrequestlog
> LOG4J.PROPERTIES_log4j.appender.s3gatewayrequestlog=org.apache.hadoop.http.HttpRequestLogAppender
> LOG4J.PROPERTIES_log4j.appender.s3gatewayrequestlog.Filename=/tmp/jetty-s3gateway-_mm_dd.log
> LOG4J.PROPERTIES_log4j.appender.s3gatewayrequestlog.RetainDays=3 {code}
> We can remove them together with the audit log entries, as we already have a 
> default log4j.properties / audit log4j2 config.
> After the removal, the clusters should be tested: the Ozone CLI should not 
> print any confusing log messages (such as NativeLib is missing or anything 
> else). AFAIK they are already turned off in the etc/hadoop/etc log4j.properties.
>  
>  






[jira] [Work logged] (HDDS-2217) Remove log4j and audit configuration from the docker-config files

2019-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2217?focusedWorklogId=324822=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-324822
 ]

ASF GitHub Bot logged work on HDDS-2217:


Author: ASF GitHub Bot
Created on: 08/Oct/19 02:37
Start Date: 08/Oct/19 02:37
Worklog Time Spent: 10m 
  Work Description: christeoh commented on issue #1582: HDDS-2217. Removed 
redundant LOG4J lines from docker configurations
URL: https://github.com/apache/hadoop/pull/1582#issuecomment-539286942
 
 
   /retest
 



Issue Time Tracking
---

Worklog Id: (was: 324822)
Time Spent: 2.5h  (was: 2h 20m)

> Remove log4j and audit configuration from the docker-config files
> -
>
> Key: HDDS-2217
> URL: https://issues.apache.org/jira/browse/HDDS-2217
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: docker
>Reporter: Marton Elek
>Assignee: Chris Teoh
>Priority: Major
>  Labels: newbie, pull-request-available
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>






[jira] [Work logged] (HDDS-1986) Fix listkeys API

2019-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1986?focusedWorklogId=324821=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-324821
 ]

ASF GitHub Bot logged work on HDDS-1986:


Author: ASF GitHub Bot
Created on: 08/Oct/19 02:35
Start Date: 08/Oct/19 02:35
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #1588: HDDS-1986. 
Fix listkeys API.
URL: https://github.com/apache/hadoop/pull/1588#discussion_r332314620
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
 ##
 @@ -680,26 +688,85 @@ public boolean isBucketEmpty(String volume, String 
bucket)
   seekPrefix = getBucketKey(volumeName, bucketName + OM_KEY_PREFIX);
 }
 int currentCount = 0;
-try (TableIterator<String, ? extends KeyValue<String, OmKeyInfo>> keyIter =
-getKeyTable()
-.iterator()) {
-  KeyValue<String, OmKeyInfo> kv = keyIter.seek(seekKey);
-  while (currentCount < maxKeys && keyIter.hasNext()) {
-kv = keyIter.next();
-// Skip the Start key if needed.
-if (kv != null && skipStartKey && kv.getKey().equals(seekKey)) {
-  continue;
+
+
+TreeMap<String, OmKeyInfo> cacheKeyMap = new TreeMap<>();
+Set<String> deletedKeySet = new TreeSet<>();
+Iterator<Map.Entry<CacheKey<String>, CacheValue<OmKeyInfo>>> iterator =
+keyTable.cacheIterator();
+
+//TODO: We can avoid this iteration if table cache has stored entries in
+// treemap. Currently HashMap is used in Cache. HashMap get operation is a
+// constant time operation, whereas for treeMap get is log(n).
+// So if we move to treemap, the get operation will be affected. As get
+// is frequent operation on table. So, for now in list we iterate cache map
+// and construct treeMap which match with keyPrefix and are greater than or
+// equal to startKey. Later we can revisit this, if list operation
+// is becoming slow.
+while (iterator.hasNext()) {
 
 Review comment:
   I am ok with putting this change in if we can prove that we can do large 
list keys. You might want to borrow the DB from @nandakumar131 and see if you 
can list keys with this patch, just a thought.
 



Issue Time Tracking
---

Worklog Id: (was: 324821)
Time Spent: 2h  (was: 1h 50m)

> Fix listkeys API
> 
>
> Key: HDDS-1986
> URL: https://issues.apache.org/jira/browse/HDDS-1986
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> This Jira is to fix listKeys API in HA code path.
> In HA, we have an in-memory cache: we put the result into the cache and return 
> the response, and later a double-buffer thread picks it up and flushes it to 
> disk. So listKeys should now use both the in-memory cache and the RocksDB key 
> table to list keys in a bucket.
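A hedged sketch of that merge, using plain JDK collections instead of the actual Table/CacheKey/CacheValue types (all names here are illustrative): results come from the sorted on-disk table, overlaid with cached creates and filtered by cached deletes.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.NavigableMap;
import java.util.Set;
import java.util.TreeMap;

// Hypothetical sketch of the list-with-cache idea; not the Ozone code.
public class ListWithCacheSketch {

  // Cache entries with a null value represent deletes not yet flushed to disk.
  static List<String> listKeys(NavigableMap<String, String> table,
                               Map<String, String> cache,
                               String prefix, String startKey, int maxKeys) {
    // Start from the sorted on-disk view at startKey.
    TreeMap<String, String> merged = new TreeMap<>(table.tailMap(startKey, true));
    Set<String> deletedKeySet = new HashSet<>();
    // Overlay the unordered cache: apply creates, record deletes.
    for (Map.Entry<String, String> e : cache.entrySet()) {
      if (!e.getKey().startsWith(prefix) || e.getKey().compareTo(startKey) < 0) {
        continue;
      }
      if (e.getValue() == null) {
        deletedKeySet.add(e.getKey());
        merged.remove(e.getKey());
      } else {
        merged.put(e.getKey(), e.getValue());
      }
    }
    List<String> result = new ArrayList<>();
    for (String key : merged.keySet()) {
      if (result.size() >= maxKeys) {
        break;
      }
      if (key.startsWith(prefix) && !deletedKeySet.contains(key)) {
        result.add(key);
      }
    }
    return result;
  }

  public static void main(String[] args) {
    TreeMap<String, String> table = new TreeMap<>();
    table.put("/vol/buck/a", "1");
    table.put("/vol/buck/b", "2");
    Map<String, String> cache = new TreeMap<>();
    cache.put("/vol/buck/c", "3");   // created, not yet flushed
    System.out.println(listKeys(table, cache, "/vol/buck/", "/vol/buck/", 10));
  }
}
```

The separate deleted-key set mirrors the patch's deletedKeySet: a delete sitting only in the cache must hide a key that is still present in the on-disk table.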






[jira] [Work logged] (HDDS-1986) Fix listkeys API

2019-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1986?focusedWorklogId=324819=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-324819
 ]

ASF GitHub Bot logged work on HDDS-1986:


Author: ASF GitHub Bot
Created on: 08/Oct/19 02:34
Start Date: 08/Oct/19 02:34
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #1588: HDDS-1986. 
Fix listkeys API.
URL: https://github.com/apache/hadoop/pull/1588#discussion_r332255281
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
 ##
 @@ -645,7 +648,12 @@ public boolean isBucketEmpty(String volume, String bucket)
   @Override
   public List<OmKeyInfo> listKeys(String volumeName, String bucketName,
   String startKey, String keyPrefix, int maxKeys) throws IOException {
+
 List<OmKeyInfo> result = new ArrayList<>();
+if (maxKeys == 0) {
 
 Review comment:
   or <= 0 ?
 



Issue Time Tracking
---

Worklog Id: (was: 324819)
Time Spent: 1h 40m  (was: 1.5h)

> Fix listkeys API
> 
>
> Key: HDDS-1986
> URL: https://issues.apache.org/jira/browse/HDDS-1986
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>






[jira] [Work logged] (HDDS-1986) Fix listkeys API

2019-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1986?focusedWorklogId=324818=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-324818
 ]

ASF GitHub Bot logged work on HDDS-1986:


Author: ASF GitHub Bot
Created on: 08/Oct/19 02:34
Start Date: 08/Oct/19 02:34
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #1588: HDDS-1986. 
Fix listkeys API.
URL: https://github.com/apache/hadoop/pull/1588#discussion_r332256331
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
 ##
 @@ -680,26 +688,85 @@ public boolean isBucketEmpty(String volume, String 
bucket)
   seekPrefix = getBucketKey(volumeName, bucketName + OM_KEY_PREFIX);
 }
 int currentCount = 0;
-try (TableIterator<String, ? extends KeyValue<String, OmKeyInfo>> keyIter =
-getKeyTable()
-.iterator()) {
-  KeyValue<String, OmKeyInfo> kv = keyIter.seek(seekKey);
-  while (currentCount < maxKeys && keyIter.hasNext()) {
-kv = keyIter.next();
-// Skip the Start key if needed.
-if (kv != null && skipStartKey && kv.getKey().equals(seekKey)) {
-  continue;
+
+
+TreeMap<String, OmKeyInfo> cacheKeyMap = new TreeMap<>();
+Set<String> deletedKeySet = new TreeSet<>();
+Iterator<Map.Entry<CacheKey<String>, CacheValue<OmKeyInfo>>> iterator =
+keyTable.cacheIterator();
+
+//TODO: We can avoid this iteration if table cache has stored entries in
+// treemap. Currently HashMap is used in Cache. HashMap get operation is a
+// constant time operation, whereas for treeMap get is log(n).
+// So if we move to treemap, the get operation will be affected. As get
+// is frequent operation on table. So, for now in list we iterate cache map
+// and construct treeMap which match with keyPrefix and are greater than or
+// equal to startKey. Later we can revisit this, if list operation
+// is becoming slow.
+while (iterator.hasNext()) {
 
 Review comment:
   I feel that we are better off leaving the old code in place, where we can 
read from the DB. Worst case, we might have to make sure that the cache is 
flushed to the DB before doing the list operation. But practically it may not 
matter.
   
 



Issue Time Tracking
---

Worklog Id: (was: 324818)
Time Spent: 1.5h  (was: 1h 20m)

> Fix listkeys API
> 
>
> Key: HDDS-1986
> URL: https://issues.apache.org/jira/browse/HDDS-1986
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>






[jira] [Work logged] (HDDS-1986) Fix listkeys API

2019-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1986?focusedWorklogId=324820=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-324820
 ]

ASF GitHub Bot logged work on HDDS-1986:


Author: ASF GitHub Bot
Created on: 08/Oct/19 02:34
Start Date: 08/Oct/19 02:34
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #1588: HDDS-1986. 
Fix listkeys API.
URL: https://github.com/apache/hadoop/pull/1588#discussion_r332255873
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
 ##
 @@ -680,26 +688,85 @@ public boolean isBucketEmpty(String volume, String 
bucket)
   seekPrefix = getBucketKey(volumeName, bucketName + OM_KEY_PREFIX);
 }
 int currentCount = 0;
-try (TableIterator<String, ? extends KeyValue<String, OmKeyInfo>> keyIter =
-getKeyTable()
-.iterator()) {
-  KeyValue<String, OmKeyInfo> kv = keyIter.seek(seekKey);
-  while (currentCount < maxKeys && keyIter.hasNext()) {
-kv = keyIter.next();
-// Skip the Start key if needed.
-if (kv != null && skipStartKey && kv.getKey().equals(seekKey)) {
-  continue;
+
+
+TreeMap<String, OmKeyInfo> cacheKeyMap = new TreeMap<>();
+Set<String> deletedKeySet = new TreeSet<>();
+Iterator<Map.Entry<CacheKey<String>, CacheValue<OmKeyInfo>>> iterator =
+keyTable.cacheIterator();
+
+//TODO: We can avoid this iteration if table cache has stored entries in
+// treemap. Currently HashMap is used in Cache. HashMap get operation is a
+// constant time operation, whereas for treeMap get is log(n).
+// So if we move to treemap, the get operation will be affected. As get
+// is frequent operation on table. So, for now in list we iterate cache map
+// and construct treeMap which match with keyPrefix and are greater than or
+// equal to startKey. Later we can revisit this, if list operation
+// is becoming slow.
+while (iterator.hasNext()) {
 
 Review comment:
   How many keys are expected in this cache? and how many in the tree ? 
 



Issue Time Tracking
---

Worklog Id: (was: 324820)
Time Spent: 1h 50m  (was: 1h 40m)

> Fix listkeys API
> 
>
> Key: HDDS-1986
> URL: https://issues.apache.org/jira/browse/HDDS-1986
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>






[jira] [Work logged] (HDDS-1984) Fix listBucket API

2019-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1984?focusedWorklogId=324817=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-324817
 ]

ASF GitHub Bot logged work on HDDS-1984:


Author: ASF GitHub Bot
Created on: 08/Oct/19 02:31
Start Date: 08/Oct/19 02:31
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #1555: HDDS-1984. 
Fix listBucket API.
URL: https://github.com/apache/hadoop/pull/1555#discussion_r332313947
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
 ##
 @@ -618,23 +618,31 @@ public boolean isBucketEmpty(String volume, String 
bucket)
 }
 int currentCount = 0;
 
-try (TableIterator<String, ? extends KeyValue<String, OmBucketInfo>>
-bucketIter = bucketTable.iterator()) {
-  KeyValue<String, OmBucketInfo> kv = bucketIter.seek(startKey);
-  while (currentCount < maxNumOfBuckets && bucketIter.hasNext()) {
-kv = bucketIter.next();
-// Skip the Start Bucket if needed.
-if (kv != null && skipStartKey &&
-kv.getKey().equals(startKey)) {
+
+// For Bucket it is full cache, so we can just iterate in-memory table
+// cache.
+Iterator<Map.Entry<CacheKey<String>, CacheValue<OmBucketInfo>>> iterator =
 
 Review comment:
   You can consider this comment resolved. Thanks for the explanation. I am 
leaving it open for other reviewers who might want to read this patch.
 



Issue Time Tracking
---

Worklog Id: (was: 324817)
Time Spent: 2.5h  (was: 2h 20m)

> Fix listBucket API
> --
>
> Key: HDDS-1984
> URL: https://issues.apache.org/jira/browse/HDDS-1984
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> This Jira is to fix listBucket API in HA code path.
> In HA, we have an in-memory cache: we put the result into the cache and return 
> the response, and later a double-buffer thread picks it up and flushes it to 
> disk. So listBuckets should now use both the in-memory cache and the RocksDB 
> bucket table to list buckets in a volume.






[jira] [Work logged] (HDDS-1984) Fix listBucket API

2019-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1984?focusedWorklogId=324816=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-324816
 ]

ASF GitHub Bot logged work on HDDS-1984:


Author: ASF GitHub Bot
Created on: 08/Oct/19 02:30
Start Date: 08/Oct/19 02:30
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #1555: HDDS-1984. 
Fix listBucket API.
URL: https://github.com/apache/hadoop/pull/1555#discussion_r332313824
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
 ##
 @@ -618,23 +618,31 @@ public boolean isBucketEmpty(String volume, String 
bucket)
 }
 int currentCount = 0;
 
-try (TableIterator<String, ? extends KeyValue<String, OmBucketInfo>>
-bucketIter = bucketTable.iterator()) {
-  KeyValue<String, OmBucketInfo> kv = bucketIter.seek(startKey);
-  while (currentCount < maxNumOfBuckets && bucketIter.hasNext()) {
-kv = bucketIter.next();
-// Skip the Start Bucket if needed.
-if (kv != null && skipStartKey &&
-kv.getKey().equals(startKey)) {
+
+// For Bucket it is full cache, so we can just iterate in-memory table
+// cache.
+Iterator<Map.Entry<CacheKey<String>, CacheValue<OmBucketInfo>>> iterator =
 
 Review comment:
   > But when we want to list all buckets is /vol2, we will iterate the entries 
from start, and reach to /vol2 in the cache and once the maximum count is 
reached we return from there.
   
   The architecture of SkipList saves us from iterating over all the keys. That 
is good enough; I was worried that we would walk all the entries. I missed that 
we were using a skipList-based map.
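The property being discussed, assuming a ConcurrentSkipListMap-style cache (names here are illustrative, not Ozone's), is that sorted keys let a range scan jump straight to the volume prefix instead of walking every entry:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentSkipListMap;

// Minimal sketch of a range scan over a skip-list map; not Ozone code.
public class SkipListRangeScan {

  // Jump to the prefix with tailMap, stop as soon as keys leave it.
  static List<String> bucketsUnder(ConcurrentSkipListMap<String, String> cache,
                                   String volumePrefix) {
    List<String> out = new ArrayList<>();
    for (Map.Entry<String, String> e : cache.tailMap(volumePrefix).entrySet()) {
      if (!e.getKey().startsWith(volumePrefix)) {
        break;  // keys are sorted, so nothing later can match
      }
      out.add(e.getKey());
    }
    return out;
  }

  public static void main(String[] args) {
    ConcurrentSkipListMap<String, String> cache = new ConcurrentSkipListMap<>();
    cache.put("/vol1/a", "info");
    cache.put("/vol1/b", "info");
    cache.put("/vol2/a", "info");
    cache.put("/vol3/a", "info");
    System.out.println(bucketsUnder(cache, "/vol2/")); // prints [/vol2/a]
  }
}
```

The scan touches only the entries inside the requested range plus one terminator, regardless of how many volumes precede or follow it.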
   
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 324816)
Time Spent: 2h 20m  (was: 2h 10m)

> Fix listBucket API
> --
>
> Key: HDDS-1984
> URL: https://issues.apache.org/jira/browse/HDDS-1984
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>






[jira] [Work logged] (HDDS-2244) Use new ReadWrite lock in OzoneManager

2019-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2244?focusedWorklogId=324815=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-324815
 ]

ASF GitHub Bot logged work on HDDS-2244:


Author: ASF GitHub Bot
Created on: 08/Oct/19 02:29
Start Date: 08/Oct/19 02:29
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #1589: HDDS-2244. Use 
new ReadWrite lock in OzoneManager.
URL: https://github.com/apache/hadoop/pull/1589#discussion_r332313554
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/lock/OzoneManagerLock.java
 ##
 @@ -105,15 +109,66 @@ public OzoneManagerLock(Configuration conf) {
* should be bucket name. For remaining all resource only one param should
* be passed.
*/
+  @Deprecated
   public boolean acquireLock(Resource resource, String... resources) {
 String resourceName = generateResourceName(resource, resources);
+return lock(resource, resourceName, manager::writeLock, WRITE_LOCK);
+  }
+
+  /**
+   * Acquire read lock on resource.
+   *
+   * For S3_BUCKET_LOCK, VOLUME_LOCK, BUCKET_LOCK type resource, same
+   * thread acquiring lock again is allowed.
+   *
+   * For USER_LOCK, PREFIX_LOCK, S3_SECRET_LOCK type resource, same thread
 
 Review comment:
   Why is that?
 



Issue Time Tracking
---

Worklog Id: (was: 324815)
Time Spent: 2h 50m  (was: 2h 40m)

> Use new ReadWrite lock in OzoneManager
> --
>
> Key: HDDS-2244
> URL: https://issues.apache.org/jira/browse/HDDS-2244
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> Use new ReadWriteLock added in HDDS-2223.
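For context, a minimal sketch of the read/write-lock split with the JDK's ReentrantReadWriteLock (illustrative only; the real OzoneManagerLock wraps this with per-resource lock names and acquire ordering): readers share the lock, writers are exclusive, and the read lock is re-entrant within one thread.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch of the shared/exclusive pattern being adopted; not the Ozone API.
public class RwLockSketch {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
  private String bucketInfo = "v1";

  String read() {
    lock.readLock().lock();   // shared: many readers may hold this at once
    try {
      return bucketInfo;
    } finally {
      lock.readLock().unlock();
    }
  }

  void write(String value) {
    lock.writeLock().lock();  // exclusive: blocks readers and other writers
    try {
      bucketInfo = value;
    } finally {
      lock.writeLock().unlock();
    }
  }

  public static void main(String[] args) {
    RwLockSketch sketch = new RwLockSketch();
    sketch.write("v2");
    System.out.println(sketch.read()); // prints v2
  }
}
```

The win over a plain mutex is that read-heavy paths such as key lookups no longer serialize behind each other; only writes take the exclusive lock.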






[jira] [Commented] (HDFS-14895) Define LOG instead of BlockPlacementPolicy.LOG in DatanodeDescriptor#chooseStorage4Block

2019-10-07 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16946401#comment-16946401
 ] 

Lisheng Sun commented on HDFS-14895:


[~ayushtkn]

HDFS-9023 uses BlockPlacementPolicy.LOG.debug instead of LOG.debug. I think 
LOG.debug is more reasonable, since it gives a unified way of defining the 
logger. Please correct me if I was wrong. Thank you a lot [~ayushtkn]

> Define LOG instead of BlockPlacementPolicy.LOG in 
> DatanodeDescriptor#chooseStorage4Block
> 
>
> Key: HDFS-14895
> URL: https://issues.apache.org/jira/browse/HDFS-14895
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14895.001.patch
>
>
> There is a noisy log with BlockPlacementPolicy.LOG, it's too hard to debug 
> problem. Define LOG instead of it in DatanodeDescriptor#chooseStorage4Block.






[jira] [Work logged] (HDDS-2244) Use new ReadWrite lock in OzoneManager

2019-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2244?focusedWorklogId=324808=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-324808
 ]

ASF GitHub Bot logged work on HDDS-2244:


Author: ASF GitHub Bot
Created on: 08/Oct/19 02:02
Start Date: 08/Oct/19 02:02
Worklog Time Spent: 10m 
  Work Description: arp7 commented on issue #1589: HDDS-2244. Use new 
ReadWrite lock in OzoneManager.
URL: https://github.com/apache/hadoop/pull/1589#issuecomment-539279865
 
 
   I looked through the JDK implementation of read-write locks a couple of 
years ago. Even in non-fair mode there is prevention against starvation. HDFS 
uses non-fair mode by default and works well even for very busy Name Nodes.
   
   However we can make the lock fair for now, and evaluate making it non-fair 
later.
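The fair vs. non-fair trade-off described above can be illustrated with a small sketch (hypothetical class name; `ReentrantReadWriteLock` is the standard JDK read-write lock, and fairness is chosen in its constructor):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class LockFairnessSketch {
    // Fair mode: waiting threads acquire the lock roughly in arrival order,
    // which prevents writer starvation at some throughput cost.
    static final ReentrantReadWriteLock FAIR = new ReentrantReadWriteLock(true);

    // Non-fair mode (the JDK default): higher throughput; the JDK
    // implementation still has internal safeguards against starvation.
    static final ReentrantReadWriteLock NON_FAIR = new ReentrantReadWriteLock();

    // Typical read-path usage: always release in a finally block.
    static int guardedRead() {
        FAIR.readLock().lock();
        try {
            return 42; // read shared state under the read lock
        } finally {
            FAIR.readLock().unlock();
        }
    }

    public static void main(String[] args) {
        System.out.println(FAIR.isFair());     // true
        System.out.println(NON_FAIR.isFair()); // false
        System.out.println(guardedRead());     // 42
    }
}
```

Switching between the two later is a one-argument change, which is what makes "fair for now, evaluate non-fair later" cheap.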
 



Issue Time Tracking
---

Worklog Id: (was: 324808)
Time Spent: 2h 40m  (was: 2.5h)

> Use new ReadWrite lock in OzoneManager
> --
>
> Key: HDDS-2244
> URL: https://issues.apache.org/jira/browse/HDDS-2244
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> Use new ReadWriteLock added in HDDS-2223.






[jira] [Work logged] (HDDS-1984) Fix listBucket API

2019-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1984?focusedWorklogId=324800=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-324800
 ]

ASF GitHub Bot logged work on HDDS-1984:


Author: ASF GitHub Bot
Created on: 08/Oct/19 01:44
Start Date: 08/Oct/19 01:44
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #1555: HDDS-1984. Fix 
listBucket API.
URL: https://github.com/apache/hadoop/pull/1555#discussion_r332306087
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/utils/db/cache/TableCacheImpl.java
 ##
 @@ -47,15 +48,22 @@
public class TableCacheImpl<CACHEKEY extends CacheKey, CACHEVALUE extends CacheValue>
    implements TableCache<CACHEKEY, CACHEVALUE> {
 
-  private final ConcurrentHashMap<CACHEKEY, CACHEVALUE> cache;
+  private final Map<CACHEKEY, CACHEVALUE> cache;
   private final NavigableSet<EpochEntry<CACHEKEY>> epochEntries;
   private ExecutorService executorService;
   private CacheCleanupPolicy cleanupPolicy;
 
 
 
   public TableCacheImpl(CacheCleanupPolicy cleanupPolicy) {
-cache = new ConcurrentHashMap<>();
+
+// As for full table cache only we need elements to be inserted in sorted
+// manner, so that list will be easy. For other we can go with Hash map.
+if (cleanupPolicy == CacheCleanupPolicy.NEVER) {
 
 Review comment:
   We do need a solution for this, it's not very good to keep adding special 
checks for Cache policy NEVER. The code will become fragile.
   
   I think when we first added cache policies you proposed making it an 
interface. Let's do that now.
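A rough sketch of that interface-based approach (all names hypothetical, not the actual HDDS API): each cleanup policy supplies its own backing map, so the table cache never branches on a policy enum.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentSkipListMap;

// Each policy decides which map suits it, instead of callers checking
// "if (cleanupPolicy == NEVER)" at every use site.
interface CachePolicy<K extends Comparable<K>, V> {
    Map<K, V> createBackingMap();
}

class NeverEvictPolicy<K extends Comparable<K>, V> implements CachePolicy<K, V> {
    @Override
    public Map<K, V> createBackingMap() {
        // Full-table cache: sorted map so list operations are cheap.
        return new ConcurrentSkipListMap<>();
    }
}

class ManualEvictPolicy<K extends Comparable<K>, V> implements CachePolicy<K, V> {
    @Override
    public Map<K, V> createBackingMap() {
        // Partial cache: hash map keeps get() constant time.
        return new ConcurrentHashMap<>();
    }
}

public class PolicyDemo {
    public static void main(String[] args) {
        Map<String, String> full =
            new NeverEvictPolicy<String, String>().createBackingMap();
        Map<String, String> partial =
            new ManualEvictPolicy<String, String>().createBackingMap();
        System.out.println(full.getClass().getSimpleName());    // ConcurrentSkipListMap
        System.out.println(partial.getClass().getSimpleName()); // ConcurrentHashMap
    }
}
```

Further policy-specific behaviour (eviction hooks, epoch cleanup) could move behind the same interface.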
 



Issue Time Tracking
---

Worklog Id: (was: 324800)
Time Spent: 2h 10m  (was: 2h)

> Fix listBucket API
> --
>
> Key: HDDS-1984
> URL: https://issues.apache.org/jira/browse/HDDS-1984
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> This Jira is to fix listBucket API in HA code path.
> In HA, we have an in-memory cache: we put the result into the cache and 
> return the response; later it is picked up by the double-buffer thread and 
> flushed to disk. So now, when we do listBuckets, it should use both the 
> in-memory cache and the RocksDB bucket table to list buckets in a volume.






[jira] [Work logged] (HDDS-1984) Fix listBucket API

2019-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1984?focusedWorklogId=324799=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-324799
 ]

ASF GitHub Bot logged work on HDDS-1984:


Author: ASF GitHub Bot
Created on: 08/Oct/19 01:43
Start Date: 08/Oct/19 01:43
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #1555: HDDS-1984. Fix 
listBucket API.
URL: https://github.com/apache/hadoop/pull/1555#discussion_r332305881
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/utils/db/cache/CacheKey.java
 ##
 @@ -53,4 +53,18 @@ public boolean equals(Object o) {
   public int hashCode() {
 return Objects.hash(key);
   }
+
+  @Override
+  public int compareTo(Object o) {
+if(Objects.equals(key, ((CacheKey)o).key)) {
+  return 0;
+} else {
+  if (key instanceof String) {
+return ((String) key).compareTo((String) ((CacheKey)o).key);
+  } else {
+// If not type string, convert to string and compare.
+return key.toString().compareTo((((CacheKey) o).key).toString());
 
 Review comment:
   I think the correct fix is to make OzoneTokenIdentifier a Comparable, and 
then enforce that the CacheKey implements Comparable.
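A minimal sketch of that suggestion (hypothetical class name `TypedCacheKey`; the real `CacheKey` would keep its existing fields): constraining the key type parameter to `Comparable` lets `compareTo` delegate to the key itself, removing the `instanceof`/`toString` fallback entirely.

```java
import java.util.Objects;

// Hypothetical sketch: KEY must be Comparable, so comparison needs no casts.
public class TypedCacheKey<KEY extends Comparable<KEY>>
        implements Comparable<TypedCacheKey<KEY>> {

    private final KEY key;

    public TypedCacheKey(KEY key) {
        this.key = key;
    }

    public KEY getCacheKey() {
        return key;
    }

    @Override
    public int compareTo(TypedCacheKey<KEY> other) {
        // Delegates to the key's own ordering; no toString fallback.
        return key.compareTo(other.key);
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) {
            return true;
        }
        if (!(o instanceof TypedCacheKey)) {
            return false;
        }
        return Objects.equals(key, ((TypedCacheKey<?>) o).key);
    }

    @Override
    public int hashCode() {
        return Objects.hash(key);
    }

    public static void main(String[] args) {
        TypedCacheKey<String> a = new TypedCacheKey<>("a");
        TypedCacheKey<String> b = new TypedCacheKey<>("b");
        System.out.println(a.compareTo(b) < 0); // true: String ordering
    }
}
```

Under this design, making OzoneTokenIdentifier Comparable is the only remaining change for the token table.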
 



Issue Time Tracking
---

Worklog Id: (was: 324799)
Time Spent: 2h  (was: 1h 50m)

> Fix listBucket API
> --
>
> Key: HDDS-1984
> URL: https://issues.apache.org/jira/browse/HDDS-1984
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> This Jira is to fix listBucket API in HA code path.
> In HA, we have an in-memory cache: we put the result into the cache and 
> return the response; later it is picked up by the double-buffer thread and 
> flushed to disk. So now, when we do listBuckets, it should use both the 
> in-memory cache and the RocksDB bucket table to list buckets in a volume.






[jira] [Work logged] (HDDS-1986) Fix listkeys API

2019-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1986?focusedWorklogId=324798=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-324798
 ]

ASF GitHub Bot logged work on HDDS-1986:


Author: ASF GitHub Bot
Created on: 08/Oct/19 01:35
Start Date: 08/Oct/19 01:35
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #1588: HDDS-1986. Fix 
listkeys API.
URL: https://github.com/apache/hadoop/pull/1588#discussion_r332304634
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestOmMetadataManager.java
 ##
 @@ -0,0 +1,298 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+
+package org.apache.hadoop.ozone.om;
+import com.google.common.base.Optional;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.protocol.StorageType;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.utils.db.cache.CacheKey;
+import org.apache.hadoop.hdds.utils.db.cache.CacheValue;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.request.TestOMRequestUtils;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TemporaryFolder;
+
+import java.util.List;
+import java.util.TreeSet;
+
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_DB_DIRS;
+
+/**
+ * Tests OzoneManager MetadataManager.
+ */
+public class TestOmMetadataManager {
+
+  private OMMetadataManager omMetadataManager;
+  private OzoneConfiguration ozoneConfiguration;
+
+  @Rule
+  public TemporaryFolder folder = new TemporaryFolder();
+
+
+  @Before
+  public void setup() throws Exception {
+ozoneConfiguration = new OzoneConfiguration();
+ozoneConfiguration.set(OZONE_OM_DB_DIRS,
+folder.getRoot().getAbsolutePath());
+omMetadataManager = new OmMetadataManagerImpl(ozoneConfiguration);
+  }
+  @Test
+  public void testListKeys() throws Exception {
+
+String volumeNameA = "volumeA";
+String volumeNameB = "volumeB";
+String ozoneBucket = "ozoneBucket";
+String hadoopBucket = "hadoopBucket";
+
+
+// Create volumes and buckets.
+TestOMRequestUtils.addVolumeToDB(volumeNameA, omMetadataManager);
+TestOMRequestUtils.addVolumeToDB(volumeNameB, omMetadataManager);
+addBucketsToCache(volumeNameA, ozoneBucket);
+addBucketsToCache(volumeNameB, hadoopBucket);
+
+
+String prefixKeyA = "key-a";
+String prefixKeyB = "key-b";
+TreeSet<String> keysASet = new TreeSet<>();
+TreeSet<String> keysBSet = new TreeSet<>();
+for (int i=1; i<= 100; i++) {
+  if (i % 2 == 0) {
+keysASet.add(
+prefixKeyA + i);
+addKeysToOM(volumeNameA, ozoneBucket, prefixKeyA + i, i);
+  } else {
+keysBSet.add(
+prefixKeyB + i);
+addKeysToOM(volumeNameA, hadoopBucket, prefixKeyB + i, i);
+  }
+}
+
+
+TreeSet<String> keysAVolumeBSet = new TreeSet<>();
+TreeSet<String> keysBVolumeBSet = new TreeSet<>();
+for (int i=1; i<= 100; i++) {
+  if (i % 2 == 0) {
+keysAVolumeBSet.add(
+prefixKeyA + i);
+addKeysToOM(volumeNameB, ozoneBucket, prefixKeyA + i, i);
+  } else {
+keysBVolumeBSet.add(
+prefixKeyB + i);
+addKeysToOM(volumeNameB, hadoopBucket, prefixKeyB + i, i);
+  }
+}
+
+
+// List all keys which have prefix "key-a"
+List<OmKeyInfo> omKeyInfoList =
+omMetadataManager.listKeys(volumeNameA, ozoneBucket,
+null, prefixKeyA, 100);
+
+Assert.assertEquals(omKeyInfoList.size(),  50);
+
+for (OmKeyInfo omKeyInfo : omKeyInfoList) {
+  Assert.assertTrue(omKeyInfo.getKeyName().startsWith(
+  prefixKeyA));
+}
+
+
+String startKey = prefixKeyA + 10;
+omKeyInfoList =
+omMetadataManager.listKeys(volumeNameA, ozoneBucket,
+startKey, prefixKeyA, 100);
+
+Assert.assertEquals(keysASet.tailSet(
+startKey).size() - 1, omKeyInfoList.size());
+
+startKey = prefixKeyA + 38;
+

[jira] [Work logged] (HDDS-1986) Fix listkeys API

2019-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1986?focusedWorklogId=324797=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-324797
 ]

ASF GitHub Bot logged work on HDDS-1986:


Author: ASF GitHub Bot
Created on: 08/Oct/19 01:30
Start Date: 08/Oct/19 01:30
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #1588: HDDS-1986. Fix 
listkeys API.
URL: https://github.com/apache/hadoop/pull/1588#discussion_r332303770
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
 ##
 @@ -680,26 +688,85 @@ public boolean isBucketEmpty(String volume, String 
bucket)
   seekPrefix = getBucketKey(volumeName, bucketName + OM_KEY_PREFIX);
 }
 int currentCount = 0;
-try (TableIterator> keyIter =
-getKeyTable()
-.iterator()) {
-  KeyValue kv = keyIter.seek(seekKey);
-  while (currentCount < maxKeys && keyIter.hasNext()) {
-kv = keyIter.next();
-// Skip the Start key if needed.
-if (kv != null && skipStartKey && kv.getKey().equals(seekKey)) {
-  continue;
+
+
+TreeMap<String, OmKeyInfo> cacheKeyMap = new TreeMap<>();
+Set<String> deletedKeySet = new TreeSet<>();
+Iterator<Map.Entry<CacheKey<String>, CacheValue<OmKeyInfo>>> iterator =
+keyTable.cacheIterator();
+
+//TODO: We can avoid this iteration if the table cache stored entries in a
+// TreeMap. Currently a HashMap is used in the cache. HashMap get is a
+// constant-time operation, whereas TreeMap get is log(n).
+// So if we move to a TreeMap, the get operation will be affected, and get
+// is a frequent operation on the table. So for now, in list we iterate the
+// cache map and construct a TreeMap of entries which match the keyPrefix and
+// are greater than or equal to startKey. Later we can revisit this if the
+// list operation becomes slow.
+while (iterator.hasNext()) {
+  Map.Entry<CacheKey<String>, CacheValue<OmKeyInfo>> entry =
+  iterator.next();
+
+  String key = entry.getKey().getCacheKey();
+  OmKeyInfo omKeyInfo = entry.getValue().getCacheValue();
+  // Making sure that entry in cache is not for delete key request.
+
+  if (omKeyInfo != null) {
+if (key.startsWith(seekPrefix) && key.compareTo(seekKey) >= 0) {
+  cacheKeyMap.put(key, omKeyInfo);
 }
+  } else {
+deletedKeySet.add(key);
+  }
+}
+
+// Get maxKeys from DB if it has.
+
+try (TableIterator<String, ? extends KeyValue<String, OmKeyInfo>>
+ keyIter = getKeyTable().iterator()) {
+  KeyValue<String, OmKeyInfo> kv;
+  keyIter.seek(seekKey);
+  // we need to iterate maxKeys + 1 here because if skipStartKey is true,
+  // we should skip that entry and return the result.
+  while (currentCount < maxKeys + 1 && keyIter.hasNext()) {
+kv = keyIter.next();
 if (kv != null && kv.getKey().startsWith(seekPrefix)) {
-  result.add(kv.getValue());
-  currentCount++;
+
+  // Entry should not be marked for delete, consider only those
+  // entries.
+  if(!deletedKeySet.contains(kv.getKey())) {
+cacheKeyMap.put(kv.getKey(), kv.getValue());
+currentCount++;
+  }
 } else {
   // The SeekPrefix does not match any more, we can break out of the
   // loop.
   break;
 }
   }
 }
+
+// Finally DB entries and cache entries are merged, then return the count
+// of maxKeys from the sorted map.
+currentCount = 0;
+
+for (Map.Entry<String, OmKeyInfo> cacheKey : cacheKeyMap.entrySet()) {
 
 Review comment:
   The second iteration is unfortunate. We should see if there is a way to 
avoid it.
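One way to avoid the second pass, sketched here under the assumption that both the cache view and the DB iterator yield keys in sorted order (hypothetical helper, not the patch's code): merge the two streams in a single pass, let cache entries override DB entries, skip deleted keys, and stop as soon as maxKeys results are collected.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import java.util.Iterator;
import java.util.List;
import java.util.Set;

public class MergeListDemo {
    // Single-pass merge of two sorted key streams (cache first on ties,
    // so the cache overrides the DB), honouring deletes and maxKeys.
    static List<String> mergeSorted(Iterator<String> cacheIter,
                                    Iterator<String> dbIter,
                                    Set<String> deleted, int maxKeys) {
        List<String> out = new ArrayList<>();
        String c = cacheIter.hasNext() ? cacheIter.next() : null;
        String d = dbIter.hasNext() ? dbIter.next() : null;
        while (out.size() < maxKeys && (c != null || d != null)) {
            String next;
            if (d == null || (c != null && c.compareTo(d) <= 0)) {
                if (c.equals(d)) {
                    // Same key in both streams: drop the DB copy.
                    d = dbIter.hasNext() ? dbIter.next() : null;
                }
                next = c;
                c = cacheIter.hasNext() ? cacheIter.next() : null;
            } else {
                next = d;
                d = dbIter.hasNext() ? dbIter.next() : null;
            }
            if (!deleted.contains(next)) {
                out.add(next);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> cache = Arrays.asList("a", "c");
        List<String> db = Arrays.asList("a", "b", "d");
        Set<String> deleted = new HashSet<>(Collections.singleton("b"));
        System.out.println(mergeSorted(cache.iterator(), db.iterator(),
            deleted, 10)); // [a, c, d]
    }
}
```

This trades the intermediate TreeMap for two cursors, so memory use is bounded by maxKeys rather than by the total number of matching entries.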
 



Issue Time Tracking
---

Worklog Id: (was: 324797)
Time Spent: 1h 10m  (was: 1h)

> Fix listkeys API
> 
>
> Key: HDDS-1986
> URL: https://issues.apache.org/jira/browse/HDDS-1986
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> This Jira is to fix listKeys API in HA code path.
> In HA, we have an in-memory cache: we put the result into the cache and 
> return the response; later it is picked up by the double-buffer thread and 
> flushed to disk. So now, when we do listkeys, it should use both the in-memory 
> cache and rocksdb key table to list keys in a 

[jira] [Work logged] (HDDS-2217) Remove log4j and audit configuration from the docker-config files

2019-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2217?focusedWorklogId=324793=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-324793
 ]

ASF GitHub Bot logged work on HDDS-2217:


Author: ASF GitHub Bot
Created on: 08/Oct/19 00:47
Start Date: 08/Oct/19 00:47
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1582: HDDS-2217. 
Removed redundant LOG4J lines from docker configurations
URL: https://github.com/apache/hadoop/pull/1582#issuecomment-539264930
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 11 | https://github.com/apache/hadoop/pull/1582 does not 
apply to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/1582 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1582/2/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 324793)
Time Spent: 2h 20m  (was: 2h 10m)

> Remove log4j and audit configuration from the docker-config files
> -
>
> Key: HDDS-2217
> URL: https://issues.apache.org/jira/browse/HDDS-2217
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: docker
>Reporter: Marton Elek
>Assignee: Chris Teoh
>Priority: Major
>  Labels: newbie, pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> Log4j configuration lines are added to the docker-config under 
> hadoop-ozone/dist/src/main/compose/...
> Mainly to make it easier to reconfigure the log level of any components.
> As we already have a "ozone insight" tool which can help us to modify the log 
> level at runtime we don't need these lines any more.
> {code:java}
> LOG4J.PROPERTIES_log4j.rootLogger=INFO, stdout
> LOG4J.PROPERTIES_log4j.appender.stdout=org.apache.log4j.ConsoleAppender
> LOG4J.PROPERTIES_log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
> LOG4J.PROPERTIES_log4j.appender.stdout.layout.ConversionPattern=%d{-MM-dd 
> HH:mm:ss} %-5p %c{1}:%L - %m%n
> LOG4J.PROPERTIES_log4j.logger.org.apache.hadoop.util.NativeCodeLoader=ERROR
> LOG4J.PROPERTIES_log4j.logger.org.apache.ratis.conf.ConfUtils=WARN
> LOG4J.PROPERTIES_log4j.logger.org.apache.hadoop.security.ShellBasedUnixGroupsMapping=ERROR
> LOG4J.PROPERTIES_log4j.logger.org.apache.ratis.grpc.client.GrpcClientProtocolClient=WARN
> LOG4J.PROPERTIES_log4j.logger.http.requests.s3gateway=INFO,s3gatewayrequestlog
> LOG4J.PROPERTIES_log4j.appender.s3gatewayrequestlog=org.apache.hadoop.http.HttpRequestLogAppender
> LOG4J.PROPERTIES_log4j.appender.s3gatewayrequestlog.Filename=/tmp/jetty-s3gateway-_mm_dd.log
> LOG4J.PROPERTIES_log4j.appender.s3gatewayrequestlog.RetainDays=3 {code}
> We can remove them together with the audit log entries as we already have a 
> default log4j.propertes / audit log4j2 config.
> After the removal, the clusters should be tested: the Ozone CLI should not 
> print any confusing log messages (such as NativeLib is missing or anything 
> else). AFAIK they are already turned off in the etc/hadoop/etc 
> log4j.properties.
>  
>  






[jira] [Work logged] (HDDS-1984) Fix listBucket API

2019-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1984?focusedWorklogId=324700=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-324700
 ]

ASF GitHub Bot logged work on HDDS-1984:


Author: ASF GitHub Bot
Created on: 07/Oct/19 23:45
Start Date: 07/Oct/19 23:45
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1555: 
HDDS-1984. Fix listBucket API.
URL: https://github.com/apache/hadoop/pull/1555#discussion_r332284284
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
 ##
 @@ -618,23 +618,31 @@ public boolean isBucketEmpty(String volume, String 
bucket)
 }
 int currentCount = 0;
 
-try (TableIterator>
-bucketIter = bucketTable.iterator()) {
-  KeyValue kv = bucketIter.seek(startKey);
-  while (currentCount < maxNumOfBuckets && bucketIter.hasNext()) {
-kv = bucketIter.next();
-// Skip the Start Bucket if needed.
-if (kv != null && skipStartKey &&
-kv.getKey().equals(startKey)) {
+
+// For Bucket it is full cache, so we can just iterate in-memory table
+// cache.
+Iterator<Map.Entry<CacheKey<String>, CacheValue<OmBucketInfo>>> iterator =
 
 Review comment:
   To improve this API, I think we should change our cache data structure which 
will be effective for both read/write/list API. (Initially, usage of 
ConcurrentHashMap, as get() is a constant time operation, but it has caused the 
problem for list API).
   
 



Issue Time Tracking
---

Worklog Id: (was: 324700)
Time Spent: 1h 50m  (was: 1h 40m)

> Fix listBucket API
> --
>
> Key: HDDS-1984
> URL: https://issues.apache.org/jira/browse/HDDS-1984
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> This Jira is to fix listBucket API in HA code path.
> In HA, we have an in-memory cache: we put the result into the cache and 
> return the response; later it is picked up by the double-buffer thread and 
> flushed to disk. So now, when we do listBuckets, it should use both the 
> in-memory cache and the RocksDB bucket table to list buckets in a volume.






[jira] [Work logged] (HDDS-1984) Fix listBucket API

2019-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1984?focusedWorklogId=324699=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-324699
 ]

ASF GitHub Bot logged work on HDDS-1984:


Author: ASF GitHub Bot
Created on: 07/Oct/19 23:39
Start Date: 07/Oct/19 23:39
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1555: 
HDDS-1984. Fix listBucket API.
URL: https://github.com/apache/hadoop/pull/1555#discussion_r332282739
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
 ##
 @@ -618,23 +618,31 @@ public boolean isBucketEmpty(String volume, String 
bucket)
 }
 int currentCount = 0;
 
-try (TableIterator>
-bucketIter = bucketTable.iterator()) {
-  KeyValue kv = bucketIter.seek(startKey);
-  while (currentCount < maxNumOfBuckets && bucketIter.hasNext()) {
-kv = bucketIter.next();
-// Skip the Start Bucket if needed.
-if (kv != null && skipStartKey &&
-kv.getKey().equals(startKey)) {
+
+// For Bucket it is full cache, so we can just iterate in-memory table
+// cache.
+Iterator<Map.Entry<CacheKey<String>, CacheValue<OmBucketInfo>>> iterator =
 
 Review comment:
   There is a maxCount; when we reach that count, we return immediately, and in 
that case we shall not iterate all entries.
   
   BucketTable cache is just a concurrentHashMap of all buckets in OM.
   
   So let's take an example: As it is ConcurrentSkipListMap, it has items 
sorted based on the key.
   
   We have entries in bucket table cache like below: (For brevity, removed 
bucketInfo structures which are the values for the key.)
   /vol/buck1
   /vol/buck2
   /vol/buck3
   /vol2/bucket2
   /vol2/bucket3
   /vol2/bucket4
   
   When we want to list buckets of /vol, returning only 1 entry each time 
(the maximum count), we return /vol/buck1 and then immediately return; the next 
time listBuckets is called, startKey will be /vol/buck1, and we return 
/vol/buck2 (to return this we iterate 2 entries), and so on.
   
   But when we want to list all buckets in /vol2, we will iterate the entries 
from the start, reach /vol2 in the cache, and once the maximum count is 
reached we return from there.
   
   So, to answer your question, we do not iterate the entire cache map every 
time. (But sometimes we iterate and skip entries, as shown in the vol2 bucket 
list case.)
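If the cache really were a `ConcurrentSkipListMap` as in the example above, the scan-and-skip for /vol2 could be avoided entirely: `tailMap` seeks straight to the first candidate key. A sketch under that assumption (demo names are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentSkipListMap;

public class PrefixListDemo {
    // With a sorted concurrent map, a prefix scan seeks directly to the first
    // candidate via tailMap instead of iterating from the start of the map.
    static List<String> listWithPrefix(ConcurrentSkipListMap<String, String> cache,
                                       String prefix, int maxCount) {
        List<String> result = new ArrayList<>();
        for (Map.Entry<String, String> e : cache.tailMap(prefix).entrySet()) {
            if (!e.getKey().startsWith(prefix) || result.size() >= maxCount) {
                break; // sorted order: once past the prefix we can stop
            }
            result.add(e.getKey());
        }
        return result;
    }

    public static void main(String[] args) {
        ConcurrentSkipListMap<String, String> cache = new ConcurrentSkipListMap<>();
        cache.put("/vol/buck1", "b1");
        cache.put("/vol/buck2", "b2");
        cache.put("/vol2/bucket2", "b3");
        System.out.println(listWithPrefix(cache, "/vol/", 10));
        // [/vol/buck1, /vol/buck2]
    }
}
```

Continuation from a startKey works the same way: pass the startKey to `tailMap` instead of the bare prefix.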
   
 



Issue Time Tracking
---

Worklog Id: (was: 324699)
Time Spent: 1h 40m  (was: 1.5h)

> Fix listBucket API
> --
>
> Key: HDDS-1984
> URL: https://issues.apache.org/jira/browse/HDDS-1984
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> This Jira is to fix listBucket API in HA code path.
> In HA, we have an in-memory cache: we put the result into the cache and 
> return the response; later it is picked up by the double-buffer thread and 
> flushed to disk. So now, when we do listBuckets, it should use both the 
> in-memory cache and the RocksDB bucket table to list buckets in a volume.






[jira] [Work logged] (HDDS-1984) Fix listBucket API

2019-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1984?focusedWorklogId=324698=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-324698
 ]

ASF GitHub Bot logged work on HDDS-1984:


Author: ASF GitHub Bot
Created on: 07/Oct/19 23:38
Start Date: 07/Oct/19 23:38
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1555: 
HDDS-1984. Fix listBucket API.
URL: https://github.com/apache/hadoop/pull/1555#discussion_r332282739
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
 ##
 @@ -618,23 +618,31 @@ public boolean isBucketEmpty(String volume, String 
bucket)
 }
 int currentCount = 0;
 
-try (TableIterator>
-bucketIter = bucketTable.iterator()) {
-  KeyValue kv = bucketIter.seek(startKey);
-  while (currentCount < maxNumOfBuckets && bucketIter.hasNext()) {
-kv = bucketIter.next();
-// Skip the Start Bucket if needed.
-if (kv != null && skipStartKey &&
-kv.getKey().equals(startKey)) {
+
+// For Bucket it is full cache, so we can just iterate in-memory table
+// cache.
+Iterator<Map.Entry<CacheKey<String>, CacheValue<OmBucketInfo>>> iterator =
 
 Review comment:
   There is a maxCount; when we reach that count, we return immediately, and in 
that case we shall not iterate all entries.
   
   BucketTable cache is just a concurrentHashMap of all buckets in OM.
   
   So let's take an example: As it is ConcurrentSkipListMap, it has items 
sorted based on the key.
   
   We have entries in bucket table cache like below: (For brevity, removed 
bucketInfo structures which are the values for the key.)
   /vol/buck1
   /vol/buck2
   /vol/buck3
   /vol2/bucket2
   /vol2/bucket3
   /vol2/bucket4
   
   When we want to list buckets of /vol, returning only 1 entry each time 
(the maximum count), we return /vol/buck1 and then immediately return; the next 
time listBuckets is called, startKey will be /vol/buck1, and we return 
/vol/buck2 (to return this we iterate 2 entries), and so on.
   
   But when we want to list all buckets in /vol2, we will iterate the entries 
from the start, reach /vol2 in the cache, and once the maximum count is 
reached we return from there.
   
   So, to answer your question, we do not iterate the entire cache map every 
time.
   
 



Issue Time Tracking
---

Worklog Id: (was: 324698)
Time Spent: 1.5h  (was: 1h 20m)

> Fix listBucket API
> --
>
> Key: HDDS-1984
> URL: https://issues.apache.org/jira/browse/HDDS-1984
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> This Jira is to fix listBucket API in HA code path.
> In HA, we have an in-memory cache: we put the result into the cache and 
> return the response; later it is picked up by the double-buffer thread and 
> flushed to disk. So now, when we do listBuckets, it should use both the 
> in-memory cache and the RocksDB bucket table to list buckets in a volume.






[jira] [Commented] (HDDS-2245) Use dynamic ports for SCM in TestSecureOzoneCluster

2019-10-07 Thread kevin su (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16946334#comment-16946334
 ] 

kevin su commented on HDDS-2245:


Thanks [~aengineer] for the help and commit  

> Use dynamic ports for SCM in TestSecureOzoneCluster
> ---
>
> Key: HDDS-2245
> URL: https://issues.apache.org/jira/browse/HDDS-2245
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Nanda kumar
>Assignee: kevin su
>Priority: Major
>  Labels: newbie
> Fix For: 0.5.0
>
> Attachments: HDDS-2245.001.patch, HDDS-2245.002.patch
>
>
> {{TestSecureOzoneCluster}} is using default SCM ports, we should use dynamic 
> ports.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1984) Fix listBucket API

2019-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1984?focusedWorklogId=324696=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-324696
 ]

ASF GitHub Bot logged work on HDDS-1984:


Author: ASF GitHub Bot
Created on: 07/Oct/19 23:31
Start Date: 07/Oct/19 23:31
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1555: 
HDDS-1984. Fix listBucket API.
URL: https://github.com/apache/hadoop/pull/1555#discussion_r332281050
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/utils/db/cache/CacheKey.java
 ##
 @@ -53,4 +53,18 @@ public boolean equals(Object o) {
   public int hashCode() {
 return Objects.hash(key);
   }
+
+  @Override
+  public int compareTo(Object o) {
+if(Objects.equals(key, ((CacheKey)o).key)) {
+  return 0;
+} else {
+  if (key instanceof String) {
+return ((String) key).compareTo((String) ((CacheKey)o).key);
+  } else {
+// If not type string, convert to string and compare.
+return key.toString().compareTo(((CacheKey) o).key.toString());
 
 Review comment:
   When the CacheKey KEY type is not String. For BucketTable the type is 
String; for the token table it is OzoneTokenIdentifier. As this is a common 
class used by all tables in OM, I added the if/else condition.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 324696)
Time Spent: 1h 20m  (was: 1h 10m)

> Fix listBucket API
> --
>
> Key: HDDS-1984
> URL: https://issues.apache.org/jira/browse/HDDS-1984
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> This Jira is to fix listBucket API in HA code path.
> In HA, we have an in-memory cache: we put the result into the cache and 
> return the response; later it is picked up by the double-buffer thread and 
> flushed to disk. So now, when we do listBuckets, it should use both the 
> in-memory cache and the RocksDB bucket table to list buckets in a volume.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14814) RBF: RouterQuotaUpdateService supports inherited rule.

2019-10-07 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-14814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16946330#comment-16946330
 ] 

Íñigo Goiri commented on HDFS-14814:


LGTM.
+1 on [^HDFS-14814.011.patch].

> RBF: RouterQuotaUpdateService supports inherited rule.
> --
>
> Key: HDFS-14814
> URL: https://issues.apache.org/jira/browse/HDFS-14814
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: HDFS-14814.001.patch, HDFS-14814.002.patch, 
> HDFS-14814.003.patch, HDFS-14814.004.patch, HDFS-14814.005.patch, 
> HDFS-14814.006.patch, HDFS-14814.007.patch, HDFS-14814.008.patch, 
> HDFS-14814.009.patch, HDFS-14814.010.patch, HDFS-14814.011.patch
>
>
> I want to add a rule *'The quota should be set the same as the nearest 
> parent'* to Global Quota. Supposing we have the mount table below.
> M1: /dir-a                            ns0->/dir-a     \{nquota=10,squota=20}
> M2: /dir-a/dir-b                 ns1->/dir-b     \{nquota=-1,squota=30}
> M3: /dir-a/dir-b/dir-c       ns2->/dir-c     \{nquota=-1,squota=-1}
> M4: /dir-d                           ns3->/dir-d     \{nquota=-1,squota=-1}
>  
> The quota for the remote locations on the namespaces should be:
>  ns0->/dir-a     \{nquota=10,squota=20}
>  ns1->/dir-b     \{nquota=10,squota=30}
>  ns2->/dir-c      \{nquota=10,squota=30}
>  ns3->/dir-d     \{nquota=-1,squota=-1}
>  
> The quota of the remote location is set the same as the corresponding 
> MountTable, and if there is no quota of the MountTable then the quota is set 
> to the nearest parent MountTable with quota.
>  
> It's easy to implement. In RouterQuotaUpdateService, each time we compute 
> the currentQuotaUsage we can get the quota info for each MountTable. We can 
> do a check and fix every MountTable whose quota doesn't match the rule above.
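The nearest-parent rule described above can be sketched as follows. `QuotaInheritSketch` and the flat `TreeMap` of mount paths to quotas are hypothetical simplifications of the real mount table, where -1 means "not set":

```java
import java.util.TreeMap;

/**
 * Hedged sketch of the "nearest parent" rule: a mount entry whose quota
 * is -1 (unset) inherits the quota of its closest ancestor mount point
 * that has one set.
 */
public class QuotaInheritSketch {
  public static long effectiveQuota(TreeMap<String, Long> mounts, String path) {
    Long q = mounts.get(path);
    if (q != null && q != -1) {
      return q;                         // explicitly set on this entry
    }
    // Walk up parent paths until a set quota is found.
    String p = path;
    int idx;
    while ((idx = p.lastIndexOf('/')) > 0) {
      p = p.substring(0, idx);
      Long parent = mounts.get(p);
      if (parent != null && parent != -1) {
        return parent;
      }
    }
    return -1;                          // no ancestor sets a quota
  }
}
```

With the example mount table from the description, /dir-a/dir-b/dir-c inherits nquota=10 from /dir-a, while /dir-d stays at -1.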



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14899) Use Relative URLS in Hadoop HDFS RBF

2019-10-07 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-14899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16946328#comment-16946328
 ] 

Íñigo Goiri commented on HDFS-14899:


Thanks [~belugabehr], have you verified this in your browser?

> Use Relative URLS in Hadoop HDFS RBF
> 
>
> Key: HDFS-14899
> URL: https://issues.apache.org/jira/browse/HDFS-14899
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
> Attachments: HDFS-14899.1.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14898) Use Relative URLS in Hadoop HDFS HTTP FS

2019-10-07 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16946325#comment-16946325
 ] 

Hadoop QA commented on HDFS-14898:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
44s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
31m 54s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
23s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 54m 16s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.2 Server=19.03.2 Image:yetus/hadoop:1dde3efb91e |
| JIRA Issue | HDFS-14898 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12982424/HDFS-14898.1.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  |
| uname | Linux 2c298dd88c87 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 012d897 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28027/testReport/ |
| Max. process+thread count | 600 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-httpfs U: 
hadoop-hdfs-project/hadoop-hdfs-httpfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28027/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Use Relative URLS in Hadoop HDFS HTTP FS
> 
>
> Key: HDFS-14898
> URL: https://issues.apache.org/jira/browse/HDFS-14898
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
> Attachments: HDFS-14898.1.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To 

[jira] [Commented] (HDFS-14899) Use Relative URLS in Hadoop HDFS RBF

2019-10-07 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16946319#comment-16946319
 ] 

Hadoop QA commented on HDFS-14899:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
45s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
29m 36s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 39s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 45m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.2 Server=19.03.2 Image:yetus/hadoop:1dde3efb91e |
| JIRA Issue | HDFS-14899 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12982429/HDFS-14899.1.patch |
| Optional Tests |  dupname  asflicense  shadedclient  |
| uname | Linux 4fafc195dc99 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 012d897 |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 437 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28028/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Use Relative URLS in Hadoop HDFS RBF
> 
>
> Key: HDFS-14899
> URL: https://issues.apache.org/jira/browse/HDFS-14899
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
> Attachments: HDFS-14899.1.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2245) Use dynamic ports for SCM in TestSecureOzoneCluster

2019-10-07 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16946314#comment-16946314
 ] 

Hudson commented on HDDS-2245:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17500 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17500/])
HDDS-2245. Use dynamic ports for SCM in TestSecureOzoneCluster (aengineer: rev 
4fdf01635835a1b8f1107a50c112a3601a6a61f9)
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestSecureOzoneCluster.java


> Use dynamic ports for SCM in TestSecureOzoneCluster
> ---
>
> Key: HDDS-2245
> URL: https://issues.apache.org/jira/browse/HDDS-2245
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Nanda kumar
>Assignee: kevin su
>Priority: Major
>  Labels: newbie
> Fix For: 0.5.0
>
> Attachments: HDDS-2245.001.patch, HDDS-2245.002.patch
>
>
> {{TestSecureOzoneCluster}} is using default SCM ports, we should use dynamic 
> ports.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14900) Fix build failure of hadoop-hdfs-native-client

2019-10-07 Thread Masatake Iwasaki (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16946311#comment-16946311
 ] 

Masatake Iwasaki commented on HDFS-14900:
-

from CMakeError.log:
{noformat}
/usr/bin/cc   -DCHECK_FUNCTION_EXISTS=pthread_create   -o 
CMakeFiles/cmTC_cace7.dir/CheckFunctionExists.c.o   -c 
/usr/local/share/cmake-3.3/Modules/CheckFunctionExists.c
Linking C executable cmTC_cace7
/usr/local/bin/cmake -E cmake_link_script CMakeFiles/cmTC_cace7.dir/link.txt 
--verbose=1
/usr/bin/cc   -DCHECK_FUNCTION_EXISTS=pthread_create
CMakeFiles/cmTC_cace7.dir/CheckFunctionExists.c.o  -o cmTC_cace7 -rdynamic 
-lpthreads 
/usr/bin/ld: cannot find -lpthreads
collect2: error: ld returned 1 exit status
{noformat}
{noformat}
Building CXX object CMakeFiles/cmTC_6c0d6.dir/src.cxx.o
/usr/bin/c++ -g -O2 -Wall -pthread -D_FILE_OFFSET_BITS=64 -D_GNU_SOURCE 
-DPROTOC_IS_COMPATIBLE   -std=c++11 -o CMakeFiles/cmTC_6c0d6.dir/src.cxx.o -c 
/home/iwasakims/srcs/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/target/CMakeFiles/CMakeTmp/src.cxx
/home/iwasakims/srcs/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/target/CMakeFiles/CMakeTmp/src.cxx:1:40:
 fatal error: google/protobuf/io/printer.h: No such file or directory
 #include <google/protobuf/io/printer.h>
{noformat}


> Fix build failure of hadoop-hdfs-native-client
> --
>
> Key: HDFS-14900
> URL: https://issues.apache.org/jira/browse/HDFS-14900
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Major
>
> Build with the native profile failed due to lack of protobuf resources. I did 
> not notice this because the build succeeds if protobuf-2.5.0 is installed. 
> protobuf-3.7.1 is the correct version after HADOOP-16558.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2244) Use new ReadWrite lock in OzoneManager

2019-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2244?focusedWorklogId=324676=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-324676
 ]

ASF GitHub Bot logged work on HDDS-2244:


Author: ASF GitHub Bot
Created on: 07/Oct/19 22:51
Start Date: 07/Oct/19 22:51
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on issue #1589: HDDS-2244. Use 
new ReadWrite lock in OzoneManager.
URL: https://github.com/apache/hadoop/pull/1589#issuecomment-539237587
 
 
   > Right now ActiveLock creates ReadWrite Lock with non-fair mode. Do you 
mean, we want to create the RWLOCK with an option of fair mode. If my 
understanding is wrong, could you let me know what additional things need to be 
implemented?
   
   When you use a reader-writer lock, there is a question of fairness, whereas 
exclusive locks are first come, first served.
   
   > 
   > And also this work is mainly to improve read performance workloads, as now 
with current approach of exclusive lock all reads are serialized.
   
   I am afraid this gives so much importance to Reads that you will have your 
writes getting stalled completely. 
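The fairness point above can be illustrated with `java.util.concurrent.locks.ReentrantReadWriteLock`, which takes a fairness flag at construction; this is a sketch of the general mechanism, not the actual ActiveLock code:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

/**
 * Hedged sketch: a fair ReentrantReadWriteLock queues a waiting writer
 * ahead of readers that arrive later, so a steady stream of reads cannot
 * starve writes indefinitely. Non-fair mode (the default) favors
 * throughput but allows readers to barge ahead of a waiting writer.
 */
public class FairRwLockSketch {
  public static ReentrantReadWriteLock newLock(boolean fair) {
    return new ReentrantReadWriteLock(fair);
  }
}
```

The trade-off is throughput: fair mode adds queueing overhead on every acquire, which is why it is usually opt-in.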
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 324676)
Time Spent: 2.5h  (was: 2h 20m)

> Use new ReadWrite lock in OzoneManager
> --
>
> Key: HDDS-2244
> URL: https://issues.apache.org/jira/browse/HDDS-2244
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> Use new ReadWriteLock added in HDDS-2223.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2244) Use new ReadWrite lock in OzoneManager

2019-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2244?focusedWorklogId=324675=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-324675
 ]

ASF GitHub Bot logged work on HDDS-2244:


Author: ASF GitHub Bot
Created on: 07/Oct/19 22:51
Start Date: 07/Oct/19 22:51
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on issue #1589: HDDS-2244. Use 
new ReadWrite lock in OzoneManager.
URL: https://github.com/apache/hadoop/pull/1589#issuecomment-539237587
 
 
   
   > Right now ActiveLock creates ReadWrite Lock with non-fair mode. Do you 
mean, we want to create the RWLOCK with an option of fair mode. If my 
understanding is wrong, could you let me know what additional things need to be 
implemented?
   When you use a reader-writer lock, there is a question of fairness, whereas 
exclusive locks are first come, first served.
   > 
   > And also this work is mainly to improve read performance workloads, as now 
with current approach of exclusive lock all reads are serialized.
   I am afraid this gives so much importance to Reads that you will have your 
writes getting stalled completely. 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 324675)
Time Spent: 2h 20m  (was: 2h 10m)

> Use new ReadWrite lock in OzoneManager
> --
>
> Key: HDDS-2244
> URL: https://issues.apache.org/jira/browse/HDDS-2244
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> Use new ReadWriteLock added in HDDS-2223.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14900) Fix build failure of hadoop-hdfs-native-client

2019-10-07 Thread Masatake Iwasaki (Jira)
Masatake Iwasaki created HDFS-14900:
---

 Summary: Fix build failure of hadoop-hdfs-native-client
 Key: HDFS-14900
 URL: https://issues.apache.org/jira/browse/HDFS-14900
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.3.0
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki


Build with the native profile failed due to lack of protobuf resources. I did 
not notice this because the build succeeds if protobuf-2.5.0 is installed. 
protobuf-3.7.1 is the correct version after HADOOP-16558.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2244) Use new ReadWrite lock in OzoneManager

2019-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2244?focusedWorklogId=324674=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-324674
 ]

ASF GitHub Bot logged work on HDDS-2244:


Author: ASF GitHub Bot
Created on: 07/Oct/19 22:48
Start Date: 07/Oct/19 22:48
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #1589: HDDS-2244. 
Use new ReadWrite lock in OzoneManager.
URL: https://github.com/apache/hadoop/pull/1589#discussion_r332270946
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/lock/OzoneManagerLock.java
 ##
 @@ -105,15 +109,66 @@ public OzoneManagerLock(Configuration conf) {
* should be bucket name. For remaining all resource only one param should
* be passed.
*/
+  @Deprecated
   public boolean acquireLock(Resource resource, String... resources) {
 String resourceName = generateResourceName(resource, resources);
+return lock(resource, resourceName, manager::writeLock, WRITE_LOCK);
+  }
+
+  /**
+   * Acquire read lock on resource.
+   *
+   * For S3_BUCKET_LOCK, VOLUME_LOCK, BUCKET_LOCK type resource, same
+   * thread acquiring lock again is allowed.
+   *
+   * For USER_LOCK, PREFIX_LOCK, S3_SECRET_LOCK type resource, same thread
+   * acquiring lock again is not allowed.
+   *
+   * Special Note for USER_LOCK: Single thread can acquire single user lock/
+   * multi user lock. But not both at the same time.
+   * @param resource - Type of the resource.
+   * @param resources - Resource names on which user want to acquire lock.
+   * For Resource type BUCKET_LOCK, first param should be volume, second param
+   * should be bucket name. For remaining all resource only one param should
+   * be passed.
+   */
+  public boolean acquireReadLock(Resource resource, String... resources) {
+String resourceName = generateResourceName(resource, resources);
+return lock(resource, resourceName, manager::readLock, READ_LOCK);
+  }
+
+
+  /**
+   * Acquire write lock on resource.
+   *
+   * For S3_BUCKET_LOCK, VOLUME_LOCK, BUCKET_LOCK type resource, same
+   * thread acquiring lock again is allowed.
+   *
+   * For USER_LOCK, PREFIX_LOCK, S3_SECRET_LOCK type resource, same thread
+   * acquiring lock again is not allowed.
+   *
+   * Special Note for USER_LOCK: Single thread can acquire single user lock/
+   * multi user lock. But not both at the same time.
+   * @param resource - Type of the resource.
+   * @param resources - Resource names on which user want to acquire lock.
+   * For Resource type BUCKET_LOCK, first param should be volume, second param
+   * should be bucket name. For remaining all resource only one param should
+   * be passed.
+   */
+  public boolean acquireWriteLock(Resource resource, String... resources) {
+String resourceName = generateResourceName(resource, resources);
+return lock(resource, resourceName, manager::writeLock, WRITE_LOCK);
+  }
+
+  private boolean lock(Resource resource, String resourceName,
+  Consumer<String> lockFn, String lockType) {
 if (!resource.canLock(lockSet.get())) {
   String errorMessage = getErrorMessage(resource);
   LOG.error(errorMessage);
   throw new RuntimeException(errorMessage);
 } else {
-  manager.lock(resourceName);
-  LOG.debug("Acquired {} lock on resource {}", resource.name,
+  lockFn.accept(resourceName);
+  LOG.debug("Acquired {} {} lock on resource {}", lockType, resource.name,
   resourceName);
 
 Review comment:
   Yes, it is very confusing. But Thanks for the explanation, it makes sense 
now.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 324674)
Time Spent: 2h 10m  (was: 2h)

> Use new ReadWrite lock in OzoneManager
> --
>
> Key: HDDS-2244
> URL: https://issues.apache.org/jira/browse/HDDS-2244
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> Use new ReadWriteLock added in HDDS-2223.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2245) Use dynamic ports for SCM in TestSecureOzoneCluster

2019-10-07 Thread Anu Engineer (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16946308#comment-16946308
 ] 

Anu Engineer commented on HDDS-2245:


The normal Jenkins is broken for submitted patches; if you go via GitHub we 
have a working version. I have tested the patch manually and confirmed that it 
works as expected.

> Use dynamic ports for SCM in TestSecureOzoneCluster
> ---
>
> Key: HDDS-2245
> URL: https://issues.apache.org/jira/browse/HDDS-2245
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Nanda kumar
>Assignee: kevin su
>Priority: Major
>  Labels: newbie
> Fix For: 0.5.0
>
> Attachments: HDDS-2245.001.patch, HDDS-2245.002.patch
>
>
> {{TestSecureOzoneCluster}} is using default SCM ports, we should use dynamic 
> ports.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2245) Use dynamic ports for SCM in TestSecureOzoneCluster

2019-10-07 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-2245:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

[~pingsutw] Thank you for the contribution. I have committed this patch to the 
trunk.

> Use dynamic ports for SCM in TestSecureOzoneCluster
> ---
>
> Key: HDDS-2245
> URL: https://issues.apache.org/jira/browse/HDDS-2245
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Nanda kumar
>Assignee: kevin su
>Priority: Major
>  Labels: newbie
> Fix For: 0.5.0
>
> Attachments: HDDS-2245.001.patch, HDDS-2245.002.patch
>
>
> {{TestSecureOzoneCluster}} is using default SCM ports, we should use dynamic 
> ports.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1984) Fix listBucket API

2019-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1984?focusedWorklogId=324663=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-324663
 ]

ASF GitHub Bot logged work on HDDS-1984:


Author: ASF GitHub Bot
Created on: 07/Oct/19 22:25
Start Date: 07/Oct/19 22:25
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #1555: HDDS-1984. 
Fix listBucket API.
URL: https://github.com/apache/hadoop/pull/1555#discussion_r332257684
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/utils/db/cache/CacheKey.java
 ##
 @@ -53,4 +53,18 @@ public boolean equals(Object o) {
   public int hashCode() {
 return Objects.hash(key);
   }
+
+  @Override
+  public int compareTo(Object o) {
+if(Objects.equals(key, ((CacheKey)o).key)) {
+  return 0;
+} else {
+  if (key instanceof String) {
+return ((String) key).compareTo((String) ((CacheKey)o).key);
+  } else {
+// If not type string, convert to string and compare.
+return key.toString().compareTo(((CacheKey) o).key.toString());
 
 Review comment:
   when can this happen? 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 324663)
Time Spent: 1h 10m  (was: 1h)

> Fix listBucket API
> --
>
> Key: HDDS-1984
> URL: https://issues.apache.org/jira/browse/HDDS-1984
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> This Jira is to fix listBucket API in HA code path.
> In HA, we have an in-memory cache, where we put the result to in-memory cache 
> and return the response, later it will be picked by double buffer thread and 
> it will flush to disk. So, now when do listBuckets, it should use both 
> in-memory cache and rocksdb bucket table to list buckets in a volume.






[jira] [Work logged] (HDDS-1984) Fix listBucket API

2019-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1984?focusedWorklogId=324662&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-324662
 ]

ASF GitHub Bot logged work on HDDS-1984:


Author: ASF GitHub Bot
Created on: 07/Oct/19 22:25
Start Date: 07/Oct/19 22:25
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #1555: HDDS-1984. 
Fix listBucket API.
URL: https://github.com/apache/hadoop/pull/1555#discussion_r332264370
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
 ##
 @@ -618,23 +618,31 @@ public boolean isBucketEmpty(String volume, String 
bucket)
 }
 int currentCount = 0;
 
-try (TableIterator<String, ? extends KeyValue<String, OmBucketInfo>>
-bucketIter = bucketTable.iterator()) {
-  KeyValue<String, OmBucketInfo> kv = bucketIter.seek(startKey);
-  while (currentCount < maxNumOfBuckets && bucketIter.hasNext()) {
-kv = bucketIter.next();
-// Skip the Start Bucket if needed.
-if (kv != null && skipStartKey &&
-kv.getKey().equals(startKey)) {
+
+// For Bucket it is full cache, so we can just iterate in-memory table
+// cache.
+Iterator<Map.Entry<CacheKey<String>, CacheValue<OmBucketInfo>>> iterator =
 
 Review comment:
   Sorry, I am not able to make sure of this; but for each request do we 
iterate through the whole bucket space here? 
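The iteration being discussed can be modeled with a simplified sketch (hypothetical names; the real patch iterates the table cache, not a TreeMap): seek to startKey in key order, optionally skip it, and stop after maxNumOfBuckets entries.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.NavigableMap;
import java.util.TreeMap;

// Simplified model of listing buckets from a fully cached table (hypothetical
// names, not the Ozone code): seek to startKey, optionally skip it, and stop
// once maxNumOfBuckets entries have been collected.
public class ListBucketsSketch {
  static List<String> listBuckets(NavigableMap<String, String> cache,
      String startKey, boolean skipStartKey, int maxNumOfBuckets) {
    List<String> result = new ArrayList<>();
    // tailMap(startKey, inclusive) plays the role of the iterator seek;
    // skipStartKey decides whether the start key itself is included.
    for (Map.Entry<String, String> e
        : cache.tailMap(startKey, !skipStartKey).entrySet()) {
      if (result.size() >= maxNumOfBuckets) {
        break;
      }
      result.add(e.getKey());
    }
    return result;
  }

  public static void main(String[] args) {
    NavigableMap<String, String> cache = new TreeMap<>();
    cache.put("/vol1/a", "infoA");
    cache.put("/vol1/b", "infoB");
    cache.put("/vol1/c", "infoC");
    System.out.println(listBuckets(cache, "/vol1/a", true, 2)); // [/vol1/b, /vol1/c]
  }
}
```

With a sorted structure the seek is cheap; if the cache iterator were unordered, each request would indeed have to scan the whole bucket space, which is the concern raised above.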
 



Issue Time Tracking
---

Worklog Id: (was: 324662)
Time Spent: 1h 10m  (was: 1h)

> Fix listBucket API
> --
>
> Key: HDDS-1984
> URL: https://issues.apache.org/jira/browse/HDDS-1984
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> This Jira is to fix listBucket API in HA code path.
> In HA, we have an in-memory cache, where we put the result to in-memory cache 
> and return the response, later it will be picked by double buffer thread and 
> it will flush to disk. So, now when do listBuckets, it should use both 
> in-memory cache and rocksdb bucket table to list buckets in a volume.






[jira] [Work logged] (HDDS-2244) Use new ReadWrite lock in OzoneManager

2019-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2244?focusedWorklogId=324657&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-324657
 ]

ASF GitHub Bot logged work on HDDS-2244:


Author: ASF GitHub Bot
Created on: 07/Oct/19 22:09
Start Date: 07/Oct/19 22:09
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1589: HDDS-2244. Use 
new ReadWrite lock in OzoneManager.
URL: https://github.com/apache/hadoop/pull/1589#issuecomment-539225586
 
 
   > I have an uber question on this patch. How do we ensure that writes will 
not be starved on a resource, since reads allow multiple of them to get through 
at the same time? Do we have a mechanism in place to avoid write starvation? If 
not, is it not better to keep simple locks?
   
   Right now ActiveLock creates the ReadWrite lock in non-fair mode. Do you 
mean we want to create the RWLOCK with an option for fair mode? If my 
understanding is wrong, could you let me know what additional things need to 
be implemented?
   
   Also, this work is mainly to improve the performance of read-heavy 
workloads, since with the current approach of an exclusive lock all reads are 
serialized.
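The fairness option discussed above is a constructor flag on java.util.concurrent's ReentrantReadWriteLock: the default non-fair mode favors throughput but can starve a waiting writer under continuous reads, while fair mode queues readers behind a waiting writer.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// The fairness knob under discussion: ReentrantReadWriteLock defaults to
// non-fair mode; passing true requests fair (FIFO) ordering, which prevents
// a waiting writer from being starved by a continuous stream of readers.
public class FairnessSketch {
  public static void main(String[] args) {
    ReentrantReadWriteLock nonFair = new ReentrantReadWriteLock();  // default
    ReentrantReadWriteLock fair = new ReentrantReadWriteLock(true); // fair mode
    System.out.println(nonFair.isFair()); // false
    System.out.println(fair.isFair());    // true
  }
}
```

Fair mode trades some throughput for the no-starvation guarantee; whether Ozone should flip this flag is exactly the open question in the thread.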
 



Issue Time Tracking
---

Worklog Id: (was: 324657)
Time Spent: 2h  (was: 1h 50m)

> Use new ReadWrite lock in OzoneManager
> --
>
> Key: HDDS-2244
> URL: https://issues.apache.org/jira/browse/HDDS-2244
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> Use new ReadWriteLock added in HDDS-2223.






[jira] [Commented] (HDDS-2262) SLEEP_SECONDS: command not found

2019-10-07 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16946288#comment-16946288
 ] 

Hudson commented on HDDS-2262:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17499 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17499/])
HDDS-2262. SLEEP_SECONDS: command not found (aengineer: rev 
012d897e5b13228152ca31ad97fae87e4b1e4b54)
* (edit) hadoop-ozone/dist/src/main/dockerbin/entrypoint.sh


> SLEEP_SECONDS: command not found
> 
>
> Key: HDDS-2262
> URL: https://issues.apache.org/jira/browse/HDDS-2262
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: docker
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Trivial
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> {noformat}
> datanode_1  | /opt/hadoop/bin/docker/entrypoint.sh: line 66: SLEEP_SECONDS: 
> command not found
> datanode_1  | Sleeping for  seconds
> {noformat}
> Eg. 
> https://raw.githubusercontent.com/elek/ozone-ci-q4/master/pr/pr-hdds-2238-79fll/acceptance/docker-ozonesecure-ozonesecure-s3-s3g.log
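A common cause of this particular shell error (shown here as an illustration only; the actual entrypoint.sh fix may differ) is spaces around `=`, which make bash parse the assignment as a command invocation, leaving the variable empty for the later echo.

```shell
#!/usr/bin/env bash
# Illustration only; the real entrypoint.sh change may differ.
#
# Wrong (spaces around "="): bash runs a command named SLEEP_SECONDS with
# arguments "=" and "5", producing "SLEEP_SECONDS: command not found", and
# the later message prints "Sleeping for  seconds" because the variable is
# empty:
#   SLEEP_SECONDS = 5

# Correct: no spaces around "=", with a default when the variable is unset.
SLEEP_SECONDS="${SLEEP_SECONDS:-5}"
echo "Sleeping for ${SLEEP_SECONDS} seconds"
```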






[jira] [Work logged] (HDDS-2244) Use new ReadWrite lock in OzoneManager

2019-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2244?focusedWorklogId=324655&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-324655
 ]

ASF GitHub Bot logged work on HDDS-2244:


Author: ASF GitHub Bot
Created on: 07/Oct/19 21:57
Start Date: 07/Oct/19 21:57
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1589: 
HDDS-2244. Use new ReadWrite lock in OzoneManager.
URL: https://github.com/apache/hadoop/pull/1589#discussion_r332255791
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/lock/OzoneManagerLock.java
 ##
 @@ -105,15 +109,66 @@ public OzoneManagerLock(Configuration conf) {
* should be bucket name. For remaining all resource only one param should
* be passed.
*/
+  @Deprecated
   public boolean acquireLock(Resource resource, String... resources) {
 String resourceName = generateResourceName(resource, resources);
+return lock(resource, resourceName, manager::writeLock, WRITE_LOCK);
+  }
+
+  /**
+   * Acquire read lock on resource.
+   *
+   * For S3_BUCKET_LOCK, VOLUME_LOCK, BUCKET_LOCK type resource, same
+   * thread acquiring lock again is allowed.
+   *
+   * For USER_LOCK, PREFIX_LOCK, S3_SECRET_LOCK type resource, same thread
+   * acquiring lock again is not allowed.
+   *
+   * Special Note for USER_LOCK: Single thread can acquire single user lock/
+   * multi user lock. But not both at the same time.
+   * @param resource - Type of the resource.
+   * @param resources - Resource names on which user want to acquire lock.
+   * For Resource type BUCKET_LOCK, first param should be volume, second param
+   * should be bucket name. For remaining all resource only one param should
+   * be passed.
+   */
+  public boolean acquireReadLock(Resource resource, String... resources) {
+String resourceName = generateResourceName(resource, resources);
+return lock(resource, resourceName, manager::readLock, READ_LOCK);
+  }
+
+
+  /**
+   * Acquire write lock on resource.
+   *
+   * For S3_BUCKET_LOCK, VOLUME_LOCK, BUCKET_LOCK type resource, same
+   * thread acquiring lock again is allowed.
+   *
+   * For USER_LOCK, PREFIX_LOCK, S3_SECRET_LOCK type resource, same thread
+   * acquiring lock again is not allowed.
+   *
+   * Special Note for USER_LOCK: Single thread can acquire single user lock/
+   * multi user lock. But not both at the same time.
+   * @param resource - Type of the resource.
+   * @param resources - Resource names on which user want to acquire lock.
+   * For Resource type BUCKET_LOCK, first param should be volume, second param
+   * should be bucket name. For remaining all resource only one param should
+   * be passed.
+   */
+  public boolean acquireWriteLock(Resource resource, String... resources) {
+String resourceName = generateResourceName(resource, resources);
+return lock(resource, resourceName, manager::writeLock, WRITE_LOCK);
+  }
+
+  private boolean lock(Resource resource, String resourceName,
+  Consumer<String> lockFn, String lockType) {
 if (!resource.canLock(lockSet.get())) {
   String errorMessage = getErrorMessage(resource);
   LOG.error(errorMessage);
   throw new RuntimeException(errorMessage);
 } else {
-  manager.lock(resourceName);
-  LOG.debug("Acquired {} lock on resource {}", resource.name,
+  lockFn.accept(resourceName);
+  LOG.debug("Acquired {} {} lock on resource {}", lockType, resource.name,
   resourceName);
 
 Review comment:
   Here the first resource.name prints VOLUME_LOCK/BUCKET_LOCK, and the next 
resourceName prints the actual resource name. (I think it is a little 
confusing here, because the Resource class's name field is defined like that.)
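The private lock() helper in the diff dispatches to the underlying read or write lock through a Consumer<String> method reference. A minimal standalone sketch of that pattern (hypothetical Manager class, not Ozone's LockManager):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Minimal sketch of the dispatch pattern in the patch (hypothetical Manager,
// not Ozone's LockManager): acquireReadLock and acquireWriteLock share one
// private lock() helper and pass the underlying operation as a method
// reference.
public class LockDispatchSketch {
  static final List<String> CALLS = new ArrayList<>();

  static class Manager {
    void readLock(String name)  { CALLS.add("READ "  + name); }
    void writeLock(String name) { CALLS.add("WRITE " + name); }
  }

  private final Manager manager = new Manager();

  public boolean acquireReadLock(String resourceName) {
    return lock(resourceName, manager::readLock, "READ");
  }

  public boolean acquireWriteLock(String resourceName) {
    return lock(resourceName, manager::writeLock, "WRITE");
  }

  private boolean lock(String resourceName, Consumer<String> lockFn,
      String lockType) {
    lockFn.accept(resourceName); // invokes readLock or writeLock
    return true;
  }

  public static void main(String[] args) {
    LockDispatchSketch locks = new LockDispatchSketch();
    locks.acquireReadLock("/vol1/bucket1");
    locks.acquireWriteLock("/vol1/bucket1");
    System.out.println(CALLS); // [READ /vol1/bucket1, WRITE /vol1/bucket1]
  }
}
```

Factoring the shared precondition checks and logging into one helper is what lets the patch deprecate the old acquireLock while keeping its behavior.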
   
 



Issue Time Tracking
---

Worklog Id: (was: 324655)
Time Spent: 1h 50m  (was: 1h 40m)

> Use new ReadWrite lock in OzoneManager
> --
>
> Key: HDDS-2244
> URL: https://issues.apache.org/jira/browse/HDDS-2244
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Use new ReadWriteLock added in HDDS-2223.




[jira] [Work logged] (HDDS-2244) Use new ReadWrite lock in OzoneManager

2019-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2244?focusedWorklogId=324646&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-324646
 ]

ASF GitHub Bot logged work on HDDS-2244:


Author: ASF GitHub Bot
Created on: 07/Oct/19 21:57
Start Date: 07/Oct/19 21:57
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1589: 
HDDS-2244. Use new ReadWrite lock in OzoneManager.
URL: https://github.com/apache/hadoop/pull/1589#discussion_r332255791
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/lock/OzoneManagerLock.java
 ##
 @@ -105,15 +109,66 @@ public OzoneManagerLock(Configuration conf) {
* should be bucket name. For remaining all resource only one param should
* be passed.
*/
+  @Deprecated
   public boolean acquireLock(Resource resource, String... resources) {
 String resourceName = generateResourceName(resource, resources);
+return lock(resource, resourceName, manager::writeLock, WRITE_LOCK);
+  }
+
+  /**
+   * Acquire read lock on resource.
+   *
+   * For S3_BUCKET_LOCK, VOLUME_LOCK, BUCKET_LOCK type resource, same
+   * thread acquiring lock again is allowed.
+   *
+   * For USER_LOCK, PREFIX_LOCK, S3_SECRET_LOCK type resource, same thread
+   * acquiring lock again is not allowed.
+   *
+   * Special Note for USER_LOCK: Single thread can acquire single user lock/
+   * multi user lock. But not both at the same time.
+   * @param resource - Type of the resource.
+   * @param resources - Resource names on which user want to acquire lock.
+   * For Resource type BUCKET_LOCK, first param should be volume, second param
+   * should be bucket name. For remaining all resource only one param should
+   * be passed.
+   */
+  public boolean acquireReadLock(Resource resource, String... resources) {
+String resourceName = generateResourceName(resource, resources);
+return lock(resource, resourceName, manager::readLock, READ_LOCK);
+  }
+
+
+  /**
+   * Acquire write lock on resource.
+   *
+   * For S3_BUCKET_LOCK, VOLUME_LOCK, BUCKET_LOCK type resource, same
+   * thread acquiring lock again is allowed.
+   *
+   * For USER_LOCK, PREFIX_LOCK, S3_SECRET_LOCK type resource, same thread
+   * acquiring lock again is not allowed.
+   *
+   * Special Note for USER_LOCK: Single thread can acquire single user lock/
+   * multi user lock. But not both at the same time.
+   * @param resource - Type of the resource.
+   * @param resources - Resource names on which user want to acquire lock.
+   * For Resource type BUCKET_LOCK, first param should be volume, second param
+   * should be bucket name. For remaining all resource only one param should
+   * be passed.
+   */
+  public boolean acquireWriteLock(Resource resource, String... resources) {
+String resourceName = generateResourceName(resource, resources);
+return lock(resource, resourceName, manager::writeLock, WRITE_LOCK);
+  }
+
+  private boolean lock(Resource resource, String resourceName,
+  Consumer<String> lockFn, String lockType) {
 if (!resource.canLock(lockSet.get())) {
   String errorMessage = getErrorMessage(resource);
   LOG.error(errorMessage);
   throw new RuntimeException(errorMessage);
 } else {
-  manager.lock(resourceName);
-  LOG.debug("Acquired {} lock on resource {}", resource.name,
+  lockFn.accept(resourceName);
+  LOG.debug("Acquired {} {} lock on resource {}", lockType, resource.name,
   resourceName);
 
 Review comment:
   Here the first resource.name prints VOLUME_LOCK/BUCKET_LOCK, and the next 
resourceName prints the actual resource name.
 



Issue Time Tracking
---

Worklog Id: (was: 324646)
Time Spent: 1h 40m  (was: 1.5h)

> Use new ReadWrite lock in OzoneManager
> --
>
> Key: HDDS-2244
> URL: https://issues.apache.org/jira/browse/HDDS-2244
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Use new ReadWriteLock added in HDDS-2223.






[jira] [Work logged] (HDDS-2244) Use new ReadWrite lock in OzoneManager

2019-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2244?focusedWorklogId=324640&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-324640
 ]

ASF GitHub Bot logged work on HDDS-2244:


Author: ASF GitHub Bot
Created on: 07/Oct/19 21:53
Start Date: 07/Oct/19 21:53
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #1589: HDDS-2244. 
Use new ReadWrite lock in OzoneManager.
URL: https://github.com/apache/hadoop/pull/1589#discussion_r332251450
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/lock/OzoneManagerLock.java
 ##
 @@ -105,15 +109,66 @@ public OzoneManagerLock(Configuration conf) {
* should be bucket name. For remaining all resource only one param should
* be passed.
*/
+  @Deprecated
   public boolean acquireLock(Resource resource, String... resources) {
 String resourceName = generateResourceName(resource, resources);
+return lock(resource, resourceName, manager::writeLock, WRITE_LOCK);
+  }
+
+  /**
+   * Acquire read lock on resource.
+   *
+   * For S3_BUCKET_LOCK, VOLUME_LOCK, BUCKET_LOCK type resource, same
+   * thread acquiring lock again is allowed.
+   *
+   * For USER_LOCK, PREFIX_LOCK, S3_SECRET_LOCK type resource, same thread
+   * acquiring lock again is not allowed.
+   *
+   * Special Note for USER_LOCK: Single thread can acquire single user lock/
+   * multi user lock. But not both at the same time.
+   * @param resource - Type of the resource.
+   * @param resources - Resource names on which user want to acquire lock.
+   * For Resource type BUCKET_LOCK, first param should be volume, second param
+   * should be bucket name. For remaining all resource only one param should
+   * be passed.
+   */
+  public boolean acquireReadLock(Resource resource, String... resources) {
+String resourceName = generateResourceName(resource, resources);
+return lock(resource, resourceName, manager::readLock, READ_LOCK);
+  }
+
+
+  /**
+   * Acquire write lock on resource.
+   *
+   * For S3_BUCKET_LOCK, VOLUME_LOCK, BUCKET_LOCK type resource, same
+   * thread acquiring lock again is allowed.
+   *
+   * For USER_LOCK, PREFIX_LOCK, S3_SECRET_LOCK type resource, same thread
+   * acquiring lock again is not allowed.
+   *
+   * Special Note for USER_LOCK: Single thread can acquire single user lock/
+   * multi user lock. But not both at the same time.
+   * @param resource - Type of the resource.
+   * @param resources - Resource names on which user want to acquire lock.
+   * For Resource type BUCKET_LOCK, first param should be volume, second param
+   * should be bucket name. For remaining all resource only one param should
+   * be passed.
+   */
+  public boolean acquireWriteLock(Resource resource, String... resources) {
+String resourceName = generateResourceName(resource, resources);
+return lock(resource, resourceName, manager::writeLock, WRITE_LOCK);
+  }
+
+  private boolean lock(Resource resource, String resourceName,
+  Consumer<String> lockFn, String lockType) {
 if (!resource.canLock(lockSet.get())) {
   String errorMessage = getErrorMessage(resource);
   LOG.error(errorMessage);
   throw new RuntimeException(errorMessage);
 } else {
-  manager.lock(resourceName);
-  LOG.debug("Acquired {} lock on resource {}", resource.name,
+  lockFn.accept(resourceName);
+  LOG.debug("Acquired {} {} lock on resource {}", lockType, resource.name,
   resourceName);
 
 Review comment:
   I am trying to read this debug statement. Do you need to have the resource 
name twice, once via resource.name and again via resourceName?
 



Issue Time Tracking
---

Worklog Id: (was: 324640)
Time Spent: 1.5h  (was: 1h 20m)

> Use new ReadWrite lock in OzoneManager
> --
>
> Key: HDDS-2244
> URL: https://issues.apache.org/jira/browse/HDDS-2244
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Use new ReadWriteLock added in HDDS-2223.




[jira] [Work logged] (HDDS-1737) Add Volume check in KeyManager and File Operations

2019-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1737?focusedWorklogId=324637&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-324637
 ]

ASF GitHub Bot logged work on HDDS-1737:


Author: ASF GitHub Bot
Created on: 07/Oct/19 21:51
Start Date: 07/Oct/19 21:51
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1559: 
HDDS-1737. Add Volume check in KeyManager and File Operations.
URL: https://github.com/apache/hadoop/pull/1559#discussion_r332252751
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyDeleteRequest.java
 ##
 @@ -117,12 +121,19 @@ public OMClientResponse 
validateAndUpdateCache(OzoneManager ozoneManager,
   acquiredLock = omMetadataManager.getLock().acquireLock(BUCKET_LOCK,
   volumeName, bucketName);
 
-  // Not doing bucket/volume checks here. In this way we can avoid db
-  // checks for them.
-  // TODO: Once we have volume/bucket full cache, we can add
-  // them back, as these checks will be inexpensive at that time.
-  OmKeyInfo omKeyInfo = omMetadataManager.getKeyTable().get(objectKey);
+  // Check volume exist.
+  if (omMetadataManager.getVolumeTable().isExist(volumeName)) {
 
 Review comment:
   Here it should be if 
(!omMetadataManager.getVolumeTable().isExist(volumeName)) right?
   
   And also we should pass omMetadataManagerImpl.getVolumeKey/getBucketKey, 
not the direct volumeName/bucketName.
   
   As here, if it does not exist, we should return an error?
   
   ```
 /**
  * Check if a given key exists in Metadata store.
  * (Optimization to save on data deserialization)
  * A lock on the key / bucket needs to be acquired before invoking this 
API.
  * @param key metadata key
  * @return true if the metadata store contains a key.
  * @throws IOException on Failure
  */
 boolean isExist(KEY key) throws IOException;
   ```
 



Issue Time Tracking
---

Worklog Id: (was: 324637)
Time Spent: 1.5h  (was: 1h 20m)

> Add Volume check in KeyManager and File Operations
> --
>
> Key: HDDS-1737
> URL: https://issues.apache.org/jira/browse/HDDS-1737
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: newbie, pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> This is to address a TODO to check volume checks when performing Key/File 
> operations.
>  
> // TODO: Not checking volume exist here, once we have full cache we can
> // add volume exist check also.
>  






[jira] [Work logged] (HDDS-1737) Add Volume check in KeyManager and File Operations

2019-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1737?focusedWorklogId=324638&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-324638
 ]

ASF GitHub Bot logged work on HDDS-1737:


Author: ASF GitHub Bot
Created on: 07/Oct/19 21:51
Start Date: 07/Oct/19 21:51
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1559: 
HDDS-1737. Add Volume check in KeyManager and File Operations.
URL: https://github.com/apache/hadoop/pull/1559#discussion_r332253834
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyRenameRequest.java
 ##
 @@ -123,10 +126,17 @@ public OMClientResponse 
validateAndUpdateCache(OzoneManager ozoneManager,
   acquiredLock = omMetadataManager.getLock().acquireLock(BUCKET_LOCK,
   volumeName, bucketName);
 
-  // Not doing bucket/volume checks here. In this way we can avoid db
-  // checks for them.
-  // TODO: Once we have volume/bucket full cache, we can add
-  // them back, as these checks will be inexpensive at that time.
+  // Check volume exist.
+  if (omMetadataManager.getVolumeTable().isExist(volumeName)) {
+throw new OMException("Volume not found " + volumeName,
 
 Review comment:
   Same as above
 



Issue Time Tracking
---

Worklog Id: (was: 324638)
Time Spent: 1h 40m  (was: 1.5h)

> Add Volume check in KeyManager and File Operations
> --
>
> Key: HDDS-1737
> URL: https://issues.apache.org/jira/browse/HDDS-1737
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: newbie, pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> This is to address a TODO to check volume checks when performing Key/File 
> operations.
>  
> // TODO: Not checking volume exist here, once we have full cache we can
> // add volume exist check also.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1737) Add Volume check in KeyManager and File Operations

2019-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1737?focusedWorklogId=324636&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-324636
 ]

ASF GitHub Bot logged work on HDDS-1737:


Author: ASF GitHub Bot
Created on: 07/Oct/19 21:50
Start Date: 07/Oct/19 21:50
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1559: 
HDDS-1737. Add Volume check in KeyManager and File Operations.
URL: https://github.com/apache/hadoop/pull/1559#discussion_r332253241
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyDeleteRequest.java
 ##
 @@ -117,12 +121,19 @@ public OMClientResponse 
validateAndUpdateCache(OzoneManager ozoneManager,
   acquiredLock = omMetadataManager.getLock().acquireLock(BUCKET_LOCK,
   volumeName, bucketName);
 
-  // Not doing bucket/volume checks here. In this way we can avoid db
-  // checks for them.
-  // TODO: Once we have volume/bucket full cache, we can add
-  // them back, as these checks will be inexpensive at that time.
-  OmKeyInfo omKeyInfo = omMetadataManager.getKeyTable().get(objectKey);
+  // Check volume exist.
+  if (omMetadataManager.getVolumeTable().isExist(volumeName)) {
 
 Review comment:
   And also we can do a little optimization here: first check whether the 
bucket exists; if it does not exist, then check the volume?
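That check order can be sketched as follows (hypothetical in-memory sets stand in for the volume and bucket tables; the real code would use getVolumeKey()/getBucketKey() and throw OMException): look up the bucket first, and only consult the volume table on the failure path to decide which error to report.

```java
import java.util.Set;

// Sketch of the suggested check order (hypothetical stand-ins for the volume
// and bucket tables): a present bucket implies its volume exists, so the
// volume lookup is only needed when the bucket is missing.
public class ExistenceCheckSketch {
  static String check(Set<String> volumes, Set<String> buckets,
      String volumeName, String bucketName) {
    String bucketKey = "/" + volumeName + "/" + bucketName; // stand-in for getBucketKey()
    if (!buckets.contains(bucketKey)) {
      if (!volumes.contains("/" + volumeName)) {            // stand-in for getVolumeKey()
        return "VOLUME_NOT_FOUND";
      }
      return "BUCKET_NOT_FOUND";
    }
    return "OK"; // bucket exists, so the volume must exist: one lookup total
  }

  public static void main(String[] args) {
    Set<String> volumes = Set.of("/vol1");
    Set<String> buckets = Set.of("/vol1/bucket1");
    System.out.println(check(volumes, buckets, "vol1", "bucket1")); // OK
    System.out.println(check(volumes, buckets, "vol1", "nope"));    // BUCKET_NOT_FOUND
    System.out.println(check(volumes, buckets, "vol2", "b"));       // VOLUME_NOT_FOUND
  }
}
```

On the common success path this does a single existence check instead of two, which is the optimization being suggested.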
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 324636)
Time Spent: 1h 20m  (was: 1h 10m)

> Add Volume check in KeyManager and File Operations
> --
>
> Key: HDDS-1737
> URL: https://issues.apache.org/jira/browse/HDDS-1737
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: newbie, pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> This is to address a TODO to check volume checks when performing Key/File 
> operations.
>  
> // TODO: Not checking volume exist here, once we have full cache we can
> // add volume exist check also.
>  






[jira] [Work logged] (HDDS-1737) Add Volume check in KeyManager and File Operations

2019-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1737?focusedWorklogId=324634&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-324634
 ]

ASF GitHub Bot logged work on HDDS-1737:


Author: ASF GitHub Bot
Created on: 07/Oct/19 21:48
Start Date: 07/Oct/19 21:48
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1559: 
HDDS-1737. Add Volume check in KeyManager and File Operations.
URL: https://github.com/apache/hadoop/pull/1559#discussion_r332252751
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyDeleteRequest.java
 ##
 @@ -117,12 +121,19 @@ public OMClientResponse 
validateAndUpdateCache(OzoneManager ozoneManager,
   acquiredLock = omMetadataManager.getLock().acquireLock(BUCKET_LOCK,
   volumeName, bucketName);
 
-  // Not doing bucket/volume checks here. In this way we can avoid db
-  // checks for them.
-  // TODO: Once we have volume/bucket full cache, we can add
-  // them back, as these checks will be inexpensive at that time.
-  OmKeyInfo omKeyInfo = omMetadataManager.getKeyTable().get(objectKey);
+  // Check volume exist.
+  if (omMetadataManager.getVolumeTable().isExist(volumeName)) {
 
 Review comment:
   Here it should be if 
(!omMetadataManager.getVolumeTable().isExist(volumeName)) right?
   
   As here, if it does not exist, we should return an error?
   
   ```
 /**
  * Check if a given key exists in Metadata store.
  * (Optimization to save on data deserialization)
  * A lock on the key / bucket needs to be acquired before invoking this 
API.
  * @param key metadata key
  * @return true if the metadata store contains a key.
  * @throws IOException on Failure
  */
 boolean isExist(KEY key) throws IOException;
   ```
 



Issue Time Tracking
---

Worklog Id: (was: 324634)
Time Spent: 1h 10m  (was: 1h)

> Add Volume check in KeyManager and File Operations
> --
>
> Key: HDDS-1737
> URL: https://issues.apache.org/jira/browse/HDDS-1737
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: newbie, pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> This is to address a TODO to check volume checks when performing Key/File 
> operations.
>  
> // TODO: Not checking volume exist here, once we have full cache we can
> // add volume exist check also.
>  






[jira] [Commented] (HDDS-2259) Container Data Scrubber computes wrong checksum

2019-10-07 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16946279#comment-16946279
 ] 

Hudson commented on HDDS-2259:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17498 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17498/])
HDDS-2259. Container Data Scrubber computes wrong checksum (aengineer: rev 
aaa94c3da6e725cbf8118993d17502f852de6fc0)
* (edit) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/keyvalue/TestKeyValueContainerCheck.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueContainerCheck.java


> Container Data Scrubber computes wrong checksum
> ---
>
> Key: HDDS-2259
> URL: https://issues.apache.org/jira/browse/HDDS-2259
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Chunk checksum verification fails for (almost) any file.  This is caused by 
> computing checksum for the entire buffer, regardless of the actual size of 
> the chunk.
> {code:title=https://github.com/apache/hadoop/blob/55c5436f39120da0d7dabf43d7e5e6404307123b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueContainerCheck.java#L259-L273}
> byte[] buffer = new byte[cData.getBytesPerChecksum()];
> ...
> v = fs.read(buffer);
> ...
> bytesRead += v;
> ...
> ByteString actual = cal.computeChecksum(buffer)
> .getChecksums().get(0);
> {code}
> This results in marking all closed containers as unhealthy.
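The failure mode in the quoted snippet can be reproduced with a few lines: the buffer is sized to `bytesPerChecksum`, but the last chunk of a file rarely fills it, so checksumming the whole buffer mixes in stale trailing bytes. A minimal sketch, using `java.util.zip.CRC32` as a stand-in for Ozone's checksum calculator (names and the CRC choice are illustrative, not the real `Checksum` API):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.zip.CRC32;

public class ChunkChecksumSketch {
  // Buggy: checksums the entire buffer, including bytes past the last read.
  static long buggyChecksum(byte[] buffer, int bytesRead) {
    CRC32 crc = new CRC32();
    crc.update(buffer, 0, buffer.length);
    return crc.getValue();
  }

  // Fixed: bound the checksum by the number of bytes actually read.
  static long fixedChecksum(byte[] buffer, int bytesRead) {
    CRC32 crc = new CRC32();
    crc.update(buffer, 0, bytesRead);
    return crc.getValue();
  }

  public static void main(String[] args) throws IOException {
    byte[] chunk = "hello".getBytes();     // chunk shorter than bytesPerChecksum
    byte[] buffer = new byte[16];          // simulates cData.getBytesPerChecksum()
    InputStream fs = new ByteArrayInputStream(chunk);
    int v = fs.read(buffer);               // partial read: only v bytes are valid

    CRC32 expected = new CRC32();
    expected.update(chunk, 0, chunk.length);  // checksum recorded at write time

    System.out.println(buggyChecksum(buffer, v) == expected.getValue()); // false
    System.out.println(fixedChecksum(buffer, v) == expected.getValue()); // true
  }
}
```

Bounding the update by `bytesRead` (as the committed fix does for the real checksum calculator) makes verification agree with the checksum computed at write time.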



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1868) Ozone pipelines should be marked as ready only after the leader election is complete

2019-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1868?focusedWorklogId=324633&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-324633
 ]

ASF GitHub Bot logged work on HDDS-1868:


Author: ASF GitHub Bot
Created on: 07/Oct/19 21:44
Start Date: 07/Oct/19 21:44
Worklog Time Spent: 10m 
  Work Description: swagle commented on issue #1610: HDDS-1868. Ozone 
pipelines should be marked as ready only after the leader election is complete.
URL: https://github.com/apache/hadoop/pull/1610#issuecomment-539217658
 
 
   /label ozone
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 324633)
Time Spent: 40m  (was: 0.5h)

> Ozone pipelines should be marked as ready only after the leader election is 
> complete
> 
>
> Key: HDDS-1868
> URL: https://issues.apache.org/jira/browse/HDDS-1868
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Siddharth Wagle
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
> Attachments: HDDS-1868.01.patch, HDDS-1868.02.patch, 
> HDDS-1868.03.patch, HDDS-1868.04.patch, HDDS-1868.05.patch, HDDS-1868.06.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Ozone pipeline on create and restart, start in allocated state. They are 
> moved into open state after all the pipeline have reported to it. However, 
> this potentially can lead into an issue where the pipeline is still not ready 
> to accept any incoming IO operations.
> The pipelines should be marked as ready only after the leader election is 
> complete and leader is ready to accept incoming IO.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2262) SLEEP_SECONDS: command not found

2019-10-07 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-2262:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Committed to the trunk branch. Thanks for the contribution.

> SLEEP_SECONDS: command not found
> 
>
> Key: HDDS-2262
> URL: https://issues.apache.org/jira/browse/HDDS-2262
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: docker
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Trivial
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> {noformat}
> datanode_1  | /opt/hadoop/bin/docker/entrypoint.sh: line 66: SLEEP_SECONDS: 
> command not found
> datanode_1  | Sleeping for  seconds
> {noformat}
> Eg. 
> https://raw.githubusercontent.com/elek/ozone-ci-q4/master/pr/pr-hdds-2238-79fll/acceptance/docker-ozonesecure-ozonesecure-s3-s3g.log



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2262) SLEEP_SECONDS: command not found

2019-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2262?focusedWorklogId=324629&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-324629
 ]

ASF GitHub Bot logged work on HDDS-2262:


Author: ASF GitHub Bot
Created on: 07/Oct/19 21:39
Start Date: 07/Oct/19 21:39
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on issue #1606: HDDS-2262. 
SLEEP_SECONDS: command not found
URL: https://github.com/apache/hadoop/pull/1606#issuecomment-539216038
 
 
   Thank you for the contribution. I have committed this patch to the trunk.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 324629)
Time Spent: 40m  (was: 0.5h)

> SLEEP_SECONDS: command not found
> 
>
> Key: HDDS-2262
> URL: https://issues.apache.org/jira/browse/HDDS-2262
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: docker
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Trivial
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> {noformat}
> datanode_1  | /opt/hadoop/bin/docker/entrypoint.sh: line 66: SLEEP_SECONDS: 
> command not found
> datanode_1  | Sleeping for  seconds
> {noformat}
> Eg. 
> https://raw.githubusercontent.com/elek/ozone-ci-q4/master/pr/pr-hdds-2238-79fll/acceptance/docker-ozonesecure-ozonesecure-s3-s3g.log



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2262) SLEEP_SECONDS: command not found

2019-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2262?focusedWorklogId=324630&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-324630
 ]

ASF GitHub Bot logged work on HDDS-2262:


Author: ASF GitHub Bot
Created on: 07/Oct/19 21:39
Start Date: 07/Oct/19 21:39
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #1606: HDDS-2262. 
SLEEP_SECONDS: command not found
URL: https://github.com/apache/hadoop/pull/1606
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 324630)
Time Spent: 50m  (was: 40m)

> SLEEP_SECONDS: command not found
> 
>
> Key: HDDS-2262
> URL: https://issues.apache.org/jira/browse/HDDS-2262
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: docker
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Trivial
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> {noformat}
> datanode_1  | /opt/hadoop/bin/docker/entrypoint.sh: line 66: SLEEP_SECONDS: 
> command not found
> datanode_1  | Sleeping for  seconds
> {noformat}
> Eg. 
> https://raw.githubusercontent.com/elek/ozone-ci-q4/master/pr/pr-hdds-2238-79fll/acceptance/docker-ozonesecure-ozonesecure-s3-s3g.log



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14899) Use Relative URLS in Hadoop HDFS RBF

2019-10-07 Thread David Mollitor (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor updated HDFS-14899:
--
Status: Open  (was: Patch Available)

> Use Relative URLS in Hadoop HDFS RBF
> 
>
> Key: HDFS-14899
> URL: https://issues.apache.org/jira/browse/HDFS-14899
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
> Attachments: HDFS-14899.1.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14899) Use Relative URLS in Hadoop HDFS RBF

2019-10-07 Thread David Mollitor (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor updated HDFS-14899:
--
Attachment: HDFS-14899.1.patch

> Use Relative URLS in Hadoop HDFS RBF
> 
>
> Key: HDFS-14899
> URL: https://issues.apache.org/jira/browse/HDFS-14899
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
> Attachments: HDFS-14899.1.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14899) Use Relative URLS in Hadoop HDFS RBF

2019-10-07 Thread David Mollitor (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor updated HDFS-14899:
--
Status: Patch Available  (was: Open)

> Use Relative URLS in Hadoop HDFS RBF
> 
>
> Key: HDFS-14899
> URL: https://issues.apache.org/jira/browse/HDFS-14899
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
> Attachments: HDFS-14899.1.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14899) Use Relative URLS in Hadoop HDFS RBF

2019-10-07 Thread David Mollitor (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor updated HDFS-14899:
--
Attachment: (was: HDFS-14899.1.patch)

> Use Relative URLS in Hadoop HDFS RBF
> 
>
> Key: HDFS-14899
> URL: https://issues.apache.org/jira/browse/HDFS-14899
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
> Attachments: HDFS-14899.1.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2259) Container Data Scrubber computes wrong checksum

2019-10-07 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-2259:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Thank you for the contribution. I have committed this patch to the trunk.

> Container Data Scrubber computes wrong checksum
> ---
>
> Key: HDDS-2259
> URL: https://issues.apache.org/jira/browse/HDDS-2259
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Chunk checksum verification fails for (almost) any file.  This is caused by 
> computing checksum for the entire buffer, regardless of the actual size of 
> the chunk.
> {code:title=https://github.com/apache/hadoop/blob/55c5436f39120da0d7dabf43d7e5e6404307123b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueContainerCheck.java#L259-L273}
> byte[] buffer = new byte[cData.getBytesPerChecksum()];
> ...
> v = fs.read(buffer);
> ...
> bytesRead += v;
> ...
> ByteString actual = cal.computeChecksum(buffer)
> .getChecksums().get(0);
> {code}
> This results in marking all closed containers as unhealthy.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2259) Container Data Scrubber computes wrong checksum

2019-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2259?focusedWorklogId=324624&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-324624
 ]

ASF GitHub Bot logged work on HDDS-2259:


Author: ASF GitHub Bot
Created on: 07/Oct/19 21:36
Start Date: 07/Oct/19 21:36
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on issue #1605: HDDS-2259. 
Container Data Scrubber computes wrong checksum
URL: https://github.com/apache/hadoop/pull/1605#issuecomment-539214958
 
 
   +1. LGTM. Thank you for fixing this very important issue. I have committed 
this patch to the trunk.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 324624)
Time Spent: 40m  (was: 0.5h)

> Container Data Scrubber computes wrong checksum
> ---
>
> Key: HDDS-2259
> URL: https://issues.apache.org/jira/browse/HDDS-2259
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Chunk checksum verification fails for (almost) any file.  This is caused by 
> computing checksum for the entire buffer, regardless of the actual size of 
> the chunk.
> {code:title=https://github.com/apache/hadoop/blob/55c5436f39120da0d7dabf43d7e5e6404307123b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueContainerCheck.java#L259-L273}
> byte[] buffer = new byte[cData.getBytesPerChecksum()];
> ...
> v = fs.read(buffer);
> ...
> bytesRead += v;
> ...
> ByteString actual = cal.computeChecksum(buffer)
> .getChecksums().get(0);
> {code}
> This results in marking all closed containers as unhealthy.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2259) Container Data Scrubber computes wrong checksum

2019-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2259?focusedWorklogId=324625&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-324625
 ]

ASF GitHub Bot logged work on HDDS-2259:


Author: ASF GitHub Bot
Created on: 07/Oct/19 21:36
Start Date: 07/Oct/19 21:36
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #1605: HDDS-2259. 
Container Data Scrubber computes wrong checksum
URL: https://github.com/apache/hadoop/pull/1605
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 324625)
Time Spent: 50m  (was: 40m)

> Container Data Scrubber computes wrong checksum
> ---
>
> Key: HDDS-2259
> URL: https://issues.apache.org/jira/browse/HDDS-2259
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Chunk checksum verification fails for (almost) any file.  This is caused by 
> computing checksum for the entire buffer, regardless of the actual size of 
> the chunk.
> {code:title=https://github.com/apache/hadoop/blob/55c5436f39120da0d7dabf43d7e5e6404307123b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueContainerCheck.java#L259-L273}
> byte[] buffer = new byte[cData.getBytesPerChecksum()];
> ...
> v = fs.read(buffer);
> ...
> bytesRead += v;
> ...
> ByteString actual = cal.computeChecksum(buffer)
> .getChecksums().get(0);
> {code}
> This results in marking all closed containers as unhealthy.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14899) Use Relative URLS in Hadoop HDFS RBF

2019-10-07 Thread David Mollitor (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor updated HDFS-14899:
--
Status: Patch Available  (was: Open)

> Use Relative URLS in Hadoop HDFS RBF
> 
>
> Key: HDFS-14899
> URL: https://issues.apache.org/jira/browse/HDFS-14899
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
> Attachments: HDFS-14899.1.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14899) Use Relative URLS in Hadoop HDFS RBF

2019-10-07 Thread David Mollitor (Jira)
David Mollitor created HDFS-14899:
-

 Summary: Use Relative URLS in Hadoop HDFS RBF
 Key: HDFS-14899
 URL: https://issues.apache.org/jira/browse/HDFS-14899
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: rbf
Affects Versions: 3.2.0
Reporter: David Mollitor
Assignee: David Mollitor
 Attachments: HDFS-14899.1.patch





--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14899) Use Relative URLS in Hadoop HDFS RBF

2019-10-07 Thread David Mollitor (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor updated HDFS-14899:
--
Attachment: HDFS-14899.1.patch

> Use Relative URLS in Hadoop HDFS RBF
> 
>
> Key: HDFS-14899
> URL: https://issues.apache.org/jira/browse/HDFS-14899
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
> Attachments: HDFS-14899.1.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2264) Improve output of TestOzoneContainer

2019-10-07 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16946263#comment-16946263
 ] 

Hudson commented on HDDS-2264:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17497 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17497/])
HDDS-2264. Improve output of TestOzoneContainer (aengineer: rev 
cfba6ac9512b180d598a7a477a1ee0ea251e7b41)
* (edit) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/ozoneimpl/TestOzoneContainer.java


> Improve output of TestOzoneContainer
> 
>
> Key: HDDS-2264
> URL: https://issues.apache.org/jira/browse/HDDS-2264
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: test
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> TestOzoneContainer#testContainerCreateDiskFull fails intermittently 
> (HDDS-2263), but test output does not reveal too much about the reason.  The 
> goal of this task is to improve the assertion/output to make it easier to fix 
> the failure.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14898) Use Relative URLS in Hadoop HDFS HTTP FS

2019-10-07 Thread David Mollitor (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor updated HDFS-14898:
--
Status: Patch Available  (was: Open)

> Use Relative URLS in Hadoop HDFS HTTP FS
> 
>
> Key: HDFS-14898
> URL: https://issues.apache.org/jira/browse/HDFS-14898
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
> Attachments: HDFS-14898.1.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14898) Use Relative URLS in Hadoop HDFS HTTP FS

2019-10-07 Thread David Mollitor (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor updated HDFS-14898:
--
Attachment: HDFS-14898.1.patch

> Use Relative URLS in Hadoop HDFS HTTP FS
> 
>
> Key: HDFS-14898
> URL: https://issues.apache.org/jira/browse/HDFS-14898
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
> Attachments: HDFS-14898.1.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14898) Use Relative URLS in Hadoop HDFS HTTP FS

2019-10-07 Thread David Mollitor (Jira)
David Mollitor created HDFS-14898:
-

 Summary: Use Relative URLS in Hadoop HDFS HTTP FS
 Key: HDFS-14898
 URL: https://issues.apache.org/jira/browse/HDFS-14898
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: httpfs
Affects Versions: 3.2.0
Reporter: David Mollitor
Assignee: David Mollitor






--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14898) Use Relative URLS in Hadoop HDFS HTTP FS

2019-10-07 Thread David Mollitor (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor updated HDFS-14898:
--
Flags: Patch

> Use Relative URLS in Hadoop HDFS HTTP FS
> 
>
> Key: HDFS-14898
> URL: https://issues.apache.org/jira/browse/HDFS-14898
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2238) Container Data Scrubber spams log in empty cluster

2019-10-07 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16946248#comment-16946248
 ] 

Hudson commented on HDDS-2238:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17496 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17496/])
HDDS-2238. Container Data Scrubber spams log in empty cluster (aengineer: rev 
187731244067f6bf817ad352851cb27850b81c92)
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/ozoneimpl/ContainerDataScrubberMetrics.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/ozoneimpl/ContainerController.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/ozoneimpl/OzoneContainer.java
* (edit) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/impl/TestContainerSet.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/dn/scrubber/TestDataScrubber.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerSet.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/TestBlockDeletingService.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/ozoneimpl/ContainerMetadataScanner.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/ozoneimpl/ContainerDataScanner.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/ozoneimpl/ContainerScrubberConfiguration.java
* (edit) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/ozoneimpl/TestContainerScrubberMetrics.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/ozoneimpl/ContainerMetadataScrubberMetrics.java


> Container Data Scrubber spams log in empty cluster
> --
>
> Key: HDDS-2238
> URL: https://issues.apache.org/jira/browse/HDDS-2238
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> In an empty cluster (without closed containers) logs are filled with messages 
> from completed data scrubber iterations (~3600 per second for me), if 
> Container Scanner is enabled ({{hdds.containerscrub.enabled=true}}), eg.:
> {noformat}
> datanode_1  | 2019-10-03 15:43:57 INFO  ContainerDataScanner:114 - Completed 
> an iteration of container data scrubber in 0 minutes. Number of  iterations 
> (since the data-node restart) : 6763, Number of containers scanned in this 
> iteration : 0, Number of unhealthy containers found in this iteration : 0
> {noformat} 
> Also CPU usage is quite high.
> I think:
> # there should be a small sleep between iterations
> # it should log only if any containers were scanned
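The two suggestions above can be sketched together in a few lines. This is an illustrative loop, not the real `ContainerDataScanner` code; the method and message format are assumptions:

```java
public class ScrubberLogSketch {
  // Build the iteration summary only when containers were actually scanned;
  // a null return signals "skip logging" for a quiet iteration.
  static String iterationSummary(long iteration, int scanned, int unhealthy) {
    if (scanned == 0) {
      return null;   // nothing scanned: stay silent instead of spamming the log
    }
    return String.format(
        "Completed iteration %d of container data scrubber:"
            + " containers scanned = %d, unhealthy = %d",
        iteration, scanned, unhealthy);
  }

  public static void main(String[] args) throws InterruptedException {
    for (long i = 1; i <= 3; i++) {
      String line = iterationSummary(i, i == 2 ? 4 : 0, 0);
      if (line != null) {
        System.out.println(line);   // only iteration 2 produces output
      }
      Thread.sleep(5);   // small pause between iterations to avoid busy-looping
    }
  }
}
```

Gating the log line on `scanned > 0` and sleeping between iterations addresses both the log volume and the CPU usage noted in the report.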



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2264) Improve output of TestOzoneContainer

2019-10-07 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-2264:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

I have committed this patch to the trunk. Thank you for the contribution.

> Improve output of TestOzoneContainer
> 
>
> Key: HDDS-2264
> URL: https://issues.apache.org/jira/browse/HDDS-2264
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: test
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> TestOzoneContainer#testContainerCreateDiskFull fails intermittently 
> (HDDS-2263), but test output does not reveal too much about the reason.  The 
> goal of this task is to improve the assertion/output to make it easier to fix 
> the failure.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2264) Improve output of TestOzoneContainer

2019-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2264?focusedWorklogId=324612&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-324612
 ]

ASF GitHub Bot logged work on HDDS-2264:


Author: ASF GitHub Bot
Created on: 07/Oct/19 21:15
Start Date: 07/Oct/19 21:15
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on issue #1607: HDDS-2264. 
Improve output of TestOzoneContainer
URL: https://github.com/apache/hadoop/pull/1607#issuecomment-539207650
 
 
   +1. LGTM. I have committed this patch to the trunk. Thank you for the 
contribution.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 324612)
Time Spent: 40m  (was: 0.5h)

> Improve output of TestOzoneContainer
> 
>
> Key: HDDS-2264
> URL: https://issues.apache.org/jira/browse/HDDS-2264
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: test
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> TestOzoneContainer#testContainerCreateDiskFull fails intermittently 
> (HDDS-2263), but test output does not reveal too much about the reason.  The 
> goal of this task is to improve the assertion/output to make it easier to fix 
> the failure.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2264) Improve output of TestOzoneContainer

2019-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2264?focusedWorklogId=324613&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-324613
 ]

ASF GitHub Bot logged work on HDDS-2264:


Author: ASF GitHub Bot
Created on: 07/Oct/19 21:15
Start Date: 07/Oct/19 21:15
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #1607: HDDS-2264. 
Improve output of TestOzoneContainer
URL: https://github.com/apache/hadoop/pull/1607
 
 
   
 



Issue Time Tracking
---

Worklog Id: (was: 324613)
Time Spent: 50m  (was: 40m)







[jira] [Updated] (HDDS-2238) Container Data Scrubber spams log in empty cluster

2019-10-07 Thread Anu Engineer (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-2238:
---
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Committed to the trunk

> Container Data Scrubber spams log in empty cluster
> --
>
> Key: HDDS-2238
> URL: https://issues.apache.org/jira/browse/HDDS-2238
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> In an empty cluster (without closed containers), logs are filled with messages 
> from completed data scrubber iterations (~3600 per second for me) if the 
> Container Scanner is enabled ({{hdds.containerscrub.enabled=true}}), e.g.:
> {noformat}
> datanode_1  | 2019-10-03 15:43:57 INFO  ContainerDataScanner:114 - Completed 
> an iteration of container data scrubber in 0 minutes. Number of  iterations 
> (since the data-node restart) : 6763, Number of containers scanned in this 
> iteration : 0, Number of unhealthy containers found in this iteration : 0
> {noformat} 
> Also CPU usage is quite high.
> I think:
> # there should be a small sleep between iterations
> # it should log only if any containers were scanned
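
The two suggestions above can be sketched as follows (an illustrative loop only, with made-up class and method names, not the actual Hadoop `ContainerDataScanner` code):

```java
// Illustrative sketch only -- not the Hadoop implementation.
public class ScrubberLoopSketch {
    // Hypothetical scan of closed containers; returns the number scanned.
    // In an empty cluster there is nothing to scan.
    static int runIteration() {
        return 0;
    }

    public static void main(String[] args) throws InterruptedException {
        long iterations = 0;
        for (int i = 0; i < 3; i++) {
            int scanned = runIteration();
            iterations++;
            // Suggestion 2: log only if any containers were scanned,
            // so an idle datanode does not flood the log.
            if (scanned > 0) {
                System.out.println("Completed iteration " + iterations
                    + ": scanned " + scanned + " containers");
            }
            // Suggestion 1: small sleep between iterations to avoid
            // busy-looping (10 ms here; the real interval is a tuning choice).
            Thread.sleep(10);
        }
        System.out.println("iterations=" + iterations);
    }
}
```

On an empty cluster this loop prints nothing per iteration and burns almost no CPU, addressing both the log spam and the high CPU usage.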






[jira] [Work logged] (HDDS-2238) Container Data Scrubber spams log in empty cluster

2019-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2238?focusedWorklogId=324611=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-324611
 ]

ASF GitHub Bot logged work on HDDS-2238:


Author: ASF GitHub Bot
Created on: 07/Oct/19 21:05
Start Date: 07/Oct/19 21:05
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #1590: HDDS-2238. 
Container Data Scrubber spams log in empty cluster
URL: https://github.com/apache/hadoop/pull/1590
 
 
   
 



Issue Time Tracking
---

Worklog Id: (was: 324611)
Time Spent: 1h 50m  (was: 1h 40m)







[jira] [Work logged] (HDDS-2238) Container Data Scrubber spams log in empty cluster

2019-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2238?focusedWorklogId=324610=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-324610
 ]

ASF GitHub Bot logged work on HDDS-2238:


Author: ASF GitHub Bot
Created on: 07/Oct/19 21:05
Start Date: 07/Oct/19 21:05
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on issue #1590: HDDS-2238. 
Container Data Scrubber spams log in empty cluster
URL: https://github.com/apache/hadoop/pull/1590#issuecomment-539204102
 
 
   Thank you for the contribution. I have committed this change to the trunk 
branch.
 



Issue Time Tracking
---

Worklog Id: (was: 324610)
Time Spent: 1h 40m  (was: 1.5h)







[jira] [Work logged] (HDDS-2238) Container Data Scrubber spams log in empty cluster

2019-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2238?focusedWorklogId=324609=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-324609
 ]

ASF GitHub Bot logged work on HDDS-2238:


Author: ASF GitHub Bot
Created on: 07/Oct/19 21:03
Start Date: 07/Oct/19 21:03
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on issue #1590: HDDS-2238. 
Container Data Scrubber spams log in empty cluster
URL: https://github.com/apache/hadoop/pull/1590#issuecomment-539203108
 
 
   +1. LGTM. There is one thing that is not very clear to me: why add the 
container? It is not an issue, but I am not sure I understand the benefit 
either. 
 



Issue Time Tracking
---

Worklog Id: (was: 324609)
Time Spent: 1.5h  (was: 1h 20m)







[jira] [Commented] (HDFS-14305) Serial number in BlockTokenSecretManager could overlap between different namenodes

2019-10-07 Thread Arpit Agarwal (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16946210#comment-16946210
 ] 

Arpit Agarwal commented on HDFS-14305:
--

Incompatibility is not worse than an obviously broken implementation. Also Erik 
explained above the mitigation for the incompatibility.

This patch was committed over my valid technical objection. I hope you will 
respect that, as we have respected your objections in the past.

> Serial number in BlockTokenSecretManager could overlap between different 
> namenodes
> --
>
> Key: HDFS-14305
> URL: https://issues.apache.org/jira/browse/HDFS-14305
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, security
>Reporter: Chao Sun
>Assignee: Konstantin Shvachko
>Priority: Major
>  Labels: multi-sbnn, release-blocker
> Fix For: 2.10.0, 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-14305-007.patch, HDFS-14305-008.patch, 
> HDFS-14305.001.patch, HDFS-14305.002.patch, HDFS-14305.003.patch, 
> HDFS-14305.004.patch, HDFS-14305.005.patch, HDFS-14305.006.patch
>
>
> Currently, a {{BlockTokenSecretManager}} starts with a random integer as the 
> initial serial number, and then use this formula to rotate it:
> {code:java}
> this.intRange = Integer.MAX_VALUE / numNNs;
> this.nnRangeStart = intRange * nnIndex;
> this.serialNo = (this.serialNo % intRange) + (nnRangeStart);
>  {code}
> where {{numNNs}} is the total number of NameNodes in the cluster, and 
> {{nnIndex}} is the index of the current NameNode specified in the 
> configuration {{dfs.ha.namenodes.}}.
> However, with this approach, different NameNodes could have overlapping 
> ranges for serial numbers. For simplicity, let's assume {{Integer.MAX_VALUE}} is 100, 
> and we have 2 NameNodes {{nn1}} and {{nn2}} in configuration. Then the ranges 
> for these two are:
> {code}
> nn1 -> [-49, 49]
> nn2 -> [1, 99]
> {code}
> This is because the initial serial number could be any negative integer.
> Moreover, when the keys are updated, the serial number will again be updated 
> with the formula:
> {code}
> this.serialNo = (this.serialNo % intRange) + (nnRangeStart);
> {code}
> which means the new serial number could be updated to a range that belongs to 
> a different NameNode, thus increasing the chance of collision again.
> When the collision happens, DataNodes could overwrite an existing key which 
> will cause clients to fail because of {{InvalidToken}} error.
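
The overlap is easy to reproduce with the formula from the description, scaled down so that the maximum is 100 (a self-contained sketch; the class and method names are illustrative, not Hadoop code):

```java
// Illustrative sketch of the range computation from the issue description.
public class SerialRangeOverlap {
    // Range of (serialNo % intRange) + nnRangeStart over all int serialNo.
    // In Java, serialNo % intRange lies in (-intRange, intRange) because
    // the initial serial number may be negative.
    static int[] range(int max, int numNNs, int nnIndex) {
        int intRange = max / numNNs;            // 100 / 2 = 50
        int nnRangeStart = intRange * nnIndex;  // 0 for nn1, 50 for nn2
        return new int[] { nnRangeStart - (intRange - 1),
                           nnRangeStart + (intRange - 1) };
    }

    public static void main(String[] args) {
        int[] nn1 = range(100, 2, 0);
        int[] nn2 = range(100, 2, 1);
        // nn1 -> [-49, 49] and nn2 -> [1, 99]: the ranges overlap in [1, 49].
        System.out.println("nn1 -> [" + nn1[0] + ", " + nn1[1] + "]");
        System.out.println("nn2 -> [" + nn2[0] + ", " + nn2[1] + "]");
    }
}
```

Because Java's `%` operator takes the sign of the dividend, a negative initial serial number shifts nn1's range below its nominal start, producing the overlap with nn2 described above.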






[jira] [Work logged] (HDDS-1868) Ozone pipelines should be marked as ready only after the leader election is complete

2019-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1868?focusedWorklogId=324585=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-324585
 ]

ASF GitHub Bot logged work on HDDS-1868:


Author: ASF GitHub Bot
Created on: 07/Oct/19 20:14
Start Date: 07/Oct/19 20:14
Worklog Time Spent: 10m 
  Work Description: swagle commented on issue #1610: HDDS-1868. Ozone 
pipelines should be marked as ready only after the leader election is complete.
URL: https://github.com/apache/hadoop/pull/1610#issuecomment-539184469
 
 
   /retest
 



Issue Time Tracking
---

Worklog Id: (was: 324585)
Time Spent: 0.5h  (was: 20m)

> Ozone pipelines should be marked as ready only after the leader election is 
> complete
> 
>
> Key: HDDS-1868
> URL: https://issues.apache.org/jira/browse/HDDS-1868
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Siddharth Wagle
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
> Attachments: HDDS-1868.01.patch, HDDS-1868.02.patch, 
> HDDS-1868.03.patch, HDDS-1868.04.patch, HDDS-1868.05.patch, HDDS-1868.06.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Ozone pipelines, on create and restart, start in the allocated state. They 
> are moved into the open state after all the datanodes in the pipeline have 
> reported. However, this can potentially lead to an issue where the pipeline 
> is still not ready to accept any incoming IO operations.
> The pipelines should be marked as ready only after the leader election is 
> complete and the leader is ready to accept incoming IO.
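
The proposed ordering can be sketched as a tiny state machine (names are illustrative; this is not the SCM pipeline code):

```java
// Illustrative sketch: a pipeline becomes OPEN only after both the
// datanode reports AND the leader election have completed.
public class PipelineReadySketch {
    enum State { ALLOCATED, OPEN }

    static State state = State.ALLOCATED;
    static boolean allNodesReported = false;
    static boolean leaderElected = false;

    // Re-evaluated whenever either condition changes.
    static void tryOpen() {
        if (allNodesReported && leaderElected) {
            state = State.OPEN;
        }
    }

    public static void main(String[] args) {
        allNodesReported = true;
        tryOpen();
        System.out.println(state);  // still ALLOCATED: no leader yet
        leaderElected = true;
        tryOpen();
        System.out.println(state);  // OPEN: safe to accept IO
    }
}
```

The point of the change is the second condition: with only the datanode reports gating the transition, the pipeline could be marked open while the Ratis ring still has no leader to accept writes.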






[jira] [Work logged] (HDDS-2238) Container Data Scrubber spams log in empty cluster

2019-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2238?focusedWorklogId=324577=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-324577
 ]

ASF GitHub Bot logged work on HDDS-2238:


Author: ASF GitHub Bot
Created on: 07/Oct/19 20:01
Start Date: 07/Oct/19 20:01
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on issue #1590: HDDS-2238. 
Container Data Scrubber spams log in empty cluster
URL: https://github.com/apache/hadoop/pull/1590#issuecomment-539179385
 
 
   @anuengineer please review, too
 



Issue Time Tracking
---

Worklog Id: (was: 324577)
Time Spent: 1h 20m  (was: 1h 10m)







[jira] [Commented] (HDFS-14373) EC : Decoding is failing when block group last incomplete cell fall in to AlignedStripe

2019-10-07 Thread Surendra Singh Lilhore (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16946174#comment-16946174
 ] 

Surendra Singh Lilhore commented on HDFS-14373:
---

Need to create a new patch for branch-3.1.

> EC : Decoding is failing when block group last incomplete cell fall in to 
> AlignedStripe
> ---
>
> Key: HDFS-14373
> URL: https://issues.apache.org/jira/browse/HDFS-14373
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec, hdfs-client
>Affects Versions: 3.1.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Critical
> Attachments: HDFS-14373.001.patch, HDFS-14373.002.patch, 
> HDFS-14373.003.patch
>
>







[jira] [Work logged] (HDDS-1868) Ozone pipelines should be marked as ready only after the leader election is complete

2019-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1868?focusedWorklogId=324568=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-324568
 ]

ASF GitHub Bot logged work on HDDS-1868:


Author: ASF GitHub Bot
Created on: 07/Oct/19 19:29
Start Date: 07/Oct/19 19:29
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1610: HDDS-1868. Ozone 
pipelines should be marked as ready only after the leader election is complete.
URL: https://github.com/apache/hadoop/pull/1610#issuecomment-539167261
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 45 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 4 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 26 | Maven dependency ordering for branch |
   | -1 | mvninstall | 31 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 33 | hadoop-ozone in trunk failed. |
   | -1 | compile | 22 | hadoop-hdds in trunk failed. |
   | -1 | compile | 15 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 51 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 843 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 23 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 19 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 945 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 33 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 20 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 31 | Maven dependency ordering for patch |
   | -1 | mvninstall | 48 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 38 | hadoop-ozone in the patch failed. |
   | -1 | compile | 25 | hadoop-hdds in the patch failed. |
   | -1 | compile | 19 | hadoop-ozone in the patch failed. |
   | -1 | cc | 25 | hadoop-hdds in the patch failed. |
   | -1 | cc | 19 | hadoop-ozone in the patch failed. |
   | -1 | javac | 25 | hadoop-hdds in the patch failed. |
   | -1 | javac | 19 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 27 | hadoop-hdds: The patch generated 7 new + 0 
unchanged - 0 fixed = 7 total (was 0) |
   | -0 | checkstyle | 30 | hadoop-ozone: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 723 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 23 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 21 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 32 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 21 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 28 | hadoop-hdds in the patch failed. |
   | -1 | unit | 27 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 33 | The patch does not generate ASF License warnings. |
   | | | 2438 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.2 Server=19.03.2 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1610/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1610 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc xml |
   | uname | Linux 5d4b009da9f3 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 9685a6c |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1610/1/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1610/1/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1610/1/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1610/1/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1610/1/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1610/1/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 

[jira] [Commented] (HDDS-1868) Ozone pipelines should be marked as ready only after the leader election is complete

2019-10-07 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16946160#comment-16946160
 ] 

Hadoop QA commented on HDDS-1868:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 12m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
1s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  4m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
38s{color} | {color:red} hadoop-hdds in trunk failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
39s{color} | {color:red} hadoop-ozone in trunk failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
21s{color} | {color:red} hadoop-hdds in trunk failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
15s{color} | {color:red} hadoop-ozone in trunk failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 38s{color} | {color:orange} The patch fails to run checkstyle in 
hadoop-ozone {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
23s{color} | {color:red} hadoop-hdds in trunk failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
19s{color} | {color:red} hadoop-ozone in trunk failed. {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 15m 
59s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
35s{color} | {color:red} hadoop-hdds in trunk failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
21s{color} | {color:red} hadoop-ozone in trunk failed. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
30s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
34s{color} | {color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
36s{color} | {color:red} hadoop-ozone in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
25s{color} | {color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
19s{color} | {color:red} hadoop-ozone in the patch failed. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red}  0m 25s{color} | 
{color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red}  0m 19s{color} | 
{color:red} hadoop-ozone in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 25s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 19s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 28s{color} | {color:orange} hadoop-hdds: The patch generated 7 new + 0 
unchanged - 0 fixed = 7 total (was 0) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 29s{color} | {color:orange} The patch fails to run checkstyle in 
hadoop-ozone {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient 

[jira] [Commented] (HDFS-14162) Balancer should work with ObserverNode

2019-10-07 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16946158#comment-16946158
 ] 

Hadoop QA commented on HDFS-14162:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 21m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
51s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 
13s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m  5s{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 55s{color} | {color:green} branch-2 passed with JDK v1.8.0_222 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 40s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 52s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 24s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 59s{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 13s{color} | {color:green} branch-2 passed with JDK v1.8.0_222 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 30s{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  1m 40s{color} | {color:red} root in the patch failed with JDK v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  1m 40s{color} | {color:red} root in the patch failed with JDK v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  1m 24s{color} | {color:red} root in the patch failed with JDK v1.8.0_222. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  1m 24s{color} | {color:red} root in the patch failed with JDK v1.8.0_222. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  1m 31s{color} | {color:orange} root: The patch generated 1 new + 30 unchanged - 10 fixed = 31 total (was 40) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 30s{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m  0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 16s{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 29s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 45s{color} | {color:green} the patch passed with JDK v1.8.0_222 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 23s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 29s{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 70m  6s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 28s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}173m 58s{color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.snapshot.TestSnapshotBlocksMap |
|   | 

[jira] [Work logged] (HDDS-2265) integration.sh may report false negative

2019-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2265?focusedWorklogId=324564=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-324564
 ]

ASF GitHub Bot logged work on HDDS-2265:


Author: ASF GitHub Bot
Created on: 07/Oct/19 19:15
Start Date: 07/Oct/19 19:15
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #1608: HDDS-2265. 
integration.sh may report false negative
URL: https://github.com/apache/hadoop/pull/1608#discussion_r332190504
 
 

 ##
 File path: hadoop-ozone/dev-support/checks/_mvn_unit_report.sh
 ##
 @@ -45,6 +45,11 @@ grep -A1 'Crashed tests' "${REPORT_DIR}/output.log" \
   | cut -f2- -d' ' \
   | sort -u >> "${REPORT_DIR}/summary.txt"
 
+## Check if Maven was killed
+if grep -q 'Killed.* mvn .* test ' "${REPORT_DIR}/output.log"; then
 
 Review comment:
   So we are presuming that Killed will never be used by a test? :) I am fine 
with that.
   +1. LGTM.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 324564)
Time Spent: 50m  (was: 40m)

> integration.sh may report false negative
> 
>
> Key: HDDS-2265
> URL: https://issues.apache.org/jira/browse/HDDS-2265
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build, test
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Sometimes integration test run gets killed, and {{integration.sh}} 
> incorrectly reports "success".  Example:
> {noformat:title=https://github.com/elek/ozone-ci-q4/tree/ae930d6f7f10c7d2aeaf1f2f21b18ada954ea444/pr/pr-hdds-2259-hlwmv/integration/result}
> success
> {noformat}
> {noformat:title=https://github.com/elek/ozone-ci-q4/blob/ae930d6f7f10c7d2aeaf1f2f21b18ada954ea444/pr/pr-hdds-2259-hlwmv/integration/output.log#L2457}
> /workdir/hadoop-ozone/dev-support/checks/integration.sh: line 22:   369 
> Killed  mvn -B -fn test -f pom.ozone.xml -pl 
> :hadoop-ozone-integration-test,:hadoop-ozone-filesystem,:hadoop-ozone-tools 
> -Dtest=\!TestMiniChaosOzoneCluster "$@"
> {noformat}
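> The guard under review can be sketched as a minimal, self-contained script: if the build log contains a shell "Killed ... mvn ... test" line, the check should record "failure" instead of "success". This is an illustration only; the {{REPORT_DIR}}/output.log layout follows the snippet in the review, and the sample log content below is fabricated.
> {noformat}
> #!/usr/bin/env bash
> # Sketch of the false-negative guard: scan the captured Maven output for a
> # line indicating the mvn test process was killed (e.g. by the OOM killer).
> REPORT_DIR="$(mktemp -d)"
>
> # Fabricated log excerpt resembling the one quoted in this issue.
> cat > "${REPORT_DIR}/output.log" <<'EOF'
> [INFO] Running some.integration.Test
> /workdir/checks/integration.sh: line 22:   369 Killed  mvn -B -fn test -f pom.ozone.xml -pl :hadoop-ozone-integration-test
> EOF
>
> result=success
> # Same pattern as in the patch: a "Killed" word followed by the mvn test
> # invocation on the same line marks the run as unreliable.
> if grep -q 'Killed.* mvn .* test ' "${REPORT_DIR}/output.log"; then
>   result=failure
> fi
> echo "${result}" > "${REPORT_DIR}/result"
> cat "${REPORT_DIR}/result"
> {noformat}
> Run against the fabricated log above, the script writes {{failure}} to the result file, which is the behavior the patch adds on top of the existing summary extraction.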



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14373) EC : Decoding is failing when block group last incomplete cell fall in to AlignedStripe

2019-10-07 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16946156#comment-16946156
 ] 

Hudson commented on HDFS-14373:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17495 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17495/])
HDFS-14373. EC : Decoding is failing when block group last incomplete 
(surendralilhore: rev 382967be51052d59e31d8d05713645b8d3c2325b)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/StripedBlockUtil.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/StripeReader.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedInputStream.java


> EC : Decoding is failing when block group last incomplete cell fall in to 
> AlignedStripe
> ---
>
> Key: HDFS-14373
> URL: https://issues.apache.org/jira/browse/HDFS-14373
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec, hdfs-client
>Affects Versions: 3.1.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Critical
> Attachments: HDFS-14373.001.patch, HDFS-14373.002.patch, 
> HDFS-14373.003.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14373) EC : Decoding is failing when block group last incomplete cell fall in to AlignedStripe

2019-10-07 Thread Surendra Singh Lilhore (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16946155#comment-16946155
 ] 

Surendra Singh Lilhore commented on HDFS-14373:
---

Committed to branch-3.2 and trunk.

> EC : Decoding is failing when block group last incomplete cell fall in to 
> AlignedStripe
> ---
>
> Key: HDFS-14373
> URL: https://issues.apache.org/jira/browse/HDFS-14373
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec, hdfs-client
>Affects Versions: 3.1.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Critical
> Attachments: HDFS-14373.001.patch, HDFS-14373.002.patch, 
> HDFS-14373.003.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


