[jira] [Resolved] (HDFS-13014) Moving logging APIs over to slf4j in hadoop-hdfs-nfs

2018-01-11 Thread Takanobu Asanuma (JIRA)

 [ https://issues.apache.org/jira/browse/HDFS-13014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Takanobu Asanuma resolved HDFS-13014.
-------------------------------------
Resolution: Duplicate

Duplicate of HDFS-12829.

> Moving logging APIs over to slf4j in hadoop-hdfs-nfs
> ----------------------------------------------------
>
> Key: HDFS-13014
> URL: https://issues.apache.org/jira/browse/HDFS-13014
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>
> Replaced log4j APIs with slf4j APIs.






[jira] [Resolved] (HDFS-13015) Moving logging APIs over to slf4j in hadoop-hdfs

2018-01-11 Thread Takanobu Asanuma (JIRA)

 [ https://issues.apache.org/jira/browse/HDFS-13015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Takanobu Asanuma resolved HDFS-13015.
-------------------------------------
Resolution: Duplicate

> Moving logging APIs over to slf4j in hadoop-hdfs
> ------------------------------------------------
>
> Key: HDFS-13015
> URL: https://issues.apache.org/jira/browse/HDFS-13015
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>
> Since there are many places using log4j in hadoop-hdfs, creating sub-tasks 
> may be better.
> {noformat}
> find hadoop-hdfs-project/hadoop-hdfs -name "*.java" | xargs grep "import org.apache.commons.logging.Log" | wc -l
> 620
> {noformat}






[jira] [Created] (HDFS-13015) Moving logging APIs over to slf4j in hadoop-hdfs

2018-01-11 Thread Takanobu Asanuma (JIRA)
Takanobu Asanuma created HDFS-13015:
------------------------------------

 Summary: Moving logging APIs over to slf4j in hadoop-hdfs
 Key: HDFS-13015
 URL: https://issues.apache.org/jira/browse/HDFS-13015
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Takanobu Asanuma
Assignee: Takanobu Asanuma


Since there are many places using log4j in hadoop-hdfs, creating sub-tasks may 
be better.
{noformat}
find hadoop-hdfs-project/hadoop-hdfs -name "*.java" | xargs grep "import org.apache.commons.logging.Log" | wc -l
620
{noformat}
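
To decide how to slice those sub-tasks, a per-directory breakdown of where the imports cluster may help; a rough sketch:

{noformat}
find hadoop-hdfs-project/hadoop-hdfs -name "*.java" \
  | xargs grep -l "import org.apache.commons.logging.Log" \
  | xargs -n1 dirname | sort | uniq -c | sort -rn | head
{noformat}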






[jira] [Created] (HDFS-13014) Moving logging APIs over to slf4j in hadoop-hdfs-nfs

2018-01-11 Thread Takanobu Asanuma (JIRA)
Takanobu Asanuma created HDFS-13014:
------------------------------------

 Summary: Moving logging APIs over to slf4j in hadoop-hdfs-nfs
 Key: HDFS-13014
 URL: https://issues.apache.org/jira/browse/HDFS-13014
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Takanobu Asanuma
Assignee: Takanobu Asanuma


Replaced log4j APIs with slf4j APIs.
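
The change itself is mechanical. A minimal before/after sketch of the migration (RpcProgramNfs3 is used here only as an illustrative class name):

{code}
// Before: commons-logging (typically backed by log4j)
//   import org.apache.commons.logging.Log;
//   import org.apache.commons.logging.LogFactory;
//   private static final Log LOG = LogFactory.getLog(RpcProgramNfs3.class);

// After: slf4j
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class RpcProgramNfs3 {
  private static final Logger LOG =
      LoggerFactory.getLogger(RpcProgramNfs3.class);

  void export(String volume) {
    // slf4j's {} placeholders defer string construction until the level
    // is enabled, replacing the isDebugEnabled() guard idiom.
    LOG.debug("Exporting volume {}", volume);
  }
}
{code}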






[jira] [Created] (HDFS-13013) Fix ContainerMapping#closeContainer after HDFS-12980

2018-01-11 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDFS-13013:
------------------------------

 Summary: Fix ContainerMapping#closeContainer after HDFS-12980
 Key: HDFS-13013
 URL: https://issues.apache.org/jira/browse/HDFS-13013
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


The SCMCLI close container command is based on ContainerMapping#closeContainer, 
which was written against the pre-HDFS-12980 state machine (open->closed).

HDFS-12980 changes the container state machine: a container now has to be 
finalized into the closing state before it can be closed 
(open->closing->closed). This ticket is opened to fix 
ContainerMapping#closeContainer to match the new state machine.
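
For illustration, closeContainer can no longer jump straight from open to 
closed; it has to drive the container through the closing state first. A 
minimal sketch of the new transition logic (class and method names are 
illustrative, not the actual Ozone code):

{code}
// Hypothetical sketch of the post-HDFS-12980 lifecycle: OPEN -> CLOSING -> CLOSED.
enum ContainerState { OPEN, CLOSING, CLOSED }

class Container {
  private ContainerState state = ContainerState.OPEN;

  ContainerState getState() { return state; }

  // Finalize first: OPEN -> CLOSING.
  void finalizeContainer() {
    if (state != ContainerState.OPEN) {
      throw new IllegalStateException("cannot finalize from " + state);
    }
    state = ContainerState.CLOSING;
  }

  // Then close: CLOSING -> CLOSED; a direct OPEN -> CLOSED jump is rejected.
  void close() {
    if (state != ContainerState.CLOSING) {
      throw new IllegalStateException("cannot close from " + state);
    }
    state = ContainerState.CLOSED;
  }
}

class ContainerMappingSketch {
  // closeContainer must now finalize before closing.
  void closeContainer(Container c) {
    if (c.getState() == ContainerState.OPEN) {
      c.finalizeContainer();  // OPEN -> CLOSING
    }
    c.close();                // CLOSING -> CLOSED
  }
}
{code}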






[jira] [Created] (HDFS-13012) Fix TestOzoneConfigurationFields after HDFS-12853

2018-01-11 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDFS-13012:
------------------------------

 Summary: Fix TestOzoneConfigurationFields after HDFS-12853
 Key: HDFS-13012
 URL: https://issues.apache.org/jira/browse/HDFS-13012
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Xiaoyu Yao
Assignee: Mukul Kumar Singh
Priority: Minor


"dfs.container.ratis.num.write.chunk.threads" and 
"dfs.container.ratis.segment.size" were added with HDFS-12853, they need to be 
added to ozone-default.xml to unblock test failures in 
TestOzoneConfigurationFields. cc: [~msingh]
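
The fix is to declare both keys in ozone-default.xml. A sketch of the entries 
(the values and descriptions below are placeholders, not the actual defaults 
from HDFS-12853):

{code}
<!-- ozone-default.xml: illustrative entries; values/descriptions are placeholders -->
<property>
  <name>dfs.container.ratis.num.write.chunk.threads</name>
  <value>60</value>
  <description>Number of threads used to execute write-chunk requests in the
    Ratis container pipeline. (Placeholder value.)</description>
</property>
<property>
  <name>dfs.container.ratis.segment.size</name>
  <value>1073741824</value>
  <description>Size in bytes of the Ratis log segment. (Placeholder
    value.)</description>
</property>
{code}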






[jira] [Created] (HDFS-13011) Support replacing multiple nodes during pipeline recovery and append

2018-01-11 Thread Vinayakumar B (JIRA)
Vinayakumar B created HDFS-13011:
---------------------------------

 Summary: Support replacing multiple nodes during pipeline recovery 
and append
 Key: HDFS-13011
 URL: https://issues.apache.org/jira/browse/HDFS-13011
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Reporter: Vinayakumar B


During pipeline recovery, only one additional node is requested and used to 
replace the failed node.
But if the initial pipeline size is less than the replication factor, extra 
nodes could be added during pipeline recovery so that the write itself 
satisfies the replication factor, as sketched below.
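
A sketch of the intended arithmetic (a hypothetical helper, not the actual 
DataStreamer code):

{code}
// Hypothetical sketch: how many datanodes to request when rebuilding the
// pipeline. Today the client requests one replacement; the proposal is to
// top the pipeline back up to the replication factor.
class PipelineRecoverySketch {
  static int nodesToRequest(int replicationFactor, int survivingNodes) {
    return Math.max(1, replicationFactor - survivingNodes);
  }
}
// e.g. replication=3, pipeline built with 2 nodes and 1 failed:
// nodesToRequest(3, 1) == 2, restoring full replication during the write.
{code}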






[jira] [Created] (HDFS-13010) DataNode: Listen queue is always 128

2018-01-11 Thread Gopal V (JIRA)
Gopal V created HDFS-13010:
---------------------------

 Summary: DataNode: Listen queue is always 128
 Key: HDFS-13010
 URL: https://issues.apache.org/jira/browse/HDFS-13010
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 3.0.0
Reporter: Gopal V


DFS write-heavy workloads are failing with 

{code}
18/01/11 05:02:34 INFO mapreduce.Job: Task Id : attempt_1515660475578_0007_m_000387_0, Status : FAILED
Error: java.io.IOException: Could not get block locations. Source file "/tmp/tpcds-generate/1/_temporary/1/_temporary/attempt_1515660475578_0007_m_000387_0/inventory/data-m-00387" - Aborting...block==null
    at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1477)
    at org.apache.hadoop.hdfs.DataStreamer.processDatanodeOrExternalError(DataStreamer.java:1256)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:667)
{code}

This was tracked down to:

{code}
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.hdfs.DataStreamer.createSocketForPipeline(DataStreamer.java:253)
    at org.apache.hadoop.hdfs.DataStreamer$StreamerStreams.<init>(DataStreamer.java:162)
    at org.apache.hadoop.hdfs.DataStreamer.transfer(DataStreamer.java:1450)
    at org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1407)
    at org.apache.hadoop.hdfs.DataStreamer.handleDatanodeReplacement(DataStreamer.java:1598)
    at org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1499)
    at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1481)
    at org.apache.hadoop.hdfs.DataStreamer.processDatanodeOrExternalError(DataStreamer.java:1256)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:667)
{code}

{code}
# ss -tl | grep 50010
LISTEN     0      128      *:50010      *:*
{code}

However, the system is configured with a much higher somaxconn

{code}
# sysctl -a | grep somaxconn

net.core.somaxconn = 16000
{code}

Yet the SNMP counters show connections being refused, with {{127 times the 
listen queue of a socket overflowed}}.
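
The underlying mechanics: the accept-queue length is whatever backlog the 
server passed to bind()/listen(), and Linux caps that value at 
net.core.somaxconn, so raising somaxconn alone does nothing if the process 
always binds with a hard-coded backlog of 128. A minimal sketch of the idiom 
(the system property here is a hypothetical knob, not a Hadoop configuration 
key):

{code}
import java.net.InetSocketAddress;
import java.nio.channels.ServerSocketChannel;

public class BacklogSketch {
  public static void main(String[] args) throws Exception {
    // The backlog is fixed when the socket starts listening; the kernel
    // silently caps it at net.core.somaxconn. Both must be raised to get
    // a deeper accept queue.
    int backlog = Integer.getInteger("sketch.listen.backlog", 128); // hypothetical
    try (ServerSocketChannel server = ServerSocketChannel.open()) {
      server.bind(new InetSocketAddress(50010), backlog);
      System.out.println("Listening on 50010 with requested backlog " + backlog);
    }
  }
}
{code}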




