[jira] [Created] (HADOOP-17440) Downgrade guava version in trunk

2020-12-18 Thread Lisheng Sun (Jira)
Lisheng Sun created HADOOP-17440:


 Summary: Downgrade guava version in trunk
 Key: HADOOP-17440
 URL: https://issues.apache.org/jira/browse/HADOOP-17440
 Project: Hadoop Common
  Issue Type: Task
Reporter: Lisheng Sun






--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17439) No shade guava in branch

2020-12-18 Thread Lisheng Sun (Jira)
Lisheng Sun created HADOOP-17439:


 Summary: No shade guava in branch
 Key: HADOOP-17439
 URL: https://issues.apache.org/jira/browse/HADOOP-17439
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Lisheng Sun
 Attachments: image-2020-12-18-22-01-45-424.png

!image-2020-12-18-22-01-45-424.png!






[jira] [Created] (HADOOP-16793) Remove WARN log when ipc connection interrupted in Client#handleSaslConnectionFailure()

2020-01-07 Thread Lisheng Sun (Jira)
Lisheng Sun created HADOOP-16793:


 Summary: Remove WARN log when ipc connection interrupted in 
Client#handleSaslConnectionFailure()
 Key: HADOOP-16793
 URL: https://issues.apache.org/jira/browse/HADOOP-16793
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Lisheng Sun


{code:java}
private synchronized void handleSaslConnectionFailure(...) {
  ...
  LOG.warn("Exception encountered while connecting to "
      + "the server : " + ex);
}
{code}
With RequestHedgingProxyProvider, one RPC call sends requests to all NameNodes. 
After one request returns successfully, all other requests are interrupted. 
This is expected behavior, not a problem, so it should not be logged at WARN 
level.
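A minimal sketch of the proposed change (class and method names are simplified for illustration; `isExpectedInterrupt` and `logLevelFor` are hypothetical helpers, not Hadoop API):

```java
// Sketch: log at WARN only when the SASL connection failure is not caused
// by an interrupt. With RequestHedgingProxyProvider, losing hedged requests
// are interrupted on purpose, so an InterruptedIOException is expected.
import java.io.InterruptedIOException;

public class SaslFailureLogging {
  // Hypothetical helper: is this exception just the expected interruption
  // of a losing hedged request?
  static boolean isExpectedInterrupt(Exception ex) {
    return ex instanceof InterruptedIOException
        || ex instanceof InterruptedException
        || Thread.currentThread().isInterrupted();
  }

  // Hypothetical helper: pick the log level for a connection failure.
  static String logLevelFor(Exception ex) {
    return isExpectedInterrupt(ex) ? "DEBUG" : "WARN";
  }

  public static void main(String[] args) {
    System.out.println(logLevelFor(new InterruptedIOException("hedge lost")));
    System.out.println(logLevelFor(new RuntimeException("real failure")));
  }
}
```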






[jira] [Resolved] (HADOOP-16720) Optimize LowRedundancyBlocks#chooseLowRedundancyBlocks()

2019-11-19 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun resolved HADOOP-16720.
--
Resolution: Duplicate

> Optimize LowRedundancyBlocks#chooseLowRedundancyBlocks()
> 
>
> Key: HADOOP-16720
> URL: https://issues.apache.org/jira/browse/HADOOP-16720
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Priority: Major
>
> When priority == QUEUE_WITH_CORRUPT_BLOCKS, it means no block in 
> neededReplication needs a replica.
> In the current code, using continue results in one more invalid check of 
> (priority == QUEUE_WITH_CORRUPT_BLOCKS) before the loop exits.
> I think it should use break instead of continue.
> {code:java}
> synchronized List<List<BlockInfo>> chooseLowRedundancyBlocks(
>     int blocksToProcess) {
>   final List<List<BlockInfo>> blocksToReconstruct = new ArrayList<>(LEVEL);
>   int count = 0;
>   int priority = 0;
>   for (; count < blocksToProcess && priority < LEVEL; priority++) {
>     if (priority == QUEUE_WITH_CORRUPT_BLOCKS) {
>       // do not choose corrupted blocks.
>       continue;
>     }
>     ...
>   }
> }
> {code}






[jira] [Created] (HADOOP-16720) Optimize LowRedundancyBlocks#chooseLowRedundancyBlocks()

2019-11-19 Thread Lisheng Sun (Jira)
Lisheng Sun created HADOOP-16720:


 Summary: Optimize LowRedundancyBlocks#chooseLowRedundancyBlocks()
 Key: HADOOP-16720
 URL: https://issues.apache.org/jira/browse/HADOOP-16720
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Lisheng Sun


When priority == QUEUE_WITH_CORRUPT_BLOCKS, it means no block in 
neededReplication needs a replica.

In the current code, using continue results in one more invalid check of 
(priority == QUEUE_WITH_CORRUPT_BLOCKS) before the loop exits.

I think it should use break instead of continue.
{code:java}
synchronized List<List<BlockInfo>> chooseLowRedundancyBlocks(
    int blocksToProcess) {
  final List<List<BlockInfo>> blocksToReconstruct = new ArrayList<>(LEVEL);
  int count = 0;
  int priority = 0;
  for (; count < blocksToProcess && priority < LEVEL; priority++) {
    if (priority == QUEUE_WITH_CORRUPT_BLOCKS) {
      // do not choose corrupted blocks.
      continue;
    }
    ...
  }
}
{code}
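The loop shape above can be sketched with the queue internals stubbed out, to show that `break` and `continue` visit the same queues when QUEUE_WITH_CORRUPT_BLOCKS is the last level (the constants mirror HDFS's 5 and 4; the stubbed body is illustrative, not the real implementation):

```java
// Sketch of the chooseLowRedundancyBlocks() loop: with the corrupt queue
// as the last priority level, `break` (proposed) and `continue` (current)
// select exactly the same queues; `continue` just performs one extra
// increment and loop-condition check.
public class ChooseLoop {
  static final int LEVEL = 5;
  static final int QUEUE_WITH_CORRUPT_BLOCKS = 4;

  // Returns how many priority levels were actually scanned.
  static int visitedLevels(boolean useBreak) {
    int visited = 0;
    for (int priority = 0; priority < LEVEL; priority++) {
      if (priority == QUEUE_WITH_CORRUPT_BLOCKS) {
        if (useBreak) break; // proposed: leave the loop immediately
        continue;            // current: one more increment + bound check
      }
      visited++;             // stand-in for scanning the queue
    }
    return visited;
  }

  public static void main(String[] args) {
    // Both variants scan the same four non-corrupt queues.
    System.out.println(visitedLevels(true) == visitedLevels(false));
  }
}
```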






[jira] [Reopened] (HADOOP-16671) Optimize InnerNodeImpl#getLeaf

2019-10-27 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun reopened HADOOP-16671:
--

> Optimize InnerNodeImpl#getLeaf
> --
>
> Key: HADOOP-16671
> URL: https://issues.apache.org/jira/browse/HADOOP-16671
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HADOOP-16671.001.patch
>
>
> {code:java}
> @Override
> public Node getLeaf(int leafIndex, Node excludedNode) {
>   int count=0;
>   // check if the excluded node a leaf
>   boolean isLeaf = !(excludedNode instanceof InnerNode);
>   // calculate the total number of excluded leaf nodes
>   int numOfExcludedLeaves =
>   isLeaf ? 1 : ((InnerNode)excludedNode).getNumOfLeaves();
>   if (isLeafParent()) { // children are leaves
> if (isLeaf) { // excluded node is a leaf node
>   if (excludedNode != null &&
>   childrenMap.containsKey(excludedNode.getName())) {
> int excludedIndex = children.indexOf(excludedNode);
> if (excludedIndex != -1 && leafIndex >= 0) {
>   // excluded node is one of the children so adjust the leaf index
>   leafIndex = leafIndex>=excludedIndex ? leafIndex+1 : leafIndex;
> }
>   }
> }
> // range check
> if (leafIndex<0 || leafIndex>=this.getNumOfChildren()) {
>   return null;
> }
> return children.get(leafIndex);
>   } else {
> {code}
> The code of InnerNodeImpl#getLeaf() is shown above.
> I think it has two problems:
> 1. If childrenMap.containsKey(excludedNode.getName()) returns true, then 
> children.indexOf(excludedNode) must return a value > -1, so is the check 
> (excludedIndex != -1) necessary?
> 2. If excludedIndex == children.size() - 1, the current code
> leafIndex = leafIndex >= excludedIndex ? leafIndex + 1 : leafIndex;
> pushes leafIndex out of range and returns null, although there are still 
> nodes that could be returned.
> I think a check for excludedIndex == children.size() - 1 should be added.
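The boundary case in point 2 can be reduced to plain ints (a simplified sketch, not the real InnerNodeImpl; `adjust` and `choose` are hypothetical names):

```java
// Sketch of the leaf-index adjustment from InnerNodeImpl#getLeaf(): when
// the excluded child is the last one, shifting leafIndex past it pushes
// the index out of range even though valid children may remain.
public class LeafIndex {
  // Mirrors: leafIndex = leafIndex >= excludedIndex ? leafIndex + 1 : leafIndex;
  static int adjust(int leafIndex, int excludedIndex) {
    return leafIndex >= excludedIndex ? leafIndex + 1 : leafIndex;
  }

  // Returns the chosen child index, or -1 standing in for "return null".
  static int choose(int leafIndex, int excludedIndex, int numChildren) {
    int adjusted = adjust(leafIndex, excludedIndex);
    return (adjusted < 0 || adjusted >= numChildren) ? -1 : adjusted;
  }

  public static void main(String[] args) {
    // 3 children, exclude the last (index 2): asking for leaf 2 yields
    // adjusted index 3, out of range -> "null", although children 0 and 1
    // are still available.
    System.out.println(choose(2, 2, 3));
  }
}
```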






[jira] [Created] (HADOOP-16671) Optimize InnerNodeImpl#getLeaf

2019-10-25 Thread Lisheng Sun (Jira)
Lisheng Sun created HADOOP-16671:


 Summary: Optimize InnerNodeImpl#getLeaf
 Key: HADOOP-16671
 URL: https://issues.apache.org/jira/browse/HADOOP-16671
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Lisheng Sun


{code:java}
@Override
public Node getLeaf(int leafIndex, Node excludedNode) {
  int count=0;
  // check if the excluded node a leaf
  boolean isLeaf = !(excludedNode instanceof InnerNode);
  // calculate the total number of excluded leaf nodes
  int numOfExcludedLeaves =
  isLeaf ? 1 : ((InnerNode)excludedNode).getNumOfLeaves();
  if (isLeafParent()) { // children are leaves
if (isLeaf) { // excluded node is a leaf node
  if (excludedNode != null &&
  childrenMap.containsKey(excludedNode.getName())) {
int excludedIndex = children.indexOf(excludedNode);
if (excludedIndex != -1 && leafIndex >= 0) {
  // excluded node is one of the children so adjust the leaf index
  leafIndex = leafIndex>=excludedIndex ? leafIndex+1 : leafIndex;
}
  }
}
// range check
if (leafIndex<0 || leafIndex>=this.getNumOfChildren()) {
  return null;
}
return children.get(leafIndex);
  } else {
{code}






[jira] [Created] (HADOOP-16662) Remove invalid judgment in NetworkTopology#add()

2019-10-18 Thread Lisheng Sun (Jira)
Lisheng Sun created HADOOP-16662:


 Summary: Remove invalid judgment in NetworkTopology#add()
 Key: HADOOP-16662
 URL: https://issues.apache.org/jira/browse/HADOOP-16662
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Lisheng Sun
Assignee: Lisheng Sun


The method of NetworkTopology#add() as follow:
{code:java}
/** Add a leaf node
 * Update node counter & rack counter if necessary
 * @param node node to be added; can be null
 * @exception IllegalArgumentException if add a node to a leave
 *            or node to be added is not a leaf
 */
public void add(Node node) {
  if (node==null) return;
  int newDepth = NodeBase.locationToDepth(node.getNetworkLocation()) + 1;
  netlock.writeLock().lock();
  try {
if( node instanceof InnerNode ) {
  throw new IllegalArgumentException(
"Not allow to add an inner node: "+NodeBase.getPath(node));
}
if ((depthOfAllLeaves != -1) && (depthOfAllLeaves != newDepth)) {
  LOG.error("Error: can't add leaf node {} at depth {} to topology:{}\n",
  NodeBase.getPath(node), newDepth, this);
  throw new InvalidTopologyException("Failed to add " + 
NodeBase.getPath(node) +
  ": You cannot have a rack and a non-rack node at the same " +
  "level of the network topology.");
}
Node rack = getNodeForNetworkLocation(node);
if (rack != null && !(rack instanceof InnerNode)) {
  throw new IllegalArgumentException("Unexpected data node " 
 + node.toString() 
 + " at an illegal network location");
}
if (clusterMap.add(node)) {
  LOG.info("Adding a new node: "+NodeBase.getPath(node));
  if (rack == null) {
incrementRacks();
  }
  if (!(node instanceof InnerNode)) {
if (depthOfAllLeaves == -1) {
  depthOfAllLeaves = node.getLevel();
}
  }
}
LOG.debug("NetworkTopology became:\n{}", this);
  } finally {
netlock.writeLock().unlock();
  }
}
{code}
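The control flow above can be sketched with the topology details stubbed out (simplified stand-in classes, not Hadoop's), to show why the later `!(node instanceof InnerNode)` check can never be false: any InnerNode has already caused an IllegalArgumentException before that point.

```java
// Sketch of NetworkTopology#add() control flow: an InnerNode argument is
// rejected at the top, so the inner-node re-check near the bottom is
// unreachable with an InnerNode, i.e. the judgment is redundant.
public class AddFlow {
  static class Node {}
  static class InnerNode extends Node {}

  // Returns what the bottom `node instanceof InnerNode` check would see,
  // i.e. whether the redundant check can ever be true.
  static boolean reachesRedundantCheckWithInnerNode(Node node) {
    if (node == null) return false;
    if (node instanceof InnerNode) {
      throw new IllegalArgumentException("Not allow to add an inner node");
    }
    // ... depth and rack checks elided ...
    // At this point node cannot be an InnerNode anymore.
    return node instanceof InnerNode;
  }

  public static void main(String[] args) {
    System.out.println(reachesRedundantCheckWithInnerNode(new Node()));
    try {
      reachesRedundantCheckWithInnerNode(new InnerNode());
    } catch (IllegalArgumentException expected) {
      System.out.println("inner node rejected early");
    }
  }
}
```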






[jira] [Resolved] (HADOOP-16600) StagingTestBase uses methods not available in Mockito 1.8.5 in branch-3.1

2019-09-24 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun resolved HADOOP-16600.
--
Resolution: Duplicate

> StagingTestBase uses methods not available in Mockito 1.8.5 in branch-3.1
> -
>
> Key: HADOOP-16600
> URL: https://issues.apache.org/jira/browse/HADOOP-16600
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.1.1, 3.1.2
>Reporter: Lisheng Sun
>Priority: Major
>
> For details see HADOOP-15398.
> Problem: hadoop trunk compilation is failing
> Root Cause:
> The compilation error comes from 
> org.apache.hadoop.fs.s3a.commit.staging.StagingTestBase. The error is 
> "The method getArgumentAt(int, Class) is undefined for the 
> type InvocationOnMock".
> StagingTestBase uses the getArgumentAt(int, Class) method, 
> which is not available in mockito-all 1.8.5. getArgumentAt(int, 
> Class) is only available from version 2.0.0-beta.






[jira] [Reopened] (HADOOP-16600) StagingTestBase uses methods not available in Mockito 1.8.5 in branch-3.1

2019-09-24 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun reopened HADOOP-16600:
--

> StagingTestBase uses methods not available in Mockito 1.8.5 in branch-3.1
> -
>
> Key: HADOOP-16600
> URL: https://issues.apache.org/jira/browse/HADOOP-16600
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.1.1, 3.1.2
>Reporter: Lisheng Sun
>Priority: Major
>
> For details see HADOOP-15398.
> Problem: hadoop trunk compilation is failing
> Root Cause:
> The compilation error comes from 
> org.apache.hadoop.fs.s3a.commit.staging.StagingTestBase. The error is 
> "The method getArgumentAt(int, Class) is undefined for the 
> type InvocationOnMock".
> StagingTestBase uses the getArgumentAt(int, Class) method, 
> which is not available in mockito-all 1.8.5. getArgumentAt(int, 
> Class) is only available from version 2.0.0-beta.






[jira] [Created] (HADOOP-16600) StagingTestBase uses methods not available in Mockito 1.8.5 in branch-3.1

2019-09-24 Thread Lisheng Sun (Jira)
Lisheng Sun created HADOOP-16600:


 Summary: StagingTestBase uses methods not available in Mockito 
1.8.5 in branch-3.1
 Key: HADOOP-16600
 URL: https://issues.apache.org/jira/browse/HADOOP-16600
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.1.2, 3.1.1, 3.1.0
Reporter: Lisheng Sun









[jira] [Created] (HADOOP-16553) ipc.client.connect.max.retries.on.timeouts default value is too many

2019-09-07 Thread Lisheng Sun (Jira)
Lisheng Sun created HADOOP-16553:


 Summary: ipc.client.connect.max.retries.on.timeouts default value 
is too many
 Key: HADOOP-16553
 URL: https://issues.apache.org/jira/browse/HADOOP-16553
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Lisheng Sun
Assignee: Lisheng Sun


The current default for IPC connection retries on socket timeout is 45, and the 
default socket timeout is 20s.
So if network packets are lost on the receiving machine, the client may need to 
wait up to 15 minutes.
I think the default retry count should be decreased.
{code:java}
public static final String  
IPC_CLIENT_CONNECT_MAX_RETRIES_ON_SOCKET_TIMEOUTS_KEY =
  "ipc.client.connect.max.retries.on.timeouts";
/** Default value for IPC_CLIENT_CONNECT_MAX_RETRIES_ON_SOCKET_TIMEOUTS_KEY */
public static final int  
IPC_CLIENT_CONNECT_MAX_RETRIES_ON_SOCKET_TIMEOUTS_DEFAULT = 45;

public static final String  IPC_CLIENT_CONNECT_TIMEOUT_KEY =
  "ipc.client.connect.timeout";
/** Default value for IPC_CLIENT_CONNECT_TIMEOUT_KEY */
public static final int IPC_CLIENT_CONNECT_TIMEOUT_DEFAULT = 20000; // 20s
{code}
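The 15-minute figure follows directly from the two defaults quoted above (a worked example; `RetryBudget` and its method are illustrative names, not Hadoop API):

```java
// Worked example: 45 retries on socket timeout x 20 s connect timeout
// = 900 s, i.e. 15 minutes before the client gives up on an unreachable
// server.
public class RetryBudget {
  static final int MAX_RETRIES_ON_TIMEOUTS = 45; // ipc.client.connect.max.retries.on.timeouts
  static final int CONNECT_TIMEOUT_SECONDS = 20; // ipc.client.connect.timeout (20000 ms)

  static int worstCaseWaitSeconds(int retries, int timeoutSeconds) {
    return retries * timeoutSeconds;
  }

  public static void main(String[] args) {
    int seconds = worstCaseWaitSeconds(MAX_RETRIES_ON_TIMEOUTS, CONNECT_TIMEOUT_SECONDS);
    System.out.println(seconds + " s = " + (seconds / 60) + " min"); // 900 s = 15 min
  }
}
```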



--
This message was sent by Atlassian Jira
(v8.3.2#803003)




[jira] [Created] (HADOOP-16504) Increase ipc.server.listen.queue.size default from 128 to 256

2019-08-10 Thread Lisheng Sun (JIRA)
Lisheng Sun created HADOOP-16504:


 Summary: Increase ipc.server.listen.queue.size default from 128 to 
256
 Key: HADOOP-16504
 URL: https://issues.apache.org/jira/browse/HADOOP-16504
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Lisheng Sun






--
This message was sent by Atlassian JIRA
(v7.6.14#76016)




[jira] [Created] (HADOOP-16431) Change Log Level to trace in IOUtils.java and ExceptionDiags.java

2019-07-15 Thread Lisheng Sun (JIRA)
Lisheng Sun created HADOOP-16431:


 Summary: Change Log Level to trace in IOUtils.java and 
ExceptionDiags.java
 Key: HADOOP-16431
 URL: https://issues.apache.org/jira/browse/HADOOP-16431
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Lisheng Sun


When there is no String constructor for the exception, we log a WARN message 
and rethrow the exception. We can change the log level to TRACE/DEBUG.
{code:java}
private static <T extends Throwable> T wrapWithMessage(
    T exception, String msg) {
  Class<? extends Throwable> clazz = exception.getClass();
  try {
    Constructor<? extends Throwable> ctor =
        clazz.getConstructor(String.class);
    Throwable t = ctor.newInstance(msg);
    return (T) (t.initCause(exception));
  } catch (Throwable e) {
    LOG.trace("Unable to wrap exception of type " +
        clazz + ": it has no (String) constructor", e);
    return exception;
  }
}
{code}
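The wrap-with-message pattern above can be demonstrated standalone (a self-contained sketch without the Hadoop logger; `WrapDemo` is an illustrative name):

```java
// Demo of the wrap-with-message pattern: rebuild the exception with an
// extra message via its (String) constructor, chaining the original as the
// cause; if the class has no such constructor, fall back to the original
// exception (the case the issue proposes to log at TRACE instead of WARN).
import java.lang.reflect.Constructor;

public class WrapDemo {
  @SuppressWarnings("unchecked")
  static <T extends Throwable> T wrapWithMessage(T exception, String msg) {
    Class<? extends Throwable> clazz = exception.getClass();
    try {
      Constructor<? extends Throwable> ctor = clazz.getConstructor(String.class);
      Throwable t = ctor.newInstance(msg);
      return (T) t.initCause(exception);
    } catch (Throwable e) {
      // No (String) constructor: keep the original exception unchanged.
      return exception;
    }
  }

  public static void main(String[] args) {
    // IllegalStateException has a (String) constructor, so it gets wrapped.
    IllegalStateException wrapped =
        wrapWithMessage(new IllegalStateException("boom"), "while testing");
    System.out.println(wrapped.getMessage());            // while testing
    System.out.println(wrapped.getCause().getMessage()); // boom
  }
}
```

ClosedChannelException, for example, has only a no-arg constructor, so it would be returned unwrapped by the fallback path.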






[jira] [Created] (HADOOP-16112) fs.TrashPolicyDefault: can't create trash directory and race condition

2019-02-14 Thread Lisheng Sun (JIRA)
Lisheng Sun created HADOOP-16112:


 Summary: fs.TrashPolicyDefault: can't create trash directory and 
race condition
 Key: HADOOP-16112
 URL: https://issues.apache.org/jira/browse/HADOOP-16112
 Project: Hadoop Common
  Issue Type: Bug
  Components: common
Affects Versions: 3.2.0
Reporter: Lisheng Sun


There is a race condition in the method moveToTrash of class TrashPolicyDefault:
{code:java}
try {
  if (!fs.mkdirs(baseTrashPath, PERMISSION)) { // create current
    LOG.warn("Can't create(mkdir) trash directory: " + baseTrashPath);
    return false;
  }
} catch (FileAlreadyExistsException e) {
  // find the path which is not a directory, and modify baseTrashPath
  // & trashPath, then mkdirs
  Path existsFilePath = baseTrashPath;
  while (!fs.exists(existsFilePath)) {
    existsFilePath = existsFilePath.getParent();
  }
  // <-- another thread may delete existsFilePath here; then the result
  // does not meet expectations
  baseTrashPath = new Path(baseTrashPath.toString().replace(
      existsFilePath.toString(), existsFilePath.toString() + Time.now()));
  trashPath = new Path(baseTrashPath, trashPath.getName());
  // retry, ignore current failure
  --i;
  continue;
} catch (IOException e) {
  LOG.warn("Can't create trash directory: " + baseTrashPath, e);
  cause = e;
  break;
}
{code}
For example, suppose 
/user/u_sunlisheng/.Trash/Current/user/u_sunlisheng/b exists. When deleting 
/user/u_sunlisheng/b/a, if existsFilePath is deleted by another thread, the 
result becomes 
/user/u_sunlisheng/.Trash/Current/user/u_sunlisheng+timestamp/b/a.

So when existsFilePath has been deleted, baseTrashPath should not be modified.
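The misplaced rename can be shown at the string level (a simplified sketch of the `replace` call quoted above; `TrashPathRewrite` is an illustrative name, and the timestamp is arbitrary):

```java
// String-level sketch of the rename in TrashPolicyDefault: if another
// thread deletes part of the probed path between the existence check and
// the replace, the timestamp suffix lands on a directory one level too
// high, producing a trash path that was not intended.
public class TrashPathRewrite {
  // Mirrors: baseTrashPath.toString().replace(existsFilePath,
  //          existsFilePath + Time.now())
  static String rewrite(String baseTrashPath, String existsFilePath, long now) {
    return baseTrashPath.replace(existsFilePath, existsFilePath + now);
  }

  public static void main(String[] args) {
    String base = "/user/u_sunlisheng/.Trash/Current/user/u_sunlisheng/b";
    // If the probe stopped at ".../user/u_sunlisheng" (because "/b" was
    // concurrently deleted), the timestamp is inserted one level too high:
    System.out.println(rewrite(base,
        "/user/u_sunlisheng/.Trash/Current/user/u_sunlisheng", 1550000000L));
  }
}
```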



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
