[jira] [Created] (HADOOP-15453) hadoop fs -count -v report "-count: Illegal option -v"

2018-05-09 Thread zhoutai.zt (JIRA)
zhoutai.zt created HADOOP-15453:
---

 Summary: hadoop fs -count -v report "-count: Illegal option -v"
 Key: HADOOP-15453
 URL: https://issues.apache.org/jira/browse/HADOOP-15453
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.7.2
Reporter: zhoutai.zt


[hadoop@Hadoop1 bin]$ ./hadoop fs -count -q -h -v SparkHis*
-count: Illegal option -v

 

Reading the source code, I can't find the -v option:
{code:java}
private static final String OPTION_QUOTA = "q";
private static final String OPTION_HUMAN = "h";
public static final String NAME = "count";
public static final String USAGE =
"[-" + OPTION_QUOTA + "] [-" + OPTION_HUMAN + "]  ...";
{code}
BUT the documentation describes a -v option:

[http://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-common/FileSystemShell.html#count]
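For illustration only, a minimal sketch of how a verbose flag could be declared alongside the existing option constants quoted above. The OPTION_VERBOSE name and the class wrapper are assumptions for this sketch, not Hadoop's actual code:

```java
// Hypothetical sketch: wiring a -v flag into the option constants quoted
// from Count.java above. OPTION_VERBOSE and this class name are assumptions.
public class CountUsageSketch {
    private static final String OPTION_QUOTA = "q";
    private static final String OPTION_HUMAN = "h";
    private static final String OPTION_VERBOSE = "v"; // hypothetical addition

    public static final String NAME = "count";
    public static final String USAGE =
        "[-" + OPTION_QUOTA + "] [-" + OPTION_HUMAN + "] [-"
        + OPTION_VERBOSE + "]  ...";

    public static void main(String[] args) {
        // Prints: [-q] [-h] [-v]  ...
        System.out.println(USAGE);
    }
}
```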

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15109) TestDFSIO -read -random doesn't work on file sized 4GB

2017-12-12 Thread zhoutai.zt (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288659#comment-16288659
 ] 

zhoutai.zt commented on HADOOP-15109:
-

Thanks [~ajayydv].
The second patch looks good to me, +1.

> TestDFSIO -read -random doesn't work on file sized 4GB
> --
>
> Key: HADOOP-15109
> URL: https://issues.apache.org/jira/browse/HADOOP-15109
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, test
>Affects Versions: 3.0.0-beta1
>Reporter: zhoutai.zt
>Assignee: Ajay Kumar
>Priority: Minor
> Attachments: HADOOP-15109.001.patch, HADOOP-15109.002.patch, Screen 
> Shot 2017-12-11 at 3.17.22 PM.png
>
>
> TestDFSIO -read -random throws IllegalArgumentException on a 4 GB file. The 
> cause is:
> {code:java}
> private long nextOffset(long current) {
>   if(skipSize == 0)
>     return rnd.nextInt((int)(fileSize));
>   if(skipSize > 0)
>     return (current < 0) ? 0 : (current + bufferSize + skipSize);
>   // skipSize < 0
>   return (current < 0) ? Math.max(0, fileSize - bufferSize) :
>          Math.max(0, current + skipSize);
> }
> {code}
> When {color:#d04437}_fileSize_{color} exceeds the signed int range, 
> (int)(fileSize) wraps to a negative (or, at exactly 4 GB, zero) value and 
> causes Random.nextInt to throw IllegalArgumentException("n must be positive").
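The overflow can be reproduced in isolation, outside TestDFSIO (a standalone sketch; class name is ours): a long of exactly 4 GB truncates to 0 when cast to int, sizes between 2 GB and 4 GB truncate to a negative value, and either bound makes Random.nextInt throw, while ThreadLocalRandom.nextLong accepts the long bound directly.

```java
import java.util.Random;
import java.util.concurrent.ThreadLocalRandom;

public class NextOffsetOverflowDemo {
    public static void main(String[] args) {
        long fileSize = 4L << 30;           // exactly 4 GB
        int cast = (int) fileSize;          // truncates to 0 at exactly 4 GB
        System.out.println("(int) 4GB = " + cast);

        long threeGb = 3L << 30;            // sizes in (2 GB, 4 GB) wrap negative
        System.out.println("(int) 3GB = " + (int) threeGb);

        try {
            new Random().nextInt(cast);     // nextInt requires a positive bound
        } catch (IllegalArgumentException e) {
            System.out.println("nextInt rejected the bound: " + e.getMessage());
        }

        // The alternative suggested in the comments takes a long bound directly:
        long offset = ThreadLocalRandom.current().nextLong(fileSize);
        System.out.println("nextLong offset in range: "
                + (offset >= 0 && offset < fileSize));
    }
}
```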



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15109) TestDFSIO -read -random doesn't work on file sized 4GB

2017-12-11 Thread zhoutai.zt (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286947#comment-16286947
 ] 

zhoutai.zt commented on HADOOP-15109:
-

Thanks [~ajayydv].  

Another way to generate a bounded random long is:
{code:java}
ThreadLocalRandom.current().nextLong(fileSize)
{code}








[jira] [Updated] (HADOOP-15109) TestDFSIO -read -random doesn't work on file sized 4GB

2017-12-11 Thread zhoutai.zt (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhoutai.zt updated HADOOP-15109:

Status: Open  (was: Patch Available)







[jira] [Updated] (HADOOP-15109) TestDFSIO -read -random doesn't work on file sized 4GB

2017-12-11 Thread zhoutai.zt (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhoutai.zt updated HADOOP-15109:

Status: Patch Available  (was: Open)







[jira] [Updated] (HADOOP-15109) TestDFSIO -read -random doesn't work on file sized 4GB

2017-12-11 Thread zhoutai.zt (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhoutai.zt updated HADOOP-15109:

Priority: Minor  (was: Major)







[jira] [Created] (HADOOP-15109) TestDFSIO -read -random doesn't work on file sized 4GB

2017-12-11 Thread zhoutai.zt (JIRA)
zhoutai.zt created HADOOP-15109:
---

 Summary: TestDFSIO -read -random doesn't work on file sized 4GB
 Key: HADOOP-15109
 URL: https://issues.apache.org/jira/browse/HADOOP-15109
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, test
Affects Versions: 3.0.0-beta1
Reporter: zhoutai.zt


TestDFSIO -read -random throws IllegalArgumentException on a 4 GB file. The cause 
is:

{code:java}
private long nextOffset(long current) {
  if(skipSize == 0)
    return rnd.nextInt((int)(fileSize));
  if(skipSize > 0)
    return (current < 0) ? 0 : (current + bufferSize + skipSize);
  // skipSize < 0
  return (current < 0) ? Math.max(0, fileSize - bufferSize) :
         Math.max(0, current + skipSize);
}
{code}

When {color:#d04437}_fileSize_{color} exceeds the signed int range, (int)(fileSize) 
wraps to a negative (or, at exactly 4 GB, zero) value and causes Random.nextInt to 
throw IllegalArgumentException("n must be positive").







[jira] [Commented] (HADOOP-15101) what testListStatusFile verified not consistent with listStatus declaration in FileSystem

2017-12-08 Thread zhoutai.zt (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16284550#comment-16284550
 ] 

zhoutai.zt commented on HADOOP-15101:
-

Thanks Steve Loughran. Will the details in filesystem.md be added to 
FileSystem.java? At least a link to filesystem.md should be added.

> what testListStatusFile verified not consistent with listStatus declaration 
> in FileSystem
> ---
>
> Key: HADOOP-15101
> URL: https://issues.apache.org/jira/browse/HADOOP-15101
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, test
>Affects Versions: 3.0.0-beta1
>Reporter: zhoutai.zt
>Priority: Critical
>
> {code}
>   @Test
>   public void testListStatusFile() throws Throwable {
> describe("test the listStatus(path) on a file");
> Path f = touchf("liststatusfile");
> verifyStatusArrayMatchesFile(f, getFileSystem().listStatus(f));
>   }
> {code}
> In this case, we first create a file _f_, then call listStatus on _f_, 
> expecting listStatus to return an array of one FileStatus. But this is not 
> consistent with the declaration in FileSystem, i.e.
> {code}
> "
> List the statuses of the files/directories in the given path if the path is a 
> directory.
> Parameters:
> f given path
> Returns:
> the statuses of the files/directories in the given patch
> "
> {code}
> Which is expected? The behavior in the fs contract test or in FileSystem?
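The two candidate semantics can be contrasted with plain java.io.File (NOT Hadoop's FileSystem; this sketch is only an analogy): File.listFiles() returns null for a plain file, whereas the fs contract test expects listStatus(file) to return a one-element array describing the file itself, analogous to POSIX `ls somefile` printing the file.

```java
import java.io.File;
import java.io.IOException;

public class ListStatusSemanticsSketch {
    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("liststatusfile", null);
        f.deleteOnExit();
        // java.io semantics: listing a non-directory yields null
        System.out.println("listFiles on a plain file: " + f.listFiles());
        // The contract-test semantics would instead be a one-element listing
        // describing f itself, which is what filesystem.md specifies for
        // listStatus on a file.
    }
}
```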






[jira] [Issue Comment Deleted] (HADOOP-15101) what testListStatusFile verified not consistent with listStatus declaration in FileSystem

2017-12-08 Thread zhoutai.zt (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhoutai.zt updated HADOOP-15101:

Comment: was deleted

(was: Where can I find the file filesystem.md?)







[jira] [Commented] (HADOOP-15101) what testListStatusFile verified not consistent with listStatus declaration in FileSystem

2017-12-08 Thread zhoutai.zt (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16284546#comment-16284546
 ] 

zhoutai.zt commented on HADOOP-15101:
-

Where can I find the file filesystem.md?







[jira] [Updated] (HADOOP-15101) what testListStatusFile verified not consistent with listStatus declaration in FileSystem

2017-12-07 Thread zhoutai.zt (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhoutai.zt updated HADOOP-15101:

Description: 
  @Test
  public void testListStatusFile() throws Throwable {
describe("test the listStatus(path) on a file");
Path f = touchf("liststatusfile");
verifyStatusArrayMatchesFile(f, getFileSystem().listStatus(f));
  }

In this case, first create a file _f_, then listStatus on _f_,expect listStatus 
returns an array of one FileStatus. But this is not consistent with the 
declarations in FileSystem, i.e.

" 
List the statuses of the files/directories in the given path if the path is a 
directory.
Parameters:
f given path
Returns:
the statuses of the files/directories in the given patch
"

Which is the expected? The behave in fs contract test or in FileSystem?

  was:
  @Test
  public void testListStatusFile() throws Throwable {
describe("test the listStatus(path) on a file");
Path f = touchf("liststatusfile");
verifyStatusArrayMatchesFile(f, getFileSystem().listStatus(f));
  }

In this case, first create a file _f_, then listStatus on _f_,expect listStatus 
returns an array of one FileStatus. But this is not consistent with the 
declarations in FileSystem, i.e.
??List the statuses of the files/directories in the given path if the path is a 
directory.
Parameters:
f given path
Returns:
the statuses of the files/directories in the given patch??

Which is the expected? The behave in fs contract test or in FileSystem?








[jira] [Created] (HADOOP-15101) what testListStatusFile verified not consistent with listStatus declaration in FileSystem

2017-12-07 Thread zhoutai.zt (JIRA)
zhoutai.zt created HADOOP-15101:
---

 Summary: what testListStatusFile verified not consistent with 
listStatus declaration in FileSystem
 Key: HADOOP-15101
 URL: https://issues.apache.org/jira/browse/HADOOP-15101
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, test
Affects Versions: 3.0.0-beta1
Reporter: zhoutai.zt
Priority: Critical


{code:java}
  @Test
  public void testListStatusFile() throws Throwable {
    describe("test the listStatus(path) on a file");
    Path f = touchf("liststatusfile");
    verifyStatusArrayMatchesFile(f, getFileSystem().listStatus(f));
  }
{code}

In this case, we first create a file _f_, then call listStatus on _f_, expecting 
listStatus to return an array of one FileStatus. But this is not consistent with 
the declaration in FileSystem, i.e.
??List the statuses of the files/directories in the given path if the path is a 
directory.
Parameters:
f given path
Returns:
the statuses of the files/directories in the given patch??

Which is expected? The behavior in the fs contract test or in FileSystem?






[jira] [Created] (HADOOP-15097) AbstractContractDeleteTest::testDeleteNonEmptyDirRecursive with misleading path

2017-12-07 Thread zhoutai.zt (JIRA)
zhoutai.zt created HADOOP-15097:
---

 Summary: 
AbstractContractDeleteTest::testDeleteNonEmptyDirRecursive with misleading path
 Key: HADOOP-15097
 URL: https://issues.apache.org/jira/browse/HADOOP-15097
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, test
Affects Versions: 3.0.0-beta1
Reporter: zhoutai.zt
Priority: Minor


{code:java}
  @Test
  public void testDeleteNonEmptyDirRecursive() throws Throwable {
    Path path = path("{color:red}testDeleteNonEmptyDirNonRecursive{color}");
    mkdirs(path);
    Path file = new Path(path, "childfile");
    ContractTestUtils.writeTextFile(getFileSystem(), file, "goodbye, world",
        true);
    assertDeleted(path, true);
    assertPathDoesNotExist("not deleted", file);
  }
{code}

Change the path name "testDeleteNonEmptyDirNonRecursive" to 
"testDeleteNonEmptyDirRecursive" so it matches the test method.






[jira] [Commented] (HADOOP-15020) NNBench not support run more than one map task on the same host

2017-11-07 Thread zhoutai.zt (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16241994#comment-16241994
 ] 

zhoutai.zt commented on HADOOP-15020:
-

thanks.

> NNBench not support run more than one map task on the same host
> ---
>
> Key: HADOOP-15020
> URL: https://issues.apache.org/jira/browse/HADOOP-15020
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: benchmarks
>Affects Versions: 2.7.2
> Environment: Hadoop 2.7.2
>Reporter: zhoutai.zt
>Priority: Minor
>
> When benchmarking NameNode performance with NNBench, I started with a 
> pseudo-distributed deployment. Everything goes well with "-maps 1", BUT with 
> -maps N (N>1) and -operation create_write, many exceptions occur during the 
> benchmark.
> The hostname is part of the file path, which differentiates hosts. But when 
> two or more map tasks run on the same host, they may operate on the same 
> file, leading to exceptions.
> 17/11/07 15:22:32 INFO hdfs.NNBench:  RAW DATA: AL Total #1: 
> 84
> 17/11/07 15:22:32 INFO hdfs.NNBench:  RAW DATA: AL Total #2: 
> 43
> 17/11/07 15:22:32 INFO hdfs.NNBench:   RAW DATA: TPS Total (ms): 
> 2570
> 17/11/07 15:22:32 INFO hdfs.NNBench:RAW DATA: Longest Map Time (ms): 
> 814.0
> 17/11/07 15:22:32 INFO hdfs.NNBench:RAW DATA: Late maps: 0
> 17/11/07 15:22:32 INFO hdfs.NNBench:  {color:red}RAW DATA: # of 
> exceptions: 3000{color}
> 2017-11-07 14:54:08,082 INFO org.apache.hadoop.hdfs.NNBench: Exception 
> recorded in op: Create/Write/Close
> 2017-11-07 14:54:08,082 INFO org.apache.hadoop.hdfs.NNBench: Exception 
> recorded in op: Create/Write/Close
> 2017-11-07 14:54:08,083 INFO org.apache.hadoop.hdfs.NNBench: Exception 
> recorded in op: Create/Write/Close
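A minimal standalone sketch of the collision described above and one possible fix. The naming scheme here is illustrative, not NNBench's actual path format: if the per-map file name is derived from the hostname alone, two maps on one host produce the same path; adding a per-task id (a plain int standing in for the MapReduce task id) makes the names unique.

```java
import java.util.HashSet;
import java.util.Set;

public class NNBenchPathSketch {
    // Hypothetical host-only scheme: collides when two maps share a host.
    static String fileNameHostOnly(String host, int fileIdx) {
        return host + "_file_" + fileIdx;
    }

    // Hypothetical fix: include a per-task id in the name.
    static String fileNameWithTaskId(String host, int taskId, int fileIdx) {
        return host + "_task_" + taskId + "_file_" + fileIdx;
    }

    public static void main(String[] args) {
        Set<String> names = new HashSet<>();
        // Two map tasks on the same host, each creating file index 0:
        names.add(fileNameHostOnly("host1", 0));
        boolean collision = !names.add(fileNameHostOnly("host1", 0));
        System.out.println("collision with host-only names: " + collision);

        names.clear();
        names.add(fileNameWithTaskId("host1", 0, 0));
        boolean unique = names.add(fileNameWithTaskId("host1", 1, 0));
        System.out.println("unique with task-id names: " + unique);
    }
}
```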






[jira] [Updated] (HADOOP-15020) NNBench not support run more than one map task on the same host

2017-11-07 Thread zhoutai.zt (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhoutai.zt updated HADOOP-15020:

Issue Type: Bug  (was: Improvement)







[jira] [Updated] (HADOOP-15020) NNBench not support run more than one map task on the same host

2017-11-06 Thread zhoutai.zt (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhoutai.zt updated HADOOP-15020:

Description: 
When benchmark NameNode performance with NNBench, I start with pseudo 
distributed deploy. Everything goes well with "-maps 1". BUT with -maps N (n>1) 
and -operation create_write, many exceptions meet during the benchmark.

Hostname is part of the file path, which can  differentiate hosts. With more 
than two map tasks run on the same host, more than two map tasks may operate on 
the same file, leading to exceptions.

17/11/07 15:22:32 INFO hdfs.NNBench:  RAW DATA: AL Total #1: 84
17/11/07 15:22:32 INFO hdfs.NNBench:  RAW DATA: AL Total #2: 43
17/11/07 15:22:32 INFO hdfs.NNBench:   RAW DATA: TPS Total (ms): 
2570
17/11/07 15:22:32 INFO hdfs.NNBench:RAW DATA: Longest Map Time (ms): 
814.0
17/11/07 15:22:32 INFO hdfs.NNBench:RAW DATA: Late maps: 0
17/11/07 15:22:32 INFO hdfs.NNBench:  {color:red}RAW DATA: # of 
exceptions: 3000{color}

2017-11-07 14:54:08,082 INFO org.apache.hadoop.hdfs.NNBench: Exception recorded 
in op: Create/Write/Close
2017-11-07 14:54:08,082 INFO org.apache.hadoop.hdfs.NNBench: Exception recorded 
in op: Create/Write/Close
2017-11-07 14:54:08,083 INFO org.apache.hadoop.hdfs.NNBench: Exception recorded 
in op: Create/Write/Close


  was:
When benchmark NameNode performance with NNBench. I start with pseudo 
distributed deploy. Everything goes well with "-maps 1". BUT with -maps N (n>1) 
and -operation create_write, many exceptions meet during the benchmark.

Hostname is part of the file path, which can  differentiate hosts. With more 
than two map tasks run on the same host, more than two map tasks may operate on 
the same file, leading to exceptions.

17/11/07 15:22:32 INFO hdfs.NNBench:  RAW DATA: AL Total #1: 84
17/11/07 15:22:32 INFO hdfs.NNBench:  RAW DATA: AL Total #2: 43
17/11/07 15:22:32 INFO hdfs.NNBench:   RAW DATA: TPS Total (ms): 
2570
17/11/07 15:22:32 INFO hdfs.NNBench:RAW DATA: Longest Map Time (ms): 
814.0
17/11/07 15:22:32 INFO hdfs.NNBench:RAW DATA: Late maps: 0
17/11/07 15:22:32 INFO hdfs.NNBench:  {color:red}RAW DATA: # of 
exceptions: 3000{color}

2017-11-07 14:54:08,082 INFO org.apache.hadoop.hdfs.NNBench: Exception recorded 
in op: Create/Write/Close
2017-11-07 14:54:08,082 INFO org.apache.hadoop.hdfs.NNBench: Exception recorded 
in op: Create/Write/Close
2017-11-07 14:54:08,083 INFO org.apache.hadoop.hdfs.NNBench: Exception recorded 
in op: Create/Write/Close









[jira] [Updated] (HADOOP-15020) NNBench not support run more than one map task on the same host

2017-11-06 Thread zhoutai.zt (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhoutai.zt updated HADOOP-15020:

Summary: NNBench not support run more than one map task on the same host  
(was: NNBench not support run more than one map task on one host)







[jira] [Created] (HADOOP-15020) NNBench not support run more than one map task on one host

2017-11-06 Thread zhoutai.zt (JIRA)
zhoutai.zt created HADOOP-15020:
---

 Summary: NNBench not support run more than one map task on one host
 Key: HADOOP-15020
 URL: https://issues.apache.org/jira/browse/HADOOP-15020
 Project: Hadoop Common
  Issue Type: Improvement
  Components: benchmarks
Affects Versions: 2.7.2
 Environment: Hadoop 2.7.2
Reporter: zhoutai.zt
Priority: Minor


When benchmarking NameNode performance with NNBench, I started with a 
pseudo-distributed deployment. Everything goes well with "-maps 1", BUT with -maps N 
(N>1) and -operation create_write, many exceptions occur during the benchmark.

The hostname is part of the file path, which differentiates hosts. But when two or 
more map tasks run on the same host, they may operate on the same file, leading to 
exceptions.

17/11/07 15:22:32 INFO hdfs.NNBench:  RAW DATA: AL Total #1: 84
17/11/07 15:22:32 INFO hdfs.NNBench:  RAW DATA: AL Total #2: 43
17/11/07 15:22:32 INFO hdfs.NNBench:   RAW DATA: TPS Total (ms): 
2570
17/11/07 15:22:32 INFO hdfs.NNBench:RAW DATA: Longest Map Time (ms): 
814.0
17/11/07 15:22:32 INFO hdfs.NNBench:RAW DATA: Late maps: 0
17/11/07 15:22:32 INFO hdfs.NNBench:  {color:red}RAW DATA: # of 
exceptions: 3000{color}

2017-11-07 14:54:08,082 INFO org.apache.hadoop.hdfs.NNBench: Exception recorded 
in op: Create/Write/Close
2017-11-07 14:54:08,082 INFO org.apache.hadoop.hdfs.NNBench: Exception recorded 
in op: Create/Write/Close
2017-11-07 14:54:08,083 INFO org.apache.hadoop.hdfs.NNBench: Exception recorded 
in op: Create/Write/Close



