[jira] [Updated] (HDFS-5517) Lower the default maximum number of blocks per file

2016-11-30 Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-5517:
--
   Resolution: Fixed
Fix Version/s: 3.0.0-alpha2
   Status: Resolved  (was: Patch Available)

Committed. Thanks for the review, Akira, and sorry again for missing the 
broken tests earlier.

> Lower the default maximum number of blocks per file
> ---
>
> Key: HDFS-5517
> URL: https://issues.apache.org/jira/browse/HDFS-5517
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.2.0
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
>  Labels: BB2015-05-TBR
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-5517.002.patch, HDFS-5517.003.patch, HDFS-5517.patch
>
>
> We introduced the maximum number of blocks per file in HDFS-4305, but we set 
> the default to 1MM. In practice this limit is so high as to never be hit, 
> whereas we know that an individual file with 10s of thousands of blocks can 
> cause problems. We should lower the default value, in my opinion to 10k.
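
Assuming the stock 128 MB block size (dfs.blocksize), the two defaults work 
out roughly to:

  1,000,000 blocks x 128 MB/block ~= 128 TB per file
     10,000 blocks x 128 MB/block ~= 1.28 TB per file

A 10k cap still leaves room for very large files while staying well clear of 
the per-file block counts known to cause trouble.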



[jira] [Updated] (HDFS-5517) Lower the default maximum number of blocks per file

2016-11-29 Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-5517:
--
Status: Patch Available  (was: Reopened)

> Lower the default maximum number of blocks per file
> ---
>
> Key: HDFS-5517
> URL: https://issues.apache.org/jira/browse/HDFS-5517
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.2.0
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
>  Labels: BB2015-05-TBR
> Attachments: HDFS-5517.002.patch, HDFS-5517.003.patch, HDFS-5517.patch
>
>
> We introduced the maximum number of blocks per file in HDFS-4305, but we set 
> the default to 1MM. In practice this limit is so high as to never be hit, 
> whereas we know that an individual file with 10s of thousands of blocks can 
> cause problems. We should lower the default value, in my opinion to 10k.



[jira] [Updated] (HDFS-5517) Lower the default maximum number of blocks per file

2016-11-29 Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-5517:
--
Attachment: HDFS-5517.003.patch

New patch to address the two unit test issues.

> Lower the default maximum number of blocks per file
> ---
>
> Key: HDFS-5517
> URL: https://issues.apache.org/jira/browse/HDFS-5517
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.2.0
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
>  Labels: BB2015-05-TBR
> Attachments: HDFS-5517.002.patch, HDFS-5517.003.patch, HDFS-5517.patch
>
>
> We introduced the maximum number of blocks per file in HDFS-4305, but we set 
> the default to 1MM. In practice this limit is so high as to never be hit, 
> whereas we know that an individual file with 10s of thousands of blocks can 
> cause problems. We should lower the default value, in my opinion to 10k.



[jira] [Updated] (HDFS-5517) Lower the default maximum number of blocks per file

2016-11-29 Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-5517:
--
Fix Version/s: (was: 3.0.0-alpha2)

> Lower the default maximum number of blocks per file
> ---
>
> Key: HDFS-5517
> URL: https://issues.apache.org/jira/browse/HDFS-5517
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.2.0
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
>  Labels: BB2015-05-TBR
> Attachments: HDFS-5517.002.patch, HDFS-5517.patch
>
>
> We introduced the maximum number of blocks per file in HDFS-4305, but we set 
> the default to 1MM. In practice this limit is so high as to never be hit, 
> whereas we know that an individual file with 10s of thousands of blocks can 
> cause problems. We should lower the default value, in my opinion to 10k.



[jira] [Updated] (HDFS-5517) Lower the default maximum number of blocks per file

2016-11-29 Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-5517:
--
Release Note: The default value of 
"dfs.namenode.fs-limits.max-blocks-per-file" has been reduced from 1M to 10K.  
(was: The maximum number of blocks per file has been reduced to 10,000.)
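
For deployments that genuinely need files beyond 10k blocks, the limit 
remains configurable. A minimal sketch of overriding it, assuming the 
DFSConfigKeys constant introduced by HDFS-4305; on a real cluster the key 
belongs in the NameNode's hdfs-site.xml, since the check is enforced there:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSConfigKeys;

public class RaiseBlockLimit {
  public static void main(String[] args) {
    // Start from the default Configuration (reads core-site.xml/hdfs-site.xml).
    Configuration conf = new Configuration();
    // Restore the pre-HDFS-5517 headroom; the key is
    // "dfs.namenode.fs-limits.max-blocks-per-file".
    conf.setLong(DFSConfigKeys.DFS_NAMENODE_MAX_BLOCKS_PER_FILE_KEY, 1000000L);
    System.out.println("max blocks per file = "
        + conf.getLong(DFSConfigKeys.DFS_NAMENODE_MAX_BLOCKS_PER_FILE_KEY, 10000L));
  }
}
{code}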

> Lower the default maximum number of blocks per file
> ---
>
> Key: HDFS-5517
> URL: https://issues.apache.org/jira/browse/HDFS-5517
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.2.0
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
>  Labels: BB2015-05-TBR
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-5517.002.patch, HDFS-5517.patch
>
>
> We introduced the maximum number of blocks per file in HDFS-4305, but we set 
> the default to 1MM. In practice this limit is so high as to never be hit, 
> whereas we know that an individual file with 10s of thousands of blocks can 
> cause problems. We should lower the default value, in my opinion to 10k.



[jira] [Updated] (HDFS-5517) Lower the default maximum number of blocks per file

2016-11-29 Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-5517:
--
   Resolution: Fixed
Fix Version/s: 3.0.0-alpha2
 Release Note: The maximum number of blocks per file has been reduced to 
10,000.
   Status: Resolved  (was: Patch Available)

Since this was a trivial rebase, I'm committing this based on Uma and Vinay's 
previous +1s. Thanks all!

> Lower the default maximum number of blocks per file
> ---
>
> Key: HDFS-5517
> URL: https://issues.apache.org/jira/browse/HDFS-5517
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.2.0
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
>  Labels: BB2015-05-TBR
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-5517.002.patch, HDFS-5517.patch
>
>
> We introduced the maximum number of blocks per file in HDFS-4305, but we set 
> the default to 1MM. In practice this limit is so high as to never be hit, 
> whereas we know that an individual file with 10s of thousands of blocks can 
> cause problems. We should lower the default value, in my opinion to 10k.



[jira] [Updated] (HDFS-5517) Lower the default maximum number of blocks per file

2016-11-28 Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-5517:
--
Attachment: HDFS-5517.002.patch

I did the trivial rebase for this change; patch attached.

> Lower the default maximum number of blocks per file
> ---
>
> Key: HDFS-5517
> URL: https://issues.apache.org/jira/browse/HDFS-5517
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.2.0
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
>  Labels: BB2015-05-TBR
> Attachments: HDFS-5517.002.patch, HDFS-5517.patch
>
>
> We introduced the maximum number of blocks per file in HDFS-4305, but we set 
> the default to 1MM. In practice this limit is so high as to never be hit, 
> whereas we know that an individual file with 10s of thousands of blocks can 
> cause problems. We should lower the default value, in my opinion to 10k.



[jira] [Updated] (HDFS-5517) Lower the default maximum number of blocks per file

2016-10-17 Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-5517:
--
Target Version/s: 3.0.0-alpha2  (was: 2.8.0)

> Lower the default maximum number of blocks per file
> ---
>
> Key: HDFS-5517
> URL: https://issues.apache.org/jira/browse/HDFS-5517
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.2.0
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
>  Labels: BB2015-05-TBR
> Attachments: HDFS-5517.patch
>
>
> We introduced the maximum number of blocks per file in HDFS-4305, but we set 
> the default to 1MM. In practice this limit is so high as to never be hit, 
> whereas we know that an individual file with 10s of thousands of blocks can 
> cause problems. We should lower the default value, in my opinion to 10k.



[jira] [Updated] (HDFS-5517) Lower the default maximum number of blocks per file

2015-05-05 Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-5517:
---
Labels: BB2015-05-TBR  (was: )

> Lower the default maximum number of blocks per file
> ---
>
> Key: HDFS-5517
> URL: https://issues.apache.org/jira/browse/HDFS-5517
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.2.0
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
>  Labels: BB2015-05-TBR
> Attachments: HDFS-5517.patch
>
>
> We introduced the maximum number of blocks per file in HDFS-4305, but we set 
> the default to 1MM. In practice this limit is so high as to never be hit, 
> whereas we know that an individual file with 10s of thousands of blocks can 
> cause problems. We should lower the default value, in my opinion to 10k.



[jira] [Updated] (HDFS-5517) Lower the default maximum number of blocks per file

2013-11-15 Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HDFS-5517:
-

Attachment: HDFS-5517.patch

Thanks a lot for the support, Uma. Here's a little patch which just changes the 
default from 1MM to 10,000.
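
For reviewers who don't want to open the attachment, the change amounts to a 
one-line default swap in DFSConfigKeys; a sketch of its shape (constant names 
assumed from HDFS-4305, not copied from the patch):

{code:java}
// Sketch only, not the attached patch: the key/default pair as it
// would look after lowering the limit.
public final class BlockLimitDefaults {
  public static final String DFS_NAMENODE_MAX_BLOCKS_PER_FILE_KEY =
      "dfs.namenode.fs-limits.max-blocks-per-file";
  // Old default was on the order of 1MM blocks; new default is 10,000.
  public static final long DFS_NAMENODE_MAX_BLOCKS_PER_FILE_DEFAULT = 10000L;
}
{code}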

> Lower the default maximum number of blocks per file
> ---
>
> Key: HDFS-5517
> URL: https://issues.apache.org/jira/browse/HDFS-5517
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.2.0
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
> Attachments: HDFS-5517.patch
>
>
> We introduced the maximum number of blocks per file in HDFS-4305, but we set 
> the default to 1MM. In practice this limit is so high as to never be hit, 
> whereas we know that an individual file with 10s of thousands of blocks can 
> cause problems. We should lower the default value, in my opinion to 10k.



[jira] [Updated] (HDFS-5517) Lower the default maximum number of blocks per file

2013-11-15 Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HDFS-5517:
-

Status: Patch Available  (was: Open)

> Lower the default maximum number of blocks per file
> ---
>
> Key: HDFS-5517
> URL: https://issues.apache.org/jira/browse/HDFS-5517
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.2.0
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
> Attachments: HDFS-5517.patch
>
>
> We introduced the maximum number of blocks per file in HDFS-4305, but we set 
> the default to 1MM. In practice this limit is so high as to never be hit, 
> whereas we know that an individual file with 10s of thousands of blocks can 
> cause problems. We should lower the default value, in my opinion to 10k.


