[
https://issues.apache.org/jira/browse/HDFS-554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Harsh J updated HDFS-554:
-------------------------
Attachment: HDFS-554.patch
Even a trivial test makes the speed difference quite clear:
{code}
public class TestSpeed {
  public static void main(String[] args) {
    // Load about a million boxed Integers.
    Object[] arr = new Object[1000000];
    for (int i = 0; i < 1000000; i++) {
      arr[i] = i;
    }

    // Copy iteratively into a new, larger array.
    // (int loop counters here; the earlier draft used boxed Integer
    // counters, which inflate the loop times with boxing overhead.)
    long now = System.currentTimeMillis();
    Object[] arr2 = new Object[3000000];
    for (int i = 0; i < arr.length; i++) {
      arr2[i] = arr[i];
    }
    System.out.println(System.currentTimeMillis() - now);

    // System.arraycopy into a new, larger array.
    now = System.currentTimeMillis();
    Object[] arr3 = new Object[3000000];
    System.arraycopy(arr, 0, arr3, 0, arr.length);
    System.out.println(System.currentTimeMillis() - now);
  }
}
{code}
A few sample runs, times in milliseconds:
||Loop||System.arraycopy||
|59|17|
|54|14|
|52|14|
|52|15|
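For reference, the pattern the patch applies to {{BlockInfo.ensureCapacity()}} could look like the following sketch. The class, field name, and sizing below are illustrative only, not the actual {{BlockInfo}} code:
{code}
// Hypothetical sketch of the ensureCapacity pattern discussed above.
// The field name and growth math are illustrative, not BlockInfo's real code.
public class EnsureCapacityDemo {
    Object[] triplets = new Object[3];

    void ensureCapacity(int numAdded) {
        Object[] old = triplets;
        triplets = new Object[old.length + numAdded * 3];
        // Bulk copy replaces the element-by-element for() loop.
        System.arraycopy(old, 0, triplets, 0, old.length);
    }

    public static void main(String[] args) {
        EnsureCapacityDemo d = new EnsureCapacityDemo();
        d.ensureCapacity(2);
        System.out.println(d.triplets.length); // prints 9
    }
}
{code}
{{Arrays.copyOf()}} would express the same thing in one call, but as noted in the issue description it offers no measurable benefit over {{System.arraycopy()}} here.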
> BlockInfo.ensureCapacity may get a speedup from System.arraycopy()
> ------------------------------------------------------------------
>
> Key: HDFS-554
> URL: https://issues.apache.org/jira/browse/HDFS-554
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: name-node
> Affects Versions: 0.21.0
> Reporter: Steve Loughran
> Priority: Minor
> Attachments: HDFS-554.patch
>
>
> BlockInfo.ensureCapacity() uses a for() loop to copy the old array data into
> the expanded array. {{System.arraycopy()}} is generally much faster for
> this, as it can do a bulk memory copy. There is also the type-safe Java 6
> {{Arrays.copyOf()}} to consider, though here it offers no tangible benefit.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators:
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira