[
https://issues.apache.org/jira/browse/CHUKWA-583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Ari Rabkin updated CHUKWA-583:
------------------------------
Priority: Trivial (was: Major)
This looks like an autogenerated bug report. I would caution you against this.
The code you're flagging is in the test suite. It is NOT performance critical,
nor is it run on substantial data volumes. It's probably pointless to change.
The human time invested in generating and committing a patch is probably more
than the "fix" would ever save.
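For context, the pattern being flagged is the classic fixed-buffer stream-copy loop. A minimal sketch of what such a helper likely looks like (an assumption; the actual `DataOperations.copyFile` code may differ in detail):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.Arrays;

public class CopyDemo {
    // Copies everything from in to out using a fixed 4096-byte buffer,
    // the pattern the report objects to. Correct for any input size;
    // the buffer size only affects how many read/write calls are made.
    static void copy(InputStream in, OutputStream out) throws IOException {
        byte[] buffer = new byte[4096];
        int read;
        while ((read = in.read(buffer)) != -1) {
            out.write(buffer, 0, read);
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] data = new byte[10_000]; // larger than one buffer
        for (int i = 0; i < data.length; i++) data[i] = (byte) i;
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        copy(new ByteArrayInputStream(data), out);
        System.out.println(Arrays.equals(data, out.toByteArray())); // true
    }
}
```

In a test helper copying small fixture files, the difference between a 4 KB buffer and a size-tuned one is negligible, which is the point of the triage above.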
> Copying data from InputStream to OutputStream needs appropriate buffer size
> ---------------------------------------------------------------------------
>
> Key: CHUKWA-583
> URL: https://issues.apache.org/jira/browse/CHUKWA-583
> Project: Chukwa
> Issue Type: Bug
> Components: Data Processors
> Affects Versions: 0.4.0
> Reporter: Xiaoming Shi
> Priority: Trivial
>
> In the file
> ./chukwa-0.4.0/src/test/org/apache/hadoop/chukwa/validationframework/util/DataOperations.java
> line: 54-58
> In the function copyFile, the buffer size is fixed at 4096 bytes. As the
> size of the data varies, performance can suffer considerably.
> We need an appropriate buffer size that depends on the size of the data to
> be copied.
> This is the same as the Apache bug
> (https://issues.apache.org/bugzilla/show_bug.cgi?id=32546)
--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira