[
https://issues.apache.org/jira/browse/HADOOP-1788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12534732
]
Christian Kunz commented on HADOOP-1788:
----------------------------------------
By using setvbuf on the C++ side and BufferedOutputStream / BufferedInputStream
on the Java side, I was able to transfer data in larger chunks, which is
especially important when submitting many key-value pairs with small values.
strace now shows larger reads and writes.
I attach patch-1788-1.txt
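
The effect of the Java-side change can be sketched as follows. This is an
illustration, not the patch itself: it wraps a hypothetical CountingOutputStream
(which stands in for the pipes socket stream) in a BufferedOutputStream with the
128K buffer from the issue title, and counts how many writes actually reach the
underlying stream when many small key-value pairs are emitted.

```java
import java.io.BufferedOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class BufferDemo {
    // Stand-in for the raw socket stream: counts how often it is written to.
    static class CountingOutputStream extends OutputStream {
        int writeCalls = 0;
        long bytes = 0;
        @Override public void write(int b) { writeCalls++; bytes++; }
        @Override public void write(byte[] b, int off, int len) {
            writeCalls++;
            bytes += len;
        }
    }

    // Emits 10,000 small key-value pairs through a 128K BufferedOutputStream
    // and returns {underlying write calls, total bytes written}.
    static long[] run() throws IOException {
        CountingOutputStream raw = new CountingOutputStream();
        BufferedOutputStream out = new BufferedOutputStream(raw, 128 * 1024);
        for (int i = 0; i < 10000; i++) {
            // Each pair is only a few bytes; unbuffered, each would be
            // its own small write to the socket.
            out.write(("k" + i + "\tv\n").getBytes("UTF-8"));
        }
        out.flush();
        return new long[] { raw.writeCalls, raw.bytes };
    }

    public static void main(String[] args) throws IOException {
        long[] r = run();
        System.out.println("underlying writes: " + r[0]);
        System.out.println("bytes: " + r[1]);
    }
}
```

With the 128K buffer, the 10,000 small writes are coalesced into a handful of
large writes to the underlying stream, which is exactly the pattern strace
reports after the patch.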
> Increase the buffer size of pipes from 1k to 128k
> -------------------------------------------------
>
> Key: HADOOP-1788
> URL: https://issues.apache.org/jira/browse/HADOOP-1788
> Project: Hadoop
> Issue Type: Bug
> Components: pipes
> Reporter: Owen O'Malley
> Assignee: Amareshwari Sri Ramadasu
> Attachments: patch-1788.txt
>
>
> Currently pipes applications use 1k writes to the socket; the buffer should
> be larger to increase throughput.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.