[
https://issues.apache.org/jira/browse/HADOOP-5302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jerome Boulon updated HADOOP-5302:
----------------------------------
Attachment: HADOOP-5302.patch
- If we couldn't find a complete record AND
we cannot read more, i.e. bufferRead == MAX_READ_SIZE,
it's because the record is too big.
So log.warn, and drop the current buffer so we can keep moving.
- Add a test case
- Modify AgentControlSocketListener.java to be able:
--> to accept a port of 0, so the system can use any free port
--> to retrieve the real port from the Agent via AgentControlSocketListener
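The buffer-drop rule above can be sketched as follows. This is an illustrative sketch, not the actual Chukwa adaptor code: the names RecordBufferSketch, processBuffer, and lastIndexOf are hypothetical, MAX_READ_SIZE is an assumed cap, and a newline is assumed as the record delimiter. The point is only the invariant: a full buffer with no record boundary means the record cannot fit, so warn and discard rather than stall.

```java
import java.util.logging.Logger;

public class RecordBufferSketch {
    // Assumed buffer cap; the real value in Chukwa may differ.
    static final int MAX_READ_SIZE = 128 * 1024;
    static final Logger log = Logger.getLogger("RecordBufferSketch");

    /**
     * Returns the number of bytes consumed from buf: up to the last
     * complete record, 0 if we should wait for more data, or the whole
     * buffer if it is full yet holds no complete record (record too big).
     */
    static int processBuffer(byte[] buf, int bufferRead) {
        int recordEnd = lastIndexOf(buf, bufferRead, (byte) '\n');
        if (recordEnd < 0) {
            if (bufferRead == MAX_READ_SIZE) {
                // We couldn't find a complete record AND we cannot read
                // more: the record is too big. Warn and drop the current
                // buffer so the adaptor keeps moving.
                log.warning("record larger than " + MAX_READ_SIZE
                        + " bytes; dropping current buffer");
                return bufferRead;
            }
            return 0; // incomplete record; wait for the next read
        }
        return recordEnd + 1; // consume through the last delimiter
    }

    static int lastIndexOf(byte[] buf, int len, byte b) {
        for (int i = len - 1; i >= 0; i--) {
            if (buf[i] == b) return i;
        }
        return -1;
    }
}
```

Without the full-buffer check, a record longer than MAX_READ_SIZE would make processBuffer return 0 forever, which is exactly the "adaptor stops sending chunks" symptom this issue describes.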
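The port-0 change in AgentControlSocketListener.java uses the standard ephemeral-port idiom, sketched below with plain java.net.ServerSocket (the class and method here are illustrative, not the actual patch): bind to port 0 so the OS assigns any free port, then read back the real port from the bound socket. This is what lets the test case run without a hard-coded port.

```java
import java.io.IOException;
import java.net.ServerSocket;

public class EphemeralPortSketch {
    // Bind to port 0 (any free port) and report the port the OS chose.
    static int openOnAnyFreePort() throws IOException {
        try (ServerSocket listener = new ServerSocket(0)) {
            // getLocalPort() returns the real, OS-assigned port.
            return listener.getLocalPort();
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("bound ephemeral port: " + openOnAnyFreePort());
    }
}
```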
> If a record is too big, the adaptor will stop sending chunks
> ------------------------------------------------------------
>
> Key: HADOOP-5302
> URL: https://issues.apache.org/jira/browse/HADOOP-5302
> Project: Hadoop Core
> Issue Type: Bug
> Components: contrib/chukwa
> Reporter: Jerome Boulon
> Assignee: Jerome Boulon
> Attachments: HADOOP-5302.patch
>
>