[
https://issues.apache.org/jira/browse/HAMA-531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13281746#comment-13281746
]
Thomas Jungblut commented on HAMA-531:
--------------------------------------
Thanks Edward, you are right.
I've just observed it in the test cases:
{noformat}
12/05/23 19:41:37 INFO bsp.BSPJobClient: Running job: job_localrunner_0001
12/05/23 19:41:37 ERROR bsp.LocalBSPRunner: Exception during BSP execution!
java.io.IOException: org.apache.hama.graph.VertexWritable@78092b6f read 42 bytes, should read 49
	at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:2129)
	at org.apache.hama.bsp.SequenceFileRecordReader.next(SequenceFileRecordReader.java:82)
	at org.apache.hama.bsp.TrackedRecordReader.moveToNext(TrackedRecordReader.java:60)
	at org.apache.hama.bsp.TrackedRecordReader.next(TrackedRecordReader.java:46)
	at org.apache.hama.bsp.BSPPeerImpl.readNext(BSPPeerImpl.java:495)
	at org.apache.hama.graph.GraphJobRunner.loadVertices(GraphJobRunner.java:395)
{noformat}
I'll fix this in HAMA-580.
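For reference, that "read N bytes, should read M" IOException is the generic check SequenceFile's reader performs when a Writable's readFields() consumes fewer bytes than its write() produced. The sketch below (plain java.io only, no Hama/Hadoop classes; the record fields are made-up placeholders, not the real VertexWritable layout) reproduces the same length-mismatch symptom:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInput;
import java.io.DataInputStream;
import java.io.DataOutput;
import java.io.DataOutputStream;
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

public class WritableMismatchDemo {

    // Hypothetical vertex record: write() emits three fields...
    static void write(DataOutput out) throws IOException {
        out.writeUTF("vertex-1"); // vertex id (2-byte length + 8 bytes)
        out.writeDouble(0.15);    // vertex value (8 bytes)
        out.writeInt(3);          // edge count (4 bytes) -- never consumed below
    }

    // ...but the matching read() only consumes two of them, mimicking a
    // write()/readFields() mismatch in a custom Writable.
    static void read(DataInput in) throws IOException {
        in.readUTF();
        in.readDouble();
        // bug: missing in.readInt() for the edge count
    }

    // Returns {bytesConsumed, recordLength} so the mismatch is observable.
    static long[] roundTrip() throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        write(new DataOutputStream(buf));
        long recordLength = buf.size(); // what the container stores as the record length

        CountingInputStream counting =
                new CountingInputStream(new ByteArrayInputStream(buf.toByteArray()));
        read(new DataInputStream(counting));
        return new long[] { counting.bytesRead, recordLength };
    }

    public static void main(String[] args) throws IOException {
        long[] r = roundTrip();
        if (r[0] != r[1]) {
            // Same shape of message as the SequenceFile reader's error
            System.out.println("read " + r[0] + " bytes, should read " + r[1]);
        }
    }

    // Minimal byte counter so consumed bytes can be compared to the record length.
    static class CountingInputStream extends FilterInputStream {
        long bytesRead = 0;
        CountingInputStream(InputStream in) { super(in); }
        @Override public int read() throws IOException {
            int b = super.read();
            if (b >= 0) bytesRead++;
            return b;
        }
        @Override public int read(byte[] b, int off, int len) throws IOException {
            int n = super.read(b, off, len);
            if (n > 0) bytesRead += n;
            return n;
        }
    }
}
```

Here write() produces 22 bytes while read() consumes only 18, so the consumer's byte count can never match the stored record length, which is exactly the class of bug the stack trace above points at.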
> Data re-partitioning in BSPJobClient
> ------------------------------------
>
> Key: HAMA-531
> URL: https://issues.apache.org/jira/browse/HAMA-531
> Project: Hama
> Issue Type: Improvement
> Reporter: Edward J. Yoon
> Attachments: HAMA-531_1.patch, HAMA-531_2.patch, HAMA-531_final.patch
>
>
> Re-partitioning the data is a very expensive operation. Moreover, we
> currently process the read/write operations sequentially through the HDFS
> API in BSPJobClient, on the client side. This can trigger "too many open
> files" errors, adds HDFS overhead, and performs poorly.
> We have to find another way to re-partition the data.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators:
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira