[
https://issues.apache.org/jira/browse/PIG-766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12871253#action_12871253
]
Ashutosh Chauhan commented on PIG-766:
--------------------------------------
Dirk,
1. Are you getting exactly the same stack trace as the one in this JIRA?
2. Which operations does your query use - join, group-by, anything else?
3. Which load/store functions are you using to read and write data - PigStorage or
your own?
4. How large is your data, and how much memory is available to your tasks?
5. Do you have very large records in your dataset, e.g. hundreds of MB for a single
record?
It would be great if you could paste the script that produces this exception here;
for comparison, a hypothetical pattern that often hits this code path is sketched
below.
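For reference, one shape of script that puts a whole bag into a single map-output
record (which is what the stack trace below shows) is grouping data and then joining
or re-grouping on the result. This is only a hypothetical sketch - the relation,
field, and path names are made up for illustration:

-- hypothetical sketch; aliases, fields, and paths are illustrative only
raw     = LOAD 'input/logs.txt' USING PigStorage('\t')
              AS (user_id:chararray, payload:chararray);
by_user = GROUP raw BY user_id;                   -- builds one bag per user
rejoin  = JOIN by_user BY group, raw BY user_id;  -- each by_user record, bag included,
                                                  -- is serialized as one map-output record
STORE rejoin INTO 'output/joined' USING PigStorage();

If your script does something along these lines and a few keys dominate, that would
line up with question 5 above.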
> java.lang.OutOfMemoryError: Java heap space
> -------------------------------------------
>
> Key: PIG-766
> URL: https://issues.apache.org/jira/browse/PIG-766
> Project: Pig
> Issue Type: Bug
> Components: impl
> Affects Versions: 0.2.0, 0.7.0
> Environment: Hadoop-0.18.3 (cloudera RPMs).
> mapred.child.java.opts=-Xmx1024m
> Reporter: Vadim Zaliva
>
> My Pig script always fails with the following error:
> java.lang.OutOfMemoryError: Java heap space
> at java.util.Arrays.copyOf(Arrays.java:2786)
> at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:94)
> at java.io.DataOutputStream.write(DataOutputStream.java:90)
> at java.io.FilterOutputStream.write(FilterOutputStream.java:80)
> at org.apache.pig.data.DataReaderWriter.writeDatum(DataReaderWriter.java:213)
> at org.apache.pig.data.DefaultTuple.write(DefaultTuple.java:291)
> at org.apache.pig.data.DefaultAbstractBag.write(DefaultAbstractBag.java:233)
> at org.apache.pig.data.DataReaderWriter.writeDatum(DataReaderWriter.java:162)
> at org.apache.pig.data.DefaultTuple.write(DefaultTuple.java:291)
> at org.apache.pig.impl.io.PigNullableWritable.write(PigNullableWritable.java:83)
> at org.apache.hadoop.io.serializer.WritableSerialization$WritableSerializer.serialize(WritableSerialization.java:90)
> at org.apache.hadoop.io.serializer.WritableSerialization$WritableSerializer.serialize(WritableSerialization.java:77)
> at org.apache.hadoop.mapred.IFile$Writer.append(IFile.java:156)
> at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.spillSingleRecord(MapTask.java:857)
> at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.collect(MapTask.java:467)
> at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigMapReduce$Map.collect(PigMapReduce.java:101)
> at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigMapBase.runPipeline(PigMapBase.java:219)
> at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigMapBase.map(PigMapBase.java:208)
> at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigMapReduce$Map.map(PigMapReduce.java:86)
> at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:47)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:227)
> at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:2198)