Try increasing the child-task heap size with the -Xmx1024m JVM option; replace 1024 with whatever memory is actually available on your nodes. In Hadoop this is set through the mapred.child.java.opts property.
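As a rough sketch, the property can be set cluster-wide in mapred-site.xml (the 1024m value below is illustrative, not a recommendation, and should match your nodes' capacity):

```xml
<!-- mapred-site.xml: JVM options passed to each map/reduce child task -->
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx1024m</value>
</property>
```

Alternatively, it can be overridden per job with a generic option, e.g. passing -Dmapred.child.java.opts=-Xmx1024m before the job-specific arguments, so a single job gets the larger heap without changing the cluster defaults.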
Regards,
Unmesh Joshi

On 11 November 2014 10:26, Charith Wickramarachchi <[email protected]> wrote:

> Hi Devs,
>
> I am sending this mail to the dev list since I think Giraph developers
> might have experienced the issue I am facing.
>
> I am working on extending Giraph to support a programming model somewhat
> similar to Giraph++. I got an initial POC version running on my local
> machine in pseudo-distributed mode, but when I run it with large graphs
> on a cluster, the MapReduce job suddenly gets killed.
>
> The job receives a kill signal, and I am still not sure of the root
> cause. My hunch is that it has something to do with progress reporting
> from the mappers. I am attaching the part of the log that might be
> helpful.
>
> It would be great if you could give me some insights based on your
> experience.
>
> Giraph version: 1.1.0
> Hadoop version: 2.2.0
> Application type: MapReduce
>
> Thanks,
> Charith
>
> --
> Charith Dhanushka Wickramaarachchi
>
> Tel +1 213 447 4253
> Web http://apache.org/~charith <http://www-scf.usc.edu/~cwickram/>
> <http://charith.wickramaarachchi.org/>
> Blog http://charith.wickramaarachchi.org/
> <http://charithwiki.blogspot.com/>
> Twitter @charithwiki <https://twitter.com/charithwiki>
