The input data is 1 million vertices, each with 100 edges; its size is about 0.9G. When I set the maximum number of tasks per node to 3 and give each task 2000m of memory, the job runs successfully. But when I set the maximum number of tasks per node to 5 with 1000m of memory per task, it fails.
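For reference, here is roughly what the relevant part of my hama-site.xml looks like for the run that succeeded (a minimal sketch; I am assuming that bsp.tasks.maximum, the per-groom task limit, and bsp.child.java.opts, the child JVM heap, are the right knobs for what I described above):

  <property>
    <name>bsp.tasks.maximum</name>
    <!-- maximum number of BSP tasks run simultaneously on each groom server -->
    <value>3</value>
  </property>
  <property>
    <name>bsp.child.java.opts</name>
    <!-- heap size given to each task's child JVM -->
    <value>-Xmx2000m</value>
  </property>

The failing run used the same file with the values changed to 5 and -Xmx1000m.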
------------------ Original Message ------------------
From: "Edward J. Yoon"
Date: Monday, 24 December 2012, 16:48
To: "dev"
Subject: Re: Re: Re: Re: Re: sssp experiment

Yeah, thanks, I am aware of it. In my case, to process the vertices within
a 256MB block, each task required 25~30GB of memory. Please test with small
data for the moment. We're fixing it.

On Mon, Dec 24, 2012 at 5:19 PM, <[email protected]> wrote:
> My input data is 2G, and each of my nodes has 8G of memory, but the job
> still failed.
>
> ------------------ Original Message ------------------
> From: "Edward J. Yoon"
> Date: Monday, 24 December 2012, 15:56
> To: "dev"
> Subject: Re: Re: Re: Re: sssp experiment
>
>> I set the maximum number of tasks per node to 5, and each task has 1000m of memory
>
> The whole graph (input data) should fit into the memory of the cluster
> nodes. You have only 5GB per node * 9 = 40GB of memory. Please check how
> large your input is.
>
> And, as I told you, an SSSP job on a 1-billion-edge graph requires a total
> of 600+GB of memory (a full rack). You can't run it with Hama 0.6 on a
> small cluster.
>
> On Mon, Dec 24, 2012 at 2:52 PM, <[email protected]> wrote:
>> I set the maximum number of tasks per node to 5, and each task has 1000m
>> of memory, but it failed. Does the memory consumption keep increasing as
>> the supersteps progress?
>>
>> ------------------ Original Message ------------------
>> From: "Edward J. Yoon"
>> Date: Monday, 24 December 2012, 13:36
>> To: "dev"
>> Cc: "pengchen0525"
>> Subject: Re: Re: Re: sssp experiment
>>
>> Using one Oracle BDA rack (18 nodes, each with 48GB of memory), I was
>> able to run the Hama 0.6 SSSP example on a graph with 1~2 billion
>> edges [1]. If you can partition the input manually, please increase the
>> number of splits; each task will then consume less memory.
>>
>> Currently we're working on the partitioner and a spilling queue. Please
>> wait for the next major release.
>>
>> 1. http://wiki.apache.org/hama/Benchmarks
>>
>> On Mon, Dec 24, 2012 at 1:59 PM, <[email protected]> wrote:
>>> But when I use 1 million vertices with 100 edges per vertex and 45
>>> tasks, it still does not run successfully. What is the maximum input
>>> size that Hama can handle?
>>>
>>> ------------------ Original Message ------------------
>>> From: "Edward J. Yoon"
>>> Date: Monday, 24 December 2012, 12:31
>>> To: "dev"
>>> Subject: Re: Re: sssp experiment
>>>
>>>> The input graph has 10 million vertices. Each vertex has 100 edges.
>>>
>>> 10 million vertices and 10 billion edges? The input is too large.
>>> You'll need at least 2 racks.
>>>
>>> On Mon, Dec 24, 2012 at 11:00 AM, <[email protected]> wrote:
>>>> 12/12/24 09:28:50 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
>>>> 12/12/24 09:28:50 WARN snappy.LoadSnappy: Snappy native library not loaded
>>>> 12/12/24 09:28:51 INFO sync.ZKSyncClient: Initializing ZK Sync Client
>>>> 12/12/24 09:28:51 INFO sync.ZooKeeperSyncClientImpl: Start connecting to Zookeeper! At /192.168.1.211:61004
>>>> 12/12/24 09:28:51 ERROR sync.ZooKeeperSyncClientImpl: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /bsp/job_201212240927_0001/peers
>>>> 12/12/24 09:28:52 INFO ipc.Server: Starting SocketReader
>>>> 12/12/24 09:28:52 INFO ipc.Server: IPC Server Responder: starting
>>>> 12/12/24 09:28:52 INFO message.HadoopMessageManagerImpl: BSPPeer address:datanode01 port:61004
>>>> 12/12/24 09:28:52 INFO ipc.Server: IPC Server listener on 61004: starting
>>>> 12/12/24 09:28:52 INFO ipc.Server: IPC Server handler 0 on 61004: starting
>>>> 12/12/24 09:37:39 ERROR security.UserGroupInformation: PriviledgedActionException as:hadoop cause:java.io.IOException: java.lang.OutOfMemoryError: Java heap space
>>>>
>>>> ------------------ Original Message ------------------
>>>> From: "Suraj Menon" <[email protected]>
>>>> Date: Monday, 24 December 2012, 00:20
>>>> To: "dev" <[email protected]>
>>>> Subject: Re: sssp experiment
>>>>
>>>> Please share the error you observe in the logs area -
>>>> $HAMA_HOME/logs/tasklogs/job_201212231344_0002/*.log
>>>>
>>>> -Suraj
>>>>
>>>> On Sun, Dec 23, 2012 at 3:24 AM, <[email protected]> wrote:
>>>>
>>>>> Hi, there is something wrong with my SSSP experiment. My Hama cluster
>>>>> has 9 nodes. The input graph has 10 million vertices, and each vertex
>>>>> has 100 edges. The number of tasks is 36, but the job failed. What
>>>>> happened? The error messages are as follows:
>>>>> [hadoop@namenode hama]$ bin/hama jar hama-examples-0.6.0.jar sssp 0 sssp_graph/10m_100e_36s sssp_graph_output/10m_100e_36s 36
>>>>> 12/12/23 14:49:30 INFO bsp.FileInputFormat: Total input paths to process : 36
>>>>> 12/12/23 14:49:31 INFO bsp.BSPJobClient: Running job: job_201212231344_0002
>>>>> 12/12/23 14:49:34 INFO bsp.BSPJobClient: Current supersteps number: 0
>>>>> 12/12/23 14:49:49 INFO bsp.BSPJobClient: Current supersteps number: 1
>>>>> 12/12/23 14:49:52 INFO bsp.BSPJobClient: Current supersteps number: 2
>>>>> 12/12/23 14:50:55 INFO bsp.BSPJobClient: Current supersteps number: 3
>>>>> 12/12/23 14:51:55 INFO bsp.BSPJobClient: Current supersteps number: 4
>>>>> 12/12/23 14:52:01 INFO bsp.BSPJobClient: Current supersteps number: 5
>>>>> 12/12/23 14:52:04 INFO bsp.BSPJobClient: Current supersteps number: 6
>>>>> 12/12/23 14:52:07 INFO bsp.BSPJobClient: Current supersteps number: 7
>>>>> 12/12/23 14:52:10 INFO bsp.BSPJobClient: Current supersteps number: 8
>>>>> 12/12/23 14:53:34 INFO bsp.BSPJobClient: Current supersteps number: 9
>>>>> 12/12/23 14:56:38 INFO bsp.BSPJobClient: Job failed.
>>>>> [hadoop@namenode hama]$
>>>
>>> --
>>> Best Regards, Edward J. Yoon
>>> @eddieyoon
>>
>> --
>> Best Regards, Edward J. Yoon
>> @eddieyoon
>
> --
> Best Regards, Edward J. Yoon
> @eddieyoon

--
Best Regards, Edward J. Yoon
@eddieyoon
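P.S. A rough back-of-envelope check on the failing configuration, following Edward's point above about per-task memory (the per-object sizes here are my own guesses, not measured numbers): assuming the 45-way split I mentioned earlier in the thread, each task holds about 22,000 vertices and 2.2 million edges of the 1-million-vertex graph. If each edge costs on the order of 50~100 bytes once it is materialized as Writable objects on the heap, the adjacency lists alone take roughly 100~200MB per task, before counting the vertex objects and the incoming message queue (up to one message per in-edge in a superstep, and Hama 0.6 keeps that queue in memory rather than spilling it to disk). That could plausibly overflow a 1000m heap while still fitting in 2000m, which matches what I observed.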
