Hi Toshio, Roman,

The HBase I/O test failure (no. 3) may be due to this issue:
https://issues.apache.org/jira/browse/GIRAPH-926

Toshio, can you check whether you get an error similar to this?

14/07/07 15:53:58 INFO hbase.metrics: new MBeanInfo
14/07/07 15:53:58 INFO metrics.RegionServerMetrics: Initialized
14/07/07 15:53:58 INFO http.HttpServer: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
14/07/07 15:53:58 INFO http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 60030
14/07/07 15:53:58 WARN regionserver.HRegionServer: Exception in region server :
java.net.BindException: Address already in use
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:444)
        at sun.nio.ch.Net.bind(Net.java:436)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
        at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
        at org.apache.hadoop.http.HttpServer.start(HttpServer.java:602)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.putUpWebUI(HRegionServer.java:1760)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.startServiceThreads(HRegionServer.java:1715)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.handleReportForDutyResponse(HRegionServer.java:1108)
        at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.handleReportForDutyResponse(MiniHBaseCluster.java:122)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:752)
        at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:148)
        at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:101)
        at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:132)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:356)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1172)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.hbase.util.Methods.call(Methods.java:37)
        at org.apache.hadoop.hbase.security.User.call(User.java:624)
        at org.apache.hadoop.hbase.security.User.access$600(User.java:52)
        at org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:464)
        at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:130)
        at java.lang.Thread.run(Thread.java:724)
14/07/07 15:53:58 INFO regionserver.HRegionServer: STOPPED: Failed initialization
14/07/07 15:53:58 ERROR regionserver.HRegionServer: Failed init
java.net.BindException: Address already in use

Thanks,
Akila

On Wed, Jul 2, 2014 at 12:10 AM, Roman Shaposhnik <[email protected]> wrote:
> Yes, the failures around Accumulo in the hadoop_2 profile are expected
> and nothing to worry about. I should've probably mentioned it in my RC
> announcement email. Sorry about that.
>
> Any failures in the hadoop_1 profile would be a reason to reconsider RC0.
>
> Thanks,
> Roman.
>
> P.S. This is one of the reasons we're still running with hadoop_1 as the
> default profile.
>
> On Mon, Jun 30, 2014 at 3:09 AM, Akila Wajirasena
> <[email protected]> wrote:
> > Hi Roman,
> >
> > I got the same error when running the hadoop_2 profile.
> > According to this [1], the Accumulo version we use in Giraph (1.4) is
> > not compatible with Hadoop 2.
> > I think this is the issue.
> >
> > [1]
> > http://apache-accumulo.1065345.n5.nabble.com/Accumulo-Hadoop-version-compatibility-matrix-tp3893p3894.html
> >
> > Thanks
> >
> > Akila
> >
> >
> > On Mon, Jun 30, 2014 at 2:21 PM, Toshio ITO <[email protected]>
> > wrote:
> >> Hi Roman.
> >>
> >> I checked out release-1.1.0-RC0 and built it successfully.
> >>
> >> $ git checkout release-1.1.0-RC0
> >> $ mvn clean
> >> $ mvn package -Phadoop_2 -DskipTests
> >> ## SUCCESS
> >>
> >> However, when I ran the tests with LocalJobRunner, they failed.
> >>
> >> $ mvn clean
> >> $ mvn package -Phadoop_2
> >>
> >> The tests from "Core" and "Examples" passed, but "Accumulo I/O" failed at:
> >>
> >> testAccumuloInputOutput(org.apache.giraph.io.accumulo.TestAccumuloVertexFormat)
> >>
> >> The error log contained the following exception:
> >>
> >> java.lang.IncompatibleClassChangeError: Found interface
> >> org.apache.hadoop.mapreduce.JobContext, but class was expected
> >>
> >> Next I wanted to run the tests against a running Hadoop 2 instance,
> >> but I'm having trouble setting it up (I'm quite new to Hadoop).
> >>
> >> Could you show me some example configuration (etc/hadoop/* files) for
> >> a Hadoop 2.2.0 single-node cluster? That would be very helpful.
> >>
> >> > On Sun, Jun 29, 2014 at 5:06 PM, Toshio ITO <[email protected]>
> >> > wrote:
> >> > > Hi Roman.
> >> > >
> >> > > Thanks for the reply.
> >> > >
> >> > > OK, I'll try hadoop_1 and hadoop_2 with the latest
> >> > > release-1.1.0-RC0 and report the result.
> >> >
> >> > That would be extremely helpful!
> >> >
> >> > And speaking of which -- I'd like to remind folks
> >> > that taking RC0 for a spin would really help
> >> > at this point. If we ever want to have 1.1.0 out
> >> > we need the required PMC votes.
> >> >
> >> > Thanks,
> >> > Roman.
> >> ------------------------------------
> >> Toshio Ito

--
Regards
Akila Wajirasena
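[Editor's note] For the etc/hadoop/* example Toshio asks about, a minimal pseudo-distributed (single-node) configuration, following the standard Hadoop 2.x single-node setup documentation, looks roughly like this; hostnames and ports are the usual defaults, not something specific to Giraph, so adjust as needed:

```xml
<!-- etc/hadoop/core-site.xml -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>

<!-- etc/hadoop/hdfs-site.xml : single node, so replication factor 1 -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>

<!-- etc/hadoop/mapred-site.xml : run MapReduce jobs on YARN -->
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

<!-- etc/hadoop/yarn-site.xml : shuffle service needed by MapReduce -->
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
```

After formatting the namenode (`bin/hdfs namenode -format`), `sbin/start-dfs.sh` and `sbin/start-yarn.sh` bring up the single-node cluster.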
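[Editor's note] The "Address already in use" failure Akila quotes means some other process (typically a stale region server from an earlier test run) is still bound to port 60030 when the MiniHBaseCluster tries to bring up its web UI. A quick way to check, sketched here with bash's built-in /dev/tcp probe (bash-specific; port number taken from the log above):

```shell
#!/usr/bin/env bash
# Probe whether something is already listening on the HBase region server
# UI port (60030, as seen in the log). A successful connect means a
# listener exists -- exactly the situation in which the test's own bind
# fails with java.net.BindException: Address already in use.
port_in_use() {
  # Open (and immediately drop, via the subshell) a TCP connection to
  # localhost:$1; the subshell's exit status is the connect result.
  (exec 3<>"/dev/tcp/localhost/$1") 2>/dev/null
}

if port_in_use 60030; then
  echo "port 60030 is already in use"
else
  echo "port 60030 is free"
fi
```

If the port is taken, stopping the stale process (or waiting for the previous MiniHBaseCluster to shut down) before re-running the HBase I/O tests should clear the error.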
