Re: java.io.IOException: Pass a Delete or a Put

2012-09-12 Thread Jothikumar Ekanath
Any help on this one please.

On Tue, Sep 11, 2012 at 11:19 AM, Jothikumar Ekanath kbmku...@gmail.com wrote:

 Hi Stack,
 Thanks for the reply. I looked at the code and I am having
 a very basic confusion about how to use it correctly. The code I wrote
 earlier has the following input and output types, and I want to keep it that way.

 After looking at the sources and examples, I modified my reducer (given
 below); the mapper and job configuration are still the same. I still see
 the same error. Am I doing something wrong?


  DailySumMapper extends TableMapper<Text, Text>
  KEYOUT = Text
  VALUEOUT = Text

  DailySumReducer extends TableReducer<Text, Text, ImmutableBytesWritable>

  KEYIN = Text
  VALUEIN = Text
  KEYOUT = ImmutableBytesWritable
  VALUEOUT must always be Put or Delete when we extend TableReducer,
  so we are not specifying it.

 Code
    public static class DailySumReducer
            extends TableReducer<Text, Text, ImmutableBytesWritable> {

        private int count = 0;

        protected void reduce(Text key, Iterable<Text> values,
                Reducer.Context context) throws IOException, InterruptedException {
            long inbound = 0L;
            long outbound = 0L;
            for (Text val : values) {
                String text = val.toString();
                int index = text.indexOf("-");
                String in = text.substring(0, index);
                String out = text.substring(index + 1, text.length());
                inbound = inbound + Long.parseLong(in);
                outbound = outbound + Long.parseLong(out);
            }
            ByteBuffer data = ByteBuffer.wrap(new byte[16]);
            data.putLong(inbound);
            data.putLong(outbound);
            Put put = new Put(Bytes.toBytes(key.toString() + 20120804));
            put.add(Bytes.toBytes("t"), Bytes.toBytes("s"), data.array());
            context.setStatus("Emitting Put " + count++);
            ImmutableBytesWritable ibw =
                    new ImmutableBytesWritable(Bytes.toBytes(key.toString()));
            context.write(ibw, put);
        }
    }
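One detail worth flagging for anyone reading this archive: in the reducer above, the third parameter is typed as the raw Reducer.Context, so the method most likely overloads rather than overrides Reducer.reduce(). In that case Hadoop still runs the inherited identity reduce, which forwards the mapper's Text values to TableOutputFormat and reproduces the "Pass a Delete or a Put" error even though a Put is built here. The sketch below only illustrates a signature that does override; it is not a fix confirmed anywhere in the thread, and the "t"/"s" family and qualifier, the appended date, and the packed value layout are simply copied from the quoted code.

    public static class DailySumReducer
            extends TableReducer<Text, Text, ImmutableBytesWritable> {

        @Override   // compiles only if the signature really matches Reducer.reduce()
        protected void reduce(Text key, Iterable<Text> values, Context context)
                throws IOException, InterruptedException {
            long inbound = 0L;
            long outbound = 0L;
            for (Text val : values) {
                // the mapper emits "inbound-outbound" as a single Text value
                String[] parts = val.toString().split("-", 2);
                inbound += Long.parseLong(parts[0]);
                outbound += Long.parseLong(parts[1]);
            }
            // pack both counters into one 16-byte value, as in the quoted code
            ByteBuffer data = ByteBuffer.wrap(new byte[16]);
            data.putLong(inbound);
            data.putLong(outbound);
            Put put = new Put(Bytes.toBytes(key.toString() + 20120804));
            put.add(Bytes.toBytes("t"), Bytes.toBytes("s"), data.array());
            context.write(new ImmutableBytesWritable(put.getRow()), put);
        }
    }

With TableReducer the value type is fixed by the framework, so the only hard requirement on context.write() is that the value really is a Put or Delete; the Put carries its own row key.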

 On Tue, Sep 11, 2012 at 10:38 AM, Stack st...@duboce.net wrote:

 On Mon, Sep 10, 2012 at 7:06 PM, Jothikumar Ekanath kbmku...@gmail.com
 wrote:
  Hi,
 Getting this error while using hbase as a sink.
 
 
  Error
  java.io.IOException: Pass a Delete or a Put

 Would suggest you study the mapreduce jobs that ship with hbase both
 in main and under test.

 Looking at your program, you are all Text.  The above complaint is
 about wanting a Put or Delete.  Can you change what you produce so
 it is a Put/Delete rather than Text?

 St.Ack
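The alternative Stack is pointing at is to build the Put in the mapper instead. A rough wiring sketch for that route (not from the thread; PutEmittingMapper is a hypothetical mapper that outputs ImmutableBytesWritable/Put pairs, and the table names are the ones from the original post), using the IdentityTableReducer that ships with HBase to pass the Puts through the single reduce task:

    // Hypothetical wiring: the mapper's output value type becomes Put, and
    // IdentityTableReducer simply forwards each Put to TableOutputFormat,
    // so the "Pass a Delete or a Put" check is satisfied.
    static void configureWithReducePhase(Job job, Scan scan) throws IOException {
        TableMapReduceUtil.initTableMapperJob(
                "HTASDB",                       // input table, from the original post
                scan,
                PutEmittingMapper.class,        // hypothetical mapper emitting Puts
                ImmutableBytesWritable.class,   // mapper output key
                Put.class,                      // mapper output value
                job);
        TableMapReduceUtil.initTableReducerJob(
                "DA",                           // output table, from the original post
                IdentityTableReducer.class,
                job);
        job.setNumReduceTasks(1);
    }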





Re: java.io.IOException: Pass a Delete or a Put

2012-09-12 Thread Jothikumar Ekanath
Hi Doug,
  That is where I took my code from initially; I am not able to
notice anything different from there. I know there is something wrong with
the key-in/key-out types in my code, but I am not able to figure it out.

I have given below what I am using. Do you see anything wrong in there?

DailySumMapper extends TableMapper<Text, Text>
 KEYOUT = Text
 VALUEOUT = Text

 DailySumReducer extends TableReducer<Text, Text, ImmutableBytesWritable>

 KEYIN = Text
 VALUEIN = Text
 KEYOUT = ImmutableBytesWritable
 VALUEOUT must always be Put or Delete when we extend TableReducer,
 so we are not specifying it.

 Code
    public static class DailySumReducer
            extends TableReducer<Text, Text, ImmutableBytesWritable> {

        private int count = 0;

        protected void reduce(Text key, Iterable<Text> values,
                Reducer.Context context) throws IOException, InterruptedException {
            long inbound = 0L;
            long outbound = 0L;
            for (Text val : values) {
                String text = val.toString();
                int index = text.indexOf("-");
                String in = text.substring(0, index);
                String out = text.substring(index + 1, text.length());
                inbound = inbound + Long.parseLong(in);
                outbound = outbound + Long.parseLong(out);
            }
            ByteBuffer data = ByteBuffer.wrap(new byte[16]);
            data.putLong(inbound);
            data.putLong(outbound);
            Put put = new Put(Bytes.toBytes(key.toString() + 20120804));
            put.add(Bytes.toBytes("t"), Bytes.toBytes("s"), data.array());
            context.setStatus("Emitting Put " + count++);
            ImmutableBytesWritable ibw =
                    new ImmutableBytesWritable(Bytes.toBytes(key.toString()));
            context.write(ibw, put);
        }
    }



On Wed, Sep 12, 2012 at 11:05 AM, Doug Meil
doug.m...@explorysmedical.com wrote:


 Did you compare your example to this...

 http://hbase.apache.org/book.html#mapreduce.example
 7.2.2. HBase MapReduce Read/Write Example


 ?
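For later readers, the referenced book section describes a map-only read/write job. Paraphrased from memory rather than quoted, and with placeholder table names ("sourceTable"/"targetTable") and a placeholder class name (CopyMapper), the shape is roughly: the mapper copies each source row into a Put and writes it, the reducer class is null, and reduce tasks are set to 0, so TableOutputFormat only ever receives Puts.

    // Sketch of the read/write pattern: copy each source row into a Put and
    // write it out from the mapper; no reduce step is involved.
    public static class CopyMapper extends TableMapper<ImmutableBytesWritable, Put> {

        @Override
        public void map(ImmutableBytesWritable row, Result value, Context context)
                throws IOException, InterruptedException {
            Put put = new Put(row.get());
            for (KeyValue kv : value.raw()) {
                put.add(kv);            // carry every cell of the source row across
            }
            context.write(row, put);
        }
    }

    // Job wiring for the map-only variant.
    static void configure(Job job) throws IOException {
        Scan scan = new Scan();
        scan.setCaching(500);
        scan.setCacheBlocks(false);
        TableMapReduceUtil.initTableMapperJob(
                "sourceTable", scan, CopyMapper.class,
                ImmutableBytesWritable.class, Put.class, job);
        TableMapReduceUtil.initTableReducerJob(
                "targetTable", null, job);   // null reducer: mapper output goes straight to the table
        job.setNumReduceTasks(0);
    }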




 On 9/12/12 1:02 PM, Jothikumar Ekanath kbmku...@gmail.com wrote:

 Any help on this one please.
 
 On Tue, Sep 11, 2012 at 11:19 AM, Jothikumar Ekanath
 kbmku...@gmail.com wrote:
 
  Hi Stack,
  Thanks for the reply. I looked at the code and i am
 having
  a very basic confusion on how to use it correctly.  The code i wrote
  earlier has the following input and output types and i want it that way
 
  After looking at the sources and examples, i modified my reducer (given
  below), the mapper and job configuration are still the same. Still i see
  the same error. Am i doing something wrong?
 
 
   DailySumMapper extends TableMapper<Text, Text>
   KEYOUT = Text
   VALUEOUT = Text

   DailySumReducer extends TableReducer<Text, Text, ImmutableBytesWritable>

   KEYIN = Text
   VALUEIN = Text
   KEYOUT = ImmutableBytesWritable
   VALUEOUT must always be Put or Delete when we extend TableReducer,
   so we are not specifying it.
 
  Code
    public static class DailySumReducer
            extends TableReducer<Text, Text, ImmutableBytesWritable> {

        private int count = 0;

        protected void reduce(Text key, Iterable<Text> values,
                Reducer.Context context) throws IOException, InterruptedException {
            long inbound = 0L;
            long outbound = 0L;
            for (Text val : values) {
                String text = val.toString();
                int index = text.indexOf("-");
                String in = text.substring(0, index);
                String out = text.substring(index + 1, text.length());
                inbound = inbound + Long.parseLong(in);
                outbound = outbound + Long.parseLong(out);
            }
            ByteBuffer data = ByteBuffer.wrap(new byte[16]);
            data.putLong(inbound);
            data.putLong(outbound);
            Put put = new Put(Bytes.toBytes(key.toString() + 20120804));
            put.add(Bytes.toBytes("t"), Bytes.toBytes("s"), data.array());
            context.setStatus("Emitting Put " + count++);
            ImmutableBytesWritable ibw =
                    new ImmutableBytesWritable(Bytes.toBytes(key.toString()));
            context.write(ibw, put);
        }
    }
 
  On Tue, Sep 11, 2012 at 10:38 AM, Stack st...@duboce.net wrote:
 
  On Mon, Sep 10, 2012 at 7:06 PM, Jothikumar Ekanath
 kbmku...@gmail.com
  wrote:
   Hi,
  Getting this error while using hbase as a sink.
  
  
   Error
   java.io.IOException: Pass a Delete or a Put
 
  Would suggest you study the mapreduce jobs that ship with hbase both
  in main and under test.
 
  Looking at your program, you are all Text.  The above complaint is
  about wanting a Put or Delete.  Can you change what you produce so
  it is a Put/Delete rather than Text?
 
  St.Ack
 
 
 





Re: java.io.IOException: Pass a Delete or a Put

2012-09-11 Thread Jothikumar Ekanath
Hi,

I am kind of stuck on this one. I read all the other similar issues and
coded based on them, but I still get this error.

Any help or clue will help me move forward.

Thanks




On Mon, Sep 10, 2012 at 7:06 PM, Jothikumar Ekanath kbmku...@gmail.com wrote:

 Hi,
Getting this error while using hbase as a sink.


 Error
 java.io.IOException: Pass a Delete or a Put
 at
 org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.write(TableOutputFormat.java:125)
 at
 org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.write(TableOutputFormat.java:84)
 at
 org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.write(ReduceTask.java:587)
 at
 org.apache.hadoop.mapreduce.TaskInputOutputContext.write(TaskInputOutputContext.java:80)
 at org.apache.hadoop.mapreduce.Reducer.reduce(Reducer.java:156)
 at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:176)
 at
 org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:649)
 at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:417)
 at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
 at org.apache.hadoop.mapred.Child.main(Child.java:249)
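For context on why this particular IOException appears: the TableOutputFormat record writer at the top of the stack trace accepts only Put or Delete values. The snippet below is a from-memory paraphrase of the 0.94-era write() method, not the verbatim source; note that the reduce output key is ignored, since the Put or Delete carries its own row key.

    // From-memory paraphrase of TableOutputFormat$TableRecordWriter.write()
    // in HBase 0.94 (check your exact version's source); 'table' stands for
    // the HTable opened for the configured output table.
    public void write(Object key, Writable value) throws IOException {
        if (value instanceof Put) {
            table.put(new Put((Put) value));
        } else if (value instanceof Delete) {
            table.delete(new Delete((Delete) value));
        } else {
            throw new IOException("Pass a Delete or a Put");
        }
    }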



  Below is my code.
 I am using the following versions:

 HBase 0.94
 Hadoop 1.0.3

 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.HBaseConfiguration;
 import org.apache.hadoop.hbase.KeyValue;
 import org.apache.hadoop.hbase.client.Put;
 import org.apache.hadoop.hbase.client.Result;
 import org.apache.hadoop.hbase.client.Scan;
 import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
 import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
 import org.apache.hadoop.hbase.mapreduce.TableMapper;
 import org.apache.hadoop.hbase.mapreduce.TableReducer;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.hadoop.io.Text;
 import org.apache.hadoop.mapreduce.Job;
 import org.apache.hadoop.mapreduce.*;

 import java.io.IOException;
 import java.nio.ByteBuffer;
 import java.util.ArrayList;
 import java.util.List;

 public class DailyAggMapReduce {

     public static void main(String args[]) throws Exception {
         Configuration config = HBaseConfiguration.create();
         Job job = new Job(config, "DailyAverageMR");
         job.setJarByClass(DailyAggMapReduce.class);
         Scan scan = new Scan();
         // 1 is the default in Scan, which will be bad for MapReduce jobs
         scan.setCaching(500);
         // don't set to true for MR jobs
         scan.setCacheBlocks(false);

         TableMapReduceUtil.initTableMapperJob(
                 "HTASDB",             // input table
                 scan,                 // Scan instance to control CF and attribute selection
                 DailySumMapper.class, // mapper class
                 Text.class,           // mapper output key
                 Text.class,           // mapper output value
                 job);

         TableMapReduceUtil.initTableReducerJob(
                 "DA",                  // output table
                 DailySumReducer.class, // reducer class
                 job);

         //job.setOutputValueClass(Put.class);
         job.setNumReduceTasks(1);   // at least one, adjust as required

         boolean b = job.waitForCompletion(true);
         if (!b) {
             throw new IOException("error with job!");
         }
     }

     public static class DailySumMapper extends TableMapper<Text, Text> {

         public void map(ImmutableBytesWritable row, Result value,
                 Mapper.Context context) throws IOException, InterruptedException {
             List<String> key = getRowKey(row.get());
             Text rowKey = new Text(key.get(0));
             int time = Integer.parseInt(key.get(1));
             // limiting the time for one day (Aug 04 2012) -- Testing, not a good way
             if (time <= 1344146400) {
                 List<KeyValue> data = value.list();
                 long inbound = 0L;
                 long outbound = 0L;
                 for (KeyValue kv : data) {
                     List<Long> values = getValues(kv.getValue());
                     if (values.get(0) != -1) {
                         inbound = inbound + values.get(0);
                     }
                     if (values.get(1) != -1) {
                         outbound = outbound + values.get(1);
                     }
                 }
                 context.write(rowKey, new Text(String.valueOf(inbound) +
                         "-" + String.valueOf(outbound)));
             }
         }

         private static List<Long> getValues(byte[] data) {
             List<Long> values = new ArrayList<Long>();
             ByteBuffer buffer = ByteBuffer.wrap(data);
             values.add(buffer.getLong

Re: java.io.IOException: Pass a Delete or a Put

2012-09-11 Thread Jothikumar Ekanath
Hi Stack,
Thanks for the reply. I looked at the code and I am having
a very basic confusion about how to use it correctly. The code I wrote
earlier has the following input and output types, and I want to keep it that way.

After looking at the sources and examples, I modified my reducer (given
below); the mapper and job configuration are still the same. I still see
the same error. Am I doing something wrong?

 DailySumMapper extends TableMapper<Text, Text>
KEYOUT = Text
VALUEOUT = Text

 DailySumReducer extends TableReducer<Text, Text, ImmutableBytesWritable>

KEYIN = Text
VALUEIN = Text
KEYOUT = ImmutableBytesWritable
VALUEOUT must always be Put or Delete when we extend TableReducer, so
we are not specifying it.

Code
    public static class DailySumReducer
            extends TableReducer<Text, Text, ImmutableBytesWritable> {

        private int count = 0;

        protected void reduce(Text key, Iterable<Text> values,
                Reducer.Context context) throws IOException, InterruptedException {
            long inbound = 0L;
            long outbound = 0L;
            for (Text val : values) {
                String text = val.toString();
                int index = text.indexOf("-");
                String in = text.substring(0, index);
                String out = text.substring(index + 1, text.length());
                inbound = inbound + Long.parseLong(in);
                outbound = outbound + Long.parseLong(out);
            }
            ByteBuffer data = ByteBuffer.wrap(new byte[16]);
            data.putLong(inbound);
            data.putLong(outbound);
            Put put = new Put(Bytes.toBytes(key.toString() + 20120804));
            put.add(Bytes.toBytes("t"), Bytes.toBytes("s"), data.array());
            context.setStatus("Emitting Put " + count++);
            ImmutableBytesWritable ibw =
                    new ImmutableBytesWritable(Bytes.toBytes(key.toString()));
            context.write(ibw, put);
        }
    }

On Tue, Sep 11, 2012 at 10:38 AM, Stack st...@duboce.net wrote:

 On Mon, Sep 10, 2012 at 7:06 PM, Jothikumar Ekanath kbmku...@gmail.com
 wrote:
  Hi,
 Getting this error while using hbase as a sink.
 
 
  Error
  java.io.IOException: Pass a Delete or a Put

 Would suggest you study the mapreduce jobs that ship with hbase both
 in main and under test.

 Looking at your program, you are all Text.  The above complaint is
 about wanting a Put or Delete.  Can you change what you produce so
 it is a Put/Delete rather than Text?

 St.Ack



java.io.IOException: Pass a Delete or a Put

2012-09-10 Thread Jothikumar Ekanath
Hi,
   Getting this error while using hbase as a sink.


Error
java.io.IOException: Pass a Delete or a Put
at
org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.write(TableOutputFormat.java:125)
at
org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.write(TableOutputFormat.java:84)
at
org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.write(ReduceTask.java:587)
at
org.apache.hadoop.mapreduce.TaskInputOutputContext.write(TaskInputOutputContext.java:80)
at org.apache.hadoop.mapreduce.Reducer.reduce(Reducer.java:156)
at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:176)
at
org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:649)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:417)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at org.apache.hadoop.mapred.Child.main(Child.java:249)



 Below is my code.
I am using the following versions:

HBase 0.94
Hadoop 1.0.3

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.mapreduce.TableReducer;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.*;

import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class DailyAggMapReduce {

    public static void main(String args[]) throws Exception {
        Configuration config = HBaseConfiguration.create();
        Job job = new Job(config, "DailyAverageMR");
        job.setJarByClass(DailyAggMapReduce.class);
        Scan scan = new Scan();
        // 1 is the default in Scan, which will be bad for MapReduce jobs
        scan.setCaching(500);
        // don't set to true for MR jobs
        scan.setCacheBlocks(false);

        TableMapReduceUtil.initTableMapperJob(
                "HTASDB",             // input table
                scan,                 // Scan instance to control CF and attribute selection
                DailySumMapper.class, // mapper class
                Text.class,           // mapper output key
                Text.class,           // mapper output value
                job);

        TableMapReduceUtil.initTableReducerJob(
                "DA",                  // output table
                DailySumReducer.class, // reducer class
                job);

        //job.setOutputValueClass(Put.class);
        job.setNumReduceTasks(1);   // at least one, adjust as required

        boolean b = job.waitForCompletion(true);
        if (!b) {
            throw new IOException("error with job!");
        }
    }

    public static class DailySumMapper extends TableMapper<Text, Text> {

        public void map(ImmutableBytesWritable row, Result value,
                Mapper.Context context) throws IOException, InterruptedException {
            List<String> key = getRowKey(row.get());
            Text rowKey = new Text(key.get(0));
            int time = Integer.parseInt(key.get(1));
            // limiting the time for one day (Aug 04 2012) -- Testing, not a good way
            if (time <= 1344146400) {
                List<KeyValue> data = value.list();
                long inbound = 0L;
                long outbound = 0L;
                for (KeyValue kv : data) {
                    List<Long> values = getValues(kv.getValue());
                    if (values.get(0) != -1) {
                        inbound = inbound + values.get(0);
                    }
                    if (values.get(1) != -1) {
                        outbound = outbound + values.get(1);
                    }
                }
                context.write(rowKey, new Text(String.valueOf(inbound) +
                        "-" + String.valueOf(outbound)));
            }
        }

        private static List<Long> getValues(byte[] data) {
            List<Long> values = new ArrayList<Long>();
            ByteBuffer buffer = ByteBuffer.wrap(data);
            values.add(buffer.getLong());
            values.add(buffer.getLong());
            return values;
        }

        private static List<String> getRowKey(byte[] key) {
            List<String> keys = new ArrayList<String>();
            ByteBuffer buffer = ByteBuffer.wrap(key);
            StringBuilder sb = new StringBuilder();
            sb.append(buffer.getInt());
            sb.append("-");

Re: Problem - Bringing up the HBase cluster

2012-08-22 Thread Jothikumar Ekanath
Hi,
 Thanks for the response; sorry I put this email in the dev space.
My data replication is 2, and yes, the region server and master connectivity
is good.

Initially I started with 4 data nodes and 1 master, and I faced the same
problem. So I reduced the data nodes to 1 and wanted to test it. I see the
same issue. I haven't tested the pseudo-distributed mode; I can test that.
But my objective is to test the fully distributed mode and do some testing.
I can send my configuration for review. Please let me know if I am missing
any basic setup configuration.

On Wed, Aug 22, 2012 at 12:00 AM, N Keywal nkey...@gmail.com wrote:

 Hi,

 Please use the user mailing list (added at dest) for this type of
 question instead of the dev list (now in bcc).

 It's a little bit strange to use the fully distributed mode with a
 single region server. Is the pseudo-distributed mode working?
 Check the number of datanodes vs. dfs.replication (default 3). If you
 have fewer datanodes than the dfs.replication value, it won't work
 properly.
 Check as well that the region server is connected to the master.

 Cheers,



 On Wed, Aug 22, 2012 at 3:16 AM, kbmkumar kbmku...@gmail.com wrote:
  Hi,
  I am trying to bring up an HBase cluster with 1 master and 1 region
  server. I am using
  Hadoop 1.0.3
  HBase 0.94.1

  Starting HDFS was straightforward and I could see the namenode up and
  running successfully. But the problem is with HBase. I followed all the
  guidelines given in the HBase cluster setup (fully distributed mode) and ran
  start-hbase.sh.

  It started the Master, RegionServer and ZooKeeper (on the region server) as
  per my configuration. But I am not sure the master is fully functional. When
  I try to connect with the hbase shell and create a table, it errors out with
  PleaseHoldException: Master is initializing

  In the UI, the HMaster status shows like this: *Assigning META region (since
  18mins, 39sec ago)*

  and I see the HMaster logs are flooded with the following debug prints; the
  log file is full of them:
  *
  2012-08-22 01:08:19,637 DEBUG
 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation:
  Looked up root region location,
 
 connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@49586cbd
 ;
  serverName=hadoop-datanode1,60020,1345596463277
  2012-08-22 01:08:19,638 DEBUG
 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation:
  Looked up root region location,
 
 connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@49586cbd
 ;
  serverName=hadoop-datanode1,60020,1345596463277
  2012-08-22 01:08:19,639 DEBUG
 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation:
  Looked up root region location,
 
 connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@49586cbd
 ;
  serverName=hadoop-datanode1,60020,1345596463277*
 
  Please help me in debugging this.
 
 
 
 
 
  --
  View this message in context:
 http://apache-hbase.679495.n3.nabble.com/Problem-Bringing-up-the-HBase-cluster-tp4019948.html
  Sent from the HBase - Developer mailing list archive at Nabble.com.



Re: Problem - Bringing up the HBase cluster

2012-08-22 Thread Jothikumar Ekanath
Hi Stack,

OK, I will clean up everything and start fresh. This time I will add
one more data node,

so 1 HBase master and 2 region servers. ZooKeeper, managed by HBase, is started on
region1. This is my configuration; I will start everything from scratch
and see. If I get the same error, I will send the logs for review.

<configuration>

  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://hadoop-namenode:9000/hbase</value>
    <description>The directory shared by RegionServers.</description>
  </property>
  <property>
    <name>hbase.master</name>
    <value>hbase-master:6</value>
    <description>The host and port that the HBase master runs at.</description>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
    <description>The mode the cluster will be in. Possible values are
      false: standalone and pseudo-distributed setups with managed ZooKeeper;
      true: fully-distributed with unmanaged ZooKeeper Quorum (see
      hbase-env.sh)</description>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>hbase-region1</value>
    <!--<value>hadoop-namenode,sv2lxixpdevdn01,sv2lxixpdevdn02,sv2lxixpdevdn03,sv2lxixpdevdn04</value>-->
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/apps/zookeeper</value>
    <description>Property from ZooKeeper's config zoo.cfg. The directory
      where the snapshot is stored.</description>
  </property>
</configuration>


Thanks
Jothikumar


On Wed, Aug 22, 2012 at 8:59 AM, Stack st...@duboce.net wrote:

 On Wed, Aug 22, 2012 at 8:43 AM, Jothikumar Ekanath kbmku...@gmail.com
 wrote:
  Hi,
   Thanks for the response, sorry i put this email in the dev
 space.
  My data replication is 2. and yes the region and master server
 connectivity
  is good
 
  Initially i started with 4 data nodes and 1 master, i faced the same
  problem. So i reduced the data nodes to 1 and wanted to test it. I see
 the
  same issue. I haven't tested the pseudo distribution mode, i can test
 that.
  But my objective is to test the full distributed mode and do some
 testing.
  I can send my configuration for review. Please let me know if i am
 missing
  any basic setup configuration.
 

 Be sure you start fresh.  Did you run it standalone previously?  Did you
 reset the hbase tmp dir?  If not, try clearing /tmp before starting.  If it
 still does not work, put up your logs where we can take a look-see --
 and your config.

 St.Ack



Re: Problem - Bringing up the HBase cluster

2012-08-22 Thread Jothikumar Ekanath
Hi Stack,

For sure, hdfs is up?  You can put and get files into it?
I can see the namenode webapp; I didn't try the file get and put part.

What is hbase-master and hbase-namenode?  Are they supposed to be the
same machine?  DNS says they are?

They are different nodes (VMs); I configured the hosts file to map the DNS
correctly.

Where is hbase-region1?  Not same as hbase-master and hbase-namenode?

It is a separate VM, not the same as the master and namenode.

This is writable?

Yes, it is.

Does my configuration look correct? Do I need to set anything in
hbase-env.sh? I already set manage_ZK to true.

Thanks
Jothikumar

On Wed, Aug 22, 2012 at 11:01 AM, Stack st...@duboce.net wrote:

 On Wed, Aug 22, 2012 at 10:41 AM, Jothikumar Ekanath kbmku...@gmail.com
 wrote:
  Hi Stack,
 
  Ok, i will cleanup everything and start from fresh. This time i will add
 one
  more data node
 
  so 1 hbase master and 2 regions. Zookeeper managed by hbase is started in
  region1. This is my configuration, i will start everything from the
 scratch
  and will see. If i get the same error, i will send the logs for review
 
  <configuration>

   <property>
    <name>hbase.rootdir</name>
    <value>hdfs://hadoop-namenode:9000/hbase</value>
    <description>The directory shared by RegionServers.</description>
   </property>

 For sure, hdfs is up?  You can put and get files into it?

   <property>
    <name>hbase.master</name>
    <value>hbase-master:6</value>
    <description>The host and port that the HBase master runs at.</description>
   </property>

 What is hbase-master and hbase-namenode?  Are they supposed to be the
 same machine?  DNS says they are?


   <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
    <description>The mode the cluster will be in. Possible values are
     false: standalone and pseudo-distributed setups with managed ZooKeeper;
     true: fully-distributed with unmanaged ZooKeeper Quorum (see hbase-env.sh)</description>
   </property>
   <property>
    <name>hbase.zookeeper.quorum</name>
    <value>hbase-region1</value>
 


 Where is hbase-region1?  Not same as hbase-master and hbase-namenode?


 
    <!--<value>hadoop-namenode,sv2lxixpdevdn01,sv2lxixpdevdn02,sv2lxixpdevdn03,sv2lxixpdevdn04</value>-->
   </property>
   <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/apps/zookeeper</value>
    <description>Property from ZooKeeper's config zoo.cfg. The directory
     where the snapshot is stored.</description>
   </property>
  </configuration>
 

 This is writable?

 St.Ack

 
  Thanks
  Jothikumar
 
 
  On Wed, Aug 22, 2012 at 8:59 AM, Stack st...@duboce.net wrote:
 
  On Wed, Aug 22, 2012 at 8:43 AM, Jothikumar Ekanath kbmku...@gmail.com
 
  wrote:
   Hi,
Thanks for the response, sorry i put this email in the dev
   space.
   My data replication is 2. and yes the region and master server
   connectivity
   is good
  
   Initially i started with 4 data nodes and 1 master, i faced the same
   problem. So i reduced the data nodes to 1 and wanted to test it. I see
   the
   same issue. I haven't tested the pseudo distribution mode, i can test
   that.
   But my objective is to test the full distributed mode and do some
   testing.
   I can send my configuration for review. Please let me know if i am
   missing
   any basic setup configuration.
  
 
  Be sure you start fresh.  Did you run it standalone previous?  Did you
  reset hbase tmp dir?  If not, try clearing /tmp before starting.  If
  still does not work, put up your logs where we can take a look see --
  and your config.
 
  St.Ack
 
 



Re: Problem - Bringing up the HBase cluster

2012-08-22 Thread Jothikumar Ekanath
Really surprising!

All of a sudden, the fresh restart worked like a charm. The master initialization
error is gone and I see the HMaster webapp clean and good. Thank you very
much for the help. I will continue my testing by adding more data nodes, and
I will post if I see any errors.

Thanks for the excellent support,
Jothikumar

On Wed, Aug 22, 2012 at 11:24 AM, Stack st...@duboce.net wrote:

 On Wed, Aug 22, 2012 at 11:16 AM, Jothikumar Ekanath kbmku...@gmail.com
 wrote:
  Do my configuration looks correct? do i need to set anything in the
  hbase-env.sh? I already set the manage_ZK to true,
 

 Yes.

 It must be something about your environment.

 Let's see your logs.

 St.Ack