Thanks JD and Shumin for the responses.
I realized that this chain is getting longer and longer and I've tried
many different things in between. I will clean out all previous
installs and start a fresh one with the newest version, following your
instructions step by step. Hopefully that will
Just to report back (in case someone else runs into similar issues
during install) - I noticed that one of my friends uses Sun's JDK (and
I was using OpenJDK). I then replaced my JDK and started a new install
with Hadoop 1.0.3 + HBase 0.94.1. Now it works on my MacBook!
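For anyone comparing setups, a quick way to tell which JDK produced a given `java -version` banner (a sketch; the sample banner strings below are assumptions for illustration, not output from this machine):

```shell
# Sketch: classify a `java -version` banner as OpenJDK or Sun/Oracle JDK
# by looking for the "OpenJDK" marker anywhere in the output.
classify_jdk() {
  case "$1" in
    *OpenJDK*) echo "OpenJDK" ;;
    *)         echo "Sun/Oracle JDK" ;;
  esac
}

# Hypothetical banners for illustration:
classify_jdk 'OpenJDK Runtime Environment (build 1.6.0_24-b24)'       # → OpenJDK
classify_jdk 'Java(TM) SE Runtime Environment (build 1.6.0_35-b10)'   # → Sun/Oracle JDK
```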
Jason
On Wed, Sep 19, 2012
I've done some more research but still can't start the HMaster node
(with similar error). Here is what I found in the Master Server log:
Tue Sep 18 11:50:22 EDT 2012 Starting master on Jasons-MacBook-Pro.local
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
Which Hadoop version are you using, exactly? I see you are setting
dfs.datanode.data.dir, which is a post-1.0 setting (from what I can
tell by googling, since I didn't recognize it), but you are using a
hadoop-examples-1.0.3.jar file that seems to imply you are on 1.0.3,
which would probably not pick
Hi J-D,
I am using hadoop 1.0.3 - I was using dfs.datanode.data.dir last week
but that had already been updated (someone else pointed that out)
before I ran this test today.
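For reference, on Hadoop 1.0.x the datanode storage property is dfs.data.dir; dfs.datanode.data.dir is its post-1.0 replacement and would be silently ignored by 1.0.3. A minimal hdfs-site.xml sketch (the directory path is a placeholder assumption):

```xml
<!-- hdfs-site.xml for Hadoop 1.0.x; /tmp/hadoop-data is a placeholder path -->
<configuration>
  <property>
    <name>dfs.data.dir</name>
    <value>/tmp/hadoop-data/dfs/data</value>
  </property>
</configuration>
```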
thanks,
Jason
On Tue, Sep 18, 2012 at 1:05 PM, Jean-Daniel Cryans jdcry...@apache.org wrote:
Which Hadoop version are
Hi Jason,
In a pseudo-distributed environment, you should also start ZooKeeper
and the HBase region server. I don't see them in your process list.
$ jps
274 NameNode
514 JobTracker
1532 HMaster
1588 Jps
604 TaskTracker
450 SecondaryNameNode
362 DataNode
$ ./bin/hbase shell
Trace/BPT trap: 5
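The missing daemons can be spotted mechanically. A sketch that compares a jps listing like the one above against the set expected for a pseudo-distributed Hadoop + HBase install (the expected list is an assumption based on this thread; HQuorumPeer is the ZooKeeper instance that HBase manages itself):

```shell
# Sketch: report which expected daemons are absent from a jps listing.
jps_output="274 NameNode
514 JobTracker
1532 HMaster
604 TaskTracker
450 SecondaryNameNode
362 DataNode"

for daemon in NameNode DataNode HMaster HRegionServer HQuorumPeer; do
  if ! printf '%s\n' "$jps_output" | grep -qw "$daemon"; then
    echo "missing: $daemon"
  fi
done
# → missing: HRegionServer
# → missing: HQuorumPeer
```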
Shumin
On
On Tue, Sep 18, 2012 at 10:21 AM, Jason Huang jason.hu...@icare.com wrote:
I am using hadoop 1.0.3 - I was using dfs.datanode.data.dir last week
but that had already been updated (someone else pointed that out)
before I ran this test today.
I see. In the future it'd be best if you specify
I've done several reinstallations and Hadoop seems to be fine. However, I
still get a similar error when I try to access the HBase shell.
$ jps
274 NameNode
514 JobTracker
1532 HMaster
1588 Jps
604 TaskTracker
450 SecondaryNameNode
362 DataNode
$ ./bin/hbase shell
Trace/BPT trap: 5
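"Trace/BPT trap: 5" is OS X reporting that the JVM process itself crashed (signal 5, SIGTRAP), not an HBase-level error, so a JVM crash report is the place to look next. A sketch for locating them; the search locations are assumptions (HotSpot writes hs_err_pid*.log in the working directory, and OS X keeps java crash reports under ~/Library/Logs/DiagnosticReports):

```shell
# Sketch: list any JVM crash reports left behind after a "Trace/BPT trap".
find_jvm_crashes() {
  for f in "$@"; do
    # unmatched globs stay literal, so -e filters them out
    [ -e "$f" ] && echo "crash report: $f"
  done
  return 0
}

find_jvm_crashes hs_err_pid*.log \
  "$HOME"/Library/Logs/DiagnosticReports/java_*.crash
```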
I looked at the
Thanks Marcos.
I applied the change you mentioned but it still gave me an error. I then
stopped everything, restarted Hadoop, and tried to run a simple MapReduce
job with the provided example jar
(./bin/hadoop jar hadoop-examples-1.0.3.jar pi 10 100).
That gave me an error of:
12/09/14 15:59:50 INFO
Regards, Jason.
Answers in line
On 09/13/2012 06:42 PM, Jason Huang wrote:
Hello,
I am trying to set up HBase in pseudo-distributed mode on my MacBook.
I was able to install Hadoop and HBase and start the nodes.
$ jps
5417 TaskTracker
5083 NameNode
5761 HRegionServer
5658 HMaster
6015 Jps
Hi,
A non-root user cannot see all the Hadoop processes. Running jps as
root shows all of them:
[root@hadoop1 ~]# jps
17452 SecondaryNameNode
18266 Main
7759 Jps
32095 QuorumPeerMain
17108 JobTracker
16955 TaskTracker
17566 HMaster
17177 NameNode
17765 HRegionServer
19424 ThriftServer