Thanks Adam, that worked.  Accumulo starts, but when I try the shell I get:

ERROR: unable obtain instance id at file:/accumulo/instance_id

$ hadoop fs -ls /


The listing shows the id file, and according to accumulo-site.xml the
Hadoop configuration directory is on the Accumulo classpath.
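
To rule out any fs.defaultFS confusion, I can also try listing the path
with a fully qualified URI (host and port taken from your note; the
/accumulo path is from the error message):

$ hadoop fs -ls hdfs://haz0-m:8020/accumulo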

Is the shell looking in the local file system or in HDFS?  I never had this
problem until I moved to Google's Dataproc.
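
In case it helps, here is my guess at what instance.volumes should look
like, with an explicit hdfs:// scheme so nothing can fall back to the
local file system (host and port from your note; the /accumulo path
matches the error above):

  <property>
    <name>instance.volumes</name>
    <value>hdfs://haz0-m:8020/accumulo</value>
  </property>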

Thanks

On Wed, Jan 31, 2018 at 5:06 PM, Adam J. Shook <adamjsh...@gmail.com> wrote:

> Yes, it does use RPC to talk to HDFS.  You will need to update the value
> of instance.volumes in accumulo-site.xml to reference this address,
> haz0-m:8020, instead of the default localhost:9000.
>
> --Adam
>
> On Wed, Jan 31, 2018 at 4:45 PM, Geoffry Roberts <threadedb...@gmail.com>
> wrote:
>
>> I have a situation where Accumulo cannot find Hadoop.
>>
>> Hadoop is running and I can access hdfs from the cli.
>> Zookeeper also says it is ok and I can log in using the client.
>> Accumulo init is failing with a connection refused for localhost:9000.
>>
>> netstat shows nothing listening on 9000.
>>
>> Now the plot thickens...
>>
>> The Hadoop I am running is Google's Dataproc, so the installation is not
>> my own.  I have already found a number of differences.
>>
>> Here's my question:  Does Accumulo use RPC to talk to Hadoop? I ask
>> because of things like this:
>>
>> From hdfs-site.xml
>>
>>   <property>
>>     <name>dfs.namenode.rpc-address</name>
>>     <value>haz0-m:8020</value>
>>     <description>
>>       RPC address that handles all clients requests. If empty then we'll
>>       get the value from fs.default.name. The value of this property will
>>       take the form of hdfs://nn-host1:rpc-port.
>>     </description>
>>   </property>
>>
>> Or does it use something else?
>>
>> Thanks
>> --
>> There are ways and there are ways,
>>
>> Geoffry Roberts
>>
>
>


-- 
There are ways and there are ways,

Geoffry Roberts
