Yup, a "volume" (a URI) needs both some scheme+host (hdfs://mycluster)
and a path on that filesystem (/accumulo).
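Spelled out as an accumulo-site.xml property, a complete volume entry would look roughly like this (using the hdfs://mycluster/accumulo value from this thread; adjust the nameservice and path for your cluster):

```xml
<!-- A volume must carry scheme + authority AND a path:
     "hdfs://mycluster" alone fails; "hdfs://mycluster/accumulo" works. -->
<property>
  <name>instance.volumes</name>
  <value>hdfs://mycluster/accumulo</value>
</property>
```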
I'll open up a JIRA because that error is rather unintuitive.
On 8/11/14, 7:36 AM, craig w wrote:
Looks like I needed to add "/accumulo" to the instance.volumes property:
On Mon, Aug 11, 2014 at 6:50 AM, craig w <[email protected]> wrote:

So I wiped my cluster, started with a non-HA cluster, put in data. I
stopped everything, reconfigured for HA, this time having the
following settings in accumulo-site.xml:
instance.volumes: hdfs://mycluster
instance.volumes.replacements: hdfs://somehostname:9000/accumulo
hdfs://mycluster/accumulo
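As the reply at the top of the thread notes, the fix was appending the /accumulo path to the volume. A sketch of what the corrected pair of properties would look like (values taken from this thread):

```xml
<!-- instance.volumes needs the full path, not just the nameservice -->
<property>
  <name>instance.volumes</name>
  <value>hdfs://mycluster/accumulo</value>
</property>
<!-- each replacement is an "old-URI new-URI" pair, separated by a space -->
<property>
  <name>instance.volumes.replacements</name>
  <value>hdfs://somehostname:9000/accumulo hdfs://mycluster/accumulo</value>
</property>
```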
When starting Accumulo, I'm getting the following error:
java.lang.IllegalArgumentException: Can not create a Path from an
empty string
at org.apache.hadoop.fs.Path.checkPathArg(Path.java:127)
at org.apache.hadoop.fs.Path.<init>(Path.java:135)
at org.apache.hadoop.fs.Path.<init>(Path.java:89)
at
org.apache.accumulo.core.volume.VolumeImpl.prefixChild(VolumeImpl.java:102)
at
org.apache.accumulo.server.ServerConstants.getInstanceIdLocation(ServerConstants.java:133)
at
org.apache.accumulo.server.Accumulo.getAccumuloInstanceIdPath(Accumulo.java:102)
Any idea?
On Fri, Aug 8, 2014 at 11:37 AM, Keith Turner <[email protected]> wrote:
You can run
accumulo admin volumes --list
to check and see if things are as expected. This command will
list all unique volumes that occur in Accumulo's metadata.
On Fri, Aug 8, 2014 at 9:42 AM, craig w <[email protected]> wrote:
instance.dfs.uri appears to be deprecated in 1.6.x
I added the following two changes to accumulo-site.xml, pushed
that to all tablet servers, and restarted Accumulo, and things
look good:
set instance.volumes to: hdfs://mycluster
set instance.volumes.replacements to:
hdfs://somehostname:9000/accumulo hdfs://mycluster/accumulo
Thanks.
On Fri, Aug 8, 2014 at 9:31 AM, <[email protected]> wrote:
I believe the problem you are running into is that because
instance.dfs.uri was not specified, fs.defaultFS was used to
write entries to the accumulo.root and accumulo.metadata
tables. I suggest doing the following:
Update to the latest version of Accumulo 1.6.1-SNAPSHOT
set instance.dfs.uri to: hdfs://mycluster
set instance.volumes to: hdfs://mycluster
set instance.volumes.replacements to:
hdfs://somehostname:9000/accumulo hdfs://mycluster/accumulo
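Put into accumulo-site.xml form, the three suggested settings would look roughly like this. Note that the resolution at the top of the thread shows a volume URI must also include a path (e.g. hdfs://mycluster/accumulo), so the bare instance.volumes value here reflects the suggestion as written, not the final working config:

```xml
<property>
  <name>instance.dfs.uri</name>
  <value>hdfs://mycluster</value>
</property>
<property>
  <name>instance.volumes</name>
  <!-- per the top of the thread, a path is also required, e.g. hdfs://mycluster/accumulo -->
  <value>hdfs://mycluster</value>
</property>
<property>
  <name>instance.volumes.replacements</name>
  <value>hdfs://somehostname:9000/accumulo hdfs://mycluster/accumulo</value>
</property>
```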
------------------------------------------------------------------------
*From: *"craig w" <[email protected]>
*To: *[email protected]
*Sent: *Friday, August 8, 2014 9:13:02 AM
*Subject: *Re: accumulo 1.6 and HDFS non-HA conversion
to HDFS HA
In $HADOOP_CONF_DIR/core-site.xml I used to have:
<property>
<name>fs.default.name</name>
<value>hdfs://somehostname:9000</value>
</property>
When going to HDFS HA, I removed that property and have:
<property>
<name>fs.defaultFS</name>
<value>hdfs://mycluster</value>
</property>
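For context, "hdfs://mycluster" is a logical nameservice that HDFS clients resolve through the HA settings in hdfs-site.xml. A minimal sketch, following the standard Hadoop HA layout (the namenode IDs and hostnames here are assumptions, not from this thread):

```xml
<!-- Logical nameservice matching fs.defaultFS (hdfs://mycluster) -->
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<!-- hypothetical namenode IDs -->
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<!-- hypothetical hostnames; 8020 is the default HDFS RPC port -->
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>namenode1.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>namenode2.example.com:8020</value>
</property>
<!-- Client-side failover proxy; needed by any HDFS client
     (including Accumulo) to resolve "mycluster" -->
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
```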
In the accumulo-site.xml I don't have instance.volumes
or instance.dfs.uri.
On Tue, Aug 5, 2014 at 1:41 PM, Keith Turner
<[email protected]> wrote:
Did your HDFS URI change, and are the errors you are
seeing from connections to the old HDFS URI? If so, you
may need to configure instance.volumes.replacements
to replace the old URI in Accumulo's metadata.
On Tue, Aug 5, 2014 at 1:06 PM, craig w
<[email protected]> wrote:
I've set up an Accumulo 1.6 cluster with Hadoop
2.4.0 (with a secondary namenode). I wanted to
convert the secondary namenode to be a standby
(hence HDFS HA).
After getting HDFS HA up and making sure the
hadoop configuration files were accessible by
Accumulo, I started up Accumulo. I noticed some
reports of tablet servers failing to connect,
however, they were failing to connect to HDFS
over port 9000. That port is not configured/used
with HDFS HA so I'm unsure why they are still
trying to talk to HDFS using the old configuration.
Any thoughts or ideas? I know Accumulo 1.6 works
with HDFS HA, but I'm curious if the tests have
ever been run against a non-HA cluster that was
converted to HA (with data in it).
--
https://github.com/mindscratch
https://www.google.com/+CraigWickesser
https://twitter.com/mind_scratch
https://twitter.com/craig_links