Folks,
I believe I have followed all the directions for enabling namespace
mapping, as well as the extra steps (classpath additions) required to use
the MapReduce bulk load utility, but I am still running into this error.
I am running a Hortonworks cluster with both HDP 3.0.1 and HDF components.
Here is what I have tried:
* Checked that the proper hbase-site.xml (in my case:
/etc/hbase/3.0.1.0-187/0/hbase-site.xml) is being referenced when
launching the MapReduce utility:
...
  <property>
    <name>phoenix.schema.isNamespaceMappingEnabled</name>
    <value>true</value>
  </property>
  <property>
    <name>phoenix.schema.mapSystemTablesToNamespace</name>
    <value>true</value>
  </property>
...
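To double-check that, I grep the raw XML for both flags (just a quick sanity check on the file's contents; it does not prove the client actually loads this copy at runtime):

```shell
# Print each namespace-mapping property together with the <value> line
# that follows it; both should show <value>true</value>.
grep -A 2 'phoenix.schema.isNamespaceMappingEnabled' /etc/hbase/3.0.1.0-187/0/hbase-site.xml
grep -A 2 'phoenix.schema.mapSystemTablesToNamespace' /etc/hbase/3.0.1.0-187/0/hbase-site.xml
```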
* Added the appropriate classpath additions to the hadoop jar command
(ZooKeeper quorum hostnames and the data directory have been changed to
remove my corporate network info):
HADOOP_CLASSPATH=/usr/hdp/3.0.1.0-187/hbase/lib/hbase-protocol.jar:/etc/hbase/3.0.1.0-187/0/hbase-site.xml \
hadoop jar /usr/hdp/3.0.1.0-187/phoenix/phoenix-5.0.0.3.0.1.0-187-client.jar \
  org.apache.phoenix.mapreduce.CsvBulkLoadTool --table MYTABLE --input /ingest/MYCSV -z zk1,zk2,zk3 -g
...
18/11/27 15:31:48 INFO zookeeper.ReadOnlyZKClient: Close zookeeper connection 0x1d58d65f to master-1.punch.datareservoir.net:2181,master-2.punch.datareservoir.net:2181,master-3.punch.datareservoir.net:2181
18/11/27 15:31:48 INFO log.QueryLoggerDisruptor: Shutting down QueryLoggerDisruptor..
Exception in thread "main" java.sql.SQLException: ERROR 726 (43M10): Inconsistent namespace mapping properties. Cannot initiate connection as SYSTEM:CATALOG is found but client does not have phoenix.schema.isNamespaceMappingEnabled enabled
    at org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:494)
    at org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:1113)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl.createTable(ConnectionQueryServicesImpl.java:1501)
    at org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:2740)
    at org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:1114)
    at org.apache.phoenix.compile.CreateTableCompiler$1.execute(CreateTableCompiler.java:192)
    at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:408)
    at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:391)
    at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
    at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:390)
    at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:378)
    at org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1806)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2569)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2532)
    at org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2532)
    at org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255)
    at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:150)
    at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
    at java.sql.DriverManager.getConnection(DriverManager.java:664)
    at java.sql.DriverManager.getConnection(DriverManager.java:208)
    at org.apache.phoenix.util.QueryUtil.getConnection(QueryUtil.java:400)
    at org.apache.phoenix.util.QueryUtil.getConnection(QueryUtil.java:392)
    at org.apache.phoenix.mapreduce.AbstractBulkLoadTool.loadData(AbstractBulkLoadTool.java:206)
    at org.apache.phoenix.mapreduce.AbstractBulkLoadTool.run(AbstractBulkLoadTool.java:180)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
    at org.apache.phoenix.mapreduce.CsvBulkLoadTool.main(CsvBulkLoadTool.java:109)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:318)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:232)
18/11/27 15:31:48 INFO zookeeper.ZooKeeper: Session: 0x3672eebffa800c8 closed
18/11/27 15:31:48 INFO zookeeper.ClientCnxn: EventThread shut down
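One thing I am starting to suspect (a guess, not something I have confirmed): Hadoop's Configuration loads *-site.xml files as classpath *resources*, and resource lookup only searches directories and jars, so listing the hbase-site.xml file path itself in HADOOP_CLASSPATH may be silently ignored. The variant I plan to try next points at the conf directory instead:

```shell
# Untested hunch: classpath resource lookup only searches directories and
# jars, so a literal path to the xml file may be silently ignored.
# Point the classpath at the conf *directory* instead:
export HADOOP_CLASSPATH=/usr/hdp/3.0.1.0-187/hbase/lib/hbase-protocol.jar:/etc/hbase/3.0.1.0-187/0
echo "$HADOOP_CLASSPATH"
# ...then re-run the same CsvBulkLoadTool command as above.
```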
* Also tried the other recommended option:
HADOOP_CLASSPATH=$(hbase mapredcp):/etc/hbase/3.0.1.0-187/0/hbase-site.xml \
hadoop jar /usr/hdp/3.0.1.0-187/phoenix/phoenix-5.0.0.3.0.1.0-187-client.jar \
  org.apache.phoenix.mapreduce.CsvBulkLoadTool --table MYTABLE --input /ingest/MYCSV -z zk1,zk2,zk3 -g
...
(output identical to the first attempt: the same ZooKeeper shutdown messages and the same ERROR 726 (43M10) inconsistent namespace mapping stack trace)
* As well as the recommended approach in the HBase reference guide
linked from the Phoenix docs:
HADOOP_CLASSPATH=`${HBASE_HOME}/bin/hbase classpath` \
hadoop jar /usr/hdp/3.0.1.0-187/phoenix/phoenix-5.0.0.3.0.1.0-187-client.jar \
  org.apache.phoenix.mapreduce.CsvBulkLoadTool --table MYTABLE --input /ingest/MYCSV -z zk1,zk2,zk3 -g
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/commons/cli/DefaultParser
    at org.apache.phoenix.mapreduce.AbstractBulkLoadTool.parseOptions(AbstractBulkLoadTool.java:128)
    at org.apache.phoenix.mapreduce.AbstractBulkLoadTool.run(AbstractBulkLoadTool.java:176)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
    at org.apache.phoenix.mapreduce.CsvBulkLoadTool.main(CsvBulkLoadTool.java:109)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:318)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:232)
Caused by: java.lang.ClassNotFoundException: org.apache.commons.cli.DefaultParser
    at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    ... 11 more
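In case it helps diagnose that last failure: DefaultParser only exists in commons-cli 1.3 and later, so my guess (unverified) is that `hbase classpath` puts an older commons-cli 1.2 ahead of the copy bundled in the Phoenix client jar. Listing the Phoenix jar first might dodge the shadowing:

```shell
# Untested workaround sketch: put the Phoenix client jar at the front of
# HADOOP_CLASSPATH so its bundled commons-cli (>= 1.3, which has
# DefaultParser) is found before the older copy from `hbase classpath`.
PHOENIX_JAR=/usr/hdp/3.0.1.0-187/phoenix/phoenix-5.0.0.3.0.1.0-187-client.jar
export HADOOP_CLASSPATH="$PHOENIX_JAR:$(${HBASE_HOME:-/usr/hdp/3.0.1.0-187/hbase}/bin/hbase classpath 2>/dev/null)"
echo "$HADOOP_CLASSPATH"
```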
* And finally, here is what the tables look like in both the HBase shell
and sqlline:
hbase shell
HBase Shell
Use "help" to get list of supported commands.
Use "exit" to quit this interactive shell.
Version 2.0.0.3.0.1.0-187, re9fcf450949102de5069b257a6dee469b8f5aab3,
Wed Sep 19 10:16:35 UTC 2018
Took 0.0016 seconds
hbase(main):001:0> list
TABLE
ATLAS_ENTITY_AUDIT_EVENTS
MYTABLE
SYSTEM:CATALOG
SYSTEM:FUNCTION
SYSTEM:LOG
SYSTEM:MUTEX
SYSTEM:SEQUENCE
SYSTEM:STATS
atlas_janus
9 row(s)
Took 0.6114 seconds
=> ["ATLAS_ENTITY_AUDIT_EVENTS", "MYTABLE", "SYSTEM:CATALOG",
"SYSTEM:FUNCTION", "SYSTEM:LOG", "SYSTEM:MUTEX", "SYSTEM:SEQUENCE",
"SYSTEM:STATS", "atlas_janus"]
phoenix-sqlline master-1.punch.datareservoir.net
Setting property: [incremental, false]
Setting property: [isolation, TRANSACTION_READ_COMMITTED]
issuing: !connect jdbc:phoenix:mysrv none none org.apache.phoenix.jdbc.PhoenixDriver
Connecting to jdbc:phoenix:mysrv
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/3.0.1.0-187/phoenix/phoenix-5.0.0.3.0.1.0-187-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/3.0.1.0-187/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
18/11/27 15:45:51 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Connected to: Phoenix (version 5.0)
Driver: PhoenixEmbeddedDriver (version 5.0)
Autocommit status: true
Transaction isolation: TRANSACTION_READ_COMMITTED
Building list of tables and columns for tab-completion (set fastconnect to true to skip)...
144/144 (100%) Done
Done
sqlline version 1.2.0
0: jdbc:phoenix:mysrv> !tables
(wide sqlline output trimmed to the readable columns; the others were empty)
+------------+--------------+-------------+---------------+-----------------+---------------+
| TABLE_CAT  | TABLE_SCHEM  | TABLE_NAME  | TABLE_TYPE    | IMMUTABLE_ROWS  | SALT_BUCKETS  |
+------------+--------------+-------------+---------------+-----------------+---------------+
|            | SYSTEM       | CATALOG     | SYSTEM TABLE  | false           | null          |
|            | SYSTEM       | FUNCTION    | SYSTEM TABLE  | false           | null          |
|            | SYSTEM       | LOG         | SYSTEM TABLE  | true            | 32            |
|            | SYSTEM       | SEQUENCE    | SYSTEM TABLE  | false           | null          |
|            | SYSTEM       | STATS       | SYSTEM TABLE  | false           | null          |
|            |              | MYTABLE     | TABLE         | false           | 5             |
+------------+--------------+-------------+---------------+-----------------+---------------+