The data currently in the local-mode HBase is as follows:
Took 0.0045 seconds
hbase:001:0> list
TABLE
books
1 row(s)
Took 0.6144 seconds
=> ["books"]
hbase:002:0> scan 'books'
ROW                                         COLUMN+CELL
 Godel, Escher, Bach                        column=analytics:views, timestamp=2023-04-17T20:02:28.755, value=820
 Godel, Escher, Bach                        column=info:author, timestamp=2023-04-17T20:02:16.779, value=Douglas Hofstadter
 Godel, Escher, Bach                        column=info:year, timestamp=2023-04-17T20:02:23.027, value=1979
 In Search of Lost Time                     column=analytics:views, timestamp=2023-04-17T20:02:09.784, value=3298
 In Search of Lost Time                     column=info:author, timestamp=2023-04-17T20:01:56.050, value=Marcel Proust
 In Search of Lost Time                     column=info:year, timestamp=2023-04-17T20:02:03.161, value=1922
2 row(s)
Took 0.2390 seconds
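
For reference, given this data and the column mapping in the quoted test program below, a successful df.show() would be expected to print roughly the following (illustrative only, not an actual run; the real run below fails before producing any output, and df.show() may truncate long values by default):

+----------------------+------------------+----+-----+
|                 title|            author|year|views|
+----------------------+------------------+----+-----+
|   Godel, Escher, Bach|Douglas Hofstadter|1979|  820|
|In Search of Lost Time|     Marcel Proust|1922| 3298|
+----------------------+------------------+----+-----+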

Resonance OpenSky <yangchunlin10061...@gmail.com> wrote on Wed, Apr 19, 2023 at 13:46:

> Current setup: HBase is 2.4.16, downloaded from the official site, with hbase-site.xml configured as follows:
> <configuration>
>   <property>
>     <name>hbase.cluster.distributed</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>hbase.rootdir</name>
>     <value>./tmp</value>
>   </property>
>   <property>
>     <name>hbase.unsafe.stream.capability.enforce</name>
>     <value>false</value>
>   </property>
> </configuration>
>
> Built with:
> mvn -Dspark.version=3.2.1 -Dscala.version=2.12.15 -Dscala.binary.version=2.12 -Dhbase.version=2.4.16 clean package
> which produced:
> hbase-spark-1.0.1-SNAPSHOT.jar
> hbase-spark-protocol-1.0.1-SNAPSHOT-sources.jar
> original-hbase-spark-protocol-shaded-1.0.1-SNAPSHOT.jar
> hbase-spark-it-1.0.1-SNAPSHOT.jar
> hbase-spark-protocol-shaded-1.0.1-SNAPSHOT.jar
> hbase-spark-protocol-1.0.1-SNAPSHOT.jar
> hbase-spark-protocol-shaded-1.0.1-SNAPSHOT-sources.jar
>
> Copied these jars into HBase's lib directory, and also into pyspark's lib directory.
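>
> (As an aside, an alternative to copying the jars into pyspark's lib is to hand them to Spark when the session is built, via the standard spark.jars setting. A minimal sketch; the jar directory below is an assumption, adjust it to wherever the build output actually lives:)
> ----------------------------------------------
> from pyspark.sql import SparkSession
>
> # Illustrative paths (assumptions); point these at the jars produced by the build above.
> jars = ",".join([
>     "/root/jars/hbase-spark-1.0.1-SNAPSHOT.jar",
>     "/root/jars/hbase-spark-protocol-shaded-1.0.1-SNAPSHOT.jar",
> ])
> spark = (SparkSession.builder
>          .master("local")
>          .config("spark.jars", jars)  # comma-separated list of extra jars
>          .getOrCreate())
> ----------------------------------------------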
>
> Started HBase: ./bin/start-hbase.sh
>
> Test program (pyspark):
> ----------------------------------------------
>
> from pyspark.sql import SparkSession
> from pyspark import SparkConf
>
> spark = SparkSession.builder.master("local").getOrCreate()
>
>
> df = spark.read.format('org.apache.hadoop.hbase.spark') \
>     .option('hbase.table','books') \
>     .option('hbase.columns.mapping', \
>             'title STRING :key, \
>             author STRING info:author, \
>             year STRING info:year, \
>             views STRING analytics:views') \
>     .option('hbase.use.hbase.context', False) \
>     .option('hbase.config.resources',
> 'file:///root/repo/my_hbase/hbase-site.xml') \
>     .option('hbase-push.down.column.filter', False) \
>     .load()
>
> df.show()
> ----------------------------------------------
> The hbase-site.xml it references contains:
> ----------------------------------------------
> <configuration>
>   <property>
>     <name>hbase.cluster.distributed</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>hbase.zookeeper.quorum</name>
>     <value>10.9.2.217</value>
>   </property>
>   <property>
>     <name>zookeeper.znode.parent</name>
>     <!--or /hbase-->
>     <value>/hbase</value>
>   </property>
> </configuration>
> ----------------------------------------------
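>
> Note that the backslash continuations in the test program leave long runs of spaces inside the hbase.columns.mapping value (visible in the mapping echoed in the execution result below). A sketch of assembling the same mapping without the embedded whitespace, in case the connector is sensitive to it (an untested assumption; it reuses the spark session from the test program):
> ----------------------------------------------
> # Same mapping as in the test program, built without continuation whitespace.
> mapping = ", ".join([
>     "title STRING :key",
>     "author STRING info:author",
>     "year STRING info:year",
>     "views STRING analytics:views",
> ])
> df = (spark.read.format("org.apache.hadoop.hbase.spark")
>       .option("hbase.table", "books")
>       .option("hbase.columns.mapping", mapping)
>       .load())
> ----------------------------------------------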
>
> Execution result:
> root@9412e1e1f853:~/repo/my_hbase# python myhbase.py
> Setting default log level to "WARN".
> To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
> 23/04/19 13:40:24 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> 23/04/19 13:40:25 WARN Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.
> title STRING :key,             author STRING info:author,             year STRING info:year,             views STRING analytics:views
> Traceback (most recent call last):
>   File "/root/repo/my_hbase/myhbase.py", line 7, in <module>
>     df = spark.read.format('org.apache.hadoop.hbase.spark') \
>   File "/usr/local/lib/python3.9/site-packages/pyspark/sql/readwriter.py", line 164, in load
>     return self._df(self._jreader.load())
>   File "/usr/local/lib/python3.9/site-packages/py4j/java_gateway.py", line 1321, in __call__
>     return_value = get_return_value(
>   File "/usr/local/lib/python3.9/site-packages/pyspark/sql/utils.py", line 111, in deco
>     return f(*a, **kw)
>   File "/usr/local/lib/python3.9/site-packages/py4j/protocol.py", line 326, in get_return_value
>     raise Py4JJavaError(
> py4j.protocol.Py4JJavaError: An error occurred while calling o32.load.
> : java.lang.NullPointerException
> at org.apache.hadoop.hbase.spark.HBaseRelation.<init>(DefaultSource.scala:138)
> at org.apache.hadoop.hbase.spark.DefaultSource.createRelation(DefaultSource.scala:69)
> at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:350)
> at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:274)
> at org.apache.spark.sql.DataFrameReader.$anonfun$load$3(DataFrameReader.scala:245)
> at scala.Option.getOrElse(Option.scala:189)
> at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:245)
> at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:174)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
> at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
> at py4j.Gateway.invoke(Gateway.java:282)
> at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
> at py4j.commands.CallCommand.execute(CallCommand.java:79)
> at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182)
> at py4j.ClientServerConnection.run(ClientServerConnection.java:106)
> at java.lang.Thread.run(Thread.java:748)
>
> Resonance OpenSky <yangchunlin10061...@gmail.com> wrote on Wed, Apr 19, 2023 at 13:14:
>
>> I plan to use the hbase-connectors Spark module. I have deployed HBase on my own development machine, running in local mode. How should I build the connector in this case?
>>
>> The README.md shows:
>> mvn -Dspark.version=3.1.2 -Dscala.version=2.12.10 -Dscala.binary.version=2.12 -Dhbase.version=2.4.8 -Dhadoop-three.version=3.2.0 clean install
>>
>> For HBase in local mode, how should -Dhadoop-three.version be specified?
>>
>
