[
https://issues.apache.org/jira/browse/PHOENIX-877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14240386#comment-14240386
]
James Taylor commented on PHOENIX-877:
--------------------------------------
[~alexdl] - please try the 3.2.2/4.2.2 release (just closed the vote on that,
so we'll be pushing out to maven soon). We've removed the dependency on the
native snappy libraries and instead introduced a dependency on a pure Java
solution. See PHOENIX-1455 for more info.
> Snappy native library is not available
> --------------------------------------
>
> Key: PHOENIX-877
> URL: https://issues.apache.org/jira/browse/PHOENIX-877
> Project: Phoenix
> Issue Type: Bug
> Affects Versions: 3.1, 4.1, 4.2, 3.2
> Reporter: alex kamil
> Assignee: Mujtaba Chohan
>
> Still getting this error with the most recent Phoenix v3.0 (I think it had
> been fixed in 2.2.3):
> "Snappy native library is not available" when running SELECT DISTINCT on a
> large table (>300k rows) in sqlline, on Linux 64-bit (Intel).
> To fix it, I had to add the following to incubator-phoenix/bin/sqlline.py:
> ' -Djava.library.path=/var/lib/hadoop/lib/native/Linux-amd64-64'+\
> The snappy binaries were installed with:
> sudo yum install snappy snappy-devel
> ln -sf /usr/lib64/libsnappy.so /var/lib/hadoop/lib/native/Linux-amd64-64/.
> ln -sf /usr/lib64/libsnappy.so /var/lib/hbase/lib/native/Linux-amd64-64/.
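The symlinks above only help if libsnappy.so actually ends up where the JVM will look. A minimal sanity-check sketch (the directory names are the ones used in this report, not canonical locations, and `find_libsnappy` is a hypothetical helper):

```python
import os

# Directories where the workaround symlinks libsnappy.so; adjust for your install.
NATIVE_DIRS = [
    "/var/lib/hadoop/lib/native/Linux-amd64-64",
    "/var/lib/hbase/lib/native/Linux-amd64-64",
]

def find_libsnappy(dirs):
    """Return the subset of dirs that actually contain libsnappy.so."""
    return [d for d in dirs if os.path.isfile(os.path.join(d, "libsnappy.so"))]

if __name__ == "__main__":
    hits = find_libsnappy(NATIVE_DIRS)
    if hits:
        print("libsnappy.so found in: " + ", ".join(hits))
    else:
        print("libsnappy.so not found; -Djava.library.path will not help")
```

Running this before editing sqlline.py tells you whether the library-path flag has anything to point at.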
> -------------------------------------------------------------------------------------------
> Edit (Dec 2014): still getting this error with Phoenix 3.1 and 3.2.
> Please eliminate this dependency or package it with the Phoenix core and
> client jars. Here are the exception and the steps to make it work:
> jdbc:phoenix:localhost> SELECT COUNT (DISTINCT ROWKEY) FROM table_with_1000000_rows;
> +------------------------+
> | DISTINCT_COUNT(ROWKEY) |
> +------------------------+
> java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()Z
> at org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy(Native Method)
> at org.apache.hadoop.io.compress.SnappyCodec.checkNativeCodeLoaded(SnappyCodec.java:62)
> at org.apache.hadoop.io.compress.SnappyCodec.getDecompressorType(SnappyCodec.java:185)
> at org.apache.hadoop.io.compress.CodecPool.getDecompressor(CodecPool.java:131)
> at org.apache.hadoop.hbase.io.hfile.Compression$Algorithm.getDecompressor(Compression.java:331)
> at org.apache.phoenix.expression.aggregator.DistinctValueWithCountClientAggregator.aggregate(DistinctValueWithCountClientAggregator.java:66)
> at org.apache.phoenix.expression.aggregator.ClientAggregators.aggregate(ClientAggregators.java:63)
> at org.apache.phoenix.iterate.GroupedAggregatingResultIterator.next(GroupedAggregatingResultIterator.java:75)
> at org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
> at org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:732)
> at sqlline.SqlLine$IncrementalRows.hasNext(SqlLine.java:2429)
> at sqlline.SqlLine$TableOutputFormat.print(SqlLine.java:2074)
> at sqlline.SqlLine.print(SqlLine.java:1735)
> at sqlline.SqlLine$Commands.execute(SqlLine.java:3683)
> at sqlline.SqlLine$Commands.sql(SqlLine.java:3584)
> at sqlline.SqlLine.dispatch(SqlLine.java:821)
> at sqlline.SqlLine.begin(SqlLine.java:699)
> at sqlline.SqlLine.mainWithInputRedirection(SqlLine.java:441)
> at sqlline.SqlLine.main(SqlLine.java:424)
> To fix this, several configuration files need to be updated to enable snappy
> compression in Phoenix, Hadoop, and HBase:
> vim phoenix/hadoop2/bin/sqlline.py
> extrajars="/etc/hadoop/conf:/etc/hbase/conf:/etc/zookeeper/conf:/usr/lib/hbase/hbase-0.94.15-cdh4.7.0-security.jar:/opt/app/extlib/hadoop-common-2.0.0-cdh4.7.0.jar:/usr/lib/zookeeper/zookeeper-3.4.5-cdh4.7.0.jar:/opt/app/extlib/hadoop-auth-2.0.0-cdh4.7.0.jar:/opt/app/extlib/commons-collections-3.2.1.jar:/opt/app/phoenix/common/phoenix-core-3.1.0.jar:/opt/app/extlib/snappy-java-1.1.1.3.jar"
> someflags="-Djava.library.path=/usr/lib/hadoop/lib/native"
> java_cmd = 'java '+ someflags+' -cp ".' + os.pathsep + extrajars + os.pathsep + phoenix_utils.phoenix_client_jar + \
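The sqlline.py change above boils down to two things: put the Hadoop/HBase conf dirs and jars ahead of the Phoenix client jar on the classpath, and point java.library.path at the directory holding the native snappy library. A simplified, self-contained sketch of that assembly (the paths are examples taken from this report, and `build_java_cmd` is a hypothetical helper, not part of sqlline.py):

```python
import os

def build_java_cmd(extrajars, native_dir, client_jar):
    """Assemble the java command line the way the patched sqlline.py does:
    '.' plus the extra jars/conf dirs first, then the Phoenix client jar,
    plus -Djava.library.path for the native snappy library."""
    classpath = os.pathsep.join(["."] + extrajars + [client_jar])
    return 'java -Djava.library.path=%s -cp "%s"' % (native_dir, classpath)

cmd = build_java_cmd(
    ["/etc/hadoop/conf", "/etc/hbase/conf"],
    "/usr/lib/hadoop/lib/native",
    "/opt/app/phoenix/common/phoenix-core-3.1.0.jar",
)
print(cmd)
```

Classpath order matters here: the conf dirs must precede the Phoenix jar so that the cluster's codec configuration wins.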
> vim systemd/hbase-regionserver.service
> ExecStartPre=/usr/bin/mkdir -p /usr/lib/hbase/lib/native/Linux-amd64-64
> ExecStartPre=/usr/bin/ln -sf /usr/lib64/libsnappy.so /usr/lib/hbase/lib/native/Linux-amd64-64/.
> ExecStartPre=/usr/bin/chown -R hbase:hbase /usr/lib/hbase/lib
> vim systemd/hadoop-hdfs-datanode.service
> ExecStartPre=/usr/bin/mkdir -p /usr/lib/hadoop/lib/native/Linux-amd64-64
> ExecStartPre=/usr/bin/ln -sf /usr/lib64/libsnappy.so /usr/lib/hadoop/lib/native/Linux-amd64-64/.
> ExecStartPre=/usr/bin/chown -R hdfs:hdfs /usr/lib/hadoop/lib
> vim hadoop/core-site.xml
> <property>
> <name>io.compression.codecs</name>
> <value>org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.BZip2Codec,org.apache.hadoop.io.compress.SnappyCodec,com.hadoop.compression.lzo.LzoCodec,com.hadoop.compression.lzo.LzopCodec</value>
> </property>
> <property>
> <name>io.compression.codec.lzo.class</name>
> <value>com.hadoop.compression.lzo.LzoCodec</value>
> </property>
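A quick way to confirm that SnappyCodec really made it into io.compression.codecs is to parse core-site.xml directly. A sketch using only the standard library (`codecs_from_core_site` is a hypothetical helper; the inline XML is a trimmed example, not the full config above):

```python
import xml.etree.ElementTree as ET

def codecs_from_core_site(xml_text):
    """Return the codec class names configured under io.compression.codecs
    in a core-site.xml document, or [] if the property is absent."""
    root = ET.fromstring(xml_text)
    for prop in root.findall("property"):
        if prop.findtext("name") == "io.compression.codecs":
            return [c.strip() for c in prop.findtext("value").split(",")]
    return []

CORE_SITE = """<configuration>
  <property>
    <name>io.compression.codecs</name>
    <value>org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.SnappyCodec</value>
  </property>
</configuration>"""

print(codecs_from_core_site(CORE_SITE))
```

If org.apache.hadoop.io.compress.SnappyCodec is missing from the returned list, the UnsatisfiedLinkError fix above cannot take effect regardless of the native-library setup.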
> vim hbase/hbase-site.xml
> <property>
> <name>hbase.regionserver.codecs</name>
> <value>snappy</value>
> </property>
> vim hadoop/hadoop-env.sh
> export HADOOP_HOME=/usr/lib/hadoop
> export JAVA_LIBRARY_PATH=$HADOOP_HOME/lib/native/Linux-amd64-64:$HADOOP_HOME/lib/native
> export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$HADOOP_MAPRED_HOME:$HADOOP_HDFS_HOME:$HADOOP_YARN_HOME:$HADOOP_HOME:$HADOOP_CONF_DIR:$YARN_CONF_DIR:$JSVC_HOME:$HADOOP_HOME/lib/native/Linux-amd64-64:$HADOOP_HOME/lib/native
> vim hbase/hbase-env.sh
> export HBASE_HOME=/usr/lib/hbase
> export HBASE_LIBRARY_PATH=$HBASE_HOME/lib/native/Linux-amd64-64:$HBASE_HOME/lib/native
> export HBASE_CLASSPATH_PREFIX=/opt/app/phoenix/common/phoenix-core-3.1.0.jar
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)