.
Best,
Zhefu Peng
------------------ Original Message ------------------
??: "Gopal Vijayaraghavan";
: 2018??7??24??(??) 10:53
??: "user@hive.apache.org";
????: Re: Using snappy compresscodec in hive
> "TBLPROPERTIES ("orc.compress"=&quo
.
Best,
Zhefu Peng
-- --
??: "Gopal Vijayaraghavan";
: 2018??7??24??(??) 10:53
??: "user@hive.apache.org";
????: Re: Using snappy compresscodec in hive
> "TBLPROPERTIES ("orc.compress"=&quo
> "TBLPROPERTIES ("orc.compress"="Snappy"); "
That doesn't use the Hadoop SnappyCodec; it uses a pure-Java version (which is
slower, but always works).
The Hadoop SnappyCodec needs libsnappy installed on all hosts.
Cheers,
Gopal
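
A minimal sketch of the distinction, assuming made-up table names: ORC's
"orc.compress"="SNAPPY" is handled inside the ORC writer in pure Java, so it
works without libsnappy; the Hadoop SnappyCodec is only involved when output
compression is explicitly routed through it, for example for text output, and
that path does need libsnappy on every node (hadoop checknative shows whether
the native library is visible to Hadoop):

    -- Hadoop SnappyCodec path: needs libsnappy on all hosts.
    -- src_table and logs_text are placeholders, for illustration only.
    SET hive.exec.compress.output=true;
    SET mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.SnappyCodec;
    CREATE TABLE logs_text STORED AS TEXTFILE AS SELECT * FROM src_table;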
Hi,
Here is something that has been confusing me these days: I have not installed
or built Snappy on my Hadoop cluster, but when I tested and compared the
compression ratios of the Parquet and ORC storage formats, I was still able to
set the compression codec for both formats, for example using
TBLPROPERTIES ("orc.compress"="Snappy").
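
The kind of DDL being referred to is presumably along these lines; the table
and column names below are made up, but "orc.compress" and
"parquet.compression" are the standard Hive table properties for the two
formats:

    -- hypothetical tables for the compression-ratio comparison
    CREATE TABLE logs_orc (ts BIGINT, msg STRING)
    STORED AS ORC
    TBLPROPERTIES ("orc.compress"="SNAPPY");

    CREATE TABLE logs_parquet (ts BIGINT, msg STRING)
    STORED AS PARQUET
    TBLPROPERTIES ("parquet.compression"="SNAPPY");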