Hive 0.12 on Hadoop 2
I have a table with a mix of STRING and DECIMAL fields that is stored as
ORC with no compression or partitions.
I wanted to create a copy of this table with CTAS, stored also as ORC.
The job fails with NumberFormatException in the HiveDecimal class, but I
can't narrow it down
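For reference, the CTAS described above has this general shape (table and column names here are placeholders, not the original schema):

```sql
-- Hypothetical table names; the point is the STORED AS ORC clause on a CTAS.
CREATE TABLE orders_copy
STORED AS ORC
AS SELECT * FROM orders;
```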
Kristopher Kane <kkane.l...@gmail.com> wrote:
> Hive 0.12 on Hadoop 2
> I have a table with a mix of STRING and DECIMAL fields that is stored as
> ORC with no compression or partitions.
> I wanted to create a copy of this table with CTAS, stored also as ORC.
> The job fails with NumberFormatException
Clay,
Keep in mind that setting this to false in the global hive-site.xml means
you will not do any client-side hash table generation and will miss
out on optimizations for other joins. You should set this in your query
directly. Another option is to increase the client-side heap to allow
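The per-query override suggested above can be sketched as follows (assuming the property under discussion is hive.auto.convert.join, which controls map-join conversion and the client-side hash table build; the query itself is a placeholder):

```sql
-- Disable map-join conversion for this session only,
-- leaving the global hive-site.xml default untouched.
SET hive.auto.convert.join=false;
SELECT a.id, b.val
FROM big_a a
JOIN big_b b ON a.id = b.id;
```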
Is there a list of possible return codes as logged by the
TempletonJobController's map task?
I'm getting an RC of 6 for a pig+hcat job that works from the CLI:
o.a.h.hcatalog.templeton.tool.launchMapper: templeton: Writing exit value 6
to...
-Kris
Is there a variable that can be used for the user principal in scratchdir
instead of the JVM user.name?
https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-hive.exec.scratchdir
Kris
I see that Hive (1.2.1) doesn't seem to know that an Avro SerDe table is
compressed ('describe extended' shows nothing about it) when compression
was set with the following:
SET hive.exec.compress.output=true;
SET avro.output.codec=snappy;
-- likely because you set those on INSERT and there isn't any DDL
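A sketch of the session-level pattern implied above (table names are placeholders): the codec settings apply only to the files written by the INSERT, so nothing about compression ends up in the table definition that 'describe extended' reports.

```sql
-- Session-level settings: these affect the writer, not the table DDL.
SET hive.exec.compress.output=true;
SET avro.output.codec=snappy;
-- The files produced by this INSERT are Snappy-compressed Avro,
-- but the table metadata records no codec.
INSERT OVERWRITE TABLE avro_tbl SELECT * FROM staging_tbl;
```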
I have a highly compressed, single-ORC-file table generated from Hive
DDL. Raw size reports 120 GB; ORC/Snappy compresses it down to 990 MB (ORC
with no compression is still only 1.3 GB). Hive on MR is throwing
ArrayIndexOutOfBoundsException like the following:
Diagnostic Messages for this
Gopal. That was exactly it.
As always, a succinct, accurate answer.
Thanks,
-Kris
On Mon, Feb 26, 2018 at 8:06 PM, Gopal Vijayaraghavan wrote:
> Hi,
>
> > Caused by: java.lang.ArrayIndexOutOfBoundsException
> > at org.apache.hadoop.mapred.MapTask$MapOutputBuffer$
>
If using a default external table location, in a cluster with Ranger
Authorization, the table location and data are owned by the `hive`
user.
Since the table is external, there doesn't seem to be a way to delete
this data other than impersonating or becoming the `hive` or `hdfs`
principal. Is
'hive.query.results.cache.max.size' - Is this limit per query result,
per HS2 instance, or total for all users across all HS2 instances?
Thanks,
Kris
Authorization, rather.
On Thu, Jun 13, 2019 at 10:51 AM Kristopher Kane wrote:
>
> You really have no choice with storage based authentication.
>
> On Fri, Jun 7, 2019 at 12:24 PM Mainak Ghosh wrote:
> >
> > Hey Alan,
> >
> > Thanks for replying.
You really have no choice with storage based authentication.
On Fri, Jun 7, 2019 at 12:24 PM Mainak Ghosh wrote:
>
> Hey Alan,
>
> Thanks for replying. We are currently using storage based authorization and
> Hive 2.3.2. Unfortunately, we found that the default warehouse path requires
> a 777
The JDBC storage handler wiki states:
"You will need to protect the keystore file by only authorize targeted
user to read this file using authorizer (such as ranger). Hive will
check the permission of the keystore file to make sure user has read
permission of it when creating/altering table."
I
Does anyone have a pointer to how I can copy non-jar files from a
storage handler such that they are accessible by the map task executor
in usercache?
Thanks,
Kris
addCacheFile(new URI("hdfs://tmp/my.truststore"));
.. and the Distributed Cache directly, but I do not see them in the
directory listing of a Tez log.
On Tue, Aug 6, 2019 at 1:44 PM Kristopher Kane wrote:
>
> Does anyone have a pointer to how I can copy non-jar files from a
I'm trying to add protected SSL credentials to the Kafka Storage
Handler. This is my first jump into the pool.
I have it working where the creds for the keystore/truststore are in
JCEKS files in HDFS and the KafkaStorageHandler class loads them into
the job configuration based on some new