I think the catalogd and HMS logs have more details about this error. Could
you find and share the stack trace of this exception?
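
If catalogd is writing the usual glog-style files, something like the
following should surface the full trace. The log locations and search
strings below are just guesses; adjust them to your --log_dir and your
Hive metastore's log setup:

    # grep the catalogd log around the Thrift failure, with surrounding context
    grep -B 5 -A 30 'TTransportException' /var/log/impala/catalogd.INFO
    # the HMS side often logs the corresponding create_table failure as well
    grep -B 5 -A 30 'create_table' /var/log/hive/hivemetastore.log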

BTW, is your Impala 3.4 deployment able to create other kinds of tables,
e.g. an HDFS table or a Kudu table?
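
For a quick check, something along these lines from the command line
(table names are placeholders, and the port is taken from your shell
session below):

    # plain HDFS-backed table in the default warehouse location
    impala-shell -i localhost:11432 -q 'CREATE TABLE test_hdfs_tbl (id INT) STORED AS PARQUET'
    # Kudu table, only if Kudu is configured on this cluster
    impala-shell -i localhost:11432 -q 'CREATE TABLE test_kudu_tbl (id INT PRIMARY KEY) PARTITION BY HASH PARTITIONS 2 STORED AS KUDU'

If both of those succeed, the problem is probably specific to the S3
location rather than the HMS connection in general.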

On Wed, Mar 31, 2021 at 9:59 PM Hashan Gayasri <hashan.gaya...@gmail.com>
wrote:

> Hi all,
>
> I'm getting a Thrift error when trying to create an S3-backed table in
> Impala 3.4.0. This worked without any issues in Impala 3.3.0.
>
> Output from impala-shell:
>
>> Server version: impalad version 3.4.0-RELEASE RELEASE (build Could not
>> obtain git hash)
>>
>> ***********************************************************************************
>> Welcome to the Impala shell.
>> (Impala Shell v3.4.0-RELEASE (Could) built on Wed Feb 24 23:52:04 GMT
>> 2021)
>>
>> After running a query, type SUMMARY to see a summary of where time was
>> spent.
>>
>> ***********************************************************************************
>> [localhost:11432] default> CREATE TABLE parqt_table_s3 (time INT)
>> PARTITIONED BY (time_part INT) STORED AS PARQUET LOCATION
>> 's3a://hashan-0011220-test-s3/Sample_Data';
>> Query: CREATE TABLE parqt_table_s3 (time INT) PARTITIONED BY (time_part
>> INT) STORED AS PARQUET LOCATION
>> 's3a://hashan-0011220-test-s3/Sample_Data'
>> ERROR: ImpalaRuntimeException: Error making 'createTable' RPC to Hive
>> Metastore:
>> CAUSED BY: TTransportException: null
>>
>
> Log (impalad):
>
>> I0331 14:31:56.494190 1732 Frontend.java:1487]
>> d54db09df4e5acfe:4e10d11500000000] Analyzing query: CREATE TABLE
>> parqt_table_s3 (time INT) PARTITIONED BY (time_part INT) STORED AS PARQUET
>> LOCATION 's3a://hashan-0011220-test-s3/Sample_Data' db: default
>> I0331 14:31:56.600950 1732 Frontend.java:1529]
>> d54db09df4e5acfe:4e10d11500000000] Analysis and authorization finished.
>> I0331 14:31:57.650727 1732 client-request-state.cc:211]
>> d54db09df4e5acfe:4e10d11500000000] ImpalaRuntimeException: Error making
>> 'createTable' RPC to Hive Metastore:
>> CAUSED BY: TTransportException: null
>>
>
> Is this a known issue / has anyone faced this before?
>
>
> Regards,
> Hashan Gayasri
>
>
