I do not know of a way to get Impala to read data that it does not
consider a table. Are you concerned about the overhead of Impala's
maintenance of the metadata?
On Mon, Aug 14, 2017 at 7:57 PM, sky wrote:
> Thank you,
> I am currently using this approach. But is there any way to load data
> from HDFS into a Parquet table without going through an external or
> internal table?
>
> At 2017-08-15 10:53:55, "Jim Apple" wrote:
>>http://impala.apache.org/docs/build/html/topics/impala_create_table.html#create_table
>>
>>I think you can follow these two steps in order:
>>
>>1. Make an external table referring to the CSV
>>
>>2. Use CREATE TABLE AS SELECT to make a parquet table
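Concretely, those two steps might look like the following in impala-shell. The table names, column schema, and HDFS path are hypothetical placeholders, not from the thread:

```sql
-- Step 1: an external table over the existing CSV files
-- (schema and path are made-up examples)
CREATE EXTERNAL TABLE csv_staging (
  id INT,
  name STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE
LOCATION '/user/sky/csv_data';

-- Step 2: rewrite the same rows as Parquet in one statement
CREATE TABLE parquet_target
STORED AS PARQUET
AS SELECT * FROM csv_staging;
```

After step 2, `csv_staging` can be dropped; because it is external, dropping it leaves the original CSV files in place.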
>>
>>On Mon, Aug 14, 2017 at 7:48 PM, sky wrote:
>>> It is a CSV file on HDFS.
>>>
>>> At 2017-08-15 10:42:13, "Jim Apple" wrote:
>>>>Is the data in a format that Impala can read?
>>>>On Mon, Aug 14, 2017 at 7:31 PM, sky wrote:
>>>>> Thank you,
>>>>> I read the document, but it only describes the conversion between
>>>>> internal and external tables. How can data be loaded directly into a
>>>>> Parquet table? Could you provide an example? Thank you!
>>>>>
>>>>> At 2017-08-15 03:25:43, "Jim Apple" wrote:
>>>>>>Maybe this will help:
>>>>>>
>>>>>>http://impala.apache.org/docs/build/html/topics/impala_create_table.html#create_table
>>>>>>
>>>>>>"Although the EXTERNAL and LOCATION clauses are often specified
>>>>>>together, LOCATION is optional for external tables, and you can also
>>>>>>specify LOCATION for internal tables. The difference is all about
>>>>>>whether Impala "takes control" of the underlying data files and moves
>>>>>>them when you rename the table, or deletes them when you drop the
>>>>>>table. For more about internal and external tables and how they
>>>>>>interact with the LOCATION attribute, see Overview of Impala Tables."
>>>>>>
>>>>>>On Thu, Aug 10, 2017 at 10:45 PM, sky wrote:
>>>>>>> Hi all,
>>>>>>> Is there any way to load data from HDFS into a Parquet table
>>>>>>> without going through an external or internal table?
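To make the EXTERNAL/LOCATION distinction quoted above concrete, here is a hypothetical sketch (table names and paths are made up): both tables point at an explicit HDFS LOCATION, but only the external one leaves its files behind when dropped.

```sql
-- External table: Impala does not take control of the files.
-- DROP TABLE ext_logs removes only the metadata; the files under
-- /data/landing/logs remain in HDFS.
CREATE EXTERNAL TABLE ext_logs (line STRING)
LOCATION '/data/landing/logs';

-- Internal (managed) table with an explicit LOCATION: Impala takes
-- control, so renaming the table moves the files and DROP TABLE
-- deletes them.
CREATE TABLE managed_logs (line STRING)
LOCATION '/data/managed/logs';
```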