Hi all,
I am using cdh5.11.1-release. The compilation command is provided in the documentation (./buildall.sh -notests -so), but there is no command similar to 'make install'. The compiled directory tree is too large and contains many files a deployment does not need. Could you provide
Hi Tim,
Is there a plan for this work? Could you provide an example of the manual copy? Thanks.
At 2017-06-21 01:41:33, "Tim Armstrong" wrote:
>Hi Sky,
> We have not implemented an install target yet - for deployment we rely on
>copying out the artifacts manually.
agered by Cloudera Manager.
>> > > Build Debian packages and use `apt-get`?
>> > >
>> > > 2017-06-21 11:16 GMT+08:00 Henry Robinson :
>> > >
>> > >> I don't think there's any plan for this work. The CMake documentation
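Since there is no install target, a manual copy script is the usual approach. A minimal sketch, assuming a typical buildall.sh output layout; every path below is an assumption, and the script only prints what it would copy rather than copying anything:

```shell
# Hypothetical manual "install" sketch: collect the main build artifacts
# from an Impala source tree into a staging directory. All paths are
# assumptions based on a typical source-build layout; adjust to your tree.
IMPALA_HOME="${IMPALA_HOME:-/opt/impala-src}"
STAGE_DIR="${STAGE_DIR:-/tmp/impala-stage}"

# Artifacts commonly copied out by hand (daemon binaries, frontend jars, shell):
ARTIFACTS=(
  "be/build/latest/service/impalad"
  "be/build/latest/statestore/statestored"
  "be/build/latest/catalog/catalogd"
  "fe/target/dependency"
  "shell/build"
)

mkdir -p "$STAGE_DIR"
for a in "${ARTIFACTS[@]}"; do
  # Dry run only: print the copy that a real deployment script would do.
  echo "would copy: $IMPALA_HOME/$a -> $STAGE_DIR/"
done
```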
Hi all,
How do I back up and restore an Impala and Kudu database?
plication. I would be too concerned about
>making backups.
>
>I don't know enough about kudu to comment.
>
>Sent from my iPhone
>
>> On Jul 4, 2017, at 9:26 PM, sky wrote:
>>
>> Hi all,
>>How to backup and restore impala and kudu database.
Hi all,
Do Impala and Kudu have a rollback mechanism when loading data?
Thanks.
Hi all,
I use this command (./buildall.sh -fe_only -notests -so) to compile Impala, but the fe module fails with a "Connection timed out" error. However, connecting to repository.cloudera.com:443 with the "telnet" command from the same host succeeds. Why?
The following is the detailed error:
Scan
Hi all,
When I use the command "./bin/start-impala-cluster.py" to start the
impala cluster, it reports the following error:
impalad.INFO:
I0714 13:37:13.292771 9363 status.cc:122] Currently configured
default filesystem: LocalFileSystem. fs.defaultFS (file:///)
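The error says Impala's default filesystem is resolving to the local filesystem instead of HDFS. A sketch of the relevant core-site.xml entry in the Hadoop configuration directory that Impala reads; the NameNode host and port below are placeholders, not values from this thread:

```xml
<!-- Sketch only: point fs.defaultFS at HDFS rather than file:/// .
     "namenode-host" and 8020 are placeholder values. -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://namenode-host:8020</value>
</property>
```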
Hi Tim,
I found that ./bin/create-test-configuration.sh generates the
configurations in ./fe/src/test/resources, and the HADOOP_CONF_DIR variable also
points to this directory. But changing this variable takes no effect. Is it
hard-coded?
question.
>
>There isn't a way to override HADOOP_CONF_DIR mostly - most scripts source
>impala-config.sh.
>
>On Sun, Jul 16, 2017 at 8:31 PM, sky wrote:
>
>> Hi Tim,
>> I found it from ./bin/create-test-configuration.sh that generating
>> ./fe/src/te
"黄权隆" wrote:
>Hi sky,
>
>
>Do you want to use customized hadoop cluster but not the mini cluster? For
>example, testing the latest version of Impala upon your existing Hive cluster.
>If so, you can modify the configuration files in ./fe/src/test/resources.
>They
Hi all,
Table t1 has 3000 rows.
Table t2 has 0 rows, and its structure is the same as t1's.
Running "insert into table t2 select * from t2;" causes a timeout error.
Why?
Hi all,
Is there any way to load data from HDFS into a Parquet table other than via an
external or internal table?
"whether Impala "takes control" of the underlying data files and moves
>them when you rename the table, or deletes them when you drop the
>table. For more about internal and external tables and how they
>interact with the LOCATION attribute, see Overview of Impala Tables."
The CSV file is on HDFS.
At 2017-08-15 10:42:13, "Jim Apple" wrote:
>Is the data in a format that Impala can read?
>
>On Mon, Aug 14, 2017 at 7:31 PM, sky wrote:
>> Thank you,
>> I read the document. But it only describes the conversion of internal and
>
te_table
>
>I think you can follow these two steps in order:
>
>1. Make an external table referring to the CSV
>
>2. Use CREATE TABLE AS SELECT to make a parquet table
>
>On Mon, Aug 14, 2017 at 7:48 PM, sky wrote:
>> csv file on the HDFS.
>>
>>
>>
>
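The two steps from the reply above might look like this in impala-shell; the table names, column list, and HDFS path below are made-up placeholders for illustration:

```sql
-- Step 1: external table over the CSV files already on HDFS
-- (the path and columns are hypothetical).
CREATE EXTERNAL TABLE csv_staging (
  id INT,
  name STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/user/sky/csv_data';

-- Step 2: rewrite the same data as Parquet with CREATE TABLE AS SELECT.
CREATE TABLE parquet_table
STORED AS PARQUET
AS SELECT * FROM csv_staging;
```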
Hi all,
Besides the "show tables" command, is there any other way to list all
the tables in Impala?
I need to process the collection of all tables through SQL, but "show
tables" cannot be combined with other SQL.
Hi Dimitris,
Do you mean querying the database that stores the Impala metadata, i.e.
the Hive metastore backend (MySQL, PostgreSQL, Derby, and so on)?
At 2017-08-30 16:41:40, "Dimitris Tsirogiannis"
wrote:
>Hi sky,
>
>You could use HiveServer2 API (
>https://github.com/apache/hive/blob
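If the metastore backend happens to be MySQL or PostgreSQL, one unsupported, read-only option is to query the Hive metastore schema directly with plain SQL. TBLS and DBS are the standard metastore table names, but the schema varies by metastore version, so verify against your deployment first:

```sql
-- Run against the Hive metastore database, NOT against Impala.
-- Lists every table together with its database name.
SELECT d.NAME AS db_name, t.TBL_NAME
FROM TBLS t
JOIN DBS d ON t.DB_ID = d.DB_ID
ORDER BY d.NAME, t.TBL_NAME;
```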
Hi all,
How does Impala exchange data with other relational databases? Sqoop's
functionality is limited, and in Impala each insert carries roughly 100 ms of
query-planning overhead. Are there any other easy ways to exchange data?
Hi all,
Could you give me a link to the Impala driver source code?
proprietary.
>
>On 18 September 2017 at 11:55, sky wrote:
>> Hi all,
>> Could you give me a impala driver source code connection?
Hi all,
How do I compile Impala into an RPM package?
Hi all,
After using 'alter table ... drop columns ...' to delete a middle
column, subsequent select queries return scrambled data. How can this be solved?
Both the textfile and parquet formats show the scrambled data.
At 2017-10-12 14:41:02, "Alexander Behm" wrote:
>What's the file format?
>
>On Wed, Oct 11, 2017 at 11:30 PM, sky wrote:
>
>> Hi all,
>> After using the 'alter table ... drop
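A relevant detail here: for text files, Impala resolves file fields to table columns purely by position, so dropping a middle column shifts every later value one slot to the left. A possible workaround, sketched with purely illustrative table and column names, is to restore the original column list and then rewrite the data without the unwanted column:

```sql
-- Put the original three-column schema back so positions line up again
-- (t1 and its columns c1/c2/c3 are hypothetical).
ALTER TABLE t1 REPLACE COLUMNS (c1 INT, c2 STRING, c3 STRING);

-- Then materialize a copy that simply omits the middle column.
CREATE TABLE t1_fixed AS SELECT c1, c3 FROM t1;
```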
Why is the second step performed in Hive, not Impala?
At 2017-10-12 15:12:38, "yu feng" wrote:
>I open impala-shell and hive-cli.
>1. Execute 'show create table impala_test.sales_fact_1997' in impala-shell; it
>returns:
>
>+-
Hi all,
How does a Parquet table perform load data operations? How can a CSV
file be imported into a Parquet table?
>'insert into [parquet table] select
>[...] from [csv_table]'.
>
>HTH
>
>On 12 October 2017 at 07:58, sky wrote:
>> Hi all,
>> How does the parquet table perform load data operations? How does a CSV
>> file import into the parquet table?
Hi all,
Which Impala version is more stable: cdh5.13.0-release or
cdh5.12.1-release?