Hi Jorn
Thank you for replying. We are currently exporting data from HBase to Hive, as I
mentioned in the previous message. I am working in a big company; I personally
like Tez, but it is not even on our roadmap.
Thank you
On May 12, 2016, at 1:52 AM, Jörn Franke
<mailto:jornfra...@gmail.com> wrote:
Why don't you export the data from HBase to Hive, e.g. in ORC format? You should
not use MR with Hive, but Tez. Also use a recent Hive version (at least 1.2).
You can then run your queries there. For large log file processing in real time,
one alternative, depending on your needs, could be Solr on Hadoop.
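Jörn's suggestion could be sketched in HiveQL roughly as follows. This is a minimal sketch, not a tested recipe: the table and column names (logs_hbase, logs_orc, rowkey, msg) are hypothetical, and it assumes an HBase-backed Hive table already exists.

```sql
-- Sketch only; logs_hbase / logs_orc are hypothetical names.
SET hive.execution.engine=tez;        -- run on Tez instead of MapReduce

CREATE TABLE logs_orc (rowkey STRING, msg STRING)
STORED AS ORC;

-- One-time copy out of HBase into ORC, where queries are much cheaper.
INSERT OVERWRITE TABLE logs_orc
SELECT rowkey, msg FROM logs_hbase;
```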
Hi Sathi
Thank you for the answer. But we will load data from HBase into Hive and let
MapReduce process those data. I am not sure whether that is efficient on the
server for terabytes of data.
Thanks
Jacky
On May 11, 2016, at 11:03 PM, Sathi Chowdhury
<mailto:sathi.chowdh...@lithium.com> wrote:
Hi Yang,
Did you think of bulk loading option?
http://blog.cloudera.com/blog/2013/09/how-to-use-hbase-bulk-loading-and-why/
This may be a way to go.
Thanks
Sathi
On May 11, 2016, at 6:07 PM, Yi Jiang
<mailto:yi.ji...@ubisoft.com> wrote:
Hi, Guys
Recently we have been debating using HBase as the destination for our data
pipeline job.
Basically, we want to save our logs into HBase, and our pipeline can generate
2-4 terabytes of data every day, but our IT department thinks it is not a good
idea to scan HBase so heavily; it will cause the performan
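For context, exposing an HBase table to Hive (so it can be queried or copied out) is typically done with the HBase storage handler. A minimal sketch, with hypothetical table and column-family names ("logs", cf:msg) that are not from the thread:

```sql
-- Hypothetical mapping of an HBase table "logs" (column family cf) into Hive.
CREATE EXTERNAL TABLE logs_hbase (rowkey STRING, msg STRING)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf:msg")
TBLPROPERTIES ("hbase.table.name" = "logs");
```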
Thanks Dudu, it looks like both of them point to the same class. Let me
check whether there is some problem with the data that is not visible; it is
surprising that LCASE resolves the issue but LOWER does not.
On 5/10/2016 9:43 PM, Markovitz, Dudu wrote:
Hi
According to the documentation, LCASE is a synonym for LOWER.
One more example:
[hdfs@hadoopnn1 ~]$ hdfs dfs -count -h /user/margusja/files_10k/
19.8 K 47.7 K /user/margusja/files_10k
[hdfs@hadoopnn1 ~]$ hdfs dfs -count -h /datasource/dealgate/
537.9 K 8.5 G /datasource/dealgate
2: jdbc:hive2://
More information:
2016-05-11 13:31:17,086 INFO [HiveServer2-Handler-Pool: Thread-5867]:
parse.ParseDriver (ParseDriver.java:parse(185)) - Parsing command:
create external table files_10k (i int) row format delimited fields
terminated by '\t' location '/user/margusja/files_10k'
2016-05-11 13:3
What do you mean?
Margus (margusja) Roo
http://margus.roo.ee
skype: margusja
+372 51 48 780
On 11/05/16 08:21, Mich Talebzadeh wrote:
Yes, but the table then still exists, correct? I mean the second time.
Did you try:
use default;
drop table if exists trips;
It is still registered within the Hive metadata
Sadly in our environment:
I generated the files like you did.
Connected to: Apache Hive (version 1.2.1.2.3.4.0-3485)
Driver: Hive JDBC (version 1.2.1.2.3.4.0-3485)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://hadoopnn1.estpak.ee:2181,hado> create external table
files_10k (i int
Hi
It seems that you are right and it is a bug in the CTE when there’s an “IS NULL”
predicate involved.
I’ve opened a bug for this:
https://issues.apache.org/jira/browse/HIVE-13733
Dudu
hive> create table t (i int,a string,b string);
hive> insert into t values (1,'hello','world'),(2,'bye',null);
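Based on the table above, the general shape of a query that exercises a CTE with an "IS NULL" predicate might look like the following. This is only an illustrative sketch; the exact failing query is in the JIRA ticket.

```sql
-- Illustrative shape of a CTE with an IS NULL predicate over table t.
WITH t2 AS (SELECT i, a, b FROM t WHERE b IS NULL)
SELECT * FROM t2;
```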
I could not reproduce that issue on the Cloudera quickstart VM.
I’ve created an HDFS directory with 10,000 files.
I’ve created an external table from within beeline.
The creation was immediate.
Dudu