Yeah, that works as expected. The schema drives the column list in the
select statement (not the HDFS file).
You'd get nulls if your schema had *more* columns than the HDFS file had
fields.
You dig?
On Wed, Oct 23, 2013 at 4:53 PM, Xiu Guo wrote:
We have a data file called employee.dat with the contents below:
1,ryan,d'souza,it,2
2,michael,fernandes,admin,25000
Then in Hive, the query is:
create table myTbl (a INT, b STRING)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
TBLPROPERTIES ("serialization.null.format"="\\N");
LO
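To illustrate Stephen's point, a minimal sketch (the second table name and the
LOAD path are made up; the data file is the five-field employee.dat shown above):

CREATE TABLE myTbl (a INT, b STRING)            -- fewer columns than the file has fields
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';
LOAD DATA LOCAL INPATH 'employee.dat' INTO TABLE myTbl;
SELECT * FROM myTbl;
-- 1  ryan      (the three trailing fields in each line are simply ignored)

CREATE TABLE myTblWide (a INT, b STRING, c STRING, d STRING, e INT, f STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';  -- more columns than the file has fields
LOAD DATA LOCAL INPATH 'employee.dat' INTO TABLE myTblWide;
SELECT * FROM myTblWide;
-- 1  ryan  d'souza  it  2  NULL   (column f has no field to read, so it comes back NULL)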
Thanks Stephen-
I will submit it; it's definitely still kinda in beta mode.
Looking for feedback and contributors if anyone is interested.
Thanks!
B
On Wed, Oct 23, 2013 at 4:21 PM, Stephen Sprague wrote:
Excellent. You might try to get it mentioned on this page:
https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients
and save some other poor saps from re-inventing the wheel.
On Wed, Oct 23, 2013 at 2:42 PM, Brad Ruderman wrote:
Hi All-
I have struggled for a while to find a simple and straightforward driver that I
can use to connect to Hive Server 2 in much the same manner as a MySQL
driver in Python. I know there are a few ways, like using Thrift or ODBC, but
they all require a significant amount of installation. I decided to create
Never mind. Figured out where that was.
Thanks.
On Wed, Oct 23, 2013 at 2:27 PM, SF Hadoop wrote:
Where is package.jdo located? The one that you changed?
Thanks.
On Wed, Oct 23, 2013 at 1:22 PM, Timothy Potter wrote:
I updated package.jdo to use COMMENT instead of FCOMMENT (which is
something I had to do for HCatalog a long while back) ... may not be the
"right" solution but worked for me.
Cheers,
Tim
On Wed, Oct 23, 2013 at 2:09 PM, SF Hadoop wrote:
Has anyone come up with further information on this issue? I am
experiencing the same thing.
Hive is set to auto-create the metastore schema if it does not exist, yet it
still fails. I cannot create *any* table at all.
Any help is appreciated.
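(For reference, "set to auto-create" usually refers to the DataNucleus
properties below in hive-site.xml; this is just a hedged sketch of what to
double-check, not a confirmed fix for this failure:)

<property>
  <name>datanucleus.autoCreateSchema</name>
  <value>true</value>
</property>
<property>
  <name>datanucleus.fixedDatastore</name>
  <value>false</value>
</property>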
On Sat, Oct 19, 2013 at 11:56 PM, Jov wrote:
> can you confirm the script c
Hello,
I am running the Hive query given below and I am getting this exception:
FAILED: SemanticException [Error 10004]: Line 2:74 Invalid table alias or
column reference 'result_data': (possible column names are: resultid,
publishdatetime)
select resdata.resultid, resdata.clinical.publishdat
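For what it's worth, a minimal, hypothetical repro of that class of error (the
table and column names below are taken from the error message, not from the
real schema): Hive resolves each reference against the declared columns or
visible aliases, and a name that is not among them fails semantic analysis.

CREATE TABLE resdata (resultid INT, publishdatetime STRING);

-- fails with a SemanticException like the one above, because 'result_data'
-- is not a declared column of resdata
SELECT resultid FROM resdata WHERE result_data = 'x';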
Hi Andy,
Using HTTP over Thrift is currently a feature in development and has
limited functionality. As of now, it does not support doAs and works only with
the NOSASL authentication mode set on the server side. There is a
follow-up JIRA (https://issues.apache.org/jira/browse/HIVE-4764) which will
add
I found a solution.
In testing, I tried out the HTTPClientTransport that's now supported for
Thrift in Hive 0.12. This required me to set the Hive conf var
hive.server2.enable.doAs to false in order for it to work. When I reverted
back to using the BufferedTransport, with this property in my
hive-
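For anyone following along, a hedged sketch of the hive-site.xml properties
the two messages above are describing (the property names are the standard
HiveServer2 ones; the values are what this thread reports working, so check
them against your version):

<property>
  <name>hive.server2.transport.mode</name>
  <value>http</value>
  <description>"binary" keeps the default Thrift transport</description>
</property>
<property>
  <name>hive.server2.authentication</name>
  <value>NOSASL</value>
  <description>HTTP mode currently works only with NOSASL</description>
</property>
<property>
  <name>hive.server2.enable.doAs</name>
  <value>false</value>
  <description>doAs is not yet supported over HTTP</description>
</property>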
Hi there,
I have created a table of numbers using clustered by and am sampling it using
buckets.
If I am selecting 1 candidate from ~125m, how can I get good random
selections?
Should I create 12500 clusters? Or should I create 100 clusters and then use
the sample function (... from 1250
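Not an answer to the sizing question, but for reference, the bucket-sampling
syntax under discussion (the table and column names are made up, and 100
buckets is just one of the counts mentioned above):

CREATE TABLE numbers (num BIGINT)
CLUSTERED BY (num) INTO 100 BUCKETS;

-- one bucket's worth of rows (roughly 1/100th); efficient, since only that
-- bucket is read when sampling on the clustering column
SELECT * FROM numbers TABLESAMPLE(BUCKET 1 OUT OF 100 ON num);

-- sampling ON rand() gives a different subset each run, but scans every bucket
SELECT * FROM numbers TABLESAMPLE(BUCKET 1 OUT OF 100 ON rand());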