You can try
http://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-hdfs/ArchivalStorage.html#Archival_Storage_SSD__Memory
Hive temporary tables use this feature to speed up jobs:
https://issues.apache.org/jira/browse/HIVE-7313
r7raul1...@163.com
From: Christian
Date: 2015-11-06 13:50
How do I use class probabilities with trees and ensembles in Spark 1.5.0? Is
there any example or documentation?
r7raul1...@163.com
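One possible starting point: in Spark 1.5's spark.ml API, tree-ensemble classifiers are probabilistic classifiers, so the fitted model can append a vector of per-class probabilities to its output. A minimal sketch, assuming Spark 1.5's `RandomForestClassifier` and illustrative DataFrames `trainingDF`/`testDF` with `label` and `features` columns:

```scala
import org.apache.spark.ml.classification.RandomForestClassifier

// Sketch assuming Spark 1.5 spark.ml. RandomForestClassifier extends
// ProbabilisticClassifier, so the fitted model adds a "probability"
// column holding a vector of per-class probabilities.
val rf = new RandomForestClassifier()
  .setLabelCol("label")
  .setFeaturesCol("features")
  .setProbabilityCol("probability") // per-class probability vector

val model = rf.fit(trainingDF)      // trainingDF: DataFrame(label, features)

model.transform(testDF)             // testDF: DataFrame with a features column
  .select("prediction", "probability")
  .show()
```

The older spark.mllib `RandomForestModel` does not expose class probabilities directly, which is one reason to prefer the spark.ml pipeline API here.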
Thank you
r7raul1...@163.com
From: Sean Owen
Date: 2015-09-24 16:18
To: r7raul1...@163.com
CC: user
Subject: Re: How to fix some WARN when submit job on spark 1.5 YARN
You can ignore all of these. Various libraries can take advantage of
native acceleration if native libs are available, but it's optional.
1. WARN netlib.BLAS: Failed to load implementation from: com.github.fommil.netlib.NativeSystemBLAS
2. WARN netlib.BLAS: Failed to load implementation from: com.github.fommil.netlib.NativeRefBLAS
3. WARN Unable to load native-hadoop library for your platform
r7raul1...@163.com
Example:
change `select * from test.table` to `select * from production.table`
r7raul1...@163.com
From: Cheng, Hao
Date: 2015-09-17 11:05
To: r7raul1...@163.com; user
Subject: RE: spark sql hook
Catalyst TreeNode is a very fundamental API; I'm not sure what kind of hook you need.
Any concrete example?
I want to modify some SQL tree nodes before execution. I can do this with a Hive
hook in Hive. Does Spark support such a hook? Any advice?
r7raul1...@163.com
Thank you
r7raul1...@163.com
From: Cheng, Hao
Date: 2015-09-17 12:32
To: r7raul1...@163.com; user
Subject: RE: RE: spark sql hook
Probably a workable solution is to create your own SQLContext by extending the
class HiveContext, override the `analyzer`, and add your own rule to do the
rewrite.
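That approach might look roughly like the sketch below. It targets Spark 1.5's internal Catalyst APIs, which are not a stable interface, so class names and constructor signatures here are assumptions that should be checked against your exact Spark version; `RewriteTestTables` and `MySQLContext` are hypothetical names for the table-rewriting example above:

```scala
import org.apache.spark.SparkContext
import org.apache.spark.sql.catalyst.analysis.{Analyzer, UnresolvedRelation}
import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
import org.apache.spark.sql.catalyst.rules.Rule
import org.apache.spark.sql.hive.HiveContext

// Hypothetical rule: rewrite references to the "test" database so they
// resolve against "production" instead, before analysis completes.
object RewriteTestTables extends Rule[LogicalPlan] {
  def apply(plan: LogicalPlan): LogicalPlan = plan transform {
    // Assumes Spark 1.5's UnresolvedRelation(tableIdentifier: Seq[String], alias)
    case r @ UnresolvedRelation(tableIdentifier, _)
        if tableIdentifier.headOption == Some("test") =>
      r.copy(tableIdentifier = "production" +: tableIdentifier.tail)
  }
}

// Custom context that injects the rule into the analyzer's extra
// resolution rules (constructor arguments assumed from Spark 1.5).
class MySQLContext(sc: SparkContext) extends HiveContext(sc) {
  override lazy val analyzer: Analyzer =
    new Analyzer(catalog, functionRegistry, conf) {
      override val extendedResolutionRules =
        RewriteTestTables :: Nil
    }
}
```

Because these are `protected[sql]` internals, you may need to place such a class under the `org.apache.spark.sql` package (or adjust access) for it to compile, and expect the signatures to shift between Spark releases.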