IMHO, there is no straightforward way of doing this in Hadoop except to
install Hadoop components such as MapReduce and HDFS as different users.
This is an ongoing development priority.
The available access-related configuration options (before Kerberos V5) are:
-
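As one hedged illustration of the kind of pre-Kerberos knob involved (this is an assumption about the era's configuration surface, not the truncated list above): HDFS permission checking was toggled via `dfs.permissions` in hdfs-site.xml, which only trusts the client-reported user and so is not a real security boundary.

```xml
<!-- hdfs-site.xml fragment (illustrative; property name per pre-security
     Hadoop releases, where permission checks rely on the client-supplied
     identity rather than authenticated credentials) -->
<property>
  <name>dfs.permissions</name>
  <value>true</value>
</property>
```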
On Fri, Oct 29, 2010 at 3:42 PM, John Sichi jsi...@facebook.com wrote:
http://wiki.apache.org/hadoop/Hive/Development/ContributorsMeetings/HiveContributorsMinutes101025
JVS
Carl Steinbach proposed making 0.7.0 a time-based release (rather than
a feature-based release), and that we should
I'm about to investigate the following situation, but I'd appreciate any
insight that can be given.
We have an external table that comprises 3 HDFS files.
We then run an INSERT OVERWRITE which is just a SELECT * from the external
table.
The table being overwritten has N buckets.
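For concreteness, the setup above can be sketched in HiveQL. All table and column names here are hypothetical (the original report does not give them), and N is fixed at 4 purely for illustration:

```sql
-- External table backed by the directory holding the 3 HDFS files
CREATE EXTERNAL TABLE src_ext (id INT, val STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
LOCATION '/data/src_ext';

-- Target table with N buckets (N = 4 here)
CREATE TABLE dst_bucketed (id INT, val STRING)
CLUSTERED BY (id) INTO 4 BUCKETS;

-- The INSERT OVERWRITE that is just a SELECT * from the external table
SET hive.enforce.bucketing = true;
INSERT OVERWRITE TABLE dst_bucketed
SELECT * FROM src_ext;
```

With `hive.enforce.bucketing` enabled, Hive plans one reducer per bucket so the output file count matches the declared bucket count rather than the input file count.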
The issue