Hue and Beeline access your warehouse data and metadata via the HiveServer2
APIs. The HiveServer2 service runs as the 'hive' user.
On Wed, Dec 23, 2015 at 9:42 PM Kumar Jayapal wrote:
> Hi,
>
> My environment has Kerberos and Sentry for authentication and authorisation.
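For illustration, a minimal sketch of connecting to a Kerberized HiveServer2 over JDBC (the host, port, database, and realm are placeholders, and the Hive JDBC driver is assumed to be on the classpath):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class Hs2KerberosExample {
        public static void main(String[] args) throws Exception {
            // The principal clause names the Kerberos service principal that
            // HiveServer2 runs under; the client must already hold a valid
            // TGT (for example, from kinit) before connecting.
            String url = "jdbc:hive2://hs2.example.com:10000/default;"
                       + "principal=hive/_HOST@EXAMPLE.COM";
            try (Connection conn = DriverManager.getConnection(url);
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SHOW TABLES")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }
        }
    }

With Sentry enabled, the query only succeeds if the authenticated principal's role has been granted the corresponding privilege.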
Thanks Vinod for the wonderful pdf.
It explains how security can be achieved via Kerberos.
Is there any other way to implement security in Hadoop (without using Kerberos)?
On Fri, Feb 22, 2013 at 2:39 AM, Vinod Kumar Vavilapalli
vino...@hortonworks.com wrote:
You should read the Hadoop security design doc, which you can find at
https://issues.apache.org/jira/browse/HADOOP-4487
HTH,
+Vinod Kumar Vavilapalli
Hortonworks Inc.
http://hortonworks.com/
On Feb 21, 2013, at 11:02 AM, rohit sarewar wrote:
I am looking for an explanation of Kerberos working with a Hadoop cluster.
I need to know how the KDC is used by HDFS and MapReduce.
(Something like this: an example of Kerberos with a mail server,
https://www.youtube.com/watch?v=KD2Q-2ToloE)
How are the name node and data node prone to attacks?
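As a minimal sketch of the client side of that handshake, using Hadoop's UserGroupInformation API (the principal and keytab path are placeholders, and the cluster's core-site.xml/hdfs-site.xml are assumed to be on the classpath):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.security.UserGroupInformation;

    public class KerberizedHdfsClient {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Switch the client from "simple" (trusted usernames) to
            // Kerberos authentication.
            conf.set("hadoop.security.authentication", "kerberos");
            UserGroupInformation.setConfiguration(conf);
            // Fetch credentials from the KDC using a keytab instead of an
            // interactive kinit; principal and path are placeholders.
            UserGroupInformation.loginUserFromKeytab(
                    "alice@EXAMPLE.COM", "/etc/security/keytabs/alice.keytab");
            // Every RPC to the NameNode from here on carries the ticket
            // obtained above.
            FileSystem fs = FileSystem.get(conf);
            for (FileStatus status : fs.listStatus(new Path("/"))) {
                System.out.println(status.getPath());
            }
        }
    }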
Sent: September 19, 2012 02:38 PM
To: hdfs-user@hadoop.apache.org
Subject: Re: How does decommissioning work
Bryan,
I am going to assume that you know about replication factor per file, blocks,
replicas, etc.
Nodes are marked for decommissioning by adding them to the excludes file
Hello,
I'm using cdh3u2, if it matters. I'm using dfs.hosts.exclude to
decommission a good percentage of my cluster as I scale it down for a
period of time. I'm just trying to understand how HDFS goes about this,
because I haven't found anything more than how-to documentation for the
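For reference, the mechanics being discussed come down to one piece of NameNode configuration; the file path below is a placeholder:

    <!-- hdfs-site.xml -->
    <property>
      <name>dfs.hosts.exclude</name>
      <value>/etc/hadoop/conf/dfs.exclude</value>
    </property>

After hostnames are added to that excludes file, running "hadoop dfsadmin -refreshNodes" makes the NameNode re-read it, mark those nodes as decommissioning, and start re-replicating their blocks onto the remaining live nodes; a node flips to decommissioned once every block it holds has enough replicas elsewhere.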
Sent: December 20, 2009 11:14 PM
To: common-user@hadoop.apache.org
Subject: how does hadoop work?
Trying to figure out how Hadoop actually achieves its speed. Assuming that
data locality is central to the efficiency of Hadoop, how does the magic
actually happen, given that data still gets moved all over the network to
reach the reducers?
For example, if I have 1gb of logs spread across 10
DS,
What you say is true, but there are finer points:
1. Data transfer can begin while the mapper is still working through the data.
You would still bottleneck on the network if (a) you have enough nodes and
spindles that the aggregate disk transfer speed is greater than the network
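To put rough, invented numbers on that: ten nodes with two spindles each reading at about 100 MB/s gives an aggregate disk rate of roughly 2 GB/s, while ten 1 GbE links top out near 125 MB/s each, about 1.25 GB/s cluster-wide, so the shuffle can saturate the network even when every map read was local.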
Thank you, Chris. This answers my questions.
-Kevin
On Mon, Jul 14, 2008 at 11:17 AM, Chris Douglas wrote:
Yielding equal partitions means that each input source will offer n
partitions, and for any given partition 0 <= i < n, the records in that
partition are (1) sorted on the
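As a sketch of the contract Chris is describing (a generic hash partitioner, not the code from this thread): in Hadoop's MapReduce API a Partitioner assigns every record an index i with 0 <= i < numPartitions, all records sharing a key land in the same partition, and the framework sorts each partition by key before the reduce:

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Partitioner;

    public class HashKeyPartitioner extends Partitioner<Text, IntWritable> {
        @Override
        public int getPartition(Text key, IntWritable value, int numPartitions) {
            // Mask off the sign bit so the modulo result is a valid
            // partition index in [0, numPartitions).
            return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
        }
    }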