I think one point you need to mention is your target (HDFS, Hive, or HBase, or something else) and which endpoints are used.
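
If the target is HDFS on cluster C, one common approach is to log in to cluster C's KDC explicitly from the Spark application and perform the writes under that identity. Below is only a sketch, not a tested recipe: the principal, keytab path, NameNode URI, and output path are all placeholders, and it assumes the Hadoop client jars plus a krb5.conf reachable from cluster B.

```scala
import java.security.PrivilegedExceptionAction
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}
import org.apache.hadoop.security.UserGroupInformation

// Point the Hadoop client at cluster C and enable Kerberos authentication.
// "clusterC-namenode:8020" is a placeholder for the real NameNode address.
val conf = new Configuration()
conf.set("fs.defaultFS", "hdfs://clusterC-namenode:8020")
conf.set("hadoop.security.authentication", "kerberos")
UserGroupInformation.setConfiguration(conf)

// Log in explicitly from a keytab shipped with the application.
// Principal and keytab path below are hypothetical examples.
val ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(
  "appuser@EXAMPLE.COM", "/etc/security/keytabs/appuser.keytab")

// All FileSystem calls must run inside doAs so they carry the Kerberos
// credentials. Note that UGI state is per-JVM and not serialized, so on
// executors this login has to be repeated, e.g. inside foreachPartition.
ugi.doAs(new PrivilegedExceptionAction[Unit] {
  override def run(): Unit = {
    val fs = FileSystem.get(conf)
    val out = fs.create(new Path("/data/results/part-00000"))
    out.write("result".getBytes("UTF-8"))
    out.close()
  }
})
```

Since cluster B itself does not run Kerberos, the keytab has to be distributed to the executors (e.g. via `--files`), which is worth weighing against simply enabling Kerberos on cluster B.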

On Thu, Oct 13, 2016 at 8:50 PM, dbolshak <bolshakov.de...@gmail.com> wrote:

> Hello community,
> We have a challenge and no idea how to solve it.
> The problem:
> Say we have the following environment:
> 1. `cluster A`: the cluster does not use Kerberos and we use it as a source
> of data; importantly, we do not manage this cluster.
> 2. `cluster B`: a small cluster where our Spark application runs and
> performs some logic. (We manage this cluster and it does not have
> Kerberos.)
> 3. `cluster C`: the cluster uses Kerberos and we use it to keep the results
> of our Spark application; we manage this cluster.
> Our requirements and conditions that are not mentioned yet:
> 1. All clusters are in a single data center, but in different subnetworks.
> 2. We cannot turn on Kerberos on `cluster A`.
> 3. We cannot turn off Kerberos on `cluster C`.
> 4. We can turn Kerberos on/off on `cluster B`; currently it is turned off.
> 5. The Spark app is built on top of the RDD API and does not depend on spark-sql.
> Does anybody know how to write data using the RDD API to a remote cluster
> that is running with Kerberos?
> --
> //with Best Regards
> --Denis Bolshakov
> e-mail: bolshakov.de...@gmail.com
> --
> View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/spark-with-kerberos-tp27894.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
> ---------------------------------------------------------------------
> To unsubscribe e-mail: user-unsubscr...@spark.apache.org

Best Regards,
Ayan Guha
