> spark.ui.filters=org.apache.hadoop.security.authentication.server.AuthenticationFilter
> spark.org.apache.hadoop.security.authentication.server.AuthenticationFilter.params="type=kerberos,kerberos.principal=HTTP/mybox@MYDOMAIN,kerberos.keytab=/some/keytab"
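For reference, the same two settings can also be passed straight to spark-submit with --conf instead of editing spark-defaults.conf; a sketch, where the main class and jar name are placeholders:

spark-submit \
  --conf "spark.ui.filters=org.apache.hadoop.security.authentication.server.AuthenticationFilter" \
  --conf "spark.org.apache.hadoop.security.authentication.server.AuthenticationFilter.params=type=kerberos,kerberos.principal=HTTP/mybox@MYDOMAIN,kerberos.keytab=/some/keytab" \
  --class com.example.MyJob my-job.jar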
>
>
>
>
> On Thu, Jan 7, 2016 at 10:35 AM, Kostiantyn Kudriavtsev
> <kudryavtsev.konstan...@gmail.com> wrote:
> I’m afraid I missed where
I know, but I only need to hide/protect the web UI, at least with the servlet Filter API
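Such a filter can be a plain javax.servlet.Filter registered through spark.ui.filters. Below is a minimal sketch doing HTTP Basic auth; the class name and the hard-coded credentials are placeholders, and the class must be on the driver's classpath:

import java.nio.charset.StandardCharsets
import java.util.Base64
import javax.servlet._
import javax.servlet.http.{HttpServletRequest, HttpServletResponse}

class BasicAuthFilter extends Filter {
  // precomputed Authorization header value for the placeholder credentials
  private val expected = "Basic " +
    Base64.getEncoder.encodeToString("admin:secret".getBytes(StandardCharsets.UTF_8))

  override def init(conf: FilterConfig): Unit = {
    // real code would read the user/password from conf.getInitParameter(...)
  }

  override def doFilter(req: ServletRequest, res: ServletResponse,
                        chain: FilterChain): Unit = {
    val hreq = req.asInstanceOf[HttpServletRequest]
    val hres = res.asInstanceOf[HttpServletResponse]
    if (expected == hreq.getHeader("Authorization")) {
      chain.doFilter(req, res) // credentials match: let the request through
    } else {
      hres.setHeader("WWW-Authenticate", "Basic realm=\"Spark UI\"")
      hres.sendError(HttpServletResponse.SC_UNAUTHORIZED)
    }
  }

  override def destroy(): Unit = {}
}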
On Jan 7, 2016, at 4:59 PM, Ted Yu <yuzhih...@gmail.com> wrote:
> Without kerberos you don't have true security.
>
> Cheers
>
> On Thu, Jan 7, 2016 at 1:56 PM, Kostiantyn Kudriavtsev
<kudryavtsev.konstan...@gmail.com> wrote:
Hi guys,
there is one big issue with this approach:
> spark.hadoop.fs.s3a.access.key is now visible everywhere, in logs, in the
> Spark web UI, and is not secured at all...
On Jan 2, 2016, at 11:13 AM, KOSTIANTYN Kudriavtsev
<kudryavtsev.konstan...@gmail.com> wrote:
> thanks Jerry, it
> Then you can specify a different user by using a different spark.conf, passed
> via --properties-file to spark-submit.
>
> HTH,
>
> Jerry
>
> Sent from my iPhone
>
> On 31 Dec, 2015, at 2:06 pm, KOSTIANTYN Kudriavtsev <
> kudryavtsev.konstan...@gmail.com> wrote:
>
> Hi Jerry,
>
uld.
>
> Best Regards,
>
> Jerry
>
> Sent from my iPhone
>
> On 30 Dec, 2015, at 2:31 pm, KOSTIANTYN Kudriavtsev <
> kudryavtsev.konstan...@gmail.com> wrote:
>
> Hi Jerry,
>
> I want to run different jobs on different S3 buckets - different AWS creds
>
Hi all,
I'm trying to use a different spark-defaults.conf per user, i.e. I want to
have spark-user1.conf, etc. Is there a way to pass a path to the appropriate
conf file when I'm using a standalone Spark installation?
Also, is it possible to configure a different hdfs-site.xml and pass it as
well with
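For the spark-defaults part, the --properties-file route suggested earlier in the thread would look roughly like this; file names, master URL, and keys are placeholders:

# conf/spark-user1.conf (one such file per user)
spark.master                     spark://master:7077
spark.hadoop.fs.s3a.access.key   USER1_ACCESS_KEY
spark.hadoop.fs.s3a.secret.key   USER1_SECRET_KEY

# submit with that file instead of the default conf/spark-defaults.conf:
spark-submit --properties-file conf/spark-user1.conf --class com.example.MyJob my-job.jar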
> Sent from my iPhone
>
> On 30 Dec, 2015, at 2:31 pm, KOSTIANTYN Kudriavtsev <
> kudryavtsev.konstan...@gmail.com> wrote:
>
> Hi Jerry,
>
> I want to run different jobs on different S3 buckets - different AWS creds
> - on the same instances. Could you shed some light?
hdfs-site.xml
>
> What config parameters do you plan to change in hdfs-site.xml ?
> If the parameter only affects hdfs NN / DN, passing hdfs-site.xml
> wouldn't take effect, right ?
>
> Cheers
>
> On Thu, Dec 31, 2015 at 10:48 AM, KOSTIANTYN Kudriavtsev <
> kudryavtsev.konsta
Dear Spark community,
I faced the following issue when trying to access data on S3a; my code is
the following:

import org.apache.spark.{SparkConf, SparkContext}

val sparkConf = new SparkConf()
val sc = new SparkContext(sparkConf)
// register the S3A filesystem implementation for s3a:// URIs
sc.hadoopConfiguration.set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
ch Worker node to all the Executor JVMs
> on each Worker?
>
> On Dec 30, 2015, at 12:45 PM, KOSTIANTYN Kudriavtsev <
> kudryavtsev.konstan...@gmail.com> wrote:
>
> Dear Spark community,
>
> I faced the following issue when trying to access data on S3a; my code is
&
On Dec 30, 2015 6:48 PM, "KOSTIANTYN Kudriavtsev" <
> kudryavtsev.konstan...@gmail.com> wrote:
>
>> Dear Spark community,
>>
>> I faced the following issue when trying to access data on S3a; my code is
>> the following:
>>
>
f the EC2 instances in the cluster - and handles autoscaling
> very well - and at some point, you will want to autoscale.
>
> On Wed, Dec 30, 2015 at 1:08 PM, KOSTIANTYN Kudriavtsev <
> kudryavtsev.konstan...@gmail.com> wrote:
>
>> Chris,
>>
>> good question, as yo
> Hi Kostiantyn,
>
> Can you define those properties in hdfs-site.xml and make sure it is
> visible on the classpath when you spark-submit? It looks like a conf
> sourcing issue to me.
>
> Cheers,
>
> Sent from my iPhone
>
> On 30 Dec, 2015, at 1:59 pm, KOSTIAN
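If the properties in question are the S3a ones from earlier in the thread, a sketch of such a file (values are placeholders); put it in a directory on the classpath, e.g. the one pointed to by HADOOP_CONF_DIR:

<configuration>
  <property>
    <name>fs.s3a.access.key</name>
    <value>YOUR_ACCESS_KEY</value>
  </property>
  <property>
    <name>fs.s3a.secret.key</name>
    <value>YOUR_SECRET_KEY</value>
  </property>
</configuration>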
Hi,
try this one:
http://simpletoad.blogspot.com/2014/07/runing-spark-unit-test-on-windows-7.html
it's more about fixing a Windows-specific issue, but the code snippet gives the
general idea
just run the ETL and check the output with assert(s)
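In that spirit, a minimal self-contained sketch of "run the ETL locally and assert on the output"; the parsing step stands in for your real ETL:

import org.apache.spark.{SparkConf, SparkContext}

object EtlCheck {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setMaster("local[2]").setAppName("etl-check"))
    try {
      // stand-in ETL: parse "key,value" lines into (key, value) pairs
      val out = sc.parallelize(Seq("a,1", "b,2"))
        .map(_.split(','))
        .map(parts => (parts(0), parts(1).toInt))
        .collectAsMap()
      assert(out == Map("a" -> 1, "b" -> 2), s"unexpected ETL output: $out")
    } finally {
      sc.stop()
    }
  }
}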
On Jul 29, 2014, at 6:29 PM, soumick86 sdasgu...@dstsystems.com
Hi Sam,
I tried Spark on Cloudera a couple of months ago, and there were a lot of issues…
Fortunately, I was able to switch to Hortonworks and everything works perfectly. In
general, you can try two modes: standalone and via YARN. Personally, I found
using Spark via YARN more comfortable, especially for
early builds of Spark on Windows, I
admittedly had Hadoop on Windows running as well and had not run into this
particular issue.
On Wed, Jul 2, 2014 at 12:04 PM, Kostiantyn Kudriavtsev
kudryavtsev.konstan...@gmail.com wrote:
No, I don’t
why do I need to have HDP installed? I don’t use
Hi all,
Could you please share your best practices on writing logs in Spark? I'm
running it on YARN, so when I check the logs I'm a bit confused…
Currently, I'm using System.err.println to put a message in the log and access it
via the YARN history server. But I don't like this way… I'd like to use
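For example, a minimal sketch with log4j (which Spark bundles) instead of System.err.println; the object and method names are placeholders:

import org.apache.log4j.Logger

object MyJob {
  // lazy + @transient so each executor JVM creates its own logger
  // instead of trying to serialize one inside a task closure
  @transient lazy val log: Logger = Logger.getLogger(getClass.getName)

  def process(record: String): Unit = {
    log.info(s"processing record: $record")
  }
}

Messages logged this way land in the driver/executor logs that YARN aggregates, e.g. viewable with yarn logs -applicationId <appId>.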
No, I don’t
why do I need to have HDP installed? I don’t use Hadoop at all and I’d like to
read data from the local filesystem
On Jul 2, 2014, at 9:10 PM, Denny Lee denny.g@gmail.com wrote:
By any chance do you have HDP 2.1 installed? You may need to install the
utils and update the env
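Assuming "the utils" means the usual winutils.exe workaround for running Spark/Hadoop code on Windows, the fix is typically (paths are placeholders):

rem put winutils.exe under C:\hadoop\bin, then point HADOOP_HOME at it:
set HADOOP_HOME=C:\hadoop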