Saisai Shao created SPARK-17678:
---
Summary: Spark 1.6 Scala-2.11 repl doesn't honor
"spark.replClassServer.port"
Key: SPARK-17678
URL: https://issues.apache.org/jira/browse/SPARK-17678
Proj
[
https://issues.apache.org/jira/browse/SPARK-17637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15515836#comment-15515836
]
Saisai Shao commented on SPARK-17637:
-
[~zhanzhang] would you mind sharing more details about your
[
https://issues.apache.org/jira/browse/SPARK-17624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15512237#comment-15512237
]
Saisai Shao edited comment on SPARK-17624 at 9/22/16 5:36 AM:
--
I cannot
[
https://issues.apache.org/jira/browse/SPARK-17624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15512237#comment-15512237
]
Saisai Shao commented on SPARK-17624:
-
I cannot reproduce locally on my
> Flaky t
[
https://issues.apache.org/jira/browse/SPARK-17604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Saisai Shao updated SPARK-17604:
Issue Type: Sub-task (was: Improvement)
Parent: SPARK-17267
> Support purging aged f
[
https://issues.apache.org/jira/browse/SPARK-17604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Saisai Shao updated SPARK-17604:
Description:
Currently with SPARK-15698, FileStreamSource metadata log will be compacted
Saisai Shao created SPARK-17604:
---
Summary: Support purging aged file entry for FileStreamSource
metadata log
Key: SPARK-17604
URL: https://issues.apache.org/jira/browse/SPARK-17604
Project: Spark
[
https://issues.apache.org/jira/browse/SPARK-15698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15505799#comment-15505799
]
Saisai Shao commented on SPARK-15698:
-
I think [~rxin] set this target version before. I'm OK
[
https://issues.apache.org/jira/browse/SPARK-17566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15500522#comment-15500522
]
Saisai Shao commented on SPARK-17566:
-
I've already submitted a PR under SPARK-17512, since this JIRA
[
https://issues.apache.org/jira/browse/SPARK-17566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15500511#comment-15500511
]
Saisai Shao commented on SPARK-17566:
-
Shouldn't it be {{!isYarnCluster}}? Since we need to avoid
[
https://issues.apache.org/jira/browse/SPARK-17512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Saisai Shao updated SPARK-17512:
Component/s: YARN
> Specifying remote files for Python based Spark jobs in Yarn cluster m
[
https://issues.apache.org/jira/browse/SPARK-17512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15500178#comment-15500178
]
Saisai Shao commented on SPARK-17512:
-
This is due to some behavior changes during submitting spark
[
https://issues.apache.org/jira/browse/SPARK-17566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Saisai Shao closed SPARK-17566.
---
Resolution: Duplicate
> "--master yarn --deploy-mode cluster" gives "Launching P
[
https://issues.apache.org/jira/browse/SPARK-17566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15500145#comment-15500145
]
Saisai Shao commented on SPARK-17566:
-
Sorry I misunderstood your point, looks like it should
[
https://issues.apache.org/jira/browse/SPARK-17566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15500109#comment-15500109
]
Saisai Shao commented on SPARK-17566:
-
Can you confirm the above command you mentioned can be run
dalone?
>
> Why are there 2 ways to get information, REST API and this Sink?
>
>
> Best regards, Vladimir.
>
>
>
>
>
>
> On Mon, Sep 12, 2016 at 3:53 PM, Vladimir Tretyakov <
> vladimir.tretya...@sematext.com> wrote:
>
>> Hello Saisai Shao,
[
https://issues.apache.org/jira/browse/SPARK-17522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15495243#comment-15495243
]
Saisai Shao edited comment on SPARK-17522 at 9/16/16 3:19 AM:
--
[~sunrui] I
[
https://issues.apache.org/jira/browse/SPARK-17522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15495243#comment-15495243
]
Saisai Shao commented on SPARK-17522:
-
[~sunrui] I think the performance depends on different
Here is the YARN RM REST API for your reference (
http://hadoop.apache.org/docs/r2.7.0/hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html).
You can use these APIs to query applications running on YARN.
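A minimal sketch of querying that endpoint, assuming a hypothetical ResourceManager host (the real host and port come from your cluster):

```shell
# Build the RM REST endpoint for listing applications; host/port are placeholders.
RM="http://resourcemanager.example.com:8088"
URL="$RM/ws/v1/cluster/apps?states=RUNNING"
echo "$URL"
# Against a live cluster you would then fetch the JSON list:
#   curl -s "$URL"
```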
On Sun, Sep 11, 2016 at 11:25 PM, Jacek Laskowski wrote:
> Hi Vladimir,
>
>
[
https://issues.apache.org/jira/browse/SPARK-17340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15470553#comment-15470553
]
Saisai Shao commented on SPARK-17340:
-
I think what [~asukhenko] mentioned in the description is one
[
https://issues.apache.org/jira/browse/SPARK-17340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15455080#comment-15455080
]
Saisai Shao commented on SPARK-17340:
-
yarn-client and yarn-cluster have different ways to handle
[
https://issues.apache.org/jira/browse/SPARK-17340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15454777#comment-15454777
]
Saisai Shao edited comment on SPARK-17340 at 9/1/16 11:02 AM:
--
I think
[
https://issues.apache.org/jira/browse/SPARK-17340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15455076#comment-15455076
]
Saisai Shao commented on SPARK-17340:
-
You can try not killing the local {{yarn#client}} process after
[
https://issues.apache.org/jira/browse/SPARK-17340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15455055#comment-15455055
]
Saisai Shao commented on SPARK-17340:
-
I'm saying yarn cluster mode, I think here in my comment
[
https://issues.apache.org/jira/browse/SPARK-17340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15454777#comment-15454777
]
Saisai Shao commented on SPARK-17340:
-
I think in your scenario, it is because you killed local
This archive contains all the jars required by the Spark runtime. You could zip
all the jars under /jars, upload this archive to HDFS, and then
configure spark.yarn.archive with the path of this archive on HDFS.
On Sun, Aug 28, 2016 at 9:59 PM, Srikanth Sampath wrote:
>
[
https://issues.apache.org/jira/browse/SPARK-17204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15436222#comment-15436222
]
Saisai Shao commented on SPARK-17204:
-
Yes, I could reproduce this issue, but not consistently
[
https://issues.apache.org/jira/browse/SPARK-17204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15436213#comment-15436213
]
Saisai Shao commented on SPARK-17204:
-
I think to reflect the issue {{sc.range(0, 0)}} should
[
https://issues.apache.org/jira/browse/SPARK-17204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15436205#comment-15436205
]
Saisai Shao commented on SPARK-17204:
-
No, I tested in yarn cluster, not local mode.
> Spark 2.0
[
https://issues.apache.org/jira/browse/SPARK-17204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15436179#comment-15436179
]
Saisai Shao commented on SPARK-17204:
-
It works OK in my local test with latest build:
{code}
val
oud.com
>
>
> *From:* Sun Rui <sunrise_...@163.com>
> *Date:* 2016-08-24 22:17
> *To:* Saisai Shao <sai.sai.s...@gmail.com>
> *CC:* tony@tendcloud.com; user <user@spark.apache.org>
> *Subject:* Re: Can we redirect Spark shuffle spill data to HDFS or
>
ty, and also there is additional overhead of network I/O and replica
> of HDFS files.
>
> On Aug 24, 2016, at 21:02, Saisai Shao <sai.sai.s...@gmail.com> wrote:
>
> Spark Shuffle uses Java File related API to create local dirs and R/W
> data, so it can only be worked with OS suppor
Spark Shuffle uses the Java File API to create local dirs and read/write data,
so it only works with OS-supported filesystems. It doesn't leverage the Hadoop
FileSystem API, so writing to a Hadoop-compatible FS does not work.
Also, it is not suitable to write temporary shuffle data into a distributed
FS, this
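As a consequence, the shuffle/spill directories have to be local paths. A hedged spark-defaults.conf sketch (the mount points are examples, not from the thread):

```properties
# spark-defaults.conf: shuffle/spill directories must be OS-local paths,
# not HDFS URIs; example mount points shown.
spark.local.dir  /mnt/disk1/spark-local,/mnt/disk2/spark-local
```

Note that when running on YARN, the executors use the NodeManager's configured local directories rather than this setting.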
This looks like the Spark application is running into an abnormal status. From
the stack it means the driver could not send requests to the AM. Can you please
check whether the AM is reachable, or whether there are any other exceptions besides this one.
From my past test, Spark's dynamic allocation may run into some corner
[
https://issues.apache.org/jira/browse/SPARK-17209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Saisai Shao updated SPARK-17209:
Summary: Support manual credential updating in the run-time for Spark on
YARN (was: Support
[
https://issues.apache.org/jira/browse/SPARK-17209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Saisai Shao updated SPARK-17209:
Description:
Currently, Spark on YARN supports time-based credential renewal and updating
Saisai Shao created SPARK-17209:
---
Summary: Support manual credential updating in the run-time
Key: SPARK-17209
URL: https://issues.apache.org/jira/browse/SPARK-17209
Project: Spark
Issue Type
[
https://issues.apache.org/jira/browse/SPARK-17148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15430402#comment-15430402
]
Saisai Shao commented on SPARK-17148:
-
I manually verified this by explicitly throwing
The implementations of the RDD API in Python and Scala are slightly
different, so the difference in the RDD lineage you printed is expected.
On Tue, Aug 16, 2016 at 10:58 AM, DEEPAK SHARMA wrote:
> Hi All,
>
>
> Below is the small piece of code in scala and
Saisai Shao created SPARK-17019:
---
Summary: Expose off-heap memory usage in various places
Key: SPARK-17019
URL: https://issues.apache.org/jira/browse/SPARK-17019
Project: Spark
Issue Type
[
https://issues.apache.org/jira/browse/AMBARI-18091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15414751#comment-15414751
]
Saisai Shao commented on AMBARI-18091:
--
Please help to review, [~sumitmohanty] [~jluniya], thanks
[
https://issues.apache.org/jira/browse/AMBARI-18091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Saisai Shao reassigned AMBARI-18091:
Assignee: Saisai Shao
> Use https url for Spark2 Service check when WireEncrypt
/resources/common-services/SPARK2/2.0.0/package/scripts/service_check.py
565f924
Diff: https://reviews.apache.org/r/50945/diff/
Testing
---
Manual verification is done.
Thanks,
Saisai Shao
[
https://issues.apache.org/jira/browse/SPARK-16966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15414481#comment-15414481
]
Saisai Shao commented on SPARK-16966:
-
Here is the code in {{SparkSubmitArguments}} to handle
[
https://issues.apache.org/jira/browse/SPARK-16966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15413201#comment-15413201
]
Saisai Shao commented on SPARK-16966:
-
Yes, agreed. A better way is to handle this app name thing
[
https://issues.apache.org/jira/browse/SPARK-16944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15411299#comment-15411299
]
Saisai Shao commented on SPARK-16944:
-
Does Mesos have a concept similar to a YARN container? Also
1. Standalone mode doesn't support accessing kerberized Hadoop, simply
because it lacks a mechanism to distribute delegation tokens via the cluster
manager.
2. For the HBase token fetching failure, I think you have to run kinit to
generate a TGT before starting the Spark application (
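Beyond a manual kinit, recent Spark-on-YARN versions let you configure a keytab-based login so Spark renews tokens itself. A sketch with placeholder principal and keytab path:

```properties
# spark-defaults.conf sketch (placeholder values): Spark logs in from the
# keytab and renews credentials itself, instead of relying on a prior kinit.
spark.yarn.principal  user@EXAMPLE.COM
spark.yarn.keytab     /etc/security/keytabs/user.keytab
```

The same pair can be passed to spark-submit via --principal and --keytab.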
[
https://issues.apache.org/jira/browse/SPARK-16914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15411159#comment-15411159
]
Saisai Shao edited comment on SPARK-16914 at 8/8/16 1:48 AM:
-
So from your
[
https://issues.apache.org/jira/browse/SPARK-16914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15411159#comment-15411159
]
Saisai Shao commented on SPARK-16914:
-
So from your description, is this exception mainly due
I guess you're referring to the Spark assembly uber jar. In Spark 2.0,
there's no uber jar; instead there's a jars folder which contains all the jars
required at run time. For the end user it is transparent; the way to
submit a Spark application is still the same.
On Wed, Aug 3, 2016 at 4:51 PM,
[
https://issues.apache.org/jira/browse/SPARK-16871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Saisai Shao updated SPARK-16871:
Summary: Support getting HBase tokens from multiple clusters dynamically
(was: Support getting
Saisai Shao created SPARK-16871:
---
Summary: Support getting HBase tokens from multiple clusters and
dynamically
Key: SPARK-16871
URL: https://issues.apache.org/jira/browse/SPARK-16871
Project: Spark
Using the dominant resource calculator instead of the default resource calculator
will get you the expected vcores. Basically, by default YARN does
not honor CPU cores as a resource, so you will always see vcores as 1 no
matter what number of cores you set in Spark.
On Wed, Aug 3, 2016 at 12:11 PM,
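Concretely, for the CapacityScheduler this is the setting being referred to:

```xml
<!-- capacity-scheduler.xml: make YARN account for CPU as well as memory -->
<property>
  <name>yarn.scheduler.capacity.resource-calculator</name>
  <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
</property>
```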
[
https://issues.apache.org/jira/browse/SPARK-16864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15405487#comment-15405487
]
Saisai Shao commented on SPARK-16864:
-
A programmatic way to get the Spark version is to call {{SparkContext
[
https://issues.apache.org/jira/browse/SPARK-14453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15405112#comment-15405112
]
Saisai Shao commented on SPARK-14453:
-
If you want to fix this issue, it would be better to target
.
Thanks,
Saisai Shao
with my hotfix the config
> > will remain -Dhdp.version={{hdp_full_version}}.
>
> Saisai Shao wrote:
> From my understanding, you mean that in the params.py we should also take
> care of {{hdp_full_version}} if Ambari is upgraded from a lower version. Can
> you please explain more
get the specific version of Ambari and how to upgrade to the specific
version?
- Saisai
---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/50594/#review144425
--
---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/50594/#review144418
-------
On Aug. 1, 2016, 1:22 a.m., Saisai Shao wrote:
>
> -
e addition of -Dhdp.version should also be under condition
> > check_stack_feature(StackFeature.SPARK_JAVA_OPTS_SUPPORT,
> > effective_version).
> >
> > I assume -Dhdp.version is to be added only for HDP-2.3 and below.
>
> Saisai Shao wrote:
>
-------
On Aug. 1, 2016, 1:22 a.m., Saisai Shao wrote:
>
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/50594/
> ---
>
> java.lang.NoClassDefFoundError: spray/json/JsonReader
>
> at
> com.memsql.spark.pushdown.MemSQLPhysicalRDD$.fromAbstractQueryTree(MemSQLPhysicalRDD.scala:95)
>
> at
> com.memsql.spark.pushdown.MemSQLPushdownStrategy.apply(MemSQLPushdownStrategy.scala:49)
>
[
https://issues.apache.org/jira/browse/SPARK-16815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15401566#comment-15401566
]
Saisai Shao edited comment on SPARK-16815 at 8/1/16 6:01 AM:
-
From
[
https://issues.apache.org/jira/browse/SPARK-16815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15401566#comment-15401566
]
Saisai Shao commented on SPARK-16815:
-
From my understanding you can use
{c
[
https://issues.apache.org/jira/browse/SPARK-16817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15401432#comment-15401432
]
Saisai Shao commented on SPARK-16817:
-
What's the difference compared to using a ramdisk to store shuffle
/test/python/stacks/2.3/SPARK/test_spark_thrift_server.py
a1abdfa
Diff: https://reviews.apache.org/r/50594/diff/
Testing
---
Manual test with different scenarios:
1. Fresh install of HDP 2.3.6, 2.4.3, 2.5.0
2. Upgrade from 2.3.6 to 2.5.0.
3. Downgrade from 2.5.0 to 2.3.6.
Thanks,
Saisai
>action="delete"
> >)
Done
- Saisai
-------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/50594/#review144288
-------
On July 31, 2016, 3:44 a.m., Saisai Shao wrote:
>
> -
/test/python/stacks/2.3/SPARK/test_spark_thrift_server.py
a1abdfa
Diff: https://reviews.apache.org/r/50594/diff/
Testing
---
Manual test with different scenarios:
1. Fresh install of HDP 2.3.6, 2.4.3, 2.5.0
2. Upgrade from 2.3.6 to 2.5.0.
3. Downgrade from 2.5.0 to 2.3.6.
Thanks,
Saisai
[
https://issues.apache.org/jira/browse/AMBARI-17954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398740#comment-15398740
]
Saisai Shao commented on AMBARI-17954:
--
CC [~sumitmohanty] [~jluniya], please help to review
Saisai Shao created AMBARI-17954:
Summary: Fix Spark hdp.version issues in upgrading and fresh
install
Key: AMBARI-17954
URL: https://issues.apache.org/jira/browse/AMBARI-17954
Project: Ambari
[
https://issues.apache.org/jira/browse/SPARK-16085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15395705#comment-15395705
]
Saisai Shao commented on SPARK-16085:
-
Unfortunately, there's no such configuration for Spark
[
https://issues.apache.org/jira/browse/SPARK-16708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15393377#comment-15393377
]
Saisai Shao commented on SPARK-16708:
-
Looks similar to SPARK-11334, and I have a patch on it, though
Some useful information can be found here (
https://issues.apache.org/jira/browse/YARN-1842), though personally I
haven't hit this problem before.
Thanks
Saisai
On Tue, Jul 26, 2016 at 2:21 PM, Yu Wei wrote:
> Hi guys,
>
>
> When I tried to shut down spark application
[
https://issues.apache.org/jira/browse/SPARK-16723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15393033#comment-15393033
]
Saisai Shao commented on SPARK-16723:
-
So maybe this application has not yet started on the YARN side
[
https://issues.apache.org/jira/browse/SPARK-16723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15393003#comment-15393003
]
Saisai Shao commented on SPARK-16723:
-
Did you enable log aggregation in YARN? If not, this command
[
https://issues.apache.org/jira/browse/SPARK-16723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15393003#comment-15393003
]
Saisai Shao edited comment on SPARK-16723 at 7/26/16 1:36 AM:
--
Did you
[
https://issues.apache.org/jira/browse/SPARK-16723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15392980#comment-15392980
]
Saisai Shao commented on SPARK-16723:
-
{{yarn logs -applicationId application_1467990031555_0089
[
https://issues.apache.org/jira/browse/SPARK-16723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15392965#comment-15392965
]
Saisai Shao commented on SPARK-16723:
-
I think you should check the AM and executor logs to see
I think both 6066 and 7077 work. 6066 uses the REST way to
submit applications, while 7077 is the legacy way. From the user's perspective, it
should be transparent; there is no need to worry about the difference.
- *URL:* spark://hw12100.local:7077
- *REST URL:* spark://hw12100.local:6066
[
https://issues.apache.org/jira/browse/AMBARI-16864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15383435#comment-15383435
]
Saisai Shao commented on AMBARI-16864:
--
Done, patch updated.
> Add unit tests for Spark2 serv
The error stack is thrown from your code:
Caused by: scala.MatchError: [Ljava.lang.String;@68d279ec (of class
[Ljava.lang.String;)
at com.jd.deeplog.LogAggregator$.main(LogAggregator.scala:29)
at com.jd.deeplog.LogAggregator.main(LogAggregator.scala)
I think you should debug
[
https://issues.apache.org/jira/browse/SPARK-16540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Saisai Shao updated SPARK-16540:
Description:
Currently when running spark on yarn, jars specified with \--jars, \--packages
Saisai Shao created SPARK-16540:
---
Summary: Jars specified with --jars will added twice when running
on YARN
Key: SPARK-16540
URL: https://issues.apache.org/jira/browse/SPARK-16540
Project: Spark
[
https://issues.apache.org/jira/browse/SPARK-16534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15376449#comment-15376449
]
Saisai Shao commented on SPARK-16534:
-
Maybe I can give it a try if no one is working on this :). BTW do
[
https://issues.apache.org/jira/browse/SPARK-16534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15376432#comment-15376432
]
Saisai Shao commented on SPARK-16534:
-
Is there anyone working on this?
> Kafka 0.10 Python supp
[
https://issues.apache.org/jira/browse/SPARK-16521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15374612#comment-15374612
]
Saisai Shao commented on SPARK-16521:
-
I see, sorry about the duplication.
> Add supp
[
https://issues.apache.org/jira/browse/SPARK-16521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Saisai Shao closed SPARK-16521.
---
Resolution: Duplicate
> Add support of parameterized configuration for SparkC
[
https://issues.apache.org/jira/browse/SPARK-16522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15374583#comment-15374583
]
Saisai Shao commented on SPARK-16522:
-
Perhaps there's a race condition when exiting the Spark
[
https://issues.apache.org/jira/browse/SPARK-16521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Saisai Shao updated SPARK-16521:
Priority: Minor (was: Major)
> Add support of parameterized configuration for SparkC
Saisai Shao created SPARK-16521:
---
Summary: Add support of parameterized configuration for SparkConf
Key: SPARK-16521
URL: https://issues.apache.org/jira/browse/SPARK-16521
Project: Spark
Issue
[
https://issues.apache.org/jira/browse/SPARK-16428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15372885#comment-15372885
]
Saisai Shao commented on SPARK-16428:
-
bq. Spark detected those files with the above terminal output
[
https://issues.apache.org/jira/browse/SPARK-16435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15371975#comment-15371975
]
Saisai Shao commented on SPARK-16435:
-
OK, I will file a small patch to add the warning log about
Saisai Shao created SPARK-16435:
---
Summary: Behavior changes if initialExecutor is less than
minExecutor for dynamic allocation
Key: SPARK-16435
URL: https://issues.apache.org/jira/browse/SPARK-16435
[
https://issues.apache.org/jira/browse/SPARK-14743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Saisai Shao updated SPARK-14743:
Component/s: YARN
> Improve delegation token handling in secure clust
[
https://issues.apache.org/jira/browse/SPARK-16342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Saisai Shao closed SPARK-16342.
---
Resolution: Duplicate
> Add a new Configurable Token Manager for Spark Running on Y
[
https://issues.apache.org/jira/browse/SPARK-16342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15365535#comment-15365535
]
Saisai Shao commented on SPARK-16342:
-
Closing this JIRA as a duplicate and moving it to SPARK-14743.
>
[
https://issues.apache.org/jira/browse/SPARK-14743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15365534#comment-15365534
]
Saisai Shao commented on SPARK-14743:
-
Posted the design doc here and moved SPARK-16342 here.
> Impr
[
https://issues.apache.org/jira/browse/SPARK-14743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15365534#comment-15365534
]
Saisai Shao edited comment on SPARK-14743 at 7/7/16 3:18 AM:
-
Post design doc