[jira] [Commented] (SPARK-18357) YARN --files/--archives broke

2016-11-08 Thread Kishor Patil (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-18357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648158#comment-15648158
 ] 

Kishor Patil commented on SPARK-18357:
--

The patch is up with unit tests: https://github.com/apache/spark/pull/15810


> YARN --files/--archives broke
> -
>
> Key: SPARK-18357
> URL: https://issues.apache.org/jira/browse/SPARK-18357
> Project: Spark
>  Issue Type: Bug
>  Components: YARN
>Affects Versions: 2.1.0
>Reporter: Thomas Graves
>Priority: Blocker
>
> SPARK-18099 broke the --files and --archives options.  The check should be 
> == null instead of != null:
> {code}
> if (localizedPath != null) {
> +  throw new IllegalArgumentException(s"Attempt to add ($file) multiple times" +
> +    " to the distributed cache.")
> +}
> {code}
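As a rough illustration of the fix's intent, the sketch below (illustrative Java with hypothetical names; the real code is Scala in the YARN client) has distribute() return null for an already-added file, so the duplicate guard must fire on == null, not != null:

```java
import java.util.HashMap;
import java.util.Map;

public class DistCacheGuard {
    private final Map<String, String> seen = new HashMap<>();

    // Stand-in for the real localization step: returns the localized path for
    // a newly added file, or null when the file was already distributed.
    private String distribute(String file) {
        if (seen.containsKey(file)) {
            return null;
        }
        String path = "hdfs:///staging/" + file;
        seen.put(file, path);
        return path;
    }

    // Corrected guard: throw only when localizedPath is null (a duplicate),
    // not when it is non-null as in the broken check.
    public String addToCache(String file) {
        String localizedPath = distribute(file);
        if (localizedPath == null) {
            throw new IllegalArgumentException(
                "Attempt to add (" + file + ") multiple times to the distributed cache.");
        }
        return localizedPath;
    }
}
```

With the inverted check, every first-time add would throw, which is why --files and --archives broke outright.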



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-18357) YARN --files/--archives broke

2016-11-08 Thread Kishor Patil (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-18357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15647967#comment-15647967
 ] 

Kishor Patil commented on SPARK-18357:
--

My apologies for breaking the functionality. I will put up the patch soon.

> YARN --files/--archives broke
> -
>
> Key: SPARK-18357
> URL: https://issues.apache.org/jira/browse/SPARK-18357
> Project: Spark
>  Issue Type: Bug
>  Components: YARN
>Affects Versions: 2.1.0
>Reporter: Thomas Graves
>Priority: Blocker
>
> SPARK-18099 broke the --files and --archives options.  The check should be 
> == null instead of != null:
> {code}
> if (localizedPath != null) {
> +  throw new IllegalArgumentException(s"Attempt to add ($file) multiple times" +
> +    " to the distributed cache.")
> +}
> {code}






[jira] [Created] (SPARK-18099) Spark distributed cache should throw exception if same file is specified in both --files and --archives

2016-10-25 Thread Kishor Patil (JIRA)
Kishor Patil created SPARK-18099:


 Summary: Spark distributed cache should throw exception if same 
file is specified in both --files and --archives
 Key: SPARK-18099
 URL: https://issues.apache.org/jira/browse/SPARK-18099
 Project: Spark
  Issue Type: Bug
  Components: YARN
Affects Versions: 2.0.1, 2.0.0
Reporter: Kishor Patil


Recently, with the changes for [SPARK-14423] (handle jar conflict issue when 
uploading to distributed cache), yarn#client by default uploads all the 
--files and --archives in the assembly to the HDFS staging folder. It should 
throw an exception if the same file appears in both --files and --archives, 
since otherwise it is unclear whether to uncompress the file or leave it 
compressed.
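A minimal sketch of such a check (illustrative Java with hypothetical names, not the actual yarn#client code) might look like:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class DistCacheConflicts {
    // Reject any path listed under both options: an archive is uncompressed
    // on the node while a plain file is not, so one path cannot be both.
    public static void checkNoOverlap(List<String> files, List<String> archives) {
        Set<String> dup = new HashSet<>(files);
        dup.retainAll(new HashSet<>(archives));
        if (!dup.isEmpty()) {
            throw new IllegalArgumentException(
                "Same path specified in both --files and --archives: " + dup);
        }
    }
}
```

Failing fast at submission time is preferable to silently picking one behavior on the node.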






[jira] [Created] (SPARK-17979) Remove deprecated support for config SPARK_YARN_USER_ENV

2016-10-17 Thread Kishor Patil (JIRA)
Kishor Patil created SPARK-17979:


 Summary: Remove deprecated support for config SPARK_YARN_USER_ENV 
 Key: SPARK-17979
 URL: https://issues.apache.org/jira/browse/SPARK-17979
 Project: Spark
  Issue Type: Bug
Reporter: Kishor Patil









[jira] [Closed] (SPARK-15708) Tasks table in Detailed Stage page shows ip instead of hostname under Executor ID/Host

2016-10-05 Thread Kishor Patil (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-15708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kishor Patil closed SPARK-15708.

Resolution: Cannot Reproduce

> Tasks table in Detailed Stage page shows ip instead of hostname under 
> Executor ID/Host
> --
>
> Key: SPARK-15708
> URL: https://issues.apache.org/jira/browse/SPARK-15708
> Project: Spark
>  Issue Type: Bug
>  Components: Web UI
>Affects Versions: 2.0.0
>Reporter: Thomas Graves
>Priority: Minor
>
> If you go to the detailed Stages page in Spark 2.0, the Tasks table under the 
> Executor ID/Host column shows the host as an IP address rather than a 
> fully qualified hostname.
> The table above it (Aggregated Metrics by Executor) shows the "Address" as 
> the full hostname.
> I'm running Spark on YARN on the latest branch-2.






[jira] [Commented] (SPARK-15708) Tasks table in Detailed Stage page shows ip instead of hostname under Executor ID/Host

2016-10-05 Thread Kishor Patil (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-15708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15549699#comment-15549699
 ] 

Kishor Patil commented on SPARK-15708:
--

Unable to reproduce this. Can we close this? We can reopen it if we see it again.

> Tasks table in Detailed Stage page shows ip instead of hostname under 
> Executor ID/Host
> --
>
> Key: SPARK-15708
> URL: https://issues.apache.org/jira/browse/SPARK-15708
> Project: Spark
>  Issue Type: Bug
>  Components: Web UI
>Affects Versions: 2.0.0
>Reporter: Thomas Graves
>Priority: Minor
>
> If you go to the detailed Stages page in Spark 2.0, the Tasks table under the 
> Executor ID/Host column shows the host as an IP address rather than a 
> fully qualified hostname.
> The table above it (Aggregated Metrics by Executor) shows the "Address" as 
> the full hostname.
> I'm running Spark on YARN on the latest branch-2.






[jira] [Updated] (SPARK-17511) Dynamic allocation race condition: Containers getting marked failed while releasing

2016-09-12 Thread Kishor Patil (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-17511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kishor Patil updated SPARK-17511:
-
Description: 
While launching multiple containers from the thread pool, if the running 
executor count reaches or exceeds the target number of executors, the 
container is released and marked failed. This can cause many containers to be 
marked failed, eventually failing the overall job.

I will have a patch up soon after completing testing.

{panel:title=Typical Exception found in Driver marking the container to Failed}
{code}
java.lang.AssertionError: assertion failed
at scala.Predef$.assert(Predef.scala:156)
at 
org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$runAllocatedContainers$1.org$apache$spark$deploy$yarn$YarnAllocator$$anonfun$$updateInternalState$1(YarnAllocator.scala:489)
at 
org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$runAllocatedContainers$1$$anon$1.run(YarnAllocator.scala:519)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{code}
{panel}



  was:
While launching multiple containers from the thread pool, if the running 
executor count reaches or exceeds the target number of executors, the 
container is released and marked failed. This can cause many containers to be 
marked failed, eventually failing the overall job.

I will have a patch up soon after completing testing.




> Dynamic allocation race condition: Containers getting marked failed while 
> releasing
> ---
>
> Key: SPARK-17511
> URL: https://issues.apache.org/jira/browse/SPARK-17511
> Project: Spark
>  Issue Type: Bug
>  Components: YARN
>Affects Versions: 2.0.0, 2.0.1, 2.1.0
>Reporter: Kishor Patil
>
> While launching multiple containers from the thread pool, if the running 
> executor count reaches or exceeds the target number of executors, the 
> container is released and marked failed. This can cause many containers to 
> be marked failed, eventually failing the overall job.
> I will have a patch up soon after completing testing.
> {panel:title=Typical Exception found in Driver marking the container to 
> Failed}
> {code}
> java.lang.AssertionError: assertion failed
> at scala.Predef$.assert(Predef.scala:156)
> at 
> org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$runAllocatedContainers$1.org$apache$spark$deploy$yarn$YarnAllocator$$anonfun$$updateInternalState$1(YarnAllocator.scala:489)
> at 
> org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$runAllocatedContainers$1$$anon$1.run(YarnAllocator.scala:519)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> {panel}
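The race described above can be modeled with a small sketch (illustrative Java with hypothetical names, not the actual YarnAllocator code): a container that arrives after dynamic allocation has lowered the target should be released quietly rather than counted as a failure.

```java
// Toy model of the intended behavior: allocation requests race with changes
// to the target executor count made by dynamic allocation.
public class AllocatorState {
    private int running;
    private int target;

    public AllocatorState(int target) {
        this.target = target;
    }

    // Returns true if the container is launched; false if it is released
    // without being counted as a failed container.
    public synchronized boolean onContainerAllocated() {
        if (running >= target) {
            return false; // surplus container: release, do NOT mark failed
        }
        running++;
        return true;
    }

    // Dynamic allocation may lower the target while containers are in flight.
    public synchronized void setTarget(int newTarget) {
        target = newTarget;
    }
}
```

The buggy path instead asserted on the surplus container and marked it failed, inflating the failure count until the whole job failed.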






[jira] [Created] (SPARK-17511) Dynamic allocation race condition: Containers getting marked failed while releasing

2016-09-12 Thread Kishor Patil (JIRA)
Kishor Patil created SPARK-17511:


 Summary: Dynamic allocation race condition: Containers getting 
marked failed while releasing
 Key: SPARK-17511
 URL: https://issues.apache.org/jira/browse/SPARK-17511
 Project: Spark
  Issue Type: Bug
  Components: YARN
Affects Versions: 2.0.0, 2.0.1, 2.1.0
Reporter: Kishor Patil


While launching multiple containers from the thread pool, if the running 
executor count reaches or exceeds the target number of executors, the 
container is released and marked failed. This can cause many containers to be 
marked failed, eventually failing the overall job.

I will have a patch up soon after completing testing.








[jira] [Created] (SPARK-17443) SparkLauncher should allow stoppingApplication and need not rely on SparkSubmit binary

2016-09-07 Thread Kishor Patil (JIRA)
Kishor Patil created SPARK-17443:


 Summary: SparkLauncher should allow stoppingApplication and need 
not rely on SparkSubmit binary
 Key: SPARK-17443
 URL: https://issues.apache.org/jira/browse/SPARK-17443
 Project: Spark
  Issue Type: Improvement
  Components: Spark Submit
Affects Versions: 2.0.0
Reporter: Kishor Patil


Oozie wants SparkLauncher to support the following:

- When the Oozie launcher is killed, the launched Spark application should also be killed
- SparkLauncher should not have to rely on the spark-submit bash script







[jira] [Created] (SPARK-15951) Change Executors Page to use datatables to support sorting columns and searching

2016-06-14 Thread Kishor Patil (JIRA)
Kishor Patil created SPARK-15951:


 Summary: Change Executors Page to use datatables to support 
sorting columns and searching
 Key: SPARK-15951
 URL: https://issues.apache.org/jira/browse/SPARK-15951
 Project: Spark
  Issue Type: Improvement
  Components: Spark Core
Affects Versions: 2.0.0
Reporter: Kishor Patil
 Fix For: 2.1.0


Support column sorting and searching on the Executors page using jQuery 
DataTables and the REST API. Before this change, the Executors page was 
generated as hard-coded HTML and could not support search; sorting was also 
disabled if any application had more than one attempt. Supporting search and 
sort (over all entries rather than only the 20 shown on the current page) 
will greatly improve the user experience.


