[ https://issues.apache.org/jira/browse/OOZIE-2606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15528686#comment-15528686 ]

Satish Subhashrao Saley commented on OOZIE-2606:
------------------------------------------------

1.
{quote}
I could not find an easy way to get the Spark version. Maybe applying the 
matchers on the grabbed result of
SparkSubmit.main(new String[] {"--version"});
is a bit cleaner.
{quote}
Spark writes the output of this command to 
[stderr|https://github.com/apache/spark/blob/branch-1.6/core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala#L93-L110], 
so I need to redirect that output to a file and parse it from there. I am able 
to redirect Spark's stderr into a file, but the Spark code calls System.exit 
after printing the version, which causes an exception:

{code}
java.lang.SecurityException: Intercepted System.exit(0)
    at org.apache.oozie.action.hadoop.LauncherSecurityManager.checkExit(LauncherMapper.java:667)
    at java.lang.Runtime.exit(Runtime.java:107)
    at java.lang.System.exit(System.java:971)
    at org.apache.spark.deploy.SparkSubmit$$anonfun$1.apply$mcVI$sp(SparkSubmit.scala:91)
    at org.apache.spark.deploy.SparkSubmit$.printVersionAndExit(SparkSubmit.scala:112)
    at org.apache.spark.deploy.SparkSubmitArguments.handle(SparkSubmitArguments.scala:436)
    at org.apache.spark.launcher.SparkSubmitOptionParser.parse(SparkSubmitOptionParser.java:172)
    at org.apache.spark.deploy.SparkSubmitArguments.<init>(SparkSubmitArguments.scala:98)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:117)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
    at org.apache.oozie.action.hadoop.SparkMain.getSparkVersion(SparkMain.java:485)
    at org.apache.oozie.action.hadoop.SparkMain.setSparkYarnJarsConf(SparkMain.java:460)
    at org.apache.oozie.action.hadoop.SparkMain.run(SparkMain.java:202)
    at org.apache.oozie.action.hadoop.LauncherMain.run(LauncherMain.java:60)
    at org.apache.oozie.action.hadoop.SparkMain.main(SparkMain.java:70)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.oozie.action.hadoop.LauncherMapper.map(LauncherMapper.java:232)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:450)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
    at org.apache.hadoop.mapred.LocalContainerLauncher$EventHandler.runSubtask(LocalContainerLauncher.java:373)
    at org.apache.hadoop.mapred.LocalContainerLauncher$EventHandler.runTask(LocalContainerLauncher.java:298)
    at org.apache.hadoop.mapred.LocalContainerLauncher$EventHandler.access$200(LocalContainerLauncher.java:184)
    at org.apache.hadoop.mapred.LocalContainerLauncher$EventHandler$1.run(LocalContainerLauncher.java:227)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
{code}
The System.exit call is intercepted by LauncherSecurityManager.
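Something like the following rough sketch is what I am experimenting with (it assumes LauncherSecurityManager, or some other security manager, is installed so that the exit is intercepted instead of killing the JVM; the "version x.y.z" banner pattern I match on is my assumption, not something Spark guarantees):
{code}
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Rough sketch, not the final patch: capture the version banner that
// SparkSubmit writes to stderr, tolerating the intercepted System.exit(0).
static String probeSparkVersion() throws Exception {
    PrintStream originalErr = System.err;
    ByteArrayOutputStream captured = new ByteArrayOutputStream();
    System.setErr(new PrintStream(captured, true, "UTF-8"));
    try {
        // prints the banner containing "version x.y.z" to stderr, then exits
        org.apache.spark.deploy.SparkSubmit.main(new String[] { "--version" });
    } catch (SecurityException expected) {
        // System.exit(0) was intercepted; the banner is already captured
    } finally {
        System.setErr(originalErr);
    }
    // assumed banner format; returns null if the pattern does not match
    Matcher m = Pattern.compile("version\\s+(\\S+)").matcher(captured.toString("UTF-8"));
    return m.find() ? m.group(1) : null;
}
{code}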

2. 
A simple way is to use SparkContext. 
{code}
// start a throwaway local SparkContext just to ask it for its version
SparkContext sparkContext = new SparkContext("local[1]", "spark-version-app");
version = sparkContext.version();
sparkContext.stop();
{code}
It works, but it seems expensive just to fetch the version. Also, as the log 
below shows, Spark does a lot of work along the way, which does not feel 
worth it for just the version.
{code}
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
16/09/27 21:35:40 INFO SparkContext: Running Spark version 2.0.0
16/09/27 21:35:40 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/09/27 21:35:41 INFO SecurityManager: Changing view acls to: saley
16/09/27 21:35:41 INFO SecurityManager: Changing modify acls to: saley
16/09/27 21:35:41 INFO SecurityManager: Changing view acls groups to: 
16/09/27 21:35:41 INFO SecurityManager: Changing modify acls groups to: 
16/09/27 21:35:41 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(saley); groups with view permissions: Set(); users  with modify permissions: Set(saley); groups with modify permissions: Set()
16/09/27 21:35:41 INFO Utils: Successfully started service 'sparkDriver' on port 62907.
16/09/27 21:35:41 INFO SparkEnv: Registering MapOutputTracker
16/09/27 21:35:41 INFO SparkEnv: Registering BlockManagerMaster
16/09/27 21:35:41 INFO DiskBlockManager: Created local directory at /private/var/folders/4c/wvbswpb14f71jc_f2287zhpm002v_4/T/blockmgr-b8fb61a6-99ab-4ce8-817e-6d925878a438
16/09/27 21:35:41 INFO MemoryStore: MemoryStore started with capacity 1731.6 MB
16/09/27 21:35:41 INFO SparkEnv: Registering OutputCommitCoordinator
16/09/27 21:35:42 INFO Utils: Successfully started service 'SparkUI' on port 4040.
16/09/27 21:35:42 INFO SparkUI: Bound SparkUI to localhost, and started at http://127.0.0.1:4040
16/09/27 21:35:42 INFO Executor: Starting executor ID driver on host localhost
16/09/27 21:35:42 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 62908.
16/09/27 21:35:42 INFO NettyBlockTransferService: Server created on 127.0.0.1:62908
16/09/27 21:35:42 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 127.0.0.1, 62908)
16/09/27 21:35:42 INFO BlockManagerMasterEndpoint: Registering block manager 127.0.0.1:62908 with 1731.6 MB RAM, BlockManagerId(driver, 127.0.0.1, 62908)
16/09/27 21:35:42 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 127.0.0.1, 62908)
2.0.0
16/09/27 21:35:43 INFO SparkUI: Stopped Spark web UI at http://127.0.0.1:4040
16/09/27 21:35:43 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
16/09/27 21:35:43 INFO MemoryStore: MemoryStore cleared
16/09/27 21:35:43 INFO BlockManager: BlockManager stopped
16/09/27 21:35:43 INFO BlockManagerMaster: BlockManagerMaster stopped
16/09/27 21:35:43 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
16/09/27 21:35:43 INFO SparkContext: Successfully stopped SparkContext
16/09/27 21:35:43 INFO ShutdownHookManager: Shutdown hook called
16/09/27 21:35:43 INFO ShutdownHookManager: Deleting directory /private/var/folders/4c/wvbswpb14f71jc_f2287zhpm002v_4/T/spark-ee0cf4e0-4080-478c-b968-a9ba3197634c
{code} 

3.
I guess reading the version from the jar manifest is less complicated. I am 
keeping that approach intact unless a majority settles on SparkContext.
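For reference, the manifest approach looks roughly like this sketch (it assumes the Spark jar that SparkContext is loaded from carries an Implementation-Version attribute in its manifest; which Spark artifacts actually set that attribute needs verifying):
{code}
import java.util.jar.Attributes;
import java.util.jar.JarFile;

// Rough sketch: locate the jar that SparkContext was loaded from and read
// the Implementation-Version attribute from its manifest. Returns null if
// the build did not populate that attribute.
static String readSparkVersionFromManifest() throws Exception {
    String jarPath = org.apache.spark.SparkContext.class
            .getProtectionDomain().getCodeSource().getLocation().getPath();
    try (JarFile jar = new JarFile(jarPath)) {
        return jar.getManifest().getMainAttributes()
                .getValue(Attributes.Name.IMPLEMENTATION_VERSION);
    }
}
{code}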

I will upload a patch tomorrow.
 

> Set spark.yarn.jars to fix Spark 2.0 with Oozie
> -----------------------------------------------
>
>                 Key: OOZIE-2606
>                 URL: https://issues.apache.org/jira/browse/OOZIE-2606
>             Project: Oozie
>          Issue Type: Bug
>          Components: core
>    Affects Versions: 4.2.0
>            Reporter: Jonathan Kelly
>            Assignee: Satish Subhashrao Saley
>              Labels: spark, spark2.0.0
>             Fix For: 4.3.0
>
>         Attachments: OOZIE-2606-2.patch, OOZIE-2606.patch
>
>
> Oozie adds all of the jars in the Oozie Spark sharelib to the 
> DistributedCache such that all jars will be present in the current working 
> directory of the YARN container (as well as in the container classpath). 
> However, this is not quite enough to make Spark 2.0 work, since Spark 2.0 by 
> default looks for the jars in assembly/target/scala-2.11/jars [1] (as if it 
> is a locally built distribution for development) and will not find them in 
> the current working directory.
> To fix this, we can set spark.yarn.jars to *.jar so that it finds the jars in 
> the current working directory rather than looking in the wrong place. [2]
> [1] 
> https://github.com/apache/spark/blob/v2.0.0-rc2/launcher/src/main/java/org/apache/spark/launcher/CommandBuilderUtils.java#L357
> [2] 
> https://github.com/apache/spark/blob/v2.0.0-rc2/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala#L476
> Note: This property will be ignored by Spark 1.x.


