[
https://issues.apache.org/jira/browse/TOREE-222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Chip Senkbeil resolved TOREE-222.
---------------------------------
Resolution: Resolved (was: Unresolved)
Fix Version/s: 0.1.0
> Makefile should provide an install option
> -----------------------------------------
>
> Key: TOREE-222
> URL: https://issues.apache.org/jira/browse/TOREE-222
> Project: TOREE
> Issue Type: Improvement
> Reporter: Chip Senkbeil
> Fix For: 0.1.0
>
>
> Similar to how {{sbt pack}} generates a makefile containing an install option
> that copies the jars and launch script to {{$HOME/local/bin}} and
> {{$HOME/local/lib}}, we should provide an option to install the kernel itself.
> What this really needs to do is generate the _kernel.json_ files that point
> to the {{sparkkernel}} script, and possibly copy that script and its
> associated assembly jar to a standard location.
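> A minimal sketch of what such an {{install}} target could look like (the
> {{PREFIX}}, jar name, kernel directory, and interpreter list below are all
> placeholders, not the real build layout):
> {code}
> # Hypothetical Makefile sketch: copy the launch script and assembly jar,
> # then generate one kernel.json per supported interpreter.
> PREFIX ?= $(HOME)/local
> INTERPRETERS := scala pyspark sparkr sql
>
> install:
> 	mkdir -p $(PREFIX)/bin $(PREFIX)/lib
> 	cp bin/sparkkernel $(PREFIX)/bin/
> 	cp target/sparkkernel-assembly.jar $(PREFIX)/lib/
> 	for i in $(INTERPRETERS); do \
> 	  mkdir -p $(HOME)/.ipython/kernels/spark-$$i; \
> 	  printf '{"display_name": "Spark (%s)", "argv": ["%s/bin/sparkkernel", "--profile", "{connection_file}", "--default-interpreter", "%s"]}\n' \
> 	    "$$i" "$(PREFIX)" "$$i" \
> 	    > $(HOME)/.ipython/kernels/spark-$$i/kernel.json; \
> 	done
> {code}
> A real target would also fill in the per-interpreter {{language}},
> {{codemirror_mode}}, and {{env}} fields shown in the files below.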
> In my old setup, I had four _kernel.json_ files, one per language supported
> by the kernel:
> {code:json}
> {
>   "display_name": "Spark 1.5.0 (Scala 2.10.4)",
>   "language": "scala",
>   "argv": [
>     "/Users/senkwich/local/bin/sparkkernel",
>     "--profile",
>     "{connection_file}",
>     "--default-interpreter",
>     "scala"
>   ],
>   "codemirror_mode": "scala",
>   "env": {
>     "JVM_OPT": "-Xms1024M -Xmx4096M -Dlog4j.logLevel=trace",
>     "MAX_INTERPRETER_THREADS": "16",
>     "SPARK_CONFIGURATION": "spark.cores.max=4",
>     "CAPTURE_STANDARD_OUT": "true",
>     "CAPTURE_STANDARD_ERR": "true",
>     "SEND_EMPTY_OUTPUT": "false",
>     "SPARK_HOME": "/opt/spark-1.5.0-bin-hadoop2.3"
>   }
> }
> {code}
> {code:json}
> {
>   "display_name": "Spark 1.5.0 (Python)",
>   "language": "python",
>   "argv": [
>     "/Users/senkwich/local/bin/sparkkernel",
>     "--profile",
>     "{connection_file}",
>     "--default-interpreter",
>     "pyspark"
>   ],
>   "codemirror_mode": "python",
>   "env": {
>     "JVM_OPT": "-Xms1024M -Xmx4096M -Dlog4j.logLevel=trace -cp ~/Downloads/spark-hive_2.10-1.5.1.jar",
>     "MAX_INTERPRETER_THREADS": "16",
>     "SPARK_CONFIGURATION": "spark.cores.max=4",
>     "CAPTURE_STANDARD_OUT": "true",
>     "CAPTURE_STANDARD_ERR": "true",
>     "SEND_EMPTY_OUTPUT": "false",
>     "SPARK_HOME": "/opt/spark-1.5.0-bin-hadoop2.3"
>   }
> }
> {code}
> {code:json}
> {
>   "display_name": "Spark 1.5.0 (R)",
>   "language": "r",
>   "argv": [
>     "/Users/senkwich/local/bin/sparkkernel",
>     "--profile",
>     "{connection_file}",
>     "--default-interpreter",
>     "sparkr"
>   ],
>   "codemirror_mode": "r",
>   "env": {
>     "JVM_OPT": "-Xms1024M -Xmx4096M -Dlog4j.logLevel=trace -cp ~/Downloads/spark-hive_2.10-1.5.1.jar",
>     "MAX_INTERPRETER_THREADS": "16",
>     "SPARK_CONFIGURATION": "spark.cores.max=4",
>     "CAPTURE_STANDARD_OUT": "true",
>     "CAPTURE_STANDARD_ERR": "true",
>     "SEND_EMPTY_OUTPUT": "false",
>     "SPARK_HOME": "/opt/spark-1.5.0-bin-hadoop2.3"
>   }
> }
> {code}
> {code:json}
> {
>   "display_name": "Spark 1.5.0 (SQL)",
>   "language": "sql",
>   "argv": [
>     "/Users/senkwich/local/bin/sparkkernel",
>     "--profile",
>     "{connection_file}",
>     "--default-interpreter",
>     "sql"
>   ],
>   "codemirror_mode": "sql",
>   "env": {
>     "JVM_OPT": "-Xms1024M -Xmx4096M -Dlog4j.logLevel=trace -cp ~/Downloads/spark-hive_2.10-1.5.1.jar",
>     "MAX_INTERPRETER_THREADS": "16",
>     "SPARK_CONFIGURATION": "spark.cores.max=4",
>     "CAPTURE_STANDARD_OUT": "true",
>     "CAPTURE_STANDARD_ERR": "true",
>     "SEND_EMPTY_OUTPUT": "false",
>     "SPARK_HOME": "/opt/spark-1.5.0-bin-hadoop2.3"
>   }
> }
> {code}
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)