[jira] [Commented] (SPARK-24534) Add a way to bypass entrypoint.sh script if no spark cmd is passed

2018-06-14 Thread Ricardo Martinelli de Oliveira (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-24534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16513159#comment-16513159
 ] 

Ricardo Martinelli de Oliveira commented on SPARK-24534:


Guys, I sent a PR as a proposal for this Jira: 
[https://github.com/apache/spark/pull/21572]

Would you mind reviewing it?

> Add a way to bypass entrypoint.sh script if no spark cmd is passed
> --
>
> Key: SPARK-24534
> URL: https://issues.apache.org/jira/browse/SPARK-24534
> Project: Spark
>  Issue Type: Improvement
>  Components: Kubernetes
>Affects Versions: 2.3.0
>Reporter: Ricardo Martinelli de Oliveira
>Priority: Minor
>
> As an improvement to the entrypoint.sh script, I'd like to propose that the 
> Spark entrypoint do a passthrough when driver/executor/init is not the command 
> passed. Currently it raises an error.
> To be more specific, I'm referring to these lines:
> [https://github.com/apache/spark/blob/master/resource-managers/kubernetes/docker/src/main/dockerfiles/spark/entrypoint.sh#L113-L114]
> This allows the openshift-spark image to continue to function as a Spark 
> Standalone component, with custom configuration support, etc., without 
> compromising the existing method of configuring the cluster inside a 
> Kubernetes environment.
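The proposed passthrough can be sketched as a default case in the dispatch logic. This is a hypothetical illustration, not the actual entrypoint.sh code: the `run_entrypoint` function and the echo messages are invented for the example, and the real script would use `exec "$@"` where this sketch echoes.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the proposed passthrough behavior. Today the
# entrypoint errors out on unknown commands; the proposal is for the
# default case to hand the command through unchanged, so the image can
# also run e.g. Spark Standalone daemons.
run_entrypoint() {
  case "$1" in
    driver)
      echo "launching driver: $*" ;;
    executor)
      echo "launching executor: $*" ;;
    init)
      echo "running init: $*" ;;
    *)
      # Passthrough: hand the command off as-is. The real script would
      # `exec "$@"` here instead of echoing.
      echo "passthrough: $*" ;;
  esac
}

run_entrypoint driver --properties-file /opt/spark/conf/spark.properties
run_entrypoint /opt/spark/sbin/start-master.sh
```

With a passthrough default case, any non-driver/executor/init invocation simply runs the given command, which is what lets the same image double as a Standalone component.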



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-24534) Add a way to bypass entrypoint.sh script if no spark cmd is passed

2018-06-12 Thread Ricardo Martinelli de Oliveira (JIRA)


 [ 
https://issues.apache.org/jira/browse/SPARK-24534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ricardo Martinelli de Oliveira updated SPARK-24534:
---
Description: 
As an improvement to the entrypoint.sh script, I'd like to propose that the 
Spark entrypoint do a passthrough when driver/executor/init is not the command 
passed. Currently it raises an error.

To be more specific, I'm referring to these lines:

[https://github.com/apache/spark/blob/master/resource-managers/kubernetes/docker/src/main/dockerfiles/spark/entrypoint.sh#L113-L114]

This allows the openshift-spark image to continue to function as a Spark 
Standalone component, with custom configuration support, etc., without 
compromising the existing method of configuring the cluster inside a 
Kubernetes environment.

  was:
As an improvement to the entrypoint.sh script, I'd like to propose that the 
Spark entrypoint do a passthrough when driver/executor/init is not the command 
passed. Currently it raises an error.

To be more specific, I'm referring to these lines:

[https://github.com/apache/spark/blob/master/resource-managers/kubernetes/docker/src/main/dockerfiles/spark/entrypoint.sh#L113-L114]

This allows the openshift-spark image to continue to function as a Spark 
Standalone component, with custom configuration support, etc.


> Add a way to bypass entrypoint.sh script if no spark cmd is passed
> --
>
> Key: SPARK-24534
> URL: https://issues.apache.org/jira/browse/SPARK-24534
> Project: Spark
>  Issue Type: Improvement
>  Components: Kubernetes
>Affects Versions: 2.3.0
>Reporter: Ricardo Martinelli de Oliveira
>Priority: Minor
>
> As an improvement to the entrypoint.sh script, I'd like to propose that the 
> Spark entrypoint do a passthrough when driver/executor/init is not the command 
> passed. Currently it raises an error.
> To be more specific, I'm referring to these lines:
> [https://github.com/apache/spark/blob/master/resource-managers/kubernetes/docker/src/main/dockerfiles/spark/entrypoint.sh#L113-L114]
> This allows the openshift-spark image to continue to function as a Spark 
> Standalone component, with custom configuration support, etc., without 
> compromising the existing method of configuring the cluster inside a 
> Kubernetes environment.






[jira] [Updated] (SPARK-24534) Add a way to bypass entrypoint.sh script if no spark cmd is passed

2018-06-12 Thread Ricardo Martinelli de Oliveira (JIRA)


 [ 
https://issues.apache.org/jira/browse/SPARK-24534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ricardo Martinelli de Oliveira updated SPARK-24534:
---
Description: 
As an improvement to the entrypoint.sh script, I'd like to propose that the 
Spark entrypoint do a passthrough when driver/executor/init is not the command 
passed. Currently it raises an error.

To be more specific, I'm referring to these lines:

[https://github.com/apache/spark/blob/master/resource-managers/kubernetes/docker/src/main/dockerfiles/spark/entrypoint.sh#L113-L114]

This allows the openshift-spark image to continue to function as a Spark 
Standalone component, with custom configuration support, etc.

  was:
As an improvement to the entrypoint.sh script, I'd like to propose that the 
Spark entrypoint do a passthrough when driver/executor/init is not the command 
passed. Currently it raises an error.

To be more specific, I'm referring to these lines:

[https://github.com/apache/spark/blob/master/resource-managers/kubernetes/docker/src/main/dockerfiles/spark/entrypoint.sh#L113-L114]

This allows the openshift-spark image to continue to function as a Spark 
Standalone component, with custom configuration support, etc., but also 
double as an OpenShift spark-on-k8s image.


> Add a way to bypass entrypoint.sh script if no spark cmd is passed
> --
>
> Key: SPARK-24534
> URL: https://issues.apache.org/jira/browse/SPARK-24534
> Project: Spark
>  Issue Type: Improvement
>  Components: Kubernetes
>Affects Versions: 2.3.0
>Reporter: Ricardo Martinelli de Oliveira
>Priority: Minor
>
> As an improvement to the entrypoint.sh script, I'd like to propose that the 
> Spark entrypoint do a passthrough when driver/executor/init is not the command 
> passed. Currently it raises an error.
> To be more specific, I'm referring to these lines:
> [https://github.com/apache/spark/blob/master/resource-managers/kubernetes/docker/src/main/dockerfiles/spark/entrypoint.sh#L113-L114]
> This allows the openshift-spark image to continue to function as a Spark 
> Standalone component, with custom configuration support, etc.






[jira] [Created] (SPARK-24534) Add a way to bypass entrypoint.sh script if no spark cmd is passed

2018-06-12 Thread Ricardo Martinelli de Oliveira (JIRA)
Ricardo Martinelli de Oliveira created SPARK-24534:
--

 Summary: Add a way to bypass entrypoint.sh script if no spark cmd 
is passed
 Key: SPARK-24534
 URL: https://issues.apache.org/jira/browse/SPARK-24534
 Project: Spark
  Issue Type: Improvement
  Components: Kubernetes
Affects Versions: 2.3.0
Reporter: Ricardo Martinelli de Oliveira


As an improvement to the entrypoint.sh script, I'd like to propose that the 
Spark entrypoint do a passthrough when driver/executor/init is not the command 
passed. Currently it raises an error.

To be more specific, I'm referring to these lines:

[https://github.com/apache/spark/blob/master/resource-managers/kubernetes/docker/src/main/dockerfiles/spark/entrypoint.sh#L113-L114]

This allows the openshift-spark image to continue to function as a Spark 
Standalone component, with custom configuration support, etc., but also 
double as an OpenShift spark-on-k8s image.






[jira] [Created] (SPARK-23680) entrypoint.sh does not accept arbitrary UIDs, returning as an error

2018-03-14 Thread Ricardo Martinelli de Oliveira (JIRA)
Ricardo Martinelli de Oliveira created SPARK-23680:
--

 Summary: entrypoint.sh does not accept arbitrary UIDs, returning 
as an error
 Key: SPARK-23680
 URL: https://issues.apache.org/jira/browse/SPARK-23680
 Project: Spark
  Issue Type: Bug
  Components: Kubernetes
Affects Versions: 2.3.0
 Environment: OpenShift
Reporter: Ricardo Martinelli de Oliveira


OpenShift supports running pods with arbitrary UIDs 
([https://docs.openshift.com/container-platform/3.7/creating_images/guidelines.html#openshift-specific-guidelines]) 
to improve security. Although entrypoint.sh was written to support this 
feature, the script returns an error[1].

The issue is that the script uses getent to look up the passwd entry for the 
current UID and, if no entry is found, creates one in /etc/passwd. According 
to the getent man page:
{code:java}
EXIT STATUS
   One of the following exit values can be returned by getent:
  0 Command completed successfully.
  1 Missing arguments, or database unknown.
  2 One or more supplied key could not be found in the database.
  3 Enumeration not supported on this database.
{code}
And since the script begins with a "set -ex" command, it turns on debug 
tracing and aborts the script if any command pipeline returns an exit code 
other than 0.

That said, the line below must be changed to remove the "-e" flag from the 
set command:

https://github.com/apache/spark/blob/v2.3.0/resource-managers/kubernetes/docker/src/main/dockerfiles/spark/entrypoint.sh#L20

[1] https://github.com/apache/spark/blob/v2.3.0/resource-managers/kubernetes/docker/src/main/dockerfiles/spark/entrypoint.sh#L25-L34
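The failure mode above can be sketched as follows. This is a hypothetical illustration: `lookup_uid` and the fallback passwd line are invented for the example, not taken from entrypoint.sh. The point is that an alternative to dropping "-e" is to guard the getent call in a conditional, so the status-2 "key not found" result does not abort a `set -e` script.

```shell
#!/usr/bin/env bash
# Sketch of the failure mode: getent exits with status 2 when the key is
# not found, and under `set -e` an unguarded `getent` call aborts the
# script. Running the call as an `if` condition lets it fail safely.
set -e

lookup_uid() {
  local uid="$1" entry
  if entry="$(getent passwd "$uid")"; then
    echo "$entry"
  else
    # Key not found (getent exit status 2): synthesize a passwd entry,
    # similar in spirit to what entrypoint.sh does for arbitrary UIDs.
    echo "anonymous:x:${uid}:0:anonymous uid:/tmp:/bin/false"
  fi
}

lookup_uid 987654  # almost certainly absent, so the fallback entry is printed
```

Because the getent call sits in the `if` condition, its non-zero exit status is consumed by the conditional rather than triggering `set -e`, which is why the guard works even with "-e" left in place.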


