(float(9)/5)*x + 32 when x = 12.8

2020-04-04 Thread jane thorpe

PLATFORM
Zeppelin 0.9
SPARK_HOME = spark-3.0.0-preview2-bin-hadoop2.7

%spark.ipyspark

# workaround: set a job group before running the cell
sc.setJobGroup("a","b")
# Celsius value to convert
tempc = sc.parallelize([12.8])
# Fahrenheit = (9/5) * Celsius + 32
tempf = tempc.map(lambda x: (float(9)/5)*x + 32)
tempf.collect()

OUTPUT:
[55.040000000000006]
 
%spark.ipyspark

# workaround: set a job group before running the cell
sc.setJobGroup("a","b")
# Celsius values to convert
tempc = sc.parallelize([38.4,19.2,13.8,9.6])
# Fahrenheit = (9/5) * Celsius + 32
tempf = tempc.map(lambda x: (float(9)/5)*x + 32)
tempf.collect()
OUTPUT:
[101.12, 66.56, 56.84, 49.28]

calculator result = 55.04

Is the answer correct when x = 12.8?
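For reference, the exact arithmetic is (9/5) * 12.8 + 32 = 23.04 + 32 = 55.04;
the trailing digits in the Spark output come from binary floating-point
rounding, not from Spark itself. A minimal plain-Python check (no Spark
required) reproduces it:

# Same conversion in plain Python; Spark is not involved.
x = 12.8
f = (float(9)/5)*x + 32
# 12.8 and 9/5 have no exact IEEE-754 double representation, so the
# result carries a tiny rounding error in the last bit:
print(f)            # 55.040000000000006
print(round(f, 2))  # 55.04 -- matches the calculator

So the answer agrees with the calculator up to ordinary double-precision
rounding.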

 
jane thorpe
janethor...@aol.com


Serialization or internal functions?

2020-04-04 Thread email
Dear Community, 

 

Recently, I had to solve the following problem: "for every entry of a
Dataset[String], concat a constant value". To solve it, I used built-in
functions:

 

val data = Seq("A","b","c").toDS

 

scala> data.withColumn("valueconcat", concat(col(data.columns.head), lit(" "), lit("concat"))).select("valueconcat").explain()

== Physical Plan ==

LocalTableScan [valueconcat#161]

 

As an alternative, a much simpler version of the program is to use map, but
it adds a serialization step that does not seem to be present for the
version above:

 

scala> data.map(e => s"$e concat").explain

== Physical Plan ==

*(1) SerializeFromObject [staticinvoke(class
org.apache.spark.unsafe.types.UTF8String, StringType, fromString, input[0,
java.lang.String, true], true, false) AS value#92]

+- *(1) MapElements , obj#91: java.lang.String

   +- *(1) DeserializeToObject value#12.toString, obj#90: java.lang.String

      +- LocalTableScan [value#12]

 

Is this over-optimization or is this the right way to go?  

 

As a follow-up, is there any better API to get the one and only column
available in a Dataset[String] when using built-in functions?
"col(data.columns.head)" works but it is not ideal.

 

Thanks!



Re: spark-submit exit status on k8s

2020-04-04 Thread Masood Krohy
I'm not on the Spark dev team, so I cannot tell you why that priority was
chosen for the JIRA issue or whether anyone is about to finish the work on
that; I'll let others jump in if they know.


Just wanted to offer a potential solution so that you can move ahead in 
the meantime.


Masood

__

Masood Krohy, Ph.D.
Data Science Advisor|Platform Architect
https://www.analytical.works

On 4/4/20 7:49 AM, Marshall Markham wrote:


Thank you very much, Masood, for your fast response. Last question: is 
the current status in Jira representative of the status of the ticket 
within the project team? This seems like a big deal for the K8s 
implementation, and we were surprised to find it marked as priority 
low. Is there any discussion of picking up this work in the near future?


Thanks,

Marshall

*From:* Masood Krohy
*Sent:* Friday, April 3, 2020 9:34 PM
*To:* Marshall Markham; user

*Subject:* Re: spark-submit exit status on k8s

While you wait for a fix on that JIRA ticket, you may be able to add 
an intermediary step in your Airflow graph: call Spark's REST API 
after submitting the job, dig into the actual status of the 
application, and make a success/fail decision accordingly. You can 
make repeated calls to the REST API in a loop, with a few seconds' 
delay between calls, while the execution is in progress, until the 
application fails or succeeds.


https://spark.apache.org/docs/latest/monitoring.html#rest-api 
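For illustration, a minimal polling sketch along those lines (plain Python
with the requests library; the host, port, polling interval, and failure
handling are all assumptions to adapt, and the driver UI must be reachable
from wherever this runs):

import time
import requests

# Spark driver UI; host/port are deployment-specific (assumption).
BASE = "http://localhost:4040/api/v1"

def wait_for_spark_app(poll_seconds=5):
    """Poll the Spark REST API until the app's jobs finish; True on success."""
    while True:
        apps = requests.get(f"{BASE}/applications", timeout=10).json()
        if not apps:
            # Driver UI already gone; treat as failure (assumption).
            return False
        app_id = apps[0]["id"]
        jobs = requests.get(f"{BASE}/applications/{app_id}/jobs", timeout=10).json()
        statuses = {j["status"] for j in jobs}
        if "FAILED" in statuses:
            return False
        if jobs and statuses == {"SUCCEEDED"}:
            return True
        time.sleep(poll_seconds)  # still running; poll again shortly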



Hope this helps.

Masood

__
Masood Krohy, Ph.D.
Data Science Advisor|Platform Architect
https://www.analytical.works  


On 4/3/20 8:23 AM, Marshall Markham wrote:

Hi Team,

My team recently conducted a POC of Kubernetes/Airflow/Spark with
great success. The major concern we have about this system, after
the completion of our POC, is a behavior of spark-submit. When
called with a Kubernetes API endpoint as master, spark-submit seems
to always return exit status 0. This is obviously a major issue,
preventing us from conditioning job graphs on the success or
failure of our Spark jobs. I found Jira ticket SPARK-27697 under
the Apache issues covering this bug. The ticket is listed as minor
and does not seem to have had any recent activity. I would like to
upvote it and ask if there is anything I can do to move this
forward. This could be the one thing standing between my team and
our preferred batch workload implementation. Thank you.

*Marshall Markham*

Data Engineer

PrecisionLender, a Q2 Company



RE: spark-submit exit status on k8s

2020-04-04 Thread Marshall Markham
Thank you very much, Masood, for your fast response. Last question: is the 
current status in Jira representative of the status of the ticket within the 
project team? This seems like a big deal for the K8s implementation, and we were 
surprised to find it marked as priority low. Is there any discussion of picking 
up this work in the near future?

Thanks,
Marshall
