Thanks for looking into this with me, Mars and Donald. It's nice to have experts on it. We are not completely stuck, since our models train successfully about 50% of the time, but we would like to find a way to order the jars and stabilize our process.
Mars, I was reviewing the code you referenced, the "jars for Spark" function, this morning to see how it ties in. That code is outside the custom binary distribution, correct? I could not find it in the distribution that is being used by the buildpack. Do you think the ordering of jars may need to happen in the Common.scala file instead of compute-classpath.sh? (A rough sketch of what I have in mind is below, after the quoted thread.)

Best,
Shane

Shane Johnson | LIFT IQ
Founder | CEO
www.liftiq.com | [email protected]
mobile: (801) 360-3350
LinkedIn: https://www.linkedin.com/in/shanewjohnson/ | Twitter: https://twitter.com/SWaldenJ | Facebook: https://www.facebook.com/shane.johnson.71653

On Fri, Mar 9, 2018 at 6:59 PM, Mars Hall <[email protected]> wrote:

> Correction, it's this "jars for Spark" function:
> https://github.com/apache/predictionio/blob/develop/tools/src/main/scala/org/apache/predictionio/tools/Common.scala#L105
>
> On Fri, Mar 9, 2018 at 17:54 Mars Hall <[email protected]> wrote:
>
>> It looks like this Scala function is the source of that jars list:
>> https://github.com/apache/predictionio/blob/develop/tools/src/main/scala/org/apache/predictionio/tools/Common.scala#L81
>>
>> On Fri, Mar 9, 2018 at 17:42 Mars Hall <[email protected]> wrote:
>>
>>> Where does the classpath in spark-submit originate? Is
>>> compute-classpath.sh not the source?
>
> --
> Mars Hall
> 415-818-7039
> Customer Facing Architect
> Salesforce Platform / Heroku
> San Francisco, California
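For illustration, here is a rough sketch of the kind of change I have in mind for Common.scala. It assumes (and this is only an assumption based on the links above) that the "jars for Spark" function builds a list of jar files that eventually ends up on the spark-submit classpath; the helper name below is hypothetical, not the project's actual API:

    import java.io.File

    object JarOrderingSketch {
      // Hypothetical helper: sort whatever jars the "jars for Spark"
      // function collects by file name, so the classpath handed to
      // spark-submit is deterministic instead of depending on
      // filesystem listing order.
      def orderedJarsForSpark(jars: Seq[File]): Seq[File] =
        jars.sortBy(_.getName)

      def main(args: Array[String]): Unit = {
        val discovered = Seq(new File("lib/spark-sql.jar"), new File("lib/guava.jar"))
        println(orderedJarsForSpark(discovered).map(_.getName))
        // prints: List(guava.jar, spark-sql.jar)
      }
    }

If the ~50% failures really do come from listing order varying between builds, a stable sort like this, whether done in Common.scala or in compute-classpath.sh, should at least make the behavior reproducible so we can pin down the conflicting jars.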
