You first need to build the Spark assembly jar with "sbt/sbt
assembly/assembly".

Then I usually edit python/run-tests and comment out the non-SQL tests:

#run_core_tests
run_sql_tests
#run_mllib_tests
#run_ml_tests
#run_streaming_tests

And then you can run "python/run-tests".
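
Putting it together, the whole flow looks roughly like this (a sketch,
assuming you run it from the root of a Spark checkout):

sbt/sbt assembly/assembly    # build the assembly jar first
# edit python/run-tests so only run_sql_tests is left uncommented
python/run-tests             # now runs just the SQL tests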

On Thu, Apr 23, 2015 at 1:17 PM, Olivier Girardot <
o.girar...@lateral-thoughts.com> wrote:

> What is the proper way to build and test the PySpark part of Spark?
>
> On Thu, Apr 23, 2015 at 10:06 PM, Olivier Girardot <
> o.girar...@lateral-thoughts.com> wrote:
>
>> Yep :) I'll open the JIRA when I've got the time.
>> Thanks
>>
>> On Thu, Apr 23, 2015 at 7:31 PM, Reynold Xin <r...@databricks.com> wrote:
>>
>>> Ah damn. We need to add it to the Python list. Would you like to give it
>>> a shot?
>>>
>>>
>>> On Thu, Apr 23, 2015 at 4:31 AM, Olivier Girardot <
>>> o.girar...@lateral-thoughts.com> wrote:
>>>
>>>> Yep, no problem, but I can't seem to find the coalesce function in
>>>> pyspark.sql.{*, functions, types or whatever :) }
>>>>
>>>> Olivier.
>>>>
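
For reference, the Scala side already has coalesce in
org.apache.spark.sql.functions, so the missing Python wrapper should
mostly just delegate over Py4J, like the other helpers in
python/pyspark/sql/functions.py. A rough, untested sketch of what it
could look like (the _to_seq and _to_java_column helper names are
assumptions based on the private utilities in that file; verify them
against the source before relying on this):

from pyspark import SparkContext
from pyspark.sql import Column
from pyspark.sql.functions import _to_java_column, _to_seq


def coalesce(*cols):
    """Return, per row, the first non-null value among the given columns."""
    sc = SparkContext._active_spark_context
    # Convert the Python Columns to a JVM Seq[Column] and call the
    # Scala-side functions.coalesce through the Py4J gateway.
    jc = sc._jvm.functions.coalesce(_to_seq(sc, cols, _to_java_column))
    return Column(jc)
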
>>>> On Mon, Apr 20, 2015 at 11:48 AM, Olivier Girardot <
>>>> o.girar...@lateral-thoughts.com> wrote:
>>>>
>>>> > A UDF might be a good idea, no?
>>>> >
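
In the meantime a UDF can indeed stand in for coalesce on 1.3. A minimal
sketch, assuming two nullable string columns a and b on a hypothetical
DataFrame df (adjust the return type to your schema):

from pyspark.sql.functions import udf
from pyspark.sql.types import StringType

# Two-argument coalesce emulated in Python: keep a when it is not null,
# otherwise fall back to b. Values round-trip through Python, so this
# is slower than a native expression.
coalesce_udf = udf(lambda a, b: a if a is not None else b, StringType())
result = df.withColumn("merged", coalesce_udf(df["a"], df["b"]))
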
>>>> > On Mon, Apr 20, 2015 at 11:17 AM, Olivier Girardot <
>>>> > o.girar...@lateral-thoughts.com> wrote:
>>>> >
>>>> >> Hi everyone,
>>>> >> let's assume I'm stuck on 1.3.0. How can I benefit from the *fillna*
>>>> >> API in PySpark? Is there any efficient alternative to mapping the
>>>> >> records myself?
>>>> >>
>>>> >> Regards,
>>>> >>
>>>> >> Olivier.
>>>> >>
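
As for surviving on 1.3.0: coalesce has been available to the SQL parser
for a while, so one fillna-like workaround that avoids mapping records in
Python is to rewrite the columns through selectExpr, which keeps everything
on the JVM side. A sketch (df, price and name are made-up placeholders;
untested against 1.3.0):

# Replace nulls in price with 0.0 and pass name through unchanged;
# coalesce here runs as a Catalyst expression, not as Python code.
filled = df.selectExpr("coalesce(price, 0.0) AS price", "name")
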
>>>> >
>>>>
>>>
>>>
