Thanks for the clarification, Sean, much appreciated!

On 06/10/2016 09:18, Sean Owen wrote:
Yes, you should do that. The examples, with one exception, do show this, and it's always been the intended behavior. I guess it's no surprise to me, because any 'context' object in any framework generally has to be shut down for reasons like this.

We need to update the one example. The twist is error handling, though: you need to stop() even if an exception occurs. That's easy enough with "finally", but I guess people don't do that. It'd be nice to get rid of this non-daemon thread if possible.
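For reference, the pattern looks something like this (a minimal sketch; the app name and local master are placeholders for illustration, not from the docs or this thread):

import org.apache.spark.sql.SparkSession

object StopExample {
  def main(args: Array[String]): Unit = {
    // master("local[*]") is only so the sketch runs standalone
    val spark = SparkSession.builder()
      .appName("stop-example")
      .master("local[*]")
      .getOrCreate()
    try {
      // application logic goes here; an exception thrown in this
      // block still reaches the stop() below
      println(spark.range(10).count())
    } finally {
      // always runs, so the context's non-daemon threads are shut
      // down whether the job succeeds or throws
      spark.stop()
    }
  }
}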

On Thu, Oct 6, 2016 at 9:02 AM Adrian Bridgett <adr...@opensignal.com> wrote:

    Just one question - what about errors? Should we be wrapping our
    entire code in a try...finally spark.stop() clause (as per
    http://spark.apache.org/docs/latest/programming-guide.html#unit-testing)?

    BTW the .stop() requirement was news to quite a few people here; maybe
    it'd be a good idea to shout more loudly about it in the initial
    quickstart/examples?

    Cheers,

    Adrian


--
Adrian Bridgett | Sysadmin Engineer, OpenSignal <http://www.opensignal.com>
_____________________________________________________
Office: 3rd Floor, The Angel Office, 2 Angel Square, London, EC1V 1NY
Phone: +44 777-377-8251
Skype: abridgett | Twitter: @adrianbridgett <http://twitter.com/adrianbridgett> | LinkedIn: <https://uk.linkedin.com/in/abridgett>
_____________________________________________________
