Hi William,

Thanks for the reply. I think we're mostly on the same page.

First, let me correct what I hope is a minor point. When I hit this problem, my lambda function (which I'll refer to generically as a callback function) did not catch any exceptions at all; every exception would flow back to the Edgent runtime. I only added catch clauses to prove to myself that an exception was the reason for the stoppage. So your example is not exactly what I was doing, but I think it's close enough for the important part of the discussion.
The lambda function I was using was from one of the samples, the moving average function:

    (d, k) -> d.stream().reduce((a, b) -> a + b).get() / d.size()

Periodically the .get() call would throw a NoSuchElementException, and I think that is because the window had no tuples in it. I suspect the right thing for the application to do is to check that there actually are tuples before blindly running a reduce on them (I've put a rough sketch of what I mean at the bottom of this note, below the quoted reply). However, that's beside the point here. Exceptions are always possible, so in any programming model it's important to understand who is responsible for handling them, and which ones. Thus my question.

My opinion is that Edgent should catch all exceptions from app callbacks and continue processing the next "thing": throw away whatever it was working on and move on.

I'm not sure what it means to "then stop all batching for that window". When more tuples arrive, will the batch run again on the new tuples...and possibly result in an exception again? That is the behavior I would have expected. If the programming model NEVER expects the callbacks to throw exceptions (which I hope is the case), then the Edgent runtime can eat them and keep going. But if there are places in the programming model where these callback functions are supposed to throw exceptions, then things get more tricky in the runtime.

Assuming we agree on what "then stop all batching for that window" means, is it a big deal to fix? Is anyone already working on it?

thanks

On Fri, Jul 22, 2016 at 3:42 PM, William Marshall <[email protected]> wrote:

> Hi David,
>
> Thank you for joining the mailing list!
>
> >if the lambda function that processes a window into a new stream
> >encounters an exception but DOES NOT handle it, what is supposed to happen?
>
> By not handling it, I assume you mean something like the following where
> the exception is rethrown:
>
>     /* In the user's lambda */
>     try {
>         // Do some operation
>     }
>     catch (IllegalStateException e) {
>         throw e;
>     }
>
> In this case, what *does* happen, currently, is the exception will
> percolate up to the Edgent/Quarks Thread Scheduler and be caught there. I
> believe this kills all runtime threads, terminating the application. This
> is why you observe all tuple flow to stop after you removed exception
> catching from your lambda code.
>
> What *should* happen is that the windowing library catches the
> exception and then stops all batching for that window. This is more
> graceful than terminating all threads. The windowing library might look
> something like the following:
>
>     /* In PartitionImpl */
>     @Override
>     public synchronized void process() {
>         try {
>             window.getPartitionProcessor().accept(unmodifiableTuples, key);
>         }
>         catch (Exception e) {
>             // Clear the ScheduledExecutorService which handles the batch
>             // scheduling. No more batching for this window.
>         }
>     }
>
> >I rewrote my lambda function to catch exceptions, and once in a while the
> >catch clause gets control.
>
> Right, so if your lambda code catches all exceptions and doesn't rethrow
> them, the batch scheduler doesn't know that anything is wrong and will
> continue to schedule batches. This is why the catch clause gets control
> once in a while, and you continue to see tuples downstream from the window.
>
> >It is very inconvenient (from a programming model perspective) for the
> >lambda functions to have to do exception handling in simple cases
>
> Would you mind providing a brief code/pseudocode example of such a simple
> case?
>
> I hope this helps to answer your question.
>
> -Will
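
P.S. For concreteness, here is roughly the application-side guard I had in mind. It's only a sketch I haven't actually run against the sample, and I don't know offhand whether returning null from the aggregator suppresses the output tuple or pushes a null downstream, so treat the empty-window branch as illustrative:

    // Same (d, k) aggregator shape as the moving average sample, but it
    // checks for an empty window before reducing, so Optional.get() can
    // never throw NoSuchElementException.
    (d, k) -> {
        if (d.isEmpty()) {
            return null;   // assumption: "no tuples yet" means "no output for this batch"
        }
        double sum = d.stream().reduce((a, b) -> a + b).get();   // safe: d is non-empty
        return sum / d.size();
    }

Either way, that only papers over this one sample; the runtime question above still stands.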
--
Dave Booz
[email protected]
