David Abrahams said:
> "William E. Kempf" <[EMAIL PROTECTED]> writes:
>
>> David Abrahams said:
>>> "William E. Kempf" <[EMAIL PROTECTED]> writes:
>>>
>>>> David Abrahams said:
>>>>>>> ...and if it can't be default-constructed?
>>>>>>
>>>>>> That's what boost::optional<> is for ;).
>>>>>
>>>>> Yeeeh. Once the async_call returns, you have a value, and should be
>>>>> able to count on it. You shouldn't get back an object whose
>>>>> invariant allows there to be no value.
>>>>
>>>> I'm not sure I can interpret the "yeeeh" part. Do you think there's
>>>> still an issue to discuss here?
>>>
>>> Yes. Yeeeeh means I'm uncomfortable with asking people to get
>>> involved with complicated state like "it's there or it isn't there"
>>> for something as conceptually simple as a result returned from
>>> waiting on a thread function to finish.
>>
>> OK, *if* I'm totally understanding you now, I don't think the issue
>> you see actually exists. The invariant of optional<> may allow
>> there to be no value, but the invariant of a future/async_result
>> doesn't allow this *after the invocation has completed*. (Actually,
>> there is one case where this might occur, and that's when the
>> invocation throws an exception, if we add the async exception
>> functionality that people want here. But in this case what happens is
>> a call to res.get(), or whatever name we use, will throw an
>> exception.) The optional<> is just an implementation detail that
>> allows you to not have to use a type that's default constructible.
>
> It doesn't matter if the semantics of future ensure that the optional
> is always filled in; returning an object whose class invariant is more
> complicated than the actual intended result complicates life for the
> user. The result of a future leaves it and propagates through other
> parts of the program where the invariant established by future isn't as
> obvious. Returning an optional<double> where a double is intended is
> akin to returning a vector<double> that has only one element. Use the
> optional internally to the future if that's what you need to do. The
> user shouldn't have to mess with it.
>
>> If, on the other hand, you're concerned about the uninitialized state
>> prior to invocation... we can't have our cake and eat it too, and since
>> the value is meaningless prior to invocation anyway, I'd rather allow
>> the solution that doesn't require default constructible types.
>
> I don't care if you have an "uninitialized" optional internally to the
> future. The point is to encapsulate that mess so the user doesn't have
> to look at it, read its documentation, etc.

I think there's some serious misunderstanding here. I never said the user
would use optional<> directly; I said I'd use it in the implementation of
this "async" concept.
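To make that concrete, here's a rough sketch of the kind of encapsulation
I have in mind (the names, the locking, and the error handling are purely
illustrative, not a proposed interface). The optional<> lives entirely
inside the future; the only thing the user ever gets out of get() is a
plain R, or an exception if the invocation failed:

    #include <stdexcept>
    #include <boost/optional.hpp>
    #include <boost/thread/mutex.hpp>
    #include <boost/thread/condition.hpp>

    // Purely illustrative; not a proposed interface.
    template <typename R>
    class future
    {
    public:
        future() : m_failed(false) {}

        // Called by the executor machinery when the invocation completes.
        void set_value(const R& r)
        {
            boost::mutex::scoped_lock lock(m_mutex);
            m_value = r;
            m_ready.notify_all();
        }

        // Called by the executor machinery if the invocation throws.
        void set_failed()
        {
            boost::mutex::scoped_lock lock(m_mutex);
            m_failed = true;
            m_ready.notify_all();
        }

        // The user only ever sees a plain R (or an exception); the
        // optional<> never escapes. Blocks until the invocation is done.
        R get()
        {
            boost::mutex::scoped_lock lock(m_mutex);
            while (!m_value && !m_failed)
                m_ready.wait(lock);
            if (m_failed)
                throw std::runtime_error("async invocation failed");
            return *m_value;
        }

    private:
        boost::mutex m_mutex;
        boost::condition m_ready;
        boost::optional<R> m_value;
        bool m_failed;
    };

A real implementation would keep that state in a shared representation so
futures can be copied and returned from executors, and would want
something better than a generic runtime_error for reporting the failure,
but none of that changes the user-visible picture.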
>> I *think* I understand what you're saying. So, the interface would be
>> more something like:
>>
>>   future<double> f1 = thread_executor(foo, a, b, c);
>>   thread_pool pool;
>>   future<double> f2 = thread_pool_executor(pool, foo, d, e, f);
>>   double d = f1.get() + f2.get();
>>
>> This puts a lot more work on the creation of "executors" (they'll have
>> to obey a more complex interface design than just "anything that can
>> invoke a function object"), but I can see the merits. Is this
>> actually what you had in mind?
>
> Something very much along those lines. I would very much prefer to
> access the value of the future with its operator(), because we have lots
> of nice mechanisms that work on function-like objects; to use get you'd
> need to go through mem_fn/bind, and according to Peter we
> wouldn't be able to directly get such a function object from a future
> rvalue.

Hmmm... OK, more pieces are falling into place. I think the f() syntax
conveys something that's not the case, but I won't argue the utility of
it.

>>>> Only if you have a clearly defined "default case". Someone doing a
>>>> lot of client/server development might argue with you about thread
>>>> creation being a better default than RPC calling, or even
>>>> thread_pool usage.
>>>
>>> Yes, they certainly might. Check out the systems that have been
>>> implemented in Erlang with great success and get back to me ;-)
>>
>> Taking a chapter out of Alexander's book?
>
> Ooooh, touché! ;-)
>
> Actually I think it's only fair to answer speculation about what
> people will like with a reference to real, successful systems.

I'd agree with that, but the link you gave led me down a VERY long
research path, and I'm in a time crunch right now ;). Maybe a short code
example or a more specific link would have helped.

>> As for the alternate interface you're suggesting here, can you spell it
>> out for me?
>
> I'm not yet wedded to a particular design choice, though I am getting
> closer; I hope you don't think that's a cop-out. What I'm aiming for is
> a particular set of design requirements:

Not a cop-out, though I wasn't asking for a final design from you.

> 1. Simple syntax, for some definition of "simple".
> 2. A way, that looks like a function call, to create a future
> 3. A way, that looks like a function call, to get the value of a future

These requirements help me a lot. Thanks.

> [the strange grammatical construction of those last 2 is there to
> avoid ambiguity]
>
>>> I still think I'm onto something with the importance of being able to
>>> do functional concurrent programming. The minimum requirement for
>>> that is to be able to return a result; you can always bind all the
>>> arguments to fully curry function objects so that they take no
>>> arguments, but that seems needlessly limiting. 'Nuff said; if you
>>> can't see my point now I'm gonna let it lie.
>>
>> Don't let it lie, because I think the issue here is my not
>> understanding, not my disagreeing.
>
> OK, well I hope the above helps.

Definitely. And thanks again.

--
William E. Kempf
[EMAIL PROTECTED]