Re: Re: [boost] Re: A new boost::thread implementation?

2003-02-12 Thread William E. Kempf
Sorry for late reply... had a hard disk problem that prevented accessing
e-mail.

Peter Dimov said:
 William E. Kempf wrote:

 How about this compromise:

 template <typename R>
 class async_call
 {
 public:
 template <typename F>
 explicit async_call(const F& f)
 : m_func(f)
 {
 }

 void operator()()
 {
 mutex::scoped_lock lock(m_mutex);
 if (m_result)
 throw "can't call multiple times";

 operator() shouldn't throw; it's being used as a thread procedure, and
 the final verdict on these was to terminate() on exception, I believe.
 But you may have changed that. :-)

I'm not sure how the terminate() on exception semantics (which haven't
changed) apply, exactly.  But I assume you (and probably Dave) would
prefer this to just be an assert and documented undefined behavior.  I
have no problems with that.

 lock.unlock();
 R temp(m_func());
 lock.lock();
 m_result.reset(temp);
 m_cond.notify_all();
 }

 R result() const
 {
 boost::mutex::scoped_lock lock(m_mutex);
 while (!m_result)
 m_cond.wait(lock);

 This changes result()'s semantics to block until op() finishes; what
 happens if nobody calls op()? Or it throws an exception?

Changes the semantics?  I thought this was what was expected and
illustrated in every example thus far?  Failure to call op() is a user
error that will result in deadlock if result() is called.  The only other
alternative is to throw in result() if op() wasn't called, but I don't
think that's appropriate.  The exception question still needs work.  We
probably want result() to throw in this case, the question is what it will
throw.  IOW, do we build the mechanism for propagating exception types
across thread boundaries, or just throw a single generic exception type.
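[Editor's note: the exception-propagation question raised here can be sketched with a per-call exception slot that is captured in the worker and rethrown from result(). This is a minimal single-threaded sketch in modern C++ (std::exception_ptr did not exist in 2003; all names here are hypothetical), not the design the thread settled on:

```cpp
#include <exception>
#include <stdexcept>

// Sketch: capture whatever the call throws and rethrow it from
// result(), so the error crosses the thread boundary with its
// type intact instead of collapsing into one generic exception.
class call_slot
{
public:
    call_slot() : value_(0), ready_(false) {}

    template<class F> void run(F f)
    {
        try { value_ = f(); ready_ = true; }
        catch (...) { error_ = std::current_exception(); }
    }

    int result() const
    {
        if (error_) std::rethrow_exception(error_);
        if (!ready_) throw std::logic_error("call not completed");
        return value_;
    }

private:
    int value_;
    bool ready_;
    std::exception_ptr error_;
};
```

The same slot answers both halves of the question: result() rethrows the stored exception when the call failed, and throws its own error when the call never ran.]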

 return *m_result.get();
 }

 private:
 boost::function0<R> m_func;
 optional<R> m_result;
 mutable mutex m_mutex;
 mutable condition m_cond;
 };

 template <typename R>
 class future
 {
 public:
 template <typename F>
 explicit future(const F& f)
 : m_pimpl(new async_call<R>(f))
 {
 }

 future(const future<R>& other)
 {
 mutex::scoped_lock lock(m_mutex);

 I don't think you need a lock here, but I may be missing something.

I have to double check the implementation of shared_ptr, but I was
assuming all it did was to synchronize the ref count manipulation. 
Reads/writes of the data pointed at needed to be synchronized externally. 
If that's the case, the assignment here needs to be synchronized in order
to ensure it doesn't interrupt the access in op().

 m_pimpl = other.m_pimpl;
 }

 future<R>& operator=(const future<R>& other)
 {
 mutex::scoped_lock lock(m_mutex);

 --

 m_pimpl = other.m_pimpl;
 }

 void operator()()
 {
 (*get())();
 }

 R result() const
 {
 return get()->result();
 }

 private:
 shared_ptr<async_call<R> > get() const
 {
 mutex::scoped_lock lock(m_mutex);

 --

 return m_pimpl;
 }

 shared_ptr<async_call<R> > m_pimpl;
 mutable mutex m_mutex;
 };

 As for the big picture, ask Dave. ;-) I tend towards a refcounted
 async_call.

That's what future gives you, while async_call requires no dynamic
memory allocation, which is an important consideration for many uses.
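[Editor's note: the locking question above hinges on a fact worth stating plainly: shared_ptr synchronizes only its reference count, so concurrent reads and writes of the *same* shared_ptr object still need external locking, which is exactly what future's mutex provides. A reduced sketch of that pattern, using modern std types for brevity (hypothetical names):

```cpp
#include <memory>
#include <mutex>

// A shared_ptr guarded by a mutex: the ref count is internally
// synchronized, but assignment to and copying from the pointer
// object itself are serialized here, mirroring future's get()/copy.
template<class T>
class guarded_ptr
{
public:
    explicit guarded_ptr(std::shared_ptr<T> p) : p_(p) {}

    std::shared_ptr<T> get() const   // safe concurrent read
    {
        std::lock_guard<std::mutex> lock(m_);
        return p_;
    }

    void set(std::shared_ptr<T> p)   // safe concurrent write
    {
        std::lock_guard<std::mutex> lock(m_);
        p_ = p;
    }

private:
    mutable std::mutex m_;
    std::shared_ptr<T> p_;
};
```

get() returns a copy under the lock, so the caller holds its own reference and never touches the guarded object unlocked.]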

-- 
William E. Kempf


___
Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost



Re: Re: [boost] Re: A new boost::thread implementation?

2003-02-12 Thread William E. Kempf

Peter Dimov said:
 William E. Kempf wrote:

 It's not just the efficiencies that concern me with dynamic
 allocation.  It's the additional points of failure that occur in this
 case as well.  For instance, check out the article on embedded coding
 in the most recent CUJ (sorry, don't have the exact title handy).
 Embedded folks generally avoid dynamic memory whenever possible, so
 I'm a little uncomfortable with a solution that mandates that the
 implementation use dynamic allocation of memory.  At least, if that's
 the only solution provided.

 This allocation isn't much different than the allocation performed by
 pthread_create. An embedded implementation can simply impose an upper
 limit on the total number of async_calls and never malloc.

True enough.

-- 
William E. Kempf





Re: [boost] Re: A new boost::thread implementation?

2003-02-10 Thread Peter Dimov
David Abrahams wrote:
 Peter Dimov [EMAIL PROTECTED] writes:

 I am not saying that this is never useful, but syntax should target
 the typical scenario, not corner cases.

 Agreed.  I suppose that you'll say it doesn't target the typical
 scenario because of its confusability.  I wouldn't argue.  Any other
 reasons?

 What about:

 result(f)

Unqualified? ;-)

 It makes a lot more sense (to me) to reserve operator() for the
 Runnable concept, since that's what Boost.Threads currently uses.

 And prevent any other concepts from using operator()?  Surely you
 don't mean that.

No, I meant in that particular case.

We have three concepts: Runnable, Executor (executes Runnables), and
HasResult (for lack of a better name.) The AsyncCall concept I had in mind
is both Runnable and HasResult, so it can't use operator() for both.
x.result() or result(x) are both fine for HasResult.

Here's some compilable code, to put things in perspective:

#include <boost/detail/lightweight_mutex.hpp>
#include <boost/function.hpp>
#include <boost/bind.hpp>
#include <new>
#include <stdexcept>
#include <string>
#include <iostream>

template<class R> class async_call
{
public:

template<class F> explicit async_call(F f): f_(f), ready_(false)
{
}

void operator()()
{
mutex_type::scoped_lock lock(mutex_);
new(result_) R(f_());
ready_ = true;
}

R result() const
{
mutex_type::scoped_lock lock(mutex_);
if(ready_) return reinterpret_cast<R const &>(result_);
throw std::logic_error("async_call not completed");
}

private:

typedef boost::detail::lightweight_mutex mutex_type;

mutable mutex_type mutex_;
boost::function<R ()> f_;
char result_[sizeof(R)];
bool ready_;
};

int f(int x)
{
return x * 2;
}

int main()
{
// step 1: construct an async_call
async_call<int> call( boost::bind(f, 3) );

// 1a: attempt to obtain result before execution
try
{
std::cout << call.result() << std::endl;
}
catch(std::exception const & x)
{
std::cout << x.what() << std::endl;
}

// step 2: execute an async_call
call();

// step 3: obtain result
try
{
std::cout << call.result() << std::endl;
}
catch(std::exception const & x)
{
std::cout << x.what() << std::endl;
}
}




Re: [boost] Re: A new boost::thread implementation?

2003-02-10 Thread David Abrahams
Peter Dimov [EMAIL PROTECTED] writes:

 David Abrahams wrote:
 Peter Dimov [EMAIL PROTECTED] writes:

 I am not saying that this is never useful, but syntax should target
 the typical scenario, not corner cases.

 Agreed.  I suppose that you'll say it doesn't target the typical
 scenario because of its confusability.  I wouldn't argue.  Any other
 reasons?

 What about:

 result(f)

 Unqualified? ;-)

Good question ;-)

Maybe so, for the generic case.  If you know you're dealing with
Boost.Threads, then qualified should work.

 It makes a lot more sense (to me) to reserve operator() for the
 Runnable concept, since that's what Boost.Threads currently uses.

 And prevent any other concepts from using operator()?  Surely you
 don't mean that.

 No, I meant in that particular case.

Okay.

 We have three concepts: Runnable, Executor (executes Runnables), and
 HasResult (for lack of a better name.) The AsyncCall concept I had in mind
 is both Runnable and HasResult, so it can't use operator() for both.
 x.result() or result(x) are both fine for HasResult.

I see that you separate the initiation of the call from its creation.
I was going under the assumption that a call IS a call, i.e. verb, and
it starts when you construct it.  An advantage of this arrangement is
that you get simple invariants: you don't need to handle the "tried to
get the result before initiating the call" case.  Are there
disadvantages?

I'll also observe that tying initiation to creation (RAII ;-)) also
frees up operator() for other uses.  It is of course arguable whether
those other uses are good ones ;-)

-- 
Dave Abrahams
Boost Consulting
www.boost-consulting.com




Re: [boost] Re: A new boost::thread implementation?

2003-02-10 Thread Peter Dimov
David Abrahams wrote:
 Peter Dimov [EMAIL PROTECTED] writes:

 We have three concepts: Runnable, Executor (executes Runnables), and
 HasResult (for lack of a better name.) The AsyncCall concept I had
 in mind is both Runnable and HasResult, so it can't use operator()
 for both. x.result() or result(x) are both fine for HasResult.

 I see that you separate the initiation of the call from its creation.
 I was going under the assumption that a call IS a call, i.e. verb, and
 it starts when you construct it.  An advantage of this arrangement is
  that you get simple invariants: you don't need to handle the "tried to
  get the result before initiating the call" case.  Are there
 disadvantages?

No, the "tried to get the result but the call did not complete" case needs
to be handled anyway. The call may have been initiated but ended with an
exception.

This particular arrangement is one way to cleanly separate the
synchronization/result storage/exception translation from the actual
execution: async_call doesn't know anything about threads or thread pools.
Other alternatives are possible, too, but I think that we've reached the
point where we need actual code and not just made-up syntax.




Re: [boost] Re: A new boost::thread implementation?

2003-02-10 Thread David Abrahams
William E. Kempf [EMAIL PROTECTED] writes:

 Actually, there's another minor issue as well.  The user can call
 operator() and then let the async_call go out of scope without ever
 calling result().  Mayhem would ensue.  The two options for dealing
 with this are to either block in the destructor until the call has
 completed or to simply document this as undefined behavior.

If you want async_call to be copyable you'd need to have a handle-body
idiom anyway, and something associated with the thread could be used
to keep the body alive.

-- 
Dave Abrahams
Boost Consulting
www.boost-consulting.com




Re: Re: [boost] Re: A new boost::thread implementation?

2003-02-10 Thread William E. Kempf

 
 From: David Abrahams [EMAIL PROTECTED]
 Date: 2003/02/10 Mon AM 11:15:31 EST
 To: Boost mailing list [EMAIL PROTECTED]
 Subject: Re: [boost] Re: A new boost::thread implementation?
 
 William E. Kempf [EMAIL PROTECTED] writes:
 
  Actually, there's another minor issue as well.  The user can call
  operator() and then let the async_call go out of scope without ever
  calling result().  Mayhem would ensue.  The two options for dealing
  with this are to either block in the destructor until the call has
  completed or to simply document this as undefined behavior.
 
 If you want async_call to be copyable you'd need to have a handle-body
 idiom anyway, and something associated with the thread could be used
 to keep the body alive.

True enough.  The code provided by Mr. Dimov wasn't copyable, however.  Is it 
important enough to allow copying to be worth the issues involved with dynamic memory 
usage here (i.e. a point of failure in the constructor)?  I think it probably is, I 
just want to see how others feel.


William E. Kempf
[EMAIL PROTECTED]




Re: [boost] Re: A new boost::thread implementation?

2003-02-10 Thread David Abrahams
William E. Kempf [EMAIL PROTECTED] writes:

  From: David Abrahams [EMAIL PROTECTED] Date: 2003/02/10
 Mon AM 11:15:31 EST To: Boost mailing list [EMAIL PROTECTED]
 Subject: Re: [boost] Re: A new boost::thread implementation?
 
 William E. Kempf [EMAIL PROTECTED] writes:
 
  Actually, there's another minor issue as well.  The user can call
  operator() and then let the async_call go out of scope without
  ever calling result().  Mayhem would ensue.  The two options for
  dealing with this are to either block in the destructor until the
  call has completed or to simply document this as undefined
  behavior.
  If you want async_call to be copyable you'd need to have a
 handle-body idiom anyway, and something associated with the thread
 could be used to keep the body alive.

 True enough.  The code provided by Mr. Dimov wasn't copyable, however.
 Is it important enough to allow copying to be worth the issues
 involved with dynamic memory usage here (i.e. a point of failure in
 the constructor)?  I think it probably is, I just want to see how
 others feel.

I don't have an opinion.  The answer may depend on the relative
expense of acquiring the asynchronous executor resource (thread).

-- 
Dave Abrahams
Boost Consulting
www.boost-consulting.com




Re: Re: [boost] Re: A new boost::thread implementation?

2003-02-10 Thread Peter Dimov
William E. Kempf wrote:
 From: Peter Dimov [EMAIL PROTECTED]
 // step 2: execute an async_call
 call();

 This example, and the implementation above, are just complex
 synchronous calls.  I assume you really meant for either the
 constructor or this call to also take an Executor concept?

 This line could be

 boost::thread exec( ref(call) );

 or

 boost::thread_pool pool;
 pool.dispatch( ref(call) );

 I didn't have a prebuilt Boost.Threads library handy when I wrote
 the code (rather quickly) so I used a plain call.

 No, it couldn't be, because async_call isn't copyable ;).  But I get
 the point.

Note that I diligently used ref(call) above. ;-)
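[Editor's note: the ref(call) detail matters because an Executor takes its callable by value, so a noncopyable call object must be passed through a reference wrapper. A small sketch with std::ref standing in for boost::ref (all names hypothetical):

```cpp
#include <functional>

// A noncopyable callable, like the async_call under discussion.
struct call_object
{
    call_object() : done(false) {}
    call_object(const call_object&) = delete;
    call_object& operator=(const call_object&) = delete;
    void operator()() { done = true; }
    bool done;
};

// Stub Executor that takes its function object by value.
template<class F> void execute(F f) { f(); }

// execute(c) would not compile (it would copy a noncopyable type);
// execute(std::ref(c)) copies only the cheap wrapper and invokes
// the original object, so its state change is visible to the caller.
```
]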

 Since operator() is synchronized, I don't see a race... am I missing
 something?

 Sort of... I was thinking about the refactoring where you don't hold
 the mutex the entire time the function is being called.  But even
 with out the refactoring, there is some room for error:

 thread1: call()
 thread2: call()
 thread1: result() // which result?

Unspecified, but I don't think we can avoid that with the low-level
interface. High level wrappers that package creation and execution would be
immune to this problem.

 Actually, there's another minor issue as well.  The user can call
 operator() and then let the async_call go out of scope without ever
 calling result().  Mayhem would ensue.  The two options for dealing
 with this are to either block in the destructor until the call has
 completed or to simply document this as undefined behavior.

 Yes, good point, I missed that.

 I lean towards simple undefined behavior.  How do you feel about it?

Seems entirely reasonable. I don't think that we can fix this. Accessing
an object after it has been destroyed is simply an error; although this is
probably a good argument for making async_call copyable/counted so that the
copy being executed can keep the representation alive.
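[Editor's note: the copyable/counted idea can be shown directly: as long as the executing copy holds a shared reference to the body, the representation outlives the user's handle. A minimal sketch with std::shared_ptr (hypothetical names):

```cpp
#include <memory>

struct body { int value; };

// The "executor" keeps its own shared_ptr copy, so destroying the
// user-facing handle does not destroy the shared state.
std::shared_ptr<body> simulate_handle_death()
{
    std::shared_ptr<body> handle = std::make_shared<body>();
    handle->value = 42;
    std::shared_ptr<body> executing_copy = handle; // held by the executor
    handle.reset();                                // the handle "goes out of scope"
    return executing_copy;                         // the body is still alive
}
```
]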




Re: [boost] Re: A new boost::thread implementation?

2003-02-10 Thread David Abrahams
Peter Dimov [EMAIL PROTECTED] writes:

 Unspecified, but I don't think we can avoid that with the low-level
 interface. High level wrappers that package creation and execution would be
 immune to this problem.

Is there really a need for a low-level async_call interface?  After
all, the existing threads interface provides all the low-levelness
you can handle.

 Actually, there's another minor issue as well.  The user can call
  operator() and then let the async_call go out of scope without ever
 calling result().  Mayhem would ensue.  The two options for dealing
 with this are to either block in the destructor until the call has
 completed or to simply document this as undefined behavior.

 Yes, good point, I missed that.

 I lean towards simple undefined behavior.  How do you feel about it?

 Seems entirely reasonable. I don't think that we can fix this. Accessing
 an object after it has been destroyed is simply an error; although this is
 probably a good argument for making async_call copyable/counted so that the
 copy being executed can keep the representation alive.

Seems like this is also pointing at a high-level interface...

-- 
Dave Abrahams
Boost Consulting
www.boost-consulting.com




Re: Re: [boost] Re: A new boost::thread implementation?

2003-02-10 Thread William E. Kempf

 
 From: Peter Dimov [EMAIL PROTECTED]
 Date: 2003/02/10 Mon PM 12:54:28 EST
 To: Boost mailing list [EMAIL PROTECTED]
 Subject: Re: Re: [boost] Re: A new boost::thread implementation?
 
 William E. Kempf wrote:
  From: Peter Dimov [EMAIL PROTECTED]
  // step 2: execute an async_call
  call();
 
  This example, and the implementation above, are just complex
  synchronous calls.  I assume you really meant for either the
  constructor or this call to also take an Executor concept?
 
  This line could be
 
  boost::thread exec( ref(call) );
 
  or
 
  boost::thread_pool pool;
  pool.dispatch( ref(call) );
 
  I didn't have a prebuilt Boost.Threads library handy when I wrote
  the code (rather quickly) so I used a plain call.
 
  No, it couldn't be, because async_call isn't copyable ;).  But I get
  the point.
 
 Note that I diligently used ref(call) above. ;-)

Yeah, I noticed that when I received my own response.  Sorry about not reading it more 
carefully.

   Since operator() is synchronized, I don't see a race... am I missing
  something?
 
  Sort of... I was thinking about the refactoring where you don't hold
  the mutex the entire time the function is being called.  But even
   without the refactoring, there is some room for error:
 
  thread1: call()
  thread2: call()
  thread1: result() // which result?
 
 Unspecified, but I don't think we can avoid that with the low-level
 interface. High level wrappers that package creation and execution would be
 immune to this problem.

Agreed.
 
  Actually, there's another minor issue as well.  The user can call
   operator() and then let the async_call go out of scope without ever
  calling result().  Mayhem would ensue.  The two options for dealing
  with this are to either block in the destructor until the call has
  completed or to simply document this as undefined behavior.
 
  Yes, good point, I missed that.
 
  I lean towards simple undefined behavior.  How do you feel about it?
 
 Seems entirely reasonable. I don't think that we can fix this. Accessing
 an object after it has been destroyed is simply an error; although this is
 probably a good argument for making async_call copyable/counted so that the
 copy being executed can keep the representation alive.

Yes, agreed.  I'm just not sure which approach is more appropriate... to use dynamic 
allocation and ref-counting in the implementation or to simply require the user to 
strictly manage the lifetime of the async_call so that there are no issues with a truly 
asynchronous Executor accessing the return value after it's gone out of scope.
 


William E. Kempf
[EMAIL PROTECTED]




Re: [boost] Re: A new boost::thread implementation?

2003-02-10 Thread Peter Dimov
David Abrahams wrote:
 Peter Dimov [EMAIL PROTECTED] writes:

 Unspecified, but I don't think we can avoid that with the low-level
 interface. High level wrappers that package creation and execution
 would be immune to this problem.

 Is there really a need for a low-level async_call interface?  After
 all, the existing threads interface provides all the low-levelness
 you can handle.

I don't know. But the low-levelness contributed by async_call is unique, and
not covered by boost::thread at present. I'm thinking of the R f() -> { void
f(), R result() } transformation, with the associated synchronization and
(possibly) encapsulated exception transporting/translation from the
execution to result(). It's a tool that allows high-level interfaces to be
built. Whether people will want/need to build their own high-level
interfaces is another story.
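[Editor's note: the transformation described here, splitting R f() into a Runnable void operator() plus a HasResult R result() joined by stored state, can be sketched as follows (int stands in for the template parameter; names are hypothetical):

```cpp
#include <functional>
#include <stdexcept>

// R f()  ->  { void f(), R result() }: the Runnable half stores its
// outcome, and the HasResult half hands it back or reports "not run".
class split_call
{
public:
    explicit split_call(std::function<int()> f)
        : f_(f), value_(0), ready_(false) {}

    void operator()()          // the Runnable half
    {
        value_ = f_();
        ready_ = true;
    }

    int result() const         // the HasResult half
    {
        if (!ready_) throw std::logic_error("split_call not completed");
        return value_;
    }

private:
    std::function<int()> f_;
    int value_;
    bool ready_;
};
```

Nothing in split_call knows about threads or thread pools; any Executor that invokes the Runnable half completes the call, which is the decoupling being argued for.]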




Re: Re: [boost] Re: A new boost::thread implementation?

2003-02-10 Thread William E. Kempf
 From: David Abrahams [EMAIL PROTECTED]
 William E. Kempf [EMAIL PROTECTED] writes:
   I lean towards simple undefined behavior.  How do you feel about it?
 
 I have a feeling that I'm not being asked here, and maybe even that
 it's wasted breath because you've grown tired of my emphasis on a
 high-level interface, but there's a lot to be said for eliminating
 sources of undefined behavior, especially when it might have to do
 with the ordering of operations in a MT context.

No, I was asking anyone interested in responding, and you're certainly not wasting 
your breath.  I think I reached a compromise on these issues/questions, and would 
appreciate your response (it's in another post).
 
  Seems entirely reasonable. I don't think that we can fix this. Accessing
  an object after it has been destroyed is simply an error; although this is
  probably a good argument for making async_call copyable/counted so that the
  copy being executed can keep the representation alive.
 
  Yes, agreed.  I'm just not sure which approach is more
  appropriate... to use dynamic allocation and ref-counting in the
  implementation or to simply require the user to strictly manage the
  lifetime of the async_call so that there's no issues with a truly
  asynchronous Executor accessing the return value after it's gone out
  of scope.
 
 Allocation can be pretty darned efficient when it matters.  See my
 fast smart pointer allocator that Peter added to shared_ptr for
 example.

It's not just the efficiencies that concern me with dynamic allocation.  It's the 
additional points of failure that occur in this case as well.  For instance, check out 
the article on embedded coding in the most recent CUJ (sorry, don't have the exact 
title handy).  Embedded folks generally avoid dynamic memory whenever possible, so 
I'm a little uncomfortable with a solution that mandates that the implementation use 
dynamic allocation of memory.  At least, if that's the only solution provided.
 


William E. Kempf
[EMAIL PROTECTED]




Re: [boost] Re: A new boost::thread implementation?

2003-02-10 Thread Peter Dimov
David Abrahams wrote:
 Peter Dimov [EMAIL PROTECTED] writes:

 David Abrahams wrote:
 Peter Dimov [EMAIL PROTECTED] writes:

 Unspecified, but I don't think we can avoid that with the low-level
 interface. High level wrappers that package creation and execution
 would be immune to this problem.

 Is there really a need for a low-level async_call interface?  After
 all, the existing threads interface provides all the low-levelness
 you can handle.

 I don't know. But the low-levelness contributed by async_call is
 unique, and not covered by boost::thread at present. I'm thinking of
  the R f() -> { void f(), R result() } transformation, with the
 associated synchronization and (possibly) encapsulated exception
 transporting/translation from the execution to result().

 1. Are you saying you can't implement that in terms of existing thread
primitives and optional?

I can. On the other hand, I can implement the thread primitives and
optional, too. The point is that if, while building a high-level interface
implementation, we discover a useful low-level primitive that offers
greater expressive power (if less safety), we should consider exposing it,
too, unless there are strong reasons not to.

 2. Is that much different (or more valuable than)

   R f() -> { construct(), R result() }

which is what I suggested?

I don't know. Post the code. ;-) You can use stub synchronous threads and
thread_pools for illustration:

struct thread
{
template<class F> explicit thread(F f) { f(); }
};

struct thread_pool
{
template<class F> void dispatch(F f) { f(); }
};




Re: Re: [boost] Re: A new boost::thread implementation?

2003-02-10 Thread Peter Dimov
William E. Kempf wrote:

 How about this compromise:

 template <typename R>
 class async_call
 {
 public:
 template <typename F>
 explicit async_call(const F& f)
 : m_func(f)
 {
 }

 void operator()()
 {
 mutex::scoped_lock lock(m_mutex);
 if (m_result)
 throw "can't call multiple times";

operator() shouldn't throw; it's being used as a thread procedure, and the
final verdict on these was to terminate() on exception, I believe. But you
may have changed that. :-)

 lock.unlock();
 R temp(m_func());
 lock.lock();
 m_result.reset(temp);
 m_cond.notify_all();
 }

 R result() const
 {
 boost::mutex::scoped_lock lock(m_mutex);
 while (!m_result)
 m_cond.wait(lock);

This changes result()'s semantics to block until op() finishes; what
happens if nobody calls op()? Or it throws an exception?

 return *m_result.get();
 }

 private:
 boost::function0<R> m_func;
 optional<R> m_result;
 mutable mutex m_mutex;
 mutable condition m_cond;
 };

 template <typename R>
 class future
 {
 public:
 template <typename F>
 explicit future(const F& f)
 : m_pimpl(new async_call<R>(f))
 {
 }

 future(const future<R>& other)
 {
 mutex::scoped_lock lock(m_mutex);

I don't think you need a lock here, but I may be missing something.

 m_pimpl = other.m_pimpl;
 }

 future<R>& operator=(const future<R>& other)
 {
 mutex::scoped_lock lock(m_mutex);

--

 m_pimpl = other.m_pimpl;
 }

 void operator()()
 {
 (*get())();
 }

 R result() const
 {
 return get()->result();
 }

 private:
 shared_ptr<async_call<R> > get() const
 {
 mutex::scoped_lock lock(m_mutex);

--

 return m_pimpl;
 }

 shared_ptr<async_call<R> > m_pimpl;
 mutable mutex m_mutex;
 };

As for the big picture, ask Dave. ;-) I tend towards a refcounted
async_call.




Re: Re: [boost] Re: A new boost::thread implementation?

2003-02-10 Thread Peter Dimov
William E. Kempf wrote:

 It's not just the efficiencies that concern me with dynamic
 allocation.  It's the additional points of failure that occur in this
 case as well.  For instance, check out the article on embedded coding
 in the most recent CUJ (sorry, don't have the exact title handy).
  Embedded folks generally avoid dynamic memory whenever possible, so
 I'm a little uncomfortable with a solution that mandates that the
 implementation use dynamic allocation of memory.  At least, if that's
 the only solution provided.

This allocation isn't much different than the allocation performed by
pthread_create. An embedded implementation can simply impose an upper limit
on the total number of async_calls and never malloc.
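[Editor's note: the embedded-style alternative mentioned here amounts to a fixed-capacity pool: storage is reserved up front, allocation hands out a free slot or fails, and malloc is never called. A sketch of the idea (hypothetical names, not a Boost facility):

```cpp
#include <cstddef>

// Fixed-capacity pool: N slots of preallocated storage, so the number
// of simultaneous allocations has a hard upper bound and no malloc occurs.
template<class T, std::size_t N>
class fixed_pool
{
public:
    fixed_pool() { for (std::size_t i = 0; i < N; ++i) used_[i] = false; }

    T* allocate()
    {
        for (std::size_t i = 0; i < N; ++i)
            if (!used_[i])
            {
                used_[i] = true;
                return reinterpret_cast<T*>(storage_[i].bytes);
            }
        return 0; // pool exhausted: the imposed upper limit
    }

    void deallocate(T* p)
    {
        for (std::size_t i = 0; i < N; ++i)
            if (reinterpret_cast<T*>(storage_[i].bytes) == p)
                used_[i] = false;
    }

private:
    struct slot { alignas(T) unsigned char bytes[sizeof(T)]; };
    slot storage_[N];
    bool used_[N];
};
```

With such a pool behind operator new for async_call, "allocation" becomes a bounded table lookup, which answers the points-of-failure concern at the cost of a compile-time limit.]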




Re: [boost] Re: A new boost::thread implementation?

2003-02-10 Thread David Abrahams
William E. Kempf [EMAIL PROTECTED] writes:

 From: David Abrahams [EMAIL PROTECTED]
 Peter Dimov [EMAIL PROTECTED] writes:

  It's a tool that allows high-level interfaces to be built. Whether
  people will want/need to build their own high-level interfaces is
  another story.
 
 I think it's a valuable question to ask whether /everyone/ will want
 to create /the same/ high-level interface ;-).  In other words, as
 long as we have a bunch of low-level thread primitives, I prefer to
 reduce interface complexity and increase encapsulation unless we can
 find a specific use for a medium-level interface.

 How about this compromise:

snip

I don't want either of these to have a separate function (operator()
in this case) which initiates the call, for reasons described earlier

My suggestion:

  template <typename R>
  class future
  {
  public:
  template <class F, class Executor>
  future(F const& f, Executor const& e)
  : m_pimpl(new async_call<R>(f))
  {
  (*get())();
  }

  future(const future<R>& other)
  {
  mutex::scoped_lock lock(m_mutex);
  m_pimpl = other.m_pimpl;
  }

  future<R>& operator=(const future<R>& other)
  {
  mutex::scoped_lock lock(m_mutex);
  m_pimpl = other.m_pimpl;
  }

  R result() const
  {
  return get()->result();
  }

  private:
  shared_ptr<async_call<R> > get() const
  {
  mutex::scoped_lock lock(m_mutex);
  return m_pimpl;
  }

  shared_ptr<async_call<R> > m_pimpl;
  mutable mutex m_mutex;
  };

  // Not convinced that this helps, but...
  template <class R>
  R result(future<R> const& f)
  {
  return f.result();
  }

...and I don't care whether async_call gets implemented as part of the
public interface or not, but only because I can't see a compelling
reason to have it yet.

-- 
Dave Abrahams
Boost Consulting
www.boost-consulting.com




Re: [boost] Re: A new boost::thread implementation?

2003-02-10 Thread David Abrahams
William E. Kempf [EMAIL PROTECTED] writes:

 No, I was asking anyone interested in responding, and you're certainly
 not wasting your breath.  I think I reached a compromise on these
 issues/questions, and would appreciate your response (it's in another
 post).

Done.

  Allocation can be pretty darned efficient when it matters.  See my
 fast smart pointer allocator that Peter added to shared_ptr for
 example.

 It's not just the efficiencies that concern me with dynamic
 allocation.  It's the additional points of failure that occur in this
 case as well.  For instance, check out the article on embedded coding
 in the most recent CUJ (sorry, don't have the exact title handy).
  Embedded folks generally avoid dynamic memory whenever possible, so
 I'm a little uncomfortable with a solution that mandates that the
 implementation use dynamic allocation of memory.  At least, if that's
 the only solution provided.

What Peter said.

-- 
Dave Abrahams
Boost Consulting
www.boost-consulting.com




Re: [boost] Re: A new boost::thread implementation?

2003-02-10 Thread David Abrahams
Peter Dimov [EMAIL PROTECTED] writes:

 I can. On the other hand, I can implement the thread primitives and
 optional, too. The point is that if, while building a high-level interface
 implementation, we discover an useful low-level primitive that offers
 greater expressive power (if less safety), we should consider exposing it,
 too, unless there are strong reasons not to.

And we should also consider NOT exposing it, unless there are strong
reasons to do so ;-).  I'm all for considering everything, but let's
be careful not to generalize this too much, too early.  If we discover
that people really need fine-grained control over the way their
async_calls work, we can go with a policy-based design ;-)

 2. Is that much different (or more valuable than)

    R f() -> { construct(), R result() }

 which is what I suggested?

 I don't know. Post the code. ;-) 

done.

-- 
Dave Abrahams
Boost Consulting
www.boost-consulting.com




[boost] Re: A new boost::thread implementation?

2003-02-09 Thread David Abrahams
William E. Kempf [EMAIL PROTECTED] writes:

 I don't care if you have an uninitialized optional internally to the
 future.  The point is to encapsulate that mess so the user doesn't have
 to look at it, read its documentation, etc.

 I think there's some serious misunderstanding here. I never said the user
 would use optional directly, I said I'd use it in the implementation of
 this async concept.

So, we're in violent agreement? Yes, apparently there *is* some
serious misunderstanding!  When did you say you were talking about the
implementation details, and how did I miss it?  I thought I've been
very clear that I was talking about high-level interface issues.  Did
you miss that?

 I *think* I understand what you're saying.  So, the interface would be
 more something like:

 future<double> f1 = thread_executor(foo, a, b, c);
 thread_pool pool;
 future<double> f2 = thread_pool_executor(pool, foo, d, e, f);
 double d = f1.get() + f2.get();

 This puts a lot more work on the creation of executors (they'll have
 to obey a more complex interface design than just anything that can
 invoke a function object), but I can see the merits.  Is this
 actually what you had in mind?

 Something very much along those lines.  I would very much prefer to
 access the value of the future with its operator(), because we have lots
 of nice mechanisms that work on function-like objects; to use get you'd
 need to go through mem_fn/bind, and according to Peter we
 wouldn't be able to directly get such a function object from a future
 rvalue.

 Hmmm... OK, more pieces are falling into place.  I think the f() syntax
 conveys something that's not the case, but I won't argue the utility of
 it.

I understand your concern.  Another possible interface that is
functional in nature would be:

  future<double> f1 = thread_executor(foo, a, b, c);
  thread_pool pool;
  future<double> f2 = thread_pool_executor(pool, foo, d, e, f);
  double d = async_result(f1) + async_result(f2);

Where async_result would be a function object instance.  That may seem
radical to you, but I think Joel has demonstrated the effectiveness of
that approach in Spirit.

 Only if you have a clearly defined default case.  Someone doing a
 lot of client/server development might argue with you about thread
 creation being a better default than RPC calling, or even
 thread_pool usage.

 Yes, they certainly might.  Check out the systems that have been
 implemented in Erlang with great success and get back to me ;-)

 Taking a chapter out of Alexander's book?

 Ooooh, touché!  ;-)

 Actually I think it's only fair to answer speculation about what
 people will like with a reference to real, successful systems.

 I'd agree with that, but the link you gave led me down a VERY long
 research path, and I'm in a time crunch right now ;).  Maybe a short code
 example or a more specific link would have helped.

Sorry, I don't have one.  Oh, maybe
http://ll2.ai.mit.edu/talks/armstrong.pdf, which includes the
fabulous 10 minute Erlang course would help.  The point is, they use
Erlang (a functional language) to build massively concurrent, threaded
systems.  Erlang is running some major telephone switches in Europe.

 I'm not yet wedded to a particular design choice, though I am getting
 closer; I hope you don't think that's a cop-out.  What I'm aiming for is
 a particular set of design requirements:

 Not a cop-out, though I wasn't asking for a final design from you.

 1. Simple syntax, for some definition of simple.
 2. A way, that looks like a function call, to create a future
 3. A way, that looks like a function call, to get the value of a future

 These requirements help me a lot.  Thanks.

No prob.

-- 
Dave Abrahams
Boost Consulting
www.boost-consulting.com




Re: [boost] Re: A new boost::thread implementation?

2003-02-08 Thread Peter Dimov
David Abrahams wrote:

[...]

 Well, I don't really feel like arguing about this much longer.

I'd love to contribute to this discussion but there's no firm ground to
stand on. What _are_ the concepts being discussed? I think I see

AsyncCall<R>

  AsyncCall(function<R ()> f);

  void operator()();

// effects: f();

  R result() const;

// if operator()() hasn't been invoked, throw;
// if operator()() is still executing, block;
// otherwise, return the value returned by f().

but I'm not sure.




Re: [boost] Re: A new boost::thread implementation?

2003-02-08 Thread David Abrahams
Peter Dimov [EMAIL PROTECTED] writes:

 David Abrahams wrote:

 [...]

 Well, I don't really feel like arguing about this much longer.

 I'd love to contribute to this discussion but there's no firm ground to
 stand on. What _are_ the concepts being discussed? I think I see

 AsyncCall<R>

   AsyncCall(function<R ()> f);

   void operator()();

 // effects: f();

   R result() const;

 // if operator()() hasn't been invoked, throw;
 // if operator()() is still executing, block;
 // otherwise, return the value returned by f().

 but I'm not sure.

That's the general idea.  Of course we can haggle over the syntactic
details, but the main question is whether you can get a return value
from invoking a thread function or whether you have to declare some
global state and ask the thread function to modify it.

-- 
Dave Abrahams
Boost Consulting
www.boost-consulting.com




Re: [boost] Re: A new boost::thread implementation?

2003-02-08 Thread Peter Dimov
David Abrahams wrote:
 Peter Dimov [EMAIL PROTECTED] writes:
 
 David Abrahams wrote:
 
 [...]
 
 Well, I don't really feel like arguing about this much longer.
 
 I'd love to contribute to this discussion but there's no firm ground
 to stand on. What _are_ the concepts being discussed? I think I see
 
 AsyncCall<R>
 
   AsyncCall(function<R ()> f);
 
   void operator()();
 
 // effects: f();
 
   R result() const;
 
 // if operator()() hasn't been invoked, throw;
 // if operator()() is still executing, block;
 // otherwise, return the value returned by f().
 
 but I'm not sure.
 
 That's the general idea.  Of course we can haggle over the syntactic
 details, but the main question is whether you can get a return value
 from invoking a thread function or whether you have to declare some
 global state and ask the thread function to modify it.

With the above AsyncCall:

async_call<int> f( bind(g, 1, 2) ); // can offer syntactic sugar here
thread t(f); // or thread(f); for extra cuteness
int r = f.result();

The alternative seems to be

async_call<int> f( bind(g, 1, 2) );
int r = f.result();

but now f is tied to boost::thread. A helper

int r = async(g, 1, 2);

seems possible with either approach.




Re: [boost] Re: A new boost::thread implementation?

2003-02-08 Thread David Abrahams
Peter Dimov [EMAIL PROTECTED] writes:

 David Abrahams wrote:

 That's the general idea.  Of course we can haggle over the syntactic
 details, but the main question is whether you can get a return value
 from invoking a thread function or whether you have to declare some
 global state and ask the thread function to modify it.

 With the above AsyncCall:

 async_call<int> f( bind(g, 1, 2) ); // can offer syntactic sugar here
 thread t(f); // or thread(f); for extra cuteness
 int r = f.result();

 The alternative seems to be

 async_call<int> f( bind(g, 1, 2) );
 int r = f.result();

 but now f is tied to boost::thread. A helper

 int r = async(g, 1, 2);

Another alternative might allow all of the following:

async_call<int> f(create_thread(), bind(g,1,2));
int r = f();

async_call<int> f(thread_pool(), bind(g,1,2));
int r = f();

int r = async_call<int>(create_thread(), bind(g, 1, 2));

int r = async(boost::thread(), g, 1, 2);

int r = async_call<int>(rpc(some_machine), bind(g,1,2));

int r = async_call<int>(my_message_queue, bind(g,1,2));

-- 
Dave Abrahams
Boost Consulting
www.boost-consulting.com




Re: [boost] Re: A new boost::thread implementation?

2003-02-08 Thread Peter Dimov
David Abrahams wrote:
 Peter Dimov [EMAIL PROTECTED] writes:

 With the above AsyncCall:

 async_call<int> f( bind(g, 1, 2) ); // can offer syntactic sugar here
 thread t(f); // or thread(f); for extra cuteness
 int r = f.result();

 The alternative seems to be

 async_call<int> f( bind(g, 1, 2) );
 int r = f.result();

 but now f is tied to boost::thread. A helper

 int r = async(g, 1, 2);

 Another alternative might allow all of the following:

 async_call<int> f(create_thread(), bind(g,1,2));
 int r = f();

 async_call<int> f(thread_pool(), bind(g,1,2));
 int r = f();

Using an undefined-yet Executor concept for the first argument. This is not
much different from

async_call<int> f( bind(g, 1, 2) );
// execute f using whatever Executor is appropriate
int r = f.result();

except that the async_call doesn't need to know about Executors.

 int r = async_call<int>(create_thread(), bind(g, 1, 2));

 int r = async(boost::thread(), g, 1, 2);

 int r = async_call<int>(rpc(some_machine), bind(g,1,2));

 int r = async_call<int>(my_message_queue, bind(g,1,2));

All of these are possible with helper functions (and the int could be made
optional.) I've my doubts about

 int r = async_call<int>(rpc(some_machine), bind(g,1,2));

though. How do you envision this working? A local opaque function object
can't be RPC'ed.

int r = rpc_call(g, 1, 2);

looks much easier to implement.




Re: [boost] Re: A new boost::thread implementation?

2003-02-08 Thread David Abrahams
Peter Dimov [EMAIL PROTECTED] writes:

 David Abrahams wrote:
 Peter Dimov [EMAIL PROTECTED] writes:

 With the above AsyncCall:

 async_call<int> f( bind(g, 1, 2) ); // can offer syntactic sugar here
 thread t(f); // or thread(f); for extra cuteness
 int r = f.result();

 The alternative seems to be

 async_call<int> f( bind(g, 1, 2) );
 int r = f.result();

 but now f is tied to boost::thread. A helper

 int r = async(g, 1, 2);

 Another alternative might allow all of the following:

 async_call<int> f(create_thread(), bind(g,1,2));
 int r = f();

 async_call<int> f(thread_pool(), bind(g,1,2));
 int r = f();

 Using an undefined-yet Executor concept for the first argument. This is not
 much different from

 async_call<int> f( bind(g, 1, 2) );
 // execute f using whatever Executor is appropriate
 int r = f.result();

 except that the async_call doesn't need to know about Executors.

...and that you don't need a special syntax to get the result out that
isn't compatible with functional programming.  If you want to pass a
function object off from the above, you need something like:

bind(&async_call<int>::result, async_call<int>(bind(g, 1, 2)))


 int r = async_call<int>(create_thread(), bind(g, 1, 2));

 int r = async(boost::thread(), g, 1, 2);
   ^^^

 int r = async_call<int>(rpc(some_machine), bind(g,1,2));

 int r = async_call<int>(my_message_queue, bind(g,1,2));

 All of these are possible with helper functions (and the int could be made
 optional.) 

Yup, note the line in the middle.

 I've my doubts about

 int r = async_call<int>(rpc(some_machine), bind(g,1,2));

 though. How do you envision this working? A local opaque function object
 can't be RPC'ed.

It would have to not be opaque ;-)

Maybe it's a wrapper over Python code that can be transmitted across
the wire.  Anyway, I agree that it's not very likely.  I just put it
in there to satisfy Bill, who seems to have some idea how RPC can be
squeezed into the same mold as thread invocation ;-)

 int r = rpc_call(g, 1, 2);

 looks much easier to implement.

Agreed.

-- 
Dave Abrahams
Boost Consulting
www.boost-consulting.com




Re: [boost] Re: A new boost::thread implementation?

2003-02-08 Thread Peter Dimov
David Abrahams wrote:
 Peter Dimov [EMAIL PROTECTED] writes:

 Using an undefined-yet Executor concept for the first argument. This
 is not much different from

 async_call<int> f( bind(g, 1, 2) );
 // execute f using whatever Executor is appropriate
 int r = f.result();

 except that the async_call doesn't need to know about Executors.

 ...and that you don't need a special syntax to get the result out that
 isn't compatible with functional programming.  If you want to pass a
 function object off from the above, you need something like:

 bind(&async_call<int>::result, async_call<int>(bind(g, 1, 2)))

Hmm. Actually I'll need a similar three-liner:

async_call<int> f( bind(g, 1, 2) );
// execute f using whatever Executor is appropriate
// pass bind(&async_call<int>::result, f) to whoever is interested

Synchronous RPC calls notwithstanding, the point of the async_call is that
the creation+execution (lines 1-2) are performed well in advance so that the
background thread has time to run. It doesn't make sense to construct and
execute an async_call if the very next thing is calling result(). So in a
typical scenario there will be other code between lines 2 and 3.

The bind() can be syntax-sugared, of course. :-)

 int r = async_call<int>(create_thread(), bind(g, 1, 2));

 int r = async(boost::thread(), g, 1, 2);
^^^

 int r = async_call<int>(rpc(some_machine), bind(g,1,2));

 int r = async_call<int>(my_message_queue, bind(g,1,2));

 All of these are possible with helper functions (and the int could
 be made optional.)

 Yup, note the line in the middle.

The line in the middle won't work, actually, but that's another story.
boost::thread() creates a handle to the current thread. ;-) Score another
one for defining concepts before using them.




[boost] Re: A new boost::thread implementation?

2003-02-08 Thread Alexander Terekhov

 repost [with repaired link] 

Peter Dimov wrote:
[...]
  Another alternative might allow all of the following:
 
  async_call<int> f(create_thread(), bind(g,1,2));
  int r = f();
 
  async_call<int> f(thread_pool(), bind(g,1,2));
  int r = f();
 
 Using an undefined-yet Executor concept for the first argument. This is not
 much different from
 
 async_call<int> f( bind(g, 1, 2) );
 // execute f using whatever Executor is appropriate
 int r = f.result();
 
 except that the async_call doesn't need to know about Executors.

Yep. 

http://gee.cs.oswego.edu/dl/concurrent/dist/docs/java/util/concurrent/ThreadExecutor.html

But I still insist ( ;-) ) on a rather simple interface 
for creating a thread object (that shall kinda-encapsulate 
 that async_call<T>-thing representing the thread routine 
with its optional parameter(s) and return value... and which 
can be canceled [no-result-ala-PTHREAD_CANCELED] and timedout-
on-timedjoin() -- also no result [reported by another magic 
pointer value]):

http://groups.google.com/groups?selm=3D5D59A3.E6C97827%40web.de
(Subject: Re: High level thread design question)

http://groups.google.com/groups?selm=3D613D44.9B67916%40web.de
(Well, futures aside for a moment, how about the following...)
 ^^ ;-) ;-)

regards,
alexander.




Re: [boost] Re: A new boost::thread implementation?

2003-02-08 Thread William E. Kempf

David Abrahams said:
 William E. Kempf [EMAIL PROTECTED] writes:

 David Abrahams said:
 ...and if it can't be default-constructed?

 That's what boost::optional is for ;).

 Yeeeh. Once the async_call returns, you have a value, and should be
 able to count on it.  You shouldn't get back an object whose
 invariant allows there to be no value.

 I'm not sure I can interpret the yeeeh part. Do you think there's
 still an issue to discuss here?

 Yes.  Yeeeh means I'm uncomfortable with asking people to get
 involved with complicated state like it's there or it isn't there for
 something as conceptually simple as a result returned from waiting on a
 thread function to finish.

OK, *if* I'm totally understanding you now, I don't think the issue you
see actually exists.  The invariant of optional may allow there to be no
value, but the invariant of a future/async_result doesn't allow this
*after the invocation has completed*.  (Actually, there is one case where
this might occur, and that's when the invocation throws an exception if we
add the async exception functionality that people want here.  But in this
case what happens is a call to res.get(), or what ever name we use, will
throw an exception.)  The optional is just an implementation detail that
allows you to not have to use a type that's default constructable.

If, on the other hand, you're concerned about the uninitialized state
prior to invocation... we can't have our cake and eat it to, and since the
value is meaningless prior to invocation any way, I'd rather allow the
solution that doesn't require default constructable types.

 These are the two obvious (to me) alternatives, but the idea is to
 leave the call/execute portion orthogonal and open.  Alexander was
 quite right that this is similar to the Future concept in his Java
 link.  The Future holds the storage for the data to be returned and
 provides the binding mechanism for what actually gets called, while
 the Executor does the actual invocation.  I've modeled the Future
 to use function objects for the binding, so the Executor can be any
 mechanism which can invoke a function object.  This makes thread,
 thread_pool and other such classes Executors.

 Yes, it is a non-functional (stateful) model which allows efficient
 re-use of result objects when they are large, but complicates simple
 designs that could be better modeled as stateless functional programs.
 When there is an argument for re-using the result object, C++
 programmers tend to write void functions and pass the result by
 reference anyway.  There's a good reason people write functions
 returning non-void, though.  There's no reason to force them to twist
 their invocation model inside out just to achieve parallelism.

I *think* I understand what you're saying.  So, the interface would be
more something like:

futuredouble f1 = thread_executor(foo, a, b, c);
thread_pool pool;
futuredouble f2 = thread_pool_executor(pool, foo, d, e, f);
double d = f1.get() + f2.get();

This puts a lot more work on the creation of executors (they'll have to
obey a more complex interface design than just anything that can invoke a
function object), but I can see the merits.  Is this actually what you
had in mind?

 And there's other examples as well, such as RPC mechanisms.

 True.

 And personally, I find passing such a creation parameter to be
 turning the design inside out.

 A bit, yes.

 It turns _your_ design inside out, which might not be a bad thing for
 quite a few use cases ;-)

We're obviously not thinking of the same interface choices here.

 It might make things a little simpler for the default case, but it
 complicates usage for all the other cases.  With the design I
 presented every usage is treated the same.

 There's a lot to be said for making the default case very easy.

 Only if you have a clearly defined default case.  Someone doing a
 lot of client/server development might argue with you about thread
 creation being a better default than RPC calling, or even thread_pool
 usage.

 Yes, they certainly might.  Check out the systems that have been
 implemented in Erlang with great success and get back to me ;-)

Taking a chapter out of Alexander's book?

 More importantly, if you really don't like the syntax of my design,
 it at least allows you to *trivially* implement your design.

 I doubt most users regard anything involving typesafe varargs as
 trivial to implement.

 Well, I'm not claiming to support variadic parameters here.  I'm only
 talking about supporting a 0..N for some fixed N interface.

 That's what I mean by typesafe varargs; it's the best we can do in
 C++98/02.

  And with Boost.Bind already available, that makes other such
 interfaces trivial to implement.  At least usually.

 For an expert in library design familiar with the workings of boost
 idioms like ref(x), yes.  For someone who just wants to accomplish a
 task using threading, no.

Point taken.

 The suggestion that the binding occur at the time of 

Re: [boost] Re: A new boost::thread implementation?

2003-02-08 Thread William E. Kempf

David Abrahams said:
 Peter Dimov [EMAIL PROTECTED] writes:

 David Abrahams wrote:
 Peter Dimov [EMAIL PROTECTED] writes:

 With the above AsyncCall:

 async_call<int> f( bind(g, 1, 2) ); // can offer syntactic sugar here
 thread t(f); // or thread(f); for extra cuteness
 int r = f.result();

 The alternative seems to be

 async_call<int> f( bind(g, 1, 2) );
 int r = f.result();

 but now f is tied to boost::thread. A helper

 int r = async(g, 1, 2);

 Another alternative might allow all of the following:

 async_call<int> f(create_thread(), bind(g,1,2));
 int r = f();

 async_call<int> f(thread_pool(), bind(g,1,2));
 int r = f();

 Using an undefined-yet Executor concept for the first argument. This
 is not much different from

 async_call<int> f( bind(g, 1, 2) );
 // execute f using whatever Executor is appropriate
 int r = f.result();

 except that the async_call doesn't need to know about Executors.

 ...and that you don't need a special syntax to get the result out that
 isn't compatible with functional programming.  If you want to pass a
 function object off from the above, you need something like:

 bind(&async_call<int>::result, async_call<int>(bind(g, 1, 2)))

I think the light is dawning for me.  Give me a little bit of time to work
out a new design taking this into consideration.

 int r = async_call<int>(create_thread(), bind(g, 1, 2));

 int r = async(boost::thread(), g, 1, 2);
^^^

 int r = async_call<int>(rpc(some_machine), bind(g,1,2));

 int r = async_call<int>(my_message_queue, bind(g,1,2));

None of these make much sense to me.  You're executing the function object
in a supposedly asynchronous manner, but the immediate assignment to int
renders it a synchronous call.  Am I missing something again?

 All of these are possible with helper functions (and the int could
 be made optional.)

 Yup, note the line in the middle.

 I've my doubts about

 int r = async_call<int>(rpc(some_machine), bind(g,1,2));

 though. How do you envision this working? A local opaque function
 object can't be RPC'ed.

 It would have to not be opaque ;-)

 Maybe it's a wrapper over Python code that can be transmitted across the
 wire.  Anyway, I agree that it's not very likely.  I just put it in
 there to satisfy Bill, who seems to have some idea how RPC can be
 squeezed into the same mold as thread invocation ;-)

Ouch.  A tad harsh.  But yes, I do see this concept applying to RPC
invocation.  All that's required is the proxy that handles wiring the
data and the appropriate scaffolding to turn this into an Executor. 
Obviously this is a much more strict implementation then thread
creation... you can't just call any function here.

-- 
William E. Kempf
[EMAIL PROTECTED]





Re: [boost] Re: A new boost::thread implementation?

2003-02-08 Thread David Abrahams
Alexander Terekhov [EMAIL PROTECTED] writes:

 But I still insist ( ;-) ) on a rather simple interface 
 for creating a thread object (that shall kinda-encapsulate 
 that async_call<T>-thing representing the thread routine 
 with its optional parameter(s) and return value... and which 
 can be canceled [no-result-ala-PTHREAD_CANCELED] and timedout-
 on-timedjoin() -- also no result [reported by another magic 
 pointer value]):

 http://groups.google.com/groups?selm=3D5D59A3.E6C97827%40web.de
 (Subject: Re: High level thread design question)

 http://groups.google.com/groups?selm=3D613D44.9B67916%40web.de
 (Well, futures aside for a moment, how about the following...)
  ^^ ;-) ;-)

Hmm, good point.  If we are going to get results back in this
straightforward way we probably ought to be thinking about exception
propagation also.  However, that's a *much* harder problem, so I'm
inclined to defer solving it.

-- 
Dave Abrahams
Boost Consulting
www.boost-consulting.com




Re: [boost] Re: A new boost::thread implementation?

2003-02-08 Thread David Abrahams
Peter Dimov [EMAIL PROTECTED] writes:

 except that the async_call doesn't need to know about Executors.

 ...and that you don't need a special syntax to get the result out that
 isn't compatible with functional programming.  If you want to pass a
 function object off from the above, you need something like:

 bind(&async_call<int>::result, async_call<int>(bind(g, 1, 2)))

 Hmm. Actually I'll need a similar three-liner:

 async_call<int> f( bind(g, 1, 2) );
 // execute f using whatever Executor is appropriate
 // pass bind(&async_call<int>::result, f) to whoever is interested

A little bit worse, you gotta admit.

 Synchronous RPC calls notwithstanding, the point of the async_call is that
 the creation+execution (lines 1-2) are performed well in advance so that the
 background thread has time to run. It doesn't make sense to construct and
 execute an async_call if the very next thing is calling result(). So in a
 typical scenario there will be other code between lines 2 and 3.

I agree, but I'm not sure what difference it makes.

 The bind() can be syntax-sugared, of course. :-)

I think some useful syntax-sugaring is what I'm trying to push for ;-)

 int r = async_call<int>(create_thread(), bind(g, 1, 2));

 int r = async(boost::thread(), g, 1, 2);
^^^

 int r = async_call<int>(rpc(some_machine), bind(g,1,2));

 int r = async_call<int>(my_message_queue, bind(g,1,2));

 All of these are possible with helper functions (and the int could
 be made optional.)

 Yup, note the line in the middle.

 The line in the middle won't work, actually, but that's another story.
 boost::thread() creates a handle to the current thread. ;-) Score another
 one for defining concepts before using them.

Oh, I'm not up on the new interface.  How are we going to create a new
thread?

-- 
Dave Abrahams
Boost Consulting
www.boost-consulting.com




Re: [boost] Re: A new boost::thread implementation?

2003-02-08 Thread David Abrahams
William E. Kempf [EMAIL PROTECTED] writes:

 except that the async_call doesn't need to know about Executors.

 ...and that you don't need a special syntax to get the result out that
 isn't compatible with functional programming.  If you want to pass a
 function object off from the above, you need something like:

 bind(&async_call<int>::result, async_call<int>(bind(g, 1, 2)))

 I think the light is dawning for me.  Give me a little bit of time to work
 out a new design taking this into consideration.

...and there was much rejoicing!!

 int r = async_call<int>(create_thread(), bind(g, 1, 2));

 int r = async(boost::thread(), g, 1, 2);
^^^

 int r = async_call<int>(rpc(some_machine), bind(g,1,2));

 int r = async_call<int>(my_message_queue, bind(g,1,2));

 None of these make much sense to me.  You're executing the function
 object in a supposedly asynchronous manner, but the immediate
 assignment to int renders it a synchronous call.  Am I missing
 something again?

No, my fault.  Syntactically, I should've written this:

async_call<int> f(create_thread(), bind(g,1,2));
int r = f();

async_call<int> f(thread_pool(), bind(g,1,2));
int r = f();

int r = async_call<int>(create_thread(), bind(g, 1, 2))();
                                                       ^^
int r = async(boost::thread(), g, 1, 2)();
                                       ^^
int r = async_call<int>(rpc(some_machine), bind(g,1,2))();
                                                       ^^
int r = async_call<int>(my_message_queue, bind(g,1,2))();
                                                      ^^

But you're also right about the synchronous thing.  The usage isn't
very likely.  More typically, you'd pass an async_call object off to
some other function, which would eventually invoke it to get the
result (potentially blocking if neccessary until the result was
available).

 though. How do you envision this working? A local opaque function
 object can't be RPC'ed.

 It would have to not be opaque ;-)

 Maybe it's a wrapper over Python code that can be transmitted across the
 wire.  Anyway, I agree that it's not very likely.  I just put it in
 there to satisfy Bill, who seems to have some idea how RPC can be
 squeezed into the same mold as thread invocation ;-)

 Ouch.  A tad harsh.  

Sorry, not intended.  I was just trying to palm off responsibility for
justifying that line on you ;o)

 But yes, I do see this concept applying to RPC invocation.  All
 that's required is the proxy that handles wiring the data and the
 appropriate scaffolding to turn this into an Executor.  Obviously
 this is a much more strict implementation than thread
 creation... you can't just call any function here.

I don't get it.  Could you fill in more detail?  For example, where
does the proxy come from?

-- 
Dave Abrahams
Boost Consulting
www.boost-consulting.com

___
Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost



Re: [boost] Re: A new boost::thread implementation?

2003-02-08 Thread William E. Kempf

David Abrahams said:
 Peter Dimov [EMAIL PROTECTED] writes:
 The line in the middle won't work, actually, but that's another story.
 boost::thread() creates a handle to the current thread. ;-) Score
 another one for defining concepts before using them.

 Oh, I'm not up on the new interface.  How are we going to create a new
 thread?

Nothing new about the interface in this regard.  The default constructor
has always behaved this way.  New threads are created with the overloaded
constructor taking a function object.

BTW: I'm not opposed to changing the semantics or removing the default
constructor in the new design, since it's Copyable and Assignable.  If
there's reasons to do this, we can now switch to a self() method for
accessing the current thread.

-- 
William E. Kempf
[EMAIL PROTECTED]





Re: [boost] Re: A new boost::thread implementation?

2003-02-08 Thread David Abrahams
William E. Kempf [EMAIL PROTECTED] writes:

 David Abrahams said:
 William E. Kempf [EMAIL PROTECTED] writes:

 David Abrahams said:
 ...and if it can't be default-constructed?

 That's what boost::optional is for ;).

 Yeeeh. Once the async_call returns, you have a value, and should be
 able to count on it.  You shouldn't get back an object whose
 invariant allows there to be no value.

 I'm not sure I can interpret the yeeeh part. Do you think there's
 still an issue to discuss here?

 Yes.  "Yeeeh" means I'm uncomfortable with asking people to get
 involved with complicated state like "it's there or it isn't there" for
 something as conceptually simple as a result returned from waiting on a
 thread function to finish.

 OK, *if* I'm totally understanding you now, I don't think the issue
 you see actually exists.  The invariant of optional may allow
 there to be no value, but the invariant of a future/async_result
 doesn't allow this *after the invocation has completed*.  (Actually,
 there is one case where this might occur, and that's when the
 invocation throws an exception if we add the async exception
 functionality that people want here.  But in this case what happens
 is a call to res.get(), or what ever name we use, will throw an
 exception.)  The optional is just an implementation detail that
 allows you to not have to use a type that's default constructible.

It doesn't matter if the semantics of future ensures that the optional
is always filled in; returning an object whose class invariant is more
complicated than the actual intended result complicates life for the
user.  The result of a future leaves it and propagates through other
parts of the program where the invariant established by future isn't
as obvious.  Returning an optional<double> where a double is intended
is akin to returning a vector<double> that has only one element.  Use
the optional internally to the future if that's what you need to do.
The user shouldn't have to mess with it.

 If, on the other hand, you're concerned about the uninitialized state
 prior to invocation... we can't have our cake and eat it too, and since the
 value is meaningless prior to invocation any way, I'd rather allow the
 solution that doesn't require default constructible types.

I don't care if you have an uninitialized optional internally to the
future.  The point is to encapsulate that mess so the user doesn't
have to look at it, read its documentation, etc.

 I *think* I understand what you're saying.  So, the interface would be
 more something like:

 future<double> f1 = thread_executor(foo, a, b, c);
 thread_pool pool;
 future<double> f2 = thread_pool_executor(pool, foo, d, e, f);
 double d = f1.get() + f2.get();

 This puts a lot more work on the creation of executors (they'll have to
 obey a more complex interface design than just anything that can invoke a
 function object), but I can see the merits.  Is this actually what you
 had in mind?

Something very much along those lines.  I would very much prefer to
access the value of the future with its operator(), because we have
lots of nice mechanisms that work on function-like objects; to use get
you'd need to go through mem_fn/bind, and according to Peter we
wouldn't be able to directly get such a function object from a future
rvalue.
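A minimal sketch of that operator()-style access, assuming a hypothetical shared-state future so copies returned by value all refer to one result slot (names are illustrative, not Boost's):

```cpp
#include <cassert>
#include <condition_variable>
#include <memory>
#include <mutex>
#include <optional>
#include <thread>

// The value comes out through operator() as a plain R, so the object
// composes with mechanisms that expect function-like things, and the
// optional<R> stays a private implementation detail.
template <typename R>
class future {
    struct state {
        std::mutex mx;
        std::condition_variable cv;
        std::optional<R> value;    // hidden; the user never sees it
    };
    std::shared_ptr<state> s_ = std::make_shared<state>();

public:
    void set(R v) {                // called by the executing thread
        std::lock_guard<std::mutex> lk(s_->mx);
        s_->value = std::move(v);
        s_->cv.notify_all();
    }

    R operator()() const {         // blocks, then yields a plain R
        std::unique_lock<std::mutex> lk(s_->mx);
        s_->cv.wait(lk, [this] { return s_->value.has_value(); });
        return *s_->value;
    }
};

int demo() {
    future<double> f1, f2;
    std::thread t1([f1]() mutable { f1.set(40.0); });  // copies share state
    std::thread t2([f2]() mutable { f2.set(2.0); });
    double d = f1() + f2();        // function-call access, no get()
    t1.join();
    t2.join();
    return static_cast<int>(d);
}
```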

 Only if you have a clearly defined default case.  Someone doing a
 lot of client/server development might argue with you about thread
 creation being a better default than RPC calling, or even thread_pool
 usage.

 Yes, they certainly might.  Check out the systems that have been
 implemented in Erlang with great success and get back to me ;-)

 Taking a chapter out of Alexander's book?

Ooooh, touché!  ;-)

Actually I think it's only fair to answer speculation about what
people will like with a reference to real, successful systems.

 The suggestion that the binding occur at the time of construction is
 going to complicate things for me, because it makes it much more
 difficult to handle the reference semantics required here.

 a. What required reference semantics?

 The reference semantics required for asynchronous calls ;).

 Seriously, though, you have to pass a reference across thread
 boundaries here.  With late binding you have a separate entity
 that's passed as the function object, which can carry the reference
 semantics.  With the (specific) early binding syntax suggested it's
 the future itself which is passed, which means it has to be copy
 constructible and each copy must reference the same instance of the
 value.

OK, I understand.  That sounds right.

 Well, I did say I was open to alternative designs.  Whether said designs
 are high level or low level means little to me, so long as they fulfill
 the requirements.  The suggestions made so far didn't, AFAICT.

 As for the alternate interface you're suggesting here, can you spell it out
 for me?

I'm not yet wedded to a particular design choice, though I am getting
closer; I hope you don't think that's a 

[boost] Re: A new boost::thread implementation?

2003-02-08 Thread Alexander Terekhov

David Abrahams wrote:
[...]
 Oh, I'm not up on the new interface.  How are we going to create a 
 new thread?

KISS: new_thread factories. Please.

regards,
alexander.




RE: [boost] Re: A new boost::thread implementation?

2003-02-07 Thread William E. Kempf

Darryl Green said:
 -Original Message-
 From: William E. Kempf [mailto:[EMAIL PROTECTED]]

 Dave Abrahams said:
  Hm? How is the result not a result in my case?

 I didn't say it wasn't a result, I said that it wasn't only a
 result.
 In your case it's also the call.

 Regardless of whether it invokes the function the result must always be
 associated with the function at some point. It would be nice if this
 could be done at creation as per Dave's suggestion but providing
 behaviour like the futures Alexander refers to. That is, bind the
 function, its parameters and the async_result into a single
 async_call/future object that is a function/executable object. It can be
 passed to (executed by) a thread or a thread_pool (or whatever).

I'm not sure that binding the result and the function at creation time
is that helpful.  Actual results aren't that way.  This allows you to
reuse the result variable in multiple calls to different functions.  But
if people aren't comfortable with this binding scheme, I'm not opposed to
changing it.  Doing so *will* complicate things a bit, however, on the
implementation side.  So let me explore it some.

 It's not thread-creation in this case.  You don't create threads
 when
 you use a thread_pool.  And there's other examples as well, such as
 RPC
 mechanisms.  And personally, I find passing such a creation
 parameter to
 be turning the design inside out.

 But this doesn't (borrowing Dave's async_call syntax and Alexander's
 semantics (which aren't really any different to yours):

Dave's semantics certainly *were* different from mine (and the Futures
link posted by Alexander).  In fact, I see Alexander's post as
strengthening my argument for semantics different from Dave's.  Which
leaves us with my semantics (mostly), but some room left to argue the
syntax.

 async_call<double> later1(foo, a, b, c);
 async_call<double> later2(foo, d, e, f);
 thread_pool pool;
 pool.dispatch(later1);
 pool.dispatch(later2);
 d = later1.result() + later2.result();

You've not used Dave's semantics, but mine (with the variation of when you
bind).

 More importantly, if you really don't like the syntax of my design, it
 at
 least allows you to *trivially* implement your design.  Sometimes
 there's
 something to be said for being lower level.

 Well as a user I'd be *trivially* implementing something to produce the
 above. Do-able I think (after I have a bit of a look at the innards of
 bind), but its hardly trivial.

The only thing that's not trivial with your syntax changes above is
dealing with the requisite reference semantics without requiring dynamic
memory allocation.  But I think I can work around that.  If people prefer
the early/static binding, I can work on this design.  I think it's a
little less flexible, but won't argue that point if people prefer it.

-- 
William E. Kempf
[EMAIL PROTECTED]





Re: [boost] Re: A new boost::thread implementation?

2003-02-07 Thread William E. Kempf

David Abrahams said:
 ...and if it can't be default-constructed?

 That's what boost::optional is for ;).

 Yeeeh. Once the async_call returns, you have a value, and should be able
 to count on it.  You shouldn't get back an object whose invariant allows
 there to be no value.

I'm not sure I can interpret the yeeeh part. Do you think there's still
an issue to discuss here?

 Second, and this is more important, you've bound this concept to
 boost::thread explicitly.  With the fully separated concerns of my
 proposal, async_result can be used with other asynchronous call
 mechanisms, such as the coming boost::thread_pool.

async_result<double> res1, res2;

 no fair - I'm calling it async_call now ;-)

thread_pool pool;
pool.dispatch(bind(res1.call(foo), a, b, c));
pool.dispatch(bind(res2.call(foo), d, e, f));
d = res1.value() + res2.value();

 This one is important.  However, there are other ways to deal with
 this.  An async_call object could take an optional thread-creation
 parameter, for example.

 It's not thread-creation in this case.  You don't create threads
 when you use a thread_pool.

 OK, thread acquisition, then.

No, not even that.  An RPC mechanism, for instance, isn't acquiring a
thread.  And a message queue implementation wouldn't be acquiring a thread
either.  These are the two obvious (to me) alternatives, but the idea is
to leave the call/execute portion orthogonal and open.  Alexander was
quite right that this is similar to the Future concept in his Java link.
 The Future holds the storage for the data to be returned and provides
the binding mechanism for what actually gets called, while the Executor
does the actual invocation.  I've modeled the Future to use function
objects for the binding, so the Executor can be any mechanism which can
invoke a function object.  This makes thread, thread_pool and other such
classes Executors.
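A toy illustration of that Executor idea: an Executor is anything that can invoke a nullary function object, whether it runs the task immediately or queues it. Neither class below is real Boost code; both names are invented for the sketch.

```cpp
#include <cassert>
#include <functional>
#include <vector>

struct inline_executor {                 // invokes immediately
    void operator()(const std::function<void()>& task) const { task(); }
};

struct queue_executor {                  // defers until run_all()
    std::vector<std::function<void()>> q;
    void operator()(std::function<void()> task) { q.push_back(std::move(task)); }
    void run_all() {
        for (auto& t : q) t();
        q.clear();
    }
};

// The result slots are captured by reference to keep the sketch small;
// a real Future would own shared storage instead.
int demo() {
    int r1 = 0, r2 = 0;

    inline_executor ex1;
    ex1([&] { r1 = 6; });                // runs now

    queue_executor ex2;
    ex2([&] { r2 = 7; });                // queued, not yet run
    ex2.run_all();                       // the executor decides when
    return r1 * r2;
}
```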

 And there's other examples as well, such as RPC mechanisms.

 True.

 And personally, I find passing such a creation parameter to be
 turning the design inside out.

 A bit, yes.

 It might make things a little simpler for the default case, but it
 complicates usage for all the other cases.  With the design I
 presented every usage is treated the same.

 There's a lot to be said for making the default case very easy.

Only if you have a clearly defined default case.  Someone doing a lot of
client/server development might argue with you about thread creation being
a better default than RPC calling, or even thread_pool usage.

 More importantly, if you really don't like the syntax of my design, it
 at least allows you to *trivially* implement your design.

 I doubt most users regard anything involving typesafe varargs as
 trivial to implement.

Well, I'm not claiming to support variadic parameters here.  I'm only
talking about supporting a 0..N for some fixed N interface.  And with
Boost.Bind already available, that makes other such interfaces trivial to
implement.  At least usually.  The suggestion that the binding occur at
the time of construction is going to complicate things for me, because it
makes it much more difficult to handle the reference semantics required
here.

 Sometimes there's something to be said for being lower level.

 Sometimes.  I think users have complained all along that the
 Boost.Threads library takes the "you can implement it yourself using our
 primitives" line way too much.  It's important to supply
 simplifying high-level abstractions, especially in a domain as
 complicated as threading.

OK, I actually believe this is a valid criticism.  But I also think it's
wrong to start at the top of the design and work backwards.  In other
words, I expect that we'll take the lower level stuff I'm building now and
use them as the building blocks for the higher level constructs later.  If
I'd started with the higher level stuff, there'd be things that you
couldn't accomplish.

  That's what we mean by the terms "high-level" and "encapsulation"
 ;-)

 Yes, but encapsulation shouldn't hide the implementation to the
 point that users aren't aware of what the operations actually are.
 ;)

 I don't think I agree with you, if you mean that the implementation
 should be apparent from looking at the usage.  Implementation details
 that must be revealed should be shown in the documentation.

 I was referring to the fact that you have no idea if the async call
 is being done via a thread, a thread_pool, an RPC mechanism, a simple
 message queue, etc.  Sometimes you don't care, but often you do.

 And for those cases you have a low-level interface, right?

Where's the low level interface if I don't provide it? ;)

-- 
William E. Kempf
[EMAIL PROTECTED]





Re: [boost] Re: A new boost::thread implementation?

2003-02-07 Thread David Abrahams
William E. Kempf [EMAIL PROTECTED] writes:

 David Abrahams said:
 ...and if it can't be default-constructed?

 That's what boost::optional is for ;).

 Yeeeh. Once the async_call returns, you have a value, and should be able
 to count on it.  You shouldn't get back an object whose invariant allows
 there to be no value.

 I'm not sure I can interpret the yeeeh part. Do you think there's still
 an issue to discuss here?

Yes.  "Yeeeh" means I'm uncomfortable with asking people to get
involved with complicated state like "it's there or it isn't there"
for something as conceptually simple as a result returned from waiting
on a thread function to finish.

 It's not thread-creation in this case.  You don't create threads
 when you use a thread_pool.

 OK, thread acquisition, then.

 No, not even that.  An RPC mechanism, for instance, isn't acquiring
 a thread.  

Yes, but we don't have an RPC mechanism in Boost.  It's important to
be able to use a generic interface that will handle RPC, but for
common tasks where nobody's interested in RPC it's important to be
able to do something reasonably convenient and uncomplicated.

Anyway, if you want to stretch this to cover RPC it's easy enough:
just call it acquisition of an executor resource.

 And a message queue implementation wouldn't be acquiring
 a thread either.  

But it _would_ be acquiring an execution resource.

 These are the two obvious (to me) alternatives, but the idea is to
 leave the call/execute portion orthogonal and open.  Alexander was
 quite right that this is similar to the Future concept in his Java
 link.  The Future holds the storage for the data to be returned
 and provides the binding mechanism for what actually gets called,
 while the Executor does the actual invocation.  I've modeled the
 Future to use function objects for the binding, so the Executor
 can be any mechanism which can invoke a function object.  This makes
 thread, thread_pool and other such classes Executors.

Yes, it is a non-functional (stateful) model which allows efficient
re-use of result objects when they are large, but complicates simple
designs that could be better modeled as stateless functional programs.
When there is an argument for re-using the result object, C++
programmers tend to write void functions and pass the result by
reference anyway.  There's a good reason people write functions
returning non-void, though.  There's no reason to force them to twist
their invocation model inside out just to achieve parallelism.  

If you were designing a language from the ground up to support
parallelism, would you encourage or rule out a functional programming
model?  I bet you can guess what the designers of Erlang
(http://www.erlang.org/) chose to do ;o)

 And there's other examples as well, such as RPC mechanisms.

 True.

 And personally, I find passing such a creation parameter to be
 turning the design inside out.

 A bit, yes.

It turns _your_ design inside out, which might not be a bad thing for
quite a few use cases ;-)

 It might make things a little simpler for the default case, but it
 complicates usage for all the other cases.  With the design I
 presented every usage is treated the same.

 There's a lot to be said for making the default case very easy.

 Only if you have a clearly defined default case.  Someone doing a lot of
 client/server development might argue with you about thread creation being
 a better default than RPC calling, or even thread_pool usage.

Yes, they certainly might.  Check out the systems that have been
implemented in Erlang with great success and get back to me ;-)

 More importantly, if you really don't like the syntax of my design, it
 at least allows you to *trivially* implement your design.

 I doubt most users regard anything involving typesafe varargs as
 trivial to implement.

 Well, I'm not claiming to support variadic parameters here.  I'm only
 talking about supporting a 0..N for some fixed N interface. 

That's what I mean by typesafe varargs; it's the best we can do in
C++98/02.

  And with Boost.Bind already available, that makes other such
 interfaces trivial to implement.  At least usually.  

For an expert in library design familiar with the workings of boost
idioms like ref(x), yes.  For someone who just wants to accomplish a
task using threading, no.

 The suggestion that the binding occur at the time of construction is
 going to complicate things for me, because it makes it much more
 difficult to handle the reference semantics required here.

a. What required reference semantics?

b. As a user, I don't really care if I'm making it hard for the
   library provider (within reason).  It's the library provider's job
   to make my life easier.

 Sometimes there's something to be said for being lower level.

 Sometimes.  I think users have complained all along that the
 Boost.Threads library takes the "you can implement it yourself using our
 primitives" line way too much.  It's important to supply
 simplifying 

Re: [boost] Re: A new boost::thread implementation?

2003-02-06 Thread William E. Kempf


  double d;
  thread<double> t = spawn(foo)(a,b,c);
  // do something else
  d = t.return_value();

 A solution like this has been proposed before, but I don't like it.
 This creates multiple thread types, instead of a single thread type.
 I think this will only make the interface less convenient, and will
 make the implementation much more complex.  For instance, you now must
 have a separate thread_self type that duplicates all of thread
 except for the data type specific features.  These differing types
 will have to compare to each other, however.

 Make the common part a base class. That's how the proposal I sent you
 does  it :-)

Simplifies the implementation, but complicates the interface.

 async_result<double> res;
 thread t(bind(res.call(), a, b, c));
 // do something else
 d = res.value();  // Explicitly waits for the thread to return a
 value?

 This does the same, indeed. Starting a thread this way is just a little
 more complex (and -- in my view -- less obvious to read) than writing
   thread t = spawn(foo)(a,b,c);

Not sure I agree about less obvious to read.  If your syntax had been
   thread t = spawn(foo, a, b, c);
I think you'd have a bit more of an argument here.  And I certainly could
fold the binding directly into boost::thread so that my syntax would
become:

thread t(res.call(), a, b, c);

I could even eliminate the .call() syntax with some implicit
conversions, but I dislike that for the obvious reasons.  I specifically
chose not to include syntactic binding in boost::thread a long time ago,
because I prefer the explicit separation of concerns.  So, where you think
my syntax is less obvious to read, I think it's more explicit.

 But that's just personal opinion, and I'm arguably biased :-)

As am I :).

 Hopefully you're not duplicating efforts here, and are using
 Boost.Bind and Boost.Function in the implementation?

 Actually it does duplicate the work, but not because I am stubborn. We
 have an existing implementation for a couple of years, and the present
 version just evolved from this. However, there's a second point: when
 starting threads, you have a relatively clear picture as to how long
 certain objects are needed, and one can avoid several copying steps if
 one  does some things by hand. It's short anyway, tuple type and tie
 function  are your friend here.

I'm not sure how you avoid copies here.  Granted, the current
implementation isn't optimized in this way, but it's possible for me to
reduce the number of copies down to what I think would be equivalent to a
hand coded implementation.
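A modern take on the "tuple and tie" packing Wolfgang mentions: the thread constructor could move the function and its arguments into a tuple exactly once and invoke them later with std::apply. `make_task` is a hypothetical helper for illustration, not Boost.Threads code.

```cpp
#include <cassert>
#include <tuple>
#include <utility>

// Pack the callable and its arguments once; invoking the returned
// task later performs no further argument copies.
template <typename F, typename... Args>
auto make_task(F f, Args... args) {
    return [f = std::move(f), tup = std::make_tuple(std::move(args)...)]() mutable {
        return std::apply(f, tup);       // unpack the stored arguments
    };
}

int add3(int a, int b, int c) { return a + b + c; }

int demo() {
    auto task = make_task(add3, 1, 2, 3);  // arguments stored once
    return task();                          // invoked later, e.g. on a thread
}
```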

  thread t = spawn(foo)(a,b,c);
  t.yield ();// oops, who's going to yield here?

 You shouldn't really ever write code like that.  It should be
 thread::yield().  But even if you write it the way you did, it will
 always be the current thread that yields, which is the only thread
 that can.  I don't agree with separating the interfaces here.

 I certainly know that one shouldn't write the code like this. It's just
 that this way you are inviting people to write buglets. After all, you
 have (or may have in the future) functions
   t->kill();
   t->suspend();
 Someone sees that there's a function yield() but doesn't have the time
 to  read the documentation, what will he assume what yield() does?

How does someone see that there's a function yield() without also
seeing that it's static?  No need to read documentation for that, as it's
an explicit part of the functions signature.

 If there's a way to avoid such invitations for errors, one should use
 it.

I understand the theory behind this, I've just never seen a real world
case where someone's been bitten in this way.  I know I never would be. 
So I don't find it very compelling.  But as I said elsewhere, I'm not so
opposed as to not consider making these free functions instead because of
this reasoning.  I would be opposed to another class, however, as I don't
think that solves anything, but instead makes things worse.

-- 
William E. Kempf
[EMAIL PROTECTED]





Re: [boost] Re: A new boost::thread implementation?

2003-02-06 Thread Dave Abrahams
From: Wolfgang Bangerth [EMAIL PROTECTED]

   double d;
   thread<double> t = spawn(foo)(a,b,c);
   // do something else
   d = t.return_value();
 
  A solution like this has been proposed before, but I don't like it.
This
  creates multiple thread types, instead of a single thread type.  I think
  this will only make the interface less convenient, and will make the
  implementation much more complex.  For instance, you now must have a
  separate thread_self type that duplicates all of thread except for the
  data type specific features.  These differing types will have to compare
  to each other, however.

 Make the common part a base class. That's how the proposal I sent you does
 it :-)


  async_result<double> res;
  thread t(bind(res.call(), a, b, c));
  // do something else
  d = res.value();  // Explicitly waits for the thread to return a value?

 This does the same, indeed. Starting a thread this way is just a little
 more complex (and -- in my view -- less obvious to read) than writing
   thread t = spawn(foo)(a,b,c);

 But that's just personal opinion, and I'm arguably biased :-)

I'm sure I'm never biased <wink>, and I tend to like your syntax better.
However, I recognize what Bill is concerned about.  Let me suggest a
compromise:

  async_result<double> later = spawn(foo)(a, b, c);
  ...
  thread t = later.thread();
  // do whatever with t
  ...
  double now = later.join();  // or later.get()

You could also consider the merits of providing an implicit conversion from
async_result<T> to T.

This approach doesn't get the asynchronous call wound up with the meaning of
the thread concept.


--
Dave Abrahams
Boost Consulting
http://www.boost-consulting.com




Re: [boost] Re: A new boost::thread implementation?

2003-02-06 Thread William E. Kempf

 From: Wolfgang Bangerth [EMAIL PROTECTED]
 This does the same, indeed. Starting a thread this way is just a
 little more complex (and -- in my view -- less obvious to read) than
 writing
   thread t = spawn(foo)(a,b,c);

 But that's just personal opinion, and I'm arguably biased :-)

 I'm sure I'm never biased <wink>, and I tend to like your syntax better.
 However, I recognize what Bill is concerned about.  Let me suggest a
 compromise:

   async_result<double> later = spawn(foo)(a, b, c);

Mustn't use the name spawn() here.  It implies a thread/process/whatever
has been spawned at this point, which is not the case.  Or has it (he says
later, having read on)?

   ...
   thread t = later.thread();

The thread type will be Copyable and Assignable soon, so no need for the
reference.  Hmm... does this member indicate that spawn() above truly did
create a thread that's stored in the async_result?  Hmm... that would be
an interesting alternative implementation.  I'm not sure it's as obvious
as the syntax I suggested, as evidenced by the questions I've raised here,
but worth considering.  Not sure I care for spawn(foo)(a, b, c) though. 
I personally still prefer explicit usage of Boost.Bind or some other
binding/lambda library.  But if you want to hide the binding, why not
just spawn(foo, a, b, c)?

And if we go this route, should we remove the boost::thread constructor
that creates a thread in favor of using spawn() there as well?

   thread t = spawn(foo, a, b, c);

   // do whatever with t
   ...
   double now = later.join();  // or later.get()

 You could also consider the merits of providing an implicit conversion
  from async_result<T> to T.

The merits, and the cons, yes.  I'll be considering this carefully at some
point.

 This approach doesn't get the asynchronous call wound up with the
 meaning of the thread concept.

If I fully understand it, yes it does, but to a lesser extent.  What I
mean by this is that the async_result hides the created thread (though you
do get access to it through the res.thread() syntax).  I found this
surprising enough to require careful thought about the FULL example you
posted to understand this.

-- 
William E. Kempf
[EMAIL PROTECTED]





Re: [boost] Re: A new boost::thread implementation?

2003-02-06 Thread Dave Abrahams
From: William E. Kempf [EMAIL PROTECTED]

  From: Wolfgang Bangerth [EMAIL PROTECTED]
  This does the same, indeed. Starting a thread this way is just a
  little more complex (and -- in my view -- less obvious to read) than
  writing
thread t = spawn(foo)(a,b,c);
 
  But that's just personal opinion, and I'm arguably biased :-)
 
  I'm sure I'm never biased <wink>, and I tend to like your syntax better.
  However, I recognize what Bill is concerned about.  Let me suggest a
  compromise:
 
async_result<double> later = spawn(foo)(a, b, c);

 Mustn't use the name spawn() here.  It implies a thread/process/what ever
 has been spawned at this point, which is not the case.  Or has it (he says
 later, having read on)?

It has.

...
thread t = later.thread();

 The thread type will be Copyable and Assignable soon, so no need for the
 reference.  Hmm... does this member indicate that spawn() above truly did
 create a thread that's stored in the async_result?

Yes.

 Hmm... that would be
 an interesting alternative implementation.  I'm not sure it's as obvious
 as the syntax I suggested

Sorry, IMO there's nothing obvious about your syntax.  It looks cumbersome
and low-level to me.  Let me suggest some other syntaxes for async_result,
though:

async_call<double> later(foo, a, b, c)

or, if you don't want to duplicate the multi-arg treatment of bind(), just:

async_call<double> later(bind(foo, a, b, c));
...
...
double d = later(); // call it to get the result out.

I like the first one better, but could understand why you'd want to go with
the second one.  This is easily implemented on top of the existing
Boost.Threads interface.  Probably any of my suggestions is.

 as evidenced by the questions I've raised here,

Can't argue with user confusion I guess ;-)

 but worth considering.  Not sure I care for spawn(foo)(a, b, c) though.
 I personally still prefer explicit usage of Boost.Bind or some other
 binding/lambda library.  But if you want to hide the binding, why not
 just spawn(foo, a, b, c)?

Mostly agree; it's just that interfaces like that tend to obscure which is
the function and which is the argument list.

 And if we go this route, should be remove the boost::thread constructor
 that creates a thread in favor of using spawn() there as well?

thread t = spawn(foo, a, b, c);

Good point.  I dunno.  I don't see a problem with the idea that
async_call<void> adds little or nothing to thread.

  This approach doesn't get the asynchronous call wound up with the
  meaning of the thread concept.

 If I fully understand it, yes it does, but to a lesser extent.  What I
 mean by this is that the async_result hides the created thread (though you
 do get access to it through the res.thread() syntax).

That's what we mean by the terms "high-level" and "encapsulation" ;-)

 I found this
 surprising enough to require careful thought about the FULL example you
 posted to understand this.

Like I said, I can't argue with user confusion.  Does the name async_call
help?

--
Dave Abrahams
Boost Consulting
http://www.boost-consulting.com




Re: [boost] Re: A new boost::thread implementation?

2003-02-06 Thread Wolfgang Bangerth

  async_result<double> res;
  thread t(bind(res.call(), a, b, c));
  // do something else
  d = res.value();  // Explicitly waits for the thread to return a
  value?
 
  This does the same, indeed. Starting a thread this way is just a little
  more complex (and -- in my view -- less obvious to read) than writing
thread t = spawn(foo)(a,b,c);
 
 Not sure I agree about less obvious to read.  If your syntax had been
thread t = spawn(foo, a, b, c);
 I think you'd have a bit more of an argument here.  And I certainly could
 fold the binding directly into boost::thread so that my syntax would
 become:
 
 thread t(res.call(), a, b, c);
 
 I could even eliminate the .call() syntax with some implicit
 conversions, but I dislike that for the obvious reasons.  I specifically
 chose not to include syntactic binding in boost::thread a long time ago,
 because I prefer the explicit separation of concerns.  So, where you think
 my syntax is less obvious to read, I think it's more explicit.

If you do all this, then you'll probably almost arrive at the code I 
posted :-)

Still, keeping the analogy to the usual call foo(a,b,c), I prefer the 
arguments to foo in a separate pair of parentheses. However, there is 
another point that I guess will make your approach very hard: assume
void foo(int, double, char);
and a potential constructor for your thread class
template <typename A, typename B, typename C>
thread (void (*p)(A,B,C), A, B, C);
Then you can write
foo(1,1,1)
and arguments will be converted automatically. However, you cannot write
thread t(foo, 1, 1, 1);
since template parameters must be exact matches.

There really is no other way than to first get at the argument types in a 
first step, and pass the arguments in a second step. You _need_ two sets 
of parentheses to get the conversions.


  Actually it does duplicate the work, but not because I am stubborn. We
  have an existing implementation for a couple of years, and the present
  version just evolved from this. However, there's a second point: when
  starting threads, you have a relatively clear picture as to how long
  certain objects are needed, and one can avoid several copying steps if
  one  does some things by hand. It's short anyway, tuple type and tie
  function  are your friend here.
 
 I'm not sure how you avoid copies here.

Since you have control over lifetimes of objects, you can pass references 
instead of copies at various places.


t->kill();
t->suspend();
  Someone sees that there's a function yield() but doesn't have the time
  to read the documentation, what will he assume yield() does?
 
 How does someone see that there's a function yield() without also
 seeing that it's static?  No need to read documentation for that, as it's
 an explicit part of the function's signature.

Seeing it used in someone else's code? Just not being careful when reading 
the signature?

I think it's the same argument as with void* : if applied correctly it's 
ok, but in general it's considered harmful.

W.

-
Wolfgang Bangerth email:[EMAIL PROTECTED]
  www: http://www.ticam.utexas.edu/~bangerth/


___
Unsubscribe  other changes: http://lists.boost.org/mailman/listinfo.cgi/boost



Re: [boost] Re: A new boost::thread implementation?

2003-02-06 Thread William E. Kempf

Dave Abrahams said:

 Hmm... that would be
 an interesting alternative implementation.  I'm not sure it's as
 obvious as the syntax I suggested

 Sorry, IMO there's nothing obvious about your syntax.  It looks
 cumbersome and low-level to me.  Let me suggest some other syntaxes for
 async_result, though:

 async_call<double> later(foo, a, b, c)

 or, if you don't want to duplicate the multi-arg treatment of bind(),
 just:

 async_call<double> later(bind(foo, a, b, c));
 ...
 ...
 double d = later(); // call it to get the result out.

The two things that come to mind for me with this suggestion are:

1) You've explicitly tied the result into the call.  I chose the other
design because the result is just that, only a result.  An asynchronous
call can be bound to this result more than once.

2) You're still hiding the thread creation.  This is a mistake to me for
two reasons.  First, it's not as obvious that a thread is being created
here (though the new names help a lot).  Second, and this is more
important, you've bound this concept to boost::thread explicitly.  With
the fully separated concerns of my proposal, async_result can be used with
other asynchronous call mechanisms, such as the coming boost::thread_pool.

   async_result<double> res1, res2;
   thread_pool pool;
   pool.dispatch(bind(res1.call(foo), a, b, c));
   pool.dispatch(bind(res2.call(foo), d, e, f));
   d = res1.value() + res2.value();

 I like the first one better, but could understand why you'd want to go
 with the second one.  This is easily implemented on top of the existing
 Boost.Threads interface.  Probably any of my suggestions is.

Yes, all of the suggestions which don't directly modify boost::thread are
easily implemented on top of the existing interface.

 as evidenced by the questions I've raised here,

 Can't argue with user confusion I guess ;-)

 but worth considering.  Not sure I care for spawn(foo)(a, b, c)
 though. I personally still prefer explicit usage of Boost.Bind or some
 other binding/lambda library.  But if you want to hide the binding,
 why not just spawn(foo, a, b, c)?

 Mostly agree; it's just that interfaces like that tend to obscure which
 is the function and which is the argument list.

OK.  That's never bothered me, though, and is not the syntax used by
boost::bind, so I find it less appealing.

  This approach doesn't get the asynchronous call wound up with the
 meaning of the thread concept.

 If I fully understand it, yes it does, but to a lesser extent.  What
 I mean by this is that the async_result hides the created thread
 (though you do get access to it through the res.thread() syntax).

 That's what we mean by the terms high-level and encapsulation ;-)

Yes, but encapsulation shouldn't hide the implementation to the point that
users aren't aware of what the operations actually are. ;)

But I'll admit that some of my own initial confusion on this particular
case probably stems from having my brain focused on implementation details.

 I found this
 surprising enough to require careful thought about the FULL example
 you posted to understand this.

 Like I said, I can't argue with user confusion.  Does the name
 async_call help?

Certainly... but it leads to the problems I addressed above.  There's likely
a design that will satisfy all concerns; however, that's not been given yet.

-- 
William E. Kempf
[EMAIL PROTECTED]





Re: [boost] Re: A new boost::thread implementation?

2003-02-06 Thread Dave Abrahams
On Thursday, February 06, 2003 12:33 PM [GMT+1=CET],
William E. Kempf [EMAIL PROTECTED] wrote:

 Dave Abrahams said:

   Hmm... that would be
   an interesting alternative implementation.  I'm not sure it's as
   obvious as the syntax I suggested
 
  Sorry, IMO there's nothing obvious about your syntax.  It looks
  cumbersome and low-level to me.  Let me suggest some other syntaxes for
  async_result, though:
 
  async_call<double> later(foo, a, b, c)
 
  or, if you don't want to duplicate the multi-arg treatment of bind(),
  just:
 
  async_call<double> later(bind(foo, a, b, c));
  ...
  ...
  double d = later(); // call it to get the result out.

 The two things that come to mind for me with this suggestion are:

 1) You've explicitly tied the result into the call.  I chose the other
 design because the result is just that, only a result.

Hm? How is the result not a result in my case?

 An asynchronous call can be bound to this result more than once.

...and if it can't be default-constructed?

 2) You're still hiding the thread creation.

Absolutely.  High-level vs. low-level.

 This is a mistake to me for
 two reasons.  First, it's not as obvious that a thread is being created
 here (though the new names help a lot).

Unimportant, IMO.  Who cares how an async_call is implemented under the
covers?

 Second, and this is more
 important, you've bound this concept to boost::thread explicitly.  With
 the fully separated concerns of my proposal, async_result can be used with
 other asynchronous call mechanisms, such as the coming boost::thread_pool.

async_result<double> res1, res2;
thread_pool pool;
pool.dispatch(bind(res1.call(foo), a, b, c));
pool.dispatch(bind(res2.call(foo), d, e, f));
d = res1.value() + res2.value();

This one is important.  However, there are other ways to deal with this.  An
async_call object could take an optional thread-creation parameter, for
example.

  I like the first one better, but could understand why you'd want to go
  with the second one.  This is easily implemented on top of the existing
  Boost.Threads interface.  Probably any of my suggestions is.

 Yes, all of the suggestions which don't directly modify boost::thread are
 easily implemented on top of the existing interface.

No duh ;-)

  That's what we mean by the terms high-level and encapsulation ;-)

 Yes, but encapsulation shouldn't hide the implementation to the point that
 users aren't aware of what the operations actually are. ;)

I don't think I agree with you, if you mean that the implementation should
be apparent from looking at the usage.  Implementation details that must be
revealed should be shown in the documentation.

 But I'll admit that some of my own initial confusion on this particular
 case probably stems from having my brain focused on implementation details.

Ha!

   I found this
   surprising enough to require careful thought about the FULL example
   you posted to understand this.
 
  Like I said, I can't argue with user confusion.  Does the name
  async_call help?

 Certainly... but it leads to the problems I addressed above.  There's likely
 a design that will satisfy all concerns; however, that's not been given yet.

P'raps.

-- 
Dave Abrahams
Boost Consulting
http://www.boost-consulting.com




Re: [boost] Re: A new boost::thread implementation?

2003-02-06 Thread William E. Kempf

Dave Abrahams said:
 On Thursday, February 06, 2003 12:33 PM [GMT+1=CET],
 William E. Kempf [EMAIL PROTECTED] wrote:

 Dave Abrahams said:

   Hmm... that would be
   an interesting alternative implementation.  I'm not sure it's as
 obvious as the syntax I suggested
 
  Sorry, IMO there's nothing obvious about your syntax.  It looks
 cumbersome and low-level to me.  Let me suggest some other syntaxes
 for async_result, though:
 
   async_call<double> later(foo, a, b, c)
 
  or, if you don't want to duplicate the multi-arg treatment of
 bind(), just:
 
   async_call<double> later(bind(foo, a, b, c));
  ...
  ...
  double d = later(); // call it to get the result out.

 The two things that come to mind for me with this suggestion are:

 1) You've explicitly tied the result into the call.  I chose the other
 design because the result is just that, only a result.

 Hm? How is the result not a result in my case?

I didn't say it wasn't a result, I said that it wasn't only a result. 
In your case it's also the call.

 An asynchronous call can be bound to this result more than once.

 ...and if it can't be default-constructed?

That's what boost::optional is for ;).

 2) You're still hiding the thread creation.

 Absolutely.  High-level vs. low-level.

But I think too high-level.  I say this, because it ties you solely to
thread creation for asynchronous calls.

 This is a mistake to me for
 two reasons.  First, it's not as obvious that a thread is being
 created here (though the new names help a lot).

 Unimportant, IMO.  Who cares how an async_call is implemented under the
 covers?

I care, because of what comes next ;).

 Second, and this is more
 important, you've bound this concept to boost::thread explicitly.
 With the fully separated concerns of my proposal, async_result can be
 used with other asynchronous call mechanisms, such as the coming
 boost::thread_pool.

async_result<double> res1, res2;
thread_pool pool;
pool.dispatch(bind(res1.call(foo), a, b, c));
pool.dispatch(bind(res2.call(foo), d, e, f));
d = res1.value() + res2.value();

 This one is important.  However, there are other ways to deal with this.
  An async_call object could take an optional thread-creation parameter,
 for example.

It's not thread-creation in this case.  You don't create threads when
you use a thread_pool.  And there are other examples as well, such as RPC
mechanisms.  And personally, I find passing such a creation parameter to
be turning the design inside out.  It might make things a little simpler
for the default case, but it complicates usage for all the other cases. 
With the design I presented every usage is treated the same.

More importantly, if you really don't like the syntax of my design, it at
least allows you to *trivially* implement your design.  Sometimes there's
something to be said for being lower level.

  That's what we mean by the terms high-level and encapsulation
 ;-)

 Yes, but encapsulation shouldn't hide the implementation to the point
 that users aren't aware of what the operations actually are. ;)

 I don't think I agree with you, if you mean that the implementation
 should be apparent from looking at the usage.  Implementation details
 that must be revealed should be shown in the documentation.

I was referring to the fact that you have no idea if the async call is
being done via a thread, a thread_pool, an RPC mechanism, a simple message
queue, etc.  Sometimes you don't care, but often you do.

-- 
William E. Kempf
[EMAIL PROTECTED]





[boost] Re: A new boost::thread implementation?

2003-02-06 Thread Wolfgang Bangerth

 async_result<double> later = spawn(foo)(a, b, c);
 
  Mustn't use the name spawn() here.  It implies a thread/process/what ever
  has been spawned at this point, which is not the case.  Or has it (he says
  later, having read on)?
 
 It has.

It must, because a/b/c might be temporaries. You must have started the new 
thread and copied the arguments to the new thread before you can return 
from the spawn call, since otherwise the temporaries might have been 
destroyed in the meantime.


 Sorry, IMO there's nothing obvious about your syntax.  It looks cumbersome
 and low-level to me.

Right. That's exactly my criticism. Calling a function on a new thread 
should be just as simple as calling it sequentially. And
spawn(foo)(a,b,c);
i.e. not using the return value, should just create a detached thread.


 I'm sure I'm never biased <wink>, and I tend to like your syntax better.
 However, I recognize what Bill is concerned about.  Let me suggest a
 compromise:

  async_result<double> later = spawn(foo)(a, b, c);
  ...
  thread t = later.thread();
  // do whatever with t
  ...
  double now = later.join();  // or later.get()

The way I went is to derive async_result from thread, so you get the 
.thread() function for free as a derived-to-base conversion. Actually,
async_result=thread and thread=thread_base in what I have, but that's 
just naming.

 You could also consider the merits of providing an implicit conversion 
 from async_result<T> to T.

Possible, but I'd say too confusing.

Cheers
  W.

-
Wolfgang Bangerth email:[EMAIL PROTECTED]
  www: http://www.ticam.utexas.edu/~bangerth/






[boost] Re: A new boost::thread implementation?

2003-02-06 Thread Alexander Terekhov

Dave Abrahams wrote:
[...]
  2) You're still hiding the thread creation.
 
 Absolutely.  High-level vs. low-level.

Yeah ...

 
  This is a mistake to me for
  two reasons.  First, it's not as obvious that a thread is being created
  here (though the new names help a lot).
 
 Unimportant, IMO.  Who cares how an async_call is implemented under the
 covers?

... but to me, that async_call-thing is nothing but a future [one 
would also need an executor to make some use of it]. I can think of 
a thread as a sort of executor but not the other way around.

http://gee.cs.oswego.edu/dl/concurrent/dist/docs/java/util/concurrent/Future.html
http://gee.cs.oswego.edu/dl/concurrent/dist/docs/java/util/concurrent/Executor.html
http://gee.cs.oswego.edu/dl/concurrency-interest

regards,
alexander.




Re: [boost] Re: A new boost::thread implementation?

2003-02-05 Thread William E. Kempf


 Hi Ove,

 f. It shall be possible to send extra information, as an optional
 extra argument
to the  boost::thread ctor, to the created  thread.
 boost::thread::self shall offer a method for retrieving this extra
 information. It is not required that this information be passed in
 a type-safe manner, i.e. void* is okay.

 g. It shall  be possible for a thread  to exit with a return value.
 It shall be
possible for  the creating side to  retrieve, as a return  value
 from join(), that  value. It is  not required  that this  value be
 passed in  a type-safe manner, i.e. void* is okay.

 j. The header file shall not expose any implementation specific
 details.

 Incidentally, I have a scheme almost ready that does all this. In
 particular, it allows you to pass any number of parameters to the new
 thread, and to return any type. Both happen in a type-safe
 fashion, i.e. wherever you would call a function serially like
 double d = foo(a, b, c);
 you can now call it like
 double d;
 thread<double> t = spawn(foo)(a,b,c);
 // do something else
 d = t.return_value();

A solution like this has been proposed before, but I don't like it.  This
creates multiple thread types, instead of a single thread type.  I think
this will only make the interface less convenient, and will make the
implementation much more complex.  For instance, you now must have a
separate thread_self type that duplicates all of thread except for the
data type specific features.  These differing types will have to compare
to each other, however.

I don't feel that this sort of information belongs in the thread object. 
It belongs in the thread function.  This already works very nicely for
passing data, we just need some help with returning data.  And I'm working
on that.  The current idea would be used something like this:

async_result<double> res;
thread t(bind(res.call(), a, b, c));
// do something else
d = res.value();  // Explicitly waits for the thread to return a value?

Now thread remains type-neutral, but we have the full ability to both pass
and return values in a type-safe manner.

 Argument and return types are automatically deduced, and the number of
 arguments are only limited by the present restriction on the number of
 elements in boost::tuple (which I guess is 10). Conversions between
 types  are performed in exactly the same way as they would when calling
 a  function serially. Furthermore, it also allows calling member
 functions  with some object, without the additional syntax necessary to
 tie object  and member function pointer together.

Hopefully you're not duplicating efforts here, and are using Boost.Bind
and Boost.Function in the implementation?

 I attach an almost ready proposal to this mail, but rather than
 steamrolling the present author of the threads code (William Kempf), I
 would like to discuss this with him (and you, if you like) before
 submitting it as a proposal to boost.

Give me a couple of days to have the solution above implemented in the dev
branch, and then argue for or against the two designs.

 Let me add that I agree with all your other topics, in particular the
 separation of calling/called thread interface, to prevent accidents like
 thread t = spawn(foo)(a,b,c);
 t.yield ();// oops, who's going to yield here?

You shouldn't really ever write code like that.  It should be
thread::yield().  But even if you write it the way you did, it will always
be the current thread that yields, which is the only thread that can.  I
don't agree with separating the interfaces here.

 I would be most happy if we could cooperate and join efforts.

Certainly.

-- 
William E. Kempf
[EMAIL PROTECTED]





[boost] Re: A new boost::thread implementation?

2003-02-05 Thread Wolfgang Bangerth

  double d;
  thread<double> t = spawn(foo)(a,b,c);
  // do something else
  d = t.return_value();
 
 A solution like this has been proposed before, but I don't like it.  This
 creates multiple thread types, instead of a single thread type.  I think
 this will only make the interface less convenient, and will make the
 implementation much more complex.  For instance, you now must have a
 seperate thread_self type that duplicates all of thread except for the
 data type specific features.  These differing types will have to compare
 to each other, however.

Make the common part a base class. That's how the proposal I sent you does 
it :-)


 async_result<double> res;
 thread t(bind(res.call(), a, b, c));
 // do something else
 d = res.value();  // Explicitly waits for the thread to return a value?

This does the same, indeed. Starting a thread this way is just a little 
more complex (and -- in my view -- less obvious to read) than writing
  thread t = spawn(foo)(a,b,c);

But that's just personal opinion, and I'm arguably biased :-)


 Hopefully you're not duplicating efforts here, and are using Boost.Bind
 and Boost.Function in the implementation?

Actually it does duplicate the work, but not because I am stubborn. We 
have an existing implementation for a couple of years, and the present 
version just evolved from this. However, there's a second point: when 
starting threads, you have a relatively clear picture as to how long 
certain objects are needed, and one can avoid several copying steps if one 
does some things by hand. It's short anyway, tuple type and tie function 
are your friend here.


 Give me a couple of days to have the solution above implemented in the dev
 branch, and then argue for or against the two designs.

Sure. I'll be away next week, so there's plenty of time :-)


  thread t = spawn(foo)(a,b,c);
  t.yield ();// oops, who's going to yield here?
 
 You shouldn't really ever write code like that.  It should be
 thread::yield().  But even if you write it the way you did, it will always
 be the current thread that yields, which is the only thread that can.  I
 don't agree with separating the interfaces here.

I certainly know that one shouldn't write the code like this. It's just 
that this way you are inviting people to write buglets. After all, you 
have (or may have in the future) functions
  t->kill();
  t->suspend();
Someone sees that there's a function yield() but doesn't have the time to 
read the documentation, what will he assume yield() does?

If there's a way to avoid such invitations for errors, one should use it.

Regards
  Wolfgang


PS: Can you do me a favor and CC: me? I just get the digests of the 
mailing list and replying is -- well, tedious ;-)

-
Wolfgang Bangerth email:[EMAIL PROTECTED]
  www: http://www.ticam.utexas.edu/~bangerth/






[boost] Re: A new boost::thread implementation?

2003-02-05 Thread Alexander Terekhov

Wolfgang Bangerth wrote:
[...]
  async_result<double> res;
  thread t(bind(res.call(), a, b, c));
  // do something else
  d = res.value();  // Explicitly waits for the thread to return a value?
 
 This does the same, indeed. Starting a thread this way is just a little
 more complex (and -- in my view -- less obvious to read) than writing
   thread t = spawn(foo)(a,b,c);

  joinable_thread_ptr<double> tp = new_thread(foo, a, b, c);
  d = *tp->join(); // double pointer is used to report cancelation 
   // and timedjoin() timeout; in this case, the
   // thread shall never be canceled.

 
 But that's just personal opinion, and I'm arguably biased :-)

Right. ;-)

regards,
alexander.

 PS: Can you do me a favor and CC: me? I just get the digests of the 
 mailing list and replying is -- well, tedious ;-)

news://news.gmane.org/gmane.comp.lib.boost.devel
