Re: [boost] Unspecified behaviour in Thread FAQ example

2003-07-10 Thread William E. Kempf

Daniel Spangenberg said:
 Hello, Boosters!

 Today I stumbled across an unspecified behaviour which can be caused by
 the example class counter of the Boost thread FAQ.
 The erroneous lines are inside the copy assignment operator of the
 class:

   boost::mutex::scoped_lock lock1(&m_mutex < &other.m_mutex ?
 m_mutex : other.m_mutex);
   boost::mutex::scoped_lock lock2(&m_mutex > &other.m_mutex ?
 m_mutex : other.m_mutex);

 Reasoning: Both the m_mutex member of *this and the m_mutex member of
 the argument other are not members of the same object, so according to
 5.9/p2, "[...] the results of p<q, p>q, p<=q, and p>=q are unspecified."
 At first sight this does not mean much for the first line (we just
 don't know which of the two mutexes is chosen), but combined with the
 same unspecified behaviour in the second line it can result in an
 erroneous program. A valid implementation could return true (or false)
 in both cases, which could lead to two successive attempts to lock the
 same mutex and therefore to a lock exception.

You're correct, and the solution is simply to replace the < operator with
std::less calls.
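
For illustration, here is a minimal sketch (mine, not the FAQ text itself) of
what the corrected assignment operator could look like, deciding the lock
order once through std::less on the mutex addresses; the counter members are
assumed from the FAQ example:

    #include <boost/thread/mutex.hpp>
    #include <functional>

    class counter
    {
    public:
        counter& operator=(const counter& other)
        {
            if (this == &other) return *this;
            // std::less yields a total order on pointers (20.3.3/8), so
            // every thread agrees on which mutex to lock first.
            bool mine_first =
                std::less<const boost::mutex*>()(&m_mutex, &other.m_mutex);
            boost::mutex::scoped_lock lock1(mine_first ? m_mutex : other.m_mutex);
            boost::mutex::scoped_lock lock2(mine_first ? other.m_mutex : m_mutex);
            m_count = other.m_count;
            return *this;
        }

    private:
        mutable boost::mutex m_mutex;  // mutable so a const source can be locked
        int m_count;
    };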

-- 
William E. Kempf




Re: [boost] Re: Unspecified behaviour in Thread FAQ example

2003-07-10 Thread William E. Kempf

Daniel Spangenberg said:
 Hello William!

 William E. Kempf schrieb:

  You're correct, and the solution is simply to replace the < operator
  with std::less calls.

 You mean the std::less specialization on boost::mutex? (I wasn't aware
 that you provide a total ordering on mutexes.) Otherwise I don't see the
 difference, I have to confess...

20.3.3/8

For templates greater, less, greater_equal, and less_equal, the
specializations for any pointer type yield a total order, even if the
built-in operators <, >, <=, >= do not.

-- 
William E. Kempf




Re: [boost] Re: Re: Re: Re: thread::current() ?

2003-07-01 Thread William E. Kempf

Philippe A. Bouchard said:
 William E. Kempf wrote:


 Philippe A. Bouchard said:
 William E. Kempf wrote:

 [...]

 As already pointed out, to associate data with a thread you use
 thread_specific_ptr.  BTW, you still have to remember that the
 functor is copied, and data passed to/in the functor is not
 considered part of the thread in any event.

 Ok, how do you find out the data of the current thread?  The key in
 boost::detail::tss is not the same as the one in boost::thread.

  What data?  The actual thread data (there's not much, beyond the
  thread id which isn't directly accessible) is found by calling the
  default c-tor on the thread.  The data passed in the functor is your
  responsibility. Either you explicitly pass it everywhere that's
  needed, or the thread stores it in a tss key.


 Suppose you have:

 struct functor1
 {
     list<void *> m_list;

 void operator () ()
 {
 ...
 }
 };

 struct functor2
 {
     list<void *> m_list;

 void operator () ()
 {
 ...
 }
 };

 int main()
 {
 thread f1(functor1());
 thread f2(functor2());

 ...

 // I would like to access m_list of the current thread's
 functor:

 lock()...
 if (thread() == f1 || thread() == f2)
 {
 thread()..(whatever casts)...m_list;
 }
 unlock()...

  // I think the only way to do this is by mapping the thread's id
  // with the object's address (map<key, functor1 *>) but there is
  // no standard key publicly accessible and map creates overhead.
 }

The functor may be copied any number of times, so this is problematic.  If
you passed it by reference, possibly using boost::ref, then the answer
would be to just retain the functor as a named variable instead of passing
a temporary.  I'd say that's typically the answer for most use cases.  For
some, you may want to place the functor in a thread_specific_ptr, as
suggested, but this only allows you to recover the data for the current
thread.
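
A minimal sketch (mine, not from this exchange) of the thread_specific_ptr
approach: the functor publishes its own data when the thread starts, and code
running later in that thread recovers it without knowing the functor's type.
The names current_list and functor1 are illustrative only:

    #include <boost/thread/thread.hpp>
    #include <boost/thread/tss.hpp>
    #include <list>

    // Per-thread slot; the pointer it holds is deleted when the thread exits.
    boost::thread_specific_ptr<std::list<void*> > current_list;

    struct functor1
    {
        std::list<void*> m_list;

        void operator()()
        {
            // Publish a copy of this thread's list; reset() takes ownership.
            current_list.reset(new std::list<void*>(m_list));
            // ... thread work; anything called from here can use
            // current_list.get() to reach the data.
        }
    };

    int main()
    {
        functor1 f;            // named functor, as suggested above
        boost::thread t(f);
        t.join();
        return 0;
    }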

The only way I can see to give the interface you're suggesting is to either
type the thread, which is problematic for so many reasons, or to employ
something similar to boost::any.

-- 
William E. Kempf




Re: [boost] thread::current() ?

2003-06-30 Thread William E. Kempf

Philippe A. Bouchard said:
 Hi there,

 I was wondering if you were planning to implement some static thread&
 thread::current() that returns the current thread id (a thread&).  That
 would be really useful.

Can't be done with the current non-copyable semantics.  (BTW, I'm assuming
you know this functionality still exists, via the default c-tor.)  The
next release is changing this, though.  What I'm contemplating as the best
design is a thread::current() as you suggest, with the default c-tor
creating a null thread... but that will break existing code, so I have
to tread lightly here.

-- 
William E. Kempf




Re: [boost] DLL hell

2003-06-30 Thread William E. Kempf

Martin Bosticky said:
 Hello everybody,

 I just got the boost_1.30.0 version. Some libraries (like thread)
 require use of a DLL. However I would like to avoid the DLL hell. From
 looking at the output of the thread build it looks like a statically
 linkable library is not available.

 Is there some reason why I would not be able to create a static library
 for the thread library in VC7.1? Has anybody done this already? I feel
 this is very important because maintenance of multiple dll versions on
 client machines can be a really nasty problem.

Maintenance of multiple versions of a DLL isn't as bad as you make out. 
Just place them in the app directory and don't worry about actually
sharing the library ;).

I *know* that DLLs aren't the ideal distribution mechanism on Windows
(despite its popularity), but yes, there's a reason for it.  Thread cleanup
can't be implemented any other way.  Read the archives where this
subject gets discussed at least once a month.  (I'll add a FAQ entry about
this soon.)

-- 
William E. Kempf




Re: [boost] Re: thread::current() ?

2003-06-30 Thread William E. Kempf

Philippe A. Bouchard said:
 Howard Hinnant wrote:

 On Saturday, June 28, 2003, at 02:43  PM, Philippe A. Bouchard wrote:

 Hi there,

 I was wondering if you were planning to implement some static thread&
 thread::current() that returns the current thread id (a thread&).
 That would be really useful.

 The thread default constructor serves this role:

 void foo()
 {
  thread self;  // current thread
  ...
 }


 Thanks... but is it possible to obtain the initial address of the
 functor object portably, given the current thread object?

No, and why would you want to?  Especially since it will be a pointer to a
_copy_ of the functor?

-- 
William E. Kempf




Re: [boost] Boost::thread feature request: thread priority

2003-06-30 Thread William E. Kempf

Maxim Egorushkin said:
 Hello,



 I've been missing a feature in the thread library: managing thread
 priority. And, BTW, a class encapsulating stopwatch functionality with
 millisecond precision would be very useful. It would help in writing more
 portable programs (as boost::thread is portable).

Priorities are implemented, but still undergoing design changes, in the
thread_dev branch.  The timer, if I understand what you want, is trivial
to implement portably with the current Boost.Threads interfaces, but I do
plan on addressing this as well.

 I'm aware of the fact that it's very operating system specific. But I do
 think that it could be done with elegance and ease, in the spirit the
 whole library adheres to. The first thing to come to mind is to add a
 couple of member functions to boost::thread like this:

 class thread

 {

 // ...

 void increase_priority();

 void decrease_priority();

Not that useful, IMO.  Usually you want to set a specific priority, and
this interface would require several calls to do so.  Further, these calls
can fail, so should probably have bool return types.

 // ...

 };



 I'd really love to have these abilities in boost::thread.

 Please tell me whether it's possible.

Difficult to design portably, but possible.

-- 
William E. Kempf




Re: [boost] Re: Re: thread::current() ?

2003-06-30 Thread William E. Kempf

Philippe A. Bouchard said:
 William E. Kempf wrote:

 [...]

 Thanks... but is it possible to obtain the initial address of the
 functor object portably, given the current thread object?

 No, and why would you want to?  Especially since it will be a pointer
 to a _copy_ of the functor?

 Because I would like to access specific information of the newly created
 thread.  Being able to match this information with the current thread
 would require you to have some pointer to the class functor.

As already pointed out, to associate data with a thread you use
thread_specific_ptr.  BTW, you still have to remember that the functor is
copied, and data passed to/in the functor is not considered part of the
thread in any event.

-- 
William E. Kempf




Re: [boost] Re: Boost::thread feature request: thread priority

2003-06-30 Thread William E. Kempf

Maxim Egorushkin said:

 William E. Kempf [EMAIL PROTECTED] wrote in message
 news:[EMAIL PROTECTED]

 Priorities are implemented, but still undergoing design changes, in
 the thread_dev branch.  The timer, if I understand what you want, is
 trivial to implement portably with the current Boost.Threads
 interfaces, but I do plan on addressing this as well.

 Speaking about the timer, I meant something like this:

 typedef int milliseconds;

 class stopwatch
 {
 public:
 stopwatch()
 : started_(::GetTickCount())
 {}

 milliseconds elapsed() const
 {
 return ::GetTickCount() - started_;
 }

 private:
 const DWORD started_;
 };

Ahh... that's not a threading concept ;).

-- 
William E. Kempf




Re: [boost] Re: Re: Re: thread::current() ?

2003-06-30 Thread William E. Kempf

Philippe A. Bouchard said:
 William E. Kempf wrote:

 [...]

 As already pointed out, to associate data with a thread you use
 thread_specific_ptr.  BTW, you still have to remember that the
 functor is copied, and data passed to/in the functor is not
 considered part of the thread in any event.

 Ok, how do you find out the data of the current thread?  The key in
 boost::detail::tss is not the same as the one in boost::thread.

What data?  The actual thread data (there's not much, beyond the thread id
which isn't directly accessible) is found by calling the default c-tor on
the thread.  The data passed in the functor is your responsibility.
Either you explicitly pass it everywhere that's needed, or the thread
stores it in a tss key.

-- 
William E. Kempf




Re: [boost] Re: Re: Boost::thread feature request: thread priority

2003-06-30 Thread William E. Kempf

Maxim Egorushkin said:

 William E. Kempf [EMAIL PROTECTED] wrote in message
 news:[EMAIL PROTECTED]

  Speaking about the timer, I meant something like this:
 
  typedef int milliseconds;
 
  class stopwatch
  {
  public:
  stopwatch()
  : started_(::GetTickCount())
  {}
 
  milliseconds elapsed() const
  {
  return ::GetTickCount() - started_;
  }
 
  private:
  const DWORD started_;
  };

 Ahh... that's not a threading concept ;).

 Let me disagree here :) A couple of days ago I was implementing a user
 mode task scheduler. I had the scheduler thread updating the tasks'
 delays 4 times per second and putting tasks that were ready for
 execution into the execution queue. I tried to make it portable, but
 the problem was that I couldn't be sure that the scheduler thread
 would receive its time slice exactly every 250 ms. To solve the
 problem I decided to increase the scheduler thread's priority and to
 measure the time the thread spent sleeping till the next time slice.
 I was using the boost::thread library, but my solution couldn't be
 implemented by means of the library alone, which made my code
 unportable. That was the rationale of my posting.

I'm way confused here.  The code you have above simply tracks elapsed
time.  This is in no way thread specific, and Boost already has such a
library.
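
For reference, a minimal sketch (mine, not from this thread) of the portable
equivalent of the GetTickCount() stopwatch above, using the existing
Boost.Timer library; note that boost::timer is based on std::clock, so its
meaning and resolution are platform dependent:

    #include <boost/timer.hpp>
    #include <iostream>

    int main()
    {
        boost::timer t;                 // starts timing on construction
        // ... do some work ...
        double seconds = t.elapsed();   // elapsed time in seconds
        std::cout << seconds * 1000.0 << " ms\n";
        return 0;
    }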

Now, what you're describing sounds more like this:

class timer
{
public:
    timer(boost::function<void> on_event, int ms, bool repeat = false);
};

Which do you really want?

As for the problem that you couldn't be sure the scheduler thread would
receive its time slice exactly every 250 ms: you simply can't do this,
portably or not.  Granularity issues of the underlying clock aside,
I'm not aware of any scheduler that would give you this sort of control,
and fiddling with the priorities will only give you the illusion that
you've accomplished what you want.

-- 
William E. Kempf




RE: [boost] Draft of new Boost Software License

2003-06-27 Thread William E. Kempf
Paul A. Bristow said:


 | -Original Message-
 | From: [EMAIL PROTECTED]
 | [mailto:[EMAIL PROTECTED] Behalf Of Rene Rivera
 | Sent: Wednesday, June 25, 2003 8:26 PM
 | To: Boost mailing list
 | Subject: Re: [boost] Draft of new Boost Software License
 |
 | Spanish is my first, but English is a very close second.

 | The impression I got is that it's somewhat hard to parse as it is.
 | The second paragraph is long; and without any separators other than
 | the commas it's hard to read.
 |
 | Here's an edited version which might be better for non-english
 | readers to understand:
 |
 | Permission is hereby granted ...
 snip
 | all derivative works of the Software. Unless such copies or derivative
 | works are solely in the form of machine-executable object code
 | generated by a source language processor.

 As someone whose first language really is english - unlike the majority
 of ex-colonial Boosters :-))

 I really must protest that the last 'sentence' isn't one!

 Seriously, overall I think this is excellent.

 It isn't meant to be read by human beings, only lawyers - and they are
 used to this stuff.

 And:

 //  (C) Jane Programmer, 2003
 //   See www.boost.org/license for license terms and conditions
 //   See www.boost.org/libs/janes-lib for documentation

 Looks fine to me, though I prefer Copyright to (C)

It looks simple, but would it be legally binding?  For instance, say I
release my software with this Boost license today, using the above text
(and assuming the links point to the license, of course).  Now, a year
from now something is found to be problematic with the license and the
lawyers tweak it.  I can see a case being made that existing projects
could from that point on be changed to be covered by this new license, but
previous releases would seem to have to be legally bound to the license as
it existed then.  The above links, however, will not refer to this older
license, but to the newer license.  This seems to make the above scheme a
little legally shaky, no?  I thought you had to physically include the
license with distributions and have the individual file licenses refer to
this distributed license?

That's obviously a question for the lawyers, as us laymen will only be
guessing ;).

But it would be nice to just refer to the license instead of repeating it
in every single file.

-- 
William E. Kempf




Re: [boost] Threads and msvc 7

2003-06-13 Thread William E. Kempf

Ulrich Eckhardt said:
 On Thursday 12 June 2003 17:05, William E. Kempf wrote:
 JOLY Loic said:
  1/ Dynamic libraries
  Although I compiled boost with the option -sBUILD=debug release
  <runtime-link>static/dynamic, the library is still generated as a
  DLL. I do not exactly know what is meant by static in this case.

 <runtime-link> specifies how you link against the C RTL, not what type
 of library will be built.

 Currently (and for the foreseeable future), Boost.Threads will only
 produce dynamic link libraries.  Read the mail archives for a detailed
 explanation of why.

 There is one rather hackish way to use static linkage. We simply set up
 three .cpp files that include the .cpp files from the boost source tree.
 Works fine for 1.28 and 1.29 and only required one conditional for 1.30.
 If there is interest, I can post these files here (I'm not at work
 atm...).

Prior to 1.30 there'd have been no need to do this.  Most of Boost.Threads
was a static library.  The exception was thread_mon.dll which was required
for thread_specific_ptr, and there's simply no way out on that one, it
must be a DLL.

The Boost.Threads implementation will be relying on TLS data internally
very shortly... at which point there will be no way to hack the code
into a static library, so I wouldn't recommend doing so now.  Sorry.

-- 
William E. Kempf




Re: [boost] Threads and msvc 7

2003-06-13 Thread William E. Kempf

Loïc Joly said:
 William E. Kempf wrote:
I sympathize, but it's just not reasonable.  Again, read the archives.

 Thank you for your fast answer !

 I did try to look in the archives before posting my mail, but I could not
 find a relevant mail in this huge archive. Do you remember roughly at
 what time this discussion took place, to help me narrow my search?

Sorry, I should have given a more meaningful response, but I was in a
hurry.  Start looking here:
http://aspn.activestate.com/ASPN/Mail/Message/1566699.

-- 
William E. Kempf




RE: [boost] [filesystem] '.' directory-placeholder added

2003-06-13 Thread William E. Kempf

Reece Dunn said:
 Beman Dawes wrote:

I gave that some consideration at one time, but the full URI spec
 (http://www.ietf.org/rfc/rfc2396.txt) covers so much that is totally
 outside the scope of a filesystem library that it really seems an
 over-generalization to try to include it as part of filesystem::path.
 The tail would soon be wagging the dog :-)

 I was not suggesting that URL handling was a part of the file system
 library. What I was considering was a URL library with a *bridge* to the
  file system library, e.g. (I may have the names wrong):

    boost::url::url localsite( "http://localhost/xml/docs/overview.xml" );

    // use native OS interface:
    boost::path localpath1 = boost::url::getpath( localsite );

This mapping can't be easily done.  Where this maps to is basically known
only to the web server.  A mapping from a file://xml/docs/overview.xml
URI would be useful, however.  It should also be fairly trivial.
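
A rough sketch (a hypothetical helper, not part of Boost.Filesystem) of the
trivial mapping mentioned here, ignoring host names, percent-encoding and
platform quirks such as drive letters:

    #include <boost/filesystem/path.hpp>
    #include <stdexcept>
    #include <string>

    // Hypothetical helper: strip the "file://" scheme and treat the
    // remainder as a filesystem path.
    boost::filesystem::path path_from_file_uri(const std::string& uri)
    {
        const std::string scheme = "file://";
        if (uri.compare(0, scheme.size(), scheme) != 0)
            throw std::invalid_argument("not a file:// URI");
        return boost::filesystem::path(uri.substr(scheme.size()));
    }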

-- 
William E. Kempf




Re: [boost] Threads and msvc 7

2003-06-12 Thread William E. Kempf

JOLY Loic said:
 Hello,

 I am currently trying to use the Boost.Threads library on windows (VC++
 7.0 and 7.1 compilers), and I wonder about some points :

 1/ Dynamic libraries
 Although I compiled boost with the option -sBUILD=debug release
 <runtime-link>static/dynamic, the library is still generated as a DLL.
 I do not exactly know what is meant by static in this case.

<runtime-link> specifies how you link against the C RTL, not what type of
library will be built.

Currently (and for the foreseeable future), Boost.Threads will only produce
dynamic link libraries.  Read the mail archives for a detailed explanation
of why.

 What I know is I would appreciate to link fully statically with a .lib
 file and no .dll at run-time.

I sympathize, but it's just not reasonable.  Again, read the archives.

 2/ The use of DLL-exported classes that derive from or use non-DLL-exported
 classes as member variables generates some warnings in msvc that fall into
 two categories (4275 and 4251). Would it be possible to insert #pragmas to
 remove these spurious warnings?

I'm addressing this issue.

-- 
William E. Kempf




RE: [boost] Threads and msvc 7

2003-06-12 Thread William E. Kempf

Adrian Michel said:
 2/ The use of DLL-exported classes that derive from or use non-DLL-exported
 classes as member variables generates some warnings in msvc that fall into
 two categories (4275 and 4251). Would it be possible to insert #pragmas to
 remove these spurious warnings?

 These warnings are generated because your project is set to link with
 the static version of the MFC library, while the boost libraries link
 with the MFC dll. Change the settings in your project and they will
 disappear.

No MFC library is used by Boost.Threads.  The warnings are known and will
be fixed soon. Ignore them for now.

-- 
William E. Kempf




RE: [boost] Threads and msvc 7

2003-06-12 Thread William E. Kempf

Adrian Michel said:
 I am using MSVC 6, but I run into the same problem. Changing the project
 settings to use the MFC dll cleared the warnings.

 Moreover, I tried to run my project with no MFC support and I got this
 message:
 d:\documents and
 settings\administrator\desktop\dev\boost_1_30_0\boost\thread\thread.hpp(17)
 : fatal error C1189: #error :Thread support is unavailable!

 I did look deeper into the problem, but there seems to be some hidden
 MFC dependency in the thread libraries.

No, there is no MFC dependency.  Changing your project settings to use the
MFC dll cleared the warnings because this change also affects how you link
against the C RTL.  When you tried to compile the project with no MFC you
got the error you did because you failed to compile against a
multi-threaded C RTL.  All dependencies in Boost.Threads are with the C
RTL and not MFC.

-- 
William E. Kempf




Re: [boost] Re: Managing boost dependencies

2003-06-09 Thread William E. Kempf

Edward Diener said:
 David Abrahams wrote:
 Edward Diener [EMAIL PROTECTED] writes:

 David Abrahams wrote:
 Edward Diener [EMAIL PROTECTED] writes:
 I will look at the WinCVS site to see if there are NGs or mailing
 lists that might help me out.

 Suit yourself; I'm trying to suggest that you not waste your time, at
 least at first, and instead dig into http://cvsbook.red-bean.com/

 Thanks for the link.

 Do realize that people are different and that my programming preference
 is almost always to use a GUI interface over command lines as long as
 the GUI interface lets me do what I want to accomplish. Of course I
 write actual code in a fancy editor just like everyone else <g>. I will
 dig into the .pdf version of the link first, although my initial
 reaction to CVS was not joyful, but I am sure it can not be that arcane
 to use.

I normally prefer GUIs as well.  But in this case, I have to agree with
Dave.  You should learn the command line long before using any GUI,
especially one with only a thin wrapper like WinCVS.  It's an easier
learning curve on a very complex tool.

-- 
William E. Kempf




Re: [boost] Re: no semaphores in boost::thread

2003-06-07 Thread William E. Kempf

Stefan Seefeld said:
 Alexander Terekhov wrote:
 This is far from elegant, because the queue data structure already
 knows the number of messages either implicitly or with an extra
 counter of its own.

 well, I need to know it explicitly when I want to extract the next one,
 so either the queue caches its length, or I have to count each time.

 Usually, what is most relevant is whether or not the queue is empty.
 So the semaphore is coding redundant information. Note that at times,
 this information will be out of sync. For example, say you have
 this sequence of actions:

 lock();
 append(item);
 unlock();
 signal(semaphore);

 In between the append and signal operation, the queue has N+1 items,
 but the semaphore's value is out of date at N.

 so what ? the 'real' queue length is kept private and doesn't matter
 much. It's the signaling of the semaphore that makes the change public.

This is a race condition.  It also occurs when extracting data from the
queue.  Whether or not the 'real' queue length is private is not
relevant; this race condition can lead to improper synchronization, such
as trying to extract data when there's no data left to extract.

 A semaphore has an internal lock which protects increments and
 decrements of its internal counter. There is no way to extend that
 lock to cover additional data such as a queue. With a mutex and
 condition variable, the entire queue can be treated as a semaphore
 based on its *own* state!

 I never said it can't. Besides, I'd like to argue about your description
 of the implementation of semaphores. There is no need for a lock around
 the internal counter, if the semaphore is implemented with atomic
 counters. Of course, that's true only on platforms that support atomic
 counters...

That's still a form of a lock.

 And then there is the other semaphore I use to count the free slots,
 which you didn't comment on, probably because it didn't fit into your
 arguments...

No, actually, it strengthens the argument, because you now have even more
state that needs to be synchronized to ensure against race conditions.

 This is my last mail in this thread. It's not related to boost any more
 anyways. We have to agree to disagree.

If you want semaphores to be added to Boost.Threads, the arguments are
very much on topic here.

-- 
William E. Kempf




Re: [boost] Re: no semaphores in boost::thread

2003-06-07 Thread William E. Kempf

Stefan Seefeld said:
 William E. Kempf wrote:

so what ? the 'real' queue length is kept private and doesn't matter
 much. It's the signaling of the semaphore that makes the change
 public.


  This is a race condition.  It also occurs when extracting data from
  the queue.  Whether or not the 'real' queue length is private is not
  relevant; this race condition can lead to improper synchronization,
  such as trying to extract data when there's no data left to
  extract.

 Can you elaborate ? I can manipulate the queue as much as I want, the
 availability of tasks will be known to consumers only when they are
 signaled, not when the queue is non-empty. Where is the race condition ?
 (Same argument for the empty slots)

I can't elaborate easily, especially without reference code.

 Oh, of course the queue needs a mutex, too (as I said in my original
 mail), just to protect the queue's internal structure, so a task
 extraction may look like that:

 template <typename T>
 T task_queue::consume()
 {
     my_tasks.wait();                      // decrements 'tasks' counter
     Prague::Guard<Mutex> guard(my_mutex); // protects queue impl
     T t = rep_type::front();              // copies next task (mustn't throw!)
     rep_type::pop();                      // removes task from queue impl
     my_free.post();                       // announce availability of a free slot
     return t;
 }

 The only tricky thing here is to make sure T's copy constructor doesn't
 throw.

As soon as synchronization relies on *BOTH* a mutex and a sema/event,
you've got a race condition.

And then there is the other semaphore I use to count the free slots,
 which you didn't comment on, probably because it didn't fit into your
 arguments...


 No, actually, it strengthens the argument, because you now have even
 more state that needs to be synchronized to ensure against race
 conditions.

 I don't understand this. The state in question is the difference between
 the capacity of the queue and its current length. The only variable
 holding this state is the semaphore ('my_free' in my code snippet). What
 do I need to synchronize here ?

The semaphore only represents a logical count... the queue holds the
actual count (even if it's not publicly available).  That's why you use a
mutex in your code... to protect the actual shared state.  Semas/events
are only useful when the count/flag is the *only* state.  Otherwise, you
have more synchronization to do, which can be very tricky to do without
race conditions.
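
For contrast, here is a minimal sketch (mine, not from this thread) of a
bounded queue written the way Boost.Threads encourages: a single mutex and
two condition variables guard the queue's own state, so there is no separate
counter that can fall out of sync with it:

    #include <boost/thread/mutex.hpp>
    #include <boost/thread/condition.hpp>
    #include <deque>
    #include <cstddef>

    template <typename T>
    class bounded_queue
    {
    public:
        explicit bounded_queue(std::size_t capacity) : m_capacity(capacity) {}

        void produce(const T& item)
        {
            boost::mutex::scoped_lock lock(m_mutex);
            while (m_queue.size() == m_capacity)   // wait for a free slot
                m_not_full.wait(lock);
            m_queue.push_back(item);
            m_not_empty.notify_one();
        }

        T consume()
        {
            boost::mutex::scoped_lock lock(m_mutex);
            while (m_queue.empty())                // wait for data
                m_not_empty.wait(lock);
            T item = m_queue.front();
            m_queue.pop_front();
            m_not_full.notify_one();
            return item;
        }

    private:
        std::size_t m_capacity;
        std::deque<T> m_queue;
        boost::mutex m_mutex;
        boost::condition m_not_full;
        boost::condition m_not_empty;
    };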

-- 
William E. Kempf




Re: [boost] Re: no semaphores in boost::thread

2003-06-07 Thread William E. Kempf

Stefan Seefeld said:
 William E. Kempf wrote:

 As soon as synchronization relies on *BOTH* a mutex and a sema/event,
 you've got a race condition.

 hmm, I'm not sure I have the same definition for 'race condition' as you
 have. Of course I could write non-safe code that presents a race
 condition. Is your point that you want to make it impossible to write
 non-thread-safe code ?

If only that were possible.  No, that's not my point.  But semas/events
are certainly difficult enough to use in all but the simplest cases that
they were removed from the library.

 Or are you claiming that the code I have shown contains a race condition
 (which I still don't see) ?

I haven't seen your code to say for sure, but from the limited description
I believe there's a very high probability that this is the case.

-- 
William E. Kempf




Re: [boost] Re: an XML API in boost

2003-06-04 Thread William E. Kempf

Stefan Seefeld said:
 Vincent Finn wrote:
 What I did was to provide a *thin* wrapper around the internal C
 structs used by libxml2, so every dom manipulation call can be
 delegated down to libxml2. For example xpath lookup: I call libxml2's
 xpath API, returning me a C structure (possibly) holding a node set,
 i.e. a list of C nodes. I just need to map these C structs back to my
 C++ wrapper objects and I'm done with it. (Luckily for me, libxml2
 provides all the hooks to make that lookup very efficient...)


 One problem would be the licence.
 libxml2 is a Gnu project, isn't it?
 That means it's under the Gnu licence, which is far more restrictive
 than the boost licence.

 there is no such thing as the 'Gnu licence'. There is the 'GNU General
 Public License' (aka GPL) and the 'GNU Lesser General Public License'
 (LGPL). libxml2 uses neither, and its license is fully compatible with
 boost's license requirements.

Maybe, but it fails the Boost Library Guidelines:

Use the C++ Standard Library or other Boost libraries, but only when the
benefits outweigh the costs.  Do not use libraries other than the C++
Standard Library or Boost. See Library reuse (edit:
http://www.boost.org/more/library_reuse.htm).

If a submitted library required libxml2, I'd personally vote no.  If the
interface was supposed to be portable to other backends, I'd probably
still vote no unless at least one other backend was included as proof of
concept.  It would still be nice to have a Boost supplied backend,
probably via Spirit, but so long as I was confident that I was not tied to
any specific non-Boost library, it wouldn't matter that much.

-- 
William E. Kempf




Re: [boost] Re: Re: an XML API in boost

2003-06-04 Thread William E. Kempf

Vladimir Prus said:
 William E. Kempf wrote:

 there is no such thing as the 'Gnu licence'. There is the 'GNU
 General Public License' (aka GPL) and the 'GNU Lesser General Public
 License' (LGPL). libxml2 uses neither, and its license is fully
 compatible with boost's license requirements.

 Maybe, but it fails the Boost Library Guidelines:

 Use the C++ Standard Library or other Boost libraries, but only when
 the benefits outweigh the costs.  Do not use libraries other than the
 C++ Standard Library or Boost. See Library reuse (edit:
 http://www.boost.org/more/library_reuse.htm).

 If a submitted library required libxml2, I'd personally vote no.  If
 the interface was supposed to be portable to other backends, I'd
 probably still vote no unless at least one other backend was included
 as proof of concept.  It would still be nice to have a Boost supplied
 backend, probably via Spirit, but so long as I was confident that I
 was not tied to any specific non-Boost library, it wouldn't matter
 that much.

 I tend to disagree here. Writing an XML library is not easy, and libraries
 like expat and libxml2 are already here, working and debugged. The
 effort to write a new library from scratch would be quite serious, and
 would result in anything tangible only after a lot of time. Unless
 somebody has a really large amount of spare time, wrapping an existing
 library is the only way XML support can be added to boost.

Careful with what you disagree with.  I stated that it would still be nice
to have a Boost supplied backend, but I didn't state this was a
requirement.  What I think *is* a requirement is that any wrapper library
not be tied to a single backend, and I personally believe that what
follows from that is that the submission must have at least 2 referenced
backends for proof of concept.  Note that this is precisely what
Boost.Threads does, for instance.

-- 
William E. Kempf




Re: [boost] Re: Re: an XML API in boost

2003-06-04 Thread William E. Kempf

Stefan Seefeld said:
 William E. Kempf wrote:

  What I think *is* a requirement is that any wrapper library
 not be tied to a single backend, and I personally believe that what
 follows from that is that the submission must have at least 2
 referenced backends for proof of concept.

 Fair enough. What would you suggest I do? I do have a working
 wrapper around libxml2, but I don't have the time to reimplement it
 around another backend. Is this something that could be done in the
 boost cvs sandbox?

Yes, the sandbox would probably be useful.  If you don't have the time to
make it work with another backend, but still feel that it is portable in
this manner, you might go ahead and submit anyway.  I personally would be
inclined to vote no unless I felt it was fairly obvious that the API
truly is portable to other backends without proof in multiple
implementations, but others might not feel the same.  The other
alternative would be to ask for volunteers to do the port before
submission.

 All I wanted to do is

 a) find out whether there is interest in a boost XML library

Absolutely!  This has been discussed before.

 b) if the answer to a) is 'yes' get feedback as to how to get there

I think that's what we're trying to do ;).

I don't want to discourage you... in fact, I'd like to do the opposite.  I
just haven't had the time to look at what you have so far to give any
helpful criticism, other than to emphasise that Boost discourages tight
coupling to libraries other than Boost or the standard libraries.  This
doesn't mean that you have to provide a full implementation of the back
end parser as a Boost submission (though I do think that would be an
interesting submission in and of itself), only that you need to convince
people that you aren't tied to some other library.

-- 
William E. Kempf




Re: [boost] Re: Re: Re: an XML API in boost

2003-06-04 Thread William E. Kempf

Vladimir Prus said:
 William E. Kempf wrote:
 Oh.. I misread your post. Apologies. Still, from a practical point of
 view I can hardly imagine that if the libxml2 wrapper works, somebody will
 take the time to plug in another backend. That would mean rewriting
 all/most implementation methods and will bring no end user value --- so
 it's not a sufficiently interesting task for anybody to take on.

That totally depends on the wrapper.  If it's a thin wrapper it will have
very tight coupling and will require extensive rewriting, as you say.  But
such a design wouldn't be interesting to me anyway, as a Boost
submission.

In any event, the amount of rewriting would be no different than the
amount of code variation there is in Boost.Threads for the three target
platforms it supports.

-- 
William E. Kempf




Re: [boost] Re: shared_ptr/weak_ptr and thread-safety

2003-06-04 Thread William E. Kempf

Alexander Terekhov said:

 William E. Kempf wrote:
 [...]
 Not specifying the exact width
 of various types is not really something that I think most people
 would classify as brain damaged.

 That's not the only problem with MS-interlocked API. For example, for
 DCSI and DCCI things, all you need is hoist-load and sink-store
 barriers; a full acquire/release is an overkill, so to speak. Also,
 for basic thread-safe reference counting, you really want to have
 naked increments and either naked decrements [for the immutable
 stuff] or decrements with acquire-if-min/release-if-not-min
 memory synchronization semantics.

I'd agree with that, but that's not what you said in the thread to defend
the brain damaged remark.  Further, even this wouldn't classify it as
brain damaged to me, because what they give can be used correctly and
successfully for some use cases.  Brain damaged implies it can't be used
at all.  It's too derogatory to be used for anything less, and you throw
the term around very frequently.

 Now, can you provide documentation for the above, including
 preconditions, postconditions, etc. for each method?

 Do you mean refcount's methods? atomic stuff? refs-thing?

All of it.

 A man-pages-like specification for plain C version of optionally
 non-blocking pthread_refcount_t without parameterization (I mean
 thread-safety and counter type) can be found here:

 http://terekhov.de/pthread_refcount_t/draft-edits.txt

Thanks.  I'll take a look at it.

-- 
William E. Kempf




Re: [boost] Re: no semaphores in boost::thread

2003-06-04 Thread William E. Kempf

Stefan Seefeld said:
 Alexander Terekhov wrote:

 It is showing that semas (e.g. bin-semas aka auto-reset events) are
 really error-prone.

 you seem to equate microsoft's implementation of semaphores with
 the concept of semaphores (which is what I'd like to get feedback on).

No, you miss Alexander's point (which is easy to do, with his
communication style... in this case he points you to a good example, but
fails to explain why it's a good example).

His point is that the MS concept of an auto-reset event is the same
thing as a binary semaphore.  The MeteredSection concept in this article
was implemented using an auto-reset event (bin-semaphore), and on the
surface looks like a reasonable implementation.  However, if you do a
thorough analysis of this implementation you'll find that it's prone to
race conditions.

Another great example is the attempts to implement a condition variable
concept using semaphores, as has been done sooo many times on Windows. 
Nearly every attempt has been flawed, and the valid solutions are
extremely complex.

 If all that is wrong is that microsoft does a crappy job at implementing
 them, the response could be to provide a special implementation using
 mutexes and cv's *for the MS platforms*, and using native
 implementations when possible.

MS's actual semaphore is as valid an implementation as any other
(Alexander will claim them to be brain damaged, but that's because of
the design, not the implementation).

 As boost doesn't, there must clearly be other reasons for them not to do
 that.

There is, but the explanations are long and quite complex.  That's why the
FAQ points you at a seminal paper on the subject, rather than attempting
to explain it.  Like I've said in numerous arguments about the Event
concept, the problem with the concept isn't that it's broken or unusable,
only that it's difficult to actually use correctly.  Most users think
their code is correct, when in fact they have race conditions waiting to
bite them.  When Mutexes and Condition variables provide everything that
Semaphores and Events do, but in a way that's easier to use correctly, the
choice to not include Events or Semaphores is reasonable.
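
As an illustration (a sketch of mine, not part of the original post), the
auto-reset event behaviour discussed above can be expressed with a mutex and
a condition variable, with the flag as the only shared state:

    #include <boost/thread/mutex.hpp>
    #include <boost/thread/condition.hpp>

    // Rough equivalent of an auto-reset event built on Boost.Threads primitives.
    class auto_reset_event
    {
    public:
        auto_reset_event() : m_signaled(false) {}

        void set()
        {
            boost::mutex::scoped_lock lock(m_mutex);
            m_signaled = true;
            m_cond.notify_one();          // wake at most one waiter
        }

        void wait()
        {
            boost::mutex::scoped_lock lock(m_mutex);
            while (!m_signaled)           // guards against spurious wakeups
                m_cond.wait(lock);
            m_signaled = false;           // "auto reset" on a successful wait
        }

    private:
        bool m_signaled;
        boost::mutex m_mutex;
        boost::condition m_cond;
    };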

-- 
William E. Kempf




Re: [boost] thread lib: thread specific storage problem

2003-06-03 Thread William E. Kempf

Chuck Messenger said:
 I've been experimenting with the thread lib.  I found a bug related in
 some way to thread_specific_storage (tss).  In particular, I #ifdef'd
 all the tss stuff out, using native pthread calls instead, and
 everything worked as expected.

 So, I went back through, trying to determine what could be going on.
 I'm using tss to store a pointer to a thread-wide structure.

  thread_specific_ptr<mythread*> current_thread;

  mythread *mythread_get() {
      return *(current_thread.get());
  }

  static int init() {
      // Need to initialize thread local storage for main thread

      // XXX I'd like to put something here to ensure that
      // current_thread has been constructed.  But what?

      current_thread.reset(new mythread *);
      *(current_thread.get()) = new mythread;

      return 0;
  }

  static int val = init();

I don't follow what your code is supposed to be doing.  'val' is always
assigned 0.  Is its only purpose to cause the call to init()?  Why would
you want to do that in this manner?  It seems to me the solution you need
is as simple as:

  mythread *mythread_get() {
      mythread* t = *(current_thread.get());
      if (t == 0)
          current_thread.reset(new mythread *);
      return t;
  }

However, why thread_specific_ptr<mythread*>?  Why not just
thread_specific_ptr<mythread>?  This code doesn't look valid to me, but
without context I'm guessing.
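
A self-contained sketch (mine, not from this thread) of the lazy-initialization
idea above, using thread_specific_ptr<mythread> directly; it assumes mythread
may simply be deleted when a thread exits, which is not the case in Chuck's
situation but shows the intended shape:

    #include <boost/thread/tss.hpp>

    struct mythread { /* per-thread data */ };

    // The pointer is created on first use in each thread and deleted
    // automatically at thread exit, so no bootstrap in main() and no
    // static initialization order tricks are needed.
    boost::thread_specific_ptr<mythread> current_thread;

    mythread* mythread_get()
    {
        mythread* t = current_thread.get();
        if (t == 0)
        {
            t = new mythread;
            current_thread.reset(t);   // thread_specific_ptr owns and deletes it
        }
        return t;
    }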

 The problem is that I can't be sure, during init(), that current_thread
 has been constructed.  I believe that's at the root of the bug I'm
 tracking.

 By contrast, in my pthread-specific code, I simply put the
 pthread_key_create() call at the start of init(), thus ensuring proper
 initialization order.  But there's no analogous call I can make for a
 thread_specific_ptr -- that call is done during construction time.

pthread_key_create() only creates the key, it does not create any of the
thread specific storage for which there'd be an initialization order
issue.  So, I don't understand what you're doing or what the problem is.

 So, what to do about it?  Well, one obvious solution is to indirect
 current_thread:

  thread_specific_ptr<mythread*> *current_thread;

 then construct current_thread during init().  But that gives me yet one
 more indirection I have to do during mythread_get().  It's already slow
 as it is.

 Any suggestions?


 Which brings me to another source of slowness: because
 thread_specific_ptr automatically invokes delete on a thread's tss
 pointers when the thread goes away, I can't put a pointer to my real
 object, like this:

  thread_specific_ptr<mythread> current_thread;

 because mythread can't be deleted (for arcane reasons I won't get into
 here).

That depends.  As long as you can set the value back to 0 before the
thread ends, you can still put this into thread_specific_ptr.  Not a
universal solution, obviously, I just point it out in case it may help
with your current use case.
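
A tiny sketch (mine, not from this thread) of that workaround; it assumes
thread_specific_ptr::release(), which hands back the stored pointer and sets
the slot to 0 without deleting it:

    #include <boost/thread/tss.hpp>

    struct mythread { /* ... */ };

    boost::thread_specific_ptr<mythread> current_thread;

    void thread_func(mythread* shared)
    {
        current_thread.reset(shared);   // store the real object directly
        // ... thread work ...
        current_thread.release();       // detach it so it is NOT deleted at exit
    }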

  It would be nice if there were a version of thread_specific_ptr
  which didn't delete, e.g.:

   thread_specific_ptr<mythread, non_deleting> current_thread;

 Just a suggestion...

Already in the thread_dev branch.

-- 
William E. Kempf




Re: [boost] Re: Upcoming ISO/IEC <thread>... and <pthread.h> -> <cthread> transition ;-)

2003-06-03 Thread William E. Kempf

Alexander Terekhov said:

Sorry for the late reply.  I've been on vacation and away from 'net access
for a week.  Still catching up on a LOT of e-mail, so I'm sure there's
been a lot of responses to this from others already, and I have not read
them yet.  So bear with me.

 Alexander Terekhov wrote:

 William E. Kempf wrote:
 [...]
   How about moving this discussion to c.p.t.?
  
   Well, just in case... Forward Quoted
 
  Thanks... I currently can't access c.p.t. in any reasonable manner.
 I'm working to rectify this,

 http://news.cis.dfn.de might help. ;-)

but in the mean time, I appreciate the cross post.

 I'll wait a day or two and post a reply addressing some of your points
 to comp.programming.threads.

 just in case... Forward Inline

 Alexander Terekhov wrote:

 Forward-Quoted source=Boost

 William E. Kempf wrote:
 [...]
   The big hurdle for a true C++ binding is that the current state
 of affairs is good enough for most people, and the political
 process of developing a full native C++ binding would be painful.
 (Remember, it's not just saying that the thread::create method
 takes a class member at which the thread will start... it means
 reviewing every method and template in the STL to determine which
 have thread safety
   requirements, and deciding precisely what those requirements are
 and how to meet them. Then there's the matter of cancellation
 points... and so forth.)
 
  Most STL libraries are thread-safe today, so the analysis there
 wouldn't be too difficult.

 Well, http://lists.boost.org/MailArchives/boost/msg47701.php.

Yes.  I didn't mean to indicate there was *no* evaluation to be done. 
Only that the lion's share has already been done by library implementers
today.

It just needs to be stated explicitly in the
 standard.
  Cancellation points are another issue... but I don't think C++ will
 add too many to the list already provided by POSIX.

 The current C++ Std is in conflict with POSIX, to begin with.

Not exactly.  POSIX specifies C language extension libraries.  As such,
there is no relationship to C++ at all.  They're not part of the C
standard, so C++ doesn't address them, while POSIX doesn't address
bindings for any language but C.  So the conflict is that they are
simply unrelated.  If POSIX were to submit their libraries for inclusion
in the C standard, then things would be a little different.  However, the
C++ standard does not guarantee that it will track the C standard's future
changes.  In fact, there are incompatibilities with C99 already.  There is
incentive to do so (for instance, they are discussing how to integrate the
C99 changes), but since they are separate standards this may not always be
possible.

 The C++
 standard says that almost the entire C library doesn't throw (in effect,
 everything is throw()) except just a few calls for which C++ headers
 provide extern "C++" overloading to facilitate C++ callbacks. In POSIX,
 a whole bunch of standard C library functions DO throw due to thread
 cancellation. Isn't that funny?

Not really funny.  But POSIX doesn't specify that cancellation results in
a C++ exception being thrown either.  So there's no technical conflict
here, even if there are all sorts of headaches for users.  I hope this
improves from our efforts, but there's no guarantee, since, again, they
are separate and unrelated standards.  In the meantime, this results in a
QOI issue.

   When and if the C++ standard adds true thread support, that will
 be, by default and in practice, the thread binding for C++;
 whether the underlying thread environment is POSIX, Win32, or
 something else. This is great, as long as it doesn't do or say
 anything stupid, but it still leaves a few loopholes because
 inevitably people will continue to write applications that mix
 languages. Mixing C and C++ has never been a problem; but if the
 thread model in C++ is radically different, it could become a
 problem.
 
  Absolutely agreed.  I've said all along that Boost.Threads has to be
 very aware of what POSIX says.  We can not deviate in any way from
 POSIX that will result in conflicts between the threading systems.

 Bill, Boost.Threads shall end up in the standard <thread> header. But
 there should also be a <cthread> standard header (it should penetrate
 ISO C as well, in the form of either a <pthread.h> or <cthread.h>
 header). Now, in the past, I was thinking of <thread> as just an object
 layer on top of <cthread> stuff. That was wrong. <cthread> should be
 fully implementable using stuff from <thread>. I mean things along the
 lines of:

I see no reason for C++ to include a <cthread>, unless it's there for
compatibility with a C <thread.h>.  IOW, POSIX should address the C
standard first, not the other way around.  (And not all of POSIX is
necessarily appropriate, either, as the impact to non-POSIX platforms can
be quite extensive and you are likely to find opposition from those
members.)

typedef std::thread * pthread_t

Re: [boost] Re: shared_ptr/weak_ptr and thread-safety

2003-06-03 Thread William E. Kempf

Alexander Terekhov said:

 Trevor Taylor wrote:
 [...]
 Why wait? With so many people contributing to boost, why not introduce
 a pthread_refcount_t into boost threads (or is there one
 already?), provide a default implementation equivalent to what
 shared_ptr does now,

 Nah. atomic based stuff would surely be much better than what
 shared_ptr does now, I think.

 and let the platform experts provide the optimal specialisations.

 http://groups.google.com/groups?selm=3EC4F194.2DA8701C%40web.de
 (Subject: Re: Is this thread-safe on multi-processor Win32?)

 regards,
 alexander.

 P.S. ``Where's Bill?'' ;-)

Please, drop the adversarial tone in your posts.  It's rude at best.

I was on vacation, but I'm hardly ignoring any of this.  I had an atomic
class in Boost.Threads pre-review, but it was removed because it needed a
lot more research and effort than I had time for in that release.  I'm
trying to track the efforts you've been doing in this area, but you
scatter things so much with "see this link" type posts that it's
difficult.  If you can write a summary paper or even provide a base
implementation with thorough documentation, I'd definitely be interested.

-- 
William E. Kempf




Re: [boost] Re: shared_ptr/weak_ptr and thread-safety

2003-06-03 Thread William E. Kempf

Alexander Terekhov said:
  P.S. ``Where's Bill?'' ;-)

 Please, drop the adversarial tone in your posts.

 I can't. Really.

You can't, or you won't?  The above could have been just as easily written
as:

P.S. Is Bill reading this?

Even just removing the wink would have taken a lot of the bite out of
your post.


   It's rude at best.

 Sorry for that; well, try to not take it seriously.

Should I not take anything you say seriously?

 I was on vacation, but I'm hardly ignoring any of this.  I had an
 atomic class in Boost.Threads pre-review, but it was removed because
 it needed a lot more research and effort than I had time for in that
 release.  I'm trying to track the efforts you've been doing in this
  area, but you scatter things so much with "see this link" type posts
 that it's difficult.

 Okay. copy & paste

Thanks.  I'll see if this helps.

If you can write a summary paper or even provide a base
  implementation with thorough documentation, I'd definitely be
 interested.

 Well, I'm basically done with the introduction. Here we go: Forward
 Inline

  Original Message 
 Message-ID: [EMAIL PROTECTED]
 Newsgroups: comp.programming.threads
 Subject: Re: using atomic_ptr for a lock-free instance once algo?
 References: ... [EMAIL PROTECTED]

 SenderX wrote:

   The acquire/release semantics are associated with the external
   pointer load and store actions as observed by the program.  Nothing
   to do with the cas operations since atomic_ptr's are atomic and are
   safe without them, but would then be about as useful as pre-JSR 133
   references.

 Is that why Alex says that the server 2003
 InterlockedCompareExchangeAcquire/Release API's are brain-damaged?

 MS interlocked stuff is totally brain-damaged because it *DOESN'T*

 1. extend C99's stdint.h with:

    — atomic integer types having certain exact widths;
    — atomic integer types having at least certain specified widths;
    — atomic fastest integer types having at least certain specified widths;
    — atomic integer types wide enough to hold pointers to objects;
    — atomic integer types having greatest width;

    providing interlocked function-like macros with various memory
    synch semantics for atomic operations.

 2. introduce a plusified version of stdint.h with atomic integers
    (<stdint> would be a good name for it, probably).

 3. introduce an <atomic> header that would provide a C++ atomic<>
    template covering scalar types (based on certain, or least, or
    fastest integers as an optionally specified template argument) via
    atomic<> specializations ala numeric_limits ones.

This hardly suffices as documentation, or even a summary paper.  I'd also
(as usual) take great exception to your use of the term brain damaged,
especially given your technical reasons.  Not specifying the exact width
of various types is not really something that I think most people would
classify as brain damaged.

Now, can you provide documentation for the above, including preconditions,
postconditions, etc. for each method?  I'll get by if you can't, but
documentation would be very useful.

-- 
William E. Kempf




Re: [boost] Re: thread lib: thread specific storage problem

2003-06-03 Thread William E. Kempf

Chuck Messenger said:
 William E. Kempf wrote:
 I don't follow what you're code is supposed to be doing.

 Background: I have a structure of information, 'mythread', of which I
 need one per thread.  That is, I've grouped all my tss variables into a
 single structure.  I need to bootstrap the main thread.  Hence the
 static variable initialization -- my thought (perhaps incorrect) being
 that 'val' will be initialized before main() begins, thus bootstrapping
 the main thread.  I believe that's right -- that all static variables in
  all translation units are initialized before main()...?

 (The point is that I don't have control of main() -- otherwise, I could
 initialize current_thread in main().  This code is part of a library.)

  'val' is always
  assigned 0.  Is its only purpose to cause the call to init()?

 Yes.

OK, I've got some idea what you're doing now, but your comments still
aren't gelling.

 Why would
 you want to do that in this manner?  It seems to me the solution you
 need is as simple as:

    mythread *mythread_get() {
        mythread* t = *(current_thread.get());
        if (t == 0)

 Sorry, slight bug here.  The line

            current_thread.reset(new mythread *);

 should instead be:

            {
                t = new mythread *;
                current_thread.reset(t);
            }

        return t;
    }

 I was, perhaps misguidedly, trying to avoid the need to call a function
 -- since I access the mythread structure a lot.  Ideally, I want as much
 speed as the underlying OS-level implementation will allow.

I still don't understand.  Since the mythread structure has an instance
per thread, you have to make a function call to get the current thread's
instance no matter what.  That's what the above does.  Nothing else is
done (well, there is a test and jump instruction, but do you need to
optimize even that out!?!) unless this is our first call.  I don't see how
the init approach would be any more efficient, and would only work for the
main thread.

  However, why thread_specific_ptr<mythread*>?  Why not just
  thread_specific_ptr<mythread>?  This code doesn't look valid to me,
  but without context I'm guessing.

 Because I don't want the 'mythread' structure to get deleted when the
 thread dies.  My understanding of tss is that part of the semantics is
 that, if you do

      some_tss_value.reset(whatever);

 then when the thread ends, the thread library does 'delete whatever'.
 Therefore, I'm forced to have tss store a pointer to mystruct*, rather
 than mystruct* itself.

Ahh... gotcha.  You want the thread_specific_ptr in the thread_dev branch.

The problem is that I can't be sure, during init(), that
 current_thread has been constructed.  I believe that's at the root of
 the bug I'm tracking.

By contrast, in my pthread-specific code, I simply put the
pthread_key_create() call at the start of init(), thus ensuring proper
 initialization order.  But there's no analogous call I can make for a
 thread_specific_ptr -- that call is done during construction time.


 pthread_key_create() only creates the key, it does not create any of
 the thread specific storage for which there'd be an initialization
 order issue.  So, I don't understand what you're doing or what the
 problem is.

 The problem is that it is necessary for pthread_key_create() to be
 called before I can set a value for it.  Can I be sure that
 thread_specific_ptr<mythread*> has been initialized before I try using
 it during init()?  I'm not that clear on C++ initialization order
 semantics for statics.  If I do:

  extern int i;
  extern int j;
  int j = i;
  int i = j;

 then what happens?  It isn't obvious to me how the order would be
 established.  And so, can I assume that the following is guaranteed to
 work?

No.

  static SomeType var1;
  static AnotherType var2 = var1.something();

 That is, is var1 guaranteed to be constructed before I initialize var2
 from it?

If they are in the same translation unit, yes.  If not, no.  Without
fully understanding how you're using this, this sounds dicey.

 The thing is: there is some sort of bug with (Boost Threads) TSS -- the
 way I'm using it, that is.  My attempts at tracking the bug failed, so I
  resorted to replacing TSS with native pthreads, and that worked.  The
 only difference that I could glean between the TSS and pthreads versions
  of my code were in init().  If I spent some more time on it, perhaps I
 could definitively nail down this suspicion.

 So, as it is, I *suspect* that the problem is that accessing a static
 thread_specific_ptr during initialization of another static variable
 isn't guaranteed to work (because C++ doesn't, perhaps, define
 initialization order for statics, so the thread_specific_ptr object
 won't necessarily have been created by the time it is used).

 Assuming this is correct, then I'm proposing a new force_construction()
 method for thread_specific_ptr, to ameliorate this kind of problem.
 During thread_specific_ptr's normal construction, it would detect
 whether it had

Re: [boost] Re: Upcoming ISO/IEC <thread>... and <pthread.h> -> <cthread> transition ;-)

2003-06-03 Thread William E. Kempf

Alexander Terekhov said:
 William E. Kempf wrote:
 [...]
When and if the C++ standard adds true thread support, that
 will
  be, by default and in practice, the thread binding for C++;
  whether the underlying thread environment is POSIX, Win32, or
 something else. This is great, as long as it doesn't do or say
 anything stupid, but it still leaves a few loopholes because
  inevitably people will continue to write applications that mix
 languages. Mixing C and C++ has never been a problem; but if the
 thread model in C++ is radically different, it could become a
 problem.
  
   Absolutely agreed.  I've said all along that Boost.Threads has to
 be
  very aware of what POSIX says.  We can not deviate in any way from
 POSIX that will result in conflicts between the threading systems.
 
  Bill, Boost.Threads shall end up in standard <thread> header. But
  there should also be a <cthread> standard header (it should penetrate
  ISO C as well; in the form of either <pthread.h> or <cthread.h>
  header). Now, in the past, I was thinking of <thread> as just an
  object layer on top of <cthread> stuff. That was wrong. <cthread>
  should be fully implementable using stuff from <thread>. I mean things
  along the lines of:

 I see no reason for C++ to include a <cthread>, unless it's there for
 compatibility with a C <thread.h>.

 The compatibility issue here is not <thread.h> but rather <pthread.h>.

Which is a POSIX header, not a C or C++ header.  I'm not going to suggest
that the C++ language standard be made compatible with POSIX, nor do I
think you'd be able to convince the committee of this if *you* were to
suggest it.

IOW, POSIX should address the C
 standard first, not the other way around.  (And not all of POSIX is
 necessarily appropriate, either, as the impact to non-POSIX platforms
 can be quite extensive and you are likely to find opposition from
 those members.)

 I'd love to hear something from them. MS-folks, are you here? Y'know, I
 guess that NOW, given Microsoft's recent acquisition of The UNIX
 license from The SCO (aka hey-IBM-give-me-a-billion) Group, they
 will have NO problems whatsoever merging win32 and Interix/POSIX sub
 systems... http://google.com/groups?selm=3DE4FF8C.FA94CDAC%40web.de

I wasn't specifically meaning MS.  After all, the Windows platform is
already at least partially POSIX.  There are other, non-POSIX platforms,
however.

But I'm sure MS would have issues with adopting all of POSIX if you were
to suggest it.  There are certain areas which are incompatible, such as
path naming, which would be problematic at best to be switched over to a
POSIX compatible form.  And this one even rears its ugly head in pthreads
(indirectly) where shared memory is created with path names.

 [...typedef std::aligned_storage<std::mutex> pthread_mutex_t;...]

 I've recently found & corrected a typo in that snippet. Here's what
 I wrote recently: (I hope it's relevant to this discussion as well)

I'm lost.  I never referenced what you are talking about here.  I'll have
to spend some time reading all of this thread (including your links) to
get back on track.  A better job of quoting would save both of us some
time here.

 :David Schwartz wrote:
 :
 : Wil Evers [EMAIL PROTECTED] wrote in message
 : news:[EMAIL PROTECTED]
 :
 :  Not sure I agree.  Assuming no special 'compiler magic' is used, and
 :  the definitions of pthread_mutex_t and PTHREAD_MUTEX_INITIALIZER are
 :  shared between C and C++, then pthread_mutex_t must be a POD and
 :  PTHREAD_MUTEX_INITIALIZER must be a constant expression,
 :
 :Yeah. http://google.com/groups?selm=3ED1E663.B9C89A4F%40web.de
 :
 :typedef std::aligned_storage<std::mutex> pthread_mutex_t;
 :
 :This is meant to be a properly aligned POD; e.g. POD-union with
 :a character array [or whatever] meant to be const-initialized by
 :the cthread's PTHREAD_MUTEX_INITIALIZER. BTW, I've made a typo:
 :
 :extern "C" int pthread_mutex_destroy(pthread_mutex_t * mutex_storage)
 :throw() {
 :   // destructor with implicit throw()-nothing ES
 :   ~mutex_storage->object();
 :   // "may fail" shall be caught in the std::unexpected() handler
 :   return 0;
 :}
 :
 :actually meant to be:
 :
 :extern "C" int pthread_mutex_destroy(pthread_mutex_t * mutex_storage)
 :throw() {
 :   // destructor with implicit throw()-nothing ES
 :   mutex_storage->object().~mutex();
 :   // "may fail" shall be caught in the std::unexpected() handler
 :   return 0;
 :}
 :
 :well, but...
 :
 :  so the C++ rules
 :  for dynamic initialization do not apply here.
 :
 :...see below. ;-)
 :
 :
 : You are implicitly assuming that the pthreads implementation must
 : strictly conform to the ANSI C standard. Since this is impossible, you
 : should stop assuming it. ;)
 :
 : Why can't PTHREAD_MUTEX_INITIALIZER call a function that isn't
 : reentrant? I mean why can't it be:
 :
 : #define PTHREAD_MUTEX_INITIALIZER get_mutex();
 : #define

Re: [boost] synchronized with boost::threads?

2003-05-08 Thread William E. Kempf

Roland Richter said:
 Dear all,

   I'm new with Boost.Threads; I've just worked with
   Java Threads so far.

   One feature of the Java language is the synchronized
   keyword - to make variables, methods, code blocks etc.
   thread-safe. So, when I first came into the situation
   that I needed threads in C++ as well, I thought of a
   way how to reflect that feature into C++.

   It seems to be easy to synchronize variables - see the
   very minimalistic draft below. But what about synchronized
   class methods etc.?


Java synchronized method:

class Foo
{
  public synchronized void bar() { /* code */ }
}

Boost.Threads synchronized method:

class Foo
{
public:
  void bar() { boost::mutex::scoped_lock lock(m_mutex); /* code */ }
private:
  boost::mutex m_mutex;
};

Java synchronized block:

class Foo
{
  public void bar() {
synchronized (this) {
  /* code */
}
  }
}

Boost.Threads synchronized block:

class Foo
{
public:
  void bar() {
{
  boost::mutex::scoped_lock lock(m_mutex);
  /* code */
}
  }
private:
  boost::mutex m_mutex;
};

   Is it worth to go further into that direction?

   I mean, the Boost.Thread library seems to be designed with
   safety in mind, but is still a little bit low-level.

   Are there any efforts to enhance the library further?

Yes.

-- 
William E. Kempf


___
Unsubscribe  other changes: http://lists.boost.org/mailman/listinfo.cgi/boost


Re: [boost] Re: Boost Library Guidelines

2003-04-30 Thread William E. Kempf

Pavol Droba said:
 I have noticed a lot of new warnings in the release 1.30.
 I absolutely agree that there is no reason to do some kind of line by
 line pragma suppression.

 But...

 Most of the new warnings can be easily removed with a static_cast. I
 don't understand why any boost lib has to generate such warnings.

I'm going to guess that most of the new warnings you see aren't level 4
warnings.  I'll also guess that they crept in for the same reason I missed
some VC warnings in Boost.Threads with 1.30.  That is to say, it happened
because I assumed Boost.Build was setting the warning level to an
appropriate default (i.e. the level that the IDE sets for new projects),
when in fact, it wasn't setting it at all.  I posted about this a while
ago.

If this isn't the cause, then you'll have to ask individual authors,
and/or submit patches.

-- 
William E. Kempf


___
Unsubscribe  other changes: http://lists.boost.org/mailman/listinfo.cgi/boost


Re: [boost] Threads and mutexes : assign_to error

2003-04-30 Thread William E. Kempf

Jacques Kerner said:
 Hi,

 I get the following error :

 error C2664: 'void boost::function0<R,Allocator>::assign_to(Functor)' :
 unable to convert parameter 1 from 'const CTaskManager' to
 'CTaskManager'

 when doing this :

 class CTaskManager
 {
 public:
 CTaskManager();
 ~CTaskManager();
 void operator()() {}

 private:
 boost::mutex m_mutex;
 };

 and

 CTaskManager taskManager;
 boost::thread_group mainThreadGroup;
 mainThreadGroup.create_thread(taskManager);
 mainThreadGroup.join_all();

 The error disappears when I remove the mutex from the definition of
 CTaskManager ... (?!!)

Correct.  Functors are passed by value (i.e. they must be Copyable), and
Mutexes are Noncopyable.

 So what is the right way to use mutex and threads together? Do I have to
  declare the mutex outside of the
 functor? Why?

No, you just have to enable the functor to be copyable, as per the FAQ at
http://www.boost.org/libs/thread/doc/faq.html#question5.

However, I'm going to guess from the code snippet that you really don't
want this functor to be copied?  If that's the case, you may want to make
use of boost::ref.

CTaskManager taskManager;
boost::thread_group mainThreadGroup;
mainThreadGroup.create_thread(boost::ref(taskManager));
mainThreadGroup.join_all();
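Alternatively -- just a sketch of one possible approach, not necessarily
what the FAQ shows -- the functor can be made cheaply copyable by holding
the mutex through a shared_ptr, so every copy shares the same mutex:

    #include <boost/shared_ptr.hpp>
    #include <boost/thread/mutex.hpp>

    class CTaskManager
    {
    public:
        CTaskManager() : m_mutex(new boost::mutex) {}
        void operator()() { /* ... */ }

    private:
        // all copies of the functor share one mutex, so copying is safe and cheap
        boost::shared_ptr<boost::mutex> m_mutex;
    };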

-- 
William E. Kempf


___
Unsubscribe  other changes: http://lists.boost.org/mailman/listinfo.cgi/boost


RE: [boost] Boost Library Guidelines

2003-04-29 Thread William E. Kempf

Paul A. Bristow said:
 | -Original Message-
 | From: [EMAIL PROTECTED]
 | [mailto:[EMAIL PROTECTED] Behalf Of Terje Slettebø |
 Sent: Friday, April 25, 2003 5:33 PM
 | To: Boost mailing list
 | Subject: Re: [boost] Boost Library Guidelines
 |
 |  May I suggest that we add to "Aim for ISO Standard C++ ...":
 |  "Try to code so that it compiles with 'strict' compiler settings ..."
 |
 | I use the highest warning level (4) for MSVC and Intel C++, and strict
 | mode for the latter, to not ignore any warnings/remarks by default.
 |
 | In the cpp-files, not headers, I then selectively disable
 | remarks/warnings that are harmless (and there's a lot of them), until
 | it compiles without remarks/warnings. I think one should not get used
 | to ignore warnings in the output, or one could easily miss some which
 | _does_ matter, which is why I disable the ones that don't.
 |
 | In many cases, on level 4, there's _no_ practical way to turn off a
 | remark/warning, without using #pragma. Therefore, I think it may be
 | better to use a #pragma (in the cpp-file), than telling the user to
 | ignore the remarks/warnings. In header-only libraries, like much of
 | the Boost libraries, this leaves it up to the user, anyway.

 This sounds 'best practice'.  If others agree, can it be added to the
 guidelines?

It sounds good in theory, but I've never been comfortable living with it. 
I know others do, but in my experience, especially with the MS compiler,
the highest warning level produces a LOT of meaningless diagnostics which
can be very difficult to eliminate... even with pragmas.  As a best
practice suggestion, it's a great idea... as a requirement, I'd have to
voice an opinion against.

-- 
William E. Kempf


___
Unsubscribe  other changes: http://lists.boost.org/mailman/listinfo.cgi/boost


Re: [boost] BOOST_HAS_THREADS in Win32

2003-04-04 Thread William E. Kempf

Jiang Hong said:
 I'm new to boost. But should '#define BOOST_HAS_THREADS' be added to

 boost_1_30_0/config/platform/win32.hpp?

 Or is there a better way?

This is defined by the config headers if the compiler you're using has
indicated that multi-threading has been requested during compilation. 
This is generally done by compiling/linking against the multi-threaded
RTL.

-- 
William E. Kempf


___
Unsubscribe  other changes: http://lists.boost.org/mailman/listinfo.cgi/boost


Re: [boost] Re: Re: Re: Re: Re: Thread Lib and DLL

2003-03-27 Thread William E. Kempf

David Brownell said:
 William E. Kempf [EMAIL PROTECTED] wrote in message
 news:[EMAIL PROTECTED]
 Ahhh, the light bulb just went on, I finally understand.  However, it
 does seem like this usage of TLS is a corner case, that is refactoring
 code to be thread safe.  I can see how this could be useful and may be a
 larger corner that I am aware, but it is something that I have not had
 to do before. However, it seems like the solution to this problem has
 some very severe consequences, namely forcing the user to compile with a
 DLL rather than a static lib on Win32 systems.  I understand that you
 would like to make the thread library as easy and error-free to use as
 possible, but that solution that requires the use of a DLL prevents me
 from using a library that I greatly value.

Actually, it's a very significant use case, not a corner case.  In
addition, other use cases can result in the same problems.  When a library
allocates TLS, it does so because it needs to maintain state for a thread
it did not create.  After all, if it created the thread, there are easier
and more efficient methods to maintain state.

 I have two main issues with using a DLL, one is another corner case, and
 the second is far less practical but more of an aesthetic.  The first is
 this: on a recent project, we had a requirement that the final binary
 was one and only one .exe.  Due to the nature of the project, anything
 else would be unacceptable (the discussion of why would lead to another
 conversation :). I could not have used the threads library in its
 current state.  Secondly, when I ship a product, I want the customer,
 programmer or not, to view its internal workings as magic.  I don't want
 them to know how I am doing anything.  Obviously they can hex edit the
 binary and figure out anything they want to, but that takes a more
 skilled person than the one who is able to see a dll and know that I am
 using boost threads.  Admittedly, this is not a sound scientific
 complaint, but still valid in my eyes.

The first is very valid.  I never claimed to like the DLL requirement ;). 
In fact, I've been in search of a solution that didn't require this (and
for more reasons than just wanting to support static linking!) from the
outset.  Unfortunately, I don't believe there is a solution at this point.

The second is totally uncompelling.  If hiding usage is all you're after,
rename the DLL (do this by changing the stage rule in the Jamfile).

 I would be more than happy to try and help with a solution that would
 handle both of the corner cases, or at least allow the library user to
 compile as desired while knowing the consequences of their
 recompilation.  I hope that the case is not closed on restoring the
 static library compilation in future versions of the thread library.

It's not closed, but it's in definite limbo until after V2 is complete,
since that will change which cases require TLS cleanup.  But I definitely
want a better solution to this problem as well, so don't be discouraged.

 After all of this, maybe the thread docs need this question answered as
 part of the FAQ? :)

Yes, it does, and I'll work on that shortly ;).

-- 
William E. Kempf


___
Unsubscribe  other changes: http://lists.boost.org/mailman/listinfo.cgi/boost


Re: [boost] [BoostBook] Guinea pig request

2003-03-27 Thread William E. Kempf

Douglas Paul Gregor said:
 BoostBook is nearing the point where building documentation is as easy
 as building libraries. The Boost.Build v2 modules for BoostBook (and
 associated tools) are quite functional and work well for me, but I want
 to verify that they will work well for someone else.

 I would like a volunteer to try out the BoostBook tools to see if they
 can easily build documentation, and report your successes, failures, and
 general level of frustration to the Boost documentation list or to me
 personally so I can improve the situation for future users and
 developers. You'll need a few tools, a very recent checkout of Boost
 CVS, and
 possibly a little patience, but everything is explained (I hope) in the
 Getting Started guide here:
   http://www.cs.rpi.edu/~gregod/boost/tools/boostbook/doc/html/

 Any takers? Please?

Documentation nits:

* "and including that Jamfile in the list of Jamfiles including for
testing under Testsuites..." should be "included for testing".

* Navigation links have broken images.  Probably only an issue on your web
server, right?

* The documentation on modifying user-config.jam doesn't make it clear
that you don't need to do *any* configuration if you wish to just let the
build process pull the stylesheets off the Internet.

Problems building:

* On Mandrake 9.1 I had no issues.

* On Cygwin, I get the result:

xslt-xsltproc bin\gcc\debug\boost.docbook
'XML_CATALOG_FILES' is not recognized as an internal or external command,
operable program or batch file.

  XML_CATALOG_FILES=catalog.xml xsltproc  --xinclude -o
bin\gcc\debug\boost.docbook 
C:\cygwin\home\wekempf\boost/tools/boostbook/xsl/docbook.xsl 
src\boost.xml

failed xslt-xsltproc bin\gcc\debug\boost.docbook...

I have the following installed under cygwin:

libxml2 2.4.23-1
libxslt 1.0.13-1

At this point, I have no clue how to diagnose the problem.

This is the result of my first attempts to just compile the existing
documentation to html.  After I get the Cygwin build working, I'll move on
to FOP and PDF generation and report other things I find.

-- 
William E. Kempf


___
Unsubscribe  other changes: http://lists.boost.org/mailman/listinfo.cgi/boost


Re: [boost] Re: [BoostBook] Guinea pig request

2003-03-27 Thread William E. Kempf

Remy Blank said:
 On Thu, 27 Mar 2003 10:40:26 -0600 (CST), William E. Kempf
 [EMAIL PROTECTED] wrote:
 Problems building:

 * On Mandrake 9.1 I had no issues.

 * On Cygwin, I get the result:

 xslt-xsltproc bin\gcc\debug\boost.docbook
 'XML_CATALOG_FILES' is not recognized as an internal or external
 command, operable program or batch file.

   XML_CATALOG_FILES=catalog.xml xsltproc  --xinclude -o
 bin\gcc\debug\boost.docbook
 C:\cygwin\home\wekempf\boost/tools/boostbook/xsl/docbook.xsl
 src\boost.xml

 XML_CATALOG_FILES={something} xsltproc ...

 This is bash syntax for temporarily setting an environment variable for
 the duration of the xsltproc program run. Are you using bash on Cygwin,
 or the normal cmd.exe shell? The latter probably doesn't understand this
 syntax.

Bash.

-- 
William E. Kempf


___
Unsubscribe  other changes: http://lists.boost.org/mailman/listinfo.cgi/boost


Re: [boost] [BoostBook] Guinea pig request

2003-03-27 Thread William E. Kempf

Douglas Paul Gregor said:


 On Thu, 27 Mar 2003, Martin Wille wrote:

 Douglas Paul Gregor wrote:

  I would like a volunteer ...

 I gave it a try:

 Thanks!

 - pdf: lots of messages regarding missing hyphenation pattern for
language en.  A pdf file is created, however.

 This seems to be a problem with fop 0.20.5-rc2. I dropped back to 0.20.4
 and the problem disappeared. Documented now.

I'm using 0.20.4 (on Mandrake 9.1) and receive lots of errors.  A few
examples:

[ERROR] Error in column-width property value '33%':
org.apache.fop.fo.expr.PropertyException: No conversion defined

[ERROR] property - last-line-end-indent is not implemented yet.

[ERROR] property - linefeed-treatment is not implemented yet.

And others as well (plus a lot of warnings).  If you want a full log, I
can send it.  A PDF is generated, but lands in
$BOOST_ROOT/doc/bin/gcc/debug/boost.pf.  Shouldn't this be in
$BOOST_ROOT/doc/pdf or something similar?  The produced PDF is viewable,
and looks pretty good from a casual glance.

-- 
William E. Kempf


___
Unsubscribe  other changes: http://lists.boost.org/mailman/listinfo.cgi/boost


Re: [boost] [BoostBook] Guinea pig request

2003-03-27 Thread William E. Kempf

Douglas Paul Gregor said:
 On Thu, 27 Mar 2003, William E. Kempf wrote:
 I'm using 0.20.4 (on Mandrake 9.1) and receive lots of errors.  A few
 examples:

 [ERROR] Error in column-width property value '33%':
 org.apache.fop.fo.expr.PropertyException: No conversion defined

 [ERROR] property - last-line-end-indent is not implemented yet.

 [ERROR] property - linefeed-treatment is not implemented yet.

 And others as well (plus a lot of warnings).  If you want a full log,
 I can send it.

 These errors are normal with FOP.

Ick!  Any way to suppress the output in that case?

-- 
William E. Kempf


___
Unsubscribe  other changes: http://lists.boost.org/mailman/listinfo.cgi/boost


RE: [boost] VC7/Threads Warnings

2003-03-26 Thread William E. Kempf

Paul A. Bristow said:
 I was surprised to find that /Wp64  flag (detect 64-bit portability)

 means that std::size_t is 64 bit.  This leads to a number of oddities
 that confused me.  Is this perhaps causing your problem?

AFAIK and AFAICT, /Wp64 is not used.

-- 
William E. Kempf


___
Unsubscribe  other changes: http://lists.boost.org/mailman/listinfo.cgi/boost


Re: [boost] Thread Lib and DLL

2003-03-26 Thread William E. Kempf

David Brownell said:
 I am curious as to why the new version of the Thread library does not
 provide a static library in the 1.30 version of boost.  After reading
 some initial posts, I have seen references to thread local storage, but
 haven't seen anything that documents why this makes a static library
 impossible. All things considered, I find a static library is much more
 desirable than a dll.

It has been discussed numerous times on this list, as well as on the Users
list.  TLS cleanup can only be done on the Win32 platform with code in the
thread itself (which won't work for threads created outside of
Boost.Threads) or with code in DllMain.

-- 
William E. Kempf


___
Unsubscribe  other changes: http://lists.boost.org/mailman/listinfo.cgi/boost


Re: [boost] Re: Thread Lib and DLL

2003-03-26 Thread William E. Kempf

Russell Hind said:
 I'd been wondering this, and heard about TLS issues.  The issues are
 only on Windows it appears.  Search for the thread

 Fwd: Thread-Local Storage (TLS) and templates by Greg Colvin on
 18/02/2003

 Specifically, the many posts by William Kempf and Edward Diener discuss
 the problems on windows with TLS cleanup.

 I do have a question on this issue:  If this problem is only to do with
 TLS cleanup when a thread exits, then if all threads are created when
 the program starts and only destroyed when the program exited, then, in
 practice, could this really be an issue?  I.e. if we only work like
 this, could building thread as a static lib cause problems providing
 that we don't let threads exit in the middle of the program?  We're
 currently really trying to stay clear of any DLLs.

Theoretically at least, I don't see why this would cause a problem.  You
intentionally leak, but the leak is benign since it occurs only right
before the application exits.  But most users won't code this way, nor do
I want to have to deal with the support requests/questions this would
cause.  So, unless you have some suggestion as to how I can enable this
usage with out causing confusion, I'm not sure I'd care to re-enable
static builds.  But you could probably fairly easily hack things to build
that way yourself.

-- 
William E. Kempf


___
Unsubscribe  other changes: http://lists.boost.org/mailman/listinfo.cgi/boost


Re: [boost] VC7/Threads Warnings

2003-03-26 Thread William E. Kempf

vc said:

 - Original Message -
 From: William E. Kempf [EMAIL PROTECTED]



 vc said:
  As for the warnings themselves... I'm still doing more research
 just to be 100% sure, but everything I've found thus far indicates
 you can ignore these warnings as long as you link against the same
 RTL in both the Boost.Threads DLL and the application.  After I
 verify this, I'll remove the warnings through the use of pragmas.
 
  So, is it ok if for the boost.thread dll and for the app I will use
 the /MT flag (multi-threaded)
  instead of /MD (multi-threaded dll) that you are using when building
 with bjam?

 According to what I'm reading about these warnings, no, that wouldn't
 be a good idea.  However, you can build against the static RTL easy
 enough.  In the $BOOST_ROOT/libs/thread/build directory issue the
 following bjam command:

 bjam -sBUILD=<runtime-link>static

 Doing so, the boost.thread will be built with the /MTd flag (for debug).
 This is exactly
 what you said that it won't be a good idea, right? Or am I missing
 something here?

Sorry, I guess I wasn't very clear (and it looks like it may have been
less clear, because I misunderstood your question).  What's not a good
idea is mixing the RTLs.  If you want to use /MT(d) then you should
compile Boost.Threads with /MT(d) as well.

-- 
William E. Kempf


___
Unsubscribe  other changes: http://lists.boost.org/mailman/listinfo.cgi/boost


Re: [boost] VC7/Threads Warnings

2003-03-26 Thread William E. Kempf

Peter Dimov said:
 William E. Kempf wrote:

 I guess I'm wondering if the official toolsets shouldn't be changed. I
 don't understand why the MSDN indicates it should default to /W2 while
 we're seeing it default to what I assume is /W1.  But, projects
 created by the IDE default to /W3 (which is also the level
 recommended by the MSDN), so it makes sense to me that we should
 probably do the same?  Otherwise, users are likely to see warnings
 that we don't represent in the regression logs.

 I agree with the suggestion. The default should be /W3 for VC 6, and /W4
 (possibly with some specific warnings disabled) on VC 7+.

Why /W4 for VC 7+?  The IDE's default is still /W3 on these compilers.  I
don't think selecting a level different from the compiler's/IDE's default
is necessarily a good idea.  Of course, what *would* be nice is to have
some way to specify this portably for all toolsets.  IOW, the default
would be to use what's considered a normal level for the toolset, but
<warnings>all could be used to crank it up to full.

-- 
William E. Kempf


___
Unsubscribe  other changes: http://lists.boost.org/mailman/listinfo.cgi/boost


Re: [boost] VC7/Threads Warnings

2003-03-26 Thread William E. Kempf

Peter Dimov said:
 William E. Kempf wrote:
 Peter Dimov said:

 I agree with the suggestion. The default should be /W3 for VC 6, and
 /W4 (possibly with some specific warnings disabled) on VC 7+.

 Why /W4 for VC 7+?  The IDE's default is still /W3 on these
 compilers.  I don't think selecting a level different from the
 compiler's/IDE's default is necessarily a good idea.

 My opinion only. /W4 was a bit painful for VC 6 (many of us used it
 anyway) but it seems fairly usable on VC 7.

 Of course, what
 *would* be nice is to have some way to specify this portably for all
 toolsets.  IOW, the default would be to use what's considered a
 normal level for the toolset, but <warnings>all could be used to crank
 it up to full.

 I'd expect <warnings>all to translate to /Wall on VC 7. Not a very
 practical warning level IMHO. :-)

Well, add other options for warnings then ;).

<warnings>none (disable warnings)
<warnings>default (typical warnings)
<warnings>high (all warnings that aren't considered painful)
<warnings>all (all warnings)

Of course, the names and categories would need consideration when viewed
portably.  For example, <warnings>all would imply -Wall for gcc, which
should also be used for <warnings>high (?), so some users might mistakenly
use all when they should use high instead.

But I think you could come up with a reasonable way to declare this
portably, and believe it would be useful.  I'd use <warnings>all from the
command line to lint my code with VC (even VC6), but would prefer to
leave things at the default level in the Jamfile to more closely meet
user's expectations (too low and when they use my code they get warnings
we don't report, too high, and when they use bjam on their own code they
may get warnings they don't care to see).

-- 
William E. Kempf


___
Unsubscribe  other changes: http://lists.boost.org/mailman/listinfo.cgi/boost


Re: [boost] VC7/Threads Warnings

2003-03-26 Thread William E. Kempf

vc said:

 - Original Message -
 From: William E. Kempf [EMAIL PROTECTED]
  Doing so, the boost.thread will be build with the /MTd flag (for
 debug). This is exactly
  what you said that it won't be a good idea, right? Or am I missing
 something here?

 Sorry, I guess I wasn't very clear (and it looks like it may have been
 less clear, because I misunderstood your question).  What's not a good
 idea is mixing the RTLs.  If you want to use /MT(d) then you should
 compile Boost.Threads with /MT(d) as well.

 Thanks a lot for the answer! This is what I wanted to know: If I can
 build the boost.thread dll
 and the app with the /MT flag instead of /MD flag.
 Regarding the mixing of the RTLs the last versions of VC++ gives you a
 link warning if you try
 to do that ...

Nice to know... but it wouldn't work for dynamically loaded DLLs.  Then
again, dynamically loading Boost.Thread is not a good idea.

-- 
William E. Kempf


___
Unsubscribe  other changes: http://lists.boost.org/mailman/listinfo.cgi/boost


Re: [boost] Re: Thread Lib and DLL

2003-03-26 Thread William E. Kempf

Russell Hind said:
 William E. Kempf wrote:

 Theoretically at least, I don't see why this would cause a problem.
 You intentionally leak, but the leak is benign since it occurs only
 right before the application exits.  But most users won't code this
 way, nor do I want to have to deal with the support requests/questions
 this would cause.  So, unless you have some suggestion as to how I can
 enable this usage with out causing confusion, I'm not sure I'd care to
 re-enable static builds.  But you could probably fairly easily hack
 things to build that way yourself.


 No, I wasn't going to ask you to re-enable static linking because of
 this.  As you rightly pointed out in the other thread, you have to make
 the library safe for all possible cases which is what you are doing.

 If we did decide to go this route, then we would certainly handle
 building the lib ourselves.

 Our problem with DLLs is this:  We work on many projects.  Some are in
 maintenance only mode, so don't get many updates.  The next project may
 use boost-1.30.0 and then go into maintenance.  I may then be working on
 a project which uses boost-1.32.0 and would like to keep both dlls
 available on the system.

You can do this simply by placing the applications in separate directories
and keeping the proper DLL version alongside the executable.  Not
necessarily the ideal solution, but it's the easiest way to solve DLL
Hell.

 Current idea for doing this is re-naming the boost dlls to
 boost_thread-1.30.0.dll etc so that I can have 1 bin directory with all
 the dlls in, and each project would link and use the correct dll.  I
 wonder if support for this could be built into the builds?

Absolutely!  I'm hoping we address these kinds of concerns with a full
installation solution sometime soon.  In the meantime, the stage rule in
the Jamfile should be able to handle this.  You can hardcode the release
number in today... but I believe there's a variable available which I
could use to do this without hardcoding.  I'll see if I can track this
down and make the patch.

-- 
William E. Kempf


___
Unsubscribe  other changes: http://lists.boost.org/mailman/listinfo.cgi/boost


Re: [boost] Re: Thread Lib and DLL

2003-03-26 Thread William E. Kempf

Edward Diener said:
 William E. Kempf wrote:
 David Brownell said:
 I am curious as to why the new version of the Thread library does not
 provide a static library in the 1.30 version of boost.  After reading
 some initial posts, I have seen references to thread local storage,
 but haven't seen anything that documents why this makes a static
 library impossible. All thing considered, I find a static library is
 much more desirable than a dll.

 It has been discussed numerous times on this list, as well as on the
 Users list.  TLS cleanup can only be done on the Win32 platform with
 code in the thread itself (which won't work for threads created
 outside of Boost.Threads) or with code in DllMain.

 A possibility. Simulate the DllMain DLL_PROCESS_DETACH through a member
 function call in the thread class which should only be used for those
 who are using a static library version of the library. The member
 function must be called before the thread function exits in the static
 library version. For the DLL version, the member function must be
 ignored or you can simply not have the member function call in the DLL
 version. The onus would be on those using the static library version to
 always make this call before their thread function exits, but would at
 least provide them with the possibility of using a static library
 version. Of course there may be other
 ramifications which cause this idea not to work, or even getting it to
 work properly would be too much trouble, but I thought I would suggest
 it anyway.

Workable, if the user makes absolute certain he calls this method from
every thread that accesses TLS.  However, he may not know this, for
example when a library function uses Boost.Threads internally and
allocates TLS with out the user knowing.  This is just a variation on the
you must use this thread creation routine if you use our libraries
solution that MS uses for the C RTL and MFC.  I think it's fragile... and
many users fail to understand the issues here and thus do the wrong thing.

 It may not be worth thinking about possible solutions of building a
 static library version of Boost.Threads. I know that for myself I always
 create DLLs when distributing applications but as a 3rd party developer
 I always leave open the possibility that there are people who like to
 distribute the applications as a single EXE which uses static libraries
 and the static library version of their compiler's RTL.

Yes, and for that reason I certainly dislike the DLL only packaging of
Boost.Threads.  But it seems the safest and most viable solution.

-- 
William E. Kempf


___
Unsubscribe  other changes: http://lists.boost.org/mailman/listinfo.cgi/boost


Re: [boost] Re: Re: Thread Lib and DLL

2003-03-26 Thread William E. Kempf

David Brownell said:
 The user can just call the method for every thread which uses
 Boost.Threads
 in a static library implementation. If a library ( LIB ) function uses
 Boost.Threads internally, then it is up to the library function
 implementor
 to document this and iteratively define a function which can be called
 which
 calls the Boost.Threads function before the thread ends or, if the
 library function itself controls the ending of the thread, it must do
 it itself.

 As I have researched this topic, it has become quite clear that I am
 nowhere near an expert in this area, so forgive me if these questions
 are naive or have been hashed over before.

 Are these statements accurate: When a thread is created within a static
 lib, there is no way to find out when the thread has completed.  In a
 DLL, DllMain is called when the thread is complete.  This is important
 because TLS data must be destroyed when the thread is complete.  In the
 current version of boost (1.30), TLS is a feature of the thread library,
 but not required.  In future versions of boost, threads themselves will
 rely on TLS internally, so TLS is no longer a feature, but required.

Correct.

 If a user must link with a static thread lib, a workaround would be for
 them to notify the thread library that the thread is about to complete,
 and any associated TLS data can be destroyed.  This is not an optimum
 solution, as it places the onus on the user.

Correct.

 Some questions:  In the current thread library, can the associated TLS
 data be deleted before the thread is complete?  In the next version of
 the library, can the associated TLS data be deleted before the thread is
 complete?

Not sure I understand precisely what you're asking, but I'll make some
assumptions and say yes.  However, read on.

 Would it be possible to add a level of indirection in the thread functor
 in static lib builds?  For example, in a DLL build, the following
 happens (this is very loose, but should convey my meaning):

  -- Thread Created (thread lib)
     -- User's thread functor (user code)
  -- Thread Destroyed (thread lib)

 In a static lib build:

  -- Thread Created (thread lib)
     -- Internal thread functor (thread lib)
        -- User's thread functor (user code)
        -- Destroy TLS (thread lib)
  -- Thread Destroyed (thread lib)

 This would free the user from calling a destroy function at the end of
 the thread proc, but would enable static builds (if the above
 assumptions are correct).
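 In code, the proposed indirection might look roughly like the sketch
 below (the names are invented for illustration and are not actual
 Boost.Threads code):

    // hypothetical per-thread cleanup hook the library would provide
    namespace boost { namespace detail { void cleanup_tls_data(); } }

    // wrapper installed around the user's functor in a static-lib build
    template <typename F>
    struct thread_proxy
    {
        explicit thread_proxy(F f) : user_func(f) {}

        void operator()()
        {
            user_func();                        // run the user's thread functor
            boost::detail::cleanup_tls_data();  // then destroy this thread's TLS data
        }

        F user_func;
    };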

This is the model used by MS.  Threads are created at the low level by
calls to CreateThread.  The C RTL uses some TLS, so if you call any C RTL
routines you're required to instead call _beginthread(ex), which creates a
proxy that ensures TLS data is cleaned up.  Then MFC comes along and for
the same reasons requires you to instead call AfxBeginThread.  One of the
more common errors that users make is to use the wrong thread creation
routine, which doesn't produce any obvious problems like a segfault. 
Worse, this causes issues for people like me.  Which thread creation
routine should Boost.Threads use?  CreateThread is obviously a bad choice,
but the other routines are not so easy to choose between.  If I use
AfxBeginThread(), the user is stuck with MFC, even if they don't use it. 
If I use _beginthread(ex) (which is what I've chosen) then the user can't
safely call any MFC routines from threads created by Boost.Thread.  If I
implement the solution you've given above, I cause these same issues on my
end users in triplicate.

More importantly Boost.Threads is meant to be useful for library
developers.  Why should an application developer be forced to use
Boost.Threads just because library Foo choose to use Boost.Threads to make
the library thread safe?

This solution is fragile and difficult to manage today.  Every time you
add yet another thread creation routine/proxy into the mix it gets
geometrically worse.

-- 
William E. Kempf


___
Unsubscribe  other changes: http://lists.boost.org/mailman/listinfo.cgi/boost


Re: [boost] Re: Re: Re: Re: Thread Lib and DLL

2003-03-26 Thread William E. Kempf

David Brownell said:
 // In library Foo

 void some_library_foo()
 {
    boost::thread_specific_ptr<Foo> p;
    // other stuff
 }

 // In Application Bar which uses library Foo without any knowledge
 // that Foo used Boost.Threads
 void bar()
 {
    some_library_foo();
 }

 int main()
 {
    __beginthread(bar, ); // leak, but how could the developer know?
 }


 I'm not sure I understand this example completely.  Is this the case
 where library Foo's author has created the some_library_foo function
 with the intention that it will be accessed by a thread, but leave the
 actual thread creation up to the user of the foo library (the bar
 application in your example)?

 If this is correct, it seems like Foo should either a) not burden Bar
 with the knowledge that threads are being used and handle thread
 creation itself or b) allocate locally to some_library_foo without using
 thread_specific_ptr.

Foo doesn't create any threads, but Bar does.  So (a) isn't the answer. 
I'm not sure what you mean by allocate locally to some_library_foo,
since that's precisely what's being done.  Telling Foo not to use
thread_specific_ptr is the same as telling them not to use Boost.Threads,
which doesn't sound like the answer to me!

To make this more concrete, TLS is most often used to make legacy
interfaces, such as the classic example of strtok, which maintain state
across calls, thread safe.  That's what's being done in the hypothetical
some_library_foo.  TLS is really the only solution here (other than
changing the legacy interface, which often isn't an option), which is why
I said telling them not to use thread_specific_ptr is the same as telling
them not to use Boost.Threads.
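A minimal sketch of that pattern (the function here is hypothetical, not
from any real library): the state a strtok-style interface keeps between
calls is moved into thread-specific storage, so concurrent callers no
longer trample a single global.

    #include <boost/thread/tss.hpp>
    #include <cstring>

    namespace {
        // each thread gets its own scan position
        boost::thread_specific_ptr<char*> scan_pos;
    }

    char* my_strtok(char* str, const char* delims)
    {
        if (scan_pos.get() == 0)
            scan_pos.reset(new char*(0));
        char*& pos = *scan_pos;

        if (str != 0)
            pos = str;                       // start tokenizing a new string
        if (pos == 0)
            return 0;

        pos += std::strspn(pos, delims);     // skip leading delimiters
        if (*pos == '\0')
        {
            pos = 0;
            return 0;
        }
        char* token = pos;
        pos += std::strcspn(pos, delims);    // find the end of the token
        if (*pos != '\0')
            *pos++ = '\0';                   // terminate token, move past it
        else
            pos = 0;
        return token;
    }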

-- 
William E. Kempf


___
Unsubscribe  other changes: http://lists.boost.org/mailman/listinfo.cgi/boost


Re: [boost] VC7/Threads Warnings

2003-03-25 Thread William E. Kempf

Andrew J. P. Maclean said:
 I am using Boost Ver 1.30 just released. I built the libraries with
 BJam. Now when building my code I get lots of warnings like the
 following. These warnings worry me a bit because they are level 1 and 2
 warnings. Is it safe to ignore these or do I need to manually set some
 option? I never got these warnings with Boost 1.29.

There does appear to be something wrong in your setup.  I'm going to guess
that you're linking against the static RTL?

-- 
William E. Kempf


___
Unsubscribe  other changes: http://lists.boost.org/mailman/listinfo.cgi/boost


Re: [boost] VC7/Threads Warnings

2003-03-25 Thread William E. Kempf

David Abrahams said:
 William E. Kempf [EMAIL PROTECTED] writes:

 David Abrahams said:
 William E. Kempf [EMAIL PROTECTED] writes:

 Hmm... this surprised me.  Mr. Maclean indicated the warnings were
 level 1 _and_ 2.  Builds with bjam do report errors, so the warning
 level can't be 0.  MSDN indicates Level 2 is the default warning
 level at the command line.  So I assumed that it must be an RTL
 issue causing the warnings for him.  However, experimenting with
 'bjam -sTOOLS=vc7
 -sBUILD=<vc7><*><cxxflags>-W2' does indeed produce the warnings in
 question.  So it appears that MSDN is wrong, and that level 1 is
 selected if none is supplied?  I plan to bump the level up in my own
 set of bjam tool sets.

 Suggestion: turn them on with #pragmas in the library's test files.

 This won't turn them on when compiling the library itself, though.
 Turning them on only for the test files won't catch many of the
 warnings.

 So I suggest you use #pragmas in the library implementation files
 also.  My point is that you shouldn't need a custom toolset to see the
 warnings, and if it's your aim to avoid triggering them they
 should show up in the Boost regression tests when you do.

I guess I'm wondering if the official toolsets shouldn't be changed.  I
don't understand why the MSDN indicates it should default to /W2 while
we're seeing it default to what I assume is /W1.  But, projects created by
the IDE default to /W3 (which is also the level recommended by the MSDN),
so it makes sense to me that we should probably do the same?  Otherwise,
users are likely to see warnings that we don't represent in the regression
logs.

-- 
William E. Kempf


___
Unsubscribe  other changes: http://lists.boost.org/mailman/listinfo.cgi/boost


Re: [boost] 1.30.0 release postmortem

2003-03-24 Thread William E. Kempf

Beman Dawes said:
 There was some discussion of a better tracking system once before, but I
  really think we need to get going on this now. The problems are much
 more  serious.

 What systems work for others in an Internet environment like Boost? Who
 could act as host? I see the GCC folks are migrating from GNATS to
 Bugzilla.

The only bug tracking systems I have experience with are commercial. 
However, I did run across a link for an interesting project the other day
that may be worth looking into.  TUTOS
(http://www.tutos.org/homepage/about.html) goes beyond bug tracking and
into full project management.  As such, the bug tracking may be less
robust than dedicated applications like Bugzilla?  But it also would
address other things that could make maintaining Boost much nicer for both
developers and release managers.  Of course, since I have no experience
with this, it may be a non-starter suggestion, but I thought it would at
least be worth posting the link.

-- 
William E. Kempf


___
Unsubscribe  other changes: http://lists.boost.org/mailman/listinfo.cgi/boost


Re: [boost] boost 1.30 - Thread lib workspace

2003-03-21 Thread William E. Kempf

vc said:
 Hi all,

 I'm using the boost version 1.30 release, on Win2k and the VC7.1
 compiler.

 I'm porting a big application from Unix to Windows. Because for all the
 modules within this app I created
 a VC++ workspace I would like to do the same for the thread library from
 boost.

 For this I did the following steps:
 1) Create with VC7.1 a Static library application without Precompiled
 header
 2) Add to this lib the source files (from boost_1_30_0\libs\thread\src):
 3) Set the right paths of the project for finding the includes
 4) Build the lib

 My questions/problems are:

 1) Are ok the above steps that I have done? Is it ok that I created it
 as a static lib (this is how I would
 like to have it)?

Not if you make use of thread_specific_ptr in any of your code.  Note
also that the next version of Boost.Threads will be doing this internally
for boost::thread itself... so a static build won't really be possible
with that release.

 2) Are there any preprocessor flags that I have to add to the project?
 If yes from where can I
 find out which should I set?

Just make sure you're linking to the multi-threaded C RTL.

 3) I got a lot of warnings like: xtime.cpp(75) : warning C4273:
 'boost::xtime_get' : inconsistent dll linkage. Actually there are 119
 warnings like this one (C4273 and C4275).
 Why do I get these warnings? Is there a way to eliminate them? Should I
 be worried about them?

You'll have to add code to $BOOST_ROOT/boost/thread/detail/config.hpp to
not define BOOST_THREAD_DECL when building a static library.
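The idea is roughly the following sketch (illustrative only -- the real
logic in that header is more involved, and the MY_* macro names here are
placeholders, not actual configuration flags):

    // leave BOOST_THREAD_DECL empty for a static build so the library's
    // symbols carry no dllimport/dllexport attributes
    #if defined(MY_STATIC_BUILD)              // hypothetical static-lib flag
    #  define BOOST_THREAD_DECL
    #elif defined(MY_BUILDING_THE_DLL)        // hypothetical flag set while building the DLL
    #  define BOOST_THREAD_DECL __declspec(dllexport)
    #else
    #  define BOOST_THREAD_DECL __declspec(dllimport)
    #endif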

 4) Actually I'm using the thread lib from boost, just because it seems
 that it is used by spirit when adding the
 SPIRIT_THREADSAFE flag.
 Looking a little through the boost source files comments I saw that by
 default the Windows native threads are used.
 But the threads created specifically by the application are posix
 threads so for them I used the pthread-win32 lib.
 Can I have problems because there will be both types of threads?

I wouldn't expect problems, but you can compile Boost.Threads with
pthreads-win32 if you want (at least with this version... the next release
probably won't work with this configuration, and I have to admit that I've
not tested this build variant in quite a while).  Look at
$BOOST_ROOT/libs/thread/build/threads.jam to see how to do this.

-- 
William E. Kempf


___
Unsubscribe  other changes: http://lists.boost.org/mailman/listinfo.cgi/boost


Re: [boost] RPMS?

2003-03-21 Thread William E. Kempf

Beman Dawes said:
 At 01:11 PM 3/21/2003, Neal D. Becker wrote:

  I have built SRPMS for RH8 for boost1.30.0.  They required just minor
 modifications to the spec files.  Where should I upload them?

 Should that be part of the regular Boost distribution, and thus live in
 CVS? If so, would you be willing to maintain it?

 Sorry if that is a completely dumb question - I have no knowledge of
 what  an SRPM is, and only a vague second-hand knowledge of RPM.

Until we have a more formal installation solution, I think the SRPM's spec
file should reside in CVS.  It would also be nice to have other
installation options as well, such as Debian packages (sorry, not totally
familiar with the terminology to use), BSD ports, Gentoo ports, Windows
installers, etc.  We just need champions willing to submit and maintain
each of these.

Before anything's actually added, however, maybe we should discuss things
a bit either here or on the Boost Install list.

-- 
William E. Kempf


___
Unsubscribe  other changes: http://lists.boost.org/mailman/listinfo.cgi/boost


Re: [boost] boost web page nitpick

2003-03-14 Thread William E. Kempf

Keith Burton said:



 Please see http://www.boost.org/more/download.html#CVS


From this page :

 free GUI clients are also available for Windows, Mac, and other systems
 from CvsGui.org.


 The link to cvsgui.org goes to somewhere that appears to be no longer
 valid

I believe that at one point http://www.cvsgui.org and
http://www.wincvs.org led you to the same place.  Or at least I think
they were related in some way.  A few weeks ago, the http://www.wincvs.org
site had some server troubles.  It's back up now.  So maybe the
http://www.cvsgui.org link has just not been fixed since then?  Someone
with more info on this will have to decide if the Boost web page needs
updating, but in the mean time you should be able to get any of the GUI
clients you're looking for from http://www.wincvs.org.

-- 
William E. Kempf


___
Unsubscribe  other changes: http://lists.boost.org/mailman/listinfo.cgi/boost


Re: Boost RPMS (Was: [boost] Outstanding patches and fixes)

2003-03-13 Thread William E. Kempf

Vladimir Prus said:
 David Abrahams wrote:
 Beman Dawes [EMAIL PROTECTED] writes:
  Doesn't seem to be in the archives. It's from Neal D. Becker 10 Mar
 
  2003. Here is the entire message:
   I really appreciate the boost rpms that have been made available.
 I
 
hope we
 
   can improve one thing in the upcoming release.
   
   rpm -q --requires boost-python-devel
   boost-devel
   libpython-devel
   
    Unfortunately, on RedHat it's called
   
   python-devel
   
   I hope there is some way to fix this.

 Since I never made a boost RPM, I don't think I'm the guy to address
 it.

 I believe that Malte Starostik is the right person for dealing with this
  issue. I'm pretty sure the difference in naming is a difference between
 Mandrake and Redhat, but have no idea how to fix it.

 And, while we're on it, I think it would be much better if  RPM are
 officially available (i.e from sourceforge download page).

 Lastly, this issue is not release show-stopper: the *spec file which
 creates RPM is not in Boost CVS tree. Malte can make the changes when
 1.30 is  out.

Should it be in the tree?

(Yes, I know, we need to revitalize the installation discussion and
actually get something done on that front.  I only ever intended to be a
moderator in this case, because of time constraints, but someone needs to
take a much more active role in ensuring this area is addressed!)

-- 
William E. Kempf


___
Unsubscribe  other changes: http://lists.boost.org/mailman/listinfo.cgi/boost


Re: [boost] boost::threads and lib vs dll

2003-03-12 Thread William E. Kempf

Russell Hind said:
 I see that boost::thread has moved to a dll implementation (win32) in
 1.30.0-b1.  I have modified the JamFile for boost:thread so it builds
 the lib as well as the dll.  Default build be made to do both, rather
 than just the dll?  Or is boost moving to dll implementation only for
 all libraries?

This has been discussed before.  The switch is a Boost.Threads switch
only, and not something that all Boost libraries are doing.  It was done
to simplify the build process, since Win32 requires the use of a DLL for
TLS cleanup any way.

-- 
William E. Kempf


___
Unsubscribe  other changes: http://lists.boost.org/mailman/listinfo.cgi/boost


Re: [boost] 1.30.0b1 thread.hpp bug

2003-03-10 Thread William E. Kempf

Geurt Vos said:

 Just downloaded the 1.30.0-beta1 zip. There boost/thread.hpp
 is slightly wrong. Line 16 reads:

   #include <boost/thread/conditin.hpp>

 but should be:

   #include <boost/thread/condition.hpp>

Fixed. Thanks.

-- 
William E. Kempf


___
Unsubscribe  other changes: http://lists.boost.org/mailman/listinfo.cgi/boost


Re: [boost] once_flag

2003-03-03 Thread William E. Kempf

Noel Yap said:
 Just wondering, looking at boost/thread/once.hpp, I see that once_flag
 is typedef'd to long, why not bool or char to take up less memory?

For compatibility with the underlying system APIs.
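For reference, typical usage of the flag looks like this (a minimal
sketch, assuming the call_once(func, flag) argument order of this era of
the library):

    #include <boost/thread/once.hpp>

    boost::once_flag init_flag = BOOST_ONCE_INIT;

    void init()
    {
        // one-time setup; runs exactly once even if several threads
        // reach call_once() concurrently
    }

    void worker()
    {
        boost::call_once(&init, init_flag);
        // ... rest of the thread's work ...
    }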

-- 
William E. Kempf


___
Unsubscribe  other changes: http://lists.boost.org/mailman/listinfo.cgi/boost


Re: [boost] Re: Re: Re: Re: Re: Thread-Local Storage (TLS) and templates

2003-02-26 Thread William E. Kempf

Edward Diener said:
 William E. Kempf wrote:
 Edward Diener said:
 William E. Kempf wrote:
 I still don't think it is a TLS issue but rather a thread cleanup
 issue and the restrictions imposed by MS's design of that situation.
 So I can well understand your chagrin at the tricks you must do in
 order to cleanup internal thread data when a thread exits under
 Windows.

 That's a minor fine hair your splitting.  What's the difference
 between a TLS issue and a thread cleanup issue of TLS data?

 Because it is a thread cleanup issue of any thread specific data, not
 just TLS. No matter what the data that exists for a thread when the
 thread is exited, it must be cleaned up efficiently. A user could have
 objects associated with only a particular thread and if the thread ends,
 it must be able to cleanup while other threads continue running. The
 fact that MS says no to this in DLL_THREAD_DETACH, that you are
 effectively blocking other threads in the process until you exit the
 DllMain routine, is for me the central weakness.

What forms of thread specific data are there besides TLS?  How does
DllMain play into how you clean up such data?

  And
 if you look back, I said I wouldn't call it broken, just that the
 implementation has serious design issues.  So it looks like we're in
 agreement now.

 Want to join in my mini campaign to convince MS to fix this one?

 As long as the suggested fix is to allow thread cleanup without the
 current restructions on synchronization or DLL loading/unloading, sure.
 I don't think you are going to change anything in the way that TLS
 currently works nor should you. I am going to guess that MS is aware of
 this issue of thread cleanup and that the main fix on their part would
 be to allow re-entrancy in the current DllMain routine, which of course
 may be a big job on their part given the amount of underlying
 intertwined code.

I have to disagree.  First, I don't think you *can* solve the reentrancy
problems with DllMain.  Second, even if you could, DllMain is the wrong
solution for cleanup.  Something as simple as a thread exit routine would
be a much better partial solution, though it's silly not to add this
directly to TlsAlloc() (or a TlsAllocEx() to keep backwards
compatibility).
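For comparison, POSIX already ties the cleanup hook to key creation; a
minimal sketch of the existing pthreads interface:

    #include <pthread.h>

    static pthread_key_t key;

    // runs automatically in each exiting thread for that thread's
    // non-null value -- no DllMain or special creation routine needed
    extern "C" void destroy_value(void* p)
    {
        delete static_cast<int*>(p);
    }

    void create_key()
    {
        pthread_key_create(&key, &destroy_value);
    }

    void use_key()
    {
        if (pthread_getspecific(key) == 0)
            pthread_setspecific(key, new int(0));
    }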

 The other solution(s) involves some sort of callback
 as you suggested, or some other way to ensure that a thread can be
 exited cleanly without blocking other threads from running in the
 meantime, whether in an executable, a static LIB, or a DLL.

 Knowing MS from their track record, a campaign to get them to change
 anything entails working closely with one or more of their techies who
 can actually understand the issues involved and get the changes to be
 seriously considered. Just voicing displeasure in any public way won't
 do anything. If there is a VC++ rep who now works closely with Boost to
 ensure VC++ .NET conformance, he's the guy I would badger first, and
 then through him you might be able to get to other MS techie employees.

I can give you some names.  I could also give some e-mail addresses,
though that might be considered bad netiquette.  But MS *does* have
someone who's supposed to champion for us developers... Herb Sutter.  If
you want to campaign for this, send him a polite e-mail and I'm sure he'll
discuss this with the appropriate folks at MS.  I've already done so (as
well as voiced the same opinions to some actual MS developers), but my
lone voice may not be enough for them to prioritize this in any way.

-- 
William E. Kempf


___
Unsubscribe  other changes: http://lists.boost.org/mailman/listinfo.cgi/boost


Re: [boost] Re: Re: Re: Re: Re: Re: Thread-Local Storage (TLS) and templates

2003-02-26 Thread William E. Kempf

Edward Diener said:
 I can give you some names.  I could also give some e-mail addresses,
 though that might be considered bad netiquette.  But MS *does* have
 someone who's supposed to champion for us developers... Herb Sutter.
 If you want to campaign for this, send him a polite e-mail and I'm
 sure he'll discuss this with the appropriate folks at MS.

 I have an address for Herb Sutter from CUJ but I don't know if that is
 the most effective for reaching him on this issue. If you have another
 one, I will be glad to use it and argue for the need of thread cleanup
 which doesn't block other threads from running at the same time and/or
 prevent synchronization. I don't mind adding my voice to an issue which
 has obvious drawbacks.

I don't know what e-mail such topics should go to, that's part of the
reason why I didn't give one.  However, the one you have will probably
suffice?

-- 
William E. Kempf


___
Unsubscribe  other changes: http://lists.boost.org/mailman/listinfo.cgi/boost


Re: [boost] Re: Re: Re: Thread-Local Storage (TLS) and templates

2003-02-25 Thread William E. Kempf

Edward Diener said:
 William E. Kempf wrote:
 You can clean up your own TLS index ( or indices ) in your DllMain
  routine when the second parameter is DLL_PROCESS_DETACH, meaning that
 your process is being exited. AFAIK this is the standard way to do
 this.

 (Note: The issue is more with cleaning up TLS data then with cleaning
 up TLS indices/slots.  So we're really talking about
 DLL_THREAD_DETACH here.)

 Then perhaps the weakness is not really with TLS on Windows but rather
 with limitations to actions one can perform at DLL_THREAD_DETACH time.

No, the issue is that the only mechanism provided for TLS cleanup is to
hook DllMain, or to explicitly clean up the data in the thread before it
exits.  The former is unusable in several cases, while the latter can only
be done if you control the creation of the thread.
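
The DllMain hook in question looks roughly like this (a simplified,
illustrative sketch; the data type and index management are placeholders):

#include <windows.h>

static DWORD g_tls_index = TLS_OUT_OF_INDEXES;

BOOL WINAPI DllMain(HINSTANCE, DWORD reason, LPVOID)
{
    switch (reason)
    {
    case DLL_PROCESS_ATTACH:
        g_tls_index = TlsAlloc();
        break;
    case DLL_THREAD_DETACH:
        // The only per-thread cleanup hook Win32 offers a DLL; it runs
        // under the loader lock, which is the source of the restrictions
        // discussed in this thread.
        delete static_cast<int*>(TlsGetValue(g_tls_index));
        break;
    case DLL_PROCESS_DETACH:
        TlsFree(g_tls_index);
        break;
    }
    return TRUE;
}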

 Would you describe these issues specifically ?

I did, below.

 This is the MS way, not the standard way.

 This referring to what above ?

Using DllMain to cleanup TLS data.

  And it's full of issues.
 You are quite limited in what you can safely do within DllMain.  Any
 calls to synchronization routines is likely to deadlock the entire
 process.

 I agree that this is so. You can't sit and wait until some other thread
 has done something, via a Windows synchronization primitive, when you
 are processing DLL_THREAD_DETACH. What is the situation where this is
 necessary ?

There are numerous situations where this is necessary.  For example, the
cleanup mechanism in both Boost.Threads and pthreads-win32 use mutexes,
which can potentially cause *process* deadlock.  If the TLS data is shared
across threads, or references data shared across threads, or simply calls
a routine that does synchronization in the cleanup, all of which are not
that uncommon, and some of which are hard for the programmer to avoid (do
you know what routines do synchronization internally?), you risk deadlock.

  As is calling any routines that load/unload a DLL.

 The workaround is not to dynamically load/unload a DLL as part of thread
 processing.

Do you know what routines do this as part of their implementation?  To
quote the MSDN: "Calling imported functions other than those located in
Kernel32.dll may result in problems that are difficult to diagnose."  And
since a very large number of Win32 API functions are imported... I think
you see the issue.

 Yes, it is cleaner to do so when one only needs a DLL for a
 specific time but the overhead of statically linking a DLL into a
 process instead is minimal, although I agree that dynamic loading is
 often a cleaner design. I do agree with you that the inability to
 dynamically load and unload a DLL at DLL_THREAD_ATTACH/DLL_THREAD_DETACH
 is an unfortunate imposition and that this is poor design on MS's part.
 I am still not clear why this is so and why this limitation exists on
 Windows.

I honestly don't care.  The only time I've ever found this design to be
unusable is when dealing specifically with the cleanup of TLS data, which
would be much better implemented as a registered cleanup routine in the
first place.  Fix this, and I don't care about this artifact of the DLL
system on Win32 platforms.

  There's
 also the issue of forcing the use of a DLL with this scheme, which
 many users rightfully dislike (this is why there are so many thread
 creation routines on Windows).

 I could be mistaken but I believe that TLS works just as effectively in
 static LIBs as it does with DLLs. The difference is that one must do
 manual initialization routines and finalization routines of TLS data for
 different threads, as opposed to what one may do automatically using
 DLL_THREAD_ATTACH/DLL_THREAD_DETACH. But certainly one is not forced to
 use only DLLs or only static LIBs if the implementation supports both.

Initialization isn't really an issue, as you can do lazy initialization
(synchronization issues aside, as they are solvable).  It's the
finalization that's an issue, and it's resulted in the numerous thread
creation routines and the rules for when to use which one.  If you call
any C RTL routines (which may allocate TLS) you can't call CreateThread,
but must instead call _beginthread(ex).  Likewise, if you call any MFC
routines you can't call CreateThread or _beginthread(ex) but instead must
call AfxBeginThread, lest you leak TLS data allocated by these routines. 
This is an issue for Boost.Threads, which has its own thread creation
routines, because I don't know how a user will use the thread.  It's a
problem for other libraries if they have thread creation routines, or have
user registered callbacks executed by the threads created by the library. 
It's a problem if you want to make use of the built in Win32 thread
pooling mechanisms, since those threads will be created using
CreateThread.  And so on.
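
As an illustration of the rule above (a sketch, not code taken from
Boost.Threads): a thread that touches the C RTL should be started with
_beginthreadex rather than CreateThread, so that the RTL's per-thread data
is released when the thread ends.

#include <windows.h>
#include <process.h>
#include <cstdio>

unsigned __stdcall worker(void*)
{
    std::printf("worker thread %lu\n", GetCurrentThreadId());  // uses the C RTL
    return 0;
}

int main()
{
    unsigned id = 0;
    HANDLE h = reinterpret_cast<HANDLE>(
        _beginthreadex(0, 0, &worker, 0, 0, &id));
    WaitForSingleObject(h, INFINITE);
    CloseHandle(h);
}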

 I won't be as critical as Alexander, but I will agree that the MS
 TLS implementation has serious design issues which need to be
 corrected.

 OK, this isn't the place

Re: [boost] Re: Re: Re: Re: Thread-Local Storage (TLS) and templates

2003-02-25 Thread William E. Kempf

Edward Diener said:
 William E. Kempf wrote:
 Edward Diener said:
 William E. Kempf wrote:
  And it's full of issues.
 You are quite limited in what you can safely do within DllMain.  Any
 calls to synchronization routines is likely to deadlock the entire
 process.

 I agree that this is so. You can't sit and wait until some other
 thread has done something, via a Windows synchronization primitive,
 when you are processing DLL_THREAD_DETACH. What is the situation
 where this is necessary ?

 There are numerous situations where this is necessary.  For example,
 the cleanup mechanism in both Boost.Threads and pthreads-win32 use
 mutexes, which can potentially cause *process* deadlock.  If the TLS
 data is shared across threads, or references data shared across
 threads, or simply calls a routine that does synchronization in the
 cleanup, all of which are not that uncommon, and some of which are
 hard for the programmer to avoid (do you know what routines do
 synchronization internally?), you risk deadlock.

 My understanding of TLS data is that it is thread specific and not meant
 to be shared across threads.

That's the most common use, but it's not mandated by anything.  However,
sharing the TLS data is the less common of the cases I gave.

 The whole idea is that every thread in a
 process gets their own copy of the same data. Why then do you say that
 TLS data is shared across threads, or references data shared across
 threads ? The last issue of doing synchronization in the cleanup I can
 understand but not that the data which needs to be synchronized is TLS
 data itself. I completely agree with you that there is a serious problem
 in the DLL_THREAD_DETACH attempting to do synchronized cleanup, but I am
 guessing that you may be using TLS data itself in ways in which it was
 not intended. I don't believe TLS data was ever intended as a way to
 share data between threads but was intended, as its name implies, to
 create data that is specific only to a single thread.

An example of the former, and I believe a valid example, exists in the
next revision to Boost.Threads.  The thread representation is sharable,
i.e. boost::thread uses a ref-counted pimpl idiom to make it copyable and
assignable.  The implementation holds some state information that's
specific to the thread, such as its running state.  The default
constructor and/or thread::self() needs to be able to access this shared
state, which means it must be contained in a TLS slot.  There you have TLS
data that's shared across threads.  More importantly, this data must be
cleaned up at thread exit by decrementing the ref-count (which has to be
synchronized) and if the ref-count goes to zero, by actually deleting the
data.
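
Roughly the shape of what is being described, in a deliberately simplified
modern sketch (this is not the Boost.Threads code): one ref-counted record
per thread, reachable both from any number of thread handles and from a
per-thread slot inside the thread itself, with the last release, possibly
performed by the exiting thread, requiring synchronization.

#include <memory>

struct thread_data
{
    bool running;
    // ... id, join state, etc.
};

// The per-thread slot; shared_ptr's synchronized ref count plays the role
// of the synchronized decrement described above.
thread_local std::shared_ptr<thread_data> this_thread_data;

struct thread_handle            // copyable/assignable handle, pimpl style
{
    std::shared_ptr<thread_data> data;
};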

But again, the latter is the more likely case, and is likely to occur
quite frequently in MT C++ code.

  As is calling any routines that load/unload a DLL.

 The workaround is not to dynamically load/unload a DLL as part of
 thread processing.

 Do you know what routines do this as part of their implementation?  To
 quote the MSDN: "Calling imported functions other than those located in
 Kernel32.dll may result in problems that are difficult to diagnose."
 And since a very large number of Win32 API functions are imported... I
 think you see the issue.

 I see the issue and although I haven't investigated what Windows API
 functions may load/unload a DLL dynamically, something tells me that MS
 must publish such a list somewhere so that one knows what to avoid at
 DLL_THREAD_DETACH time at least within their own Windows APIs.

*chuckles*  Sorry, there's no such list.  The MSDN pretty much says the
only routines you can trust are those in kernel32.dll.  But even if MS did
publish such a list, do you think third party vendors do?  Have you
ensured you document which of your own routines call such routines?

 Yes, it is cleaner to do so when one only needs a DLL for a
 specific time but the overhead of statically linking a DLL into a
 process instead is minimal, although I agree that dynamic loading is
 often a cleaner design. I do agree with you that the inability to
 dynamically load and unload a DLL at
 DLL_THREAD_ATTACH/DLL_THREAD_DETACH is an unfortunate imposition and
  that this is poor design on MS's part. I am still not clear why this
 is so and why this limitation exists on Windows.

 I honestly don't care.  The only time I've ever found this design to
 be unusable is when dealing specifically with the cleanup of TLS data,
 which would be much better implemented as a registered cleanup routine
 in the first place.  Fix this, and I don't care about this artifact of
 the DLL system on Win32 platforms.

 OK, given that a registered cleanup routine would not have the
 restrictions which DLL_THREAD_DETACH has.

 I still don't think it is a TLS issue but rather a thread cleanup issue
 and the restrictions imposed by MS's design of that situation. So I can
 well understand your chagrin at the tricks you must do in order to
 cleanup

Re: [boost] Re: Thread-Local Storage (TLS) and templates

2003-02-24 Thread William E. Kempf

Edward Diener said:
 Alexander Terekhov [EMAIL PROTECTED] wrote in message
 news:[EMAIL PROTECTED]

 Ken Hagan wrote:
 
  Alexander Terekhov wrote:
  
   I, for one, believe strongly that k is nothing but
  
   static_cast<typeof(k)*>(pthread_getspecific(__k_key));
  
   It *isn't* a compile-time constant (just like errno isn't a
 compile time constant).
 
  MSVC has no pthread_getspecific(), so I venture to suggest that your
 belief probably isn't valid for that compiler.

 Uhmm. In return, I venture to suggest that MS-TLS can and shall be
 characterized as ``utterly busted.''

 If a DLL declares any nonlocal data or object as __declspec(thread),
  it can cause a protection fault if dynamically loaded.

 This is well-known by many and has never been hidden by MS. It doesn't
 mean __declspec(thread) is busted, it just means that it is limited to
 only those cases in which the DLL is not dynamically loaded, which is
 the vast majority of cases. Of course to make TLS completely foolproof,
 one does not use __declspec(thread) but instead one uses the Win32 TLS
 API functions instead.

Where you run into issues with TLS cleanup ;).

I won't be as critical as Alexander, but I will agree that the MS TLS
implementation has serious design issues which need to be corrected.

-- 
William E. Kempf


___
Unsubscribe  other changes: http://lists.boost.org/mailman/listinfo.cgi/boost


Re: [boost] Boost Crashes after Compiling with Mingw?

2003-02-21 Thread William E. Kempf

Chris S said:
 I've installed boost's threading library following the build
 instructions in the documentation. I was unable to get bjam to work. No
 matter what I tried it won't accept my include or library paths for
 mingw. However, using Dev-C++, I set up projects using the appropriate
 include and library directories and successfully built both
 libboostthread.a and
 boostthreadmon.dll.

 All seemed well until I tried to run the libs/thread/example/thread.cpp
 example. It compiled wonderfully and without error, but segfaults for
 the lines:

 boost::thread thrd(alarm);
 thrd.join();

 I'm assuming, while I believe I followed the build instructions
 correctly and received no compilation or linking errors, that the boost
 threading library is not at fault. If so, what might I have done, or not
 done, to cause this problem?

Actually, I'm fighting GCC/Win32 issues.  Cygwin uses POSIX threads, and
the timing facilities seem to be broken.  Cygwin with -mno-cygwin and
MinGW use the native Win32 C RTL, but there's issues with TSS not working
which *appears* to stem from the STL libraries not being thread safe (and
I've not had the time to get STLPort to work with the Cygwin/-mno-cygwin
combination that I use).  The specific problem you're describing is not
one I've seen, so I'll look into it shortly, but don't be too quick to
assume it's not a fault in Boost.Threads.  If anyone can help in solving
these portability issues, I'd appreciate it.

-- 
William E. Kempf


___
Unsubscribe  other changes: http://lists.boost.org/mailman/listinfo.cgi/boost



Re: [boost] Any, Function, and Signals documentation

2003-02-19 Thread William E. Kempf

Beman Dawes said:
 At 11:56 AM 2/18/2003, William E. Kempf wrote:

  Well, I'm in favor of that, since we're moving at least some of the
 documentation to Boost.Book with this release (or so I gathered).  So
 what's the group opinion on this one?

 I'd like to hold off as many changes as possible until after the
 release. I  don't have time to think clearly about the problems
 involved, and I'd like  to actually try out some of the software too.
 The final day or two before a  branch-for-release isn't a good time for
 this important discussion.

Sorry, I do agree strongly with this and didn't mean to imply we should
rush moving the tools in before this release.  But since some of the
documentation for this release will be (it seems) generated documentation,
I think we should move the tools in very shortly after release.

-- 
William E. Kempf


___
Unsubscribe  other changes: http://lists.boost.org/mailman/listinfo.cgi/boost



Re: [boost] Re: Re: Formal review or Variant Library (Ed B.)

2003-02-19 Thread William E. Kempf

David Abrahams said:
 Ed Brey [EMAIL PROTECTED] writes:

 Incidentally, no Boost.Python user has reported confusion about
 extract, and they tend to be slightly more naive than the average
 Boost user.

 Unfortunately, that data point is of limited use, since Python has a
 lot of names leaving something to be desired (generally those borrowed
 from C and Unix).  When I was a Python newbie, instead of complaining,
 I just got used to looking up functions in the docs to be sure I knew
 what they did.

 Are you kidding?  Python users (almost) never read docs!
 {sorry all you other Python users out there; it's just my impression}.

No?  I thought this sort of thing was done all the time:

>>> import os
>>> help(os)

>>> help(os.path)


I know I do it a lot.

-- 
William E. Kempf


___
Unsubscribe  other changes: http://lists.boost.org/mailman/listinfo.cgi/boost



Re: [boost] Any, Function, and Signals documentation

2003-02-18 Thread William E. Kempf
 in BoostBook than  in HTML. For instance,
 <libraryname>Tuple</libraryname> will link to the Tuple library,
 regardless of where the HTML is (even if it isn't generated);
 <functionname>boost::ref</functionname> will link to the function
 boost::ref, regardless of where it is. Broken link detection is built
 into the BoostBook XSL, because it emits a warning whenever name lookup
 fails (and won't generate a link). What we do now is much more
 involved: find the HTML file and anchor documenting the entity we want
 to link, put in an explicit link <a href=...>, and checking the links
 will have to be run manually prior to a  release.

The only issue lies in the transition period when not all documentation
has been converted to Boost.Book and some of the static documentation
needs to link into a library that's been converted.

 Using generated documentation has some up-front costs: you'll need to
 get an  XSLT processor, and maybe some stylesheets (if you don't want
 them downloaded  on demand), and probably run a simple configuration
 command (now a shell  script; will be in Jam eventually).

 The time savings from the generated documentation will come in little
 pieces:  you won't need to keep the synopsis in sync with the detailed
 description,  you won't need to keep a table of contents in sync, keep
 example code in a  separate test file in sync with the HTML version in
 the documentation, or  look up a link in someone else's library.
 BoostBook is meant to eliminate  redundancy (except for XML closing
 tags; ha ha), and all the time we waste  keeping redundant pieces in
 sync.

I think everyone is convinced that Boost.Book is a good idea long term. 
We just need to try and impact the whole project as minimally as we can
for the short term.

 There's an unfortunate Catch-22 with all this: to smooth the BoostBook
 learning curve would require further integration with the Boost CVS
 repository (not the sandbox), but we shouldn't integrate with Boost CVS
 until  BoostBook has been accepted (whatever that means for a tool).
 But  acceptance requires, at the very least, more developers to hop
 over the  initial hump and to start seeing the benefits of BoostBook.

I think there's several of us interested who will be working on this when
time permits.  But honestly, having it in the sandbox is at least a little
inconvenient... and to me it makes little sense if some released
documentation is going to depend on it.

-- 
William E. Kempf


___
Unsubscribe  other changes: http://lists.boost.org/mailman/listinfo.cgi/boost



Re: [boost] Re: Re: boost.test thread safe bugs (and some fixes)

2003-02-18 Thread William E. Kempf

Gennadiy Rozental said:
  The code never promised to work in a multithreaded environment, nor
 even to be thread safe. It is in my to-do list, though recent hands
 in several situations may require addressing some of these issues
 sooner.

 What?!?  Where's the big, bold disclaimer about that!

 It's in the to-do section on the front page. Though you're right, there
 should have been an explicit disclaimer about that.

 We have to have all
 of the Boost.Test library thread safe, because the Boost.Thread
 library depends on it.

 If you are accessing Boost.Test interfaces only from one thread, it may
 work even with the current implementation.

But that's an impossibility.  I have to test from multiple threads (or
what would the point be?!?).  Now certain parts of the interface might be
restricted to a single thread... but I don't feel comfortable even with
that.

  No. I don't think it's a common situation. You don't usually create
 and run test cases inside other test case code.

 *I* had considered doing just this, in order to get a tree structure
 for dependent test cases.  Nothing in the documentation seems to
 indicate this is something that's not supported, and I think that
 would be the wrong answer in any event.

 I already implemented changes that should allow reentrant usage of the
 execution monitor, so this is not a problem any more. On the other hand,
 I was thinking about implementing direct support for test case
 dependencies inside Boost.Test (next release). Would it be enough for
 you to be able to specify that one test case should run only if some
 other one passed?

If you consider a test suite a test case (which should be how it is, no?),
then yes, that's all I'd need.

 To make this thread safe you would need to store the pointer in a
  thread local storage slot, BTW I don't think you can use
 boost.threads for this,
  as
 it will create a dependency quagmire for status/Jamfile :-(
 
  I thought to introduce macro BOOST_MULTITHREADED_UNIT_TEST and guard
 all usage of Boost.Thread with it. It does not create extra
 dependency and should allow to build multithreaded version with bjam
 subvariant feature.

 How would this work for the Boost.Thread library?  Boost.Test must be
 usable by Boost.Thread, and this means it must be thread safe without
 using Boost.Thread.

 1. Boost.Thread will depend on a multithreaded version of Boost.Test. 2.
 Boost.Test will try to use a minimal/basic part of Boost.Thread
 functionality

There's no minimal/basic part of Boost.Thread that doesn't need testing.
 If I can't rely on it working in my own regression testing, I certainly
can't rely on it being a part of the underlying test framework.  I know
this means more work for you, but there's not much to be done about it. 
You can sacrifice performance, however, in a testing framework.  So you
can probably get by with nothing more than a simple mutex and a TSS
concept without implicit cleanup, which should be fairly trivial for you
to implement.
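
A guess at what such a minimal pair might look like on Win32 (sketch only;
no error handling, and the TSS slot deliberately does no implicit cleanup):

#include <windows.h>

class simple_mutex
{
    CRITICAL_SECTION cs_;
public:
    simple_mutex()  { InitializeCriticalSection(&cs_); }
    ~simple_mutex() { DeleteCriticalSection(&cs_); }
    void lock()     { EnterCriticalSection(&cs_); }
    void unlock()   { LeaveCriticalSection(&cs_); }
};

class simple_tss    // the owner frees the stored value; no destructor callback
{
    DWORD index_;
public:
    simple_tss()  : index_(TlsAlloc()) {}
    ~simple_tss() { TlsFree(index_); }
    void* get() const  { return TlsGetValue(index_); }
    void  set(void* p) { TlsSetValue(index_, p); }
};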

 3. The first test cases of the Boost.Thread unit test will need to check
 that the above basic functionality is working as expected. And only if
 these test cases are passing, continue with the rest of the testing.

How do I test the minimal portion if I can't use the testing framework?

 This is not a unique situation. Boost.Test has similar problems. It's
 like in relativistic physics: one cannot measure the exact value
 because the measuring tools affect the measurement.

If I can't measure the correctness of Boost.Threads, because Boost.Test
affects the measurement, then what good is it?

 Thread safety issues are very critical, AFAICT.  Boost.Threads depends
 on Boost.Test, and assumes it is thread safe.

 I understand your concern, William. But the Boost.Thread library is the
 only library that needs a thread-safe version of Boost.Test. Thread safety
 will need to be addressed all over the place, not only in
 execution_monitor. Add to that that I am not familiar with your library. As
 a result I would not want to do this in a hurry. I promise to take care
 of it for the next release. Would that be acceptable for you?

Boost.Threads is the only library that needs thread-safe versions of
Boost.Test *TODAY* (at least that are part of the actual Boost project,
but Boost.Test is also being used outside of the Boost project, and I
won't begin to claim that I know they don't need thread-safe versions). 
As for not doing it in a hurry... I understand what you're saying, but
this sounds like it jeopardizes this and future release schedules.  The
deadlocks reported in the Boost.Threads tests can't be reproduced by
myself with any of the compilers I have available on any of the 3 machines
and 2 OSes I have.  This makes diagnosing problems extremely difficult,
and if I can't trust that the problems aren't in the testing framework,
it's even more difficult.

-- 
William E. Kempf


___
Unsubscribe  other changes: http://lists.boost.org/mailman/listinfo.cgi/boost



Re: [boost] Any, Function, and Signals documentation

2003-02-18 Thread William E. Kempf

Douglas Paul Gregor said:
 On Tue, 18 Feb 2003, William E. Kempf wrote:
 Douglas Gregor said:

 A reasonable concern.  But if we keep only release versions of
 generated documentation in CVS, I don't think it will be too severe.
 Intermediate doc changes would either have to be accessed directly
 from the web or generated locally from CVS.  Seems a fair compromise
 to this issue to me.

 I'm okay with this.

What are other's thoughts on this compromise?

  It's my hope that developers will adopt BoostBook for their own
 documentation.  Then any time they want to be sure their local copy
 of the documentation is  up-to-date they just regenerate the format
 they want locally. It's really not  much different from rebuilding,
 e.g., libboost_regex with Boost Jam.

 Actually, today it's much different.  There's no Jam files for
 producing the documentation, and several tools are required to run the
 makefiles that not all developers will have on hand.  In the future I
 expect we'll be able to simplify the process, but you have to admit
 we're not there yet.

 My intended analogy was with Boost.Build. To use Boost.Build, you need
 to compile and install another program (Boost Jam), and perform a build
 step to get updated binaries. BoostBook will be the same way: compile
 and install another program (XSLT processor) and perform a build step to
 get updated documentation. (Granted, Boost Jam comes with the Boost
 distribution, but an XSLT processor should not; on the other hand, you
 need Jam if you want to use Regex, Thread, Signals, or Date-Time, but
 generally nobody is required to rebuild documentation).

This is a minor difference, though.  The bjam executable bootstraps
fairly easily on most platforms.  XSLT processors aren't quite as
convenient.  At least that was my experience the last time I tried to do
DocBook stuff on a Windows box (without Cygwin).  Things may have
improved in this regard, and if not, I'm sure we can improve things
ourselves, but I'm nervous that we're not ready for this yet.

 The only issue lies in the transition period when not all
 documentation has been converted to Boost.Book and some of the
 static documentation needs to link into a library that's been
 converted.

 ... and I don't know how to do that, yet.

Which is the single biggest concern with the migration to Boost.Book. 
Here's where I see the real catch-22, and I'm not sure how to deal with
it.

 I think there's several of us interested who will be working on this
 when time permits.  But honestly, having it in the sandbox is at least
 a little inconvenient... and to me it makes little sense if some
 released documentation is going to depend on it.

 If there are no complains, I would _love_ to move BoostBook out of the
 sandbox and into its (presumably) permanent place in Boost CVS.

Well, I'm in favor of that, since we're moving at least some of the
documentation to Boost.Book with this release (or so I gathered).  So
what's the group opinion on this one?

-- 
William E. Kempf


___
Unsubscribe  other changes: http://lists.boost.org/mailman/listinfo.cgi/boost



Re: [boost] Re: condition::notify_all

2003-02-18 Thread William E. Kempf

Michael Glassford said:
 Scott McCaskill wrote:
 I was just looking at the win32 implementation of the
 condition variable class in the thread library and
 noticed something odd.  In version 1.7 of
 condition.cpp, there is a bug fix for
 condition::notify_one.  At the beginning of the
 function, a mutex is acquired, but not all control
 paths resulted in the mutex being released.  Part of
 the fix involved making sure that the mutex is always
 released.

 However, it looks like the same behavior still exists
 in the current version of condition.cpp for notify_all
 (win32)--not all control paths will release the mutex.
  Am I mistaken, or is this a bug?

 It looks the same to me. Any comment about this?

I somehow missed the original post here.  Now fixed in CVS.

-- 
William E. Kempf


___
Unsubscribe  other changes: http://lists.boost.org/mailman/listinfo.cgi/boost



Re: [boost] Re: Re: Re: boost.test thread safe bugs (and some fixes)

2003-02-18 Thread William E. Kempf

Gennadiy Rozental said:
  1. Boost.Thread will depend on a multithreaded version of Boost.Test.
 2. Boost.Test will try to use a minimal/basic part of Boost.Thread
  functionality

 There's no minimal/basic part of Boost.Thread that doesn't need
 testing.

 I did not mean that it does not need testing. What I meant is that I
 will try to use only a small part of Boost.Thread functionality.
 What I propose is that we first check that this particular part is working
 (it could be part of the Boost.Test or Boost.Thread unit tests),
 and then move on with the rest of your testing.

Again, how do you test it is working if we can't use the Boost.Test
framework!  We're in a catch-22 situation so long as you use Boost.Thread
in Boost.Test.

  If I can't rely on it working in my own regression testing, I
 certainly
 can't rely on it being a part of the underlying test framework.  I
 know this means more work for you, but there's not much to be done
 about it. You can sacrifice performance, however, in a testing
 framework.  So you can probably get by with nothing more than a simple
 mutex and a TSS concept without implicit cleanup, which should be
 fairly trivial for you to implement.

 I would really prefer not to reinvent the wheel with a portable
 implementation of Mutex and TSS.

I understand you not wanting to do so, but I see no alternative.

  3. The first test cases of Boost.Thread unit test will need to check
 that the above basic functionality is working as expected. And only
 of these test cases are passing, continue with rest of testing.

 How do I test the minimal portion if I can't use the testing
 framework?

 There are several choices here. You may need to know how Boost.Test is
 using mutex and tss. Or you may rely on Boost.Test unit tests that I
 will implement to validate multithreaded usage. For example, start 2
 threads and throw exceptions in both of them (making sure that there
 are no race conditions). This way we may check that the execution monitor
 jump buffer is located in thread specific storage. And so on with every
 usage of
 Boost.Thread. An alternative is to write several simple tests for the
 part of Boost.Thread that is used by Boost.Test without using
 Boost.Test. Once they pass we may be confident with usage of
 Boost.Test for further testing.

How do you create a thread here if we can't prove that the Boost.Thread
creation works portably?  You'll still wind up reinventing the wheel
here, you're just choosing to implement thread creation instead of the
mutex and TSS.  From my POV it would be easier to do the mutex and TSS,
but hey, I don't care as long as you can prove that the testing framework
works *before* I start using it to test Boost.Threads.

-- 
William E. Kempf


___
Unsubscribe  other changes: http://lists.boost.org/mailman/listinfo.cgi/boost



Re: [boost] Re: boost.test thread safe bugs (and some fixes)

2003-02-17 Thread William E. Kempf

Gennadiy Rozental said:
 I've been looking at your signal handling implementation in
execution_monitor.cpp, and I think I've uncovered quite a few bugs,
 some of which are really quite fatal for multithreading code.

 The code never promised to work in a multithreaded environment, nor even
 to be thread safe. It is in my to-do list, though recent hands in
 several situations may require addressing some of these issues sooner.

What?!?  Where's the big, bold disclaimer about that! We have to have all
of the Boost.Test library thread safe, because the Boost.Thread library
depends on it.

Issue 1:
~
You use a singleton for the signal handling jump:

inline sigjmp_buf& execution_monitor_jump_buffer()
{
    static sigjmp_buf unit_test_jump_buffer_;
    return unit_test_jump_buffer_;
}

There are two issues with this:

a) It is not reentrant: if the monitored procedure called by
 catch_signals calls another monitored procedure then the inner
 catch_signals call will overwrite the jump data for the outer call.
 IMO this situation is quite common - and actually occurs in your own
 unit test code I believe.

  No. I don't think it's a common situation. You don't usually create and
 run test cases inside other test case code.

*I* had considered doing just this, in order to get a tree structure for
dependent test cases.  Nothing in the documentation seems to indicate this
is something that's not supported, and I think that would be the wrong
answer in any event.
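
One possible shape of a fix for both problems, sketched here purely for
illustration (this is not Boost.Test's code, and thread_local stands in for
a TSS slot): keep a per-thread pointer to the active jump buffer and have
each monitored call save and restore the enclosing one.

#include <csetjmp>

static thread_local std::jmp_buf* current_jump_buffer = 0;

int monitored(int (*fn)())
{
    std::jmp_buf local;
    std::jmp_buf* outer = current_jump_buffer;  // remember the enclosing monitor
    current_jump_buffer = &local;
    int result = -1;
    if (setjmp(local) == 0)
        result = fn();   // a signal handler would longjmp(*current_jump_buffer, 1)
    current_jump_buffer = outer;                // restore for the outer call
    return result;
}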

To make this thread safe you would need to store the pointer in a
 thread local storage slot, BTW I don't think you can use boost.threads
 for this,
 as
it will create a dependency quagmire for status/Jamfile :-(

 I thought to introduce macro BOOST_MULTITHREADED_UNIT_TEST and guard all
 usage of Boost.Thread with it. It does not create extra dependency and
 should allow to build multithreaded version with bjam subvariant
 feature.

How would this work for the Boost.Thread library?  Boost.Test must be
usable by Boost.Thread, and this means it must be thread safe without
using Boost.Thread.

The difficulty we have now is that there is a release coming up, and
 boost.test is pretty mission critical for that, and these aren't really
 trivial issues to fix. I'm not sure how we should handle that - ideas?

 I was aware of thread safety issues, and I still don't think it is so
 critical that we need to hurry to fix it for this release. My plan was
 to address it after CLA. I still hope to be able to use Boost.Thread for
 this. I will try to address 1 (without tss), 2 and 4 today.

Thread safety issues are very critical, AFAICT.  Boost.Threads depends on
Boost.Test, and assumes it is thread safe.

-- 
William E. Kempf


___
Unsubscribe  other changes: http://lists.boost.org/mailman/listinfo.cgi/boost



Re: [boost] OpenBSD regression, hanging tests!

2003-02-14 Thread William E. Kempf

Rene Rivera said:
 The tests for OpenBSD just finished a while ago and there are some tests
 that fail because they hang, using 99% CPU with indefinite execution:

 Hang on GCC 2.95.3:

 thread / test_condition...
 
http://boost.sourceforge.net/regression-logs/cs-OpenBSD-links.html#test_condition%20gcc

 thread / test_mutex...
 http://boost.sourceforge.net/regression-logs/cs-OpenBSD-links.html#test_mutex%20gcc

 thread / test_thread...
 http://boost.sourceforge.net/regression-logs/cs-OpenBSD-links.html#test_thread%20gcc

 Hangs on both GCC 2.95.3 and 3.2:

 test / errors_handling_test...
 
http://boost.sourceforge.net/regression-logs/cs-OpenBSD-links.html#errors_handling_test%20gcc
 
http://boost.sourceforge.net/regression-logs/cs-OpenBSD-links.html#errors_handling_test%20gcc-3.2

Thanks.  I'll try to look into this tomorrow.

-- 
William E. Kempf


___
Unsubscribe  other changes: http://lists.boost.org/mailman/listinfo.cgi/boost



Re: Re: [boost] Re: A new boost::thread implementation?

2003-02-12 Thread William E. Kempf
Sorry for late reply... had a hard disk problem that prevented accessing
e-mail.

Peter Dimov said:
 William E. Kempf wrote:

 How about this compromise:

 template <typename R>
 class async_call
 {
 public:
 template <typename F>
 explicit async_call(const F& f)
 : m_func(f)
 {
 }

 void operator()()
 {
 mutex::scoped_lock lock(m_mutex);
 if (m_result)
 throw "can't call multiple times";

 operator() shouldn't throw; it's being used as a thread procedure, and
 the final verdict on these was to terminate() on exception, I believe.
 But you may have changed that. :-)

I'm not sure how the terminate() on exception semantics (which haven't
changed) apply, exactly.  But I assume you (and probably Dave) would
prefer this to just be an assert and documented undefined behavior.  I
have no problems with that.

 lock.unlock();
 R temp(m_func());
 lock.lock();
 m_result.reset(temp);
 m_cond.notify_all();
 }

 R result() const
 {
 boost::mutex::scoped_lock lock(m_mutex);
 while (!m_result)
 m_cond.wait(lock);

 This changes result()'s semantics to block until op() finishes; what
 happens if nobody calls op()? Or it throws an exception?

Changes the semantics?  I thought this was what was expected and
illustrated in every example thus far?  Failure to call op() is a user
error that will result in deadlock if result() is called.  The only other
alternative is to throw in result() if op() wasn't called, but I don't
think that's appropriate.  The exception question still needs work.  We
probably want result() to throw in this case, the question is what it will
throw.  IOW, do we build the mechanism for propagating exception types
across thread boundaries, or just throw a single generic exception type.

 return *m_result.get();
 }

 private:
 boost::function0<R> m_func;
 optional<R> m_result;
 mutable mutex m_mutex;
 mutable condition m_cond;
 };

 template <typename R>
 class future
 {
 public:
 template <typename F>
 explicit future(const F& f)
 : m_pimpl(new async_call<R>(f))
 {
 }

 future(const future<R>& other)
 {
 mutex::scoped_lock lock(m_mutex);

 I don't think you need a lock here, but I may be missing something.

I have to double check the implementation of shared_ptr, but I was
assuming all it did was to synchronize the ref count manipulation. 
Reads/writes of the data pointed at needed to be synchronized externally. 
If that's the case, the assignment here needs to be synchronized in order
to ensure it doesn't interrupt the access in op().

 m_pimpl = other.m_pimpl;
 }

 future<R>& operator=(const future<R>& other)
 {
 mutex::scoped_lock lock(m_mutex);

 --

 m_pimpl = other.m_pimpl;
 }

 void operator()()
 {
 (*get())();
 }

 R result() const
 {
 return get()->result();
 }

 private:
 shared_ptr<async_call<R> > get() const
 {
 mutex::scoped_lock lock(m_mutex);

 --

 return m_pimpl;
 }

 shared_ptr<async_call<R> > m_pimpl;
 mutable mutex m_mutex;
 };

 As for the big picture, ask Dave. ;-) I tend towards a refcounted
 async_call.

That's what future gives you, while async_call requires no dynamic
memory allocation, which is an important consideration for many uses.

-- 
William E. Kempf


___
Unsubscribe  other changes: http://lists.boost.org/mailman/listinfo.cgi/boost



Re: Re: [boost] Re: A new boost::thread implementation?

2003-02-12 Thread William E. Kempf

Peter Dimov said:
 William E. Kempf wrote:

 It's not just the efficiencies that concern me with dynamic
 allocation.  It's the additional points of failure that occur in this
 case as well.  For instance, check out the article on embedded coding
 in the most recent CUJ (sorry, don't have the exact title handy).
 Embedded folks generally avoid dynamic memory whenever possible, so
 I'm a little uncomfortable with a solution that mandates that the
 implementation use dynamic allocation of memory.  At least, if that's
 the only solution provided.

 This allocation isn't much different than the allocation performed by
 pthread_create. An embedded implementation can simply impose an upper
 limit on the total number of async_calls and never malloc.

True enough.

-- 
William E. Kempf


___
Unsubscribe  other changes: http://lists.boost.org/mailman/listinfo.cgi/boost



Re: Re: [boost] Re: A new boost::thread implementation?

2003-02-10 Thread William E. Kempf

 
 From: David Abrahams [EMAIL PROTECTED]
 Date: 2003/02/10 Mon AM 11:15:31 EST
 To: Boost mailing list [EMAIL PROTECTED]
 Subject: Re: [boost] Re: A new boost::thread implementation?
 
 William E. Kempf [EMAIL PROTECTED] writes:
 
  Actually, there's another minor issue as well.  The user can call
  operator() and then let the async_call go out of scope with out ever
  calling result().  Mayhem would ensue.  The two options for dealing
  with this are to either block in the destructor until the call has
  completed or to simply document this as undefined behavior.
 
 If you want async_call to be copyable you'd need to have a handle-body
 idiom anyway, and something associated with the thread could be used
 to keep the body alive.

True enough.  The code provided by Mr. Dimov wasn't copyable, however.  Is it 
important enough to allow copying to be worth the issues involved with dynamic memory 
usage here (i.e. a point of failure in the constructor)?  I think it probably is, I 
just want to see how others feel.


William E. Kempf
[EMAIL PROTECTED]

___
Unsubscribe  other changes: http://lists.boost.org/mailman/listinfo.cgi/boost



Re: Re: [boost] Re: A new boost::thread implementation?

2003-02-10 Thread William E. Kempf

 
 From: Peter Dimov [EMAIL PROTECTED]
 Date: 2003/02/10 Mon PM 12:54:28 EST
 To: Boost mailing list [EMAIL PROTECTED]
 Subject: Re: Re: [boost] Re: A new boost::thread implementation?
 
 William E. Kempf wrote:
  From: Peter Dimov [EMAIL PROTECTED]
  // step 2: execute an async_call
  call();
 
  This example, and the implementation above, are just complex
  synchronous calls.  I assume you really meant for either the
  constructor or this call to also take an Executor concept?
 
  This line could be
 
  boost::thread exec( ref(call) );
 
  or
 
  boost::thread_pool pool;
  pool.dispatch( ref(call) );
 
  I didn't have a prebuilt Boost.Threads library handy when I wrote
  the code (rather quickly) so I used a plain call.
 
  No, it couldn't be, because async_call isn't copyable ;).  But I get
  the point.
 
 Note that I diligently used ref(call) above. ;-)

Yeah, I noticed that when I received my own response.  Sorry about not reading it more 
carefully.

  Since operator() is synchronized, i don't see a race... am I missing
  something?
 
  Sort of... I was thinking about the refactoring where you don't hold
  the mutex the entire time the function is being called.  But even
  without the refactoring, there is some room for error:
 
  thread1: call()
  thread2: call()
  thread1: result() // which result?
 
 Unspecified, but I don't think we can avoid that with the low-level
 interface. High level wrappers that package creation and execution would be
 immune to this problem.

Agreed.
 
  Actually, there's another minor issue as well.  The user can call
  operator() and then let the async_call go out of scope with out ever
  calling result().  Mayhem would ensue.  The two options for dealing
  with this are to either block in the destructor until the call has
  completed or to simply document this as undefined behavior.
 
  Yes, good point, I missed that.
 
  I lean towards simple undefined behavior.  How do you feel about it?
 
 Seems entirely reasonable. I don't think that we can fix this. Accessing
 an object after it has been destroyed is simply an error; although this is
 probably a good argument for making async_call copyable/counted so that the
 copy being executed can keep the representation alive.

Yes, agreed.  I'm just not sure which approach is more appropriate... to use dynamic 
allocation and ref-counting in the implementation or to simply require the user to 
strictly manage the lifetime of the async_call so that there's no issues with a truly 
asynchronous Executor accessing the return value after it's gone out of scope.
 


William E. Kempf
[EMAIL PROTECTED]

___
Unsubscribe  other changes: http://lists.boost.org/mailman/listinfo.cgi/boost



Re: Re: [boost] Re: A new boost::thread implementation?

2003-02-10 Thread William E. Kempf
 From: David Abrahams [EMAIL PROTECTED]
 William E. Kempf [EMAIL PROTECTED] writes:
   I lean towards simple undefined behavior.  How do you feel about it?
 
 I have a feeling that I'm not being asked here, and maybe even that
 it's wasted breath because you've grown tired of my emphasis on a
 high-level interface, but there's a lot to be said for eliminating
 sources of undefined behavior, especially when it might have to do
 with the ordering of operations in a MT context.

No, I was asking anyone interested in responding, and you're certainly not wasting 
your breath.  I think I reached a compromise on these issues/questions, and would 
appreciate your response (it's in another post).
 
  Seems entirely reasonable. I don't think that we can fix this. Accessing
  an object after it has been destroyed is simply an error; although this is
  probably a good argument for making async_call copyable/counted so that the
  copy being executed can keep the representation alive.
 
  Yes, agreed.  I'm just not sure which approach is more
  appropriate... to use dynamic allocation and ref-counting in the
  implementation or to simply require the user to strictly manage the
  lifetime of the async_call so that there's no issues with a truly
  asynchronous Executor accessing the return value after it's gone out
  of scope.
 
 Allocation can be pretty darned efficient when it matters.  See my
 fast smart pointer allocator that Peter added to shared_ptr for
 example.

It's not just the efficiencies that concern me with dynamic allocation.  It's the 
additional points of failure that occur in this case as well.  For instance, check out 
the article on embedded coding in the most recent CUJ (sorry, don't have the exact 
title handy).  Embedded folks generally avoid dynamic memory whenever possible, so 
I'm a little uncomfortable with a solution that mandates that the implementation use 
dynamic allocation of memory.  At least, if that's the only solution provided.
 


William E. Kempf
[EMAIL PROTECTED]

___
Unsubscribe  other changes: http://lists.boost.org/mailman/listinfo.cgi/boost



Re: [boost] Re: A new boost::thread implementation?

2003-02-08 Thread William E. Kempf

David Abrahams said:
 William E. Kempf [EMAIL PROTECTED] writes:

 David Abrahams said:
 ...and if it can't be default-constructed?

 That's what boost::optional is for ;).

 Yeeeh. Once the async_call returns, you have a value, and should be
 able to count on it.  You shouldn't get back an object whose
 invariant allows there to be no value.

 I'm not sure I can interpret the yeeeh part. Do you think there's
 still an issue to discuss here?

 Yes.  Yeeeh means I'm uncomfortable with asking people to get
 involved with complicated state like it's there or it isn't there for
 something as conceptually simple as a result returned from waiting on a
 thread function to finish.

OK, *if* I'm totally understanding you now, I don't think the issue you
see actually exists.  The invariant of optional may allow there to be no
value, but the invariant of a future/async_result doesn't allow this
*after the invocation has completed*.  (Actually, there is one case where
this might occur, and that's when the invocation throws an exception if we
add the async exception functionality that people want here.  But in this
case what happens is a call to res.get(), or what ever name we use, will
throw an exception.)  The optional is just an implementation detail that
allows you to not have to use a type that's default constructible.

If, on the other hand, you're concerned about the uninitialized state
prior to invocation... we can't have our cake and eat it to, and since the
value is meaningless prior to invocation anyway, I'd rather allow the
solution that doesn't require default-constructible types.
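
To make that concrete, a minimal illustration (the names are made up): the
result slot starts out empty and is only filled once the invocation
completes, so the result type needs no default constructor.

#include <boost/optional.hpp>

struct big_result
{
    explicit big_result(int) {}      // no default constructor
};

boost::optional<big_result> slot;    // empty until the call finishes

void on_call_finished()
{
    slot.reset(big_result(42));      // now *slot is a valid value
}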

 These are the two obvious (to me) alternatives, but the idea is to
 leave the call/execute portion orthogonal and open.  Alexander was
 quite right that this is similar to the Future concept in his Java
 link.  The Future holds the storage for the data to be returned and
 provides the binding mechanism for what actually gets called, while
 the Executor does the actual invocation.  I've modeled the Future
 to use function objects for the binding, so the Executor can be any
 mechanism which can invoke a function object.  This makes thread,
 thread_pool and other such classes Executors.

 Yes, it is a non-functional (stateful) model which allows efficient
 re-use of result objects when they are large, but complicates simple
 designs that could be better modeled as stateless functional programs.
 When there is an argument for re-using the result object, C++
 programmers tend to write void functions and pass the result by
 reference anyway.  There's a good reason people write functions
 returning non-void, though.  There's no reason to force them to twist
 their invocation model inside out just to achieve parallelism.

I *think* I understand what you're saying.  So, the interface would be
more something like:

future<double> f1 = thread_executor(foo, a, b, c);
thread_pool pool;
future<double> f2 = thread_pool_executor(pool, foo, d, e, f);
double d = f1.get() + f2.get();

This puts a lot more work on the creation of executors (they'll have to
obey a more complex interface design than just anything that can invoke a
function object), but I can see the merits.  Is this actually what you
had in mind?
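
For what it's worth, a rough modern rendering of that kind of helper
(thread_executor is the hypothetical name from the example above, and
std::async merely stands in for the executor; this is a sketch, not the
interface being proposed):

#include <future>
#include <utility>

template <typename F, typename... Args>
auto thread_executor(F f, Args... args)
{
    return std::async(std::launch::async, std::move(f), std::move(args)...);
}

// usage roughly matching the example:
//   auto f1 = thread_executor(foo, a, b, c);
//   double d = f1.get() + f2.get();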

 And there's other examples as well, such as RPC mechanisms.

 True.

 And personally, I find passing such a creation parameter to be
 turning the design inside out.

 A bit, yes.

 It turns _your_ design inside out, which might not be a bad thing for
 quite a few use cases ;-)

We're obviously not thinking of the same interface choices here.

 It might make things a little simpler for the default case, but it
 complicates usage for all the other cases.  With the design I
 presented every usage is treated the same.

 There's a lot to be said for making the default case very easy.

 Only if you have a clearly defined default case.  Someone doing a
 lot of client/server development might argue with you about thread
 creation being a better default than RPC calling, or even thread_pool
 usage.

 Yes, they certainly might.  Check out the systems that have been
 implemented in Erlang with great success and get back to me ;-)

Taking a chapter out of Alexander's book?

 More importantly, if you really don't like the syntax of my design,
 it at least allows you to *trivially* implement your design.

 I doubt most users regard anything involving typesafe varargs as
 trivial to implement.

 Well, I'm not claiming to support variadic parameters here.  I'm only
 talking about supporting a 0..N for some fixed N interface.

 That's what I mean by typesafe varargs; it's the best we can do in
 C++98/02.

  And with Boost.Bind already available, that makes other such
 interfaces trivial to implement.  At least usually.

 For an expert in library design familiar with the workings of boost
 idioms like ref(x), yes.  For someone who just wants to accomplish a
 task using threading, no.

Point taken.

 The suggestion that the binding occur at the time

Re: [boost] Re: A new boost::thread implementation?

2003-02-08 Thread William E. Kempf

David Abrahams said:
 Peter Dimov [EMAIL PROTECTED] writes:

 David Abrahams wrote:
 Peter Dimov [EMAIL PROTECTED] writes:

 With the above AsyncCall:

 async_call<int> f( bind(g, 1, 2) ); // can offer syntactic sugar here
 thread t(f); // or thread(f); for extra cuteness
 int r = f.result();

 The alternative seems to be

 async_call<int> f( bind(g, 1, 2) );
 int r = f.result();

 but now f is tied to boost::thread. A helper

 int r = async(g, 1, 2);

 Another alternative might allow all of the following:

 async_call<int> f(create_thread(), bind(g,1,2));
 int r = f();

 async_call<int> f(thread_pool(), bind(g,1,2));
 int r = f();

 Using an undefined-yet Executor concept for the first argument. This
 is not much different from

 async_call<int> f( bind(g, 1, 2) );
 // execute f using whatever Executor is appropriate
 int r = f.result();

 except that the async_call doesn't need to know about Executors.

 ...and that you don't need a special syntax to get the result out that
 isn't compatible with functional programming.  If you want to pass a
 function object off from the above, you need something like:

 bind(&async_call<int>::result, async_call<int>(bind(g, 1, 2)))

I think the light is dawning for me.  Give me a little bit of time to work
out a new design taking this into consideration.

 int r = async_call<int>(create_thread(), bind(g, 1, 2));

 int r = async(boost::thread(), g, 1, 2);
^^^

 int r = async_call<int>(rpc(some_machine), bind(g,1,2));

 int r = async_call<int>(my_message_queue, bind(g,1,2));

None of these make much sense to me.  You're executing the function object
in a supposedly asynchronous manner, but the immediate assignment to int
renders it a synchronous call.  Am I missing something again?

 All of these are possible with helper functions (and the int could
 be made optional.)

 Yup, note the line in the middle.

 I've my doubts about

 int r = async_call<int>(rpc(some_machine), bind(g,1,2));

 though. How do you envision this working? A local opaque function
 object can't be RPC'ed.

 It would have to not be opaque ;-)

 Maybe it's a wrapper over Python code that can be transmitted across the
 wire.  Anyway, I agree that it's not very likely.  I just put it in
 there to satisfy Bill, who seems to have some idea how RPC can be
 squeezed into the same mold as thread invocation ;-)

Ouch.  A tad harsh.  But yes, I do see this concept applying to RPC
invocation.  All that's required is the proxy that handles wiring the
data and the appropriate scaffolding to turn this into an Executor. 
Obviously this is a much stricter implementation than thread
creation... you can't just call any function here.

-- 
William E. Kempf
[EMAIL PROTECTED]


___
Unsubscribe  other changes: http://lists.boost.org/mailman/listinfo.cgi/boost



Re: [boost] Re: A new boost::thread implementation?

2003-02-08 Thread William E. Kempf

David Abrahams said:
 Peter Dimov [EMAIL PROTECTED] writes:
 The line in the middle won't work, actually, but that's another story.
 boost::thread() creates a handle to the current thread. ;-) Score
 another one for defining concepts before using them.

 Oh, I'm not up on the new interface.  How are we going to create a new
 thread?

Nothing new about the interface in this regard.  The default constructor
has always behaved this way.  New threads are created with the overloaded
constructor taking a function object.

BTW: I'm not opposed to changing the semantics or removing the default
constructor in the new design, since it's Copyable and Assignable.  If
there's reasons to do this, we can now switch to a self() method for
accessing the current thread.

-- 
William E. Kempf
[EMAIL PROTECTED]


___
Unsubscribe  other changes: http://lists.boost.org/mailman/listinfo.cgi/boost



RE: [boost] Re: A new boost::thread implementation?

2003-02-07 Thread William E. Kempf

Darryl Green said:
 -Original Message-
 From: William E. Kempf [mailto:[EMAIL PROTECTED]]

 Dave Abrahams said:
  Hm? How is the result not a result in my case?

 I didn't say it wasn't a result, I said that it wasn't only a
 result.
 In your case it's also the call.

 Regardless of whether it invokes the function the result must always be
 associated with the function at some point. It would be nice if this
 could be done at creation as per Dave's suggestion but providing
 behaviour like the futures alexander refers to. That is, bind the
 function, its parameters and the async_result into a single
 aync_call/future object that is a function/executable object. It can be
 passed to (executed by) a thread or a thread_pool (or whatever).

I'm not sure that binding the result and the function at creation time
is that helpful.  Actual results aren't that way.  This allows you to
reuse the result variable in multiple calls to different functions.  But
if people aren't comfortable with this binding scheme, I'm not opposed to
changing it.  Doing so *will* complicate things a bit, however, on the
implementation side.  So let me explore it some.

 It's not thread-creation in this case.  You don't create threads
 when
 you use a thread_pool.  And there's other examples as well, such as
 RPC
 mechanisms.  And personally, I find passing such a creation
 parameter to
 be turning the design inside out.

 But this doesn't (borrowing Dave's async_call syntax and Alexander's
 semantics (which aren't really any different to yours):

Dave's semantics certainly *were* different from mine (and the Futures
link posted by Alexander).  In fact, I see Alexander's post as
strengthening my argument for semantics different from Dave's.  Which
leaves us with my semantics (mostly), but some room left to argue the
syntax.

 async_call<double> later1(foo, a, b, c);
 async_call<double> later2(foo, d, e, f);
 thread_pool pool;
 pool.dispatch(later1);
 pool.dispatch(later2);
 d = later1.result() + later2.result();

You've not used Dave's semantics, but mine (with the variation of when you
bind).

 More importantly, if you really don't like the syntax of my design, it
 at
 least allows you to *trivially* implement your design.  Sometimes
 there's
 something to be said for being lower level.

 Well as a user I'd be *trivially* implementing something to produce the
 above. Do-able I think (after I have a bit of a look at the innards of
 bind), but it's hardly trivial.

The only thing that's not trivial with your syntax changes above is
dealing with the requisite reference semantics without requiring dynamic
memory allocation.  But I think I can work around that.  If people prefer
the early/static binding, I can work on this design.  I think it's a
little less flexible, but won't argue that point if people prefer it.

-- 
William E. Kempf
[EMAIL PROTECTED]


___
Unsubscribe  other changes: http://lists.boost.org/mailman/listinfo.cgi/boost



Re: [boost] Re: A new boost::thread implementation?

2003-02-07 Thread William E. Kempf

David Abrahams said:
 ...and if it can't be default-constructed?

 That's what boost::optional is for ;).

 Yeeeh. Once the async_call returns, you have a value, and should be able
 to count on it.  You shouldn't get back an object whose invariant allows
 there to be no value.

I'm not sure I can interpret the yeeeh part. Do you think there's still
an issue to discuss here?

 Second, and this is more important, you've bound this concept to
 boost::thread explicitly.  With the fully separated concerns of my
 proposal, async_result can be used with other asynchronous call
 mechanisms, such as the coming boost::thread_pool.

async_result<double> res1, res2;

 no fair - I'm calling it async_call now ;-)

thread_pool pool;
pool.dispatch(bind(res1.call(foo), a, b, c));
pool.dispatch(bind(res2.call(foo), d, e, f));
d = res1.value() + res2.value();

 This one is important.  However, there are other ways to deal with
 this.
  An async_call object could take an optional thread-creation
 parameter,
 for example.

 It's not thread-creation in this case.  You don't create threads
 when you use a thread_pool.

 OK, thread acquisition, then.

No, not even that.  An RPC mechanism, for instance, isn't acquiring a
thread.  And a message queue implementation wouldn't be acquiring a thread
either.  These are the two obvious (to me) alternatives, but the idea is
to leave the call/execute portion orthogonal and open.  Alexander was
quite right that this is similar to the Future concept in his Java link.
 The Future holds the storage for the data to be returned and provides
the binding mechanism for what actually gets called, while the Executor
does the actual invocation.  I've modeled the Future to use function
objects for the binding, so the Executor can be any mechanism which can
invoke a function object.  This makes thread, thread_pool and other such
classes Executors.
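
In other words, the separation looks roughly like this (illustrative sketch,
not a finished interface; rpc_channel is purely hypothetical):

    // The Future: owns the storage and produces the function object to run.
    async_result<double> res;

    // Any Executor that can invoke a function object will do:
    thread t(bind(res.call(foo), a, b, c));              // a dedicated thread
    // thread_pool pool; pool.dispatch(bind(res.call(foo), a, b, c));
    // rpc_channel rpc;  rpc.invoke(bind(res.call(foo), a, b, c));

    double d = res.value();   // blocks until some Executor has produced the value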

 And there's other examples as well, such as RPC mechanisms.

 True.

 And personally, I find passing such a creation parameter to be
 turning the design inside out.

 A bit, yes.

 It might make things a little simpler for the default case, but it
 complicates usage for all the other cases.  With the design I
 presented every usage is treated the same.

 There's a lot to be said for making the default case very easy.

Only if you have a clearly defined default case.  Someone doing a lot of
client/server development might argue with you about thread creation being
a better default than RPC calling, or even thread_pool usage.

 More importantly, if you really don't like the syntax of my design, it
 at least allows you to *trivially* implement your design.

 I doubt most users regard anything involving typesafe varargs as
 trivial to implement.

Well, I'm not claiming to support variadic parameters here.  I'm only
talking about supporting a 0..N for some fixed N interface.  And with
Boost.Bind already available, that makes other such interfaces trivial to
implement.  At least usually.  The suggestion that the binding occur at
the time of construction is going to complicate things for me, because it
makes it much more difficult to handle the reference semantics required
here.
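
That said, the early-binding form layers on top of the lower-level pieces
easily enough.  A sketch (async_call here is hypothetical, and it assumes the
res.call(f) interface discussed in this thread with f already bound to its
arguments):

    template <typename R>
    class async_call
    {
    public:
        template <typename F>
        explicit async_call(F f) : m_func(f) { }    // e.g. bind(foo, a, b, c)

        // Invoked by whatever executor runs us; stores the result.
        void operator()() { m_result.call(m_func)(); }

        R result() { return m_result.value(); }     // waits for the call to finish

    private:
        // Note: an executor that copies this object (e.g. pool.dispatch) needs
        // async_result to have shared (reference) semantics underneath -- the
        // very complication mentioned above.
        boost::function0<R> m_func;
        async_result<R> m_result;
    };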

 Sometimes there's something to be said for being lower level.

 Sometimes.  I think users have complained all along that the
 Boost.Threads library takes the "you can implement it yourself using our
 primitives" line way too much.  It's important to supply
 simplifying high-level abstractions, especially in a domain as
 complicated as threading.

OK, I actually believe this is a valid criticism.  But I also think it's
wrong to start at the top of the design and work backwards.  In other
words, I expect that we'll take the lower level stuff I'm building now and
use them as the building blocks for the higher level constructs later.  If
I'd started with the higher level stuff, there'd be things that you
couldn't accomplish.

  That's what we mean by the terms high-level and encapsulation
 ;-)

 Yes, but encapsulation shouldn't hide the implementation to the
 point that users aren't aware of what the operations actually are.
 ;)

 I don't think I agree with you, if you mean that the implementation
 should be apparent from looking at the usage.  Implementation details
 that must be revealed should be shown in the documentation.

 I was referring to the fact that you have no idea if the async call
 is being done via a thread, a thread_pool, an RPC mechanism, a simple
 message queue, etc.  Sometimes you don't care, but often you do.

 And for those cases you have a low-level interface, right?

Where's the low level interface if I don't provide it? ;)

-- 
William E. Kempf
[EMAIL PROTECTED]


___
Unsubscribe  other changes: http://lists.boost.org/mailman/listinfo.cgi/boost



Re: [boost] Re: A new boost::thread implementation?

2003-02-06 Thread William E. Kempf


  double d;
  thread<double> t = spawn(foo)(a,b,c);
  // do something else
  d = t.return_value();

 A solution like this has been proposed before, but I don't like it.
 This creates multiple thread types, instead of a single thread type.
 I think this will only make the interface less convenient, and will
 make the implementation much more complex.  For instance, you now must
 have a separate thread_self type that duplicates all of thread
 except for the data type specific features.  These differing types
 will have to compare to each other, however.

 Make the common part a base class. That's how the proposal I sent you
 does  it :-)

Simplifies the implementation, but complicates the interface.

 async_result<double> res;
 thread t(bind(res.call(), a, b, c));
 // do something else
 d = res.value();  // Explicitly waits for the thread to return a
 value?

 This does the same, indeed. Starting a thread this way is just a little
 more complex (and -- in my view -- less obvious to read) than writing
   thread t = spawn(foo)(a,b,c);

Not sure I agree about "less obvious to read".  If your syntax had been
   thread t = spawn(foo, a, b, c);
I think you'd have a bit more of an argument here.  And I certainly could
fold the binding directly into boost::thread so that my syntax would
become:

thread t(res.call(), a, b, c);

I could even eliminate the .call() syntax with some implicit
conversions, but I dislike that for the obvious reasons.  I specifically
chose not to include syntactic binding in boost::thread a long time ago,
because I prefer the explicit separation of concerns.  So, where you think
my syntax is less obvious to read, I think it's more explicit.

 But that's just personal opinion, and I'm arguably biased :-)

As am I :).

 Hopefully you're not duplicating efforts here, and are using
 Boost.Bind and Boost.Function in the implementation?

 Actually it does duplicate the work, but not because I am stubborn. We
 have an existing implementation for a couple of years, and the present
 version just evolved from this. However, there's a second point: when
 starting threads, you have a relatively clear picture as to how long
 certain objects are needed, and one can avoid several copying steps if
 one  does some things by hand. It's short anyway, tuple type and tie
 function  are your friend here.

I'm not sure how you avoid copies here.  Granted, the current
implementation isn't optimized in this way, but it's possible for me to
reduce the number of copies down to what I think would be equivalent to a
hand coded implementation.

  thread t = spawn(foo)(a,b,c);
  t.yield ();// oops, who's going to yield here?

 You shouldn't really ever write code like that.  It should be
 thread::yield().  But even if you write it the way you did, it will
 always be the current thread that yields, which is the only thread
 that can.  I don't agree with separating the interfaces here.

 I certainly know that one shouldn't write the code like this. It's just
 that this way you are inviting people to write buglets. After all, you
 have (or may have in the future) functions
   t->kill ();
   t->suspend ();
 Someone sees that there's a function yield() but doesn't have the time
 to read the documentation, what will he assume yield() does?

How does someone see that there's a function yield() without also
seeing that it's static?  No need to read documentation for that, as it's
an explicit part of the functions signature.

 If there's a way to avoid such invitations for errors, one should use
 it.

I understand the theory behind this, I've just never seen a real world
case where someone's been bitten in this way.  I know I never would be. 
So I don't find it very compelling.  But as I said elsewhere, I'm not so
opposed as to not consider making these free functions instead because of
this reasoning.  I would be opposed to another class, however, as I don't
think that solves anything, but instead makes things worse.

-- 
William E. Kempf
[EMAIL PROTECTED]


___
Unsubscribe  other changes: http://lists.boost.org/mailman/listinfo.cgi/boost



Re: [boost] Re: A new boost::thread implementation?

2003-02-06 Thread William E. Kempf

 From: Wolfgang Bangerth [EMAIL PROTECTED]
 This does the same, indeed. Starting a thread this way is just a
 little more complex (and -- in my view -- less obvious to read) than
 writing
   thread t = spawn(foo)(a,b,c);

 But that's just personal opinion, and I'm arguably biased :-)

 I'm sure I'm never biased <wink>, and I tend to like your syntax better.
 However, I recognize what Bill is concerned about.  Let me suggest a
 compromise:

   async_result<double> later = spawn(foo)(a, b, c);

Mustn't use the name spawn() here.  It implies a thread/process/whatever
has been spawned at this point, which is not the case.  Or has it (he says
later, having read on)?

   ...
   thread& t = later.thread();

The thread type will be Copyable and Assignable soon, so no need for the
reference.  Hmm... does this member indicate that spawn() above truly did
create a thread that's stored in the async_result?  Hmm... that would be
an interesting alternative implementation.  I'm not sure it's as obvious
as the syntax I suggested, as evidenced by the questions I've raised here,
but worth considering.  Not sure I care for spawn(foo)(a, b, c) though. 
I personally still prefer explicit usage of Boost.Bind or some other
binding/lambda library.  But if you want to hide the binding, why not
just spawn(foo, a, b, c)?

And if we go this route, should we remove the boost::thread constructor
that creates a thread in favor of using spawn() there as well?

   thread t = spawn(foo, a, b, c);

   // do whatever with t
   ...
   double now = later.join();  // or later.get()

 You could also consider the merits of providing an implicit conversion
 from async_result<T> to T.

The merits, and the cons, yes.  I'll be considering this carefully at some
point.

 This approach doesn't get the asynchronous call wound up with the
 meaning of the thread concept.

If I fully understand it, yes it does, but to a lesser extent.  What I
mean by this is that the async_result hides the created thread (though you
do get access to it through the res.thread() syntax).  I found this
surprising enough to require careful thought about the FULL example you
posted to understand this.

-- 
William E. Kempf
[EMAIL PROTECTED]


___
Unsubscribe  other changes: http://lists.boost.org/mailman/listinfo.cgi/boost



Re: [boost] Re: A new boost::thread implementation?

2003-02-06 Thread William E. Kempf

Dave Abrahams said:

 Hmm... that would be
 an interesting alternative implementation.  I'm not sure it's as
 obvious as the syntax I suggested

 Sorry, IMO there's nothing obvious about your syntax.  It looks
 cumbersome and low-level to me.  Let me suggest some other syntaxes for
 async_result, though:

 async_call<double> later(foo, a, b, c)

 or, if you don't want to duplicate the multi-arg treatment of bind(),
 just:

 async_call<double> later(bind(foo, a, b, c));
 ...
 ...
 double d = later(); // call it to get the result out.

The two things that come to mind for me with this suggestion are:

1) You've explicitly tied the result into the call.  I chose the other
design because the result is just that, only a result.  An asynchronous
call can be bound to this result more than once.

2) You're still hiding the thread creation.  This is a mistake to me for
two reasons.  First, it's not as obvious that a thread is being created
here (though the new names help a lot).  Second, and this is more
important, you've bound this concept to boost::thread explicitly.  With
the fully separated concerns of my proposal, async_result can be used with
other asynchronous call mechanisms, such as the coming boost::thread_pool.

   async_result<double> res1, res2;
   thread_pool pool;
   pool.dispatch(bind(res1.call(foo), a, b, c));
   pool.dispatch(bind(res2.call(foo), d, e, f));
   d = res1.value() + res2.value();

 I like the first one better, but could understand why you'd want to go
 with the second one.  This is easily implemented on top of the existing
 Boost.Threads interface.  Probably any of my suggestions is.

Yes, all of the suggestions which don't directly modify boost::thread are
easily implemented on top of the existing interface.

 as evidenced by the questions I've raised here,

 Can't argue with user confusion I guess ;-)

 but worth considering.  Not sure I care for spawn(foo)(a, b, c)
 though. I personally still prefer explicit usage of Boost.Bind or some
 other binding/lambda library.  But if you want to hide the binding,
 why not just spawn(foo, a, b, c)?

 Mostly agree; it's just that interfaces like that tend to obscure which
 is the function and which is the argument list.

OK.  That's never bothered me, though, and is not the syntax used by
boost::bind, so I find it less appealing.

  This approach doesn't get the asynchronous call wound up with the
 meaning of the thread concept.

 If I fully understand it, yes it does, but to a lesser extent.  What
 I mean by this is that the async_result hides the created thread
 (though you do get access to it through the res.thread() syntax).

 That's what we mean by the terms high-level and encapsulation ;-)

Yes, but encapsulation shouldn't hide the implementation to the point that
users aren't aware of what the operations actually are. ;)

But I'll admit that some of my own initial confusion on this particular
case probably stems from having my brain focused on implementation details.

 I found this
 surprising enough to require careful thought about the FULL example
 you posted to understand this.

 Like I said, I can't argue with user confusion.  Does the name
 async_call help?

Certainly... but leads to the problems I addressed above.  There's likely a
design that will satisfy all concerns, however, that's not been given yet.

-- 
William E. Kempf
[EMAIL PROTECTED]


___
Unsubscribe  other changes: http://lists.boost.org/mailman/listinfo.cgi/boost



Re: [boost] Re: A new boost::thread implementation?

2003-02-06 Thread William E. Kempf

Dave Abrahams said:
 On Thursday, February 06, 2003 12:33 PM [GMT+1=CET],
 William E. Kempf [EMAIL PROTECTED] wrote:

 Dave Abrahams said:

   Hmm... that would be
   an interesting alternative implementation.  I'm not sure it's as
 obvious as the syntax I suggested
 
  Sorry, IMO there's nothing obvious about your syntax.  It looks
 cumbersome and low-level to me.  Let me suggest some other syntaxes
 for async_result, though:
 
  async_call<double> later(foo, a, b, c)
 
  or, if you don't want to duplicate the multi-arg treatment of
 bind(), just:
 
  async_call<double> later(bind(foo, a, b, c));
  ...
  ...
  double d = later(); // call it to get the result out.

 The two things that come to mind for me with this suggestion are:

 1) You've explicitly tied the result into the call.  I chose the other
 design because the result is just that, only a result.

 Hm? How is the result not a result in my case?

I didn't say it wasn't a result, I said that it wasn't only a result. 
In your case it's also the call.

 An asynchronous call can be bound to this result more than once.

 ...and if it can't be default-constructed?

That's what boost::optional is for ;).
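
That is, the result holder keeps its storage empty until the call completes;
something like this inside (sketch of just the relevant members):

    boost::optional<R> m_value;   // never requires a default-constructible R

    R value()
    {
        // ... wait until the call has completed ...
        return *m_value;          // safe: the call has stored a value by now
    }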

 2) You're still hiding the thread creation.

 Absolutely.  High-level vs. low-level.

But I think too high-level.  I say this, because it ties you solely to
thread creation for asynchronous calls.

 This is a mistake to me for
 two reasons.  First, it's not as obvious that a thread is being
 created here (though the new names help a lot).

 Unimportant, IMO.  Who cares how an async_call is implemented under the
 covers?

I care, because of what comes next ;).

 Second, and this is more
 important, you've bound this concept to boost::thread explicitly.
 With the fully separated concerns of my proposal, async_result can be
 used with other asynchronous call mechanisms, such as the coming
 boost::thread_pool.

async_result<double> res1, res2;
thread_pool pool;
pool.dispatch(bind(res1.call(foo), a, b, c));
pool.dispatch(bind(res2.call(foo), d, e, f));
d = res1.value() + res2.value();

 This one is important.  However, there are other ways to deal with this.
  An async_call object could take an optional thread-creation parameter,
 for example.

It's not thread-creation in this case.  You don't create threads when
you use a thread_pool.  And there's other examples as well, such as RPC
mechanisms.  And personally, I find passing such a creation parameter to
be turning the design inside out.  It might make things a little simpler
for the default case, but it complicates usage for all the other cases. 
With the design I presented every usage is treated the same.

More importantly, if you really don't like the syntax of my design, it at
least allows you to *trivially* implement your design.  Sometimes there's
something to be said for being lower level.

  That's what we mean by the terms high-level and encapsulation
 ;-)

 Yes, but encapsulation shouldn't hide the implementation to the point
 that users aren't aware of what the operations actually are. ;)

 I don't think I agree with you, if you mean that the implementation
 should be apparent from looking at the usage.  Implementation details
 that must be revealed should be shown in the documentation.

I was referring to the fact that you have no idea if the async call is
being done via a thread, a thread_pool, an RPC mechanism, a simple message
queue, etc.  Sometimes you don't care, but often you do.

-- 
William E. Kempf
[EMAIL PROTECTED]


___
Unsubscribe  other changes: http://lists.boost.org/mailman/listinfo.cgi/boost



Re: [boost] A new boost::thread implementation?

2003-02-05 Thread William E. Kempf
  relationship  between  an
 instance  of  a
boost::thread and an instance of  a boost::thread::self, i.e.  a
 thread which starts yet another thread shall not be required to wait
 for that other thread and termination  of the  first thread  shall
 not lead  to termination  of the other thread.  This  means that an
 instance of a boost::thread  can go out of scope without affecting
 the associated instance of a boost::thread::self.

 f. It shall be possible to send extra information, as an optional extra
 argument
to the  boost::thread ctor, to the created  thread.
 boost::thread::self shall offer a method for retrieving this extra
 information. It is not required that this information be passed in a
 type-safe manner, i.e. void* is okay.

No, void* is NOT okay.  And again, the current design facilitates passing
data in a _typesafe_ manner, and with no restrictions on the number of
parameters.  When combined with Boost.Bind it makes parameter passing
much simpler than would be possible with a void* design, not to mention
the typesafety.  (Have I mentioned that typesafety is important?)

 g. It shall  be possible for a thread  to exit with a return value.  It
 shall be
possible for  the creating side to  retrieve, as a return  value from
 join(), that  value. It is  not required  that this  value be  passed
 in  a type-safe manner, i.e. void* is okay.

Again, I fully disagree with any void* design.  And return values are
possible today, via the same ideas used for passing data, though there's
no convenience library like Boost.Bind to help here.  Simplifying this
is planned, however.

 h. Explicit termination of a thread, i.e. by calling
 boost::thread::self::exit()
shall not lead to any resource-leaks.

Only possible by throwing an exception, and you don't need library support
for this.  I'm not saying that I won't add an exit(), only that there's
not a compelling reason for it, other than completeness of the library.
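
What I mean is that a user can already get exit()-like behaviour with an
ordinary exception (sketch; thread_exit and do_work are the user's own names,
not library ones):

    struct thread_exit { };            // user-defined marker exception

    void do_work();                    // may throw thread_exit deep in the call stack

    void thread_func()                 // the function object handed to boost::thread
    {
        try
        {
            do_work();
        }
        catch (const thread_exit&)
        {
            // Stack unwinding has already run the destructors; simply return,
            // which ends the thread without leaking resources.
        }
    }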

 i. Creation of  a new thread of  execution shall not require calls  to
 any other
method than the boost::thread ctor.

True today, so I don't know why you have this requirement here.

 j. The header file shall not expose any implementation specific details.

It doesn't, today.  I realize you're referring to PIMPL here, but that
doesn't mean that the current implementation gives users access to
implementation details.  And again, PIMPL has been discussed and
discarded.  Sorry.

 Some additional features I would like to see.

 k. It should be possible to control  the behavior of a new thread, e.g.
 schedule
policy, scheduling priority and contention scope.

Initial design in the thread_dev branch.

 l. It should be  possible to cancel a thread and for a  thread to
 control how it
shall respond to cancellations.

Again, thread_dev branch.

 m. Failure  in any operation should  be reported by throwing
 exceptions, not by
assertions.

*chuckles*  Hot topic.  You really don't want to open that can of worms. 
But the current library does throw exceptions.  Assertions are used only
for debugging diagnostics on things that simply should never happen unless
there's a bug in the Boost.Threads library.  (That said, it's possible
I've missed some error conditions that should be exceptions, especially on
the Win32 platforms where error conditions aren't documented.)

-- 
William E. Kempf
[EMAIL PROTECTED]


___
Unsubscribe  other changes: http://lists.boost.org/mailman/listinfo.cgi/boost



Re: [boost] Re: A new boost::thread implementation?

2003-02-05 Thread William E. Kempf


 Hi Ove,

 f. It shall be possible to send extra information, as an optional
 extra argument
to the  boost::thread ctor, to the created  thread.
 boost::thread::self shall offer a method for retrieving this extra
 information. It is not required that this information be passed in
 a type-safe manner, i.e. void* is okay.

 g. It shall  be possible for a thread  to exit with a return value.
 It shall be
possible for  the creating side to  retrieve, as a return  value
 from join(), that  value. It is  not required  that this  value be
 passed in  a type-safe manner, i.e. void* is okay.

 j. The header file shall not expose any implementation specific
 details.

 Incidentally, I have a scheme almost ready that does all this. In
 particular, it allows you to pass any number of parameters to the new
 thread, and to return any type. Both happen in a type-safe
 fashion, i.e. wherever you would call a function serially like
 double d = foo(a, b, c);
 you can now call it like
 double d;
 thread<double> t = spawn(foo)(a,b,c);
 // do something else
 d = t.return_value();

A solution like this has been proposed before, but I don't like it.  This
creates multiple thread types, instead of a single thread type.  I think
this will only make the interface less convenient, and will make the
implementation much more complex.  For instance, you now must have a
seperate thread_self type that duplicates all of thread except for the
data type specific features.  These differing types will have to compare
to each other, however.

I don't feel that this sort of information belongs in the thread object. 
It belongs in the thread function.  This already works very nicely for
passing data, we just need some help with returning data.  And I'm working
on that.  The current idea would be used something like this:

async_result<double> res;
thread t(bind(res.call(), a, b, c));
// do something else
d = res.value();  // Explicitly waits for the thread to return a value?

Now thread remains type-neutral, but we have the full ability to both pass
and return values in a type-safe manner.
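
A bare-bones sketch of what might sit underneath (illustrative only; it
ignores exception propagation, and for brevity it takes an already-bound
nullary functor rather than letting bind() supply the arguments outside as in
the usage above):

    #include <boost/optional.hpp>
    #include <boost/thread/condition.hpp>
    #include <boost/thread/mutex.hpp>

    template <typename R>
    class async_result
    {
    public:
        async_result() : m_done(false) { }

        // call(f) packages f; invoking the returned object runs f and stores
        // its result.  Copies of the caller all refer back to this object.
        template <typename F>
        struct caller
        {
            caller(async_result* r, F f) : m_res(r), m_func(f) { }
            void operator()() const { m_res->set(m_func()); }
            async_result* m_res;
            F m_func;
        };

        template <typename F>
        caller<F> call(F f) { return caller<F>(this, f); }

        R value()                          // blocks until the result is available
        {
            boost::mutex::scoped_lock lock(m_mutex);
            while (!m_done)
                m_cond.wait(lock);
            return *m_value;
        }

    private:
        void set(const R& r)
        {
            boost::mutex::scoped_lock lock(m_mutex);
            m_value = r;
            m_done = true;
            m_cond.notify_all();
        }

        boost::mutex m_mutex;
        boost::condition m_cond;
        boost::optional<R> m_value;        // no default construction of R needed
        bool m_done;
    };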

 Argument and return types are automatically deduced, and the number of
 arguments are only limited by the present restriction on the number of
 elements in boost::tuple (which I guess is 10). Conversions between
 types  are performed in exactly the same way as they would when calling
 a  function serially. Furthermore, it also allows calling member
 functions  with some object, without the additional syntax necessary to
 tie object  and member function pointer together.

Hopefully you're not duplicating efforts here, and are using Boost.Bind
and Boost.Function in the implementation?

 I attach an almost ready proposal to this mail, but rather than
 steamrolling the present author of the threads code (William Kempf), I
 would like to discuss this with him (and you, if you like) before
 submitting it as a proposal to boost.

Give me a couple of days to have the solution above implemented in the dev
branch, and then argue for or against the two designs.

 Let me add that I agree with all your other topics, in particular the
 separation of calling/called thread interface, to prevent accidents like
 thread t = spawn(foo)(a,b,c);
 t.yield ();// oops, who's going to yield here?

You shouldn't really ever write code like that.  It should be
thread::yield().  But even if you write it the way you did, it will always
be the current thread that yields, which is the only thread that can.  I
don't agree with separating the interfaces here.

 I would be most happy if we could cooperate and join efforts.

Certainly.

-- 
William E. Kempf
[EMAIL PROTECTED]


___
Unsubscribe  other changes: http://lists.boost.org/mailman/listinfo.cgi/boost



Re: [boost] A new boost::thread implementation?

2003-02-05 Thread William E. Kempf

 On Wednesday, February 05, 2003 3:04 PM [GMT+1=CET],
 William E. Kempf [EMAIL PROTECTED] wrote:

  What I would  like to see is a new boost::thread  implementation
 which meets the following requirements.
 
  a. There shall be two interfaces to a thread. One for creation of a
 thread, from
 here on called  boost::thread. And, one for the created  thread,
 from
  here on called boost::thread::self.

 Self?  Why?  If it's an operation that can *only* be made on the
 current thread, then a static method is a better approach.  Otherwise,
 I could make a self instance and pass it to another thread, which
 could then attempt an operation that's not valid for calling on
 another thread.

 It would seem to me that, given the availability of p->yield() as a
 syntax for invoking a static function, it'd be better to use a
 namespace-scope function to avoid errors and for clarity.

OK, I can buy that over a separate self class.  This was discussed at one
point, but the particular issue with p->yield() was never brought up.  I'm
not sure I find it compelling, because which thread yields should be
evident from the documentation, and I don't see anyone ever using this
syntax.  But compelling or not, I'm not opposed to making this a free
function if others think it's clearer in this regard.

-- 
William E. Kempf
[EMAIL PROTECTED]


___
Unsubscribe  other changes: http://lists.boost.org/mailman/listinfo.cgi/boost



Re: [boost] Re: Thread library with BOOST_HAS_PTHREAD

2003-02-03 Thread William E. Kempf

 Alexander Terekhov writes:

 Shimshon Duvdevan wrote:

 [ ... Solaris - PTHREAD_SCOPE_SYSTEM ... ]

 Can anyone verify the supposed boost threads library behavior on a
 multi-processor Solaris machine? Is this behavior the intended one?
 Perhaps a bug fix is necessary.

 That's Solaris' bug and actually, they've already kinda-fixed it
 recently. More info on this can be found in the c.p.t.(*) archive on
 google.

 So, if upgrading Solaris is not an option, I should patch boost.threads
 each time a new version is out? Isn't it easier to add a couple of
 #ifdefs? :)

Supply me with a proper patch, that works both before and after a
Solaris upgrade (i.e. let's not have extra code when it's not needed), and
I'll apply it to the library.

-- 
William E. Kempf
[EMAIL PROTECTED]


___
Unsubscribe  other changes: http://lists.boost.org/mailman/listinfo.cgi/boost



[boost] Complex testing requirements

2003-01-30 Thread William E. Kempf
One of the many things I'm attempting to do right now is to improve the
testing of Boost.Threads.  I'd really like to use a more complex testing
system than seems available with the current Boost tools.  Or maybe I'm
wrong, and it is possible.  Here's a description of my requirements.

* Test cases/suites need to be defined in a tree hierarchy, where branches
are never run if the parent test doesn't pass.

* These test cases may include tests for compilation and/or link
success/failure.

* By default I would like to build and test the entire tree of test
cases/suites, obeying the first rule I provided above, but there should be
a way to build and test a single test case/suite with out regard to the
parent (i.e. force a build and test of a single suite with out regard to
any others in the tree).

The reasoning behind these requirements:

* Many tests in Boost.Threads depend on portions of the library that aren't
being directly tested.  For instance, to fully test mutex locks I need to
create some threads, so it's pointless to run these tests if the test for
thread creation fails.  In fact, there are cases where deadlocks could be
possible because of this, and coding around those cases is a lot of work
just for creating a test case.

* The total regression test is fairly extensive (and will be even more so
before we're done), so running the full test when debugging a single
concept would be too time consuming.  Thus the need for running a single
test case without regard for any dependencies on other test cases (where
it's up to the user to ensure dependent portions are working).

I could manage to implement the test dependency tree if I were to put
all tests into a single program.  However, this doesn't allow for
compilation/linking tests, and doesn't address the ability to quickly
compile and test a single test case/suite.

I can continue on with out these capabilities, but I'd like to know if I'm
missing something that would allow me to do all of this with the tools as
they are today, or if there's enough interest in these capabilities for
them to be addressed in our testing tools in the future.
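
To make the tree idea concrete, this is roughly the structure I have in mind
if everything were forced into a single program (sketch only; it's not
Boost.Test code, and it obviously can't cover the compile/link-failure tests):

    #include <cstddef>
    #include <iostream>
    #include <vector>

    // One node per test case/suite; children run only if the parent passes.
    struct test_node
    {
        const char* name;
        bool (*run)();                        // returns true on success
        std::vector<test_node*> children;
    };

    void run_tree(const test_node& node)
    {
        if (!node.run())
        {
            std::cout << node.name << " failed; skipping dependent tests\n";
            return;
        }
        for (std::size_t i = 0; i < node.children.size(); ++i)
            run_tree(*node.children[i]);
    }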

William E. Kempf
[EMAIL PROTECTED]


___
Unsubscribe  other changes: http://lists.boost.org/mailman/listinfo.cgi/boost



Re: Re: [boost] Problem with boost::bind and windows api calls

2003-01-30 Thread William E. Kempf

Peter Dimov said:
 From: DudeSan [EMAIL PROTECTED]
  No, this won't work. boost::bind returns a function object, an
 object
 with
  operator() defined, not a function pointer. You can't use bind() to
 create
  a function pointer.

 So, are there any suggestions or ideas that I could use?

 I'm trying to make the wndProc point at a member function. I've got it
  to work with functors but I figured boost might get handy since I'm
 converting a class::function to a function.

 A wndProc is a function pointer. It can't be made to point to a
 nonstatic member function; these need a pointer to the object (this)
 as an implicit first argument.

 Are you sure you got it to work with functors?

 The usual solutions are either using Set/GetWindowLong(hWnd,
  GWL_USERDATA) to store a pointer to the object, or using a std::map<HWND,
  boost::function<HWND, UINT, WPARAM, LPARAM> >.

Another interesting solution is to use a thunk, much like is done in
ATL.  This technique could be trivially used to invoke functors.
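
For reference, the simpler (non-thunk) route mentioned above looks roughly
like this (sketch; window creation is omitted and the class/handler names are
illustrative):

    #include <windows.h>

    class widget
    {
    public:
        LRESULT handle_message(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp);

        static LRESULT CALLBACK wnd_proc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
        {
            // A widget* was stored with SetWindowLong(hwnd, GWL_USERDATA, ...)
            // right after the window was created; recover it and forward.
            widget* self =
                reinterpret_cast<widget*>(::GetWindowLong(hwnd, GWL_USERDATA));
            if (self)
                return self->handle_message(hwnd, msg, wp, lp);
            return ::DefWindowProc(hwnd, msg, wp, lp);
        }
    };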

William E. Kempf
[EMAIL PROTECTED]


___
Unsubscribe  other changes: http://lists.boost.org/mailman/listinfo.cgi/boost


