On Oct 25, 2006, at 5:11 PM, Bryan Sant wrote:

On 10/25/06, Levi Pearson <[EMAIL PROTECTED]> wrote:
On Oct 25, 2006, at 11:51 AM, Bryan Sant wrote:
You're conflating two different problems here.  First, there is the

Na uh, you are one.


I am one? I am one what? Are you disagreeing that memory protection is orthogonal to concurrency?


This same thing can be achieved by using threads and...  Drum roll...
Not accessing shared resources.  Threads of this nature are called
"worker threads" and serve the same purpose as a child process.  Spawn
a separate thread and let that sucker run.  No locks, no shared memory
access, just a dumb worker.  Threads shine in the face of a separate
process when you have to do a lot of interaction between two or more
threads.  However, just because threads are good at interaction via
shared resources and locks doesn't mean you HAVE to use that feature.
So just because you're using threads doesn't mean you're having to
grapple with all of these insane locking/race/dead-lock conditions.
Though I think that managing locks is simple, and those who can't
manage them are weak-minded.
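To make that concrete, here is a rough sketch of such a shared-nothing worker thread in Java (the ReportWorker class and its details are invented for the example):

import java.util.ArrayList;
import java.util.List;

public class ReportWorker implements Runnable {
    // Private copy of the input; nothing here is shared with any other thread.
    private final List<String> lines;

    public ReportWorker(List<String> lines) {
        // Defensive copy, so the caller and the worker never touch the same list.
        this.lines = new ArrayList<String>(lines);
    }

    public void run() {
        // All work is done on thread-local data -- no locks needed anywhere.
        for (String line : lines) {
            System.out.println("processed: " + line.toUpperCase());
        }
    }

    public static void main(String[] args) {
        List<String> input = new ArrayList<String>();
        input.add("alpha");
        input.add("beta");

        // Spawn the worker and let it run; the parent never inspects its state.
        new Thread(new ReportWorker(input)).start();
    }
}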


Of course you can choose not to take advantage of dangerous features. That's not the point. You're saying that threads are the best solution to all problems. I'm saying there are alternative concurrency models that provide additional safety from inadvertent errors. I don't see what your problem with this idea is.

First of all, you must remember to put locks in all the right
places.  Some higher level languages help out quite a bit with this,

Or you just dip into the vast "thread-safe" libraries that come with
your runtime.  All common thread-safety issues are handled by your
data structures et al.  I can't speak for other lesser languages, but
threading is a cakewalk in Java due to the great threading support
built into the language.

List myList = new Vector(); // Thread safe. (uses locks on reads and writes)

Keeping all of my shared data in a List or a thread-safe Map
(Hashtable) ensures I have data integrity between threads, and I'm not
having to explicitly mess around with locks if I don't want to.  If I
do want to, there is a built-in "synchronized" keyword in Java that
makes lock management s-i-m-p-l-e.
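A rough sketch of both mechanisms just described, with the SharedCounter class invented for the example -- the Vector handles its own locking, and synchronized covers the one compound operation that needs an explicit critical section:

import java.util.List;
import java.util.Vector;

public class SharedCounter {
    // Vector synchronizes every read and write internally.
    private final List<String> events = new Vector<String>();

    private int count = 0;

    public void record(String event) {
        events.add(event);           // safe with no explicit lock at all

        // A compound read-modify-write needs a critical section; the
        // built-in synchronized keyword provides one in a single line.
        synchronized (this) {
            count = count + 1;
        }
    }

    public synchronized int getCount() {
        return count;
    }
}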

Right. Like I said, some high-level languages (read: Java) help out a lot.



but if you're doing raw pthreads in C, it's pretty easy to screw up

Right.  Don't use threads if you're using C/C++.  Do use threads when
using a higher-level language.  Threads can do everything a forked
process can do, but a process can't do everything a thread can do.  So
stick with threads if you're cool like me.

One can always choose the best tool for the job, too. Assembly language can express any computation that Java can, yet sometimes we choose to write programs in Java, probably because of the increased safety, abstractions, and other good things.


[Even a single high-level statement] consists of many machine
operations, so the scheduler could switch to
a different thread /in the middle/ of that operation.  Doing shared-
memory concurrency safely in a high-level language requires a lot of
information about the implementation of that language, which kind of
defeats the purpose.
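To make that concrete, a small sketch (not part of the original exchange) of how a single counter++ statement decomposes into a read, an add, and a write that the scheduler can interleave across threads:

public class LostUpdate {
    static int counter = 0;   // shared and deliberately unsynchronized

    public static void main(String[] args) throws InterruptedException {
        Runnable work = new Runnable() {
            public void run() {
                for (int i = 0; i < 100000; i++) {
                    counter++;   // read, add, write -- a switch can land in between
                }
            }
        };

        Thread a = new Thread(work);
        Thread b = new Thread(work);
        a.start();
        b.start();
        a.join();
        b.join();

        // Frequently prints less than 200000 because increments were lost.
        System.out.println(counter);
    }
}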

I don't understand how that defeats the purpose.  Please explain.

The point of abstraction is to hide irrelevant details. If it ends up hiding important details instead, it creates more headaches and bugs than it prevents. See the following:

http://www.cs.umd.edu/~pugh/java/memoryModel/DoubleCheckedLocking.html

I believe the Java memory model has been fixed now; I'm sure you'd know better than I do.
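For reference, the idiom that article dissects looks roughly like this (sketched from the article's description; the class names are illustrative). Under the original Java memory model, a second thread could observe a non-null helper whose fields had not yet been written:

public class LazyHolder {
    private Helper helper = null;

    // The classic broken idiom: the unlocked first check races with the
    // write inside the lock, because the reference may become visible to
    // another thread before the Helper's fields are fully initialized.
    public Helper getHelper() {
        if (helper == null) {                  // first check, no lock held
            synchronized (this) {
                if (helper == null) {          // second check, lock held
                    helper = new Helper();     // publication can be reordered
                }
            }
        }
        return helper;
    }
}

class Helper {
    int value = 42;   // may not be visible to a thread that sees the reference
}

Under the revised memory model that shipped with Java 5, declaring the field volatile makes the idiom work, which matches the note above that the model has since been fixed.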


Second, you are hampered in your ability to create new abstractions.
When multiple shared resources are involved, you must be careful to
obtain and release the locks in the correct order.  This is a pain;
it creates concerns that cross abstraction barriers and is generally
an impediment to good software design practices.

I completely disagree.  You can design much cleaner software with
minor interaction between threads via locks and shared resources
versus child processes and marshaled messages.  Keep your interaction
via locks and thread-safe shared resources to a minimum, but go ahead
and use that ability.  It isn't a big deal.  You make it out to be a
monumental task that /hampers one's ability to create new
abstractions/.  That's nonsense.

I'll assume you've momentarily forgotten about the non-composability of locks. You can't take two arbitrary, correct pieces of lock-based software, compose them, and assume the result is correct. This describes the problem better than I could:

http://acmqueue.com/modules.php?name=Content&pa=showpage&pid=332&page=3
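The standard illustration is a pair of bank accounts; the Account class below is invented for the example. Each account is individually thread-safe, yet composing two correct operations into a transfer is either not atomic or, if both locks are taken, prone to deadlock:

public class Account {
    private long balance;

    public Account(long balance) { this.balance = balance; }

    public synchronized void deposit(long amount)  { balance += amount; }
    public synchronized void withdraw(long amount) { balance -= amount; }

    // Composing two individually-correct operations is no longer correct:
    // between the withdraw and the deposit the money is in neither account,
    // and another thread can observe that intermediate state.
    public static void transferUnsafe(Account from, Account to, long amount) {
        from.withdraw(amount);
        to.deposit(amount);
    }

    // Holding both locks closes that window but adds a new hazard:
    // transfer(a, b) and transfer(b, a) running concurrently can each
    // take one lock and then wait forever for the other.
    public static void transferDeadlockProne(Account from, Account to, long amount) {
        synchronized (from) {
            synchronized (to) {
                from.withdraw(amount);
                to.deposit(amount);
            }
        }
    }
}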


Horrors!  Locks serialize things?  That's why you scope your locks to
be very specific.  Or, as stated before, if you're afraid of locks and
shared resource interaction, then you can always be a coward and use a
thread just like a child process with no interaction at all (or only
via some marshaled method).


This isn't a matter of fear, it's a matter of managing complexity. It is very easy to keep your program safe and correct by over-protecting your resources. That will, however, slow the program down. As you make your locks increasingly fine-grained, it becomes increasingly difficult to reason about the correctness of your program. This is just the way it is, whether you are cowardly or brave. Turning down tools that help manage this complexity is neither brave nor cowardly; it is stupid.


In that case, C and Lisp should demand better tools for threading.


Indeed, yes. And there are libraries in C and Lisp that implement the ideas I've been talking about, because I'm just repeating ideas I've read from people far smarter and more experienced than I am, and quite possibly smarter than you, hard as that may be for you to believe.

That's true by virtue of the fact that a thread can be used just like
a child process but the reverse is not true.  Threads give you the
option to touch shared data -- not an obligation to do so.  Child
processes restrict your options.


Encapsulation via private class members restricts your options, too. So does restricting memory allocation and deallocation to the runtime system. You seem to be okay with those restrictions, which were largely added for reasons of safety and maintainability of software. Safer concurrency models provide a similar tradeoff.


You have also left out one important option from your list, though:
threads that by default share nothing, but can explicitly ask for
regions of memory to be shared.  Combine that with software
transactional memory (a.k.a. optimistic or lock-free concurrency),
message passing, and deterministic concurrency wherever they are
appropriate, and you can use the tool that suits your problem and
eliminate the possibility of large classes of programming errors,
just like memory protection eliminates another large class of
programming errors.
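As a rough sketch of the message-passing end of that spectrum in Java (the queue and the messages are invented for the example), the two threads share nothing but an explicit mailbox:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class MessagePassing {
    public static void main(String[] args) {
        // The queue is the only thing the two threads have in common.
        final BlockingQueue<String> mailbox = new ArrayBlockingQueue<String>(16);

        Thread producer = new Thread(new Runnable() {
            public void run() {
                try {
                    mailbox.put("hello");
                    mailbox.put("world");
                    mailbox.put("STOP");            // sentinel ends the conversation
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });

        Thread consumer = new Thread(new Runnable() {
            public void run() {
                try {
                    String msg;
                    while (!(msg = mailbox.take()).equals("STOP")) {
                        System.out.println("received: " + msg);
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });

        producer.start();
        consumer.start();
    }
}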

I get the benefits of what you're describing today by using threads
and then choosing if I'll allow that thread to access shared data or
not.  In a good piece of OO software no data (or at least VERY little)
is global.  The thread can't just go out and touch some data when it
wants like in C.  When I create a thread, I pass only the objects (and
their data) that I WANT the thread to have access to -- otherwise, the
thread is hands-off from all other heap data.  So as described above,
my threads only have access to the explicit objects I give them access to.
Life is good.
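A small sketch of that confinement style (the Auditor class is invented for the example): the thread is handed exactly one shared, thread-safe object at construction time, and nothing else on the heap is reachable from it:

import java.util.List;
import java.util.Vector;

public class Auditor implements Runnable {
    // The one piece of shared state this thread is allowed to touch,
    // handed over explicitly at construction time.
    private final List<String> auditLog;

    public Auditor(List<String> auditLog) {
        this.auditLog = auditLog;
    }

    public void run() {
        // The thread can reach the log it was given and nothing else;
        // the rest of the application's objects were never passed in.
        auditLog.add("auditor started");
    }

    public static void main(String[] args) {
        List<String> sharedLog = new Vector<String>();   // thread-safe list
        new Thread(new Auditor(sharedLog)).start();
    }
}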


This is clearly better than the state of things in C, but it's far from the best we can do. But you can continue to use what you've got in Java if you really believe it's the best that can be.


                --Levi

/*
PLUG: http://plug.org, #utah on irc.freenode.net
Unsubscribe: http://plug.org/mailman/options/plug
Don't fear the penguin.
*/
