Xiaowen:

In your example, you have multiple writers to one channel.
Strictly speaking, this is not OK in PN.  In particular, the Display
actor is designed to take multiple inputs from multiple channels (its
input port is a multiport).  PN should probably detect this situation
and flag an error.
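
For concreteness, here is a minimal sketch of the two wirings in the
Ptolemy II Java API (the Ramp actors and all names are illustrative
stand-ins, not taken from your model):

    import ptolemy.actor.TypedCompositeActor;
    import ptolemy.actor.TypedIORelation;
    import ptolemy.actor.lib.Ramp;
    import ptolemy.actor.lib.gui.Display;
    import ptolemy.domains.pn.kernel.PNDirector;

    public class WiringSketch {
        public static void main(String[] args) throws Exception {
            TypedCompositeActor top = new TypedCompositeActor();
            top.setName("top");
            new PNDirector(top, "director");

            Ramp a = new Ramp(top, "a");
            Ramp b = new Ramp(top, "b");
            Display display = new Display(top, "display");

            // Broken: both outputs linked to ONE relation -- a single
            // channel with two writers, which PN assumes cannot happen.
            // TypedIORelation shared = new TypedIORelation(top, "shared");
            // a.output.link(shared);
            // b.output.link(shared);
            // display.input.link(shared);

            // Intended: one relation per writer.  Display's input is a
            // multiport, so each relation becomes a separate channel.
            top.connect(a.output, display.input);
            top.connect(b.output, display.input);
        }
    }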

However, there is a more interesting issue that is exposed by this.
Your diagnosis, in fact, is exactly on target.
The design of PN assumes:

  - one thread per actor
  - one actor writing to each channel.

If you violate these assumptions, then both the deadlock
detection and bounded buffer scheduling ("Parks' algorithm")
fail. However, violating these assumptions can be
very useful sometimes...  In particular, we have created
a "NondeterministicMerge" actor for PN that violates these
assumptions...  This actor significantly (and dangerously)
extends PN semantics... But it is often useful.
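
To make the failure concrete, here is a plain-Java sketch of the
bookkeeping behind Parks' algorithm (an illustration of the scheme as
described above, not the actual PNDirector code).  It declares real
deadlock when every actor thread is blocked and none is write-blocked,
and resolves an artificial deadlock by growing the smallest full
channel.  Both decisions rest on an accurate count of blocked threads,
which is exactly what a second writer on a channel corrupts:

    import java.util.Comparator;
    import java.util.List;

    // Parks-style bookkeeping, simplified for illustration.
    final class ParksBookkeeping {

        // Minimal stand-in for a bounded PN channel.
        static final class Channel {
            int capacity = 1;
            int size;
            boolean full() { return size >= capacity; }
        }

        private final int actorThreads;  // one thread per actor, by assumption
        private final List<Channel> channels;
        private int readBlocked;
        private int writeBlocked;

        ParksBookkeeping(int actorThreads, List<Channel> channels) {
            this.actorThreads = actorThreads;
            this.channels = channels;
        }

        // Receivers report each blocked thread exactly once -- the
        // invariant that a second writer on one channel breaks.
        synchronized void readerBlocked()   { readBlocked++;  detect(); }
        synchronized void readerUnblocked() { readBlocked--; }
        synchronized void writerBlocked()   { writeBlocked++; detect(); }
        synchronized void writerUnblocked() { writeBlocked--; }

        private void detect() {
            if (readBlocked + writeBlocked < actorThreads) {
                return;  // some thread can still make progress
            }
            if (writeBlocked == 0) {
                // Real deadlock: no token can ever arrive; stop the model.
                System.out.println("Real deadlock: terminating model.");
            } else {
                // Artificial deadlock: grow the smallest full channel so
                // its blocked writer can proceed (Parks' algorithm).
                channels.stream()
                        .filter(Channel::full)
                        .min(Comparator.comparingInt(c -> c.capacity))
                        .ifPresent(c -> c.capacity++);
            }
        }
    }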

We have been working on this issue, and have some ideas, but
no concrete solutions.  In particular, we have some simple
test cases where we implement a nondeterministic merge
in PN using multiple threads, and everything works except
that deadlock detection can fail, and the bounded buffer
scheduling scheme can interfere with deadlock detection and
stop models prematurely.
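
For reference, the essence of such a merge, stripped to plain Java (an
illustration only, not the actual NondeterministicMerge actor), is two
producer threads racing to put into one shared bounded queue; the
interleaving is chosen by the thread scheduler.  Note that both
producers can block on the full queue at the same time, which is
precisely the situation the per-channel accounting mishandles:

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class MergeSketch {
        public static void main(String[] args) throws InterruptedException {
            BlockingQueue<String> channel = new ArrayBlockingQueue<>(1);

            new Thread(() -> produce(channel, "a")).start();
            new Thread(() -> produce(channel, "b")).start();

            // The consumer sees the two streams interleaved in an order
            // chosen by the scheduler -- that is the nondeterminism.
            for (int i = 0; i < 10; i++) {
                System.out.println(channel.take());
            }
        }

        // With capacity 1, both producers routinely block in put() at
        // once: two write-blocked threads on a single channel.
        static void produce(BlockingQueue<String> channel, String tag) {
            try {
                for (int i = 0; i < 5; i++) {
                    channel.put(tag + i);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }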

Unfortunately, I haven't really been successful at getting
anyone in the Ptolemy group interested enough to actually fix
the problem...  I have some ideas, but regrettably, almost no
time these days...  Others in the group have contributed ideas,
but not code... I would be happy to work with anyone
who wants to take this on.  There is likely a paper in the solution,
as the whole question of deadlock detection and bounded buffering
has a considerable literature.

Edward

At 02:35 PM 2/10/2005 -0800, xiaowen wrote:
Hi,


I think I see where the problem is.

A single PNQueueReceiver may have multiple actors/threads trying to call its put() method; this is the case when a relation connects the output of one actor to the input ports of multiple actors. If the capacity of the queue is too small, two threads may both block on the write, and each informs the PNDirector that the queue is write-blocked. This leads the director to believe that two different queues are blocked instead of one.

Later, when the capacity of the queue is increased or a token is read and removed from it, the director decrements its count of write-blocked queues only once. The workflow therefore never terminates, because the director always thinks there is still a write-blocked queue.
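
A minimal model of that miscount (the class and method names here are
hypothetical; the real bookkeeping is spread across PNDirector and
PNQueueReceiver) looks like this:

    final class BlockedQueueCounter {
        private int writeBlockedQueues;

        // Called from put() by EVERY thread that blocks on a full
        // receiver.  With two writers on one channel this runs twice
        // for the same queue, so the count reaches 2.
        synchronized void queueWriteBlocked() { writeBlockedQueues++; }

        // Called once when the queue gains space (capacity increased or
        // a token removed).  The count drops to 1, never to 0, so the
        // director believes some queue is write-blocked forever.
        synchronized void queueWriteUnblocked() { writeBlockedQueues--; }

        synchronized boolean anyWriteBlocked() {
            return writeBlockedQueues > 0;
        }
    }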

Here's a diff against PNQueueReceiver.java in Ptolemy II CVS that seems to fix the problem:

$ diff ~/work/ptII/ptolemy/domains/pn/kernel/PNQueueReceiver.java PNQueueReceiver.java
420,421c420,423
<     _writeBlocked = true;
<     prepareToBlock(branch);
---
>     if (!_writeBlocked) {
>         _writeBlocked = true;
>         prepareToBlock(branch);
>     }



With this change, the queue informs the director only once that it is blocked on a write, and it seems to make the workflow I sent earlier run to completion. If you agree this patch is correct, please apply it to ptII CVS. Perhaps the get() method also requires a similar guard.
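
If the read side can double-count in the same way (multiple threads
blocking in get() on one receiver), the analogous guard would
presumably be the following -- a sketch that follows the naming of the
patch above, not verified against the actual source:

    // Hypothetical read-side analog of the write-side patch.
    if (!_readBlocked) {
        _readBlocked = true;
        prepareToBlock(branch);
    }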



Thanks, Xiaowen



On 08.02.2005, at 19:53, xiaowen wrote:

Hi Everyone,


Attached please find a small workflow that doesn't appear to finish correctly. It is meant to write a short string to some files and immediately finish. However, it seems to stay in the "executing" state and never switches to the "execution finished" state. Will you please take a look and tell me what I'm missing?



Thanks! Xiaowen




<execute.xml>


----------------------------------------------------------------------------
Posted to the ptolemy-hackers mailing list.  Please send administrative
mail for this list to: [EMAIL PROTECTED]

------------
Edward A. Lee
Professor, Chair of the EE Division, Associate Chair of EECS
231 Cory Hall, UC Berkeley, Berkeley, CA 94720
phone: 510-642-0253 or 510-642-0455, fax: 510-642-2845
[EMAIL PROTECTED], http://ptolemy.eecs.berkeley.edu/~eal


