+1 for Ola's suggested wording, except the last.  The wording "to design
carefully" is grammatically correct English.  The alternative "to carefully
design" although common is technically a split infinitive which is
considered incorrect.

Bill

On Tue, Feb 17, 2015 at 8:51 AM, Ola Liljedahl <[email protected]>
wrote:

> On 17 February 2015 at 15:49, Ola Liljedahl <[email protected]>
> wrote:
> > On 17 February 2015 at 11:49, Petri Savolainen
> > <[email protected]> wrote:
> >> Improved documentation and definition of atomic and ordered
> >> queue synchronisation.
> >>
> >> Signed-off-by: Petri Savolainen <[email protected]>
> >>
> >> ---
> >> This is the ordered queue definition (in patch format) promised
> >> in the call yesterday.
> >> ---
> >>  include/odp/api/queue.h    | 21 +++++++++++++++++++--
> >>  include/odp/api/schedule.h | 15 ++++++++++++---
> >>  2 files changed, 31 insertions(+), 5 deletions(-)
> >>
> >> diff --git a/include/odp/api/queue.h b/include/odp/api/queue.h
> >> index 9519edf..1d67a3c 100644
> >> --- a/include/odp/api/queue.h
> >> +++ b/include/odp/api/queue.h
> >> @@ -108,12 +108,29 @@ extern "C" {
> >>
> >>  /**
> >>   * @def ODP_SCHED_SYNC_ATOMIC
> >> - * Atomic queue
> >> + * Atomic queue synchronisation
> >> + *
> >> + * Events from an atomic queue can be scheduled only to a single thread at a
> > I question the use of "schedule to"; is this really proper language?
> > Events are scheduled, but the events selected are then 'dispatched' to a thread.
> > Suggestion: "Events from an atomic queue can be dispatched only to a
> > single thread at a time, this  avoids ".
> Should have been: "Events from an atomic queue can be dispatched only
> to a single thread at a time."
> The "avoid" stuff is below.
>
> >
> >> + * time. This quarantees atomic access to the queue context and maintains event
> > Spelling error, should be "guarantees".
> > Suggestion: "Atomic scheduling provides mutual exclusion and atomic
> > access to the associated queue context and maintains event ordering.
> > This enables the user to avoid synchronization and event reordering in
> > software."
> >
> >> + * ordering, which helps the user to avoid SW locking.
> >> + *
> >> + * The atomic queue is dedicated to the thread until it requests another event
> > Suggestion: "The selected atomic queue is dedicated to the thread
> > until the thread requests another event from the scheduler (which
> > implicitly releases the queue) or the thread calls
> > odp_schedule_release_atomic() (which allows the scheduler to release
> > the queue immediately)."
> >
> >> + * from the scheduler (implicitly requests to release the current queue) or
> >> + * calls odp_schedule_release_atomic(), which hints the scheduler to release the
> >> + * queue early.
> >>   */
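> > Just to illustrate the model described above (this is not part of the
> > patch; <odp.h> as the umbrella include and process_event() as the
> > application handler are my own assumptions):
> >
> >     #include <odp.h>
> >
> >     extern void process_event(odp_event_t ev); /* made-up application code */
> >
> >     static void atomic_worker(void)
> >     {
> >             odp_queue_t src;
> >             odp_event_t ev;
> >
> >             while (1) {
> >                     /* For an ATOMIC source queue, the scheduler
> >                      * dedicates the queue and its context to this
> >                      * thread from here on... */
> >                     ev = odp_schedule(&src, ODP_SCHED_WAIT);
> >
> >                     process_event(ev);
> >
> >                     /* ...until the next odp_schedule() call above,
> >                      * which implicitly releases the atomic context. */
> >             }
> >     }
> >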
> >>
> >>  /**
> >>   * @def ODP_SCHED_SYNC_ORDERED
> >> - * Ordered queue
> >> + * Ordered queue synchronisation
> >> + *
> >> + * Events from an ordered queue can be scheduled to multiple threads for
> >> + * parallel processing.
> > The phrase "parallel processing" indicates that *one* piece of work is
> > being processed in parallel by multiple processors. But we are not
> > talking about one event being processed in parallel; it is different
> > events (from the same queue) that may be processed concurrently (by
> > different processors). It's borderline, but I suggest we avoid the use
> > of "parallel" in order to minimize the risk of confusion.
> >
> > Suggestion: "Events from an ordered queue can be scheduled and
> > dispatched to multiple threads for concurrent processing."
> >
> >> The original enqueue order of the source queue is
> > Isn't it the event dequeue order of the source queue that is being
> > maintained? Yes, this might be the same as the enqueue order to the
> > source queue (since queues supposedly are FIFO), but if enqueueing is
> > done from another ordered queue, threads may call odp_queue_enq() in a
> > different order and the events will be reordered. It is this enqueue
> > order that the software sees, but it is not the order that ordered
> > queues maintain.
> >
> > Suggestion: "The (dequeue) event order of the source queue is
> > maintained when events are enqueued to their destination queue(s)
> > (before another call to the scheduler)." Maybe drop the parentheses
> > from the last phrase.
> >
> >> + * maintained when events are enqueued to their destination queue(s) before
> >> + * another schedule call. Events from the same (source) queue appear in their
> >> + * original order when dequeued from a destination queue. The destination
> >> + * queue type (POLL/SCHED) or synchronisation (NONE/ATOMIC/ORDERED) is not
> >> + * limited.
> > What about packet input and packet output queues?
> >
> > Suggestion: "The destination queue type (polled, scheduled or packet
> > input/output) or synchronization model (none, ordered or atomic) is of
> > no concern."
> >
> >>   */
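> > And a rough sketch of the ordered case (illustration only; dst_queue and
> > do_work() are made up, error handling is omitted, and I assume the
> > event-based odp_queue_enq()):
> >
> >     odp_queue_t src, dst_queue; /* dst_queue would come from
> >                                  * odp_queue_create() or lookup */
> >     odp_event_t ev;
> >
> >     ev = odp_schedule(&src, ODP_SCHED_WAIT);
> >
> >     /* Threads may run this concurrently and finish in any order */
> >     do_work(ev);
> >
> >     /* As long as the enqueue happens before the next schedule call,
> >      * the scheduler restores the source queue (dequeue) order on
> >      * dst_queue, whatever its type or sync model. */
> >     (void)odp_queue_enq(dst_queue, ev);
> >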
> >>
> >>  /**
> >> diff --git a/include/odp/api/schedule.h b/include/odp/api/schedule.h
> >> index 5c08357..378ca19 100644
> >> --- a/include/odp/api/schedule.h
> >> +++ b/include/odp/api/schedule.h
> >> @@ -93,9 +93,9 @@ int odp_schedule_multi(odp_queue_t *from, uint64_t wait, odp_event_t events[],
> >>   * Pause scheduling
> >>   *
> >>   * Pause global scheduling for this thread. After this call, all schedule calls
> >> - * will return only locally reserved buffers (if any). User can exit the
> >> + * will return only locally reserved events (if any). User can exit the
> > "reserved"?
> > Reserved items can normally be unreserved, but I don't think that is
> > the case here.
> > Better to use "pre-scheduled events", which must be consumed by
> > (dispatched to) the thread in question.
> >
> >>   * schedule loop only after the schedule function indicates that there's no more
> >> - * buffers (no more locally reserved buffers).
> >> + * events (no more locally reserved events).
> >>   *
> >>   * Must be used with odp_schedule() and odp_schedule_multi() before exiting (or
> >>   * stalling) the schedule loop.
> >> @@ -111,7 +111,16 @@ void odp_schedule_pause(void);
> >>  void odp_schedule_resume(void);
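> > As I read this, the exit sequence would look roughly like below (sketch
> > only, not from the patch; process_event() stands for application code):
> >
> >     odp_queue_t from;
> >     odp_event_t ev;
> >
> >     odp_schedule_pause();
> >
> >     /* Drain the events already pre-scheduled to this thread */
> >     while ((ev = odp_schedule(&from, ODP_SCHED_NO_WAIT)) !=
> >            ODP_EVENT_INVALID)
> >             process_event(ev);
> >
> >     /* Now it is safe to exit or stall the schedule loop;
> >      * odp_schedule_resume() restarts global scheduling later on. */
> >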
> >>
> >>  /**
> >> - * Release currently hold atomic context
> >> + * Release the current atomic context
> >> + *
> >> + * This call is valid only for source queues with atomic synchronisation. It
> > English or American spelling? "s" vs. "z".
> >
> >> + * hints the scheduler that the user has completed processing of the critical
> >> + * section (which depends on the atomic synchronisation). The scheduler is now
> > Same here.
> >
> >> + * allowed to schedule another event(s) from the same queue to another thread.
> > Suggestion: "The scheduler is now allowed to schedule and dispatch
> > further events from the same queue to some other thread".
> >
> >> + *
> >> + * Early atomic context release may increase parallelism and thus system
> >> + * performance, but user needs to design carefully the split into critical vs.
> > "design carefully" => "carefully design"
> >
> >> + * non-critical sections.
> >>   */
> >>  void odp_schedule_release_atomic(void);
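> > To make the critical vs. non-critical split concrete (illustration only;
> > update_context() and post_process() are made-up application functions):
> >
> >     odp_queue_t src;
> >     odp_event_t ev;
> >
> >     ev = odp_schedule(&src, ODP_SCHED_WAIT);
> >
> >     /* Critical section: relies on the atomic queue context */
> >     update_context(src);
> >
> >     /* Hint that the context is no longer needed; the scheduler may now
> >      * dispatch further events from the same queue to other threads. */
> >     odp_schedule_release_atomic();
> >
> >     /* Non-critical section: runs without the atomic guarantee */
> >     post_process(ev);
> >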
> >>
> >> --
> >> 2.3.0
> >>
> >>
>
_______________________________________________
lng-odp mailing list
[email protected]
http://lists.linaro.org/mailman/listinfo/lng-odp