Help me understand what's going on here.

The standard-issue j3d interpolators (evidently?) use
System.currentTimeMillis() for a clock, which is maintained by the system
on a very high-priority thread, and they set WakeupOnElapsedFrames(0) for
their recurrence.  The j3d renderer, on the other hand, runs on a very
low-priority thread.  The most obvious visual consequence of this is that
while the system is off doing something like moving or resizing windows,
the renderer stops (meaning no Behaviors or Interpolators get called(?),
so any pieces of the simulation that you've embedded in Behaviors don't
get called either.  [Don't know whether this is true or not.  Help?])
When the system returns, the interpolators are evaluated against the
system clock (which has kept running), and rendered objects seem to leap
to their new places.  You can see this behavior in any of the standard
interpolator samples, like HelloUniverse.java.
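
For reference, here's roughly the stock setup I mean (my paraphrase along
the lines of HelloUniverse, not lifted verbatim; assumes the usual
javax.media.j3d and javax.vecmath imports):

    TransformGroup spinGroup = new TransformGroup();
    spinGroup.setCapability(TransformGroup.ALLOW_TRANSFORM_WRITE);

    // The Alpha ticks against the system clock: loop forever, 4 s period.
    Alpha alpha = new Alpha(-1, 4000);

    // The stock interpolator schedules itself on every rendered frame
    // (the WakeupOnElapsedFrames(0) recurrence happens inside the class).
    RotationInterpolator spinner = new RotationInterpolator(alpha, spinGroup);
    spinner.setSchedulingBounds(new BoundingSphere(new Point3d(), 100.0));
    spinGroup.addChild(spinner);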

Fine.  J3d's designers did it this way for what must have been very good
reasons.  For one, the approach keeps the simulation clock (SC) in step
with the system clock, which is, for practical purposes, the same as the
Meat Clock (MC).

But it's not suitable for simulations in which the solution of the system
state at a particular time depends for its convergence and stability on
its proximity to the state at the immediately preceding time.  Three
obvious examples are 1) numerical integration of differential equations
of state, 2) iterative solution of inverse kinematic constraints, and
3) construction and display of tracers or graphs that depend on small
(spatial) steps between adjacent solutions.
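
To make the first example concrete, here's a toy that has nothing to do
with j3d (it's entirely mine): explicit Euler integration of a damped
oscillator.  With a small step it decays the way it should; widen the step
past the stability limit and it blows up, which is exactly why I can't let
the solution interval be whatever the renderer feels like giving me.

    // Explicit Euler on x'' = -k*x - c*x'.  Try h = 0.05 to watch it diverge.
    public class EulerDemo {
        public static void main(String[] args) {
            double k = 100.0, c = 0.5;      // made-up stiffness and damping
            double x = 1.0, v = 0.0;        // initial state
            double h = 0.001;               // time step (seconds)
            for (int i = 0; i < 1000; i++) {
                double a = -k * x - c * v;  // acceleration from current state
                x += h * v;
                v += h * a;
            }
            System.out.println("x after 1000 steps: " + x);
        }
    }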

So I built a new Interpolator that defines its own SC, maintains it by
adding a time step every time its processStimulus() is called, and uses a
WakeupOnElapsedTime set to the time step for recurrence.  I call the
Alpha's value() with the argument set to my SC reading.  This class
evidently does what I want: each time processStimulus() is called, the SC
is just one time step advanced from the prior solution, and the animation
pauses while the system is busy but continues smoothly (without a step or
jump) once the system returns.  As a corollary benefit, the CPU usage as
measured on the Windows NT Task Manager drops from 100% (for the case in
which any Interpolators are active and the renderer evidently free-wheels,
using up all the CPU cycles it can get) to 15-30%.
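
Stripped down, the class looks something like this.  (The name
FixedStepRotator and the rotation payload are placeholders for
illustration; the real one computes my mechanism's kinematics.  It also
assumes the Alpha's start time has been set to zero so my SC reading lines
up with it.)

    import java.util.Enumeration;
    import javax.media.j3d.*;

    public class FixedStepRotator extends Behavior {
        private static final long TIME_STEP = 40;  // ms added to the SC per wakeup
        private final WakeupOnElapsedTime wakeup =
            new WakeupOnElapsedTime(TIME_STEP);
        private final Alpha alpha;
        private final TransformGroup target;
        private final Transform3D t3d = new Transform3D();
        private long simClock = 0;                 // the private simulation clock (SC)

        public FixedStepRotator(Alpha alpha, TransformGroup target) {
            this.alpha = alpha;                    // assumes alpha.setStartTime(0)
            this.target = target;
        }

        public void initialize() {
            wakeupOn(wakeup);                      // don't forget setSchedulingBounds()
        }

        public void processStimulus(Enumeration criteria) {
            simClock += TIME_STEP;                 // advance the SC one fixed step
            float a = alpha.value(simClock);       // evaluate Alpha at the SC,
                                                   // not the system clock
            t3d.rotY(a * 2.0 * Math.PI);
            target.setTransform(t3d);
            wakeupOn(wakeup);                      // re-arm for the next fixed step
        }
    }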

It's a naive approach, and there's no real reason to expect it to work.
But it does.  Sort of.

Before I started, I thought I had a grip on the tradeoffs I'd be making,
but in practice, anomalies in the measured performance make me doubt that
I have a clue about how it all works.  In the table below, I summarize
system performance on an applet running a constant-speed animation of a
mechanism (the kinematics are closed-form, so it's unlikely that there's
anything funny going on in the computation part of processStimulus(), but
then again I'm certain of nothing).  The first column is the time
increment (in milliseconds) for the SC, which is also the argument to
WakeupOnElapsedTime(time_step).  The second column is just the calculated
1000 / time_step.  The third column I call the frame rate, but it's really
the rate at which my interpolator's processStimulus() is called; I don't
really have any idea what the actual frame rate is.  The fourth column is
the "latency" - the elapsed time measured on currentTimeMillis() between
wakeups, minus the time step.  The fifth column is the CPU usage
percentage estimated from the Windows NT Task Manager.

    time_step   Target Frame Rate   Observed Frame Rate   Latency   CPU Usage
    (ms)        (fps)               (fps)                 (ms)      (%)

       20            50                   29                14.5     24-27
       40            25                   19.7              10.8     22-24
       60            16.7                 14.8               7.8     22
       80            12.5                  9.8              21.5     21-24
      100            10                    9.0              11.0     16

If I build and attach a frame counter Behavior that wakes up on every frame,
the renderer free-wheels at 60 or so fps, and the CPU usage runs at 100%.
My SC continues to update properly, and the simulation operates properly.
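
The counter is nothing fancy; something like this (names mine):

    import java.util.Enumeration;
    import javax.media.j3d.*;

    public class FrameCounter extends Behavior {
        private final WakeupOnElapsedFrames wakeup = new WakeupOnElapsedFrames(0);
        private long frames = 0;
        private long windowStart = System.currentTimeMillis();

        public void initialize() {
            wakeupOn(wakeup);
        }

        public void processStimulus(Enumeration criteria) {
            frames++;                              // one wakeup per rendered frame
            long now = System.currentTimeMillis();
            if (now - windowStart >= 1000) {       // report once a second
                System.out.println("fps: " + frames);
                frames = 0;
                windowStart = now;
            }
            wakeupOn(wakeup);                      // re-arm for the next frame
        }
    }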

I was ready for a certain unpredictability in the latency - the delay
between the scheduled system time and the system time at which a
WakeupOnElapsedTime Behavior actually executes.  The delay might depend on
the details of the method used to post the wakeup condition, or on the
details of how the rendering threads manage their workload and priorities.
I can't come up with an explanation for the pattern in the table, though.

So by using this naive (ignorant?  stupid?) approach - getting rid of the
WakeupOnElapsedFrames(0) Interpolators and introducing my own
interpolators that rely on WakeupOnElapsedTime(40 /* milliseconds */) - I
get a result that, for most of the range, works about as well as a
mixed-mode manager that I understood.

I had hopes of trimming the two time_step quantities (the SC increment and
the argument to WakeupOnElapsedTime()) on the fly to align the SC with the
MC, but I don't think I could work around the performance humps shown in
the table without understanding the threads and their interactions better.
The sketch below shows roughly what I had in mind.
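
For what it's worth, the trimming would look something like this, dropped
into the fixed-step Behavior above with lastWall, timeStep, and simClock
as fields (purely hypothetical; I never built it, and the constants are
guesses):

    public void processStimulus(Enumeration criteria) {
        long now = System.currentTimeMillis();
        long elapsed = now - lastWall;             // what the MC actually did
        lastWall = now;
        simClock += timeStep;                      // what the SC did
        long error = elapsed - timeStep;           // this wakeup's latency
        // Damped correction, clamped to keep the step in a sane range.
        timeStep = Math.max(10, Math.min(100, timeStep + error / 8));
        wakeupOn(new WakeupOnElapsedTime(timeStep));
    }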

Any clues?

TIA,

Fred Klingener
Brock Engineering
Roxbury CT
