On Thu, Jul 22, 2010 at 11:25 PM, Moacyr Francischetti Corrêa <
moa...@spacnet.com.br> wrote:
> The software could be run in a CAVE environment or in a multi-projection
> system.
>
> The user would control one Jmol instance with the mouse. The other instances
> would be synchronized with the first. For now, do not worry about the
> synchronization, because it is already working!
>
ok
> The code changes were made in Jmol's Viewer.java.
>
>
>
> Do these transformation matrices already exist, or do they have to be created? Where?
>
TransformManager creates one key transformation matrix and uses one
rotation matrix.
> As explained previously, the problem is not limited to simply displaying a
> molecule from a different angle. One projection complements the others, i.e.,
> a molecule can have one piece displayed in the left projection, nothing in the
> central one, and the final part on the right; a single atom can't appear in two
> projections simultaneously.
>
Right, that's what navigationmode does. Are you using that? It should do
exactly what you want. Could be tricky to exactly recreate the corner of the
room, though. Where are the projectors exactly? (Is it rear projection from
the exact location of the user?)
> I'll try to explain it another way:
>
> Imagine a water molecule. Initially I see the three atoms in the central
> projection and nothing in the lateral projections. As I zoom, a hydrogen
> atom is displayed in the left projection, the other on the right, and the
> oxygen in the central one. If I keep zooming, the oxygen will occupy the entire
> central projection and begin to invade the lateral projections (the
> hydrogens are already behind the user and therefore are not displayed).
>
> Am I clear?
>
Sounds like navigationmode. Maybe what you are saying is that the projection
screen is slanted? Something like the attached drawing?
If so, then that's very special. It would require a different mapping
function in TransformManager11 (or 12):
protected float getPerspectiveFactor(float z) {
  return (z <= 0 ? referencePlaneOffset : referencePlaneOffset / z);
}
This function scales a point's distance from the center of the screen in XY
coordinates, based on its screen z.
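As a standalone illustration of how that factor behaves (plain Java, not Jmol code; the referencePlaneOffset value of 500 here is arbitrary, just for the demo):

```java
// Standalone demo of the perspective factor's behavior.
public class PerspectiveFactorDemo {
  // camera-to-reference-plane distance; 500 is an arbitrary demo value
  static final float referencePlaneOffset = 500f;

  static float getPerspectiveFactor(float z) {
    return (z <= 0 ? referencePlaneOffset : referencePlaneOffset / z);
  }

  public static void main(String[] args) {
    // a point exactly on the reference plane is unscaled
    System.out.println(getPerspectiveFactor(500f));  // 1.0
    // a point twice as far away is shrunk toward the center
    System.out.println(getPerspectiveFactor(1000f)); // 0.5
    // a point closer than the plane is magnified
    System.out.println(getPerspectiveFactor(250f));  // 2.0
  }
}
```

So points farther than the reference plane collapse toward the screen center, and nearer points spread outward, which is all a perspective projection is.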
The magic is done here:
protected void adjustTemporaryScreenPoint() {
  // fixedRotation point is at the origin initially
  float z = point3fScreenTemp.z;
  // this could easily go negative -- behind the screen --
  // but we don't care. In fact, that just makes it easier,
  // because it means we won't render it.
  // we should probably assign z = 0 as "unrenderable"
  if (Float.isNaN(z)) {
    if (!haveNotifiedNaN)
      Logger.debug("NaN seen in TransformPoint");
    haveNotifiedNaN = true;
    z = 1;
  } else if (z <= 0) {
    // just don't let z go past 1 BH 11/15/06
    z = 1;
  }
  point3fScreenTemp.z = z;
  // x and y are moved inward (generally) relative to 0, which
  // is either the fixed rotation center or the navigation center
  // at this point coordinates are centered on rotation center
  switch (mode) {
  case MODE_NAVIGATION:
    // move nav center to 0; refOffset = Nav - Rot
    point3fScreenTemp.x -= navigationShiftXY.x;
    point3fScreenTemp.y -= navigationShiftXY.y;
    break;
  case MODE_PERSPECTIVE_CENTER:
    point3fScreenTemp.x -= perspectiveShiftXY.x;
    point3fScreenTemp.y -= perspectiveShiftXY.y;
    break;
  }
  if (perspectiveDepth) {
    // apply perspective factor
    float factor = getPerspectiveFactor(z);
    point3fScreenTemp.x *= factor;
    point3fScreenTemp.y *= factor;
  }
  switch (mode) {
  case MODE_NAVIGATION:
    point3fScreenTemp.x += navigationOffset.x;
    point3fScreenTemp.y += navigationOffset.y;
    break;
  case MODE_PERSPECTIVE_CENTER:
    point3fScreenTemp.x += perspectiveOffset.x;
    point3fScreenTemp.y += perspectiveOffset.y;
    break;
  case MODE_STANDARD:
    point3fScreenTemp.x += fixedRotationOffset.x;
    point3fScreenTemp.y += fixedRotationOffset.y;
    break;
  }
  if (Float.isNaN(point3fScreenTemp.x) && !haveNotifiedNaN) {
    Logger.debug("NaN found in transformPoint ");
    haveNotifiedNaN = true;
  }
  point3iScreenTemp.set((int) point3fScreenTemp.x, (int) point3fScreenTemp.y,
      (int) point3fScreenTemp.z);
}
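For the slanted-screen case, one purely hypothetical direction would be to let the reference plane offset vary with the horizontal screen coordinate. The class, parameter names, and the first-order linear model below are all my assumptions for illustration, not anything in Jmol:

```java
// Hypothetical sketch only -- NOT Jmol code. For a projection screen slanted
// about the vertical axis, the effective viewer-to-screen distance could be
// modeled to first order as varying linearly with the horizontal offset x.
// slantPerPixel = 0 recovers the flat-screen behavior exactly.
public class SlantedScreenSketch {
  static float referencePlaneOffset = 500f; // arbitrary demo value

  static float getPerspectiveFactor(float z, float x, float slantPerPixel) {
    float offset = referencePlaneOffset + slantPerPixel * x;
    return (z <= 0 ? offset : offset / z);
  }
}
```

The x- and y-scaling in adjustTemporaryScreenPoint would then no longer be a single factor per point; that is what would make this "very special."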
> Are the transformation matrices the solution to this problem? How?
>
>
>
> Regards,
>
> Moacyr
>
>
>
> *From:* Robert Hanson [mailto:hans...@stolaf.edu]
> *Sent:* Friday, July 23, 2010 00:22
>
> *To:* jmol-developers@lists.sourceforge.net
> *Subject:* Re: [Jmol-developers] RES: RES: Virtual Reality
>
>
>
> Well, I think it's a simple matter. It would be just like stereo - we
> create two buffers with a rotation between them. In this case you need three
> buffers and a 4x4 matrix transformation, not just a 3x3 rotation. Should be
> easy to implement. What I wasn't sure about was how you wanted to deliver
> it.
>
> Is this a real-time virtual reality cube/cave? Or is it something else?
>
> How does the user's position get fed into the system?
>
> What technical issues are you running into?
>
> How much have you tweaked Jmol already?
>
> more comments....
>
> On Thu, Jul 22, 2010 at 9:52 PM, Moacyr Francischetti Corrêa <
> moa...@spacnet.com.br> wrote:
>
> Moacyr:
>
> I read http://chemapps.stolaf.edu/jmol/docs/misc/navigation.pdf and
> concluded that there is no control of the camera position. Am I right?
>
> Please tell me which part of code I could make changes to reposition the
> camera (observer's position) relative to the molecule. It would be in
> TransformManager? Or in TransformManager10? What are the variables involved?
>
> Do nothing with TransformManager10. It's history. Make sure you are using
> Jmol 12.0 and either work in TransformManager11 or, better, perhaps overlay
> that with a small TransformManager12 that has your options included.
>
> You will basically want an enhanced navigation mode. It gives you the
> realistic "in place" walk-through perspective you are looking for.
>
> Camera positions -- this is, of course, just an illusion. I've recently
> added more standard camera parameter calculation that I needed for the U3D
> business to TransformManager -- getCameraFactors. They are pretty standard.
>
> The camera position is basically just a position, a distance from a
> reference point, and a quaternion (or 3x3 matrix) that describes the
> orientation. Jmol calculates all of these and computes from that a 4x4
> transformation matrix that takes you from Cartesian to screen coordinates.
> It's pretty standard.
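That description (a quaternion orientation plus a translation composed into one 4x4 Cartesian-to-screen transform) can be sketched in plain Java. This is not Jmol's actual code; the row-major float[16] layout and helper names are just for illustration:

```java
// Sketch: build a 4x4 transform from a unit quaternion (rotation) and a
// translation, then apply it to a point -- the standard composition the
// text describes. Row-major 4x4 stored as float[16]. Not Jmol classes.
public class CameraMatrixSketch {

  // unit quaternion (w, x, y, z) -> 3x3 rotation embedded in a 4x4,
  // with translation (tx, ty, tz) in the last column
  static float[] quatToMatrix4(float w, float x, float y, float z,
                               float tx, float ty, float tz) {
    float[] m = new float[16];
    m[0] = 1 - 2*(y*y + z*z); m[1] = 2*(x*y - w*z);     m[2] = 2*(x*z + w*y);      m[3] = tx;
    m[4] = 2*(x*y + w*z);     m[5] = 1 - 2*(x*x + z*z); m[6] = 2*(y*z - w*x);      m[7] = ty;
    m[8] = 2*(x*z - w*y);     m[9] = 2*(y*z + w*x);     m[10] = 1 - 2*(x*x + y*y); m[11] = tz;
    m[15] = 1;
    return m;
  }

  // apply the 4x4 to a point, treating its homogeneous w as 1
  static float[] transform(float[] m, float px, float py, float pz) {
    return new float[] {
      m[0]*px + m[1]*py + m[2]*pz  + m[3],
      m[4]*px + m[5]*py + m[6]*pz  + m[7],
      m[8]*px + m[9]*py + m[10]*pz + m[11]
    };
  }
}
```

For example, the quaternion (√0.5, 0, √0.5, 0) is a 90-degree rotation about Y, and with zero translation it maps (1, 0, 0) to (0, 0, -1).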
>
> I think your process would just mimic what is being done in Viewer for
> stereo. Take a look at how those images are created. Basically that's just
>
> -- render the first image
> -- rotate
> -- render the second image
>
> I think you will just do -- or already have done --
>
> -- render the front view image (same as current)
> -- rotate (navigate) 90 Y
> -- render the left image
> -- rotate (navigate) 180 Y
> -- render the right image
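That sequence can be sketched as a driver loop. The View interface and both method names below are invented for illustration; they are NOT the Jmol API, just a stand-in for whatever render-and-navigate hooks the real implementation would use:

```java
// Hypothetical driver sketch -- View, navigateRotateY, and renderToBuffer
// are made-up names, not Jmol's API. The sequence follows the quoted steps:
// front, rotate (navigate) 90 about Y, left, rotate 180 more, right.
public class CaveRenderSketch {
  interface View {
    void navigateRotateY(float degrees); // rotate the navigation frame
    int[] renderToBuffer();              // render and capture the pixel buffer
  }

  static int[][] renderThreeWalls(View v) {
    int[][] walls = new int[3][];
    walls[0] = v.renderToBuffer();  // front view (current orientation)
    v.navigateRotateY(90);
    walls[1] = v.renderToBuffer();  // left wall
    v.navigateRotateY(180);
    walls[2] = v.renderToBuffer();  // right wall
    v.navigateRotateY(90);          // restore starting orientation (360 total)
    return walls;
  }
}
```

The final 90-degree rotation restoring the original orientation is my addition, on the assumption that the interactive front view should be left untouched between frames.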
>
Very simple. Does that sound about right?
>
> Where do the images go?
>
> Bob
>
>
> *From:* Robert Hanson [mailto:hans...@stolaf.edu]
>
> *Sent:* Thursday, July 22, 2010 19:57
>
>
> *To:* jmol-developers@lists.sourceforge.net
>
> *Subject:* Re: [Jmol-developers] RES: Virtual Reality
>
>
>
> Can we start this over? I lost the sense of the thread. What exactly would
> you like to be able to do? Suggest some command options.
>
> On Thu, Jul 22, 2010 at 12:18 PM, Moacyr Francischetti Corrêa <
> moa...@spacnet.com.br> wrote:
>
> Bob,
>
>
>
> I read the pdf you've indicated and concluded that there
> is no control of the camera position. Am I right?
>
> Please tell me which part of code I could make changes to reposition the
> camera (observer's position) relative to the molecule.
>
> What are the variables involved?
>
>
>
> Moacyr
>
>
>
> *From:* Robert Hanson [mailto:hans...@stolaf.edu]
> *Sent:* Thursday, March 18, 2010 11:21
>
>
> *To:* jmol-developers@lists.sourceforge.net
> *Subject:* Re: [Jmol-developers] Virtual Reality
>
>
>
> Moacyr,
>
>
> By the way, the way we would do this, I think, is just the same as we do
> stereo -- rerender x times and capture the screen image each time, then put
> those together for delivery. Be aware that Java has some memory size
> limitations that could put a cap on buffer size. What sort of screen pixel
> counts are we talking about here?
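As a rough feasibility check on those buffer sizes (standalone arithmetic; the assumption of one 32-bit ARGB int per pixel matches a typical Java int[] image buffer, and the 1024x768 wall size is just an example):

```java
// Back-of-envelope memory check for per-wall pixel buffers,
// assuming 4 bytes (one ARGB int) per pixel.
public class BufferSizeCheck {
  static long bufferBytes(int width, int height) {
    return 4L * width * height;
  }

  public static void main(String[] args) {
    // five 1024x768 walls of raw pixel data
    long total = 5 * bufferBytes(1024, 768);
    System.out.println(total / (1024 * 1024) + " MB"); // 15 MB
  }
}
```

Even five full-wall buffers at modest projector resolutions are only tens of megabytes of raw pixels, so the cap is more likely to come from the JVM's default heap than from the buffers themselves.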
>
> Bob
>
> On Tue, Mar 9, 2010 at 12:40 PM, Moacyr Francischetti Corrêa <
> moa...@spacnet.com.br> wrote:
>
> Is it possible to run Jmol in a virtual reality environment, such as a
> CAVE?
>
> It would require the software to generate 5 different views of the
> molecule, one for each wall of the CAVE.
>
>
>
> Any suggestion?
>
>
>
> Moacyr
>
>
>
> ------------------------------------------------------------------------------
> Download Intel® Parallel Studio Eval
> Try the new software tools for yourself. Speed compiling, find bugs
> proactively, and fine-tune applications for parallel performance.
> See why Intel Parallel Studio got high marks during beta.
> http://p.sf.net/sfu/intel-sw-dev
> _______________________________________________
> Jmol-developers mailing list
> Jmol-developers@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/jmol-developers
--
Robert M. Hanson
Professor of Chemistry
St. Olaf College
1520 St. Olaf Ave.
Northfield, MN 55057
http://www.stolaf.edu/people/hansonr
phone: 507-786-3107
If nature does not answer first what we want,
it is better to take what answer we get.
-- Josiah Willard Gibbs, Lecture XXX, Monday, February 5, 1900