> On Sunday 12 October 2003 02:00, Miguel wrote:
>> > is it normal that the rendering time scales about 1-to-1 with the
>> > window size?
>>
>> Good observation
>
> I also noted that zooming results in the same kind of increase in the
> rendering time ... your explanation did not discuss that, or did I
> miss that part?

There are two, maybe three things going on.
I would expect that when you zoom in you are covering more pixels ... and have a higher pixel density. I think this is the main reason.

The sphere rendering code keeps a cache of sphere shapes. The first time a sphere of a specific size is used, the shape for that size needs to be created. So, during the zooming process, sphere cache entries are being created. Then, when you do some rotations, you will probably hit some additional sphere sizes (because the perspective depth code makes spheres that are farther away smaller). But this effect should settle down very quickly.

Finally, if you make the spheres very large, then they do not get cached. Instead, the sphere shape gets created each time it is rendered. For simplicity of implementation, this code uses floating point and is not particularly optimized. The cache limit is either 64, 96, or 128 pixels in diameter ... I don't remember which. You really only hit this if you are working with a small molecule in a large window with spacefill at the vdwRadius.

> I made sure to look at the very crowded screen before zooming, so that
> after zooming the number of displayed atoms was lower ... but the
> amount of screen covered with colors (not background) was about the
> same ...

Hmmm, then I don't know. The atoms that are not on the screen get clipped (they do not get rendered), so they should not be an issue. I would expect that if you zoom in there *are* more pixels covered. But maybe not. Which file are you looking at? I would be interested in taking a look at it.

Miguel

In the spirit of building the 'tech note library' I am going to write some things about the sphere cache that is used for rendering atoms.

Sphere Cache Details
--------------------

The sphere cache entries are somewhat interesting. They are actually only the front half of the sphere, because you never see the back half. What is actually stored is 1/4 of a hemisphere. At each point a z-height is stored: basically the z-depth of that pixel above the center of the sphere.
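The lazy, size-capped caching described above can be sketched roughly like this. All the names here are hypothetical, and the 128-pixel cap is an assumption (the message only recalls "either 64, 96, or 128"):

```java
// Hypothetical sketch of the lazy sphere-shape cache.
// Class and field names are illustrative, not Jmol's actual identifiers.
public class SphereCache {
    static final int MAX_CACHED_DIAMETER = 128; // assumed cap

    // one slot per diameter; a slot stays null until first use
    private final int[][] shapes = new int[MAX_CACHED_DIAMETER + 1][];
    int shapesBuilt = 0; // counts constructions, to show the caching effect

    int[] getShape(int diameter) {
        if (diameter > MAX_CACHED_DIAMETER)
            return buildShape(diameter); // oversized: rebuilt on every render
        int[] shape = shapes[diameter];
        if (shape == null)
            shapes[diameter] = shape = buildShape(diameter); // first use of this size
        return shape;
    }

    private int[] buildShape(int diameter) {
        ++shapesBuilt;
        int radius = (diameter + 1) / 2;
        return new int[radius * radius]; // stand-in for the quarter-hemisphere data
    }

    public static void main(String[] args) {
        SphereCache cache = new SphereCache();
        cache.getShape(20);
        cache.getShape(20);  // cache hit: nothing rebuilt
        cache.getShape(200);
        cache.getShape(200); // over the cap: rebuilt both times
        System.out.println(cache.shapesBuilt); // prints 3
    }
}
```

This is why zooming past the cache limit hurts: every frame pays the construction cost again, while small spheres are built once and then reused.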
When a point is plotted it is replicated to each of the 4 quadrants at the same height.

Spheres are not stored on a per-color basis ... there is only one entry in the cache per sphere diameter. What is stored is an 'intensity' value, which is used for lighting. The color model is that every color has 64 shades, ranging from the darkest ambient color (a shadow) to white. So, for each point on the sphere, this intensity value gets used as an index into the shades for the current color.

While the heights are symmetric across the 4 quadrants, the intensities are not. So, each 'position' in the representation has a z-offset (which is shared by all 4 quadrants of the hemisphere) and 4 intensity values (one for each quadrant).

All of this information is packed into a single integer, and all of these integers are stored in a single array. How? There are 64 shades of intensity, so an intensity fits in 6 bits. There are 4 of those, so that consumes 24 of our 32 bits. 7 bits are used for the z-offset, giving us an offset up to 127, or a maximum cacheable sphere diameter of 254. 24 + 7 is 31 bits, so we have 1 bit left over. That bit is used to tell the code that it has reached the end of a scan line. The code starts painting pixels at the center of the sphere and paints toward the outside. When that bit is set, the code knows to do a 'carriage return' and start the next line.

The more important question is: Why? Because, from a performance standpoint, this is one of the most heavily used and time-critical pieces of code in the rendering engine. Our job is to render atoms (spheres) to the offscreen buffer as quickly as possible.

But is it actually faster? I think so. There are some bit-shifting and masking operations going on, but those happen entirely on the CPU, in registers, so they are inexpensive compared to memory accesses.
One of the earlier implementations had 5 arrays: 1 for the z-height and 1 for each of the 4 intensities. But array operations are bounds-checked in Java, making them a little more expensive than we would like them to be.

The sphere rendering loop involves reading sequential entries from an int[], and reading/writing 4 independent sequential locations in the zBuffer/pixelBuffer. This should allow us to take good advantage of the various layers of CPU caching ... we are not adulterating our caches with lots of junk. And there are only a couple of variables involved, so the compiler should be able to keep all of our variables in registers.

It *is* a lot of systems-level bit manipulation, but it pays off in terms of better system performance.

> Egon
>
> --
> PhD Molecular Representation in Chemometrics
> Laboratory of Analytical Chemistry
> http://www-cac.sci.kun.nl/people/egonw.html

--------------------------------------------------
Miguel Howard
[EMAIL PROTECTED]
c/Peña Primera 11-13 esc dcha 6B
37002 Salamanca
España / Spain
--------------------------------------------------
home phone  923 27 10 82
mobile      650 52 54 58
--------------------------------------------------
To call from the US, dial:
  9:00 am Pacific US   = home 011 34 923 27 10 82
 12:00 noon Eastern US = cell 011 34 650 52 54 58
  6:00 pm Spain
--------------------------------------------------

_______________________________________________
Jmol-developers mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/jmol-developers
