Re: Batch function for Ultimapper and Render Map

2014-01-09 Thread Nicolas Esposito
Wow Matt,
Thanks for spending the time writing the script. I'll test it out together with
the Mapify plugin to see which one suits my needs best.

Sorry to ask the same question as before, but based on your JScript, for
Ultimapper I'll just need to swap in the Ultimapper function, add the map
variables (normals, albedo, depth, AO) and change the specified property
at the end of the script to point at the Ultimapper property, right?
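
Something like this is what I have in mind - just a rough, untested adaptation of Matt's script. The "ultimapper" filter string and the GenerateUltimapper() call are guesses on my part (as is the idea that the output path can be swapped per frame like RenderMap's "imagefilepath"), so I'll confirm the real property type, parameter and command names in the SDK Explorer first:

// Rough sketch only - adapted from Matt's RenderMap script, not tested.
// NOTE: "ultimapper" as the property filter and GenerateUltimapper() as the
// command that bakes the maps are guesses - check the SDK command reference.
UltimapperSequence( 1, 100 );

function UltimapperSequence( FrameStart, FrameEnd )
{
    var oPlayControl = Dictionary.GetObject( "PlayControl" );

    // get eligible objects from selection (same approach as Matt's script)
    var oObjects = SIFilter( null, siPolyMeshFilter, true, siQuickSearch );
    if ( !oObjects || oObjects.Count == 0 ) {
        LogMessage( "nothing selected", siError );
        return;
    }

    for ( var i = 0; i < oObjects.Count; i++ ) {

        // find the Ultimapper property on the object ("ultimapper" is assumed)
        var oProperties = oObjects(i).Properties.Filter( "ultimapper" );
        if ( oProperties.Count == 0 ) {
            continue;
        }
        var oUltimapper = oProperties(0);

        for ( var j = FrameStart; j <= FrameEnd; j++ ) {

            // advance the timeline so each bake picks up the current frame
            oPlayControl.Parameters( "Current" ).value = j;

            // the output path parameter would also need a per-frame token here;
            // its name differs from RenderMap's "imagefilepath", so check the
            // property's parameters in the SDK Explorer before relying on it.

            // bake normals/albedo/depth/AO as configured in the property.
            // GenerateUltimapper() is an assumption - replace it with whatever
            // command the script log shows when you hit "Generate Maps".
            GenerateUltimapper( oUltimapper.FullName );
        }
    }
}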

I'll start experimenting with the SDK editor to see if there is some
useful information there ;)

Thanks again Matt for the script - it's amazing how you guys can come up with
a solution in a few minutes just by writing it down! :)


2014/1/9 Matt Lind ml...@carbinestudios.com



 // Jscript – will need some tweaking to be functional

 RenderMapSequence( 1, 100, "C:\\tmp\\my_sequence.CURRENTFRAME.tga" );


 function RenderMapSequence( FrameStart, FrameEnd, FileName )
 {
     var oPlayControl = Dictionary.GetObject( "PlayControl" );

     // get eligible objects from selection
     var aFilterNames = new Array( siPolyMeshFilter, siSurfaceMeshFilter );
     var oObjects = SIFilter( null, aFilterNames.join(","), true, siQuickSearch );

     if ( !oObjects || oObjects.Count == 0 ) {
         LogMessage( "nothing selected", siError );
         return;
     }

     for ( var i = 0; i < oObjects.Count; i++ ) {

         var oObject = oObjects(i);

         var oProperties = oObject.Properties.Filter( "rendermap" );
         if ( oProperties.Count == 0 ) {
             // rendermap property not found
             continue;
         }
         var oRenderMapProperty = oProperties(0);

         for ( var j = FrameStart; j <= FrameEnd; j++ ) {

             var FrameCurrent = j;

             // advance timeline
             oPlayControl.Parameters( "Current" ).value = FrameCurrent;

             // update the parameter defining output image file name
             var ImageFileName = FileName.replace( /CURRENTFRAME/, FrameCurrent );
             oRenderMapProperty.Parameters( "imagefilepath" ).value = ImageFileName;

             // execute specified rendermap property
             RegenerateMaps( oRenderMapProperty.FullName );
         }
     }
 }





 *From:* softimage-boun...@listproc.autodesk.com [mailto:
 softimage-boun...@listproc.autodesk.com] *On Behalf Of *Nicolas Esposito
 *Sent:* Wednesday, January 08, 2014 3:19 PM
 *To:* softimage@listproc.autodesk.com
 *Subject:* Re: Batch function for Ultimapper and Render Map



 The same script functionality to execute every frame and update the output
 file could be applied to Ultimapper as well, am I correct?



 Sorry, I'm not too familiar with scripting, but looking at the manual it
 doesn't seem super-complicated ;)



 2014/1/9 Nicolas Esposito 3dv...@gmail.com

 This is gold!

 Thanks Matt and Alan ;)



 2014/1/8 Alan Fregtman alan.fregt...@gmail.com

 You might find Sajjad's *Mapify* plugin useful as it can rendermap a
 sequence:



 http://www.sajjadamjad.com/plugins.html#Mapify





 On Wed, Jan 8, 2014 at 5:42 PM, Matt Lind ml...@carbinestudios.com
 wrote:

 Set up the rendermap property as usual, then write a few lines of script
 to execute rendermap every frame while updating the output file path.



 Matt









 *From:* softimage-boun...@listproc.autodesk.com [mailto:
 softimage-boun...@listproc.autodesk.com] *On Behalf Of *Nicolas Esposito
 *Sent:* Wednesday, January 08, 2014 2:40 PM
 *To:* softimage@listproc.autodesk.com
 *Subject:* Batch function for Ultimapper and Render Map



 I remember there was Backburner for Softimage once, but I suppose it has been
 removed...



 So, I need to use ultimapper and render map to render out each frame of my
 animation, and if I remember correctly this was doable with backburner



 Is there a way to achieve that with Soft 2013?



 I'm trying to replicate the effect done by Blur in this video @ 1:59

 http://www.youtube.com/watch?v=38wh5Fn4WEs



 That dynamic tessellation/normal map system is what I'm trying to
 achieve, but I will need the maps in order to test something else.



 Cheers









Re: Redshift3D Render

2014-01-09 Thread Mirko Jankovic
Well, I finally got a nice reason to get 4 Titans... mmm, 4-way SLI and 3x 27" mon
stereo gaming.
So nice to have the same comp good for both job and gaming.



On Thu, Jan 9, 2014 at 3:49 AM, Sebastien Sterling 
sebastien.sterl...@gmail.com wrote:

 Call me immature, but I kind of love the idea of my gaming pc also being
 my render farm :P

 Sod off, Nvidia, and your flabby Quadro cards; they're expensive as fuck and you
 can't play Crysis on them :)


 On 8 January 2014 23:13, Tim Leydecker bauero...@gmx.de wrote:

 For a current list of features available as well as a roadmap,
 I would suggest just going and giving it a free try:

 https://www.redshift3d.com/get-redshift

 Yes, you actually don't even have to commit to spending $100 directly;
 the Free Beta Trial gives you 30 days of full access to Redshift.

 A special benefit of this free trial option is that you could actually
 try out how a bunch of machines would run using Redshift in a farm or
 not.

 Reading the docs doesn't require a login:

 http://docs.redshift3d.com/Default.html


 Redshift is a really well balanced renderer and I wholeheartedly trust in
 its success.

 With the above opportunity available it is a good time to test it in your
 production scenario and weigh it against VRay and Arnold, which are also
 both very nice platforms with enough momentum to be around for quite a while.

 I am sure Redshift is a valuable addition to that arsenal.

 Cheers,

 tim








 On 08.01.2014 22:44, Emilio Hernandez wrote:

 Backing up Tim, in the forums there is actually a hair test in Softimage
 with a simple phong shader.  And IMHO it looks nice.




 2014/1/8 Tim Crowson tim.crow...@magneticdreams.com


 They've stated pretty clearly that Hair and Strand support is the
 next big thing to come... shouldn't be too long now...

 -Tim



 On 1/8/2014 3:36 PM, Daniel Kim wrote:

 I found some weird result of displacement map with Redshift.
 Elevation is okay, but sometimes I could see weird connection of UVM. All
 UVM boders wasn't smooth and I had no
 idea how to fix it. Arnold has that option though : / But more
 option I need is... hair @__@ and ICE strand


 ---
 Daniel Kim
 Animation Director & Professional 3D Generalist
 http://www.danielkim3d.com
 ---




 On Thu, Jan 9, 2014 at 10:31 AM, Emilio Hernandez emi...@e-roja.com wrote:

 Displacement and bump maps are there and they work beautifully.
 They even implemented a scalar change range into the displacement node.




 2014/1/8 Tim Crowson tim.crow...@magneticdreams.com


 I'll repost what I said in the other thread

 We started using Redshift back in March and pretty much use
 it exclusively now. Of course it all depends on the needs of the project
 (and there are still some real
 limitations).  The RS dev team is top notch though. I'm
 really excited to see how things will be at the end of this calendar year.
  Redshift development is
 progressing at a fantastic rate, and the pricing is very
 competitive. For facilities, even small ones, it does require that you
 spend some time considering your
 hardware and infrastructure, especially if you want to
 start converting CPU farms for GPU rendering, or augmenting them.
 Fortunately, Redshift isn't licensed per GPU,
 but per machine, and that should provide some breathing
 room.

 To be honest (and I realize we have many Arnold folks
 here), here at Magnetic we evaluated our rendering options (MR, vRay,
 3Delight, Arnold). I even started working
 on a Soft-to-Modo pipeline. Among these Arnold was the
 clear winner. That said, we felt that to be useful for us in production,
 Arnold was too costly a solution for
 us, both financially and in render time, /considering the
 kinds of projects we do/ . Then Redshift came along and despite its
 infancy, really turned our heads. We

 cautiously began using it on productions, and it has since
 proven itself for us.

 Again, it all depends on what kind of project you're
 working on! You need to evaluate it for yourself of course, but for smaller
 houses like us, it allows us to
 produce better looking content faster, while staying in
 Softimage. And in this economy, we can't argue with that.

 -Tim



 On 1/8/2014 2:53 PM, Paul Griswold wrote:

 I've been using it alongside Arnold for quite a while
 now.  I just finished a project for CES entirely in Redshift.

 I think Redshift falls more into the category of a VRay
 competitor rather than Arnold.  Redshift isn't open the way Arnold is & I
 don't think they intend it to be.

Re: rigging in xsi vs maya

2014-01-09 Thread Tim Leydecker

Autodesk is doing a lot of development in the area of 3D scan data handling.

If you look into what is going on in the area of topology data acquisition for
architecture, engineering and the military, there is a shift towards 3D
pointcloud data, which imho is comparable to what 2D tracking as a concept
brought us in the 90s.

(Facial recognition and finally image based modeling and camera positional data)

It is apparent that the more complex, raw 3D point cloud data will need new and
abstracted ways of handling and manipulation, filtering options and adaptive
control layers for approximated data.

The implication such data brings for 3D animation is that the concept of
weighting a fixed number of vertices to a bone may have to be extended beyond a
fixed number of polygons.

Unfortunately, fall-off based volume weighting at its current level of finesse
may give worse results than before, especially if your shape options for the
influence volume are limited to capsules, boxes or spheres.

I am a bit worried that the process of rigging & weighting an organic character
will become even more frustrating and stiff, or at least will need even more
steps, like creating an extra control surface with a fixed number of points and
wrapping it around the high-density data.

Such a wrap-deformer takes away control. It's always the rims and little
caveats that need extra care.

Cheers,

tim









On 09.01.2014 02:13, Guillaume Laforge wrote:

On Wed, Jan 8, 2014 at 7:55 PM, Luc-Eric Rousseau luceri...@gmail.com wrote:

In the near future ( not talking about autodesk here)  I think
workflows and standards will be Gator-like tools to deal with topo
changes (point cloud tools as necessary, also ptex-based workflows)
and Katana-like proceduralism for render-passes-like workflows.


I'm still wondering if a company ( not talking about Autodesk here ) will do
anything new like that for our little world. Money for such large dev projects
is just not in the animation/vfx world anymore. I'm not sarcastic, just
realistic. So let's embrace old techs like Maya or XSI. They won't evolve too
much but won't disappear for many (many) years.

Btw, Katana is not the future, it is now :).



Re: Redshift3D Render

2014-01-09 Thread James De Colling
Quick one, can you rendermap with Redshift?


On Thu, Jan 9, 2014 at 7:39 PM, Mirko Jankovic mirkoj.anima...@gmail.com wrote:

 Well I finally got nice reason to get 4 Titans.. mmm 4way SLI and 3 27
 mon stereo gaming.
 Soo nice to have same comp good for both job and gaming.




Re: Redshift3D Render

2014-01-09 Thread Stefan Kubicek

Afaik it's in development, just like hair & fur.

 quick one, can your rendermap with redshift?

[ICE] B-Spline/NURBS curve compound?

2014-01-09 Thread Martin Chatterjee
Hey there,

is anybody aware of an ICE compound solving B-Spline/NURBS curves?

You know, like the existing 'Bezier 4' and 'Bezier 5' compounds but for a
curve with an arbitrary number of control points?

Thanks in advance, cheers,

Martin
--
   Martin Chatterjee

[ Freelance Technical Director ]
[   http://www.chatterjee.de   ]
[ https://vimeo.com/chatterjee ]


Re: [ICE] B-Spline/NURBS curve compound?

2014-01-09 Thread Vladimir Jankijevic
there is the 'Piecewise Cubic B Spline' Compound you could use as a
starting point.

Cheers,
Vladimir
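
PS: In case a cross-check is useful, this is the standard uniform cubic B-spline segment evaluation that such a compound boils down to - a plain JScript sketch written from the textbook formula, not taken from the compound itself:

// Uniform cubic B-spline evaluation for an arbitrary number of control points.
// aPoints is an array of [x, y, z] arrays; u runs from 0 to aPoints.length - 3.
function EvaluateBSpline( aPoints, u )
{
    if ( aPoints.length < 4 ) {
        return null;                    // need at least four control points
    }

    var i = Math.floor( u );
    if ( i > aPoints.length - 4 ) {
        i = aPoints.length - 4;         // clamp onto the last segment at u = n - 3
    }
    var t = u - i;

    // uniform cubic B-spline basis weights for the four surrounding points
    var w0 = Math.pow( 1 - t, 3 ) / 6;
    var w1 = ( 3*t*t*t - 6*t*t + 4 ) / 6;
    var w2 = ( -3*t*t*t + 3*t*t + 3*t + 1 ) / 6;
    var w3 = t*t*t / 6;

    var aResult = new Array( 3 );
    for ( var c = 0; c < 3; c++ ) {
        aResult[c] = w0*aPoints[i][c] + w1*aPoints[i+1][c]
                   + w2*aPoints[i+2][c] + w3*aPoints[i+3][c];
    }
    return aResult;
}

// e.g. EvaluateBSpline( aPoints, 1.25 ) gives the point a quarter of the way
// along the second segment (control points 1 to 4).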


On Thu, Jan 9, 2014 at 11:21 AM, Martin Chatterjee 
martin.chatterjee.li...@googlemail.com wrote:

 Hey there,

 is anybody aware of an ICE compound solving B-Spline/NURBS curves?

 You know, like the existing 'Bezier 4' and 'Bezier 5' compounds but for a
 curve with an arbitrary number of control points?

 Thanks in advance, cheers,

 Martin
 --
Martin Chatterjee

 [ Freelance Technical Director ]
 [   http://www.chatterjee.de   ]
 [ https://vimeo.com/chatterjee ]



Re: [ICE] B-Spline/NURBS curve compound?

2014-01-09 Thread Martin Chatterjee
Cheers Vladimir, that definitely helps!

Can't believe I actually overlooked this compound in my search... :)

Martin


--
   Martin Chatterjee

[ Freelance Technical Director ]
[   http://www.chatterjee.de   ]
[ https://vimeo.com/chatterjee ]


On Thu, Jan 9, 2014 at 11:23 AM, Vladimir Jankijevic 
vladi...@elefantstudios.ch wrote:

 there is the 'Piecewise Cubic B Spline' Compound you could use as a
 starting point.

 Cheers,
 Vladimir


 On Thu, Jan 9, 2014 at 11:21 AM, Martin Chatterjee 
 martin.chatterjee.li...@googlemail.com wrote:

 Hey there,

 is anybody aware of an ICE compound solving B-Spline/NURBS curves?

 You know, like the existing 'Bezier 4' and 'Bezier 5' compounds but for a
 curve with an arbitrary number of control points?

 Thanks in advance, cheers,

 Martin
 --
Martin Chatterjee

 [ Freelance Technical Director ]
 [   http://www.chatterjee.de   ]
 [ https://vimeo.com/chatterjee ]





BUG: referenced models with gear rig swapping animation between characters.

2014-01-09 Thread Ognjen Vukovic
Hi guys, we have a bit of a strange situation here, as you can see by the
title...

A couple of referenced models with characters are swapping their animation in
random scenes. The bug seems to appear when the models are referenced, but
the moment you localize everything it seems to revert to the original
state, so I'm presuming it has something to do with the deltas going haywire.
But I have no clue how to attack this problem, since my knowledge of
animation and anything tied to animation is very limited.

I was hoping someone might have run into this before and might have a quick
fix.

Cheers,
Ogi.


RE: Rendering ZBrush displacement in Soft

2014-01-09 Thread Szabolcs Matefy
Actually, I finally managed to get displacement working. However, the displacement is
not as detailed as I'd like, but boosted with a bump map it looks fine.
Unfortunately, the midpoint seems to be off, and I have to somehow tune it. In ZB I
set the midpoint to 0.5, because if I set it to 0, it looked as if I had no
recesses on the skin. Now what I think is that I have to exaggerate the details
to make it work properly with skin shading, but that's another story.

It looks like the details are in the texture, but somehow the model
doesn't want to reflect them; maybe I should pump up the subdivision in the
displacement tab of the geoapprox PPG.

Cheers


Szabolcs

PS. For me, after GoZ is used I have two issues: 1) I can't Alt-Tab to
switch between tasks, I have to minimize ZBrush; 2) all playback functions and
simulation are no longer available in Softimage


Cheers




From: softimage-boun...@listproc.autodesk.com 
[mailto:softimage-boun...@listproc.autodesk.com] On Behalf Of Emilio Hernandez
Sent: Wednesday, January 08, 2014 4:58 PM
To: softimage@listproc.autodesk.com
Subject: Re: Rendering ZBrush displacement in Soft

I also use GoZ a lot. The only bug that happens for me is that if you send from
Softimage to Zbrush, make a UV check before you start sculpting.  Sometimes they
get screwed up.  So export an OBJ and import it in Zbrush, do your sculpt and send it
back to SI.  Make sure your UVs are consistent and flip the U before exporting from
Zbrush, or your map is going to come up flipped upside down.
Regularly I don't subd the mesh in Zbrush more than 4 subds, as you need to
subd your mesh the same amount inside SI to properly displace the geo when
rendering.  If you find that you need more subd in your sculpt, go
back, subd the mesh to get more polys and send it back to Zbrush; that way you can go
as high as 8 subds from the original mesh, which is a lot.

As far as I remember you can use a 32-bit depth image and plug that into the
scalar change range node.  But the bit depth of the bitmap has nothing to do with
the linear workflow, as the linear workflow is related to gamma-correct display
and rendering; these maps are there to drive the number of units to displace the
geometry inwards and outwards, to be interpreted by the render engine.
Here is a video that will help to understand displacement in SI and MR.

https://vimeo.com/29898426




2014/1/8 Cristobal Infante cgc...@gmail.com
You read it linear for sure. What exactly is your problem, are you not getting
enough detail? If that is the problem, it could also be that the UVs are not big
enough for each poly.

To be honest the final shader doesn't really matter for the disp, in fact GoZ
exports with a phong. I personally only use Architectural materials when in MR,
though.



On 8 January 2014 12:45, Szabolcs Matefy szabol...@crytek.com wrote:
OK, next question. If you are working in a linear workflow, how would you set up
the displacement? At the moment I feed the displacement map in as linear, and
scale it down with a Change Range node. The displacement map is an OpenEXR 32-bit
texture, and I am using the MILA mental ray shaders.

Cheers

Szabolcs

From: softimage-boun...@listproc.autodesk.com
[mailto:softimage-boun...@listproc.autodesk.com] On Behalf Of Cristobal Infante
Sent: Wednesday, January 08, 2014 12:08 PM

To: softimage@listproc.autodesk.com
Subject: Re: Rendering ZBrush displacement in Soft

GoZ for softimage all the way...

On 8 January 2014 09:24, Emilio Hernandez emi...@e-roja.com wrote:
The trick is the scalar change range node in the render tree to match the
displacement, so MR will know which values go up and which values go down.  Take
a look at the alpha value when exporting the displacement map from Zbrush.
That value should be placed in the scalar change range node in the render tree.
So let's say that the alpha value in Zbrush for the exported map is 0.054.  You
should plug the dm into the scalar change range node and set the minimum value
to -0.054 and the max to 0.054.  That way MR will know where the 0.0 value is and
the max and min displacement.
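
(Spelling that remap out, assuming the change range node simply maps the 0-1 map
value linearly onto the new min/max, which is how Emilio describes it: a map value
m becomes a displacement of d = -0.054 + m * (0.054 - (-0.054)) = 0.054 * (2m - 1),
so mid-grey m = 0.5 lands exactly on zero displacement, white pushes out by 0.054
units and black pulls in by the same amount.)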


2014/1/8 Florian Breg florian.b...@gmail.com

Have you tried GoZ for softimage? I used it a lot recently and it works like a 
charm.

With the click of a button you get your model imported into SI with all the 
maps attached and a change range node with the right values added to your 
displacement. It even flips the maps for you.

I think you can download it from the pixologic site.

Good Luck,
Florian
On 08.01.2014 09:18, Szabolcs Matefy szabol...@crytek.com wrote:

I'm sure that I'm the source of 

Re: BUG: referenced models with gear rig swapping animation between characters.

2014-01-09 Thread Ognjen Vukovic
It seems to be throwing a delta invalid error and shouting at me that the
emdl file was created in another version, but we are running the same
version of 2013 on every comp

 INFO : 4152 - Data loaded from file Z:\\Rusco_Rig.emdl was created
with build number: 11.0.525.0 - compatibility version: 1100

...


On Thu, Jan 9, 2014 at 11:49 AM, Ognjen Vukovic ognj...@gmail.com wrote:

 Hi guys, we have a bit of a strange situation here, as you can see by the
 title...

 A couple of referenced models with characters are swapping the animation
 in random scenes, the bug seems to appear when the models are referenced
 but the moment you localize everything it seems to revert to the original
 state, so im presuming it has something to do with the deltas going haywire
 but i have no clue as to how to attack this problem since my knowledge of
 animation and anything tied to animation is very limited.

 I was hoping someone might have run into this before and might have a
 quick fix.

 Cheers,
 Ogi.



Re: BUG: referenced models with gear rig swapping animation between characters.

2014-01-09 Thread Vladimir Jankijevic
That's just information saying that the emdl was created with build
11.0.525.0, i.e. 2013


On Thu, Jan 9, 2014 at 12:11 PM, Ognjen Vukovic ognj...@gmail.com wrote:

 It seems to be throwing a delta invalid error and shouting at me that the
 emdl file was created in another version, but we are running the same
 version of 2013 on every comp

  INFO : 4152 - Data loaded from file Z:\\Rusco_Rig.emdl was created
 with build number: 11.0.525.0 - compatibility version: 1100

 ...


 On Thu, Jan 9, 2014 at 11:49 AM, Ognjen Vukovic ognj...@gmail.com wrote:

 Hi guys, we have a bit of a strange situation here, as you can see by the
 title...

 A couple of referenced models with characters are swapping the animation
 in random scenes, the bug seems to appear when the models are referenced
 but the moment you localize everything it seems to revert to the original
 state, so im presuming it has something to do with the deltas going haywire
 but i have no clue as to how to attack this problem since my knowledge of
 animation and anything tied to animation is very limited.

 I was hoping someone might have run into this before and might have a
 quick fix.

 Cheers,
 Ogi.





Re: Redshift3D Render

2014-01-09 Thread Paul Griswold
​I just added 2 780ti's to my machine  use my 680 just to drive my
displays.  One of the negatives right now with Redshift (AFAIK) is that
it's not entirely optimized for more than 2 cards.  Once you get 3-4 cards
in place you start seeing diminishing returns.

But that might be an nVidia problem, not a Redshift problem.

I think the main thing you have to consider is: do you already have a
renderfarm that is CPU based, and are you willing to migrate to GPU-based
rendering?  I have rack-mounted machines that just have Intel motherboard
graphics & no space for a GPU, so for now Redshift is limited to just 2
machines.

For larger companies there's a space & heat issue as well.  Running just
the 2 780ti's all night long kept my office nice and toasty warm.  You need
to have a very large power supply as well as a case & motherboard that can
fit the cards.  It'd be very interesting to see a comparison of space, heat
and watts consumed for CPUs vs GPUs for rendering...

-Paul




On Thu, Jan 9, 2014 at 3:39 AM, Mirko Jankovic mirkoj.anima...@gmail.com wrote:

 Well I finally got nice reason to get 4 Titans.. mmm 4way SLI and 3 27
 mon stereo gaming.
 Soo nice to have same comp good for both job and gaming.




Re: BUG: referenced models with gear rig swapping animation between characters.

2014-01-09 Thread Ognjen Vukovic
 Sorry, my bad, still working in autopilot mode. We reverted back to
some old models from before the shaders were attached and the problem went away,
so I can't catch the error now, but it might be something linked to
the fact that the rigger used the same names for certain geometry in
different models? Not quite sure yet, but I'm sure it will sneak up again
during the day.


On Thu, Jan 9, 2014 at 12:15 PM, Vladimir Jankijevic 
vladi...@elefantstudios.ch wrote:

 That's just an information that the emdl was created with version
 11.0.525.0 - 2013


 On Thu, Jan 9, 2014 at 12:11 PM, Ognjen Vukovic ognj...@gmail.com wrote:

 It seems to be throwing a delta invalid error and shouting at me that the
 emdl file was created in another version, but we are running the same
 version of 2013 on every comp

  INFO : 4152 - Data loaded from file Z:\\Rusco_Rig.emdl was created
 with build number: 11.0.525.0 - compatibility version: 1100

 ...


 On Thu, Jan 9, 2014 at 11:49 AM, Ognjen Vukovic ognj...@gmail.comwrote:

 Hi guys, we have a bit of a strange situation here, as you can see by
 the title...

 A couple of referenced models with characters are swapping the animation
 in random scenes, the bug seems to appear when the models are referenced
 but the moment you localize everything it seems to revert to the original
 state, so im presuming it has something to do with the deltas going haywire
 but i have no clue as to how to attack this problem since my knowledge of
 animation and anything tied to animation is very limited.

 I was hoping someone might have run into this before and might have a
 quick fix.

 Cheers,
 Ogi.






exporting camera roll - FBX?

2014-01-09 Thread Paul Griswold
I have an animated camera where I've also animated the roll on it.

When I export the scene as FBX, the camera imports without the roll.

Is there a way to include it?


Thanks,

Paul



Re: Redshift3D Render

2014-01-09 Thread James De Colling
How diminishing are we talking? Some enthusiast motherboards support up to 6
cards. Also, does PCIe slot speed make a difference?


On Thu, Jan 9, 2014 at 10:40 PM, Paul Griswold 
pgrisw...@fusiondigitalproductions.com wrote:

 ​I just added 2 780ti's to my machine  use my 680 just to drive my
 displays.  One of the negatives right now with Redshift (AFAIK) is that
 it's not entirely optimized for more than 2 cards.  Once you get 3-4 cards
 in place you start seeing diminishing returns.

 But that might be an nVidia problem, not a Redshift problem.

 I think the main thing you have to consider is - do you already have a
 renderfarm that is CPU based and are you willing to migrate to GPU-based
 rendering?  I have rack-mounted machines that just have Intel motheboard
 graphics  no space for a GPU, so for now Redshift is limited to just 2
 machines.

 For larger companies there's a space  heat issue as well.  Running just
 the 2 780ti's all night long kept my office nice and toasty warm.  You need
 to have a very large power supply as well as a case  motherboard that can
 fit the cards.  It'd be very interesting to see a comparison of space, heat
 and watts consumed for CPU's vs GPU's for rendering...

 -Paul





Re: Linking a light (Sun) to a Physical sky ?

2014-01-09 Thread olivier jeannel

Bump...

So nobody has a method ?

On 08/01/2014 16:32, olivier jeannel wrote:

Very dumb question, I did it before, I'm sure...
How do I link a light (infinite - sun) rotation to a vector (Sun
direction) in a Physical Sky property page? (So that my light is a
sun...)

I'm in redshift...






Re: rigging in xsi vs maya

2014-01-09 Thread Guillaume Laforge
I didn't read every post, so maybe my understanding is wrong, but based on the
last replies from Luc-Eric and Tim Leydecker it sounds like point cloud
scanning is a rigging feature.
It is not, so let's return to the subject please :).

That illustrates well that it is much easier to put money on new techs
(like point cloud scanning, web based applications, etc...) than to think
about how to improve/re-design an existing workflow like character rigging!
We saw some new systems in modeling (ZBrush etc...) and rendering
(Katana) some years ago, but still nothing in the rigging area. It makes
sense, as rigging is really a different culture. You need to be a good
character rigger to understand and build a good rigging system. But being a
good character rigger means spending a lot of time on existing tools like Maya
or XSI. In the end you think only through the proposed tools of your app.
If you are a developer interested in designing a rigging system, it is the
opposite problem: you can have a fresh new vision but you can miss
important concepts of character rigging in your tool.

Interesting subject, if you forget about Maya and XSI :)

Cheers,

Guillaume Laforge





On Thu, Jan 9, 2014 at 4:18 AM, Tim Leydecker bauero...@gmx.de wrote:

 Autodesk is doing a lot of development in the area of 3D scan data
 handling.

 If you look into what is going on in the area of topology data aquisition
 for
 architecture, engineering and the military, there is a shift towards 3D
 pointcloud
 data which imho is compareable to what 2D tracking as a concept brought us
 in the 90s.

 (Facial recognition and finally image based modeling and camera positional
 data)

 It is at hand that the more complex, raw 3D point cloud data will need new
 and abstracted ways
 of handling and manipulation, filtering options and adaptive control
 layers for approximated data.

 The implication such data for 3D animation brings is that the concept of
 wheighting a fixed number
 of vertices to a bone may have to be extended beyond a fixed number of
 polygons.

 Unfortunately, taking fall-off based volume wheighting as in it´s current
 level of finesse
 may give worse results than before, especially if your shape options for
 the influence volume
 are limited to capsules, boxes or spheres.

 I am a bit worried that the process of riggingwheighting an organic
 character will become even
 more frustrating and stiff or at least will need even more steps, like
 creating an extra controlsurface
 with a fixed number of points and wrapping it around the high-density data.

 Such a wrap-deformer takes away control. It´s always the rims and little
 caveats that need extra care.

 Cheers,

 tim









 On 09.01.2014 02:13, Guillaume Laforge wrote:

 On Wed, Jan 8, 2014 at 7:55 PM, Luc-Eric Rousseau luceri...@gmail.com wrote:

 In the new future  ( not talking about autodesk here)  I think
 workflows will standards will be Gator-like tools to deal with topo
 changes (point clouds tools as necessary also ptex-based workflows)
 and katana-like proceduralism for render passes-like workflow.


 I'm still wondering if a company ( not talking about Autodesk here ) will
 do anything new like that for our little world. Money for such large dev
 projects is just not in the
 animation/vfx world anymore. I'm not sarcastic, just realist. So lets
 embrace old techs like Maya or XSI. They won't evolve too much but won't
 disappear before many (many) years.

 Btw, Katana is not the futur, it is now :).




Re: Redshift3D Render

2014-01-09 Thread Paul Griswold
There was a discussion on the RS forums about it.  I don't recall the
numbers, though.  I don't think the speed of the PCIe slot made a huge
difference.  It's really all about the speed of the card.

Also, although it doesn't load the entire scene into your card's memory,
the more memory your card has, the better it is.

But overall, for the type of work I'm mainly doing these days, it's
extremely fast.  In fact, it's so fast that I was finding the bottleneck
was the time taken to export the mesh to Redshift, not rendering.  Redshift
has a proxy system like Vray & Arnold, but you have to manually create
proxies per object & my scene had hundreds and hundreds of objects, so I
didn't have time to create them.  Therefore, it was creating a renderable
mesh per frame - so on a frame that took 28 seconds to render, 20 seconds
was spent exporting the mesh and 8 seconds were spent on rendering.  But
again, it's a beta and they're continuing to improve things like the proxy
system.

Once I'm caught up I'm hoping to try rendering the classroom scene and see
how it does.

-Paul




Re: Linking a light (Sun) to a Physical sky ?

2014-01-09 Thread Rob Chapman
Render > Edit > Init Physical Sky? This one constrains the light
direction into the shader for you (mental Ray) - maybe you could copy the
expression from here?


On 9 January 2014 11:54, olivier jeannel olivier.jean...@noos.fr wrote:

 Bump...

 So nobody has a method ?


 On 08/01/2014 16:32, olivier jeannel wrote:

 Very dumb question, I did it before, I'm sure...

 How do I link a light (infinite - sun) rotation to a vector (Sun
 direction) in a Physicla sky property page ? (So that my light is a sun...)
 I'm in redshift...






Re: rigging in xsi vs maya

2014-01-09 Thread Tim Leydecker

Hey Guillaume,

go and skin/rig/weight a raw 3D scan mesh directly to bones.

Look at what is coming in terms of animation and skeleton recognition
in the Xbox Kinect SDK and the Xbox One.

Cheers,

tim




On 09.01.2014 13:09, Guillaume Laforge wrote:

I didn't read every posts so maybe my understanding is wrong but based in last 
replies from Luc-Eric and Tim Leydecker, it sounds like point cloud scanning is 
a rigging feature.
It is not, so lets return to the subject please :).

That illustrate well that it is much more easy to put money on new techs (like 
point cloud scanning, web based applications, etc...) than to think about how 
to improve/re-design an
existing workflow like character rigging ! We saw some new systems in modeling 
(ZBrush etc...) and rendering (Katana) some years ago, but still nothing in the 
rigging area. It make
sense as rigging is really a different culture. You need to be a good character 
rigger to understand and build a good rigging system. But being a good 
character rigger means spend
a lot of time on existing tools like Maya or XSI. At the end you think only 
through the proposed tools of your app. If you are a developer interested in 
designing a rigging system,
it is the opposite problem, you can have a fresh new vision but you can miss 
important concepts of character rigging in your tool.

Interesting subject, if you forget about Maya and XSI :)

Cheers,

Guillaume Laforge









Re: exporting camera roll - FBX?

2014-01-09 Thread David Barosin
Plot it before you export?  The roll I believe is from a direction
constraint.


On Thu, Jan 9, 2014 at 6:46 AM, Paul Griswold 
pgrisw...@fusiondigitalproductions.com wrote:

 I have an animated camera where I've also animated the roll on it.

 When I export the scene as FBX, the camera imports without the roll.

 Is there a way to include it?


 Thanks,

 Paul




Re: exporting camera roll - FBX?

2014-01-09 Thread Paul Griswold
I've tried both plotted and unplotted.

It seems to be a Fusion issue, not Softimage.  For some reason Fusion can't
see animated roll on the camera, even when it's plotted.

-Paul



On Thu, Jan 9, 2014 at 7:34 AM, David Barosin dbaro...@gmail.com wrote:

 Plot it before you export?  The roll I believe is from a direction
 constraint.


 On Thu, Jan 9, 2014 at 6:46 AM, Paul Griswold 
 pgrisw...@fusiondigitalproductions.com wrote:

 I have an animated camera where I've also animated the roll on it.

 When I export the scene as FBX, the camera imports without the roll.

 Is there a way to include it?


 Thanks,

 Paul






Re: Linking a light (Sun) to a Physical sky ?

2014-01-09 Thread olivier jeannel

N
There is a Render /Edit/Create RedShift SkyShader
I NEVER go through these menus..
Banging my head on the desk


Pulverized by the shame...

Thank you oh grandmaster of the infinite knowledge !



On 09/01/2014 13:14, Rob Chapman wrote:
Render > Edit > Init Physical Sky? This one constrains the light
direction into the shader for you (mental Ray) - maybe you could copy
the expression from here?



On 9 January 2014 11:54, olivier jeannel olivier.jean...@noos.fr wrote:


Bump...

So nobody has a method ?


On 08/01/2014 16:32, olivier jeannel wrote:

Very dumb question, I did it before, I'm sure...

How do I link a light (infinite - sun) rotation to a vector
(Sun direction) in a Physicla sky property page ? (So that my
light is a sun...)
I'm in redshift...








Re: exporting camera roll - FBX?

2014-01-09 Thread David Barosin
Roll is really just rotation.  Is any rotation making it over?

To be thorough, constrain a null to the soft camera - plot it and copy the
animation back to the camera.
Export the camera with the interest.


On Thu, Jan 9, 2014 at 7:35 AM, Paul Griswold 
pgrisw...@fusiondigitalproductions.com wrote:

 I've tried both plotted and unplotted.

 It seems to be a Fusion issue, not Softimage.  For some reason Fusion
 can't see animated roll on the camera, even when it's plotted.

 -Paul



 On Thu, Jan 9, 2014 at 7:34 AM, David Barosin dbaro...@gmail.com wrote:

 Plot it before you export?  The roll I believe is from a direction
 constraint.


 On Thu, Jan 9, 2014 at 6:46 AM, Paul Griswold 
 pgrisw...@fusiondigitalproductions.com wrote:

 I have an animated camera where I've also animated the roll on it.

 When I export the scene as FBX, the camera imports without the roll.

 Is there a way to include it?


 Thanks,

 Paul







Re: rigging in xsi vs maya

2014-01-09 Thread Sebastien Sterling
Why not a node based rigging system? (Not necessarily an ICE node system,
but its own thing.) You arrange your nulls, you add rig trees to them in a
small interface graph where you have nodes for different behaviours like
IK, FK, HIK, twist, stretch; you plug the nulls in according to the hierarchy
you want, and each node has its own params so you can expose, lock or modify
them in the rig or synoptic. I'm sure such a system wouldn't cover
everything; it's often what I get told, that rigging is so complex a process
that in the end the longest, traditional method is the only one that allows
for the flexibility and reactivity necessary for a pipe. In spite of this I
think such a system has merit and deserves to go past prototype, if only
to offer another perspective. It's quite probable that neither XSI's nor Maya's
architecture is able to accommodate such a system natively, but plug-ins
like Yeti are basically their own independent little engines running
within the shell of a DCC, and the same is true for Fabric I assume.
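
Just to make the idea concrete, a toy sketch of what I mean (nothing to do with any existing tool, all the node types and names below are made up):

// Toy illustration only: the rig described as a graph of behaviour nodes
// that reference scene nulls, rather than built by hand as the rig itself.
// Every name and field here is invented for the example.
var rigGraph = {
    nodes : [
        { id : "spine",   type : "ik",    inputs : [ "hip_null", "chest_null" ],           params : { stretch : true } },
        { id : "arm_L",   type : "fk",    inputs : [ "shoulder_L", "elbow_L", "wrist_L" ], params : {} },
        { id : "twist_L", type : "twist", inputs : [ "elbow_L", "wrist_L" ],               params : { divisions : 3 } }
    ],
    // connections express the hierarchy you plug the behaviours into
    connections : [
        { from : "spine", to : "arm_L" },
        { from : "arm_L", to : "twist_L" }
    ]
};

// a builder would walk the graph and emit the actual constraints/operators;
// here it just logs what it would build
for ( var i = 0; i < rigGraph.nodes.length; i++ ) {
    var oNode = rigGraph.nodes[i];
    LogMessage( "build '" + oNode.type + "' behaviour '" + oNode.id + "' on: " + oNode.inputs.join( ", " ) );
}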


On 9 January 2014 13:09, Guillaume Laforge guillaume.laforge...@gmail.com wrote:

 I didn't read every post so maybe my understanding is wrong, but based on
 the last replies from Luc-Eric and Tim Leydecker, it sounds like point cloud
 scanning is a rigging feature.
 It is not, so let's return to the subject please :).

 That illustrates well that it is much easier to put money on new techs
 (like point cloud scanning, web based applications, etc...) than to think
 about how to improve/re-design an existing workflow like character rigging!
 We saw some new systems in modeling (ZBrush etc...) and rendering
 (Katana) some years ago, but still nothing in the rigging area. It makes
 sense as rigging is really a different culture. You need to be a good
 character rigger to understand and build a good rigging system. But being a
 good character rigger means spending a lot of time on existing tools like Maya
 or XSI. In the end you think only through the proposed tools of your app.
 If you are a developer interested in designing a rigging system, it is the
 opposite problem: you can have a fresh new vision but you can miss
 important concepts of character rigging in your tool.

 Interesting subject, if you forget about Maya and XSI :)

 Cheers,

 Guillaume Laforge





 On Thu, Jan 9, 2014 at 4:18 AM, Tim Leydecker bauero...@gmx.de wrote:

 Autodesk is doing a lot of development in the area of 3D scan data
 handling.

 If you look into what is going on in the area of topology data acquisition
 for
 architecture, engineering and the military, there is a shift towards 3D
 pointcloud
 data which imho is comparable to what 2D tracking as a concept brought
 us in the 90s.

 (Facial recognition and finally image based modeling and camera
 positional data)

 It is at hand that the more complex, raw 3D point cloud data will need
 new and abstracted ways
 of handling and manipulation, filtering options and adaptive control
 layers for approximated data.

 The implication such data brings for 3D animation is that the concept of
 weighting a fixed number
 of vertices to a bone may have to be extended beyond a fixed number of
 polygons.

 Unfortunately, fall-off based volume weighting at its current
 level of finesse
 may give worse results than before, especially if your shape options for
 the influence volume
 are limited to capsules, boxes or spheres.

 I am a bit worried that the process of rigging & weighting an organic
 character will become even
 more frustrating and stiff, or at least will need even more steps, like
 creating an extra control surface
 with a fixed number of points and wrapping it around the high-density
 data.

 Such a wrap-deformer takes away control. It's always the rims and little
 caveats that need extra care.

 Cheers,

 tim









 On 09.01.2014 02:13, Guillaume Laforge wrote:

  On Wed, Jan 8, 2014 at 7:55 PM, Luc-Eric Rousseau 
 luceri...@gmail.commailto:
 luceri...@gmail.com wrote:

 In the near future ( not talking about autodesk here ) I think
 workflows and standards will be Gator-like tools to deal with topo
 changes (point cloud tools as necessary, also ptex-based workflows)
 and Katana-like proceduralism for render-passes-like workflows.


 I'm still wondering if a company ( not talking about Autodesk here )
 will do anything new like that for our little world. Money for such large
 dev projects is just not in the
 animation/vfx world anymore. I'm not sarcastic, just a realist. So let's
 embrace old techs like Maya or XSI. They won't evolve too much but won't
 disappear for many (many) years.

 Btw, Katana is not the future, it is now :).





Aw: Re: exporting camera roll - FBX?

2014-01-09 Thread Leo Quensel
Fusion is completely unable to properly import XSI cameras - I already tried pretty much everything. We ended up exporting to maya via alembic (cause fbx cameras produce a mismatch) and export a .ma file from there -_-


Gesendet:Donnerstag, 09. Januar 2014 um 13:51 Uhr
Von:Paul Griswold pgrisw...@fusiondigitalproductions.com
An:softimage@listproc.autodesk.com softimage@listproc.autodesk.com
Betreff:Re: exporting camera roll - FBX?



Haha - you must be reading my mind. That's the next thing I tried. 



Oddly enough, it still doesn't work. I even tried plotting the null & exporting it, but the null came in with no animation at all.



I think Fusion can read dotXSI cameras, so that's next on my list to test...



-Paul







On Thu, Jan 9, 2014 at 7:46 AM, David Barosin dbaro...@gmail.com wrote:




Roll is really just rotation. Is any rotation making it over?

To be thorough, constrain a null to the soft camera - plot it and copy the animation back to the camera.

Export the camera with the interest.




On Thu, Jan 9, 2014 at 7:35 AM, Paul Griswold pgrisw...@fusiondigitalproductions.com wrote:



I've tried both plotted and unplotted.



It seems to be a Fusion issue, not Softimage. For some reason Fusion can't see animated roll on the camera, even when it's plotted.



-Paul









On Thu, Jan 9, 2014 at 7:34 AM, David Barosin dbaro...@gmail.com wrote:


Plot it before you export? The roll I believe is from a direction constraint.




On Thu, Jan 9, 2014 at 6:46 AM, Paul Griswold pgrisw...@fusiondigitalproductions.com wrote:



I have an animated camera where I've also animated the roll on it.



When I export the scene as FBX, the camera imports without the roll.



Is there a way to include it?





Thanks,



Paul





























Re: exporting camera roll - FBX?

2014-01-09 Thread Francisco Criado
Here is my tip for exporting cameras to other packages, and I have never had any
trouble. Animate your camera and when ready, first plot the transformations of
the camera itself and then the interest. Then I plot the constrained
transforms from the camera (for the roll) and then delete the interest from
the scene, leaving just the camera object.
That worked perfectly for me; even Maya recognizes the image plane property
when I had a rotoscope image in my camera viewport.

Hope it helps.

Francisco.

On Thursday, January 9, 2014, Leo Quensel wrote:

 Fusion is completely unable to properly import XSI cameras - I already
 tried pretty much everything. We ended up exporting to maya via alembic
 (cause fbx cameras produce a mismatch) and export a .ma file from there -_-

 *Gesendet:* Donnerstag, 09. Januar 2014 um 13:51 Uhr
 *Von:* Paul Griswold pgrisw...@fusiondigitalproductions.com

 *An:* softimage@listproc.autodesk.com

 *Betreff:* Re: exporting camera roll - FBX?
  Haha - you must be reading my mind.  That's the next thing I tried.

 Oddly enough, it still doesn't work.  I even tried plotting the null &
 exporting it, but the null came in with no animation at all.

 I think Fusion can read dotXSI cameras, so that's next on my list to
 test...

 -Paul


 On Thu, Jan 9, 2014 at 7:46 AM, David Barosin dbaro...@gmail.com wrote:

  Roll is really just rotation.  Is any rotation making it over?

 To be thorough, constrain a null to the soft camera - plot it and copy the
 animation back to the camera.
  Export the camera with the interest.

 On Thu, Jan 9, 2014 at 7:35 AM, Paul Griswold 
 pgrisw...@fusiondigitalproductions.com wrote:

  I've tried both plotted and unplotted.

 It seems to be a Fusion issue, not Softimage.  For some reason Fusion
 can't see animated roll on the camera, even when it's plotted.

 -Paul


 On Thu, Jan 9, 2014 at 7:34 AM, David Barosin dbaro...@gmail.com wrote:

 Plot it before you export?  The roll I believe is from a direction
 constraint.

 On Thu, Jan 9, 2014 at 6:46 AM, Paul Griswold 
 pgrisw...@fusiondigitalproductions.com wrote:

  I have an animated camera where I've also animated the roll on it.




Re: Redshift3D Render

2014-01-09 Thread Mirko Jankovic
Here are some details from testing I did a while ago:



On Thu, Jan 9, 2014 at 1:11 PM, Paul Griswold 
pgrisw...@fusiondigitalproductions.com wrote:

 There was a discussion on the RS forums about it.  I don't recall the
 numbers, though.  I don't think the speed of the PCIe slot made a huge
 difference.  It's really all about the speed of the card.

 Also, although it doesn't load the entire scene into your card's memory,
 the more memory your card has, the better it is.

 But overall, for the type of work I'm mainly doing these days, it's
 extremely fast.  In fact, it's so fast that I was finding the bottleneck
 was the time taken to export the mesh to Redshift, not rendering.  Redshift
  has a proxy system like Vray & Arnold, but you have to manually create
  proxies per object & my scene had hundreds and hundreds of objects, so I
 didn't have time to create them.  Therefore, it was creating a renderable
 mesh per frame - so on a frame that took 28 seconds to render, 20 seconds
 was spent exporting the mesh and 8 seconds were spent on rendering.  But
 again, it's a beta and they're continuing to improve things like the proxy
 system.

 Once I'm caught up I'm hoping to try rendering the classroom scene and see
 how it does.

 -Paul





Re: rigging in xsi vs maya

2014-01-09 Thread Stefan Kubicek

go and skin/rig/wheight a raw 3D scan mesh directly to bones.



What would be a typical scenario for this? The point count is adjustable
(at the expense of detail),
but the topology will always be a mess unless properly retopo-ed, wouldn't
it?


I agree that the rigging paradigm needs some rethinking. I grew up with
black box systems like Character Studio and CAT. Creating a rig based on
those takes only minutes to hours, not days, but they lack
customizability. Yet, results were good enough that I was constantly
asking myself why anyone could possibly want to use anything else
for 90% of the work you see being produced anywhere. It's such a huge cost
factor, both in terms of the time it takes to create the rig and the time it takes
to troubleshoot and maintain it if it breaks (which the black box systems
next to never do) or needs extensions. Autoriggers (Gear etc) reduce the
creation time factor at the expense of flexibility, yet the maintenance
aspect stays to a certain degree. What I also miss in them is the ability
to have a mesh enveloped to joints and just put the rig on top, allowing
you to test deformations directly by posing the envelope rig without having to
create a control rig - a given with the black box systems because the
control rig _is_ the envelope rig. The only thing I know that works in a
similar way is Motion Builder, in that you import your enveloped mesh and
joints and apply rigging solvers to it, again at the expense of
flexibility - it only supports humanoid and 4-legged creatures.
Fabric/Osirirs looks like it could deliver such a paradigm change - a
modular rigging system where the building blocks are encapsulated and the
asset that the user interacts with in the scene is light-weight, fast and
easy to manipulate, and hard to break. I'm really looking forward to that,
even though flexibility beyond a certain point will probably need to be
paid for with programming knowledge and -time again.



Look at what comes in terms of animation and skeleton recognition
in the xbox kinect sdk and the xbox one.

Cheers,

tim




On 09.01.2014 13:09, Guillaume Laforge wrote:
I didn't read every posts so maybe my understanding is wrong but based  
in last replies from Luc-Eric and Tim Leydecker, it sounds like point  
cloud scanning is a rigging feature.

It is not, so lets return to the subject please :).

That illustrate well that it is much more easy to put money on new  
techs (like point cloud scanning, web based applications, etc...) than  
to think about how to improve/re-design an
existing workflow like character rigging ! We saw some new systems in  
modeling (ZBrush etc...) and rendering (Katana) some years ago, but  
still nothing in the rigging area. It make
sense as rigging is really a different culture. You need to be a good  
character rigger to understand and build a good rigging system. But  
being a good character rigger means spend
a lot of time on existing tools like Maya or XSI. At the end you think  
only through the proposed tools of your app. If you are a developer  
interested in designing a rigging system,
it is the opposite problem, you can have a fresh new vision but you can  
miss important concepts of character rigging in your tool.


Interesting subject, if you forget about Maya and XSI :)

Cheers,

Guillaume Laforge





On Thu, Jan 9, 2014 at 4:18 AM, Tim Leydecker bauero...@gmx.de  
mailto:bauero...@gmx.de wrote:


Autodesk is doing a lot of development in the area of 3D scan data  
handling.


If you look into what is going on in the area of topology data  
aquisition for
architecture, engineering and the military, there is a shift  
towards 3D pointcloud
data which imho is compareable to what 2D tracking as a concept  
brought us in the 90s.


(Facial recognition and finally image based modeling and camera  
positional data)


It is at hand that the more complex, raw 3D point cloud data will  
need new and abstracted ways
of handling and manipulation, filtering options and adaptive  
control layers for approximated data.


The implication such data for 3D animation brings is that the  
concept of wheighting a fixed number
of vertices to a bone may have to be extended beyond a fixed number  
of polygons.


Unfortunately, taking fall-off based volume wheighting as in it´s  
current level of finesse
may give worse results than before, especially if your shape  
options for the influence volume

are limited to capsules, boxes or spheres.

I am a bit worried that the process of riggingwheighting an  
organic character will become even
more frustrating and stiff or at least will need even more steps,  
like creating an extra controlsurface
with a fixed number of points and wrapping it around the  
high-density data.


Such a wrap-deformer takes away control. It´s always the rims and  
little caveats that need extra care.


Cheers,

tim










Re: Stopping simulation!?

2014-01-09 Thread Morten Bartholdy
Thanks Oscar. I used to do that, but Stephen Blair or Luc Eric informed at
one point that it had been fixed from 2012 or 2013 and it was working fine
in 2013 SP1 here in the fall. I will try the kb arrow. At least the Mootz tools
react to 2 x CTRL.

Morten




Den 8. januar 2014 kl. 18:51 skrev Oscar Juarez
tridi.animei...@gmail.com:

 You can try this
 
 http://xsisupport.com/2011/03/11/softimage-blog-%C2%BB-work-around-to-problems-with-tablets-and-softimage/
 http://xsisupport.com/2011/03/11/softimage-blog-%C2%BB-work-around-to-problems-with-tablets-and-softimage/
 
 If it still not working, try to stop playback with keyboard, down arrow.
 
 
 On Wed, Jan 8, 2014 at 4:28 PM, Emilio Hernandez  emi...@e-roja.com
 mailto:emi...@e-roja.com  wrote:
   Now that you mention it, I am having the same issue.  Any ideas?  Thx
  
  
  
  2014/1/8 Morten Bartholdy  x...@colorshopvfx.dk
  mailto:x...@colorshopvfx.dk 
   I have had the doubtful privilege of having to do a variety of simulations
    lately in Soft, and if I remember correctly, in the fall I was able to
    stop
    simulations by mouse and pen input, but now I can't do that any longer -
    actually no matter how simple the sim is.
   
   I am on SI 2013 SP1 Win7 and the only thing I have installed in the
   meantime is Momentum. I have not tried removing it, and before doing that
   I
   just want to hear if anyone else has experienced something like this and
   know of a fix?
   
   Thanks
   
   Morten
   
   
   


Re: Redshift3D Render

2014-01-09 Thread Chris Johnson
Daniel,

You mentioned using it in conjunction with Arnold... in what way would you use
both together? As in, render some elements in Arnold and some in Redshift?


On Thu, Jan 9, 2014 at 8:26 AM, Mirko Jankovic mirkoj.anima...@gmail.comwrote:

 Here are some details from testing I did a while ago:



 On Thu, Jan 9, 2014 at 1:11 PM, Paul Griswold 
 pgrisw...@fusiondigitalproductions.com wrote:

 There was a discussion on the RS forums about it.  I don't recall the
 numbers, though.  I don't think the speed of the PCIe slot made a huge
 difference.  It's really all about the speed of the card.

 Also, although it doesn't load the entire scene into your card's memory,
 the more memory your card has, the better it is.

 But overall, for the type of work I'm mainly doing these days, it's
 extremely fast.  In fact, it's so fast that I was finding the bottleneck
 was the time taken to export the mesh to Redshift, not rendering.  Redshift
  has a proxy system like Vray & Arnold, but you have to manually create
  proxies per object & my scene had hundreds and hundreds of objects, so I
 didn't have time to create them.  Therefore, it was creating a renderable
 mesh per frame - so on a frame that took 28 seconds to render, 20 seconds
 was spent exporting the mesh and 8 seconds were spent on rendering.  But
 again, it's a beta and they're continuing to improve things like the proxy
 system.

 Once I'm caught up I'm hoping to try rendering the classroom scene and
 see how it does.

 -Paul







Re: rumor, Soft dead within the next year

2014-01-09 Thread Francisco Criado
Yes please! Every morning I wake up, the first thing I do is check mails
(bad habit) and find myself with the rumor that Soft... and it's not very
positive checking it every single day for the last 20 days.
F.


On Wednesday, January 8, 2014, Rob Wuijster wrote:

  Hi chaps,

 Now we're moving on to a render discussion about Redshift and such, can we
 stop using this thread?
 Otherwise this 'doom thread' will live on for a while ;-)

 Rob

 \/-\/\/

 On 8-1-2014 20:04, Tim Leydecker wrote:

 It's worth using the Redshift3D shaders; the new blend material is really
 nice,
 normal map blending works nicely, and the conductor/dielectric option to
 drive
 reflection gives believable metal reflection behaviour easily.

 You'll also get better (lights/shadow) sampling compared to using default
 shaders.

 Imho, if you spent time with mR or VRay or Arnold shaders, you will have
 no problem transfering your knowledge to Redshift3D.

 In terms of benefiting from speed while tweaking, go and set the renderer's
 threshold to
 0.2 or even higher; I find that is good enough for judging light/color
 intensities and
 gives fast turnaround.

 Personally, I tend to push per-light samples higher than default,
 even if that is not
 necessary in Redshift3D's unified sampling approach; to me it feels like I
 have influence on
 the weight of the samples anyway.

 Enjoy.

 It's really, really cool.

 Cheers,

 tim



 On 08.01.2014 19:08, Byron Nash wrote:

 When switching over to Redshift, are you all typically redoing the shaders
 using the Redshift ones or trying to rely on the compatibility with
 standard ones? I'm interested to
 check it out but would like to approach it correctly.

 Thanks,
 Byron


 On Wed, Jan 8, 2014 at 11:41 AM, Emilio Hernandez emi...@e-roja.com
 mailto:emi...@e-roja.com wrote:

 It sounds promising.  I don't know.

 The funny thing is that Quadros actually render slower than GTX cards in my
 experience, as they have fewer CUDA cores.  My GTX470 alone rendered faster
 than a Quadro 3000.  As the
 GTX is more focused on games and the Quadros on faster video display
 processing, the Quadros have lower memory bandwidth and fewer CUDA cores.
 At least from the last comparisons
 I have been doing on the Nvidia site.  Actually I was planning to upgrade
 my GTX470 to a GTX 780Ti instead of the Titan.  A few bucks off the price
 and it has excellent specs.

 GTX 780 Ti GPU Engine Specs:
 2880 CUDA Cores
 875 Base Clock (MHz)
 928 Boost Clock (MHz)
 210 Texture Fill Rate (GigaTexels/sec)

 GTX 780 Ti Memory Specs:
 7.0 Gbps Memory Clock
 3072 MB Standard Memory Config
 GDDR5 Memory Interface
 384-bit Memory Interface Width
 336 Memory Bandwidth (GB/sec)
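
As a quick sanity check on that spec sheet (just arithmetic on the numbers listed above): per-pin memory clock times bus width gives the quoted bandwidth.

# effective memory clock (Gbps per pin) x bus width (bits) / 8 = GB/sec
memory_clock_gbps = 7.0
bus_width_bits = 384
bandwidth_gb_per_s = memory_clock_gbps * bus_width_bits / 8.0
print(bandwidth_gb_per_s)   # 336.0 -- matches the quoted 336 GB/sec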


  From these numbers, what you look at to see which GPU will
 perform faster is the number of CUDA cores and the memory bandwidth.  The
 higher the better, as the
 memory bandwidth is how fast the data can be transferred to memory to be
 processed by the CUDA cores.

 Some guys are already using Redshift with RoyalRender.  I don't know how
 fast they are rendering, but now you can have a render farm with cheap
 processors and a couple of these GPUs
 inside.

 A quick example.

 The same scene in round numbers per frame in my machine.

 Arnold:   15 min
 Redshift:  4 min

 So you can expect at least a reduction of 73% in your render times.










 2014/1/8 Dan Yargici danyarg...@gmail.com
 mailto:danyarg...@gmail.com

 Anyone tried using gpubox with Redshift?

 http://renegatt.com/

 -
 No virus found in this message.
 Checked by AVG - www.avg.com
 Version: 2014.0.4259 / Virus Database: 3658/6986 - Release Date: 01/08/14






Re: [ICE] B-Spline/NURBS curve compound?

2014-01-09 Thread Daniel Brassard
I did some experiments in ICE with cubic B-splines. You can see the results on
si-community; there are even some compounds you can download and experiment with.

http://www.si-community.com/community/viewtopic.php?f=15t=1802

Hope that helps.

Daniel
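
For anyone curious what such a compound boils down to mathematically, here is a small sketch of the uniform cubic B-spline blending it is built on (plain Python, not the ICE compound itself), for an arbitrary number of control points:

# Evaluate a uniform cubic B-spline at a global parameter t in [0, 1].
# points: a list of (x, y, z) tuples, at least 4 of them.
def cubic_bspline_point(points, t):
    n_seg = len(points) - 3                       # number of cubic segments
    if n_seg < 1:
        raise ValueError("need at least 4 control points")
    s = min(t * n_seg, n_seg - 1e-9)              # map t onto the segments
    i, u = int(s), s - int(s)                     # segment index, local parameter
    b0 = (1 - u) ** 3 / 6.0                       # uniform cubic B-spline basis
    b1 = (3 * u**3 - 6 * u**2 + 4) / 6.0
    b2 = (-3 * u**3 + 3 * u**2 + 3 * u + 1) / 6.0
    b3 = u**3 / 6.0
    p0, p1, p2, p3 = points[i:i + 4]
    return tuple(b0 * a + b1 * b + b2 * c + b3 * d
                 for a, b, c, d in zip(p0, p1, p2, p3))

# e.g. cubic_bspline_point([(0, 0, 0), (1, 2, 0), (2, 2, 0), (3, 0, 0), (4, 1, 0)], 0.5)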



On Thu, Jan 9, 2014 at 5:38 AM, Martin Chatterjee 
martin.chatterjee.li...@googlemail.com wrote:

 Cheers Vladimir, that definitely helps!

 Can't believe I actually overlooked this compound in my search... :)

 Martin


 --
Martin Chatterjee

 [ Freelance Technical Director ]
 [   http://www.chatterjee.de   ]
 [ https://vimeo.com/chatterjee ]


 On Thu, Jan 9, 2014 at 11:23 AM, Vladimir Jankijevic 
 vladi...@elefantstudios.ch wrote:

 there is the 'Piecewise Cubic B Spline' Compound you could use as a
 starting point.

 Cheers,
 Vladimir


 On Thu, Jan 9, 2014 at 11:21 AM, Martin Chatterjee 
 martin.chatterjee.li...@googlemail.com wrote:

 Hey there,

 is anybody aware of an ICE compound solving B-Spline/NURBS curves?

 You know, like the existing 'Bezier 4' and 'Bezier 5' compounds but for
 a curve with an arbitrary number of control points?

 Thanks in advance, cheers,

 Martin
 --
Martin Chatterjee

 [ Freelance Technical Director ]
 [   http://www.chatterjee.de   ]
 [ https://vimeo.com/chatterjee ]






Re: rigging in xsi vs maya

2014-01-09 Thread Tim Leydecker

 What would be a typical scenario for this? The point count is adjustable (at 
the expense of detail),
 but the topology will always be a mess unless properly retopo-ed, wouldn't it?

That is what I meant to suggest when I wrote:

It is at hand that the more complex, raw 3D point cloud data will need new and 
abstracted ways
of handling and manipulation, filtering options and adaptive control layers for 
approximated data

Basically, a rigged lower density mesh and a displacement trying to capture 
small detail,
while at the same time losing control over how that detail will actually 
react - that's
sort of the established standard for working with high detail. Reduce detail 
until you
can handle it and hope nobody will notice. Depending on personality, wave away 
concerns.

The reason why I suggest going and skinning/rigging/weighting a raw 3D scan mesh directly 
to bones:

That is the data you want to animate; everything else is already yet another 
degraded derivative.
Going and trying it will show the limitations of current toolsets. Then do some 
cloth sim on top
and fix interpenetration issues. Or first of all wait for the collision simulation 
to finish.


Like using *.jpgs with lossy compression as the input for color grading, then 
re-compressing again
as a lossy *.jpg and wondering why there are block artifacts.

Cheers,

tim






On 09.01.2014 14:34, Stefan Kubicek wrote:

go and skin/rig/wheight a raw 3D scan mesh directly to bones.



What would be a typical scenario for this? The point count is adjustable (at 
the expense of detail),
but the topology will always be a mess unless properly retopo-ed, wouldn't it?

I agree to the rigging paradigm needing some rethinking. I grew up with black 
box systems like Character Studio and CAT. Creating a rig based on those takes 
only minutes to hours,
not days, but they lack customizability. Yet, results were good enough that I 
was constantly asking myself as to why anyone could possible want to use 
anything else for 90% of the
work you see being produced anywhere. It's such a huge cost factor, both in 
terms of time it takes to create the rig and time it takes to trouble shoot and 
maintain it if it breaks
(which the black box systems next to never do) or needs extensions. Autoriggers 
(Gear etc) reduce the creation time factor at the expense of flexibility, yet 
the maintenance aspect
stays to a certain degree. What I also miss in them is the ability to have a mesh 
enveloped to joints and just put the rig on top, allowing to test 
deformations directly by
posing the envelope rig without having to create a control rig - a given with 
the black box systems because the control rig _is_ the envelope rig. The only 
thing I know that works
in a similar way is Motion Builder, in that you import your enveloped mesh and 
joints and apply rigging solvers to it, again at the expense of flexibility - 
it only supports
humanoid and 4-legged creatures.
Fabric/Osirirs looks like it could deliver such a paradigm change - a modular 
rigging system where the building blocks are encapsulated and the asset that 
the user interacts with
in the scene is light-weight, fast and easy to manipulate, and hard to break. 
I'm really looking forward to that, even though flexibility beyond a certain 
point will probably need
to be paid for with programming knowledge and -time again.


Look at what comes in terms of animation and skeleton recognition
in the xbox kinect sdk and the xbox one.

Cheers,

tim




On 09.01.2014 13:09, Guillaume Laforge wrote:

I didn't read every posts so maybe my understanding is wrong but based in last 
replies from Luc-Eric and Tim Leydecker, it sounds like point cloud scanning is 
a rigging feature.
It is not, so lets return to the subject please :).

That illustrate well that it is much more easy to put money on new techs (like 
point cloud scanning, web based applications, etc...) than to think about how 
to improve/re-design an
existing workflow like character rigging ! We saw some new systems in modeling 
(ZBrush etc...) and rendering (Katana) some years ago, but still nothing in the 
rigging area. It make
sense as rigging is really a different culture. You need to be a good character 
rigger to understand and build a good rigging system. But being a good 
character rigger means spend
a lot of time on existing tools like Maya or XSI. At the end you think only 
through the proposed tools of your app. If you are a developer interested in 
designing a rigging system,
it is the opposite problem, you can have a fresh new vision but you can miss 
important concepts of character rigging in your tool.

Interesting subject, if you forget about Maya and XSI :)

Cheers,

Guillaume Laforge





On Thu, Jan 9, 2014 at 4:18 AM, Tim Leydecker bauero...@gmx.de 
mailto:bauero...@gmx.de wrote:

Autodesk is doing a lot of development in the area of 3D scan data handling.

If you look into what is going on in the area of topology data aquisition 
for
architecture, 

Re: rumor, Soft dead within the next year

2014-01-09 Thread wavo

Am 1/9/2014 2:45 PM, schrieb Francisco Criado:

 Yes please! all mornings i wake up, and the first thing i do is check 
mails (bad habit) and find myself with the rumor that soft... and its 
not very possitive checking it every single day for the last 20 days.

 F.

Yep, but the good thing is, now that we have 2014, we got another 
year for SI!

(Now it will (maybe) die at the end of 2015!)


Walter
--


*Walter Volbers*
Senior Animator

*FIFTYEIGHT*3D
Animation  Digital Effects GmbH

Kontorhaus Osthafen
Lindleystraße 12
60314 Frankfurt am Main
Germany

Telefon +49 (0) 69.48 000 55.50
Telefax +49 (0) 69.48 000 55.15

_mailto:w...@fiftyeight.com
http://www.fiftyeight.com
_


ESC*58*
Eine Kooperation der escape GmbH und der FIFTYEIGHT3D GmbH

_http://www.ESC58.de
_


Re: BUG: referenced models with gear rig swapping animation between characters.

2014-01-09 Thread Eric Thivierge
I would double check that the deltas have the correct model target they 
are trying to be applied to in their PPG. There shouldn't be a problem 
with meshes named the same in both models. The whole point of the model 
is to give things a unique namespace and encapsulate the asset.
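
If it helps, a quick way to eyeball that (a sketch only - "Rusco_Rig" is just the model name taken from the log below, and it assumes the referenced model exposes its deltas through Model.Deltas):

# List the Deltas on a referenced model so each one's target can be checked in its PPG.
app = Application
mdl = app.Dictionary.GetObject("Rusco_Rig")   # the referenced model to inspect
for delta in mdl.Deltas:                      # one Delta per modified reference
    app.LogMessage(delta.FullName)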


Eric T.

On Thursday, January 09, 2014 6:44:45 AM, Ognjen Vukovic wrote:

 Sorry, my bad - still working in autopilot mode. We reverted back
to some old models before the shaders were attached and the problem
went away, so I can't catch the error now, but it might seem to be
something linked to the fact that the rigger used the same names for
certain geometry in different models? Not quite sure yet, but I'm sure
it will sneak up again during the day.


On Thu, Jan 9, 2014 at 12:15 PM, Vladimir Jankijevic
vladi...@elefantstudios.ch mailto:vladi...@elefantstudios.ch wrote:

That's just an information that the emdl was created with version
11.0.525.0 - 2013


On Thu, Jan 9, 2014 at 12:11 PM, Ognjen Vukovic ognj...@gmail.com
mailto:ognj...@gmail.com wrote:

It seems to be throwing a delta invalid error and shouting at
me that the emdl file was created in another version, but we
are running the same version of 2013 on every comp

 INFO : 4152 - Data loaded from file Z:\\Rusco_Rig.emdl
was created with build number: 11.0.525.0 - compatibility
version: 1100

...


On Thu, Jan 9, 2014 at 11:49 AM, Ognjen Vukovic
ognj...@gmail.com mailto:ognj...@gmail.com wrote:

Hi guys, we have a bit of a strange situation here, as you
can see by the title...

A couple of referenced models with characters are swapping
the animation in random scenes, the bug seems to appear
when the models are referenced but the moment you localize
everything it seems to revert to the original state, so im
presuming it has something to do with the deltas going
haywire but i have no clue as to how to attack this
problem since my knowledge of animation and anything tied
to animation is very limited.

I was hoping someone might have run into this before and
might have a quick fix.

Cheers,
Ogi.








Re: rigging in xsi vs maya

2014-01-09 Thread Eric Thivierge

Sebastian, look at ILM's Block Party 2 rigging system.


On 1/9/2014 7:53 AM, Sebastien Sterling wrote:
Why not a node based rigging system ? (not necessarily an ice node 
system) but its own thing, you arrange your nulls, you add rig trees 
to them in a small interface graph where you have nodes for different 
behaviours like ik, fk, hik, twist, strech, you plug the nulls 
according to the hierarchy you want, each node has its own params so 
you can expose or lock or modify them in the rig or synoptic. i'm sure 
such a system wouldn't cover everything, its often what i get told, 
that rigging is so complex a proses that in the end the longest 
traditional method is the only one that allows for the flexibility and 
reactivity necessary for a pipe. in spite of this i think such a 
system has merrit and deserves to go past prototype, if only to offer 
another perspective. its quite probable that neither xsi or mayas 
architecture is able to accommodate such a system natively, but 
plug-ins like yeti are basically like their own independent little 
engines running within the shell of a dcc, the same is true for fabric 
i assume.




Re: Rendering ZBrush displacement in Soft

2014-01-09 Thread peter_b
yeah, about fine detail and skin shading,
subsurface scattering does tend to cover up fine surface detail – recesses and 
wrinkles in the skin in particular.

just think of it: a wrinkle on top of the skin is like a fine ridge. without 
scattering, if the light comes from the left, the left of that ridge would be 
bright and the right in shadow. but the scattering makes the shadow part light 
up – and if your SSS shader compensates the diffuse (which it does by default) 
the left part will be a bit less bright than it would be without scattering.
so as a net result this counters the diffuse light, effectively washing out the 
ridge.

so I’d first check with diffuse(spec) only to see if the displacement and bump 
detail looks right to you - conform to what you had in the sculpt.

exaggerating the bump/displace might help a bit but probably not enough.
You’ll really distort the object a lot while the fine detail will still be 
washed out by scattering.

mixing the bump and displacement maps with the scattering color and AO or 
cavity map can help to bring back some of that surface detail.

don’t forget a spec/reflectivity map as well, but that’s regardless of 
scattering.
if you don’t have one, my stand-in solution is to use the diffuse inverted to drive 
spec/reflectivity as well as shininess/gloss (with proper change range nodes) - 
so dark diffuse results in a stronger but concentrated highlight and bright 
diffuse in a less intense but wider highlight. 
this works rather well and more often than not, it’s like this in reality. it 
certainly beats not texturing the spec/ref.
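
Just to make that remap concrete - this is only the arithmetic described above, not a render tree export; the range values are arbitrary example numbers:

# linear remap, like a scalar change range node
def change_range(x, old_min, old_max, new_min, new_max):
    t = (x - old_min) / float(old_max - old_min)
    return new_min + t * (new_max - new_min)

def spec_from_diffuse(diffuse_intensity):
    inv = 1.0 - diffuse_intensity                            # invert the diffuse
    spec_strength = change_range(inv, 0.0, 1.0, 0.1, 0.9)    # dark diffuse -> stronger highlight
    glossiness = change_range(inv, 0.0, 1.0, 10.0, 80.0)     # dark diffuse -> tighter highlight
    return spec_strength, glossiness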

ah, it’s the classic “it looks great in zbrush, why don’t I get the same in the 
render?” and there’s no easy solution to that.
Zbrush viewport makes detail in the sculpt stand out very well and it takes 
work to bring that back in the render.


From: Szabolcs Matefy 
Sent: Thursday, January 09, 2014 12:12 PM
To: softimage@listproc.autodesk.com 
Subject: RE: Rendering ZBrush displacement in Soft

Actually finally I managed to work with displacement. However, displacement is 
not as detailed as I’d like, but boosting with a bump map, it looks fine. 
Unfortunately, the midpoint seems to be off, I have to somehow tune it. In ZB I 
set midpoint to 0.5, because if I set it to 0, It looked as if I have no 
recesses on the skin. Now what I think is that I have to exaggerate the details 
to make it work properly with skin shading, but that’s another story.

 

It looks like that the details are in the texture, but somehow the model 
doesn’t want to reflect it, maybe I should pump up the subdivision in the 
displacement tab of geoapprox PPG.

 

Cheers

 

 

Szabolcs

 

PS. For me, after GoZ is used then I have two issues: 1) I can’t Alt-Tab to 
change between tasks, I have to minimize ZBrush, 2) All playback function, 
simulation is not available anymore in Softimage

 

 

Cheers

 

 

 

 

From: softimage-boun...@listproc.autodesk.com 
[mailto:softimage-boun...@listproc.autodesk.com] On Behalf Of Emilio Hernandez
Sent: Wednesday, January 08, 2014 4:58 PM
To: softimage@listproc.autodesk.com
Subject: Re: Rendering ZBrush displacement in Soft

 

I also use GoZ a lot. The only bug that happens for me is that if you send from 
Softimage to ZBrush, make a UV check before you start sculpting.  Sometimes they 
get screwed.  So export an OBJ and import it in ZBrush, do your sculpt and send it back 
to SI.  Make sure your UVs are consistent and flip the U before exporting from 
ZBrush or your map is going to come up flipped upside down.

Regularly I don't subd the mesh in ZBrush more than 4 subd, as you need to 
subd your mesh also inside SI by the same amount to properly displace the geo when 
rendering.  If you find that you need more subd in your sculpt, I will go 
back, subd the mesh to get more polys and send it back to ZBrush, so you can go 
as high as 8 subd from the original mesh, which is a lot.

 

As far as I remember you can use a 32-bit depth image and plug that into the 
scalar change range node.  But this bit depth in the bitmap has nothing to do with 
the linear workflow, as the linear workflow is related to gamma-correct display 
and rendering, and these maps are there to drive the number of units to displace the 
geometry inwards and outwards, to be interpreted by the render engine.

Here is video that will help to understand the displacement in SI and MR.

https://vimeo.com/29898426

.

 






 

2014/1/8 Cristobal Infante cgc...@gmail.com

You read it as linear for sure. What exactly is your problem, are you not getting 
enough detail? If that is the problem, it can also be that the UVs are not big 
enough for each poly.

 

To be honest the final shader doesn't really matter for the disp, in fact GoZ 
export with a phong. I personally only use Architectural materials when in MR 
though.

 

 

 

On 8 January 2014 12:45, Szabolcs Matefy szabol...@crytek.com wrote:

OK, next question. If you are working in linear workflow, how would you set up 
the 

Re: rigging in xsi vs maya

2014-01-09 Thread Ciaran Moloney
Maybe we could rename constraints with ICE? Eat it Maya!


On Wed, Jan 8, 2014 at 9:57 PM, Matt Lind ml...@carbinestudios.com wrote:

 But... but... but... everybody said ICE can do oh so much more.  Say it
 ain't so.




Re: Redshift3D Render

2014-01-09 Thread Tim Crowson
We've been testing 1 Titan vs. 3 and so far, the speed increase of the 
triple-Titan box is holding at about 2.45x. In an email exchange (or 
maybe it was on the forums, can't recall) it was mentioned that on the 
topic of parallelization, Pixar had determined that even for them, 4 units 
together (of whatever, not necessarily Titans) was the max they could 
really go before it started to cost more money than it was worth. In our 
case, I'm thinking 3 might be our max, based on some nerdy mathematics 
by one of our IT guys analyzing render times per shot, per frame, 
hardware/software costs, rack space used, etc.


But hey, Redshift aside, the Titan in my workstation is doing wonders 
for my viewport performance in Soft. I had a 58M, 2500-item model 
derived from a CAD file the other day, and this thing was letting me 
tumble around it at ~15fps in Shaded mode. That ain't shabby!

-Tim


On 1/9/2014 6:11 AM, Paul Griswold wrote:
There was a discussion on the RS forums about it.  I don't recall the 
numbers, though.  I don't think the speed of the PCIe slot made a huge 
difference.  It's really all about the speed of the card.


Also, although it doesn't load the entire scene into your card's 
memory, the more memory your card has, the better it is.


But overall, for the type of work I'm mainly doing these days, it's 
extremely fast.  In fact, it's so fast that I was finding the 
bottleneck was the time taken to export the mesh to Redshift, not 
rendering.  Redshift has a proxy system like Vray & Arnold, but you 
have to manually create proxies per object & my scene had hundreds and 
hundreds of objects, so I didn't have time to create them.  Therefore, 
it was creating a renderable mesh per frame - so on a frame that took 
28 seconds to render, 20 seconds was spent exporting the mesh and 8 
seconds were spent on rendering.  But again, it's a beta and they're 
continuing to improve things like the proxy system.


Once I'm caught up I'm hoping to try rendering the classroom scene and 
see how it does.


-Paul




--
Signature




Re: [ICE] B-Spline/NURBS curve compound?

2014-01-09 Thread Martin Chatterjee
Thanks Daniel, I'll have a look.

--
   Martin Chatterjee

[ Freelance Technical Director ]
[   http://www.chatterjee.de   ]
[ https://vimeo.com/chatterjee ]


On Thu, Jan 9, 2014 at 2:48 PM, Daniel Brassard dbrassar...@gmail.comwrote:

 I did some experiment in ICE with Cubic B Spline. You can see the result
 on si-community, there even some compound you can download and experiment
 with.

 http://www.si-community.com/community/viewtopic.php?f=15t=1802

 Hope that help.

 Daniel



 On Thu, Jan 9, 2014 at 5:38 AM, Martin Chatterjee 
 martin.chatterjee.li...@googlemail.com wrote:

 Cheers Vladimir, that definitely helps!

 Can't believe I actually overlooked this compound in my search... :)

 Martin


 --
Martin Chatterjee

 [ Freelance Technical Director ]
 [   http://www.chatterjee.de   ]
 [ https://vimeo.com/chatterjee ]


 On Thu, Jan 9, 2014 at 11:23 AM, Vladimir Jankijevic 
 vladi...@elefantstudios.ch wrote:

 there is the 'Piecewise Cubic B Spline' Compound you could use as a
 starting point.

 Cheers,
 Vladimir


 On Thu, Jan 9, 2014 at 11:21 AM, Martin Chatterjee 
 martin.chatterjee.li...@googlemail.com wrote:

 Hey there,

 is anybody aware of an ICE compound solving B-Spline/NURBS curves?

 You know, like the existing 'Bezier 4' and 'Bezier 5' compounds but for
 a curve with an arbitrary number of control points?

 Thanks in advance, cheers,

 Martin
 --
Martin Chatterjee

 [ Freelance Technical Director ]
 [   http://www.chatterjee.de   ]
 [ https://vimeo.com/chatterjee ]







Re: rigging in xsi vs maya

2014-01-09 Thread Sergio Mucino

  
  
I've been doing quite a bit of rigging in Modo lately, and I have
been very surprised by its capabilities.
One thing they do support is heat mapping. It's quite nice to use,
but there are several requirement that need to be met for a mesh to
be acceptable for heat binding. I don't know if all heat mapping
implementations are based on the same algo(s), and therefore,
inherit the same requirements, but here they go (copying/pasting
from the docs):

--Mesh must form a volume, though holes are supported (such as eye sockets).
--Target mesh must be only polygonal; no single vertices, floating edges or curves can be present.
--No shared vertices, edges or polygons (non-manifold surfaces) allowed between multiple components.
--All joints must be contained within the volume of the mesh.

Otherwise, you can still use the available smooth or rigid binding
methods. I don't know if any problems you ran into could be due to
some of these conditions, but there... just in case.
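
On the Softimage side, a rough pre-flight check for a couple of those conditions (a sketch only - "MyMesh" is a placeholder name, and it just counts floating vertices and open edges):

app = Application
geo = app.Dictionary.GetObject("MyMesh").ActivePrimitive.Geometry

# vertices with no neighbouring polygons, and edges bordering fewer than 2 polygons (holes / open volume)
floating = [v.Index for v in geo.Vertices if v.NeighborPolygons().Count == 0]
open_edges = [e.Index for e in geo.Edges if e.NeighborPolygons().Count < 2]

app.LogMessage("floating vertices: %d" % len(floating))
app.LogMessage("open (boundary) edges: %d" % len(open_edges))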

On 08/01/2014 8:31 AM, Sebastien
  Sterling wrote:


  
One feature i would have loved to see implemented across
  the board of autodesk products (apart from Alembic which
  should really just be a new standard by now...) is the heat
  map algorithm. in theory, is this that difficult to implement
  in Soft and Max ? apparently it was made by a bunch of
  students checking up on heat distribution algorithm papers for
  designing old radiators.
  
  http://www.youtube.com/watch?v=aCBx8MjEvvo
  

On paper it looks like the best shit ever, so we of the CHR Dep
wanted to use it to test characters for deformation in maya pre
rigging. trouble was, apparently its extremely susceptible, and
i'm not quite sure to what, topology, mesh density... but in any
case a Lead at rigging scripted a small ui allowing us to just
bypass most of the checks, making the tech actually usable, and
it worked great... until we realised that it actually pops
vertices slightly away from their initial position... in
fairness we used a script to access these capabilities so maybe
that caused the problem, i doubt it but there was tampering,
maybe someone else has had more controled experiences with Heat
mapping, like i said before it still seems like a really useful
addition,
  
  

On 8 January 2014 10:52, Tim Leydecker
  bauero...@gmx.de
  wrote:
  
Using a 3DSMax rigged sample character scene from the UDK
docs,
I made a roundtrip through Maya and Softimage using the
*.fbx format.

I didnt try to export any rig controls, just a "human" rig.


Its worth checking to have the latest *.fbx version
installed and
using an export preset that seems applicable, I think I
resorted to
"Autodesk Media Entertainment 2012 bla" (im on 2012s).

I cant say if that was the best way but that roundtrip
worked.

I ended up with Maya/3DSMax/Softimage each having the
rigged, animated character in a scene.

In my case, there was some nuisance with the BIPED rig
getting interpreted as a second rig
the character is rigged to in Softimage, I had to delete
that biped in XSI to get back to
similar results as in 3DSMax, leaving only the rig meant for
export - it is likely that was
my export settings or selection settings. I had straight
results going from Maya to Softimage.

Cheers,


tim

  

On 07.01.2014 23:58, Steven Caron wrote:

  this thread is some what well timed... i am in maya
  right now. i need to get a mesh and its skin/envelope
  into softimage. i did not rig this object and i don't
  know enough about
  maya to try and understand it through inspection. in
  softimage i would select the mesh, then select the
  deformers from envelope, then key frame those objects
  and remove the
  constraints on them in mass with 'remove all
  constraints'
  
  is NONE of that doable in maya? cause i am having a
  hell of a time figuring it out.
  
  s

  

  


  



Re: rigging in xsi vs maya

2014-01-09 Thread Eric Thivierge
I posted this on the Softimage User Voice but I really really want to 
try this Geodesic Voxel Binding:

https://vimeo.com/69268846

On Thursday, January 09, 2014 10:34:36 AM, Sergio Mucino wrote:

I've been doing quite a bit of rigging in Modo lately, and I have been
very surprised by its capabilities.
One thing they do support is heat mapping.  It's quite nice to use,
but there are several requirement that need to be met for a mesh to be
acceptable for heat binding. I don't know if all heat mapping
implementations are based on the same algo(s), and therefore, inherit
the same requirements, but here they go (copying/pasting from the docs):

--Mesh must form a volume, though holes are supported (such as eye sockets).
--Target mesh must be only polygonal; no single vertices, floating edges or curves can be present.
--No shared vertices, edges or polygons (non-manifold surfaces) allowed between multiple components.
--All joints must be contained within the volume of the mesh.

Otherwise, you can still use the available smooth or rigid binding
methods. I don't know if any problems you ran into could be due to
some of these conditions, but there... just in case.

On 08/01/2014 8:31 AM, Sebastien Sterling wrote:

One feature i would have loved to see implemented across the board of
autodesk products (apart from Alembic which should really just be a
new standard by now...) is the heat map algorithm. in theory, is this
that difficult to implement in Soft and Max ? apparently it was made
by a bunch of students checking up on heat distribution algorithm
papers for designing old radiators.

http://www.youtube.com/watch?v=aCBx8MjEvvo

On paper it looks like the best shit ever, so we of the CHR Dep
wanted to use it to test characters for deformation in maya pre
rigging. trouble was, apparently its extremely susceptible, and i'm
not quite sure to what, topology, mesh density... but in any case a
Lead at rigging scripted a small ui allowing us to just bypass most
of the checks, making the tech actually usable, and it worked
great... until we realised that it actually pops vertices slightly
away from their initial position... in fairness we used a script to
access these capabilities so maybe that caused the problem, i doubt
it but there was tampering, maybe someone else has had more controled
experiences with Heat mapping, like i said before it still seems like
a really useful addition,


On 8 January 2014 10:52, Tim Leydecker bauero...@gmx.de
mailto:bauero...@gmx.de wrote:

Using a 3DSMax rigged sample character scene from the UDK docs,
I made a roundtrip through Maya and Softimage using the *.fbx format.

I didn´t try to export any rig controls, just a human rig.


It´s worth checking to have the latest *.fbx version installed and
using an export preset that seems applicable, I think I resorted to
Autodesk Media Entertainment 2012 bla (im on 2012´s).

I can´t say if that was the best way but that roundtrip worked.

I ended up with Maya/3DSMax/Softimage each having the rigged,
animated character in a scene.

In my case, there was some nuisance with the BIPED rig getting
interpreted as a second rig
the character is rigged to in Softimage, I had to delete that
biped in XSI to get back to
similar results as in 3DSMax, leaving only the rig meant for
export - it is likely that was
my export settings or selection settings. I had straight results
going from Maya to Softimage.

Cheers,


tim


On 07.01.2014 23:58, Steven Caron wrote:

this thread is some what well timed... i am in maya right
now. i need to get a mesh and its skin/envelope into
softimage. i did not rig this object and i don't know enough
about
maya to try and understand it through inspection. in
softimage i would select the mesh, then select the deformers
from envelope, then key frame those objects and remove the
constraints on them in mass with 'remove all constraints'

is NONE of that doable in maya? cause i am having a hell of a
time figuring it out.

s




--





Re: Redshift3D Render

2014-01-09 Thread Mirko Jankovic
Hey Tim
Would you be able to take 2 minutes of your time and run this old Python
script for SI with your Titan?
I'm getting weird results, with a 780 in my home system outperforming the Titan
a lot... well, here is a copy-paste from the forum if you are able to check it out
as well.. thanks!:

Titan: ~170 fps
780: ~245 fps

Go figure :)
But I'm suspecting something weird with my Titan system for some time, will
have to test further, but it would be great if anyone with a Titan could
run it too?
This old Python script:
Application.CreatePrim("Cube", "MeshSurface", "", "")
Application.SetValue("cube.polymsh.geom.subdivu", 831, "")
Application.SetValue("cube.polymsh.geom.subdivv", 800, "")
Application.SetValue("cube.polymsh.geom.subdivbase", 800, "")
Application.SetValue("Camera.camvis.refreshrate", True, "")
Application.SetDisplayMode("Camera", "shaded")
Application.DeselectAll()
Application.SetValue("PlayControl.Out", 5000, "")
Application.DeselectAll()
Application.GetPrim("Null", "", "", "")
Application.SelectObj("Camera_Root")
Application.CopyPaste("Camera_Root", "", "null", 1)
Application.SelectObj("null")
Application.SaveKey("null.kine.local.rotx,null.kine.local.roty,null.kine.local.rotz", 1)
Application.SetValue("PlayControl.Key", 5000, "")
Application.SetValue("PlayControl.Current", 5000, "")
Application.Rotate("", 0, 8000, 0, "siAbsolute", "siPivot", "siObj", "siY")
Application.SaveKey("null.kine.local.rotx,null.kine.local.roty,null.kine.local.rotz", 5000)
Application.FirstFrame()

Just paste in python script run and hit play.
Thakns!


On Thu, Jan 9, 2014 at 3:34 PM, Tim Crowson
tim.crow...@magneticdreams.comwrote:

  We've been testing 1 Titan vs. 3 and so far, the speed increase of the
 triple-Titan box is holding at about 2.45x. In an email exchange (or maybe
 it was on the forums, can't recall) it was mentioned that on the topic
 parallelization, Pixar had determined that even for them, 4 units together
 (of whatever, not necessarily Titans) was the max they could really go
 before it started to cost more money than it was worth. In our case, I'm
 thinking 3 might be our max, based on some nerdy mathematics by one of our
 IT guys analyzing render times per shot, per frame, hardware/software
 costs, rack space used, etc.

 But hey, Redshift aside, the Titan in my workstation is doing wonders for
 my viewport performance in Soft. I had a 58M, 2500-item model derived from
 a CAD file the other day, and this thing was letting me tumble around it at
 ~15fps in Shaded mode. That ain't shabby!
 -Tim



 On 1/9/2014 6:11 AM, Paul Griswold wrote:

  There was a discussion on the RS forums about it.  I don't recall the
 numbers, though.  I don't think the speed of the PCIe slot made a huge
 difference.  It's really all about the speed of the card.

  Also, although it doesn't load the entire scene into your card's memory,
 the more memory your card has, the better it is.

  But overall, for the type of work I'm mainly doing these days, it's
 extremely fast.  In fact, it's so fast that I was finding the bottleneck
 was the time taken to export the mesh to Redshift, not rendering.  Redshift
 has a proxy system like Vray  Arnold, but you have to manually create
 proxies per object  my scene had hundreds and hundreds of objects, so I
 didn't have time to create them.  Therefore, it was creating a renderable
 mesh per frame - so on a frame that took 28 seconds to render, 20 seconds
 was spent exporting the mesh and 8 seconds were spent on rendering.  But
 again, it's a beta and they're continuing to improve things like the proxy
 system.

  Once I'm caught up I'm hoping to try rendering the classroom scene and
 see how it does.

  -Paul




 --





Re: rigging in xsi vs maya

2014-01-09 Thread Sergio Mucino

  
  
I saw that video a while ago. I would expect to see this show up in
Maya sometime 'soon' (hopefully).'


On 09/01/2014 10:38 AM, Eric Thivierge
  wrote:

I
  posted this on the Softimage User Voice but I really really want
  to try this Geodesic Voxel Binding:
  
  https://vimeo.com/69268846
  
  
  On Thursday, January 09, 2014 10:34:36 AM, Sergio Mucino wrote:
  
  I've been doing quite a bit of rigging in
Modo lately, and I have been

very surprised by its capabilities.

One thing they do support is heat mapping.  It's quite nice to
use,

but there are several requirement that need to be met for a mesh
to be

acceptable for heat binding. I don't know if all heat mapping

implementations are based on the same algo(s), and therefore,
inherit

the same requirements, but here they go (copying/pasting from
the docs):


/--//Mesh must form a volume, though holes are supported (such
as eye

sockets).//

--//Target mesh must be //only//polygonal, no single
vertices,

floating edges or curves can be present.//

--//No shared vertices, edges or polygons (non-manifold
surfaces)

allowed between multiple components. //

--//All joints must be contained within the volume of the
mesh. /


Otherwise, you can still use the available smooth or rigid
binding

methods. I don't know if any problems you ran into could be due
to

some of these conditions, but there... just in case.


On 08/01/2014 8:31 AM, Sebastien Sterling wrote:

One feature i would have loved to see
  implemented across the board of
  
  autodesk products (apart from Alembic which should really just
  be a
  
  new standard by now...) is the heat map algorithm. in theory,
  is this
  
  that difficult to implement in Soft and Max ? apparently it
  was made
  
  by a bunch of students checking up on heat distribution
  algorithm
  
  papers for designing old radiators.
  
  
  http://www.youtube.com/watch?v=aCBx8MjEvvo
  
  
  On paper it looks like the best shit ever, so we of the CHR
  Dep
  
  wanted to use it to test characters for deformation in maya
  pre
  
  rigging. trouble was, apparently its extremely susceptible,
  and i'm
  
  not quite sure to what, topology, mesh density... but in any
  case a
  
  Lead at rigging scripted a small ui allowing us to just bypass
  most
  
  of the checks, making the tech actually usable, and it worked
  
  great... until we realised that it actually pops vertices
  slightly
  
  away from their initial position... in fairness we used a
  script to
  
  access these capabilities so maybe that caused the problem, i
  doubt
  
  it but there was tampering, maybe someone else has had more
  controled
  
  experiences with Heat mapping, like i said before it still
  seems like
  
  a really useful addition,
  
  
  
  On 8 January 2014 10:52, Tim Leydecker bauero...@gmx.de
  
  mailto:bauero...@gmx.de wrote:
  
  
      Using a 3DSMax rigged sample character scene from the UDK
  docs,
  
      I made a roundtrip through Maya and Softimage using the
  *.fbx format.
  
  
      I didn´t try to export any rig controls, just a "human"
  rig.
  
  
  
      It´s worth checking to have the latest *.fbx version
  installed and
  
      using an export preset that seems applicable, I think I
  resorted to
  
      "Autodesk Media Entertainment 2012 bla" (im on 2012´s).
  
  
      I can´t say if that was the best way but that roundtrip
  worked.
  
  
      I ended up with Maya/3DSMax/Softimage each having the
  rigged,
  
      animated character in a scene.
  
  
      In my case, there was some nuisance with the BIPED rig
  getting
  
      interpreted as a second rig
  
      the character is rigged to in Softimage, I had to delete
  that
  
      

Re: rigging in xsi vs maya

2014-01-09 Thread Sergio Mucino

  
  
This has been pretty much my only "um..." regarding ICE. It seems to
be like a (powerful) local black box that is related to one object.
I know that an ICE graph can actually get and set data to multiple
locations, but in some cases, one needs to jump through hoops (for
example, it's difficult to read-write data from other ICE graphs...
or at least, not straight-forward). In Maya, everything is part of
the scene graph, so its a lot easier to read/write data, and find
all related operations to a certain node.
However, Maya has to have the worst node editor I've ever had to
touch. I would definitely not want to see something like that in
Softimage (or anywhere else for that matter). Every time I try to
use it, it makes me want to kick puppies, and come back flying to
the Hypergraph. I much prefer the ICE UI/workflow (I'd just like it
more if it was "global") and Modo's Schematic View (by orders of
magnitude).


On 08/01/2014 5:00 PM, Eric Thivierge wrote:

Yeah, ICE could do that if they keep pushing it... maybe? Though I think
it's pretty black boxed in terms of just having the high level access to
objects, not the underlying nodes.

A Node Editor like Maya plus exposing more of the internals in the Scene
Explorer would be something to look at if this ever gets any attention.

@Emilio, we need this in Softimage as well!

On Wednesday, January 08, 2014 4:58:03 PM, Emilio Hernandez wrote:

    Haha.  Maybe because Maya needs it, so you can dig in there and get
    it working properly.  While in Softimage not

    ;)  Just fueling the fire!


    2014/1/8 Eric Thivierge ethivie...@hybride.com
    mailto:ethivie...@hybride.com

    Just because I want to fuel the fire, I'll toss in that while the
    workflow in Maya is quite flawed out of the box, you can get to
    more internals of the scene graph and manipulate it than we have
    in Softimage.

    On Wednesday, January 08, 2014 4:15:04 PM, Alan Fregtman wrote:

    Bravo! Bravo!! :) I echo your exact sentiments, David (though my
    own credentials are puny by comparison.)

    The operator stack should be permanently on the box as a "hot
    feature". We all take it for granted all the time, but seriously
    it's one of the best features in Soft.


    On Wed, Jan 8, 2014 at 3:10 PM, Steven Caron car...@gmail.com
    mailto:car...@gmail.com wrote:

    thank you! thank you! thank you!... i knew i wasn't crazy thinking
    rigging in maya is a PITA!


    On Wed, Jan 8, 2014 at 11:45 AM, David Gallagher
    davegsoftimagel...@gmail.com
    mailto:davegsoftimagel...@gmail.com wrote:

    I rigged on quite a few characters in Maya at Blue Sky Studios
    and now (Softimage) AnimSchool. We offer the well-known
    "Malcolm" rig for free.

    There is no comparison to rigging in Softimage and Maya--not
    the kind of rigging I do. I often assume by now they have
    better workflows in Maya, but I'm often surprised to find how
    convoluted and limiting the workflows are to this day. Most
    Maya people must not know there are better ways of working or
    aren't doing the kinds of things I am, because the difference
    is profound.

    -At any point in the rigging process, you can make edits in
    the model stack to change the shape and topology of

Re: Redshift3D Render

2014-01-09 Thread Emilio Hernandez
Hey Mirko I ran your script and I got 50.7 fps...

But then I remembered I have my displays plugged in to my 470.. hahaha.

Don't ask why, but when using AE with the displays plugged into the Ti,  AE
does not like it and disables GPU for calculations...

P.




2014/1/9 Mirko Jankovic mirkoj.anima...@gmail.com

 Hey Tim
 Would you be able to take 2 minutes of your time and run this ol' python
 script for SI with your Titan?
 I'm getting weird results with a 780 in my home system outperforming the
 Titan a lot... well here is a copy-paste from the forum if you are able to
 check it out as well.. thanks!:

 Titan: ~170 fps
 780: ~245 fps

 Go figure :)
 But I'm suspecting something weird with my Titan system for some time, will
 have to test further, but it would be great if anyone else with a Titan
 could run it too?
 This old python script:

 Application.CreatePrim("Cube", "MeshSurface", "", "")
 Application.SetValue("cube.polymsh.geom.subdivu", 831, "")
 Application.SetValue("cube.polymsh.geom.subdivv", 800, "")
 Application.SetValue("cube.polymsh.geom.subdivbase", 800, "")
 Application.SetValue("Camera.camvis.refreshrate", True, "")
 Application.SetDisplayMode("Camera", "shaded")
 Application.DeselectAll()
 Application.SetValue("PlayControl.Out", 5000, "")
 Application.DeselectAll()
 Application.GetPrim("Null", "", "", "")
 Application.SelectObj("Camera_Root", "", "")
 Application.CopyPaste("Camera_Root", "", "null", 1)
 Application.SelectObj("null", "", "")
 Application.SaveKey("null.kine.local.rotx,null.kine.local.roty,null.kine.local.rotz", 1, "", "", "", "", "")
 Application.SetValue("PlayControl.Key", 5000, "")
 Application.SetValue("PlayControl.Current", 5000, "")
 Application.Rotate("", 0, 8000, 0, "siAbsolute", "siPivot", "siObj", "siY", "", "", "", "", "", "", "", 0, "")
 Application.SaveKey("null.kine.local.rotx,null.kine.local.roty,null.kine.local.rotz", 5000, "", "", "", "", "")
 Application.FirstFrame()

 Just paste it in the python script editor, run it and hit play.
 Thanks!


 On Thu, Jan 9, 2014 at 3:34 PM, Tim Crowson 
 tim.crow...@magneticdreams.com wrote:

  We've been testing 1 Titan vs. 3 and so far, the speed increase of the
 triple-Titan box is holding at about 2.45x. In an email exchange (or maybe
 it was on the forums, can't recall) it was mentioned that, on the topic of
 parallelization, Pixar had determined that even for them, 4 units together
 (of whatever, not necessarily Titans) was the max they could really go
 before it started to cost more money than it was worth. In our case, I'm
 thinking 3 might be our max, based on some nerdy mathematics by one of our
 IT guys analyzing render times per shot, per frame, hardware/software
 costs, rack space used, etc.
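
 To put rough numbers on that 2.45x figure, here is a small back-of-the-
 envelope sketch in Python; the per-frame render time and the card price
 below are made-up placeholders, only the 2.45x speedup comes from the
 measurement quoted above:

 # Hypothetical 1-GPU vs 3-GPU cost/throughput comparison (placeholder numbers).
 single_card_time = 60.0        # seconds per frame on one card (assumed)
 speedup_3_cards  = 2.45        # measured speedup of the triple-Titan box
 card_price       = 1000.0      # assumed price per card

 time_3_cards = single_card_time / speedup_3_cards    # ~24.5 s per frame
 efficiency   = speedup_3_cards / 3.0                 # ~0.82, i.e. ~82% scaling
 cost_per_unit_throughput_1 = card_price / 1.0                  # 1000.0
 cost_per_unit_throughput_3 = card_price * 3 / speedup_3_cards  # ~1224.5

 print(time_3_cards, efficiency, cost_per_unit_throughput_1, cost_per_unit_throughput_3)

 The last two numbers are the interesting part: each extra card buys
 progressively less speedup, which is the same cost-versus-worth trade-off
 described above.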

 But hey, Redshift aside, the Titan in my workstation is doing wonders for
 my viewport performance in Soft. I had a 58M, 2500-item model derived from
 a CAD file the other day, and this thing was letting me tumble around it at
 ~15fps in Shaded mode. That ain't shabby!
 -Tim



 On 1/9/2014 6:11 AM, Paul Griswold wrote:

  There was a discussion on the RS forums about it.  I don't recall the
 numbers, though.  I don't think the speed of the PCIe slot made a huge
 difference.  It's really all about the speed of the card.

  Also, although it doesn't load the entire scene into your card's
 memory, the more memory your card has, the better it is.

  But overall, for the type of work I'm mainly doing these days, it's
 extremely fast.  In fact, it's so fast that I was finding the bottleneck
 was the time taken to export the mesh to Redshift, not rendering.  Redshift
 has a proxy system like Vray & Arnold, but you have to manually create
 proxies per object, and my scene had hundreds and hundreds of objects, so I
 didn't have time to create them.  Therefore, it was creating a renderable
 mesh per frame - so on a frame that took 28 seconds to render, 20 seconds
 was spent exporting the mesh and 8 seconds were spent on rendering.  But
 again, it's a beta and they're continuing to improve things like the proxy
 system.

  Once I'm caught up I'm hoping to try rendering the classroom scene and
 see how it does.

  -Paul




 --






Re: Rendering ZBrush displacement in Soft

2014-01-09 Thread Emilio Hernandez
Something that helps a lot is using a normal map to get the fine details
without pumping up additional geometry.

Get a normal map out of ZBrush the same way you do the displacement.

Add a Binormal map to the object in Softimage and use it to drive the bump.
This will add the extra fine details in your render.

Cheers.





2014/1/9 pete...@skynet.be

   yeah, about fine detail and skin shading,
 subsurface scattering does tend to cover up fine surface detail – recesses
 and wrinkles in the skin in particular.

 just think of it: a wrinkle on top of the skin is like a fine ridge.
 without scattering, if the light comes from the left, the left of that
 ridge would be bright and the right in shadow. but the scattering makes the
 shadow part light up – and if your SSS shader compensates the diffuse
 (which it does by default) the left part will be a bit less bright than it
 would be without scattering.
 so as a net result this counters the diffuse light, effectively washing
 out the ridge.

 so I’d first check with diffuse (spec) only to see if the displacement and
 bump detail looks right to you – conforms to what you had in the sculpt.

 exaggerating the bump/displace might help a bit but probably not enough.
 You’ll really distort the object a lot while the fine detail will still be
 washed out by scattering.

 mixing the bump and displacement maps with the scattering color and AO or
 cavity map can help to bring back some of that surface detail.

 don’t forget a spec/reflectivity map as well, but that’s regardless of
 scattering.
 if you don’t have one, my stand-in solution is to use the inverted diffuse
 to drive spec/reflectivity as well as shininess/gloss (with proper change
 range nodes) – so dark diffuse results in a stronger but concentrated
 highlight and bright diffuse in a less intense but wider highlight.
 this works rather well and more often than not, it’s like this in reality.
 it certainly beats not texturing the spec/ref.
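
 For what it's worth, here is a minimal Python sketch of that inverted-
 diffuse remap, written as plain code rather than as a render-tree network;
 the 0-1 input range and the output ranges for reflectivity and glossiness
 are assumptions for illustration only:

 def change_range(value, old_min, old_max, new_min, new_max):
     # Same idea as a scalar change range node: linear remap of one range onto another.
     t = (value - old_min) / (old_max - old_min)
     return new_min + t * (new_max - new_min)

 def spec_from_diffuse(diffuse_luminance):
     inverted = 1.0 - diffuse_luminance                          # invert the diffuse
     reflectivity = change_range(inverted, 0.0, 1.0, 0.05, 0.6)  # assumed output range
     glossiness   = change_range(inverted, 0.0, 1.0, 0.2, 0.9)   # assumed output range
     return reflectivity, glossiness

 print(spec_from_diffuse(0.2))  # dark diffuse   -> refl ~0.49, gloss ~0.76 (strong, tight highlight)
 print(spec_from_diffuse(0.9))  # bright diffuse -> refl ~0.11, gloss ~0.27 (weak, wide highlight)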

 ah, it’s the classic “it looks great in zbrush, why don’t I get the same
 in the render?” and there’s no easy solution to that.
 Zbrush viewport makes detail in the sculpt stand out very well and it
 takes work to bring that back in the render.


  *From:* Szabolcs Matefy szabol...@crytek.com
 *Sent:* Thursday, January 09, 2014 12:12 PM
 *To:* softimage@listproc.autodesk.com
 *Subject:* RE: Rendering ZBrush displacement in Soft


 Actually, I finally managed to get displacement working. However, the
 displacement is not as detailed as I’d like, but boosted with a bump map
 it looks fine. Unfortunately, the midpoint seems to be off and I have to
 somehow tune it. In ZB I set the midpoint to 0.5, because if I set it to 0,
 it looked as if I had no recesses on the skin. Now what I think is that I
 have to exaggerate the details to make it work properly with skin shading,
 but that’s another story.



 It looks like the details are in the texture, but somehow the model
 doesn’t want to reflect them; maybe I should pump up the subdivision in the
 displacement tab of the geoapprox PPG.



 Cheers





 Szabolcs



 PS. For me, after GoZ is used I have two issues: 1) I can’t Alt-Tab to
 switch between tasks, I have to minimize ZBrush; 2) all playback functions
 and simulation are no longer available in Softimage





 Cheers









 *From:* softimage-boun...@listproc.autodesk.com [mailto:
 softimage-boun...@listproc.autodesk.com] *On Behalf Of *Emilio Hernandez
 *Sent:* Wednesday, January 08, 2014 4:58 PM
 *To:* softimage@listproc.autodesk.com
 *Subject:* Re: Rendering ZBrush displacement in Soft



 I also use GoZ a lot; the only bug that happens for me is that if you send
 from Softimage to ZBrush, make a UV check before you start sculpting.
 Sometimes they get screwed.  So export an OBJ, import it in ZBrush, do your
 sculpt and send it back to SI.  Make sure your UVs are consistent and flip
 the U before exporting from ZBrush or your map is going to come out flipped
 upside down.

 Regularly I don't subd the mesh in ZBrush more than 4 levels, as you need
 to subd your mesh inside SI by the same amount to properly displace the geo
 when rendering.  If you find that you need more subd in your sculpt, go
 back, subd the mesh in SI to get more polys and send it back to ZBrush;
 that way you can go as high as 8 subd levels from the original mesh, which
 is a lot.
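
 To put numbers on "that is a lot": a quick count, assuming a quad mesh
 where every subdivision level multiplies the face count by four (the
 10,000-quad base mesh is just an example figure):

 # Rough face counts per subdivision level for a hypothetical 10,000-quad base mesh.
 base_quads = 10000
 for level in range(9):
     print(level, base_quads * 4 ** level)
 # level 4 -> 2,560,000 quads; level 8 -> 655,360,000 quads.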



 As far as I remember, you can use a 32-bit depth image and plug that into
 the scalar change range node.  But this bit depth in the bitmap has nothing
 to do with the linear workflow: the linear workflow is about gamma-correct
 display and rendering, while these maps drive values, in units, to displace
 the geometry inwards and outwards, to be interpreted by the render engine.

 Here is a video that will help to understand displacement in SI and MR:

 https://vimeo.com/29898426
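
 On the midpoint question above, here is a minimal sketch of how a
 midpoint-based displacement map is commonly interpreted; the function name
 and the 0-1 map range are illustrative assumptions, and the exact remap
 depends on your export settings and render engine:

 def displacement_height(map_value, midpoint=0.5, scale=1.0):
     # map_value in 0..1; values below the midpoint push the surface in,
     # values above push it out, and the midpoint itself stays put.
     return (map_value - midpoint) * scale

 # With midpoint 0.5: 0.5 -> 0.0 (no change), 0.0 -> -0.5 (recess), 1.0 -> +0.5 (peak).
 # With midpoint 0.0, every map value >= 0 pushes outwards, which is why the
 # recesses seem to disappear in the render.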






 2014/1/8 Cristobal Infante cgc...@gmail.com

 You read it linear for sure. What exactly is your problem? Are you not
 getting enough detail? If this is 

Re: rigging in xsi vs maya

2014-01-09 Thread Sergio Mucino

  
  
I absolutely hate this behavior in Maya. It's, frankly, ridiculous.
Maya's weighting tools are totally sub-par compared to any other 3D
application I've used (including Max). Why it is this way, I don't
know, but as a user it's incredibly frustrating to have to focus on
not shooting yourself in the foot (such as daring to perform a
smooth-weights operation with all bones unlocked) more than on getting
actual work done. Maya has great things going for it, but binding and
weighting is definitely not one of them. It's pretty bad, actually.
Ok, rant off.  :-) 

On 07/01/2014 9:57 PM, Sebastien Sterling wrote:

    I was quite shocked to learn from riggers in my last job that in
    maya you have to "lock all bones but the ones you want to weight to
    via small tick boxes", failure to do so apparently causing maya to
    throw random influences around...

    On 8 January 2014 02:22, Alan Fregtman alan.fregt...@gmail.com wrote:

    Last time I had to use Maya I would use Crosswalk to transfer the
    skinned mesh from Maya to Soft, do my weighting in home sweet home,
    then I wrote an exporter that saved out my weights in the
    "cometSaveWeights" format. Life saver!

    On Tue, Jan 7, 2014 at 6:15 PM, Steven Caron car...@gmail.com wrote:

    arg, figured it out.

    import pymel.core as pm
    pm.select(pm.skinCluster(pm.selected()[0], query=True, influence=True))

    best UI ever!

    On Tue, Jan 7, 2014 at 2:58 PM, Steven Caron car...@gmail.com wrote:

    this thread is somewhat well timed... i am in maya right now. i need
    to get a mesh and its skin/envelope into softimage. i did not rig
    this object and i don't know enough about maya to try and understand
    it through inspection. in softimage i would select the mesh, then
    select the deformers from envelope, then keyframe those objects and
    remove the constraints on them en masse with 'remove all constraints'.

    is NONE of that doable in maya? cause i am having a hell of a time
    figuring it out.

    s

Re: Redshift3D Render

2014-01-09 Thread Tim Crowson

I just get "60.0 fps +".
How are you getting it to display a value higher than 60? I'm pretty sure 
the actual fps is higher, but the value in the viewport is capped at 60.

-Tim


On 1/9/2014 10:12 AM, Leonard Koch wrote:
I get about 28-31 out of my 680. Does anyone have a common explanation 
for that?




Re: Redshift3D Render

2014-01-09 Thread Emilio Hernandez
As far as I remember from studying the specs, in order to get a higher frame
rate the magic number is the memory transfer speed.

I was going to buy a 680 but then realized that my 470 has more speed in
the memory transfer.

Don't know why, but Nvidia choked the speed in most of the 600 series except
for the Titan, and opened the hose again in the 690.

In my experience, and from what we have been discussing in the Redshift
forums, the two magic numbers for judging which GPU will perform faster are
not only the texture fill rate per second but also the memory transfer
speed.  Below are the specs of the Titan, the 680 and the 470 I have.

*GTX TITAN GPU Engine Specs:*
CUDA Cores: 2688
Base Clock: 837 MHz
Boost Clock: 876 MHz
Texture Fill Rate: 187.5 billion/sec

*GTX TITAN Memory Specs:*
Memory Clock: 6.0 Gbps
Standard Memory Config: 6144 MB
Memory Interface: GDDR5
Memory Interface Width: 384-bit GDDR5
Memory Bandwidth: 288.4 GB/sec

*GTX 680 GPU Engine Specs:*
CUDA Cores
Base Clock: 1006 MHz
Boost Clock: 1058 MHz
Texture Fill Rate: 128.8 billion/sec

*GTX 680 Memory Specs:*
Memory Speed: 6.0 Gbps
Standard Memory Config: 2048 MB
Memory Interface Width: 256-bit GDDR5
Memory Bandwidth: 192.2 GB/sec

GTX 470 GPU Engine Specs:
CUDA Cores: 448
Graphics Clock: 607 MHz
Processor Clock: 1215 MHz
Texture Fill Rate: 34.0 billion/sec

GTX 470 Memory Specs:
Memory Clock: 1674 MHz (3348 data rate)
Standard Memory Config: 1280 MB
Memory Interface: GDDR5
Memory Interface Width: 320-bit
Memory Bandwidth: 133.9 GB/sec



So as you can see, my 470's memory bandwidth is not that far from the 680's,
but the memory clock speed and the processor clock speed of the 470 are
higher than the 680's.

Not a guru here, but that can be an explanation of why the 470 gets faster
fps than the 680...

For rendering, make sure you compare the memory bandwidth of the GPUs,
besides the CUDA cores.  Double the CUDAs but half the memory bandwidth,
and (I cannot assure it, but) it will render almost the same.
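
As a sanity check on those bandwidth figures, here is where the numbers
come from (theoretical peak = effective memory clock in Gbps times bus
width in bits, divided by 8 bits per byte); only the spec values quoted
above are used:

def bandwidth_gb_s(effective_gbps, bus_width_bits):
    # Peak memory bandwidth = data rate per pin * number of pins / 8 bits per byte.
    return effective_gbps * bus_width_bits / 8.0

print(bandwidth_gb_s(6.0, 384))    # Titan   -> 288.0 GB/s (listed as 288.4)
print(bandwidth_gb_s(6.0, 256))    # GTX 680 -> 192.0 GB/s (listed as 192.2)
print(bandwidth_gb_s(3.348, 320))  # GTX 470 -> ~133.9 GB/s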





Re: Redshift3D Render

2014-01-09 Thread Emilio Hernandez
Yes Mirko, tell the secret.  I don't want to break my mind thinking about
memory clocks and bandwidths.






Re: Redshift3D Render

2014-01-09 Thread Leonard Koch
Ah interesting. That begins to explain it.
Thanks Emilio.




Re: Redshift3D Render

2014-01-09 Thread Ben Houston
For GPU speeds, you always need to consult this list, it is pretty
representative of what to expect from things like Redshift3D:

http://www.videocardbenchmark.net/high_end_gpus.html

-ben



-- 
Best regards,
Ben Houston
Voice: 613-762-4113 Skype: ben.exocortex Twitter: @exocortexcom

Re: Redshift3D Render

2014-01-09 Thread Emilio Hernandez
Just a funny fact that I found...

After opening several tabs while looking for the Nvidia specs, I left them
open.  I am using Firefox.

I hit play again and the fps dropped by half, to 24 fps.

Started scratching my head.  Closed the additional Firefox tabs, restarted
Softimage and the speed was back to 50.7.

So my guess is that these modern browsers suck a lot from the GPU... pfff.







Re: Redshift3D Render

2014-01-09 Thread Stefan Kubicek

I guess 60fps is the refresh rate of your display, right?
Have you disabled VSync in the driver settings?

Re: Redshift3D Render

2014-01-09 Thread Stefan Kubicek

So does Chrome, btw. I also notice this when running my laptop without the
power supply: the GPU sucks the battery dry in an hour. With Chrome closed
it's 4 hrs. GPU acceleration can be turned off in Chrome though; don't know
about Firefox.


Re: Redshift3D Render

2014-01-09 Thread Francisco Criado
Emilio, did you try telling the application (Firefox, Chrome) to use the
CPU instead of the GPU? That happened to me too, and I found it useful to
do it through the Nvidia control panel.

F.





Re: Redshift3D Render

2014-01-09 Thread Emilio Hernandez
I am going to try your suggestions right now.  That will explain a lot of
things.






2014/1/9 Tim Crowson tim.crow...@magneticdreams.com

  I wish it was always that simple. For instance, Redshift will perform
 better with more vram, and the Titan comes standard with 6GB, which is not
 even an option on the 780 or 780Ti. If you can get by with less vram
 though, the 780s are pretty sweet. Can't wait to see what they announce
 next.
 -Tim


 On 1/9/2014 10:39 AM, Ben Houston wrote:

 For GPU speeds, you always need to consult this list, it is pretty
 representative of what to expect from things like Redshift3D:

  http://www.videocardbenchmark.net/high_end_gpus.html

  -ben


 On Thu, Jan 9, 2014 at 11:36 AM, Emilio Hernandez emi...@e-roja.comwrote:

 Yes Mirko tell the secret.  I don't want to break my mind thinking about
 memory clocks and bandwiths




 2014/1/9 Tim Crowson tim.crow...@magneticdreams.com

  I just get 60.0 fps +
 How are you getting it display a value higher than 60? I'm pretty sure
 it the actual fps is higher, but the value in the viewport is capped at
 60
 -Tim



 On 1/9/2014 10:12 AM, Leonard Koch wrote:

 I get about 28-31 out of my 680. Does anyone have a common explanation
 for that?


 On Thu, Jan 9, 2014 at 5:10 PM, Emilio Hernandez emi...@e-roja.comwrote:

   Hey Mirko I ran your script and I got 50.7 fps...

  But then I remembered I have my displays plugged in to my 470.. hahaha.

  Don't ask why, but when using AE with the displays plugged into the
 Ti,  AE does not like it and disables GPU for calculations...

  P.




 2014/1/9 Mirko Jankovic mirkoj.anima...@gmail.com

 Hey Tim
 Would you be able to take 2 minutes of your tmie and run this ol
 python script for SI with your titan?
 I'm getting weird results with an 780 in my home system outperforming
 titan a lot... well here is copy paste from forum if you are able to check
 it out as well.. thanks!:

  itan: ~170 fps
 780: ~245 fps

 Go figure [image: :)]
 But I'm suspecting something weird with my titan system for some time
 will have to test further but would be great if anyone with titan as well
 could run it too?
 This old python script:
 Application.CreatePrim(Cube, MeshSurface, , )
 Application.SetValue(cube.polymsh.geom.subdivu, 831, )
 Application.SetValue(cube.polymsh.geom.subdivv, 800, )
 Application.SetValue(cube.polymsh.geom.subdivbase, 800, )
 Application.SetValue(Camera.camvis.refreshrate, True, )
 Application.SetDisplayMode(Camera, shaded)
 Application.DeselectAll()
 Application.SetValue(PlayControl.Out, 5000, )
 Application.DeselectAll()
 Application.GetPrim(Null, , , )
 Application.SelectObj(Camera_Root, , )
 Application.CopyPaste(Camera_Root, , null, 1)
 Application.SelectObj(null, , )
 Application.SaveKey(null.kine.local.rotx,null.kine.local.roty,null.kine.local.rotz,
 1, , , , , )
 Application.SetValue(PlayControl.Key, 5000, )
 Application.SetValue(PlayControl.Current, 5000, )
 Application.Rotate(, 0, 8000, 0, siAbsolute, siPivot, siObj,
 siY, , , , , , , , 0, )
 Application.SaveKey(null.kine.local.rotx,null.kine.local.roty,null.kine.local.rotz,
 5000, , , , , )
 Application.FirstFrame()

  Just paste in python script run and hit play.
 Thakns!


 On Thu, Jan 9, 2014 at 3:34 PM, Tim Crowson 
 tim.crow...@magneticdreams.com wrote:

  We've been testing 1 Titan vs. 3 and so far, the speed increase of
 the triple-Titan box is holding at about 2.45x. In an email exchange (or
 maybe it was on the forums, can't recall) it was mentioned that, on the
 topic of parallelization, Pixar had determined that even for them, 4 units
 together (of whatever, not necessarily Titans) was the max they could
 really go before it started to cost more money than it was worth. In our
 case, I'm thinking 3 might be our max, based on some nerdy mathematics by
 one of our IT guys analyzing render times per shot, per frame,
 hardware/software costs, rack space used, etc.

 But hey, Redshift aside, the Titan in my workstation is doing wonders
 for my viewport performance in Soft. I had a 58M, 2500-item model derived
 from a CAD file the other day, and this thing was letting me tumble around
 it at ~15 fps in Shaded mode. That ain't shabby!
 -Tim
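
 (A quick back-of-the-envelope check of the scaling figure above; just
 arithmetic on the numbers Tim quotes, nothing measured here.)
 # 3 Titans at ~2.45x the speed of a single Titan is roughly 82%
 # parallel efficiency per card.
 cards, speedup = 3, 2.45
 Application.LogMessage("per-card efficiency: %.2f" % (speedup / cards))  # ~0.82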



 On 1/9/2014 6:11 AM, Paul Griswold wrote:

  There was a discussion on the RS forums about it.  I don't recall
 the numbers, though.  I don't think the speed of the PCIe slot made a huge
 difference.  It's really all about the speed of the card.

 Also, although it doesn't load the entire scene into your card's
 memory, the more memory your card has, the better it is.

 But overall, for the type of work I'm mainly doing these days, it's
 extremely fast.  In fact, it's so fast that I was finding the bottleneck
 was the time taken to export the mesh to Redshift, not rendering.  Redshift
 has a proxy system like Vray & Arnold, but you have to manually create
 proxies per object, and my scene had hundreds and hundreds of objects, so I
 didn't have time to create them.  Therefore, it was creating a renderable
 mesh per frame - so on a frame that took 28 seconds to render, 20 seconds
 was spent exporting the mesh and 8 seconds were spent on rendering.  But
 again, it's a beta and they're continuing to improve things like the proxy
 system.

 Once I'm caught up I'm hoping to try rendering the classroom scene
 and see how it does.

 -Paul
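
 (Again, just arithmetic on the numbers Paul quotes, for perspective.)
 # 28 s per frame = 20 s mesh export + 8 s rendering,
 # i.e. roughly 71% of the frame time went to export in that scene.
 total_s, export_s = 28.0, 20.0
 Application.LogMessage("export share: %d%%" % round(100 * export_s / total_s))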

Re: Redshift3D Render

2014-01-09 Thread Tim Crowson
I wish it was always that simple. For instance, Redshift will perform 
better with more vram, and the Titan comes standard with 6GB, which is 
not even an option on the 780 or 780Ti. If you can get by with less vram 
though, the 780s are pretty sweet. Can't wait to see what they announce 
next.

-Tim

Re: Redshift3D Render

2014-01-09 Thread Emilio Hernandez
Well, switching the displays to the Titan, with the Vsync option off I am
getting the same as Tim: 63.6 fps.

Maybe this Rubikjancovik cube has a trick :)




2014/1/9 Emilio Hernandez emi...@e-roja.com

 I am going to try your suggestions right now.  That will explain a lot of
 things.







Re: Redshift3D Render

2014-01-09 Thread Mirko Jankovic
Just got back. Yes, 60 is vsync on; turn off vsync in the Nvidia control panel.


On Thu, Jan 9, 2014 at 5:52 PM, Stefan Kubicek s...@tidbit-images.com wrote:

   I guess 60fps is the refresh rate of your display, right?  Have you
 disabled VSync in the driver settings?



Re: Redshift3D Render

2014-01-09 Thread Emilio Hernandez
I already did that and still getting the 65 fps limit with the Titan.





Re: Redshift3D Render

2014-01-09 Thread Mirko Jankovic
Hmm, another guess: is it set to RT or ALL in play?



Re: Redshift3D Render

2014-01-09 Thread Emilio Hernandez
It is set to ALL.





Re: Redshift3D Render

2014-01-09 Thread Mirko Jankovic
Yeah, I guess that would be the case, but I just tried... strange, really no
idea. Any chance to run the Cinebench 15 OpenGL test then? If that gives too
low a result as well, then something is not good.



Re: Redshift3D Render

2014-01-09 Thread Emilio Hernandez
Yes, especially if you are getting more than double the speed.

I ran the test and I got 75.78 fps.

Thx





Re: Linking a light (Sun) to a Physical sky ?

2014-01-09 Thread Nicolas Burtnyk
For future reference, to connect an existing sun's light direction with an
existing sky shader in Redshift or Mental Ray, you use the command
ApplySunDirectionOp (undocumented I believe).

E.g. (in python):

Application.ApplySunDirectionOp( "sun.kine.global",
["Passes.Default_Pass.Redshift_PhysicalSky.sun_direction.x",
"Passes.Default_Pass.Redshift_PhysicalSky.sun_direction.y",
"Passes.Default_Pass.Redshift_PhysicalSky.sun_direction.z"] )
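
(A small variation on the same call; only a sketch that reuses the exact
command and parameter paths shown above, with the sun null and sky shader
names passed in as arguments. The defaults are the names from the example,
so swap in whatever your scene actually uses.)

def link_sun_to_sky(sun="sun", sky="Passes.Default_Pass.Redshift_PhysicalSky"):
    # Same (undocumented) ApplySunDirectionOp command as in the example above;
    # only the object/shader names vary.
    Application.ApplySunDirectionOp(
        "%s.kine.global" % sun,
        ["%s.sun_direction.%s" % (sky, axis) for axis in ("x", "y", "z")])

link_sun_to_sky()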



On Thu, Jan 9, 2014 at 4:39 AM, olivier jeannel olivier.jean...@noos.fr wrote:

  There is a Render > Edit > Create RedShift SkyShader.
 I NEVER go through these menus...
 Banging my head on the desk.

 Pulverized by the shame...

 Thank you, oh grandmaster of the infinite knowledge!



 On 09/01/2014 13:14, Rob Chapman wrote:

 Render > Edit > Init Physical Sky? This one constrains the light
 direction into the shader for you (Mental Ray); maybe you could copy the
 expression from here?


 On 9 January 2014 11:54, olivier jeannel olivier.jean...@noos.fr wrote:

 Bump...

 So nobody has a method?


 On 08/01/2014 16:32, olivier jeannel wrote:

 Very dumb question, I did it before, I'm sure...

 How do I link a light (infinite - sun) rotation to a vector (Sun
 direction) in a Physical Sky property page? (So that my light is a sun...)
 I'm in Redshift...








Re: Redshift3D Render

2014-01-09 Thread Mirko Jankovic
Getting around a 95 score in Cinebench with the Titan, and again the 780 in
the home comp is getting a 140 score in Cinebench...
Wondering how the 780 can crush the Titan so much in OpenGL.

The Titan system, besides 4 Titans, has an i7 3930K on an Asus P9X79-E WS MBO,
and the home comp an i7 4770K on an Asus Maximus VI Hero.

Will test some Redshift rendering later to compare a single-GPU 780 vs the Titan.
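
(Putting a number on that gap; plain arithmetic on the Cinebench scores
quoted above, nothing measured here.)
# 140 vs 95 in the Cinebench 15 OpenGL test is roughly a 1.5x gap
# in favour of the 780 box.
titan_score, gtx780_score = 95.0, 140.0
Application.LogMessage("780 vs Titan: %.2fx" % (gtx780_score / titan_score))  # ~1.47x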



Re: Redshift3D Render

2014-01-09 Thread Emilio Hernandez
Now I am starting to scratch my head to figure out the maze of Nvidia...

At least my benchmark said the Titan beat up a Quadro 4000K, ha ha.





Re: Redshift3D Render

2014-01-09 Thread Mirko Jankovic
Cinebench, you mean? So what is the score? :)



Re: Redshift3D Render

2014-01-09 Thread Emilio Hernandez
Dumb question here.  Where do I see the score? I only get an fps result for
OpenGL and a 759 cb in the CPU results in Cinebench.

And I get 75.78 fps in the OpenGL test.




2014/1/9 Mirko Jankovic mirkoj.anima...@gmail.com

 Cinebench you mean? So what is the score? :)


 On Thu, Jan 9, 2014 at 7:47 PM, Emilio Hernandez emi...@e-roja.comwrote:

 Now I am starting to scratch my head to figure out the maze of Nvidia...

 At least my benchmark said the Titan beat up a Quadro 4000K, ha ha.




 2014/1/9 Mirko Jankovic mirkoj.anima...@gmail.com

 Getting around a 95 score in Cinebench with the Titan, and again the 780 in
 the home comp getting a 140 score in Cinebench...
 Wondering how the 780 can beat the Titan so badly in OpenGL.

 The Titan system, besides 4 Titans, has an i7 3930K on an ASUS P9X79-E WS MBO,
 and the home comp has an i7 4770K on an ASUS Maximus VI Hero.

 Will test some Redshift rendering later to compare a single GPU, 780 vs
 Titan.


 On Thu, Jan 9, 2014 at 7:11 PM, Emilio Hernandez emi...@e-roja.comwrote:

 Yes, especially if you are getting more than double the speed.

 I ran the test and I got 75.78 fps

 Thx




 2014/1/9 Mirko Jankovic mirkoj.anima...@gmail.com

 Yeah, I guess that would be the case, but I just tried... strange, really no
 idea. Any chance to run the Cinebench 15 OpenGL test then? If that gives too
 low a result as well, then something is not good.


 On Thu, Jan 9, 2014 at 6:58 PM, Emilio Hernandez emi...@e-roja.comwrote:

 It is set to ALL.




 2014/1/9 Mirko Jankovic mirkoj.anima...@gmail.com

 Hmm, another guess... is it set to RT or ALL in playback?


 On Thu, Jan 9, 2014 at 6:53 PM, Emilio Hernandez 
 emi...@e-roja.comwrote:

 I already did that and I'm still getting the 65 fps limit with the
 Titan.





Re: Redshift3D Render

2014-01-09 Thread Mirko Jankovic
Yes, fps is the score. Hmm, maybe if I get some time I could take one of the
Titans and test it on the Maximus MBO to see if that is the difference...



Re: rigging in xsi vs maya

2014-01-09 Thread Martin
Maya's component editor really sucks because of that lock thing. It really slows 
down my workflow, so I usually deal with the export/import to SI and do my 
weighting there.

But when I can, I skin my model in Maya before sending it to SI, because it has 
a much better default weighting than SI, and Maya bones are easier to deal with 
when you do game assets (and FBX converts them nicely into nulls when 
importing). With the proper settings you can use the default Maya weights for a 
test model where precise weights aren't needed. SI's default weights are pretty 
much useless, not even good enough for a mob or a test character. You need to 
re-weight almost everything manually, and if you work with nulls, it is even worse.

I haven't yet used that heat-map skinning or whatever it's called from the newer 
Maya versions, but it looks cool. (I only use old versions for work.)


Martin
Sent from my iPhone

 Maya has great things for it, but binding and weighting is definitely not one 
 of them. It's pretty bad, actually. Ok, rant off.



RE: Batch function for Ultimapper and Render Map

2014-01-09 Thread Matt Lind
You'll have to modify all references to rendermap to find/call ultimapper 
instead.  That includes the parameters which are adjusted dynamically, such as 
the output filename.  You can get that information by setting your scene 
explorer to show all parameters ('all nodes' filter), using script names and 
showing parameter values, then inspecting the property in the scene explorer.  
The names you see will be the names you use in code.  Select/mark the 
parameters to see their values.  To get the commands to call, click the 
generate maps button and see what pops out in the script log, then convert 
that to the scripting object model if you want to learn it.
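
As a rough illustration (a minimal discovery sketch, not part of the batch 
script), the following JScript dumps every property on the first selected 
object along with its parameter script names, so you can read the exact 
Ultimapper property and parameter names straight off the script log before 
hard-coding anything:

// JScript - logs property and parameter script names of the selected object
var oSelection = Selection;
if ( oSelection.Count <= 0 ) {
    LogMessage( "nothing selected", siError );
} else {
    var oObject = oSelection(0);
    for ( var i = 0; i < oObject.Properties.Count; i++ ) {
        var oProperty = oObject.Properties(i);
        LogMessage( oProperty.FullName + "  (type: " + oProperty.Type + ")" );
        for ( var j = 0; j < oProperty.Parameters.Count; j++ ) {
            // the script name is what you use in code, e.g. "imagefilepath" on rendermap
            LogMessage( "    " + oProperty.Parameters(j).ScriptName );
        }
    }
}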

For example, changing the output filename parameter for rendermap would dump 
something like this to the script log:

SetValue( "cube.rendermap.imagefilepath", "c:\\tmp\\my_image.tga", null );

If you have a reference to the rendermap property in your code, the object 
model equivalent would be this:

oRenderMapProperty.Parameters( "imagefilepath" ).value = 
"c:\\tmp\\my_image.tga";

Notice the parameter name is the same, but having the reference to the 
rendermap property simplifies your code so you don't have to deal with absolute 
paths and strings.

The generate maps button calls the RegenerateMaps() command, so there is no 
object model equivalent for that.  But since it requires a rendermap property 
as an input argument, you can pass the value from the rendermap property 
reference rather than dealing with strings, as illustrated in my example.  
Compare that with what you see in the script log.
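
To make that concrete (a minimal sketch reusing the placeholder "cube" object 
from the example above, not new functionality), the string-path call and the 
object-model-assisted call are interchangeable; the second just spares you from 
building the path string by hand:

// JScript - string path, roughly what the script log shows:
RegenerateMaps( "cube.rendermap" );

// JScript - the same call driven from a property reference:
var oRenderMapProperty = Dictionary.GetObject( "cube.rendermap" );
RegenerateMaps( oRenderMapProperty.FullName );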

Ultimapper will have a few more wrinkles than rendermap, such as the distance 
from the surface to probe with rays, and so on.  It's not a difficult task; 
success is purely a matter of accounting for all the details.

Matt




Re: rigging in xsi vs maya

2014-01-09 Thread Steven Caron
If you do a lot of bipeds and you have a base set of deformers with a good
naming convention, you can skin a generic biped mesh and use GATOR to
transfer the weights... never use default weighting again.






Re: rigging in xsi vs maya

2014-01-09 Thread Emilio Hernandez
I used the default weighting a lot before, but never again, neither in Maya
nor in Softimage.  It is much faster to get a proper weighting using the
inside-out method in both apps.

Which in my case is faster and more controllable in Softimage than in Maya,
especially using the weight editor in Softimage; I had some funky experiences
using the same in Maya.











Re: rigging in xsi vs maya

2014-01-09 Thread Sebastien Sterling
Looks interesting, Eric, but I can only find one page. It's kind of what I had
in mind, but a little more abstract. I suppose it all depends on how much of
this system is basically reusing elements in Maya, and how much is its own
thing, e.g. does it still use Maya bones or does it have its own custom
primitive, locator, null, helper...

Thanks for sharing, I had not seen this before. It's pretty cool :)


On 9 January 2014 15:28, Eric Thivierge ethivie...@hybride.com wrote:

 Sebastian, look at ILM's Block Party 2 rigging system.



 On 1/9/2014 7:53 AM, Sebastien Sterling wrote:

  Why not a node-based rigging system? (Not necessarily an ICE node system,
  but its own thing.) You arrange your nulls, then add rig trees to them in a
  small interface graph where you have nodes for different behaviours like IK,
  FK, HIK, twist, stretch; you plug the nulls in according to the hierarchy
  you want, and each node has its own params so you can expose, lock, or
  modify them in the rig or synoptic. I'm sure such a system wouldn't cover
  everything; I'm often told that rigging is so complex a process that in the
  end the longest, traditional method is the only one that allows for the
  flexibility and reactivity necessary for a pipe. In spite of this I think
  such a system has merit and deserves to go past prototype, if only to offer
  another perspective. It's quite probable that neither XSI's nor Maya's
  architecture can accommodate such a system natively, but plug-ins like Yeti
  are basically their own independent little engines running within the shell
  of a DCC, and the same is true for Fabric, I assume.





Re: rigging in xsi vs maya

2014-01-09 Thread Cesar Saez
We worked out the default weighting for Justin using the lowpoly 'slices'
of the mesh (we needed them anyway) and a bit of smoothing.
Simple stuff, but with 2 clicks (a simple script) we were able to get a
quite decent base to work on.


Re: rigging in xsi vs maya

2014-01-09 Thread Max Evgrafov
Hi guys. I'm now adapting the Norman rig for XSI:
https://www.youtube.com/watch?v=fIUTkJcWPv8
https://www.youtube.com/watch?v=2nUTCBbQaYM
https://www.youtube.com/watch?v=pZajrQCVbLU






-- 
Евграфов Максим.(Summatr)
https://vimeo.com/user3098735/videos
---
Wishing you a good mood!!! :-)


Re: rigging in xsi vs maya

2014-01-09 Thread Max Evgrafov
There are many free rigs for Maya and few free rigs for XSI.
Justice must prevail!






-- 
Евграфов Максим.(Summatr)
https://vimeo.com/user3098735/videos
---
 Wishing you a good mood!!! :-)


Re: rigging in xsi vs maya

2014-01-09 Thread Sebastien Sterling
Lovely stuff, Max!





Re: rigging in xsi vs maya

2014-01-09 Thread Sebastien Sterling
Looks like you really brought him over! Is it identical to the Maya version?
At any rate, I salute you, sir!







Re: rigging in xsi vs maya

2014-01-09 Thread David Gallagher


Great! I love Norman.






Re: rigging in xsi vs maya

2014-01-09 Thread Martin Yara
In games we have different naming conventions, bone counts and structures,
model and unit sizes, character proportions, etc., so a base mesh is not very
useful outside its own project. But of course I do use GATOR and Maya's copy
weights to accelerate my character mass production when it is possible.

I also use the SI default weighting with 1 deformer as a base, enforce a bone
limit if needed, then retouch and smooth where needed. SI's envelope with more
than 1 deformer is unusable because you don't have any control over the
smoothing distance or dropoff rate. In Softimage's Set Envelope you have what,
a number-of-skeleton-objects option and a Method option, and I haven't yet
figured out in what situation I could use the normal-based one.

We use only nulls in games (Softimage), not bones (bones are only for
rigging), and therefore the default weights suck even harder, because each
null's position is taken as the bone center, so it is totally unusable.
Re-weight from scratch.
I could slice the character, envelope, smooth, and GATOR like Cesar said, I
guess. Or I could convert my nulls to bones. I may have to write something to
convert nulls to SI bones. Does anyone have a null-to-bone script?

When I have to do more than 1 similar character, I usually create a very
low poly base character and then use Gator / Copy Weights. But if not, I
have to set all the weights manually without a decent base, or do it in
Maya.

I remember that when I was a junior learning Maya, I somehow messed up the
weights with a couple of hours to deadline to deliver a few still renders.
The character was a monster with a few tentacles and all that weird stuff, so
copying weights wasn't an option. The senior came over, increased the default
dropoff rate, hit apply, and the character was good enough for posing and
delivery. With heat map binding now, it seems it has gotten better. And that's
the only weight-related thing that I like in Maya.

Another thing I like in Maya is that you can lock everything, even point
positions (in SI I had to write a script and an ICE compound to lock points),
but that's another story.

Martin


Re: Redshift3D Render

2014-01-09 Thread Mirko Jankovic
Just played a bit with overclocking...
I pushed PCI GEN in the BIOS back to auto instead of GEN 3, so it is now PCIe
2.0 x16 instead of 3.0. That alone gave me a slightly higher score in
Cinebench, and after overclocking the CPU to 4.5 the new Cinebench result for
the Titan is 100, and the script in Softimage gives me ~200 fps.
Oh well...

