Re: White House Down

2013-07-02 Thread Eugen Sares

Thanks a lot for this, Mathieu!
Always nice to hear when Softimage is used on such high-profile titles.
That proves a lot technically, and it is good for the spirit, too...

Autodesk wants to use this for advertising...
Also, what you say about Fabric Engine instead of Nuke is amazing.

If I may ask,
which version did you use, and how many seats?
Any serious trouble you ran into?
So you built your own crowd system... what's the reason for not using 
the built-in system?




On 02.07.2013 18:19, Mathieu Leclaire wrote:

Hi guys,

I just wanted to share some information on the shots we did for White 
House Down.


First off, there's an article on fxguide that explains a bit of what we did:


http://www.fxguide.com/featured/action-beats-6-scenes-from-white-house-down/ 




And here are some more details about how we did it:

We built upon our ICE-based City Generator that we created for Spy 
Kids 4. In SK4, all the buildings were basically a bunch of instances 
(windows, walls, doors, etc.) put together using Softimage ICE logic to 
build very generic buildings. ICE was also used to create the 
streetscape, populate the city with props (lamp posts, traffic lights, 
garbage cans, bus stops, etc.), and distribute static trees and car 
traffic. Everything was instanced, so memory consumption was very low 
and render times were minimal (20-30 minutes a frame in Mental Ray at 
the time). The city in Spy Kids 4 was very generic and the cameras 
were very high up in the sky, so we didn't care as much about having a 
lot of detail and interaction on the ground level, and we didn't 
really need specific and recognizable buildings either.
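To illustrate why an instance-based generator stays so light in memory, here is a minimal sketch (hypothetical asset names and made-up polygon counts, not the actual ICE setup): each placement stores only an asset name plus a grid position, while the master geometry exists once.

```python
import random

# Master assets exist once; polygon counts are made-up placeholders.
# Per-placement cost is just a name and a transform, never copied geometry.
MASTER_ASSETS = {"window": 200, "wall": 120, "door": 150}

def generate_facade(n_floors, n_bays, seed=0):
    """Scatter instance placements over a building facade grid."""
    rng = random.Random(seed)
    placements = []
    for floor in range(n_floors):
        for bay in range(n_bays):
            # Ground floor gets one door in the middle bay.
            if floor == 0 and bay == n_bays // 2:
                name = "door"
            else:
                name = rng.choice(["window", "wall"])
            placements.append((name, bay, floor))
    return placements

def placement_count(placements):
    # Memory grows with the number of placements,
    # not with the polygon count of the masters.
    return len(placements)
```

A 5-floor, 4-bay facade yields 20 lightweight placements regardless of how heavy the three master assets are, which is the same reason a whole city of instances rendered in 20-30 minutes a frame.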


The challenge in White House Down was that it was set in Washington 
and we needed to recognize very specific landmarks, so it had to be 
a lot less generic. The action also happens very close to the ground, 
so we needed a lot more detail at street level, and there needed to 
be a lot of interaction with the helicopters passing by.


So we modeled a lot more specific assets to add more variation (very 
specific buildings and recognizable landmarks, more props, more 
vegetation, more cars, etc.). We updated our building generator to 
allow more customization. We updated our props and cars distribution 
systems. They were all still ICE-based instances, but we added a lot 
more controls to allow our users to easily manage such complex scenes. 
We had a system to automate the texturing of cars and props based on 
rules, so we could texture thousands of assets very quickly. Everything 
was also converted to Stand-Ins to keep our working scenes very light 
and leave the heavy lifting to the renderer.
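The rule-driven texturing can be pictured like this (a sketch with invented rule and asset names, not the studio's actual system): each rule pairs a predicate with a texture set, and the first matching rule wins, so thousands of assets get textured in one pass.

```python
# Hypothetical rules: (predicate, texture set); first match wins.
RULES = [
    (lambda a: a["type"] == "car" and a["age"] > 10, "car_weathered"),
    (lambda a: a["type"] == "car", "car_clean"),
    (lambda a: a["type"] == "prop", "prop_generic"),
]

def assign_texture(asset, rules=RULES, default="untextured"):
    """Return the texture set for the first rule the asset satisfies."""
    for predicate, texture in rules:
        if predicate(asset):
            return texture
    return default

def texture_assets(assets, rules=RULES):
    # Batch assignment over an entire scene's asset list.
    return {a["name"]: assign_texture(a, rules) for a in assets}
```

The appeal of the approach is that adding a variation (say, weathered taxis) means adding one rule, not retouching thousands of assets by hand.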


Which brings me to Arnold.

We knew the trick to making these shots as realistic as possible would 
be to add as much detail as we possibly could. Arnold is so good at 
handling a lot of geometry, and we were all very impressed by how much 
Arnold could chew (we were managing somewhere around 500-600 million 
polygons at a time), but it still wasn't going to be enough, so we 
built a deep image compositing pipeline for this project to allow us 
to add much more detail to the shots.


Every asset was built in low and high resolution. We basically 
loaded whatever elements we were rendering in a layer as high 
resolution, while the rest of the scene assets were all low resolution, 
visible only to secondary rays (to cast reflections, 
shadows, GI, etc.). We could then combine all the layers through deep 
compositing and extract whatever layer we desired without 
worrying about generating the proper hold-out mattes at render time 
(which would have been impossible to manage at that level of detail).
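In spirit, merging render layers through deep compositing works like this (a toy sketch treating a deep pixel as a list of (depth, premultiplied RGBA) samples — not the studio's actual pipeline): merging is just concatenating samples, sorting by depth, and compositing front to back, which is why no hold-out mattes are needed.

```python
def merge_deep_pixel(*layers):
    """Merge the deep samples of one pixel from several render layers.

    Each layer is a list of (depth, r, g, b, a) samples with
    premultiplied color. Sorting by depth makes occlusion fall out of
    front-to-back 'over' compositing automatically.
    """
    samples = sorted(s for layer in layers for s in layer)
    r = g = b = a = 0.0
    for _depth, sr, sg, sb, sa in samples:
        w = 1.0 - a  # transparency remaining in front of this sample
        r += w * sr
        g += w * sg
        b += w * sb
        a += w * sa
    return (r, g, b, a)
```

An opaque sample in one layer automatically holds out anything behind it in every other layer, so each layer can be rendered, and re-extracted, in isolation.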


In one shot, we calculated that once all the layers were merged 
together using our deep image pipeline, it added up to just over 4.2 
billion polygons... though that number is not quite exact, since we 
always loaded all assets as low-res in memory except for the visible 
elements that were being rendered in high resolution. We have a lot 
of low-res geometry that is repeated in many layers, so the exact 
number is slightly lower than the 4.2 billion polygons reported, but 
still... we ended up managing a lot of data for that show.


Render times were also very reasonable, varying from 20 minutes to 
2-3 hours per frame rendered at 3K. Once we added all the layers in 
one shot, it came to somewhere between 10-12 hours per frame.


We started out using Nuke to manipulate our deep images, but we ended 
up creating an in-house custom standalone application using Creation 
Platform from Fabric Engine to accelerate the deep image 
manipulations. What took hours to manage in Nuke could now be done in 
minutes, and we could also exploit our entire render farm to 
extract the desired layers when needed.
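The farm speed-up is essentially data parallelism over frames; a minimal sketch of the idea (hypothetical function and file names, with a thread pool standing in for farm blades):

```python
from concurrent.futures import ThreadPoolExecutor

def extract_layer(frame):
    # Stand-in for the per-frame deep-image layer extraction;
    # in production, each frame would be a job dispatched to a farm blade.
    return frame, "layer_%04d.exr" % frame

def extract_sequence(frames, workers=8):
    """Extract the desired layer for every frame in parallel."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(extract_layer, frames))
```

Because each frame's extraction is independent, throughput scales with the number of machines you can throw at it, which is how hours of Nuke work shrank to minutes.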


Finally, the last layer of complexity came from the interaction 
between the helicopters and the environment. We simulated and baked 
rotor wash wind fields of air being pushed by those animated Black 
Hawks using Exocortex Slipstream. That wind field was then used to 
simulate dust, debris, tree deformations and crowd cloth simulations. 
Since the trees needed to be simulated, we created a custom ICE 
strand-based tr

Re: Weird warning

2013-07-02 Thread Orlando Esponda
Take a look here:

http://xsisupport.com/2011/12/21/warning-3000-objects-were-not-saved-normally/



-- 

IMPRINT:
PiXABLE STUDIOS GmbH & Co.KG, Domicile: Dresden, Court of Registry: Dresden,
Company Registration Number: HRA 6857, General Partner: Lenhard & Barth
Verwaltungsgesellschaft mbH, Domicile: Dresden, Court of Registry: Dresden,
Company Registration Number: HRB 26501, Chief Executive Officers: Frank Lenhard,
Tino Barth


--

This e-mail may contain confidential and/or privileged information. If you 
are not the intended
recipient (or have received this e-mail in error) please notify the sender 
immediately and destroy
this e-mail. Any unauthorized copying, disclosure or distribution of the 
material in this e-mail is
strictly forbidden. 


Re: Weird warning

2013-07-02 Thread Christian Freisleder

this will help.
http://xsisupport.com/2011/12/21/
christian






RE: Weird warning

2013-07-02 Thread Daniel Kim
I exported the model and imported it into a new scene, and the error messages
no longer pop up.
I guess my scene file was corrupted or something.

Thanks, Matt




RE: Weird warning

2013-07-02 Thread Daniel Kim
It keeps popping up :/

I just exported the model and imported it into a new scene, and the error
message is gone.

Thanks





Re: Weird warning

2013-07-02 Thread Alan Fregtman
If you reopen your saved scene, those disconnected things will get cleaned
up for you.





RE: Weird warning

2013-07-02 Thread Matt Lind
Could be a number of causes, but basically the rug was pulled out from under 
the uvspace (texture projection) leaving it orphaned in the scene.  Softimage 
doesn't know what to do with it, so it spits out the warning.


Matt
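For what it's worth, the offending uvspace IDs can be pulled out of that warning text with a few lines of stdlib Python (a log-parsing convenience sketch, not a Softimage API call), so you know which projections to hunt down:

```python
import re

def floating_uvspaces(log_text):
    """Collect the uvspace IDs that the save warnings report as disconnected."""
    pattern = re.compile(r"\[uvspace<(\d+)>\] was saved, but is disconnected")
    return [int(m) for m in pattern.findall(log_text)]
```

Feeding it the script-log text from Daniel's save would return the three orphaned projection IDs in the order they were reported.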






Weird warning

2013-07-02 Thread Daniel Kim
Hi guys.

I get a weird warning from Softimage:

' WARNING : 3000 - Save: [3] objects were not saved normally
' WARNING : 3000 - -- [uvspace<1279>] was saved, but is disconnected from the 
scene. (Floating object)
' WARNING : 3000 - -- [uvspace<1008>] was saved, but is disconnected from the 
scene. (Floating object)
' WARNING : 3000 - -- [uvspace<1259>] was saved, but is disconnected from the 
scene. (Floating object)

I've never seen this one before. One of my teammates gets this message too, but
we have no idea what to check.
If anyone knows, please post.

Thanks
Daniel



Re: White House Down

2013-07-02 Thread Andre De Angelis
Stunning work, Mathieu.



Re: [SItoA] White House Down

2013-07-02 Thread Ahmidou Lyazidi
Thanks for the details Mathieux, very much appreciated!

---
Ahmidou Lyazidi
Director | TD | CG artist
http://vimeo.com/ahmidou/videos
http://www.cappuccino-films.com



Why does the factory "Verlet Framework" seem to be a frame ahead?

2013-07-02 Thread Alan Fregtman
Hey guys,

I've worked around this issue, but I'm very curious why it's happening:

If you use the "Verlet Framework" compound to do some jiggle, using the
setup that points to another target mesh, the jiggly bits seem to be a
frame ahead of the non-jiggly parts.

Any ideas why that is? I worked around it by subtracting PointVelocity from
the PointPosition in Post-Simulation, which makes it lag behind instead of
ahead, and that looks alright.

Cheers,

   -- Alan
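Alan's Post-Simulation fix amounts to displaying each point one frame behind the raw simulation; as a plain-math sketch (scalar positions for brevity, not actual ICE nodes, and assuming velocity is expressed per frame):

```python
def lag_corrected(point_positions, point_velocities, dt=1.0):
    """Shift simulated points back by one frame's worth of velocity.

    Mirrors subtracting PointVelocity from PointPosition in
    Post-Simulation: the jiggle then trails the target by a frame
    instead of leading it.
    """
    return [p - v * dt for p, v in zip(point_positions, point_velocities)]
```

If PointVelocity were in units per second rather than per frame, dt would be 1/fps instead of 1.0.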


Re: Change the filters of the Pass Explorer in the Render Manager?

2013-07-02 Thread Stephen Blair

You'd probably have to change the RV_Init for the Render Manager view.

You could press U to display the current pass and your partitions.








Change the filters of the Pass Explorer in the Render Manager?

2013-07-02 Thread Tim Crowson
I'm working with a custom layout and noticed that the Render Manager's 
Pass Explorer on the far left is filtered a bit too conservatively for 
my taste. I typically set the Pass Explorer to show 'All + Animatable 
Parameters.'  I'd like to get something similar in the Pass Explorer 
that is nested within the Render Manager. Is this possible? As it is 
now, the Render Manager won't even show me my Partitions! Egads!


--
Tim Crowson
Lead CG Artist

Magnetic Dreams, Inc.
2525 Lebanon Pike, Building C. Nashville, TN 37214
Ph 615.885.6801 | Fax 615.889.4768 | www.magneticdreams.com
tim.crow...@magneticdreams.com



Re: [SItoA] White House Down

2013-07-02 Thread Greg Punchatz
Sounds brilliant. I need to see the movie now.



Sent from my iPhone


RE: ICE annotation?

2013-07-02 Thread Ponthieux, Joseph G. (LARC-E1A)[LITES]
Yes. That's the one.

--
Joey Ponthieux
LaRC Information Technology Enhanced Services (LITES)
Mymic Technical Services
NASA Langley Research Center
__
Opinions stated herein are strictly those of the author and do not
represent the opinions of NASA or any other party.

From: softimage-boun...@listproc.autodesk.com 
[mailto:softimage-boun...@listproc.autodesk.com] On Behalf Of Matt Morris
Sent: Tuesday, July 02, 2013 1:02 PM
To: softimage@listproc.autodesk.com
Subject: Re: ICE annotation?

There's a group comment node that you can use - I think that's what you're 
after?

On 2 July 2013 17:59, Ponthieux, Joseph G. (LARC-E1A)[LITES] 
mailto:j.ponthi...@nasa.gov>> wrote:
I recall back when ICE first came out that there was a way to group together a 
bunch of ICE nodes, regardless of how they were connected, for the purpose of 
annotation or explanation. Can't seem to figure out how to do that now. Is that 
still possible?

--
Joey Ponthieux
LaRC Information Technology Enhanced Services (LITES)
Mymic Technical Services
NASA Langley Research Center
__
Opinions stated herein are strictly those of the author and do not
represent the opinions of NASA or any other party.




--
www.matinai.com


Re: ICE annotation?

2013-07-02 Thread Matt Morris
There's a group comment node that you can use - I think that's what you're
after?


On 2 July 2013 17:59, Ponthieux, Joseph G. (LARC-E1A)[LITES] <
j.ponthi...@nasa.gov> wrote:

> I recall back when ICE first came out that there was a way to group
> together a bunch of ICE nodes, regardless of how they were connected, for the
> purpose of annotation or explanation. Can’t seem to figure out how to do
> that now. Is that still possible?
>
>
> --
>
> Joey Ponthieux
>
> LaRC Information Technology Enhanced Services (LITES)
>
> Mymic Technical Services
>
> NASA Langley Research Center
>
> __
>
> Opinions stated herein are strictly those of the author and do not 
>
> represent the opinions of NASA or any other party.
>
>



-- 
www.matinai.com


ICE annotation?

2013-07-02 Thread Ponthieux, Joseph G. (LARC-E1A)[LITES]
I recall back when ICE first came out that there was a way to group together a 
bunch of ICE nodes, regardless of how they were connected, for the purpose of 
annotation or explanation. Can't seem to figure out how to do that now. Is that 
still possible?

--
Joey Ponthieux
LaRC Information Technology Enhanced Services (LITES)
Mymic Technical Services
NASA Langley Research Center
__
Opinions stated herein are strictly those of the author and do not
represent the opinions of NASA or any other party.



White House Down

2013-07-02 Thread Mathieu Leclaire

Hi guys,

I just wanted to share some information on the shots we did for White 
House Down.


First off, there's an article in fxguide that explains a bit what we did :

http://www.fxguide.com/featured/action-beats-6-scenes-from-white-house-down/


And here are some more details about how we did it:

We built upon our ICE based City Generator that we created for Spy Kids 
4. In SK4, all the buildings were basically a bunch of instances 
(windows, walls, doors, etc.) put together using Softimage ICE logic to 
build very generic buildings. ICE was also used to create the 
streetscape, populate the city with props (lamp posts, traffic lights, 
garbage cans, bus stops, etc.), distribute static trees and car traffic. 
Everything was instanced, so memory consumption was very low and render 
times were minimal (20-30 minutes a frame in Mental Ray at the time). 
The city in Spy Kids 4 was very generic and the cameras were very high 
up in the sky, so we didn't care as much about having a lot of detail 
and interaction on the ground level, and we didn't really need specific 
and recognizable buildings either.


The challenge in White House Down was the fact that it was Washington, 
and very specific landmarks had to be recognizable, so it needed to be a 
lot less generic. The action also happens very close to the ground, so 
we needed a lot more detail on the ground level, and there needed to 
be a lot of interaction with the helicopters that are passing by.


So we modeled a lot more specific assets to add more variation (very 
specific buildings and recognizable landmarks, more props, more 
vegetation, more cars, etc.). We updated our building generator to allow 
more customization. We updated our props and cars distribution systems. 
They were all still ICE based instances, but we added a lot more 
controls to allow our users to easily manage such complex scenes. We had 
a system to automate the texturing of cars and props based on rules, so 
we could texture thousands of assets very quickly. Everything was also 
converted to Stand-Ins to keep our working scenes very light and leave 
the heavy lifting to the renderer.
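A rule-based texturing pass like the one described above can be sketched roughly as follows. This is a toy illustration only, not Hybride's actual system; every name, pattern, and texture file here is invented:

```python
# Hedged sketch of rule-driven texture assignment: rules map asset-name
# patterns to candidate texture sets, and each asset picks one
# deterministically so re-runs texture the city identically.
import fnmatch
import random
import zlib

RULES = [
    ("car_*",  ["car_red.jpg", "car_blue.jpg", "car_grey.jpg"]),
    ("lamp_*", ["lamp_metal.jpg"]),
]

def assign_texture(asset_name):
    # stable per-asset seed (crc32, not hash(), which is randomized per run)
    rng = random.Random(zlib.crc32(asset_name.encode()))
    for pattern, textures in RULES:
        if fnmatch.fnmatch(asset_name, pattern):
            return rng.choice(textures)
    return "default.jpg"

print(assign_texture("lamp_003"))   # lamp_metal.jpg
print(assign_texture("statue_01"))  # default.jpg
```

The deterministic seed is the important design point: thousands of instances get varied textures, yet every render of the scene agrees on which asset wears which texture.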


Which brings me to Arnold.

We knew the trick to making these shots as realistic as possible would 
be to add as much detail as we possibly could. Arnold is so good at 
handling a lot of geometry, and we were all very impressed by how much 
Arnold could chew (we were managing somewhere around 500-600 million 
polygons at a time), but it still wasn't going to be enough, so we built 
a deep image compositing pipeline for this project to allow us to add 
so much more detail to the shots.


Every asset was built in low and high resolution. So we basically 
loaded whatever elements we were rendering in a layer as high 
resolution, while the rest of the scene assets were all low resolution, 
visible only through secondary rays (so as to cast reflections, 
shadows, GI, etc.). We could then combine all the layers through deep 
compositing and could extract whatever layer we desired without worrying 
about generating the proper hold-out mattes at render time (which would 
have been impossible to manage at that level of detail).
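The reason deep compositing removes the need for baked hold-out mattes can be shown with a toy single-pixel sketch (this is a generic illustration of the technique, not Hybride's pipeline): each layer keeps per-pixel samples with depth, so layers combine correctly after the fact by sorting on z.

```python
# Toy deep merge for one pixel. A sample is (z, alpha, color);
# compositing uses the standard front-to-back "over" operation.
def deep_merge(*layers):
    """Merge per-pixel deep sample lists from several layers, nearest first."""
    samples = sorted(s for layer in layers for s in layer)
    color, alpha = 0.0, 0.0
    for z, a, c in samples:
        color += (1.0 - alpha) * a * c   # farther samples are attenuated
        alpha += (1.0 - alpha) * a
    return color, alpha

# A semi-transparent building sample in front of an opaque ground sample:
building = [(10.0, 0.5, 1.0)]
ground = [(20.0, 1.0, 0.2)]
print(deep_merge(building, ground))  # (0.6, 1.0)
```

Because each layer carries its own depth samples, any layer can be pulled out or re-merged later without re-rendering hold-outs for everything in front of it.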


In one shot, we calculated that once all the layers were merged 
together using our deep image pipeline, it added up to just over 4.2 
billion polygons... though that number is not quite exact, since we 
always loaded all assets as low-res in memory except for the visible 
elements that were being rendered in high resolution. We have a lot of 
low-res geometry that is repeated in many layers, so the exact number is 
slightly lower than the 4.2 billion polygons reported, but still... we 
ended up managing a lot of data for that show.


Render times were also very reasonable, varying from 20 minutes to 2-3 
hours per frame rendered at 3K. Once we added all the layers in one 
shot, it came to somewhere between 10-12 hours per frame.


We started out using Nuke to manipulate our deep images, but we ended up 
creating an in-house custom standalone application using Creation 
Platform from Fabric Engine to accelerate the deep image manipulations. 
What took hours to manage in Nuke could now be done in minutes, and we 
could also exploit our entire render farm to extract the desired 
layers when needed.


Finally, the last layer of complexity came from the interaction between 
the helicopters and the environment. We simulated and baked rotor wash 
wind fields of air being pushed by those animated Black Hawks using 
Exocortex Slipstream. That wind field was then used to simulate dust, 
debris, tree deformations and crowd cloth simulations. Since the trees 
needed to be simulated, we created a custom ICE strand based tree system 
to deform the branches and simulate the leaves' movement from that wind 
field. Since the trees were all strand based, they were very light to 
manage and render. We had also created a custom ICE based crowd system 
for the movie Jappeloup th

White House Down, Hybride ICE

2013-07-02 Thread Eric Thivierge

Hey all,

New article about the FX work on White House Down is out, and they talk a 
little about Hybride's work using ICE. Mathieu Leclaire can chime in for 
any additional info / questions if you have any, but I thought I'd throw 
the link in for anyone interested.


http://www.fxguide.com/featured/action-beats-6-scenes-from-white-house-down/

--
 
Eric Thivierge

===
Character TD / RnD
Hybride Technologies
 





Re: Expression help please

2013-07-02 Thread Eric Thivierge
Most likely you'll need to linearly interpolate between the original 
position of the null and the one you're moving a percentage to.
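A minimal numeric sketch of that idea (all positions below are made-up example values; in Softimage this logic would live in an expression or a scripted operator):

```python
# Lerp sketch: null2 moves 50% of the distance null1 has travelled
# from its rest position, along one axis.
def lerp(a, b, t):
    """Linear interpolation between a and b by factor t."""
    return a + (b - a) * t

null1_rest, null1_now = 2.0, 8.0   # null1 moved 6 units in x
null2_rest = 1.0
# target = null2's rest position shifted by null1's full displacement
target = null2_rest + (null1_now - null1_rest)
null2_now = lerp(null2_rest, target, 0.5)
print(null2_now)  # 4.0
```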



Eric Thivierge
===
Character TD / RnD
Hybride Technologies


On July-02-13 8:59:37 AM, Vladimir Jankijevic wrote:

you are still not clear about what the first null is moving relative to.
How should the second null know what distance the first null has
moved, if you don't specify the point from which you measure the
distance? Think about it.






Re: Expression help please

2013-07-02 Thread Ben Beckett
Ok, all's good, I made it work. Thanks!


On 2 July 2013 14:08, Alok Gandhi  wrote:

> In case you want to know the distance traveled from the current position
> of the null, what you need is a scripted operator. It will not be possible
> with expressions.
>
>
> On Tue, Jul 2, 2013 at 8:59 AM, Vladimir Jankijevic <
> vladi...@elefantstudios.ch> wrote:
>
>> you are still not clear about what the first null is moving relative to. How
>> should the second null know what distance the first null has moved, if you
>> don't specify the point from which you measure the distance? Think about it.
>>
>>
>>
>
>
> --
>


Re: Expression help please

2013-07-02 Thread Alok Gandhi
In case you want to know the distance traveled from the current position of
the null, what you need is a scripted operator. It will not be possible
with expressions.


On Tue, Jul 2, 2013 at 8:59 AM, Vladimir Jankijevic <
vladi...@elefantstudios.ch> wrote:

> you are still not clear about what the first null is moving relative to. How
> should the second null know what distance the first null has moved, if you
> don't specify the point from which you measure the distance? Think about it.
>
>
>


--


Re: Expression help please

2013-07-02 Thread Vladimir Jankijevic
you are still not clear about what the first null is moving relative to. How
should the second null know what distance the first null has moved, if you
don't specify the point from which you measure the distance? Think about it.


Re: Expression help please

2013-07-02 Thread Vladimir Jankijevic
moved according to what? The scene origin? Then it would
be null1.kine.global.posx*0.5, and so on
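As a minimal sketch, the three expression strings Vladimir describes (one per axis, assuming null1 started at the scene origin) can be built like this; in Softimage you would paste them into null2's pos fields or apply them via the SetExpr command:

```python
# Hypothetical helper: generate the 50% expressions for null2's
# global position parameters. Assumes null1 started at the origin.
exprs = ["null1.kine.global.%s * 0.5" % axis
         for axis in ("posx", "posy", "posz")]
for e in exprs:
    print(e)
# null1.kine.global.posx * 0.5
# null1.kine.global.posy * 0.5
# null1.kine.global.posz * 0.5
```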


On Tue, Jul 2, 2013 at 2:20 PM, Ben Beckett  wrote:

> Hi all
>
> If I had 2 nulls, null1 and null2 and I want null2 to move 50% of the
> distance null1 has moved
>
> what would the expression be, that would write in the kine.global.posx,
> posy & posz of null2
>
> This sounds like a good old-fashioned maths question!!
>
> Any help would be brill.
>
> Cheers
> Ben
>



-- 
---
Vladimir Jankijevic
Technical Direction

Elefant Studios AG
Lessingstrasse 15
CH-8002 Zürich

+41 44 500 48 20

www.elefantstudios.ch
---


Expression help please

2013-07-02 Thread Ben Beckett
Hi all

If I had 2 nulls, null1 and null2 and I want null2 to move 50% of the
distance null1 has moved

what would the expression be, that would write in the kine.global.posx,
posy & posz of null2

This sounds like a good old-fashioned maths question!!

Any help would be brill.

Cheers
Ben


Re: playing animated gif using pyqt in softimage

2013-07-02 Thread Angeline
it's working now thanks.


Regards
Angeline

Junior TD
One Animation


On 2 July 2013 17:04, Cristobal Infante  wrote:

> Does it work if you use a normal jpg?
>
> I ran into a similar issue where the path of the image was the problem.
>
> When using Python you need to use double backslashes (or forward slashes) for paths.
>
>
>
>
> On 2 July 2013 09:31, Angeline  wrote:
>
>>
>> Hi,
>>
>> I'm having some trouble trying to play animated gifs using a QMovie
>> attached to a QLabel in a dialog in Softimage.
>>
>> I'm using setMovie() on the QLabel to display the animated gif.
>>
>> When I run the plugin, the window is empty despite starting the movie on
>> load. The same code works when I run it in a Python shell without
>> the Softimage plugin code.
>>
>> Thanks
>> Angeline
>>
>> Junior TD
>> One Animation
>>
>
>


Re: playing animated gif using pyqt in softimage

2013-07-02 Thread Cristobal Infante
Does it work if you use a normal jpg?

I ran into a similar issue where the path of the image was the problem.

When using Python you need to use double backslashes (or forward slashes) for paths.
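A quick illustration of the backslash issue (the paths are invented examples):

```python
# In a normal (non-raw) Python string a single backslash starts an
# escape sequence, so "C:\new" silently turns "\n" into a newline.
# Three safe spellings of the same Windows path:
p1 = "C:\\textures\\diffuse.jpg"   # doubled backslashes
p2 = r"C:\textures\diffuse.jpg"    # raw string
p3 = "C:/textures/diffuse.jpg"     # forward slashes work in most Windows APIs
print(p1 == p2)  # True
print("C:\new" == "C:\\new")  # False - "\n" became a newline character
```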




On 2 July 2013 09:31, Angeline  wrote:

>
> Hi,
>
> I'm having some trouble trying to play animated gifs using a QMovie
> attached to a QLabel in a dialog in Softimage.
>
> I'm using setMovie() on the QLabel to display the animated gif.
>
> When I run the plugin, the window is empty despite starting the movie on
> load. The same code works when I run it in a Python shell without
> the Softimage plugin code.
>
> Thanks
> Angeline
>
> Junior TD
> One Animation
>


Re: playing animated gif using pyqt in softimage

2013-07-02 Thread philipp.oeser
Hi Angeline,

I recently had problems displaying image formats other than PNGs as well [there was
a thread on this mailing list called "PyQT for Softimage: problem with image
formats? (was: problem with QIcons)"].

You could try adding the PyQt image plugins (e.g. qgif4.dll) "by hand" like
this:
==
import sip
from PyQt4 import QtGui
from PyQt4.QtGui import QWidget, QDialog

...
sianchor = Application.getQtSoftimageAnchor()

# now add the image-format plugin libs (e.g. qgif4.dll) by hand
QtGui.QApplication.instance().addLibraryPath('/pathtopy/sitepackages/PyQt4/plugins')

sianchor = sip.wrapinstance(long(sianchor), QWidget)
oDialog = QDialog(sianchor)
...
==

Hope this helps? (Not sure if it solves the animated gif thing, but it did solve
displaying JPGs etc.)




> Angeline  wrote on 2 July 2013 at 10:31:
> 
> 
>  Hi,
> 
>  I'm having some trouble trying to play animated gifs using a QMovie
> attached to a QLabel in a dialog in Softimage.
> 
>  I'm using setMovie() on the QLabel to display the animated gif.
> 
>  When I run the plugin, the window is empty despite starting the movie on
> load. The same code works when I run it in a Python shell without the
> Softimage plugin code.
> 
>  Thanks
>  Angeline
> 
>  Junior TD
>  One Animation
> 



Philipp Oeser
Pipeline Engineer
T   +49 40 - 450 120 - 401
www.nhb.de 


nhb video GmbH | nhb ton GmbH

Alsterglacis 8 | 20354 Hamburg

nhb video GmbH, HRB 61617
Geschäftsführer: Michael Vitzthum, Matthias Rewig
nhb ton GmbH, HRB 73877
Geschäftsführer: Michael Vitzthum, Matthias Rewig
nhb is Dolby approved
This e-mail may contain confidential and/or privileged information. Any unauthorised
disclosure of the material in this e-mail is forbidden.

playing animated gif using pyqt in softimage

2013-07-02 Thread Angeline
Hi,

I'm having some trouble trying to play animated gifs using a QMovie
attached to a QLabel in a dialog in Softimage.

I'm using setMovie() on the QLabel to display the animated gif.

When I run the plugin, the window is empty despite starting the movie on
load. The same code works when I run it in a Python shell without
the Softimage plugin code.

Thanks
Angeline

Junior TD
One Animation