Re: Texture size in games?

2013-05-15 Thread Stefan Andersson
Also, if an old dog like me who knows everything and nothing wanted to jump in, where
would be the best starting point? Converting existing knowledge, that is.

Or rather don't learn this, totally useless :)

Regards
Stefan


-- Sent from a phone booth in purgatory

On May 15, 2013, at 8:31, Stefan Andersson sander...@gmail.com wrote:

 Hi all!
 This might be a strange question, but what would be the normal texture
 size today when creating content for games?
 I'm trying to learn a new profession and need to test out the basics
 at home before I jump out into the void :)

 Also, would unity be a good practice platform? Or any other recommendations?

 I'm trying out something new here, so any suggestions and tips are welcomed!

 Best regards
 Stefan


 -- Sent from a phone booth in purgatory


Texture size in games?

2013-05-15 Thread Stefan Andersson
Hi all!
This might be a strange question, but what would be the normal texture
size today when creating content for games?
I'm trying to learn a new profession and need to test out the basics
at home before I jump out into the void :)

Also, would unity be a good practice platform? Or any other recommendations?

I'm trying out something new here, so any suggestions and tips are welcomed!

Best regards
Stefan


-- Sent from a phone booth in purgatory


RE: custom ICENode - questions and request for example source code

2013-05-15 Thread Matt Lind
well, let's answer the questions first:

1) Does anybody have source code they are willing to share for custom ICE Nodes 
that deal with topology and/or geometry?

2) Does the lack of reference, location, and execute ports for custom ICE nodes 
mean I cannot cast a location search from inside an ICE node?



To answer your question:

Imagine two nulls and two NURBS surfaces.  The task is to find the nearest 
location from the first null to the first surface.  At that location, build an 
orthonormal basis and compute the local transform of the null relative to that 
basis.  Then reconstruct that relationship by applying it to the 2nd null 
relative to the 2nd surface, assuming both surfaces use uniform 
parameterization, not non-uniform as is the Softimage default.  Version 2: 
extend it to operate on vertices of polygon meshes instead of nulls.  I have a 
working version, but it is slow and not very stable.
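
(For illustration only - a minimal NumPy sketch of the basis/relative-transform math the setup has to reproduce; the closest-point positions, normals and tangents are stand-in values, not anything produced by the actual compound:)

    # Sketch of "build a frame at a surface location, express a point in it,
    # then rebuild the point relative to a second surface".  In the real setup
    # the origins and frame vectors come from evaluating the surfaces.
    import numpy as np

    def orthonormal_basis(normal, tangent):
        """Gram-Schmidt a (normal, tangent) pair into a 3x3 rotation matrix."""
        z = normal / np.linalg.norm(normal)
        x = tangent - np.dot(tangent, z) * z      # strip the normal component
        x /= np.linalg.norm(x)
        y = np.cross(z, x)                        # completes a right-handed frame
        return np.column_stack([x, y, z])         # columns are the basis axes

    def to_local(point, origin, basis):
        return basis.T @ (point - origin)         # world -> frame coordinates

    def to_world(local, origin, basis):
        return origin + basis @ local             # frame -> world coordinates

    # surface 1: closest location to the null, plus its frame
    null_pos = np.array([1.0, 2.0, 0.5])
    origin_a = np.array([1.0, 1.5, 0.0])
    basis_a  = orthonormal_basis(np.array([0.0, 1.0, 0.0]), np.array([1.0, 0.0, 0.0]))
    local    = to_local(null_pos, origin_a, basis_a)

    # surface 2: frame at the matching (uniform) parameter location
    origin_b = np.array([4.0, 0.0, 2.0])
    basis_b  = orthonormal_basis(np.array([0.0, 0.0, 1.0]), np.array([0.0, 1.0, 0.0]))
    print(to_world(local, origin_b, basis_b))     # reconstructed position for null 2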

The problem I'm encountering is that it simply takes too many factory nodes to 
work efficiently. Each node has a certain amount of overhead regardless 
of what it does. Plus, the support for NURBS in ICE is rather abysmal. I have 
to construct my own orthonormal basis and implement my own algorithm to 
convert from non-uniform parameterization to uniform parameterization.  Both 
are doable, but take a great many nodes (including support for edge 
cases), making the whole effort rather clumsy at best. The parameterization 
conversion is expensive as it involves sorting and searching 
(while/repeat/counter nodes).  When applying the ICE compound to a polygon mesh 
with 5,000+ vertices, it gets the job done, but it chugs.
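
(Sketching the reparameterization idea in Python, since it is easier to read than an ICE graph - the sample count, the placeholder evaluate() and the bisect-based lookup are my own choices, not what the compound actually does:)

    # Rough sketch of remapping a non-uniform parameter to a uniform one via a
    # cumulative chord-length table.  evaluate() is a stand-in for whatever
    # evaluates the curve/surface iso-line at a given parameter.
    import bisect
    import math

    def evaluate(t):
        # placeholder curve: replace with the real surface/curve evaluation
        return (t, math.sin(t * math.pi), 0.0)

    def build_length_table(samples=64):
        ts = [i / samples for i in range(samples + 1)]
        pts = [evaluate(t) for t in ts]
        lengths = [0.0]
        for a, b in zip(pts, pts[1:]):
            lengths.append(lengths[-1] + math.dist(a, b))
        total = lengths[-1]
        return ts, [l / total for l in lengths]   # normalized cumulative length

    def uniform_to_param(u, ts, lengths):
        """Find the native parameter whose arc-length fraction is u (0..1)."""
        i = bisect.bisect_left(lengths, u)
        if i == 0:
            return ts[0]
        # linear interpolation inside the bracketing segment
        f = (u - lengths[i - 1]) / (lengths[i] - lengths[i - 1])
        return ts[i - 1] + f * (ts[i] - ts[i - 1])

    ts, lengths = build_length_table()
    print(uniform_to_param(0.5, ts, lengths))     # parameter at 50% arc length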

I have a version of this tool written as a scripted operator, and it performs 
really well because it has better SDK support and the sorting/searching can be 
better optimized.  But one shortcoming of scripted operators is that they 
self-delete if an input goes missing (which often happens on scene load or 
model import when the content has been modified externally).  This in turn 
causes content using the operator to malfunction, generating bug reports which 
are sent to artists to fix.  Unfortunately most artists weren't around when the 
content was created years ago, so they have no idea what's wrong, what the 
expected output is supposed to look like, or how to fix it.  Often an asset has 
to be retired and replaced.   This is my motivation for rewriting the tool as a 
custom ICE node, as ICE is much more graceful when its inputs don't exist - it 
just turns red and sits patiently until conditions improve.  This gives artists 
a chance to fix the problem without having to sweat the details, because they 
can read the GetData node to see what's missing, then find and repair it.  I'm 
trying to make the content in our pipeline more durable.

So...I'm looking for code samples of how to deal with topology and geometry in 
ICE.  So far I have not found any.


Matt







From: softimage-boun...@listproc.autodesk.com 
[softimage-boun...@listproc.autodesk.com] On Behalf Of Raffaele Fragapane 
[raffsxsil...@googlemail.com]
Sent: Tuesday, May 14, 2013 9:00 PM
To: softimage@listproc.autodesk.com
Subject: Re: custom ICENode - questions and request for example source code

Yeah, same hunch here.
Unless the performance expectations are multiple characters in real time 
concurrently, in which case I think neither approach is gonna get there, usually.


On Wed, May 15, 2013 at 1:04 PM, Ciaran Moloney 
moloney.cia...@gmail.commailto:moloney.cia...@gmail.com wrote:
I'm sorta, kinda sure that's a dead end for a custom node. You might be better 
off optimizing your ICE tree. It doesn't sound like such a complex problem, 
care to share?


On Wed, May 15, 2013 at 2:41 AM, Matt Lind 
ml...@carbinestudios.commailto:ml...@carbinestudios.com wrote:
I've been looking at the ICE SDK as a start to the process of writing custom 
ICE nodes in C++.  I need to write topology generators, modifiers and 
deformation nodes.  So far all the source code I've seen supplied with 
Softimage only deals with particle clouds or primitive data, such as converting 
integers to scalars.  Does anybody have source code for working with the 
Softimage SDK inside an ICE node to modify topology/geometry? Or 
kinematics?   Example: creating a polygon mesh from scratch, adding/removing 
subcomponents, dealing with clusters, etc.  I ask this partly because the ICE 
SDK docs say not to use the object model... which leads to the question - how do 
I do anything?




While also browsing the SDK docs, I saw in the ‘limitations’ section that 
custom ICE Nodes cannot define reference, location, or execute ports.   Since I 
am very interested in working with locations, does this mean I cannot do 
queries for locations from inside the ICE Node?  Or does it only mean I cannot 
send/receive locations from other ICE nodes?

Example:

I need to write an ICE Node which takes a polygon mesh and 2 NURBS Surfaces as 

Re: Texture size in games?

2013-05-15 Thread Stefan Kubicek
I haven't had much to do with games for the last two years, so some info  
might be outdated already, but Unity seems to be a good starting point  
both in terms of features and affordability.


As for textures, it totally depends on what type of asset you want to  
build, and for which platform (hardware).
A typical character texture set three years ago (diffuse and normal map,  
maybe a specularity map and an AO map too if needed) used to be 512x512 for  
the head and hands, plus another 512x512 set for the body (usually you want  
higher texture detail on the face, since it's seen close-up). These days  
that has most likely quadrupled for modern hardware.


I've also seen combinations of resolutions where the diffuse texture would  
only be half the size of the normal map (if the diffuse texture is  
low-frequency compared to the normal map, for example), and vice versa; it  
depends on which maps contain the look-defining detail.


Good practice is to use textures with power-of-two resolutions  
(128x128, 256x256, 512x512, 1024x1024), since depending on the engine and  
target hardware, non-power-of-two textures are often scaled up to the  
nearest power of two automatically, consuming the same amount of RAM but  
not offering any more precious detail. There are exceptions to this rule;  
on the Wii, for example, you may use combinations of power-of-two and  
non-power-of-two dimensions, like 256x512, without wasting any RAM  
(textures are not automatically scaled to a power of two).
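
(A trivial Python helper for that power-of-two check, the kind of thing that fits into an asset-validation script - just a sketch:)

    # Power-of-two helpers for texture validation.
    def is_power_of_two(n):
        return n > 0 and (n & (n - 1)) == 0

    def next_power_of_two(n):
        p = 1
        while p < n:
            p *= 2
        return p

    for size in (512, 600, 1024):
        print(size, is_power_of_two(size), next_power_of_two(size))
    # a 600 px texture would typically get scaled up to 1024 by the engine,
    # paying 1024x1024 worth of RAM for 600x600 worth of detail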


If you are using texture baking, and the engine makes use of mip mapping  
(as most do), make sure to keep the individual UV islands far enough apart  
so they don't bleed into each other when scaled down during mip map  
creation. Choose a suitable fill color (one with low contrast to the  
surrounding baked color information) too, for the same reason.
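
(A rough rule of thumb - my numbers, not anything engine-specific: a gap of 2^n pixels at the base resolution still leaves about one pixel of separation after n mip reductions:)

    # Back-of-the-envelope island padding for mip-mapped bakes: each mip level
    # halves the gap, so 2**n pixels at the base level ~ 1 pixel at mip n.
    def island_padding(mip_levels_to_protect):
        return 2 ** mip_levels_to_protect

    for mips in (2, 3, 4, 5):
        print("to survive %d mip levels, leave ~%d px between islands"
              % (mips, island_padding(mips)))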


For skinned geometry, use as few skin weights as possible; most engines  
have special code paths for fast deformation with certain numbers of skin  
weights per vertex (e.g. 1, 4, 8, but not 3, 5 or 7), as well as a maximum  
number of joints deforming a single mesh (e.g. 64), though I'm sure on  
modern hardware the latter is not so much of a concern anymore. I found 4  
skin weights per vertex to be sufficient for pretty much anything,  
including joint-driven facial animation.
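
(If you ever have to enforce such a limit in a pipeline tool, the pruning step is simple - a sketch with invented example data; a real tool would read the weights from the envelope/skin of whatever DCC is in use:)

    # Clamp a vertex to its 4 strongest deformers and renormalize the rest.
    def prune_weights(weights, max_influences=4):
        top = sorted(weights.items(), key=lambda kv: kv[1], reverse=True)[:max_influences]
        total = sum(w for _, w in top)
        return {bone: w / total for bone, w in top}

    vertex_weights = {"spine": 0.40, "rib_L": 0.25, "rib_R": 0.20,
                      "clav_L": 0.10, "neck": 0.05}
    print(prune_weights(vertex_weights))   # drops "neck", renormalizes the top 4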


I'm sure there are tons of tutorials covering all those aspects, also in  
regard to Unity; different engines and hardware may have different needs  
or specialities.




Hi all!
This might be a strange question, but what would be the normal texture
size today when creating content for games?
I'm trying to learn a new profession and need to test out the basics
at home before I jump out into the void :)

Also, would unity be a good practice platform? Or any other  
recommendations?


I'm trying out something new here, so any suggestions and tips are  
welcomed!


Best regards
Stefan


-- Sent from a phone booth in purgatory



--
---
   Stefan Kubicek
---
   keyvis digital imagery
  Alfred Feierfeilstraße 3
   A-2380 Perchtoldsdorf bei Wien
 Phone:+43/699/12614231
  www.keyvis.at  ste...@keyvis.at
--  This email and its attachments are   --
--confidential and for the recipient only--



RE: Texture size in games?

2013-05-15 Thread Szabolcs Matefy
Hi

As a game artist it's very hard to really answer :D

I usually make my textures in 2k*2k and downsize as needed.
I always get requests like the texture is good, but could you make it in a
bigger res? So 2k is a safe size; you can deliver any size fitting the
power-of-two rule (16, 32, 64, 128, 256, 512, 1024, 2048).

Unity is a cool engine, and there are many users and companies using it,
so you are on safe ground if you learn it. However, I might suggest you
learn Unreal too, or CryEngine (I'm a Crytek employee :D ).
Of course, knowing one engine will help you learn others, and you can
learn the workflow of game development in any engine, but of course the
workflow also depends on the company you work for. For example, Naughty Dog
has no level editor; Maya is used for art authoring, but AI, lighting,
etc. are set up in their own editor. So it really depends on the company.

Cheers

-Original Message-
From: softimage-boun...@listproc.autodesk.com
[mailto:softimage-boun...@listproc.autodesk.com] On Behalf Of Stefan
Andersson
Sent: Wednesday, May 15, 2013 8:31 AM
To: softimage@listproc.autodesk.com
Subject: Texture size in games?

Hi all!
This might be a strange question, but what would be the normal texture
size today when creating content for games?
I'm trying to learn a new profession and need to test out the basics at
home before I jump out into the void :)

Also, would unity be a good practice platform? Or any other
recommendations?

I'm trying out something new here, so any suggestions and tips are
welcomed!

Best regards
Stefan


-- Sent from a phone booth in purgatory




Re: Texture size in games?

2013-05-15 Thread Martin
Hi Stefan,

I don't think there is a normal size.

It will depend on your platform specs and the type of game. If it's a fighting
game, you may use more resources because it will display fewer characters at
the same time than a strategy game.

In a fighting game for PS3, we used 1024 for the body and 512 for the face in
the final DDS texture. Normal, specular map and color only. I think the
polygon count was about 20,000 ~ 25,000 per character.
The PSD data was made at 2048 or 1024, I think. We usually work at doubled
resolution because you never know when the specs will change, or if you'll
have a sequel with better specs, or if you'll have to change the UVs drastically
and bake your texture to the new UVs.

In an arcade game we used a lot of textures per character: 512 for the eyes,
1024 for the face, 1024 for the hair, and for the rest as many 1024 maps as
needed, as long as it didn't exceed 20 MB (or was it 15?).
Color, specular and normal this time.

For the last 3DS game we did, it was something like 7 textures of 128. That
game had interchangeable parts, so it had a 128x128 limit per part, and about
7,000 polys per character.
Here we used doubled resolution too in our PSD sources, but some details
needed pixel-level editing at the final resolution.

Quite common limitations are: only 2 ~ 4 bone deformers per point; weights
without decimals (Maya) or in powers of 10 (SI); a T-pose for characters; no
more than xx bones (null bones) per character; irregular UV and texture sizes
(bigger UVs for details like a character's eyes, etc.); no more than x lights
per scene; and so on. Real-time shaders aren't as good as working with
pre-rendered stuff, so we need to compensate by painting a little more into the
textures (for example, usually you don't have real-time occlusion, so you have
to paint it or bake it), and sometimes you have to overlap your UVs so it
will look like you are using a bigger texture (like tiling some parts).
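
(On the "weights without decimals" point, one common trick - sketched here with made-up numbers, not any particular studio's tool - is to round to whole percentages and dump the rounding error on the heaviest influence so the result still sums to 100:)

    # Quantize normalized weights to integer percentages that sum to exactly 100.
    def quantize_weights(weights):
        ints = {bone: int(round(w * 100)) for bone, w in weights.items()}
        ints[max(weights, key=weights.get)] += 100 - sum(ints.values())
        return ints

    print(quantize_weights({"hip": 0.333, "thigh": 0.333, "knee": 0.334}))
    # -> {'hip': 33, 'thigh': 33, 'knee': 34}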

Another thing about mipmapping: with the automatic downscaling you'll see
seams, so be sure to bleed your textures outside your UV islands.

Like I said, it depends, and you'll have to learn to deal with those limits.

BTW, depending on the programmers' skill, you may be able to use more or less
texture resolution and polygon count.

Unity? It will also depend on what you want to do.
If you're going to create assets only, chances are your client will use a
different engine, so your Unity skills won't be that useful.
Modeling, sculpting, weighting, texturing, animation, rigging and scripting
skills would be better.

If you're going to do everything (assets, programming, game planning, etc)
by yourself or with a small team, I think Unity could be very friendly for
a small project.

GL

M.Yara




On Wed, May 15, 2013 at 3:46 PM, Stefan Andersson sander...@gmail.com wrote:

 Also, if and old dog like me that knows everything and nothing, where
 would be the best starting point? Conversion of knowledge.

 Or rather don't learn this, totally useless :)

 Regards
 Stefan


 -- Sent from a phone booth in purgatory

 On May 15, 2013, at 8:31, Stefan Andersson sander...@gmail.com wrote:

  Hi all!
  This might be a strange question, but what would be the normal texture
  size today when creating content for games?
  I'm trying to learn a new profession and need to test out the basics
  at home before I jump out into the void :)
 
  Also, would unity be a good practice platform? Or any other
 recommendations?
 
  I'm trying out something new here, so any suggestions and tips are
 welcomed!
 
  Best regards
  Stefan
 
 
  -- Sent from a phone booth in purgatory



RE: Texture size in games?

2013-05-15 Thread Szabolcs Matefy
And if we're getting into technical details, the amount of vertex data matters
a lot (depending on the engine, of course), so the number of UV islands, hard
edges and materials affects performance too. And the texture compression as
well. I worked on several games as a freelancer, and there were plenty of
technical requirements that differed from client to client. For example, one
client required us to delete the blue channel from the normal map and swap the
red and blue channels. Another one required a BUMP texture in the diffuse
color alpha...

 

When I did characters for an FPS game, we made a head texture (512 for the
head), a body texture (1024) and an equipment texture. The alpha channel was
used for opacity (alpha tested), and for specular the R channel held specular
and G held glossiness.

 

If you just want to learn game development, visit gameartisans.org,
game-artist.net, polycount, etc. If there are specific companies you would
like to work for, study their workflows and tools.

 

From: softimage-boun...@listproc.autodesk.com
[mailto:softimage-boun...@listproc.autodesk.com] On Behalf Of Martin
Sent: Wednesday, May 15, 2013 10:06 AM
To: softimage@listproc.autodesk.com
Subject: Re: Texture size in games?

 

Hi Stefan, 

 

I don't think there is a normal size.

 

It will depend on your platform specs and type of game. If it's a
fighting game, you may use more resources because it will display less
characters at the same time than a strategy game.

 

In a fight game for PS3, we used 1024 for the body and 512 for the face
in the final DDS texture. Normal, Specular Map and Color only. I think
the polygon count was about 20.000 ~ 25.000 per character.

The PSD data was made in 2048, 1024 i think. We usually work with
doubled resolution because you never know when the specs will change, or
if you'll have a sequel with better specs, or you'll have to change UVs
drastically and bake your texture to your new UV.

 

In an arcade game we used a lot of textures per character, 512 for the
eye, 1024 for the face, 1024 for the hair, and the rest, as many 1024
pics as needed as long it didn't exceed 20Mb, (or was it 15?)

Color specular, color and normal this time.

 

For the last 3DS game we did, it was like 128 x 7 textures. This game
had interchangable parts so it had a 128x128 limit per part. and about
7000 polys per char.

Here we used doubled resolution too in our PSD source, but some details
needed dot level edition in the final resolution.

 

Only 2 ~ 4 bones deformers per point, weights without decimals (Maya) or
power of 10 (SI), T pose for characters, no more than xx bones(null
bones) per char, irregular UVs and tex.size (bigger UVs for details like
a character eyes, etc), no more than x lights per scene, etc. are quite
common limitations. Real Time shaders aren't as good as working with
pre-render stuff, so we need to cover that by painting a little more the
textures. (ex, usually you don't have real time occlusion so you have to
paint it, or bake it), sometimes you have to try to overlap your UVs so
it will look like you are using a bigger texture (like tiling some
parts).

 

Another thing about mipmaping, with the automatic down scaling you'll
see seams be sure to bleed your textures outside your UV islands.

 

Like I said, it depends, and you'll have to learn to deal with those
limits.

 

BTW, depending on the programmer skill, you may be able to use more or
less tex.resolution and polygons.

 

Unity? It will also depend in what do you want to do.

If you're going to create assets only, chances are your client will use
a different engine so your unity skills won't be that useful.

Modeling, sculpting, weighting, texturing, animation, rigging and
scripting skills would be better.

 

If you're going to do everything (assets, programming, game planning,
etc) by yourself or with a small team, I think Unity could be very
friendly for a small project.

 

GL

 

M.Yara

 

 

 

On Wed, May 15, 2013 at 3:46 PM, Stefan Andersson sander...@gmail.com
wrote:

Also, if and old dog like me that knows everything and nothing, where
would be the best starting point? Conversion of knowledge.

Or rather don't learn this, totally useless :)

Regards
Stefan


-- Sent from a phone booth in purgatory

On May 15, 2013, at 8:31, Stefan Andersson sander...@gmail.com wrote:

 Hi all!
 This might be a strange question, but what would be the normal texture
 size today when creating content for games?
 I'm trying to learn a new profession and need to test out the basics
 at home before I jump out into the void :)

 Also, would unity be a good practice platform? Or any other
recommendations?

 I'm trying out something new here, so any suggestions and tips are
welcomed!

 Best regards
 Stefan


 -- Sent from a phone booth in purgatory

 



RE: Texture size in games?

2013-05-15 Thread Matt Lind
Well, you can look at it from two different points of view:

a) Do what many game artists do and brute force their way through making 
content with heavy iteration.

b) Do what many game programmers do and try to be efficient.


If you just want a job in games, follow path A, which doesn't really require 
much learning on your part but does require a lot of practice.  You need to 
follow the specs for whatever engine you're developing content for, and be 
frugal with whatever resources you have available to make the content.  The 
specs are project specific and change frequently.  Therefore, pick an engine 
and make something that functions within it.  Then choose a different engine and 
try to make the content function in that one too.  You'll quickly learn that making 
functional content can be very difficult and is a skillset in itself.

Following course B, anything a game programmer is going to tell you about making 
art is how to make the end result efficient for his needs.  He doesn't give a 
crap how many hours you spend on it or what it looks like.  He just wants it 
packaged in a tiny, efficient form that doesn't blow up at runtime or 
consume expensive resources.  Since programmers are not artists, they don't 
know you want screen-space ambient occlusion, or fancy pixel-based shading 
effects, or whatever.  In fact, they prefer you not use them, because they want 
the CPU/GPU time for themselves to improve gameplay and other engine-specific 
functions.

So, if you want to make good art, retain sanity, and do a good job, your best 
bet is to start learning computer science / computer architecture and apply 
the knowledge to your artwork.  That is how the more successful game 
artists rise through the ranks: they are the ones who approach the 
programmers and suggest how art can be made better and more efficiently by 
applying technical knowledge to their art techniques.  If you rely on the 
programmer to figure it all out, you're going to be in for a lot of pain and 
feel unfulfilled working in a very confined box.  If you rely on other 
artists to figure it out, you'll be in for even more pain, as the chaos from 
the lack of technical knowledge and the resulting brute-force techniques will 
drive you crazy.


first assignment:

Start with modeling.  The goal is to make the most robust-looking bipedal 
character mesh that can be animated (deformed like an envelope) while being 
extremely frugal with polygon count.  Say, an entire seamless mesh at less 
than 5,000 polygons - triangles and quads only.  Keep iterating on it until you 
cannot find anything to iterate on anymore.  Then, pretend a programmer enters 
your space and gives you a tongue lashing for exceeding the polygon count.  So 
redo the asset with a new polygon limit of 1,000 polygons.  Sounds harsh, but 
as you do it, you'll discover things on the 1,000 polygon version that could be 
applied to the 5,000 polygon version that you wouldn't have thought of until you 
were forced into the situation.  Basically it's an exercise in determining 
artistic priorities.  Once you reach the 1,000 polygon version satisfactorily, 
change the criteria to 400 polygons.  Once you finish the 400 polygon version, 
take what you learned and apply it back to the 5,000 polygon version.  Actual 
polygon counts used in production vary with the platform and title.  Example: 
a boxing game on a console will probably throw 50K polygons or more at the 
characters because the environment is small and there are few subjects of 
interest.  An MMORPG running on a PC will devote under 10K per character 
because the worlds are large and there are many characters sharing the 
computing resources.  An embedded game running on a phone or tablet will 
probably use significantly less as the computing power is also much less.

Once you finish modeling, apply an envelope with nulls as deformers, but limit 
yourself to 30 nulls for the entire character.  Now make him bend and deform as 
expected with those 30 nulls, and limit each vertex to being assigned to 4 
bones/nulls or less - and that's a hard limit.  Now do that to the 5,000 
polygon, 1,000 polygon and 400 polygon versions of the character so each looks 
as similar as possible to the others - including fingers and toes.  Notice how 
each behaves and must be constructed differently to reach the same end result.  
Now you'll discover how you must retopologize your geometry - so take what you 
learned and start over again with the modeling.  

As for rendering... assume each texture applied consumes a render batch.  Think 
of batches as render passes performed on the GPU.  Each batch has a certain 
amount of setup cost, which is often more expensive than the time spent 
rendering the contents of the batch.  Therefore it's critically important to 
minimize the number of batches you induce on the GPU.  Assume each light, 
shadow, and unique material induces a batch.  The name of the game is to create 
that character fully textured and lit using 

RE: Texture size in games?

2013-05-15 Thread Szabolcs Matefy
Very good writing Matt, I love the part you wrote about the 5000-1000-400-5000 
method! :)

And in addition, our game engine was really sensitive to vertex data (UV island 
borders and hard edges doubled the vertex data, and sometimes the draw calls as 
well), and the lead artist was sensitive to the UV coverage ratio and the pixel 
density as well. So game art is not an easy art :D

-Original Message-
From: softimage-boun...@listproc.autodesk.com 
[mailto:softimage-boun...@listproc.autodesk.com] On Behalf Of Matt Lind
Sent: Wednesday, May 15, 2013 10:27 AM
To: softimage@listproc.autodesk.com
Subject: RE: Texture size in games?

Well, you can look at it from two different points of view:

a) Do what many game artists do and brute force their way through making 
content with heavy iteration.

b) Do what many game programmers do and try to be efficient.


If you just want a job in games, follow path A which doesn't really require 
much learning on your part, but does require a lot of practice.  You need to 
follow the specs for whatever engine you're developing content for, and be 
frugal with whatever resources you have available to make the content.  The 
specs are project specific and change frequently.  Therefore, pick an engine 
and make something to function within it.  then choose a different engine and 
try to make the content function in that one too.  You'll quickly learn making 
functional content can be very difficult and is a skillset of itself.

Following course B, anything a game programmer is going to tell you in making 
art is how to make the end result efficient for his needs.  he doesn't give a 
crap how many hours you spend on it or what it looks like.  He just wants it 
packaged in a tiny efficient form that doesn't blow up during runtime or 
induces expensive resources.  Since programmers are not artists, they don't 
know you want screenspace ambient occlusion, or fancy pixel based shading 
effects, or whatever.  In fact, they prefer you not use them because they want 
the CPU/GPU time for themselves to improve gameplay and other engine specific 
functions.

So, if you want to make good art, retain sanity, and do a good job, your best 
bet is to starting learning computer science / computer architecture and apply 
the knowledge towards your artwork.  That is how the more successful game 
artists rise through the ranks as they are the ones that approach the 
programmers and suggest how art can be made better and more efficiently by 
applying technical knowledge to their art techniques.  If you rely on the 
programmer to figure it all out, you're going to be in for a lot of pain and 
feel unfulfilled by working in a very confined box.  If you rely on other 
artists to figure it out, you'll be in for even more pain as the chaos from 
lack of technical knowledge resulting in brute force techniques will drive you 
crazy.


first assignment:

Start with modeling.  The goal is to make the most robust looking bipedal 
character mesh that can be animated (deformed like an envelope) while being 
extremely frugal with polygon count.  Say, and entire seamless mesh at less 
than 5,000 polygons - triangles and quads only.  Keep iterating on it until you 
cannot find anything to iterate on anymore.  then, pretend a programmer enters 
your space and gives you a tongue lashing for exceeding the polygon count.  So 
redo the asset with a new polygon limit of 1,000 polygons.  sounds harsh, but 
as you do it, you'll discover things on the 1,000 polygon version that could be 
applied to the 5,000 polygon version you wouldn't have thought of until you 
were forced into the situation.  Basically its an excercise in determining 
artistic priorities.  once you reach the 1,000 polygon version satisfactorily, 
change the criteria to 400 polygons.  Once you finish the 400 polygon version, 
take what you learned and apply it back to the 5,000 polyg!
 on version.  Actual polygon counts used in production vary with the platform 
and title.  Example: a boxing game on a console will probably throw 50K 
polygons or more at the characters because the environment is small and few 
subjects of interest.  An MMORPG running on a PC will devote under 10K per 
character because the worlds are large and there are many characters sharing 
the computing resources.  An embedded game running on a phone or tablet will 
probably use significantly less as the computing power is also much less.

Once you finish modeling, apply an envelope with nulls as deformers, but limit 
yourself to 30 nulls for the entire character.  now make him bend and deform as 
expected with those 30 nulls and limit each vertex to being assigned to 4 
bones/nulls or less - and that's a hard limit.  Now do that to the 5000 
polygon, 1000 polygon and 400 polygon versions of the character so each looks 
as similar as possible to the others - including fingers and toes.  Notice how 
each behaves and must be constructed differently to reach the same end result.  

Re: Texture size in games?

2013-05-15 Thread Stefan Andersson
Great response everyone! Except for the polycounts and fascist UV mapping, it
more or less sounds similar in a lot of ways to what I'm already doing. I
won't go into games trying to become a programmer - I do want to make art.
But as Matt suggested, I'm more likely to go art/tech since I'm somewhat of
a geek also.

Mip mapping is something that I'm familiar with, and my own asset tools
already have it in place: I convert all textures with OIIO to be mip-mapped
and power of two (just because I don't trust anyone, I also resize them
myself). But doing mip-mapping for a game engine, does that require exporting
each level? Or what image formats are usually used for mip-mapping? I can't
see game engines using exr... or do they? :)
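
(For what it's worth, the kind of OIIO preprocessing described above can be as small as a maketx call - a hedged sketch; it assumes the maketx binary that ships with OpenImageIO is on the PATH, and the exact flags may differ between OIIO versions:)

    # Resize to a power of two and bake mip levels into a single texture file
    # using OpenImageIO's maketx.  Filenames are made up for the example.
    import subprocess

    def make_mipmapped(src, dst):
        subprocess.check_call(["maketx", "--resize", src, "-o", dst])

    make_mipmapped("diffuse_600x600.png", "diffuse.tx")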

Before I go on and make Matt's little exercise I think I will build
something rigid and see how that looks. And I have to convert my
workstation from Linux to Windows.

I talked to my brother who is working at Massive, and he thinks I'm an
idiot... but he also said that they base the size of the texture on meters in
the engine. I guess it also depends a lot on which engine you will use.
But that leads me to another question. I'm not 100% sure yet which modeling
software I will be using. My 14+ years with both Maya / Softimage leaves me
somewhere in the middle of those two. Blender is also a contender, but I'll
stick with the programs that I know inside and out. However, Softimage
doesn't have any metric units. Would the usual assumption that 1 SI unit is
10 cm still apply? Or, again... does it depend on the engine/exporter?

all the best!

your humble servant
stefan





On Wed, May 15, 2013 at 10:26 AM, Matt Lind ml...@carbinestudios.com wrote:

 Well, you can look at it from two different points of view:

 a) Do what many game artists do and brute force their way through making
 content with heavy iteration.

 b) Do what many game programmers do and try to be efficient.


 If you just want a job in games, follow path A which doesn't really
 require much learning on your part, but does require a lot of practice.
  You need to follow the specs for whatever engine you're developing content
 for, and be frugal with whatever resources you have available to make the
 content.  The specs are project specific and change frequently.  Therefore,
 pick an engine and make something to function within it.  then choose a
 different engine and try to make the content function in that one too.
  You'll quickly learn making functional content can be very difficult and
 is a skillset of itself.

 Following course B, anything a game programmer is going to tell you in
 making art is how to make the end result efficient for his needs.  he
 doesn't give a crap how many hours you spend on it or what it looks like.
  He just wants it packaged in a tiny efficient form that doesn't blow up
 during runtime or induces expensive resources.  Since programmers are not
 artists, they don't know you want screenspace ambient occlusion, or fancy
 pixel based shading effects, or whatever.  In fact, they prefer you not use
 them because they want the CPU/GPU time for themselves to improve gameplay
 and other engine specific functions.

 So, if you want to make good art, retain sanity, and do a good job, your
 best bet is to starting learning computer science / computer architecture
 and apply the knowledge towards your artwork.  That is how the more
 successful game artists rise through the ranks as they are the ones that
 approach the programmers and suggest how art can be made better and more
 efficiently by applying technical knowledge to their art techniques.  If
 you rely on the programmer to figure it all out, you're going to be in for
 a lot of pain and feel unfulfilled by working in a very confined box.  If
 you rely on other artists to figure it out, you'll be in for even more pain
 as the chaos from lack of technical knowledge resulting in brute force
 techniques will drive you crazy.


 first assignment:

 Start with modeling.  The goal is to make the most robust looking bipedal
 character mesh that can be animated (deformed like an envelope) while being
 extremely frugal with polygon count.  Say, and entire seamless mesh at less
 than 5,000 polygons - triangles and quads only.  Keep iterating on it until
 you cannot find anything to iterate on anymore.  then, pretend a programmer
 enters your space and gives you a tongue lashing for exceeding the polygon
 count.  So redo the asset with a new polygon limit of 1,000 polygons.
  sounds harsh, but as you do it, you'll discover things on the 1,000
 polygon version that could be applied to the 5,000 polygon version you
 wouldn't have thought of until you were forced into the situation.
  Basically its an excercise in determining artistic priorities.  once you
 reach the 1,000 polygon version satisfactorily, change the criteria to 400
 polygons.  Once you finish the 400 polygon version, take what you learned
 and apply it back to the 5,000 polyg!
  on 

RE: Texture size in games?

2013-05-15 Thread Szabolcs Matefy
We use SI with 1 unit = 1 m. That's our habit :D But it depends on the
exporter. If you use FBX for games, you might use any scale, like 1 unit
= 10 cm, etc. Mipmapping, and UV mapping for mipmapping, is a real pain in
the artist's ass :D I recall debates with a lead artist who insisted that I
move the islands closer together to get more coverage in the UV space, and
afterwards insisted on moving them apart, to avoid mipmap borders
appearing on UV borders...

 

 

 

From: softimage-boun...@listproc.autodesk.com
[mailto:softimage-boun...@listproc.autodesk.com] On Behalf Of Stefan
Andersson
Sent: Wednesday, May 15, 2013 10:52 AM
To: softimage@listproc.autodesk.com
Subject: Re: Texture size in games?

 

Great response everyone! Except for polycounts and fascist UV mapping,
it more or less sounds similar in a lot of ways to what I'm already
doing. I wont go into games trying to become a programmer, I do want to
make art. But as Matt suggested, I'm more likely to go art/tech since
I'm somewhat of a geek also.


Mip mapping is something that I'm familiar with, and my own asset tools
already have it in place that I convert with OIIO all textures to be
mip-mapped and also the power of two (just because I don't trust anyone
I also resize them). But doing mip-mapping for a game engine, does that
requires to export each level? Or what image formats are usually used
for doing mip-mapping? I can't see game engines using exr... or do they?
:)

Before I go on and make Matt's little exercise I think I will build
something rigid and see how that looks. And I have to convert my
workstation from Linux to Windows.


I talked to my brother who is working at Massive, and he thinks I'm an
idiot... but he also said that they base the size of the texture
depending on meters in the engine. I guess it also depends a lot of
which engine you will use. 

But it leads me to another question. I'm not 100% sure yet which
modeling software I will be using. My 14+ years with both Maya /
Softimage leaves me somewhat in the middle of those two. Blender is also
a contender, but I'll stick with the programs that I know from inside
and out. However, Softimage doesn't have any metric units. Would the
usual assumtion that 1 SI unit is 10 cm still apply? or again... depends
on the engine/exporter?

all the best!

your humble servant 

stefan

 

 

On Wed, May 15, 2013 at 10:26 AM, Matt Lind ml...@carbinestudios.com
wrote:

Well, you can look at it from two different points of view:

a) Do what many game artists do and brute force their way through making
content with heavy iteration.

b) Do what many game programmers do and try to be efficient.


If you just want a job in games, follow path A which doesn't really
require much learning on your part, but does require a lot of practice.
You need to follow the specs for whatever engine you're developing
content for, and be frugal with whatever resources you have available to
make the content.  The specs are project specific and change frequently.
Therefore, pick an engine and make something to function within it.
then choose a different engine and try to make the content function in
that one too.  You'll quickly learn making functional content can be
very difficult and is a skillset of itself.

Following course B, anything a game programmer is going to tell you in
making art is how to make the end result efficient for his needs.  he
doesn't give a crap how many hours you spend on it or what it looks
like.  He just wants it packaged in a tiny efficient form that doesn't
blow up during runtime or induces expensive resources.  Since
programmers are not artists, they don't know you want screenspace
ambient occlusion, or fancy pixel based shading effects, or whatever.
In fact, they prefer you not use them because they want the CPU/GPU time
for themselves to improve gameplay and other engine specific functions.

So, if you want to make good art, retain sanity, and do a good job, your
best bet is to starting learning computer science / computer
architecture and apply the knowledge towards your artwork.  That is how
the more successful game artists rise through the ranks as they are the
ones that approach the programmers and suggest how art can be made
better and more efficiently by applying technical knowledge to their art
techniques.  If you rely on the programmer to figure it all out, you're
going to be in for a lot of pain and feel unfulfilled by working in a
very confined box.  If you rely on other artists to figure it out,
you'll be in for even more pain as the chaos from lack of technical
knowledge resulting in brute force techniques will drive you crazy.


first assignment:

Start with modeling.  The goal is to make the most robust looking
bipedal character mesh that can be animated (deformed like an envelope)
while being extremely frugal with polygon count.  Say, and entire
seamless mesh at less than 5,000 polygons - triangles and quads only.
Keep iterating on it until you cannot find anything to iterate on

Re: Texture size in games?

2013-05-15 Thread Martin
One thing that is very different from rendering work is that you need to
be very clean with your data; naming conventions may be more important, and
history and live operators are not welcome.
Basic scripting skills help a lot even if you're not a TD, and especially
if your company doesn't have many TDs, or no TDs at all. My workflow and
my team's workflow improved a lot when I learned a few scripting tricks. My
data is cleaner and my clients happier.

So, if you can't script, I really recommend you learn some basics.
Basic scripting is more useful than ICE here, if you have to choose one.

About mipmaps: mipmap generation is automatic. The format depends on your
project. DDS has been pretty much the standard in a lot of the projects I've
been involved in - some using NVIDIA plugins, some proprietary tools, but DDS
has been the standard lately. The last time I did a Nintendo platform
project we were using NW4C TGA, a format that comes with the Nintendo tools
package.

Modeling software also changes depending on the project, because the
programmers may write their tools based on only one package.
In Japan, Maya and Softimage are the most used. You need to match your
client's version too, and here is where Autodesk's old-version policies screw
you even if you have a subscription - 3 previous versions are not enough!!
Most of the time we use versions that are about 3 years old (right now we are
using SI 2011 in my current project).

I haven't seen a single project based on Blender, but that doesn't mean you
can't use it; you just have to convert your work to your client's software
when you deliver it.

And here is where you'll have to learn to live with conversions. They
aren't as simple as we would like. Sometimes you'll have to try FBX,
Collada, Crosswalk or OBJ, because depending on the case one can be better
than the others. And after that, you'll have to clean the data, because
converted data carries a lot of garbage. Here is where your scripting skills
will save you hours of work, especially if you need to convert animations.

In non-SI projects, I usually do 80% of my modeling work in SI, convert it
to Maya or Max, and finish it there.

M.Yara


On Wed, May 15, 2013 at 5:52 PM, Stefan Andersson sander...@gmail.com wrote:

 Great response everyone! Except for polycounts and fascist UV mapping, it
 more or less sounds similar in a lot of ways to what I'm already doing. I
 wont go into games trying to become a programmer, I do want to make art.
 But as Matt suggested, I'm more likely to go art/tech since I'm somewhat of
 a geek also.

 Mip mapping is something that I'm familiar with, and my own asset tools
 already have it in place that I convert with OIIO all textures to be
 mip-mapped and also the power of two (just because I don't trust anyone I
 also resize them). But doing mip-mapping for a game engine, does that
 requires to export each level? Or what image formats are usually used for
 doing mip-mapping? I can't see game engines using exr... or do they? :)

 Before I go on and make Matt's little exercise I think I will build
 something rigid and see how that looks. And I have to convert my
 workstation from Linux to Windows.

 I talked to my brother who is working at Massive, and he thinks I'm an
 idiot... but he also said that they base the size of the texture depending
 on meters in the engine. I guess it also depends a lot of which engine you
 will use.
 But it leads me to another question. I'm not 100% sure yet which modeling
 software I will be using. My 14+ years with both Maya / Softimage leaves me
 somewhat in the middle of those two. Blender is also a contender, but I'll
 stick with the programs that I know from inside and out. However, Softimage
 doesn't have any metric units. Would the usual assumtion that 1 SI unit is
 10 cm still apply? or again... depends on the engine/exporter?

 all the best!

 your humble servant
 stefan





 On Wed, May 15, 2013 at 10:26 AM, Matt Lind ml...@carbinestudios.comwrote:

 Well, you can look at it from two different points of view:

 a) Do what many game artists do and brute force their way through making
 content with heavy iteration.

 b) Do what many game programmers do and try to be efficient.


 If you just want a job in games, follow path A which doesn't really
 require much learning on your part, but does require a lot of practice.
  You need to follow the specs for whatever engine you're developing content
 for, and be frugal with whatever resources you have available to make the
 content.  The specs are project specific and change frequently.  Therefore,
 pick an engine and make something to function within it.  then choose a
 different engine and try to make the content function in that one too.
  You'll quickly learn making functional content can be very difficult and
 is a skillset of itself.

 Following course B, anything a game programmer is going to tell you in
 making art is how to make the end result efficient for his needs.  he
 doesn't give a crap how many hours 

heavy ice operation flaws

2013-05-15 Thread Sebastian Kowalski
hey list,

i am feeling a bit bad because i'm complaining every time i write to this 
beloved list.
sorry, but...

a) load a heavy cache, generate a shitload of points, build up huge arrays, and 
the ram consumption rises to max. well, that's not the problem, but delete that 
cloud and unplug the cache reader and the memory won't flush.. it stays that high.
it seems to be reserving the ram, cause it won't rise higher when doing the 
operation again. ok, i understand. but then set up a second cloud, do something 
similar, and goodbye softimage. 

b) having a hidden pointcloud in the scene does not mean that its tree won't be 
evaluated when some param outside the tree is feeding it. like doing some cam 
frustum calculation. 

c) similar to b), save a scene with a hidden object simulated by an ice tree. 
it's gonna evaluate, the progress bar is up (and on win 7, it's gonna hide itself 
behind the main application).
on occasion it's also gonna ramp up on ram, and won't flush after saving is done. 
(when you've got a HIDDEN pointcloud reading a cache)

that's all on 2013.

-sebastian






Re: heavy ice operation flaws

2013-05-15 Thread Leo Quensel

To b):

I once tried to cast rays from a camera to points in a pointcloud to determine visibility.
It was enough to have the camera connected to a raycast node and it was evaluated all the time - even if the raycast node was completely disconnected from anything else.
This was on 2012 (without SP) - haven't tried since.



Sent: Wednesday, 15 May 2013 at 11:33
From: Sebastian Kowalski l...@sekow.com
To: softimage@listproc.autodesk.com softimage@listproc.autodesk.com
Subject: heavy ice operation flaws

hey list,

i am feeling a bit bad because of complaining every time i am writing to this beloved list.
sorry, but

a) load a heavy cache, generate a shitload of points, build up huge arrays and the ram consumption raises to max. well thats not the problem, but delete that cloud, plug off the cache reader the memory wont flush.. its stays that high.
seems to be reserving the ram, cause it wont rise higher when doing the operation again. ok i understand. but then setup a second cloud, do something similiar, and goodbye softimage.

b) having an hidden pointcloud in the scene does not say that that tree wont be evaluated when some param outside the tree is feeding it. like doing some cam frustum calculation.

c) similar to b), save a scene with an hidden object, simulated by an ice tree. its gonna evaluate, progress bar is up (and on win 7, its gonna hide itself behind the main application).
on occasions it also gonna ramp up on ram, and wont flush after saving is done. (when you got an HIDDEN pointcloud reading a cache )

thats all on 2013.

-sebastian









Re: custom ICENode - questions and request for example source code

2013-05-15 Thread Ahmidou Lyazidi
From my small experience with this, you can't make a custom topology or
kinematics node; you make a node that abstracts the more or less complex
computation, then feed the topology nodes (or a matrix, in the case of
kinematics).
As you stated, you can't use locators or location queries in a custom ICE
node, so if you need them the workflow is to break your ICE node into smaller
parts.

About the performance: sometimes it's faster, sometimes quite the same. I
made a parallel transport frame node; the gain was only 15%, but the setup
was faster.
This node seems to perform way faster:
http://shaderop.com/2011/07/cubic-bezier-curve-node-for-softimage-ice/index.html
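
(For anyone who hasn't built one: parallel transport just carries the previous frame along by the rotation between successive tangents. A rough NumPy sketch of the principle on a polyline - toy data, and nothing to do with Ahmidou's actual node:)

    # Rough parallel-transport sketch: carry a normal along a polyline by
    # rotating it with the rotation between successive tangent directions.
    import numpy as np

    def rotate(v, axis, angle):
        """Rodrigues rotation of vector v around a (non-zero) axis."""
        axis = axis / np.linalg.norm(axis)
        return (v * np.cos(angle)
                + np.cross(axis, v) * np.sin(angle)
                + axis * np.dot(axis, v) * (1.0 - np.cos(angle)))

    def parallel_transport(points, first_normal):
        pts = np.asarray(points, dtype=float)
        tangents = [t / np.linalg.norm(t) for t in pts[1:] - pts[:-1]]
        normals = [np.asarray(first_normal, dtype=float)]
        for t0, t1 in zip(tangents, tangents[1:]):
            axis = np.cross(t0, t1)
            if np.linalg.norm(axis) < 1e-8:       # parallel tangents: keep frame
                normals.append(normals[-1])
                continue
            angle = np.arccos(np.clip(np.dot(t0, t1), -1.0, 1.0))
            normals.append(rotate(normals[-1], axis, angle))
        return normals                            # one normal per segment

    pts = [(0, 0, 0), (1, 0, 0), (2, 0.5, 0), (3, 1.5, 0.5)]
    for n in parallel_transport(pts, (0, 0, 1)):
        print(n)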

A


---
Ahmidou Lyazidi
Director | TD | CG artist
http://vimeo.com/ahmidou/videos


2013/5/15 Matt Lind ml...@carbinestudios.com

  well, let's answer the questions first:

 1) Does anybody have source code they are willing to share for custom ICE
 Nodes that deal with topology and/or geometry?

 2) Does the lack of reference, location, and execute ports for custom ICE
 nodes mean I cannot cast a location search from inside an ICE node?



 To answer your question:

 Imagine two nulls and two NURBS Surfaces.  the task is to find the nearest
 location from the first null to the first surface.  At that location, build
 an orthonormal basis and compute the local transform of the null relative
 to that basis.  Then reconstruct that relationship by applying it to the
 2nd null relative to the 2nd surface assuming both surfaces use uniform
 parameterization, not non-uniform as is the softimage default.  Version 2:
 extend to operate on vertices of polygon meshes instead of nulls.  I have a
 working version, but it is slow and not very stable.

 The problem I'm encountering is it simply takes too many factory nodes to
 be able to work efficiently. Each node has a certain amount of overhead
 regardless of what it does. Plus, the support for NURBS in ICE is rather
 abysmal. I have to construct my own orthonormal basis plus implement my own
 algorithm to convert from non-uniform parameterization to uniform
 parameterization.  Both are doable, but take very many nodes to do it
 (including support for edge cases) making the whole effort rather clumsy at
 best. The parameterization conversion is expensive as it involves sorting
 and searching (while/repeat/counter nodes).  When applying the ICE Compound
 to a polygon mesh with 5,000+ vertices.it gets the job done, but
 chugs.

 I have a version of this tool written as a scripted operator, and it
 performs really well because it has better SDK support and the
 sorting/searching can be better optimized.  But one shortcoming of scripted
 operators is they self-delete if an input goes missing (which often happens
 on scene load or model import when the content has been modifed
 externally).  This in turn causes content using the operator to malfunction
 generating bug reports which are sent to artists to fix.  Unfortunately
 most artists weren't around when the content was created years ago, so they
 have no idea what's wrong, what the expected output is supposed to look
 like, or how to fix it.  Often an asset has to be retired and replaced.
 This is my motivation for rewriting the tool as a custom ICE node as ICE is
 much more graceful when it's inputs don't exist - it just turns red and
 sits patiently until conditions improve.  This gives artists a chance to
 fix the problem without having to sweat the details because they can read
 the GetData node to see what's missing, then find and repair it.  I'm
 trying to make the content in our pipeline more durable.

 So...I'm looking for code samples of how to deal with topology and
 geometry in ICE.  So far I have not found any.


 Matt






  --
 *From:* softimage-boun...@listproc.autodesk.com [
 softimage-boun...@listproc.autodesk.com] On Behalf Of Raffaele Fragapane [
 raffsxsil...@googlemail.com]
 *Sent:* Tuesday, May 14, 2013 9:00 PM
 *To:* softimage@listproc.autodesk.com
 *Subject:* Re: custom ICENode - questions and request for example source
 code

   Yeah, same hunch here.
  Unless the performance expectations are in the multiple characters
 real-time concurrently, in which case I think neither way is gonna get
 there usually.


 On Wed, May 15, 2013 at 1:04 PM, Ciaran Moloney 
 moloney.cia...@gmail.comwrote:

 I'm sorta , kinda sure that's a dead end for a custom node. You might be
 better off optimizing your ICE tree. It doesn't sound like such a complex
 problem, care to share?


 On Wed, May 15, 2013 at 2:41 AM, Matt Lind ml...@carbinestudios.comwrote:

  I’ve been looking at the ICE SDK as a start to the process of writing
 custom ICE Nodes in C++.  I need to write topology generators, modifiers
 and deformation nodes.  So far all the source code I’ve seen supplied with
 Softimage only deal with particle clouds or primitive data such as
 converting integers to scalars.  Does anybody 

Any (CG) software as a service. Now.

2013-05-15 Thread Stefan Kubicek
I have the slight feeling that this isn't necessarily going to slow down  
the migration from desktop to web-based applications:  
https://www.youtube.com/watch?v=YUsCnWBK8gchd=1


Press release: http://www.otoy.com/130501_OTOY_release_FINAL.pdf



--
---
Stefan Kubicek
---
keyvis digital imagery
   Alfred Feierfeilstraße 3
A-2380 Perchtoldsdorf bei Wien
  Phone:+43/699/12614231
   www.keyvis.at  ste...@keyvis.at
--  This email and its attachments are   --
--confidential and for the recipient only--



Re: custom ICENode - questions and request for example source code

2013-05-15 Thread Raffaele Fragapane
If you are using while and repeat nodes to reparametrize the surface, you
are paying a ton of unnecessary costs. Yes, those things are slow; no, they
are often not required, which is why both Ciaran and I had the same hunch.

Matter of fact, I very recently worked on an equivalent problem, but
trickier (think of adding a dimension).
By far the fastest approach, although it might seem counter-intuitive, is
to search and filter geometry. Even if it's A LOT of nearest-location runs,
they will always be fast and do an excellent job of accessing a shared
optimized structure; then it's up to you to filter the arrays in an
ICE-friendly way (so, no repeats), which again is a puzzling art of its own
sometimes (Stephen has excellent blog entries about many basics and
sorting tricks if you are unfamiliar).

I hear a lot of people complaining about ICE performance, and then
frequently enough they treat it as if it were a normal programming
language and try to hammer in whiles, repeats, walking-to conditions,
multidimensional array equivalents and so on, on the assumption that saving
nodes is going to make things faster, when in actuality there are other
ways, which might seem counter-intuitive, that will blaze by any of those
methods.
Most factory nodes, even in the hundreds, add a negligible overhead. I have
complex functions totalling hundreds of nodes that run faster than the monitor
loop can time them, and still top the vSync with sampling rates in the
thousands. Food for thought there.

ICE still sucks at some many-to-one cases, and definitely does at many-to-many,
but a problem like re-parametrizing a surface and getting a correlated,
coherent transform for a null from it is not one of them.

I mean no offense, but it sounds like you haven't spent a lot of time
working with ICE, and you are coming from the assumption that your
respectable programming knowledge of what's optimal and what isn't
might transfer across directly, when chances are it's hurting more than
anything.
You have to think laterally, a good few degrees of separation from C or JS
to ICE, in terms of what's optimal; ironically, ICE is often a lot closer to
the metal in its SIMD roots than something that gets to scuttle through GCC
before running.



On Wed, May 15, 2013 at 5:42 PM, Matt Lind ml...@carbinestudios.com wrote:


Re: custom ICENode - questions and request for example source code

2013-05-15 Thread Guillaume Laforge
For topology, there is no SDK access. But a custom node can do all the
low-level stuff (manipulating the so-called PolygonalDescription).
Same for kinematics: a custom node can manipulate a 4x4 matrix.
In the end you need to set the topo/kine using the corresponding attribute.

Here is an example (with source code) to create topology using a custom ICE
node: http://frenchdog.wordpress.com/2012/01/05/happy-2012/

If you want to do everything with one node, it is a job for a custom
operator I think.

Guillaume


On Wed, May 15, 2013 at 6:07 AM, Ahmidou Lyazidi ahmidou@gmail.com wrote:

 From my small experience with this, you can't make a custom topology or
 kinematics node; you make a node that abstracts the more or less complex
 computation, then you feed the topology nodes (or a matrix in the case of
 kinematics).
 As you stated, you can't use locators or location queries in a custom ICE
 node, so if you need them the workflow is to break your ICE node into smaller
 parts.

 About the performance: sometimes it's faster, sometimes quite the same. I
 made a parallel transport frame node; the gain was only 15%, but the setup
 was faster.
 This node seems to perform way faster:

 http://shaderop.com/2011/07/cubic-bezier-curve-node-for-softimage-ice/index.html

 A


 ---
 Ahmidou Lyazidi
 Director | TD | CG artist
 http://vimeo.com/ahmidou/videos


 2013/5/15 Matt Lind ml...@carbinestudios.com

  well, let's answer the questions first:

 1) Does anybody have source code they are willing to share for custom ICE
 Nodes that deal with topology and/or geometry?

 2) Does the lack of reference, location, and execute ports for custom ICE
 nodes mean I cannot cast a location search from inside an ICE node?



 To answer your question:

 Imagine two nulls and two NURBS Surfaces.  The task is to find the
 nearest location from the first null to the first surface.  At that
 location, build an orthonormal basis and compute the local transform of the
 null relative to that basis.  Then reconstruct that relationship by
 applying it to the 2nd null relative to the 2nd surface assuming both
 surfaces use uniform parameterization, not non-uniform as is the softimage
 default.  Version 2: extend to operate on vertices of polygon meshes
 instead of nulls.  I have a working version, but it is slow and not very
 stable.

 The problem I'm encountering is it simply takes too many factory nodes to
 be able to work efficiently. Each node has a certain amount of overhead
 regardless of what it does. Plus, the support for NURBS in ICE is rather
 abysmal. I have to construct my own orthonormal basis plus implement my own
 algorithm to convert from non-uniform parameterization to uniform
 parameterization.  Both are doable, but take very many nodes to do it
 (including support for edge cases) making the whole effort rather clumsy at
 best. The parameterization conversion is expensive as it involves sorting
 and searching (while/repeat/counter nodes).  When applying the ICE Compound
 to a polygon mesh with 5,000+ vertices... it gets the job done, but
 chugs.

 I have a version of this tool written as a scripted operator, and it
 performs really well because it has better SDK support and the
 sorting/searching can be better optimized.  But one shortcoming of scripted
 operators is they self-delete if an input goes missing (which often happens
 on scene load or model import when the content has been modified
 externally).  This in turn causes content using the operator to malfunction
 generating bug reports which are sent to artists to fix.  Unfortunately
 most artists weren't around when the content was created years ago, so they
 have no idea what's wrong, what the expected output is supposed to look
 like, or how to fix it.  Often an asset has to be retired and replaced.
 This is my motivation for rewriting the tool as a custom ICE node as ICE is
 much more graceful when its inputs don't exist - it just turns red and
 sits patiently until conditions improve.  This gives artists a chance to
 fix the problem without having to sweat the details because they can read
 the GetData node to see what's missing, then find and repair it.  I'm
 trying to make the content in our pipeline more durable.

 So...I'm looking for code samples of how to deal with topology and
 geometry in ICE.  So far I have not found any.


 Matt






  --
 *From:* softimage-boun...@listproc.autodesk.com [
 softimage-boun...@listproc.autodesk.com] On Behalf Of Raffaele Fragapane
 [raffsxsil...@googlemail.com]
 *Sent:* Tuesday, May 14, 2013 9:00 PM
 *To:* softimage@listproc.autodesk.com
 *Subject:* Re: custom ICENode - questions and request for example source
 code

   Yeah, same hunch here.
  Unless the performance expectations are in the multiple characters
 real-time concurrently, in which case I think neither way is gonna get
 there usually.


 On Wed, May 15, 2013 at 1:04 PM, Ciaran Moloney 

Re: Aw: heavy ice operation flaws

2013-05-15 Thread Andreas Böinghoff

b:

I'm a bit double-minded when it comes to evaluation of hidden ICE trees. 
Sometimes it's nice to have, sometimes not. Personally I would love to 
see these two things:


First: the possibility to turn off the evaluation of all ICE trees in the 
scene (like in Maya - Evaluate Nodes). That would also be nice for 
Deformers, Envelopes, Constraints...
Second: the possibility to mute ICE trees. The "Disable from here" option 
helps from time to time, but a hard mute switch would be nice.


On 5/15/2013 12:02 PM, Leo Quensel wrote:

To b:
I once tried to cast rays from a camera to points in a pointcloud to 
determine visibility.
It was enough to have the camera connected to a raycast node and it 
was evaluated all the time - even if the raycast node was completely 
disconnected from anything else.
This was on 2012 (without SP) - haven't tried since.
*Sent:* Wednesday, 15 May 2013, 11:33
*From:* Sebastian Kowalski l...@sekow.com
*To:* softimage@listproc.autodesk.com
*Subject:* heavy ice operation flaws
hey list,

i am feeling a bit bad about complaining every time i write to this 
beloved list.

sorry, but…

a) load a heavy cache, generate a shitload of points, build up huge 
arrays and the ram consumption rises to max. well, that's not the 
problem, but delete that cloud, unplug the cache reader and the memory 
won't flush.. it stays that high.
seems to be reserving the ram, 'cause it won't rise higher when doing 
the operation again. ok, i understand. but then set up a second cloud, 
do something similar, and goodbye softimage.


b) having a hidden pointcloud in the scene does not mean that its tree 
won't be evaluated when some param outside the tree is feeding it, like 
doing some cam frustum calculation.


c) similar to b), save a scene with a hidden object simulated by an ice 
tree. it's gonna evaluate, the progress bar comes up (and on win 7, it's 
gonna hide itself behind the main application).
on occasion it's also gonna ramp up on ram, and won't flush after saving 
is done (when you've got a HIDDEN pointcloud reading a cache).


that's all on 2013.

-sebastian







Re: custom ICENode - questions and request for example source code

2013-05-15 Thread Guillaume Laforge
As my blog is rather dead these days, I put the code, compound and scenes
on this public repo: https://github.com/frenchdog/ICEConvexHull

Cheers,
Guillaume


On Wed, May 15, 2013 at 6:39 AM, Guillaume Laforge 
guillaume.laforge...@gmail.com wrote:


Re: Aw: heavy ice operation flaws

2013-05-15 Thread Guillaume Laforge
+1 for:

- Various graph evaluation modes: Mute, On demand, Always (some examples
here: https://vimeo.com/48242379)
- Faster graph validation (it would still be needed if
you are editing a muted ICETree)
- Fixing "hidden but still evaluating ICETree" bugs


On Wed, May 15, 2013 at 6:47 AM, Andreas Böinghoff boeingh...@s-farm.de wrote:


Aw: Re: heavy ice operation flaws

2013-05-15 Thread Leo Quensel

Oh yes, the graph validation becomes a huge pain after a certain number of nodes. I had projects where it took 30 seconds just to connect a single node. No fun :(



Sent: Wednesday, 15 May 2013, 13:08
From: Guillaume Laforge guillaume.laforge...@gmail.com
To: softimage@listproc.autodesk.com
Subject: Re: Aw: heavy ice operation flaws



Re: Re: heavy ice operation flaws

2013-05-15 Thread Vincent Fortin
I didn't know it was called graph validation, but this problem happens
to me as soon as the scene grows beyond a certain number of point
clouds/nodes. The evaluation of the graph itself is *super fast*, but
dropping a new node or even selecting one takes forever, with the screen
turning white. In extreme cases the problem seems to extend to other basic
operations in the application, such as accessing menus or changing
cameras, which is weird.
With everything hidden and viewports muted, you'd think you could at least
lay down all new nodes before anything gets evaluated.
+1 for what Guillaume said!


On Wed, May 15, 2013 at 7:28 AM, Leo Quensel le...@gmx.de wrote:


Re: Re: heavy ice operation flaws

2013-05-15 Thread Jens Lindgren
+1 for faster validation as well. Really annoying.


On Wed, May 15, 2013 at 2:40 PM, Vincent Fortin vfor...@gmail.com wrote:


-- 
Jens Lindgren
--
Lead Technical Director
Magoo 3D Studios http://www.magoo3dstudios.com/


Re: PyQtForSoftimage with PySide support

2013-05-15 Thread Stefan Kubicek

Hi Steven,

I managed to compile and create a Windows installer for PySide 1.1.3 using  
Qt 4.8.4, VS2010 and Python 2.7.4, all x64. For a quick test I also  
modified the PyQtForSoftimage Python scripts to use the PySide libs  
instead of PyQt and I was able to run the ExampleDialog and  
ExampleSignalSlot Examples that come with your addon.
The ExampleMenu example throws an error when clicking on one of the menu  
items from the popup; it looks like PySide expects a slightly different  
syntax, which should be fixable:


TypeError: 'PySide.QtGui.QMenu.exec_' called with wrong argument types:
#   PySide.QtGui.QMenu.exec_(int, int)
# Supported signatures:
#   PySide.QtGui.QMenu.exec_()
#   PySide.QtGui.QMenu.exec_(list, PySide.QtCore.QPoint,  
PySide.QtGui.QAction = None)
#   PySide.QtGui.QMenu.exec_(list, PySide.QtCore.QPoint,  
PySide.QtGui.QAction, PySide.QtGui.QWidget)
#   PySide.QtGui.QMenu.exec_(PySide.QtCore.QPoint, PySide.QtGui.QAction =  
None)


Besides that it's looking good.

How far did you get, and is there anything else to consider?

Also, I don't want to rush ahead, but if it's of any use to you or anyone  
else (and legal, I suppose?) I can put that installer (or just  
shiboken.pyd) and maybe even the modified plugin scripts up for download  
for people to experiment with in the meantime.
I also thought of documenting the build process (also as a reminder for me  
should I ever have to do this again), since most of the info is scattered  
all over the net and there are likely others who will need to compile it  
themselves at some point and could save a lot of time with all the info in  
one place. Let me know what you think.





icky... i will just provide an installer which is just their build system
run vanilla, which includes shiboken... for now until they get it together.



On Mon, May 13, 2013 at 7:24 PM, Luc-Eric Rousseau  
luceri...@gmail.com wrote:


you could take the pyside dlls from a maya 2014 install, we figured out all
of those compilation issues. they're compiled for python 2.7 and vc2010.
these dlls won't work with python 2.6


On Monday, May 13, 2013, Stefan Kubicek wrote:

SI2014 looks quite attractive due to all the bug fixes, so 2.7.4 it will
be for me soon.
I don't know if it's ok to mix Python versions, e.g. use Shiboken
compiled against 2.6.x in a 2.7.x environment,
but even if it works I'd just feel...uneasy, never knowing if the next
cryptic error message is due to mixing
versions, or my own fault.

 well that is what we are sorting out. if we need to compile our own PySide
version/installer then guess what? the PySide license allows me to do that
:)



That's exactly what made me look into PySide too. The license is very
copyleft (if that's the right term).






--
---
   Stefan Kubicek
---
   keyvis digital imagery
  Alfred Feierfeilstraße 3
   A-2380 Perchtoldsdorf bei Wien
 Phone:+43/699/12614231
  www.keyvis.at  ste...@keyvis.at
--  This email and its attachments are   --
--confidential and for the recipient only--



Re: heavy ice operation flaws

2013-05-15 Thread Andy Moorer
+1 here for G's suggestions as well; graph eval modes would solve so many 
issues.

If we could tag certain execution ports of an ICE op to always evaluate while 
preserving the on-demand speed of other branches, even better.



Re: Any (CG) software as a service. Now.

2013-05-15 Thread Alan Fregtman
Thiago's new Lagoa cloud-based renderer is also a pretty impressive use of
HTML5 for a 3D DCC:

http://home.lagoa.com/



On Wed, May 15, 2013 at 6:15 AM, Stefan Kubicek s...@tidbit-images.com wrote:

 I have the slight feeling that this isn't necessarily going to slow down
 the migration from desktop to web-based applications:
 https://www.youtube.com/watch?v=YUsCnWBK8gc&hd=1

 Press release: http://www.otoy.com/130501_OTOY_release_FINAL.pdf



 --
 ---
 Stefan Kubicek
 ---
 keyvis digital imagery
Alfred Feierfeilstraße 3
 A-2380 Perchtoldsdorf bei Wien
   Phone:+43/699/12614231
www.keyvis.at  ste...@keyvis.at
 --  This email and its attachments are   --
 --confidential and for the recipient only--




Re: Any (CG) software as a service. Now.

2013-05-15 Thread Stefan Kubicek

Indeed, if only I had more time to play with it :-/





--
---
   Stefan Kubicek
---
keyvis digital imagery
   Alfred Feierfeilstraße 3
A-2380 Perchtoldsdorf bei Wien
  Phone:+43/699/12614231
   www.keyvis.at  ste...@keyvis.at
--  This email and its attachments are   --
--confidential and for the recipient only--



Re: PyQtForSoftimage with PySide support

2013-05-15 Thread Steven Caron
thanks for the offer stefan, but i have an installer already and i plan on
making it available once it gets some testing.

i made a module to abstract the python qt bindings (combined a few that i
found online); this module attempts to remove any need for different
syntax. that code is available now on the github page. also, you might have
an old copy of my plugin because there is a fourth example. it's called
ExampleUIFile and this is where i am still having trouble. i am trying to
make it work properly with the abstraction module but am hitting some issues.

as far as the QMenu is concerned that looks like you used the wrong
arguments. my code works without modification... menu.exec_(QCursor.pos())
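
for anyone hitting the TypeError stefan posted, a minimal self-contained sketch of the PySide-friendly call (assumes a running QApplication and a plain QMenu; PySide simply has no exec_(int, int) overload, it wants a QPoint):

# minimal PySide sketch: pop a QMenu at the mouse cursor
from PySide.QtGui import QApplication, QMenu, QCursor

app = QApplication.instance() or QApplication([])   # reuse the host app if one exists

menu = QMenu()
menu.addAction("hello from PySide")

# exec_ blocks until an action is picked or the menu is dismissed
picked = menu.exec_(QCursor.pos())
if picked is not None:
    print(picked.text())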

from what i know, it is legal to provide a link to the precompiled
installer.

i think improved documentation is in order, feel free to fork the repo and
make changes to the readme.creole then send me a pull request.

s


On Wed, May 15, 2013 at 6:30 AM, Stefan Kubicek s...@tidbit-images.com wrote:



RE: custom ICENode - questions and request for example source code

2013-05-15 Thread Matt Lind
That's what I was afraid of.

I remember your findings from a while ago, which was part of my incentive to 
pursue this route.  500ms vs. 20ms is quite significant (2500%). In my case it 
would be the difference between acceptable performance and unacceptable 
performance.

I'm OK with having to break this down into a small handful of nodes (~10), but 
I'm not OK with having to use 300 or so as is currently the case.

On the kinematics front, I'd like to compute the local transform of one object 
relative to another and spit out the result as a 4x4 matrix.  That alone would 
eliminate 50 nodes from the tree for each instance where the functionality is 
needed.  Another node to convert a UV location from non-uniform to uniform 
parameterized space would eliminate a significant number of nodes too, and 
that's really the bottleneck at this point, because doing searches and reverse 
lookups using the factory nodes is quite cumbersome and impractical.



Matt




From: softimage-boun...@listproc.autodesk.com 
[mailto:softimage-boun...@listproc.autodesk.com] On Behalf Of Ahmidou Lyazidi
Sent: Wednesday, May 15, 2013 3:07 AM
To: softimage@listproc.autodesk.com
Subject: Re: custom ICENode - questions and request for example source code


RE: custom ICENode - questions and request for example source code

2013-05-15 Thread Matt Lind
BTW - the link to the source code is dead.

Matt



From: softimage-boun...@listproc.autodesk.com 
[mailto:softimage-boun...@listproc.autodesk.com] On Behalf Of Ahmidou Lyazidi
Sent: Wednesday, May 15, 2013 3:07 AM
To: softimage@listproc.autodesk.com
Subject: Re: custom ICENode - questions and request for example source code


Re: custom ICENode - questions and request for example source code

2013-05-15 Thread Sebastien Sterling
"For topology there is no SDK access" - does this mean non-existent, or
locked? And why? If Matt wants to create custom nodes, are the limitations
inherent to ICE, or is ICE, like the standard SDK, locked in certain areas?


On 15 May 2013 20:25, Matt Lind ml...@carbinestudios.com wrote:


RE: custom ICENode - questions and request for example source code

2013-05-15 Thread Grahame Fuller
I'm confused. Calculating the local transform of one object relative to another 
is just a matter of matrix inversion and multiplication. But of course you 
already know that, Matt, so I must be misunderstanding something about your 
problem.
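
For the record, the relative transform in plain maths (numpy here purely to illustrate, not the ICE or SDK matrix types; it assumes the row-vector convention where world_A = local_A_in_B * world_B):

import numpy as np

def local_transform(world_a, world_b):
    # 4x4 world matrix of A expressed in the space of B, assuming row vectors:
    # world_a = local_a_in_b . world_b  =>  local = world_a . inv(world_b)
    return np.dot(world_a, np.linalg.inv(world_b))

# toy example: B translated to (1, 2, 3), A at (2, 2, 3) in world space,
# so relative to B it should come out at (1, 0, 0)
world_b = np.eye(4); world_b[3, :3] = [1.0, 2.0, 3.0]   # translation in the 4th row
world_a = np.eye(4); world_a[3, :3] = [2.0, 2.0, 3.0]
print(local_transform(world_a, world_b)[3, :3])          # -> [ 1.  0.  0.]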

Also, have you tried using either the Reinterpret Location on New Geometry or 
the UV to Location nodes? They might avoid the parameterization issues.

If I can find some spare time I might try to knock up a simple demo. Remind me 
which version of Softimage you are currently using?

gray

From: softimage-boun...@listproc.autodesk.com 
[mailto:softimage-boun...@listproc.autodesk.com] On Behalf Of Matt Lind
Sent: Wednesday, May 15, 2013 02:26 PM
To: softimage@listproc.autodesk.com
Subject: RE: custom ICENode - questions and request for example source code


Re: custom ICENode - questions and request for example source code

2013-05-15 Thread Steven Caron
they don't provide access to the 'topology' type input or output port. you
have to make a node that outputs point positions and polygon description.
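
in case it helps to picture that output: the polygon description is the classic flat face list (a vertex count followed by that many point indices, per polygon), the same idea as the array the scripting SDK's AddPolygonMesh takes; double-check the linked example for the exact ICE layout. a tiny sketch with made-up data:

# made-up data: a unit quad split into two triangles, in the flat
# "vertex count, then that many point indices" encoding
points = [
    (0.0, 0.0, 0.0),
    (1.0, 0.0, 0.0),
    (1.0, 1.0, 0.0),
    (0.0, 1.0, 0.0),
]
face_list = [
    3, 0, 1, 2,   # triangle over points 0, 1, 2
    3, 0, 2, 3,   # triangle over points 0, 2, 3
]

def iter_polygons(face_list):
    # walk the flat description, yielding one index tuple per polygon
    i = 0
    while i < len(face_list):
        count = face_list[i]
        yield tuple(face_list[i + 1:i + 1 + count])
        i += 1 + count

for poly in iter_polygons(face_list):
    print(poly)   # -> (0, 1, 2) then (0, 2, 3)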

here is an example node which meshes an openvdb grid, from an incomplete
project of mine...

https://github.com/caron/OpenVDB_Softimage/blob/master/VDB_Node_VolumeToMesh.cpp

s


On Wed, May 15, 2013 at 11:57 AM, Sebastien Sterling 
sebastien.sterl...@gmail.com wrote:

  for Topology there is no SDK access does this mean none existent or
 locked ? and..Why ? if Matt wants to create custom nodes, are the
 limitation inherent to ice, or is ice like the standard SDK locked in
 certain areas ?



RE: custom ICENode - questions and request for example source code

2013-05-15 Thread Matt Lind
I think your assumptions are rather off base, Raff.

I'd be interested in seeing how you remap a non-uniform UV coordinate to 
uniform space in ICE using your brute force technique.  I solved this problem 
for a traditional operator, but I cannot see how using your methods it can be 
done in ICE.  In fact, I don't think ICE exposes enough of the right kind of 
information to make it possible.  But since you said you've done it, I'd like 
to see how it's done. :)

The problem:

Given a location described as a normalized UV coordinate in non-uniform 
parameterized space, find the equivalent location on another NURBS surface as a 
normalized UV coordinate described in uniform parameterized space.

Test case:

Given 2 NURBS grids with 4 isolines (subdivisions) in U and V.  Leave the first 
surface as a flat plane without deformations, create the 2nd surface by 
duplicating the first surface and deforming the 2nd surface significantly - 
translate the 2nd surface away from the world origin so you can see what you're 
doing.  On the first surface, get the UV Coordinate for the first interior 
isoline intersection in U and V (should be roughly 0.25, 0.25).  Convert that 
UV coordinate to uniform parameterized space so it finds the same first 
interior isoline intersection on the 2nd surface.  Do it using only factory ICE 
nodes.

Actual use case: Repeat the test for arbitrary locations when the surfaces are 
surface meshes comprised of multiple surfaces (or subsurfaces if you prefer)
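
Stated outside of ICE, the non-uniform-to-uniform step is just a piecewise-linear lookup against the knot vector: knot i maps to i divided by the span count. A small Python sketch of that mapping, assuming a knot vector with the repeated end knots already stripped; the exact normalization Softimage applies to NURBS parameters may differ, so treat this as illustrative:

from bisect import bisect_right

def nonuniform_to_uniform(u, knots):
    # u is normalized against the knot range; knots is the sorted knot vector
    # (end multiplicities removed so no span has zero length).
    lo, hi = knots[0], knots[-1]
    t = lo + u * (hi - lo)                        # back to knot-space parameter
    i = min(bisect_right(knots, t) - 1, len(knots) - 2)
    local = (t - knots[i]) / (knots[i + 1] - knots[i])
    return (i + local) / (len(knots) - 1)         # uniform: all spans equally sized

# Example: first interior knot of a 4-span curve with a pinched first span
knots = [0.0, 0.1, 0.5, 0.75, 1.0]
print(nonuniform_to_uniform(0.1, knots))          # -> 0.25

Doing the same lookup with factory nodes is exactly where the sorting/searching bloat described below comes from.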


The main problem here is it takes waaay too many nodes to get the job done 
in a practical manner.  We need protection against regressions of nodes that 
seem to occur from release to release.  The last thing I want to deal with is 
debugging an ICE Tree with 300+ nodes because one node in the bunch now clamps 
incorrectly, returns NaN, or doesn't handle divide by zero errors correctly 
(because a bug elsewhere fed it a zero).  Finding problems like this in a 
traditional operator is manageable, but doing so in ICE is torture.


Matt






From: softimage-boun...@listproc.autodesk.com 
[mailto:softimage-boun...@listproc.autodesk.com] On Behalf Of Raffaele Fragapane
Sent: Wednesday, May 15, 2013 3:36 AM
To: softimage@listproc.autodesk.com
Subject: Re: custom ICENode - questions and request for example source code

If you are using while and repeat nodes to reparametrize the surface, you are 
paying a ton of unnecessary costs. Yes, those things are slow, no, they are 
often not required, which is why both me and Ciaran had the same hunch.

Matter of fact, I very recently worked on an equivalent problem, but trickier 
(think of adding a dimensionality).
By far the fastest approach, although it might seem counter-intuitive, is to 
search and filter geometry, even if it's A LOT of nearest location runs, they 
will always be fast and do an excellent job of accessing a shared optimized 
structure; then it's up to you to filter the arrays in an ICE-friendly way (so, 
no repeats), which again is a puzzling art on its own sometimes (Stephen has 
excellent blog entries about many basics and sorting tricks if you are 
unfamiliar).

I hear a lot of people complaining about ICE performance, and then frequently 
enough they treat it as if it were a normal programming language and try to 
hammer in whiles, repeats, walking to conditions, multidimensional array 
equivalents and so on, on the assumption that saving nodes is going to make 
things faster, when in actuality there are other ways, that might seem 
counter-intuitive, that will blaze by any of those methods.
Most factory nodes even in the hundreds add a negligible overhead, I have 
complex functions totalling hundreds running faster than the monitor loop can 
time them, and still topping the vSync with sampling rates in the thousands. 
Food for thought there.

ICE still sucks at some many-to-one cases and definitely does at many-to-many, 
but a problem like re-parametrizing a surface and getting a correlated, coherent 
transform for a null from it is not one of them.

I mean no offense, but it sounds like you haven't spent a lot of time working 
with ICE, and you are coming from the assumption that your respectable 
programming knowledge in terms of what's optimal and what isn't might transfer 
across directly, when chances are it's hurting more than anything.
You have to think laterally a good few degrees of separation from C or JS to 
ICE in terms of what's optimal, it's often ironically a lot closer to the metal 
in its SIMD roots than something that gets to scuttle through GCC before 
running gets to be.



Re: Texture size in games?

2013-05-15 Thread Stefan Andersson
Again, thanks everyone for the feedback. Really helpful information. Fate
has it that I'm not supposed to actually give this a try... Tried to install
Win7 on my HP machine, and it messed up my EFI boot manager, so nothing
would boot up. And I'm really bad at troubleshooting Windows errors.

So I'm installing CentOS again, need some familiar ground before making
another attempt.

With that said, I still think I will make some sort of attempt at creating
some game ready award winning AAA class models :)


regards
stefan



On Wed, May 15, 2013 at 11:26 AM, Martin furik...@gmail.com wrote:

 One thing that is very different from rendering work is that you need to
 be very clean with your data; naming conventions are more
 important, and history and live operators are not welcome.
 Basic scripting skills help a lot even if you're not a TD, and especially
 if your company doesn't have many TDs or has no TDs at all. My workflow and
 my team workflow improved a lot when I learned a few scripting tricks. My
 data is cleaner and my clients happier.

 So, if you can't script, I really recommend you to learn some basic
 things. Basic scripting is more useful than ICE here, if you have to choose
 one.

 About mipmap, Mipmap generation is automatic. The format depends on your
 project. DDS is almost the standard in a lot of projects I've been
 involved in. Some use Nvidia plugins, some others proprietary tools, but DDS
 has been quite the standard lately. Last time I did a Nintendo platform
 project we were using NW4C TGA. A format that comes with the Nintendo Tools
 package.

 Modeling software also changes depending on the project because the
 programmers may write their tools based only on one software.
 In Japan, Maya and Softimage are the most used. You need to match your
 client's version too, here is where Autodesk old version policies screw you
 if you have a subscription, 3 previous versions are not enough !!
 But most of the time we use 3-year-old versions (right now we are using SI
 2011 in my current project).

 I haven't seen a single project based on Blender, but it doesn't mean that
 you can't use it, you just have to convert it to your client's software
 when you deliver your work.

 And here is where you'll have to learn how to live with conversions. They
 aren't as simple as we would like. Sometimes you'll have to try FBX,
 Collada, Crosswalk, or OBJ, because depending on the case one can be better
 than the other. And after that, you'll have to clean that data, because
 converted data carries a lot of garbage. Here is where your scripting skills
 will save you hours of work. Especially if you need to convert animations.

 In no-SI projects, I usually do 80% of my modeling work in SI, convert it
 to Maya or Max and finish it there.

 M.Yara


 On Wed, May 15, 2013 at 5:52 PM, Stefan Andersson sander...@gmail.comwrote:

 Great response everyone! Except for polycounts and fascist UV mapping, it
 more or less sounds similar in a lot of ways to what I'm already doing. I
 won't go into games trying to become a programmer; I do want to make art.
 But as Matt suggested, I'm more likely to go art/tech since I'm somewhat of
 a geek also.

 Mip mapping is something that I'm familiar with, and my own asset tools
 already have it in place that I convert with OIIO all textures to be
 mip-mapped and also the power of two (just because I don't trust anyone I
 also resize them). But doing mip-mapping for a game engine, does that
 require exporting each level? Or what image formats are usually used for
 doing mip-mapping? I can't see game engines using exr... or do they? :)

 Before I go on and make Matt's little exercise I think I will build
 something rigid and see how that looks. And I have to convert my
 workstation from Linux to Windows.

 I talked to my brother who is working at Massive, and he thinks I'm an
 idiot... but he also said that they base the size of the texture depending
 on meters in the engine. I guess it also depends a lot of which engine you
 will use.
 But it leads me to another question. I'm not 100% sure yet which modeling
 software I will be using. My 14+ years with both Maya / Softimage leaves me
 somewhat in the middle of those two. Blender is also a contender, but I'll
 stick with the programs that I know from inside and out. However, Softimage
 doesn't have any metric units. Would the usual assumption that 1 SI unit is
 10 cm still apply? or again... depends on the engine/exporter?

 all the best!

 your humble servant
 stefan





 On Wed, May 15, 2013 at 10:26 AM, Matt Lind ml...@carbinestudios.comwrote:

 Well, you can look at it from two different points of view:

 a) Do what many game artists do and brute force their way through making
 content with heavy iteration.

 b) Do what many game programmers do and try to be efficient.


 If you just want a job in games, follow path A which doesn't really
 require much learning on your part, but does require a lot of practice.
  You need to follow the 

RE: custom ICENode - questions and request for example source code

2013-05-15 Thread Matt Lind
Yes, computing a local transform between two objects is pretty trivial, but 
that's not what I'm doing.

I am finding the nearest location on a NURBS surface from an object.  I then 
build an orthonormal basis at that location, and compute the local transform 
from the object relative to the orthonormal basis.  The issue is with the 
location on the NURBS surface.  There is no convenient way to compute the 
orthonormal basis because the information returned in the point locator is 
approximate and based on the control point hull of the surface, not the surface 
itself.  Therefore I've had to resort to a workaround of manually constructing 
the tangent vectors by issuing multiple location searches by minute distances 
in U and V from the nearest location found on the surface.  Problems arise when 
I get near the edge/boundary of a surface as I must flip my logic around to 
create vectors pointing the other direction so I can construct the basis.  I 
have accomplished the feat, but not without using far more nodes than should 
be necessary for such a basic task.  I would like to package this into a custom 
ICE node for convenience, as the functionality is needed multiple times within 
the ICE Tree.
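
A minimal sketch of that finite-difference basis construction, boundary flip included. eval_surface(u, v) is a hypothetical stand-in for whatever evaluates the surface position at a UV (in ICE that would be the location queries described above), so this is only the shape of the math, not working Softimage code:

import numpy as np

def basis_at(eval_surface, u, v, du=1e-3, dv=1e-3):
    # Flip the offset direction near the boundary instead of sampling outside [0, 1].
    su = -du if u + du > 1.0 else du
    sv = -dv if v + dv > 1.0 else dv

    p  = np.asarray(eval_surface(u, v), dtype=float)
    tu = (np.asarray(eval_surface(u + su, v), dtype=float) - p) * np.sign(su)
    tv = (np.asarray(eval_surface(u, v + sv), dtype=float) - p) * np.sign(sv)

    x = tu / np.linalg.norm(tu)
    n = np.cross(tu, tv)
    z = n / np.linalg.norm(n)
    y = np.cross(z, x)                  # re-orthogonalize the second tangent
    return p, np.array([x, y, z])       # origin plus a 3x3 rotation (rows are the axes)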

The UV to Location and reinterpret nodes both operate in non-uniform 
parameterized space.  They take a given UV coordinate from one surface and 
remap it to the other surface, but the resulting location is not topologically 
equivalent.  I had to solve this manually in my scripted operator by doing a 
reverse lookup of the surrounding knots and samples of the location on the 
first surface, then do a linear interpolation between the equivalent 
subcomponents on the other surface to find the topologically equivalent 
location.  Works, but is slow, and Softimage occasionally returns NaN when 
requesting the sample/knot near a boundary/edge.

Using Softimage 2013 SP1 (32 bit)


Matt




From: softimage-boun...@listproc.autodesk.com 
[mailto:softimage-boun...@listproc.autodesk.com] On Behalf Of Grahame Fuller
Sent: Wednesday, May 15, 2013 12:10 PM
To: softimage@listproc.autodesk.com
Subject: RE: custom ICENode - questions and request for example source code

I'm confused. Calculating the local transform of one object relative to another 
is just a matter of matrix inversion and multiplication. But of course you 
already know that, Matt, so I must be misunderstanding something about your 
problem.
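
For reference, that matrix form as a tiny numpy sketch, assuming Softimage's row-vector convention (global = local * parentGlobal); the 4x4 inputs are placeholders:

import numpy as np

def local_transform(global_obj, global_ref):
    # Transform of 'obj' expressed relative to 'ref':
    # global_obj = local * global_ref  =>  local = global_obj * inverse(global_ref)
    return np.dot(np.asarray(global_obj), np.linalg.inv(np.asarray(global_ref)))

def reapply(local, global_new_ref):
    # Rebuild a global transform from the stored local and a new reference.
    return np.dot(np.asarray(local), np.asarray(global_new_ref))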

Also, have you tried using either the Reinterpret Location on New Geometry or 
the UV to Location nodes? They might avoid the parameterization issues.

If I can find some spare time I might try to knock up a simple demo. Remind me 
which version of Softimage you are currently using?

gray

From: softimage-boun...@listproc.autodesk.com [mailto:softimage-boun...@listproc.autodesk.com] On Behalf Of Matt Lind
Sent: Wednesday, May 15, 2013 02:26 PM
To: softimage@listproc.autodesk.com
Subject: RE: custom ICENode - questions and request for example source code

That's what I was afraid of.

I remember your findings from a while ago, which was part of my incentive to 
pursue this route.  500ms vs. 20ms is quite significant (2500%). In my case it 
would be the difference between acceptable performance and unacceptable 
performance.

I'm OK with having to break this down into a small handful of nodes (~10), but 
I'm not OK with having to use 300 or so as is currently the case.

On the kinematics front, I'd like to compute the local transform of one object 
relative to another and spit out the result as a 4x4 matrix.  That alone would 
eliminate 50 nodes from the tree for each instance which the functionality is 
needed.  Another node to convert a UV location from non-uniform to uniform 
parameterized space would eliminate a significant number of nodes too, and 
that's really the bottleneck at this point because doing searches and reverse 
lookups using the factory nodes is quite cumbersome and impractical.



Matt





Re: Rigid body initial velocity - no ICE

2013-05-15 Thread Adam Sale
what if you were just to animate the ball directly, instead of using a null
parent?  Set keys to animate it through space, while defining it as a
passive RBD,
then animate it being an Active RBD at the point where the ball's last key
is set.

That should work.


On Wed, May 15, 2013 at 12:37 PM, David Rivera 
activemotionpictu...@yahoo.com wrote:

 Hi, I am setting up a really basic rigid body simulation from the toolbar
 (no ICE for the moment).
 And so I just parented a couple of boxes to nulls (legs) and then parented
 those to a thorax null (with corresponding box child)
 I animate these as I need, set the rigid body property page for the boxes
 to mute.

 I activate the rbd for the boxes to collide with the ball at frame 33.
 The ball is a child of an animated null. What I'm trying to accomplish is
 that the rbd ball picks up its initial velocity
 from the animated null. But this is not happening (or maybe I'm releasing
 (activating) the rbd property too close to the boxes, too early).

 Here's the video:
 http://www.youtube.com/watch?v=hVQ5V5j6GLQ&feature=youtu.be

 And there's also an attachment pic explaining this.
 How could I make the ball hit the boxes with initial velocity picked up
 from animation? I've read the online
 help for Softimage 2012, but I think the ppg picture there is outdated,
 from 2011 or before..

 Any ideas for this?

 Thanks.
 David R.

 ps: I got no plugins for this. And if this is to be set on ICE, remains to
 be seen :)



Re[2]: custom ICENode - questions and request for example source code

2013-05-15 Thread Guillaume Laforge
 for Topology there is no SDK access does this mean none existent or locked ? 
and..Why ?



It means that you can't get the Topology attribute inside your custom node to 
play with it and you can't output Topology attribute from a custom node.


The Topology attribute is not only the topology of the mesh. You can imagine 
the Topology attribute as a system that lets you describe what you want to do 
to your mesh using standard Softimage modeling operators. 
Let's say you want to extrude and then add a vertex to your polygon mesh: the 
topology attribute will just store a stack of actions that will be executed 
only when you plug it into a Set Data node set to Topology.


The operations that can be added to this stack must be known by Softimage. For 
example, if you plug a Split Edge node into a Set Topology node, it will call 
the same Split Edge function that modelers have been using for as long as XSI 
has been able to split edges :).


Adding SDK support for ICE Topology would mean adding the ability for users to 
add their own custom modeling functions, callable from ICE. I have no idea how 
complicated it would be to implement such a thing :).


While I was working on ICE Modeling I added the Create Topo node to let users 
create custom topology from built-in nodes or from a custom one.
Create Topo just needs the array of positions and the polygon indices to 
create the mesh. 
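
To make that data requirement concrete, here is a unit quad described as raw topology data. Note that the polygon-description layout shown (vertex count followed by the vertex indices, the classic Softimage polygon-description convention) is an assumption here; the exact encoding Create Topo expects should be checked against its documentation:

# A single quad described as raw topology data (illustrative layout).
positions = [
    (0.0, 0.0, 0.0),
    (1.0, 0.0, 0.0),
    (1.0, 0.0, 1.0),
    (0.0, 0.0, 1.0),
]

# One polygon: its vertex count followed by its vertex indices.
polygon_description = [4, 0, 1, 2, 3]

In ICE the equivalent two arrays would be built with factory nodes, or output by a custom node as discussed earlier in the thread, and fed into Create Topo, whose result is then plugged into a Set Data node set to Topology.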


The advantage of native modeling commands vs Create Topo is that they can 
update clusters...


Now you know as much as I know on ICE topo ;)


Cheers


G.   


-Original Message-
From: Sebastien Sterling sebastien.sterl...@gmail.com
To: softimage@listproc.autodesk.com
Date: 05/15/13 14:58
Subject: Re: custom ICENode - questions and request for example source code

 for Topology there is no SDK access does this mean none existent or locked ? 
and..Why ? if Matt wants to create custom nodes, are the limitation inherent to 
ice, or is ice like the standard SDK locked in certain areas ?
 
 


StrandPosition on PointPosition

2013-05-15 Thread olivier jeannel

Hi there,

I've been nightmaring on this the whole afternoon; I thought I'd be genius 
enough to figure it out, but no :(


I have a simulated pointcloud of 40 particles.
I have another non simulated pointcloud of 1 particle with a strand with 
a StrandCount of 40.


How the hell, do I snap the 40 StrandPosition on the 40 (PointPosition) 
particles ?
I'd like one big strand (not 40 pieces of strands in between 
pointpositions) .


Thanks for any help ^^

Olivier


Re: StrandPosition on PointPosition

2013-05-15 Thread Steven Caron
you want a line connecting them all?


On Wed, May 15, 2013 at 1:25 PM, olivier jeannel olivier.jean...@noos.frwrote:

 Hi there,

 Nightmaring on this the whole afternoon, I thought I'd be genius enough to
 find but, no :(

 I have a simulated pointcloud of 40 particles.
 I have another non simulated pointcloud of 1 particle with a strand with a
 StrandCount of 40.

 How the hell, do I snap the 40 StrandPosition on the 40 (PointPosition)
 particles ?
 I'd like one big strand (not 40 pieces of strands in between
 pointpositions) .

 Thank's for any help ^^

 Olivier



Re: StrandPosition on PointPosition

2013-05-15 Thread olivier jeannel

Ye


On 15/05/2013 22:27, Steven Caron wrote:

you want a line connecting them all?


On Wed, May 15, 2013 at 1:25 PM, olivier jeannel olivier.jean...@noos.fr wrote:


Hi there,

Nightmaring on this the whole afternoon, I thought I'd be genius
enough to find but, no :(

I have a simulated pointcloud of 40 particles.
I have another non simulated pointcloud of 1 particle with a
strand with a StrandCount of 40.

How the hell, do I snap the 40 StrandPosition on the 40
(PointPosition) particles ?
I'd like one big strand (not 40 pieces of strands in between
pointpositions) .

Thank's for any help ^^

Olivier






ICE Caching and Value Token in path

2013-05-15 Thread Eric Thivierge
I'm trying to write a quick script to cache out some particle clouds and 
meshes, and am using the xsi.CacheObjectsIntoFileDialog() command to 
create the caching options and then fill in the path using a series 
of tokens.


The problem is that I need to use the [Value] token pointing to a custom 
param set parameter on the object's model. The resolved path shows it is 
parsing it correctly; however, when I hit the Cache! button it says that 
the token is invalid. The following is essentially what I have in the path 
field:


[project path]/Simulation/[model]/[object]/[Value 
[model].rigCache_settings.dept]/[Value [model].rigCache_settings.version]/


I knew it was dicey tossing a token inside another one, but as I said, it 
resolves properly in the read-only field below the path field; it's only 
when it goes to cache that it isn't valid.


Is there another route I should be taking that will work for parameters 
that live on the model?


Thanks,

--
 
Eric Thivierge

===
Character TD / RnD
Hybride Technologies
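
One way around the nested [Value ...] tokens is to resolve those two parameters in the script itself and splice the literal values into the path before handing it to the caching command. A minimal sketch, reusing the rigCache_settings property named above; the model name is hypothetical, Application.GetValue is the standard scripting call, and everything else is illustrative:

# Resolve the custom-property values up front instead of nesting [Value ...] tokens.
model_name = "MyCharacter"   # hypothetical; use the model the object lives under
dept = Application.GetValue(model_name + ".rigCache_settings.dept")
version = Application.GetValue(model_name + ".rigCache_settings.version")

# The simple tokens still resolve at cache time; only the nested ones are spliced in.
cache_path = "[project path]/Simulation/[model]/[object]/%s/%s/" % (dept, version)

The resulting string can then be used as the path for the caching command in place of the token-only version.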
 





Re: StrandPosition on PointPosition

2013-05-15 Thread olivier jeannel

It's not possible that it was so simple.
A
Thanks, Thomas! You're my new best friend :D


On 15/05/2013 22:42, Thomas Volkmann wrote:

no time to try it, but 'build array from set' should work?
get the 40pointspointcloud.pointPosition - build array from set - 
set StrandPosition

Re: StrandPosition on PointPosition

2013-05-15 Thread Thomas Volkmann
I can't wait to tell my mum that I have a real friend now!! She will be super
excited!

 olivier jeannel olivier.jean...@noos.fr wrote on 15 May 2013 at 23:02:
 
  It's not possible that it was so simple.
  A
  Thank's Thomas ! You're my new best friend :D
 
 



Re: custom ICENode - questions and request for example source code

2013-05-15 Thread Ahmidou Lyazidi
I asked Mohammad, and he gave me the authorization to share his sources:
https://bitbucket.org/ahmidou_lyazidi/so_cubicbeziercurve

Cheers

---
Ahmidou Lyazidi
Director | TD | CG artist
http://vimeo.com/ahmidou/videos


2013/5/16 Matt Lind ml...@carbinestudios.com

 BTW – the link to the source code is dead.

 Matt



RE: custom ICENode - questions and request for example source code

2013-05-15 Thread Matt Lind
What attributes are you setting in your SetData nodes?




Matt





From: softimage-boun...@listproc.autodesk.com 
[mailto:softimage-boun...@listproc.autodesk.com] On Behalf Of Grahame Fuller
Sent: Wednesday, May 15, 2013 2:05 PM
To: softimage@listproc.autodesk.com
Subject: RE: custom ICENode - questions and request for example source code

Reinterpret Location does not work well for this case but I seem to be getting 
good results with UV to Location. See attached pic. (Let me know if you can't 
see the attachment.) I tried several values and they all seem good.

Now to find a clever way to store and read the subsurface ID.

gray


Re: custom ICENode - questions and request for example source code

2013-05-15 Thread Raffaele Fragapane
Matt, is the test case you outlined also your use case?
Reparametrization, even outside of ICE, is non-trivial, since if you want
equidistant spacing you are basically facing a minimization problem, which is
where I assume you went for a forward-walking technique (repeat with a bouncing
or decreasing increment until the lowest possible U and V value is found,
returning a distance within tolerance of the discrete interval).
I tried that, and it was prohibitively expensive, as it involves whiles and
repeats that degrade the graph's threading and inflate memory use
enormously.
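
For readers following along, the forward-walking / decreasing-increment idea reads roughly like the sketch below when written as plain Python (eval_surface is a hypothetical evaluator; the nested loops are exactly the part that serializes badly once expressed as ICE repeat/while nodes):

def closest_uv_by_refinement(eval_surface, target, steps=8, samples=5):
    # Coarse grid search around the current guess, then shrink the search radius.
    def dist2(u, v):
        px, py, pz = eval_surface(u, v)
        return (px - target[0]) ** 2 + (py - target[1]) ** 2 + (pz - target[2]) ** 2

    u, v, radius = 0.5, 0.5, 0.5
    for _ in range(steps):
        best = (dist2(u, v), u, v)
        for i in range(samples):
            for j in range(samples):
                cu = min(1.0, max(0.0, u - radius + 2.0 * radius * i / (samples - 1)))
                cv = min(1.0, max(0.0, v - radius + 2.0 * radius * j / (samples - 1)))
                best = min(best, (dist2(cu, cv), cu, cv))
        _, u, v = best
        radius *= 0.5
    return u, v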

What, to my surprise, I found out the first time I tackled the problem at
its lowest dimensionality is that using a ton of get-closest location and a
single repeat (and then ridding myself of that in favour of starting from a
set of samples ran through a fixed number of iterations hard-wired) had
practically no cost compared to that, and threaded more efficiently across
all cores at all times.

Get closest location on its own of course will return data you want to
filter, especially in areas where there is considerable discontinuity (a high
rate of change for the first-order derivative), but nothing that filtering
by a ruleset wouldn't deal with excellently (exclude the precedent location ->
filter in range -> filter by lowest U or V to avoid skipping the entire
discontinuity, and then a further get closest, resized and filtered again).
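
That same ruleset is easy to express as boolean masks over the candidate arrays; a numpy sketch of just the filtering step (names are illustrative, and in ICE this maps onto array/filter compounds rather than loops):

import numpy as np

def filter_candidates(uvs, dists, prev_uv, max_dist):
    # uvs: N x 2 candidate UVs; dists: N distances from the closest-location queries.
    uvs = np.asarray(uvs, dtype=float)
    dists = np.asarray(dists, dtype=float)
    keep = dists <= max_dist                                  # filter in range
    keep &= ~np.all(np.isclose(uvs, prev_uv), axis=1)         # exclude the precedent location
    if not keep.any():
        return None
    kept = uvs[keep]
    return kept[np.argmin(kept[:, 0])]                        # keep the lowest U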

If you literally are limited to cases with only a few control vertices and
you can guarantee the discontinuity isn't too brutal (IE: first order
derivative between subsequent nodes doesn't change by more than 90 degrees
minus iota) the problem is a great deal simpler than if you have many knots
and the domain of the surface has practically no boundaries other than
those of the function. That's why I was asking about the case.

Playing with the arrays for filtering in a safe and fast way was also key,
and that is counter-intuitive compared to how you would deal with arrays in
traditional programming, especially performance wise, but possible (again,
Stephen and Julian's blogs have many gems).

I would also consider using a very dense poly or point cloud conversion of
the nurbs plane with data samples from the surface, if this is a one-off
tool, over using the surface itself, but that might or might not be
possible.

I still don't know what your performance target is. If it's dozens of
frames per second, or 60hz across multiple setups, I'd say you're better
off dropping this like a dead rat and instantly exploring other avenues.
If it's a conforming tool used in a session with clear entry and exit
points, then the average 15-20hz that is perceived as still smooth when
operating a tool is more achievable.

Lastly, you always have the option of dealing with the parametrization in
your own OP and writing a transform per discrete element to use in ICE for
the rest from there, which is probably the sane thing to do if you have
dense surfaces and the problem has an unbound domain. ICE just isn't well
suited to dealing with a lot of fringe case handling to scale performance
(it does best when dealing with the same operation, no matter how big, run
many times as widely as possible instead of at variable depth), whereas in
an OP that kind of optimization always works well.


RE: custom ICENode - questions and request for example source code

2013-05-15 Thread Matt Lind
This is 'a' use case.  There are many others.

I didn't use the slow approach you describe below.  I resorted to an 
approximation method which involves a reverse lookup to find the nearest NURBS 
Samples which surround the location on the surface (via a custom binary search 
on the NURBS Samples collection), then do a barycentric-like computation 
between those samples to derive the UV coordinate in uniform parameterized 
space.  The process is done in reverse to apply the mapped location to the 
other surface.   This works relatively fast in a scripted operator, but the 
pitfall is Softimage occasionally returns NaN or undefined when querying the 
NurbsSamples collection - usually when querying a sample which resides on a 
boundary edge of the surface, but it's inconsistent.  I have to implement 
significant error trapping to prevent the operator from crashing Softimage.
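
The "barycentric-like computation" between the surrounding samples is essentially a bilinear blend of the UVs stored on those four samples. A generic sketch of just that interpolation step (the cell layout and names are illustrative, not the NurbsSamples API):

def lerp(a, b, t):
    return a + (b - a) * t

def interpolate_uv(corner_uvs, s, t):
    # corner_uvs = ((u00, v00), (u10, v10), (u01, v01), (u11, v11));
    # s, t = normalized position of the query inside that sample cell.
    u0 = lerp(corner_uvs[0][0], corner_uvs[1][0], s)
    u1 = lerp(corner_uvs[2][0], corner_uvs[3][0], s)
    v0 = lerp(corner_uvs[0][1], corner_uvs[1][1], s)
    v1 = lerp(corner_uvs[2][1], corner_uvs[3][1], s)
    return lerp(u0, u1, t), lerp(v0, v1, t)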

Anyway, I cannot use the approximation technique in ICE because ICE does not 
have the capability to query NurbsSamples in that way.  Even if it could, I 
would have to implement my own binary search and interpolation methods too.  
That is where the bloat and performance degradation comes from when applied to 
production use case.  This is also a driving reason to write a custom ICE Node 
as I could cut out the middleman of bloat and cut to the chase.

The main reason for pursuing ICE is to improve durability of our content.  
Scripted and compiled operators have the pitfall of self-deleting when an input 
is missing.  In the test case, if one of the nulls goes missing, the operator 
is deleted automatically and any content relying on that operator is now 
broken.  Since the operator may contain metadata specific to its application, 
it may not be possible to reconstruct the effect after the fact.  This scenario 
is very common on scene load or model import when the inputs are referenced 
models and they have been modified externally.  If such a problem arises using 
an ICE node, it merely complains, turns red, and waits for the user to resolve 
the situation.  The system is still intact which gives the artist the 
opportunity to put things right - often with a face palm followed by getting 
latest version of the missing asset from source control - which is preferable 
over the artists marching into my office asking me to run diagnostics to figure 
out why his scene is broken only for me to have to dig back into previous 
versions of the scene to recognize the problem and determine what input is 
missing.


Matt




From: softimage-boun...@listproc.autodesk.com 
[mailto:softimage-boun...@listproc.autodesk.com] On Behalf Of Raffaele Fragapane
Sent: Wednesday, May 15, 2013 4:24 PM
To: softimage@listproc.autodesk.com
Subject: Re: custom ICENode - questions and request for example source code

Matt, is the test case you outlined also your use case?
Reparametrization, even outside of ICE, is non-trivial since if you want 
equidistant you are basically facing a minimization problem, which is where I 
assume you went for forward walking technique (repeat with bouncing or 
decreasing increment until a lowest possible U and V Value is found returning a 
distance withing tolerance of the discrete interval).
I tried that, and it was prohibitively expensive as it involves whiles and 
repeates that degrage the graph's threading and inflate memory use enormously.
What, to my surprise, I found out the first time I tackled the problem at its 
lowest dimensionality is that using a ton of get-closest location and a single 
repeat (and then ridding myself of that in favour of starting from a set of 
samples ran through a fixed number of iterations hard-wired) had practically no 
cost compared to that, and threaded more efficiently across all cores at all 
times.
Get closest location on its own of course will return data you want to filter, 
especially in areas where there is considerable discontinuity (high rate of 
change for the first order derivative), but nothing that filtering by a ruleset 
wouldn't deal with excellently (exclude precedent location  filter in range  
filter by lowest U or V to avoid skipping the entire discontinuity and then a 
further get closest resized and filtered again).
If you literally are limited to cases with only a few control vertices and you 
can guarantee the discontinuity isn't too brutal (IE: first order derivative 
between subsequent nodes doesn't change by more than 90 degrees minus iota) the 
problem is a great deal simpler than if you have many knots and the domain of 
the surface has practically no boundaries other than those of the function. 
That's why I was asking about the case.
Playing with the arrays for filtering in a safe and fast way was also key, and 
that is counter-intuitive compared to how you would deal with arrays in 
traditional programming, especially performance wise, but possible (again, 
Stephen and Julian's blogs have many gems).
I would also consider using a very dense poly or point cloud