Re: Ideas for star fields?

2014-06-26 Thread Nancy Jacobs


On Jun 26, 2014, at 9:58 AM, Ponthieux, Joseph G. (LARC-E1A)[LITES] 
j.ponthi...@nasa.gov wrote:

 If you are using the high res map, the jpeg file 
 TychoSkymapII.t5_16384x08192.jpg will work just as well as the tiff without 
 the file size overhead. Go to the image file’s ADJUST tab and set the 
 Exposure to something like 2.0. You’ll be absolutely amazed at what is 
 lurking in the lower range of the image. :)
  
 This should work equally well whether you use the Environment shader that 
 Matt suggested or a sphere object. If you are using a sphere object, though, 
 you should set the material to a constant shader for best results. I find an 
 exposure of about ~1 to ~1.5 lets these details show up without making the 
 Milky Way disc too obvious.
  

Thanks, I'll try this! I was using the gamma they suggest of 1.8, within SI. It 
seems to look fairly realistic, but without many stars showing. I didn't think 
of changing the exposure (except on the HDR version I made).
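
(For reference, gamma and exposure do different things numerically: gamma reshapes the tone curve, while exposure is a straight linear multiply, which is why exposure pulls the faint stars up without the overall haze of a gamma lift. A minimal numpy sketch using the 1.8 gamma and 2.0 exposure values mentioned above; the formulas here are the generic ones, not necessarily exactly what SI applies internally.)

    import numpy as np

    # Normalized pixel values: a very dim star, a mid grey, a bright star.
    pixels = np.array([0.02, 0.5, 0.9])

    # Gamma lift: reshapes the whole curve (value ** (1/gamma)).
    gamma = 1.8
    gamma_lifted = pixels ** (1.0 / gamma)

    # Exposure: a linear multiply by 2**stops, clipped back to displayable range.
    exposure = 2.0
    exposure_lifted = np.clip(pixels * 2.0 ** exposure, 0.0, 1.0)

    print(gamma_lifted)     # everything brightens toward white -> overall haze
    print(exposure_lifted)  # the dim star comes up 4x, brights simply clip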


 You’ll also want to avoid looking at either of the poles. The projection they 
 used does not appear to compensate very well with a typical spherical UV 
 projection.
  

I do need to look at the poles; I need a full 360 spatial view, unrestricted. I 
thought I would have to use Flexify on it, but so far it looks good at the 
south pole, though I have to check it more... 


 From a personal perspective, to see the universe in this way and with this 
 level of clarity is really amazing. Our sun is just one of those dots.
  

Yes, isn't it? And the vastness of space... so much space/time between all those 
stars as well. And we are circling only one of them on our tiny little earth...
I find when you zoom into the image, more and more stars appear, and you can 
see the color shifts present. I would like to see more of them in the render, 
however, so thank you for the exposure advice. Gamma adjustment in SI reduces the 
sense of depth too much; the whitish haze there doesn't read well.

Nancy 


 --
 Joey Ponthieux
 LaRC Information Technology Enhanced Services (LITES)
 Mymic Technical Services
 NASA Langley Research Center
 __
 Opinions stated here-in are strictly those of the author and do not
 represent the opinions of NASA or any other party.
  
 From: softimage-boun...@listproc.autodesk.com 
 [mailto:softimage-boun...@listproc.autodesk.com] On Behalf Of Nancy Jacobs
 Sent: Thursday, June 26, 2014 2:17 AM
 To: softimage@listproc.autodesk.com
 Subject: Re: Ideas for star fields?
  
 I'm rendering with Redshift. What I've been experimenting with is taking the 
 star field map I'm using for the background -- whether Hubble or, now, Joey 
 Ponthieux's wonderful suggestion of the NASA star field image -- and wrapping 
 it to a sphere. It wraps nicely; not much shows up in the render, but it's a 
 good base to work with.
 
 
  


Re: Ideas for star fields?

2014-06-24 Thread Nancy Jacobs
Sounds great but I don't have Nuke... just After Effects with Trapcode 
Particular. There may be something useful there, but the problem with post comp is 
that I need to create lights for the scene that correspond to the starlight (at 
least somewhat), resulting in a subtle GI lighting from space... Might still 
work, but I'd probably have to be far more advanced with all this than I am... 
Or maybe I'm just overthinking it.

Thanks!

On Jun 24, 2014, at 3:02 PM, Sylvain Lebeau s...@shedmtl.com wrote:

 Maybe take a look at the StarPro plug-in for Nuke?
 
 http://www.maasdigital.com/starpro/
 
 I never tried it, but at $227 it's worth checking out, and you still keep the 
 control in comp. 
 Look at the first video for a little tut on it...
 
 hope it helps
 
 sly
 
 Sylvain Lebeau // SHED
 V-P/Visual effects supervisor
 1410, RUE STANLEY, 11E ÉTAGE MONTRÉAL (QUÉBEC) H3A 1P8
 T 514 849-1555 F 514 849-5025 WWW.SHEDMTL.COM
 
 
 
 
 On Jun 23, 2014, at 5:42 PM, Nancy Jacobs illus...@mip.net wrote:
 
 Hello, 
 
 I'm needing a star field kind of background for a scene, and looking for 
 ideas to create it. I have been using Hubble images wrapped around a sphere, 
 around the scene, but I'm finding it doesn't read well, even with very 
 high-res Hubble images. 
 
 So, I'm wondering about other ways to create star fields. Has to be 360 
 degrees, seamlessly -- and I don't have the capability to deal with that in 
 a compositing situation.
 
 So... any ideas?
 
 Thanks,
 Nancy
 


Re: Ideas for star fields?

2014-06-24 Thread Nancy Jacobs
Good point, if I use expressions to correct the rotation problems regarding the 
environment map and any SI world null rotation parameters... They have to be 
connected in a strange manner, at least as of 2014. Don't imagine they've fixed 
that...

Thanks,
Nancy

On Jun 24, 2014, at 1:22 PM, Matt Lind ml...@carbinestudios.com wrote:

 If you're just going to create a sphere with specks on it, why don't you use 
 an environment shader?  That does the same work without having to create a 
 sphere, deal with camera rigs, or mess up your ray depth computations in the 
 render.
 
 
 Matt
 
 
 
 
 
 
 
 -Original Message-
 From: softimage-boun...@listproc.autodesk.com 
 [mailto:softimage-boun...@listproc.autodesk.com] On Behalf Of Ponthieux, 
 Joseph G. (LARC-E1A)[LITES]
 Sent: Tuesday, June 24, 2014 6:25 AM
 To: softimage@listproc.autodesk.com
 Subject: RE: Ideas for star fields?
 
 Oh, and one other thing. You may find that constraining the star field sphere 
 position directly to your camera and forcing the sphere orientation to remain 
 in sync with the scene will produce the best results. Render the stars out as 
 a pass and comp everything over them as the base image. By doing this the 
 stars will always maintain an exact distance from the camera, and since 
 stars are such an incredible distance from us in space, the illusion is 
 remarkably convincing. It will also make the appearance of the stars much more 
 predictable, as you can set them for what you want and you no longer have to 
 worry about that appearance changing with anything other than camera orientation.
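
(The constraint described here boils down to one operation per frame: copy the camera's translation into the sphere's transform and leave the sphere's rotation fixed in world space. Below is a generic numpy sketch of that idea, not Softimage's actual constraint or expression syntax; in SI you would use a position constraint or an expression instead.)

    import numpy as np

    def star_sphere_matrix(camera_world, sphere_rotation=np.eye(3)):
        """4x4 world matrix for the star sphere: translation copied from the
        camera so the distance to the stars never changes, rotation kept fixed
        in world space so the stars only appear to move when the camera turns."""
        m = np.eye(4)
        m[:3, :3] = sphere_rotation          # keep the sphere's own orientation
        m[:3, 3] = camera_world[:3, 3]       # follow the camera's position
        return m

    # Example: camera translated to (10, 2, -5); the sphere tags along, unrotated.
    cam = np.eye(4)
    cam[:3, 3] = [10.0, 2.0, -5.0]
    print(star_sphere_matrix(cam))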
 
 --
 Joey Ponthieux
 LaRC Information Technology Enhanced Services (LITES)
 Mymic Technical Services
 NASA Langley Research Center 
 __
 Opinions stated here-in are strictly those of the author and do not represent 
 the opinions of NASA or any other party.
 
 
 -Original Message-
 From: softimage-boun...@listproc.autodesk.com [mailto:softimage-
 boun...@listproc.autodesk.com] On Behalf Of Ponthieux, Joseph G. (LARC-
 E1A)[LITES]
 Sent: Tuesday, June 24, 2014 9:13 AM
 To: softimage@listproc.autodesk.com
 Subject: RE: Ideas for star fields?
 
 The problem is that you are using Hubble images. Hubble images are high res
 and beautiful but often are only representative of a single focal point in
 space. What you want is a star map that is a cylindrical projection suited for
 your sphere. You will find the maps you need at this link. In particular the
 high res Tycho maps are probably what you want.
 
 http://svs.gsfc.nasa.gov/vis/a00/a003500/a003572/
 
 When you map these onto your sphere you will notice that the center of
 your sphere is the focal point of a disc or ring of stars. You'll see the
 ring form on the inner side of the sphere. There were three maps
 historically: Tycho, Hipparcos, and Yale. The following links contain them, but
 these do not look like the highest res versions.
 
 http://www.nasa.gov/multimedia/3d_resources/assets/tycho8.html
 
 http://www.nasa.gov/multimedia/3d_resources/assets/hipp8.html
 
 http://www.nasa.gov/multimedia/3d_resources/assets/yale8.html
 
 
 Each was created at different resolutions and star counts. One is synthetic I
 think, and I believe that is the Yale map, based upon the Tycho catalog. That
 map is of higher contrast and may lack a lot of the intermediate or diminished
 stars, so it may be useful in some circumstances. You'll have to figure out
 what the basic appearance is that you are looking for, and a combination of
 the maps may be what you want. As you probably have already discovered,
 you won't be able to let your camera get too close to the texture surface, as
 the stars will become abnormally large and the illusion will be lost. It's best
 if you scale the sphere as large as you can and keep the surface as far from the
 camera as possible to reach the effect you want.
 
 If you want a moving starfield, the best way to achieve that is to generate a
 massive field of small triangles set to constant white. The distance apart, size,
 and randomness will have to be worked out. You can do this as particles as
 well, but if the particles are set to pixel height you'll lose the sense of
 perspective and distance as you fly through them.
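
(A quick sketch of the kind of distribution being described: random directions pushed out to random distances in a thick shell, plus randomized sizes so perspective still reads on a fly-through. Generic numpy only; the shell radii and count are made-up numbers, and how you instance the triangles or particles is left to the package.)

    import numpy as np

    rng = np.random.default_rng(0)
    count = 50000

    # Uniform random directions (normalized gaussians), then random distances
    # inside an assumed near/far shell so the field has real depth.
    directions = rng.normal(size=(count, 3))
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    distances = rng.uniform(500.0, 5000.0, size=count)
    positions = directions * distances[:, None]

    # Log-normal sizes: lots of small triangles, a handful of big bright ones.
    sizes = rng.lognormal(mean=-1.0, sigma=0.5, size=count)

    print(positions.shape, float(sizes.min()), float(sizes.max()))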
 
 --
 Joey Ponthieux
 LaRC Information Technology Enhanced Services (LITES)
 Mymic Technical Services
 NASA Langley Research Center
 __
 Opinions stated here-in are strictly those of the author and do not
 represent the opinions of NASA or any other party.
 
 -Original Message-
 From: softimage-boun...@listproc.autodesk.com [mailto:softimage-
 boun...@listproc.autodesk.com] On Behalf Of Nancy Jacobs
 Sent: Monday, June 23, 2014 5:43 PM
 To: Softimage Listserve
 Subject: Ideas for star fields?
 
 Hello,
 
 I'm needing a star field kind of background for a scene, and looking for ideas
 to create it. I have been using Hubble images wrapped around a sphere

Re: Ideas for star fields?

2014-06-24 Thread Nancy Jacobs
Thank you, Jason, for this awesome texturing advice. I've done a lot in Photoshop 
with tiles and spherical texture maps, so this is my territory.

Some of these procedures I'll have to read over a few times to really get them 
completely, so I hope you don't mind if I need to ask a couple of questions about 
them at some point.

Thanks!
Nancy

On Jun 24, 2014, at 2:17 AM, Jason S jasonsta...@gmail.com wrote:

 In my experience, a textured sphere can work pretty well, 
 
 You can tile an image (3-4 times on a sphere)
 with a tilable base star texture (as uniform as possible)
  large enough to hold enough subtle variations without perceiving patterns 
 (perhaps 1.5-3x the size of your final render res), 
 
 If you are using Photoshop, from, say, a 1 or 2k-res small-star starfield pic 
 (to make a 2-3k final pic), you can do a 'filter-offset' by any odd amount, 
 and then break up the seams to make it tilable -- super easy, especially for 
 stars: 
 you can use a speckly brush clone stamp with high opacity (so no opacity 
 gradient falloffs) 
 and a low brush step (so one stamp at every ~20 pixels on strokes for very 
 random cloning).
 
 
 So you can make a relatively 'mostly uniform' star map density as a base 
 BG 
 (with many, many dim (almost subpixel) stars, a number of mediums, and 
 really just a couple of bright ones, all with a bit of cloudy variations).
 If there aren't enough dimmer ones, or to add density or (uniformize?), 
 you can use a big clone stamp with that speckle-y brush, but in additive 
 (linear dodge) mode at varying opacity.
 Also, with that now-tilable pic, you can scale it down 50% and tile it 4 times 
 at half opacity (linear dodge) for those many faint BG stars.
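
(That last step -- scale down 50%, tile 4 times, linear dodge at half opacity -- is just a resize plus a clipped add. A minimal numpy/Pillow sketch of it, assuming a tileable greyscale starfield.png as input; Photoshop's linear dodge is an addition clamped at white.)

    import numpy as np
    from PIL import Image

    base = np.asarray(Image.open("starfield.png").convert("L"), dtype=np.float32) / 255.0
    h, w = base.shape

    # Scale the tileable starfield down 50% and tile it 2x2 to cover the original size.
    half = Image.fromarray((base * 255).astype(np.uint8)).resize(((w + 1) // 2, (h + 1) // 2))
    tiled = np.tile(np.asarray(half, dtype=np.float32) / 255.0, (2, 2))[:h, :w]

    # Linear dodge (additive) at half opacity, clamped to white: many extra faint stars.
    dense = np.clip(base + 0.5 * tiled, 0.0, 1.0)
    Image.fromarray((dense * 255).astype(np.uint8)).save("starfield_dense.png")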
 
 Then, with those hubble pics, you can isolate interesting areas, make the 
 rest transparent, 
 and in 3d, add grids in key spots to add localized cloudy nebula patterns and 
 variations depending on what you're after 
 (with RGB intensity as opacity)
 
 If you really need 360 (up  down) with a spherical projection, 
 you'll probably want to mix-in a copy of that starfield texture for any 
 stretching at the poles of the sphere.
 
 I used a very speckle-y gradient  (made of fat noise) with a white to black 
 radial fat noise gradient in the center as an alpha for the same stars 
 texture, to project vertically top down (x-z) 
 
 You can also blend the star textures somewhat more than 1 in 3d so that some 
 stars can bleed a bit with perhaps an additive blurred version of just 
 those hot pixels.
 
 That may be enough on its own, but if you are moving around (at light 
 speed?) 
 you can also add 3D stars. Adam's tips seem like an excellent approach to 
 that :)  .. good luck! :)
 
 Jason
 
 
 On 06/23/14 17:50, Adam Sale wrote:
 Do you need nebulae, etc? 
 If it's just stars, what about using a static point cloud with spherical / 
 displaced randomized spheres as shape. Randomize color and transparency per 
 point? 
 This would give you the 3d field you are looking for, then perhaps some 
 fluids to do neb clouds, simulated particles for comets, meteors etc.. 
 Perhaps use the hubble images or comp some stills together to make a bg 
 cyclo to pull the 3d elements together? 
 
 Adam
 
 
 On Mon, Jun 23, 2014 at 2:42 PM, Nancy Jacobs illus...@mip.net wrote:
 Hello,
 
 I'm needing a star field kind of background for a scene, and looking for 
 ideas to create it. I have been using Hubble images wrapped around a 
 sphere, around the scene, but I'm finding it doesn't read well, even with 
 very high-res Hubble images.
 
 So, I'm wondering about other ways to create star fields. Has to be 360 
 degrees, seamlessly -- and I don't have the capability to deal with that in 
 a compositing situation.
 
 So... any ideas?
 
 Thanks,
 Nancy
 


Re: Ideas for star fields?

2014-06-24 Thread Nancy Jacobs


On Jun 25, 2014, at 12:14 AM, Jason S jasonsta...@gmail.com wrote:

 Plus mystery soft light from a galaxy that always happens to be somewhere 
 around so that there may be light, with dust in space so we can see lasers :)

That's what I'm counting on! That Mystery soft light. Since what I'm doing 
can have a  bit of 'artistic license' ;-)... Though I am making it generally 
correspond to the starfield light.

After all, one can see in old paintings the 'heavenly light' thing... Where you 
don't really question where it comes from too much if it works in the 
painting... (ok so I'm a painter first after all... ;-))

Nancy




Ideas for star fields?

2014-06-23 Thread Nancy Jacobs
Hello, 

I'm needing a star field kind of background for a scene, and looking for ideas 
to create it. I have been using Hubble images wrapped around a sphere, around 
the scene, but I'm finding it doesn't read well, even with very high-res Hubble 
images. 

So, I'm wondering about other ways to create star fields. Has to be 360 
degrees, seamlessly -- and I don't have the capability to deal with that in a 
compositing situation.

So... any ideas?

Thanks,
Nancy


Re: Ideas for star fields?

2014-06-23 Thread Nancy Jacobs
Thanks Adam, this is a great idea, and something I'm not used to doing, so I 
will have to learn more. Would it be best to do this with ICE? Or just basic 
particles?

As for fluids... am I missing something... do we have fluids in Softimage these 
days...? Nebula clouds would be perfect, as I need some interest, and something 
to account for more light happening in the scene. Basically, my scene is 
floating about in space...

Anything you can point me to to learn more about these processes? Even just 
topic keywords to explore would help.

Thanks very much!
Nancy

On Jun 23, 2014, at 5:50 PM, Adam Sale adamfs...@gmail.com wrote:

 Do you need nebulae, etc? 
 If it's just stars, what about using a static point cloud with spherical / 
 displaced randomized spheres as shape. Randomize color and transparency per 
 point? 
 This would give you the 3d field you are looking for, then perhaps some 
 fluids to do neb clouds, simulated particles for comets, meteors etc.. 
 Perhaps use the hubble images or comp some stills together to make a bg cyclo 
 to pull the 3d elements together? 
 
 Adam
 
 
 On Mon, Jun 23, 2014 at 2:42 PM, Nancy Jacobs illus...@mip.net wrote:
 Hello,
 
 I'm needing a star field kind of background for a scene, and looking for 
 ideas to create it. I have been using Hubble images wrapped around a sphere, 
 around the scene, but I'm finding it doesn't read well, even with very 
 high-res Hubble images.
 
 So, I'm wondering about other ways to create star fields. Has to be 360 
 degrees, seamlessly -- and I don't have the capability to deal with that in 
 a compositing situation.
 
 So... any ideas?
 
 Thanks,
 Nancy
 


Re: Rendering alternative to mental ray needed..

2014-03-23 Thread Nancy Jacobs
Thank you all so much for this information about Redshift! I'm planning on 
testing it out today, it looks like a very good tool for me. And I love the 
FPrime-like preview with gradual refinement. I really got hooked on that 
feedback some years ago. Miss it.

One question. I have a Quadro FX 3800 in a 3DBoxx machine a few years 
old... which I see qualifies for Redshift, but needs replacing anyway, as it 
just won't cut it for Mari. Softimage works great with it, no complaining with 
the same models and hi-res textures visible as I use in Mari... but what can you 
do, Mari is elitist that way ;-)

I also have an older Boxx with an older Quadro, but probably won't bother using 
it for a render farm, as it's not as powerful, and that would involve more 
expense. I will need to render quite a few frames of animation for my current 
mega (for me) video project. Overnights, and days spent on my day job (painting), 
rendering with the newer Boxx -- that's the plan.

What would everyone suggest as a video card replacement, not too pricey but 
substantially better than what I have? Yes, I do have to check with Boxx or find 
out what will fit in there with my current motherboard, but just off-the-cuff 
suggestions are fine. There's always a point where the cost/benefit ratio levels off... 
And I'm thinking not to go with a Quadro this time; another GeForce might do 
just as well, with more power for the price? If so, what's a good major step 
up that works well with Mari and Redshift, priced for the little guy...

Thanks!
Nancy

On Mar 23, 2014, at 10:40 AM, Tim Crowson tim.crow...@magneticdreams.com 
wrote:

 Sebastien, I think they're comparing it with Mental Ray simply because of its 
 degree of integration within Softimage, not because of its results. And as 
 Andreas said, MR really does have great shaders. The Redshift Architectural 
 material was originally derived from the MR version, for example, but has 
 since evolved. There are plans to replace the RS_Arch shader entirely with a 
 new uber-material that would include SSS as well (I've been wanting to blend 
 SSS and Refraction in the same shader for a while now...). Beyond the 
 integration and shader support though, Redshift is nothing like Mental Ray at 
 all.
 
 If you're interested in what kinds of shading nodes Redshift offers (or which 
 XSI ones it supports), you can see that in the public docs here, under the 
 'Shaders' section.
 
 As for the contrast between the Octane workflow and the RS workflow... surely 
 it's night and day. Redshift is very tightly integrated into XSI, much like 
 SITOA is. It's a really smooth experience. Unless they've changed some things 
 recently with Octane, the raw workflow for using Octane is not nearly as 
 fluid.
 
 In regards to Nancy's original question... it's amazing how many proponents 
 of Redshift have appeared in the last few months. Having used all the 
 renderers mentioned here, I'd also suggest Redshift, since Arnold is just 
 overkill for the kind of work you do, Nancy (judging strictly by what I can 
 gather, of course).
 
 -Tim
 
 
 On 3/23/2014 12:56 AM, Sebastien Sterling wrote:
 What is the workflow like vs Octane? Has anyone tested it?
 
 I mean, I hear people comparing Redshift to Mental Ray in matters of handling; 
 personally I'm not a fan of MR interaction, but that might just be bias on 
 account of the slowness, and the all-around instability, and the crashing 
 and the artefacts...
 
 Are the nodes anything different to what you would usually get ?
 
 
 On 23 March 2014 05:00, phil harbath phil.harb...@jamination.com wrote:
 I don’t know much about mac pros,  is that a pci-e 2 slot (or less?),  so 
 even though you are putting pci-e 3 cards in an older slot you are still 
 getting that kind result?  I have an computer about that age, if that 
 works, that would be a no brainer.
  
 From: Ed Manning
 Sent: Sunday, March 23, 2014 12:05 AM
 To: softimage@listproc.autodesk.com
 Subject: Re: Rendering alternative to mental ray needed..
  
 On the economic advantages of Redshift or other GPU renderers. 
  
 My current workstations are Mac Pro 3.1s which are left over from the 
 company I shut down in 2009 (bootcamped into Windows).  Essentially 
 worthless from a CPU standpoint. Putting a single $1000 Titan GPU into one 
 of them makes it more efficient at rendering than any modern 16-core $8,000 
 workstation running any CPU ray tracer. Putting two Titans in them is like 
 having my old 162-core blade server renderfarm without the $5000/month 
 electric bill. Not to mention all the IT overhead and license costs. 
  
 I have never seen a single piece of software (in concert with the 
 astonishing graphics hardware that is now so cheap and still getting 
 cheaper) have such a cost-reducing impact.
  
 Plus they are fanatically hard workers and great communicators. 
 
 -- 


Re: Demise of SI and what it means for fine arts work

2014-03-22 Thread Nancy Jacobs
These are great points Peter and Andres. 

Andres, I remember being in on the very first Truespace release... I loved the 
rendering engine, but couldn't stand the modeling. Interesting that you prefer 
it for modeling! I had many conversations with their tech support on why don't 
you have this or that simple modeling tool...but they insisted on their 
methodology, which to me was imprecise, though I can see how someone who 
preferred to model in another way would find it interesting. Did they change 
the modeling tools over the years? (I think I bailed at Truespace 3). 

I got well into Imagine around that time, and found it great for modeling... 
Even did a couple commercial projects with it if you can imagine that. A couple 
other software carcasses I don't remember the names of ;-)... then settled on 
Lightwave for some time. Loved the modeling tools there, and the rendering, the 
process and the results... with some brilliant plugins... frankly I still miss 
Lightwave there, it was easier to achieve my aesthetic with it. It's the (lack 
of) animation that broke Lightwave for me, and that dual modeling/rendering 
application situation. XSI was an absolute dream come true there. The most 
logical and intuitive piece of software I've ever used. With animation systems 
that are so much easier to work with and visualize. First time I've ever been 
able to do any successful rigging of a realistic human model was with XSI. And 
it's great for modeling too, once you get the hang of it.

You are all so right, I'm sure SI as is will continue to provide the tools I 
need for a very long time. As long as the help files remain online...? And if 
they don't break the final release so it is unusable. And if it is still 
available to install on future computers and windows platforms. That's the 
scary part, really. I'm even kind of hesitant to mention these things because 
I'm afraid AD is listening and might see a way to break us of the SI habit 
here...

Also, I've never really been able to get fully the results I want out of mental 
ray. I've had to put a lot of hours into studying it and experimenting, and 
still can't get a lot of the rendering effects I got with Lightwave. I loved 
their system of gradation effects you could put on anything, but somehow it 
doesn't translate to mental ray's system. Working with mental ray seems kind of 
like wrestling with an invisible bear sometimes...

I've heard there is a passionate C4D community, and it is often touted as a 
tool for artists, and easy to use. But when I've looked at it, it seems limited 
compared with SI. Does anyone know the state of it now, has it improved in the 
area of animation and effects? How does it compare with SI? 

I just keep remembering FPrime in Lightwave, and how great an integrated 
renderer that was to use... Fast for GI effects too, which I sorely need. And 
KRay was coming along and looking good too. If only I could get the best of 
both worlds there...Lightwave rendering plugins with SI everything else...

Nancy Jacobs 
http://www.childofillusion.net/

On Mar 22, 2014, at 9:54 AM, Andres Stephens drais...@outlook.com wrote:

 I agree with you Tenshi and Peter. 
 
 I still use Truespace for my modelling and previz 5 years after Microsoft 
 shut it down. 
 
 Yes it still only uses viewport tech based on directX 9, yes it has none of 
 the latest bells and whistles these days ... 
 
 But it's the community, very small as is (you could count them on your hands 
 and feet) that helps me keep it alive as my artistic tool of choice (and it 
 still wowzers clients as I quickly slap together and modify on demand previz 
 and models in a decent viewport today). There still are some heroes 
 developing it and even doing unofficial updates, or compiling discontinued 
 plugins and tutorials together and keeping them shared. Even resurrecting old 
 websites (www.Caligari.us) . 
 
 And this last release of Truespace from 2009 was only beta. (though luckily 
 they left it for free in its dying breaths) 
 
 I agree with you both, and when I am able to purchase a right to softimage 
 2015, I can still see years of shelf life for such a professional and capable 
 product like SI, with so much room to expand on concepts I don't even know 
 (ICE) that no matter how advanced its competition becomes, it still can be 
 a competitive and perfect tool of choice for individuals or small studios - 
 for years and years to come. 
 
 And I hope the community, even after shifting software, will not drift apart 
 and like TS, keep up the development how they can, the art, the products, and 
 in the right time master other tools and share... But there will always be 
 that first love on the side. 
 
 I guess the Truespace forum www.united3dartists.com/forum had a fitting 
 title when it was created right after the demise of its beloved software.. 
 
 Stay United, stay a true 3d artist, love your software dead or alive, and 
 keep mastering your skills with any

Rendering alternative to mental ray needed..

2014-03-22 Thread Nancy Jacobs
Hello, 

I was beginning to ponder, in another thread, some rendering issues in SI. I've 
never really liked mental ray, tried to, spent a long time studying it, and 
could never get the kind of aesthetic I was able to achieve in Lightwave, with 
the Steve Worley plugin FPrime, or the newer Kray, which was beginning a 
promising development about when I left. Seems to have taken off now, too.

I ported several projects over from Lightwave to XSI some years ago, and tried 
to reproduce the very nice rendering results I got in Lightwave. Totally 
different system of course. I figured it was one good approach to learning 
mental ray. Well, I couldn't get anywhere close, and it was a struggle, and I 
never really got acceptable results. Every other aspect of the projects went so 
much better, though, in XSI. And there was animation in XSI I couldn't even 
begin to do in Lightwave.

I need fast GI effects, that's integral to what I do. A lot of interior 
architectural-like surfaces. Not necessarily total realism, just an aesthetic 
-- I need to really be able to get in there and tweak things and make them 
work. I use a lot of HDR lighting, I know how to hand tweak it in photoshop, or 
create it from scratch, and I like to use that to control my light and color. 
I've used FG quite a bit, and am not always happy with how it translates things. GI 
is way too slow in my scenes.

FPrime and Kray had a way of handling these GI lighting effects that was very 
efficient, and tweakable. That's the one thing from Lightwave that I REALLY 
miss. Anything comparable for Softimage?

I remember I also could get some reflectivity and ray tracing in Lightwave at a 
fraction of the rendering time mental ray takes. I can't use it at all in 
mental ray, with GI (including FG) except on a single object basis. Not for 
animation on many frames, too costly. But I do like the nodal texturing system 
built into SI.

Anyway, considering these factors, is there any rendering solution for SI that 
anyone can recommend that will give me what I'm looking for? That isn't too 
expensive...

Thanks,
Nancy Jacobs

http://www.childofillusion.net/

Demise of SI and what it means for fine arts work

2014-03-20 Thread Nancy Jacobs
When I bought XSI years ago, I compared it with Maya, and the 3d software 
packages i had been using since the dawn of the phenomenon, and made my 
decision. I never looked back. I have been extremely happy with XSI -- the 
workflow, the interface, everything was geared toward ease of use and learning, 
and visualization of a project from beginning to end. It has been the one piece 
of software that I find myself saying, every time I use it, what a fantastic 
piece of software! A joy to learn and use. And I've barely delved into ICE.

When Autodesk purchased XSI, I was crushed. People speak of AD acquiring XSI to 
use its technology, and Maurice Patel has stated, "We also acquire tech, 
redesign and re-engineer it, even rewrite it entirely, to fit into our products 
and workflows and yes, if it is more efficient to do so, we just integrate it." 
So that is obviously one reason for them to acquire XSI... right after ICE was 
introduced.

But what I thought then, and sadly seems to be coming true... Is that AD 
acquired XSI in order to acquire and 'integrate' XSI's USER BASE. What better 
way for a company to dominate the user base of a software genre than to acquire 
software products in that genre, kill them, and then offer the stunned user 
base a cost-efficient (in the short term) entree into their preferred product. 
Plus they get to cannibalize the dead software and use it to pump up their 
'chosen one'. But we are not seeing that latter tech application effect so much 
as we are seeing the hijacking of the user base of Softimage. And, as so many 
have pointed out, bringing Maya into a state where SI users will find their 
workflow and features emulated is only a vague promise for future application. 
Not likely to be realized, considering the track record of Autodesk. 

Does this remind anyone of the infamous corporate takeover mentality...? 
Applied to software, of course. Same principle. Only here, it is the user base 
which is the prize, the economic draw of an expanded user base over the years. 
Especially as Maya, and the expensive plugins and expansions needed to do 
comparable work that XSI does out of the box... is significantly more expensive 
than XSI.

I am a one-person fine artist, primarily a painter, using SI as a tool for 
video installation work. This is a grey area of use, not completely 
non-commercial, as art shows have some commerce involved, still the return on 
investment in the area of 3D work is always likely to be a loss. Still, I 
reluctantly went for the SI maintenance agreement with AD when it bought XSI, 
stretching my budget as far as it will go. Maya is not an artist tool like SI 
is, and not agreeable to a small artist's budget. Very few options remain, in 
that regard. I left Lightwave because of its lack of non-linear workflow, and 
cumbersome animation. XSI was light years ahead in these areas. I made my 
choice, but now it seems that people like me are being squeezed out of any 
chance of developing our interests and contributions to an alternate aspect of 
3D work. 

I very much admire the work of all of you who work in the industry, and the 
truly amazing things you do with SI, or any software. Incredible, what you 
accomplish. (And I often find myself wishing I had the great teams you have to 
be able to accomplish more of what I envision.) But there has to be a place for 
small artists who choose to use 3D software for other purposes, and take it in 
a somewhat different direction. We may not be a large user base which will be 
economically significant to a company like Autodesk, but this (fine arts) 
aspect of 3D work needs to be able to exist. And that is becoming increasingly 
doubtful, with the big sharks gobbling up our accessible software package and 
leaving us behind with little chance to develop our work.

Nancy Jacobs
http://www.childofillusion.net/

On Mar 18, 2014, at 2:34 PM, Paul Griswold 
pgrisw...@fusiondigitalproductions.com wrote:

 Thanks Maurice,
 
 So the information I have today is - most of my work is done with Softimage 
 and there is 0% chance it will be continued.
 
 Autodesk has a 99% failure rate internally with creating innovative products. 
 (your words)
 
 Autodesk wants me to move to Maya, an old, outdated package that cannot do 
 what I need now, requires significant work (scripts, plugins, etc.) to make 
 usable, is not conducive to small shops or freelancers, and there is no 
 promise that it will ever be able to do what Softimage can do right now.  
 Making that move not only moves me back to the junior level, but reduces my 
 pay, lowers the quality of my work, and significantly hampers my ability to 
 compete.
 
 Bifrost is being developed at a company with a 99% failure rate with creating 
 innovative products.  Bifrost is not an ICE replacement and may never be one.
 
 And, apparently in this industry you should not have all your eggs in one 
 basket.  Unfortunately Autodesk bought the goose laying the golden eggs

Re: Environment sphere issues

2013-08-02 Thread Nancy Jacobs
I have an old copy of HDR Shop v1 on my computer, I'm sure it's the same as 
your link... the one you linked to, my Norton antivirus, horrified, deleted 
immediately! ;) 

I do remember using this in ancient times, must've been when image files were 
smaller, but this one crashed it. And I do need a high res image because this 
is the background for my project. My HDR lighting image can live with a little 
polar distortion, and of course it's much smaller.

Which brings me to another question -- doesn't all that dynamic range 
conversion, internally to HDR shop, degrade or change the low dynamic range 
image? Moot of course if it crashes, but it does have the conversion I need. 
Dang it. I can't find anything else that does.

Thanks,
Nancy
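
(On the degradation question: promoting an 8-bit image to floating point and converting back with the same transfer curve is, in principle, lossless, since float carries all 256 levels through the round trip with room to spare. Whether HDR Shop does exactly this internally is an assumption; the numpy check below is just the general argument, with an assumed display gamma of 2.2.)

    import numpy as np

    ldr = np.arange(256, dtype=np.uint8)                     # every possible 8-bit level

    linear = (ldr.astype(np.float32) / 255.0) ** 2.2          # promote to linear float
    back = np.clip(linear ** (1.0 / 2.2) * 255.0, 0, 255)     # convert back to display range
    restored = np.rint(back).astype(np.uint8)

    print(np.array_equal(ldr, restored))                      # True: the round trip is exact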

On Aug 1, 2013, at 5:34 PM, Stephen Davidson magic...@bellsouth.net wrote:

 perhaps you missed one of my earlier postings...
 
 Here is a free download (pc application)
 of a tool (HDRshop version 1) that can convert between the different
 environment map formats.
 http://ict.debevec.org/~debevec/HDRShop/download/
 
 
 here is documentation for all versions.
 http://gl.ict.usc.edu/HDRShop/documentation/HDRShop_v3_man.pdf
 
 Only version 1 is free, but that is all you need for format conversion.
 
 
 On Thu, Aug 1, 2013 at 3:17 PM, Nancy Jacobs illus...@mip.net wrote:
 Thanks to both Nicholas and Stephen again, that explains a lot more and 
 sounds like a great idea So you can only use this Pano2VR for the 
 transform back and forth? I visited their website -- they have a watermark 
 on the free version. Apparently it costs $93 -- that's pretty steep for my 
 uses, considering I don't need all their other functionality. Doesn't 
 photoshop or some other tool do this conversion? I just signed on to Adobe 
 Creative Cloud...they ought to have something in all that software that 
 would do this, you'd think?
 
 On Aug 1, 2013, at 2:57 PM, Stephen Davidson magic...@bellsouth.net wrote:
 
 I have used both sphere and cross (or cube) mapping for reflections.
 Both work fine, and have advantages and disadvantages, depending on the 
 specific situation.
 The fact that an environment is a cube is not an issue.
 It is simply a different way to map the environment.
 The fact that it is a cube is not apparent in the resulting
 rendered image. I understand your concern, but it
 looks just fine. It is just easier to paint out the polar pinches
 in this format. Nicholas is correct in that you can just
 change the format of the environment map and
 you lose nothing. 
 
 make both an equirectangular and a cube format environment map
 and choose what works best for you. I think you will see there is no
 difference, except when painting out the pole pinches.
 
 
 On Tue, Jul 30, 2013 at 5:15 PM, Nancy Jacobs illus...@mip.net wrote:
 Thanks, Stephen and Nicholas for the information on cubical projection. 
 Frankly, I'm partial to spheres... I've always found them better as 
 background environments -- cubes never seem right, the edges tend to be 
 apparent. especially because this is a scene in a 360 space and i don't 
 want to have to avoid the camera looking at the edges of the cube. But I 
 also don't want to have to avoid the poles of a sphere. But I've never 
 tried the cubical projection in Softimage, is it better somehow? You're 
 right, Nicholas, it would be easier to paint out the distortion in PS. But 
 I don't want to do all that work on creating a cubical projection and have 
 it not read well in the render.
 
 Have you used it effectively when you need 360 degree correctness?
 
 Thanks!
 
 On Jul 29, 2013, at 4:39 PM, Stephen Davidson magic...@bellsouth.net 
 wrote:
 
 Exactly. Then use the cross version (Pano2VR creates a horizontal cross)
 setting Softimage's environmental mapping to horizontal cross.
 Is this not working for you, now?
 
 
 On Mon, Jul 29, 2013 at 2:54 PM, Nicholas Breslow 
 n...@nicholasbreslow.com wrote:
 The basic workflow I’ve used for this in the past is to convert the 
 equirectangular panorama to a cubical projection. Then you can paint out 
 the nadir (poles) on the top/bottom of the cube in PS/other to get rid 
 of the distortion. You can use Pano2vr 
 http://gardengnomesoftware.com/pano2vr.php for the conversion. Afterward, 
 convert it back to equirectangular. Very similar to the Polar method 
 mentioned before.
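
(The conversion described here is a change of projection: for each pixel of a cube face, build the 3D view direction, turn it into longitude/latitude, and sample the equirectangular image there. A small numpy/Pillow sketch of just the zenith face -- the pole you would repaint -- using one common axis convention and nearest-neighbour sampling to keep it short; Pano2VR and the Photoshop actions do the same thing with proper filtering.)

    import numpy as np
    from PIL import Image

    def zenith_face(equirect, face_size=1024):
        """Render the top (+Y) cube face from an equirectangular panorama (H x W x 3)."""
        h, w = equirect.shape[:2]
        # Pixel grid on the face, mapped to [-1, 1].
        u, v = np.meshgrid(np.linspace(-1, 1, face_size), np.linspace(-1, 1, face_size))
        x, y, z = u, np.ones_like(u), -v                # directions through the top face
        lon = np.arctan2(x, z)                          # longitude, -pi..pi
        lat = np.arctan2(y, np.hypot(x, z))             # latitude; pi/2 at the face centre (the pole)
        # Longitude/latitude to equirectangular pixel coordinates, nearest neighbour.
        px = ((lon / (2 * np.pi) + 0.5) * (w - 1)).astype(int)
        py = ((0.5 - lat / np.pi) * (h - 1)).astype(int)
        return equirect[py, px]

    pano = np.asarray(Image.open("starmap_equirect.jpg"))   # assumed input file
    Image.fromarray(zenith_face(pano)).save("zenith_face.png")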
 
  
 
 Hope that is what you were going for – just glanced and thought I would 
 share this.
 
  
 
 Nicholas Breslow
 
  
 
  
 
 From: softimage-boun...@listproc.autodesk.com 
 [mailto:softimage-boun...@listproc.autodesk.com] On Behalf Of Nancy 
 Jacobs
 Sent: Sunday, July 28, 2013 6:25 PM
 To: softimage@listproc.autodesk.com
 Subject: Re: Environment sphere issues
 
  
 
 Thanks for this info, Stephen, but I really need the spherical 
 environment for a seamless space experience. 
 
  
 
 Now that I've got the implicit projection working, it does a better job 
 rendering the image at the poles, but still not good enough. Guess

Re: Environment sphere issues

2013-08-02 Thread Nancy Jacobs
Nick, I checked this out and loaded the actions into PS, but it only does 
equirectangular to angular fisheye and back. And lots of cube to cross 
combinations. But nothing that would get you from a panorama to a cross. Unless 
I missed something?

Thanks,
Nancy

On Aug 1, 2013, at 3:26 PM, Nicholas Breslow n...@nicholasbreslow.com wrote:

 Hi Nancy,
  
 Check out Andrew Hazelden’s Blog here: 
 http://www.andrewhazelden.com/blog/2012/11/domemaster-photoshop-actions-pack/
  
 The Domemaster Photoshop Actions Pack should do what you need. His site is 
 interesting – worth a look through.
  
 PS – Disclaimer: I’ve only used Pano2VR as a license was purchased for me but 
 the actions ~should~ work nicely. Let me know if they don’t.
  
 -Nick
  
 From: softimage-boun...@listproc.autodesk.com 
 [mailto:softimage-boun...@listproc.autodesk.com] On Behalf Of Nancy Jacobs
 Sent: Thursday, August 1, 2013 3:17 PM
 To: softimage@listproc.autodesk.com
 Subject: Re: Environment sphere issues
  
 Thanks to both Nicholas and Stephen again, that explains a lot more and 
 sounds like a great idea So you can only use this Pano2VR for the 
 transform back and forth? I visited their website -- they have a watermark on 
 the free version. Apparently it costs $93 -- that's pretty steep for my uses, 
 considering I don't need all their other functionality. Doesn't photoshop or 
 some other tool do this conversion? I just signed on to Adobe Creative 
 Cloud...they ought to have something in all that software that would do this, 
 you'd think?
 
 On Aug 1, 2013, at 2:57 PM, Stephen Davidson magic...@bellsouth.net wrote:
 
 I have used both sphere and cross (or cube) mapping for reflections.
 Both work fine, and have advantages and disadvantages, depending on the 
 specific situation.
 The fact that an environment is a cube is not an issue.
 It is simply a different way to map the environment.
 The fact that it is a cube is not apparent in the resulting
 rendered image. I understand your concern, but it
 looks just fine. It is just easier to paint out the polar pinches
 in this format. Nicholas is correct in that you can just
 change the format of the environment map and
 you lose nothing. 
  
 make both an equirectangular and a cube format environment map
 and choose what works best for you. I think you will see there is no
 difference, except when painting out the pole pinches.
  
 
 On Tue, Jul 30, 2013 at 5:15 PM, Nancy Jacobs illus...@mip.net wrote:
 Thanks, Stephen and Nicholas for the information on cubical projection. 
 Frankly, I'm partial to spheres... I've always found them better as 
 background environments -- cubes never seem right, the edges tend to be 
 apparent. especially because this is a scene in a 360 space and i don't want 
 to have to avoid the camera looking at the edges of the cube. But I also 
 don't want to have to avoid the poles of a sphere. But I've never tried the 
 cubical projection in Softimage, is it better somehow? You're right, 
 Nicholas, it would be easier to paint out the distortion in PS. But I don't 
 want to do all that work on creating a cubical projection and have it not 
 read well in the render.
  
 Have you used it effectively when you need 360 degree correctness?
  
 Thanks!
 
 On Jul 29, 2013, at 4:39 PM, Stephen Davidson magic...@bellsouth.net wrote:
 
 Exactly. Then use the cross version (Pano2VR creates a horizontal cross)
 setting Softimage's environmental mapping to horizontal cross.
 Is this not working for you, now?
  
 
 On Mon, Jul 29, 2013 at 2:54 PM, Nicholas Breslow n...@nicholasbreslow.com 
 wrote:
 The basic workflow I’ve used for this in the past is to convert the 
 equirectangular panorama to a cubical projection. Then you can paint out the 
 nadir (poles) on the top/bottom of the cube in PS/other to get rid of the 
 distortion. You can use Pano2vr http://gardengnomesoftware.com/pano2vr.php 
 for the conversion. Afterward, convert it back to equirectangular. Very similar 
 to the Polar method mentioned before.
  
 Hope that is what you were going for – just glanced and thought I would share 
 this.
  
 Nicholas Breslow
  
  
 From: softimage-boun...@listproc.autodesk.com 
 [mailto:softimage-boun...@listproc.autodesk.com] On Behalf Of Nancy Jacobs
 Sent: Sunday, July 28, 2013 6:25 PM
 To: softimage@listproc.autodesk.com
 Subject: Re: Environment sphere issues
  
 Thanks for this info, Stephen, but I really need the spherical environment 
 for a seamless space experience. 
  
 Now that I've got the implicit projection working, it does a better job 
 rendering the image at the poles, but still not good enough. Guess I'll have 
 to drag a sphere into Mari and try painting out the distortion. That plugin 
 you linked me to gives some cool vortex effects at the poles, maybe I'll find 
 a use for that! But I still wonder why it's working for your images and not 
 mine. Maybe it's in the type of image and what is happening visually near the 
 bottom and top of the image

Re: something to cheer you guys up a bit :)

2013-08-02 Thread Nancy Jacobs
You guys just crack me up...%^D

On Aug 1, 2013, at 5:22 PM, Paul Doyle technove...@gmail.com wrote:

 this is why we can't have nice things.
 
 
 On 1 August 2013 17:12, Alan Fregtman alan.fregt...@gmail.com wrote:
 All right stop.
 Fabricate and listen!
 Splice is back with a brand new integration
 Something grabs a hold of me tightly
 Flow like a harpoon daily and nightly
 Will it ever stop?
 Yo, I don't know
 Turn off the lights, and I'll show
 To the extreme I rock an app like a vandal
 Light up Stage and wax a chump like a candle
 
 Dance
 Bum rush the speaker that booms
 I'm killin' your brain like a poisonous mushroom
 Steadily, as I code a dope node
 Anything less than the best doesn't bode
 Love it or leave it
 You better gain way
 You better hit bull's eye
 The kid don't play
 If there was a problem
 Yo, I'll solve it
 Check out the hook while this TD resolves it.
 
 Splice splice baby...
 ...Splice, splice, baby...
 
 
 
 ツ
 
 
 
 On Thu, Aug 1, 2013 at 4:35 PM, Eric Thivierge ethivie...@hybride.com 
 wrote:
 Collaborate, Fabric8, and listen?
 
 
 
 Eric Thivierge
 ===
 Character TD / RnD
 Hybride Technologies
 
 
 On August-01-13 4:25:44 PM, Paul Doyle wrote:
 just stop.
 
 
 On 1 August 2013 16:08, Eric Thivierge ethivie...@hybride.com
 mailto:ethivie...@hybride.com wrote:
 
 Fabric8?
 
 
 Eric Thivierge
 ===
 Character TD / RnD
 Hybride Technologies
 
 
 On August-01-13 4:04:07 PM, Paul Doyle wrote:
 
 Fabricate is not going to be a thing :)
 


Re: Environment sphere issues

2013-08-02 Thread Nancy Jacobs
Yes, me too, same specs. But this image is 9k on the long side (not HDRI 
though) and I think HDR Shop finally met its match with this one.

BUT I found this handy plugin for Photoshop, it's even 64-bit: Flexify 2 by 
Flaming Pear. Very nifty thing, it will output just 'zenith and nadir' files 
for an equirectangular image, so you can make distortion corrections, and then 
it pulls the corrected file back into an equirectangular state. It does this all 
in the correct size for the original image, so you don't have to figure THAT 
one out, ready to composite with the original image.

It worked surprisingly well with just my bleary-eyed experimenting in the wee 
hours. I woke up to a very renderable equirectangular sphere map. Yay. At $50, 
and all the projections it does (Though a lot of them are just amusing, for 
printing out and making constructions it seems), this plugin is worth a 
purchase.

BTW, the link you sent seems like the old, old link from Debevec for HDR Shop. 
Maybe Norton destroyed it because it was too old, and therefore suspect (like 
some of us ;-)) It didn't report that it found a specific virus.

On Aug 2, 2013, at 8:28 AM, Stephen Davidson magic...@bellsouth.net wrote:

 I'm running HDRshop ver 1.0 on Windows 7 64-bit with no issues.
 I don't see any degrading and I've used it with some 4K HDR files
 with no issues at all.
 
 No virus warnings, either. I'm using Panda Cloud for antivirus.
 Maybe I should do a virus scan, as I just downloaded it to test the
 link.
 
 
 On Fri, Aug 2, 2013 at 2:43 AM, Nancy Jacobs illus...@mip.net wrote:
 I have an old copy of HDR shop v1 on my computer, I'm sure it's the same as 
 your linkthe one you linked to, my Norton antivirus, horrified, deleted 
 immediately! ;) 
 
 I do remember using this in ancient times, must'vee been when image files 
 were smaller, but this one crashed it. And I do need a high res image 
 because this is the background for my project. My HDR lighting image can 
 live with a little polar distortion, and of course it's much smaller.
 
 Which brings me to another question -- doesn't all that dynamic range 
 conversion, internally to HDR shop, degrade or change the low dynamic range 
 image? Moot of course if it crashes, but it does have the conversion I need. 
 Dang it. I can't find anything else that does.
 
 Thanks,
 Nancy
 
 On Aug 1, 2013, at 5:34 PM, Stephen Davidson magic...@bellsouth.net wrote:
 
 perhaps you missed one of my earlier postings...
 
 Here is a free download (pc application)
 of a tool (HDRshop version 1) that can convert between the different
 environment map formats.
 http://ict.debevec.org/~debevec/HDRShop/download/
 
 
 here is documentation for all versions.
 http://gl.ict.usc.edu/HDRShop/documentation/HDRShop_v3_man.pdf
 
 Only version 1 is free, but that is all you need for format conversion.
 
 
 On Thu, Aug 1, 2013 at 3:17 PM, Nancy Jacobs illus...@mip.net wrote:
 Thanks to both Nicholas and Stephen again, that explains a lot more and 
 sounds like a great idea So you can only use this Pano2VR for the 
 transform back and forth? I visited their website -- they have a watermark 
 on the free version. Apparently it costs $93 -- that's pretty steep for my 
 uses, considering I don't need all their other functionality. Doesn't 
 photoshop or some other tool do this conversion? I just signed on to Adobe 
 Creative Cloud...they ought to have something in all that software that 
 would do this, you'd think?
 
 On Aug 1, 2013, at 2:57 PM, Stephen Davidson magic...@bellsouth.net 
 wrote:
 
 I have used both sphere and cross (or cube) mapping for reflections.
 Both work fine, and have advantages and disadvantages, depending on the 
 specific situation.
 The fact that an environment is a cube is not an issue.
 It is simply a different way to map the environment.
 The fact that it is a cube is not apparent in the resulting
 rendered image. I understand your concern, but it
 looks just fine. It is just easier to paint out the polar pinches
 in this format. Nicholas is correct in that you can just
 change the format of the environment map and
 you lose nothing. 
 
 make both an equirectangular and a cube format environment map
 and choose what works best for you. I think you will see there is no
 difference, except when painting out the pole pinches.
 
 
 On Tue, Jul 30, 2013 at 5:15 PM, Nancy Jacobs illus...@mip.net wrote:
 Thanks, Stephen and Nicholas for the information on cubical projection. 
 Frankly, I'm partial to spheres... I've always found them better as 
 background environments -- cubes never seem right, the edges tend to be 
 apparent. especially because this is a scene in a 360 space and i don't 
 want to have to avoid the camera looking at the edges of the cube. But I 
 also don't want to have to avoid the poles of a sphere. But I've never 
 tried the cubical projection in Softimage, is it better somehow? You're 
 right, Nicholas, it would be easier to paint out the distortion in PS

Re: Environment alignment mapping question

2013-08-02 Thread Nancy Jacobs
Yeah, I tested out the 'transformation' rotations in the PPG for the 
environment node. I went with the values Soft changes them to when I put in the 
correct rotations (my env. sphere rotation). I checked the reflection on a test 
object against the result when ray traced (which reflects my env. sphere). The 
reflection is in a different position than the raytrace. So, the rotation in 
the env. node is not in the same universe as my env. sphere rotations.

How do they relate to each other? Does anyone know?

Thanks,
Nancy
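
(One likely reason the numbers don't line up -- speaking about environment lookups in general, not mental ray's exact convention: rotating the sphere geometry by some angles is equivalent to rotating the environment lookup by the inverse of those angles, so typing the sphere's rotation straight into the shader's transform usually spins the map the wrong way. A tiny numpy check of that relationship, using a single Y rotation.)

    import numpy as np

    def rot_y(degrees):
        """Rotation matrix about the world Y axis."""
        a = np.radians(degrees)
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

    view_dir = np.array([0.0, 0.0, 1.0])       # some reflection/lookup direction

    # Rotating the sphere geometry by +30 degrees means a given direction lands on
    # the texel found at R^-1 * direction in the sphere's own (unrotated) frame...
    on_rotated_sphere = rot_y(30.0).T @ view_dir

    # ...which is the same as leaving the geometry alone and rotating the lookup
    # direction by -30 degrees before sampling the map.
    rotated_lookup = rot_y(-30.0) @ view_dir

    print(np.allclose(on_rotated_sphere, rotated_lookup))   # True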

On Aug 2, 2013, at 8:28 PM, Nancy Jacobs illus...@mip.net wrote:

 Hey. Is there a way to somehow connect my actual mapped environment sphere to 
 the environment node on a material shader, so that the node will read the 
 rotation of the image?
 
 I'm trying to get some cheap reflections from my environment. I know to put 
 an environment node on the clusters where I want reflections, and switch the 
 reflection mode to environment. The problem is that I need to align the image 
 rotation there with the rotation of the environment sphere I've been working 
 on. 
 
 The transformation parameters in the environment node (rotation) don't seem 
 to hold the values I type in. So I don't quite trust them. Using the values 
 they change to doesn't look correct. I have a null rotating my environment 
 spheres from world 0,0,0. But the env. sphere will be rotating in the scene 
 animation (relativity!) and so I will need to animate any parameter which 
 maps the environment to match this rotation. 
 
 I'd like to find some way to be sure the environment node (or pass shader if 
 i go that way) matches the environment sphere I have set up, and follows that 
 rotation.
 
 Surely you've all run into this problem before, as the alternative seems to 
 be ray tracing the reflections, which slows the render to a crawl.
 
 Thanks,
 Nancy



Re: Environment sphere issues

2013-08-01 Thread Nancy Jacobs
Thanks to both Nicholas and Stephen again, that explains a lot more and sounds 
like a great idea So you can only use this Pano2VR for the transform back 
and forth? I visited their website -- they have a watermark on the free 
version. Apparently it costs $93 -- that's pretty steep for my uses, 
considering I don't need all their other functionality. Doesn't photoshop or 
some other tool do this conversion? I just signed on to Adobe Creative 
Cloud...they ought to have something in all that software that would do this, 
you'd think?

On Aug 1, 2013, at 2:57 PM, Stephen Davidson magic...@bellsouth.net wrote:

 I have used both sphere and cross (or cube) mapping for reflections.
 Both work fine, and have advantages and disadvantages, depending on the 
 specific situation.
 The fact that an environment is a cube is not an issue.
 It is simply a different way to map the environment.
 The fact that it is a cube is not apparent in the resulting
 rendered image. I understand your concern, but it
 looks just fine. It is just easier to paint out the polar pinches
 in this format. Nicholas is correct in that you can just
 change the format of the environment map and
 you lose nothing. 
 
 make both an equirectangular and a cube format environment map
 and choose what works best for you. I think you will see there is no
 difference, except when painting out the pole pinches.
 
 
 On Tue, Jul 30, 2013 at 5:15 PM, Nancy Jacobs illus...@mip.net wrote:
 Thanks, Stephen and Nicholas for the information on cubical projection. 
 Frankly, I'm partial to spheres... I've always found them better as 
 background environments -- cubes never seem right, the edges tend to be 
 apparent. especially because this is a scene in a 360 space and i don't want 
 to have to avoid the camera looking at the edges of the cube. But I also 
 don't want to have to avoid the poles of a sphere. But I've never tried the 
 cubical projection in Softimage, is it better somehow? You're right, 
 Nicholas, it would be easier to paint out the distortion in PS. But I don't 
 want to do all that work on creating a cubical projection and have it not 
 read well in the render.
 
 Have you used it effectively when you need 360 degree correctness?
 
 Thanks!
 
 On Jul 29, 2013, at 4:39 PM, Stephen Davidson magic...@bellsouth.net wrote:
 
 Exactly. Then use the cross version (Pano2VR creates a horizontal cross)
 setting Softimage's environmental mapping to horizontal cross.
 Is this not working for you, now?
 
 
 On Mon, Jul 29, 2013 at 2:54 PM, Nicholas Breslow 
 n...@nicholasbreslow.com wrote:
 The basic workflow I’ve used for this in the past is to convert the 
 equirectangular panorama to a cubical projection. Then you can paint out 
 the nadir (poles) on the top/bottom of the cube in PS/other to get rid of 
 the distortion. You can use Pano2vr 
 http://gardengnomesoftware.com/pano2vr.php for the conversion. Afterward, 
 convert it back to equirectangular. Very similar to the Polar method 
 mentioned before.
 
  
 
 Hope that is what you were going for – just glanced and thought I would 
 share this.
 
  
 
 Nicholas Breslow
 
  
 
  
 
 From: softimage-boun...@listproc.autodesk.com 
 [mailto:softimage-boun...@listproc.autodesk.com] On Behalf Of Nancy Jacobs
 Sent: Sunday, July 28, 2013 6:25 PM
 To: softimage@listproc.autodesk.com
 Subject: Re: Environment sphere issues
 
  
 
 Thanks for this info, Stephen, but I really need the spherical environment 
 for a seamless space experience. 
 
  
 
 Now that I've got the implicit projection working, it does a better job 
 rendering the image at the poles, but still not good enough. Guess I'll 
 have to drag a sphere into Mari and try painting out the distortion. That plugin 
 you linked me to gives some cool vortex effects at the poles, maybe I'll find 
 ill find a use for that! But I still wonder why it's working for your 
 images and not mine. Maybe it's in the type of image and what is happening 
 visually near the bottom and top of the image.
 
  
 
 
 On Jul 28, 2013, at 1:19 AM, Stephen Davidson magic...@bellsouth.net 
 wrote:
 
 Here is a nice article on creating cubic environment maps from stitched 
 panoramic photos, using Blender.
 
 very clever:
 
 http://www.aerotwist.com/tutorials/create-your-own-environment-maps/
 
  
 
 On Sat, Jul 27, 2013 at 9:42 PM, Nancy Jacobs illus...@mip.net wrote:
 
 Stephen, this plugin really didn't work for me. It way overdid some kind 
 of smearing, spiraling algorithm. Looks a lot worse than the original. I 
 wonder what he's thinking, or what went wrong here... Any ideas?
 
  
 
 Thanks for the link, however. I was really stoked when I thought it was 
 going to solve this problem. Maybe something in Softimage mapping is 
 trying to solve this and doesn't quite do it, so this plugin 
 overcompensates?
 
  
 
 I still think implicit mapping would help, as the help files indicate, if 
 I could get any image to show up on the sphere.
 
  
 
 Thanks again,
 
 Nancy

Re: Environment sphere issues

2013-07-30 Thread Nancy Jacobs
Thanks, Stephen and Nicholas for the information on cubical projection. 
Frankly, I'm partial to spheres... I've always found them better as background 
environments -- cubes never seem right, the edges tend to be apparent, 
especially because this is a scene in a 360-degree space and I don't want to have to 
avoid the camera looking at the edges of the cube. But I also don't want to 
have to avoid the poles of a sphere. I've never tried the cubical 
projection in Softimage, though -- is it better somehow? You're right, Nicholas, it would 
be easier to paint out the distortion in PS. But I don't want to do all that 
work on creating a cubical projection and have it not read well in the render.

Have you used it effectively when you need 360 degree correctness?

Thanks!

On Jul 29, 2013, at 4:39 PM, Stephen Davidson magic...@bellsouth.net wrote:

 Exactly. Then use the cross version (Pano2VR creates a horizontal cross)
 setting Softimage's environmental mapping to horizontal cross.
 Is this not working for you, now?
 
 
 On Mon, Jul 29, 2013 at 2:54 PM, Nicholas Breslow n...@nicholasbreslow.com 
 wrote:
 The basic workflow I’ve used for this in the past is to convert the 
 equirectangular panorama to a cubical projection. Then you can paint out the 
 nadir (poles) on the top/bottom of the cube in PS/other to get rid of the 
 distortion. You can use Pano2vr http://gardengnomesoftware.com/pano2vr.php 
 for the conversion. Afterwards, convert it back to equirectangular. Very similar 
 to the Polar method mentioned before.
 
  
 
 Hope that is what you were going for – just glanced and thought I would 
 share this.
 
  
 
 Nicholas Breslow
 
  
 
  
 
 From: softimage-boun...@listproc.autodesk.com 
 [mailto:softimage-boun...@listproc.autodesk.com] On Behalf Of Nancy Jacobs
 Sent: Sunday, July 28, 2013 6:25 PM
 To: softimage@listproc.autodesk.com
 Subject: Re: Environment sphere issues
 
  
 
 Thanks for this info, Stephen, but I really need the spherical environment 
 for a seamless space experience. 
 
  
 
 Now that I've got the implicit projection working, it does a better job 
 rendering the image at the poles, but still not good enough. Guess I'll have 
 to drag a sphere into Mari and try painting out the distortion. That plugin 
 you linked me to gives some cool vortex effects at the poles, maybe I'll find 
 a use for that! But I still wonder why it's working for your images and not 
 mine. Maybe it's in the type of image and what is happening visually near 
 the bottom and top of the image.
 
  
 
 
 On Jul 28, 2013, at 1:19 AM, Stephen Davidson magic...@bellsouth.net wrote:
 
 Here is a nice article on creating cubic environment maps from stitched 
 panoramic photos, using Blender.
 
 very clever:
 
 http://www.aerotwist.com/tutorials/create-your-own-environment-maps/
 
  
 
 On Sat, Jul 27, 2013 at 9:42 PM, Nancy Jacobs illus...@mip.net wrote:
 
 Stephen, this plugin really didn't work for me. It way overdid some kind of 
 smearing, spiraling algorithm. Looks a lot worse than the original. I wonder 
 what he's thinking, or what went wrong here... Any ideas?
 
  
 
 Thanks for the link, however. I was really stoked when I thought it was 
 going to solve this problem. Maybe something in Softimage mapping is trying 
 to solve this and doesn't quite do it, so this plugin overcompensates?
 
  
 
 I still think implicit mapping would help, as the help files indicate, if I 
 could get any image to show up on the sphere.
 
  
 
 Thanks again,
 
 Nancy
 
 
 On Jul 27, 2013, at 8:18 PM, Stephen Davidson magic...@bellsouth.net wrote:
 
 If you have Photoshop, here is a link to something called spherical mapping 
 corrector:
 
 http://www.richardrosenman.com/software/downloads/
 
  
 
 No 64 bit support, I believe.
 
  
 
 here is the install and use docs:
 
 Spherical Mapping Corrector - v1.4, © 2008 Richard Rosenman Advertising & 
 Design. Release date: 03/15/03, Updated 09/28/08.
 
  
 
  
 
 INSTALLATION:
 
  
 
 Simply unzip spheremap.zip and copy spheremap.8bf to your 
 \Photoshop\Plug-Ins\ folder, or whichever plugin folder your host program 
 uses. Load your program, open an image, go to the plugins menu and select 
 the plugin.
 
  
 
  
 
 DESCRIPTION:
 
  
 
 This filter produces texture map correction for spherical mapping.
 
  
 
 When projecting a rectangular texture onto a sphere using traditional 
 spherical mapping coordinates, distortion ('pinching') occurs at the poles 
 where the texture must come to a point. Given the different topology of a 
 plane and a sphere, it is impossible to avoid this, or any kind of 
 distortion. However, by properly distorting the texture map, it is possible 
 to minimize and even compensate for the polar distortion.
 
  
 
 Special thanks to Paul Bourke for allowing his algorithm to be ported to 
 this plugin. For more information, please visit Mr. Bourke's site at 
 http://astronomy.swin.edu.au/~pbourke/.
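
As a rough illustration of what such a correction does (the general idea, not necessarily Mr. Bourke's exact algorithm): each output row at latitude phi can sample the source row compressed by cos(phi) around the horizontal centre, so the content is stretched toward the poles and collapses to a single column at +/-90 degrees, which is exactly what a lat-long mapping needs to avoid a visible pinch. A hedged Python/numpy sketch, assuming a 2:1 source image:

    # Sketch of a polar pre-distortion for lat-long (spherical) mapping.
    # Each output row at latitude phi samples the source row compressed by
    # cos(phi) around the horizontal centre, so content is stretched near the
    # poles and converges to one column at phi = +/-90 degrees.
    # Illustration of the idea only, not the plugin's actual algorithm.
    import numpy as np
    from PIL import Image

    def spherical_correct(img):
        src = np.asarray(img)
        h, w = src.shape[:2]
        out = np.empty_like(src)
        cx = (w - 1) / 2.0
        for y in range(h):
            lat = (0.5 - (y + 0.5) / h) * np.pi      # +pi/2 at top, -pi/2 at bottom
            x = np.arange(w)
            x_src = cx + (x - cx) * np.cos(lat)      # compress toward the centre
            out[y] = src[y, np.clip(np.round(x_src).astype(int), 0, w - 1)]
        return Image.fromarray(out)

    # Usage (hypothetical file name):
    # spherical_correct(Image.open("starfield_2to1.jpg")).save("starfield_corrected.jpg")

Pushed to the extreme this kind of resampling produces a swirl/pinch toward the poles, which may be why such a filter looks smeared when applied to an image that is already a proper 2:1 panorama.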
 
  
 
 Sub-Sampling: Specifies what type of pixel sub-sampling to use. (Nearest Neighbor being fastest, Bicubic being best.)

Re: Implicit texture projection not working

2013-07-28 Thread Nancy Jacobs
Thanks, Luca, for letting me know it doesn't show in OGL. It does render -- it 
seems I had my HDRI disabled so I wasn't getting any light.

On Jul 28, 2013, at 9:27 AM, Luca superposit...@gmail.com wrote:

 Strange. 
 What you need to do is:
 
 1 - Create Sphere
 2 - set Sphere or UV projection.
 3 - in the same ppg EDIT the Uv Projection and choose Implicit
 4 - Render. It should work.
 
 
 2013/7/28 Nancy Jacobs illus...@mip.net
 Luca, I didn't freeze the object, but the image texture doesn't render 
 either. I'm using 2014 also.
 
 On Jul 27, 2013, at 11:06 PM, Luca superposit...@gmail.com wrote:
 
 Uh... interesting bug I've just discovered. I remembered that freezing an 
 implicit object used to lose the Implicit property. But it seems it's not like 
 that anymore.
 To prevent this problem, Softimage simply doesn't let you freeze the object.
 The bug (if it is a bug, as I think it is): after trying to freeze the 
 sphere in Implicit mode and setting the sphere back to explicit, it removes 
 the property, but it's impossible to freeze the object anyway, and 
 removing the projection removes the object, too.
 Softimage still thinks the object is in Implicit mode, without showing the 
 implicit UV. Isn't that weird? 
 SI 2014.
 
 
 2013/7/28 Luca superposit...@gmail.com
 If you set implicit projection, it will be visible only in render.
 
 Anyway you can't freeze it or it will lose the implicit property. 
 
 
 2013/7/28 Nancy Jacobs illus...@mip.net
 Hey,
 
 I thought I'd solved my problem with images distorting in spherical 
 mapping, by...what? Reading the manual. But, no
 
 Apparently, creating a 'purely implicit' texture projection is supposed 
 to solve this issue of image distortion at the poles. They even have 
 pictures proving it. However, I can't get any image to map to a sphere 
 using this texture projection method. I also found, in the manual, that 
 one is supposed to use an 'image implicit' node to map the image (they 
 don't tell you that initially, you have to accidentally find it...). 
 However, that doesn't work either. All I get is the dreaded generic color 
 one gets when ones texture projection is not in the same universe, if you 
 know what I mean.
 
 Having followed the manual's instructions, what am I missing here?
 
 Thanks for any,
 Nancy
 
 
 
 -- 
 ...superpositiviii...qualunque cosa accada!...
 
 
 
 -- 
 ...superpositiviii...qualunque cosa accada!...
 
 
 
 -- 
 ...superpositiviii...qualunque cosa accada!...


Re: Environment sphere issues

2013-07-28 Thread Nancy Jacobs
Thanks for this info, Stephen, but I really need the spherical environment for 
a seamless space experience. 

Now that I've got the implicit projection working, it does a better job 
rendering the image at the poles, but still not good enough. Guess I'll have to 
drag a sphere into Mari and try painting out the distortion. That plugin you 
linked me to gives some cool vortex effects at the poles, maybe I'll find a use 
for that! But I still wonder why it's working for your images and not mine. 
Maybe it's in the type of image and what is happening visually near the bottom 
and top of the image.


On Jul 28, 2013, at 1:19 AM, Stephen Davidson magic...@bellsouth.net wrote:

 Here is a nice article on creating cubic environment maps from stitched 
 panoramic photos, using Blender.
 very clever:
 http://www.aerotwist.com/tutorials/create-your-own-environment-maps/
 
 
 On Sat, Jul 27, 2013 at 9:42 PM, Nancy Jacobs illus...@mip.net wrote:
 Stephen, this plugin really didn't work for me. It way overdid some kind of 
 smearing, spiraling algorithm. Looks a lot worse than the original. I wonder 
 what he's thinking, or what went wrong here... Any ideas?
 
 Thanks for the link, however. I was really stoked when I thought it was 
 going to solve this problem. Maybe something in Softimage mapping is trying 
 to solve this and doesn't quite do it, so this plugin overcompensates?
 
 I still think implicit mapping would help, as the help files indicate, if I 
 could get any image to show up on the sphere.
 
 Thanks again,
 Nancy
 
 On Jul 27, 2013, at 8:18 PM, Stephen Davidson magic...@bellsouth.net wrote:
 
 If you have Photoshop, here is a link to something called spherical mapping 
 corrector:
 http://www.richardrosenman.com/software/downloads/
 
 No 64 bit support, I believe.
 
 here is the install and use docs:
 Spherical Mapping Corrector - v1.4, © 2008 Richard Rosenman Advertising & 
 Design. Release date: 03/15/03, Updated 09/28/08.
 
 
 INSTALLATION:
 
 Simply unzip spheremap.zip and copy spheremap.8bf to your 
 \Photoshop\Plug-Ins\ folder, or whichever plugin folder your host program 
 uses. Load your program, open an image, go to the plugins menu and select 
 the plugin.
 
 
 DESCRIPTION:
 
 This filter produces texture map correction for spherical mapping.
 
 When projecting a rectangular texture onto a sphere using traditional 
 spherical mapping coordinates, distortion ('pinching') occurs at the poles 
 where the texture must come to a point. Given the different topology of a 
 plane and a sphere, it is impossible to avoid this, or any kind of 
 distortion. However, by properly distorting the texture map, it is possible 
 to minimize and even compensate for the polar distortion.
 
 Special thanks to Paul Bourke for allowing his algorithm to be ported to 
 this plugin. For more information, please visit Mr. Bourke's site at 
 http://astronomy.swin.edu.au/~pbourke/.
 
 Sub-Sampling: Specifies what type of pixel sub-sampling to use. (Nearest 
 Neighbor being fastest, Bicubic being best.)
 
 
 On Sat, Jul 27, 2013 at 6:46 PM, Nancy Jacobs illus...@mip.net wrote:
 Greetings,
 
 I'm using the old-style environment spheres with an HDR image wrapped to 
 light the scene, but invisible to rendering, and a beauty image visible to 
 the render. The problem is the very visible distortion near the poles of 
 the sphere. I need 360 degree visual acceptability. I am using a 
 background which I've made seamless in both directions, a 2:1 rectangle. 
 It seems this worked in renders at one point years ago in another 
 software. Perhaps even XSI, I don't recall.
 
 I'm also trying to substitute this arrangement by using both an 
 environment (using the HDRI), and 'Spherical Mapping' (using the beauty 
 image), in the Pass Shaders. But I'm getting very strange results, so not 
 sure if this is the way to go. Also, it's difficult to line them up 
 properly so that the light in the HDRI is coming from the same place as 
 the equivalent visible areas in the beauty image -- which of course one 
 can do easily in the wrapped spheres. But in the pass shaders, they don't 
 seem to use the same rotation systems...
 
 Any advice on getting an undistorted, seamless image going here? With 
 proper orientations?
 
 Thanks,
 Nancy
 
 
 
 -- 
 
 Best Regards,
   Stephen P. Davidson 
(954) 552-7956
 sdavid...@3danimationmagic.com
 
 Any sufficiently advanced technology is indistinguishable from magic
 
 
  - Arthur C. Clarke
 
 
 
 
 
 
 -- 
 
 Best Regards,
   Stephen P. Davidson 
(954) 552-7956
 sdavid...@3danimationmagic.com
 
 Any sufficiently advanced technology is indistinguishable from magic
 
  
 - Arthur C. Clarke
 
 


Environment sphere issues

2013-07-27 Thread Nancy Jacobs
Greetings,

I'm using the old-style environment spheres with an HDR image wrapped to light 
the scene, but invisible to rendering, and a beauty image visible to the 
render. The problem is the very visible distortion near the poles of the 
sphere. I need 360 degree visual acceptability. I am using a background which 
I've made seamless in both directions, a 2:1 rectangle. It seems this worked in 
renders at one point years ago in another software. Perhaps even XSI, I don't 
recall.

I'm also trying to substitute this arrangement by using both an environment 
(using the HDRI), and 'Spherical Mapping' (using the beauty image), in the Pass 
Shaders. But I'm getting very strange results, so not sure if this is the way 
to go. Also, it's difficult to line them up properly so that the light in the 
HDRI is coming from the same place as the equivalent visible areas in the 
beauty image -- which of course one can do easily in the wrapped spheres. But 
in the pass shaders, they don't seem to use the same rotation systems...
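
For the rotation alignment specifically, one low-tech check is to find the same landmark in both maps and turn the difference in horizontal pixel position into a rotation offset for whichever sphere (or shader rotation) you adjust. A small sketch of that arithmetic, assuming both images are standard 2:1 lat-long maps; the sign and axis depend on your mapping convention, so verify on a test render:

    # Sketch: longitude offset between two lat-long panoramas, taken from the
    # horizontal pixel position of the same landmark in each image. Apply the
    # result as an extra rotation on one of the two spheres (sign/axis depend
    # on your mapping convention -- check on a test render).
    def rotation_offset_degrees(x_hdr, width_hdr, x_beauty, width_beauty):
        lon_hdr = x_hdr / width_hdr * 360.0
        lon_beauty = x_beauty / width_beauty * 360.0
        return (lon_beauty - lon_hdr) % 360.0

    # Example (hypothetical numbers): landmark at x=512 in a 4096-wide HDR
    # and at x=10240 in a 16384-wide beauty map:
    # print(rotation_offset_degrees(512, 4096, 10240, 16384))  # -> 180.0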

Any advice on getting an undistorted, seamless image going here? With proper 
orientations?

Thanks,
Nancy


Implicit texture projection not working

2013-07-27 Thread Nancy Jacobs
Hey,

I thought I'd solved my problem with images distorting in spherical mapping, 
by...what? Reading the manual. But, no

Apparently, creating a 'purely implicit' texture projection is supposed to 
solve this issue of image distortion at the poles. They even have pictures 
proving it. However, I can't get any image to map to a sphere using this 
texture projection method. I also found, in the manual, that one is supposed to 
use an 'image implicit' node to map the image (they don't tell you that 
initially, you have to accidentally find it...). However, that doesn't work 
either. All I get is the dreaded generic color one gets when one's texture 
projection is not in the same universe, if you know what I mean.

Having followed the manual's instructions, what am I missing here?

Thanks for any,
Nancy


Re: Environment sphere issues

2013-07-27 Thread Nancy Jacobs
Stephen, this plugin really didn't work for me. It way overdid some kind of 
smearing, spiraling algorithm. Looks a lot worse than the original. I wonder 
what he's thinking, or what went wrong here... Any ideas?

Thanks for the link, however. I was really stoked when I thought it was going 
to solve this problem. Maybe something in Softimage mapping is trying to solve 
this and doesn't quite do it, so this plugin overcompensates?

I still think implicit mapping would help, as the help files indicate, if I 
could get any image to show up on the sphere.

Thanks again,
Nancy

On Jul 27, 2013, at 8:18 PM, Stephen Davidson magic...@bellsouth.net wrote:

 If you have Photoshop, here is a link to something called spherical mapping 
 corrector:
 http://www.richardrosenman.com/software/downloads/
 
 No 64 bit support, I believe.
 
 here is the install and use docs:
 Spherical Mapping Corrector - v1.4, © 2008 Richard Rosenman Advertising & 
 Design. Release date: 03/15/03, Updated 09/28/08.
 
 
 INSTALLATION:
 
 Simply unzip spheremap.zip and copy spheremap.8bf to your 
 \Photoshop\Plug-Ins\ folder, or whichever plugin folder your host program 
 uses. Load your program, open an image, go to the plugins menu and select the 
 plugin.
 
 
 DESCRIPTION:
 
 This filter produces texture map correction for spherical mapping.
 
 When projecting a rectangular texture onto a sphere using traditional 
 spherical mapping coordinates, distortion ('pinching') occurs at the poles 
 where the texture must come to a point. Given the different topology of a 
 plane and a sphere, it is impossible to avoid this, or any kind of 
 distortion. However, by properly distorting the texture map, it is possible 
 to minimize and even compensate for the polar distortion.
 
 Special thanks to Paul Bourke for allowing his algorithm to be ported to this 
 plugin. For more information, please visit Mr. Bourke's site at 
 http://astronomy.swin.edu.au/~pbourke/.
 
 Sub-Sampling: Specifies what type of pixel sub-sampling to use. (Nearest 
 Neighbor being fastest, Bicubic being best.)
 
 
 On Sat, Jul 27, 2013 at 6:46 PM, Nancy Jacobs illus...@mip.net wrote:
 Greetings,
 
 I'm using the old-style environment spheres with an HDR image wrapped to 
 light the scene, but invisible to rendering, and a beauty image visible to 
 the render. The problem is the very visible distortion near the poles of the 
 sphere. I need 360 degree visual acceptability. I am using a background 
 which I've made seamless in both directions, a 2:1 rectangle. It seems this 
 worked in renders at one point years ago in another software. Perhaps even 
 XSI, I don't recall.
 
 I'm also trying to substitute this arrangement by using both an environment 
 (using the HDRI), and 'Spherical Mapping' (using the beauty image), in the 
 Pass Shaders. But I'm getting very strange results, so not sure if this is 
 the way to go. Also, it's difficult to line them up properly so that the 
 light in the HDRI is coming from the same place as the equivalent visible 
 areas in the beauty image -- which of course one can do easily in the 
 wrapped spheres. But in the pass shaders, they don't seem to use the same 
 rotation systems...
 
 Any advice on getting an undistorted, seamless image going here? With proper 
 orientations?
 
 Thanks,
 Nancy
 
 
 
 -- 
 
 Best Regards,
   Stephen P. Davidson 
(954) 552-7956
 sdavid...@3danimationmagic.com
 
 Any sufficiently advanced technology is indistinguishable from magic
 
  
 - Arthur C. Clarke
 
 


Re: Requesting advice about light object

2013-07-06 Thread Nancy Jacobs
Hey, never mind... I think I got what I need with the mia_lightsurface node piped 
into the architectural shader, with considerable tweaking.

Thanks for the ideas!

On Jul 5, 2013, at 7:03 PM, Nancy Jacobs illus...@mip.net wrote:

 I'm wondering about the suggested use of the rayswitch node for this. 
 Matt and Ahmidou in particular suggested it. Do you mean the 
 mip_rayswitch_advanced? And would I use it to split up the shader influences 
 on one sphere instead of making the two spheres as in Matt's suggestion? 
 
 Because in your 'old school' method, Matt, which I have been using to light 
 the scene and create a background, with an HDRI and a higher-res color image 
 (seen), the desired glow effect isn't there. But it could create light for 
 the FG, instead of a point light, which may be desirable.
 
 Any clearer explanations about using the rayswitch node, in this context?
 
 Many thanks,
 Nancy
 
 On Jul 3, 2013, at 9:46 PM, Matt Lind ml...@carbinestudios.com wrote:
 
 Many solutions, but here's an older simple solution:
 
 The object emitting illumination should have secondary rays active, and 
 primary rays inactive.  This allows rays to be cast for Global illumination, 
 final gathering, etc..., but not be directly visible by the camera.  This 
 allows you to use a constant shaded sphere as the emitter.
 
 The object acting as the bulb and visible to the camera should have the 
 opposite settings so it appears in the render, but doesn't block the rays 
 cast by the emitter. 
 
 You may need to do more subtle tweaking of visibility to account for other 
 situations.  For those cases I'll refer you to the 'ray switch' node which 
 can perform the same task at a more granular level.
 
 Matt
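
For reference, the split described above can also be set by script. A minimal sketch for Softimage's Python script editor (where Application is predefined); the visibility parameter names primray/scndray are from memory, and the object names are placeholders, so confirm both on the object's Visibility property in the explorer before relying on this:

    # Run inside Softimage's Python script editor, where Application is predefined.
    # Emitter: casts GI/FG (secondary) rays but is invisible to the camera.
    # Bulb: visible to the camera but does not block the emitter's rays.
    # NOTE: parameter and object names below are assumptions -- verify in your build.
    xsi = Application

    def split_emitter_and_bulb(emitter_name, bulb_name):
        xsi.SetValue(emitter_name + ".visibility.primray", False)   # hide from camera
        xsi.SetValue(emitter_name + ".visibility.scndray", True)    # keep GI/FG rays
        xsi.SetValue(bulb_name + ".visibility.primray", True)       # seen by camera
        xsi.SetValue(bulb_name + ".visibility.scndray", False)      # don't block emitter

    # split_emitter_and_bulb("emitter_sphere", "bulb_sphere")  # hypothetical names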
 
 
 
 -Original Message-
 From: softimage-boun...@listproc.autodesk.com 
 [mailto:softimage-boun...@listproc.autodesk.com] On Behalf Of Nancy Jacobs
 Sent: Wednesday, July 03, 2013 3:02 PM
 To: Softimage Listserve
 Subject: Requesting advice about light object
 
 Hello,
 
 I've been away from XSi for about two years due to illness, but am back 
 working on a project. So, this may be very easy for you guys to answer. I'm 
 feeling noobness here, having forgotten so many things!
 
 I'm trying to make a small sphere look like a light source, illuminating 
 part of the scene, which is only lit by final gathering and a globe HDR 
 object. I'm pretty sure I've done this successfully before, but it's failing 
 me this time...;-)
 
 I can't make the sphere a constant material, because I want it to use many 
 of the attributes of the architectural shader. In other words, I want it to 
 be a reflective ball which is glowing and emitting light. I like the 
 simplified emulated reflections I was getting by using optimizations in the 
 architectural shader (also the rendering time advantage). So, I tried 
 putting a point light inside it, which has somewhat fouled up the nice 
 surface I had going with the architectural shader. And there is no 
 'incandescence' in this shader to make it appear glowing. I don't want to 
 see the actual point light in the render, but I do want the ball itself to 
 be a bit luminous.
 
 I'm also having a lot of trouble with the light attenuation using the 
 exponent. Does anyone have any idea how to measure 'Softimage units' in one's 
 scene? Do they relate to say, the sphere radius I am using? Or is that a 
 whole 'nother thing
 
 Anyone who can give me some ideas or point me to some info about this 
 glowing sphere light thing, much appreciated... Seems so rudimentary...
 
 Thanks,
 Nancy
 
 



This is driving me crazy...

2013-07-06 Thread Nancy Jacobs
Hey, sorry to bother you guys again so soon, but wondering if anyone knows what 
would make portal lights NOT show the image behind them, in the background, 
wrapped in the environment sphere.

This is an old problem I never could figure out, and it's driving me crazy... 

I have a room environment where I want to use portal lights, but when I turn on 
area light > visible in render, the portal light blocks the view outside. If I 
switch that off, I get error messages about how it is supposed to be on. 

A couple years ago Manny sent me a test file which has portal lights working as 
they are supposed to, and I even imported my double-sphere environment into it, 
and the lights work ok with it (they don't block the background image), with 
area light > visible in render switched on. Portal lights with these same 
settings block out the same environment background in my project. It's one of 
those visible hi-res background spheres, plus an HDR sphere visible only to 
secondary rays to light the scene. It works in the test scene, so why not the real 
scene?

I swear I've checked everything. In the test file it seems you can check all 
sorts of visibility options on and off, and nothing matters except keeping the 
'visible in render' parameter in the PORTAL LIGHT/Color section switched off, 
and it still reveals the background, as should be.

Anyway, if anyone has encountered this very frustrating problem, please let me 
know what I could be missing. 

Any ideas? Possibilities?

Thanks,
Nancy


Re: This is driving me crazy...

2013-07-06 Thread Nancy Jacobs
Yeah, both those things = true

But hey, I just figured it out at 5am. Turns out you have to make sure 
that, under Visibility > Rendering, Transparency is ON for the background you want 
to see, and OFF for the HDR image. I guess the portal lights show what they are 
'transparent' to... My portal lights were seeing the blown-out HDR image, and 
not seeing the background. Finally I figured this one out... now I get to go to 
sleep

cheers!

On Jul 6, 2013, at 4:49 AM, Cesar Saez cesa...@gmail.com wrote:

 Hey Nancy,
 Make sure you are using 'Portal Light (mia)' node (there are 2 portal_light 
 nodes) and 'visible in render' (just below 'Tint') is turned off.
 
 Cheers!
 
 
 On Sat, Jul 6, 2013 at 4:17 AM, Nancy Jacobs illus...@mip.net wrote:
 Hey, sorry to bother you guys again so soon, but wondering if anyone knows 
 what would make portal lights NOT show the image behind them, in the 
 background, wrapped in the environment sphere.
 
 This is an old problem I never could figure out, and it's driving me crazy...
 
 I have a room environment where I want to use portal lights, but when I turn 
 on area light > visible in render, the portal light blocks the view outside. 
 If I switch that off, I get error messages about how it is supposed to be on.
 
 A couple years ago Manny sent me a test file which has portal lights working 
 as they are supposed to, and I even imported my double-sphere environment 
 into it, and the lights work ok with it (they don't block the background 
 image), with area light > visible in render switched on. Portal lights with 
 these same settings block out the same environment background in my project. 
 It's one of those visible hi-res background spheres, plus an HDR sphere visible 
 only to secondary rays to light the scene. It works in the test scene, so 
 why not the real scene?
 
 I swear I've checked everything. In the test file it seems you can check all 
 sorts of visibility options on and off, and nothing matters except keeping 
 the 'visible in render' parameter in the PORTAL LIGHT/Color section switched 
 off, and it still reveals the background, as should be.
 
 Anyway, if anyone has encountered this very frustrating problem, please let 
 me know what I could be missing.
 
 Any ideas? Possibilities?
 
 Thanks,
 Nancy
 


Re: Requesting advice about light object

2013-07-05 Thread Nancy Jacobs
I'm wondering about the suggested use of the rayswitch node for this. 
Matt and Ahmidou in particular suggested it. Do you mean the 
mip_rayswitch_advanced? And would I use it to split up the shader influences on 
one sphere instead of making the two spheres as in Matt's suggestion? 

Because in your 'old school' method, Matt, which I have been using to light the 
scene and create a background, with an HDRI and a higher-res color image 
(seen), the desired glow effect isn't there. But it could create light for the 
FG, instead of a point light, which may be desirable.

Any clearer explanations about using the rayswitch node, in this context?

Many thanks,
Nancy

On Jul 3, 2013, at 9:46 PM, Matt Lind ml...@carbinestudios.com wrote:

 Many solutions, but here's an older simple solution:
 
 The object emitting illumination should have secondary rays active, and 
 primary rays inactive.  This allows rays to be cast for Global illumination, 
 final gathering, etc..., but not be directly visible by the camera.  This 
 allows you to use a constant shaded sphere as the emitter.
 
 The object acting as the bulb and visible to the camera should have the 
 opposite settings so it appears in the render, but doesn't block the rays 
 cast by the emitter. 
 
 You may need to do more subtle tweaking of visibility to account for other 
 situations.  For those cases I'll refer you to the 'ray switch' node which 
 can perform the same task at a more granular level.
 
 Matt
 
 
 
 -Original Message-
 From: softimage-boun...@listproc.autodesk.com 
 [mailto:softimage-boun...@listproc.autodesk.com] On Behalf Of Nancy Jacobs
 Sent: Wednesday, July 03, 2013 3:02 PM
 To: Softimage Listserve
 Subject: Requesting advice about light object
 
 Hello,
 
 I've been away from XSi for about two years due to illness, but am back 
 working on a project. So, this may be very easy for you guys to answer. I'm 
 feeling noobness here, having forgotten so many things!
 
 I'm trying to make a small sphere look like a light source, illuminating part 
 of the scene, which is only lit by final gathering and a globe HDR object. 
 I'm pretty sure I've done this successfully before, but it's failing me this 
 time...;-)
 
 I can't make the sphere a constant material, because I want it to use many of 
 the attributes of the architectural shader. In other words, I want it to be a 
 reflective ball which is glowing and emitting light. I like the simplified 
 emulated reflections I was getting by using optimizations in the 
 architectural shader (also the rendering time advantage). So, I tried putting 
 a point light inside it, which has somewhat fouled up the nice surface I had 
 going with the architectural shader. And there is no 'incandescence' in this 
 shader to make it appear glowing. I don't want to see the actual point light 
 in the render, but I do want the ball itself to be a bit luminous.
 
 I'm also having a lot of trouble with the light attenuation using the 
 exponent. Does anyone have any idea how to measure 'Softimage units' in one's 
 scene? Do they relate to say, the sphere radius I am using? Or is that a 
 whole 'nother thing
 
 Anyone who can give me some ideas or point me to some info about this glowing 
 sphere light thing, much appreciated... Seems so rudimentary...
 
 Thanks,
 Nancy
 



Requesting advice about light object

2013-07-03 Thread Nancy Jacobs
Hello,

I've been away from XSi for about two years due to illness, but am back working 
on a project. So, this may be very easy for you guys to answer. I'm feeling 
noobness here, having forgotten so many things!

I'm trying to make a small sphere look like a light source, illuminating part 
of the scene, which is only lit by final gathering and a globe HDR object. I'm 
pretty sure I've done this successfully before, but it's failing me this 
time...;-)

I can't make the sphere a constant material, because I want it to use many of 
the attributes of the architectural shader. In other words, I want it to be a 
reflective ball which is glowing and emitting light. I like the simplified 
emulated reflections I was getting by using optimizations in the architectural 
shader (also the rendering time advantage). So, I tried putting a point light 
inside it, which has somewhat fouled up the nice surface I had going with the 
architectural shader. And there is no 'incandescence' in this shader to make it 
appear glowing. I don't want to see the actual point light in the render, but I 
do want the ball itself to be a bit luminous.

 I'm also having a lot of trouble with the light attenuation using the 
exponent. Does anyone have any idea how to measure 'Softimage units' in one's 
scene? Do they relate to say, the sphere radius I am using? Or is that a whole 
'nother thing
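
On the attenuation question: Softimage units are just the generic scene units everything else is measured in, so a sphere of radius 2 spans 2 units, and with exponent-based falloff intensity drops roughly as 1/distance^exponent (exponent 2 being physical inverse-square). A quick sketch of how fast that drops off, to take some guesswork out of picking an exponent; this is the general formula, not the renderer's exact code:

    # Sketch: inverse-power light falloff, intensity ~ 1 / distance**exponent.
    # Distances are in Softimage units (the same generic units as the sphere's
    # radius). exponent = 2 is physical inverse-square; lower values fall off
    # more gently. General formula only, not the renderer's implementation.
    def falloff(base_intensity, distance, exponent=2.0):
        return base_intensity / max(distance, 1e-6) ** exponent

    for d in (1, 2, 4, 8, 16):
        print(d, round(falloff(1.0, d, 2.0), 4), round(falloff(1.0, d, 1.0), 4))
    # exponent 2: 1.0, 0.25, 0.0625, 0.0156, 0.0039
    # exponent 1: 1.0, 0.5,  0.25,   0.125,  0.0625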

Anyone who can give me some ideas or point me to some info about this glowing 
sphere light thing, much appreciated... Seems so rudimentary...

Thanks,
Nancy


Re: Requesting advice about light object

2013-07-03 Thread Nancy Jacobs
These all sound like good ideas... I did try out the mia light surface shader, 
and got a bit closer to what I want, but i'll definitely check all these ideas 
out next work session. 

Thanks!

On Jul 3, 2013, at 10:42 PM, Rares Halmagean ra...@rarebrush.com wrote:

 Another solution, Nancy, may be to composite a render of the constant-shaded 
 material over your architectural sphere, depending on how comfortable you are 
 with that approach.
 
 Or a simpler solution than the one I suggested earlier comes to mind: use a 
 mix_2_colors node. Plug the architectural shader's out into the mix_2_colors 
 base_color input, plug the lambert or constant's out into the mix_2_colors 
 color 1 input, then plug the mix_2_colors out into the surface input of the 
 material node, making sure to dial in the constant node's contribution via the 
 weight RGBA in the mix layer of the mix_2_colors node. That way you maintain 
 the architectural material's surface properties and mix in the incandescent 
 property of the constant shader. You may have to play around with the 
 architectural shader's diffuse and reflection properties and the mix_2_colors 
 mix-layer properties till you get the look you want. 
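
 If it helps to reason about what the weight is doing, a mix layer in its default mode behaves essentially like a per-channel linear blend between the two inputs; a rough sketch of that math (the general lerp idea, assumed rather than taken from the node's implementation):

    # Sketch of what a weighted "mix" layer does per channel:
    # result = base * (1 - weight) + layer * weight.
    # General lerp idea only, not mix_2_colors' actual source.
    def mix_colors(base, layer, weight):
        return tuple(b * (1.0 - weight) + l * weight for b, l in zip(base, layer))

    # Example: 30% of a constant white "glow" layered over an architectural grey:
    # print(mix_colors((0.4, 0.4, 0.4), (1.0, 1.0, 1.0), 0.3))  # -> (0.58, 0.58, 0.58)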
 
 
 -Rares
 
 
 On 7/3/2013 8:46 PM, Matt Lind wrote:
 Many solutions, but here's an older simple solution:
 
 The object emitting illumination should have secondary rays active, and 
 primary rays inactive.  This allows rays to be cast for Global illumination, 
 final gathering, etc..., but not be directly visible by the camera.  This 
 allows you to use a constant shaded sphere as the emitter.
 
 The object acting as the bulb and visible to the camera should have the 
 opposite settings so it appears in the render, but doesn't block the rays 
 cast by the emitter. 
 
 You may need to do more subtle tweaking of visibility to account for other 
 situations.  For those cases I'll refer you to the 'ray switch' node which 
 can perform the same task at a more granular level.
 
 Matt
 
 
 
 -Original Message-
 From: softimage-boun...@listproc.autodesk.com 
 [mailto:softimage-boun...@listproc.autodesk.com] On Behalf Of Nancy Jacobs
 Sent: Wednesday, July 03, 2013 3:02 PM
 To: Softimage Listserve
 Subject: Requesting advice about light object
 
 Hello,
 
 I've been away from XSi for about two years due to illness, but am back 
 working on a project. So, this may be very easy for you guys to answer. I'm 
 feeling noobness here, having forgotten so many things!
 
 I'm trying to make a small sphere look like a light source, illuminating 
 part of the scene, which is only lit by final gathering and a globe HDR 
 object. I'm pretty sure I've done this successfully before, but it's failing 
 me this time...;-)
 
 I can't make the sphere a constant material, because I want it to use many 
 of the attributes of the architectural shader. In other words, I want it to 
 be a reflective ball which is glowing and emitting light. I like the 
 simplified emulated reflections I was getting by using optimizations in the 
 architectural shader (also the rendering time advantage). So, I tried 
 putting a point light inside it, which has somewhat fouled up the nice 
 surface I had going with the architectural shader. And there is no 
 'incandescence' in this shader to make it appear glowing. I don't want to 
 see the actual point light in the render, but I do want the ball itself to 
 be a bit luminous.
 
  I'm also having a lot of trouble with the light attenuation using the 
 exponent. Does anyone have any idea how to measure 'Softimage units' in one's 
 scene? Do they relate to say, the sphere radius I am using? Or is that a 
 whole 'nother thing
 
 Anyone who can give me some ideas or point me to some info about this 
 glowing sphere light thing, much appreciated... Seems so rudimentary...
 
 Thanks,
 Nancy
 
 
 -- 
 Rares Halmagean
 ___
 visual development and 3d character  content creation. 
 rarebrush.com


Re: Requesting advice about light object

2013-07-03 Thread Nancy Jacobs
Wow, here's a duh moment... as that's exactly what I'm doing with the 2 
environment spheres, with the visible image and the invisible HDR image! So, 
I'll try it on a smaller scale for this as well. Not familiar with the ray 
switch node, though; I will have to look into that. Thank you, Matt. With this I 
wouldn't need the point light either, as I'm using FG anyway.

On Jul 3, 2013, at 9:46 PM, Matt Lind ml...@carbinestudios.com wrote:

 Many solutions, but here's an older simple solution:
 
 The object emitting illumination should have secondary rays active, and 
 primary rays inactive.  This allows rays to be cast for Global illumination, 
 final gathering, etc..., but not be directly visible by the camera.  This 
 allows you to use a constant shaded sphere as the emitter.
 
 The object acting as the bulb and visible to the camera should have the 
 opposite settings so it appears in the render, but doesn't block the rays 
 cast by the emitter. 
 
 You may need to do more subtle tweaking of visibility to account for other 
 situations.  For those cases I'll refer you to the 'ray switch' node which 
 can perform the same task at a more granular level.
 
 Matt
 
 
 
 -Original Message-
 From: softimage-boun...@listproc.autodesk.com 
 [mailto:softimage-boun...@listproc.autodesk.com] On Behalf Of Nancy Jacobs
 Sent: Wednesday, July 03, 2013 3:02 PM
 To: Softimage Listserve
 Subject: Requesting advice about light object
 
 Hello,
 
 I've been away from XSi for about two years due to illness, but am back 
 working on a project. So, this may be very easy for you guys to answer. I'm 
 feeling noobness here, having forgotten so many things!
 
 I'm trying to make a small sphere look like a light source, illuminating part 
 of the scene, which is only lit by final gathering and a globe HDR object. 
 I'm pretty sure I've done this successfully before, but it's failing me this 
 time...;-)
 
 I can't make the sphere a constant material, because I want it to use many of 
 the attributes of the architectural shader. In other words, I want it to be a 
 reflective ball which is glowing and emitting light. I like the simplified 
 emulated reflections I was getting by using optimizations in the 
 architectural shader (also the rendering time advantage). So, I tried putting 
 a point light inside it, which has somewhat fouled up the nice surface I had 
 going with the architectural shader. And there is no 'incandescence' in this 
 shader to make it appear glowing. I don't want to see the actual point light 
 in the render, but I do want the ball itself to be a bit luminous.
 
 I'm also having a lot of trouble with the light attenuation using the 
 exponent. Does anyone have any idea how to measure 'Softimage units' in one's 
 scene? Do they relate to say, the sphere radius I am using? Or is that a 
 whole 'nother thing
 
 Anyone who can give me some ideas or point me to some info about this glowing 
 sphere light thing, much appreciated... Seems so rudimentary...
 
 Thanks,
 Nancy