
Sunday, October 5, 2014

The Heavens Have Foretold Your Doom


At one time or another, many computer animation people have worked to create an illusion of the night sky from earth, or of its cousin the "star field", an imaginary view of the stars from space. Whether for their own amusement, for visual effects purposes, or for scientific visualization, these innocents would approach the problem with the assumption that it was going to be easy. How hard could it be? It's just a bunch of random white dots, after all. Imagine their surprise when they discovered that doing excellent starfields is far from trivial.

A classic traditional technique for creating starfields is to build a cyc, or curved screen, painted black and punched with very small holes. Behind this screen was a curved light source, usually fluorescent tubes. The camera sat at the center of the implied sphere of the screen, and when the room was darkened and the backlight illuminated, you had a curved field of very bright, very small light sources which could be photographed with long exposures while the camera was moving. The result was excellent motion-blurred, perfectly antialiased, very high contrast star fields. But ultimately there were certain moves that the motion control camera could not easily do, such as tumbling end over end, so there was a need to generate these elements synthetically.

Another time-honored technique, and one that looked excellent, was painting on glass. Most of the time you saw stars in Close Encounters of the Third Kind (1977), you were seeing an optical composite of a live action element or motion control shot with a matte painting on glass.




Since everyone seems to have to go through the same learning curve, I am providing notes here on some of the issues facing 3D technical directors as they produce their first starfield, and I have written them as a letter to my younger self.


September 19, 1983

Oh, unwary traveler, so proud of your 3D knowledge: of geometric modeling, of animation whether scripted or procedural, of global illumination; do you think to encompass the heavens with these pathetic tools? Fool, your doom is assured. There are more things in heaven and on earth than are encompassed in your philosophy, or so I have heard, and when you approach the field of scientific visualization you must unlearn what you have learned and embrace the esoteric wisdom. You must open your eyes in order to see the light.

What perils await the unwary, the arrogant, the unlettered?

The first peril is the vast expanse of space. There is the scale of mortal man, then the scale of the solar system, then the scale of one single galaxy, and then beyond. These differences in scale are far beyond what most software packages can handle, so using the 3D positions of everything in a naive fashion is unlikely to work.

And that renderer you are so proud of: does it do all its calculations of space in 64-bit floating point, or even higher precision? Most renderers, with a few notable exceptions, do the majority of their work in single-precision floating point, which may be adequate for a giant robot or two but falls apart across the vast distances of space.
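
To make the peril concrete, here is a minimal sketch in C; the numbers are illustrative. Put a star one parsec away, move the camera a thousand kilometers toward it, and watch single precision swallow the move whole.

    /* The single-precision peril, illustrated.  At 3e16 meters the gap
       between adjacent 32-bit floats is roughly 2e9 meters, so a mere
       1000 km camera move rounds away to nothing. */
    #include <stdio.h>

    int main(void) {
        double parsec = 3.0857e16;        /* meters */
        float  star_f = (float)parsec;    /* star distance, single precision */
        double star_d = parsec;           /* same distance, double precision */

        float  rel_f = star_f - 1.0e6f;   /* camera moves 1000 km closer */
        double rel_d = star_d - 1.0e6;

        printf("float  sees a move of %g m\n", (double)(star_f - rel_f)); /* 0 */
        printf("double sees a move of %g m\n", star_d - rel_d);   /* ~1e+06 */
        return 0;
    }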

The second peril involves filtering what are very atypical samples. Most scenes render surfaces with various lighting applied, but a great deal of what you wish to render is stars, and what are stars? Stars are huge things, but on the screen they are, for all practical purposes, infinitely bright and infinitely small. The amount of energy concentrated in a single pixel may be immense, yet the pixel next to it may have very little or no energy at all. And what happens under those circumstances when you move the camera? Well, it aliases, of course, terribly. Furthermore, if you have modeled stars very far away and you are using point sampling of one form or another to simulate area sampling, then unless you are careful some of your samples will miss and you will have aliasing again.
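
One escape, sketched below with made-up image dimensions and a Gaussian standing in for a proper filter, is to stop sampling the stars at all: since each star is effectively an impulse, splat its energy directly into every pixel its filter footprint touches, and no star can fall between the samples.

    /* Splatting a star instead of point-sampling it.  Every pixel within
       the filter radius receives a share of the star's energy, so even a
       sub-pixel star contributes to the image. */
    #include <math.h>

    #define WIDTH  1920
    #define HEIGHT 1080
    #define RADIUS 2                      /* filter radius in pixels */

    static float image[HEIGHT][WIDTH];    /* float framebuffer of summed energy */

    static float filter_weight(float dx, float dy) {
        return expf(-2.0f * (dx * dx + dy * dy));   /* stand-in Gaussian */
    }

    void splat_star(float x, float y, float energy) {
        for (int py = (int)y - RADIUS; py <= (int)y + RADIUS; py++) {
            for (int px = (int)x - RADIUS; px <= (int)x + RADIUS; px++) {
                if (px < 0 || px >= WIDTH || py < 0 || py >= HEIGHT) continue;
                float dx = (px + 0.5f) - x;    /* pixel center to star */
                float dy = (py + 0.5f) - y;
                image[py][px] += energy * filter_weight(dx, dy);
            }
        }
    }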

Part of the solution is to use a good filter and lots of samples, but in the choice of filter lurks another threat, since as we know a "good" filter, a 7x7 sinc for example, is likely to have negative lobes. Instead of throwing those values out, you should keep them until the end, and even then you should not throw them away. What then to do with them is a mystery left as an exercise for the reader. The best solution, of course, would be a display that could absorb light as well as emit it, but we wait in vain for the display manufacturers to come to our aid.
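
A hedged sketch of that threat, using a Lanczos-windowed sinc as a stand-in for the 7x7 sinc: the weights genuinely go negative between the lobes, so the float framebuffer must be allowed to hold negative energy, and only the final conversion to display values clamps it away.

    /* A windowed sinc, and the clamp that should happen only at the very
       end.  Between its lobes lanczos3() returns negative weights; they
       stay in the float framebuffer until display time. */
    #include <math.h>

    static float sinc(float x) {
        if (fabsf(x) < 1e-6f) return 1.0f;
        float px = 3.14159265f * x;
        return sinf(px) / px;
    }

    static float lanczos3(float x) {           /* negative between lobes */
        if (fabsf(x) >= 3.0f) return 0.0f;
        return sinc(x) * sinc(x / 3.0f);
    }

    /* Only here, at quantization, are negative energies finally lost. */
    static unsigned char to_display(float energy) {
        if (energy < 0.0f) energy = 0.0f;
        if (energy > 1.0f) energy = 1.0f;
        return (unsigned char)(energy * 255.0f + 0.5f);
    }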

And what about those overly bright stars? Will you generate glows and other artifacts? After all, we are not just trying to simulate realistic stars; we are often trying to simulate realistic stars as the audience has seen them, and expects to see them.
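
Continuing the splatting sketch above (and reusing its image, WIDTH, HEIGHT, and splat_star), one illustrative answer: past a made-up brightness threshold, lay a second, much wider and dimmer Gaussian over the sharp core.

    /* A glow for the overly bright: a tenth of the energy spread over a
       wide, normalized Gaussian.  Threshold, fraction, and width are all
       invented for the sketch. */
    void splat_star_with_glow(float x, float y, float energy) {
        splat_star(x, y, energy);                 /* sharp, filtered core */
        if (energy <= 4.0f) return;               /* illustrative threshold */
        float glow  = energy * 0.1f;
        float sigma = 8.0f;                       /* glow radius in pixels */
        int   r     = (int)(3.0f * sigma);
        for (int py = (int)y - r; py <= (int)y + r; py++) {
            for (int px = (int)x - r; px <= (int)x + r; px++) {
                if (px < 0 || px >= WIDTH || py < 0 || py >= HEIGHT) continue;
                float dx = (px + 0.5f) - x;
                float dy = (py + 0.5f) - y;
                image[py][px] += glow
                    * expf(-(dx * dx + dy * dy) / (2.0f * sigma * sigma))
                    / (2.0f * 3.14159265f * sigma * sigma);
            }
        }
    }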

Although the sky is filled with stars, they are not the only things there. There are also great fuzzy areas known as nebulae, and sometimes other galaxies. It turns out that if there is any data for those, it is likely to be volume data. But even if there is no data and you create your own, volume rendering, one might argue, is the best way to render a nebula. Does your renderer of choice do volume rendering?
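
If it does not, the usual technique is at least short to state: an emission-absorption ray march, sketched below. Here density_at is a hypothetical stand-in for whatever volume data, or procedural model, you have.

    /* March a ray through the nebula, accumulating emitted light and
       attenuating by absorption (Beer-Lambert) at each step. */
    #include <math.h>

    extern float density_at(float x, float y, float z);   /* hypothetical */

    float march_nebula(float ox, float oy, float oz,      /* ray origin     */
                       float dx, float dy, float dz,      /* unit direction */
                       float tmax, float step) {
        float radiance = 0.0f;
        float transmittance = 1.0f;
        for (float t = 0.0f; t < tmax; t += step) {
            float d = density_at(ox + t * dx, oy + t * dy, oz + t * dz);
            radiance      += transmittance * d * step;    /* emitted light  */
            transmittance *= expf(-d * step);             /* absorbed light */
            if (transmittance < 0.01f) break;             /* nearly opaque  */
        }
        return radiance;
    }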

Review the following image of the earthlings' galaxy.




Do you notice the great areas of darkness? That of course is the infamous "space dust", the so-called Interstellar Medium, or ISM, which must surely exist to hide from us the center of our galaxy, where no doubt an entity of great evil resides. Surely you do not think it a coincidence that the space dust would hide what is arguably the most spectacular sight in our little neighborhood? Since most star catalogs do not have the ISM modeled, you may wish to develop a model of the ISM in your spare time. If not, the galaxy will not look right unless you simply leave out the stars in those areas (which may or may not be in the catalog anyway, as they are impossible to view from earth, at least in the visible bands).
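
For want of real data, a crude stand-in, and it is only that, is a uniform dust slab in the galactic plane, so that extinction grows as the cosecant of galactic latitude; every constant below is invented for the sketch.

    /* Dim each star by the dust along its line of sight: a slab model
       whose optical depth grows as csc(|b|).  Constants are illustrative. */
    #include <math.h>

    float ism_attenuation(float galactic_latitude_rad, float distance_pc) {
        float sin_b = fabsf(sinf(galactic_latitude_rad));
        if (sin_b < 1e-3f) sin_b = 1e-3f;   /* cap the blow-up in the plane */
        float tau = 0.05f * (distance_pc / 1000.0f) / sin_b; /* optical depth */
        return expf(-tau);            /* multiply the star's energy by this */
    }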

Because you are rendering stars, no doubt you have studied scotopic vision. It goes without saying that whenever the biped mammals have watched the stars they have, generally, been night-adapted. And yet they sometimes see color: perhaps they see Angry Red Planet Mars, or Betelgeuse, and they perceive the color red. How then are they seeing color? It may help the seeker of knowledge to realize that "scotopic" is named for the Skoptsy sect of religious devotees, whose most notable doctrine is male castration (see link below).

Of course I am sure that when you move the camera you will motion blur everything. Oh yes, and what do you plan to do about the speed-of-light issue? I am sure you will come up with something.

So, foolish mortal, you have been warned.

These are just the first of the issues you must address for a proper starfield.

Fools may go where wise people fear to tread.

Sincerely,
A Friend.



___________________________________________________________


Scotopic Vision

The Skoptsy

Close Encounters of the Third Kind (1977) on IMDB

Friday, February 8, 2013

Real Time Programmable Shaders and Me


[or is it "... and I" ?]

As part of my Solari Sign simulation, I am working through more of the learning curve on the OpenGL shading language, i.e., GLSL, or programmable shaders.

It is pretty cool but it sure is awkward.

There is a list of things you have to get through, arcane in the extreme, before you can do basic programmable shading: compiling, linking, and running shaders; creating and setting uniform variables; creating and using texture maps; figuring out the relationship between traditional OpenGL and the new programmable shader paradigm; and so forth. As with so many things in OpenGL, the path from the documentation to a real application is neither well marked nor self-explanatory. The list goes on and on, and when you need to add a new feature, you have to be prepared to dive into the bits for days before you emerge.
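
For the record, here is roughly what the first chunk of that arcana costs, sketched in C against the standard OpenGL 2.0 entry points; it assumes a context already exists and that your platform's headers or extension loader expose these functions.

    /* Compile and link one GLSL program, with errors printed instead of
       silently swallowed.  Header and loader details vary by platform. */
    #define GL_GLEXT_PROTOTYPES   /* Mesa-style; use GLEW or similar elsewhere */
    #include <GL/gl.h>
    #include <GL/glext.h>
    #include <stdio.h>

    static GLuint compile_shader(GLenum type, const char *source) {
        GLuint shader = glCreateShader(type);
        glShaderSource(shader, 1, &source, NULL);
        glCompileShader(shader);

        GLint ok;
        glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
        if (!ok) {
            char log[1024];
            glGetShaderInfoLog(shader, sizeof(log), NULL, log);
            fprintf(stderr, "shader compile failed:\n%s\n", log);
        }
        return shader;
    }

    GLuint build_program(const char *vert_src, const char *frag_src) {
        GLuint program = glCreateProgram();
        glAttachShader(program, compile_shader(GL_VERTEX_SHADER, vert_src));
        glAttachShader(program, compile_shader(GL_FRAGMENT_SHADER, frag_src));
        glLinkProgram(program);

        GLint ok;
        glGetProgramiv(program, GL_LINK_STATUS, &ok);
        if (!ok) fprintf(stderr, "program link failed\n");
        return program;
    }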

But once you build up an infrastructure to make these things manageable, it is a lot like writing shaders in RenderMan circa 1988, but in real time.

And real time is fun.

For example, out of frustration with an object that was relentlessly invisible no matter what I did, I mapped a texture variable, left over from a previous test, that I had been calculating. To my amazement, I picked up the texture of the last digit of a digital clock I had running on the display. Only in this case it was mapped onto an object that filled the screen, and it was changing every second.






It's soft because the preloaded texture maps are 128x128, but that could easily be fixed.

Anyway, I think NVIDIA or someone should do the following:

1. Document the relationship between OpenGL and GLSL with modern examples.

2. Write and document a toolkit, maybe libglsl, that lets one do basic GLSL functionality at a slightly higher level.  If no one else has done it, I may do it.

         Such things as: reading shaders from disk and compiling them into a program,
         defining and setting uniform variables, loading and enabling texture maps, and so on.

3. Create a good implementation of noise, classic or simplex, and make it available.

        There is an implementation of noise online that looks very good, but it is 10 pages
        of code and it's days of work to transfer it to your program. That is less work than
        it would be to write it from scratch. A compact sketch of the sort of thing that is
        needed follows this list.
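
As one sketch of what belongs in such a toolkit, here is classic 2D gradient noise in plain C, compact enough to transliterate into GLSL. The hash and the gradient set are illustrative, not Perlin's originals.

    /* Classic 2D gradient noise: hash each lattice corner to a gradient,
       dot it with the offset to that corner, and blend the four results
       with Perlin's quintic fade curve. */
    #include <math.h>

    static float fade(float t) { return t * t * t * (t * (t * 6.0f - 15.0f) + 10.0f); }
    static float lerpf(float a, float b, float t) { return a + t * (b - a); }

    /* cheap integer hash choosing one of four diagonal gradients */
    static float grad(int ix, int iy, float dx, float dy) {
        unsigned h = (unsigned)ix * 374761393u + (unsigned)iy * 668265263u;
        h = (h ^ (h >> 13)) * 1274126177u;
        float gx = (h & 1u) ? 1.0f : -1.0f;
        float gy = (h & 2u) ? 1.0f : -1.0f;
        return gx * dx + gy * dy;
    }

    float noise2(float x, float y) {
        int   ix = (int)floorf(x),  iy = (int)floorf(y);
        float fx = x - ix,          fy = y - iy;
        float u  = fade(fx),        v  = fade(fy);
        return lerpf(lerpf(grad(ix,     iy,     fx,        fy       ),
                           grad(ix + 1, iy,     fx - 1.0f, fy       ), u),
                     lerpf(grad(ix,     iy + 1, fx,        fy - 1.0f),
                           grad(ix + 1, iy + 1, fx - 1.0f, fy - 1.0f), u),
                     v);
    }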

As for using real-time graphics directly in motion picture filmmaking, in other words as final footage, that will only work for certain kinds of graphics. For visual effects and most final animation, such things as advanced filtering, motion blur, and global illumination are either required or highly desirable.

For a very low budget film of course, anything is possible.