Friday, February 8, 2013

Real Time Programmable Shaders and Me


[or is it "... and I" ?]

As part of my Solari Sign simulation, I am working through more of the learning curve on the OpenGL Shading Language, i.e. GLSL, and programmable shaders.

It is pretty cool but it sure is awkward.

There is a list of things you have to get through, arcane in the extreme, before you can do basic programmable shaders: compiling, linking, and running shaders; creating and setting uniform variables; creating and using texture maps; figuring out the relationship between traditional OpenGL and the new programmable shader paradigm; and so forth. As with so many things in OpenGL, the path from the documentation to a real application is neither well documented nor self-explanatory. The list goes on and on, and when you need to add a new feature, you have to be prepared to dive into the bits for days before you emerge.
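To give a flavor of the plumbing involved, here is a sketch of the compile-and-link step in C. The function names are my own, the error handling is minimal, and it assumes a current GL context with the GL 2.0 entry points already loaded (via GLEW or similar) — a sketch of the idea, not a drop-in library.

```c
/* Hypothetical helpers for the GLSL compile/link dance.
   Assumes a current GL context and loaded GL 2.0 entry points (e.g. GLEW). */
#include <stdio.h>
#include <stdlib.h>
#include <GL/gl.h>

/* Read an entire shader source file into a malloc'd, NUL-terminated string. */
static char *read_shader_file(const char *path) {
    FILE *f = fopen(path, "rb");
    if (!f) return NULL;
    fseek(f, 0, SEEK_END);
    long n = ftell(f);
    rewind(f);
    char *src = malloc(n + 1);
    if (src && fread(src, 1, n, f) != (size_t)n) { free(src); src = NULL; }
    if (src) src[n] = '\0';
    fclose(f);
    return src;
}

/* Compile one shader stage, printing the driver's info log on failure. */
static GLuint compile_stage(GLenum type, const char *src) {
    GLuint sh = glCreateShader(type);
    glShaderSource(sh, 1, &src, NULL);
    glCompileShader(sh);
    GLint ok = 0;
    glGetShaderiv(sh, GL_COMPILE_STATUS, &ok);
    if (!ok) {
        char log[4096];
        glGetShaderInfoLog(sh, sizeof log, NULL, log);
        fprintf(stderr, "shader compile failed:\n%s\n", log);
    }
    return sh;
}

/* Link a vertex + fragment shader pair into a program object. */
GLuint build_program(const char *vert_path, const char *frag_path) {
    char *vs = read_shader_file(vert_path);
    char *fs = read_shader_file(frag_path);
    GLuint prog = glCreateProgram();
    glAttachShader(prog, compile_stage(GL_VERTEX_SHADER, vs));
    glAttachShader(prog, compile_stage(GL_FRAGMENT_SHADER, fs));
    glLinkProgram(prog);
    GLint ok = 0;
    glGetProgramiv(prog, GL_LINK_STATUS, &ok);
    if (!ok) fprintf(stderr, "program link failed\n");
    free(vs);
    free(fs);
    return prog;
}
```

After that, setting a uniform is the familiar two-step: `glUseProgram(prog); glUniform1f(glGetUniformLocation(prog, "time"), t);`.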

But once you build up an infrastructure to make these things manageable, it is a lot like writing shaders in RenderMan circa 1988, but in real time.

And real time is fun.

For example, out of frustration with an object that remained relentlessly invisible no matter what I did, I mapped onto it a texture variable left over from a previous test. To my amazement, I picked up the texture map from the last digit of a digital clock I had running on the display. Only in this case it was mapped onto an object that filled the screen, and it was changing every second.

It's soft because the preloaded texture maps are 128x128, but that could easily be fixed.

Anyway, I think NVIDIA or someone should do the following:

1. Document the relationship between OpenGL and GLSL with modern examples.

2. Write and document a toolkit, maybe libglsl, that provides basic GLSL functionality at a slightly higher level.  If no one else has done it, I may do it.

         Such things as: reading shaders from disk and compiling them into a program, defining
         and setting uniform variables, loading and enabling texture maps, and so on.

3. Create a good implementation of noise, classic or simplex, and make it available.

        There is an implementation of noise online that looks very good, but it is 10 pages of
        code, and it's days of work to transfer it to your program. Still, that is less work
        than writing it from scratch.

As for using real-time graphics directly in motion picture filmmaking, in other words as final footage, that will only work for certain kinds of graphics.  For visual effects and most final animation, such things as advanced filtering, motion blur, and global illumination are either required or highly desirable.

For a very low budget film of course, anything is possible.
