
Tuesday, November 13, 2018

Lessons on the Path to Righteousness and the Installation of TensorFlow on CentOS


TensorFlow is one of the open-source back ends for machine learning. With a Keras layer on top, it is one of the more popular machine learning environments out there. Among other things, it supports computation on both the CPU and the GPU on most operating systems.

As in so many things in life, a clever or lucky choice can achieve a goal with no effort, while a seemingly similar choice can result in weeks, years or even decades of hell.

There are a number of surprises involved in installing these packages on your operating system of choice, and this note is intended to help you, dear reader, avoid shooting yourself in the foot, or the head, as the case may be.

1. Never, never, never try to install from source no matter who advises you to.  It is perfectly possible to install from source on a bare metal machine without any virtual environments, or you could just hit yourself with a large hammer for a few weeks.  Who knew that there were so many different ways to install Python, or that there were so many Pythons?  And that is just the tip of a very nasty set of icebergs.

2. So whenever you are given an opportunity to isolate yourself from the real world by using a virtual environment, whether in Python or anywhere else, take it.  In particular, on Windows 10, the combination of a Python virtual environment and a precompiled TensorFlow/Keras package will get you a CPU-only installation in an afternoon, roughly as sketched below.  For some of you, that is all you need and you can move on.
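
For the record, once the virtual environment exists and something along the lines of "pip install tensorflow keras" has finished, the afternoon ends with a sanity check like the one below. This is a minimal sketch, not gospel; package names and the exact API surface depend on which TensorFlow you end up with.

    # Minimal sanity check for a CPU-only TensorFlow/Keras install inside a venv.
    # Assumes something like "pip install tensorflow keras" has already been run.
    import numpy as np
    import tensorflow as tf
    from tensorflow import keras

    print("TensorFlow", tf.__version__)
    print("Built with CUDA:", tf.test.is_built_with_cuda())  # expect False for the CPU-only build

    # A trivial Keras model, just to prove the stack is wired together.
    model = keras.Sequential([keras.layers.Dense(1, input_shape=(4,))])
    model.compile(optimizer="sgd", loss="mse")
    model.fit(np.random.rand(32, 4), np.random.rand(32, 1), epochs=1, verbose=0)
    print("Trained a one-layer Keras model on random data.")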

3. For those of us in Linux world, you now have to choose between a few specific versions of Ubuntu and everything else.  You who would compromise your integrity and have no aesthetic sense are welcome to use Ubuntu.  Go, it is there for you.

4. For the rest of us who might use an adult version of Linux, my operating system of choice is CentOS / RHEL 7.5, the most recent version as I write this.  I thought I had to compile from source, but this turns out not to be the case.  One of the best paths through this jungle turns out to be the Docker (container) route, as follows.

5. Install Docker by registering as a free user of the Community Edition.  Having registered, and having installed the preferred package from the preferred repository, you are now able to run images that have been published to the Docker registry.

6. TensorFlow puts out new builds more or less every day in a variety of flavors (CPU-only, GPU, and so on) and publishes them to the Docker registry with tags such as "latest" or "stable", for example.

7. Using these magic words you can construct the name of the image you want to run.  You ask for one of these magic containers, Docker pulls whichever of its layers are not already local, and, if you so specify, you land in a shell inside the container, where you can start Python, import TensorFlow and Keras, and you are off to the races with a CPU-only version of TensorFlow.  A sketch of what that looks like follows.
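
Concretely, something like "docker run -it tensorflow/tensorflow:latest bash" (the tensorflow/tensorflow image on Docker Hub, with one of the tags mentioned above) gets you that shell. Once inside, a quick check that you really have the CPU build might look like the sketch below; the usual caveat applies that image names and tags drift over time.

    # Run inside the container's Python, after landing in the container shell.
    import tensorflow as tf
    from tensorflow.python.client import device_lib

    print("TensorFlow", tf.__version__)
    for d in device_lib.list_local_devices():
        print(d.device_type, d.name)  # only CPU devices should show up here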

8. Of course, at this point you are now using containers, and you will need to spend a day learning about container file systems and other nuances.  It's not too bad, though.

9. For those of you who foolishly also want GPU acceleration, you have chosen a slightly more difficult path.  You will have to install a variant of the "docker" program from NVIDIA, distributed via GitHub (nvidia-docker).  But once you do, and once you install the NVIDIA GPU driver on your Linux (a bird of a different feather), you can run a GPU-enabled container from the list mentioned above; a quick check is sketched below.
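
Assuming the NVIDIA driver and NVIDIA's Docker wrapper are in place, the check inside a GPU-flavored container, launched with something along the lines of "nvidia-docker run -it tensorflow/tensorflow:latest-gpu bash" (the exact command and tag are assumptions; consult NVIDIA's documentation for your setup), is a couple of lines:

    # Run inside a GPU-enabled TensorFlow container.
    import tensorflow as tf

    print("Built with CUDA:", tf.test.is_built_with_cuda())
    print("GPU available:", tf.test.is_gpu_available())  # True only if driver and runtime line up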

Good luck!

Tuesday, February 25, 2014

Using the GPU for Real Work: A Postmortem


After doing about a dozen projects with CUDA/GPU for my own edification, I made the mistake of trying to help out some friends on their project.

After working through various issues and problems, I came up with a list of somewhat obvious conclusions. I knew some of these going in, but some of them were a surprise, and some were confirmed as being really true, not just sort of true.

I showed this to a friend who has spent a great deal of his career designing graphics hardware and he confirmed these and added a few of his own. I showed this list to another friend who has used the GPU commercially and he tells me I am all wrong. He always got 50-100 times speedup without any problems and things just work.

So you are on your own, kids.

Believe these or not as you please.

1. An algorithm that has been optimized for a conventional computer will be so completely unsuitable for the GPU that you should not even try to port it. You are much better off abandoning what you did before and rethinking the problem for the GPU.

2. A major part of any GPU solution is getting the data to and from the GPU. Depending on what else you are doing, this could have a serious impact on the performance of the application and its design.
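
To make point 2 concrete, here is a toy sketch, written in Python with Numba's CUDA bindings purely for brevity (it is an illustration, not anything from the project in question), that separately times the upload, the kernel, and the download. On plenty of real problems the copies, not the arithmetic, are the story.

    # Toy illustration: time the copies around a trivial kernel.
    import time
    import numpy as np
    from numba import cuda

    @cuda.jit
    def scale(x, out):
        i = cuda.grid(1)
        if i < x.size:
            out[i] = 2.0 * x[i]

    n = 1 << 24
    host = np.random.rand(n).astype(np.float32)

    t0 = time.perf_counter()
    d_x = cuda.to_device(host)                # host -> device copy
    d_out = cuda.device_array_like(d_x)
    t1 = time.perf_counter()
    scale[(n + 255) // 256, 256](d_x, d_out)  # first launch also pays the JIT cost
    cuda.synchronize()
    t2 = time.perf_counter()
    result = d_out.copy_to_host()             # device -> host copy
    t3 = time.perf_counter()

    print("upload %.4fs  kernel %.4fs  download %.4fs" % (t1 - t0, t2 - t1, t3 - t2))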

3. In general you should not expect to just tack a GPU program/shader/whatever onto an already existing program. You should expect to have to do major work to rearchitect your program to use the GPU.

4. Do not expect to be able to do a lot of magic things with the display and still be able to do intensive work on the GPU. Under those circumstances, plan to have a second GPU for your compute work.  I am still not completely clear on how NVIDIA shares one GPU between two very different tasks (the computer's window system and your program, for example), but it does, up to a point.

5. As part of planning to use the GPU in your application, you should budget/allocate time for the core developer to work with your GPU programmer to hash out ideas, issues, problems. If your core developer does not have the time or the interest, do not try to use the GPU.

6. Debugging GPU programs is much harder than debugging normal programs. Think microcode, but a little better than that.

7. Performance on the GPU is something of a black art. Small differences in algorithm can produce impressive differences in the performance you actually get. It can be remarkably difficult to predict in advance what kind of performance you will ultimately see on your algorithm and project, even after optimization.
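
As one illustration of how small a "small difference" can be, the two kernels below do identical work; the only change is the memory access pattern, and on most hardware the strided version is dramatically slower. Again a Numba sketch, for illustration only; the same experiment in CUDA C behaves the same way.

    # Same work, different access pattern: coalesced vs. strided reads.
    import time
    import numpy as np
    from numba import cuda

    @cuda.jit
    def copy_coalesced(src, dst):
        i = cuda.grid(1)
        if i < src.size:
            dst[i] = src[i]                        # neighboring threads read neighboring words

    @cuda.jit
    def copy_strided(src, dst, stride):
        i = cuda.grid(1)
        if i < src.size:
            dst[i] = src[(i * stride) % src.size]  # neighboring threads read distant words

    def timed(kernel, *args):
        kernel[blocks, threads](*args)             # warm-up launch (includes JIT compile)
        cuda.synchronize()
        t0 = time.perf_counter()
        kernel[blocks, threads](*args)
        cuda.synchronize()
        return time.perf_counter() - t0

    n = 1 << 24
    blocks, threads = (n + 255) // 256, 256
    d_src = cuda.to_device(np.random.rand(n).astype(np.float32))
    d_dst = cuda.device_array_like(d_src)

    print("coalesced %.4fs   strided %.4fs" %
          (timed(copy_coalesced, d_src, d_dst), timed(copy_strided, d_src, d_dst, 32)))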

8. Not all GPUs are created equal even if they are software compatible.

9. And of the unequal GPUs, GPUs for laptops are particularly unequal.

10. Although the technology of GPUs and their programming is maturing, and NVIDIA has done a very good job, things are not perfect and when you run into a problem you may spend weeks and weeks getting yourself out. Examples upon request.

11. When you add a GPU to the mix in a larger application, you complicate testing, deployment, and support. If you do not have the budget for this, do not try to use the GPU.

In conclusion, GPUs are not a magic solution that just makes things faster. Under the right circumstances, GPU performance can be impressive, but lots of things have to go right and nothing is free.

Unless you are my friend who says that GPUs just work and speed things up. In that case, I guess they are free.

Friday, October 25, 2013

The Mighty Sphere


About two years ago, I decided to learn NVIDIA's GPU programming environment, CUDA. I wrote a volume renderer in it which can render anything you want as long as it is a sphere.

The problem, of course, with volume rendering is getting data to render. Volume datasets are usually associated with scientific visualization, and when you can get them at all, they are not trivial to process. They are real data about real things, and it requires serious work to make something of them.

So, for my tests I used normal 3D objects but made every vertex a sphere.  It turned out pretty well. Here are two test images, one with glowy spheres and one with spheres that were more hard-edged.

You get extra credit if you can figure out what they were originally.

[Two test images: one with glowy spheres, one with hard-edged spheres.]
Give up?  The one on the bottom is an upside-down SR-71.  The one on top is something with a backbone; you can see the vertebrae clearly.  Don't remember what it was, though.