After doing about a dozen projects with
CUDA/GPU for my own edification, I made the mistake of trying to help
out some friends on their project.
After working through various issues and problems, I
came up with a list of somewhat obvious conclusions. I knew some
of these going in, but some of them were a surprise and some were
confirmed as being really true, not just sort of true.
I showed this to a friend who has spent
a great deal of his career designing graphics hardware and he
confirmed these and added a few of his own. I showed this list to
another friend who has used the GPU commercially and he tells me I am
all wrong. He always got a 50-100x speedup without any problems
and things just work.
So you are on your own, kids.
Believe these or not as you please.
1. An algorithm that has been optimized for
a conventional computer will be so completely unsuitable for the GPU
that you should not even try to port it. You are much better off
abandoning what you did before and rethinking the problem for the GPU.
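To make that concrete, here is a toy sketch (my own example, not
from any real project): summing an array. The obvious CPU loop is
one long chain of dependent additions; the usual GPU restructuring
is a tree reduction in shared memory, which looks nothing like the
original loop.

    #include <cuda_runtime.h>

    // CPU version: one long serial dependency chain.
    //   float sum = 0; for (int i = 0; i < n; i++) sum += a[i];
    //
    // GPU version: each block reduces its chunk as a tree in shared
    // memory; the per-block partial sums are combined afterwards.
    // Assumes blockDim.x is a power of two.
    __global__ void blockSum(const float *a, float *partial, int n) {
        extern __shared__ float s[];
        int tid = threadIdx.x;
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        s[tid] = (i < n) ? a[i] : 0.0f;
        __syncthreads();
        // Halve the number of active threads at each step.
        for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
            if (tid < stride) s[tid] += s[tid + stride];
            __syncthreads();
        }
        if (tid == 0) partial[blockIdx.x] = s[0];
    }
    // launch: blockSum<<<blocks, 256, 256 * sizeof(float)>>>(d_a, d_partial, n);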
2. A major part of any GPU solution is
getting the data to and from the GPU. Depending on what else you
are doing, this could have a serious impact on the performance of the
application and its design.
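If you want a feel for what transfers cost on your own hardware,
a rough sketch like this one (the size is arbitrary and error
checking is omitted for brevity) times a single host-to-device copy
with CUDA events:

    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    int main() {
        const size_t n = 64 * 1024 * 1024;   // 64M floats = 256 MB
        float *h = (float *)malloc(n * sizeof(float));
        float *d = NULL;
        cudaMalloc(&d, n * sizeof(float));

        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);

        cudaEventRecord(start);
        cudaMemcpy(d, h, n * sizeof(float), cudaMemcpyHostToDevice);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);

        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        printf("copied %zu MB in %.2f ms (%.2f GB/s)\n",
               n * sizeof(float) >> 20, ms,
               (n * sizeof(float)) / (ms * 1e6));
        cudaFree(d);
        free(h);
        return 0;
    }

Pinned host memory (cudaMallocHost instead of malloc) is usually
noticeably faster here, which is exactly the kind of thing that
ends up shaping the design.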
3. In general you should not expect to
just tack a GPU program/shader/whatever onto an already existing program. You should
expect to have to do major work to rearchitect your program to use
the GPU.
4. Do not expect to be able to do a lot
of magic things with the display and still be able to do intensive
work on the GPU. Under those circumstances, plan to have a second
GPU for your compute work. I am still not completely clear on how
NVIDIA shares one GPU between two very different tasks (the
computer's window system and your program, for example), but it
does, up to a point.
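If you do have a second GPU, you can steer the compute work away
from the one driving the display. One rough heuristic (this is my
sketch, not an official NVIDIA recipe) is the runtime's
kernel-timeout flag, which tends to be set on devices subject to a
display watchdog:

    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        int count = 0;
        cudaGetDeviceCount(&count);
        int pick = 0;
        for (int i = 0; i < count; i++) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            printf("device %d: %s, watchdog timeout: %s\n", i, prop.name,
                   prop.kernelExecTimeoutEnabled ? "yes (likely drives a display)"
                                                 : "no");
            // Prefer a device without the watchdog for long-running kernels.
            if (!prop.kernelExecTimeoutEnabled) pick = i;
        }
        cudaSetDevice(pick);
        printf("using device %d for compute\n", pick);
        return 0;
    }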
5. As part of planning to use the GPU
in your application, you should budget/allocate time for the core
developer to work with your GPU programmer to hash out ideas, issues,
problems. If your core developer does not have the time or the
interest, do not try to use the GPU.
6. Debugging GPU programs is much
harder than debugging normal programs. Think microcode but a little
better than that.
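You can at least make failures loud. The usual idiom (sketched
here; the macro name is mine) is to wrap every runtime call, and to
check kernel launches separately, since launches fail
asynchronously:

    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    // Wrap every CUDA runtime call so a failure names the file and line.
    #define CUDA_CHECK(call)                                           \
        do {                                                           \
            cudaError_t err = (call);                                  \
            if (err != cudaSuccess) {                                  \
                fprintf(stderr, "%s:%d: %s\n", __FILE__, __LINE__,     \
                        cudaGetErrorString(err));                      \
                exit(1);                                               \
            }                                                          \
        } while (0)

    __global__ void doubleIt(float *x) { x[threadIdx.x] *= 2.0f; }

    int main() {
        float *d = NULL;
        CUDA_CHECK(cudaMalloc(&d, 256 * sizeof(float)));
        doubleIt<<<1, 256>>>(d);
        // A launch fails asynchronously: check the launch itself, then
        // synchronize to catch errors from the kernel's execution.
        CUDA_CHECK(cudaGetLastError());
        CUDA_CHECK(cudaDeviceSynchronize());
        CUDA_CHECK(cudaFree(d));
        return 0;
    }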
7. Performance for the GPU is something
of a black art. Small differences in an algorithm can produce
impressive differences in the performance you actually get. It can
be remarkably difficult to predict in advance what kind of
performance you will ultimately see on your algorithm and project,
even after optimization.
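Here is one concrete instance of a small difference with a big
effect (a sketch, not from any particular project): both kernels
below copy a matrix, but in the first one adjacent threads touch
adjacent addresses (coalesced), while in the second each warp
scatters its accesses a whole column apart. On most hardware the
second is dramatically slower for exactly the same arithmetic.

    #include <cuda_runtime.h>

    // Coalesced: consecutive threads read consecutive addresses.
    __global__ void copyRowMajor(float *out, const float *in,
                                 int width, int height) {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x < width && y < height)
            out[y * width + x] = in[y * width + x];
    }

    // Same work, but consecutive threads are now a whole column
    // apart in memory, so each warp's loads are scattered.
    __global__ void copyColMajor(float *out, const float *in,
                                 int width, int height) {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x < width && y < height)
            out[x * height + y] = in[x * height + y];
    }
    // launch either as, e.g.:
    // copyRowMajor<<<dim3((width+15)/16, (height+15)/16), dim3(16,16)>>>
    //     (d_out, d_in, width, height);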
8. Not all GPUs are created equal even
if they are software compatible.
9. And of the unequal GPUs, GPUs for
laptops are particularly unequal.
10. Although the technology of GPUs and
their programming is maturing, and NVIDIA has done a very good job,
things are not perfect and when you run into a problem you may spend
weeks and weeks getting yourself out. Examples upon request.
11. When you add a GPU to the mix of a
larger application, you complicate testing, deployment, and
support. If you do not have the budget for this, do not try to use
the GPU.
In conclusion, GPUs are not a magic
solution that just makes things faster. Under the right
circumstances, GPU performance can be impressive, but lots of
things have to go right and nothing is free.
Unless you are my friend who says that
GPUs just work and speed things up. In that case, I guess they are
free.