Have you seen this GPU programming kit:
What do you think the difficulties of GPU programming with this would be, and would some of the work still need to be done by the CPU, as on F@H?
I suspect every project would have to be a hybrid, using both the CPU and the GPU. At the end of the day, the OS needs to know what is going on; otherwise it would just overwrite whatever work you assigned to the GPU with something else.
To be honest, I haven’t studied the S@H or R@H source closely enough to know how easy or hard it would be to port using the various toolkits released by the video card manufacturers.
Hopefully we’ll start to see more traction in this area with the release of 5.8, since it includes some basic video card detection code.