GPU Computing

GPU computing is the term used for general-purpose computing on the GPU (Graphics Processing Unit). It is typically used within the financial, scientific and engineering industries.

The idea behind GPU computing is to use a CPU and GPU together in a heterogeneous computing design. The sequential portion of the application runs on the CPU (Central Processing Unit), while the computationally intensive portion runs on the GPU, with the GPU acting as a co-processor for data-parallel tasks.

With the two processors (CPU and GPU) working in collaboration, the application runs much faster, from the user's perspective, than it would on the CPU alone, because the high throughput of the GPU is being used to boost performance.

In order to maximize performance and make the most effective use of the GPU, the application developer has to modify (or port) specific functions of the existing application into compute-intensive kernels and map them to the GPU. The rest of the application simply remains coded for the CPU.

Mapping a function to the GPU involves rewriting the function in question to expose its parallelism and adding GPU programming constructs (from a language or API such as NVIDIA's CUDA, Brook or OpenCL) to move data to and from the GPU.
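As a minimal sketch of what this mapping looks like in CUDA (one of the languages named above), the example below ports a simple vector addition: the loop's parallelism is exposed by assigning one GPU thread per element, and the host code handles moving the data to and from the GPU. The function and variable names are illustrative, not from any particular application.

```cuda
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

// Kernel: each GPU thread computes one element of the result in parallel,
// replacing what would be a sequential loop on the CPU.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)          // guard: the last block may have spare threads
        c[i] = a[i] + b[i];
}

int main(void) {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    // Host (CPU) buffers.
    float *a = (float *)malloc(bytes);
    float *b = (float *)malloc(bytes);
    float *c = (float *)malloc(bytes);
    for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }

    // Device (GPU) buffers: data must be moved to the GPU explicitly.
    float *dA, *dB, *dC;
    cudaMalloc(&dA, bytes); cudaMalloc(&dB, bytes); cudaMalloc(&dC, bytes);
    cudaMemcpy(dA, a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, b, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(dA, dB, dC, n);

    // Move the result back from the GPU to the host.
    cudaMemcpy(c, dC, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", c[0]);

    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    free(a); free(b); free(c);
    return 0;
}
```

The rest of the application (here, everything in `main`) stays ordinary CPU code; only the hot loop has been rewritten as a kernel.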

Computing data using the GPU enables the massively parallel architecture of today's modern graphics processors to be utilized in real-world tasks. Modern GPUs have hundreds of cores (in comparison to the handful found in modern CPUs) that can work together to process hugely complex data sets.

History of GPU Computing

Graphics processor chips started life as fixed-function graphics pipelines. Over time they became increasingly programmable, which led graphics chip manufacturers (ATI, NVIDIA etc.) to introduce the first GPU (or Graphics Processing Unit).

In the late 1990s, computer scientists and medical researchers began to use GPUs for running general-purpose computational tasks. These GPU computing pioneers found that the excellent floating-point performance of the GPU gave a huge performance boost to a vast range of scientific applications. This field soon became known as GPGPU (General-Purpose computing on GPUs).

The problem with GPGPU was that it required the use of graphics programming languages like OpenGL and Cg to program the GPU. Researchers and software developers had to recast their applications to look like graphics applications in order to access the raw power of the GPU.

Now, with the development of fully programmable GPUs, graphics processor manufacturers (NVIDIA and ATI/AMD) have developed products that can be programmed using a much larger set of general-purpose languages and APIs (CUDA, Brook, OpenCL and DirectCompute in DirectX 11).

Please do not hesitate to contact us for further details on any of our GPU computing products.
