Posted on 01/12/2012
Where can I find details on programming with CULA? I also need to find a function reference. What tools can be used with CULA? I don't understand why the CULA functions take pointers to host memory — I expected them to take GPU memory. Why does the CULA interface look so similar to ordinary CPU implementations?
Why do CULA functions take host pointers?
CULA is a set of GPU-accelerated linear algebra libraries. It uses the NVIDIA CUDA parallel computing architecture to dramatically speed up mathematical computation.
GPUs can process more than graphics. Their massively parallel architecture can run computation-intensive, general-purpose software, and with a proper implementation, porting an application to the GPU can yield large speed-ups.
CULA is supported on the entire NVIDIA Tesla line, as well as on NVIDIA Quadro workstation graphics cards. A full list can be found here: https://developer.nvidia.com/cuda-gpus
The actual speed-up from CULA usually depends on how heavy the algorithm is, on the size of the data set, and on what you are benchmarking against. You can see detailed results here: http://www.culatools.com/dense/performance/.
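As for the host-pointer question: to the best of my understanding, CULA's *standard* interface accepts host pointers on purpose — it mirrors the familiar LAPACK calling convention and performs the host-to-device transfers internally, while a separate *device* interface (the `culaDevice*` routines) accepts GPU memory directly. A rough sketch of the standard interface is below; the header name and exact signatures are taken from memory of the CULA Dense documentation, so treat them as assumptions to verify against the function reference on culatools.com.

```c
/* Sketch: solving A*x = b with CULA's standard (host-pointer) interface.
 * Assumes the cula.h header and culaSgesv signature from CULA Dense;
 * verify both against the official function reference. */
#include <stdio.h>
#include <cula.h>   /* assumed CULA Dense header */

int main(void)
{
    const int n = 3, nrhs = 1;

    /* Ordinary host arrays -- no cudaMalloc needed with this interface.
     * Matrices are column-major, as in LAPACK. */
    culaFloat a[9] = { 4, 1, 0,   1, 3, 1,   0, 1, 2 };
    culaFloat b[3] = { 1, 2, 3 };
    culaInt ipiv[3];

    culaStatus s = culaInitialize();
    if (s != culaNoError)
        return 1;

    /* CULA copies a and b to the GPU, factorizes and solves there,
     * then copies the solution back into b on the host: */
    s = culaSgesv(n, nrhs, a, n, ipiv, b, n);
    if (s != culaNoError)
        printf("culaSgesv failed: %s\n", culaGetStatusString(s));

    culaShutdown();
    return 0;
}
```

If you want to keep data resident on the GPU and skip the implicit transfers, the device interface (e.g. a `culaDeviceSgesv` routine taking `cudaMalloc`'d buffers) would be the one to look for in the reference.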