Accessing Allegro's VRAM-Bitmap via CUDA
DLComDev

Hi there,

I've recently developed a neat little ray tracer that renders in realtime with NVIDIA CUDA. Currently, the graphics card computes one bitmap and stores it in VRAM. That bitmap is then transferred to RAM so it can be used as an ALLEGRO_BITMAP, which I draw to the screen. If I understand correctly, Allegro's standard bitmap containers are stored in the graphics card's VRAM, or can at least be told to reside there, according to the documentation for al_set_new_bitmap_flags. So my current approach wastes performance: the data is generated in VRAM, transferred to RAM, and then transferred back into VRAM, which doesn't really make a lot of sense.
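For reference, the round trip I'm describing looks roughly like this (a minimal sketch; d_pixels, the function name and the ABGR_8888 pixel format just stand in for my actual setup, and error checking is omitted):

#include <allegro5/allegro.h>
#include <cuda_runtime.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Current (wasteful) path: VRAM -> RAM -> VRAM.
   d_pixels is a device buffer holding width*height 32-bit pixels. */
void copy_frame_to_bitmap(ALLEGRO_BITMAP *bmp, const uint32_t *d_pixels,
                          int width, int height)
{
    size_t bytes = (size_t)width * height * 4;
    uint32_t *h_pixels = (uint32_t *)malloc(bytes);

    /* 1. VRAM -> RAM */
    cudaMemcpy(h_pixels, d_pixels, bytes, cudaMemcpyDeviceToHost);

    /* 2. RAM -> VRAM again, through the locked Allegro bitmap */
    ALLEGRO_LOCKED_REGION *lr = al_lock_bitmap(bmp,
        ALLEGRO_PIXEL_FORMAT_ABGR_8888_LE, ALLEGRO_LOCK_WRITEONLY);
    for (int y = 0; y < height; y++) {
        memcpy((uint8_t *)lr->data + y * lr->pitch,
               h_pixels + (size_t)y * width, (size_t)width * 4);
    }
    al_unlock_bitmap(bmp);

    free(h_pixels);
}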

My question is the following: Is it possible to manipulate an Allegro bitmap with CUDA without going through the CPU and main memory?

I know that this is probably quite hard for a normal user to answer, so maybe the devs know more about it?

Well, maybe I'll get a useful answer.

Til then,

DLCom

Polybios

I've not done this, but you probably have to use an OpenGL (Direct3D) texture in between. Allegro (video) bitmaps are basically wrappers around an OpenGL (Direct3D) texture. You can get the OpenGL texture id with al_get_opengl_texture, for example. Then you should be able to write to this texture with CUDA. Quick googling tells me that cudaGraphicsGLRegisterImage and cudaBindTextureToArray might be things to look for (among others, probably).
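Untested, but the sequence would look something like this (a rough sketch; I'm assuming the frame is already sitting in a device buffer d_pixels as 32-bit RGBA, and I'm using cudaMemcpy2DToArray to write into the mapped array, since cudaBindTextureToArray is for reading; error checking omitted):

#include <allegro5/allegro.h>
#include <allegro5/allegro_opengl.h>
#include <cuda_runtime.h>
#include <cuda_gl_interop.h>

/* Copy a CUDA-rendered RGBA frame straight into the OpenGL texture that
   backs an Allegro video bitmap, without ever touching host memory. */
void blit_cuda_frame(ALLEGRO_BITMAP *bmp, const void *d_pixels,
                     int width, int height)
{
    GLuint tex = al_get_opengl_texture(bmp);   /* 0 if not a video bitmap */

    cudaGraphicsResource_t res = NULL;
    cudaGraphicsGLRegisterImage(&res, tex, GL_TEXTURE_2D,
                                cudaGraphicsRegisterFlagsWriteDiscard);

    cudaGraphicsMapResources(1, &res, 0);

    cudaArray_t arr;
    cudaGraphicsSubResourceGetMappedArray(&arr, res, 0, 0);

    /* Device-to-device copy: ray tracer output buffer -> texture. */
    cudaMemcpy2DToArray(arr, 0, 0, d_pixels,
                        (size_t)width * 4,   /* source pitch in bytes */
                        (size_t)width * 4,   /* row width in bytes    */
                        height, cudaMemcpyDeviceToDevice);

    cudaGraphicsUnmapResources(1, &res, 0);
    cudaGraphicsUnregisterResource(res);
}

In a real program you'd probably register the resource once after creating the bitmap and only map/copy/unmap each frame, since re-registering every frame is expensive. Also make sure the bitmap really is a video bitmap, otherwise there is no OpenGL texture behind it.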

DLComDev

Thank you! Hadn't thought of that ;D
