vertex complexity
Mark Oates

Anybody have a frame of reference for what is or is not a reasonable way to define a vertex within a pipeline? (For example, with al_create_vertex_decl and its ALLEGRO_VERTEX_ELEMENT array.)

Say you have a bump map, a texture map, a specular map, a cube map, a normal map, and a height map for this model. Is it reasonable to have a vertex definition that's like a "god object" of all these texture coordinates? What's the best way to architect this? How is it usually done?
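Something like this is what I have in mind -- just a rough sketch, not working code; the struct layout, the field names, and the choice of user-attribute slots are all made up:

#include <stddef.h>
#include <allegro5/allegro.h>
#include <allegro5/allegro_primitives.h>

struct fat_vertex {
  float x, y, z;        /* position */
  float nx, ny, nz;     /* normal, passed as a user attribute */
  float u0, v0;         /* diffuse/texture map UV */
  float u1, v1;         /* normal/bump map UV */
  float u2, v2;         /* specular map UV */
  ALLEGRO_COLOR color;
};

ALLEGRO_VERTEX_DECL *create_fat_vertex_decl(void)
{
  ALLEGRO_VERTEX_ELEMENT elems[] = {
    {ALLEGRO_PRIM_POSITION, ALLEGRO_PRIM_FLOAT_3, offsetof(struct fat_vertex, x)},
    {ALLEGRO_PRIM_USER_ATTR + 0, ALLEGRO_PRIM_FLOAT_3, offsetof(struct fat_vertex, nx)},
    {ALLEGRO_PRIM_TEX_COORD, ALLEGRO_PRIM_FLOAT_2, offsetof(struct fat_vertex, u0)},
    {ALLEGRO_PRIM_USER_ATTR + 1, ALLEGRO_PRIM_FLOAT_2, offsetof(struct fat_vertex, u1)},
    {ALLEGRO_PRIM_USER_ATTR + 2, ALLEGRO_PRIM_FLOAT_2, offsetof(struct fat_vertex, u2)},
    {ALLEGRO_PRIM_COLOR_ATTR, 0, offsetof(struct fat_vertex, color)},
    {0, 0, 0}  /* terminator */
  };
  return al_create_vertex_decl(elems, sizeof(struct fat_vertex));
}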

Thx :)

Erin Maus

Generally, it's fine to have a few--or even just one--vertex type. I believe Unity, for example, has a "global vertex type" with color, texture coordinates, position, normals, etc. that it uses for all mesh data. Unused data simply isn't processed by the vertex shader in a programmable pipeline. At worst, the extraneous data may hurt performance, but that would only be an issue at extremely large vertex sizes or with extremely large vertex buffers (a 64-byte vertex across 100,000 vertices is still only about 6 MB of buffer data), which is incredibly unlikely for an overwhelming majority of games...

Not to mention you can store some crazy attributes in a vertex for specific use cases. In my proof-of-concept vector library, my vertex layout looked like so:

struct path_vertex {
  vector2 position;     /* vertex position */
  vector3 coefficient;  /* xy: Loop-Blinn curve coordinates, z: sign flip */
  int resource_index;   /* index into a texture cell holding fill data */
};

Position was self-explanatory. Coefficient.xy held the special values used in the Loop-Blinn evaluation method for quadratic curves, while Coefficient.z was used to negate the result of that method when necessary. The resource index was a reference to a cell of a texture that stored fill data for the pixel shader; it could also, say, be used to index a transform to apply to the vertex being processed, but I never got around to implementing that in the proof-of-concept.
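In case it helps, the per-pixel evaluation those coefficients feed boils down to something like this -- a CPU-side C sketch of what the pixel shader does, with the exact sign convention assumed rather than copied from my code:

#include <stdbool.h>

/* Loop-Blinn test for a quadratic curve: with (u, v) interpolated from
   coefficient.xy across the triangle, the pixel lies inside the filled
   region when u^2 - v <= 0. Multiplying by `sign` (coefficient.z, +1 or -1)
   flips the result for segments of the opposite orientation. */
static bool inside_quadratic(float u, float v, float sign)
{
  return sign * (u * u - v) <= 0.0f;
}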

GPU-based particle renderers make use of "vertex textures" as well. This skips the CPU <-> GPU upload step when all the necessary data can be calculated on the GPU.

Mark Oates

Can you give me an example (or screenshot) of what your vector library might be used for? Like, rendering SVGs or is that something different?

Erin Maus

[video]

It's a video of the preliminary/proof-of-concept version of Algae.Canvas I made in C#. All the data is packed into one buffer and transformed on the CPU, which is counter-productive. It would have been better to store the path transforms (6 floats) in two floating-point textures and keep the mesh data itself in world-space, using the resource index as a lookup like I do for fill color. This would still enable batch rendering, but would shift a lot of processing from the CPU to the GPU (which is better suited for this type of computation). There's also a small bug in the multi-threaded transform code that causes a lock-up, ugh. Lessons learned for the serious, native implementation...
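To make that concrete, the CPU side of the improved scheme could look something like this (just a sketch -- the names and the three-floats-per-texel layout are assumptions; the vertex shader would fetch texel resource_index from each texture and rebuild the affine transform):

typedef struct affine2d { float a, b, c, d, tx, ty; } affine2d;

/* Two RGB float "textures", one texel per path, modeled here as flat arrays:
   texture A holds (a, b, c), texture B holds (d, tx, ty). The mesh data never
   has to be retransformed on the CPU; the GPU looks the transform up by
   resource_index, so batching still works. */
static void store_path_transform(float *tex_a, float *tex_b,
                                 int resource_index, const affine2d *m)
{
  float *ta = tex_a + resource_index * 3;
  float *tb = tex_b + resource_index * 3;
  ta[0] = m->a;  ta[1] = m->b;   ta[2] = m->c;
  tb[0] = m->d;  tb[1] = m->tx;  tb[2] = m->ty;
}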
