Vertex data sharing.


Postby daniel » Wed Nov 07, 2012 7:50 pm

I've noticed that VertexBuffers store data in system memory, which makes me wonder how you handle data that is shared between VertexBuffers (for example, shared bone data or texture coordinates between models that have the same build type).
I am wondering if it wouldn't be better to have an extra layer over VertexBuffer for that purpose. That also makes me wonder, for OpenGL, whether it would be considered good to keep a VAO in such a class. I mean something like this:

Code:
class ModelVertexData
{
    public:
    ...
    ...

        // Binds the VAO (and any other shared vertex state) before the owning meshes render.
        void PreRender()
        {
            glBindVertexArray(m_vao);
            ...
            ...
            ...
            glBindVertexArray(0);
        }

    private:
        // One entry per vertex buffer that contributes to this model's vertex data.
        struct BufferEntry
        {
            VertexBuffer* buffer;
            BufferType type;
            GLuint offset;
            ...
        };

        GLuint m_vao;
        std::vector<BufferEntry> m_entries;
};

Re: Vertex data sharing.

Postby L. Spiro » Mon Nov 12, 2012 11:43 pm

I may have misunderstood the question, but I am not sure what you see as the main hurdle to sharing data between vertex buffers.
Sharing vertex buffers between meshes of a single model is quite common, since artists often just copy-paste one part to the other side. The wheels of a car are often just the same vertex buffer with a reversed transform applied (and thus reversed culling).

In this case the whole vertex buffer is shared between the subsets (meshes) of a model, but I don’t think this is what you meant by your question.

Then there is the case of instances of models, in which each instance just references the base model’s vertex buffers. Again I don’t think this is what you meant.

I think you are talking about 2 meshes that use the same vertex coordinates and normals, but different texture coordinates (for example).
But the answer is actually the same.
The vertex positions and normals go in one shared VBO. Each mesh has its texture coordinates in its own VBO.

Streams (the term used by DirectX) are used to swap between the combinations of position/normal VBOs and texture-coordinate VBOs.
In other words, just activate VBO 1 and VBO 2 together, then activate VBO 1 and VBO 3 together.
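
To make that concrete, here is a minimal OpenGL sketch of binding one shared position/normal VBO with either of two texture-coordinate VBOs. The buffer handles, attribute locations, and the DrawWithTexCoords() helper are invented for illustration and assume an already-initialized OpenGL 3.x context with a function loader; they are not taken from the engine.

Code:
// Hypothetical handles: posNormVbo holds interleaved positions + normals,
// texVboA and texVboB hold two different sets of texture coordinates.
void DrawWithTexCoords( GLuint posNormVbo, GLuint texVbo, GLsizei vertexCount )
{
    // Shared positions/normals ("VBO 1").
    glBindBuffer( GL_ARRAY_BUFFER, posNormVbo );
    glVertexAttribPointer( 0, 3, GL_FLOAT, GL_FALSE, 6 * sizeof( float ), (void *)0 );                      // Position.
    glVertexAttribPointer( 1, 3, GL_FLOAT, GL_FALSE, 6 * sizeof( float ), (void *)(3 * sizeof( float )) );  // Normal.
    glEnableVertexAttribArray( 0 );
    glEnableVertexAttribArray( 1 );

    // This mesh's own texture coordinates ("VBO 2" or "VBO 3").
    glBindBuffer( GL_ARRAY_BUFFER, texVbo );
    glVertexAttribPointer( 2, 2, GL_FLOAT, GL_FALSE, 2 * sizeof( float ), (void *)0 );                      // UV.
    glEnableVertexAttribArray( 2 );

    glDrawArrays( GL_TRIANGLES, 0, vertexCount );
}

// "Activate VBO 1 and VBO 2 together, then activate VBO 1 and VBO 3 together":
// DrawWithTexCoords( posNormVbo, texVboA, vertexCount );
// DrawWithTexCoords( posNormVbo, texVboB, vertexCount );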


No extra layers are needed.
If you want to encapsulate this behind a VAO, you need one VAO for each possible combination (in this case 2). This doesn't change what is ultimately happening, though, and if you do use VAOs I suggest not adding them as a hidden feature of the VertexBuffer class, but as their own new class. Otherwise you don't have a very intuitive way of flexibly making all the combinations of VBOs you will need.
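
If the VAO route is taken anyway, one possible shape for that separate class is sketched below: one VAO per VBO combination, built from a caller-supplied set-up callback. The class and member names are hypothetical, and an OpenGL 3.x context is assumed.

Code:
// Hypothetical wrapper: owns one VAO that captures one combination of VBO bindings.
class VertexBufferCombo
{
    public:
        // pfSetupAttribs performs the glBindBuffer()/glVertexAttribPointer()/
        // glEnableVertexAttribArray() calls for this particular combination.
        template <typename SetupFunc>
        explicit VertexBufferCombo( SetupFunc pfSetupAttribs )
        {
            glGenVertexArrays( 1, &m_vao );
            glBindVertexArray( m_vao );
            pfSetupAttribs();
            glBindVertexArray( 0 );
        }
        ~VertexBufferCombo() { glDeleteVertexArrays( 1, &m_vao ); }

        void Activate() const { glBindVertexArray( m_vao ); }

    private:
        GLuint m_vao = 0;
};

// One combo for (positions/normals + UV set A), another for (positions/normals + UV set B);
// each mesh then just calls the right combo's Activate() before its draw call.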


L. Spiro
It is amazing how often people try to be unique, and yet they are always trying to make others be like them.
- L. Spiro 2011

Re: Vertex data sharing.

Postby daniel » Wed Nov 14, 2012 9:39 pm

The reason I asked is that I could not find where multiple buffers can make up a mesh. Render parts only keep one whole buffer, and VertexBuffers don't contain such information. I guess I'll just have to look harder.

Re: Vertex data sharing.

Postby L. Spiro » Sun Nov 18, 2012 10:36 pm

I technically answered your question.
A “render part” has information for a whole set of vertex data, but that does not mean a single vertex buffer must be used for it.
Again, using streams (DirectX terminology) you can do exactly what you want. Read that part of my last post again.


L. Spiro

Re: Vertex data sharing.

Postby daniel » Tue Nov 20, 2012 1:35 pm

I'm sorry, I wasn't clear about what I was asking. I saw that you can activate several vertex buffers using streams, but from reading a bit through the Doxygen documentation I could not find where that information is kept.

For example, if you have one vertex buffer with positions and normals and another vertex buffer for texture coordinates, where is this recorded? How does the rendering method know that it's supposed to use two buffers instead of one?

Anyway, I'll keep looking through the API documentation.

Re: Vertex data sharing.

Postby L. Spiro » Sat Nov 24, 2012 11:40 am

The models themselves know which vertex buffers to activate based on what type of render needs to be performed, and it is the duty of the model parts to render themselves.

The pipeline works as follows:
  • Time to render. Scene manager collects objects that might be inside the viewing area by traversing an octree.
  • The remaining objects are told they are about to be rendered and given a render-queue set (a set is just 2 render queues, one for opaque, one for translucent). They are also passed the view frustum. The models perform culling on themselves and on all the meshes inside of them and add all the appropriate meshes to the appropriate render queue. Note that if the model is entirely inside the frustum then the meshes are all added without additional frustum checks.
    • The type of render about to be performed is also passed. It could be a normal render, a shadow-map render, etc. The models use this information to tell the render queue which shader ID will be used for the render, because the render queue needs to know this for sorting. Additionally, a pointer to the mesh is also passed. This is important later.
  • The render queue sorts all the entries it has by shader ID, textures, and distance from the camera.
  • The scene manager orchestrates all the rendering passes using this sorted list, although there will be different render queues for shadow-mapping etc. After the scene manager sets up the state for a render pass (enabling lights, setting states, etc.), it tells the appropriate render queue to render each object in sorted order.
  • But the render queue itself doesn’t actually do the rendering. Remember how each mesh passed a pointer to itself? The render queue uses that to call a virtual function on each mesh, and that allows the meshes to render themselves.
  • As each mesh renders itself, it activates the appropriate shaders, textures, index buffers, and vertex buffers needed for the type of render being performed. It is the models themselves that decided to split the vertex buffers, and they are responsible for managing when and how those vertex buffers are activated. Thus, if a mesh is told to “render for a shadow map”, it activates only the position vertex buffer. If it is told to do a normal render, it activates both sets of vertex buffers.

It is basically that simple. There are no complicated systems to create different combinations of vertex buffers and register them with such-and-such type of render.
The vertex buffers are the sole responsibility of their meshes, which decide both how to split them and how to recombine them.
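
As a rough sketch of that last point (the RenderType enumeration, the Activate() signature, and the member names are invented here for illustration, not the engine's actual API), a mesh's virtual render function might look like this:

Code:
// Hypothetical stand-in for the engine's vertex-buffer class.
class VertexBuffer
{
    public:
        void Activate( unsigned uiStream ) { /* Bind this buffer on the given stream. */ }
};

enum class RenderType { Normal, ShadowMap };    // Invented render types.

class Mesh
{
    public:
        // Called by the render queue through the mesh's virtual interface.
        virtual void Render( RenderType rtType )
        {
            switch ( rtType )
            {
                case RenderType::ShadowMap :
                    // Depth-only pass: positions are enough.
                    m_pPosVbo->Activate( 0 );
                    break;
                case RenderType::Normal :
                    // Full pass: shared positions/normals plus this mesh's own UVs.
                    m_pPosVbo->Activate( 0 );
                    m_pUvVbo->Activate( 1 );
                    break;
            }
            // Shader, texture, and index-buffer activation and the draw call follow here.
        }

    private:
        VertexBuffer * m_pPosVbo;   // Shared with the model's other meshes.
        VertexBuffer * m_pUvVbo;    // Unique to this mesh.
};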


L. Spiro

