Different sampling behavior of Direct3D/HLSL and OpenGL/GLSL
RPG Hacker
Member #12,492
January 2011
I have a problem with Allegro regarding the different sampling behavior of Direct3D/HLSL and OpenGL/GLSL. For the game I'm working on I coded a complex tilemap renderer. Instead of using the regular Allegro drawing functions, I decided to go with a 3D approach with an orthographic camera for multiple reasons (flexible camera, easy resizing, improving my 3D skills etc.). I used shaders, primitives (vertex buffers) and stuff like that to achieve this.

This is the tileset I'm using for testing:

[Image: test tileset, 512x640]

Anyways, when I create an OpenGL display and render my tilemap using a GLSL shader, everything looks just fine:

[Screenshot: OpenGL rendering, correct]

However, when I create a Direct3D display and render my tilemap using an HLSL shader, depending on my current view matrix/camera position, it turns out like this:

[Screenshot: Direct3D rendering, with artifacts]

//EDIT: OpenGL screenshot

Some of the errors are harder to spot on this image (like the flowers and small trees being placed a pixel too high, the slope on the right being very unsmooth and all tiles in general having artifacts they shouldn't have), but other errors are quite obvious and more annoying (like the line of dirt above the big tree).

Both OpenGL and Direct3D should be set to use point filtering. I don't even know if linear filtering is possible here in the first place, but if it is, I'd eventually like to use it. For now, though, just getting this to work correctly with point filtering would be fair enough for me.

In any case, what HLSL seems to do here is pick pixels that are out of bounds when sampling a texture. In this case it adds the line of pixels right above the big tree to the image and removes the bottom row of pixels in exchange. This only happens when the camera is at certain positions (or in other words, when the view matrix's position vector contains certain values). The UV coordinates should always be the same, though, so I don't know why HLSL does this or how to prevent it.

My first naive idea was to just clamp the values the view matrix gets, to make sure that no fragment will ever land between two pixels, but I just couldn't work out a pattern for when this error actually occurs, and I don't even know if this would work with all viewport sizes or whether it would eventually work with linear filtering. There just has to be a better way to do this. Maybe there's a simple setting I've overlooked or a certain trick I can use in the shader to prevent this from happening.

Does anyone know more about this behavior and how to prevent it? I don't really want to restrict my code to only work correctly in OpenGL. I'd also post my pixel shader code here, but it's quite a lot of code and I don't even know if it's related to the problem at all, so I'll skip it for now. I can add the shader code later if you think it might be the source of the problem.
SiegeLord
Member #7,827
October 2006
I wonder if this has something to do with the fact that D3D treats texel centers differently than OGL does. Have you tried adding 0.5 to your UV coordinates inside the shader for D3D?

"For in much wisdom is much grief: and he that increases knowledge increases sorrow." -Ecclesiastes 1:18
RPG Hacker
Member #12,492
January 2011
That is actually quite an interesting guess. I didn't know (or forgot) that this difference exists between OpenGL and Direct3D. I will definitely look into this when I'm back home. Although I'm a bit sceptical at this point, because from my quick Google search it seems as though Direct3D actually sets 0.0/0.0 as the texel center. That makes it seem kinda unlikely to cause this problem; I would actually expect it from OpenGL here instead of Direct3D (assuming that OpenGL sets 0.5/0.5 as the texel center). In any case, it is the best hint I've got so far, so I'll definitely give it a try later and see where it leads.

SiegeLord said: Have you tried adding 0.5 to your UV coordinates inside the shader for D3D?

Surely you meant half a texel, right? Adding 0.5 to the UV coordinate itself would cause the fragment to be fetched from the other half of the texture.
SiegeLord
Member #7,827
October 2006
RPG Hacker said: Surely you meant half a texel, right?

Indeed. OpenGL does use 0.5, 0.5 as the texel center. In Allegro we shift the output matrices by 0.5, 0.5, but we don't do anything about the texture matrices along those lines.

"For in much wisdom is much grief: and he that increases knowledge increases sorrow." -Ecclesiastes 1:18
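A minimal sketch of that half-texel adjustment for the D3D side, assuming a D3D9-style pixel shader; the sampler and the textureWidth/textureHeight uniforms are illustrative names, not anything Allegro provides by default:

// Hypothetical uniform names for illustration; the real shader's names will differ.
uniform sampler2D tileSampler;
uniform float textureWidth;   // e.g. 512 for the tileset above
uniform float textureHeight;  // e.g. 640 for the tileset above

float4 SampleTexelCenter(float2 uv)
{
    // Allegro compensates for D3D9's half-pixel rasterization offset in its own
    // output matrices; a custom setup can instead nudge the UVs by half a texel
    // so the sample lands on a texel center rather than a texel edge.
    float2 halfTexel = float2(0.5 / textureWidth, 0.5 / textureHeight);
    return tex2D(tileSampler, uv + halfTexel);
}

Whether the offset ends up needing to be added or subtracted depends on how the rest of the matrices are set up, which is exactly what the next posts run into.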
RPG Hacker
Member #12,492
January 2011
I'm actually using a custom shader/rendering setup (involving the primitives add-on, shaders etc.) where I'm using my own matrices, which means I'm not using the default Allegro matrix at all (although it's still in the shader - not having that uniform in the shader actually causes shader usage to fail in Allegro). Of course that also means that my code doesn't account for those 0.5 texels yet.

Comparing the two screenshots side by side, it indeed looks like the tiles in the Direct3D version are shifted about 0.5 to 1.0 texels down/right. Although this still kinda baffles me, since I'd expect the D3D version to be shifted up/left. I also wonder if adjusting the UV coordinates (assuming that this will actually fix my problem) will also prevent the problem from occurring at other viewport sizes or with different rendering matrices. I guess the only way to find out is by trying myself as soon as I'm back home.

EDIT: However, as I can tell now, by adding the 0.5 in the HLSL shader in the first place, it seems as though I just exchanged my first problem for another problem: now the lines are appearing on the bottom/right of tiles instead of on the top/left. So the problem is still there, basically. I wonder what I should do...

EDIT:

float2 clippedTexCoord = Input.Texcoord;

if (clippedTexCoord.x * tileWidth < 1.0)
    clippedTexCoord.x = 0.0;
if (clippedTexCoord.x * tileWidth >= tileWidth - 1.0)
    clippedTexCoord.x = (tileWidth - 1.0) / tileWidth;
if (clippedTexCoord.y * tileHeight < 1.0)
    clippedTexCoord.y = 0.0;
if (clippedTexCoord.y * tileHeight >= tileHeight - 1.0)
    clippedTexCoord.y = (tileHeight - 1.0) / tileHeight;
Basically, this checks if the texture coordinate is somewhere on any of the edge texels of a tile. If it is, it adjusts the coordinate to be EXACTLY on the center of that texel (note that the 0.5 is added to the coordinate in a later step). I haven't tested what this looks like with linear filtering yet, but at least with point filtering it should always work fine.
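For linear filtering, a variant of the same idea is what's usually needed: rather than snapping the edge texels, clamp the coordinate so it always stays at least half a texel inside the tile, which keeps the bilinear footprint from reaching the neighbouring tile. A rough sketch, reusing the names from the snippet above and assuming the coordinate is still in per-tile 0..1 space at this point:

// Sketch only: keep the sample at least half a texel away from the tile edges
// so bilinear filtering cannot blend in texels from adjacent tiles.
float2 halfTexel = float2(0.5 / tileWidth, 0.5 / tileHeight);
float2 clampedTexCoord = clamp(Input.Texcoord, halfTexel, 1.0 - halfTexel);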
Chris Katko
Member #1,881
January 2002
Are you sure you're not just drawing bitmaps off-by-one to begin with? À la [0 to BITMAP_WIDTH] when it should be [0 to BITMAP_WIDTH-1]?
RPG Hacker
Member #12,492
January 2011
You could be right on that one. I assign UV coordinates of 0.0 to 1.0 to the four corners of a tile, multiply these values by the tile width (32, for example) and then divide the result by the texture width (512 in this case). Maybe I should actually multiply the UV coordinate by tile width - 1 instead. Thanks for the hint!

EDIT: So here is my question: is anyone aware of different sampling/rasterization behaviors between GLSL 1.20 and GLSL 1.40? Does anyone know what's going on here?
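Spelled out for one axis, the mapping described above is roughly the following sketch; tileWidth and textureWidth stand for the 32 and 512 from the example and aren't the shader's real uniform names:

// Per-tile coordinate in 0..1 across the tile.
float tileU = Input.Texcoord.x;

// Scale to texels within the tile, then to tileset UV space.
// Whether this should use tileWidth or (tileWidth - 1) is the off-by-one
// question raised here and picked up again in the next posts.
float texelX = tileU * tileWidth;
float u = texelX / textureWidth;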
Chris Katko
Member #1,881
January 2002
RPG Hacker said: Maybe I should actually multiply the UV coordinate by tile width - 1 instead.

Multiplying by the width-1 sounds wrong. I was going to mention it, but you said it was working. Shouldn't it be something à la:

Coordinates = 0.5f + [0 to TEXTURE_WIDTH-1]
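In other words, the rule for hitting texel centers is that the centre of texel i in an N-texel-wide texture sits at (i + 0.5) / N once you include the division by the texture width that happens afterwards. As a one-line sketch:

// Normalized coordinate of the centre of texel i in an N-texel-wide texture
// (i and N treated as floats here, 0 <= i <= N - 1).
float u = (i + 0.5) / N;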
RPG Hacker
Member #12,492
January 2011
Chris Katko said: Coordinates = 0.5f + [0 to TEXTURE_WIDTH-1]

Yes, technically you are correct. But I'm not using the full texture for a tile; the texture is actually a tileset (as seen in the first post) and I'm only using a section of it for the tile's output image. First I calculate where on the tileset the tile graphic starts in pixels (by a simple multiplication and addition). Then I add (UV coordinate * (TileWidth - 1)) to this pixel to get the coordinate of the pixel to actually output. Now I just have to divide this pixel coordinate by the texture width to get the coordinate in texture space/UV space. Before this division I add the 0.5 or whatever value to the pixel position, depending on the shader I'm using.

This is working perfectly in HLSL 9 and it has also worked perfectly in GLSL 1.20. In GLSL 1.40, while it is still technically working perfectly, the output image is slightly different from before, which is bothering me. I just don't know the reason for this. They must have changed something significant between those GLSL versions.
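Written out for one axis, the calculation described above looks roughly like this; tileColumn, tileWidth and textureWidth are illustrative stand-ins for however the real shader gets those values:

// Sketch for the horizontal axis (the vertical axis works the same way).
// 1) Start of the tile inside the tileset, in texels ("simple multiplication and addition").
float tileStartX = tileColumn * tileWidth;

// 2) Texel to sample within the tile: per-tile UV scaled by (tileWidth - 1),
//    plus the 0.5 (or whatever offset the target shader language needs).
float texelX = tileStartX + Input.Texcoord.x * (tileWidth - 1.0) + 0.5;

// 3) Convert from texels to normalized tileset coordinates.
float u = texelX / textureWidth;

If GLSL 1.20 and 1.40 really do disagree, the per-version offset in step 2 is presumably where the difference would show up.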