
Credits go to SiegeLord for helping out!
This thread is locked; no one can reply to it.
Different sampling behavior of Direct3D/HLSL and OpenGL/GLSL
RPG Hacker
Member #12,492
January 2011
avatar

I have a problem with Allegro regarding the different sampling behavior of Direct3D/HLSL and OpenGL/GLSL.

For the game I'm working on I coded a complex tilemap renderer. Instead of using the regular Allegro drawing functions, I decided to go with a 3D approach with an orthographic camera for multiple reasons (flexible camera, easy resizing, improving my 3D skills etc.). I used shaders, primitives (vertex buffers) and stuff like that to achieve this. This is the tileset I'm using for testing:

{"name":"e598a1b0bf.png","src":"\/\/djungxnpq2nug.cloudfront.net\/image\/cache\/7\/1\/715c7e87d7856259400e99d0cb5479dc.png","w":512,"h":640,"tn":"\/\/djungxnpq2nug.cloudfront.net\/image\/cache\/7\/1\/715c7e87d7856259400e99d0cb5479dc"}e598a1b0bf.png

Anyways, when I create an OpenGL display and render my tilemap using a GLSL shader, everything looks just fine:

{"name":"1a88a25d65.png","src":"\/\/djungxnpq2nug.cloudfront.net\/image\/cache\/c\/8\/c80705b518655aa2cb45e0b8ca3e5038.png","w":1300,"h":762,"tn":"\/\/djungxnpq2nug.cloudfront.net\/image\/cache\/c\/8\/c80705b518655aa2cb45e0b8ca3e5038"}1a88a25d65.png

However, when I create a Direct3D display and render my tilemap using a HLSL shader, depending on my current view matrix/camera position, it turns out like this:

{"name":"ff40480c5c.png","src":"\/\/djungxnpq2nug.cloudfront.net\/image\/cache\/e\/6\/e6cf7203177e01e58e26d7f371707307.png","w":1300,"h":762,"tn":"\/\/djungxnpq2nug.cloudfront.net\/image\/cache\/e\/6\/e6cf7203177e01e58e26d7f371707307"}ff40480c5c.png

//EDIT
Forum resizes the images, which makes it hard to see the problems, so here are the direct links:

OpenGL Screenshot
Direct3D Screenshot
//EDIT

Some of the errors are harder to spot in this image (like the flowers and small trees being placed a pixel too high, the slope on the right not being smooth, and all tiles in general having artifacts they shouldn't have), but other errors are quite obvious and more annoying (like the line of dirt above the big tree). Both OpenGL and Direct3D should be set to use point filtering. I don't even know if linear filtering is possible here in the first place, but if it is, I'd eventually like to use it. For now, though, just getting this to work correctly with point filtering would be good enough for me.

In any case, what HLSL seems to be doing here is picking out-of-bounds pixels when sampling the texture. In this case it adds the line of pixels right above the big tree to the image and removes the bottom row of pixels in exchange. This only happens when the camera is at certain positions (or, in other words, when the view matrix's position vector contains certain values). The UV coordinates should always be the same, though, so I don't know why HLSL does this or how to prevent it. My first naive idea was to just clamp the values the view matrix gets, to make sure that no fragment ever ends up between two pixels, but I couldn't work out a pattern for when this error actually occurs, and I don't even know if that would work with all viewport sizes or, eventually, with linear filtering. There just has to be a better way to do this. Maybe a simple setting I've overlooked or a certain trick I can use in the shader to prevent this from happening.

Does anyone know more about this behavior and how to prevent it? I don't really want to restrict my code to only work correctly in OpenGL.

I'd also post my pixel shader code here, but it's quite a lot of code and I don't even know if it's related to the problem at all, so I'll skip it for now. I can add the shader code later if you think it might be the source of the problem.

SiegeLord
Member #7,827
October 2006
avatar

I wonder if this has something to do with the fact that D3D treats texel centers differently than OGL does. Have you tried adding 0.5 to your UV coordinates inside the shader for D3D?

"For in much wisdom is much grief: and he that increases knowledge increases sorrow."-Ecclesiastes 1:18
[SiegeLord's Abode][Codes]:[DAllegro5]:[RustAllegro]

RPG Hacker
Member #12,492
January 2011
avatar

That is actually quite an interesting guess. I didn't know (or forgot) that this difference exists between OpenGL and Direct3D. I will definitely look into this when I'm back home. Although I'm a bit sceptical at this point because from my quick Google search it seems as though Direct3D actually sets 0.0/0.0 as the texel center. This makes it seem kinda unlikely to lead to this problem. I would actually expect this problem from OpenGL here instead of Direct3D (assuming that OpenGL sets 0.5/0.5 as the texel center). In any case, it is the best hint I've got so far, so I'll definitely give it a try later and see what this leads to.

SiegeLord said:

Have you tried adding 0.5 to your UV coordinates inside the shader for D3D?

Surely you meant half a texel, right? Adding 0.5 to the UV coordinate itself would cause the fragment to be fetched from the other half of the texture.
Adding 0.5 should be quite easy, though. In my pixel shader I'm working with pixel positions anyway (since I'm actually calculating the UV coordinates in the shader for easier animation). I could just add 0.5 to the pixel position before dividing by the texture dimensions. Let's see if that'll do the job.
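Roughly what I have in mind (just a minimal sketch with made-up names, not my actual shader code):

texture al_tex;
sampler2D tilesetSampler = sampler_state { Texture = <al_tex>; };
float2 textureSize; // e.g. (512.0, 640.0) for the tileset above

float4 ps_main(float2 pixelPos : TEXCOORD0) : COLOR0
{
    // Shift to the texel center before converting from pixel space to
    // UV space, so D3D samples the texel I actually mean instead of
    // rounding into a neighbouring one.
    float2 uv = (pixelPos + 0.5) / textureSize;
    return tex2D(tilesetSampler, uv);
}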

SiegeLord
Member #7,827
October 2006
avatar

Surely you meant half a texel, right?

Indeed.

OpenGL indeed does use 0.5, 0.5 as the texel center. In Allegro we shift the output matrices by 0.5, 0.5, but don't do anything about the texture matrices along those lines.

"For in much wisdom is much grief: and he that increases knowledge increases sorrow."-Ecclesiastes 1:18
[SiegeLord's Abode][Codes]:[DAllegro5]:[RustAllegro]

RPG Hacker
Member #12,492
January 2011
avatar

I'm actually using a custom shader/rendering setup (involving the primitives add-on, shaders, etc.) where I'm using my own matrices, which means I'm not using the default Allegro matrix at all (although it's still in the shader - not having that uniform in the shader actually causes shader usage to fail in Allegro). Of course that also means that my code doesn't account for those 0.5 texels yet. Comparing the two screenshots side by side, it indeed looks like the tiles in the Direct3D version are shifted about 0.5 to 1.0 texels down/right. That still kinda baffles me, though, since I'd expect the D3D version to be shifted up/left.

I also wonder if adjusting the UV coordinates (assuming that this will actually fix my problem) will also prevent the problem from occurring at other viewport sizes or with different rendering matrices. I guess the only way to find out is by trying it myself as soon as I'm back home.

EDIT:
Alright. I actually got to test this just now. And indeed, adding 0.5/0.5 to my pixel position DID fix the problem. The images are now positioned equally in Direct3D and OpenGL. However, the fix led to another problem. When changing the size of my viewport, this happens:
Screenshot
This only happens in Direct3D, not in OpenGL. Any idea what the meaning of this is and how to prevent it? ???

EDIT:
Nevermind, I found the reason for the cut-off tiles. It was actually my own mistake. In my shader, I was discarding certain pixels that were out of a certain range, but I forgot to account for the new 0.5 pixels, so I was accidentally clipping too much. I fixed the problem.

However, as I can tell now, by adding the 0.5 in the HLSL shader in the first place, it seems as though I just exchanged my first problem with another problem:

Screenshot

Now the lines are appearing on the bottom/right of tiles instead of on the top/left. So the problem is still there, basically. I wonder what I should do...

EDIT:
Alright. Just another update to let everyone know that I resolved the problem. I just used a little hack in my shader:

float2 clippedTexCoord = Input.Texcoord;

if (clippedTexCoord.x * tileWidth < 1.0)
    clippedTexCoord.x = 0.0;
if (clippedTexCoord.x * tileWidth >= tileWidth - 1.0)
    clippedTexCoord.x = (tileWidth - 1.0) / tileWidth;
if (clippedTexCoord.y * tileHeight < 1.0)
    clippedTexCoord.y = 0.0;
if (clippedTexCoord.y * tileHeight >= tileHeight - 1.0)
    clippedTexCoord.y = (tileHeight - 1.0) / tileHeight;

Basically, this checks if the texture coordinate lies on any of the edge texels of a tile. If it does, it adjusts the coordinate to be EXACTLY on the center of that texel (note that the 0.5 is added to the coordinate in a later step). I haven't tested what this looks like with linear filtering yet, but at least with point filtering it should always work fine.

Chris Katko
Member #1,881
January 2002
avatar

Are you sure you're not just drawing bitmaps off-by-one to begin with? Ala [0 to BITMAP_WIDTH] when it should be [0 to BITMAP_WIDTH-1]?

-----sig:
“Programs should be written for people to read, and only incidentally for machines to execute.” - Structure and Interpretation of Computer Programs
"Political Correctness is fascism disguised as manners" --George Carlin

RPG Hacker
Member #12,492
January 2011
avatar

You could be right on that one. I assign UV coordinates of 0.0 to 1.0 to the four corners of a tile, multiply these values by the tile width (32, for example) and then divide the result by the texture width (512 in this case). Maybe I should actually multiply the UV coordinate by tile width - 1 instead. Thanks for the hint!
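In other words, the per-tile part of the calculation would change roughly like this (a sketch with made-up names, assuming 32-pixel tiles):

static const float tileWidth = 32.0;

// uvInTile runs from 0.0 to 1.0 across one tile.
// Before: uvInTile * tileWidth gives 0..32, so the right edge already
// points at the first column of the next tile on the tileset.
// After: uvInTile * (tileWidth - 1.0) gives 0..31, which stays on the
// last column of this tile.
float tilePixelX(float uvInTile)
{
    return uvInTile * (tileWidth - 1.0);
}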

EDIT:
You were correct, that WAS the cause of the problem. Thanks for the hint!

EDIT:
AAAAAAND... me again! Need help with this once again.
Previously, when posting in this thread, I was using GLSL version 1.20 for my game. With that version I had to add 1.0 texels to my GLSL texture coordinate and 0.5 texels to the HLSL texture coordinate to get the exact same output from OpenGL and Direct3D with any viewport size and no artifacts, even with linear filtering. However, since then I had to switch to GLSL version 1.40 because I want to use uniform blocks in my shaders. This suddenly changed everything and I have no idea what's going on. With my previous +1.0 texels added in GLSL, lines/artifacts now appear on the top/left of my tiles. With +0.0 texels added, lines appear at the bottom/right of my tiles. Only at +0.5 texels do no artifacts appear. However, the output of GLSL and HLSL is still not the same. In GLSL, everything seems to be offset by about 0.5 to 1.0 pixels to the top/left. This also causes the filtering to look slightly different. This really annoys me, since everything worked so well with GLSL 1.20 once I figured out the problems.

So here is my question: Is anyone aware of different sampling/rasterization behaviors between GLSL 1.20 and GLSL 1.40? Anyone knows what's going on here?

Chris Katko
Member #1,881
January 2002
avatar

Maybe I should actually multiply the UV coordinate by tile width - 1 instead.

Multiplying by the width-1 sounds wrong. I was going to mention it but you said it was working.

Shouldn't it be something ala:

Coordinates = 0.5f + [0 to TEXTURE_WIDTH-1]

-----sig:
“Programs should be written for people to read, and only incidentally for machines to execute.” - Structure and Interpretation of Computer Programs
"Political Correctness is fascism disguised as manners" --George Carlin

RPG Hacker
Member #12,492
January 2011
avatar

Coordinates = 0.5f + [0 to TEXTURE_WIDTH-1]

Yes, technically you are correct. But I'm not using the full texture for a tile; the texture is actually a tileset (as seen in the first post) and I'm only using a section of it for the tile's output image. First I calculate where on the tileset the tile graphic starts, in pixels (by a simple multiplication and addition). Then I add (UV coordinate * (TileWidth - 1)) to this pixel to get the coordinate of the pixel to actually output. Now I just have to divide this pixel coordinate by the texture width to get the coordinate in texture/UV space. Before this division I add the 0.5 (or whatever value, depending on the shader I'm using) to the pixel position. This is working perfectly in HLSL 9, and it also worked perfectly in GLSL 1.20. In GLSL 1.40, while it is still technically working perfectly, the output image is slightly different from before, which is bothering me. I just don't know the reason for this. They must have changed something significant between those GLSL versions.
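To make that a bit more concrete, roughly this is what the lookup does (a simplified sketch with placeholder names, not my actual shader code):

float2 tileSize;     // e.g. (32.0, 32.0)
float2 textureSize;  // e.g. (512.0, 640.0), the tileset dimensions
float  texelShift;   // 0.5 for HLSL; whatever the GLSL version needs

float2 tileUV(float2 tileIndex, float2 cornerUV)
{
    // Where this tile's graphic starts on the tileset, in pixels.
    float2 tileOrigin = tileIndex * tileSize;

    // cornerUV runs from 0.0 to 1.0 across the tile, so multiplying by
    // (tileSize - 1.0) keeps the edges on the first/last texel of this
    // tile instead of bleeding into the neighbouring tiles.
    float2 pixelInTile = cornerUV * (tileSize - 1.0);

    // Shift towards the texel center, then convert from pixel space to
    // UV space by dividing by the tileset dimensions.
    return (tileOrigin + pixelInTile + texelShift) / textureSize;
}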
