- What video card? (and how much video card RAM?)
- What CPU?
- What operating system?
- OpenGL or Direct3D?
- Why are you using a gigantic texture? What's the purpose of your application?
Megatextures (1M x 1M and up) are entirely possible through a virtual-memory-style setup that swaps chunks of 1024x1024 or 2048x2048, etc., into resident textures on the fly. (See id Software's Rage, built on id Tech 5, and the plethora of articles written on megatextures.)
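That's way more machinery than the OP needs, but as a rough idea of just the swapping part, here's a minimal sketch in plain OpenGL. `load_page_pixels()` is a hypothetical stand-in for however you pull one chunk of the huge source image off disk; real megatexturing (feedback rendering, page tables, compression) is much more involved.

```c
/* Minimal sketch of the chunk-swapping idea, assuming plain OpenGL and a
 * hypothetical load_page_pixels() that fills a buffer with one 2048x2048
 * RGBA chunk of the giant source image. */
#include <GL/gl.h>

#define PAGE_SIZE 2048

/* Hypothetical: fills 'out' with PAGE_SIZE*PAGE_SIZE RGBA pixels for the
 * page at (page_x, page_y) of the huge virtual image. */
extern void load_page_pixels(int page_x, int page_y, unsigned char *out);

static GLuint page_texture;               /* one resident page slot */
static int resident_x = -1, resident_y = -1;

void ensure_page_resident(int page_x, int page_y, unsigned char *scratch)
{
    if (page_x == resident_x && page_y == resident_y)
        return;                           /* already on the GPU, nothing to do */

    load_page_pixels(page_x, page_y, scratch);

    if (!page_texture)
        glGenTextures(1, &page_texture);

    glBindTexture(GL_TEXTURE_2D, page_texture);
    /* Upload just this 2048x2048 chunk instead of the whole megatexture. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, PAGE_SIZE, PAGE_SIZE, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, scratch);

    resident_x = page_x;
    resident_y = page_y;
}
```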
There is a penalty on many older cards for textures that aren't a power of two. Try 16384x16384 instead. Modern drivers "should" automatically round the texture up to the nearest power-of-two size anyway.
Try each smaller power-of-two texture and post the results for your card. That is, try 16384^2, 8192^2, 4096^2, 2048^2, 1024^2, 512^2.
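If you want to automate that, desktop OpenGL has GL_PROXY_TEXTURE_2D, which lets you ask the driver whether a given size/format would be accepted without actually allocating it. Something like this (run with a live GL context) would give you the list to post:

```c
/* Quick probe of which power-of-two sizes the current card/driver accepts. */
#include <GL/gl.h>
#include <stdio.h>

void probe_texture_sizes(void)
{
    GLint reported = 0;
    glGetIntegerv(GL_MAX_TEXTURE_SIZE, &reported);
    printf("Driver-reported max texture size: %d\n", reported);

    for (int size = 16384; size >= 512; size /= 2) {
        GLint got_width = 0;
        /* Proxy target: the driver checks the request but allocates nothing. */
        glTexImage2D(GL_PROXY_TEXTURE_2D, 0, GL_RGBA8, size, size, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);
        glGetTexLevelParameteriv(GL_PROXY_TEXTURE_2D, 0,
                                 GL_TEXTURE_WIDTH, &got_width);
        /* got_width comes back 0 if the driver rejects that size. */
        printf("%5dx%-5d -> %s\n", size, size, got_width ? "ok" : "rejected");
    }
}
```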
It's definitely interesting that it's not "faster" using a sub-bitmap. My only guess (and I don't know if this is by design, a design mistake, or what) is that either: a sub-bitmap draws the entire parent bitmap each time and merely "clips" it. If so, you shouldn't take a sub-bitmap of a huge bitmap unless you NEED it--like a sliding window where you want X = 230, 231, 232, 233, ..., 10000--as opposed to a "tile"-based setup where you only ever need whole 512x512 chunks. You could easily split those 512x512 tiles across several smaller texture atlases of 2048x2048 or 4096x4096 (sketched below).
The other reason it could be slow is at the driver/hardware level: you can't upload a "partial" texture, so the whole thing HAS to be loaded before it can be split up for a sub-bitmap.
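For the atlas approach mentioned above, the bookkeeping is pretty simple. A rough sketch, assuming 512x512 tiles packed 4x4 into 2048x2048 atlases (the names are just illustrative):

```c
/* Map a tile of the big image into an atlas slot. Filling atlases row-major
 * means neighboring tiles tend to share a texture, so drawing one visible
 * region needs fewer texture switches. */
#define TILE  512
#define ATLAS 2048
#define TILES_PER_SIDE (ATLAS / TILE)          /* 4 tiles per atlas row/column */

typedef struct {
    int atlas_index;   /* which 2048x2048 texture holds the tile */
    int u, v;          /* pixel offset of the tile inside that atlas */
} TileSlot;

TileSlot atlas_for_tile(int tx, int ty, int tiles_across)
{
    int linear    = ty * tiles_across + tx;
    int per_atlas = TILES_PER_SIDE * TILES_PER_SIDE;   /* 16 tiles per atlas */
    TileSlot s;
    s.atlas_index = linear / per_atlas;
    int local = linear % per_atlas;
    s.u = (local % TILES_PER_SIDE) * TILE;
    s.v = (local / TILES_PER_SIDE) * TILE;
    return s;
}
```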
--> Are you changing textures often, or using JUST the one? That is, is it "slower" if you draw both the 10K texture and a 128x128 texture in the same loop, and "faster" if you only use the 10K texture? (i.e., the driver has to swap the bound texture back to the 10K one on every draw.)
I might run some tests on my own on this, because I'm sure I'm going to run into it at some point.
I'm not sure what the average GPU can handle though.
WAY LARGER than you think. I used to have a website that listed tons of them, but I've lost it; I'll look for it. Mobile limits are obviously much smaller, but still at least 2048 or 4096. (Especially when you realize many new phones have screen resolutions larger than that in one dimension!)
A GTX 1080 supports 131072x65536. I'm guessing they're pushing for higher and higher because CUDA / science benefits from the larger sizes and nVidia has been moving toward science for at least 8 years or so.
GTX 980 is 65536x65536.
It should be noted that when the texture is "layered" (an array?), the max size is lower. GTX 980, for example, is 16K x 16K x 2048 layers.
It depends on the Compute Capability of the card: for example, 8K x 8K x 512 is the limit for the oldest Compute 1.0 to 1.3 devices, while Compute 2.x is 16K x 16K x 2048. I would imagine the OpenGL capabilities are equivalent for an nVidia card. For AMD/Intel, you can still go by the DirectX and OpenGL version: in OpenGL the queryable value is GL_MAX_TEXTURE_SIZE, and each OpenGL version mandates a minimum value that a conforming implementation must support. (Almost sure DirectX has an equivalent guarantee per feature level too.)
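Rather than guessing from CUDA tables, you can also just ask OpenGL for the limits at runtime. The queries below are standard; note that the 3D and array-texture enums come from GL 1.2/3.0, so you may need glext.h or a loader like GLEW/glad if your gl.h is ancient:

```c
/* Print the driver-reported texture limits for whatever card this runs on. */
#include <GL/gl.h>
#include <stdio.h>

void print_texture_limits(void)
{
    GLint max_2d = 0, max_3d = 0, max_layers = 0;
    glGetIntegerv(GL_MAX_TEXTURE_SIZE, &max_2d);              /* 2D textures  */
    glGetIntegerv(GL_MAX_3D_TEXTURE_SIZE, &max_3d);           /* 3D textures  */
    glGetIntegerv(GL_MAX_ARRAY_TEXTURE_LAYERS, &max_layers);  /* array layers */
    printf("Max 2D texture:   %dx%d\n", max_2d, max_2d);
    printf("Max 3D texture:   %d^3\n", max_3d);
    printf("Max array layers: %d\n", max_layers);
}
```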
While this is interesting stuff, it's a little off-topic: the OP's code shouldn't work at all on a card that doesn't support a texture of that resolution. (Unless you're worried about selling your game to people with lower-spec video cards.)