Faster 32bit performance than 16bit
I am getting much better performance setting up graphics as 32-bit (Graphics(w,h,32,60)). In OpenGL it's more than double the performance (in my game, from 75 fps to 180 fps with vsync off). I am only drawing images to the screen. Given my setup (see below), is this normal? I would have expected 16-bit to be faster. My best guess is that, since the video card uses the system DDR3 memory, the overhead of handling 16-bit is somehow costly.

My setup: Win 7 64-bit, GeForce 6150, Athlon X2 2.8 GHz, 4 GB DDR3.

Ideas? Thanks.
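For reference, a minimal sketch of the kind of test being described, assuming BlitzMax Max2D with the OpenGL driver; the window size, sprite count and image path ("sprite.png") are placeholders:

SuperStrict

' OpenGL Max2D driver. A nonzero depth argument to Graphics selects fullscreen
' at that bit depth, so switch the 32 to 16 to compare the two modes.
SetGraphicsDriver GLMax2DDriver()
Graphics 800, 600, 32, 60

Local image:TImage = LoadImage("sprite.png")   ' placeholder image path

Local frames:Int = 0
Local fps:Int = 0
Local timer:Int = MilliSecs()

While Not KeyHit(KEY_ESCAPE) And Not AppTerminate()
	Cls
	For Local i:Int = 0 Until 100
		DrawImage image, Rand(0, 700), Rand(0, 500)
	Next
	DrawText "FPS: " + fps, 10, 10

	frames :+ 1
	If MilliSecs() - timer >= 1000
		fps = frames
		frames = 0
		timer = MilliSecs()
	EndIf

	Flip 0   ' 0 = do not wait for vsync, so the raw frame rate shows
Wend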
I believe 16-bit is considered "deprecated" nowadays. In other words, hardware is highly optimized for 32-bit, and 16-bit is (for lack of a better word) "emulated", so there's your performance loss. If you want more fps, look for other optimizations :)
The main advantage of 16-bit is that it uses approximately half the video memory.
"The main advantage of 16-bit is that it uses approximately half the video memory." Sometimes.
If you have some information about this that I clearly lack, maybe you'd like to share it, so people who wish to learn more about this will be able to do so?
"The main advantage of 16-bit is that it uses approximately half the video memory." On the iPhone this isn't the case, since ALL images are converted at load time to 32-bit and power-of-two dimensions so the system can use them. There are some graphics cards out there that do this too.
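Just to illustrate the memory arithmetic behind that conversion (NextPow2 and PaddedBytes are made-up helper names for illustration, not an iPhone or BlitzMax API): an image padded up to power-of-two dimensions and stored at 4 bytes per pixel can easily take more than three times the memory of the unpadded 16bpp original.

SuperStrict

' Smallest power of two >= n (illustrative helper).
Function NextPow2:Int(n:Int)
	Local p:Int = 1
	While p < n
		p :* 2
	Wend
	Return p
End Function

' Bytes used once the image is padded to power-of-two dimensions.
Function PaddedBytes:Int(w:Int, h:Int, bytesPerPixel:Int)
	Return NextPow2(w) * NextPow2(h) * bytesPerPixel
End Function

' A 640x480 image authored as 16bpp...
Print "16bpp, unpadded:           " + (640 * 480 * 2) + " bytes"
' ...ends up as a 1024x512 texture at 32bpp once loaded.
Print "32bpp, padded to 1024x512: " + PaddedBytes(640, 480, 4) + " bytes"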
Ok, fair enough. But it could perhaps be a reasonable assumption that old graphics cards which don't do this are more likely to have less video memory, and so would receive the most benefit from the change?
"But it could perhaps be a reasonable assumption that old graphics cards which don't do this are more likely to have less video memory, and so would receive the most benefit from the change?" Nope. I think even old GeForces (4, 5...) already convert everything internally to 32 bits. Unless you're targeting 10-year-old hardware - or mobile phones - you shouldn't even bother with 16bpp. You could probably save more memory with texture compression nowadays, but that is something I don't even know how to compare :/
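For a rough sense of that comparison (just back-of-the-envelope arithmetic, not a measurement, and whether a given card and driver expose DXT compression is another question):

SuperStrict

' Approximate memory for a 1024x1024 texture in different formats.
' DXT1 stores 8 bytes and DXT5 stores 16 bytes per 4x4 pixel block.
Local pixels:Int = 1024 * 1024

Print "32bpp uncompressed: " + (pixels * 4) + " bytes"   ' 4 MB
Print "16bpp uncompressed: " + (pixels * 2) + " bytes"   ' 2 MB
Print "DXT1 compressed:    " + (pixels / 2) + " bytes"   ' 0.5 MB
Print "DXT5 compressed:    " + pixels + " bytes"         ' 1 MB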
Then what is the point of the setting?
"Then what is the point of the setting?" Backward compatibility, I would say.
The hardware is probably optimized for 32-bit, is all. 16-bit probably goes through an internal or software translation to get displayed. I recall finding the same thing on an old G3 Mac: you could set up and use 32-bit textures and such, but if the display was 16-bit it was slower, due to some conversion process used when flipping the display.
Running a test in my game (vsync off) on DX7, DX9 and OpenGL, the results are as follows:

DX9 16bpp - 120 fps
DX9 32bpp - 120 fps
DX7 16bpp - 75 fps
DX7 32bpp - 105 fps
OpenGL 16bpp - 75 fps
OpenGL 32bpp - 175 fps

Given that DX7 and OpenGL both got 75 fps at 16bpp, I would agree that some kind of emulation/translation is going on in software.
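For completeness, a sketch of how the backend is typically switched for such a comparison (the Direct3D drivers are Windows-only, and D3D9Max2DDriver is only available in BlitzMax versions that include the DX9 Max2D module; the draw loop would be the same fps-counting loop as in the earlier sketch):

SuperStrict

' Pick one driver before calling Graphics; comment/uncomment per test run.
?Win32
SetGraphicsDriver D3D7Max2DDriver()
'SetGraphicsDriver D3D9Max2DDriver()
?
'SetGraphicsDriver GLMax2DDriver()

Graphics 800, 600, 16, 60   ' repeat each run at 16 and at 32

While Not KeyHit(KEY_ESCAPE) And Not AppTerminate()
	Cls
	DrawText "Testing...", 10, 10
	Flip 0   ' no vsync, so the backend's raw throughput is visible
Wend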
That's interesting to note. Still, there is the question of video memory usage. I've yet to find a program that can reliably report video memory usage. (I've tried ones which say they can, but they don't work or give weird results.)