Creating antialiasing by the use of down sampling

Blitz3D Forums/Blitz3D Programming/Creating antialiasing by the use of down sampling

_33(Posted 2010) [#1]
Inspired by an article about the game Mass Effect 2 ( http://www.pcgameshardware.com/aid,704139/Mass-Effect-2-Better-quality-with-downsampling-at-6400-x-3600-pixels/News/ ), I figured some people might be interested in the technique for their 3D projects in Blitz. Downsampling means rendering at a higher resolution and then using that higher-resolution render on a lower-resolution display output; take for example 1024x768 in Blitz3D. It's well known that Blitz3D cannot render with antialiasing on. I don't know if it's a DirectX 7 limitation, but it is a limitation that brings image quality issues to everyone's projects. The result is of course jagged edges on all of the scene objects, especially where there is high contrast between two objects. It's also quite evident on, say, fences and smaller objects.

With downsampling, it is possible to actually simulate the process of antialiasing. The higher-resolution image is rendered into a texture, and that render is then used as your output, but at a lower resolution. So, for example, you'd use a 2048 by 1536 texture and perform a RenderWorld to the texture. Then take that texture, apply it to a rectangular "sprite" positioned to fill the screen exactly, and RenderWorld again at the lower 1024 by 768. This should give precise antialiasing, but of course the process will come with a speed penalty.
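The effect itself can be illustrated outside of Blitz3D. This is just a minimal Python sketch of the idea (the function name and the grayscale row-list representation are mine, not from any Blitz library): 2x downsampling effectively applies a box filter, averaging each 2x2 block of the supersampled render into one output pixel, so hard edges pick up intermediate values.

```python
def downsample_2x(pixels):
    """Average each 2x2 block of a 2D grid of grayscale values into
    one output value - the box filter that 2x downsampling applies."""
    h, w = len(pixels), len(pixels[0])
    out = []
    for y in range(0, h, 2):
        row = []
        for x in range(0, w, 2):
            block_sum = (pixels[y][x] + pixels[y][x + 1]
                         + pixels[y + 1][x] + pixels[y + 1][x + 1])
            row.append(block_sum / 4.0)
        out.append(row)
    return out

# A hard vertical edge that falls inside a 2x2 block becomes an
# intermediate value - exactly the softening that hides jaggies.
hi = [[0, 0, 0, 255],
      [0, 0, 0, 255],
      [0, 0, 0, 255],
      [0, 0, 0, 255]]
lo = downsample_2x(hi)
print(lo)  # [[0.0, 127.5], [0.0, 127.5]]
```

In the Blitz3D version of this, the GPU's bilinear filtering on the screen quad does the averaging for you when the big texture is drawn at the smaller size.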

This is just an idea. I haven't tried it, but I would bet it works. Lemme know what you think! Also, render-to-texture (RTT) isn't native to Blitz3D, so I suppose you would need something like FastExtensions to do the RTT portion. Finally, if you have an ATI graphics card, it is possible to force AA on in the drivers regardless of what the application requests, and the same goes for anisotropic filtering, so in that case it is pointless to resort to downsampling. But if you do have the luxury of downsampling, you will have an effective way of removing jaggy edges from your game's visuals, no matter what.


Kryzon(Posted 2010) [#2]
Interesting indeed. There is a D3D 7 .decls lib out there that has render-target support, so it would probably ease the pain of having to use CopyRect a bit.

You'll need to do some scaling with the screen quad or viewport, because the supersampled texture will need to be 2048² (even if you try to create one with a height of 1536, Blitz will round it up to the next power of two).
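Kryzon's power-of-two point is easy to check with a few lines. This is plain Python just to show the rounding rule, assuming Blitz3D rounds each texture dimension up independently, as the post describes:

```python
def next_pow2(n):
    """Smallest power of two >= n - how Blitz3D is said to round
    up texture dimensions (per the post above)."""
    p = 1
    while p < n:
        p *= 2
    return p

print(next_pow2(1536), next_pow2(768))  # 2048 1024
```

So a 2048x1536 request actually lands in a 2048x2048 texture, and only the top 1536 rows of it would hold the render.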


Yasha(Posted 2010) [#3]
Seen this?

Works well, but is extremely slow... I'm sure it could be optimised though.


Ross C(Posted 2010) [#4]
It does indeed work. sswift created a decent blur routine too. I don't know how it matches up to the FastLibs though.

TBH, I've found that this method is somewhat wasteful... If you wanted to render your scene at 1024x768, why not just render everything at a higher resolution, copy it across to a texture, and apply it to a sprite? Scale and position this sprite so you get a 1:1 texel-to-pixel ratio, with the sprite covering a 1024 x 768 area. Hide everything apart from the sprite, then RenderWorld again, and copy to a sprite. You will have to adjust the sprite scale when you apply the newer texture, so it fills the screen.

That way you will have rendered the full scene only once, plus once more with just the sprite showing, and you will have a decently blurred scene. In fact, I'm gonna knock up a demo and see what happens :D

Or, just render at the higher resolution and be done with it. That will be faster again!


Rroff(Posted 2010) [#5]
Super sampling is a massive fps hog...

Most people with an nVidia or ATI card these days (the majority of the gaming market) will either force it on globally, be up to speed with creating profiles, or not care about AA at all.

You can distribute an XML updater for the nVidia profiles to force MSAA in your application if you want, and ATI will have this feature with the next driver update.


_33(Posted 2010) [#6]
Thanks Yasha! I haven't been able to make it work yet though. Possibly because I don't have the latest version of FastExtensions; I'm on 1.12.


Kryzon(Posted 2010) [#7]

TBH, I've found that this method is somewhat wasteful... If you wanted to render your scene at 1024x768, why not just render everything at a higher resolution, copy it across to a texture, and apply it to a sprite? Scale and position this sprite so you get a 1:1 texel-to-pixel ratio, with the sprite covering a 1024 x 768 area. Hide everything apart from the sprite, then RenderWorld again, and copy to a sprite. You will have to adjust the sprite scale when you apply the newer texture, so it fills the screen.


Auto-reply mode, Ross? That's what the OP proposed in the first place =X

About the blur... doing something per-pixel like that routine you mentioned, now that would be a resource hog.

[...]

@OP: To improve performance, parent everything that is NOT the quad to a single pivot that you should hide when rendering the quad only.

Boo-Ya!


Ross C(Posted 2010) [#8]
Yeah, I got that :D OK, the biggest problem comes from not having the ability to create non-square textures: when rendering 1024 x 768, you have to render to a 1024x1024 texture. So anything over 1024 x 768 will cause the texture to be scaled up to 2048x2048. Not a great idea...

So, what's happening here is: the scene is rendered, then copied to a 1024 x 1024 texture, which is applied to a quad I made that covers the screen. The texture is scaled up (and positioned) vertically, so you get exactly the 1024 x 768 area on the screen. The scene is then hidden and the quad shown. This causes a minor anti-aliasing effect. Check it out.

Press the 2 key to hide the screen quad. Press the 3 key to enable the anti-aliasing effect again. You see the effect slightly better if you make all the spheres white.

Graphics3D 1024,768
SetBuffer BackBuffer()

Global camera = CreateCamera()
PositionEntity camera,0,0,-10

Global hide_pivot = CreatePivot()

For loop = 0 To 40

	s = CreateSphere(8,hide_pivot)
	PositionEntity s,Rnd(-10,10),Rnd(-10,10),Rnd(0,10)
	EntityColor s,Rnd(100,200),Rnd(100,200),Rnd(100,200)
	
Next

Global high_scene_texture = CreateTexture(1024,1024,256)
Global screen_quad = CreateMesh()
Global screen_surface = CreateSurface(screen_quad)
v0 = AddVertex(screen_surface,-0.5, 0.5,0,0,0)
v1 = AddVertex(screen_surface, 0.5, 0.5,0,1,0)
v2 = AddVertex(screen_surface,-0.5,-0.5,0,0,1)
v3 = AddVertex(screen_surface, 0.5,-0.5,0,1,1)

AddTriangle(screen_surface,v0,v1,v2)
AddTriangle(screen_surface,v2,v1,v3)

EntityFX screen_quad,1

PositionEntity screen_quad,EntityX(camera),EntityY(camera),EntityZ(camera)+5.12
ScaleMesh screen_quad,10.24,7.68,1
;EntityColor screen_quad,0,0,0


ScaleTexture high_scene_texture,1,1.33333

Global hide_quad = 0

While Not KeyHit(1)

	HideEntity screen_quad
	ShowEntity hide_pivot

	RenderWorld
	Flip
	
	CopyRect 0,0,1024,768,0,0,FrontBuffer(),TextureBuffer(high_scene_texture)
	EntityTexture screen_quad,high_scene_texture
	
	HideEntity hide_pivot
	
	ShowEntity screen_quad
	
	If KeyHit(2) Then hide_quad = 1
	If KeyHit(3) Then hide_quad = 0
	
	If hide_quad = 0 Then
		ShowEntity screen_quad
	ElseIf hide_quad = 1 Then
		HideEntity screen_quad
		ShowEntity hide_pivot
	End If
	
	RenderWorld
	Flip
	
Wend
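As a sanity check on the magic numbers above: assuming Blitz3D's default camera zoom of 1, where a quad at distance z in front of the camera needs to be 2*z units wide to fill the view (my reading of the default projection, so treat it as an assumption), the quad and texture scales are all consistent with the 1024x768 (4:3) screen. In Python:

```python
z = 5.12                       # quad's distance in front of the camera
width = 2 * z                  # 10.24 - the ScaleMesh x value
height = width * 768 / 1024    # 7.68  - the ScaleMesh y value (4:3 aspect)
v_scale = 1024 / 768           # ~1.3333 - the ScaleTexture y value, which
                               # stretches the 768 rows of the 1024-tall
                               # texture over the full quad height
print(width, height, round(v_scale, 4))
```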



Ross C(Posted 2010) [#9]
Sorry to bump, but I fixed a part of the code.


_33(Posted 2010) [#10]
Hi Ross C. I've tried your code. It's not exactly what I talked about. What you seem to be doing is scaling down from 1024x1024 to 1024x768, so in effect you've created minor antialiasing vertically. I tried to adapt your code for a 2048x2048 texture but couldn't figure it out.


Kryzon(Posted 2010) [#11]
The problem is that we'd need our render target to be 2048x2048 as well, and since we only have one render target - the one we create with Graphics3D - we hit a dead end.
The last hope would be making the camera viewport 2048², but I don't think that'll work.

To do it properly, you'd need a D3D 7 hooking lib so you can specify the appropriate render target: a 2048² texture.


Ross C(Posted 2010) [#12]
Hmmm, I'll have a look again. My idea was to render the scene using a viewport of 800x600, copy the 800 x 600 area to the 1024 x 1024 texture, and display that as the texture over the quad. Lemme try that again! ^_^


Kryzon(Posted 2010) [#13]
Why would you be up-sampling?

You should be down-sampling, taking a big thing and sampling it into a lower resolution.


Ross C(Posted 2010) [#14]
Well, I am downscaling. I'm displaying a lower-resolution overlay that will blur via bilinear filtering.

Render the scene (using Graphics3D 1024x768) at 800x600 via the CameraViewport command, then copy this to the texture on the quad. The bilinear filtering should blur out the pixels.

[EDIT] scrap that. I'm going for a lie down. I'm just talking rubbish now...


Ross C(Posted 2010) [#15]
OK, not much of a lie down. I'm just really scratching my head now, wondering why you'd want anti-aliasing like this. Huge speed hit, for one. Why not just render the whole scene at a higher resolution and display it as such? That way you're reducing the aliasing and have a much higher resolution :)


Kryzon(Posted 2010) [#16]
Well, some commercial games use it, so we can be sure it's not bogus. But with the old framework we are using, it's kind of a far-fetched method.


skidracer(Posted 2010) [#17]
DX7 style AA was dropped in DX8 and is unsupported on ALL modern graphics cards.


Ross C(Posted 2010) [#18]
Indeed it is, Kryzon. I still think it would be faster to render the screen at a higher resolution than to do two or more renders per loop.


_33(Posted 2010) [#19]
Well, it would be great to have both the higher resolution and the downsampling. I don't think one eliminates the need for the other.