Imaginary human
BlitzMax Forums/OpenGL Module/Imaginary human
| ||
You seem to do a lot of OpenGL programming. Do you think Max2D is efficient, or could you improve on it? |
| ||
If you only want to ask ImaginaryHuman then his email is in his profile. <edit> But it seems odd to alienate anybody else who might have thoughts. |
| ||
Everything can be improved. Yes, it's possible. I was referring to everyone in the room ;) |
| ||
Hi Jeremy! Yes, I think OpenGL is a great graphics API and for me graphics programming is about the most appealing area of programming in general. Being visual, I like to `see` results and find interesting ways of creating visual effects. I am by no means the most productive or experienced with it on these forums but I'll try my best to answer your question. I just bought the newest Red Book (it explains how to use almost everything in OpenGL 2), the Orange Book (goes into great detail about GLSL shader programming), and the OpenGL SuperBible (updated for OpenGL 2.1, so it includes some things the others do not, like floating point buffers, frame buffer objects, occlusion queries etc., plus includes a full API reference). I just got done reading the Red and Orange books and am now starting on the SuperBible. You can read much of the material online, like at the OpenGL website. Older outdated versions of these books are online also. I mention the books because it really does help to have an understanding of the full API, how to use it and the possibilities it provides. When you get an understanding of what tools you're working with you can start to use them more masterfully and inventively to do the things your imagination comes up with. It also helps to see some of the tech demos available for your graphics card, like from ATI and NVidia, or play some recent games, to get a grasp of what is possible in modern graphics. Back in the Amiga days Blitz used the standard Amiga hardware and software approaches for doing graphics, which mainly meant (for those uninitiated) the use of a blitter chip (block-image-transfer) which wrote to the same memory shared by the screen buffer - the same memory that for many people was also used to store and execute programs on the CPU. It was also the same memory used for Direct Memory Access from various parts of the system, like loading files from disk, processing the video signal, playing sounds, etc. 
With all that load on it, and not being particularly fast by today's standards, the graphics could only go so fast. Yet many people had faster-than-standard CPUs like the 68030, 68040 and 68060 (Motorola). Many people also had additional ram installed which was not used by anything other than the CPU. Access to this ram was much faster, so the CPU was actually able to do processing faster than the blitter hardware. Also, the Amiga used planar bitplanes for its video screens, while working in `chunky` format was in many cases more efficient, especially for throwing around individual pixels and doing `blitting`. So I wrote a chunky-graphics-based graphics library add-on for BlitzBasic which used the CPU to do all the graphics processing in `fast ram` and then could either transfer the results to a chunky-format graphics card or convert the chunky graphics to planar format using one of the popular-at-the-time chunky-to-planar converters. Overall, even with the overhead of converting chunky graphics to `chip ram`, the amount of graphics which could be drawn was much higher than with the native system. Especially on a 68040 at 25MHz or higher you could do several times as much graphics processing. The library allowed Blitz programmers to do things like throw thousands of particles (dots) around at a decent rate, do lots of scrolls, stencilling, blitting, sprites, zooming, and some other funky effects. Compared to the `old system` of native Blitz commands it was quite a step up for most people. It couldn't do rotation or any kind of 3D stuff, but at the time it was pretty cool. </trip down memory lane></blowing one's own horn> However, back in the day the graphics landscape was quite different and the techniques for exploiting it were different. 
Today we're dealing with much higher resolutions, 32-bit non-indexed color, incredibly fast higher level graphics running on the hardware, fully hardware accelerated 3D, matrix transformation and perspective projections, advanced additional buffers like depth and stencil, and even programmable hardware shading. The kinds of things you can do now are WAY more advanced than my old graphics library. OpenGL does all of the things that I was aiming for my library to do and much, much more beyond. Now, Max2D is nice to have in the language but it is *very* basic. The only thing it does which my 10-year-old software library couldn't do is draw an image rotated. And there are some things it cannot do which my library could do, like drawing unfilled primitives and particles. It may well be true that Max2D is accelerated by the hardware, which is great, but in terms of features it really couldn't be much more minimal. If I can compare it to graphics libraries from a decade ago, speed aside, that doesn't exactly scream `progress`. In addition to that, the way that Max2D has been designed for efficiency is very basic. We know that the hardware will accelerate the drawing of images, lines and points, but that doesn't necessarily make it efficient. Max2D takes each drawing operation in almost total isolation. If you want to draw 100 images you have to set up the hardware for drawing images, draw one image, and then tell the hardware you're done drawing images - 100 times over. At the very least it could be that you tell the hardware you're drawing images, draw multiple images in sequence, and then tell the hardware you're done. There is nothing in Max2D that considers how things would be faster if you could process multiple objects at once. And yet OpenGL does have those facilities. Also, the graphics hardware works differently these days. You have separate video ram used by OpenGL, a graphics bus which transfers stuff to/from main memory, and the CPU operating on main memory. 
There are features in OpenGL 1.1, like vertex arrays, which let you specify multiple objects to draw. Then you make one function call to go draw them and OpenGL does it more efficiently than just drawing one object at a time. Because GL is at a lower level than Max2D it can break down operations into multiple instructions, those instructions can be optimized and grouped together, and state changes can be grouped together, so that the rendering of multiple objects is much more efficient. It is possible to draw 2D images faster with some better OpenGL code than to draw them with Max2D. Then in more recent versions of OpenGL you have even more exciting possibilities, like vertex buffer objects which reside in video ram, vertex and fragment shaders which can do all kinds of stuff along the way, and not to forget 3D! Max2D doesn't really make any effort to consider that you might be drawing a larger number of items, or that those items have something in common, or that this can be optimized to run faster. In a way, it's like the old mindset of software-based rendering trying to do the same things in the same ways on current hardware. The techniques are way out of date and there simply are faster ways to do things. Feature-wise, I think Max2D should at the very least have more facilities which make better use of OpenGL 1.1 capabilities. E.g. why don't we have commands that operate on the stencil buffer? Why are there still no commands to draw hollow 2D primitives? How about rendering to textures, or at least grabbing the backbuffer into a texture - it's decent enough performance for many uses. How about being able to at least rotate sprites in 3D? How about a particle engine, even if it's just a matter of plotting zoomed/rotated sprites on the screen? There is really a lot that even the oldest OpenGL can do which has not been exploited in Max2D and has not been made available to the programmer. 
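To make the vertex-array batching idea concrete, here is a minimal C sketch of the CPU side: pack every sprite's corners into one array so a single glDrawArrays() call could draw them all. Only the packing is shown and run here; the GL calls appear in comments, and the `Sprite` struct and function names are illustrative, not Max2D or BRL code.

```c
#include <assert.h>

/* Hypothetical sprite: screen position plus width/height in pixels. */
typedef struct { float x, y, w, h; } Sprite;

/* Write 4 vertices (x,y pairs) per sprite into verts; return vertex count.
 * verts must hold at least n * 8 floats. */
static int pack_quads(const Sprite *sprites, int n, float *verts)
{
    for (int i = 0; i < n; i++) {
        float x = sprites[i].x, y = sprites[i].y;
        float w = sprites[i].w, h = sprites[i].h;
        float *v = verts + i * 8;       /* 4 vertices * 2 floats each */
        v[0] = x;     v[1] = y;         /* top-left     */
        v[2] = x + w; v[3] = y;         /* top-right    */
        v[4] = x + w; v[5] = y + h;     /* bottom-right */
        v[6] = x;     v[7] = y + h;     /* bottom-left  */
    }
    /* With a GL context bound (not done in this sketch) you would then do:
     *   glEnableClientState(GL_VERTEX_ARRAY);
     *   glVertexPointer(2, GL_FLOAT, 0, verts);
     *   glDrawArrays(GL_QUADS, 0, n * 4);   -- one call for all sprites */
    return n * 4;
}
```

The point of the design is that the per-sprite work becomes plain memory writes on the CPU, and the per-frame GL overhead collapses to one draw call instead of one glBegin()/glEnd() pair per image.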
It's kind of like having a brand new high-tech hi-fi system and only ever using it to play a cassette tape, or a high definition HDTV and only using it to watch low-resolution video. I know Mark has to write code for DirectX as well as OpenGL and I don't know if that is really holding things back, but I'm sure DX can do a lot of the same stuff that GL can do. Okay, so Max2D is `simple` and `easy` and `basic` for beginner programmers, but it definitely doesn't go beyond that. It doesn't `scale well` to larger numbers of objects and doesn't make full use of what the underlying API can really do. We could all be writing much more exciting games right now with amazing graphics effects, facilitating the kind of creative inspiration that you saw on the Amiga. If you don't give your users versatile tools it limits their creativity. OpenGL is a fairly low level API and its operations are broken down into smaller pieces. For example, to draw a 2D image (after texturing is defined and switched on) takes at least 10-14 instructions. You can well imagine, then, that there are ways to organize these instructions in groups for greater efficiency. We should also be able to put multiple images on a single texture `surface` and then draw entire particle systems of objects based on that texture. This alone would be more efficient than calling DrawImage() multiple times, perhaps as much as 2-3 times faster. I just don't see that Max2D is really doing anything other than stepping one foot in the door of hardware-accelerated graphics. I think possibly that Mark decided to come up with a bare-bones 2D graphics API, thinking he'd then set about working on a full 3D API to replace it. But as we know the 3D API hasn't arrived yet and we're still barely able to exploit the possibilities that 2D has to offer. Although it's nice to use some blending like lightblend, shadeblend, and rotation, there is SO much more that could be done. It's not all about just drawing scaled rotated images. 
You might notice that the kind of products that are created within a given programming language or environment are largely constrained by what that system can do. BlitzMax games have a `look and feel` comprising a mindset of `make everything an image, zoom it and rotate it and throw it around the screen`. Also there is the mindset of `use lightblend for all particle effects`. But what else is there? Since these are really the only two useful areas of Max2D, with perhaps drawing points and lines coming in 3rd, users don't really have many creative options. Games were doing rotated scaled lightblended graphics in *software* several years ago. I think graphics APIs need to be created by people who are really into graphics. Max2D is more like an afterthought strapped on to BlitzMax. I think once Max3D comes along you'll be able to do a lot with a texture-mapped quad that you can't do at the moment, like move it around in 3D, give it realistic shadows, perhaps apply some shaders, etc. And that will hands-down replace Max2D entirely. But until then there are definitely many limitations to Max2D which could be greatly improved upon. |
| ||
Sorry, didn't mean to alienate anyone. IH, I remember the guy who wrote Eschalon got an email from someone who profiled his game and told him his OpenGL code was inefficient, so I've been wondering if there is a better way of doing things. It would be great to have more control over when to tell the hardware that drawing is finished for that frame. One thing that I found difficult when looking at some OpenGL code was trying to get the images the right size and in the right place on the screen; it seems you have to do complicated co-ordinate manipulations to get an image to behave like it does with software drawing, e.g. DrawImage(35,83) doesn't seem possible to do with OpenGL. |
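On the coordinate question above: once the projection is set up as glOrtho(0, width, height, 0, -1, 1), vertex coordinates *are* pixel coordinates with 0,0 at the top-left, so glVertex2i(35, 83) lands exactly where a software DrawImage 35,83 would. A small C sketch of the mapping that projection performs (function name is illustrative; no GL context needed to check the math):

```c
#include <assert.h>

/* With glOrtho(0, w, h, 0, -1, 1) the projection reduces to this mapping
 * from pixel coordinates (origin top-left, like DrawImage) to OpenGL's
 * normalized device coordinates in [-1, 1]. */
static void ortho_topleft(float w, float h, float px, float py,
                          float *ndc_x, float *ndc_y)
{
    *ndc_x = 2.0f * px / w - 1.0f;   /* left edge -> -1, right edge  -> +1 */
    *ndc_y = 1.0f - 2.0f * py / h;   /* top edge  -> +1, bottom edge -> -1 */
}
```

In other words, the `complicated co-ordinate manipulation` only has to be done once, in the projection setup; after that you can pass pixel positions straight to glVertex2i().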
| ||
I personally think Max2D should be OpenGL only - either that, or take advantage of OpenGL to make that side of things more powerful. |
| ||
It's not very difficult to draw a texture-mapped quad in 2D the same as you would in software. I am not sure why you thought it would be difficult. There are a few things to set up but then it's pretty straightforward. Have a look at BRL's OpenGL Max2D module sourcecode and see how they do it there. There are 5 basic steps:

1. Set up your orthographic projection and viewport
   a) gluOrtho2D() 'pass the window dimensions
   b) glViewport() 'pass the window dimensions
2. Set up your texture and upload it from main memory to OpenGL
   a) Set up the data transfer
   b) Generate a new texture ID
   c) Bind (select) the texture
   d) Upload the texture image
   e) Set the texture's filtering
3. Switch on texture mapping
   a) glEnable(GL_TEXTURE_2D)
4. Define a quad, which comprises:
   a) A call to glBegin(GL_QUADS) 'at the start
   b) A call to glColor4ub(Red,Green,Blue,Alpha) 'for each vertex
   c) A call to glTexCoord2f(TextureX,TextureY) 'for each vertex
   d) A call to glVertex2i(X,Y) 'for each vertex
5. Finish the quad with glEnd() 'at the end

Texture coordinates range from 0 to 1 across a texture. The corners are at 0,0 - 0,1 - 1,0 and 1,1. Depending on what values you set up your orthographic projection with, you might then define vertex coords the same as pixel coords.

Optimizations: if you could put multiple images or frames of an animation on a single texture image and then use texture coordinates which identify `windows` within an image to use as the source for pixels, then you could a) avoid switching to a different texture for each image, which is quite a costly operation, and b) render multiple quads within a single call to glBegin(), which is also normally a costly operation if used for single objects. Also, once you get multiple images onto a single texture you can then start using vertex arrays or vertex buffers to render multiple quads in a single function call - this reduces function call overhead and speeds up rendering. Max2D doesn't do any of this. 
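As a concrete sketch of the `windows within a texture` idea: if the frames are laid out in a grid on one texture, the texture-coordinate rectangle for any frame is simple arithmetic. A minimal C sketch (names are illustrative, not Max2D):

```c
#include <assert.h>

/* Compute the texture-coordinate rectangle (u0,v0)-(u1,v1) for frame
 * `index` of an animation laid out as a cols x rows grid on one texture,
 * with frame 0 at the top-left and frames counted row by row. */
static void frame_uv(int index, int cols, int rows,
                     float *u0, float *v0, float *u1, float *v1)
{
    float fw = 1.0f / (float)cols;   /* width  of one frame in UV space */
    float fh = 1.0f / (float)rows;   /* height of one frame in UV space */
    *u0 = (float)(index % cols) * fw;
    *v0 = (float)(index / cols) * fh;
    *u1 = *u0 + fw;
    *v1 = *v0 + fh;
}
```

These four values would be fed to the glTexCoord2f() calls at each corner of the quad; the texture stays bound the whole time, so drawing a different frame is just different numbers, not a texture switch.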
What I did in my own higher level 2D library, based on OpenGL, is to create a glMultiBegin() function which keeps track of what type of primitives are currently being defined, and if you want to define more of them it skips doing the glBegin()/glEnd() calls. Then I draw my own quads more efficiently. When I'm done I do a glMultiEnd(). Also, OpenGL has `display lists`, which are like pre-compiled rendering instructions for things you want to draw the same way many times. You can put all sorts of stuff into the list rather than it having effect immediately, and then call it later. You can put things like state changes, changes to matrices, geometry, etc. in there. It will convert the function calls into a lower level of code understandable by the hardware, ie map to the hardware interface, and then when you call the list it can skip the overhead of the API calls etc. This all helps speed things up, even in OpenGL 1.1. I think Max2D has little option but to be DirectX compatible as well, because a) there are far more Windows users with an opinion, b) on some machines DirectX is faster than OpenGL, and c) on some machines the OpenGL driver is poor, non-existent, or needs updating to be hardware accelerated. I don't see that this really holds back Max2D, since most of what you could exploit from OpenGL would be doable in DirectX also. It's more a matter of motivation or planning from BRL. |
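The state-tracking trick behind the glMultiBegin()/glMultiEnd() idea described above can be sketched in a few lines of C. The counters here stand in for real glBegin()/glEnd() calls, and every name is illustrative rather than an actual library:

```c
#include <assert.h>

/* Sketch of a batching wrapper: only issue a real glBegin()/glEnd() pair
 * when the primitive mode actually changes, so consecutive draws of the
 * same primitive type share one begin/end block. */
enum { MODE_NONE = -1 };
static int current_mode   = MODE_NONE;
static int gl_begin_calls = 0;   /* stands in for glBegin(mode) */
static int gl_end_calls   = 0;   /* stands in for glEnd()       */

static void multi_begin(int mode)
{
    if (mode == current_mode)
        return;                          /* already batching this mode */
    if (current_mode != MODE_NONE)
        gl_end_calls++;                  /* close the previous batch   */
    gl_begin_calls++;                    /* open a new batch           */
    current_mode = mode;
}

static void multi_end(void)
{
    if (current_mode != MODE_NONE) {
        gl_end_calls++;                  /* close the final batch      */
        current_mode = MODE_NONE;
    }
}
```

Drawing 1000 quads then goes from 1000 begin/end pairs down to one, with the vertex definitions unchanged in between.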
| ||
Thanks. I always wanted to use glBegin() before drawing all images and glEnd() after, to see if it speeds things up. Does using GL commands make the SetRotation() command obsolete? I s'pose it does... I know it makes SetColor() obsolete because you have to specify vertex colors. The optimisations with texture co-ords also look good. The only disadvantage is - would you lose the collision commands? Also I would need some math to know where each vertex goes if I rotate an image. I presume that vertex placement also affects scale? Maybe the translation of vertex coords to scales and rotations is already in Brl.max2d.mod. So overall, is it worth the time to go rooting through the source and make a stable system to gain some FPS?? Ah, dunno man |
| ||
If you are thinking to actually improve upon Max2D you will either have to decide to throw Max2D out and completely replace all of its parts, or, each time you do custom OpenGL calls, you'll have to store all of the current OpenGL `state` onto some stack and restore it afterwards. Max2D assumes that it's making changes to the state/settings and that they are in known conditions, so if you go changing things by directly making OpenGL calls you are likely to change something in a way that Max2D isn't expecting, resulting in it breaking and giving unpredictable results. glRotatef() I think replaces SetRotation, glScalef() would replace SetScale, and glTranslatef() would replace positioning an object by a handle position. e.g. glTranslatef() - move to the center of the object, or its handle position; glRotatef() - rotate it around the Z axis. I don't know what collision detection stuff would work with OpenGL. You may have to write your own. I would say that if you want to improve upon Max2D you need to be intent on throwing it out the window and rewriting your own OpenGL routines, your own collision detection and your own image/texture handling. You need to design a new system from the ground up to accommodate what you want it to do - rehashing something that is poorly implemented isn't going to help you much. |
| ||
True, collision seems to be the big issue; I'm not technical enough to write those routines... BTW your email address always sends emails back to sender. The easiest collision routine that I can think of is a polygon-polygon collision thingy which is already in the code archives. (You would have to create a polygon that matched the shape of the image.) |
| ||
Hey, oops, sorry, had my old email address in there. Updated it now. |
| ||
I had a look through the Red Book, it's er.. very complicated. The examples don't work either; I can only get glClear() to work. Hmm, I think I'll leave it to the pros |
| ||
Examples don't work? They're probably written in C or C++ for the most part. I recommend the OpenGL SuperBible, it's written in an easier language and is better for beginners to OpenGL - explains how to use everything plus gives a full API reference. There are also other books out there about beginning with OpenGL. Good luck. |
| ||
Thanks, I'll try again |
| ||
I did a little test and it seems that DrawImage is as fast as using OpenGL commands! In Blitz3D it's a tiny bit faster, but nothing to write home about |
| ||
DrawImage will probably be about as fast as doing the equivalent code yourself - ie just glBegin() and glEnd() filled with the definition of a quad. That's the same as what Blitz is doing, so it'll be about the same speed. But where you could make it faster is to put multiple images on the same texture and then use different texture coordinates to reference each one as you draw them all in a single glBegin()/glEnd() section. That would avoid texture swaps, multiple glBegins, and other state changes. That should make it faster than stock BlitzMax. |
| ||
Ah OK. I get a line drawn across the quad for some reason, like the triangles that make up the quad are being drawn in wireframe over the picture - dunno why |
| ||
Not sure why that is. I had that occasionally on a software OpenGL driver. Are you sure that starting in the lower left corner of your quad and proceeding anticlockwise is correct? I usually start top left. It depends how you set up your projection matrix, i.e. where you put 0,0.
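One way to check winding without guessing is the signed area (shoelace) formula: its sign flips when you reverse the vertex order, and it also flips when the projection flips the y axis, which is exactly why the `correct` direction depends on where you put 0,0. A small C sketch (illustrative helper, not GL code):

```c
#include <assert.h>

/* Signed area of a polygon given as n (x,y) pairs, via the shoelace
 * formula. In y-down screen coordinates (glOrtho(0,w,h,0,...)) a polygon
 * listed clockwise on screen comes out positive; reverse the order, or
 * flip the projection's y axis, and the sign flips. */
static float signed_area(const float *v, int n)
{
    float a = 0.0f;
    for (int i = 0; i < n; i++) {
        int j = (i + 1) % n;
        a += v[i * 2] * v[j * 2 + 1] - v[j * 2] * v[i * 2 + 1];
    }
    return 0.5f * a;
}
```

So if the quad renders wrong, print the signed area of its four corners under the projection you actually use; a sign you didn't expect means the winding is reversed for that setup.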