Introduction

If you've followed the first two parts of this tutorial, we've now talked about how to get a window on the screen, build a simple framework and code up the mechanics of the asteroids game. However, up to this point we've not really looked at how we visualise anything. This part of the tutorial will cover how we load 3D models and display them to represent the player and the asteroid entities we looked at in Part 1.

Model Formats

3D models are generally loaded from files created in other tools. For instance, the player models in Quake were often created with "QME" and then loaded into the game from MD2 files. For this tutorial we're going to look at Wavefront Object files, which generally use the extension ".obj". This format was chosen because of its popularity as an export format for modelling tools. Pretty much any modelling tool will export to OBJ format. It's also a particularly easy format to process.

Model file formats are generally documented somewhere - however, the search for a reliable and accurate specification often takes longer than writing the code to load it. A good site to try initially is http://www.wotsit.org/. They tend to have the majority of formats hanging around.

The specification/guide we're going to use for OBJ files is here. It's not a full specification but it gives us enough information to get the features we want. Note that this is an important point to realise: 3D file formats often support a great range of features, but normally you only want to utilise a small set of these, so you don't really need to cope with everything. Picking and choosing like this can dramatically reduce your development time. There are cases, of course, where a full feature set should be supported - for instance, if you happen to be writing a generic loader for a scenegraph (Xith3D, JME).

Wavefront Object (.OBJ) format

Before we get working on the loader that's going to render the models in OpenGL, we need to
be familiar with the format. Let's take a look at what our format specification says:

v x y z
The vertex command. This specifies a vertex by its three coordinates. The vertex is implicitly named by the order it is found in the file. For example, the first vertex in the file is referenced as '1', the second as '2' and so on. None of the vertex commands actually specify any geometry; they are just points in space.

vt u v [w]
The vertex texture command. This specifies the UV (and optionally W) mapping. These will be floating point values between 0 and 1 which say how to map the texture. They really don't tell you anything by themselves; they must be grouped with a vertex in an 'f' face command.

vn x y z
The vertex normal command. This specifies a normal vector. A lot of the time these aren't used, because the 'f' face command will use the order the 'v' commands are given in to determine the normal instead. Like the 'vt' commands, they don't mean anything until grouped with a vertex in the 'f' face command.

f v1[/vt1][/vn1] v2[/vt2][/vn2] v3[/vt3][/vn3] ...
The face command. This specifies a polygon made from the vertices listed. You may have as many vertices as you like. To reference a vertex you just give its index in the file; for example, 'f 54 55 56 57' means a face built from vertices 54 to 57.

So, the OBJ file format is text based and consists of four types of definition. The first defines vertices (or points in space). This is signified by a line starting with "v" followed by three numbers indicating the x, y and z coordinates of the point in space. The next definition is signified by "vt", standing for vertex texture. The two numbers following the indicator define a texture coordinate. The third definition, "vn", defines a vertex normal - a single normal that can be applied to a vertex.

The fourth and most important definition is signified by "f", standing for face. The list of numbers after the "f" indicates which vertex, vertex texture coordinate and vertex normal should be combined to form a face. For instance, the following line:

f 1/3/3 2/4/4 3/5/5

indicates that a face should be built from 3 points. The first point uses the 1st vertex defined in the file, the 3rd texture coordinate and the 3rd normal. The second point uses the 2nd vertex defined in the file, the 4th texture coordinate and the 4th normal. The final point uses the 3rd vertex defined in the file and the 5th texture coordinate and normal.
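
To make this concrete, here's a minimal, made-up OBJ file describing a single textured triangle (the values are purely illustrative):

v 0.0 1.0 0.0
v -1.0 -1.0 0.0
v 1.0 -1.0 0.0
vt 0.5 1.0
vt 0.0 0.0
vt 1.0 0.0
vn 0.0 0.0 1.0
f 1/1/1 2/2/1 3/3/1

Here all three points of the face share the single normal (index 1), while each point gets its own vertex and texture coordinate.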

Now that we have a feel for how the format works, let's consider how we're going to load the data.

Reading the OBJ

The pseudo code for loading an OBJ file looks like this:

while there are more lines to read
    read a line
    if the line starts with "v" then read the vertex and store it
    if the line starts with "vt" then read the texture coordinate and store it
    if the line starts with "vn" then read the normal coordinate and store it
    if the line starts with "f" then read the indexes and store a face based on the data read

Let's put this into real Java code then. What this code is actually going to do is just read the data and store it for rendering use. The class in question is ObjData. Take a look at the constructor; it's responsible for executing the pseudo code above:

public ObjData(InputStream in) throws IOException {
    // read the file line by line adding the data to the appropriate
    // list held locally
    BufferedReader reader = new BufferedReader(new InputStreamReader(in));
    String line;

    // readLine() returns null once we've reached the end of the
    // file, at which point we drop out of the loop
    while ((line = reader.readLine()) != null) {
        // note that we must check for "vn" and "vt" before "v",
        // since a line starting with "vn" also starts with "v"

        // "vn" indicates normal data
        if (line.startsWith("vn")) {
            Tuple3 normal = readTuple3(line);
            normals.add(normal);
        // "vt" indicates texture coordinate data
        } else if (line.startsWith("vt")) {
            Tuple2 tex = readTuple2(line);
            texCoords.add(tex);
        // "v" indicates vertex data
        } else if (line.startsWith("v")) {
            Tuple3 vert = readTuple3(line);
            verts.add(vert);
        // "f" indicates a face
        } else if (line.startsWith("f")) {
            Face face = readFace(line);
            faces.add(face);
        }
    }

    // print some diagnostic data so we can see what's happening
    // while testing
    System.out.println("Read " + verts.size() + " vertices");
    System.out.println("Read " + faces.size() + " faces");
}

So we read each line, processing it and storing each of the vertex related bits of data into lists held by the ObjData instance. A "tuple" is just a term for a group of related values. So a Tuple2 in this case is used for the pair of values making up a texture coordinate, and a Tuple3 is used for normals and vertices. The utility methods to read these tuples simply split the remainder of the line on spaces.
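
As a rough sketch, readTuple3() might look something like this, assuming Tuple3 is a simple holder class with x, y and z fields (the real code in the tutorial source may differ in detail):

private Tuple3 readTuple3(String line) throws IOException {
    // skip the identifier ("v" or "vn") and then parse the
    // three floating point values that follow it
    StringTokenizer tokens = new StringTokenizer(line, " ");
    tokens.nextToken();

    try {
        float x = Float.parseFloat(tokens.nextToken());
        float y = Float.parseFloat(tokens.nextToken());
        float z = Float.parseFloat(tokens.nextToken());

        return new Tuple3(x, y, z);
    } catch (NumberFormatException e) {
        throw new IOException(e.getMessage());
    }
}

readTuple2() would work the same way, just reading two values instead of three.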

Finally, if we read a face we're going to construct a record describing it and store that in another list. This way our ObjData simply contains all of the data stored in the OBJ file, but in a format that can be used by other Java objects easily. The creation of the face object is handled in the readFace() method:

private Face readFace(String line) throws IOException {
    StringTokenizer points = new StringTokenizer(line, " ");
    points.nextToken();

    int faceCount = points.countTokens();

    // currently we only support triangles so anything other than
    // 3 vertices is invalid
    if (faceCount != 3) {
        throw new RuntimeException("Only triangles are supported");
    }

    // create a new face to populate with the values from the line
    Face face = new Face(faceCount);

    try {
        // for each point we're going to read 3 bits of data: the
        // index of the vertex, the index of the texture coordinate
        // and the index of the normal
        for (int i=0;i<faceCount;i++) {
            StringTokenizer parts = new StringTokenizer(points.nextToken(), "/");

            int v = Integer.parseInt(parts.nextToken());
            int t = Integer.parseInt(parts.nextToken());
            int n = Integer.parseInt(parts.nextToken());

            // now we have the indices we can just add the point
            // data to the face
            face.addPoint((Tuple3) verts.get(v-1),
                          (Tuple2) texCoords.get(t-1),
                          (Tuple3) normals.get(n-1));
        }
    } catch (NumberFormatException e) {
        throw new IOException(e.getMessage());
    }

    return face;
}

First, note that the code assumes only triangles will be supported (i.e. only 3 points per face). For our purposes here we only need triangles (all the models are carefully designed to only use triangles), so to save time we've skipped anything else that the model format might support. However, to be on the safe side, if a model with more than 3 points in a face is read in we'll throw a RuntimeException - it should be pretty obvious if anything goes wrong :)

Next we read in the index for each component of each point in the face, look them up in our lists and store them in a Face object, which acts as a data record.
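
For reference, here's a minimal sketch of what such a Face record might look like (the actual class in the tutorial source may well differ):

public class Face {
    // the vertex, texture coordinate and normal data for each
    // point that makes up the face
    private Tuple3[] verts;
    private Tuple2[] texs;
    private Tuple3[] norms;
    // the number of points added so far
    private int points;

    public Face(int count) {
        verts = new Tuple3[count];
        texs = new Tuple2[count];
        norms = new Tuple3[count];
    }

    public void addPoint(Tuple3 vert, Tuple2 tex, Tuple3 norm) {
        verts[points] = vert;
        texs[points] = tex;
        norms[points] = norm;
        points++;
    }

    public Tuple3 getVertex(int p) { return verts[p]; }
    public Tuple2 getTexCoord(int p) { return texs[p]; }
    public Tuple3 getNormal(int p) { return norms[p]; }
}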

So, after the ObjData has processed the input stream containing the OBJ file, the lists in the ObjData instance have been populated with all the data for the model. This object can now be used to build the OpenGL calls required to render the model to the screen.

Rendering the Model

Now we've got all the data out of the OBJ model, we want to be able to render our triangles in OpenGL. We could simply do this in immediate mode (i.e. lots of calls to glVertex() etc at runtime) but that's not very good for performance. Since we know our models aren't going to change as we render them, we can compile an OpenGL display list containing the model, which we can then render with one command!

A display list is a list of OpenGL operations identified by a single value. A developer creates a list and then issues the commands to be contained within it. The OpenGL implementation can then optimize these commands and store them on the graphics hardware. Consider that every command you issue to OpenGL must make it to the hardware, so the more commands you have to issue, the more data has to be sent to the card, and hence the slower things happen. With a good OpenGL implementation the display list will be completely optimised onto the card, so we only have to issue a single command to get the graphics hardware to execute a whole list of commands.
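
The general lifecycle, using LWJGL's GL11 bindings as we do throughout this tutorial, looks something like this:

// allocate an ID for one new display list
int listID = GL11.glGenLists(1);

// start recording commands into the list (GL_COMPILE means
// the commands are stored for later, not executed immediately)
GL11.glNewList(listID, GL11.GL_COMPILE);
// ... issue glBegin()/glVertex3f()/glEnd() calls here ...
GL11.glEndList();

// later, each frame, replay the whole recorded list in one call
GL11.glCallList(listID);

// and when the model is no longer needed, free the list
GL11.glDeleteLists(listID, 1);

The code in this tutorial doesn't bother deleting its lists since the models are used for the life of the game, but it's worth knowing glDeleteLists() is there.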

Right, let's have a look at how we'll build the display list from our ObjData instance. It all happens in ObjModel:

public ObjModel(ObjData data) {
    // we're going to process the OBJ data and produce a display list
    // that will display the model. First we ask OpenGL to create
    // us a display list
    listID = GL11.glGenLists(1);

    // next we start producing the contents of the list
    GL11.glNewList(listID, GL11.GL_COMPILE);

    // cycle through all the faces in the model data
    // rendering a triangle for each one
    GL11.glBegin(GL11.GL_TRIANGLES);

    int faceCount = data.getFaceCount();
    for (int i=0;i<faceCount;i++) {
        for (int v=0;v<3;v++) {
            // a position, normal and texture coordinate
            // for each vertex in the face
            Tuple3 vert = data.getFace(i).getVertex(v);
            Tuple3 norm = data.getFace(i).getNormal(v);
            Tuple2 tex = data.getFace(i).getTexCoord(v);

            GL11.glNormal3f(norm.getX(), norm.getY(), norm.getZ());
            GL11.glTexCoord2f(tex.getX(), tex.getY());
            GL11.glVertex3f(vert.getX(), vert.getY(), vert.getZ());
        }
    }

    GL11.glEnd();
    GL11.glEndList();
}

First, we create a display list by calling glGenLists(). We store the ID of the list locally so we can use it later on. Next, we start building the list with glNewList() and then process the ObjData we read from the file earlier.

For each face we found in the OBJ model we create a single triangle within OpenGL's GL_TRIANGLES block. The triangle is defined by applying the normal, texture coordinate and vertex information stored in the face that we created earlier.

Once we've processed all the faces we're done, so we close off the triangle block and end the list compilation. At this point the OpenGL implementation (and related hardware) can optimize the commands and ship them over to the hardware. So, how do we actually render the model? Well, that's the easy bit:

public void render() {
    // since we compiled our display list at construction we
    // can now just call the list, causing it to be rendered
    // to the screen
    GL11.glCallList(listID);
}

We simply call our list and OpenGL executes all the operations we set up earlier (based on an OBJ model).

Utility Wrapper

Cool, now we can load models and render them in OpenGL. However, it's a little intricate to create an ObjData object, pass that into an ObjModel, and also write the code to get an InputStream. We can make it a bit easier by writing a simple static utility method that performs the common bits for us based on a simple reference to a model. Take a look at ObjLoader; this is where we've put that static method. It looks like this:

public static ObjModel loadObj(String ref) throws IOException {
    InputStream in = ObjLoader.class.getClassLoader().getResourceAsStream(ref);

    if (in == null) {
        throw new IOException("Unable to find: "+ref);
    }

    return new ObjModel(new ObjData(in));
}

First we look up the reference specified on the class path, then check that the reference was actually found. Finally we call the loading classes we've talked about above to load the data and compile it into a display list.

Now we can load models in one line. Nice!

Uses of the Model Loader

The game uses the model loader to load the models for both the asteroids and the player. However, we don't want to load the model on a per-entity basis since that could take up a lot of memory, and the model can happily be shared. So, for instance, in InGameState we load the player model like so:

shipTexture = loader.getTexture("res/ship.jpg");
shipModel = ObjLoader.loadObj("res/ship.obj");

and then we pass the model and texture into the player entity when we create it:

public void enter(GameWindow window) {
    entities.clear();

    player = new Player(shipTexture, shipModel, shotTexture);

Finally, when the Player entity is rendered we simply bind our texture (since we want our model to be textured) then call our list (which contains our polys with texture coordinates), like this:

public void render() {
    // enable lighting for the player's model
    GL11.glEnable(GL11.GL_LIGHTING);

    // store the original matrix setup so we can modify it
    // without worrying about affecting things outside of this
    // class
    GL11.glPushMatrix();

    // position the model based on the player's current game
    // location
    GL11.glTranslatef(positionX,positionY,0);

    // rotate the ship round to our current orientation for shooting
    GL11.glRotatef(rotationZ,0,0,1);

    // rotate the ship to the right orientation for rendering. Our
    // ship model is modelled on a different axis to the one we're
    // using so we'd like to rotate it round
    GL11.glRotatef(90,1,0,0);

    // scale the model down because it's way too big by default
    GL11.glScalef(0.01f,0.01f,0.01f);

    // bind to the texture for our model then render it. This
    // actually draws the geometry to the screen
    texture.bind();
    model.render();

First, we move the model into the right place based on the player's current position and orientation. Next we scale the model down (because it's a bit big to start with). Finally, we do the actual rendering: we bind the texture (so the model appears with it) with texture.bind(), and then render the model by calling model.render() (which calls the display list as we saw earlier).

Conclusion

We can now render the models for our entities as they bounce through space. We've talked about model formats, reading the data and rendering the polygons. We've also had a quick look at how the game uses the models after they've been loaded.

Links


Tutorial written by Kevin Glass
Models from Ghoulish Arts
OBJ specification from Roy Riggs