The cocos3d framework is a sophisticated 3D application development framework for the iOS platform. This document describes the framework components and provides guidelines and best practices for building cocos3d iOS applications.
cocos3d is a significant extension to cocos2d, a popular, well-designed framework for building iOS games and applications that play out in 2D (or 2.5D isometric projection). Although it is possible to start with the cocos3d Application template and develop a 3D application without knowing much about the workings of cocos2d, to get the most out of this document you should familiarize yourself with cocos2d. You can learn more about cocos2d at the cocos2d Wiki.
CC3Layer is one of two key classes that each application will subclass and customize. The other class you will subclass and customize in your application is CC3World, described below.
Although cocos3d is a full 3D modeling and rendering engine, all 3D rendering occurs within CC3Layer, a special cocos2d CCLayer subclass. Since it is a type of CCLayer, instances of CC3Layer fit seamlessly into the cocos2d CCNode hierarchy, allowing 2D nodes such as controls, labels, and health bars to be drawn under, over, or beside 3D model objects. With this design, 2D objects, 3D objects, and sound can interact with each other to create a rich, synchronized audio-visual experience. CC3Layer acts as the bridge between the 2D and 3D worlds.
You can add a CC3Layer instance anywhere in the visual node hierarchy of a cocos2d application. Although for most games the CC3Layer instance will be sized to cover the complete screen, a CC3Layer instance can be set to any size and added to a 2D parent CCNode. So, your application could have a 2D scene that contains a smaller 3D scene embedded into it via a small CC3Layer instance attached to a parent 2D CCNode.

Since CC3Layer is a type of CCNode, you can add child CCNodes to it as well, mixing 2D nodes above or below the 3D scene that is playing out within the CC3Layer. Generally, this is the way most applications will combine 2D and 3D components, with a main 3D scene overlaid with 2D controls such as joysticks, fire buttons, dashboards, etc. Any CCNode that is added to the CC3Layer using the standard addChild: method will appear overlaid on top of the 3D scene. You can also add 2D CCNodes behind the 3D action, by using the addChild:z: method with a negative Z-order. Although this is not a common requirement, it can be used to provide a 2D skybox behind the 3D action.
When customizing your application’s subclass of CC3Layer, you will typically override the following two template methods:
- initializeControls – this method is where you add your 2D controls to the CC3Layer, and otherwise generally initialize your layer. This method is invoked automatically from any of the init methods of the layer.
- update: – if you have scheduled regular updates using the standard CCNode scheduleUpdate method, this update: method will be called periodically. This is where you can update any of the 2D controls you’ve added, or pass data from dynamic controls such as sliders or joysticks to your CC3World instance. If you override this method, be sure to invoke the superclass implementation so it can pass the update notice to the CC3World.
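Putting these together, a customized layer might look like the following sketch. MyLayer is a hypothetical class name, and the 2D controls are left as comments:

```objc
// A minimal sketch of a customized CC3Layer subclass.
@interface MyLayer : CC3Layer
@end

@implementation MyLayer

-(void) initializeControls {
	// Add 2D cocos2d controls (joysticks, buttons, labels) here.
	// This method is invoked automatically from the layer's init methods.
	[self scheduleUpdate];    // request periodic update: callbacks
}

-(void) update: (ccTime) dt {
	// Read dynamic 2D controls and pass their data to the CC3World here.
	// Always invoke the superclass implementation, so the update
	// notice is passed on to the 3D world.
	[super update: dt];
}

@end
```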
Although CC3Layer forms the bridge between the 2D and 3D visual worlds, it remains primarily a cocos2d layer, and no 3D activity actually takes place within CC3Layer. All 3D activity, from model management to rendering, takes place within, and is the responsibility of, your application’s customized CC3World subclass. As with CC3Layer, there are several key template methods that you will typically override in your application’s subclass of CC3World:
- initializeWorld – this method is where you populate the 3D models of your world. This can be accomplished through a combination of instantiating model objects directly and loading them from model data files exported from a 3D editor. The CC3World instance forms the base of a structural tree of nodes. Model objects are added as nodes to this root node instance using the addChild: method.
- updateBeforeTransform: and updateAfterTransform: – if you have scheduled regular updates, these methods will be invoked automatically as part of the scheduled update, respectively before and after the transformation matrices of the nodes of the world have been recalculated, and before and after the same methods are invoked on the descendants of your CC3World.
- nodeSelected:byTouchEvent:at: – if your application is configured to allow the user to select 3D nodes using touch events, this template callback method will automatically be invoked when a touch event occurs. See the section on touch events for more information about handling the selection of 3D nodes by user touch events.
As the names imply, the difference between the two update methods is that updateBeforeTransform: is invoked before the transformation matrix of the node is recalculated, whereas updateAfterTransform: is invoked after. Therefore, if you want to move, rotate or scale a node, you should do so in the updateBeforeTransform: method, to have your changes automatically applied to the transformation matrix of the node.
However, the global transform properties of each node (globalLocation, globalRotation, and globalScale) are determined as part of the calculation of the transformation matrix of that node. Therefore, if you want to make use of the current global properties, you should do so in the updateAfterTransform: method. The global properties of a node can be used to test for collisions, end conditions of movements, etc.
Sometimes, you may find the need to change the location, rotation, or scale properties of a node in the updateAfterTransform: method, for example as part of collision detection and reaction. If you do, you should invoke the updateTransformMatrices method on the top-most node that is affected, to have those changes immediately applied to the transformation matrix.
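This division of responsibilities can be sketched as follows. The node name and rotation rate are hypothetical, and the signatures assume the update callbacks receive the time interval since the last update:

```objc
// Spin a node before transforms are calculated, so the change is
// automatically applied to its transformation matrix this frame.
-(void) updateBeforeTransform: (ccTime) dt {
	CC3Node* rotor = [self getNodeNamed: @"Rotor"];   // hypothetical node
	rotor.rotation = CC3VectorAdd(rotor.rotation, cc3v(0.0, 30.0 * dt, 0.0));
}

// Inspect global properties after transforms have been calculated.
-(void) updateAfterTransform: (ccTime) dt {
	CC3Node* rotor = [self getNodeNamed: @"Rotor"];
	if (rotor.globalLocation.y < 0.0) {
		// react to a collision or end condition here
	}
}
```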
You do not need to do anything in either update method for 3D objects that are behaving predictably, such as those on trajectories, or those controlled by CCActions. Their behaviour is handled by the nodes and actions themselves.
In addition to these main template methods, you may find the following CC3World methods useful during operation, or simply useful to understand:
- addChild: – familiar from cocos2d, you can add child nodes (CC3Nodes) to your 3D world.
- addContentFromPODFile: – along with related methods, a convenience for loading POD files directly into the CC3World. These methods are added by the PVRPOD category if your application uses that module for loading 3D models from POD formatted files.
- getNodeNamed: – retrieves the node with the specified name that was previously added to the 3D world. Useful for grabbing hold of a node that was added as part of a file load.
- activeCamera – this property returns (or sets) the 3D camera that is viewing the objects in the 3D world. You do not need to set this property. It will be automatically set to the first camera node added via one of the add... methods (even if the camera is buried deep within a loaded node hierarchy). However, if you have multiple cameras, you can flip between them by setting this property.
- ambientLight – the color of the ambient light of the 3D world. This is independent of any distinct lights that are added as child nodes.
- createGLBuffers – this method causes all vertex data held by contained mesh nodes to be buffered to vertex buffer objects (VBOs) in the GL engine, usually into hardware buffers accessible to the GPU. This is an optional step, but highly recommended for improving performance. You can also invoke this method at any level in the node structural hierarchy if you want to load some, but not all, mesh data in the 3D world into VBOs.
- releaseRedundantData – after the createGLBuffers method has been invoked, this method can be used to release the data in main memory that is now redundant for meshes that have been buffered to the GL engine. You can also invoke this method at any level in the node structural hierarchy if you want to release some, but not all, mesh data from main memory, and you can exempt an individual vertex array from releasing its data by setting the appropriate property to NO on that vertex array.
- updateTransformMatrices – if you make changes to the location, rotation or scale properties of a node within the updateAfterTransform: method, those changes will not automatically be applied to the transformation matrix of the node, since it has already been calculated by the time the updateAfterTransform: method is invoked. However, you can use the updateTransformMatrices method to have those changes immediately applied to the transformation matrices of a node and all its descendants.
- play & pause – these enable and disable the operation of the update: method. When the CC3World is paused, the updateBeforeTransform: and updateAfterTransform: methods are skipped, along with updates of all other 3D nodes.
- cleanCaches – automatically invoked during low memory conditions within the application. This gives you the chance to dump any unneeded resources, for example, 3D objects that are far away, not part of this scene, etc.
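Several of these methods typically come together in initializeWorld. The following sketch assumes the PVRPOD module is included; the file and node names are hypothetical:

```objc
-(void) initializeWorld {
	// Load a scene exported from a 3D editor as a POD file.
	[self addContentFromPODFile: @"myScene.pod"];

	// Grab hold of a node that was added as part of the file load.
	CC3Node* hero = [self getNodeNamed: @"Hero"];
	hero.location = cc3v(0.0, 0.0, -5.0);

	// Buffer the mesh data into GL VBOs, then release the copies
	// in main memory that are now redundant.
	[self createGLBuffers];
	[self releaseRedundantData];
}
```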
Many of the responsibilities handled by
CC3World are divided by the implementation into private template methods that you can override in your subclass, to customize behaviour in a modular way.
All the objects in the 3D world, including models, cameras, lights, and the CC3World itself, are known as nodes. CC3Node is the base of the 3D node class hierarchy. Nodes can be assembled into structural assemblies using parent/child relationships. Moving, rotating, or hiding a node moves, rotates, or hides all of its descendants in concert.
This design will no doubt feel familiar to you as being analogous to CCNodes in cocos2d, and the two node hierarchies do follow the same design pattern. However, CCNodes and CC3Nodes cannot be mixed in the same structural assembly, primarily because CC3Nodes must keep track of location, rotation and scaling in three dimensions instead of two. Nevertheless, the structural concepts between the two node families are consistent.
You assemble 3D nodes using the addChild: method. All nodes have an identifying tag and can have a name. You can retrieve a specified node in an assembly with the getNodeNamed: method.

All nodes have location, rotation, and scale properties (plus a couple of others). You move, rotate, and scale CC3Nodes by setting these properties. And, again, as with cocos2d nodes, the values of these properties are measured relative to the node’s parent node.
You can override the updateBeforeTransform: or updateAfterTransform: methods of a node subclass that behaves predictably, such as one following a trajectory, to update these transform properties. See the discussion of CC3World above regarding the difference between these two methods.
So, the wheels of a car can be child nodes on a car node. Each wheel node can rotate on its axis, and move up and down relative to the car as if on suspension via the
location property of each wheel node, and all the while, the whole car assembly may be travelling and bouncing down a dirt road simply by manipulating the
location property of the car node.
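The car example can be sketched as a node assembly. The body and wheel nodes are assumed to exist already, created directly or loaded from a file:

```objc
// Assemble a car from child nodes. Wheel locations are measured
// relative to the parent car node.
CC3Node* car = [CC3Node nodeWithName: @"Car"];
[car addChild: body];
[car addChild: frontLeftWheel];
frontLeftWheel.location = cc3v(-1.0, 0.4, 1.6);

// Moving the parent moves the entire assembly in concert.
car.location = cc3v(0.0, 0.0, -20.0);
[self addChild: car];    // add the assembly to the CC3World
```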
Any node can be the target of a
CCAction, to simply and easily control the movement and behaviour of the node in sophisticated ways. In addition, any node may be configured to respond to finger touch events by the user. See the sections on actions and touch events for more information on this interactivity.
You can use the createGLBuffers method to cause all vertex data held by contained mesh nodes to be buffered to VBOs in the GL engine. Generally, you should invoke this method at the highest level in the structure, the CC3World, but you can also invoke it at any level in the node structural hierarchy if you want to load some, but not all, mesh data in the 3D world into VBOs. Once data is loaded into GL VBOs, you can use the releaseRedundantData method to release the data from your main application memory.
CC3MeshNode puts the ‘3’ in 3D. An instance of CC3MeshNode contains the 3D mesh data for a 3D object, plus the material, texture, or solid color that covers the surface of the object. Each CC3MeshNode instance can be covered in only a single material, texture, or color. However, like any node, mesh nodes can be assembled into a node structure. Moving the parent node will move all of the child mesh nodes in concert, along with their materials and textures. So, a multi-colored beach-ball could be one parent node and several child mesh nodes, each corresponding to a differently colored panel on the beach-ball. The parent ball node can be moved, rotated and scaled, and all component mesh nodes will be affected in unison.
Materials, Textures and Colors
The visible characteristics of the surface of a mesh node are determined by either the material it is covered with, or the pure, solid color it is painted with. As mentioned above, each mesh node can be covered with only one material, or one solid color.
The mesh node holds an instance of a
CC3Material to describe the visual characteristics of the surface of the mesh.
CC3Material includes properties to set the various coloring characteristics of the surface, including the ambient, diffuse and specular reflective colors, the emissive color, and the surface shininess. It also includes properties to determine how the colors should blend with the colors from objects behind the mesh node, permitting effects such as translucency.
In addition to basic coloring, each
CC3Material instance can hold an instance of
CC3Texture to cover the surface of the mesh with a texture image.
The combination of coloring properties, blending properties, and textures interact with lighting conditions to create complex and realistic surface visual characteristics for the mesh.
There are two mechanisms for changing the coloring and opacity of a material covering a mesh node.
- To achieve the highest level of detail, accuracy and realism, you can individually set the explicit coloring and blending properties, including the sourceBlend and destinationBlend properties. This suite of properties gives you the most complete control over the appearance of the material and its interaction with lighting conditions and the colors of the objects behind it, allowing you to generate rich visual effects.
- At a simpler level, CC3Material also supports the cocos2d <CCRGBAProtocol> protocol. You can use the color and opacity properties of this protocol to set the most commonly used coloring and blending characteristics simply and easily. Setting the color property changes both the ambient and diffuse colors of the material in tandem. Setting the opacity property automatically sets the source and destination blend functions to appropriate values for the opacity level. By using the color and opacity properties, you will not be able to achieve the complexity and realism that you can by using the more detailed properties, but you can achieve good effect with much less effort. And by supporting the <CCRGBAProtocol> protocol, the coloring and translucency of nodes with materials can be changed using standard cocos2d tint and fade actions, making it easier for you to add dynamic coloring effects to your nodes.
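As a sketch of the two levels of control, the simple CCRGBAProtocol properties and the detailed coloring properties can be set directly on the material. The ball node is hypothetical, and the property names follow the cocos3d coloring properties described above:

```objc
CC3Material* mat = ball.material;

// Simple level: sets ambient and diffuse colors in tandem, and
// configures the blend functions to suit the opacity.
mat.color = ccc3(255, 64, 64);
mat.opacity = 128;               // semi-transparent

// Detailed level: individual coloring properties for finer control.
mat.specularColor = kCCC4FWhite;
mat.shininess = 75.0;
```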
If the mesh node does not contain a material, it will be painted with the color defined in the
pureColor property. This color is painted as is, pure and solid, and is not affected by the lighting conditions. In most cases, this looks artificial, and is not recommended for realistic scene coloring. But in some circumstances, such as cartoon effects, it may be useful.
Each CC3MeshNode holds its mesh data in a CC3MeshModel, which in turn holds the raw vertex data in several CC3VertexArray instances. The management of mesh data is spread across these three classes (CC3MeshNode, CC3MeshModel and CC3VertexArray) to enable data reuse and reduce memory requirements. A single mesh model instance can be used in any number of individual mesh nodes, each covered with a different material, and placed in a different location. Whether you’re talking about hordes of zombies, or a twelve-place dinner setting laid out on a table, a single mesh model instance, with one copy of the raw vertex data, can be reused in any number of similar nodes.
Each mesh model instance (typically an instance of the concrete subclass CC3VertexArrayMeshModel) holds several CC3VertexArray instances, one for each type of vertex data, such as vertex locations, normals, vertex colors, texture coordinate mapping, and vertex indices. And reuse can be applied at this level as well. A single vertex array instance can be attached to many mesh models. So, if you need two teapot meshes, one textured, and one painted a solid color, you can use two separate instances of CC3VertexArrayMeshModel, each containing the same instances of the vertex location and vertex normal arrays (CC3VertexLocations and CC3VertexNormals, respectively), but only the mesh model for the textured teapot would also contain a texture coordinate vertex array (CC3VertexTextureCoordinates) instance. With this arrangement, there is only ever one copy of the underlying vertex data.
CC3LineNode & CC3PlaneNode
CC3LineNode and CC3PlaneNode are specialized subclasses of CC3MeshNode that simplify the creation and drawing of lines and planes, respectively, using vertex arrays.
In order to allow you to create hordes of invading armies, the CC3Node class supports the <NSCopying> protocol. Duplicating a node is simply a matter of invoking the copy method on that node. In addition, there is a copyWithName: method that duplicates the node and gives the new copy its own name.
Copying a CC3Node creates a deep copy. This means that not only is the node itself copied, but copies are created of most components of the node, including the material, and any descendant child nodes. This allows the properties and structure of the duplicate node to be changed separately from those of the original node. For instance, the new node can be positioned, rotated, scaled, colored, or assigned a different texture from that of the original. Similarly, the child nodes of the duplicate may be modified separately, without affecting the original, or any other copies. You can copy an automobile node, and remove one of the wheels of the duplicate, while retaining all four wheels on the original. If a deep copy were not performed, changing parameters in the duplicate node, or its children, would result in the same changes appearing in all other copies of the original node.
There is one big exception to this deep copying. Since mesh data is for the most part static, and is designed to be shared between instances of
CC3MeshNode, mesh data is not copied, but is automatically shared between the original node and all its copies. Specifically, when an instance of
CC3MeshNode is copied, a copy is made of the encapsulated
CC3Material instance and is retained by the duplicate node, but the encapsulated
CC3MeshModel is not duplicated. Instead, the single
CC3MeshModel instance is retained by both the original node and the duplicate, and is henceforth shared by both nodes. This shallow-copying of the mesh data ensures that only one copy of the mesh data appears in the device memory.
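A horde, then, can be sketched by duplicating a template node. The node names are hypothetical:

```objc
// Duplicate a fully assembled node. The copy is deep, but the
// underlying mesh data remains shared with the original.
CC3Node* original = [self getNodeNamed: @"Zombie"];
CC3Node* duplicate = [original copyWithName: @"Zombie-2"];
duplicate.location = cc3v(4.0, 0.0, 0.0);   // reposition independently
[self addChild: duplicate];
```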
The following additional rules apply to duplicating a node:
- The tag property is not copied. The duplicate node is assigned a new unique value for its tag property. This ensures that the tag property is unique across all nodes, including duplicates, and allows you to identify a duplicate as distinct from the original, even if the name property is left the same.
- The duplicate will initially have no parent. That will automatically be set when the duplicate node is added as a child to a parent node somewhere in the world. This applies to the specific node on which the copyWithName: method was invoked. Any descendants of this node will be assigned to their parents after they are copied, so that the overall structure of the node assembly below the invoking node will be replicated in its entirety.
- Like mesh data, underlying textures are not duplicated (although the CC3Texture instance itself is). Nor is node animation data duplicated.
If you create your own subclass of CC3Node, or of any of its existing subclasses, and your new subclass adds state, you must implement the populateFrom: method, invoking the superclass implementation as part of it, to ensure that state is transferred correctly from the original node to the duplicate whenever an instance of your node class is duplicated.
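A sketch of such a subclass follows. SpinningNode and its spinRate state are hypothetical:

```objc
@interface SpinningNode : CC3Node {
	GLfloat spinRate;    // state added by this subclass
}
@property(nonatomic, assign) GLfloat spinRate;
@end

@implementation SpinningNode
@synthesize spinRate;

// Invoked automatically when the node is copied. Invoke the superclass
// implementation, then transfer the state added by this subclass.
-(void) populateFrom: (CC3Node*) another {
	[super populateFrom: another];
	spinRate = ((SpinningNode*) another).spinRate;
}
@end
```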
These are special nodes that represent the camera and lights in the 3D world. Cameras and lights are directional, and are examples of CC3TargettingNode, which, in addition to being able to move and rotate like other nodes, can be assigned a targetLocation at which to point. A target can be any other CC3Node in the scene, and the camera or light can be told to track the target as it moves.
If you’ve used cocos2d, you’re no doubt familiar with the family of CCActions, used to control the movement, coloring, visibility, and activities of CCNodes. CC3Nodes can similarly be manipulated and controlled by CCActions. Some features, such as tinting and fading, can be manipulated by standard cocos2d actions. However, because the 3D coordinate system is different from the 2D coordinate system, specialized 3D versions of movement, rotation, scaling, and animation actions are required. This family of 3D actions can be found as subclasses of the base CC3TransformTo interval action. In addition, there are several cocos3d action subclasses that handle material-tinting.
The 3D family of actions can, of course, be used with standard cocos2d combination actions such as the family of ease actions, sequence actions, etc.
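For example, a 3D move and rotate can be sequenced with a standard cocos2d CCSequence. The action class names here (CC3MoveTo, CC3RotateBy) follow the cocos3d 3D action family; treat the exact signatures as a sketch:

```objc
// Move a node, then spin it half a turn about its Y-axis.
CCActionInterval* move = [CC3MoveTo actionWithDuration: 2.0
                                                moveTo: cc3v(0.0, 0.0, -10.0)];
CCActionInterval* spin = [CC3RotateBy actionWithDuration: 2.0
                                              rotateBy: cc3v(0.0, 180.0, 0.0)];
[aNode runAction: [CCSequence actions: move, spin, nil]];
```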
Any CC3Node can be animated with key-frame animation data held in a CC3NodeAnimation instance within the CC3Node instance. Typically, this animation data is loaded from the 3D model files exported from your 3D editor, but you can also assemble this data programmatically using the simple CC3ArrayNodeAnimation subclass of CC3NodeAnimation. Like mesh data, CC3NodeAnimation instances are stateless, and each may be shared by multiple CC3Node instances, so that animation data need not be duplicated redundantly.
Any or all of the transformation properties location, rotation, and scale may be animated using CC3ArrayNodeAnimation. If you need to animate other characteristics of your nodes, you can subclass these classes to add additional animation capabilities.
Once animation data is associated with a
CC3Node, the node can be animated with repeated invocations of the
establishAnimationFrameAt: method on the node itself, passing in a frame time, which is a floating point value between zero and one, with zero representing the first animation frame, and one representing the last animation frame.
In order to perform smooth animation, by default, the
CC3NodeAnimation will interpolate animation data between frames if the timing value passed in through the
establishAnimationFrameAt: method is between actual frame times. This is controlled by the
shouldInterpolate property on the
CC3NodeAnimation instance. By default, this property is set to
YES, but may be set to
NO if heavy interpolation interferes with frame rates.
For node assemblies, you will typically want to animate the whole assembly in concert. To support this, the
establishAnimationFrameAt: method automatically invokes the same method on each child node, passing the same animation frame time to each child node. Thus, animating a character, by invoking that method on the character node itself, will automatically forward the same invocation to each component child node of the character, such as the character’s limbs, or any weapon the character might be holding. As a result, all components of the character will move in sync with the character, as it is animated.
However, you can turn animation of any node in the assembly off, including selectively turning off animation of specific child nodes, via the disableAnimation method on that child node. The animation of a whole sub-assembly of nodes can similarly be turned off.
Fractional animation of individual movements is possible by simply limiting the range of frame times that are sent with the
establishAnimationFrameAt: method. For example, if your character animation includes a run frame sequence and a jump frame sequence, you can invoke one or the other by simply starting and ending your animation at the frame times associated with the specific movement you want to invoke.
Controlling Animation with Actions
Although you can arrange to invoke the
establishAnimationFrameAt: method of your animated
CC3Node directly, in most cases it is much easier to use an instance of
CC3Animate to control the animation of a node.
CC3Animate is a type of
CCActionInterval, and will run through the animation frames automatically over a configurable time duration. Moreover, when instantiating the
CC3Animate, you can restrict the frames to a particular range, to allow fractional animation of specific movements.
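A sketch of animating a character node this way, assuming the animation data was loaded with the model:

```objc
// Run the node's key-frame animation over 1.5 seconds, repeating
// indefinitely. The CC3Animate action invokes
// establishAnimationFrameAt: on the node for you.
CCActionInterval* walk = [CC3Animate actionWithDuration: 1.5];
[character runAction: [CCRepeatForever actionWithAction: walk]];
```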
To allow the user to interact with the objects in your 3D world, you can enable the selection of 3D objects using standard iOS finger touch events.
The first step in enabling touch activity is to set the isTouchEnabled property of the CC3Layer to YES. Typically you will do this in the initializeControls method of your customized CC3Layer class. This causes the CC3Layer to register itself to receive touch events from iOS.
As such, your CC3Layer will receive and process kCCTouchBegan, kCCTouchEnded, and kCCTouchCancelled events. By default, your CC3Layer will not receive or process kCCTouchMoved events, because these are both quite voluminous and seldom used. However, you can configure your customized CC3Layer subclass to receive and handle kCCTouchMoved events by copying the commented-out ccTouchMoved:withEvent: method from the CC3Layer implementation and pasting it into the implementation of your customized CC3Layer subclass, with the commenting removed.
The second step is to set the isTouchEnabled property to YES on any CC3Nodes that you want the user to be able to select with a touch event. For example, if your 3D world had a bowl of fruit on a table, you might want the user to be able to select a piece of fruit, but not the bowl or the table. In this case, you would set the isTouchEnabled property to YES on the CC3Node instances representing each piece of fruit, but not on the CC3Node instances that represent the bowl or table. Please also note that, to be touchable, a node must have its visible property set to YES.
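The two steps can be sketched as follows; the fruit node name is hypothetical:

```objc
// Step 1: in your CC3Layer subclass (typically in initializeControls),
// register the layer for touch events.
self.isTouchEnabled = YES;

// Step 2: in your CC3World subclass, mark the selectable nodes.
CC3Node* apple = [self getNodeNamed: @"Apple"];
apple.isTouchEnabled = YES;    // the bowl and table are left untouchable
```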
Once these two steps have been completed, the nodeSelected:byTouchEvent:at: callback method of your customized CC3World subclass will automatically be invoked on each touch event. This callback includes the CC3Node instance that was touched, the type of touch, and the 2D location of the touch point, in the local coordinates of the CC3Layer.
If the touch occurred at a point under which there is no touchable CC3Node, the nodeSelected:byTouchEvent:at: callback method will still be invoked, but the node will be nil, to indicate that a touch event occurred, but not on a touchable node.
For node assemblies, the node passed to the nodeSelected:byTouchEvent:at: method will not necessarily be the individual component or leaf node that was touched. Instead, it will be the closest structural ancestor of the leaf node that has its isTouchEnabled property set to YES.
For example, if the node representing a wheel of a car is touched, it may be more desirable to identify the car as the object of interest, instead of the wheel. In this case, setting the isTouchEnabled property to YES on the car, but leaving it as NO on each wheel, will allow a wheel to be touched, but the node received by the nodeSelected:byTouchEvent:at: callback will be the node that represents the car as a whole.
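A sketch of the callback in your CC3World subclass; the parameter types shown follow the description above and should be treated as assumptions:

```objc
// Invoked automatically on each touch event. aNode is the closest
// touch-enabled ancestor of the node actually touched, or nil if the
// touch missed all touchable nodes.
-(void) nodeSelected: (CC3Node*) aNode
        byTouchEvent: (uint) touchType
                  at: (CGPoint) touchPoint {
	if ( !aNode ) return;
	if (touchType == kCCTouchBegan) {
		NSLog(@"Selected node: %@", aNode.name);
	}
}
```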
When using translucent nodes, you might notice that when a translucent node is displayed over the raw background color of the
CC3Layer, it will appear to flicker slightly when a touch event occurs. This is a side effect of the mechanism used to identify the 3D node that is under the 2D touch point.
The selection mechanism uses a color-picking algorithm, which momentarily paints each node with a unique color and then reads the color at the touch point to identify the node. The scene is then immediately drawn a second time with proper coloring. Since this second rendering occurs within the same rendering frame as the first, it completely overwrites the first, and the user is completely unaware that the first pass occurred.
The only exception to this is when a translucent node has no opaque nodes behind it. In this case, because the underlying layer background color is not redrawn on the second rendering pass, some of the solid coloring used to paint the node during the selection rendering pass will “leak through” the transparency of the second rendering pass. In effect, for one frame, the translucent node does not have the layer background color behind it, causing a momentary flicker.
It is important to realize that this effect does not occur when the translucent node has opaque nodes behind it in the 3D scene. This is because the opaque background nodes will be redrawn as well, and it will be these background nodes that will be seen through the translucent node, as they should be.
Therefore, if you have translucent nodes and are using touch selection, to avoid this slight flicker be sure to include an opaque skybox node at the back of your 3D scene, over which the other nodes of your scene will be drawn.
The projectedLocation property of a CC3Node holds the location of the 3D node in the 2D coordinate system of the window. It acts as a bridge between the 3D coordinate system and the 2D coordinate system. Knowing the projectedLocation of a 3D node allows you to relate it to 2D controls such as a targeting reticle, or to a touch event.
The projectedLocation is a 3D vector. As you would expect, the X- and Y-components provide the location of the 3D node on the screen, in the 2D coordinate system of the CC3Layer. If the 3D node is located somewhere out of the view of the camera, and therefore not displayable within the layer, these values will be outside the range given by the contentSize of the CC3Layer. Following from this, either coordinate value may be negative, indicating that the node is located to the left of, or below, the view of the camera.
The Z-component of the projectedLocation contains the straight-line distance between the 3D node and the camera, as measured in the coordinates of your 3D world. This value may be negative, indicating that the 3D node is actually behind the camera. In most cases, of course, you will be interested in nodes that are in front of the camera, and the Z-component of the projectedLocation can help you identify that.
CC3Node also supports the related projectedPosition property. It is derived from the projectedLocation property and, for the most part, contains the same X- and Y-coordinates as projectedLocation, but as a 2D CGPoint, which is immediately usable by the cocos2d framework.
However, as a 2D point, the projectedPosition lacks the ability to distinguish whether the node is in front of, or behind, the camera. Since this information is almost always needed, the point is encoded so that, if the 3D node is actually behind the plane of the camera, both the X- and Y-components of the projectedPosition will contain the large negative value -CGFLOAT_MAX. If you use projectedPosition, you can test for this value to determine whether the 3D node is in front of, or behind, the camera.
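A sketch of that test, relating a node to a hypothetical 2D reticle sprite:

```objc
CGPoint pos = aNode.projectedPosition;
if (pos.x > -CGFLOAT_MAX) {
	reticle.position = pos;      // node is in front of the camera
	reticle.visible = YES;
} else {
	reticle.visible = NO;        // node is behind the camera
}
```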
For most nodes, these properties are not calculated automatically. In the update: method of your CC3World you can have them calculated for a node of interest by passing the node to the projectNode: method of the active camera. However, for CC3Billboard nodes, the projectedLocation and projectedPosition properties are calculated automatically on each update. A CC3Billboard is a 3D node that can hold an instance of a 2D cocos2d CCNode, and display that 2D node at the projectedPosition of the CC3Billboard node. Since the 2D node is drawn in 2D, it always appears to face the camera, and is always drawn over all 3D content. The contained
CCNode can be any cocos2d node, and it can be configured to automatically scale to the correct perspective sizing, shrinking as the
CC3Billboard moves away from the camera, and growing as the
CC3Billboard approaches the camera. A common use of
CC3Billboard is to display information about a 3D node, such as a textual name label or a health-bar for a game character.
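As a sketch, attaching a name label to a game character might look something like the following (the billboard property shown is an assumption; check the CC3Billboard header for the exact API in your cocos3d version):

```objc
// Sketch: display a 2D label that tracks a 3D character node.
CCLabelTTF* nameLabel = [CCLabelTTF labelWithString: @"Player 1"
                                           fontName: @"Arial"
                                           fontSize: 18];
CC3Billboard* nameplate = [CC3Billboard nodeWithName: @"Nameplate"];
nameplate.billboard = nameLabel;     // assumed property holding the 2D CCNode
[playerNode addChild: nameplate];    // rides along with the 3D character node
```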
Sophisticated 3D games are dependent on loading 3D models from resources created in 3D editors such as Blender, Cheetah3D, Maya, 3ds Max, and Cinema 4D. Since the number of possible file formats is significant, and can evolve, the loading of model data is handled by pluggable loaders and frameworks. When building your application with cocos3d, you only need to include the loaders that you use.
The pluggable loading framework is based around two template classes:
- The resource class, which takes care of loading and parsing the 3D data file, creating instances of nodes, mesh models, vertex arrays, materials, textures, cameras, lights, etc., and assembles them into a node hierarchy. This will be a subclass of
CC3Resource, and is tailored to parsing and assembling a specific data file format.
- The resource node class. This will be a subclass of
CC3ResourceNode, and as such, is actually a type of
CC3Node. It wraps the
CC3Resource instance, extracts populated nodes from it, and adds them as child nodes to itself. This node can be used like any other
CC3Node instance, and is usually simply added as a child to your
CC3World instance. You can move, rotate, scale, or hide all the components loaded from a file simply by manipulating the properties of the
CC3ResourceNode instance.
In addition to these two template classes, a pluggable loading package will usually contain specialized subclasses of
CC3Camera, etc., in order to set the properties easily from the data loaded from the file. But once created, these objects behave like any other node of their type.
A pluggable loading package may also add categories to base classes as a convenience. Thus, the PowerVR
POD file loading package adds a category to
CC3World that adds the method
addContentFromPODResourceFile: as a convenience for loading POD files directly into your
CC3World.
The initial loading package available with cocos3d is for PowerVR POD files.
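For instance, loading a POD file can be as simple as the following in your CC3World subclass, using the convenience category method mentioned above (the file name is a placeholder):

```objc
// Sketch: load a PowerVR POD file directly into the world.
// "MyScene.pod" is a placeholder file name.
-(void) initializeWorld {
	[self addContentFromPODResourceFile: @"MyScene.pod"];
}
```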
In a 3D world, the camera is always pointing in some direction, and generally, is only seeing a fraction of the objects in the 3D world. It is therefore a waste of precious time and resources to try to draw all of these objects that will not be seen.
The task of determining which objects don’t need to be drawn, and then removing them from the rendering pipeline, is known as culling. There are two types of culling: frustum culling, which is the removal of objects that are not within the field of view of the camera, and occlusion culling, which is the removal of objects that are within the field of view of the camera, but can’t be seen because they are visually blocked by other objects. For example, they might be located in another room in your map, or might be hiding behind a large rock. At this time, cocos3d does not perform any kind of occlusion culling.
cocos3d performs frustum culling automatically. Objects that are outside the view of the camera will not be drawn. This is accomplished by specifying a bounding volume for each mesh node. Each node holds onto an instance of a
CC3NodeBoundingVolume, which it delegates to when determining whether it should draw itself to the GL engine. All mesh nodes in cocos3d have a default bounding volume that is reasonably accurate, and for the most part, you won’t even need to think about bounding volumes. However, we’ll describe the operation of bounding volumes here, for those situations where the application may want to improve on the default behaviour.
The trick with bounding volumes is to strike a balance between the time it takes to determine whether a node should be rendered, and the time it would take to simply blindly render the object. To that end, different kinds of bounding volumes can be specified, some of which are very fast, but perhaps less accurate, and others that are very accurate, but time consuming.
Accuracy is important because an inaccurate bounding volume can either result in the effort of rendering an object being wasted when the object won’t be seen, or worse, an object might be culled when, in fact, at least part of it would be seen if it were drawn. This second type of inaccuracy, over-zealous culling, results in objects strangely popping in and out of existence, particularly around the edges of the camera’s field of view. Since this behaviour is generally undesirable, you should stay away from creating bounding volumes that are overly zealous. The bounding volumes included by default in cocos3d always ensure that all vertices are included within the bounding volume, so that objects are never culled when they are actually partially visible.
The family of cocos3d bounding volumes includes a number of different subclasses, each performing a different type of boundary test. The two most commonly used are a spherical bounding volume, represented by subclasses of
CC3NodeSphericalBoundingVolume, and axially-aligned-bounding-box (AABB) bounding volumes, represented by subclasses of
CC3NodeBoundingBoxVolume.
Checking spherical boundaries against the camera frustum is very fast. However, for most non-spherical real-world 3D objects, such as a human character or an automobile, a sphere that completely envelopes the object is much larger than the object itself. Near the periphery of the camera’s view, the sphere may therefore be partially inside the frustum while the object is not, with the result that the object will be rendered (because the sphere can be “seen” by the camera), even though the object itself will not be seen.
Bounding boxes are often more accurate for many real-world objects, but are more expensive to test against the camera frustum, because all eight points that define the volume must be tested against the frustum.
At the extreme end of the scale, the most accurate bounding volume would be to test every single vertex in the mesh against the camera’s frustum. By definition, this will result in almost perfect accuracy, but at the large cost of testing all the vertices. In many cases, since the GPU can generally do this faster than the CPU, there is no cost savings in testing all vertices. For this reason, cocos3d does not yet include a default all-vertex bounding volume test. However, such a bounding volume could easily be added and used by the application.
Bounding volumes, whether point, spherical, box, or custom, are calculated and defined automatically from the vertices in the mesh node. In addition, they scale automatically as the node is scaled.
Finally, to make use of both fast boundary tests to exclude objects that are clearly far from the camera’s field of view, and also more accurate boundary tests for objects that are at the edges of the camera’s field of view, cocos3d also includes the
CC3NodeTighteningBoundingVolumeSequence bounding volume. Instances of this class hold an array of other bounding volumes, and test the node against them in sequential order. As soon as one contained bounding volume indicates that the node is definitely outside the camera’s field of view, the node is rejected and not drawn.
The trick is to order the contained bounding volumes so that fast, but broad, tests are performed early in the sequence, so they can reject the node if it is clearly nowhere near the camera’s field of view. Subsequent bounding volumes in the sequence should be less broad, but will not be tested unless the earlier bounding volumes have accepted the node as being visible.
By default, mesh nodes in cocos3d make use of a tightening sequence that contains first a spherical bounding volume and then an AABB bounding volume. For the most part, this will likely be sufficient, and you won’t even have to think about bounding volumes. However, some applications may benefit from custom bounding volumes.
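As a sketch, assembling such a tightening sequence by hand might look roughly like this (the factory and add methods shown are assumptions; the default bounding volumes created by cocos3d already provide this arrangement):

```objc
// Sketch: a sphere test first (fast, broad), then an AABB test (slower,
// tighter). Method names are illustrative; check the bounding volume headers.
CC3NodeTighteningBoundingVolumeSequence* bvSeq =
		[CC3NodeTighteningBoundingVolumeSequence boundingVolume];
[bvSeq addBoundingVolume: [CC3NodeSphericalBoundingVolume boundingVolume]];
[bvSeq addBoundingVolume: [CC3NodeBoundingBoxVolume boundingVolume]];
myMeshNode.boundingVolume = bvSeq;   // assumes a settable boundingVolume property
```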
The underlying OpenGL ES engine uses a pipeline of commands for manipulating the state of the rendering engine. The switching of some states can be a costly exercise. Consequently, it is often useful for the application to consider the order in which objects are rendered to the engine. For example, if several objects make use of the same texture, it is usually beneficial to draw all of these objects one after the other, before drawing an object that is covered with a different texture.
To that end, the drawing order of the objects within a
CC3World can be configured by the application for optimal performance. The
CC3World instance holds onto an instance of a
CC3NodeSequencer, and delegates the ordering of nodes for drawing to that sequencer. If no node sequencer is supplied to the CC3World, the world will draw nodes hierarchically, following the structural hierarchy of its children. In most cases, this will not be the optimal ordering.
As with the bounding volumes discussed above, there are a number of different types of node sequencers, and establishing a different ordering is simply a matter of plugging and configuring a different sequencer into your
CC3World instance.
There are two key types of sequencers,
CC3NodeArraySequencers, which hold a collection of nodes, grouped in some defined order, and
CC3BTreeNodeSequencers, which hold a B-tree of other sequencers, allowing groupings of groupings of nodes, ad infinitum. You can create a complex sequence definition by assembling different types of sequencers into a structural B-tree hierarchy.
Every sequencer contains an instance of a
CC3NodeEvaluator subclass. A node evaluator tests a node against some criteria, and returns a boolean result indicating whether the node is accepted or rejected, effectively giving each sequencer a chance to answer the question “Do you want this node?”. Using this mechanism, a B-tree sequencer can determine which of its child sequencers wants to take care of ordering any particular node.
A simple example might help here. Both Imagination Technologies (the supplier of the iOS GPUs) and Apple recommend that all opaque objects be drawn before translucent objects. In addition, the translucent objects themselves should be drawn in Z-order, which is the reverse order of distance from the camera, with the farther translucent objects being drawn first, and the nearer translucent objects drawn over them.
This drawing order can be accomplished with a B-tree sequencer containing two array sequencers: the first with an evaluator that tests the node for opacity, and the second with an evaluator that tests the node for translucency. The array sequencer holding the opaque nodes can simply hold them in the order they are added. However, the array sequencer holding the translucent nodes needs to order the nodes by their Z-order.
This structure looks as follows:
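Under the assumptions just described, the assembly can be pictured roughly as:

```
CC3BTreeNodeSequencer
├── CC3NodeArraySequencer           (evaluator accepts opaque nodes;
│                                    kept in the order they were added)
└── CC3NodeArrayZOrderSequencer     (evaluator accepts translucent nodes;
                                     kept sorted in Z-order, farthest first)
```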
When testing a node, the B-tree sequencer will first ask the child sequencer with the opacity test if it is interested in the node, before asking the second sequencer. The result is that the first sequencer will contain opaque nodes in the order they were added, and the second will contain translucent nodes in Z-order.
Because nodes may be moving around from frame to frame, be aware that Z-ordering requires that the nodes in the
CC3NodeArrayZOrderSequencer be sorted on each and every frame update. This can have significant performance implications if there are a large number of translucent nodes. To improve performance, ensure that translucent nodes really are visibly translucent (don’t waste time sorting translucent nodes that can’t be seen as translucent by the user), and keep the number of translucent nodes to a minimum.
The requirement that this example illustrates is so common that
CC3BTreeNodeSequencer includes the class method
sequencerLocalContentOpaqueFirst, to create just such a sequencer assembly. There are also the class methods
sequencerLocalContentOpaqueFirstGroupTextures and
sequencerLocalContentOpaqueFirstGroupMeshes, which take the concept a bit further and group the opaque nodes so that opaque nodes with the same texture or mesh, respectively, are drawn together, one after the other, before other opaque nodes are drawn. As above, in each of these sequencers, the translucent nodes are sorted by Z-order.
By default, CC3World uses a sequencer created by the
sequencerLocalContentOpaqueFirst method, thereby drawing opaque nodes first, in the order they were added, then translucent nodes in Z-order.
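As a sketch, plugging such a sequencer into the world might look like this (the drawingSequencer property name is an assumption; check the CC3World header for the exact property used in your cocos3d version):

```objc
// Sketch: configure the world's node sequencer during initialization.
-(void) initializeWorld {
	self.drawingSequencer =
			[CC3BTreeNodeSequencer sequencerLocalContentOpaqueFirst];
}
```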
By design, both cocos2d and cocos3d support a clear separation between updating and drawing activities, and each is invoked on a separate loop. In addition to helping to keep your application code organized, there are good performance reasons for separating updating from drawing. OpenGL ES is designed as a state-machine pipeline, built for a steady stream of rendering commands, with ideally no data traveling back upstream to the application. This allows the GPU to keep its own time. Keeping the application rendering loop lean and mean helps the GPU operate at the full frame rate, and leaves open options for scaling back the update rate separately if needed, breaking the update loop into several passes, or possibly even multi-threading the updates in extreme cases.
With all this in mind, you should keep the code you write in the
updateAfterTransform: methods of any
CC3Node subclass free of drawing or rendering operations. Similarly, should you override the
draw methods in any
CC3Node subclass, you should only perform rendering operations, and keep those methods free of model updates.
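For example, a node update override should confine itself to model state (the visitor parameter and deltaTime property shown follow cocos3d conventions, but verify the exact signature against the CC3Node header):

```objc
// Sketch: spin a node during the update phase; no GL or drawing calls here.
-(void) updateAfterTransform: (CC3NodeUpdatingVisitor*) visitor {
	GLfloat turn = 30.0 * visitor.deltaTime;   // degrees this frame
	self.rotation = CC3VectorAdd(self.rotation, cc3v(0.0, turn, 0.0));
}
```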
As mentioned above in the discussion of rendering order, the performance of the OpenGL ES state machine pipeline is slowed down by frequent state changes. And the OpenGL ES engine incurs costs when performing state machine changes that really didn’t need to happen because the engine already was in that state. To that end, the application should do its best to ensure that GL engine state changes are only made if state really is changing.
To ensure that, cocos3d shadows the state of the GL engine outside of the GL engine itself. When an attempt is made to change state, cocos3d first checks to see if the GL engine is already in that state, by checking the value of the shadow state. Only if the state is changing from that of the shadow state is it propagated to the GL engine.
This state shadow tracking is automatic, and most application developers can safely ignore it. However, some sophisticated applications may need to interact with this framework, so I’ll give an overview of it here.
In cocos3d, this shadow state is managed by a singleton instance of
CC3OpenGLES11Engine. All calls to the GL engine are made through this singleton instance.
The CC3OpenGLES11Engine singleton instance contains a number of instances of subclasses of
CC3OpenGLES11StateTrackerManager, each geared towards tracking a particular grouping of GL state, such as capabilities, lighting, or materials. Each of these tracker managers is accessed through a property in the
CC3OpenGLES11Engine singleton instance. Each tracker manager exposes individual elements of state through properties.
For shadow tracking to work, it is absolutely critical that all GL state changes are handled by the shadow tracker. So, when adding functionality to cocos3d for your application, if you need to make calls to the GL engine, be sure to do so through the proper shadow tracker! A few examples of how to make a GL call through a shadow tracker, and the GL call that it replaces are:
[[CC3OpenGLES11Engine engine].serverCapabilities.texture2D enable]
replaces: glEnable(GL_TEXTURE_2D)
[CC3OpenGLES11Engine engine].materials.ambientColor.value = kCCC4FBlue
replaces: glMaterialfv(GL_FRONT_AND_BACK, GL_AMBIENT, (GLfloat*)&kCCC4FBlue)
[CC3OpenGLES11Engine engine].state.scissor.value = self.activeCamera.viewport
replaces: glScissor(vp.x, vp.y, vp.w, vp.h), where vp = self.activeCamera.viewport
Since it is critical that all GL calls be processed through the state shadow trackers, but cocos2d does not use state shadow tracking, you may be wondering how the shadow trackers know what value a GL state has at the time that rendering control is passed to the cocos3d framework from cocos2d.
Each primitive state tracker has an
originalValueHandling property, which tells that tracker how to determine the original value. The enumerated value of this property can tell the tracker to do one of the following:
- simply ignore the current state at the beginning of each rendering frame
- read the current state at the beginning of each rendering frame
- read the current state at the beginning of the first rendering frame and assume that value always
In addition, for the last two options, the enumeration can be modified to tell the tracker to restore that original state back to the GL engine, because cocos2d is assuming it to have stayed the same.
The original value handling is set to the optimal value for combining cocos2d and cocos3d, generally either ignoring the value, or reading it once on the first frame. However, if you find that the initial value can change from frame to frame at the point that cocos3d takes over, you can modify the
originalValueHandling property of the associated state tracker to read the value on every frame.
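For example (the enumeration constant name here is illustrative; see the tracker headers for the exact values available):

```objc
// Sketch: ask the scissor-state tracker to re-read its original GL value
// at the start of every rendering frame.
CC3OpenGLES11Engine* gles11Engine = [CC3OpenGLES11Engine engine];
gles11Engine.state.scissor.originalValueHandling =
		kCC3GLESStateOriginalValueReadAlways;   // illustrative constant name
```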
Keep in mind that this reading interrupts the flow of the GL pipeline significantly, and should generally be avoided at all costs. A better solution is to ensure that your cocos2d code leaves GL state with consistent values from frame to frame at the time cocos3d takes over.