Wednesday, September 7, 2016

State reflection


The Stingray engine has two controller threads -- the main thread and the render thread. These two threads build up work for our job system, which is distributed on the remaining threads. The main thread and the render thread are pipelined, so that while the main thread runs the simulation/update for frame N, the render thread is processing the rendering work for the previous frame (N-1). This post will dive into the details of how state is propagated from the main thread to the render thread.

I will use code snippets to explain how the state reflection works. It's mostly actual code from the engine but it has been cleaned up to a certain extent. Some stuff has been renamed and/or removed to make it easier to understand what's going on.

The main loop

Here is a slimmed down version of the update loop which is part of the main thread:

while (!quit()) {
    // Calls out to the mandatory user supplied `update` Lua function. Lua is used
    // as a scripting language to manipulate objects. From Lua, worlds, objects, etc
    // can be created, manipulated, destroyed, etc. All these changes are recorded
    // on a `StateStream` that is a part of each world.

    // Flush state changes recorded on the `StateStream` for each world to
    // the rendering world representation.
    unsigned n_worlds = _worlds.size();
    for (uint32_t i = 0; i < n_worlds; ++i) {
        auto &world = *_worlds[i];
        _render_interface->update_world(world);
    }

    // Begin a new render frame.

    // Calls out to the user supplied `render` Lua function. It's up to the script
    // to call render on worlds. The script controls what camera and viewport
    // are used when rendering the world.

    // Present the frame.

    // End frame.

    // Never let the main thread run more than 1 frame ahead of the render
    // thread (`wait_for_fence` is an assumed name).
    _render_interface->wait_for_fence(_frame_fence);

    // Create a new fence for the next frame.
    _frame_fence = _render_interface->create_fence();
}

First thing to point out is the _render_interface. This is not a class full of virtual functions that some other class can inherit from and override as the name might suggest. The word "interface" is used in the sense that it's used to communicate from one thread to another. So in this context the _render_interface is used to post messages from the main thread to the render thread.

As said in the first comment in the code snippet above, Lua is used as our scripting language and from Lua things such as worlds, objects, etc can be created, destroyed, manipulated, etc.

The state between the main thread and the render thread is very rarely shared; instead each thread has its own representation, and when state is changed on the main thread that change is reflected over to the render thread. E.g., MeshObject -- the main thread representation of a mesh with vertex buffers, materials, textures, shaders, skinning data, etc to be rendered -- has RenderMeshObject as its corresponding render thread representation. All objects that have a representation on both the main and render thread are set up to work the same way:

class MeshObject : public RenderStateObject { /* ... */ };

class RenderMeshObject : public RenderObject { /* ... */ };

The corresponding render thread class is prefixed with Render. We use this naming convention for all objects that have both a main and a render thread representation.

The main thread objects inherit from RenderStateObject and the render thread objects inherit from RenderObject. These structs are defined as:

struct RenderStateObject {
    uint32_t render_handle;
    StateReflection *state_reflection;
};

struct RenderObject {
    uint32_t type;
};

The render_handle is an ID that identifies the corresponding object on the render thread. state_reflection is a stream of data that is used to propagate state changes from the main thread to the render thread. type is an enum used to identify the type of render objects.

Object creation

In Stingray a world is a container of renderable objects, physical objects, sounds, etc. On the main thread, it is represented by the World class, and on the render thread by a RenderWorld.

When a MeshObject is created in a world on the main thread, there's an explicit call to WorldRenderInterface::create() to create the corresponding render thread representation:

MeshObject *mesh_object = MAKE_NEW(_allocator, MeshObject);
_world_render_interface.create(mesh_object);

The purpose of the call to WorldRenderInterface::create is to explicitly create the render thread representation, acquire a render_handle and to post that to the render thread:

void WorldRenderInterface::create(MeshObject *mesh_object)
{
    // Get a unique render handle.
    mesh_object->render_handle = new_render_handle();

    // Set the state_reflection pointer, more about this later.
    mesh_object->state_reflection = &_state_reflection;

    // Create the render thread representation.
    RenderMeshObject *render_mesh_object = MAKE_NEW(_allocator, RenderMeshObject);

    // Pass the data to the render thread.
    create_object(mesh_object->render_handle, RenderMeshObject::TYPE, render_mesh_object);
}

The new_render_handle function speaks for itself.

uint32_t WorldRenderInterface::new_render_handle()
{
    if (_free_render_handles.any()) {
        uint32_t handle = _free_render_handles.back();
        _free_render_handles.pop_back();
        return handle;
    }
    return _render_handle++;
}

There is a recycling mechanism for the render handles, and a similar pattern recurs in several places in the engine. The release_render_handle function together with the new_render_handle function should give the complete picture of how it works.

void WorldRenderInterface::release_render_handle(uint32_t handle)
{
    _free_render_handles.push_back(handle);
}
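The recycling pattern can be captured in a small standalone sketch. The class name HandleAllocator is mine, not the engine's -- Stingray keeps the equivalent state (the `_render_handle` counter plus `_free_render_handles`) inline in WorldRenderInterface:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Minimal sketch of the handle-recycling pattern described above
// (hypothetical class; the engine stores this state in WorldRenderInterface).
class HandleAllocator {
public:
    uint32_t new_handle() {
        if (!_free.empty()) {
            uint32_t handle = _free.back();
            _free.pop_back();    // Remove from the free list before reuse.
            return handle;
        }
        return _next++;          // No recycled handle available -- mint a new one.
    }

    void release_handle(uint32_t handle) {
        _free.push_back(handle); // Recycle for a later new_handle() call.
    }

private:
    uint32_t _next = 0;
    std::vector<uint32_t> _free;
};
```

Handles only grow when the free list is empty, so the handle space stays dense -- which matters later, when handles are used to index into a lookup table.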

There is one WorldRenderInterface per world which contains the _state_reflection that is used by the world and all of its objects to communicate with the render thread. The StateReflection in its simplest form is defined as:

struct StateReflection {
    StateStream *state_stream;
};

The create_object function needs a bit more explanation though:

void WorldRenderInterface::create_object(uint32_t render_handle, RenderObject::Type type, void *user_data)
{
    // Allocate a message on the `state_stream`.
    ObjectManagementPackage *omp;
    alloc_message(_state_reflection.state_stream, WorldRenderInterface::CREATE, &omp);

    omp->object_type = RenderWorld::TYPE;
    omp->render_handle = render_handle;
    omp->type = type;
    omp->user_data = user_data;
}

What happens here is that alloc_message will allocate enough bytes to make room for a MessageHeader together with the size of ObjectManagementPackage in a buffer owned by the StateStream. The StateStream is defined as:

struct StateStream {
    void *buffer;
    uint32_t capacity;
    uint32_t size;
};

capacity is the size of the memory pointed to by buffer; size is the number of bytes currently allocated from buffer.
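A minimal allocate() over such a stream is just an aligned bump allocator on top of buffer. This is a sketch under my own assumptions -- growth via realloc stands in for whatever allocator the engine actually uses, which the post doesn't show:

```cpp
#include <cassert>
#include <cstdint>
#include <cstdlib>

struct StateStream {
    void *buffer;
    uint32_t capacity;
    uint32_t size;
};

// Sketch: aligned bump allocation out of the stream's buffer, growing
// the buffer when it is full (realloc is a stand-in allocator).
void *allocate(StateStream *ss, uint32_t bytes, uint32_t align)
{
    // Round the current size up to the requested alignment.
    uint32_t offset = (ss->size + (align - 1)) & ~(align - 1);

    if (offset + bytes > ss->capacity) {
        uint32_t new_capacity = ss->capacity ? ss->capacity * 2 : 1024;
        while (offset + bytes > new_capacity)
            new_capacity *= 2;
        ss->buffer = realloc(ss->buffer, new_capacity);
        ss->capacity = new_capacity;
    }

    void *result = (char *)ss->buffer + offset;
    ss->size = offset + bytes;
    return result;
}
```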

The MessageHeader is defined as:

struct MessageHeader {
    uint32_t type;
    uint32_t size;
    uint32_t data_offset;
};

The alloc_message function places the MessageHeader first, followed by the data. Some ASCII to the rescue:

| MessageHeader | data                                              |
<- data_offset ->
<-                          size                                   ->

The size and data_offset mentioned in the ASCII are two of the members of MessageHeader, these are assigned during the alloc_message call:

template<class T>
void alloc_message(StateStream *state_stream, uint32_t type, T **data)
{
    uint32_t data_size = sizeof(T);
    uint32_t message_size = sizeof(MessageHeader) + data_size;

    // Allocate message and fill in the header.
    void *buffer = allocate(state_stream, message_size, alignof(MessageHeader));
    auto header = (MessageHeader*)buffer;

    header->type = type;
    header->size = message_size;
    header->data_offset = sizeof(MessageHeader);

    *data = (T*)memory_utilities::pointer_add(buffer, header->data_offset);
}

The buffer member of the StateStream will contain several consecutive chunks of message headers and data blocks.

| Header | data | Header | data | Header | data | Header | data | etc   |
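Given that layout, consuming the stream on the other side is a linear walk over the buffer. Below is a simplified, standalone version of that walk -- the engine's actual get_message isn't shown in the post, so the read cursor and function shape here are my own:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

struct MessageHeader {
    uint32_t type;
    uint32_t size;
    uint32_t data_offset;
};

// Walk consecutive | Header | data | chunks in a buffer of `used` bytes.
// Returns false once the cursor has consumed the whole stream.
bool next_message(const char *buffer, uint32_t used, uint32_t *cursor,
                  const MessageHeader **header, const void **data)
{
    if (*cursor >= used)
        return false;
    *header = (const MessageHeader *)(buffer + *cursor);
    *data = buffer + *cursor + (*header)->data_offset;
    *cursor += (*header)->size;  // Jump to the next Header/data chunk.
    return true;
}
```

Because each header stores the total chunk size, the reader never needs to know the concrete message types to skip over messages it doesn't handle.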

This is the necessary code on the main thread to create an object and populate the StateStream, which will later be consumed by the render thread. A very similar pattern is used when changing the state of an object on the main thread, e.g.:

void MeshObject::set_flags(renderable::Flags flags)
{
    _flags = flags;

    // Allocate a message on the `state_stream`.
    SetVisibilityPackage *svp;
    alloc_message(state_reflection->state_stream, MeshObject::SET_VISIBILITY, &svp);

    // Fill in message information.
    svp->object_type = RenderMeshObject::TYPE;

    // The render handle that got assigned in `WorldRenderInterface::create`,
    // used to associate the main thread object with its render thread
    // representation.
    svp->handle = render_handle;

    // The new flags value.
    svp->flags = _flags;
}

Getting the recorded state to the render thread

Let's take a step back and explain what happens in the main update loop during the following code excerpt:

// Flush state changes recorded on the `StateStream` for each world to
// the rendering world representation.
unsigned n_worlds = _worlds.size();
for (uint32_t i = 0; i < n_worlds; ++i) {
    auto &world = *_worlds[i];
    _render_interface->update_world(world);
}

When Lua is done creating, destroying and manipulating objects during update(), each world's StateStream, which contains all the recorded changes, is ready to be sent over to the render thread for consumption. The call to RenderInterface::update_world() does just that, and roughly looks like:

void RenderInterface::update_world(World &world)
{
    UpdateWorldMsg uw;

    // Get the render thread representation of the `world`.
    uw.render_world = render_world_representation(world);

    // The world's current `state_stream` that contains all changes made
    // on the main thread.
    uw.state_stream = world._world_render_interface._state_reflection.state_stream;

    // Create and assign a new `state_stream` to the world's `WorldRenderInterface`
    // that will be used for the next frame.
    world._world_render_interface._state_reflection.state_stream = new_state_stream();

    // Post a message to the render thread to update the world.
    post_message(UPDATE_WORLD, &uw);
}

This function creates a new message and posts it to the render thread. The world being flushed and its StateStream are stored in the message, and a new StateStream is created that will be used for the next frame. This new StateStream is set on the WorldRenderInterface of the World, and since all objects created in the world hold a pointer to the StateReflection of that same WorldRenderInterface, they will use the newly created StateStream when recording state changes for the next frame.

Render thread

The render thread is spinning in a message loop:

void RenderInterface::render_thread_entry()
{
    while (!_quit) {
        // If there's no message -- put the thread to sleep until there's
        // a new message to consume.
        RenderMessage *message = get_message();

        void *message_data = data(message);
        switch (message->type) {
            case UPDATE_WORLD:
                internal_update_world((UpdateWorldMsg*)message_data);
                break;

            // ... And a lot more case statements to handle different messages. There
            // are other threads than the main thread that also communicate with the
            // render thread. E.g., the resource loading happens on its own thread
            // and will post messages to the render thread.
        }
    }
}

The internal_update_world() function is defined as:

void RenderInterface::internal_update_world(UpdateWorldMsg *uw)
{
    // Call update on the `render_world` with the `state_stream` as argument.
    uw->render_world->update(uw->state_stream);

    // Release the `state_stream` back to a pool so it can be recycled
    // (`release_state_stream` is an assumed name).
    release_state_stream(uw->state_stream);
}
It calls update() on the RenderWorld with the StateStream and when that is done the StateStream is released to a pool.

void RenderWorld::update(StateStream *state_stream)
{
    MessageHeader *message_header;
    StatePackageHeader *package_header;

    // Consume a message and get the `message_header` and `package_header`.
    while (get_message(state_stream, &message_header, (void**)&package_header)) {
        switch (package_header->object_type) {
            case RenderWorld::TYPE: {
                auto omp = (WorldRenderInterface::ObjectManagementPackage*)package_header;
                // The call to `WorldRenderInterface::create` created this message.
                if (message_header->type == WorldRenderInterface::CREATE)
                    create_object(omp);
                break;
            }
            case RenderMeshObject::TYPE: {
                if (message_header->type == MeshObject::SET_VISIBILITY) {
                    auto svp = (MeshObject::SetVisibilityPackage*)package_header;

                    // The `render_handle` is used to do a lookup in `_object_lut`
                    // to get the `object_index`.
                    uint32_t object_index = _object_lut[package_header->render_handle];

                    // Get the `render_object`.
                    void *render_object = _objects[object_index];

                    // Cast it since the type is already given from the `object_type`
                    // in the `package_header`.
                    auto rmo = (RenderMeshObject*)render_object;

                    // Call update on the `RenderMeshObject`.
                    rmo->update(message_header->type, svp);
                }
                break;
            }
            // ... And a lot more case statements to handle different kinds of messages.
        }
    }
}

The above is mostly infrastructure to extract messages from the StateStream. It can be a bit involved since a lot of stuff is written out explicitly but the basic idea is hopefully simple and easy to understand.

On to the create_object call that is made when (message_header->type == WorldRenderInterface::CREATE) is satisfied:

void RenderWorld::create_object(WorldRenderInterface::ObjectManagementPackage *omp)
{
    // Acquire an `object_index`.
    uint32_t object_index = _objects.size();

    // Same recycling mechanism as seen for render handles.
    if (_free_object_indices.any()) {
        object_index = _free_object_indices.back();
        _free_object_indices.pop_back();
    } else {
        _objects.resize(object_index + 1);
        _object_types.resize(object_index + 1);
    }

    void *render_object = omp->user_data;
    if (omp->type == RenderMeshObject::TYPE) {
        // Cast the `render_object` to a `RenderMeshObject`.
        RenderMeshObject *rmo = (RenderMeshObject*)render_object;

        // If needed, do more stuff with `rmo`.
    }

    // Store the `render_object` and `type`.
    _objects[object_index] = render_object;
    _object_types[object_index] = omp->type;

    // The `render_handle` is used as the key into `_object_lut`.
    if (omp->render_handle >= _object_lut.size())
        _object_lut.resize(omp->render_handle + 1);
    _object_lut[omp->render_handle] = object_index;
}
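The handle-to-index indirection above can be sketched in isolation. ObjectTable is a hypothetical name; the engine keeps these arrays directly in RenderWorld:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Sketch of the render_handle -> object_index indirection used by
// RenderWorld: handles are sparse keys, object indices stay dense so
// the render thread can iterate `_objects` linearly.
class ObjectTable {
public:
    void add(uint32_t render_handle, void *object, uint32_t type) {
        uint32_t object_index;
        if (!_free_indices.empty()) {
            object_index = _free_indices.back();   // Reuse a freed dense slot.
            _free_indices.pop_back();
            _objects[object_index] = object;
            _object_types[object_index] = type;
        } else {
            object_index = (uint32_t)_objects.size();
            _objects.push_back(object);
            _object_types.push_back(type);
        }
        if (render_handle >= _object_lut.size())
            _object_lut.resize(render_handle + 1);
        _object_lut[render_handle] = object_index; // Sparse -> dense mapping.
    }

    void *lookup(uint32_t render_handle) const {
        return _objects[_object_lut[render_handle]];
    }

private:
    std::vector<void *> _objects;        // Dense array of render objects.
    std::vector<uint32_t> _object_types; // Parallel array of type enums.
    std::vector<uint32_t> _object_lut;   // Sparse: handle -> dense index.
    std::vector<uint32_t> _free_indices; // Recycled dense slots.
};
```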

The takeaway from the code above lies in the general usage of the render_handle and the object_index. An object's render_handle is used to do a lookup in _object_lut to get its object_index and, from there, its object and type. Let's look at an example: the same RenderWorld::update code presented earlier, but this time the focus is on when the message is MeshObject::SET_VISIBILITY:

void RenderWorld::update(StateStream *state_stream)
{
    MessageHeader *message_header;
    StatePackageHeader *package_header;

    while (get_message(state_stream, &message_header, (void**)&package_header)) {
        switch (package_header->object_type) {
            case RenderMeshObject::TYPE: {
                if (message_header->type == MeshObject::SET_VISIBILITY) {
                    auto svp = (MeshObject::SetVisibilityPackage*)package_header;

                    // The `render_handle` is used to do a lookup in `_object_lut`
                    // to get the `object_index`.
                    uint32_t object_index = _object_lut[package_header->render_handle];

                    // Get the `render_object` from the `object_index`.
                    void *render_object = _objects[object_index];

                    // Cast it since the type is already given from the `object_type`
                    // in the `package_header`.
                    auto rmo = (RenderMeshObject*)render_object;

                    // Call update on the `RenderMeshObject`.
                    rmo->update(message_header->type, svp);
                }
                break;
            }
        }
    }
}

The state reflection pattern shown in this post is a fundamental part of the engine. Similar patterns appear in other places as well and having a good understanding of this pattern makes it much easier to understand the internals of the engine.

Tuesday, September 6, 2016

A New Localization System for Stingray

The current Stingray localization system is based around the concept of properties. A property is any period separated part of the file name before the extension. Consider the following three files:

  • trees/larch_03.unit
  • trees/larch_03.fr.unit
  • trees/larch_03.ps4.unit

These three files all have the same type (.unit), and the same name (trees/larch_03), but their properties differ. The first one has no properties set. The second one has the property .fr and the last one has the property .ps4. (Note that resources can have more than one property.)

Properties are resolved in slightly different ways, depending on the kind of property. Platform properties are resolved at compile time, so if you compile for PS4, you will get the PS4 version of the resource (or the default version if there is no .ps4 specific version).

Other properties are resolved at resource load time. When you load a bunch of resources, which property variant is loaded depends on a global property preference order set from the script. A property preference order of ['.fr', '.es'] means that resources with the property .fr are preferred, then resources with the property .es (if no .fr resource is available), and finally resources without any properties at all.
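The resolution rule can be sketched as a single function: given the property variants that exist for a resource and the global preference order, pick the first preferred property that has a variant, falling back to the property-less resource. This is a sketch of the rule as described, not engine code:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Sketch of load-time property resolution. `available` holds the property
// strings that exist for a resource ("" for the property-less default);
// `preference` is the global order, e.g. {".fr", ".es"}.
std::string resolve_property(const std::vector<std::string> &available,
                             const std::vector<std::string> &preference)
{
    for (const std::string &p : preference)
        for (const std::string &a : available)
            if (a == p)
                return p;   // First preferred property that exists wins.
    return "";              // Fall back to the default resource.
}
```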

This single mechanism is used for localizing strings, sounds, textures, etc. Strings, for example, are stored in .strings files, which are essentially just key-value stores:

file = "File"
open = "Open"

To create a French localized version of this menu.strings resource, you just create a menu.fr.strings resource and fill it with:

file = "Fichier"
open = "Ouvert"

This basic localization system has served us well for many years, but it has some drawbacks that are starting to become more pronounced:

  • It doesn't allow file names with periods in them. Since we always interpret periods as properties, periods can't be part of the regular file name. This isn't a huge problem when users name their own files, but as we increase the interoperability between Stingray and other software packages we more and more run into software that has, let's say, peculiar ways of naming its files. Renaming things by hand is cumbersome and can also break things when files cross-reference each other.

  • Switching language requires reloading the resource packages. This seems overly complicated. We have more memory these days than when we started building Stingray. In many cases, especially for strings, it makes more sense to keep them in memory all the time, so we can switch between them easily.

  • Just switching on platform isn't enough. Mobile devices range from very low-end to at least mid-end. Rather than having .ios and .android properties, we might want .low-quality and .high-quality and select which one to use based on the actual capabilities of the hardware.

  • Making editors work well with the property system has been surprisingly complicated. For example, when the editor runs on Windows, what should it show if there is a .win32 specialization of a resource -- the default version or the .win32 one? How would you edit a .ps4 resource when those are normally stripped out of the Windows runtime?

    We used to have this wonky thing where you could sort of cross-compile the resources and say "I want to run on Windows, but as if I was running on PS4." But to be honest, that system never really worked that well, and in the new editor we have gotten rid of it.

Interestingly, out of all these problems, it is the first one -- the most stupid one -- that is the main impetus for change.

The New System

The new system has several parts. First, we decided that for systems that deal with localization a lot, such as strings and sounds, it makes sense to have the system actually be aware of localization. That way, we can provide the best possible experience.

So the .strings format has changed to:

file = {en = "File", fr = "Fichier", ...}
open = {en = "Open", fr = "Ouvert", ...}

All the languages are stored in the same file and to switch language you just call Localizer.set_language("fr"). We keep all the different languages in memory at all times. Even for a game with ridiculous amounts of text this still doesn't use much memory and it means we can hot-swap languages instantly.
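A minimal in-memory localizer along these lines might look as follows. The class shape and the fallback-to-English behavior are my assumptions; the post only specifies set_language and that all languages stay resident:

```cpp
#include <cassert>
#include <map>
#include <string>

// Sketch of a localizer that keeps every language in memory, so
// switching language just changes which column is read.
class Localizer {
public:
    void add(const std::string &key, const std::string &language,
             const std::string &text) {
        _strings[key][language] = text;
    }

    void set_language(const std::string &language) { _language = language; }

    // Assumed fallback: if the key has no entry for the current
    // language, return the English string.
    std::string lookup(const std::string &key) const {
        const auto &variants = _strings.at(key);
        auto it = variants.find(_language);
        if (it == variants.end())
            it = variants.find("en");
        return it->second;
    }

private:
    std::string _language = "en";
    std::map<std::string, std::map<std::string, std::string>> _strings;
};
```

Because no resources need to be reloaded, set_language amounts to a single string assignment, which is what makes instant hot-swapping possible.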

This is a nice approach, but it doesn't work for all resources. We don't want to add this deep kind of integration to resources that are normally not localized, such as .unit and .texture. Still, there sometimes is a need to localize such resources. For example, a .texture might have text in it that needs to be localized. We may need a low-poly version of a .unit for a less capable platform. Or a less gory version of an animation for countries with stricter age ratings.

To make things easier for the editor we decided to ditch the property system altogether, and instead go for a substitution strategy. There are no special magical parts of a resource's path -- it is just a name and a type. But if you want to, you can tell the engine that all instances of a certain resource should be replaced with another resource:

trees/larch_03.unit → trees/larch_03_ps4.unit

Note here that there is nothing special or magical about the trees/larch_03_ps4.unit. There is no problem with displaying it on Windows. You just edit it in the editor, like any other unit. However, when you play the game -- any time a trees/larch_03.unit is requested by the engine, a trees/larch_03_ps4.unit is substituted. So if you have authored a level full of larch_03 units, when the override above is in place, you will instead see larch_03_ps4 units.

There are many ways for this scheme to go wrong. The gameplay script might expect to find a certain node branch_43 in the unit -- a node that exists in larch_03.unit, but not in larch_03_ps4.unit and this may lead to unexpected behavior. The same problem existed in the old property system. We don't try to do anything special about this, because it is impossible. In the end, it is only the gameplay script that can know what it means for two things to be similar enough to be used interchangeably. Anyone working with localized resources just has to be careful not to break things.

Overrides can be specified from the Lua script:

Application.set_resource_override("unit", "trees/larch_03", "trees/larch_03_ps4");

Note that this is a much more powerful system than the old property system. Any resource can be set to override any other -- we are not restricted to work within the strict naming scheme required by the property system. Also, the override is dynamic and can be determined at runtime. So it can be based on dynamic properties, such as measured CPU or GPU performance -- or a user setting for the amount of gore they are comfortable with.
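Conceptually, the dynamic override is a (type, name) → name remap consulted on every resource request. A sketch with hypothetical names:

```cpp
#include <cassert>
#include <map>
#include <string>
#include <utility>

// Sketch of a dynamic resource-override table: every resource request
// goes through resolve(), which substitutes the override if one is set.
class ResourceOverrides {
public:
    void set_override(const std::string &type, const std::string &name,
                      const std::string &override_name) {
        _overrides[{type, name}] = override_name;
    }

    std::string resolve(const std::string &type, const std::string &name) const {
        auto it = _overrides.find({type, name});
        return it == _overrides.end() ? name : it->second;
    }

private:
    std::map<std::pair<std::string, std::string>, std::string> _overrides;
};
```

Since the table is just runtime data, it can be repopulated at any time -- from measured hardware capabilities, user settings, or gameplay state.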

It can even be used for completely different things than localization or platform specific resources -- such as replacing the units in a level for a night-time or psychedelic version of the same level. And I'm sure our users will find many other ways of (ab)using this mechanism.

But this dynamic system is not quite enough to do everything we want to do.

First, since the override is dynamic and only happens at runtime, our packaging system can't be aware of it. Normally, our packaging system figures out all resource dependencies automatically. So when you say that you want a package with the forest level, the packaging system will automatically pull in the larch_03 unit that is used in that level, any textures used by that unit, etc. But since the packaging system can't know that at runtime you will replace larch_03 with larch_03_ps4, it doesn't know that larch_03_ps4 and its dependencies should go into the package as well.

You could add larch_03_ps4 to the package manually, since you know it will be used. That might work if you only have one or two overrides. However, even with a very small amount of overrides micromanaging packages in this way becomes incredibly tedious and error prone.

Second, we don't want to burden the packages with resources that will never be used. If we are making a game for digital distribution on iOS or Android we don't want to include large PS4-only resources in that game.

So we need a static override mechanism that is known by the package manager to make sure it includes and excludes the right resources. The simplest thing would be a big file that just listed all the overrides. For example, to override larch_03 on PS4 we would write something like:

resource_overrides = [
    {
        type = "unit"
        name = "trees/larch_03"
        override = "trees/larch_03_ps4"
        platforms = ["ps4"]
    }
]

This would work, but could again get pretty tedious if there are a lot of overrides. It would be nice to have something a bit more automatic.

Since our users are already used to using name suffixes such as .fr and .ps4 for localization, we decided to build on the same mechanism -- creating overrides automatically based on suffix rules:

resource_overrides = [
  {suffix = "_ps4", platforms = ["ps4"]}
]

This rule says that when we are compiling for the platform PS4, if we find a resource that has the same name as another resource, but with the added suffix _ps4, that resource will automatically be registered as an override for that resource:

trees/larch_03.unit → trees/larch_03_ps4.unit
leaves/larch_leaves.texture → leaves/larch_leaves_ps4.texture
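The suffix rule boils down to a name test at compile time: a resource becomes an override of another if stripping the suffix from its name yields the name of an existing resource of the same type. A sketch of that test (the function name is mine):

```cpp
#include <cassert>
#include <string>

// Sketch of the suffix rule: returns true and fills `base` if `name`
// ends in `suffix`, meaning `name` should be registered as an override
// of the resource called `base`.
bool strip_suffix(const std::string &name, const std::string &suffix,
                  std::string *base)
{
    if (name.size() <= suffix.size())
        return false;
    if (name.compare(name.size() - suffix.size(), suffix.size(), suffix) != 0)
        return false;
    *base = name.substr(0, name.size() - suffix.size());
    return true;
}
```

At compile time the data compiler would scan the resources of each type, apply this test per active rule, and register an override whenever the base resource exists.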

In addition to platform settings, the system also generalizes to support other flags:

resource_overrides = [
  {suffix = "_fr", flags = ["fr"]}
  {suffix = "_4k", flags = ["4K"]}
  {suffix = "_noblood", flags = ["noblood", "PG-13"]}
]

This defines the _fr suffix for French localization. A 4K suffix _4k for high-quality versions of resources suitable for 4K monitors. And a _noblood suffix that selects resources without blood and gore.

The flags can be set at compile time with:

--compile --resource-flag-true 4K

This means that we are compiling a 4K version of the game, so when bundling only the 4K resources will be included and the other versions will be stripped out. Just as if we were compiling for a specific platform.

But we can also choose to resolve the flags at runtime:

--compile --resource-flag-runtime noblood

With this setting, both the regular resource and the _noblood resource will be included in the package and loaded into memory. And we can hot swap between them with:

Application.set_resource_flag("noblood", true)

I have not decided yet whether in addition to these two alternatives we should also have an option that resolves at package load time. I.e., both variants of the resource would be included on disk, but only one of them would be loaded into memory and if you wanted to switch resource you would have to unload the package and load it back into memory again.

I can see some use cases for this, but on the other hand adding more options complicates the system and I like to keep things as simple as possible.

A nice thing about this suffix mapping is that it can be configured to be backwards compatible with the old property system:

resource_overrides = [
  {suffix = ".fr", flags = ["fr"]}
  {suffix = ".ps4", platforms = ["ps4"]}
  {suffix = ".xb1", platforms = ["xb1"]}
]

Whenever we change something in Stingray we try to make it more flexible and data-driven, while at the same time ensuring that the most common cases are still easy to work with. This rewrite of the localization is a good example:

  • It fixes the problem with periods in file names. Periods are now only an issue if you have made an explicit suffix mapping that matches them.

  • We can switch language (or any other resource setting) at runtime.

  • The new system is more flexible -- it doesn't just handle localization and platform specific resources, we can set up whatever resource categories we want. And we can even dynamically override individual resources.

  • The editor no longer needs to do anything special to deal with the concept of "properties". Resources that are used to override other resources can be edited in the editor just like any other resource.

  • And the system can easily be configured to be backwards compatible with the old localization system.

I still feel slightly queasy about using name matching to drive parts of this system. Name matching is a practice that can go horribly wrong. But in this case, since the name matching is completely user controlled I think it makes a good compromise between purity and usability.

Tuesday, August 16, 2016

Render Config Extensions


The rendering pipe in Stingray is completely data-driven, meaning that everything from which GPU buffers (render targets, etc.) are needed to compose the final rendered frame, to the actual flow of the frame, is described in the render_config file - a human readable JSON file. I have covered this in various presentations [1,2] over the years, so I won’t go into more detail about it in this blog post; instead I’d like to focus on a new feature that we are rolling out in Stingray v1.5 - Render Config Extensions.

As Stingray is growing to cater to more industries than game development we see lots of feature requests that don’t necessarily fit in with our ideas of what should go into the default rendering pipe that we ship with Stingray. This has made it apparent that we need a way of doing deep integrations of new rendering features without having to duplicate the entire render_config file.

This is where the render_config_extension file comes into play. A render_config_extension is very similar to the main render_config, except that instead of describing the entire rendering pipe it appends and inserts different JSON blocks into the main render_config.

When the engine starts, the boot ini-file specifies which render_config to use, as well as an array of render_config_extensions to load when setting up the renderer.

render_config = "core/stingray_renderer/renderer"
render_config_extensions = ["clouds-resources/clouds", "prism/prism"]

The array describes the initialization order of the extensions, which makes it possible for the project author to control how the different extensions stack on top of each other. It also makes it possible to build extensions that depend on other extensions.

A render_config_extension consists of two root blocks: append and insert_at:


The append block is used for everything that is order independent and allows you to append data to the following root blocks of the main render_config:

  • shader_libraries – lists additional shader_libraries to load
  • render_settings – add more render_settings (quality settings, debug flags, etc.)
  • shader_pass_flags – add more shader_pass_flags (used by shader system to dynamically turn on/off passes)
  • global_resources – additional global GPU resources to allocate on boot
  • resource_generators – expose new resource_generators
  • viewports – expose new viewport templates
  • lookup_tables – append to the list of resource_generators to execute when booting the renderer (mainly used for generating lookup tables)

One thing to note about extending these blocks is that we currently do not do any kind of name collision checking, so using a prefix to mimic a namespace for your extension is probably a good idea.

// example append block from JPs volumetric clouds plugin
append = {
  render_settings = {
    clouds_enabled = true
    clouds_raw_data_visualization = false
    clouds_weather_data_visualization = false
  }

  shader_libraries = [
    // ...
  ]

  global_resources = [
    // Clouds modelling resources:
    { name="clouds_result_texture1" type="render_target" image_type="image_3d" width=256 height=256 layers=256 format="R8G8B8A8" }
    { name="clouds_result_texture2" type="render_target" image_type="image_3d" width=64 height=64 layers=64 format="R8G8B8A8" }
    { name="clouds_result_texture3" type="render_target" image_type="image_2d" width=128 height=128 format="R8G8B8A8" }
    { name="clouds_weather_texture" type="render_target" image_type="image_2d" width=256 height=256 format="R8G8B8A8" }
  ]
}

The insert_at block allows you to insert layers and modifiers into already existing layer_configurations and resource_generators, either belonging to the main render_config file or to a render_config_extension listed earlier in the render_config_extensions array of the engine boot ini-file.

// example insert_at block from JPs volumetric clouds plugin
insert_at = {
  post_processing_development = {
    modifiers = [
      { type="dynamic_branch" render_settings={ clouds_weather_data_visualization=true }
        pass = [
          { type="fullscreen_pass" shader="debug_weather" input=["clouds_weather_texture"] output=["output_target"] }
        ]
      }
    ]
  }

  skydome = {
    layers = [
      { resource_generator="clouds_modifier" profiling_scope="clouds" }
    ]
  }
}

The object names under the insert_at block refer to extension_insertion_points listed in the main render_config file or in one of the previously loaded render_config_extension files. We’ve chosen not to allow extensions to inject anywhere they like (using line numbers or similar craziness); instead we expose a bunch of extension “hooks” at various places in the main render_config file. By doing this we hope to have a somewhat better chance of not breaking existing extensions as we continue to develop and potentially do bigger refactorings of the default render_config file.

Future work

This extension mechanism is somewhat of an experiment and we might need to rethink parts of it in a later version of Stingray. We’ve briefly discussed a potential need for dealing with versioning, i.e. allowing extensions to explicitly list which versions of Stingray they are compatible with (and maybe also allowing extensions to have deviating implementations depending on version). Some kind of enforced namespacing and more aggressive validation to avoid name collisions have also been debated.

In the end we decided to ignore these potential problems for now and instead push for getting a first version out in 1.5 to unblock plugin developers and internal teams wanting to do efficient “deep” integrations of various rendering features. Hopefully we won’t regret this decision too much later on. ;)



Sunday, July 31, 2016

Volumetric Clouds

There has been a lot of progress made recently with volumetric clouds in games. The folks from Reset have posted a great article regarding their custom dynamic clouds solution, Egor Yusov published Real-time Rendering of Physics-Based Clouds using Precomputed Scattering in GPU Pro 6, last year Andrew Schneider presented Real-time Volumetric Cloudscapes of Horizon: Zero Dawn, and just last week Sébastien Hillaire presented Physically Based Sky, Atmosphere and Cloud Rendering in Frostbite. Inspired by all this latest progress we decided to implement a Stingray plugin to get a feel for the challenge that is real time clouds rendering.

Note: This article isn't an introduction to volumetric cloud rendering but more of a small log of the development process of the plugin. Also, you can try it out for yourself or look at the code by downloading the Stingray plugin. Feel free to contribute!


The modeling of our clouds is heavily inspired by the Real-time Volumetric Rendering Course Notes and Real-time Volumetric Cloudscapes of Horizon: Zero Dawn. It uses a set of 3d and 2d noises that are modulated by a coverage and altitude term to generate the 3d volume to be rendered.

I was really impressed by the shapes that can be created from such simple building blocks. While you can definitely see cases where some tiling occurs, it’s not as bad as you would imagine. Once the textures are generated, the tough part is to find the right sampling spaces and the scales at which the textures should be sampled in the atmosphere. It's difficult to strike a good balance between tiling artifacts and having enough high frequency detail in the clouds. On top of that, cache hits are greatly affected by the sampling scale used, so that's another factor to consider.

Finding good sampling scales for all of these textures and choosing by how much the extrusion texture should affect the low frequency clouds is very time consuming. With some time you eventually build intuition for what will look good in most scenarios but it’s definitely a difficult part of the process.

We also generate some curl noise which is used to perturb and animate the clouds slightly. I've found that adding noise to the sampling position also reduces linear filtering artifacts that can arise when ray marching these low resolution 3d textures.

One thing that often bothered me is the oddly shaped cumulus clouds that can arise from tiled 3d noise. Those cases are particularly noticeable for distant clouds. Adding extra cloud coverage for lower altitude sampling positions minimizes this artifact.

Raymarching the volume at full resolution is too expensive even for high end graphics cards. So as suggested by Real-time Volumetric Cloudscapes of Horizon: Zero Dawn we reconstruct a full frame over 16 frames. I've found that to retain enough high frequency details of the clouds, we need a fairly high number of samples. We are currently using 256 steps when raymarching. We offset the starting position of the ray by a 4x4 Bayer matrix pattern to reduce banding artifacts that might appear due to undersampling. Mikkel Gjoel shared some great tips for banding reduction while presenting The Rendering Of Inside and encouraged the use of blue noise to remove banding patterns. While this gives better results there is a nice advantage of using a 4x4 pattern here: since we are rendering interleaved pixels it means that when rendering one frame we are rendering all pixels with the same Bayer offset. This yields a significant improvement in cache coherency compared to using a random noise offset per pixel. We also use an animated offset which allows us to gather a few extra samples through time. We use a 1d Halton sequence of 8 values and instead of using 100% of the 16ᵗʰ frame we use something like 75% to absorb the Halton samples.
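The interleaving scheme above can be sketched in plain JavaScript (hypothetical names; in practice this lives in the shader). Each frame renders the pixels belonging to one cell of the 4x4 Bayer matrix, and the matrix value also supplies the per-frame ray-start offset:

```javascript
// 4x4 Bayer matrix: each frame renders the pixels of one cell, so a full
// resolution frame is reconstructed over 16 frames.
const BAYER_4X4 = [
  [ 0,  8,  2, 10],
  [12,  4, 14,  6],
  [ 3, 11,  1,  9],
  [15,  7, 13,  5],
];

// Ray-start offset for a given frame index (0..15). Because all pixels
// rendered in one frame share the same Bayer cell, they all use the same
// offset, which keeps the volume-texture reads cache coherent.
function rayStartOffset(frameIndex, stepSize) {
  const i = frameIndex & 15;
  const bayer = BAYER_4X4[Math.floor(i / 4)][i % 4];
  return (bayer / 16) * stepSize; // offset in [0, stepSize)
}
```

A blue-noise offset per pixel would dither the banding better, but every pixel would then march through different texture addresses, which is why the shared per-frame offset wins on cache coherency.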

To re-project the cloud volume we try to find a good approximation of the cloud's world position. While raymarching we track a weighted sum of the absorption position and generate a motion vector from it.

This allows us to reproject clouds with some degree of accuracy. Since we only build one full resolution frame every 16 frames, it’s important to track the samples as precisely as possible. This is especially true when the clouds are animated. Finding the right number of temporal samples to integrate over time is a compromise between getting a smoother signal for trackable pixels and a noisier signal for invalidated pixels.
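That compromise can be illustrated with a simple exponential blend, where the effective number of integrated samples sets the blend weight (a sketch with hypothetical names, not the engine's actual resolve pass):

```javascript
// Exponential moving average as a stand-in for temporal integration: a
// larger sampleCount gives a smoother signal for trackable pixels, but the
// result reacts more slowly when a pixel's history is invalidated.
function temporalBlend(history, current, sampleCount) {
  const alpha = 1 / sampleCount; // weight given to the new sample
  return history + (current - history) * alpha;
}
```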


To light the volume we use the "Beer-Powder" term described by Real-time Volumetric Cloudscapes of Horizon: Zero Dawn. It's a nice model since it simulates some of the out-scattering that occurs at the edges of the clouds. We discovered early on that it was going to be difficult to find terms that looked good for both close and distant clouds. So (for now anyways) a lot of the scattering and extinction coefficients are view dependent. This proved to be a useful way of building intuition for how each term affects the lighting of the clouds.

We also added the ambient term described by the Real-time Volumetric Rendering Course Notes which is very useful to add detail where all light is absorbed by the volume.

The ambient function described takes three parameters: sampling altitude, bottom color and top color. Instead of using constant values, we calculate these values by sampling the atmosphere at a few key locations. This means our ambient term is dynamic and will reflect the current state of the atmosphere. We use two pairs of samples perpendicular to the sun vector and average them to get the bottom and top ambient colors respectively.

Since we already calculated an approximate absorption position for the reprojection, we use this position to change the absorption color based on the absorption altitude.

Finally, we can reduce the alpha term by a constant amount to skew the absorption color towards the overlayed atmospheric color. By default this is disabled but it can be interesting to create some very hazy skyscapes. If this hack is used, it's important to protect the scattering highlight colors somewhat.


The animation of the clouds consists of a 2d wind vector, a vertical draft amount and a weather system.

We dynamically calculate a 512x512 weather map which consists of 5 octaves of animated Perlin noise. We remap the noise value differently for each rgb component. This weather map is then sampled during the raymarch to update the coverage, cloud type and wetness terms of the current cloud sample. Right now we resample this weather term at each ray step, but a possible optimization would be to sample the weather data at the start and end positions of the ray and interpolate between these values at each step. All of the weather terms come in sunny/stormy pairs so that we can lerp between them based on a probability-of-rain percentage. This allows the weather system to have storms coming in and out.
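The sunny/stormy pairing might look something like this (hypothetical term names; an illustrative sketch, not the plugin's actual code):

```javascript
function lerp(a, b, t) {
  return a + (b - a) * t;
}

// Blend each paired weather term by the probability-of-rain percentage,
// so a storm can fade in and out smoothly over time.
function blendWeather(sunny, stormy, rainProbability) {
  return {
    coverage:  lerp(sunny.coverage,  stormy.coverage,  rainProbability),
    cloudType: lerp(sunny.cloudType, stormy.cloudType, rainProbability),
    wetness:   lerp(sunny.wetness,   stormy.wetness,   rainProbability),
  };
}
```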

The wetness term is used to update a structure of terms which defines how the clouds look based on how much humidity they carry. This is a very expensive lerp which happens per raymarch step and should be reduced to the bare minimum (the raymarch is instruction bound, so each removed lerp is a big win optimization-wise). But for the current exploratory phase it’s proving useful to be able to tweak a lot of these terms individually.

Future work

I think that as hardware gets more powerful realtime cloudscape solutions will be used more and more. There is tons of work left to do in this area. It is absolutely fascinating, challenging and beautiful. I am personally interested in improving the sense of scale the rendered clouds can have. To do so, I feel that the key is to reveal more and more of the high frequency details that shape the clouds. I think smaller cloud features are key to put in perspective the larger cloud features around them. But extracting higher frequency details usually comes at the cost of increasing the sampling rate.

We also need to think of how to handle shadows and reflections. We've done some quick tests by updating a 512x512 opacity shadow map which seemed to work ok. Since it is not a view frustum dependent term we can absorb the cost of updating the map over a much longer period of time than 16 frames. Also, we could generate this map by taking fewer samples in a coarser representation of the clouds. The same approach would work for generating a global specular cubemap.
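One way to amortize such an update (a sketch with hypothetical numbers, not the actual implementation) is to refresh only a contiguous band of rows each frame:

```javascript
// Spread a full-map refresh over `frameBudget` frames by updating a band
// of rows per frame, e.g. 512 rows over 64 frames = 8 rows per frame.
function rowsToUpdate(frameIndex, totalRows, frameBudget) {
  const rowsPerFrame = Math.ceil(totalRows / frameBudget);
  const start = (frameIndex * rowsPerFrame) % totalRows;
  return { start: start, count: Math.min(rowsPerFrame, totalRows - start) };
}
```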

I hope we continue to see more awesome presentations at GDC and Siggraph in the coming years regarding this topic!


Friday, April 1, 2016

The Poolroom


Figure 1 : Poolroom Pool Table

The poolroom was my first attempt at creating a truly rich environmental experience with Stingray. Most architectural visualization scenes you see are antiseptically clean and uncomfortably modern. I wanted to break away from that. I wanted an environment I would feel at home with, not one that a movie star would buy for sheer resale value to another movie star. I also wanted the challenge of working with natural and texturally rich materials. Not white on white, as is generally the case.

Figure 2 : Poolroom Clock

To this end, I started looking for cozy but luxurious spaces on Google and eventually came across a nice reference photo I could work with. Warm rich woods, lots of games, a bar, and well... those all speak to me. For better or worse, I felt this room was one I would personally feel comfortable in. So I took on the challenge of re-creating that environment in 3D inside Stingray.

The challenges

The poolroom gave me some major challenges. Some I knew would be trouble from the start, but some I didn’t realize until I started rendering lightmaps. Most of my difficulties came down to handling materials properly.

Figure 3 : Poolroom Bar

Coming to grips with physically based shaders

In addition to being my first complete Arch-Viz scene in Stingray, this was also my first real stab at using physically based shading (PBS). Although physically based shading is similar in many regards to traditional texturing, it has its own set of tricks and gotchas. I actually had to re-do the scene's materials more than once as I learned the proper way to do things.

For example, my scene was predominantly dark woods. With dark woods, you really have to be sure you get the albedo material in the correct luminosity range, or you end up with difficulties when you light the scene. In my first attempts, I found my light being just eaten up by the darkness of the wood’s color map. I kept cranking up the light intensities, but this would flood the scene and lead to harsh and broken light bakes.

Figure 4 : Arcade Game

Eventually, once I understood the effect of the color map’s luminosity and got the values in line, I started getting great results with normalized light intensities. My lighting began responding favorably with deep, rich lightmap bakes. When you get the physical properties of the materials right, Stingray’s light baker is both fast and very good. But I can’t stress enough: with PBS, you must ensure that your luminosity values are accurate.

Reference photo was HDR

When I was building out the scene and trying to mimic the reference photo’s lighting, I realized that the original image was made using some high-dynamic range techniques. I couldn’t seem to get the same level of exposure and visual detail in the shadowed areas of my scene.

Figure 5 : Before Ambient Fills

Figure 6 : After Ambient Fills

Because of this, I had to do some pretty fun trickery with my scene lighting. In the end, I got it by placing some subtle, non-shadow casting lights in key areas to bring up the brightness a little in those areas.

Figure 7 : Soft Controlled Lighting

All in all, the scene took a lot of lighting work to get just right. I have to say that I was very happy with how closely I was able to match the lighting, given that the original photo was HDR.

Lived-in but not dirty

The last big challenge was also related to materials. I had to find that fine balance of a room that is clean and tidy but also obviously lived-in. So often I find Arch-Viz work feels unnaturally smooth and clean, which can destroy the believability of the space. I really wanted my scene to break through the uncanny valley and feel real.

I handled this mostly by creating some very simple grunge maps, and applying them to the roughness maps using a simple custom shader. This was easy to build in Stingray’s node-based shader graph:

Figure 8 : Simple RMA style shader with tiling and grunge map with adjustment.

I have this shader set up so I can control the tiling of the color map, normals and other textures. The grunge map, on the other hand, is sampled using UV coordinates from the lightmap channel. This helps to hide the tiling over large areas like the walls, because the grunge value that gets multiplied in to the roughness is always different each time the other textures repeat.

Balancing the grunge properly was the biggest challenge here, but in the end, some still shots even get me doing a double-take. When that happens, I know I’m doing well. I also posted progress along the way on my Facebook page — when I had friends saying, “whoa, when can I come visit?” I knew I was nailing it.

3D modeling

Figure 9 : Record Player Model in Maya LT

I don’t have much that’s special to say about the 3D modeling process. I simply modeled all my assets the same way anyone would. Attention to detail is really the trick, and making sure that I created hand-made lightmap UVs for every object was critical to ensure the best light baking. Otherwise it was just simple modeling.

Figure 10 : Poolroom Model in MayaLT

One thing to note, however, is that I only used 3D tools that came with the Stingray package, except for Substance Designer and a little Photoshop. I did the entire scene’s modeling in MayaLT. Sometimes people think cheap is not good, but I believe this proves otherwise. MayaLT is incredible. I am super happy with the results and speed at which you can work with it. Best of all, it’s part of the package, so no additional costs.

Material design

Laying out the materials in the scene was pretty straightforward for the most part. At one point, I experimented with using more species of wood, but the different parts of the room started to feel disconnected. I started removing materials from my list, and eventually when I ended up with only a small handful the room came together as you see it.

Figure 11 : Record Player Material Design in Substance

I guess something else I should mention is performance shaders. Stingray comes with a great, flexible standard shader, but I wanted to eke out every little bit of performance I could on this scene while keeping the quality very high. Without much trouble, I created a library of my own purpose-built shaders (like the one mentioned earlier). I used these for various tasks. Simple colors, RMA (roughness-metallic-ambient occlusion), RMA-tiling shaders and a few others came together really quickly. From this handful of shaders, I was able to increase performance while simplifying my design process. I find it comforting how Stingray deals with shaders… it is just very easy to iterate and save a version. Much better usability than other systems I have tried.

Figure 12 : Shader Library

Fun stuff

Well, most game dev is hard work; the fun comes at the end when you finally get to relax and see your efforts pay off. But there were definitely some really fun parts of making the poolroom.

One was the clock. It’s a small, almost easter-egg kind of thing, but I programmed the clock fully. Meaning, its hands move, the pendulum swings, and it also rings the hour. So if you are exploring the poolroom and it happens to be when the hour changes in your system clock, the clock in the game rings the hour for you. So two o’clock rings two times,  four o’clock rings four times, etc. The half-hour always strikes once. I modeled the clock after one that my father gave me, so I put some extra love into it. It is basically exactly the clock that hangs in my living room.

Figure 13 : Clock Model in MayaLT

Figure 14 : Clock Model in Stingray

I also gave the record player some extra attention, because my good friend Mathew Harwood was kind enough to do all the audio for the project. I felt the music really set the scene, and he even worked on it over my twitch stream so we could get feedback from some people who were watching. So yeah, press + or - in the game to start and stop the record player, complete with animated tone arm. Nothing super crazy, just a nice little touch.

Figure 15 : Record Player in Stingray

Community effort

One thing I found really neat about this project was that I streamed the entire creation process on my Twitch channel. I have never streamed much before this project, but it made the process much more fun. I had people to talk with, and often my viewers were helpful to me in suggesting ideas and noticing things I had not noticed. It was very collaborative and a great learning exercise for me and for my viewers. We got to learn from each other, which is the dream!

For example, the record player likely would not have been done to the level it was had one of my viewers not pushed me to make a really detailed player. Because of this push, it ended up being a focus of the level, and it even has some animation and basic controls a user can interact with.

Stop by my Twitch channel sometime and say hi, I’d love to meet you.

Sunday, January 31, 2016

Hot Reloadable JavaScript, Batman!

JavaScript is my new favorite prototyping language. Not because the language itself is fantastic. I mean, it's not too bad. It actually has a lot of similarity to Lua, but it's hidden under a heavy layer of WAT!?, like:

  • Browser incompatibilities!?
  • Semi-colons are optional, but you "should" put them there anyway!?
  • Propagation of null, undefined and NaN until they cause an error very far from where they originated!?
  • Weird type conversions!? "0" == false!?
  • Every function is also an object constructor!? x = new add(5,7)!?
  • Every function is also a method!?
  • You must check everything with hasOwnProperty() when iterating over objects!?

But since Lua is a work of genius and beauty, being a half-assed version of Lua is still pretty good. You could do worse, as languages go.

And JavaScript is actually getting better. Browser compatibility is improving; automatic updates are a big factor in this. And if your goal is just to prototype and play, as opposed to building robust web applications, you can just pick your favorite browser, go with that, and not worry about compatibility. The ES6 standard also adds a lot of nice little improvements, like let, const, class, lexically scoped this (for arrow functions), etc.

But more than the language, the nice thing about JavaScript is that it comes with a lot of the things you need to do interesting stuff -- a user interface, 2D and 3D drawing, a debugger, a console REPL, etc. And it's ubiquitous -- everybody has a web browser. If you do something interesting and want to show it to someone else, it's as easy as sending a link.

OK, so it doesn't have file system access (unless you run it through node.js), but who cares? What's so fun about reading and writing files anyway? The 60's called, they want their programming textbooks back!

I mean in JavaScript I can quickly whip up a little demo scene, add some UI controls and then share it with a friend. That's more exciting. I'm sure someone will tell me that I can do that in Ruby too. I'm sure I could, if I found the right gems to install, picked what UI library I wanted to use and learned how to use that, found some suitable bundling tools that could package it up in an executable, preferably cross-platform. But I would probably run into some annoying and confusing error along the way and just give up.

With increasing age I have less and less patience for the sysadmin part of programming. Installing libraries. Making sure that the versions work together. Converting a script to something that works with our build system. Solving PATH conflicts between multiple installed cygwin and mingw based toolchains. Learning the intricacies of some weird framework that will be gone in 18 months anyway. There is enough of that stuff that I have to deal with, just to do my job. I don't need any more. When I can avoid it, I do.

One thing I've noticed since I started to prototype in JavaScript is that since drawing and UI work is so simple to do, I've started to use programming for things that I previously would have done in other ways. For example, I no longer do graphs like this in a drawing program:

Instead I write a little piece of JavaScript code that draws the graph on an HTML canvas (code here: pipeline.js).

JavaScript canvas drawing can not only replace traditional drawing programs, but also Visio (for process diagrams), Excel (graphs and charts), Photoshop and Graphviz. And it can do more advanced forms of visualization and styling that are not possible in any of these programs.

For simple graphs, you could ask if this really saves any time in the long run, as compared to using a regular drawing program. My answer is: I don't know and I don't care. I think it is more important to do something interesting and fun with time than to save it. And for me, using drawing programs stopped being fun some time around when ClarisWorks was discontinued. If you ask me, so called "productivity software" has just become less and less productive since then. These days, I can't open a Word document without feeling my pulse racing. You can't even print the damned things without clicking through a security warning. Software PTSD. Programmers, we should be ashamed of ourselves. Thank god for Markdown.

Another thing I've stopped using is slide show software. That was never any fun either. Keynote was at least tolerable, which is more than you can say about Powerpoint. Now I just use Remark.js instead and write my slides directly in HTML. I'm much happier and I've lost 10 pounds! Thank you, JavaScript!

But I think for my next slide deck, I'll write it directly in JavaScript instead of using Remark. That's more fun! Frameworks? I don't need no stinking frameworks! Then I can also finally solve the issue of auto-adapting between 16:9 and 4:3 so I don't have to letterbox my entire presentation when someone wants me to run it on a 1995 projector. Seriously, people!

This is not the connector you are looking for!

And I can put HTML 5 videos directly in my presentation, so I don't have to shut down my slide deck to open a video in a separate program. Have you noticed that this is something that almost every speaker does at big conferences? Because apparently they haven't succeeded in getting their million dollar presentation software to reliably present a video file! Software! Everything is broken!

Anyhoo... to get back off topic, one thing that surprised me a bit about JavaScript is that there doesn't seem to be a lot of interest in hot-reloading workflows. Online there is JSBin, which is great, but not really practical for writing bigger things. If you start googling for something you can use offline, with your own favorite text editor, you don't find that much. This is a bit surprising, since JavaScript is a dynamic language -- hot reloading should be a hot topic.

There are some node modules that can do this, like budo. But I'd like something that is small and hackable, that works instantly and doesn't require installing a bunch of frameworks. By now, you know how I feel about that.

After some experimentation I found that adding a script node dynamically to the DOM will cause the script to be evaluated. What is a bit surprising is that you can remove the script node immediately afterwards and everything will still work. The code will still run and update the JavaScript environment. Again, since this is only for my personal use I've not tested it on Internet Explorer 3.0, only on the browsers I play with on a daily basis, Safari and Chrome Canary.

What this means is that we can write a require function for JavaScript like this:

function require(s)
{
    var script = document.createElement("script");
    // Append a timestamp as a query string so the browser doesn't serve a
    // cached copy of the script.
    script.src = s + "?" + Date.now();
    script.type = "text/javascript";
    var head = document.getElementsByTagName("head")[0];
    head.appendChild(script);
    head.removeChild(script);
}
We can use this to load script files, which is kind of nice. It means we don't need a lot of <script> tags in the HTML file. We can just put one there for our main script, index.js, and then require in the other scripts we need from there.

Also note the deft use of the "?" query string to prevent the browser from caching the script files. That becomes important when we want to reload them.

Since, for dynamic languages, reloading a script is the same thing as running it, we can get automatic reloads by just calling require on our own script from a timer:

function reload()
{
    require("index.js");
}

if (!window.has_reload) {
    window.has_reload = true;
    window.setInterval(reload, 250);
}
This reloads the script every 250 ms.

I use the has_reload flag on the window to ensure that I set the reload timer only the first time the file is run. Otherwise we would create more and more reload timers with every reload, which in turn would cause even more reloads. If I had enough power in my laptop, the resulting chain reaction would vaporize the universe in under three minutes. Sadly, since I don't, all that will happen is that my fans will spin up a bit. Damnit, I need more power!

After each reload() I call my render() function to recreate the DOM, redraw the canvas, etc with the new code. That function might look something like this:

function render()
{
    var body = document.getElementsByTagName("body")[0];
    while (body.hasChildNodes()) {
        body.removeChild(body.lastChild);
    }

    var canvas = document.createElement("canvas");
    canvas.width = 650;
    canvas.height = 530;
    body.appendChild(canvas);
    var ctx = canvas.getContext("2d");
    // ... draw the scene on the canvas using ctx ...
}
Note that I start by removing all the DOM elements under <body>. Otherwise each reload would create more and more content. That's still linear growth, so it is better than the exponential chain reaction you can get from the reload timer. But linear growth of the DOM is still pretty bad.

You might think that reloading all the scripts and redrawing the DOM every 250 ms would create a horrible flickering display. But so far, for my little play projects, everything works smoothly in both Safari and Chrome. Glad to see that they are double buffering properly.

If you do run into problems with flickering you could try using the Virtual DOM method that is so popular with JavaScript UI frameworks these days. But try it without that first and see if you really need it, because ugh frameworks, amirite?

Obviously it would be better to reload only when the files actually change and not every 250 ms. But to do that you would need to do something like adding a file system watcher connected to a web socket that could send a message when a reload was needed. Things would start to get complicated, and I like it simple. So far, this works well enough for my purposes.

As a middle ground you could have a small bootstrap script for doing the reload:

window.version = 23;
if (window.version != window.last_version) {
    window.last_version = window.version;
    require("index.js");
}
You would reload this small bootstrap script every 250 ms. But it would only trigger a reload of the other scripts and a re-render when you change the version number. This avoids the reload spamming, but it also removes the immediate feedback loop -- change something and see the effect immediately -- which I think is really important.

As always with script reloads, you must be a bit careful with how you write your scripts to ensure they work nicely with the reload feature. For example, if you write:

class Rect {
    ...
}

It works well in Safari, but Chrome Canary complains on the second reload that you are redefining a class. You can get around that by instead writing:

var Rect = class {
    ...
};

Now Chrome doesn't complain anymore, because obviously you are allowed to change the content of a variable.

To preserve state across reloads, I just put all the state in a global variable on the window:

window.state = window.state || {}

The first time this is run, we get an empty state object, but on future reloads we keep the old state. The render() function uses the state to determine what to draw. For example, for a slide deck I would put the current slide number in the state, so that we stay on the same page after a reload.

Here is a GIF of the hot reloading in action. Note that the browser view changes as soon as I save the file in Atom:

(No psychoactive substances were consumed during the production of this blog post. Except caffeine. Maybe I should stop drinking coffee?)

Friday, January 29, 2016

Stingray Support -- Hello, I Am Someone Who Can Help

Hello, I am someone who can help. 

Here at the Autodesk Games team, we pride ourselves on supporting users of the Stingray game engine in the best ways possible – so to start, let’s cover where you can find information!

General Information Here!

Games Solutions Learning Channel on YouTube:
This is a series of videos about Stingray by the Autodesk Learning Team. They'll be updating the playlist with new videos over time. They're pretty responsive to community requests on the videos, so feel free to log in and comment if there's something specific you'd like to see.
Check out the playlist on YouTube.

Autodesk Stingray Quick Start Series, with Josh from Digital Tutors:
We enlisted the help from Digital Tutors to set up a video series that runs through the major sections of Stingray so you can get up and running quickly.
Check out the playlist on YouTube.

Autodesk Make Games learning site:
This is a site that we've made for people who are brand new to making games. If you've never made a game before, or never touched complex 3D tools or a game engine, this is a good place to start. We run you through Concept Art and Design phases, 3D content creation, and then using a game engine. We've also made a bunch of assets available to help brand new game makers get started.

Creative Market:
The Creative Market is a storefront where game makers can buy or sell 3D content. We've got a page set up just for Stingray, and it includes some free assets to help new game makers get started.

Stingray Online Help
Here you'll find more getting started movies, how-to topics, and references for the scripting and visual programming interfaces. We're working hard to get you all the info you need, and we're really excited to hear your feedback.

Forum Support Tutorial Channel on YouTube:
This is a series of videos that answers recurring forums questions by the Autodesk Support Team. They'll be updating the playlist with new videos over time. They're pretty responsive to community requests on the videos, so feel free to log in and comment if there's something specific you'd like to see.
Check out the playlist on YouTube.

You should also visit the Stingray Public Forums here, as there is a growing wealth of information and knowledge to search from.

Let's Get Started

Let’s get started. Hi, I’m Dan, nice to meet you. I am super happy to help you with any of your Stingray problems, issues, needs or general questions! However, I’m going to need to ask you to HELP ME, HELP YOU!!

It’s not always apparent when a user asks for help just exactly what that user is asking for. That being the case, here is some useful information on how to ask for help and what to provide us so that we can help you better and more quickly!
  • Make sure you are very clear on what your specific problem is and describe it as best you can.
    • Include any pictures or screenshots you may have.
  • Tell us how you came to have this problem.
    • Give us detailed reproduction steps on how to arrive at the issue you are seeing.
  • Attach your log files!
    • They can be found here: C:\Users\”USERNAME”\AppData\Local\Autodesk\Stingray\Logs
  • Attach any file that is a specific problem (zip it so it attaches to the forum post).
  • Make sure to let us know your system specifications.
  • Make sure to let us know what Stingray engine version you are using.

On another note … traduire, traduzir, 翻译, Übersetzen, þýða, переведите, ਅਨੁਵਾਦ, and ... translate! We use English as our main support language; however, translation these days is really, really good! If English is not your first language, please feel free to write your questions and issues in your native language and we will translate it and get back to you. I often find that it is easier to understand from a translation, and this helps us get you help just that much more quickly!

In Conclusion

So just to recap, make sure you are ready when you come to ask us a question! Have your issue sorted out, how to reproduce it, what engine version you are running, your system specs and attach your log files. This will help us, help you, just that much faster and we can get you on your way to making super awesome content in the Stingray game engine. Thanks!

Dan Matlack
Product Support Specialist – Games Solutions
Autodesk, Inc.