
Minecon 2016 Report

Say what? Minecon is a convention for Minecraft, so why cover it on this blog? Well, I was invited to be on a panel about 3D printing Minecraft models, since I wrote Mineways (which gets a crazy 600 downloads a day; beats me who all these people are. I think it’s a case of 600 download it, 60 try it, 6 try it more than once, 0.6 become real users).

David Ng in the Mineways hat

It was a bit odd going to this convention, especially since it was at the Anaheim Convention Center, where I was just two months ago for SIGGRAPH. This convention is the same size as SIGGRAPH, 12,000 attendees plus panelists, staff, exhibitors, etc. One organizer said a total of 14K attended. Of course, the tickets for Minecon sold out in 6 minutes – there are over 100 million copies of Minecraft sold and a lot of fanatical users out there. It’s only two days long, it uses somewhat less (and sometimes more) of the convention center’s space, and the median age of an attendee is probably around 12. My photo album is here, note in particular this one, where just about everyone is in one very long room – that’s something I don’t see at SIGGRAPH.

Not SIGGRAPH

It’s not all just kids and pixelated blocks. There were a few good general technical talks about VR, video techniques, voxel modeling methods, etc. For example, John Carmack and others from Oculus spoke about the challenges of porting Minecraft to the Rift. Scattered throughout this presentation are some interesting bits about the user experience. There are some clever ideas, such as a person jumping a short distance getting a view that travels in a straight line, though his friends see him jumping in a parabola (parabolas upset the stomach more). Playing the ambient music actually helps stave off motion sickness for a while, or so they think. They have a “chill out” feature that lets you leave the action and hang out in a quiet virtual room, looking at the game through a 2D window. Various other things. They spent six months polishing the VR version before releasing it (despite a fair bit of pressure to get it out the door in a minimal-port form). Best/ickiest Carmack quote: “Don’t push it. We don’t need to be cleaning up sick in the demo room.” Honestly, an interesting session, with hints of the political pressure on the team. I’m impressed that they were able to take the time to polish the experience, given how an early release of the game’s port would undoubtedly have helped drive sales.

BlockWorks had a session on how they did their voxel modeling, with some slides of their incredible constructions. Seriously, click that link. It was interesting in that each artist tends to have a specialty – architecture, organics, mechanicals, etc. They use the free Chunky path tracer (great tool! I’ve played with it.) along with traditional renderers such as Cinema 4D. I would have liked to hear more about their custom voxel-based modeling tools, but alackaday, not much mention beyond WorldEdit and VoxelSniper. Also, they avoid using mesh voxelizers such as binvox and Qubicle. (Qubicle, by the way, looks like a nice package for All Things Voxel, including a mesh voxelizer that retains the color of the mesh.) Related, though not at Minecon, this article about RenderMan and Minecraft is a good detailed “how to” read.

You remember the Visible Human Project? I laughed when I saw this, and talked at length with its creator, Wizard Keen, aka Adam Clarke.

1:40 into The Torso

I also met some people working on Spigot, the unofficial mod platform for Minecraft. One explained to me in depth how Minecraft’s impressive terrain generation works, as he had carefully decompiled it all in order to make a mod. Basically, there’s pass after pass of applying Perlin noise in various ways. First the overall terrain, in a single block type. Then put biomes on. Then for each biome add the “topsoil layer” (e.g., grass over three dirt blocks atop everything for the grasslands). Carve a bit more. Add tunnels separately. Add minerals. On and on. What was interesting to me was that there was this layered approach at all – it isn’t some giant single-pass function; rather, each pass has its own function, e.g., add topsoil.
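To give the flavor of that structure, here’s a toy sketch of my own (nothing like the real decompiled code): each pass is its own function, applied in order.

// Toy sketch of the layered approach; noise2 stands in for real Perlin noise.
function noise2(x, z) {
    return Math.sin(x * 12.9898 + z * 78.233) * 0.5 + 0.5;
}

function baseTerrain(width) {   // first pass: overall terrain, a single block type
    var columns = [];
    for (var x = 0; x < width; x++) {
        columns.push({ height: Math.floor(48 + 16 * noise2(x * 0.05, 0)), surface: 'stone' });
    }
    return columns;
}

function addTopsoil(columns) {   // a later pass: e.g. grass over dirt for grasslands
    columns.forEach(function (c) { c.surface = 'grass'; c.height += 4; });
}

var terrain = baseTerrain(16);
addTopsoil(terrain);
// ...then carve, add tunnels, add minerals: each pass its own function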

Other than the lead developer of Minecraft, YouTubers were the stars of the con. By some estimates, 15% of YouTube’s content is videogame related, and Minecraft dominates (GTA’s second). (Hey, before I did 12 steps for Minecraft, I made well over a hundred videos, mostly not worth watching. Here’s a reason why I love Minecraft.) The two main video editing packages used for Minecraft videos are Premiere Pro and Vegas Pro. People use FRAPS, OBS, and Action for video capture.

Tidbit: this Minecraft-related Hour of Code lesson from code.org is evidently extremely popular. Point kids at it; it uses a visual code editor (Blockly) to introduce “if” statements, loops, etc.

Entirely non-Minecraft, but I learned about it at Minecon: I’m probably the last person on Twitter to know about TweetDeck, which makes Twitter a bit more friendly.

My little talk at the panel went well, slides here. I particularly like that David Ng is using Minecraft with his students to build physical worlds.

Oh, and videos. I’d say the most amazing videos I saw were in the Rube Goldberg session; short session preview here, worth your while. The coaster rides by Nuropsych1 and others were astounding. If you want to veg out (and wait through ads), check the Dr. Who, GhostBusters, and Beetle Juice coasters, among others. Impressive builds, great lighting effects and optical illusions, lovely redstone (electronics) work, camera tricks, and it’s astounding it can all be done within Minecraft.

4:18 into the Dr. Who coaster – click that link right now! https://youtu.be/AKyt12Ezh3s?t=258

WebGL 2 Basics

This guest blog post is by Shuai Shao, a Master’s student at UPenn under Patrick Cozzi. After hearing the announcement at SIGGRAPH, I asked around for someone to write a “basics of WebGL 2” article, and Patrick got Shuai involved. If you’re reading this any time after October 2016, see his GitHub repo for the latest version of this article, with any corrections folded in since then (we encourage you to contribute to it).

WebGL 2 is coming! Google Chrome just announced at SIGGRAPH 2016 that 100% of the WebGL 2 conformance suite passes (on the first supported configurations).

If I have an engine that works well in WebGL 1, how do I move to WebGL 2? Things to consider:

  • What has to be changed?
  • What can be done in a better way?
  • What new features and functionalities can I add to my engine?

In this article we focus on the first question. We discuss the main promoted features: features that were available as extensions in WebGL 1, are now part of the WebGL 2 core, and thus can no longer be accessed in the old extension manner. We also cover some other compatibility issues.

You can find answers to the other two questions in our next article, which focuses on introducing new features.

At some point you may want complete working sample code for reference, instead of just code snippets; the WebGL 2 Samples pack is a resource you’ll find useful.

That’s enough for an intro. First of all, let’s get WebGL 2 working on your machine.

How do I start using WebGL 2?

Get a WebGL 2 Implementation (Browser)

You may have seen this before; the short version is that, as of this writing, you need a cutting-edge browser build, such as Chrome Canary or Firefox Nightly, possibly with its WebGL 2 flag enabled.

Get a WebGL 2 Context

Programmers always try to support as many browsers as possible. So do I. On top of the WebGL 1 version of getContext, we first try to access WebGL 2; if this fails, we drop back to WebGL 1. Here’s an example derived from the Cesium WebGL engine:

// `canvas` and `webglOptions` are assumed to have been set up elsewhere
var defaultToWebgl2 = true;   // set to false to force a WebGL 1 context

var webgl2Supported = (typeof WebGL2RenderingContext !== 'undefined');
var webgl2 = false;
var gl;

if (defaultToWebgl2 && webgl2Supported) {
    gl = canvas.getContext('webgl2', webglOptions);
    if (gl) {
        webgl2 = true;
    }
}
if (!gl) {
    gl = canvas.getContext('webgl', webglOptions);
}
if (!gl) {
    throw new Error('The browser supports WebGL, but initialization failed.');
}

Promoted Features

Some of the new WebGL 2 features are already available in WebGL 1 as extensions. However, these features will be part of the core spec in WebGL 2, which means support is guaranteed. In this first blog entry we are going to focus on these promoted features, together with potential compatibility issues they may cause.

First, let’s see whether there’s a way to change as little of the existing extension-based WebGL 1 code as possible and have it work correctly with a WebGL 2 context.

We may find that in some cases (instancing and VAO), it’s only the function we are calling that changes from the extension version to core version, while the parameters and pipeline don’t change. We used to call fooEXT, now we simply switch to foo.

Thanks to JavaScript’s neat support for function objects, one solution is to create a function handle at startup and assign it either the extension version from WebGL 1 or the core version from WebGL 2. The rest of the code then calls this function handle.

if (!webgl2) {
    var vaoExt = gl.getExtension("OES_vertex_array_object");
    //...
    // bind() keeps the extension object as `this` when the function is called
    gl.createVertexArray = vaoExt.createVertexArrayOES.bind(vaoExt);
    //...
}
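With that in place, the rest of the engine can call the core-style name no matter which context it got:

var vao = gl.createVertexArray();   // works on both the WebGL 1 and WebGL 2 paths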

Yet this method can fail when changes are made in the shader (e.g., MRT). We still need to take a close look at each of these promoted features. So now let’s take a look at how the code changes for each of them.

Multiple Render Targets

MRT is a commonly used extension for deferred rendering, OIT, single-pass picking, etc.

WebGL 1

For MRT we used the WEBGL_draw_buffers extension as a work-around to write g-buffers in a single pass. Though it is widely supported (currently 57%+ of browsers, according to WebGL Stats), the extension-style code isn’t as clean as in WebGL 2:

var ext = gl.getExtension('WEBGL_draw_buffers');
if (!ext) {
  // MRT unavailable: fall back to rendering the g-buffers in multiple passes
}

We then bind multiple textures, tx[] in the example below, to different framebuffer color attachments.

var fb = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
gl.framebufferTexture2D(gl.FRAMEBUFFER, ext.COLOR_ATTACHMENT0_WEBGL, gl.TEXTURE_2D, tx[0], 0);
gl.framebufferTexture2D(gl.FRAMEBUFFER, ext.COLOR_ATTACHMENT1_WEBGL, gl.TEXTURE_2D, tx[1], 0);
gl.framebufferTexture2D(gl.FRAMEBUFFER, ext.COLOR_ATTACHMENT2_WEBGL, gl.TEXTURE_2D, tx[2], 0);
gl.framebufferTexture2D(gl.FRAMEBUFFER, ext.COLOR_ATTACHMENT3_WEBGL, gl.TEXTURE_2D, tx[3], 0);

Next we map the color attachments to draw buffer slots that the fragment shader will write to using gl_FragData.

ext.drawBuffersWEBGL([
  ext.COLOR_ATTACHMENT0_WEBGL, // gl_FragData[0]
  ext.COLOR_ATTACHMENT1_WEBGL, // gl_FragData[1]
  ext.COLOR_ATTACHMENT2_WEBGL, // gl_FragData[2]
  ext.COLOR_ATTACHMENT3_WEBGL  // gl_FragData[3]
]);

Also, an extension directive is needed in the shader:

#extension GL_EXT_draw_buffers : require
precision highp float;
// ...
void main() {
    gl_FragData[0] = vec4( v_position.xyz, 1.0 );
    gl_FragData[1] = vec4( v_normal.xyz, 1.0 );
    gl_FragData[2] = texture2D( u_colmap, v_uv );
    gl_FragData[3] = texture2D( u_normap, v_uv );
}

WebGL 2

For MRT, our code becomes neat and clean in WebGL 2. First, we attach each texture to a framebuffer color attachment:

gl.framebufferTexture2D(gl.DRAW_FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, tex[0], 0);
gl.framebufferTexture2D(gl.DRAW_FRAMEBUFFER, gl.COLOR_ATTACHMENT1, gl.TEXTURE_2D, tex[1], 0);
gl.framebufferTexture2D(gl.DRAW_FRAMEBUFFER, gl.COLOR_ATTACHMENT2, gl.TEXTURE_2D, tex[2], 0);

gl.drawBuffers then defines the array of attachments into which the fragment shader outputs will be written:

gl.drawBuffers( [gl.COLOR_ATTACHMENT0, gl.COLOR_ATTACHMENT1, gl.COLOR_ATTACHMENT2] );

Instead of mapping color attachments to the draw buffer, we directly use multiple out variables in the fragment shader. This code actually benefits from the new GLSL 3.0 ES, which we will discuss later in another blog post. However, using out itself is straightforward.

#version 300 es
precision highp float;
layout(location = 0) out vec4 gbuf_position;
layout(location = 1) out vec4 gbuf_normal;
layout(location = 2) out vec4 gbuf_colmap;
layout(location = 3) out vec4 gbuf_normap;
//...
void main()
{
    gbuf_position = vec4( v_position.xyz, 1.0 );
    gbuf_normal = vec4( v_normal.xyz, 1.0 );
    gbuf_colmap = texture( u_colmap, v_uv );  // texture() replaces texture2D() in GLSL 3.0 ES
    gbuf_normap = texture( u_normap, v_uv );
}

Additionally, since 2D array textures are now available, we can choose to render to different layers of a single array texture instead of to separate 2D textures.

gl.framebufferTextureLayer(gl.DRAW_FRAMEBUFFER, gl.COLOR_ATTACHMENT0, texture, 0, 0);
gl.framebufferTextureLayer(gl.DRAW_FRAMEBUFFER, gl.COLOR_ATTACHMENT1, texture, 0, 1);
gl.framebufferTextureLayer(gl.DRAW_FRAMEBUFFER, gl.COLOR_ATTACHMENT2, texture, 0, 2);
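A minimal sketch of allocating such an array texture (the 1024×1024 size and the three-layer count are placeholder values):

var texture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D_ARRAY, texture);
// one mip level of RGBA8 storage, 1024x1024, with 3 layers
gl.texStorage3D(gl.TEXTURE_2D_ARRAY, 1, gl.RGBA8, 1024, 1024, 3);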

Instancing

Instancing is a great performance booster for certain types of geometry, especially objects with many instances but without many vertices. Good examples are grass and fur. Instancing avoids the overhead of an individual API call per object, while minimizing memory costs by avoiding storing geometric data for each separate instance.

Instancing is exposed through the ANGLE_instanced_arrays extension in WebGL 1 (92%+ support). Now with WebGL 2 we can simply use drawArraysInstanced or drawElementsInstanced for the draw calls.

gl.drawArraysInstanced(gl.TRIANGLES, 0, 3, 2); // draw 2 instances of a 3-vertex triangle
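Per-instance vertex data is set up the same way as with the extension, just via the core function; a minimal sketch, where offsetBuffer and offsetLocation are hypothetical names for a per-instance attribute:

// Advance this attribute once per instance rather than once per vertex
gl.bindBuffer(gl.ARRAY_BUFFER, offsetBuffer);
gl.enableVertexAttribArray(offsetLocation);
gl.vertexAttribPointer(offsetLocation, 2, gl.FLOAT, false, 0, 0);
gl.vertexAttribDivisor(offsetLocation, 1);  // core in WebGL 2; was vertexAttribDivisorANGLE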

There is a new built-in variable (GLSL 3.0 ES) in the vertex shader called gl_InstanceID, which holds the index of the instance currently being drawn. For example, we can use it to assign each instance a separate color.

// Vertex Shader
flat out int instance;
// ...
void main() {
    instance = gl_InstanceID;
    // ...
}

// Fragment Shader
#define NUM_MATERIALS 4     // example value; must match the uniform array size
uniform Material {
    vec4 diffuse[NUM_MATERIALS];
} material;
flat in int instance;   // `flat` is a must for an int varying; we don't want the instance id interpolated
out vec4 color;
// ...
void main() {
    color = material.diffuse[instance % NUM_MATERIALS];
}

Vertex Array Object

VAO is very useful in terms of engine design. It allows us to store the vertex array states for a set of buffers in a single, easy-to-manage object. It is exposed through the OES_vertex_array_object extension in WebGL 1 (89%+ support). The function names map directly from the extension to core:

WebGL 1 with extension    WebGL 2
----------------------    -----------------
createVertexArrayOES      createVertexArray
deleteVertexArrayOES      deleteVertexArray
isVertexArrayOES          isVertexArray
bindVertexArrayOES        bindVertexArray

An example:

var vertexArray = gl.createVertexArray();
gl.bindVertexArray(vertexArray);

// set vertex array states
var vertexPosLocation = 0; // set with GLSL layout qualifier
gl.enableVertexAttribArray(vertexPosLocation);
gl.bindBuffer(gl.ARRAY_BUFFER, vertexPosBuffer);
gl.vertexAttribPointer(vertexPosLocation, 2, gl.FLOAT, false, 0, 0);
gl.bindBuffer(gl.ARRAY_BUFFER, null);
// ...

gl.bindVertexArray(null);

// ...

// render
gl.bindVertexArray(vertexArray);
gl.drawArrays(gl.TRIANGLES, 0, 6);

Shader Texture LOD

Shader texture LOD control makes mipmap level selection simpler for glossy environment effects in physically based rendering. This functionality is exposed through the EXT_shader_texture_lod extension in WebGL 1 (71%+ support), which provides explicit-LOD lookup functions such as:

vec4 texture2DLodEXT(sampler2D sampler, vec2 coord, float lod)

Now, as part of core GLSL 3.0 ES, an optional bias can be passed to texture, and an explicit level of detail to textureLod:

gvec4 texture (gsampler2D sampler, vec2 P [, float bias] )
gvec4 textureLod (gsampler2D sampler, vec2 P, float lod)
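For example, a minimal fragment-shader sketch (u_env and the level-of-detail value are made up for illustration):

#version 300 es
precision highp float;
uniform sampler2D u_env;
in vec2 v_uv;
out vec4 color;
void main() {
    // sample an explicit mip level, e.g. one chosen from surface roughness
    color = textureLod(u_env, v_uv, 3.0);
}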

Fragment Depth

The fragment shader can explicitly set the depth value for the current fragment. This operation can be expensive because it can cause the early-z optimization to be disabled. However, it is needed in cases where the z-depth is modified on the fly.

This functionality is exposed through the EXT_frag_depth extension in WebGL 1 (66%+ support).

// gl_FragDepth is now a built-in output, predeclared in GLSL 3.0 ES as:
out float gl_FragDepth;
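A minimal usage sketch; since gl_FragDepth is predeclared, nothing extra is required in the shader:

#version 300 es
precision highp float;
out vec4 color;
void main() {
    color = vec4(1.0);
    gl_FragDepth = 0.5;  // explicitly set this fragment's depth
}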

More details can be found in the GLSL 3.0 ES Spec.

Other compatibility issues

Look here for more information: WebGL 2 Spec Ch4.1

Credits