
Minecon 2016 Report

Say what? Minecon is a convention for Minecraft, so why in this blog? Well, I was invited to be on a panel about 3D printing Minecraft models, since I wrote Mineways (which gets a crazy 600 downloads a day; beats me who all these people are. I think it’s a case of 600 download it, 60 try it, 6 try it more than once, 0.6 become real users).

David Ng in the Mineways hat

It was a bit odd going to this convention, especially since it was at the Anaheim Convention Center, where I was just two months ago for SIGGRAPH. This convention is about the same size as SIGGRAPH: 12,000 attendees, plus panelists, staff, exhibitors, etc. One organizer said a total of 14K attended. Of course, the tickets for Minecon sold out in 6 minutes – there are over 100 million copies of Minecraft sold, and a lot of fanatical users out there. It’s only two days long, it uses somewhat less (and sometimes more) of the convention center’s space, and the median age of an attendee is probably around 12. My photo album is here; note in particular this one, where just about everyone is in one very long room – that’s something I don’t see at SIGGRAPH.

Not SIGGRAPH

It’s not all just kids and pixelated blocks. There were a few good general technical talks about VR, video techniques, voxel modeling methods, etc. For example, John Carmack and others from Oculus spoke about the challenges of porting Minecraft to the Rift. Scattered throughout this presentation are some interesting bits about the user experience. There were some clever ideas, such as: a person jumping a short distance actually gets a view that travels in a straight line, though his friends see him jumping in a parabola (parabolas upset the stomach more). Playing the ambient music actually helps stave off motion sickness for a while, or so they think. They have a “chill out” feature that lets you leave the action and hang out in a quiet virtual room, looking at the game through a 2D window. Various other things. They spent six months polishing the VR version before releasing it (despite a fair bit of pressure to get it out the door in a minimal-port form). Best/ickiest Carmack quote: “Don’t push it. We don’t need to be cleaning up sick in the demo room.” Honestly, an interesting session, with hints of the political pressure on the team. I’m impressed that they were able to take the time to polish the experience, given how an early release of the game’s port would undoubtedly have helped drive sales.
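
The straight-line jump is simple enough to sketch in code. Below is a minimal C++ illustration of the idea as I understood it – my own names and constants, not anything from the actual Oculus port: the local player’s camera travels linearly from takeoff to landing, while everyone else sees the usual parabolic arc.

```cpp
#include <cstdio>

struct Vec3 { float x, y, z; };

// Standard parabolic jump: what other players see.
// p0 = takeoff point, p1 = landing point, T = jump duration, h = apex height.
Vec3 jumpArc(Vec3 p0, Vec3 p1, float T, float h, float t) {
    float s = t / T;                      // normalized time, 0..1
    Vec3 p;
    p.x = p0.x + (p1.x - p0.x) * s;
    p.z = p0.z + (p1.z - p0.z) * s;
    // Parabola through p0.y and p1.y, peaking h above the chord at s = 0.5.
    p.y = p0.y + (p1.y - p0.y) * s + 4.0f * h * s * (1.0f - s);
    return p;
}

// VR camera path: same endpoints, but a straight line,
// since the vertical acceleration is what upsets the stomach.
Vec3 vrCameraPath(Vec3 p0, Vec3 p1, float T, float t) {
    float s = t / T;
    return { p0.x + (p1.x - p0.x) * s,
             p0.y + (p1.y - p0.y) * s,
             p0.z + (p1.z - p0.z) * s };
}

int main() {
    Vec3 a{0, 64, 0}, b{2, 64, 0};
    for (float t = 0.0f; t <= 0.5f; t += 0.1f) {
        Vec3 arc = jumpArc(a, b, 0.5f, 1.25f, t);
        Vec3 cam = vrCameraPath(a, b, 0.5f, t);
        printf("t=%.1f  avatar y=%.2f  camera y=%.2f\n", t, arc.y, cam.y);
    }
}
```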

BlockWorks had a session on how they did their voxel modeling, with some slides of their incredible constructions. Seriously, click that link. It was interesting in that each artist tends to have a specialty – architecture, organics, mechanicals, etc. They use the free Chunky path tracer (great tool! I’ve played with it.) along with traditional renderers such as Cinema 4D. I would have liked to hear more about their custom voxel-based modeling tools, but alackaday, not much mention beyond WorldEdit and VoxelSniper. Also, they avoid using mesh voxelizers such as binvox and Qubicle. (Qubicle, by the way, looks like a nice package for All Things Voxel, including a mesh voxelizer that retains the color of the mesh.) Related, though not at Minecon: this article about RenderMan and Minecraft – a good, detailed “how to” read.

You remember the Visible Human Project? I laughed when I saw this, and talked at length with its creator, Wizard Keen, aka Adam Clarke.

1:40 into The Torso

I also met some people working on Spigot, the unofficial mod platform for Minecraft. One explained to me in depth how Minecraft’s impressive terrain generation works, as he had carefully decompiled it all in order to make a mod. Basically, there’s pass after pass of applying Perlin noise in various ways. First comes the overall terrain, in a single block type. Then biomes are assigned. Then each biome gets its “topsoil layer” (e.g., grass and three dirt blocks atop everything for the grasslands). Carve a bit more. Add tunnels separately. Add minerals. On and on. What was interesting to me was that there was this layered approach at all – it isn’t some giant single-pass function; rather, each pass has its own function, e.g., add topsoil.
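
None of this was presented as code, but the layered structure is easy to sketch. Here’s a toy C++ version of the idea – my own hash-based noise stands in for Minecraft’s octaves of Perlin noise, and all the thresholds are invented – showing one function-like pass after another, applied in order:

```cpp
#include <cmath>
#include <cstdint>
#include <cstdio>

// Toy value noise standing in for Perlin; the real generator
// layers several octaves of Perlin noise per pass.
float hashNoise(int x, int seed) {
    uint32_t h = (uint32_t)x * 374761393u + (uint32_t)seed * 668265263u;
    h = (h ^ (h >> 13)) * 1274126177u;
    return (h & 0xFFFF) / 65535.0f;   // 0..1
}

float smoothNoise(float x, int seed) {
    int i = (int)floorf(x);
    float f = x - i, u = f * f * (3 - 2 * f);  // smoothstep blend
    return hashNoise(i, seed) * (1 - u) + hashNoise(i + 1, seed) * u;
}

enum Block : char { AIR = '.', STONE = '#', DIRT = 'd', GRASS = 'g', ORE = '*' };

const int W = 64, H = 32;
Block world[W][H];

int main() {
    for (int x = 0; x < W; x++) {
        // Pass 1: overall terrain height, in a single block type.
        int height = 12 + (int)(smoothNoise(x * 0.15f, 1) * 12);
        for (int y = 0; y < H; y++)
            world[x][y] = y < height ? STONE : AIR;

        // Pass 2: "topsoil layer" - grass plus three dirt blocks on top.
        world[x][height - 1] = GRASS;
        for (int y = height - 4; y < height - 1; y++)
            if (y >= 0) world[x][y] = DIRT;

        // Pass 3: carve tunnels where a second noise field is high.
        for (int y = 1; y < height - 1; y++)
            if (smoothNoise(x * 0.3f + y * 7.1f, 2) > 0.85f)
                world[x][y] = AIR;

        // Pass 4: sprinkle minerals into the remaining stone.
        for (int y = 1; y < height - 4; y++)
            if (world[x][y] == STONE && smoothNoise(x * 0.9f + y * 13.7f, 3) > 0.93f)
                world[x][y] = ORE;
    }
    for (int y = H - 1; y >= 0; y--) {           // print, top row first
        for (int x = 0; x < W; x++) putchar(world[x][y]);
        putchar('\n');
    }
}
```

The point isn’t the noise quality; it’s that “add topsoil”, “carve tunnels”, and “add minerals” are separable passes, each of which can be tuned (or modded) independently.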

Other than the lead developer of Minecraft, YouTubers were the stars of the con. By some estimates, 15% of YouTube’s content is videogame related, and Minecraft dominates (GTA is second). (Hey, before I did 12 steps for Minecraft, I made well over a hundred videos, mostly not worth watching. Here’s a reason why I love Minecraft.) The two main video editing packages used for Minecraft videos are Premiere Pro and Vegas Pro. People use FRAPS, OBS, and Action for video capture.

Tidbit: this Minecraft-related Hour of Code lesson from code.org is evidently extremely popular. Point kids at it; it uses a visual code editor (Blockly) to introduce “if” statements, loops, etc.

Entirely non-Minecraft, but I learned about it at Minecon: I’m probably the last person on Twitter to know about TweetDeck, which makes Twitter a bit friendlier.

My little talk at the panel went well, slides here. I particularly like that David Ng is using Minecraft with his students to build physical worlds.

Oh, and videos. I’d say the most amazing videos I saw were in the Rube Goldberg session; a short session preview is here, worth your while. The coaster rides by Nuropsych1 and others were astounding. If you want to veg out (and wait through ads), check out the Dr. Who, GhostBusters, and Beetle Juice coasters, among others. Impressive builds, great lighting effects and optical illusions, lovely redstone (electronics) work, camera tricks – it’s hard to believe it can all be done within Minecraft.

4:18 into the Dr. Who coaster – click that link right now! https://youtu.be/AKyt12Ezh3s?t=258

More With the Links

I love the movie sequel title “2 Fast 2 Furious”. How clever, and a great way to guarantee there will never be a third movie. Well, there was, but they had to go the colon route, “… : Tokyo Drift”.

Which is indicative of nothing, as I don’t think I’ve ever actually seen any of these movies. I was reminded of the title as my goal today is to whip through the backlog of 72 potential blog resource links I’ve been gathering on del.icio.us. [Well, as it turns out, I got through 39 of them (the fresher ones), 33 to go…]

ShaderX^7 has been published. We hope to give an overview of it sometime soon (mine’s on backorder from Amazon.com).

From various sources I heard that OnLive got a bit of notice at GDC. Think: pure server-side computation of all graphics for a game, i.e., a cloud computing model. Now your grandma’s computer or even a rigged-out TV can play Crysis, assuming the net bandwidth is there. Which of course makes me think: what about latency? Lag in how other players see your actions is always there, and causes mismatches (“how did I instantly die?”). But adding lag to seeing the consequences of your own actions seems like a non-starter for shooters, at least.

Mark DeLoura has a great two-part article on which game engines are licensed for titles. The first part is a general survey; the second is about the technology involved. I found it interesting to see what people cared about, e.g., multicore is on people’s minds. Nothing too shocking here, but it’s fantastic to see what is getting used, and why, in this marketplace.

Related to this, I happened across a list of game engines on Wikipedia. Not massively useful (e.g., no sense of what’s popular), but a starting place.

John Ratcliff has a graphics math library available for download with an unrestrictive reuse license. He recently added best-fit methods for AABBs and OBBs.
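
The AABB half of that is the easy part, and worth a quick illustration (a sketch of my own, not Ratcliff’s code): the best-fit axis-aligned box is just the componentwise min and max of the point set. Best-fit OBBs are the genuinely hard case, usually attacked with covariance analysis or rotation search.

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };
struct AABB { Vec3 mn, mx; };

// Best-fit AABB: componentwise min/max over the point set.
AABB fitAABB(const std::vector<Vec3>& pts) {
    AABB b{ pts[0], pts[0] };
    for (const Vec3& p : pts) {
        b.mn.x = std::min(b.mn.x, p.x); b.mx.x = std::max(b.mx.x, p.x);
        b.mn.y = std::min(b.mn.y, p.y); b.mx.y = std::max(b.mx.y, p.y);
        b.mn.z = std::min(b.mn.z, p.z); b.mx.z = std::max(b.mx.z, p.z);
    }
    return b;
}

int main() {
    std::vector<Vec3> pts = { {1, 2, 3}, {-4, 0, 2}, {5, -1, 7} };
    AABB b = fitAABB(pts);
    printf("min (%g %g %g)  max (%g %g %g)\n",
           b.mn.x, b.mn.y, b.mn.z, b.mx.x, b.mx.y, b.mx.z);
}
```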

I was interested to look at the open source, cross-platform (!) model viewer GLC. I’ve wanted something like this for doing some experiments with mesh manipulation. Not a bad viewer, but that’s all it is at this point, unfortunately: you can’t even export to a different 3D format. The search continues… If you know a reasonable open source 3D file viewer/converter out there, please tell me. I should probably bite the bullet and just use Blender, but this application is way overkill.

CUDA voxel rendering – pretty impressive!

I liked this post on optimization, mainly because of the line “I went in and found out that some title bar was getting rendered 140 times every time you refreshed the screen”. I can entirely relate (though 140 must be some kind of record): too many times I’ve put in output debugging statements showing updates, only to see 2, 3, or 6 updates happening. I once started on a project and in the first few weeks increased performance by 100%, simply by noting that the main draw path was being executed twice each frame.

Speaking of performance, there’s an article on volume rendering optimizations when using a ray-casting approach on the GPU.
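
The workhorse optimization in GPU ray-cast volume rendering is early ray termination: composite front to back and stop marching once the accumulated opacity saturates, since everything behind is occluded. A minimal CPU-side sketch of the loop – the density function here is a made-up placeholder for what would be a 3D texture fetch on the GPU:

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

// Placeholder density field: a fuzzy unit sphere at the origin.
float density(Vec3 p) {
    float r = sqrtf(p.x * p.x + p.y * p.y + p.z * p.z);
    return fmaxf(0.0f, 1.0f - r);
}

// Front-to-back ray marching with early ray termination.
float traceOpacity(Vec3 origin, Vec3 dir, float tMax, float dt) {
    float alpha = 0.0f;
    for (float t = 0.0f; t < tMax; t += dt) {
        Vec3 p{ origin.x + dir.x * t, origin.y + dir.y * t, origin.z + dir.z * t };
        float sampleAlpha = density(p) * dt;    // opacity of this slab
        alpha += (1.0f - alpha) * sampleAlpha;  // front-to-back compositing
        if (alpha > 0.99f) break;  // early termination: the rest is occluded
    }
    return alpha;
}

int main() {
    float a = traceOpacity({0, 0, -3}, {0, 0, 1}, 6.0f, 0.05f);
    printf("accumulated opacity: %.3f\n", a);
}
```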

Wolfenstein source code for the free iPhone version, along with Carmack’s documentation on the project, is available.

Software patents are only slightly dumber than business method patents, which are patently absurd. I hadn’t noticed until now, but there was recently a ruling on a business method patent, In re Bilski, which has been used to strike down software patents.

A detailed data and execution flow diagram for the new DirectX 11 pipeline front-end is available from Jolly Jeffers.

People are still making ray-tracing-specific hardware; witness Caustic Graphics. They have a rather amazing claim: “The CausticOne, however, thrives in incoherent raytracing situations: encouraging the use of multiple secondary rays per pixel. Its level of performance is not affected by the degree of incoherence.” Good trick. That said, I can’t say I see any large customer base for such a product. This seems like a company designed for acquisition, similar to Ageia. Fine by me, best of luck to them.

I’m happy to learn that the Humus site now has a news blog. This is a great site for demos of advanced techniques, and for honest comments about strengths and limitations of various approaches.

Another blog: The Geeks of 3D. Tracks demos, APIs, SDKs, and graphics card releases. Handy – some of the links here I found there.

There was a nice little article on data alignment on Gamasutra. Proper alignment is a key element in getting high performance.
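
The short version in C++ terms: field order determines padding, and sizeof reveals it immediately, while SIMD types additionally want 16-byte alignment. A quick illustration – the byte counts in the comments assume typical x86-64 padding rules:

```cpp
#include <cstdio>

// Poor ordering: the compiler inserts padding after each small field
// so the following larger field lands on its natural alignment.
struct Sloppy {
    char   a;  // 1 byte + 7 padding
    double b;  // 8 bytes
    char   c;  // 1 byte + 7 padding (struct size rounds up to alignof)
};

// Same fields, largest first: the padding shrinks.
struct Packed {
    double b;  // 8 bytes
    char   a;  // 1 byte
    char   c;  // 1 byte + 6 padding
};

// Over-aligned for SIMD: 16-byte alignment allows aligned vector
// loads/stores (e.g., SSE movaps) on this data.
struct alignas(16) SimdVec {
    float v[4];
};

int main() {
    printf("Sloppy: %zu bytes\n", sizeof(Sloppy));             // typically 24
    printf("Packed: %zu bytes\n", sizeof(Packed));             // typically 16
    printf("SimdVec: %zu-byte aligned\n", alignof(SimdVec));   // 16
}
```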

I was trying to find the name of the projection that maps latitude and longitude lines to equally spaced horizontal and vertical lines, for use with a surrounding spherical environment. From this interesting page (click on the “Wall Maps of the World” text) I found it: Plate Carrée.
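
For rendering purposes this is just the familiar equirectangular environment map: longitude and latitude map linearly to the image axes. A small sketch of the direction-to-texture-coordinate lookup, assuming a y-up convention (this is the standard formula, not anything from that maps page):

```cpp
#include <cmath>
#include <cstdio>

// Map a unit direction to Plate Carree (equirectangular) texture
// coordinates: longitude -> u, latitude -> v, both linear.
void dirToPlateCarree(float x, float y, float z, float& u, float& v) {
    const float PI = 3.14159265358979f;
    u = atan2f(z, x) / (2.0f * PI) + 0.5f;  // longitude, 0..1
    v = acosf(y) / PI;                      // latitude from the +y pole, 0..1
}

int main() {
    float u, v;
    dirToPlateCarree(0.0f, 1.0f, 0.0f, u, v);  // straight up
    printf("up: u=%.2f v=%.2f\n", u, v);       // v = 0 (top row of the map)
    dirToPlateCarree(1.0f, 0.0f, 0.0f, u, v);  // toward +x on the horizon
    printf("+x: u=%.2f v=%.2f\n", u, v);       // v = 0.5 (the equator)
}
```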

Predicting the future is so much more interesting than predicting the past. I love this: MIPS per $1000. It’s entertaining to equate raw computing power with structured processing. By the same equivalence, I should be able to hook up 1700 mice in parallel to get a human brain.

A great line from a GPU review: “Nvidia’s new line of unbelievably expensive cards will block out the sun, and ray-trace its own shadow in real time.”

Faber College’s motto is “Knowledge is Good”. Learning about the idea of metamers would have saved this article from confusion. Coming back to this article now, I see all the comments have been removed, and an apologia trying to convert confusion into enlightenment added, but I think this still misses the point. Sure, there is a color associated with a single wavelength of light. But, my guess is that 99.99% of the colors we perceive arrive at any location on the eye as light with a spectral mix of wavelengths, not a single wavelength (Naty will correct me if I’m wrong). Unless you’re Dr. Evil and deal with sharks with frickin’ laser beams on their heads on a daily basis. Hmmm, I’m probably forgetting some other single-wavelength phenomena, like fluorescence. Anyway, the article did lead me to look up more information on metamers on Wikipedia, where I learnt about metameric failure, a term I hadn’t heard before. One more reason a simple RGB representation of color isn’t sufficient.
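
To make the metamer idea concrete: two different spectra are metamers if they integrate to the same three responses against the observer’s sensitivity curves. Here’s a little C++ sketch that manufactures a metamer pair – the “cone” curves below are invented stand-ins, not real CIE data: take any spectrum, add a perturbation orthogonal to all three curves, and the tristimulus values don’t budge.

```cpp
#include <cmath>
#include <cstdio>

const int N = 8;  // coarse wavelength samples

// Invented "cone" sensitivity curves - stand-ins, not real CIE data.
float sens[3][N] = {
    {0.0f, 0.0f, 0.1f, 0.3f, 0.7f, 1.0f, 0.6f, 0.2f},  // long
    {0.0f, 0.2f, 0.6f, 1.0f, 0.7f, 0.3f, 0.1f, 0.0f},  // medium
    {0.4f, 1.0f, 0.7f, 0.2f, 0.0f, 0.0f, 0.0f, 0.0f},  // short
};

float dot(const float* a, const float* b) {
    float s = 0.0f;
    for (int i = 0; i < N; i++) s += a[i] * b[i];
    return s;
}

int main() {
    // Orthonormalize the sensitivity rows (Gram-Schmidt) so we can
    // project them out of a seed vector, leaving a null-space direction.
    float e[3][N];
    for (int r = 0; r < 3; r++) {
        for (int i = 0; i < N; i++) e[r][i] = sens[r][i];
        for (int q = 0; q < r; q++) {
            float p = dot(e[r], e[q]);
            for (int i = 0; i < N; i++) e[r][i] -= p * e[q][i];
        }
        float inv = 1.0f / sqrtf(dot(e[r], e[r]));
        for (int i = 0; i < N; i++) e[r][i] *= inv;
    }
    float d[N];
    for (int i = 0; i < N; i++) d[i] = (i % 2) ? 1.0f : -1.0f;  // seed vector
    for (int q = 0; q < 3; q++) {
        float p = dot(d, e[q]);
        for (int i = 0; i < N; i++) d[i] -= p * e[q][i];
    }

    // specB differs from specA everywhere, yet the observer can't tell
    // them apart. (Keep the perturbation small enough that the spectrum
    // stays nonnegative - physical spectra can't go negative.)
    float specA[N], specB[N];
    for (int i = 0; i < N; i++) {
        specA[i] = 1.0f;                    // flat spectrum
        specB[i] = specA[i] + 0.5f * d[i];  // a metamer of specA
    }
    for (int r = 0; r < 3; r++)
        printf("channel %d: A=%.4f  B=%.4f\n",
               r, dot(sens[r], specA), dot(sens[r], specB));
}
```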

Cute thing: Snapily lets you turn some set of images or video into lenticular prints.

I don’t have a lot to say about what I do at Autodesk. Here’s a tidbit.

Art for the day, crayons as pixels.