Resources


Executive summary: use the Perl script at https://github.com/erich666/chex_latex

I have been fiddling with this Perl script for a few editions of Real-Time Rendering. It’s handy enough now that I thought I’d put it up in a repository, since it might help others out. There are other LaTeX linters out there, but I’ve found them fussy to set up and use (“just download the babbleTeX distribution, use the GNU C compiler to make the files, be sure to use tippyShell for the command line, and define three paths…”). Frankly, I’ve never been able to get any of them to work – maybe I just haven’t found the right one, and please do point me at any (and make sure the links are not dead).

Anyway, this script runs over 300 tests on your .tex files, returning warnings. I’ve tried to keep it simple and not over-spew (if you would like more spew, use the “-ps” command line option to look for additional stylistic glitches). I haven’t tried to put in every rule under the sun; most of the tests exist because we ran into the problem in the book. The script is also graphics-friendly, in that common misspellings such as “tesselate” are flagged. It finds awkward phrases and weak writing. For example, you’ll rarely find the word “very” in the new edition of our book, as I took Mark Twain’s advice to heart: “Substitute ‘damn’ every time you’re inclined to write ‘very.’ Your editor will delete it and the writing will be just as it should be.” So the word “very” gets flagged. You could also find a substitute word; that website is noted in the comments in the Perl script itself, along with other explanations of the sometimes-terse warnings.

Maybe you love to use “very” – that’s fine; just comment out or delete that rule in the Perl script, which is trivial to do. Or put “% chex_latex” as a comment at the end of the line using it, so the warning is no longer flagged. The script is just a text file, nothing to compile. Maybe you delete everything in the script but the one line that finds doubled words such as “the the” or “in in.” In testing the script on five student theses kindly provided by John Owens, I was surprised by how many doubled words were found, along with a bunch of other true errors.
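To give a flavor of what such a test looks like, here is a rough sketch of the doubled-word check, written in JavaScript purely for illustration (the real tool is a single Perl script with far more rules), including the “% chex_latex” opt-out:

var doubledWord = /\b(\w+)\s+\1\b/i; // matches "the the", "in in", "The the", etc.
function findDoubledWords(texSource) {
    var warnings = [];
    texSource.split("\n").forEach(function (line, i) {
        if (line.indexOf("% chex_latex") !== -1) { return; } // author opted out on this line
        var match = doubledWord.exec(line);
        if (match) { warnings.push("line " + (i + 1) + ": doubled word \"" + match[1] + "\""); }
    });
    return warnings;
}

console.log(findDoubledWords("This is is a test.\nAll fine here."));
// -> [ 'line 1: doubled word "is"' ]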

Oh, and even if you do not use this tool at all, consider at least tossing your titles through this website’s tester. It checks that all the words in a title are properly capitalized or lowercase.
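The real site’s rules are more nuanced, but a toy sketch of the idea – capitalize every word except short connecting words, which stay lowercase unless they start the title – might look like this (again, an illustration, not that site’s actual logic):

var smallWords = ["a", "an", "and", "as", "at", "but", "by", "for", "in",
    "nor", "of", "on", "or", "the", "to"];
function checkTitleCase(title) {
    var problems = [];
    title.split(/\s+/).forEach(function (word, i) {
        var lower = word.toLowerCase();
        var shouldBeLower = i > 0 && smallWords.indexOf(lower) !== -1;
        var looksRight = shouldBeLower ?
            word === lower :
            word.charAt(0) === word.charAt(0).toUpperCase();
        if (!looksRight) { problems.push(word); }
    });
    return problems;
}

console.log(checkTitleCase("a tale of two cities"));
// -> [ 'a', 'tale', 'two', 'cities' ] – each should be capitalized; "of" is fine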

A few minutes of additional work with various tools will make your presentation look more professional (and so, more trustworthy), so “just do it”. And, do you see the error in that previous sentence (hint: I wrote it in the U.S.)?

Update: I also added a little Perl script for “batch spell checking,” which for large documents is much more efficient (for me) than most interactive spell checkers. See the bottom of the repo page for details.


As announced today at the Game Developers Conference by CRC Press / Taylor & Francis Group (booth 2104, South Hall – I’m told there’s a discount code to be had), we’re indeed finally putting out a new edition of Real-Time Rendering. It should be out by SIGGRAPH if all goes well. Tomas, Naty, and I have been working on this edition since August 2016. We realized that, given how much has changed in area lighting, global illumination, and volume rendering, we could use help, so we asked Angelo Pesce, Michał Iwanicki, and Sébastien Hillaire to join us, which they all kindly and eagerly did. Their contributions both considerably improved the book and got it done.

If you want me to just shut up and tell you where to pre-order, go here. You’ll note the lack of a cover, and the lack of the three new authors. Those will get fixed once there’s a more official launch and official pricing. I suspect the price won’t go down (which is a hint, and you can cancel later if I’m wrong). Which reminds me: you should also book a room now for SIGGRAPH if you have the slightest chance of going, since you can cancel up until July 22 without penalty.

One reason for no cover is that we’re still evaluating them. At the GDC booth you’ll see this artwork used:

fish cover candidate

This is a lovely, colorful model by Elinor Quittner. You can see the interactive model here; definitely check out the Model Inspector feature on that page by pressing the “I” key (or clicking the “layers”-looking icon in the lower right) once the model’s loaded. I love this feature of Sketchfab, that you can examine the various elements of a model. All that said, we’re still examining a number of other cover possibilities. Me, I’m happy we get to show off this potential design here now.

Back to the book itself. Let’s look at page count:

  • First edition, published 1999, 482 pages
  • Second edition, published 2002, 864 pages
  • Third edition, published 2008, 1045 pages
  • Fourth edition, to be published 2018, 1269? pages (1356?, including online)

This new edition is probably the worst-kept secret, in that anyone searching “Real-Time Rendering, 4th edition” on Amazon would have found the entry months ago, and CRC put it on their site some time before March 11. Also, doing a quick count just now, not including the editorial staff, 178 people helped us out in some way: reviewing sections or chapters, providing images, or clarifying concepts. The kind and generous support we’ve received is one of the reasons I love this field. There’s competition between companies, between research teams, and all the rest; it’s part of the landscape. But, underlying this “red in tooth and claw” veneer of competition, most everyone we asked genuinely wanted to share their knowledge and labor to help others understand how things work. I hope it’s the same in other fields, but I know it’s true for this one.

The progression of 3 years between the 1st and 2nd editions, 6 between the 2nd and 3rd, and 10 between the 3rd and 4th reflects not so much the length of time it takes to make each new edition (which has indeed steadily increased), but rather how long it takes us to forget all the stress and pain involved in making one. As a data point, our Google Doc of new references since the last edition is around 170 pages long, and it does not include references we could easily dismiss, nor those we ran into later when reading and writing more closely. Each page has about 20 references on it (some duplicated among chapters), about 3200 in all. In the fourth edition we added “only” 1151 new references and deleted 508 older ones, for a final total of 2059 references (this does not include references on collision detection – more on that in a minute).

We could have added all 3200 and more, but instead focused on work that sees use in applications, or that is newest and presents a good overview of the state of the art in its area. The field has simply become far too large for us to cover every piece of research, and doing so would have been a disservice to most readers. On the other end of the spectrum, we have continued to avoid API-specific information and code, as there are plenty of books, repositories, and articles describing these – this website points to many of them (and will be updated in the coming months). We aim to be a guide to algorithms for practitioners.

To conclude, here’s the list of chapters:

1 Introduction
2 The Graphics Rendering Pipeline
3 The Graphics Processing Unit
4 Transforms
5 Shading Basics
6 Texturing
7 Shadows
8 Light and Color
9 Physically-Based Shading
10 Local Illumination
11 Global Illumination
12 Image-Space Effects
13 Beyond Polygons
14 Volumetric and Translucency Rendering
15 Non-Photorealistic Rendering
16 Polygonal Techniques
17 Curves and Curved Surfaces
18 Pipeline Optimization
19 Acceleration Algorithms
20 Efficient Shading
21 Virtual and Augmented Reality
22 Intersection Test Methods
23 Graphics Hardware
24 The Future

If you have a great memory, you’ll notice that the “Collision Detection” chapter from the 3rd edition is missing. We have a fully updated chapter on this subject for the 4th edition. However, the page count was such that we decided to distribute it, along with the two math-related appendices from the 3rd edition, as online chapters free to download. (Collision detection is not strictly a part of real-time rendering, but it is an area we think is fascinating and where a fair bit has changed – about 40% of the chapter is new material.) We’ll be formatting all of these resources into PDF files nearer to release.

Because I have an addiction to text manipulation and analysis programs (more on that in a future blog post), I did some measures of how much the fourth edition differs from the third. The highly precise, but who knows how accurate, number I computed was 59.81% new material, by lines changed. Weighting further by character count, I get a value of 68.99% new. These are probably high – if you change one word in a sentence, or even just join two lines into one, the whole line is considered new – but the takeaway is that a lot has changed in the past decade. We’ve learned a huge amount from writing the book, and by SIGGRAPH we look forward to sharing it with you all.
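For the curious, the flavor of that measurement is roughly the following sketch (a reconstruction for illustration, not the actual script): a line of the new edition counts as changed if it appears nowhere in the old edition, and a second figure weights each line by its character count.

function percentNew(oldText, newText) {
    var oldLines = new Set(oldText.split("\n"));
    var newLines = newText.split("\n");
    var newCount = 0, newChars = 0, totalChars = 0;
    newLines.forEach(function (line) {
        totalChars += line.length;
        if (!oldLines.has(line)) {
            newCount += 1;           // the whole line counts as new, even for a one-word change
            newChars += line.length; // character-weighted variant
        }
    });
    return {
        byLines: 100 * newCount / newLines.length,
        byChars: 100 * newChars / totalChars
    };
}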


One reason I love interactive graphics is that every now and then something happens in the field – programmable shaders, powerful mobile devices, DX12/Vulkan/Metal, VR, AR, and now this – that changes what’s possible and how we think about interactive rendering. New algorithms arise to exploit new and different functionality. It’s a fun world!

Microsoft has added ray tracing support to its DirectX API. And this time it’s not an April Fools’ Day spoof, like a decade ago. Called DirectX Raytracing, DXR for short, it adds the ability to cast rays as shader invocations. There are already a bunch of articles and blog posts.

Here are the resources I’ve noticed so far (updated as I see new ones – let me know):

It will be interesting to see if there’s any spike of interest in ray tracing on Google’s analytics. While I doubt having DXR functionality will change everything – it still has to be performant compared to other, specialized techniques – it’s great to see another tool in the toolbox, especially one so general. Even if no ray tracing is done in an interactive renderer under development, it will now be much easier to get a ground-truth image for comparison when testing other techniques, since shader evaluations and all the rest now fit within a ray tracing framework. Ray and path tracing, run long enough (or smart enough), give the correct answer, versus screen-based techniques.

Doing these fast enough is the challenge, and denoisers and other filtering techniques (just as done today with rasterized-buffer-based algorithms) will see a lot of use in the coming months and years. I’m going to go out on a limb here, but I’m guessing GPUs will also get faster. Now if we can just get people to stop upping the resolution of screens and stop adding more content to scenes, it’ll all work out.

Even within the Remedy talk, we see ray tracing blending with other techniques more appropriate for diffuse global illumination effects. Ambient occlusion is of course a hack, but a lovely one, and ray tracing can stand in for screen-space methods and so avoid some of their artifacts. I think getting away from screen-space techniques is potentially a big win, as game artists and engineers won’t have to hack models or lighting to work around major artifacts seen in some situations, saving time and money.

I’m also interested to see if this functionality gets used in other applications, as there are plenty of areas – all sorts of audio design applications, various other types of engineering analyses – that could benefit from faster turnaround on computations.

Enjoy exploring! I look forward to what we all find.

Some of the eye-candy videos:


Andrew Glassner has written another book, Deep Learning: From Basics to Practice. It’s two volumes; find it on Amazon here and here. It is meant as a full introduction to the topic: 1650 pages of text, with an additional 90-page glossary at the end. It uses about 1000 figures to build up mental models of how the various algorithms and processes work, and explains how to use the popular Keras neural net API with Python. There’s a free sample chapter, on backpropagation, at his site. I’ve read about a quarter of the book and look forward to getting to “the meat” – Glassner lays the groundwork with chapters on probability, test data and analysis, information theory, and other relevant topics before plunging into deep learning itself. He aims to be accessible to math-averse readers, but does not dumb down the material. While the writing style is informal and approachable, it sometimes takes a bit of work to absorb, which is as it should be.

Full disclosure: I’m friends with Andrew and helped review a portion of the book. I’ve received no pay, and bought the books for my own education, as they look to be useful. I’m impressed by his dedication in writing such a tome, 20 months of labor, working through a large number of academic papers (each chapter ends with a set of references, along with URLs). From past works, I feel confident that what I’m going to read is factually correct and written in a clear fashion.

If you already know about the topic and are lecturing on the subject, he’s made all the figures free to download and use under Fair Use, along with his Python/Jupyter notebooks for all examples. Here’s a figure from the style transfer section of Chapter 28.

Style Transfer

My only regret is that there’s no back cover (e-books don’t need them) to hold relevant quotes from famous people. I even suggested a few:

  • “With artificial intelligence we are summoning the demon.” – Elon Musk (source)
  • “I think the development of full artificial intelligence could spell the end of the human race.” – Stephen Hawking (source)
  • “Artificial intelligence is the future, not only for Russia but for all of mankind… Whoever becomes the leader in this sphere will become the ruler of the world.” – Vladimir Putin (source)

Wouldn’t you want to read a book explaining the methods that will bring about the downfall of our civilization? Of course, they mean general intelligence, not the specialized tasks deep learning is aimed at. Books such as Incognito show how little we know of our own internal workings, how consciousness is just a small part of what the brain’s about. It’s hard to imagine we’re going to suddenly crack the problem of creating general intelligence any time soon, let alone create a runaway paperclip maximizer.

This existential threat feels way overblown, something that makes for great movies, sort of like how elevators go into free fall in Hollywood but never in real life (the problem was essentially solved a century ago). I saw Steven Pinker give a talk last night (his new book seems cheery; nice review here), and he noted that nuclear war and climate-change catastrophes are much more real and important than fictitious runaway AIs. (Fun fact: Pinker was once an assembly language programmer.) His opinion piece is a great read, pointing out the dangers of apocalyptic thought. But I digress…

So, whether you’re waiting for the end of the world or for the Singularity (or both), Glassner’s book looks to be a good one to read in the meantime to get a grounding in this old-yet-new field and learn how to use deep learning systems available (for free!). Oh, and the two volumes are ridiculously cheap, and I find I can even read them on my cell phone.

The book WebGL Insights is now free to download as a PDF. Go get it.

Many of the articles are, of course, WebGL-centric, but some articles in the Rendering section have general interest, especially for mobile developers. WebGL is “trailing edge,” in that it’s tied to OpenGL ES 2.0, which is what most mobile devices run, so techniques in that section will also run in mobile apps in general. WebGL 2 (not covered in this book) is basically ES 3.0, and currently has 22% phone support and 8% tablet support – tablets don’t get refreshed as rapidly as phones.
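Incidentally, if you want to check what a given browser or device can do, feature-detecting WebGL 2 takes only a couple of lines with the standard canvas API:

var canvas = document.createElement("canvas");
if (canvas.getContext("webgl2")) {
    console.log("WebGL 2 (ES 3.0-class) is available");
} else {
    console.log("No WebGL 2; stick to WebGL 1 / ES 2.0 techniques");
}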



by Sebastien Vandenberghe

With the growing number of experiences built using WebGL, and all the improvements made in the WebVR/AR space, it is critical to have efficient debugging tools. Whether you are just starting out or are already an experienced developer of 3D applications with WebGL, you likely know how important tools can be for productivity. Looking for such tools, you probably came across Patrick Cozzi’s blog post highlighting the most common ones. Unfortunately, many of these tools are no longer compatible with your project, due to missing WebGL 2 features or extensions, such as draw buffers, 3D textures, and so on.

As a core contributor to BabylonJS, working at the engine level, I need to see on a daily basis the entire creation of frames, including all the available information from the WebGL state (depth, stencil, blend, etc.) as well as the list of commands along with their arguments. In order to optimize the engine, I also need information and statistics about memory, draw calls, and primitives. These needs were a big motivation for me to develop SpectorJS. And as we love the WebGL community, we decided to make it an open-source project, compatible with all existing WebGL 3D engines.

At the end of this walkthrough, you will be able to easily capture and inspect any WebGL frames rendered in your favorite applications. If you have any issues, do not hesitate to report them on GitHub. To stay informed of all the new features, follow us at @SpectorJS.


Installation

To save you time, the tool is directly available as a browser extension for Chrome and Firefox (more browsers are coming soon).

Embedding the library in your application or side-loading the extension are also possible. More information can be found on Github.

Basic Usage

Once installed, you can navigate to any website using WebGL, such as the Babylon JS playground, and you will notice the extension icon turning red in the toolbar.

This highlights the presence of a canvas with a 3D context in the page or its embedded iframes. Pressing the toolbar button reloads the page, and the icon turns green, as Spector is now ready to capture. During the refresh, Spector injects additional debug code that collects state and command information, along with other statistics.

Note: We do not enable the tool by default, so as not to interfere with any WebGL program unless explicitly requested.

Clicking this green button will display a popup helping you to capture frames.

Following the on-screen instructions and clicking the red circle will trigger a capture. If a canvas is selected, you can also pause or play the rendered canvas frame by frame from this menu. Once the capture has completed, a result panel will be displayed containing all the information you may need.

The bottom of the menu helps capture what happens during page load on the first canvases present in the document. You can easily choose the number of commands to capture, as well as specify whether you would like to capture transient contexts (contexts created in the first canvas, even if not part of the DOM).


Note: A few things might prevent you from capturing the context, the main one being that nothing is rendered if the scene is fully static. If this happens, moving the camera after pressing the capture button should be enough to start the capture.

Note: As collecting the information is pretty expensive, the capture may take a long time, and you might have to press “wait” a few times when the browser notifies you that the page is unresponsive. Unfortunately, we cannot work around this, as the capture needs to happen synchronously during the execution of your code. Without a synchronous capture, the rest of your code would continue to react to external events, with potential side effects on the capture.

Capture View

The left side of the screen displays all the visual state changes that happen during the creation of the frame, alongside their target framebuffer information. This helps you quickly understand how the frame was built during troubleshooting sessions. Selecting one of the pictures automatically selects the command associated with it. The visual capture handles all the possible renderable outputs, such as cube textures, 3D textures, draw buffers, render target textures, render buffers, and so on.

The central panel is the commands panel. It displays the list of commands that were executed on the captured context during the frame. These are displayed chronologically. A color code is used to highlight issues and identify draw calls:

  • Orange Background: The selected command.
  • Blue Background: Draw Calls or Clear commands.
  • Green Command Name: Valid Commands (changing state to a new value).
  • Orange Command Name: Redundant Commands (the value applied is the same as the current one – useful to know when optimizing a WebGL application).
  • Red Command Name: Deprecated WebGL Commands.

Selecting a command displays, on the right side, all of its detailed information, including the command name, arguments, and JavaScript call stack. If a draw call has been selected, the various states involved in this call are all available. This is usually a pretty long list of information, as the capture contains the exhaustive list of states, attachments, programs, shaders, attributes, VAOs, uniforms, UBOs, transform feedbacks, and their attached properties. From this panel, the shader source code is also available from the program information, by following the Click to open link:

This opens a beautified view of the shader code, helping to ensure the defines and the code itself are as expected:

Note: Some information might be empty if there is an issue in the engine. For instance, unbound textures might lead to empty uniform information for the sampler. This is usually an interesting warning and more analytics are in progress to help highlight such use cases better.

A few other views are available for each capture.

Init and End State

Once a capture is open, the top command bar includes links to the initial and final state of the capture. This is useful for seeing how the context was set up before the capture and where it ended up, which helps deal with issues happening between frames, for instance.

Context and Frame Information

Commonly there are issues in WebGL applications related to either the canvas or the context setup. To be sure the current setup is correct, the information panel displays all the queryable information. It also contains statistics about the captured frame, such as memory information, the number of calls to each command, and drawn-primitive information.

Sharing Captures

Since we often collaborate with others on projects or use multiple platforms, it is critical to be able to save and share captures. To do this, you can simply navigate to the Captures link in the menu, where all the captures of the session have been stored. Clicking on the floppy icon (nostalgia FTW) downloads the captured JSON file.

To open and view this file, drag and drop it onto the extension popup or the dedicated area of the capture list. This feature can save a lot of time when troubleshooting customer or cross-platform issues.

How to Compare Captures

As it is needed more often than anybody would like, comparing captures after an engine change is a must-have. A full capture comparison is currently under development, but in the meantime, captures can at least be put in different tabs of the browser, making it easier to check differences.

Checking the box in the popup menu forces the next capture to open in a new tab:

Custom Data

Displaying custom information is a nice trick to quickly identify the relationship between a material and its shader or between a mesh and its buffers. Adding custom data to the capture is achievable by adding a special field named __SPECTOR_Metadata to any WebGLObject. Once the field has been set, any command relying on this object displays the related metadata in the property panel.

var cubeVerticesColorBuffer = gl.createBuffer();
cubeVerticesColorBuffer.__SPECTOR_Metadata = { name: "cubeVerticesColorBuffer" };

The custom name “cubeVerticesColorBuffer” then shows up in the capture metadata wherever the buffer is in use.

Extension Control

Another interesting feature is the ability to drive the extension from code. Once the extension is enabled, from your browser’s dev tools, or even from your own code, you can call the following APIs on the spector object:

  • captureNextFrame(obj: HTMLCanvasElement | RenderingContext): Begins a capture of the next frame of a specific canvas or context.
  • startCapture(obj: HTMLCanvasElement | RenderingContext, commandCount: number): Starts a capture on a specific canvas or context. The capture stops once it reaches the number of commands specified as a parameter, or after 10 seconds.
  • stopCapture(): ICapture: Stops the current capture and returns the result as JSON. It displays the result if the UI has been displayed. This returns undefined if the capture has not completed or did not find any commands.
  • setMarker(marker: string): Adds a marker that is displayed in the capture, helping you analyze the results.
  • clearMarker(): Clears the current marker from the capture for any subsequent calls.

The “spector” object is available on the window for this purpose.
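For example, to grab the next rendered frame of a particular canvas from the dev tools console (the canvas id below is made up – substitute your own):

var canvas = document.getElementById("renderCanvas"); // hypothetical canvas id
spector.captureNextFrame(canvas);
// or, to stop automatically after a set number of commands (or 10 seconds):
spector.startCapture(canvas, 500);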

This can be a tremendous help for capturing the creation of your shadow maps, for instance. It can also be used to trigger a capture based on a user interaction, or to set markers in your code to better analyze the capture.

The following example could be introduced safely in your code:

// Guarding through window keeps this safe even when the extension is not
// enabled (a bare "spector" would throw a ReferenceError if undefined).
if (window.spector) {
    window.spector.setMarker("Shadow map creation");
}
[your shadow creation code]
if (window.spector) {
    window.spector.clearMarker();
}

Using the Standalone Version

If you prefer to use the library in your own application, you can find it on npm: spectorjs
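A minimal embedding sketch (based on the project README at the time of writing – check the repository for the current API):

var SPECTOR = require("spectorjs");

var spector = new SPECTOR.Spector();
spector.displayUI(); // shows the capture UI on top of your page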

Going Further

As this extension is pretty new and still under active development, a few features have been discussed for the next releases:

  • Capture Comparison
  • Image Comparison
  • Remote Debugging
  • Shader Editor

Useful links:

  1. Website
  2. Github
  3. Roadmap
  4. Report Issues
  5. Twitter @SpectorJS

I would particularly like to thank Eric Haines for the time he spent reviewing this article, knowing the challenge it represents considering my English 🙂


I made a page of thumbnail images of the 297 three.js examples. Here it is:

http://www.realtimerendering.com/threejs/

The three.js site used to have a page like this. I’m not sure why it disappeared, but now I don’t care, as I can more easily find demos I’ve looked at before but whose names I’ve forgotten.

Bonus links: Stemkoski and Yomotsu also have useful demo pages, which used to be prominently linked from the three.js site but now are not.


WebGL Links Page

I got tired of re-finding various useful WebGL and three.js links, so I made a page:

http://www.realtimerendering.com/webgl.html

What cool things am I missing?

I’ve made it a page of links I am likely to want to check out in the future. It’s a bit hard to draw the line. For example, I didn’t bother adding fun demos such as this and this, but I did add the page where I browse new demos. I don’t list development systems such as Goo Create for non-programmers, which is built on this open-source WebGL engine and has some interesting features. Nice things all, but I personally am unlikely to come back to them (or if I do, they’re now in this blog post).


Some staff at Taylor & Francis kindly dug up some of the supplemental materials (mostly code) for the journal of graphics tools, namely, volumes 10-13. I’ve waded through it all and added these resources to the code repository:

Github JGT repository

If you have code from a JGT article that’s not listed here, please do send it on to me and I’ll add it.


And, it’s over – Springer appears to have shut the gates a day later. Mistake? Buzz-generating marketing ploy? Who knows? I’ll leave the rest of the post intact, but books are no longer free. Some articles are, such as Knuth’s.

All books from Springer that are ten years old or older are free – go look.

Quoting Vít Tuček here, from Facebook (reposted by Pete Shirley):

Springer has made a lot of math & physics books available online, for free! Everything that is more than 10 years old.

If you don’t know which book you may want, you can start here: http://mathoverflow.net/questions/tagged/books

This links to the Graduate Texts in Mathematics series: https://t.co/R1EYrTrz5w

This is for all materials (books, journals, chapters, articles) from all fields: http://goo.gl/cB5rRc

This excellent computational geometry book is available (the 2nd edition; the latest, 3rd edition costs money), as is this older-but-worthwhile one. For de Berg’s work, the free version is the second edition; other than these errata fixes, the 3rd edition’s major changes are that Chapter 7 adds information on Voronoi diagrams of line segments and farthest-point Voronoi diagrams, and Chapter 12 adds BSP trees for low-density scenes.

There are also older computer graphics related books, e.g., this one and this one. Ancient, but the price is right, and some of this stuff doesn’t change.

Handy list of direct links for the math & physics PDFs here.

Me, I’m digging around for various recreational math books. One of my favorite books, period, is here: One Jump Ahead. There’s a recreational math book, Tracking the Automatic Ant, a collection from the Mathematical Intelligencer. Some bits of newer stuff from the Mathematical Intelligencer are also available, e.g., an article on mathematical vanity plates by Knuth, of all people. Some books I can’t find, as Springer’s search is pretty wonky; e.g., The Science of Cooking appears to somehow not exist, though there’s a short article available by the author.

Happy hunting, and email me or let us know in the comments if you find any other gems related to computer graphics.
