JCGT is moving along

There was a bit of a stall with JCGT, the Journal of Computer Graphics Techniques, last year, to be honest. It’s a volunteer effort and Real-Life(tm) can get in the way. We’ve regrooved: I’m temporary Editor-in-Chief, Marc Olano is the Production Manager, we’ve recruited more editors, and we’re all caught up with the backlog, as of today. So far nine papers have been published this year. There are more submissions in the pipeline. Bottlenecks are quickly cleared. New features such as ORCID links for authors have been added. Come July, Alexander Wilkie will take over as Editor-in-Chief, after he’s done co-chairing EGSR 2025. Expect more useful additions to JCGT for authors and readers in the future.

I’ve been helping out in this way since November 2024. I’ve learned that the glamorous job of Editor-in-Chief is mostly cajoling, wheedling, coaxing, and begging the other volunteers – reviewers, editors, and authors – to help keep things moving. The major tool needed is a good reminder calendar of some sort. I use Remember The Milk. It’s also a great whipping boy, “my calendar reminds me that your review is due.” See, not my fault that I’m nagging you, it’s the calendar’s. I joke – most people are happy to help out and do so in a timely fashion, and a few are incredibly fast at responding.

Honestly, I’m thankful for and impressed by the huge amounts of time and effort, freely offered, from hundreds of people in the computer graphics field over the journal’s 14-year history. JCGT’s focus on practical techniques is a niche rarely addressed by most conferences or journals. Thanks, all!

Now that we’re entering the summer conference season – I3D, EG, EGSR, HPG, DigiPro, SIGGRAPH, on and on – I’ll ask that you, yes, you, be on the lookout. Is there a cool paper you saw, or person you talked with, that had a useful technique which deserves more attention on its own? Please suggest that they consider submitting their idea to JCGT.

JCGT is run on a shoestring. Our one paid position is copy editor, the wonderful Charlotte Byrnes. This minimal structure, without any serious economic constraints or influences, lets us do what we want to do: offer peer-reviewed articles and code with “Diamond Open Access,” i.e., free to readers and authors. Better still, authors retain copyright on their articles, with a Creative Commons license allowing us to distribute them. This is rare in the publishing world; even non-profits normally own the copyright, often as a revenue source.

JCGT also has an informal arrangement with I3D. Since 2016, authors of JCGT articles accepted in the previous year have been offered the opportunity to present their work at I3D. For example, you’ll see four JCGT papers in I3D’s program this year. I3D 2025 has just happened, but the recordings should be up soon. Enjoy!

Below’s a teaser image from an article just published today – visit JCGT to see more.

Giving a Good Talk

We’re entering talk season, with a bunch of conferences coming up from May through August. I recently ran across an old article/talk from 1988 (updated in 2001) by Jim Blinn, Things I Hope Not to See or Hear at SIGGRAPH – find it starting on page 17 here. Short version: read it. Some of it’s dated (“don’t spend a lot of time fiddling with the focus”), much is not. You may disagree with some of the ideas there, but it’s worth 10 minutes of your time to check your assumptions. Plus, it’s funny.

Bits I particularly like (honestly, I could quote about a third of it – these are just teasers), slightly out of order, with my own views:

“Many of you are involved in the microcircuit revolution and tend to think this also applies to the text on your slides. It doesn’t. My personal rule is to put no more than six lines of text on any one slide.” – I see this guideline broken a fair bit nowadays, 17 lines on a slide, “hey, we have high-res displays.” You could probably project a whole page of text on the screen – would you? Why not? So, what’s your limit?

“But, you may ask, what if I have more than six lines? Well…just use more than one slide. See? Simple.” Honestly, slides are free. Admittedly there’s a tradeoff with having to read a new slide when it comes up, versus having everything laid out in one slide. But, there’s an excellent reason to avoid busy slides:

“The audience is not going to want to read a lot of text while simultaneously trying to pay attention to what you are saying. Text on slides should just consist of section headings.” If you’re reading your talk off the slide, that’s too much text. I think of the slide’s text parts as bits that fall in between the speaking, giving some structure and helping when attention wavers. There’s also a tendency to make the slideset a self-contained presentation. “You missed my talk? Well, my slideset explains it.” Please don’t. Or do, if you can’t blog or write an article about it, but put the detailed explanation in the notes section for each slide, not in the slide itself. I’ve even seen the speaker’s words in the notes section in one slideset, something I suspect we’ll see more of with AI assistance (cue amusing mistranscriptions – two weeks ago I saw for a speaker’s “there’s a handy link here” a closed caption of “there’s an ambulance here”).

“Don’t put more than one equation on a slide unless it is fantastically necessary.” My rule is to almost never put an equation in a talk, versus the ideas behind the equation, unless the talk is truly about that equation and it’s worth understanding the terms as presented. Equations are like super-dense encodings, a page of text compressed into a line. I don’t expect the speaker to pause and the audience to read and understand three hundred words projected on the screen, so I shouldn’t expect them to read a new equation and comprehend it during a presentation. If you do focus on an equation, his advice:

“Recast your equations into simpler chunks and give each chunk its own name. Make one master slide with the basic equation in terms of these names. Then make a separate slide to define each chunk.” I try something like that here.
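As a made-up illustration of that chunking (my example, not Blinn’s), take the rendering equation. The master slide shows only the named pieces, and each piece gets its own defining slide:

```latex
% Master slide: the equation in terms of named chunks
L_o = \mathrm{emitted} + \mathrm{reflected}

% Separate slide: define the "reflected" chunk on its own
\mathrm{reflected} = \int_\Omega f_r(\omega_i, \omega_o)\, L_i(\omega_i) \cos\theta_i \, d\omega_i
```

The audience absorbs the shape of the idea from the first slide, then each term in isolation, instead of parsing the whole integral at once.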

“Look up at the audience; it looks a lot better for the TV cameras.” I am guilty of staring at the slide on my laptop’s screen, or worse yet, turning away from the audience and talking to the projected slide.

“Probably the most important parts of your talk are the first and last sentences. Have these all figured out before you go up to the podium.” And, with that in mind, Jim’s ending:

“Look up. Bright slides, big letters.
Uh, I guess that’s all I have to say.
Thank you.”

Scaniverse

I’ve been on a Scaniverse kick. I mentioned it back in March. Since then I’ve been enjoying making scans of objects in my neighborhood and on vacation. It’s a free app for iPhone and Android. (Geez, I sound like a marketing droid. But, honestly, it’s fun to play with and I like seeing 3D Gaussian splats get used.)

What’s nice about the app is that, even if you don’t scan anything, you can see what scans other people have uploaded in whatever area you’re in. Scans are organized by location. That said, some of those scans are poor quality – I wish I could filter them out, and I’ve suggested as much.

The basic things I’ve found with scanning a scene:

  • Pick the “Splat” mode for representing the scene. It’s cool. I’ve never bothered trying the “Mesh” mode.
  • Start scanning from the view you want others to see the object from first.
  • Move around! Slowly, though. Walk around and video the object from all sides, if possible. You can kind of see which areas have enough data while you scan.
  • I usually go around an object at least twice, once with my camera low, once high, and at different distances. Scan for one to three minutes. Going for more than three minutes is too much (they say so).
  • Processing can wait: processing, enhancing, and uploading tie up the phone (you can’t do anything else with it) and take a lot of charge (maybe 5% or more per scan?).
  • “Enhance scan” seems worth doing, and can even be done multiple times – I haven’t experimented.
  • You don’t have to upload and share any scan; you can keep it just in your library. The library’s a bit odd: renaming a scan in your library won’t change its name on the map, and vice versa. I rename both.
  • More good tips can be found in Library, then the Gear icon. They’re quick and worth reading.

For more on splats and Niantic’s interest and history with them, see this worthwhile article – not dumbed down and with useful links. As mentioned before, Niantic has made their SPZ compressed format public, MIT license. Looking at my own scans, each takes up anywhere from 64 MB to 632 MB, averaging around 200 MB.

My shared scans are here, and here’s one that should appeal to all computer science types (people rub his nose for good luck):

Seven Things for March 24, 2025

  • JCGT is back, baybee! Six papers published in the past four months, two more to appear soon. Things were a bit stalled for a while, but we’ve reorganized and have caught up. I’m editor-in-chief until the summer, at which point Alexander Wilkie will take over.
  • Consider I3D 2025, May 7-9. It’s in Jersey City this year, just across the river from New York City. Some of the recent JCGT papers will also be presented there. Also, if you’re near the Boston area, consider the free one-day NESG conference at MIT on April 26th.
  • SIGGRAPH 2025 hotel registration is open. The page is a bit goofy: you have to pick from the list of hotels, sorted by distance, vs. something sensible like showing you a map and hotel options on it.
  • At GDC Microsoft announced more ray tracing extensions for DirectX 12. Shader execution reordering (SER) is now supported, along with opacity micromaps. SER is a large performance boost for applications using ray tracing. Execution reordering is also a part of Vulkan. Opacity micromaps help in ray tracing things like leaves rendered using cutout textures. As an example of the problem, back in 2020 Activision’s Call of Duty developers noted the heavy cost of alpha-testing textures with AnyHit during ray tracing. This feature improves that situation by making cutout testing much faster.
  • I thought our “tools that we use” list was a bit long – see the bottom of this page. That’s nothing – Angelo Pesce gives quite the set of lists in his blog. Some great ones in there, some I’ve never tried. Worth a look!
  • Fun free app: Scaniverse, by the Pokemon Go people, Niantic. 3D Gaussian splats capture places around the world. You can find scans available nearby on a map and add your own. To be honest, some of the scans are terrible, and I wish I could set a good starting viewpoint for the ones I’ve made. Here are two reasonable examples – one and two – from my neighborhood. On the technical side, they’ve open-sourced and MIT licensed their SPZ file format for compressed storage of splats, which I learned about from a talk at the Metaverse Standards Forum.
  • A new cool kids’ term: vibe coding. Just hit “accept all” for whatever’s suggested by the AI chatbot and go with the flow. You can get pretty far if you’re lucky (debugging the actual code can be a nightmare), e.g., see this Shadertoy experiment by David Hart – the prompts are shown as comments in the code.
Ray tracer written with Perplexity

Public Domain Day, 2025

It’s that most wonderful time of the year, January 1st, when a bunch of intellectual property becomes part of the public domain. Here are some sites talking about what’s what:

And, if you made it down this far, bonus math facts about 2025.

Seven Things for November 25, 2024

Really, the first item is the main reason for this post, as it just came out, but I’ll toss in a number of other things.

  • GPU Zen 3 is out! Table of contents here. I reviewed the 96 (!) page article on Cyberpunk 2077 and look forward to reading the rest.
  • You prefer free books? I’m late to the party, only recently learning about Eugene d’Eon’s book, A Hitchhiker’s Guide to Multiple Scattering. Don’t judge it by the title, it’s a serious reference for equations about all sorts of scattering. 741 pages, 175 of which are the bibliography.
  • Activision released the large model, Caldera, from Call of Duty in USD form, for free for testing. It and other free USD models are listed in our portal page, item 21.
  • Yes, I admit an interest in Minecraft. Fun things seen lately include this mock-up “live in a Minecraft (and many other) world(s) through AR” concept video. Enjoy (and ignore the various things wrong with the idea).
  • And of course a ray-tracing system made with Minecraft is clearly what is needed to render the frames.
  • Here’s a new 256-byte demoscene film. I love the “start of a horror film” description below it. Too big? The same person has a 128 byte program and lots of other fun works. More info here, including a reverse engineering of the demo.
  • Doom, it’s everywhere, the graphics equivalent of “hello world” (well, it’s a tad more work than that… maybe more like a test of how capable a device’s display is?). Here’s one summary of some devices (so meta: it runs on a modified chainsaw). There’s even a Reddit group. My own random link update (search our blog for others): toothbrush, non-Euclidean, and running on E. coli cells (video start); simulation of the cell display shown below.
Doom running on E. coli cells. https://www.popsci.com/science/doom-e-coli-cells/

Seven Things for August 22, 2024

I keep thinking “tomorrow/next week/someday realsoonnow I should get one of these out.” Today’s that day! The excitement, it is palpable.

  • SIGGRAPH 2024: I missed it this year, for the first time since I started going in 1984 (no, I’m fine, and plan to be back next year). But, there’s always Stephen Hill’s wonderful list of SIGGRAPH 2024 Links. Just like being at SIGGRAPH, except for not seeing old friends and meeting new people, learning things through serendipity, and… OK, I’ll stop. If you’re reading this, you should go to SIGGRAPH, but that links page will keep you busy until then.
  • Stephen’s page reminded me to update our Portal Page, adding new links and fixing those that broke. And, yes, no one calls these sorts of things “portal pages” anymore, which is why I do. Be happy the font doesn’t also look like 1993.
  • One person at SIGGRAPH that I did hear about was Acerola (aka Garrett Gunnell), 222k (!) YouTube subscribers. He was at SIGGRAPH for the first time, evidently. He makes videos on various elements of computer graphics. Saying “various elements of computer graphics” – gah, that sounds dull. So, “these vids are lit, dog – no cap.” Well, I’ve watched only two of them so far: Your Colors Suck (it’s not your fault), which was solid technically (and it was nice to see a few of our “fair use” gallery images get reused) and What is a Graphics Programmer?, which had bits I knew little about, such as various positions in the games industry. This second video was a bit painful to me, in that it’s sad to see that learning computer graphics programming is not easy for most people to access. Offsetting that, it was a pleasant surprise that he recommends the RTR book, but I wouldn’t particularly choose ours as your first book o’ graphics. Is there a good general basics guide at this point? (Maybe these? Most are a decade old.) Well, Acerola did recommend the Unity tutorials, particularly on rendering, and the Rastertek tutorials for APIs.
  • Once you’ve caught up with the 282 technical talks at SIGGRAPH and all the rest, there are other smaller conferences out there. Beyond the EGSR resources I recently pointed at, HPG (colocated with SIGGRAPH in even years) and I3D also have YouTube channels of all talks and keynotes. Amazing. And now I’ll wander away from computer graphics, since this all is way more than you’ll ever digest.
  • One vaguely-related-to-computer-graphics book I finished last month is An Immense World, about animals’ many senses, and a bit on tetrachromacy. It’s a great bathroom book, where you read three pages and learn something new (which you’ll soon forget, but will tell a long-suffering spouse or friend or three about in the meantime). Scallops have eyes?!
  • That book ends on a sad note, pointing out how light and sound pollution affects animals. One thing that amazes me is how massively the price of lighting has dropped over the past few hundred years. Four thousand years ago, a day’s wages could buy you maybe 10 minutes of light for a room. As recently as 1743, it took two days for the President of Harvard’s household to make a half year’s worth of candles (78 pounds of them). Compare that to a study done in the 1990s, where a day’s work bought you 20,000 hours of (safer, non-smelly) light. LEDs are cheaper yet. The cost of light has fallen by a factor of 500,000 (from a great podcast series, BTW).
  • Unrelated fun fact: our brains like fractals. Begone, boring architecture! But what counts as boring is still a matter of opinion. Brain scientists, help us out here… In the meantime, below are some fractally things for you to stare at, part of Safdie Architects’ building, near where I live. Click that last link for places they’ve designed that look like they’re from a sci-fi film but are real (I love this one).

(Feel like commenting? Too much spam on WordPress, so you could comment here instead.)

EGSR 2024 Talks and Photos now available

I co-chaired EGSR 2024 in London at the beginning of July. One of the great advantages of small conferences like this is that everyone in the room is someone you probably have a fair bit in common with. This EGSR was extremely well attended (I think it’s a record – something like 150 registered). Here are links to resources.

All technical talks are available open access:
Computer Graphics Forum track
Symposium-only

Recordings of these talks, the two keynotes, and the ceremonies are all available now on the YouTube Channel. The most entertaining award was the “Best improvised presentation with little clues.”

Some photo albums have been shared:
From Emilie Nogué
From Eric Haines
From Cyril Prichard

As always, you can find other links to resources for the papers from Ke-Sen Huang’s EGSR 2024 page.

Simple, Lossless Video Editor

I’m so happy about a program that Mauricio Vives told me about that I have to pass it on: Lossless Cut. Free, multi-platform, but so useful that I put money in the tip jar.

It’s a basic video editor, with one key feature: lossless editing. I’ve become the coordinator for a weekly talk series. Microsoft Teams does a nice job of recording the talks automatically, but the video files have a lot of warmup stuff at the start (people joining the meeting) and dead air at the end (it doesn’t stop recording until everyone’s left the meeting). I tried using another free editor, DaVinci Resolve, to trim away the useless bits and export. By default, the new MP4 file was about twice the size of the original! That’s no good. It’s undoubtedly due to whatever compression settings DaVinci Resolve is using by default. I might have eventually figured out some way to set those values, at the risk of muddying the video with a different compression scheme.

Lossless Cut avoids all that, maintaining the original video stream and just editing out the bits you don’t want. It does a bunch of things, but all I care about is trimming away the ends. I did, and it did, amazingly fast: when I did my first export I thought the program had failed, because it took two seconds to make a 200 MB file. Makes sense, though.
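For the curious, stream copy is the trick that makes this kind of trim both lossless and near-instant: the tool re-muxes the chosen range rather than re-encoding it. Here’s a sketch of the equivalent operation as an ffmpeg command – my illustration, not LosslessCut’s actual code, and the filenames and timestamps are placeholders:

```python
# Build the ffmpeg command for a lossless trim via stream copy.
# "-c copy" copies the audio/video streams verbatim (no re-encode),
# which is why it runs in seconds and the quality/bitrate are unchanged.

def lossless_trim_cmd(src: str, dst: str, start: str, end: str) -> list[str]:
    """Return an ffmpeg command that keeps [start, end] of src without re-encoding."""
    return [
        "ffmpeg",
        "-ss", start,   # seek to the first moment to keep
        "-to", end,     # stop copying here
        "-i", src,
        "-c", "copy",   # stream copy: re-mux, don't re-encode
        dst,
    ]

# Trim away the pre-meeting warmup and the dead air at the end,
# e.g. run with subprocess.run(cmd, check=True):
cmd = lossless_trim_cmd("talk.mp4", "talk_trimmed.mp4", "00:03:20", "00:58:10")
print(" ".join(cmd))
```

One caveat of stream copy: cuts can only land on keyframes, so the trim points may shift slightly from the exact timestamps you pick – a fine trade for trimming warmup and dead air.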

The only confusing thing for me was doing the actual export. This critical command doesn’t appear anywhere in the menus up top. I finally noticed a big button in the lower right corner of the screen that said “Export” – aha. Clicking it brings up an options dialog with no “OK” button – you need to click “Export” again. Which all sounds obvious when I write it out, but it took me a minute…

Anyway, this post isn’t really graphics related, but ’tis the SIGGRAPH talk season, so I wanted to publicize this wonderful thing as much as I could. Oh, and since I spent ten minutes of my workday writing this post, here’s NVIDIA’s list of papers and events at SIGGRAPH 2024 – there, time and post justified.

Update: Eran Guendelman notes that Avidemux is another free editor with lossless editing functionality.

There must be a way to export this video, if I could only figure it out…