60 Hz, 120 Hz, 240 Hz…

Update: first, take this 60 vs. 30 FPS test (sadly, now gone! Too much traffic, is my guess). I’ll assume it’s legit (I’ll be pretty entertained if it isn’t). If you get 11/11 consistently, what are you looking for?

A topic that came up in the Udacity forum for my graphics MOOC is 240 Hz displays. Yes, there are 240 Hz displays, such as the Eizo Foris FG2421 monitor. My understanding is that 60 Hz is truly the limit of human perception. To quote Principles of Digital Image Synthesis (which you can now download for free):

The effect of temporal smoothing leads to the way we perceive light
that blinks, or flickers. When the blinking is slow, we perceive the
individual flashes of light. Above a certain rate, called the critical
flicker frequency (or CFF), the flashes fuse together into a single
continuous image. Far below that rate we see simply a series of still
images, without an objectionable sense of near-continuity.

Under the best conditions, the CFF for a human is around 60 Hz [389].

Reference 389 is:

Robert Sekuler and Randolph Blake. Perception. Alfred A. Knopf, New York, 1985.

This book has been updated since 1985; the latest edition is from 2005. Wikipedia confirms this number of 60 Hz, with the special-case exception of the “phantom array effect”.

The monitor review’s “Response Time and Gaming” section notes:

Eizo can drive the LCD panel at 240 Hz by either showing each frame twice or by inserting black frames between the pictures, which is known to significantly reduce blurring on LCD panels.

This is interesting: the 240 Hz is not that high because the eye can actually perceive 240 Hz. Rather, it is used to compensate for response problems with LCD panels. The very fact that an entirely black frame can be inserted every other frame means that our CFF is clearly way below 240 Hz.

So, my naive conclusions are that (a) 240 Hz could indeed be meaningful to the monitor, in that it can use a few frames that, combined by the visual system itself, give a better image, and (b) this Hz value of the monitor should not be confused with the Hz value of what the eye can perceive. You won’t have a faster reaction time with a 120 Hz monitor.

The thing you evidently can get out of a high-Hertz monitor is better overall image quality. I can imagine that, on some perfect monitor (assume no LCD response problem), if you have a game generating frames at 240 FPS you’re getting 4 rendered frames blended per “frame” your eye receives. Essentially it’s a very expensive form of motion blur; cheaper would be to generate 60 FPS with good motion blurring. Christer Ericson long ago informally noted how a motion-blurred 30 FPS looks better to more people than 60 FPS unblurred (and recall that most films are 24 FPS, though of course we don’t care about reaction time for films). What was interesting about the Eizo Foris review is that the reviewer wants all motion blur removed:
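To make the “expensive motion blur” idea concrete, here is a minimal sketch of blending subframes, accumulation-buffer style. The dot “renderer” and its 120 px/s speed are made-up stand-ins for a real frame; a real renderer would blend full framebuffers.

```python
# Sketch: motion blur by averaging subframes (accumulation-buffer style).
# Rendering 4 subframes per displayed 60 Hz frame approximates the blur
# a 1/60 s camera exposure would capture.

def render_dot(t):
    """Hypothetical renderer: position of a dot moving at 120 px/s."""
    return 120.0 * t

def blurred_frame(frame_index, fps=60, subframes=4):
    """Average `subframes` samples spread across one frame's exposure."""
    t0 = frame_index / fps
    dt = (1.0 / fps) / subframes
    samples = [render_dot(t0 + i * dt) for i in range(subframes)]
    return sum(samples) / subframes

# Frame 0 samples t = 0, 1/240, 2/240, 3/240 -> positions 0, 0.5, 1.0, 1.5
print(round(blurred_frame(0), 3))  # 0.75: the dot's average position over the exposure
```

Four samples blended by the eye at 240 Hz, or four samples blended by the renderer at 60 Hz, amount to much the same average; the latter is far cheaper.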

You probably already own a 120 Hz monitor if you are a gamer, but your monitor most likely does not have the black frame insertion technology, which means that motion blurring can still occur (even though there is not [sic] stuttering because of 120 Hz). These two factors are certainly not independent, but 120 Hz does not ensure zero motion blurring either, as some would have you believe.

The type of motion blur they describe here is an artifact, blending a bit of the previous frame with the current frame. This sort of blur I can imagine is objectionable: objects leaving (very short-lived) trails behind them. True (or computed) motion blurring happens within the frame itself, simulating the camera’s frame exposure length, not with some leftover from the previous frame. I’d like to know if gamers would prefer 60 FPS unblurred vs. 60 FPS “truly” blurred. If “unblurred” is in fact the answer, we can cross off a whole area of active research for interactive rendering. Kidding, researchers, kidding! There would still be other reasons to use motion blur, such as the desire to give a scene a cinematic feel.

For 30 vs. 60 FPS there is a “reaction time” argument, that with 60 FPS you get the information faster and can react more quickly. 60 vs. 120 vs. 240, no – you won’t react faster with 240 Hz, or even 120 Hz, as 60 Hz is essentially our perceptual maximum. My main concern as this monitor refresh speed metric increases is that it will be a marketing tool, the equivalent of Monster cables to audiophiles. Yes, there’s possibly a benefit to image quality. But statements such as “there is not [sic] stuttering because of 120 Hz” make it sound as if our perceptual system’s CFF is well above 60 Hz – it isn’t. The image quality may be higher at 120 or 240 Hz, and may even indirectly cause some sort of stuttering effect, but let’s talk about it in those terms, rather than the “this faster monitor will give you that split-second advantage to let you get off the shot faster than your opponent” discussion I sometimes run across.

That said, I’m no perception expert (but can read research by those who are), nor a hard-core gamer. If you have hard data to add to the discussion, please do! I’m happy to add edits to this post with any rigorous or even semi-rigorous results you cite. “I like my expensive monitor” doesn’t count.

p.s. I got 4/11 on the test, mainly because I couldn’t tell a darn bit of difference.


8 comments

  1. Tom Forsyth

    Two separate things – flicker fusion and motion fusion. They’re not the same. That Wikipedia article is a mess, constantly switching between the two without warning. I’ll try to be clearer – I’ll try to use “Hz” when talking about flicker, and “fps” when talking about motion.

    Flicker fusion is simple – the lowest speed at which a flickering light stops flickering. Varies between about 40Hz and 90Hz for most people. There is also a distinction between conscious flicker and perceptible flicker. Using myself as an example, I cannot consciously see flicker above about 75Hz. However, if I look at a 75Hz flicker for more than about an hour, I’ll get a headache. The flicker needs to be above 85Hz to not hurt. This is all experience gained from the wonderful days of CRTs (85Hz CRTs were expensive – but I finally found a nice second-hand Iiyama).

    People are significantly more sensitive to flicker in their peripheral vision than in the middle (about 10Hz-15Hz difference). This may be why some people can tolerate 60Hz CRT TVs, but not 60Hz fluorescent lighting flicker (or 50Hz in places like the UK).

    Motion fusion is the rate at which successive images appear to start moving smoothly. However notice that “moving smoothly” is a continuum, and higher framerates absolutely do improve the sense of motion. For most people, this starts at around 20fps (which is why most film is still at 24fps) and consciously increases in “quality” to 120fps and probably beyond.

    So how do these interact? Well, cinema typically has 24 frames per second, which is enough for motion, but would flicker like crazy if displayed at 24Hz. So they flash each frame two (48Hz), three (72Hz) or four (96Hz) times to reduce the flicker. I believe most cinemas use 72Hz these days, as 48Hz is too low for a “surround” experience – it causes too much flicker in our peripheral vision.
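    In numbers, the multiple-flash arithmetic above is simply frame rate times flashes per frame:

```python
# Flicker rate on screen = film frame rate x flashes per film frame.
film_fps = 24
for flashes in (2, 3, 4):
    print(f"{flashes} flashes per frame -> {film_fps * flashes} Hz flicker")
```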

    The choice of 50/60Hz for CRT TVs was driven by flicker fusion, and with no frame-storage ability in early TVs, they were forced to actually display that many frames. They cheated with resolution, though, using interlacing to get 30 frames per second. However, it’s not a true 30fps in the cinema sense, because even with interlacing you can get 60 “motions” a second – though this does depend on the camera and the recording equipment. I’m not that well-versed in TV tech though – don’t know how often that happens.

    We know from video games that 60fps looks visibly better than 30fps for most people on fast-moving content – whether they care enough is a different question (e.g. I can easily see the difference, but one doesn’t really look “better” than the other to me on a monitor. In a VR HMD – that’s a different matter!).

    60fps does of course reduce average reaction times if done correctly. Fairly obvious how that happens. Though there’s some games that have to go with deeper pipelining to get to 60, which may nerf much of the reaction-time advantage, depending on how they do it.

    Now, what about 120 and 240?

    120fps is easy – it just looks smoother. Just like 60fps looks smoother than 30fps, 120fps looks smoother than 60fps for most people. It’s a more subtle effect than 30fps->60fps, but it’s certainly there – I see the difference mainly in smooth pans. Of course the TV has to get that extra frame data from somewhere, and they do it by extracting motion vectors between successive 60Hz frames and then inventing new frames by tweening. This causes some odd artifacts sometimes, and although the algorithms are much better these days I still prefer to have them off and just watch 60fps data. The tweening also adds latency – fine for a film, bad for interactive content.

    So why “240fps”? Well, first up, it ain’t 240fps – that’s marketing bullshit. It’s 120 frames of data, interleaved with 120 frames of black. Or, as an engineer might put it – 120fps with a 50% duty cycle on illumination. But that doesn’t look as good on a billboard.

    So why do this? Why turn the display off for half the time? Normally, LCD displays are full-persistence – the pixels are lit all the time. They take a while to switch between colours, but they’re always lit – they’re always showing something. That means that in things like smooth pans, your eyes are following the pan across the screen, but the pixels are of course moving in jumps, not continuously. This “smears” the image across your retina and results in blur. We get this really badly in VR, but it also happens on TVs. The solution is the same – low-persistence, otherwise known as “turning the pixels off between frames”. If you turn the pixels off, they won’t smear. For more details here, see Michael Abrash’s great post on the subject.
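    A small sketch of that smearing, using a made-up pan speed; the duty-cycle model is a simplification of what the panel actually does:

```python
# Sketch: retinal smear when the eye tracks motion on a sample-and-hold
# display. While a pixel stays lit for `hold` seconds, the tracking eye
# sweeps speed * hold pixels across it.

def smear_px(speed_px_per_s, refresh_hz, duty_cycle):
    """Approximate smear width, in pixels, from one displayed frame."""
    hold = duty_cycle / refresh_hz   # time the image is actually lit
    return speed_px_per_s * hold

pan = 1200.0  # a fast pan, in pixels per second (assumed value)
print(round(smear_px(pan, 60, 1.0), 2))    # full persistence at 60 Hz: 20.0 px
print(round(smear_px(pan, 120, 0.5), 2))   # "240 Hz" black-frame mode: 5.0 px
```

Shortening the lit time (lower duty cycle, higher refresh) shrinks the smear; turning the pixel fully off between frames is the limit of this.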

    (CRT TVs had a duty cycle of around 15%-25% depending on the phosphors, and most cinema is displayed at a 50% duty cycle)

    But if you do a 50% duty cycle at 60Hz, it will cause flicker (at least with the widescreen TV monstrosities we have these days). That’s why LCD TVs had to get to 120Hz before going low-persistence.

    But this prompts a question – why not show 60fps data at 120Hz with low persistence? Why do you need the motion-interpolation tech? Well, coz otherwise the pixels don’t smear, they double-image. Think of the same smearing effect, but with the middle of the smear missing – just the ends. So any time something moves, you get a double image. This is a very visible effect – even people who are perfectly happy with 30fps rendering and can’t see flicker over 50Hz can still see this double-edge effect at 120fps low-persistence! And that’s why the tweening is generally forced on if you go low-persistence.

    (it’s an interesting historical question why CRTs – which were inherently low-persistence, with about a 25% duty cycle depending on the phosphors – why did they not look terrible? My theory is that we just didn’t know any better – the competition was 24fps/48Hz cinema, and that sucked even more!)

  2. sneftel

    Remember, the human visual system doesn’t operate at a particular sampling rate. 60Hz is a decent threshold for perception of flickering, but that’s a very different thing than response time. A 120Hz monitor will show you a picture up to 8.3 milliseconds sooner than a 60Hz monitor would. As a non-hardcore, non-headshots, non-m-m-monster-kill gamer I’m not sure whether that equates to a meaningful advantage, but I could certainly conceive of it doing so, at least in the aggregate.
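    The arithmetic behind that figure, as a sketch (modeling the worst case as waiting one full refresh period):

```python
# Worst case: a new image just misses a refresh and waits a full period
# before being shown. Halving that wait is where "up to 8.3 ms" comes from.

def max_wait_ms(refresh_hz):
    """Worst-case delay before the next refresh, in milliseconds."""
    return 1000.0 / refresh_hz

saved = max_wait_ms(60) - max_wait_ms(120)
print(round(saved, 1))  # 8.3 ms saved, at most, by a 120 Hz display
```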

  3. jonogibbs

    It’s probably that 60Hz is an average – there are probably people who are sensitive beyond it. Certainly at work, I know people don’t enjoy looking at 60Hz monitors all day and want them at least at 72Hz. It’s not that they consciously perceive blinking so much as they get headaches if the refresh rate isn’t higher. But certainly 120Hz is way above any sensitivities I know…

  4. marcel

    Imagine there is a 1 pixel dot moving over the screen with constant velocity, with 2 pixels per frame at 60 Hz (that’s not so fast). Let’s also assume that the image is displayed constantly for 1/60 th of a second and then immediately switches to the next image. The dot jumps 2 pixels from one frame to the next.
    However, the eye doesn’t jump but smoothly follows the average dot motion. At the beginning of the displayed frame, the dot is too far ahead, at the end it is too far behind. It appears blurred. With a higher display frequency or shorter display time per frame (black frame insertion) this blur gets less.
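    The same example in numbers (the centering of the eye’s track is an added assumption for the sketch):

```python
# marcel's dot: 2 px per frame at 60 Hz, image held for the full 1/60 s
# while the eye smoothly tracks the average motion.

fps = 60
px_per_frame = 2.0
speed = px_per_frame * fps        # 120 px/s

for frac in (0.0, 0.5, 1.0):      # start, middle, end of the held frame
    eye_pos = speed * frac / fps - px_per_frame / 2  # eye's centered smooth track
    dot_pos = 0.0                 # the displayed dot sits still for the whole frame
    print(f"{frac:.1f} of frame: dot is {dot_pos - eye_pos:+.0f} px from the eye")
```

The dot drifts from 1 px ahead of the tracking eye to 1 px behind it, a 2 px smear; halving the hold time halves the drift.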

    That’s why high resolution TVs have higher display frequencies, use black frame insertion and do all this motion interpolation: a high image resolution with moving images needs a high display rate (or short cycle).

    I did some related research on this, too: Fast Motion Rendering for Single-Chip Stereo DLP Projectors.

  5. Eric

    Thanks for the interesting replies. I’ll need to get up the energy and set aside time for the two long posts by Michael Abrash, Tom – they look interesting, but not a quick skim.

    Here’s a note from Jim Ferwerda, a professor at RIT’s Chester F. Carlson Center for Imaging Science:

    Critical flicker frequency is related to the temporal CSF (TCSF). This varies with luminance and the duty cycle of the flicker. 60Hz is a reasonable refresh rate for typical luminance levels of desktop LCD (sample-and-hold) displays but not for intermittent displays (CRTs, LCDs with flashed backlights, or black frames). Remember back in the ’90s when CRT refresh rates were 75-85Hz? This was because CRTs were getting brighter and flicker could be seen at 60Hz (especially in the periphery).

    My understanding is that the use of flashing LCD backlights/black frames (to reduce motion blur from LCD sample-and-hold) is the reason LCDs went to 120Hz. The advance to 240Hz was motivated in part by wanting to have good motion rendition in stereo on LCDs: 240Hz mono = 2x120Hz stereo. To my understanding not much changed about the temporal response of the LCD technology; most of the improvement was in the flashing (LED) backlights. Another technology that went to high refresh rates was DLP. Here the issue was not blur but color breakup with motion, due to the fact that DLPs are a color-sequential technology. Filter wheels with 6 segments and higher refresh rates help here.

    The other factor in sampling rates is motion rendition. In standard cinema images are shot at 24Hz and displayed at 48Hz (each frame flashed twice). At old-school theatre screen luminance levels, 48Hz is above the CFF, but motion (of objects or camera) will cause bad judder (doubling of visible edges and overall crappy rendition of motion). High frame rate cinema is an attempt to solve the motion rendition/judder problem as we get into digital theaters where the screens are big, bright, have high resolution, are sometimes 3D, and show action movies with lots of motion. See links below on HFR cinema.

    http://www.christiedigital.com/supportdocs/anonymous/christie-high-frame-rate-technology-overview.pdf

    http://info.christiedigital.com/lp/3d-hfr

    nem-summit.eu/wp-content/plugins/alcyonis-event-agenda/files/Higher-Frame-Rates-for-more-Immersive-Video-and-Television.pdf

    The ultimate answer of what is enough is complicated and depends on the spatio/temporo/lumino characteristics of the scene, the capture system, the coding/transmission system, the display system, and the viewer (see paper below by Scott Daly that tries to make some sense of it all [Paper is “Engineering observations from spatiovelocity and spatiotemporal visual models”, one that will take some serious work on my part to absorb – Eric]). Still trying to wrap my head around it.

  6. Zap Andersson

    As for the test, I scored 11/11, and it was quite obvious (latter half I only spent about a second of each video w. motion on it before clicking).

    My opinions on High Frame Rates in general are well documented and can be found here (Spoiler! I hate them – for movies. Games and VR may be another ball of wax… perhaps):

    http://masterzap.blogspot.se/2012/12/me-on-hobbits-and-high-frame-rates.html

    Enjoy the read.

    /Z

  7. Zap Andersson

    Secondly, the interesting topic is persistence vs. the temporal characteristics of the display. Check out the temporal graphs on this page:

    http://en.wikipedia.org/wiki/Comparison_of_display_technology

    Let me tell you an anecdote you can relate to, Eric (you may even remember this) to understand why this is massively important:

    Back in the day, I wrote a renderer, which started its rendering by rendering every 64th pixel (an 8×8 grid), then every 16th (4×4), then every 4th (2×2), and finally filled in everything.

    I thought this looked a bit dark, so I changed it to render big 8×8 blocks, then 4×4 blocks… and it looked WAY WORSE.

    Why? Because showing the grid of every 8th pixel against black allowed your brain to fill in the intermediate data. You perceived the image as having more information than there was.

    Conversely, when I painted it in, I told the brain “Nope, there’s nothing there, just these big squares of flat pixel blobs”.

    *Exactly* the same thing happens temporally. If you show a sharp image briefly, show black, and then a sharp image a distance away from it, the eye can perceive this as a smooth motion, whereas if you show the sharp image, let it linger the whole time, and then switch to the other sharp image, the eye perceives it as a jerky motion, because it cannot fill in the temporal blank.

    This is why analog TV actually – in a sense – has a very high temporal resolution… not 30 or 25 fps, because the frames – in pure analog video – are irrelevant. The electron beam sweeps from top to bottom, and the scanning of the camera moves in sync with it, so a tall object moving from left to right would display a “rolling shutter” leaning effect IF you were to look at the entire frame… but on an analog TV, you are not. You are looking at the image being scanned, top to bottom, so there is NO “rolling shutter effect”; the object doesn’t seem to lean over at all, it simply seems to move smoothly from left to right!

    There used to be this flash video demonstration, which I sadly cannot find. The idea of it was to show how “bad LCD displays were compared to CRTs”. It was basically a scrolling banner of text. Shown on a CRT, it was perfectly readable. Shown on an LCD, it was a smeary mess.

    The important bit is, this smeary mess does not actually exist on screen, IT IS IN YOUR BRAIN. Because your brain tries to follow the text, track it as it moves. On the CRT, it can do that, because the text just blinks briefly at one location, then blinks briefly at the next, which the eye can interpret as a single smooth motion. On the LCD, again, it just sits there in one location, then just sits there in another location; the eye tries to follow it, and it sees the discrepancy between the ideal smooth movement and the actual movement as an error.

    It’s the same effect your brain produces when you look at a 16-level gray scale ramp. You ever notice how the individual bands in a 16-level gray scale ramp seem to be shaded within themselves? That is your BRAIN assuming an even (smooth) ramp, and seeing reality’s deviation from it (the stair-step intensity changes) as “errors”.

    The key, though, to the “short blink” display is to show a sharp image. This is why all “motion smoothing” TVs are hell-spawned crap that should be obliterated by fire – the original filmed material contains motion blur. When they try to “motion smooth” that, you get an over-blurry smeary mess of an un-watchable thing that looks worse than ’70s daytime soap with their crappy tube cameras!!!

    If it was filmed at a low frame rate (24 fps), it has the motion blur baked into it, and the motion blur actually helps the eye, because it already presents the smeary version that the brain can draw conclusions from. It works if shot well, and the particular “magic” of 24fps presentation is undeniable for anyone w. a pair of eyes.

    Or at least, anyone who gets 11/11 on the above test 🙂

    …especially if you – like me – preferred the 30 fps in every instance.

    /Z

  8. Chad Capeland

    Found this doing some unrelated research, but I thought I’d chime in with some comments, in case anyone stumbles on this again like I did.

    For the “black frame insertion”, it’s absolutely about the differences in the response rates of LCDs (slow) and LEDs (fast). Because the HVS (human visual system) can see the LCD pixels fading between frames, you get this strange dissolving effect. It’s often called a blur, but that’s not really what it is. The LCD image takes time to switch to a new frame, and humans can see it. But LEDs are crazy fast at switching. So what the monitor makers do is run the LED backlights with a hard overdrive and strobe them on and off: off when the LCD is in the fading part, and on when the LCD is ready with the next frame. You don’t see a dimming because they can safely overdrive the LEDs thanks to the reduced duty cycle.

    Zap mentioned that the smearing and fading effect is entirely in our brains, but it isn’t. You can take a photo of it happening or, even better, use a high-speed camera. One interesting effect that you can produce with

    Also, Tom mentions that cinemas run with ~50% duty cycle. That’s true with 35mm, but with DLPs they run at nearly 100%. The DMD chips can switch pictures at >1000 fps, so there’s no need for darkening the image. Bulbs are too expensive to waste light. When you do 3D, though, THEN you get into dark time. The DMD shows black for a tiny amount of time, like 1-2 ms in order for the LCD shutter in front of the lens to change over.

    Regarding the lack of blur at higher framerates, just because you have images at 120 fps doesn’t mean that there isn’t motion blur. Fast objects blur; it’s just easier to read the motion because you get more samples when the color is homogeneous. It has the same smooth motion blur as 30 fps does; you collect just as many photons over the same period of time. There’s just more opportunity for your eyes to see the continuous photon stream they’re used to seeing in the real world.

    Doug Trumbull has been developing films at 120 fps for a couple years now. Most series 2 cinema projectors can run at 120 fps, no problem, so long as you are OK with a 2K image. Christie does make a 4K 120 fps projector, but it is very expensive and cannot show DCP films. The real 120 fps 360 degree shutter images are incredibly lifelike. It makes the images look more like a window than a screen. If you get a chance to see one of his newer films, I’d highly recommend it.

    One of the problems with HFR for films is that the examples of it that we have are few and far between, and the famous ones, The Hobbit trilogy, did it incorrectly. The Hobbit didn’t shoot with a fully open shutter; they used a more open shutter than 24 fps typically does, but they didn’t open it all the way. The effect was that the motion blur was still strobing like 24 fps, but with faster motion. But the real big issue was that they only did HFR with 3D.

    3D cinema projectors run at 120-192Hz. So what we normally see with 24 fps 3D films is frame 1 flashed alternately to the right and the left eyes 6 times at 144Hz. Left right left right left right. Your brain sees 6 images in sequence, but absolutely NOTHING moved on screen during that time. Between the 6th and 7th flash, though, the images DO change, and you get 6 more frozen frames. The flickering is fast, though, so your brain fuses it all together and you don’t see the flashing; after all, your brain can see that the image isn’t moving, so it assumes it’s constant. But with The Hobbit, they couldn’t run it at 48Hz; they ran it at 192Hz. But instead of 6 flashes between the eyes, there are now 4. It’s still temporally disconnected from the flicker, so it’s not a huge improvement.
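    The flash-count arithmetic here, as a quick sketch:

```python
# Each film frame is shown to both eyes, so the per-eye flash count is the
# projector rate divided by (film frame rate x 2 eyes).

def flashes_per_eye(projector_hz, film_fps):
    """How many times each film frame is flashed per eye."""
    return projector_hz // (film_fps * 2)

print(flashes_per_eye(144, 24))  # 24 fps 3D at 144 Hz: 3 flashes per eye (6 total)
print(flashes_per_eye(192, 48))  # 48 fps HFR 3D at 192 Hz: 2 flashes per eye (4 total)
```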

    But when you get to 60 fps, then you can run the projector at 120Hz, and something interesting happens. You don’t have to repeat any frames. You show each image in each eye only one time. Now your brain doesn’t have to reconstruct the motion from repeated still frames, it’s getting temporally correct frames. What’s really interesting, though, is if you temporally offset the left and right cameras. Make the right camera 8.3 ms delayed and now when you show the right eye, it’s 8.3 ms delayed in the action. And when you show the left eye, it’s now 8.3 ms later. So you get 120 fps of motion AND you get stereo at the same time. It doesn’t work well below 120Hz, the flickering becomes noticeable, and you can sense that your left and right eyes aren’t getting the images at the same time. What’s nice is that 120Hz is supported by all the series 2 projectors out in the field, so it’s viable for distribution, and 120Hz works well for printing down to 60 or 24 fps. Since most LCD displays run at 60Hz, and all broadcast runs at 60Hz, that’s a nice sweet spot.
