I feel like this article is really overselling this filter. A 4-point symmetric interpolation kernel can be parameterized as [k, 1-k, 1-k, k]/2, i.e. it has a single degree of freedom. k=-1/4 is bicubic, k=1/4 is this 'magic' kernel, and k=0 is bilinear. Bicubic is sharper, and 'magic' has better alias rejection. Which looks better depends on the image and the viewer's subjective preference. For insta photos, it's probably better to go for 'magic', while for text, one might prefer bicubic. Neither is "simpler" as this article keeps suggesting; they just have different filter coefficients, that's all. But any other value of k is an equally valid choice.
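To make that concrete, here's a minimal NumPy sketch (function and variable names are mine) that builds the taps for a given k and samples their magnitude response, so the sharpness vs. alias-rejection trade-off is visible directly:

    import numpy as np

    def taps(k):
        """4-tap symmetric half-pixel kernel [k, 1-k, 1-k, k] / 2.
        k = -1/4 -> bicubic (a = -1) taps, k = 0 -> bilinear, k = 1/4 -> 'magic'."""
        return np.array([k, 1 - k, 1 - k, k]) / 2

    pos = np.array([-1.5, -0.5, 0.5, 1.5])   # tap offsets from the output sample
    w = np.linspace(0, np.pi, 9)             # 0 .. Nyquist of the source grid
    for name, k in [("bicubic", -0.25), ("bilinear", 0.0), ("magic", 0.25)]:
        H = np.abs(np.exp(-1j * np.outer(w, pos)) @ taps(k))
        print(f"{name:8s}", np.round(H, 3))

Running this shows the trade-off in numbers: around three quarters of the way to Nyquist, the k=1/4 response is already close to zero while the k=-1/4 response still passes a substantial amount.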
Took the time to make such a comparison using the article's sample images (even if the filter isn't sharpened in those), with the same three successive doublings of the small picture: http://0x0.st/XEEZ.png
I find such a test strange and irrelevant, though.
Ooh the animated comparison is really helpful. I couldn’t see the difference in the article, but in your version the Magic result feels flatter, almost like it’s a bokeh blur and not just a low pass. The Sigmoid seems far better than either Magic or Bicubic.
The page mostly talks about image resampling where the goal is more or less preserving all frequencies, but it's also extremely effective at implementing Gaussian blur. Basically, you do n iterations of downsampling by 2 using this kernel, followed by a very small FIR filter, then n iterations of upsampling 2x using the same kernel. Here, n is essentially log2 of the blur radius, and the total amount of computation is essentially invariant to that radius. All these computations are efficient on GPU - in particular, the upsampling can be done using vanilla bilinear texture sampling (which is very cheap), just being slightly clever about the fractional coordinates.
It works well because, as stated, the kernel does a good job rejecting frequencies prone to aliasing. So, in particular, you don't get any real quality loss from doing 2x scale changes as opposed to bigger steps (and thus considerably larger FIR support).
I have some Python notebooks with some of these results, haven't gotten around to publishing them yet.
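In the meantime, here is a rough 1D NumPy sketch of the scheme described above; the coarse-level FIR is a placeholder [1, 2, 1]/4, not a tuned kernel:

    import numpy as np

    K = np.array([1.0, 3.0, 3.0, 1.0]) / 8.0  # 'magic' taps at the half-pixel phase

    def down2(v):
        """Downsample by 2: 4-tap filter, outputs sit between input samples."""
        f = np.convolve(np.pad(v, 2, mode="edge"), K, mode="valid")
        return f[1::2]

    def up2(v):
        """Upsample by 2 with the same kernel: each output is a 1:3 blend of its
        two nearest inputs, i.e. one bilinear texture fetch with a clever coordinate."""
        v = np.pad(v, 1, mode="edge")
        out = np.empty(2 * (len(v) - 2))
        out[0::2] = 0.25 * v[:-2] + 0.75 * v[1:-1]
        out[1::2] = 0.75 * v[1:-1] + 0.25 * v[2:]
        return out

    def pyramid_blur(v, n):
        """Approximate Gaussian blur of radius ~2^n; cost barely depends on n."""
        for _ in range(n):
            v = down2(v)
        # small FIR at the coarsest level (placeholder [1, 2, 1] / 4)
        v = np.convolve(np.pad(v, 1, mode="edge"), [0.25, 0.5, 0.25], mode="valid")
        for _ in range(n):
            v = up2(v)
        return v

    print(pyramid_blur(np.random.rand(32), 2).shape)  # (32,) - same length back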
I have done something like this with a Lanczos kernel (a=1) downsizing repeatedly by 2x, a small Gaussian kernel, and then repeatedly upsizing by 2x with simple hardware bilinear sampling.
The (2D) Lanczos downsizing can be done with only four samples using the bilinear sampling tricks that you mention, and I avoided expensive trigonometric functions, divisions, and the singularity at 0 by using an even 8th order polynomial approximation. I would be curious to see the results using this kernel, but the Lanczos is so far the best that I've tried.
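For illustration, the polynomial trick can be reproduced with a quick least-squares fit like the one below (the fitting setup and names are mine, so the coefficients won't necessarily match the original):

    import numpy as np

    x = np.linspace(0.0, 1.0, 512)
    y = np.sinc(x) ** 2        # Lanczos a=1 is sinc(x) * sinc(x/a) with a = 1
    A = np.vander(x * x, 5, increasing=True)   # columns: 1, x^2, x^4, x^6, x^8
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)

    def lanczos1_approx(t):
        """Even 8th-order polynomial: no trig, no division, no 0/0 at t = 0."""
        return np.polyval(coef[::-1], t * t)

    print(np.abs(lanczos1_approx(x) - y).max())   # max fit error over [0, 1]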
I was surprised I hadn't heard of this, or his related project JPEG-Clear. I have thought for years that the JPEG-Clear method is how responsive images should have been handled in a browser. A single-file format that can be progressively downloaded only up to the resolution it is being displayed at. If you zoom in, the rest of the data can be downloaded for more detail. Doesn't require complex multi-file image authoring steps, keeps simple <img src> grammar, and is more efficient than downloading multiple completely separate images.
Loading a more detailed version of an image as you zoom in is different from what a progressive JPEG does.
Loading a progressive JPEG means you still unconditionally load the entire file; you're just able to show a low detail version before it is fully loaded. The last time I saw a progressive JPEG actually take time to load was when I had dialup.
1. You can terminate the loading process as soon as you're satisfied with the quality. It's just that browsers don't do that.
2. The OP's JPEG-Clear proposal [1] also loads the entire file no matter what. It's literally just a reinvention of progressive JPEGs, presented as something novel.

[1] https://johncostella.com/jpegclear/
> Fourthly, and most importantly, as noted above: m(x) is a partition of unity: it “fits into itself”; [...] if we place a copy of m(x) at integral positions, and sum up the results, we get a constant (unity) across all x. [...] This remarkable property can help prevent “beat” artifacts across a resized image.
So, basically the reason why this works better than other visually similar filters is that it happens to satisfy the Nyquist ISI criterion [1].

[1]: https://en.wikipedia.org/wiki/Nyquist_ISI_criterion
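A quick numeric check of the partition-of-unity property, using the quadratic B-spline form of m(x) (written from memory here, so verify against the article's definition):

    import numpy as np

    def m(x):
        """Magic kernel as the quadratic B-spline, support [-3/2, 3/2]."""
        ax = np.abs(x)
        return np.where(ax <= 0.5, 0.75 - ax ** 2,
               np.where(ax <= 1.5, 0.5 * (1.5 - ax) ** 2, 0.0))

    xs = np.linspace(-0.5, 0.5, 11)
    print(sum(m(xs - n) for n in range(-3, 4)))   # 1.0 everywhere across x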
This uniform B-spline is the same one often used as a "Gaussian" approximation (three box filters) - see Paul Heckbert's 1986 paper here (apparently done at NYIT in the early 1980s with help from Ken Perlin):

https://dl.acm.org/doi/pdf/10.1145/15886.15921
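The "three box filters" identity is easy to verify; at the 2x sampling phase it reproduces the [1, 3, 3, 1]/8 taps discussed elsewhere in the thread:

    import numpy as np

    box = np.array([0.5, 0.5])                   # a 2-tap box filter
    k = np.convolve(np.convolve(box, box), box)  # box * box * box
    print(k * 8)                                 # -> [1. 3. 3. 1.]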
In the “Bicubic: note the artifacts” comparison images, the bicubic version, regardless of the aliasing, is less blurry and has more detail than the “magic kernel” version. I therefore don’t agree that the latter is “visually, far superior”. There is at least some trade-off.
I guess anything is magic if you don't know how it works or if you need some clicks to promote your personal site.
This is basically a slightly different Gaussian kernel, and the "incredible result" of a small image becoming a larger but blurry image is completely normal.
Also, you don't want negative lobes in image kernels, no matter how theoretically ideal they are, because they will give you ringing artifacts.
If you work with image kernels / reconstruction filters long enough you will eventually learn that 90% of the time you want a gauss kernel.
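The ringing point is easy to demonstrate on a step edge; a sketch, using Catmull-Rom-style half-pixel taps as the negative-lobe example:

    import numpy as np

    step = np.concatenate([np.zeros(8), np.ones(8)])
    kernels = {
        "negative lobes (Catmull-Rom-ish)": np.array([-1.0, 9.0, 9.0, -1.0]) / 16.0,
        "non-negative ('magic')":           np.array([ 1.0, 3.0, 3.0,  1.0]) / 8.0,
    }
    for name, k in kernels.items():
        out = np.convolve(step, k, mode="same")
        print(f"{name}: min={out.min():+.3f}, max={out.max():+.3f}")
    # the negative-lobe kernel over/undershoots the [0, 1] range: that's ringing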
> If you work with image kernels / reconstruction filters long enough you will eventually learn that 90% of the time you want a gauss kernel.
Strongly disagree, and my commercial software is known for its high image quality and antialiasing. Gaussian is way too blurry unless you're rendering for film.
In my film experience, I think most film people don’t like Gaussian either; too blurry for them as well. At least, I’ve sat in on filter evaluations with a couple of directors & VFX sups many years ago, and they said Gaussian was too soft and preferred a sharper Mitchell. But I am curious, perhaps similar to the sibling comment - how do you determine the optimal Gaussian width? You can certainly go narrower/sharper and get less blur at the cost of more artifacts similar to a sharper filter, right? BTW have we discussed this recently? ;) I love Gaussian’s ability to hide any hint of the pixel grid, which I find very few filters can do. I also tend to believe that, perceptually speaking, over-blurring slightly doesn’t hurt while under-blurring does, especially for moving things, but that might be more personal bias than objective reality. I would be interested to look at any comparisons or results from your software or in general, if you have some.
Historically, Pixar's RenderMan defaulted to a 2x2 Gaussian filter [1], and from what I can see, that hasn't changed [2].
Essentially it uses a truncated isotropic (non-separable) Gaussian defined by exp(-2.0 * (x*x + y*y)) with a 2x2 pixel support [3], which is slightly soft and completely avoids ringing.
Gaussian also plays very nicely with filtered importance sampling [4] since it has no negative lobes.
(Though I remember a number of studios using RenderMan preferring filters with a bit of sharpening.)

[1] https://paulbourke.net/dataformats/rib/RISpec3_2.pdf#page=40
[2] https://rmanwiki-26.pixar.com/space/REN26/19661819/Filtering
[3] https://paulbourke.net/dataformats/rib/RISpec3_2.pdf#page=20...
[4] https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&d...
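For reference, the spec's filter [3] transcribes to Python roughly as follows (a sketch of the published formula, not Pixar's shipping code):

    import math

    def ri_gaussian_filter(x, y, xwidth=2.0, ywidth=2.0):
        """exp(-2 r^2) after scaling the sample offset by the filter half-width;
        evaluated only inside the (xwidth x ywidth) support, so it is truncated."""
        x *= 2.0 / xwidth
        y *= 2.0 / ywidth
        return math.exp(-2.0 * (x * x + y * y))

    print(ri_gaussian_filter(0.5, 0.0))  # sample half a pixel off-center: ~0.607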
Oh very interesting, thanks! Yeah I guess the PDI folks wanted the benefits without the slightly soft part, but I stand corrected, not all film people are the same.
The truth is that when people test out filters they are looking close up at the pixels, trying to squeeze out detail, but the reality is that whatever minute detail might get slightly softened by a 2.2-2.5 gauss filter will get chewed up by the process of color correction and compression anyway.
The aliasing you can end up with from a Mitchell filter though can be noticeable all through the process. Not only that, but what will the compositor do when they see the aliasing? They'll blur it.
Basically it is trying to squeeze blood from a stone and the image out of renderer is going to be far sharper than anyone will see because it will go through multiple stages. Even compositing almost never leaves a render verbatim. There is usually some sort of slight softening, chromatic aberration, lens distortion and/or other transforms that require resampling anyway.
It is picking up pennies in front of a bulldozer and only causes problems to have a filter that's too sharp, let alone one that has negative lobes.
> Basically it is trying to squeeze blood from a stone and the image out of renderer is going to be far sharper than anyone will see because it will go through multiple stages. Even compositing almost never leaves a render verbatim.
Don't forget that these days, it's all going through some ML-based denoiser, anyway. I wouldn't be surprised if filter choice is nearly irrelevant at this point (except for training).
Oh FWIW, the sessions I remember most vividly were for the first Shrek movie, which was printed to actual film, projected to a theater screen, and final render was at slightly lower than 1080p resolution. The digitizing to film does add a little blur, of course, but not a lot since the resolution was so low and the pixels so big. The filter did kinda matter, and the compositors did not add extra blur in this case. I was highly impressed with how sensitive the director and lighting/fx supes were to filter differences, and how quickly they could spot them in animated clips that were often less than one second long. The main thing they were doing was trying to avoid texture sizzle without overblurring.
Scanning film would be digitizing it, going out to film could be called printing.
does add a little blur
A lot of blur. 35mm film had a lot of folk wisdom built up around it and even though it would be scanned and worked on at 2k resolution for live action vfx, if you actually zoomed in it would be extremely soft.
You could get away with 1.5k just for printing to the first generation film, let alone the 2nd gen master prints and the third gen prints that went out to theaters.
The filter did kinda matter
It is extremely unlikely lighters were changing the actual pixel sample filters on a shot by shot basis. This is something that is set globally. Also if you set it differently for different passes and you use holdouts, your CG will not line up with the other CG renders and you will get edges from the compositing algebra not being exact.
I was highly impressed with how sensitive the director and lighting/fx supes were to filter differences,
No one is changing the pixel sampling filters on a shot by shot basis and no one is going to be able to tell the filter just by looking at playing back on film. This is simply not reality.
and how quickly they could spot them in animated clips that were often less than one second long
Absolutely not. Whatever you are talking about is not different pixel sample filters. Aliasing in general yes, but that's much more complex.
The main thing they were doing was trying to avoid texture sizzle without overblurring.
This has nothing to do with the renderer's pixel sample filters, which is the only real analog to the article here. Maybe you are talking about texture filtering, although that is not a difficult problem thanks to mipmapping and summed-area tables. Even back then a GeForce 2 could do great texture filtering in real time.
Maybe you are talking about large filter sizes in shadow maps, which need a lot of samples when using percentage closer filtering, but that has nothing to do with this.
Whoa what’s with the totally confrontational argumentative stance?!? I’m very confused by your reply. I thought we were having a nice conversation about filtering.
Lighters on Shrek were indeed not changing the pixel filter shot by shot. We did, however, multiple times, sit down to evaluate and compare various pixel filters built into the renderer, and when we did that, the pixel filter actually was changed for every shot, so they could be compared.
The filter mattered because it was set globally, and because the resolution was low, less than 1k pixels vertically. That is precisely why we spent time making sure we were using the filter we wanted.
I am in fact talking about different pixel filters, and the director and supes could tell the difference in short animated clips scanned to film, which is why it was so impressive to me. If you don’t believe me, I can only say that reflects on your experience and assumptions and not mine.
Both subtle high frequency sizzle as well as other symptoms of aliasing do occur when the pixel filter is slightly too sharp. I don’t know why you’re arguing that, but you seem to be making some bad assumptions.
Whoa what’s with the totally confrontational argumentative stance?!?
Someone correcting you isn't victimizing you.
I’m very confused by your reply.
That's fine, I'll explain it again.
We did, however, multiple times, sit down to evaluate and compare various pixel filters built into the renderer,
Makes sense, this is initial set up.
because the resolution was low, less than 1k pixels vertically.
Resolution isn't referred to by the vertical resolution in shorthand. 2k usually means around 2,000 pixels or more horizontally. The vertical resolution is mostly dependent on what crop of the 35mm negative is being used which dictates aspect ratio. You said things like "low resolution, 1080p" which aside from mostly being used for HD video, is actually close to 2k resolution (1920x1080). The reason this is important is that when I said 1.5k is still sharper than film I'm talking about a 25% step down in resolution still needing to be softened to fit in with 35mm photography.
Here is the really relevant part - this is mostly about edge anti-aliasing and possibly motion blur.
and the director and supes could tell the difference in short animated clips scanned to film, which is why it was so impressive to me. If you don’t believe me, I can only say that reflects on your experience and assumptions and not mine.
If there are clips meant to compare filters and they are being told what they are looking at this makes sense. If you are saying that they would look at a clip on film and say what pixel filter was being used, it's just not true. It's gone through too many stages and it's not something anyone cares about once it's locked down. It's too soft and pixel filters are too similar to each other. It's like someone drinking a long island iced tea and saying they can taste where the water in the cola came from.
Both subtle high frequency sizzle as well as other symptoms of aliasing do occur when the pixel filter is slightly too sharp. I don’t know why you’re arguing that, but you seem to be making some bad assumptions.
I already said this up top, but I don't think you're understanding what I've been saying all along, which is that the sharpness from something other than gauss isn't worth the trouble and whatever super minor and subtle softness is there is irrelevant because it's overshadowed through the rest of the process.
What you're saying here doesn't seem like you are talking about pixel filtering. Aliasing on the edges is the easiest aliasing to deal with because it doesn't take many samples. Dealing with it with pixel filters is basically the same as blur, which isn't really a way to confront it at all.
Before you were talking about texture aliasing and now you're talking about "high frequency sizzle" and if this was anywhere except the edges of the CG, it wasn't people talking about pixel filters that are being discussed here, it was texture or shadow map filtering.
I'm sure you saw and learned a lot, but this is a very specific context and part of rendering which is arguably one of the easiest parts to set and move on. It's much more likely what you saw was people dealing with difficult aliasing problems through other means that wouldn't blur the entire image.
I'll say my overall point again though (which isn't exactly related to what you said) - CG goes through a lot of stages that affect the image much more than the pixel filter, so it is not wise to use something with negative lobes to get very slight sharpness at one stage when it is all downsides with no upsides when looking at the entire pipeline.
You are straight up making incorrect assumptions about something I participated in first hand. I don’t know why, but you’re simply wrong. Some people do care about pixel filtering, some people truly do not want a Gaussian, and some people can tell the difference, even if you can’t.
4K and 8K are horizontal resolutions by convention. Historically and before 4K existed, resolutions were referred to vertically, from 240i to 480p to 720p to 1080p. I haven’t heard 2k much and it’s confusing next to 1080p, but maybe some people prefer that. Either way your blanket claim that resolution is referred to by the horizontal size is false.
You haven’t corrected anything other than my dusty film processing terminology, which is extremely pedantic given you understood what I said, and you’re saying things that simply aren’t true.
You haven’t corrected anything other than my dusty film processing terminology, which is extremely pedantic given you understood what I said, and you’re saying things that simply aren’t true.
I doubt it.
about something I participated in first hand
Being in the room doesn't mean understanding every aspect of rendering and that is fine. Even people finishing shots don't need to worry about this stuff, it's not usually a big sticking point.
but you’re simply wrong
I'm not. Downvotes, name dropping and getting upset don't change math.
some people truly do not want a Gaussian
I'm sure that's true, but I'm saying that most people don't realize that by the time they widen a sharp filter with negative lobes to compensate, blur the image slightly somewhere or just have an image go through the whole pipeline, it just isn't gaining anything and it possibly introduces some aliasing. It doesn't mean people aren't out there doing it anyway.
You said yourself that people were fighting aliasing on Shrek, although you haven't distinguished between edge aliasing, texture aliasing and shadow aliasing, which is a big red flag.
Historically and before 4K existed, resolutions were referred to vertically, from 240i to 480p to 720p to 1080p.
And before HD broadcast resolutions, 2k for film was something talked about all day every day.
I haven’t heard 2k much and it’s confusing next to 1080p
It wasn't confusing before 1080p (and 1080i), it was the standard and that was long before HD video. Where do you think the terms 4k and 8k come from?
Either way your blanket claim that resolution is referred to by the horizontal size is false.
This is just how it was and is in film. You're mixing in HD video resolutions which went in the other directions.
In any event, the whole point I was making was that even lower than 2k resolutions were still too sharp for film, so an extremely slight softening from the pixel filter doesn't matter. Exposure to 35mm film is WAY more softening.
I think you aren't absorbing this fact. 35mm film is soft, and whatever edge comes out of the renderer is essentially blurred again by going out to film. Because of this there is no way to just look at CG printed to film and just know what pixel filter was used, that information is long gone and pretending otherwise is a hallucination.
You haven’t corrected anything other than my dusty film processing terminology,
I corrected more than that and you keep getting caught up on resolution without acknowledging that there are different types of aliasing from different sources and that 35mm film means lots of detail is lost.
If someone talking about running a marathon said "I like these sneakers, this shirt, these tires, these shorts.." you would start to think you aren't talking about the same thing and that they don't have perfect knowledge of a niche subject, which is fine.
FYI, I can’t downvote your replies. If you’re getting downvotes, then maybe you’re earning them? More incorrect assumptions?
It doesn’t really matter if film is soft. If there’s aliasing or ringing or sizzling in the digital source, it can be visible on film even if it’s blurred more by the film. Once aliasing is introduced during sampling, blurring doesn’t necessarily make it go away, and in some cases extra blurring can make aliasing worse and more visible. This is why the pixel filter matters, and it seems like you would know that if your background matched your certainty.
That said, I think your claim that 2k by 1k is too sharp for 35mm film is also just plain wrong. 2 megapixels is quite a bit lower than the estimated spatial resolution of most 35mm film stocks, and some have a resolving power much higher than that.

https://www.kenrockwell.com/tech/film-resolution.htm
https://shotkit.com/35mm-film-resolution-vs-digital/
https://en.wikipedia.org/wiki/Comparison_of_digital_and_film...
People can and did tell the difference between pixel filters on film, and it’s amusing you have jumped to a made up conclusion that it was otherwise and decided to presume to tell me you think it was something else, when it wasn’t. I worked on the renderer, it was indeed a pixel filter test, and people actually did similar tests at multiple film studios during the same time period. You’re wrong about this, making weird bad assumptions, and there is no math that backs you up on that.
I know, I never said you did, that's how the site works.
If you’re getting downvotes, then maybe you’re earning them?
People see a random blog link and think it's an authority, that's pretty much all there is to it.
More incorrect assumptions?
I think you mean you misunderstood something I wrote again even though it was clear the first time.
It doesn’t really matter if film is soft.
It does for what I was talking about.
If there’s aliasing or ringing or sizzling in the digital source, it can be visible on film even if it’s blurred more by the film.
No one has ever claimed this wasn't true.
Once aliasing is introduced during sampling, blurring doesn’t necessarily make it go away,
No one claimed this, and neither of these has anything to do with what I've said.
and in some cases extra blurring can make aliasing worse and more visible.
Technically it would lower the frequency of the noise.
You still aren't distinguishing between different types of aliasing. Sampling for visibility and coverage from the camera is different from sampling for coverage of shadows.
You keep writing as if all aliasing is the same when pixel filters are going to be doing a weighted average of the samples from camera, which themselves could be aliasing from other sources.
That said, I think your claim that 2k by 1k is too sharp for 35mm film is also just plain wrong.
It's not. I've run wedges of pixel filters at different sizes and seen them laser-printed to film.
Have you done this?
People can and did tell the difference between pixel filters on film,
Again you keep misunderstanding what I've said. Someone going in cold with no other information is not going to be able to look at a film print and know what pixel filter an image was rendered with, the information isn't there any more.
You’re wrong about this, making weird bad assumptions,
Nope. Your posts are full of giant red flags like not distinguishing between multiple types of aliasing and never hearing "2k resolution".
I'm not even sure what your point is other than replying to say I'm wrong about something I never said.
Show me some evidence. Show me laser printed and laser scanned 35mm film at 2k resolution and let's look at the edges.
Remember, all I originally said was that the vast majority of time someone will just want to use a gauss filter, because it looks good, has no negative lobes and the image will be softened more by the process anyway.
Yes, at PDI, like I said.
Why are you talking about “going in cold”? Who said anything about going in cold with no information? It is you who’s misunderstanding.
I don’t know why you decided to challenge my story, my experience, and every single point here to death. My first reply to you was mostly agreeing with your original take on Gaussians as I prefer them too, based on how good they are at getting rid of aliasing. But I wanted to give that a little industry color and history that isn’t visible to outsiders. Gaussians are visibly softer than other filters, the antialiasing properties come at the expense of sharpness, the softness is visible on film, and while the majority of the time any given random person might be best off with a Gaussian if they haven’t studied pixel filters, professionals sometimes prefer slightly sharper filters even if they might trade it for slight amounts of ringing or aliasing.
challenge my story, my experience, and every single point here to death.
If you say things that aren't consistent or don't make sense someone may say that it doesn't make sense.
People can prefer whatever they want, it doesn't mean a half pixel blur is going to be visible on 35mm film where edges are multiple pixel gradients.
Again, show me some evidence. I see people make claims about the resolution of 35mm film all the time, but when it comes time to show edges anywhere close to a render it never happens.