crt-pi shader users - reduce scaling artifacts with these configs in lr-mame2003, lr-fbalpha, lr-nestopia (and more to come)
-
@thelostsoul said in crt-pi shader users - reduce scaling artifacts with these configs in lr-mame2003, lr-fbalpha, lr-nestopia (and more to come):
Am I the only one asking for a shader at 480p?
I think so, because at that low a resolution you won't have many pixels left to simulate the scanline or shadow-mask effects. And for folks with a real CRT, well, no shader is needed, because you already have the CRT we are trying to simulate.
Of course, if the CRT is a high-resolution (very fine dot-pitch) multi-sync PC monitor, your real scanlines could lack the vintage look and feel at 480p that a TV would have shown. You might as well run it at a HIGH resolution and enable shaders like the rest of us.
-
@caver01 Hmm, ok, I understand now that it doesn't make sense to produce such a shader. Then I'd need a different CRT device. Thanks for your answers, guys. :-) Never mind then.
-
Could this script be used to generate cfgs using zfast shaders?
-
@cloudlink it's an outstanding issue: https://github.com/dankcushions/crt-pi-configs/issues/11
no ETA on it.
-
@cloudlink If you're so inclined, it would be pretty straightforward to modify the script to use the zfast shaders.
The shaders are referenced in two places in the script - lines 93-96 and lines 105-108. In each instance, they look a bit like this:
    if curvature:
        shader = "crt-pi-curvature-vertical.glslp"
    else:
        shader = "crt-pi-vertical.glslp"
Just change the shader name to the one you want to use and then run the script to generate the .cfg files.
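For example (just a sketch on my part - I'm guessing at the zfast filenames here, so check the exact names in your shaders directory, e.g. /opt/retropie/configs/all/retroarch/shaders, before running it), the vertical case might become:
    # hypothetical zfast filenames - verify against your shaders directory
    if curvature:
        shader = "zfast_crt_curve_vertical.glslp"
    else:
        shader = "zfast_crt_standard_vertical.glslp"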
The main reason (I assume) it's on @dankcushions' to-do list with no ETA is that the more 'official' way to do it would be to present the user with the option - via the command line - to use either shader, which would take more coding.
-
@andrewh for me it's more that i haven't had a chance to test the shaders - they're allegedly faster so if the image quality is equivalent, i would probably just wholesale replace crt-pi with them. we could have an option, i suppose, but i think anyone running the script could probably just as well edit it to suit whatever they wanted.
i say no eta as i've been meaning to test them for about 7 months now, and still haven't :)
-
@dankcushions I have been using zfast for a while now and I will say that moire patterns and rainbows are reduced. I have always used curvature so I have not really taken advantage of the configs, but with zfast, the artifacts may be harder to notice anyway. I am curious about the results @cloudlink might be able to share.
-
@andrewh said in crt-pi shader users - reduce scaling artifacts with these configs in lr-mame2003, lr-fbalpha, lr-nestopia (and more to come):
@cloudlink If you're so inclined, it would be pretty straightforward to modify the script to use the zfast shaders.
The shaders are referenced in two places in the script - lines 93-96 and lines 105-108. In each instance, they look a bit like this:
    if curvature:
        shader = "crt-pi-curvature-vertical.glslp"
    else:
        shader = "crt-pi-vertical.glslp"
Just change the shader name to the one you want to use and then run the script to generate the .cfg files.
The main reason (I assume) it's on @dankcushions' to-do list with no ETA is that the more 'official' way to do it would be to present the user with the option - via the command line - to use either shader, which would take more coding.
Thanks.
I modified the Python script to test it; I should be able to try it out in a few hours.
Here's the modification if anyone else wants to try it:
https://pastebin.com/F1zp5qcz
-
@dankcushions said in crt-pi shader users - reduce scaling artifacts with these configs in lr-mame2003, lr-fbalpha, lr-nestopia (and more to come):
@andrewh for me it's more that i haven't had a chance to test the shaders
Ah, ok - fair enough.
I swapped them in quite some time back - several months now - and haven't noticed anything that caused concern.
That said, I'm not necessarily the most discerning, so don't take this as any sort of suggestion that you don't need to test them yourself :-)
-
I tested the script, generating standard and curved zfast shader configs for MAME 2003 and FBA. It works well, and both the standard and curved shaders look and run great. I'm very impressed with the zfast shaders.
-
Are the scanlines guaranteed to always be aligned with the game's original pixels, regardless of my scaling settings? E.g., if the game is low-res and fullscreen, so the pixels are large, are the scanlines also thicker?
-
@rsn8887 said in crt-pi shader users - reduce scaling artifacts with these configs in lr-mame2003, lr-fbalpha, lr-nestopia (and more to come):
Are the scanlines guaranteed to always be aligned with the game's original pixels, regardless of my scaling settings? E.g., if the game is low-res and fullscreen, so the pixels are large, are the scanlines also thicker?
In my experience, the scanlines do line up with the game's original pixels, but they look more like CRT phosphor dots when you run at higher resolutions. If you run on a low-res display, you cannot get around the inherent stair-stepping that is visible at the display's native pixels. At higher resolutions you have more pixels available for the shader to use. Theoretically, with an extremely high resolution, a shader could create visually perfect scanlines, but of course there are performance concerns when the resolution gets too high. The current optimum is typically to run at as high a resolution as you can before performance becomes a problem. The shaders have been coded to run at HD resolution. Yet even at HD you get some artifacts - hence the config files, which try to align at least one dimension with an integer scale factor of your display's native resolution.
You cannot just say "higher resolution is better", because of the performance impact, but higher does get you a better-looking effect in my opinion. Do some tests and decide what you like best.
-
@rsn8887 said in crt-pi shader users - reduce scaling artifacts with these configs in lr-mame2003, lr-fbalpha, lr-nestopia (and more to come):
Are the scanlines guaranteed to always be aligned with the game's original pixels, regardless of my scaling settings? E.g., if the game is low-res and fullscreen, so the pixels are large, are the scanlines also thicker?
there are two separate resolutions at play with scanline shaders:
1. the game's original resolution. this is typically 240 pixels high. a black scanline will be inserted between every vertical pixel. this rule always applies, no matter your render resolution.
2. your render resolution (by default in retropie this is your display's resolution). on an HDTV this would be 1080 pixels high. this determines the quality of the image, but it has no effect on the thickness (other than via scaling artefacts, which my script aims to eliminate) or number of scanlines.
whilst a game with a higher original resolution (e.g. tekken 3 on the psx is 480 pixels high) will have more scanlines, you have to consider that this isn't a real-world situation. CRTs have scanlines when running progressive-scan, low-resolution games. tekken 3 would be running in an interlaced (non-progressive) mode, so it would NOT have scanlines. the shader isn't able to figure this out, so it still injects scanlines regardless, but they are so thin they end up looking like a sort of 'haze' on the screen.
to answer your question specifically:
Are the scanlines guaranteed to always be aligned with the game's original pixels, regardless of my scaling settings?
the shader guarantees that there will be a scanline between every vertical pixel (1) of the original game. if you upscale this to a different resolution (2), you will get scaling artefacts unless the new resolution (2) is an integer multiple of the original resolution (1). this script aims to mitigate those artefacts - i explain how in the first post.
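to make the integer multiple point concrete, here's some illustrative arithmetic (a toy example, not the script's actual algorithm):
    # toy example: a 240-line game on a 1080-line display
    game_height = 240
    screen_height = 1080
    raw_scale = screen_height / game_height      # 4.5 - non-integer, so some source
                                                 # lines cover 4 screen pixels and
                                                 # some cover 5: visible artefacts
    int_scale = screen_height // game_height     # 4
    viewport_height = int_scale * game_height    # 960 - every scanline comes out
                                                 # the same thickness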
-
Interesting.
I wonder, then, why there are so many versions? There should be only one shader that adapts itself automatically and generates the masks etc. on the fly for whatever input and output resolutions are used.
For example, why is there not a check inside the vertex shader to pass a variable to the fragment shader that disables scanlines if the game resolution is larger than 240p?
Also, the vertical vs. horizontal thing bothers me; this should be checked automatically within the vertex shader and the result passed to the fragment shader.
The vertex shader is only executed four or six times or so per frame for a 2D game, so the slowdown from these automatic checks should be negligible.
I read somewhere that making the shader more automatic would cause slowdown, but that makes no sense to me if these things are only done in the vertex shader.
-
@rsn8887 said in crt-pi shader users - reduce scaling artifacts with these configs in lr-mame2003, lr-fbalpha, lr-nestopia (and more to come):
Interesting.
I wonder, then, why there are so many versions? There should be only one shader that adapts itself automatically and generates the masks etc. on the fly for whatever input and output resolutions are used.
what do you mean many versions? there's not.
For example, why is there not a check inside the vertex shader to pass a variable to the fragment shader that disables scanlines if the game resolution is larger than 240p?
Also, the vertical vs. horizontal thing bothers me; this should be checked automatically within the vertex shader and the result passed to the fragment shader.
The vertex shader is only executed four or six times or so per frame for a 2D game, so the slowdown from these automatic checks should be negligible.
I read somewhere that making the shader more automatic would cause slowdown, but that makes no sense to me if these things are only done in the vertex shader.
i mean, if you know better than the shader authors, why not do it yourself? :) i think zfast shader doesn't have a vertical variant, so maybe the issue is solved there, or maybe it doesn't have a shadow mask. i haven't tested it yet...
-
@dankcushions said in crt-pi shader users - reduce scaling artifacts with these configs in lr-mame2003, lr-fbalpha, lr-nestopia (and more to come):
i think zfast shader doesn't have a vertical variant,
No, there is a vertical variant of zfast too. Actually, there are four variants: standard and vertical, plus both again with curvature.
@rsn8887 It has been stated by davej (and possibly others) that, although it is possible to do some detection on the front-end to avoid the variants, this alone can have an adverse effect on performance. The point being, we all want these shaders to run as lean as possible, with as little effect on performance as possible while still achieving the visual results. The variants are really the same shader repeated, only with different configuration settings embedded, so that we as users don't have to dive into the shader files and change the settings. In other words, it is simply easier to make a duplicate with adjustments and cycle through the shaders than it is to edit the files themselves. You can also specify which variant to use in ROM-specific configs.
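For example, a ROM-specific override can simply point at whichever variant you want - something like this (illustrative path and filename, so check your own shaders directory for the exact names):
    video_shader_enable = "true"
    video_shader = "/opt/retropie/configs/all/retroarch/shaders/zfast_crt_curve_vertical.glslp"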
-
I understand. I didn't mean that we need extra options or anything done in the front-end. I mean that the vertex shader itself should set those options automatically, internally. For example, the vertex shader can check whether lines > rows to see if it is a vertical or horizontal game. It can also check what the output resolution is and adjust its behavior.
All of these checks are trivial and have to be done only per vertex, not per pixel, so they effectively run only once per frame - well, technically four or six times, because the screen has four or six vertices.
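Something like this GLSL-style sketch is what I have in mind (the uniform and attribute names follow the libretro GLSL spec, but the detection logic is purely hypothetical - it is not from any actual shader):
    // vertex shader - runs only a handful of times per frame
    uniform mat4 MVPMatrix;
    uniform vec2 InputSize;     // the game's original resolution
    attribute vec4 VertexCoord;
    attribute vec4 TexCoord;
    varying vec4 TEX0;
    varying float scanlines;    // 1.0 = draw scanlines, 0.0 = hi-res source, skip them
    varying float vertical;     // 1.0 = rotated (vertical) game

    void main()
    {
        scanlines = (InputSize.y > 300.0) ? 0.0 : 1.0;
        vertical  = (InputSize.x < InputSize.y) ? 1.0 : 0.0;
        TEX0 = TexCoord;
        gl_Position = MVPMatrix * VertexCoord;
    }
The fragment shader would then only need to read those two flags.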
-
@rsn8887 We are on the same page with the checks. You can probably find the discussion thread from back when davej released the crt-pi shaders. This idea came up then, and he explained how doing the detection would affect performance.
-
@rsn8887 said in crt-pi shader users - reduce scaling artifacts with these configs in lr-mame2003, lr-fbalpha, lr-nestopia (and more to come):
I understand. I didn't mean that we need extra options or anything done in the front-end. I mean that the vertex shader itself should set those options automatically, internally. For example, the vertex shader can check whether lines > rows to see if it is a vertical or horizontal game. It can also check what the output resolution is and adjust its behavior.
All of these checks are trivial and have to be done only per vertex, not per pixel, so they effectively run only once per frame - well, technically four or six times, because the screen has four or six vertices.
The issue isn't so much with the checks themselves, which are trivial even if done in the fragment shader, but with the branching down different code paths based on the results of the checks - which you'd still have even if the fragment shader just tested a flag set in the vertex shader. crt-pi has to do as much of its configuration as possible as compile-time checks to avoid those branches.
It's also worth pointing out that for some checks the information available isn't sufficient. Your horizontal-or-vertical game check is a case in point. What happens when a game is displayed on a screen that is in portrait orientation (because someone mainly plays vertical games)? The shadow mask emulation needs to be the opposite of what it is for landscape orientation, and that information isn't provided to shaders by the libretro library.
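To illustrate the compile-time idea (a simplified sketch, not crt-pi's actual source): each variant is effectively the same source compiled with different settings baked in, so the unused path doesn't exist in the compiled shader at all.
    #define CURVATURE           // defined in the -curvature variants only

    uniform sampler2D Texture;
    varying vec4 TEX0;

    vec2 curve(vec2 uv)         // stand-in for a real barrel-distortion warp
    {
        vec2 c = uv - 0.5;
        return uv + c * dot(c, c) * 0.1;
    }

    void main()
    {
    #ifdef CURVATURE
        vec2 pos = curve(TEX0.xy);
    #else
        vec2 pos = TEX0.xy;     // this path is compiled out when CURVATURE is defined
    #endif
        gl_FragColor = texture2D(Texture, pos);
    }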
-
@dankcushions Are the cfg files for FB Alpha still based on 0.2.97.39 or the latest 0.2.97.43?