Actually, the new frame gen with DLSS 4.0 makes games look better than native without increasing input lag much, and I care much more about quality. But it's still bad to have worse performance than in any AAA game, I mean Cyberpunk or MH Wilds.
Actually, Nvidia frame gen works in No Rest for the Wicked, even with non-native Reflex forced through the Nvidia control panel - I get 4 ms latency (8-12 ms without Reflex) and reach 240 FPS. But this isn't the native technique and it's worse, because it's software based rather than hardware based, and the image has visible artifacts that don't occur with DLSS 4.0. I mean the new AI-based software frame gen called "Smooth Frames" (or something like that) that's available in the Nvidia control panel.
I hadn't noticed it doesn't use DX12... which is bad. At least I understand now why it works the way it works and can't be helped, and I definitely see the Unity engine in an even worse light now.
I deleted it. And @Paule, it is spam if it clutters a thread focused on problem solving.
Also, on another note, I worked in HPC for about 10 years, and the 1080 I got on release (even a few days prior) was the most reliable card I ever had - it lasts me to this day. So please refrain from using this thread to argue about things that don't help, can't be changed and are pointless altogether. Thx.
DLSS frame gen must always increase input lag if you understand how it works. Your GPU first renders two real frames and, before it shows the second one, it generates and shows one or three fake mid-frames. So the input lag clearly can't be lower than without frame gen, where the real frames aren't delayed at all. Nvidia Reflex can help in cases where you are GPU bound, and it can do that with or without frame gen, so it's hardly an argument. Also, the way frame gen is implemented, it lowers your base FPS.
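To make the timing concrete, here is a toy model (my simplification, not Nvidia's actual frame pacing): the fake frames between real frame N and N+1 can only be interpolated once N+1 exists, so each real frame is held back by roughly one real-frame interval before it reaches the screen.

```python
# Toy model of frame-gen latency (simplified; ignores the generation cost itself).
def framegen_timing(base_fps: float, generated_per_pair: int = 1):
    real_frame_ms = 1000.0 / base_fps
    output_fps = base_fps * (generated_per_pair + 1)  # what the FPS counter shows
    # Native: a real frame is presented as soon as it's rendered.
    # Frame gen: it waits until the next real frame exists so the mid-frames can be built.
    added_latency_ms = real_frame_ms
    return output_fps, added_latency_ms

fps, extra = framegen_timing(base_fps=120, generated_per_pair=1)
print(f"~{fps:.0f} FPS shown, but each real frame lands ~{extra:.1f} ms later than native")
# ~240 FPS shown, but each real frame lands ~8.3 ms later than native
```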
DLSS frame gen is always a HW-accelerated feature, running on the Tensor cores and using the same inputs we give to the "standard" DLSS (color, depth, motion vectors), so any glitches you see when it's forced via the DLSS override would also be there with a proper game implementation.
I'm testing DLSS 4.0 on an RTX 5070 Ti and, despite the more powerful Tensor cores, the new Transformer model brings a noticeable performance hit (about 0.8 ms per frame at 4K HDR with DLSS set to Quality, which costs me roughly 10% of my FPS).
So if your goal is the highest possible FPS, Nvidia Reflex should be disabled and DLSS switched to the older CNN model.
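For what it's worth, the ~10% figure is easy to check if you assume a base frame time of around 8 ms (the base FPS here is my assumption, only for illustration):

```python
# Rough check of the Transformer model's cost, assuming ~8 ms base frame time.
base_frame_ms = 8.0        # assumed: ~125 FPS before the extra DLSS cost
transformer_cost_ms = 0.8  # per-frame cost quoted above
fps_before = 1000.0 / base_frame_ms
fps_after = 1000.0 / (base_frame_ms + transformer_cost_ms)
print(f"{fps_before:.0f} FPS -> {fps_after:.0f} FPS ({100 * (1 - fps_after / fps_before):.0f}% lost)")
# 125 FPS -> 114 FPS (9% lost)
```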
Have you seen the screenshot? I have 7 ms latency. Even if it got tripled or multiplied by 10, it would still be an acceptable delay, since it would stay below 100 ms. 7-14 ms is faster than even a cat's reaction time.
I have clearly said: I got it to work with the new SOFTWARE frame gen called "Smooth Frames" (or something like that), with Reflex forced by the driver - not via DLSS. I have also used DLSS 4.0 via DLSS Swapper and got a stable 240 FPS with 4 ms latency, which is better than with just DLSS and no frame gen.
Anyway, will native hardware DLSS and optimization get better with the Breach update, at least on a high-end GPU?
This is a Unity Engine game, and this is about what you can expect from the engine. I heard you can improve performance by limiting job threads and also by using Vulkan instead of DX11. At least I've had good experiences with Unity and Vulkan.
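If anyone wants to try that, Unity standalone builds usually accept a graphics API override as a launch argument (e.g. in the Steam launch options). `-force-vulkan` is documented by Unity; the job-worker flag below only exists on newer Unity versions as far as I know, and whether this particular game respects either one is a guess on my part:

```
-force-vulkan -job-worker-count 6
```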