You don't know what you are talking about. Watching a grandspartan stream, who has a 3080ti graphics card with 64 GB of RAM, there is massive lag. I don't want anyone telling me I don't know what I'm saying. The game is not optimized one bit and I'm sick and tired of people telling me otherwise. Don't tell the player base we don't know what we are talking about.

More seriously: the game is CPU-bound (and probably cache- or datapath-bound in some situations as well), not GPU-bound, and it depends on a graphics engine that is not optimized for their use case.
"A chain is only as strong as its weakest link". You probably know that saying, right? Now in the case of 7d2d (different than most other games) the CPU is the weakest link. grandspartan could have two 3099ti overclocked in his computer (if such beast were available) and the game would not run one FPS faster. Because if the 3080ti was using 20% of its power then the two 3099ti would be running at 10% utilization. But the same 1 (!!) core of the CPU would be running at 100% just like it does now with a 3080ti.
Whatever you do to the GPU, the CPU is the weak link and it is running at full speed already.
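To put numbers on that (a toy model with made-up numbers, not actual 7d2d profiling data): a frame is only finished when both the CPU part and the GPU part are done, so the frame time is roughly whichever of the two takes longer. Making the GPU side faster changes nothing while the CPU side is the longer one.

[code]
# Toy frame-time model in Python; the millisecond values are invented for illustration.
# A frame is ready when the slowest stage is done, so FPS is set by the bottleneck.

def fps(cpu_ms_per_frame: float, gpu_ms_per_frame: float) -> float:
    frame_ms = max(cpu_ms_per_frame, gpu_ms_per_frame)
    return 1000.0 / frame_ms

# Hypothetical numbers: the single render thread needs 25 ms, the 3080ti needs 5 ms.
print(fps(cpu_ms_per_frame=25.0, gpu_ms_per_frame=5.0))   # 40 FPS
# Double the GPU power and its time halves, but the frame rate does not move:
print(fps(cpu_ms_per_frame=25.0, gpu_ms_per_frame=2.5))   # still 40 FPS
[/code]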
Now, could TFP change that? Not according to its developers, because the graphics engine (Unity) seems to feed the data to the GPU from one single thread running on one CPU core, and that is the core that is already fully occupied with that task.
Since players who have high-end graphics cards expect to turn all options to ultra, their games often run at low FPS, and those FPS do NOT depend on the graphics card but on the CPU. Every graphics option they turn on also makes the CPU work slightly harder in that single thread which feeds the GPU with data. But that thread is already at its limit, so the game gets slower at ultra settings even though the GPU is bored out of its wits.
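Same toy model, one step further (all costs invented; the real per-setting CPU costs in 7d2d are not public): every setting you crank up adds a bit of work to that one feeding thread, so the FPS drops even though the GPU time stays tiny.

[code]
# Illustration with invented per-feature costs: extra graphics options add CPU work
# to the ONE thread that submits work to the GPU, while the other cores stay idle.

BASE_SUBMIT_MS = 20.0                                           # hypothetical baseline
FEATURE_COST_MS = {"shadows": 3.0, "reflections": 4.0, "view_distance": 6.0}

def cpu_ms(enabled):
    return BASE_SUBMIT_MS + sum(FEATURE_COST_MS[f] for f in enabled)

def fps(cpu, gpu=5.0):
    return 1000.0 / max(cpu, gpu)

print(round(fps(cpu_ms([])), 1))                                          # 50.0 FPS on low
print(round(fps(cpu_ms(["shadows", "reflections", "view_distance"])), 1))  # 30.3 FPS on ultra
[/code]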
In recent years GPU performance and CPU performance have both improved as expected. But while GPUs were massively parallel from the beginning and could fully employ those improvements, the effective speed of the CPU (for games at least) has stayed close to its single-core speed, and the gains from adding more cores are half wasted by many games today. Since the i7-4770K (six generations ago) single-core performance has improved only by about 30% (at least according to a comparison website). And graphics engines like Unity are only slowly adapting to a situation where all tasks have to be balanced across cores.
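A rough way to see why extra cores help so little here (my own back-of-the-envelope reasoning, not anything TFP has published) is Amdahl's law: the speedup from more cores is capped by the fraction of the frame that has to stay on one thread. If, say, half of the frame work were stuck in that single feeding thread, even an unlimited number of cores could at best double the frame rate.

[code]
# Amdahl's law sketch with a hypothetical serial fraction; the real serial share
# of 7d2d's frame is not publicly known, 0.5 is just an assumption for illustration.

def speedup(serial_fraction: float, cores: int) -> float:
    # The serial part cannot be split; only the parallel part scales with core count.
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

for cores in (2, 4, 8, 16, 64):
    print(cores, round(speedup(0.5, cores), 2))
# 2 -> 1.33x, 4 -> 1.6x, 8 -> 1.78x, 16 -> 1.88x, 64 -> 1.97x: never past 2x.
[/code]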
I don't have experience with Unity myself, so I don't know what the current status is. Eventually all graphics engines will have to be fully balanced across cores, or they will be superseded by better engines, because at the moment CPU improvements largely come from adding more cores.