CPU usage up 200-500% with 1.0 dedicated server

mar3ld

New member
Is there an explanation as to why CPU utilization has gone up so much with version 1.0? I'm using a Linux dedicated server and I'm noticing ...

  • up ~500% when the server is idle
  • up ~200% during gameplay



I've searched but haven't found anything related to 1.0. If you know of an existing discussion, please point me to it.

 
Last edited by a moderator:
Is it higher quality graphics? I have not compared the size of the graphics files, but playing solo offline, I had to downgrade my graphics to play 1.0. The animations for fire (forge, campfire) seem to challenge my PC, even with shadows basically off. 

 
The graphics have indeed improved in the game, but I'm talking about the dedicated server. No graphics involved.

 
NOTE: If you have a PC Bug Report, or are a Console player, you are in the wrong section!


Noted 👍🏻

I'm asking this way to get a better sense of whether others are experiencing the same thing or whether it's isolated to my systems (i.e. my problem). If it starts to look like a bug, I'll make sure to use the link you provided and file a bug report.

Anyone else noticing high CPU usage with 1.0 dedicated server, especially at idle?

 
Here are actual CPU usage numbers. See the image; here's what each entry is:

Dedicated servers running under Linux:

  1. 7DtD 1.0 large RWG at idle
  2. 7DtD 1.0 medium RWG at idle
  3. 7DtD 1.0 small RWG at idle
  4. 7DtD 1.0 small RWG at idle
  5. Satisfactory at idle
  6. 7DtD a21 small RWG at idle
  7. No More Room In Hell, couple of players active
  8. Satisfactory at idle
  9. Linux system process
  10. Linux system process
  11. Linux system process
  12. Linux system process
  13. Linux system process

7DtD 1.0 = average 77% CPU at idle

7DtD a21 = average 16% CPU at idle

[attached screenshot: top output]

 
mar3ld said:
Here are actual numbers of CPU usage. [...] 7DtD 1.0 = average 77% CPU at idle; 7DtD a21 = average 16% CPU at idle.




The references here are a bit misleading. The per-process CPU % that top reports is not a share of the whole system. With a 24-core CPU, total capacity is 2400%, so a process showing 81% is actually using 81 / 2400 = 3.375% of the machine.

[attached screenshot: top output from a 32-core system]


In this example, the CPU has 32 cores available, so a fully saturated machine would show 3200%. The first process, at 94.5%, is actually using only about 2.95% of the whole CPU.

Here I captured one of the processes spiking over 100%. I'm also including the full set of processors in the top output.

[attached screenshot: top with the per-core CPU panel expanded]


As you can see, in that image overall system load peaks at a mere 4.86%. The highest load on any single CPU core is 94.8%, even though one process shows 115.5% utilization, because that process is spread across more than one core.
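To put numbers on that conversion, here is a minimal shell sketch. The core count and %CPU reading are hard-coded to match the 32-core example above; on a live system you would take them from `nproc` and from top:

```shell
# Convert top's per-core %CPU into a share of the whole machine.
# Values are hard-coded to the 32-core example; on a live system use
# cores=$(nproc) and read proc_pct from top's %CPU column.
cores=32
proc_pct=94.5
share=$(awk -v p="$proc_pct" -v c="$cores" 'BEGIN { printf "%.2f", p / c }')
echo "share of whole CPU: ${share}%"   # prints 2.95%
```

The same arithmetic explains why a single process can legitimately show more than 100%: the ceiling is 100% times the number of cores it can run on.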

 
Thank you very much, SylenThunder, for taking the time to respond so thoroughly. I truly appreciate it.

While I understand how CPU percentages are reported and the context about total CPU capacity, my concern remains regarding the significant increase in average CPU usage with the dedicated server for version 1.0 compared to version a21.

For instance, under similar and idle conditions, the a21 version averaged 16% CPU usage, while version 1.0 is averaging 77%. This considerable jump in usage seems disproportionate.

Do you have any ideas on what might be causing this increase?

I’m also eager to hear from others if they have noticed similar high average CPU usage with the 1.0 dedicated server. Any additional insights or shared experiences would be greatly helpful in determining whether this issue is widespread or specific to my setup.

Thank you once again for your input!

 
Anyone else experiencing significantly higher CPU usage with 1.0 compared to alpha versions using dedicated server?

 
mar3ld said:
Anyone else experiencing significantly higher CPU usage with 1.0 compared to alpha versions using dedicated server?
Still no here.  It is slightly higher, but not significantly.  Basically, what I would expect to see with the new systems and features.

Architecture means a lot though. My base reference is an R9 5950X with a custom overclock curve.

In my earlier images, ageofdemons is our a21 Darkness Falls server. It's using "28%" completely idle with no players on it. By comparison, our top two server processes there were V1.0 instances with 5-8 players online. The a22test instance that spiked to 115% was actually running a stress test to simulate a crash we were troubleshooting at the time. When I captured those statistics, I didn't realize one of our group's admins was running tests to trigger a crash and capture the data with a recovery-testing script.

 
SylenThunder said:
Still no here. It is slightly higher, but not significantly. [...]


What size are your maps when comparing?

Can't reproduce that on mine. View attachment 31837


Hi. Thanks for chiming in. Would you care to explain what you're trying to show with your picture? I'm not sure I follow ...

The Fun Pimps, do you have any info regarding this?

Again, this is 21.2 vs 1.0 at my place, all at idle:

  • All but PID 2900 are 6K maps.
  • The accumulated CPU time (TIME+) after only about two days is remarkably high.

[attached screenshot: top output]
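For anyone wanting to compare those TIME+ figures without screenshots, the same cumulative CPU time can be pulled with ps. The process name `7DaysToDieServer` here is an assumption; substitute whatever your instances actually run as:

```shell
# Sketch: cumulative CPU time (top's TIME+ column) and lifetime-average
# %CPU per server instance, sorted busiest first.
# "7DaysToDieServer" is an assumed process name.
ps -C 7DaysToDieServer -o pid=,etime=,cputime=,%cpu=,args= | sort -k4 -rn
# Note: ps's %cpu is cputime / elapsed time, i.e. a lifetime average,
# which makes two instances started at the same time directly comparable.
```

This avoids the single-refresh sampling noise you get from one glance at top.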

 
mar3ld said:
What size are your maps when comparing?
All of our maps are 10k worlds with customized RWG settings for more/larger cities. You can see more details about player counts and such here.

mar3ld said:
The Fun Pimps, do you have any info regarding this?
No offense, but so far you haven't really given any information either. Your most recent image has more of the overhead details from top, but there is still zero data on what CPU you are running, or whether it's bare metal, a VM, a rented server, or what.

I have clearly stated that this information is important and provided similar specs on my setup in an effort to properly describe the situation.

If you aren't going to provide the details like logs and system information so that the data in your screenshot actually has some reference for meaning, there is no point in continuing this discussion.

 
Thank you for your feedback, SylenThunder, and I appreciate your continued engagement in this discussion.

I want to clarify that my main concern is not to focus on the specific hardware I’m using, which I’m not ready to disclose at this time. My intention is to keep the discussion open and centered around the differences in performance between version 1.0 and the previous alpha versions, rather than on the specifics of my setup.

The key issue here is that I’m running various versions of 7DtD on the exact same hardware, yet the performance difference is vast. For instance, where a21.2 averaged around 16% CPU usage at idle, version 1.0 is now averaging 77% under similar conditions. This is significant, especially considering that I’m running about 8-10 maps simultaneously on this machine, depending on the week of the month. The cumulative CPU usage is becoming overwhelming, which wasn’t an issue with the alpha versions.

To provide a bit more context: I'm running the 7DtD dedicated servers on Linux, close to bare metal; no VM is involved, though I am using Docker. All instances run on the same machine, so they all share the same OS environment regardless of 7DtD version. I'm genuinely trying to understand what might be causing this increase in CPU usage, whether it's related to TFP using new CPU instructions or optimizations, or something else entirely. I'm not looking to point fingers but to have a constructive discussion, find a solution, and restore stable performance.
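Not a fix for whatever changed between a21 and 1.0, but since everything here runs in Docker, one stopgap is to cap how much CPU a 1.0 instance may take so it can't crowd out the other maps. A compose-style sketch; the service name and the limit value are invented for illustration:

```yaml
# Illustrative fragment only; the service name "7dtd-v1-large" is made up.
services:
  7dtd-v1-large:
    deploy:
      resources:
        limits:
          cpus: "2.0"   # at most two cores' worth of CPU time
```

The same cap can be applied to an already-running container with `docker update --cpus 2 <container>`, and `docker stats` shows per-container CPU% for verifying the effect.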

Since you mentioned the importance of providing logs, could you please advise on what specific logs or parts of the logs I should provide? I’m asking because I can’t see anything in the logs that directly relates to how the 7DtD servers are performing. The logs seem to contain more static information on what's loaded at start and what's changed during game play. I want to make sure I’m giving the most relevant data to help us better understand what might be happening.

Of course, the answer could simply be that nothing significant changed between the versions, and that no one else is experiencing what I am. If that’s the case, then this is likely a problem specific to my setup, and I’ll need to focus on resolving it on my end.

Thank you again for your input, and I’ll review the link you provided to see how your setup compares.

 
My dedicated server is idling at 10% with current game directly after start. I can't say what it was in A21 but I would be surprised if it was anything less. I can't say what it would be after letting it run for a day since I don't have a need for letting it run that long.

Your logfile is simply needed so people with experience in reading logfiles for this game can look for suspicious data or generally for out-of-the-ordinary things. Nobody can tell you before looking at it where or what the suspicious data is. And different people will spot different things or get different ideas because they are looking for different things in there and have different experiences with all kinds of bugs.

The other reason is to get a full picture of the situation. I have seen dozens of players report an issue and claim one thing and the logfile showed something different. Mistakes happen.

 
mar3ld said:
I want to clarify that my main concern is not to focus on the specific hardware I’m using, which I’m not ready to disclose at this time. My intention is to keep the discussion open and centered around the differences in performance between version 1.0 and the previous alpha versions, rather than on the specifics of my setup.
Since the architecture and setup have such a large impact on performance, it is extremely important to discuss the specifics when trying to interpret the metrics you are seeing. Sure, different versions will have different performance simply due to changes in features, options, and how that version of the client utilizes your hardware.

The problem is that you are claiming broad sweeping differences, and there is no point of reference to validate a possible cause.

As for the logs, it would be the full log from the server running. Beginning to end. That will give us a lot of information on what the performance metrics of the system are as it is working through gameplay. The most relevant data is basically everything that is inside of the log file.  

I'm trying to set an example, but Pastebin is getting so hard to use with how bloated these logs get.  One minute...

 
Google Drive hates my files too.  Probably the work firewall. I will have to play with it later.

Apparently, OneDrive works from work...

EOS_Crash.txt

In this particular log, the server did end up freezing because of an EOS issue. This hasn't been very common, but it has been happening regularly since Stable dropped. The client is still able to process telnet login requests, and getting details via the Steam API works peachy, so any monitoring tools think the server is alive. However, the server is incapable of processing user logins, or even any commands via telnet/console. As a result, the only way to stop the client is by force-killing the process. Fun times.
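For anyone scripting around this failure mode (telnet accepts the connection but the console never answers), it suggests probing with an actual command rather than a plain TCP connect. A hedged sketch; the port, password handling, and the `version` console command are assumptions about your setup:

```shell
# Sketch of a liveness probe that distinguishes "port open" from
# "console actually responding". HOST/PORT/TELNET_PASS and the
# "version" command are assumptions; adjust to your serverconfig.
is_responsive() {
  # $1 = raw text captured from the telnet session; a live console
  # echoes something containing "version" back
  printf '%s' "$1" | grep -qi 'version'
}

probe() {
  { printf '%s\nversion\nexit\n' "${TELNET_PASS:-}"; sleep 5; } \
    | nc -w 10 "${HOST:-127.0.0.1}" "${PORT:-8081}" 2>/dev/null
}

# Usage (left commented so the sketch is inert without a live server):
# if ! is_responsive "$(probe)"; then
#   pkill -9 -f 7DaysToDieServer   # force kill, as described above
# fi
```

A monitor built this way would catch the hung state that a Steam API query or a bare port check misses.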

 
I recently deployed a small server in an LXC container on an Odroid H4 Ultra (with an i3-N305), and I notice that the CPU load is not negligible even when the server is doing nothing.

Is it that computationally expensive to wait for an incoming connection while doing nothing else at all?

 
I recently deployed a small server in an LXC container on an Odroid H4 Ultra (with an i3-N305), and I notice that the CPU load is not negligible even when the server is doing nothing.

Is it that computationally expensive to wait for an incoming connection while doing nothing else at all?


Unless I'm searching wrong, that CPU is akin to a lightweight/low-power mobile processor without hyperthreading? That would explain your high CPU load; mine doesn't even register a percent of CPU usage on Windows when running idle.

 
Yes, it is a low-power CPU, because I want to host some services (24/7) using as little power as possible.

I'm not sure that "my CPU is so powerful that it barely notices even moderate workloads" is quite convincing.

Because I just checked on a computer with a 4600G (using Linux), and I still get about 20% CPU usage while the server is idle.

I find it a bit much for a process that is doing nothing.

If the server was failing to deliver during games, yes, you could say that a beefier CPU would be expected.

But here, it's wasting CPU cycles while waiting, and powerful CPUs only hide that fact.
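For pinning down "how much CPU does an idle server really use", an interval average read straight from /proc is more reproducible than one refresh of top. A Linux-only sketch (it assumes the process name contains no ')'):

```shell
# Average CPU% of one process over an interval, computed from
# /proc/<pid>/stat (utime + stime, in clock ticks). Linux only.
cpu_pct() {   # cpu_pct <pid> <seconds>
  pid=$1; secs=$2
  hz=$(getconf CLK_TCK)
  # strip "pid (comm) " so utime/stime land at fields 12/13
  t0=$(cut -d')' -f2- "/proc/$pid/stat" | awk '{print $12 + $13}')
  sleep "$secs"
  t1=$(cut -d')' -f2- "/proc/$pid/stat" | awk '{print $12 + $13}')
  awk -v d="$((t1 - t0))" -v hz="$hz" -v s="$secs" \
      'BEGIN { printf "%.1f\n", 100 * d / (hz * s) }'
}

# Example (the PID is hypothetical):
# cpu_pct 2900 60    # average % of one core over a minute
```

Sampling each instance this way over the same minute would give directly comparable idle numbers for a21 vs 1.0.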

 