Stopping lag... I could use some help.

Yes. Especially if you have players spread about. The server is actively loading and writing region files that can literally be gigabytes in size, live, as changes happen and as data needs to be loaded when people move around. It's a large reason for the current limitations on vehicle speeds.
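
To picture it, here's a rough Python sketch of that pattern. All the names, sizes, and the file format are hypothetical, not the game's actual code; the point is that every block change hits the disk as it happens, and every player movement can trigger a region load:

```python
# Rough sketch of live region streaming. Names, sizes, and file format
# here are hypothetical -- this is not the game's actual code.
REGION_SIZE = 512  # world units covered by one region file (assumed)

loaded = {}  # (rx, rz) -> in-memory block data for that region

def region_key(x, z):
    return (x // REGION_SIZE, z // REGION_SIZE)

def on_player_move(x, z):
    key = region_key(x, z)
    if key not in loaded:
        # The disk read happens right now, while the player is moving.
        # Faster vehicles mean more of these loads per second.
        loaded[key] = load_region(f"r.{key[0]}.{key[1]}.7rg")

def on_block_change(x, z, data):
    key = region_key(x, z)
    loaded[key][(x, z)] = data
    write_region(key)  # written live, change by change, no big batch save

def load_region(path):
    return {}  # stand-in for reading from a potentially huge region file

def write_region(key):
    pass  # stand-in for a seek+write into that region file
```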

And you need not just fast storage, but a CPU, RAM, and motherboard bandwidth that can all keep up.

Our main Linux array is a little overkill, but I set this system up to run several 7 Days servers and a full Ark cluster. For your particular case, I would advise at least 3-4 GB/s of throughput on the drives.


Yes to what? 10K drives, or iSCSI?

 
I still don't know what the actual bandwidth of your 10k drives is. You never said. My 15k drives get about 890 MB/s, but to be fair, there are only two of them. You've never provided the full details on your configuration; I did ask about it in the very first post.

 
What specifics do you want to know, though? It's twelve Dell 600GB 10K drives in RAID 10, off the RAID card in a PowerEdge R720. What's a good program to test the I/O? Because I'm running it through a VM/datastore, I won't really get an accurate reading.

 
Run it from the 2019 instance that your server runs from.
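
If you'd rather not install a real benchmark like diskspd or fio, a crude Python write test along these lines (path and sizes are just placeholders) will at least give a ballpark sequential MB/s from inside the VM:

```python
# Crude sequential-write test -- a sketch, not a substitute for a real
# benchmark like diskspd or fio. Path and sizes are placeholders.
import os
import time

PATH = "testfile.bin"   # put this on the datastore you want to measure
SIZE = 4 * 1024**3      # write 4 GiB total
BLOCK = 1024 * 1024     # 1 MiB per write call

buf = os.urandom(BLOCK)
start = time.perf_counter()
with open(PATH, "wb", buffering=0) as f:
    for _ in range(SIZE // BLOCK):
        f.write(buf)
    os.fsync(f.fileno())  # flush so the OS write cache doesn't flatter us
elapsed = time.perf_counter() - start
print(f"{SIZE / elapsed / 1e6:.0f} MB/s sequential write")
os.remove(PATH)
```

The fsync matters: without it you'd mostly be measuring the guest's write cache rather than the drives.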


It's almost equal on performance; the 10K RAID wins by only a few MB/s. I've got SSDs on the way. I increased the VM's CPU count to 12, and that brought up the performance, but CPU utilization is at like 2%. I have a feeling the drives are the bottleneck at this point, because I can see how much I/O the 7 Days server is pushing through Performance Monitor.

So we'll see when they arrive. 

 
Also, what other hardware issues have you had? You had good performance with your older-chipset CPUs. So most likely this is all hard drive issues, and if so, why isn't this game caching the items it uses from the hard drive into memory? It's acting like a SQL database.

 
If it cached the gigabytes of data instead of writing it live, you run into two issues: 1. massive amounts of RAM being used; 2. every time it saves the cache, it's going to tank performance.

Take Valheim as an example here. Even with as little data as it's processing, and it being written to a very small flat database, having it cache the data causes the game to almost pause every time it does a save. And the server isn't even hosting most of the work! You can see this in action on the PS4 and Xbox versions of the game, because they perform a similar operation: the game will freeze once every three minutes because it's performing a save.
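
A toy version of that tradeoff, with made-up numbers (this is not Valheim's or 7 Days' actual code):

```python
# Toy illustration of cache-then-flush saving (made-up numbers; not
# Valheim's or 7 Days' actual code). The periodic save blocks the loop.
import time

SAVE_INTERVAL = 180.0  # seconds between saves, like the console freeze
dirty = {}             # world changes accumulated in RAM since last save
last_save = time.monotonic()

def tick(change_id):
    global last_save
    dirty[change_id] = b"x" * 1024  # pretend the world changed this tick
    if time.monotonic() - last_save >= SAVE_INTERVAL:
        save()  # issue 2: everything stalls until the flush finishes
        last_save = time.monotonic()

def save():
    # Issue 1: the whole cache sat in RAM until now. The longer it
    # accumulated, the bigger this write -- and the longer the freeze.
    with open("save.dat", "ab") as f:
        for chunk in dirty.values():
            f.write(chunk)
    dirty.clear()
```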

 
Hmm, interesting. Why is writing to RAM such a big deal compared to an HDD/SSD, though?

 
The issue isn't writing to RAM; it's that when you're storing a lot of data in RAM and then have to write it to disk, it's going to be an extremely large operation. Make a copy of the Regions folder and see how long that takes. Now imagine it has to do that every few minutes instead of just writing to the files live. Also look at the total size of the files: at least that much RAM would be in use by the client.

This is a pretty basic representation, and it could be optimized a little. However, the current method is about as optimal as it gets.
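
If you want to put a number on it, something like this (the path is an example; point it at your own save) times the copy and reports an effective MB/s:

```python
# Time a copy of the Regions folder and report an effective MB/s -- a
# stand-in for what one full cache flush would cost. Path is an example.
import pathlib
import shutil
import time

src = pathlib.Path("Saves/Navezgane/MyGame/Region")  # point at your save
total = sum(p.stat().st_size for p in src.rglob("*") if p.is_file())

start = time.perf_counter()
shutil.copytree(src, src.with_name("Region_copy"))
elapsed = time.perf_counter() - start

print(f"{total / 1e6:.0f} MB copied in {elapsed:.1f} s "
      f"({total / elapsed / 1e6:.0f} MB/s)")
```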

 
I see, makes sense.

 
What does the -nographics argument do?

 
I'll be honest, this seems extreme, but it also seems like it's on his end somehow. As I said before (in this thread or another), I moved my 7 Days server back to an old Optiplex workstation: an i5-2400, 16GB of DDR3, and a single 500GB SATA HDD. We have had five players on with no issues. If that garbage desktop can run it, his server should blow it out of the water. I'm not blaming the game, but his specs should be plenty. Perhaps something is bottlenecking him via virtualization? The desktop I am hosting on runs Windows 7 Pro 64-bit on bare metal.

 
That's the only thing I can think of: it doesn't like the VM. Nothing else makes any sense.

I may have to check this:

It's VT-d. It helps the host system communicate correctly with the CPU when going through a VM. I am going to try this late tonight. It just seems weird to me that the CPU is sitting at such a low percentage. You triggered my memory on these two options; it has to be the VM, as Thunder is running it on bare metal, as he said in the beginning.

 
Yeah, you want the IOMMU exposed. I have always done that. I don't know if it's the issue, though. Sylen seems to be leaning towards chunk saving maxing out disk I/O.

 
Yeah, you want as close to direct I/O as you can get. This is why I haven't used VMs in like 10 years. Even with hardware virtualization, they do not get the same performance as bare metal.

 
That pretty much cements my theory that the 7 Days server doesn't like VMs/ESXi and Windows. People have been having good results with Proxmox and Linux.

 
Sylen, check out XCP-ng. There's barely any difference unless you get into a state where you have a single storage setup (say RAID 10, four disks) and suddenly all VMs need to write to disk. However, professional setups use SANs and don't have that issue. I have ten Ark servers on two identical Xeon setups, and they never choke out.

 