How to fix AMD GPU stutters | Streaming settings config fix

Issue Description (Required):

A potential solution is in the ‘Attempted Solutions’ section below, for people who don’t care about the discussion.

Confirmed working for the following GPU + CPU combos. Please share your specs and experience below so we can keep adding to the list.

  • GPU: RX 7900 XTX | CPU: Ryzen 7 9800X3D
  • GPU: RX 9070 XT | CPU: Ryzen 9 7900X
  • GPU: RX 6700 XT | CPU: Ryzen 7 5700X3D

Note: You will experience some pop-in when you first load into the character select screen, but after this the game runs as smooth as Helldivers 2 for me :slight_smile:

I have come back to Darktide and have been optimising the game’s ini files again to stop the dreaded AMD stutters.

For credibility: I have previously spent months of my spare time testing, which culminated in this post (I fixed stutter and textures not loading in! (Texutre streaming config file change) ), which helped AMD users somewhat, and helped NVIDIA users as well!

I have returned and believe I have found the culprit for the AMD stutters: the buffer sizes found within the mesh, texture, and feedback streamer settings.

It seems that a buffer size above a value of 64 just does not work well with AMD graphics cards and causes issues.

This seems to be guidance for mesh shaders specifically, and I believe I essentially disabled the texture streaming buffer. I don’t understand everything, but I am seeing noticeable improvements and really good results.

Links to two articles I have been reading to come to my conclusion:

Not sure if the devs see this, but it may be worth looking into as a solution

Attempted Solutions (Optional):

Please note I have a high-end PC, and I am unsure if this will work for lower-end systems due to VRAM limits and load on the CPU. Please let me know your experience.

You will need to open config files and change some values within them for this fix.
The first two sets of changes are in settings_common.ini and win32_settings.ini, found in your Darktide game folder.

File path = C:\Program Files (x86)\Steam\steamapps\common\Warhammer 40,000 DARKTIDE\bundle\application_settings
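Before editing anything, it’s worth backing up both files so you can restore the originals if the tweaks misbehave. Here’s a minimal Python sketch; `backup_configs` is my own hypothetical helper, and the Steam path is just the default install location quoted above — adjust it for your setup.

```python
# Back up both config files before editing, so the originals can be restored
# if the tweaks misbehave. backup_configs is a hypothetical helper of mine;
# the Steam path below is just the default install location from this post.
import shutil
from pathlib import Path

CONFIG_FILES = ("settings_common.ini", "win32_settings.ini")

def backup_configs(settings_dir: Path) -> list[Path]:
    """Copy each config file to <name>.bak and return the backup paths."""
    backups = []
    for name in CONFIG_FILES:
        src = settings_dir / name
        dst = src.with_name(src.name + ".bak")  # e.g. settings_common.ini.bak
        shutil.copy2(src, dst)                  # copy2 also keeps timestamps
        backups.append(dst)
    return backups

if __name__ == "__main__":
    backup_configs(Path(
        r"C:\Program Files (x86)\Steam\steamapps\common"
        r"\Warhammer 40,000 DARKTIDE\bundle\application_settings"
    ))
```

If a change goes wrong, deleting the edited file and renaming the `.bak` copy back restores the stock behaviour (verifying game files through Steam also works).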

Within win32_settings.ini, towards the bottom, change the following values:

BEFORE

	streaming_buffer_size = 64
	streaming_texture_pool_size = 512

AFTER

	streaming_buffer_size = 0
	streaming_texture_pool_size = 0

Within settings_common.ini there are three sets of values that need changing; I will provide a before and after.

BEFORE

feedback_streamer_settings = {
	feedback_buffer_size = 4
	max_age_out_tiles_per_frame = 64
	max_streaming_tiles_per_frame = 64
	max_texture_pool_size = 1024
	max_write_feedback_threshold = 0.009
	min_write_feedback_threshold = 0.005
	staging_buffer_size = 4
	threaded_streamer = true
	tile_age_out_time_ms = 5000
	tile_staging_buffer_size = 4

mesh_streamer_settings = {
	disable = false
	eviction_timeout = 5
	frame_time_budget = 1
	io_buffer_budget = 10240
	limit = 700

streaming_buffer_size = 32
streaming_max_open_streams = 50
streaming_texture_pool_size = 400
surface_properties = "application_settings/global"
texture_streamer_settings = {
	streaming_buffer_size = 64
	streaming_texture_pool_size = 512

AFTER

feedback_streamer_settings = {
	feedback_buffer_size = 16
	max_age_out_tiles_per_frame = 16
	max_streaming_tiles_per_frame = 16
	max_texture_pool_size = 1024
	max_write_feedback_threshold = 0.009
	min_write_feedback_threshold = 0.005
	staging_buffer_size = 1
	threaded_streamer = true
	tile_age_out_time_ms = 5000
	tile_staging_buffer_size = 1

mesh_streamer_settings = {
	disable = false
	eviction_timeout = 5
	frame_time_budget = 1
	io_buffer_budget = 64
	limit = 700

streaming_buffer_size = 0
streaming_max_open_streams = 64
streaming_texture_pool_size = 0
surface_properties = "application_settings/global"
texture_streamer_settings = {
	streaming_buffer_size = 0
	streaming_texture_pool_size = 0


Platform (Required):

PC - Steam

[PC] PC Specifications (Optional):

Ryzen 7 9800X3D
RX 7900 XTX
32GB DDR5 6200MHz (tuned)


Unrelated to AMD, but I’m gonna try this on my 4060 laptop and get back to you on whether or not my performance improves on Nvidia setups.

Ok this post legit needs to be pinned

Tested on a Ryzen 7 5700X3D and 6700 XT (driver 25.4.1).
In-game video settings: 1080p resolution, FSR quality, all graphics at low settings except textures maxed out and anisotropic filtering at x16.

FPS before editing the .ini files

100-180+ FPS with regular frametime spikes dropping my FPS to the 50s

FPS after

Same average FPS of 100-180 FPS with no frametime spikes; I’ve yet to see the framerate drop below 90 FPS during gameplay in Auric Damnation difficulty.

wtf, this might be the first workaround that actually works outside of rolling back to driver 23.11.1


Yeah, this is what I’ve found as well.

I’m still not 100% done tweaking the settings, but this is the best the game has felt since 23.11.1.

Thanks for the feedback and specs list

I’ve tried this in conjunction with your two other threads (the past tweaks and the fullscreen adjustment) and I am seeing a clear improvement. The only downside I can notice is that there are indeed usually a few seconds of low-detail meshes before the proper ones load in, when I, say, bring out a weapon or when new assets get loaded in.

Specs: Ryzen 5 5600, RX 7600 8GB, 32GB 3600MHz, on a SATA3 Samsung 870 EVO.

The cosmetics screen doesn’t turn into a slideshow at least, but the meshes are distractingly low detail until the game gets around to loading stuff. The answer is here somewhere, but I can’t figure out exactly what it is yet.

Normal LOD:

Oh no:

Just reporting back on this. I’ve noticed that my overall performance has seemingly gone up; not entirely sure if it’s placebo or not, but I’ll take it.

Although, like @DashHandsome has commented, the LODs are a little out of whack and can be somewhat distracting.

A good benchmark is whether the cosmetics screen starts hitching.

I can get the cosmetics screen to not stutter for a few scrolls back and forth but eventually it starts to get weird again. Beefing with these two lines specifically:

	max_age_out_tiles_per_frame = 64
	max_streaming_tiles_per_frame = 64

Per frame, for a game with a framerate as wildly variable as this? Maybe I’m missing something, but I don’t know.
Also, because character portraits are rendered in real time, you can get them to use the low-LOD models if you get too freaky with your settings.

Anyway here’s a few more funny faces:
Calm, focused, and ready for anything


Mfw heresy

o7
Thanks for sharing this; with my 9800X3D it gets my GPU utilization stable and over 90%.

I have two PCs and both have 5090s.
CPUs: 9800X3D and 14900KS.
It is so strange to see that the 9800X3D’s GPU utilization is lower than the 14900KS’s.
With the 9800X3D, in most gameplay I get 75-90% utilization.
With the 14900KS it gets 90-100% utilization, and overall FPS and 1% lows are much, much better than on the 9800X3D.

I think the game’s stutters are related to AMD CPUs, I don’t know why. I hope the developers or AMD can do something about this.
Again, thanks for sharing these settings.

This isn’t a weird thing at all. The CPU cannot keep up with the game’s requests and that’s what causes stutters and FPS drops when you’re on an AMD CPU.

There are 2 things that AMD CPUs suck at.

  1. Memory speed and latency (Infinity fabric is shite)
  2. I/O

This game hits both of these really hard, and that is why you find the monolithic Intel CPUs are better for this game.

That’s the downside of chiplets. They are cheaper but for AMD, the infinity fabric speeds really REALLY hurt the CPU in certain scenarios.

I’m sure you’ve noticed that even though the average FPS on the 9800X3D is higher, the CPU will fluctuate more than your 14900KS. This is why.


Dear Vizra, thanks for replying.

I am sure my monolithic 14900KS gets better FPS with the stock MSI Ventus 5090 because its GPU utilization is higher; in the menu and in-game, it always gets way better GPU utilization.

On the other hand:
ASUS Crosshair Hero X870E
9800X3D, PBO +200MHz, Curve Optimizer -20
2x16GB (32GB) 6200MHz CL26, MCLK:UCLK 1:1, FCLK 2200
Astral LC 5090 stock

I am playing the game at 4K max with ray tracing on, balanced/quality DLSS, and frame generation x2.

If needed, I can make and share a video of the two systems’ performance differences. Without your changes it really fluctuates; sometimes the 70s, sometimes the 90s.

Thanks and regards.


Would this be something the devs can fix, or are we way past that? It feels a little weird that while the X3D CPUs tend to do well across the board, this game specifically seems to lean right on their weaknesses.

To me at least it looks more like a lack of optimisation rather than this game requiring specifically those strengths of the CPU.

Have you heard of the “AMDip”?

This is what you are experiencing firsthand and it is not a myth or a fanboi thing.

This is just the reality of AMD CPUs and one of their downsides.

This can be optimized around to an extent, but it will never be completely fixed as it is a hardware limitation. Some games run into it, some games don’t.

Basically the second that the CPU has to leave the Cache and reach out to system memory, you will get the AMDip. This is why vCache is so good on AMD CPUs, it minimizes the amount of times you have to reach out to system memory.

You can mitigate this somewhat by Overclocking + Tuning the RAM + Infinity fabric, but stability testing takes AGES (take it from me i’d know lol).

If you want the best .1% lows you want a 14900k with eCores disabled, overclocked, and with RAM as fast as you can get it and tuned to the limit. It takes ages to do and is really hard to cool but that is hands down the best gaming experience you can get in terms of framerate CONSISTENCY.

The CPU clock speeds aren’t the issue with AMD CPUs; it’s the .1% lows due to lower memory bandwidth and higher-than-ideal memory latency.

This may change with future generations but for now this is what we have got.

I actually run Curve Optimizer with a positive offset of 3 on all cores, as I found my whole PC was just snappier like that and my .1% lows improved a lot. It seems my CPU was technically stable, but not smooth. Try it and let me know your results.

If you have any questions about AMD CPU tuning let me know, I’ve spent a LOT of time overclocking over the years.

Hope this helps :slight_smile:

You don’t need to prove the FPS is worse with your 9800X3D, as this has always been the case.

I will say that your settings are really, really good. Are you sure your FCLK is stable? Because if it is, I am very jealous; that is a golden sample.

Hi again.
CPU SP113.
With tightened RAM clocks, legacy mode on, and Nitro 1-2-0 x8x8, the AIDA64 test gets a latency of 58-59 ns.
I tested it for more than 10-12 hours in Prime95 and Y-cruncher VT3, and while testing I was watching videos from the internet; the audio was fine and there was no tearing.
I think everything seems normal.

I love my 9800X3D; its power consumption is way better than the monolithic 14900KS. In other games the 9800X3D is fine, but in Darktide, which I am addicted to, it does not perform well and the fluctuation is so high.

Thanks again, regards.

PS: Even without any overclocking or tweaking, the game has the same fluctuations; for me it is high.

Well, you know better than me here. I used the Ryzen Master Curve Optimizer for now to drop my temps by like 10 degrees, and set my 4 sticks of RAM to 3600MHz, but that’s about it.

My main issue is: your tweaks DID have an effect, and while there’s a small visual impact at times, overall the experience is much better. But without them, low or high, max or minimum settings, the game would run into those stutters.

I do not think that’s a good sign, because a game should scale better, right? If I set it to low I give up the eye candy, but it SHOULD run better, yet here it doesn’t really. While I think you are right about the game preferring the monolithic Intel design, I still think something is very wrong with the game deep down.

The stutters, for example, don’t even happen in intense scenarios where the game has to render a lot of entities; they can even happen randomly while walking in some completely empty hallway. From my point of view this doesn’t make much sense; I am not a developer, but in so many other games you tend to associate a performance dip with actually tangible actions on screen.

I have to say I appreciate your efforts though. I do hope Fatshark eventually comes around to changing things so we do not have to dig through all these settings. And I’d rather not get a 14th gen and have to use a plane-engine-sized fan to cool it, then wake up with it burnt out like we saw a while back lol.

I tried these settings and they seem to be working. Before, my FPS varied between 50-300 FPS (5800X3D + 9070), but now it is 110-180 FPS (1440p) with no stuttering, just the occasional slow texture fill-in, so it is much more playable. This is with high textures, medium everything else, and no lens or bloom effects (they annoy me).

I also play on a laptop with an AMD 5800H processor and a 3070 GPU; it is of course on medium textures with low settings for everything else. It plays the game fine at 70-120 FPS (1080p), and I’ve never had to change any settings on it.

I don’t know enough to state whether the CPU has any effect, but my laptop with an AMD cpu and Nvidia GPU has played the game fine from the beginning.

A related problem is that the game’s shader cache builder won’t correctly generate PSOs, and that has been broken for a while.


On first boot it only creates shader_cache.hans, and the console log will say it can’t find shader_library.pso_lib or state_stream_library.pso_lib. The game will load what it can from the .hans file and then, after you close the game, save any PSOs it creates during runtime (the source of some stutter).

During startup on subsequent playthroughs you’ll see this line in the console log:

03:36:45.105 [d3d12 pipeline state] PSO cache loading finished, current fail rate 0/3590 (0%)

and at the end of the console log you’ll see something like this:

00:01:54.446 [d3d12 pipeline state] PSO fail rate 9/4436 (0%)
00:01:54.449 [d3d12 pipeline state] Serialized pipeline library

These examples are from two different logs; I’m not wiping my cache again to make a point.
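If you want to check your own logs for this without reading the whole file, the pipeline-state lines are easy to pull out. A minimal sketch, assuming a console log in the current folder (`pso_events` is my own helper; point it at one of the game’s actual console logs):

```python
# Scan a Darktide console log for the "[d3d12 pipeline state]" messages
# quoted above, to check whether the PSO cache loaded and its fail rate.
# pso_events is my own helper; "console.log" is a placeholder filename.
import re
from pathlib import Path

PSO_LINE = re.compile(r"\[d3d12 pipeline state\]\s*(.+)")

def pso_events(log_text: str) -> list[str]:
    """Return every pipeline-state message found in a console log."""
    return [m.group(1) for m in PSO_LINE.finditer(log_text)]

if __name__ == "__main__":
    log = Path("console.log").read_text(errors="replace")
    for event in pso_events(log):
        print(event)
```

A healthy run should show a "PSO cache loading finished" line with a low fail rate at startup and a "Serialized pipeline library" line at shutdown; missing `.pso_lib` complaints on every boot suggest the cache is being rebuilt each time.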

Link

I’m at work so I can’t try this at the moment, but AMD released an optional driver update stating it fixed poor game performance on the 9000 series. Has anyone been able to test this yet? Does it work for 6000/7000 series cards as well?