Talent survey about unused talents

When there is time, could we get an official unused-talent survey?
I'm sure Fatshark is aware of them, but maybe it would be a great help if they could get an overview of what should stay untouched and what could be changed without pissing people off or breaking already popular builds.

Seeing how there are still a lot of talents that are underused or straight-up unviable (see Zealot's "Suppress Pain" vs. "Flagellant", or Sienna's "Molten Skin"), I think the game is in a way better place now with all the classes after the Big Balance Beta, but there is still the issue of those unused talents.

They could set up a questionnaire (like those beta/update surveys) where we rate each talent row from 1 to 4, where 1 is "it's ****", 2 is "not great, but I don't mind",
3 is "leave it, it's fine" and 4 is "<3".
Example: my answer for Zealot's first row would be 3-4-2,
second row: 2-4-1.
That would end in a spreadsheet showing what's completely unused.
The idea is to give Fatshark an overview of what to leave alone, and to switch out or tweak the unused talents according to what's already there.

I know it's kind of unusual to balance things that way, and all changes can have unforeseen consequences, but I would hate to see an update change something that only a few complained about while most were happy with it.

Thanks for the best game ever. Can't wait for Winds of Magic.



Mixed-methods researcher here.

I think this could be better accomplished on the back end. I don’t believe the folks in the forums are a fair sampling of the general player audience, even more so when you consider which of us are opinionated/dedicated enough to participate in a survey.

Integrate this form of capture into a patch (if it isn’t there already). The capture should record:

  1. all of a person's talent assignments across all classes. I think it would be prudent to try to capture this data once per week, per user.
  2. number of unique players active in a week following a major patch or DLC release (here the goal is diversity).
  3. relative level of players active in this period, as determined by the difficulties played across total matches for the past week or so. Assign a level to each match (e.g. R = 1, V = 2, C = 3, L = 4), then sum and divide by total matches played. Once you've got effective levels, calculate the number of players active in the period per level range (i.e. 1.0-1.5, 1.5-2.5, …, 3.5-4.0).
  4. player character/class variation: here you're noting which characters/classes they play. Give each character and class a score between 0 and 1, with 1 denoting exclusive play (so if I only played Bardin, but played his classes almost equally, I would score something like Bardin 1.0 / 0.3 / 0.3 / 0.4).
  5. Play frequency in terms of hours active, number of matches attempted, and number of matches completed.
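Points 3 and 4 above are easy to make concrete. A minimal sketch in Python (the difficulty names, weights, and input shapes are my assumptions, mirroring the R/V/C/L mapping from point 3):

```python
from typing import Dict, List, Optional

# Assumed difficulty weights from point 3: Recruit=1, Veteran=2, Champion=3, Legend=4.
DIFFICULTY_WEIGHT: Dict[str, int] = {
    "Recruit": 1, "Veteran": 2, "Champion": 3, "Legend": 4,
}

def effective_level(match_difficulties: List[str]) -> Optional[float]:
    """Sum of per-match difficulty weights divided by total matches played."""
    if not match_difficulties:
        return None
    weights = [DIFFICULTY_WEIGHT[d] for d in match_difficulties]
    return sum(weights) / len(weights)

def variation_scores(matches_per_option: Dict[str, int]) -> Dict[str, float]:
    """Score each character (or class) by its share of matches; 1.0 = exclusive play."""
    total = sum(matches_per_option.values())
    return {name: count / total for name, count in matches_per_option.items()}

# Example: a player running mostly Legend with some Champion sits between 3 and 4.
lvl = effective_level(["Legend", "Champion", "Legend", "Legend"])  # 3.75
```

The same `variation_scores` helper works at both granularities: call it once with characters as keys, and once per character with that character's classes as keys.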

This would leave you with a large amount of data (i.e. a corpus).

From said corpus, establish some criteria for sampling. This would be another backend task: count the players in each group and, from that, derive an appropriate sample size for each group.

Now is where the fun begins.
For each level, grab a random sample of players within that level's range. Examine their character/class preference. There may be value in comparing talent configurations for highly active classes vs. inactive classes. My instinct is that you want to look at the populated talents for each (difficulty) level of play. Within each level, you want to look at talent allocation while factoring in character activity (i.e. how does this look for someone who always plays Sienna vs. someone who rarely plays Sienna).

This would yield a complicated picture: maybe some talents are universally ignored, in which case they should be dropped. But maybe some talents are ignored by high-level players yet often used by low-level players (these should probably stay). Maybe some talents are used by players who play the class infrequently (this warrants closer analysis).

My point is, a survey will give you an imperfect representation because of participation/selection bias. Snapshots will be totally honest (though the window of observation may lead you to faulty conclusions… then again, you could always take more phase portraits).

There's a lot of nuance to be examined here… and I would be down to talk about implementation further… but I think I'd need a Fatshark person to say "yeah… I'm into this" before I'd map out the study design any further.

They might be doing this already. I don't know. I think that (tacitly) what you're looking for is a publicly available data set so that the community can better make sense of itself.



I think they may already have access to at least part of that kind of data. All our builds and such are stored in their backend servers, and I have a feeling I’ve heard or read some dev’s comments about having access to some usage statistics.

Of course, neither one by itself will give any insight into the reasons why some Talents (or Properties, or Traits) are unused and some are used near-universally. That’d need a separate questionnaire, and would really be required to address the underlying issues instead of the symptoms.



You use the quantitative data to identify curious patterns, then from that data you develop targeted qualitative questionnaires. You make informed decisions from a synthesis of the two.

With that said, sometimes the quantitative data is unambiguous… though even then, that only speaks to the problem rather than the plausible solutions.


Thank you both for the good replies.

I've seen some posts about them working on Traits right now, so it could be some time before they address this.
