<your moba here> (DOTA 2) heroes embedding
5v5 matches
number of heroes in the pool = K
dimension of the embedding = E
- encode a hero as a one-hot vector = 1-of-K
- learn a (K, E) matrix (+ bias) to go from hero -> vector
  (notice that it can do set-of-heroes -> vector too)
- learn a logistic regression from the embeddings of both team1 and team2 to predict the winner, backpropagating through the embedding
- do stats and t-SNE plots of embeddings of single heroes or combinations (teams) of heroes (see the sketch after the training loop)
- ...
- PROFIT!!!
- /!\ may not work /!\
in Torch (~like):
require 'nn'
K = 42 -- number of heroes in the pool
E = 50 -- embedding dimension
-- one embedding, shared between the two teams (the clone ties weights and grads)
emb = nn.SparseLinear(K, E)
model = nn.Sequential()
  :add(nn.ParallelTable():add(emb):add(emb:clone('weight', 'bias', 'gradWeight', 'gradBias')))
  :add(nn.JoinTable(1))
  :add(nn.Linear(2 * E, 1))
  :add(nn.Sigmoid())
criterion = nn.BCECriterion()
learning_rate = 1e-2
for game in games() do
  -- team1 and team2 are both 5-of-K: 5x2 tensors of (heroIndex, 1) pairs, the
  -- sparse format nn.SparseLinear expects; result is a 1-element tensor holding
  -- 0 or 1 (first or second team won)
  team1, team2, result = game.get_teams()
  -- be sure to randomize team1/team2 as radiant/dire sides, otherwise it'll learn the side bias too ;-)
  err = criterion:forward(model:forward({team1, team2}), result)
  grad = criterion:backward(model.output, result)
  model:zeroGradParameters()
  model:backward({team1, team2}, grad)
  model:updateParameters(learning_rate)
end
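
The gist leaves the data plumbing abstract. Here's a minimal sketch of the surrounding pieces: building the sparse (index, value) team encoding that nn.SparseLinear expects, scoring a draft, and dumping the per-hero embeddings (the columns of emb.weight, which is E x K) for the stats/t-SNE step. Hero ids in 1..K, the example ids, and the CSV path are assumptions for illustration, not part of the original.

-- hedged sketch: encode a team of 5 hero ids (1..K) for nn.SparseLinear
function encode_team(heroIds) -- heroIds: e.g. {3, 17, 22, 35, 41} (hypothetical)
  local t = torch.Tensor(#heroIds, 2)
  for i, id in ipairs(heroIds) do
    t[i][1] = id -- feature index
    t[i][2] = 1  -- feature value (hero is present)
  end
  return t
end

-- predicted win probability for a hypothetical draft
p = model:forward({encode_team({3, 17, 22, 35, 41}),
                   encode_team({1, 8, 19, 27, 40})})[1]

-- per-hero embeddings: column k of emb.weight is the E-dim vector for hero k;
-- dump them to CSV and feed your favorite t-SNE implementation
f = io.open('hero_embeddings.csv', 'w')
for k = 1, K do
  local v = emb.weight:select(2, k)
  local row = {}
  for e = 1, E do row[e] = string.format('%.6f', v[e]) end
  f:write(table.concat(row, ',') .. '\n')
end
f:close()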
Not all games are created equal, so adding players' skill components and/or regressing on a score-based target (rather than just win/loss) should yield better results. Also, WSABIE! :)
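
One hedged way to act on the score idea: keep the same two-tower model and regress its sigmoid output on a squashed score differential instead of the binary outcome. The game.score_diff field and the /20 squashing scale are assumptions, not part of the original gist; WSABIE would instead use a ranking loss, which isn't sketched here.

-- minimal variation: regress on a squashed score differential
criterion = nn.MSECriterion()
for game in games() do
  team1, team2 = game.get_teams()
  -- map the raw score difference into (0, 1), the Sigmoid's output range
  target = torch.Tensor{1 / (1 + math.exp(-game.score_diff / 20))}
  err = criterion:forward(model:forward({team1, team2}), target)
  grad = criterion:backward(model.output, target)
  model:zeroGradParameters()
  model:backward({team1, team2}, grad)
  model:updateParameters(learning_rate)
end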