SDAGL: A New Metric to Measure Regional Performance
Standard Deviation of Average Game Length is a new technique that uses a team's average win and average loss times to chart how they perform among their regional opponents. It can also potentially be used to compare teams across regions as a season develops.
On January 20th, the NA LCS returns to the Rift for the beginning of the Spring Split. It was just a few short months ago that the World Championship ended with SK Telecom T1 retaking the highest honors for the third time in dramatic fashion, as Korean titans battled for the crown while the scattered field of North American, European, and Chinese contenders watched from afar. That the World Championship ended with a head-to-head of Korean super-teams may not have shocked many, but for hopeful fans across the world, their local heroes had represented the dreams of both fans and upcoming players alike.
Looking back, one might ask if it is possible to compare teams in a way that leaves less room for debate. After the strong performances from NA's CLG and the LPL's Royal Never Give Up at the spectacular Mid-Season Invitational, there were whispers among the masses that perhaps Korea's era of domination had come to an end. But it was not to be, and many were left questioning the quality of the analysis being perpetuated through channels such as Reddit or Twitch. This series of articles will attempt to find new ways to peer into the individual regions and gain insight into how teams have succeeded and struggled using an objective, mathematically sound method.
A Method to the Math-ness
Standard Deviation of Average Game Length (SDAGL) is a new technique that compares the variance in game length among a region's teams, converts it to a scale that standardizes results, and allows a new perspective on a team's performance relative to their regional opponents. In short, it is a measure of a team's dominance over their opponents. If a team consistently wins their games decisively, with proper decision-making, rotations, allocation of resources, and intelligent team-fighting, their average win time should be well below the league's average game length (AGL) as a whole. With the league game length as the midpoint, we can plot teams' performances relative to their particular average win length (AWL). In comparing teams within a region, teams who win faster overall will generally perform better than teams who take a long time to win. Teams with longer win times either had to mount a comeback, and thus struggled in the early game (whether due to misplays or issues in the draft), or they had issues closing out a game once they had a lead.
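To make the AWL-versus-AGL comparison concrete, here is a minimal sketch in Python. The game lengths and the team's win list are hypothetical, purely for illustration, not real LCS data:

```python
from statistics import mean

# Hypothetical game lengths (minutes) for every game in one league
league_game_lengths = [28.5, 31.0, 33.2, 36.1, 29.4, 34.8, 40.2, 27.9]
# One team's winning games only, also hypothetical
team_win_lengths = [28.5, 31.0, 29.4]

agl = mean(league_game_lengths)  # league Average Game Length
awl = mean(team_win_lengths)     # the team's Average Win Length

# An AWL well below the AGL suggests decisive, dominant wins
print(f"AGL={agl:.1f} min, AWL={awl:.1f} min, delta={awl - agl:+.1f}")
```

With these made-up numbers the team's AWL comes in about three minutes under the league average, the kind of gap the article treats as a sign of dominance.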
Once we have wins calculated, we repeat with the losses. In general, teams that are able to draw out their losses over a long period of time have a better chance to win games in which they have been trailing. This may sound simple, but some elaboration is necessary. If a team drafts a late-game scaling composition, there is a chance that they never make it into the late game and are shut out early, which represents either a failure to read the draft during picks/bans or a failure to correctly execute the composition. Additionally, a team may have a perfect draft, but early-game mistakes (a Baron steal, dropping dragons, losing first-brick gold, etc.) can put them at a severe gold disadvantage. Statistically speaking, it may appear near impossible for a team to overcome a large early gold deficit, but teams that constantly delay the enemy from taking towers, pushing objectives, or getting picks can stall the game until both teams approach six items. At that point, both teams return to a roughly 50/50 probability of winning (usually hinging on a 5v5 team fight). It can be seen, then, that losing, if it must happen, benefits teams who can consistently slow down the opponent in the process.
Taking these two points into consideration, we can plot losses on one axis and wins on the other to form a target matrix with four quadrants: Fast Wins and Fast Losses; Slow Wins and Slow Losses; Fast Wins and Slow Losses; and Slow Wins and Fast Losses. In order to compare performance across regions, we simply take each region's Average Game Length (AGL) as the mean, calculate the standard deviation, and convert every team's Average Win Length (AWL) and Average Loss Length (ALL) to standard deviations from that mean (SDAWL and SDALL, respectively). Although each region has a different meta-game and early-game tactics that may alter final game times drastically, expressing results in standard deviations lets us attempt to compare different regions on a consistent scale.
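The standardization step can be sketched as follows. The teams and game lengths here are invented placeholders; a real run would iterate over an actual match dataset:

```python
from statistics import mean, pstdev

# (team, won?, game length in minutes) -- illustrative records only
games = [
    ("IMT", True, 28.5), ("IMT", True, 31.0), ("IMT", False, 33.2),
    ("TSM", True, 36.1), ("TSM", False, 29.4), ("TSM", True, 34.8),
]

lengths = [length for _, _, length in games]
agl = mean(lengths)    # league Average Game Length
sd = pstdev(lengths)   # spread of game lengths across the league

def z_scores(team):
    """Return (SDAWL, SDALL): the team's average win and loss lengths,
    expressed in standard deviations from the league AGL."""
    wins = [l for t, won, l in games if t == team and won]
    losses = [l for t, won, l in games if t == team and not won]
    sdawl = (mean(wins) - agl) / sd if wins else None
    sdall = (mean(losses) - agl) / sd if losses else None
    return sdawl, sdall

sdawl, sdall = z_scores("IMT")  # negative SDAWL means faster-than-average wins
```

Because every team's AWL and ALL are re-expressed relative to its own league's mean and spread, a team half a deviation below its region's AGL is directly comparable to a team half a deviation below a different region's AGL, even if the two leagues' raw game times differ.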
Finally, we must interpret the results. As mentioned, Fast Wins and Slow Losses should generally yield better long-term results, especially in formats such as best-of-five championships. Conversely, teams that win slowly and lose quickly probably have fundamental concerns that must be addressed in their pick/bans, team compositions, or talent development. In the middle are teams that win slowly but draw out their losses and pick up some late-game 50/50 wins. These teams probably do not fare as well as teams that win quickly but also get routed in fast losses, though whether Slow/Slow or Fast/Fast is truly better is open to conjecture and likely depends on the format of the series being played.
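This quadrant reading amounts to a simple sign check on the standardized values. The `quadrant` helper below is a hypothetical illustration, assuming a negative SDAWL means faster-than-average wins and a positive SDALL means slower-than-average losses:

```python
def quadrant(sdawl: float, sdall: float) -> str:
    """Classify a team by the signs of its standardized win/loss lengths."""
    wins = "Fast Wins" if sdawl < 0 else "Slow Wins"
    losses = "Slow Losses" if sdall > 0 else "Fast Losses"
    return f"{wins} / {losses}"

# The strongest long-term profile per the discussion above:
print(quadrant(-0.9, 0.4))  # Fast Wins / Slow Losses
```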
North America has two tiers, Korea is murky and EU is boring
First, I want to extend my gratitude to Tim "Magic" Sevenhuysen, who operates Oracleselixir.com, for pulling data from Riot's API and making it public. Unfortunately, LPL data is not available at the moment. Let's get to the results.
We can see from this graph that NA has two separate groups: Immortals lead the statistical pack with both the fastest wins and the shortest losses. The rest of the top six, who all qualified for the Split playoffs, sit in the same area of the chart, albeit spread across three quadrants. The bottom four teams are clustered in the Fast Losses/Slow Wins quadrant.
In Europe, G2 were clearly ahead of the curve: they managed to draw out their losses almost a full standard deviation longer than the average EU LCS game while also having the fastest wins. Origen were a wild outlier, with incredibly long wins but losses that ended just barely faster than average. The rest of the league sat within half a deviation of the average, which makes detailed analysis difficult.
Korea follows the formula fairly closely. ROX are the only team in the quadrant corresponding to their first-place finish. SK Telecom had a rocky finish to their season and lost to KT Rolster in the finals, but you can see that during the regular split they came the closest to the Tigers' performance. Samsung Galaxy sit in the middle of the field but ended with spectacular Worlds and IEM Gyeonggi performances.
Finally, in the LMS, J Team, Flash Wolves, and ahq finished first, second, and third. Interestingly, Team Mist were dead last but still managed fast wins in their three victories on the season (3-25). Due to the low sample size, however, this must be considered an anomaly. Extreme Gamers finished second-to-last, which is much more clearly represented here.
Immortals Were Better than SK Telecom T1; Samsung Galaxy Worse than Fnatic
It may be easy to look at a quadrant, see that Team X has a better composite than Team Y in another region, and attempt to draw parallels. Keep in mind that teams' scores are reported only with respect to their own region's averages. A more useful approach is to compare a team to their regional field and then use international tournaments to draw intelligent comparisons. For example, at the recent IEM Gyeonggi tournament, Samsung Galaxy finished first overall, with Immortals and Team Liquid in 3rd/4th. This hints at how much stronger Korea is: a middle-of-the-pack LCK team outright beat the best statistical team in NA, as well as the top LMS team, J Team. With that in mind, let us take a look at the World Championship results:
This last chart indicates that almost all of the teams sit in the middle band. No team, except possibly the ROX Tigers, truly pulled away from the crowd. This may indicate that the meta was unstable and teams hadn't felt comfortable playing (or revealing) certain potentially dominant playstyles. The chart also averages over different game formats, from best-of-one through best-of-five, which I imagine is a significant confounding variable. Furthermore, teams may identify win conditions that are decidedly late-game (e.g. Elder Dragon) or draft compositions that are purely scaling. Each of these points could fill an entire article explaining its impact on the meta-game and is beyond the scope of this text.
In the upcoming season, major roster shuffling has potentially evened the international playing field. This method of analysis may not currently be useful in ranking regions; however, it gives teams an excellent starting point for gauging their own growth and performance against their competitors within their region. Next time, I will focus on developing a new way to measure the effectiveness of on-map gold in a player's hands.
-Christopher “Mod” Edwards
Disclaimer: Stats were obtained from http://oracleselixir.com/ with special thanks to Tim "Magic" Sevenhuysen