My Content
Jan 4 2018, 12:03 PM
1306 words

So I caught the tail end of an interesting discussion in Discord: why is the NSFL media payout system the way that it is? It started when DeathOnReddit was frustrated that posting his 1,200 word article as two 600 word articles would get him more money, and flowed naturally from there.

There's no doubt that 600 words is the optimal article length under the current format as you can see here:

You can think about it like this:
You are a very wordy and verbose NSFL player and have 60,000 words of media bottled up inside of you that you want to unleash upon the league. You're also a min-maxer and want to post those words in the most efficient way possible. You have many options:
1. Post 1 article that's 60,000 words long - $62 million
2. Post 2 articles that are 30,000 words long - $64 million
3. Post 3 articles that are 20,000 words long - $66 million
4. Post 6 articles that are 10,000 words long - $72 million
5. Post 12 articles that are 5,000 words long - $84 million
6. Post 15 articles that are 4,000 words long - $86.25 million
7. Post 20 articles that are 3,000 words long - $90 million
8. Post 30 articles that are 2,000 words long - $84 million
9. Post 60 articles that are 1,000 words long - $96 million
10. Post 100 articles that are 600 words long - $100 million

Among articles that don't go past the maximum bonus tier (5,000 words), the least efficient article tiers are 2,000 and 5,000 words, and 600 words is by far the most efficient: you would earn 19% more money for posting in 600 word increments than in 2,000 word increments.
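For the curious, the whole exercise can be reproduced with a short script. This is a sketch based on the totals above: the $1,000/word base and the 600/1,000/2,000 word bonuses come from the payout topic, but the bonus amounts I use for the 3,000-5,000 word tiers are reverse-engineered from the ten options listed and may not match the official table exactly.

```python
# Current NSFL media payout, as inferred from the option list above:
# $1,000 per word, plus a single bonus for the highest tier reached.
BONUS_TIERS = [  # (minimum words, bonus in $); 3k-5k bonuses are inferred
    (600, 400_000),
    (1_000, 600_000),
    (2_000, 800_000),
    (3_000, 1_500_000),
    (4_000, 1_750_000),
    (5_000, 2_000_000),  # highest tier; no bonus increases past this
]

def payout(words: int) -> int:
    """Total payout for one article: $1000/word base + highest tier bonus."""
    base = 1_000 * words
    bonus = max((b for t, b in BONUS_TIERS if words >= t), default=0)
    return base + bonus

# Reproduce the min-maxer's ten options: (articles, words each) -> total
options = [(1, 60_000), (2, 30_000), (3, 20_000), (6, 10_000), (12, 5_000),
           (15, 4_000), (20, 3_000), (30, 2_000), (60, 1_000), (100, 600)]
for n, w in options:
    print(f"{n:>3} x {w:>6} words -> ${n * payout(w) / 1e6:.2f}M")
```

Running this reproduces every figure in the list, including the $84M dip at 2,000 words and the $100M maximum at 600 words.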

If that's what the league wants, that's fine. I'm sure nobody logs onto the NSFL hoping to open up the media section and find a 6,000 word novel - but I'm not sure that 600 words is where we should be optimizing either. As it says in the Payout Structure topic (emphasis mine):
This would mean a 600 word article gets their $600k base and the $400k bonus for a total of $1mil.  However if someone writes two 300 word articles, they would only get $600k.  This is to encourage longer articles and hopefully not so many people just writing the basic article to hit the word count.

These incentives do matter. I took a look at the 10 most recent topics in the Graded Articles forum that had their word count easily displayed and just two of them eclipsed 2,000 words. Four were fewer than 1,000 and the remaining four were between 1,000 and 2,000. The average of those 10 was just over 1,200 words.

Comparing this to other sim leagues, the last 5 SHL and the last 5 SMJHL articles averaged 2,000 words in a system where you get a flat $100k for every 100 words you write (though good articles are usually subject to a 1.5-2x multiplier for quality). In the PBE, the last 10 articles averaged about 1,600 in a system very similar to the NSFL but without the weird "never write a 2,000 word article" twist as their 2,000 word bonus is $1m instead of $800k.

Again, if we as a league want short articles then mission accomplished. This isn't a criticism, per se, but we should be aware of what we're incentivizing. So, taking all that into consideration I crafted some alternate pay structures to encourage different article lengths in case we want to do that:

600 words
Week 6 power rankings - 770 words
Portland Pythons Week 5 and 6 Review - 653 words
Portland Pythons Week 3 and 4 Review - 608 words

Keep as is.

1,000 Words
Yeti New Co-GM - 1,031 words
D-Line Standouts: Week 6 - 1,156 words

600 words - $350k (down from $400k) - $1583/word
1,000 words - $700k (up from $600k) - $1700/word
2,000 words - $1m (up from $800k) - $1500/word
$200k increase in bonus for each tier thereafter.

1,200 Words
Cornerbacks : An Ongoing Analysis (1/14) - 1,218 words

600 words - $300k (down from $400k) - $1500/word
1,000 words - $600k (same) - $1600/word
1,200 words - $800k (new tier) - $1667/word
2,000 words - $1.2m (up from $800k) - $1600/word
$200k increase in bonus for each tier thereafter.

1,500 Words
Cornerbacks : An Ongoing Analysis (2/14) - 1,637 words

600 words - $300k (down from $400k) - $1500/word
1,000 words - $600k (same) - $1600/word
1,500 words - $1m (new tier) - $1667/word
2,000 words - $1.2m (up from $800k) - $1600/word
$200k increase in bonus for each tier thereafter.

2,000 Words
DSFL Positional Power Ratings (Offense) - 2,330 words
The Specialist: Issue 2 (DSFL, Vol. 1) - 2,202 words

600 words - $300k (down from $400k) - $1500/word
1,000 words - $600k (same) - $1600/word
2,000 words - $1.4m (up from $800k) - $1700/word
$200k increase in bonus for each tier thereafter.

This isn't a mutually exclusive situation. If you think that both 600 word articles and 2,000 word articles (for example) have a place in the NSFL then we could craft a system for that, too.

600 words - $400k (same) - $1667/word
1,000 words - $600k (same) - $1600/word
1,500 words - $900k (new tier) - $1600/word
2,000 words - $1.3m (up from $800k) - $1650/word
$200k increase in bonus for each tier thereafter.

You could do this for whatever lengths we as a league would prefer.
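One handy check when crafting these tables is to compute the effective $/word at each tier automatically rather than by hand. A small sketch using the hybrid structure above (the helper name and approach are mine):

```python
def dollars_per_word(words: int, bonus_tiers: dict[int, int]) -> float:
    """Effective $/word at a given length: $1000/word base + highest bonus."""
    bonus = max((b for t, b in bonus_tiers.items() if words >= t), default=0)
    return (1_000 * words + bonus) / words

# The "600 and 2,000 word" hybrid structure proposed above
hybrid = {600: 400_000, 1_000: 600_000, 1_500: 900_000, 2_000: 1_300_000}
for w in sorted(hybrid):
    print(f"{w:>5} words -> ${dollars_per_word(w, hybrid):,.0f}/word")
```

Swapping in any candidate bonus table makes it easy to verify that the tiers you want to incentivize actually sit at the top of the $/word curve.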

These suggestions are based on my own personal preferences and are premised on amending the current system rather than switching to one of the structures posed above:
1. Add another bonus tier between 1,000 and 2,000 words and, if you want to keep the number of tiers low and manageable, kill off the 5,000 word tier. I think 1,200 or 1,500 words make the most sense for this threshold.
2. Add another bonus tier at 2,500 words and, if you want to keep the number of tiers low and manageable, kill off the 4,000 word tier. I imagine the vast majority of articles never hit 3,000 words, and almost none hit 4 or 5k since there's no incentive to, which means those tiers are better served in the relevant range.
3. Flatten the $/word disparity between tiers. Set word counts that you want to incentivize at $1650-1700/word, the tiers around them at $1600/word, word counts that you want to disincentivize at $1500/word, and have the rest somewhere in between. This is the basic pattern I followed in crafting the structures above. The whiplash of going from $1600 to $1400 to $1500 in successive tiers sets up weird incentives.
4. Alternatively, scrap the ax+b (where a is the base $1000/word, x is the word count, and b is the tiered bonus) payout system altogether and go to a simple a*x system where the $/word payout changes depending on your word count. Example:
Tier 1: 0-599 words - $1000/word
Tier 2: 600-999 words - $1600/word
Tier 3: 1000-1499 words - $1625/word
Tier 4: 1500-1999 words - $1650/word
Tier 5: 2000-2499 words - $1675/word
Tier 6: 2500+ words - $1700/word
This would get rid of the weird mechanic where typing past a word count threshold feels like wasted effort, since each successive word makes you less and less efficient (which is exactly what it's doing to me right now). Naturally, it would also incentivize really long articles, so I would suggest bringing the top tier down a bit to prevent novels. Unless you're into that sort of thing.
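That flat-rate alternative is easy to express directly; a sketch of the tier table above:

```python
def flat_payout(words: int) -> int:
    """Pay a single $/word rate, chosen by which bracket the article lands in."""
    rates = [  # (minimum words, $/word), checked highest bracket first
        (2_500, 1_700),
        (2_000, 1_675),
        (1_500, 1_650),
        (1_000, 1_625),
        (600, 1_600),
        (0, 1_000),
    ]
    rate = next(r for t, r in rates if words >= t)
    return rate * words

print(flat_payout(599))    # 599 words at $1000/word
print(flat_payout(600))    # crossing into the $1600/word bracket
print(flat_payout(2_500))  # top bracket, $1700/word
```

Note the design consequence: thresholds still matter (599 words pays $599k while 600 pays $960k), but once you're past a threshold every additional word keeps its full value instead of dragging down your efficiency.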

What say you, dear reader? When you open an article how big does the word count have to be to make you think "ugh" and back out rather than brave the wall of text? What word counts are so small it makes you wonder "why would they even bother posting this?"

Fuck I should've posted this as 2 articles to get that extra $200k.

Nov 15 2017, 02:19 PM
I'm posting this in the hopes of getting some feedback to improve the methodology so if you have any suggestions or questions I'd love to hear them and if you see anything that looks glaringly wrong point it out because there is a non-zero chance that I fucked up somewhere along the line. These are far from being finished (see the last section for some of the flaws) so don't get too butthurt if you think your team is lower than it should be, etc.

Alright so I've been working through historical ELO Ratings for the NSFL and I've settled on these parameters for now:
  • 1500 ELO will be average and each team starts out there, including expansion teams
  • Beating a better team will increase your rating by more than beating a worse team (ex: in Week 14 last season ARI beat LVL 45-0 but only gained 3 ELO points, in the first playoff round they beat OCO 27-7 and gained 16)
  • Margin of victory matters but has diminishing returns (ex: in Week 6 last season BAL was 104 ELO points better than SJS, beat them 20-17, and gained 7 ELO points; 2 weeks later PHI was 102 ELO points better than COL, beat them 40-20, and gained 26 ELO points)
  • Preseason games don't count and playoff games are weighted the same as regular season games
  • All games are zero-sum: if one team goes up 20 points, the other team goes down 20. This means that in any week where every team plays, the average ELO stays at 1500 and doesn't fluctuate.
  • 25% season-to-season regression (that is, the start of season ELO Rating is 75% of your previous end of season ELO Rating and 25% average)
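The bullets above describe a fairly standard Elo scheme. Here's a minimal sketch of an update rule consistent with them; the K-factor of 20 and the logarithmic margin-of-victory multiplier are my own assumptions for illustration, not the parameters actually used for these ratings.

```python
import math

def elo_update(r_winner: float, r_loser: float, margin: int,
               k: float = 20.0) -> float:
    """Points transferred from loser to winner (zero-sum by construction).

    Beating a stronger opponent transfers more points (via the expected-win
    probability), and margin of victory helps with diminishing returns
    (via the log multiplier).
    """
    # Winner's pre-game win probability from the rating gap
    expected = 1 / (1 + 10 ** ((r_loser - r_winner) / 400))
    mov_mult = math.log(margin + 1)  # diminishing returns on blowouts
    return k * mov_mult * (1 - expected)

# A ~104-point favorite winning by 3 transfers fewer points than a
# ~102-point favorite winning by 20, mirroring the BAL/PHI example above:
print(elo_update(1552, 1448, 3))
print(elo_update(1551, 1449, 20))
```

With this shape, an upset by a big underdog moves the ratings far more than a routine win by a favorite, which matches the ARI/LVL versus ARI/OCO example in the bullets.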

The line graph looked ugly as sin when I had bye weeks as breaks in the lines so the flat parts of the lines are typically when teams aren't playing. (PS: I hate whoever's idea it was to have bye weeks in Season 2)
S1W1 = Season 1 Week 1
S1P1 = Season 1 Playoff Round 1
Considering this is just preliminary I didn't fancy up the graph too much (x axis labels are uggo and I'm going to emphasize the demarcation between seasons more in the final version).

Current ELO Ratings:
1. Arizona - 1771
2. Orange County - 1628
3. Baltimore - 1571
4. Yellowknife - 1499
5. San Jose - 1490
6. Philadelphia - 1472
7. Las Vegas - 1329
8. Colorado - 1238

Some fun facts (keep in mind this is based on preliminary numbers):
  • Biggest ELO gain to date: Season 3 Week 5 when San Jose gained 80 points after beating Colorado 40-0
  • Biggest underdog to win: Season 3 Week 12 when Baltimore beat Arizona 26-23 with a ~19% win probability
  • Baltimore has the most wins as an underdog with 16
  • Arizona has won their only 2 games played as an underdog: Season 1 Week 9 against Orange County 19-7 and Season 1 Week 12 against Orange County 23-3
  • Las Vegas has been a favorite in just 5 of their 33 games played (15.15%), San Jose is just ahead with 7 in 46 (15.22%)
  • Yellowknife has lost as a favorite 16 times, most in the NSFL
  • Favorites overall are 115-71 (61.83%)
  • Yellowknife has been the steadiest team, never dipping below 1470 ELO (2nd highest floor behind ARI's 1501) and maxing out at 1576 ELO
  • Colorado has fluctuated the most, recording the 3rd highest peak at 1593 ELO (behind ARI's 1828 and OCO's 1670) as well as the lowest valley at 1238 ELO
  • San Jose has the lowest peak ELO of any team, maxing out at 1503 ELO in Week 2 of Season 1 and nearly matching that with 1501 in Week 4 of Season 2
Going forward some things I'm going to look at in an effort to improve these are:
  • Introduce home field advantage: this completely slipped my mind until I was typing this up, so I'm posting all of this without taking home field advantage into account, which is a glaring oversight. Luckily, Home/Away shouldn't be hard to pull from the data, so it's just a matter of figuring out exactly how much home field is worth
  • Test the variables to make the ratings as predictive as possible (specifically how swingy the ratings are from game to game - don't want them too static but also don't want them to overreact)
  • Perhaps weight playoff games more than regular season games (I'm hesitant to do this because a team that makes the Ultimus and loses shouldn't have their rating crater below teams that didn't make the playoffs)
  • Whatever other ideas y'all may have

Oct 20 2017, 01:59 PM
So in a follow up to the season-long ANY/A I did a few days ago I broke down the individual games:

Top 5 ANY/A performances:
1. Chris Orosz, week 13 against the Legion (nearly 3 standard deviations above average)
2. Mike Boss, week 6 against the Legion.
3. Mike Boss, week 8 against the Hawks.
4. Ethan Hunt, week 8 against the Legion.
5. Josh Bercovici, week 10 against the Sabrecats.

League averages:
ANY/A - 4.75
AY/A - 6.18
NY/A - 5.45

Percentage of games above average (ANY/A):
King Bronko, 78.57%
Mike Boss, 71.43%
Chris Orosz, 64.29%
Scrub Kyubee, 64.29%
Ethan Hunt, 57.14%
Clifford Rove, 42.86%
Josh Bercovici, 35.71%
Nicholas Pierno, 14.29%

Percentage of games at least 1 standard deviation above average (7.96+ ANY/A):
Mike Boss, 21.43%
King Bronko, 21.43%
Chris Orosz, 14.29%
Scrub Kyubee, 14.29%
Ethan Hunt, 7.14%
Josh Bercovici, 7.14%
Nicholas Pierno, 7.14%
Clifford Rove, 0.00%

Percentage of games at least 1 standard deviation below average (<1.53 ANY/A):
King Bronko, 0.00%
Chris Orosz, 0.00%
Scrub Kyubee, 0.00%
Mike Boss, 7.14%
Ethan Hunt, 14.29%
Josh Bercovici, 14.29%
Clifford Rove, 21.43%
Nicholas Pierno, 57.14%

Sometime this weekend I'll flip this data around and see which defenses allowed the best ANY/A performances against them and which forced quarterbacks into bad performances. I've got a whole bunch of trivia like the above lists that I can pull now so let me know if there's anything else you'd like to see.
Oct 17 2017, 12:54 AM
2060 words, typed it up at 3AM like a moron so hopefully no dumb mistakes

As ANY/A has come into vogue among NFL advanced stats nerds as a way of measuring quarterback performance, I thought it would be a good entry point for adapting some of these measures to the NSFL: it's a straightforward concept (yards per attempt with some added flourish) and it's relevant to the most glamorous position in football. I'm going to start out with an explanation of the statistic as background (what is it and why should it be used?); if you're not interested in that, you can click here to go to the accompanying statistical analysis topic that simply has the tables and a few words. For the rest of you liars, you may proceed.

What is ANY/A?

ANY/A stands for Adjusted Net Yards per Attempt. It began as an attempt to end an age-old bar argument: Quarterback A throws for 220 yards on 30 attempts with 3 TDs and 0 INTs while Quarterback B throws for 400 yards on 50 attempts with 5 TDs and 2 INTs - who had the better performance? In order to answer this you must dovetail the various facets of quarterback play (yardage, touchdowns, interceptions, and so on) into one scale. How many more yards does a player have to throw for to make up for one fewer touchdown? How many touchdowns does a player have to throw to make up for one more interception? Most people don't systematically break it down into actual concrete and consistent values but rather use their own personal heuristics that may shift slightly from debate to debate. We ain't about that life, though.

The origins of ANY/A, like those of many advanced stats, can be found in the 1988 book The Hidden Game of Football by Bob Carroll, Pete Palmer, and John Thorn, in which they determined that, in terms of their effect on the game, touchdowns were worth about 10 yards (since revised to 20) and interceptions were worth about -45 yards. They called their stat (which they touted as the "new passer rating") Adjusted Yards per Attempt, or AY/A, and in full it was as follows:

AY/A = (PassYards + 10*PassTDs - 45*INTs)/Attempts

So for the hypothetical stat lines above it would work out to:
QBA's AY/A = (220 + 10*3 - 45*0)/30
= 8.3

QBB's AY/A = (400 + 10*5 - 45*2)/50
= 7.2

Since then it has been tweaked (this 2008 article is what changed the touchdown value from 10 to 20), and the only difference between AY/A and ANY/A is the addition of another component indicative of a quarterback's performance: sacks. Sack yards are subtracted from the passing yards total (the premise being that the quarterback "lost" those yards in a passing situation) and the denominator changes from pass attempts to drop backs (pass attempts plus sacks).

The current ANY/A formula in use around the nerdosphere is therefore as follows:

ANY/A = (PassYards + 20*PassTDs - 45*INTs - SackYards)/(Attempts + Sacks)
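Both formulas are simple enough to express in a few lines. Here they are applied to the bar-argument stat lines from earlier (a sketch in Python, since the post itself doesn't include code):

```python
def ay_a(pass_yards: int, tds: int, ints: int, attempts: int) -> float:
    """Adjusted Yards per Attempt (Hidden Game of Football, original weights)."""
    return (pass_yards + 10 * tds - 45 * ints) / attempts

def any_a(pass_yards: int, tds: int, ints: int,
          sack_yards: int, attempts: int, sacks: int) -> float:
    """Adjusted Net Yards per Attempt (modern 20-yard TD weight, sacks included)."""
    return (pass_yards + 20 * tds - 45 * ints - sack_yards) / (attempts + sacks)

# The bar argument: QB A (220 yds, 30 att, 3 TD, 0 INT)
#              vs.  QB B (400 yds, 50 att, 5 TD, 2 INT)
print(round(ay_a(220, 3, 0, 30), 1))  # QB A: 8.3
print(round(ay_a(400, 5, 2, 50), 1))  # QB B: 7.2
```

QB A comes out ahead despite throwing for 180 fewer yards, which is exactly the kind of disagreement with raw counting stats that makes the metric useful.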

Pro Football Reference has a great database of historical NFL ANY/A performances. Here are the best single seasons by this measure with 2004 Peyton Manning topping the list, here are the quarterbacks with the best career ANY/A with Aaron Rodgers currently topping this list, and here you can find the leaders from the 2016 season which was the first to ever pit the top 2 QBs by ANY/A in the Super Bowl. Virtually any article you read about quarterbacks on Football Outsiders, Football Perspective, the Pro Football Reference blog, or your NFL advanced stats site of choice will include mention of ANY/A these days. It's even been adapted into a team stat (a team's ANY/A compared to the ANY/A that team allows to opponents).

What will this tell us about the NSFL?

Honestly I'm not sure. At a bare minimum it gives us a rate-based way to measure quarterback performance - that is, highlighting quarterbacks who do more for their team on fewer drop backs instead of simply those who play in pass-heavy offenses. There are critical differences between the NSFL sim engine and real life, obviously, and I don't have confidence that those valuations of touchdowns and interceptions will hold up. I'm not sure whether ANY/A will overvalue or undervalue them but it strikes me as unlikely that it will translate seamlessly across. The big question to me is which of ANY/A, AY/A, and NY/A - all variants of the same basic concept - will fit this league best. For the purposes of this post I've used ANY/A but if you, dear reader, think that those touchdown and interception valuations are way off then you should expect NY/A to be more indicative of a good quarterback performance than ANY/A. You can find those numbers in the data topic.

While not its main purpose, some value of ANY/A is that it's a more predictive passing statistic than conventional stats like yards, yards per attempt, QBR, etc. That is, a good passing statistic will capture underlying indicators rather than the random fluctuations different quarterbacks see on a week-to-week and sometimes season-to-season basis and can be used to identify quarterbacks that are over-performing or under-performing their talent. For a better explanation on predictive and explanatory passing statistics than I can provide, you can check out this article.

The biggest benefit from this may not even be a direct use of ANY/A but a simple by-product as there were some interesting things in the sack data I noticed as I was collecting it. Interesting to me, at least. More on that in the previously linked accompanying statistical analysis article.

Stop milking word count and get on with it already

For the most part the components of ANY/A are easy to find in the sim index. Attempts, passing yards, touchdowns, and interceptions are all hallmark counting stats for quarterbacks and are prominently displayed and easily found. Sacks are not listed (at least as far as I know) by quarterback, which wouldn't be a problem if I could grab them from the offensive line stats but alas not every sack taken is assigned to someone.

So compared to the traditional quarterback stats, collecting sack yardage and sacks taken was a bit more involved. Since I couldn't find them listed anywhere I went through each game's play by play and recorded the lost yards for each sack (throwing out sacks taken by the two backup quarterbacks who won't qualify). This was cancerous enough to make me think about just scrapping the idea of ANY/A and simply going with AY/A but not cancerous enough for me to whip up a scraper. If I go back and do historical seasons or continue this going forward I'll definitely be using a scraper to make it easier but when I start out with a project I like to be a bit more hands on to get a feel for things.

In the end, when I was done whining I had a nice looking table like this:

Unsurprisingly, Mike Boss of the Orange County Otters comes out on top following an absolutely stellar season. Somewhat more surprising is that the gap between him and his competitors closed significantly after accounting for drop backs and sack yardage. To illustrate this:

In pure yardage he was one and a half standard deviations (!!) above second place Ethan Hunt of San Jose (and 1.713 above average).
In touchdowns he was a quarter of a standard deviation above second place Chris Orosz of Yellowknife (1.213 above average).
But he was just 1/30th of a standard deviation above second place King Bronko of Arizona in ANY/A (0.954 above average).
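The z-score bookkeeping behind these comparisons is straightforward. A sketch with made-up ANY/A values, since the actual table isn't reproduced in this text (whether population or sample standard deviation is the right choice here is my assumption, not stated in the post):

```python
def z_scores(values: list[float]) -> list[float]:
    """Standard deviations above/below the (population) mean for each value."""
    mean = sum(values) / len(values)
    std = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
    return [(v - mean) / std for v in values]

# Hypothetical ANY/A values for eight qualifying quarterbacks
sample = [6.3, 5.1, 4.9, 4.6, 4.4, 4.1, 3.9, 2.8]
print([round(z, 3) for z in z_scores(sample)])
```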

Z-scores (standard deviations above or below average) of all qualifying quarterbacks:

The biggest beneficiaries of these adjustments were King Bronko and Scrub Kyubee of Baltimore. Bronko rose from 4th in yardage and 3rd in touchdowns to a 2nd place in ANY/A that nearly entirely erased a 1,000 yard gap while Kyubee separated himself from a pack of quarterbacks with stats around league average into a clear 4th place position. Both of these quarterbacks had fewer than 550 drop backs (548 and 540 respectively, only Pierno had fewer among qualifying quarterbacks) which helped them a lot in a rate-based stat like this, especially in comparison to Boss's 685 drop backs.

Aside from Boss, Clifford Rove of Philadelphia was hurt the most by the adjustments. While on paper his 3,461 yards and 20 touchdowns look very serviceable, right around league average as a young player, ANY/A shows us that those numbers were buoyed by taking nearly 600 drop backs - in addition to those 24 interceptions being problematic in any analysis. He certainly has time to improve going forward, though, if he can work on mitigating those two factors. With league-average interceptions, sacks, and sack yards he can close nearly half the gap between him and Chris Orosz.

What's next?

As is usually the case with data analysis like this, more questions are asked than answered. First and foremost is the ANY/A scale. 6.3 ANY/A doesn't actually mean anything except in relation to other ANY/A numbers. What I mean by that is that in this season of the NSFL a 6.3 was spectacular, but in the NFL a 6.3 is not very good: a quarterback with Mike Boss's stat line would've ranked 17th of 32 in the NFL last season, right below Philip Rivers and barely edging out Ryan Tannehill. Going forward I'd like to add the previous seasons of data to get a better idea of what an "elite" ANY/A number is for the NSFL. Was this an all-time great season by Boss or simply a great season? Because of how much more common sacks are in this sim engine we'll likely never see seasons that are typically considered elite in the NFL (9+ ANY/A), but ultimately that doesn't matter at all. With several seasons' worth of data we'll get a more complete picture of how well this statistic fits into this sim engine, too.

Further down the road I think it'd be interesting to revisit the formula and try to estimate how much touchdowns and interceptions are worth in the NSFL. As of now I don't have a plan of attack for that and am open to ideas but if it's not too complicated I think that'd be interesting to see.

Another question I have is whether sacks should even be included. They're included in the NFL formula because there's some evidence that suggests sacks are - contrary to popular belief - more on the quarterback and less on the offensive line and as such should be reflected in a measurement of the quarterback's performance. If that's not true, or even less true, in the sim than in real life then perhaps the easier AY/A is the way to go. On the other hand if that's more true in the sim than in real life then perhaps sack yards need to be adjusted to reflect their value. I'm not sure how I'd go about trying to figure that out but it's a thought at least.

Finally, I think it'd be interesting to look at the QB attributes to see what correlations there are between them and ANY/A that may be understated in comparisons with YPA, passing yards, or touchdowns. Definitely a long shot to find anything there but it's always fun to look.

Plus, there's all the stuff about sacks from the other topic. Why are nearly three quarters of all sacks either 6 or 7 yard losses? Why are there more 5 yard losses than 8 yard losses? Why are there more 11 yard losses than 8-10 yard losses combined? Is it because sacks are more prevalent in the sim so a greater percentage of them occur at the quarterback's drop back depth? Is there a scramble mechanic in the sim? There almost certainly is since there were two sacks with 0 yards lost. If so, is it working as intended? I would expect a much more even distribution of yards lost in a sim with a robust scrambling mechanic. Does it have something to do with the way we update defensive players, offensive linemen, and quarterbacks? Will we see this same type of distribution in 20 seasons? Or am I overlooking a perfectly reasonable explanation because it's nearly 4 AM and I'm tired? It's entirely possible that this issue has been hashed out and maybe even resolved, as well.
