Recent Posts

Eric Seufert’s Best F2P Blog Post Isn’t About F2P

Everyone’s favorite former Rovio employee is a prolific writer on F2P games; the closest thing we have to a Fukuyama. Seufert has covered a range of topics, but none more important than internal organization.

Seufert argues for a number of institutional policies that should surround analysts within an organization. Frequently, analytics and data are as much about the appearance of sophistication as they are about actual value added. This need not be the case. The confusion arises over where the value of data lies. Perhaps ironically, data’s value doesn’t lie in the data, but rather in the data analyst.

In most organizations, analytics reports to product teams, which Eric argues is a mistake. Product managers often face the principal-agent problem: their incentives and the company’s do not align. Product managers want to successfully manage products and will present a narrative that they are doing so. This is inefficient for companies, which want to assess the true performance and trajectory of a portfolio. When an analyst’s career path depends on a product manager, their narratives will often match. With organizational independence from product teams, analysts’ incentives align more closely with the company’s, producing more objective analysis.

Real analyst value is not just accountability watchdogging; it revolves around the ability to drive product roadmaps. At its highest order, analytics is a forward-looking discipline, not a backward-looking one. By experimenting and studying human behavior, analysts find levers that elicit certain responses, which creates opportunities to exploit those levers. Do currency pinches increase monetization? Are new gacha characters or new levels driving revenue? Should we invest more in reducing load times or in UI changes? Through theory-driven empirical investigation, analysts can move companies toward better outcomes than competitors. If organizations don’t allow analysts to pursue these questions, analysts become cheerleaders for product teams. On the other hand, if first-order information (retention rate, ARPU) is not accessible or automated, analysts will forever be running the hamster wheel of reporting. This is one of the more overlooked points Eric argues for.

I think this suggests a dual mandate for analysts: (1) holding features accountable and (2) determining which features are worth developing. This creates a natural tension: analysts must play watchdog to product managers and partner to them at the same time. It is the duty of good analysts to navigate this relationship successfully.

F2P Demand Curves Are Weird, Just Ask Levitt
A paradigm forever changed, one man carries a dying tradition.


Steve Levitt, the last price theory samurai, and John List, future Nobel Prize winner, have published a paper on free-to-play economics.

In a textbook neoclassical experiment, Levitt alters the quantity of Candy Crush hard currency offered at a given price point. While economists generally think of price variation as the way to derive demand curves, quantity variation is just as legitimate a tool.

Despite a sample size of over 15 million and a wide range of quantity-discount convexity (80% variation across variants), all quantity-discounting schemes produced similar revenue. Levitt concludes by commenting,

“…varying quantity discounts across an extremely wide range had almost no profit impact in the short term.”

The interesting and little-explored result indicates that,

…almost all of the impact of the price changes was among those already making a purchase; radical price reductions induced almost no new customers to buy…

This suggests free-to-play games are made up of two groups of users: purchasers and non-purchasers. The decision to become a customer is exogenous, i.e. it is made outside the game, and there is no ability to convert non-customers into customers. Put another way, non-customers are perfectly price inelastic and customers are perfectly price elastic. Indeed, industry research corroborates this.2

    Interesting, but is it actionable?

Were this to hold, it would suggest a number of results. The first is that product managers’ ability to monetize non-customers (~99% of users) will not come from IAP, but from other forms of monetization. This may help explain why F2P ad revenue and incentivized video continue to show YoY growth.3 4

Furthermore, product managers should consider experiments that explore the revenue-maximizing ad frequency. Given the trade-off between retention and ad frequency, there exists an optimal ad-frequency point.
With little chance of non-customers converting to customers, product managers should worry less about increased ad frequency turning off potential customers.
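To make the trade-off concrete, here is a minimal sketch in Python. Every number in it (the per-impression yield, the baseline DAU, the exponential retention penalty) is an illustrative assumption, not a measured value; the point is only that revenue per impression times a decaying audience produces an interior maximum.

```python
import numpy as np

ad_revenue_per_impression = 0.01   # assumed yield per ad shown
base_dau = 100_000                 # assumed audience at zero ads

def retention_multiplier(ads_per_day):
    # Assumed exponential retention penalty per additional daily ad.
    return np.exp(-0.08 * ads_per_day)

def daily_ad_revenue(ads_per_day):
    # Audience shrinks as frequency rises; revenue is audience x ads x yield.
    dau = base_dau * retention_multiplier(ads_per_day)
    return dau * ads_per_day * ad_revenue_per_impression

freqs = np.arange(0, 31)
revenues = [daily_ad_revenue(f) for f in freqs]
best = freqs[int(np.argmax(revenues))]
print(f"revenue-maximizing frequency (under these assumptions): {best} ads/day")
```

Under a real retention curve the shape of the penalty function is an empirical question; the experiment is to map it, not to trust a formula like the one above.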

The final result suggests that the ROI of raising the LTV of existing customers exceeds that of raising the new-customer creation rate. Product managers should develop roadmaps accordingly.

  2. http://venturebeat.com/2014/02/26/only-0-15-of-mobile-gamers-account-for-50-percent-of-all-in-game-revenue-exclusive/; http://www.gamesindustry.biz/articles/2013-08-22-two-thirds-of-whales-are-males
  3. https://www.chartboost.com/blog/2015/04/mr-jump-20-000-day-mobile-game-ads/
  4. http://www.pocketgamer.biz/news/63994/giftgamings-lift-deuls-daily-revenues-by-up-to-38/
How to Measure Whales
"$10.00 LTV am I right?"

“$10.00 LTV am I right?”

You’ve soft-launched your game, done a UA push, and a string of hope appears. Against all odds, a dominant cohorted ARPU curve emerges! Is this an anomaly, or have you caught a whale?

The first way to examine this is to perform cointegration tests between the cohorted ARPU curves, testing for statistical significance. It may be true that the difference between the curves is real, but that doesn’t tell you whether you’ve caught a whale.
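As a rough sketch, statsmodels ships an Engle-Granger cointegration test that can be pointed at two cohorted ARPU series. The file names and the assumption of two equal-length daily series are mine, not a prescription:

```python
import numpy as np
from statsmodels.tsa.stattools import coint

# Hypothetical files: one daily cumulative-ARPU series per cohort.
arpu_cohort_a = np.loadtxt("cohort_a_arpu.csv")
arpu_cohort_b = np.loadtxt("cohort_b_arpu.csv")

# Low p-value: the curves share a stable long-run relationship
# rather than drifting apart by chance.
t_stat, p_value, _ = coint(arpu_cohort_a, arpu_cohort_b)
print(f"cointegration t-stat: {t_stat:.2f}, p-value: {p_value:.3f}")
```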

In 1905, Max O. Lorenz developed a method for measuring relative inequality between nations known as the Lorenz curve.

Just keep saying what % of the population owns what % of the wealth and it’ll make sense.

The F2P application is to define wealth as revenue (either at the daily or game level) and the population as players. By measuring how bent inward a cohort’s Lorenz curve is relative to other cohorts’, we can measure the ‘whali-ness’™ of different cohorts. Even better, this reduces to a single metric: the Gini coefficient. A Gini coefficient of zero indicates a perfectly equal distribution of income: 10% of the population owns 10% of the wealth, 20% owns 20%, and so on. A Gini coefficient of 1 is the exact opposite: a single person owns 100% of the wealth.
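Computing the Gini coefficient from raw per-player revenue takes only a few lines. A minimal sketch in Python, using the standard sorted-array identity; the cohorts at the bottom are made-up examples:

```python
import numpy as np

def gini(spend):
    """Gini coefficient of per-player revenue (0 = equal, 1 = one payer owns all)."""
    x = np.sort(np.asarray(spend, dtype=float))
    n = x.size
    i = np.arange(1, n + 1)
    # Identity for sorted data: G = sum((2i - n - 1) * x_i) / (n * sum(x))
    return np.sum((2 * i - n - 1) * x) / (n * np.sum(x))

# A cohort where a handful of whales carry revenue vs. an equal-spend cohort.
whale_cohort = [0] * 95 + [1, 2, 5, 40, 200]
print(gini(whale_cohort))   # close to 1: whale-driven
print(gini([1.0] * 100))    # exactly 0: perfectly equal
```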

This translates to what % of players are responsible for what % of the revenue. Measuring Gini coefficients across games rather than cohorts gives more insight into how a particular game monetizes: whether it is whale-, dolphin-, or minnow-driven.

Actionable insights might include how effective introducing ads could be. A high Gini coefficient (very few players are responsible for most revenue) might mean there’s a more fertile base of non-payers to monetize through ads.

The main insight, however, is further understanding. Success can come about in drastically different ways in free-to-play games, and the Gini coefficient is a simple way to measure that.

The Content Problem and the Death of Level Designers

 


Here we see the content problem in its natural habitat

F2P is as much a design choice as it is a business choice. Given this, F2P has its own set of design challenges, among which is the content problem.

Developers will only continue making additional content as long as the benefits are greater than the costs. That is, content gets made when

expected marginal revenue from content(t) > development cost(t) + opportunity cost(t)

where development cost(t) is the cumulative development cost by the time of release (t). But if

User Acquisition Rate (UAR) < Churn Rate (CR)

there’s a shrinking pool of buyers, a gap that only widens at t+1. This is the essence of the content problem: how do we create content fast enough to curtail churn while minimizing development costs?
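A toy illustration of why UAR < CR is so punishing; the starting pool and the rates are made-up numbers, and the point is only that the shrinkage compounds:

```python
# If acquisition runs below churn, each content release ships
# to a smaller audience than the one before it.
pool = 1_000_000          # assumed current pool of potential buyers
uar, cr = 0.03, 0.05      # assumed monthly acquisition and churn rates

for month in range(1, 7):
    pool = int(pool * (1 + uar - cr))
    print(f"month {month}: {pool:,} potential buyers")
```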

The genius of PvP (Player vs Player) environments is how they necessitate the emergence of a meta-game. In mathematical terms, Player vs Environment (PvE) resembles optimization, where strategies are static: solve once and you’re done. PvP environments, however, resemble game-theory models, where strategies have been shown to evolve in an evolutionary process. Equilibrium in PvP environments is constantly being reshuffled with each balance change; the search for dominant strategies in an ever-shifting equilibrium is the game itself.

It’s been four years since the launch of Clash of Clans, and there continue to be oodles of strategy videos. Supercell is constantly buffing and debuffing different units, which makes some strategies more successful than others, and through trial and error players expose this.

Is it a paradox to watch mobile strategy videos on a desktop?

The push for PvP environments has seen the emergence of ‘Systems Design’ and the demise of Level Designers. With few exceptions, linear and deliberate gameplay has gone the way of Spaghetti Westerns.

On the other hand, a different type of PvE has found ways to combat the content problem. For example, Trials Frontier adopted meaningful level mastery with a touch of PvP. This is achieved via quests that revisit locations, stars, leaderboards, mission rewards, and gameplay that rewards depth (back/front flips can improve my times!). That said, PvE claims a smaller piece of the pie than it once did. This trend will only continue as F2P marches into the console and PC arena.

Get More Life Out of Your Lifetime Value Model! A Discussion of Methods.


Predicting the average cumulative spending behavior, or Lifetime Value (LTV), of players is incredibly valuable. Being able to do so helps determine what to spend on User Acquisition (UA): if a cohort of players has an LTV of $1.90 and cost $1 to acquire, then we’ve made money! It also helps evaluate how effective particular advertising channels are, since we’d expect different cohorts of players to have different values. Someone acquired via Facebook may be worth more than someone acquired via AdColony.

But wait there’s more!

My argument in this post is that LTV has a great deal of value outside of marketing. In fact, LTV might have parts more valuable than the whole. Predicting LTV can take numerous approaches, and each approach has associated benefits. Remember, there doesn’t have to be just one LTV model!

Consider four requirements we’d want out of an LTV model:

1. Accuracy

The LTV predicted should be the LTV realized. Figuring out the upward or downward bias in your coefficients is important here: it tells you whether your estimate is better treated as a maximum or a minimum for UA spend, depending on the direction you suspect your coefficients are biased.1

2. Portability

Creating models is labor intensive, and even more so when doing so for multiple games. A particular family of LTV models sweeps this aside: Pareto/Negative Binomial Distribution (Pareto/NBD) models. Since they’re based only on the number of transactions and transaction recency, they don’t require game-specific information. This means you can apply them anywhere!
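A minimal sketch using the open-source lifetimes package, which implements Pareto/NBD. The transaction-log file and its column names here are hypothetical; the key point is that the model only needs per-player frequency, recency, and age, nothing game-specific:

```python
import pandas as pd
from lifetimes import ParetoNBDFitter
from lifetimes.utils import summary_data_from_transaction_data

# Hypothetical raw log: one row per purchase, any game.
transactions = pd.read_csv("purchases.csv")
summary = summary_data_from_transaction_data(
    transactions, "player_id", "purchase_date"
)

pnbd = ParetoNBDFitter(penalizer_coef=0.01)
pnbd.fit(summary["frequency"], summary["recency"], summary["T"])

# Expected number of purchases over the next 90 days, per player.
summary["pred_90d"] = pnbd.conditional_expected_number_of_purchases_up_to_time(
    90, summary["frequency"], summary["recency"], summary["T"]
)
```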

3. Interpretability

This one’s big and perhaps the most overlooked. Consider the Linear * Survival Analysis approach to LTV. The first step is to predict when a particular player will churn. By including variables like rank, frustration rate (attempts on a particular level), or social engagement, we gain insight into what’s retaining players. This type of information is incredibly valuable.

4. Scalability

If it’s F2P, then there are going to be hundreds of thousands to millions of players (you hope). I’ve seen some LTV approaches that would take eons to apply to a player pool of this size; an LTV model should scale easily.

So how do the different approaches stack against one another?

Model                           Accuracy   Portability   Interpretability   Scalability
Pareto/NBD2                        /            x                                x
ARPDAU * Retention3                             x                                x
Linear * Survival Analysis4        x                            x                x
Wooga + Excel5                                                  x
Hazard Model6                      x                            x                x

(x = has the property; / = partial)

Pareto/NBD is great, but it’s hard to incorporate a spend feature (it just predicts the number of transactions).7 A small standard deviation in transaction value gives this model a great deal of value and something to benchmark against. This model also makes sense if data-science labor is scarce.

ARPDAU * Retention is probably the approach you’re using; it’s a great starter LTV. If marketing or player behavior becomes more important, the gains from an approach beyond this start to make more sense.
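For reference, the starter model is just ARPDAU multiplied by expected active days, with the latter approximated by summing a fitted retention curve. A minimal sketch; the power-law parameters are illustrative assumptions, not a benchmark:

```python
arpdau = 0.05  # assumed average revenue per daily active user

def retention(day):
    # Assumed power-law fit to observed D1/D7/D30 retention.
    return min(1.0, 0.6 * day ** -0.35)

# Expected active days over a year = sum of daily retention probabilities.
expected_active_days = sum(retention(d) for d in range(1, 366))
ltv_365 = arpdau * expected_active_days
print(f"365-day LTV estimate: ${ltv_365:.2f}")
```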

Wooga + Excel just doesn’t scale, which kills its viability, but it’s conceptually useful to understand.

Linear * Survival Analysis gives a great deal of interpretability and also sub-predicts customer churn time. This means testing whether the purchase of a particular item or mode increases churn time is done within the model. The interpretability of linear models also makes it easy to see different LTV values for variables like country or device.
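A minimal sketch of how the two pieces compose, using the open-source lifelines package for the survival half and scikit-learn for the linear half. The dataframe and every column name (rank, frustration_rate, social_engagement, days_active, churned, spend_per_day) are hypothetical stand-ins for whatever your telemetry provides:

```python
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.linear_model import LinearRegression

df = pd.read_csv("players.csv")  # assumed per-player feature table
features = ["rank", "frustration_rate", "social_engagement"]

# Survival half: Cox proportional hazards on time-to-churn.
cph = CoxPHFitter()
cph.fit(df[features + ["days_active", "churned"]],
        duration_col="days_active", event_col="churned")
expected_lifetime = cph.predict_expectation(df[features])  # days until churn

# Linear half: spend per active day from the same features.
spend_model = LinearRegression().fit(df[features], df["spend_per_day"])

# LTV = predicted daily spend x expected lifetime.
df["ltv"] = spend_model.predict(df[features]) * expected_lifetime
```

Because both halves expose coefficients, you can read off which features extend lifetime versus which raise spend, which is exactly the interpretability argument above.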

There are many, many different approaches beyond what’s been laid out here. Don’t settle on using just one model, each has costs and benefits that shouldn’t be ignored.

 

  1. The difference between MAE and RMSE comes to mind.
  2. Schmittlein, David C., Donald G. Morrison, Richard Colombo. 1987. Counting your customers: Who are they and what will they do next? Management Sci. 33(January) 1–24.
  3. Seufert, Eric Benjamin (2013-12-27). Freemium Economics: Leveraging Analytics and User Segmentation to Drive Revenue (The Savvy Manager’s Guides) (p. 137). Elsevier Science. Kindle Edition.
  4. Rosset, Saharon, et al. “Customer lifetime value models for decision support.” Data Mining and Knowledge Discovery 7.3 (2003): 321-339.
  5. http://www.slideshare.net/EricSeufert/ltv-spreadsheet-models-eric-seufert
  6. Gupta, Sunil, et al. “Modeling customer lifetime value.” Journal of Service Research 9.2 (2006): 139-155.
  7. Glady, Nicolas, Bart Baesens, and Christophe Croux (2009), “A modified Pareto/NBD approach for predicting customer lifetime value”, Expert Systems with Applications, 36 (2) Part 1, 2062-2071.
Optimal Currency Areas With Milton
Friedman and Mario are about the same height.

In hindsight, one of Friedman’s great predictions was the Eurozone crisis. Despite being a massive champion of flexible exchange rates, Friedman never advocated for a common European currency.

Europe exemplifies a situation unfavourable to a common currency. It is composed of separate nations, speaking different languages, with different customs, and having citizens feeling far greater loyalty and attachment to their own country than to a common market or to the idea of Europe.

— Milton Friedman, The Times, November 19, 1997

The Greek financial crisis exemplifies many of the problems Friedman points out.

In an economic recession, central banks devalue the domestic currency to return the country to full employment. When economies are similar, recessions move together; if there’s a crisis in Texas, it’s probable there’s one in Washington. This makes central bank policy more effective because capital won’t escape from ‘recessed’ areas to ones with higher returns – there are none. It can be much harder to accomplish this in Europe, where every country’s economy is dramatically different, and institutional policy fluctuates widely.

What the hell does this have to do with game design?

Why do multiple currencies exist in games? Why not just have one type of currency rather than four or five?

The answer is segmentation and capital flight.

Once again, Supercell has provided us with a beautiful example: Clash of Clans (CoC). In CoC, some items cost gold and others elixir. After a quick scan, you’ll notice only the defensive items (cannons, archer towers, walls) cost gold, and only the offensive items (troops, barracks, spells) cost elixir. Why might this be the case? Segmenting these items gives designers greater control over the economy and minimizes the potential for ‘contagion’ effects.

Consider a world in which Clash of Clans only contained gold. Players might prefer attacking to defending, encapsulating the idea of capital going to its highest return. If this were the case, the game could become unbalanced as all players attack and none spend gold to upgrade their base defenses. By pricing base defense in its own currency, you remove the opportunity cost of defensive expenditures. This is similar to giving your relatives a gift card; instead of spending it on whatever they fancy, they must now spend it at the gift-card store. A domestic currency is much like a gift card to that country’s ‘store,’ just as elixir is a gift card to CoC’s offense ‘store.’

If Supercell finds players are not building challenging defenses, increasing the rate of gold production is straightforward, without any worry that the money will be spent on offense. They can also lower the cost of items priced in gold. Supercell has toyed more with this strategy in their other title, Boom Beach.

The rules for when segmentation is worthwhile emerge from reading Mundell’s famous paper backward.1

Segmentation in games, like in real-world economies, gives game designers and central bankers more control.

  1. Mundell, R. A. (1961). A Theory of Optimum Currency Areas. The American Economic Review, 51(4), 657–665. Retrieved from http://www.jstor.org/stable/1812792
