Modeling Wine development and Linux adoption
A while back I found myself facing the question of when we could expect Wine to get substantially better. Wine has been in development for 16 years, and while it has millions of users there are still a great many who are unhappy with it. This is completely understandable – Wine doesn’t yet work perfectly with all their applications, and they shouldn’t be happy until it does.
So, I decided to make a model of Wine development. That way I could figure out if we were doing something wrong, or if it even mattered which bugs we were solving as long as we were working.
Modelling Wine development and user happiness
Let’s define the Wine project as a set of 10,000 or so bugs yet to be solved. Maybe these represent API functions, or usability issues, or performance problems – whatever. Some of these bugs may be harder to solve than others, so we’ll give each one a difficulty rating in terms of time.
Now let’s define an application as some subset of these bugs. A working application is one that has all its bugs solved. We can also give each bug a different relative probability of affecting an application – maybe bug x is 10 times more likely to affect an application than bug y.
A user is then defined as a set of applications he needs. A “happy user” is one who has all his applications working. Just like with the applications, we can assign relative probabilities to reflect the real world – World of Warcraft is 60 times more likely than CuteCatExploderPro.
Finally, we need some strategy for solving the bugs. We’ll get to 100% bugs solved in the same amount of time regardless, but the order we do them in will matter. I was able to come up with about a dozen or so strategies, such as “pick a random unhappy user and solve a random bug in one of his applications”. I picked the most realistic of these, and told the simulation to alternate between them.
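To make the setup concrete, here is a minimal sketch of that model in Python. The author's actual script used different parameters and cairoplot for charting; every constant and weight distribution below (bug counts, app sizes, the 1/(i+1) popularity weights) is an illustrative assumption, not a value from the original script.

```python
import random

random.seed(0)

# Scaled-down toy sizes so the sketch runs quickly (assumptions, not the
# original script's values; the article uses ~10,000 bugs).
N_BUGS, N_APPS, N_USERS = 400, 60, 100

# Each bug gets a difficulty (time to fix) and a relative weight: some bugs
# are far more likely to affect an application than others.
difficulty = [random.uniform(1, 10) for _ in range(N_BUGS)]
bug_weight = [1.0 / (i + 1) for i in range(N_BUGS)]

# An application is a subset of bugs, drawn with those weights.
def make_app(size):
    chosen = set()
    while len(chosen) < size:
        chosen.add(random.choices(range(N_BUGS), weights=bug_weight)[0])
    return chosen

apps = [make_app(random.randint(5, 20)) for _ in range(N_APPS)]
app_weight = [1.0 / (i + 1) for i in range(N_APPS)]  # popular apps dominate

# A user is the set of applications he needs; he is happy when they all work.
users = [set(random.choices(range(N_APPS), weights=app_weight,
                            k=random.randint(1, 5)))
         for _ in range(N_USERS)]

def happy_users(solved):
    working = {i for i, a in enumerate(apps) if a <= solved}
    return sum(1 for u in users if u <= working)

# One strategy: pick a random unhappy user and solve a random bug in one
# of his applications.
solved = set()
history = []
while len(solved) < N_BUGS:
    unhappy = [u for u in users
               if not all(apps[i] <= solved for i in u)]
    if unhappy:
        needed = set().union(*(apps[i] for i in random.choice(unhappy)))
        solved.add(random.choice(sorted(needed - solved)))
    else:  # everyone is already happy; finish off the remaining bugs
        solved.add(min(set(range(N_BUGS)) - solved))
    history.append(happy_users(solved))
```

Swapping out the strategy block (for example, always finishing the app closest to completion) and plotting `history` against bugs solved is all it takes to compare strategies.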
I then turned this all into a Python script and combined it with the cairoplot chart generator, generating a picture like this:
You can download the script and play with it yourself – much of it is easily customizable.
Surprising things learned from the model:
- The strategy we use – the order we tackle various bugs – really does matter. Every strategy gets to the perfect 100% end after solving all the bugs, but some get you 10 times as many happy users when you’re only half done. In practice, having far more users likely translates into extra developers and a much faster rate of development.
- Varying the difficulty of individual bugs didn’t matter much. The pictures came out pretty much the same.
- Prioritizing the last few bugs in apps that are almost done is one of the most productive ways to increase happy users – in the simulations I ran it even outperformed working on the most popular application. Unfortunately in the real world it can sometimes be difficult to tell if an application “almost works” in Wine.
- Similarly, “almost happy” users are the easiest to satisfy. When there were many to choose from, picking one arbitrarily and making him happy before moving on to the next significantly outperformed fixing bugs for almost happy users at random.
- Instances of “collateral damage” – the fixing of one application causing another application to start working without any extra effort – are rather uncommon until most applications are almost working. The wintrust API is needed by both Steam and iTunes, however when enough of wintrust was implemented to make Steam work there were still many unrelated bugs causing iTunes to remain broken.
- Just about every reasonable way of generating bug difficulty, relative bug probability, applications, and users that I could think of led to the same general picture: something that looks roughly linear for most of it before taking a very sharp upward turn near the very end. In other words, the model tells us that we should expect to be pleasantly surprised. At some point, Wine will get very good very fast.
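That sharp upward turn has a simple probabilistic intuition. Suppose a user depends on B distinct bugs in total and bugs are fixed in uniformly random order – an idealization, with B = 40 below an arbitrary illustrative choice. Then the chance he is happy after a fraction f of all bugs are fixed is roughly f to the power B, which hugs zero for most of the run and only turns upward near the end:

```python
# Probability that a user needing b distinct bugs is happy once a fraction f
# of all bugs are fixed, assuming bugs are fixed in random order and the
# total bug pool is large (so the draws are roughly independent).
def p_happy(f, b=40):
    return f ** b

# The curve stays near zero almost the whole way, then climbs steeply:
curve = [p_happy(k / 100) for k in range(101)]
```

The full model aggregates many users with different B values and smarter strategies, which flattens the early part into the roughly linear stretch described above, but the sharp finish survives.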
So, when will that point of sudden very rapid growth begin in the real world? No one can really know – even analyzing statistics like the growth of platinum ratings on AppDB would only give us information after the fact.
Modeling Linux adoption
We can make a similar model for Linux adoption itself. Just imagine a few well-known barriers to entry, like driver support and compatibility with existing applications. Now suppose that a particular barrier affects some large percentage of potential users.
If 80% of users can’t switch until they have working drivers, 80% can’t switch because of some existing Windows application, and 80% can’t switch because they haven’t heard of Ubuntu, then that means we can only expect to get 0.8% of the users. This is, incidentally, about as many as we have. As Joel on Software puts it:
Think of these barriers as an obstacle course that people have to run before you can count them as your customers. If you start out with a field of 1000 runners, about half of them will trip on the tires; half of the survivors won’t be strong enough to jump the wall; half of those survivors will fall off the rope ladder into the mud, and so on, until only 1 or 2 people actually overcome all the hurdles.
So what’s our best strategy then? Eliminating these barriers to entry – turning the rope ladder into an elevator – is the only way to get new users. If we fix the driver problem, and the driver problem affects 80% of our would-be users, then we can quintuple our user base. If we fix the Windows application problem, and make Wine work well, then we can quintuple usage again.
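The arithmetic behind those multipliers is worth spelling out. Assuming the barriers are independent and each blocks 80% of would-be switchers (round illustrative figures, as above):

```python
# Three independent barriers, each letting only 20% of users through.
overall = 0.2 * 0.2 * 0.2    # 0.008 -> only 0.8% clear all three
one_fixed = 0.2 * 0.2        # remove one barrier entirely -> 4% get through
ratio = one_fixed / overall  # roughly 5x: a quintupled user base
```

Each barrier removed multiplies the surviving user base by the same factor, which is why attacking the barriers themselves beats polishing anything else.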