When designing your compatibility testing strategy, you're likely to formalise the set of environments your product will be developed and tested against. You may call the result your Support Matrix, or Browser Compatibility Matrix, or simply your Supported Environments.
Here are five fundamentals for arriving at an effective Support Matrix.
Use targeted tracking data to inform your decisions
An obvious one, yet you often see a Support Matrix which simply lists the newest devices and software versions on the market at the time of writing. New devices and software can take a long time to reach the same uptake as their predecessors (Android 7, for example), and if one of your (or your client's) primary markets is in, say, China, ignoring older versions of Internet Explorer or Android will make your product harder or impossible to use for a significant portion of potential users.
It's all about the size of the quality audience you can cater for: your product will be more successful the more users you can present with a great experience, where those users are the ones you care about (for example, those likely to convert), relative to the development and testing effort involved.
So if the tracking data for a client's existing digital presence in a market shows a significant and stable percentage of users on Internet Explorer on an older version of Windows, there is value in putting development and testing time against it. It may make sense to ignore minor rendering issues that do not obscure content, and in some cases even to create a simplified version to direct older platforms to, but you cannot dismiss a platform simply because it is old.
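As an illustration of the simplified-version idea, here is a minimal sketch using Flask (an assumption on my part; the route names and user-agent checks are illustrative only, and real user-agent detection deserves more care than this):

```python
# A minimal sketch of directing legacy browsers to a simplified version.
# Assumes a Flask app; the "/basic/" route and the UA checks are illustrative.
from flask import Flask, redirect, request

app = Flask(__name__)

@app.route("/")
def home():
    ua = request.headers.get("User-Agent", "")
    # "MSIE" appears in IE 10 and earlier; "Trident/" appears in IE 11.
    if "MSIE" in ua or "Trident/" in ua:
        return redirect("/basic/")
    return "Full experience"

@app.route("/basic/")
def basic():
    return "Simplified experience for older platforms"

if __name__ == "__main__":
    app.run()
```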
For the same reasons, a vendor ending support for their product doesn't in itself justify dropping it from your Support Matrix (unless you work for that vendor). If people keep using it - remember how corporations kept paying for Windows XP support after the official end of support - it can be valuable or even essential to keep accommodating those users for a while longer, provided they are indeed your users.
When you have tracking data available, use it. Don't claim "nobody uses device X" when all you actually know is what you and your friends use. Evidence first.
Don't support everything
Not everyone involved in a project may be aware of the sheer number of combinations of devices, OS versions, browser versions, email clients, and so on that exist in the world. They might expect a product to be tested against a nebulous Everything. It may be wise to pre-empt this by communicating your support strategy and conveying how you use targeted data to maximise return on investment.
One tactic for reducing the number of supported environments is to establish a support threshold. You could employ an arbitrary but useful cut-off point of, for example, 5% of total usage share, and support only those environments used by at least 5% of your target audience. Where you put the threshold depends on how much your organisation or client is willing to spend on accommodating their users and how much it would cost you to do so.
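As a sketch of what applying such a threshold might look like, assuming a CSV export from your analytics tool with one row per environment (the file name, column names, and the 5% figure are all illustrative):

```python
# A minimal sketch of applying a support threshold to tracking data.
# Assumes a CSV export with "environment" and "sessions" columns;
# adapt the names to whatever your analytics tool actually produces.
import csv
from collections import Counter

THRESHOLD = 0.05  # the 5% cut-off from the text; tune to your own budget

sessions = Counter()
with open("tracking_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        # e.g. environment = "Chrome / Windows 10" or "Safari / iOS 17"
        sessions[row["environment"]] += int(row["sessions"])

total = sum(sessions.values())
supported = {env: n / total for env, n in sessions.items() if n / total >= THRESHOLD}

for env, share in sorted(supported.items(), key=lambda kv: -kv[1]):
    print(f"{env}: {share:.1%}")
```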
In some cases you'll want to advocate for comprehensive support more strongly than in others. A global eCommerce web build is a different beast than a static HTML5 banner.
Understand the technology
If the two most used devices in your tracking data are the iPhone 6s and iPhone 7, it's a given that you'd test on both, right? Not necessarily. The iPhone 6, 6s, and 7 share the same screen size, screen resolution, and pixel ratio. If you're testing emails, that tells you they are essentially identical as long as the iOS version is the same; covering the iPhone 7 also covers the 6s and 6. In the context of a demanding mobile app, however, the minor differences in hardware capabilities could be important.
Understanding the devices, browsers, and OSes in your tracking data will help you decide which to support and how to support them.
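To make that concrete, here is a minimal sketch of collapsing devices that share a rendering profile. The grouping key is an assumption that fits the email example above (screen size, resolution, pixel ratio, OS version); a demanding app would need a finer-grained key, and the spec figures are illustrative:

```python
# A minimal sketch of collapsing equivalent devices in a Support Matrix.
# The grouping key is an assumption: for email rendering, screen size,
# resolution, pixel ratio, and OS version are treated as what matters.
from collections import defaultdict

devices = [
    # (name, screen inches, resolution, pixel ratio, OS version)
    ("iPhone 6",      4.7, "750x1334",  2, "iOS 12"),
    ("iPhone 6s",     4.7, "750x1334",  2, "iOS 12"),
    ("iPhone 7",      4.7, "750x1334",  2, "iOS 12"),
    ("iPhone 7 Plus", 5.5, "1080x1920", 3, "iOS 12"),
]

profiles = defaultdict(list)
for name, size, res, ratio, os_version in devices:
    profiles[(size, res, ratio, os_version)].append(name)

for profile, names in profiles.items():
    # For rendering purposes, testing any one device in the group covers the rest.
    print(f"{profile}: test one of {names}")
```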
Another example is browser auto-updating. Chrome, Firefox, and Edge (the latter through Windows Update) update themselves. Whilst it is possible to be behind on an update or to disable auto-updating altogether, the vast majority of users will be moved to a new version of these browsers whenever a stable release ships. That should tell you a Support Matrix which pins specific versions of these browsers can be out of date as soon as you create it. Either support the latest version or - if you have the need and capacity - the latest and the version before it.
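One way to encode this, sketched below under my own assumptions, is to keep evergreen browsers version-less in the matrix and resolve relative entries like "latest-1" at test time. The hard-coded release numbers are placeholders for whatever release data source you trust:

```python
# A minimal sketch of keeping evergreen browsers version-less in the matrix.
# Relative entries ("latest", "latest-1") resolve at test time; the dict of
# current stable versions is placeholder data, not something to hard-code.
CURRENT_STABLE = {"chrome": 121, "firefox": 122, "edge": 121}  # placeholder

def resolve(browser: str, spec: str) -> int:
    """Turn 'latest' / 'latest-1' into a concrete version number (Python 3.9+)."""
    offset = int(spec.removeprefix("latest") or 0)  # "" -> 0, "-1" -> -1
    return CURRENT_STABLE[browser] + offset

matrix = [("chrome", "latest"), ("chrome", "latest-1"), ("firefox", "latest")]
for browser, spec in matrix:
    print(browser, resolve(browser, spec))
```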
There are many more examples. Aside from learning about the technology in the environments people use, also learn about the technology you use to develop the product and the technology you use to gather your tracking data.
Share with your project team
Having a Support Matrix to test against is going to be a painful experience if the product under test wasn't developed to support the same environments. We're not playing a game of "guess which rules I will fail your work against"; we're working together to get something of value out the door. Before development even starts, everyone should understand which environments the product needs to deliver a great experience in (whatever that means in your project) and to what extent.
Make sure you get involved in project kick-offs, and ensure you have (or can get) access to appropriate tracking data on which to base a Support Matrix. Keep an eye on resource requests for your design department: what they are working on is likely to head into development soon.
If the work is for a client, also find out which environment(s) they will use when signing off on the deliverables. Then either add those to your supported environments, or request that the client stakeholders review in an environment that matches what their target audience uses. Even so, experience shows that someone will pop up from somewhere with a list of issues found in an unsupported environment. When that happens, you'll be happier if your account handler is already equipped to handle it: they can explain why only certain environments are supported and what it will cost to add support retroactively.
Actually be able to support them
What if 12% of the target audience use a low-spec Android tablet and you don't have one? Do you sell it as "88% of people DON'T use it" and ignore it? You could, but not owning the device is a weak justification for not supporting it. Either exclude the environment for a valid reason, or find a way to cover it with a representative device.
There are many options for building compatibility testing capacity.
You can create a physical test lab, with a range of test desktops, virtual machines, laptops, and mobile devices which you manage yourself in terms of software versions, configuration, purchasing, and availability.
You can use cloud-based platforms such as BrowserStack or Sauce Labs (among a number of others) for desktop and mobile browsers, or Litmus or Email on Acid (among a few others) for email clients.
You can use a 3rd party testing service provider who already has a mature physical or cloud-based test lab.
You can use a 3rd party testing service provider who has a distributed network of testers able to cover almost any range of environments.
Or you can use a combination of these. The world is your oyster, if you can afford it.
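Whichever route you take, the Support Matrix can drive your automation directly. Below is a minimal sketch using Selenium 4 against a cloud grid; the hub URL, credentials, target URL, and platform names are placeholders, and the exact capability format varies by provider, so treat it as a shape rather than a recipe:

```python
# A minimal sketch of running one check across a Support Matrix on a cloud
# grid with Selenium 4. Hub URL, credentials, and platform names are
# placeholders; consult your provider for the exact capability format.
from selenium import webdriver

HUB_URL = "https://USER:KEY@hub.example-cloud.com/wd/hub"  # placeholder

MATRIX = [
    ("chrome",  webdriver.ChromeOptions,  "Windows 11"),
    ("firefox", webdriver.FirefoxOptions, "Windows 11"),
    ("safari",  webdriver.SafariOptions,  "macOS"),
]

for name, options_cls, platform in MATRIX:
    options = options_cls()
    options.set_capability("platformName", platform)
    driver = webdriver.Remote(command_executor=HUB_URL, options=options)
    try:
        driver.get("https://example.com/")
        # A trivial smoke check; real runs would execute your full suite.
        assert "Example" in driver.title, f"unexpected title on {name}"
    finally:
        driver.quit()
```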