Metrics for test run rates and pass rates are significantly affected by three long-standing problems in the test infrastructure:
1. Some tests can't be run on some platforms, because they don't compile, crash, or otherwise fail to run to completion.
2. Some tests are either explicitly skipped (via QSKIP) or implicitly skipped (empty test functions, or functions #ifdef'd to empty).
3. The metrics system is unable to accurately determine which tests are valid for a particular platform.
For some projects, these issues mean that significant evaluation and processing of run and pass rates is needed before meaningful, acceptable figures can be presented to the people who decide when a product is ready to ship.
We should handle test cases more intelligently. Specifically, we should know exactly how many test cases and test functions are supposed to run on a given platform, and use that number to calculate the run rate. Then, when everything works as intended, we should be able to achieve a 100% run rate on every platform, even though not every platform runs exactly the same tests.