If the interviews fail and we find that there is no problem, or no shared problem, here’s my backup plan:
Some people say that tests can explain a codebase. They explicitly encode expectations about the code’s behaviour, so they’re not only verification, but also documentation. And because they stay valid as the code changes, they’re super helpful during onboarding.
Our research partner NetCheck experienced multiple occasions where the characteristics of their underlying mobility dataset changed, but they only noticed after a while. And because the dataset was ever-changing, nobody bothered to write documentation for it!
Evolving datasets are similar to codebases in a way: both change all the time, and it’s very valuable to know when past assumptions no longer hold true.
That’s why my Plan B is to build a prototype for “statistical unit tests”. It could be a Jupyter plugin, similar to Great Expectations. It could send test results to a central, user-friendly service that gives an overview. It could output JUnit XML and show results in CI.
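To make the idea concrete, here is a minimal sketch of what such a “statistical unit test” could look like, written as plain pytest tests over a pandas DataFrame. The column names, thresholds, and file path are hypothetical stand-ins, not NetCheck’s actual schema:

```python
# Sketch: encoding past assumptions about an evolving dataset as executable tests.
# Column names ("trip_distance_km", "device_id"), thresholds, and the file path
# are hypothetical examples, not the real mobility dataset.
import pandas as pd


def load_mobility_snapshot() -> pd.DataFrame:
    # Placeholder: in a real prototype this would load the current
    # version of the evolving mobility dataset.
    return pd.read_parquet("mobility_snapshot.parquet")


def test_trip_distances_stay_plausible():
    df = load_mobility_snapshot()
    # Past assumption, written down as a check: trip distances
    # have historically been between 0 and 500 km.
    assert df["trip_distance_km"].between(0, 500).all()


def test_device_count_does_not_collapse():
    df = load_mobility_snapshot()
    # Another assumption: we expect at least 10,000 distinct devices.
    # A sudden drop would signal a change in the dataset's characteristics.
    assert df["device_id"].nunique() >= 10_000
```

A nice side effect of piggybacking on pytest: it can already emit JUnit XML via `--junitxml`, so showing the results in CI would not need much extra tooling.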