“We work in one of the world’s most risk-averse industries. The entire purpose of safety assessment is to identify the risks of experimental new drugs and treatments.” This is often my opening line when inducting a new team member at my company. I feel it’s an important statement to provide context to the uninitiated.
As a software vendor, the most noticeable manifestation of this for us is the reluctance of many of our customers to upgrade to new versions of software. Any change introduces a risk that must be checked and validated, even when that change improves the functionality used to generate SEND datasets.
I’ve often quoted Audrey Walker of Charles River, who once told me, “SEND has been the biggest change to our industry since the introduction of GLP.” An industry that saw relatively little change for decades was suddenly thrust into the world of standardized electronic data. SEND Changes Everything was one of the sections of my recent webcast, Sensible SEND Live!, and I’m sure we’ve all felt that significant change over recent years. And it keeps changing: there’s the ever-widening scope of SEND; the introduction of CDISC CORE and the change to how SEND datasets are checked; and continual changes from the FDA as they get more and more use out of the SEND packages. I think those of us who work in the SEND world have had to get comfortable with a certain level of continual change. New versions of software must be implemented because new standards must be supported. Yet even this level of change seems overshadowed by the leaps and bounds being made in artificial intelligence and machine learning technologies.
For years, I’ve heard the debate around virtual control subjects, and yet this is almost being surpassed as predictive toxicology and in silico methods are used to replace not just individual subjects, but entire studies. More and more, I’m reading news articles about AI being used in the drug R&D process. My own organization is leading the charge on in silico and predictive replacements for carcinogenicity studies.
Even outside our industry, AI and other new technologies are occupying vast amounts of media space. Clearly, we are living in an age of change. How does this sit with our risk aversion? If we were to use machine learning technologies to read and comprehend a study report, how could we validate such a system? We couldn’t ensure that it produced consistent results, because the whole point is that it would continually be learning and therefore continually improving its results.
So, I find myself asking: what happens to our risk aversion in the age of change? Do you have a strong opinion on the matter? If so, please feel free to email me at [email protected]; I’d love to hear your thoughts on the topic.
‘til next time,
Marc