> Trustworthy & Responsible AI Network (TRAIN), a consortium created to explore and set standards for the safe application of artificial intelligence (AI) in health care.
Can you also please simplify further? I think I'm just a little green and I don't quite get the following:
I get the GIGO and data-quality bit, but I don't understand why the lack of a correct ontology matters. Machine learning tech can now reliably analyze, and even produce, data that is ontologically sensible. I assume that's only possible because there is some sense in the underlying ontology, and medicine wouldn't be exempt from this?
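To make my confusion concrete, here's a toy sketch of what I think people mean when they say the ontology is missing or wrong. Everything below is my own illustration (the two codes are real published identifiers, but the records and the crosswalk are made up):

```python
# Hypothetical illustration: two hospitals record the same clinical concept
# under different coding systems, so naively pooled data looks like two
# different diagnoses. Records and the crosswalk below are invented.

hospital_a = [  # codes diagnoses in ICD-10
    {"patient": 1, "dx": "I21.9"},     # acute myocardial infarction, unspecified
]
hospital_b = [  # codes diagnoses in SNOMED CT
    {"patient": 2, "dx": "57054005"},  # acute myocardial infarction
]

# Pooling is syntactically fine but ontologically broken: nothing tells a
# model that these two strings denote the same condition.
pooled = hospital_a + hospital_b
print({r["dx"] for r in pooled})   # {'I21.9', '57054005'}

# A crosswalk repairs this one pair, but real mappings are partial and lossy
# (the systems differ in granularity), which I take to be the "no correct
# ontology" problem.
icd10_to_snomed = {"I21.9": "57054005"}  # toy, single-entry mapping
normalized = {icd10_to_snomed.get(r["dx"], r["dx"]) for r in pooled}
print(normalized)                  # {'57054005'}
```

If ML can already paper over mismatches like this reliably, great, but that's exactly the part I'd like explained.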
How do new sensing technologies (do you mean things like MRI scans, or tech like sequencing?) relate to the ontology? Given the need for FDA approval, clinical trials, and validation before a tech can be used in healthcare, one would assume there is plenty of time for standards to catch up. In the case of AI in healthcare, the FDA moved relatively quickly: almost as soon as AI was actually affecting radiology, the FDA took action.
Regulatory capture is a well-known business angle, no doubt at work here too. The point I found interesting, especially for this forum, was twofold:
a) That standards (and protocols) were being used to bring relevant groups together. There is something very alive about the creation of standards.
b) The consortium is formed for safety and quality, which are classic regulatory goals. Regulation comes after discovery and productization of the new tech. How would the limits of TRAIN affect discovery at all?
Also, please help me understand how healthcare data is fractal, and how decentralization addresses this.
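For what it's worth, my naive reading of "decentralization" is something like federated learning: raw records never leave each institution, only fitted parameters do. A minimal sketch under that assumption (plain NumPy, toy linear model, federated averaging):

```python
# Toy sketch of federated averaging, assuming "decentralization" means
# federated learning: each hospital fits a model on its own data; only the
# parameters (never patient records) are shared and averaged.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

def local_fit(n):
    """One hospital's private data; returns a locally fitted weight vector."""
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

# Each site trains locally; the coordinator only ever sees the weights,
# averaged in proportion to how much data each site contributed.
sites = [200, 50, 500]
local_ws = [local_fit(n) for n in sites]
global_w = np.average(local_ws, axis=0, weights=sites)
print(global_w)  # close to [2.0, -1.0], without pooling any raw records
```

If that's roughly right, I can see how it sidesteps pooling heterogeneous data across institutions, but I still don't see the "fractal" part.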
As a meta-question: the whole thing could perhaps be viewed as disappointing, regular old regulatory capture, but how is this mind-destroying? If anything, I would argue that if a consortium is being formed and an individual cares about the outcomes, then they should get involved. After all, the consortium's standards could become law, and if we wait for a peer-reviewed outcomes study, we are 20 years too late.
I heard Amazon is providing "ambient listening" AI solutions for hospitals to automate patient-care documentation. It's easy to see how standards for safety and usability would play a role there. Medical charts seem like a linchpin of hospital operations.
This diagram by safety researcher Rene Amalberti may be a useful frame here. It's basically a theory of diminishing returns for safety, driven by media attention. Paradoxically: the safer a system → the less often accidents occur → the better news they make → the more attention they receive.
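Spelling the loop out numerically might help; this is entirely my own toy construction, not Amalberti's model. Assume accidents fall in proportion to safety investment, while attention per accident grows as accidents get rarer:

```python
# Toy model of the paradox described above (my own construction, not from
# Amalberti's diagram): as a system gets safer, accidents become rarer but
# each one draws more attention, so total attention declines far more
# slowly than actual risk does.
safety_levels = [1, 2, 4, 8, 16]

for s in safety_levels:
    accidents = 100 / s          # accident rate falls with safety investment
    attention_each = s ** 0.8    # rarer accidents make bigger news
    total_attention = accidents * attention_each
    print(f"safety={s:>2}  accidents={accidents:6.1f}  "
          f"attention/accident={attention_each:5.2f}  "
          f"total attention={total_attention:6.1f}")
```

The exponents are arbitrary; the point is just that total attention can fall far more slowly than actual risk, which is the diminishing-returns dynamic the diagram describes.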