Reference no: EM133037706
Summarize the following in the SHORTEST way possible!
Opportunities And Risks of Foundation Models in Healthcare and Biomedicine
The Opportunities and Risks of Foundation Models paper concludes that the biggest considerations elicited by developments in Foundation Models concern ethics and safety. This is perhaps best exemplified by the bidirectional relationship between foundation models and the tasks they enable, which makes healthcare and biomedicine one of the high-risk areas the paper's authors address.
Machine Learning rose to prominence in the 1990s in the form of predictive algorithms, and evolved into the Deep Learning of the 2010s, driven by larger datasets and more computation. By 2018, this had morphed into the current Foundation Models built on transfer learning and fine-tuning, which largely seek to fuse the relevant information about a domain through centralized data, and thereafter adapt it to tasks that span multiple modalities. The sheer scale and scope of data creation, data curation, training, adaptation, and deployment gave rise to the term 'Foundation Models' to best capture the most potent of the latest AI iterations. According to Brown et al. (2020), GPT-3, for example, has 175 billion parameters and can be adapted via natural-language prompts to do a wide range of tasks, making it a prime example of a foundation model. Foundation Models thus point to a future consensus on aptly specifying the role of AI while also connoting the significance of architectural stability, safety, and security of AI. The demand for AI is rapidly growing, with the market expected to reach $87.41 billion by 2028[1].
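The "adapt via natural-language prompts" idea can be illustrated with a short sketch: one frozen model serves many tasks simply by reframing each task as text. Everything below is an illustrative assumption, not code from the paper; `build_prompt`, its templates, and the commented-out `query_model` call are hypothetical stand-ins for a real foundation-model API.

```python
# Sketch: prompt-based adaptation. A single foundation model handles many
# tasks without per-task training -- each task is framed as a text prompt.
# The templates and function names here are illustrative assumptions.

def build_prompt(task: str, text: str) -> str:
    """Frame different healthcare tasks as prompts for one shared model."""
    templates = {
        "summarize": "Summarize the following clinical note:\n{t}\nSummary:",
        "translate": "Translate the following note to French:\n{t}\nFrench:",
        "classify": "Is this note about cardiology? Answer yes or no:\n{t}\nAnswer:",
    }
    return templates[task].format(t=text)

note = "Patient reports chest pain radiating to the left arm."
for task in ("summarize", "translate", "classify"):
    prompt = build_prompt(task, note)
    # response = query_model(prompt)  # hypothetical call to a model like GPT-3
    print(prompt.splitlines()[0])  # show which task each prompt encodes
```

The point of the sketch is the design choice: adaptation happens in the prompt, not in the model weights, which is what lets one 175-billion-parameter model serve diagnosis, summarization, and triage-style questions alike.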
The biggest dilemma facing AI emanates from the sheer scale and scope of Foundation Models, which are based on conventional deep learning and transfer learning. The drive towards homogeneity and the process of transfer learning pose both risks and opportunities. Transfer learning allows knowledge learned on one task to be reused on another, so as one model's abilities evolve, so can those of the models adapted from it. This is further accentuated by the very nature of Foundation Models, which is defined by their capabilities, technical principles, applications, and societal impact. Moreover, a further cautionary pause is introduced as the centrality of the Foundation Model to the field of AI shifts towards the importance placed on the availability of data and the ability to harness it. This is on top of the wave of developments in self-supervised learning, which further adds to the dilemma caused by rapid expansion and deployment. The risks continue with the potential for new vulnerabilities which, due to the stacked nature of AI data and learning, are unfortunately inherited downstream by all the adapted models. Additionally, the central task of manipulating historical data to make future predictions makes bridging the gap between tasks and data a challenge across Foundation Models. On the other hand, though Foundation Models have benefited from massive leaps in the computing power of hardware, this progress is being eclipsed by the large impact of the social ecology of AI data from creation to deployment.
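The transfer-learning pattern described above can be made concrete with a minimal toy sketch: a "pretrained" encoder is frozen and reused, and only a small task-specific head is fitted on downstream data. All functions, weights, and data here are illustrative assumptions rather than anything from the paper; note how any flaw baked into `pretrained_features` would be inherited by every head fitted on top of it, which is exactly the downstream-vulnerability risk described above.

```python
# Minimal transfer-learning sketch. The frozen "encoder" below stands in for
# a foundation model; only the linear head is trained on the downstream task.

def pretrained_features(x):
    # Frozen stand-in for a pretrained encoder: raw input -> feature vector.
    # In reality these features are learned from broad data at scale.
    return [x, x * x]

def fit_head(data, lr=0.1, epochs=2000):
    # Adaptation step: fit a linear head w.f(x) + b on the frozen features
    # by stochastic gradient descent on squared error.
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            f = pretrained_features(x)
            err = (w[0] * f[0] + w[1] * f[1] + b) - y
            w[0] -= lr * err * f[0]
            w[1] -= lr * err * f[1]
            b -= lr * err
    return w, b

# Toy downstream task: y = x^2, learnable here only because the frozen
# features happen to include x^2 -- the head inherits what the encoder offers.
data = [(i * 0.2, (i * 0.2) ** 2) for i in range(-5, 6)]
w, b = fit_head(data)
```

Training only the head is far cheaper than training the encoder, which is why adaptation scales; the flip side, as the passage notes, is that every adapted model shares the encoder's blind spots.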
The fundamentally sociotechnical nature of AI leaves a research gap that requires deep interdisciplinary collaboration. In the interactive framework of healthcare and biomedicine, where foundation models enable various tasks as cited by Brown et al., the multimodal data generated by various sources in the healthcare ecosystem becomes both a huge opportunity and a huge risk. As a mitigation strategy, data-policy institutional arrangements should ideally accompany the cycle from data creation through to deployment of Foundation Models.
The potential of Foundation Models, especially in healthcare and biomedicine, lies in the integration of multimodal data that can guide the development of last-mile applications bridging the sociotechnical gap. Fast and accurate automated interfaces for healthcare providers and patients have applications that range from diagnosis and screening to drug discovery, personalized medicine, and clinical trials in epidemiological investigations of biomedical concepts. In turn, healthcare delivery benefits from improved efficiency and accuracy of care, which can help reduce costs, especially for patients.
From the foregoing, the potential of Foundation Models lies largely in their transfer learning and scale, the availability of data, and self-supervised learning. In healthcare and biomedicine, this creates savings in terms of human resources, experimental time, and financial costs. Healthcare is extremely costly, and these costs only rise year after year. According to the article, 30% of healthcare spending may be wasteful due to administrative inefficiency and preventable medical errors[2]. Healthcare workers spend an immense amount of time updating electronic patient records manually. AI can be implemented to assist in record keeping during patient visits, as well as to surface related articles, studies, and diagnoses based on patient information. On the patient side, AI can be used to create an interface that provides interactive answers to patient questions, preventative actions, and references. One of the largest applications of AI in the medical space is the Internet of Medical Things (IoMT). According to HealthTech, the rise of IoMT is pushed by "an increase in the number of connected medical devices that are able to generate, collect, analyze or transmit health data or images and connect to healthcare provider networks, transmitting data to either a cloud repository or internal servers"[3].

Nonetheless, existing foundation models pose main challenges that include multimodality, explainability, extrapolation, and legal and ethical regulation, which altogether establish practical limits. Though pretraining can address some of these obstacles, the potential to accentuate harms still prevails, especially in high-risk sectors such as healthcare, where data is largely shaped by the social ecosystem and hence constrained by ethical and security considerations. In a sense, AI has not evolved past human error and is capable of making similar mistakes.
Subsequently, given their almost limitless growth and deployment, the verdict is still out on just how much more fine-tuning Foundation Models can take to overcome these main challenges and make the most of their potential for scale and adaptability.