The Indian Council of Medical Research (ICMR) has published its guidelines on the application of Artificial Intelligence in Biomedical Research and Healthcare.1 The guidelines emphasize the importance of ethical review processes for AI in health. They state that the development and deployment of AI-based solutions in healthcare pose significant challenges related to data safety, privacy, and sharing.
S. No. | Particulars | Obligations |
1. | Laws referred | 1. Information Technology (IT) Act, 2000; 2. Digital Information Security in Healthcare Act (DISHA), 2018; 3. National Digital Health Blueprint (NDHB), 2019 |
2. | Objects | The guidelines ensure ethical conduct and address emerging ethical challenges in the field of Artificial Intelligence (AI) in biomedical research and healthcare. These guidelines provide a framework for ethical decision-making in medical AI during the development, deployment, and adoption of AI-based solutions. |
3. | Definitions | Artificial intelligence (AI) is defined as "a system's ability to correctly interpret external data and to use those learnings to achieve specific goals and tasks through flexible adaptation". Data privacy practices must aim to prevent unauthorized access, modification, and/or loss of personal data. These practices are crucial in the healthcare sector, where medical information is sensitive data that, if misused, could harm patients or subject them to discrimination, even if unintended. |
4. | Scope | These guidelines apply to AI-based tools for all biomedical and health research and applications involving human participants and/or their biological data. These guidelines apply to health professionals, technology developers, researchers, entrepreneurs, hospitals, research institutions, organization(s), and laypersons who want to utilize health data for biomedical research and healthcare delivery using AI technology and techniques. |
5. | Collection, use, and disclosure of personal information | The safeguards designed to protect privacy must be explicitly explained to end users. End users must be well informed about the type of data collected and how it will be used, whether for developing the AI algorithms, interpretation, or storage. The purpose and end goal of data collection by developers of AI technologies should be known to hospitals/institutes, technicians, and the developers of the AI technology. Information about the use of AI-based technology should be shared with the healthcare recipient or their legal representative. A disclaimer must be included in the resultant document highlighting to what extent and for what purpose the AI tool was used during biomedical research or clinical decision-making. The terms of service need to make clear to end users that it is an AI technology and that the results/diagnoses/interpretations are not produced by humans. |
6. | Consent Requirements | Consent should be freely given and not obtained through duress or coercion of any kind, or by offering undue inducements. If the participant is not competent (medically or legally) to give consent, consent must be taken from a legally authorized representative. Consent is mandatory before running a predictive algorithm on participants/patients. |
7. | Data Sharing | Data sharing may expose patients/participants to privacy threats. Additional consent from patients is required for data sharing if not taken previously. The consent must state the nature of the data, the extent to which it is being shared, and the possible harm that can arise from sharing it. All international collaborations or assistance related to biomedical and health research concerning data collection, sharing of biological samples, and intellectual property must be submitted to the Health Ministry's Screening Committee (HMSC) for approval before initiation. Indian laws and guidelines (DISHA & PDP guidelines) are to be adhered to. An appropriate MoU and/or MTA to safeguard the interests of participants and ensure compliance (addressing issues of confidentiality, sharing of data, and joint publications) must be in place. |
8. | Individual Rights | Users should have control over the data collected from them to develop and design AI technologies for healthcare. Users should be able to access, modify, or remove such data from the AI technology at any point in time. The provision to remove/modify data from the databases must also be ensured if a patient opts out at a later time. Users must be able to exercise the right to be forgotten. If any adverse event or injury occurs due to the use of AI technology, the user/participant has the right to receive appropriate compensation. |
9. | Governing Authority powers | The Ethics Committee (EC) reviews research proposals, progress, and final reports, as well as reports of adverse events, and provides suggestions for minimizing risk to the study. Recommendations regarding appropriate compensation for research-related injury should be made by the EC wherever required. Monitoring visits at study sites should be carried out by the EC as and when needed. Where conflicts arise during the implementation of key ethical requirements, decisions on the trade-off should be evaluated regularly. The sources of both the training and testing data should be properly documented and reviewed by organizational ECs. If there is a difference between the actual purpose of data collection and the objective of the AI technology being developed using this data, this should be documented and reviewed. The population on which the AI technology is intended to be used should be part of the testing and validation data sets. The researcher may apply to the EC for a waiver of consent if the research involves less than minimal risk to participants and the waiver will not jeopardize the participants' rights and welfare (as per the National Ethical Guidelines for Biomedical and Health Research, 2017). ECs should check proposals for data source, quality, safety, anonymization and/or data piracy, data selection biases, participant protection, payment of compensation, the possibility of stigmatization, and other concerns. |
10. | Data Anonymization | AI technology developers should use techniques such as data encryption and data anonymization to protect individuals' privacy. It must be ensured that the data is completely anonymized and kept offline/delinked from the global technology for its final use. |
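As an illustrative sketch only (the guidelines do not prescribe a specific technique; the record fields, salt handling, and truncation length below are assumptions), anonymization before model training typically combines dropping direct identifiers with pseudonymizing quasi-identifiers via a salted one-way hash:

```python
import hashlib
import secrets

# Hypothetical patient record; field names are illustrative assumptions.
record = {
    "patient_name": "A. Sharma",
    "hospital_id": "H-10234",
    "age": 54,
    "diagnosis_code": "E11.9",
}

DIRECT_IDENTIFIERS = {"patient_name"}   # dropped outright
PSEUDONYMIZE = {"hospital_id"}          # replaced with salted hashes

def anonymize(record, salt):
    """Return a copy safe for model training: direct identifiers removed,
    quasi-identifiers replaced with salted SHA-256 digests (irreversible
    without the salt, which must be stored offline/delinked)."""
    out = {}
    for key, value in record.items():
        if key in DIRECT_IDENTIFIERS:
            continue  # drop entirely
        if key in PSEUDONYMIZE:
            digest = hashlib.sha256((salt + str(value)).encode()).hexdigest()
            out[key] = digest[:16]  # shortened token; same input maps to same token
        else:
            out[key] = value
    return out

salt = secrets.token_hex(16)  # kept offline, never shipped with the data set
clean = anonymize(record, salt)
```

Keeping the salt separate from the released data is what makes the pseudonyms non-reversible for downstream users while still allowing record linkage within the study.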
11. | Data Surplus | Data collection should be limited to only what is necessary, with defined time limits for the storage of protected data. Excess data collected contributes to data surplus. It is unethical to repurpose surplus data without proper consent from patients/participants. Storing surplus data for future use may require additional consent, if not taken earlier. |
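The data-minimization and retention rule above can be sketched in code. This is a minimal illustration under stated assumptions: the necessary fields and the one-year retention period are hypothetical choices, not values from the guidelines.

```python
from datetime import datetime, timedelta, timezone

# Illustrative policy values; the guidelines do not prescribe
# specific fields or retention periods.
NECESSARY_FIELDS = {"age", "diagnosis_code", "scan_result"}
RETENTION = timedelta(days=365)  # defined time limit for storage

def minimize(record):
    """Keep only the fields the stated purpose requires; everything
    else is surplus and must not be silently retained or repurposed."""
    kept = {k: v for k, v in record.items() if k in NECESSARY_FIELDS}
    surplus = set(record) - set(kept)
    return kept, surplus

def is_expired(collected_at, now=None):
    """True once the defined storage time limit has passed."""
    now = now or datetime.now(timezone.utc)
    return now - collected_at > RETENTION

record = {"age": 54, "diagnosis_code": "E11.9",
          "scan_result": "normal", "phone": "<not needed>"}
kept, surplus = minimize(record)
```

Surfacing the surplus fields explicitly, rather than discarding them silently, lets the team either delete them or trigger the additional-consent step the guidelines require.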
12. | Risk Minimization | AI technologies are prone to cyber-attacks and can be exploited to gain access to sensitive and private information, thus threatening the security and confidentiality of patients and their data. It must be ensured that the data is completely anonymized and kept offline/delinked from the global technology for its final use. The Ethics Committee (EC) and other stakeholders must ensure that there is a favorable benefit-risk assessment. AI technologies should be built in line with the legal and data protection requirements of the country and with strict adherence to the basic principles of ethics. A robust, explicitly stated mechanism should be in place to continuously monitor the performance, vulnerabilities, and safety standards of the AI technology. AI technologies must adhere to the highest security standards concerning patient data. Special care must be taken to ensure the safety and security of vulnerable populations. External audits for assessing potential risks must be encouraged. |
13. | Penalties | The relevant stakeholders should be made liable to pay compensation to the users in case of any harm or injury arising from the use of AI technologies. |
14. | Use of Biometric Data | AI technologies requiring human biometric data should have additional security measures to safeguard the data. Approval from EC and regulatory bodies should be mandatory for using such data. An accidental leak of such data can have unprecedented consequences. |
15. | Impact Assessment | Impact assessment must be carried out by relevant authorities before deploying AI for widespread use. It should focus on key areas like human rights, privacy, and ethical principles. |
References