Artificial Intelligence (AI) and Research

As an emerging technology, Artificial Intelligence (AI) holds significant potential to support investigators in their research and in patient care. There are many ways that AI, particularly Generative AI (GenAI), may be utilized in research, yet it's important to balance leveraging such tools with maintaining research standards of accuracy, validity, originality, and reproducibility.

To ensure smooth integration of new AI tools, please follow these steps:

  1. AI Subcommittee Review: Submit an AI Subcommittee Review form for project review and approval by the AI Subcommittee at Ann & Robert H. Lurie Children's Hospital.
  2. New Research Technology Request: Concurrently, complete a New Research Technology Request to initiate a review and approval of the AI tool by Research Compliance, Quant Sci, Information Management and Legal Contract Services.

While Lurie Children’s supports and encourages exploration of AI and Machine Learning (ML) capabilities, this guidance provides compliance reminders and guardrails to ensure that we continue to maintain research integrity, protect patients’ privacy and deliver patient care following validated standards.

  • Patient Privacy Protection: It is not permissible under HIPAA or Lurie Children’s policy to share patient or research participant information in connection with public AI/ML services, such as ChatGPT. This is because, as currently configured, such public services can use and share any data without regard to HIPAA restrictions and other protections. Therefore, individual patient data and patient data sets (even if deidentified) may not be exposed to AI/ML services.
  • Confidential and Sensitive Data: Investigators must understand the potential risks associated with inputting sensitive, private, confidential, or proprietary data into AI tools; doing so may violate legal or contractual requirements, or expectations of privacy. GenAI tools cannot be assumed to be private or secure, as they often involve sending data to a third party, and such information can eventually become public.
  • Human Verification: Human review and fact-checking are critical whenever using AI. Investigators should verify the accuracy and validity of GenAI outputs; the responsibility for research accuracy remains with the researchers. Additionally, it's essential to check for unintentional plagiarism. GenAI can produce verbatim copies of existing work or, more subtly, introduce ideas and results from other sources with incorrect or missing citations.
  • Disclosure Requirements: Be sure to keep documentation and disclose GenAI use in all aspects of the research process, in accordance with the principles of reproducibility, research transparency, authorship, and inventorship. Authors and investigators bear the ultimate responsibility for their work, including assurance of accuracy. As a best practice, authors should be transparent and fully disclose their use of generative AI tools.
  • Responsible Usage: When utilizing AI tools, it's crucial to exercise individual responsibility by being aware of their risks and limitations. AI systems can exhibit biases based on the data they are trained on, which can lead to unintended consequences. Investigators should critically evaluate the outputs of AI tools and understand their blind spots and potential biases, ensuring that use aligns with ethical standards and best practices.

AI Use and Intellectual Property

  • Please review the guidance on AI use as it pertains to navigating the complexities of intellectual property.
  • This guidance offers insight into areas such as patents, copyright, trademarks and trade secrets. For more on intellectual property development at the Medical Center, please visit Innovate2Impact.

To provide a foundational understanding of Generative AI, its risks and opportunities, as well as the Lurie Children’s Gen AI Policy and Procedures, please visit the Exploring Lurie Children’s Generative AI Policy and Procedures Workday course.

  • Artificial Intelligence (AI) is a field of computer science dedicated to developing algorithms and machines capable of solving complex problems by mimicking or modeling aspects of human thinking. As a technology, it refers to the actual algorithms, software, or machines that can use data to solve problems in ways that resemble human cognitive processes.
  • Machine Learning is a subset of AI that focuses on predictive performance, where variables and their relationships are not pre-determined but are learned as features from the data.
  • Generative AI (GenAI) is a type of AI that can generate new content based on learned patterns. Examples include generating text, images, or music.
  • Deep Learning is a subset of machine learning that can handle vast, unstructured datasets and perform complex tasks like speech recognition and natural language processing.
  • Hallucinations are factually incorrect or nonsensical outputs from AI systems presented as plausible or accurate.
  • For a more comprehensive list of definitions, please visit the Data Literacy Library’s Glossary on SharePoint.