The Future of Generative AI in Healthcare

In November of 2022, the world’s understanding of Artificial Intelligence (AI) changed. 

That was when the company OpenAI released ChatGPT, a large language model chatbot based on GPT-3.5, which allowed users to hold a conversation with a bot and steer it toward a desired style and output. The technology began gaining momentum it had never experienced before.

In reality, Artificial Intelligence has been around for decades, and as the world’s understanding of the technology catches up, so does its desire to implement it across all industries.

There is no industry in which this is more sensitive, or more exciting, than healthcare (at least, in this writer’s opinion). In a field where sensitivity, timeliness, and accuracy are key, AI offers a unique set of benefits and challenges. It’s time for all healthcare providers to take note.

Understanding AI in Healthcare

Defining AI

To understand “AI in Healthcare,” we must first understand what the technology actually entails.

AI, or Artificial Intelligence, is a branch of computer science concerned with the ability of computers to take on tasks that usually require human intelligence. The term was coined in 1956 by computer scientist John McCarthy, and the technology has taken many forms over the years.

AI’s advantage lies in its ability to scan large datasets much faster than humans can and provide statistically based responses. Since the 1980s, most of the technology we call Artificial Intelligence has been built on “rules-based” systems: when a specific scenario was encountered, the system had a rule telling it what to do with that information.
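
To make the rules-based idea concrete, here is a minimal sketch in Python. The conditions and thresholds are invented for illustration (not clinical guidance); the point is that every scenario the system can handle must be anticipated by an explicit, hand-written rule.

```python
# A minimal rules-based "expert system" sketch. The rules and thresholds
# below are invented for illustration only, not clinical guidance.

def triage_rule(temperature_f: float, heart_rate: int) -> str:
    """Return an action by walking a fixed list of hand-written rules."""
    if temperature_f >= 103.0:
        return "Escalate: high fever"            # rule 1
    if heart_rate > 120:
        return "Escalate: elevated heart rate"   # rule 2
    if temperature_f >= 100.4:
        return "Monitor: low-grade fever"        # rule 3
    return "No action required"                  # default when no rule fires

print(triage_rule(temperature_f=101.2, heart_rate=88))
# -> Monitor: low-grade fever
```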

AI in Healthcare 

It’s likely you already experience AI as part of your healthcare. It is already used as a diagnostic tool, especially in visually oriented fields like dermatology and oncology, to estimate the likelihood of cancerous conditions too subtle for the human eye to catch. Professionals also already use it to determine prognosis from more individualized inputs, where it acts as a second opinion for providers.

If you haven’t seen these uses, you have likely already been the beneficiary of AI on the business side of healthcare, such as more quickly analyzing drug trial data or in providing reports on patient experience data. 

As healthcare providers look to the future of the profession, there are key areas about which practitioners familiar with AI are particularly excited. In particular, this new technology offers a few key benefits to practitioners: saving time on administrative tasks, providing diagnostic context, and supporting accurate plans of care.

This all sounds great, right? Not so fast. While AI promises to transform (or continue to transform) the way we operate as healthcare providers, it also introduces nuances we must consider as we implement the technology.

Generative AI in Healthcare: What It Is, and Why It’s Different Now

Not all AI is what we call “generative AI”. In fact, much of what we have just discussed does not fall into this category. 

Generative Artificial Intelligence, according to McKinsey & Company, refers to algorithms that can be used to create (or generate) new content, including audio, code, images, text, simulations, and videos.

Generative AI breaks down into a few key levels of algorithmic learning:

First, it relies on the concept of Machine Learning, a technique for training models on data, effectively enabling them to “learn.” This underpins many forms of AI.

Next, more advanced AI uses neural networks, machine learning models built to “think” like the human brain. Deep learning builds on this: a machine learning process that stacks those networks in multiple layers.

What starts to make it feel more human is that Generative AI then uses Large Language Models and Natural Language Processing to understand broader, more colloquial human input. Finally, Generative Pre-trained Transformer (GPT) models, like ChatGPT, are trained to produce the output.
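
To make “layers” a little more tangible, here is a toy sketch in Python using NumPy: one input passes through two layers of weights, the basic building block that deep learning stacks many times over. Real models learn their weights from data; these are random.

```python
import numpy as np

# A toy two-layer neural network forward pass. Real deep learning stacks
# many such layers and learns the weights from data; here they are random.
rng = np.random.default_rng(0)

x = rng.normal(size=(1, 4))        # one input with 4 features
w1 = rng.normal(size=(4, 8))       # layer 1 weights: 4 features -> 8
w2 = rng.normal(size=(8, 2))       # layer 2 weights: 8 -> 2 outputs

hidden = np.maximum(0, x @ w1)     # layer 1: linear step + ReLU activation
output = hidden @ w2               # layer 2: linear step

print(output.shape)                # (1, 2)
```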

While various levels of this technology have been implemented as part of AI in healthcare before, the newest part is the ability of systems to take more casual human input (e.g., conversations, searches), search the datasets they have, and generate a response. As you can imagine, there are a lot of considerations for this technology in healthcare.

Generative AI Uses in Healthcare

Here is the exciting news:

Companies are already thinking about ways to integrate Generative AI into healthcare processes.

A few of the most exciting uses highlight how big healthcare systems are partnering with major AI platforms like Google DeepMind and IBM Watson to expand bandwidth and reduce burnout. A few of these examples include:

HCA Healthcare is piloting a program with Google that helps extract key information from patient-provider conversations and uses it to create clinical notes (which the provider can review before they are saved to the Electronic Health Record).

IBM recently released WatsonX, an update to its rules-based IBM Watson, with the intent of assisting in processes like diagnostic sessions.

While many startups already exist that use more traditional AI methods to analyze images and plans of care, you can imagine most of them are considering how they will integrate the next phases of AI.

AI for Healthcare: Benefits and Applications

Now that we have explored what AI is and who is using it, let’s talk about how it might become relevant to your practice, specifically how it can be used across diagnostics, treatment, and administrative tasks.

Traditional, or rules-based, AI is currently being used …

In diagnostics, to decipher and curate information. Hospitals produce 3.6 billion images every year, and 97% of that data goes unused, according to the World Health Organization. There is simply too much information, some of it too minute to be compared or read through as part of making a diagnosis. What AI can do very well is speed up that process by searching and analyzing the data. Ultimately, this gives the provider more time to focus on interpreting the output and determining a potential plan of care. Companies like Enlitic are built to do just this for images.

In treatment, to help provide care plans. The more that is known in science, the better we can customize plans of care. Artificial Intelligence can be used to develop care plans based on the large amounts of data available. Cleveland Clinic is already partnering with IBM Watson to use that data to create optimal care plans. The technology is also being applied to the new field of precision medicine, taking genomic, environmental, and lifestyle factors into account to design custom treatment plans. Artificial Intelligence is also being used to develop better early-detection technology, and it has been reported to outperform systems like MEWS (Modified Early Warning Score) at determining, based on the factors input, when a patient might be most at risk of a life-threatening event.

Generative AI is currently being used … 

In diagnostics, to triage and prioritize health concerns. In the age of telehealth, patients look for more efficient healthcare options, which are not always easy to scale. That’s where Generative AI can help. Especially at high-volume or low-staffing times (e.g., overnight), AI can be a good tool for answering non-life-threatening, lower-stakes questions (e.g., “What should I do for a sprained ankle?”), or it can gather intake information that a physician reviews before supporting the patient. Companies like Ada, Buoy Health, and CareAngel are already piloting these kinds of chatbot solutions for patients.
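
As a rough sketch of how such a triage flow can be structured, the Python below screens for hard-coded red-flag phrases before anything reaches a generative model. `RED_FLAGS` and `call_model()` are hypothetical placeholders, not any vendor’s actual product or API.

```python
# Sketch of an intake-triage flow: hard-coded red-flag screening runs
# first, and only lower-stakes questions reach the generative model.
# RED_FLAGS and call_model() are hypothetical placeholders.

RED_FLAGS = ("chest pain", "trouble breathing", "stroke", "suicidal")

def call_model(question: str) -> str:
    # Stand-in for a HIPAA-compliant generative AI service.
    return f"[draft answer for provider review: {question!r}]"

def triage(question: str) -> str:
    text = question.lower()
    if any(flag in text for flag in RED_FLAGS):
        return "This may be an emergency. Please call 911 or your provider now."
    return call_model(question)

print(triage("What should I do for a sprained ankle?"))
```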

Both traditional and generative AI are currently being used …

In administrative tasks, to free up practitioners to spend time on patient care. The average physician spends 27% of their time on face time with patients and 49% on administrative desk work, according to a study by the American Medical Association and Dartmouth-Hitchcock. Tasks like charting and documenting are necessary but painful uses of time, and the lack of engagement they cause may also hurt a patient’s long-term treatment.

Generative AI powers tools like the Ambient Clinical Documentation being developed by 3M, which uses AI to listen to a patient-provider conversation, transcribe it, pull out key areas, and write clinical notes.
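
The overall shape of such a pipeline might look like the sketch below: transcribe, draft a note, and hold it for provider sign-off before anything reaches the EHR. Every function here is a hypothetical stand-in, not 3M’s actual product or API.

```python
# Sketch of an ambient-documentation pipeline. All functions are
# hypothetical stand-ins for vendor services, not a real product's API.

def transcribe(audio_path: str) -> str:
    # Stand-in for a speech-to-text service.
    return "Patient reports two weeks of knee pain after running..."

def draft_clinical_note(transcript: str) -> str:
    # Stand-in for a generative model prompted to summarize the visit.
    return f"[draft note summarizing: {transcript[:40]}...]"

def save_to_ehr(note: str, provider_approved: bool) -> None:
    # The provider reviews every draft before it is saved.
    if not provider_approved:
        raise ValueError("Draft notes require provider review before saving.")
    print("Saved to EHR:", note)

note = draft_clinical_note(transcribe("visit_audio.wav"))
save_to_ehr(note, provider_approved=True)
```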

Traditional AI also supports more automated tasks like updating patient records and billing. As a bonus, many practitioners are also using this kind of technology to understand the “last mile” of patient care, analyzing patient surveys with AI to learn how they can support patients better.

Administrative support is currently one of the most widely applicable areas for many practitioners, as it allows them to do what they were trained to do and what will make a difference: spending time with the patient.

Safety of AI in Healthcare

These exciting benefits and applications are not without safety risks if the technology is not implemented correctly.

As healthcare providers, we should not take lightly the idea of putting care into the hands of any kind of “assistant,” and it is something to consider seriously in the context of integrating technology like AI.

Here are a few safety concerns to consider when thinking about current and future uses of AI:

Accuracy of information. We have come a long way, but we still have a long way to go. AI is built on statistical probability and prediction models. It is not a sentient being, and it does not always “predict” correctly. Like your average Google search, the output is only as good as the query put in, and anything that comes out needs an informed fact-check.

A study published in the Journal of Medical Internet Research found that, across 36 clinical vignettes, the AI was correct 60.3% of the time on the initial differential diagnosis and 76.9% of the time on the final diagnosis. In other words, AI output still needs an informed clinical professional to vet and challenge the results it generates.

This matters because our patients will not be able to tell the difference between an accurate and an inaccurate diagnosis, and they will not know when to feed additional context into the tool. A Pew Research Center survey found that 60% of Americans say they would not be comfortable relying on AI in their healthcare. AI cannot replace the work we do; it can only be a supportive diagnostic or organizational tool.

What you should do about it: Use AI selectively and carefully in your work. It is a good way to summarize data, brainstorm, and bounce ideas around. It can scale your bandwidth, but it should not replicate you.

Informed Consent. Getting a patient to be completely open and communicative can already be tricky, even before the idea of the conversation being recorded or stored enters the room. Depending on how you use AI tools, make sure the patient is comfortable with the technology you are using while you are in the room with them.

This is important because patient conversations can be very sensitive and vulnerable at times, and we need to do everything possible to ensure our patients are comfortable giving us the right information to guide their care (this can be hard enough!). It is also important because many states require two-party consent when recording audio, so patients must know when such a tool is in use.

What you should do about it: If and when you choose to use this technology in your work, ensure you can properly explain to your patients how the technology works, what information will be used, and how. Be comfortable adapting if they do not give consent for the technology.

Data Privacy. As providers, we must ensure on behalf of our patients that we store and use any sensitive data correctly (and legally!). Because of the sensitivity of healthcare data, we have to be deliberate about the tools we use and confirm that they support compliant data handling. A UserTesting study of 500 US users found that those who used one of the top five healthcare chatbots expressed concern about revealing confidential information, along with concerns about complex health conditions and usability.

This is extremely important because it’s the law, and because being clear about how data is used builds trust with patients.

What you should do about it: When integrating any AI or data-driven features, use vendors that are specifically certified for this kind of data handling; do NOT put patient data into open, public AI tools.

Now that we have discussed data privacy, let’s examine the specific safety concerns around HIPAA compliance.

HIPAA Compliance in AI for Healthcare

Any time we talk about data in healthcare, we must make sure HIPAA is part of the conversation. HIPAA, the Health Insurance Portability and Accountability Act of 1996, is a federal law that mandates protection of sensitive patient information. It refers specifically to PHI, or Protected Health Information, which is, according to the University of California, Berkeley’s Human Research Protection Program, any information obtained in the course of treatment that can be used to identify an individual.

There are 18 PHI identifiers that make health information identifiable. They range from the obvious, like names and Social Security numbers, to the less obvious, like medical record numbers, IP addresses, and vehicle identification numbers.
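
To illustrate the idea, here is a toy Python scrub of a few of those identifiers using pattern matching. Real de-identification takes far more than regular expressions (names and dates, for instance, need context-aware tools), so treat this strictly as a sketch of the concept.

```python
import re

# Toy redaction of a few PHI identifiers via regular expressions.
# Illustration only: real de-identification needs much more than this.
PATTERNS = {
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scrub(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(scrub("Reach Jane at 555-867-5309 or jane@example.com, SSN 123-45-6789."))
# -> Reach Jane at [PHONE REDACTED] or [EMAIL REDACTED], SSN [SSN REDACTED].
```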

HIPAA designates how PHI can be used among medical entities and sets a standard expectation that individuals can control how that information is otherwise used. Healthcare providers, health plans, healthcare clearinghouses, and business associates of healthcare providers are all subject to HIPAA.

When considering using AI in your practice, you must consider the following in relation to HIPAA compliance:

  1. Is the partner providing the AI solution a Business Associate or an entity also covered by HIPAA?
  2. Is data being input in such a way (e.g., a chatbot taking intake information) that PHI is incorporated into the conversation?

When we think about using Generative AI, we must consider the data provider and vendor behind the technology. ChatGPT, for example, is run by the company OpenAI, and data entered into its public version may be used to help train future models. Openly accessible models like this are great for what they can learn and generate, but they are not great for professions or companies that need to protect parts of their data.

Medical diagnostic tools, by contrast, run on private sets of data. These are the tools you should consider using in your work; you should not be using ChatGPT for anything related to your patient care.

How to Stay HIPAA Compliant with AI

Of course, we’ll add the disclaimer that, as a legal matter, you should consult the proper professionals to ensure you are in compliance. That said, as you anticipate integrating AI into your practice, there are a few things you will want to consider in order to stay HIPAA compliant.

  1. Don’t use open, public tools like ChatGPT in your patient care
  2. Work with vendors who are HIPAA compliant in how they handle data
  3. Determine with your data partner and vendor what information is collected, whether it counts as PHI, and whether the identifiers can be removed

It’s worth noting that if this is making your head spin, you are probably not alone. Most likely you will not be building your own data models for this kind of work and will instead rely on vendors that already have solutions. Many vendors are currently racing to develop solutions that will work better for you, so it’s okay to wait a bit and see what the market produces. If you do choose to integrate Generative AI into your care, make sure you work with a vendor that holds HIPAA compliance certifications.

Ensuring Patient Data Safety

As you consider setting up or extending any technology within your organization, it is important to make sure that the data handling is not only legally compliant but also secure. When integrating your systems, make sure you’re also taking the following precautions (a brief code sketch of all three follows the list):

  1. Encrypting data being sent - Google defines data encryption, most simply, as “scrambling data into a secret code that can only be unlocked with a unique digital key.” Just as we protect patient data by not putting it into “open” or “public” AI models, we also don’t want it sent over open or public channels, or easy to read in transit. This means working on a private network or VPN and working with vendors who properly encrypt any medical data being sent.
  2. Access control - this is the process of setting up who has access to individual applications, settings, or segments of tools. Just as you put a password on your phone so that only you can access it, you must be thoughtful about who can access specific tools and data, even within your organization. It’s better to err on the side of caution: start with narrow access and widen it as needed, rather than allowing broad access from the start.
  3. Data handling procedures - data handling, according to the Department of Health and Human Services, “is the process of ensuring that research data is stored, archived or disposed of in a safe and secure manner during and after the conclusion of a research project.” Essentially, keep only the data you need and have a proper way of disposing of it completely when you’re done with it. This is likely not something you will set up yourself, but you should make sure any data vendor or IT partner has methods of doing this properly.
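
For a sense of what these three precautions look like in code, here is a minimal Python sketch. The encryption uses the real, widely used `cryptography` package; the role names, permissions, and retention window are invented for illustration, and none of this is a substitute for a proper compliance implementation.

```python
from datetime import datetime, timedelta
from cryptography.fernet import Fernet  # pip install cryptography

# 1. Encryption: scramble data so it is unreadable without the key.
key = Fernet.generate_key()             # in practice, held by a key-management service
token = Fernet(key).encrypt(b"patient note: follow up in 2 weeks")
print(Fernet(key).decrypt(token))       # only the key holder can read it

# 2. Access control: deny by default, grant per role. Roles are invented.
PERMISSIONS = {"clinician": {"read_chart", "write_chart"}, "billing": {"read_invoice"}}

def can_access(role: str, action: str) -> bool:
    return action in PERMISSIONS.get(role, set())

print(can_access("billing", "read_chart"))  # False

# 3. Data handling: keep only records inside an invented retention window.
def purge_expired(records: list[dict], keep_days: int = 365) -> list[dict]:
    cutoff = datetime.now() - timedelta(days=keep_days)
    return [r for r in records if r["created"] >= cutoff]

old = [{"created": datetime.now() - timedelta(days=400), "note": "old"}]
print(purge_expired(old))               # [] -> the 400-day-old record is dropped
```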

What about AI for non-clinical use?

As we consider the range of uses for AI in healthcare, it is only fair to consider how you might use it in the business side of a practice.

While Generative AI remains sensitive and nuanced in the clinical setting, providers, especially those independently marketing their practices, have a few more options for using technology like ChatGPT. A few of these examples include:

  1. Generating content ideas - if you’re looking to fill a blog, social media feed, or email newsletter, this technology can help you brainstorm concepts to use.
  2. Writing support - if you’re looking for the right way to say or phrase something, you can work with ChatGPT, giving it different prompts to experiment with how it would write or say certain complicated phrases.

The Future of AI in Healthcare

AI has already advanced by leaps and bounds, especially in the last twelve months. With renewed excitement about how AI can transform industries and let professionals focus on key tasks, you can expect to see a lot of change, and soon. Here are a few things we most expect to emerge in healthcare AI.

Administrative tasks will become more automated (and finally integrate with EHRs) - this work already exists in some forms, but it will become a more familiar way of life. Companies will continue to look for opportunities to free up clinicians’ time to talk with patients by automating notes and other administrative tasks. Providers will use AI to automate tasks based on certain answers, or to write up outcomes.

Chatbots will become a regular part of intake flow - a great use for chatbots is triaging cases. Much as with customer support at our banks, a chatbot can help determine where it is acceptable to give advice on the spot and which cases need the limited bandwidth of the clinicians available. Companies are already popping up to provide this early intake technology, and we can expect it to continue to improve.

AI will be used as part of the diagnostic flow - presently this is applied primarily in hospital settings and in mostly rules-based contexts. According to a 2023 survey by Futurescan, more than 48% of hospital CEOs and strategy leaders are confident the infrastructure will be in place to use AI in decision-making. As this technology becomes more a part of medical practice, its use will become more prevalent throughout the healthcare industry.

New players will get into the game - presently, much of the AI technology has been focused on either a hospital/research setting or a pure front-office business setting. As the current economy is ripe for new business opportunities, we can expect to see more startups using traditional and Generative AI methods as part of practitioners’ workflows. We can also expect existing technology to adapt to healthcare, using large datasets to improve the patient experience. This will also include medical schools: as AI becomes an important tool, clinicians will grow more adept at using it as part of their diagnostic process. Schools like the University of Texas at San Antonio (UTSA) are introducing dual-degree programs in Medicine and Artificial Intelligence to help students become leaders in this field.

As AI continues to grow and develop in the healthcare industry, it’s important to understand how the technology works now, and also to proceed with caution and care as we look at implementing it. While some technologies are already in early stages of use, many will become much more useful and user-friendly over time.

As exciting as this technology is, and as many possibilities as it will provide, we must make sure we proceed with deference to the legal and safety standards for data that we apply as clinicians. We must use proper vendors (not just jump into the hype) and entrust our data only to vendors who will use it correctly.

This technology is already moving fast: there has been more excitement and activity around this area in the last year than in the seven years before it (since OpenAI was founded). More technology is bound to come soon, and swiftly, and it will be up to us to decide when it makes the most sense to use it. Stay tuned as we follow the newest developments in tech and what practitioners need to know.

Practice Better is the complete practice management platform for nutritionists, dietitians, and wellness professionals. Streamline your practice and begin your 14-day free trial today.

Start for free