Earlier this year, OpenAI released ChatGPT Health, offering digitally savvy consumers a tool to aggregate health data scattered across disparate sites and portals. Yet chronic disease burden and costs in the US are disproportionately concentrated in underserved communities, where people are less likely to use or benefit from this innovation. Health technology isn’t designed and marketed with these communities in mind, so who is this for?
In a recent post, Sergei Polevikov, tech expert and founder of AI Health Uncut, describes uploading his health records to the new OpenAI health platform and offers a cautionary tale: the analysis hallucinated parts of his health history, and he encountered administrative hurdles along the way. Polevikov says, “Automation helps me with efficiency and saves me time, but I don’t need AI to explain what is happening to me healthwise. The distribution of [these tools] is one-sided and people like me need them least.”
If someone like Polevikov encounters navigation roadblocks with AI-driven tools, what does this signal for those with low digital literacy and whose health and tech concerns are not included in the design and marketing of these tools?
Tech-Ignored, Not Tech-Averse
My recent street interviews about AI in Mobile, Alabama offer context for the disconnect between the surge in AI tools and the support needs of those with low digital literacy. Leevonis Fisher, a community leader and founder of the Bay Area Women Coalition, illustrated how far behind we have left people. When I asked what she thought about AI, she said, “I think it’s all fake.” I pivoted to ask about her smartphone use. Fisher has an iPhone and uses Siri, Alexa and voice-to-text almost daily. Despite how entrenched these tools are in her routine, she had no idea they are AI-driven. To her, AI was a buzzword for an untrustworthy computer robot that creates fake videos for social media.
This is concerning. While engineers and tech gurus continue to build tools for the ease and efficiency of those who need them least, those who could benefit most are omitted from design conversations and, like Fisher, often don’t even know they already use AI-driven solutions every day. These communities are also the least likely to own wearables, purchase direct-to-consumer diagnostics or log in to their electronic health records consistently. Unless we close these information and access gaps, ongoing AI and tech innovation will widen them. Furthermore, innovation that misses underserved, high-disease-burden communities is a missed market opportunity. The full economic potential of AI in healthcare won’t be realized if the tools are used only by the healthiest and most tech-literate.
Are we ok with that?
Fisher likely represents millions of people across the country we might label tech-averse. But given her daily use of technology, she isn’t. She is tech-ignored. She doesn’t trust AI because no one has invited her to understand its utility or its function in her life beyond Siri and Alexa. If we don’t understand her habits, constraints and specific needs, the most advanced AI in the world will only widen the gaps in knowledge and use.
Shaping AI By And For People
We are not powerless to address this. Organizations and efforts are emerging to shape AI policy and access. Last fall, the MacArthur Foundation, Omidyar Network, and eight other major philanthropies launched Humanity AI, a five-year, $500 million initiative to ensure that AI is shaped by and for people. According to Michele Jawando, president of Omidyar Network, “Tech has incredible potential but must be steered by humans, not the other way around. The future will not be written by algorithms, it will be written by people as a collective force.”
We should grapple with these questions and work toward consensus to build tech for everybody, but not before raising awareness and seeking context, experience and insight from the people we keep leaving behind.
