    Study finds asking AI for advice could be making you a worse person

By wildgreenquest@gmail.com | March 31, 2026 | 3 Min Read
    Whether we like it or not, artificial intelligence has infiltrated the workplace, and employees are under pressure to use it. However, according to a new study, you may want to skip asking AI to help you manage matters of the heart.

The study, titled “Sycophantic AI decreases prosocial intentions and promotes dependence,” was recently published in the journal Science. It makes the case that using chatbots for personal advice and for navigating emotional situations can be harmful because the systems are designed to tell people what they want to hear: rather than helping users take accountability for harm and apologize, chatbots may reinforce troubling behavior.

A recent Cognitive FX poll found that about 38% of Americans report using AI chatbots weekly for emotional support, and a recent Pew Research study found that 12% of teens use AI for advice. According to a KFF poll, a lack of insurance also drives usage: uninsured adults are more likely than insured adults to turn to chatbots (30% vs. 14%).

    For the latest study, researchers looked at how prevalent sycophancy—defined as “the tendency of AI-based large language models to excessively agree with, flatter, or validate users”—is across 11 leading AI models, including OpenAI’s GPT-4o, Anthropic’s Claude, and Google’s Gemini. 

The researchers conducted three experiments with 2,405 participants. In the first, they fed the models a series of requests for advice, posts from Reddit’s “Am I the Asshole” (AITA) forum, and descriptions of wanting to harm other people or oneself, then compared the AI responses with human judgments. Overall, the models were 49% more likely than a human to endorse a user’s actions, even when those actions were harmful or illegal.

In the second experiment, participants imagined themselves in a scenario described by an AITA post in which the poster’s actions had been judged wrong. They then read either a human-written reply saying they were in the wrong or an AI-written reply saying they were in the right. In the third experiment, participants discussed a real conflict from their own lives with either an AI or a human.

Worryingly, participants both trusted and preferred the responses of sycophantic AIs that affirmed their actions. They also became more convinced that their original actions were correct: rather than being challenged by the chatbot to think differently about the situation, they had their existing beliefs reinforced, and the study found they were less likely to apologize after the conversation.

    “In our human experiments, even a single interaction with sycophantic AI reduced participants’ willingness to take responsibility and repair interpersonal conflicts, while increasing their own conviction that they were right,” the study explained. 

While taking advice from AI isn’t new, the study shows just how harmful it can be. Much as social media algorithms drive engagement by enraging users, sycophantic AI chips away at our ability to apologize and take accountability for hurting someone. As the study’s authors noted, “the very feature that causes harm also drives engagement.”


