Close Menu

    Subscribe to Updates

    Get the latest creative news from FooBar about art, design and business.

    What's Hot

    Mark Zuckerberg’s Meta Connect 2026 playlist has the vibe of a cringey college party

    May 13, 2026

    This Unexpected Marketing Move Brought Him 1M Subscribers

    May 12, 2026

    4 Kentucky Adventures That Will Surprise You

    May 12, 2026
    Facebook X (Twitter) Instagram
    Facebook X (Twitter) Instagram
    Live Wild Feel Well
    Subscribe
    • Home
    • Green Brands
    • Wild Living
    • Green Fitness
    • Brand Spotlights
    • About Us
    Live Wild Feel Well
    Home»Brand Spotlights»This Microsoft security team stress-tests AI for its worst-case scenarios
    Brand Spotlights

    This Microsoft security team stress-tests AI for its worst-case scenarios

    wildgreenquest@gmail.comBy wildgreenquest@gmail.comMarch 25, 2026003 Mins Read
    Share Facebook Twitter Pinterest Copy Link LinkedIn Tumblr Email Telegram WhatsApp
    Follow Us
    Google News Flipboard
    Share
    Facebook Twitter LinkedIn Pinterest Email Copy Link



    As soon as new AI products are released, security researchers and pranksters begin probing them for weaknesses, trying to push systems to violate their own safety precautions and coax them into producing anything from offensive content to instructions for building weapons.

    After all, AI risks are not just theoretical. In recent months, various AI companies have faced criticism for their software allegedly contributing to mental illness and suicide, nonconsensual fake nude images of real people, and aiding hackers in cybercrime. At the same time, techniques for bypassing safeguards also continue to evolve, with recent methods including everything from malicious prompts disguised with poetry to surreptitiously planting ideas in AI assistant memories via innocuous-looking online tools. 

    But long before new models reach the public, internal security teams are already stress-testing them. At Microsoft, that responsibility largely falls to the company’s AI Red Team, a group that since 2018 has worked with product teams and the broader AI community to pressure-test models and applications before bad actors can.

    In cybersecurity parlance, a red team focuses on simulating attacks against a system, while a blue team focuses on defending it. Microsoft’s AI Red Team is no exception, exploring a wide range of safety and security concerns—from loss-of-control situations where AI evades human oversight to issues around chemical, biological, and nuclear threats—across an assortment of AI software. 

    “We see a really, really diverse set of tech,” says Tori Westerhoff, principal AI security researcher on the Microsoft AI Red Team. “Part of the kind of magic of the team is that we can see anything from a product feature to a system to a copilot to a frontier model, and we get to see how tech is integrated across all of those, and how AI is growing and evolving.” 

    In one case, says Pete Bryan, principal AI security research lead on the Red Team, members worked with other Microsoft researchers to test whether AI could be manipulated into assisting with cyberattacks, including generating or refining malware. They experimented with framing questions in benign ways, such as describing a student project or security research scenario, then pushing systems to produce increasingly detailed outputs.

    The effort went beyond simple prompt testing. Researchers evaluated whether the AI could generate code that actually compiled and ran, and whether certain programming languages increased the likelihood of harmful outputs. In the worst case, Bryan says, the systems produced code comparable to what a low- to mid-level hacker might already create, but the team still refined detection systems to better flag such behavior.



    Source link

    Follow on Google News Follow on Flipboard
    Share. Facebook Twitter Pinterest LinkedIn Tumblr Email Copy Link
    wildgreenquest@gmail.com
    • Website

    Related Posts

    Mark Zuckerberg’s Meta Connect 2026 playlist has the vibe of a cringey college party

    May 13, 2026

    Hints & Clues For Wednesday, May 13 (You’ve Got . . .)

    May 12, 2026

    Crumbl’s founders just made a surprise announcement that could change the chain forever

    May 12, 2026
    Add A Comment
    Leave A Reply Cancel Reply

    Top Posts

    Study finds asking AI for advice could be making you a worse person

    March 31, 202612 Views

    Workers are using AI to learn on the job, even though 65% worry about accuracy

    April 21, 20266 Views

    Deadly Ice Prompts a Critical Delay on Mount Everest

    April 21, 20264 Views
    Latest Reviews
    8.5

    Pico 4 Review: Should You Actually Buy One Instead Of Quest 2?

    wildgreenquest@gmail.comJanuary 15, 2021
    8.1

    A Review of the Venus Optics Argus 18mm f/0.95 MFT APO Lens

    wildgreenquest@gmail.comJanuary 15, 2021
    8.3

    DJI Avata Review: Immersive FPV Flying For Drone Enthusiasts

    wildgreenquest@gmail.comJanuary 15, 2021
    Stay In Touch
    • Facebook
    • YouTube
    • TikTok
    • WhatsApp
    • Twitter
    • Instagram

    Subscribe to Updates

    Get the latest tech news from FooBar about tech, design and biz.

    Demo
    Facebook X (Twitter) Instagram Pinterest
    • About Us
    • Contact Us
    • Privacy Policy
    • Terms & Conditions
    • Disclaimer
    © 2026 ThemeSphere. Designed by ThemeSphere.

    Type above and press Enter to search. Press Esc to cancel.