{"id":10925,"date":"2026-04-16T16:56:33","date_gmt":"2026-04-16T16:56:33","guid":{"rendered":"https:\/\/wildgreenquest.com\/?p=10925"},"modified":"2026-04-16T16:56:33","modified_gmt":"2026-04-16T16:56:33","slug":"ai-anxiety-is-turning-volatile","status":"publish","type":"post","link":"https:\/\/wildgreenquest.com\/?p=10925","title":{"rendered":"AI anxiety is turning volatile"},"content":{"rendered":"<p><br \/>\n<br \/><\/p>\n<p id=\"h-\"><em>Welcome to<\/em>\u00a0AI\u00a0Decoded<em>,\u00a0<\/em>Fast Company<em>\u2019s weekly newsletter that breaks down the most important news in the world of\u00a0AI.\u00a0You can sign<\/em>\u00a0<em>up to receive this newsletter every week via email\u00a0<\/em><em>here<\/em><em>.<\/em>  <\/p>\n<h2 class=\"wp-block-heading\" id=\"h-is-the-altman-firebomb-just-the-start-of-extreme-doomer-violence\">Is the Altman firebomb just the start of extreme doomer violence?<\/h2>\n<p>On April 10, someone threw a molotov cocktail at OpenAI CEO Sam Altman\u2019s house in San Francisco. The alleged assailant, 20-year-old Daniel Moreno-Gama, didn\u2019t stop there. He then went to OpenAI\u2019s headquarters and told the security guards there that he intended to burn down the building and everyone inside. Two days later, someone allegedly fired two shots from a car driving past Altman\u2019s house, but OpenAI said that event was unrelated to the firebombing and didn\u2019t target Altman.&nbsp;<\/p>\n<p>The firebombing is an extreme reaction to the rapid evolution of AI systems over the past few years, and to fears that such systems may not act in humans\u2019 best interests. Moreno-Gama said as much in the \u201cmanifesto\u201d document police found in his possession. He discusses the \u201cpurported risk AI poses to humanity\u201d and \u201cour impending extinction.\u201d He includes a personal letter to Altman, in which he urges the CEO to change. 
He also advocates for killing CEOs of other AI companies and their investors.&nbsp;<\/p>\n<p>Altman has <a rel=\"nofollow\" href=\"https:\/\/www.newyorker.com\/magazine\/2026\/04\/13\/sam-altman-may-control-our-future-can-he-be-trusted\">spoken many times<\/a> about the dangers of AI systems while also pushing OpenAI to develop and release increasingly intelligent models. Some have suggested that when Altman talks about the dangers of AI, it\u2019s really a sort of humble-brag about OpenAI\u2019s models (\u201cso intelligent they\u2019re dangerous\u201d).<\/p>\n<p>It\u2019s true that AI labs continue to make big strides in intelligence with every new model. AI coding tools are speeding up development, so new releases, and jumps in capability, are happening more frequently. Meanwhile, the public has grown increasingly concerned, even angsty, about the risks of AI systems, which can range from job losses to AI-assisted cybercrime to human extinction. AI\u2019s transformation of business and life is just getting underway. Models will grow scarily smart. With AI labs under pressure to deliver returns for their investors, there\u2019s almost no chance of hitting \u201cpause.\u201d There\u2019s little reason to think incidents like the Altman firebombing won\u2019t happen again.&nbsp;<\/p>\n<p>Sarah Federman, a professor of conflict resolution at the University of San Diego, says that people often resort to violence when they feel powerless to speak out effectively against a perceived wrong. \u201cWe\u2019re starting to see the breaking point,\u201d Federman says. \u201cThere is all of this fear and nowhere for it to go.\u201d She also believes that as AI labs race to release the best model, concerns about ethics have been pushed aside.<\/p>\n<p>She\u2019s got a point. AI companies have spent significant time engaging with lawmakers, explaining how their systems work and why regulating model development can be counterproductive. 
Many in Washington, D.C., were charmed by Altman, whom they found forthright, earnest, and technically proficient. But these companies spend far less time speaking directly to the public. They don\u2019t hold town halls or host AI ethics debates on Fox News or CNN. They\u2019re more likely to start \u201cinstitutes\u201d to study the future effects of AI on society.<\/p>\n<p>And the issue of AI alignment may, by its nature, push people like Moreno-Gama toward extreme behavior. There\u2019s now plenty of AI-doom content online to send some people down a very deep rabbit hole where they lose sight of the myriad factors that will determine how humans live with superhuman AI. They may see only the \u201cif you build it, we will die\u201d narrative, then feel desperate to act. They may even be helped along by the mildly sycophantic chatbot of their choice.<\/p>\n<h2 class=\"wp-block-heading\" id=\"h-openai-releases-security-focused-gpt-5-4-cyber-model-to-compete-with-anthropic-s-mythos\">OpenAI releases security-focused GPT-5.4-Cyber model to compete with Anthropic\u2019s Mythos<\/h2>\n<p>A week after Anthropic announced its controversial new cybersecurity-focused Claude Mythos model, OpenAI has released a similarly focused model called GPT-5.4-Cyber. The company says \u201cCyber\u201d is a specialized version of its latest general AI model, GPT-5.4, designed to help cybersecurity professionals detect and analyze software vulnerabilities.<\/p>\n<p>OpenAI says <a rel=\"nofollow\" href=\"https:\/\/openai.com\/index\/scaling-trusted-access-for-cyber-defense\/\">GPT-5.4-Cyber<\/a> is trained for defensive use cases, such as analyzing and reverse-engineering potential cyberthreats.<\/p>\n<p>Of course, an AI tool that can find and reverse-engineer threats can also be used offensively by bad actors to find vulnerabilities in target systems and create exploits. 
So OpenAI says access to GPT-5.4-Cyber will initially be limited to vetted organizations, researchers, and security vendors.<\/p>\n<p>Anthropic did something similar with its Mythos model, granting access to a group of well-known cybersecurity and infrastructure companies that will use it to find and patch vulnerabilities in widely used software. This, the thinking goes, will give defensive cybersecurity efforts a head start against hackers who will get access to Mythos-level models eventually. Anthropic has no immediate plans to release its Mythos model.&nbsp;<\/p>\n<p>OpenAI said the rollout reflects a shift toward broader but controlled deployment of powerful AI systems, emphasizing collaboration with security professionals while attempting to limit potential misuse.<\/p>\n<h2 class=\"wp-block-heading\" id=\"h-xai-is-again-under-fire-for-sexualized-chatbot-for-kids-nbsp\">xAI is again under fire for \u201csexualized\u201d chatbot for kids&nbsp;<\/h2>\n<p>xAI\u2019s Grok chatbot continues to generate sexual deepfake imagery, a <a rel=\"nofollow\" href=\"https:\/\/www.nbcnews.com\/tech\/tech-news\/musks-ai-chatbot-grok-xai-making-sexual-deepfakes-imagine-rcna265855\">recent NBC News investigation<\/a> found, prompting calls for Elon Musk\u2019s AI company to change course. xAI had earlier promised to restrict such content. Separately, the National Center on Sexual Exploitation (NCOSE) found that Grok\u2019s child-focused chatbot, \u201cGood Rudi,\u201d can engage in sexually explicit conversations. NCOSE is calling for xAI to restrict access to the chatbot.<\/p>\n<p>NBC News says it found dozens of AI-generated sexual images and videos depicting real people posted on Musk\u2019s X (formerly Twitter) social media app over the past month. NBC says the images show women whose likenesses were edited by the AI chatbot to put them in more revealing clothing, such as towels, sports bras, skintight Spider-Woman outfits, or bunny costumes. 
Many of the women were pop stars or actors.&nbsp;<\/p>\n<p>NCOSE researchers found that Grok\u2019s Good Rudi chatbot can tell sexually explicit stories. \u201cAs soon as I started a conversation with Rudi, it began the conversation by wanting to share a fun childish story,\u201d one researcher said. \u201cAfter some prompting, I eventually got the companion to bypass all safety programming.\u201d The chatbot then told a sexually explicit story about two young adults that contained graphic descriptions of sexual encounters, including the characters \u201cgetting into sexual positions, and sexual penetration.\u201d<\/p>\n<h2 class=\"wp-block-heading\" id=\"h-more-ai-coverage-from-fast-company-nbsp\">More AI coverage from <em>Fast Company:<\/em>&nbsp;<\/h2>\n<ul class=\"wp-block-list\">\n<li>An AI agent opened a store in San Francisco. Then it forgot the staff<\/li>\n<li>AI is rewriting the rules of biological experiments. Safety regulations aren\u2019t keeping up<\/li>\n<li>New findings from this Gallup poll show how Americans are using AI for health advice<\/li>\n<li>I lost $23 investing with ChatGPT, but at least Jason Alexander sang me Happy Birthday<\/li>\n<\/ul>\n<p><em>Want exclusive reporting and trend analysis on technology, business innovation, future of work, and design? <\/em><em>Sign up<\/em> <em>for <\/em>Fast Company <em>Premium.<\/em><\/p>\n<p><a href=\"https:\/\/www.fastcompany.com\/91527261\/ai-anxiety-is-turning-volatile\">Source link <\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Welcome to\u00a0AI\u00a0Decoded,\u00a0Fast Company\u2019s weekly newsletter that breaks down the most important news in the world of\u00a0AI.\u00a0You can sign\u00a0up to receive this newsletter every week via email\u00a0here. Is the Altman firebomb just the start of extreme doomer violence? On April 10, someone threw a Molotov cocktail at OpenAI CEO Sam Altman\u2019s house in San Francisco. 
The<\/p>\n","protected":false},"author":1,"featured_media":10926,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[37],"tags":[],"class_list":{"0":"post-10925","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-brand-spotlights"},"_links":{"self":[{"href":"https:\/\/wildgreenquest.com\/index.php?rest_route=\/wp\/v2\/posts\/10925","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/wildgreenquest.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/wildgreenquest.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/wildgreenquest.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/wildgreenquest.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=10925"}],"version-history":[{"count":0,"href":"https:\/\/wildgreenquest.com\/index.php?rest_route=\/wp\/v2\/posts\/10925\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/wildgreenquest.com\/index.php?rest_route=\/wp\/v2\/media\/10926"}],"wp:attachment":[{"href":"https:\/\/wildgreenquest.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=10925"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/wildgreenquest.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=10925"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/wildgreenquest.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=10925"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}