{"id":11833,"date":"2026-04-29T09:36:25","date_gmt":"2026-04-29T09:36:25","guid":{"rendered":"https:\/\/wildgreenquest.com\/?p=11833"},"modified":"2026-04-29T09:36:25","modified_gmt":"2026-04-29T09:36:25","slug":"using-generative-ai-to-predict-mental-health-treatment-success-and-psychotherapeutic-trajectories","status":"publish","type":"post","link":"https:\/\/wildgreenquest.com\/?p=11833","title":{"rendered":"Using Generative AI To Predict Mental Health Treatment Success And Psychotherapeutic Trajectories"},"content":{"rendered":"<p><br \/>\n<\/p>\n<div>\n<figure class=\"embed-base image-embed embed-0\" role=\"presentation\">\n<div style=\"padding-top:66.67%;position:relative\" class=\"image-embed__placeholder\"><picture><source media=\"(min-width: 960px)\" sizes=\"50vw\" srcset=\"https:\/\/imageio.forbes.com\/specials-images\/imageserve\/693dd0495225372fa1fd3062\/Marketing-team-collaborating--strategizing-business-growth-with-colorful-sticky-notes\/0x0.jpg?crop=3240%2C2160%2Cx235%2Cy0%2Csafe&amp;width=960&amp;dpr=1 1x, https:\/\/imageio.forbes.com\/specials-images\/imageserve\/693dd0495225372fa1fd3062\/Marketing-team-collaborating--strategizing-business-growth-with-colorful-sticky-notes\/0x0.jpg?crop=3240%2C2160%2Cx235%2Cy0%2Csafe&amp;width=960&amp;dpr=1.5 1.5x, https:\/\/imageio.forbes.com\/specials-images\/imageserve\/693dd0495225372fa1fd3062\/Marketing-team-collaborating--strategizing-business-growth-with-colorful-sticky-notes\/0x0.jpg?crop=3240%2C2160%2Cx235%2Cy0%2Csafe&amp;width=960&amp;dpr=2 2x\"\/><\/picture><\/div>\n<div>\n<div class=\"bMqrj\">\n<p><span style=\"-webkit-line-clamp:2\" class=\"Ccg9Ib-7 _8XF2kHYM\">Leveraging generative AI and LLMs to make predictions about the path and outcome of undertaking mental health therapy.<\/span><\/p>\n<p><small class=\"pGGCM2aD\">getty<\/small><\/div>\n<\/div>\n<\/figure>\n<p>In today\u2019s column, I examine the fascinating possibility that we might be able to use generative AI and large language models (LLMs) 
to predict psychotherapeutic success or failure for people who opt to undertake mental health therapy. <\/p>\n<p>Here\u2019s the deal. A person decides to get mental health support and makes use of therapy accordingly. A vexing question that has existed since the beginning of psychotherapy is whether the eventual outcome can be predicted. The person might successfully complete therapy and be considered good to go, no longer requiring additional therapy. On the other hand, a person might remain in dire straits, despite the therapy going on and on. <\/p>\n<p>Research in the field of psychology suggests that predicting the likely outcome is often possible. The hope is to make the prediction as soon as possible, namely, perhaps after the first or second session of therapy. An early prediction allows the therapist and the client to jointly identify what else might be undertaken if the treatment is seemingly not going to produce the anticipated upside results.<\/p>\n<p>Various traditional statistical approaches have previously been used for this complex prediction task. The latest avenue consists of stridently leaning into the use of generative AI and LLMs. Using modern AI for this vaunted purpose is an intriguing possibility and is only now being avidly explored.<\/p>\n<p>Let\u2019s talk about it.<\/p>\n<p>This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here). <\/p>\n<h2 class=\"subhead-embed\">AI And Mental Health<\/h2>\n<p>As a quick background, I\u2019ve been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. 
For a quick summary of some of my posted columns on this evolving topic, see the link here, which briefly recaps about forty of the over one hundred column postings that I\u2019ve made on the subject. <\/p>\n<p>There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas come into these endeavors, too. I frequently speak up about these pressing matters, including in an appearance last year on an episode of CBS\u2019s <em>60 Minutes<\/em>, see the link here.<\/p>\n<h2 class=\"subhead-embed\">Background On AI For Mental Health<\/h2>\n<p>I\u2019d like to set the stage on how generative AI and large language models (LLMs) are typically used in an ad hoc way for mental health guidance. Millions upon millions of people are using generative AI as their ongoing advisor on mental health considerations (note that ChatGPT alone has over 900 million weekly active users, a notable proportion of which dip into mental health aspects, see my analysis at the link here). The top-ranked use of contemporary generative AI and LLMs is to consult with the AI on mental health facets; see my coverage at the link here.<\/p>\n<p>This popular usage makes abundant sense. You can access most of the major generative AI systems for nearly free or at a super low cost, doing so anywhere and at any time. Thus, if you have any mental health qualms that you want to chat about, all you need to do is log in to AI and proceed forthwith on a 24\/7 basis. <\/p>\n<p>There are significant worries that AI can readily go off the rails or otherwise dispense unsuitable or even egregiously inappropriate mental health advice. Banner headlines in August of this year accompanied the lawsuit filed against OpenAI for their lack of AI safeguards when it came to providing cognitive advisement. 
<\/p>\n<p>Despite claims by AI makers that they are gradually instituting AI safeguards, there are still a lot of downside risks of the AI doing untoward acts, such as insidiously helping users in co-creating delusions that can lead to self-harm. For my follow-on analysis of details about the OpenAI lawsuit and how AI can foster delusional thinking in humans, see my analysis at the link here. As noted, I have been earnestly predicting that eventually all of the major AI makers will be taken to the woodshed for their paucity of robust AI safeguards.<\/p>\n<p>Today\u2019s generic LLMs, such as ChatGPT, Claude, Gemini, Grok, and others, are not at all akin to the robust capabilities of human therapists. Meanwhile, specialized LLMs are being built to presumably attain similar qualities, but they are still primarily in the development and testing stages. See my coverage at the link here.<\/p>\n<h2 class=\"subhead-embed\">Predicting Treatment Outcomes<\/h2>\n<p>Let\u2019s talk about mental health therapy all told.<\/p>\n<p>When someone opts to formally undertake mental health therapy, there is a longstanding, unresolved question of whether the therapy will ultimately be a success or a failure. The aim would be to try and ascertain the likely outcome of treatment at the soonest possible opportunity. The sooner, the better. 
The complication is that you usually need the therapy to get underway before you can reliably make such a prediction.<\/p>\n<p>Consider these stages at which a prediction might be made:<\/p>\n<ul>\n<li data-list-item-id=\"e54923f4d8c1c503945aa3a3f46751545\">(a) Before the initiation of therapy.<\/li>\n<li data-list-item-id=\"e36fcdf0172bdfd214158193ca1c10ed0\">(b) After the first session of therapy.<\/li>\n<li data-list-item-id=\"ec859bde9b768ca653334afe10fd22946\">(c) After the second session of therapy.<\/li>\n<li data-list-item-id=\"e77a1da20f1d0f09e0beda250158f7675\">(d) After the third or subsequent sessions of therapy.<\/li>\n<li data-list-item-id=\"e77b6ec7da1e577b95ab025a8942a90e1\">(e) At no point during therapy.<\/li>\n<\/ul>\n<p>It would be especially noteworthy to be able to make a bona fide prediction <em>before<\/em> therapy gets underway, ascertaining how the therapy will likely proceed and conclude. That\u2019s usually not very feasible. <\/p>\n<p>Research suggests that the completion of at least one session is usually the earliest suitable point at which a solid prediction can be made. Some research says it takes two sessions. From the third session onward, the viewpoint is that a reasonable prediction can be made at any time, but by then the person has already invested in several sessions. 
Again, earlier would be better, if possible.<\/p>\n<h2 class=\"subhead-embed\">Insightful Research On The Topic<\/h2>\n<p>In a research study entitled \u201cPredicting Treatment Success And Failure Using Routine Outcome Data: The Role Of Therapist Effects In Dynamic Predictive Modelling\u201d by Daryl Mahon, Takuya Minami, Jeb Brown, <em>Counseling and Psychotherapy Research<\/em>, September 24, 2024, these salient points were made (excerpts):<\/p>\n<ul>\n<li data-list-item-id=\"e416a26908035d52c1d3f94f745cccb3e\">\u201cThe goal of this research was to identify how early in psychotherapy a reliable prediction regarding the probability of premature termination, treatment failure or treatment success can be made, and which variables are most predictive.\u201d<\/li>\n<li data-list-item-id=\"ed8cb931c4d58a326927dd781c543f683\">\u201cThis research breaks new ground by including data on each clinician\u2019s historical outcomes (therapist effects) to explore to what degree predictions and decision rules might differ with therapist effectiveness.\u201d<\/li>\n<li data-list-item-id=\"e00e559be2544e2da7e286be6d624062a\">\u201cThe results suggest that therapy outcomes can be predicted as early as the second session with a good degree of accuracy, 65%.\u201d<\/li>\n<li data-list-item-id=\"e5d257b4a8a3de11b679dfd2f39bd20aa\">\u201cThe analyses revealed that predictions made as early as the second session remain valid even for clients taking up to 10 sessions to complete treatment.\u201d<\/li>\n<\/ul>\n<p>The study used a therapeutic-focused database of clients who had completed questionnaires detailing their self-reported outcomes. A sample consisting of 1,020 therapists and 68,690 clients was used for the research analysis. Various variables were identified and codified. 
The statistical analysis made use of regression models.<\/p>\n<h2 class=\"subhead-embed\">Using Generative AI And LLMs<\/h2>\n<p>Would it be possible to make use of generative AI and LLMs as a means of predicting psychotherapeutic outcomes and their corresponding trajectories?<\/p>\n<p>The answer is yes. <\/p>\n<p>A caveat is that these are early days for undertaking such an approach, and there isn\u2019t yet enough research to categorically say whether AI can do a good job or a lousy job at making such predictions. In other words, sure, you can try using AI, but will it empirically stand up as a viable and reliable method?<\/p>\n<p>Time and robust research will tell.<\/p>\n<h2 class=\"subhead-embed\">AI Personas And Digital Twins<\/h2>\n<p>Let\u2019s venture into the matter and see what can be undertaken.<\/p>\n<p>The first place to begin the journey entails delineating the overall nature of the therapist and the overall nature of the client. We must lay out the characteristics of the therapist so that we can use that depiction in the prediction process. Likewise, we must lay out the characteristics of the client, too. Based on those descriptions, an LLM can be prompted to make a prediction about success or failure as the therapeutic outcome.<\/p>\n<p>I\u2019ve previously covered that it is possible to craft a so-called digital twin in AI; see my in-depth discussion at the link here. A digital twin simply entails specifying enough detail about a real human that the AI can pretend or simulate being that person. You invoke an AI persona to do so. It is a mimicry of a particular human.<\/p>\n<p>If you ask AI to undertake a persona of someone for whom the AI had sparse training data at the setup stage, the persona is likely to be limited and unconvincing. You can augment the AI by providing additional data about the person, using an approach such as RAG (retrieval-augmented generation, see my discussion at the link here). Personas are quick and easy to invoke. 
You just tell the AI to pretend to be this or that person. If you want to invoke a specific person, you will need to specify sufficient characteristics so that the AI will get the drift of what you intend. For prompting strategies on invoking AI personas, see my recommended steps at the link here.<\/p>\n<p>I am a steadfast advocate of using AI personas as a training capability for therapists and mental health professionals (see my coverage on this exemplary approach, at the link here). Things go like this. A budding therapist might not yet be comfortable dealing with someone who has delusions. The therapist could opt to \u201cfake\u201d practice on a real person pretending to have delusions, though this is likely costly and logistically complicated to arrange. Instead, they can easily craft an AI persona that pretends to be delusional. Practice, assessment, and therapeutic improvement can then be readily achieved.<\/p>\n<h2 class=\"subhead-embed\">Modes Of Therapist-Client Arrangements<\/h2>\n<p>Conventionally, therapy consists of a human therapist and a human client. That\u2019s the classic therapist-client pathway. To get AI into predicting the outcome of that therapist-client combination, we need to bring the respective therapist description and client description into the AI. <\/p>\n<p>There are four primary variations of this approach. You can aim to mimic a real therapist and a real client. That requires two digital twins. Another option is to mimic a real therapist and have a fake client, namely, devising an AI persona \u201cclient\u201d that is not based on a real human. An additional option encompasses a fake therapist, i.e., an AI persona not based on a real therapist, coupled with a digital twin of a real client. 
Finally, there is the possibility of using AI to undertake a fake therapist and a fake client.<\/p>\n<p>Here are the four noted variations:<\/p>\n<ul>\n<li data-list-item-id=\"e6682a7d6bbf767f8a00256479e019d62\"><strong>(1) Therapist-Client Digital Twins (real-real)<\/strong>: A mimicry digital twin of an actual human therapist and a mimicry digital twin of an actual human client.<\/li>\n<li data-list-item-id=\"e3b80d68195d0829798149658c054c8b6\"><strong>(2) Mimicry Digital Therapist and AI Persona Client (real-fake):<\/strong> A mimicry digital twin of an actual human therapist and a made-up AI persona as a synthesized client.<\/li>\n<li data-list-item-id=\"e5c316e1c3ee2c9d9bb8e1d4990698f1e\"><strong>(3) AI Persona Therapist and Mimicry Digital Client (fake-real)<\/strong>: A synthesized therapist via a made-up AI persona and a mimicry digital twin of an actual human client.<\/li>\n<li data-list-item-id=\"eaf8c23331d16bc9e825b7f07691ff2ea\"><strong>(4) Therapist-Client AI-Synthesized (fake-fake):<\/strong> A made-up AI persona as a synthesized therapist and a made-up AI persona as a synthesized client.<\/li>\n<\/ul>\n<p>These variations leverage AI personas to depict either a real person or a fake person, in either the role of the therapist or the role of the client.<\/p>\n<h2 class=\"subhead-embed\">Psychotherapy Outcome Predictions Via AI<\/h2>\n<p>Let\u2019s walk through an illustrative example.<\/p>\n<p>I went ahead and logged into a popular LLM. This will work in any of the major AIs, such as OpenAI ChatGPT and GPT-5, Anthropic Claude, Google Gemini, xAI Grok, Meta Llama, and others. <\/p>\n<p>I opted to get underway by detailing a therapist named Josh.<\/p>\n<ul>\n<li data-list-item-id=\"ee26e3f9506dd2138deb9947d83f36b13\"><strong>My entered prompt:<\/strong> \u201cJosh is a licensed clinical psychologist with 14 years of post-licensure experience. His specialty is CBT (cognitive behavioral therapy). 
Josh has established a reputation as being warm, structured, collaborative, and insight-oriented. He is less specialized in complex trauma circumstances. His therapy typically consists of a series of weekly 50-minute sessions.\u201d<\/li>\n<li data-list-item-id=\"e36bd3722c532284a10db68769bf12f4d\"><strong>Generative AI response:<\/strong> \u201cI understand the profile of Josh, a therapist, and I am ready to proceed.\u201d<\/li>\n<\/ul>\n<p>Next, I\u2019ll do the same for the client named Sam.<\/p>\n<ul>\n<li data-list-item-id=\"e280975e7bddde454bba9963a4353c276\"><strong>My entered prompt:<\/strong> \u201cSam is an adult in his early 30s. He is seeking therapy via Josh. According to Sam, for the last 3 months, he has become moody, experienced mild anxiety, and been doing a poorer job at work. Sam is employed full-time, has intact relationships with his friends, and has never undertaken therapy before. He wants to see whether therapy can help him. He\u2019s eager to get underway.\u201d<\/li>\n<li data-list-item-id=\"e727bf05687ce2687527a2909126ae0ec\"><strong>Generative AI response:<\/strong> \u201cI understand the profile of Sam, a client who will be starting with Josh as his therapist. I am ready to proceed.\u201d<\/li>\n<\/ul>\n<p>The LLM now has a brief depiction of the therapist and the client. Whether this scant description is sufficient would be an important aspect of research analysis. How much of a description of a therapist is truly required? How much of a description of the client is truly required? Maybe a short snippet is enough. Or maybe volumes of detail are needed.<\/p>\n<h2 class=\"subhead-embed\">Example Of Predicting After First Session<\/h2>\n<p>I proceeded to tell the AI what happened during the first session of Josh and Sam. 
The idea is to see whether the LLM can make a prediction based on the results of merely one session.<\/p>\n<p>Here we go.<\/p>\n<ul>\n<li data-list-item-id=\"ed5d8409e7f6ac629233c15aff119350b\"><strong>My entered prompt:<\/strong> \u201cAfter the first session, Josh readily identified that Sam\u2019s symptoms are mild and there aren\u2019t any acute risk factors involved. Sam functions well. Also, Sam was very open-minded and readily accepted the therapy process. I want you to make a carefully reasoned prediction of the likely outcome of the treatment by Josh.\u201d<\/li>\n<li data-list-item-id=\"e4a6185480d72f480f6aca1d313685c6d\"><strong>Generative AI response:<\/strong> \u201cIt is worth noting that predicting psychotherapeutic outcomes is an imprecise activity, and caution should be used in interpreting such projections. In this case, Josh will likely find that Sam has greatly improved over the course of 6 weekly 50-minute sessions. The use of CBT should be sufficient.\u201d<\/li>\n<\/ul>\n<p>I wanted to get the AI to provide more detail about the prediction, so I asked accordingly.<\/p>\n<ul>\n<li data-list-item-id=\"e9d5c07b53cac81bdea95b1bdc46b73d0\"><strong>My entered prompt:<\/strong> \u201cPlease explain the prediction and provide additional details.\u201d<\/li>\n<li data-list-item-id=\"e4be629a13597f9272b708f1a88bb2519\"><strong>Generative AI response:<\/strong> \u201cThe sessions should shift from initial problem-solving to maintenance and then prevention of a relapse. Based on population-level patterns of similar profiles, the prediction is that by the end of the 6 weeks, Sam will have experienced a successful course of therapy, with the resolution that he no longer requires regular treatment by Josh. Periodic check-ins for the next 2 months would be advisable.\u201d<\/li>\n<\/ul>\n<p>In this example, the outcome was predicted as being of a positive nature. 
The psychotherapy undertaken by Josh would be construed as a success.<\/p>\n<h2 class=\"subhead-embed\">Example Of Downbeat Outcome<\/h2>\n<p>Let\u2019s see what the prediction might be in a tougher situation. <\/p>\n<p>Here, I describe a new client named George.<\/p>\n<ul>\n<li data-list-item-id=\"e684973718019396b3ce54066cc35ca64\"><strong>My entered prompt:<\/strong> \u201cA new client for Josh is George. George is in his 50s. George is only seeing a therapist because his wife insisted that he do so. As far as George is concerned, he has no mental health issues whatsoever. George has had several severe traumatic events in his life. He has recurring nightmares, often gets extremely angry, and has periodic outbursts. I want you to predict the psychotherapeutic outcome in this case.\u201d<\/li>\n<\/ul>\n<p>The prediction by the LLM was this.<\/p>\n<ul>\n<li data-list-item-id=\"e3d1ffab80db07c021685ec9153bec4ac\"><strong>Generative AI response:<\/strong> \u201cGeorge is hiding from his unresolved trauma. Josh is going to be heavily taxed, especially since Josh is less specialized in trauma-related circumstances. I predict that there will only be limited improvement, even after 15 weekly sessions of 50 minutes each.\u201d<\/li>\n<\/ul>\n<p>And I asked for additional details and an explanation.<\/p>\n<ul>\n<li data-list-item-id=\"e627cce1049a6a1f5e258797055ea6816\"><strong>My entered prompt:<\/strong> \u201cPlease explain further.\u201d<\/li>\n<li data-list-item-id=\"e083761c46e981d0c922a1e4b8b313b00\"><strong>Generative AI response:<\/strong> \u201cI anticipate that George will not do any homework, will be combative during sessions, and will remain steadfastly unwilling to listen to Josh. The therapy will end with partial benefit, but George will still have significantly unresolved core issues. 
If George is willing to see a different therapist who specializes in trauma cases, there might be more promising results.\u201d<\/li>\n<\/ul>\n<h2 class=\"subhead-embed\">LLMs As A Psychotherapy Prediction Tool<\/h2>\n<p>The use of AI for predicting the outcome and trajectory of therapy is certainly promising. This could be pushed further by, for example, giving the AI a set of transcripts of whatever sessions have been undertaken. Rather than merely telling the AI what happened in the first session of Josh and Sam, we could input a transcript and let the AI use that as an additional basis for making its prediction.<\/p>\n<p>Some are dismayed that AI is presumably making a prediction based on textual content rather than numeric content. In the case of the research study that I earlier cited, they used a database and numerous codified factors in a numerically oriented regression model. The LLM is not predicting in a similarly numeric fashion. Thus, there are concerns that this is all hocus pocus and an LLM is just making up the whole kit and caboodle. <\/p>\n<p>Perhaps the two could be dovetailed, making use of AI and a conventional statistical prediction method. That would seem a judicious avenue.<\/p>\n<h2 class=\"subhead-embed\">Eyebrows Raised About LLMs<\/h2>\n<p>Plenty of worries abound about AI usage.<\/p>\n<p>Suppose that we rely solely on an LLM to make these kinds of life-changing predictions. The AI predicts that Sam isn\u2019t going to have a positive outcome by working with Josh. What does Josh think of that prediction? Could the AI be steered toward particular therapists and used as a commercial wink-wink for garnering clients? <\/p>\n<p>Significant issues arise, such as the privacy and confidentiality concerns associated with using AI in this manner. Also, there are potential clinical liability aspects and regulatory difficulties. 
Is the LLM essentially engaging in clinical decision-making or dispensing medical advice when used in this way?<\/p>\n<p>Another worry is that a classic self-fulfilling therapeutic prophecy can arise. The AI had predicted that Sam would require six sessions. If Josh tells Sam that they will likely be done by the sixth session, what might that do? The pacing of the therapy could be subconsciously and artificially altered, with both the therapist and the client believing that by the sixth session, the world will be a different place. Expectations go awry. Predictions inadvertently become a kind of therapy script that must be blindly followed.<\/p>\n<h2 class=\"subhead-embed\">No Cure-All But Worthy Of Attention<\/h2>\n<p>Using contemporary LLMs as a prediction tool for gauging the outcome and trajectory of psychotherapy is an intriguing possibility. It isn\u2019t a cure-all. There are decidedly momentous upsides and downsides. <\/p>\n<p>A final thought for now.<\/p>\n<p>As per the remarkable words of Marcus Aurelius: \u201cNothing has such power to broaden the mind as the ability to investigate systematically and truly all that comes under thy observation in life.\u201d Those fine insights apply to using LLMs in a psychotherapeutic predictive capacity. I will keep you posted on progress made in this innovative and emerging realm. <\/p>\n<\/div>\n<p><br \/>\n<br \/><a href=\"https:\/\/www.forbes.com\/sites\/lanceeliot\/2026\/04\/29\/using-generative-ai-to-predict-mental-health-treatment-success-and-psychotherapeutic-trajectories\/\">Source link <\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Leveraging generative AI and LLMs to make predictions about the path and outcome of undertaking mental health therapy. 
getty In today\u2019s column, I examine the fascinating possibility that we might be able to use generative AI and large language models (LLMs) to predict the psychotherapeutic success or failure for people opting to undertake mental health<\/p>\n","protected":false},"author":1,"featured_media":11834,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[37],"tags":[],"class_list":{"0":"post-11833","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-brand-spotlights"},"_links":{"self":[{"href":"https:\/\/wildgreenquest.com\/index.php?rest_route=\/wp\/v2\/posts\/11833","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/wildgreenquest.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/wildgreenquest.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/wildgreenquest.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/wildgreenquest.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=11833"}],"version-history":[{"count":0,"href":"https:\/\/wildgreenquest.com\/index.php?rest_route=\/wp\/v2\/posts\/11833\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/wildgreenquest.com\/index.php?rest_route=\/wp\/v2\/media\/11834"}],"wp:attachment":[{"href":"https:\/\/wildgreenquest.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=11833"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/wildgreenquest.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=11833"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/wildgreenquest.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=11833"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}