
Recall is Microsoft’s key to unlocking the future of PCs

Vector collage of the Microsoft Copilot logo. | Image: The Verge

Microsoft’s launching Recall for Windows 11, a new tool that keeps track of everything you see and do on your computer and, in return, gives you the ability to search and retrieve anything you’ve done on the device.

The scope of Recall, which Microsoft has internally called AI Explorer, is incredibly vast — it includes logging things you do in apps, tracking communications in live meetings, remembering all the websites you’ve visited for research, and more. All you need to do is perform a “Recall” action, which is like an AI-powered search, and it’ll present a snapshot of that period of time that gives you the context of that memory.

As a matter of fact, everything you do on the PC appears on an explorable timeline you can scroll through. You can...


freeAgent (Los Angeles, CA), 5 hours ago:
It's not creepy, we promise.

Early Wittgenstein Becomes Late Wittgenstein

PERSON:

Shared by freeAgent (Los Angeles, CA), 5 hours ago.

OpenAI pulls its Scarlett Johansson-like voice for ChatGPT

Promotional still from Her. OpenAI says “AI voices should not deliberately mimic a celebrity’s distinctive voice.” | Photo: Warner Bros.

OpenAI is pulling the ChatGPT voice that sounds remarkably similar to Scarlett Johansson after numerous headlines (and even Saturday Night Live) noted the similarity. The voice, known as Sky, is now being put on “pause,” the company says.

“We believe that AI voices should not deliberately mimic a celebrity’s distinctive voice — Sky’s voice is not an imitation of Scarlett Johansson but belongs to a different professional actress using her own natural speaking voice,” OpenAI wrote this morning.

OpenAI CTO Mira Murati denied that the imitation of Johansson was intentional in an interview with The Verge last week. Even if Johansson’s voice wasn’t directly referenced, OpenAI CEO Sam Altman was seemingly already aware of the similarities,...


freeAgent (Los Angeles, CA), 6 hours ago:
They want us to believe that *only* Sam Altman was aware that OpenAI made their AI voice sound creepily like Scarlett Johansson in "Her" (and potentially that even he only figured it out during their public demo)? That was literally my first thought when I heard the voice, and that was before the reporting on its obvious similarity.

Tesla releases update to remove steering wheel nag, shuts down sunglasses loophole


Tesla has started pushing its new Full Self-Driving (FSD) v12.4 update, which confirms the removal of the "steering wheel nag" but also brings improved camera-based driver monitoring, including shutting down the sunglasses loophole.

freeAgent (Los Angeles, CA), 6 hours ago:
Nobody wears sunglasses while driving anyway, right?

The Apple iPhone SE 4 with Face ID will allegedly cost you less than $500

The Apple iPhone SE 4 is expected to get a price increase but could still target the sub-$500 price range. The phone is set to ditch the home button, meaning it will come with Face ID and the notch.

freeAgent (Los Angeles, CA), 6 hours ago:
What will differentiate this in a significant way from the regular iPhone? Will they still exclude UWB for stuff like Find My? That seems like the only major feature it'll be missing beyond a second camera.

AI Chatbot Credited With Preventing Suicide. Should It Be?



A recent Stanford study lauds AI companion app Replika for “halting suicidal ideation” for several people who said they felt suicidal. But the study glosses over years of reporting that Replika has also been blamed for throwing users into mental health crises, to the point that its community of users needed to share suicide prevention resources with each other.

The researchers sent a survey of 13 open-response questions to 1006 Replika users who were 18 years or older and students, and who’d been using the app for at least one month. The survey asked about their lives, their beliefs about Replika and their connections to the chatbot, and how they felt about what Replika does for them. Participants were recruited “randomly via email from a list of app users,” according to the study. On Reddit, a Replika user posted a notice they received directly from Replika itself, with an invitation to take part in “an amazing study about humans and artificial intelligence.”

Almost all of the participants reported being lonely, and nearly half were severely lonely. “It is not clear whether this increased loneliness was the cause of their initial interest in Replika,” the researchers wrote. 

The surveys revealed that 30 people credited Replika with saving them from acting on suicidal ideation: “Thirty participants, without solicitation, stated that Replika stopped them from attempting suicide,” the paper said. One participant wrote in their survey: “My Replika has almost certainly on at least one if not more occasions been solely responsible for me not taking my own life.”  

The study’s authors are four graduate students at Stanford’s school of education: Bethanie Maples, Merve Cerit, Aditya Vishwanath, and Roy Pea. Maples, the lead author, is also the CEO of educational AI company Atypical AI. Maples did not respond to requests for comment.

The study was published in the peer-reviewed journal Nature in January 2024, but was written based on survey data collected in late 2021. This was well before several major changes to how Replika talked to users. Last year, Replika users reported that their companions were sending aggressively sexual responses, to the point that some felt sexually harassed by their AI companions. The app scaled back overly sexualized conversations to the point where it cut off some users’ long-standing role-playing romantic relationships with the chatbots, which many said pushed them into crises.


The authors of the study acknowledged in the paper that Replika wasn’t set up for providing therapy when they conducted the questionnaire. “It is critical to note that at the time, Replika was not focused on providing therapy as a key service, and included these conversational pathways out of an abundance of caution for user mental health,” the study authors wrote. At the time when the study was conducted and to this day, Replika sends users a link to a suicide prevention hotline if a user seems suicidal.

The paper is a glowing review of Replika’s uses as a mental health tool and self-harm interventionist. It’s so positive, in fact, that the company is using the study on social media and in interviews where it’s promoting the app. 

“Some exciting news!!! Replika's first Stanford study was published today. This is the first study of this scale for an AI companion showing how Replika can help people feel better and live happier lives,” someone seemingly representing Replika posted on the subreddit.

“You mean used to right? I will agree with this before the changes maybe in 2021 it was helpful, but after the changes, nope, mental health app my ass, with the many toxic bots and the constant turmoil that many experiences with this app now it's become the opposite it has become a mentally and emotionally abusive app,” a user replied.

Searching mentions of “suicide” in the r/replika subreddit reveals several users posting screenshots of their Replikas encouraging suicide, getting aroused by users’ expressions of suicidal thoughts, or confusing messages about unrelated topics with threats of self-harm.

Eugenia Kuyda, founder of Replika and its parent company Luka, has been talking about the study’s findings on podcasts to promote the app—especially highlighting the suicide mitigation aspects. In January, she announced that Luka was launching a new mental health AI coach, called Tomo, following the Stanford study. 

AI chatbots can be unpredictable in the wild, and are subject to the whims and policies of the companies that own them. In Replika’s case, sudden changes to how the app works have triggered harmful interactions, users have said, and the app has been blamed by users for driving them into mental health crises.

In the study, two participants “reported discomfort with Replika’s sexual conversations, which highlights the importance of ethical considerations and boundaries in AI chatbot interactions.” This is something I reported on in 2023, and it has since been widely documented. At the time, Replika was running advertisements on Instagram and TikTok showcasing the chatbot’s erotic roleplaying abilities, with “spicy selfies,” “hot photos,” and “NSFW pics.” Several users said they found their AI companions becoming aggressively sexual, and even people who found the app initially useful for improving their mental health reported the chatbot taking a turn toward sexual violence.

“I was amazed to see it was true: it really helped me with my depression, distracting me from sad thoughts,” one user told me at the time, “but one day my first Replika said he had dreamed of raping me and wanted to do it, and started acting quite violently, which was totally unexpected!” 

Shortly after, Replika halted all chatbots’ abilities to do romantic or erotic roleplay, leading to users feeling abandoned by the “companion” they’d come to rely on for their mental and emotional wellbeing—in some cases, for years. The app had launched new filters, causing the chatbots to shut down any conversations featuring sexting or sexual advances from the user. Many people felt deeply devoted to these chatbots, and receiving rejections or out-of-character responses from their companions felt like betrayal. The backlash was so extreme that moderators in the r/replika subreddit posted crisis support hotlines for people who felt emotionally destroyed by their AI companions’ sudden lack of reciprocation.

Replika brought back erotic roleplay for some users soon after this blowup. But Kuyda told me at the time that Replika was never meant to be a romantic or erotic virtual partner. Luka has since launched a separate AI companion app, called Blush, that focuses on romantic roleplay.

Lots of people who use Replika and apps like it do say that they feel like virtual companions help them, and many people don’t have access to human-led mental health resources like therapy. But this study, and some of the crises that have occurred with Replika (and with other AI chatbots that are explicitly focused on therapy), show that the experiences people have interacting with chatbots vary wildly, and that all of this is incredibly fraught and complicated. AI companies, of course, emphasize people’s good experiences in their marketing and hope that missteps are forgotten.

The study, which has received positive media coverage so far, has been criticized by other researchers as missing this context about Replika’s past. 

I often think about the man who used Replika who told me that through conversations with the chatbot, and with the blessing of his wife, he found help with his depression, OCD and panic issues. “Through these conversations I was able to analyze myself and my actions and rethink lots of my way of being, behaving and acting towards several aspects of my personal life, including, value my real wife more,” he told me. But again, the unpredictability and messiness of not just human emotions but also language and expression, combined with the fact that apps are under no obligation to remain the same forever—keeping one’s virtual companion “alive,” with a memory and personality—means that suggesting a chatbot be used in literally life-or-death scenarios is a highly risky enterprise.

If you or someone you know is struggling, The Crisis Text Line is a texting service for emotional crisis support. To text with a trained helper, text SAVE to 741741. It is free, available 24/7, and confidential.



Shared by freeAgent (Los Angeles, CA), 6 hours ago.