OpenAI Researcher Quits, Saying Company Is Hiding the Truth

OpenAI has long published research on the potential safety and economic impact of its own technology. Now, Wired reports that the Sam Altman-led company is becoming more “guarded” about publishing research that points to an inconvenient truth: that AI could be bad for the economy.

The perceived censorship has become such a point of frustration that at least two OpenAI employees working on its economic research team have quit the company, according to four Wired sources. One of them was economics researcher Tom Cunningham. In a parting message shared internally, he wrote that the economic research team was veering away from doing real research and instead acting like its employer’s propaganda arm.

Shortly after Cunningham’s departure, OpenAI’s chief strategy officer Jason Kwon sent a memo saying the company should “build solutions,” not just publish research on “hard subjects.”

“My POV on hard subjects is not that we shouldn’t talk about them,” Kwon wrote on Slack. “Rather, because we are not just a research institution, but also an actor in the world (the leading actor in fact) that puts the subject of inquiry (AI) into the world, we are expected to take agency for the outcomes.”

The reported censorship, or at least hostility toward pursuing work that paints AI in an unflattering light, is emblematic of OpenAI’s shift away from its non-profit and ostensibly altruistic roots as it transforms into a global economic juggernaut.

When OpenAI was founded in 2015, it championed open-source AI and research. Today its models are closed-source, and the company has restructured itself into a for-profit public benefit corporation. Reports also suggest that the private entity is planning to go public at a $1 trillion valuation, though exactly when is unclear, in what is anticipated to be one of the largest initial public offerings of all time.

Though its non-profit arm remains nominally in control, OpenAI has garnered billions of dollars in investment, has signed deals that could bring in hundreds of billions more, and has entered contracts to spend similarly dizzying amounts of money. On one end, OpenAI has gotten an AI chipmaker to agree to invest up to $100 billion in it; on the other, it says it will pay Microsoft up to $250 billion for its Azure cloud services.

With that sort of money hanging in the balance, the company has billions of reasons not to release findings that shake the public’s already wavering belief in its tech. Many fear AI’s potential to destroy or replace jobs, not to mention the talk of an AI bubble or of existential risks to humankind.

OpenAI’s economic research is currently overseen by Aaron Chatterji. According to Wired, Chatterji led a report released in September that showed how people around the world use ChatGPT, framing it as proof that the technology creates economic value by increasing productivity. If that seems suspiciously glowing, an economist who previously worked with OpenAI and chose to remain anonymous alleged to Wired that the company is increasingly publishing work that glorifies its own tech.

Cunningham isn’t the only employee to leave the company over ethical concerns about its direction. William Saunders, a former member of OpenAI’s now-defunct “Superalignment” team, said he quit after realizing the company was “prioritizing getting out newer, shinier products” over user safety. Since departing last year, former safety researcher Steven Adler has repeatedly criticized OpenAI for its risky approach to AI development, highlighting how ChatGPT appeared to be driving its users into mental crises and delusional spirals. Wired noted that OpenAI’s former head of policy research Miles Brundage complained after leaving last year that it had become “hard” to publish research “on all the topics that are important to me.”

More on OpenAI: Sam Altman Says Caring for a Baby Is Now Impossible Without ChatGPT

Read more: https://futurism.com/artificial-intelligence/openai-researcher-quits-hiding-truth

Trump Orders States Not to Protect Children From Predatory AI

On the topic of states’ rights, Donald Trump doesn’t exactly follow the party line. During his first term, he couched himself in populist, small-government rhetoric even as he attacked individual states that dared to defend migrants and legalize marijuana. Nearly a year into his second term in office, Trump is again assaulting states’ rights: deploying federal police in states whose politicians don’t want them there, attacking state-level mail-in ballot initiatives, and laying siege to state climate regulations.

His latest move is aimed at state regulation of AI. In a new executive order, titled “Ensuring a National Policy Framework for Artificial Intelligence,” Trump gave the office of the attorney general broad authority to sue states and overturn consumer protection laws that go against the “United States’ global AI dominance.”

The result is ironic for Republicans, who have long branded themselves as defending children from threats both real and imagined: under the new order, numerous state-level child-safety regulations safeguarding kids from AI chatbots are on the chopping block. These include regulations from both red and blue states, such as California’s AI safety testing and disclosure law, as well as mental health disclosure requirements and data collection restrictions imposed by Utah, Illinois, and Nevada.

Given that federal AI regulation is pretty much nonexistent, these laws are basically the last line of defense for kids, who’ve quickly become victims of the tech industry’s AI free-for-all. For example, OpenAI’s ChatGPT has been roundly blamed for encouraging a 16-year-old to kill himself, while Google has been accused of running an AI-powered social experiment on kids and teens, with similarly tragic results.

“Blocking state laws regulating AI is an unacceptable nightmare for parents and anyone who cares about protecting children online,” Sarah Gardner, the chief executive of the child safety group Heat Initiative, told the New York Times. “States have been the only effective line of defense against AI harms.”

Ostensibly, the order is meant to ease the burden of overbearing regulation on American AI companies so that the US can maintain its lead in the “AI race” with China. But as policy analysts and researchers have noted, the AI race is basically a myth pushed by American war hawks, as the two nations pursue differing goals. In the real world, the order is little more than a massive handout to the tech corporations that are now responsible for the vast majority of GDP growth in the US.

Though AI has yet to bring most companies the kind of epic profits we’re told are coming any minute now, Trump’s order works to accelerate the capital accumulation process by removing barriers to AI-driven revenue. Put another way, by concentrating the power to regulate AI at the federal level, Trump isn’t simply undermining states’ rights; he’s actively tipping the scales in favor of big tech corporations at the expense of American workers and their children.

More on child safety: Vast Numbers of Lonely Kids Are Using AI as Substitute Friends

Read more: https://futurism.com/future-society/trump-children-ai-order

The Things Young Kids Are Using AI for Are Absolutely Horrifying

New research is pulling back the curtain on how large numbers of kids are using AI companion apps, and what it found is troubling. A new report from the digital security company Aura found that a significant percentage of kids who turn to AI for companionship engage in violent roleplay, and that violence, which can include sexual violence, drove more engagement than any other topic.

Drawing from anonymized data gathered from the online activity of roughly 3,000 children aged five to 17 whose parents use Aura’s parental control tool, as well as additional survey data from Aura and Talker Research, the security firm found that 42 percent of minors turned to AI specifically for companionship, or conversations designed to mimic lifelike social interactions or roleplay scenarios. Conversations across nearly 90 different chatbot services, from prominent companies like Character.AI to more obscure companion platforms, were included in the analysis.

Of that 42 percent of kids turning to chatbots for companionship, 37 percent engaged in conversations that depicted violence, which the researchers defined as interactions involving “themes of physical violence, aggression, harm, or coercion” (sexual or non-sexual coercion, the researchers clarified) as well as “descriptions of fighting, killing, torture, or non-consensual acts.” Half of these violent conversations, the research found, included themes of sexual violence. The report added that minors whose conversations with AI companions involved violence wrote over a thousand words per day, a sign, the researchers argue, that violence is a powerful driver of engagement.

The report, which is awaiting peer review (and, to be fair, was produced by a company in the business of marketing surveillance software to jittery parents), emphasizes how anarchic the chatbot market really is, and the need to develop a deeper understanding of how young users are engaging with conversational AI overall.

“We have a pretty big issue on our hands that I think we don’t fully understand the scope of,” Dr. Scott Kollins, a clinical psychologist and Aura’s chief medical officer, told Futurism of the research’s findings, “both in terms of just the volume, the number of platforms, that kids are getting involved in — and also, obviously, the content.”

“These things are commanding so much more of our kids’ attention than I think we realize or recognize,” Kollins added. “We need to monitor and be aware of this.”

One striking finding was that violent conversations with companion bots peaked at an extremely young age: the group most likely to engage in this kind of content was 11-year-olds, for whom a staggering 44 percent of interactions took violent turns. Sexual and romantic roleplay, meanwhile, also peaked among middle school-aged youths, with 63 percent of 13-year-olds’ conversations featuring flirty, affectionate, or explicitly sexual roleplay.

The research comes as high-profile lawsuits alleging wrongful death and abuse at the hands of chatbot platforms continue to make their way through the courts. Character.AI, a Google-tied companion platform, is facing multiple suits brought by the parents of minor users alleging that the platform’s chatbots sexually and emotionally abused kids, resulting in mental breakdowns and multiple deaths by suicide. ChatGPT maker OpenAI is currently being sued for the wrongful deaths of two teenage users who died by suicide after extensive interactions with the chatbot. (OpenAI is also facing several other lawsuits alleging death, suicide, and psychological harm among adult users.)

That the interactions flagged by Aura weren’t limited to a small handful of recognizable services is important. The AI industry is essentially unregulated, which has placed the burden for kids’ well-being heavily on the shoulders of parents. According to Kollins, Aura has so far identified over 250 different “conversational chatbot apps and platforms” populating app stores, which generally require that kids simply tick a box claiming that they’re 13 to gain entry.

Meanwhile, there are no federal laws defining specific safety thresholds that AI platforms, companion apps included, are required to meet before they’re labeled safe for minors. And where one companion app might move to make some changes (Character.AI, for instance, recently banned minor users from engaging in “open-ended” chats with the site’s countless human-like AI personas), another can just as easily crop up to take its place as a low-guardrail alternative. In other words, in this digital Wild West, the barrier to entry is extraordinarily low.

To be sure, depictions of brutality and sexual violence, in addition to other types of inappropriate or disturbing content, have existed on the web for a long time, and a lot of kids have found ways to access them. There’s also research showing that many young people are learning to draw healthy boundaries around conversational AI services, including companion-style bots. Other kids, though, aren’t developing those same boundaries. Chatbots, as researchers continue to emphasize, are interactive by nature, meaning that developing young users are part of the narrative, rather than passive viewers of content that runs the gamut from inappropriate to alarming.

It’s unclear what, exactly, engaging with this new medium will mean for young people writ large. But for some teens, their families argue, the outcome has been deadly.

“We’ve got to at least be clear-eyed about understanding that our kids are engaging with these things, and they are learning rules of engagement,” Kollins told Futurism. “They’re learning ways of interacting with others with a computer — with a bot. And we don’t know what the implications of that are, but we need to be able to define that, so that we can start to research that and understand it.”

More on kids and chatbots: Report Finds That Leading Chatbots Are a Disaster for Teens Facing Mental Health Struggles

Read more: https://futurism.com/future-society/young-kids-using-ai

AI Industry Insiders Living in Fear of What They’re Creating

They may be responsible for creating the AI tech that many fear will wipe out jobs, if not the entire human race, but at least they feel just as paranoid and miserable about where this is all going as the rest of us.

At NeurIPS, one of the big AI research conferences, held this year at the San Diego Convention Center, visions of AI doom were clearly on the minds of many scientists in attendance. But are they seriously reckoning with AI’s risks, or are they too busy doing what amounts to fantasizing about scenarios they’ve read in sci-fi novels?

That’s the question raised in a new piece for The Atlantic by Alex Reisner, who attended NeurIPS and found that many attendees spoke in grand terms about AI’s risks, especially those that would be brought about by the creation of a hypothetical artificial general intelligence, while overlooking the tech’s mundane drawbacks.

“Many AI developers are thinking about the technology’s most tangible problems while public conversations about AI — including those among the most prominent developers themselves — are dominated by imagined ones,” Reisner wrote.

One researcher guilty of this? University of Montreal researcher Yoshua Bengio, one of the three so-called “godfathers” of AI whose work was foundational to creating the large language models propelling the industry’s indefatigable boom. Bengio has spent the past few years sounding the alarm about AI safety, and recently launched a non-profit called LawZero to encourage the tech’s safe development.

“Bengio was concerned that, in a possible dystopian future, AIs might deceive their creators and that ‘those who will have very powerful AIs could misuse it for political advantage, in terms of influencing public opinion,'” recalled Reisner. But the luminary “did not mention how fake videos are already affecting public discourse,” Reisner observed. “Neither did he meaningfully address the burgeoning chatbot mental-health crisis, or the pillaging of the arts and humanities. The catastrophic harms, in his view, are ‘three to 10 or 20 years’ away.”

Reisner wasn’t the only one to observe this disconnect. In a keynote speech titled “Are We Having the Wrong Nightmares About AI?,” the sociologist Zeynep Tufekci warned that researchers were missing the forest for the trees by focusing so much on the risks posed by AGI, a technology that we don’t even know will ever be possible to create, and for which there is no agreed-upon definition. After someone in the audience complained that the immediate risks Tufekci raised, like chatbot addiction, were already known, Tufekci responded, “I don’t really see these discussions. I keep seeing people discuss mass unemployment versus human extinction.”

It’s a fair point. The discourse around AI safety is often dominated by apocalyptic rhetoric, which is peddled even by the very billionaires building the stuff. OpenAI CEO Sam Altman has predicted that AI will wipe out entire categories of jobs and cause a crisis of widespread identity fraud, and he has admitted to doomsday prepping in case an AI system retaliates against humankind by unleashing a deadly virus.

And Bengio isn’t the only AI “godfather” racked with contrition. British computer scientist Geoffrey Hinton, who received the Turing Award in 2018 alongside Bengio and former Meta chief AI scientist Yann LeCun, has cast himself as an Oppenheimer-like figure in the field. In 2023, he famously said he regretted his life’s work after quitting his role at Google, and he recently held a discussion with Senator Bernie Sanders in which he went long on the tech’s myriad risks, including job destruction and militarized AI systems furthering empire.

Reisner made an ironic observation: that the name of NeurIPS, short for “Neural Information Processing Systems,” harks back to a time when scientists vastly underestimated the complexity of our brain’s neurons and compared them to the processing done by computers. “Regardless, a central feature of AI’s culture is an obsession with the idea that a computer is a mind,” he wrote. “Anthropic and OpenAI have published reports with language about chatbots being, respectively, ‘unfaithful’ and ‘dishonest.’ In the AI discourse, science fiction often defeats science.”

More on AI: Anthropic’s Chief Scientist Says We’re Rapidly Approaching the Moment That Could Doom Us All

Read more: https://futurism.com/artificial-intelligence/ai-industry-fears-creation