Axios Login
By Ina Fried and Ryan Heath · Jun 26, 2023
Ina here. And a special hi to the reader I saw at the Indigo Girls concert yesterday. (Sorry I didn't catch your name, but reply and say hi, if you like.) Today's Login is 1,242 words, a 5-minute read.
1 big thing: AI's next conflict is between open and closed Illustration: Aïda Amer/Axios
Open-source AI models, which let anyone view and manipulate the code, are growing in popularity as startups and giants alike race to compete with market leader ChatGPT, Axios' Ryan Heath writes.
- Why it matters: The White House and some experts fear open-source models could aid risky uses — especially synthetic biology, which could create the next pandemic. But those sounding the alarms may be too late.
What's happening: The wide-open code helps little guys take on tech giants. But it also could help dictators and terrorists.
- The startups are joined by Meta, which hopes to undercut Microsoft (OpenAI's key backer), and by foreign governments looking to out-innovate the U.S.
- Top government officials are freaked out by the national security implications of having large open-source AI models in the hands of anyone who can code.
In the closed corner are generative AI's early movers, including OpenAI and Google, which are seeking to protect their head start.
- OpenAI, despite its name, uses a closed model for ChatGPT — meaning it's kept full control and ownership.
Catch up quick: Building high-quality AI models has become much cheaper since the open release of Meta's 65 billion-parameter LLaMA foundation model.
- The White House's concern extends beyond large open-source models to smaller models dedicated to risky uses of AI such as synthetic biology.
Driving the news: MosaicML released a new open-source model Thursday — MPT-30B — which it says outperforms the original GPT-3.
- Hugging Face founder Clement Delangue testified before Congress last week that open source models "prevent black-box systems" and "make companies more accountable" while fostering innovation across the economy.
State of play: There are now at least 37 open-source LLMs, including smaller models that work nearly as well as the biggest models.
- Falcon, the top-ranked open-source model, was released on May 31 by the United Arab Emirates' Technology Innovation Institute and now outperforms LLaMA.
- The Beijing Academy of AI released the multilingual Aquila on June 9.
Be smart: Open-source code is, by definition, global, and once code is out "in the wild" it's almost impossible to corral or lock up.
- While open-source AI can be misused in myriad ways, any effort to squash it would likely fail: adversarial governments would not cooperate, and small U.S. companies would pay a price in hobbled innovation.
Between the lines: Advocates of both open and closed systems claim to be democratizing access to AI, and most models blend elements of each approach.
- OpenAI admits it could not have built the closed ChatGPT system without access to open source products.
- Both open-source and proprietary AI models face complex legal questions over the presence of copyrighted material in the data pools used to train them.
What they're saying: Sens. Richard Blumenthal (D-Conn.) and Josh Hawley (R-Mo.) wrote to Meta on June 6 suggesting the company did not conduct any meaningful assessment of how its LLaMA model could be misused once released to the public, and asked for proof of efforts to mitigate the risks.
The intrigue: The leaking of Meta's LLaMA model has allowed the company to play a spoiler role against Microsoft-aligned OpenAI and Google.
- Google did the same to Apple after the iPhone was released in 2007 when it made its Android mobile operating system open source. Android now dominates global smartphone market share (though Apple controls much of the market's most profitable high end).
2. Social scientists look to AI to study humans Illustration: Natalie Peeples/Axios
Social scientists are testing whether the AI systems that power ChatGPT and other text- and image-generating tools can be used to better understand the behaviors, beliefs and values of humans themselves, Axios' Alison Snyder reports.
Why it matters: Chatbots are being used to mimic the output of people — from cover letters to marketing copy to computer code. Some social scientists are now exploring whether these systems can offer new inroads to key questions about human behavior and reduce the time and cost of experiments.
Details: Two recent papers look at how social scientists might use large language models to address questions about human decision-making, morality, and a slew of other complex attributes at the heart of what it means to be human.
- One possibility is using LLMs in place of human participants, researchers wrote last week in the journal Science.
- They reason that LLMs, with their vast training sets, can produce responses that represent a greater diversity of human perspectives than data collected through questionnaires and other traditional tools of social science, which reach far fewer people. Scientists have already analyzed the word associations in texts to reveal gender or racial bias or how individualism changes in a culture over time.
- "So you can obviously scale it up and use sophisticated models with an agent being a representation of the society," says Igor Grossmann, a professor of psychology at the University of Waterloo and co-author of the article.
- "[A]t a minimum, studies that use simulated participants could be used to generate hypotheses that could then be confirmed in human populations," the authors write.
3. How AI is helping "sextortion" scammers Illustration: Aïda Amer/Axios
Rapidly advancing AI technologies are making it easier for scammers to extort victims, including children, by doctoring innocent photos into fake pornographic content, Axios' Jacob Knutson reports.
Why it matters: The warnings coincide with a general "explosion" of "sextortion" schemes targeting children and teens that have been linked to more than a dozen suicides, according to the FBI.
Driving the news: The National Center for Missing and Exploited Children has recently received reports of manipulated images of victims being shared on social media and other platforms, says John Shehan, a senior vice president at the organization.
How it works: Typical sextortion schemes involve scammers coercing victims into sending explicit images, then demanding payment to keep the images private or delete them from the web.
- But with AI, malicious actors can pull benign photos or videos from social media and create explicit content using open-source image-generation tools.
- So-called "deepfakes" and the threats they pose have been around for years, but the tools to create them have recently become extremely powerful and more user-friendly, said John Wilson, a senior fellow at cybersecurity firm Fortra.
The big picture: The FBI said earlier this month that it has received reports from victims — including minors — that innocuous images of them had been altered using AI tools to create "true-to-life" explicit content, then shared on social media platforms or porn sites.
- "Once circulated, victims can face significant challenges in preventing the continual sharing of the manipulated content or removal from the internet," the FBI said.
- Last year, the FBI received 7,000 reports of financial sextortion against minors, involving at least 3,000 victims — primarily boys — according to a December public safety alert.
A message from Axios
Let AI make your inbox more focused Generative AI is changing how we work. Research shows it can cut your "writing" phase 50%+ — if you let it.
Axios HQ helps 500+ organizations...
- Make vital emails 40% shorter.
- Shift time from writing to editing.
You're still the one writing. You just spend less time doing it.
4. Take note
On Tap
- Collision takes place today through Thursday in Toronto, while Snowflake Summit runs over the same stretch in Las Vegas.
Trading Places
- AI-powered tech services firm Turing has hired Phil Walsh as CMO. Walsh previously held the same title at healthcare AI firm Akasa.
ICYMI
- Using images of public figures to create deepfake porn is illegal in only a handful of states. (The Atlantic)
- Mark Gurman looks at Apple's plans beyond Vision Pro, including future VR headsets as well as some of the next computers, phones and tablets. (Bloomberg)
5. After you Login
This 2016 street art exhibit of objects casting fake-but-amazing shadows is a classic worth revisiting.
Thanks to Scott Rosenberg for editing and Bryan McBournie for copy editing this newsletter.
Your personal policy analyst is here. Track tech policy formation at every step of the process with Axios Pro. Talk to our sales team today.
Axios thanks our partners for supporting our newsletters.
Axios, 3100 Clarendon Blvd, Arlington VA 22201
Sponsorship has no influence on editorial content. You received this email because you signed up for newsletters from Axios.
To stop receiving this newsletter, unsubscribe or manage your email preferences. Was this email forwarded to you?
Sign up now to get Axios in your inbox.
Follow Axios on social media: