Protecting the public from abusive AI-generated content - Microsoft On the Issues (2024)

AI-generated deepfakes are realistic, easy for nearly anyone to make, and increasingly being used for fraud, abuse, and manipulation – especially to target kids and seniors. While the tech sector and non-profit groups have taken recent steps to address this problem, it has become apparent that our laws will also need to evolve to combat deepfake fraud. In short, we need new laws to help stop bad actors from using deepfakes to defraud seniors or abuse children.  

While we and others have rightfully been focused on deepfakes used in election interference, the broad role they play in these other types of crime and abuse needs equal attention. Fortunately, members of Congress have proposed a range of legislation that would go a long way toward addressing the issue, the Administration is focused on the problem, groups like AARP and NCMEC are deeply involved in shaping the discussion, and industry has worked together and built a strong foundation in adjacent areas that can be applied here.  

One of the most important things the U.S. can do is pass a comprehensive deepfake fraud statute to prevent cybercriminals from using this technology to steal from everyday Americans. 

We don’t have all the solutions, or perfect ones, but we want to contribute to and accelerate action. That’s why today we’re publishing a 42-page report on what has grounded our understanding of the challenge, along with a comprehensive set of ideas, including endorsements of the hard work and policies of others. Below is the foreword I’ve written to what we’re publishing. 

____________________________________________________________________________________

The following foreword was written by Brad Smith for Microsoft’s report Protecting the Public from Abusive AI-Generated Content. Find the full copy of the report here: https://aka.ms/ProtectThePublic

“The greatest risk is not that the world will do too much to solve these problems. It’s that the world will do too little. And it’s not that governments will move too fast. It’s that they will be too slow.”

Those sentences conclude the book I coauthored in 2019 titled “Tools and Weapons.” As the title suggests, the book explores how technological innovation can serve as both a tool for societal advancement and a powerful weapon. In today’s rapidly evolving digital landscape, the rise of artificial intelligence (AI) presents both unprecedented opportunities and significant challenges. AI is transforming small businesses, education, and scientific research; it’s helping doctors and medical researchers diagnose and discover cures for diseases; and it’s supercharging the ability of creators to express new ideas. However, this same technology is also producing a surge in abusive AI-generated content, or as we will discuss in this paper, abusive “synthetic” content. 

Five years later, we find ourselves at a moment in history when anyone with access to the Internet can use AI tools to create a highly realistic piece of synthetic media that can be used to deceive: a voice clone of a family member, a deepfake image of a political candidate, or even a doctored government document. AI has made manipulating media significantly easier—quicker, more accessible, and requiring little skill. As swiftly as AI technology has become a tool, it has become a weapon. As this document goes to print, the U.S. government has announced that it successfully disrupted a nation-state-sponsored, AI-enhanced disinformation operation. FBI Director Christopher Wray said in his statement, “Russia intended to use this bot farm to disseminate AI-generated foreign disinformation, scaling their work with the assistance of AI to undermine our partners in Ukraine and influence geopolitical narratives favorable to the Russian government.” While we should commend U.S. law enforcement for working cooperatively and successfully with a technology platform to conduct this operation, we must also recognize that this type of work is just getting started. 

The purpose of this white paper is to encourage faster action against abusive AI-generated content by policymakers, civil society leaders, and the technology industry. As we navigate this complex terrain, it is imperative that the public and private sectors come together to address this issue head-on. Government plays a crucial role in establishing regulatory frameworks and policies that promote responsible AI development and usage. Around the world, governments are taking steps to advance online safety and address illegal and harmful content. 

The private sector has a responsibility to innovate and implement safeguards that prevent the misuse of AI. Technology companies must prioritize ethical considerations in their AI research and development processes. By investing in advanced analysis, disclosure, and mitigation techniques, the private sector can play a pivotal role in curbing the creation and spread of harmful AI-generated content, thereby maintaining trust in the information ecosystem. 

Civil society plays an important role in ensuring that both government regulation and voluntary industry action uphold fundamental human rights, including freedom of expression and privacy. By fostering transparency and accountability, we can build public trust and confidence in AI technologies. 

The following pages do three specific things: 1) illustrate and analyze the harms arising from abusive AI-generated content, 2) explain what Microsoft’s approach is, and 3) offer policy recommendations to begin combating these problems. Ultimately, addressing the challenges arising from abusive AI-generated content requires a united front. By leveraging the strengths and expertise of the public, private, and NGO sectors, we can create a safer and more trustworthy digital environment for all. Together, we can unleash the power of AI for good, while safeguarding against its potential dangers. 

Microsoft’s responsibility to combat abusive AI-generated content

Earlier this year, we outlined a comprehensive approach to combat abusive AI-generated content and protect people and communities, based on six focus areas: 

  1. A strong safety architecture. 
  2. Durable media provenance and watermarking. 
  3. Safeguarding our services from abusive content and conduct.
  4. Robust collaboration across industry and with governments and civil society. 
  5. Modernized legislation to protect people from the abuse of technology. 
  6. Public awareness and education. 

Core to all six of these is our responsibility to help address the abusive use of technology. We believe it is imperative that the tech sector continue to take proactive steps to address the harms we are seeing across services and platforms. We’ve taken concrete steps, including: 

  • Implementing a safety architecture that includes red team analysis, preemptive classifiers, blocking of abusive prompts, automated testing, and rapid bans of users who abuse the system (a minimal illustrative sketch of such a prompt gate follows this list). 
  • Automatically attaching provenance metadata to images generated with OpenAI’s DALL-E 3 model in Azure OpenAI Service, Microsoft Designer, and Microsoft Paint. 
  • Developing standards for content provenance and authentication through the Coalition for Content Provenance and Authenticity (C2PA) and implementing the C2PA standard so that content carrying the technology is automatically labeled on LinkedIn. 
  • Taking continued steps to protect users from online harms, including by joining the Tech Coalition’s Lantern program and expanding PhotoDNA’s availability. 
  • Launching new detection tools like Azure Operator Call Protection for our customers to detect potential phone scams using AI. 
  • Executing our commitments to the new Tech Accord to combat deceptive use of AI in elections. 
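
To make the first of these steps a little more concrete, below is a minimal sketch of a preemptive prompt gate of the kind described above. Everything in it is hypothetical: the patterns, the PromptGate class, and the three-strike ban threshold are illustrative stand-ins, not Microsoft's actual safety architecture, which layers trained classifiers, red-team findings, and human review on top of any simple rule matching.

```python
import re
from collections import defaultdict

# Deliberately tiny, hypothetical rule set. Production safety systems rely on
# trained classifiers and continuously updated red-team findings, not regexes.
BLOCKLIST_PATTERNS = [
    re.compile(r"\bundress\b", re.IGNORECASE),
    re.compile(r"\bclone\b.+\bvoice\b", re.IGNORECASE),
]

class PromptGate:
    """Blocks abusive prompts and rapidly bans repeat offenders (illustrative)."""

    def __init__(self, strike_limit: int = 3):
        self.strike_limit = strike_limit   # hypothetical three-strike policy
        self.strikes = defaultdict(int)
        self.banned = set()

    def allow(self, user_id: str, prompt: str) -> bool:
        """Return True if the prompt may be passed to the generative model."""
        if user_id in self.banned:
            return False
        if any(p.search(prompt) for p in BLOCKLIST_PATTERNS):
            self.strikes[user_id] += 1
            if self.strikes[user_id] >= self.strike_limit:
                # "rapid bans of users who abuse the system"
                self.banned.add(user_id)
            return False
        return True

gate = PromptGate()
print(gate.allow("user-1", "A watercolor of a lighthouse at dawn"))  # True
print(gate.allow("user-1", "Clone my neighbor's voice for a call"))  # False
```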

Protecting Americans through new legislative and policy measures 

This February, Microsoft and LinkedIn joined dozens of other tech companies to launch the Tech Accord to Combat Deceptive Use of AI in 2024 Elections at the Munich Security Conference. The Accord calls for action across three key pillars, which inspired the additional work in this white paper: addressing deepfake creation, detecting and responding to deepfakes, and promoting transparency and resilience. 

In addition to combating AI deepfakes in our elections, it is important for lawmakers and policymakers to take steps to expand our collective abilities to (1) promote content authenticity, (2) detect and respond to abusive deepfakes, and (3) give the public the tools to learn about synthetic AI harms. We have identified new policy recommendations for policymakers in the United States. These ideas are complex, but it helps to think about the work in straightforward terms. These recommendations aim to: 

  • Protect our elections.
  • Protect seniors and consumers from online fraud.
  • Protect women and children from online exploitation.

Along those lines, it is worth mentioning three ideas that may have an outsized impact in the fight against deceptive and abusive AI-generated content. 

  • First, Congress should enact a new federal “deepfake fraud statute.” We need to give law enforcement officials, including state attorneys general, a standalone legal framework for prosecuting AI-generated fraud and scams as they grow in speed, scale, and complexity. 
  • Second, Congress should require AI system providers to use state-of-the-art provenance tooling to label synthetic content. This is essential to build trust in the information ecosystem and will help the public better understand whether content is AI-generated or manipulated; a rough sketch of what such a provenance check can look like follows this list. 
  • Third, we should ensure that our federal and state laws on child sexual exploitation and abuse and on non-consensual intimate imagery are updated to include AI-generated content. Penalties for the creation and distribution of child sexual abuse material (CSAM) and non-consensual intimate imagery (NCII), whether synthetic or not, are common-sense and sorely needed if we are to mitigate the scourge of bad actors using AI tools for sexual exploitation, especially when the victims are often women and children. 
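
To give a feel for what provenance tooling enables in practice, here is a rough consumer-side sketch that checks a file for C2PA Content Credentials. It is an assumption-laden example: it presumes the open-source c2patool command-line utility from the C2PA community is installed and on PATH, and because output and exit behavior vary across versions, it conservatively treats any non-zero exit code as "no verifiable credentials." The file names are hypothetical.

```python
import subprocess

def has_content_credentials(path: str) -> bool:
    """Heuristically check whether a media file carries a C2PA manifest.

    Assumes the open-source `c2patool` CLI is installed. When an asset has
    no manifest (or the tool fails for any reason), we report "unlabeled";
    this exit-code heuristic is an assumption, not documented behavior.
    """
    try:
        result = subprocess.run(
            ["c2patool", path],
            capture_output=True,
            text=True,
        )
    except FileNotFoundError:
        raise RuntimeError("c2patool is not installed or not on PATH")
    return result.returncode == 0

if __name__ == "__main__":
    # Hypothetical file names, purely for illustration.
    for f in ["generated_image.png", "camera_photo.jpg"]:
        status = "carries" if has_content_credentials(f) else "lacks"
        print(f"{f} {status} Content Credentials")
```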

These are not necessarily new ideas, and the good news is that some of them, in one form or another, are already starting to take root in Congress and state legislatures. We highlight specific pieces of legislation that map to our recommendations in this paper, and we encourage their prompt consideration by our state and federal elected officials. 

Microsoft offers these recommendations to contribute to the much-needed dialogue on AI synthetic media harms. Enacting any of these proposals will fundamentally require a whole-of-society approach. While it’s imperative that the technology industry have a seat at the table, it must come to that table with humility and a bias toward action. Microsoft welcomes additional ideas from stakeholders across the digital ecosystem to address synthetic content harms. Ultimately, the danger is not that we will move too fast, but that we will move too slowly or not at all. 

Tags: AI, elections, generative AI, LinkedIn, online safety, responsible AI
