OpenAI signed separate agreements with The Atlantic and Vox Media allowing the AI developer to use their news content to train models for products like ChatGPT. In turn, The Atlantic and Vox will use OpenAI technology to enhance their advertising and news products. Vox said it will use OpenAI tools to “enhance” Gift Scout, a user-facing search tool on its Strategist site, and to improve Forte, its targeted advertising platform. “Advertisers will benefit from the OpenAI partnership through stronger creative optimization and audience segment targeting, leading to even higher campaign performance,” Vox said. The Atlantic said it will have “privileged access” to OpenAI tech and will help shape “news experiences” on ChatGPT and other OpenAI services. The magazine will use OpenAI tools to develop Atlantic Labs, an experimental “microsite” using AI to build “new products and features to better serve” the magazine’s “journalism and readers.” Atlantic CEO Nicholas Thompson said the partnership will make the company’s “reporting and stories more discoverable to millions” and give it a “voice in shaping how news is surfaced” on OpenAI platforms. Vox Media CEO Jim Bankoff said the agreement “aligns with our goals of leveraging generative AI to innovate for our audiences and customers, protect and grow the value of our work and intellectual property, and boost productivity and discoverability to elevate the talent and creativity of our exceptional journalists and creators.” OpenAI Chief Operating Officer Brad Lightcap said the company is “dedicated to supporting high-quality journalism and the publishing ecosystem.”
With the EU AI Act set to become law soon (see 2405210001), the European Commission launched an internal AI Office expected to play a major role in implementing the measure. The office includes a regulation and compliance unit to ensure the act is enforced uniformly across the EU; an AI safety unit to identify systemic risks of very capable general-purpose models and possible mitigation measures; an AI for societal good unit to handle international engagement on issues such as weather modeling and cancer diagnoses; and an AI innovation and policy coordination unit to oversee the EU's AI strategy and AI uptake. The office will be led by a head of the AI Office, working under the guidance of a scientific adviser and an adviser for international affairs. The organizational changes become effective June 16; the act itself is expected to take effect in July, after which businesses will have two years to comply.
An Oregon advisory council will develop recommendations for using AI across state government, Gov. Tina Kotek (D) said Wednesday. AI has potential risks and benefits, said Kotek, who created the council through an executive order Tuesday: “This rapidly developing technological landscape leads to questions that we must take head on, including concerns regarding ethics, privacy, equity, security, and social change.” State Chief Information Officer Terrence Woods will chair the council, Kotek’s office said.
International collaboration and interoperability are critical as AI develops, panelists said Thursday at the European AI Alliance Assembly in Madrid. Individual nations and regions are considering their approaches to AI regulation. Canada proposed the Artificial Intelligence and Data Act in 2022, which takes a framework approach, said Karen Kennedy, Canada's senior trade commissioner. Instead of addressing every possible risk, the measure sets out a general, risk-based framework focused on interoperability that can be shaped over time through further laws and standards. The U.S. wants "digital solidarity" rather than "digital sovereignty" as it moves forward on AI, said Merry Walker, State Department senior adviser to the special envoy for critical and emerging technology. It issued an AI bill of rights last year, followed by President Joe Biden's executive order on AI earlier this month. The executive order is a stopgap that provides a framework for what the U.S. wants to accomplish while lawmakers consider what to do, she said. The U.S. approach is also risk-based, and it emphasizes the importance of testing, monitoring and evaluating throughout the life of AI systems. The U.S. relies heavily on voluntary commitments and is pushing for standard-setting and international cooperation. The U.S. approach is similar to the EU's in its emphasis on standardization, which is important for international cooperation, said Lucilla Sioli, European Commission director for artificial intelligence and digital industry, Directorate-General for Communications Networks, Content and Technology. Codes of conduct are good, she said, but it's also useful to check how they're implemented and whether they're being complied with. Interoperability is key to preventing regulatory fragmentation, said Ulrik Vestergaard Knudsen, Organisation for Economic Co-operation and Development deputy secretary-general: Even among like-minded countries there are differences. This leads to the question of what globally acceptable values can be found, and whether systems can be made interoperable, he said. At least eight initiatives on AI are taking place worldwide. Some, like the U.N. or Group of 20, are broad; others, such as the G7 and the U.K. AI Safety Summit, are narrower. It's too late for everyone to take a single approach, so they must find common ground on principles, Knudsen said. Latin American and Caribbean nations reached a "milestone" two weeks ago when they approved the Santiago Declaration, which will establish an AI working group, said Eleonora Lamm, UNESCO program specialist-social and human sciences. This will be the first regional governance body on AI, and it will act on the political and technical levels, she said.
Congress must act quickly to address AI-driven harms in U.S. elections, Senate Majority Leader Chuck Schumer, D-N.Y., said Wednesday on the Senate floor. He hosted the fifth and sixth AI Insight Forums, where attendees heard from experts about elections, privacy and liability. Attendees included: Microsoft Information Integrity Director Matthew Masterson, Meta Vice President-Public Policy Neil Potts, Google Global Elections Integrity Director David Vorhaus, Anti-Defamation League Vice President-Center for Technology & Society Yael Eisenstat, TechFreedom Free Speech Counsel Ari Cohn, Michigan Secretary of State Jocelyn Benson, Utah Lt. Gov. Deidre Henderson (R), Public Knowledge CEO Chris Lewis, Information Technology and Innovation Foundation Vice President Daniel Castro, Center for Democracy & Technology Vice President-Policy Samir Jain, Match Group CEO Bernard Kim and Mozilla President Mark Surman. Schumer raised concerns about AI-generated political ads and uncensored chatbots harming candidates. “If we don’t enact the right guardrails soon, we could soon live in a world where political campaigns regularly deploy totally fabricated -- yet totally believable -- images and footage” of candidates from both parties, “distorting their statements and greatly harming their election chances,” he said.
AI-generated content that violates copyright law may also constitute unfair competition or an unfair or deceptive practice, exposing violators to enforcement action, the FTC told the Copyright Office in comments Tuesday. The CO is studying policy issues related to AI technology to determine whether legislative or regulatory action is needed (see 2310310042). The FTC will “vigorously use the full range of its authorities to protect Americans from deceptive and unfair conduct and maintain open, fair, and competitive markets,” the agency said. That includes enforcement against violators that exploit creators’ reputations, diminish the value of creative work, share private information or cause injury to consumers, it said.
The National Institute of Standards and Technology’s National AI Advisory Committee will hold an open meeting Oct. 19, in person and virtually, at the Commerce Department’s Hoover Building. The meeting will include an update from the committee's Law Enforcement Subcommittee.