House Lawmakers Discuss How to Curb Bias in AI
Welcome back. House lawmakers on Wednesday heard from experts with a range of ideas on how to manage the role of bias in artificial intelligence. As adoption of AI rises, there's growing pressure for regulation and oversight. How best to cope? Auditing and testing of algorithmic results may be part of the answer, John McCormick reports for WSJ Pro.
How are we doing? Let us know what you think, using the email links at the bottom of this newsletter.
The U.S. Capitol building in Washington. The House Financial Services Committee’s AI task force held its fifth hearing on Wednesday. CREDIT: AMANDA ANDRADE-RHOADES/BLOOMBERG NEWS
Experts told lawmakers Wednesday that AI needs to be regulated to ensure that financial-services companies using it don't discriminate against minorities and women. “This is an especially timely topic,” Rep. Bill Foster (D., Ill.), chairman of the House Financial Services Committee’s AI task force, said Wednesday during a hearing. “It seems as though every week we’re hearing stories and questions about biased algorithms in the lending space, from credit cards that discriminate against women to loans that discriminate based on where you went to school.”
This is the fifth hearing for the task force, which was created in May to help Congress keep up with AI developments. More hearings are expected, and the task force could conclude its work by issuing recommendations for legislative and regulatory policy changes, according to congressional staffers.
Some possible coping strategies:
- Mr. Foster brought up two possible remedies: having algorithms or their outputs audited by third-party experts, and having companies using AI regularly self-test and perform benchmarking analyses that would be submitted to regulators for review.
- Rayid Ghani, a professor in the Machine Learning Department and Heinz College of Information Systems and Public Policy at Carnegie Mellon University, proposes expanding regulatory frameworks in different policy areas to account for AI-assisted decision making. “A lot of these bodies already exist, SEC, Finra, CFPB, FDA, FEC,” he said. “We also recommend creating training programs, processes and tools to support these regulatory agencies.”
Deep Instinct raises $43 million. Deep Instinct, which uses deep learning to identify and stop a range of established and new cyberattacks, has raised $43 million in a series C round, TechCrunch reports. The funding was led by Millennium New Horizons, with Unbound, LG and Nvidia participating, according to TechCrunch. Deep Instinct, based in Israel, has raised a total of $100 million, with HP and Samsung among previous backers, TechCrunch said. The tech companies bundle and resell Deep Instinct’s solutions, or use them directly in their own services, according to TechCrunch.
CybelAngel raises $36 million. CybelAngel, a digital risk management platform that provides actionable threat intelligence, has raised $36 million in a series B round, according to VentureBeat. The round was co-led by Prime Ventures and TempoCap, VentureBeat said.
The Google logo is seen at an event in Paris in May 2019. CREDIT: CHARLES PLATIAU/REUTERS
Optical sensors such as cameras and lidar tend to be confused by glass containers and other transparent objects. That limitation led a team of Google researchers to collaborate with Columbia University and Synthesis AI, a data generation platform for computer vision, to develop ClearGrasp, according to VentureBeat. “It’s an algorithm capable of estimating accurate 3-D data of transparent objects,” VentureBeat said.
Public-health experts question whether the WHO has been too deferential to China in its handling of the new virus. It’s a conundrum that threatens the agency’s global authority. (WSJ)