AI Ethics: What’s Right? Dangers of Automating Candidate Reviews; Berkeley Bans Facial Recognition
A man holds a door to a Didi self-driving car during the World Artificial Intelligence Conference in Shanghai, China, in August 2019. PHOTO: JOSH HORWITZ/REUTERS
The ethics of AI: What happens when humans can’t agree on what is right? One of the appealing features of AI is its ability to come up with the right answer faster and more reliably than humans can, writes Stuart Madnick for the WSJ. In many cases, like 2 + 2 = 4, the right answer is singular and unambiguous.
But what is AI to do when it comes to ethical questions on which humans don't agree? This challenge has two important consequences, Mr. Madnick writes. First, it can delay the introduction or acceptance of new AI applications, such as autonomous vehicles. Second, it will require that management be prepared to explain and justify the rationale for how their AI will make these decisions.
Mr. Madnick, the John Norris Maguire Professor of Information Technologies at the MIT Sloan School of Management, brings up an example he uses in one of his classes:
Imagine an autonomous vehicle, under the control of AI, is driving down a steep mountain in Switzerland. It makes a sharp turn and the sensors and object-recognition software realize that: (1) there is a woman pushing a baby carriage in the crosswalk to the right, (2) three gentlemen have entered the intersection on the left, and (3) there are concrete barriers on the left and right.
The software also realizes that given the speed of the vehicle, the condition of the pavement and the distances involved, it would not be able to stop before reaching the intersection.
So, what should the vehicle do? When I ask my students, including senior executives, for a definitive, instant answer, there is a strong reluctance to respond. But situations like this will soon be reality and the consequences cannot be ignored….
For managers, this simple example makes clear the challenges of addressing ethical questions for AI. There will likely be many such situations as we apply AI to an increasing number of applications beyond autonomous vehicles. Of course, there is no decision that everyone would agree with. But much like the medical profession had to determine how to triage patients in the aftermath of a disaster, managers will need to develop a set of thoughtful principles, and not leave such decisions to some programmer of the AI. And they will need to be prepared to defend those principles….
Ethical decisions are indeed hard, and AI will increasingly raise these dilemmas. The dialogue on these matters must be started now: by the creators of the science, by the business leaders responsible for its uses, and by society, which will have to live with the consequences.
A hiring sign is seen in Fort Lauderdale, Fla., in May. PHOTO: JOE RAEDLE/GETTY IMAGES
The dangers of asking AI to evaluate a job candidate's interview. For hiring managers, the potential of having AI weed out incompatible or downright unpleasant people is seductive, writes Lynda Spiegel for WSJ.
But as with most tech advances, there are ethical and legal implications surrounding AI-driven personality screening to consider. Ms. Spiegel, a founder of Rising Star Résumés, a job-search coaching and résumé service, notes that accuracy is one concern with an AI system that claims to identify undesirable personality types. How might the software’s conclusions be distorted by somebody having a bad day, or somebody who didn’t get much sleep the night before, or a person fighting a cold? And what about individuals who may have a medical condition, such as Bell’s palsy, that could negatively impact the software’s analysis of their personality? She continues:
While human-resources professionals usually educate hiring managers about which questions they can't ask during a job interview, AI software can provide employers with similar information that has traditionally been considered private, without the candidate's knowledge or consent. That raises ethical considerations, as well as legal ones if the information violates federal laws such as the Americans with Disabilities Act.
As an employer, I would argue that the perceived upside of personality screening in the workplace is overstated. Sure, no one enjoys working with difficult people, but that’s where one-on-one coaching from an HR pro can have an impact.
Artificial intelligence has greatly improved myriad processes in the workplace, but there is still a strong argument against its use in recruitment. While the tools save time, they eliminate reliance on human intuition in determining who is best suited to join our workforce.
Berkeley bans facial recognition. Berkeley, Calif., has become the latest American city to prohibit government use of facial recognition, reports Bloomberg Law. The city joins San Francisco, Oakland, Calif., and Somerville, Mass., which have enacted similar bans.
Data-labeling market seen hitting $5 billion by 2023. To date, only one in five businesses aware of AI's potential have deployed the technology in their core operations. One reason for the slow uptake, according to the Economist, is a shortage of the quality data needed to train algorithms. The most common form of AI requires vast amounts of labeled data, and because labeling is mundane work, companies prefer to have it done by outsiders, such as Hive or Amazon-owned Mechanical Turk, according to the report. The market for data-labeling services may hit $5 billion by 2023, according to Astasia Myers of Redpoint Ventures, a venture-capital firm.
Thrive Global acquires AI startup. Thrive Global, a behavior-change media and technology company founded by Arianna Huffington, acquired neuroscience-based artificial-intelligence startup Boundless Mind for an undisclosed amount, reports WSJ. In connection with the acquisition, Thrive secured a new round of funding led by JAZZ Venture Partners, a firm that previously backed Boundless, bringing the total raised by the company to over $65 million.
What: WSJ Tech Live Conference
Where: Laguna Beach, Calif.
When: Oct. 21-23, 2019
The Wall Street Journal’s global technology conference, WSJ Tech Live, returns to Laguna Beach, Calif., for its fifth year from Oct. 21 to Oct. 23 at the Montage hotel. Join top leaders in technology, media and business for newsmaking onstage interviews with Robert Iger, CEO of Walt Disney Co.; Jeff Wilke, CEO of worldwide consumer at Amazon; Shari Redstone, vice chair of CBS and Viacom; Ajit Pai, chairman of the Federal Communications Commission; Michael Schroepfer, chief technology officer of Facebook; and many more.
Some of the themes we’ll be exploring this year:
- Big tech and regulation
- The streaming wars
- The race to 5G
- The landscape of threat and security
- The U.S.-China trade war and its impact on technology
The full speaker list for 2019 and the agenda can be found here.