Cambridge awarded $2M to stop AI from ‘undermining core human values’

CAMBRIDGE, United Kingdom — Artificial intelligence makes life easier. But is humanity implementing too much AI too fast? Are the proper precautions in place to ensure no system, algorithm, or robot ever goes against basic human values? Scientists from the University of Cambridge’s Leverhulme Centre for the Future of Intelligence (LCFI) have been awarded €1.9 million ($2.1 million) to research that very topic, with the goal of forming a fuller understanding of how to prevent AI from undermining “core human values.”

Mere science fiction just a few short decades ago, artificial intelligence has infiltrated virtually every aspect of modern life. Beyond just the ads you see online or the posts displayed on your social media feeds, algorithms have become near-ubiquitous across industries. AI systems are now routinely used to narrow down job candidates, approve or deny insurance claims, and even choose which patients receive requested medical procedures.

Unsettlingly, there are already plenty of examples of AI algorithms causing “unintended social consequences.” One obvious example is the rampant misinformation on social media, actively promoted by algorithms programmed simply to “increase engagement.” Other algorithms have shown racial bias in healthcare settings as well.

The sizable grant will help the Cambridge team collaborate with the AI industry itself and assist in the development of anti-discriminatory design principles, helping to ensure ethics are “at the heart of technological progress” moving forward. Researchers will put together toolkits and training programs for AI developers aimed at stopping pre-existing structural inequalities (gender, class, race) from being built into new and emerging systems.

“There is a huge knowledge gap,” says Dr. Stephen Cave, Director of LCFI, in a statement. “No one currently knows what the impact of these new systems will be on core values, from democratic rights to the rights of minorities, or what measures will help address such threats. Understanding the potential impact of algorithms on human dignity will mean going beyond the code and drawing on lessons from history and political science.”

Last year, LCFI made headlines by announcing the first-ever master’s program focused on teaching AI ethics to industry professionals. The new grant will help the research team develop further methods of researching, understanding, and ultimately protecting human dignity in the digital age.

“AI technologies are leaving the door open for dangerous and long-discredited pseudoscience,” Dr. Cave adds. “It’s great that governments are now taking action to ensure AI is developed responsibly,” he concludes, “but legislation won’t mean much unless we really understand how these technologies are impacting on fundamental human rights and values.”
