Marie Claire Chelini, Trinity Communications
Trinity College of Arts & Sciences has launched an initiative to spur innovative research on the co-evolution of artificial intelligence and human behavior: the Society-Centered AI Initiative at Duke.
Directed by Chris Bail, professor of Sociology, Political Science and Public Policy, the Society-Centered AI Initiative at Duke is a collaborative effort aimed at fostering interdisciplinary research exploring the myriad ways in which AI will influence human behavior — and how social factors will shape the future of such technology.
Though these topics seem like two sides of the same coin, they are seldom addressed together; instead, they are studied by largely separate disciplines. This approach is limited: To fully understand the impact of AI on society, social scientists and humanists must understand how AI systems are developed and trained. Computer scientists, in turn, must gain a better understanding of human behavior and societal issues to push AI toward increasingly complex applications without losing sight of ethics or potential pitfalls.
Bail emphasizes that despite the attention given to the negative impacts of AI on society — privacy violations, racial biases, furthering inequalities — there is also room for optimism: “AI can help researchers understand human behavior, create better experiments and enable new forms of analysis that can help us study entire societies instead of just small groups of people,” he said.
“We are excited for the launch of this initiative and its potential for creating breakthrough insights into topics that have been heretofore impossible for us to study,” said Gary G. Bennett, Dean of Trinity College. “The initiative will build upon existing collaborations among faculty and across disciplines at Duke and position new interdisciplinary dialogue and research in the field of society-centered AI.”
And there is no shortage of possible breakthroughs. Pardis Emami-Naeini, assistant professor of Computer Science, wants to make sure that human-AI interactions are safe. One of her projects explores the privacy and ethical concerns associated with the use of AI chatbots to improve mental wellness. She also aims to develop an AI “nutrition” label to inform users about the data usage and security practices of generative AI technologies. “This transparency interface will empower users to make more informed decisions when engaging with generative AI,” she said.
Jon Green, assistant professor of Political Science, is also working with chatbots. He is eager to gain a better understanding of people’s political perspectives by combining the thorough information obtained through one-on-one interviews with the scalability offered by large language model-based interviewing agents. “Mass opinion surveys offer a broad overview of the public’s political attitudes but cannot capture the depth of people’s beliefs. Interviews can provide much more information but are too time- and labor-intensive to be conducted at mass scale,” he said. “By using large language model-based chatbots to conduct semi-structured interviews, we have the potential to incorporate the strengths of both approaches.”
Politics is also an interest of Kamesh Munagala, professor of Computer Science, albeit from a social choice perspective. He said that while traditional social choice theory focuses on the relationship between individual votes and societal outcomes, AI and moderated online platforms now allow individuals not only to express preferences but also to influence, and be influenced by, the opinions of others. He sides with the optimists: “Does this interactive environment lead to more optimal societal outcomes? Can they be modeled, and their effectiveness quantified? And could they inform the design of online platforms that promote more constructive and beneficial societal discourse?”
Duke’s strong tradition of interdisciplinarity makes it particularly well suited to the collaborative nature of the Society-Centered AI Initiative. Preliminary meetings have gathered faculty not only from the departments of Political Science and Computer Science, but also from Statistical Science and Sociology, as well as the Pratt School of Engineering, the Fuqua School of Business, the School of Law, the School of Nursing, the Sanford School of Public Policy and the Social Science Research Institute.
“Many of the issues social scientists are dealing with are the exact same issues that computer scientists and engineers are dealing with,” said Bail. “But it was striking to see interest from every single school on campus, which suggests that the core objective here — how does AI influence society and how will society influence AI — is really spreading into every field, from law to medicine to engineering.”
Over the next three years, the Society-Centered AI Initiative at Duke will host a series of events to strengthen existing relationships between faculty and generate opportunities for new research collaborations within and beyond Duke. These events will include a mini-conference, a hackathon and a distinguished lecture series that will invite some of the most influential society-centered AI researchers in the country.
###
To receive the Society-Centered AI Initiative’s latest news and announcements, subscribe to its listserv.