This paper argues that the plethora of experiments with decentralized social technologies (DSTs), clusters of which are sometimes called "the Web 3.0 ecosystem" or "the Fediverse," has brought us to a constitutional moment. These technologies enable radical innovations in social, economic, and political institutions and practices, with the potential to support transformative approaches to political economy, and they demand governance innovation. The paper develops a framework of prudent vigilance for making ethical choices in this space, one that helps both to grasp positive opportunities for transformation and to avoid potentially problematic consequences. Most of our specific examples and concerns come from the blockchain/Web3 universe, as it has received the greatest investment, attention, and adoption to date; however, we aim to offer a framework for governance decision-making under conditions of uncertainty that applies more broadly to other DSTs. Specifically, under the framework of prudent vigilance, we propose a pragmatic, democratic, and pluralist approach to navigating the bold experimentation with social practices and political economy that these technologies enable. Our overarching goal is to provide a framework that is open to transformative improvement yet constrained by guardrails and guiding values supportive of democracy, freedom, and pluralism. We take a relatively strong position rather than simply laying out ethical issues and potential approaches: we seek to be provocative in order to spur further work, and we hope this paper will serve as a first bridge between academic philosophy and the DST community, which have hardly interacted to date.