Why ‘prosocial AI’ must be the framework for designing, deploying and governing AI

With AI now omnipresent in every sphere of modern life, the biggest challenge facing business leaders, policymakers and innovators is no longer whether to adopt intelligent systems, but how. In a world marked by increasing polarization, resource depletion, erosion of trust in institutions and unstable information landscapes, the critical imperative is to design AI that contributes meaningfully and sustainably to human and planetary well-being.

Prosocial AI – a framework of design, deployment and governance principles that ensure AI is tailored, trained, tested and targeted to benefit people and the planet – is more than a moral stance or a PR veneer. It is a strategic approach that positions AI within a broader ecology of intelligence, one that prioritizes collective flourishing over narrow optimization.

The ABCD of AI Potential: From Gloom to Glory

The case for prosocial AI rests on four intertwined domains: agency, bonding, climate and division (ABCD). Each highlights the dual character of AI: it can either intensify existing dysfunctions or act as a catalyst for regenerative and inclusive solutions.

  • Agency: Too often, AI-powered platforms rely on addictive loops and opaque recommendation systems that erode user autonomy. Prosocial AI, by contrast, can strengthen agency by revealing where its suggestions come from, giving users meaningful controls, and respecting the multifaceted nature of human decision-making. This is not “consent” or “transparency” as abstract buzzwords; it is about designing AI interactions that recognize human complexity – the interplay of cognition, emotion, bodily experience and social context – and that enable individuals to navigate their digital environments without succumbing to manipulation or distraction.
  • Bonding: Digital technologies can either splinter societies into echo chambers or serve as bridges connecting diverse people and ideas. Prosocial AI applies nuanced linguistic and cultural models to identify shared interests, highlight constructive contributions, and foster empathy across borders. Instead of fueling outrage to capture attention, it helps participants discover complementary perspectives, thereby strengthening community bonds and the delicate social fabric that binds societies together.
  • Climate: AI’s relationship with the environment is fraught with tension. AI can optimize supply chains, improve climate modeling and support environmental management. Yet the computational intensity required to train large models often carries a considerable carbon footprint (a rough way to account for it is sketched after this list). A prosocial lens requires designs that weigh these gains against their ecological costs – adopting energy-efficient architectures, transparent lifecycle assessments, and ecologically sensitive data practices. Rather than treating the planet as an afterthought, prosocial AI makes climate a cardinal priority: AI must not only advise on sustainability, it must be sustainable.
  • Division: The cascades of misinformation and ideological divides that define our times are not an inevitable byproduct of technology, but the result of design choices that prioritize virality over truthfulness. Prosocial AI counters this by integrating cultural and historical background into its processes, respecting contextual differences, and providing fact-checking mechanisms that build trust. Rather than homogenizing knowledge or imposing top-down narratives, it promotes informed pluralism, making digital spaces more navigable, credible and inclusive.
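To make the climate point above concrete, here is a minimal back-of-the-envelope sketch of the kind of lifecycle accounting a prosocial lens asks for: estimating the emissions of a single training run from GPU-hours, power draw and grid carbon intensity. The function name and every numeric default below are illustrative assumptions, not figures from this article or from any real training run.

```python
# Rough training-footprint estimate. All defaults are illustrative assumptions.

def training_emissions_kg(
    gpu_count: int,
    hours: float,
    gpu_power_kw: float = 0.4,          # assumed average draw per GPU (kW)
    pue: float = 1.2,                   # assumed datacenter power usage effectiveness
    grid_kg_co2_per_kwh: float = 0.4,   # assumed grid carbon intensity
) -> float:
    """Estimate CO2-equivalent emissions (kg) for one training run."""
    energy_kwh = gpu_count * hours * gpu_power_kw * pue
    return energy_kwh * grid_kg_co2_per_kwh


if __name__ == "__main__":
    # Hypothetical run: 64 GPUs for two weeks.
    kg = training_emissions_kg(gpu_count=64, hours=14 * 24)
    print(f"Estimated footprint: {kg:,.0f} kg CO2e")
```

Even this crude arithmetic makes the trade-off auditable: publishing the inputs (hardware, hours, energy mix) is what "transparent lifecycle assessment" means in practice.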

Dual literacy: integrating AI and NI

Achieving this vision depends on cultivating what we might call “dual literacy.” On one side is AI literacy: grasping the technical intricacies of algorithms, understanding how bias emerges from data, and establishing rigorous accountability and oversight mechanisms. On the other side is natural intelligence (NI) literacy: a holistic, embodied understanding of human cognition and emotion (brain and body), personal identity (self), and cultural grounding (society).

This NI literacy is not a set of peripheral skills perched on the fringes of innovation; it is foundational. Human intelligence is shaped by neurobiology, physiology, interoception, cultural narratives, and community ethics – a complex tapestry that transcends reductive notions of “rational actors.” By bringing NI literacy into dialogue with AI literacy, developers, policymakers and regulators can ensure that digital architectures respect our multidimensional human reality. This holistic approach promotes systems that are ethically sound, context-sensitive, and capable of complementing rather than constraining human capabilities.

AI and NI in synergy: prosocial AI goes beyond zero-sum thinking

The popular imagination often pits machines against humans in a zero-sum competition. Prosocial AI challenges this dichotomy. Consider the beauty of complementarity in healthcare: AI excels at pattern recognition, sifting through vast quantities of medical images to detect anomalies that might escape human specialists. Physicians, in turn, rely on their embodied cognition and moral instincts to interpret results, communicate complex information, and consider each patient’s broader life context. The result is not just a more effective diagnosis; it is more humane and patient-centered care. Similar paradigms can transform decision-making in law, finance, governance and education.

By integrating AI precision with the nuanced judgment of human experts, we could move from hierarchical models of command and control to collaborative intelligence ecosystems. Here, machines manage complexity at scale and humans provide the moral vision and cultural mastery needed to ensure these systems serve genuine public interests.
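One way to read the complementarity described above is as a deferral pattern: the model handles high-confidence cases at scale and routes everything else to a human expert, who retains the final judgment. The sketch below is a hypothetical illustration of that pattern, not a description of any system named in this article; the `classify` wrapper, the field names and the confidence threshold are all assumptions.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Triage:
    route: str          # "auto" or "human_review"
    finding: str        # model output, e.g. "anomaly" / "no anomaly"
    confidence: float


def triage_scan(
    scan_id: str,
    classify: Callable[[str], tuple[str, float]],  # hypothetical model wrapper
    defer_below: float = 0.9,                      # assumed confidence threshold
) -> Triage:
    """Report high-confidence findings directly; defer the rest to a clinician."""
    finding, confidence = classify(scan_id)
    route = "human_review" if confidence < defer_below else "auto"
    return Triage(route=route, finding=finding, confidence=confidence)


if __name__ == "__main__":
    # Stand-in classifier for demonstration only.
    fake_model = lambda scan_id: ("anomaly", 0.72)
    print(triage_scan("scan-001", fake_model))  # routed to human_review
```

The design choice that matters here is that the threshold is explicit and adjustable, so the division of labor between machine and clinician is a governed parameter rather than an accident of the model.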

Building prosocial infrastructure

To make prosocial AI central to our future, we need a concerted effort across all sectors:

Industry and technology companies: Innovators can prioritize “human-in-the-loop” designs and explicitly reward metrics tied to well-being rather than engagement at all costs. Instead of designing AI to hook users, they can build systems that inform, empower and uplift – measured by improved health outcomes, educational attainment, environmental sustainability or social cohesion.

Example: The Partnership on AI provides frameworks for prosocial innovation, helping guide developers toward responsible practices.
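As a hedged sketch of what "rewarding well-being rather than engagement at all costs" might look like inside a ranking pipeline: candidate items are ordered by a transparent blend of a predicted engagement score and a separate well-being proxy, with the blend weight exposed as an explicit, auditable parameter. The field names, scores and weight below are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class Candidate:
    item_id: str
    p_engagement: float     # predicted probability of a click or watch
    wellbeing_score: float  # separate proxy, e.g. from user surveys, in [0, 1]


def rank(candidates: list[Candidate], wellbeing_weight: float = 0.5) -> list[Candidate]:
    """Order items by a transparent blend of engagement and well-being."""
    def blend(c: Candidate) -> float:
        return (1 - wellbeing_weight) * c.p_engagement + wellbeing_weight * c.wellbeing_score
    return sorted(candidates, key=blend, reverse=True)


feed = rank([
    Candidate("outrage-bait", p_engagement=0.9, wellbeing_score=0.1),
    Candidate("local-news", p_engagement=0.5, wellbeing_score=0.8),
])
print([c.item_id for c in feed])  # the well-being term promotes "local-news" first
```

Because the weight is a single named parameter rather than something buried in a learned objective, it can be reported, debated and regulated – which is the point of the prosocial framing.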

Civil society and NGOs: Community groups and advocacy organizations can guide the development and deployment of AI, testing new tools in real-world settings. They can bring diverse ethnic, linguistic, and cultural perspectives to the design table, ensuring that the resulting AI systems meet a wide range of human experiences and needs.

Educational institutions: Schools and universities should integrate dual literacy into their curricula while strengthening critical thinking, ethics and cultural studies. By fostering knowledge in AI and NI, educational organizations can help ensure that future generations are proficient in machine learning (ML) and deeply grounded in human values.

Example: The MIT Schwarzman College of Computing and the Stanford Institute for Human-Centered AI illustrate transdisciplinary approaches that combine technical rigor with humanistic inquiry.

Governments and policymakers: Legislative and regulatory frameworks can encourage prosocial innovation, making it economically viable for companies to produce AI systems that are transparent, accountable and aligned with social goals. Citizen assemblies and public consultations can inform these policies, ensuring that the direction of AI reflects the diversity of voices in society.

Beyond black boxes, toward a holistic hybrid future

As AI becomes deeply integrated into the global socio-economic fabric, we must resist the urge to treat the technology as a black box optimized for specific parameters. Instead, we can envision a hybrid future where human and machine intelligences co-evolve, guided by common principles and grounded in a holistic understanding of ourselves and our environment. Prosocial AI goes beyond a simplistic choice between innovation and responsibility. It offers a richer range of possibilities, in which AI empowers rather than addicts, connects rather than fragments, and regenerates rather than depletes.

The future of AI will not be determined solely by computational prowess or algorithmic ingenuity. It will be defined by how organically we integrate these capabilities into the human sphere, recognizing the interplay of brain and body, self and society, local nuances and planetary imperatives. In doing so, we create a broader standard of success: one measured not just by profit or efficiency, but also by the flourishing of people and the resilience of the planet.

Prosocial AI can serve this purpose. The future begins now, with a new ABCD: Aspire to an inclusive society; Believe that you have a part in bringing it about; Choose which side of history you want to be on; and Do what you think is right.

After two decades with UNICEF and the publication of several books, Dr. Cornelia C. Walther is currently a principal investigator at the University of Pennsylvania, working on prosocial AI.

