A brief summary of this article was delivered by Professor Mehri Madarshahi at the Creativity 2030 - 5th International Forum, jointly organized by ICCSD in Beijing on 27 March 2026.
About the Author

Mehri Madarshahi is an Advisory Committee Member of the International Centre for Creativity and Sustainable Development (ICCSD) under the auspices of UNESCO, and Honorary Professor of The Institute of Public Policy (IPP), South China University of Technology (SCUT).
(The views expressed in this article are solely those of the author and do not represent the official position of ICCSD.)
Introduction
Artificial intelligence has moved in a remarkably short time from the margins of technical innovation to the center of everyday life, public administration, economic strategy, and global governance. It increasingly shapes how people communicate, learn, consume, create, navigate institutions, and imagine the future. Its applications now reach into education, healthcare, security, urban planning, communications, cultural production, environmental monitoring, and the management of public services. In many societies, AI is no longer perceived merely as an emerging technology, but as an organizing force with the capacity to reorder social relations, accelerate decision-making, and redefine the terms of efficiency and progress.
This expansion has also generated a powerful narrative of promise. AI is often presented as a transformative instrument capable of helping humanity solve complex problems at scale, from resource allocation and climate modeling to education delivery, public health, and economic optimization. Within the language of sustainable development, it is increasingly framed as an accelerator of implementation: a tool that can improve data analysis, strengthen forecasting, support evidence-based policy, and make systems more responsive and efficient. In this view, AI appears not only as a technological breakthrough but also as a potentially indispensable ally in achieving the Sustainable Development Goals.
Yet this confidence rests on an assumption that deserves much closer scrutiny: namely, that AI can be integrated into all dimensions of development without fundamentally altering the human meanings, social values, and cultural frameworks on which development depends. The issue is not whether AI can perform useful tasks. Clearly, it can. The more difficult question is whether the growing reliance on AI, especially in areas tied to identity, memory, creativity, education, justice, and community, may also reshape the normative environment within which development is pursued.
It is here that culture becomes central. Culture is not simply one sector among others, nor merely a body of artistic expression or heritage to be protected alongside economic and social goals. It is the sphere in which societies produce meaning, transmit memory, negotiate identity, and define what they value. It shapes how communities understand dignity, responsibility, belonging, justice, continuity, and change. In that sense, culture is not external to development. It is one of the conditions through which development becomes intelligible, legitimate, and sustainable.
And yet culture has remained institutionally marginal in the dominant development frameworks of our time. It was not included as a stand-alone goal in the original architecture of the 2030 Agenda, despite repeated calls from many quarters to give it more explicit normative recognition. This omission is significant. It suggests that what is most measurable, governable, and technocratically manageable continues to receive formal priority, while those dimensions of human life that are interpretive, relational, and historically grounded remain less visible in policy design. The current enthusiasm for AI risks intensifying that imbalance.
The Cultural Limits of AI
The central argument of this article is therefore straightforward but consequential: there is no inherent balancing force between artificial intelligence and culture. AI and culture do not operate according to the same logic, nor do they serve the same human functions. AI excels in abstraction, prediction, classification, and optimization. Culture lives through interpretation, ambiguity, memory, ethical judgment, symbolic meaning, and lived experience. AI may support selected aspects of development, and it may even assist in preserving or circulating cultural material. But it cannot replace the human interpretive frameworks through which development acquires meaning. Nor can it be assumed that technological expansion will naturally respect cultural depth. If any balance is to exist between AI and culture, it will have to be consciously constructed through governance, critical reflection, and the reassertion of human purposes over technological logic.
The first sign of this tension appears when AI is linked to sustainable development itself. Much of the current debate assumes that AI can serve as a neutral accelerator of the Sustainable Development Goals. In some domains, that is undoubtedly true. AI can support data analysis, resource allocation, environmental monitoring, logistics, infrastructure planning, and administrative coordination. It can identify patterns across vast datasets and improve efficiency in systems under pressure. These are meaningful capabilities, and they should not be dismissed.
But the SDGs are not only a technical agenda. They are also a human, social, and normative agenda. They concern not merely the delivery of services, but the conditions of dignity, inclusion, justice, resilience, and collective responsibility. This is precisely where the limits of AI become visible. AI is useful when the problem is one of optimization. It is far less useful when the problem is one of values, social trust, ethical judgment, cultural legitimacy, or structural inequality. In such cases, what matters is not only whether a policy is efficient, but whether it is accepted, understood, shared, and rooted in the realities of those whose lives it affects.
Rethinking the Nexus Between AI and SDGs
This is why AI has weak foundations in several important parts of the SDG agenda.
In SDG 5 on Gender Equality, AI can assist implementation in certain respects by identifying disparities in pay, access, representation, education, and service delivery. It can support data collection, reveal patterns of exclusion, and help institutions detect inequalities that might otherwise remain obscured. But the limits here are profound. AI systems are trained on data generated by societies that already contain gender bias. As a result, they may inherit, reproduce, or even reinforce discrimination under the appearance of technical neutrality. Automated systems used in hiring, credit, security, welfare distribution, or performance assessment can easily replicate unequal assumptions if the underlying data reflects past exclusion. More fundamentally, gender equality is not merely a problem of detection. It is a problem of power, law, institutional reform, representation, and social transformation. It requires changes in opportunity, voice, protection, and social norms. AI can help reveal patterns of inequality, but it cannot by itself alter the structures that produce them. In that sense, it may assist diagnosis without delivering emancipation.
In the case of SDG 11 on Sustainable Cities and Communities, AI can certainly help manage traffic flows, energy use, waste systems, and urban planning models. Yet sustainable communities are not built by optimization alone. A city is not sustainable simply because it is smart. It is sustainable when people feel they belong to it, when heritage is respected, when public space supports social life, when diversity is protected, and when development does not erase memory and identity. These are cultural questions before they are technical ones. An AI system may improve the management of a city while remaining indifferent to what makes that city meaningful to its inhabitants. Excessive reliance on digital modeling can produce urban environments that are more efficient but less humane: more monitored, more standardized, more commercially rationalized, and less attentive to local history, neighborhood character, and informal social practices. In this sense, AI may strengthen urban administration without strengthening community. And without community, the idea of sustainable cities becomes hollow.
The limitations of AI become even clearer in SDG 12 on Responsible Consumption and Production. At the operational level, AI can be highly useful. It can optimize supply chains, reduce waste, improve inventory management, monitor resource use, forecast demand, and support circular economy practices. In this sense, it can help make production systems more efficient and reduce measurable forms of inefficiency. Yet SDG 12 is not only about efficiency. It is also about transforming the culture of production and consumption itself. Unsustainable development is driven not only by technical waste, but by habits of overconsumption, disposable lifestyles, short-term profit incentives, and social norms that equate well-being with ever-expanding material acquisition. AI may help manage these systems more effectively while leaving their deeper logic untouched. In some cases, it may even intensify that logic by making production, marketing, and consumer targeting more precise and more profitable. Thus AI can reduce waste within an unsustainable model without necessarily changing the model itself. It can optimize consumption without questioning excess.
A similar tension is visible in SDG 13 on Climate Action. AI can be extremely powerful in climate modeling, disaster prediction, energy forecasting, and emissions monitoring. Yet climate action ultimately fails or succeeds not only because of data, but because of behavior, political will, public trust, and shared sacrifice. These are profoundly cultural matters. The ecological transition requires changes in lifestyles, consumption habits, social priorities, and collective imagination. It depends on whether societies are willing to rethink convenience, growth, mobility, waste, and responsibility across generations. AI can inform such debates, but it cannot generate the ethical commitment required to sustain them. It can tell societies what is happening. It cannot decide what societies are willing to give up, protect, or value. Climate action also depends on narratives of solidarity and responsibility. It requires people to feel connected to places, to future generations, and to one another. Such attachments are cultural, moral, and historical. They do not emerge from algorithmic calculation. They must be cultivated through education, art, public discourse, and social institutions. I do not wish here to enter into the separate debate over the extent to which AI may itself aggravate climate challenges through the expansion of data centers and their enormous energy consumption. At present, viable solutions to this environmental challenge remain limited.
SDG 16 on Peace, Justice and Strong Institutions is perhaps the most sensitive goal where solidarity and responsibility are concerned. While AI can support this goal in limited but meaningful ways, such as helping detect fraud, improving document management, identifying administrative bottlenecks, strengthening access to information, and revealing patterns related to corruption or institutional inefficiency, this may also be one of the areas where excessive reliance on AI becomes most dangerous. Peace, justice, and strong institutions depend on legitimacy, fairness, accountability, and trust. These are not technical outputs. They are political and moral achievements. AI may speed up procedures, but it does not create justice. It may classify risk, but it does not create legitimacy. It may process institutional data, but it cannot substitute for ethical judgment or democratic responsibility. There is also the danger that AI deepens exactly what SDG 16 seeks to overcome. Algorithmic profiling, predictive policing, opaque administrative decisions, automated surveillance, and biased risk scoring can all undermine rights rather than protect them. In fragile or polarized societies, such uses may weaken trust in institutions even further.
Even in SDG 4 on Quality Education, where AI is often celebrated as a powerful equalizer, the same problem appears. AI can personalize learning, widen access to information, and assist teachers and institutions in delivering educational resources. Yet education is not only about information delivery. It is also about forming judgment, empathy, creativity, civic consciousness, and the ability to live with difference. These are not outputs that can be fully engineered through intelligent systems. A student does not become educated simply by receiving optimized content. Education remains a relational and cultural process. If education is increasingly mediated through systems that privilege speed, prediction, and standardization, then the risk is not ignorance but narrowing: students may become better at navigating information while becoming weaker at interpretation, reflection, and moral discernment.
The weaknesses identified across these SDGs are not accidental. They reveal a deeper mismatch between the logic of AI and the nature of culture itself. Once culture is understood as a domain of meaning rather than merely information, the limitations of AI in the development sphere become easier to identify. The difficulties observed across several SDGs are not simply the result of imperfect technology or insufficient data. They point to a more structural problem. AI is highly effective where challenges can be rendered into measurable variables, predictable patterns, and optimizable processes. But it is far less effective where progress depends on interpretation, social legitimacy, ethical judgment, historical memory, and shared values. It is precisely these latter dimensions that belong to culture.
Understanding the Differences Between Culture and AI
Seen in this light, culture is not an external supplement to development, nor an ornamental dimension to be considered after technical systems have been designed. It is the interpretive framework through which societies define what counts as well-being, justice, responsibility, belonging, and progress. Culture shapes how communities understand their needs, how they negotiate priorities, how they remember the past, and how they imagine the future. For that reason, culture cannot be treated as a secondary variable within an AI-driven model of development. It is the very space in which the meaning of development is socially constituted.
This is why the relationship between culture and artificial intelligence must be approached with conceptual care. The two do not operate on the same epistemological or normative plane. Artificial intelligence functions through abstraction, codification, prediction, and optimization. It depends on the conversion of reality into data, on the identification of regularities, and on the production of outputs based on statistical inference or formalized rules. Culture, by contrast, is not reducible to information that can be fully captured, standardized, or processed through computational systems. It is historically sedimented, socially embodied, symbolically mediated, and continuously reinterpreted through lived experience.
This distinction is crucial because contemporary debates often proceed as though culture were simply another sector to which computational tools may be applied without conceptual loss. Yet culture is not merely a repository of heritage, language, customs, or artistic expression. It is the dynamic medium through which communities assign meaning to the world, negotiate identity, transmit values, and situate themselves in relation to both past and future. It is not static, and it is not neutral. It evolves through contestation, reinterpretation, selective remembrance, and changing social relations. Its continuity lies not in mechanical repetition, but in the human capacity to renew, dispute, and reframe inherited forms of life.
AI enters this field with considerable technical power but limited interpretive capacity. It can classify images, generate text, imitate styles, reconstruct artifacts, and detect patterns across enormous quantities of material. Yet these capacities should not be confused with understanding in any deep cultural sense. To identify a pattern is not to grasp significance. To reproduce a form is not to inherit the historical consciousness, symbolic density, or communal attachment through which that form acquires meaning. Artificial intelligence may simulate cultural expression with increasing sophistication, but simulation remains distinct from participation in a lived world of memory, value, and experience.
The problem, therefore, is not merely that AI has limits. The deeper issue is that its very strengths may encourage a misleading conception of culture itself. When culture is approached through the operational categories of data extraction, pattern recognition, scalability, and optimization, there is a risk that only those aspects of cultural life that are legible to the machine will be treated as relevant. Nuance, ambiguity, silence, irony, ritual depth, minority meanings, and context-dependent interpretation are harder to formalize and therefore more likely to be marginalized. In this way, the technological mediation of culture may gradually reorganize what counts as culturally visible, valuable, or preservable.
This concern becomes even more serious when AI is embedded in systems of dissemination and governance. Culture is not only expressed; it is also circulated, ranked, filtered, translated, archived, and institutionalized. Once algorithmic systems shape visibility at scale (through recommendation systems, automated translation, content moderation, search prioritization, or cultural analytics), they do not simply transmit culture. They begin to structure its conditions of appearance. What is amplified, what is rendered peripheral, what is categorized as authentic, and what is made globally legible are no longer exclusively human judgments. They are increasingly influenced by infrastructures designed according to technical, commercial, and geopolitical priorities.
This introduces a question of power that cannot be ignored. AI systems are not culturally innocent instruments. They are developed within specific institutional and economic settings, trained on unevenly distributed datasets, and deployed in environments marked by asymmetries of language, visibility, and influence. As a result, they may privilege dominant languages over marginalized ones, codified knowledge over oral traditions, standardized representations over localized meanings, and commercially valuable content over culturally significant but less digitized forms of expression. The issue is therefore not only technological bias in a narrow sense. It is the broader possibility that AI may reinforce hierarchies of cultural recognition while presenting itself as neutral infrastructure.
This does not mean that AI has no place in cultural life. Clearly it does. The question, however, is whether growing reliance on AI changes the terms under which culture is recognized, valued, and transmitted. Once that question is raised, the issue is no longer technological capability alone. It becomes a matter of cultural autonomy, ethical governance, and intellectual responsibility.
If the preceding discussion has shown that many dimensions of sustainable development depend on cultural meaning, social legitimacy, and human interpretation, then an important paradox immediately appears. Culture is indispensable to the realization of many SDGs, yet it remains institutionally marginal within the architecture of the goals themselves. Despite repeated calls by scholars, practitioners, and cultural organizations, culture was not established as a stand-alone Sustainable Development Goal in the 2030 Agenda. This omission is not merely administrative. It reflects a deeper difficulty in the way development has been conceptualized in contemporary global governance.
The SDGs were designed to be universal, measurable, policy-relevant, and politically negotiable across highly diverse national contexts. Culture does not fit this model as easily. Its meanings are context-bound, historically layered, and often resistant to standardization. Unlike infrastructure or emissions, culture cannot be fully captured through universal indicators without losing much of what makes it socially significant.
This helps explain why culture has often been acknowledged rhetorically while remaining weak institutionally. It is regularly invoked as a cross-cutting dimension of identity, heritage, creativity, inclusion, and social cohesion, yet these recognitions have rarely translated into equivalent normative status within development policy.
Yet the irony is that culture quietly underpins the success or failure of the goals across the board. Progress in health depends on trust, behavior, communication, and local understandings of risk. Progress in education depends on language, identity, authority, and social purpose. Progress in gender equality depends on norms, values, symbolic roles, and the legitimacy of change. Progress in peace and justice depends on memory, recognition, belonging, and the moral authority of institutions. Progress in climate action depends on public imagination, intergenerational ethics, and the cultural willingness to rethink consumption and responsibility. In other words, culture is not peripheral to implementation. It shapes the social conditions under which implementation is accepted, resisted, adapted, or made durable.
Why, then, has the effort to elevate culture within sustainable development repeatedly failed? Part of the answer lies in political caution. A stand-alone goal on culture would have raised difficult questions about whose culture, which values, and what forms of recognition should be institutionalized at the global level.
Another part of the answer lies in measurement anxiety. Global agendas favor what can be tracked. Culture, however, is strongest precisely where it exceeds metric simplification. As a result, culture is often displaced by more easily measurable variables, even when those variables cannot fully explain whether developmental gains will be socially rooted or historically sustainable.
This omission has important consequences for the debate on artificial intelligence. If culture already occupies a weak formal position within the SDG framework, then the growing enthusiasm for AI as an implementation tool risks deepening that imbalance. What is measurable becomes more visible; what is optimizable becomes more governable; what is data-rich becomes more actionable. Under such conditions, the aspects of development most closely tied to culture may become even easier to neglect, not because they are unimportant, but because they are harder to translate into computational terms.
For that reason, the challenge is not simply to insert culture more visibly into existing policy language. It is to recognize that sustainable development has always depended on more than technical delivery. It depends on the human capacity to interpret change, to legitimate norms, to transmit memory, and to imagine a future worth sustaining. These are cultural capacities. Without them, development may become more measurable, more digitized, and more administratively efficient, while becoming less meaningful, less rooted, and ultimately less sustainable.
Conclusion
The assumption that artificial intelligence and culture will naturally find a productive equilibrium is therefore one of the most comforting but least examined ideas in current policy discourse. Yet the analysis developed here suggests otherwise. There is no inherent balancing mechanism between AI and culture, because the two are driven by different imperatives.
In that sense, the challenge before policymakers, scholars, educators, and cultural institutions is not to celebrate partnership in the abstract. It is to govern asymmetry. It is to ensure that systems designed for computation do not quietly redefine domains grounded in meaning. It is to recognize that sustainable development, if it is to remain genuinely human-centered, cannot be built on technical acceleration alone. It must also protect those cultural and ethical capacities through which human communities decide what progress is for.
The future relationship between AI and culture will therefore not be decided by innovation alone. It will be decided by whether societies are willing to place technological power under human purposes rather than adapting human purposes to technological logic. Only under those conditions can AI remain a tool of development rather than becoming a force that empties development of its cultural and moral substance.