G&C

What Is the New ‘-ism’? Dataism and the Humanities’ Fight Against Irrelevance

Dataism has already become the default logic shaping power and decision making, as algorithms quietly overtake human judgment in politics, markets, and everyday life.

· Gabriella Martins Cardoso ·

In a bold attempt to predict what the future holds for political science, I have landed on dataism. I argue that dataism is the new ‘-ism’, emerging out of capitalism and liberalism as an ideology that explains the infiltration of technology into our belief systems, institutions, norms, values, and culture. In this reality, humans are ruled by decision-making algorithms.

This paper utilises Yuval Noah Harari’s concept of “Dataism,” as explored in Homo Deus: A Brief History of Tomorrow (Harari, 2017), to examine how algorithmic systems and global data infrastructures have already begun to shape power and legitimacy. While Harari describes Dataism as an emerging “data religion,” I argue that it has already transitioned from a belief to a general consensus. What once guided human judgment is now taking its place, as algorithms increasingly decide outcomes in governance, finance, and everyday life.

This paper argues that dataism is no longer a nascent ideology; it already operates as a legitimating system that reorganises sovereignty and labour through machine authority. Crucially, it does not govern by persuasion or belief; it governs by default. Capitalism and liberal democracy still exist, but they now run on top of dataism’s logic, until eventually dataism takes over.

This raises the central puzzle of this paper: whether technological change is giving rise to dataism, and if so, what humans should really be worried about as algorithms begin to shape our reality.

This paper begins with a brief overview of the historical trend behind humanity’s obsession with efficiency, which has given rise to dataism. Second, it conceptualises dataism as an operating-system ideology that reorders sovereignty, labour, and meaning. Third, it evaluates two counterarguments, dataism as merely Capitalism 2.0 and liberal democracy’s ongoing regulatory resilience, using the United States and the European Union as comparative cases. Finally, it considers what this shift means for the global order and argues that the humanities remain essential for protecting human agency and political diversity in an age shaped by algorithms.

Historical Trend

For the Enlightenment, it was humanism. For the industrial age, capitalism. For the age of data flows, dataism.

Harari argues that humanity has moved beyond “famine, plague, and war” towards a condition in which optimisation becomes the moral pursuit (Harari 2017, pp. 214–224). Under this view, the drive to enhance life via data processing invites techno-religions such as dataism to introduce new commandments: “connect everything” and “maximise information flow” (Harari 2017, pp. 221–224).

The issue is that political processes are being outrun by technological acceleration, leaving a structural gap between fast-moving code and biotech on one side and slow, retrospective institutions on the other. This is visible in governments’ slow bureaucratic systems, which rely on rigid hierarchies and rule-based procedures that limit flexibility, innovation, and timely decision-making in the face of rapid technological and societal change (Waza 2025, p. 356).

This rapid transition means that the internet’s core rules were never democratically voted upon. Constitutional choices were largely set by engineers and executive policy, not parliaments. For example, Postel’s RFC 1591 (1994) delegated the naming of domains, a core part of the internet’s governance, to private actors such as engineers. Additionally, the U.S. “Framework for Global Electronic Commerce” (1997) set a free-market approach for online trade (U.S. Department of Commerce, 1997). These choices created governance through private decisions on technology, without democratic debate. As Lessig’s (2000, p. 123) dictum “code is law” describes, technological design choices can quietly reallocate power by regulating behaviour in much the same way as law. In that sense, the political foundation for dataism was laid decades ago. The logic of governance shifted from public deliberation to technical management. When digitalisation raced ahead of politics, optimisation became not only an economic goal but a moral one.

What the Shift Is and Why It Matters

Harari (2017) defines Dataism as a worldview that treats the flow of information as the highest value, even above human experience (pp. 221–229). Like a religion, its “commandments” are to connect everything to the system and to remove hurdles that block data circulation. Harari calls this a techno-religion that puts data flow above human judgment and aims to link people, organisations, and devices into an internet of all things under a “freedom of information” doctrine (Harari 2017, pp. 221–224). As this worldview spreads, authority migrates from “follow your feelings” to “trust the algorithm,” a shift Harari frames explicitly as the relocation of decision-making from humans to computational systems (Harari, 2017, p. 229).

This shift is visible everywhere. Governments use predictive policing and risk-based welfare systems; banks rely on credit algorithms; hiring platforms screen workers automatically. Human discretion has not disappeared, but it is increasingly shaped by machine recommendations. As Harari warns, the threat of automation is not unemployment but irrelevance (Harari, 2018, p. 42).

Counterpoint A: Capitalism 2.0

A materialist counterargument would insist that “dataism” is really a weaponised logic of capitalist accumulation, not an ideologically distinct phenomenon. On this view, “maximising information flow” and “trusting the model” are justificatory language for capitalist institutions’ extraction of human experience as raw material to sell predictions and influence at scale. Zuboff labels the underlying business logic of surveillance capitalism “a new economic order that claims human experience as free raw material,” organised around extraction and behavioural modification (Zuboff 2019, p. 7). This forms a “rogue mutation of capitalism” (Zuboff 2019, p. 8): data is extracted from every human action, repackaged as behavioural predictions, and sold to shape future behaviour.

Additionally, Nick Srnicek’s (2017) account describes platforms as a core-periphery system, in which the core benefits from the powerful network effects of centralised data, which harden into a winner-take-all position and dependencies. Those in supporting, adjacent fields, such as app developers, sellers, advertisers, and workers, are forced to operate under the rules and norms of the system. Henry Farrell and Abraham Newman’s (2019) theory of “weaponised interdependence” explores these structural relationships through an International Relations lens. They stipulate that within networked systems there are central actors that observe activity and have the power to control access, producing “panopticon” and “chokepoint” effects. Essentially, “dataism” claims are better explained by capitalist forces already well known to society: enclosing resources, extracting profit, and controlling key digital hubs in unequal networks.

Consider the United States, where the computing capacity needed for large-scale AI is concentrated in the hands of only a few big companies. Training and running advanced AI models requires powerful chips and the technical ability to use them at massive scale. Big cloud corporations like Alphabet, Microsoft, and Amazon offer this service through their own integrated systems. Since they control these computing hubs, they effectively set the barriers to entry: who can build leading AI models, when, and at what cost, compounding dependency. In Farrell and Newman’s terms, this is classic hub power, where the companies at the centre of the network can both watch and shut others out (2019, pp. 45–46).

Figure 1. Tech industry figures, including Elon Musk and Jeff Bezos, attending the inauguration of U.S. President Donald Trump, Washington, D.C., January 20, 2017 (Associated Press, 2016).

The U.S. regulatory landscape is notorious for failing to take a proactive approach to big tech, with governments effectively in the pocket of the firms, creating an oligarchy-like system. This reasoning is evident in the absence of a federal privacy statute; the U.S. depends instead on a fragmented array of sector-specific laws and state regulations. The danger is that firms are allowed to repurpose data with little resistance. Zuboff’s analysis fits this system: platforms capture behavioural data first and address issues of consent or fairness later (2019, chs. 4–5). Even when firms are held accountable, remedies barely touch the surface of the problem, leaving the underlying structure intact.

Furthermore, it is plausible to say that platform work and the gig economy have transformed labour in ways that reflect intensified capitalism. The International Labour Organisation (ILO) has found that digital labour in the gig economy is characterised by unstable pay, constant algorithmic monitoring, and little protection: precarity built into the system (ILO 2021, chs. 2–3). As Kenney and Zysman emphasise, platforms centralise data and control market access, shifting power upward to those who own the networks (2016, pp. 61–68). Zuboff’s “behavioural surplus” describes workers being data-mined while, in return, they are met with unstable conditions (2019, p. 8). Arguably, this system runs on incentives and data, not on belief in any “dataist” values.

Counterpoint B: Liberal-Democratic Resilience

A second counterargument to Harari’s thesis that dataism will replace liberalism is that liberal democracy is not collapsing but adapting. The claim that authority is migrating irreversibly to algorithms neglects liberal institutions’ historical capacity to absorb disruptive technologies and enforce human oversight. Unlike the U.S., the European Union’s General Data Protection Regulation (GDPR, Regulation (EU) 2016/679) demonstrates how data governance can successfully incorporate a rights-based framework. For example, Article 22 of the GDPR affirms a citizen’s right “not to be subject to a decision based solely on automated processing,” prioritising human review.

Subsequently, the Artificial Intelligence Act (Regulation (EU) 2024/1689) prohibits manipulative or exploitative AI, such as real-time biometric surveillance and systems scoring citizens’ behaviour (Art. 5). The Act’s Recital 15 outlines the necessity of these restrictions to preserve “human agency and oversight” in an environment increasingly mediated by autonomous decision systems.

This approach reflects what some academics call digital constitutionalism: extending the liberal principles of rights and equity into the digital domain. This is seen in the 2022 Declaration on Digital Rights and Principles for the Digital Decade, which describes technology as “people-centred,” enforcing democratic values instead of market imperatives (European Commission, 2022). Habermas (1987) argues that political legitimacy comes from open dialogue and shared reasoning, not simply from systems that run efficiently (pp. 305–309). The EU’s rhetoric of “trustworthy AI” and “human-centred design” echoes that philosophy. In Nye’s (2010) terms, the EU exercises cyber soft power, shaping global norms by setting standards others must follow (pp. 6–8).

The “Brussels Effect” demonstrates this normative power nicely: foreign firms, and even non-EU states, have been compelled to incorporate EU AI and privacy standards in order to gain access to the EU market (Siegmann & Anderljung, 2022). Contrary to Harari’s understanding of dataism as an unstoppable force that transcends politics, the EU demonstrates that politics can control the algorithmic order. Following this line of argument, dataism is not taking over; liberal democracy is correcting itself, adapting to the changing technical landscape by renegotiating its social contract with technology, ensuring that ethics is weighed alongside efficiency, and keeping data flows subordinate to human judgment.

Theoretical Response

This section defends my core claim that dataism already functions as a legitimating ideology in international politics. Constructivism holds that ideas matter in International Relations when they shape actors’ norms; authority migrates when decision-makers internalise “trust the algorithm” and redesign institutions around the frictionless circulation and processing of information (Harari 2017, p. 218). Harari describes this as the point where decision-making shifts from people to computational systems (Harari 2017, p. 229).

In constructivist terms, if policymakers increasingly treat “good governance” as data-driven optimisation, then dataism has become a governing norm that structures practice, precisely the form of ideational authority my thesis claims.

Zuboff’s idea of “surveillance capitalism” highlights how the language of optimisation makes data extraction seem natural by turning human experience into “free raw material” for prediction markets (2019, p. 8). The economic motive is not denied; instead, consider how the dataist language of efficiency, optimisation, and frictionless flow creates a normative cover, making these systems appear legitimate. Following this line of thought, where “code is law,” authority is justified by the assumption that the best model, trained on the most data, deserves to decide. This move is ideological because it turns technical advantage into a moral right, giving control to those who own the infrastructure while making their power appear a technical necessity.

In the United States (U.S.) case, control over cloud and compute gives firms real gatekeeping power. What makes model-based decisions widely accepted is not only economic pressure, but the belief that algorithmic outputs are more objective, accurate, and rational. This reflects dataism’s core hierarchy of legitimacy, where authority follows data. Even in labour management, optimisation is justified through metrics, dashboards, and “evidence-based” efficiency, while the opacity workers face is treated as normal rather than problematic (ILO, 2021). In this sense, Capitalism 2.0 does not replace dataism; it relies on dataism’s moral logic to justify itself. Accumulation continues, but the language that makes it appear natural is ideological.

In response to Counterpoint B, the EU section gives concrete examples of legal reforms showing that liberal institutions can discipline algorithmic authority. However, I argue that the very grammar of these reforms confirms dataism’s presence. The EU does not reject algorithmic governance; it accepts that it will be used and regulates it. The AI Act classifies systems by risk and requires transparency, data controls, and monitoring (Arts. 9–15, 61–70). Habermas reminds us that legitimacy is derived from public justification, not just technical performance (1987, pp. 305–309). That is why the EU promotes “trustworthy AI”: to secure democratic acceptance rather than to reject algorithms. In Nye’s terms, this is norm-setting as cyber power, where EU standards spread globally through the “Brussels Effect” (Nye 2010, pp. 6–8). Yet the very fact that these norms take the form of risk management shows the deeper pattern: dataism defines the terrain, and liberalism shapes the limits. Additionally, liberal democracies risk re-legitimising algorithmic governance under the guise of oversight without addressing deeper power imbalances of data ownership.

Two provisions make this clear. First, GDPR Article 22 permits exceptions such as contract necessity or explicit consent (Art. 22(2)), which means automated decisions have expanded under procedural safeguards rather than been eliminated. Second, the AI Act contains only a few direct bans; its main strategy lies in oversight, documentation, and monitoring. In both cases, liberal democracy re-legitimises algorithmic authority by placing it inside due-process frameworks. This is adaptation, not rejection, and the very need for such safeguards shows that algorithmic governance is already the default to which politics must react.

Constructivism tells us to watch for practice-constituting ideas, while critical theory tells us to follow how infrastructures and expert vocabularies sediment those ideas into power. In tandem, the counterarguments can reinforce, rather than overturn, the thesis.

Implications for Global Order

Harari cautions that as algorithms surpass humans in prediction and decision tasks, many people risk drifting into what he calls a “useless class.” This does not mean they are economically worthless, but that they no longer hold recognised roles within systems that assign value through data-based contribution (Harari 2018, pp. 40–45). The insecurity here is not only material, it is a loss of social meaning and place. At the same time, the individuals who design and calibrate algorithmic systems form a new knowledge elite, shaping what is considered rational, efficient, or true. Sovereignty begins to shift into the realm of interpretation, where power rests in deciding which information counts and how it is understood.

For the Global South, the stakes are higher. Couldry and Mejias describe this dynamic as data colonialism, where everyday life is captured as data while economic value accumulates elsewhere (2019, pp. 1–2). If core AI models continue to rely on data drawn from global populations but are trained and monetised in cloud systems owned by Northern firms, the political economy of the 19th century returns in digital form: extraction at the periphery, capitalisation at the core. Efforts by Global South states such as Brazil, India, and South Africa to assert data sovereignty through local storage rules, open-source procurement, or South-South cloud collaborations show an awareness of this problem. However, without domestic computing capacity, skilled labour, and research ecosystems, such sovereignty remains merely symbolic.

The emerging global order shaped by dataism is neither fully liberal nor purely capitalist. It is interconnected, built on specialised expertise and shared infrastructures that most citizens cannot see or influence. The concern here is not a slide into authoritarian rule, but a quieter loss of politics itself, where decisions are treated as optimisation problems and disagreement is seen as inefficiency. In this setting, the humanities become a form of resistance. History, ethics, and interpretation help us ask questions that algorithms cannot answer: what values are worth protecting, even when they are not efficient?

The direction of the global order will hinge on whether dataism continues as an unexamined default or becomes something we openly debate and shape. The first path leads to governance without politics, and the second offers a chance to rebuild politics for the digital century.

It is here that the humanities return not as nostalgia, but as necessity. If governance drifts toward optimisation-by-default, then the critical role of ethics, history, and interpretative reasoning is to maintain spaces in which political judgment remains possible. The humanities furnish not a romantic defence of the past, but the intellectual practices required to resist the reduction of human beings to data points and outcomes. Politics survives through discourse, and the ability to understand and challenge each other.

Thus, the international relations stakes are clear. Power is increasingly organised through infrastructures of data, legitimacy is increasingly articulated through algorithmic rationality, but meaning and agency depend on humanity’s ability to reflect. The struggle over dataism is not just about who controls data or markets, but about whether politics continues to see human beings as having intrinsic value, rather than as variables to be optimised.

Conclusion

In conclusion, this paper has argued that dataism is not an ideology found in the future; it exists in society today, already embedded in infrastructure, administrative practice, and institutional routines. Harari describes dataism as a worldview that may eventually replace liberal humanism (Harari, 2017, pp. 221–229). Nevertheless, it has already arrived and is embedded within global regulatory environments, data systems and labour practices. It forms how authority is validated through predictive performance, how sovereignty is exercised through ownership of cloud services, AI supply chains and platforms, and how decisions are made through dependence on model outputs. Dataism does not need mass belief to operate. It works by default, built into code, technical standards, procurement rules, organisational metrics, and the everyday procedures of digital governance.

The counterarguments examined reveal important limits and tensions. Capitalism 2.0, as described in Zuboff’s account of surveillance capitalism (2019, p. 7) and Srnicek’s analysis of platform monopolies, explains the material concentration of data power. Liberal-democratic resilience, meanwhile, exemplified by the GDPR and the EU AI Act, demonstrates that democratic institutions retain the capacity to discipline algorithmic authority through human-oversight requirements and rights-based constraints (GDPR Art. 22; Regulation (EU) 2024/1689, Arts. 5, 14). Notably, these responses do not overturn dataism; they merely regulate its expression. Even in resistance, the vocabulary remains that of optimisation, proportionality, and system risk: democracy is essentially being translated into the grammar of data governance.

This presents a normative challenge that reaches beyond economics or institutional design. If dataism remains unacknowledged and is simply treated as an extension of capitalism or liberalism, we risk reducing politics to optimisation problems, disregarding conversations about values and what a good life should entail. The deepest threat of automation is not unemployment but irrelevance: the erosion of the belief that human judgment matters (Harari 2018, p. 42). The task is not to reject dataism, which would be both impossible and undesirable in a world that depends on complex informational coordination, but to determine how to domesticate it without becoming subordinate to it.

This requires not just powerful and large institutions, but for everybody to know where we currently are, what we want our future to be, and where we are actually going. It compels societies to reflect on how we would like to relate to each other; in doing so, algorithms could come to replicate an essence of humanity we could all be content with. Importantly, it requires preserving the humanities, not as cultural decoration, but as the last infrastructure of meaning: the set of practices through which societies ask why something should be done, not just how well it can be optimised. This line of thinking raises a further question for future literature: not whether dataism can be governed, but whether we can remain political subjects while living inside its logic.

Reference List

Associated Press. (2016, December 23). These tech billionaires flanked Trump at inauguration [Photo]. AP News. https://apnews.com/article/trump-inauguration-tech-billionaires-zuckerberg-musk-wealth-0896bfc3f50d941d62cebc3074267ecd

Couldry, N., & Mejias, U. A. (2019). The costs of connection: How data is colonizing human life and appropriating it for capitalism. Stanford University Press.

European Commission, European Parliament, & Council of the European Union. (2023, January 23). European Declaration on Digital Rights and Principles for the Digital Decade (2023/C 23/01). Official Journal of the European Union, C 23/1. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32023C0023(01)

Farrell, H., & Newman, A. (2019). Weaponized interdependence: How global networks shape state coercion. International Security, 44(1), 42–79. https://doi.org/10.1162/isec_a_00351

General Data Protection Regulation (EU) 2016/679. https://eur-lex.europa.eu/eli/reg/2016/679/oj

Habermas, J. (1987). The theory of communicative action, Volume 2: Lifeworld and system. Beacon Press.

Harari, Y. N. (2017). Homo Deus: A brief history of tomorrow. Vintage.

Harari, Y. N. (2018). 21 lessons for the 21st century. Jonathan Cape.

International Labour Organization. (2021). World employment and social outlook 2021: The role of digital labour platforms. https://www.ilo.org/global/research/global-reports/weso/2021/WCMS_771749/lang--en/index.htm

Kenney, M., & Zysman, J. (2016). The rise of the platform economy. Issues in Science and Technology, 32(3), 61–69. https://issues.org/the-rise-of-the-platform-economy/

Lessig, L. (2000). Code and other laws of cyberspace. Basic Books.

Nye, J. S. (2010). Cyber power. Belfer Center for Science and International Affairs. https://www.belfercenter.org/publication/cyber-power

Postel, J. (1994). Domain name system structure and delegation (RFC 1591). Internet Engineering Task Force. https://www.rfc-editor.org/rfc/rfc1591

Regulation (EU) 2024/1689. Artificial Intelligence Act. https://eur-lex.europa.eu/eli/reg/2024/1689/oj

Siegmann, C., & Anderljung, M. (2022). The Brussels Effect and Artificial Intelligence (Technical Report). Centre for the Governance of AI. https://cdn.governance.ai/Brussels_Effect_GovAI.pdf

Srnicek, N. (2017). Platform capitalism. Polity Press.

U.S. Department of Commerce. (1997). A framework for global electronic commerce. https://clintonwhitehouse4.archives.gov/WH/New/Commerce/read.html

Waza, A. (2025). Rethinking bureaucracy: Agile governance in the 21st century. International Journal of Sustainable Applied Sciences, 3(6), 355–362. https://doi.org/10.59890/ijsas.v3i6.73

Zuboff, S. (2019). The age of surveillance capitalism. PublicAffairs. https://www.publicaffairsbooks.com/titles/shoshana-zuboff/the-age-of-surveillance-capitalism/9781610395694/

AI Declaration:

I declare that no artificial intelligence tools were used in the preparation of this work, with the exception of Zotero for referencing and Grammarly (free basic version) for grammatical corrections. I bear full responsibility for the content of the work.