Authors:
Muhammed Alakitan, Department of Sociology, University of Cambridge, Cambridge, UK
Ebenezer Makinde, Department of Political Science, Tulane University, New Orleans, Louisiana, USA
Editor:
Wenjia Tang, Media and Communications, University of Sydney, Australia
In 2026, AI is expected to enter a new phase shaped by quantum computing, one in which large language models generate hypotheses and take a more active role in scientific discovery. At the same time, demand for hyperscalers and for better contextualisation of training data is expected to grow.

For Africa, 2026 is widely framed as a turning point: the year to move from strategies to policies, and from policies to actual implementation. These discussions are closely tied to sovereignty and economic growth, which is why the ethics of user and data protection must be taken seriously now more than ever. Within this context, Nigeria’s experience helps build a broader understanding of the policy challenges surrounding AI regulation in Africa.
An exploratory paper on Nigeria’s tech policies argues that since 2015, the Nigerian government has followed two main approaches to regulating AI and digital technologies. The first was a policy-led approach (2015–2023), which aimed to digitise Nigeria by accelerating the adoption and integration of digital technologies across government Ministries, Departments, and Agencies (MDAs).
The second was a research-and-policy approach (2023–2027), which focused on upskilling citizens and increasing the digital economy sector’s contribution to Nigeria’s gross domestic product (GDP). However, the paper contends that neither approach prioritised citizens’ privacy or data ethics.
As the race for the first African AI policy continues, one major challenge is infrastructural and data sovereignty. Policymakers must ensure that socio-economic growth and development are supported: countries need to own their digital infrastructure, and data, algorithms, and computational resources should serve local needs. Achieving this, however, requires large-scale investment and targeted industrial policies. Without them, AI investments may reinforce economic dependence and data colonialism.
The paper also stresses the importance of ethical guidelines. This issue is urgent because AI offers Africa an opportunity to show renewed vigour in advancing economic and social development. With purposeful AI policies grounded in the safety of Africans and their value systems, countries could address developmental challenges through improved technologies. However, if citizens’ data and infrastructure are not secure, or not owned and used by Africans for Africans, this opportunity may remain wishful thinking.
The research further shows that developing policies is not enough; deliberate implementation protocols must also be established. One suggested solution is to attach consequences to non-adherence to stipulated ethical guidelines. Although the analysis focuses on Nigeria, the findings are relevant to broader African AI governance: they highlight the need for explicit ethical protocols in regulatory frameworks. AI strategies and policies should not leave start-ups, programmers, or investors guessing about the ethics they are expected to follow. Ethical guidelines must be clearly stated across all regulatory documents.
In conclusion, the discussion of Nigeria’s approaches to regulating AI and digital technologies over the past decade provides practical insights. These insights are useful for African AI policymakers, practitioners, and investors. They underline the importance of guaranteeing the security and privacy of citizens as the continent continues to navigate the safest path toward AI development.
More about our authors:
Muhammed Alakitan is a PhD candidate in the Department of Sociology at the University of Cambridge. His research examines how social media users construct meaning, struggle for, and contest online (in)visibility amid algorithmic, socio-political, and gendered structures. Broadly, his research and professional interests lie in the linkages between digital technologies and human development.
Ebenezer O. Makinde is a PhD candidate in Political Science at Tulane University. His research sits broadly at the intersection of politics and public policy, examining anti-corruption, democratic accountability, and political economy. His work spans digital politics, public opinion, and governance, using advanced quantitative and qualitative methods to address policy-relevant questions.