Below we present an article by Pedro García-López, full professor at Universitat Rovira i Virgili and an expert in distributed systems and cloud computing, which critically analyzes the European AI Factories initiative, an investment of more than €10 billion aimed at creating AI-oriented supercomputing infrastructures. The text warns that, without adopting cloud-native practices, defining clear economic incentives, and overcoming the cultural inertia of the HPC ecosystem, these infrastructures risk remaining underutilized. It also examines their relationship with the Gigafactories and public cloud providers, and proposes cultural, technical, and economic recommendations to ensure that the AI Factories become efficient, flexible, and sustainable platforms capable of driving innovation and strengthening digital sovereignty in Europe.
Abstract
Europe’s ambitious AI Factories initiative, backed by over €10 billion in investment, is set to deploy a network of AI-optimized supercomputing infrastructures by 2026, aiming to position the continent as a global leader in digital sovereignty and artificial intelligence innovation. Yet cultural, technical, and economic challenges could severely impede their success. Without adaptation to cloud-native practices, clearer economic incentives, and integrated software architectures, these public resources risk massive underutilization and waste of public taxpayer money. This article reviews these critical concerns and proposes concrete recommendations to ensure AI Factories become an efficient, flexible, and sustainable pillar of Europe’s AI ecosystem.
1. Introduction
The European Commission’s AI Factories initiative, central to the EU’s AI Continent strategy, has allocated over €10 billion from Horizon Europe, Digital Europe, and national sources to deploy AI-optimized HPC centers across Europe by the end of 2026. These infrastructures aim to combine compute power, data access, and talent to boost AI innovation for startups and researchers in Europe.
However, AI Factories must overcome significant cultural, technical, and economic challenges. Traditionally rooted in academic supercomputing centers with rigid, reservation-based resource models, they must transform into flexible, cloud-native “neoclouds” to meet AI practitioners’ fast-paced demands. Failure to evolve risks costly underutilization and inefficient use of taxpayer resources.
2. Challenges
2.1 Cultural
HPC supercomputers in Europe have traditionally been public computing resources devoted to academic and research activities. HPC infrastructures are designed for expert users proficient in parallel programming, operating directly on dedicated hardware without virtualization or containerization layers. Their software stacks differ significantly from those in cloud environments, with HPC involving far greater complexity in resource provisioning and dependency management.
Additionally, HPC supercomputers have little experience providing simple, user-friendly front-ends and typically do not employ strong security measures, as their services traditionally targeted closed, trusted communities rather than publicly accessible users. The absence of intuitive systems and commercial integration could limit AI Factories’ appeal beyond scientific contexts, potentially reinforcing HPC’s cultural inertia.
To successfully transition to AI Factories, this inertia must be consciously overcome with explicit support from cloud computing experts, enabling integration of cloud-native usability and security best practices alongside HPC’s powerful hardware acceleration.
2.2 Technical
Modern AI workloads require cloud-native infrastructure characterized by object storage, Kubernetes orchestration, serverless computing, and elastic scaling—capabilities largely missing from traditional HPC environments. Integrating these technologies presents significant challenges for HPC operators, who must develop new environments that support multi-tenancy, optimized data infrastructures, and scalable SaaS front-ends. A recent comprehensive study highlights the urgent need to bridge the traditional cloud-HPC divide by adopting a dual-stack approach that combines HPC supercomputing with cloud-native technologies such as Kubernetes and object storage, thereby meeting both performance and usability demands for AI Factories.
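To make the cloud-native side of this dual stack concrete, the sketch below uses the official Kubernetes Python client to deploy a containerized inference service that requests a GPU and attaches a horizontal pod autoscaler, so capacity follows demand rather than a batch queue. It is a minimal illustration under assumed names (the aif-inference deployment, the ghcr.io/example/llm-inference image, and the namespace are hypothetical placeholders), not a description of any particular AI Factory's stack.

```python
# Minimal sketch: a cloud-native inference service on Kubernetes.
# Assumes a cluster with GPU nodes and the official Python client
# (pip install kubernetes); all names and images are hypothetical.
from kubernetes import client, config


def deploy_inference_service(namespace: str = "ai-factory-demo") -> None:
    config.load_kube_config()  # use the local kubeconfig credentials

    labels = {"app": "aif-inference"}

    # A Deployment running one replica of a containerized model server
    # that requests a single GPU.
    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="aif-inference", labels=labels),
        spec=client.V1DeploymentSpec(
            replicas=1,
            selector=client.V1LabelSelector(match_labels=labels),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels=labels),
                spec=client.V1PodSpec(
                    containers=[
                        client.V1Container(
                            name="model-server",
                            image="ghcr.io/example/llm-inference:latest",  # placeholder image
                            ports=[client.V1ContainerPort(container_port=8080)],
                            resources=client.V1ResourceRequirements(
                                requests={"cpu": "4", "memory": "16Gi", "nvidia.com/gpu": "1"},
                                limits={"nvidia.com/gpu": "1"},
                            ),
                        )
                    ]
                ),
            ),
        ),
    )
    client.AppsV1Api().create_namespaced_deployment(namespace=namespace, body=deployment)

    # A HorizontalPodAutoscaler that scales replicas with load,
    # approximating the elastic behavior AI users expect from a cloud.
    hpa = client.V1HorizontalPodAutoscaler(
        metadata=client.V1ObjectMeta(name="aif-inference-hpa"),
        spec=client.V1HorizontalPodAutoscalerSpec(
            scale_target_ref=client.V1CrossVersionObjectReference(
                api_version="apps/v1", kind="Deployment", name="aif-inference"
            ),
            min_replicas=1,
            max_replicas=8,
            target_cpu_utilization_percentage=70,
        ),
    )
    client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
        namespace=namespace, body=hpa
    )


if __name__ == "__main__":
    deploy_inference_service()
```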
Maintaining separate HPC and cloud stacks risks fragmentation and inefficient resource utilization. Supporting AI inference and agentic workloads requires low-latency, elastic systems beyond the conventional HPC queue-based models. Furthermore, public-facing APIs introduce new cybersecurity challenges that stress historically closed HPC environments.
As numerous HPC centers across Europe transition towards AI Factories, diverse adoption paths may introduce significant risks. Heterogeneity in approaches could become a major barrier: AI Factories that embrace cloud technologies are likely to attract far more usage than conservative centers that adhere strictly to traditional HPC practices, whose workflows may seem alien to AI practitioners. This divergence may lead to critical failures when offering advanced computing resources to AI communities accustomed to flexible, scalable cloud platforms.
2.3 Economic
We identify two crucial economic challenges for AI Factories: establishing efficient economic models and effectively engaging their target communities.
Firstly, HPC supercomputers have traditionally operated under subsidized or free-access models, where researchers receive long-term resource reservations for their projects. Infrastructure managers primarily ensure that all resources are booked. In contrast, commercial cloud providers typically use pay-as-you-go pricing models, creating clear incentives for both users and providers to optimize resource utilization and reduce waste. This fundamental difference means that economic incentives for efficiency and cost control are strong in commercial clouds but less evident in subsidized HPC environments.
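As a back-of-the-envelope illustration of why the incentives differ, the sketch below compares what the same workload costs under a long-term reservation versus pay-as-you-go billing; the prices, node counts, and utilization figures are invented for illustration only.

```python
# Toy comparison of reservation-based vs. pay-as-you-go cost for the
# same AI workload. All numbers are hypothetical placeholders.

GPU_HOUR_PRICE_EUR = 2.50        # assumed on-demand price per GPU-hour
RESERVED_GPUS = 64               # GPUs held by a long-term reservation
RESERVATION_DAYS = 30
ACTUAL_GPU_HOURS_USED = 12_000   # GPU-hours of real compute consumed

# Under a reservation, the allocation is paid (or subsidized) whether or
# not it is used, so idle capacity costs the same as busy capacity.
reserved_gpu_hours = RESERVED_GPUS * 24 * RESERVATION_DAYS
reservation_cost = reserved_gpu_hours * GPU_HOUR_PRICE_EUR
utilization = ACTUAL_GPU_HOURS_USED / reserved_gpu_hours

# Under pay-as-you-go, only consumed GPU-hours are billed, so wasted
# capacity shows up directly as money rather than hidden idle time.
pay_as_you_go_cost = ACTUAL_GPU_HOURS_USED * GPU_HOUR_PRICE_EUR

print(f"Reservation: {reserved_gpu_hours:,} GPU-h booked, "
      f"{utilization:.0%} used, cost {reservation_cost:,.0f} EUR")
print(f"Pay-as-you-go: {ACTUAL_GPU_HOURS_USED:,} GPU-h billed, "
      f"cost {pay_as_you_go_cost:,.0f} EUR")
```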
Since 2025, the EU Energy Efficiency Directive (EED) mandates that all data centers with an installed IT power demand exceeding 500 kW measure their Power Usage Effectiveness (PUE), the ratio of total facility energy to IT equipment energy, and report it to a central EU database. The EU’s approach focuses on monitoring, reporting, and incentivizing best practices through initiatives like the European Code of Conduct for Data Centres, with plans to implement stricter energy and water efficiency targets in the future. AI Factories should follow suit by rigorously monitoring and optimizing their energy consumption like any other data center.
The economic framework for federated AI Factory services across Europe remains uncertain. While EuroHPC JU promises a unified, user-friendly web interface allowing seamless access to computational resources across countries, key questions persist regarding the funding model: Who ultimately bears the costs of energy and resources? Will countries subsidize AI startups or research efforts outside their borders? Clarifying these financial arrangements is critical.
Secondly, AI Factories must precisely define their target user communities and actively work to onboard them. Historically, HPC centers focused on academic and research users, but the landscape is shifting to embrace AI companies and startups across Europe. This transition demands a swift realignment of priorities and services to meet the needs of new stakeholders.
In regions with nascent AI ecosystems, there is a risk of underutilizing the vast computational resources available. To mitigate this, it is advisable to broaden the outreach of AI Factories to include public administrations and citizens, ensuring these strategic assets are employed effectively. Public institutions could leverage AI Factories for sovereign functions such as critical infrastructure protection, emergency management, defense, security, and administration in the justice and health sectors. Positioning AI Factories as public digital infrastructures for sovereign purposes could reduce the risk of underuse.
Another promising avenue is to extend access to citizens by granting digital rights to resources like storage, compute, or AI applications; however, this would require further investment in modern, user-friendly software stacks to support such democratization.
3. Gigafactories, AI Factories, and public cloud providers
A controversial issue here is the relationship between AI Factories, Gigafactories, and public cloud providers.
In principle, AI Factories serve primarily as public digital infrastructures, although they are increasingly opening access to AI startups for both training and inference tasks. This expanded availability of costly computing resources—funded by public money—could be perceived as unfair competition with commercial cloud providers. However, at EBDVF 25 in Copenhagen, Lilith Axner, EuroHPC Programme Officer for Infrastructure, clarified that AI Factories are intended to remain pre-competitive environments. They act as testbeds to accelerate AI startups in Europe by providing the essential infrastructure during the pre-production phase. Consequently, companies are expected to transition to commercial public Cloud infrastructures for their production needs, eventually moving away from the AI Factory.
In contrast, Gigafactories represent a distinct model. Predominantly (around 70%) controlled by private entities with minority public participation (approximately 30% shared between the European Commission and the host country), Gigafactories operate as competitive market players alongside cloud providers. The involvement of public stakeholders ensures that a portion of Gigafactory capacity can be reserved as a sovereign public infrastructure for specific, selected purposes.
There is great potential for AI Factories and Gigafactories to complement each other effectively. A balanced approach might leverage AI Factories for research and startup acceleration, while relying on Gigafactories and commercial Cloud platforms for production environments delivering user-facing services such as AI inference and agent deployment.
As highlighted by Interface, fostering synergy and potentially co-locating Gigafactories with AI Factories could catalyze dynamic digital hubs. Moreover, proximity to major European AI talent centers like Paris and Barcelona is vital to nurturing active ecosystems. AI Factories with international partners—such as Barcelona AIF, Lumi AIF, or It4LIA—can have a broader geographic impact across Europe.
Barcelona AIF benefits from favorable conditions, being located in a top European talent cluster with leading universities and AI companies, and supported by international partners from Portugal, Romania, and Turkey. Additionally, the Spanish Gigafactory will be located nearby, in Tarragona. Like other AI Factories, however, it still faces the challenge of attracting a vibrant AI ecosystem to fully leverage its resources in the coming years.
4. Recommendations
I now enumerate a list of cultural, technical, and economic recommendations for the new AI Factories:
Cultural
- Intensively train AI Factory technical staff in cloud technologies, partnering with private clouds and open-source communities.
- Create multi-stakeholder advisory boards involving startups, researchers, governments, and citizens to tailor services.
- Promote programs enabling collaboration and cultural exchange between academia and industry.
- Ensure adoption of cloud technologies in all AI Factories across Europe and share good practices among them.
Technical
- Develop integrated dual-stack infrastructures merging HPC and cloud technologies with unified authentication, networking, and monitoring.
- Deliver managed object storage, Kubernetes, serverless computing, and SaaS platforms for easier AI deployment.
- Invest in automation and elastic scheduling to enable millisecond-level billing and efficient utilization.
- Ensure cybersecurity compliance with EU regulations and best practices.
- Adopt open source and sovereign software stacks when possible.
- Ensure a smooth transition to and integration with Gigafactories and cloud providers.
Economic
- Require real-time consumption and energy-efficiency monitoring, with public dashboards reporting PUE and WUE (see the sketch after this list).
- Introduce credit systems for users rewarding efficient resource use and discouraging waste.
- Expand services to governments, healthcare, defense, emergency services, and citizens to maximize societal benefits.
- Establish clear financial governance for cross-national resource sharing within EuroHPC.
- Involve stakeholders from the different user communities to accelerate adoption and usage of the infrastructure.
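As a minimal sketch of what the monitoring recommendation could look like in practice, the function below aggregates metered facility energy, IT energy, and water consumption into the PUE and WUE figures a public dashboard would report. The data structure and the sample readings are hypothetical; a real deployment would pull values from the data center's metering systems.

```python
# Minimal sketch: turning metered consumption into dashboard metrics.
# PUE = total facility energy / IT equipment energy (dimensionless, >= 1.0)
# WUE = water consumed / IT equipment energy (liters per kWh)
# The sample readings below are invented placeholders.
from dataclasses import dataclass


@dataclass
class FacilityReadings:
    total_energy_kwh: float   # everything: IT, cooling, lighting, losses
    it_energy_kwh: float      # servers, storage, and network equipment only
    water_liters: float       # water consumed, mainly by cooling


def efficiency_metrics(r: FacilityReadings) -> dict:
    if r.it_energy_kwh <= 0:
        raise ValueError("IT energy must be positive to compute PUE/WUE")
    return {
        "PUE": round(r.total_energy_kwh / r.it_energy_kwh, 3),
        "WUE_l_per_kwh": round(r.water_liters / r.it_energy_kwh, 3),
    }


if __name__ == "__main__":
    # One day of hypothetical readings for an AI-optimized data center.
    today = FacilityReadings(
        total_energy_kwh=132_000,
        it_energy_kwh=110_000,
        water_liters=45_000,
    )
    print(efficiency_metrics(today))  # {'PUE': 1.2, 'WUE_l_per_kwh': 0.409}
```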
5. Conclusions
AI Factories offer Europe a critical opportunity to cement AI leadership and digital sovereignty. Achieving this depends on departing from legacy HPC norms toward an agile, cloud-native operational model emphasizing flexibility, transparency, and efficiency. Addressing cultural divides, technological fragmentation, and economic incentive misalignments is urgent to avoid waste and maximize impact.
AI Factories and Gigafactories represent a good opportunity to build a public digital infrastructure for AI in Europe. While the US and China heavily invest in their AI ecosystems and companies, Europe must decisively support its own digital hubs. A robust approach would be to mandate that public procurement across all European institutions—including universities, hospitals, justice, defense, and government bodies—rely on a sovereign European digital infrastructure.
Europe’s greatest strength and risk lie in its decentralized nature. Without coordination, numerous AI Factories could falter, underutilizing resources, while a few succeed in driving user adoption and ecosystem growth. It is imperative that AI Factories learn from one another, with EuroHPC JU facilitating convergence to avoid redundant efforts and enhance collective progress.
By following the roadmap outlined, Europe’s AI Factories can transform into vibrant, sustainable digital ecosystems that drive AI innovation and public good for decades.


