Data centre architecture: how is it evolving?
In recent years, data centre architecture has undergone a radical, systemic transformation, driven not only by growing demand for digital services but also by the need to adapt to new technological, operational and ecological challenges.
With the spread of the Internet of Things, the exponential increase in data traffic and the widespread use of cloud-native applications, today’s companies find themselves immersed in an IT landscape which requires nothing short of a complete overhaul of how data centres are conceived, designed and managed.
A constantly changing hybrid infrastructure
The current situation faced by companies (especially medium to large enterprises like banks) is characterised by the co-existence of different models of data centre, including centralised, cloud-based models, regional facilities and smaller internal data centres. This hybrid configuration reflects the need to balance security, latency and availability with regulatory compliance.
Large centralised data centres remain essential for the management of critical, sensitive workloads, as they offer high availability standards thanks to 2N redundancy, constant monitoring and Tier III and IV certified systems. However, centralised systems alone are no longer sufficient, given the growing need to bring computational capacity closer to where data is generated.
Regional data centres represent a balance between centralisation and decentralisation. They enable latency to be reduced and bandwidth to be optimised, and are strategically positioned to serve specific geographical areas. They offer good security standards, using physical segmentation, logical access zoning, multifactor authentication and layered defences against unauthorised access.
At the same time, localised data centres (often smaller and managed directly by the company itself) continue to play a vital role in hosting proprietary or legacy applications. However, they sometimes suffer from structural limitations, such as a lack of redundancy or inefficient monitoring. Nevertheless, they remain an essential element of the network infrastructure in many companies, especially when maintaining stable connectivity with cloud environments is a priority.
From data centre architecture as a physical place to the cloud-edge continuum
Nowadays, IT infrastructure can no longer be considered an entity confined within a specific physical location. Resources are distributed along a continuum spanning core, edge, and public and private cloud environments. In this new context, traditional concepts such as ‘failure’ or ‘availability’ need to be reinterpreted. What counts now is the end user experience, the repercussions on productivity and the continuity of service.
A failure, for example, can no longer be assessed solely in terms of the malfunctioning of a single component; it must be contextualised according to its impact on company operations, the number of users affected and the possibility of activating an efficient, transparent failover.
In a distributed environment, then, availability analysis based on multiplying the availability figures of individual components (e.g. A1 × A2) is no longer sufficient. A targeted Business Impact Analysis is required, one that can identify the criticality of each element in relation to system inter-dependency, the number of users affected and the priority of the service provided. This approach enables the data centre architecture to be classified not only according to its theoretical availability but also by its strategic value to the organisation.
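To make the difference concrete, here is a minimal Python sketch (with purely hypothetical components, availability figures and impact weights) that contrasts the classical series-availability product with a simple impact-weighted criticality score of the kind a Business Impact Analysis might produce:

```python
from math import prod

# Classical view: the availability of a chain of components in series is the
# product of the individual availabilities (e.g. A1 x A2).
# All components, figures and weights below are purely hypothetical.
components = {
    "core_switch": 0.9999,
    "storage_array": 0.9995,
    "edge_gateway": 0.999,
}
series_availability = prod(components.values())
print(f"Theoretical series availability: {series_availability:.4%}")

# Business-impact view: weight each component by the users it affects and the
# priority of the services that depend on it.
impact = {
    # component: (users_affected, service_priority from 1 to 5)
    "core_switch": (12000, 5),
    "storage_array": (8000, 4),
    "edge_gateway": (300, 2),
}

def criticality(users: int, priority: int, availability: float) -> float:
    """Illustrative score: expected unavailability scaled by business impact."""
    return (1.0 - availability) * users * priority

ranking = sorted(
    ((name, criticality(*impact[name], components[name])) for name in components),
    key=lambda item: item[1],
    reverse=True,
)
for name, score in ranking:
    print(f"{name}: criticality score {score:.1f}")
```

Under this toy weighting, a component with modest theoretical unavailability can still rank as the most critical element if it affects many users or high-priority services.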
Regarding strategy, it is important to realise that edge computing is not a passing trend but a solid response to the need to process data close to its place of origin. This model reduces latency, lightens backbone network traffic and enables real-time processing. Its large-scale adoption is, however, still hindered by a lack of shared standards, its complex management and the wide-ranging hardware and software requirements of different industrial sectors.
Companies often implement edge computing in a fragmented manner, associating it with specific uses. A new approach is needed in which edge is integrated into a unified architectural strategy. Two emerging models support this evolution:
- intelligent infrastructure: based on modular, replicable components, managed via automation and artificial intelligence. These systems are able to dynamically adapt resources to match workloads, improving efficiency and resilience;
- programmable infrastructure: based on the concept of Infrastructure as Code (IaC), it allows the entire IT infrastructure to be managed automatically, regardless of its physical location. This approach not only simplifies operations, it also accelerates time-to-market and frees IT staff from repetitive tasks (see the sketch after this list).
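As an illustration of the IaC principle rather than of any specific tool, the following Python sketch declares a desired infrastructure state as data and computes the actions needed to reconcile a current environment with it; all resource names and fields are invented for the example:

```python
# Minimal illustration of Infrastructure as Code: the desired state of the
# infrastructure is declared as data, and a reconciliation step computes the
# actions needed to converge the running environment towards it.
# All resource names and fields are hypothetical.

desired_state = {
    "edge-cluster-milan": {"type": "k8s_cluster", "nodes": 3, "location": "edge"},
    "core-db": {"type": "database", "replicas": 2, "location": "core"},
}

current_state = {
    "core-db": {"type": "database", "replicas": 1, "location": "core"},
    "legacy-vm-42": {"type": "vm", "nodes": 1, "location": "on_prem"},
}

def plan(desired: dict, current: dict) -> list[str]:
    """Return the list of actions needed to converge current towards desired."""
    actions = []
    for name, spec in desired.items():
        if name not in current:
            actions.append(f"CREATE {name} ({spec['type']})")
        elif current[name] != spec:
            actions.append(f"UPDATE {name} to match declared spec")
    for name in current:
        if name not in desired:
            actions.append(f"DESTROY {name} (no longer declared)")
    return actions

for action in plan(desired_state, current_state):
    print(action)
```

Because the declaration is plain data kept under version control, the same plan can be applied to core, edge or cloud resources alike, which is what makes the approach independent of physical location.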
ACES: an autopoietic model for the future
One of the most advanced examples of adaptive data centre architecture is the ACES project (Autopoietic Cognitive Edge-cloud Services). This model takes its inspiration from biology in order to construct systems that are capable of self-configuration, self-repair and self-optimisation in dynamic environments such as those in the edge-cloud continuum.
ACES uses machine learning techniques, algorithms based on collective behaviour (such as swarms and the hormonal system) and probabilistic reasoning to ensure high levels of performance in contexts characterised by low latency requirements, hardware heterogeneity and volatile demand.
The objective is to create a system where resources and workloads interact as autonomous agents, taking distributed but coherent decisions. ACES architecture is therefore designed to ensure:
- high availability (>99.9%);
- no single point of failure;
- automatic horizontal scalability;
- minimal latency for real-time responses;
- predictive asset management capacity;
- integration with GIS and SCADA systems;
- advanced predictive maintenance mechanisms;
- security and privacy of distributed data.
By simulating biological phenomena, such as the release of synthetic hormones by software agents, the system is able to dynamically optimise workload placement and adapt to environmental conditions in real time.
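The following Python sketch is not the ACES algorithm itself, but a toy, hormone-inspired placement heuristic that conveys the idea: each node emits a ‘hormone’ signal proportional to its spare capacity, the signal decays over time, and incoming workloads are attracted to the node with the strongest signal. All node names, capacities and constants are hypothetical:

```python
import random

# Toy hormone-inspired placement heuristic (not the actual ACES algorithm):
# nodes emit a signal proportional to spare capacity, the signal decays each
# cycle, and new workloads go to the node with the strongest signal.

DECAY = 0.5          # fraction of hormone that evaporates each cycle
EMISSION_RATE = 1.0  # hormone emitted per unit of spare capacity

class Node:
    def __init__(self, name: str, capacity: float):
        self.name = name
        self.capacity = capacity
        self.load = 0.0
        self.hormone = 0.0

    def emit(self) -> None:
        spare = max(self.capacity - self.load, 0.0)
        self.hormone = self.hormone * (1 - DECAY) + EMISSION_RATE * spare

def place(workload: float, nodes: list[Node]) -> Node:
    """Assign the workload to the node currently emitting the strongest signal."""
    target = max(nodes, key=lambda n: n.hormone)
    target.load += workload
    return target

nodes = [Node("edge-a", 10.0), Node("edge-b", 6.0), Node("core", 20.0)]
random.seed(0)
for cycle in range(5):
    for node in nodes:
        node.emit()
    workload = random.uniform(1.0, 4.0)
    chosen = place(workload, nodes)
    print(f"cycle {cycle}: workload {workload:.1f} -> {chosen.name}")
```

Because emission depends on spare capacity and the signal decays, heavily loaded nodes gradually stop attracting work without any central coordinator, which mirrors, in a very simplified way, the decentralised adaptation described above.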
In brief, this new approach to data centre architecture implies that the old ‘hardware-centric’ mindset must be replaced with a strategy centred on services and user experience. Companies must equip themselves with intelligent orchestration tools, redefine availability metrics and invest in integrated hybrid architectures.
The IT infrastructure of the future will be fluid and intelligent as well as self-adapting. And, above all, it will be conceived not as a collection of physical places but rather as an ecosystem of interconnected services, capable of evolving together with the business.
Translated by Joanne Beckwith