What will be the defining technology trend of the next decade, and how will it shape the landscape of data management? Cloud computing has proven invaluable for organizations of all sizes, but a new trend is emerging: edge computing. As intelligence shifts from centralized data centers to the edge, businesses must balance competing goals such as performance, compliance, autonomy, privacy, cost, and security. So, let’s explore the rising prominence of edge computing and its potential impact on the future of data management.
Defining the Edge
Edge computing can be interpreted in various ways, ranging from data centers positioned near users (the “near edge”) to content delivery networks (CDNs) and resource-constrained devices such as IoT devices or sensors (the “far edge”). In the context of WebAssembly (Wasm), the edge refers to the device with which the user is actively interacting. This could be a smartphone, a car, a train, a plane, or even a humble web browser. The essence of the “edge” lies in the ability to place computational intelligence on-demand, precisely where and when it is needed. NTT’s 2023 Edge Report reveals that nearly 70% of enterprises are fast-tracking edge adoption to gain a competitive advantage or address critical business challenges. The demand to deliver real-time, rewarding experiences on personal devices aligns with the need to empower manufacturing processes and industrial appliances with direct compute power.
A Path to Abstraction: Evolution of Cloud Technologies
Over the past two decades, the world of technology has undergone a remarkable transformation, marked by a continuous effort to abstract complexity away from the development process. Each wave of innovation has brought standardized platforms that changed how applications are built, deployed, and managed. These advancements have not only simplified development but also significantly reduced time-to-market, fostering an environment of rapid innovation.
The journey began with the advent of virtual machines (VMs), which played a pivotal role in decoupling operating systems from specific hardware, ushering in the era of the public cloud. By providing a virtualized environment, VMs allowed for greater flexibility and scalability, enabling businesses to optimize their infrastructure and utilize resources more efficiently.
Containers, the next level of abstraction, revolutionized the technology landscape further. By encapsulating applications and their dependencies, containers offered a lightweight and portable solution for deploying software across various environments consistently. This newfound portability streamlined the deployment process, enabling developers to focus more on application logic rather than worrying about underlying infrastructure.
The transition from monolithic architectures to microservices was another significant milestone in this journey. Microservices broke monolithic applications down into smaller, more manageable components, making it easier to develop, deploy, and scale individual services independently. This decoupling brought flexibility and agility, allowing organizations to adapt swiftly to changing demands.
As distributed systems became more prevalent, the need for an efficient orchestrator to manage containers at scale became evident. Kubernetes, an open-source container orchestration system, emerged as the de facto standard for managing containerized applications. With its robust features for automating deployment, scaling, and monitoring, Kubernetes empowered businesses to manage large clusters of containers effectively.
Recognizing the need for more lightweight and resource-efficient solutions, smaller orchestrators such as K3s, KubeEdge, MicroK8s, and MicroShift entered the scene. These lightweight variants of Kubernetes extended its capabilities to smaller footprints, making it feasible to run containers even on resource-constrained devices like Raspberry Pis or personal devices.
Still, the growing prominence of edge computing introduces a new set of requirements for container orchestration. Edge devices have only a fraction of the resources of a data center, and while Kubernetes excels at managing clusters at scale, the substantial resource overhead it imposes makes it impractical on such constrained hardware.
Challenges of Kubernetes at the Edge
Resource constraints in mobile architectures, often quantified in terms of size, weight, and power (SWaP), pose significant challenges for running Kubernetes at the edge. A platform designed for large-scale container deployments in data centers faces serious obstacles on resource-constrained hardware, whether an edge device or a vehicle such as a jet.
To provide context, even optimized Docker containers, known for their efficiency, typically consume around 100 MB to 200 MB of memory. Java-based containerized applications, while powerful, can consume gigabytes of memory per instance. Such consumption might be tolerable where resources are abundant, but it is untenable on resource-constrained devices or vehicles.
For instance, on edge devices such as Raspberry Pis or personal smartphones, limited memory and processing power can make running Kubernetes infeasible outright; attempting it typically yields poor performance and compromised functionality.
The stakes rise further for critical applications deployed in vehicles like jets. In such scenarios, 30% to 35% of the device’s resources may be dedicated solely to operating Kubernetes, leaving an inadequate share for the intended application and undermining the primary purpose of deploying software on these devices.
These constraints demand a more lightweight and efficient container orchestration solution, one that operates effectively within tight resource budgets. Kubernetes offers robust features and scalability for data centers and large clusters, but the edge clearly calls for a different approach.
As the demand for edge computing continues to grow, industry players are actively seeking solutions that strike the right balance between functionality and resource efficiency. To optimize performance at the edge, the development and adoption of more lightweight container orchestrators, tailored to the unique constraints of edge devices, are gaining momentum. These orchestrators focus on minimizing resource overhead while still providing essential container management capabilities, making them more suitable for deployment on resource-constrained devices and vehicles.
The High Cost of Container Management
Containers offer numerous benefits for technology infrastructure, but they do not address the overhead of managing the applications inside them. Beyond operating Kubernetes itself, teams incur significant costs maintaining and updating “boilerplate,” the non-functional code that is statically compiled into each application or container at build time. Developers often find themselves spending a considerable portion of their time on “boilerplate farming,” constantly patching non-functional code that constitutes a significant percentage of the average application or microservice.
Wasm Advantages at the Edge
WebAssembly emerges as the next major technical abstraction, aiming to address the complexities of managing the day-to-day dependencies embedded in every application. It tackles the cost of operating horizontally distributed applications across clouds and edges, where stringent performance and reliability requirements must be met.
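To make the abstraction concrete, here is a minimal sketch (the function name and build target are illustrative, not drawn from any particular product) of how a single Rust function becomes a portable Wasm binary. Compiled once, the same module runs on any spec-compliant runtime, whether a browser or a standalone server-side host.

```rust
// Build once for the generic Wasm target:
//   rustup target add wasm32-unknown-unknown
//   cargo build --release --target wasm32-unknown-unknown
// The resulting .wasm binary is typically measured in kilobytes,
// a fraction of the container footprints discussed earlier.

// `#[no_mangle]` keeps the export name stable so any host can find it.
#[no_mangle]
pub extern "C" fn add(a: i32, b: i32) -> i32 {
    a + b
}
```

The same binary can then be instantiated from a browser, a CDN worker, or an embedded runtime without a rebuild, which is exactly the write-once portability that horizontally distributed edge applications require.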
The WebAssembly Component Model
Applications operating across edges often encounter challenges due to the diverse range of devices present. Streaming video to edge devices, for instance, requires scaling an application across thousands of unique combinations of operating system, hardware, and version. Today, teams build a different version of their application for each deployment domain, incurring considerable overhead and complexity. WebAssembly offers a promising solution: a component model that makes applications portable across these boundaries, reducing the burden of managing diverse environments.
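To illustrate how the component model expresses this portability, here is a hedged sketch using the wit-bindgen crate; the WIT package, world, and function names are invented for this example. The interface is declared once, and any guest or host that speaks it can interoperate, regardless of the operating system or hardware underneath.

```rust
// A sketch of a Wasm component built with the wit-bindgen crate.
// The inline WIT below defines the contract this component exports
// (all names here are illustrative).
wit_bindgen::generate!({
    inline: r#"
        package example:media;

        world playback {
            export play: func(title: string) -> string;
        }
    "#,
});

struct Player;

// The generated `Guest` trait mirrors the world's exports; implementing
// it is all a component needs to do to satisfy the contract.
impl Guest for Player {
    fn play(title: String) -> String {
        format!("now playing: {title}")
    }
}

// Wire the implementation into the component's export table.
export!(Player);
```

Because device-specific details live behind the declared interface, one component binary can target the thousands of OS, hardware, and version combinations described above instead of requiring one build per deployment domain.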
Wasm at the Consumer Edge
WebAssembly is already changing how applications are developed, deployed, operated, and maintained. Notably, Amazon Prime Video has leveraged WebAssembly to streamline updates across more than 8,000 unique device types, including TVs, PVRs, consoles, and streaming sticks, eliminating the need for a separate native release for each device and improving overall performance.
WebAssembly: Unleashing a Revolution Across the Technology Landscape
Undoubtedly, WebAssembly adoption is sweeping across the entire technology landscape, from core public clouds to the furthest reaches of end-user edges, leaving no domain untouched. The technology has already garnered the attention of industry pioneers, with major players like Amazon, Disney, BMW, Shopify, and Adobe leading the way as early adopters. Their implementations showcase the remarkable power and versatility of this new stack.