Looking back on my career in network computing, I see a pendulum swinging back and forth between centralization and decentralization. In the beginning, we had mainframes as a centralized computing source. Then we decentralized to local mini-computers and micro-computers. This was followed by a swing back to centralization with servers and cloud-based applications. Pointing a web browser at a server is essentially the same thing we were doing when we used an ASCII terminal to interact with a mainframe.
Once things were centralized in the cloud, it became clear that they were now too centralized, and we had to broaden the use of network computing. The result was a blurring of functions across the network, increased complexity, and more time spent on management (increased OPEX), creating operational challenges as everything kept scaling out. Add to this the fact that what you could deploy wasn't always intuitive, and that everything had to be automated. The overriding need was that all of this infrastructure, no matter where it resided, had to be programmable in order to solve these challenges.
Today, the arrival of Fog Computing and Edge Computing is swinging the pendulum back towards decentralization and has, once again, highlighted the need for programmability. Fog comes out of the IoT world and addresses how to deal with massive volumes of data without sending it all to a central server in the cloud. Though some would say this definition fits both Fog and Edge computing, there are subtle differences. One key difference is that Edge concerns only the edge devices connected to the Internet, whereas Fog encompasses both the edge and the network between, providing the following:
- Real-time processing of data with minimal latency in the response.
- Reduced network traffic, and a shorter distance for that traffic to travel through the network.
Organizations like the OpenFog Consortium and the IEEE have worked together to create standards for Fog Computing. These standards rest on eight pillars, one of which addresses programmability: the need for highly adaptive deployments, including support for programming at both the software and hardware layers. According to the standards, programmability of a Fog node provides the following benefits:
- Adaptive infrastructure for diverse IoT deployment scenarios and supporting changing business needs.
- Resource efficient deployments maximizing the resources by using a multitude of features including containerization. This increases the portability of components and is a key design goal enabled by programmability.
- Multi-tenancy to accommodate multiple tenants in a logically isolated runtime environment.
- Economical operations through infrastructure that adapts to changing requirements.
- Enhanced security, with the ability to apply patches automatically and respond more quickly to evolving threats.
What I find both interesting and concerning throughout these discussions is that the conversation on programmability is always broad and doesn't cover basic management topics, let alone more advanced concepts such as distributed architectures and network-wide transactions. There is not yet a complete conversation on how to manage all of these things in the network and at the edge. Unfortunately, management is often an afterthought, something network element providers try to bolt on after the fact. To properly manage Fog and Edge computing, the management, orchestration, and provisioning of network nodes need to be considered from day one and done in a common, standardized, and programmable way. To be competitive and deliver usable solutions, programmable management is essential: it enables the automation that saves time and OPEX in the world of Edge and Fog computing.
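To make "common, standardized, and programmable" concrete, here is a minimal Python sketch that programmatically builds a NETCONF-style `<edit-config>` payload against the standard IETF interfaces YANG model (RFC 8343). The interface name and description are purely illustrative, and a real deployment would send this payload over an actual NETCONF session to the node's management agent; this sketch only shows how standards-based configuration can be generated by code rather than by hand.

```python
# Sketch: build a NETCONF <edit-config> request using only the Python
# standard library. Namespaces are the real IANA-registered ones; the
# interface values are hypothetical examples.
import xml.etree.ElementTree as ET

NC_NS = "urn:ietf:params:xml:ns:netconf:base:1.0"
IF_NS = "urn:ietf:params:xml:ns:yang:ietf-interfaces"

def build_edit_config(if_name: str, description: str) -> str:
    """Return an <edit-config> payload setting an interface description
    in the running datastore (merge is NETCONF's default operation)."""
    ET.register_namespace("nc", NC_NS)
    rpc = ET.Element(f"{{{NC_NS}}}edit-config")
    target = ET.SubElement(rpc, f"{{{NC_NS}}}target")
    ET.SubElement(target, f"{{{NC_NS}}}running")
    config = ET.SubElement(rpc, f"{{{NC_NS}}}config")
    interfaces = ET.SubElement(config, f"{{{IF_NS}}}interfaces")
    interface = ET.SubElement(interfaces, f"{{{IF_NS}}}interface")
    ET.SubElement(interface, f"{{{IF_NS}}}name").text = if_name
    ET.SubElement(interface, f"{{{IF_NS}}}description").text = description
    return ET.tostring(rpc, encoding="unicode")

payload = build_edit_config("eth0", "uplink to fog node")
print(payload)
```

Because the payload is generated from data rather than typed by an operator, the same code path can configure one node or ten thousand, which is exactly the kind of automation that keeps OPEX in check as Fog and Edge deployments scale out.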
See how ConfD can deliver the programmability and network management needed to solve these challenges. >>Learn More