While preparing to write our new white paper “Managing Distributed Systems Using NETCONF and RESTCONF Transactions,” I have been reading Martin Kleppmann’s excellent book “Designing Data-Intensive Applications: The Big Ideas Behind Reliable, Scalable, and Maintainable Systems.” Reading it made me think of how important the skill of doing “deep work” versus “shallow work” has become; that is, how increasingly valuable the skill of going “deep” is. Directing our focus to reading a book instead of settling for a five-bullet PowerPoint summary matters more than ever, and will matter even more going forward, as the overflow of information, attention-grabbing “funny cat videos,” and “post-truth” sensational news keeps us in the shallows.
With technology and information targeting our most precious commodity, our attention, it is more important than ever to sharpen our ability to prioritize real value and not settle for a simplified version. We need to focus on, and pay attention to, the slightly more complex solution that provides vastly greater benefits.
Keeping solutions simple and not overengineering them is also an important skill. The flip side, though, is that we often settle for “super simple” when doing right by a problem initially perceived as complex may, in reality, not require much more work. It is important, and beneficial, to spend just a bit more time investigating the problem at hand.
Stephen Covey wrote, “The noise of the urgent creates the illusion of importance.” I know this all too well as a recovering “shallow” doer; I have to work hard to spend more and more of my time in “deep” contemplation.
Related to the white paper I just wrote: to finally break free from human-to-machine CLI script tinkering, operators, service providers, and equipment providers need to form a common, deeper understanding of the vast effects of settling for the bare-minimum “shallow” solution. By settling for the bare minimum of NETCONF or RESTCONF (or worse, the good old CLI scripts) instead of the full capabilities of NETCONF that enable a programmable network, you lose the power and advantages of robust, machine-to-machine, network-wide NETCONF transactions for automated service deployment.
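The essence of a network-wide transaction is all-or-nothing semantics: a service change is committed on every participating device, or on none of them. Here is a minimal conceptual sketch in Python, assuming a hypothetical `Device` class; a real implementation would use a NETCONF client issuing `<edit-config>` to the candidate datastore, `<validate>`, and `<commit>` on each device rather than these in-memory stand-ins.

```python
# Conceptual sketch of a network-wide transaction: every device either
# commits the staged (candidate) configuration or the whole change is
# rolled back. The Device class is hypothetical, standing in for a real
# NETCONF session to a network element.

class Device:
    def __init__(self, name, fail_validation=False):
        self.name = name
        self.fail_validation = fail_validation  # simulate a device rejecting the change
        self.running = {}    # committed configuration
        self.candidate = {}  # staged configuration

    def edit_candidate(self, config):
        # Stage the change without touching the running configuration.
        self.candidate = {**self.running, **config}

    def validate(self):
        return not self.fail_validation

    def commit(self):
        self.running = dict(self.candidate)

    def discard(self):
        self.candidate = dict(self.running)


def network_transaction(devices, config):
    """Apply config across all devices atomically: commit all or none."""
    for dev in devices:
        dev.edit_candidate(config)
    if all(dev.validate() for dev in devices):
        for dev in devices:
            dev.commit()
        return True
    # One device rejected the change: roll everything back.
    for dev in devices:
        dev.discard()
    return False
```

The point of the sketch is the failure path: if any device rejects the change, no running configuration is touched anywhere, which is exactly what per-device CLI scripting cannot guarantee.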
I have observed a need in the networking industry for a more widespread, deeper understanding of network-wide transactions. For example:
- Out-of-band configuration changes in virtual/physical devices and microservices are a more severe issue than commonly perceived, because getting back in sync with the device configuration, and syncing all the way back up to the service level, is often hard and complex to perform and maintain.
- The ultimate goal of using standard YANG data models is to make mapping a service YANG data model to multiple different device YANG data models much simpler. The purpose is not to make devices “exchangeable” or a “commodity,” which seems to be a common fear among equipment providers. Instead, the mindset should be reframed as: “If I maintain a deep understanding of the problem at hand and provide a valuable solution for tackling that problem, I will stay relevant to my customers.”
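The service-to-device mapping in the second point can be sketched as a simple fan-out: one service-level instance is translated into per-device configurations, each shaped for that device’s model. The sketch below is conceptual; the service and device parameter names (`vlan`, `bridge-domain`, the `pe1`/`pe2` endpoints, and both mapper functions) are made-up illustrations, where in practice both sides would be defined by YANG data models.

```python
# Conceptual sketch of service-to-device model mapping: one service-level
# model fans out into per-device configurations. All names here are
# hypothetical stand-ins for YANG-modeled data.

SERVICE = {"name": "acme-l2vpn", "vlan": 100, "endpoints": ["pe1", "pe2"]}

def map_to_vendor_a(service, device):
    # Hypothetical vendor A models the service as interface VLAN tagging.
    return {"device": device,
            "interface": {"vlan-tagging": service["vlan"],
                          "description": service["name"]}}

def map_to_vendor_b(service, device):
    # Hypothetical vendor B nests the same data under a bridge domain.
    return {"device": device,
            "bridge-domain": {"id": service["vlan"],
                              "name": service["name"]}}

# Which mapper applies to which endpoint (here keyed by device name).
MAPPERS = {"pe1": map_to_vendor_a, "pe2": map_to_vendor_b}

def render_service(service):
    """One service instance becomes one configuration per endpoint."""
    return [MAPPERS[ep](service, ep) for ep in service["endpoints"]]
```

The vendor-specific shape lives only in the mapper functions; the service model, and any device supporting a standard YANG model, stays untouched, which is why standard models simplify the mapping rather than commoditize the device.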
On the topic of focusing our attention, if you are interested in working on your skills of going “deep”, I would recommend these three excellent books that I have recently read:
- “Deep Work: Rules for Focused Success in a Distracted World” by Cal Newport
- “Theory U: Leading from the Future as It Emerges” by Otto Scharmer
- “21 Lessons for the 21st Century” by Yuval Noah Harari
If you are interested in going deeper into automating the network, read the “hot off the press” book written by my colleagues Jan Lindblad, Benoit Claise, and Joe Clarke called “Network Programmability with YANG.” It is a must-read.
Finally, of course, please take a few minutes and read our new white paper “Managing Distributed Systems Using NETCONF and RESTCONF Transactions.”