My previous series of posts talked about a present problem for anyone deploying on the internet: what do you need to measure when deploying into the cloud and how do you measure cloud performance?
But planning and deployment issues are not restricted to just the immediate-term questions I was tackling there. Anyone in charge of a network has to think about how that network will evolve. The next articles in this series will be about the internet of the future and will suggest ways in which the internet seems likely to develop.
One of the astonishing things about the internet is that it is voluntary. With very little central organization, the internet emerges from the interconnection of independent networks. And because of network effects, interconnecting those networks makes each of them more valuable, particularly when the network itself merely provides connectivity for intelligent applications at its edges. This nature of the internet is what has allowed it to subsume other communications technologies.
The internet’s flaws
But there are clouds on the horizon. People as different as Malcolm Gladwell and Bruce Schneier claim that the basic, open design of the internet is, in fact, its deepest flaw. Schneier even asserts that the only way the internet can be made safe is through government regulation. It is undeniable that Distributed Denial of Service (DDoS) attacks are getting worse, even if they are not getting more sophisticated. So, it is unlikely that pressure to “do something” about security on the internet will let up, even if the “something” is possibly harmful to the very thing one is trying to protect.
At the same time, we see the emergence of systems that are on the internet but not of it. The internet grew and developed in a world of open standards. But we increasingly live in a post-standards world. Historically, the internet worked because different implementers all implemented a common standard. So, interoperation of different systems made by different people was the basic way the internet grew.
More recently, however, we see standards that are “living documents” (such as those published by the Web Hypertext Application Technology Working Group), which make interoperability hard to test. In addition, many technologies deployed on the internet are really proprietary APIs running over HTTP (such as those operated by Twitter or Dyn). These kinds of interfaces are by definition not subject to interoperation, because the publisher of the API can change them at any time.
Finally, the tendency in the Internet of Things so far has been toward proprietary standards that are, in effect, an effort to create a closed “ecosystem” that is under the control of a single entity or consortium. Moreover, rather than creating a network of smart devices that mostly talk to each other, the bulk of shipping systems has been using a client-server model, with most of the intelligence in a central service. Formally, the pattern follows the internet model of intelligence at the edge, but it makes the central service the only intelligent part of the system. This pattern resembles the more centralized architecture of the old phone system.
If these trends continue, the internet of the future will be considerably different from the one that has brought us the innovation and dynamism we have seen so far. An internet with many gatekeepers would not be like the internet we have become used to.
In the coming articles, I will examine these trends, consider whether they really represent the future of the internet, and explore what choices we might collectively make to avoid the negative consequences. For only if we make the right decisions will we be building better networks.
This article is published as part of the IDG Contributor Network.