Once every few decades, we experience a broad shift in how people interact with computers. Think about it. How long have you been relying on your mouse to click on the things you want to interact with? In many ways, the typical user interface model hasn’t changed much since 1984, but we’re finally in the midst of a major new shift.
What I’m calling the fourth-gen user interface has arrived, and it will create a truly dramatic shift for users over the next few years. These new interfaces will leverage technologies like ubiquitous connected devices, location-based services, speech recognition, computer vision, biometrics and even augmented reality (AR). This isn’t your dad’s computing environment.
So, why am I calling this a fourth-gen experience? Let’s look at the first three generations and then dive into the fourth.
The evolution of the user interface
The first-gen computer user interface of the 1950s and 1960s required humans to manually feed computers data in batches (think punch cards), and results were returned to the user through a printer. My dad once told me about one of his early programming experiences, when he dropped his program (a stack of punch cards) on the floor and it took hours to re-sort it.
The second-gen user interface evolved into drastically more flexible character-based systems, or command-line interfaces (CLIs), but they demanded that the user understand a complex system of commands and syntax to be efficient. CLIs are still highly effective for system administrators and developers, but they aren’t a practical solution for the majority of end users.
Today, most people interact with the third-gen user interface: the standard graphical interface that lets people navigate by clicking on on-screen objects. Whether it’s a 1984-era Mac or the latest Windows 10, the experience hasn’t changed extensively. In fact, the most-used smartphone experiences still rely on this interaction mode. It’s only recently that advanced phone applications have started to point the way toward a new interaction model.
The interaction between users and the third-gen user interface is intuitive, but it’s still quite manual. The speed at which interaction occurs depends largely on the users themselves. What we’ve been lacking is contextual awareness and more dynamic interaction models. The fourth-gen user interface wave combines these elements to achieve new levels of productivity.
The rise of IoT and the 4th-gen user interface
We’ve recently witnessed the birth and rise of a collection of industry trends that are beginning to mature rapidly: augmented reality (AR), machine learning (ML), Internet of Things (IoT), voice recognition, biometrics and facial recognition – to name a few. Together, these technologies enable intelligent, context-aware systems that are capable of automatically adjusting and configuring themselves to anticipate and fulfill user needs.
We are seeing the rise of IoT devices in the consumer space. Chances are you’ve heard of, or interacted with, Amazon Alexa, a Nest Thermostat, or an AR game like Pokémon Go. The convenience and efficiency of these devices show the way, and yet they only represent a subset of the possibilities and applications enabled by the fourth-gen user interface.
Take the context of a smart conference room as a business example. When you walk into your meeting, the system can determine who you are, what meeting you are there to join and which resources you need to get started. The IoT-enabled workspace launches the meeting automatically – shaving off the usual five minutes it takes your group to get everything up and running (and that’s not counting any potential “technical difficulties”). However, that’s not all – the room can also automatically adjust the meeting conditions to suit your personal preferences: lower the shades, turn the TV monitors on, initiate the video camera, record the session and email the links to the session recording out to all attendees when the meeting concludes.
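To make that flow concrete, here is a minimal, hypothetical sketch of the conference room logic. None of these objects map to a real vendor API; the room, calendar and preference structures are placeholders for whatever an IoT platform would actually expose.

```python
# Hypothetical sketch of the smart conference room flow described above.
# The room, calendar and preference objects are simple placeholders, not a
# real vendor API.

PREFERENCES = {
    "alice@example.com": {"shades": "down", "record": True},
}

def on_attendee_identified(user, room, calendar):
    """Triggered when the badge reader or camera recognizes someone at the door."""
    meeting = calendar.get(room["id"], {}).get(user)   # which meeting is this person here for?
    if meeting is None:
        return                                         # no booking found; do nothing

    room["displays"] = "on"                            # turn the TV monitors on
    room["camera"] = "on"                              # initiate the video camera
    room["conference"] = meeting["bridge_url"]         # launch the meeting automatically

    prefs = PREFERENCES.get(user, {})
    if prefs.get("shades") == "down":
        room["shades"] = "down"                        # lower the shades
    if prefs.get("record"):
        room["recording"] = True                       # record and email the link afterwards
```

The key difference from a third-gen experience is the trigger: a physical event (someone walking into the room) starts the workflow, not a click.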
In another real-world example, here at Citrix we recently retrofitted one of our office buildings with a new open floor plan. Employees can sit anywhere and log into their applications via shared devices at each desk. So how do you find your co-workers when you need them, or find free space for yourself? A map on a kiosk by the elevator is kept up to date with streamed sensor data, so you know which spaces are available, and you can locate a co-worker via voice command.
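The kiosk map boils down to keeping the latest sensor reading per desk. The sketch below is illustrative only, assuming a simple event payload per desk sensor rather than any particular product:

```python
# Hypothetical sketch of the kiosk map: desk sensors stream events, and the
# map reflects the most recent reading per desk. Event fields are assumed.

occupancy = {}   # desk_id -> {"occupied": bool, "user": str or None}

def on_sensor_event(event):
    """Update the floor map from a single streamed sensor reading."""
    occupancy[event["desk_id"]] = {
        "occupied": event["occupied"],
        "user": event.get("user"),        # set when someone logs in at that desk
    }

def find_free_desks():
    """Which spaces are available right now?"""
    return [desk for desk, state in occupancy.items() if not state["occupied"]]

def locate_coworker(name):
    """Back end for the 'find a co-worker' voice command."""
    for desk, state in occupancy.items():
        if state.get("user") == name:
            return desk
    return None
```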
And, of course, with all the data gathered in these two examples there is huge potential for analytics on how your company resources are being used!
The intersection of IoT and artificial intelligence (AI) technologies will become exponentially more powerful as the technology continues to mature and integrate with new systems, vendors, and applications.
This is the potential of the fourth-gen user interface – the potential that exists between our physical and virtual environments. This is the reality of the future of work. This is the reality of the technology we have today. As noted sci-fi author William Gibson has said, “The future is already here — it’s just not very evenly distributed.”
Infrastructure requirements for the 4th-gen user interface
To truly succeed with the coming sophistication of IoT and the fourth-gen user interface, your enterprise may need a new type of computing infrastructure. Think about all of the data and telemetry feeds that must continuously stream between different systems and sensors. Pure cloud-based infrastructures may struggle to keep up with this mix of high data rates and the demand for sub-second user response. That’s why edge computing and hybrid cloud models are quickly becoming synonymous with achieving and maintaining a competitive advantage.
Businesses can’t send all of this data, from an ever-increasing number of devices, to the cloud in an efficient or cost-effective way. They must decide what needs to go to the cloud for deep processing and what can be manipulated and analyzed more efficiently and quickly at the edge.
Public cloud infrastructures allow for flexible consumption models and faster rollouts. For certain application classes this is revolutionary. However, with the massive data wave coming from IoT, you are often better off doing some processing locally before forwarding a subset of the data to a remote cloud.
Intelligent edge computing enables enterprises to pick and choose the hybrid architecture that delivers the best of both worlds. That’s because edge computing investments are designed to work in conjunction with the cloud; they represent local points of presence for applications that are cloud-driven.
As we continue to move to these more advanced, fourth-gen user experiences, we must build an infrastructure that can intelligently and automatically decide where each component is most efficiently processed.
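What “process locally, forward a subset” looks like in practice is often just windowed aggregation at an edge gateway. Here is a minimal sketch under that assumption; the send_to_cloud function stands in for whatever upload mechanism (HTTPS, MQTT, a provider SDK) you actually use:

```python
# Illustrative edge-gateway sketch: aggregate high-rate sensor readings
# locally and forward only a compact summary to the cloud.

import statistics
import time

WINDOW_SECONDS = 60
buffer = []                     # raw readings held at the edge
window_start = time.time()

def send_to_cloud(summary):
    print("uploading:", summary)      # placeholder for an HTTPS/MQTT call

def on_reading(value):
    """Handle one raw sensor reading at the edge."""
    global buffer, window_start
    buffer.append(value)

    if time.time() - window_start >= WINDOW_SECONDS:
        # Ship a small summary instead of every raw sample.
        send_to_cloud({
            "count": len(buffer),
            "mean": statistics.mean(buffer),
            "max": max(buffer),
        })
        buffer = []
        window_start = time.time()
```

A device sampling several times per second then sends the cloud one message per minute instead of hundreds, which is where the bandwidth and cost savings come from.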
Take the Nest Thermostat as an example of a device that works across a hybrid environment. Most of its day-to-day, minute-by-minute decisions are made locally. Limited, pre-processed telemetry data is streamed to the cloud, where machine learning algorithms kick in to adjust your long-term energy usage. In addition, the public cloud offers an easily accessible control point that lets your phone manage the system remotely. The need for edge computing in such a hybrid cloud environment only increases with the complexity of the devices or processes involved.
As Dr. Tom Bradicich, HPE VP and GM of Servers and IoT Systems, explained in a recent blog post, there are several key reasons why intelligent edge systems will be critical in the enterprise, including, but not limited to, matters of latency, bandwidth, compliance, security, cost, duplication and data corruption.
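This is not Nest’s actual firmware, but the division of labor in that hybrid pattern can be sketched in a few lines: the control decision runs locally on every cycle, while the cloud only pushes occasional commands (like a new setpoint from your phone) and receives summarized telemetry. The heater object and command format below are placeholders.

```python
# Sketch of the hybrid pattern: local control loop, cloud as control point.
# Heater object and command structure are illustrative assumptions.

def local_control_cycle(current_temp, setpoint, heater):
    """Runs on the device every cycle; no cloud round trip required."""
    if current_temp < setpoint - 0.5:
        heater.on()
    elif current_temp > setpoint + 0.5:
        heater.off()

def on_cloud_command(command, state):
    """The cloud is the control point for the phone app: it can push a new
    setpoint, but it never sits inside the minute-by-minute control loop."""
    if command["type"] == "set_target":
        state["setpoint"] = command["value"]
```

If the network drops, the local loop keeps regulating temperature; only the remote-control and long-term-learning features degrade.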
When, why and how to get started in the enterprise
So, when should enterprises operationalize IoT computing, fourth-gen user interface technologies, and edge computing? The answer is now.
These fourth-gen technologies are maturing on a similar timeline, and the earliest adopters will reap the most benefit. Timing is critical to ensure that IT can establish a sustainable competitive advantage, no matter what industry you’re in.
Security and privacy concerns also take center stage in a hybrid cloud environment. The strict regulations around financial data, the private nature of personal or health-related data, and the persistence of advanced cyberattacks all point to the need for a flexible system. Businesses must decide when to process data locally and when to export it to the cloud, depending on its sensitivity. The power of edge computing in a hybrid environment makes this possible. When the stakes of a single breach are so high, there isn’t room for undue risk.
The volume of telemetry data and the analytical power these technologies produce are revolutionizing and accelerating how people can work. The implications for efficiency, productivity, collaboration, engagement, and innovation are huge.
Looking for an entry point? Start with the IoT services that most major cloud providers maintain. These hubs, such as Azure IoT Hub, serve as the cloud-side ingestion point for device telemetry, and most providers pair them with edge runtimes that collect and preprocess local data before forwarding it to the cloud. Exploring these services is a practical way to begin the edge computing journey.
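As a concrete first step, here is a minimal sketch that sends one pre-processed telemetry message to Azure IoT Hub using the azure-iot-device Python SDK. The connection string is a placeholder you would copy from your own hub’s device registration.

```python
# Minimal sketch: send one summarized telemetry message to Azure IoT Hub.
# Assumes `pip install azure-iot-device`; the connection string below is a
# placeholder, not a real credential.
import json
from azure.iot.device import IoTHubDeviceClient, Message

CONNECTION_STRING = "HostName=<your-hub>.azure-devices.net;DeviceId=<device>;SharedAccessKey=<key>"

def send_summary(summary: dict) -> None:
    client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
    try:
        msg = Message(json.dumps(summary))
        msg.content_type = "application/json"
        msg.content_encoding = "utf-8"
        client.send_message(msg)          # one message per pre-processed summary
    finally:
        client.disconnect()

if __name__ == "__main__":
    send_summary({"room": "conf-3a", "occupied": True, "avg_temp_c": 21.4})
```

From there, the provider-side tooling (routing, stream analytics, storage) takes over, and you can decide how much processing stays on the device versus in the hub.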
The fourth-gen user interface is already here; it’s just not evenly distributed. Those who lag behind the industry leaders will soon be scrambling to catch up.