Where will IoT computing reside?

Millions of different devices will connect to the IoT, and each device will allocate computing resources differently. You'll need to determine where the compute happens: close to the device, in the data center, or in the cloud. Here are a few things to consider.

In case you hadn’t heard, the Internet of Things (IoT) is hot. Computing is moving beyond the desktop/mobile client paradigm and will soon inhabit everyday physical objects.

The IoT allows physical devices to collect data from their environment and transfer it over a network to another device, or to a central location for processing. As an example, I wrote recently about how my Honeywell thermostat is “smart,” meaning Wi-Fi-enabled and controllable via my Amazon Echo and the thermostat’s IFTTT connection.

The future is clear: Gartner estimates more than 21 billion connected devices by 2020, up from 4.9 billion today. By way of comparison, 2020 will see around 6 billion smartphones in use across the globe. The easy observation here is that it's a ton of devices, all interacting with their environment and all performing some kind of computing. This raises the question: Where will all that computing be performed?

The big cloud providers expect that much of it will happen in their data centers. Both Amazon Web Services (AWS) and Microsoft provide IoT service bundles that include device libraries, network communication, event capture, computing hand-off, and data storage—all integrated and designed to work together and reduce the burden of creating an IoT offering from scratch.

This perspective has a lot to recommend it. As my friend and cloud pundit David Linthicum points out, IoT and cloud computing are natural complements. The scale of computing required to power billions of devices is huge, and the cloud providers are capable of supporting that scale.

On the other hand, there are those who believe IoT devices themselves will be the location of IoT computing—so-called computing at the "edge" of the network. Peter Levine, a partner at venture capital firm Andreessen Horowitz, argues that cloud computing is dead because computing will migrate to smart edge devices. Cisco uses the same rationale to argue that so-called fog computing will extend cloud computing. AWS, hedging its bets in the debate, just announced Greengrass, an offering that puts its Lambda computing framework onto individual devices.

If you're devising an IoT ecosystem, you're probably wondering which alternative will emerge victorious. The answer is both. There will be millions of different IoT devices, and each device will allocate computing resources differently. Here are a few factors to consider.

How much functionality will you need?

While all IoT devices interact with their environment, these engagements vary widely by type and scale. My thermostat simply monitors the current temperature and turns a furnace on and off to bring the temperature into line with the desired setting. Its IoT smarts are limited to enabling remote access to modify the setting and the thermostat schedule.
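That control logic is simple enough to sketch. Here's a minimal, hypothetical version of the on/off decision (the names and thresholds are mine, not Honeywell's), using a small hysteresis band so the furnace doesn't rapidly cycle on and off around the setpoint:

```python
# Hypothetical thermostat control loop: turn the furnace on when the room
# drops below the setpoint minus a hysteresis band, off once it overshoots.
# All names and numbers are illustrative, not any vendor's actual firmware.

HYSTERESIS = 0.5  # degrees of slack to avoid rapid on/off cycling

def furnace_command(current_temp: float, setpoint: float, furnace_on: bool) -> bool:
    """Return the new furnace state given the latest temperature reading."""
    if current_temp < setpoint - HYSTERESIS:
        return True           # too cold: turn (or keep) the furnace on
    if current_temp > setpoint + HYSTERESIS:
        return False          # warm enough: shut the furnace off
    return furnace_on         # inside the band: hold the current state
```

A loop this small demands almost no on-device compute, which is exactly why a thermostat can push its "smarts" to the cloud.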

At the other end of the spectrum, we find devices that require significant computing capacity because their interactions with the environment are highly complex. As an example, self-driving cars need significant computing on-board to allow constant monitoring and reaction to environmental cues such as a pedestrian stepping off the curb or a car moving out of a driveway.

Other devices will be strung out all along that spectrum. Computing will reside both centrally and on the device. The key question will be where to place compute capacity to best serve the device’s specific functionality.

We're still too early in the IoT revolution to say how a fully mature IoT computing environment will look. In the years ahead, we'll develop new use cases, and lots of interesting designs will emerge as we find new ways to make dumb devices more useful with computing smarts.  

Why connectivity matters

One of the reasons so much computing power is placed in autonomous vehicles is that it’s critical they respond immediately to changing environmental cues. If a car needs to communicate back to a central cloud in order to respond to an unexpected pedestrian action, the latency involved can turn a common traffic situation into a potentially deadly incident.
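A quick back-of-envelope calculation shows why. The speeds and latencies below are illustrative assumptions, but the arithmetic makes the point:

```python
# Back-of-envelope: how far a vehicle travels while waiting on a network
# round trip. The speed and latency figures are illustrative assumptions.

def distance_traveled_m(speed_kmh: float, latency_ms: float) -> float:
    """Meters covered during one network round trip at a given speed."""
    speed_ms = speed_kmh * 1000 / 3600      # convert km/h to m/s
    return speed_ms * (latency_ms / 1000)   # meters covered during the delay
```

At 100 km/h, a 200 ms round trip to the cloud means the car covers more than five meters before any response arrives, which is more than enough distance for a pedestrian encounter to go wrong.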

Many IoT applications lack constant network connectivity altogether. I wrote recently about a monitoring solution that checks bridges for such issues as corrosion or element failure. The consequences of failure in the monitoring system can be catastrophic. Many bridges are in remote locations that lack connectivity. The solution to that challenge is to place data readers on trains that pass over the bridge. When the train crosses, the readers connect to the monitors and download the safety readings. This data is later uploaded from the train to a central processing location to track changes in bridge alignment.
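This is the classic store-and-forward pattern. A minimal sketch, assuming nothing more than a local queue that a passing collector drains (the class and field names are hypothetical):

```python
# Hypothetical store-and-forward buffer for an intermittently connected
# sensor: readings accumulate locally and are drained in order when a
# collector (e.g., a train-mounted reader) comes into range.
from collections import deque

class SensorBuffer:
    def __init__(self):
        self._pending = deque()

    def record(self, reading):
        """Store a reading locally; no network connectivity is assumed."""
        self._pending.append(reading)

    def drain(self):
        """Hand all buffered readings to a collector, oldest first."""
        batch = list(self._pending)
        self._pending.clear()
        return batch
```

The design choice worth noting: the sensor never decides when to transmit. Connectivity is an event that happens to it, so all it needs is enough local storage to survive the gap between trains.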

If network connectivity is poor, you need more processing power on the device to allow better and faster response to local conditions. By contrast, my thermostat has constant 100Mbit connectivity to its back-end application, so not much computing power needs to be placed on the device itself.

Store and restore capabilities

The reality of devices is that they break. Or are upgraded. Or replaced. If computing is placed only on the device, there's a problem: How does one ensure the new (or upgraded) device reflects the reality of the previous device’s state? And how do you port that previous state to the new device?

We’ve already faced this problem in mainstream computing, where it’s called backup and restore. The solution is to copy a computer’s data and place it elsewhere so that it can be used to reproduce a computer’s state in the event of a crash or disaster.

Likewise, every IoT device will need centralized computing resources to store and restore prior states—in this case, the environmental readings it carried at the time of interruption. Operating a device without state could be disastrous. Nobody wants a newly booted autonomous cargo ship to continue its voyage with no knowledge of its location, bearing, or speed.
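In practice, that means periodically checkpointing device state to a central store. Here's a hypothetical sketch, with JSON serialization and an in-memory dictionary standing in for a real cloud backup service:

```python
# Hypothetical device-state checkpointing: serialize a device's last known
# state to a central store so a replacement or rebooted unit can resume
# where the old one left off. The in-memory dict stands in for cloud storage.
import json

central_store = {}  # stand-in for cloud-side storage, keyed by device ID

def checkpoint(device_id, state):
    """Push a snapshot of the device's current state to the central store."""
    central_store[device_id] = json.dumps(state)

def restore(device_id):
    """Rehydrate the last checkpointed state onto a new or rebooted device."""
    return json.loads(central_store[device_id])
```

Multiply this by billions of devices checkpointing continuously and the storage and compute demands described below become apparent.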

In short, every IoT device—no matter how remote, smart, or fast-responding—will need centralized computing to provide backup and restore. While that might seem trivial, it’s important to keep in mind the scale of IoT: billions of devices, many of which will carry massive state information. This will require exabytes of storage and huge amounts of computing capacity to manage the state transfers, which means cloud computing will be a critical part of IoT, no matter how smart our devices become. 

Crunching your IoT data

Even more IoT value resides beyond the actions of the devices themselves. The data they throw off can be analyzed to optimize operations or prevent breakdowns. In the same article where I discussed bridge monitoring, I wrote about how Tetra Pak is using data from its packaging machines to schedule maintenance and prevent machine breakdowns.

Every device’s state, and changes in its state, can be captured and aggregated to perform analytics. Google applied artificial intelligence (AI) to its data center operations and reduced power consumption by 15 percent, a huge savings at Google's scale. 

Whether the analytics are directed toward fairly simple variance analysis (Tetra Pak) or sophisticated optimization (Google), you need centralized storage and computing power. In the case of AI, the computing power necessary is enormous. Every IoT provider will want to perform analytics to make its devices operate more effectively.
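A variance analysis of the simpler sort can be sketched in a few lines. The fleet data and the two-standard-deviation threshold here are illustrative assumptions, not Tetra Pak's actual method:

```python
# Hypothetical variance check over aggregated machine telemetry: flag any
# new reading that falls more than two standard deviations from the
# fleet's historical mean. Thresholds and data are illustrative.
import statistics

def flag_outliers(history, latest, z_threshold=2.0):
    """Return the subset of the latest readings that look anomalous."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return [x for x in latest if abs(x - mean) > z_threshold * stdev]
```

Even this toy version depends on centralized aggregation: a single machine can't compute a fleet-wide baseline from its own readings alone.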

The computing placement question

There isn't a simple either-or answer when it comes to IoT computing placement. Every IoT application will use both on-device and centralized computing. Every application will apportion overall computing according to the nature of the device and the service it performs.

It’s challenging even for experts in the field to comprehend the massive amounts of computing and data that the future IoT-saturated world will bring. There will certainly be enormous computing power at the network edge. However, I believe it will be dwarfed by the computing capacity at centralized locations, used to tie together the device mesh and analyze the deluge of data that it streams. 

The IoT explosion of data and the back-end computing power required to process it raise an important question for IT executives: where to run the IoT back-end computing? The primary choices are within the organization's own data center or with one of the large-scale cloud providers. Whatever option you choose, remember that given the scale and complexity of IoT applications, it's not easy to migrate them once they are up and running. It's important to recognize this stickiness and be willing to live with it for the long haul.

Here are some other factors to consider:

  • Using your own data center: There can be compelling security or regulatory reasons to keep certain IoT functions in house. Another reason that running IoT backends on premises might make sense is if the organization believes it has significant technical expertise or can operate infrastructure environments less expensively than the cloud providers. Critical to this choice is forecasting likely data and computing volumes, especially at peak times. Because of the highly erratic nature of many IoT applications, ensuring that sufficient computing power is available can be challenging but is necessary to ensure that no device interactions or data are lost.
  • Using a cloud provider: This approach will typically accelerate time to market, because the providers offer pre-integrated services that ease the creation of an end-to-end IoT offering. There are a lot of moving parts in an IoT application: typically a number of different software components, each of which must be installed, configured, updated, and managed. In addition, smooth data flow through the system requires integration between the components. Allowing the provider to take on this responsibility enables the IT organization to focus on the value-adding parts of the IoT application. Many IT organizations will find this approach attractive.

While it seems like the whole world is moving to the cloud for all IT applications, there are good reasons to choose the on-premises approach when it comes to IoT. Unlike previous computing paradigms, this mixed environment will not be about where generalized computing takes place. Edge computing in an IoT world will address device- and application-specific processing. General computing will still reside in a centralized environment.

IoT strategy: Lessons for leaders

  • When considering where computing should be placed, think of the location that best serves the IoT device’s specific functionality.
  • Every IoT device—no matter how remote, smart, or fast-responding—will need centralized computing to provide backup and restore. This means cloud computing will be a critical part of IoT.
  • The data that IoT devices throw off can be analyzed to optimize operations or prevent malfunctions.
  • The choice of where to locate the back-end processing for IoT applications implies a trade-off between ease of implementation and user control. It's usually hard to reverse that initial deployment decision, so make your decision with full information and awareness of downstream implications.