Enterprise computing after Moore's Law

We'll require new tech paradigms to meet tomorrow's processing needs.

Data centers are the factories of the 21st century, processing the ever-expanding volumes of information that make the global economy go. Global data volumes are growing exponentially in scale and complexity: Every two years, we produce as much information as previously generated in all of human history, a trend that will only accelerate in coming years.

Across industry and academia, researchers are working on new technologies that can scale to handle the data processing demands of the future in an energy-efficient manner. That's important because computing is becoming a major global energy hog, with the public cloud now using more energy than the global airline industry. In fact, if the cloud were an independent country, it would rank No. 5 in energy use, behind China, the United States, Russia, and India.

By 2030, communications technology could consume up to 51 percent of world electricity supplies and produce as much as 23 percent of greenhouse gas emissions, according to a recent study in the journal Challenges. Clearly, this is not a sustainable growth trajectory.

Possible solutions include memory-driven, neuromorphic, photonic, and quantum computing. While many of these technologies are still years away from full realization, we expect to see hybrid systems that incorporate aspects of them in the near future.

These computing platforms will handle data sets and analytics barely imaginable today. As the rate of business and technological change accelerates, companies will need to trim peripheral activities ruthlessly in order to focus on the partnerships, alliances, and acquisitions that create real value.

Moore's Law limitations

In 1965, Gordon Moore observed that the number of transistors that could be packed onto an integrated circuit had roughly doubled every year since the technology first emerged in the late 1950s, and he predicted the trend would continue for the foreseeable future.

His prediction, known as Moore’s Law, has largely proved correct over the past five decades. Steady, exponential improvement in the power and performance of integrated circuits has enabled many technological and social trends that we associate with modernity, from digital imaging to smartphones, the social media revolution, the information economy, and the Internet of Things.

But that run will soon end. Why? As computers become faster and more powerful, they consume more energy and generate more heat. With billions of transistors now crammed onto each chip, heat and energy efficiency have become massive stumbling blocks.

To fit more transistors on a chip, chip makers are designing at the nanoscale. Today's leading-edge transistors are about 14 nanometers across (a human hair is about 75,000 nanometers thick). Those transistors will continue to shrink for a while longer, with Intel planning to introduce 10-nm technology in July 2017. But scientists believe Moore’s Law will reach its limits at 5 nm. In a recent Nature article, M. Mitchell Waldrop notes that "at that scale, electron behavior will be governed by quantum uncertainties that will make transistors hopelessly unreliable."

Energy hogs

Today's data centers emit only about 1 percent of U.S. greenhouse gases. That’s not much compared with buildings and power plants, but data centers are one of the fastest-growing sources of carbon emissions. Experts predict emissions from data centers will rise 71 percent this decade, compared with only a 6 percent increase for all other sources of greenhouse gases.

Traditional computer architecture is also reaching its limits. Current linear architectures weren’t designed to extract insights from massive data sets in real time. Scientists are now working on a variety of new computing paradigms that will be far more powerful and consume far less energy.

"Future improvement will come from new materials, structures, and architectures."

Stan Williams, senior fellow at Hewlett Packard Labs

Options abound, from advanced materials such as magnetic charge ice and carbon nanotubes to quantum systems and neural chips. The race is on to see which approach will emerge victorious. In our view, the likely winners will be heterogeneous architectures that combine elements from multiple paradigms.

Universal memory

One promising technology is memory-driven computing, which collapses the memory hierarchy of classical computers into a single layer of so-called universal memory. This single layer is persistent, meaning it can retain data without power. Memory-driven computing is also more efficient than classical computing because the persistent memory sits as close to the processor as possible, so data does not have to shuttle back and forth between layers of storage and then to the processor. That eliminates the latency involved in retrieving data from storage.

Since 2008, researchers at Hewlett Packard Labs have been developing a new type of memory device called the memristor. In traditional volatile memory technologies such as DRAM, data is lost when the power is turned off. In contrast, a memristor avoids data loss by “remembering” the state it was in before the power was turned off.

Memristors belong to an emerging memory category called non-volatile memory (NVM). They can be made very small, enabling tremendous memory capacity in a very small space. They can be used for not only memory but also computation, using fewer components and less energy than other kinds of chips. Using memristors to fuse memory and storage, creating “universal memory” and flattening the data hierarchy, leads to massive performance and efficiency gains versus traditional computing models.
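
To see why flattening the hierarchy matters, consider the deliberately simplified Python sketch below. The latency figures are illustrative placeholders, not measurements of any real system: in a tiered design, data must first be pulled from storage and staged in DRAM before the processor can work on it, while in a memory-driven design the processor addresses one persistent pool directly.

    # Toy model of why flattening the memory hierarchy helps.
    # The latency numbers below are illustrative placeholders, not benchmarks.
    TIERED_NS = {"disk_read": 100_000, "copy_to_dram": 10_000, "dram_access": 100}
    UNIVERSAL_MEMORY_NS = {"direct_access": 300}  # hypothetical persistent memory

    def tiered_access(n_accesses):
        # Classical model: fetch from storage, stage in DRAM, then access repeatedly
        return (TIERED_NS["disk_read"] + TIERED_NS["copy_to_dram"]
                + n_accesses * TIERED_NS["dram_access"])

    def memory_driven_access(n_accesses):
        # Memory-driven model: data lives in one persistent layer, accessed in place
        return n_accesses * UNIVERSAL_MEMORY_NS["direct_access"]

    for n in (10, 1_000):
        print(f"{n:>6} accesses: tiered={tiered_access(n):>10} ns, "
              f"flat={memory_driven_access(n):>10} ns")

Even with made-up numbers, the point is structural: the fixed cost of shuttling data between tiers disappears when there is only one tier.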

According to a recent report from the Computer Society of the Institute of Electrical and Electronics Engineers, “As we become exponentially more connected, people need and use more and more memory. In 2016, huge strides will be made in the development of new forms of non-volatile memory, which promises to let a hungry world store more data at less cost, using significantly less power. This will literally change the landscape of computing, allowing smaller devices to store more data and large devices to store huge amounts of information.” 

Composable infrastructure

Memory-driven computing will give enterprises options that were not remotely possible just a decade ago. To make full use of this technology, infrastructure will have to become more agile. Enter composable infrastructure. Think of it as a software-defined data center that automatically provisions itself to support any workload. Unlike traditional infrastructure, composable infrastructure scales easily to meet peak demand—whether you are a utility grappling with a power outage or a retailer dealing with the holiday rush.
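
As a loose illustration of the provisioning idea, the Python sketch below composes compute, storage, and network capacity from shared pools for a peak workload and releases it afterward. The resource names and methods are hypothetical, not any vendor's actual interface.

    # Hypothetical sketch of composable provisioning: resources are drawn from
    # shared pools to match a workload, then returned when demand drops.
    from dataclasses import dataclass

    @dataclass
    class ResourcePool:
        cores: int
        storage_tb: int
        bandwidth_gbps: int

        def compose(self, cores, storage_tb, bandwidth_gbps):
            # Carve out just what the workload needs from the shared pool
            assert (cores <= self.cores and storage_tb <= self.storage_tb
                    and bandwidth_gbps <= self.bandwidth_gbps), "pool exhausted"
            self.cores -= cores
            self.storage_tb -= storage_tb
            self.bandwidth_gbps -= bandwidth_gbps
            return {"cores": cores, "storage_tb": storage_tb,
                    "bandwidth_gbps": bandwidth_gbps}

        def release(self, allocation):
            # Return capacity to the pool once the peak has passed
            self.cores += allocation["cores"]
            self.storage_tb += allocation["storage_tb"]
            self.bandwidth_gbps += allocation["bandwidth_gbps"]

    pool = ResourcePool(cores=1024, storage_tb=500, bandwidth_gbps=400)
    holiday_rush = pool.compose(cores=512, storage_tb=100, bandwidth_gbps=200)
    pool.release(holiday_rush)  # scale back down after peak demand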

The cost savings can be significant because composable infrastructure minimizes the underutilization that is endemic in traditional enterprise systems. In traditional data centers, computing, storage, and networking run on different platforms, creating islands of underutilized resources. Management tools do not usually cross those divides, which produces additional management silos.

Converged and hyperconverged infrastructure approaches have merit, but they fall short of the ultimate goal: a single platform with a single operational model for all workloads. To make this a reality, the platform must have hardware that can support a broad range of physical, virtual, and “containerized” workloads and be configurable through a software-defined approach that matches the needs of a given application or workload.

Quantum logic

In their quest to push the boundaries of today’s computing architecture, researchers around the world are hard at work on computing technologies that apply the principles of quantum physics to solve problems that elude today’s general-purpose computers.

Conventional computers require data to be encoded into binary digits, or bits, each of which is always in one of two definite states (0 or 1). Instead of the conventional bit as the basic unit of data, a quantum computer would use “qubits,” which can exist in more than one state at a time. Photons and other subatomic particles exhibit a property called quantum coherence, which allows them to interact with each other in a way similar to how waves interact. Two particles can also affect one another’s behavior through a process known as entanglement, even if they aren’t physically connected.

Reflecting the inherent uncertainty of quantum mechanics, quantum computers calculate probabilities rather than definite answers. So where a conventional computer would use binary processing to calculate 1+1=2, a quantum computer would find a probability of 99.9999 percent that 1+1=2. In theory, a system of qubits could work on many computations at the same time, which would be very helpful for complex tasks such as encrypting and decrypting data.
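
For readers who want a feel for where those probabilities come from, the toy Python sketch below models a single qubit as a two-element vector of amplitudes, applies a standard Hadamard gate to put it into superposition, and computes the odds of measuring a 0 or a 1. It is a pencil-and-paper illustration of the math (with a made-up starting state), not how a real quantum computer is programmed.

    # Toy illustration: a single qubit is a 2-element vector of amplitudes,
    # and the probability of measuring 0 or 1 is the squared magnitude of each.
    import numpy as np

    # Hypothetical example state: an unequal superposition of |0> and |1>
    state = np.array([np.sqrt(0.8), np.sqrt(0.2)], dtype=complex)

    # Amplitudes must be normalized so the probabilities sum to 1
    assert np.isclose(np.sum(np.abs(state) ** 2), 1.0)

    # A Hadamard gate mixes the two basis states, creating superposition
    hadamard = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
    state = hadamard @ state

    probabilities = np.abs(state) ** 2
    print(f"P(measure 0) = {probabilities[0]:.4f}")
    print(f"P(measure 1) = {probabilities[1]:.4f}")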

Quantum computing research is in its infancy, and a working quantum computer is probably 10 to 15 years away, according to Hewlett Packard Labs estimates. However, the use of quantum techniques in specific circuitry and chips could happen much sooner (within the next decade) and be incorporated into “hybrid” heterogeneous architectures.

One potential downside to quantum computing is that it could threaten our current encryption systems. Conventional computers can take years to break online encryption, but a quantum computer could do it far more rapidly. This means a quantum computer could make our security systems obsolete, opening our infrastructure and financial systems to terrorists and cyberpirates. Not surprisingly, one of the documents leaked by U.S. government contractor Edward Snowden revealed that the NSA had launched an $80 million project focused solely on quantum computing.

Build a brain

In many ways, the human brain is an ideal computer. It’s small and efficient, and can process many types of inputs almost instantly. Computers have been compared to brains since the dawn of computer science. More recently, researchers have started to design computers that are modeled on how the brain works, a field known as neuromorphic computing.

Brains can’t compete with computers when it comes to arithmetical operations. Over time, programmers have reduced brain-intensive tasks like animating images and playing chess to a series of mathematical calculations. Yet computers still struggle with many cognitive functions that are easy for people. With some training, for example, computers can recognize and accurately label a cat and a stool. But they have difficulty narrating a video of a cat jumping over a stool.

Programmers struggle to write algorithms that enable computers to perceive things easily. In part, this is because the neurological processes behind perception aren’t entirely clear. The brain contains an estimated 86 billion cells, called neurons, interconnected through a dense web of junctions called synapses. Researchers have determined that the same groups of neurons and synapses that perform calculations are responsible for storing memories. They’re also capable of handling many disparate tasks in parallel.

Traditional computers don’t work the way neurons and synapses do. Computers and applications are typically designed to solve problems or execute tasks in a sequential manner, passing information back and forth between a processor and memory at each step.

One of the emerging tenets of neuromorphic computing is parallel processing, or dividing big problems into smaller tasks that can be executed simultaneously, like sheets of neurons firing at the same time in the brain. Parallel processing, which has been around for decades, has been used primarily for specialized, high-performance computing tasks. Neuromorphic research aims to bring parallelism into everyday computing to tackle big computing tasks faster.
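
As a simple illustration of the principle (an ordinary parallel program, not neuromorphic hardware), the Python sketch below splits one large calculation into chunks and runs them simultaneously using the standard concurrent.futures module.

    # Toy illustration of parallel processing: split one big job into
    # smaller chunks, execute them simultaneously, then combine the results.
    from concurrent.futures import ProcessPoolExecutor

    def partial_sum(chunk):
        # Each worker handles one slice of the problem independently
        return sum(x * x for x in chunk)

    def parallel_sum_of_squares(n, workers=4):
        numbers = range(n)
        chunk_size = n // workers + 1
        chunks = [list(numbers[i:i + chunk_size]) for i in range(0, n, chunk_size)]
        with ProcessPoolExecutor(max_workers=workers) as pool:
            # The chunks run in parallel, loosely analogous to many groups
            # of neurons firing at the same time.
            return sum(pool.map(partial_sum, chunks))

    if __name__ == "__main__":
        print(parallel_sum_of_squares(1_000_000))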

Computers that follow neuromorphic principles to read social cues or extract meaning from images could provide real-time commentary on news events. They could also help diagnose elusive medical conditions earlier and detect sophisticated computer security attacks within seconds. Practice makes perfect: The more often these computers perform a task, the faster and more accurate they become—without requiring any human intervention.

Imagine a robot that can interact with its environment. Instead of intensive programming to cover every scenario the robot might encounter in a home or work environment, the robot could “learn” in much the same way a child would learn—from which toys belong in which storage bin to how to use a vacuum cleaner correctly. Neuromorphic computing will blur the lines between human and artificial intelligence, putting humans and machines side by side in a host of new situations.

3D technologies

We probably won’t see true quantum and neuromorphic computers for a decade or more. In the meantime, a new chip architecture called 3D computing might act as a bridge technology. In one approach, researchers use carbon nanotubes to stack memory and processor layers on the same chip in three dimensions.

This stacking vastly reduces the distance data must travel compared with conventional circuits, making chips faster and more efficient. One 3D memory solution is the 3D NAND technology pioneered by Samsung; Micron and SK Hynix are expected to start shipping 3D NAND in 2016, with Toshiba and SanDisk following in 2017.

3D NAND is the incumbent that next-generation non-volatile memory technologies hope to dethrone once their cost and performance barriers are overcome.

Thinking with light

As much as 40 percent of the electricity that data centers consume is spent moving information around inside them. A more efficient alternative could halve the amount of energy these transmissions use, the equivalent of decommissioning as many as 250 large power plants.

Photonic data transmission has emerged as an alternative to conventional electronic transmission. In silicon photonics, tiny optical components send light pulses to transfer large data volumes at high speed between computer chips. Using light allows for greater data rates and bandwidth for data-intensive applications.

Recently, scientists at Hewlett Packard Labs developed a way to repurpose machinery used in semiconductor production to manufacture bundles of fast lasers—each about one-tenth the size of a human hair—and components necessary to make them transmit data. This breakthrough will make it cost-effective for computers to send vast quantities of information over short distances using photons, or light.

What if you could use light for computation as well as data transmission? Optical processing technology uses light instead of voltage to open and close the gates of a transistor. The intensity of incoming light affects an optical transistor in the same way that the amount of voltage applied affects an electronic transistor, allowing it to process the ones and zeros of binary code.

So far, optical processors have lagged behind their silicon kin for many reasons, including cost and accuracy. Broad adoption of photonic computation and data transmission could help stabilize power consumption in data centers, enabling the IT sector to continue growing without triggering a full-blown energy crisis.

Questions for enterprise leaders

  • How will your organization scale its computing capacity to manage exponentially growing data volumes?
  • How might your business benefit from a digital infrastructure built around essentially unlimited memory?
  • What is your company’s growth engine? What components of your business could you afford to lose?

William Harless and Kristine Blenkhorn contributed reporting to this article.