
Thursday, May 4, 2017

The Machine = Memory-Driven Computing from HPE

By Bill Moran and Rich Ptak

Image courtesy of HPE, Inc.

HPE recently provided additional details[2] on “The Machine”, the new architecture it first announced in June 2014[1].

Computer industry observers and developers alike have many questions about this new architecture. This article describes some of “The Machine’s” key features, considers what the new system might mean for the industry, and offers recommendations on how IT departments should respond. We do not attempt a deep dive into the technology. First, a little history.


Von Neumann Architecture

HPE has been working on its new architecture for several years. Its goal has been to solve the problems posed by the end of the von Neumann computing era.

Von Neumann architectures are built around a central processing unit, or CPU. All data reaches the relatively fast CPU through much slower memory. The cost of memory, combined with ever-faster CPUs, meant systems used specialized I/O devices (tapes, disks, flash storage, and so on) to hold the data to be processed. Until very recently, it was prohibitively expensive to build memories large enough to hold all the data for processing.

In addition to being cheaper than memory, such I/O devices are non-volatile: they do not lose their data when the power is turned off, as memory does. Obviously, losing data along with power, whether intended (a shutdown) or unintended (a power or device failure), would be untenable for a working datacenter. Another challenge is the constant movement of data, which often comes from a variety of distributed I/O devices, moves to a central store, then into computer memory for processing, and finally back to storage or another I/O device.

Moore's Law

Moore’s Law explains why this architecture worked for the past half century. Technically, the law (more precisely, an observation) relates to the number of transistors on a chip, stating that the number doubles approximately every two years (a figure of 18 months is also often quoted). However, physical limits mean the law is breaking down; you cannot keep shrinking transistors indefinitely. With the law’s end, we can no longer expect microprocessors to drive performance improvements as they have in the past.
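Expressed as a simple formula (our paraphrase of the observation, not anything from HPE), transistor count grows exponentially:

    N(t) = N_0 \cdot 2^{(t - t_0)/T}, \qquad T \approx 2\ \text{years}

Here N_0 is the count in a reference year t_0; a decade of that compounding is five doublings, roughly a 32-fold increase. Once transistors can no longer shrink, the curve flattens and those routine gains stop with it.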

However, the need for processing power continues to grow, driven in part by the sheer amount of data to be processed. Other factors include the increasing number of devices, more complex data types, the Internet of Things, and an expanding world population. Studies report that the world’s data doubles every two years, with no sign of slowing down, and the demand to process that growing mass of data keeps rising with it.


The Machine’s Memory-Driven Architecture

As noted, the center of the von Neumann architecture is the CPU; large data sets must be split up to fit the relatively small amount of main memory tied to each processor. The much faster, cheaper memory now on the horizon allows a new architecture to emerge with memory at the center of the system. Because this new memory is non-volatile (NVM), data is preserved even when the power is off. All data is kept, and remains accessible, in greatly expanded memory.

HPE intends to use these emerging NVM technologies to form a new shared pool of “Fabric-Attached” memory, meaning that any processor can access any byte of data directly, without having to work through another processor. The fabric also lets processors of many types (x86, ARM, GPU, and others) communicate over the same interconnect, allowing Memory-Driven Computing to match each workload with the ideal processor architecture so tasks complete in the shortest possible time using the least energy. For distances beyond roughly a foot, photonic (optical) links allow physical components spread over a wide area to perform as if they were all located in the same rack. In addition to improving performance, this delivers breakthrough energy efficiency and design freedom.

This structure will have several additional immediate effects. First, it eliminates the need for I/O devices in normal processing, a major cost saving. Second, all data is accessible through normal CPU instructions, which simplifies application programming: there is no need to manage and access data on external devices. Finally, processing speeds increase dramatically because all data is immediately available for processing.
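HPE has not published final programming interfaces for The Machine, but the flavor of load/store access to a large persistent pool can be sketched with ordinary memory mapping. The snippet below is a rough, hypothetical illustration in Python; the file path and record layout are assumptions, not HPE specifics.

    import mmap

    # Hypothetical persistent-memory-backed file; path and layout are assumed.
    with open("/mnt/pmem/dataset.bin", "r+b") as f:
        buf = mmap.mmap(f.fileno(), 0)            # map the whole file into the address space

        # Data is now reached with ordinary loads and stores (byte indexing);
        # there are no seek()/read()/write() calls and no buffer management.
        record = bytes(buf[4096:4096 + 64])       # read a 64-byte record directly
        buf[0:8] = (12345).to_bytes(8, "little")  # update a header field in place

        buf.flush()                               # push the update toward persistent media
        buf.close()

On conventional hardware this still sits on top of a storage device; the point of The Machine is that the mapped bytes would live in the fabric-attached memory pool itself, so the same simple access pattern covers the entire data set.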


Migration Considerations

A major concern when introducing a new architecture is the difficulty of migrating applications to the new platform. Since most existing programs were created in a world of limited memory, they will not automatically take advantage of the new architecture; the algorithms they depend on will have to be rethought to exploit the much larger memories now available. The recent adoption of in-memory databases has given us a taste of this kind of effort.
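As a rough sketch of that kind of rethinking (our illustration, not HPE code), consider replacing a lookup routine that rescans a file on every query, a typical small-memory-era design, with one that loads the whole data set into an in-memory index once. The file name and two-column layout are assumptions for illustration.

    import csv

    def lookup_from_disk(path, key):
        """Small-memory-era pattern: rescan the file for every single query."""
        with open(path, newline="") as f:
            for row_key, value in csv.reader(f):
                if row_key == key:
                    return value
        return None

    def build_in_memory_index(path):
        """Large-memory pattern: build a hash map once and keep the data set resident."""
        with open(path, newline="") as f:
            return {row_key: value for row_key, value in csv.reader(f)}

    # Hypothetical usage, assuming "accounts.csv" holds (key, value) rows:
    # index = build_in_memory_index("accounts.csv")
    # value = index.get("customer-42")   # each lookup is now a constant-time memory access

The data-structure change is essentially what in-memory databases made routine; memory pools of the size HPE describes would extend the same pattern to far larger data sets.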

HPE is trying to make it as easy as possible to get started by supporting familiar programming languages and constructs. It has invested considerable resources in the new architecture, even creating an optimized version of Linux[3] that runs on a commercial System on a Chip (SoC) currently under development by a partner. HPE also offers an extensive program of support and information, including specialized software tools and services[4], to help developers and programmers understand the new environment.


Structure of the new Datacenter

When fully implemented, the new architecture will drive a revolution in datacenter design. Full adoption will take many (10-20?) years. For an extended period, applications designed for current technology will have to coexist with applications built for the new architecture. Ripping out all the current I/O devices is simply not possible, for reasons ranging from expense and risk to incompatibilities and poor or non-existent application documentation. Another factor is the time it takes to build confidence in a new architecture as developers and other staff acquire expertise and learn its idiosyncrasies.


Performance Considerations

As mentioned above, The Machine’s new architecture gives many existing applications the opportunity to be optimized for what is essentially unlimited memory. Note that the new architecture does not require changes to programs, but changes are needed to achieve improved performance.

HPE modified some example code and reported performance between 10X and 100X faster. These impressive numbers were achieved fairly easily by optimizing existing algorithms for the larger memory environment. More radical changes to the algorithms can achieve far greater speedups: a financial modeling example reported a speedup approaching 10,000X, but this involved completely redesigning the application.

The performance story has two sides. On the one hand, radical change can yield stunning results, as seen in the financial model example. On the other hand, merely moving an existing unmodified application to the new environment will not automatically deliver great or, indeed, any performance improvement.


Economic Considerations

A key factor in determining the speed of new technology adoption is cost. We lack sufficient data about HPE’s new technology to make any cost estimates; obviously, the cost of the new memory will be key. In the best case, the new technology would be price-competitive with, as well as performance-advantaged over, existing technology.


A Possible Action Plan

We think most large IT installations will benefit from devoting time and resources to studying HPE’s new architecture. Here is why. An IT department that wishes to keep its most productive people must demonstrate that it is forward-looking; no one wants to work in an environment where skills become obsolete because the organization fails to track new technology. This alone justifies some investment. A second, more compelling reason is the potential for immediate payback. Many installations run applications developed when small memories were common, and where a lack of memory is a major cause of slow performance. Evaluating those applications to assess how much they would benefit from access to large memories can yield significant insight into the potential investment payback.

Of course, these efforts must be closely monitored and controlled. The adage “If it ain’t broke, don’t fix it” still holds, and we are not recommending a major “rip and replace” mindset. On the other hand, there can be no progress without replacing old technology. HPE’s “The Machine” has the potential to deliver major performance improvements in today’s applications. Determining how much to invest requires a case-by-case evaluation; that effort can be fully justified.


Conclusions

We applaud HPE’s investments in this architecture. Wisely, they are cultivating a supportive ecosystem, with multiple efforts intended to facilitate access to The Machine and to educate and involve[5] the greater IT development community. Our conclusions are as follows:

1.    The Machine introduces Memory-Driven Computing, a radically new architecture optimized for data-intensive processing.
2.    It effectively addresses problems associated with the collapse of Moore’s Law.
3.    The architecture works and indications are that it can provide impressive (up to 10,000X) performance improvements for a certain spectrum of application types, when programmed appropriately. Modifying existing applications can yield more modest, but still substantial, 10X to 100X improvements.
4.    HPE has assembled an impressive array of tools, products and services to encourage IT staff, especially developers, to become familiar with the architecture through easy, inexpensive (sometimes free) access to the technology.
5.    We recommend looking at this technology; it could change your future!

To date, we have seen no other approach offering as comprehensive a solution to the “end of Moore’s Law” dilemma. It will be several years before a final judgement on this new architecture is possible. Today, we are optimistic about HPE’s prospects.
