Tuesday, April 17, 2018

Compuware continues to lead in Agile DevOps for the mainframe

By Rich Ptak

Image courtesy of Compuware, Inc.

Compuware continues to add to and extend its mainframe solutions as it advances its campaign to mainstream the mainframe. This time, it delivers two major innovations that help customers preserve, advance and protect their mainframe investments.

Before we get into the innovations, we want to mention Electric Cloud, a new partner that proactively integrated its service through the Compuware open API. This is the latest example of Compuware’s open-borders approach: integrating with a variety of solutions to help customers build out their DevOps toolchains.

Now, onto the announcements. First, a new product, Compuware zAdviser, leverages machine learning and intelligent analysis for continuous mainframe DevOps improvement. It provides development managers with multi-level analysis of tool usage and performance data, focusing on the critical DevOps KPIs (key performance indicators) of application quality, development team efficiency and velocity. All are also key to agile development. Even better, the product is free to Compuware customers.

Second is a new GUI for Compuware’s ThruPut Manager, which provides intuitive, actionable insight into how batch jobs are being initiated and executed, as well as their impact on cost. Users can leverage graphical visualizations of batch jobs that are waiting to execute and see when they might run. In-depth detail on why a job has been waiting is also easily obtained.

zAdviser + KPIs + Measurement = Success
Mainframe KPIs are a must if organizations want to compete successfully in the digital age. After all, you can’t improve what you can’t measure, and if you’re not continuously improving, you are wasting your time and, worse, your customers’ time. Teams must also be able to prioritize and measure the KPIs that will directly impact development and business outcomes.

A Forrester Consulting study conducted on behalf of Compuware found that over 70% of responding firms had critical customer-facing services reliant on mainframe operations. Providing the customer with an exceptional experience, not simply good, clean code, has become the new measure of operational success.

The same study found that enterprises are doing a good job of tracking application quality but are considerably less attentive to efficiency and velocity. Yet to modernize their application development strategies and keep pace with changing market conditions, firms must place as much focus on velocity and efficiency as they do on quality.

Compuware zAdviser uses machine learning to identify patterns that impact the quality, velocity and efficiency of mainframe development by exposing correlations between a customer’s Compuware product usage and the KPIs. Equipped with empirical data, IT leadership can identify which tool capabilities developers can exploit to become better developers. With machine learning, the days of simply beating the drum to go faster are long gone.

ThruPut Manager: Visualization for Batch Execution
Compuware’s ThruPut Manager brought automated optimization to batch processing. It makes resource-allocation decisions by balancing the needs of multiple interested parties, weighing cost-benefit tradeoffs such as risking SLA (service level agreement) violations of timely service delivery against a costly increase in software MLC (monthly license charge) costs.

Compuware reports that batch processing jobs account for about 50% of mainframe workloads!

Today’s complex environments compound the problem with a bewildering number of choices, combinations and alternatives to consider in making these decisions. The amount of data, competing interests and number of options mean it takes years of experience to achieve even reasonable competence at this task. Further, a shortage of such seasoned staff means that these operationally critical decisions are increasingly left to new-to-the-mainframe staff lacking that experience.

ThruPut Manager’s new web interface provides operations staff with a clear, visual representation of the cost/benefit tradeoffs as they work to optimize workload timing and resource performance.

In combination with Compuware Strobe, ops staff can more easily identify potential issues. They can manage and balance competing metrics relating to cost, resource allocation, service policies and customer interests to make the best decisions for optimizing the workloads, as well as application performance.

A big part of ThruPut Manager’s advantage is the multiple drill-down views it provides. Starting with an overview, which displays data about the General Services and Production Services queues, users can drill down to a detailed view of specific job data and job history, as well as where work is getting selected. The GUI also collects and displays the R4HA (rolling four-hour average) information for the last eight hours. And, if the Automated Capacity Management feature is constraining less important workload to mitigate the R4HA, this is displayed on the graph.

The Final Word
Mainframe workloads continue to increase even as experts steadily leave the workforce and responsibilities shift to mainframe-inexperienced staff. Organizations must constantly work to modernize mainframe environments and remove impediments to innovation to not only increase their business agility, but also attract a new generation of staff to the platform.

Compuware zAdviser provides concrete data that lets mainframe staff link KPI measurements to the results of actions taken to improve performance. DevOps management and staff have access to intelligible, visual information detailing the impact of those changes.

Compuware ThruPut Manager provides much-needed clarity and insight to fine-tune batch execution for optimal value, easing budget stresses while fulfilling business imperatives.

These products provide strong evidence of Compuware’s ability to create innovative ways to identify and resolve challenges in mainframe development, management and operations that have long been barriers to its wider use. The entire team deserves a salute for their 14th consecutive quarter of very agile delivery of solutions that are driving the mainframe ever further into the mainstream of 21st century computing. Congratulations once again on your efforts.

Tuesday, April 10, 2018

IBM Z Systems – for enterprises of all sizes

By Rich Ptak

Picture courtesy of IBM, Inc.

When Ross Mauri, General Manager IBM Z, briefed us on their newest offering, he quoted Steve Jobs, “You’ve got to start with the customer experience and work back toward the technology – not the other way around.” Not bad advice.

We long ago learned that selling IT (both products and services) on the basis of technological “speeds ‘n feeds” was a non-starter for many buyers. We found success by listening to clients to understand what they were trying to achieve, then identifying what they needed to succeed. It is apparent that IBM is listening and agrees.

Announced were two new additions to the IBM Z® Family. First is the IBM z14™ ZR1, built to enhance trust in a highly secure cloud. Next is the IBM LinuxONE™ Rockhopper II which offers flexibility and speedy scaling to allow scale-up growth.

Prior to hearing any details on these new systems, we had a number of informal discussions with mainframe users attending Think 2018. Here is what they were hoping to hear from IBM about mainframes:
  • Significantly increased processing power with multiple configuration options,
  • More flexibility and simplicity in system infrastructure configuration,
  • Standardization that allows semi-customized systems,
  • Expanded I/O capability,
  • Smaller overall footprint,
  • Pricing transparency,
  • App security.
No real surprises. With this list in mind, let’s examine the market issues IBM is addressing with the newest additions to the Z product family. 

Digital Transformation hits every data center
Digital transformation forced enterprises to confront a growing number of challenges: security threats, extreme spikes in workloads and more. The impact on data centers was significant, felt especially in the demand for strong, broad-based security; extensive, intelligent analytics; automated machine-learning capabilities; and open, connected, secure cloud services.

When IBM designed and introduced the Z family to address these challenges, they were primarily the concern of large-scale enterprises. Today, digital transformation continues to spread to the extent that these challenges are being experienced in enterprises and businesses of all sizes.

The new additions to the Z family are IBM’s response. While they share common family capabilities, such as pervasive encryption, Secure Service Containers, analytics, machine learning, etc., they also include extensive enhancements to address the most pressing customer and user needs.

New 19” Rack configuration
Design standardization in both the z14 ZR1 and the LinuxONE™ Rockhopper II, along with a smaller I/O configuration, means customers can choose server, switch and storage elements that fit their needs. For example, both fit in a standard 19” rack, leaving significant in-frame space (16U) available for other components. This gives maximum flexibility and scalability.

For system administrators, new mobile management software allows remote systems monitoring and management, including push notification of events, for more efficient operation.

For those worried about response times for I/O-sensitive workloads, the IBM zHyperLink Express offers a direct-connect, short-distance link between z14 servers and FICON storage. IBM has found that it can cut response times by up to 50%. OLTP workloads get much faster access to data, and batch processing windows shrink as Db2 index splits go faster. The result is increased customer satisfaction and lower operational costs.

For those concerned with extra security for software virtual appliances, there is IBM Secure Service Container. Available on both Z and LinuxONE, it is a Docker-based container capability that serves as a secure platform for building and delivering remote services. Both data and execution code are isolated and protected from threats, internal or external, malicious or inadvertent.

Speeds ‘n Feeds
This section is for the “speeds ‘n feeds” folks. Here are a few tech specs; you can find more from IBM. For the z14 ZR1, the number of processors (4, 12, 24 or 30) is fully configurable. The entry-level system provides a full 88 MIPS for capacity setting A01. RAIM memory runs from a minimum of 64 GB to a maximum of 8 TB. IBM expects the largest z14 ZR1 configuration to provide up to 13% more total z/OS capacity and up to 60% more Linux on Z capacity than the largest z13s.

For the LinuxONE Rockhopper II, the number of cores is also configurable (4, 12, 24 or 30).
The statistics go on and on. In short, these new systems were designed from the start to meet the demands and needs of real customers.

In Summary
IBM has indeed listened to its customers. Nearly every hot button item on the list we collected from clients has been addressed. Pricing was not mentioned in the session.

However, in a separate briefing just before the announcement, IBM indicated that the price points for these systems are set to keep current customers, as well as attract new clients and workloads to mainframe platforms.

Other items touched on during the announcement, and apparent at Think 2018, were aggressive efforts to add partners and alliances to the mainframe ecosystem. There is a much more visible focus on developers, with stronger DevOps products and API enhancements. Also apparent is the aggressive attention paid to enlarging the number of applications and open source solutions running on the mainframe.

So, it was no surprise when Mr. Mauri indicated that last year, IBM had the largest number of new-to-the-mainframe customers in a decade. He also indicated that he now has a dedicated sales force pursuing opportunities.

For our part, we are seeing a lot more interest in the mainframe. From pervasive encryption to the extensive efforts in mainframe education to the increasing success in promoting the mainframe as a mainstream solution, the number of mainframe believers appears to be growing. Congratulations to Ross Mauri, his whole team and partners on their success so far. And, be sure to check out these additions to the Z family.

Friday, April 6, 2018

Expanding access to Quantum Computing

By Rich Ptak

IBM Q at Thomas J. Watson Research Center
Photo by RLP
We’ve previously discussed IBM’s efforts and contributions to Quantum Computing[1]. These range from fundamental research to developing quantum computing science to providing services that will speed the transition from a theoretical science to a technology for problem solving. We described their efforts with partners to build a large, broad constituency of interested users and researchers connected via the IBM Q Network[2]. The Q Network allows participants to make collaborative arrangements that will speed the evolution of quantum computing technology. We recommend reading those articles.

In IBM’s view, technological advance occurs in stages. Starting with a theory, a technological explanation is developed and research is done to define a technology. This leads to an engineering understanding of how it might be applied. Next, the focus shifts to spreading knowledge of the technology and growing development tools as potential users learn how and where to apply it.

Quantum computing is nearly there today. Research and engineering efforts continue at IBM and elsewhere to address issues involving the operating environment, data input/output, and qubit stability and reliability. The need now is to attract and develop potential users.

Achieving a commercially viable quantum technology requires quantum knowledgeable users able to identify and articulate problems, as well as create programs/algorithms. This is a task for educators, interested users, commercially-focused engineers, enterprises, and researchers. Therefore, efforts to increase quantum knowledge in a much wider, commercially savvy community have been expanded, e.g. the efforts mentioned above and described below.  

IBM is taking a leading role to address these and other issues. This includes addressing such issues as: How will quantum computers and classical computers work together? What types of problems are uniquely quantum friendly? Even identifying the right questions to ask remains an important open issue. 

With such issues in mind, we accepted an invitation to visit with members of IBM’s Quantum Computing team at IBM’s Thomas J Watson Research Center in Yorktown Heights, NY.
Here is what we took away from the visit.

Delivering public access to quantum computing

A major step to broaden quantum knowledge occurred in May 2016, when IBM provided free public access to the first implementation of wide-scale, cloud-based 5-qubit quantum computing. In March 2017, IBM Q Experience[3] upped the game with access to actual universal (general use) quantum computers.

Two years on, 80,000+ people have moved far beyond that initial Q-experience. A large constituency has access to not just 5-qubit but also 16-qubit quantum devices. IBM Q Network clients have access to a commercial 20-qubit system, and a breakthrough 50-qubit machine is planned to go on-line later this year. To date, users have created and run nearly 3 million applications, a strong indication that quantum computing has left the esoteric realm of pure science and theoretical physicists’ dreams.

Quantum computing has started down the path to become a commercially applicable compute technology. Early applications range from building molecular models that aid the mapping of quantum circuits to apps that analyze very large unstructured data sets (used for financial planning) to apps able to factor complex equations.

IBM Q Network[4] is designed to facilitate the creation of a global network of industries interested in applying quantum technology to problem solving. Currently, only Q Network clients have access to the 20-qubit devices. They can also access the 5- and 16-qubit devices of the IBM Q Experience. The community is in the early stages of understanding what kinds of problems can be solved, as well as how to formulate the “question” to be answered. Participants range from F500[5] and start-up companies to research labs and universities[6]. More details are available in the paper mentioned earlier.

Today, IBM’s Q Experience and Q Network allow users free use of both real 5- and 16-qubit computers and a 32-qubit quantum simulator to write programs with familiar development tools, including loading and accessing data using classical systems, e.g. the Quantum Composer GUI[7] and QISKit[8] on GitHub. IBM Q Network clients also have access to commercial 20-qubit systems and resources to explore practical applications in their industries.

On-site IBM researchers use Power Systems linked to IBM Q quantum computers. Q Experience users, as well as IBM Q Network clients, access the quantum systems and quantum simulators through a Power Systems-based cloud.

Combining classical computing with quantum dramatically extends computing capabilities. It will eventually lead to posing and solving completely new problem sets, including rapid evaluation of incredibly large data spaces to optimize financial trading, create new drugs and materials, or optimize energy production.

Quantum computing and classical computing

The issue is not classical versus quantum computing; it is determining how to most effectively exploit each architecture. Successful efforts working with real quantum machines lead to the conclusion that, for the foreseeable future, quantum computers leveraged in tandem with and complementary to classical computers are the most promising way forward. Classical computing executes with its logic gates; quantum computing uses quantum theory to manipulate qubits and logic.

Tests have been run comparing the problem-solving speed of quantum algorithms (on quantum devices) against classical algorithms (on classical machines). The results show that quantum devices don’t consistently complete more quickly or offer better solutions, or at least not enough to justify the additional effort required. This confirms the continuing value of classical computing, as well as the need for both approaches. Today, the challenge lies in identifying exactly which problems, or parts of a problem, can best be addressed by a quantum computing device.

Interestingly, doing those tests also helped to reveal ways to improve some algorithms to run even faster on classical computers.

Theory and logical quantum computer simulations provide some insight into problem formulation. However, the gap between what can be done with a logical qubit and with a real, live qubit is enormous. For example, a logical qubit can hold its states forever and be examined at leisure. In real life, a qubit has an accessible, informational life of microseconds, meaning only samples of output can be taken. Qubits are also error-prone, so algorithms are run repeatedly to correct for these errors. Research continues to identify ways to extend the life and stability of qubits.

Decades of classical computing solving all kinds of problems has given great insight into their operation. They use binary logic and mathematical concepts tied to the physical world. Physical models allowed logical processes to be replicated. The expected results could be predicted and checked. This made problem formulation, execution and answer checking relatively straightforward. The same level of knowledge doesn’t exist for quantum computing. And, given the actual physics of the quantum computer, it is extremely difficult to acquire.

Quantum should be best at solving problems involving large data arrays or many complex options. But the quantum world operates at the limits of our measurable knowledge and powers of observation, so identifying the specific characteristics of quantum-friendly problems remains an on-going challenge. These include determining the best way to articulate problems, and even deciding which problem, or pieces of a problem, to run on a quantum computer. Much remains to be learned about composing algorithms and verifying solutions.

What is different about quantum computing?

Quantum computing is superficially similar to, but fundamentally different from, classical computing. Both quantum and classical computers use algorithms to solve problems, although the actual algorithms differ because of unique execution techniques. Both are programmed with gates and transforms, but quantum computing manipulates objects at the quantum level. The laws of quantum physics that govern its operations are quite different, as are the conditions under which it works. IBM Q requires temperatures near absolute zero (-273°C), colder than what exists in space. (Although this may change.) It operates in ways not fully observable, or currently even directly measurable.

Qubits are superficially like bits. But where bits hold only one of two states (0 or 1), qubits hold a superposition of 0 and 1 in any combination (e.g. 20% 0, 80% 1). The amount of information a qubit carries accounts for its great potential. Qubits are also shorter-lived, sensitive (collapsing to bits if touched by minimal external energy), error-prone, etc. These issues are gradually being resolved.
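A toy sketch can make the "20% 0, 80% 1" idea concrete. The snippet below (a plain numpy illustration, not IBM's software) represents a qubit as two amplitudes and samples measurements, showing that the fraction of 1s tracks the squared amplitude:

```python
import numpy as np

# A qubit's state is a pair of amplitudes (alpha, beta) with
# |alpha|^2 + |beta|^2 = 1. Measurement collapses it: the result is
# 0 with probability |alpha|^2 and 1 with probability |beta|^2.
alpha, beta = np.sqrt(0.2), np.sqrt(0.8)   # the "20% 0, 80% 1" example
probs = [abs(alpha) ** 2, abs(beta) ** 2]

rng = np.random.default_rng(1)
samples = rng.choice([0, 1], size=10_000, p=probs)
print(samples.mean())   # close to 0.8: the fraction of 1s tracks |beta|^2
```

Each single measurement still yields only a plain bit; the richness of the superposition shows up only in the statistics over many runs, which is one reason real quantum algorithms are executed repeatedly.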

Figure 1: Entanglement. © HowStuffWorks
What is entanglement? Two particles (photons, qubits, etc.) interact and retain a relationship that is neither a physical connection nor a controllable exchange of any sort. In Figure 1, particle (A) is entangled with particle (B). Once entangled, any change in the superposition state of one of the pair will correlate with a change in the superposition state of the other.

So, observing entangled particle (A) changes the state of its superposition. Near instantaneously, the state of the second particle (B) changes in a correlated but opposite way, like a mirror image of (A). This occurs without any stimulation of (B), and without any connection or exchange of any kind. The correlation appears even when significantly large distances separate the two particles, seemingly faster than light could carry a signal.

The change in the state of (B) is predictable but opposite (complementary) to the change in the state of (A). Entanglement simply (or not so simply) means that the superpositions of two entangled particles change in an observable, complementary way with no physical contact or connection. Thus, the state change is “correlated,” not “causal.” The results of stimulating (A) are detectable by comparing states afterwards. The change is the result of random movement in both particles, but only the overall outcome is observable.

Entanglement of particle superpositions is unique to quantum computing; neither classical computing nor classical physics has anything like it. It is the basis of much of the power and promise of quantum computing.
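The "mirror image" correlation described above can be sketched numerically. This toy numpy simulation (an illustration only, not IBM's tooling) prepares the singlet Bell state and samples joint measurements; the two bits always come out opposite, even though nothing passes between them:

```python
import numpy as np

# Singlet Bell state (|01> - |10>) / sqrt(2), written as four amplitudes
# over the basis states |00>, |01>, |10>, |11>.
state = np.array([0, 1, -1, 0]) / np.sqrt(2)
probs = np.abs(state) ** 2          # Born rule: P(outcome) = |amplitude|^2

rng = np.random.default_rng(0)
for outcome in rng.choice(4, size=1000, p=probs):
    a, b = (outcome >> 1) & 1, outcome & 1   # qubit A's bit, qubit B's bit
    assert a != b                            # always opposite: the mirror image
```

Note what the sketch also shows: each individual outcome is random (sometimes A reads 0, sometimes 1), so the correlation cannot be used to send a message, which is why the effect is "correlated, not causal."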

Decades-long IBM efforts have led to major contributions to quantum computing science. More recently, IBM has concentrated on moving quantum computing from a science to a technology ready for experiments in application. It supports widespread education in quantum, recognizing that progress to commercialization occurs as a series of step-ups in knowledge and capability, not a leapfrog.

Classical computing is based on well-understood models of logic and mathematics. It is how we think and analyze. Experience and detailed models allow us to predict outcomes and measure results against expectations. We know how to articulate problems and structure algorithms with precision. Not so for quantum, where we are just learning how to do all that in quantum terms.

It is critical to begin engaging with this new evolution in computing technology: not to become experts in its theoretical aspects, but to understand the change in thinking about how things operate and to discover how it might be useful. Operating in a quantum environment requires a unique, almost philosophical view of problems. There is no doubt that it has the potential to radically alter how problems are viewed, articulated and solved.

Quantum computing will have a major impact. It will require effort to learn quantum: how to think, communicate and frame questions, and then comprehend answers, in quantum terms. Q Network participants already include US high school students and their international equivalents. We expect that to spread.

It would be a mistake to ignore quantum computing today. Understanding this, IBM is working to advance public awareness and competency.

We found the Yorktown meeting to be very worthwhile. It was informative, challenging and rewarding. We left with a lot to think about. Our major takeaways are as follows:

1) For the foreseeable future, quantum and classical computers will operate side-by-side.
2) Classical computing and its techniques remain relevant.
3) Classical systems are not going to be obsolete any time soon.
4) Quantum computers are not poised to replace/enhance smart phones.
5) Quantum computing will radically change how we view and think about problems.
6) Very new and different types of problems will be identified and solved by quantum computing.

Once harnessed, quantum computing’s ability to analyze massive amounts of data in reasonable time to provide accurate, actionable insights can benefit many areas. It will improve forecasting and allow ‘what-if’ analysis of incredible variety and complexity. The impact will be felt in shaping and developing strategies for everything from financial trading to inventory management. It will benefit research to improve energy discovery and use. It will drive innovations in metallurgy, medicine, forecasting, machine learning, traffic control, and much more.

Finally, we don’t expect a general-purpose quantum computing laptop soon. We do expect quantum computing to become commercially viable within two decades. IBM researchers believe quantum must provide an exponential speed-up, i.e. 2ⁿ (where n is the number of qubits), before it will be widely used. They expect that could happen in as little as 5 or as many as 20 years. We look forward to it.
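The 2ⁿ figure reflects the size of the quantum state space: n qubits together describe a superposition over all 2ⁿ basis states, so simulating them classically means tracking 2ⁿ complex amplitudes. A rough sketch of that scaling (our shorthand, not IBM's precise criterion):

\[
|\psi\rangle \;=\; \sum_{k=0}^{2^{n}-1} c_k\,|k\rangle,
\qquad \sum_{k} |c_k|^{2} = 1
\]

At n = 50, the size of the machine planned to go on-line, that is 2⁵⁰ ≈ 10¹⁵ amplitudes, which is why even modest qubit counts can outrun classical simulation.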

[1] See “IBM Research on the road to commercial Quantum Computing“ at
[2] See “IBM Q Network – moving Quantum Computing from science to problem solver” at
[5] For example, JP Morgan Chase, Daimler, Samsung, Barclays, Hitachi Metals, Honda, Nagase.
[6] Keio University, Oak Ridge National Lab, University of Oxford, University of Melbourne.