
Wednesday, November 22, 2017

IBM Spectrum with Cluster Virtualization accelerates Cloud & Cognitive Computing

By Rich Ptak

(Image courtesy of IBM, Inc.) 
IBM made a big impact with announcements of pervasive encryption earlier this year. Now, with its most recent announcement, we predict they will do so again as they tackle one of the biggest frustrations and ongoing challenges to IT.

IT innovators and creators have struggled for decades to get maximum utilization and optimal performance from an evolving, disparate, complex, heterogeneous IT infrastructure. Complicating the challenge is a user base demanding simple access to the latest technological innovations. The upending of market and usage models only adds to the problem.

IBM’s latest announcement tackles the challenge head-on with an architecture and solution suite that applies cognitive computing and sophisticated software to deliver consistent, simplified access to automatically managed (and optimized) infrastructure. In its announcement, IBM provides the details of exactly how and what it delivers. We aren’t here to rehash that. Instead, we provide a selected product overview with comments on why we think this is a major advance for cloud and cognitive computing users and developers.

What’s the issue?

A perennial goal of IT operations has been to provide the best possible user experience in a cost-effective manner. Typically, operational metrics focused on the simplicity of the user interface, reliability, response times, etc. For infrastructure, the goals were optimal utilization, high performance, and reliable operations. Pursuing these goals has driven IT innovation and product development for decades. IT staff have tried a variety of approaches, e.g. languages (Fortran, COBOL, Java), operating systems (z/OS, UNIX, Linux, etc.), platforms, dedicated systems, GUIs, APIs, containers, server clusters, open systems, and clouds, with limited success.

Today, despite the effort, the goal remains the same. The compute environment is more complex than ever. Users are still frustrated. IT administrators, developers, and operations staff spend too much time configuring infrastructure and juggling complex, dynamic workloads as they try to meet SLAs and satisfy users. IBM Systems turned to cognitive computing and software-defined infrastructure to address the issues.

IBM Software Defined Infrastructure & Software

In a nutshell, IBM’s Software Defined Computing solution suite addresses the user interface, workload management, infrastructure management and solution development challenges. At the heart of the solution is Cluster Virtualization software, which virtualizes access to and exploitation of servers, storage, clusters, clouds, etc., or whatever constitutes the defined available infrastructure. IBM’s innovation is to buffer users, whether application developers or end users, from having to learn the intricacies of the supporting computing infrastructure. It offers the prospect of automatic infrastructure configuration and management, optimized to provide maximum infrastructure utilization and workload performance.

A virtualization software interface buffers users and IT operations from the complexity of the underlying infrastructure more effectively than past attempts. Users provide application requirements and parameters. Operations staff identify performance metrics, constraints, and requirements. The solution suites for each workload type (discussed below) manage and optimize infrastructure operations using cognitive computing, dynamically learning application and workload behaviors, infrastructure availability, performance, etc., measured against up to 20 different parameters, to manage workloads and configurations.


Figure 1 IBM Cluster Virtualization    (Courtesy of IBM, Inc.)


Cluster Virtualization Software allows users to transparently share clusters of computing resources. IBM Spectrum Computing and IBM Spectrum Storage underpin the specialized software suites. Figure 1 represents how all the pieces fit together to provide end-to-end management of user activities across multiple platforms, architectures and data center environments.

Cluster Virtualization

Cluster Virtualization allows many independent applications and workloads to make use of disparate resources residing in multiple, different clusters. The workloads can be a mixture of traditional apps, such as high-performance computing or compute-intensive analytics, and next-generation workloads leveraging Hadoop, Spark, containers, etc. A consistent interface gives users simplified access to and utilization of the total cluster infrastructure. The arrangement is highly scalable; apps and users can both run into the thousands. IBM reports the potential of running millions of jobs per day. IT operations staff benefit from cognitive computing services that automatically manage up to 20 different operational parameters to handle configuration, workload management (e.g. scheduling, assignment), infrastructure scaling (up and down), etc., optimizing resource utilization and performance.
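To make the placement idea concrete, here is a minimal, purely illustrative Python sketch of our own (not IBM code, and greatly simplified from the many parameters the product actually weighs): a job carrying its requirements is matched against whichever heterogeneous clusters currently have capacity.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Cluster:
    name: str
    arch: str          # e.g. "OpenPOWER", "x86", "ARM"
    total_cores: int
    used_cores: int = 0

    def free_cores(self) -> int:
        return self.total_cores - self.used_cores

@dataclass
class Job:
    name: str
    cores: int
    arch: str = "any"  # "any" means the workload is architecture-neutral

def place(job: Job, clusters: List[Cluster]) -> Optional[Cluster]:
    """Pick the least-loaded cluster that satisfies the job's requirements."""
    candidates = [c for c in clusters
                  if job.arch in ("any", c.arch) and c.free_cores() >= job.cores]
    if not candidates:
        return None                      # nothing fits: queue the job or scale out
    best = max(candidates, key=lambda c: c.free_cores())
    best.used_cores += job.cores         # reserve capacity on the chosen cluster
    return best

clusters = [Cluster("hpc-power", "OpenPOWER", 512),
            Cluster("analytics-x86", "x86", 256, used_cores=200)]
print(place(Job("spark-etl", cores=64), clusters).name)   # -> hpc-power
```

The real product makes this decision against many more dimensions (policies, priorities, data locality, SLAs), but the essential point stands: the submitter states requirements, and the virtualization layer decides where the work runs.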

The beauty of this design lies in its extremely flexible definition of clusters. It supports a broad range of complex, mixed and heterogeneous environments encompassing anywhere from a handful to thousands of systems or VMs. Supported system types include OpenPOWER, x86, ARM and SPARC, as well as multiple operating systems and environments, including LinuxONE, Docker containers, etc. The defined cluster can be on-premises systems or extend into public, private or hybrid clouds. Cluster virtualization can function across heterogeneous cloud environments that include IBM Cloud, IBM Cloud Private, AWS, etc.

Next Gen IBM Spectrum LSF Suites

IBM also announced significant enhancements to its IBM Spectrum LSF suites. These offer workload management options specifically targeted at the Enterprise, HPC and Workgroup segments, with increasing functionality at each level. Figure 2 shows how functionality and capability vary at each level.
Figure 2 New IBM Spectrum LSF suites  (Courtesy of IBM, Inc.)


Each level is designed to simplify user access and management through automated reconfiguration of resource access, rapid and flexible scaling, and improved resource utilization. All adjustments are controlled through defined policies and are automatically managed and administered.
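For readers unfamiliar with LSF, the user-facing side stays simple even when the back-end policies are elaborate: work is handed to the scheduler along with its resource requirements, and placement is left to the suite. The sketch below is a hedged illustration assuming a working LSF installation; it wraps the standard bsub submission command from Python. The queue name, memory value and application command are placeholders, and resource strings are site-specific.

```python
import subprocess

def submit_lsf_job(command: str, cores: int = 4, mem_mb: int = 4096,
                   queue: str = "normal") -> str:
    """Submit a job to IBM Spectrum LSF via the standard bsub CLI.

    The flags used here (-n, -q, -R, -o) are common bsub options; the queue
    name and resource values are illustrative and vary by site.
    """
    bsub_cmd = [
        "bsub",
        "-n", str(cores),                 # number of job slots requested
        "-q", queue,                      # target queue (site-defined)
        "-R", f"rusage[mem={mem_mb}]",    # memory reservation for the job
        "-o", "%J.out",                   # write output to <jobid>.out
        command,
    ]
    result = subprocess.run(bsub_cmd, capture_output=True, text=True, check=True)
    return result.stdout.strip()          # e.g. "Job <12345> is submitted to queue <normal>."

if __name__ == "__main__":
    print(submit_lsf_job("./run_simulation --steps 1000", cores=16, mem_mb=8192))
```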

IBM has introduced new pricing terms and models which appear very attractive. Your IBM rep can provide details.

IBM Spectrum Conductor

There is much more to IBM’s announcement, including updates and additions to IBM Spectrum Conductor, such as the Deep Learning Impact module. An extensive list of enhancements to speed processing was announced, including hyper-parameter search and optimization techniques, elastic resource allocation, and Spark-specific data management. Cluster virtualization and multitenancy for deep learning are just two of the techniques included to increase resource utilization.

This module is designed to extract usable business insights and value from data more dynamically, efficiently and rapidly, even as it simplifies installation, configuration, implementation, model building and analysis with pre-built frameworks. It provides shared multi-tenant and multi-service functionality that speeds up processing and increases infrastructure utilization. IBM Services and Support is available for the entire software stack, on both IBM Power Systems for HPC with the IBM PowerAI framework and x86 systems with open source frameworks. The software distribution packages contain all needed components, including all open source components. End-to-end workflow management operates automatically to improve operations over multiple cycles. Feedback so far is that the results are very effective in reaching efficiency, acceleration and savings goals.

Enhancements were also made to IBM Spectrum Scale to improve storage performance and operational efficiency. These include accelerated I/O performance, reduced latency between nodes, and better performance of metadata operations. From what we can tell, these all benefit from the cluster virtualization and contribute significantly to the overall performance improvements.

Conclusion


This announcement appears to provide significant evidence to justify a more detailed follow-up for any IT operation responsible for the economical support of a complex data center in a compute-intensive environment. To us, the effective implementation of Cluster Virtualization, with its potential to simply and economically leverage, exploit and scale heterogeneous compute clusters, is by itself a compelling reason for further exploration. We intend to follow developments in this area and look forward to hearing more from users. In the meantime, we highly recommend calling your IBM rep for additional information.


Wednesday, November 1, 2017

Busting Mainframe Myths - BMC’s 12th Annual Survey

By Bill Moran and Rich Ptak


BMC surprised us during the review of the results of their annual mainframe survey. Frankly, we were concerned it would be somewhat boring. After all, after 12 years of surveys, expectations were low for something new, much less exciting. The results, when presented, changed all that.

BMC began by listing 5 popular mainframe myths. For this paper, we’ve reordered and reworded the list slightly to make them more forceful. Here they are, with our comments in italics:

1)    The mainframe is in maintenance mode (i.e. an old, dead platform) that no one invests in anymore. Many in the industry believe this.
2)    Executives are planning to replace their mainframes. As the trade press (and some analysts) have been saying for years.
3)    Organizations have already fully optimized the mainframes for maximum availability. No surprise here. They have had a lifetime to do so.
4)    Only elderly, ready-to-retire Cobol types work on the mainframe today. Sun Micro at one point had a video that showed some of them.
5)    If any young professionals work on the mainframe, they cannot expect much of a career.
We admit that our list exaggerates a bit, but it does so to make a valid point. Many non-mainframe people believe item number 1 is undeniably true. That belief is the root of the remaining 4 points. Despite years of effort by IBM, BMC, Compuware, and others to update, improve and mainstream the mainframe, the perception persists.

This BMC survey provides a giant step toward finally putting these myths to rest.

Before presenting our conclusions and comments, some background. Survey details and logistics are covered in the Results e-Book[1]. The survey captures input from over 1,000 executives and professionals, all working with the mainframe, in shops ranging from large enterprises down to mid-range operations. Now, for the survey results as they expose the myths.

For myth #1, a full 91% of the respondents view the mainframe as a long-term, viable platform. 75% of respondents are using Java on the mainframe, indicating their companies have made the investment to hire or train people in Java. 42% identify application modernization as a priority, with the specific reason being to take advantage of new technology. These results provide convincing proof that customers are modernizing their mainframes. Far from being dead, mainframes are very active platforms. Myth #1 deposed.
On to myth #2. 47% of the executives interviewed state that the mainframe will grow and attract more workloads, 43% see it stabilizing, and only 9% say their organizations will replace the platform. Myth #2 destroyed.

On to myth #3. The claim is that mainframe users have already squeezed the last drop of availability out of the platform. Mainframes have always delivered very high levels of availability, yet a full 66% say business requirements continue to force a focus on further reducing maintenance windows. Simply said, they must increase platform availability. Myth #3 shattered.

Consider myth #4: mainframe users are mainly elderly, ready-to-retire types. This year, BMC added demographic questions to the survey. They found 53% of the respondents are under the age of 50 and only 4% are over 65. 20% are female, of whom the majority, 55%, are between 30 and 49. (Interesting side note: the latest figures say only 11% of those in STEM positions worldwide are women.) Myth #4 deflated.

Finally, myth #5: no career path for younger professionals. In actuality, a full 70% of the surveyed millennials (under age 30 with less than 10 years’ experience) are convinced that the mainframe will grow and attract new workloads industry-wide. 54% believe that the mainframe will grow within their own organization, a sure indication they see career opportunities with the mainframe. Myth #5 is laid to rest.

Logically, this survey should help kill off these common mainframe myths. Some people, though, will believe what they want to believe; others are vested in maintaining the myths. Typically, neither will let the facts alter their beliefs. We, however, want as many people as possible to be aware of these facts.

We encourage you to investigate BMC’s results for more information and insight. You will likely find the results to be interesting and, possibly, unexpected.

BMC announced these results on November first. For even more of the details and your own copy of the survey, go to BMC’s Mainframe Survey Resources web page here[2].   And, you can read more of our commentary on IT topics in our Tech Blogs[3]. We think you will find that the mainframe has a significant future!


  


ignio: Artificial Intelligence for IT Ops

By Bill Moran and Rich Ptak



Figure 1 Artificial Intelligence for IT Ops   Courtesy of Digitate
Indian multi-national Tata Consultancy Services (TCS) created Digitate in 2015 to develop and deliver products based on the ignio™ Cognitive Automation platform. Today (November 2017), these include ignio for IT Operations, ignio for Batch, and ignio for SAP ERP. We think these offer significant value and benefits to IT. Here’s why.


An IT dilemma

IT departments face a dilemma. Their budgets are under severe pressure to deliver more with fewer resources. Yet they must also manage and undergo a costly digital transformation that CEOs are relying on to deliver new business opportunities. The dilemma is sharpened, and risk increased, because many of IT’s best people are unavailable, tied up firefighting to maintain the SLAs that keep existing customers happy.

IT benefits greatly when those people and resources can be freed to focus on such challenges. This is where Digitate’s ignio products offer substantive assistance[1]. Over time, they “learn”[2] IT operations, allowing routine tasks to be automated and speeding problem detection and resolution. As its knowledge builds, ignio more fully automates problem “find and fix” activities; in the meantime, it greatly assists with problem resolution.

Determining problems in a complex environment is difficult and time-consuming. ignio can help but most IT shops will wisely choose to selectively implement the more advanced ignio capabilities. A careful plan, as we discuss later, will deliver many advantages by reducing risks and speeding the process.


ignio products

              
ignio for Batch and ignio for SAP ERP target the application areas their names identify; ignio for IT Operations is designed to deliver value across the whole range of data center operations. Each product can integrate with other installed monitors. Data sheets for each product are available on the Digitate web site[3]. Figure 2 shows the ignio platform architecture.

Figure 2 ignio Platform Architecture        Courtesy of Digitate    

Key to ignio’s value is the amount of out-of-the-box knowledge it has about the data center. It knows what a server is, what storage is, and has considerable knowledge about commonly installed operating systems. Inherent in ignio is more than 30 years of IT infrastructure experience, including common knowledge about data center operations and IT infrastructures.

The process by which ignio addresses IT challenges has been carefully designed. Through Blueprinting, ignio first learns the environment to identify what is there and to determine “normal” behavior. Once ignio knows what “normal” looks like, it can identify deviations. It then moves to analysis, determining the probable causes of the deviant behavior. Finally, ignio recommends fixes or, depending on installation parameters, applies them automatically.
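As a rough illustration of the learn-then-detect idea (our own simplified sketch, not Digitate’s algorithm), the Python below establishes a statistical baseline for a single metric and flags readings that stray too far from it:

```python
import statistics
from typing import List, Tuple

def learn_baseline(samples: List[float]) -> Tuple[float, float]:
    """'Learn' normal behavior for one metric from historical samples."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_deviation(value: float, baseline: Tuple[float, float],
                 threshold: float = 3.0) -> bool:
    """Flag a reading more than `threshold` standard deviations from normal."""
    mean, stdev = baseline
    return stdev > 0 and abs(value - mean) / stdev > threshold

# Hourly CPU utilization (%) observed while learning, then two new readings.
history = [22, 25, 19, 24, 27, 21, 23, 26, 20, 24]
baseline = learn_baseline(history)
print(is_deviation(23.0, baseline))   # False -> within normal range
print(is_deviation(91.0, baseline))   # True  -> candidate for analysis and repair
```

ignio’s models are of course far richer, spanning whole topologies of metrics rather than one, but the flow mirrors the description above: learn normal, watch for deviation, then analyze and fix.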

During operation, ignio products follow a continuous cycle of Learn, Resolve, Prevent. The result is that operational models and the knowledge base are continually updated to reflect changes in the environment and operations of the data center.

In addition to being able to “Resolve” issues in the data center, ignio can automate routine tasks that otherwise take a significant amount of time. IT resources are stretched in most companies; ignio can help address typical employee requests quickly while allowing IT to tackle other, more critical challenges.

In its “Prevent” phase, ignio uses the knowledge it has acquired about system operations to predict likely problems before they happen, as well as to model the effect of proposed system changes. Very significantly and attractively, we note that ignio does not use scripts, so staff do not have to deal with brittle scripts that are a nightmare to manage.

Suggested Action Plan

We recommend beginning with a study and evaluation of ignio. We found a wealth of helpful material on Digitate’s web site[4] for understanding Digitate’s product offerings and their potential application in the enterprise, and for deciding whether further investigation of ignio products is warranted.

After deciding to move forward with ignio, the next step is creation of a business case and plan. Senior management will judge success by the amount of business value the technology delivers. You can expect to deliver value in a reasonably short timeline; what is “reasonable” depends on the organization.

The planner needs to understand the organization’s significant problems and identify where the possibilities for tangible organizational benefit lie. Too often, new technology projects fail for lack of a properly documented business case with a well-defined use case that enumerates and quantifies specific benefits. Review potential targets to identify which will benefit most from ignio. Avoid a project with a high risk of visible, disruptive failure. Effective application of AI is leading edge, so set modest goals to start. Establish readily identifiable payback and quantifiable benefits.

Finally, identify potential pitfalls, setbacks, and difficulties. Then, determine how to address these. How will you recover if the original objective cannot be achieved?  This is a possibility, especially with new technology. Should you consider having Digitate Consulting work with existing staff on the initial deployment and training? Where are problems most likely to crop up? Who is affected by this? Where will objections/blockages occur? How can these be avoided/minimized? 

How long will the install take?

Digitate estimates that it generally takes 6 weeks for ignio to learn and become effective in normal operations. This can vary widely by customer[5]. Many installations operate with several varieties of “normal”: daytime processing differs from nighttime, weekdays differ from weekends, and end-of-month, -quarter and -year have unique patterns. Some operations have periods when behavior differs dramatically; for example, tax season stresses auditing firms’ IT systems, and the fourth quarter stresses retail IT. ignio continuously learns the business context during each period to build a complete model able to detect deviations. Select the initial project timeline accordingly; it may make sense to steer clear of a critical business period so that a problem cannot become a catastrophe.
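One way to picture period-aware learning (again, our own illustrative sketch rather than ignio’s implementation) is to key the baselines by operating period, so that each reading is judged against the “normal” for its own time window:

```python
import statistics
from collections import defaultdict
from datetime import datetime
from typing import Dict, List, Tuple

def period_key(ts: datetime) -> Tuple[str, str]:
    """Bucket a timestamp into the operating period it belongs to."""
    day = "weekend" if ts.weekday() >= 5 else "weekday"
    shift = "day" if 8 <= ts.hour < 20 else "night"
    return day, shift

def build_baselines(samples: List[Tuple[datetime, float]]) -> Dict[Tuple[str, str], Tuple[float, float]]:
    """Learn a separate 'normal' (mean, stdev) for each operating period."""
    buckets: Dict[Tuple[str, str], List[float]] = defaultdict(list)
    for ts, value in samples:
        buckets[period_key(ts)].append(value)
    return {k: (statistics.mean(v), statistics.pstdev(v)) for k, v in buckets.items()}

# A new reading is then compared with the baseline for *its* period, so heavy
# weekday batch load is not flagged as abnormal just because weekends are quiet.
```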

ignio - Be aware

Currently, ignio does have some limitations. For instance, it has limited mainframe support. ignio for Batch can analyze data from the mainframe, but it is not designed as a batch scheduler to execute mainframe batch jobs. That said, ignio for Batch can be very useful in certain environments. In our opinion, any shop running hundreds or thousands of batch jobs would be well served to take a close look at ignio’s products.

Note that current operating system support includes: Windows, Linux, AIX, and Solaris. There is no support, currently or planned, for z/OS or any other mainframe OS.  We expect UNIX versions, like HP-UX, will be added over time.

The Final Word

ignio delivers a valuable, beneficial application of AI technology to IT data center operations. It will deliver worthwhile results to organizations that follow a careful plan for its implementation. Its products merit careful examination. It is new technology and should be handled as such, i.e. with careful management and planning.  

There will be many products using AI technology. Similar offerings in the market use AI, robotics, and machine learning for cognitive automation in different ways. Offerings for process automation and optimization are available from companies such as Automation Anywhere, Blue Prism, IBM, UiPath, WorkFusion, etc. Business and industry press, consultants and analysts discussing applications of AI and cognitive technologies will only increase management pressure for in-house AI projects.

ignio appeals to us because it offers key advantages to IT. Among the most significant is that its current products can be used in projects totally contained within IT, where risk can best be managed. This allows IT to build knowledge and experience to respond to management questions about AI. A project to investigate and apply ignio products to IT operations appears to us to be a very good move.

TCS has a worldwide presence, deep pockets and highly regarded expertise in IT consulting. Digitate benefits as they leverage these in development and delivery activities. Successful, continued innovation in leading-edge technologies requires substantial on-going investment. Stable technical and financial backing benefits both Digitate and its customers.


[1] There is an excellent video, an interview with Dr. Harrick Vin, the CEO of Digitate, on the design of ignio. See https://www.digitate.com/resource/interview-harrick-vin-birth-ignio/ There are other videos as well.
[2] We realize that we are using words that imply that machine learning is identical to human learning. This can be debated but we will use these words without prejudging the results of the debate.
[3] Find these and many more informative resources at: https://www.digitate.com/resources/
[5] External events may also have to be considered. A disaster, natural or otherwise, can dramatically affect data center operations.