Tuesday, January 12, 2016

HPE’s Synergy: Delivering the Data Center of the Future?

By Rich Ptak


At the recent Hewlett Packard Enterprise Discover event for its European customers, HPE announced its new Synergy offering. HP spent the last year planning and implementing its split into two companies, a split that became final at the beginning of November.

The old company’s final financial results significantly increased the turnaround pressures. Be that as it may, HPE management is now free to focus exclusively on achieving business success.

More positively, the Synergy offering is a significant and encouraging portent for the future. HPE accomplished something that we suspect few companies could match. In addition to managing the breakup, management was able to focus on defining and delivering a major step-up in its vision of the datacenter’s future. HPE deserves recognition for that achievement alone.

First, some background. Clearly, adoption of the Cloud is driving major datacenter changes. Datacenters are very stove-piped: each group (server, storage, networking, operations, etc.) focuses narrowly on its own domain. This originally resulted from early hardware architecture specialization, and each group has skills specific to its infrastructure/technology specialty.


Now, add competitive and economic pressures demanding that IT staffs do more with less, pressures that have grown enormously even as budgets and available time have shrunk. The result is a data center that reinforces its built-in inefficiencies. It is extremely rigid, routinely over-provisioned (to meet unpredictable demand), and expensive and time-consuming to reconfigure and scale to meet changing demand.


Technology and innovations that reduce costs are highly sought after. A software-definable infrastructure is a potential solution, and this is where Synergy enters the picture. It is designed to deliver on the full promise of a software-defined data center. The intent is to dramatically increase the flexibility to move, change and reconfigure infrastructure resources. Resulting benefits include rapid provisioning and re-provisioning for new applications in the cloud, the reduction or elimination of today’s rigid structural boundaries, and the end of costly over-provisioning. Human and hardware resources become more flexible and efficient. The vision is very attractive, yielding significant organizational benefits.

However, there are some practical considerations and barriers to achieving it. We don’t see anything unsolvable, but let’s examine a few. In the first place: Where to start? What does it cost to implement? How long will it take? Is there an implementation plan template to use? HPE has a key role in answering these questions, even at this stage.

There are other issues to consider. The basic concept of Synergy challenges the datacenter’s existing stove-piped organization. Its success requires fundamental changes in the way many IT jobs are done. Inevitably, some will resist such changes, regarding them as job threatening. Prototype projects facing such difficulties may fail. HPE must anticipate and be prepared to resolve such objections and concerns. Its sales force must be trained to discuss how both HPE and potential customers can respond to these, as well as to any other concerns that will inevitably arise.

Finally, IT is already stressed. Synergy adds to that stress, as it requires significant commitment and effort to succeed. HPE should be prepared to help a client evaluate whether the resources needed to succeed are present, and it should be able to provide tools, including services for assessment, implementation planning and guidance.

A cautionary note: today Synergy is HPE-centric, built on HP’s OneView software. Initially, it will most likely appeal to those already committed to OneView. To broaden its appeal, we expect HPE will eventually extend it to other architectures; however, no such plans were discussed.

Our opinion is that HPE Synergy holds great promise. To the extent that HPE fully understands and is prepared to deliver the effort needed to make Synergy a success, it will succeed. We’re not sure this is exactly the “Data Center of the Future,” but it offers a serious vision.

We will follow its progress with great interest. For today, we recommend potential customers monitor Synergy’s development and evolution. This will allow them to determine when, and if, they want to perform a detailed evaluation. We wish HPE good luck as it moves forward.

Tuesday, January 5, 2016

BMC automated mainframe cost management saves money by lowering MLC costs for an insurance company

By Rich Ptak


BMC rightly identified the need for enterprises of all sizes to focus on transformation with Digital Enterprise Management (DEM)[1]. This can take many different forms, including automating the tasks of managing and controlling licensing costs. Here is one example of how this plays out, to the benefit of a mid-sized insurance firm.

Mainframe software license charges (MLC), for both system and application software, have since the early days been tied directly to consumption, measured in peak MSUs[2] or MIPS[3]. MSU (or MIPS) consumption ties to workloads and performance, which determine the ability to meet service level agreements (SLAs). MSUs also serve as the measure of mainframe computing consumed that is used to calculate software licensing charges.

Attributing actual MSU usage to individual workloads has always been a very difficult task. Managing mainframe software MLC costs, let alone predicting them, has never been easy. Even in the days of disciplined workloads, calculating the total number of MSUs consumed by any particular combination of workloads at any particular time was the source of sysadmin nightmares. The best efforts of both vendors and customers end up relying on manual processes that are time-consuming, frustrating and typically unreliable. In today’s world of mobile computing, with unpredictable workload volumes and some 90% of transactions involving a mainframe, the variation is even less predictable. Even relatively small mainframe operations experience dramatic swings in MIPS consumption, driving up costs.

The efforts of one mainframe vendor are changing all that. BMC’s MLC cost management solutions provide the first real opportunity to automatically manage and control MSU peaks. They also provide tools to identify jobs and tasks to tune.

We interviewed the Technical Services Manager of a mid-sized insurance holding company. Using BMC products, Intelligent Capping for zEnterprise® (iCap) and Cost Analyzer for zEnterprise (Cost Analyzer), he can now control MSU peaks, eliminate peak surprises and identify where to concentrate tuning efforts. He reduced peaks from 90 to 75 MSUs, eliminated an annual ‘true-up’ bill in 6 months and plans to further reduce peaks to 63 MSUs.

The Corporate Data Center

The insurance company’s mainframe is the responsibility of a centralized corporate IT group with SLA commitments to support delivery of shared corporate services (e.g., human resource management, networking, billing, invoicing) to its owned companies. Local IT groups within the various entities handle all other applications.

The company runs a variety of BMC’s DB2, IMS and MainView products to manage and control a relatively small z/OS-based 90 MSU (728 MIPS) mainframe; other vendors’ products are also installed. Billing usage is determined from the peak of a 4-hour rolling average of MSU consumption. They pay a fixed monthly z/OS charge based on usage by z/OS components. Usage of other IBM® software (IMS™, DB2®, CICS®, MQ®) is covered by a fixed monthly amount, defined in an Enterprise License Agreement (ELA) and based on an estimated peak MSU. They true up the difference between actual and estimated MSU usage once per year. This true-up charge has historically ranged between $30K and $50K, while annual MLC charges run in the neighborhood of $1.6M. Changes in workloads prevented more accurate forecasting, making budgeting for the true-up charge, and cost management in general, very difficult. Efforts at manual tracking and at using other products to control peaks were unsatisfactory.
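To make the billing mechanism concrete, here is a minimal sketch of how a peak 4-hour rolling average could be computed from MSU utilization samples. It is our own illustration: the function name, the fixed 5-minute sampling interval and the synthetic data are assumptions, not part of any BMC or IBM tool (actual sub-capacity reporting is derived from SMF data).

```python
from collections import deque

def peak_4hr_rolling_average(msu_samples, interval_minutes=5):
    """Return the peak 4-hour rolling average of MSU consumption.

    msu_samples: MSU utilization readings taken every `interval_minutes`
    minutes (an illustrative layout; real billing data comes from SMF records).
    """
    window_size = (4 * 60) // interval_minutes   # samples in a 4-hour window
    window = deque()
    running_sum = 0.0
    peak = 0.0
    for sample in msu_samples:
        window.append(sample)
        running_sum += sample
        if len(window) > window_size:
            running_sum -= window.popleft()      # slide the window forward
        if len(window) == window_size:
            peak = max(peak, running_sum / window_size)
    return peak

# Synthetic day of 5-minute samples: a quiet 60 MSU baseline with a
# 5-hour batch surge to 75 MSUs that becomes the billable peak.
samples = [75 if 100 <= i < 160 else 60 for i in range(288)]
print(f"Peak 4-hour rolling average: {peak_4hr_rolling_average(samples):.1f} MSUs")
```

The point of the sketch is simply that a single sustained surge, not momentary spikes, sets the billable number, which is why controlling the rolling-average peak controls the bill.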

Our manager was convinced that intelligent capping of peak-load MSUs would reduce MLC costs. He also suspected more could be done to further reduce the 4-hour rolling average MSU peaks. Better cost control and operations management would be possible with more data and the detailed insight needed to identify specific workloads, jobs and tasks for tuning efforts.

 One Insurance Company’s Experience

BMC’s MLC Cost Management products changed all that. Our Technical Services Manager learned about BMC’s MLC cost control solutions for the mainframe at a BMC seminar. Within four months, the company had purchased and installed Cost Analyzer and iCap. The results were everything they expected.

Cost Analyzer allowed LOB managers to identify the workloads driving up peak consumption; these could then be managed to reduce peak overruns. An update provides even more insight and control, which we discuss later.
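Conceptually, finding the workloads that drive a peak is a grouping-and-ranking exercise. The sketch below illustrates that idea with a hypothetical per-workload sample layout; the function, data format and names are ours, not Cost Analyzer’s.

```python
from collections import defaultdict

def top_peak_contributors(samples, peak_window, top_n=3):
    """Rank workloads by MSU contribution within the peak window.

    samples: list of (timestamp, workload_name, msu) tuples (hypothetical layout).
    peak_window: (start, end) timestamps of the peak 4-hour window.
    """
    start, end = peak_window
    totals = defaultdict(float)
    for ts, workload, msu in samples:
        if start <= ts < end:
            totals[workload] += msu
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

# Example with synthetic data: batch billing jobs dominate the 02:00-06:00 peak.
data = [(h, "BILLING_BATCH", 40) for h in range(2, 6)] + \
       [(h, "CICS_ONLINE", 20) for h in range(24)] + \
       [(h, "DB2_QUERIES", 10) for h in range(24)]
print(top_peak_contributors(data, peak_window=(2, 6)))
```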

For iCap, the goal was to reduce the average peak from 90 to 75 MSUs. There are three operational modes for iCap:

  1. Observe – a learning mode that monitors and collects operational data on workloads.
  2. Message – extends Observe to analyze data and send alerts to recommend changes (using customer-specified parameters) to control MSUs and manage costs.
  3. Manage – monitors, analyzes and automatically implements recommended changes.
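As a rough mental model of the three modes, consider the sketch below: every mode records data, Message mode additionally raises an alert when the rolling average exceeds the cap, and Manage mode applies the change itself. The class, function and parameter names are hypothetical illustrations of the concept, not BMC’s implementation.

```python
from enum import Enum

class Mode(Enum):
    OBSERVE = "observe"   # learning mode: collect operational data only
    MESSAGE = "message"   # alerting mode: recommend changes to an operator
    MANAGE = "manage"     # automatic mode: apply the recommended change itself

def capping_step(mode, rolling_avg_msu, cap_msu, history, set_defined_capacity):
    """One pass of an illustrative mode-driven capping loop (not BMC's algorithm)."""
    history.append(rolling_avg_msu)            # every mode collects data
    if rolling_avg_msu <= cap_msu:
        return                                 # under the cap: nothing to do
    if mode is Mode.MESSAGE:
        print(f"ALERT: 4-hour average of {rolling_avg_msu} MSUs exceeds the {cap_msu} MSU cap")
    elif mode is Mode.MANAGE:
        set_defined_capacity(cap_msu)          # enforce the cap automatically

# Example: Message mode only alerts; Manage mode also enforces the cap.
history = []
capping_step(Mode.MESSAGE, 82, 75, history, set_defined_capacity=lambda c: None)
capping_step(Mode.MANAGE, 82, 75, history,
             set_defined_capacity=lambda c: print(f"Defined capacity set to {c} MSUs"))
```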

For the first two weeks after installation, adjustments were made manually, based on automatic alerts from iCap. After that, they switched to automatic Manage mode. The product ran for the second half of the fiscal year, and at the first post-installation true-up the charge was zero. With iCap running, they never exceeded the 75 MSU cap. Cap management allowed them to compensate for the consumption overruns of the first six months (before MSU capping). This was a major advance in cost control and savings.

As a result, plans are to progressively lower the cap from the current 75 MSUs to 62 or 63 MSUs within 4 years. With the latest installed version of the Cost Analyzer software, the manager can drill down for additional detail on the workloads driving MIPS consumption, identifying the specific jobs and tasks to tune to further reduce the load.

Capping consumption and insight into workload group operations will provide even more significant savings in the future. With the knowledge already gained, along with the control iCap provides, they can negotiate better multi-year peak and sub-capacity licensing and billing terms with vendors. The more detailed data and control work with virtually any vendor’s mainframe software (BMC, CA Technologies, IBM, etc.). They anticipate savings from controlling the cap to exceed $140,000 over 48 months, roughly 8.75% of an annual MLC bill of about $1.6M.

Advice

Our manager strongly advises potential users to leverage BMC’s expertise in the implementation process: not because the process is particularly complex (it isn’t), but because they found BMC’s support excellent beyond expectations. Time spent with BMC before, during and after installation reduced the time needed to learn and benefit from the products. His team rapidly acquired useful insights into the reports and data in formal and informal sessions with BMC staff. As a result, they quickly developed the expertise to use the products for optimum results.

Conclusion

In this manager’s experience, BMC’s products and support more than met his expectations and needs. He expects to see additional benefits well into the future, even as his workloads shift and change over time. He found that working with BMC support staff accelerated time-to-value while dramatically increasing his team’s expertise and ability to use the new products. He recommends the combination of Intelligent Capping and Cost Analyzer, along with the use of BMC’s support services.

Sounds like an all-around win to us.


[2] A million service units (MSU) is a measurement of the amount of processing work a computer can perform in one hour – typically used for mainframes.
[3] Million instructions per second, a measure of a computer's central processing unit performance.