Tuesday, April 26, 2016

BMC NGT + Financial Services firm = enhanced application availability!

By Rich Ptak

Enterprise transformation to become a Digital Enterprise has received a lot of attention since BMC started the discussion with its DSM initiative. Enterprises are transforming operations to digitize functions and products in response to customer demands for speedier, more innovative and more personalized services. To meet customer expectations, the apps that define services must be rapidly developed, continuously updated and digitally deployed.

The services themselves must be intuitive, user-focused and constantly available on any user device, especially cloud and mobile devices. A financial services consumer may want to check an account balance, analyze market trading, examine a company’s profile and transfer funds before completing a stock trade, using one device or a combination of devices. A single transaction spins up multiple transactions accessing a number of diverse data sets. The service must be available 24/7, operating flawlessly at high speed. Anything less risks a dissatisfied customer and lost business. The process is repeated across thousands, even millions, of customers.

More sophisticated analytics, along with new and different data, mean that traffic volumes, data manipulation and analysis are expanding with no end in sight. Data center operations are hard-pressed to keep up with this activity using existing infrastructure management tools.

With some 70% of enterprise data residing on mainframes, most of these activities will involve mainframes working across multiple subsystems, with shared data residing on both mainframe databases and other systems. Optimal database performance requires frequent, often time- and storage-intensive database reorgs and clean-ups. As databases grow, these become progressively more difficult, more disruptive and more resource intensive.

Already stressed, data centers are being pushed to operate with minimal or even no downtime. Unfortunately, traditional data management tools were not designed for the current digital reality, with its frequent and persistent updates and reorgs of massive amounts of data. These tools require databases to be taken off-line and consume resources that grow uncontrollably as data volumes grow. These constraints often force IT to delay maintenance or perform it only partially, which merely slows performance deterioration rather than improving performance. Additionally, forced downtime incurs added costs such as lost or delayed business, customers turning to other suppliers, disrupted operations, etc.

Expensive alternatives (capacity over-provisioning, manual scripts to make temporary repairs and adjustments, equipment upgrades) can maintain operations until a full reorg is run, helping avoid costly penalties for service agreement breaches. However, these workarounds have their own drawbacks and costs.

BMC’s Next Generation Technology Utilities for DB2 (NGT) are designed for the challenges of digital business. NGT intelligently automates and monitors the utility process and runs with the database online and fully operational. We interviewed a senior engineer responsible for data center architecture and operations at a financial services firm about his experiences with NGT.

The Corporate Data Center

Our senior engineer is responsible for operations in a mainframe shop consisting of multiple sysplexes totaling approximately 80K MIPS of processing capacity. Their database runs 60 to 70 terabytes after compression. The shop operates 24 hours a day, 7 days a week; smooth, continuous operation is critical to the success of the business.

The company faced challenges impacting many DB2 environments. Their databases were growing rapidly; early each week, critical ones reached sizes that seriously impaired performance. They were seeing 1,000 timeouts each week during data management processing. The workload would not allow downtime for a reorg during the week, so optimization was delayed to the weekends. Temporary adjustments and workarounds made manually during the week became less and less effective as the workload grew. Using available tools, the only alternative was to let performance deteriorate until the weekend and then squeeze in reorgs and maintenance. Even then, traditional management techniques took too much time, decreasing availability (risking expensive service agreement violations) and running up huge costs.

IT staff believed automation would be too costly and difficult (if not impossible) to implement successfully because many of the tens of thousands of jobs run followed no DB2 object naming conventions. This meant complex, expensive, time-consuming manual customization of reorgs: specifying sort lists, setting parameters, handling overrides, and tuning. They were happy to discover that with NGT this was not the case.

The Financial Service Firm’s Experience

The IT staff used existing management solutions from BMC and IBM for ‘on-line’ data management. They needed a better solution. As mentioned, they were initially skeptical that an automated management approach could work because of the complexity of their naming and workflow processes. Our engineer decided to try BMC’s NGT solution.

NGT allows reorg processes to be run in real time without disrupting or affecting service delivery operations. Thus, it assures near-peak database performance at all times.

NGT was able to tie into the existing "grouping" system and determine which objects need to be reorged on a daily basis. Placing jobs in groups simplifies reorg management; for example, objects can be moved from group to group via a simple drag-and-drop. Thousands of jobs were replaced by wildcarded or "group" reorgs that automatically and dynamically handle the whole system.

Given basic parameters and policies, NGT schedules and automatically runs reorgs as necessary. The utilities ‘learn’ what works and what doesn’t: they ‘remember’ what has been executed, monitor process effectiveness, and use this data to automatically adjust executing processes. This assures that reorgs are done in the most efficient, rapid and optimal manner. Required reorg jobs went from tens of thousands to between five and six thousand. There are now far fewer exceptions requiring overrides: where a standard job once ran with thousands of overrides, they now have fewer than 100. Instead of delaying until the weekend, reorgs are now run as often as needed.
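
To make the pattern concrete: NGT’s internals are proprietary, but the behavior described above — wildcarded "groups" of DB2 objects, a threshold policy, and a feedback loop that remembers the benefit of past reorgs — can be sketched roughly as follows. This is a minimal Python illustration only; every name, field and threshold in it is invented, not part of any BMC product.

```python
# Hypothetical sketch of a policy-driven reorg scheduler. NGT's actual
# design is proprietary; this only illustrates the pattern described above.
from dataclasses import dataclass, field
from fnmatch import fnmatch

@dataclass
class ObjectStats:
    name: str                 # DB2 object name, e.g. "PROD.ACCT_TBL01"
    pct_disorganized: float   # fraction of rows out of clustering order
    extents: int              # allocated dataset extents

@dataclass
class ReorgPolicy:
    group_pattern: str        # one wildcard group replaces many per-job rules
    disorg_threshold: float = 0.10
    extent_threshold: int = 50

@dataclass
class Scheduler:
    policies: list
    history: dict = field(default_factory=dict)  # object -> last observed benefit

    def due_for_reorg(self, obj: ObjectStats) -> bool:
        for p in self.policies:
            if fnmatch(obj.name, p.group_pattern):
                # Skip objects where past reorgs produced little benefit --
                # a crude stand-in for "learning what works".
                if self.history.get(obj.name, 1.0) < 0.02:
                    return False
                return (obj.pct_disorganized >= p.disorg_threshold
                        or obj.extents >= p.extent_threshold)
        return False

    def record_outcome(self, obj_name: str, benefit: float) -> None:
        # benefit: e.g. fractional drop in disorganization after the reorg
        self.history[obj_name] = benefit

# Usage: one wildcard policy covers what previously took thousands of jobs.
sched = Scheduler(policies=[ReorgPolicy(group_pattern="PROD.ACCT_*")])
obj = ObjectStats(name="PROD.ACCT_TBL01", pct_disorganized=0.23, extents=12)
print(sched.due_for_reorg(obj))  # True -> schedule an online reorg
```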

NGT Utilities provide full database availability, improved application performance and fully automated data management housekeeping. Dataset allocation automatically adjusts to fit business policies, saving time and storage space. One unforeseen benefit: built-in integrity checks revealed database issues involving thousands of corrupted indexes. The corruption had been masked by an automatic rebuild process in their existing solutions.

Prior to NGT, they were able to manage only about 20% of their system because of its size and complexity. NGT enables them to run approximately six times as many reorgs at reduced cost while automatically managing the entire system. By being able to reorg throughout the week, they maintain good performance and save about 10% overall.

Advice

Our engineer was impressed with NGT’s ease of use. He cautioned that, unlike traditional tools, it doesn’t generate a lot of real-time status reporting, and experienced DB staff may have difficulty adjusting to the lack of in-process reports. Not to worry: it automatically does a lot more behind the scenes, and time plus detailed post-reorg reports do much to eliminate that concern.

For example, upon completing a reorg, NGT provides a very detailed activity report and an error report summary. The summary provides useful insights into what was done in the reorg, which serves as a basis for further action. Finally, it is possible to insert exit points into the utilities to customize the response to specific situations. For example, NGT can be coded to alter a table entry in response to a fault or error instead of generating a trouble ticket.
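
As a rough illustration of the exit-point idea (the actual NGT exit API is not public), the sketch below shows the generic callback pattern: a registered handler for a fault class performs a corrective action, standing in for "alter a table entry", while unhandled events fall back to opening a trouble ticket. All event names and context fields are hypothetical.

```python
# Hypothetical exit-point callback pattern; not the real NGT interface.
from typing import Callable, Dict

exit_points: Dict[str, Callable[[dict], None]] = {}

def register_exit(event: str, handler: Callable[[dict], None]) -> None:
    exit_points[event] = handler

def open_trouble_ticket(event: str, context: dict) -> None:
    print(f"TICKET: {event} on {context.get('object')}")

def raise_event(event: str, context: dict) -> None:
    handler = exit_points.get(event)
    if handler:
        handler(context)                      # customized response
    else:
        open_trouble_ticket(event, context)   # default behavior

def fix_table_entry(context: dict) -> None:
    # Stand-in for issuing an ALTER/UPDATE against the affected entry.
    print(f"Corrected entry {context.get('row')} in {context.get('object')}")

register_exit("INDEX_FAULT", fix_table_entry)
raise_event("INDEX_FAULT", {"object": "PROD.ACCT_IX01", "row": 4711})
raise_event("SPACE_FAULT", {"object": "PROD.ACCT_TS02"})  # falls back to ticket
```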

Conclusion

The senior staff engineer found the potential for continuous, automated reorganization invaluable. NGT is an incredibly powerful solution that completes reorgs in much less time and without interrupting operations. He liked how it learns and improves its effectiveness with each reorg, and he found it highly useful and reliable in operation.

The volume of post-reorg data can be intimidating. But, as stated before, the powerful NGT utilities more than compensate for this. They do their job automatically and well, providing insights into significant hidden or previously unknown problem areas, all without ever taking the database off-line. An excellent solution to the major challenges faced by this financial services firm.

Monday, April 25, 2016

Compuware boosts developer productivity by making it “Easy to Go Fast!”

By Rich Ptak


Compuware’s campaign for Mainstreaming the Mainframe continues with another partnership and innovative product improvements that provide Agility without Compromise. Earlier this year, we discussed Compuware’s commitment to make DevOps pervasive across an integrated enterprise (mainframe and distributed) computing environment. We described their design for a blended ecosystem in which development and operations staff work together in a completely integrated and complementary manner.

Figure 1 (below) is a simplified, graphical view of Compuware’s vision.

Figure 1: Enabling DevOps across the Enterprise

Last quarter, in two articles[1], we discussed Compuware’s acquisition of ISPW as well as its partnerships, which include technology integrations with Atlassian Jira, SonarSource, Splunk and AppDynamics. This quarter we focus on another partnership, with CorreLog, and on extensions to Topaz and ISPW that address one of the more frustrating of software developers’ tasks: managing application source code.

The Issue:

Whatever the platform, keeping applications up-to-date is a fact of life. For mainframe source code, often decades old and lacking reliable records of maintenance, changes, or code updates, it can be a nightmare. Release management is critical, and even experienced staff can find it extremely difficult.

Existing mainframe code management tools were designed for a different era and model of coding. They are hopelessly outdated and inadequate for today’s world of agile development and rapid programming styles. Code management needs to be streamlined and made easier. Much-needed functionality that makes management quicker and easier already exists in distributed-environment tools that collect and present data.

As Compuware optimized its own development and operations processes, it identified the best possible tools for its own use, with an eye to also supplying them to customers. ISPW was just what they were looking for. They purchased the company and began integrating it into Topaz.

Combining Topaz with ISPW = Simplicity + Elegance

Compuware’s goal is to meet an identified demand for a contemporary DevOps development environment on the mainframe architecture. Their “Mainstreaming the Mainframe” strategy focuses on creating a blended ecosystem with tools attractive to both mainframers and experienced distributed systems developers who are not mainframe literate.

Combining the latest in source-code management technologies from the distributed world with Topaz, they are building a common culture to the benefit of both environments. Experienced mainframers get easier access to the latest in DevOps technology and exposure to new features that improve productivity. Especially interesting to Compuware is the opportunity to expose mainframers to the data aggregation and display capabilities that make “it easy to go fast” in application code development, debugging and change management. Here are more examples of how the combination works:

o   ISPW integration with Topaz provides a common look and feel while leveraging Topaz’s automated capabilities and visualization strengths. Single-click access lets developers tap Topaz for Program Analysis, and a direct link to Topaz Workbench when compiling speeds analysis of compile errors.

o   Code management is significantly easier, faster and less error-prone thanks to automation and visualization techniques for working on existing programs, code, copybooks, etc. during error analysis, debugging and updating. Automatic display (with the ability to edit) of offending or suspect code, along with associated error codes, on a single screen means no longer having to manually search through multiple screens or pages of printouts. The result is much-reduced time spent on error analysis and correction.

o   Topaz and ISPW work together to supply data useful in managing and performing source code updates and changes. Visual displays of the complete lifecycle of a project make it easy to manage the process by maintaining a dynamic, visual record of who is working on what code, code status (edit, changes, testing, approved), etc.

o   The Impact Analysis feature within ISPW generates views of copybook, job and code dependencies, links and interactions that speed understanding of program flows while reducing the risk of conflicts or disruptive changes. The depth (number of hops) of interactions included in the graphic display is adjustable by the developer (see the sketch after this list).

o   Developers can work independently while keeping track of who is working on the code and what they are doing. Automatically generated side-by-side visual displays of edited code make comparisons easy for error correction, edits, merges and updates.

o   A mobile interface makes it easier to get change approval for emergency code fixes. 
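
As a hedged sketch of what a hop-limited dependency view involves (ISPW’s actual implementation is not public), the following Python fragment walks a dependency graph breadth-first and stops after a developer-chosen number of hops. The artifact names are invented.

```python
# Hypothetical hop-limited impact traversal, illustrating the adjustable
# "depth" idea behind a display like ISPW's Impact Analysis.
from collections import deque

def impacted(graph: dict, start: str, max_hops: int) -> set:
    """graph maps an artifact to the artifacts that depend on it."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, hops = queue.popleft()
        if hops == max_hops:
            continue  # don't expand beyond the chosen depth
        for dep in graph.get(node, ()):
            if dep not in seen:
                seen.add(dep)
                queue.append((dep, hops + 1))
    return seen - {start}

deps = {
    "PAYROLL.CPY": ["PAYCALC.COB", "PAYRPT.COB"],
    "PAYCALC.COB": ["NIGHTLY.JCL"],
    "PAYRPT.COB":  ["NIGHTLY.JCL", "MONTHLY.JCL"],
}
print(impacted(deps, "PAYROLL.CPY", max_hops=1))  # direct dependents only
print(impacted(deps, "PAYROLL.CPY", max_hops=2))  # widen the view by one hop
```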

There is more to the announcement, including enhancements in log collection and analysis due to the CorreLog SIEM Agent integration. This significantly strengthens log analysis and reporting, using a standardized interface to get application-level data from mainframe application auditing solutions. In addition to its own analysis, messaging and alerting, CorreLog can feed the 3270 log codes to all major SIEM solutions. This allows creation of a single view of risk, security, incidents and events across platforms.
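
CorreLog’s agent format and APIs are not public, so the following is only a generic sketch of the normalization step such an integration implies: packaging a mainframe event as an ArcSight Common Event Format (CEF) string, a format most major SIEMs can ingest, and shipping it via syslog. The vendor/product fields, host name and sample message are all placeholders.

```python
# Hypothetical mainframe-event-to-SIEM normalization; not CorreLog's format.
import socket

def to_cef(msg_id: str, name: str, severity: int, extensions: dict) -> str:
    ext = " ".join(f"{k}={v}" for k, v in extensions.items())
    # CEF:version|vendor|product|device_version|signature_id|name|severity|ext
    return f"CEF:0|ExampleVendor|MainframeAgent|1.0|{msg_id}|{name}|{severity}|{ext}"

def send_to_siem(line: str, host: str = "siem.example.com", port: int = 514) -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(line.encode("utf-8"), (host, port))  # syslog over UDP

event = to_cef("ICH408I", "RACF access violation", 7,
               {"suser": "TSOUSER1", "msg": "insufficient authority"})
print(event)
# send_to_siem(event)  # uncomment with a real SIEM endpoint
```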

The Final Word

The overall takeaway is that Compuware continues to impress with their focus and progress. For six consecutive quarters they have delivered significant advances and improvements in products and solutions aimed at Mainstreaming the Mainframe.  

They have identified an obviously interested market segment: enterprises that see opportunities and advantages in the mainframe but want to provide their next generation of developers with programming tools that have familiar interfaces. Also interested are enterprises looking to integrate data centers with standardized management and operations solutions across platforms. Finally, they can serve data center managers and mainframers looking to modernize mainframe operations, moving beyond Linux-Java to power agile development with DevOps solutions.

Compuware has accurately identified its market. In working closely with these customers, they are also uncovering unexpected opportunities to resolve weaknesses and customer discontent with other infrastructure tools. So far, Compuware has successfully been able to capitalize on such opportunities. We bet that they’ll continue to do so over coming quarters.




[1] See “Mainstreaming the Mainframe..” and “On implementing Compuware’s pioneering mainframe strategy” at: http://www.ptakassociates.com/content/

Wednesday, April 20, 2016

IBM's cloudMatrix Makes Selecting the Right Cloud Service Provider Easier!

By Rich Ptak and Audrey Rasmussen

Choosing the best among competing options is a daily task, one that can be as easy as deciding what to have for lunch or as complex and stressful as buying a home or choosing a Cloud service provider. For home or investment purchases, one option is to use a professional consultant or broker with specialized product knowledge and process expertise to aid decision-making.

CIOs and IT staffs face similar challenges when selecting the right hybrid Cloud option and/or service provider. The pace of technological change, shifting technical and business requirements, policy variations and a multitude of options make the choice frustratingly difficult. Finding the right combination of services, let alone identifying all relevant evaluation metrics, is neither easy nor obvious. It involves not only technical requirements but also business considerations, such as costs, expected benefits, risks, compliance, etc. With those requirements, using an IT broker or consultant can be an attractive option.

Partially because of the difficulty in defining and quantifying business metrics, IT traditionally focused on technical analysis and less on business/financial analysis. Today, CIOs cannot afford to ignore the business aspects. Tight links between IT and the customer mandate a process that integrates business, technical, implementation and operations issues. Determining the service that satisfies all requirements while yielding the best benefits, cost savings, and payback opportunities is no easy task. Providing such information in a consumable manner to non-experts at an affordable price was, until now, practically inconceivable.

Today, IBM’s new cloud service, cloudMatrix[1], solves many of the CIO’s problems. It provides a comprehensive, affordable way to help clients evaluate hybrid Cloud alternatives. It helps determine: a) how to provide developers and non-IT buyers multiple cloud service options within the constraints of enterprise policies, and b) how to manage service implementation and delivery with visibility into costs, usage and performance.

With IBM cloudMatrix services, enterprises can plan, buy, and manage (i.e., broker) software and cloud services from multiple suppliers across hybrid clouds from a single screen. In contrast, solutions focused mainly on the technical aspects of potential cloud workloads tend to be too complex and difficult for non-technical buyers to understand. Here’s what IBM offers.

IBM cloudMatrix

The interesting differentiation in cloudMatrix is its process-based approach, which guides customers through exploration and adoption of cloud services. Customers proceed systematically through an IT supply-chain process of Plan, Buy, and Manage. The result is an easier, more accurate way to choose the right hybrid Cloud. Full brokerage services in the shared and dedicated models include all three functions, whereas the Planning offering provides only planning capabilities.

So, IBM Cloud offers:
  • IBM cloudMatrix Planning
  • IBM cloudMatrix Full Broker Shared
  • IBM cloudMatrix Full Broker Dedicated

IBM cloudMatrix Planning enables enterprises to assess an existing application’s readiness for, and potential benefit from, the Cloud using analytics-based workload characterization, automated cloud service brokerage capabilities and cloud management. Workload characterization with sophisticated analytics defines the potential workload using an interactive, research-based questionnaire. The customer chooses what to assess, e.g. application readiness, comparison of cloud providers, or a custom-designed solution or blueprint. Using additional analytics and user priorities, it recommends where the workload fits best among cloud provider and in-house options. Out-of-the-box comparisons cover AWS, Azure, SoftLayer, Google Compute, and VMware vCloud Director 5.1 and 5.5. It delivers information for essential purchasing decisions, such as estimated costs and operational requirements.
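
The analytics behind these recommendations are proprietary; purely to illustrate the shape of a priority-weighted recommendation step, here is a small sketch. The criteria, weights and per-provider ratings below are invented numbers, not cloudMatrix data.

```python
# Hypothetical priority-weighted provider fit score; numbers are invented.
def fit_score(provider: dict, priorities: dict) -> float:
    """Weighted sum of normalized 0-1 fit ratings; higher is better."""
    return sum(priorities[k] * provider["ratings"][k] for k in priorities)

# Per-criterion ratings (0-1) would come from workload characterization.
providers = [
    {"name": "AWS",       "ratings": {"cost": 0.7, "compliance": 0.6, "performance": 0.9}},
    {"name": "Azure",     "ratings": {"cost": 0.6, "compliance": 0.8, "performance": 0.8}},
    {"name": "SoftLayer", "ratings": {"cost": 0.8, "compliance": 0.9, "performance": 0.7}},
]
# User priorities sum to 1; this workload weights compliance heavily.
priorities = {"cost": 0.2, "compliance": 0.5, "performance": 0.3}

for p in sorted(providers, key=lambda p: -fit_score(p, priorities)):
    print(f"{p['name']}: {fit_score(p, priorities):.2f}")
best = max(providers, key=lambda p: fit_score(p, priorities))
print("Recommended:", best["name"])
```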

Both Full Broker packages add the ability to buy services from the cloudMatrix catalog. Users purchasing Full Broker Dedicated services can customize user roles and access levels via integration with existing identity management systems. The management function allows IT operators to deliver orders using ITSM workflows and/or automated DevOps and CloudOps technologies. IT administrators can track, manage, and report financials – including charge-back to the departments consuming cloud services. Note that only those with the Dedicated offering can customize services, including planning and catalog.
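
cloudMatrix’s actual data model is not public; as a minimal sketch of what department-level charge-back reporting involves, the fragment below rolls hypothetical usage records up to a per-department total. All records and rates are invented.

```python
# Hypothetical charge-back roll-up by consuming department.
from collections import defaultdict

usage = [  # (department, provider, service, quantity, rate_per_unit)
    ("Marketing", "AWS",       "m4.large", 300, 0.10),
    ("Marketing", "SoftLayer", "storage",  1,   42.00),
    ("Finance",   "Azure",     "D2_v2",    500, 0.11),
]

chargeback = defaultdict(float)
for dept, provider, service, qty, rate in usage:
    chargeback[dept] += qty * rate

for dept, total in sorted(chargeback.items()):
    print(f"{dept}: ${total:,.2f}")  # Finance: $55.00 / Marketing: $72.00
```
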
IBM Cloud offers all three as Software as a Service (SaaS), plus implementation services to quick-start your implementation and configuration, along with three training sessions on how to use the platform.

Global Technology Services (GTS) offers managed services wrapped around all three of the cloudMatrix SaaS offerings for clients who want to purchase cloudMatrix as a managed solution.

Global Business Services (GBS) offerings include “how to use” cloudMatrix consulting services for tasks ranging from business transformation all the way down to assessing the cloud-readiness of workloads and applications.

Conclusion:

IBM cloudMatrix’s process-based approach to evaluating, purchasing and managing cloud services makes buying cloud services more approachable for both business and IT. The self-service model allows non-technical and IT buyers to more easily make well-informed cloud service decisions.

By facilitating an easier process to evaluate and buy cloud services for non-technical staff, IT becomes the enabler in the cloud service purchasing process, rather than the group from which to hide such purchases. In turn, this enables IT to manage all cloud services, not just those it knows about.

IBM cloudMatrix is a potential game changer. Just as consumer tax preparation software changed the market with its user process-oriented approach, cloudMatrix has the potential to change companies’ cloud service purchasing processes and thus increase the value that IT delivers to business buyers. Well done, IBM.



[1] IBM’s cloudMatrix service came from its acquisition of Gravitant in late 2015.