Tuesday, May 24, 2016

Java on the Mainframe, big problems ahead? Not if BMC can help it!

By Rich Ptak

Today’s dynamic, mobile-obsessed, service-driven market is proving both beneficial and problematic for data center operations. Conventional “wisdom” holds that distributed and mobile devices have been the big winners; in truth, the same dynamics increasingly apply to mainframe environments. And Java on the mainframe is playing a significant role, perhaps larger than is generally known.

The mainframe remains an active, effective and in-demand player in today’s DevOps, agile and mobile-oriented world. Why? Because much of the critical data, information and assets supporting the most-used applications in banking, financial services, retail, travel and research resides and is analyzed there. Mobility-obsessed operations remain linked to and dependent upon mainframe operations.

That’s not to say problems don’t exist. Transaction volumes (often non-revenue producing) have exploded. Unpredictable traffic loads and patterns, the complexity of multi-platform integrations, demands for instant response time, etc. have made operations more difficult to manage, disrupting maintenance schedules. Yet data center teams are expected to deliver modernized, mobile applications faster and at lower cost.

One response was to put Java on the mainframe. Its features make it highly attractive. It is tailored for rapid development cycles. Designed for mobile/web application development, it is platform independent. It integrates easily with a variety of applications, operating environments and databases. The Java Native Interface (JNI) on z allows easy interaction with embedded program logic coded in multiple languages (COBOL, PL/1, ASM) and environments (CICS, WAS, IMS, DB2, USS, Batch, MQ, TCP/IP). Among agile and DevOps programmers and developers, Java dominates as the preferred environment.

Java’s Hidden Threat

Unfortunately, few mainframe-experienced staff have extensive Java familiarity or expertise. This means the potential for major problems lurks in the background. Java was not designed to operate in the mainframe’s shared environment. Java does have built-in code to monitor AND manage resources. For instance, it manages memory space with a process of ‘garbage collection’: it identifies memory actively being used, gathers and compacts it, then frees the rest. It does not check for the impact on other programs. In a mainframe environment, these activities have the potential to seriously disrupt operations, freezing some jobs and delaying the completion of others.

Worse, during this activity Java pauses ALL in-flight transactions, not just those of a single app. Nor does it check for the impact of its actions on other applications or technologies. Compounding the problem, there has been no integrated view across system technologies for monitoring and management. In fact, Java can be running in the data center without staff even being aware of it.
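
To make the threat concrete, here is a minimal Java sketch (ours, not BMC’s) that polls the JVM’s own management beans for garbage-collection and heap statistics using the standard java.lang.management API available on any JVM, including z/OS. Every collection counted here implies pause time during which application threads made no progress.

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import java.lang.management.MemoryUsage;

    // Illustrative sketch: sample the JVM's built-in GC and heap statistics.
    public class GcPauseSampler {
        public static void main(String[] args) throws InterruptedException {
            MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
            while (true) {
                for (GarbageCollectorMXBean gc :
                        ManagementFactory.getGarbageCollectorMXBeans()) {
                    // Cumulative collections and total time spent collecting (ms);
                    // rising collection time is time stolen from application work.
                    System.out.printf("%s: %d collections, %d ms total%n",
                            gc.getName(), gc.getCollectionCount(),
                            gc.getCollectionTime());
                }
                MemoryUsage heap = memory.getHeapMemoryUsage();
                System.out.printf("heap used: %d of %d bytes%n",
                        heap.getUsed(), heap.getMax());
                Thread.sleep(5_000); // sample every 5 seconds
            }
        }
    }

A monitor that correlates spikes in these numbers with stalled CICS or batch work is exactly the kind of cross-technology visibility discussed next.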

Java’s acceptance on the mainframe keeps growing: BMC’s recent mainframe survey found Java in use or planned for use in 61% of DB2 apps, 57% of CICS apps and 49% of IMS apps[i]. This is a serious situation. Tools to manage Java itself exist, but there is no fully integrated tool to monitor and manage the impact of what it is doing. BMC addresses this gap with MainView for Java Environments (MVJE). Let’s look at what it offers.

BMC’s MainView for Java Environments

MVJE provides much-needed functionality. It does not monitor Java code per se; it monitors the infrastructure to detect the effects of Java activity. Specific functionality includes:

  • Automatic discovery of z/OS JVMs (early users were often surprised at how many were actually in use),
  • Real-time monitoring of z/OS Java runtime metrics, e.g. CPU usage, garbage collection metrics and memory usage data, to detect the impact of Java activity (see the sketch after this list),
  • Analysis to detect the workload impact of Java-initiated management activities (combined with Unix System Services (USS), it can initiate actions to address potential problems, e.g. thread-use problems),
  • Optimized operations through integration with MainView monitoring for cross-technology analysis,
  • Customizable dashboard views of Java runtime metrics.
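
As a rough illustration of the second bullet, the sketch below attaches to a running JVM from the outside via the standard JMX remote API and reads the same memory and operating-system beans. The host, port and connection approach are our illustrative assumptions; this shows one generic way infrastructure tooling can observe a Java runtime without touching application code, not how MVJE itself is implemented.

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import java.lang.management.OperatingSystemMXBean;
    import javax.management.MBeanServerConnection;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    // Illustrative sketch: read metrics from a remote JVM over JMX.
    public class RemoteJvmReader {
        public static void main(String[] args) throws Exception {
            // Hypothetical target: a JVM started with JMX remoting enabled
            // (e.g. -Dcom.sun.management.jmxremote.port=9010 ...).
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://localhost:9010/jmxrmi");
            try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
                MBeanServerConnection conn = connector.getMBeanServerConnection();

                // Proxy the remote JVM's standard platform beans.
                MemoryMXBean memory = ManagementFactory.newPlatformMXBeanProxy(
                        conn, ManagementFactory.MEMORY_MXBEAN_NAME,
                        MemoryMXBean.class);
                OperatingSystemMXBean os = ManagementFactory.newPlatformMXBeanProxy(
                        conn, ManagementFactory.OPERATING_SYSTEM_MXBEAN_NAME,
                        OperatingSystemMXBean.class);

                System.out.println("heap used: "
                        + memory.getHeapMemoryUsage().getUsed() + " bytes");
                System.out.println("system load: " + os.getSystemLoadAverage());
            }
        }
    }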


Much Java code is zIIP-eligible; automated discovery along with monitoring of zIIP offload assures no zIIP-eligible code is missed. The additional data on infrastructure impact, resource usage, performance, etc. helps avoid problems even as it speeds diagnosis and eventual resolution. This reduces the need for cross-functional “War-Room” meetings to identify, diagnose and resolve Java-caused problems that impact application availability and performance.

MVJE quickly discovers and monitors JVMs. It pinpoints Java’s resource usage, so application performance and availability continue to meet service levels. IT teams can quickly identify the root cause of problems, reducing MTTR and improving productivity. MVJE’s monitoring of zIIP offloading helps to lower monthly license charges (MLC).

MVJE delivers all the normal benefits associated with BMC’s MainView products, including a single, integrated view of systems activities. The risks associated with unmonitored, unmanaged technology are eliminated. More efficient monitoring and assured effective use of zIIPs help to reduce MLC and capital costs. Intelligent automation allows proactive resolution of problems, again saving costs and improving overall system performance. Complexity is reduced as users can customize dashboards and reports to meet their specific information needs.

The Final Word

We’ve discussed how Java’s built-in resource and memory management, operating in the background unmonitored and unmanaged, can increase costs, slow processing and waste resources. BMC’s MVJE is the first full-system monitoring solution to address these risks.

System admins can now gain actionable insight into Java’s impact on infrastructure, resource usage and operations. Java becomes another well-monitored and managed technology.

Beta customers appear to be very satisfied with the product. A number revealed that they had experienced significant savings and improved performance as a result of using MVJE.  We’re not surprised.

BMC is providing existing MainView customers a free Java Discovery. We look forward to interviewing some customers after they have gained experience with the product. We expect some will be surprised at the result, as they believed themselves to be ‘Java-free’. We also believe that it will result in a significant number of sales. BMC has once again demonstrated its connection with its customers and its commitment to being a leader in mainframe solutions.




[i] Source: 2015 BMC Mainframe Research Results

Friday, May 20, 2016

Got something to say about the Mainframe? Check this out: BMC launches 11th Annual Mainframe Research Survey

Got something on your chest about the mainframe? Familiar with your organization’s mainframe environment? Why you use it? Its benefits? Where it is going? Its future? Have you ever wanted to tell a major vendor (and the world) about the mainframe in your enterprise? Here’s your chance.

From May 24th until June 6th, BMC is collecting data for its 11th survey of trends in mainframe usage. Already one of the largest industry surveys, with over 1,200 mainframe professionals and executives participating, BMC is seeking to attract an even larger number of participants this year.

The research results will be used by vendors, technical and executive users, industry analysts, media, etc. to make significant decisions and draw conclusions on just about everything mainframe. The report will influence investments, products (new and enhanced), hiring, functionality, etc.

The mainframe is a critical backbone with impact across industries and markets from mobility to analytics to complex modeling and the ongoing transformation of digital business.


So, if you’re technical IT staff involved in mainframe management or operations, or part of a mainframe IT team as an executive, manager or technical architect recommending general management or operations practices, this is your chance to take 20 minutes to contribute to the conversation and influence the future of the mainframe.


Starting May 24th, you can take the survey here!

Tuesday, May 10, 2016

Datto Drive: SMB desktop data protection at a hard to refuse price!

By Rich Ptak


Datto provides enterprise-grade backup, restore and recovery services in its privately-owned 200+ petabyte cloud. Founded in 2007, its 600+ employees build products and support customers from nine (9) data centers and seven (7) offices located around the world. It performs over one million backups every day, protecting millions of endpoints. And beyond the private cloud itself, all the devices Datto uses and provides are its own products.

Datto is all about Business Continuity and Data Recovery (BCDR) for SMBs. Its success to date has been built on its private hybrid cloud, the Datto Cloud, used for backup/restore, advanced storage, instant virtualization (local and remote), screenshot verification (to remotely verify backup data integrity) and on-prem file sync-and-share (FSS). All delivered through an international network of thousands of Managed Service Providers (MSPs).

They expect their next big step forward, Datto Drive, to carry them deeper into the SMB market with in-cloud FSS and BCDR. Before we provide more details on the product, why should you want to know those details?

In its introductory year, Datto is making available:

  • One million Datto Drive accounts, free to SMBs (business accounts only, no personal users), for one year, with one terabyte of data storage (all managed by Datto in the Datto Cloud).
  • After one year, the offering changes to $10 per month per domain (NOT per user; the price holds no matter how many users), with service delivered through a Datto MSP partner (which they’ll help you find, if necessary).
  • Premium versions for larger storage volumes and services are available; premium services are available (for a fee) during the first year.

Given that competitors’ pricing is higher and charged on a per-user basis, let’s see what Datto’s functionality includes.

Datto Drive

Datto Drive brings highly affordable sync-and-share plus full backup/restore and disaster recovery for desktop and mobile devices to SMBs. Datto is targeting the less than one third of the SMB market not currently working with MSPs, a segment sorely in need of comprehensive, enterprise-grade FSS and BCDR services.

For the price, Datto Drive offers enterprise-grade FSS built on ownCloud open-source technology. It provides the superior security of Datto’s hybrid cloud with advanced capabilities in permission management, tracking and tracing. Datto has had no proven data loss since its founding in 2007.

Datto Drive supports virtually every type of file (video, image, audio, text, etc.). File sharing, control and management can be done from any supported device (desktop, deskside, mobile). It permits real-time collaboration for sharing, exchange, editing, etc. across domain users. Sync-and-share capabilities are already available for most existing operating systems, e.g. iPhone, Android, iPad, Windows, Linux and Mac. The ownCloud technology means that thousands of value-add apps are available. Finally, it also includes backup for Microsoft 365, OneDrive and SharePoint files.

Final Word


There’s a lot more functionality and more to like about Datto Drive and the rest of the product portfolio. We suggest a visit to Datto.com to see all that is available. When competitors such as Dropbox and Box are charging $15/user/month for similar services, this offer appears to us to be hard to resist. SMB owners should also move quickly to snag one of the one million domain accounts. It’s our opinion that they’ll disappear quickly.

Friday, May 6, 2016

Defining Hybrid Cloud: A View from Above


By Audrey Rasmussen, Rich Ptak and Bill Moran

Looking at Hybrid Cloud from a Different Perspective

The definition of hybrid cloud is evolving as the cloud market progresses and the transition to hybrid cloud continues. The emergence of hybrid cloud brings with it a multitude of new delivery models, services, technologies and more. This wealth of options provides potential hybrid cloud customers with lots of choices. But, it can also be confusing, which elevates the need for relevant detailed definitions.
Business requirements are a key driver for adopting cloud computing. Yet, most hybrid cloud definitions primarily focus on technical descriptions. A technical definition is appropriate and useful for cloud implementers and technical teams. However, it falls short at clarifying hybrid cloud to potential business users.
For example, defining a “jet plane” by describing the engine and overall plane design is a valid approach. However, it misses the important business impact of jet planes in revolutionizing commercial air travel.  Similarly, defining hybrid cloud in strictly IT terms neglects the economic value and business implications of adopting it.
This is particularly important because unless business users understand the value and potential the hybrid cloud delivers, they are unlikely to reap optimal benefits from it. Consequently, their support for IT’s hybrid cloud efforts may be unenthusiastic.
Defining the hybrid cloud is complicated because each implementation is unique, as it fulfills specific business requirements. Since business needs drive adoption, business users must understand what a hybrid cloud can offer and mean to them. This knowledge can transform how business stakeholders innovate and design new business services. In order to reach both business consumers and technical implementers, we believe a broader definition of hybrid cloud is necessary.
We begin with our explanation of hybrid cloud for business cloud consumers.

The Aerial View of Hybrid Cloud: Business View

Some business staff may think that only the technical IT team needs to understand cloud computing. Although hybrid clouds are enabling technologies, it is equally important for business leaders to understand how they can fundamentally impact and/or change business models and operations. Why? Because realizing the hybrid cloud’s full advantage requires a new, expanded way of thinking about the business and its possibilities.
Cloud computing enables an organization to extend its capabilities beyond the “walls” of the company and frequently beyond the expertise within it. For example, traditionally, computers and information technology were company-owned and resided in corporate data centers. Companies now have the option to pay service providers to use remote computer resources on-demand. Greatly expanded resources become available within minutes or hours, something not always possible in corporate data centers.
Cloud computing has evolved and will continue to evolve as the variety of cloud services explodes beyond today’s technology and business resource limits. Diverse cloud service offerings run the gamut from business applications, industry-specific data (for example, medical data), cloud development platforms and advanced analytics to video processing, weather data, Twitter data and much more. Today, businesses are able to access these diverse services, extending their capabilities far beyond the data residing within their company and the expertise of their employees. This eliminates in-house limitations, making possible what cannot be done in-house alone. It provides businesses with vast opportunities to innovate creatively, beyond what they can accomplish within the constraints of their companies’ internal capabilities. Business leaders need to understand that this is what hybrid cloud can deliver.
An example helps illustrate the creation of a new business service from a variety of hybrid cloud services. Imagine a car insurer’s customer-facing application uses customer policy data (residing on an internal cloud) to retrieve vehicle coverage. The application uses traffic information (from cloud service provider A) to warn the customer of an accident just ahead of their location, a potential traffic hazard or slowing freeway traffic. It uses weather data (from cloud service provider B) to warn customers that they are heading directly into potential hail, a tornado or other adverse weather. A map service (from cloud service provider C) provides alternative directions avoiding the hazard.
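A hedged sketch of such a composition, with every endpoint URL a hypothetical stand-in, might look like the Java below; the point is only that a “hybrid” business service is often orchestrated calls across one internal and several external clouds.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    // Illustrative sketch: compose internal and external cloud services.
    public class HazardAdvisor {
        private static final HttpClient HTTP = HttpClient.newHttpClient();

        static String fetch(String url) throws Exception {
            HttpRequest request = HttpRequest.newBuilder(URI.create(url)).build();
            return HTTP.send(request, HttpResponse.BodyHandlers.ofString()).body();
        }

        public static void main(String[] args) throws Exception {
            // Hypothetical endpoints standing in for the example's four clouds.
            String policy  = fetch("https://internal.example/policies/12345");    // internal cloud
            String traffic = fetch("https://provider-a.example/traffic?loc=...");  // provider A
            String weather = fetch("https://provider-b.example/weather?loc=...");  // provider B
            String reroute = fetch("https://provider-c.example/route?avoid=..."); // provider C

            // A real service would parse each response and decide whether to
            // push a warning and an alternate route to the customer's device.
            System.out.println(policy + traffic + weather + reroute);
        }
    }

The business logic is thin; the value lies in which services are selected and combined.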
In similar ways, businesses of all kinds are able to compose innovative business services that utilize internal and external cloud services. This changes how business leaders think about innovating and developing business services. Just as botanists create a new hybrid plant by selecting and combining the best plant characteristics, business leaders can create new, innovative hybrid cloud-based services by selecting among the best available services.
A hybrid cloud makes available resources where ownership is not feasible, justified or possible, whether for operational, cost or other reasons. It makes collaboration possible without risking production environments. Hybrid cloud can lower the cost of operations, development, sales, marketing, and research and development. It opens up otherwise unavailable opportunities by making it possible to use capabilities on a temporary or exceptional basis. It can allow global market access without a global presence. A hybrid cloud allows access to resources and capacity as and when they are needed, from public or community clouds that can generally provide services at lower cost than private infrastructure.
Although the basic definition of hybrid cloud sounds simple, there are technical issues that IT teams must attend to behind the scenes in order to implement it, while keeping it simple and seamless. A hybrid cloud requires business leaders and IT to work as a team.

The Ground Level View of Hybrid Cloud: Technical View

As mentioned, many technical descriptions of hybrid cloud are already available, so there is little need for extensive additional discussion here.
For a very simple working definition, we describe a hybrid cloud as an environment that connects at least two independent cloud services from whatever source. It can consist of public cloud services, private cloud services or 3rd-party-delivered private cloud services in any combination. Public cloud services also come in many “flavors”; for example, they may be on- or off-premise, include multiple enterprises (e.g. a community) with access to the same resources, or have co-resident users working in ‘private’ spaces. Private cloud services are enterprise-owned/controlled cloud services whose access is controlled by the enterprise or an enterprise-authorized entity. An additional major benefit of hybrid cloud is that it protects a company’s current investments in infrastructure.

Summary

At times, an enterprise needs access to services or capabilities where ownership isn’t necessary or is too expensive. The need may be operational (e.g. a need to use advanced analytics) or informational (e.g. access to weather or medical data). It can be driven by IT or by business concerns. In short, the enterprise requires temporary and/or shared access to IT capacity (compute, storage, network, services, etc.) or functionality it doesn’t own. Hybrid cloud is a utility model with the potential to provide access to a range of products, services and resources more economically and efficiently on a pay-as-you-go basis. Between the rapid innovation of infrastructure and the creativity of marketers, the variety of cloud services offered, and the number of definitions, will continue to expand.
From a business perspective, it puts assets, resources and expertise at the disposal of the enterprise that it otherwise would not have. It allows the enterprise to leverage these assets in creative and innovative ways with manageable financial expense and economic risk. For business staff, it loosens restrictions on what can be accomplished as the enterprise transforms itself to effectively compete in a digitized world.
Since hybrid cloud is the collective composition of multiple cloud services spanning computing domains, it requires management and coordination of service activities from both in-house and service provider sources. The challenge for IT lies in managing these hybrid cloud service compositions seamlessly while delivering what the business needs, in the time frame it is needed, at the best possible cost point.
Finally, for both IT and enterprise staffs the hybrid cloud provides the opportunity to work more closely together to define and achieve aggressive, innovative enterprise goals in an economic, effective, innovative manner.

Tuesday, April 26, 2016

BMC NGT + Financial Services firm = enhanced application availability!

By Rich Ptak

Enterprise transformation into a Digital Enterprise has received a lot of attention since BMC started the discussion with its DSM initiative. Enterprises are transforming operations to digitize functions and products in response to customer demands for speedier, innovative and personalized services. To meet customer expectations, the apps that define services must be rapidly developed, continuously updated and digitally deployed.

The services themselves must be intuitive, user-focused and constantly available on any user device, especially cloud and mobile devices. A financial services consumer may want to check an account balance, analyze market trading, examine a company’s profile and transfer funds before completing a stock trade, using one or a combination of devices. A single transaction spins up multiple transactions accessing a number of diverse data sets. The service must be available 24/7, operating flawlessly at rapid speeds. Anything less risks a dissatisfied customer and lost business. The process is repeated across thousands, even millions of customers.

More sophisticated analytics along with different data mean that traffic volumes, data manipulation and analysis are expanding with no end in sight. Data center operations are hard-pressed to keep up with this activity with existing infrastructure management tools.

With some 70% of enterprise data residing on mainframes, most of these activities will involve mainframes working across multiple subsystems, with shared data residing in mainframe databases as well as on other systems. Optimal database performance requires frequent, often time- and storage-intensive database reorgs and clean-ups. With increasing size, these become progressively more difficult, more disruptive and more resource intensive.

Already stressed, data centers are being pushed to operate with minimal or even no downtime. Unfortunately, traditional data management tools were not designed for the current digital reality, including such frequent and persistent updates and reorgs of massive amounts of data. These tools require databases to be taken offline and consume resources that grow uncontrollably as data volumes grow. These issues often force IT to delay maintenance or perform it only partially, which merely slows performance deterioration rather than improving performance. Additionally, forced downtime incurs added costs such as lost or delayed business, customers turning to other suppliers, disrupted operations, etc.

Expensive alternatives (capacity over-provisioning, manual scripts making temporary repairs and adjustments, equipment upgrades) can maintain operations until a full reorg is run, helping avoid costly penalties from service agreement breaches. However, these “work-arounds” have their own drawbacks and costs.

BMC’s Next Generation Technology Utilities for DB2 (NGT) are designed for the challenges of digital business. NGT intelligently automates and monitors the utility process and runs with the database online and fully operational. We interviewed a senior engineer responsible for data center architecture and operations at a financial services firm about his experiences with NGT.

The Corporate Data Center

Our senior engineer is responsible for operations in a mainframe shop consisting of multiple sysplexes totaling approximately 80K MIPS of processing capacity. Their database runs 60 to 70 terabytes after compression. Operating 24 hours a day, 7 days a week, smooth, continuous operation is critical to the success of the business.

The company faced challenges impacting many DB2 environments. Their databases were growing rapidly, with critical ones reaching sizes early each week that seriously impaired performance. They were seeing 1,000 timeouts each week during data management processing. The workload would not allow downtime for a reorg during the week, so optimization was delayed to the weekends. Temporary adjustments and workarounds made manually during the week became less and less effective as the workload grew. With the available tools, the only alternative was to let performance deteriorate until the weekend and then squeeze in reorg and maintenance. Even then, traditional management techniques took too much time, decreasing availability (risking expensive service agreement violations) and running up huge costs.

IT staff believed automation would be too costly and difficult (if not impossible) to implement successfully, as many of the tens of thousands of jobs run followed no DB2 object naming conventions. This meant complex, expensive, time-consuming manual customization of reorgs: specifying sort lists, setting parameters, handling overrides and tuning. They were happy to find out that with NGT, this was not true.

The Financial Service Firm’s Experience

The IT staff had used existing management solutions from BMC and IBM for ‘online’ data management. They needed a better solution. As mentioned, they were initially skeptical that an automated management approach could work, given the complexity of their naming and workflow processes. Our engineer decided to try BMC’s NGT solution.

NGT allows reorg processes to be run in real-time without disrupting or affecting service delivery operations. Thus, it assures near peak data base performance at all times.

NGT was able to tie into the existing “grouping” system and determine which objects needed to be reorged on a daily basis. Placing jobs in groups simplifies reorg management; e.g. objects can be moved from group to group via a simple “drag ‘n drop”. Thousands of jobs were replaced by wildcarded or “group” reorgs that automatically and dynamically handle the whole system.

Provided with basic parameters and policies, NGT schedules and automatically runs reorgs as necessary. The utilities ‘learn’ what works and what doesn’t. NGT ‘remembers’ what has been executed and monitors process effectiveness, using this data to automatically adjust executing processes. This assures that reorgs are done in the most efficient, rapid and optimal manner. Required reorg jobs went from tens of thousands to between five and six thousand. There are now far fewer exceptions requiring overrides: where a standard job once ran with thousands of overrides, they now have fewer than 100. Instead of delaying until the weekend, reorgs are now run as often as needed or wished.

NGT Utilities provide full database availability, improved application performance and fully automated data management housekeeping. Dataset allocation automatically adjusts to fit business policies, saving time and storage space. One unforeseen benefit: built-in integrity checks revealed database issues involving thousands of corrupted indexes. The corruption had been masked by an automatic rebuild process in their existing solutions.

Prior to NGT, they were able to manage only about 20% of their system because of its size and complexity. NGT enables them to run approximately six times as many reorgs at reduced cost while automatically managing the entire system. By being able to reorg throughout the week, they maintain good performance and save about 10% overall.

Advice

Our engineer was impressed with NGT’s ease of use. He cautioned that, unlike traditional tools, it doesn’t generate a lot of real-time status reporting, and experienced DB staff may have difficulty adjusting to the lack of in-process reports. Not to worry: it automatically does a lot more behind the scenes, and time plus detailed post-reorg reports do much to eliminate that concern.

For example, upon completing a reorg, NGT provides a very detailed activity report and error report summary. The summary provides useful insight into what was done in the reorg, which serves as a basis for further action. Finally, it is possible to insert exit points into the utilities to customize the response to specific situations. For example, NGT can be coded to alter a table entry in response to a fault or error instead of generating a trouble ticket.

Conclusion

The senior staff engineer found the potential for continuous, automated reorganization invaluable. NGT is an incredibly powerful solution, able to complete reorgs in much less time and without interrupting operations. He liked how it learns and improves its effectiveness with each reorg. He found it highly useful and reliable in operation.

The volume of post-reorg data can be intimidating. But, as stated before, the powerful NGT utilities more than compensate for this. They do their job automatically and well, providing insights into significant hidden or previously unknown problem areas, all without ever taking the database offline. An excellent solution to the major challenges faced by this financial services firm.

Monday, April 25, 2016

Compuware boosts developer productivity by making it “Easy to Go Fast!”

By Rich Ptak


Compuware’s campaign for Mainstreaming the Mainframe continues with another partnership and innovative product improvements that provide Agility without Compromise. Earlier this year, we discussed Compuware’s commitment to make DevOps pervasive across an integrated enterprise (mainframe and distributed) computing environment. We described their design for a blended ecosystem in which both development and operations staff intimately function in a completely integrated and complementary manner.

Figure 1 (below) is a simplified, graphical view of Compuware’s vision.

Figure 1: Enabling DevOps across the Enterprise

Last quarter, in two articles[1], we discussed Compuware’s acquisition of ISPW as well as its partnerships, which include technology integrations with Atlassian Jira, SonarSource, Splunk and AppDynamics. This quarter we focus on another partnership, with CorreLog, and on extensions to Topaz and ISPW that address one of the more frustrating of software developers’ tasks: managing application source code.

The Issue

Whatever the platform, the task of keeping applications up-to-date is a fact-of-life. For mainframe source code, often decades old and lacking reliable records of maintenance, changes, or code updates, it can be a nightmare. Release management is critical. Even experienced staff can find it extremely difficult.

Existing mainframe code management tools were designed for a different era and model of coding. They are hopelessly outdated and inadequate for today’s world of agile development and rapid programming styles. Code management needs to be easier and more streamlined. Much of the needed functionality, which makes management quicker and easier, already exists in distributed-environment tools that collect and present data.

As Compuware optimized its own development and operations processes, it was identifying the best possible tools for its own use, with an eye to also supplying them to customers. ISPW was just what it was looking for. Compuware purchased the company and began integrating ISPW into Topaz.

Combining Topaz with ISPW = Simplicity + Elegance

Compuware’s goal is to meet an identified demand for a contemporary DevOps development environment on the mainframe architecture. Their “Mainstreaming the Mainframe” strategy focuses on creating a blended ecosystem with tools attractive to both mainframers and experienced distributed systems developers who are not mainframe literate.

By combining the latest source-code management technologies from the distributed world with Topaz, they are building a common culture that benefits both environments. Experienced mainframers get easier access to the latest DevOps technology and are exposed to new features that improve productivity. Especially interesting to Compuware is the opportunity to expose mainframers to the data aggregation and display capabilities that make “it easy to go fast” in application code development, debugging and change management. Here are more examples of how the combination works:

o   ISPW integration with Topaz provides a common look and feel. Single-click access allows developers to leverage the automated capabilities and visualization strengths of Topaz for Program Analysis. A direct link to Topaz Workbench when compiling speeds the analysis of compile errors.

o   Code management is significantly easier, faster and less error-prone by using automation and visualization techniques for working on existing programs, code, copybooks, etc. to do error analysis, debugging and updating. Automatic display (with the ability to edit) of offending or suspect code along with associated error codes on a single screen means no longer having to manually search through multiple screens or pages of printouts. The result is much reduced time spent on error analysis and correction.

o   Topaz and ISPW work together to supply data useful in managing and performing source code updates and changes. Visual displays of the complete lifecycle of a project make it easy to manage the process, maintaining a dynamic, visual record of who is working on what code, code status (edit, changes, testing, approved), etc.

o   The Impact Analysis feature within ISPW generates views of copybook, job and code dependencies, links and interactions, speeding understanding of program flows while reducing the risks of conflicts or disruptive changes. The depth (number of hops) of interactions included in the graphic display is adjustable by the developer.

o   Developers can work independently while keeping track of who else is working on the code and what they are doing. Automatically generated side-by-side visual displays of edited code make comparisons easy for error correction, edits, merges and updates.

o   A mobile interface makes it easier to get change approval for emergency code fixes. 

There is more to the announcement, including enhancements in log collection and analysis due to the CorreLog SIEM Agent integration. This significantly strengthens log analysis and reporting, using a standardized interface to get application-level data from mainframe application auditing solutions. In addition to its own analysis, messaging and alerting, CorreLog can feed the 3270 log codes to all major SIEM solutions. This allows creation of a single view of risk, security, incidents and events across platforms.

The Final Word

The overall takeaway is that Compuware continues to impress with their focus and progress. For six consecutive quarters they have delivered significant advances and improvements in products and solutions aimed at Mainstreaming the Mainframe.  

They have identified an obviously interested market segment: enterprises that see opportunities and advantages in the mainframe but want to provide their next generation of developers with programming tools that have familiar interfaces. Also interested are enterprises looking to integrate data centers with standardized management and operations solutions across platforms. Finally, Compuware can serve data center managers and mainframers looking to modernize mainframe operations, who are moving beyond Linux-Java to power agile development with DevOps solutions.

Compuware has accurately identified its market. In working closely with these customers, they are also uncovering unexpected opportunities to resolve weaknesses and customer discontent with other infrastructure tools. So far, Compuware has successfully been able to capitalize on such opportunities. We bet that they’ll continue to do so over coming quarters.




[1] See “Mainstreaming the Mainframe..” and “On implementing Compuware’s pioneering mainframe strategy” at: http://www.ptakassociates.com/content/