Pages

Monday, January 16, 2017

Compuware Topaz for Total Test automates COBOL code testing + acquisitions + product enhancements!


By Rich Ptak

In January 2017, Compuware marked yet another quarter of delivering on its promises to provide solutions and services to “Mainstream the Mainframe.” This time the news includes automated COBOL code testing, four acquisitions in 12 months and other product enhancements. Let’s get started.
Billions of lines of COBOL-based programs are the operational heart of computer data centers worldwide. For well over 50 years, COBOL programs have continued to be used for a variety of reasons. The primary reason is simple: they work. The adage “if it ain’t broke, don’t fix it” could have been written exclusively about these programs.
Web, mobile and distributed applications often leverage COBOL programs on the back end. As such, in today’s rapidly evolving, high-volume computing environment, companies must be able to rapidly implement COBOL code updates and changes to stay digitally competitive. Such changes, however, risk introducing serious errors and bugs, which are notoriously difficult to discover and can be even more difficult to correct. Testing is required to uncover such errors, or to avoid introducing them at all.
Creating mainframe unit tests has been a labor- and time-intensive task, as tests are manually designed, developed and custom-tailored to each program. Making things more difficult is the frequent lack of program documentation, even as those with expertise and deep program knowledge leave the workforce.
Changing and updating mainframe COBOL programs remains an intimidating bottleneck; a task to be avoided if at all possible. This is untenable in today’s digital enterprise, where speedy adaptation to changing circumstances is a fundamental requirement for the survival of computer-driven services, let alone their ongoing success.
Until now, no vendor had attempted to comprehensively attack the challenge of mainframe unit test creation, let alone bring automated Java-like unit testing to the world of COBOL applications. But once again, Compuware steps up to provide an effective and solid solution in the form of Compuware Topaz for Total Test.

First, a little context

Over the last two years, Compuware has introduced solutions that address multiple long-standing application lifecycle challenges in mainframe operations. These include:
  1. Intuitive visual analysis of even extremely complex and poorly documented mainframe programs and data structures (Topaz for Program Analysis and Topaz for Enterprise Data). 
  2. Real-time quality control and error detection of mainframe coding syntax (Topaz integration with SonarSource).
  3. Agile cross-platform source code management and release automation (ISPW and integration with XebiaLabs).

Compuware’s newest offering will resolve some important issues currently handicapping unit testing of mainframe code through comprehensive automation of critical tasks. Let’s review what they just introduced.

Topaz for Total Test = Automated Mainframe Unit Test Creation and Execution

By automating the process of unit test creation, Compuware’s Topaz for Total Test transforms mainframe COBOL application development and testing. It does so without requiring any code changes to a COBOL program, while automatically creating and running tests on logical units of code. Developers at all skill levels can now perform unit testing of COBOL code much as it is done for other programming languages (Java, PHP, etc.).
Compuware goes beyond distributed tool capabilities by automating the collection of additional data that can be used in multiple ways. The data is preserved with the unit test and can be used to validate code changes. This approach allows the test data to travel with the test case, making it easier to execute test cases on different systems. Developers can collect and save data stubs of existing input data and edit them to test specific sections of code.
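To make the comparison concrete, here is a minimal sketch, in Python with pytest-style tests, of the xUnit pattern the article invokes. The function under test and the stubbed record are hypothetical stand-ins for a logical unit of COBOL code and a saved data stub; this is not Compuware’s actual artifact format.

```python
# A minimal sketch (Python, pytest-style) of the xUnit pattern referenced
# above. The function under test and the stubbed record are hypothetical
# stand-ins for a logical unit of COBOL code and a saved Total Test data
# stub; this is NOT Compuware's actual artifact format.

def calculate_discount(order):
    """Stand-in for a logical unit of code under test."""
    if order["amount"] > 1000:
        return round(order["amount"] * 0.05, 2)
    return 0.0

def test_discount_applied_over_threshold():
    # The "data stub": captured input data saved with the test case,
    # editable to exercise one specific section of code.
    stubbed_order = {"customer": "A1", "amount": 1200.00}
    assert calculate_discount(stubbed_order) == 60.00

def test_no_discount_at_or_below_threshold():
    stubbed_order = {"customer": "A2", "amount": 1000.00}
    assert calculate_discount(stubbed_order) == 0.0
```

The pattern is the point: each test is small, self-contained and repeatable, and the input data travels with the test case.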
Topaz for Total Test, as part of the Topaz suite, can be used with other elements to provide a comprehensive solution for dev/test operations. Here is a closer look at how Topaz for Total Test automates many of the steps in unit test creation and execution:
  • Xpediter gathers test data, call parameters and program results,
  • Topaz for Total Test creates the complete test case (fully automated),
  • Topaz for Total Test generates data stubs and program stubs (fully automated),
  • The unit test uses the data stub created by Topaz for Total Test (fully automated),
  • Topaz for Total Test allows easy on/off use of stubs – no re-compilation required (fully automated),
  • Topaz for Total Test automatically cleans up after tests,
  • Topaz for Total Test adds unit tests into a test scenario (fully automated),
  • The continuous build process uses the CLI to run the test suite (sketched below),
  • Topaz for Total Test executes the test suite (automatically).
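As a concrete illustration of the last two steps, here is a hypothetical sketch of a CI job invoking a test-suite CLI. The executable name and flags below are invented for illustration only; consult Compuware’s documentation for the actual Total Test CLI syntax.

```python
# Hypothetical sketch of the "continuous build uses CLI" step above. The
# executable name and flags are invented for illustration; consult
# Compuware's documentation for the actual Total Test CLI syntax.
import subprocess
import sys

def run_test_suite(suite_path: str, host: str) -> int:
    """Invoke a test-suite CLI from a CI job and propagate its exit code."""
    cmd = [
        "totaltest-cli",          # hypothetical executable name
        "--suite", suite_path,    # test scenario/suite to execute
        "--host", host,           # target mainframe environment
        "--report", "junit.xml",  # machine-readable results for the CI server
    ]
    return subprocess.run(cmd).returncode

if __name__ == "__main__":
    # A failing suite fails the build, surfacing problems early.
    sys.exit(run_test_suite("tests/payroll.testscenario", "mf-test01"))
```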
Benefits realized by mainframe IT organizations include accelerated development processes and reduced time, effort and resources needed to create and run tests, making it easier to update and change mainframe code. Overall operations efficiency improves as well, because potential problems are identified and addressed at the earliest possible time in development.
Among Topaz for Total Test’s unique features and capabilities are program stubs, which allow the main program to be isolated from its sub-program calls and allow sub-programs to be tested independently of the main program. Together these capabilities enable developers to split the testing of a large program into testing a set of smaller programs.
In effect, Topaz for Total Test reduces the complexity of doing good testing by focusing on small parts of the program. The solution is useful to developers at all skill levels. Its ease of use and significant automation improve efficiency (faster identification and resolution of test failures), speed execution and development times, and provide centralized control of testing.
There is much more to the product than we cover here. Compuware has plans for further enhancements, extensions and integrations to be delivered on a quarterly basis. Given their track record of performance, we expect they will delight their customers. If you have a significant amount of mainframe code in your shop, it makes good sense to check out Topaz for Total Test. 

Other items in the announcement

For their 4th acquisition in the last 12 months, Compuware acquired MVS Solutions with its popular ThruPut Manager, which automatically and intelligently optimizes the processing of batch jobs. ThruPut Manager:  
  • Provides immediate, intuitive insight into batch processing that even inexperienced operators can readily understand,
  • Makes it easy to prioritize batch processing based on business-based policies and goals,
  • Ensures proper batch execution by verifying that jobs have all the resources they need and proactively managing resource contention between jobs,
  • Dramatically reduces customers’ IBM Monthly Licensing Charges (MLC) by minimizing the rolling four-hour average (R4HA) processing peaks without counter-productive “soft-capping.”
As part of their third acquisition in 2016, Compuware added Standardware’s COPE IMS virtualization technology to its portfolio. With COPE, enterprises can rapidly deploy multiple virtual IMS environments to as many different active projects as they require without having to create costly new IMS instances or engage professionals with specialized technical skill-sets. As a result, even less experienced mainframe staff can perform IMS-related Dev/Ops tasks faster and at a lower cost. In addition, integration with Compuware Xpediter permits debugging within COPE environments.
Finally, Compuware announced updates such as graphical visualization of IMS DBDs in Topaz Workbench. The tool presents the structure of IMS databases at a glance and eliminates the need to pore over IMS configuration files to find this information. In addition, a new Strobe Insight Report compares the last execution statistics with the average execution statistics. The data is visualized in an interactive scatter chart based on collected SMF 30 data. With such visualization, analysts are able to quickly identify jobs that have exceeded their norms by a user-specified percentage and then take the appropriate action. The tabular portion of the report compares and contrasts the average CPU, elapsed time and EXCP count with the last values collected.
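For a sense of the comparison the Strobe Insight Report automates, here is a small Python sketch that flags jobs whose last run exceeded their historical average CPU by a user-specified percentage. The job records and field names are illustrative, not actual SMF 30 record layouts.

```python
# A small sketch of the comparison described above: flag jobs whose last
# execution exceeded their historical average by a user-specified
# percentage. Field names are illustrative, not actual SMF 30 layouts.

jobs = [
    # job name, average CPU seconds, last CPU seconds
    {"job": "PAYROLL1", "avg_cpu": 120.0, "last_cpu": 124.0},
    {"job": "BILLING2", "avg_cpu": 300.0, "last_cpu": 420.0},
    {"job": "REPORTS3", "avg_cpu": 45.0,  "last_cpu": 46.5},
]

def exceeded_norm(job, threshold_pct):
    """True if the last run exceeded the average by more than threshold_pct."""
    return job["last_cpu"] > job["avg_cpu"] * (1 + threshold_pct / 100)

threshold = 10  # user-specified percentage
for job in jobs:
    if exceeded_norm(job, threshold):
        print(f"{job['job']}: last CPU {job['last_cpu']}s exceeds "
              f"average {job['avg_cpu']}s by more than {threshold}%")
# Only BILLING2 (40% over its average) is flagged at a 10% threshold.
```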

The Final Word

With the announcement of Compuware Topaz for Total Test, the company has provided a significant advance in mainstreaming the mainframe. The digital agility of any enterprise is constrained by its least agile code base. By eliminating a long-standing constraint to COBOL agility, Compuware provides enterprise IT the ability to deliver more digital capabilities to the business at greater speed and with less risk.
The January announcement marks Compuware’s 9th consecutive quarter of delivering significant solutions that solidify mainstream positioning while benefiting mainframe development and operations. We have commented on virtually every announcement, and have to admit we have been impressed each time.
The steady stream of substantive improvements and additions has allowed Compuware to establish a strong market position. Their delivery of effective, innovative solutions solidly enhances their reputation for successfully resolving significant problems that have hampered mainframe operations.
Congratulations to them. Check them out and see if you don’t agree with us. 

Wednesday, January 11, 2017

Dimension Data: Workspaces of the Future!

By Bill Moran and Rich Ptak

Dimension Data has been a successful international presence for a number of years, though with less visibility in the US and North American markets. Founded in South Africa in 1983, the company was acquired by NTT in 2010. Their current revenue exceeds $7.5 billion, demonstrating strong, consistent growth that has continued after joining NTT.

We focus on the end-user solutions covered in our briefing with them. We do encourage you to visit their website[1] to view their full range of offerings; this Wikipedia article details their history[2].

First, we must compliment the quality of Dimension Data’s marketing and advertising. Normally, we don’t comment on this aspect of operations. However, we found their recent advertisements in The Economist magazine to be noteworthy, and so include one here. Such creativity will help them capture the attention of the US and North American markets.
For several years post-acquisition, NTT wisely kept the management team in place. This, combined with the financial strength and presence resulting from the association with NTT, facilitated expanding their marketplace positioning. As part of a larger entity, Dimension Data got more exposure, as well as adding to existing, proven customer confidence in its ability to deliver.

The Dimension Data Advantage

Dimension Data recently briefed us on their End-user Computing (EUC) strategy and offerings. Fully understanding their offerings requires knowledge of their vision of how digital transformation is changing companies and affecting various company stakeholders. So, we will first examine some of the relevant trends and resulting pressures.

Organizations are under significant pressure to cut costs. In response, some are reducing office space use by individual employees. In many US companies, this trend is implemented by encouraging employee home offices. This significantly affects the infrastructure and technology needed by the company. In other cases, companies are implementing changes to the working environment to attract and retain the best talent and remain competitive. Again, these changes will impact workspace design, communications, digital infrastructure, cloud (especially hybrid), data and information storage, (cyber)security and accessibility.  

Accompanying these macro trends are others specifically related to transformations that accompany the move to an increasingly Digital world. Some well-known, some not. Dimension Data has identified a number of these, which include:
  • Artificial Intelligence & Machine Learning
  • Internet of things
  • Virtual and Augmented Reality
  • Robotics
  • Digital Technology Platforms
  • Cloud, specifically hybrid Cloud
  • Big Data & the tools to analyze it

For many customers, simply identifying and installing the correct technology is insufficient. Some are ill-equipped to cope with the new trends. Many risk being overwhelmed by the challenges facing them in the new digital environment. Others are incapable of, or uninterested in, managing and operating the technology.
                   Figure 1 Workspace for Tomorrow
That is precisely the entry point Dimension Data has identified as its opportunity to stand out and outshine the competition. Dimension Data steps in with the ability to deliver a consultative workshop engagement specifically designed to help clients develop a plan to move their workspaces smoothly to the next level.
Dimension Data is able to provide both an overall architecture and a framework adaptable to fit the specific needs of any organization. Dimension Data is focused on enabling “Workspaces for Tomorrow” (Figure 1 – at right). This provides the basis for implementation and delivery of a comprehensive suite of workspace services to design, implement, maintain and even manage workspaces.
Dimension Data has a unique offering consisting of a complete set of managed services to help customers design “digital workspaces to embrace the way employees live, work, and collaborate.” Further, they “help organisations seamlessly unify the physical and virtual world into a digital experience.”
Let’s look specifically at Microsoft technology services. Dimension Data builds its expertise in this area on recently acquired, Canada-based Ceryx, which specialized in helping customers install and manage email services. Under Dimension Data’s auspices, Ceryx is broadening its offerings to provide ‘Managed Cloud Services for Microsoft’, which include all of Office 365, Skype for Business and Microsoft Cloud, as well as other Microsoft products. As a North American company, Ceryx additionally benefits Dimension Data with increased visibility in Canadian and US markets. The End-user Computing suite of Workspace services includes Workspace Mobility, Workspace Productivity Consulting Services and Software Services.

The Final Word

End-user Computing spearheads the “Workspaces for Tomorrow” effort within Dimension Data. Existing business units in networking, security, datacenter, collaboration, customer experience and services all support cross-selling. With strategic partnerships with both Microsoft and VMware and the support of the NTT Group, Dimension Data is the engagement leader for outcome-based services to enable “Workspaces for Tomorrow”. All this combines to provide an impressive array of experience, expertise and product.
Dimension Data has an impressive reference list of worldwide customers for end user computing, including well known banks, oil & gas companies, automotive manufacturers, etc. We believe that a significant opportunity for growth exists for them in the US and North American markets. We highly recommend investigating what they have to offer. We think that there is a very good chance that they just might turn out to be your best partner for your modernization efforts.

Monday, December 19, 2016

IBM Systems - year-end review

By Rich Ptak


It’s been a busy year for IBM, what with transitioning, proliferating use cases for Cognitive Solutions, the rapid buildup of their Cloud platform ecosystem, announcements of a series of innovative industry-specific solutions and projects, LinuxONE mainframe activities, the OpenCAPI initiative, etc. You might expect them to slow down a bit.

Nothing even resembling that appears to be in the cards. IBM CEO Ginni Rometty led off the week announcing that IBM will hire 25,000 new US employees while investing $1 billion in employee training and development over the next four years. And, she promises the focus will be on “new collar” hires, i.e. skills-based, rather than focusing simply on “higher education” requirements. The needed skills, such as cloud computing technicians and service delivery specialists, are acquired through vocational training or on the job. IBM collaborated in the curriculum design and implementation of schools to do the training. You can learn more about these Pathways in Technology Early College High Schools (P-TECH) here[1]. This spending is in addition to the $6 billion spent annually on R&D projects. This is hardly the behavior of a company unsure about its future. But, I digress.

In the same week, Tom Rosamilia, Senior Vice President, IBM Systems held his annual review on the status of IBM Systems. Taking place prior to year-end results reporting, it was light on financial details. But, the broad strokes presented a company that was seeing both progress and positive returns.  In a turbulent and rapidly evolving business environment, IBM had embarked upon a bet-the-business strategy of transformation and innovation in technology, business models and skills to address the challenges of the evolving era of cognitive computing.

IBM committed to the cognitive era before it was fully formed and clearly established as a viable market. Early hallmarks of the changing environment were the growth of cloud-based and Infrastructure-as-a-Service enterprise computing. Rosamilia quoted industry research estimating that by 2022, 40% of all compute capacity would be provided through service providers, and some 70% would be hybrid (a combination of on- and off-premises infrastructure). Such configurations meant IBM’s systems infrastructure-based revenue streams would need altered delivery and service models to grow.

In response, IBM Systems altered its operating model to focus on three main areas:
  1. Cognitive Solutions – through partnerships and initiatives that grow the ecosystem and increase use cases, scaling up performance and making it easier to leverage the technology.
  2. Cloud platform – building out the ecosystem, increasing accessibility by building out available services and tools and expanding utility with easy access to the newest technologies.
  3. Industry Solutions – focus on providing servers, services, platforms along with innovative solutions optimized and targeted to resolve industry-specific challenges.

In 2016, the systems product portfolio was divided into 3 areas: 
  1. The Power chip-based systems targeting the market with Open Computing (e.g. Open Compute Project), Power LC servers with NVIDIA NVLink™ for specific workload models, the OpenCAPI[2] Consortium (dedicated to developing a standard that allows cross-vendor access to CAPI acceleration), and PowerAI – an IBM partnership with NVIDIA that enables simplified access and installation of platforms for deep learning and HPC efforts;
  2. the zSystems with LinuxONE for hybrid clouds, high security z13s with encryption for hybrid clouds, Apache Spark support on z/OS and (the blockbuster) secure blockchain services on LinuxONE available via Bluemix or on-premise; and finally,
  3. IBM lit up the Storage/SDI (software-defined infrastructure) markets with all-flash arrays available across their complete portfolio and a complete suite of software-defined storage solutions. There is plenty more coming with the IBM Cloud Object Storage solution, cloud support for IBM Spectrum Virtualize and DeepFlash ESS. We don’t cover these areas, so we won’t comment further.

IBM will continue to stir things up as they expand and enhance deliverables in these areas in 2017. There is a special focus on Cognitive Computing, where speed of data access and computational power are critical. Power systems with CAPI are specifically designed for high-speed, computationally dense computing. They benefit from partner-developed accelerators. Cost is a major issue in high-performance analytics and computing; Power systems with CAPI and accelerators offer significant price/performance advantages over general-purpose systems.

Blockchain has been a major news item this past year. Use cases are proliferating as understanding of the technology and its accessibility grow, both significantly benefiting from IBM’s offerings of easy, flexible access to the technology, as well as training, some of it free. Financial, healthcare[3] and business use cases for blockchain in secure networks are proliferating. IBM is, and will continue to be, a major booster and contributor in the spread of this technology. IBM is offering secure blockchain cloud services on-premise or via the Bluemix cloud with either LinuxONE or Power LC systems.

Rosamilia discussed a number of activities underway with clients and customers expected to deliver in 2017. These include collaboration with Google and Rackspace using a Zaius server running the yet-to-be generally available POWER9 chip and OpenCAPI to deliver 10x faster server performance on an “advanced, accelerated computing platform…(delivering an)…open platform, open interfaces, open compute”. There is the blockchain-based High Security Business Network (HSBN), with existing and potential application across a range of business functions. These include securities transactions, supply chain, retail banking, syndicated loans, digital property management, etc.
Tom Rosamilia described how Walmart uses blockchain to guarantee the integrity of food from farm to consumer. Sensors packaged with farm products are tracked from farm to final consumer purchase to assure environmental conditions (e.g. temperature exposure) have been maintained. More examples are available (see our recent blog[4] on blockchain).

The session concluded with the list of technologies where IBM is investing. These include POWER9, LinuxONE, zNext (the next generation mainframe!), all-flash systems, next generation computing (beyond silicon), open ecosystems, blockchain and (presumably LARGE) object storage. We believe that their bets were well-placed. There is still much to be done, but it appears to us that Ginny Rometty and IBM will keep every one of those 25,000 new hires very, very busy.


[2] CAPI (Coherent Accelerator Processor Interface) was developed by IBM and initially available only on Power systems. It resolves increasingly serious data access and transfer bottlenecks to improve system performance by an order of magnitude.

Friday, December 9, 2016

OpenPOWER, Blockchain and more on IBM’s workload-driven infrastructure to “Outthink the status quo!”

By Rich Ptak

At and after IBM’s 2016 Edge event (which had over 5,500 attendees), IBM has been spreading the word about, and providing the experience of, how IBM’s app-, service- and workload-driven infrastructure enables users to “Outthink the status quo” to drive their success. Combining agile infrastructure with enterprise digitization means pushing (if not demolishing) operational boundaries and creating business models that yield previously inconceivable solutions and capabilities to overcome challenges previously viewed as intractable or unresolvable.

Enterprises ranging from the very largest to start-ups are succeeding in innovative application of IBM and IBM-partner provided infrastructure that fully leverages cloud, cognitive and system technologies. Edge 2016 was a head-spinning event with lots of technology and technical detail, but also with benefits and operational advantages presented in terms highlighting positive enterprise impact. IBM used the event to first impress, then inspire customers to act to exceed their own expectations of what was possible. Here’s some of what impressed us.
It’s a platform view
Infrastructure remains vitally important as a means for accessing and leveraging the power of technology. Meeting the performance requirements of cutting-edge solutions (e.g. autonomous vehicles), as well as day-to-day apps (e.g. 3-D printing of medical prosthetics), is no easy task. Doing so requires infrastructure that functions as an integrated platform combining elements from multiple sources. All of these must work together to transparently deliver the data storage capacity, access (CAPI accelerators) and processing speeds to match operational and computing demands. New capabilities in DevOps are transforming the developer’s ability to access, exploit and manage the infrastructure in innovative ways, even as they change how this is done by allowing educated users to define custom services themselves.
A major message from the event detailed how enterprises – research-driven, market-driven, educational, small and large – and even individuals are accomplishing things that were previously unimagined, even unimaginable. This is possible not simply because of the power of the technology, but also because of the increased, often cloud-based accessibility of the technology, along with UIs that (relatively) simplify app creation.
Now, infrastructure provides a platform, on-premise or in a cloud, that uses elements from multiple suppliers, working transparently to the developer/user and, if necessary, across multiple platforms (mobile, cloud, server, etc.) to create a product, deliver a service or perform a function. The developer, researcher or other user faces a constantly evolving, highly competitive world. A successful product/service must be able to quickly take advantage of emerging technology changes. Infrastructure platforms allow that to happen. Also, typically, a service, app or product available across multiple infrastructure platforms has a competitive advantage. IBM is committed to delivering, as partners and customers substantiate, the products, whether hardware (mainframe, Power Systems, etc.), software or services, with the required flexibility.
Taking on big challenges, succeeding by outthinking the status quo
IBM noted a trend and called out a challenge to attendees. A major theme in keynote speeches, presentations and on the show floor was an emphasis on large enterprises, as well as smaller companies and individuals, tackling big challenges. There remains plenty of tech talk and technology detail, but the focus was on the potential of today’s systems, applications and services. Recognition of that potential inspired users (enterprise and individual) to take on significant challenges in personal life, society, medicine, scientific research, etc.

These can be about on-line dating/matchmaking (PlentyOfFish), or gaming (Pokémon Go), which just happened to double Nintendo’s capitalization to $42B in 10 days, or radically expanding access to mobile banking services for a previously grossly underserved market of transnational workers across East and West Africa.

Or, they can be a REALLY big problem, i.e. solving world hunger, curing cancer, guiding autonomous cars or solving the digital trust problem with a radically secure, peer-to-peer distributed ledger (Hyperledger). Hyperledger is an open source project from the Linux Foundation designed to enable the next generation of transactional applications by automating trust, accountability and transparency using blockchain technology. IBM makes the technology freely and easily accessible and usable to developers, and available as a for-fee, highly secure turnkey service via its Bluemix cloud platform. (See IBM Blockchain.) Blockchain promises to have major impact in multiple markets, from contract management (outsourcing) to financial transactions (cross-bank foreign exchange, Letters of Credit) to provenance documentation for anything from agricultural products (farm-to-fork) to drugs and medical devices.
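To ground the distributed-ledger idea, here is a toy Python sketch of the hash-chaining at the heart of blockchain: each block records a hash of its predecessor, so tampering with any recorded transaction invalidates every later block. Real Hyperledger adds consensus, membership services and smart contracts on top of this; the example is purely conceptual.

```python
# Toy illustration of the blockchain idea: each block carries a hash of
# its predecessor, so altering any recorded transaction breaks the chain.
import hashlib
import json

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, transactions: list) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def verify(chain: list) -> bool:
    """Recompute each link; any altered block breaks the chain after it."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

ledger: list = []
append_block(ledger, [{"from": "farm", "to": "distributor", "item": "produce"}])
append_block(ledger, [{"from": "distributor", "to": "retailer", "item": "produce"}])
print(verify(ledger))                       # True
ledger[0]["transactions"][0]["item"] = "x"  # tamper with recorded history
print(verify(ledger))                       # False
```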

IBM Blockchain activities have exploded since its early 2016 announcement. Today, they provide access to Blockchain development services and support at centers worldwide. Blockchain Bluemix garages are in New York, London, Singapore, Toronto and Tokyo, while Bluemix technology incubators are in Nice, Melbourne and San Francisco. Each week, it seems, new market segments and use cases for Blockchain emerge, even as vendor offerings and services expand. IBM has clearly positioned itself to benefit as a result.

The net is that IBM is betting its business on providing broad access to the infrastructure products and services that will drive the next generation of innovation and technology-powered advancements. They are focused on the infrastructure in terms of cloud, mobile, IoT and cognitive computing, and targeting markets with solutions. But they are also investing in accessing and applying new and emerging technologies. They are building communities and ecosystems for cooperative innovation, providing services and fabric to make it easier for these communities to leverage each other and the technologies. IBM challenged all attendees to stretch their imaginations and outthink the status quo in applying technology in both their professional and personal lives. They invited those that did to return to Las Vegas to tell their stories at Edge 2017.
The Products
Edge without products just wouldn’t be right. IBM titillated the chip community last summer with hints about POWER9, which is due in late 2017. To satisfy immediate demand, IBM announced a new line of OpenPOWER LC models optimized for specific market segments and styles of computing. Models included:
  1. An entry level model, the IBM Power System S812LC for customers just starting with Big Data.
  2. IBM Power System S821LC, a 1U form factor with 2 POWER8 processors.
  3. IBM Power System S822LC for Big Data. 
  4. IBM Power System S822LC for Commercial Computing.
  5. IBM Power System S822LC for High Performance Computing, the latest POWER8 with NVLink – a high-speed link between the CPU and onboard GPUs.

IBM’s Power Systems strategy continues to focus on chip advancements combined with design for specific computing models/markets and use of partner-developed accelerators and devices for additional performance enhancements.

We discuss these in our blog available at www.ptakassociates.blogspot.com. IBM also discussed a range of new applications of Watson and Watson Analytics, along with programs to provide easy, affordable access to these capabilities for developers, students and researchers. Multiple plans are available (starting at $30/month), as well as a free introductory offer that packages access to databases and analytics. See https://www.ibm.com/marketplace/cloud/watson-analytics/ for details.

The mainframe continues to make its mark as IBM comes up with new ways to provide more power, speed and versatility without raising hardware prices.
Before we finish
There is a view that the age of the small, upstart entrepreneur is over. This was the focus of the September 17th-23rd issue of The Economist. A special report about the world’s most powerful companies explains “why the age of entrepreneurialism ushered in by…Thatcher and Reagan…is giving way to an age of corporate consolidation.” The article asserts:
  1. Large corporations dominate growth, market capitalization, revenues and profits globally. They (alone) have the cash, talent and savvy to maintain these positions.
  2. Entrepreneurial endeavors are declining.
  3. Emerging entrepreneurs opt for quick buy-outs by “superstar firms” over IPOs. 
  4. Despite a growing backlash, the superstar firms successfully lobby EU and national politicians for favorable treatment (corporatism).
  5. Technology and infrastructure trends, e.g. IoT, favor the superstars.
Statistics, charts and considerable consulting firm cogitation back up these opinions.
We believe the conclusions are too pessimistic. A significant number of roadblocks to progress and investment in start-ups and small firms have been in place, and significantly added to, over the last decade. These range from the economic (protectionism, unstable markets, inflation) to escalating governmental regulation (excessive mandates, quixotic regulation, direct interference) to lack of investor confidence. But the environment is changing.
This is partially due to shifts in attitudes, and partially to tectonic shifts in both the political and societal environments currently underway in multiple countries. It is also due to the rapid evolution of new technologies, the creative application of which is easier and more widespread than ever before. There is an increasing emphasis by existing and emerging technology leaders on making it easier to leverage and constructively apply technology. IBM has been a leader by providing easy, low-cost access for operational, entrepreneurial and sandbox activities with Blockchain, Cognitive Computing, Watson Analytics, OpenPOWER Systems, Bluemix and cloud technologies.
What’s the message?  
As described by Tom Rosamilia, IBM Systems SVP, what was once a discussion about technology is now a conversation about business. More precisely, the discussion is about solving a problem, whether in business, finance, design or discovery, or about implementing a previously inconceivable or unimaginable service.

IBM’s message emphasized unleashing the underlying, deep competitive drive unique to humans: directing efforts to outthink the competition (in whatever form), to pursue solutions to the really difficult problems, to innovatively define and deliver new services, and to identify and pursue radical opportunities.

IBM’s customers are pursuing all of these. In the process they clearly demonstrated that IBM’s goods and services are playing a leading role in unleashing a veritable tsunami of innovation and creativity to resolve all sorts of business, enterprise and even societal problems. The stories told and exhibits at Edge 2016 did a lot to confirm our impressions. We think Edge 2017 is going to be even more enlightening.

Thursday, December 1, 2016

HPE IoT Solutions ease management, reduce risk and cut operating costs

By Bill Moran and Rich Ptak

HPE has invested heavily to deliver IoT solutions. They are well aware of the challenges faced by customers attempting large-scale IoT deployments. Thus, our interest in commenting on the November 30th London Discover announcements of new enhancements to their IoT portfolio. First, a quick overview of the IoT world.

IoT is driven by the belief that enhancing the intelligence gathering (and frequently processing) capabilities of new and existing devices, coupled with centralized control, will yield personal and/or business benefits. The amount of intelligence (for decision-making, etc.) necessary at the edge of the network (in the device) versus the amount retained centrally (server, controller, etc.) varies widely based on the use case. For example, a device monitoring what's in front of an automobile that detects a potential crash situation should make the decision to slam on the brakes locally (in the car). The delay inherent in communicating with a remote controller (in the cloud) would be intolerable: at highway speed (100 km/h, about 28 m/s), a car travels nearly 3 meters during even a 100-millisecond round trip to the cloud.

The complexity inherent in IoT becomes even more complicated when multiple and different devices are involved. Each device type communicates in its own way. Use cases vary by task and industry. The automotive situation discussed is very different from that of a power utility monitoring thousands of meters for billing or consumption-tracking purposes.

Many more scenarios exist, and depending on scale and distribution, some will be very difficult and expensive to deploy. HPE set out to provide solutions that reduce the cost and effort required to deploy, connect and manage this wide variety of distributed devices. Here’s the result.

HPE’s IoT portfolio enhancements fall into three categories. First is the new HPE Mobile Virtual Network Enabler (MVNE), intended for mobile virtual network operators (MVNOs). It simplifies the complexity of SIM card[1] deployment and management. HPE MVNE allows the MVNO itself to set up the SIM delivery and billing service instead of buying the service from a carrier. The resulting reduction in complexity and long-term billing management reduces operator costs.

The second enhancement is the HPE Universal IoT (UIoT) Platform. The number of IoT use cases implemented over Low Power WANs (Wide Area Networks) is rapidly increasing, each with their own infrastructure, management solutions and standards. No single vendor has a management solution that works across all the differing solutions. This meant that businesses, e.g. those delivering smart city IoT deployments, had to use multiple management systems. The HPE UIoT software platform will support multiple networks (Cellular, WiFi, LoRa, etc.). Using the Lightweight M2M (LwM2M) protocol, it provides a consistent way to manage the variety of networks, as well as data models and formats.

Third are IoT-related enhancements to HPE Aruba[2] (LAN) networks. HPE introduced the Aruba ClearPass Universal Profiler, a standalone software package that permits customers to quickly see and monitor all devices connecting to wired and wireless networks, helping to satisfy network audit, security and visibility requirements. It is the ONLY such application purpose-built to identify, classify and view all devices.

Finally, the latest ArubaOS-Switch software release enhances the security features of most of the Aruba access switch family. Features include automatic tunnel creation to isolate network traffic for security devices, smart lighting and HVAC control units, and the ability to set network access control policies for IoT devices.

With these enhancements to its IoT offerings, HPE has made significant strides toward reducing the complexity and costs facing customers developing and deploying IoT solutions. These are unique tools to help customers economically create and operate an IoT-enabled world. HPE also demonstrates its clear understanding of the difficulties that customers are encountering.

It appears to us that HPE is succeeding in its efforts to provide solutions that reduce the cost and effort required to deploy, connect and manage the wide variety of distributed devices available today. We suspect that there are many who will agree.




[1] SIM cards provide the interface between a device and a cellular connection.
[2] Acquired for its wireless, campus switching and network access control solutions.

Friday, November 4, 2016

BMC Survey on the Status of the Mainframe

By Bill Moran and Rich Ptak
BMC has released the results of their 11th annual mainframe survey.  BMC partners with multiple other parties to collect data and to release the results (e.g. IBM Systems Magazine[1]). This assures they have input from a variety of sources, including non-BMC customers. The resulting expanded range of opinions increases the value of the data.

We review key results of the survey here. We may revisit the topic as additional information is made available. We commend BMC for conducting the survey. It performs a real service for the industry. Studying the results can provide significant insight into what is happening in the mainframe market.[2]

Key results

In our opinion, key conclusions from the survey are:

  1. If the “death of the mainframe” needed more debunking, this survey certainly provides it. It shows there will be no funeral services held for the mainframe anytime soon. Last year 90% of the survey respondents indicated they saw a long-term future for the mainframe. This year that number declined all the way to 89% – not a statistically significant difference (see the quick check after this list)!
  2. The general population of companies, on average, keep more than 50% of their data on the mainframe.  70% of large organizations see their mainframe capacity increasing in the next 24 months.
  3. In large (and other) enterprises, Digital business appears to be driving higher mainframe workload growth.
  4. Smaller organizations are more likely to forecast declining use of the mainframe. 
  5. In contrast, those companies that are increasing their mainframe usage take a long term view of the mainframe and its value. They tend to be more effective at leveraging the platform. They want to provide a superior customer experience, hence they modernize operations, add capacity and increase workloads. They view mainframe security and high availability as critically important differentiators in today's market marked by escalating transaction rates, data growth and rapid response times. 
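Here is the quick check promised above: a two-proportion z-test on the 90% versus 89% figures. The per-year sample size is our assumption for illustration; the survey summary does not tie these percentages to exact respondent counts.

```python
# A quick two-proportion z-test behind the "not statistically significant"
# remark. The sample size is an assumption for illustration only.
from math import sqrt, erf

n1 = n2 = 1200          # assumed respondents per year (hypothetical)
p1, p2 = 0.90, 0.89     # "long-term future" agreement, last year vs this year

p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)
se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided

print(f"z = {z:.2f}, p = {p_value:.2f}")  # z ≈ 0.80, p ≈ 0.42 — not significant
```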

Other interesting insights

Linux usage on the mainframe broke through the 50% mark this year. Its use has been growing steadily ever since the Linux initiative launched when Lou Gerstner was IBM CEO. Last year, 48% of the survey respondents said that they had Linux in production; this year the percentage rose to 52%.

BMC divides organizations into three groups:

  1. The first group representing 58% of those surveyed say that mainframe usage in their organization is increasing.
  2. The second group (23%) say that usage is steady.
  3. The third group (19%) say that usage is reducing.
We did not do an exhaustive analysis of the differences between the increasing, steady and reducing groups. However, it is worthwhile to sketch a view of some differences between the reducing usage group and the increasing usage group.

In the first place, many reducers indicate that their management believes the mainframe is outdated. This results in pressures to abandon the platform. Thus, their focus is removing workloads. This group is also more concerned about a mainframe skills shortage. Their solution to that problem is, again, to remove workloads thus reducing mainframe platform dependencies.

In contrast, managers of the group that is increasing usage do not appear to believe the mainframe is obsolete. Therefore, there is no pressure to move off the platform. In fact, they actively seek to move new work onto the mainframe. While also concerned about a mainframe skills shortage, their response is to provide internal training and invest in automation wherever possible. Neither outsourcing nor moving workloads off the platform are viewed as viable solutions to a skills shortage.
Figure 1 Top Mainframe Priorities – Chart courtesy of BMC
Next, of interest were respondent priorities. The top priorities for 2016 as identified in the survey include:
  1. Cost reduction/optimization – 65%
  2. Data privacy/compliance/security – 50%
  3. Application availability – 49%
  4. Application modernization – 41%.

Number 5 on the list, “Becoming more responsive to our business”, is not given a percentage. We estimate it (see Figure 1) at 38%. We found this somewhat surprising. With all the focus on the digital enterprise and business, we would have thought that this would be at least #3 on the list. Like we said, interesting data comes from the study.

Future Possible Questions

As a quick aside, there are many possible questions to explore. We encourage mainframers to participate in the survey. Going a step further, put your suggestions in ‘comments’ to this blog, or tweet them with the hashtag #PtakAssocMFQ. We will track the results and share them with BMC before the next annual survey.

To start things off, we have a few suggestions for future survey topics:

  1. How many projects were undertaken to move work off the mainframe? What were the results? What were the factors contributing to the project’s success or failure?
  2.  Is (and how much is) the mainframe integrated into the overall datacenter operations? Or, is it an isolated island with mostly batch methods of integration?  
  3. How many organizations are using IBM’s z/PDT, which simulates a mainframe on a PC or X86 system for development?
  4. What is the progress of DevOps modernization? This might connect to the previous point as z/PDT is Linux based and many developers prefer to use Linux tools, but also need access to mainframe data to test their applications.
Of course, we understand that there are many logistical problems in putting together a survey of this type. For example, there is a practical limit on the number of questions that one can ask. However, the answers to these would be enlightening.

Summary

The survey provides significant value to the mainframe community with insights useful to mainframe users, vendors and service suppliers. It can help any mainframe-based organization to plan and optimize for the future. It highlights ongoing community problems even as it corrects conventional “wisdom”, i.e. the mainframe is alive and well. 

Finally, the survey is a valuable tool for understanding the state of the mainframe, its user concerns, needs and priorities. BMC may want to consider extending the reach of the survey to include other organizations, i.e. user groups such as Share. In the meantime, we suggest that you visit the BMC website to discover insights that you can successfully leverage and apply in your operations. 




[1] Note this is not corporate IBM.
[2] Study details available at www.bmc.com/mainframesurvey

Thursday, November 3, 2016

Compuware delivers again! Solution innovation for eight consecutive quarters!

By Rich Ptak

With its October launch, Compuware once again successfully met its self-imposed goal of quarterly delivery of brand-new or significantly enhanced mainframe solutions. This makes 8 consecutive quarters in which they have done so. And, in each quarter, the result has been significant, ground-breaking extension or enhancement of capabilities or accessibility in areas that include mainframe DevOps, risk reduction, app development, systems management and resolution of significant challenges to smooth mainframe operations. The current announcement continues the pattern. Congratulations and kudos to Compuware. Here’s what we found interesting.

Service-based Acquisition
We commented earlier on Compuware’s acquisition of ISPW product technology and its integration with Compuware Topaz. ISPW provides comprehensive, modern functionality for Source Code Management (SCM), Release Automation (RA) and Application Deployment (AD) for both mainframe and distributed platforms as a single, integrated solution. We were enthusiastic about the move and the success of Compuware’s integration. As it turns out, we weren’t the only ones.

The prospect of having a single solution where three separate products were previously required was very attractive to over-stretched IT staffs. Combine that with tight integration to Topaz and you have a solution that is practically irresistible. Customer demand for help in moving to ISPW was so high that it motivated Compuware to make its second business acquisition in 10 months. Compuware purchased the total SCM practice, including implementation services, experienced staff and proven methodologies, from Information Technology Service Management (ITSM) firm Itegrations[1]. Compuware’s SCM Migration Services[2] simplify, speed and reduce the risk of migrating from existing vendor-supplied and homegrown systems to ISPW SCM.

Topaz additions, enhancements, extensions and integrations
In keeping with Compuware’s theme of Mainstreaming the Mainframe, the announcement included the new Compuware Topaz Connect[3] (formerly Itegrations NxBridge), which automates and simplifies cross-platform connectivity. Customers can automatically connect Compuware ISPW to various ITSM solutions including ServiceNow, BMC Remedy and Tivoli. This reduces manual processes, time and effort while making the mainframe more accessible, improving the customer experience and improving performance metrics. Recognizing that enterprises may not be able to migrate to agile ISPW SCM immediately, Topaz Connect enables CA Endevor users to access required Endevor functionality via Compuware Topaz Workbench[4], a modern Eclipse-based IDE. Through this integration, developers can perform critical activities such as adding and moving elements in the lifecycle; generating (compiling) elements; creating packages; and moving groups in the lifecycle.

In another major step towards increasing mainframe utilization, raising the accessibility to modern tools and making the move to DevOps faster and easier, Compuware is providing REST APIs, in effect “building blocks,” to be used to control and manage application deployment in both mainframe and distributed environments. The APIs for ISPW enable users to create, promote, deploy and check the status of code releases using popular Agile/DevOps tools including Jenkins, XebiaLabs XL Release, Slack and Atlassian HipChat with Webhook notification.
Additional, broader-scope APIs will be available in coming months. These will be built to leverage, work with and support open standards and open standards-based tools. For example, complementing the APIs, Compuware plans to add support for a number of popular tools.
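To illustrate the kind of “building block” these REST APIs provide, here is a hedged Python sketch in which a script checks a release’s status over REST and posts the result to a chat channel via webhook. The ISPW endpoint path, response fields and token handling are hypothetical placeholders rather than Compuware’s documented API; only the generic Slack incoming-webhook payload shape is standard.

```python
# Hedged sketch of the pattern these APIs enable: check a code release's
# status over REST, then notify a chat channel via webhook. The ISPW
# endpoint and response fields are hypothetical; see Compuware's API docs.
import requests

ISPW_BASE = "https://ispw.example.com"  # hypothetical server address
SLACK_WEBHOOK = "https://hooks.slack.com/services/T000/B000/XXXX"

def release_status(release_id: str, token: str) -> str:
    # Hypothetical endpoint path, invented for illustration only.
    resp = requests.get(
        f"{ISPW_BASE}/releases/{release_id}",
        headers={"Authorization": f"Bearer {token}"},
    )
    resp.raise_for_status()
    return resp.json().get("status", "unknown")

def notify(text: str) -> None:
    # Generic Slack incoming-webhook payload: a JSON object with "text".
    requests.post(SLACK_WEBHOOK, json={"text": text})

status = release_status("REL0042", token="...")
notify(f"ISPW release REL0042 status: {status}")
```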

JCL has been a longtime hurdle for those looking to develop for mainframes, even more so for millennials. Compuware tackles the challenge with plug-ins for Topaz Workbench. Integrations with Software Engineering of America (SEA) technology include the JCLplus Plugin for Topaz Workbench which will automatically verify standards, check syntax and do runtime simulation of JCL. In addition, there is the SAVRS Plugin for Topaz Workbench, which allows easy viewing and interpretation of Joblog and SYSOUT reports.  

Terse error logs and messages made fault analysis a mainframe frustration for a long time. Adding to the problem, mainframe groups operated in informational and data isolation, siloed away from the rest of the enterprise. As a result, separated and off by itself, the mainframe became a “black box”, sidelined and not recognized as part of the enterprise operations team.

As a start to resolving those issues, Compuware partnered with Syncsort to change that. Integrating with Syncsort Ironstream allows diagnostic data from the Abend-AID application fault discovery and analysis solution, together with mainframe logs, security and environmental data, to be fed in machine-readable form to Splunk, which combines that data with data from multiple different sources (security, compliance, behavioral, operations, etc.) across the organization. The combination can then be analyzed, correlated and evaluated to yield operational intelligence. The mainframe’s impact and influence in the context of total operations is made visible, and the importance of the mainframe to overall operations established.
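For a sense of what “machine-readable form” means on the Splunk side, here is a sketch of posting a mainframe fault event to Splunk’s HTTP Event Collector. Ironstream performs the real forwarding in this integration; the event fields here are illustrative, not Abend-AID’s actual schema.

```python
# Sketch of the destination side of this integration: sending a
# machine-readable mainframe fault event to Splunk's HTTP Event Collector.
# The event fields are illustrative, not Abend-AID's actual schema.
import requests

SPLUNK_HEC = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"  # placeholder token

event = {
    "sourcetype": "mainframe:fault",
    "event": {
        "job": "PAYROLL1",
        "abend_code": "S0C7",  # data exception, a classic COBOL abend
        "program": "PAYCALC",
        "diagnostics": "decimal field contained non-numeric data",
    },
}

resp = requests.post(
    SPLUNK_HEC,
    headers={"Authorization": f"Splunk {HEC_TOKEN}"},
    json=event,
    verify=False,  # demo only; use proper TLS verification in practice
)
print(resp.status_code)
```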

There is much more contained in the announcement. Our recommendation is that you follow up with Compuware to see how your customers and enterprise development and operations can benefit from their efforts.

Compuware’s overall ambition is to expose and confirm the importance of the mainframe to the enterprise. They do so by removing tool-based barriers that have traditionally inhibited its use by broader DevOps teams and by resolving significant shortcomings including, most significantly, dismantling operational silos. A significant part of the solution is the modernization of mainframe solutions, tools and capabilities so that developers, operations and business analysts can function consistently and transparently across mainframe, distributed and mobile platforms.

A Final word on Compuware’s Vision
If you haven’t noticed already, Compuware operates with a very customer-focused vision to drive its quarter-to-quarter deliveries. It isn’t that they are driven to reflexively react with little forethought. They have an established, consistent product/solution plan with a roadmap of future deliverables.

The basic, bedrock principle is to develop solutions based on what they believe are the critical and most-pressing problems confronting their customers NOW. They are driven by the belief that there are a number of identifiable and curable challenges that act as immediate roadblocks keeping the mainframe out of the mainstream. Their goal is to eliminate those roadblocks and speed the mainframe into mainstream operations. To do that, they have a prioritized, yet flexible, list of which challenges they will take on and when.

Compuware’s plans are neither static, nor inhibiting of creative, responsive innovation. For example, late last year the team conceived of, built out and delivered Runtime Visualizer, a new feature in Topaz for Program Analysis[5], in just 84 days. This year they acquired ISPW and rapidly integrated it with Topaz[6]. On the heels of that, they acquired and delivered Compuware ISPW SCM Migration Services. Yet focused attention on customer feedback and the rapidly evolving world of enterprise IT isn’t completely unique – and it isn’t sufficient to maintain leadership. Compuware, partners, employees and executives hold themselves to an exceptionally rapid rate of development and delivery. They are aided by a great deal of flexibility in implementation due in a significant part to their own products and organizational vision. They are driven to produce extraordinary results that demonstrate their own agility, as well as that of mainframe solutions and operations. As they promise, Compuware’s employees and executives are delivering “Agility Without Compromise…simple, elegant solutions that enable a blended development ecosystem.”

We’re impressed with what they are doing. We recommend that you investigate to see if you agree.