Friday, August 14, 2015

BMC – Preparing Businesses for Digital Transformation

By Rich Ptak


As enterprise executives increasingly focus on digital technologies and changing market influences, they feel unprecedented pressure to adapt or risk being left behind. Rapidly evolving technologies are forcing fundamental change in business operations as the very ways of doing business are radically altered.

New, digital start-ups can abruptly shift markets to drive larger established firms out of business. Examples like Netflix vs. Blockbuster Video come to mind. Familiar, well-established modes of competing and conducting business no longer work or serve to maintain competitive advantage.

Driven by consumer experiences with mobile, gaming, and shopping, user expectations rapidly bleed over from personal to business interactions. These irrevocably alter client/customer purchasing behaviors, expectations and relationships, impacting internal as well as external enterprise operating ecosystems and cultures.

One result is the flattening of traditional hierarchies as nimble project teams replace rigid departmental functions. Market globalization also lowers barriers to entry for many services, forcing enterprises to deal with numerous agile, adaptive and continuously evolving competitors. Customers expect highly responsive services tailored to meet their individual needs. This drives the need for product creation and delivery models that adapt to meet these new demands.

To survive, enterprises must transform themselves to be able to rapidly adapt to changes (whether to costs, pricing, technology, etc.) in their environment. They must learn to compete as providers of consumer-like digital services. Doing so can involve redesigning the workplace to simplify processes, modify workflows, etc. In short, they must become a digital enterprise.

What does it mean to be a digital enterprise?
A digital enterprise is a provider of digital services, able to rapidly create new services in response to changing user demands. A digital service provides an optimized, exceptional experience, whether delivered to internal (employee) or external (partner, client or customer) consumers.

A digital service is personalized, automated, scalable, and self-service; responsive to and controlled by the customer (like a mobile phone app). The service is self-contained, able to perform automatically at every step from initial access through delivery. It includes a path for help, with access to self-service remedial support along the way if needed.

Digital enterprise operations must function at the speed of business change. The digital enterprise depends upon the fully integrated, coordinated efforts of IT and business. To compete, it must continuously function at a scale and speed that matches the rate of change demanded by users. This requires technology integrated into an agile infrastructure robust enough to support the continuous delivery of new services.

Transforming into a Digital Enterprise
The process of reinvention is not new. Every business aspect has been changing and evolving since trade began. What is new is the scale of the change, the speed at which it is occurring and the shortened cycle at which the process repeats itself.

Transforming to a digital enterprise includes reorganizing operations. Products, processes, procedures and the workplace itself have to become completely service oriented and user-driven. The organization has to support rapid design and delivery to satisfy user demands for:
1. Intuitive solutions – providing an exceptional, optimized, personalized, self-service experience to consumers (whether employee, partner, client or customer);
2. High-speed innovation – ongoing, consistent development that leverages expertise, technology, partnerships and the latest processes, such as agile software development; and
3. Standardized solutions – delivered consistently across infrastructure modes, whether mainframe, mobile, virtual or cloud (private, public or hybrid).
The goal is to provide the best possible user experience, automatically optimized in real-time even as changes occur. This requires the ability to rapidly integrate continuously evolving elements into an environment that is increasingly automated, adaptive and virtualized.

Digital transformation is a continuous process. Therefore, an appropriate architecture for management and control of operational processes and structures, during and after digital transformation, is needed. BMC designed Digital Enterprise Management to be that architecture.

Digital Enterprise Management – Architecting the Digital Enterprise
An architecture for successful transformation must facilitate the ongoing integration of leading-edge technology. Integration is critical, as much existing invested capital (physical and intellectual) remains too valuable to ongoing operations to be discarded. A good example is mainframe code, reliably delivering services for decades. Critical to operations. Too valuable to discard. Far too expensive (and risky) to replace. Old and new must be blended for smooth operations. While point-in-time, one-off approaches can work, the better, more reliable solution lies in a well-defined architecture designed for long-term management of digital services, infrastructure, processes and policy.
BMC’s Digital Enterprise Management (DEM) provides IT an architecture and suite of solutions designed to facilitate seamless, optimized digital transformation. It enables ongoing improvement through continuous innovation. It defines a structured approach to managing and optimizing technology, processes, and policy for environments operating in real-time, with infrastructure ranging from mainframe to mobile to cloud and beyond. To accomplish this, DEM’s architecture and associated solutions/services focus on four disciplines and a shared foundation:
1. Digital Service Management – blends modern digital service design with ITSM principles & platforms to reinvent how business gets done and enable breakthroughs in human productivity.
2. Digital Enterprise Automation – an integrated, strategic approach to automation that enables businesses to accelerate the delivery of digital services while improving quality and control.
3. Digital Service Assurance – integrates data from multiple external sources, including social sentiment, so businesses can act quickly on customers’ online posts and complaints.
4. Digital Infrastructure Optimization – helps businesses avoid wasted capacity and licensing across their entire technology portfolio.


A shared foundation of Analytics, Orchestration, and Policy supplies common services for configuration data, automation, orchestration, analytics, and policy, enabling businesses to share a single, real-time view of their infrastructure across teams and processes. See Figure 1 below.
Figure 1 Digital Enterprise Management: Four Disciplines and the Foundation      Courtesy BMC Software

This is a high-level outline of Digital Enterprise Management from BMC. Services and products are available today, with more coming. The latest details are available from BMC and at their website, http://www.bmc.com/dem.

Conclusion/Next Steps
Competing in the global marketplace as a digital enterprise promises to be a challenging, exhilarating, and rewarding experience. The actual process of transformation to a digital enterprise contains more potential pitfalls than business as usual.

The dynamic nature of the enterprise and today’s marketplace prevents a one-size-fits-all solution. Nor does a complete solution suite exist today. The complete path forward will be realized only with time and experience. As understanding grows, more complete solutions will develop. Therefore, successfully navigating the initial transformation and preparing for what lies ahead requires careful planning and, for most, the help of a trusted, knowledgeable partner.

Einstein is quoted as saying that if he had one hour to save the world, he would spend fifty-five minutes defining the problem, and only five minutes finding the solution. BMC has taken the time needed to clearly define the problem. They have a track record of success along with accumulated expertise. They combined all of that to develop Digital Enterprise Management with the services and products necessary for its implementation.

They recognize the path to a digital enterprise is truly a journey, not a destination. We believe BMC has the knowledge, insight and expertise to be an excellent partner for any enterprise embarking on that journey. We recommend a visit to http://www.bmc.com/dem so you can see for yourself the benefits of their solutions.

Friday, July 31, 2015

POWER8, Linux, and CAPI provide micro-second information processing to Algo-Logic’s Tick-to-Trade (T2T) clients

By Rich Ptak and Bill Moran


Rapid processing of data improves decision-making in trading, research, and operations, benefitting enterprises and consumers. Computer servers accelerated with Field Programmable Gate Arrays[1] (FPGAs) collect, analyze, and act on data at the greatest speeds. As data volumes skyrocket, processing speed becomes critically important.

Algo-Logic[2] leverages the speed of FPGAs to achieve the lowest possible trading latency. Their clients have access to data in 1.5 millionths of a second, enabling them to make better trades. Algo-Logic Systems’ CAPI-enabled Order Book is part of a complete Tick-to-Trade (T2T) System[3] for market makers, hedge funds, and latency-sensitive trading firms. The exchange data feed is processed instantly by an FPGA. The results go to the shared memory of an IBM POWER8 server equipped with the IBM CAPI[4] card and specialized FPGA technology. Then, in less than 1.5 microseconds, it updates an order book of transactions (buy/sell/quantity).

Stock trading generates an enormous data flow about the price and number of shares available. Regulated exchanges, such as NASDAQ, provide a real-time feed of market data to trading systems so that humans and automated trading systems can place competitive bids to buy and sell equities. By monitoring level 3 tick data and generating a level 2 order book, traders[5] can precisely track the number of shares available at each price level. Firms using Algo-Logic’s CAPI-enabled Order Book benefit from split-second differences in understanding and interpreting the data[6] from the stock exchange feed.
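
Algo-Logic builds the book in FPGA gate logic, with no software on the critical path. Purely to illustrate the data structure involved, here is a minimal sketch of our own (not Algo-Logic’s implementation; the class and event fields are our assumptions) showing how level 3 events aggregate into level 2 price levels:

import java.util.TreeMap;

// Illustrative sketch only: a software model of level-3 tick events
// aggregating into a level-2 order book. Algo-Logic's actual order book
// is implemented in FPGA hardware; names here are our own assumptions.
public class Level2Book {
    // Price level -> total shares resting at that level.
    private final TreeMap<Long, Long> bids =
            new TreeMap<>((a, b) -> Long.compare(b, a)); // best (highest) bid first
    private final TreeMap<Long, Long> asks = new TreeMap<>(); // best (lowest) ask first

    // Apply one level-3 event: positive shares add liquidity,
    // negative shares remove it (a cancel or an execution).
    public void apply(boolean isBid, long price, long shares) {
        TreeMap<Long, Long> side = isBid ? bids : asks;
        long total = side.getOrDefault(price, 0L) + shares;
        if (total > 0) side.put(price, total); else side.remove(price);
    }

    public long sharesAt(boolean isBid, long price) {
        return (isBid ? bids : asks).getOrDefault(price, 0L);
    }

    public Long bestBid() { return bids.isEmpty() ? null : bids.firstKey(); }
    public Long bestAsk() { return asks.isEmpty() ? null : asks.firstKey(); }

    public static void main(String[] args) {
        Level2Book book = new Level2Book();
        book.apply(true, 10050, 300);  // add 300 shares bid at $100.50
        book.apply(true, 10050, 200);  // another order at the same level
        book.apply(true, 10050, -100); // partial cancel or execution
        System.out.println("Bid depth @ 100.50: " + book.sharesAt(true, 10050)); // 400
    }
}

A hardware pipeline performs the equivalent update in fixed nanosecond-scale stages, which helps explain how the full feed-to-book path stays under 1.5 microseconds.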

Algo-Logic released their CAPI-enabled Order Book in March 2015. Multiple customers now use it in projects that include accelerated network protocol parsing, financial surveillance systems, algorithmic trading, etc., with many proof-of-concept projects underway.

Algo-Logic found success with Linux, POWER8, and CAPI. We expect to write more about Algo-Logic and other OpenPOWER Foundation[7] partners as they continue to develop solutions and as POWER8-Linux systems demonstrate their ability to handle big data at the speeds developers, architects, and users need.






[2] Located in Silicon Valley; see: http://algo-logic.com
[3] See “CAPI Enabled Order Book Running on IBM® POWER8™ Server” at: http://algo-logic.com/CAPIorderbook
[5] We oversimplify stock market operations for clarity. For more detail, see the linked references.
[6] This is High-Frequency Trading (HFT); for more information, see: https://en.wikipedia.org/wiki/High-frequency_trading

Thursday, July 23, 2015

Compuware’s Topaz for Java Performance enhances mainframe productivity!

By Rich Ptak

Figure 1 A new tagline for Compuware

Last January, Compuware CEO Chris O’Malley committed to delivering significant enhancements to their mainframe software management portfolio. As part of that commitment, he promised the company would:

1.      Build innovative products highly valued by their customers;
2.      Build solutions that enable the next generation of mainframe workers;
3.      Identify and focus on the most critical needs of their customers;
4.      Assume shared responsibility to advocate and demonstrate leadership for the mainframe platform;
5.      Provide thought leadership for innovative uses for the mainframe platform.

As stated at the time, we were impressed with both the Compuware strategy and their aggressive timeline for the design and delivery of products, services and solutions. We also liked their plans for their Topaz management product, which we characterized as “Mainframe Software for the 21st Century”[1] in our write-up. We reviewed the motivation and market forces driving the strategy in the earlier piece. Those forces continue unabated, and need no repetition here.

We are now two quarters past that initial announcement of Topaz for Enterprise Data. Chris and Compuware met their commitment deadline for April with the release of Topaz for Program Analysis. They continue to do so now with the announcement of Topaz for Java Performance on July 1st. We’ve seen the product in action; it is clear to us that the Compuware team is living up to their commitment. Here’s why.

Topaz for Java Performance overview
Topaz for Java Performance provides detailed, comprehensive visibility into the performance and behavior of Java Batch programs and WebSphere transactions running on the mainframe. It focuses on enhancing the productivity of millennial developers while leveraging and complementing the expertise of experienced staff and QA teams. The goal is to speed the process of identifying, debugging and resolving mainframe application performance problems.

IT staff will realize benefits from automation and application insight that result in more effective utilization of resources and speedier execution. This also frees up the time of experienced staff, who can then focus on more complex problems and problem avoidance.

Topaz for Java Performance gives programmers a one-page view of their JVM’s performance. Specifically designed to work with Java Batch and WebSphere, it provides a hierarchical representation of calls showing the method-to-method progress within the program. It also enables staff to visualize the heap memory behavior of their programs. In both views, the user can zoom in and out to display more or less detail for specific portions of the performance data. Measurements can be taken across separate systems, LPARs and JVMs from a single web-based measurement page. See Figure 2 below.

Figure 2 Measurements of CPU Utilization, Heap Memory, Java Classes and Threads  

It provides views and detailed insights into the peak CPU utilization of specific Java methods and classes. With this ability, new and experienced staff alike can spot, investigate and resolve garbage-collection issues such as memory leaks, troubleshoot excessively long collection intervals, and identify threads that are blocked or not actually doing useful work.
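
Topaz collects these measurements with its own instrumentation. Purely as an illustration of the categories of JVM data involved, the standard java.lang.management API exposes similar raw metrics; the sketch below is our own and is not Compuware’s agent or method:

import java.lang.management.*;

// Illustration only: the stock JMX beans expose the same categories of
// data Topaz for Java Performance visualizes (heap usage, GC activity,
// thread states). This is not Compuware's implementation.
public class JvmSnapshot {
    public static void main(String[] args) {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = mem.getHeapMemoryUsage();
        System.out.printf("Heap: %d MB used of %d MB max%n",
                heap.getUsed() >> 20, heap.getMax() >> 20);

        // Cumulative GC counts and times hint at leak-like heap growth
        // or excessively long collection intervals.
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("GC %s: %d collections, %d ms total%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }

        // Threads stuck in BLOCKED state are doing no useful work.
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        for (ThreadInfo ti : threads.dumpAllThreads(false, false)) {
            if (ti.getThreadState() == Thread.State.BLOCKED) {
                System.out.println("Blocked: " + ti.getThreadName()
                        + " waiting on " + ti.getLockName());
            }
        }
    }
}

A tool like Topaz adds the sampling, cross-LPAR correlation and visualization that turn such raw numbers into the actionable views described above.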

This is all just the beginning, as Compuware plans for Topaz to evolve into a comprehensive suite of next-generation development products for the next generation of mainframe developers.

Enhancements to other Topaz products
In the meantime, this release also includes enhancements to the earlier products:
·         Topaz for Program Analysis has been enhanced to provide intuitive, accurate visibility into the flow of data within COBOL or PL/I applications. This includes showing how data gets into a field, how a field is used to set other fields, and how a field is used in comparisons. Such “data flow” information helps developers design better, smarter applications.

·         Topaz for Enterprise Data can perform high-speed, compression-enabled host-to-host data copying by exploiting IBM z Systems zIIP processors. The load on the general processors is reduced, which can help delay or avoid an expensive upgrade. Developers can also complete their work more quickly at lower cost.

The Final Word
There is much more to be said and shown about this release than we can cover here. We highly recommend you see this video[2] and visit their website[3] for additional information.

Compuware is delivering on its promises. They not only provide much-needed and highly effective solutions, they are living proof of the efficiency and agility that can be achieved in mainframe computing, delivering high-quality solutions at an unheard-of pace. Congratulations to them.
They are also contributing to the general ‘heat-up’ in the mainframe marketplace. We see more interest in the mainframe among a larger and more varied set of users. We see more aggressive and innovative activity among the players as a result of increased competition. And, there is interest resulting from the opening up of the mainframe ecosystem to a wider audience. We think Compuware’s positioning itself as “The Mainframe Software Partner for the Next 50 Years” may turn out to be quite prophetic!

Wednesday, July 15, 2015

HP aims high as it raises the stakes in HPC

By Bill Moran and Rich Ptak



Recently, Bill Mannel, the new VP and GM of HPC[1]/Big Data for HP’s Servers Group, personally provided a review of their new HPC strategy. Usually in briefings of this sort, the executive says a few words and then turns the briefing over to marketing and technical staff to provide the details. Clearly that isn’t Mannel’s style. He not only personally presented the entire briefing but also very ably answered all the analysts’ questions, a reflection of his engineering background. (He held technical positions in the US Air Force, NASA and SGI before joining HP.) It was a very impressive performance. Now, let’s review the strategy with our comments on its implications.

Every strategy briefing includes an overview of important market issues. Mannel chose to focus on the effect of major industry trends on HP and HPC. Device proliferation (IoT) is happening and accelerating now. These always-online devices generate vast quantities of data, much of which requires speedy, real-time analysis. Adding to the traffic are humans operating billions of cell phones worldwide. While the trends are not new, HP’s response is.

IT has a key role in every data-related activity, from creation through service delivery, in today’s enterprise. Mannel and HP are convinced that IT must transform itself from a cost center to a creator of competitive advantage. As a cost center, IT is a target for cost cuts. As a creator of competitive advantage, IT drives revenue; its budget becomes an investment in technology, not an overhead expense. HP aims to be a partner/provider of services, guidance and products that improve IT’s effectiveness in the enterprise.

Mannel’s discussions with customers spotlighted HPC’s link to Big Data. Asked about using a system 1000x more powerful than today’s most powerful supercomputers, a weather researcher foresaw no problem scaling the calculations. His concern was the months required to process the resulting volume of data output. Clearly, HPC and Big Data must work in tandem; thus Mannel’s responsibilities span all HPC/Big Data-related solutions and partnerships.

Mannel skillfully wove in additional data from customer executives with HPC responsibilities. A key insight was that HPC users believe “one size fits all” has failed them; standard x86 architectures have run out of gas. General-purpose hardware cannot deliver the needed performance; it is too slow and too expensive.

Mannel’s conclusion: Big Data/HPC environments are ripe for tailored solutions. Common thinking in commercial computing for some time, this view is now gaining ground among HPC professionals. No single vendor can provide everything needed alone. A critical part of the HPC strategy is expansion of the HPC Partner Ecosystem to cover storage, networking and accelerator options. Examples include Intel, Mellanox, NVIDIA and Seagate, along with ongoing support of and contributions to OpenStack.

The recently announced HP/Intel Alliance for HPC, which includes jointly sponsored benchmarking centers, supports the overall strategy. These centers will help advance customer innovation while expanding the accessibility of HPC to enterprises of all sizes. They will include facilities for benchmarking, performance-issue analysis, code analysis and code modernization. Replying to a question, Mannel stated the alliance with Intel is not exclusive; HP retains the option to collaborate with others, e.g. AMD. We are sure this is correct, but it is also obvious that the joint centers give Intel a clear advantage.

Early focus will be on customers in Financial Services, Oil & Gas and Life Sciences. ISVs such as Schlumberger, ANSYS, Gaussian, Simulia, and Redline are also targeted for support. We expect customer demand to expand the list.

Mannel’s plans call for systems able to penetrate the very high end of the Top 500 supercomputer list.[2] (As of June 2015, HP has 178 entries on the list, mostly clustered at the low end.) HP’s Apollo 8000 systems have the potential to reach the very top brackets, currently dominated by IBM and Cray.[3] We predict very intensive activity in the support centers to make that happen.

We believe that HP’s new strategy has a very good chance for success. Bill Mannel strikes us as a capable executive with a lot of HPC experience. His apparently successful blending of a technical engineering background with manufacturing/business management skills is great preparation for his current position. He communicated the HP strategy very clearly and effectively. He gave excellent answers to every question thrown at him. The briefing was among the best we’ve ever had with HP.

At this point, we can’t judge Mannel’s success at navigating the internal politics that exist at HP (and all large companies). However, based on what we’ve seen so far, we expect he will do just fine. It is our opinion that HP made a very good move in choosing him to lead their HPC efforts. We wish him and HP good hunting during the coming months.




[1] High Performance Computing
[2] See http://www.top500.org/
[3] Although the #1 system in the world is Chinese-built on Intel processors.

Wednesday, July 8, 2015

SiliconScapes finds development faster, easier with Linux, CAPI and POWER8

By Bill Moran and Rich Ptak

Are you skeptical of tales describing easy ports of code to new platforms? Do you say, “It can’t be that easy; that wouldn’t happen with a complicated program”? Well, we have a story about the conversion of complicated technology to POWER8 that will interest you. Meet Dr. Kevin Irick, founder of SiliconScapes.

Dr. Irick formed SiliconScapes[1] to provide real-time image and video analytics systems. Initially, they used x86-based systems to host the FPGAs[2] needed to achieve the high processing speeds required for video analytics. Each project required changes to the FPGAs, usually involving a lengthy development process followed by a difficult integration. Shifting to POWER8 with IBM’s CAPI interface[3] eliminates the problem while offering the potential of a significant speed boost from POWER8.

SiliconScapes built a framework to integrate accelerators; moving the framework to POWER8 to access CAPI made sense. Prior to receipt of the system, an IBM software simulator was used to debug code for CAPI. Overall, the port took several months[4]; IBM’s simulation tools proved very useful, indeed critical.

The effort was worthwhile. POWER8’s power and speed allows real time performance; its stability permits always-on video analytics with very high reliability and up-times. Faster, easier integration with CAPI reduces development costs and time, increasing customer satisfaction.

SiliconScapes’ experience demonstrates that a migration to POWER8 delivers ease-of-use, speed and robustness, as well as the benefits of open standards, even for very complicated programs with significant hardware dependencies. Other lessons learned by the developers include:
·        It helps significantly to be thoroughly familiar with the existing technology before attempting a port to a new system.
·        For maximum benefit and efficiency, take time to learn the new POWER8 environment, including new technology like CAPI.
·        IBM’s simulation tools are very useful.
·        It is worthwhile to request help from IBM; SiliconScapes received excellent support.

Kevin Irick, founder/developer of SiliconScapes, is an enthusiastic supporter of OpenPOWER systems, Linux, CAPI and open standards. With IBM’s help and support, the migration went faster than he anticipated, providing a win all around. Developers interested in more details about this port can contact Dr. Irick directly at Kevin.Irick@siliconscapes.net or through the SiliconScapes website.



[1] For more details see http://siliconscapes.net/
[4] A delayed delivery of the loaner system slowed the process. It went quickly once the system was installed.

Friday, June 19, 2015

Information Builders benefits by moving to and shipping their product on Linux – IBM POWER8

By Rich Ptak and Bill Moran


With some 40 years[1] in business and 1400 employees worldwide, ISV Information Builders is clearly committed to delivering exceptional service to their customers. Their product, WebFOCUS8[2], is a business intelligence platform with an extensive list of capabilities. As described on their web site[3], it runs on a variety of platforms, including x86. They captured our attention when they announced the port of the product from x86 to IBM’s POWER8 platform. The company’s experience and results after the port were so positive that they announced its availability on their web site and began shipping it to customers on request. The product is increasingly being installed in real customer production environments on POWER8.

We spent time with them discussing the product to gain insight into their porting and performance experiences in moving to POWER8. Information Builders made very clear that IBM provided excellent support. Porting the application went smoothly, with no problems; they only had to recompile the code. In addition, once the product was running on POWER8, they ran a number of benchmarks comparing the performance of WebFOCUS on a POWER8 platform versus an x86 platform in several different environments.

A video on their web site has all the details of the comparison of the POWER8 and x86 implementations, including configurations, etc. Three workloads were compared: a simple one, a large one, and a very complex one. Summarized results appear in the table below.

# Cores    Linux on POWER8 (reports/sec)    Intel x86 (reports/sec)
2          469                              255
4          690                              399
8          1000                             567

The cases shown simulate 25 online users. Additional examples with varying numbers of users are included in the online data.

Averaged over these three cases, the POWER8 system produced roughly 75% more reports per second than the Intel system (2159 total reports/second versus 1221, a ratio of about 1.77). Note that significantly fewer cores are required on the POWER8 system to match the Intel system’s throughput: two POWER8 cores complete more work than four Intel cores (469 versus 399 reports/second), and four POWER8 cores outpace eight Intel cores (690 versus 567). Since most software is priced by the number of cores, this means lower software costs on the POWER8 system.

Information Builders is an extremely professional organization. We found their endorsement of POWER8 Linux significant and very impressive. Another example of their professionalism is the WebFOCUS solution-sizing tool on their web site, which lets you estimate the POWER8 configuration needed for a given WebFOCUS workload. Of course, these results apply only to their application. Nevertheless, they provide a powerful endorsement of the POWER8-Linux platform.
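
We have not examined the internals of Information Builders’ sizing tool. As a rough illustration of the idea, one could interpolate over the published benchmark points to get a first-cut core estimate; the sketch below is our own, with a made-up target rate, and the real tool undoubtedly accounts for many more factors:

// Rough illustration only: linear interpolation over the published
// POWER8 benchmark points to guess cores needed for a target report
// rate. Not Information Builders' sizing tool or methodology.
public class RoughSizer {
    static final int[] CORES = {2, 4, 8};
    static final int[] POWER8_RPS = {469, 690, 1000}; // reports/sec from the table

    static double coresFor(double targetRps) {
        if (targetRps <= POWER8_RPS[0]) return CORES[0];
        for (int i = 1; i < CORES.length; i++) {
            if (targetRps <= POWER8_RPS[i]) {
                double frac = (targetRps - POWER8_RPS[i - 1])
                        / (double) (POWER8_RPS[i] - POWER8_RPS[i - 1]);
                return CORES[i - 1] + frac * (CORES[i] - CORES[i - 1]);
            }
        }
        return Double.NaN; // beyond the measured range; don't extrapolate
    }

    public static void main(String[] args) {
        // Example: a hypothetical 800 reports/sec target lands between the
        // 4-core (690) and 8-core (1000) measurements, at about 5.4 cores.
        System.out.printf("~%.1f POWER8 cores for 800 reports/sec%n", coresFor(800));
    }
}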




[1] For company background, see the article at https://en.wikipedia.org/wiki/Information_Builders
[2] See the details at http://www.informationbuilders.com/wf-linuxpower. There is a short video that has benchmark details. It is well worth watching.

Thursday, June 18, 2015

POWER8-Linux in the fight against cancer

By Rich Ptak and Bill Moran


In the US alone, over a million new cancer cases occur each year, according to the American Cancer Society. The discovery of a ‘silver bullet’ to defeat cancer appears unlikely any time soon. Instead, new combinations of technologies are being used in the battle. This blog describes how OpenPOWER technologies (POWER8, Linux and CAPI) are being used by the University of Toronto’s Computer Engineering department to improve the accuracy, speed, and convenience of cancer treatment.

Many cancers are tumors located deep inside the human body; head, neck and other out-of-sight tumors pose special challenges. Treatment with current methodologies (chemotherapy, radiation or surgery) can involve serious, unpleasant side effects. These treatments are difficult to control precisely, potentially damaging healthy tissue or missing some of the cancer.

An alternative is Photodynamic Therapy (PDT), in which a light-sensitive drug is administered systemically and then activated by light to destroy the tumor. This greatly reduces the risk of damage to other organs and body parts. But some tumors still make precise placement of the light sources difficult and risky. Running a series of Monte Carlo simulations[1] to determine the most effective placement of multiple light sources solves this.
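
For readers unfamiliar with the technique: a Monte Carlo light simulation launches many independent virtual photons and follows each through random scattering and absorption events. The toy sketch below is our own drastic simplification, with made-up coefficients and none of the 3D tissue meshes or scattering anisotropy a real simulator such as the Full Monte package tracks; it is intended only to show the shape of the computation:

import java.util.Random;
import java.util.concurrent.ThreadLocalRandom;

// Toy illustration of Monte Carlo photon transport, our own sketch.
// Real PDT simulators model 3D tissue geometry, anisotropic scattering
// and multiple sources; the constants here are invented for the example.
public class PhotonToy {
    static final double MU_A = 0.5;  // absorption coefficient (1/cm), assumed
    static final double MU_S = 10.0; // scattering coefficient (1/cm), assumed

    // Estimate the mean depth at which photons are absorbed below a source.
    public static double meanAbsorptionDepth(long photons) {
        Random rng = ThreadLocalRandom.current();
        double sum = 0;
        for (long i = 0; i < photons; i++) {
            double z = 0, dirZ = 1; // launch straight down into the tissue
            while (true) {
                // Free path length drawn from an exponential distribution.
                double step = -Math.log(1 - rng.nextDouble()) / (MU_A + MU_S);
                z += dirZ * step;
                // Absorbed with probability mu_a / (mu_a + mu_s).
                if (rng.nextDouble() < MU_A / (MU_A + MU_S)) break;
                dirZ = 2 * rng.nextDouble() - 1; // isotropic rescatter (toy)
            }
            sum += z;
        }
        return sum / photons;
    }

    public static void main(String[] args) {
        // Each photon is independent, so the work parallelizes trivially;
        // that property is what an FPGA/CAPI pipeline exploits.
        System.out.printf("Mean absorption depth: %.3f cm%n",
                meanAbsorptionDepth(1_000_000));
    }
}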

However, such simulations are CPU intensive; running them in a distant data center is inconvenient, expensive and impractical. Having a dozen or more[2] x86 processors in an operating room is not feasible due to excess power requirements and heat generation. A University of Toronto research project is using OpenPOWER technology to address the problem.

Using a POWER8 system with the CAPI FPGA interface, calculations run more than 64x faster than on an x86 system[3], or, equivalently, require far fewer nodes. Users can run more simulations to evaluate more treatment protocols, leading to safer and more effective treatment. POWER8’s smaller physical footprint and higher power efficiency (48x more throughput per Watt) mean it fits in the operating room without special cooling or electrical requirements.

OpenPOWER systems, CAPI, FPGAs, open standards and IBM IP accelerated progress and make a clinical version viable. The University of Toronto core team of Jeff Cassidy, Lothar Lilge and Vaughn Betz continues the development. Partnering with industry and researchers, including the Roswell Park Cancer Institute (RPCI) in Buffalo, NY, they anticipate early trials beginning in 2016. OpenPOWER will play a key role in deployment. We wish them all success in their work.




[1] Using the Full Monte software package for Monte Carlo simulations.
[2]  Estimated number of x86 processors needed for a workable version.
[3] CAPI allows a specialized FPGA to be closely integrated with the POWER8 CPU, accelerating the simulation.