Friday, July 31, 2015

POWER8, Linux, and CAPI provide micro-second information processing to Algo-Logic’s Tick-to-Trade (T2T) clients

By Rich Ptak and Bill Moran

Rapid processing of data improves decision-making in trading, research, and operations, benefitting enterprises and consumers. Computer servers accelerated with Field Programmable Gate Arrays[1] (FPGAs) collect, analyze, and act on data at the greatest speeds. As data volumes skyrocket, processing speed becomes critically important.

Algo-Logic[2] leverages the speed of FPGAs to achieve the lowest possible trading latency. Their clients have access to data in 1.5 millionths of a second, enabling them to make better trades. Algo-Logic Systems’ CAPI-enabled Order Book is part of a complete Tick-to-Trade (T2T) System[3] for market makers, hedge funds, and latency-sensitive trading firms. The exchange data feed is instantly processed by an FPGA. The results go to the shared memory of an IBM POWER8 server equipped with the IBM CAPI[4] card and specialized FPGA technology. Then, in less than 1.5 microseconds, it updates an order book of transactions (buy/sell/quantity).

Stock trading generates an enormous data flow about the price and number of shares available. Regulated exchanges, such as NASDAQ, provide a real-time feed of market data to trading systems so that humans and automated trading systems can place competitive bids to buy and sell equities.  By monitoring level 3 tick data and generating a level 2 order book, traders[5] can precisely track the number of shares available at each price level. Firms using Algo-Logic’s CAPI-enabled Order Book benefit from the split-second differences in understanding and interpreting the data[6] from the stock exchange feed.
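The idea of aggregating level 3 (per-order) tick data into a level 2 (per-price-level) order book can be sketched in a few lines. The sketch below is purely illustrative: the class, message shapes, and prices are our own invention, not Algo-Logic’s interface, and a real tick-to-trade system does this work in FPGA hardware rather than in software.

```python
from collections import defaultdict

class Level2Book:
    """Toy level-2 order book: aggregates per-order (level 3) add/cancel
    events into total shares available at each price level."""

    def __init__(self):
        self.bids = defaultdict(int)   # price -> total shares bid
        self.asks = defaultdict(int)   # price -> total shares offered
        self.orders = {}               # order_id -> (side, price, qty)

    def add(self, order_id, side, price, qty):
        # side is "B" (buy) or "S" (sell)
        book = self.bids if side == "B" else self.asks
        book[price] += qty
        self.orders[order_id] = (side, price, qty)

    def cancel(self, order_id):
        side, price, qty = self.orders.pop(order_id)
        book = self.bids if side == "B" else self.asks
        book[price] -= qty
        if book[price] == 0:
            del book[price]            # drop empty price levels

    def best_bid(self):
        return max(self.bids) if self.bids else None

    def best_ask(self):
        return min(self.asks) if self.asks else None

book = Level2Book()
book.add(1, "B", 100.25, 300)
book.add(2, "B", 100.25, 200)
book.add(3, "S", 100.50, 400)
print(book.bids[100.25], book.best_bid(), book.best_ask())  # 500 100.25 100.5
```

The latency advantage Algo-Logic claims comes from performing exactly this kind of bookkeeping in an FPGA on the wire, before the data ever reaches a general-purpose CPU.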

Algo-Logic released their CAPI-enabled Order Book in March 2015. Multiple customers now use it in projects that include accelerated network processing of protocol parsing, financial surveillance systems, algorithmic trading, etc. with many proof-of-concept projects underway.

Algo-Logic found success with Linux, POWER8, and CAPI. We expect to write more about Algo-Logic and other OpenPOWER Foundation[7] partners as they continue to develop solutions and POWER8-Linux systems demonstrate their ability to handle big data at the speeds developers, architects, and users need.

[2] Located in Silicon Valley; see:
[3] See “CAPI Enabled Order Book Running on IBM® POWER8™ Server” at:
[5] We oversimplify stock market operations for clarity. For more details visit the footnotes.
[6] This is High Frequency Trading (HFT); for information, see:

Thursday, July 23, 2015

Compuware’s Topaz for Java Performance enhances mainframe productivity!

By Rich Ptak

Figure 1 A new tagline for Compuware

Last January, Compuware CEO Chris O’Malley committed to delivering significant enhancements to their mainframe software management portfolio. As part of that commitment, he promised the company would:

1.      Build innovative products highly valued by their customers;
2.      Build solutions that enable the next generation of mainframe workers;
3.      Identify and focus on the most critical needs of their customers;
4.      Assume shared responsibility to advocate and demonstrate leadership for the mainframe platform;
5.      Provide thought leadership for innovative uses for the mainframe platform.

As stated at the time, we were impressed with both the Compuware strategy and their aggressive timeline for the design and delivery of products, services and solutions. We also liked their plans for their Topaz management product, which we characterized as “Mainframe Software for the 21st Century”[1] in our write-up. We reviewed the motivation and market forces driving the strategy in the earlier piece. Those forces continue unabated, and need no repetition here.

We are now two quarters past that initial announcement of Topaz for Enterprise Data.  Chris and Compuware met their commitment deadline for April with the release of Topaz for Program Analysis. They continue to do so now with the announcement of Topaz for Java Performance on July 1st. We’ve seen the product in action; it is clear to us that the Compuware team is living up to their commitment.  Here’s why that is so.

Topaz for Java Performance overview
Topaz for Java Performance provides detailed, comprehensive visibility into the performance and behavior of Java Batch programs and WebSphere transactions running on the mainframe. It focuses on providing a solution that will enhance the productivity of millennial developers as it leverages and complements the expertise of experienced staff and QA teams. The goal is to speed the process of identifying, debugging and resolving mainframe application performance problems.
IT staff will realize benefits from automation and application insight, resulting in more effective resource utilization and speedier execution. This also frees experienced staff to focus on more complex problems and problem avoidance.
Topaz for Java Performance provides programmers with a one-page view of their JVM performance. Specifically designed to work with Java Batch and WebSphere, it provides a hierarchical representation of calls showing the method-to-method progress within the program. It also enables the staff to visualize the heap memory behavior of their program. In both situations, the viewer can zoom in and out to display more or less detail for specific portions of the performance data. It allows measurements to be taken across separate systems, LPARs and JVMs from a single web-based measurement page. See Figure 2 below.

Figure 2 Measurements of CPU Utilization, Heap Memory, Java Classes and Threads  

It provides views and detailed insights into the peak CPU utilization of specific Java methods and classes. With this ability, new and experienced staff can spot, investigate and resolve “garbage collection” issues such as memory leaks, troubleshoot excessively long collection intervals, and identify threads that are blocked or not actually doing useful work.

This is all just the beginning as Compuware plans for the evolution of Topaz to become a comprehensive suite of next-generation development products for the next-generation of mainframe development.

Enhancements to other Topaz products
In the meantime, also included in this release are enhancements to the earlier releases. The enhancements include:
·         Topaz for Program Analysis has been enhanced to provide intuitive, accurate visibility into the flow of data within COBOL or PL/I applications. This includes showing how data gets into a field; how a field is used to set other fields; and how a field is used in comparisons. Such “data flow” information helps developers to design better, smarter applications. 
·         Topaz for Enterprise Data can perform high-speed, compression-enabled host-to-host data copying by exploiting IBM z Systems zIIP processors. The load and burden on the general processors are reduced which can help delay/avoid an expensive upgrade. Also, developers can complete their work more quickly at lower costs.

The Final Word
There is much more that can be said and shown about this release than we can cover here. We highly recommend you see this video[2] and visit their website[3] for additional information.
Compuware is delivering on its promises. They not only provide much needed and highly effective solutions, they are living proof of the high efficiency and agility that can be accomplished in mainframe computing by delivering high quality solutions at an unheard of pace. Congratulations to them.
They are also contributing to the general ‘heat-up’ in the mainframe marketplace. We see more interest in the mainframe among a larger and more varied set of users. We see more aggressive and innovative activity among the players as a result of increased competition. And, there is interest resulting from the opening up of the mainframe ecosystem to a wider audience. We think Compuware’s positioning itself as “The Mainframe Software Partner for the Next 50 Years” may turn out to be quite prophetic!

Wednesday, July 15, 2015

HP aims high as it raises the stakes in HPC

By Bill Moran and Rich Ptak

Recently, Bill Mannel, the new VP and GM of HPC[1]/Big Data for HP Servers Group, personally provided a review of their new HPC strategy. Usually, briefings of this sort have the executive say a few words; they then turn the briefing over to marketing and technical staff who provide details. Clearly this isn’t Mannel’s style. He not only personally presented the entire briefing but also very ably answered all the analysts’ questions; a reflection of his engineering background. (He held technical positions in the US Air Force, NASA and SGI before joining HP.) It was a very impressive performance. Now, let’s review the strategy with our comments on its implications.

Every strategy briefing includes an overview of important market issues. Mannel chose to focus on the effect of major industry trends on HP and HPC. Device proliferation (IoT) is happening and accelerating now. These always-online devices generate vast quantities of data, much of which requires speedy, real-time analysis. Adding to the traffic are humans operating billions of cell phones world-wide. While the trends are not new, HP’s response is.

IT has a key role in every data-related activity from creation thru to service delivery in today’s enterprise. Mannel and HP are convinced that IT must transform itself from a cost center to a creator of competitive advantage. As a cost center, IT is a target for cost cuts. As a creator of competitive advantage, IT drives revenue; its budget becomes an investment in technology, not an overhead expense. HP is a partner/provider of services, guidance and products that improve IT’s effectiveness in the enterprise.

Mannel’s discussions with customers spotlighted HPC’s link to Big Data. Asked about using a system 1,000 times more powerful than today’s most powerful supercomputers, a weather researcher foresaw no problem scaling calculations. His concern was with the months required to process the resulting volume of data output. Clearly, HPC and Big Data must work in tandem; thus Mannel’s responsibilities span all HPC/Big Data-related solutions and partnerships.

Mannel skillfully wove in additional data from customer executives with HPC responsibilities. A key insight was that HPC users believe that “one size fits all” has failed them; standard x86 architectures have run out of gas. General purpose hardware cannot deliver needed performance; it’s too slow and too expensive.

Mannel’s conclusion: Big Data/HPC environments are ripe for tailored solutions. Common thinking in commercial computing for some time, it was now gaining ground among HPC professionals. No single vendor can provide everything needed alone.  A critical part of the HPC strategy is expansion of the HPC Partner Ecosystem to cover storage, networking and accelerator options. Examples include Intel, Mellanox, NVIDIA and Seagate along with ongoing support of and contributions to OpenStack.

The recently announced HP/Intel Alliance for HPC, which includes jointly sponsored benchmarking centers, supports the overall strategy. These will help to advance customer innovation while expanding accessibility of HPC for enterprises of all sizes. Centers will include facilities for benchmarking, performance issue analysis, code analysis and code modernization. Replying to a question, Mannel stated the alliance with Intel was not exclusive; HP retains the option to collaborate with others, e.g. AMD. We are sure that this is correct, but it is also obvious that the existence of joint centers provides Intel a clear advantage.

Early focus will be on customers in Financial Services, Oil & Gas and Life Sciences. ISVs such as Schlumberger, ANSYS, Gaussian, Simulia, and Redline are also targeted for support. We expect customer demand to expand the list.

Mannel’s plans call for systems able to penetrate the very high end of the Top 500 supercomputer list.[2] (As of June 2015, HP has 178 entries listed, mostly clustered at the low end.) HP’s Apollo 8000 systems have the potential to reach the very top brackets currently dominated by IBM and Cray.[3] We predict very intensive activity in the support centers to make that happen.

We believe that HP’s new strategy has a very good chance for success. Bill Mannel strikes us as a capable executive with a lot of HPC experience. His apparently successful blending of a technical engineering background with manufacturing/business management skills is great preparation for his current position. He communicated the HP strategy very clearly and effectively. He gave excellent answers to every question thrown at him. The briefing was among the best we’ve ever had with HP.

At this point, we can’t judge Mannel’s success at navigating the internal politics that exist at HP (and all large companies). However, based on what we’ve seen so far, we expect he will do just fine. It is our opinion that HP made a very good move in choosing him to lead their HPC efforts. We wish him and HP good hunting during the coming months.

[1] High Performance Computing
[2] See
[3] Although the #1 system in the world is Chinese-built on Intel processors.

Wednesday, July 8, 2015

SiliconScapes finds development faster, easier with Linux, CAPI and POWER8

By Bill Moran and Rich Ptak

Are you skeptical of tales describing easy ports of code to new platforms? Do you say “It can’t be that easy; that wouldn’t happen with a complicated program.”? Well, we have a story about the conversion of complicated technology to POWER8 that will interest you. Meet Dr. Kevin Irick, founder of SiliconScapes.

Dr. Irick formed SiliconScapes[1] to provide real-time image and video analytics systems. Initially, they used x86-based systems to host the FPGAs[2] needed to achieve the high processing speeds required for video analytics. Each different project required changes in the FPGAs, usually involving a lengthy development process, followed by a difficult integration. Shifting to POWER8 with IBM’s CAPI interface[3] eliminates the problem while offering the potential of a significant speed boost from POWER8.

SiliconScapes built a framework to integrate accelerators; moving the framework to POWER8 to access CAPI made sense. Prior to receipt of the system, an IBM software simulator was used to debug code for CAPI; these simulation tools proved very useful, even critical. Overall, the port took several months[4].

The effort was worthwhile. POWER8’s power and speed allows real time performance; its stability permits always-on video analytics with very high reliability and up-times. Faster, easier integration with CAPI reduces development costs and time, increasing customer satisfaction.

SiliconScapes’ experience demonstrates that a migration to POWER8 delivers ease-of-use, speed and robustness, as well as the benefits of open standards, even for very complicated programs with significant hardware dependencies. Other lessons learned by the developers include:
·        It helps significantly to be thoroughly familiar with the existing technology before attempting a port to a new system.
·        For maximum benefit and efficiency, take time to learn the new POWER8 environment including new technology like CAPI.
·        IBM simulation tools are very useful.
·        It is worthwhile to request help from IBM; SiliconScapes received excellent support.

Kevin Irick, founder/developer of SiliconScapes, is an enthusiastic supporter of OpenPOWER systems, Linux, CAPI and Open Standards. With IBM’s help and support, the migration went faster than he anticipated; providing a win all-around. Developers interested in more details about this port can contact Dr. Irick directly at or through the SiliconScapes website.

[1] For more details see
[4] A delayed delivery of the loaner system slowed the process. It went quickly with the installed system

Friday, June 19, 2015

Information Builders benefits by moving and shipping their product on Linux – IBM POWER8

By Rich Ptak and Bill Moran

With some 40 years[1] in business and 1,400 employees worldwide, ISV Information Builders is clearly committed to delivering exceptional service to their customers. Their product, WebFOCUS8[2], is a business intelligence platform with an extensive list of capabilities. As described on their web site[3], it runs on a variety of platforms, including x86. They captured our attention when they announced the port of their product from x86 to IBM’s POWER8 platform. The company’s experience and results after the port were so positive that they announced availability on their web site and began shipping it to customers on request. The product is increasingly being installed at customers running production environments on POWER8.

We spent time discussing the product to gain insight on their porting and performance experiences moving to POWER8. Information Builders made very clear that IBM provided excellent support. Porting the application went smoothly with no problems; they only had to recompile the code. In addition, once the product was running on POWER8, they ran a number of benchmarks to compare the performance of WebFOCUS on a POWER8 platform versus an x86 platform in several different environments.

A video on their web site has all the details, including configurations, etc., of the comparison of the POWER8 and x86 implementations. Three workloads were compared: a simple, a large, and a very complex one. Summarized results appear in the table below.

# Cores          Linux on POWER8                Intel
                 Reports per second             Reports per second
2 cores
4 cores
8 cores
The cases shown simulate 25 online users. Additional examples with varying numbers of users are included in the online data.

On average, for these three cases, the POWER8 system produced about 70% more reports than the Intel system. Note that significantly fewer cores are required on the POWER8 system to produce throughput equivalent to the Intel system: two POWER8 cores complete as much work as four Intel cores, while four POWER8 cores match eight Intel cores. Since most software is priced by the number of cores, this means lower software costs on the POWER8 system.
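The licensing implication of that core equivalence is simple arithmetic. The sketch below works it through with a hypothetical per-core license fee (the price and function names are ours, not Information Builders’ figures); only the 2:1 core ratio comes from the benchmarks.

```python
# If 2 POWER8 cores match 4 Intel cores (and 4 match 8), per-core
# licensing for equal throughput costs half as much on POWER8.
def license_cost(cores, price_per_core):
    return cores * price_per_core

PRICE = 5_000                       # hypothetical per-core license fee
intel_cores = 8
power8_cores = intel_cores // 2     # 2:1 equivalence from the benchmarks

savings = license_cost(intel_cores, PRICE) - license_cost(power8_cores, PRICE)
print(savings)  # 20000
```

With any per-core pricing model the savings scale linearly with the fee, so the percentage saved (here, half) matters more than the dollar figure chosen.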

Information Builders is an extremely professional organization. We found their endorsement of POWER8 Linux to be significant and very impressive. Another example of their professionalism is the WebFOCUS solution sizing tool on their web site. It allows you to estimate the POWER8 configuration needed for a given WebFOCUS workload. Of course, these results apply only to their application. However, it does provide a powerful endorsement of the POWER8-Linux platform.

[1] For company background, see the article at
[2] See the details at their web site. A short video there has benchmark details; it is well worth watching.

Thursday, June 18, 2015

POWER8-Linux fighting against Cancer

By Rich Ptak and Bill Moran

In the US alone, over a million new cancer cases occur each year according to the American Cancer Society. The discovery of a ‘silver bullet’ to defeat cancer appears unlikely any time soon. Instead, new combinations of technologies are being used in the battle.  This blog describes how OpenPOWER technologies, POWER8, Linux and CAPI, are being used by the University of Toronto’s Computer Engineering department to improve the accuracy, speed, and convenience of cancer treatment.

Many cancers are tumors located deep inside the human body; head, neck and out-of-sight tumors pose special challenges. Treatment with current methodologies (chemotherapy, radiation or surgery) can involve serious, unpleasant side effects. They are difficult to control precisely, potentially damaging healthy tissue or missing some of the cancer.

An alternative is Photodynamic Therapy (PDT), which systemically administers a light-sensitive drug that is then activated by precisely placed light sources to destroy the tumor. This greatly reduces the risk of damage to other organs and body parts. But the location of some tumors still makes precise light-source placement difficult and risky. Running a series of Monte Carlo simulations[1] to determine the most effective placement of multiple light sources can solve this.
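A Monte Carlo light-transport simulation of this kind traces many random photon paths through tissue and tallies where their energy is absorbed. The toy sketch below conveys the idea only; it is a one-dimensional caricature with made-up optical coefficients, vastly simpler than a real package such as Full Monte.

```python
import random, math

def simulate_photons(n_photons, mu_a=0.1, mu_s=10.0, seed=42):
    """Toy 1-D Monte Carlo photon transport: each photon takes random
    step lengths drawn from the total attenuation coefficient and
    deposits a fraction of its weight (absorption) at each step.
    Returns total absorbed dose binned by depth."""
    random.seed(seed)
    mu_t = mu_a + mu_s                       # total attenuation
    dose = [0.0] * 50                        # depth bins
    for _ in range(n_photons):
        depth, weight = 0.0, 1.0
        while weight > 1e-4 and 0 <= depth < len(dose):
            step = -math.log(random.random()) / mu_t   # free path length
            depth += step * random.choice((1, -1))     # crude scattering
            if 0 <= depth < len(dose):
                absorbed = weight * (mu_a / mu_t)
                dose[int(depth)] += absorbed
                weight -= absorbed
    return dose

dose = simulate_photons(1000)
print(f"peak dose in depth bin {dose.index(max(dose))}")
```

Because each photon path is independent, the workload parallelizes almost perfectly, which is exactly why an FPGA accelerator pays off so handsomely here.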

However, such simulations are CPU intensive; running them in a distant data center is inconvenient, expensive and impractical. Having a dozen or more[2] x86 processors in an operating room is not feasible due to excess power requirements and heat generation. A University of Toronto research project is using OpenPOWER technology to address the problem.

Using a POWER8 system with the CAPI FPGA interface, calculations are more than 64x faster (than with an x86 system[3]), or equivalently can use many fewer nodes. Users get more simulations to evaluate more treatment protocols, leading to safer and more effective treatment. POWER8’s smaller physical footprint and higher power efficiency (48x more throughput per Watt) mean it fits in the operating room without special cooling or electrical requirements.
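The node-count claim follows directly from the quoted speedup. A rough sizing calculation (the baseline of a dozen x86 processors comes from the article; the rest is just arithmetic):

```python
import math

# 64x per-node speedup means the dozen-plus x86 processors estimated
# for a workable version collapse to a single accelerated node.
X86_NODES = 12          # "a dozen or more" x86 processors (from the article)
SPEEDUP = 64            # quoted CAPI/FPGA speedup vs. x86

nodes_needed = math.ceil(X86_NODES / SPEEDUP)
print(nodes_needed)  # 1
```

One node instead of a rack is what makes an in-operating-room deployment plausible at all.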

OpenPOWER systems, CAPI, FPGAs, open standards and IBM IP accelerate progress and make a clinical version viable. The University of Toronto core team, Jeff Cassidy, Lothar Lilge and Vaughn Betz, continue the development. Partnering with industry and researchers including Roswell Park Cancer Institute (RPCI), Buffalo, NY, they anticipate early trials beginning in 2016. Find out more here. OpenPOWER will play a key role in deployment. We wish them all success in their work.

[1] Using the Full Monte software package for Monte Carlo simulations.
[2]  Estimated number of x86 processors needed for a workable version.
[3] CAPI allows a specialized FPGA to be closely integrated into the Power8 CPU. This accelerates the simulation.

Latest Dynatrace announcements convincingly demonstrate that customer focus pays off!

By Rich Ptak

Dynatrace went private late last year; the change is clearly paying off for them and their customers. CEO John Van Siclen led a record-breaking year, with 31% growth in net-new customers and 95% revenue growth in emerging (BRIC) markets. The customer list includes Adobe, KLM, Samsung, Alibaba, Swiss Life, etc., demonstrating cross-industry appeal. New growth along with a 91% renewal rate by existing customers gives them a base of more than 6,000 customers. Dynatrace was once again named the world’s APM market share leader, seizing 12.5% of a highly competitive space, with revenues more than $100M ahead of their nearest rival. Additionally, the Dynatrace APM Community has grown to more than 103,000 members, with 1,000 new sign-ups per month.
They attribute their success to a strategy based on four beliefs:
  1. It all starts and ends with the digital customers’ experience – it’s the delivery of the digital service to the end customer, not just the performance of the backend application or quality of the product that matters;
  2. It’s all about preventing problems, not simply reacting – recognizing and acting BEFORE the customer is affected;
  3. Gap-free (end-to-end) data is essential – need to capture, leverage and exploit all available data from whatever source to enable a digital business;
  4. The goal is DEV/Ops not merely ops – functions must communicate, coordinate and collaborate for success.
In our opinion it’s the degree to which they integrate these beliefs into their operations that makes them so successful. The focus on the customer permeates everything. They understand that their success depends on their ability to help customers succeed. And, that means providing the best customer experience possible.
Their focus was obvious throughout the business review; with nary a word about a product except as it impacts, benefits and contributes to the success of customer operations. Specific details of implementation and a demo are there if you want them. But, the major focus is on demonstrating their knowledge of what the customer is trying to do, and then demonstrating that Dynatrace is the one that can help them do it.
So, Dynatrace’s total focus is on helping digital businesses be successful by helping them become experts in three disciplines. These are: 1) optimizing and understanding the customer experience delivered anywhere, anytime and anyplace, 2) the continuous delivery of new digital capabilities by facilitating, actionable communication and collaboration among business, operations and development staffs, and 3) optimizing application performance by simplifying operations based on the complete visibility of what is taking place across the delivery chain.
  The strategy to accomplish the task they’ve taken on is as follows:
  1. Provide complete visibility in real-time of the experience of all targeted customers for all relevant devices,
  2. Break down barriers between Business, Development and Operations staffs with the most comprehensive coverage of enterprise use-case and insights into competitor performance,
  3. Apply their extensive expertise to leverage built-in and external analytics applied to all available, relevant data to optimize performance,
  4. Provide crowd-sourced testing networks for high-fidelity measurement of customer experience,
  5. Provide fastest Time-to-Value with easy-to-use tools for analysis and reporting such as self-service Quick-start guides for users.
Dynatrace shows no indication of slowing down. They are strengthening their ability to monitor and report on user experience and behavior monitoring in a merger with Keynote. Clients will have the most timely and accurate information to assure an optimal experience for their users and customers. We recommend you check out the Dynatrace website for product details. We’ll conclude with congratulations to the company for finishing their year so far ahead of their competition.