Sunday, February 7, 2016

IBM LinuxONE for Hybrid Cloud Environments – Building a Developer’s Dream Ecosystem

By Rich Ptak


IBM is wasting no time as it builds out and strengthens the ecosystem for mainframe Linux developers. The IBM LinuxONE family was announced with significant fanfare in late August with an attractive set of attributes, price points and pricing models. It launched with a robust ecosystem supporting a broad range of popular open source and ISV tools, including Apache Spark, Node.js, MongoDB, MariaDB, PostgreSQL, Chef and Docker.

The current announcements build on and extend that ecosystem. The highlights this time include enhanced speed and processing power for the entry-level system, Rockhopper. There are also new hybrid cloud capabilities in IBM products, including enhancements to flexibility, security and performance that will accelerate and simplify the development of applications targeted for the cloud.

Mainframe business revenue growth and overall business performance added some much-needed good news to IBM’s 2015 financial picture. Ross Mauri, General Manager, IBM z Systems and LinuxONE, is committed to building on and extending this performance. To do so, IBM needs to increase and accelerate the penetration of the mainframe into the Linux environment, and there is plenty of room for growth in that space. Key to success in that area will be expanding the ecosystem of tools, technologies, services and partnerships to attract the attention and interest of active, innovative Linux developers. This means a focus on providing the tools and technologies that define the open source/OpenStack development environment. Full details of the announcement are available on the IBM site at http://www.ibm.com/linuxone. Be prepared to spend some time there, as it contains lots of links, data and information.

Here’s a look at some of the interesting details. To achieve its goal, IBM is delivering new cloud capabilities, such as optimizing StrongLoop and Cloudant technologies for LinuxONE. The result is a highly scalable Node.js environment, attractive because it allows developers to use their preferred language to develop server-side applications. StrongLoop makes it easy to develop the APIs that connect mobile, IoT and web apps and services. Cloudant is an enterprise-grade NoSQL database that saves users time and resources by storing mobile data in its native JSON, a popular mobile format. Both are important in hybrid cloud environments.
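
To make the JSON point concrete, here is a minimal sketch in Go that stores a mobile-style document in a Cloudant database over its CouchDB-compatible HTTP API. The account name, database name and credentials are hypothetical placeholders and error handling is kept to a minimum; treat it as an illustration of the idea rather than IBM sample code.

```go
// Post a JSON document to a Cloudant database over its CouchDB-style HTTP API.
// The account, database and credentials below are hypothetical placeholders.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

func main() {
	// A mobile event captured as native JSON – no relational mapping needed.
	doc := map[string]interface{}{
		"type":      "mobile_event",
		"device_id": "android-4711",
		"action":    "quote_requested",
		"timestamp": "2016-02-07T14:03:00Z",
	}
	body, err := json.Marshal(doc)
	if err != nil {
		log.Fatal(err)
	}

	// POST to https://<account>.cloudant.com/<database> creates the document.
	url := "https://example-account.cloudant.com/mobile_events"
	req, err := http.NewRequest("POST", url, bytes.NewReader(body))
	if err != nil {
		log.Fatal(err)
	}
	req.SetBasicAuth("example-account", "example-password")
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println("Cloudant responded with status:", resp.Status)
}
```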

Even more interesting are the expanded software and capabilities. For example, the Go programming language is now supported on IBM LinuxONE (a short, illustrative Go sketch follows this paragraph). Go was developed by Google for building simple, reliable and efficient software. In the summer of 2016, IBM will begin contributing code to the Go community.
Through OpenStack technology collaboration, SUSE tools will be able to manage public, private and hybrid clouds implemented on LinuxONE Systems.
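
Since Go support is one of the headline items, here is a minimal sketch of what it looks like in practice. The program is ordinary Go; assuming a Go toolchain that includes the linux/s390x target (the processor architecture LinuxONE uses), the comment shows the single cross-compilation command needed to build it for a LinuxONE machine.

```go
// hello_linuxone.go – an ordinary Go program; nothing LinuxONE-specific is
// required in the source. Assuming a Go toolchain with linux/s390x support,
// it can be cross-compiled for a LinuxONE machine with:
//
//   GOOS=linux GOARCH=s390x go build hello_linuxone.go
//
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// runtime.GOARCH reports "s390x" when the binary runs on LinuxONE hardware.
	fmt.Printf("Hello from %s/%s\n", runtime.GOOS, runtime.GOARCH)
}
```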

Added to the existing availability of SUSE and Red Hat distributions, Canonical’s Ubuntu Linux distribution and cloud tool sets (Juju, MAAS, and Landscape) will now be available to LinuxONE clients. This completes a Linux distribution hat trick for the system.

New releases of both Emperor and Rockhopper will be shipping in March with improvements to speed and processing power. Details to be announced.

Also shipping in March, the IBM LinuxONE portfolio will include the IBM Open Platform (IOP) at no cost. IOP provides a broad set of industry-standard, Apache-based capabilities for analytics and big data. The components supported include Apache Spark, Apache HBase, Apache Hadoop 2.7.1 and more. As part of its open source contributions, IBM optimized the Open Managed Runtime project (OMR) for LinuxONE. As IBM states it: “This repurposes the IBM innovations in virtual machine technology for new dynamic scripting languages, infusing them with enterprise-grade strength.”


IBM is continuing to expand and enhance its family of LinuxONE systems to meet the interests and respond to the needs of the Linux, open systems, open source and OpenStack communities. If you haven’t looked at what they are offering recently, we’d highly recommend you do so today. All the best to IBM and the LinuxONE systems team!

Tuesday, January 12, 2016

HPE’s Synergy: Delivering the Data Center of the Future?

By Rich Ptak


At the recent Hewlett Packard Enterprise Discover event for its European customers, HPE announced its new Synergy offering. HP spent the last year planning and implementing its split into two companies, which became final at the beginning of November.

The old company’s final financial results significantly increased the pressure to turn the business around. Be that as it may, HPE management is now free to focus exclusively on achieving business success.

Even more positively, the Synergy offering is a significant and encouraging portent for the future. HPE accomplished something that we suspect few companies could match. In addition to managing the breakup, management was able to focus on defining and delivering a major step-up in its vision of the datacenter’s future. HPE deserves recognition for that achievement alone.

First, some background. Clearly, adoption of the cloud is driving major datacenter changes. Datacenters are very stove-piped: each group (server, storage, networking, operations, etc.) focuses narrowly on its own domain. This structure originally resulted from early hardware architecture specialization, and each group has skills specific to its infrastructure/technology specialty.


Now add competitive and economic pressures demanding that IT staffs do more with less, pressures that have increased enormously even as staff budgets and available time have decreased. The result is a data center that reinforces its built-in inefficiencies. It is extremely rigid, routinely over-provisioned (to meet unpredictable demand), and expensive and time-consuming to reconfigure and scale to meet changing demand.


Technology and innovations that reduce costs are highly sought after, and a software-definable infrastructure is a potential solution. This is where Synergy enters the picture. It is designed to deliver comprehensively on the full promise of a software-defined data center. The intent is to dramatically increase the flexibility to move, change and reconfigure infrastructure resources. Resulting benefits include rapid provisioning and re-provisioning for new applications in the cloud. Today’s rigid structural boundaries shrink or are eliminated. The need for costly over-provisioning disappears. Human and hardware resources become more flexible and efficient. The vision is very attractive, yielding significant organizational benefits.

However, there are some practical considerations and barriers to achieving it. We don’t see anything unsolvable, but let’s examine a few. In the first place: Where to start? What does it cost to implement? How long will it take? Is there an implementation plan template I can use? HPE has a key role in answering these questions, even at this stage.

There are other issues to consider. The basic concept of Synergy challenges the datacenter’s existing stove-piped organization. Its success requires fundamental changes in the way that many IT jobs are done. Inevitably, some will resist such changes, regarding them as job threatening. Prototype projects facing such difficulties may fail. HPE must anticipate and be prepared to resolve such objections and concerns. Its sales force must be trained to discuss how both HPE and potential customers can respond to these as well as any other concerns that will inevitably arise.

Finally, IT is already stressed. Synergy adds to that stress because it requires significant commitment and effort to succeed. HPE should be prepared to help clients evaluate whether the resources needed to succeed are present, and should be able to provide tools, including services for assessment, implementation planning and guidance.

A cautionary note: today Synergy is HPE-centric, built on HPE’s OneView software. Initially, it will most likely appeal to those already committed to OneView. To broaden its appeal, we expect HPE will eventually extend it to other architectures; however, no such plans were discussed.

 Our opinion is that HPE Synergy holds great promise. To the extent that HPE fully understands and is prepared to deliver the effort needed to make Synergy a success, it will succeed. We’re not sure this is exactly the “Data Center of the Future” but it offers a serious vision.

We will follow its progress with great interest. For today, we recommend potential customers monitor Synergy’s development and evolution. This will allow them to determine when, and if, they want to perform a detailed evaluation. We wish good luck to HPE as they move forward.

Tuesday, January 5, 2016

BMC automated mainframe cost management saves money and lowers MLC costs for an insurance company

By Rich Ptak


BMC rightly identified the need for enterprises of all sizes to focus on transformation with Digital Enterprise Management (DEM)[1]. This can take on many different forms including a focus on automating tasks for managing and controlling licensing costs. Here is one example of how this plays out to the benefit of a mid-sized insurance firm.

Mainframe software license charges (MLC), for both system and application software, have been tied directly to consumption, measured in peak MSUs[2] or MIPS[3], since the early days. MSU (or MIPS) consumption ties to workloads and performance, which determine the ability to meet service level agreements (SLAs). MSUs are also used to calculate software licensing charges, serving as a measure of the level of mainframe usage or computing consumed.

Attributing actual MSU usage to individual workloads has always been difficult. Managing mainframe software MLC costs, let alone predicting them, has never been easy. Even in the days of disciplined workloads, calculating the total number of MSUs consumed by any particular combination of workloads at any particular time was the source of sysadmin nightmares. Even the best efforts by vendors and customers end up relying on manual processes that are time-consuming, frustrating and typically unreliable. In today’s world of mobile computing, with unpredictable workload volumes and some 90% of transactions ultimately involving mainframes, the variation is even less predictable. Even relatively small mainframe operations experience dramatic swings in MIPS consumption, driving up costs.

The efforts of one mainframe vendor are changing all that. BMC’s MLC cost management solutions provide the first real opportunity to automatically manage and control MSU peaks. They also provide tools to identify jobs and tasks to tune.

We interviewed the Technical Services Manager of a mid-sized insurance holding company. Using BMC products, Intelligent Capping for zEnterprise® (iCap) and Cost Analyzer for zEnterprise (Cost Analyzer), he can now control MSU peaks, eliminate peak surprises and identify where to concentrate tuning efforts. He reduced peaks from 90 to 75 MSUs, eliminated an annual ‘true-up’ bill in 6 months and plans to further reduce peaks to 63 MSUs.

The Corporate Data Center

The insurance company’s mainframe is the responsibility of a centralized corporate IT group with SLA commitments to support delivery of shared corporate services (e.g. human resource management, networking, billing, invoicing, etc.) to the owned companies. Local IT groups within the various entities handle all other applications.

The company runs a variety of BMC’s DB2, IMS and MainView products to manage and control a relatively small z/OS-based 90 MSU (728 MIPS) mainframe. Other vendors’ products are also installed. Billing usage is determined using the peak 4-hour rolling average of MSU consumption. They pay a fixed monthly charge based on usage by z/OS components. Usage of other IBM® software (IMS™, DB2®, CICS®, MQ®) is covered by a fixed monthly amount defined in an Enterprise License Agreement (ELA) based on an estimated peak MSU. They true up the difference between the actual and estimated MSU usage once per year; this charge has historically ranged between $30K and $50K. Annual MLC charges run in the neighborhood of $1.6M. Changes in workloads prevented more accurate forecasting, making budgeting for the true-up charge and cost management very difficult. Efforts at manual tracking and using other products to control peaks were unsatisfactory.
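
To make the billing mechanics concrete, here is a minimal sketch in Go that computes the peak 4-hour rolling average (the figure sub-capacity charges are keyed to) from a series of MSU samples and compares it with a 75 MSU target. The sampling interval, the sample values and the target are illustrative assumptions, not the customer’s actual data.

```go
// Compute the peak four-hour rolling average (R4HA) from MSU samples.
// The hourly sample interval and the values below are purely illustrative.
package main

import "fmt"

// peakRollingAverage returns the highest average over any run of
// `window` consecutive samples.
func peakRollingAverage(samples []float64, window int) float64 {
	peak := 0.0
	for i := 0; i+window <= len(samples); i++ {
		sum := 0.0
		for _, v := range samples[i : i+window] {
			sum += v
		}
		if avg := sum / float64(window); avg > peak {
			peak = avg
		}
	}
	return peak
}

func main() {
	// Hourly MSU consumption for one business day (illustrative values).
	msu := []float64{40, 45, 55, 70, 88, 92, 90, 85, 60, 50, 45, 40}

	peak := peakRollingAverage(msu, 4) // four one-hour samples = 4-hour window
	fmt.Printf("Peak 4-hour rolling average: %.1f MSUs\n", peak)

	// A software cap (as with iCap) limits consumption so this peak –
	// and hence the monthly bill – stays at or below the target, e.g. 75 MSUs.
	if peak > 75 {
		fmt.Println("Peak exceeds the 75 MSU target; capping or tuning needed.")
	}
}
```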

Our manager was convinced that intelligent capping of peak load MSUs would reduce MLC costs. He also suspected more could be done to further reduce 4-hour rolling average MSU peaks. Better cost control and operations management would be possible with more data and detailed insights to identify specific workloads, jobs and tasks for tuning efforts.

One Insurance Company’s Experience

BMC’s MLC cost management products changed all that. Our Technical Services Manager learned about BMC’s MLC cost control solutions for the mainframe at a BMC seminar. Within four months, his company had purchased and installed Cost Analyzer and iCap. The results were everything they expected.

Cost Analyzer allowed LOB managers to identify the workloads driving up peak consumption, which could then be managed to reduce peak overruns. An update provides even more insight and control; we’ll discuss that later.

For iCap, the goal was to reduce the average peak from 90 to 75 MSUs. There are three operational modes for iCap:

  1. Observe – a learning mode that monitors and collects operational data on workloads.
  2. Message – extends Observe to analyze data and send alerts to recommend changes (using customer specified parameters) to control MSUs and manage costs.
  3. Manage – monitors, analyzes and automatically implements recommended changes.

For the first two weeks after installation, adjustments were made manually based on automatic alerts from iCap. After that, they switched to automatic Manage mode. The product ran for the second half of the fiscal year. At the first post-installation true-up, the charge was zero. With iCap running, they never exceeded the 75 MSU cap. Cap management allowed them to compensate for the first six months (pre-MSU capping) of consumption overruns. This was a major advance in cost control and savings.

As a result, plans are to progressively lower the cap from the current 75 MSUs to 62 or 63 MSUs within four years. With the latest installed version of Cost Analyzer, the manager can drill down for additional detail on the workloads driving MIPS consumption. This allows identification of the specific jobs and tasks to tune to further reduce the load.

Capping consumption and insight into workload group operations will provide even more significant savings in the future. With the knowledge already gained, along with the control available with iCap, they can negotiate better multi-year peak and sub-capacity licensing and billing terms with vendors. The more detailed data and control work with virtually any mainframe software (BMC, CA Technologies, IBM, etc.). They anticipate savings from controlling the cap to exceed $140,000 over 48 months, an estimated 8.75% of their annual MLC charge.

Advice

Our manager strongly advises potential users to leverage BMC’s expertise in the implementation process. Not because the process is particularly complex (it isn’t), but because they found BMC’s support to be excellent beyond expectations. Time spent with BMC before, during and after installation reduced the time to learn and benefit from the products. His team rapidly acquired useful insights into the reports and data through formal and informal sessions with BMC staff. As a result, they quickly developed expertise in using the products to get optimum results.

Conclusion

In this manager’s experience, BMC’s products and support more than met his expectations and needs. He expects to see additional benefits well into the future, even as his workloads shift and change over time. He found that working with BMC support staff accelerated time-to-value while dramatically increasing his team’s expertise and ability to use the new products. He recommends the combination of Intelligent Capping and Cost Analyzer, along with use of BMC’s support services.

Sounds like an all-around win to us.


[2] A million service units (MSU) is a measurement of the amount of processing work a computer can perform in one hour – typically used for mainframes.
[3] Million instructions per second, a measure of a computer's central processing unit performance.

Monday, December 14, 2015

DELL + EMC: 1+1 EQUALS?

By Audrey Rasmussen, Bill Moran and Rich Ptak

Dell recently announced a definitive agreement to acquire EMC, including its approximately 80% ownership stake in VMware. Shortly after, EMC and VMware announced the creation of a joint venture in cloud computing, named Virtustream. Does this mega-merger make sense – for the companies, the industry, customers and shareholders? This report focuses on the potential impact of the acquisition on the companies, the industry (including potential competitors), and customers. We need to qualify this document by noting that the merger is an evolving situation, and developments after the completion of this document will affect many of our conclusions.

Read the report at: http://www.ptakassociates.com/app/download/7242715199/Dell-EMC+Merger+Report+-+FINAL.pdf

Monday, December 7, 2015

POWER8, Linux, and CAPI provide microsecond information processing to Algo-Logic’s Tick-to-Trade (T2T) clients

By Rich Ptak and Bill Moran


Rapid processing of data improves decision-making in trading, research, and operations, benefitting enterprises and consumers. Computer servers accelerated with Field Programmable Gate Arrays[1] (FPGAs) operate at the greatest speeds to collect, analyze, and act on data. As data volumes skyrocket, processing speed becomes critically important.

Algo-Logic[2] leverages the speed of FPGAs to achieve the lowest possible trading latency. Their clients have access to data in 1.5 millionths of a second, enabling them to make better trades. Algo-Logic Systems’ CAPI-enabled Order Book is a part of a complete Tick-to-Trade (T2T) System[3] for market makers, hedge funds, and latency-sensitive trading firms. The exchange data feed is instantly processed by an FPGA. The results go to the shared memory of an IBM POWER8 server equipped with the IBM CAPI[4] card and specialized FPGA technology. Then, in less than 1.5 microseconds, it updates an order book of transactions (buy/sell/quantity).

Stock trading generates an enormous data flow about the price and number of shares available. Regulated exchanges, such as NASDAQ, provide a real-time feed of market data to trading systems so that humans and automated trading systems can place competitive bids to buy and sell equities.  By monitoring level 3 tick data and generating a level 2 order book, traders[5] can precisely track the number of shares available at each price level. Firms using Algo-Logic’s CAPI-enabled Order Book benefit from the split-second differences in understanding and interpreting the data[6] from the stock exchange feed.
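
As a rough software analogy only – Algo-Logic builds its order book in FPGA hardware to reach sub-1.5-microsecond updates – here is a toy sketch in Go of the kind of level 2 book an exchange feed drives: each simplified message adds to, or removes from, the aggregate quantity resting at a price level. The message format and field names are invented for illustration.

```go
// A toy level 2 order book: aggregate share quantity at each price level.
// This is a software illustration only; Algo-Logic builds the book in FPGA
// hardware to reach sub-1.5-microsecond updates.
package main

import "fmt"

// Tick is a simplified, made-up market-data message.
type Tick struct {
	Side  string  // "bid" or "ask"
	Price float64 // price level
	Qty   int     // positive = shares added, negative = cancelled/executed
}

// OrderBook keeps total resting quantity per price level for each side.
type OrderBook struct {
	Bids map[float64]int
	Asks map[float64]int
}

func NewOrderBook() *OrderBook {
	return &OrderBook{Bids: map[float64]int{}, Asks: map[float64]int{}}
}

// Apply updates the book with one tick, removing empty price levels.
func (b *OrderBook) Apply(t Tick) {
	levels := b.Bids
	if t.Side == "ask" {
		levels = b.Asks
	}
	levels[t.Price] += t.Qty
	if levels[t.Price] <= 0 {
		delete(levels, t.Price)
	}
}

func main() {
	book := NewOrderBook()
	feed := []Tick{
		{"bid", 101.25, 500},  // new buy order
		{"ask", 101.50, 300},  // new sell order
		{"bid", 101.25, -200}, // partial execution against the bid
	}
	for _, t := range feed {
		book.Apply(t)
	}
	fmt.Println("Bids:", book.Bids) // shares available at each bid price
	fmt.Println("Asks:", book.Asks)
}
```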

Algo-Logic released their CAPI-enabled Order Book in March 2015. Multiple customers now use it in projects that include accelerated network processing for protocol parsing, financial surveillance systems, algorithmic trading and more, with many proof-of-concept projects underway.

Algo-Logic found success with Linux, POWER8, and CAPI. We expect to write more about Algo-Logic and other OpenPOWER Foundation[7] partners as they continue to develop solutions and as POWER8-Linux systems demonstrate their ability to handle big data at the speeds developers, architects, and users need.




[2] Located in Silicon Valley; see: http://algo-logic.com
[3] See http://algo-logic.com/ticktotrade, also see:  “CAPI Enabled Order Book Running on IBM® POWER8™ Server” at: http://algo-logic.com/CAPIorderbook
[5] We oversimplify stock market operations for clarity. For more details visit the footnotes.
[6] This is High-Frequency Trading (HFT); for more information, see: https://en.wikipedia.org/wiki/High-frequency_trading

Tuesday, December 1, 2015

IBM Watson + Power Systems mainstream Cognitive Computing

By Rich Ptak

Five years ago, powered by IBM POWER7 servers, a master-bedroom-sized Watson broke into public consciousness, making headlines as an undefeated champion against past Jeopardy winners. Since then, IBM has put "Watson to work" with the latest POWER8 technology, OpenPOWER Foundation partners and multiple support centers. IBM is "mainstreaming" Watson. Read what's being done and our take on it at: http://www.ptakassociates.com/content/

Thursday, November 19, 2015

OpenPOWER's Order-of-Magnitude Performance Improvements

By Rich Ptak and Bill Moran

Performance improvements come in different sizes. Often vendors announce a 20% or 30% performance improvement along with an increase in the price/performance of their product or technology. Much more rarely, a vendor delivers an order-of-magnitude improvement. An order-of-magnitude improvement equates to a performance increase of a factor of 10. Improvements on this scale underlie recent[1] technology acceleration announcements[2] by IBM and other OpenPOWER Foundation members.

Why are tenfold performance improvements especially important? Consider this transportation example of what an order-of-magnitude change means. Let’s say a runner can sustain a pace of 10 miles per hour. An order-of-magnitude change raises that to 100 miles per hour, a speed many cars can achieve and maintain. (We aren’t recommending that!) Another order-of-magnitude improvement in speed moves us to a jet airplane at 1,000 miles per hour, and another to a rocket reaching 10,000 mph.

Notice that each order-of-magnitude change does not just increase speed; it dramatically transforms the whole landscape. Moving from the jet to the rocket allows escape from Earth’s atmosphere to go to the moon. This demonstrates the potential importance of order-of-magnitude improvements. The OpenPOWER announcements detail multiple such improvements; let’s examine a few.

One example comes from Baylor College of Medicine and Rice University announcing breakthrough research in DNA structuring[3]. The discoveries were made possible by an order-of-magnitude improvement in processor performance. As reported by Erez Lieberman Aiden, senior author of the research paper, “the discoveries were possible, in part, because of Rice’s new PowerOmics supercomputer, which allowed his team to analyze more 3-D folding data than was previously possible.” A high-performance computer, an IBM POWER8 system customized with a cluster of NVIDIA graphical processing units “allowed Aiden’s group to run analyses in a few hours that would previously have taken several days or even weeks.”

Another example involves IBM’s Watson and NVIDIA’s Tesla K80 GPU system[4]. Watson[5], of course, is IBM’s leading cognitive computing offering, which runs on IBM OpenPOWER servers. NVIDIA’s new system allows Watson’s Retrieval and Rank API to work at 1.7x its normal speed. Wait a minute, you might say: where is the order-of-magnitude change here? 1.7x is impressive, but it’s no order-of-magnitude change.

Almost as an afterthought, IBM mentions that the GPU acceleration also increases Watson’s processing power to 10x its former maximum. So there we have another tenfold improvement in performance, achieved by marrying other technologies to POWER.

Finally, Louisiana State University published a white paper[6] stating that Delta, its OpenPOWER-based supercomputer, accelerates genomics analysis, increasing performance 7.5x to 9x over their previous Intel-based servers. Not quite an order of magnitude, but close.

The announcement includes more examples demonstrating the potential of the OpenPOWER philosophy, the OpenPOWER Foundation and Power Systems to achieve dramatic results across multiple industries. The fundamentals of the POWER architecture lead us to anticipate continued improvements in big data processing. Such developments will accelerate the growth of the Internet of Things and drive fundamental changes in the types of processing possible, like those happening now with cognitive computing.