Monday, February 9, 2015

IBM’s Billion Dollar Investment supports Linux as it Propels new Markets & Services

By Rich Ptak

In 2013, IBM announced its $1B investment in Linux, fueling new ideas and solutions in the process. Most of the investment has focused on the POWER architecture and is beginning to pay off in multiple areas, with new solution vendors looking to take advantage of chip and memory advancements.

POWER8, the latest payoff, demonstrates IBM’s commitment to drive significant change in the Linux market. An interesting investment, but why would an IT architect or developer care? As an IT professional involved with Linux, IBM’s investment and its results can make your job easier and may directly benefit your career.

We think POWER8’s open hardware, currently the only major server architecture opened to the market, will repeat the commercial success of the open software model. Open software revolutionized the software business. In hardware, openness is already inspiring creativity as it drives ISVs, integrators and enterprises to work on the POWER8 Linux platform. This increases platform choice for customers and creates more opportunities for developers. Our series of blogs will discuss more of the opportunities we see from IBM’s investments, what they may look like and how to take advantage of them.

Also growing from the investment is a steady build-up of Linux expertise at IBM’s client and partner support centers worldwide[1]. The more than 1,200 ISV applications currently available on POWER8 can be partially attributed to these centers. The centers also offer free help with such activities as migrating apps[2] (e.g. between Intel and Power Systems), thus reducing risk, facilitating cross-platform communication, code conversion and/or code development for POWER8 Linux. In addition, IBM offers developers free cloud access[3] to Linux on POWER8 platforms. IBM’s Bluemix cloud can also be used to assist POWER8 Linux development.

The OpenPOWER Foundation[4] is another major expansion of IBM support for POWER8/Linux. Created by IBM, Google[5], Mellanox, NVIDIA and Tyan, the Foundation allows member companies to leverage POWER technology and architecture to develop products. Rapid growth to ninety members (Samsung, Rackspace, Hitachi, Lawrence Livermore National Laboratory, etc.) and academic associates (Rice University, Oregon State University, etc.) adds to the momentum.

Foundation member efforts have already successfully brought a number of products to market. For example, Redis Labs, Altera, Canonical, and IBM collaborated to produce the IBM Data Engine for NoSQL. NVIDIA and IBM cooperated to produce the IBM POWER S824L server with GPU acceleration. Both were announced last October. You can expect more product announcements in coming months.

We believe that these are among the most significant developments to date in the Linux world. Future blogs will explore POWER8 technology and services in further detail, including what they mean for architects, developers and other users.

[1] IBM Client Centers; IBM Innovation Centers; POWER Development Platform.
[2] Note that RHEL (in beta), SUSE and Ubuntu all support little endian Linux on Power.
[4] More information about OpenPOWER, including a discussion of the exciting projects underway, is available from the Foundation.
[5] Google has demonstrated its own motherboard with a POWER8 processor.

Wednesday, January 14, 2015

IBM z13: Redefining the Mainframe

By Rich Ptak

After celebrating 50 years of mainframe success, IBM was not about to rest on its laurels. Rather, it was time to shake up the market! That is exactly what it is doing with the launch of the IBM z13, the high-end of a new generation of mainframes! With a brand-new chip design, more features, expanded memory, greatly increased cache and more openness than ever, the IBM z13 is full of good news.

There are too many changes in this latest incarnation of the mainframe to cover in detail. We offer a few nuggets to show what drives the demand for mainframe computing and what the IBM z13 delivers. Mainframe innovation continues unabated, with over 7,000 mainframe-related patents issued since 1964, more than 500 of them last year.

The mainframe’s success spans multiple workload types, including transaction processing, data serving and mixed workload processing. It delivers leading-edge operational efficiency, sets the standard for performance in trusted and secure computing, and remains legendary in reliability, availability and resiliency, all in a package with virtually limitless scalability.

These strengths continue, but computing has changed. New workloads and increasingly sophisticated ways of using and accessing technology have emerged. Today’s users and applications demand specialized capabilities in a platform designed for and able to provide:
  • World-class data and transaction handling specifically for a mobile generation;
  • Integrated transaction and analytics for right-time insights at the point of impact and optimal application;
  • An efficient and trusted cloud that transforms and improves IT economics.

IBM z13 is designed from the ground up to support these tasks.

IBM designed capabilities into the IBM z13 to optimize cloud support in all implementations, enhance big data and analytics processing, expand support for enterprise mobile applications, and build on existing world-class security. These capabilities define and drive the next generation of computing. The full impact of IBM z13 features, capabilities and functions will be analyzed over the coming months. Here are a few of the highlights we know today.

Real-time analysis reduces the time to get actionable insight and information from very large datasets. Linux, Java and zIIP workloads perform faster, reducing the number of systems required and improving economics. Users get real-time reporting as analytics workloads run faster with accelerated processing. More data can be kept on-line and accessible for analysis, with advanced, accelerated data compression further reducing storage costs.

Linux developers and architects benefit from enterprise-grade Linux with access to previously z/OS-exclusive functionality, such as IBM zAware for real-time analytics-based system monitoring, IBM GPFS (announced earlier), and planned future delivery of the GDPS virtual appliance.

A new, uniquely designed eight-core processor chip lies at the heart of the IBM z13. The system has a modular, drawer-based design built on 22nm silicon technology and offers up to 141 configurable cores. It supports up to 10TB of RAIM memory, and data services (handling, access, analysis, etc.) are optimized with newly redesigned, larger caches. Data encryption, including IPSec and SSL, benefits from an augmented CP Assist for Cryptographic Functions (CPACF). Single Instruction Multiple Data (SIMD) instructions allow larger and more complex mathematical models that reach results more quickly.
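The SIMD idea itself can be sketched in a few lines. This is only an illustrative model of the programming pattern, one "instruction" applied across several data elements ("lanes") at once, not a representation of the z13's actual vector hardware or instruction set; the lane width and function names here are invented for illustration.

```python
# Toy model of SIMD: apply one operation to a group of lanes per step,
# instead of one element per step as in a scalar loop.

def scalar_add(a, b):
    # Scalar version: one add per iteration (N steps for N elements).
    out = []
    for x, y in zip(a, b):
        out.append(x + y)
    return out

def simd_add(a, b, lanes=4):
    # "SIMD" version: process `lanes` elements per step (N/lanes steps).
    out = []
    for i in range(0, len(a), lanes):
        out.extend(x + y for x, y in zip(a[i:i + lanes], b[i:i + lanes]))
    return out

a = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
b = [10.0] * 8
print(simd_add(a, b))  # same result as scalar_add, in a quarter of the steps
```

The hardware wins come from performing each multi-lane step in a single clock cycle, which is what makes the larger analytical models mentioned above practical.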

Cost effectiveness and efficiency improve with the economies of scale from large increases in throughput for workloads using Linux and zIIP specialty engines. Required disk space and data transfer times shrink as a result of improvements in on-chip hardware compression. Later this year, the IBM z13 will add KVM virtualization support to the existing Red Hat and SUSE virtualization options. Computing costs are lower for medium- to large-scale implementations, as the IBM z13 can support up to 8,000 virtual servers on a single system.
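The storage effect of compression is easy to see in software, even though on the z13 the work is offloaded to on-chip hardware rather than a library call. A minimal sketch using Python's standard zlib on repetitive, log-like data (the record format is invented for illustration):

```python
import zlib

# Repetitive data -- transaction logs, telemetry, analytics extracts --
# compresses very well, which is what shrinks disk space and transfer times.
record = b"2015-01-14,txn,OK,latency_ms=12;" * 1000

compressed = zlib.compress(record)
print(f"raw: {len(record)} bytes, compressed: {len(compressed)} bytes, "
      f"ratio: {len(record) / len(compressed):.0f}:1")
```

The ratio depends entirely on the data's redundancy; the point is that when compression is done by dedicated hardware, these savings come without the CPU cost a software call like this one incurs.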

The list of improvements continues. We will cover these and other enhancements in future blogs and papers. IBM summarizes how the IBM z13 is reinventing enterprise IT for the digital business as follows:
  • Designed for data and transaction serving for the mobile generation
  • Designed for integrating transactions and analytics for insight at the point of impact
  • Designed for efficient and trusted cloud services to transform the economics of IT
In our opinion, this next year of mainframe computing will be extremely interesting for both the user community and IBM.


Friday, January 9, 2015

BMC TrueSight Capacity Optimization 10.0

BMC recently released BMC TrueSight Capacity Optimization 10.0, which expands its capabilities to help manage capacity and keep IT aligned with the digital business in today’s dynamic, hybrid environments. The new product name reflects BMC’s new “TrueSight” product branding, but the latest version also substantively evolves BMC’s capacity management solution.

Capacity Management Matters Even More

At first glance, pairing capacity management with the dynamic scalability of Cloud computing seems like an oxymoron. However, with Cloud computing’s “pay for what you use” model, the cost benefit of optimally timing the scaling of resources up and down can add up to significant savings. Additionally, when Cloud is combined with other technology trends, they collectively increase the importance of capacity management.
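The savings from well-timed scaling are simple arithmetic. A back-of-the-envelope sketch, with the hourly rate, instance counts and peak window all invented for illustration:

```python
# Hypothetical pay-per-use scenario: 40 instances are needed for 6 peak
# hours a day, but only 10 during the other 18 hours.
hourly_rate = 0.50                  # $ per instance-hour (hypothetical)
instances_peak, instances_base = 40, 10
peak_hours_per_day = 6
days = 30

# Option 1: stay provisioned for peak around the clock.
always_peak = instances_peak * 24 * hourly_rate * days

# Option 2: scale down outside the peak window.
scaled = (instances_peak * peak_hours_per_day +
          instances_base * (24 - peak_hours_per_day)) * hourly_rate * days

print(f"provisioned for peak 24x7: ${always_peak:,.0f}/month")
print(f"scaled with demand:        ${scaled:,.0f}/month")
```

Under these made-up numbers the bill drops by more than half, which is why knowing *when* capacity is needed matters as much as knowing how much.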
Trends like Social media, agile development, Web and Mobile are significantly increasing the volume of interactions and data while speeding up process cycles. As businesses innovate faster and develop more apps to reach existing and potential customers, IT capacity requirements fluctuate dynamically, and the pace of change accelerates to the point where it’s difficult to keep track of it all.
In addition, applications running in corporate data centers, in the cloud, or both complicate matters even more.
BMC’s latest release (10.0) of TrueSight Capacity Optimization aims to help address the challenges of capacity optimization in a dynamic, fast-paced environment.

Reservation-aware Capacity Optimization

Reservation-aware capacity optimization, new in release 10.0, is the ability to incorporate IT resource reservations from planned projects into capacity plans. With insight into the timing of future IT resource requirements, IT staffs can plan intelligently, ensure they deliver on IT resource commitments to their business counterparts, and optimize the balance between cost and service quality.
IT staffs gain a more complete view of future capacity requirements with 10.0 because it combines planned future demand (reservation-awareness) with the current capacity/usage planning (based on actual performance and monitoring data) already delivered by TrueSight Capacity Optimization.
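The underlying idea of combining the two views can be sketched in a few lines. This is a hypothetical model, the project names, numbers and structure are invented for illustration and are not TrueSight's actual data model or API:

```python
from datetime import date

# Current usage comes from monitoring data; reservations come from
# planned projects that have not yet started. (All values hypothetical.)
current_usage_cpu = 620             # cores in use today
pool_capacity_cpu = 1000            # total cores in the capacity pool

reservations = [
    {"project": "mobile-portal", "cpu": 150, "starts": date(2015, 3, 1)},
    {"project": "analytics-poc", "cpu": 300, "starts": date(2015, 6, 1)},
]

def projected_usage(as_of):
    """Current usage plus every reservation active by `as_of`."""
    return current_usage_cpu + sum(
        r["cpu"] for r in reservations if r["starts"] <= as_of
    )

for when in (date(2015, 2, 1), date(2015, 4, 1), date(2015, 7, 1)):
    used = projected_usage(when)
    status = "OVER CAPACITY" if used > pool_capacity_cpu else "ok"
    print(when, used, status)
```

Even this toy version shows the value: the July projection breaches the pool's capacity months in advance, while a plan based on current usage alone would show plenty of headroom.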

Capacity Pool View

Also new in release 10.0 is the Capacity Pool View, a dashboard providing at-a-glance status views of capacity pools. It graphically displays usage, risk and efficiency metrics for each capacity pool. See Figure 1 below. The Capacity Pool View gives IT staffs better visibility into the status and risk of their capacity pools, enabling them to better manage current and future capacity.

Extending Cloud Support

BMC TrueSight Capacity Optimization 10.0 now integrates with OpenStack-based clouds via a built-in connector to the OpenStack Nova APIs. As OpenStack continues to gain traction in the market, this new integration extends BMC’s capacity management reach to a broader set of cloud infrastructures.

Our Final Perspective    

In today’s increasingly competitive and fast-paced business climate, IT staffs have to deliver well-performing, high-quality IT services faster and better. BMC TrueSight Capacity Optimization 10.0 extends IT’s capacity visibility by combining the impact of future IT demand with existing capacity through its reservation-aware capacity optimization. This gives IT staffs more confidence that they can support new IT-dependent business initiatives as they come onboard.
That added visibility should, in turn, reduce the IT capacity-dependent risks of new business initiatives. IT staffs can deliver sufficient IT resources for new initiatives while wisely timing the delivery of services for cost efficiency.
The new features in version 10.0 move BMC TrueSight Capacity Optimization forward in helping customers more effectively manage capacity in today’s dynamic, hybrid cloud environments. This extends and builds on BMC’s established legacy in capacity management and supports the company’s vision of helping transform the digital enterprise.

Wednesday, January 7, 2015

Compuware Topaz – Mainframe Software for the 21st Century

By Rich Ptak

A newly privatized Compuware is setting out to significantly impact the mainframe marketplace. The changes started before the acquisition by private equity firm Thoma Bravo. Compuware now focuses exclusively on mainframe software products, while the distributed application performance management products reside in spin-off Dynatrace. This makes sense, as escalating use of technologies like Cloud, Mobile, Big Data/Analytics and Security is increasingly recognized as generating natural mainframe workloads. Some 80% of the world’s corporate data originates on the mainframe, with some 30 billion business transactions executed every day. Wise CIOs are re-examining their existing mainframe infrastructure, but many are not, as they face two problems:

1.     Lack of experience with and knowledge about the mainframe itself impedes understanding its current utility, as well as its potential;

2.     Mainframe expertise is becoming a scarce commodity among computer architects, developers and operations staff.

These are big problems with no easy answers, and they are exactly what Compuware has decided to attack with the release of Topaz, a developer productivity solution designed to help a new development workforce increase its understanding of mainframe data and applications.

Topaz is the first product release from the new Compuware, and it establishes a brand-new direction for mainframe product vendors. It is targeted specifically at enhancing the mainframe productivity of developers, operations staff and architects who lack deep expertise on the platform. It does this with Open Standards technologies, design goals that include simplification, and a deep understanding of both mainframe and non-mainframe environments.
Topaz is designed to allow non-experts to improve the operational efficiency and performance of mainframe applications without becoming experts in the intricacies of mainframe internals. In addition to its standards-based technologies, its key functionalities include a universal data editor, a relationship visualizer and host-to-host copy capabilities. Let’s examine the need Compuware wants to address, and then we’ll discuss what they deliver.
Read our complete take at:


Thursday, December 11, 2014

Red Hat’s release of Enterprise Linux 7.1 = little endian hat trick for IBM Power Systems

By Rich Ptak

The beta release of Red Hat Enterprise Linux 7.1 is good news for Enterprise Linux customers, for data centers currently running Linux on IBM’s Power Systems platform or on Intel, and for those considering a commitment to the Power platform. Red Hat’s latest version includes support for IBM Power Systems running in little endian mode. This accelerates business application innovation by eliminating a significant and outdated barrier to application portability. Customers with the latest IBM Power Systems can now leverage the significant existing ecosystem of Linux applications previously developed for, and restricted to, x86 architectures. Red Hat joins Ubuntu and SUSE in supporting little endian mode.

This is significant because it increases business’ choice, flexibility and access to open standard solutions. It eases application migration from one platform to the other to take advantage of innovation anywhere, and any time. It enables simple data migration, simplifies data sharing (interoperability) with Linux on x86, and improves I/O offerings with modern I/O adapters and devices, e.g. GPUs.

Big endian/little endian operating modes originally allowed application developers to maximize performance by exploiting differences in processor architectures. The difference also worked to the advantage of proprietary-minded vendors: by reducing application portability, it tied applications more tightly to specific platform architectures. Important in the last century, all this changed under the pressures of open computing.
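The portability barrier comes down to byte order: the two modes store the same multi-byte value in opposite byte sequences, so binary data written on one architecture cannot simply be read on the other. A minimal sketch using Python's standard struct module:

```python
import struct

value = 0x01020304  # a 32-bit integer

big = struct.pack(">I", value)     # big endian: most significant byte first
little = struct.pack("<I", value)  # little endian: least significant byte first

print(big.hex())     # 01020304
print(little.hex())  # 04030201

# Reinterpreting big endian bytes as little endian yields a different
# number entirely -- the root of the cross-platform data problem.
(wrong,) = struct.unpack("<I", big)
print(hex(wrong))    # 0x4030201
```

With Power Systems now able to run Linux in little endian mode, the same x86 byte ordering applies on both sides, and this class of conversion disappears for migrated applications and shared data.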

As the movement to embrace Open Standards/Open Software/Open architectures grew, the demand for application portability in an increasingly complex operating environment changed the dynamics of the market. It also changed the style of computing with the proliferation of interacting, interdependent transactions, Cloud, dynamic infrastructure and adaptive applications.
Data center heterogeneity has become the norm, making easy interaction and communication across multiple different architectures critically important. As new generations of machines, data centers and enterprises merged, openness became the watchword dominating the market. It’s our opinion that this combination of Red Hat Enterprise Linux and Power Systems can accelerate business innovation, eliminate portability challenges, and solve IT challenges for companies of all sizes.

Thursday, December 4, 2014

CA Technologies Refines its DevOps Portfolio to Fit Today’s Application Economy

By Rich Ptak

The ‘Application Economy’ has taken over. Both business and consumer markets have become fully digitized; they revolve around and depend upon digital technologies in every aspect of operations, from creation through delivery and post-delivery services. From one-on-one tracking of consumers to record habits, to behavioral analytics that identify the ‘next big thing’, an app is available or in the works to inform, monitor, manage and deliver services to the customer. Real- and near-real-time analysis of consumer and customer behavior is being used to modify and create new services on the fly. Customer expectations of services and the buying experience are escalating radically. The impact on the enterprise continues to be dramatic. CA Technologies, along with other vendors, has taken notice, radically altering its own products and how they are created, implemented and packaged.

For enterprises, the result is enormous pressure to deliver high-quality responses to evolving demands. To satisfy these demands, enterprise IT product teams must rapidly deliver high-quality, agile, resilient apps. This makes IT DevOps teams today’s critical operational area. Enterprise success depends upon their ability to cooperate, coordinate and integrate to consistently deliver product.

However, these teams have historically conflicting cultural backgrounds and performance metrics. Development measures success in terms of agility, delivery speed, feature richness, time-to-market and standardization. Operations measures success in terms of consistency in delivery, process, order, and protection of legacy and proprietary uniqueness. Both want to contribute to the achievement of enterprise goals, but the very nature of their respective tasks complicates cooperation. CA’s DevOps portfolio addresses that challenge by facilitating team interactions.


CA DevOps Portfolio

CA Technologies groups some sixty different DevOps products into three area-focused portfolios. These areas have been identified as key to building DevOps team cooperation and contributing to their success. The areas, along with some of the benefits that accrue to DevOps teams and enterprise efforts, include:

  • Agile Parallel Development – products that speed the creation of high-quality applications by better managing access to scarce resources by development, test and integration teams, allowing them to work in parallel and cutting the time needed to deliver solutions;

  • Continuous Delivery – libraries of integrations used to build tools that automate and orchestrate the application release and deployment process, reducing manual errors, lowering overall costs and facilitating the realization of business value;

  • Agile Operations – aims to protect the customer experience by minimizing service interruptions and speeding problem resolution, using end-to-end monitoring and in-depth visibility to accelerate discovery, diagnosis and resolution of disruptive events and anomalies.

 Here’s how CA Technologies summarizes their new product groupings.

CA Technologies is not stopping there. It also announced CA Mobile App Analytics (MAA) as part of its DevOps for mobile product suite. The product is designed to help understand and optimize the customer experience with mobile applications. Using detailed data collected during the entire lifecycle of the application and end-user experience, it builds a comprehensive view of the experience, the health of the application and the processing of customer interactions. Captured data can be used to identify and investigate performance or other issues with the app. It also gives application teams insight into customer usage and behaviors, helping identify new app services, potential features and necessary fixes.

The Final Word

The CA DevOps Portfolio and DevOps for mobile product suite demonstrate significant progress toward effectively meeting customer needs, i.e. fully integrated product suites targeted at resolving significant, painful IT and enterprise problems. This extends across CA’s offerings. For example, CA Application Performance Management (APM) integrates mobile app and user data from CA Mobile App Analytics (MAA) to give insight into complex applications and link the mobile app user’s experience to data center operations. IT teams can trace performance details from the mobile app through middleware and infrastructure components to the backend mainframe or database.
By listening carefully to customers, CA is identifying and tackling pressing problems. It isn’t the only vendor to embrace this customer-centric approach, but it is among the more successful in adapting its sales and marketing messages to match its offerings. We think CA customers will find much to like as the company moves forward with the offerings now available. Well-deserved kudos go to CA Technologies on the success of its efforts to date.


Wednesday, November 26, 2014

IT in Transition

By Rich Ptak

I’ve spent more than a few years working with technology vendors and business and enterprise IT staffs as they work together to deploy IT infrastructure and resources effectively to achieve enterprise goals. Change, to a greater or lesser extent, is ongoing, and that makes IT’s job of fast, reliable service delivery challenging and frustrating, but ultimately rewarding.

We’re now at a time when changes in technology, knowledge, market conditions and the ability to exploit technology are upending even basic operations of IT and the enterprise in fundamental ways. Technologies themselves are shifting and evolving, becoming easier to access and apply. They are altering market dynamics, eliminating the barriers that kept competition at bay, even as they enable ever more complex solutions and accelerate product life-cycles.

These same technologies, ironically enough, increase IT’s ability to deliver new services, with capabilities that allow IT to do more, process faster and analyze better than ever, even as they force IT to rethink how it creates and applies technology to solve enterprise problems.

For IT, the change is even more critical: IT faces escalating expectations of rapid response and short delivery cycles at the same time that it faces increasing competition from agile, rapid-development service providers. Those service providers may, in fact, already be working inside the enterprise as contractors, gaining intimate knowledge of the problems frustrating business managers.

No market segment is exempt. Financial, research, manufacturing, retail and others are all challenged by radical, disorienting alterations in operational processes, tactics and relationships, forced on them in all aspects of their business. While the changes may be more dramatic and visible in some situations, they are pervasive.

Users expect faster, near-immediate response to requests for new services or changes to existing ones. Once a weeks- or months-long process, the creation, development and testing of a new app has been condensed to: code (or the equivalent) first, implement, then fail and fix as needed. And the market is accepting this. Not that major crashes or mistakes don’t happen; they do. But if recovery is quick enough, the impact is minimized and consumers move on. Of course, this isn’t true for all situations, but it is a real phenomenon.

Established enterprises find themselves facing new, more agile and aggressive competitors coming from surprising directions. Think about how the markets for telephones, communication services, and video content creation and access have changed in the last few years. Enterprises are forced to define new ways to create products and deliver services, and to rethink how they generate and account for revenue. IT is caught squarely in the cross-hairs and must adapt.

On the positive side, advances in existing and emerging technologies allow more to be done, even as increasing knowledge and maturation have led to advances in the methodologies of development, deployment, testing and management. These increase the capabilities and efficiency of IT staffs, letting them do much more and get it done quickly. But it is up to IT to educate itself and work with business colleagues to acquire the insight and understanding of cross-enterprise operations that will enable IT to apply its efforts to benefit the enterprise. That, after all, is the only real reason for having an IT group.