
Thursday, December 11, 2014

Red Hat’s release of Enterprise Linux 7.1 = little endian hat trick for IBM Power Systems

By Rich Ptak


The beta release of Red Hat Enterprise Linux 7.1 is good news for Enterprise Linux customers, for data centers currently committed to Linux on IBM’s Power Systems platform or Linux on Intel, and for those considering a commitment to the Power platform. Red Hat’s latest version includes support for IBM Power Systems running in little endian mode. This accelerates business application innovation by eliminating a significant and outdated barrier to application portability. Customers with the latest IBM Power Systems can now leverage the significant existing ecosystem of Linux applications previously developed for, and restricted to, x86 architectures. Red Hat joins Ubuntu and SUSE in supporting little endian mode.

This is significant because it increases businesses’ choice, flexibility and access to open-standard solutions. It eases application migration from one platform to another to take advantage of innovation anywhere, at any time. It enables simple data migration, simplifies data sharing (interoperability) with Linux on x86, and improves I/O offerings with modern I/O adapters and devices, e.g., GPUs.

Big endian versus little endian byte ordering originally let application developers maximize performance by exploiting differences in processor architectures. The difference also worked to the advantage of proprietary-minded vendors: by tying applications more tightly to specific platform architectures, it reduced application portability. Important in the last century, all this changed under the pressure of open computing.
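
To see the portability problem concretely, consider a minimal Python sketch (our illustration, not tied to any product discussed here) showing how the same 32-bit value is stored in opposite byte orders, and how reading bytes written in one order as if they were the other silently corrupts the value:

    import struct
    import sys

    value = 0x0A0B0C0D  # a 32-bit integer

    # Pack the same value in both byte orders.
    big_endian = struct.pack(">I", value)     # most significant byte first
    little_endian = struct.pack("<I", value)  # least significant byte first

    print(big_endian.hex())     # 0a0b0c0d
    print(little_endian.hex())  # 0d0c0b0a

    # Misreading big endian bytes as little endian silently corrupts
    # the value -- the classic cross-platform data-sharing bug.
    misread = struct.unpack("<I", big_endian)[0]
    print(hex(misread))         # 0xd0c0b0a, not 0xa0b0c0d

    print(sys.byteorder)        # byte order of the host running the script

Little endian mode on Power means Linux applications and their on-disk or on-wire data behave the same way they do on x86, which is why the mode removes a porting barrier.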

As the movement to embrace Open Standards/Open Software/Open architectures grew, the demand for application portability in an increasingly complex operating environment changed the dynamics of the market. It also changed the style of computing with the proliferation of interacting, interdependent transactions, Cloud, dynamic infrastructure and adaptive applications.
 
Data center heterogeneity has become the norm, making easy interaction and communication across multiple different architectures critically important. As new generations of machines, data centers and enterprises merged, openness became the watchword dominating the market. It’s our opinion that this combination of Red Hat Enterprise Linux and Power Systems can accelerate business innovation, eliminate portability challenges, and solve IT challenges for companies of all sizes.

Thursday, December 4, 2014

CA Technologies Refines its DevOps Portfolio to Fit Today’s Application Economy

 By Rich Ptak 


The ‘Application Economy’ has taken over. Both business and consumer-oriented markets have become fully digitized, revolving around and depending upon digital technologies in every aspect of operations, from creation through delivery and post-delivery services. From one-on-one tracking of consumers to record habits, to behavioral analytics that identify the ‘next big thing’, an app is available or in the works to inform, monitor, manage and deliver services to the customer. Real- and near-real-time analysis of consumer and customer behavior is being used to modify and create new services ‘on-the-fly’. Customer expectations of services and the buying experience are escalating radically. The impact on the enterprise continues to be dramatic. CA Technologies, along with other vendors, has taken notice, radically altering its own products and how they are created, implemented and packaged.

For enterprises, the result is enormous pressure to deliver high-quality responses to evolving demands. To satisfy these demands, enterprise IT product teams must rapidly deliver high-quality, agile, resilient apps. This makes IT DevOps teams today’s critical operational area. Enterprise success depends upon their ability to cooperate, coordinate and integrate to consistently deliver products.

However, these teams have historically conflicting cultural backgrounds and performance metrics. Development’s success is measured in terms of agility, delivery speed, feature richness, time-to-market, and standardization. Operations’ success is measured in terms of consistency of delivery, process, order, and protection of legacy and proprietary uniqueness. Both want to contribute to the achievement of enterprise goals, but the very nature of their respective tasks complicates cooperation. CA’s DevOps[1] portfolio addresses that challenge by facilitating team interactions.

 

CA DevOps Portfolio


CA Technologies groups some sixty different DevOps products into three area-focused portfolios. These areas have been identified as key to building DevOps team cooperation and contributing to their success. The areas, along with some of the benefits that accrue to DevOps teams and enterprise efforts, are:

·        Agile Parallel Development – products that speed the creation of high-quality applications by better managing access to scarce resources across development, test and integration teams, allowing them to work in parallel and cutting the time needed to deliver solutions;

·        Continuous Delivery – libraries of integrations used to build tools that automate and orchestrate the application release and deployment process, reducing manual errors, lowering overall costs and speeding the realization of business value;

·        Agile Operations – supports an exceptional customer experience by minimizing service interruptions and speeding problem resolution, using end-to-end monitoring and in-depth visibility to accelerate the discovery, diagnosis and resolution of disruptive events and anomalies.

Here’s how CA Technologies summarizes its new product groupings.


CA Technologies is not stopping here. It also announced CA Mobile App Analytics[1] (MAA) as part of its DevOps for mobile product suite. The product is designed to understand and optimize the customer experience with mobile applications. Using detailed data collected across the entire lifecycle of the application and the end-user experience, it builds a comprehensive view of that experience, the health of the application and the processing of customer interactions. Captured data can be used to identify and investigate performance or other issues with the app. It also gives application teams insight into customer usage and behaviors, helping to identify new app services, potential features and necessary fixes.
 

The Final Word

The CA DevOps Portfolio and DevOps for mobile product suite demonstrate significant progress toward effectively meeting customer needs, i.e. fully integrated product suites targeted at resolving significant, painful IT and enterprise problems. This extends across CA’s offerings. For example, CA Application Performance Management (APM) integrates mobile app and user data from CA Mobile App Analytics (MAA) to give insight into complex applications and link the mobile app user’s experience to data center operations. IT teams can access performance details from the mobile app through middleware/infrastructure components to the backend mainframe or database.
 
By listening carefully to customers, CA is identifying and tackling pressing problems. They aren’t the only vendor to embrace this customer-centric approach, but they are among the more successful in adapting their sales and marketing messages to match their offerings. We think CA customers will find much to like as the company moves forward with the offerings now available. Well-deserved kudos go to CA Technologies on the success of its efforts to date.



[1] https://tinyurl.com/nw6crke


Wednesday, November 26, 2014

IT in Transition

By Rich Ptak
 

I’ve spent more than a few years working with technology vendors and business and enterprise IT staffs as they work together to deploy IT infrastructure and resources effectively to achieve enterprise goals. Change, to a greater or lesser extent, is ongoing, and that makes IT’s job of fast, reliable service delivery challenging and frustrating, but ultimately rewarding.

 
We’re now at a time when changes in technology, knowledge, market conditions and ability to exploit technology are upending even basic operations of IT and the enterprise in fundamental ways. Technologies themselves are shifting and evolving, becoming easier to access and apply – altering market dynamics as they eliminate the barriers that kept competition at bay even as they enable ever more complex solutions and accelerate product life-cycles.

 
These same technologies, ironically enough, increase IT’s ability to deliver new services, allowing IT to do more, process faster and analyze better than ever, even as they force IT to rethink how it creates and applies technology to solve enterprise problems.

 
For IT, the change is even more critical: it faces escalating expectations of rapid response and short delivery cycles at the same time that it faces increasing competition from agile, rapid-development service providers, some of which may already be working inside the enterprise as contractors, gaining intimate knowledge of the problems frustrating business managers.

 
No market segment is exempt. Financial, research, manufacturing, retail and the rest are all challenged by radical, disorienting alterations in operational processes, tactics and relationships forced on them in every aspect of their business. The changes may be more dramatic and visible in some situations, but they are pervasive.

 
Users expect faster, near-immediate response to requests for new services or changes to existing ones. Once a weeks- or months-long process, the creation, development and testing of a new app has been condensed to: code (or the equivalent) first, implement, then fail and fix as needed. And the market is accepting this. Not that major crashes or mistakes don’t happen; they do. But if the recovery is quick enough, the impact is minimized and consumers move on. Of course, this isn’t true for all situations, but it is a real phenomenon.

 
Established enterprises find themselves facing new, more agile and aggressive competitors that come from surprising directions; think about how the markets for telephones, communication services, and video content creation and access have changed in the last few years. Enterprises are forced to develop and define new ways to create products, deliver services, and generate and account for revenue. IT is caught squarely in the crosshairs and must adapt.

 
On the positive side, advances in existing and emerging technologies allow more to be done, even as increasing knowledge and maturity have led to advances in the methodologies of development, deployment, testing and management. These increase the capabilities and efficiency of IT staff, letting them do much more and get it done quickly. But it is up to IT to educate itself and work with business colleagues to acquire the insight and understanding of cross-enterprise operations that will let IT apply its efforts to benefit the enterprise. That, after all, is the only real reason for having an IT group.

Thursday, November 20, 2014

The Challenge of Hybrid Infrastructure – pick your partner carefully

Submitted by Richard Ptak on Nov 20, 2014. IT is being challenged to rapidly reallocate resources in response to unpredictable fluctuations in demand for services. Demands to increase the quality and quantity of IT services while simultaneously speeding up the development cycle for those services are not new. The problem is the speed and scale of response now necessary to satisfy these demands.

Read the rest at: http://www.enpointe.com/blog/challenge-of-hybrid-infrastructure-pick-your-partner-carefully

Monday, November 10, 2014

IBM Enterprise2014 – Infrastructure, System z and Power in the spotlight

By Rich Ptak

IBM’s Enterprise2014 attracted 3,600 (35%) more participants than last year, even without the attraction of System x (now part of Lenovo). The theme, ‘The Infrastructure for Cloud, Data & Engagement’, highlighted the vital role IT infrastructure plays in successful enterprises of all types. It focused on how infrastructure can and does benefit all enterprise operations. Customers elaborated on IT-enabled contributions from development through to delivery and support of services and products. IBM® executives, including Tom Rosamilia, SVP for IBM Systems & Technology and IBM Integrated Supply Chain, his direct reports, and business (marketing, sales, etc.) and technical staff, were accessible during and after meetings, presentations and forums.
The event brought together an Executive Summit, an MSP/CSP Summit and 3 Technical Infrastructure Universities covering System z, Power Systems and System Storage. Senior executives, operations staff and academics mingled during breakouts and out-of-hours sessions. It was clear to us that attendees enjoyed and benefitted from the event.

General Observations
The Solutions and Services Showcase spotlighted the innovative use of infrastructure by IBM and its partners to create solutions for education, monitoring, management, development, operations, analytics, Big Data, mobility, etc. Knowledgeable product experts fielded pointed questions, providing insight into the capabilities and applications of the infrastructure.

The showcase was complemented by sessions on new products, solutions and initiatives that discussed practical business issues as well as taking deep dives into technologies, solutions and services. A significant majority of sessions were given by customers and partners discussing and demonstrating how IT infrastructure made their enterprise activities better, more effective and more productive. Especially valuable were the practical insights gained from presenters during Q&A and post-presentation chats.

We found Linux, System z and Power particularly interesting. Let’s take a look at Linux first.

New initiatives and more focus on Linux. IBM is aggressively pursuing the Linux[1] market. Numerous customer success stories attested to the level of Linux interest and activity. Power and System z staff members involved in strategy, development and products spoke enthusiastically of current and planned initiatives targeting Linux. For the mainframe, IFLs (Integrated Facility for Linux on System z) already represent 40% of total MIPs shipped in FY13[2], including net new customers and workloads. Adding Linux focus to Power should make next year’s number significantly higher.

IBM’s Power and System z offerings bring real strengths to Linux, for example: 1) support for all significant Linux versions; 2) two modern platform architectures covering the market from mid-size to the largest enterprise; 3) documented, easy movement of (thousands of) existing Linux applications; 4) impressive performance figures; 5) Power Systems’ elimination of ‘endian’ issues (for those that care); and 6) new-to-Linux capabilities such as LPARs and I/O caching. These, plus making a clear case for business value, will help them build market share.

System z supports mobile, analytics, Linux and more. System z[3] continues to increase MIPs shipped by adding net new customers, net new workloads and new usage cases. Despite mainframe ‘Gloom & Doomers’, IBM and, more significantly, its customers show considerable interest in System z for intensive computing. Studies from multiple sources indicate that any massive exit off the mainframe is not imminent.

Enterprises want IT support in agility, cloud, high-speed data/analytics, security and mobile services. For operations at the right scale, the mainframe offers competitive advantages in each of these[4]. Customers describe benefits in real-time, high-speed analysis of transaction data as well as in cloud services.
IBM continues efforts to lower the cost of mainframe computing, where software licenses are based on MIPs used. For example, for selected software IBM allows essentially cost-free increases in MIPs consumption if the increase results from mobile apps accessing the mainframe. More than one customer indicated to us that this was a very good deal for them.

System z can simultaneously run every major Linux version in VMs. Demonstrations of how quickly many thousands of Linux applications can be moved to zLinux clinch its role as a major platform for that market.

Post-event financials showed flat mainframe revenues. Year-over-year and quarter-to-quarter increases in MIPs shipped reflect increasing compute capacity at lower prices rather than a dying market. We expect the announcement of a new family of mainframes in the next fiscal year to lead to better numbers.

Power® Systems on the rise. Watson, the POWER8 processor, CAPI (Coherent Accelerator Processor Interface), more Linux moves – Power Systems[5] have had a busy year. Watson expanded into new business roles including concierge travel (planning) services, predicting meteor positioning, high-powered analytics and cloud-based services. Depending upon whether you agree with Elon Musk on the existential risk in the rise of cognitive computing, you will be either appalled or thrilled to hear that Watson questions (argues with) itself during its learning process.

IBM introduced mid-size POWER8-based[6] systems last spring. Now, large enterprise[7] systems will begin shipping November 18th with increased performance, computational speed and power. For example, Power S824L systems (up to 8 TB of memory) run data-intensive tasks on POWER8 processors while offloading compute-intensive workloads to GPU accelerators. IBM also targets Power systems at specific market segments (private/hybrid cloud, high availability, VM management, analytics, security, etc.). Attractive trade-up and transition packages from POWER7 and POWER7+ to POWER8 systems should boost sales.

IBM positions POWER8 as the evolutionary successor to the x86. Support for open standards, public APIs, OpenStack[8] and the OpenPOWER™ Foundation[9], along with licensing of the POWER8 chip to third-party development partners, is attracting more users to POWER8. It is the only fully open architecture shipping today, and the Foundation’s 62 members are developing solutions based on it.

Speed of service, economics and efficiency remain major enterprise and IT concerns. Sophisticated data-analysis capabilities are critical to getting actionable insight and information to users. Recent research has shown just how high the cost of data movement and conversion (ETL) among different platforms can be. Collecting and processing data on a single platform makes sense.

IBM is promoting ‘do your analysis where the data sits’. To that end, IBM made sure that both System z and Power Systems have exceptional abilities in data acquisition, data storage and analytics. Both teams endorse the message. Even more reassuring, they help the customer decide which platform is appropriate to their situation.

We’ve got next year’s Enterprise2015, May 11-15, Las Vegas, in our calendar. We suggest that you do the same.

Infrastructure Orchestration – IT’s path to customer satisfaction!

By Rich Ptak


Consumers are rapidly adopting mobile computing. This is forcing solution and service providers to turn to cloud, sophisticated analytics and emerging technologies to speed the development and delivery of new services. The resulting disruption of enterprise and IT operations makes staff utilization, efficiency and infrastructure optimization a major issue. The solution lies in increasing orchestration[1], integration and automation across the enterprise.

What’s the source of all the complication? It comes from:

1.     Data centers that automatically expand and contract services to meet ‘spiky’ service demands;

2.     Trading-floor apps that must be rapidly evolved to maintain a competitive edge for longer than 72 hours;

3.     Complex process automations that ease access to technology, altering market dynamics by expanding consumer choice and raising competition to global levels;

4.     Users a click away from alternative services, suppliers and products.

These combine to drive demands for faster delivery of evolving solutions/services while forcing prices and costs down. To compete, the enterprise and IT must be fast, agile and adaptable. Traditional automation can only provide a starting point. Focusing on isolated tasks leaves IT and enterprise operations susceptible to bottlenecks, inconsistent response times and sporadic failures.

IT itself, its infrastructure and its operations are now primary influencers of the customer experience. This alters IT performance metrics, forcing process changes in everything from development to delivery. Each process step must execute smoothly and quickly to meet end-user expectations. Infrastructure must be able to adapt and redirect quickly. It must scale up or down to match changing conditions. Workloads must be shifted and new systems spun up to meet unpredictable transaction volumes, service requests or unexpected shifts in computing demand.

Integrated end-to-end orchestration provides the answer by bringing together multiple interdependent tasks and functions (both business and IT) so they operate more effectively. For example, in dev/ops it can start with automating system configuration and provisioning for development, then extend to test, deployment and production, as the sketch below illustrates. Or it can combine business and IT functions: IT automates and centralizes data collection, with a mobile device reading inventory data as a worker walks a warehouse and transmitting it to a centralized repository for inventory control, capacity planning, purchasing, accounting, etc.
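
As a deliberately simplified sketch of the dev/ops example above (ours alone, not any vendor’s API; every function and environment name is hypothetical), an orchestrator sequences automated steps and carries context between them, rather than leaving each as an isolated task:

    # Hypothetical orchestration pipeline: each step is an automated task;
    # the orchestrator sequences them and passes context along.

    def provision(env):
        print(f"provisioning systems for {env}")
        return {"env": env, "hosts": ["host-1", "host-2"]}

    def configure(ctx):
        print(f"configuring {len(ctx['hosts'])} hosts in {ctx['env']}")
        return ctx

    def run_tests(ctx):
        print(f"running test suite in {ctx['env']}")
        return ctx

    def deploy(ctx):
        print(f"deploying to {ctx['env']}")
        return ctx

    def orchestrate(env):
        ctx = provision(env)
        for step in (configure, run_tests, deploy):
            # A production orchestrator adds retries, rollback and auditing.
            ctx = step(ctx)
        return ctx

    # The same flow extends from development through test to production.
    for environment in ("dev", "test", "prod"):
        orchestrate(environment)

The point is not the code but the pattern: once the steps are automated and chained, extending the flow to a new environment is a small configuration change rather than a new manual procedure.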

IT, focusing on user expectations, must now operate in a mode of continuous innovation, breaking traditional patterns. Service requests for new systems, services or development environments must be satisfied in near real-time.
 
Orchestration gives IT the opportunity to break down and work across the functional silos that isolate enterprise functions. IT and business functions work together to identify opportunities to apply existing expertise and procedures to resolve the problems and challenges inherent in enterprise operations.
 
As interest grows, tools and solutions for piecemeal automation proliferate. A number of ‘integrated’ solutions exist that, in reality, integrate only at the ‘pane of glass’ UI. There are also solutions composed of hastily assembled collections of tools lacking any coherent, supportive architecture.
 
Implementing a comprehensive orchestration solution requires significant experience and sophistication, along with an investment in software and hardware. Until recently such an investment was affordable only for large enterprises. This is changing as vendors scramble to satisfy interest.

The larger, experienced vendors, such as IBM[2] (sponsor of this blog), are making access to their latest orchestration solutions easier and more attractive to a wide range of customers. New offerings are appearing all the time. Interested buyers should exercise due caution as they review their options and investigate this rapidly evolving pathway, one that provides competitive advantage today and will be necessary for survival tomorrow. You can follow our comments and observations on orchestration here.




[1] Follow our video and white paper commentaries on orchestration here: http://www.ptakassociates.com/it-orchestration/


[2] IBM sponsored this paper; see more on their offerings at: http://www-03.ibm.com/software/products/en/ibm-cloud-orchestrator

Thursday, November 6, 2014

Hybrid Application Performance Testing - Apica

Introduction and Primary Challenges

Today, IT finds itself at the crossroads of major game-changing technology shifts: the explosive rise of cloud and mobile computing.
The promise of cloud scalability, flexibility and agility is driving enterprises to move applications out of the office and into the cloud. Simultaneously, mobile computing and social media are transforming and increasing interactions between companies and their customers/end users. Organizations are developing new mobile and social media applications that reach out to engage users as digital extensions of their sales, marketing and customer service efforts. These modern, hybrid applications must be flexible enough to perform reliably across a variety of devices and computing environments, and scalable enough to maintain peak performance under heavy loads.
As IT staff move applications to the cloud and mobile, many discover that capacity testing is not the same as testing applications in traditional environments. Applications cannot always scale up, even when running on scalable cloud infrastructures. This becomes painfully clear when applications that performed adequately in traditional environments “break” when subjected to higher loads and dynamic scaling in the cloud.
Hybrid application performance testing must look seamlessly at end-to-end application performance as it travels across diverse environments, and performance must be optimized for each environment along the application delivery chain. This requires advanced, proactive planning and QA testing throughout the development lifecycle. Just as suspension bridge engineers painstakingly test and build strength and resiliency into bridge designs to handle the weight loads of actual use, modern application developers must also meticulously test and build performance-ready hybrid applications.

The unique nature of modern, hybrid applications running across diverse environments demands a different approach for testing performance.
This paper examines some of the most important considerations for modern application performance testing and planning.

A Multi-Faceted Performance Testing Approach

Performance testing must use a multi-faceted approach that goes beyond solely measuring response time. This approach is outlined in the sections below.

Understanding Application Characteristics

The first step to hybrid application performance testing is understanding the performance characteristics of your application.
Knowing how an application performs under various load levels and in each environment (cloud, mobile, traditional data center, etc.) enables managing a hybrid application’s performance proactively, in support of business goals.

The Load Curve

A load curve is the most important measurement in a load test. Understanding how your application performs along the load curve lets operational IT staff know when increased resources are required to prevent unacceptable performance degradation and poor end-user experiences.
The load curve diagram shows application response times against increasing load. If throughput doubles at the same rate as the number of users doubles, response times stay constant. Performance testing helps identify when infrastructure capacity limits begin impacting performance, enabling IT operations staff to proactively avert issues.
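
The constant-response-time claim is an instance of Little’s Law, N = X × R (concurrent users = throughput × response time), so R = N / X. A small Python sketch with made-up numbers (the 400 requests-per-second capacity is purely illustrative) shows the flat region of the load curve and the knee where capacity saturates:

    # Little's Law: N (users) = X (throughput, req/s) * R (response time, s),
    # so R = N / X. While throughput scales with users, R stays flat;
    # once throughput saturates, R grows linearly with load.

    CAPACITY = 400.0  # hypothetical saturation point, requests/second

    def response_time(users, per_user_throughput=4.0):
        throughput = min(users * per_user_throughput, CAPACITY)
        return users / throughput  # seconds

    for n in (10, 50, 100, 200, 400):
        print(f"{n:4d} users -> {response_time(n):.2f}s")
    # Up to 100 users R holds at 0.25s; past saturation (100 users
    # * 4 req/s = 400 req/s) R climbs to 0.50s and then 1.00s.

In other words, the interesting measurement is not response time at a single load but where the curve bends, because that knee marks the capacity limit IT staff must plan around.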

To read the full paper, click on this link:  http://www.ptakassociates.com/content/