Pages

Friday, August 12, 2016


Compuware ISPW speeds and simplifies source code management for this Telecom Service Provider!


Source Code Management (SCM) has been a challenge for mainframe development and operations staff for a long time. For too many years the limitations of the VSAM file structure dictated how and what could be achieved with process-based code management.

Check out our Case Study (on our Content page http://www.ptakassociates.com/content/ ) describing how a major US Telecom Service Provider improved accuracy, increased productivity and sped up its source code management with Compuware's ISPW.

Thursday, June 16, 2016

Cognitive Computing – fitting the platform to the application

By Rich Ptak


Recently, it has been popular to assert that IT professionals and business staff need no longer concern themselves with IT infrastructure. Among other reasons, the claim is that with the growth of cloud computing and commoditization, infrastructure no longer matters. The assertion is that a general-purpose architecture provides all the processing flexibility and power needed to deliver a full range of services.

We are convinced that this view is short-sighted and wrong. It ignores the changing dynamics of computing as Moore's Law runs down and technology evolves. It focuses on traditional performance metrics while ignoring the realities disrupting IT and how IT is designed, implemented and operated.
It trivializes the difficulty of evolving and delivering systems able to meet the requirements of the high-speed, data-intensive, highly scalable, adaptable workloads associated with emerging technologies such as genomics and nanotechnology. And it ignores the need for, and interest in, open, standards-based, systems-oriented infrastructure that can intelligently adapt and optimize for evolving workloads.

Identifying the Future of Infrastructure

There are enterprise and IT professionals who recognize and understand the implications of the extraordinary demands placed on infrastructure as a result of the combination of today’s competitive market and evolving technologies.

With the IBM IT Infrastructure Point of View[1] (POV) website, IBM offers these professionals "thought leadership" opinions and insights that promote and support the role of IT leaders in planning the use of technology to achieve organizational success. The target audience is those who not only understand the demand, but also seek to add to their knowledge in order to better prepare themselves and their organizations to meet future challenges.

Today's digital enterprises are being challenged to meet escalating expectations of clients and customers for extraordinary performance, dynamic scalability, robust adaptability and rapid innovation in the delivery of services and products. Such demands cannot be met with infrastructure and systems compromised to serve lowest-common-denominator needs. Nor can they be met with the 'static' configurations and fixed architectures of the very recent past.

Meeting the evolving demands of the data- and compute-intensive digital enterprise requires a systems infrastructure for server and storage operations that can be intelligently optimized. The infrastructure must be optimized to deal with emerging, evolving styles of computing. It must be flexible enough to integrate and interact with emerging technologies while still interoperating with existing operating environments. It must also be intelligently and cognitively adaptable to meet the emerging demands of whatever workload it takes on.

IBM’s Point of View on IT Infrastructure for Cognitive Workloads

With the explosion of Cognitive computing, its hybrid cloud platforms, Power and z Systems, storage solutions and experience in leading edge technologies and solutions, IBM is uniquely positioned to work with clients to help to shape the future of their computing operations. Enterprise IT must not only get the most from their existing infrastructure but must also act to leverage new cognitive capabilities and take advantage of emerging technologies.

Today's systems-oriented solutions depend upon server and storage technologies that can be combined with software-driven cognitive abilities, such as IBM's Watson. Cognitive systems have the potential to understand, reason, learn and adapt to changes in their operational environment and workloads. IBM is working with clients, customers and partners to push the boundaries of what is possible with an IT infrastructure optimized for cognitive workloads.

In recognition of all this, IBM is publishing a Point of View (PoV) about IT infrastructure and cognitive workloads. It describes in significant detail how IBM, in conjunction with its partners, will advise and work with customers to aid them in their efforts to organize and plan in order to gain the most advantage from their IT systems and storage infrastructure.

As would be expected, key to this approach is the IBM solutions portfolio of z Systems, Power Systems, IBM Storage, hybrid cloud services and software-driven cognitive computing (à la Watson).
This strategy is built around three principles:
  1.  Design for Cognitive Business – to allow action at the speed of thought,
  2.  Build with Collaborative Innovation – to accelerate technology breakthroughs,
  3.  Deliver through a Cloud Platform – to extend the value of systems and data.
IBM has multiple projects (both under way and completed) where cognitive computing has provided the key factor in achieving competitive advantage, financial performance and enterprise success. They involve enterprises and organizations in a wide variety of markets. The projects have accelerated time to insight with infrastructure deliberately designed and architected for unstructured data at companies in banking, oil and gas exploration, and academia. They have sped up development of new solutions while cutting time-to-market. They have provided infrastructure optimized to run specific workloads with unique business requirements for customers ranging from government agencies to healthcare services.

We won’t steal any more of IBM’s thunder. We suggest that you visit the IBM IT infrastructure site to review the details. In our opinion, IBM appears to be well ahead of its competition with its comprehensive, customer-centric view and vision. They are also uniquely positioned to speak on this topic. They have both the cutting edge technology and significant real-life implementation experience with products to demonstrate their ability to deliver on their visions for the future of cognitive computing supported by intelligent infrastructure.

Tuesday, June 14, 2016

Acceleration, Collaboration, Innovation - IBM's roadmap for its POWER architecture

By Bill Moran and Rich Ptak


Vendors know the wisdom of publishing a product roadmap. Users want to know the planned future
of the products that they might invest in. They also want insight into how the vendor sees the product evolving.

So, IBM has reason to present the POWER architecture's future to potential customers and partners. Having successfully persuaded many companies to sign up for OpenPOWER systems, IBM must address questions concerning the future of the product's architecture. IBM laid out its architectural strategy along with some specifics on its future. We discuss key takeaways from that presentation.



NVLink will be available in systems later this year and be carried forward in future systems. Notice that NVLink and CAPI are both specialized technologies for boosting certain kinds of performance. Combined with various architectural changes, they compensate for the rundown of Moore’s Law. Expect to see more such technologies in the future.

The current POWER8 system architecture is based on a 22 nm chip with 12 cores. The architecture was announced in 2014, and IBM plans to continue with that base until mid-2017. The major enhancement to the current version is the addition of NVIDIA NVLink. This acts as an extremely high-speed interconnect between the chip and an NVIDIA GPU. The link delivers 80 GB per second in each direction, 5 to 12 times faster than today's fastest available link. The NVIDIA GPU accelerates floating point and other numerically intensive operations that are common in cognitive computing, analytics, and high performance computing.



Featuring partner-developed microprocessors in a roadmap is unique to IBM. It dramatically underscores the vitality of OpenPOWER activities. To our knowledge, no other hardware vendor has achieved anything like this!

IBM and its partners will build systems to take maximum advantage of this link, which allows parallel processing using NVIDIA GPUs. IBM identified two additional partners, Zoom Netcom and Inventec Corporation. Zoom is a China-based system board developer. Inventec is a Taiwan-based server and laptop company. We expect both of these companies to be working on systems for their focus areas.

In mid-2017, IBM will begin rolling out POWER9, a 14 nm chip versus today's 22 nm POWER8. IBM will first introduce a 24-core scale-out system. This will be followed sometime later with scale-up versions; there was no statement on the number of cores in scale-up systems. The POWER9 systems will feature a new micro-architecture built around 24 newly redesigned cores, with a number of high-speed cache and memory interconnects, including DDR4 direct-attach memory channels, PCIe Gen4 and custom accelerators from IBM and its partners.

In the 2018 to 2019 time period, IBM expects its partners to announce chip offerings based on IP from both POWER8 and POWER9, using 10 to 7 nm technology. Partners will be targeting offerings to their own specialized market segments.

While IBM avoided any claims, we expect these systems’ shrinking chip technology will have some dramatic effects (upward) on demand. It’s also clear the partners expect to gain significant competitive and business advantages from their efforts.

POWER10, expected sometime after 2020, is the next large step into the future. IBM provided no details on features or performance, which is typical for a product at least four years away.


IBM will offer two POWER9 families. Initially the focus will be scale-out systems with a maximum of 24 cores. Later, scale-up systems will be added, presumably with a larger number of cores. They will share a common architecture.

This roadmap shows that IBM, along with other members of the OpenPOWER Foundation, is developing POWER at an increasing rate. Remember, IBM's POWER group faces a number of unique and new challenges. No other major vendor has ever attempted to develop new hardware in collaboration with numerous partners in the manner of the OpenPOWER Foundation.

Also, since selling its chip production facilities to GlobalFoundries, IBM's POWER team must negotiate with an outside company. We believe IBM's POWER team is doing an exceptional job of coping with these difficulties. If they deliver on the items in this roadmap, the architecture should remain competitive. Chips developed by other companies are both a roadmap highlight and an effective demonstration of the OpenPOWER Foundation's strength.

Here is our simplified version of the road-map (with acknowledgement to IBM):

  Today        2H 2016          2017               TBD         2018-2019           2020+
  Power8       Power8           Power9             Power9      Power8/9            Power10
  12 cores     12 cores         24 cores           ? cores     --                  --
  CAPI         NVLink + CAPI    Scale-out,         Scale-up    Partner developed   --
                                new architecture
  22 nm        22 nm            14 nm              14 nm       10-7 nm             --

Tuesday, May 24, 2016

Java on the Mainframe, big problems ahead? Not if BMC can help it!

By Rich Ptak

Today's dynamic, mobile-obsessed, service-driven market is proving both beneficial and problematic for data center operations. Conventional "wisdom" has it that distributed and mobile devices have been the big winners. In truth, the benefit increasingly applies to mainframe environments as well. And Java on the mainframe is playing a significant role, maybe larger than is generally known.

The mainframe remains an active, effective and in-demand player in today’s DevOps, agile and mobile-oriented world. Why? Because much of the critical data, information and assets that support the most used applications found in banking, financial services, retail, travel and research, resides and is analyzed there. Mobility-obsessed operations remain linked to and dependent upon mainframe operations.

That's not to say problems don't exist. Transaction volumes (often non-revenue producing) have exploded. Unpredictable traffic loads and patterns, the complexity of multi-platform integrations, demands for instant response time, etc. have made mainframe environments more difficult to manage, disrupting maintenance and operations. Yet mainframe teams are expected to deliver modernized, mobile applications faster and at lower cost.

One response was to put Java on the mainframe. Its features make it highly attractive. It is tailored for rapid development cycles. Designed for mobile/web application development, it is platform independent. It integrates easily with a variety of applications, operating environments and databases. The Java Native Interface (JNI) on z allows easy interaction with embedded program logic coded in multiple languages (COBOL, PL/1, ASM) and environments (CICS, WAS, IMS, DB2, USS, Batch, MQ, TCP/IP). In agile computing and DevOps, Java dominates among programmers and developers as the preferred environment.
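To make the JNI point concrete, here is a minimal, hypothetical sketch of the pattern: a Java class declares a native method whose implementation lives in compiled legacy logic (for example, a COBOL or PL/1 routine wrapped in a z/OS DLL). The class, method and library names are ours, chosen for illustration only.

```java
// Hypothetical illustration of the JNI bridge pattern described above.
// The library "policyrate" and the ratePolicy() method are invented names;
// on z/OS the native side would typically be COBOL or PL/1 code built into a DLL.
public class PolicyRatingBridge {

    static {
        System.loadLibrary("policyrate");   // load the native (legacy) library
    }

    // Declared in Java, implemented in native code reached through JNI.
    private native double ratePolicy(String policyId, int coverageCode);

    public static void main(String[] args) {
        double premium = new PolicyRatingBridge().ratePolicy("POL-0001", 42);
        System.out.println("Premium from native rating logic: " + premium);
    }
}
```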

Java’s Hidden Threat

Unfortunately, few mainframe-experienced staff have extensive Java familiarity or expertise. This means the potential for major problems lurks in the background. Java was not designed to operate in the mainframe's shared environment. Java does have built-in code to monitor AND manage resources. For instance, it manages memory with a process of 'garbage collection': it identifies memory actively being used, gathers and compacts it, then frees the rest. It does not check for the impact on other programs. In a mainframe environment, these actions have the potential to seriously disrupt operations, freezing some jobs and delaying the completion of others.

Moreover, during this activity Java pauses ALL in-flight transactions, not just those of a single app. Nor does it check for the impact of its actions on other applications or technologies. Compounding the problem, there has been no integrated view across system technologies for monitoring and management. In fact, some Java can be running in the data center without all staff being aware of it.

Reflecting the increasing acceptance of Java on the mainframe, BMC's recent mainframe survey reveals Java in use or planned for use in 61% of DB2 applications, 57% of CICS applications and 49% of IMS applications[i]. This is a serious situation. Tools to manage Java itself exist, but there is no fully integrated tool to monitor and manage the impact of what it is doing. BMC addresses this gap with MainView for Java Environments (MVJE). Let's look at what it offers.

BMC’s MainView for Java Environments

MVJE provides much-needed functionality. It does not monitor Java code per se. It monitors the infrastructure to detect the effects of Java activity. Specific functionality includes:

  • Automatic discovery of z/OS JVMs (early users were often surprised at the number actually in use),
  • Monitoring of real-time metrics for the z/OS Java runtime environment to detect the impact of Java activities, e.g. CPU usage, garbage collection metrics, memory usage data, etc. (a generic sketch of reading such metrics follows this list),
  • Analysis to detect the workload impact of Java-initiated management activities (combined with Unix System Services (USS), it can initiate activities to address potential problems, e.g. thread-use problems),
  • Optimization of operations through integration with MainView monitoring for cross-technology analysis,
  • Customizable dashboard views of Java runtime metrics.
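MVJE's own instrumentation is proprietary, so the following is only a generic sketch of the kinds of runtime metrics the second bullet refers to. Any standard JVM (including those on z/OS) exposes garbage collection and heap figures through the java.lang.management API; a monitoring product builds on data like this.

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

// Generic illustration (not MVJE code): sampling a JVM's own GC and heap metrics.
public class JvmMetricsSample {
    public static void main(String[] args) {
        // Garbage collection counts and accumulated collection time, per collector.
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("GC %-20s collections=%d time=%dms%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }

        // Heap usage: the figures a monitor tracks to spot memory pressure.
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        System.out.printf("Heap used=%dMB committed=%dMB max=%dMB%n",
                heap.getUsed() >> 20, heap.getCommitted() >> 20, heap.getMax() >> 20);
    }
}
```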


Much Java code is zIIP-eligible; automated discovery along with monitoring of zIIP offload assures no zIIP-eligible code is missed. The additional data on infrastructure impact, resource usage, performance, etc. helps to avoid problems even as it speeds diagnosis and eventual resolution. This reduces the need for cross-functional "war room" meetings used to identify, diagnose and resolve Java-caused problems that impact application availability and performance.

MVJE quickly discovers and monitors JVMs. It pinpoints Java's resource usage, so application performance and availability continue to meet service levels. IT teams can quickly identify the root cause of problems, reducing MTTR and improving productivity. MVJE monitoring of zIIP offloading helps to lower MLC.

MVJE delivers all the normal benefits associated with BMC's MainView product, including a single, integrated view of system activities. The risks associated with unmonitored, unmanaged technology are eliminated. More efficient monitoring and assured, effective use of zIIPs help to reduce MLC and capital costs. Intelligent automation allows proactive resolution of problems, again saving costs and improving overall system performance. Complexity is reduced as users can customize dashboards and reports to meet their specific information needs.

 The Final Word

We've discussed how Java's built-in resource and memory management, operating in the background, unmonitored and unmanaged, can increase costs, slow processing and waste resources. BMC's MVJE is the first full system monitoring solution to address these risks.

System admins can now gain actionable insight into Java’s impact on infrastructure, resource usage and operations. Java becomes another well-monitored and managed technology.

Beta customers appear to be very satisfied with the product. A number revealed that they had experienced significant savings and improved performance as a result of using MVJE.  We’re not surprised.

BMC is providing existing MainView customers a free Java Discovery. We look forward to interviewing some customers after they have gained experience with the product. We expect some will be surprised at the result, as they believed themselves to be 'Java-free'. We also believe it will result in a significant number of sales. BMC has once again demonstrated its connection with its customers and its commitment to being a leader in mainframe solutions.




[i] Source: 2015 BMC Mainframe Research Results

Friday, May 20, 2016

Got something to say about the Mainframe? Check this out: BMC launches 11th Annual Mainframe Research Survey

Got something on your chest about the mainframe? Familiar with your organization's mainframe environment? Why you use it? Its benefits? Where it is going? Its future? Have you ever wanted to tell a major vendor (and the world) about the mainframe in your enterprise? Here's your chance.

From May 24th until June 6th, BMC is collecting data for its 11th survey of trends in mainframe usage. Already one of the largest industry surveys, with over 1,200 mainframe professionals and executives participating, it is seeking an even larger number of participants this year.

The research results will be used by vendors, technical and executive users, industry analysts, media, etc. to make significant decisions and draw conclusions on just about everything mainframe. The report will influence investments, products (new and enhanced), hiring, functionality, etc.

The mainframe is a critical backbone with impact across industries and markets from mobility to analytics to complex modeling and the ongoing transformation of digital business.


So, if you're technical IT staff involved in mainframe management or operations, or part of a mainframe IT team as an executive, manager or technical architect recommending general management or operational practices, this is your chance to take 20 minutes to contribute to the conversation and influence the future of the mainframe.


Starting May 24th, you can take the survey here!

Tuesday, May 10, 2016

Datto Drive: SMB desktop data protection at a hard to refuse price!

By Rich Ptak


Datto provides enterprise-grade backup, restore and recovery services in its privately owned 200+ petabyte cloud. Founded in 2007, its 600+ employees build products and support customers from nine data centers and seven offices located around the world. It performs over one million backups every day, protecting millions of endpoints. In addition to its private cloud, all the devices it uses and provides are Datto's own products.

Datto is all about Business Continuity and Data Recovery (BCDR) for the SMB. Its success to date has been built on its private hybrid Datto Cloud, used for backup/restore, advanced storage, instant virtualization (local and remote), screenshot verification (to remotely verify backup data integrity) and on-prem file sync-and-share (FSS). All are delivered through an international network of thousands of Managed Service Providers (MSPs).

Datto expects its next big step forward, Datto Drive, to carry it deeper into the SMB market with in-cloud FSS and BCDR. Before we provide more details on the product, why should you want to know those details?

In its introductory year, Datto is making available:
  • One million Datto Drive accounts for free to:
      - SMBs (business accounts only, no personal users)
      - For one year
      - With one terabyte of data storage (all managed by Datto in the Datto Cloud).
  • After one year, the offering changes to:
      - $10 per month per domain (NOT per user; the price holds no matter how many users)
      - Service delivered through a Datto MSP partner (which they'll help you find, if necessary)
      - Premium versions for larger storage volumes and services are available. Premium services are available (for a fee) during the first year.

Given that competitors' prices are higher on a per-user basis, let's see what Datto's functionality includes.

Datto Drive

Datto Drive brings highly affordable sync-and-share plus full backup/restore and disaster recovery for desktop and mobile devices to SMBs. Datto is targeting the portion of the SMB market (less than one third) not currently working with MSPs, which is sorely in need of comprehensive, enterprise-grade FSS and BCDR services.

For the price, Datto Drive offers enterprise-grade FSS built on ownCloud open-source technology. It provides the superior security of Datto's hybrid cloud with advanced capabilities in permission management, tracking and tracing. There has been no proven data loss since Datto's founding in 2007.

Datto Drive supports virtually every type of file (video, image, audio, text, etc.). File sharing, control and management can be done from any supported device (desktop, deskside, mobile). It permits real-time collaboration for sharing, exchange, editing, etc. across domain users. Sync-and-share capabilities are already available for most existing operating systems, including iPhone, Android, iPad, Windows, Linux and Mac. The ownCloud technology means that thousands of value-add apps are available. Finally, it also includes backup for Microsoft 365, OneDrive and SharePoint files.

Final Word


There's a lot more functionality and things to like about Datto Drive and the rest of the product portfolio. We suggest a visit to Datto.com to see all that is available. With competitors such as Dropbox and Box charging $15/user/month for similar services, this offer appears to us to be hard to resist. SMB owners should also move quickly to snag one of the 1 million domain accounts. It's our opinion that they'll disappear quickly.

Friday, May 6, 2016

Defining Hybrid Cloud: A View from Above


By Audrey Rasmussen, Rich Ptak and Bill Moran

Looking at Hybrid Cloud from a Different Perspective

The definition of hybrid cloud is evolving as the cloud market progresses and the transition to hybrid cloud continues. The emergence of hybrid cloud brings with it a multitude of new delivery models, services, technologies and more. This wealth of options provides potential hybrid cloud customers with lots of choices. But, it can also be confusing, which elevates the need for relevant detailed definitions.
Business requirements are a key driver for adopting cloud computing. Yet, most hybrid cloud definitions primarily focus on technical descriptions. A technical definition is appropriate and useful for cloud implementers and technical teams. However, it falls short at clarifying hybrid cloud to potential business users.
For example, defining a “jet plane” by describing the engine and overall plane design is a valid approach. However, it misses the important business impact of jet planes in revolutionizing commercial air travel.  Similarly, defining hybrid cloud in strictly IT terms neglects the economic value and business implications of adopting it.
This is particularly important because unless business users understand the value and potential the hybrid cloud delivers, they are unlikely to reap optimal benefits from it. Consequently, their support for IT’s hybrid cloud efforts may be unenthusiastic.
Defining the hybrid cloud is complicated because each implementation is unique, as it fulfills specific business requirements. Since business needs drive adoption, business users must understand what a hybrid cloud can offer and mean to them. This knowledge can transform how business stakeholders innovate and design new business services. In order to reach both business consumers and technical implementers, we believe a broader definition of hybrid cloud is necessary.
We begin with our explanation of hybrid cloud for business cloud consumers.

The Aerial View of Hybrid Cloud: Business View

Some business staff may think that only the technical IT team needs to understand cloud computing. Although hybrid clouds are enabling technologies, it is equally important for business leaders to understand how they can fundamentally impact and/or change business models and operations. Why? Because realizing the hybrid cloud's full advantage requires a new, expanded way of thinking about the business and its possibilities.
Cloud computing enables an organization to extend its capabilities beyond the "walls" of the company and frequently beyond the expertise within the company. For example, traditionally, computers and information technology were company owned and resided in corporate data centers. Companies now have the option to pay service providers to use remote computer resources on demand. Greatly expanded resources become available within minutes or hours, something not always possible in corporate data centers.
Cloud computing has evolved and will continue to evolve as the variety of cloud services explodes beyond today's technology and business resource limits. Diverse cloud service offerings run the gamut from business applications, industry-specific data (for example, medical data), cloud development platforms, advanced analytics and video processing to weather data, Twitter data and much more. Today, businesses are able to access diverse services, extending their capabilities far beyond the data residing within their company and the expertise of their employees. This eliminates in-house limitations on capabilities, making possible what is impossible to do in-house, and more. It provides businesses with vast opportunities to innovate creatively, beyond what they can accomplish within the constraints of their companies' internal capabilities. Business leaders need to understand that this is what hybrid cloud can deliver.
An example helps illustrate the creation of a new business service using a variety of hybrid cloud services. Imagine a car insurer's customer-facing application that uses customer policy data (residing on an internal cloud) to determine vehicle coverage. The application uses traffic information (from cloud service provider A) to warn the customer of an accident just ahead of their location, a potential traffic hazard or slowing freeway traffic. The application also uses weather data (from cloud service provider B) to warn customers that they are heading directly into potential hail, tornado or other adverse weather conditions. A map service (from cloud service provider C) provides alternative directions avoiding the hazard.
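A hypothetical sketch of that composition is shown below. The service interfaces and names are invented for illustration; a real application would call each provider's actual API, but the shape of the composition is the point.

```java
// Hypothetical composition of one internal and three external cloud services.
// All interfaces and names below are invented for illustration only.
interface PolicyService  { String coverageFor(String customerId); }             // internal cloud
interface TrafficService { String hazardNear(double lat, double lon); }         // provider A
interface WeatherService { String severeWeatherAhead(double lat, double lon); } // provider B
interface MapService     { String detourAround(String hazard); }                // provider C

public class DriverAlertService {
    private final PolicyService policies;
    private final TrafficService traffic;
    private final WeatherService weather;
    private final MapService maps;

    public DriverAlertService(PolicyService p, TrafficService t, WeatherService w, MapService m) {
        this.policies = p; this.traffic = t; this.weather = w; this.maps = m;
    }

    // Compose the four services into a single customer-facing alert.
    public String alertFor(String customerId, double lat, double lon) {
        String coverage = policies.coverageFor(customerId);      // internal policy data
        String hazard   = traffic.hazardNear(lat, lon);          // traffic feed
        String storm    = weather.severeWeatherAhead(lat, lon);  // weather feed
        String detour   = maps.detourAround(hazard != null ? hazard : storm); // map service
        return "Coverage: " + coverage + " | Hazard: " + hazard
             + " | Weather: " + storm + " | Suggested detour: " + detour;
    }
}
```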
In similar ways, businesses of all kinds are able to compose innovative business services that utilize internal and external cloud services. This changes how business leaders think about innovating and developing business services. Just as botanists create a new hybrid plant by selecting and combining the best plant characteristics, business leaders can create new, innovative hybrid cloud-based services by selecting among the best available services.
A hybrid cloud makes available resources where ownership is not feasible, justified or possible, whether for operational, cost or other reasons. It makes collaboration possible without risking production environments. Hybrid cloud can lower the cost of operations, development, sales, marketing, and research and development. It opens up otherwise unavailable opportunities by making it possible to use capabilities on a temporary or exceptional basis. It can allow global market access without a global presence. A hybrid cloud allows access to resources and capacity as and when they are needed, from public or community clouds that can generally provide services at lower cost than private infrastructure.
Although the basic definition of hybrid cloud sounds simple, there are technical issues that IT teams must attend to behind the scenes in order to implement it, while keeping it simple and seamless. A hybrid cloud requires business leaders and IT to work as a team.

The Ground Level View of Hybrid Cloud: Technical View

As mentioned, many technical descriptions of hybrid cloud are already available. There exists little need for extensive additional discussion here.
For a very simple working definition, we describe a hybrid cloud as an environment that connects at least two independent cloud services from any source. It can consist of public cloud services, private cloud services or third-party-delivered private cloud services in any combination. Public cloud services come in many "flavors": for example, they may be on- or off-premises, include multiple enterprises (e.g. a community) with access to the same resources, or have co-resident users working in 'private' spaces. Private cloud services are enterprise-owned or -controlled cloud services whose access is controlled by the enterprise or an enterprise-authorized entity. An additional major benefit of hybrid cloud is that it protects a company's current investments in infrastructure.

Summary

At times, an enterprise needs access to services or capabilities where ownership isn't necessary or is too expensive. The need may be operational (e.g. a need to use advanced analytics) or informational (e.g. access to weather or medical data). It can be driven by IT or by business concerns. In short, the enterprise requires temporary and/or shared access to IT capacity (compute, storage, network, services, etc.) or functionality it doesn't own. Hybrid cloud is a utility model with the potential to provide access to a range of products, services and resources more economically and efficiently, on a pay-as-you-go basis. Between the rapid innovation of infrastructure and the creativity of marketers, the variety of cloud services offered and the number of definitions will continue to expand.
From a business perspective, it puts assets, resources and expertise at the disposal of the enterprise that it otherwise would not have. It allows the enterprise to leverage these assets in creative and innovative ways with manageable financial expense and economic risk. For business staff, it loosens restrictions on what can be accomplished as the enterprise transforms itself to effectively compete in a digitized world.
Since a hybrid cloud is the collective composition of multiple cloud services spanning computing domains, it requires management and coordination of service activities from both in-house and service provider sources. The challenge for IT lies in managing seamlessly across these hybrid cloud service compositions while delivering what the business needs, in the time frame it is needed, at the best possible cost.
Finally, for both IT and enterprise staffs the hybrid cloud provides the opportunity to work more closely together to define and achieve aggressive, innovative enterprise goals in an economic, effective, innovative manner.