Wednesday, July 9, 2014

IBM announces research programs committed to the future of post-silicon systems


 By Rich Ptak

IBM is no stranger to committing large sums of research and development funds for longer-term benefits. Want some examples? There was their early investment in massively parallel computing (Deep Blue and Blue Gene), which went on to reach commercial viability. There were their investments in multicore systems such as Power 4 and 5, which facilitated and sped the consolidation of the Unix market. Most recently, there was their investment in cognitive computing with the Jeopardy-winning Watson, which is now yielding commercial as well as societal advances and benefits in finance, banking, retail, medicine and healthcare.

Now, for the first time, IBM has unveiled plans to invest $3 billion over 5 years in a research program to enable and develop the next generation of semiconductor technologies and chips that are the building blocks of computer systems. Countering rumors that it is abandoning hardware and systems following the sale of business lines to Lenovo, IBM provides concrete proof that it has no intention of getting out of the systems business. The challenges relating to energy, heat, processing time, bandwidth, size, storage, etc. driven by cloud and big data applications are emerging just as the foreseeable physical and manufacturing limits of existing technologies are being reached.

These investments will extend IBM’s innovation beyond today’s semiconductor technology breakthroughs and into a leadership position in the advanced technologies required to deal with evolving and emerging challenges. Such efforts are necessary to develop and deliver, within the next ten years, the as yet unknown, fundamentally different systems needed to overcome the physical and scaling limitations of current techniques and technology.

IBM is sponsoring two research programs to address the challenges. The first addresses the physical limits on using and manufacturing existing silicon technology. Scaling down from today’s 22 nanometers to 10 nanometers is doable over the next few years; moving beyond that to 7 nanometers and smaller requires new manufacturing tools and techniques that are currently being researched.

The second program looks to develop ways to manufacture and apply computer chips using radically new technologies in the post-silicon era. New materials and circuit architecture designs are being researched, along with techniques for manufacturing, development and application. In addition, to avoid disruption, systems are required to bridge between existing and new technologies.

Projects are underway or beginning in areas that include quantum computing, carbon nanotubes, silicon photonics, neurosynaptic (neuron-based) computing, etc. IBM’s research team will consist of over a thousand existing and newly hired scientists and engineers. Research teams will be located in Albany and Yorktown, NY; Almaden, CA; and Zurich, Switzerland.

The Final Word

IBM has been a leader, with an enviable track record of creating breakthroughs and innovation in CMOS and silicon technology, including inventing or first implementing single-cell DRAM, chemically amplified photoresists, high-k gate dielectrics, etc. They aren’t alone in addressing the problems of existing semiconductor technology and researching new technologies. But they are certainly among the leaders in the breadth and depth of their efforts. In addition to its own projects, IBM will continue to fund and collaborate on university semiconductor research, supporting such private-public partnerships as the Nanoelectronics Research Initiative (NRI), the Semiconductor Technology Advanced Research Network (STARnet), and the Global Research Collaboration (GRC) of the Semiconductor Research Corp.
Such efforts will combine to create the next level of processing power, removing blocks to progress and eliminating boundaries on compute capabilities. Such innovation is necessary to drive a new class of transactions, create the capability to process a sensor-based world, enable a new level of encryption, etc., and make it possible for a new generation to identify and solve previously inconceivable or unsolvable problems. The investment and effort that IBM is making gives clear proof of their continuing interest in and dedication to delivering innovative systems.

Thursday, July 3, 2014

HP HAVEn: Big Data/Analytics Platform enabling enterprise advantages

By Rich Ptak

About a year ago, HP introduced HAVEn to the market as a capability that worked with Hadoop to gain insight and information from analyzing structured and unstructured data. This spring, HP launched HAVEn as an extended, true data analysis platform, available as a cloud-based service. We spent some time with HP to get greater insight into HP’s positioning of HAVEn as a platform for Big Data/Analytics that works across multiple data formats.
 
The HAVEn name comes from its multiple analytics engines – Hadoop, Autonomy, Vertica, Enterprise Security – plus the “n” of any number of HP and customer applications used to gain more customer-relevant information and insight. Help is available to determine which of the engines is needed, as we’ll see.

As announced, HAVEn per se is a collection of data handling and analytics engines that, combined with an enterprise’s own applications, can be used to tease information and insight from virtually any conceivable data set the user can access. HP offers the HAVEn engines for use by the customer. It takes a pretty impressive service to allow virtually any enterprise or organization to get useful insight and information from their available data. It is even more impressive if it is targeted at an audience of data processing/IT (even business) professionals who are specifically not required to be professional data ANALYSTS to quickly realize value. And HP has customers who will testify to their successes with HAVEn. However, to help the first-time user faced with figuring out where and how to get the most benefit from their data, HP has skilled service experts available for projects.
 
HP also previewed the HAVEn workbench at Discover. The workbench is a unifying layer on top of the engines. It allows developers to access the functionality of any of the underlying engines through a common interface. It also provides access to a library of services that expose the functions of the engines. Developers, data scientists and the like can add new services to the library, or “mash up” two existing services to create a new one. Over time, as more services are added, the ability to explore your data or rapidly prototype new applications will increase exponentially.
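
To make the “mash-up” idea concrete, here is a minimal sketch, in Python, of how a developer might compose two analytics services behind a common REST-style interface. The endpoint names, URL and payload fields are purely hypothetical illustrations of the concept; they are not HP’s actual HAVEn workbench API.

import requests

# Hypothetical workbench-style service catalog URL; illustrative only, not HP's API.
WORKBENCH = "https://workbench.example.com/services"

def call_service(name, payload):
    """Invoke a published analytics service by name and return its JSON result."""
    resp = requests.post("{0}/{1}".format(WORKBENCH, name), json=payload, timeout=60)
    resp.raise_for_status()
    return resp.json()

def sentiment_by_region(documents):
    """A 'mash-up': score unstructured text with one service, then aggregate
    the results with a second, SQL-style analytics service."""
    scored = call_service("text-sentiment", {"documents": documents})
    return call_service("sql-aggregate", {
        "rows": scored["results"],
        "group_by": "region",
        "metric": "avg(sentiment_score)",
    })

The point of such a common layer is that either service can be swapped or extended without touching the callers, which is how a growing service library compounds in value over time.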

One of our major themes with our clients (vendors as well as end-users) for the last decade or so has been that it is a major responsibility of vendors to make the full power of all technologies (existing and emerging) accessible and useful to their customers. This is increasingly critical as those customers are increasingly non-technical in nature – they have no idea, and less interest in, how a Monte Carlo simulation works, what a regression analysis is or accomplishes, or any of the benefits revealed by the use of Chebyshev’s inequality. They just want to get any and all information and insight from their data that they can use to achieve their goals. Satisfying that need and demand appears to be one of the driving forces behind HAVEn.

HAVEn is also part of HP’s efforts to speed and spread the adoption of Big Data/Analytics to the widest possible audience. Having spent between $10 and $20 billion (across organic R&D and acquisitions) on HAVEn, HP believes it and its associated services can be effectively leveraged by customers. HP sees broad market potential for HAVEn. At this time, it has identified several broad market segments of specific interest, including:

  1. Business Analysts – enterprise IT, data analytics specialists and experts who can use HAVEn as a tool to operate more effectively and efficiently to speed results and improve quality of their analysis.
  2. Developers – looking to build a business or service around analytics - including entrepreneurs, ISVs, partners, startups – interested in developing analysis-based solutions and services.
  3. Solution Buyers – those looking to get more insight from the data they have, such as marketing/sales executives, product managers, inventory and resource managers and suppliers - for example, those who want to learn more about buying patterns as they relate to various environmental factors such as time, weather, events, etc.

HP offers two free trials to encourage potential customers to experiment with HAVEn. The free downloads are for the Vertica Community edition and a free trial of ArcSight Logger. Learn more by going to:  http://www8.hp.com/us/en/software-solutions/big-data-platform-haven/try-now.html

Conclusion
HP clearly has invested a lot of time and effort into the HAVEn platform. The single significant drawback we found was the lack of an integrated, ‘single pane of glass’ UI. Integration packs are available among the engines, which does help.

HP is continuing an aggressive development program as it encourages customers and partners to enhance and extend the reach of the product with connectors. We think that HP is definitely enabling and easing the move of Big Data analytics into the larger marketplace. Customers can learn more about HAVEn and how it is being used by visiting hp.com/haven. We think anyone with any significant data available to them would be wise to investigate what HAVEn might be able to do for them.





Wednesday, July 2, 2014

IBM Bluemix – Good news for cloud enterprise-class application development, testing and deployment

By Rich Ptak



Bluemix is IBM’s extensively featured cloud Platform-as-a-Service (PaaS) for building, managing and running applications of all types. In February of 2014, IBM began an Open Beta of the platform, providing access for interested developers, students and researchers. We described the platform, support and training programs available at that time here: (https://tinyurl.com/mzagdq9). Bluemix offers a complete, robust DevOps environment, built on open standards, equipped to develop and run apps in any cloud-friendly language, with integration services to existing systems of record. Developers can access a rich library of IBM, 3rd party and open source runtimes, services and APIs.

Over the last three months, IBM, partners and 3rd party participants have added a range of extensions to the platform in the run-up to a General Availability (GA) announcement. With the June 30th GA (a full quarter earlier than IBM originally planned), Bluemix enters the market fully tested by IBM and customers, with documented success of its benefits and impressive support provided by IBM and partner staffs. Applications written using the open source services in Cloud Foundry can be moved between compatible Cloud Foundry implementations. Applications can also be developed/tested/tuned on Bluemix and then moved to another platform, as well as vice versa. Users have already done both.

New services from IBM include Workflow, Geospatial Analytics, Application User Registry, MQ Light, Gamification, Embeddable Reporting, Appscan Mobile Analyzer, and Continuous Delivery Pipeline. Services from 3rd Party partners include Mongo Labs, Load Impact, BlazeMeter, SendGrid, RedisLabs, ClearDB (MySQL), CloudAMQP (RabbitMQ), ElephantSQL (PostgreSQL), etc. It is also worth highlighting that Bluemix includes a strong environment for the development, testing, and management of mobile as well as other applications.

Developers have on-demand access to the infrastructure resources, tools and assets they need (and know) – when they need them. They don’t have to worry about or wait for infrastructure availability, virtual or otherwise. Everything needed for app development, test, deployment and management is available in the cloud.

IBM intends Bluemix to be used for developing all kinds of applications. However, it is optimized for the cloud-centric nature of mobile, web and big data applications. The environment’s open-standards nature allows customers to move apps (as long as they only use open standard services). As an incentive to stay, IBM will provide compelling value from its middleware portfolio (e.g. mobile, Watson, analytics, etc.) only on Bluemix.

IBM continues to offer a 30-day free trial to encourage developers to become familiar with Bluemix services and tools. After that, fees are based on usage, meaning you pay only for the amount of resources consumed. Runtime charges are based on the GB-hours an application runs. During the free 30-day trial, you get a maximum of 2 GB. Once usage fees begin, you still get 375 GB-hours per month free. Pricing for services and add-ons can be flat-rate or metered; some have a free allowance each month. Details, including pricing, are here: https://developer.ibm.com/bluemix/2014/06/30/general-availability/. The pricing appears to compare favorably against the competition.
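
As a rough illustration of how GB-hour-based runtime charges work, here is a minimal sketch. The 375 free GB-hours per month comes from the text above; the per-GB-hour rate is a placeholder assumption, so check IBM’s published Bluemix pricing for actual figures.

def estimate_runtime_charge(instances, gb_per_instance, hours_running,
                            free_gb_hours=375.0, rate_per_gb_hour=0.07):
    """Estimate a monthly GB-hour runtime bill.

    free_gb_hours reflects the 375 GB-hours/month allowance noted above;
    rate_per_gb_hour is a hypothetical rate, not IBM's published price.
    """
    gb_hours = instances * gb_per_instance * hours_running
    billable = max(0.0, gb_hours - free_gb_hours)
    return gb_hours, billable * rate_per_gb_hour

# Example: two 512 MB instances running the whole month (~730 hours)
total, charge = estimate_runtime_charge(instances=2, gb_per_instance=0.5, hours_running=730)
print("%.0f GB-hours used, estimated charge $%.2f" % (total, charge))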

Where’s the Value?

Bluemix is a true Platform-as-a-Service offering, designed with the interests of both the developer and the enterprise in mind. It allows the developer/enterprise to focus completely on creating and delivering value in terms of new products and services for their customers – while IBM takes all the responsibility for providing and maintaining the infrastructure, development, testing and management tools and software.

For the enterprise, additional value comes from the ability to quickly access and deploy their services on a global infrastructure. Also, since Bluemix is built on top of SoftLayer, applications can be easily moved to the SoftLayer environment if you want control over the infrastructure. Applications can also be moved from one IBM datacenter to another at no cost. Bluemix provides a globally supported environment for the deployment of a service. It dramatically expands the geographic reach of a company without adding the expense of a remote presence.

The Final Word
Part of IBM’s goal with Bluemix was to ease and speed the transition of IT and enterprises (of all sizes) to the cloud environment. It does so by providing the tools, functionality and infrastructure that allow developers the flexibility to use the language, tools and techniques they are most familiar with. This speeds development, testing and delivery of new solutions, which increases developer productivity and enterprise agility. We recommend that potential users evaluate the benefits and cost of IBM’s Bluemix against the alternatives. As we said earlier, we expect that: “many will decide to increase and expand their participation to their own as well as their employer’s significant benefit.”

Thursday, June 19, 2014

Serena Dimensions CM 14 delivers integrated Software Change/Config Management

By Rich Ptak

Serena Software has made its mark with a software suite that enables Global 2000 enterprise IT organizations to develop and deploy better software. The suite includes solutions for application development, deployment, process automation and service management. Their skills at conceiving and delivering process-based solutions facilitate and enable consistent customer success. Read our comments here: ptakassociates.com/content/

Wednesday, June 11, 2014

Action Plan for addressing mainframe MLC software costs


By Bill Moran

Background

Estimates are that between 30% and 45% of typical mainframe costs result from IBM Monthly License Charge (MLC) software; a considerable amount of money in any shop. Mainframe software pricing, when examined, is a complicated subject. Despite efforts to the contrary, IBM’s software pricing model contributes to that complexity. In this paper, we first examine how this came about. Then, we discuss a BMC product that addresses the problems.

Originally, most software was priced based on the size of the system it was licensed to run on. It was assumed the software would use the full system capacity.  However, customers complained that software that only used a small percent of the system was priced the same as software that used all of the system.  IBM agreed. In response, they introduced “sub capacity” pricing as an attempt to match pricing and system utilization. The older metric had the merit of simplicity; unfortunately, the new metric required tracking capacity usage, complicating matters significantly.

For MLC software, system usage over the month is tracked and recorded. Monthly payments are based on the peak four-hour rolling average of usage during the month. The System Management Facility[1] records usage in each LPAR. Each month, the customer runs the Sub-Capacity Reporting Tool[2] and sends the results to IBM. As another aid to customers, IBM maintains a website[3] focused on mainframe software pricing.
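
To illustrate the metric, here is a simplified sketch of the rolling four-hour average calculation, assuming hourly MSU readings for a single LPAR. The real numbers come from SMF data processed by SCRT across the entire reporting month, so treat this only as a conceptual aid.

def peak_rolling_four_hour_average(msu_samples):
    """Return the peak rolling 4-hour average from hourly MSU readings.

    Simplified illustration of the sub-capacity billing metric; the
    actual calculation is performed by SCRT from SMF records, per LPAR,
    over the full reporting month.
    """
    window = 4
    if len(msu_samples) < window:
        raise ValueError("need at least four hourly samples")
    averages = [sum(msu_samples[i:i + window]) / float(window)
                for i in range(len(msu_samples) - window + 1)]
    return max(averages)

# Hypothetical hourly MSU utilization for part of a day
hourly_msu = [120, 180, 240, 310, 290, 260, 200, 150]
print("Peak rolling 4-hour average:", peak_rolling_four_hour_average(hourly_msu))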

Yet, customers still face problems:
  • Complexity – IBM lists 8 major pricing plans plus others[4]. Plans are dynamic, changing from time to time; for example, IBM recently changed its pricing to provide favorable treatment (i.e. reduced costs) for transactions originating from mobile[5] devices.
  • Overwhelming data quantities – A medium-sized system can generate so many SMF records in a month that analyzing them is an overwhelming task.
  • Transparency – With no automated way to consolidate and plot the data to get a clear picture of what is actually happening, auditing IBM charges is challenging.
  • Inability to plan – Manually analyzing raw data to optimize usage to get the best results at lowest cost is very difficult. Customers turning to manual spreadsheets to address these problems have found their use tedious, frustrating and error-prone.


There is no silver bullet available to solve all of these problems. However, a recently released tool, BMC Cost Analyzer for zEnterprise®, helps in several ways by:
  • Providing graphic reports that show exactly the usage of the various LPARs in the system.
  • Breaking out the cost drivers by products and by workloads.
  • Allowing what-if scenarios to evaluate the effect on billing of workload placement, product placement, and capping (see the sketch after this list).
  • Facilitating audits of IBM software bills.
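
For a sense of what such a what-if might look like, here is a small, self-contained sketch comparing the peak rolling four-hour average with and without a utilization cap. It is purely illustrative: real defined-capacity capping defers work rather than simply truncating it, and the MSU figures are invented.

def peak_r4ha(samples, cap=None):
    """Peak rolling 4-hour average of hourly MSU readings, optionally capped.

    Illustrative only: real capping (defined capacity) defers work rather
    than truncating utilization, so actual billing behavior differs.
    """
    if cap is not None:
        samples = [min(m, cap) for m in samples]
    windows = [samples[i:i + 4] for i in range(len(samples) - 3)]
    return max(sum(w) / 4.0 for w in windows)

# Hypothetical hourly MSU readings for one LPAR
hourly_msu = [120, 180, 240, 310, 290, 260, 200, 150]
print("Uncapped peak R4HA:", peak_r4ha(hourly_msu))            # drives the MLC bill
print("With a 250 MSU cap:", peak_r4ha(hourly_msu, cap=250))   # what-if scenario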


For organizations that only prepare the required information for IBM and do nothing else to manage MLC costs, it is quite possible to achieve savings of 20% or more by implementing a cost management program and using BMC’s new offering. Here is what an action plan might include.

Action Plan

  • Start by getting the facts on how much is spent on the mainframe. Make sure to count only actual mainframe expenditures. In many data centers, all costs for power, cooling, etc. are allocated to the mainframe with none attributed to other servers.
  • Get an exact figure for MLC software costs. This is necessary to identify the amount of potential savings. How does it relate to overall mainframe costs? If no active management has been done, potential savings could exceed 20%. The potential amount provides a savings goal, and is a guide to how much to budget for tools/efforts to save.
  • Permanently assign someone to monitor mainframe software pricing, to pay attention to the various IBM plans and determine which apply.
  • Actively monitor what is going on in the datacenter. In addition to running the mandated reports and sending IBM the output, review BMC’s 10-step program. Review the helpful series of videos[6] BMC prepared with an outside consultant, Mr. David Wilson of SZS Consulting. The program includes the Cost Analyzer, but covers many other subjects as well, such as managing negotiations with IBM.
  • If using spreadsheets to monitor pricing, closely examine the value realized versus the time taken to prepare them. There may be a less expensive solution.
  • Review BMC’s Cost Analyzer and identify the potential savings from its regular use. The tool will give maximum value only if used as a part of a comprehensive plan to manage software costs.

Summary

Software pricing contributes significantly to overall mainframe costs. Most installations need, but lack, an active program to monitor and reduce software costs. BMC’s Cost Analyzer for zEnterprise can play a major role in cost containment and optimization plans. It should be investigated to quantify the savings it can help achieve. The results may be a pleasant surprise.



[1] Supplied by IBM, SMF is a part of the operating system that creates records of most system events.
[2] Sub-Capacity Reporting Tool, SCRT as it is popularly known.
[3] See http://www-03.ibm.com/systems/z/resources/swprice/index.html. Someone in the shop needs to be familiar with this web site to keep up with the various plans for IBM software.
[4] There are special considerations for US Federal customers.
[5] Customers should review this change to see if it applies to them; if it does they need to apply for it.
[6] http://www.bmc.com/videos/232214431.html. This is the URL for the David Wilson videos.

Monday, June 9, 2014

EDGE2014 – For Innovation and the Art of the Possible, Infrastructure Matters

By Rich Ptak

On May 19th, we attended IBM Edge2014 in Las Vegas. This year’s event focused on infrastructure-driven innovation. Over 5,500 IT business/technology executives and practitioners spent the week viewing, hearing about and discussing the latest in infrastructure capabilities and applications. The event is a showcase for IBM technology covering IBM Storage, IBM PureSystems, IBM System x, IBM Power Systems and IBM System z. Lenovo’s imminent acquisition of IBM’s x86 business, as well as post-acquisition plans for partnership, etc., were natural topics of interest as the latest publicly available information was presented. Read more at: http://www.ptakassociates.com/content/

Tuesday, May 27, 2014

HP's Helion Cloud Initiative


It is not often that one sees a major corporation undertake an effort to remake itself. GM is the most recent example that comes to mind. However, GM needed a bankruptcy to force the issue. HP is embarking on a major transformation of its internal structure not under the pressure of bankruptcy, but out of a drive to spur organic growth in key business initiatives by increasing innovation and implementing new technology. HP calls this “The New Style of IT,” and it includes cross-company initiatives such as Cloud, Big Data, Mobility, and Security. As part of this, HP’s cloud initiative continues under the new “HP Helion” brand.
 
Let’s examine what HP is up against as we assess their chances of success. HP has funded Helion with an initial amount of $1 billion over the next two years. The amount certainly signifies how seriously they take this effort. This follows HP’s consolidation during 2013 and 2014 of disparate cloud-related products and services from across the company into a new HP Cloud business unit, which combines their entire cloud portfolio. Effectively, many people will find themselves in new organizations. They will have to determine how to work with new colleagues in different relationships and with changed responsibilities. While it is easy to announce such a change, a lot of hard work will be necessary to get these units functioning.

Next, the long-range goal involves the integration of other HP products into the Helion world. For example, they will have to incorporate Helion into HP Converged Systems along with hundreds of other products, some of which will require major efforts in themselves. Another long-range project will incorporate Helion into HP’s roughly 80 worldwide datacenters[1]. Martin Fink, leader of the Helion effort, says that the datacenter effort should take the next 18 months.

HP plans to roll out several projects this year. The first is a distribution (distro) of software to the Helion Cloud Community based on the Icehouse release of OpenStack. A free version of the software[2] is available now. HP plans a free release of updated versions every 6 weeks. Later in the year, HP will release a for-fee commercial version that scales much higher, supporting more users and VMs than the free version. It includes bundled-in support.

We have several suggestions for HP. For the new cloud organization to be successful, it needs to be very creative. Therefore, HP leadership should carefully consider the physical arrangements they make for the new unit. Steve Jobs spent a lot of time with the architects of the new Apple headquarters to ensure that the design would encourage and foster the creativity of the people working there. HP cannot afford (nor do they need) to redesign all their facilities. However, they could find a facilities review well worthwhile[3].

We also suggest that HP reconsider how they measure success for each new Helion distro. Executives were very proud that they released their distro only three weeks after OpenStack’s release of Icehouse. Clearly, today’s emphasis must include time-to-market, but speed cannot come at the risk of compromised quality. HP’s commitment to a new release every six weeks needs to be reevaluated. We believe that customers appreciate that quicker releases mean faster access to fixes and new functionality. However, they would be happier with 8-week intervals if the delay meant a better tested, more reliable release. If time is needed for better quality, HP can tell customers that they are taking that additional time without making much fuss. In fact, it might just be better to commit to the slightly longer interval.

We must also mention that although we are admirers of Martin Fink (we knew him when he did excellent work with Linux), we do not think that having him wear three hats as HP’s CTO, Research Director, and manager of Helion[4] is really viable for an extended period of time. Each one of these efforts requires (and should have) a full-time executive. HP risks a lot by delaying the recruitment or identification of these people.

In summary, we view this Helion rollout as a further, very positive development of Meg Whitman’s strategy for HP. In particular, the ongoing financial turnaround of the company, which has proceeded well so far, has made this Helion strategy practical. It has made possible the $1 billion investment funding Helion. HP customers and stockholders have a lot to cheer about at this point. However, Helion success will require wrenching changes at HP. HP management faces a real challenge making this initiative work. We are encouraged by their evident planning and efforts so far.



[1] HP has announced that 22 datacenters will implement Helion by the end of 2015.
[2] Service is a priced option.
[3] Whitman is consolidating the Helion group into several locations and gradually moving some of the telecommuters back into the office. Sounds like a reasonable strategy to us.
[4] He has other management responsibilities as well.