Monday, October 20, 2014

IBM exits chip business to concentrate on systems and solutions


Surprising to few and satisfying to many, IBM has announced the signing of a Definitive Agreement with GLOBALFOUNDRIES under which the latter will acquire IBM's global commercial semiconductor technology business as well as the commercial electronics business (including ASIC and specialty foundry, manufacturing, and related operations and sales). The deal includes all intellectual property, world-class technologists, and technologies related to IBM Microelectronics. The move had, in fact, been anticipated and rumored for some time.

Subject to completion and approval of all applicable regulatory reviews, GLOBALFOUNDRIES will become IBM's exclusive provider of 22nm, 14nm, and 10nm semiconductors for the next 10 years. The deal calls for IBM to pay GLOBALFOUNDRIES a cash consideration of $1.5B over the next three years.

IBM employees and facilities at East Fishkill, NY and Essex Junction, VT will transition to GLOBALFOUNDRIES. Semiconductor server group employees, as well as those at the semiconductor system assembly, test, and repair facilities located in Albany, NY and Bromont, Canada, will remain with IBM.

The agreement is structured to provide maximum benefit to the employees and business operations of both companies. For example, executives at both companies, as well as state and local politicians, have been working together to protect jobs and investments in the region. GLOBALFOUNDRIES plans close to $10B in capital expenditures in 2014-2015, primarily in New York state.

This marks the final step in IBM's gradual exit from what it recognizes as low-margin, high-volume businesses. IBM sees its business strength in value-added, service-intense offerings; the same reasoning motivated the earlier sales of the System x and PC businesses to Lenovo. Arguably, these moves could have been implemented earlier, but IBM deserves credit for decisively moving forward to make a clean exit.

Without having access to the details of the underlying financials, I suspect this decision was made somewhat easier once internal analysis revealed that end-to-end supply chain integration was neither financially viable nor operationally mandatory. IBM identified a weakness in its operations, evaluated alternatives, and selected what it saw as the best way to move forward.

IBM will be able to concentrate on the high-end and mid-range systems markets with mainframes and Power-based servers and solutions. Freed from day-to-day responsibility for semiconductor manufacturing, IBM will focus where it has proven ability to drive profitable business. There are some questions, though: Isn't there a risk to the future of its servers if it cannot control the basic chip technology? Future platform capabilities and innovation are closely tied to the underlying chip. Won't IBM be at a disadvantage if it doesn't control the chip?

First, IBM will still develop its POWER and System z processor chips; GLOBALFOUNDRIES will manufacture those chips for IBM. So IBM maintains control of chip design, which means it will continue to be able to optimize the chip architectures for its systems.

IBM competitors like HP and Oracle have not manufactured their own chips for a long time. In fact, today HP is almost completely reliant on Intel-sourced commodity chips for its servers, meaning it has almost no input into the design of those chips. At the very least, it's clear that control over manufacturing isn't all that critical. IBM has a nearly decade-long close relationship with GLOBALFOUNDRIES, which continues even after GLOBALFOUNDRIES becomes the largest semiconductor manufacturing employer in the Northeast.

IBM confronts other aspects of this challenge in multiple ways. There are the earlier-mentioned plans for continued investment in basic research. There are plans for continuing the close collaboration with GLOBALFOUNDRIES as a partner in the supply and design of chips. In addition, IBM and GLOBALFOUNDRIES remain active partners in several collaborative semiconductor research activities in joint efforts with the Colleges of Nanoscale Science and Engineering (CNSE) and SUNY (State University of New York) Polytechnic Institute in Albany, NY. Thus, IBM has assured that it remains close to, and influential in, the basic platforms its servers and mainframes depend upon.

IBM will continue to influence semiconductor chip technology through its ongoing R&D investment ($3B over 5 years) in semiconductors, which complements ongoing leading-edge research in cloud, mobile, big data analytics, and secure transaction-optimized systems. Silicon will remain at the heart of chip technology for at least a decade, possibly more. IBM has publicly outlined multiple areas of research for the post-silicon world (discussed in our soon-to-be published blog about IBM's Enterprise 2014 event).

It's our opinion that IBM has made a difficult but necessary decision. This business lost $700 million last year. Often a decision must be made among alternatives that are not clearly bad, good, better, or best. Choices can be all good, all bad, or a combination. In the worst case, the only decision available is choosing the least damaging from a collection of bad options, then living with the consequences. The ability to make that decision and live with it separates the true leader from imitators. I don't think this was one of those worst-case decisions. Congratulations to IBM's management for making a decision, as IBM continues to reshape itself to grow in the changing IT landscape.

Thursday, October 9, 2014

BMC: Automating/Optimizing Mainframe Operations and License Management


 
 
One of the real strengths of applied computer technology lies in its ability to automate repetitive, intensively detailed, multi-factor operations. This is especially true when the tasks follow a highly structured set of rules that, while involving many factors, are applied in a consistent manner.
 
Software licensing costs have long been a major bugaboo of, and attack point on, mainframe operations. For a variety of aging but fully understandable reasons, mining the costs of running software on mainframes for savings has been a risky task, not just because it is incredibly involved, error-prone, and time-consuming, but also because of the potential to significantly upset clients (if service performance is degraded) or to fail to realize savings of sufficient magnitude. Both have happened in past efforts. As a result, the task has frequently been avoided or given only cursory attention.
 
There have been multiple attempts at manual processes to manage costs and optimize operations. Typically resource-intensive and, while effective, they too often tied up scarce staff whose efforts could be more profitably applied to other tasks. And BMC, IBM, CA, and other vendors have offered a variety of products that poked at the challenge of managing Monthly License Charge (MLC) and associated mainframe software licensing costs.
 
We've examined various solutions offered by BMC and found them quite interesting. Successful earlier products (such as BMC Application Accelerator for IMS™[1], BMC TrueSight Capacity Optimization for Mainframes[2], and BMC Cost Analyzer for zEnterprise®[3]) lower the cost of software and raise and optimize mainframe and LPAR utilization in the data center while protecting SLAs and critical services. BMC describes potential cost and operations savings ranging from 2% to 30% from using these products.
  
That is why we are very pleased to hear about BMC's most recent solution offerings, which complement these existing products and represent a comprehensive, integrated approach to mainframe software cost and operations management. Building on today's operating environment, with more data, abundant processing power, and sophisticated capabilities for integrated, end-to-end, policy-based data analytics, BMC is offering more sophisticated, automated programs to tackle the problem of MLC management and mainframe systems optimization.
 
BMC recently announced two products. The first (BMC Intelligent Capping for zEnterprise®[4]) is directed at dynamic, intelligent capacity management for workloads that protects critical service SLAs. The second (BMC Subsystem Optimizer for zEnterprise®[5]) optimizes LPAR subsystem operations by removing constraints on the placement of DB2, IMS, and CICS. Here's a thumbnail of each product's unique benefits.
 
BMC Intelligent Capping (iCap) uniquely:
  • Monitors and dynamically manages defined capacity settings across LPARs and Capacity Groups
  • Uses customer policies and workload priorities for decision making
  • Recaptures unused capacity, matching increases in caps with decreases (a conceptual sketch of this rebalancing idea follows these lists)
  • Helps customers implement changes identified in BMC Cost Analyzer reports.
BMC Subsystem Optimizer for zEnterprise (Subzero) uniquely:
  • Overcomes the technical requirement that subsystems reside on the same LPAR
  • Requires no application changes
  • Operates transparently with DB2, IMS, CICS, and applications
  • Provides failover capability for DB2 and IMS (not otherwise available today)
  • Provides a logical next step for taking action based on BMC Cost Analyzer analysis.
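To make the capping idea concrete, here is a minimal conceptual sketch, in Python, of the kind of policy-based cap rebalancing iCap automates: raise the cap of a constrained, high-priority LPAR while lowering a lower-priority donor's cap by the same amount, so total capacity (and thus cost) stays flat. To be clear, this is our illustration under stated assumptions, not BMC's actual logic; the LPAR names, priorities, and MSU figures are invented.

```python
# Conceptual sketch of priority-driven cap rebalancing across LPARs.
# This illustrates the idea only; it is NOT BMC iCap's implementation,
# and all names and MSU figures are invented for the example.

from dataclasses import dataclass

@dataclass
class Lpar:
    name: str
    priority: int   # higher = more business-critical
    cap_msu: int    # defined capacity (cap) in MSUs
    used_msu: int   # current rolling-average usage in MSUs

def rebalance(lpars: list[Lpar], step: int = 5) -> None:
    """Shift cap headroom from low-priority donors to constrained,
    high-priority LPARs, keeping the total cap (and thus cost) flat."""
    constrained = [l for l in lpars if l.used_msu >= l.cap_msu]
    donors = [l for l in lpars if l.cap_msu - l.used_msu >= step]
    for receiver in sorted(constrained, key=lambda l: -l.priority):
        for donor in sorted(donors, key=lambda l: l.priority):
            if donor.priority < receiver.priority:
                donor.cap_msu -= step      # the decrease matches...
                receiver.cap_msu += step   # ...the increase
                break

lpars = [Lpar("PROD", priority=3, cap_msu=100, used_msu=100),
         Lpar("TEST", priority=1, cap_msu=80, used_msu=40)]
rebalance(lpars)
print([(l.name, l.cap_msu) for l in lpars])  # [('PROD', 105), ('TEST', 75)]
```

A real product must, of course, honor customer policies, forecast the rolling average, and respect hardware limits; the point here is only the matched increase/decrease mechanism described above.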
 
Today's competitive pressures tend to squeeze margins, increasing the value of efforts that optimize costs. Add to this incredibly aggressive and innovative competitors willing to buy entry into new markets, plus a managerial (financial and operational) intolerance for over-engineered, under-utilized infrastructure, and IT is heavily squeezed. These solutions deserve the attention of mainframe data center operations managers and financial managers seriously interested in improving their operational bottom line. You can also see them in action October 13-16 at BMC's Engage 2014[6] event in Orlando, FL.

 

Monday, October 6, 2014

HP splits the company

HP splits

HP announced that by the end of 2015 it plans to split the company in two. One of the companies, "HP Enterprise", will focus on business requirements and will contain the server business, HP's cloud offering (Helion), HP's enterprise services, and HP Financial Services. The other company, "HP Inc.", will contain the printer and PC businesses. HP Financial Services will continue to provide financial services to both companies.

Several key points

HP ran a conference call to discuss this announcement and answer questions. Several nuances of the announcement became clearer during this call. One of the reasons that Meg Whitman, HP CEO, had decided against splitting off the PC business several years ago was that the cost advantage from HP's supply chain would be greatly reduced. That argument would seem to apply to the current split; after all, one of the justifications IBM gave for selling its x86 server business was that the Lenovo supply chain would give a cost advantage. Is HP throwing away its current advantage of a unified supply chain? Whitman says she expects the two companies to arrange joint agreements to cover this issue. In other words, the server folks would be able to leverage the component purchases of the PC company to get the most favorable prices.

The Wall Street Journal broke the story only yesterday (Sunday, 10/5), although rumors were rife toward the end of last week. This points to very good management discipline on the part of HP, since they were able to keep this blockbuster story secret until almost the last minute. The (now dead) story about HP's negotiations with EMC did leak; perhaps that was part of the management plan to conceal the real story? So far, it appears that HP has planned properly.

Meg Whitman stated that there would be three transition offices set up, one in HP corporate and one for each of the two new companies, keeping the transition teams separate from the team that is running the day-to-day business. HP wants to ensure that the 2015 results are good so that the two new companies get off to a good start.

Finally, HP noted that they have suspended their stock repurchase plan because they are in possession of non-public information that requires the suspension. When questioned, they said that their M&A activities were involved. They seem to think this issue will be resolved by the end of the year, but it is intriguing to speculate on what this "non-public" information might be.

Customer Impacts

We do not see any immediate customer impacts from this announcement. However, it is ironic that HP has pointed out the potential disruption for customers from IBM's sale of its x86 server business to Lenovo. Now HP will have to justify the far larger transition it is going to go through to split itself in two. In 2016, customers who were buying products across HP will have to adjust to dealing with two companies. Both will still be named HP, but as time goes by they will become different companies.

HP management thinks that since the two companies have the same name (although the consumer company will have the HP logo), the branding issue is solved. The branding issue would have been a huge cost if HP had spun off the PC business several years ago into a new company with a different name. We are not so sure the branding issue is resolved, because in the long run having two companies with essentially the same name may cause some confusion in the marketplace.

Competitive Factors

It was not mentioned in the call, but we think HP has decided it needs to position itself to compete with the new Lenovo. Meg Whitman emphasized the value of agility in a fast-changing marketplace. We think HP considered how it will be competitively positioned against Lenovo going forward, and obviously concluded that the new structure, with two companies competing with Lenovo instead of one, would be an advantage. Time will tell, but it's clearly important for HP to make the right call here.

We have heard from some people in IBM that HP and IBM are not going to be competitors going forward. We think that is a serious mistake by IBM. The HP enterprise company is going to double down on the Cloud and other areas such as enterprise services where they will be directly competing with IBM for the customer’s business. It would be an error for IBM to underrate them.

Summary

HP has announced a major restructuring of the company. We think that HP management carefully considered this change. It's too early to judge how successful the new structure will be. However, Meg Whitman has built a track record of successful change at HP; we would not bet against her. It was notable that the company's CFO announced during the briefing that they were going forward with plans to eliminate 14,000 more jobs. They have hit their previous target of 36,000 cuts, but have identified the additional positions. They plan to reinvest the savings in sales and R&D, believing these investments will strengthen the two new companies as they go forward.




Wednesday, October 1, 2014

HP's new Gen9 x86 servers

HP's Gen9 servers

As HP announced their newest line of ProLiant servers, they made the point that they created the x86 server business 25 years ago and that, in the same way, this announcement sets the stage for the next 25 years. That is an ambitious goal, which for most other companies one might dismiss as pure marketing hype. However, as HP is the leading supplier of x86 servers today, one must take the claim seriously. Clearly, HP will be moving in this new direction for many years to come. We waited a bit to do this write-up to give HP time to get its product offerings in order and to update its web site.[1]

It is true that server architectures need updating. In connection with its last Moonshot announcement, HP described future server performance requirements: even merely linear growth would demand unsustainable power and space using current architectures. Moonshot represents the first step at addressing these issues (and a good one, in our view). We expect that customers welcomed HP's redefined server.

The next evolutionary step is the Gen9. Here’s how to get more detailed information.
Searching the HP enterprise (not consumer) web site for "Gen9" servers eventually leads to the ProLiant Server page; here's a link[2]. Down and to the right is a tab for "Products and Services"; select it to see a list of current products, such as blades, rack servers, tower servers, etc. Selecting "HP ProLiant Rack servers" displays "Shop for Rack Servers"; clicking on it takes you to a page[3] listing the rack servers that HP is currently offering.

There are four servers listed marked as "New". All are Gen9 servers. Other servers on the same page are Gen8. Checking the Compare box (below each server) lets you see the differences between any of the Gen9 and Gen8 servers.

Picking one of the Gen9 servers and clicking "Learn More" takes you to a page with additional models. We picked the DL380 Gen9, then clicked the "Select a Model" tab, which shows 4 sub-models of the DL380.[4] Selecting one of the sub-models allows you to configure it, get more details, and explore benchmarks. We picked the most expensive, "HP ProLiant DL380 Gen9 E5-2650v3 2P 32GB-R P440ar 8SFF 2x10Gb 2x800W Perf Server", with a base price of $8,469. Model documentation is accessible to compare it with others. A very useful section on benchmarks appears down the page.[5] We did not explore all of the benchmarks, but expect that most of them relate to Gen8 servers right now. Over time the Gen9 results will be added.[6]

Here are a few of the key points gathered on our trip through HP's web site.

First, HP obviously remains in a transition state. They are still selling Gen8 servers in addition to the new Gen9. They decided a gradual changeover is better than attempting a very likely disruptive wholesale change. In the meantime, a Gen8 system might be the best choice for some customers. We agree and think that the way HP is managing this situation is best for them and for their customers as well. Customer choice is generally a good thing. For example, if one needs a special feature that is either not available or supported on Gen9, it remains available on a Gen8 system.

Second, it is well worth exploring the HP support options. Return to the web page referenced in footnote 1 above, and you will see what we mean. A specific example: Microsoft ends support for Windows Server 2003 in 2015, so customers have to move to Windows Server 2012 if they want continued support. HP offers a comprehensive set of options for the move, all described on this page.[7]

We are not saying that HP's offerings are the best for any individual customer. We are recommending that anyone planning a Windows migration be aware of HP's offerings and include investigating them as part of the migration.

Final point: elements of the Gen9 systems remain a work in progress. For example, at the OneView web site[8], you find that this key software component does not yet support Gen9; support is promised by year's end. Other items are in this category. This is to be expected any time a company like HP makes a major transition in technology; they need a reasonable amount of time to make a full transition.

We recommend that customers evaluating x86 servers definitely include HP's Gen9 offering in their appraisal. While it is true that HP needs to fill out the offering, they provide enough details and insight into the future to whet our appetite for more.






[1] We've learned that what a company offers as order-able products on the web site (including the base price) might be different from the announcement. Our interest is in what customers can actually order. We are not accusing HP of doing this.
[4] http://www8.hp.com/us/en/products/proliant-servers/product-detail.html?oid=7271241#!tab=models
[5] http://h17007.www1.hp.com/us/en/enterprise/servers/benchmarks/index.aspx#.Uw5F1f7naP8
[6] We did not try all combinations (There are many!) but we were not able to get any results for Gen9. 
[7] http://www8.hp.com/us/en/business-services/it-services.html?compURI=1078276&jumpid=reg_r1002_usen_c-001_title_r0004#.VCLoj_ldXiA
[8] http://www8.hp.com/us/en/products/server-software/product-detail.html?oid=5410258

Wednesday, July 9, 2014

IBM announces research programs committed to the future of post-silicon systems


 By Rich Ptak

IBM is no stranger to committing large sums of research and development funds for longer-term benefits. Want some examples? There was the early investment in massively parallel computing (Deep Blue and Blue Gene) that reached commercial viability. There were the investments in multicore systems such as POWER4 and POWER5, which facilitated and sped the consolidation of the Unix market. Most recently, there was the investment in cognitive computing with the Jeopardy!-winning Watson, now yielding commercial as well as societal advances and benefits in finance, banking, retail, and healthcare.

Now, for the first time, IBM has unveiled plans to invest $3 billion over 5 years in a research program to enable and develop the next generation of semiconductor technologies and chips that are the building blocks of computer systems. Countering rumors, fed by its sales of business lines to Lenovo, that it is abandoning hardware and systems, IBM provides concrete proof that it has no intention of getting out of the systems business. The challenges relating to energy, heat, processing time, bandwidth, size, storage, etc. driven by cloud and big data applications are emerging just as the foreseeable physical and manufacturing limits of existing technologies are being reached.

These investments will extend IBM's innovation beyond today's semiconductor technology breakthroughs, into a leadership position in the advanced technologies required to deal with evolving and emerging challenges. Such efforts are necessary to develop and deliver, within the next ten years, the as-yet-unknown, fundamentally different systems needed to overcome the physical and scaling limitations of current techniques and technology.

IBM is sponsoring two research programs to address the challenges. The first addresses the physics that limits the use and manufacture of existing silicon technology. Scaling down from today's 22 nanometers to 10 nanometers is doable over the next few years; moving beyond that to 7 nanometers and smaller requires new manufacturing tools and techniques currently being researched.

The second program looks to develop ways to manufacture and apply computer chips using radically new technologies in the post-silicon era. New materials and circuit architecture designs are being researched, along with techniques for manufacturing, development, and application. In addition, to avoid disruption, systems will be required to bridge between existing and new technologies.

Projects are underway or beginning in areas that include quantum computing, neurosynaptic computing, carbon nanotubes, silicon photonics, etc. IBM's research team will consist of over a thousand existing and newly hired scientists and engineers. Research teams will be located in Albany and Yorktown Heights, NY; Almaden, CA; and Zurich, Switzerland.

 The Final Word

IBM has been a leader with an enviable track record in creating breakthroughs and innovation in CMOS and silicon technology, including inventing or first implementing single-cell DRAM, chemically amplified photoresists, high-k gate dielectrics, etc. They aren't alone in addressing the problems of existing semiconductor technology and researching new technologies, but they are certainly among the leaders in the breadth and depth of their efforts. In addition to its own projects, IBM continues and will continue to fund and collaborate with university semiconductor research, supporting such private-public partnerships as the Nanoelectronics Research Initiative (NRI), the Semiconductor Technology Advanced Research network (STARnet), and the Global Research Collaboration (GRC) of the Semiconductor Research Corp.
Such efforts will all contribute and combine to create the next level of processing power, one that will enable and facilitate the eradication of blocks to progress and the elimination of boundaries on compute capabilities. Such innovation is necessary to drive a new class of transactions, create the capability to process a sensor-based world, enable a new level of encryption, etc., and to make it possible for a new generation to identify and solve previously inconceivable or unsolvable problems. The investment and effort that IBM is making give clear proof of its continuing interest in, and dedication to, delivering innovative systems.

Thursday, July 3, 2014

HP HAVEn: Big Data/Analytics Platform enabling enterprise advantages

By Rich Ptak

About a year ago, HP introduced HAVEn to the market as a capability for working with Hadoop to gain insight and information from analyzing structured and unstructured data. This spring, HP relaunched HAVEn as an extended, true data-analysis platform, available as a cloud-based service. We spent some time with HP to get greater insight into HP's positioning of HAVEn as a platform for Big Data/Analytics that works across multiple data formats.
 
The HAVEn name comes from its multiple analytics engines – Hadoop, Autonomy, Vertica, Enterprise Security – plus "n", any number of HP and customer applications used to gain more customer-relevant information and insight. Help is available to determine which of the engines is needed, as we'll see.

As announced, HAVEn per se is a collection of data-handling and analytics engines that, combined with an enterprise's own applications, can be used to tease information and insight from virtually any conceivable data set the user can access. HP offers the HAVEn engines for use by the customer. It takes a pretty impressive service to allow virtually any enterprise or organization to get useful insight and information from its available data. It is even more impressive when it is targeted at an audience of data processing/IT (even business) professionals who are specifically not required to be professional data ANALYSTS to quickly realize value. And HP has customers who will testify to their successes with HAVEn. However, to help the first-time user faced with figuring out where and how to get the most benefit from their data, HP has skilled service experts available for projects.
 
HP also previewed the HAVEn workbench at Discover. The workbench is a unifying layer on top of the engines. It allows developers to access the functionality of any of the underlying engines through a common interface. It also provides access to a library of services which expose the functions of the engines. Developers, data scientists, and the like can add new services to the library, or "mash up" two existing services to create a new service. Over time, as more services are added, the ability to explore your data or rapidly prototype new applications will increase exponentially.
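As a purely hypothetical illustration of that mash-up idea, the Python sketch below composes two imagined workbench services into a third. The URL, endpoints, and JSON fields are our inventions for illustration only; HP has not published this API.

```python
# Hypothetical mash-up of two imagined HAVEn workbench services.
# Nothing here is a real HP API; the endpoints and fields are invented.

import requests

BASE = "https://haven.example.com/api"  # placeholder URL, not a real service

def sentiment_score(text: str) -> float:
    # Imagine a text-analytics service (Autonomy-style) exposed in the library.
    r = requests.post(f"{BASE}/sentiment", json={"text": text})
    r.raise_for_status()
    return r.json()["score"]

def recent_reviews(product_id: str) -> list[str]:
    # ...and a query service (Vertica-style) returning recent review text.
    r = requests.get(f"{BASE}/reviews", params={"product": product_id})
    r.raise_for_status()
    return r.json()["reviews"]

def product_sentiment(product_id: str) -> float:
    """The 'mash-up': a new service composed from the two above."""
    scores = [sentiment_score(t) for t in recent_reviews(product_id)]
    return sum(scores) / len(scores) if scores else 0.0
```

The point is the composition pattern: once engine functions are exposed as library services behind a common interface, a new service is just a short script that chains existing ones.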

One of our major themes with our clients (vendors as well as end-users) for the last decade or so has been that it is a major responsibility of vendors to make the full power of all technologies (existing and emerging) accessible and useful to their customers. This is increasingly critical as those customers are increasingly non-technical in nature – they have no idea how a Monte Carlo simulation works, what a regression analysis is or accomplishes, or what benefits the use of Chebyshev's inequality reveals, and even less interest. They just want to get any and all information and insight from their data that they can use to achieve their goals. Satisfying that need and demand appears to be one of the driving forces behind HAVEn.

HAVEn is also part of HP’s efforts to speed and spread the adoption of Big Data/Analytics to the widest possible audience. Having spent between $10 and $20 billion (between organic R&D and acquisitions) on HAVEn, HP believes it and its associated services can be effectively leveraged by customers. HP sees broad market potential for HAVEn. At this time, it has identified several broad market segments of specific interest including:

  1. Business Analysts – enterprise IT, data analytics specialists and experts who can use HAVEn as a tool to operate more effectively and efficiently to speed results and improve quality of their analysis.
  2. Developers – looking to build a business or service around analytics - including entrepreneurs, ISVs, partners, startups – interested in developing analysis-based solutions and services.
  3. Solution Buyers – those looking to get more insight from the data they have, such as marketing/sales executives, product managers, inventory and resource managers, and suppliers – for example, those who want to learn more about buying patterns as they relate to various environmental factors such as time, weather, events, etc.

HP offers two free trials to encourage potential customers to experiment with HAVEn. The free downloads are for the Vertica Community edition and a free trial of ArcSight Logger. Learn more by going to:  http://www8.hp.com/us/en/software-solutions/big-data-platform-haven/try-now.html

Conclusion
HP clearly has invested a lot of time and effort into the HAVEn platform. The single significant drawback we found was the lack of an integrated, 'single pane of glass' UI. Integration packs available among the engines do help.

HP is continuing an aggressive development program as it encourages customers and partners to enhance and extend the reach of the product with connectors. We think that HP is definitely enabling and easing the move of Big Data analytics into the larger marketplace. Customers can learn more about HAVEn and how it is being used by visiting hp.com/haven. We think anyone with any significant data available to them would be wise to investigate what HAVEn might be able to do for them.





Wednesday, July 2, 2014

IBM Bluemix – Good news for cloud enterprise-class application development, testing and deployment

By Rich Ptak



Bluemix is IBM's extensively featured cloud Platform-as-a-Service (PaaS) for building, managing, and running applications of all types. In February of 2014, IBM began an Open Beta of the platform, providing access for interested developers, students, and researchers. We described the platform, support, and training programs available at that time here: (https://tinyurl.com/mzagdq9). Bluemix offers a complete, robust DevOps environment, built on open standards, equipped to develop and run apps in any cloud-friendly language, with integration services to existing systems of record. Developers can access a rich library of IBM, 3rd-party, and open source runtimes, services, and APIs.

Over the last three months, IBM, partners, and 3rd-party participants have added a range of extensions to the platform in the run-up to a General Availability (GA) announcement. With the June 30th GA (a full quarter earlier than IBM originally planned), Bluemix enters the market fully tested by IBM and customers, with documented success of its benefits and impressive support provided by IBM and partner staffs. Applications written using the open source services in Cloud Foundry can be moved between compatible Cloud Foundry implementations. Applications can also be developed/tested/tuned on Bluemix and then moved to another platform, and vice versa. Users have done both.

New services from IBM include Workflow, Geospatial Analytics, Application User Registry, MQ Light, Gamification, Embeddable Reporting, AppScan Mobile Analyzer, and Continuous Delivery Pipeline. Services from 3rd-party partners include Mongo Labs, Load Impact, BlazeMeter, SendGrid, RedisLabs, ClearDB (MySQL), CloudAMQP (RabbitMQ), ElephantSQL (PostgreSQL), etc. It is also worth highlighting that Bluemix includes a strong environment for the development, testing, and management of mobile as well as other applications.

Developers have access to the infrastructure resources, tools, and assets they need (and know) – on demand, when they need them. They don't have to worry about or wait for infrastructure availability, virtual or otherwise. Everything needed for app development, test, deployment, and management is available in the cloud.

IBM intends Bluemix to be used for developing all kinds of applications. However, it is optimized for the cloud-centric nature of mobile, web, and big data applications. The environment's open-standards nature allows customers to move apps (as long as they use only open standard services). As an incentive to stay, IBM will provide compelling value from its middleware portfolio (e.g. mobile, Watson, analytics, etc.) only on Bluemix.

IBM continues to offer a 30-day free trial to encourage developers to become familiar with Bluemix services and tools. After that, fees are based on usage, meaning you pay only for the amount of resources consumed. Runtime charges are based on the GB-hours an application runs. During the free 30-day trial, you get a maximum of 2 GB. Once usage fees begin, you still get 375 GB-hours per month free. Pricing for services and add-ons can be flat-rate or metered; some have a free allowance each month. Details, including pricing, are here: https://developer.ibm.com/bluemix/2014/06/30/general-availability/. The pricing appears to compare favorably against the competition.
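To illustrate the GB-hour arithmetic, here is a minimal sketch in Python. The 375 free GB-hours come from the terms above; the per-GB-hour rate is a placeholder assumption, not IBM's published price (see the pricing link for actual figures).

```python
# Back-of-the-envelope Bluemix-style runtime billing.
# The free allowance is from the terms above; the rate is a placeholder.

FREE_GB_HOURS = 375.0      # free monthly allowance once usage fees begin
RATE_PER_GB_HOUR = 0.07    # hypothetical USD rate -- an assumption

def monthly_runtime_charge(memory_gb: float, hours_running: float) -> float:
    """Estimate one application's monthly runtime charge."""
    gb_hours = memory_gb * hours_running           # total consumption
    billable = max(0.0, gb_hours - FREE_GB_HOURS)  # free allowance first
    return billable * RATE_PER_GB_HOUR

# Example: a 1 GB app running all month (~730 hours) consumes 730 GB-hours,
# of which 355 are billable after the 375 free GB-hours.
print(f"${monthly_runtime_charge(1.0, 730):.2f}")  # -> $24.85
```

The practical takeaway is that small apps can run largely within the free allowance, while charges scale with both memory size and hours run.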

Where’s the Value?

Bluemix is a true Platform-as-a-Service offering, designed with the interests of both the developer and the enterprise in mind. It allows the developer and enterprise to focus completely on creating and delivering value in terms of new products and services for their customers, while IBM takes all the responsibility for providing and maintaining the infrastructure, development, testing, and management tools and software.

For the enterprise, additional value comes from the ability to quickly access and deploy services on a global infrastructure. Also, since Bluemix is built on top of SoftLayer, applications can be easily moved to the SoftLayer environment if you want control over the infrastructure. Applications can also be moved from one IBM datacenter to another at no cost. Bluemix provides a globally supported environment for the deployment of a service, dramatically expanding the geographic reach of a company without adding the expense of a remote presence.

The Final Word
Part of IBM's goal with Bluemix was to ease and speed the transition of IT and enterprises (of all sizes) to the cloud environment. It does so by providing the tools, functionality, and infrastructure that give developers the flexibility to use the languages, tools, and techniques they are most familiar with. This speeds development, testing, and delivery of new solutions, which increases developer productivity and enterprise agility. We recommend that potential users evaluate the benefits and cost of IBM's Bluemix against the alternatives. As we said earlier, we expect that "many will decide to increase and expand their participation to their own as well as their employer's significant benefit."