Those decisions can be tough to navigate for any business leader, but for those outside the United States and Europe, there are even more considerations. In an article on the site Security Intelligence, author Preethy Soman lists these added challenges specific to Asia-Pacific:
1. Data location
Data centers often reside in the U.S. and Europe, which can complicate the transmission and decryption process, as well as efforts to head off attacks, without a strong strategy.
2. Lack of standards
There are many different cloud providers offering many different security features, with no unifying guidelines. Users must look closely at what features providers offer and ensure continuity if they work with more than one cloud provider.
3. Regulatory requirements
Different countries have different data regulations, so businesses based in countries other than where their data resides must be aware of multiple legal requirements. A few options, such as Box Zones from IBM, do enable organizations to make use of data centers closer to home. That’s something for business leaders in the Asia-Pacific region to keep in mind.
Not only must business leaders ensure their cloud providers comply with their country’s regulations and meet their business needs, they also must ensure that a provider’s cloud technology is compatible with their on-premises systems.
Cloud is becoming central to ever more aspects of our digital society, so much so that in a few years we may well stop talking about it altogether. Instead, we will simply assume the capabilities it represents: better economics, ease of use and flexibility.
Today 80% of enterprises are headed toward hybrid models, which integrate cloud and on-premises infrastructure, according to IDC. In addition, IDC says more than 90% of new software will be built for cloud delivery in 2015. This major IT shift is forcing corporations to find the best way to move to the cloud without forgoing existing investments in infrastructure.
IBM is helping lead this shift to the cloud and assisting clients across a wide array of industries as they digitally transform their organizations. Through our decades of client work and industry expertise, we are able to deliver consistency with choice via hybrid, public or private clouds. We’re leveraging this heritage and expertise while investing billions to deliver new AI, Internet of Things and Blockchain applications to the cloud.
Today there is no shortage of opinions when it comes to cloud leadership. Research firms, influencers and pundits have all weighed in about defining cloud, vendor market share, the ongoing evolution of cloud-based services and who will ultimately end up on top.
The assessments by analysts are varied, but rather than examine approaches or methodology, we’d like to stick to the facts. Here’s what some of the key analysts in the industry are saying about cloud:
- IBM Captures Leadership Position in Hybrid Cloud Environment Adoption by Technology Business Research (TBR).
- IBM Named a Leader in North American Platform-as-a-Service by Enterprise Strategy Group (ESG).
- IBM Named a Leader in the Gartner Magic Quadrant for Data Center Outsourcing and Infrastructure Utility Services for North America. The growth in data center outsourcing is being driven by the global adoption of cloud, and this report is the latest validation of IBM’s evolution from a systems integrator to a services integrator with hybrid cloud.
- IBM Named Leader in Private Cloud Adoption by Technology Business Research (TBR).
These new independent reports build on two that recently came from IDC and TBR naming IBM a leading IaaS provider. Another client survey by TBR ranked IBM as the most widely adopted cloud platform and Synergy Research recently ranked IBM as the #1 hybrid cloud provider and also a top three leader in IaaS. http://ubm.io/2atvN7X.
If these reports weren’t enough, just this quarter IBM Cloud announced 30 percent year-over-year growth. IBM Cloud has now reached $11.6B in revenue over the past 12 months, with a $6.7B annual run rate for cloud delivered as a service.
This strong growth is being driven by customer wins including Halliburton and Pratt & Whitney. Halliburton is using the cloud to lower the risk and cost of exploring new oil and gas fields, ultimately improving its bidding decisions. Using high-performance computing and the GPU features of the IBM Cloud, Halliburton can quickly run hundreds of simulation cases and build forecasts.
Pratt & Whitney, a United Technologies Corp. company, is moving the engine manufacturer’s business, engineering and manufacturing enterprise systems to a fully-managed and supported environment on the IBM Cloud. The move will enable the company to scale quickly to meet the expected rise in computing service, data processing, and storage demands as a result of increased production over the next decade.
These milestones reflect the customer adoption and validation IBM is experiencing in the market. People will continue to speculate about where the market is headed. In the meantime, these facts and independent third-party assessments underscore IBM’s leadership in the cloud.
I don’t discount its value, but my view of cloud is a little different because my job begins and ends with IBM clients’ success in adopting cloud, nothing more or less. As a result, I get a daily, ground-level view of what enterprise CIOs, line-of-business leaders and other decision makers experience when adopting cloud technology.
Here’s what I hear from them:
- They believe in the cloud so much that they’re literally betting their businesses on it.
- They’ve decided a hybrid cloud strategy is the right approach.
- They aren’t thinking so much about infrastructure as about innovation and speed: basically, the “second wave” of cloud computing, which includes analytics for structured and unstructured data as well as security capabilities.
They also understand that cloud isn’t a destination, but rather a platform for innovation. It’s where they can dream big, start small, experiment and scale when successful. In these organizations, CTOs and CIOs become advocates of “the art of the possible.”
Hybrid is the palette they’re painting with, best expressed by the analysts at Frost & Sullivan: “At their core, successful hybrid cloud strategies support the delivery of high-value applications and services to the business, while at the same time driving cost and inefficiency out of the IT infrastructure.”
Fine, but how does adopting a hybrid cloud strategy support business success?
Successful enterprises provide the answer. They aren’t simply grabbing cloud technology for its own sake. Instead, they’re pursuing a business strategy that’s equal parts transformation and industry disruption. They have a deep faith that cloud and cognitive technology will cause changes in customers’ experiences, vastly improve business processes and operations, and improve insight and innovation across all aspects of their companies’ missions.
Look at how these companies are doing it:
- Shop Direct, one of the UK’s largest online retailers, wanted to improve its customers’ shopping experiences. To do so, it needed greater IT performance and flexibility. By taking a hybrid approach and migrating critical workloads to a fully managed cloud environment, Shop Direct improved its ability to react to market changes, launch strategic digital initiatives and create an easier, more personalized online experience.
- Coca-Cola Enterprises, Inc. (CCE), an independent Coca-Cola bottler based in the United States, operates 17 manufacturing sites across Europe. In today’s crowded soft-drink market, the company wanted to stay relevant to consumers by engaging them more creatively, interactively and personally. Mobile fit the bill. To make this happen, CCE needed to bring its business systems into a global cloud environment. By using a hybrid strategy, it reduced the time to deploy new applications by more than 30 percent while creating new forms of customer engagement.
- Anthem, the health benefits company, wanted to simplify IT, reduce risk and develop products faster. By using a cloud orchestration tool and bare metal services—both marks of a hybrid strategy—Anthem accelerated system provisioning time from 17 days down to seven hours. It has also consolidated its subsidiaries’ standalone IT into one scalable platform that can respond rapidly to changing business needs.
In this second wave of cloud, where hybrid is the strategy of choice, it’s no longer only about cheap computing and storage. Instead, cloud has become the platform for innovation and business value. It is the IT delivery model that impacts the entire enterprise.
If anything, hybrid’s bringing enterprises into the third wave of cloud: cognitive computing, the next frontier of innovation. Increasingly, cognitive capabilities are being embedded in applications and they’ll be the next game changer. Hybrid enterprise leaders will use cognitive for natural human engagement and a deeper understanding of dark data. They’ll uncover insights into their businesses that they couldn’t have achieved even a year ago. And it will get them even closer to their customers.
For more about hybrid cloud strategy, read Frost & Sullivan’s “Using Hybrid Cloud Strategy to Drive Business Value.”
It’s no secret that application deployment failures and slow deployment timelines lead to massive financial losses. Potential damage to a business’s reputation and, ultimately, the loss of customers make avoiding failure a top priority at every management level, from CEOs to IT directors, according to a recent ADT report.
The costs alone are intimidating. Infrastructure failures can cost as much as $100,000 per hour. Production outages cost roughly $5,000 per minute. Critical applications can cost organizations $500,000 to $1 million per hour in some cases.
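To put those figures in perspective, here is a quick back-of-the-envelope calculation based on the approximate per-minute outage cost cited above (the numbers are illustrative, not a pricing model):

```python
# Back-of-the-envelope downtime cost, using the roughly $5,000-per-minute
# production-outage figure cited above. Illustrative only.

COST_PER_MINUTE = 5_000  # USD, approximate figure from the text


def outage_cost(minutes: float) -> int:
    """Estimated cost of a production outage lasting `minutes` minutes."""
    return int(minutes * COST_PER_MINUTE)


print(outage_cost(60))      # one hour of outage  -> 300000
print(outage_cost(8 * 60))  # one workday outage  -> 2400000
```

At that rate, a single hour of outage already exceeds the $100,000-per-hour infrastructure-failure figure, which shows why critical applications reach the $500,000-to-$1-million-per-hour range.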
Why all the problems? Based on my 13 years of IT experience working with clients of all sizes across various industries, these are some key causes of application deployment failure:
1. Process inadequacy.
Operational resilience means more than the ability to recover from failure. It also includes the ability to prevent failures and take actions to avoid them. Many organizations do not have the appropriate operational resilience maturity required for their IT and business. It is practically impossible to prevent application failures completely, but it is important that organizations take the time to find, predict and fix them.
2. Lack of consistency in the release pipeline.
Many organizations experience a mismatch of software deployment models through their IT systems. This results in failures because systems are typically interconnected in IT landscapes.
3. Process complexity.
Some environments are complicated by the myriad of different toolsets and deployment procedures used by development and operational teams. The vast array of tools creates multiple tooling domains with embedded manual processes between the domains, which results in process complexity. In addition, there are examples where the provisioning and deployment processes are very different at the opposite ends of the release pipeline.
4. Lack of standardization.
Lack of standardization and flexibility throughout the development and release process commonly shows up in application vulnerability scanning. These weaknesses arise when development teams skip the appropriate security testing because they lack the appropriate governance measures. In some cases, testing is viewed as expensive and time consuming, leading to a tendency to minimize it.
5. Lack of skills.
Every organization has its hero developers or operations experts who can single-handedly solve every problem. Over time, processes are built around these individuals, making those processes difficult to run when they move on. It is crucial to have processes that are not built around one or two critical resources, but that scale and are repeatable and automated to meet the changing demands of the organization.
6. Lack of communication.
A lack of proper communication and interoperability between the demand and supply sides of IT (development and operations teams) results in situations in which actions taken in isolation seem sensible but, put together end to end, result in failure. In many organizations, the majority of changes are incremental additions or alterations. These changes often attract less oversight and control than major projects.
How can one avoid the big bad six? An effective way is to discover faults that could lead to failure early in the release cycle. Doing so reduces the cost of fixing those faults and avoids the cost that would otherwise be incurred from an application deployment failure.
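The "discover faults early" idea can be sketched as a simple automated gate at the start of the release pipeline. The checks below are hypothetical examples (a real pipeline would run security scans, integration tests and so on), but they illustrate the principle of failing cheaply and early rather than expensively in production:

```python
# Minimal sketch of early fault detection in a release pipeline.
# The artifact structure and check names are hypothetical examples,
# not a real deployment tool's API.

def check_config_present(artifact: dict) -> bool:
    # An artifact shipped without its configuration is a classic
    # cause of late-stage deployment failure.
    return "config" in artifact


def check_versions_pinned(artifact: dict) -> bool:
    # Unpinned dependency versions create inconsistency across the
    # release pipeline (cause #2 above).
    return all("==" in dep for dep in artifact.get("dependencies", []))


def run_pipeline_gate(artifact: dict) -> list[str]:
    """Return the names of failed checks; an empty list means 'go'."""
    checks = {
        "config_present": check_config_present,
        "versions_pinned": check_versions_pinned,
    }
    return [name for name, check in checks.items() if not check(artifact)]


artifact = {"config": {"env": "prod"}, "dependencies": ["requests==2.31.0"]}
print(run_pipeline_gate(artifact))  # -> [] (all checks pass)
```

Running cheap gates like this at every stage keeps faults from propagating downstream, where they become far more expensive to fix.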
Now we’re at the forefront of another disruption: a transition to cloud in the form of blended technology and environments, most commonly known as hybrid. Hybrid takes on many forms: traditional, non-cloud to cloud (public or private); private cloud to public cloud; private cloud to private cloud, or public cloud to public cloud.
As an industry researcher, I talk with a lot of CIOs. They are peers who have been in your shoes, and I’m certain if you asked, they would say they empathize with you. You sit in the middle, sandwiched between the requirements of the business (it expects you to accelerate innovation) and the needs of IT (it needs speed and quality). The two can seem pretty oppositional at times.
But they don’t have to be. It is possible to transform business without sacrificing speed or security. With the right technology, you can connect workloads to get at powerful data, the data that will spark insights and allow you to disrupt before you become disrupted.
We recently completed a study of 500 IT decision makers who have implemented hybrid environments. The research indicates that the frontrunners, the ones seeing more competitive advantages, are doing so thanks to the right mix of blended, next-generation technology. In fact, nine in 10 of them said hybrid cloud gives them greater ROI than either an all-traditional or all-cloud environment.
Two-thirds of organizations that blend traditional and cloud infrastructures are already gaining advantage from their hybrid environments.
This doesn’t surprise me. It confirms what we knew all along: cloud isn’t a destination, it’s a platform for innovation and an enabler for change. Consider these survey stats:
- Successful organizations are five times more likely to use hybrid cloud for cognitive computing initiatives
- 85 percent of IT leaders report that hybrid cloud is accelerating digital transformation in their organization.
From an industry standpoint, I have seen the greatest innovation and change occur when CIOs see and embrace technology shifts. That’s true with cloud: a blended environment – an open, hybrid cloud – will deliver the outcomes your business expects and let you drive disruption. Here’s another supporting stat:
- 85 percent of leading organizations believe open technologies are essential for hybrid portability and interoperability
I encourage you to take the time and read the study to find out how your peers are making use of hybrid cloud as an enabler of digital change and competitive advantage.
- Scalability, including rapid allocation and deallocation of resources with a pay-as-you-use model (noting that the use of individual resources can vary greatly over the life cycle of an application)
- Reduced capital expenditure
- Reduced lead times with on-demand availability of resources
- Self-service with reduced administration costs
- Reduced skill requirements
- Support of team collaboration
- Ability to add new users quickly
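The first benefit in the list, pay-as-you-use scalability, is easy to see with a toy cost model. The prices and demand numbers below are hypothetical, but they show why paying only for resources actually used beats provisioning fixed capacity for the peak when demand varies over an application's life cycle:

```python
# Illustrative comparison: fixed capacity sized for peak demand vs.
# pay-as-you-use billing. All rates and usage figures are hypothetical.

HOURLY_RATE = 0.50    # USD per instance-hour (hypothetical price)
FIXED_CAPACITY = 10   # instances provisioned to cover the peak

# Instances actually needed each hour over a small sample window;
# note the single peak of 10 and the long stretches of low demand.
hourly_demand = [2, 2, 3, 5, 10, 7, 4, 2]

# Fixed capacity: pay for the peak every hour, used or not.
fixed_cost = FIXED_CAPACITY * HOURLY_RATE * len(hourly_demand)

# Pay-as-you-use: pay only for the instances each hour required.
payg_cost = sum(hourly_demand) * HOURLY_RATE

print(f"fixed capacity:  ${fixed_cost:.2f}")
print(f"pay-as-you-use:  ${payg_cost:.2f}")
```

In this sketch the pay-as-you-use bill is less than half the fixed-capacity bill, and the gap widens the spikier the demand profile is.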
The automation support one receives in a PaaS environment also provides productivity improvements and consistency in delivery. Along with automation is the ability for closer equivalence of the development, test and production environments, again improving consistency and reliability of delivery. This is one aspect of a DevOps/agile development approach that is ideal for a PaaS environment.
In addition, PaaS systems typically enable the sharing of resources across multiple development teams, avoiding the need for wasteful allocation of multiple assets of the same type in separate silos.
PaaS systems typically build in security and data-protection features, including resilience capabilities such as replication and backups. This can improve security and reduce the need for in-house security skills.
The provision of sophisticated, off-the-shelf capabilities as services enables the rapid creation and evolution of applications that address business requirements. This is especially important when considering mobile and web applications that include social and Internet of Things (IoT) capabilities.
Business applications typically require integration and involve aggregation of data and services from multiple existing systems. PaaS systems usually feature prebuilt integration and aggregation components to speed and simplify necessary development work.
PaaS systems can be used to build applications that are then offered to other customers and users as a software as a service (SaaS) offering. The requirements of SaaS applications, including scalability and the ability to handle multiple tenants, can usually be met by the cloud computing capabilities of a PaaS system.
In our next installment, we’ll provide guidance for acquiring and using PaaS offerings.
Interested in learning more about PaaS and getting a better picture of its implementation best practices? Check out my post on PaaS basics and download the Cloud Standards Customer Council’s “Practical Guide to Platform as a Service.”
When we think of cloud computing, we think of situations, products and ideas that started in the 21st century. This is not exactly the whole truth. Cloud concepts have existed for many years. Let’s go back to that time.
It was a gradual evolution that started in the 1950s with mainframe computing.
Multiple users were capable of accessing a central computer through dumb terminals, whose only function was to provide access to the mainframe. Because of the costs to buy and maintain mainframe computers, it was not practical for an organization to buy and maintain one for every employee. Nor did the typical user need the large (at the time) storage capacity and processing power that a mainframe provided. Providing shared access to a single resource was the solution that made economical sense for this sophisticated piece of technology.
After some time, around 1970, the concept of virtual machines (VMs) was created.
Using virtualization software like VMware, it became possible to run one or more operating systems simultaneously in isolated environments. Complete virtual computers could run inside a single piece of physical hardware, which in turn could run a completely different operating system.
Virtual machines took the 1950s’ shared-access mainframe to the next level, permitting multiple distinct computing environments to reside on one physical machine. Virtualization came to drive the technology forward and was an important catalyst in the evolution of information and communication technology.
In the 1990s, telecommunications companies started offering virtualized private network connections.
Historically, telecommunications companies had only offered single dedicated point-to-point data connections. The newly offered virtualized private network connections had the same service quality as their dedicated services at a reduced cost. Instead of building out physical infrastructure to give more users their own connections, telecommunications companies were now able to provide users with shared access to the same physical infrastructure.
The following list briefly explains the evolution of cloud computing:
• Grid computing: Solving large problems with parallel computing
• Utility computing: Offering computing resources as a metered service
• SaaS: Network-based subscriptions to applications
• Cloud computing: Anytime, anywhere access to IT resources delivered dynamically as a service
Now let’s talk a bit about the present.
SoftLayer is one of the largest global providers of cloud computing infrastructure.
IBM already has platforms in its portfolio that include private, public and hybrid cloud solutions. SoftLayer delivers an even more comprehensive infrastructure as a service (IaaS) solution. While many companies look to keep some applications in their own data centers, many others are moving to public clouds.
Even now, the purchase of bare metal can be modeled like commercial cloud (for example, billed by usage, or put another way, physical server billing by the hour). As a result, a bare metal server request with all the resources needed, and nothing more, can be delivered in a matter of hours.
The story is not finished here, though. The evolution of cloud computing has only begun. What do you think the future holds for cloud computing? Connect with me on Twitter….
More and more organizations are moving their enterprise workloads to the cloud, but they don’t just get there through magic. Often, there’s a lot of expense and risk involved. Occasionally, entire IT operations have to be overhauled. It’s a big challenge.
To face down that challenge, IBM and VMware joined forces earlier this year to help companies move their existing VMware workloads from on-premises environments to the cloud. So far, over 500 VMware clients have tapped IBM Cloud and its almost 50 security-rich cloud data centers to help with the transition.
To make hybrid cloud adoption even faster and easier, today IBM and VMware launched VMware Cloud Foundation on IBM Cloud, a unified platform with compute, storage and network virtualization solutions built in. Not only does this give clients a consistent architecture based on VMware Validated Designs, it also enables them to implement and deploy a VMware software-defined environment in hours rather than weeks by automatically provisioning the Cloud Foundation stack on IBM Cloud. Enterprises can then move workloads to the cloud without any changes to them and can continue using the same familiar tools and existing scripts to manage the IBM-hosted Cloud Foundation environment.
Of course, no one can overcome a huge challenge alone. It takes teamwork. That’s why IBM is continuing to grow its partner ecosystem to support VMware’s Software-Defined Data Center environments. Here’s just a handful of IBM partners who are helping ease the move to the cloud:
- HyTrust, which has announced new capabilities for its workload security platform to help organizations reduce risk, automate compliance, and ensure availability in virtualized and cloud environments.
- Veeam Software is helping organizations meet recovery times for all applications and data via a new kind of availability solution that delivers high-speed recovery, data loss avoidance, verified recoverability, leveraged data and complete visibility for VMware Cloud Foundation on IBM Cloud. Veeam is integrated into the Cloud Foundation and vCenter Server offerings on IBM Cloud announced at VMworld. It provides backup and instant virtual machine recovery for the VMware management stack.
- BMSIX, an IBM partner in Brazil. They’re helping customers such as Multiplus move seamlessly to cloud by providing migration services to manage their entire VMware software portfolio.
- Zerto, a provider of hypervisor-based business continuity and disaster recovery software. Its ability to reduce downtime to near seconds, track restore points, and perform uninterrupted testing and storage-agnostic replication makes it an excellent choice for customers looking to minimize downtime.
And then there’s Intel Corporation, which is collaborating with IBM to deliver workload-optimized performance through the use of the Intel Xeon processor E5 v4 product family while protecting data through chip-level, hardware-enforced security with Intel Trusted Execution Technology. IBM Cloud provides customers with the ability to choose bare metal servers or virtual servers that best meet their workload requirements for performance and value. Additionally, IBM Cloud offers customers bare metal servers to help assure that workloads can only run on trusted hardware in a known location. The Intel and IBM Cloud collaboration enables businesses to deploy VMware’s technologies with the control, security and transparency to accelerate enterprise hybrid cloud deployments.
Raejeanne Skillern, Intel Corporation’s Data Center Group Vice President General Manager for the Cloud Service Provider Group, had this to say about the collaborative effort: “With the acceleration of enterprise hybrid cloud adoption, organizations are beginning to ask where their data resides and how is it protected. We are pleased to be working with IBM Cloud to deliver a unique set of capabilities that enable hardware enforced security in the IBM Cloud to meet our customers’ data protection and compliance requirements.”
Beyond the corporate partners, IBM also has a team of nearly 4,000 service professionals and advisors with the know-how and expertise to help clients migrate VMware environments to the cloud without the hardships they may have anticipated.
August 29, 2016
In February 2016, Cirba conducted a study across three different public cloud offerings, including Amazon® AWS, Microsoft Azure® Virtual Machines and IBM® SoftLayer® Bare Metal, to compare costs and investigate the impact of bare metal infrastructure. On a sample of just under 1,000 workloads representative of enterprise usage, SoftLayer Bare Metal servers provided 53% cost savings over AWS….
For any information or questions you have regarding SoftLayer, don’t hesitate to contact me! 🙂