We will be leveraging VMware NSX in this implementation to provide the load balancing services for the vRA deployment, as well as integrating with vRA for application-centric networking and security. Before any of this is possible, we must deploy NSX to the vSphere cluster, prepare the hosts, and configure logical network services. The guide assumes the use of NSX for these services, but this is NOT a requirement. A distributed installation of vRA can be accomplished with most load balancers; VMware certifies NSX, F5, and NetScaler.
(You can skip this section if you do not plan on using NSX in your environment)
vRA 7.x focuses a lot on the user experience (UX), starting with one of the most critical pieces — deploying the solution — then the second most critical, configuring it. Following through on the promise of a more streamlined deployment experience, vRA 7’s release made a significant UX leap with the debut of a wizard-driven and completely automated installation of the entire platform and automated initial configuration. And all of this in a significantly reduced deployment architecture.
The overall footprint of vRA has been drastically reduced. For a typical highly-available 6.x implementation, you would need at least 8 VAs to cover just the core services (not including the IaaS/Windows components and the external App Services VA). In contrast, vRA 7’s deployment architecture brings that all down to a single pair of VAs for core services. Once deployed, just 2 load-balanced VAs will deliver vRA’s framework services, Identity Manager (SSO/vIDM), vPostgres DB, vRO, and RabbitMQ — all clustered and configurable behind a single load balancer VIP and a single SSL cert. All that goodness, now down to 2 VAs and all done automatically (!) during deployment.
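If you want to sanity-check that architecture before (or after) putting the VAs behind the VIP, a quick script can poll each node directly. The sketch below is just an illustration: the hostnames are placeholders for your own VA FQDNs and VIP, and the /vcac/services/api/health path is the health-monitor URL commonly referenced in vRA 7.x load balancer guidance, so adjust it if your build differs.

```python
#!/usr/bin/env python3
"""Quick pre/post-cutover check: poll the health URL on each vRA appliance node.

Hostnames and the health path are placeholders/assumptions -- substitute your
own VA FQDNs, VIP, and monitor URL as appropriate for your build.
"""
import requests

# Hypothetical node and VIP names -- replace with your environment's FQDNs.
NODES = ["vra-va-01.lab.local", "vra-va-02.lab.local"]
VIP = "vra.lab.local"
HEALTH_PATH = "/vcac/services/api/health"


def check(host):
    """Return the HTTP status code for the health URL, or a short error string."""
    url = f"https://{host}{HEALTH_PATH}"
    try:
        # verify=False only because lab appliances often run self-signed certs.
        resp = requests.get(url, timeout=10, verify=False)
        return resp.status_code
    except requests.RequestException as exc:
        return f"unreachable ({exc.__class__.__name__})"


if __name__ == "__main__":
    for host in NODES + [VIP]:
        print(f"{host}: {check(host)}")
```

Checking the nodes individually as well as through the VIP helps separate an appliance problem from a load balancer configuration problem.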
While the IaaS (.NET) components remain external, several services have moved to the VA(s). This will continue to be the case over time as more and more services make it over — eventually eliminating the Windows dependencies altogether.…
Welcome to the vRealize Automation 7.2 Detailed Implementation Guide (DIG). This series of posts — made up of detailed how-to, end-to-end videos, plenty of commentary, and other related content — was put together to help you deploy and configure a highly-available, production-worthy vRealize Automation 7.2 distributed environment, complete with SDDC integration (e.g. VSAN, NSX), extensibility examples, and ecosystem integrations. The design assumes VMware NSX will provide the load balancing capabilities and includes details on deploying and configuring NSX from scratch to deliver these capabilities.
This little project has been in the works for quite some time and will continue to expand as I include additional how-to’s for a variety of use cases (e.g. IPAM and ITSM integration).
Target Audience
This guide was created for anyone looking to install and/or configure vRealize Automation 7.2 in any environment. And, as was my intention with previous POC guides, the content here can be used as a form of training and education or simply as a reference document for existing or new vRA environments.
As for skill level, this guide assumes you have a general idea of vRealize Automation and VMware’s broader Cloud Management products. However, there is no expectation that you’ve previously deployed and configured vRA.…
A driving factor of virtualization in the old days was the immediate efficiencies that were realized with each P2V. It was money in the bank each time consolidation ratios increased and fewer physical boxes were required. In the physical world, we tried to ensure each OS and associated app(s) had plenty of excess CPU, memory, and storage resources available to it…just in case they were needed at some point in the future. The target utilization rate was typically under 20% (often less than half that) and a sustained rate above that was a cause for concern. In contrast, virtualization aspired to resource utilization rates of 60-80% per host and a little below that cluster-wide. While high utilization became the new norm, over-provisioning of resources was typically avoided (at least in production).
Fast forward to the cloud era (private / public, doesn’t matter), where over-provisioning of machines consuming shared resources is a necessary evil for driving efficiencies at every level of infrastructure and scale. This is especially true for infrastructure-as-a-service. This evil is also one of the benefits…it’s what helps deliver the perception of unlimited resources to the consumer without actually making that kind of investment. While the cost of spare capacity has become less of an issue over time, over-provisioning of resources remains a common practice for many small shops, enterprises, and service providers alike.…
One of my favorite things to do is whiteboard. In my line of work, the whiteboard allows me to tell a story…one that can be broad in coverage, yet tuned on-the-fly to best align with the needs of the audience. It started as a “cloud” whiteboard back when vCloud Director (vCD) was released and the first vCloud Suite offering was announced. The first storylines were all about VMware’s cloud and management framework and leveraging vCD to align with a set of industry-accepted characteristics that defined “cloud”. There have been several iterations over time as new technologies (and acquisitions) came to fruition, with an evolving storyline to highlight modern challenges and the transformative nature of the Software-Defined Datacenter.
The whiteboard has been delivered on your standard everyday office whiteboard, table-tops, glass walls, flip charts, notepads, napkins, and electronically via PowerPoint, iPad, and digital sketch pads. Regardless of delivery medium, I have found the whiteboard to be the most effective means of articulating the often-confusing details and associated benefits of the Software-Defined Datacenter at any level of depth…and without yawn-generating, ADD-invoking death by PowerPoint.
My most recent iteration of the SDDC whiteboard doubles as field and partner enablement, so I had to put a little more thought into the storyline to ensure it closely resembles how customers have typically leveraged vSphere, NSX, VSAN, and the vRealize Suite to evolve their existing datacenters and quickly gain the benefits of SDDC.…
Recapping Part 2 of this series: We staged a number of NSX Logical Switches to be consumed by vRA machines as External Networks. vRA collects and identifies these networks as traditional [vSphere] Network Paths and allows them to be wired for consumption in the Converged Blueprint (CBP) designer as needed (or using custom properties, but that’s beyond this post). Logical Switches can be created for a consumption-only model, automatically created per Deployment when using On-Demand services, or some combination of these.
Moving on…
Similar to its relationship with NSX Logical Switches, vRA provides both consumption-based and dynamic security services to deliver a number of use cases leveraging NSX Security Groups and Security Policies.
A Security Group defines — and logically groups — the objects you want to protect (e.g. virtual machines) and the policies that protect them (via a security policy). Group membership can be static or dynamic (e.g. based on logical naming, containers, tags, or as members of other security groups). Pre-created security groups are collected by vRA endpoint inventory and consumed as Existing Security Groups (SG) within the Converged Blueprint designer. These security groups may ultimately contain a combination of unmanaged vSphere VMs and vRA-managed machines.
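As a rough illustration (my own, not part of the original workflow), those existing security groups can be pulled straight from NSX Manager with its REST API before wiring them into a blueprint. The sketch below assumes an NSX-v manager at a placeholder hostname with placeholder credentials and uses the security group listing call under the global scope; the response parsing may need tweaking for your NSX version.

```python
#!/usr/bin/env python3
"""List NSX-v security groups -- the same objects vRA collects as
Existing Security Groups. Hostname and credentials are placeholders."""
import requests
import xml.etree.ElementTree as ET

NSX_MGR = "nsxmgr.lab.local"   # placeholder NSX Manager FQDN
AUTH = ("admin", "VMware1!")   # placeholder credentials

# globalroot-0 is the global scope in NSX-v; adjust if you scope differently.
url = f"https://{NSX_MGR}/api/2.0/services/securitygroup/scope/globalroot-0"
resp = requests.get(url, auth=AUTH, verify=False)  # self-signed lab certs
resp.raise_for_status()

# Assumes the usual <list><securitygroup>...</securitygroup></list> response;
# adjust the element names if your NSX version returns a different structure.
root = ET.fromstring(resp.text)
for sg in root.findall("securitygroup"):
    print(f"{sg.findtext('objectId')}: {sg.findtext('name')}")
```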
vRA also supports On-Demand Security Groups (ODSG) within CBP, which requires the use of an existing Security Policy.…
A logical switch emulates a traditional network switch by creating logical networks that can be used to connect one or more vNICs of a virtual machine to the corresponding logical network. In an NSX environment, logical switches are directly mapped to an available Transport Zone (VXLAN) and are stretched across all hosts and clusters configured with that Transport Zone. Similarly, a Universal Logical Switch is deployed when used with Universal Transport Zones and can be stretched across hosts, clusters, and even vCenters. Logical switches are typically created and managed using the vSphere Web Client. Once created, machines can be logically wired to them for connectivity to other machines and/or upstream services (e.g. NSX Edge Services Gateway or Distributed Logical Router…or anything else wired to the resulting logical network). Thanks to the power of NSX, these networks can be spun up rapidly (albeit statically) and exist exclusively in the virtualization layer, saving countless management cycles and associated overhead (+ cost).
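To make that a bit more concrete (again, my own sketch rather than a step in this guide), a logical switch can also be created programmatically against the NSX-v REST API. The example below assumes a placeholder NSX Manager, placeholder credentials, and a transport zone (scope) ID you would look up first under /api/2.0/vdn/scopes.

```python
#!/usr/bin/env python3
"""Create an NSX-v logical switch via the REST API.

All names and IDs here are placeholders -- look up your transport zone
(scope) ID under /api/2.0/vdn/scopes and substitute your NSX Manager.
"""
import requests

NSX_MGR = "nsxmgr.lab.local"   # placeholder NSX Manager FQDN
AUTH = ("admin", "VMware1!")   # placeholder credentials
SCOPE_ID = "vdnscope-1"        # placeholder transport zone (scope) ID

# Minimal creation spec; the name is arbitrary and only used for illustration.
payload = """<virtualWireCreateSpec>
  <name>vra-web-tier</name>
  <description>Created via API for vRA consumption</description>
  <tenantId>virtual wire tenant</tenantId>
</virtualWireCreateSpec>"""

url = f"https://{NSX_MGR}/api/2.0/vdn/scopes/{SCOPE_ID}/virtualwires"
resp = requests.post(
    url,
    data=payload,
    headers={"Content-Type": "application/xml"},
    auth=AUTH,
    verify=False,  # lab appliances commonly use self-signed certs
)
resp.raise_for_status()
# The response body carries the new logical switch ID (e.g. virtualwire-XX).
print("Created logical switch:", resp.text)
```

Once created this way, the switch shows up in the next vRA data collection just like one built through the vSphere Web Client.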
As you well know by now, NSX delivers the critical services needed for a modern network infrastructure, while lifecycle automation of network and security services — from provisioning to decommissioning (and everything in between) — is defined by the automation layer.…
A Cloud Management Platform (CMP) provides a unified platform for managing private, public, and hybrid cloud environments together with conventional and modern application architectures. Cloud Management for Dummies was written to guide organizations through some of the challenges of selecting a cloud management platform as they move from traditional IT to a more modern, automated, and governed infrastructure.
About this Book
Cloud Management For Dummies is loaded with information that can help you understand and capitalize on cloud management. In plain and simple language, we explain what a cloud management platform is, why you need it, and which capabilities to demand in an enterprise solution. We also illustrate common use cases for a CMP and guide you down the path to management in the hybrid cloud era.
Excerpt: Identifying the Market Context
Companies in all industries are responding to new opportunities to leverage big data and mobility to drive a better customer experience and a more productive work environment. Many companies are actively pursuing new business models and revenue streams that rely on digitizing and modernizing business processes.
The phenomenon of digitization, along with other structural changes in the business world, is driving the need to dramatically speed up application delivery.