UCS Manager: B-Series Chassis Deployment
The Cisco approach to server deployment focuses on object-based, policy-driven server modeling. This abstraction between a server’s identity and the underlying hardware frees operators from worrying about snowflake configurations and lets them focus on higher-order objectives. The orchestrator behind this approach is UCS Manager, the software that runs on UCS Fabric Interconnects. This powerful tool uses hierarchical models to abstract the configuration of servers into “Service Profiles” that can be templatized, maintained, and scaled with ease. In this post I’ll share the initial configuration of a UCS domain that serves as part of a modern software-defined datacenter.
A basic UCS domain consists of a pair of UCS Fabric Interconnects (FIs) and some UCS Servers, either B-Series (Blades) or C-Series (Rackmounts) outfitted with Cisco Virtual Interface Cards (VICs). In my lab I have a pair of 2nd gen FIs, a UCS 5108 Chassis with (2) 2208XP IO Modules and (8) UCS B200 M3 Blades (each containing a VIC). The topology looks like this:

With the physical connectivity established, the first step in configuring this UCS domain is to power on the FIs and connect to them via console. Once an FI has booted, you’ll be prompted to choose setup via GUI or CLI; we’ll go with the CLI for the basic config. The initial setup is straightforward. Here’s the output from the configuration of my primary FI (Fabric Interconnect A).
Enter the configuration method. (console/gui) ? console
Enter the setup mode; setup newly or restore from backup. (setup/restore) ? setup
You have chosen to setup a new Fabric interconnect. Continue? (y/n): y
Enforce strong password? (y/n) [y]: n
Enter the password for "admin":
Confirm the password for "admin":
Is this Fabric interconnect part of a cluster(select 'no' for standalone)? (yes/no) [n]: y
Enter the switch fabric (A/B) []: A
Enter the system name: hive-fi
Physical Switch Mgmt0 IP address : 10.29.81.11
Physical Switch Mgmt0 IPv4 netmask : 255.255.255.0
IPv4 address of the default gateway : 10.29.81.1
Cluster IPv4 address [10.29.81.1]: 10.29.81.10
Configure the DNS Server IP address? (yes/no) [n]: y
DNS IP address : 10.29.81.3
Configure the default domain name? (yes/no) [n]: y
Default domain name : saasco.lab
Join centralized management environment (UCS Central)? (yes/no) [n]: n

Following configurations will be applied:

Switch Fabric=A
System Name=hive-fi
Enforced Strong Password=no
Physical Switch Mgmt0 IP Address=10.29.81.11
Physical Switch Mgmt0 IP Netmask=255.255.255.0
Default Gateway=10.29.81.1
Ipv6 value=0
DNS Server=10.29.81.3
Domain Name=saasco.lab
Cluster Enabled=yes
Cluster IP Address=10.29.81.10
NOTE: Cluster IP will be configured only after both Fabric Interconnects are initialized

Apply and save the configuration (select 'no' if you want to re-enter)? (yes/no): y
Applying configuration. Please wait.
Configuration file - Ok

Cisco UCS 6200 Series Fabric Interconnect
hive-fi-A login:
With our primary FI online, we can now configure our subordinate FI which should automatically detect the presence of the primary FI.
Enter the configuration method. (console/gui) ? console
Installer has detected the presence of a peer Fabric interconnect. This Fabric interconnect will be added to the cluster. Continue (y/n) ? y
Enter the admin password of the peer Fabric interconnect:
Connecting to peer Fabric interconnect... done
Retrieving config from peer Fabric interconnect... done
Peer Fabric interconnect Mgmt0 IPv4 Address: 10.29.81.11
Peer Fabric interconnect Mgmt0 IPv4 Netmask: 255.255.255.0
Cluster IPv4 address : 10.29.81.10
Peer FI is IPv4 Cluster enabled. Please Provide Local Fabric Interconnect Mgmt0 IPv4 Address
Physical Switch Mgmt0 IP address : 10.29.81.12
Apply and save the configuration (select 'no' if you want to re-enter)? (yes/no): yes
Applying configuration. Please wait.
Tue Nov 5 00:38:31 UTC 2019
Configuration file - Ok

Cisco UCS 6200 Series Fabric Interconnect
hive-fi-B login:
Now we should have access to the UCS Manager GUI via the cluster IP. You may have to bypass an HTTPS security warning and then select “Launch UCS Manager” before you see this screen.

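If you’d rather verify the cluster VIP from a script than a browser, the UCS Manager XML API answers on the same address at the /nuova endpoint. Here’s a minimal Python sketch, assuming only the requests library and placeholder credentials, that performs an aaaLogin/aaaLogout handshake against the VIP configured above:

import requests
import xml.etree.ElementTree as ET

UCSM_VIP = "10.29.81.10"            # cluster IP from the initial FI setup
URL = f"https://{UCSM_VIP}/nuova"   # UCSM XML API endpoint

# aaaLogin returns a session cookie in the outCookie attribute of the response.
login_body = '<aaaLogin inName="admin" inPassword="MyP@ssw0rd" />'
resp = requests.post(URL, data=login_body, verify=False)   # lab only: self-signed certificate
cookie = ET.fromstring(resp.text).attrib["outCookie"]      # raises KeyError if the login failed
print(f"UCS Manager is up, session cookie: {cookie[:12]}...")

# Log out so the session doesn't count against the UCSM session limit.
requests.post(URL, data=f'<aaaLogout inCookie="{cookie}" />', verify=False)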
You should now see a nice, clean UCS Manager layout without any warnings or errors. Chassis will not be automatically discovered until we define the FI port roles.

Let’s start there and begin defining the ports we’ll use for northbound uplinks and the ports we’ll connect to servers and storage. One great aspect of using FIs and VICs is that you can pass management, data, and FC storage traffic through a single cable, because the VIC is a Converged Network Adapter (CNA). This not only simplifies datacenter cabling but also brings a huge degree of configuration flexibility and compute density potential.
Starting with FI A, you’ll notice a number of ports are subtly highlighted, indicating detected links. From this screen, navigate to the “Physical Ports” tab to assign port roles.


Now it’s simply a matter of shift+selecting the ports connected to the blade chassis, right-clicking to open the action menu, and selecting “Configure as Server Port.” Take the same action for the northbound ports, configuring them as uplink ports.


You will notice that discovery of the UCS chassis begins immediately. Let that chug along in the background and complete the same port mapping for FI B before proceeding. Once discovery is complete, you should see your chassis and blade info populate.

With this initial setup complete, we can populate a few more global settings, as necessary, before proceeding with the configuration of the server policies and profiles. These settings include RBAC, NTP, License Management, and so on. You can also take a moment to tweak the power policies and server/fabric extender (FEX) discovery policies. For my lab, none of this is required, so let’s move on!
Before crafting a server profile template, we must first outline the desired end state of our servers. In my case, I want a number of homogeneous ESXi hosts with enough vNICs to support management, data, iSCSI, and vMotion traffic segmented on different VLANs. I don’t need HBAs since I won’t be using FC here. I will need to create some underlying pools to support these blades and will also be creating a few custom policies to help with things like virtual disk (VD) creation and BIOS settings. For sanity’s sake, let’s outline everything we need to accomplish:
- Create MAC, UUID Suffix, Server, and IP Pools
- Create BIOS, Maintenance, Local Disk, and Boot Policies
- Create VLANs and vNICs templates
- Create Service Profile Template to stitch everything together
A fair amount of work, to be sure, but time spent here is invested into policies and profiles we can re-use going forward. This means setting up 100 servers is as easy as setting up 1.
Create Pools
Within the LAN section of UCS Manager, navigate to Pools -> root -> MAC Pools, then right click the heading and click “Create MAC Pool.” We’ll be creating a MAC pool for both the A and B sides, so name this first one something like “mac-pool-a”. For your MAC pool, use the following guidelines (allowing for a pool size of 256):


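If you’d like to sanity-check the block boundaries before typing them into the wizard, here’s a small pure-Python helper. The 00:25:B5 prefix is the commonly recommended Cisco range for UCS MAC pools, and the per-fabric identifier bytes (0A/0B) are illustrative assumptions rather than values from my screenshots:

# Compute the From/To boundaries of a 256-address MAC block for each fabric.
def mac_block(prefix_hex: str, size: int = 256):
    """Return (first, last) MAC strings for a block starting at prefix_hex + :00."""
    base = int(prefix_hex.replace(":", ""), 16) << 8   # append a trailing 00 octet
    fmt = lambda v: ":".join(f"{(v >> s) & 0xFF:02X}" for s in range(40, -1, -8))
    return fmt(base), fmt(base + size - 1)

for fabric, prefix in (("a", "00:25:B5:0A:00"), ("b", "00:25:B5:0B:00")):
    first, last = mac_block(prefix)
    print(f"mac-pool-{fabric}: {first} -> {last}")
# mac-pool-a: 00:25:B5:0A:00:00 -> 00:25:B5:0A:00:FF
# mac-pool-b: 00:25:B5:0B:00:00 -> 00:25:B5:0B:00:FF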
Once both the A and B side MAC pools have been created, let’s create the IP address pools which will be used for vKVM access. Still in the LAN area, right click Pools -> root -> IP Pools -> ext-mgmt and select “Create Block of IPv4 Addresses.”

Next, let’s create the UUID (Universally Unique Identifier) suffix pool. This will provide UCS Manager a way to uniquely recognize your blades regardless of other constructs applied. You will need one UUID per blade, but it’s a good idea to create a large enough pool to support future growth. For this, navigate to the Servers section and right click Servers -> Pools -> root -> UUID Suffix Pools, then select “Create UUID Suffix Pool.”

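As with the MAC pools, it can help to pre-compute the suffix block. UCS Manager expresses suffixes in a XXXX-XXXXXXXXXXXX hex format; the tiny sketch below is a pure-Python helper where the starting suffix of 1 and the block size of 32 are my own assumptions (eight blades today, room to grow):

# Print the From/To boundaries of a UUID suffix block in UCSM's XXXX-XXXXXXXXXXXX format.
def uuid_suffix(n: int) -> str:
    s = f"{n:016X}"                 # 16 hex digits in total
    return f"{s[:4]}-{s[4:]}"

start, size = 1, 32
print(f"From: {uuid_suffix(start)}  To: {uuid_suffix(start + size - 1)}")
# From: 0000-000000000001  To: 0000-000000000020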
Within the Servers -> Pools -> root area, right click “Server Pools” and select “Create Server Pool.” The name of this pool can be thought of as a logical representation of a cluster you might be maintaining or a specific type of server build. Since these will be production servers, I will name them accordingly. In this server pool menu, add your blades to the pool, as seen here:

Pool party complete! Let’s move on to the creation of server policies.
Server Policies
This is where things get fun, as you can tweak every conceivable setting on a server via the use of server policies. Using these small objects, we can craft a service profile template that aligns to a desired server spec. To get started, let’s create the BIOS policy. From the Servers section, right click Policies -> root -> BIOS Policies and select “Create BIOS Policy.” From here, pick a relevant name and dig into the settings that apply to you. I’m following a CVD (Cisco Validated Design) for this setup and will be making the following tweaks:
- Main
  - CDN Control: Enabled
  - Quiet Boot: Disabled
- Advanced
  - Processor
    - DRAM Clock Throttling: Performance
    - Frequency Floor Override: Enabled
    - Processor C State: Disabled
    - Processor C1E: Disabled
    - Processor C3 Report: Disabled
    - Processor C7 Report: Disabled
    - Energy Performance: Performance
  - RAS Memory
    - LV DDR Mode: Performance Mode

Next up, our maintenance policy. For this we can simply modify the default maintenance policy to ensure we have full control over when nodes reboot. From the Servers section, click Policies -> root -> Maintenance Policies -> default. From here, change the reboot policy to “User Ack” so that you must always provide acknowledgement before a server goes down for a reboot (for instance, when a service profile is updated in a way that would require one).

I need a RAID 1 virtual disk (VD) on each blade to house the hypervisor OS, so I’ll create a Local Disk Policy to build the mirrored array for me. To do this, navigate to Policies -> root, right click “Local Disk Config Policies” and select “Create Local Disk Configuration Policy.” I’ll then select “RAID 1 Mirrored” under the mode drop-down.

Now navigate to the LAN tab, then to Policies -> root, and right click “Network Control Policies” to create a new policy. This one exists just to enable CDP, so I will name it “enable-cdp” and set CDP to enabled.

We’ll also create a basic boot policy to streamline our boot process. This will come in handy later when we implement iSCSI boot, but for now we’ll use it to set up a boot from local disk with a fallback to PXE boot if ESXi hasn’t yet been installed. Navigate to the Servers section of UCS Manager, then to Policies -> root, right click “Boot Policies” and select “Create Boot Policy.” In here, click “Add Local JBOD” from the local devices list and then use “Add LAN Boot” to add vNICs for “00-mgmt-A” and “01-mgmt-B” (which we will define later).

Networking
With the fundamentals of our server policies set, it’s time to establish connectivity via the creation of VLANs and vNICs. Navigate to the LAN tab and right-click “VLANs” then click “Create VLANs.”

Create VLANs for the following:
- Native VLAN
- Out of Band Management
- Application Data
- iSCSI Path A
- iSCSI Path B
- vMotion
Be sure to right click the Native VLAN and select “Set as Native VLAN.”

With our VLANs ready, we can now create our virtual NIC (vNIC) templates. We will create a vNIC template for each fabric interconnect for the four necessary functions: mgmt, app data, vMotion, and iSCSI, resulting in a total of eight vNICs on each blade. Within the LAN section, navigate to Policies -> root, right click “vNIC Templates” and click “Create vNIC Template.” Our first template will be for application data via path A, which I will name “app-A”. The settings for app-A and app-B are shown here:
- app-A vNIC:
  - Fabric ID: Fabric A
  - Enable Failover: Enabled
  - Redundancy Type: Primary Template
  - Template Type: Updating Template
  - VLANs: default (native), app
  - MTU: 9000
  - MAC Pool: <pool created for A>
  - Network Control Policy: enable-cdp
- app-B vNIC:
  - Fabric ID: Fabric B
  - Enable Failover: Enabled
  - Redundancy Type: Secondary Template
  - Peer Redundancy Template: app-A
  - Template Type: Updating Template
  - VLANs: default (native), app
  - MTU: 9000
  - MAC Pool: <pool created for B>
  - Network Control Policy: enable-cdp


Repeat this process for the app-B vNIC, making sure you select Fabric B and the MAC pool associated with Fabric B.
The management vNICs will use the following settings:
- mgmt-A vNIC:
  - Fabric ID: Fabric A
  - Enable Failover: Enabled
  - Redundancy Type: Primary Template
  - Template Type: Updating Template
  - VLANs: native (native), mgmt
  - MTU: 1500
  - MAC Pool: <pool created for A>
  - Network Control Policy: enable-cdp
- mgmt-B vNIC:
  - Fabric ID: Fabric B
  - Enable Failover: Enabled
  - Redundancy Type: Secondary Template
  - Peer Redundancy Template: mgmt-A
  - Template Type: Updating Template
  - VLANs: native (native), mgmt
  - MTU: 1500
  - MAC Pool: <pool created for B>
  - Network Control Policy: enable-cdp
The settings change slightly for the vMotion vNIC templates; here are the settings for vMotion A and B:
- vMotion-A vNIC:
  - Fabric ID: Fabric A
  - Enable Failover: Enabled
  - Redundancy Type: Primary Template
  - Template Type: Updating Template
  - VLANs: vMotion (native)
  - MTU: 9000
  - MAC Pool: <pool created for A>
  - Network Control Policy: enable-cdp
- vMotion-B vNIC:
  - Fabric ID: Fabric B
  - Enable Failover: Enabled
  - Redundancy Type: Secondary Template
  - Peer Redundancy Template: vMotion-A
  - Template Type: Updating Template
  - VLANs: vMotion (native)
  - MTU: 9000
  - MAC Pool: <pool created for B>
  - Network Control Policy: enable-cdp


Now on to our iSCSI vNICs:
- iSCSI-A vNIC:
  - Fabric ID: Fabric A
  - Enable Failover: Disabled
  - Redundancy Type: No Redundancy
  - Template Type: Updating Template
  - VLANs: iSCSI-A (native)
  - MTU: 9000
  - MAC Pool: <pool created for A>
  - Network Control Policy: enable-cdp
- iSCSI-B vNIC:
  - Fabric ID: Fabric B
  - Enable Failover: Disabled
  - Redundancy Type: No Redundancy
  - Template Type: Updating Template
  - VLANs: iSCSI-B (native)
  - MTU: 9000
  - MAC Pool: <pool created for B>
  - Network Control Policy: enable-cdp
With our eight vNIC templates ready to go, we can take a quick bird’s-eye view of our settings to ensure correct VLAN association:

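Before wiring these templates into a connectivity policy, a quick programmatic cross-check can catch the usual copy/paste mistakes (wrong fabric, wrong MAC pool, mismatched MTU). The sketch below is just a plain-Python restatement of the eight templates described above; the pool names mirror the mac-pool-a/mac-pool-b naming used earlier, and the checks encode the intent that B-side templates ride Fabric B with the B-side pool, with jumbo MTU everywhere except mgmt:

# Cross-check the eight vNIC templates described above for common copy/paste errors.
templates = {
    "mgmt-A":    {"fabric": "A", "mtu": 1500, "mac_pool": "mac-pool-a", "vlans": ["native", "mgmt"]},
    "mgmt-B":    {"fabric": "B", "mtu": 1500, "mac_pool": "mac-pool-b", "vlans": ["native", "mgmt"]},
    "vMotion-A": {"fabric": "A", "mtu": 9000, "mac_pool": "mac-pool-a", "vlans": ["vMotion"]},
    "vMotion-B": {"fabric": "B", "mtu": 9000, "mac_pool": "mac-pool-b", "vlans": ["vMotion"]},
    "app-A":     {"fabric": "A", "mtu": 9000, "mac_pool": "mac-pool-a", "vlans": ["default", "app"]},
    "app-B":     {"fabric": "B", "mtu": 9000, "mac_pool": "mac-pool-b", "vlans": ["default", "app"]},
    "iSCSI-A":   {"fabric": "A", "mtu": 9000, "mac_pool": "mac-pool-a", "vlans": ["iSCSI-A"]},
    "iSCSI-B":   {"fabric": "B", "mtu": 9000, "mac_pool": "mac-pool-b", "vlans": ["iSCSI-B"]},
}

for name, t in templates.items():
    side = name[-1]                                   # trailing A or B in the template name
    assert t["fabric"] == side, f"{name}: expected Fabric {side}"
    assert t["mac_pool"].endswith(side.lower()), f"{name}: expected the {side}-side MAC pool"
    expected_mtu = 1500 if name.startswith("mgmt") else 9000
    assert t["mtu"] == expected_mtu, f"{name}: unexpected MTU {t['mtu']}"
print("vNIC template definitions look consistent.")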
Using these vNICs we will now create a LAN Connectivity Policy. From the LAN tab, right-click LAN -> Policies -> LAN Connectivity Policies and click “Create New LAN Connectivity Policy.” In here we are going to add the eight vNICs we just created. Click the “+ Add” button, and then within the “Create vNIC” menu select the checkbox “Use vNIC Template.” Select the vNIC template created for mgmt-A and choose “VMWare” from the adapter policy dropdown. Name each vNIC with a numeric prefix to ensure your desired order is respected by host operating systems that handle ordering via Consistent Device Naming (CDN); a quick check of that ordering follows the list below. Our ordering will be as follows:
- 00-mgmt-A
- 01-mgmt-B
- 02-vMotion-A
- 03-vMotion-B
- 04-app-A
- 05-app-B
- 06-iSCSI-A
- 07-iSCSI-B

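The numeric prefixes matter because, per the CDN note above, the host OS will present the vNICs in this name order; a quick pure-Python check confirms the zero-padded scheme sorts the way we intend:

# Confirm the zero-padded prefixes sort into the intended device order.
vnics = ["00-mgmt-A", "01-mgmt-B", "02-vMotion-A", "03-vMotion-B",
         "04-app-A", "05-app-B", "06-iSCSI-A", "07-iSCSI-B"]
assert vnics == sorted(vnics), "prefix scheme does not sort as intended"
print("\n".join(vnics))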
Repeat this process for the remaining vNICs, for a total of eight. In the future I may go over the process of adding iSCSI boot capabilities to this cluster, which would require the creation of iSCSI vNICs in the section below the eight vNICs we just created. That step does not actually create a new vNIC; rather, it is an iSCSI Boot Firmware Table (iBFT) placeholder for the iSCSI boot configuration.

Given that we’ll be leveraging jumbo frames for most of these vNICs, we need to enable jumbo frames globally on the Fabric Interconnects. This is accomplished by navigating to the LAN tab and selecting LAN -> LAN Cloud -> QoS System Class. Within the general tab here, populate 9216 as the MTU for the Best Effort row.

Storage
For this lab I’m going to be installing the host OS to local storage using a RAID1 array of front loaded disks. To establish this policy, within the “Storage” section, navigate to Storage -> Storage Profiles -> root, right click root and click “Create Storage Profile.” Give the profile a name and then under the “Local LUNs” tab, click “Add.” Now give the local LUN a name and size.

Now, to determine how this LUN will be instantiated, we need a disk group policy, which we can create from this wizard by clicking “Create Disk Group Policy.” In here you can customize the type of RAID desired, the quantity and type of disks being used, and more. Aside from those settings, I leave everything else at platform default.

Once you’ve finished the disk group policy settings, press OK a few times to return to the main Storage Profile config window and finally press OK there to complete the process. With this profile, servers will now build a RAID 1 array out of two local disks and create a 20 GB virtual drive. If the local server doesn’t have the necessary drives to match this storage profile, you won’t be able to attach your service profile template, so make sure the settings here match your environment’s disk configuration.
Service Profile Template
With the low level policies defined, we can now craft our Service Profile Template. This will link all of the policies we just created into one template entity that can be used to create individual profiles that will be bound to servers. From the “Servers” section, navigate to Service Profile Templates -> root, right-click root and click “Create Service Profile Template.” This will bring us to a wizard through which we can select all our policies. In the first step, we’ll give it a name, change the type to “Updating Template” and then select the UUID pool created earlier.

Next, for Storage Provisioning, navigate to the “Storage Profile Policy” tab and select the local storage policy we created earlier.

Click next to configure networking and click the radio button for “Use Connectivity Policy,” then select our custom policy.

No FC in this lab, so we’ll select “No vHBAs” under “SAN Connectivity” and skip the Zoning page. On the following vNIC/vHBA Placement page, you’ll notice our eight vNICs populated in the desired order. No need to adjust anything there, nor do we need to define a vMedia policy. We will, however, need to select our custom server boot policy and ensure that we select “default” under the “Maintenance Policy” dropdown.


We’ll then select our server pool assignment under the “Server Assignment” section.

Lastly, we’ll select our BIOS policy from the BIOS Configuration section of “Operational Policies” and then click “Finish.”

Using this template we can create an arbitrary number of service profiles for use with our blades. We’ll create a batch of eight for our chassis by right clicking our freshly created service profile template under Servers -> Service Profile Templates -> root and selecting “Create Service Profiles from Template.” In here, specify a naming prefix and the number of instances you’d like. You will now see the eight service profiles appear within the Service Profiles section and, because we’ve defined a server pool, they will be automatically associated with our blades.

You can check in on the status of the service profile deployments by poking around the Equipment tab and going down to the chassis or blade level within the equipment hierarchy. Under each blade’s general tab, you’ll see the “Assoc State” sitting at “Establishing” while the profile is applied and the server is brought online.


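If clicking through eight blades gets tedious, the same association state is exposed through the XML API. Here’s a hedged Python sketch that issues a configResolveClass query for the lsServer class (the class behind service profiles); the VIP, credentials, and certificate handling are lab placeholders:

import requests
import xml.etree.ElementTree as ET

URL = "https://10.29.81.10/nuova"   # UCSM XML API on the cluster VIP

# Log in and grab the session cookie.
login = requests.post(URL, data='<aaaLogin inName="admin" inPassword="MyP@ssw0rd" />', verify=False)
cookie = ET.fromstring(login.text).attrib["outCookie"]

# lsServer objects are the service profiles; assocState tracks the association lifecycle
# and pnDn points at the physical blade the profile is bound to.
query = f'<configResolveClass cookie="{cookie}" classId="lsServer" inHierarchical="false" />'
resp = ET.fromstring(requests.post(URL, data=query, verify=False).text)

for sp in resp.iter("lsServer"):
    print(f'{sp.attrib["name"]:<20} assoc={sp.attrib.get("assocState", "?"):<12} '
          f'blade={sp.attrib.get("pnDn", "-")}')

requests.post(URL, data=f'<aaaLogout inCookie="{cookie}" />', verify=False)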
Soon we will have a chassis of fully configured servers ready for OS install via PXE! Because this is a net-new install, we have just a bit more work to do establishing northbound connectivity before we’re ready to proceed. For now, we can ensure that all blades are operational and compatible with our service profiles by reviewing the status details. Even with no OS installed, a blade should show an overall status of “OK.” You may find a few errors popping up with a cause of “link-down” or “vif-down,” but these will clear once we get ESXi up and running.

Northbound Network Connectivity
Fabric Interconnects massively simplify the life of a network engineer, as they allow for centralized configuration of VLAN assignment via vNIC templates. Northbound connectivity is also a breeze, given that we need only configure a set of trunk links to the top of rack (ToR) switches. Let’s set up the LACP port channel settings on both the ToR switches and the Fabric Interconnects. Our expanded network topology now looks like this:

Let’s start with the FIs: navigate to the LAN tab of UCS Manager and expand LAN -> LAN Cloud -> Fabric A. Right click “Port Channels” and select “Create Port Channel.” Choose a name and a locally significant port channel number, then assign the specific uplink ports to bundle.


Perform this same process for Fabric B. That’s it for the FIs; now for the Nexus config, which is a bit beyond the scope of this post. All you need here is a pair of Nexus switches set up with a Virtual Port Channel (vPC), providing aggregated LACP links down to the FIs. Below you will find an extract of the switch configuration.
! Nexus ToR A
policy-map type network-qos jumbo
  class type network-qos class-default
    mtu 9216
system qos
  service-policy type network-qos jumbo

vpc domain 1
  peer-switch
  role priority 10
  peer-keepalive destination 10.29.81.6 source 10.29.81.5
  delay restore 150
  peer-gateway
  auto-recovery
  ip arp synchronize

interface port-channel1
  description vPC Peer Link to n5k-core-b
  switchport mode trunk
  switchport trunk allowed vlan 1,8,10,20,21,30
  spanning-tree port type network
  vpc peer-link

interface port-channel11
  description Link to FI-A eth1/22-23
  switchport mode trunk
  switchport trunk allowed vlan 1,8,10,20,21,30
  spanning-tree port type edge trunk
  vpc 11

interface port-channel12
  description Link to FI-B eth1/23-24
  switchport mode trunk
  switchport trunk allowed vlan 1,8,10,20,21,30
  spanning-tree port type edge trunk
  vpc 12

interface Ethernet1/1-2
  channel-group 11 mode active

interface Ethernet1/3-4
  channel-group 12 mode active

interface Ethernet1/31
  description vPC Peer Link to n5k-core-b eth1/31
  switchport mode trunk
  switchport trunk allowed vlan 1,8,10,20,21,30
  channel-group 1 mode active

interface Ethernet1/32
  description vPC Peer Link to n5k-core-b eth1/32
  switchport mode trunk
  switchport trunk allowed vlan 1,8,10,20,21,30
  channel-group 1 mode active
! Nexus ToR B
policy-map type network-qos jumbo
  class type network-qos class-default
    mtu 9216
system qos
  service-policy type network-qos jumbo

interface port-channel1
  description vPC Peer Link to n5k-core-a
  switchport mode trunk
  switchport trunk allowed vlan 1,8,10,20,21,30
  spanning-tree port type network
  vpc peer-link

vpc domain 1
  peer-switch
  role priority 20
  peer-keepalive destination 10.29.81.5 source 10.29.81.6
  delay restore 150
  peer-gateway
  auto-recovery
  ip arp synchronize

interface port-channel11
  description Link to FI-A eth1/22-23
  switchport mode trunk
  switchport trunk allowed vlan 1,8,10,20,21,30
  spanning-tree port type edge trunk
  vpc 11

interface port-channel12
  description Link to FI-B eth1/23-24
  switchport mode trunk
  switchport trunk allowed vlan 1,8,10,20,21,30
  spanning-tree port type edge trunk
  vpc 12

interface Ethernet1/1-2
  channel-group 11 mode active

interface Ethernet1/3-4
  channel-group 12 mode active

interface Ethernet1/31
  description vPC Peer Link to n5k-core-a eth1/31
  switchport mode trunk
  switchport trunk allowed vlan 1,8,10,20,21,30
  channel-group 1 mode active

interface Ethernet1/32
  description vPC Peer Link to n5k-core-a eth1/32
  switchport mode trunk
  switchport trunk allowed vlan 1,8,10,20,21,30
  channel-group 1 mode active
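To confirm the uplinks actually formed, check “show vpc brief” and “show port-channel summary” on the Nexus pair, either by hand or with a short script. The sketch below assumes the netmiko library is installed, that the peer-keepalive addresses above are reachable management IPs, and that the credentials are placeholders:

# Pull vPC and port-channel status from both ToR Nexus switches.
from netmiko import ConnectHandler

switches = ["10.29.81.5", "10.29.81.6"]   # mgmt IPs taken from the peer-keepalive lines above

for ip in switches:
    conn = ConnectHandler(device_type="cisco_nxos", host=ip,
                          username="admin", password="MyP@ssw0rd")
    print(f"===== {ip} =====")
    print(conn.send_command("show vpc brief"))
    print(conn.send_command("show port-channel summary"))
    conn.disconnect()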
And with that final step, we now have our UCS domain nearly operational! From here we need only install ESXi on each of the blades and configure them to be part of our ESXi cluster. For this I’ll be leveraging PXE boot, which I may cover in a future post. For now, I’ll leave you with a very cool feature of UCS Manager that allows for path tracing from the Fabric Interconnects to the blade IO Modules, then to the VICs on the blades. This is called the “Hybrid View” and can be accessed by clicking on the chassis of your choice and then navigating to the “Hybrid Display” tab. It’s a very useful perspective, and in my setup you can see that a few blades have dual VIC cards while some have only a single VIC (denoted by the black boxes representing ports on each blade). There are many awesome features like this built in to UCS Manager, but for now we’ll move on with our datacenter buildout!
