Coming Attractions: Service Manager & IPv6


On this fine evening, we wanted to share with you a little preview of coming attractions, which will hopefully appear in future posts. Two of our projects revolve around Microsoft System Center Service Manager and IPv6 (separate endeavors). Both of these hold good promise for our organization, and where we go with each may help you as well.

Through the years, we’ve used a couple of different help desk and change management tools–Track-It! and Alloy Navigator–and in each, we’ve run into issues and shortcomings. Track-It! was fine as a ticketing system, but provided very little correlation (if any), no audit trail, and sparse asset management. Alloy is a step in the right direction with a pretty comprehensive set of features, ranging from Purchase Orders to Incident and Change Management to Asset tracking, but the application and system itself are fraught with bugs, counter-intuitive processes, and the like. In other words, lots of ongoing work that is worthy of many tickets of its own.

So we’re venturing into Microsoft’s Service Manager territory and are very interested in the integration with the rest of the System Center suite (Configuration Manager and Operations Manager), as well as Active Directory. We’re also checking out Provance IT Asset Management, a management pack for SM, which enhances the product and provides an otherwise absent financial piece. Looking good so far!

On the networking side, we’ve been in the R&D phase with IPv6 (Internet Protocol version 6) for a few months now since receiving our own /48 block of addresses from ARIN. The documentation online is a bit sparse and mostly targeted to either consumers (Teredo) or ISPs, but we’re finding some nuggets in the digging. Some good resources thus far are:

IPv6: Cisco IOS


Addressing. Routing. DHCP. EIGRP. HSRP. Mobility. After consuming Cisco’s 706-page IOS IPv6 Configuration Guide, these are just a few of the areas we’re processing as the deployment plan starts coming together. If you’re running something other than Cisco, some of the commands below, and of course EIGRP, may not directly apply, but perhaps you can abstract the concepts and use them in your own network.

Here’s a rundown of the IOS commands we’ll be utilizing as we begin to implement (a sample configuration sketch follows the list):

ipv6 address: (Interface) Apply to VLAN interfaces, routing interfaces, etc. (e.g. vlan20, g1/10, g2/0/23)
ipv6 general-prefix: (Global) Specifies the prefix of your IPv6 address space (e.g. 2001:db8:91b5::/48)
ipv6 unicast-routing: (Global) Enables IPv6 routing on the switch/router
ip name-server: (Global) Not specific to IPv4 or v6, but necessary to add IPv6 name server addresses
ipv6 dhcp relay destination: (Interface) Configure on all interfaces that need DHCP relaying
ipv6 eigrp: (Interface) Unlike in IPv4, EIGRP for IPv6 is enabled per interface (no “network” statements); apply to routing interfaces
ipv6 router eigrp: (Global) Creates the EIGRP router process on the switch
ipv6 hello-interval eigrp: (Interface) Configured on interfaces using EIGRP to set the frequency of hello packets to adjacent routers
ipv6 hold-time eigrp: (Interface) Configured on interfaces using EIGRP to tell neighbors how long to consider the sender valid before declaring it down
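
To tie those together, here’s a rough sketch of how the pieces might land on one of our layer-3 switches. The VLAN, addresses, and EIGRP AS number are placeholders rather than our production values:

! Globals: enable IPv6 routing, define the /48, and point at an IPv6-capable name server
ipv6 unicast-routing
ipv6 general-prefix OUR-PREFIX 2001:db8:91b5::/48
ip name-server 2001:db8:91b5:10::53
!
! EIGRP for IPv6 starts shut down; set a router ID if no IPv4 address exists on the box
ipv6 router eigrp 10
 eigrp router-id 10.0.0.1
 no shutdown
!
! Per-interface: an address carved from the general prefix, DHCPv6 relay, and EIGRP with tuned timers
interface Vlan20
 ipv6 address OUR-PREFIX 0:0:0:20::1/64
 ipv6 dhcp relay destination 2001:db8:91b5:10::25
 ipv6 eigrp 10
 ipv6 hello-interval eigrp 10 5
 ipv6 hold-time eigrp 10 15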
Coming next: a consolidated IPv6 deployment plan, derived from NIST Guidelines for the Secure Deployment of IPv6…

VMworld: Distributed vSwitch Best Practices (VSP2894)


Speaker: Vyenkatesh Deshpande (VMware)

Agenda:
– Overview of VDS
– vSphere 5 New Features
– VDS Best Practices
– VDS Myths

Overview

– unified network virtualization management, independent of the physical fabric
– manage one datacenter-wide switch vs. individual switches per host
– vMotion-aware: statistics and policies follow the VM, simplifying debugging and troubleshooting

VMworld: SRM 5.0 & vSphere Replication (BCO1562)


Speakers: Lee Dilworth, Clive Wenman (VMware)

Understanding the Use Cases and Implementation Options

Prior to SRM 5, SRM relied on array-based replication
– requires same versions of vCenter and SRM but ESX versions can vary
SRM 5 now supports vSphere Replication (in addition to array-based)
– vSphere Replication requires version parity across all vSphere components

SRM: Site Recovery Manager
SRA: Storage Replication Adapter

SRM 5 UI allows seeing both sites from one interface

vSphere Replication offers a cost-effective choice/alternative to array-based
– does not replace array-based for the foreseeable future

IPv6: RFC 6177 obsoletes RFC 3177


In what we believe to be a VERY wise revision, the IETF (Internet Engineering Task Force) has issued RFC 6177, changing its earlier recommendation of indiscriminately issuing /48 IPv6 address blocks to sites and organizations. Under RFC 3177, end sites were to be given /48 blocks, regardless of size. Thus, if an organization had multiple sites–whether a collection of small doctor’s offices or a multinational conglomerate–each of those sites would be assigned a /48.

Granted, IPv6 provides an unprecedented number of addresses and blocks, but discussions leading up to RFC 6177 argued that such a practice could be tantamount to declaring that 640K of memory is all anyone would ever need. It also was reminiscent of the early days of IPv4 when it wasn’t uncommon to give out /16’s, /12’s or even /8’s to organizations. And we all know how that ended up…

With the publication of RFC 6177 in March 2011, the IETF’s recommendation has changed to assignments between /48 and /64, depending on the request. The original intent of RFC 3177–to minimize the hurdles end sites face in getting blocks sufficient for years ahead–is still preserved, so that end sites can maintain existing subnetting and transition to IPv6 without inordinate difficulty. The allowance, though, to assign a /56 or smaller block where appropriate will help keep IPv6’s options open as use cases develop and the protocol evolves.

Kudos to IETF for learning from history!

Hyper-V / VMM 2012 R2 and VMQ, Part 1

Microsoft has been gaining ground in the virtualization sphere one step at a time since Hyper-V first premiered. While some increments were negligible (or merely painstakingly obvious), Microsoft achieved significant breakthroughs in late 2013 with the release of all things “2012 R2”. The puzzle piece on which we’ll focus here is VMQ (specifically dynamic VMQ, or dVMQ).

VMQ gives Hyper-V and System Center Virtual Machine Manager (VMM) Logical Switches what Receive Side Scaling (RSS) provides to physical servers; namely, it leverages multiple compute cores/interrupts to increase network traffic efficiency. The network teaming (Load Balancing and Failover, or LBFO) configuration is important here, because it affects how VMQ maps queues to processors. The full table of possibilities is given halfway down the page of TechNet’s VMQ Deep Dive, Part 2. In a nutshell, some configurations need NIC queues to overlap the same processors (so that all queues are everywhere), while others need segregation (so every queue has its own unique core).

In our environment, we have a switch-independent team with Dynamic load balancing (new to Windows Server 2012 R2), so “Sum of Queues” is how we should be set. Given that our Hyper-V hosts have two QLogic QLE8262 10Gbps CNAs with one port per card in use and four CPU sockets with ten cores each, we can allocate up to 16 queues per active CNA port but will stick to 8 in the examples below (the card determines the maximum number of queues, but that many CPU cores may not exist in the system). Take note: hyper-threading makes a difference, too. Since it is enabled in our environment, the relevant logical processors are the even-numbered ones, starting at zero (i.e. 0, 2, 4, 6, 8…). The other key here is the exclusion of the first core, zero, as the system uses it for primary functions that are best left uncontested.

To implement proper, non-overlapping queues for VMQ in our setup, we use the following PowerShell commands:

List our network adapters and current settings:
Get-NetAdapterVMQ
Configure queues on the first interface:
Set-NetAdapterVMQ -Name "SLOT 2 Port 1" -BaseProcessorNumber 2 -MaxProcessors 8
Configure queues on the second interface (use its name as reported by Get-NetAdapterVMQ; "SLOT 3 Port 1" is just a stand-in here):
Set-NetAdapterVMQ -Name "SLOT 3 Port 1" -BaseProcessorNumber 18 -MaxProcessors 8
Verify the new configuration:
Get-NetAdapterVMQ
At this point, queues should begin to be assigned to virtual machines on this host, assuming they are connected to a Logical Switch in VMM and have VMQ enabled on the port profile. Check with the command Get-NetAdapterVMQQueue. If you see VMs in the right-hand column, you’re in business.
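
To narrow that output down to just the queues that actually landed on VMs, a filter along these lines works (a sketch; VmFriendlyName is the property behind that right-hand column):

# Show only the VMQ queues currently assigned to a virtual machine
Get-NetAdapterVMQQueue |
    Where-Object { $_.VmFriendlyName } |
    Format-Table Name, QueueID, VmFriendlyName -AutoSize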

In the next part, we’ll unpack the situation we and a few others in the global community are facing.

DNS, Server Replacements, and IPv6


Last week I encountered a briefly puzzling situation that’s worth noting as a tip when replacing a server on the network and needing to keep the same hostname. We’re a Microsoft shop, so this speaks to Microsoft DNS and VMs running Windows Server (2008 R2 and 2012 R2), but DNS being what it is, this is likely to apply to BIND, Linux, and the rest.

In this case, we were following a very simple server replacement process with these short steps, much as one would back in the 1990s.

Rename the old server (i.e. svrsyslog –> svrsyslogold)
Build the new server with the original name (svrsyslog)
Set the new static IP
The relevant difference between the ’90s and now, though, is IPv6 (among many other things). Thus, in DNS, we have the two records shown below, resembling those of a standard syslog server.

[Image: dns-ipv6-1 – the syslog server’s Host (A) and IPv6 Host (AAAA) records in DNS]

What doesn’t stand out in those records, however, is the IPv4 portion embedded in the IPv6 address. So when we changed the server name to “…old”, everything looked fine: the “Host (A)” record updated to the new name, and a corresponding “IPv6 Host (AAAA)” record followed right below.

The key here is that the IPv6 record below the updated “svrsyslog” IPv4 record may not match. In our case, the old IPv6 record never updated; only the IPv4 record did. This creates problems when connecting to the new server in a dual-stacked IPv4/IPv6 environment. IPv6-aware systems attempt to resolve the new “svrsyslog” with DNS and get the old IPv6 address (because the rebuilt server didn’t update the v6 record). IPv4 points to one place, while IPv6 points to another.

The solution is as simple as it is in IPv4; obscurity and unfamiliarity with IPv6 are all that make it elusive. Open the IPv6 record of the new/original server name (in this example, SVRSYSLOG) and edit the decimal portion of the IP address. Microsoft is kind enough to translate it from hex for us in the dialog box. Make that last chunk match, and you’re good to go.
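
If you’d rather script the fix than click through DNS Manager, the DnsServer PowerShell module (included with the DNS role on Windows Server 2012 and later) can do the same thing. This is just a sketch–the zone name and addresses below are placeholders, not our real ones:

# Look at the current AAAA record for the rebuilt server
Get-DnsServerResourceRecord -ZoneName "corp.example.com" -Name "svrsyslog" -RRType AAAA

# Remove the stale AAAA record left over from the old machine
Remove-DnsServerResourceRecord -ZoneName "corp.example.com" -Name "svrsyslog" -RRType AAAA -RecordData "2001:db8:91b5:10::10:14" -Force

# Add an AAAA record matching the new server's actual IPv6 address
Add-DnsServerResourceRecord -AAAA -ZoneName "corp.example.com" -Name "svrsyslog" -IPv6Address "2001:db8:91b5:10::10:15"

The -RecordData value needs to match the stale address exactly, which you can copy from the Get output above.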

VCE: Virtual Computing Environment


Are you familiar with VCE? If not, add it to your IT acronym dictionary; it’ll be something you hear more about in the future if virtualization, shared storage, converged networks, and/or server infrastructure are in your purview. VCE stands for “Virtual Computing Environment” and is a consortium of Cisco, EMC, VMware, and Intel (funny…if you take three of those initials, you get V-C-E). The goal, which they seem to be realizing, is to deliver a “datacenter in a box” (or multiple boxes, if your environment is large), and in a lot of ways, I think they have something going…

The highlights for quick consumption:

a VCE Vblock is an encapsulated, manufactured product (SAN, servers, network fully assembled at the VCE factory)
a Vblock solution is designed to be sized to your environment based on profiling of 200,000+ virtual environments
one of the top VCE marketed advantages is a single support contact and services center for all components (no more finger pointing)
because a Vblock follows “recipes” for performance needs and profiles, upgrades also come in (and require) fixed increments
Cisco UCS blade increments are in “packs” of four (4) blades; EMC disks come in five (5) RAID group “packs”
Vblock-0 is good for 300-800 VMs; Vblock-1 is for 800-3000 VMs; Vblock-2 supports 3000-6000 VMs
when crossing the VM threshold for a Vblock size, Vblocks can be aggregated
Those are the general facts. So what does all that mean for interested organizations? Is it a good fit for you? Here are some takeaways I drew from the points above as well as the rest of the briefing by our VCE, EMC, and Cisco reps…

Consider those upgrade increments. Is your environment large enough that you tend to purchase expansion resources in those increments (or larger)? Small businesses may find fixed upgrade packs of four blades or 20+ disks hard to swallow. Medium to large businesses, though, may see these “prescribed” increments to be perfect for predictable performance additions.

Wrap your arms around the concept of collective, centralized support. If your organization exalts caution and stability (i.e. you wait to roll out Windows OS versions until Service Pack 1 releases), VCE is your dream come true. Testing, certification, and product mastery are the jewels in VCE’s crown. If you live for the bleeding edge, jump on betas, and deploy patches hours (or minutes) from release, you may find the support matrix and slower certification cycle restrictive. VCE will still support your off-grid configuration, but understand that you’re being a free radical in an otherwise predictable platform.

EMC. Cisco. VMware. (oh, and Intel). Those are the pieces in the VCE puzzle…the only pieces. Are you good with that? They are solid companies and have come a long way in recent years (especially since EMC’s acquisition of VMware). If you like those names, great. If you’re a NetApp diehard or an EqualLogic loyalist (or AMD), or you just love to get your hands around your Dell, HP, or IBM servers, though, you’ll need to let go. Circle up some friends and prepare for the postpartum depression, because they aren’t part of VCE. Sure, they can live in the house next door, but this roof is only big enough for V-C-E.

Well, that’s all for now. Virtualization is here. How are you implementing it?

RDS Health Monitor in F5 for Windows Server 2016


When rolling out new Remote Desktop Services servers on Windows Server 2016 that are load balanced with F5 (Connection Broker servers specifically), I found that the F5 health monitor Send/Receive strings we had used for Windows Server 2012 R2 did not work in Windows Server 2016. After diving into some diagnostic logs, it looks like the response string has changed in Windows Server 2016.

Here are the Send/Receive strings for both the old 2012 R2 and the new 2016 that worked for me:

VMware & Link-State Tracking


If you’re running a VMware vSphere cluster on a two-tier (or greater) Cisco network, you might be in a situation like mine. You see, we built in redundancy when we planned our core and access switches, but the design had one significant flaw (see the simplified diagram to the right). Pretend all of those lines are redundant paths. Looks good so far, right? If CoreA goes down, ESX(i) can still send traffic up through AccessB to CoreB. The reverse applies if -B is down, and likewise for either of the Access- switches.

The catch comes for VMs on ESX(i) when one of the Core- switches goes down. ESX(i) balances VMs across the ports in the Virtual Machine port group(s). If a port goes down, it will smartly move the VM(s) to another port that is up. If an “upstream” hop like CoreB goes down, though, ESX(i) doesn’t know about that event, so it keeps its VMs in place, oblivious to the fact that the VMs on AccessB ports are as good as dead to the world. [Enter Link-State Tracking]

Link-state tracking (LST) is a feature in Cisco IOS 12.2(54)SG and later (and possibly a few minor revisions sooner) that enables Cisco switches and routers to manage the link state of ports based on the status of other ports. In our case, LST can be configured on AccessB to watch the uplink port(s) to CoreB and act if they go down. See the example config below:

switch# config t
switch(config)# link state track 1
switch(config)# int GigabitEthernet1/48
switch(config-if)# link state group 1 upstream
switch(config-if)# int GigabitEthernet1/1
switch(config-if)# link state group 1 downstream
switch(config-if)# int GigabitEthernet1/2
switch(config-if)# link state group 1 downstream

In this config, we have specified that the last port on our 48-port Cisco Catalyst switch (e.g. a 4900 series) is what links (“upstream”) to our core switch, CoreB. Then we add, as “downstream” ports, two other ports that our ESX(i) server is using for VMs. Once this configuration is in place, if GigabitEthernet1/48 goes down (unplugged, issues on CoreB, etc.), AccessB will put GigabitEthernet1/1 and 1/2 into an “ErrorDisabled” state (down), so our ESX(i) server will know that it needs to choose new paths for the VMs that are traversing AccessB.
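
Once it’s in place, a couple of show commands are handy for confirming the group membership and spotting err-disabled ports after a failure (a quick sketch; output formatting varies a bit by platform and IOS release):

switch# show link state group 1 detail
switch# show interfaces status err-disabled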

Of course, another solution to this topology would be to physically reconfigure it as a mesh with CoreA-to-AccessB and CoreB-to-AccessA links, but then you encounter spanning-tree and other factors at multiple levels. Even if that is your end game, link-state tracking is a great intermediate step in the meantime.

For more info on whether beaconing or link-state tracking is your best fit, check out VMware’s blog:
Beaconing Demystified: Using Beaconing to Detect Link Failures