Thursday 16 October 2014

Data Center Design: Example Overview Part-II:

Virtual Device Context (VDC) Design

Continuing from Part-I: in the main DC, the Core, DC Aggregation, and DCI modules would be deployed on the same physical devices – the Nexus 7010 Core switches. The separation between these modules would be achieved using the Virtual Device Context (VDC) feature of the Nexus 7000. The main DC core switches were defined to have the following VDCs:

·         Core VDC

·         DC-AGG VDC

·         OTV VDC

In the main office building, the Core, DC Aggregation, and DCI modules would likewise be deployed on the same physical devices – the Nexus 7009 Core switches. The main office building core switches will therefore have the following VDCs:

·         Core VDC

·         DC-AGG VDC

·         OTV VDC

 

VDC Overview

Starting with Supervisor 2 and Cisco NX-OS 6.1, the VDC feature allows up to four (4) separate virtual switches, plus one (1) Admin VDC, to be configured within a single Nexus 7000 chassis. The Admin VDC is optional; creating it is not required for device operation.

Architecturally, the VDCs run on top of a single NX-OS kernel and OS infrastructure. Each VDC represents a separate instance of the control plane protocols, as illustrated below:

 

VDC Independence

A VDC runs as a separate logical entity within the physical device, maintains its own unique set of running software processes, has its own configuration, and can be managed by a separate administrator.

 

VDCs virtualize the control plane as well, which includes all those software functions that are processed by the CPU on the active supervisor module. The control plane supports the software processes for the services on the physical device, such as the routing information base (RIB) and the routing protocols. When a VDC is created, the Cisco NX-OS software takes several of the control plane processes and replicates them for the VDC. This replication of processes allows VDC administrators to use virtual routing and forwarding instance (VRF) names and VLAN IDs independent of those used in other VDCs. Each VDC administrator essentially interacts with a separate set of processes, VRFs, and VLANs.
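As a simple illustration of this independence (VLAN IDs and names here are hypothetical), the same VLAN ID can be defined in two VDCs for entirely different purposes without any conflict:

!
! In the DC-AGG VDC:
vlan 100
  name App-Servers
!
! In the OTV VDC, VLAN 100 can be reused independently:
vlan 100
  name Transit-VLAN
!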

 

All the Layer 2 and Layer 3 protocol services run within a VDC. Each protocol service started within a VDC runs independently of the protocol services in other VDCs. The infrastructure layer protects the protocol services within a VDC so that a fault or other problem in a service in one VDC does not impact other VDCs. The Cisco NX-OS software creates these virtualized services only when a VDC is created.

 

Each VDC has its own instance of each service. These virtualized services are unaware of other VDCs and only work on resources assigned to that VDC. Only a user with the network-admin role can control the resources available to these virtualized services.

 

Although CPU resources (on the Supervisor module) are not truly independent between VDCs, the pre-emptive multi-tasking nature of the OS ensures that no single process can monopolize the CPU, including processes across VDCs. Even when CPU usage is driven up, all processes retain fair access to CPU clock cycles.

 

Memory is not controlled on a per-VDC basis; instead, it is controlled at the per-process level. The services that run on the platform have limits on their maximum accessible memory, enforced by the kernel. Therefore, any single process can access only the amount allocated to an instance of that process. This prevents an errant process or a memory leak from consuming a significant amount of overall system memory.

 

There are additional resources that are applied system-wide (allocated to all VDCs as a whole), which offer a lesser degree of operational independence between the VDCs. These global resources are discussed in the following section.

 

Default VDC

The physical device always has one VDC, the default VDC (VDC 1). When a user first logs into a new Cisco NX-OS device, the session starts in the default VDC. A user must be in the default VDC to create, change attributes of, or delete a non-default VDC. As mentioned earlier, the Cisco NX-OS software can support up to four VDCs (plus one Admin VDC), including the default VDC, which means a user can create up to three additional VDCs. A user with network-admin role privileges can manage the physical device and all VDCs from the default VDC.
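From the default VDC, the configured VDCs and their interface assignments can be verified with the following commands (the exact output columns vary by NX-OS release):

!
Switch# show vdc
! Lists each VDC with its ID, name, and state
Switch# show vdc membership
! Lists the interfaces allocated to each VDC
!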

VDC login Process

 

VDC Resources

With respect to allocation to VDCs, Nexus switch resources are divided into three main categories: Global, Dedicated, and Shared. VDC1 is the default VDC and controls the creation, deletion, and resource allocation of all other VDCs.

 

Global Resources are assigned to and controlled by the default VDC:

• Boot image configuration

• Software feature licensing

• Ethanalyzer session

• Control Plane Policing (CoPP) configuration

• Quality of Service (QoS) queuing configuration

• Allocation of resources to other VDCs

• Console port

• Connectivity Management Processor (CMP)

• Network Time Protocol (NTP) Server configuration

• Port channel hashing algorithm configuration

 

Every VDC has its own dedicated resources, which are assigned solely to that VDC:

• Physical interfaces

• Layer 3 and layer 2 protocol stacks

• Per-VDC management configuration

 

Shared resources are available to all VDCs:

• Out of band management interface (interface mgmt0). Each VDC has its own IP address on this interface.

 

NX-OS allows allocation of the following resources to be controlled by the configuration of minimum and maximum resource guarantees:

• VLAN

• SPAN sessions

• VRFs

• Port channels

• Route memory
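The current limits and usage for these controllable resources can be inspected with the following commands (a sketch; output format varies by release):

!
Switch# show vdc resource
! Summarizes min/max limits and current usage for VLANs, VRFs,
! port channels, SPAN sessions, and route memory across all VDCs
Switch# show vdc resource detail
! Shows the same information broken down per VDC
!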

 

 

Route Memory

While there is no exact mapping of how many routes can be stored per megabyte of memory allocated, 16 megabytes of route memory is enough to store approximately 11,000 routes with 16 next hops each. The default memory allocation for IPv4 routes is 58 megabytes.

 

The following command is helpful for understanding the memory consumption of unicast routing tables. Using the show routing memory estimate command on the Nexus 7000, it can be seen that a VDC using the default resource template will support approximately 12,000 routes (assuming 8 next hops for each route):

 

!

Switch# sh routing memory estimate routes 12000 next-hops 8

 

Shared memory estimates:

Current max 8 MB; 6526 routes with 16 nhs

in-use 1 MB; 97 routes with 1 nhs (average)

Configured max 8 MB; 6526 routes with 16 nhs

Estimate 8 MB; 12000 routes

!

It should also be noted that the route memory resource allocation does not permit different values for the maximum and minimum memory limits.

To allow a VDC to communicate with other devices in the network, the administrator must explicitly allocate interfaces to the VDC. The exception is the default VDC, which controls all interfaces that are not otherwise allocated.

 

At the time of writing, for the relevant NX-OS software release, the F2e modules needed to be in their own dedicated VDC, with no other module types in the same VDC. This restriction was removed in the next NX-OS release (6.2).
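The module types permitted in a VDC are controlled with the limit-resource module-type command; a minimal sketch for restricting a VDC to F2e modules follows (the VDC name is from this design, and the exact module-type keywords available depend on the NX-OS release):

!
Nexus01DC(config)# vdc DC-AGG
Nexus01DC(config-vdc)# limit-resource module-type f2e
! Only F2e line card ports can now be allocated to this VDC
!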

 

Interface allocation should be based on the type of hardware in use. For example, an M1-Series 32-port 10G line card does not allow ports belonging to the same port group to be placed in different VDCs. A dedicated line-card-based approach for VDCs offers multiple benefits, with the only trade-off being the consumption of extra line cards:

• Efficient usage of line card hardware resources such as the MAC table and FIB TCAM. The overall chassis-level MAC table/FIB TCAM limit can be scaled to three times the 128K limit.

• A more secure architecture, by limiting the dependency between the VDC environments.

Note: There is no port group restriction on M2 Series modules. Any port in M2 Series modules can be placed in any VDC.

 

Communication Between VDCs

The Cisco NX-OS software does not support direct communication between VDCs on a single physical device. There must be a physical connection from a port allocated to one VDC to a port allocated to the other VDC for the VDCs to communicate. Each VDC has its own VRFs for communicating with other VDCs.

 

Customer Data Center building Network VDC Design Summary

The following points summarize the VDC design for the main Data Center building Network:

·         Core Nexus 7010 switches will be configured with three VDCs:

o (Default) VDC for Core module (Core VDC)

o VDC for DC Aggregation module (DC-AGG VDC)

o VDC for DCI module (OTV VDC)

·         Each VDC will be allocated dedicated physical interfaces:

o The M1L linecards will be allocated to the Core VDC

o The M2 linecard will have most of its ports in the Core VDC

o The F2e line cards (2 on each switch) will be allocated to the DC-AGG VDC

o Three ports from the M2 line cards will be allocated to the OTV VDC

·         Core VDC in each Nexus switch will have two point-to-point Tengigabit Ethernet Layer-3 links, one to the local DC-AGG VDC and the second to the DC-AGG VDC in the second Core switch.

·         Core VDC in each Nexus switch will have one point-to-point Tengigabit Ethernet Layer-3 link to the local OTV VDC.

·         Core VDC in each Nexus switch will have two point-to-point Tengigabit Ethernet Layer-3 links, one to each Distribution switch. As the Distribution switches will be deployed using VSS the Core VDC will only see the same OSPF neighbor across both links.

·         The links between the Core VDC and the WAN Edge routers will be configured as point-to-point Layer-3 links.

·         The inter DC links will be terminated on the Core VDC, one link on each switch. These will be configured as point-to-point Layer-3 links with a /29 subnet.

·         The DC-AGG VDC in each Core switch will be dual homed to the two Core VDCs via point-to-point Layer-3 links.

·         The DC-AGG VDC in each Core switch will have separate vPC interconnects with both the local OTV VDC and the OTV VDC on the second switch. These will be 802.1q trunks carrying the VLANs requiring Layer-2 extension.

·         Core firewalls will also be connected to the DC-AGG VDCs via vPC. This will be an 802.1q trunk carrying all DC VLANs.

·         F5 load balancers will be connected to the DC-AGG VDCs via vPC. This port-channel will be made part of the respective VLAN as an access switchport.

·         Two ports will be interconnecting the OTV VDC with the Aggregation layer of the Data Center as follows:

o One optical Tengigabit Ethernet link will be connected to the local DC-AGG VDC

o A second optical Tengigabit Ethernet link will be connected to the DC-AGG VDC in the other Core switch

·         The VDCs will use the default VDC resource allocation template.

 

In order to provide an acceptable level of resiliency in the DC-AGG VDC it was decided that one F2e module will be added to each Core switch. This means a total of two F2e modules will be deployed and these will be obtained by removing one F2e module from each of the Aggregation switches in Row 7 of the Data Center.

 

Customer main office building Network VDC Design Summary:

The following points summarize the proposed VDC design for main office building Network:

·         Core Nexus 7009 switches will be configured with three VDCs:

o (Default) VDC for Core module (Core VDC)

o VDC for DC Aggregation module (DC-AGG VDC)

o VDC for DCI module (OTV VDC)

·         Each VDC will be allocated dedicated physical interfaces:

o The M1L linecards will be allocated to the Core VDC

o The M2 linecard will have most of its ports in the Core VDC

o The F2e line cards (1 on each switch) will be allocated to the DC-AGG VDC

o Three ports from the M2 line cards will be allocated to the OTV VDC

·         Core VDC in each Nexus switch will have two point-to-point Tengigabit Ethernet Layer-3  links, one to the local DC-AGG VDC and the second to the DC-AGG VDC in the second Core switch.

·         Core VDC in each Nexus switch will have one point-to-point Tengigabit Ethernet Layer-3 link to the local OTV VDC.

·          Core VDC in each Nexus switch will have two point-to-point Tengigabit Ethernet Layer-3 links, one to each Distribution switch.

·         The inter DC links will be terminated on the Core VDC, one link on each switch. These will be configured as point-to-point Layer-3 links with a /29 subnet.

·         The DC-AGG VDC in each Core switch will be dual homed to the two Core VDCs via point-to-point Layer-3 links.

·         The DC-AGG VDC in each Core switch will have separate vPC interconnects with both the local OTV VDC and the OTV VDC on the second switch. These will be 802.1q trunks carrying the VLANs requiring Layer-2 extension.

·         Two ports will be interconnecting the OTV VDC with the Aggregation layer of the Data Center as follows:

o One optical Tengigabit Ethernet link will be connected to the local DC-AGG VDC

o A second optical Tengigabit Ethernet link will be connected to the DC-AGG VDC in the other Core switch

·         The VDCs will use the default VDC resource allocation template.

 

Using The Default VDC

Generally it is recommended to dedicate the default VDC as an 'Admin' VDC and not run any data-path traffic through it. This is because the default VDC is used to create new VDCs, allocate resources to them, and manage the configuration of those resources that can only be managed from the default VDC (e.g. CoPP configuration). Access to the default VDC means the ability to create or modify the other VDCs.
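With Supervisor 2 modules and NX-OS 6.1 or later, the default VDC can alternatively be converted into a dedicated Admin VDC; a sketch follows (this conversion is disruptive and its migration behavior depends on the release, so it should be verified against the release notes before use):

!
Switch# configure terminal
Switch(config)# system admin-vdc
! Converts the default VDC to an Admin VDC; non-global
! configuration must first be migrated to other VDCs
!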

 

If an environment does use the default VDC for data traffic, then management should ensure that operational staff accessing the Core VDC for normal operations are allocated the 'vdc-admin' role and not the 'network-admin' role.

 

VDC Naming Convention:

• (Default) VDC for Core network (Core VDC)

• VDC for DC Aggregation network (DC-AGG VDC)

• VDC for OTV (OTV VDC)

 

The name of the Core VDC is the hostname of the switch, so no explicit configuration is required for naming the Core VDC.

 

The DC Aggregation VDC and the OTV VDC would be created using the following names: DC-AGG and OTV respectively.

 

By default, the hostname of a non-default VDC is the hostname of the switch with the VDC name appended. This means that, by default, the hostname displayed for the OTV VDC will be a combination of the Core VDC hostname with 'OTV' appended. This behavior can be modified so that only "DC-AGG" or "OTV" is displayed when logging into the VDC, using the following configuration template:

 

!

configure terminal

no vdc combined-hostname

!

 

VDC Interface Allocation

By default, all interfaces on a Nexus 7000 are part of the default VDC. For this specific project, the default VDC was planned to be used for the Core module, so there was no requirement to explicitly allocate interfaces to the Core module VDC.

 

The M1L and M2 line cards would have most of their ports allocated to the Core VDC. Three ports from the M2 line cards would be allocated to the OTV VDC. The F2e line cards (2 on each Core switch in main DC, 1 on each Core switch in main office building) would be allocated to the DC-AGG VDC.

 

For the DC-AGG and OTV VDCs, since they would be new VDCs (VDC 2 and VDC 3 respectively), interfaces would need to be allocated explicitly. The Core Nexus 7010/7009 switches would be installed with M1, M2, and F2e line cards. The F2e line cards need to be installed in a VDC of their own. Therefore, if all line cards are installed and the device is booted for the first time, by default the N7K boots with the M line cards in the default VDC and the F2e line cards disabled.

 

Shared Management Interface

The Nexus 7000 is equipped with a dedicated management interface port on each Supervisor engine. Only the port on the active Supervisor is available. By default, this management interface resides within a special 'management' VRF, which is completely separate from the default VRF and any other VRFs that may be created. It is not possible to move the management Ethernet port to any other VRF, nor to assign other system ports to the management VRF. Because of the dedicated management VRF, the management Ethernet port cannot be used for data traffic, nor can it be trunked.

One feature of the Supervisor management interface is that it exists within all VDCs (rather than within a single VDC only), and it carries a unique IP address in every VDC. In this way, a distinct management IP address can be provided for the administration of each VDC.

 

In this case, the management interface on the Nexus 7010/7009 Core switches was to be used for their management. Each VDC would have its own IP Address for the management interface.
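A sketch of the per-VDC management addressing follows (all addresses and the displayed combined hostname are hypothetical):

!
Nexus01DC# switchto vdc DC-AGG
Nexus01DC-DC-AGG# configure terminal
Nexus01DC-DC-AGG(config)# interface mgmt0
Nexus01DC-DC-AGG(config-if)# ip address 192.0.2.11/24
Nexus01DC-DC-AGG(config-if)# exit
Nexus01DC-DC-AGG(config)# vrf context management
Nexus01DC-DC-AGG(config-vrf)# ip route 0.0.0.0/0 192.0.2.1
!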

 

VDC License Requirements: Creating non-default VDCs requires a separate license (the Advanced Services Package on the Nexus 7000).

 

VDC Configuration

VDC creation has the following prerequisites:

• Log in to the default VDC with a username that has the network-admin user role.

• Make sure the appropriate license is installed.

• Assign a name for the VDC.

• Allocate resources available on the physical device to the VDC.

• Configure an IPv4 or IPv6 address to use for connectivity to the VDC.

 

Creating a VDC Resource Template

This step was not required for the VDC configuration in the customer's networks as the default resource template would be used. However, if required in the future, the following steps were recommended to be used to define a new resource template:

 

!

vdc resource template <Template-Name>

limit-resource vlan minimum 16 maximum 4094

limit-resource monitor-session minimum 0 maximum 2

limit-resource vrf minimum 16 maximum 500

limit-resource port-channel minimum 16 maximum 256

limit-resource u4route-mem minimum 16 maximum 128

limit-resource u6route-mem minimum 4 maximum 4

limit-resource m4route-mem minimum 8 maximum 8

limit-resource m6route-mem minimum 2 maximum 2

!

Creating a VDC

The example below shows how to create the DC-AGG and OTV VDCs on the Nexus 7010 switches. The same template can be used to create the VDCs on the 7009 switches:

!

Nexus01DC# configure terminal

Nexus01DC(config)# vdc DC-AGG

Nexus01DC(config)# vdc OTV

!

 

Allocating Interfaces to a VDC

The following example shows how to allocate resources to a VDC:

 

!

Nexus01DC# configure terminal

Nexus01DC(config)# vdc <VDC-Name>

Nexus01DC(config-vdc)# allocate interface ethernet <slot/port>

! Allocate interfaces as needed

!

 

Applying Resource Template to a VDC

The following example shows how to apply a resource template to a VDC:

!

Nexus01DC(config)# vdc <VDC-Name>

Nexus01DC(config-vdc)# template <Template-Name>

!

Initializing a New VDC

A newly created VDC is much like a new physical device. To access a VDC, a user must first initialize it. The initialization process includes setting the VDC admin user account password and optionally running the setup script. The setup script helps perform basic configuration tasks such as creating more user accounts and configuring the management interface.

 

The VDC admin user account in the non-default VDC is separate from the network admin user account in the default VDC. The VDC admin user account has its own password and user role.

!

Nexus01DC# switchto vdc DC-AGG

!

 

VDC resource limits can be changed by applying a new VDC resource template anytime after the VDC creation and initialization. Changes to the limits take effect immediately except for the IPv4 and IPv6 route memory limits, which take effect after the next VDC reset, physical device reload, or physical device stateful switchover.
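For example, to apply a new template and make a changed route memory limit take effect, the VDC can be reloaded from within (the template name is a placeholder; note that reloading a VDC is disruptive to that VDC):

!
Nexus01DC(config)# vdc DC-AGG
Nexus01DC(config-vdc)# template <New-Template-Name>
!
Nexus01DC# switchto vdc DC-AGG
Nexus01DC-DC-AGG# reload vdc
! Reloads only the DC-AGG VDC; other VDCs are unaffected
!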

 

VDC configuration should be done prior to applying any other configuration to the Nexus 7000 switches.

 

Saving VDC Configuration

From within a VDC, a user with the vdc-admin or network-admin role can save the VDC configuration to the startup configuration. A user can also save the configuration of all VDCs to startup from the default VDC:

!

switchto vdc OTV

copy running-config startup-config

switchback

copy running-config startup-config vdc-all

!

 

To be continued…Next: Fabric Path in the DC and Cloud Infrastructure considerations

 
