Jaycee's Networking

August 16, 2009

6500 Multilayer Switches

Filed under: IOS — Jaycee @ 2:54 pm
*Multilayer switches are divided by chassis type.
SUP-32 = Supervisor with a 32-Gbps backplane bus
SUP-720 = Supervisor with a 720-Gbps fabric bus, with 1,440 Gbps on the horizon.
SVIs (Switched Virtual Interfaces)
GSR (Gigabit Switch Router)
GBIC (Gigabit Interface Converter)
SFP (Small Form-factor Pluggable)
dCEF (distributed Cisco Express Forwarding)
MSFC (Multilayer Switch Feature Card)
PFC (Policy Feature Card)
DFC (Distributed Feature Card)
SFM (Switch Fabric Module)
FWSM (Firewall Services Module) – security module
CSM (Content Switching Module) – load-balancing
NAM (Network Analysis Module) – monitoring
IDSM (Intrusion Detection System Module)
CMM (Communication Media Module) – VoIP connectivity
VMS (VPN/Security Management Solution)
MARS (Monitoring, Analysis, and Response System)

NEBS (Network Equipment Building System)

1. 6500e (enhanced) chassis Power:

a. 6000-watt AC power supply requires 2 power outlets per supply => 4 outlets per chassis

b. 8700-watt AC power supply requires 3 power outlets per supply => 6 outlets per chassis

c. The power supplies can be configured in a failover mode or a combined mode to allow more power for hungry modules.
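The failover vs. combined choice maps to the power redundancy-mode command; a minimal sketch, assuming a native-IOS 6500:

R1(config)# power redundancy-mode redundant
R1(config)# power redundancy-mode combined
R1# show power

redundant (the default) keeps one supply as a backup for the other; combined makes the capacity of both supplies available for power-hungry modules.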

2. Modules:

a. Most of the modules are hot-swappable, but some modules must be shut down before being removed.

b. Modules communicate with each other over the backplane, so they are faster than their standalone counterparts.

=> FWSM is capable of more than 4 Gbps of throughput, but the fastest standalone PIX is capable of only 1.7 Gbps.

3. Architecture:

a. 6000-series has 32 Gbps backplane bus

b. 6500-series adds a fabric bus (or crossbar switching bus) that allows backplane speeds to be boosted up to 720 Gbps.

c. SFM is a 16-port switch that connects each of the fabric-enabled modules via the fabric bus.

1) The SFM can only reside in certain slots.
2) Because the Sup-720 includes the SFM’s functionality, it must reside in the SFM’s slots.
3) In a 6509, Sup-720 modules must reside in slots 5 and 6.

d. Buses:

1) D bus (data bus):

1.1) 32 Gbps
1.2) D bus is shared like a traditional Ethernet network, in that all modules receive all frames that are placed on the bus.

2) R bus (result bus):

2.1) 4 Gbps
2.2) handles communication b/w the modules and the switching logic on the supervisors.

3) C bus (control bus), EOBC (Ethernet Out-of-Band Channel):

3.1) 100 Mbps half-duplex
3.2) is used for communication b/w the line cards and the network management processors on the supervisors.

4) Crossbar fabric bus:

4.1) “Fabric” is used to describe the mesh of connections.
4.2) Crossbar Fabric is a type of switching technology – each node is connected to every other node
4.3) Fully Interconnected Fabric – each port is directly connected to every other port

[Figure: switch fabric examples]

4.4) The crossbar fabric bus, in combination with a Sup-2 and an SFM, is capable of 256 Gbps and 30 Mpps (million packets per second).

4.5) With the addition of dCEF-enabled modules, this combination is capable of 210 Mpps.
4.6) With a Sup-720 module, crossbar fabric supports up to 720 Gbps.
4.7) When using dCEF interface module, a Sup-720 is capable of 400 Mpps.
4.8) SFM provides the actual switch fabric b/w all the fabric-enabled modules. SFM’s functionality is included in the Sup-720 already.

e. 6509 backplanes:

[Figure: 6509 backplanes]

1) Two backplane circuit boards separated by a vertical space.
2) The 6506 chassis doesn’t have slots 7, 8, and 9.
3) The 6513 chassis has the Sup-720 in slots 7 and 8.

e. Enhanced Chassis:

1) The 6500e is designed to allow more power to be drawn to the line cards, e.g. PoE line cards.
2) It uses high-speed fans to cool these power-hungry modules.
3) It provides a redesigned backplane that allows for a total of 80 Gbps of throughput per slot (a standard 6500 has 40 Gbps of throughput per slot).
4) The new architecture allows eight 10-Gbps ports per blade with no oversubscription.

f. Supervisors:

1) Chassis-based switches don’t have processors built into them. Instead, the processor is on a module: the supervisor.

2) MSFC:

2.1) Supervisors offer L2 processing capabilities; an add-on daughter card, the MSFC, supports L3 and higher functionality.
2.2) MSFC3 is part of the Sup-720.

3) PFC:

3.1) A daughter card that supports QoS; no direct configuration of the PFC is required.
3.2) PFC3 is part of the Sup720.

4) Sup-720:

4.1) Capable of 400 Mpps (million packets per second) and 720 Gbps
4.2) It’s designed for bandwidth-hungry installations
4.3) It includes the PFC3 and MSFC3, plus new accelerated CEF and dCEF capabilities
4.4) Fabric-only modules are capable of 40 Gbps throughput with a Sup-720.
4.5) Sup-720 has two CompactFlash Type II slots. The keywords for the slots on the active Sup-720 are disk0: and disk1:.
4.6) The CompactFlash Type II slots support CompactFlash Type II Flash PC cards sold by Cisco.
4.7) Sup-720 port 1 has an SFP connector w/o unique configuration options.
4.8) Sup-720 port 2 has both an RJ-45 connector and an SFP connector (the SFP is the default).

To configure port 2 with RJ-45:

R1(config)# int gi5/2
R1(config-if)# media-type rj45

To configure port 2 with SFP:

R1(config)# int gi5/2
R1(config-if)# media-type sfp


5) Forwarding Decisions for L3 Traffic:

The PFC3 or DFC3 makes the forwarding decision for L3 traffic:

5.1) PFC3 makes all forwarding decisions for each packet that enters the switch through a module without a DFC3.
5.2) DFC3 makes all forwarding decisions for each packet that enters the switch on a DFC3-enabled module in 3 situations:

5.2.1) If the egress port is on the same module as the ingress port, the DFC3 forwards the packet locally (the packet never leaves the module).
5.2.2) If the egress port is on a different fabric-enabled module, the DFC3 sends the packet to the egress module, which sends it out the egress port.
5.2.3) If the egress port is on a different nonfabric-enabled module, the DFC3 sends the packet to the Sup-720. The Sup-720 fabric interface transfers the packet to the 32-Gbps switching bus where it is received by the egress module and is sent out the egress port.

g. Modules:

1) Nonfabric-enabled module: a module that doesn’t support the crossbar fabric

=> It only has connectors on one side, for connection to the D bus.

2) Fabric-enabled module: a module that supports both the 32 Gbps D bus and the fabric bus

=> It has two connectors on the back of the blade: one for the D bus, and one for the crossbar fabric bus.

3) Fabric-only module: a module that uses only the fabric bus

=> It has a single connector on the fabric side, with no connector on the D bus side.

4) Sup-720 is operating in dCEF mode, which allows forwarding at up to 720 Gbps:

R1#sh mod
Mod Ports Card Type                              Model              Serial No.
--- ----- -------------------------------------- ------------------ -----------
 1    4  CEF720 4 port 10-Gigabit Ethernet      WS-X6704-10GE      SAD192803ZN
 2    4  CEF720 4 port 10-Gigabit Ethernet      WS-X6704-10GE      SAL190415QR
 3   48  CEF720 48 port 10/100/1000mb Ethernet  WS-X6748-GE-TX     SAD101205F1
 5    2  Supervisor Engine 720 (Active)         WS-SUP720-3B       SAL1201GSDZ

Mod MAC addresses                       Hw    Fw           Sw           Status
--- ---------------------------------- ------ ------------ ------------ -------
 1  0014.1c6b.d87d to 0014.1c6b.d87e   2.2   12.2(14r)S5  12.2(33)SXI  Ok
 2  0013.1a23.216a to 0013.1a23.216b   2.2   12.2(14r)S5  12.2(33)SXI  Ok
 3  0015.f91d.d50c to 0015.f91d.d5db   2.3   12.2(14r)S5  12.2(33)SXI  Ok
 5  0016.9de6.7ae1 to 0016.9de6.7ae3   5.7   8.5(2)       12.2(33)SXI  Ok

Mod  Sub-Module                  Model              Serial       Hw     Status
---- --------------------------- ------------------ ----------- ------- -------
 1  Distributed Forwarding Card WS-F6700-DFC3B     SAD0939021M  4.2    Ok
 2  Distributed Forwarding Card WS-F6700-DFC3B     SAD093803VY  4.2    Ok
 3  Centralized Forwarding Card WS-F6700-CFC       SAD100402PG  2.0    Ok
 5  Policy Feature Card 3       WS-F6K-PFC3B       SAL1208GK44  2.4    Ok
 5  MSFC3 Daughterboard         WS-SUP720          SAL1208GHM6  3.2    Ok

Mod  Online Diag Status
---- -------------------
 1  Pass
 2  Pass
 3  Pass
 5  Pass

R1#sh fabric switching-mode
Global switching mode is Compact
dCEF mode is not enforced for system to operate
Fabric module is not  required for system to operate
Modules are allowed to operate in bus mode
Truncated mode is allowed, due to presence of DFC, CEF720 module

Module Slot     Switching Mode
 1                     dCEF
 2                     dCEF
 3                 Crossbar
 5                     dCEF

5) Each of the fabric-only modules has two 20 Gbps connections to the crossbar fabric bus:

R1#sh fabric util
slot    channel    speed    Ingress %     Egress %
1          0        20G            0            3
1          1        20G            2            0
2          0        20G            0            3
2          1        20G            0            0
3          0        20G            0            0
3          1        20G            0            0
5          0        20G            0            0

6) Module Types:

Modules are generally divided into line cards and service modules: line cards offer connectivity, such as copper or fiber Ethernet, while service modules offer functionality, such as firewalling, load balancing, or monitoring.

6.1) Ethernet modules:

6.1.1) Connectivity options: RJ-45, GBIC, small-form-factor pluggable (SFP), and Amphenol connectors for direct connection to patch panels.

[Figure: Ethernet module connectivity options]
6.1.2) Port density: 4-port 10-Gbps XENPAK-based modules, 48-port 1-Gbps RJ-45 modules, and 96-port RJ-21 modules supporting 10/100 Mbps.

[Figure: Ethernet module port density range]
6.1.3) Capability: PoE and dCEF

6.2) FWSM:

6.2.1) It’s essentially a PIX; the difference is that all connections are internal to the switch, resulting in very high throughput.
6.2.2) The interfaces are SVIs, so the FWSM is not limited to physical connections.
6.2.3) FWSM is capable of over 4 Gbps of throughput, compared with 1.7 Gbps on the PIX 535.
6.2.4) FWSM is a separate device in the chassis. To login:

R1# session slot 8 proc 1
The default escape character is Ctrl-^, then x.
You can also type 'exit' at the remote prompt to end the session
Trying ... Open

User Access Verification

Type help or '?' for a list of available commands.
R1> en
Password: ********

6.2.5) If FWSM is running in single-context mode, you’ll be able to run all PIX commands. If FWSM is running in multiple-context mode, you’ll need to change to the proper context to make changes.

R1# sho context
Context Name          Class        Interfaces            URL
 admin                default                            disk:/admin.cfg
*EComm                default      vlan20,30             disk:/Ecomm.cfg
R1# changeto context EComm
R1/EComm# sho int
Interface Vlan20 "outside", is up, line protocol is up
        MAC address 0008.4cff.b403, MTU 1500
        IP address, subnet mask
                Received 90083941155 packets, 6909049206185 bytes
                Transmitted 3710031826 packets, 1371444635 bytes
                Dropped 156162887 packets
Interface Vlan30 "inside", is up, line protocol is up
        MAC address 0008.4cff.b403, MTU 1500
        Transmitted 2954364369 packets, 7023125736 bytes
        Dropped 14255735 packets

6.3) CSM:

6.3.1) CSM is capable of 4Gbps of throughput.
6.3.2) All of the CSM commands are included in the switch’s CLI; CSM commands are entered under the module configuration command:

R1(config)# mod csm 9
R1(config-module-csm)#

6.3.3) CSM is not fabric-enabled; it’s a 32-Gbps blade. Inserting it into a switch that is using the fabric backplane will cause the supervisor to revert to bus mode instead of faster modes such as dCEF.
=> A switch with a Sup-720, fabric-only Ethernet modules, and a CSM will not run at 720 Gbps because of the CSM’s limited backplane connections.

6.3.4) CSM blades will operate in a stateful failover design. A pair of CSMs can be synced with the command:

R1# hw-module csm 9 standby config-sync
R1 #
May  5 17:21:14: %CSM_SLB-6-REDUNDANCY_INFO: Module 9 FT info: Active: Bulk sync started
May  5 17:21:17  %CSM_SLB-4-REDUNDANCY_WARN: Module 9 FT warning: FT configuration might be out of sync.
May  5 17:21:24: %CSM_SLB-4-REDUNDANCY_WARN: Module 9 FT warning: FT configuration back in sync
May  5 17:21:26: %CSM_SLB-6-REDUNDANCY_INFO: Module 9 FT info: Active: Manual bulk sync completed

6.4) NAM:

6.4.1) NAM is a remote monitoring (RMON) probe and packet-capture device that is controlled through a web browser, with no extra software required.
6.4.2) NAM is able to capture more than one session at a time.
6.4.3) With the ability to capture from RSPAN sources, the NAM blade can be used to analyze traffic on any switch on the network.

6.5) IDSM: It’s a preconfigured Linux server that resides on a blade connected to the crossbar fabric bus.

6.6) FlexWAN module:

6.6.1) It allows the connection of WAN links, such as T1, DS3, and OC3.
6.6.2) Two types of FlexWAN modules: FlexWAN and Enhanced FlexWAN.
6.6.3) Differences b/w the two versions: CPU speed, memory capacity, and connection to the crossbar fabric bus.

6.7) CMM:

6.7.1) It provides telephony integration into 6500-series switches.
6.7.2) It’s a fabric-enabled module with 3 slots that accept different port adapters.
6.7.3) A 6500 chassis can be filled with CMMs and a supervisor to provide large port density for VoIP connectivity.

h.  Switch Fabric Functionality Switching Modes:

1) Compact mode:

The switch uses this mode for all traffic when only fabric-enabled modules are installed. In this mode, a compact version of the D Bus header is forwarded over the switch fabric channel, which provides the best possible performance.

2) Truncated mode:

The switch uses this mode for traffic between fabric-enabled modules when there are both fabric-enabled and nonfabric-enabled modules installed. In this mode, the switch sends a truncated version of the traffic (the first 64 bytes of the frame) over the switch fabric channel.

3) Bus mode:

The switch uses this mode for traffic between nonfabric-enabled modules and for traffic between a nonfabric-enabled module and a fabric-enabled module. In this mode, all traffic passes between the local bus and the supervisor engine bus.

4) To allow use of nonfabric-enabled modules or to allow fabric-enabled modules to use bus mode:

R1(config)# fabric switching-mode allow bus-mode

To prevent use of nonfabric-enabled modules or to prevent fabric-enabled modules from using bus mode:

R1(config)# no fabric switching-mode allow bus-mode

=> power will be removed from any nonfabric-enabled modules installed in the switch.

5) To allow fabric-enabled modules to use truncated mode:

R1(config)# fabric switching-mode allow truncated

To prevent fabric-enabled modules from using truncated mode:

R1(config)# no fabric switching-mode allow truncated

6) Displaying switch fabric functionality modes:

R1# sh fabric active
Active fabric card in slot 5
No backup fabric card in the system

R1# show fabric switching-mode module 5
Module Slot     Switching Mode
 5                     dCEF

R1# show fabric status 5
 slot  channel speed module   fabric   hotStandby  Standby  Standby
                     status   status      support  module   fabric
 5        0      20G     OK       OK   Y(not-hot)

R1# show fabric utilization 5
 slot    channel      speed    Ingress %     Egress %
 5          0           20G            0            0

R1# show fabric errors
Module errors:
 slot    channel     crc      hbeat       sync   DDR sync
 1          0          0          0          0          0
 1          1          0          0          0          0
 2          0          0          0          0          0
 2          1          0          0          0          0
 3          0          0          0          0          0
 3          1          0          0          0          0
 5          0          0          0          0          0

Fabric errors:
 slot    channel    sync     buffer    timeout
 1          0          0          0          0
 1          1          0          0          0
 2          0          0          0          0
 2          1          0          0          0
 3          0          0          0          0
 3          1          0          0          0
 5          0          0          0          0

July 1, 2009

Virtual Switching System (VSS)

Filed under: Information, IOS — Jaycee @ 7:22 pm

A. VSS (Virtual Switching System)

1. VSS pools multiple Catalyst 6500 switches into one virtual switch, increasing bandwidth capacity to 1.4 Tbps.

2. A VSS will allow 2x 6500 to operate as a single logical virtual switch called VSS1440.

3. VSS1440 = 2x 6500 with SUP720-10GE

4. In a VSS, the data plane and switch fabric (with a capacity of 720 Gbps per supervisor engine) are active at the same time on both chassis, combining for an active 1,400-Gbps switching capacity per VSS. However:

a. ONLY ONE of the virtual switch members has the active control plane.

b. Both chassis are kept in sync with the SSO (Stateful Switchover) mechanism along with NSF (Nonstop Forwarding) to provide nonstop communication.

5. Benefits:

a. single point of management, IP address, and routing instance for 6500 virtual switch

1) single configuration file and node to manage.

2) removes the need to configure redundant switches twice with identical policies.

3) Only one gateway IP address is required per VLAN, instead of the 3 IP addresses per VLAN used today.

4) removes the need for HSRP, VRRP, GLBP

*Cisco LMS (LAN Management System) 3.0 can be used to centrally manage a 6500 virtual switch as a single entity

b. MEC (Multichassis EtherChannel) is a Layer 2 multipathing technology that creates simplified loop-free topologies, eliminating the dependency on STP. (STP can still be activated to protect strictly against any user misconfiguration.)

c. Flexible deployment – the physical switches don’t have to be colocated: the 2 physical switches are connected with standard 10 Gigabit Ethernet interfaces, and as such can be located at any distance allowed by the 10 Gigabit Ethernet optics.

d. VSS eliminates L2/L3 protocol reconvergence if a virtual switch member fails

e. VSS scales system bandwidth to 1.4Tbps:

1) activates all L2 bandwidth across redundant 6500 switches with automatic load sharing.

2) maximizing server bandwidth throughput

f. eliminating unicast flooding caused by asymmetrical routing in traditional campus designs

g. optimizing the number of hops for intracampus traffic using multichassis EtherChannel enhancements

6. Target deployment areas for VSS:

a. Campus or DC core/distribution layer

b. DC access layer (server connectivity)

7. The two physical chassis don’t need to be identical in the type of modules installed, or even the type of chassis. For example, a WS-C6503-E chassis can be combined with a WS-C6513 chassis to form a VSS.

8. eFSU (enhanced fast software upgrade) is a mechanism to perform software upgrades while maintaining HA. It leverages the existing features of NSF and SSO and significantly reduces the downtime to less than 200ms.

9. Dual active state is detected rapidly by:

a. Enhancement to PAgP used in MEC with connecting Cisco switches

b. L3 BFD (Bidirectional Forwarding Detection) configuration on a directly connected link between virtual switch members, or through an L2 link through an access layer switch.

c. L2 Fast-Hello Dual-Active Detection configuration on a directly connected link between virtual switch members.
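A minimal sketch of the fast-hello method from item c; the domain number and interface are made up, and the syntax is an assumption based on VSS-capable 12.2(33)SX images:

R1(config)# switch virtual domain 100
R1(config-vs-domain)# dual-active detection fast-hello
R1(config-vs-domain)# exit
R1(config)# interface gigabitethernet 1/5/1
R1(config-if)# dual-active fast-hello

The fast-hello link must be a dedicated, directly connected link between the two virtual switch members.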

B. MEC (Multichassis EtherChannel)

1. MEC allows a connected node to terminate the EtherChannel across the 2 physical Catalyst 6500 switches that make up the VSS, creating a simplified loop-free L2 topology.

2. Using MEC in a VSS topology results in all links being active, and at the same time provides a highly available topology without the dependency on STP.

3. supports up to 512 MECs.
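Because the VSS is a single logical switch, an MEC is configured like an ordinary EtherChannel whose member ports happen to sit in both chassis. A sketch (the switch/slot/port numbers and group number are illustrative):

R1(config)# interface range gigabitethernet 1/1/1 , gigabitethernet 2/1/1
R1(config-if-range)# channel-group 10 mode active
R1(config-if-range)# exit
R1(config)# interface port-channel 10
R1(config-if)# switchport

In VSS interface naming, gi1/1/1 is switch 1, slot 1, port 1, and gi2/1/1 is the corresponding port in the second chassis.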

End-of-Row or Top-of-Rack for Server Networking in DC

Filed under: Information, IOS, Routing Design — Jaycee @ 2:05 pm

There are 3 primary approaches for server networking in DC environments:

1. End-of-Row:

a. When aggregating servers larger than 1U or servers with a mixed amount of interface types and densities, Catalyst 6500 Series switches are used to support one or more racks.

b. Advantage:

1) cost effective – delivering the highest level of switch and port utilization, especially when coupled with the rich set of network virtualization services available in the Catalyst 6500 Series. (The 6500 supports a wide variety of service modules, which simplifies pushing security and application networking services into the access layer.)

2) server-independent – provides maximum flexibility to support a broad range of servers.

3) performance advantage – 2 servers that exchange large volumes of information can be placed on the same line card, as opposed to card-to-card or switch-to-switch, which is slower.

c. Disadvantage:

1) cable/patch panel cost – physical volume of the cables and the waste of the valuable rack space

2. Top-of-Rack:

a. When stacking 40 1U servers in a rack, one or two 1U rack switches (like the Catalyst 4948-10G) are often used to aggregate all of these servers with Gigabit Ethernet and then run a couple of 10GbE links back to the aggregation switches. (In some cases, 2x 4948 switches are used for HA purposes.) (The Catalyst 4948 is optimized for the DC environment.)

b. Advantage:

1) simplified cable management

2) avoids rack space issues

3) avoids the cooling issues of end-of-row switching

4) fast port-to-port switching for servers within the rack

5) predictable oversubscription of the uplinks and smaller switching domains (one per rack) to aid in fault isolation and containment

c. Disadvantage:

1) Not enough servers to fill the switch in one rack – solution: have one top-of-rack switch also serve servers in an adjacent rack, to preserve the advantages of the top-of-rack switch while increasing port utilization.

3. Integrated:

a. When using blade servers, blade switches would be deployed. The Cisco Catalyst Blade Switch 3000 Series supports the virtualization, segmentation, and management tools needed to properly support this environment.

b. When server virtualization is in use, it can rapidly increase the complexity of the network (the number of MAC addresses, complexity of spanning tree, data pathways, etc.)

c. In some larger DCs, pass-thru modules or blade switches are used, aggregated into a series of rack switches.

*Most people like dual top-of-rack because servers have dual production uplinks. But they can’t really fit 40 1U servers in a rack due to power limitations or heating problems, so they end up with 3 racks using a top-of-rack switch in the middle rack, with cables going between cabinets. End-of-row is actually designed for this situation. But placing a 6500 in the middle rack would cause an overheating problem; the 6500 switches thus should be placed at the end of the row.

May 10, 2009


Filed under: Information, IOS — Jaycee @ 8:17 pm

A. EtherChannel:

1. A Cisco term for the technology that enables the bonding of up to 8 physical Ethernet links into a single logical link. However, the bandwidth is not truly the aggregate of the physical link speeds in all situations. (In an EtherChannel composed of 4 1-Gbps links, each conversation will still be limited to 1 Gbps by default.)

2. By default, the physical link used for each packet is determined by the packet’s destination MAC address. This algorithm is Cisco-proprietary. Packets with the same destination MAC address will always travel over the same physical link. => It ensures that packets sent to a single destination MAC address never arrive out of order.

3. If one workstation talks to one server over an EtherChannel, only one of the physical links will be used. All of the traffic destined for that server will traverse a single physical link in the EtherChannel. => A single user will only ever get 1 Gbps from the EtherChannel at a time.

4. Solaris uses “Trunk” for this technology term.

B. Load Balancing:

1. The hashing algorithm takes the destination MAC address and hashes that value to a number in the range of 0-7.

2. The only possible way to distribute traffic equally across all links in an EtherChannel is to design one with 8, 4, or 2 physical links.

3. The method the switch uses to determine which path to assign can be changed:

a. MAC address:

(1) source MAC
(2) destination MAC (default)
(3) source and destination MAC

b. IP address:

(1) source IP
(2) destination IP
(3) source and destination IP

c. Ports:

(1) source port
(2) destination port
(3) source and destination ports
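On IOS switches the method is selected chassis-wide with the port-channel load-balance command; a sketch (the exact keyword set varies by platform, so treat the keyword as an assumption):

R1(config)# port-channel load-balance src-mac
R1# show etherchannel load-balance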

4. In the case of one server (e.g. an email server) receiving the lion’s share of the traffic, destination MAC address load balancing doesn’t make sense. Given this scenario, balancing with the source MAC address makes more sense, as in the example below:

[Figure: EtherChannel load-balancing factors]

a. Load-balancing method is only applied to packets being transmitted over the EtherChannel. This is not a 2-way function.

b. When packets are being returned from the email server, the source MAC address is that of the email server itself.

c. Thus, the solution would be to have source MAC address load balancing on Switch A, and destination MAC address load balancing on Switch B.

5. For a switch like 6509, changing the load-balancing algorithm is done on a chassis-wide basis, so with all the devices connected to a single switch as below:

[Figure: single server to single NAS]

Problem: the bandwidth required between the server and the NAS device is in excess of 2 Gbps.

a. Solution 1: change the server and/or NAS device so that each link has its own MAC address => the packets will still be sourced from and destined for only one of those addresses.

b. Solution 2: splitting the link into 4x 1-Gbps links, each with its own IP network, and mounting different filesystems on each link will solve the problem.

c. Solution 3: get a faster physical link, such as 10-Gbps Ethernet.

C. EtherChannel Protocols:

[Figure: EtherChannel protocols and their modes]

1. LACP (Link Aggregation Control Protocol) is IEEE standard.

2. PAgP (Port Aggregation Protocol) is Cisco-proprietary.

3. NetApp NAS devices don’t negotiate with the other sides of the links, so the Cisco side of the EtherChannel should be set to “on”.

D. EtherChannel Configuration:

1. Create a port-channel virtual interface:

[Figure: EtherChannel configuration]
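A minimal sketch of creating one (the interface range and group number are made up):

R1(config)# interface range gi3/1 - 4
R1(config-if-range)# channel-group 1 mode desirable
R1(config-if-range)# exit

The channel-group command creates the port-channel 1 virtual interface automatically; channel-wide settings are then applied under interface port-channel 1.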

2. Shows the status of an EtherChannel:

sh etherchannel sum

3. Shows the number of bits used in the hash algorithm for each physical interface:

sh etherchannel 1 port-channel

January 8, 2009


Filed under: IOS, VLAN, VTP — Jaycee @ 2:42 am

1. Frames cannot leave the VLANs from which they originate.

2. “Router on a stick” runs a single trunk from the switch to the router.
=> All the VLANs will then pass over a single link.
==> The router is passing traffic b/w VLANs, so each frame will be seen twice on the same link.
===> Once to get to the router, and once to get back to the destination VLAN.
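A minimal router-on-a-stick sketch on the router side (the VLAN numbers and addresses are made up):

R1(config)# interface fastethernet 0/0.10
R1(config-subif)# encapsulation dot1q 10
R1(config-subif)# ip address 10.0.10.1 255.255.255.0
R1(config-subif)# interface fastethernet 0/0.20
R1(config-subif)# encapsulation dot1q 20
R1(config-subif)# ip address 10.0.20.1 255.255.255.0

Each subinterface then serves as the default gateway for its VLAN.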

3. With a layer-3 switch, every port can be dedicated to devices or trunks to other switches.

4. Configuring VLANs:

(1) Some IOS models, such as the 2950 and 3550, have a configurable VLAN database with its own configuration mode and commands.
=> The configuration for this database is completely separate from the configuration for the rest of the switch.
==> A write erase followed by a reload will not clear the VLAN database on these switches.

(2) Configuring through the VLAN database is a throwback to older models that offered no other way to manage VLANs.
=> All newer switches offer the option of configuring the VLANs through the normal IOS CLI.
==> Switches like the 6500, when running in native IOS mode, only support IOS commands for switch configuration.

(3) Cisco recommends that VTP be configured as a 1st step when configuring VLANs.
=> trunks will not negotiate w/o a VTP domain
==> VTP domain is not required to make VLANs function on a single switch
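A minimal VTP setup sketch (the domain name is made up):

2950-IOS(config)# vtp domain LAB
2950-IOS(config)# vtp mode server
2950-IOS(config)# end
2950-IOS# show vtp status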

5. CatOS

(1)  CatosSwitch# (enable) set vlan 10 name Lab-VLAN

(2) CatosSwitch# (enable) set vlan 10 6/1,6/3-4

(3) CatosSwitch# (enable) sho vlan


6. IOS Using VLAN Database

(1) If you have an IOS switch with active VLANs, but no reference in the running configuration, it’s possible:

a. they were configured in the VLAN database
b. they were learned via VTP

(2) 2950-IOS# vlan database

(3) 2950-IOS(vlan)# vlan 10 name Lab-VLAN

(4) 2950-IOS(vlan)# show

a. 2950-IOS(vlan)# show current
=> display the current database
b. 2950-IOS(vlan)# show changes

=> the differences b/w the current and proposed database

7. IOS Using Global Commands

(1) 2950-IOS# conf t
2950-IOS(config)# vlan 10
2950-IOS(config-vlan)# name Lab-VLAN

(2) 2950-IOS# sho vlan

(3) 2950-IOS(config)# int f0/1
2950-IOS(config-if)# switchport access vlan 10

(4) 2950-IOS(config)# interface range f0/1-2
2950-IOS(config-if-range)# switchport access vlan 10


January 5, 2009

Hubs and Switches

Filed under: Information, IOS — Jaycee @ 3:09 am

1. Cables:

(1) 10Base-5 = thicknet
N connectors
(2) 10Base-2 = thin-net, coax cable similar to that used for cable TV
BNC connectors
(3) UTP = unshielded twisted pair cables
RJ45 connectors
(4) 10Base-T — describes certain characteristics that a cable should meet; there is no specific distance limitation, but runs are usually kept within 100 m

2. Hubs:

(1) A hub connects Ethernet cables together so that signals are repeated to every other connected cable on the hub

(2) A hub is a repeater, but a repeater is not necessarily a hub

(3) A repeater repeats a signal, usually used to extend a connection to a remote host, or to connect a group of users who exceed the distance limitation of 10Base-T.

(4) A repeater may have only 2 connectors, a hub can have many more.

(5) 5-4-3 rule of Ethernet design – b/w any 2 nodes on an Ethernet network:

a. there can be only 5 segments
b. connected via 4 repeaters
c. only 3 of the segments can be populated

3. Collision domains

(1) Collisions are limited to network segments, where devices communicate using layer-2 MAC addresses.
(2) Such segments are called collision domains – the portions of the network where collisions can occur.

4. Broadcast domain:

(1) where a broadcast will be propagated.

(2) Broadcasts stay within a layer-3 network, which is usually bordered by a layer-3 device such as a router.

(3) Broadcasts are sent through switches (layer-2 devices), but stop at routers.

(4) Broadcasts and IP networks are not limited to VLANs.

(5) Broadcast Storms

a. Causes: an endless loop
b. Symptoms: every device is essentially unable to send any frames on the network due to constant network traffic, and all status lights on the hubs stay on constantly instead of blinking normally.
c. Resolution: the only way to resolve a broadcast storm is to break the loop.

5. Frames

(1) A TCP packet is encapsulated with layer-2 information to form a frame
(2) always refer to frames when speaking of hubs and switches

6. Switch terms:

(1) Switch — the general term used for anything that can switch
(2) Ethernet Switch — any device that forwards frames based on their layer-2 MAC addresses using Ethernet.

a. An Ethernet switch creates a collision domain on each port
b. A hub generally expands a collision domain through all ports

(3) Layer-3 switch — a switch with routing capabilities. VLANs can be configured as virtual interfaces on a layer-3 switch.

(4) Multilayer switch — same as a layer-3 switch, but also allows for control based on higher layers in packets.

(5) Switching — is the act of forwarding frames based on their destination MAC addresses.

a. In telecom, switching is the act of making a connection b/w 2 parties.
b. In routing, switching is the process of forwarding packets from one interface to another within a router.

7. CAM table (content-addressable memory) in Cat OS and MAC address table in IOS contain a map of what MAC addresses have been discovered on what ports.

8. When a station using IP needs to send a packet to another IP address on the same network, it must first determine the MAC address for the destination IP address:

a. IP sends out an ARP (Address Resolution Protocol) request packet. This packet is a broadcast, so it’s sent out all switch ports.

b. The ARP packet, when encapsulated into a frame, now contains the requesting station’s MAC address, so the switch knows what port to assign for the source.

c. When the destination station replies that it owns the requested IP address, the switch knows which port the destination MAC address is located on (the reply frame will contain the replying station’s MAC address).
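Steps a through c can be traced with a small Python sketch of the ARP exchange. The MAC and IP addresses are invented placeholders, and the dicts are a readable stand-in for the binary ARP format defined in RFC 826.

```python
def arp_request(sender_mac, sender_ip, target_ip):
    # Step a: the request rides in a broadcast frame, so every switch
    # port (and therefore every station) sees it.
    return {"op": "request", "frame_dst": "ff:ff:ff:ff:ff:ff",
            "sender_mac": sender_mac, "sender_ip": sender_ip,
            "target_ip": target_ip}

def arp_reply(request, my_mac):
    # Step c: the owner of target_ip replies unicast, straight back to
    # the requester's MAC, so the switch forwards it out one port only.
    return {"op": "reply", "frame_dst": request["sender_mac"],
            "sender_mac": my_mac, "sender_ip": request["target_ip"],
            "target_ip": request["sender_ip"]}

req = arp_request("00:11:22:33:44:55", "10.0.0.1", "10.0.0.2")
rep = arp_reply(req, "66:77:88:99:aa:bb")
assert req["frame_dst"] == "ff:ff:ff:ff:ff:ff"   # step a: broadcast
assert rep["frame_dst"] == "00:11:22:33:44:55"   # step c: unicast reply
```

Note how the switch learns from both directions: the broadcast request teaches it the requester's port (step b), and the unicast reply teaches it the responder's port.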

9. To display information about the MAC address table:

show mac-address-table


*In Cat OS, use show cam dynamic

10. Cisco Switch Types:

(1) Fixed-configuration switches

a. are smaller, usually 1 rack unit (RU) in size
b. typically contain nothing but Ethernet ports
c. include the Cisco 2950, 3550, and 3750
d. The 3750 is capable of being stacked; the limitation of stacking is that the backplane of the stack is limited to 32 Gbps (gigabits per second)
Benefits: price, size, weight, and power (capable of operating on normal household power); some support a power distribution unit, which can provide some power redundancy at additional cost.

(2) Modular chassis-based switches

a. can support 720 Gbps on their backplanes
b. more expensive

11. Modular Chassis-based Switches

a. Advantages:

(1) Expandability — it takes 7x 3750s to match the port count of a 6500 chassis, and the speed of a stack is limited to 32 Gbps while the 6500 provides 720 Gbps

(2) Flexibility — 6500 chassis will accept modules that provide services outside the range of a normal switch:

a. Firewall Services Modules (FWSMs)
b. Intrusion Detection System Modules (IDSMs)
c. Content Switching Modules (CSMs)
d. Network Analysis Modules (NAMs)
e. WAN modules (FlexWAN)

(3) Redundancy

a. Support multiple power supplies
b. Support dual supervisors

(4) Speed

a. A 6500 chassis employing Supervisor-720 (Sup-720) processors supports up to 720 Gbps of throughput on the backplane.
b. The fastest fixed-configuration switch — Cisco 4948 — supports only 48Gbps.

i. The 4948 switch is designed to be placed at the top of a rack in order to support the devices in the rack.
ii. It cannot be stacked; therefore, it’s limited to 48 ports.

b. Disadvantages: heavy, take up a lot of room, and require a lot of power.

*Cisco’s two primary chassis-based switches: 4500 series and 6500 series.

12. Planning a Chassis-Based Switch Installation

(1) Rack space

a. 6513 — 19 RU
b. NEBS version of 6509 — 21 RU
c. 4506 — 10 RU
d. 7-foot telecom rack is 40 RU

(2) Power

a. add up the power requirements for all the modules
b. To provide redundancy, each of the power supplies in the pair should be able to provide all the power necessary to run the entire switch, including all modules.
c. For DC power supplies, make sure you specify A and B feeds.
e.g., if you need 40 amps of DC power, you’d request 40 amps DC, A and B feeds. This means you’ll get two 40-amp power circuits for failover purposes.
d. Most collocation facilities supply positive-ground DC power.
e. For AC power supplies, you’ll need to specify the voltage, amperage, and socket needed for each feed.
e.g., the power cord for a power supply may come with a NEMA L6-20P plug; it will require a NEMA L6-20R receptacle.
f. The P and R on the ends of the part numbers describe whether the part is a plug or a receptacle. NEMA L6-20 is a twist-lock 250-volt AC 20-amp connector.
g. Always tighten the clamp to avoid the cable popping out of the receptacle when stressed.
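The power-budget arithmetic in a and b above amounts to a one-line check, sketched here in Python. The module wattages are invented placeholders, not real Cisco figures; always use the values from the actual module datasheets.

```python
# Hypothetical module power draws in watts (made-up numbers for illustration).
modules = {
    "Sup-720": 350,
    "48-port 10/100/1000 line card": 400,
    "FWSM": 200,
}

def redundant_supply_ok(supply_watts: int, module_watts: dict) -> bool:
    """With supplies in failover (redundant) mode, a SINGLE supply must be
    able to carry the entire chassis load by itself."""
    return supply_watts >= sum(module_watts.values())

assert redundant_supply_ok(6000, modules)        # 6000 W supply covers 950 W load
assert not redundant_supply_ok(900, modules)     # 900 W supply falls short
```

In combined (non-redundant) mode the two supplies pool their output, which buys headroom for power-hungry modules at the cost of losing failover.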

(3) Cooling

a. On many chassis switches, cooling is done from side to side.
b. NEBS-compliant 6509 switch moves air vertically, and the modules sit vertically in the chassis.
c. Always make sure you leave ample space between chassis switches when installing them.

(4) Installing and removing modules

Any time you’re working with a chassis or modules, you should use a static strap.

(5) Routing cables

When routing cables to modules, remember that you may need to remove the modules in the future.
