Jaycee's Networking

June 12, 2010

Enable/Establish BGP

Filed under: BGP, IOS — Tags: , — Jaycee @ 7:54 pm

I. BGP Overview:

1. Path Vector EGP – used to exchange prefix information b/w ASs.

2. Uses TCP port 179 for transport:

a. Requires an underlying IGP
b. A network cannot route on BGP alone

II. Enabling BGP:

1. Enable the global process

router bgp [AS]

*Only one BGP process per router

2. Establish BGP Peerings

neighbor [address] remote-as [AS]

*Requires IP reachability to the neighbor address
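*Example – putting the two steps together (the AS numbers and neighbor address are hypothetical):

router bgp 100
 neighbor 1.2.3.4 remote-as 200

Verify the session with "show ip bgp summary".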

III. Establishing BGP Peerings:

1. Two types of peers:

iBGP – members of the same AS
eBGP – members of different ASs

2. TCP 179 for transport

Normal TCP operations apply

3. Listen for address 1.2.3.4 to start a TCP session on port 179:

neighbor 1.2.3.4 remote-as 100

(The router effectively does a “show ip route 1.2.3.4” lookup and runs the route recursion process down to an outgoing interface; that interface is where the BGP packets are sourced from.)

4. The TCP server must agree on where the session is coming from, so the peers need to determine which one is the TCP client and which one is the TCP server.

bgp-notes-01

5. TCP Client has the higher BGP router-id

6. If the server doesn’t expect the session, it will refuse it.

7. Packets are sourced from the outgoing interface found in the routing table.

BGP update source

*If there are multiple links b/w BGP peers, you can use “neighbor [address] update-source [interface]” to change the interface (typically a loopback) that sources the BGP session.
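*Example (hypothetical iBGP peer and interface) – source the session from Loopback0:

router bgp 100
 neighbor 1.2.3.4 remote-as 100
 neighbor 1.2.3.4 update-source Loopback0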

(15:00)

IV.


November 6, 2009

Decision of BGP Path Selection on IOS and JUNOS

Filed under: BGP, IOS — Tags: , , — Jaycee @ 11:02 am

BGP Path Selection Process Decision Steps (IOS vs. JUNOS)

1. Next-Hop accessible/resolvable (mandatory attribute):

By default, the NEXT-HOP is changed for eBGP and is unchanged for iBGP. The NEXT-HOP identifies the eBGP speaker in the adjoining AS, and the IGP will not carry this route, thereby leading to an unreachable next hop.

2. Synchronization:

IOS: The BGP process expects the IGP to have a copy of each route before that route can be advertised by BGP. This is why disabling synchronization is the 1st step in IOS configuration.

JUNOS: None.

3. Weight (influences OUTBOUND traffic, but is applied on inbound):

IOS: This is a Cisco-proprietary parameter given to a route on a particular router and used only within that router. The weight is never given to other routers.

*Default weight = 0, except for locally sourced routes, which get a default weight of 32,768. The maximum weight is 65,535.
*Weight value => the higher the better.

JUNOS: None.
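*IOS example (hypothetical neighbor and value) – assign a weight to everything learned from one peer:

router bgp 100
 neighbor 1.2.3.4 weight 200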

4. Local Preference (influences OUTBOUND traffic, but is applied on inbound) (discretionary attribute):

Local preferences are shared among iBGP routers, but they are NOT shared with external BGP routers.

*Default LOCAL_PREF = 100.
*LOCAL_PREF value => the higher the better.
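*IOS example (hypothetical neighbor and value) – raise LOCAL_PREF on routes learned from the preferred exit:

route-map SET-LOCAL-PREF permit 10
 set local-preference 200
!
router bgp 100
 neighbor 1.2.3.4 route-map SET-LOCAL-PREF in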

5. Self-Originated:

BGP prefers routes that originate inside its own AS; that is, it chooses the route that originated with BGP on this router.

6. AS Path (influences INBOUND traffic, but is applied on outbound) (mandatory attribute):

By default, BGP discards any route advertisement that contains its local AS number in the AS path, to prevent routing loops. For routes that originate outside of the AS, BGP will prefer the one with the shortest AS path.

7. Origin (mandatory attribute):

ORIGIN has 3 values: 0 = IGP, 1 = EGP, 2 = INCOMPLETE.

BGP selects IGP routes in preference to EGP routes, and EGP routes in preference to INCOMPLETE routes. An INCOMPLETE route is one that is injected into BGP via redistribution.

*Origin value => the lower the better.

8. MED (influences INBOUND traffic, but is applied on outbound) (nontransitive attribute):

Use MED to tell your ISPs which of several entrances to your network they should use. Use MED values ONLY IF you are multihomed to a single provider. MED values are propagated ONLY to adjacent ASes, so routers that are further downstream don’t see them at all.

MED is used by the local AS to influence the routing decisions in an adjacent AS for traffic that is inbound to the local AS. BGP selects the route with the lowest MED value. MED actually leaves your AS and tells your neighboring routers which link you want them to use.

*Default MED = 0.
*MED value => the lower the better.

MED is compared ONLY if both routes are received from the same AS, or if the command “bgp always-compare-med” has been enabled.

With “bgp always-compare-med” enabled, BGP will compare MED values even if they come from different ASes, although to reach this step the AS_PATHs must have the same length. You should use this command throughout the AS or you risk creating routing loops.
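*IOS example (hypothetical neighbor addresses and metric values) – advertise a lower MED over the preferred link to a single upstream AS:

route-map MED-PRIMARY permit 10
 set metric 50
!
route-map MED-BACKUP permit 10
 set metric 200
!
router bgp 100
 neighbor 10.0.0.1 route-map MED-PRIMARY out
 neighbor 10.0.1.1 route-map MED-BACKUP out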

9. External:

BGP prefers paths learned using eBGP over paths learned using iBGP, to eliminate loops.

IOS: The eBGP AD of 20 is lower than the IGPs’ ADs because the route should go out of the AS instead of staying inside it. The iBGP AD of 200 is higher than the IGPs’ ADs because, if it’s an internal route, the router should use the internal IGP.

JUNOS: The default BGP protocol preference is 170.

10. IGP Cost:

BGP prefers paths with the lowest IGP metric to the next hop.

IOS:
a. Make sure synchronization is disabled.
b. Choose the route with the lowest IGP administrative distance.

JUNOS:
a. Examine route tables inet.0 and inet.3 for the BGP next hop, and then install the physical next hop for the route with the better preference.
b. For preference ties, install the physical next hop found in inet.3.
c. For preference ties within the same route table, install the physical next hop where the greater number of equal-cost paths exists.

11. eBGP Peering / Ages of the routes:

BGP will look at the ages of the routes and use the oldest route to a particular destination, for stability.

12. Router ID:

A router’s ID is the IP address assigned to the loopback interface, or the highest IP address on an active interface at boot time.

*Router ID => the lower the better.

October 15, 2009

IOS ADs vs JUNOS Preferences

Filed under: IOS, Junos — Tags: , — Jaycee @ 8:17 pm
Source                         IOS AD   JUNOS preference   Purpose
Local                             0            0           Local IP address of the interface
Connected interface               0            0           Subnet corresponding to the directly connected interface
System routes                     -            4
Static                            1            5           Static routes
RSVP                              -            7           Routes learned from the Resource Reservation Protocol used in MPLS
LDF                               -            8
LDP                               -            9           Routes learned from the Label Distribution Protocol used in MPLS
OSPF internal route               -           10           OSPF internal routes, such as interfaces that are running OSPF
IS-IS Level 1 internal route      -           15           IS-IS Level 1 internal routes, such as interfaces running IS-IS
IS-IS Level 2 internal route      -           18           IS-IS Level 2 internal routes, such as interfaces running IS-IS
EBGP                             20            -
Redirects                         -           30           Routes from ICMP redirects
Kernel                            -           40           Routes learned via the route socket from the kernel
SNMP                              -           50           Routes installed by an NMS through SNMP
Router discovery                  -           55           Routes installed by ICMP Router Discovery
Internal EIGRP                   90            -           Cisco proprietary routing protocol
RIP                               -          100           Routes from the Routing Information Protocol (IPv4)
RIPng                             -          100           Routes from the Routing Information Protocol (IPv6)
IGRP                            100            -           Interior Gateway Routing Protocol
PIM                               -          105           Routes from Protocol Independent Multicast
DVMRP                             -          110           Routes from the Distance Vector Multicast Routing Protocol
OSPF                            110            -
IS-IS                           115            -
RIP                             120            -           Routes from the Routing Information Protocol
Aggregate                         -          130           Aggregate and generated routes
EGP                             140            -           Routes from the Exterior Gateway Protocol
OSPF AS external routes           -          150           Routes that have been redistributed into OSPF
ODR                             160            -           On Demand Routing
IS-IS Level 1 external route      -          160           Routes that have been redistributed into IS-IS Level 1
IS-IS Level 2 external route      -          165           Routes that have been redistributed into IS-IS Level 2
BGP                               -          170           Routes from BGP
MSDP                              -          175
External EIGRP                  170            -
iBGP                            200            -
Unknown                         255          255
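*Both OSes let you override these defaults per protocol. A rough sketch (the value 115 is an arbitrary example):

IOS:
R1(config)# router ospf 1
R1(config-router)# distance 115

JUNOS:
[edit]
user@R2# set protocols ospf preference 115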

August 21, 2009

Router Design Concept

Filed under: IOS, Routing Design — Tags: , — Jaycee @ 3:58 pm
GRES (graceful Routing Engine switchover) – In a router that contains a master and a backup Routing Engine, allows the backup Routing Engine to assume mastership automatically, with no disruption of packet forwarding.
Graceful switchover — JUNOS software feature that allows a change from the primary device, such as a Routing Engine, to the backup device without interruption of packet forwarding.

(lecture by Tim Chung)

1. Basic Router and Routing:

basic-routing-01

a. R1 and R2 run a routing protocol (e.g., RIP or OSPF), so the computer can reach the destination server 10.0.0.1.

b. R2 is a single-CPU router, which is like a Linux server doing a routing job.

c. A single CPU needs to process every packet that goes through the router. If the computer sends too many data packets through R2, the CPU of R2 will be occupied by those data packets.

d. When the CPU is too busy (99%~100% usage) processing data packets, other important control packets, such as routing protocol and SNMP packets, are not processed in time, which causes routing adjacencies to drop. None of the data packets would then reach the destination.

e. Thus, the Cisco 2800 series can only handle T1 rates since it has a single RISC processor, and the Juniper J-series likewise has a single IBM CPU. Neither can handle high traffic; both are software-based routers.

2. Modern routers have more than one CPU to handle data packet forwarding and control information processing separately.

router-basic-02

a. Take a Juniper router as an example: the router has two planes, the RE and the PFE. All data packets go through the PFE and out.

b. The PFE passes all important control packets to the RE.

c. In this way, the router doesn’t drop adjacencies and lose routes, and data packets can still be sent to the destination.

3. For Redundancy:

router-basic-03

a. Uses a fabric between the RE, PFEs, and PICs for high-traffic transmission.

b. Uses a full-mesh crossbar (x-bar) between PFEs.

4. For more redundancy with GRES:

router-basic-04

August 16, 2009

6500 Multilayer Switches

Filed under: IOS — Tags: , , — Jaycee @ 2:54 pm
*Multilayer switches are divided by chassis type.
SUP-32 = Supervisor 32Gbps backplane bus
SUP-720 = Supervisor 720Gbps fabric bus with 1,440Gbps on the horizon.
SVIs (Switched Virtual Interfaces)
GSR (Gigabit Switch Router)
GBIC (Gigabit Interface Converter)
SFP (Small Form-factor Pluggable)
dCEF (distributed Cisco Express Forwarding)
MSFC (Multilayer Switch Feature Card)
PFC (Policy Feature Card)
DFC (Distributed Feature Card)
SFM (Switch Fabric Module)
FWSM (Firewall Services Module) – security module
CSM (Content Switching Module) – load-balancing
NAM (Network Analysis Module) – monitoring
IDSM (Intrusion Detection System Module)
CMM (Communication Media Module) – VoIP connectivity
VMS (VPN/Security Management Solution)
MARS (Monitoring, Analysis, and Response System)

NEBS (Network Equipment Building System)


1. 6500e (enhanced) chassis Power:

a. 6000-watt AC power supply requires 2 power outlets per supply => 4 outlets per chassis

b. 8700-watt AC power supply requires 3 power outlets per supply => 6 outlets per chassis

c. The power supplies can be configured in a failover mode or a combined mode to allow more power for hungry modules.

2. Modules:

a. Most of the modules are hot-swappable, but some modules must be shut down before being removed (see the sketch after this list).

b. Modules communicate with each other over the backplane, so they are faster than their standalone counterparts.

=> FWSM is capable of more than 4Gbps throughput, but the fastest standalone PIX is capable of only 1.5 Gbps.
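*Related to point (a): a rough sketch of removing power from a slot before pulling a module that cannot be hot-swapped (slot 8 is a made-up example; check the module’s documentation first):

R1(config)# no power enable module 8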

3. Architecture:

a. 6000-series has 32 Gbps backplane bus

b. The 6500 series has a fabric bus (or crossbar switching bus) that allows backplane speeds to be boosted up to 720 Gbps.

c. SFM is a 16-port switch that connects each of the fabric-enabled modules via the fabric bus.

1) SFM could only reside in certain slots.
2) Because the Sup-720 includes the SFM’s functionality, it must reside in the SFM’s slots.
3) For 6509, Sup-720 modules must reside in slots 5 and 6.

d. Buses:

1) D bus (data bus):

1.1) 32 Gbps
1.2) D bus is shared like a traditional Ethernet network, in that all modules receive all frames that are placed on the bus.

2) R bus (result bus):

2.1) 4 Gbps
2.2) handles communication b/w the modules and the switching logic on the supervisors.

3) C bus (control bus), EOBC (Ethernet Out-of-Band Channel):

3.1) 100 Mbps half-duplex
3.2) is used for communication b/w the line cards and the network management processors on the supervisors.

4) Crossbar fabric bus:

4.1) “Fabric” is used to describe the mesh of connections.
4.2) Crossbar Fabric is a type of switching technology – each node is connected to every other node
4.3) Fully Interconnected Fabric – each port is directly connected to every other port

switch fabric examples

4.4) The crossbar fabric bus, in combination with a Sup-2 and an SFM, is capable of 256 Gbps and 30 Mpps (million packets per second).

4.5) With the addition of dCEF, this combination is capable of 210 Mpps.
4.6) With a Sup-720 module, the crossbar fabric supports up to 720 Gbps.
4.7) When using dCEF interface modules, a Sup-720 is capable of 400 Mpps.
4.8) The SFM provides the actual switch fabric b/w all the fabric-enabled modules. The SFM’s functionality is already included in the Sup-720.

e. 6509 backplanes:

6509 backplanes

1) Two backplane circuit boards separated by a vertical space.
2) The 6506 chassis doesn’t have slots 7, 8, and 9.
3) The 6513 chassis has the Sup-720 in slots 7 and 8.

e. Enhanced Chassis:

1) The 6500e is designed to allow more power to be drawn to the line cards, e.g., PoE line cards.
2) It uses high-speed fans to cool these power-hungry modules.
3) It provides a redesigned backplane – allows for a total of 80 Gbps of throughput per slot. (The standard 6500 has 40 Gbps of throughput per slot.)
4) The new architecture will allow eight 10 Gbps ports per blade with no oversubscription.

f. Supervisors:

1) Chassis-based switches don’t have processors built into them. Instead, the processor is on a module: Supervisor.

2) MSFC:

2.1) Supervisors offer L2 processing capabilities; an add-on daughter card, the MSFC, supports L3 and higher functionality.
2.2) MSFC3 is part of the Sup-720.

3) PFC:

3.1) A daughter card that supports QoS; no direct configuration of the PFC is required.
3.2) PFC3 is part of the Sup720.

4) Sup-720:

4.1) Capable of 400 Mpps (million packets per second) and 720 Gbps
4.2) It’s designed for bandwidth-hungry installations
4.3) It includes the PFC3 and MSFC3, plus new accelerated CEF and dCEF capabilities
4.4) Fabric-only modules are capable of 40 Gbps throughput with a Sup-720.
4.5) Sup-720 has two CompactFlash Type II slots. The keywords for the slots on the active Sup-720 are disk0: and disk1:.
4.6) The CompactFlash Type II slots support CompactFlash Type II Flash PC cards sold by Cisco.
4.7) Sup-720 port 1 has a SFP connector w/o unique configuration options.
4.8) Sup-720 port 2 has a RJ-45 connector and an SFP connector (default).

To configure port 2 with RJ-45:

R1(config)# interface gi5/2
R1(config-if)# media-type rj45

To configure port 2 with SFP:

R1(config)# interface gi5/2
R1(config-if)# media-type sfp

4.9)

5) Forwarding Decisions for L3 Traffic:

The PFC3 or DFC3 makes the forwarding decision for L3 traffic:

5.1) PFC3 makes all forwarding decisions for each packet that enters the switch through a module without a DFC3.
5.2) DFC3 makes all forwarding decisions for each packet that enters the switch on a DFC3-enabled module in 3 situations:

5.2.1) If the egress port is on the same module as the ingress port, the DFC3 forwards the packet locally (the packet never leaves the module).
5.2.2) If the egress port is on a different fabric-enabled module, the DFC3 sends the packet to the egress module, which sends it out the egress port.
5.2.3) If the egress port is on a different nonfabric-enabled module, the DFC3 sends the packet to the Sup-720. The Sup-720 fabric interface transfers the packet to the 32-Gbps switching bus where it is received by the egress module and is sent out the egress port.

g. Modules:

1) Nonfabric-enabled module: a module that doesn’t support the crossbar fabric

=> It only has connectors on one side, for connection to the D bus.

2) Fabric-enabled module: A module that supports the 32 Gbps D bus and fabric bus

=> It has two connectors on the back of the blade: one for the D bus, and one for the crossbar fabric bus.

3) Fabric-only module: a module that uses only the fabric bus

=> It has a single connector on the fabric side, with no connector on the D bus side.

4) Sup-720 is operating in dCEF mode, which allows forwarding at up to 720 Gbps:

R1#sh mod
Mod Ports Card Type                              Model              Serial No.
--- ----- -------------------------------------- ------------------ -----------
 1    4  CEF720 4 port 10-Gigabit Ethernet      WS-X6704-10GE      SAD192803ZN
 2    4  CEF720 4 port 10-Gigabit Ethernet      WS-X6704-10GE      SAL190415QR
 3   48  CEF720 48 port 10/100/1000mb Ethernet  WS-X6748-GE-TX     SAD101205F1
 5    2  Supervisor Engine 720 (Active)         WS-SUP720-3B       SAL1201GSDZ

Mod MAC addresses                       Hw    Fw           Sw           Status
--- ---------------------------------- ------ ------------ ------------ -------
 1  0014.1c6b.d87d to 0014.1c6b.d87e   2.2   12.2(14r)S5  12.2(33)SXI  Ok
 2  0013.1a23.216a to 0013.1a23.216b   2.2   12.2(14r)S5  12.2(33)SXI  Ok
 3  0015.f91d.d50c to 0015.f91d.d5db   2.3   12.2(14r)S5  12.2(33)SXI  Ok
 5  0016.9de6.7ae1 to 0016.9de6.7ae3   5.7   8.5(2)       12.2(33)SXI  Ok

Mod  Sub-Module                  Model              Serial       Hw     Status
---- --------------------------- ------------------ ----------- ------- -------
 1  Distributed Forwarding Card WS-F6700-DFC3B     SAD0939021M  4.2    Ok
 2  Distributed Forwarding Card WS-F6700-DFC3B     SAD093803VY  4.2    Ok
 3  Centralized Forwarding Card WS-F6700-CFC       SAD100402PG  2.0    Ok
 5  Policy Feature Card 3       WS-F6K-PFC3B       SAL1208GK44  2.4    Ok
 5  MSFC3 Daughterboard         WS-SUP720          SAL1208GHM6  3.2    Ok

Mod  Online Diag Status
---- -------------------
 1  Pass
 2  Pass
 3  Pass
 5  Pass

R1#sh fabric switching-mode
Global switching mode is Compact
dCEF mode is not enforced for system to operate
Fabric module is not  required for system to operate
Modules are allowed to operate in bus mode
Truncated mode is allowed, due to presence of DFC, CEF720 module

Module Slot     Switching Mode
 1                     dCEF
 2                     dCEF
 3                 Crossbar
 5                     dCEF

5) Each of the fabric-only modules has two 20 Gbps connections to the crossbar fabric bus:

R1#sh fabric util
slot    channel    speed    Ingress %     Egress %
1          0        20G            0            3
1          1        20G            2            0
2          0        20G            0            3
2          1        20G            0            0
3          0        20G            0            0
3          1        20G            0            0
5          0        20G            0            0

6) Module Types:

Modules are generally divided into line cards and service modules: line cards offer connectivity, such as copper or fiber Ethernet, while service modules offer functionality.

6.1) Ethernet modules:

6.1.1) Connectivity options: RJ-45, GBIC, small-form-factor GBIC, and Amphenol connectors for direct connection to patch panels.

ethernet module connectivity options
6.1.2) Port density: 4-port 10 Gbps XENPAK-based modules, 48-port 1Gbps RJ-45 modules, 96-port RJ-21 connector modules support 10/100 Mbps.

ethernet module port density range
6.1.3) Capability: PoE and dCEF

6.2) FWSM:

6.2.1) It’s essentially a PIX; the difference is that all connections are internal to the switch, resulting in very high throughput.
6.2.2) The interfaces are SVIs, so the FWSM is not limited to physical connections.
6.2.3) The FWSM is capable of over 4 Gbps of throughput, compared with 1.7 Gbps on the PIX 535.
6.2.4) The FWSM is a separate device in the chassis. To log in:

R1# session slot 8 proc 1
The default escape character is Ctrl-^, then x.
You can also type 'exit' at the remote prompt to end the session
Trying 127.0.0.81 ... Open

User Access Verification

Password:
Type help or '?' for a list of available commands.
R1> en
Password: ********

6.2.5) If FWSM is running in single-context mode, you’ll be able to run all PIX commands. If FWSM is running in multiple-context mode, you’ll need to change to the proper context to make changes.

R1# sho context
Context Name          Class        Interfaces            URL
 admin                default                            disk:/admin.cfg
*EComm                default      vlan20,30             disk:/Ecomm.cfg
R1# changeto context EComm
R1/EComm# sho int
Interface Vlan20 "outside", is up, line protocol is up
        MAC address 0008.4cff.b403, MTU 1500
        IP address 10.1.1.1, subnet mask 255.255.255.0
                Received 90083941155 packets, 6909049206185 bytes
                Transmitted 3710031826 packets, 1371444635 bytes
                Dropped 156162887 packets
Interface Vlan30 "inside", is up, line protocol is up
        MAC address 0008.4cff.b403, MTU 1500
        Transmitted 2954364369 packets, 7023125736 bytes
        Dropped 14255735 packets

6.3) CSM:

6.3.1) CSM is capable of 4Gbps of throughput.
6.3.2) All of the CSM commands are included in the switch’s CLI. Commands for the CSM are entered under the following command:

R1 (config)# mod csm 9
R1 (config-module-csm)#

6.3.3) The CSM is not fabric-enabled; it’s a 32 Gbps bus-connected blade. Inserting it into a switch that is using the fabric backplane will cause the supervisor to revert to bus mode instead of faster modes such as dCEF.
=> A switch with a Sup-720, fabric-only Ethernet modules, and a CSM will not run at 720 Gbps because of the CSM’s limited backplane connections.

6.3.4) CSM blades will operate in a stateful failover design. A pair of CSMs can be synced with the command:

R1# hw-module csm 9 standby config-sync
R1 #
May  5 17:21:14: %CSM_SLB-6-REDUNDANCY_INFO: Module 9 FT info: Active: Bulk sync started
May  5 17:21:17  %CSM_SLB-4-REDUNDANCY_WARN: Module 9 FT warning: FT configuration might be out of sync.
May  5 17:21:24: %CSM_SLB-4-REDUNDANCY_WARN: Module 9 FT warning: FT configuration back in sync
May  5 17:21:26: %CSM_SLB-6-REDUNDANCY_INFO: Module 9 FT info: Active: Manual bulk sync completed

6.4) NAM:

6.4.1) The NAM is a remote monitoring (RMON) probe and packet-capture device that is controlled through a web browser, with no extra software required.
6.4.2) NAM is able to capture more than one session at a time.
6.4.3) With the ability to capture from RSPAN sources, the NAM blade can be used to analyze traffic on any switch on the network.

6.5) IDSM: a preconfigured Linux server that resides on a blade connected to the crossbar fabric bus.

6.6) FlexWAN module:

6.6.1) It allows the connection of WAN links, such as T1, DS3, OC3.
6.6.2) Two types of FlexWAN modules: FlexWAN and Enhanced FlexWAN.
6.6.3) Difference b/w the two versions: CPU speed, memory capacity, and connection to the crossbar fabric bus.

6.7) CMM:

6.7.1) It provides telephony integration into 6500-series switches.
6.7.2) It’s a fabric-enabled module with 3 slots that accept different port adapters.
6.7.3) A 6500 chassis can be filled with CMMs and a supervisor to provide large port density for VoIP connectivity.

h.  Switch Fabric Functionality Switching Modes:

1) Compact mode:

The switch uses this mode for all traffic when only fabric-enabled modules are installed. In this mode, a compact version of the D Bus header is forwarded over the switch fabric channel, which provides the best possible performance.

2) Truncated mode:

The switch uses this mode for traffic between fabric-enabled modules when there are both fabric-enabled and nonfabric-enabled modules installed. In this mode, the switch sends a truncated version of the traffic (the first 64 bytes of the frame) over the switch fabric channel.

3) Bus mode:

The switch uses this mode for traffic between nonfabric-enabled modules and for traffic between a nonfabric-enabled module and a fabric-enabled module. In this mode, all traffic passes between the local bus and the supervisor engine bus.

4) To allow use of nonfabric-enabled modules or to allow fabric-enabled modules to use bus mode:

R1(config)# fabric switching-mode allow bus-mode

To prevent use of nonfabric-enabled modules or to prevent fabric-enabled modules from using bus mode:

R1(config)# no fabric switching-mode allow bus-mode

=> power will be removed from any nonfabric-enabled modules installed in the switch.

5) To allow fabric-enabled modules to use truncated mode:

R1(config)# fabric switching-mode allow truncated

To prevent fabric-enabled modules from using truncated mode:

R1(config)# no fabric switching-mode allow truncated

6) Displaying switch fabric functionality modes:

R1# sh fabric active
Active fabric card in slot 5
No backup fabric card in the system

R1# show fabric switching-mode module 5
Module Slot     Switching Mode
 5                     dCEF

R1# show fabric status 5
 slot  channel speed module   fabric   hotStandby  Standby  Standby
                     status   status      support  module   fabric
 5        0      20G     OK       OK   Y(not-hot)

R1# show fabric utilization 5
 slot    channel      speed    Ingress %     Egress %
 5          0           20G            0            0

R1# show fabric errors
Module errors:
 slot    channel     crc      hbeat       sync   DDR sync
 1          0          0          0          0          0
 1          1          0          0          0          0
 2          0          0          0          0          0
 2          1          0          0          0          0
 3          0          0          0          0          0
 3          1          0          0          0          0
 5          0          0          0          0          0

Fabric errors:
 slot    channel    sync     buffer    timeout
 1          0          0          0          0
 1          1          0          0          0
 2          0          0          0          0
 2          1          0          0          0
 3          0          0          0          0
 3          1          0          0          0
 5          0          0          0          0

August 14, 2009

Switching Algorithms/Paths

Filed under: Information, IOS — Tags: — Jaycee @ 4:17 am

A. Overview:

1. Switching – the process of moving packets from one interface to another within a router.

2. Routing – the process of choosing paths and forwarding packets to destinations outside of the physical router.

3. Switching Algorithm – the choice of switching algorithm can significantly increase or decrease a router’s performance.

4. RIB (Routing Information Base) –

1) is built by L3 routing protocols
2) is essentially the routing table
3) The decisions about how to move packets from one interface to another are based on the RIB.

5. Steps of the process of switching a packet:

1) Determine whether the packet’s destination is reachable

2) Determine the next hop to the destination, and the interface to which the packet should be switched

3) Rewrite the MAC header on the packet to reach its destination

6. Requirements of router switching:

a. Interfaces have access to input/output memory. When a packet comes into an interface, the router must decide to which interface the packet should be sent. Once the decision is made, the packet’s MAC header is rewritten, and the packet is sent on its way.

b. Packets must get from one interface to another.

c. How the router decides which interface to switch the packet to – is based on the switching path in use.

d. Routing table contains all the necessary information to determine the correct interface, but process switching must be used to retrieve data from the routing table.

B. Process Switching:

1. It’s the original method of determining which interface to forward a packet to.

2. The processor calls a process that accesses the RIB, and the packet waits for the next scheduled execution of that process to run.

3. Steps for Process Switching:

process switching

1) The interface processor detects a packet and moves the packet to the input/output memory.

2) Interface processor generates a receive interrupt.

a. The CPU (central processor) determines the packet type (IP), and copies it to processor memory if necessary.

b. Then the processor places the packet on the appropriate process’s input queue and releases the interrupt.

c. The process for IP packets is titled ip_input.

3) When the scheduler next runs, it notices the presence of a packet in the input queue for the ip_input process, then schedules the process for execution.

4) When the ip_input process runs, it looks up the next hop and output interface information in the RIB. Then it consults the ARP cache to retrieve the L2 address for the next hop.

5) The process rewrites the packet’s MAC header with the appropriate addresses, then places the packet on the output queue of the appropriate interface.

6) The packet is moved from the output queue of the outbound interface to the transmit queue of the outbound interface.

=> then Outbound QoS

7) The output interface processor notices the packet in its queue, and transfers the packet to the network media.

4. Slowness happens at:

slowness on process switching

a. The processor waits for the next scheduled execution of the ip_input process.

b. ip_input process references the RIB when it runs

1) ip_input process is at the same priority level as other processes on the router, such as routing protocol and HTTP web server interface.

2) Packets sourced from or destined to the router itself are always process-switched, such as SNMP traps from the router and telnet packets destined for the router.

C. Interrupt Context Switching:

1. The processor interrupts the current process to switch the packet.

interrupt context switching

2. It’s faster than process switching since ip_input process is rarely called. Interrupt Context Switching usually bypasses the RIB, and works with parallel tables, which are built more efficiently.

3. Steps for Interrupt Context Switching:

1) The interface processor detects a packet and moves the packet into input/output memory.

2) The interface processor generates a receive interrupt. During this time, the CPU determines the packet type (IP) and begins to switch the packet.

3) The processor searches the route cache for destination reachability, the output interface, the next hop, and the MAC conversion. The processor then uses this information to rewrite the packet’s MAC header.

4) The packet is copied to either the transmit or the output queue of the outbound interface. The receive interrupt is ended, and the originally running process continues.

5) The output interface processor notices the packet in its queue, and transfers the packet to the network media.

4. The RIB is bypassed entirely in this model. The necessary information is retrieved from the route cache. Each switching path has its own means of determining, storing, and retrieving this information. There are 3 different methods:

a. Fast Switching:

fast-switching binary tree

1)  uses binary tree format for recording/retrieving information in the route cache.

2) The information of the next hop and MAC address changes is stored within each node.

3) It’s fast compared with searching the RIB since the tree is very deterministic.

4) Drawbacks:

4.1) The data for each address is stored within the nodes, so the size of the data is not static. Each node may be a different size, and the table can be inefficient.

4.2) The route cache is updated only when packets are process-switched, i.e., when the 1st packet to a destination is switched. To keep the data in the route cache current, 1/20th of the entire route cache is aged out (discarded) every minute, and the table must be rebuilt using process switching.

4.3) ARP table is not directly related to the contents of the route cache. Process switching must be used when ARP changes.

b. Optimum switching:

optimum-switching multiway tree

1) uses a multiway tree instead of a binary tree for recording/retrieving information in the route cache

2) This pattern continues for 4 levels – one for each octet.

3) The information of each route (prefix) or IP address is stored within the final node.

4) The size of the table can be variable since each node may or may not contain information.

5) Drawbacks:

5.1) Searching the tree is not as efficient as it might be if every node were of a known static size.

5.2) The relevant data is stored in the nodes and has no direct relationship to the RIB or ARP cache, so entries are aged and rebuilt through process switching.

c. Cisco Express Forwarding (CEF):

CEF forwarding and adjacency tables

1) CEF is the default switching path on all modern routers.

2) The data is not stored within the nodes. Each node becomes a pointer to another table, which contains the data.

3) Each node is the same static size and holds no data; the position of the node is a reference into the adjacency table.

4) Adjacency table stores the pertinent data, such as MAC header substitution and next hop information for the nodes.

5) Advantages:

5.1) Both forwarding table and adjacency table are built w/o process switching

5.2) The forwarding table is built separately from the adjacency table, so an error in one table doesn’t cause the other to become stale.

5.3) When the ARP cache changes, only the adjacency table changes, so aging or invalidation of the forwarding table is not required.

5.4) CEF supports load balancing over equal-cost paths.
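*To inspect the two structures on a router (output omitted):

R0# show ip cef summary
R0# show adjacency detail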

D. Configuring and Managing Switching Algorithm (or Paths):

1. Process Switching:

R0# sh ip int vlan 301 | i switching
 IP fast switching is enabled
 IP fast switching on the same interface is disabled
 IP Flow switching is enabled
 IP CEF switching is enabled
 IP Selective flow switching turbo vector
 IP Flow CEF switching turbo vector
 IP multicast fast switching is enabled
 IP multicast distributed fast switching is disabled
 IP multicast multilayer switching is disabled

a. To disable all Interrupt Context Switching Paths, use command:

R0(config-if)# no ip route-cache
R0#sh ip int vlan 301 | i switching
 IP fast switching is disabled
 IP fast switching on the same interface is disabled
 IP Flow switching is disabled
 IP CEF switching is disabled
 IP Selective flow switching turbo vector
 IP Flow CEF switching turbo vector
 IP multicast fast switching is disabled
 IP multicast distributed fast switching is disabled
 IP multicast multilayer switching is disabled

b. When a router is process switching most of its IP packets, the top process will always be ip_input. You can verify this by the command:

R0# sh proc cpu sorted | e 0.00
CPU utilization for five seconds: 49%/26%; one minute: 45%; five minutes: 45%
PID Runtime(ms)   Invoked      uSecs   5Sec   1Min   5Min TTY Process
281   3118803922583781382          0 26.95% 24.53% 21.42%   0 IP Input
178     2276332   5264619        432  0.15%  0.08%  0.03%   0 SNMP ENGINE

2. Fast Switching:

To enable fast switching:

R0(config-if)# ip route-cache
R0#sh ip int vlan 301 | i swi
 IP fast switching is enabled
 IP fast switching on the same interface is enabled
 IP Flow switching is disabled
 IP CEF switching is disabled
 IP Selective flow switching turbo vector
 IP Flow CEF switching turbo vector
 IP multicast fast switching is enabled
 IP multicast distributed fast switching is disabled
 IP multicast multilayer switching is disabled

=> Turning on fast switching is NOT enabling CEF.

3. CEF:

a. CEF is enabled by default.

b. There are 2 places where it can be configured:

R0(config)# ip cef
R0(config-if)# ip route-cache cef

c. CEF load-balances packets on a per-destination basis by default. => A single destination will always use the same link.

d. CEF also allows you to configure load balancing on a per-packet basis. => VoIP cannot tolerate per-packet load balancing because packets may arrive out of order. When using such protocols, always ensure that load balancing is performed per-destination, or use a higher-level protocol such as Multilink PPP.

R0(config-if)# ip load-sharing per-packet
R0(config-if)# ip load-sharing per-destination

e. To show CEF tables:

R0# sh ip cef
Prefix              Next Hop             Interface
0.0.0.0/0           192.168.4.14          Vlan301
0.0.0.0/32          receive
127.0.0.0/8         attached             EOBC0/0
127.0.0.0/32        receive
127.0.0.51/32       receive
127.255.255.255/32  receive
192.168.4.0/28       attached             Vlan301
192.168.4.0/32       receive
192.168.4.1/32       receive
192.168.4.12/32      192.168.4.12          Vlan301
192.168.4.13/32      192.168.4.13          Vlan301
192.168.4.14/32      192.168.4.14          Vlan301
192.168.4.15/32      receive
224.0.0.0/4         drop
224.0.0.0/24        receive
255.255.255.255/32  receive

July 1, 2009

Virtual Switching System (VSS)

Filed under: Information, IOS — Tags: , — Jaycee @ 7:22 pm

A. VSS (Virtual Switching System)

1. VSS pools multiple Catalyst 6500 switches into one virtual switch, increasing bandwidth capacity to 1.4 Tbps.

2. A VSS will allow 2x 6500 to operate as a single logical virtual switch called VSS1440.

3. VSS1440 = 2x 6500 with SUP720-10GE

4. In a VSS, the data plane and the 720-Gbps switch fabric of the supervisor engine in each chassis are active at the same time on both chassis, combining for an active 1400-Gbps switching capacity per VSS. However:

a. ONLY ONE of the virtual switch members has the active control plane.

b. Both chassis are kept in sync with the SSO (Stateful Switchover) mechanism along with NSF (Nonstop Forwarding) to provide nonstop communication.

5. Benefits:

a. single point of management, IP address, and routing instance for 6500 virtual switch

1) single configuration file and node to manage.

2) removes the need to configure redundant switches twice with identical policies.

3) Only one gateway IP address is required per VLAN, instead of the 3 IP addresses per VLAN used today.

4) removes the need for HSRP, VRRP, GLBP

*Cisco LMS (LAN Management System) 3.0 can be used to centrally manage a 6500 virtual switch as a single entity

b. MEC (Multichassis EtherChannel) is a Layer 2 multipathing technology that creates simplified loop-free topologies, eliminating the dependency on STP. (STP can still be activated to protect strictly against any user misconfiguration.)

c. Flexible deployment – the physical switches don’t have to be colocated: the 2 physical switches are connected with standard 10 Gigabit Ethernet interfaces, and as such can be located at any distance within the limits of the 10 Gigabit Ethernet optics.

d. VSS eliminates L2/L3 protocol reconvergence if a virtual switch member fails

e. VSS scales system bandwidth to 1.4Tbps:

1) activates all L2 bandwidth across redundant 6500 switches with automatic load sharing.

2) maximizing server bandwidth throughput

f. eliminating unicast flooding caused by asymmetrical routing in traditional campus designs

g. optimizing the number of hops for intracampus traffic using multichassis EtherChannel enhancements

6. Target deployment areas for VSS:

a. Campus or DC core/distribution layer

b. DC access layer (server connectivity)

7. The two physical chassis don’t need to be identical in the type of modules installed, or even the type of chassis. For example, a WS-C6503-E chassis can be combined with a WS-C6513 chassis to form a VSS.

8. eFSU (enhanced fast software upgrade) is a mechanism to perform software upgrades while maintaining HA. It leverages the existing features of NSF and SSO and significantly reduces the downtime to less than 200ms.

9. Dual active state is detected rapidly by:

a. Enhancement to PAgP used in MEC with connecting Cisco switches

b. L3 BFD (Bidirectional Forwarding Detection) configuration on a directly connected link between virtual switch members, or through an L2 link through an access layer switch.

c. L2 Fast-Hello Dual-Active Detection configuration on a directly connected link between virtual switch members.

B. MEC (Multichassis EtherChannel)

1. MEC allows a connected node to terminate its EtherChannel across the 2 physical Catalyst 6500 switches that make up the VSS, creating a simplified loop-free L2 topology.

2. Using MEC in VSS topology results in all links being active and at the same time provides for a highly available topology without the dependency of STP.

3. A VSS supports up to 512 MECs.
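*A rough sketch of tying two chassis into a VSS (the domain number, switch number, port-channel, and interface numbers are made up; the second chassis needs the mirror-image configuration):

R1(config)# switch virtual domain 100
R1(config-vs-domain)# switch 1
R1(config-vs-domain)# exit
R1(config)# interface port-channel 1
R1(config-if)# switch virtual link 1
R1(config-if)# interface tengigabitethernet 5/4
R1(config-if)# channel-group 1 mode on
R1(config-if)# end
R1# switch convert mode virtual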

End-of-Row or Top-of-Rack for Server Networking in DC

Filed under: Information, IOS, Routing Design — Tags: , — Jaycee @ 2:05 pm

There are 3 primary approaches for server networking in a data center (DC) environment:

1. End-of-Row:

a. When aggregating servers larger than 1U or servers with a mixed amount of interface types and densities, Catalyst 6500 Series switches are used to support one or more racks.

b. Advantage:

1) cost effective – delivers the highest level of switch and port utilization, especially when coupled with the rich set of network virtualization services available in the Catalyst 6500 Series. (The 6500 supports a wide variety of service modules, which simplifies pushing security and application networking services into the access layer.)

2) server-independent – provides maximum flexibility to support a broad range of servers.

3) performance advantage – 2 servers that exchange large volumes of information can be placed on the same line card, as opposed to card-to-card or switch-to-switch paths, which will be slower.

c. Disadvantage:

1) cable/patch panel cost – physical volume of the cables and the waste of the valuable rack space

2. Top-of-Rack:

a. When stacking 40 1U servers in a rack, one or two 1U rack switches (like the Catalyst 4948-10G) are often used to aggregate all of these servers with Gigabit Ethernet, and then a couple of 10GbE links run back to the aggregation switches. (In some cases, 2x 4948 switches are used for HA purposes.) (The Catalyst 4948 is optimized for the DC environment.)

b. Advantage:

1) simplified cable management

2) avoid rack space and cooling issues

3) avoid cooling issues of end-of-rack switching

4) fast port-to-port switching for servers within the rack

5) predictable oversubscription of the uplink and smaller switching domains (one per rack) to aid in fault isolation and containment

c. Disadvantage:

1) There may not be enough servers in one rack to fill the switch – solution: have servers in an adjacent rack share the top-of-rack switch to preserve its advantages while increasing port utilization.

3. Integrated:

a. When using blade servers, blade switches would be deployed. The Cisco Catalyst Blade Switch 3000 Series supports the virtualization, segmentation, and management tools needed to properly support this environment.

b. When server virtualization is in use, it can rapidly increase the complexity of the network (the number of MAC addresses, complexity of spanning tree, data pathways, etc.)

c. In some larger DCs, pass-through modules or blade switches are used and then aggregated into a series of rack switches.

*Most people like dual top-of-rack switches because servers have dual production uplinks. But they can’t really fit 40 1U servers in a rack due to power limitations or heating problems, so they end up with 3 racks sharing a top-of-rack switch in the middle rack, with cables running between cabinets. End-of-row is actually designed for this situation, but placing a 6500 in the middle rack would cause overheating; 6500 switches should therefore be placed at the end of the row.

June 26, 2009

JUNOS Commands for IOS Users

Filed under: IOS, Junos — Tags: , — Jaycee @ 11:15 pm

A. Basic CLI and Systems Management Commands:

IOS command                JUNOS command
clock set                  set date
reload                     request system reboot
send                       request message
show clock                 show system uptime
show environment           show chassis environment
show history               show cli history
show ip traffic            show system statistics
show logging               show log
                           show log <filename>
show processes             show system processes
show running-config        show configuration
show tech-support          request support information
show users                 show system users
show version               show version
                           show chassis hardware
terminal length            set cli screen-length
terminal width             set cli screen-width
trace                      traceroute

B. Switching Commands:

IOS command                JUNOS command
none                       show ethernet-switching interfaces
show spanning-tree         show spanning-tree bridge
show mac address-table     show ethernet-switching table

C. Interface Commands:

IOS command                JUNOS command
clear counters             clear interface statistics
show interfaces            show interfaces
                           show interfaces detail
                           show interfaces extensive
show ip interface brief    show interfaces terse

D. Routing Protocol-Independent Commands:

IOS command                JUNOS command
clear arp-cache            clear arp
show arp                   show arp
show ip route              show route
show ip route summary      show route summary
show route-map             show policy
                           show policy <policy-name>
show tcp                   show system connections

1. OSPF Commands:

IOS command                JUNOS command
show ip ospf database      show ospf database
show ip ospf interface     show ospf interface
show ip ospf neighbor      show ospf neighbor

2. BGP Commands:

IOS command                                         JUNOS command
clear ip bgp                                        clear bgp neighbor
clear ip bgp dampening                              clear bgp damping
show ip bgp                                         show route protocol bgp
show ip bgp community                               show route community
show ip bgp dampened-paths                          show route damping decayed
show ip bgp neighbors                               show bgp neighbor
show ip bgp neighbors <address> advertised-routes   show route advertising-protocol bgp <address>
show ip bgp neighbors <address> received-routes     show route receive-protocol bgp <address>
show ip bgp peer-group                              show bgp group
show ip bgp regexp                                  show route aspath-regex
show ip bgp summary                                 show bgp summary

May 31, 2009

BGP Processes and Memory Use

Filed under: BGP, IOS — Tags: — Jaycee @ 6:06 pm

A. Example of “BGP Process and Memory Use”:

R1# show processes memory | begin BGP
 PID TTY  Allocated      Freed    Holding    Getbufs    Retbufs Process
  73   0  678981156   89816736   70811036          0          0 BGP Router
  74   0    2968320  419750112      61388    1327064        832 BGP I/O
  75   0          0    8270540       9824          0          0 BGP Scanner
                                 70882248 Total BGP
                                 77465892 Total all processes

1. Allocated column: It shows the total number of bytes allocated since the creation of the process.

2. Freed column: It provides the number of bytes the process has freed since its creation.

3. Holding column: It shows the actual memory being consumed by the process at the moment.

a. BGP Router process accounts for the majority of BGP’s memory use.
b. The memory use for both the BGP I/O and BGP Scanner process are insignificant.

B. BGP Router Process:

1. BGP RIB

a. Includes network entries, path entries, path attributes, and route map and filter list caches.

b. The memory used to store this information can be found in “show ip bgp summary” output.

2. IP RIB for BGP learned prefixes

a. BGP learned prefixes are stored in two types of structures:

(1) NDBs (Network Descriptor Blocks)

(2) RDBs (Routing Descriptor Blocks)

b. Each route in the IP RIB requires one NDB and one RDB per path.

c. If the route is subnetted, additional memory is required to maintain the NDB.

d. The direct memory use for IP RIB can be shown using the “show ip route summary” command.

3. IP switching component for BGP learned prefixes

a. With significatn memory demand is IP switching component, such as FIB structures.

4. The BGP Router process requires a small amount of memory (approximately 40 KB) for its own operation, in addition to what is required to store the routing information; this is insignificant compared to the overall memory consumed by the process.

