Jaycee's Networking

August 14, 2009

Switching Algorithms/Paths

Filed under: Information, IOS — Tags: — Jaycee @ 4:17 am

A. Overview:

1. Switching – the process of moving packets from one interface to another within a router.

2. Routing – the process of choosing paths and forwarding packets to destinations outside of the physical router.

3. Switching Algorithm – the method a router uses to switch packets; the choice of algorithm can significantly raise or lower a router’s performance.

4. RIB (Routing Information Base) –

1) is built by L3 routing protocols
2) is essentially the routing table
3) The decisions about how to move packets from one interface to another are based on the RIB.

5. Steps of the process of switching a packet:

1) Determine whether the packet’s destination is reachable

2) Determine the next hop to the destination, and to which interface the packet should be switched

3) Rewrite the MAC header on the packet to reach its destination

6. Requirements of router switching:

a. Interfaces have access to input/output memory. When a packet comes into an interface, the router must decide to which interface the packet should be sent. Once the decision is made, the packet’s MAC header is rewritten, and the packet is sent on its way.

b. Packets must get from one interface to another.

c. How the router decides which interface to switch the packet to is based on the switching path in use.

d. Routing table contains all the necessary information to determine the correct interface, but process switching must be used to retrieve data from the routing table.

B. Process Switching:

1. It’s the original method of determining which interface to forward a packet to.

2. The processor calls a process that accesses the RIB, and must wait for the next scheduled execution of that process before the packet can be switched.

3. Steps for Process Switching:

(Figure: process switching)

1) The interface processor detects a packet and moves the packet to the input/output memory.

2) Interface processor generates a receive interrupt.

a. The CPU (central processor) determines the packet type (IP), and copies it to processor memory if necessary.

b. Then the processor places the packet on the appropriate process’s input queue and releases the interrupt.

c. The process for IP packets is titled ip_input.

3) When the scheduler next runs, it notices the presence of a packet in the input queue for the ip_input process, then schedules the process for execution.

4) When the ip_input process runs, it looks up the next hop and output interface information in the RIB. Then it consults the ARP cache to retrieve the L2 address for the next hop.

5) The process rewrites the packet’s MAC header with the appropriate addresses, then places the packet on the output queue of the appropriate interface.

6) The packet is moved from the output queue of the outbound interface to the transmit queue of the outbound interface.

=> Outbound QoS is applied at this point.

7) The output interface processor notices the packet in its queue, and transfers the packet to the network media.

4. Slowness happens at:

(Figure: slowness in process switching)

a. The processor waits for the next scheduled execution of the ip_input process.

b. ip_input process references the RIB when it runs

1) The ip_input process runs at the same priority level as other processes on the router, such as the routing protocols and the HTTP web server interface.

2) Packets sourced from or destined to the router itself are always process-switched, such as SNMP traps from the router and telnet packets destined for the router.
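To see how much traffic an interface is process switching versus switching via a route cache, the per-interface switching-path counters can be inspected. A sketch (the counter values below are illustrative only):

R0# sh interfaces stats
Vlan301
          Switching path    Pkts In   Chars In   Pkts Out  Chars Out
               Processor       1205      90210       1150      88433
             Route cache     994023   81250110     987554   80411202
                   Total     995228   81340320     988704   80499635

A high and growing "Processor" count relative to "Route cache" suggests the interface is falling back to process switching.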

C. Interrupt Context Switching:

1. The processor interrupts the current process to switch the packet.

(Figure: interrupt context switching)

2. It’s faster than process switching, since the ip_input process is rarely called. Interrupt context switching usually bypasses the RIB and works with parallel tables that are built for more efficient retrieval.

3. Steps for Interrupt Context Switching:

1) The interface processor detects a packet and moves the packet into input/output memory.

2) The interface processor generates a receive interrupt. During this time, the CPU determines the packet type (IP) and begins to switch the packet.

3) The processor searches the route cache for: destination reachability, output interface, next hop, and MAC conversion. Then the processor uses this information to rewrite the packet’s MAC header.

4) The packet is copied to either the transmit or the output queue of the outbound interface. The receive interrupt is ended, and the originally running process continues.

5) The output interface processor notices the packet in its queue, and transfers the packet to the network media.

4. The RIB is bypassed entirely in this model. The necessary information is retrieved from the “route cache”. Each switching path has its own means of determining, storing, and retrieving this information. There are 3 different methods:

a. Fast Switching:

(Figure: fast-switching binary tree)

1) uses a binary tree format for recording/retrieving information in the route cache.

2) The next hop and MAC address rewrite information is stored within each node.

3) It’s fast compared with searching the RIB since the tree is very deterministic.

4) Drawbacks:

4.1) Because the data for each address is stored within the nodes, the size of the data is not static. Each node may be a different size, so the table can be inefficient to search.

4.2) The route cache is updated only when packets are process-switched, i.e. when the 1st packet to a destination is switched. To keep the data in the route cache current, 1/20th of the entire route cache is aged out (discarded) every minute and must be rebuilt using process switching.

4.3) The ARP table is not directly related to the contents of the route cache, so process switching must be used whenever ARP changes.
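The contents of the fast-switching route cache can be examined directly, which makes the aging behavior visible (the entry shown is hypothetical):

R0# sh ip cache
Prefix/Length       Age        Interface       Next Hop     00:02:10   Vlan301

Entries appear here only after the first packet to that destination has been process-switched.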

b. Optimum switching:

(Figure: optimum-switching multiway tree)

1) uses a multiway tree instead of a binary tree for recording/retrieving information in the route cache

2) This pattern continues for 4 levels – one for each octet.

3) The information of each route (prefix) or IP address is stored within the final node.

4) The size of the table can be variable since each node may or may not contain information.

5) Drawbacks:

5.1) Searching the tree is not as efficient as it might be if every node were of a known static size.

5.2) Because the relevant data is stored in the nodes and has no direct relationship to the RIB or ARP cache, entries must still be aged out and rebuilt through process switching.

c. Cisco Express Forwarding (CEF):

(Figure: CEF forwarding and adjacency tables)

1) CEF is the default switching path on all modern routers.

2) The data is not stored within the nodes. Each node becomes a pointer to another table, which contains the data.

3) Each node is the same static size and holds no data; instead, the node’s position is a reference into the adjacency table.

4) Adjacency table stores the pertinent data, such as MAC header substitution and next hop information for the nodes.

5) Advantages:

5.1) Both forwarding table and adjacency table are built w/o process switching

5.2) Because the forwarding table is built separately from the adjacency table, an error in one table doesn’t cause the other to become stale.

5.3) When the ARP cache changes, only the adjacency table changes, so aging or invalidation of the forwarding table is not required.

5.4) CEF supports load balancing over equal-cost paths.
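The two CEF tables can be inspected separately, which is a quick way to confirm that the forwarding (FIB) and adjacency tables are maintained independently:

R0# sh ip cef summary
R0# sh adjacency detail

The first command summarizes the FIB built from the RIB; the second shows the adjacency entries, including the precomputed MAC rewrite string for each next hop.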

D. Configuring and Managing Switching Algorithms (or Paths):

1. Process Switching:

R0# sh ip int vlan 301 | i switching
 IP fast switching is enabled
 IP fast switching on the same interface is disabled
 IP Flow switching is enabled
 IP CEF switching is enabled
 IP Selective flow switching turbo vector
 IP Flow CEF switching turbo vector
 IP multicast fast switching is enabled
 IP multicast distributed fast switching is disabled
 IP multicast multilayer switching is disabled

a. To disable all interrupt context switching paths, use the command:

R0(config-if)# no ip route-cache
R0#sh ip int vlan 301 | i switching
 IP fast switching is disabled
 IP fast switching on the same interface is disabled
 IP Flow switching is disabled
 IP CEF switching is disabled
 IP Selective flow switching turbo vector
 IP Flow CEF switching turbo vector
 IP multicast fast switching is disabled
 IP multicast distributed fast switching is disabled
 IP multicast multilayer switching is disabled

b. When a router is process switching most of its IP packets, the top process will always be ip_input. You can verify this with the command:

R0# sh proc cpu sorted | e 0.00
CPU utilization for five seconds: 49%/26%; one minute: 45%; five minutes: 45%
PID Runtime(ms)   Invoked      uSecs   5Sec   1Min   5Min TTY Process
281   3118803922583781382          0 26.95% 24.53% 21.42%   0 IP Input
178     2276332   5264619        432  0.15%  0.08%  0.03%   0 SNMP ENGINE

2. Fast Switching:

To enable fast switching:

R0(config-if)# ip route-cache
R0#sh ip int vlan 301 | i swi
 IP fast switching is enabled
 IP fast switching on the same interface is enabled
 IP Flow switching is disabled
 IP CEF switching is disabled
 IP Selective flow switching turbo vector
 IP Flow CEF switching turbo vector
 IP multicast fast switching is enabled
 IP multicast distributed fast switching is disabled
 IP multicast multilayer switching is disabled

=> Turning on fast switching is NOT enabling CEF.

3. CEF:

a. CEF is enabled by default.

b. There are 2 places where it can be configured: globally, or per interface:

R0(config)# ip cef
R0(config-if)# ip route-cache cef

c. CEF load-balances packets on a per-destination basis by default. => A single destination will always use the same link.

d. CEF also allows you to configure load balancing on a per-packet basis. => VoIP cannot tolerate per-packet load balancing because packets may arrive out of order. When using such protocols, always ensure that load balancing is performed per-destination, or use a higher-level protocol such as Multilink PPP.

R0(config-if)# ip load-sharing per-packet
R0(config-if)# ip load-sharing per-destination
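With per-destination load sharing, the exact link CEF will choose for a given source/destination pair can be verified (the addresses below are hypothetical):

R0# sh ip cef exact-route

This is useful for confirming that a single flow is pinned to one equal-cost path.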

e. To show CEF tables:

R0# sh ip cef
Prefix              Next Hop             Interface           Vlan301          receive         attached             EOBC0/0        receive       receive  receive       attached             Vlan301       receive       receive          Vlan301          Vlan301          Vlan301      receive         drop        receive  receive

August 10, 2009

Anycast DNS

Filed under: BGP, Information — Tags: — Jaycee @ 9:24 pm

DNS Anycast Service-Provision Architecture

IPv4 Anycast Routing

* Local and global nodes:

a. Anycast deployment on the Internet differs between local and global nodes:

1) Local nodes are often intended primarily to provide benefit for the directly connected local community.

2) Local node announcements are often announced with the no-export BGP community to prevent peers from announcing them to their peers (i.e. the announcement is kept in the local area).

3) Where both local and global nodes are deployed, the announcements from global nodes are often AS prepended (i.e. the AS is added a few more times) to make the path longer so that a local node announcement is preferred over a global node announcement.
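The two announcement policies above can be sketched in BGP configuration. The AS numbers and the anycast prefix here are hypothetical, and send-community is required for the no-export community to reach the peer:

router bgp 64512
 network mask
 neighbor remote-as 64513
 neighbor send-community
 neighbor route-map LOCAL-NODE out
!
! Local node: keep the announcement in the local area
route-map LOCAL-NODE permit 10
 set community no-export
!
! Global node: prepend the AS so local-node announcements are preferred
route-map GLOBAL-NODE permit 10
 set as-path prepend 64512 64512

A global node would apply GLOBAL-NODE on its outbound sessions instead of LOCAL-NODE.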

July 1, 2009

Virtual Switching System (VSS)

Filed under: Information, IOS — Tags: , — Jaycee @ 7:22 pm

A. VSS (Virtual Switching System)

1. VSS pools multiple Catalyst 6500 switches into one virtual switch, increasing bandwidth capacity to 1.4 Tbps.

2. A VSS will allow 2x 6500 to operate as a single logical virtual switch called VSS1440.

3. VSS1440 = 2x 6500 with SUP720-10GE

4. In a VSS, the data plane and switch fabric with capacity of 720 Gbps of supervisor engine in each chassis are active at the same time on both chassis, combining for an active 1400-Gbps switching capacity per VSS. However:

a. ONLY ONE of the virtual switch members has the active control plane.

b. Both chassis are kept in sync with the SSO (Stateful Switchover) mechanism along with NSF (Nonstop Forwarding) to provide nonstop communication.

5. Benefits:

a. single point of management, IP address, and routing instance for 6500 virtual switch

1) single configuration file and node to manage.

2) removes the need to configure redundant switches twice with identical policies.

3) Only one gateway IP address is required per VLAN, instead of the 3 IP addresses per VLAN used today (2 physical addresses plus a virtual address for a first-hop redundancy protocol).

4) removes the need for HSRP, VRRP, GLBP

*Cisco LMS (LAN Management System) 3.0 can be used to centrally manage a 6500 virtual switch as a single entity

b. MEC (Multichassis EtherChannel) is a Layer 2 multipathing technology that creates simplified loop-free topologies, eliminating the dependency on STP. (STP can still be activated to protect strictly against any user misconfiguration.)

c. Flexible deployment – the physical switches don’t have to be colocated: the 2 physical switches are connected with standard 10 Gigabit Ethernet interfaces, and as such can be located at any distance within the limits of the 10 Gigabit Ethernet optics.

d. VSS eliminates L2/L3 protocol reconvergence if a virtual switch member fails

e. VSS scales system bandwidth to 1.4Tbps:

1) activates all L2 bandwidth across redundant 6500 switches with automatic load sharing.

2) maximizing server bandwidth throughput

f. eliminating unicast flooding caused by asymmetrical routing in traditional campus designs

g. optimizing the number of hops for intracampus traffic using multichassis EtherChannel enhancements

6. Target deployment areas for VSS:

a. Campus or DC core/distribution layer

b. DC access layer (server connectivity)

7. The two physical chassis don’t need to be identical in the type of modules installed, or even in the type of chassis. For example, a WS-C6503-E chassis can be combined with a WS-C6513 chassis to form a VSS.

8. eFSU (enhanced fast software upgrade) is a mechanism to perform software upgrades while maintaining HA. It leverages the existing features of NSF and SSO and significantly reduces the downtime to less than 200ms.

9. Dual active state is detected rapidly by:

a. Enhancement to PAgP used in MEC with connecting Cisco switches

b. L3 BFD (Bidirectional Forwarding Detection) configuration on a directly connected link between virtual switch members, or through an L2 link through an access layer switch.

c. L2 Fast-Hello Dual-Active Detection configuration on a directly connected link between virtual switch members.
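A minimal sketch of the fast-hello dual-active detection configuration (the domain number and interface are hypothetical, and exact syntax may vary by software release):

VSS(config)# switch virtual domain 100
VSS(config-vs-domain)# dual-active detection fast-hello
VSS(config-vs-domain)# exit
VSS(config)# interface GigabitEthernet1/5/1
VSS(config-if)# dual-active fast-hello

The fast-hello link must be a dedicated, directly connected link between the two chassis.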

B. MEC (Multichassis EtherChannel)

1. MEC allows a connected node to terminate the EtherChannel across the 2 physical Catalyst 6500 switches that make up the VSS, creating a simplified loop-free L2 topology.

2. Using MEC in VSS topology results in all links being active and at the same time provides for a highly available topology without the dependency of STP.

3. A VSS supports up to 512 MECs.
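Since the two chassis appear as one logical switch, an MEC is configured like an ordinary EtherChannel whose member ports simply live on different chassis (interface numbers below are hypothetical; in a VSS the leading digit identifies the chassis):

VSS(config)# interface range TenGigabitEthernet 1/1/1, TenGigabitEthernet 2/1/1
VSS(config-if-range)# channel-group 10 mode desirable

The attached switch or server sees a single EtherChannel, even though its links land on two physical chassis.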

End-of-Row or Top-of-Rack for Server Networking in DC

Filed under: Information, IOS, Routing Design — Tags: , — Jaycee @ 2:05 pm

There are 3 primary approaches for server networking in DC environments:

1. End-of-Row:

a. When aggregating servers larger than 1U or servers with a mixed amount of interface types and densities, Catalyst 6500 Series switches are used to support one or more racks.

b. Advantage:

1) cost effective – delivering the highest level of switch and port utilization, especially when coupled with the rich set of network virtualization services available in the Catalyst 6500 Series. (The 6500 supports a wide variety of service modules, which simplifies pushing security and application networking services into the access layer.)

2) server-independent – provides maximum flexibility to support a broad range of servers.

3) performance advantage – 2 servers that exchange large volumes of information can be placed on the same line card, as opposed to card-to-card or switch-to-switch, which will be slower.

c. Disadvantage:

1) cable/patch panel cost – physical volume of the cables and the waste of the valuable rack space

2. Top-of-Rack:

a. When stacking 40 1U servers in a rack, one or two 1U rack switches (like the Catalyst 4948-10G) are often used to aggregate all of these servers with Gigabit Ethernet and then run a couple of 10GbE links back to the aggregation switches. (In some cases, 2x 4948 switches are used for HA purposes.) (The Catalyst 4948 is optimized for the DC environment.)

b. Advantage:

1) simplified cable management

2) avoid rack space and cooling issues

3) avoid cooling issues of end-of-row switching

4) fast port-to-port switching for servers within the rack

5) predictable oversubscription of the uplink and smaller switching domains (one per rack) to aid in fault isolation and containment

c. Disadvantage:

1) Not enough servers to fill the switch in one rack – solution: let one top-of-rack switch serve servers in an adjacent rack to preserve the advantages of the top-of-rack switch while increasing port utilization.

3. Integrated:

a. When using blade servers, blade switches would be deployed. The Cisco Catalyst Blade Switch 3000 Series supports the virtualization, segmentation, and management tools needed to properly support this environment.

b. When server virtualization is in use, it can rapidly increase the complexity of the network (the number of MAC addresses, complexity of spanning tree, data pathways, etc.)

c. In some larger DCs, a pass-thru module is used instead of the blade switches, with the server links aggregated into a series of rack switches.

*Most people like dual top-of-rack because servers have dual production uplinks. But they can’t really fit 40 1U servers in a rack due to power limitations or heating problems, so they end up with 3 racks sharing the top-of-rack switch in the middle rack, with cables running between cabinets. End-of-row is actually designed for this situation. But placing a 6500 in the middle rack would cause an overheating problem, so the 6500 switches shall be placed at the end of the row.

May 15, 2009

About Interfaces

Filed under: Frame Relay, Information, IOS — Jaycee @ 10:59 pm

A. Interface:

1. Each media type has its own configuration commands, though a few commands are common to all interfaces.

2. Interface is where you set addresses and netmasks and specify how the interface interacts with the routing protocol.

3. Subinterfaces provide a way to have multiple logical configurations for the same interface; most commonly used in Frame Relay, ATM, and Fast Ethernet.

a. Subinterface zero (0) refers to the actual interface:

serial 1 = serial 1.0

b. Frame Relay permits subinterfaces in both point-to-point and multipoint modes.

4. Secondary IP address(es):

interface ethernet 0
ip address
ip address secondary
ip address secondary


a. Secondary IP addresses are NOT supported by OSPF
b. Routing updates are not sent out to secondary subnets due to split horizon.
c. Too many secondary IP addresses often means you are doing something wrong with your network design.
d. Host broadcasts may or may not be heard by hosts on other subnets, depending on the broadcast address used by the host and the hosts’ implementations.
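A sketch of the secondary-address syntax (the addresses are hypothetical):

interface ethernet 0
 ip address
 ip address secondary

The first address is primary; each additional address carries the secondary keyword.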

B. Common Interface Commands:

a. ip directed-broadcast

A directed broadcast is a broadcast that is sent to a specific network or set of networks. Directed broadcasts are frequently used in DoS attacks. To reduce the vulnerability to such attacks, this feature is disabled by default.

b. ip proxy-arp

It allows the router to respond to ARP requests for hosts that it knows about, but that aren’t directly reachable by the host making the ARP request. If the router receives an ARP request for a host and the router has a route to that host, the router sends an ARP response with its own data link address to the requestor.

c. ip unreachables

It enables the generation of ICMP protocol unreachable messages (the default). It’s often used on the null interface.

C. Loopback Interface:

1. It’s NOT tied to any physical interface or hardware.

2. It’s often used as a termination address for some routing protocols, such as OSPF and BGP for router ID. It never goes down.

3. The “ip unnumbered” configuration command allows you to enable IP processing on a serial interface without assigning it an explicit IP address; the serial interface borrows the address of another interface, typically a loopback.

4. Use it for all management software, which will test whether the router is alive by pinging the loopback interface’s IP.
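A common pattern combining points 2 and 3: give the loopback a stable address and let a serial interface borrow it (the addresses are hypothetical):

interface Loopback0
 ip address
!
interface Serial0
 ip unnumbered Loopback0

Routing protocols and management tools then see one stable address regardless of which physical links are up.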

D. Null Interface:

1. It’s the “bit bucket” or “black hole” interface. A null route directs traffic to a non-existent interface, called the null interface. Network packets directed to the “Null 0” interface are discarded as soon as they are received.

2. A null route is useful for removing packets that cannot make it out of the network or to their destination, and/or for decreasing the congestion created when packets with no currently reachable destination float around the network, or when the destination is under a denial of service attack.

3. During a denial of service attack, a null route can temporarily be placed on the next-to-last hop closest to the destination, which will cause that device to drop all traffic generated by the attack.

4. It’s most useful for filtering unwanted traffic, because you can discard traffic simply by routing it to the null interface. You could achieve the same goal using ACLs, but ACLs require more CPU overhead.

5. There can be only one null interface (null 0), and it’s always configured. It accepts ONLY ONE configuration command as below:

interface null 0
 no ip unreachables

6. As part of a security strategy, use null0 to prevent routing loops when using summarized addresses.

7. Example:


In the above topology, R1 has this static route:

ip route null0

a. From R3, if it sends a packet to destination for

=> the packet would be sent to R1 and then to R4, since R4 has a longer match than R1 does.

b. From R3, if it sends a packet to destination for

=> the packet would be dropped.
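A concrete version of the example above, with hypothetical prefixes, where R1 summarizes while R4 holds a more specific route:

! On R1: summary route for the whole block
ip route Null0
! R1 also learns via R4 (e.g. through a routing protocol)

A packet to follows the longer /24 match toward R4; a packet to matches only the summary and is dropped at R1, preventing a routing loop.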

E. Serial Interface:

1. They are interfaces that connect to a device like a CSU/DSU, which in turn connects to a leased line to complete a point-to-point connection.

2. Serial Encapsulation:

a. PPP (Point-to-Point Protocol):

(1) Echo requests are used as keepalives; use “no keepalives” to disable this feature
(2) It’s supported by all routers and vendors. If you are creating a serial link between 2 different types of routers, you’ll need to use PPP for the two routers to communicate.

b. HDLC:

(1) It provides synchronous frames and error detection without windowing or retransmission.
(2) It’s NOT supported by all vendors.

c. Frame Relay:

(1) Your packets are handled by a switched network that provides virtual circuits between you and the sites.
(2) Frame Relay is an encapsulation type, not an interface type.
(3) Frame Relay communication takes place over some other medium, typically a T1 line.

F. Passive Interface:

1. It tells an interface to listen to RIP or IGRP routes but NOT to advertise them. (Listen but don’t talk.)

This feature can reduce routing load on the CPU by reducing the number of interfaces on which a protocol needs to communicate.

2. For OSPF and EIGRP, this command completely disables route processing for that interface: no hellos are sent, so no neighbor adjacency can form.

3. Example:

Using Passive Interfaces

router eigrp 300
 passive-interface ethernet 0

router rip
 passive-interface serial0
 passive-interface serial1

May 12, 2009

Server Load Balancing

Filed under: Information, Load Balancing, Routing Design — Tags: , — Jaycee @ 2:12 am

A. Load Balancing:

1. DNS-Based Load Balancing (also known as DNS Round Robin):

a. Allows more than one IP to associate with a hostname

b. The user’s domain name server looks up the domain name with one of the root servers. The root servers do not have the IP info, but they know who does and report that to the user’s DNS server. The query then goes out to the authoritative name server, and the IP is reported back. The entire process is as below:

(1) The user types the URL into the browser.
(2) The OS makes a DNS request to the configured DNS server.
(3) The DNS server sees if it has that IP address cached. If not, it makes a query to the root servers to see what DNS servers have the information.
(4) The root servers reply back with an authoritative DNS server for the requested hostname.
(5) The DNS server makes a query to the authoritative DNS server and receives a response.

c. Limitation of DNS round robin:

(1) Unpredictable traffic/load distribution

Since individual users don’t make requests to the authoritative name servers, they make requests to the name servers configured in their operating systems. Those DNS servers then make the requests to the authoritative DNS servers and cache the received information.

(2) DNS Caching

To prevent DNS servers from being hammered with requests, and to keep bandwidth utilization low, DNS servers employ quite a bit of DNS caching.

(3) Lack of fault-tolerance measures

When demand increases suddenly, more servers are required quickly. Any new server entries in DNS take a while to propagate, which makes scaling a site’s capacity quickly difficult.

2. Firewall Load Balancing:

Most firewalls are CPU-based, such as a SPARC machine or an x86-based machine. Because of the processor limitations involved, the amount of throughput a firewall can handle is often limited; generally they tend to max out at around 70 to 80 Mbps of throughput.

3. Global Server Load Balancing (GSLB):

a. SLB works on LAN; GSLB works on WAN.

b. There are several ways to implement GSLB, such as DNS-based and BGP-based.

c. Two main reasons to implement GSLB:

(1) GSLB brings content closer to the users.
(2) GSLB provides redundancy in case any site fails.

B. Clustering vs. SLB:

1. Clustering is application-based, reserving load balancing for the network-based aspect of the technology; SLB is network-based load balancing.

2. Disadvantages of Clustering:

a. It requires tight integration between the servers.
b. special software is required
c. a vendor will most likely support a limited number of platforms
d. a limited number of protocols are supported

3. SLB:

a. It’s platform and OS neutral, so it works as long as there is a network stack.
b. It’s extremely flexible: it supports just about any network protocol, from HTTP to NFS, to Real Media, to almost any TCP- or UDP-based protocol.
c. With no interaction between the servers and a clear delineation of functions, a SLB design is very simple and elegant, as well as powerful and functional.

C. OSI model with SLB:

1. Layer 1 – physical

2. Layer 2 – Data link:

An Ethernet frame consists of a header, a checksum, and a payload. The Ethernet frame size has a limit of 1.5KB. Some devices support jumbo frames for Gigabit Ethernet, which can be around 9KB.

3. Layer 3 – Network:

These devices are routers, although SLB devices also have router characteristics.

4. Layer 4 – Transport:

An SLB instance will involve an IP address and a TCP/UDP port.

5. Layers 5-7 – Session, Presentation, Application:

Layers 5-7 involve URL load balancing and parsing. URL load balancing can set persistence based on the “cookie” negotiated between the client and the server.

D. Components of SLB:

1. VIPs (Virtual IPs):

It’s the load-balancing instance. A TCP or UDP port number is associated with the VIP, such as TCP port 80 for web traffic.

2. Servers

3. Groups/Farm/Server Farm

4. User-Access Levels: Read-only, Superuser, Other levels

E. Redundancy:

Typically, 2 devices are implemented. A protocol is used by one device to check on its partner’s health. In an “active/active” scenario, both devices are active and accept traffic; in “active/passive”, only one device is used while the other waits in case of failure.

1. Active/Passive (also known as Active/Standby or Master/Slave) Scenario:

2. Active/Active Scenarios:

(1) VIPs are distributed between the two LBs to share the incoming traffic. For example, VIP 1 goes to LB A, and VIP 2 to LB B.

(2) Both VIPs answer on both LBs, but 2 LBs may not hold the same IP. For example, VIP 1 and VIP 2 both on LB A and LB B.

3. Redundancy Protocols:

a. VRRP (Virtual Router Redundancy Protocol):

(1) An open standard.
(2) Each unit in a pair sends out packets to see if the other will respond.
(3) VRRP runs directly over IP (protocol number 112) and sends its advertisements to the multicast address
(4) VRRP requires that the two units are able to communicate with each other.

b. ESRP (Extreme Standby Router Protocol): Extreme Networks’ proprietary.

c. HSRP (Hot Standby Router Protocol): Cisco proprietary.

d. GLBP (Gateway Load Balancing Protocol):

(1) Cisco proprietary.

(2) To overcome the limitations of existing redundant router protocols.

(3) GLBP allows a weighting parameter to be set. Based on this weighting, ARP requests will be answered with MAC addresses pointing to different routers. Thus, load balancing is not based on traffic load, but on the number of hosts that will use each gateway router. By default, GLBP load-balances in round-robin fashion.

GLBP elects one AVG (Active Virtual Gateway) for each group. The elected AVG then assigns a virtual MAC address to each member of the GLBP group, including itself, thus enabling AVFs (Active Virtual Forwarders). Each AVF assumes responsibility for forwarding packets sent to its virtual MAC address. There can be up to four active AVFs at the same time.

By default, GLBP routers use the multicast address to send hello packets to their peers every 3 seconds over UDP 3222 (source and destination).
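A minimal GLBP configuration showing the weighting described above (the addresses, group number, and weights are hypothetical):

interface Vlan10
 ip address
 glbp 10 ip
 glbp 10 priority 150
 glbp 10 load-balancing weighted
 glbp 10 weighting 110

With weighted load balancing, a router configured with a higher weighting value answers a proportionally larger share of ARP requests for the virtual gateway.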

4. Fail-Over Cable:

This method uses a proprietary “heartbeat” checking protocol running over a serial line between a pair of load balancers.

If this fail-over cable is disconnected, it can cause serious network problems, as both units try to take on “master” status. STP can prevent the resulting bridging loops.

5. Stateful Fail-Over:

If a device fails over, all of the active TCP connections are reset, TCP sequence number information is lost, and a network error is displayed in the end user’s browser.

“Stateful Fail-Over” keeps session and persistence information on both the active and passive unit. If the active unit fails, then the passive unit will have all of the information, and service will be completely uninterrupted. The end user won’t notice anything.

6. Persistence (sticky):

It’s the act of keeping a specific user’s traffic going to the same server that was initially hit when the site was contacted. This is especially important in web-store type applications, where a user fills a shopping cart, and that information may only be stored on one particular machine.

7. Health Checking (Service Checking):

It can be performed a number of ways:

a. ping check
b. port check
c. content check

SLB will continuously run these service checks at user-definable intervals.

8. Load-Balancing Algorithms:

There are several methods of distributing traffic using a given metric. These are the mathematical algorithms programmed into the SLB device. They can run on top and in conjunction with any persistence methods, and they are assigned to individual VIPs.

F. SLB benefits:

1. Flexibility

SLB allows the addition and removal of servers to a site at any time. The LB can also direct traffic using cookies, URL parsing, static and dynamic algorithms, and much more.

2. High availability (HA)

SLB can automatically check the status of the available servers, take any nonresponding servers out of the rotation, and put them back in rotation when they are functioning again. LBs themselves come in redundant configurations.

3. Scalability

Since SLB distributes load among many servers, all that is needed to increase the serving power of a site is to add more servers.

May 10, 2009

EtherChannel

Filed under: Information, IOS — Tags: , — Jaycee @ 8:17 pm

A. EtherChannel :

1. A Cisco term for the technology that enables the bonding of up to 8 physical Ethernet links into a single logical link. However, the bandwidth is not truly the aggregate of the physical link speeds in all situations. (In an EtherChannel composed of 4 1-Gbps links, each conversation will still be limited to 1 Gbps by default.)

2. By default, the physical link used for each packet is determined by the packet’s destination MAC address. This algorithm is Cisco-proprietary. Packets with the same destination MAC address will always travel over the same physical link. => It ensures that packets sent to a single destination MAC address never arrive out of order.

3. If one workstation talks to one server over an EtherChannel, only one of the physical links will be used. All of the traffic destined for that server will traverse a single physical link in the EtherChannel. => A single user will only ever get 1 Gbps from the EtherChannel at a time.

4. Solaris calls this technology a “trunk.”

B. Load Balancing:

1. The hashing algorithm takes the destination MAC address and hashes that value to a number in the range of 0-7.

2. The only possible way to distribute traffic equally across all links in an EtherChannel is to design one with 8, 4, or 2 physical links.
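The exact hash is Cisco-proprietary, but the bucket arithmetic behind the 8/4/2 rule can be modeled simply. Taking the low-order 3 bits of the MAC address is a stand-in for the real hash; the point is that 8 buckets divide evenly only across 8, 4, or 2 links:

```python
from collections import Counter

def hash_bucket(mac: str) -> int:
    """Reduce a MAC address to a 3-bit bucket (0-7). The real Cisco hash
    is proprietary; the low-order 3 bits are used here as a stand-in."""
    return int(mac.replace(":", ""), 16) & 0b111

def bucket_spread(n_links: int) -> Counter:
    """How many of the 8 hash buckets land on each physical link."""
    return Counter(bucket % n_links for bucket in range(8))
```

With 8 links every link receives one bucket; with 6 links, two of the links each receive two buckets, so a 6-link channel is inherently unbalanced no matter what the traffic looks like.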

3. The method the switch uses to determine which path to assign can be changed:

a. MAC address:

(1) source MAC
(2) destination MAC (default)
(3) source and destination MAC

b. IP address:

(1) source IP
(2) destination IP
(3) source and destination IP

c. Ports:

(1) source port
(2) destination port
(3) source and destination ports

4. In the case of one server (e.g., an email server) receiving the lion’s share of the traffic, destination MAC address load balancing doesn’t make sense. Given this scenario, balancing on the source MAC address makes more sense, as in the example below:

EtherChannel load-balancing factors

a. The load-balancing method is applied only to packets being transmitted over the EtherChannel. This is not a two-way function.

b. When packets are being returned from the email server, the source MAC address is that of the email server itself.

c. Thus, the solution would be to use source MAC address load balancing on Switch A, and destination MAC address load balancing on Switch B.

5. On a switch like the 6509, changing the load-balancing algorithm is done on a chassis-wide basis, which matters when all the devices are connected to a single switch, as below:

Single server to single NAS

Problem: the bandwidth required between the server and the NAS device is in excess of 2 Gbps.

a. Solution 1: change the server and/or NAS device so that each link has its own MAC address => the packets will still be sourced from and destined for only one of those addresses.

b. Solution 2: splitting the link into 4 x 1-Gbps links, each with its own IP network, and mounting different filesystems on each link will solve the problem.

c. Solution 3: get a faster physical link, such as 10-Gbps Ethernet.

C. EtherChannel Protocols:

EtherChannel protocols and their modes

1. LACP (Link Aggregation Control Protocol) is an IEEE standard (802.3ad).

2. PAgP (Port Aggregation Protocol) is Cisco-proprietary.

3. NetApp NAS devices don’t negotiate with the other sides of the links, so the Cisco side of the EtherChannel should be set to “on”.

D. EtherChannel Configuration:

1. Create a port-channel virtual interface:

EtherChannel configuration

2. Shows the status of an EtherChannel:

sh etherchannel sum

3. Shows the number of bits used in the hash algorithm for each physical interface:

sh etherchannel 1 port-channel

May 8, 2009


Filed under: Information, IOS — Tags: — Jaycee @ 6:10 am

A. T1 is a means for digitally trunking multiple voice channels together between locations.

1. T1s are full-duplex links.

2. All T1s are digital. Even an “analog” T1 uses digital signaling within the data channel: each channel’s audio must be converted to digital to be sent over the T1.

3. Basic types of T1s:

a. Channelized T1:

(1) It’s a voice circuit that has 24 voice channels.

(2) It’s in-band signaling:

i) each channel contains its own signaling information, which is inserted into the data stream of the digitized voice.

ii) The signals within the channel are not audible. These signals, called ABCD bits, are embedded in the voice data and they are used to report on the status of phones.

b. PRI:

(1) It’s a voice circuit that has 24 channels, one of which is dedicated to signaling.

(2) Bearer channels: the number of available voice channels is 23.

(3) Data channel: the signaling channel.

(4) It’s out-of-band signaling: one entire channel is reserved for signaling, which reduces the number of usable channels from 24 to 23.

c. Clear-channel T1:

(1) It’s not framed: there are no channels and no organization of the bits flowing through the link.

(2) It’s a rarity, as most data links are provisioned with ESF framing.

4. Two types of encoding:

In T1 signaling, there are two possible states: mark (1) and space (0). A space is 0V, and a mark is either +5V or -5V.

a. AMI (Alternate Mark Inversion): When using AMI, a long progression of spaces will result in a loss of synchronization. Because the risk of an all-zeros signal exists, AMI sets every eighth bit to a 1; 16 zeros in a row can cause the remote end to lose synchronization. Voice signals can easily absorb having every eighth bit set to 1, but data signals can’t tolerate having any bits changed. If one bit is different in a TCP packet, the CRC (Cyclic Redundancy Check) will fail and the packet will be resent. Thus, AMI is not an acceptable encoding technique on data T1s.

b. B8ZS (Binary 8-Zero Substitution): If 8 zeros in a row are detected in a signal, they will be converted to a pattern including intentional BPVs (bipolar violations). The remote side converts the pattern back to all zeros. This technique allows data streams to contain as many consecutive zeros as necessary while maintaining ones density.
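The substitution can be sketched in Python, using +1/-1 for marks of alternating polarity and 0 for spaces. The 000VB0VB pattern and its polarities are the standard ones; the encoder itself is a simplified model of the line coding, not real line-driver behavior:

```python
def b8zs_encode(bits):
    """AMI line coding with B8ZS zero substitution (simplified model).
    Marks alternate between +1 and -1; any run of eight zeros is replaced
    with 000VB0VB, where V repeats the polarity of the previous pulse
    (an intentional bipolar violation) and B is a normal, alternating
    pulse."""
    out, zeros, last = [], 0, -1   # 'last' chosen so the first mark is +1
    for bit in bits:
        if bit == 1:
            out.extend([0] * zeros)
            zeros = 0
            last = -last
            out.append(last)
        else:
            zeros += 1
            if zeros == 8:
                p = last
                out.extend([0, 0, 0, p, -p, 0, -p, p])
                zeros = 0      # 'last' stays p: the final B pulse has polarity p
    out.extend([0] * zeros)    # flush any trailing zeros
    return out
```

The receiver recognizes the two violations in the substituted pattern and restores the eight zeros, so the payload is untouched while the line never goes quiet.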

5. Framing:

T1s use time-division multiplexing, so each channel is actually a group of serial binary values. The receiving equipment needs to know when the first channel starts and when the last channel ends. The way this is done is called framing.

a. D4/Superframe: a standard voice framing in which each 8-bit sample is relayed from each channel in order.

i) Frame: 24 8-bit channels + 1 framing bit = 192bits + 1 bit = 193 bits.

ii) Superframe: 12 x 193-bit frames = 2,316bits

iii) 8,000 frames per second are sent: 8,000 x 193 bits = 1,544,000 bps

iv) If we remove the framing bit (1 bit in each frame), we get: 8,000 x 192 = 1,536,000 bps

v) Thus, some texts will show a T1 to be 1.544 Mbps, while others may show 1.536 Mbps. 1.536 Mbps is the usable speed of a T1 once the framing bits are excluded.

vi) D4 lacks error detection, so it’s not suitable for data.
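The frame arithmetic above can be checked directly:

```python
# T1/D4 frame arithmetic, straight from the numbers in the text
CHANNELS = 24                  # voice channels per frame
BITS_PER_SAMPLE = 8            # one 8-bit sample per channel per frame
FRAMING_BITS = 1               # one framing bit per frame
FRAMES_PER_SECOND = 8000

frame_bits = CHANNELS * BITS_PER_SAMPLE + FRAMING_BITS       # 193 bits
superframe_bits = 12 * frame_bits                            # 2,316 bits
line_rate = FRAMES_PER_SECOND * frame_bits                   # 1,544,000 bps
payload_rate = FRAMES_PER_SECOND * CHANNELS * BITS_PER_SAMPLE  # 1,536,000 bps
```

The 8,000 bps gap between the two rates is exactly the framing-bit overhead (one bit per frame, 8,000 frames per second).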

b. ESF (Extended Superframe):

i) ESF is composed of 24 frames instead of 12.

ii) Frames 4,8,12,16,20 and 24 (every 4th frame): The framing bits are filled with the pattern 001011.

iii) Frames 1,3,5,7,9,11,13,15,17,19,21 and 23 (every odd-numbered frame): The framing bits are used for a new 4,000 bps virtual data channel. This channel is used for out-of-band communications between networking devices on the link.

iv) Frames 2,6,10,14,18 and 22 (the remaining even-numbered frames): The framing bits are used to store a 6-bit CRC value for each superframe.
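The three-way assignment of the 24 framing bits can be expressed as a small function, following the scheme above:

```python
def esf_framing_bit_role(frame: int) -> str:
    """Role of the framing bit in ESF frame number 1..24."""
    if frame % 4 == 0:
        return "sync"   # frames 4, 8, ..., 24: the 001011 sync pattern
    if frame % 2 == 1:
        return "fdl"    # odd frames: the 4,000 bps facilities data link
    return "crc"        # frames 2, 6, 10, 14, 18, 22: CRC-6 bits
```

Twelve of the 24 framing bits carry the data-link channel, which is where the 4,000 bps figure comes from: 12 bits per superframe at 8,000/24 ≈ 333 superframes per second.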

6. T1 should be provisioned with B8ZS encoding and ESF framing.

7. T1 CSU/DSU WIC Troubleshooting:

Router# sh service-module s0/1 performance-statistics

a. It shows which events occurred during each of the last 96 15-minute intervals.

b. In the result of the command, if you see a number of line code violations, or any other error that shows up in all or most of the intervals, you’ve got a problem.

B. Configure an internal CSU/DSU for a WAN connection:

interface serial0/0
 ip add
 service-module t1 timeslots 1-12 speed 56
 service-module t1 clock source internal
 service-module t1 linecode ami
 service-module t1 data-coding inverted
 service-module t1 framing esf
 service-module t1 fdl ansi
 service-module t1 remote-alarm-enable

1. “service-module t1 timeslots 1-12“: tells the internal CSU/DSU to use the first 12 time slots of the T1 circuit.

2. “service-module t1 timeslots 1-12 speed 56“: by default, the CSU will assume that all of these time slots are 64Kbps DS0 channels. If the circuit actually uses 56Kbps channels, then add “speed“.

3. “service-module t1 clock source internal“: WAN carriers usually provide the clock signal for a T1 circuit. But in lab networks, you’ll need the internal CSU/DSU to act as the DCE device and supply the clock signal.

4. “service-module t1 linecode ami“: by default, the router uses B8ZS (Binary 8-Zero Substitution) line coding, which tends to be the most common way that T1 circuits are delivered. Some use AMI (Alternate Mark Inversion) instead. “service-module t1 data-coding inverted“: when using AMI line coding, you have to set the speed of each channel to 56 Kbps or use inverted data coding.

5. “service-module t1 framing esf“: by default, the router uses ESF (Extended Super Frame); this command sets it explicitly. Use “service-module t1 framing sf“ instead for Super Frame framing.

6. “service-module t1 fdl ansi“: Some network vendors require a special FDL (Facilities Data Link) configuration. When using a WIC-1DSU-T1 module, FDL is disabled by default. On Cisco 2524 and 2525 routers, the built-in T1 CSU/DSU uses both ANSI and AT&T simultaneously by default.

7. “service-module t1 remote-alarm-enable“: allows the CSU to send remote alarms, called yellow alarms, to the CSU on the other end of the circuit. It does this to let the other device know that it has encountered an alarm condition such as a framing error or loss of signal on the circuit. Only use this with ESF framing because it conflicts with SF framing.

May 7, 2009

VPN’s: IPSec vs. SSL

Filed under: Information, Security — Jaycee @ 7:29 pm

A VPN creates a virtual “tunnel” connecting the two endpoints. The traffic within the VPN tunnel is encrypted so that other users of the public Internet cannot readily view intercepted communications.

VPN Advantages:

1. By implementing a VPN, a company can provide access to the internal private network to clients around the world at any location with access to the public Internet. It erases the administrative and financial headaches associated with a traditional leased line WAN and allows remote and mobile users to be more productive.

2. Best of all, if properly implemented, it does so without impacting the security and integrity of the computer systems and data on the private company network.

IPSec (Internet Protocol Security) VPN:

Traditional VPNs rely on IPSec to tunnel between the two endpoints. IPSec works at the Network Layer of the OSI model, securing all data that travels between the two endpoints without an association to any specific application. When connected via an IPSec VPN, the client computer is “virtually” a full member of the corporate network, able to see and potentially access the entire network.

The majority of IPSec VPN solutions require third-party hardware and / or software. In order to access an IPSec VPN, the workstation or device in question must have an IPSec client software application installed.


It provides an extra layer of security if the client machine is required not only to be running the right VPN client software to connect to your IPSec VPN, but also to have it properly configured. These are additional hurdles that an unauthorized user would have to get over before gaining access to your network.


1. It can be a financial burden to maintain the licenses for the client software, and a nightmare for tech support to install and configure the client software on all remote machines, especially if they can’t be on site physically to configure the software themselves.

2. IPSec is complex. The more sites that connect to each other, the more secure links or tunnels need to be defined and maintained.


SSL (Secure Sockets Layer) VPN:

SSL is a common protocol and most web browsers have SSL capabilities built in.


1. Almost every computer in the world is already equipped with the necessary “client software” to connect to an SSL VPN.

2. It allows more precise access control. First of all, it provides tunnels to specific applications rather than to the entire corporate LAN. So, users on SSL VPN connections can only access the applications that they are configured to access, rather than the whole network.

3. It’s easier to provide different access rights to different users and have more granular control over user access.


1. The limitation of SSL was that browsers could access only Web-based applications, but this challenge was met by Webifying non-Web applications or pushing Java or ActiveX SSL VPN agents to the remote machines on the fly. These plug-ins gave the remote computers the ability to create network-layer connections comparable to IPSec, but without having to distribute dedicated VPN client software.

2. Having direct access only to the web-enabled SSL applications also means that users don’t have access to network resources such as printers or centralized storage and are unable to use the VPN for file sharing or file backups.

As a result, SSL VPNs are making great headway against IPSec VPNs for remote access and seem likely to win out in the end. IPSec is still the preferred method for site-to-site VPNs: either technology requires a gateway anyway, IPSec is better established in this arena, and many SSL vendors don’t even offer site-to-site connections. For site-to-site, IPSec carries the day.

UTM High Availability

Filed under: Information, Security — Tags: — Jaycee @ 12:11 am

Active/active HA —  two firewalls load-balance automatically between themselves.

Active/passive HA — a hot standby system takes over when the active node goes down.

The argument here is that:

Any performance benefits achieved from an active/active configuration would pale in comparison to the guarantee that when an HA event occurs in an active/passive configuration, you’ll still have just as good performance as before the event. Because a typical HA event might be a hardware failure that could take a box out for 24 to 72 hours, having the same performance before and after is pretty important.

With Check Point HA (called ClusterXL), Nokia IPSO clustering, and Juniper HA, each device has its own IP address, and the pair also shares a third IP address as well as an additional (virtual) MAC address. When an HA event occurs, the remaining node takes over the HA IP and MAC addresses, assuring that no one outside of the cluster has to adjust and traffic can keep flowing as soon as the HA event is detected, always within a four-second limit.

With multinode clustering, you can keep adding devices into the cluster, making it (in theory) increasingly reliable and fast. Some devices, such as Nokia’s IP290 and Astaro’s ASG425a, offer multinode clustering, which is a potential solution to the problem of losing a single node in a high-availability environment.
