Virtual Circuit Manager


6.1 Virtual Circuit Manager Overview

The Virtual Circuit Manager (VC Manager) is responsible for setting up virtual paths between switches for all connection-oriented protocols. These virtual paths are either provisioned by the NMS operator when the network is brought up and remain for the life of the network (Permanent Virtual Circuits, or PVCs), or built on an as-needed basis when traffic enters the network and released when the traffic stops (Switched Virtual Circuits, or SVCs).

After these circuits are built, the VC Manager is responsible for ensuring that they remain on the best possible path for the requirements of the circuit, as well as the requirements of all other circuits in the network. The VC Manager works closely with OSPF to keep a picture of the state of every circuit and trunk in the network.

The following diagram presents the basic structure of the Virtual Circuit Manager.

6.1.1. PVC/SVC Setup

Setup of a PVC differs from the setup of an SVC. PVCs are initiated from the NMS when a circuit is provisioned. SVCs are initiated when a Router Call Request packet is received from a router. Basic SVC bandwidth and QoS limitations can be configured on the router UNI Lport.

The PVC/SVC setup routines handle the differences between PVCs and SVCs and present a consistent interface to the VC Manager routines when the circuit is initiated. This interface is the Call Setup PDU described later.

The PVC/SVC setup routines also take care of merging the different requirements of setting up an ATM circuit and Frame Relay circuit into one consistent interface.
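The consistent interface described above can be pictured as a single structure that both setup paths fill in before handing off to the VC Manager routines. This is a minimal sketch; the field and function names are illustrative, not the actual PDU layout:

```c
#include <string.h>

/* Hypothetical sketch of the Call Connect PDU handed to the VC Manager
 * routines.  Field names are illustrative; the real structure is defined
 * by the PVC/SVC setup code. */
typedef enum { CKT_PVC, CKT_SVC } CktType;
typedef enum { PROTO_FRAME_RELAY, PROTO_ATM } CktProto;

typedef struct {
    CktType  type;            /* PVC (from NMS) or SVC (from router) */
    CktProto proto;           /* Frame Relay or ATM */
    unsigned src_id;          /* DLCI, or concatenated VPI/VCI */
    unsigned dest_id;
    unsigned ingress_lport;   /* e.g. SW1:1 */
    unsigned egress_lport;    /* e.g. SW4:1 */
    unsigned fwd_bw_kbps;     /* forward / reverse bandwidth */
    unsigned rev_bw_kbps;
    void    *local;           /* points back to the cktTable entry for PVCs */
} CallConnectPDU;

/* Both PVC and SVC setup paths fill in the same structure, so the VC
 * Manager routines never need to know how the call originated. */
void CallConnect_Init(CallConnectPDU *pdu, CktType type, CktProto proto)
{
    memset(pdu, 0, sizeof *pdu);
    pdu->type  = type;
    pdu->proto = proto;
}
```

Because the structure carries both Frame Relay and ATM identifiers in the same fields, the downstream routines can stay protocol-agnostic until CAC.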

6.1.2. VC Manager Routines

VC Manager routines are responsible for circuit setup, tear down and rerouting. These routines work closely with the OSPF/VNN subsystem to find the best possible candidate path for a circuit, given all the possible constraints on the circuit and the current state of all other nodes on the network.

VC Manager routines are called from PVC/SVC setup requests, OSPF/VNN notifications of state changes on the network, reroute and load-balancing timers, circuit node and call setup failures, etc.

6.1.3. Connection Admission Control (CAC)

The Connection Admission Control (CAC) routines are responsible for admitting or rejecting a requested circuit. Bandwidth limitations and congestion are the main factors here. Although similar, ATM and Frame Relay have significant differences in requirements at the CAC level, and these are handled here.

CAC for a call is checked on every switch in the path from the initiating switch to the destination switch before the circuit can become active. CAC on any switch along the path can reject the call, forcing the initiating switch to attempt to choose another path.

CAC is also responsible for calculating the effective bandwidth for a circuit request given its traffic parameters. This effective bandwidth is what is passed to OSPF when requesting a path to the destination switch.
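As an illustration of the effective-bandwidth step, here is one common approximation for a Frame Relay request: the CIR plus a weighted share of the excess rate. The formula, the weight, and the function name are assumptions for illustration, not the product's actual CAC math:

```c
/* Illustrative effective-bandwidth calculation for a Frame Relay circuit
 * request.  The real CAC formula is protocol specific; taking the CIR
 * plus a weighted fraction of the excess information rate (EIR) is one
 * common approximation.  The weight is a percentage (0-100). */
unsigned VcCaC_EffectiveBw(unsigned cir_kbps, unsigned eir_kbps,
                           unsigned excess_weight_pct)
{
    return cir_kbps + (eir_kbps * excess_weight_pct) / 100;
}
```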

6.1.4. VC Connection Primitive routines

The VC Manager Connection Primitive routines handle building and working with the various Call Connect/Reject/Status Change PDUs. These form the basis of a state machine that tracks the life of a circuit.

6.1.5. OSPF/VNN/PNNI

OSPF/VNN/PNNI keeps a picture of every circuit and trunk in the network. When requested by the VC Manager routines, OSPF will find the best possible candidate path given the requirements of the circuit, the state of the network, and a list of restrictions.

OSPF will also call VC Manager routines when the state of the network changes. For example if a port on switch 3 goes into congestion, OSPF on switch 1 will learn of this through Link State Update messages and will call a VC Manager routine to possibly reroute any circuits that transit the congested port on switch 3.

The following figure is referred to throughout the rest of this chapter.

6.2 Circuit Setup Overview

This section gives a general overview of the entire process of setting up a circuit (PVC or SVC). For this example, a student at the NMS provisions a PVC from the UNI Lport on SW 1 (SW1:1 notation means Switch 1, Lport IfIndex 1) to the UNI Lport SW4:1. This is an attempt to connect Router A to Router B. The DLCI on the Router A side is 100 and the DLCI on the Router B side is 101.

The basic structure for call setup is the Call Connect PDU. This is passed from layer to layer and filled out as it goes. It is also the basis of the packet that gets sent to all switches that make up the Circuit.

6.2.1. Beginning PVC setup

When the provisioning is complete and the student presses the OK button, the NMS sends down cktTable MIBs to both endpoint switches (SW1 and SW4). These MIBs describe the circuit from the perspective of the switch that receives them. For example, SW1 receives the cktSrcDlci as 100 and the cktDestDlci as 101, whereas SW4 receives them reversed. The same holds true for forward and reverse bandwidth, QoS, and other parameters.

The last cktTable MIB that gets sent by the NMS is cktAdminStatus. When this MIB is received, the VC Manager PVC Routines are called. These routines smooth out the differences between PVCs and SVCs, as well as the differences between ATM and Frame parameters. A Call Connect PDU structure is created and passed on to the VC Manager routines. A Local pointer points back to the cktTable.

Note that if this were an SVC request, a call connect packet would have been received from a router instead. This would have contained the call QoS and bandwidth requirements as well as the end system address. Ospf_NameLookup() would have to be called to find the IP address of the destination switch given the End System Address supplied by the router, since the VC Manager routines require the IP addresses of systems.

6.2.2. VC Manager routines setup

The VC Manager routines receive a Call Connect structure that has the source and destination circuit identifiers (DLCI or concatenated VPI/VCI) and the ingress and egress lports for the circuit (SW1:1 and SW4:1). Any other information it needs can be found through the cktTable. VcCaC_InitPDU() is called to fill in the protocol-specific bandwidth and Connection Admission Control information.

Both SW1 and SW4 reach this point concurrently. A decision is made on which switch actually initiates the call, based on the IP addresses of the switches: the switch with the higher IP address initiates the call. Each switch compares its own IP address with the destination IP address to decide this.

SW1 decides that it is the low-IP switch. Because the NMS downloads and starts the circuit on each endpoint switch asynchronously, it is possible that SW4 has already tried to initiate the call and sent the Call Connect. SW1 checks to see if a matching Call Connect PDU has been queued for this circuit; if not, it queues its own and goes about its business.
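The initiator election reduces to a single comparison. A sketch, treating the addresses as host-order 32-bit integers (the function name is illustrative):

```c
#include <stdint.h>

/* Sketch of the initiator election: each endpoint compares its own IP
 * address with the far endpoint's, and the switch with the numerically
 * higher address initiates the call.  Addresses are host-order 32-bit
 * integers for the comparison. */
int VcM_IsCallInitiator(uint32_t my_ip, uint32_t dest_ip)
{
    return my_ip > dest_ip;
}
```

The low-IP switch (SW1 here) gets 0 back, queues its Call Connect PDU, and waits for the high-IP switch to drive the setup.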

There are a few special cases that are dealt with here. Circuits where both endpoints are on the same system are handled immediately, as are circuits for Management DLCIs.

6.2.3. VcM_InitiateCall() Initiating a Call Request

The high-IP switch (SW4) is responsible for choosing the best path to SW1 and trying to set up the circuit. If a user-defined path exists (cktDefinedPath), then OSPF is not queried for a path.

As an aside, it is possible to arrive in this routine from several places. This routine is called not only for initiating a new call, but also for finding a different path for a failed call, rerouting around a congested or failed node on an existing circuit, rerouting a lower priority circuit to free up bandwidth for a higher priority circuit, etc. In these cases, it is called with a list of nodes to avoid in finding the best path. These are passed on to OSPF when it attempts to find the candidate path to the destination. If this list is not NULL, then cktDefinedPath is ignored.

In any event, OSPF is given a list of requirements and nodes to avoid, and it will return a candidate path list to the destination within those requirements. If no path exists, the requirements may be changed and retried if the circuit allows it. If there still is no path and no other options, this routine returns an error to the caller.

In this example, OSPF returns the path {SW4:3,SW3:1} and a hop count of 2. This information is added into the Call Information structure. The Low level connection routines are now called.

6.2.4. Building the example circuit

The low level Call Primitive routines are responsible for receiving, sending, and handling Call PDU packets. These packets are used to build a connection, acknowledge a connection or reject a connection, activate or deactivate a connection, and release a connection.

The main low-level routine for connection is VcP_ConnectIn(), which processes Call Connect PDUs. This routine calls CAC for both the incoming and outgoing lports for the requested circuit, binds the Call Connect PDU to the lports (see diagram below), and passes the Call PDU on to the next hop in the supplied path.

Binding the Call Connect PDU to Lports

Continuing the example, VcP_ConnectIn() receives the PDU from the higher layers on SW4, verifies CAC on the outgoing Lport, binds the PDU to the lports then sends the PDU on to SW3. VcP_ConnectIn() on SW3 receives it, verifies CAC on both the incoming and outgoing lports, binds the PDU to the lports and sends it on to the next hop in the path. VcP_ConnectIn() on SW1 receives it, verifies CAC on the incoming Lport, realizes that it is the endpoint switch and handles endpoint processing (which is PVC or SVC specific).

Successful call setup causes an ACK packet to be sent down the newly created PVC to the initiating switch. All switches along the path flag the circuit as INACTIVE.

At any point along the path, if CAC on a switch rejects the call, a REJECT packet (with the reason code and failing node/lport) will be sent back to the initiating switch along the partially created path, causing all intermediate switches to unbind the lports and release the bandwidth. The initiating switch will then re-call VcM_InitiateCall() to attempt to set up the circuit again, avoiding the node that failed CAC.
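The per-hop admission walk above can be sketched as follows. The Hop type, its fields, and the simple available-bandwidth test are simplifications of the real CAC checks; the point is that the first failing hop identifies the node the initiator must avoid on retry:

```c
#include <stddef.h>

/* Minimal sketch of the per-hop admission check.  Each switch in the
 * candidate path runs CAC against the lport the circuit would use; the
 * first failure identifies the node to avoid on the retry.  Names and
 * fields are illustrative. */
typedef struct {
    unsigned node_id;
    unsigned avail_bw_kbps;   /* what CAC has left to give on this lport */
} Hop;

/* Returns -1 if every hop admits the call, otherwise the index of the
 * first hop that rejects it (the node reported in the REJECT packet). */
int VcP_WalkPathCAC(const Hop *path, size_t nhops, unsigned eff_bw_kbps)
{
    for (size_t i = 0; i < nhops; i++)
        if (path[i].avail_bw_kbps < eff_bw_kbps)
            return (int)i;
    return -1;
}
```

In the real system this loop is distributed: each switch runs its own check and forwards or rejects, rather than one switch walking the whole path.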

6.2.5. Starting the example circuit

Once the ACK packet is transferred from the receiving endpoint switch to the initiating endpoint switch, the circuit is up and inactive. It is the responsibility of the UNI lport logic on both sides to actually start the circuit once the CPE routers have been notified of the circuit.

This is more complex than it initially appears. The Frame Relay spec spends some time on this and is a good resource for further explanation. However, in the simplest case, UNI lports will send a start packet to the other side after notifying the CPE routers. The circuit is not considered active until both sides have sent and received a start message. For example:

Switch 3 sends the connect request to switch 2, then to switch 1, which notifies the CPE router of the new inactive PVC via an Lmi Async message. The connect is then acknowledged. Once Router 1 has been notified, the UNI lport will send a Start packet into the new circuit.

Switch 3 receives the Ack packet and notifies CPE router 2 of the new inactive circuit. Once this is done, the UNI lport on Switch 3 can send its start packet into the network. (Note that in the diagram, the switch 3 UNI lport sends its start after it receives the start from switch 1; this can happen in either order.) Once switch 3 has both sent and received the start packet, it considers the circuit active and notifies CPE router 2 via an Lmi async packet.

Eventually switch 1 receives the start packet from switch 3. Since it has sent and now received a start, the circuit is active and it notifies router 1 via an Lmi Async packet.
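The "both sent and received" rule reduces to a two-flag check per UNI lport. A minimal sketch with illustrative names:

```c
#include <stdbool.h>

/* Per-circuit start-handshake state kept by a UNI lport.  A circuit is
 * active only once the lport has both sent and received a start message.
 * Names are illustrative. */
typedef struct {
    bool start_sent;
    bool start_received;
} UniCktState;

bool Uni_CircuitActive(const UniCktState *s)
{
    return s->start_sent && s->start_received;
}
```

Because the check is symmetric, it does not matter which start arrives first, which is why the diagram's ordering is explicitly noted as arbitrary.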

The case of multi-segment PVCs is a little more complex. The circuit cannot be active until all segments of the PVC are defined. In the following diagram, a two-segment PVC with an NNI link between Network 1 and Network 2 is defined and activated.

The rule here is this: UNI Lports can initiate a status change (such as start or stop). NNI Lports can only note and propagate status changes.

Network 2 creates its circuit first. Switch 4 initiates a connect request, which switch 3 responds to with an Ack. Switch 3 also sends an NNI async notification to switch 2, notifying it about the new inactive circuit. However, since it is an NNI lport, it does not send a start. When switch 4 receives the Ack, it notifies router 2 and sends a start, since it is a UNI lport. Switch 3 receives the start and propagates it to switch 2 on the other side of the NNI link. At this point the circuit exists but is inactive.

Now network 1 starts its segment of the circuit. Switch 2 sends the connect to switch 1, which sends the Lmi Async new circuit notification to the router and sends the Ack back to switch 2. Also, since it is a UNI lport, it sends a start (Note that in the diagram, the start is sent later, after a start is received from switch 2. It can happen in any order).

Switch 2 receives the Ack from switch 1 and sends the NNI async new circuit notification to switch 3. It then notices that switch 3 has already sent a notification and status for its segment of this circuit. Switch 2's NNI lport will propagate this status (started) into network 1.

Switch 2 receives the start message. It sends an NNI Async message to switch 3, which propagates it into the network. Eventually, switch 4 receives the start message, sends an Lmi async status of started to router 2 and considers the circuit active (since the UNI lport has both sent and received a start message).

6.3 Circuit Reroute Tuning

Reroute tuning is a way to ensure that a PVC or SVC always has the best path from source switch to destination switch. Once the circuit is provisioned and connected, it usually stays on that path for its existence. However, some better, faster, and/or cheaper path may become available after the circuit is connected. Reroute tuning takes care of this.

Using the above figure, suppose that the least cost path from SW4 to SW1 is through SW3, but the bandwidth on lport SW3:2 is all used up by a high priority SVC. In that case, the VC Manager is forced to choose the path {SW4:2,SW2:1} to get to SW1, even though it costs more.

Eventually, the SVC is released, freeing up a lower cost path for our PVC. Reroute tuning would allow the previously provisioned PVC to jump to this different path.

Reroute tuning can only happen on the switch that actually initiated the circuit. Once every nodeRerouteDelay seconds, the VC Manager routine VcM_RerouteTune() is called. It will start at the top of the chain of circuits it created and recalculate the path of each circuit. If a new path is found that costs less than the current path, the old circuit will be released and a new circuit will be created.

Note to me: think about this for a second. OSPF is using available bandwidth as a measure for this. If I am looking for a better path for a circuit that already exists, it already 'owns' the bandwidth on the nodes it passes through. Hmmm.

Reroute tuning will only examine nodeRerouteCount circuits every interval. This is to prevent thrashing. Also, it will ignore any circuit that doesn't have cktRerouteBalance enabled.
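Putting the interval cap, the cktRerouteBalance check, and the cost comparison together, the tuning pass might look like the sketch below. The types are illustrative, best_cost[i] stands in for the OSPF path query, and the release/rebuild of a moved circuit is elided:

```c
#include <stdbool.h>
#include <stddef.h>

/* Sketch of a VcM_RerouteTune() pass: examine at most nodeRerouteCount
 * circuits per interval (to prevent thrashing), skip circuits without
 * cktRerouteBalance enabled, and move a circuit only when the candidate
 * path is strictly cheaper than the current one. */
typedef struct {
    bool     reroute_balance;   /* cktRerouteBalance */
    unsigned current_cost;
} Ckt;

/* Returns the number of circuits rerouted this interval. */
size_t VcM_RerouteTune(Ckt *ckts, const unsigned *best_cost, size_t nckts,
                       size_t nodeRerouteCount)
{
    size_t examined = 0, moved = 0;
    for (size_t i = 0; i < nckts && examined < nodeRerouteCount; i++) {
        if (!ckts[i].reroute_balance)
            continue;                          /* tuning disabled for this ckt */
        examined++;
        if (best_cost[i] < ckts[i].current_cost) {
            ckts[i].current_cost = best_cost[i];  /* release + rebuild elided */
            moved++;
        }
    }
    return moved;
}
```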

6.4 Bandwidth and Bumping Priority

6.4.1. Bandwidth Priority

The bandwidth priority of a circuit determines its importance in the network. This priority is used when finding the best path through the network. If there is not enough bandwidth along a chosen best path, then the VC Manager uses the circuit's Bandwidth Priority to shift other circuits to more costly trunks.

Assume that Trunk 1 has a bandwidth of 110 Kb/s and PVC1 uses most of that. Now a student configures PVC2, with a CIR of 100 Kb/s and a Bandwidth Priority of 1. The VC Manager will release PVC1 and reconnect it on Trunk 2 (at a higher cost) so that PVC2 can have the least-cost trunk.

6.4.2. Bumping Priority

The bumping priority of a circuit determines the order in which circuits will be bumped when more than one circuit on a trunk needs to be bumped. Lower bump-priority circuits are bumped first. For circuits with equal priority, the one with the higher CIR is bumped first. Finally, if all else is equal, the circuit with the higher DLCI is bumped first.
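The three-key bump order above fits a qsort()-style comparator. This is a sketch: the struct is illustrative, and it assumes "lower bump priority" means a lower numeric value:

```c
#include <stdlib.h>

/* Bump-order comparator: lower bump priority first (assuming a lower
 * numeric value means lower priority), then higher CIR, then higher
 * DLCI.  Suitable for qsort() over the candidates on a trunk. */
typedef struct {
    unsigned bump_priority;
    unsigned cir_kbps;
    unsigned dlci;
} BumpCand;

int Vc_BumpCompare(const void *a, const void *b)
{
    const BumpCand *x = a, *y = b;
    if (x->bump_priority != y->bump_priority)
        return x->bump_priority < y->bump_priority ? -1 : 1; /* low pri first  */
    if (x->cir_kbps != y->cir_kbps)
        return x->cir_kbps > y->cir_kbps ? -1 : 1;           /* high CIR first */
    if (x->dlci != y->dlci)
        return x->dlci > y->dlci ? -1 : 1;                   /* high DLCI first */
    return 0;
}
```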

6.4.3. Implementation in VC Manager

VcM_InitiateCall() is responsible for getting a candidate path through a network. VcM_InitiateCall() will request the best possible path from OSPF. OSPF will check each hop along the way to ensure that there is enough available bandwidth. If there isn't, OSPF returns a NULL path, an error code stating not enough bandwidth, and the node/lport of the node that caused the problem.

VcM_InitiateCall() will then check to see if it has any other circuits that transit the problem node to see if they are candidates for reroute. If they are, then it releases that circuit with VcM_ReleaseCall(), and recursively calls itself to re-establish the released circuit, avoiding the node that caused the bump. Once the bumped circuit is re-established (through a different node), VcM_InitiateCall() tries OSPF again with the original circuit request. This is repeated until the candidate path is complete or there are no other circuits that are candidates for reroute.

If there are no candidates for reroute, VcM_InitiateCall() will call OSPF again, having it avoid the node that caused the problem. This will cause a 'less than best' path to be returned.
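The bumping pass can be modelled with a small helper: given the free bandwidth on the problem trunk and the reroute candidates in bump order, how many circuits must be released before the new call fits? The function and its shape are an illustration, not the actual VcM_InitiateCall() recursion:

```c
#include <stddef.h>

/* Illustrative helper for the bumping pass: free_kbps is what the
 * problem trunk has left, want_kbps is the effective bandwidth of the
 * new request, and cand_kbps[] holds the bandwidths of the reroutable
 * circuits, already sorted in bump order.  Returns how many circuits
 * must be bumped to admit the call, or -1 if bumping every candidate
 * still isn't enough (forcing a 'less than best' path instead). */
int Vc_BumpsNeeded(unsigned free_kbps, unsigned want_kbps,
                   const unsigned *cand_kbps, size_t ncand)
{
    size_t n = 0;
    while (free_kbps < want_kbps) {
        if (n == ncand)
            return -1;            /* out of candidates: route around instead */
        free_kbps += cand_kbps[n++];
    }
    return (int)n;
}
```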

There is obviously a lot of potential for deadlock and circuit thrashing here. Reroute counters and recursion checks are in place to limit this. If things start to look ugly, VcM_InitiateCall() will try to bail gracefully. It is, after all, only a simulation.

Note to me: Check how reroute balancing and negative trunk bandwidth play into this. Also the difference between reroute balancing and timed reroute balancing. See FRII4.23

6.5 Fault Tolerant PVCs and Resilient UNI/NNI

If a lport or node within the network fails, the VC Manager of every switch will be notified and will check to see if it initiated any circuits that transit the problem lport. If so, it will attempt to reroute if possible. This covers all failures within the network and is a standard feature of the VC Manager.

Fault Tolerant and Resilient PVCs handle the case of faults between the network and the customer router.

If LMI, NNI, or ILMI fails on lport SW1:1, the VC Manager will be informed. If there were no resilient UNI, the entire PVC would be brought down. However, lport SW1:3 can be configured as a Resilient UNI to back up the PVC on lport SW1:1, so the PVC endpoint would be rerouted over lport SW1:3.

A UNI lport can be configured as a Resilient backup lport for one or more PVCs, but it can only back up one PVC at a time. When not being used for backup, the lport must be idle: no PVCs can be configured against a backup UNI endpoint lport.

6.6 Multicast PVC setup

TO BE DONE

6.7 Virtual Private Networks (VPNs)

A Virtual Private Network is a series of trunks that are dedicated to a customer. The customer buys the actual trunks and all of their PVCs and SVCs will have exclusive access to those trunks. VPNs are for customers that need bandwidth reliability or packet security.

The IPGlue implementation of these is fairly simple. When a direct trunk lport is created, it is given a VPN number (lportPrivateNet) and a customer number (lportCustomerID). If the VPN number is zero, the trunk is public.

Circuits are also given VPN numbers when they are created (cktPrivateNet). When built, these circuits will be limited to trunks with the same VPN number. If no cktPrivateNet is specified when the circuit is created, it assumes the VPN number of the ingress UNI lport.

OSPF is passed the VPN number when each new trunk is created, as part of the Link State Update packet, and is therefore aware of the VPN of every trunk in the network. When a circuit is created with a VPN, OSPF will only consider trunks with matching VPNs when building the best path candidate list. Conversely, trunks with nonzero VPNs are not considered for circuits with no VPN number.
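The trunk-eligibility rule implied by the last two paragraphs can be sketched as a single predicate. The function name is illustrative, and overflow to public trunks (cktPublicNetOverflow set to 'use-public') is modelled as a flag:

```c
#include <stdbool.h>

/* A trunk is eligible for a circuit when the VPN numbers match (zero
 * meaning public), or when a VPN circuit is allowed to overflow onto
 * public trunks.  Distinct nonzero VPNs never mix, and public circuits
 * never use private trunks. */
bool Vpn_TrunkEligible(unsigned ckt_vpn, unsigned trunk_vpn,
                       bool allow_public_overflow)
{
    if (ckt_vpn == trunk_vpn)
        return true;                     /* includes public/public (0/0) */
    if (ckt_vpn != 0 && trunk_vpn == 0)
        return allow_public_overflow;    /* 'use-public' overflow case */
    return false;
}
```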

If there is not enough bandwidth on any given trunk in a VPN, the cktPublicNetOverflow MIB is checked. If this is set to 'use-public', then VcM_InitiateCall() will also consider public trunks. This also holds true for a failure of a node on a VPN: if cktPublicNetOverflow is set to 'private' and a node fails and no other VPN path exists, all PVCs through that node will go down.

6.8 Closed Loop Congestion Control (CLCC)

When an lport on any node in the network enters a congested state, it may send an OSPF congestion notification to all other switches. In that case, OSPF will call VcM_Clcc() to check to see if this switch has any circuits transiting the congested node.

If VcM_Clcc() finds any such circuits, it will attempt to throttle the traffic through them by changing the rate enforcement parameters of the circuit. For the various levels of network congestion, the Be and Bc of a circuit are lowered by a percentage in an attempt to force more red frames to be dropped. See Frame Service section 3.4 for more details.
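The throttling step might be sketched as below. The congestion-level scale and the percentage table are assumptions for illustration, not the product's actual values:

```c
/* Illustrative CLCC throttle: cut Bc and Be by a percentage tied to the
 * congestion level, so more frames exceed the committed/excess
 * thresholds and get marked red and dropped.  Restoring from the
 * original values (rather than re-scaling) makes the adjustment
 * reversible when congestion clears. */
typedef struct {
    unsigned bc;   /* committed burst (bits) */
    unsigned be;   /* excess burst (bits) */
} RateEnforce;

/* level: 0 = none, 1 = mild, 2 = severe (hypothetical scale). */
void VcM_ClccThrottle(RateEnforce *cur, const RateEnforce *orig, int level)
{
    static const unsigned keep_pct[] = { 100, 75, 50 };
    unsigned pct = keep_pct[level];
    cur->bc = orig->bc * pct / 100;
    cur->be = orig->be * pct / 100;
}
```

Passing level 0 when the node leaves the congested state restores the original parameters, matching the behavior described below.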

When the trouble node leaves the congested state, OSPF is again notified and VcM_Clcc() returns the rate enforcement parameters to their original settings.

How does the ATM congestion scheme work into this? Note that CLCC is enabled on an lport basis. Will ATM direct trunks allow this?

6.9 SVC Details

Some ideas of things to discuss here

6.9.1. Mapping of E.164 (or other) address to IP

6.9.2. Router to switch call initiation

6.9.3. SVC Options

6.9.3.1 Calling party insertion mode

6.9.3.2 Calling party presentation mode

6.9.3.3 Calling party screening

6.9.3.4 Holddown and release timers

6.9.3.5 Load balance

6.9.4. Closed User Groups