Fiber-Optic Applications
The use of and demand for optical fiber have grown tremendously, and optical-fiber applications are numerous. Telecommunication applications are widespread, ranging from global networks to desktop computers. These involve the transmission of voice, data, or video over distances of less than a meter to hundreds of kilometers, using one of a few standard fiber designs in one of several cable designs.
Carriers use optical fiber to carry plain old telephone service (POTS) across their nationwide networks. Local exchange carriers (LECs) use fiber to carry this same service between central office switches at local levels, and sometimes as far as the neighborhood or individual home (fiber to the home [FTTH]).
Optical fiber is also used extensively for the transmission of data. Multinational firms need secure, reliable systems to transfer data and financial information between buildings, to desktop terminals or computers, and around the world. Cable television companies also use fiber for the delivery of digital video and data services. The high bandwidth provided by fiber makes it the perfect choice for transmitting broadband signals, such as high-definition television (HDTV) telecasts.
Intelligent transportation systems, such as smart highways with intelligent traffic lights, automated tollbooths, and changeable message signs, also use fiber-optic-based telemetry systems.
Another important application for optical fiber is the biomedical industry. Fiber-optic systems are used in most modern telemedicine devices for transmission of digital diagnostic images. Other applications for optical fiber include space, military, automotive, and the industrial sector.
The Physics Behind Fiber Optics
A fiber-optic cable is composed of two concentric layers, called the core and the cladding, as illustrated in Figure 3-1. The core and cladding have different refractive indices: the core has a refractive index of n1, and the cladding has a refractive index of n2. The index of refraction is a way of measuring the speed of light in a material. Light travels fastest in a vacuum, where its speed is approximately 300,000 kilometers per second, or 186,000 miles per second.
Figure 3-1. Cross Section of a Fiber-Optic Cable
The index of refraction is calculated by dividing the speed of light in a vacuum by the speed of light in another medium, as shown in the following formula:
Refractive index of the medium = Speed of light in a vacuum / Speed of light in the medium
The refractive index of the core, n1, is always greater than the index of the cladding, n2. Light is guided through the core, and the fiber acts as an optical waveguide.
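To make the relationship concrete, here is a minimal Python sketch of the formula above; the index value 1.5 is an illustrative figure for core glass, not one taken from the text:

```python
# Sketch: speed of light in a medium from its refractive index,
# per the formula n = (speed in vacuum) / (speed in medium).
C_VACUUM_KM_S = 300_000  # speed of light in a vacuum, km/s

def speed_in_medium(refractive_index: float) -> float:
    """Return the speed of light (km/s) inside a medium of the given index."""
    return C_VACUUM_KM_S / refractive_index

# With an illustrative core index of about 1.5, light in the core travels
# at roughly two-thirds of its vacuum speed.
print(speed_in_medium(1.5))  # → 200000.0
```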
Figure 3-2 shows the propagation of light down the fiber-optic cable using the principle of total internal reflection. As illustrated, a light ray is injected into the fiber-optic cable on the left. If the light ray is injected and strikes the core-to-cladding interface at an angle greater than the critical angle with respect to the normal axis, it is reflected back into the core. Because the angle of incidence is always equal to the angle of reflection, the reflected light continues to be reflected. The light ray then continues bouncing down the length of the fiber-optic cable. If the angle of incidence at the core-to-cladding interface is less than the critical angle, both reflection and refraction take place. Because of refraction at each incidence on the interface, the light beam attenuates and dies off over a certain distance.
Figure 3-2. Total Internal Reflection
The critical angle is fixed by the indices of refraction of the core and cladding and is computed using the following formula:
θc = cos⁻¹(n2/n1)
The critical angle can be measured from the normal or cylindrical axis of the core. If n1 = 1.557 and n2 = 1.343, for example, the critical angle is 30.39 degrees.
Figure 3-2 shows a light ray entering the core from the outside air to the left of the cable. Light must enter the core from the air at an angle less than an entity known as the acceptance angle (θa):
θa = sin⁻¹[(n1/n0) sin(θc)]
In the formula, n0 is the refractive index of air and is equal to one. This angle is measured from the cylindrical axis of the core. In the preceding example, the acceptance angle is 51.96 degrees.
The optical fiber also has a numerical aperture (NA). The NA is given by the following formula:
NA = sin θa = √(n1² – n2²)
From a three-dimensional perspective, to ensure that the signals reflect and travel correctly through the core, the light must enter the core through an acceptance cone derived by rotating the acceptance angle about the cylindrical fiber axis. As illustrated in Figure 3-3, the size of the acceptance cone is a function of the refractive index difference between the core and the cladding. There is a maximum angle from the fiber axis at which light can enter the fiber so that it will propagate, or travel, in the core of the fiber. The sine of this maximum angle is the NA of the fiber. The NA in the preceding example is 0.787. Fiber with a larger NA requires less precision to splice and work with than fiber with a smaller NA. Single-mode fiber (SMF) has a smaller NA than multimode fiber (MMF).
Figure 3-3. Acceptance Cone
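The worked example (n1 = 1.557, n2 = 1.343) can be reproduced numerically. This is a sketch following the text's convention of measuring both angles from the cylindrical fiber axis; small differences in the last decimal place are rounding:

```python
import math

# Reproduce the chapter's worked example: n1 = 1.557 (core), n2 = 1.343
# (cladding), n0 = 1.0 (air). Angles measured from the cylindrical axis.
n0, n1, n2 = 1.0, 1.557, 1.343

critical_angle = math.degrees(math.acos(n2 / n1))  # θc = cos⁻¹(n2/n1)
acceptance_angle = math.degrees(
    math.asin((n1 / n0) * math.sin(math.radians(critical_angle))))
numerical_aperture = math.sqrt(n1**2 - n2**2)      # NA = √(n1² − n2²)

print(critical_angle)      # ≈ 30.40 (the text rounds to 30.39)
print(acceptance_angle)    # ≈ 51.98 (the text rounds to 51.96)
print(numerical_aperture)  # ≈ 0.788 (the text truncates to 0.787)
```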
Performance Considerations
The amount of light that can be coupled into the core through the external acceptance angle is directly proportional to the efficiency of the fiber-optic cable: the more light coupled into the core, the more light reaches the receiver, and the lower the bit error rate (BER). Likewise, the lower the attenuation a light ray experiences in propagating down the core, the more light reaches the receiver, and the lower the BER. Also, the less chromatic dispersion realized in propagating down the core, the faster the signaling rate and the higher the end-to-end data rate from source to destination. The major factors that affect these performance considerations are the size of the fiber, the composition of the fiber, and the mode of propagation.
Optical-Power Measurement
The power levels in optical communications span too wide a range to express conveniently on a linear scale, so a logarithmic unit known as the decibel (dB) is used to express power in optical communications.
This wide range of power values makes the decibel a convenient unit for expressing the power levels associated with an optical system. The gain of an amplifier or the attenuation in a fiber is expressed in decibels. The decibel does not give a magnitude of power; rather, it is the ratio of the output power to the input power.
Loss or gain = 10log10(POUTPUT/PINPUT)
The decibel milliwatt (dBm) is the power level related to 1 milliwatt (mW). Transmitter power and receiver dynamic ranges are measured in dBm. A 1-mW signal has a level of 0 dBm.
Signals weaker than 1 mW have negative dBm values, whereas signals stronger than 1 mW have positive dBm values.
dBm = 10log10(Power(mW)/1(mW))
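Both formulas can be sketched as small helper functions (the function names are illustrative):

```python
import math

def db(p_out_mw: float, p_in_mw: float) -> float:
    """Gain (positive) or loss (negative) in dB, as a ratio of powers."""
    return 10 * math.log10(p_out_mw / p_in_mw)

def dbm(power_mw: float) -> float:
    """Absolute power level in dBm, referenced to 1 mW."""
    return 10 * math.log10(power_mw / 1.0)

print(dbm(1.0))      # → 0.0  (a 1-mW signal is 0 dBm)
print(dbm(2.0))      # ≈ 3.01 (doubling the power adds about 3 dB)
print(db(0.5, 1.0))  # ≈ -3.01 (halving the power is about a 3-dB loss)
```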
Optical-Cable Construction
The core is the highly refractive central region of an optical fiber through which light is transmitted. The standard telecommunications core diameter in use with SMF is between 8 µm and 10 µm, whereas the standard core diameter in use with MMF is between 50 µm and 62.5 µm. Figure 3-4 shows the core diameter for SMF and MMF cable. The diameter of the cladding surrounding each of these cores is 125 µm. Core sizes of 85 µm and 100 µm were used in early applications, but are not typically used today. The core and cladding are manufactured together as a single solid component of glass with slightly different compositions and refractive indices. The third section of an optical fiber is the outer protective layer, known as the coating. The coating is typically an ultraviolet (UV) light-cured acrylate applied during the manufacturing process to provide physical and environmental protection for the fiber. The coating could also be constructed of one or more layers of polymer, nonporous hard elastomers, or high-performance PVC materials. The coating does not have any optical properties that might affect the propagation of light within the fiber-optic cable. During the installation process, this coating is stripped away from the cladding to allow proper termination to an optical transmission system. The coating size can vary, but the standard sizes are 250 µm and 900 µm. The 250-µm coating takes less space in larger outdoor cables. The 900-µm coating is larger and more suitable for smaller indoor cables.
Figure 3-4. Optical-Cable Construction
Fiber-optic cable sizes are usually expressed by first giving the core size followed by the cladding size. Consequently, 50/125 indicates a core diameter of 50 microns and a cladding diameter of 125 microns, and 8/125 indicates a core diameter of 8 microns and a cladding diameter of 125 microns. The larger the core, the more light can be coupled into it from the external acceptance angle cone. However, larger-diameter cores can actually allow in too much light, which can cause receiver saturation problems. The 8/125 cable is often used when a fiber-optic data link operates with single-mode propagation, whereas the 62.5/125 cable is often used in a fiber-optic data link that operates with multimode propagation.
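The core/cladding naming convention lends itself to a mechanical parse; this helper (the function name is hypothetical) is purely illustrative:

```python
def parse_fiber_size(spec: str):
    """Split a 'core/cladding' size string into micron values."""
    core, cladding = spec.split("/")
    return float(core), float(cladding)

print(parse_fiber_size("50/125"))  # → (50.0, 125.0) -- common multimode size
print(parse_fiber_size("8/125"))   # → (8.0, 125.0)  -- common single-mode size
```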
Three types of material make up fiber-optic cables:
Glass
Plastic
Plastic-clad silica (PCS)
These three cable types differ with respect to attenuation. Attenuation is principally caused by two physical effects: absorption and scattering. Absorption removes signal energy in the interaction between the propagating light (photons) and molecules in the core. Scattering redirects light out of the core to the cladding. When attenuation for a fiber-optic cable is dealt with quantitatively, it is referenced for operation at a particular optical wavelength, a window, where it is minimized. The most common peak wavelengths are 780 nm, 850 nm, 1310 nm, 1550 nm, and 1625 nm. The 850-nm region is referred to as the first window (as it was used initially because it supported the original LED and detector technology). The 1310-nm region is referred to as the second window, and the 1550-nm region is referred to as the third window.
Glass Fiber-Optic Cable
Glass fiber-optic cable has the lowest attenuation. A pure-glass, fiber-optic cable has a glass core and a glass cladding. This cable type has, by far, the most widespread use. It has been the most popular with link installers, and it is the type of cable with which installers have the most experience. The glass used in a fiber-optic cable is ultra-pure, ultra-transparent silicon dioxide, or fused quartz. During the glass fiber-optic cable fabrication process, impurities are purposely added to the pure glass to obtain the desired indices of refraction needed to guide light. Germanium, titanium, or phosphorous is added to increase the index of refraction. Boron or fluorine is added to decrease the index of refraction. Other impurities might remain in the glass cable after fabrication. These residual impurities can increase the attenuation by either scattering or absorbing light.
Plastic Fiber-Optic Cable
Plastic fiber-optic cable has the highest attenuation among the three types of cable. Plastic fiber-optic cable has a plastic core and cladding. This fiber-optic cable is quite thick. Typical dimensions are 480/500, 735/750, and 980/1000. The core generally consists of polymethylmethacrylate (PMMA) coated with a fluoropolymer. Plastic fiber-optic cable was pioneered principally for use in the automotive industry. The higher attenuation relative to glass might not be a serious obstacle with the short cable runs often required in premise data networks. The cost advantage of plastic fiber-optic cable is of interest to network architects when they are faced with budget decisions. Plastic fiber-optic cable does have a problem with flammability. Because of this, it might not be appropriate for certain environments, and care has to be taken when it is run through a plenum. Otherwise, plastic fiber is considered extremely rugged, with a tight bend radius and the capability to withstand abuse.
Plastic-Clad Silica (PCS) Fiber-Optic Cable
The attenuation of PCS fiber-optic cable falls between that of glass and plastic. PCS fiber-optic cable has a glass core, which is often vitreous silica, and the cladding is plastic, usually a silicone elastomer with a lower refractive index. PCS fabricated with a silicone elastomer cladding suffers from three major defects. First, it has considerable plasticity, which makes connector application difficult. Second, adhesive bonding is not possible. And third, it is practically insoluble in organic solvents. These three factors keep this type of fiber-optic cable from being particularly popular with link installers. However, some improvements have been made in recent years.
NOTE
For data center premise cables, the jacket color depends on the fiber type in the cable. For cables containing SMFs, the jacket color is typically yellow, whereas for cables containing MMFs, the jacket color is typically orange. For outside plant cables, the standard jacket color is typically black.
Multifiber Cable Systems
Multifiber systems are constructed with strength members that resist crushing during cable pulling and bends. The outer cable jackets are OFNR (riser rated), OFNP (plenum rated), or LSZH (low-smoke, zero-halogen rated). The OFNR outer jackets are composed of flame-retardant PVC or fluoropolymers. The OFNP jackets are composed of plenum PVC, whereas the LSZH jackets are halogen-free and constructed out of polyolefin compounds. Figure 3-5 shows a multiribbon, 24-fiber, ribbon-cable system. Ribbon cables are extensively used for inside plant and datacenter applications. Individual ribbon subunit cables use the MTP/MPO connector assemblies. Ribbon cables have a flat ribbon-like structure that enables installers to save conduit space as they install more cables in a particular conduit.
Figure 3-5. Inside Plant Ribbon-Cable System
Figure 3-6 shows a typical six-fiber, inside-plant cable system. The central core is composed of a dielectric strength member with a dielectric jacket. The individual fibers are positioned around the dielectric strength member. The individual fibers have a strippable buffer coating. Typically, the strippable buffer is a 900-µm tight buffer. Each individual coated fiber is surrounded with a subunit jacket. Aramid yarn strength members surround the individual subunits. Some cable systems have an outer strength member that provides protection to the entire enclosed fiber system. Kevlar is a typical material used for constructing the outer strength member for premise cable systems. The outer jacket is OFNP, OFNR, or LSZH.
Figure 3-6. Cross Section of Inside-Plant Cables
Figure 3-7 shows a typical armored outside-plant cable system. The central core is composed of a dielectric strength member with a dielectric jacket, or a steel strength member. The individual gel-filled subunit buffer tubes are positioned around the central strength member. Within each subunit buffer tube, six fibers are positioned around an optional dielectric strength member. The individual fibers have a strippable buffer coating. All six subunit buffer tubes are enclosed within a binder that contains an interstitial filling or water-blocking compound. An outer strength member, typically constructed of aramid (Kevlar) yarn, encloses the binder. The outer strength member is surrounded by an inner medium-density polyethylene (MDPE) jacket. The corrugated steel armor layer between the outer high-density polyethylene (HDPE) jacket and the inner MDPE jacket acts as an external strength member and provides physical protection. Conventional deep-water submarine cables use dual armor and a special hermetically sealed copper tube to protect the fibers from the effects of deep-water environments. However, shallow-water applications use cables similar to those shown in Figure 3-7 with an asphalt compound interstitial filling.
Figure 3-7. Cross Section of an Armored Outside-Plant Cable
Tuesday, December 11, 2007
ISDN
Integrated Services Digital Network (ISDN) is a digital system that allows voice and data to be transmitted simultaneously using end-to-end digital connectivity. ISDN allows multiple digital channels to be transmitted simultaneously over the same wiring infrastructure used for analog lines. Two kinds of channels are defined in ISDN. The B channel, or bearer channel, carries user traffic, whereas the D channel, or data channel, carries CCS signaling data. Each B channel has a bandwidth of 64 kbps, although some switches limit B channels to a capacity of 56 kbps. The D channel handles signaling at 16 kbps or 64 kbps, depending on the service type. The original recommendations for ISDN appeared in Consultative Committee for International Telegraph and Telephone (CCITT) Recommendation I.120 (1984), which described some initial guidelines for implementing ISDN. In North America, members of the industry agreed to create the National ISDN 1 (NI-1) standard as an interoperable ISDN standard. A more comprehensive standardization initiative, National ISDN 2 (NI-2), was later adopted. Two basic types of ISDN services are offered: basic rate interface (BRI) and primary rate interface (PRI).
ISDN BRI
ISDN BRI (2B+D) consists of two 64-kbps B channels and one 16-kbps D channel for a total of 144 kbps. BRI service is designed to meet the needs of most individual users. BRI ISDN can also use a channel-aggregation protocol, such as BONDING or Multilink PPP, to support an uncompressed data transfer speed of 128 kbps, plus bandwidth for overhead and signaling.
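The BRI figures above are simple arithmetic, sketched here for clarity:

```python
# BRI (2B+D) channel arithmetic from the text.
B_CHANNEL_KBPS = 64
BRI_D_CHANNEL_KBPS = 16

bri_total = 2 * B_CHANNEL_KBPS + BRI_D_CHANNEL_KBPS   # full BRI bandwidth
bri_user_data = 2 * B_CHANNEL_KBPS                    # aggregated B channels

print(bri_total)      # → 144 (kbps)
print(bri_user_data)  # → 128 (kbps, via BONDING or Multilink PPP)
```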
As illustrated in Figure 2-18, the U interface is a two-wire (single-pair) interface from the ISDN switch, the same physical interface provided for plain old telephone service (POTS) lines. It supports full-duplex data transfer over a single pair. Echo cancellation is used to reduce noise, and data-encoding schemes, such as 2 binary 1 quaternary (2B1Q) in North America and 4B3T in Europe, permit a relatively high data rate of 160 kbps over ordinary single-pair local loops.
Figure 2-18. ISDN Basic Rate Interface
The U interface is terminated with a network termination 1 (NT-1) device at the CPE end. North American carriers provide customers with a choice of U or S/T interfaces. EMEA and Asia Pacific phone companies supply NT1s, thereby providing their customers with an S/T interface. The ISDN NT-1 converts the two-wire U interface into the four-wire S/T interface. The S/T interface supports up to seven devices on the full-duplex S/T bus. The BRI NT-1 provides timing, multiplexing of the B and D channels, and power conversion.
Devices that connect to the S/T interface include ISDN-capable telephones, videoconferencing equipment, routers, and terminal adapters. All devices that are designed for ISDN are designated terminal equipment 1 (TE-1). All other communication devices that are not ISDN capable, but have an asynchronous serial (EIA-232) or POTS telephone interface—including ordinary analog telephones, modems, and terminals—are designated terminal equipment 2 (TE-2). A terminal adapter (TA) connects a TE-2 to the ISDN S/T bus. ISDN services can be deployed as OPX services by carriers and service providers that operate carrier class 5 switches capable of ISDN PRI and BRI services. BRI service is subject to a local-loop distance limitation of 18,000 feet (5.5 km) from the CO point of presence (POP). Repeater devices are required for distances exceeding this guideline.
ISDN PRI
ISDN PRI service is offered as T1/PRI or E1/PRI. T1/PRI (23B+D) has a channel structure that is 23 B channels plus one 64-kbps D channel for a total of 1536 kbps. In EMEA and the Asia Pacific, E1/PRI (30B+D) consists of 30 B channels plus one 64-kbps D channel for a total of 1984 kbps. It is also possible to support multiple PRI lines with one 64-kbps D channel using NFAS. H channels provide a way to aggregate B channels. They are implemented as follows:
H0 = 384 kbps (6 B channels)
H10 = 1472 kbps (23 B channels)
H11 = 1536 kbps (24 B channels)
H12 = 1920 kbps (30 B channels)
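The H-channel rates above are fixed multiples of the 64-kbps B channel, which a short sketch confirms:

```python
# H channels as aggregates of 64-kbps B channels, per the list in the text.
B_KBPS = 64
H_CHANNELS = {"H0": 6, "H10": 23, "H11": 24, "H12": 30}  # B channels per H channel

h_rates = {name: count * B_KBPS for name, count in H_CHANNELS.items()}
print(h_rates)  # → {'H0': 384, 'H10': 1472, 'H11': 1536, 'H12': 1920}
```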
ISDN PRI services are offered over a two-pair T1/PRI or E1/PRI unbalanced facility. As shown in Figure 2-19, in the case of ISDN PRI, the NT-1 is a CSU/DSU-like device, whereas the NT-2 devices provide customer premises switching, multiplexing, or other forms of concentration. If a device performs NT-1 and NT-2 functions, it might be referred to as an NT-12. The NT-2 device converts the T interface into the S interface. The ISDN S and T interfaces are electrically equivalent. The NT-2 communicates with terminal equipment, and handles the Layer 2 and 3 ISDN protocols. The U interface local loop connects to ISDN line-termination equipment that provides the LT function. The connection between switches within the phone network is called exchange termination (ET). The LT and ET functions communicate via the V interface.
Figure 2-19. ISDN Primary Rate Interface
ISDN Layer 1
The ITU I-Series and G-Series documents specify the ISDN physical layer. Echo cancellation is used to reduce noise, and data encoding schemes, such as 2B1Q and 4B3T, are used to encode data.
As illustrated in Figure 2-20, 2B1Q is the most common signaling method on U interfaces. In this method, each pair of binary digits represents four discrete amplitude and polarity values. This protocol is defined in detail in ANSI spec T1.601. In summary, 2B1Q provides 2 bits per baud (one baud = one modulation per second), which results in 80 kilobaud, or a transfer rate of 160 kbps. This means that the input voltage level can be one of four distinct levels, called quaternaries. Each quaternary represents 2 data bits, because there are 4 possible combinations of 2 bits, as shown in Figure 2-18. Each U interface frame is 240 bits long. At the prescribed data rate of 160 kbps, each frame is therefore 1.5 ms long. Each frame consists of 16 kbps of frame overhead, a 16-kbps D channel, and two B channels at 64 kbps each.
Figure 2-20. ISDN Layer 1
The Sync field consists of 9 quaternaries (2 bits each) in the quaternary symbolic pattern +3 +3 –3 –3 –3 +3 –3 +3 –3. The (B1 + B2 + D) represent 18 bits of data consisting of 8 bits from the first B channel, 8 bits from the second B channel, and 2 bits of D-channel data. The Maintenance field contains CRC information, block error detection flags, and embedded operator commands used for loopback testing without disrupting user data. Data is transmitted in a superframe consisting of 8 * 240-bit frames for a total of 1920 bits (240 octets). The Sync field of the first frame in the superframe is inverted (–3 –3 +3 +3 +3 –3 +3 –3 +3).
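The frame arithmetic above, together with the conventional 2B1Q bit-pair-to-quaternary mapping (first bit = sign, second bit = magnitude, per ANSI T1.601), can be sketched as follows; the function name is illustrative:

```python
# Conventional 2B1Q mapping: first bit gives the sign, second the magnitude.
QUATERNARY = {"10": +3, "11": +1, "01": -1, "00": -3}

def encode_2b1q(bits: str) -> list:
    """Encode an even-length bit string into 2B1Q quaternary symbols."""
    return [QUATERNARY[bits[i:i + 2]] for i in range(0, len(bits), 2)]

print(encode_2b1q("10110100"))  # → [3, 1, -1, -3]

# U-interface frame timing from the text: 240 bits per frame at 160 kbps.
frame_ms = 240 * 1000 / 160_000   # → 1.5 ms per frame
superframe_bits = 8 * 240         # → 1920 bits (240 octets) per superframe
print(frame_ms, superframe_bits)
```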
ISDN Layer 2
The ISDN data link layer is specified by the ITU Q-Series documents Q.920 through Q.923. All the signaling on the D channel is defined in the Q.921 spec. ISDN uses the Link Access Protocol - D channel (LAP-D) as its Layer 2 protocol. LAP-D is almost identical to the X.25 LAP-B protocol. Figure 2-21 shows the LAP-D frame format.
Figure 2-21. ISDN Layer 2
The Start Flag field is 1 octet long and its value is always 7E (hex) or 0111 1110 (binary). The Control field is 2 octets long and indicates the frame type (information, supervisory, or unnumbered) and sequence numbers (N(r) and N(s)). The Information field contains Layer 3 protocol information and user data. The CRC field is a 2-octet field that provides cyclic redundancy checks for bit errors on the user data. The End Flag field is also 1 octet long and its value is always set to 7E (hex) or 0111 1110 (binary).
The Address field contains the Service Access Point Identifier (SAPI) subfield, which is 6 bits wide; the C/R (command/response) bit, which indicates whether the frame is a command or a response; the EA0 (address extension) bit, which indicates whether this is the final octet of the address or not; the TEI (terminal endpoint identifier) 7-bit device identifier; and the EA1 (address extension) bit, which is similar to the EA0.
As detailed in Figure 2-21, the Service Access Point Identifier (SAPI) is a 6-bit field that identifies the point where Layer 2 provides a service to Layer 3. Terminal endpoint identifiers (TEIs) are unique IDs given to each device (TE) on an ISDN S/T bus. This identifier value could be dynamic, or the value can be assigned statically when the TE is installed.
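The Address field layout can be illustrated with a small parser; the function name and the sample values (SAPI 0, TEI 64) are hypothetical choices for the example:

```python
# Parse the 2-octet LAP-D Address field described above:
# octet 1 = SAPI (6 bits) | C/R | EA0, octet 2 = TEI (7 bits) | EA1.
def parse_lapd_address(octet1: int, octet2: int) -> dict:
    return {
        "sapi": octet1 >> 2,        # upper 6 bits: service access point
        "cr":   (octet1 >> 1) & 1,  # command/response bit
        "ea0":  octet1 & 1,         # 0: address continues into the next octet
        "tei":  octet2 >> 1,        # upper 7 bits: terminal endpoint identifier
        "ea1":  octet2 & 1,         # 1: final address octet
    }

# Example: SAPI 0 (call control), command frame, TEI 64.
addr = parse_lapd_address(0b00000000, 0b10000001)
print(addr)  # → {'sapi': 0, 'cr': 0, 'ea0': 0, 'tei': 64, 'ea1': 1}
```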
ISDN Link-Layer Establishment
The following steps are used to establish Layer 2 communication between ISDN devices:
The TE and the network initially exchange receive ready (RR) frames, listening for someone to initiate a connection.
The TE sends an unnumbered information (UI) frame with a SAPI of 63 (management procedure, query network) and TEI of 127 (broadcast).
The network assigns an available TEI (in the range 64 to 126).
The TE sends a set asynchronous balanced mode (SABME) frame with a SAPI of 0 (call control, used to initiate a setup) and a TEI of the value assigned by the network.
The network responds with an unnumbered acknowledgement (UA), SAPI = 0, TEI = assigned.
The Layer 2 connection is now ready for a Layer 3 setup.
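The establishment steps above can be sketched as a message trace; the frame names and the TEI range come from the steps, while the trace structure itself is illustrative:

```python
import random

def assign_tei() -> int:
    """Network assigns an available TEI from the dynamic range 64-126."""
    return random.randint(64, 126)

trace = [("TE", {"frame": "UI", "sapi": 63, "tei": 127})]  # broadcast query
tei = assign_tei()
trace.append(("NET", {"frame": "TEI assignment", "tei": tei}))
trace.append(("TE", {"frame": "SABME", "sapi": 0, "tei": tei}))  # initiate setup
trace.append(("NET", {"frame": "UA", "sapi": 0, "tei": tei}))    # link ready
print(trace)
```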
ISDN Layer 3
The ISDN network layer is also specified by the ITU Q-Series documents Q.930 through Q.939. Layer 3 is used for the establishment, maintenance, and termination of logical network connections between two devices. Service profile IDs (SPIDs) are used to identify what services and features the ISDN switch provides to the attached ISDN device.
NOTE
The reader must not confuse the ISDN Layer 3 with Layer 3 of the OSI model. Protocols, such as ISDN and X.25, have their own Layer 3. Network layer protocols, such as IP, perceive such protocol stacks as the data link layer.
SPIDs are accessed at device initialization prior to call setup. The SPID is usually the 10-digit phone number of the ISDN line along with a prefix or suffix. The suffix is also known as a tag identifier (TID). SPIDs are used to identify features on the line, but in reality they can be whatever the carrier decides the value(s) should be. If an ISDN line requires a SPID, but it is not correctly supplied, Layer 2 initialization will take place, but Layer 3 will not, and the device will not be able to place or accept calls. ITU spec Q.932 provides greater details on SPIDs.
The Information field is a variable-length field that contains the Q.931 protocol data. Figure 2-22 describes the various subfields contained in the Information field. The following fields are contained in the Q.931 header:
Protocol Discriminator (1 octet)— Identifies the Layer 3 protocol. If this is a Q.931 header, this value is always 08 (hex).
Length (1 octet)— Indicates the length of the next field, the CRV.
Call Reference Value (CRV) (1 or 2 octets)— Used to uniquely identify each call on the user-network interface. This value is assigned at call setup, and this value becomes available for another call when the call is cleared.
Message Type (1 octet)— Identifies the message type (setup, connect, and so forth). This determines what additional information is required and allowed.
Mandatory and Optional Information Elements (variable length)— Are options that are set depending on the message type.
Figure 2-22. ISDN Layer 3
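A minimal decode of this header might look as follows; the parser assumes a well-formed message, and the sample bytes (CRV 1, SETUP message type 05 hex) are illustrative:

```python
# Decode the Q.931 header fields listed above.
Q931_PROTOCOL_DISCRIMINATOR = 0x08

def parse_q931(data: bytes) -> dict:
    assert data[0] == Q931_PROTOCOL_DISCRIMINATOR, "not a Q.931 message"
    crv_len = data[1]                              # length of the CRV (1 or 2)
    crv = int.from_bytes(data[2:2 + crv_len], "big")
    message_type = data[2 + crv_len]               # setup, connect, and so on
    return {"crv": crv, "message_type": message_type}

msg = parse_q931(bytes([0x08, 0x01, 0x01, 0x05]))
print(msg)  # → {'crv': 1, 'message_type': 5}
```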
ISDN Call Setup
The following steps are used to establish ISDN calls from an ISDN Layer 3 perspective:
Caller sends a setup to the ISDN switch.
If the setup is okay, the switch sends a call proceeding to the caller, and a setup to the receiver.
The receiver gets the setup. If it is okay, it sends an alerting message to the switch.
The switch forwards the alerting message to the caller.
When the receiver answers the call, it sends a connect message to the switch.
The switch forwards the connect message to the caller.
The caller sends a connect acknowledge message to the switch.
The switch forwards the connect ack message to the receiver.
The call is now set up.
TDM Network Elements
A variety of TDM-based network elements are used to build TDM systems. Some of these elements are discussed in this section. Common handoff to optical systems takes place at the DS1/DS3 levels in the case of the T-carrier, and E1/E3 levels in the case of the E-carrier. Note that the various individual network elements presented in this section, such as repeaters, CSUs/DSUs, DACS, and channel banks, are commercially available as integrated units supporting a wide variety of low- and high-speed interfaces, encoding, signaling, and protocols. The TDM network elements integrate to form the digital loop carrier (DLC) supporting various TDM architectures and topologies.
Repeaters
Repeaters are four-wire T1/E1 unbalanced amplifiers and signal processors for use on T1 or E1 lines. Repeaters are used to extend in-house T1/E1 lines in campus and high-rise environments. Repeaters can also be used to extend the distance between any T1/E1 equipment, such as DSUs, channel banks, and routers with built-in CSU/DSUs. A pair of repeaters can be located up to 5000 feet apart. Solid copper 22 AWG two-twisted-pair is the preferred cable for connection between repeaters. Smaller wire sizes will reduce the functional distance between the repeater pairs. Connection is made through RJ-45 modular connectors or four-wire, screw-down barrier strips. Both types of connectors are commonly used standards.
CSU/DSU
Channel service units/digital service units are essentially CPE multiplexers that can assign channels or time slots to a circuit. For example, a 256-kbps circuit will have four time slots assigned to it (N1 to N4). Each of these time slots is 64 kbps. Most CSUs/DSUs also support 56-kbps time slots. Some CSUs/DSUs are equipped with multiple ports. This enables the user to allocate time slots to each physical port that might be attached to routers or other CPE equipment. For example, a CSU/DSU connected to a 256-kbps line could assign N1 to port 1 and N2+N3+N4 to port 2. This would allocate 64-kbps bandwidth to the CPE device attached to port 1, and 192 kbps to the CPE device attached to port 2. Another function supported by CSUs/DSUs is the drop and insert function. Drop and insert is used to terminate one or more DS0 channels of a T1 at the digital RS-530/V.35 interface of the FT. One or more of the remaining DS0 channels can be passed on to other equipment, typically a system using voice lines. For example, a single 112-kbps channel (56 kbps * 2) might be dropped off to support a router, and up to 22 of the remaining DS0 channels passed on to a private branch exchange (PBX) for voice lines. CSU/DSU devices are regarded as a demarcation point by some carriers. In such a case, the carrier would own and manage the CSU/DSU, permitting them to perform loopback tests in the event of local loop circuit outages.
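The port/time-slot allocation example above can be sketched as follows (the function and port names are illustrative):

```python
# A 256-kbps circuit carries four 64-kbps time slots (N1-N4), which a
# multiport CSU/DSU can split across its physical ports.
SLOT_KBPS = 64

def port_bandwidth(slot_assignment: dict) -> dict:
    """Map each port to its total bandwidth, given its list of time slots."""
    return {port: len(slots) * SLOT_KBPS for port, slots in slot_assignment.items()}

bw = port_bandwidth({"port1": ["N1"], "port2": ["N2", "N3", "N4"]})
print(bw)  # → {'port1': 64, 'port2': 192}
```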
Digital Access and Cross-Connect Systems
The modern DACS is truly an integrated access device (IAD) that integrates channel bank cross-connect and multiplexer functionality in one device. DACS cross-connect functionality enables carriers to physically wire user circuits and electrically groom these 64-kbps voice or data circuits to higher T1 or E1 levels. The higher-level T1 or E1s can be groomed into DS3s or E3s for back-haul to a carrier class 5 switch. The time-slot interchange (digital cross-connect) functionality of a DACS enables you to assign DS0s to higher-level T1/E1 circuits in any order you want. It also enables you to assign the order of T1/E1s within a DS3/E3. A DACS also enables you to perform T1-to-E1 format conversion. Most DACSs support console, Telnet, and Simple Network Management Protocol (SNMP) for configuration, maintenance, performance monitoring, and administration.
Channel Bank
Channel banks are devices implemented at a CO (public exchange) that convert analog signals from home and business users into digital signals to be carried over higher-speed lines between the CO and other exchanges. The analog signal is converted into a digital signal that transmits at a 64-kbps rate. The 64-kbps signal is multiplexed with other DS0 signals on the same line using TDM techniques to higher T1/E1 levels. Channel banks offer foreign exchange office (FXO), foreign exchange subscriber (FXS), special access office (SAO), dial pulse originating (DPO), dial pulse terminating (DPT), equalized transmission only (ETO), transmission only (TO), and pulse link repeater (PLR) facilities.
Integrated Services Digital Network (ISDN) is a digital system that allows voice and data to be transmitted simultaneously using end-to-end digital connectivity. ISDN allows multiple digital channels to be transmitted simultaneously over the same wiring infrastructure used for analog lines. Two kinds of channels are defined in ISDN. The B channel or bearer channel carries user traffic, whereas the D channel or data channel carries CCS signaling data. B channels operate at 64 kbps, although some switches limit B channels to a capacity of 56 kbps. The D channel handles signaling at 16 kbps or 64 kbps, depending on the service type. ISDN was originally described in Consultative Committee for International Telegraph and Telephone (CCITT) Recommendation I.120 (1984), which laid out initial guidelines for implementing ISDN. In North America, members of the industry agreed to create the National ISDN 1 (NI-1) standard as an interoperable ISDN standard. A more comprehensive standardization initiative, National ISDN 2 (NI-2), was later adopted. Two basic types of ISDN service are offered: basic rate interface (BRI) and primary rate interface (PRI).
ISDN BRI
ISDN BRI (2B+D) consists of two 64-kbps B channels and one 16-kbps D channel for a total of 144 kbps. BRI service is designed to meet the needs of most individual users. With a channel-aggregation protocol, such as BONDING or Multilink PPP, BRI supports an uncompressed data transfer rate of 128 kbps, plus bandwidth for overhead and signaling.
As illustrated in Figure 2-18, the U interface is a two-wire (single-pair) interface from the ISDN switch, the same physical interface provided for plain old telephone service (POTS) lines. It supports full-duplex data transfer over a single pair. Echo cancellation is used to reduce noise, and data-encoding schemes, such as 2 binary 1 quaternary (2B1Q) in North America and 4B3T in Europe, permit a relatively high data rate of 160 kbps over ordinary single-pair local loops.
Figure 2-18. ISDN Basic Rate Interface
The U interface is terminated with a network termination 1 (NT-1) device at the CPE end. North American carriers provide customers with a choice of U or S/T interfaces. EMEA and Asia Pacific phone companies supply NT1s, thereby providing their customers with an S/T interface. The ISDN NT-1 converts the two-wire U interface into the four-wire S/T interface. The S/T interface supports up to seven devices on the full-duplex S/T bus. The BRI NT-1 provides timing, multiplexing of the B and D channels, and power conversion.
Devices that connect to the S/T interface include ISDN-capable telephones, videoconferencing equipment, routers, and terminal adapters. All devices that are designed for ISDN are designated terminal equipment 1 (TE-1). All other communication devices that are not ISDN capable, but have an asynchronous serial (EIA-232) or POTS telephone interface—including ordinary analog telephones, modems, and terminals—are designated terminal equipment 2 (TE-2). A terminal adapter (TA) connects a TE-2 to the ISDN S/T bus. ISDN services can be deployed as OPX services by carriers and service providers that operate carrier class 5 switches capable of ISDN PRI and BRI services. BRI service is subject to a local loop distance limitation of 18,000 feet (5.5 km) from the CO point of presence (POP); repeater devices are required for distances exceeding this guideline.
ISDN PRI
ISDN PRI service is offered as T1/PRI or E1/PRI. T1/PRI (23B+D) has a channel structure of 23 B channels plus one 64-kbps D channel for a total of 1536 kbps. In EMEA and the Asia Pacific, E1/PRI (30B+D) consists of 30 B channels plus one 64-kbps D channel for a total of 1984 kbps. It is also possible to support multiple PRI lines with one 64-kbps D channel using non-facility associated signaling (NFAS). H channels provide a way to aggregate B channels. They are implemented as follows:
H0 = 384 kbps (6 B channels)
H10 = 1472 kbps (23 B channels)
H11 = 1536 kbps (24 B channels)
H12 = 1920 kbps (30 B channels)
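The H-channel rates follow directly from the 64-kbps B channel; a quick check of the arithmetic (an illustrative sketch):

```python
# Each H channel aggregates n B channels of 64 kbps each.
B_CHANNEL_KBPS = 64

h_channels = {"H0": 6, "H10": 23, "H11": 24, "H12": 30}  # B channels per H channel

for name, n in h_channels.items():
    print(f"{name}: {n * B_CHANNEL_KBPS} kbps")
# H0 = 384, H10 = 1472, H11 = 1536, H12 = 1920
```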
ISDN PRI services are offered over a four-wire (two-pair) balanced T1/PRI or E1/PRI facility. As shown in Figure 2-19, in the case of ISDN PRI, the NT-1 is a CSU/DSU-like device, whereas the NT-2 devices provide customer premises switching, multiplexing, or other forms of concentration. If a device performs NT-1 and NT-2 functions, it might be referred to as an NT-12. The NT-2 device converts the T interface into the S interface. The ISDN S and T interfaces are electrically equivalent. The NT-2 communicates with terminal equipment, and handles the Layer 2 and 3 ISDN protocols. The U interface local loop connects to ISDN line-termination equipment that provides the LT function. The connection between switches within the phone network is called exchange termination (ET). The LT and ET functions communicate via the V interface.
Figure 2-19. ISDN Primary Rate Interface
ISDN Layer 1
The ITU I-Series and G-Series documents specify the ISDN physical layer. Echo cancellation is used to reduce noise, and data encoding schemes, such as 2B1Q and 4B3T, are used to encode data.
As illustrated in Figure 2-20, 2B1Q is the most common signaling method on U interfaces. In this method, each pair of binary digits represents one of four discrete amplitude and polarity values. This protocol is defined in detail in ANSI spec T1.601. In summary, 2B1Q provides 2 bits per baud (1 baud = one symbol per second), which results in 80 kilobaud, or a transfer rate of 160 kbps. This means that the line voltage can be at one of four distinct levels. These levels are called quaternaries. Each quaternary represents 2 data bits, because there are 4 possible ways to represent 2 bits, as shown in Figure 2-18. Each U interface frame is 240 bits long. At the prescribed data rate of 160 kbps, each frame is therefore 1.5 ms long. Each frame consists of 16 kbps of frame overhead, a 16-kbps D channel, and two B channels at 64 kbps each.
Figure 2-20. ISDN Layer 1
The Sync field consists of 9 quaternaries (2 bits each) in the quaternary symbolic pattern +3 +3 –3 –3 –3 +3 –3 +3 –3. The (B1 + B2 + D) represent 18 bits of data consisting of 8 bits from the first B channel, 8 bits from the second B channel, and 2 bits of D-channel data. The Maintenance field contains CRC information, block error detection flags, and embedded operator commands used for loopback testing without disrupting user data. Data is transmitted in a superframe consisting of 8 * 240-bit frames for a total of 1920 bits (240 octets). The Sync field of the first frame in the superframe is inverted (–3 –3 +3 +3 +3 –3 +3 –3 +3).
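The 2B1Q mapping and rate arithmetic described above can be sketched in a few lines (illustrative only; the bit-pair-to-quaternary table follows ANSI T1.601, where the first bit of each pair carries the sign and the second the magnitude):

```python
# 2B1Q: each 2-bit pair maps to one of four line levels (quaternaries).
QUAT = {"10": +3, "11": +1, "01": -1, "00": -3}

def encode_2b1q(bits: str):
    """Map a bit string (even length) to a sequence of quaternary levels."""
    assert len(bits) % 2 == 0
    return [QUAT[bits[i:i + 2]] for i in range(0, len(bits), 2)]

print(encode_2b1q("10110100"))  # [3, 1, -1, -3]

# Rate check: 160 kbps at 2 bits per symbol -> 80 kilobaud.
print(160_000 // 2)  # 80000 baud

# Frame duration: 240 bits at 160 bits per millisecond -> 1.5 ms per U frame.
print(240 / 160)     # 1.5
```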
ISDN Layer 2
The ISDN data link layer is specified by the ITU Q-Series documents Q.920 through Q.923. All the signaling on the D channel is defined in the Q.921 spec. ISDN uses the Link Access Protocol - D channel (LAP-D) as its Layer 2 protocol. LAP-D is almost identical to the X.25 LAP-B protocol. Figure 2-21 shows the LAP-D frame format.
Figure 2-21. ISDN Layer 2
The Start Flag field is 1 octet long and its value is always 7E (hex) or 0111 1110 (binary). The Control field is 2 octets long and indicates the frame type (information, supervisory, or unnumbered) and sequence numbers (N(r) and N(s)). The Information field contains Layer 3 protocol information and user data. The CRC field is a 2-octet field that provides cyclic redundancy checks for bit errors on the user data. The End Flag field is also 1 octet long and its value is always set to 7E (hex) or 0111 1110 (binary).
The Address field contains the Service Access Point Identifier (SAPI) subfield, which is 6 bits wide; the C/R (command/response) bit, which indicates whether the frame is a command or a response; the EA0 (address extension) bit, which indicates whether this is the final octet of the address or not; the TEI (terminal endpoint identifier) 7-bit device identifier; and the EA1 (address extension) bit, which is similar to the EA0.
As detailed in Figure 2-21, the Service Access Point Identifier (SAPI) is a 6-bit field that identifies the point where Layer 2 provides a service to Layer 3. Terminal endpoint identifiers (TEIs) are unique IDs given to each device (TE) on an ISDN S/T bus. This identifier value could be dynamic, or the value can be assigned statically when the TE is installed.
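The two-octet Address field layout just described can be decoded with simple shifts and masks (an illustrative sketch of the Q.921 octet layout: SAPI in the top 6 bits of the first octet, TEI in the top 7 bits of the second):

```python
def parse_lapd_address(octet1: int, octet2: int):
    """Decode the two-octet LAP-D Address field."""
    return {
        "sapi": octet1 >> 2,        # top 6 bits: Service Access Point Identifier
        "cr":   (octet1 >> 1) & 1,  # command/response bit
        "ea0":  octet1 & 1,         # address extension (0 = another octet follows)
        "tei":  octet2 >> 1,        # top 7 bits: terminal endpoint identifier
        "ea1":  octet2 & 1,         # address extension (1 = final octet)
    }

# SAPI 0 (call control) with the broadcast TEI of 127:
print(parse_lapd_address(0b00000000, 0b11111111))
```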
ISDN Link-Layer Establishment
The following steps are used to establish Layer 2 communication between ISDN devices:
The TE and the network initially exchange receive ready (RR) frames, listening for someone to initiate a connection.
The TE sends an unnumbered information (UI) frame with a SAPI of 63 (management procedure, query network) and TEI of 127 (broadcast).
The network assigns an available TEI (in the range 64 to 126).
The TE sends a set asynchronous balanced mode (SABME) frame with a SAPI of 0 (call control, used to initiate a setup) and a TEI of the value assigned by the network.
The network responds with an unnumbered acknowledgement (UA), SAPI = 0, TEI = assigned.
The Layer 2 connection is now ready for a Layer 3 setup.
ISDN Layer 3
The ISDN network layer is also specified by the ITU Q-Series documents Q.930 through Q.939. Layer 3 is used for the establishment, maintenance, and termination of logical network connections between two devices. Service profile IDs (SPIDs) are used to identify what services and features the ISDN switch provides to the attached ISDN device.
NOTE
Do not confuse ISDN Layer 3 with Layer 3 of the OSI model. Protocols such as ISDN and X.25 have their own Layer 3. Network layer protocols, such as IP, treat such protocol stacks as the data link layer.
SPIDs are accessed at device initialization prior to call setup. The SPID is usually the 10-digit phone number of the ISDN line along with a prefix or suffix. The suffix is also known as a tag identifier (TID). SPIDs are used to identify features on the line, but in reality they can be whatever the carrier decides the value(s) should be. If an ISDN line requires a SPID, but it is not correctly supplied, Layer 2 initialization will take place, but Layer 3 will not, and the device will not be able to place or accept calls. ITU spec Q.932 provides greater details on SPIDs.
The Information field is a variable-length field that contains the Q.931 protocol data. Figure 2-22 describes the various subfields contained in the Information field. The following fields are contained in the Q.931 header:
Protocol Discriminator (1 octet)— Identifies the Layer 3 protocol. For a Q.931 header, this value is always 08 (hex).
Length (1 octet)— Indicates the length of the next field, the CRV.
Call Reference Value (CRV) (1 or 2 octets)— Used to uniquely identify each call on the user-network interface. This value is assigned at call setup, and this value becomes available for another call when the call is cleared.
Message Type (1 octet)— Identifies the message type (setup, connect, and so forth). This determines what additional information is required and allowed.
Mandatory and Optional Information Elements (variable length)— Are options that are set depending on the message type.
Figure 2-22. ISDN Layer 3
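A minimal parser for the fixed part of the Q.931 header just described (an illustrative sketch; the example message below uses 0x05, the Q.931 SETUP message type):

```python
def parse_q931(msg: bytes):
    """Parse the fixed Q.931 header fields from a raw D-channel message."""
    assert msg[0] == 0x08, "not a Q.931 Protocol Discriminator"
    crv_len = msg[1] & 0x0F                      # length of the Call Reference Value
    crv = int.from_bytes(msg[2:2 + crv_len], "big")
    msg_type = msg[2 + crv_len]                  # setup, connect, and so forth
    return {"crv": crv, "message_type": msg_type}

# Protocol Discriminator 0x08, 1-octet CRV of 0x2A, message type 0x05 (SETUP):
print(parse_q931(bytes([0x08, 0x01, 0x2A, 0x05])))
# {'crv': 42, 'message_type': 5}
```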
ISDN Call Setup
The following steps are used to establish ISDN calls from an ISDN Layer 3 perspective:
Caller sends a setup to the ISDN switch.
If the setup is okay, the switch sends a call proceeding to the caller, and a setup to the receiver.
The receiver gets the setup. If it is okay, it sends an alerting message to the switch.
The switch forwards the alerting message to the caller.
When the receiver answers the call, it sends a connect message to the switch.
The switch forwards the connect message to the caller.
The caller sends a connect acknowledge message to the switch.
The switch forwards the connect ack message to the receiver.
The call is now set up.
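The steps above can be written out as an ordered message exchange (purely illustrative; the message names follow Q.931):

```python
# Each tuple: (sender, receiver, Q.931 message)
CALL_SETUP_FLOW = [
    ("caller",   "switch",   "SETUP"),
    ("switch",   "caller",   "CALL PROCEEDING"),
    ("switch",   "receiver", "SETUP"),
    ("receiver", "switch",   "ALERTING"),
    ("switch",   "caller",   "ALERTING"),
    ("receiver", "switch",   "CONNECT"),
    ("switch",   "caller",   "CONNECT"),
    ("caller",   "switch",   "CONNECT ACK"),
    ("switch",   "receiver", "CONNECT ACK"),
]

for sender, receiver, msg in CALL_SETUP_FLOW:
    print(f"{sender:>8} -> {receiver:<8} {msg}")
```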
TDM Network Elements
A variety of TDM-based network elements are used to build TDM systems. Some of these elements are discussed in this section. Common handoff to optical systems takes place at the DS1/DS3 levels in the case of the T-carrier, and E1/E3 levels in the case of the E-carrier. Note that the various individual network elements presented in this section, such as repeaters, CSUs/DSUs, DACS, and channel banks, are commercially available as integrated units supporting a wide variety of low- and high-speed interfaces, encoding, signaling, and protocols. The TDM network elements integrate to form the digital loop carrier (DLC) supporting various TDM architectures and topologies.
Repeaters
Repeaters are four-wire T1/E1 amplifiers and signal processors for use on T1 or E1 lines. Repeaters are used to extend in-house T1/E1 lines in campus and high-rise environments. Repeaters can also be used to extend the distance between any T1/E1 equipment, such as DSUs, channel banks, and routers with built-in CSUs/DSUs. A pair of repeaters can be located up to 5000 feet apart. Solid-copper 22 AWG two-twisted-pair is the preferred cable for connection between repeaters. Smaller wire sizes reduce the functional distance between the repeater pairs. Connection is made through RJ-45 modular connectors or four-wire, screw-down barrier strips. Both types of connectors are commonly used standards.
CSU/DSU
Channel service units/digital service units are essentially CPE multiplexers that can assign channels or time slots to a circuit. For example, a 256-kbps circuit will have four time slots assigned to it (N1 to N4). Each of these time slots is 64 kbps. Most CSUs/DSUs also support 56-kbps time slots. Some CSUs/DSUs are equipped with multiple ports. This enables the user to allocate time slots to each physical port that might be attached to routers or other CPE equipment. For example, a CSU/DSU connected to a 256-kbps line could assign N1 to port 1 and N2+N3+N4 to port 2. This would allocate 64-kbps bandwidth to the CPE device attached to port 1, and 192 kbps to the CPE device attached to port 2. Another function supported by CSUs/DSUs is the drop and insert function. Drop and insert is used to terminate one or more DS0 channels of a T1 at the digital RS-530/V.35 interface of the FT. One or more of the remaining DS0 channels can be passed on to other equipment, typically a system using voice lines. For example, a single 112-kbps channel (56 kbps * 2) might be dropped off to support a router, and up to 22 of the remaining DS0 channels passed on to a private branch exchange (PBX) for voice lines. CSU/DSU devices are regarded as a demarcation point by some carriers. In such a case, the carrier would own and manage the CSU/DSU, permitting them to perform loopback tests in the event of local loop circuit outages.
Digital Access and Cross-Connect Systems
The modern DACS is truly an integrated access device (IAD) that integrates channel bank, cross-connect, and multiplexer functionality in one device. DACS cross-connect functionality enables carriers to physically wire user circuits and electrically groom these 64-kbps voice or data circuits to higher T1 or E1 levels. The higher-level T1s or E1s can be groomed into DS3s or E3s for backhaul to a carrier class 5 switch. The time-slot interchange (digital cross-connect) functionality of a DACS enables you to assign DS0s to higher-level T1/E1 circuits in any order you want. It also enables you to assign the order of T1/E1s within a DS3/E3. A DACS also enables you to perform T1-to-E1 format conversion. Most DACSs support console, Telnet, and Simple Network Management Protocol (SNMP) access for configuration, maintenance, performance monitoring, and administration.
Channel Bank
Channel banks are devices implemented at a CO (public exchange) that convert analog signals from home and business users into digital signals to be carried over higher-speed lines between the CO and other exchanges. The analog signal is converted into a digital signal that transmits at a 64-kbps rate. The 64-kbps signal is multiplexed with other DS0 signals on the same line using TDM techniques to higher T1/E1 levels. Channel banks offer foreign exchange office (FXO), foreign exchange subscriber (FXS), special access office (SAO), dial pulse originating (DPO), dial pulse terminating (DPT), equalized transmission only (ETO), transmission only (TO), and pulse link repeater (PLR) facilities.
E&T-Carrier
The T-Carrier
The North American DS1 consists of 24 DS0 channels that are multiplexed. The signal is referred to as DS1, whereas the transmission channel over the copper-based facility is called a T1 circuit. The T-carrier is used in the United States, Canada, Korea, Hong Kong, and Taiwan.
TDM circuits typically use multiplexers, such as channel service units/digital service units (CSUs/DSUs) or channel banks, at the CPE (customer premises equipment) side, and they use larger programmable multiplexers, such as DACS and channel banks, at the carrier end. The T-carrier system is entirely digital, using PCM and TDM. The system uses four wires and provides duplex capability. The four-wire facility was originally a pair of twisted-pair copper wires, but can now also include coaxial cable, optical fiber, digital microwave, and other media. A number of variations on the number and use of channels are possible. The T-carrier hierarchy used in North America is shown in Table 2-1 and illustrated in Figure 2-9. The DS1C, DS2, and DS4 levels are not commercially used. The SONET Synchronous Transport Signal (STS) levels have largely replaced the DS levels above DS3.
Figure 2-9. T-Carrier Multiplexed Hierarchy
Table 2-1. T-Carrier Hierarchy

Digital Signal Level    Number of 64-kbps Channels    Equivalent    Bandwidth
DS0                     1                             1 * DS0       64 kbps
DS1                     24                            24 * DS0      1.544 Mbps
DS1C                    48                            2 * DS1       3.152 Mbps
DS2                     96                            4 * DS1       6.312 Mbps
DS3                     672                           28 * DS1      44.736 Mbps
DS4                     4032                          6 * DS3       274.176 Mbps
NOTE
Some TDM systems use 8 kbps for in-band signaling. This results in a net bandwidth of only 56 kbps per channel. Japan uses the North American standards for DS0 through DS2, but the Japanese DS5 has roughly the circuit capacity of a U.S. DS4.
DS Framing
The DS1 frame of Figure 2-10 is composed of 24 DS0 (8-bit) channels, plus 1 framing bit, which adds up to 193 bits. The DS1 signal transports 8000 frames per second, which results in 193 * 8000 bits per second or 1,544,000 bps (1.544 Mbps). The first bit (bit 1), or F bit, is used for frame alignment, performance-monitoring cyclic redundancy check (CRC), and data linkage. The remaining 192 bits provide 24 8-bit time slots numbered from 1 to 24.
Figure 2-10. DS Frame
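The DS1 rate arithmetic above can be checked directly (a quick sketch):

```python
CHANNELS = 24          # DS0 time slots per DS1 frame
BITS_PER_SLOT = 8
FRAMES_PER_SEC = 8000  # one frame every 125 microseconds

frame_bits = CHANNELS * BITS_PER_SLOT + 1            # +1 framing (F) bit = 193 bits
line_rate = frame_bits * FRAMES_PER_SEC              # 1,544,000 bps
payload = CHANNELS * BITS_PER_SLOT * FRAMES_PER_SEC  # 1,536,000 bps of user channels

print(frame_bits, line_rate, payload)  # 193 1544000 1536000
```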
DS systems use alternate mark inversion (AMI) or binary 8 zero substitution (B8ZS) for line encoding. In AMI, every other 1 is a different polarity, and the encoding mechanism does not maintain a "1s density." In B8ZS, the encoding mechanism uses intentional bipolar violation to maintain a "1s density." Bipolar violations are two "1s" of the same polarity. T1 physical delivery is over two-pair copper wires—one pair for RX (1+2) and one pair for TX (4+5). For the CPE, RX means data from the network, whereas TX means data to the network.
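The two line codes can be sketched as follows (an illustrative implementation, not production DSP; pulses are represented as +1/-1, and the line is assumed to start as if the last pulse were negative):

```python
def ami_encode(bits):
    """Alternate mark inversion: each 1 alternates polarity; 0s send no pulse."""
    out, last = [], -1
    for b in bits:
        if b:
            last = -last
            out.append(last)
        else:
            out.append(0)
    return out

def b8zs_encode(bits):
    """AMI, but each run of eight 0s becomes 000VB0VB (two deliberate bipolar
    violations) so the receiver never loses clock on long strings of 0s."""
    out, last, i = [], -1, 0
    while i < len(bits):
        if bits[i:i + 8] == [0] * 8:
            # V = same polarity as the last pulse (violation); B = normal alternation
            out += [0, 0, 0, last, -last, 0, -last, last]
            i += 8
        else:
            if bits[i]:
                last = -last
                out.append(last)
            else:
                out.append(0)
            i += 1
    return out

print(ami_encode([1, 0, 1, 1]))                      # [1, 0, -1, 1]
print(b8zs_encode([1, 0, 0, 0, 0, 0, 0, 0, 0]))      # [1, 0, 0, 0, 1, -1, 0, -1, 1]
```

Note that the substituted pattern sums to zero, so the line stays DC balanced despite the two violations.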
DS Multiframing Formats
Two kinds of multiframing techniques are used for DS-level transmissions:
D4 or superframe (SF)
D5 or extended superframe (ESF)
D4 multiframing typically uses AMI encoding, whereas ESF uses B8ZS encoding. However, B8ZS line coding could be used with D4 framing as well as ESF. The multiplexer (mux) terminating the T1 usually determines the multiframing option.
D4 Superframe
In the original D4 (SF) standard, the framing bits continuously repeated the sequence 110111001000. In voice telephony, errors are acceptable, and early standards allow as much as one frame in six to be missing entirely. As shown in Figure 2-11, the SF (D4) frame has 12 frames and uses the least significant bit (LSB) in frames 6 and 12 for signaling (A, B bits). This method of in-band signaling is called robbed-bit signaling. Each frame has 24 channels of 64 kbps. Within an SF, F bits delineate the basic frames within the multiframe. In channel-associated signaling, bits are robbed from time slots to carry signaling messages. Figure 2-11 shows the D4 SF format.
Figure 2-11. D4 SF Format
D5 Extended Superframe
To promote error-free transmission, an alternative called the D5 or extended superframe (ESF) format of 24 frames was developed. As shown in Figure 2-12, the ESF frame has 24 frames and uses the LSB in frames 6, 12, 18, and 24 for signaling (A, B, C, D bits). Each frame has 24 channels of 64 kbps. In this standard, 6 of the 24 framing bits provide a 6-bit cyclic redundancy check (CRC-6), and 6 provide the actual framing. The other 12 form a 4-kbps data link for use by the transmission equipment. DS1 signals using ESF equipment are nearly error free, because the CRC detects errors and allows automatic rerouting of connections. Within an ESF, the F bits provide basic frame and multiframe delineation, performance monitoring through CRC-6-based error detection, a 4-kbps data link to transfer priority operations messages, and other maintenance or operations messages. The F bits also provide periodic terminal performance reports, or an idle sequence. In CAS, bits are robbed from time slots to carry signaling messages. Figure 2-12 shows the ESF format.
Figure 2-12. ESF Format
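The ESF framing-bit budget described above can be checked with a little arithmetic: the single F bit per frame amounts to 8 kbps, which ESF divides among CRC-6, framing, and the data link (a sketch):

```python
F_BIT_RATE = 8000   # one F bit per frame at 8000 frames/s -> 8 kbps total
FRAMES_PER_ESF = 24

# Of the 24 F-bit positions in each extended superframe:
crc_bits, framing_bits, datalink_bits = 6, 6, 12

print(crc_bits / FRAMES_PER_ESF * F_BIT_RATE)       # 2000.0 bps for CRC-6
print(framing_bits / FRAMES_PER_ESF * F_BIT_RATE)   # 2000.0 bps for framing
print(datalink_bits / FRAMES_PER_ESF * F_BIT_RATE)  # 4000.0 bps data link
```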
SF and ESF Alarms
It is important to understand D4 and ESF alarm conditions, in order to interpret the behavior of a TDM transmission system on the CPE as well as on the network side. The alarms listed here are commonly used with CPE equipment, such as CSUs/DSUs, T1 repeaters, DACS devices, and multiplexers.
AIS (alarm indication signal)— The AIS is also known as a "Keep Alive" or "Blue Alarm" signal. This consists of an unframed, all-1s signal sent to maintain transmission continuity. The AIS carrier failure alarm (CFA) signal is declared when both the AIS state and red CFA persist simultaneously.
OOF (out-of-frame)— The OOF condition occurs whenever network or DTE equipment senses errors in the incoming framing pattern. Depending upon the equipment, this can occur when 2 of 4, 2 of 5, or 3 of 5 framing bits are in error. A reframe clears the OOF condition.
Red CFA (carrier failure alarm)— This CFA occurs after detection of a continuous OOF condition for 2.5 seconds. This alarm state is cleared when no OOF conditions occur for at least 1000 milliseconds. Some applications (certain DACS services) might not clear the CFA state for up to 15 seconds of no OOF occurrences.
Yellow CFA (carrier failure alarm)— When a device enters the red CFA state, it transmits a "yellow alarm" in the opposite direction. A yellow alarm is transmitted by setting bit 2 of each time slot to a 0 (zero) space state for D4-framed facilities. For ESF facilities, a yellow alarm is transmitted by sending a repetitive 16-bit pattern consisting of 8 marks (1) followed by 8 spaces (0) in the data-link bits. This is transmitted for a minimum of 1 second.
LOS (loss of signal)— A LOS condition is declared when no pulses have been detected in a 175 +/– 75 pulse window (100 to 250 bit times).
The E-Carrier
The basic unit of the E-carrier system is the 64-kbps DS0, which is multiplexed to form transmission formats with higher speeds. The E1 consists of 32 DS0 channels. The E-carrier is a European digital transmission format devised by the International Telecommunication Union Telecommunication Standardization Sector (ITU-T) and given its name by the Conference of European Postal and Telecommunication Administration (CEPT). E2 through E5 are carriers in increasing multiples of the E1 format. The E1 signal format carries data at a rate of 2.048 Mbps and can carry 32 channels of 64 kbps each. Unlike T1, E1 does not rob bits; all 8 bits per channel are used to code the signal. E1 and T1 can be interconnected for international use. The E-carrier hierarchy used in EMEA, Latin America, South Asia, and the Asia Pacific region is shown in Table 2-2 and illustrated in Figure 2-13. The E2, E4, and E5 levels are not commercially used. The Synchronous Digital Hierarchy (SDH) levels have largely replaced the E-carrier levels above E3.
Figure 2-13. E-Carrier Multiplexed Hierarchy
Table 2-2. E-Carrier Hierarchy

Digital Signal Level    Number of 64-kbps Channels    Equivalent    Bandwidth
E1                      32                            32 * DS0      2.048 Mbps
E2                      128                           4 * E1        8.448 Mbps
E3                      512                           4 * E2        34.368 Mbps
E4                      2048                          4 * E3        139.264 Mbps
E5                      8192                          4 * E4        565.148 Mbps
As depicted in Figure 2-14, a 2.048-Mbps basic frame comprises 256 bits numbered from 1 to 256. These bits provide 32 8-bit time slots numbered from 0 to 31. The first time slot is a framing time slot used for frame alignment, performance monitoring (CRC), and data linkage. Time slot 0 carries framing information in a frame alignment signal as well as remote alarm notification, 5 national bits, and optional CRC bits. Time slot 16 is a signaling time slot and carries signaling information out of band. However, time slot 16 can carry data as well.
Figure 2-14. E1 Frame Structure
Like all basic frames used in telecommunications, the E1 basic frame lasts 125 microseconds. The full E1 bit rate is 2.048 Mbps. We calculate this bit rate by multiplying the 256-bit E1 frame by 8000 frames per second. Subtracting time slots 0 and 16, we see that E1 lines offer 30 time slots to carry user data, or a payload-carrying capacity of 1.920 Mbps.
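The E1 arithmetic above can be checked directly (a quick sketch):

```python
SLOTS = 32
BITS_PER_SLOT = 8
FRAMES_PER_SEC = 8000  # one 125-microsecond frame per sampling interval

line_rate = SLOTS * BITS_PER_SLOT * FRAMES_PER_SEC      # 2,048,000 bps
payload = (SLOTS - 2) * BITS_PER_SLOT * FRAMES_PER_SEC  # minus TS0 and TS16

print(line_rate, payload)  # 2048000 1920000
```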
E1 uses AMI or high-density bipolar 3 (HDB3) for line encoding. In AMI, every other 1 is a different polarity, and the encoding mechanism does not maintain a "1s density." AMI is used to represent successive 1s' values in a bit stream with alternating positive and negative pulses to eliminate any direct current (DC) offset.
NOTE
AMI is not used in most 2.048-Mbps transmission systems because synchronization can be lost during long strings of 0s, which produce no pulses.
In HDB3, every other 1 is a different polarity and the encoding mechanism uses a bipolar violation to maintain a "1s density." The HDB3 coded signal does not have a DC component. Therefore, the signal can be transmitted through balanced transformer-coupled circuits. The clock recovery circuits of the receivers can operate well, even though the data contains long strings of 0s.
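An HDB3 encoder can be sketched as follows (illustrative only, not production code; pulses are represented as +1/-1, and the line is assumed to start as if the last pulse were negative):

```python
def hdb3_encode(bits):
    """HDB3: substitute each run of four 0s with 000V (odd number of pulses
    since the last substitution) or B00V (even number). V repeats the
    preceding pulse's polarity (a deliberate bipolar violation); B follows
    normal AMI alternation. Alternating the two forms keeps successive
    violations opposite in polarity, so the line stays DC balanced."""
    out, last, pulses, i = [], -1, 0, 0
    while i < len(bits):
        if bits[i:i + 4] == [0, 0, 0, 0]:
            if pulses % 2:
                out += [0, 0, 0, last]       # 000V
            else:
                last = -last
                out += [last, 0, 0, last]    # B00V (V matches B's polarity)
            pulses = 0
            i += 4
        else:
            if bits[i]:
                last = -last
                out.append(last)
                pulses += 1
            else:
                out.append(0)
            i += 1
    return out

print(hdb3_encode([1, 0, 0, 0, 0]))  # [1, 0, 0, 0, 1] -- 000V; V violates AMI
```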
Balanced E1 physical delivery is over two-pair copper wires with 120-ohm line impedance—one pair for RX (1+2) and one pair for TX (4+5). For the CPE, RX means data from the network, whereas TX means data to the network. Unbalanced E1 physical delivery is over a pair of 75-ohm coaxial cables. One coax is used for TX, whereas the other one is for RX.
E1 Frame Alignment Signal (FAS)
Framing is necessary so that any equipment receiving the E1 signal can synchronize, identify, and extract the individual channels. The 2.048-Mbps E1 frame consists of 32 individual time slots (numbered 0 through 31). Each time slot consists of an individual 64-kbps channel of data. Time slot 0 of every even frame is reserved for the FAS. As shown in Figure 2-15, odd frames carry the NFAS word, which contains the distant alarm indication bit and other bits reserved for national and international use. Thirty-one time slots remain for bearer channels, into which customer data can be placed.
Figure 2-15. E1 Frame Alignment Signal
E1 MultiFrame Alignment Signal (MFAS)
Sixteen E1 consecutive frames form a new structure called an E1 multiframe. The frames in a multiframe are numbered 0 to 15. Multiframe structure is used for two purposes: CAS signaling and CRC. Each of these modes is independent from the use of the other. CAS is carried in time slot 16, and CRC is carried in time slot 0. The purpose of the multiframe is to have sufficient overhead bits to support two key functions in time slot 16, which carries signaling information when an E1 is transmitting digital voice streams. MFAS framing is used for CAS to transmit ABCD bit information for each of the 30 channels, as illustrated in Figure 2-16.
Figure 2-16. E1 Multiframe Alignment Signal
This method uses the 32 time slot frame format with time slot 0 for the FAS and time slot 16 for the MFAS and CAS. When a PCM-30 multiframe is transmitted, 16 FAS frames are assembled together. Time slot 16 of the first frame is dedicated to MFAS bits, and time slot 16 of the remaining 15 frames is dedicated to ABCD bits.
E1 CRC Error Checking
A cyclic redundancy check-4 (CRC-4) is often used in E1 transmission to identify possible bit errors during in-service error monitoring. CRC-4 is a checksum calculation that allows for the detection of errors within the 2.048-Mbps signal while it is in service. A discrepancy indicates at least one bit error in the received signal. The equipment that originates the E1 data calculates the CRC-4 bits for one submultiframe. It inserts the CRC-4 bits in the CRC-4 positions in the next submultiframe.
The receiving equipment performs the reverse mathematical computation on the submultiframe. It examines the CRC-4 bits that were transmitted in the next submultiframe. It then compares the transmitted CRC-4 bits to the calculated value. If there is a discrepancy in the two values, a CRC-4 error is reported via E-bits indication. Each individual CRC-4 error does not necessarily correspond to a single bit error, which is a drawback. Multiple bit errors within the same submultiframe will lead to only one CRC-4 error for the block.
Errors could occur such that the new CRC-4 bits are calculated to be the same as the original CRC-4 bits. CRC-4 error checking provides a convenient method of identifying bit errors within an in-service system, but only provides an approximate measure (93.75 percent accuracy) of the circuit's true performance. Consider the MFAS framing shown in Figure 2-17. Each MFAS frame can be divided into "submultiframes." These are labeled SMF1 and SMF2, and consist of eight frames apiece. We associate 4 bits of CRC information with each submultiframe. The CRC-4 bits are calculated for each submultiframe, buffered, and then inserted into the following submultiframe to be transmitted.
Figure 2-17. E1 CRC Error Checking
ITU-T specifications G.704 and G.706 define the CRC-4 cyclic redundancy check for enhanced error monitoring on the E1 line.
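The CRC-4 calculation itself is plain polynomial long division with the G.704 generator x^4 + x + 1. A minimal sketch (illustrative, operating on small bit lists rather than real 2048-bit submultiframes):

```python
POLY = 0b10011  # x^4 + x + 1, the G.704 CRC-4 generator polynomial

def crc4(bits):
    """Return the 4 CRC bits for a message (list of 0/1 values), computed as
    the remainder of message(x) * x^4 divided by the generator polynomial."""
    buf = list(bits) + [0, 0, 0, 0]     # append 4 zero bits (multiply by x^4)
    for i in range(len(bits)):          # straightforward long division
        if buf[i]:
            for j in range(5):
                buf[i + j] ^= (POLY >> (4 - j)) & 1
    return buf[-4:]                     # the 4-bit remainder

msg = [1, 0, 1, 1, 0, 0, 1, 0]
r = crc4(msg)
print(r)                                # [0, 1, 1, 1]
# The receiver's check: a message followed by its CRC divides evenly.
print(crc4(msg + r))                    # [0, 0, 0, 0]
```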
E1 Errors and Alarms
It is important to understand E1 error and alarm conditions, in order to interpret the behavior of a TDM transmission system on the CPE as well as on the network side. The alarms listed here are commonly used with CPE equipment such as CSUs/DSUs, E1 repeaters, DCS devices, and multiplexers:
Alarm indication signal (AIS)— Alarm indication signal is an unframed, all-1s signal.
Background block error (BBE)— A background block error is an error block (a block is a set of consecutive bits associated with a path) that does not occur as part of a severely errored second (SES).
Bit errors— Bit errors are bits that are in error. Bit errors are not counted during unavailable time.
Bit slip— A bit slip occurs when the synchronized pattern either loses a bit or has an extra bit stuffed into it.
Clock slips— Clock slips occur when the measured frequency deviates from the reference frequency by a one-unit interval.
Code errors— A code error is a violation of the coding rules: two successive pulses with the same polarity. In HDB3 coding, a code error is a bipolar violation that is not part of a valid HDB3 substitution.
Cyclic redundancy check (CRC) errors— CRC-4 block errors. This measurement applies to signals containing a CRC-4 check sequence.
Degraded minutes— A degraded minute (DM) occurs when there is a 10 to 6 or worse bit error rate during 60 available, nonseverely bit-errored seconds.
Errored block— A block in which one or more bits are in error.
E-bit indication— An E-bit is transmitted by the receiving equipment after detecting a CRC-4 error.
Errored second (ES)— An errored second is any second in which one or more bits are in error. An errored second is not counted during an unavailable second. For G.826, an errored second contains one or more blocks with at least one defect.
Frame alarm (FALM)— Frame alarm seconds is a count of seconds that have had far-end frame alarm (FAS remote alarm indication [RAI]), which is when a 1 is transmitted in every third bit of each time slot 0 frame that does not contain the FAS.
Frame alignment signal (FAS)— A count of the bit errors in the frame alignment signal words received. It applies to both PCM-30 and PCM-31 framing.
Frequency— Any variance from 2.048 Mbps in the received frequency is recorded in hertz or parts per million.
Loss of frame seconds (LOFS)— Loss of frame seconds is a count of seconds since the beginning of the test that have experienced a loss of frame.
Loss of signal seconds (LOSS)— Loss of signal seconds is a count of the number of seconds during which the signal has been lost during the test.
Multiframe alarm (MFAL)— Multiframe alarm seconds is a count of seconds that have had far-end multiframe alarm (MFAS RAI).
Multiframe alignment signal (MFAS) distant alarm— In this alarm, a 1 is transmitted in every sixth bit of each time slot 16 in the 0 frame.
Severely errored second (SES)— A severely errored second has an error rate of 10-3 or higher. Severely errored seconds are not counted during unavailable time. For G.826 block measurements, an SES is a 1-second period containing 30 percent or greater errored blocks.
Time slot 16 AIS— In this alarm, all 1s are transmitted in time slot 16 of all frames.
Unavailable seconds (UAS)— Unavailable time begins at the onset of 10 consecutive severely errored seconds. Unavailable seconds also begin at a loss of signal or loss of frame.
Wander— This is the total positive or negative phase difference between the measured frequency and the reference frequency. The +wander value increases whenever the measured frequency is one unnumbered information (UI) frame larger than the reference frequency. The –wander increases whenever the measured frequency is one UI frame less than the reference frequency.
NOTE
The following ITU-T recommendations are commonly used with TDM systems: G.703, physical/electrical characteristics of hierarchical digital interfaces; G.704, synchronous frame structures used at 1544, 6312, 2048, 8488, and 44,736 kbps; G.706, frame alignment and CRC procedures relating to basic frame structures defined in Recommendation G.704; G.711, PCM of voice frequencies.
The North American DS1 consists of 24 DS0 channels that are multiplexed. The signal is referred to as DS1, whereas the transmission channel over the copper-based facility is called a T1 circuit. The T-carrier is used in the United States, Canada, Korea, Hong Kong, and Taiwan.
TDM circuits typically use multiplexers, such as channel service units/digital service units (CSUs/DSUs) or channel banks at the CPE (customer premises equipment) side, and they use larger programmable multiplexers, such as DACS and channel banks, at the carrier end. The T-carrier system is entirely digital, using PCM and TDM. The system uses four wires and provides duplex capability. The four-wire facility was originally a pair of twisted-pair copper wires, but can now also include coaxial cable, optical fiber, digital microwave, and other media. A number of variations on the number and use of channels are possible. The T-carrier hierarchy used in North America is shown in Table 2-1 and illustrated in Figure 2-9. The DS1C, DS2, and DS4 levels are not commercially used. The SONET Synchronous Transport Signal (STS) levels have largely replaced the DS levels above DS3.
Figure 2-9. T-Carrier Multiplexed Hierarchy
[View full size image]
Table 2-1. T-Carrier Hierarchy

Digital Signal Level   Number of 64 kbps Channels   Equivalent   Bandwidth
DS0                    1                            1 * DS0      64 kbps
DS1                    24                           24 * DS0     1.544 Mbps
DS1C                   48                           2 * DS1      3.152 Mbps
DS2                    96                           4 * DS1      6.312 Mbps
DS3                    672                          28 * DS1     44.736 Mbps
DS4                    4032                         6 * DS3      274.176 Mbps
NOTE
Some TDM systems use 8 kbps of each 64-kbps channel for in-band signaling. This results in a net bandwidth of only 56 kbps per channel. Japan uses the North American standards for DS0 through DS2, but the Japanese DS5 has roughly the circuit capacity of a U.S. DS4.
DS Framing
The DS1 frame of Figure 2-10 is composed of 24 DS0 (8-bit) channels, plus 1 framing bit, which adds up to 193 bits. The DS1 signal transports 8000 frames per second, which results in 193 * 8000 bits per second or 1,544,000 bps (1.544 Mbps). The first bit (bit 1), or F bit, is used for frame alignment, performance-monitoring cyclic redundancy check (CRC), and data linkage. The remaining 192 bits provide 24 8-bit time slots numbered from 1 to 24.
Figure 2-10. DS Frame
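The frame arithmetic is easy to verify. The following sketch simply restates the numbers from the text in code:

```python
channels = 24             # DS0 time slots per DS1 frame
bits_per_channel = 8      # one 8-bit PCM sample per time slot
framing_bits = 1          # the F bit
frames_per_second = 8000  # one frame every 125 microseconds

frame_bits = channels * bits_per_channel + framing_bits
ds1_rate = frame_bits * frames_per_second

print(frame_bits)  # 193 bits per frame
print(ds1_rate)    # 1544000 bps (1.544 Mbps)
```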
DS systems use alternate mark inversion (AMI) or binary 8-zero substitution (B8ZS) for line encoding. In AMI, successive 1s alternate in polarity, and the encoding mechanism does not maintain a "1s density." In B8ZS, the encoding mechanism uses intentional bipolar violations (two successive pulses of the same polarity) to maintain a "1s density." T1 physical delivery is over two-pair copper wires—one pair for RX (1+2) and one pair for TX (4+5). For the CPE, RX means data from the network, whereas TX means data to the network.
DS Multiframing Formats
Two kinds of multiframing techniques are used for DS-level transmissions:
D4 or superframe (SF)
D5 or extended superframe (ESF)
D4 multiframing typically uses AMI encoding, whereas ESF uses B8ZS encoding. However, B8ZS line coding could be used with D4 framing as well as ESF. The multiplexer (mux) terminating the T1 usually determines the multiframing option.
D4 Superframe
In the original D4 (SF) standard, the framing bits continuously repeated the sequence 110111001000. In voice telephony, errors are acceptable, and early standards allow as much as one frame in six to be missing entirely. As shown in Figure 2-11, the SF (D4) frame has 12 frames and uses the least significant bit (LSB) in frames 6 and 12 for signaling (A, B bits). This method of in-band signaling is called robbed-bit signaling. Each frame has 24 channels of 64 kbps. Within an SF, F bits delineate the basic frames within the multiframe. In channel-associated signaling, bits are robbed from time slots to carry signaling messages. Figure 2-11 shows the D4 SF format.
Figure 2-11. D4 SF Format
D5 Extended Superframe
To promote error-free transmission, an alternative called the D5 or extended superframe (ESF) format of 24 frames was developed. As shown in Figure 2-12, the ESF has 24 frames and uses the LSB in frames 6, 12, 18, and 24 for signaling (A, B, C, D bits). Each frame has 24 channels of 64 kbps. In this standard, 6 of the 24 framing bits provide a 6-bit cyclic redundancy check (CRC-6), and 6 provide the actual framing. The other 12 form a 4-kbps data link for use by the transmission equipment, for call progress signals such as busy, idle, and ringing. DS1 signals using ESF equipment are nearly error free, because the CRC detects errors and allows automatic rerouting of connections. Within an ESF, the F bits provide basic frame and multiframe delineation, performance monitoring through CRC-6-based error detection, a 4-kbps data link to transfer priority operations messages, and other maintenance or operations messages. The F bits also provide periodic terminal performance reports, or an idle sequence. In CAS, bits are robbed from time slots to carry signaling messages. Figure 2-12 shows the ESF format.
Figure 2-12. ESF Format
[View full size image]
SF and ESF Alarms
It is important to understand D4 and ESF alarm conditions, in order to interpret the behavior of a TDM transmission system on the CPE as well as on the network side. The alarms listed here are commonly used with CPE equipment, such as CSUs/DSUs, T1 repeaters, DACS devices, and multiplexers.
AIS (alarm indication signal)— The AIS is also known as a "Keep Alive" or "Blue Alarm" signal. This consists of an unframed, all-1s signal sent to maintain transmission continuity. The AIS carrier failure alarm (CFA) signal is declared when both the AIS state and red CFA persist simultaneously.
OOF (out-of-frame)— The OOF condition occurs whenever network or DTE equipment senses errors in the incoming framing pattern. Depending upon the equipment, this can occur when 2 of 4, 2 of 5, or 3 of 5 framing bits are in error. A reframe clears the OOF condition.
Red CFA (carrier failure alarm)— This CFA occurs after detection of a continuous OOF condition for 2.5 seconds. This alarm state is cleared when no OOF conditions occur for at least 1000 milliseconds. Some applications (certain DACS services) might not clear the CFA state for up to 15 seconds of no OOF occurrences.
Yellow CFA (carrier failure alarm)— When a device enters the red CFA state, it transmits a "yellow alarm" in the opposite direction. A yellow alarm is transmitted by setting bit 2 of each time slot to a 0 (zero) space state for D4-framed facilities. For ESF facilities, a yellow alarm is transmitted by sending a repetitive 16-bit pattern consisting of 8 marks (1) followed by 8 spaces (0) in the data-link bits. This is transmitted for a minimum of 1 second.
LOS (loss of signal)— A LOS condition is declared when no pulses have been detected in a 175 +/– 75 pulse window (100 to 250 bit times).
The E-Carrier
The basic unit of the E-carrier system is the 64-kbps DS0, which is multiplexed to form transmission formats with higher speeds. The E1 consists of 32 DS0 channels. The E-carrier is a European digital transmission format devised by the International Telecommunication Union Telecommunication Standardization Sector (ITU-T) and given its name by the Conference of European Postal and Telecommunication Administrations (CEPT). E2 through E5 are carriers in increasing multiples of the E1 format. The E1 signal format carries data at a rate of 2.048 Mbps and can carry 32 channels of 64 kbps each. Unlike T1, E1 does not rob bits; all 8 bits per channel are used to code the signal. E1 and T1 can be interconnected for international use. The E-carrier hierarchy used in EMEA, Latin America, South Asia, and the Asia Pacific region is shown in Table 2-2 and illustrated in Figure 2-13. The E2, E4, and E5 levels are not commercially used. The Synchronous Digital Hierarchy (SDH) levels have largely replaced the E-carrier levels above E3.
Figure 2-13. E-Carrier Multiplexed Hierarchy
[View full size image]
Table 2-2. E-Carrier Hierarchy

Digital Signal Level   Number of 64 kbps Channels   Equivalent   Bandwidth
E1                     32                           32 * DS0     2.048 Mbps
E2                     128                          4 * E1       8.448 Mbps
E3                     512                          4 * E2       34.368 Mbps
E4                     2048                         4 * E3       139.264 Mbps
E5                     8192                         4 * E4       565.148 Mbps
As depicted in Figure 2-14, a 2.048-Mbps basic frame consists of 256 bits, numbered 1 to 256. These bits provide 32 8-bit time slots, numbered 0 to 31. The first time slot (time slot 0) is a framing time slot used for frame alignment, performance monitoring (CRC), and data linkage. Time slot 0 carries framing information in a frame alignment signal, as well as remote alarm notification, 5 national bits, and optional CRC bits. Time slot 16 is a signaling time slot and carries signaling information out of band; however, time slot 16 could carry data as well.
Figure 2-14. E1 Frame Structure
Like all basic frames used in telecommunications, the E1 basic frame lasts 125 microseconds. The full E1 bit rate is 2.048 Mbps. We calculate this bit rate by multiplying the 32-octet E1 frame by 8000 frames per second. Subtracting time slots 0 and 16, we see that E1 lines offer 30 time slots to carry user data or a payload-carrying capacity of 1.920 Mbps.
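The same calculation in code, using only the figures from the text:

```python
time_slots = 32           # time slots 0-31, 8 bits each
frames_per_second = 8000  # one 125-microsecond frame per sample period

e1_rate = time_slots * 8 * frames_per_second              # full line rate
payload_rate = (time_slots - 2) * 8 * frames_per_second   # minus slots 0 and 16

print(e1_rate)       # 2048000 bps (2.048 Mbps)
print(payload_rate)  # 1920000 bps (1.920 Mbps)
```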
E1 uses AMI or high-density bipolar 3 (HDB3) for line encoding. In AMI, successive 1s alternate in polarity, and the encoding mechanism does not maintain a "1s density." AMI represents successive 1s in a bit stream with alternating positive and negative pulses to eliminate any direct current (DC) offset.
NOTE
AMI is not used in most 2.048-Mbps transmission systems because synchronization can be lost during long strings of 0s, which contain no pulses.
In HDB3, every other 1 is a different polarity and the encoding mechanism uses a bipolar violation to maintain a "1s density." The HDB3 coded signal does not have a DC component. Therefore, the signal can be transmitted through balanced transformer-coupled circuits. The clock recovery circuits of the receivers can operate well, even though the data contains long strings of 0s.
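As an illustration of these rules, here is a hypothetical HDB3 encoder sketch (not taken from the text; initial-polarity conventions vary between implementations). A run of four 0s becomes 000V or B00V, where V is a violation pulse that repeats the preceding pulse's polarity and B is a normal balancing pulse:

```python
def hdb3_encode(bits):
    """Sketch of HDB3 line coding. Output symbols are +1, -1, or 0.
    A run of four 0s is replaced by 000V (odd number of pulses since the
    last violation) or B00V (even number), so that violation pulses
    alternate polarity and the line stays DC-free."""
    out = []
    last_pulse = -1      # polarity of the most recent pulse on the line
    pulses_since_v = 0   # pulses sent since the last violation
    i = 0
    while i < len(bits):
        if bits[i] == 1:
            last_pulse = -last_pulse          # normal AMI alternation
            out.append(last_pulse)
            pulses_since_v += 1
            i += 1
        elif bits[i:i + 4] == [0, 0, 0, 0]:
            if pulses_since_v % 2 == 0:
                last_pulse = -last_pulse      # B: balancing pulse
                out.extend([last_pulse, 0, 0, last_pulse])
            else:
                out.extend([0, 0, 0, last_pulse])   # 000V
            pulses_since_v = 0
            i += 4
        else:
            out.append(0)
            i += 1
    return out

print(hdb3_encode([1, 0, 0, 0, 0]))  # [1, 0, 0, 0, 1]: V repeats the +1 polarity
```

Note how the final pulse deliberately violates AMI: it has the same polarity as the pulse before it, which is what lets the receiver recognize the substitution.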
Balanced E1 physical delivery is over two-pair copper wires with 120-ohm line impedance—one pair for RX (1+2) and one pair for TX (4+5). For the CPE, RX means data from the network, whereas TX means data to the network. Unbalanced E1 physical delivery is over a pair of 75-ohm coaxial cables. One coax is used for TX, whereas the other one is for RX.
E1 Frame Alignment Signal (FAS)
Framing is necessary so that any equipment receiving the E1 signal can synchronize with, identify, and extract the individual channels. The 2.048-Mbps E1 frame consists of 32 individual time slots (numbered 0 through 31). Each time slot carries an individual 64-kbps channel of data. Time slot 0 of every even frame is reserved for the FAS. As shown in Figure 2-15, odd frames carry the NFAS word, which contains the distant alarm indication bit and other bits reserved for national and international use. Thirty-one time slots remain for bearer channels, into which customer data can be placed.
Figure 2-15. E1 Frame Alignment Signal
[View full size image]
E1 MultiFrame Alignment Signal (MFAS)
Sixteen consecutive E1 frames form a new structure called an E1 multiframe. The frames in a multiframe are numbered 0 to 15. The multiframe structure is used for two purposes: CAS signaling and CRC. Each mode is used independently of the other. CAS is carried in time slot 16, and CRC is carried in time slot 0. The purpose of the multiframe is to provide sufficient overhead bits to support two key functions in time slot 16, which carries signaling information when an E1 is transmitting digital voice streams. MFAS framing is used for CAS to transmit ABCD bit information for each of the 30 channels, as illustrated in Figure 2-16.
Figure 2-16. E1 Multiframe Alignment Signal
[View full size image]
This method uses the 32 time slot frame format with time slot 0 for the FAS and time slot 16 for the MFAS and CAS. When a PCM-30 multiframe is transmitted, 16 FAS frames are assembled together. Time slot 16 of the first frame is dedicated to MFAS bits, and time slot 16 of the remaining 15 frames is dedicated to ABCD bits.
E1 CRC Error Checking
A cyclic redundancy check-4 (CRC-4) is often used in E1 transmission to identify possible bit errors during in-service error monitoring. CRC-4 is a checksum calculation that allows for the detection of errors within the 2.048-Mbps signal while it is in service. A discrepancy indicates at least one bit error in the received signal. The equipment that originates the E1 data calculates the CRC-4 bits for one submultiframe. It inserts the CRC-4 bits in the CRC-4 positions in the next submultiframe.
The receiving equipment performs the same computation on the submultiframe. It examines the CRC-4 bits that were transmitted in the next submultiframe and compares the transmitted CRC-4 bits to the calculated value. If there is a discrepancy between the two values, a CRC-4 error is reported via the E-bit indication. A drawback is that each individual CRC-4 error does not necessarily correspond to a single bit error; multiple bit errors within the same submultiframe lead to only one CRC-4 error for the block.
Errors could also occur such that the new CRC-4 bits are calculated to be the same as the original CRC-4 bits. CRC-4 error checking therefore provides a convenient method of identifying bit errors within an in-service system, but only an approximate measure (93.75 percent accuracy) of the circuit's true performance. Consider the MFAS framing shown in Figure 2-17. Each MFAS multiframe can be divided into two "submultiframes," labeled SMF1 and SMF2, each consisting of eight frames. Four bits of CRC information are associated with each submultiframe. The CRC-4 bits are calculated for each submultiframe, buffered, and then inserted into the following submultiframe to be transmitted.
Figure 2-17. E1 CRC Error Checking
[View full size image]
ITU-T specifications G.704 and G.706 define the CRC-4 cyclic redundancy check for enhanced error monitoring on the E1 line.
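The CRC-4 calculation itself is ordinary binary polynomial division with the G.704 generator x^4 + x + 1. The following is an illustrative sketch, not code from the standard; in a real E1 stream the dividend would be the 2048-bit submultiframe with its four CRC positions set to 0:

```python
def crc4(bits):
    """Remainder of bits (MSB first) times x^4, divided by x^4 + x + 1 (binary 10011)."""
    reg = 0
    for b in bits + [0, 0, 0, 0]:      # appending 4 zeros multiplies by x^4
        reg = (reg << 1) | b
        if reg & 0x10:                 # degree reached 4: subtract the generator
            reg ^= 0b10011
    return reg                         # 4-bit remainder

# Property check: a message with its CRC appended divides evenly.
msg = [1, 0, 1, 1, 0, 0, 1, 0]        # arbitrary example bits (hypothetical data)
r = crc4(msg)
r_bits = [(r >> i) & 1 for i in (3, 2, 1, 0)]
print(crc4(msg + r_bits))  # 0
```

This zero-remainder property is exactly what the receiver exploits: it repeats the division and reports a CRC-4 error whenever the remainder is nonzero.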
E1 Errors and Alarms
It is important to understand E1 error and alarm conditions, in order to interpret the behavior of a TDM transmission system on the CPE as well as on the network side. The alarms listed here are commonly used with CPE equipment such as CSUs/DSUs, E1 repeaters, DCS devices, and multiplexers:
Alarm indication signal (AIS)— Alarm indication signal is an unframed, all-1s signal.
Background block error (BBE)— A background block error is an error block (a block is a set of consecutive bits associated with a path) that does not occur as part of a severely errored second (SES).
Bit errors— Bit errors are bits that are in error. Bit errors are not counted during unavailable time.
Bit slip— A bit slip occurs when the synchronized pattern either loses a bit or has an extra bit stuffed into it.
Clock slips— Clock slips occur when the measured frequency deviates from the reference frequency by a one-unit interval.
Code errors— A code error is a violation of the coding rules: two successive pulses with the same polarity. In HDB3 coding, a code error is a bipolar violation that is not part of a valid HDB3 substitution.
Cyclic redundancy check (CRC) errors— CRC-4 block errors. This measurement applies to signals containing a CRC-4 check sequence.
Degraded minutes— A degraded minute (DM) occurs when there is a 10^-6 or worse bit error rate during 60 available, nonseverely bit-errored seconds.
Errored block— A block in which one or more bits are in error.
E-bit indication— An E-bit is transmitted by the receiving equipment after detecting a CRC-4 error.
Errored second (ES)— An errored second is any second in which one or more bits are in error. An errored second is not counted during an unavailable second. For G.826, an errored second contains one or more blocks with at least one defect.
Frame alarm (FALM)— Frame alarm seconds is a count of seconds that have had far-end frame alarm (FAS remote alarm indication [RAI]), which is signaled by a 1 in bit 3 of time slot 0 in each frame that does not contain the FAS.
Frame alignment signal (FAS)— A count of the bit errors in the frame alignment signal words received. It applies to both PCM-30 and PCM-31 framing.
Frequency— Any variance from 2.048 Mbps in the received frequency is recorded in hertz or parts per million.
Loss of frame seconds (LOFS)— Loss of frame seconds is a count of seconds since the beginning of the test that have experienced a loss of frame.
Loss of signal seconds (LOSS)— Loss of signal seconds is a count of the number of seconds during which the signal has been lost during the test.
Multiframe alarm (MFAL)— Multiframe alarm seconds is a count of seconds that have had far-end multiframe alarm (MFAS RAI).
Multiframe alignment signal (MFAS) distant alarm— In this alarm, a 1 is transmitted in bit 6 of time slot 16 of frame 0.
Severely errored second (SES)— A severely errored second has an error rate of 10^-3 or higher. Severely errored seconds are not counted during unavailable time. For G.826 block measurements, an SES is a 1-second period containing 30 percent or greater errored blocks.
Time slot 16 AIS— In this alarm, all 1s are transmitted in time slot 16 of all frames.
Unavailable seconds (UAS)— Unavailable time begins at the onset of 10 consecutive severely errored seconds. Unavailable seconds also begin at a loss of signal or loss of frame.
Wander— This is the total positive or negative phase difference between the measured signal and the reference. The +wander value increases whenever the measured signal leads the reference by one unit interval (UI). The –wander value increases whenever the measured signal lags the reference by one UI.
NOTE
The following ITU-T recommendations are commonly used with TDM systems: G.703, physical/electrical characteristics of hierarchical digital interfaces; G.704, synchronous frame structures used at 1544, 6312, 2048, 8448, and 44,736 kbps; G.706, frame alignment and CRC procedures relating to basic frame structures defined in Recommendation G.704; G.711, PCM of voice frequencies.
Time-Division Multiplexing
Analog Signal Processing
An analog signal varies continuously over time. Its amplitude, frequency, and phase are the three characteristics that can be varied to convey information, and together they define the sound wave the signal represents. Analog signals are inherently susceptible to attenuation as they progress along the transmission medium. They are also susceptible to electromagnetic interference (EMI), radio frequency interference (RFI), and other noise sources, which results in signal distortion with changes in frequency characteristics.
Analog telephony signals span the 200-Hz to 3.4-kHz frequency band. Such analog signals are referred to as narrowband due to their narrow frequency response.
Analog video signals operate in a frequency band from flat response (0 Hz) up to 60 MHz. Such analog signals are referred to as broadband due to their wide frequency response. The National Television System Committee (NTSC) and PAL broadcast (radio frequency [RF] transmission) standards impose a limit on the bandwidth of the video signal of about 6 to 10 MHz. Video bandwidth is, effectively, the highest-frequency analog signal a monitor can handle without distortion.

Amplification can be used to compensate for signal attenuation. However, analog repeaters cannot distinguish between the signal and distortion components of the analog signal. The repeater amplifies the entire input signal, thereby amplifying the noise along with the original signal. The effects of noise and distortion are cumulative along the analog transmission system.
Analog Signal Generation and Reception
The generation of an analog telephony signal takes place when a person speaks into the transmitter of a telephone set. Changes in the air pressure result in sound waves that are sensed by the diaphragm. The diaphragm responds to changes in air pressure and varies circuit resistance by compressing or decompressing carbon in the transmitter. The change in resistance causes a variation in the output voltage, thereby creating an electrical signal analogous to the sound wave. The phone connects to a central office (CO) in the caller's neighborhood through a subscriber line interface circuit (SLIC) that executes functions, such as powering the phone, detecting when the caller picks up or hangs up the receiver, and ringing the phone when required. A codec at the CO converts the analog voice signals to digital data for easy routing through the voice network and delivery to the CO located in the recipient's neighborhood. At the recipient's CO, the digital data stream is converted back into an electrical analog signal. During reception, a varying current flows through the coil and vibrates the receiver diaphragm that reproduces the sound wave. Digital transmission systems overcome the basic analog issue of the cumulative effects of noise and distortion by regenerating rather than amplifying the transmitted signal. The regenerative repeater detects the presence of a pulse (signal) and creates a new signal based on a sample of the existing signal. The regenerated signal duplicates the original signal and eliminates the cumulative effects of noise and distortion inherent in analog facilities.
Analog-To-Digital Conversion
Converting an analog telephony signal to a digital signal involves filtering, sampling, quantization, and encoding. The following example involves an audio frequency (AF) signal.
Filtering
Audio frequencies range from 20 Hz to 20,000 Hz. Telephone transmission systems are designed to transmit analog signals between 200 Hz and 3400 Hz. Frequencies below 200 Hz and above 3400 Hz are removed by a process called filtering.
As indicated in Figure 2-4, a band pass filter (BPF) is used to filter the audio telephony band for analog-to-digital (A/D) conversion. BPFs are constructed using analog electronic components, such as capacitors and inductors.
Figure 2-4. Filtering of the Analog Telephony Waveform
[View full size image]
Sampling
In the sampling process, portions of a signal are used to represent the whole signal. Each time the signal is sampled, a pulse amplitude modulation (PAM) signal is generated. According to the Nyquist theorem, to accurately reproduce an analog signal such as speech, a sampling rate of at least twice the highest frequency to be reproduced is required. Because telephony voice frequencies (200 to 3400 Hz) fall below 4 kHz, an 8-kHz sampling rate has been established as the standard. As illustrated in Figure 2-5, the PAM sampler measures the filtered analog signal 8000 times per second, or once every 125 microseconds. The value of each sample is directly proportional to the amplitude of the analog signal at the instant it is taken.
Figure 2-5. Pulse Amplitude Modulation (PAM)
Quantization
Quantization represents the original analog signal by a discrete and limited number of digital values. Once the original signal is in a quantized state, it can be relayed over any distance without further loss in quality. To obtain the digital signal, the PAM signal is measured and coded. As shown in Figure 2-5, the amplitude or height of each PAM pulse is measured to derive a number that represents its amplitude level. Quantization essentially matches each PAM signal to one of 255 values on a segmented scale. The quantizer measures the amplitude of each PAM signal coming from the sampler and assigns it a value from –127 to +127. In telephony systems, each sample is first expressed as a 13-bit (A-law) or 14-bit (µ-law) linear code word. Comparing the sample to a companding characteristic, which is a nonlinear formula, then forms an 8-bit byte.
Encoding
The decimal (base 10) number derived via quantization is then converted to its equivalent 8-bit binary number. As illustrated in Figure 2-6, the output is an 8-bit "word" in which each bit can be either a 1 (pulse) or a 0 (no pulse). This process is repeated 8000 times a second for a telephony voice channel service. The output (8000 samples/second * 8 bits/sample) is a 64-kbps PCM signal. This 64-kbps channel is called a DS0, which forms the fundamental building block of the digital signal level (DS level) hierarchy.
Figure 2-6. Pulse Code Modulation (PCM)
[View full size image]
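The DS0 rate follows directly from the sampling and encoding parameters:

```python
samples_per_second = 8000  # Nyquist: at least 2 x the 4-kHz voice band
bits_per_sample = 8        # one PCM code word per sample

ds0_rate = samples_per_second * bits_per_sample
print(ds0_rate)  # 64000 bps: one DS0 channel
```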
µ-law and A-Law Coding
Voice signals are not uniform, and some signals are weaker than others. The dynamic range is the difference in decibels (dB) between weaker (softer) and stronger (louder) signals. The dynamic range of speech can be as high as 60 dB. This does not lend itself well to efficient linear digital encoding. G.711 µ-law and A-law encoding effectively reduce the dynamic range of the signal, thereby increasing the coding efficiency and resulting in a signal-to-noise ratio (SNR) superior to that obtained by linear encoding for a given number of bits. The µ-law and A-law algorithms are standard compression algorithms used in digital communications systems to optimize and modify the dynamic range of an analog signal for digitizing. The µ-law is typically used on T1 facilities, whereas the A-law is used on E1 facilities.
Companding (compression and expansion) is a method commonly used in telephony applications to increase dynamic range while keeping the number of bits used for quantization constant. The compression is lossy, but provides lower quantization errors at smaller amplitude values than at larger values. Basically, the voice is sampled at 8000 samples per second and converted into a 14-bit word (µ-law) or 13-bit word (A-law) that goes into the compander. The samples are processed using a nonlinear formula to transform them into 8-bit words. The compander also inverts all even bits in the word. In A-law companding, for instance, the 13-bit word 1111111111111 is converted to 11111111 (+127) using compression, resulting in the PCM word 10101010 (AA hex). Telephony PCM words use a polarity, chord, and step makeup. Nonlinear coding uses more values to represent lower-volume levels and fewer values for higher-volume levels. This way, µ-law and A-law companding algorithms permit subtleties of a voice conversation to be captured.
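A sketch of A-law compression consistent with the example above (a simplified rendering that assumes 13-bit sign-magnitude input; production codecs follow the segment tables in G.711 itself):

```python
def linear_to_alaw(sample):
    """Compress a 13-bit signed linear sample (-4095..+4095) to an 8-bit A-law word."""
    sign = 0x80 if sample >= 0 else 0x00   # A-law marks positive samples with 1
    mag = min(abs(sample), 0xFFF)          # 12-bit magnitude
    if mag < 0x20:
        code = mag >> 1                    # segment 0: linear region
    else:
        seg = mag.bit_length() - 5         # segments 1-7, doubling in width
        code = (seg << 4) | ((mag >> seg) & 0x0F)  # chord + step
    return (sign | code) ^ 0x55            # invert even bits, per G.711

print(hex(linear_to_alaw(4095)))  # 0xaa: the AA-hex word from the text
print(hex(linear_to_alaw(0)))     # 0xd5: A-law idle (silence) code
```

The segmented scale is visible in the `seg` calculation: each higher segment covers twice the amplitude range of the one below it, which is how small signals get finer resolution than large ones.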
Echo Cancellation
Line echo is created when a signal encounters an impedance mismatch in the telephone network, such as that typically caused by a two- to four-wire (hybrid) conversion in an analog system. The hybrid is a transformer located at the facility that connects the two-wire local loop coming from homes or businesses to the four-wire trunk at the CO for inter-exchange carrier (IXC) interconnectivity. The echo is intensified by distance and impedance-mismatched network equipment. In circuit-switched long-distance networks, echo cancellers reside in the metropolitan COs that connect to the long-distance network. These echo cancellers remove electrical echoes made noticeable by delay in the long-distance network. To eliminate echo, echo cancellation devices use adaptive digital filters, nonlinear processors, and tone detectors.
The adaptive filter is made up of an echo estimator and a subtractor. The echo estimator monitors the receive path and dynamically builds a mathematical model of the line that creates the returning echo. The echo estimate is then fed to the subtractor, which subtracts the linear part of the echo from the line in the send path. The nonlinear processor evaluates the residual echo, removes all signals below a certain threshold, and replaces them with simulated background noise that sounds like the original background noise without the echo. Echo cancellers also include tone detectors that disable echo cancellation by user equipment upon receipt of certain tones during data and fax transmission. As an example, the echo-cancellation function is turned off upon receipt of the high-frequency tone that precedes a modem connection.
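The echo estimator and subtractor described above can be sketched with a least-mean-squares (LMS) update, the classic adaptive-filter algorithm for this role. This is a toy model with a made-up four-tap echo path, not a production canceller; real devices add the nonlinear processor and tone detectors described in the text:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(5000)            # far-end (receive-path) signal
h = np.array([0.0, 0.5, 0.3, 0.1])       # hypothetical echo-path impulse response
echo = np.convolve(x, h)[:len(x)]        # echo returned on the send path

taps = np.zeros(8)                       # echo estimator: adaptive FIR filter
buf = np.zeros(8)                        # recent receive-path samples
mu = 0.05                                # LMS step size
residual = np.empty(len(x))

for n in range(len(x)):
    buf = np.roll(buf, 1)
    buf[0] = x[n]
    estimate = taps @ buf                # echo estimate from the receive path
    e = echo[n] - estimate               # subtractor output (residual echo)
    taps += mu * e * buf                 # adapt the model toward the true path
    residual[n] = e

# After convergence, the residual echo on the send path is essentially gone.
print(float(np.mean(residual[-200:] ** 2)) < 1e-3)  # True
```

Because the filter keeps adapting, the canceller tracks slow changes in the line, which is why the text describes the echo estimator as dynamically building a mathematical model of the line.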
Circuit-Switched Networks
Figure 2-7 shows an example of a circuit-switched network from a customer's perspective. Such a topology is also referred to as a point-to-point line or nailed-up circuit. Typically such lines are leased from a local exchange carrier (LEC) or IXC and are also referred to as leased lines. One leased line is required for each of the remote sites to connect to the headquarters at the central site.
Figure 2-7. Leased Lines from a Customer Perspective
[View full size image]
The private nature of leased line networks provides inherent privacy and control benefits. Leased lines are dedicated, so there are no statistical availability issues associated with oversubscription, as there are in public packet-switched networks. This is both a strength and weakness. The strength is that the circuit is available on a permanent basis and does not require that a connection be set up before traffic is passed. The weakness is that the bandwidth is paid for even if is not used, which is typically about 40 to 70 percent of the time. In addition to the inefficient use of bandwidth, a major disadvantage of leased lines is their mileage-sensitive nature, which makes it a very expensive alternative for networks spanning long distances or requiring extensive connectivity between sites.
Leased lines also lack flexibility in terms of changes to the network when compared to alternatives, such as Frame Relay. For example, adding a new site to the network requires a new circuit to be provisioned end to end for every site with which the new location must communicate. If there are a number of sites, the costs can mount quickly. Leased lines are priced on a mileage and bandwidth basis by a carrier, which results in customers incurring large monthly costs for long-haul leased circuits.
In comparison, public networks (such as Frame Relay) require only an access line to the nearest CO and the provisioning of virtual circuits (VCs) for each new site with which it needs to communicate. In many cases, existing sites will require only the addition of a new VC definition for the new site.
From the carrier perspective, the circuit assigned to the customer (also known as the local loop) is provisioned on the digital access and cross-connect system (DACS) or channel bank. The individual T1 circuits are multiplexed onto a T3 and trunked over terrestrial, microwave, or satellite links to its destination, where it is demultiplexed and fanned out into individual T1 lines. Figure 2-8 shows this scheme. The T-carrier hierarchy, DS1, and DS3 are covered later in this chapter.
Figure 2-8. Leased Lines from a Carrier Perspective
[View full size image]
TDM Signaling
Signaling in the TDM telephony world provides functions such as supervising and advertising line status, alerting devices when a call is trying to connect, and routing and addressing information. Two different types of signaling information are within the T1/E1 system:
Channel-associated signaling (CAS)
Common channel signaling (CCS)
Channel-Associated Signaling (CAS)
CAS is the transmission of signaling information within the information band, or in-band signaling. This means that voice or data signals travel on the same circuits as line status, address, and alerting signals. Because there are 24 channels on a full T1 line, CAS interleaves signaling packets within voice packets; therefore, there are 24 channels to use for voice. Various types of CAS signaling are available in the T1 world. The most common forms of CAS signaling are loopstart, groundstart, and ear and mouth (E&M) signaling. CAS signaling is often referred to as robbed-bit signaling because signaling bits are robbed from every 6th and 12th frame in a D4 superframe (SF) or 6th, 12th, 18th, 24th frame, and extended superframe (ESF). This is explained in greater detail in a later section.
Common Channel Signaling (CCS)
CCS is the transmission of signaling information out of the information band. The most notable and widely used form of this signaling type is ISDN. One disadvantage to using an ISDN primary rate interface (PRI) is the removal of one DS0, or voice channel (in this case, for signaling use). Therefore, one T1 would have 23 DS0s, or bearer B channels for user data, and one DS0, or D channel for signaling. It is possible to control multiple PRIs with a single D channel, each using non-facility-associated signaling (NFAS). This enables you to configure the other PRIs in the NFAS group to use all 24 DS0s as B channels.
An analog signal varies continuously over time in amplitude, frequency, or phase; these three characteristics define the sound wave the signal represents, and each can be varied to convey information. Analog signals are inherently susceptible to attenuation as they progress along the transmission medium. They are also susceptible to electromagnetic interference (EMI), radio frequency interference (RFI), and other noise sources, which distort the signal by altering its frequency characteristics.
Analog telephony signals span the 200-Hz to 3.4-kHz frequency band. Such analog signals are referred to as narrowband due to their narrow frequency response.
Analog video signals operate in a frequency band from flat response (0 Hz) up to 60 MHz. Such analog signals are referred to as broadband due to their wide frequency response. The National Television System Committee (NTSC) and PAL broadcast (radio frequency [RF] transmission) standards impose a limit of about 6 to 10 MHz on the bandwidth of the video signal. Video bandwidth is, effectively, the highest-frequency analog signal a monitor can handle without distortion. Amplification can be used to compensate for signal attenuation; however, analog repeaters cannot distinguish between the signal and the distortion components of the input. The repeater amplifies the entire input signal, thereby amplifying the noise along with the original signal, and the effects of noise and distortion accumulate along the analog transmission system.
Analog Signal Generation and Reception
The generation of an analog telephony signal takes place when a person speaks into the transmitter of a telephone set. Changes in the air pressure result in sound waves that are sensed by the diaphragm. The diaphragm responds to changes in air pressure and varies circuit resistance by compressing or decompressing carbon in the transmitter. The change in resistance causes a variation in the output voltage, thereby creating an electrical signal analogous to the sound wave. The phone connects to a central office (CO) in the caller's neighborhood through a subscriber line interface circuit (SLIC) that executes functions such as powering the phone, detecting when the caller picks up or hangs up the receiver, and ringing the phone when required. A codec at the CO converts the analog voice signals to digital data for easy routing through the voice network and delivery to the CO located in the recipient's neighborhood. At the recipient's CO, the digital data stream is converted back into an electrical analog signal. During reception, a varying current flows through the coil and vibrates the receiver diaphragm that reproduces the sound wave.

Digital transmission systems overcome the basic analog issue of the cumulative effects of noise and distortion by regenerating rather than amplifying the transmitted signal. The regenerative repeater detects the presence of a pulse (signal) and creates a new signal based on a sample of the existing signal. The regenerated signal duplicates the original signal and eliminates the cumulative effects of noise and distortion inherent in analog facilities.
Analog-To-Digital Conversion
Converting an analog telephony signal to a digital signal involves filtering, sampling, quantization, and encoding. The following example involves an audio frequency (AF) signal.
Filtering
Audio frequencies range from 20 Hz to 20,000 Hz. Telephone transmission systems are designed to transmit analog signals between 200 Hz and 3400 Hz. Frequencies below 200 Hz and above 3400 Hz are removed by a process called filtering.
As indicated in Figure 2-4, a band pass filter (BPF) is used to filter the audio telephony band for analog-to-digital (A/D) conversion. BPFs are constructed using analog electronic components, such as capacitors and inductors.
Figure 2-4. Filtering of the Analog Telephony Waveform
[View full size image]
Sampling
In the sampling process, portions of a signal are used to represent the whole signal. Each time the signal is sampled, a pulse amplitude modulation (PAM) signal is generated. According to the Nyquist theorem, accurately reproducing the analog signal (speech) requires a sampling rate of at least twice the highest frequency to be reproduced. Because the telephony voice band (200 to 3400 Hz) lies below 4 kHz, an 8-kHz sampling rate has been established as the standard. As illustrated in Figure 2-5, the PAM sampler measures the filtered analog signal 8000 times per second, or once every 125 microseconds. The value of each sample is directly proportional to the amplitude of the analog signal at the instant of sampling.
Figure 2-5. Pulse Amplitude Modulation (PAM)
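The sampling arithmetic above can be sketched in a few lines of Python. This is an idealized illustration of the PAM stage; the function name and the test tone are illustrative, not from the text:

```python
import math

SAMPLE_RATE_HZ = 8000            # standard telephony sampling rate
HIGHEST_VOICE_FREQ_HZ = 3400     # top of the filtered telephony band

# Nyquist: the sampling rate must be at least twice the highest frequency.
assert SAMPLE_RATE_HZ >= 2 * HIGHEST_VOICE_FREQ_HZ

# One sample every 125 microseconds:
sample_interval_us = 1_000_000 / SAMPLE_RATE_HZ
print(sample_interval_us)   # 125.0

def pam_samples(tone_hz, duration_s, fs=SAMPLE_RATE_HZ):
    """Sample a test tone at the telephony rate (the PAM stage, idealized)."""
    count = int(duration_s * fs)
    return [math.sin(2 * math.pi * tone_hz * i / fs) for i in range(count)]

# One millisecond of a 1-kHz tone yields 8 samples:
print(len(pam_samples(1000, 0.001)))   # 8
```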
Quantization
Quantization represents the original analog signal by a discrete and limited number of digital values. Once quantized, the signal can be relayed over any distance without further loss in quality. To obtain the digital signal, the PAM signal is measured and coded. As shown in Figure 2-5, the amplitude or height of each PAM pulse is measured to derive a number that represents its amplitude level. Quantization essentially matches each PAM sample to one of 255 values on a segmented scale; the quantizer measures the amplitude of each PAM signal coming from the sampler and assigns it a value from –127 to +127. In telephony systems, each sample is first expressed as a linear code word (13 or 14 bits, depending on the companding law); comparing the sample to a companding characteristic, which is a nonlinear formula, then forms the final 8-bit byte.
Encoding
The decimal (base 10) number derived via quantization is then converted to its equivalent 8-bit binary number. As illustrated in Figure 2-6, the output is an 8-bit "word" in which each bit can be either a 1 (pulse) or a 0 (no pulse). This process is repeated 8000 times a second for a telephony voice channel service. The output (8000 samples/second * 8 bits/sample) is a 64-kbps PCM signal. This 64-kbps channel is called a DS0, which forms the fundamental building block of the digital signal level (DS level) hierarchy.
Figure 2-6. Pulse Code Modulation (PCM)
[View full size image]
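The 64-kbps arithmetic generalizes directly. A quick sketch (the 8-kbps T1 framing figure is standard background, one framing bit per 193-bit frame at 8000 frames per second, and is not taken from the text above):

```python
SAMPLES_PER_SECOND = 8000
BITS_PER_SAMPLE = 8

# The DS0 building block: 8000 samples/second * 8 bits/sample
ds0_bps = SAMPLES_PER_SECOND * BITS_PER_SAMPLE
print(ds0_bps)   # 64000 -> 64 kbps

# A full T1 carries 24 DS0s plus 8 kbps of framing overhead:
t1_bps = 24 * ds0_bps + 8000
print(t1_bps)    # 1544000 -> 1.544 Mbps
```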
µ-Law and A-Law Coding
Voice signals are not uniform, and some signals are weaker than others. The dynamic range is the difference in decibels (dB) between weaker (softer) and stronger (louder) signals. The dynamic range of speech can be as high as 60 dB. This does not lend itself well to efficient linear digital encoding. G.711 µ-law and A-law encoding effectively reduce the dynamic range of the signal, thereby increasing the coding efficiency and resulting in a signal-to-noise ratio (SNR) superior to that obtained by linear encoding for a given number of bits. The µ-law and A-law algorithms are standard compression algorithms used in digital communications systems to optimize and modify the dynamic range of an analog signal for digitizing. The µ-law is typically used on T1 facilities, whereas the A-law is used on E1 facilities.
Companding (compression and expansion) is a method commonly used in telephony applications to increase dynamic range while keeping the number of bits used for quantization constant. The compression is lossy, but provides lower quantization errors at smaller amplitude values than at larger values. Basically, the voice is sampled at 8000 samples per second and converted into a 14-bit word (µ-law) or 13-bit word (A-law) that goes into the compander. The samples are processed using a nonlinear formula to transform them into 8-bit words. The compander also inverts all even bits in the word. In A-law companding, for instance, the 13-bit word 1111111111111 is converted to 11111111 (+127) using compression, resulting in the PCM word 10101010 (AA hex). Telephony PCM words use a polarity, chord, and step makeup. Nonlinear coding uses more values to represent lower-volume levels and fewer values for higher-volume levels. This way, µ-law and A-law companding algorithms permit subtleties of a voice conversation to be captured.
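The shape of the nonlinear characteristic can be illustrated with the continuous µ-law formula. This is a sketch of the curve only, not the exact segmented polarity/chord/step codec that G.711 actually uses:

```python
import math

MU = 255   # mu-law parameter used on T1 facilities

def mu_law_compress(x):
    """Continuous mu-law characteristic for x in [-1.0, 1.0].
    Real G.711 approximates this curve with an 8-bit segmented code."""
    sign = 1.0 if x >= 0 else -1.0
    return sign * math.log1p(MU * abs(x)) / math.log1p(MU)

# Small amplitudes occupy a disproportionately large share of the output
# range, which is how companding preserves the subtleties of quiet speech:
print(round(mu_law_compress(0.01), 2))   # 0.23
print(round(mu_law_compress(1.00), 2))   # 1.0
```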
Echo Cancellation
Line echo is created when a signal encounters an impedance mismatch in the telephone network, such as that typically caused by a two- to four-wire (hybrid) conversion in an analog system. The hybrid is a transformer located at the facility that connects the two-wire local loop coming from homes or businesses to the four-wire trunk at the CO for inter-exchange carrier (IXC) interconnectivity. The echo is intensified by distance and impedance-mismatched network equipment. In circuit-switched long-distance networks, echo cancellers reside in the metropolitan COs that connect to the long-distance network. These echo cancellers remove electrical echoes made noticeable by delay in the long-distance network. To eliminate echo, echo cancellation devices use adaptive digital filters, nonlinear processors, and tone detectors.
The adaptive filter is made up of an echo estimator and a subtractor. The echo estimator monitors the receive path and dynamically builds a mathematical model of the line that creates the returning echo. The echo estimate is then fed to the subtractor, which subtracts the linear part of the echo from the line in the send path. The nonlinear processor evaluates the residual echo, removes all signals below a certain threshold, and replaces them with simulated background noise that sounds like the original background noise without the echo. Echo cancellers also include tone detectors that disable echo cancellation by user equipment upon receipt of certain tones during data and fax transmission. As an example, the echo-cancellation function is turned off upon receipt of the high-frequency tone that precedes a modem connection.
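The estimator/subtractor loop can be sketched as a basic least-mean-squares (LMS) adaptive filter. This is a toy model of the adaptive portion only (no nonlinear processor or tone detector), with an illustrative tap count and step size:

```python
def cancel_echo(rx, send, taps=8, step=0.05):
    """rx: far-end (receive-path) samples that create the echo.
    send: near-end (send-path) samples containing the returned echo.
    Returns the send path with the estimated linear echo subtracted."""
    weights = [0.0] * taps       # adaptive model of the echo path
    history = [0.0] * taps       # recent receive-path samples
    residual = []
    for x, d in zip(rx, send):
        history = [x] + history[:-1]
        estimate = sum(w * h for w, h in zip(weights, history))
        e = d - estimate         # subtractor output (residual echo)
        residual.append(e)
        for i in range(taps):    # adapt the model toward the echo path
            weights[i] += step * e * history[i]
    return residual
```

Fed a sinusoidal far-end signal and a delayed, attenuated copy of it as the echo, the residual shrinks toward zero as the filter converges on the echo path.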
Circuit-Switched Networks
Figure 2-7 shows an example of a circuit-switched network from a customer's perspective. Such a topology is also referred to as a point-to-point line or nailed-up circuit. Typically such lines are leased from a local exchange carrier (LEC) or IXC and are also referred to as leased lines. One leased line is required for each of the remote sites to connect to the headquarters at the central site.
Figure 2-7. Leased Lines from a Customer Perspective
[View full size image]
The private nature of leased lines provides inherent privacy and control benefits. Leased lines are dedicated, so there are no statistical availability issues associated with oversubscription, as there are in public packet-switched networks. This is both a strength and a weakness. The strength is that the circuit is available on a permanent basis and does not require that a connection be set up before traffic is passed. The weakness is that the bandwidth is paid for even if it is not used, which is typically about 40 to 70 percent of the time. In addition to the inefficient use of bandwidth, a major disadvantage of leased lines is their mileage-sensitive pricing, which makes them a very expensive alternative for networks spanning long distances or requiring extensive connectivity between sites.
Leased lines also lack flexibility in terms of changes to the network when compared to alternatives, such as Frame Relay. For example, adding a new site to the network requires a new circuit to be provisioned end to end for every site with which the new location must communicate. If there are a number of sites, the costs can mount quickly. Leased lines are priced on a mileage and bandwidth basis by a carrier, which results in customers incurring large monthly costs for long-haul leased circuits.
In comparison, public networks (such as Frame Relay) require only an access line to the nearest CO and the provisioning of virtual circuits (VCs) to each site with which the new location needs to communicate. In many cases, existing sites require only the addition of a new VC definition for the new site.
From the carrier perspective, the circuit assigned to the customer (also known as the local loop) is provisioned on a digital access and cross-connect system (DACS) or channel bank. The individual T1 circuits are multiplexed onto a T3, which is trunked over terrestrial, microwave, or satellite links to its destination, where it is demultiplexed and fanned out into individual T1 lines. Figure 2-8 shows this scheme. The T-carrier hierarchy, DS1, and DS3 are covered later in this chapter.
Figure 2-8. Leased Lines from a Carrier Perspective
[View full size image]
TDM Signaling
Signaling in the TDM telephony world provides functions such as supervising and advertising line status, alerting devices when a call is trying to connect, and carrying routing and addressing information. Two types of signaling are used within the T1/E1 system:
Channel-associated signaling (CAS)
Common channel signaling (CCS)
Channel-Associated Signaling (CAS)
CAS is the transmission of signaling information within the information band, or in-band signaling. This means that voice or data signals travel on the same circuits as line status, address, and alerting signals. Because CAS embeds the signaling bits within the voice channels themselves, all 24 channels of a full T1 line remain available for voice. Various types of CAS signaling are available in the T1 world; the most common forms are loopstart, groundstart, and ear and mouth (E&M) signaling. CAS is often referred to as robbed-bit signaling because signaling bits are robbed from the 6th and 12th frames of a D4 superframe (SF), or from the 6th, 12th, 18th, and 24th frames of an extended superframe (ESF). This is explained in greater detail in a later section.
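The robbed-bit positions described above can be captured in a small helper (a sketch; the function name is illustrative):

```python
def robbed_frames(framing):
    """1-based frame numbers whose signaling bits are robbed."""
    if framing == "SF":     # D4 superframe: 12 frames
        return [6, 12]
    if framing == "ESF":    # extended superframe: 24 frames
        return [6, 12, 18, 24]
    raise ValueError(f"unknown framing format: {framing}")

print(robbed_frames("SF"))    # [6, 12]
print(robbed_frames("ESF"))   # [6, 12, 18, 24]
```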
Common Channel Signaling (CCS)
CCS is the transmission of signaling information outside the information band. The most notable and widely used form of this signaling type is ISDN. One disadvantage of using an ISDN Primary Rate Interface (PRI) is the loss of one DS0, or voice channel, to signaling. A T1 PRI therefore has 23 DS0s, or bearer (B) channels, for user data and one DS0, the D channel, for signaling. It is possible to control multiple PRIs with a single D channel by using non-facility-associated signaling (NFAS), which enables the other PRIs in the NFAS group to use all 24 DS0s as B channels.
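The channel arithmetic works out as follows (a sketch with an illustrative function name):

```python
def bearer_channels(num_pris, nfas=False):
    """B channels available across a group of T1 PRIs.
    Without NFAS each PRI is 23B+D; with NFAS a single D channel
    signals for the group and the remaining PRIs run 24B."""
    if not nfas:
        return num_pris * 23
    return 23 + (num_pris - 1) * 24

print(bearer_channels(4))              # 92 (four independent 23B+D PRIs)
print(bearer_channels(4, nfas=True))   # 95 (one D channel for the group)
```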
Introduction to Optical Networking
SONET/SDH
SONET/SDH networks are typically built in a hierarchical topology. The campus network could be GE or even an OC-3/STM-1 or OC-12/STM-4 SONET/SDH ring. Campus-to-central office (CO) traffic is normally carried over the metro access ring. CO-to-CO traffic is commonly carried over metro core rings and, finally, if the traffic is required to leave the metro core, long-haul traffic is typically carried over DWDM circuits.
As shown in Figure 1-1, customer rings are known as access rings and typically span a campus. The access rings converge and interconnect at major network traffic collection points. These collection points are referred to as points of presence (POPs) by carriers or as headends in the cable industry. The collector rings aggregate the access ring traffic and groom this traffic into the core rings, which are often referred to as interoffice facility (IOF) or metro core rings because they interconnect these collection points. Access rings reach further out to customer premises locations and are said to subtend off the larger collector rings. The collector rings subtend off the larger core rings.
Figure 1-1. SONET/SDH Hierarchical Topology
[View full size image]
In legacy-based SONET/SDH time-division multiplexing (TDM), the sum of all subtending access ring bandwidth equals the total bandwidth required at the collector. Similarly, the sum of all subtending collector ring bandwidths equals the total bandwidth required at the core backbone.
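This bandwidth summation rule is simple to state in code; the ring rates below are hypothetical examples:

```python
# Under legacy TDM, the collector must carry the arithmetic sum of its
# subtending access rings, because TDM reserves bandwidth end to end.
access_rings_mbps = [155.52, 155.52, 622.08]   # e.g., two OC-3s and an OC-12

collector_required_mbps = sum(access_rings_mbps)
print(round(collector_required_mbps, 2))   # 933.12
```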
Legacy SONET networks use automatic protection switching (APS), 1+1 protection, linear APS, two-fiber unidirectional path-switched ring (UPSR), two-fiber bidirectional line-switched ring (BLSR), or four-fiber BLSR protection mechanisms. Legacy SDH networks use multiplex section protection (MSP) 1+1, MSP 1:1, and MSP 1:N. They also implement two-fiber subnetwork connection protection (SNCP), two-fiber multiplexed section protection ring (MS-SPRing), or four-fiber MS-SPRing protection mechanisms. These protection mechanisms are also used by next-generation SONET/SDH, and are discussed in greater detail in later chapters.
Traffic flows in the access rings are typically of a hub-and-spoke nature, consolidating back at the local CO. UPSR/SNCP architectures are well suited for such multiple point-to-point or two-node traffic flows. The hub-and-spoke architecture also extends to 1+1 and linear access networks. Collector and core rings, however, support large amounts of traffic between access rings. As such, core ring traffic travels in a mesh, from any CO to any other CO. Because of their inherent potential for bandwidth reservation, BLSR/MS-SPRing architectures work well for such distributed "mesh" and node-to-node traffic applications.
Legacy SONET/SDH
Legacy SONET/SDH networks use add/drop multiplexers (ADMs) that add or drop OC-N or STM-N circuits between ADM nodes on the ring. The relationship between SONET Optical Carrier (OC-N) levels and SDH Synchronous Transport Module (STM-N) levels is presented in Table 1-1.
Table 1-1. SONET OC-N and Its SDH Equivalent Signal Level

SONET     T-Carrier Equivalent    SDH Equivalent    Bandwidth
OC-3      84 * T1                 STM-1             155.52 Mbps
OC-12     336 * T1                STM-4             622.08 Mbps
OC-48     1344 * T1               STM-16            2488.32 Mbps
OC-192    5376 * T1               STM-64            9953.28 Mbps
OC-768    21,504 * T1             STM-256           39,813.12 Mbps
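The rates in Table 1-1 are all multiples of the 51.84-Mbps STS-1, with 28 T1s per STS-1 (groomed via a DS-3); a quick sketch reproduces the table:

```python
STS1_MBPS = 51.84     # base SONET rate
T1_PER_STS1 = 28      # 28 T1s groom into one STS-1

def oc_rate_mbps(n):
    """Line rate of OC-N as a multiple of the STS-1 rate."""
    return n * STS1_MBPS

for oc_n, stm_n in [(3, 1), (12, 4), (48, 16), (192, 64), (768, 256)]:
    t1s = oc_n * T1_PER_STS1
    print(f"OC-{oc_n} / STM-{stm_n}: {t1s} * T1, {oc_rate_mbps(oc_n):.2f} Mbps")
```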
SONET topologies typically use digital cross-connect systems (DCS or DACS) to groom lower-bandwidth DS-0 or DS-1 circuits to higher DS-3, OC-3, or STM-1 levels. SDH architectures use the term DXC for a digital cross-connect switch. Higher-order DXCs cross-connect or switch traffic in 155-Mbps (STM-1) blocks, whereas lower-order DXCs cross-connect traffic at the 1.544-Mbps (DS-1) or 2.048-Mbps (E1) rates. Next-generation MSPPs integrate DCS/DXC functionality within the chassis.
Various CPE services, such as T1 or FT1 private line services, terminate on the DACS. Ethernet services could be provided using routers directly connected to the DACS, as shown in Figure 1-2. Voice services could be carried over TDM circuits by attaching the switches or private branch exchanges (PBXs) directly to the ADMs or via the DACS. Attaching ATM core switches directly to the ADMs provides ATM transport. In the case of ATM, the underlying SONET/SDH concatenated circuit would be completely transparent and the provider would need to provision permanent virtual circuits (PVCs) or switched virtual circuits (SVCs) as per customer requirements.
Figure 1-2. Legacy SONET/SDH Applications
SONET/SDH Multiservice Provisioning Platforms
Since the late 1990s, the distinction between metro core and access rings has blurred with the advent of next-generation SONET/SDH devices known as multiservice provisioning platforms (MSPPs). As illustrated in Figure 1-3, high-bandwidth core rings can aggregate customer traffic and perform a CO-to-CO function. The MSPP can perform the duties of an ADM and DCS/DXC on both access rings and metro core rings.
Figure 1-3. Next-Generation SONET/SDH MSPP Topology
The current drivers for increasing optical bandwidth include unicast data (including voice over IP), TDM voice, videoconferencing, and multicast distance-learning applications. The optical infrastructure provides a true broadband medium for multiservice transport. Current optical technologies in use can be broadly classified, as shown in Table 1-2.
Table 1-2. Classification of Optical Technologies

Technology                Application
Gigabit Ethernet          Metro access or metro core
Legacy SONET/SDH          Metro access, metro core, and long haul
Multiservice SONET/SDH    Metro access, metro core, and long haul
Packet over SONET/SDH     Metro access, metro core, and long haul
DWDM                      Metro access, metro core, and long haul
Legacy SONET/SDH TDM bandwidth summation no longer applies when packet- or frame-based traffic is statistically multiplexed onto a SONET/SDH ring. MSPPs can share SONET/SDH bandwidth among TDM, Ethernet, and other customer premises equipment (CPE) services. The inherent reliability of SONET/SDH is extended to Ethernet services when provisioned over MSPPs. These data services can be implemented across UPSR/SNCP, BLSR/MS-SPRing, linear, unprotected, and path-protected meshed network (PPMN) topologies. Furthermore, SONET/SDH 50-ms recovery is provided for these Ethernet services in the same manner as is done currently for TDM-based DS-N and OC-N circuits. The MSPP also includes support for resilient packet ring (IEEE 802.17) and has a roadmap for Generalized Multiprotocol Label Switching (GMPLS) with support for automatically switched optical networks (ASONs).
MSPPs enable carriers to provide packet-based services over SONET/SDH platforms. These services can be offered with varying service level agreements (SLAs) using Layer 1.5, 2, or 3 switching and quality of service (QoS) mechanisms. The optical network QoS includes the following parameters:
Degree of transparency
Level of protection
Required bit error rate
End-to-end delay
Jitter requirements
As illustrated in Figure 1-4, multiservice provisioning platforms integrate DCS/DXC and Ethernet switching functionality within the device. ATM services, however, still require an external core ATM switch to provision end-user PVCs or SVCs. The MSPP can provide private-line TDM services, 10/100/1000-Mbps Ethernet services, and Multiprotocol Label Switching (MPLS) IP-routed services. This means that the service provider can build Layer 2 Ethernet virtual LAN (VLAN) virtual private networks (VPNs) or Layer 2.5 MPLS VPNs. Such versatility positions the MSPP as the solution of choice for metro access and core applications. Integrated DWDM capability also extends the MSPP to core and long-haul transport applications.
Figure 1-4. Next-Generation MSPP Applications
Improving SONET/SDH Bandwidth Efficiency
Legacy SONET/SDH networks were designed to transport TDM traffic in a highly predictable and reliable manner. Today's traffic patterns are shifting from TDM to an increasing percentage of bursty data traffic. Internet and data network growth in the past six years has highlighted legacy SONET/SDH's inefficiency in transporting data. Its rigid data hierarchy limits connections to fixed increments that have steep gaps between them. For example, an OC-3/STM-1 translates to 155 Mbps, but the next standard increment that is offered is OC-12/STM-4, which is 622 Mbps.
The inefficiency of transporting Ethernet over SONET/SDH has been reduced by concatenation techniques. If one were to transport 100-Mbps Fast Ethernet over a legacy SONET/SDH channel, for example, the channel selected would be an OC-3/STM-1, which consumes about 155 Mbps of bandwidth; roughly an OC-1 (51.84 Mbps) worth of bandwidth would be wasted. Concatenation supports nonstandard channel sizes such as an STS-2; transporting 100-Mbps Ethernet within an STS-2 (103.68 Mbps) optimizes bandwidth efficiency. Virtual concatenation (VCAT) and the link capacity adjustment scheme (LCAS) are techniques used to further enhance network efficiency.
VCAT is an inverse multiplexing procedure whereby the contiguous bandwidth is broken into individual synchronous payload envelopes (SPEs) at the source transmitter that are logically represented in a virtual concatenation group (VCG). The VCG members are transported as individual SPEs across the SONET/SDH network and recombined at the far-end destination VCG receiver. VCAT is used to provision point-to-point connections over the SONET network using any available capacity to construct an (N * STS-1)-sized pipe for packet traffic.
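Sizing a VCG amounts to choosing the smallest member count N whose aggregate covers the payload. A sketch, using the gross STS-1 rate as the text does (real provisioning would also account for path overhead):

```python
import math

STS1_MBPS = 51.84   # gross STS-1 rate

def vcg_members(payload_mbps):
    """Smallest N such that an (N * STS-1)-sized pipe covers the payload."""
    return math.ceil(payload_mbps / STS1_MBPS)

n = vcg_members(100)                  # Fast Ethernet
print(n, round(n * STS1_MBPS, 2))     # 2 103.68 -> an STS-2-sized channel
```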
LCAS is a protocol that keeps the sender and receiver synchronized while a virtually concatenated circuit is increased or decreased in size, in a hitless manner that does not interrupt the data signal.
QoS
The capability to classify packets, queue them based on that classification, and then schedule them efficiently into Synchronous Transport Signal (STS) channels is necessary to enable services that create and maintain sustainable service provider business cases. QoS is necessary in a service provider environment, to maintain customer SLAs. The various protection mechanisms used in optical networks such as APS, 1+1, two-fiber UPSR/SNCP, two-fiber BLSR/MS-SPRing, and four-fiber BLSR/MS-SPRing also determine the QoS and consequent SLA that a carrier can guarantee the customer. For example, circuits provisioned over a four-fiber BLSR/MS-SPRing ring can be offered with a higher QoS guarantee and SLA than a circuit provisioned over UPSR/SNCP, because four-fiber BLSR/MS-SPRing provides maximum redundancy.
SONET/SDH Encapsulation of Ethernet
Various methods for encapsulating Ethernet packets into SONET/SDH payloads have been discussed in the industry. The MSPP strategy focuses on delivering a single encapsulation scheme for both Ethernet and storage-area network (SAN) extension services while enabling interoperability between the transport components and the Layer 2 and 3 devices, which can exist within service provider networks. The vendor-accepted standard for encapsulation of Ethernet within SONET/SDH is the ANSI T1X1.5 Generic Framing Procedure (GFP). GFP provides a generic way to adapt various data traffic types from the client interface onto a synchronous optical transmission channel, such as SONET/SDH or WDM. GFP works in conjunction with VCAT and LCAS schemes, described earlier.
Packet Ring Technologies
Various technologies enable the transport of Ethernet services over SONET/SDH. Shared packet ring (SPR) and resilient packet ring (RPR) implementations vary by vendor. The only true standard is the IEEE 802.17 RPR specification. RPR technology uses a dual-counter rotating fiber ring topology to transport working traffic between nodes. RPR uses spatial reuse of bandwidth, which ensures that bandwidth is only consumed between the source and destination nodes. Packets are removed at their destination, leaving bandwidth available to downstream nodes on the ring.
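Spatial reuse can be illustrated with a toy ring model (the node count and flows are hypothetical):

```python
def spans_used(src, dst, nodes=6):
    """Spans a flow consumes traveling clockwise from src to dst
    on a ring of `nodes` stations (0-based node numbers)."""
    return [(src + i) % nodes for i in range((dst - src) % nodes)]

# Two flows on disjoint arcs of the ring reuse its bandwidth
# simultaneously, because packets are removed at their destination
# rather than circulating the full ring:
print(spans_used(0, 2))   # [0, 1] -> spans 0->1 and 1->2
print(spans_used(3, 5))   # [3, 4] -> spans 3->4 and 4->5
```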
Proactive span protection automatically avoids failed spans within 50 ms, thereby providing SONET/SDH-like resiliency in RPR architectures. RPR provides support for latency- and jitter-sensitive traffic, such as voice and video. RPR supports topologies of more than 100 nodes per ring with an automatic topology-discovery mechanism that works across multiple, interconnected rings.
SPR architectures are essentially Switched Ethernet over SONET/SDH optical transport topologies that follow the rules of bridging and Ethernet VLANs. SPR supports dual 802.1Q VLAN tagging and up to eight 802.1P classes of service. SPR and RPR are further discussed in later chapters.
Provisioning
MSPPs use GUI-based craft interfaces, management platforms, and the familiar IOS command-line interface (CLI) to simplify the provisioning task of SONET/SDH circuits, Ethernet circuits, IP routing, RPR, MPLS, and DWDM. Carriers and service providers that have experienced the complexities involved with Transaction Language 1 (TL-1) provisioning truly appreciate the ease of MSPP provisioning. Automated GUI-based provisioning is intuitive and reduces the learning curve associated with mastering TL-1. It also reduces the risk associated with incorrectly provisioning circuits that could result in breach of SLAs.
Signaling
MSPPs support signaling-based circuit provisioning through the user-network interface (UNI) signaling protocol, a standards-based unified control plane, and GMPLS signaling. GMPLS is also referred to as multiprotocol lambda switching. GMPLS supports packet-switching devices as well as devices that switch in the time, wavelength, and space domains, and it provides the framework for a unified control and management plane for IP and optical transport networks. The ITU G.ASON framework includes support for automated routing and signaling of optical connections at the UNI, network-network interface (NNI), and connection-control interface (CCI) levels.
Dense Wavelength-Division Multiplexing
Dense wavelength-division multiplexing (DWDM) is a method to insert multiple channels or wavelengths over a single optical fiber. DWDM maximizes the use of the installed fiber base and allows new services to be quickly and easily provisioned over the existing fiber infrastructure. DWDM offers bandwidth multiplication for carriers over the same fiber pair. DWDM alleviates unnecessary fiber build-out in congested conduits and provides a scalable upgrade path for bandwidth needs.
As illustrated in Figure 1-5, various wavelengths are multiplexed over the fiber. End or intermediate DWDM devices perform amplification, reshaping, and timing (3R) functions. Individual wavelengths or channels can be dropped or inserted along a route. DWDM open architecture systems allow a variety of devices to be connected including SONET/SDH ADMs, ATM switches, and IP routers.
Figure 1-5. DWDM Schematic
[View full size image]
DWDM platforms provide the following:
Optical multiplexing/demultiplexing to combine/separate ITU-T grid wavelengths launched by optical transmitters/transponders
Optical filtering to combine ITU-T grid wavelengths launched by MSPPs
Optical ADM functionality to exchange wavelengths on SONET/SDH spans between the MSPP and the DWDM device
Optical performance monitoring (OPM)
Fiber-optic signal amplification and 3R functionality
Long-haul DWDM is commonly divided into three categories with the main differentiator being unregenerated transmission distance. The three main long-haul DWDM classifications are long haul (LH), which ranges from 0 to 600 km; extended long haul (ELH), which ranges from 600 to 2000 km; and ultra long haul (ULH), which ranges from 3000+ km.
Storage networking is one of the key drivers for DWDM. The amount of data that enterprises store, including content or e-commerce databases, has increased exponentially. This has, in turn, driven up the demand for more storage connectivity. Information storage also includes backing up servers and providing updated, consistent mirror images of that data at remote sites for disaster recovery. Storage-area networking uses protocols such as ESCON, FICON, Fibre Channel, or Gigabit Ethernet.
The availability of fiber plants has become a key challenge for many companies that need multiple connections across a metropolitan-area network (MAN). Before DWDM technology was available, a company that wanted to connect data centers had to provide fiber for each individual connection. For small numbers of connections, this was not a problem. However, as shown in Figure 1-6, eight pairs of fiber-optic cable would be required if an organization were to connect two data centers via Gigabit Ethernet along with multiple ESCON channels, and FICON over Fibre Channel.
Figure 1-6. Storage-Area Topology
[View full size image]
If the organization owned the fiber plant, they would be responsible for the underground installation of the fiber-optic cable and its maintenance. Many organizations outsource such work to dark-fiber providers. Fiber providers charge per strand per kilometer of fiber. Therefore, networks such as that in Figure 1-6 could be extremely expensive to build and maintain.
The metro DWDM platform enables service providers to deliver managed wavelength-based ESCON, FICON, Fibre Channel, and Ethernet services to customers offering outsourced storage or content services. This facilitates the convergence of data, storage, and SONET/SDH networking and provides an infrastructure capable of reliable, high-availability multiservice networking in the MAN at very economical levels.
Using DWDM technology, the service providers can strip off wavelengths and assign them to each connection as shown in Figure 1-7. Each connection is now assigned a wavelength, instead of being assigned to its own fiber pair. As illustrated in Figure 1-7, eight wavelengths are assigned to a single pair of fibers. This way, numerous data streams can be multiplexed at different speeds, across a single fiber pair. This saves the organization considerable expense. In addition, service providers can provision wavelengths to enterprise customers and charge for the number of wavelengths used.
Figure 1-7. Storage-Area Topology Using DWDM
[View full size image]
Consider a DWDM platform that provides 32 wavelengths multiplexed over a single fiber pair. By supporting speeds from 10 Mbps up to OC-192 (10 Gbps), the system could provide up to 320 Gbps of bandwidth. To increase the density of signals on the fiber-optic cable, most users would start by aggregating their existing traffic, such as Gigabit Ethernet, ESCON (136 Mbps/200 Mbps), FICON (1.062 Gbps), or Fibre Channel (640 Mbps/1.062 Gbps/2.125 Gbps) via DWDM.
Users also have the ability to increase the bandwidth on each of the channels (wavelengths)—for example, by moving from OC-3 to OC-48. Another key benefit is protocol transparency, which alleviates the need for protocol conversion, the associated complexity, and the transmission latencies that might result. Protocol transparency is accomplished with 2R networks and enables support for all traffic types, regardless of bandwidth and protocol.
SONET/SDH networks are typically built in a hierarchical topology. The campus network could be GE or even an OC-3/STM-1 or OC-12/STM-4 SONET/SDH ring. Campus-to-central office (CO) traffic is normally carried over the metro access ring. CO-to-CO traffic is commonly carried over metro core rings and, finally, if the traffic is required to leave the metro core, long-haul traffic is typically carried over DWDM circuits.
As shown in Figure 1-1, customer rings are known as access rings and typically span a campus. The access rings converge and interconnect at major network traffic collection points. These collection points are referred to as points of presence (POPs) by carriers or as headends in the cable industry. The collector rings aggregate the access ring traffic and groom this traffic into the core rings, which are often referred to as interoffice facility (IOF) or metro core rings because they interconnect these collection points. Access rings reach further out to customer premises locations and are said to subtend off the larger collector rings. The collector rings subtend off the larger core rings.
Figure 1-1. SONET/SDH Hierarchical Topology
In legacy-based SONET/SDH time-division multiplexing (TDM), the sum of all subtending access ring bandwidth equals the total bandwidth required at the collector. Similarly, the sum of all subtending collector ring bandwidths equals the total bandwidth required at the core backbone.
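The TDM summation rule above is simple arithmetic; a minimal Python sketch illustrates it (the ring sizes chosen are hypothetical):

```python
# Legacy TDM: a collector ring must be sized for the arithmetic sum of
# its subtending access rings -- there is no statistical gain.

STS1_MBPS = 51.84  # line rate of one STS-1/OC-1

def oc_rate_mbps(n: int) -> float:
    """Line rate of an OC-N signal in Mbps."""
    return n * STS1_MBPS

# Hypothetical collector with three subtending access rings.
access_rings = [3, 3, 12]  # two OC-3 rings and one OC-12 ring
collector_mbps = sum(oc_rate_mbps(n) for n in access_rings)
print(f"Collector must carry {collector_mbps:.2f} Mbps")  # 933.12 Mbps
```

The same summation applies one level up, from collector rings into the core backbone.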
Legacy SONET networks use automatic protection switching (APS), 1+1 protection, linear APS, two-fiber unidirectional path-switched ring (UPSR), two-fiber bidirectional line-switched ring (BLSR), or four-fiber BLSR protection mechanisms. Legacy SDH networks use multiplex section protection (MSP) 1+1, MSP 1:1, and MSP 1:N. They also implement two-fiber subnetwork connection protection (SNCP), two-fiber multiplexed section protection ring (MS-SPRing), or four-fiber MS-SPRing protection mechanisms. These protection mechanisms are also used by next-generation SONET/SDH, and are discussed in greater detail in later chapters.
Traffic flows in the access rings are typically of a hub-and-spoke nature, consolidating back at the local CO. UPSR/SNCP architectures are well suited for such multiple point-to-point or two-node traffic flows. The hub-and-spoke architecture also extends to 1+1 and linear access networks. Collector and core rings, however, support large amounts of traffic between access rings. As such, core ring traffic travels in a mesh, from any CO to any other CO. Because of their inherent potential for bandwidth reservation, BLSR/MS-SPRing architectures work well for such distributed "mesh" and node-to-node traffic applications.
Legacy SONET/SDH
Legacy SONET/SDH networks use add/drop multiplexers (ADMs) that add or drop OC-N or STM-N circuits between ADM nodes on the ring. The relationship between SONET Optical Carrier (OC-N) levels and their SDH Synchronous Transport Module (STM-N) equivalents is presented in Table 1-1.
Table 1-1. SONET OC-N and Its SDH Equivalent Signal Level

SONET     T-Carrier Equivalent   SDH Equivalent   Bandwidth
OC-3      84 * T1                STM-1            155.52 Mbps
OC-12     336 * T1               STM-4            622.08 Mbps
OC-48     1344 * T1              STM-16           2488.32 Mbps (2.49 Gbps)
OC-192    5376 * T1              STM-64           9953.28 Mbps (9.95 Gbps)
OC-768    21,504 * T1            STM-256          39,813.12 Mbps (39.81 Gbps)
SONET topologies typically use digital cross-connect systems (DCS), also called DACS, to groom lower-bandwidth DS-0 or DS-1 circuits to higher DS-3, OC-3, or STM-1 levels. SDH architectures use the term DXC for a digital cross-connect switch. Higher-order DXCs are used to cross-connect or switch traffic in 155-Mbps (STM-1) blocks, whereas lower-order DXCs are used to cross-connect traffic at 1.544-Mbps (DS-1) or 2.048-Mbps (E1) rates. Next-generation MSPPs integrate DCS/DXC functionality within the chassis.
Various CPE services, such as T1 or FT1 private line services, terminate on the DACS. Ethernet services could be provided using routers directly connected to the DACS, as shown in Figure 1-2. Voice services could be carried over TDM circuits by attaching the switches or private branch exchanges (PBXs) directly to the ADMs or via the DACS. Attaching ATM core switches directly to the ADMs provides ATM transport. In the case of ATM, the underlying SONET/SDH concatenated circuit would be completely transparent and the provider would need to provision permanent virtual circuits (PVCs) or switched virtual circuits (SVCs) as per customer requirements.
Figure 1-2. Legacy SONET/SDH Applications
SONET/SDH Multiservice Provisioning Platforms
Since the late 1990s, the distinction between metro core and access rings has been blurred by the advent of next-generation SONET/SDH devices known as multiservice provisioning platforms (MSPPs). As illustrated in Figure 1-3, high-bandwidth core rings can aggregate customer traffic and perform a CO-to-CO function. The MSPP can perform the duties of an ADM and DCS/DXC on access rings and metro core rings.
Figure 1-3. Next-Generation SONET/SDH MSPP Topology
The current drivers for increasing optical bandwidth include unicast data (including voice over IP), TDM voice, videoconferencing, and multicast distance-learning applications. The optical infrastructure provides a true broadband medium for multiservice transport. Current optical technologies in use can be broadly classified, as shown in Table 1-2.
Table 1-2. Classification of Optical Technology

Technology               Application
Gigabit Ethernet         Metro access or metro core
Legacy SONET/SDH         Metro access, metro core, and long haul
Multiservice SONET/SDH   Metro access, metro core, and long haul
Packet over SONET/SDH    Metro access, metro core, and long haul
DWDM                     Metro access, metro core, and long haul
Legacy SONET/SDH TDM bandwidth summation no longer applies when packet- or frame-based traffic is statistically multiplexed onto a SONET/SDH ring. MSPPs can share SONET/SDH bandwidth among TDM, Ethernet, and other customer premises equipment (CPE) services. The inherent reliability of SONET/SDH is extended to Ethernet services when provisioned over MSPPs. These data services can be implemented across UPSR/SNCP, BLSR/MS-SPRing, linear, unprotected, and path-protected meshed network (PPMN) topologies. Furthermore, SONET/SDH 50-ms recovery is provided for these Ethernet services in the same manner as is done currently for TDM-based DS-N and OC-N circuits. The MSPP also includes support for resilient packet ring (IEEE 802.17) and has a roadmap for Generalized Multiprotocol Label Switching (GMPLS) with support for automatically switched optical networks (ASONs).
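The contrast with the TDM summation rule can be shown numerically: bursty flows rarely peak at the same time, so the sum-of-peaks reservation sits mostly idle. The flow rates and duty cycle below are hypothetical:

```python
import random

# Why TDM summation over-provisions for bursty data: three 100-Mbps
# flows, each active roughly 30% of the time, rarely peak together.
random.seed(42)  # deterministic for repeatability
SLOTS, FLOWS, PEAK_MBPS, DUTY = 10_000, 3, 100, 0.3

tdm_reservation = FLOWS * PEAK_MBPS  # legacy rule: sum of peaks = 300 Mbps

# Sample instantaneous demand: each flow is independently on or off.
demand = [sum(PEAK_MBPS for _ in range(FLOWS) if random.random() < DUTY)
          for _ in range(SLOTS)]
avg = sum(demand) / SLOTS
print(f"TDM reserves {tdm_reservation} Mbps; average demand {avg:.0f} Mbps")
```

Statistical multiplexing lets the MSPP share the unused headroom among other services, which is exactly what a fixed TDM circuit cannot do.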
MSPPs enable carriers to provide packet-based services over SONET/SDH platforms. These services can be offered with varying service level agreements (SLAs) using Layer 1.5, 2, or 3 switching and quality of service (QoS) mechanisms. The optical network QoS includes the following parameters:
Degree of transparency
Level of protection
Required bit error rate
End-to-end delay
Jitter requirements
As illustrated in Figure 1-4, multiservice provisioning platforms integrate DCS/DXC and Ethernet switching functionality within the device. However, ATM services would need an external core ATM switch to provision end-user PVCs or SVCs. The MSPP can provide private line TDM services, 10/100/1000-Mbps Ethernet services, and Multiprotocol Label Switched (MPLS) IP-routed services. This means that the service provider could build Layer 2 Ethernet virtual LAN (VLAN) virtual private networks (VPNs) or Layer 2.5 MPLS VPNs. Such versatility positions the MSPP as the solution of choice for metro access and core applications. Integration of DWDM capability also extends core and long-haul transport as an application for the MSPP.
Figure 1-4. Next-Generation MSPP Applications
Improving SONET/SDH Bandwidth Efficiency
Legacy SONET/SDH networks were designed to transport TDM traffic in a highly predictable and reliable manner. Today's traffic patterns are shifting from TDM to an increasing percentage of bursty data traffic. Internet and data network growth in the past six years has highlighted legacy SONET/SDH's inefficiency in transporting data. Its rigid data hierarchy limits connections to fixed increments that have steep gaps between them. For example, an OC-3/STM-1 translates to 155 Mbps, but the next standard increment that is offered is OC-12/STM-4, which is 622 Mbps.
Inefficient use of bandwidth when transporting Ethernet over SONET/SDH has been overcome by concatenation techniques. If one were to transport 100-Mbps Fast Ethernet over a SONET/SDH channel, for example, the legacy channel selected would be an OC-3/STM-1, which consumes about 155 Mbps of bandwidth. This strands roughly an OC-1's worth of bandwidth (about 55 Mbps). Concatenation supports nonstandard channels such as an STS-2; transporting 100-Mbps Ethernet within an STS-2 (103.68 Mbps) optimizes bandwidth efficiency. Virtual concatenation (VCAT) and the link capacity adjustment scheme (LCAS) are techniques used to further enhance network efficiencies.
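The efficiency gain from right-sizing the channel is easy to quantify. A small sketch comparing a standard OC-3 against a concatenated STS-2 for Fast Ethernet:

```python
# Channel utilization for 100-Mbps Fast Ethernet over a standard OC-3
# versus a nonstandard concatenated STS-2 channel.

STS1_MBPS = 51.84  # line rate of one STS-1

def efficiency(payload_mbps: float, sts_n: int) -> float:
    """Fraction of an STS-N channel actually used by the payload."""
    return payload_mbps / (sts_n * STS1_MBPS)

print(f"OC-3 : {efficiency(100, 3):.1%} used")  # ~64% -- ~55 Mbps stranded
print(f"STS-2: {efficiency(100, 2):.1%} used")  # ~96% used
```

The STS-2 mapping raises utilization from roughly 64 percent to roughly 96 percent for the same client signal.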
VCAT is an inverse multiplexing procedure whereby the contiguous bandwidth is broken into individual synchronous payload envelopes (SPEs) at the source transmitter that are logically represented in a virtual concatenation group (VCG). The VCG members are transported as individual SPEs across the SONET/SDH network and recombined at the far-end destination VCG receiver. VCAT is used to provision point-to-point connections over the SONET network using any available capacity to construct an (N * STS-1)-sized pipe for packet traffic.
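Sizing the (N * STS-1) pipe is a ceiling division against the usable payload of one SPE. The sketch below assumes roughly 48.384 Mbps of usable payload per STS-1, a commonly quoted figure; actual mappings depend on the encapsulation used:

```python
import math

# VCAT sizing: pick the smallest STS-1-Nv group whose aggregate
# payload covers the client signal rate.

STS1_PAYLOAD_MBPS = 48.384  # assumed usable payload of one STS-1 SPE

def vcg_size(client_mbps: float) -> int:
    """Smallest N such that N * STS-1 payloads carry the client rate."""
    return math.ceil(client_mbps / STS1_PAYLOAD_MBPS)

print(vcg_size(100))   # Fast Ethernet at full line rate -> STS-1-3v
print(vcg_size(1000))  # Gigabit Ethernet -> STS-1-21v
```

In practice, Fast Ethernet is often provisioned in a smaller group (inter-frame overhead is not transported), but the ceiling calculation is the starting point.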
LCAS is a protocol that keeps the sender and receiver synchronized while a virtually concatenated circuit is increased or decreased in size, so that members can be added or removed hitlessly, without interrupting the data signal.
QoS
The capability to classify packets, queue them based on that classification, and then schedule them efficiently into Synchronous Transport Signal (STS) channels is necessary to enable services that create and maintain sustainable service provider business cases. QoS is necessary in a service provider environment, to maintain customer SLAs. The various protection mechanisms used in optical networks such as APS, 1+1, two-fiber UPSR/SNCP, two-fiber BLSR/MS-SPRing, and four-fiber BLSR/MS-SPRing also determine the QoS and consequent SLA that a carrier can guarantee the customer. For example, circuits provisioned over a four-fiber BLSR/MS-SPRing ring can be offered with a higher QoS guarantee and SLA than a circuit provisioned over UPSR/SNCP, because four-fiber BLSR/MS-SPRing provides maximum redundancy.
SONET/SDH Encapsulation of Ethernet
Various methods for encapsulating Ethernet packets into SONET/SDH payloads have been discussed in the industry. The MSPP strategy focuses on delivering a single encapsulation scheme for both Ethernet and storage-area network (SAN) extension services while enabling interoperability between the transport components and the Layer 2 and 3 devices, which can exist within service provider networks. The vendor-accepted standard for encapsulation of Ethernet within SONET/SDH is the ANSI T1X1.5 Generic Framing Procedure (GFP). GFP provides a generic way to adapt various data traffic types from the client interface onto a synchronous optical transmission channel, such as SONET/SDH or WDM. GFP works in conjunction with VCAT and LCAS schemes, described earlier.
Packet Ring Technologies
Various technologies enable the transport of Ethernet services over SONET/SDH. Shared packet ring (SPR) and resilient packet ring (RPR) implementations vary by vendor. The only true standard is the IEEE 802.17 RPR specification. RPR technology uses a dual-counter rotating fiber ring topology to transport working traffic between nodes. RPR uses spatial reuse of bandwidth, which ensures that bandwidth is only consumed between the source and destination nodes. Packets are removed at their destination, leaving bandwidth available to downstream nodes on the ring.
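Spatial reuse can be made concrete with a toy ring model. In the sketch below (node numbering is hypothetical), a frame consumes only the spans between source and destination on one ringlet; the remaining spans stay free for other node pairs:

```python
# RPR spatial reuse: traffic from src to dst occupies bandwidth only on
# the spans it traverses; downstream spans remain available for reuse.

def spans_used(src: int, dst: int, ring_size: int):
    """Spans traversed on one ringlet (clockwise) from src to dst."""
    spans = []
    node = src
    while node != dst:
        nxt = (node + 1) % ring_size
        spans.append((node, nxt))
        node = nxt
    return spans

# On a 6-node ring, a flow from node 0 to node 2 occupies 2 spans;
# spans 2->3, 3->4, 4->5, and 5->0 can carry other traffic concurrently.
print(spans_used(0, 2, 6))  # [(0, 1), (1, 2)]
```

Because packets are stripped at the destination, two flows on disjoint arcs of the ring can each use the full ring bandwidth.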
Proactive span protection automatically avoids failed spans within 50 ms, thereby providing SONET/SDH-like resiliency in RPR architectures. RPR provides support for latency- and jitter-sensitive traffic, such as voice and video. RPR supports topologies of more than 100 nodes per ring with an automatic topology-discovery mechanism that works across multiple, interconnected rings.
SPR architectures are essentially Switched Ethernet over SONET/SDH optical transport topologies that follow the rules of bridging and Ethernet VLANs. SPR supports dual 802.1Q VLAN tagging and up to eight 802.1P classes of service. SPR and RPR are further discussed in later chapters.
Provisioning
MSPPs use GUI-based craft interfaces, management platforms, and the familiar IOS command-line interface (CLI) to simplify provisioning of SONET/SDH circuits, Ethernet circuits, IP routing, RPR, MPLS, and DWDM. Carriers and service providers that have experienced the complexities of Transaction Language 1 (TL-1) provisioning truly appreciate the ease of MSPP provisioning. Automated GUI-based provisioning is intuitive and reduces the learning curve associated with mastering TL-1. It also reduces the risk of incorrectly provisioned circuits, which could result in breach of SLAs.
Signaling
MSPPs use signaling-based circuit provisioning using the user-network interface (UNI) signaling protocol, a standards-based unified control plane, and GMPLS signaling. GMPLS is also referred to as multiprotocol lambda switching. GMPLS supports packet switching devices as well as devices that perform switching in the time, wavelength, and space domains. GMPLS provides the framework for a unified control and management plane for IP and optical transport networks. The ITU G.ASON framework includes support for automated routing and signaling of optical connections at the UNI, network-network interface (NNI), and connection-control interface (CCI) level.
Dense Wavelength-Division Multiplexing
Dense wavelength-division multiplexing (DWDM) is a method of transmitting multiple channels, or wavelengths, over a single optical fiber. DWDM maximizes the use of the installed fiber base and allows new services to be quickly and easily provisioned over the existing fiber infrastructure. DWDM multiplies the bandwidth carriers can extract from the same fiber pair, avoids unnecessary fiber build-out in congested conduits, and provides a scalable upgrade path as bandwidth needs grow.
As illustrated in Figure 1-5, various wavelengths are multiplexed over the fiber. End or intermediate DWDM devices perform reamplification, reshaping, and retiming (3R) functions. Individual wavelengths or channels can be dropped or inserted along a route. DWDM open architecture systems allow a variety of devices to be connected, including SONET/SDH ADMs, ATM switches, and IP routers.
Figure 1-5. DWDM Schematic
DWDM platforms provide the following:
Optical multiplexing/demultiplexing to combine/separate ITU-T grid wavelengths launched by optical transmitters/transponders
Optical filtering to combine ITU-T grid wavelengths launched by MSPPs
Optical ADM functionality to exchange wavelengths on SONET/SDH spans between the MSPP and the DWDM device
Optical performance monitoring (OPM)
Fiber-optic signal amplification and 3R functionality
Long-haul DWDM is commonly divided into three categories, with the main differentiator being unregenerated transmission distance: long haul (LH), from 0 to 600 km; extended long haul (ELH), from 600 to 2000 km; and ultra long haul (ULH), which extends beyond 3000 km.
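The classification reduces to simple distance thresholds. A minimal sketch (the text leaves the 2000-3000 km region unlabeled, so this sketch assumes it still falls under ELH):

```python
# Long-haul DWDM classes by unregenerated reach, per the ranges in the
# text. Assumption: distances between 2000 and 3000 km are treated as
# ELH, since the text leaves that region unassigned.

def dwdm_class(km: float) -> str:
    """Classify a span by unregenerated transmission distance."""
    if km <= 600:
        return "LH"
    if km < 3000:
        return "ELH"
    return "ULH"

print(dwdm_class(400), dwdm_class(1500), dwdm_class(3200))
```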
Storage networking is one of the key drivers for DWDM. The amount of data that enterprises store, including content or e-commerce databases, has increased exponentially. This has, in turn, driven up the demand for more storage connectivity. Information storage also includes backing up servers and providing updated, consistent mirror images of that data at remote sites for disaster recovery. Storage-area networking uses protocols such as ESCON, FICON, Fibre Channel, or Gigabit Ethernet.
The availability of fiber plants has become a key challenge for many companies that need multiple connections across a metropolitan-area network (MAN). Before DWDM technology was available, a company that wanted to connect data centers had to provide fiber for each individual connection. For small numbers of connections, this was not a problem. However, as shown in Figure 1-6, eight pairs of fiber-optic cable would be required if an organization were to connect two data centers via Gigabit Ethernet along with multiple ESCON channels and FICON over Fibre Channel.
Figure 1-6. Storage-Area Topology
If the organization owned the fiber plant, it would be responsible for the underground installation of the fiber-optic cable and its maintenance. Many organizations outsource such work to dark-fiber providers, which charge per strand per kilometer of fiber. Networks such as the one in Figure 1-6 could therefore be extremely expensive to build and maintain.
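Because lease cost scales with strand count and distance, collapsing eight dedicated pairs onto one DWDM pair changes the economics directly. The distance and per-strand rate below are purely hypothetical:

```python
# Dark-fiber lease cost scales with strands and kilometers; DWDM
# collapses many connections onto a single pair. All prices here are
# hypothetical, for illustration only.

def annual_fiber_cost(pairs: int, km: float, per_strand_km: float) -> float:
    """Yearly lease cost: each pair is two strands."""
    return pairs * 2 * km * per_strand_km

KM, RATE = 40, 1200.0  # assumed: 40-km MAN span, $1200 per strand-km/year
print(annual_fiber_cost(8, KM, RATE))  # eight dedicated pairs: 768000.0
print(annual_fiber_cost(1, KM, RATE))  # one DWDM pair:          96000.0
```

Under these assumptions the DWDM design cuts the fiber lease by a factor of eight, before counting the cost of the DWDM equipment itself.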
The metro DWDM platform enables service providers to deliver managed wavelength-based ESCON, FICON, Fibre Channel, and Ethernet services to customers offering outsourced storage or content services. This facilitates the convergence of data, storage, and SONET/SDH networking and provides an infrastructure capable of reliable, high-availability multiservice networking in the MAN at very economical levels.
Using DWDM technology, service providers can split out wavelengths and assign one to each connection, as shown in Figure 1-7. Each connection is assigned a wavelength instead of its own fiber pair; as illustrated in Figure 1-7, eight wavelengths are carried over a single pair of fibers. Numerous data streams can thus be multiplexed, at different speeds, across a single fiber pair, saving the organization considerable expense. In addition, service providers can provision wavelengths to enterprise customers and charge for the number of wavelengths used.
Figure 1-7. Storage-Area Topology Using DWDM
Consider a DWDM platform that provides 32 wavelengths multiplexed over a single fiber pair. By supporting speeds from 10 Mbps up to OC-192 (10 Gbps), the system could provide up to 320 Gbps of bandwidth. To increase the density of signals on the fiber-optic cable, most users would start by aggregating their existing traffic, such as Gigabit Ethernet, ESCON (136 Mbps/200 Mbps), FICON (1.062 Gbps), or Fibre Channel (640 Mbps/1.062 Gbps/2.125 Gbps) via DWDM.
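The headline capacity figure is straightforward multiplication, worth making explicit:

```python
# Aggregate capacity of a DWDM system: wavelengths times per-channel rate.

def system_capacity_gbps(wavelengths: int, per_channel_gbps: float) -> float:
    return wavelengths * per_channel_gbps

# 32 wavelengths, each carrying OC-192 (10 Gbps), on one fiber pair.
print(system_capacity_gbps(32, 10))  # 320 Gbps
```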
Users also have the ability to increase the bandwidth on each of the channels (wavelengths)—for example, by moving from OC-3 to OC-48. Another key benefit is protocol transparency, which alleviates the need for protocol conversion, the associated complexity, and the transmission latencies that might result. Protocol transparency is accomplished with 2R networks and enables support for all traffic types, regardless of bandwidth and protocol.