SONET/SDH
SONET/SDH networks are typically built in a hierarchical topology. The campus network could be GE or even an OC-3/STM-1 or OC-12/STM-4 SONET/SDH ring. Campus-to-central office (CO) traffic is normally carried over the metro access ring. CO-to-CO traffic is commonly carried over metro core rings and, finally, if the traffic is required to leave the metro core, long-haul traffic is typically carried over DWDM circuits.
As shown in Figure 1-1, customer rings are known as access rings and typically span a campus. The access rings converge and interconnect at major network traffic collection points. These collection points are referred to as points of presence (POPs) by carriers or as headends in the cable industry. The collector rings aggregate the access ring traffic and groom this traffic into the core rings, which are often referred to as interoffice facility (IOF) or metro core rings because they interconnect these collection points. Access rings reach further out to customer premises locations and are said to subtend off the larger collector rings. The collector rings subtend off the larger core rings.
Figure 1-1. SONET/SDH Hierarchical Topology
In legacy-based SONET/SDH time-division multiplexing (TDM), the sum of all subtending access ring bandwidth equals the total bandwidth required at the collector. Similarly, the sum of all subtending collector ring bandwidths equals the total bandwidth required at the core backbone.
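To make this summation rule concrete, the following Python sketch (the ring counts and circuit rates are invented for illustration) sizes a collector ring and a core ring under the legacy TDM assumption that aggregate bandwidth is the simple sum of all subtending rings.

```python
# Hypothetical example: sizing collector and core rings under legacy TDM rules,
# where required bandwidth is the simple sum of all subtending rings.

OC3 = 155.52   # Mbps
OC12 = 622.08  # Mbps

# Four access rings subtend off one collector ring (assumed rates).
access_rings_mbps = [OC3, OC3, OC3, OC12]
collector_demand = sum(access_rings_mbps)

# Three such collector rings subtend off the metro core (assumed identical).
core_demand = 3 * collector_demand

print(f"Collector ring must carry {collector_demand:.2f} Mbps")
print(f"Core ring must carry {core_demand:.2f} Mbps")
```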
Legacy SONET networks use automatic protection switching (APS), 1+1 protection, linear APS, two-fiber unidirectional path-switched ring (UPSR), two-fiber bidirectional line-switched ring (BLSR), or four-fiber BLSR protection mechanisms. Legacy SDH networks use multiplex section protection (MSP) 1+1, MSP 1:1, and MSP 1:N. They also implement two-fiber subnetwork connection protection (SNCP), two-fiber multiplexed section protection ring (MS-SPRing), or four-fiber MS-SPRing protection mechanisms. These protection mechanisms are also used by next-generation SONET/SDH, and are discussed in greater detail in later chapters.
Traffic flows in the access rings are typically of a hub-and-spoke nature, consolidating back at the local CO. UPSR/SNCP architectures are well suited for such multiple point-to-point or two-node traffic flows. The hub-and-spoke architecture also extends to 1+1 and linear access networks. Collector and core rings, however, support large amounts of traffic between access rings. As such, core ring traffic travels in a mesh, from any CO to any other CO. Because of their inherent potential for bandwidth reservation, BLSR/MS-SPRing architectures work well for such distributed "mesh" and node-to-node traffic applications.
Legacy SONET/SDH
Legacy SONET/SDH networks use add/drop multiplexers (ADMs) that add or drop OC-N or STM-N circuits between ADM nodes on the ring. The relationship between the SONET Optical Carrier (OC-N) levels and the SDH Synchronous Transport Module (STM-N) levels is presented in Table 1-1.
Table 1-1. SONET OC-N and Its SDH Equivalent Signal Level

SONET Signal    T-Carrier Equivalent    SDH Equivalent    Bandwidth
OC-3            84 * T1                 STM-1             155.52 Mbps
OC-12           336 * T1                STM-4             622.08 Mbps
OC-48           1344 * T1               STM-16            2488.32 Mbps (2.488 Gbps)
OC-192          5376 * T1               STM-64            9953.28 Mbps (9.953 Gbps)
OC-768          21,504 * T1             STM-256           39,813.12 Mbps (39.813 Gbps)
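The values in Table 1-1 follow directly from the 51.84-Mbps STS-1/OC-1 base rate and the 28 DS-1s that map into each DS-3 (and therefore into each STS-1); a short Python sketch of that arithmetic:

```python
# Derive the Table 1-1 values from the 51.84-Mbps STS-1 base rate.
STS1_MBPS = 51.84      # OC-1 / STS-1 line rate
T1_PER_STS1 = 28       # 28 DS-1s map into one DS-3 / STS-1

for n, stm in [(3, "STM-1"), (12, "STM-4"), (48, "STM-16"),
               (192, "STM-64"), (768, "STM-256")]:
    rate = n * STS1_MBPS
    t1s = n * T1_PER_STS1
    print(f"OC-{n:<3} = {t1s:>6} x T1 = {stm:<7} = {rate:,.2f} Mbps")
```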
SONET topologies typically use digital cross-connect systems (DCS), also known as DACS, to groom lower-bandwidth DS-0 or DS-1 circuits to higher DS-3, OC-3, or STM-1 levels. SDH architectures use the term DXC for a digital cross-connect switch. Higher-order DXCs are used to cross-connect or switch traffic in 155-Mbps (STM-1) blocks, whereas lower-order DXCs are used to cross-connect traffic at 1.544-Mbps (DS-1) or 2.048-Mbps (E1) rates. Next-generation MSPPs integrate DCS/DXC functionality within the chassis.
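As a rough illustration of this grooming hierarchy, the sketch below (the DS-1 demand figure is hypothetical) computes how many DS-3 and OC-3 containers are needed to carry a batch of customer T1 circuits.

```python
# TDM grooming hierarchy used by a DCS/DXC (nominal channel counts).
DS0_PER_DS1 = 24      # 24 x 64-kbps DS-0s per DS-1
DS1_PER_DS3 = 28      # 28 DS-1s per DS-3
DS3_PER_OC3 = 3       # three STS-1s/DS-3s per OC-3 (STM-1)

ds1_demand = 70       # hypothetical number of customer T1 circuits to groom
ds3_needed = -(-ds1_demand // DS1_PER_DS3)    # ceiling division -> 3 DS-3s
oc3_needed = -(-ds3_needed // DS3_PER_OC3)    # -> 1 OC-3

print(f"{ds1_demand} DS-1s groom into {ds3_needed} DS-3s, or {oc3_needed} OC-3")
```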
Various CPE services, such as T1 or FT1 private line services, terminate on the DACS. Ethernet services could be provided using routers directly connected to the DACS, as shown in Figure 1-2. Voice services could be carried over TDM circuits by attaching the switches or private branch exchanges (PBXs) directly to the ADMs or via the DACS. Attaching ATM core switches directly to the ADMs provides ATM transport. In the case of ATM, the underlying SONET/SDH concatenated circuit would be completely transparent and the provider would need to provision permanent virtual circuits (PVCs) or switched virtual circuits (SVCs) as per customer requirements.
Figure 1-2. Legacy SONET/SDH Applications
SONET/SDH Multiservice Provisioning Platforms
Since the late 1990s, the distinction between metro core and access rings has blurred with the advent of next-generation SONET/SDH devices known as multiservice provisioning platforms (MSPPs). As illustrated in Figure 1-3, high-bandwidth core rings can aggregate customer traffic and perform a CO-to-CO function. The MSPP can perform the duties of an ADM and a DCS/DXC on access rings and metro core rings.
Figure 1-3. Next-Generation SONET/SDH MSPP Topology
The current drivers for increasing optical bandwidth include unicast data (including voice over IP), TDM voice, videoconferencing, and multicast distance-learning applications. The optical infrastructure provides a true broadband medium for multiservice transport. Current optical technologies in use can be broadly classified, as shown in Table 1-2.
Table 1-2. Classification of Optical Technology

Technology                  Application
Gigabit Ethernet            Metro access or metro core
Legacy SONET/SDH            Metro access, metro core, and long haul
Multiservice SONET/SDH      Metro access, metro core, and long haul
Packet over SONET/SDH       Metro access, metro core, and long haul
DWDM                        Metro access, metro core, and long haul
Legacy SONET/SDH TDM bandwidth summation no longer applies when packet- or frame-based traffic is statistically multiplexed onto a SONET/SDH ring. MSPPs can share SONET/SDH bandwidth among TDM, Ethernet, and other customer premises equipment (CPE) services. The inherent reliability of SONET/SDH is extended to Ethernet services when provisioned over MSPPs. These data services can be implemented across UPSR/SNCP, BLSR/MS-SPRing, linear, unprotected, and path-protected meshed network (PPMN) topologies. Furthermore, SONET/SDH 50-ms recovery is provided for these Ethernet services in the same manner as is done currently for TDM-based DS-N and OC-N circuits. The MSPP also includes support for resilient packet ring (IEEE 802.17) and has a roadmap for Generalized Multiprotocol Label Switching (GMPLS) with support for automatically switched optical networks (ASONs).
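To contrast statistical multiplexing with the legacy summation rule, the following sketch (the per-customer rates, average utilization, and burst headroom are all assumed values) estimates the ring bandwidth a statistically multiplexed Ethernet mix might require versus a strict TDM allocation.

```python
# Hypothetical comparison: strict TDM sizing vs. statistical multiplexing.
# Peak rates for ten 100-Mbps Ethernet customers on one ring.
peak_rates_mbps = [100] * 10

# Legacy TDM: every circuit is nailed up at its full rate.
tdm_required = sum(peak_rates_mbps)                 # 1000 Mbps

# Statistical multiplexing: assume 30 percent average utilization plus
# headroom for bursts (both figures are illustrative assumptions).
avg_utilization = 0.30
burst_headroom = 1.5
statmux_required = sum(peak_rates_mbps) * avg_utilization * burst_headroom

print(f"TDM allocation:      {tdm_required:.0f} Mbps")
print(f"Stat-mux allocation: {statmux_required:.0f} Mbps")
```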
MSPPs enable carriers to provide packet-based services over SONET/SDH platforms. These services can be offered with varying service level agreements (SLAs) using Layer 1.5, 2, or 3 switching and quality of service (QoS) mechanisms. The optical network QoS includes the following parameters:
Degree of transparency
Level of protection
Required bit error rate
End-to-end delay
Jitter requirements
As illustrated in Figure 1-4, multiservice provisioning platforms integrate DCS/DXC and Ethernet switching functionality within the device. However, ATM services would need an external core ATM switch to provision end-user PVCs or SVCs. The MSPP can provide private line TDM services, 10/100/1000-Mbps Ethernet services, and Multiprotocol Label Switching (MPLS) IP-routed services. This means that the service provider could build Layer 2 Ethernet virtual LAN (VLAN) virtual private networks (VPNs) or Layer 2.5 MPLS VPNs. Such versatility positions the MSPP as the solution of choice for metro access and core applications. Integration of DWDM capability also extends core and long-haul transport as an application for the MSPP.
Figure 1-4. Next-Generation MSPP Applications
Improving SONET/SDH Bandwidth Efficiency
Legacy SONET/SDH networks were designed to transport TDM traffic in a highly predictable and reliable manner. Today's traffic patterns are shifting from TDM to an increasing percentage of bursty data traffic. Internet and data network growth in the past six years has highlighted legacy SONET/SDH's inefficiency in transporting data. Its rigid data hierarchy limits connections to fixed increments that have steep gaps between them. For example, an OC-3/STM-1 translates to 155 Mbps, but the next standard increment that is offered is OC-12/STM-4, which is 622 Mbps.
Inefficiency of bandwidth use in transporting Ethernet over SONET/SDH has been overcome by concatenation techniques. If one were to transport 100-Mbps Fast Ethernet over a SONET/SDH channel, for example, the legacy SONET/SDH channel selected would be an OC-3/STM-1. The OC-3/STM-1 channel consumes about 155 Mbps of bandwidth, wasting roughly an OC-1, or 51.84 Mbps, of capacity. Concatenation supports nonstandard channels such as an STS-2. Transporting 100-Mbps Ethernet within an STS-2 (103.68 Mbps) optimizes bandwidth efficiency. Virtual concatenation (VCAT) and the link capacity adjustment scheme (LCAS) are techniques used to further enhance network efficiencies.
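The efficiency gain in this Fast Ethernet example can be quantified directly from the STS-1 rate; a quick calculation:

```python
# Bandwidth efficiency of carrying 100-Mbps Fast Ethernet over SONET channels.
STS1 = 51.84                 # Mbps
ethernet = 100.0             # Mbps

oc3 = 3 * STS1               # 155.52 Mbps, the smallest standard channel
sts2 = 2 * STS1              # 103.68 Mbps, a nonstandard concatenated channel

print(f"OC-3 efficiency:  {ethernet / oc3:.0%}")    # ~64%
print(f"STS-2 efficiency: {ethernet / sts2:.0%}")   # ~96%
```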
VCAT is an inverse multiplexing procedure whereby the contiguous bandwidth is broken into individual synchronous payload envelopes (SPEs) at the source transmitter that are logically represented in a virtual concatenation group (VCG). The VCG members are transported as individual SPEs across the SONET/SDH network and recombined at the far-end destination VCG receiver. VCAT is used to provision point-to-point connections over the SONET network using any available capacity to construct an (N * STS-1)-sized pipe for packet traffic.
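As a sketch of how such an (N * STS-1) pipe might be sized, the illustrative function below picks the smallest member count that covers a requested service rate. SONET overhead is ignored for simplicity, so a real design may need one additional member.

```python
import math

STS1_MBPS = 51.84   # raw STS-1 rate; usable payload per member is slightly
                    # lower once section, line, and path overhead is subtracted

def vcat_group_size(service_mbps):
    """Smallest N so that an (N * STS-1) virtually concatenated group
    covers the requested service rate (overhead ignored for simplicity)."""
    return math.ceil(service_mbps / STS1_MBPS)

print(vcat_group_size(100))    # 2  -> STS-1-2v for Fast Ethernet
print(vcat_group_size(1000))   # 20 -> STS-1-20v for Gigabit Ethernet (before overhead)
```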
LCAS is a protocol that keeps the sender and receiver synchronized while a virtually concatenated circuit is increased or decreased in size, in a hitless manner that does not disrupt the data signal.
QoS
The capability to classify packets, queue them based on that classification, and then schedule them efficiently into Synchronous Transport Signal (STS) channels is necessary to enable services that create and maintain sustainable service provider business cases. QoS is necessary in a service provider environment, to maintain customer SLAs. The various protection mechanisms used in optical networks such as APS, 1+1, two-fiber UPSR/SNCP, two-fiber BLSR/MS-SPRing, and four-fiber BLSR/MS-SPRing also determine the QoS and consequent SLA that a carrier can guarantee the customer. For example, circuits provisioned over a four-fiber BLSR/MS-SPRing ring can be offered with a higher QoS guarantee and SLA than a circuit provisioned over UPSR/SNCP, because four-fiber BLSR/MS-SPRing provides maximum redundancy.
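As a toy illustration of the classify-queue-schedule sequence (the class names, priority order, and byte budget are invented, not a description of an MSPP feature set), the following sketch drains per-class queues in strict priority order into a fixed-rate STS channel.

```python
from collections import deque

# Toy classifier/scheduler: three service classes drained in strict priority
# into a channel with a fixed per-interval byte budget.
queues = {"voice": deque(), "business-data": deque(), "best-effort": deque()}

def classify(packet):
    return packet.get("class", "best-effort")

def enqueue(packet):
    queues[classify(packet)].append(packet)

def schedule(byte_budget):
    """Serve higher-priority queues first until the STS channel budget is spent."""
    sent = []
    for cls in ("voice", "business-data", "best-effort"):
        q = queues[cls]
        while q and q[0]["bytes"] <= byte_budget:
            pkt = q.popleft()
            byte_budget -= pkt["bytes"]
            sent.append(pkt)
    return sent

enqueue({"class": "voice", "bytes": 200})
enqueue({"class": "best-effort", "bytes": 1500})
print(schedule(byte_budget=1600))   # voice is served first; best-effort waits
```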
SONET/SDH Encapsulation of Ethernet
Various methods for encapsulating Ethernet packets into SONET/SDH payloads have been discussed in the industry. The MSPP strategy focuses on delivering a single encapsulation scheme for both Ethernet and storage-area network (SAN) extension services while enabling interoperability between the transport components and the Layer 2 and 3 devices, which can exist within service provider networks. The vendor-accepted standard for encapsulation of Ethernet within SONET/SDH is the ANSI T1X1.5 Generic Framing Procedure (GFP). GFP provides a generic way to adapt various data traffic types from the client interface onto a synchronous optical transmission channel, such as SONET/SDH or WDM. GFP works in conjunction with VCAT and LCAS schemes, described earlier.
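The following is a highly simplified sketch of the GFP-F frame layout described in ITU-T G.7041: a core header carrying a payload length indicator (PLI) protected by a CRC-16 (cHEC), a minimal payload header protected by a tHEC, and the client PDU. Scrambling, extension headers, and the optional payload FCS are omitted, and the payload type value used here is a placeholder rather than a registered identifier.

```python
def crc16_ccitt(data: bytes, poly=0x1021, init=0x0000) -> int:
    """Bitwise CRC-16 (x^16 + x^12 + x^5 + 1), as used for GFP HEC fields."""
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def gfp_frame(client_pdu: bytes, payload_type: int = 0x0001) -> bytes:
    """Very simplified GFP-F frame: core header (PLI + cHEC), payload header
    (Type + tHEC), then the client PDU. Scrambling, extension headers, and
    the optional payload FCS are omitted; the Type value is a placeholder."""
    payload_header = payload_type.to_bytes(2, "big")
    payload_header += crc16_ccitt(payload_header).to_bytes(2, "big")   # tHEC
    payload_area = payload_header + client_pdu

    pli = len(payload_area).to_bytes(2, "big")                         # PLI
    core_header = pli + crc16_ccitt(pli).to_bytes(2, "big")            # cHEC
    return core_header + payload_area

frame = gfp_frame(b"\x00" * 64)     # encapsulate a 64-byte client frame
print(len(frame))                    # 64 + 4 + 4 = 72 bytes
```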
Packet Ring Technologies
Various technologies enable the transport of Ethernet services over SONET/SDH. Shared packet ring (SPR) and resilient packet ring (RPR) implementations vary by vendor. The only true standard is the IEEE 802.17 RPR specification. RPR technology uses a dual counter-rotating fiber ring topology to transport working traffic between nodes. RPR uses spatial reuse of bandwidth, which ensures that bandwidth is consumed only between the source and destination nodes. Packets are removed at their destination, leaving bandwidth available to downstream nodes on the ring.
Proactive span protection automatically avoids failed spans within 50 ms, thereby providing SONET/SDH-like resiliency in RPR architectures. RPR provides support for latency- and jitter-sensitive traffic, such as voice and video. RPR supports topologies of more than 100 nodes per ring with an automatic topology-discovery mechanism that works across multiple, interconnected rings.
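To illustrate spatial reuse, the sketch below (the node count and traffic flows are hypothetical) adds each flow's bandwidth only to the ring spans between its source and destination; because packets are stripped at the destination, spans outside that path remain free for other traffic.

```python
# Illustrative spatial-reuse calculation on one RPR ringlet.
# Flows are (source_node, destination_node, mbps); traffic travels clockwise
# from source to destination and is removed at the destination.
NODES = 6
flows = [(0, 2, 300), (2, 4, 300), (4, 0, 300)]

span_load = [0] * NODES            # span i carries traffic from node i to i+1
for src, dst, mbps in flows:
    node = src
    while node != dst:
        span_load[node] += mbps
        node = (node + 1) % NODES

print(span_load)   # [300, 300, 300, 300, 300, 300]
# Each span carries only 300 Mbps even though total offered load is 900 Mbps,
# because packets are stripped at the destination instead of circling the ring.
```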
SPR architectures are essentially Switched Ethernet over SONET/SDH optical transport topologies that follow the rules of bridging and Ethernet VLANs. SPR supports dual 802.1Q VLAN tagging and up to eight 802.1P classes of service. SPR and RPR are further discussed in later chapters.
Provisioning
MSPPs use GUI-based craft interfaces, management platforms, and the familiar IOS command-line interface (CLI) to simplify the provisioning task of SONET/SDH circuits, Ethernet circuits, IP routing, RPR, MPLS, and DWDM. Carriers and service providers that have experienced the complexities involved with Transaction Language 1 (TL-1) provisioning truly appreciate the ease of MSPP provisioning. Automated GUI-based provisioning is intuitive and reduces the learning curve associated with mastering TL-1. It also reduces the risk associated with incorrectly provisioning circuits that could result in breach of SLAs.
Signaling
MSPPs support signaling-based circuit provisioning using the user-network interface (UNI) signaling protocol, a standards-based unified control plane, and GMPLS signaling. GMPLS is also referred to as multiprotocol lambda switching. GMPLS supports packet-switching devices as well as devices that perform switching in the time, wavelength, and space domains. GMPLS provides the framework for a unified control and management plane for IP and optical transport networks. The ITU-T ASON framework includes support for automated routing and signaling of optical connections at the UNI, network-network interface (NNI), and connection-control interface (CCI) levels.
Dense Wavelength-Division Multiplexing
Dense wavelength-division multiplexing (DWDM) is a method to insert multiple channels or wavelengths over a single optical fiber. DWDM maximizes the use of the installed fiber base and allows new services to be quickly and easily provisioned over the existing fiber infrastructure. DWDM offers bandwidth multiplication for carriers over the same fiber pair. DWDM alleviates unnecessary fiber build-out in congested conduits and provides a scalable upgrade path for bandwidth needs.
As illustrated in Figure 1-5, various wavelengths are multiplexed over the fiber. End or intermediate DWDM devices perform reamplification, reshaping, and retiming (3R) functions. Individual wavelengths or channels can be dropped or inserted along a route. DWDM open architecture systems allow a variety of devices to be connected, including SONET/SDH ADMs, ATM switches, and IP routers.
Figure 1-5. DWDM Schematic
DWDM platforms provide the following:
Optical multiplexing/demultiplexing to combine/separate ITU-T grid wavelengths launched by optical transmitters/transponders
Optical filtering to combine ITU-T grid wavelengths launched by MSPPs
Optical ADM functionality to exchange wavelengths on SONET/SDH spans between the MSPP and the DWDM device
Optical performance monitoring (OPM)
Fiber-optic signal amplification and 3R functionality
Long-haul DWDM is commonly divided into three categories, with the main differentiator being unregenerated transmission distance: long haul (LH), which ranges from 0 to 600 km; extended long haul (ELH), which ranges from 600 to 2000 km; and ultra long haul (ULH), which covers distances of 3000 km and beyond.
Storage networking is one of the key drivers for DWDM. The amount of data that enterprises store, including content or e-commerce databases, has increased exponentially. This has, in turn, driven up the demand for more storage connectivity. Information storage also includes backing up servers and providing updated, consistent mirror images of that data at remote sites for disaster recovery. Storage-area networking uses protocols such as ESCON, FICON, Fibre Channel, or Gigabit Ethernet.
The availability of fiber plants has become a key challenge for many companies that need multiple connections across a metropolitan-area network (MAN). Before DWDM technology was available, a company that wanted to connect data centers had to provide fiber for each individual connection. For small numbers of connections, this was not a problem. However, as shown in Figure 1-6, eight pairs of fiber-optic cable would be required if an organization were to connect two data centers via Gigabit Ethernet along with multiple ESCON channels and FICON over Fibre Channel.
Figure 1-6. Storage-Area Topology
If the organization owned the fiber plant, it would be responsible for the underground installation of the fiber-optic cable and its maintenance. Many organizations outsource such work to dark-fiber providers. Fiber providers charge per strand, per kilometer of fiber. Therefore, networks such as the one in Figure 1-6 can be extremely expensive to build and maintain.
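A rough, purely hypothetical comparison shows why per-strand, per-kilometer pricing makes the dedicated-fiber approach expensive relative to carrying all connections as wavelengths on a single pair (the lease rate and distance below are invented for illustration):

```python
# Hypothetical dark-fiber lease comparison (all figures are illustrative).
price_per_strand_km = 100     # currency units per strand, per km, per month
distance_km = 40              # metro span between the two data centers
connections = 8               # GE + ESCON + FICON/Fibre Channel links

# One dedicated fiber pair (2 strands) per connection...
dedicated = connections * 2 * distance_km * price_per_strand_km
# ...versus a single pair carrying all eight connections as DWDM wavelengths.
dwdm = 2 * distance_km * price_per_strand_km

print(dedicated, dwdm)        # 64000 vs. 8000 per month
```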
The metro DWDM platform enables service providers to deliver managed wavelength-based ESCON, FICON, Fibre Channel, and Ethernet services to customers offering outsourced storage or content services. This facilitates the convergence of data, storage, and SONET/SDH networking and provides an infrastructure capable of reliable, high-availability multiservice networking in the MAN at very economical levels.
Using DWDM technology, the service providers can strip off wavelengths and assign them to each connection as shown in Figure 1-7. Each connection is now assigned a wavelength, instead of being assigned to its own fiber pair. As illustrated in Figure 1-7, eight wavelengths are assigned to a single pair of fibers. This way, numerous data streams can be multiplexed at different speeds, across a single fiber pair. This saves the organization considerable expense. In addition, service providers can provision wavelengths to enterprise customers and charge for the number of wavelengths used.
Figure 1-7. Storage-Area Topology Using DWDM
Consider a DWDM platform that provides 32 wavelengths multiplexed over a single fiber pair. By supporting speeds from 10 Mbps up to OC-192 (10 Gbps), the system could provide up to 320 Gbps of bandwidth. To increase the density of signals on the fiber-optic cable, most users would start by aggregating their existing traffic, such as Gigabit Ethernet, ESCON (136 Mbps/200 Mbps), FICON (1.062 Gbps), or Fibre Channel (640 Mbps/1.062 Gbps/2.125 Gbps) via DWDM.
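The 320-Gbps figure is simply the wavelength count multiplied by the highest per-channel rate; the same arithmetic gives the aggregate for any other channel mix (the mix below is assumed for illustration):

```python
# Aggregate capacity of a 32-wavelength DWDM system.
wavelengths = 32
max_per_channel_gbps = 10.0                    # OC-192 per wavelength
print(wavelengths * max_per_channel_gbps)      # 320.0 Gbps maximum

# A hypothetical mixed fill: 8 x OC-192, 16 x GE, 8 x 2G Fibre Channel.
mixed = 8 * 10.0 + 16 * 1.0 + 8 * 2.125
print(mixed)                                   # 113.0 Gbps in service
```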
Users also have the ability to increase the bandwidth on each of the channels (wavelengths)—for example, by moving from OC-3 to OC-48. Another key benefit is protocol transparency, which alleviates the need for protocol conversion, the associated complexity, and the transmission latencies that might result. Protocol transparency is accomplished with 2R networks and enables support for all traffic types, regardless of bandwidth and protocol.