Multicast deployment is often more difficult than unicast deployment, and there are many issues to consider. The following sections provide information on the key issues you need to address, including tips on bandwidth sharing, how network devices forward multicast traffic, and how different network environments affect your multicast deployment.
Even though connectivity for multicast and unicast applications is very similar, multicast deployment is often more difficult than unicast deployment.
Both multicast and unicast rely on the network layer for connectivity:
However, multicast is often more difficult to deploy than unicast for the following reasons:
The bandwidth sharing mechanism that you use for multicasting depends on which version of SmartSockets you use. SmartSockets Versions 6.2 and higher support congestion control. Earlier versions of SmartSockets require administrators to configure the amount of bandwidth that they expect the network to deliver.
Bandwidth sharing for multicast is not automatic as it is for unicast. Reliable unicast transports (for example, TCP) automatically share available network bandwidth among all sessions contending for it. Administrators play no role in this process; protocol stacks measure the round-trip time and packet loss rates and dynamically determine available bandwidth. Unicast assumes that all streams have equal priority and automatically divides bandwidth accordingly.
TIBCO SmartSockets Versions 6.2 and higher provide support for congestion control, which dynamically determines bandwidth limits and maximizes throughput. For information on setting congestion control options, see Bandwidth Management.
SmartSockets versions prior to Version 6.2 do not provide support for congestion control. Instead, SmartSockets relies on administrators to configure the amount of bandwidth that they expect the network to deliver. If administrators fail to configure the amount of bandwidth, congestion can cause packet loss and either erratic behavior or application failure.
To optimize throughput, administrators need to limit how fast SmartSockets Multicast sends data. If you exceed your network’s bandwidth capacity, the congestion causes the network to perform below its maximum capacity. For example, if you ask SmartSockets Multicast to deliver 11 Mbps over a 10 Mbps network layer, you may only receive 5 or 7 Mbps. In addition, you will probably experience chaotic behavior based on the loss rates and other factors. However, if you ask SmartSockets Multicast to deliver 9 Mbps over a 10 Mbps network layer, it will.
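For illustration, the sketch below applies such a rate cap through the SmartSockets C command interface. The option name pgm_send_rate is hypothetical, a stand-in for the actual rate option described in Bandwidth Management; TutCommandParseStr simply executes a SmartSockets command string:

/* Illustrative sketch: cap the multicast send rate below the
 * network's capacity, for example 9 Mbps on a 10 Mbps network.
 * NOTE: "pgm_send_rate" is a HYPOTHETICAL option name; substitute
 * the actual bandwidth option documented in Bandwidth Management. */
#include <rtworks/ipc.h>

void limit_multicast_rate(void)
{
  if (!TutCommandParseStr("setopt pgm_send_rate 9000000")) {
    TutOut("Could not set the send rate option.\n");
  }
}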
Here are some tips for bandwidth sharing in an environment that uses a SmartSockets version prior to Version 6.2:
Client failovers using the multicast protocol, PGM, as the alternate protocol do not work. Because multicast uses threads on the client side, threading must be initialized before PGM connects to RTgms. To initialize threading, set the Server_Names option to pgm:_node:your_value, which causes PGM to initialize threads when it loads.
For example, if the Server_Names option is set to tcp,pgm:_node:your_value, then after the first successful TCP connection, RTclient stops traversing the Server_Names list until the existing TCP connection is closed. When RTclient loses the connection to RTserver, RTclient attempts to reconnect using TCP. If it cannot reconnect, RTclient connects using PGM. The PGM link driver loads, and threads are initialized.
RTclient cannot initialize threads in the middle of an application; doing so can cause core dumps. Threads must be initialized at program startup.
There is one workaround to this problem: call TipcInitThreads at program startup.
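A minimal sketch of this workaround in C, assuming the conventional SmartSockets header and the no-argument form of TipcInitThreads (check your release's reference pages for the exact signature); error handling is abbreviated:

/* Initialize SmartSockets threading at program startup so that a
 * later failover to PGM does not initialize threads in the middle
 * of the application. */
#include <rtworks/ipc.h>

int main(void)
{
  /* Threads must exist before any connection is created. */
  if (!TipcInitThreads()) {
    TutOut("Could not initialize threads.\n");
    return 1;
  }

  /* TCP first, PGM as the failover protocol; "your_value" is the
   * placeholder from the text above. */
  TutCommandParseStr("setopt server_names tcp,pgm:_node:your_value");

  /* On a lost connection, RTclient can now fall back to PGM safely
   * because threading already exists. */
  if (!TipcSrvCreate(T_IPC_SRV_CONN_FULL)) {
    TutOut("Could not connect to RTserver.\n");
    return 1;
  }

  /* ... application logic ... */

  TipcSrvDestroy(T_IPC_SRV_CONN_NONE);
  return 0;
}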
How multicast packets are forwarded depends on the types of network devices you use:
Physical-layer devices, like hubs, that do not inspect packets to determine whether they are unicast, broadcast, or multicast forward multicast packets to all stations on the network exactly as they forward all other packets.
Physical-layer devices that inspect packets far enough to know if the physical-layer address is unicast, broadcast, or multicast forward or "flood" multicast and broadcast packets to all ports. In contrast, they forward unicast packets only to the port containing the destination physical-layer address. A "dumb" switch is an example of such a device.
Advanced physical-layer devices use knowledge of which physical-layer addresses are members of which network-layer multicast groups to selectively forward multicast packets. These devices monitor network-layer IGMP packets to obtain group membership information. This is often called "IGMP snooping." (A sketch of the host-side group join that generates these IGMP packets follows this list.)
Cisco Systems switches prefer to use a Cisco Systems proprietary protocol called "CGMP" instead. This protocol is used between routers and switches. Like IGMP snooping, it gives the switches the information needed to selectively forward multicast on a per-port basis instead of flooding it. It is easier for switches to run CGMP because it requires no network-layer work on their part.
Network-layer devices such as routers can generally be configured to forward multicast between attached networks even though this is not generally the default configuration. Once enabled, routers monitor IGMP group membership requests from hosts and forward traffic to ports as necessary. See Example Cisco Systems Router Configuration for instructions on how to enable multicast forwarding for Cisco Systems routers.
Routers are also responsible for forwarding multicast traffic to interested devices that are not directly connected to their ports. A router can be configured to forward multicast traffic between its own ports even if it does not forward traffic to other routers. You must configure a multicast routing protocol such as PIM, DVMRP, or MOSPF before routers will distribute multicast traffic to all interested devices within an intranet.
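The group membership information mentioned above originates on the hosts themselves: when an application joins a multicast group, the host's IP stack sends IGMP membership reports, which snooping switches and multicast routers observe. The following generic C sketch uses plain BSD sockets and is not SmartSockets-specific; the group 239.1.2.3 and port 5000 are arbitrary placeholders, and error handling is abbreviated:

/* Joining a multicast group makes the host's IP stack send IGMP
 * membership reports, which IGMP-snooping switches and multicast
 * routers use to decide where to forward traffic. */
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>

int main(void)
{
  int sock = socket(AF_INET, SOCK_DGRAM, 0);
  struct sockaddr_in addr;
  struct ip_mreq mreq;

  memset(&addr, 0, sizeof(addr));
  addr.sin_family = AF_INET;
  addr.sin_addr.s_addr = htonl(INADDR_ANY);
  addr.sin_port = htons(5000);
  bind(sock, (struct sockaddr *)&addr, sizeof(addr));

  /* Joining the group triggers the IGMP membership report. */
  mreq.imr_multiaddr.s_addr = inet_addr("239.1.2.3");
  mreq.imr_interface.s_addr = htonl(INADDR_ANY);
  setsockopt(sock, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq));

  /* Receive one datagram sent to the group, then exit. */
  char buf[1500];
  ssize_t n = recv(sock, buf, sizeof(buf), 0);
  if (n > 0)
    printf("received %ld bytes\n", (long)n);

  close(sock);
  return 0;
}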
Multicast deployment often also involves ensuring that multicast streams go only where they are wanted. This is especially important when high-bandwidth streams are present on a network with some low-bandwidth links or where access must be controlled at the network layer for security reasons. Within a LAN, all Ethernet switches can direct unicast traffic only to ports where it is wanted. However, many Ethernet switches simply flood multicast packets to all ports. Therefore, it may be necessary to configure your network to block the flow of multicast data for bandwidth-sharing or security reasons.
Even before you configure your network specifically for multicast, your network may already pass multicast traffic in some areas. Before configuring your network, you may want to test your existing network to determine where multicast data is already flowing, if it is flowing at all. See Multicast Troubleshooting for advice on testing multicast connectivity.
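As a quick probe, you can pair a receiver like the earlier join sketch with a minimal sender on the segment you want to test. Again, this is generic C over BSD sockets, not SmartSockets-specific, and the group, port, and TTL values are placeholders:

/* Generic multicast sender for connectivity testing. Pair it with
 * a receiver joined to the same group to see whether multicast
 * crosses a given network segment. */
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>

int main(void)
{
  int sock = socket(AF_INET, SOCK_DGRAM, 0);
  unsigned char ttl = 8;  /* raise to cross more router hops */
  struct sockaddr_in group;

  setsockopt(sock, IPPROTO_IP, IP_MULTICAST_TTL, &ttl, sizeof(ttl));

  memset(&group, 0, sizeof(group));
  group.sin_family = AF_INET;
  group.sin_addr.s_addr = inet_addr("239.1.2.3");
  group.sin_port = htons(5000);

  /* Send a handful of test datagrams one second apart. */
  for (int i = 0; i < 5; i++) {
    sendto(sock, "probe", 5, 0,
           (struct sockaddr *)&group, sizeof(group));
    sleep(1);
  }
  close(sock);
  return 0;
}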
Multicast connectivity that happens without explicit network configuration generally sends all multicast traffic to all users on a LAN. See Bandwidth Sharing for the implications of this.
If multicast traffic can already flow in your network, it is most likely to flow between users in close proximity to one another. For example, within a multi-story office building, offices on the same floor are very likely to have multicast connectivity. The likelihood diminishes with distance: only a minority of adjacent floors might have multicast connectivity, floors farther apart are unlikely to have it, multicast connectivity between buildings in a campus or across a wide-area network is very unlikely, and multicast connectivity between sites connected via the Internet is rare unless both sites are educational or research sites.
There are some known issues with the way multicasting works with GMD messages. If GMD messages are being multicast, and a subscribing client does a warm disconnect, some of those GMD messages might be missed. The problem occurs because the client notifies the RTgms process that it is disconnecting, and then the RTgms process notifies the RTserver sending the multicast GMD messages. Potentially, the RTserver might have sent several GMD messages after the client warm disconnect, but before the RTserver received notice from the RTgms process to start buffering the GMD messages.
There are two workarounds to this problem: set the Server_Disconnect_Mode option to gmd_failure, or set it to gmd_success. Either setting prevents any warm disconnects.
Also note that, because of the overhead required to guarantee message delivery, multicasting with GMD is slower than multicasting without GMD.
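A minimal sketch of the option-based workaround, assuming the SmartSockets C command interface; confirm the option name and values against your release's options reference:

/* Keep subscribing clients from disconnecting warmly so that
 * multicast GMD messages are not missed. */
#include <rtworks/ipc.h>

void avoid_warm_disconnects(void)
{
  /* gmd_success is equally valid here; either value prevents
   * warm disconnects. */
  if (!TutCommandParseStr("setopt server_disconnect_mode gmd_failure")) {
    TutOut("Could not set Server_Disconnect_Mode.\n");
  }
}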
SmartSockets Multicast allows PGM packets to be UDP encapsulated instead of being encapsulated directly in IP. UDP encapsulation of PGM packets means that:
Frame relay networks are usually a partial or full mesh of point-to-point connections between end points. Switches in frame relay networks are committed to delivering a given bandwidth, or Committed Information Rate (CIR), on the frame relay circuits between end points. This means that they cannot generate more than one outgoing packet for each incoming packet, even though such packet replication is required to implement broadcast or multicast.
Because frame relay switches cannot broadcast or multicast, Cisco Systems routers can simulate these capabilities by duplicating each broadcast or multicast packet, sending one copy over an interface for each remote end point. By default, a single serial interface with 3 virtual circuits running over it transmits up to 3 point-to-point copies of each broadcast or multicast packet. Be sure to allocate bandwidth for each multicast stream required by your application.
Some satellite applications use frame relay encapsulation but do not use frame relay switches. In satellite broadcast applications, all end points can hear traffic for all virtual circuits. For these applications, the Cisco Systems IOS frame-relay multicast dlci command can send a single copy of multicast packets to a DLCI shared by all end points. Be sure to use it on all serial ports that share the DLCI.
The following example Cisco Systems router configuration enables multicast routing, PIM, and PGM Router Assist on two Ethernet interfaces:

ip multicast-routing
!
interface Ethernet 0
 ip pim sparse-dense-mode
 ip pgm router
!
interface Ethernet 1
 ip pim sparse-dense-mode
 ip pgm router
This also configures automatic discovery of all other PIM routers on Ethernet 0 and 1. Multicast traffic is exchanged among all PIM routers.
See the Cisco Systems Multicast Quick-Start Configuration Guide for additional Cisco Systems multicast configuration examples.