Tailoring Your Multicast Deployment


Multicast deployment is often more difficult than unicast deployment and raises a number of issues that you need to consider. The following sections cover the key issues to address when deploying multicast, including tips and information on bandwidth sharing, how network devices forward multicast traffic, and how different network environments affect your multicast deployment.

How Multicast Deployment Compares with Unicast Deployment

Even though connectivity for multicast and unicast applications is very similar, multicast deployment is often more difficult than unicast deployment.

Both multicast and unicast rely on the network layer for connectivity:

However, multicast is often more difficult to deploy than unicast for the following reasons:

Bandwidth Sharing

The bandwidth sharing mechanism that you use for multicasting depends on which version of SmartSockets you use. SmartSockets Versions 6.2 and higher support congestion control. Earlier versions of SmartSockets require administrators to configure the amount of bandwidth that they expect the network to deliver.

Bandwidth sharing for multicast is not automatic as it is for unicast. In unicast, reliable unicast transports (for example, TCP) automatically share available network bandwidth among all sessions contending for it. Administrators play no role in this process—protocol stacks measure the round-trip time and packet loss rates and dynamically determine available bandwidth. Unicast assumes that all streams have equal priority and automatically divides bandwidth accordingly.

TIBCO SmartSockets Version 6.2 and Higher

TIBCO SmartSockets, Versions 6.2 and higher, provide support for congestion control, which dynamically determines bandwidth limits and maximizes throughput. For information on setting congestion control options, see Bandwidth Management.

TIBCO SmartSockets Versions Prior to Version 6.2

SmartSockets versions prior to Version 6.2 do not provide support for congestion control. Instead, SmartSockets relies on administrators to configure the amount of bandwidth that they expect the network to deliver. If this bandwidth is not configured, congestion can cause packet loss and either erratic behavior or application failure.

To optimize throughput, administrators need to limit how fast SmartSockets Multicast sends data. If you exceed your network’s bandwidth capacity, the resulting congestion causes the network to perform below its maximum capacity. For example, if you ask SmartSockets Multicast to deliver 11 Mbps over a 10 Mbps network layer, you may receive only 5 to 7 Mbps, and you will probably see erratic behavior that depends on loss rates and other factors. However, if you ask SmartSockets Multicast to deliver 9 Mbps over a 10 Mbps network layer, it can deliver the full 9 Mbps.
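The rate-limiting idea above can be sketched in isolation. The following token-bucket example is not SmartSockets code; the class name, burst size, and numbers are illustrative. It throttles a sender to 9 Mbps, the kind of limit you would configure below a 10 Mbps link's capacity:

```python
import time

class TokenBucket:
    """Limit send rate to a configured bandwidth (bytes per second)."""

    def __init__(self, rate_bytes_per_sec, burst_bytes):
        self.rate = rate_bytes_per_sec
        self.capacity = burst_bytes
        self.tokens = burst_bytes          # start with a full burst budget
        self.last = time.monotonic()

    def consume(self, nbytes):
        """Block until nbytes of budget is available, then spend it."""
        while True:
            now = time.monotonic()
            # Refill tokens for the time elapsed, capped at the burst size
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= nbytes:
                self.tokens -= nbytes
                return
            # Sleep just long enough for the missing tokens to accrue
            time.sleep((nbytes - self.tokens) / self.rate)

# Cap a sender at 9 Mbps, leaving headroom on a 10 Mbps link
bucket = TokenBucket(rate_bytes_per_sec=9_000_000 / 8, burst_bytes=64 * 1024)
start = time.monotonic()
sent = 0
for _ in range(50):
    bucket.consume(8192)   # pretend each datagram is 8 KB
    sent += 8192
elapsed = time.monotonic() - start
print(f"sent {sent} bytes in {elapsed:.3f} s")
```

The burst size controls how much the sender may momentarily exceed the average rate; a small burst smooths traffic at the cost of more frequent sleeps.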

Here are some tips for bandwidth sharing in an environment that uses a SmartSockets version prior to Version 6.2:

Client Failovers in Multicast

Client failovers that use the multicast protocol, PGM, as the alternate protocol do not work by default. Because multicast uses threads on the client side, threading must be initialized before PGM connects to RTgms. To initialize threading, set the Server_Names option to pgm:_node:your_value, which causes PGM to initialize threads when it loads.

For example, if the Server_Names option is set to tcp,pgm:_node:your_value, after the first successful TCP connection, RTclient stops traversing the Server_Names list until the existing TCP connection is closed. When RTclient loses the connection to RTserver, RTclient attempts to reconnect using TCP. If it cannot reconnect, RTclient connects using PGM. The PGM link driver loads, and threads are initialized.
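For example, an RTclient startup command file might set the option as follows. This is a sketch assuming the standard setopt syntax; your_value remains a placeholder for your deployment's PGM _node address:

```
setopt server_names tcp,pgm:_node:your_value
```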

RTclient cannot initialize threads in the middle of an application; doing so can cause core dumps. Threads must be initialized at program startup.

The workaround for this problem is to call TipcInitThreads at program startup.

How Network Devices Forward Multicast

How multicast packets are forwarded depends on the types of network devices you use:

Multicast deployment often also involves ensuring that multicast streams go only where they are wanted. This is especially important when high-bandwidth streams are present on a network with some low-bandwidth links, or where access must be controlled at the network layer for security reasons. Within a LAN, Ethernet switches direct unicast traffic only to the ports where it is wanted, but many switches simply flood multicast packets to all ports. Therefore, you may need to configure your network to block the flow of multicast data for bandwidth-sharing or security reasons.
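On many managed switches, this flooding can be avoided by enabling IGMP snooping, so the switch forwards multicast only to ports with interested receivers. On a Cisco Systems Catalyst switch running IOS, for example, the global setting might look like the following sketch; command availability and syntax vary by platform and release:

```
ip igmp snooping
```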

Testing for Multicast Traffic Before Configuring Your Network

Even before you configure your network specifically for multicast, your network may already pass multicast traffic in some areas. Before configuring your network, you may want to test your existing network to determine where multicast data is already flowing, if it is flowing at all. See Multicast Troubleshooting for advice on testing multicast connectivity.
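A quick way to test basic multicast plumbing on a single host is to join a group, send a datagram to it, and verify that it arrives. The following sketch uses only the standard socket API; the group address 239.255.0.1, the port, and the use of the loopback interface are illustrative assumptions, not SmartSockets settings:

```python
import socket

# Assumptions: 239.255.0.1 is an unused administratively scoped group,
# and the loopback interface is used so the probe never leaves this host.
GROUP, PORT = "239.255.0.1", 5007
LOOPBACK = "127.0.0.1"

# Receiver: join the group on the loopback interface
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
rx.bind(("", PORT))
mreq = socket.inet_aton(GROUP) + socket.inet_aton(LOOPBACK)
rx.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
rx.settimeout(5.0)

# Sender: transmit one datagram to the group via loopback
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_IF,
              socket.inet_aton(LOOPBACK))
tx.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_LOOP, 1)
tx.sendto(b"multicast probe", (GROUP, PORT))

data, addr = rx.recvfrom(1024)
print("received:", data.decode())
```

To probe connectivity between two machines, run the receiver half on one host and the sender half on another, replacing the loopback interface with each host's LAN interface address.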

Multicast connectivity that happens without explicit network configuration generally sends all multicast traffic to all users on a LAN. See Bandwidth Sharing for the implications of this.

If multicast traffic can already flow in your network, it is most likely to flow between users in close proximity to one another. For example, within a multi-story office building, offices on the same floor are very likely to have multicast connectivity. The likelihood diminishes with distance: some adjacent floors might have multicast connectivity; floors farther apart are unlikely to; connectivity between buildings on a campus or across a wide-area network is very unlikely; and connectivity between sites connected through the Internet is rare unless both sites are educational or research sites.

Multicast and GMD

There are some known issues with the way multicasting works with GMD messages. If GMD messages are being multicast, and a subscribing client does a warm disconnect, some of those GMD messages might be missed. The problem occurs because the client notifies the RTgms process that it is disconnecting, and then the RTgms process notifies the RTserver sending the multicast GMD messages. Potentially, the RTserver might have sent several GMD messages after the client warm disconnect, but before the RTserver received notice from the RTgms process to start buffering the GMD messages.

There are two workarounds to this problem:

Also note, because of the overhead required to guarantee message delivery, multicasting with GMD is slower than multicasting without GMD.

UDP Encapsulation of PGM

SmartSockets Multicast allows PGM packets to be UDP encapsulated instead of being encapsulated directly in IP. UDP encapsulation of PGM packets means that:

Multicast Deployment with Frame Relay Networks

Frame relay networks are usually a partial or full mesh of point-to-point connections between end points. Switches in a frame relay network are committed to delivering a given bandwidth, the Committed Information Rate (CIR), on the frame relay circuits between endpoints. As a result, they cannot generate more than one outgoing packet for each incoming packet, which is exactly what broadcast and multicast require.

Because frame relay switches cannot broadcast or multicast, Cisco Systems routers can simulate it by duplicating each broadcast or multicast packet, sending one copy over the interface for each remote end point. By default, a single serial interface with 3 virtual circuits running over it transmits up to 3 point-to-point copies of each broadcast or multicast packet. Be sure to allocate bandwidth for each multicast stream required by your application.

Some satellite applications use frame relay encapsulation but do not use frame relay switches. In satellite broadcast applications, all end points can hear traffic for all virtual circuits. For these applications, a Cisco Systems IOS frame-relay multicast dlci command can send a single copy of multicast packets to a DLCI shared by all end points. Be sure to use it on all serial ports that share the DLCI.
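For such a shared broadcast medium, the relevant interface configuration might resemble the following sketch. The DLCI number 1022 is an arbitrary example, and exact command syntax varies by IOS release:

```
interface Serial0
 encapsulation frame-relay
 frame-relay multicast dlci 1022
```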

On terrestrial networks where packets must be copied for DLCI, Cisco Systems limits the broadcast bandwidth. The default limit of 36 broadcast packets a second equates to about 400 Kbps. Use the IOS frame-relay broadcast-queue command to increase the limit if needed. See the Cisco Systems tip page on the Frame Relay Broadcast Queue for more information.
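For example, raising the broadcast queue limits on a serial interface might look like the following sketch. The three arguments are queue size, byte rate per second, and packet rate per second; the values shown are illustrative, and you should size them from your own multicast bandwidth budget:

```
interface Serial0
 encapsulation frame-relay
 frame-relay broadcast-queue 128 500000 250
```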

Example Cisco Systems Router Configuration

Here is a sample configuration fragment for a Cisco Systems router that forwards multicast traffic between Ethernet interfaces with the PGM Router Assist function enabled on both interfaces.

ip multicast-routing 
! 
interface Ethernet 0 
  ip pim sparse-dense-mode 
  ip pgm router 
! 
interface Ethernet 1 
  ip pim sparse-dense-mode 
  ip pgm router 

This also configures automatic discovery of all other PIM routers on Ethernet 0 and 1. Multicast traffic is exchanged among all PIM routers.

See the Cisco Systems Multicast Quick-Start Configuration Guide for additional Cisco Systems multicast configuration examples.


TIBCO SmartSockets™ User’s Guide
Software Release 6.8, July 2006
Copyright © TIBCO Software Inc. All rights reserved
www.tibco.com