
Fundamental Network Characteristics

    How to Configure an IP Address on a PC in Packet Tracer

    How to apply an IP address, subnet mask and default gateway in Packet Tracer


    Step 1: Double-click the PC icon, move to the Config tab and configure the default gateway.

    Step 2: Click on FastEthernet and apply the IP address and subnet mask.

    That's it!
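    A quick way to sanity-check the values you enter is to confirm that the PC's IP address and its default gateway actually fall in the same subnet under the chosen mask. The sketch below uses Python's standard ipaddress module; the addresses are hypothetical lab values, not anything Packet Tracer requires.

```python
import ipaddress

def same_subnet(host_ip: str, gateway_ip: str, mask: str) -> bool:
    """Return True if the host and its default gateway share a subnet."""
    # strict=False lets us pass a host address rather than a network address
    network = ipaddress.ip_network(f"{host_ip}/{mask}", strict=False)
    return ipaddress.ip_address(gateway_ip) in network

# Hypothetical lab addresses:
print(same_subnet("192.168.1.10", "192.168.1.1", "255.255.255.0"))   # True
print(same_subnet("192.168.1.10", "192.168.2.1", "255.255.255.0"))   # False
```

    If this check returns False, the PC will not be able to reach its gateway, which is one of the most common mistakes in this exercise.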


    Fast Ethernet Tutorial

    A Guide to Using Fast Ethernet and Gigabit Ethernet

    Network managers today must contend with faster media, mounting bandwidth requirements and the need to play “traffic cop” for an ever-growing network infrastructure. Now, more than ever, it’s imperative for them to understand the basics of using various Ethernet technologies to manage their networks.

    This tutorial will explain the basic principles of Fast Ethernet and Gigabit Ethernet technologies, describing how each improves on basic Ethernet technology. It will offer guidance on how to implement these technologies as well as some “rules of the road” for successful repeater selection and usage.

    Introduction to Ethernet, Fast Ethernet and Gigabit Ethernet

    It is nearly impossible to discuss networking without the mention of Ethernet, Fast Ethernet and Gigabit Ethernet. But, in order to determine which form is needed for your application, it’s important to first understand what each provides and how they work together.

    A good starting point is to explain what Ethernet is. Simply, Ethernet is a very common method of networking computers in a LAN using copper cabling. Capable of providing fast and constant connections, Ethernet can handle about 10,000,000 bits per second and can be used with almost any kind of computer.

    While that may sound fast to those less familiar with networking, there is a very strong demand for even higher transmission speeds, which has been realized by the Fast Ethernet and Gigabit Ethernet specifications (IEEE 802.3u and IEEE 802.3z respectively). These LAN (local area network) standards have raised the Ethernet speed limit from 10 megabits per second (Mbps) to 100Mbps for Fast Ethernet and 1000Mbps for Gigabit Ethernet with only minimal changes made to the existing cable structure.

    The building blocks of today’s networks call out for a mixture of legacy 10BASE-T Ethernet networks and the new protocols. Typically, 10Mbps networks utilize Ethernet switches to improve the overall efficiency of the Ethernet network. Between Ethernet switches, Fast Ethernet repeaters are used to connect a group of switches together at the higher 100 Mbps rate.

    However, with an increasing number of users running 100Mbps at the desktop, servers and aggregation points such as switch stacks may require even greater bandwidth. In this case, a Fast Ethernet backbone switch can be upgraded to a Gigabit Ethernet switch which supports multiple 100/1000 Mbps switches. High performance servers can be connected directly to the backbone once it has been upgraded.

    Integrating Fast Ethernet and Gigabit Ethernet

    Many client/server networks suffer from too many clients trying to access the same server, which creates a bottleneck where the server attaches to the LAN. Fast Ethernet, in combination with switched Ethernet, can create an optimal cost-effective solution for avoiding slow networks since most 10/100Mbps components cost about the same as 10Mbps-only devices.

    When integrating 100BASE-T into a 10BASE-T network, the only change required from a wiring standpoint is that the corporate premise distributed wiring system must now include Category 5 (CAT5) rated twisted pair cable in the areas running 100BASE-T. Once rewiring is completed, gigabit speeds can also be deployed even more widely throughout the network using standard CAT5 cabling.

    The Fast Ethernet specification calls for two types of transmission schemes over various wire media. The first is 100BASE-TX, which, from a cabling perspective, is very similar to 10BASE-T. It uses CAT5-rated twisted pair copper cable to connect various hubs, switches and end-nodes. It also uses an RJ45 jack just like 10BASE-T and the wiring at the connector is identical. These similarities make 100BASE-TX easier to install and therefore the most popular form of the Fast Ethernet specification.

    The second variation is 100Base-FX which is used primarily to connect hubs and switches together either between wiring closets or between buildings. 100BASE-FX uses multimode fiber-optic cable to transport Fast Ethernet traffic.

    The Gigabit Ethernet specification calls for three types of transmission schemes over various wire media. Gigabit Ethernet was originally designed as a switched technology and used fiber for uplinks and connections between buildings. Because of this, in June 1998 the IEEE approved the Gigabit Ethernet standard over fiber: 1000BASE-LX and 1000BASE-SX.

    The next Gigabit Ethernet standardization to come was 1000BASE-T, which is Gigabit Ethernet over copper. This standard allows one gigabit per second (Gbps) speeds to be transmitted over CAT5 cable and has made Gigabit Ethernet migration easier and more cost-effective than ever before.

    Rules of the Road

    The basic building block for the Fast Ethernet LAN is the Fast Ethernet repeater. The two types of Fast Ethernet repeaters offered on the market today are:

    Class I Repeater — The Class I repeater operates by translating line signals on the incoming port to a digital signal. This allows translation between different types of Fast Ethernet, such as 100BASE-TX and 100BASE-FX. A Class I repeater introduces delays when performing this conversion, such that only one repeater can be put in a single Fast Ethernet LAN segment.

    Class II Repeater — The Class II repeater immediately repeats the signal on an incoming port to all the ports on the repeater. Very little delay is introduced by this quick movement of data across the repeater; thus two Class II repeaters are allowed per Fast Ethernet segment.

    Network managers understand the 100 meter distance limitation of 10BASE-T and 100BASE-T Ethernet and make allowances for working within these limitations. At the higher operating speeds, Fast Ethernet and 1000BASE-T are limited to 100 meters over CAT5-rated cable. The EIA/TIA cabling standard recommends using no more than 90 meters between the equipment in the wiring closet and the wall connector. This allows another 10 meters for patch cables between the wall and the desktop computer.

    In contrast, a Fast Ethernet network using the 100BASE-FX standard is designed to allow LAN segments up to 412 meters in length. Even though fiber-optic cable can actually transmit data greater distances (i.e. 2 Kilometers in FDDI), the 412 meter limit for Fast Ethernet was created to allow for the round trip times of packet transmission. Typical 100BASE-FX cable specifications call for multimode fiber-optic cable with a 62.5 micron fiber-optic core and a 125 micron cladding around the outside. This is the most popular fiber optic cable type used by many of the LAN standards today. Connectors for 100BASE-FX Fast Ethernet are typically ST connectors (which look like Ethernet BNC connectors).

    Many Fast Ethernet vendors are migrating to the newer SC connectors used for ATM over fiber. A rough implementation guideline to use when determining the maximum distances in a Fast Ethernet network is the equation: 400 – (r x 95) where r is the number of repeaters. Network managers need to take into account the distance between the repeaters and the distance between each node from the repeater. For example, in Figure 1 two repeaters are connected to two Fast Ethernet switches and a few servers.

    Figure 1: Fast Ethernet Distance Calculations with Two Repeaters

    Maximum distance between end nodes:
    400 - (r × 95) where r = 2 (for 2 repeaters)
    400 - (2 × 95) = 400 - 190 = 210 meters, thus
    A + B + C = 210 meters
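    The rule-of-thumb formula above is simple enough to capture in a few lines of code. This is only a sketch of the guideline as stated here, not a substitute for the full IEEE repeater rules:

```python
def max_end_node_distance(repeaters: int) -> int:
    """Rule-of-thumb maximum distance between end nodes in a
    Fast Ethernet collision domain: 400 - (r x 95)."""
    return 400 - repeaters * 95

# Two repeaters, as in Figure 1:
print(max_end_node_distance(2))  # 210
# A single repeater allows a larger collision domain:
print(max_end_node_distance(1))  # 305
```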

    There is yet another variation of Ethernet called full-duplex Ethernet. Full-duplex Ethernet enables the connection speed to be doubled by simply adding another pair of wires and removing collision detection; the Fast Ethernet standard allowed full-duplex Ethernet. Until then all Ethernet worked in half-duplex mode which meant if there were only two stations on a segment, both could not transmit simultaneously. With full-duplex operation, this was now possible. In the terms of Fast Ethernet, essentially 200Mbps of throughput is the theoretical maximum per full-duplex Fast Ethernet connection. This type of connection is limited to a node-to-node connection and is typically used to link two Ethernet switches together.

    A Gigabit Ethernet network using the 1000BASE-LX long wavelength option supports duplex links of up to 550 meters of 62.5 micron or 50 micron multimode fiber. 1000BASE-LX can also support up to 5 kilometers of 10 micron single-mode fiber. Its wavelengths range from 1270 nanometers to 1355 nanometers. The 1000BASE-SX is a short wavelength option that supports duplex links of up to 275 meters using 62.5 micron multimode fiber or up to 550 meters using 50 micron multimode fiber. Typical wavelengths for this option are in the range of 770 to 860 nanometers.

    Maintaining a Quality Network

    The CAT5 cable specification is rated up to 100 megahertz (MHz) and meets the requirement for high speed LAN technologies like Fast Ethernet and Gigabit Ethernet. The EIA/TIA (Electronic Industries Association/Telecommunications Industry Association) created this cable standard, which describes the performance a LAN manager can expect from a strand of twisted pair copper cable. Along with this specification, the committee created the EIA/TIA-568 standard, named the “Commercial Building Telecommunications Cabling Standard,” to help network managers install a cabling system that would operate using common LAN types (like Fast Ethernet). The specification defines Near End Crosstalk (NEXT) and attenuation limits between connectors in a wall plate and the equipment in the closet. Cable analyzers can be used to ensure accordance with this specification and thus guarantee a functional Fast Ethernet or Gigabit Ethernet network.

    The basic strategy of cabling Fast Ethernet systems is to minimize the re-transmission of packets caused by high bit-error rates. This ratio is calculated using NEXT, ambient noise and attenuation of the cable.

    Fast Ethernet Migration

    Most network managers have already migrated from 10BASE-T or other Ethernet 10Mbps variations to higher bandwidth networks. Fast Ethernet ports on Ethernet switches are used to provide even greater bandwidth between the workgroups at 100Mbps speeds. New backbone switches have been created to offer support for 1000Mbps Gigabit Ethernet uplinks to handle network traffic. Equipment like Fast Ethernet repeaters will be used in common areas to group Ethernet switches together with server farms into large 100Mbps pipes. This is currently the most cost effective method of growing networks within the average enterprise.


    Network Switching Tutorial

    Network Switching

    Switches can be a valuable asset to networking. Overall, they can increase the capacity and speed of your network. However, switching should not be seen as a cure-all for network issues. Before incorporating network switching, you must first ask yourself two important questions: First, how can you tell if your network will benefit from switching? Second, how do you add switches to your network design to provide the most benefit?

    This tutorial is written to answer these questions. Along the way, we’ll describe how switches work, and how they can both harm and benefit your networking strategy. We’ll also discuss different network types, so you can profile your network and gauge the potential benefit of network switching for your environment.

    What is a Switch?

    Switches occupy the same place in the network as hubs. Unlike hubs, switches examine each packet and process it accordingly rather than simply repeating the signal to all ports. Switches map the Ethernet addresses of the nodes residing on each network segment and then allow only the necessary traffic to pass through the switch. When a packet is received by the switch, the switch examines the destination and source hardware addresses and compares them to a table of network segments and addresses. If the segments are the same, the packet is dropped or “filtered”; if the segments are different, then the packet is “forwarded” to the proper segment. Additionally, switches prevent bad or misaligned packets from spreading by not forwarding them.

    Filtering packets and regenerating forwarded packets enables switching technology to split a network into separate collision domains. The regeneration of packets allows for greater distances and more nodes to be used in the total network design, and dramatically lowers the overall collision rates. In switched networks, each segment is an independent collision domain. This also allows for parallelism, meaning up to one-half of the computers connected to a switch can send data at the same time. In shared networks all nodes reside in a single shared collision domain.

    Easy to install, most switches are self learning. They determine the Ethernet addresses in use on each segment, building a table as packets are passed through the switch. This “plug and play” element makes switches an attractive alternative to hubs.
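    The learn-then-filter-or-forward behavior described above can be modeled in a few lines. The following is a toy sketch, not any vendor's implementation; port numbers and MAC strings are illustrative:

```python
class LearningSwitch:
    """Toy model of a self-learning switch's forward/filter decision."""

    def __init__(self):
        self.mac_table = {}  # MAC address -> port number

    def handle_frame(self, src_mac, dst_mac, in_port):
        # Learn: remember which port the source address lives on.
        self.mac_table[src_mac] = in_port
        out_port = self.mac_table.get(dst_mac)
        if out_port is None:
            return "flood"   # unknown destination: repeat to all other ports
        if out_port == in_port:
            return "filter"  # same segment as the sender: drop the frame
        return f"forward to port {out_port}"

sw = LearningSwitch()
print(sw.handle_frame("AA", "BB", in_port=1))  # flood (BB not yet learned)
print(sw.handle_frame("BB", "AA", in_port=2))  # forward to port 1
print(sw.handle_frame("AA", "BB", in_port=1))  # forward to port 2
```

    Note how the table is built purely by watching source addresses as frames pass through, which is exactly what makes switches “plug and play.”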

    Switches can connect different network types (such as Ethernet and Fast Ethernet) or networks of the same type. Many switches today offer high-speed links, like Fast Ethernet, which can be used to link the switches together or to give added bandwidth to important servers that get a lot of traffic. A network composed of a number of switches linked together via these fast uplinks is called a “collapsed backbone” network.

    Dedicating ports on switches to individual nodes is another way to speed access for critical computers. Servers and power users can take advantage of a full segment for one node, so some networks connect high traffic nodes to a dedicated switch port.

    Full duplex is another method to increase bandwidth to dedicated workstations or servers. To use full duplex, both network interface cards used in the server or workstation and the switch must support full duplex operation. Full duplex doubles the potential bandwidth on that link.

    Network Congestion


    As more users are added to a shared network or as applications requiring more data are added, performance deteriorates. This is because all users on a shared network are competitors for the Ethernet bus. A moderately loaded 10 Mbps Ethernet network is able to sustain utilization of 35 percent and throughput in the neighborhood of 2.5 Mbps after accounting for packet overhead, inter-packet gaps and collisions. A moderately loaded Fast Ethernet or Gigabit Ethernet shares 25 Mbps or 250 Mbps of real data in the same circumstances. With shared Ethernet and Fast Ethernet, the likelihood of collisions increases as more nodes and/or more traffic is added to the shared collision domain.

    Ethernet itself is a shared media, so there are rules for sending packets to avoid conflicts and protect data integrity. Nodes on an Ethernet network send packets when they determine the network is not in use. It is possible that two nodes at different locations could try to send data at the same time. When both PCs are transferring a packet to the network at the same time, a collision will result. Both packets are retransmitted, adding to the traffic problem. Minimizing collisions is a crucial element in the design and operation of networks. Increased collisions are often the result of too many users or too much traffic on the network, which results in a great deal of contention for network bandwidth. This can slow the performance of the network from the user’s point of view. Segmenting, where a network is divided into different pieces joined together logically with switches or routers, reduces congestion in an overcrowded network by eliminating the shared collision domain.

    Collision rates measure the percentage of packets that are collisions. Some collisions are inevitable, with less than 10 percent common in well-running networks.

    The Factors Affecting Network Efficiency
    • Amount of traffic
    • Number of nodes
    • Size of packets
    • Network diameter
    Measuring Network Efficiency
    • Average to peak load deviation
    • Collision Rate
    • Utilization Rate

    Utilization rate is another widely accessible statistic about the health of a network. This statistic is available in Novell’s console monitor and Windows NT’s performance monitor, as well as in any optional LAN analysis software. Utilization in an average network above 35 percent indicates potential problems. This 35 percent utilization is near optimum, but some networks experience higher or lower utilization optimums due to factors such as packet size and peak load deviation.

    A switch is said to work at “wire speed” if it has enough processing power to handle full Ethernet speed at minimum packet sizes. Most switches on the market are well ahead of network traffic capabilities, supporting the full “wire speed” of Ethernet, 14,880 pps (packets per second), and Fast Ethernet, 148,800 pps.
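    The wire-speed figures come from simple arithmetic: each minimum-size frame occupies 64 bytes of frame plus 8 bytes of preamble and a 12-byte inter-frame gap on the wire. A short sketch (the Fast Ethernet result, 148,809, is usually quoted rounded to about 148,800):

```python
def wire_speed_pps(bits_per_second: int) -> int:
    """Maximum frames per second at minimum frame size.
    Each minimum frame occupies 64 + 8 (preamble) + 12 (inter-frame gap)
    = 84 bytes = 672 bits of wire time."""
    bits_per_frame = (64 + 8 + 12) * 8  # 672 bits
    return bits_per_second // bits_per_frame

print(wire_speed_pps(10_000_000))   # 14880  (Ethernet)
print(wire_speed_pps(100_000_000))  # 148809 (Fast Ethernet)
```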


    Routers work in a manner similar to switches and bridges in that they filter out network traffic. Rather than doing so by packet addresses, however, they filter by specific protocol. Routers were born out of the necessity for dividing networks logically instead of physically. An IP router can divide a network into various subnets so that only traffic destined for particular IP addresses can pass between segments. Routers recalculate the checksum and rewrite the MAC header of every packet. The price paid for this type of intelligent forwarding and filtering is usually measured in latency, the delay a packet experiences inside the router. Such filtering takes more time than that exercised in a switch or bridge, which only looks at the Ethernet address, but in more complex networks routing can improve overall efficiency. An additional benefit of routers is their automatic filtering of broadcasts, but overall they are more complicated to set up.

    Switch Benefits
    • Isolates traffic, relieving congestion
    • Separates collision domains, reducing collisions
    • Segments the network, restarting the distance and repeater rules
    Switch Costs
    • Price: currently 3 to 5 times the price of a hub
    • Packet processing time is longer than in a hub
    • Monitoring the network is more complicated

    General Benefits of Network Switching

    Switches replace hubs in networking designs, and they are more expensive. So why is the desktop switching market doubling every year, with huge numbers sold? The price of switches is declining precipitously, while hubs are a mature technology with small price declines. This means there is far less difference between switch costs and hub costs than there used to be, and the gap continues to narrow.

    Since switches are self learning, they are as easy to install as a hub. Just plug them in and go. And they operate on the same hardware layer as a hub, so there are no protocol issues.

    There are two reasons for switches being included in network designs. First, a switch breaks one network into many small networks so the distance and repeater limitations are restarted. Second, this same segmentation isolates traffic and reduces collisions relieving network congestion. It is very easy to identify the need for distance and repeater extension, and to understand this benefit of network switching. But the second benefit, relieving network congestion, is hard to identify and harder to understand the degree by which switches will help performance. Since all switches add small latency delays to packet processing, deploying switches unnecessarily can actually slow down network performance. So the next section pertains to the factors affecting the impact of switching to congested networks.

    Network Switching

    The benefits of switching vary from network to network. Adding a switch for the first time has different implications than increasing the number of switched ports already installed. Understanding traffic patterns is very important to network switching – the goal being to eliminate (or filter) as much traffic as possible. A switch installed in a location where it forwards almost all the traffic it receives will help much less than one that filters most of the traffic.

    Networks that are not congested can actually be negatively impacted by adding switches. Packet processing delays, switch buffer limitations, and the retransmissions that can result sometimes slow performance compared with the hub-based alternative. If your network is not congested, don’t replace hubs with switches. How can you tell if performance problems are the result of network congestion? Measure utilization factors and collision rates.

    Good Candidates for Performance Boosts from Switching
    • Utilization more than 35%
    • Collision rates more than 10%
    Utilization load is the amount of total traffic as a percentage of the theoretical maximum for the network type: 10 Mbps in Ethernet, 100 Mbps in Fast Ethernet. The collision rate is the number of packets with collisions as a percentage of total packets.
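    The two rule-of-thumb thresholds above are easy to turn into a quick check. These cutoffs are guidelines from this tutorial, not hard limits, and real decisions should weigh peak versus average figures as discussed below:

```python
def switching_candidate(utilization_pct: float, collision_pct: float) -> bool:
    """Flag a shared segment as a good candidate for switching:
    utilization above 35% or collision rate above 10%."""
    return utilization_pct > 35 or collision_pct > 10

print(switching_candidate(utilization_pct=42, collision_pct=8))  # True
print(switching_candidate(utilization_pct=20, collision_pct=4))  # False
```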

    Network response times (the user-visible part of network performance) suffer as the load on the network increases, and under heavy loads small increases in user traffic often result in significant decreases in performance. This is similar to automobile freeway dynamics: increasing load raises throughput up to a point, after which further increases in demand cause true throughput to deteriorate rapidly. In Ethernet, collisions increase as the network is loaded, causing retransmissions and further load, which in turn cause even more collisions. The resulting network overload slows traffic considerably.

    Using network utilities found on most server operating systems, network managers can determine utilization and collision rates. Both peak and average statistics should be considered.

    Replacing a Central Hub with a Switch

    This switching opportunity is typified by a fully shared network, where many users are connected in a cascading hub architecture. The two main impacts of switching will be faster network connection to the server(s) and the isolation of non-relevant traffic from each segment. As the network bottleneck is eliminated performance grows until a new system bottleneck is encountered – such as maximum server performance.

    Adding Switches to a Backbone Switched Network

    Congestion on a switched network can usually be relieved by adding more switched ports, and increasing the speed of these ports. Segments experiencing congestion are identified by their utilization and collision rates, and the solution is either further segmentation or faster connections. Both Fast Ethernet and Ethernet switch ports are added further down the tree structure of the network to increase performance.

    Designing for Maximum Benefit

    Changes in network design tend to be evolutionary rather than revolutionary; rarely is a network manager able to design a network completely from scratch. Usually, changes are made slowly with an eye toward preserving as much of the usable capital investment as possible while replacing obsolete or outdated technology with new equipment.

    Fast Ethernet is very easy to add to most networks. A switch or bridge allows Fast Ethernet to connect to existing Ethernet infrastructures to bring speed to critical links. The faster technology is used to connect switches to each other, and to switched or shared servers to ensure the avoidance of bottlenecks.

    Many client/server networks suffer from too many clients trying to access the same server which creates a bottleneck where the server attaches to the LAN. Fast Ethernet, in combination with switched Ethernet, creates the perfect cost-effective solution for avoiding slow client server networks by allowing the server to be placed on a fast port.

    Distributed processing also benefits from Fast Ethernet and switching. Segmentation of the network via switches brings big performance boosts to distributed traffic networks, and the switches are commonly connected via a Fast Ethernet backbone.

    Guidelines for Designing Switched Networks
    • Know the network demand per node
    • Try to group users with the nodes they communicate with most often on the same segment
    • Look for departmental traffic patterns
    • Avoid switch bottlenecks with fast uplinks
    • Move users between segments in an iterative process until all nodes see less than 35% utilization


    Advanced Switching Technology Issues

    There are some technology issues with switching that do not affect 95% of all networks. Major switch vendors and the trade publications are promoting new competitive technologies, so some of these concepts are discussed here.

    Managed or Unmanaged

    Management provides benefits in many networks. Large networks with mission critical applications are managed with many sophisticated tools, using SNMP to monitor the health of devices on the network. Networks using SNMP or RMON (an extension to SNMP that provides much more data while using less network bandwidth to do so) will either manage every device or just the more critical areas. VLANs are another benefit of management in a switch. A VLAN allows the network to group nodes into logical LANs that behave as one network, regardless of physical connections. The main benefit is managing broadcast and multicast traffic. An unmanaged switch will pass broadcast and multicast packets through to all ports. If the network has logical groupings that are different from the physical groupings, then a VLAN-based switch may be the best bet for traffic optimization.

    Another benefit of management in switches is the Spanning Tree Algorithm. Spanning Tree allows the network manager to design in redundant links, with switches attached in loops. Ordinarily, such loops would defeat the self-learning aspect of switches, since traffic from one node would appear to originate on different ports. Spanning Tree is a protocol that allows the switches to coordinate with each other so that traffic is only carried on one of the redundant links (unless there is a failure, in which case the backup link is automatically activated). Network managers with switches deployed in critical applications may want to have redundant links; in this case, management is necessary. But for the rest of the networks, an unmanaged switch would do quite well, and is much less expensive.

    Store-and-Forward vs. Cut-Through

    LAN switches come in two basic architectures: cut-through and store-and-forward. A cut-through switch examines only the destination address before forwarding the packet on to its destination segment. A store-and-forward switch, on the other hand, accepts and analyzes the entire packet before forwarding it to its destination. It takes more time to examine the entire packet, but it allows the switch to catch certain packet errors and collisions and keep bad packets from propagating through the network.
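    The error-catching advantage of store-and-forward comes from verifying the frame check sequence (FCS) before forwarding. Ethernet's FCS is a CRC-32, which Python's zlib.crc32 happens to share a polynomial with, so a rough sketch of the decision looks like this (real hardware also handles bit-ordering details this toy ignores):

```python
import zlib

def fcs(frame_body: bytes) -> int:
    """Compute a CRC-32 over the frame body, as the FCS does."""
    return zlib.crc32(frame_body) & 0xFFFFFFFF

def store_and_forward(frame_body: bytes, received_fcs: int) -> str:
    # Buffer the whole frame, then verify the checksum before forwarding.
    # A cut-through switch would already be forwarding by this point,
    # having read only the destination address.
    if fcs(frame_body) != received_fcs:
        return "drop (bad FCS)"
    return "forward"

frame = b"example payload"
good = fcs(frame)
print(store_and_forward(frame, good))       # forward
print(store_and_forward(frame, good ^ 1))   # drop (bad FCS)
```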

    Today, the speed of store-and-forward switches has caught up with cut-through switches to the point where the difference between the two is minimal. Also, there are a large number of hybrid switches available that mix both cut-through and store-and-forward architectures.

    Blocking vs. Non-Blocking Switches

    Take a switch’s specifications and add up all the ports at theoretical maximum speed; the result is the theoretical sum total of the switch’s throughput. If the switching bus or switching components cannot handle the theoretical total of all ports, the switch is considered a “blocking switch.” There is debate over whether all switches should be designed non-blocking, but the added costs of doing so are only reasonable on switches designed to work in the largest network backbones. For almost all applications, a blocking switch that has an acceptable and reasonable throughput level will work just fine.

    Consider an eight-port 10/100 switch. Since each port can theoretically handle 200 Mbps (full duplex), there is a theoretical need for 1600 Mbps, or 1.6 Gbps. But in the real world each port will not exceed 50% utilization, so an 800 Mbps switching bus is adequate. Weighing total throughput against real-world port demand validates that the switch can handle the loads of your network.
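    The eight-port arithmetic generalizes directly. A small sketch of the worst-case demand calculation (the 50% figure is the tutorial's rule of thumb, not a standard):

```python
def backplane_demand_mbps(ports: int, port_speed_mbps: int,
                          full_duplex: bool = True) -> int:
    """Theoretical worst-case capacity needed: every port sending
    (and, in full duplex, also receiving) at line rate simultaneously."""
    per_port = port_speed_mbps * (2 if full_duplex else 1)
    return ports * per_port

demand = backplane_demand_mbps(ports=8, port_speed_mbps=100)
print(demand)       # 1600 -> a 1.6 Gbps bus would be non-blocking
print(demand // 2)  # 800  -> adequate if ports average at most 50% utilization
```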

    Switch Buffer Limitations

    As packets are processed in the switch, they are held in buffers. If the destination segment is congested, the switch holds on to the packet as it waits for bandwidth to become available on the crowded segment. Buffers that are full present a problem. So some analysis of the buffer sizes and strategies for handling overflows is of interest for the technically inclined network designer.

    In real-world networks, crowded segments cause many problems of their own, and networks should be designed to eliminate them; for most users, buffer behavior is therefore a minor factor in switch selection. There are two strategies for handling full buffers. One is “backpressure flow control,” which pushes congestion back upstream to the source nodes of packets that find a full buffer. The alternative is simply dropping the packet and relying on the integrity features in networks to retransmit automatically. One strategy spreads the problem from one segment to others, propagating it; the other causes retransmissions, and the resulting increase in load is not optimal. Neither strategy solves the problem, so switch vendors use large buffers and advise network managers to design switched network topologies to eliminate the source of the problem: congested segments.
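    The two full-buffer strategies can be contrasted with a toy bounded queue. This is purely illustrative of the policy choice, not how any switch implements its buffers:

```python
from collections import deque

def enqueue(buffer: deque, packet, policy: str) -> str:
    """Toy model of the two full-buffer strategies:
    'drop' discards the packet (upper layers must retransmit);
    'backpressure' refuses it, pushing the problem back to the sender."""
    if len(buffer) < buffer.maxlen:
        buffer.append(packet)
        return "buffered"
    return "dropped" if policy == "drop" else "backpressure to sender"

buf = deque(maxlen=2)
print(enqueue(buf, "p1", "drop"))          # buffered
print(enqueue(buf, "p2", "drop"))          # buffered
print(enqueue(buf, "p3", "drop"))          # dropped
print(enqueue(buf, "p4", "backpressure"))  # backpressure to sender
```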

    Layer 3 Switching

    A hybrid device is the latest improvement in internetworking technology. Combining the packet handling of routers and the speed of switching, these multilayer switches operate on both layer 2 and layer 3 of the OSI network model. The performance of this class of switch is aimed at the core of large enterprise networks. Sometimes called routing switches or IP switches, multilayer switches look for common traffic flows, and switch these flows on the hardware layer for speed. For traffic outside the normal flows, the multilayer switch uses routing functions. This keeps the higher overhead routing functions only where it is needed, and strives for the best handling strategy for each network packet.

    Many vendors are working on high end multilayer switches, and the technology is definitely a “work in process”. As networking technology evolves, multilayer switches are likely to replace routers in most large networks.


    Sharing Devices

    A Look at Device Server Technology

    Device networking starts with a device server, which allows almost any device with serial connectivity to connect to Ethernet networks quickly and cost-effectively. These products include all of the elements needed for device networking and, because of their scalability, do not require a server or gateway.

    This tutorial provides an introduction to the functionality of a variety of device servers.  It will cover print servers, terminal servers and console servers, as well as embedded and external device servers.  For each of these categories, there will also be a review of specific Lantronix offerings.

    An Introduction to Device Servers

    A device server is characterized by a minimal operating architecture that requires no per seat network operating system license, and client access that is independent of any operating system or proprietary protocol. In addition the device server is a “closed box,” delivering extreme ease of installation, minimal maintenance, and can be managed by the client remotely via a web browser.

    By virtue of its independent operating system, protocol independence, small size and flexibility, device servers are able to meet the demands of virtually any network-enabling application. The demand for device servers is rapidly increasing because organizations need to leverage their networking infrastructure investment across all of their resources. Many currently installed devices lack network ports or require dedicated serial connections for management — device servers allow those devices to become connected to the network.
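    The core job of a device server, taking a serial data stream and making it reachable over the network, can be sketched in a few lines. This is a minimal illustration only: the serial port is simulated with an in-memory stream, where a real implementation would read from a UART driver or a serial-port library.

```python
import io
import socket
import threading

def serve_serial(serial_stream, host="127.0.0.1", port=0):
    """Expose a (simulated) serial stream over TCP; returns the bound port."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))            # port=0 lets the OS pick a free port
    srv.listen(1)

    def pump():
        conn, _ = srv.accept()
        with conn:
            while True:
                chunk = serial_stream.read(64)   # read from the "serial port"
                if not chunk:
                    break
                conn.sendall(chunk)              # relay the bytes onto the network
        srv.close()

    threading.Thread(target=pump, daemon=True).start()
    return srv.getsockname()[1]

# Usage: pretend a sensor wrote a reading to its serial line.
serial_port = io.BytesIO(b"temp=21.5C\n")
tcp_port = serve_serial(serial_port)

client = socket.create_connection(("127.0.0.1", tcp_port))
print(client.recv(64))    # the serial data, now accessible over the network
client.close()
```

    This is exactly the value proposition described above: the device itself needs no network stack, and any host on the network can reach its serial data through an ordinary TCP connection.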

    Device servers are currently used in a wide variety of environments in which machinery, instruments, sensors and other discrete devices generate data that was previously inaccessible through enterprise networks. They are also used for security systems, point-of-sale applications, network management and many other applications where network access to a device is required.
    As device servers become more widely adopted and implemented into specialized applications, we can expect to see variations in size, mounting capabilities and enclosures. Device servers are also available as embedded devices, capable of providing instant networking support for developers of future products where connectivity will be required.

    Print servers, terminal servers, remote access servers and network time servers are examples of device servers which are specialized for particular functions. Each of these types of servers has unique configuration attributes in hardware or software that help them to perform best in their particular arena.

    External Device Servers

    External device servers are stand-alone serial-to-wireless (802.11b) or serial-to-Ethernet device servers that can put just about any device with serial connectivity on the network in a matter of minutes so it can be managed remotely.

    External Device Servers from Lantronix

    Lantronix external device servers provide the ability to remotely control, monitor, diagnose and troubleshoot equipment over a network or the Internet.  By opting for a powerful external device with full network and web capabilities, companies are able to preserve their present equipment investments.

    Lantronix offers a full line of external device servers:  Ethernet or wireless, advanced encryption for maximum security, and device servers designed for commercial or heavy-duty industrial applications.


    Providing a whole new level of flexibility and mobility, these devices allow users to connect devices that are inaccessible via cabling.  Users can also add intelligence to their businesses by putting mobile devices, such as medical instruments or warehouse equipment, on networks.


    Ideal for protecting data such as business transactions, customer information, financial records, etc., these devices provide enhanced security for networked devices.


    These devices enable users to network-enable their existing equipment (such as POS devices, AV equipment, medical instruments, etc.) simply and cost-effectively, without the need for special software.


    For heavy-duty factory applications, Lantronix offers a full complement of industrial-strength external device servers designed for use with manufacturing, assembly and factory automation equipment. All models support Modbus industrial protocols.

    Embedded Device Servers

    Embedded device servers integrate all the required hardware and software into a single embedded device.  They use a device’s serial port to web-enable or network-enable products quickly and easily without the complexities of extensive hardware and software integration. Embedded device servers are typically plug-and-play solutions that operate independently of a PC and usually include a wireless or Ethernet connection, operating system, an embedded web server, a full TCP/IP protocol stack, and some sort of encryption for secure communications.

    Embedded Device Servers from Lantronix

    Lantronix recognizes that design engineers are looking for a simple, cost-effective and reliable way to seamlessly embed network connectivity into their products.  In a fraction of the time it would take to develop a custom solution, Lantronix embedded device servers provide a variety of proven, fully integrated products.  OEMs can add full Ethernet and/or wireless connectivity to their products so they can be managed over a network or the Internet.


    These devices allow users to network-enable just about any electronic device with Ethernet and/or wireless connectivity.


    Users can integrate networking capabilities onto the circuit boards of equipment like factory machinery, security systems and medical devices.

    Single-Chip Solutions:

    These powerful, system-on-chip solutions help users address networking issues early in the design cycle to support the most popular embedded networking technologies.

    Terminal Servers

    Terminal servers are used to enable terminals to transmit data to and from host computers across LANs, without requiring each terminal to have its own direct connection. And while the terminal server’s existence is still justified by convenience and cost considerations, its inherent intelligence provides many more advantages. Among these is enhanced remote monitoring and control. Terminal servers that support protocols like SNMP make networks easier to manage.
    Devices that are attached to a network through a server can be shared between terminals and hosts at both the local site and throughout the network. A single terminal may be connected to several hosts at the same time (in multiple concurrent sessions), and can switch between them. Terminal servers are also used to network devices that have only serial outputs. A connection between serial ports on different servers is opened, allowing data to move between the two devices.

    Given its natural translation ability, a multi-protocol server can perform conversions between the protocols it knows such as LAT and TCP/IP. While server bandwidth is not adequate for large file transfers, it can easily handle host-to-host inquiry/response applications, electronic mailbox checking, etc. In addition, it is far more economical than the alternatives — acquiring expensive host software and special-purpose converters. Multiport device and print servers give users greater flexibility in configuring and managing their networks.

    Whether it is moving printers and other peripherals from one network to another, expanding the dimensions of interoperability or preparing for growth, terminal servers can fulfill these requirements without major rewiring. Today, terminal servers offer a full range of functionality, ranging from 8 to 32 ports, giving users the power to connect terminals, modems, servers and virtually any serial device for remote access over IP networks.

    Print Servers

    Print servers enable printers to be shared by other users on the network. Supporting parallel and/or serial interfaces, a print server accepts print jobs from any user on the network using supported protocols and manages those jobs on each appropriate printer.

    The earliest print servers were external devices, which supported printing via parallel or serial ports on the device. Typically, only one or two protocols were supported. The latest generations of print servers support multiple protocols, have multiple parallel and serial connection options and, in some cases, are small enough to fit directly on the parallel port of the printer itself. Some printers have embedded or internal print servers. This design has an integral communication benefit between printer and print server, but lacks flexibility if the printer has physical problems.

    Print servers generally do not contain a large amount of memory; they simply hold jobs in a queue and, when the desired printer becomes available, allow the host to transmit the data to the appropriate printer port on the server. The print server can then queue and print each job in the order in which print requests are received, regardless of the protocol used or the size of the job.
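    The first-in, first-out behavior described above is easy to model. The sketch below is purely illustrative (the class and method names are hypothetical):

```python
from collections import deque

class PrintServer:
    """Minimal FIFO job queue: jobs print strictly in arrival order."""
    def __init__(self):
        self.queue = deque()

    def submit(self, host, job, size_kb):
        # Jobs are accepted regardless of protocol or size...
        self.queue.append((host, job, size_kb))

    def printer_ready(self):
        # ...and released one at a time, in the order received.
        if self.queue:
            host, job, size_kb = self.queue.popleft()
            return f"{job} from {host} ({size_kb} KB)"
        return None

ps = PrintServer()
ps.submit("hostA", "report.ps", 900)
ps.submit("hostB", "label.txt", 2)
print(ps.printer_ready())   # report.ps prints first, despite its size
```

    Note that the small job waits behind the large one: as the text says, order of arrival, not job size or protocol, decides what prints next.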


    Device Server Technology in the Data Center

    The IT/data center is considered the pulse of any modern business.  Remote management enables users to monitor and manage global networks, systems and IT equipment from anywhere and at any time.  Device servers play a major role in allowing for the remote capabilities and flexibility required for businesses to maximize personnel resources and technology ROI.

    Console Servers

    Console servers provide the flexibility of both standard and emergency remote access via attachment to the network or to a modem. Remote console management serves as a valuable tool to help maximize system uptime and minimize system operating costs.

    Secure console servers provide familiar tools to leverage the console or emergency management port built into most serial devices, including servers, switches, routers, telecom equipment – anything in a rack – even if the network is down. They also supply complete in-band and out-of-band local and remote management for the data center with tools such as telnet and SSH that help manage the performance and availability of critical business information systems.

    Lantronix provides complete in-band and out-of-band local and remote management solutions for the data center. Lantronix secure console management products give IT managers unsurpassed ability to securely and remotely manage serial devices, including servers, switches, routers, telecom equipment – anything in a rack – even if the network is down.


    The ability to manage virtually any electronic device over a network or the Internet is changing the way the world works and does business. With the ability to remotely manage, monitor, diagnose and control equipment, a new level of functionality is added to networking — providing business with increased intelligence and efficiency.  Lantronix leads the way in developing new network intelligence and has been a tireless pioneer in machine-to-machine (M2M) communication technology.

    We hope this introduction to networking has been helpful and informative. This tutorial was meant to be an overview and not a comprehensive guide that explains everything there is to know about planning, installing, administering and troubleshooting a network. There are many Internet websites, books and magazines available that explain all aspects of computer networks, from LANs to WANs, network hardware to running cable. To learn about these subjects in greater detail, check your local bookstore, software retailer or newsstand for more information.


    Adding Speed

    The phrase “you can never get too much of a good thing” can certainly be applied to networking. Once the benefits of networking are demonstrated, there is a thirst for even faster, more reliable connections to support a growing number of users and highly-complex applications.

    How to obtain that added bandwidth can be an issue. While repeaters allow LANs to extend beyond normal distance limitations, they still limit the number of nodes that can be supported.
    Bridges and switches on the other hand allow LANs to grow significantly larger by virtue of their ability to support full Ethernet segments on each port. Additionally, bridges and switches selectively filter network traffic to only those packets needed on each segment, significantly increasing throughput on each segment and on the overall network.

    Network managers continue to look for better performance and more flexibility for network topologies, bridges and switches. To provide a better understanding of these and related technologies, this tutorial will cover:

    • Bridges
    • Ethernet Switches
    • Routers
    • Network Design Criteria
    • When and Why Ethernets Become Too Slow
    • Increasing Performance with Fast and Gigabit Ethernet


    Bridges

    Bridges connect two LAN segments of similar or dissimilar types, such as Ethernet and Token Ring. This allows two Ethernet segments to behave like a single Ethernet, allowing any pair of computers on the extended Ethernet to communicate. Bridges are transparent, so computers do not know whether a bridge separates them.

    Bridges map the Ethernet addresses of the nodes residing on each network segment and allow only necessary traffic to pass through the bridge. When a packet is received by the bridge, the bridge determines the destination and source segments. If the segments are the same, the packet is dropped (“filtered”); if the segments are different, the packet is “forwarded” to the correct segment. Additionally, bridges do not forward bad or misaligned packets.

    Bridges are also called “store-and-forward” devices because they look at the whole Ethernet packet before making filtering or forwarding decisions. Filtering packets and regenerating forwarded packets enables bridging technology to split a network into separate collision domains. Bridges are able to isolate network problems; if interference occurs on one of two segments, the bridge will receive and discard an invalid frame keeping the problem from affecting the other segment. This allows for greater distances and more repeaters to be used in the total network design.
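    The filter-or-forward decision described above can be sketched as a learned address table. This is a simplified model (real bridges also age entries out and handle broadcast and multicast traffic):

```python
class Bridge:
    """Learning bridge: maps source MAC addresses to segments, then
    filters same-segment traffic and forwards cross-segment traffic."""
    def __init__(self):
        self.table = {}   # MAC address -> segment number

    def handle(self, src_mac, dst_mac, in_segment):
        self.table[src_mac] = in_segment          # learn the source's location
        dst_segment = self.table.get(dst_mac)
        if dst_segment is None:
            return "flood"                        # unknown destination: send everywhere
        if dst_segment == in_segment:
            return "filter"                       # same segment: drop the packet
        return f"forward to segment {dst_segment}"

br = Bridge()
print(br.handle("AA", "BB", 1))   # BB not yet learned -> flood
print(br.handle("BB", "AA", 2))   # AA known on segment 1 -> forward to segment 1
print(br.handle("AA", "BB", 1))   # BB known on segment 2 -> forward to segment 2
print(br.handle("CC", "AA", 1))   # AA is on the sender's own segment -> filter
```

    Filtering the last packet is what keeps local traffic local, which is why bridging splits a network into separate collision domains.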

    Dealing with Loops

    Most bridges are self-learning; they determine the Ethernet addresses in use on each segment by building a table as packets pass through the network. However, this self-learning capability dramatically raises the potential for network loops in networks that have many bridges. A loop presents conflicting information on which segment a specific address is located and forces the device to forward all traffic. The spanning tree algorithm, standardized in the IEEE 802.1D specification, describes how switches and bridges can communicate to avoid network loops.
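    The core idea, keeping one loop-free tree of links active and blocking the redundant ones, can be illustrated with a breadth-first search from a root bridge. This is a toy model only: the real 802.1D protocol elects the root and computes port roles by exchanging BPDU messages between bridges.

```python
from collections import deque

def spanning_tree(links, root):
    """Return (active, blocked) link sets forming a loop-free tree."""
    neighbors = {}
    for a, b in links:
        neighbors.setdefault(a, []).append(b)
        neighbors.setdefault(b, []).append(a)

    active, visited, frontier = set(), {root}, deque([root])
    while frontier:                      # BFS outward from the root bridge
        node = frontier.popleft()
        for nxt in neighbors.get(node, []):
            if nxt not in visited:
                visited.add(nxt)
                active.add(frozenset((node, nxt)))  # tree link: stays forwarding
                frontier.append(nxt)
    blocked = {frozenset(l) for l in links} - active
    return active, blocked

# Three bridges wired in a triangle: one redundant link must be blocked.
links = [("A", "B"), ("B", "C"), ("A", "C")]
active, blocked = spanning_tree(links, root="A")
print(len(active), len(blocked))   # 2 active links, 1 blocked link
```

    The blocked link is not wasted: if an active link fails, the algorithm reruns and the blocked link takes over, which is exactly the redundancy spanning tree is designed to provide.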

    Ethernet Switches

    Ethernet switches are an expansion of the Ethernet bridging concept. The advantage of using a switched Ethernet is parallelism. Up to one-half of the computers connected to a switch can send data at the same time.

    LAN switches link multiple networks together and have two basic architectures: cut-through and store-and-forward. In the past, cut-through switches were faster because they examined the packet destination address only before forwarding it on to its destination segment. A store-and-forward switch works like a bridge in that it accepts and analyzes the entire packet before forwarding it to its destination.

    Historically, store-and-forward took more time to examine the entire packet, although one benefit was that it allowed the switch to catch certain packet errors and keep them from propagating through the network. Today, the speed of store-and-forward switches has caught up with cut-through switches so the difference between the two is minimal. Also, there are a large number of hybrid switches available that mix both cut-through and store-and-forward architectures.
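    The difference between the two architectures can be made concrete with a small sketch. Here zlib.crc32 merely stands in for the Ethernet frame check sequence (the real FCS is also a CRC-32, but computed per the 802.3 framing rules), and the frame layout is simplified:

```python
import zlib

def cut_through_decision(frame):
    """Cut-through: forward after reading only the destination address."""
    dst = frame[:6]                      # first 6 bytes of an Ethernet frame
    return ("forward", dst)

def store_and_forward_decision(frame):
    """Store-and-forward: buffer the whole frame and verify its checksum
    (zlib.crc32 stands in for the Ethernet FCS here) before forwarding."""
    payload, fcs = frame[:-4], frame[-4:]
    if zlib.crc32(payload).to_bytes(4, "big") != fcs:
        return ("drop", None)            # corrupt frame never propagates
    return ("forward", payload[:6])

dst = b"\xaa\xbb\xcc\xdd\xee\xff"
payload = dst + b"\x11" * 20
good = payload + zlib.crc32(payload).to_bytes(4, "big")
bad = payload + (zlib.crc32(payload) ^ 1).to_bytes(4, "big")   # wrong FCS

print(cut_through_decision(bad)[0])        # forward (the error goes unnoticed)
print(store_and_forward_decision(bad)[0])  # drop
print(store_and_forward_decision(good)[0]) # forward
```

    The trade-off shown here is the one described above: cut-through never waits for the full frame, but only store-and-forward can catch the corrupt frame before it propagates.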

    Both cut-through and store-and-forward switches separate a network into collision domains, allowing network design rules to be extended. Each of the segments attached to an Ethernet switch has a full 10 Mbps of bandwidth shared by fewer users, which results in better performance (as opposed to hubs that only allow bandwidth sharing from a single Ethernet). Newer switches today offer high-speed links, either Fast Ethernet, Gigabit Ethernet, 10 Gigabit Ethernet or ATM. These are used to link switches together or give added bandwidth to high-traffic servers. A network composed of a number of switches linked together via uplinks is termed a “collapsed backbone” network.



    Routers

    A router is a device that forwards data packets along networks, and determines which way to send each data packet based on its current understanding of the state of its connected networks. Routers are typically connected to at least two networks, commonly two LANs or WANs or a LAN and its Internet Service Provider’s (ISP’s) network. Routers are located at gateways, the places where two or more networks connect.

    Routers filter out network traffic by specific protocol rather than by packet address. Routers also divide networks logically instead of physically. An IP router can divide a network into various subnets so that only traffic destined for particular IP addresses can pass between segments. Network speed often decreases due to this type of intelligent forwarding. Such filtering takes more time than that exercised in a switch or bridge, which only looks at the Ethernet address. However, in more complex networks, overall efficiency is improved by using routers.
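    The subnet-based forwarding decision described above can be shown with Python's standard ipaddress module. The two-interface router below is a hypothetical, simplified model (a real router performs longest-prefix matching over a full routing table):

```python
import ipaddress

# Two subnets hanging off one hypothetical IP router.
subnets = {
    "eth0": ipaddress.ip_network("192.168.1.0/24"),
    "eth1": ipaddress.ip_network("192.168.2.0/24"),
}

def forward(dst_ip, in_port):
    """Forward only traffic destined for a different subnet."""
    addr = ipaddress.ip_address(dst_ip)
    for port, net in subnets.items():
        if addr in net:
            # Same-subnet traffic stays local; it never crosses the router.
            return "local" if port == in_port else f"forward via {port}"
    return "no route"

print(forward("192.168.1.7", "eth0"))   # local: stays on its own segment
print(forward("192.168.2.7", "eth0"))   # forward via eth1
print(forward("10.0.0.1", "eth0"))      # no route: dropped
```

    Unlike the bridge sketch earlier in this tutorial, the decision here is made on IP addresses and subnet membership rather than on Ethernet addresses, which is precisely the extra work that makes routing slower but more selective.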

    Network Design Criteria

    Ethernets and Fast Ethernets have design rules that must be followed in order to function correctly. The maximum number of nodes, number of repeaters and maximum segment distances are defined by the electrical and mechanical design properties of each type of Ethernet media.

    A network using repeaters, for instance, functions within the timing constraints of Ethernet. Although electrical signals on the Ethernet media travel near the speed of light, it still takes a finite amount of time for the signal to travel from one end of a large Ethernet to the other. The Ethernet standard assumes a round trip of no more than 51.2 microseconds (one 512-bit slot time at 10 Mbps) for a signal to reach its destination and any collision to be detected.

    Ethernet is subject to the “5-4-3” rule of repeater placement: the network can only have five segments connected; it can only use four repeaters; and of the five segments, only three can have users attached to them; the other two must be inter-repeater links.
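    The 5-4-3 rule lends itself to a simple design check. The helper below is hypothetical, not part of any standard tool:

```python
def check_5_4_3(segments, repeaters, populated_segments):
    """Validate a shared-Ethernet design against the 5-4-3 rule:
    at most 5 segments, 4 repeaters, and 3 populated segments."""
    errors = []
    if segments > 5:
        errors.append("more than five segments")
    if repeaters > 4:
        errors.append("more than four repeaters")
    if populated_segments > 3:
        errors.append("more than three populated segments")
    if populated_segments > segments:
        errors.append("populated segments exceed total segments")
    return errors or ["OK"]

# The largest legal design: 5 segments, 4 repeaters, 3 populated.
print(check_5_4_3(segments=5, repeaters=4, populated_segments=3))
# One populated segment too many:
print(check_5_4_3(segments=5, repeaters=4, populated_segments=4))
```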

    If the design of the network violates these repeater and placement rules, then timing guidelines will not be met and sending stations will be forced to resend packets. This can lead to lost packets and excessive resent packets, which can slow network performance and create trouble for applications. Newer Ethernet standards (Fast Ethernet, Gigabit Ethernet and 10 Gigabit Ethernet) have modified repeater rules, since the minimum packet size takes less time to transmit than in regular Ethernet; the shorter transmission time shrinks the allowable network diameter, so fewer repeaters can be used. In Fast Ethernet networks, there are two classes of repeaters. Class I repeaters have a latency of 0.7 microseconds or less and are limited to one repeater per network. Class II repeaters have a latency of 0.46 microseconds or less and are limited to two repeaters per network. The following are the distance (diameter) characteristics for these types of Fast Ethernet repeater combinations:

    Fast Ethernet              Copper    Fiber
    No Repeaters               100 m     412 m
    One Class I Repeater       200 m     272 m
    One Class II Repeater      200 m     320 m
    Two Class II Repeaters     205 m     228 m

    * Full Duplex Mode: 2 km (fiber)

    When conditions require greater distances or an increase in the number of nodes/repeaters, then a bridge, router or switch can be used to connect multiple networks together. These devices join two or more separate networks, allowing network design criteria to be restored. Switches allow network designers to build large networks that function well. The reduction in costs of bridges and switches reduces the impact of repeater rules on network design.

    Each network connected via one of these devices is referred to as a separate collision domain in the overall network.

    When and Why Ethernets Become Too Slow

    As more users are added to a shared network or as applications requiring more data are added, performance deteriorates. This is because all users on a shared network are competitors for the Ethernet bus. On a moderately loaded 10Mbps Ethernet network that is shared by 30-50 users, that network will only sustain throughput in the neighborhood of 2.5Mbps after accounting for packet overhead, interpacket gaps and collisions.

    Increasing the number of users (and therefore packet transmissions) creates a higher collision potential. Collisions occur when two or more nodes attempt to send information at the same time. When they realize that a collision has occurred, each node shuts off for a random time before attempting another transmission. With shared Ethernet, the likelihood of collision increases as more nodes are added to the shared collision domain of the shared Ethernet. One of the steps to alleviate this problem is to segment traffic with a bridge or switch. A switch can replace a hub and improve network performance. For example, an eight-port switch can support eight Ethernets, each running at a full 10 Mbps. Another option is to dedicate one or more of these switched ports to a high traffic device such as a file server.

    Greater throughput is required to support multimedia and video applications. When added to the network, Ethernet switches provide a number of enhancements over shared networks that can support these applications. Foremost is the ability to divide networks into smaller and faster segments. Ethernet switches examine each packet, determine where that packet is destined and then forward that packet to only those ports to which the packet needs to go. Modern switches are able to do all these tasks at “wirespeed,” that is, without delay.

    Aside from deciding when to forward or when to filter the packet, Ethernet switches also completely regenerate the Ethernet packet. This regeneration and re-timing allows each port on a switch to be treated as a complete Ethernet segment, capable of supporting the full length of cable along with all of the repeater restrictions. (In half-duplex Gigabit Ethernet, the standard 512-bit CSMA/CD slot time is too short for 100 m copper links, so Carrier Extension is used to guarantee a 512-byte slot time.)

    Additionally, bad packets are identified by Ethernet switches and immediately dropped from any future transmission. This “cleansing” activity keeps problems isolated to a single segment and keeps them from disrupting other network activity. This aspect of switching is extremely important in a network environment where hardware failures are to be anticipated. Full duplex doubles the bandwidth on a link, and is another method used to increase bandwidth to dedicated workstations or servers. Full duplex modes are available for standard Ethernet, Fast Ethernet, and Gigabit Ethernet. To use full duplex, special network interface cards are installed in the server or workstation, and the switch is programmed to support full duplex operation.

    Increasing Performance with Fast and Gigabit Ethernet

    Implementing Fast or Gigabit Ethernet to increase performance is the next logical step when Ethernet becomes too slow to meet user needs. Higher traffic devices can be connected to switches or each other via Fast Ethernet or Gigabit Ethernet, providing a great increase in bandwidth. Many switches are designed with this in mind, and have Fast Ethernet uplinks available for connection to a file server or other switches. Eventually, Fast Ethernet can be deployed to user desktops by equipping all computers with Fast Ethernet network interface cards and using Fast Ethernet switches and repeaters.

    With an understanding of the underlying technologies and products in use in Ethernet networks, the next tutorial will advance to a discussion of some of the most popular real-world applications.

