Filed under CompTIA Network+.

Source: MC MCSE Certification Resources

 

  • Quality of Service (QoS) – is a set of parameters that controls the level of quality provided to different types of network traffic. QoS parameters include the maximum amounts of delay, signal loss, and noise that can be accommodated for a particular type of network traffic, as well as bandwidth priority and CPU usage for a specific stream of data. These parameters are usually agreed upon by the transmitter and the receiver, which enter into an agreement known as a Service Level Agreement (SLA). In addition to defining QoS parameters, the SLA describes the remedial measures or penalties to be incurred in the event that the ISP fails to provide the QoS promised in the SLA.
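In practice, a common way QoS parameters are signaled is by marking each packet's DiffServ field. As a minimal sketch (assuming a Linux host; the DSCP values are the standard ones from RFC 2474/RFC 3246), an application can request priority treatment for its outgoing traffic like this:

```python
import socket

# Standard DSCP code points (RFC 2474 / RFC 3246)
DSCP_EF = 46   # Expedited Forwarding, commonly used for VoIP
DSCP_BE = 0    # Best Effort, the default

def dscp_to_tos(dscp: int) -> int:
    """The 6-bit DSCP value occupies the upper bits of the IP TOS byte."""
    return dscp << 2

# Ask the OS to mark this socket's outgoing packets as Expedited
# Forwarding; routers configured to honor DSCP will prioritize them.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp_to_tos(DSCP_EF))
sock.close()

print(dscp_to_tos(DSCP_EF))  # 184 (0xB8)
```

Marking only expresses a preference; whether the traffic is actually prioritized depends on the routers along the path honoring the DSCP value.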

 

  • Traffic Shaping – (also known as “packet shaping,” or ITMPs: Internet Traffic Management Practices) is the control of computer network traffic in order to optimize or guarantee performance, improve latency, and/or increase usable bandwidth by delaying packets that meet certain criteria. More specifically, traffic shaping is any action on a set of packets (often called a stream or a flow) that imposes additional delay on those packets so that they conform to some predetermined constraint (a contract or traffic profile). Traffic shaping provides a means to control the volume of traffic being sent into a network in a specified period (bandwidth throttling), the maximum rate at which the traffic is sent (rate limiting), or more complex criteria such as the generic cell rate algorithm (GCRA). This control can be accomplished in many ways and for many reasons; however, traffic shaping is always achieved by delaying packets. Traffic shaping is commonly applied at the network edges to control traffic entering the network, but it can also be applied by the traffic source (for example, a computer or network card) or by an element in the network. Traffic policing is the distinct but related practice of dropping or marking packets.
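The "delay packets until they conform to a traffic profile" idea is classically implemented with a token bucket. A toy sketch (the rate and burst figures are illustrative, not from the text):

```python
import time

class TokenBucket:
    """Toy token-bucket shaper: a packet conforms if enough byte
    'tokens' have accumulated; otherwise it is delayed until they have."""

    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps / 8.0        # refill rate in bytes/second
        self.tokens = burst_bytes         # start with a full burst of credit
        self.capacity = burst_bytes
        self.last = time.monotonic()

    def delay_for(self, packet_bytes: int) -> float:
        """Seconds this packet must be delayed to conform to the profile."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        self.tokens -= packet_bytes       # may go negative: the packet is
        if self.tokens >= 0:              # then scheduled into the future
            return 0.0
        return -self.tokens / self.rate

shaper = TokenBucket(rate_bps=8000, burst_bytes=1500)  # 1 kB/s, one-frame burst
print(shaper.delay_for(1500))          # 0.0  (burst credit covers it)
print(round(shaper.delay_for(1000)))   # 1    (must wait ~1 s for refill)
```

Note the shaper never drops anything; it only computes delay. Dropping or marking the non-conforming packets instead would be traffic policing, as the paragraph above distinguishes.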

 

  • Load Balancing – is a technique to distribute workload evenly across two or more computers, network links, CPUs, hard drives, or other resources, in order to get optimal resource utilization, maximize throughput, minimize response time, and avoid overload. Using multiple components with load balancing, instead of a single component, may increase reliability through redundancy. The load balancing service is usually provided by a dedicated program or hardware device (such as a multilayer switch or a DNS server).
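The simplest distribution policy a load balancer can apply is round robin. A minimal sketch (the server names are placeholders, not real hosts):

```python
from itertools import cycle

class RoundRobinBalancer:
    """Hand out backend servers in strict rotation, so each one
    receives an equal share of incoming requests over time."""

    def __init__(self, servers):
        self._next = cycle(servers)

    def pick(self):
        return next(self._next)

lb = RoundRobinBalancer(["app1", "app2", "app3"])
print([lb.pick() for _ in range(5)])
# ['app1', 'app2', 'app3', 'app1', 'app2']
```

Real balancers usually refine this with health checks and weights (e.g. least-connections), but the rotation above is the core idea.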

 

  • High Availability – (aka uptime) refers to a system or component that is continuously operational for a desirably long length of time. Availability can be measured relative to “100% operational” or “never failing.” A widely held but difficult-to-achieve standard of availability for a system or product is known as “five 9s” (99.999 percent) availability. Since a computer system or a network consists of many parts, all of which usually need to be present for the whole to be operational, much planning for high availability centers on backup and failover processing and on data storage and access. For storage, a redundant array of independent disks (RAID) is one approach; a more recent approach is the storage area network (SAN). Some availability experts emphasize that, for any system to be highly available, its parts should be well designed and thoroughly tested before they are used. For example, a new application program that has not been thoroughly tested is likely to become a frequent point of breakdown in a production system.
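The "five 9s" figure translates directly into an allowed-downtime budget, which is worth working out once:

```python
def max_downtime_minutes_per_year(availability: float) -> float:
    """Allowed downtime per (non-leap) year for a given availability."""
    return (1.0 - availability) * 365 * 24 * 60

print(round(max_downtime_minutes_per_year(0.99999), 2))  # 5.26  ("five 9s")
print(round(max_downtime_minutes_per_year(0.999), 1))    # 525.6 ("three 9s")
```

Five 9s leaves barely five minutes of outage per year, which is why it demands the redundancy and failover planning described above rather than just reliable individual components.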

 

  • Cache Engine – (aka cache server) is a dedicated network server, or a service acting as a server, that saves Web pages or other Internet content locally. By placing previously requested information in temporary storage, or cache, a cache server both speeds up access to data and reduces demand on an enterprise’s bandwidth. Cache servers also allow users to access content offline, including media files or other documents. A cache server is sometimes called a “cache engine.” A cache server is almost always also a proxy server, which is a server that “represents” users by intercepting their Internet requests and managing them on the users’ behalf. Typically, this is because enterprise resources are being protected by a firewall server, which allows outgoing requests to go out but screens all incoming traffic. A proxy server helps match incoming messages with outgoing requests. In doing so, it is in a position to also cache the files that are received for later recall by any user. To the user, the proxy and cache servers are invisible; all Internet requests and returned responses appear to come from the addressed place on the Internet. (The proxy is not quite invisible; its IP address has to be specified as a configuration option to the browser or other protocol program.)
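The hit/miss behavior described above can be sketched in a few lines. In this toy version, `fetch_origin` stands in for a real HTTP request to the origin server:

```python
class CachingProxy:
    """Serve content from the local cache when possible; otherwise
    fetch it from the origin once and remember the result."""

    def __init__(self, fetch_origin):
        self.fetch_origin = fetch_origin
        self.cache = {}
        self.hits = 0
        self.misses = 0

    def get(self, url):
        if url in self.cache:
            self.hits += 1     # served locally: fast, no upstream bandwidth
            return self.cache[url]
        self.misses += 1       # first request must go out to the origin
        body = self.fetch_origin(url)
        self.cache[url] = body
        return body

origin_calls = []
proxy = CachingProxy(lambda url: origin_calls.append(url) or f"<page {url}>")
proxy.get("/index.html")
proxy.get("/index.html")
print(proxy.hits, proxy.misses, len(origin_calls))  # 1 1 1
```

Two identical requests cost only one trip to the origin, which is exactly the bandwidth saving the paragraph describes. A real cache engine would also honor expiry headers so stale pages get refetched.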

 

  • Fault Tolerance – describes a computer system or component designed so that, in the event that a component fails, a backup component or procedure can immediately take its place with no loss of service. Fault tolerance can be provided with software, embedded in hardware, or provided by some combination of the two. In the software implementation, the operating system provides an interface that allows a programmer to “checkpoint” critical data at predetermined points within a transaction. In the hardware implementation (for example, with Stratus and its VOS operating system), the programmer does not need to be aware of the fault-tolerant capabilities of the machine. At the hardware level, fault tolerance is achieved by duplexing each hardware component: disks are mirrored, and multiple processors are “lock-stepped” together with their outputs compared for correctness. When an anomaly occurs, the faulty component is identified and taken out of service, but the machine continues to function as usual.
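The lock-step comparison can be illustrated with a toy majority vote over duplexed replicas (the "faulty component" here is simulated, not a real hardware fault):

```python
from collections import Counter

def lockstep(replicas, *args):
    """Run the same computation on every duplexed component, majority-vote
    the outputs, and report which replicas disagreed (i.e. look faulty)."""
    results = [replica(*args) for replica in replicas]
    winner, _ = Counter(results).most_common(1)[0]
    faulty = [i for i, r in enumerate(results) if r != winner]
    return winner, faulty

good = lambda x: x * 2
bad = lambda x: x * 2 + 1           # simulated faulty component
value, faulty = lockstep([good, good, bad], 21)
print(value, faulty)                # 42 [2]
```

The system still returns the correct answer while flagging replica 2 for removal from service, mirroring the "taken out of service, but the machine continues to function" behavior above.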

Parameters Influencing QoS

  • Bandwidth – is the average number of bits that can be transmitted from the source to a destination over the network in one second.
  • Latency – (aka “lag”) is the amount of time it takes a packet of data to move across a network connection. When a packet is being sent, there is “latent” time, while the computer that sent the packet waits for confirmation that the packet has been received. Latency and bandwidth are the two factors that determine your network connection speed. Latency in a packet-switched network is measured either one-way (the time from the source sending a packet to the destination receiving it) or round-trip (the one-way latency from source to destination plus the one-way latency from the destination back to the source). Round-trip latency is more often quoted because it can be measured from a single point. Note that round-trip latency excludes the amount of time that a destination system spends processing the packet. Many software platforms provide a service called ping that can be used to measure round-trip latency. The responding end performs no packet processing; it merely sends a response back when it receives a packet (i.e., performs a no-op), so ping is a relatively accurate way of measuring latency. Where precision is important, one-way latency for a link can be more strictly defined as the time from the start of packet transmission to the start of packet reception. The time from the start of packet transmission to the end of packet transmission at the near end is measured separately and called serialization delay; it depends on the throughput of the link and the size of the packet, and is the time required by the system to signal the full packet onto the wire. Some applications, protocols, and processes are sensitive to the time it takes for their requests and results to be transmitted over the network. This is known as latency sensitivity. Examples of latency-sensitive applications include VoIP, video conferencing, and online games.
In a VoIP deployment, high latency can mean an annoying and counterproductive delay between a speaker’s words and the listener’s reception of those words. Network management techniques such as QoS, load balancing, traffic shaping, and caching can be used individually or combined to optimize the network and reduce latency for sensitive applications. By regularly testing for latency and monitoring those devices that are susceptible to latency issues, you can provide a higher level of service to end users.
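The serialization-delay definition above is simple arithmetic: packet size divided by link throughput. Worked out for a full-size Ethernet frame:

```python
def serialization_delay_ms(packet_bytes: int, link_bps: float) -> float:
    """Time to clock a full packet onto the wire, per the definition above."""
    return packet_bytes * 8 / link_bps * 1000

# A 1500-byte Ethernet frame on a 100 Mbit/s link:
print(round(serialization_delay_ms(1500, 100e6), 3))  # 0.12 ms
# The same frame on a gigabit link takes a tenth of that:
print(round(serialization_delay_ms(1500, 1e9), 3))    # 0.012 ms
```

This is why faster links reduce one-way latency even over the same physical distance: propagation time is unchanged, but each packet spends less time being signaled onto the wire.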
  • Jitter – is the deviation in or displacement of some aspect of the pulses in a high-frequency digital signal. As the name suggests, jitter can be thought of as shaky pulses. The deviation can be in terms of amplitude, phase timing, or the width of the signal pulse. Another definition is that it is “the period frequency displacement of the signal from its ideal location.” Among the causes of jitter are electromagnetic interference (EMI) and crosstalk with other signals. Jitter can cause a display monitor to flicker; affect the ability of the processor in a personal computer to perform as intended; introduce clicks or other undesired effects into audio signals; and cause loss of transmitted data between network devices. The amount of allowable jitter depends greatly on the application.
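At the packet level, one common way to quantify jitter on a network path is the smoothed inter-arrival estimate defined by RFC 3550 (the RTP specification used by VoIP). A sketch, with illustrative transit times in milliseconds:

```python
def interarrival_jitter(transit_ms):
    """Smoothed jitter estimate in the style of RFC 3550:
    J += (|D| - J) / 16 for each successive pair of packet transit times,
    where D is the change in transit time between consecutive packets."""
    j = 0.0
    for prev, cur in zip(transit_ms, transit_ms[1:]):
        j += (abs(cur - prev) - j) / 16.0
    return j

print(interarrival_jitter([20, 20, 20, 20]))  # 0.0   (perfectly steady arrivals)
print(interarrival_jitter([20, 30]))          # 0.625 (one 10 ms swing, damped)
```

The 1/16 gain means a single spike barely moves the estimate, while sustained variation drives it up; VoIP endpoints size their de-jitter buffers from exactly this kind of figure.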

 

  • Packet Loss – is the failure of one or more transmitted packets to arrive at their destination. This event can cause noticeable effects in all types of digital communications. The effects of packet loss:
    • In text and data, packet loss produces errors.
    • In videoconference environments it can create jitter.
    • In pure audio communications, such as VoIP, it can cause jitter and frequent gaps in received speech.
    • In the worst cases, packet loss can cause severe mutilation of received data, broken-up images, unintelligible speech or even the complete absence of a received signal.

    The causes of packet loss include inadequate signal strength at the destination, natural or human-made interference, excessive system noise, hardware failure, software corruption or overburdened network nodes. Often more than one of these factors is involved. In a case where the cause cannot be remedied, concealment may be used to minimize the effects of lost packets.
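Packet loss is normally reported as a percentage of packets sent, which is how ping summarizes it:

```python
def packet_loss_percent(sent: int, received: int) -> float:
    """Fraction of transmitted packets that never arrived, as a percentage."""
    return (sent - received) / sent * 100

# 990 of 1000 probes answered:
print(packet_loss_percent(1000, 990))  # 1.0 (% of packets lost)
```

Even 1% loss is enough to audibly degrade VoIP, while TCP file transfers merely slow down as lost segments are retransmitted, which is why acceptable loss depends on the application.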

 

  • Echo – is when portions of the transmission are repeated. Echoes can occur at many points along the route. Splices and improper termination in the network can cause a transmitted packet to reflect back to the source, which produces the sound of an echo. To correct for echo, network technicians can introduce an echo canceller into the network design, which cancels out the energy being reflected.

 

  • High Bandwidth Applications – A high-bandwidth application is a software package or program that tends to require large amounts of bandwidth in order to fulfill a request. As demand for these applications continues to increase, bandwidth issues will become more frequent, resulting in degradation of a network system. One way to combat the effects of these applications on a network is to manage the amount of bandwidth allocated to them. This allows users to keep using the applications without degrading the QoS of network services. Examples:
    • Thin Clients
    • Voice over IP
    • Real Time Video
    • Multi-media
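One common policy for dividing a link among such applications is max-min fairness: fully satisfy the small demands, then split the remainder evenly among the heavy ones. A sketch (the link capacity and per-application demands are illustrative):

```python
def max_min_fair(capacity, demands):
    """Max-min fair bandwidth allocation: repeatedly satisfy every demand
    at or below the current equal share, then split what remains evenly
    among the still-unsatisfied applications."""
    alloc, pending, remaining = {}, dict(demands), capacity
    while pending:
        share = remaining / len(pending)
        satisfied = {k: d for k, d in pending.items() if d <= share}
        if not satisfied:              # everyone left gets capped equally
            for k in pending:
                alloc[k] = share
            return alloc
        for k, d in satisfied.items():
            alloc[k] = d
            remaining -= d
            del pending[k]
    return alloc

# 10 Mbit/s link: VoIP is fully satisfied, the two heavy flows split the rest.
print(max_min_fair(10, {"voip": 2, "video": 8, "backup": 8}))
# {'voip': 2, 'video': 4.0, 'backup': 4.0}
```

The light VoIP flow gets everything it asked for while the bandwidth-hungry flows are throttled equally, preserving QoS for all three instead of letting one application starve the others.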


Want more information on how to become CompTIA Net+ Certified? Learn more!


Also published on Medium.
