Identity fraud

Identity fraud, also known as identity theft, is a serious crime that can have devastating consequences for individuals and businesses alike. With the increasing digitization of our lives and the proliferation of personal information online, the risk of falling victim to identity fraud is higher than ever. In this blog, we will delve into what identity fraud is, the risks associated with it, and steps you can take to protect yourself.

What is Identity Fraud?

Identity fraud occurs when someone steals your personal information and uses it without your consent to commit fraudulent activities, such as making unauthorized purchases, opening bank accounts or credit cards, filing false tax returns, or even committing crimes in your name. Personal information that can be used for identity fraud includes your name, address, social security number, date of birth, phone number, email address, financial account numbers, and more.

The Risks of Identity Fraud

The risks of identity fraud are numerous and can have serious consequences for victims. Some of the risks associated with identity fraud include:

  • Financial Losses: Identity thieves can drain your bank accounts, make fraudulent purchases using your credit cards, and even open new credit accounts in your name, leaving you with significant financial losses and damage to your credit score.
  • Legal Troubles: If an identity thief commits crimes using your personal information, you may find yourself facing legal troubles, including being wrongly accused of criminal activities.
  • Emotional Distress: Discovering that your personal information has been stolen and misused can be emotionally distressing, causing anxiety, stress, and a sense of violation.
  • Time and Effort to Resolve: Resolving the aftermath of identity fraud can be time-consuming and require significant effort, including contacting financial institutions, credit bureaus, and law enforcement agencies, filling out paperwork, and dealing with the bureaucratic process.

Protecting Yourself from Identity Fraud

While identity fraud can be a serious threat, there are steps you can take to protect yourself and reduce your risk of falling victim to this crime. Here are some important measures you can implement:

  • Safeguard Your Personal Information: Be cautious about sharing your personal information online, and only provide it to trusted sources. Avoid sharing sensitive information on social media platforms, and be cautious about the information you share over the phone or via email.
  • Use Strong and Unique Passwords: Use strong, unique passwords for all your online accounts, and avoid using common passwords or reusing passwords across different accounts. Consider using a password manager to help you generate and store complex passwords securely.
  • Monitor Your Financial Accounts: Regularly monitor your bank and credit card accounts for any unauthorized transactions or suspicious activity. Report any discrepancies immediately to your financial institution.
  • Be Cautious of Phishing Attempts: Be wary of emails, phone calls, or text messages that request your personal information or financial details. Be cautious of clicking on links or downloading attachments from unknown sources, and verify the legitimacy of any communication before providing any sensitive information.
  • Secure Your Devices: Keep your devices, including your computer, smartphone, and tablet, secure with up-to-date antivirus software, firewalls, and security patches. Avoid using public Wi-Fi networks for sensitive transactions and be cautious of downloading apps or software from unknown sources.
  • Check Your Credit Reports: Regularly review your credit reports from the major credit bureaus (Equifax, Experian, and TransUnion) to check for any suspicious activity or inaccuracies. You are entitled to a free credit report from each bureau every year.
  • Consider Identity Theft Protection Services: Consider enrolling in an identity theft protection service that offers monitoring, alerts, and assistance in case of identity fraud. Do your research and choose a reputable service with good reviews.
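
As a small illustration of the password advice above, here is a minimal Python sketch that generates a strong, random password for each account using the standard library's secrets module; the account names are placeholders, and a reputable password manager will do this for you automatically.

    # A minimal sketch of generating strong, unique passwords locally (Python 3.6+).
    import secrets
    import string

    def generate_password(length: int = 16) -> str:
        """Return a random password drawn from letters, digits, and punctuation."""
        alphabet = string.ascii_letters + string.digits + string.punctuation
        return "".join(secrets.choice(alphabet) for _ in range(length))

    # Generate a distinct password for each account you maintain.
    for account in ("bank", "email", "shopping"):
        print(account, generate_password())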

Conclusion

In conclusion, identity fraud is a serious crime that can have severe consequences for individuals and businesses. With our increasingly digital lives and the proliferation of personal information online, it’s crucial to take steps to protect yourself from falling victim to identity fraud. By safeguarding your personal information, using strong and unique passwords, monitoring your financial accounts, being cautious of phishing attempts, securing your devices, checking your credit reports, and considering identity theft protection services, you can significantly reduce your risk of identity fraud. Stay vigilant, be cautious, and take proactive measures to protect your personal information and financial well-being. Remember, prevention is key when it comes to identity fraud, and taking action now can save you from potentially devastating consequences in the future.

Whaling

Phishing attacks, a form of cyber attack where malicious actors trick individuals into revealing sensitive information, have become increasingly sophisticated in recent years. One type of phishing attack that has gained prominence is “whaling,” which targets high-level executives and individuals with access to valuable data or funds. Whaling attacks are highly targeted and personalized, making them difficult to detect and defend against. In this blog, we will explore the concept of whaling, the risks it poses to organizations, and how the implementation of security measures, such as Security+, can help protect against this advanced form of phishing.

Understanding Whaling:

Whaling, also known as CEO fraud or business email compromise (BEC), is a type of phishing attack that focuses on high-profile individuals, such as CEOs, CFOs, and other executives. Unlike traditional phishing attacks, which may cast a wide net and target a large number of individuals, whaling attacks are carefully crafted and highly targeted. Cybercriminals conduct thorough research on their victims, gathering information from publicly available sources, social media, and other online platforms to create a convincing facade. They then use this information to send fraudulent emails that appear to be from a trusted source, often posing as a high-ranking executive or a trusted business partner, in order to trick the victim into taking a specific action, such as transferring funds or revealing sensitive information.

Risks of Whaling:

Whaling attacks pose significant risks to organizations, as they can result in financial losses, reputational damage, and data breaches. High-level executives and individuals with access to critical data or financial resources are prime targets for whaling attacks, as their actions can have a significant impact on the organization. Whaling attacks often exploit the human element of cybersecurity, relying on social engineering techniques to manipulate victims into taking actions that may compromise security. The personalized and convincing nature of whaling attacks makes them difficult to detect using traditional security measures, and organizations need to implement specialized security measures to effectively mitigate the risks.

Whaling Security+:

Security+ is a well-known and widely used certification offered by CompTIA, which focuses on information security and validates the skills and knowledge required to secure IT systems and networks. Implementing Security+ best practices can help organizations protect against whaling attacks by enhancing email security, strengthening authentication methods, and providing employee training on identifying and responding to whaling attempts. Some key Security+ practices that can be applied to mitigate whaling risks include:

  • Email Authentication: Implementing technologies such as Domain-based Message Authentication, Reporting, and Conformance (DMARC), Sender Policy Framework (SPF), and DomainKeys Identified Mail (DKIM) can help verify the authenticity of incoming emails and detect spoofed or fraudulent emails.
  • Employee Training: Providing regular and comprehensive training to employees, especially high-level executives and individuals with access to sensitive data, on identifying and responding to whaling attempts can help increase awareness and reduce the likelihood of falling victim to such attacks.
  • Access Control: Implementing strong access control measures, such as multi-factor authentication (MFA), to limit access to critical systems and data can help prevent unauthorized access in case of a successful whaling attack.
  • Incident Response: Establishing a robust incident response plan that includes procedures for detecting, reporting, and responding to whaling attacks can help organizations quickly mitigate the impact of a successful attack and prevent further damage.
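
As an illustration of the email authentication controls listed above, here is a minimal Python sketch that checks whether a domain publishes SPF and DMARC records in DNS. It assumes the third-party dnspython package (pip install dnspython), and example.com is only a placeholder domain.

    # A minimal sketch: look up a domain's SPF and DMARC TXT records.
    import dns.resolver  # third-party: dnspython

    def txt_records(name):
        """Return all TXT strings published at the given DNS name."""
        try:
            answers = dns.resolver.resolve(name, "TXT")
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
            return []
        return [b"".join(r.strings).decode() for r in answers]

    def check_email_auth(domain):
        spf = any(r.startswith("v=spf1") for r in txt_records(domain))
        dmarc = any(r.startswith("v=DMARC1") for r in txt_records("_dmarc." + domain))
        print(domain, "SPF:", "present" if spf else "missing",
              "| DMARC:", "present" if dmarc else "missing")

    check_email_auth("example.com")  # placeholder domain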

Conclusion

Whaling attacks pose significant risks to organizations, and it is crucial to implement effective security measures to protect against this advanced form of phishing. Security+, with its focus on information security, can provide organizations with the necessary skills and knowledge to strengthen their defenses against whaling attacks. By implementing email authentication, providing employee training, enforcing access controls, and establishing incident response plans, organizations can significantly reduce both the likelihood and the impact of a successful whaling attack.

FREE ISC2 Certified in Cybersecurity Exam Voucher

Did you know that you can use these FREE exam codes to register for ISC2 Certified in Cybersecurity℠ (CC)?

See www.asmed.com/s1

Are you passionate about technology and interested in a career that offers limitless opportunities? You don’t need prior experience to start your journey in cybersecurity—just the drive to learn and succeed. With the global demand for cybersecurity professionals at an all-time high, now is the perfect time to explore this exciting and rewarding field.

Why Cybersecurity?

Cybersecurity is more than just a job; it’s a mission to protect data, systems, and people from digital threats. As technology evolves, so does the need for skilled professionals who can safeguard our interconnected world. By joining the cybersecurity workforce, you’re not only securing your future but also contributing to the safety and security of countless organizations and individuals worldwide.

ISC2's Commitment to Closing the Cybersecurity Workforce Gap

In an effort to close the cybersecurity workforce gap and bring more diversity into the field, ISC2 is offering a groundbreaking opportunity: FREE Certified in Cybersecurity (CC) Online Self-Paced Training and exams for one million people. This initiative is designed to empower individuals from all backgrounds to kickstart their cybersecurity careers and become part of the world’s largest association of certified cybersecurity professionals.

How to Start Your Journey

Participating in the One Million Certified in Cybersecurity program is simple. Here’s how you can get started:

  • Create an Account: If you don’t already have an ISC2 account, you’ll need to create one. If you’re already a member, simply sign in.
  • Complete Your Application: Fill out the ISC2 Candidate application form and select “Certified in Cybersecurity” as your certification of interest.
  • Access Your Training: Once your application is complete, you’ll become an ISC2 Candidate. This status gives you access to the Official ISC2 Certified in Cybersecurity Online Self-Paced Training and allows you to register for the free certification exam. Access all your resources on the Candidate Benefits page.
  • Take the Exam: After completing the training, take the certification exam. Upon passing, you’ll need to complete an application form and pay a U.S. $50 Annual Maintenance Fee (AMF).
  • Become Certified: Once you pass the exam and complete all the required steps, you’ll become a certified member of ISC2. As a member, you’ll join the world’s largest association of certified cybersecurity professionals and gain access to a wealth of professional development resources to support your career growth.

Ready to Get Started?

Don’t miss this unique opportunity to break into the cybersecurity field with the support of ISC2. Whether you’re looking to make a career change or advance your current skills, this program provides everything you need to succeed.

FREE AWS Exam Retake codes

Looking to advance your career and validate your skills? Get AWS Certified and stand out from other professionals with a recognized credential. Boost your confidence with an extra exam retake opportunity (AWPR9A223835), available only with Pearson VUE. Take advantage of this limited-time promotion to schedule and complete your exam between March 15 and May 31, 2023. If you need it, you’ll receive a free retake when you schedule and complete your exam on or before August 1, 2023.

To get started:

Click on the image or follow the link at the end of the page:

  • Login or create an AWS training account
  • Register for your exam
  • Apply the promo code during checkout to qualify for a free exam retake
  • Validate your skills and show your professional network that you stand out from other professionals.

Remote Desktop Protocol (RDP)

RDP is a Microsoft-designed technology that allows two computers to share a GUI using a network connection.

RDP is a proprietary technology initially built by Microsoft that allows two computers to exchange a graphical user interface (GUI) using a standardized network connection. This article explains the meaning of RDP, how it works, its benefits, and the challenges to consider. 

What Is Remote Desktop Protocol?

Remote work has existed for quite a while but has recently been brought into the limelight. The COVID-19 pandemic highlighted how much work employees could complete from the comfort of their homes. But it also exposed the limitations of remote work and its risks for the business. One of those limitations and risks is the need to duplicate the office environment at home, including sensitive files, documents, subscribed applications, and so on.

An employee who works with sensitive information might be barred from working remotely even when there is little alternative. This is where the Remote Desktop Protocol (RDP) comes into the picture.

Remote Desktop Protocol is a secure protocol for communication between computers over a network. It is a proprietary protocol built by Microsoft that presents the user at one desktop with a graphical user interface they can use to connect directly to another computer over a network connection. For this to work, the user must have RDP client software installed on the computer from which they access the other computer running the RDP server.

The Remote Desktop Protocol connection is a tool that allows users to connect to a Windows PC in another location over the internet. The user located far away can log in to the remote PC, view its desktop, access the files stored on it, and use peripheral devices such as the mouse and keyboard to control it, just as though they were sitting in front of it.

The Remote Desktop Protocol is not just a tool for remote workers to access their office desktops; it is also invaluable to network admins, who can diagnose and fix malfunctions that do not require physical intervention without being present. Remote employees, staff in transit or at a conference, support technicians, and network administrators can all use RDP for regular maintenance.

Microsoft developed RDP, but it can link different types of computers. The client (the PC the user works from) can run on multiple operating systems, including Windows, macOS, Unix, and Android. The server, on the other hand, is built for specific operating systems, mainly Windows.

How Does Remote Desktop Protocol Work?

The working principle of the RDP is quite simple and uncomplicated. Like other Remote Desktop software, RDP gives you remote control over another system. However, RDP is the most common protocol used for this purpose. 

How does RDP work?

Anything you control remotely, be it an object or, in this case, a computer system, must be able to receive some signal. Take, for instance, drones. For a drone to move in a direction or change course, it must receive radio signals from the controller in the hands of the pilot. Remote Desktop Protocol works by a broadly similar mechanism, but first we must understand what the client and server represent in an RDP connection.

  • Server: The server, otherwise known as the host, is the computer you want to connect to and is accessible from any location. It requires the RDP software to be installed on it.
  • Client: The client is the remote computer operated by the user who has the authorization to connect to and control the host desktop remotely. 

When using Remote Desktop Protocol, signals are sent over the internet rather than radio waves. These signals include input signals from the keyboard and mouse and output display signals from the server. RDP opens a particular channel through the Transmission Control Protocol (TCP or TCP/IP) and sends the information packets in an encrypted format to improve the network’s security. Currently, RDP uses the network port 3389 to transfer all data related to Remote Desktop access. 
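
As a quick illustration of that default port, here is a minimal Python sketch that tests whether a host is listening on TCP 3389. The hostname is a placeholder, and an open port only means something is listening there, not that an RDP session will actually succeed.

    # A minimal sketch: test whether TCP port 3389 (the RDP default) is reachable.
    import socket

    def rdp_port_open(host, port=3389, timeout=3.0):
        """Attempt a TCP connection and report whether the port accepted it."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    host = "office-pc.example.internal"  # placeholder host name
    print("RDP port 3389 reachable on", host, ":", rdp_port_open(host))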

Before the information is sent to the host, a transport driver is in charge of packaging the data. From there, Microsoft communications services direct it to the prepared Remote Desktop Protocol channel, where the operating system encrypts the data before it is transmitted.

Encrypting and transmitting data to the host computer over the internet and receiving the desktop display at every point can cause delays in use. Therefore, RDP requires fast internet services to adequately handle the workload while creating a pleasant experience for the user. 

When using Remote Desktop access, it is possible to add extra transport drivers for other network protocols as demand from remote users to connect to the host computer grows. This level of independence from the TCP/IP stack improves the performance of RDP and makes it an extensible protocol.

Properties of the Remote Desktop Protocol

The working principle of RDP is reflected in its properties. These include smart card authentication, the ability to display across several screens, bandwidth reduction, 128-bit RC4 encryption for data sent from the keyboard and mouse, sending audio from the host to the client computer, sharing the clipboard between computers, using local printers to print documents from the remote session, and so on.

With RDP, up to 64,000 different channels can be used to transmit data, and with the ability to reduce bandwidth, data transfer can still occur with sub-optimal network conditions. It is essential to know that some of these features are, however, only accessible in the enhanced sessions. With this unique set of properties, Remote Desktop Protocol has three primary use cases:

  • Individuals use it to reach their office PC remotely when working from home or part-time, or even their home PC when in transit or on holiday. 
  • It enables remote troubleshooting by a technician or a friend helping another person.
  • Network admins can use RDP for remote administration of IT infrastructure.

Benefits of Remote Desktop Protocol

Using the RDP protocol, one can gain the following benefits:
1. Makes device management easier
Managing a company’s or organization’s computer network is not an easy job; troubleshooting technical problems is just part of it. IT administrators must ensure that devices comply with company policy while remaining accessible to existing and potential new users or employees.
Sometimes computers malfunction due to hardware or software failure. Other times, remote users accessing the host server may unintentionally change settings that affect operation. Even if the server desktop is in a location that is not easily accessible, technical issues can still be fixed remotely.
IT admins also have to ensure that installed software remains updated. With Remote Desktop Protocol, the job of the IT admin is considerably less challenging and no longer tied to their presence in the office building. The admin can remotely control machines, change settings, manage permissions, limit access, and more, all in real time.
2. Simplifies data access and management
One intriguing benefit of RDP is the ease at which data can be accessed and managed. Remote Desktop Protocol does not require complex instructions and procedures to access data from a computer system or database.
Users can do so even from a phone with just their login details. The human mind can hold only so much information once it has left the work environment, yet situations arise where it is necessary to recall some vital data. Remote Desktop Protocol makes this not only possible but easy.
Beyond data access, the system also allows data to be managed remotely. Managers or human resources staff can monitor the information being entered into the database at leisure, ensure financial records are accurate and in sync with production or sales, and discreetly track employees’ working hours.
3. Supports remote working
In current times, it is not unheard of to find a company with more than 70% of its staff working from home. Remote work was once mostly seen among software developers but now extends to all kinds of workers: content creators, personal assistants, research assistants, marketers, product designers, and so on. Some workers may visit the office building weekly or on random days. RDP makes it easier for a company to have remote employees while maintaining high standards of quality and efficiency.
4. Enforces maximum security
Remote Desktop Protocol caters to network security in several ways. With RDP, dedicated professionals are in charge of maintaining the integrity of the server, which includes ensuring protection against the latest security threats. In addition, every piece of information sent across the network is encrypted.
This protects against attackers who may try to intercept vital data as it is sent over the internet. RDP also safeguards against data loss, not just because screens can be shared but because files can easily be recovered from backups. Lastly, sensitive information such as financial records or confidential client details can be marked off and restricted so that it cannot be viewed by just any remote employee.
5. Enables cost-savings
Another benefit of RDP is its cost-effectiveness. It saves money for any company or individual employing the technology. Devices that have Remote Desktop Protocol enabled can easily be repaired by technicians from afar. This alone reduces the maintenance cost of operating a device.
A company that invests in Remote Desktop Protocol can expect a healthy return on investment. Having more work done remotely and perhaps some full-time remote staff saves time and energy usually expended on transit. This maximizes productivity and increases the ROI of the company.
6. Works with multiple operating systems
One challenge encountered again and again with computer systems is operating system compatibility. Many software programs are developed every day, yet the majority are selective about the type of device they can effectively run on. Remote Desktop Protocol may not be compatible with every operating system in the book, but it goes a long way. The RDP server was previously limited to Windows-based systems but now includes macOS. Clients can access the server from multiple platforms, including Android and iOS mobile phones.
7. Increases productivity
Remote Desktop Protocol can go a long way toward increasing the productivity of any enterprise that uses the technology, from large multinational corporations to small businesses and startups. The work environment is one of the primary factors that influence an employee’s productivity. Employees outfitted with the latest technological provisions like RDP will enjoy exploring such tools. Also, someone who is not confined to the four walls of an office, or the three walls of a cubicle as the case may be, is more creative and expressive in carrying out tasks.
Some ways RDP increases productivity include:
  • Every team member uses the best operating system with high performance, irrespective of the type of computer hardware they may have in the office.
  • Field employees can have the same level of access to data as their colleagues and can also contribute information directly to the company’s database.
  • Remote users can easily access company files stored on the server hardware without much expertise. This is in contrast to cloud storage, which may prove challenging to navigate.
  • Multiple applications on the host server are made available to remote users to improve their ability to work on projects.
  • Employees can have a say in their working environment, which ultimately improves job outlook, job satisfaction, and productivity.

Challenges of Remote Desktop Protocol

Remote Desktop Protocol is not without a few challenges. These include:

  • The risk of downtime: RDP inadvertently puts most of its users at risk if there is disruption at a significant source. This means that downtime can be abrupt when it occurs, and the implications are far-reaching across every RDP client in their various locations. 
  • Multiple causes of interruption: Downtime could result from a break in connectivity, system failure, or an outage in the network services of the company providing the service. Downtime can also originate from the host computer; an event such as hardware theft or destruction affects every other user. 
  • Network dependency: As with the points above, the RDP setup works only as long as all remote PCs have solid and dependable internet connections available to them. If not, the system is entirely out of reach. Further, remote employees can experience latency issues if they have a slow internet connection.
  • Bottlenecks: Depending on the host system’s power and how many users are trying to access it simultaneously, bottlenecks can form and reduce performance.
  • The need for expert knowledge: The RDP administrator must know the system thoroughly and be promptly contactable if and when issues occur during ordinary working hours. Without expert assistance on standby in the event of a system outage, the consequences could be critical.
  • Increased security vulnerabilities: Remote access is a double-edged sword for system security. Although RDP comes with data encryption, access controls, and activity logging, it introduces additional vulnerabilities that could be used as attack points, such as susceptibility to hash attacks and computer worms.

You’ll notice that, for instance, it’s challenging to keep tabs on everyone accessing your system remotely. You can’t physically authenticate all the users. That makes it easy for attackers to infiltrate the system using genuine accounts and then leave unnoticed. In other cases, users leverage compromised VPN services, which hackers then manage to take advantage of to gain unauthorized access. 

Despite these challenges, RDP can be useful for administering remote work management and access, especially for companies using an on-premise IT infrastructure.

Takeaway

Remote Desktop Protocol has become the standard for sharing desktops and other GUI interfaces across networked Microsoft systems. Enterprises should keep in mind that heavy bandwidth utilization may impact performance, and that RDP carries security risks and a few other cons. Even so, it remains a compelling solution in the era of remote and hybrid working.

Multi-Protocol Label Switching (MPLS)

MPLS – short for Multi-Protocol Label Switching – is a now-aging network routing system that transfers data between nodes using labels that denote predetermined pathways instead of network addresses that refer to the nodes themselves. This article explains how MPLS works, its types, and the core architecture. 

What Is MPLS (Multi-Protocol Label Switching)?

MPLS – short for Multi-Protocol Label Switching – is defined as a now-aging type of network routing system that transfers data between nodes using labels that denote predetermined pathways instead of network addresses that refer to the nodes themselves. 

Since its inception in the 1960s, the internet has evolved in more ways than was ever imagined. Amazingly, the internet is still changing, bringing us closer and closer to newer technologies yet undiscovered. Data transfer over the internet has evolved as well. Data transfer is perhaps the most critical function of the internet in connecting millions of computers worldwide.

Traditionally, the standard Internet Protocol (IP) and the Transmission Control Protocol (TCP) have regulated how data packets are moved from one point to the other. Under these protocols, each router must make an independent decision about every data packet and determine where the network should send it. Multi-Protocol Label Switching was created to circumvent this bottleneck in data transfer across the internet.

Understanding multi-protocol label switching

Multi-Protocol Label Switching, or MPLS, is a traffic routing technique used in communication networks that relies on labels, rather than addresses, to move data from one router to the next. Ideally, addresses identify endpoints for each data packet. Labels, however, do not focus on the destination but instead on routes and pathways that have already been established. 

MPLS is a networking technology that directs traffic consisting of data packets along the shortest path described by the labels. 

Multi-Protocol Label Switching is an Internet Protocol (IP) routing technique that can work on packets covering more than one network protocol and, as such, is referred to as a multi-protocol system. Multi-Protocol Label Switching therefore supports technologies such as Asynchronous Transfer Mode (ATM), Frame Relay, DSL, etc. 

The MPLS transfer protocol primarily controls the forwarding of packets over a private Wide Area Network (WAN), for example, a company with several remote outlets or branches connected to the main center. It resolves the issue of slow data transfer and downtime when using the internet but remains a scalable and protocol-independent technology. 

When comparing Multi-Protocol Label Switching with other data transfer methods, MPLS is a technology that increases the speed at which data flows across a network. This is simply because the need for looking up complex routing tables at every node has been eliminated. Previously, each node in the local internet mesh served as a router determining the path for incoming packets by searching through complex tables. 

Multi-Protocol Label Switching was initially released in 2001 by the Internet Engineering Task Force (IETF), which published both the architecture of the technology and its label stack encoding. MPLS performed similarly to ATM switching as a faster routing technique than the conventional method, but without ATM’s setbacks. MPLS also has the advantages of out-of-band control and support for traffic engineering. 

How Does MPLS Work?

Multi-Protocol Label Switching works by forwarding incoming packets toward their destination based on the information written on their labels. It does not try to guess the address but uses labels to identify an established path for the data packet. 

MPLS works in a manner that is broadly similar to IP routing techniques. When a regular router receives an incoming data packet, the only information on the packet is the destination IP address, without further details on the routes or manner in which the network should transport the packet. In MPLS, the label contains information about the routes the data packet should take. This eliminates the cumulative delay caused by routers ‘thinking’ of the best possible course.

When a data packet enters a Multi-Protocol Label Switching network, it is given a specific forwarding Class of Service (CoS), also called Forwarding Equivalence Class (FEC). The class of service forms a part of the label, showing what type of information is contained in the data packet, be it real-time data like VoIP or emails. With this label, the routers can reserve the fastest paths with the least latency to highly sensitive real-time information like Voice over Internet Protocol (VoIP) and video conferencing.

When a data packet enters an MPLS network, the entry node is called a Label Edge Router or ingress node. The class of service is then added, specifying the type of information in the packet and its priority level. In MPLS, there are predetermined, unidirectional pathways linking routers across the network, called Label Switched Paths (LSPs). The network can only forward data packets after the LSP has been established and the ingress node has encapsulated the packet within it. 

Other nodes within the network are called label switch routers; these are transit nodes ensuring continuous data flow. The information in the packet label guides the transit nodes, and stops are minimized. After passing through the ingress node and transit nodes, the last router is called an egress node, and it removes the label so the packet address can be read and the packet delivered to its destination. 

MPLS uses a networking approach that combines Layer 2 (the data link layer) and Layer 3 (the IP layer) of the Open Systems Interconnection (OSI) model. This is why MPLS is generally considered a Layer 2.5 networking protocol, borrowing features from both for data transfer across a network. Its functionality is enabled by the following.

Components of the MPLS label:

  • Label/label value: It is a 20-bit long field containing the information routers read in directing the data packet.
  • Traffic class field: This is a 3-bit long part of the label used to set the Quality of Service and explicit congestion notification. 
  • Bottom of the stack: Labels can be stacked on top of each other; the topmost label governs forwarding and is replaced by the labels underneath it until the transfer is complete. A 1-bit flag marks the last label in an MPLS header, referred to as the bottom of the stack.
  • Time to Live (TTL): This is an 8-bit field that decreases in value each time the packet hops and therefore limits the packet’s lifespan. 
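
To make the field layout above concrete, here is a minimal Python sketch that packs and unpacks a single 32-bit MPLS label entry (20-bit label, 3-bit traffic class, 1-bit bottom-of-stack flag, 8-bit TTL); the example values are arbitrary.

    # A minimal sketch: encode and decode one 32-bit MPLS label entry.
    import struct

    def pack_label(label, tc, bottom, ttl):
        """Pack the four fields into 4 network-order bytes."""
        value = ((label & 0xFFFFF) << 12) | ((tc & 0x7) << 9) | ((int(bottom) & 0x1) << 8) | (ttl & 0xFF)
        return struct.pack("!I", value)

    def unpack_label(data):
        """Unpack 4 bytes back into the label fields."""
        (value,) = struct.unpack("!I", data)
        return {
            "label": value >> 12,
            "traffic_class": (value >> 9) & 0x7,
            "bottom_of_stack": bool((value >> 8) & 0x1),
            "ttl": value & 0xFF,
        }

    entry = pack_label(label=1034, tc=5, bottom=True, ttl=64)
    print(unpack_label(entry))  # {'label': 1034, 'traffic_class': 5, 'bottom_of_stack': True, 'ttl': 64}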

Types of MPLS

MPLS technology can be of three types. These are:

1. Layer 2 point-to-point

Layer 2 point-to-point is a type of MPLS suitable for companies that need high-bandwidth connections linking a few locations together while maintaining cost-effectiveness. Practical users of Layer 2 point-to-point include network operators whose primary network infrastructure is built on Ethernet and Layer 2. 

Layer 2 point-to-point is an excellent alternative to high bandwidth leased lines. It is not bound by internet protocol and can send data running in the Local Area Network (LAN) directly to the WAN without needing routers to change the packets to be compatible with layer 3 of the OSI model. Here are its pros and cons:

  • Pros: With this type of MPLS, the need to manage complex routing tables has been eliminated. Also, it is cost-effective, as WAN connections can be directly linked with layer 2 switches, eliminating the need for expensive routers. 
  • Cons: It is challenging to get circuits of less than 10Mbps in bandwidth as providers only sell high bandwidth circuits. Further, it does not support point-to-multipoint connections.

2. Layer 2 Virtual Private LAN Services (VPLS)

Layer 2 Virtual Private LAN Services (also known as Layer 2 VPLS) is now becoming more sought after for its ability to provide Ethernet services. Layer 2 VPLS combines the Multi-Protocol Label Switching with the Ethernet and extends the benefits to end customers and carriers. 

For over 20 years, LAN has predominantly used Ethernet switching for connectivity, while the carrier network relies on internet protocol routing. Internet protocol not only provides internet access but also provides virtual private network (VPN) access. 

Ethernet, however, has continued to be widely used over various bandwidths because it requires little technical knowledge and remains more affordable. Ethernet is now the infrastructure of choice in both LAN and WAN. Virtual Private LAN Services (VPLS) is an ideal protocol that can provide its users with Multi-Protocol Label Switching and Ethernet, therefore diverting all the traffic in Layer 2 directly to the wide area network. In addition, VPLS remains simple, easy, affordable, and highly scalable. Here are its pros and cons:

  • Pros: It provides a transparent interface that does not require investment in hardware such as routers to upgrade bandwidth. Traffic is labeled with a MAC address as opposed to an IP address, and like all switched networks, Layer 2 VPLS offers lower latency periods than a router network will offer. Configuration and deployment are straightforward, even for newly added sites.
  • Cons: Layer 2 VPLS is still used only in some parts of the world and has not attained global reach, which limits where it can be deployed. The absence of routers in the hardware infrastructure places Layer 2 VPLS at higher risk of broadcast storms. Monitoring is complex due to a lack of visibility from the providers.

3. Layer 3 IP/VPN

Layer 3 IP/VPN is a type of MPLS network most suitable for large enterprises covering multiple branches over a vast land mass. This includes corporations with offices spread across the globe, industries located in more than one country, etc. 

Layer 3 IP/VPN is a service that is a natural continuation of the ATM and legacy frame relay models. Layer 3 IP/VPN transports data packets based on labels attached as the packets enter the ingress nodes. It is therefore highly suitable for merging companies because of its easy scalability and rapid deployment. 

It is also a good fit for companies migrating from the ATM to IP or from the inflexible frame relay to IP, and also for those preparing for voice and data convergence. Layer 3 IP/VPN makes it possible for all the sites in the network to have a blanket class of service prioritization based on the type of traffic (e.g., VoIP). Here are its pros and cons:

  • Pros: Layer 3 IP/VPN is highly scalable and helpful when considering fast deployment. It supports quality of service (QoS) for differentiation of traffic types. Unlike ATM, it does not need permanent virtual circuits yet provides the same services. 
  • Cons: Changing network settings such as QoS takes time and involves sending change requests. Layer 3 IP/VPN is not suitable for small businesses. It offers only IP services, and data must be converted from Layer 2 to Layer 3 before it can be used on the network.

Architecture of MPLS

MPLS architecture comprises a combination of 2 OSI layers – i.e., the second and third layers. This means that in an MPLS network, there are unique steps that a data packet must follow to get it across the MPLS domain. These steps include: 

  • Label creation and distribution based on the FEC, dispersed among the routers using the Label Distribution Protocol (LDP). 
  • Creation of tables at each router using the Label Forwarding Information Base (LFIB). The LFIB can be regarded as analogous to the routing table employed in the IP network.
  • Label switched path creation. 
  • Label insertion/table lookup of data packets entering the ingress router. 

Packet forwarding occurs at every router by swapping the labels until the bottom-of-stack label is reached at the egress router. The primary architectural point of Multi-Protocol Label Switching is that labels carrying additional forwarding information can be attached to data packets, beyond what the routers previously had to work with. 
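
To make the swap-until-bottom-of-stack behavior concrete, here is a toy Python simulation of forwarding along a single LSP. The router names, label values, and table entries are purely illustrative, not a real LFIB.

    # A toy sketch of label operations along one LSP: push at the ingress,
    # swap at each transit LSR, pop before delivery at the egress.
    LFIB = {
        "ingress": {None: (100, "lsr1")},    # ingress maps the FEC to label 100
        "lsr1":    {100: (200, "lsr2")},     # transit router swaps 100 -> 200
        "lsr2":    {200: (None, "egress")},  # label is popped before the egress
    }

    def forward(packet):
        router, label = "ingress", None
        while router != "egress":
            out_label, next_hop = LFIB[router][label]
            action = "push" if label is None else ("pop" if out_label is None else "swap")
            print(router + ": " + action, label, "->", out_label, "| next hop:", next_hop)
            router, label = next_hop, out_label
        print("egress: deliver '" + packet + "' using a normal IP lookup")

    forward("payload")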

Apart from this, you must understand the five elements of MPLS to grasp the architecture of the network. 

1. Ingress Label Edge Router (LER)

The ingress label edge router is located on the periphery and indicates a point of entry for the data packet from its source. Ingress label router imposes a label and forwards the packets to a destination. Therefore, the ingress edge router is responsible for initiating the packet forwarding operation and does this just after setting up the label switched path (LSP) and assigning proper labels. 

2. Forward Equivalence Class (FEC)

The Forward Equivalence Class is a group of data packets related to one application that is forwarded along the same switched path, receiving the same treatment across the same route. Therefore, all the packets of that class bear the same service requirement. Each type of data traffic is given a new forward equivalence class, assigned as soon as the packet enters the MPLS cloud. 

3. Label Switch Router (LSR)

The Label Switch Router is a part of the MPLS that exchanges inbound packets with outbound ones. It also performs functions such as label removal or disposition, label addition or imposition, and label swapping. In label swapping, the label switch router replaces the topmost label in a stack with the value of an outgoing label. This router also separates data streams from the access network into the core of the MPLS, into different FECs. 

4. Label Switch Path (LSP)

The Label Switch Path (LSP) is a direct pathway in the Multi-Protocol Label Switching (MPLS) enabled network that is used by a packet moving from its source to the destination. LSP is a unidirectional path that allows packets to move in only one direction. The packet passes through several intermediate routers between the origin and destination. 

A labeled switched path is necessary for every MPLS network for data transfer to occur. A typical scenario involves a data packet coming in from the ingress node (LER) and migrating through different nodes through the shortest possible path, using an established LSP before getting to the egress node. 

5. Egress Label Edge Router (LER)

Like the ingress LER, the Egress Label Edge Router (LER) is a router located on the MPLS network’s periphery. It serves as a point of exit for data packets that have arrived at their destination. Therefore, it removes labels (label disposition) and forwards the IP packet to the final destination. The egress LER uses a bottom-of-stack indicator to guide its function. This means it will only dispose of a label if the label on top of the stack is identified as a bottom label. 

Multi-Protocol Label Switching is also separated into the control and forwarding planes: 

  • MPLS control plane: The responsibility of the control plane is to create the label switched path. It also shares routing information among the routers and integrates that data to create the LFIB. 
  • MPLS forwarding plane: The forwarding plane directs packets through routers based on their labels. It uses the information in the LFIB.

Takeaway

While MPLS remains foundational to network infrastructure, its usage is waning. According to a 2021 study by Telegeography, implementation of MPLS decreased by 24% between 2019 and 2020. During this time, the adoption of SD-WAN increased, speaking to the growing preference for more agile and flexible software-based technologies.

On the other hand, MPLS involves expensive but highly reliable infrastructure which promises excellent performance, especially for real-time data transfers. As a result, certain enterprises may want to hold onto their MPLS investments and have them co-exist with new technologies. 

Load Balancing

Load Balancing Definition: Load balancing is the process of distributing network traffic across multiple servers. This ensures no single server bears too much demand. By spreading the work evenly, load balancing improves application responsiveness. It also increases availability of applications and websites for users. Modern applications cannot run without load balancers. Over time, load balancers have added additional capabilities, including security and application acceleration.

About Load Balancers

As an organization meets demand for its applications, the load balancer decides which servers can handle that traffic. This maintains a good user experience.

Load balancers manage the flow of information between the server and an endpoint device (PC, laptop, tablet or smartphone). The server could be on-premises, in a data center or the public cloud. The server can also be physical or virtualized. The load balancer helps servers move data efficiently, optimizes the use of application delivery resources and prevents server overloads. Load balancers conduct continuous health checks on servers to ensure they can handle requests. If necessary, the load balancer removes unhealthy servers from the pool until they are restored. Some load balancers even trigger the creation of new virtualized application servers to cope with increased demand.

Traditionally, load balancers consist of a hardware appliance. Yet they are increasingly becoming software-defined. This is why load balancers are an essential part of an organization’s digital strategy.

History of Load Balancing

Load balancing got its start in the 1990s as hardware appliances distributing traffic across a network. Organizations wanted to improve accessibility of applications running on servers. Eventually, load balancing took on more responsibilities with the advent of Application Delivery Controllers (ADCs). They provide security along with seamless access to applications at peak times.

ADCs fall into three categories: hardware appliances, virtual appliances (essentially the software extracted from legacy hardware) and software-native load balancers. As computing moves to the cloud, software ADCs perform similar tasks to hardware. They also come with added functionality and flexibility. They let an organization quickly and securely scale up its application services based on demand in the cloud. Modern ADCs allow organizations to consolidate network-based services. Those services include SSL/TLS offload, caching, compression, intrusion detection and web application firewalls (WAF). This creates even shorter delivery times and greater scalability.

Load Balancing and SSL

Secure Sockets Layer (SSL) is the standard security technology for establishing an encrypted link between a web server and a browser. SSL traffic is often decrypted at the load balancer. When a load balancer decrypts traffic before passing the request on, it is called SSL termination. The load balancer saves the web servers from having to expend the extra CPU cycles required for decryption. This improves application performance.

However, SSL termination comes with a security concern. The traffic between the load balancers and the web servers is no longer encrypted. This can expose the application to possible attack. However, the risk is lessened when the load balancer is within the same data center as the web servers.

Another solution is the SSL pass-through. The load balancer merely passes an encrypted request to the web server. Then the web server does the decryption. This uses more CPU power on the web server. But organizations that require extra security may find the extra overhead worthwhile.

Load Balancing and Security

Load balancing plays an important security role as computing moves evermore to the cloud. The off-loading function of a load balancer defends an organization against distributed denial-of-service (DDoS) attacks. It does this by shifting attack traffic from the corporate server to a public cloud provider. DDoS attacks represent a large portion of cybercrime as their number and size continue to rise. Hardware defense, such as a perimeter firewall, can be costly and require significant maintenance. Software load balancers with cloud offload provide efficient and cost-effective protection.

Load Balancing Algorithms

There is a variety of load balancing methods, each using a different algorithm best suited to a particular situation (a short sketch of three of them follows this list).

  • Least Connection Method — directs traffic to the server with the fewest active connections. Most useful when there are a large number of persistent connections in the traffic unevenly distributed between the servers.
  • Least Response Time Method — directs traffic to the server with the fewest active connections and the lowest average response time.
  • Round Robin Method — rotates servers by directing traffic to the first available server and then moves that server to the bottom of the queue. Most useful when servers are of equal specification and there are not many persistent connections.
  • IP Hash — the IP address of the client determines which server receives the request.
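
Here is a minimal Python sketch of three of the methods above: round robin, least connection, and IP hash. The server addresses are placeholders, and a real load balancer would track connection counts from live traffic rather than a static dictionary.

    # A minimal sketch of three server-selection methods.
    import hashlib
    import itertools

    servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]  # placeholder backend pool

    # Round robin: cycle through the pool in order.
    _rr = itertools.cycle(servers)
    def round_robin():
        return next(_rr)

    # Least connection: pick the server currently handling the fewest connections.
    active_connections = {s: 0 for s in servers}
    def least_connection():
        return min(active_connections, key=active_connections.get)

    # IP hash: a client address always maps to the same server.
    def ip_hash(client_ip):
        digest = hashlib.sha256(client_ip.encode()).hexdigest()
        return servers[int(digest, 16) % len(servers)]

    print([round_robin() for _ in range(4)])  # cycles through the pool
    print(ip_hash("203.0.113.7"))             # deterministic choice per client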

Load balancing has become a necessity as applications become more complex, user demand grows and traffic volume increases. Load balancers allow organizations to build flexible networks that can meet new challenges without compromising security, service or performance.

Load Balancing Benefits

Load balancing can do more than just act as a network traffic cop. Software load balancers provide benefits like predictive analytics that determine traffic bottlenecks before they happen. As a result, the software load balancer gives an organization actionable insights. These insights are key to automation and can help drive business decisions.

In the seven-layer Open System Interconnection (OSI) model, network firewalls are at levels one to three (L1-Physical Wiring, L2-Data Link and L3-Network). Meanwhile, load balancing happens between layers four to seven (L4-Transport, L5-Session, L6-Presentation and L7-Application).

Load balancers have different capabilities, which include:

  • L4 — directs traffic based on data from network and transport layer protocols, such as IP address and TCP port.
  • L7 — adds content switching to load balancing. This allows routing decisions based on attributes like HTTP header, uniform resource identifier, SSL session ID and HTML form data.
  • GSLB — Global Server Load Balancing extends L4 and L7 capabilities to servers in different geographic locations.
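
As a rough illustration of the difference between an L4 and an L7 decision, here is a small Python sketch; the pool names and routing rules are invented for the example.

    # A minimal sketch: L4 routes on the TCP port alone, L7 inspects the HTTP path.
    def l4_route(dst_port):
        """Transport-layer decision: only the destination port is visible."""
        return "https-pool" if dst_port == 443 else "http-pool"

    def l7_route(http_path):
        """Application-layer decision: the request URI can steer the traffic."""
        if http_path.startswith("/api/"):
            return "api-pool"
        if http_path.startswith("/static/"):
            return "cdn-pool"
        return "web-pool"

    print(l4_route(443), l7_route("/api/v1/users"))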

More enterprises are seeking to deploy cloud-native applications in data centers and public clouds. This is leading to significant changes in the capability of load balancers. In turn, this creates both challenges and opportunities for infrastructure and operations leaders.

IMAP Protocol

IMAP stands for Internet Message Access Protocol. It is an application layer protocol used to retrieve emails from a mail server. Along with POP3, it is one of the most commonly used protocols for retrieving email.

It also follows the client/server model. On one side, we have an IMAP client, which is a process running on a computer. On the other side, we have an IMAP server, which is also a process running on another computer. Both computers are connected through a network.

The IMAP protocol runs on top of the TCP/IP transport layer, which means it implicitly relies on TCP’s reliability. The IMAP server listens on port 143 by default (though this port number can be changed), and the TCP connection is established between the IMAP client and the IMAP server on that port.

By default, there are two ports used by IMAP:

  • Port 143: It is a non-encrypted IMAP port.
  • Port 993: This port is used when the IMAP client wants to connect securely over TLS (see the imaplib sketch after this list).
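
As referenced in the list above, here is a minimal sketch that connects over the encrypted port 993 using Python's standard imaplib module and counts unread messages; the host and credentials are placeholders.

    # A minimal sketch: connect to an IMAP server over port 993 (IMAP over TLS).
    import imaplib

    HOST = "imap.example.com"   # placeholder mail server
    USER = "user@example.com"   # placeholder account
    PASSWORD = "app-password"   # placeholder credential

    with imaplib.IMAP4_SSL(HOST, 993) as imap:
        imap.login(USER, PASSWORD)
        imap.select("INBOX", readonly=True)         # open the mailbox read-only
        status, data = imap.search(None, "UNSEEN")  # IDs of unread messages
        print("Unread message IDs:", data[0].split())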

Why should we use IMAP instead of POP3 protocol?

POP3 has become the most popular protocol for accessing TCP/IP mailboxes. It implements the offline mail access model, which means that mails are retrieved from the mail server onto the local machine and then deleted from the mail server. Today, millions of users use the POP3 protocol to access their incoming mail.

The offline mail access model has limits. In an ideal world we would prefer the online model, but in the online model we need to be connected to the internet at all times. The biggest problem with offline access using POP3 is that the mails are permanently removed from the server, so multiple computers cannot access them. The solution to this problem is to store the mails on the remote server rather than on the local machine. POP3 also faces another issue, namely data security and safety.

The answer to both problems is the disconnected access model, which provides the benefits of both online and offline access. In the disconnected access model, the user can retrieve mail for local use as in the POP3 protocol, and the user does not need to be connected to the internet continuously. However, the changes made to the mailboxes are synchronized between the client and the server, and the mail remains on the server so that different applications can access it in the future.

When developers recognized these benefits, they made some attempts to implement the disconnected access model using the POP3 commands that provide the option to leave mails on the server. This works, but only to a limited extent; for example, keeping track of which messages are new or old becomes an issue when messages are both retrieved and left on the server. So POP3 lacks some features that are required for a proper disconnected access model.

In the mid-1980s, the development began at Stanford University on a new protocol that would provide a more capable way of accessing the user mailboxes. The result was the development of the interactive mail access protocol, which was later renamed as Internet Message Access Protocol.

IMAP History and Standards

The first version of IMAP to be formally documented as an internet standard was IMAP version 2, published in RFC 1064 in July 1988. It was updated in RFC 1176 in August 1990 while retaining the same version number. A new document then defined version 3, known as IMAP3, in RFC 1203, published in February 1991. However, IMAP3 was never accepted by the marketplace, so people kept using IMAP2. An extension to the protocol, called IMAP2bis, was later created, adding support for Multipurpose Internet Mail Extensions (MIME) to IMAP.

This was a very important development due to the usefulness of MIME. Despite this, IMAP2bis was never published as an RFC, which may be due to the problems associated with IMAP3. In December 1994, IMAP version 4 (IMAP4) was published in two RFCs: RFC 1730, describing the main protocol, and RFC 1731, describing the authentication mechanisms for IMAP4. IMAP4 is the current version of IMAP and is widely used today. It continues to be refined; its latest revision, known as IMAP4rev1, was defined in RFC 2060 and most recently updated in RFC 3501.

IMAP Features

IMAP was designed with a specific purpose: to provide a more flexible way for the user to access the mailbox. It can operate in any of three modes, i.e., online, offline, and disconnected mode. Of these, the offline and disconnected modes are of interest to most users of the protocol.

The following are the features of an IMAP protocol:

  • Access and retrieve mail from remote server: The user can access the mail from the remote server while retaining the mails in the remote server.
  • Set message flags: The message flag is set so that the user can keep track of which message he has already seen.
  • Manage multiple mailboxes: The user can manage multiple mailboxes and transfer messages from one mailbox to another, organizing them into various categories, which is useful for those working on multiple projects.
  • Determine information prior to downloading: The client can examine message information and decide whether or not to retrieve a mail before downloading it from the mail server.
  • Downloads a portion of a message: It allows you to download a portion of a message, such as one body part of a MIME multipart message. This can be useful when a message combines a short text element with large multimedia files.
  • Organize mails on the server: In the case of POP3, the user is not allowed to manage the mails on the server. With IMAP, on the other hand, users can organize the mails on the server according to their requirements; for example, they can create, delete, or rename mailboxes on the server.
  • Search: Users can search for the contents of the emails.
  • Check email-header: Users can also check the email-header prior to downloading.
  • Create hierarchy: Users can also create the folders to organize the mails in a hierarchy.

IMAP General Operation

CompTIA Network+ Certification
  • IMAP is a client-server protocol, like POP3 and most other TCP/IP application protocols. The IMAP4 server must reside on the machine where the user mailboxes are located; in contrast, POP3 does not necessarily require the same physical server that provides the SMTP service. With IMAP, therefore, the mailbox must be accessible both to SMTP for incoming mail and to IMAP for retrieval and modification.
  • IMAP uses the Transmission Control Protocol (TCP) for communication, ensuring that data is delivered and received in order.
  • The IMAP4 server listens on a well-known port, port number 143, for incoming connection requests from IMAP4 clients.

Let's understand the IMAP protocol through a simple example.

CompTIA Network+ Certification

The IMAP protocol synchronizes all devices with the main server. Suppose we have three devices, a desktop, a mobile phone, and a laptop, as shown in the figure above. If all of these devices access the same mailbox, the mailbox is synchronized across all of them. Synchronization means that when a mail is opened on one device, it is marked as opened on all the other devices, and when a mail is deleted, it is deleted from all the other devices as well. In IMAP we can see all the standard folders, such as Spam, Inbox, and Sent, and we can also create our own custom folders, which then become visible on all the other devices.
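A minimal sketch of these operations using Python's standard imaplib module is shown below; the host, credentials, and folder name are placeholders, and the TLS port 993 is used here rather than the plain-text port 143.

```python
import imaplib

# Connect over TLS (port 993); plain IMAP would use imaplib.IMAP4("imap.example.com", 143).
conn = imaplib.IMAP4_SSL("imap.example.com", 993)
conn.login("alice", "secret")            # placeholder credentials

conn.select("INBOX")                     # open a mailbox that stays on the server

# Search for unread messages and peek at their headers without
# setting the \Seen flag or removing anything from the server.
status, data = conn.search(None, "UNSEEN")
message_numbers = data[0].split()
for num in message_numbers:
    status, msg_data = conn.fetch(num, "(BODY.PEEK[HEADER])")
    print(msg_data[0][1][:200])          # first part of the raw header

# Server-side organization: create a custom folder and mark one message as read.
conn.create("Projects")                  # placeholder folder name
if message_numbers:
    conn.store(message_numbers[0], "+FLAGS", "\\Seen")

conn.logout()
```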

Posted by & filed under CompTIA Network+, MICROSOFT MTA NETWORKING.

HTTP

The Hypertext Transfer Protocol (HTTP) is an application-level protocol for distributed, collaborative, hypermedia information systems. It has been the foundation of data communication for the World Wide Web since 1990. HTTP is a generic, stateless protocol that can also be used for other purposes through extensions of its request methods, error codes, and headers.

Basically, HTTP is a TCP/IP-based communication protocol used to deliver data (HTML files, image files, query results, and so on) on the World Wide Web. The default port is TCP 80, but other ports can be used as well. It provides a standardized way for computers to communicate with each other. The HTTP specification defines how clients construct and send requests to the server, and how servers respond to those requests.

Basic Features

There are three basic features that make HTTP a simple but powerful protocol:

  • HTTP is connectionless: The HTTP client (a browser, for instance) initiates an HTTP request and then waits for the response. The server processes the request and sends a response back, after which the client disconnects. Client and server therefore know about each other only during the current request and response; further requests are made over new connections, as if client and server were new to each other.
  • HTTP is media independent: Any type of data can be sent over HTTP as long as both the client and the server know how to handle the content. Client and server simply specify the content type using an appropriate MIME type.
  • HTTP is stateless: As noted above, HTTP is connectionless, and this follows from its stateless design. The server and client are aware of each other only during the current request; afterwards, both forget about each other. Because of this, neither the client nor the server can retain information between different requests across web pages.

HTTP/1.0 uses a new connection for each request/response exchange, whereas an HTTP/1.1 connection may be reused for one or more request/response exchanges.

Basic Architecture

The following diagram shows a very basic architecture of a web application and depicts where HTTP sits:

CompTIA Network+ Certification

The HTTP protocol is a request/response protocol based on a client/server architecture, where web browsers, robots, search engines, and so on act as HTTP clients and the web server acts as the server.

Client

The HTTP client sends a request to the server in the form of a request method, URI, and protocol version, followed by a MIME-like message containing request modifiers, client information, and possible body content over a TCP/IP connection.

Server

The HTTP server responds with a status line, including the message’s protocol version and a success or error code, followed by a MIME-like message containing server information, entity meta information, and possible entity-body content.
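To see one such request/response exchange in practice, here is a minimal sketch using Python's standard http.client module; example.com is just a placeholder host, and production code would normally use a higher-level HTTP library.

```python
import http.client

# One request/response exchange: the client builds a request line and headers,
# and the server answers with a status line, headers, and an optional body.
conn = http.client.HTTPConnection("example.com", 80)   # placeholder host, default HTTP port
conn.request("GET", "/", headers={"Accept": "text/html"})

response = conn.getresponse()
print(response.version, response.status, response.reason)  # e.g. 11 200 OK
print(response.getheaders())                               # server and entity metadata
body = response.read()                                      # entity body
conn.close()

# HTTP is stateless: this second request knows nothing about the first one.
conn = http.client.HTTPConnection("example.com", 80)
conn.request("GET", "/about")
print(conn.getresponse().status)
conn.close()
```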

Posted by & filed under CompTIA Network+, MICROSOFT MTA NETWORKING.

File Transfer Protocol (FTP)

The full form of FTP is File Transfer Protocol. It is a standard internet protocol, provided by TCP/IP, used for transmitting files from one system to another.

The main purpose of FTP is to transfer web page files from the system where they were created to the computer that acts as a server for other computers on the internet. It is also used to download files from other servers to your own computer.

Objectives

The objectives of FTP are as follows −

  • FTP provides file sharing.
  • FTP encourages the use of remote computers.
  • FTP transfers data reliably and efficiently.

Features

The features of FTP are as follows −

  • Data representation
  • File organization and Data structures
  • Transmission modes
  • Error control
  • Access control

TCP Connections

For file transferring, two TCP connections are used which are as follows −

  • Control connection− FTP uses the control connection for sending control information such as user identification, the password, commands to change the remote directory, and commands to retrieve and store files. It is initiated on port number 21.
  • Data connection− FTP uses the data connection for sending the actual file. It is initiated on port number 20.

Given below is the diagram of the TCP Connections −

CompTIA Network+ Certification

FTP session

When an FTP session starts between a client and a server, the client initiates a control TCP connection with the server and sends control information over it. When the server receives this, it opens a data connection to the client. Only one file can be sent over a single data connection at a time. FTP maintains information about its user throughout the session.

Data Structures

FTP allows three types of data structures, which are as follows −

  • File structure− It is a continuous sequence of data bytes.
  • Record structure− In this, the file is made up of sequential records.
  • Page structure− In this, the file is made up of independent indexed pages.

FTP Servers

FTP servers are divided into two parts for separating the general public users from more private users −

  • Anonymous server− FTP sites that allow anonymous FTP do not require a password for access; we log in as "anonymous" and enter our email address as the password.
  • Non-anonymous server− On a non-anonymous server, we log in with our own user name and password.

FTP commands

The FTP commands are as follows −

  • USER – Sends the user identification to the server.
  • PASS – Sends the user password to the server.
  • PWD – Causes the name of the current working directory to be returned in the reply.

Working Procedure

Clients initiate a conversation with a server by requesting to download a file. With the help of FTP, a client can delete, upload, download, rename, and even copy files on a server. A user typically needs to log on to the FTP server to use the available content.
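As a minimal sketch of that working procedure, the snippet below uses Python's standard ftplib module; the host, login details, directory, and file name are placeholders.

```python
from ftplib import FTP

# The control connection is opened on port 21; ftplib opens data
# connections behind the scenes for directory listings and transfers.
ftp = FTP("ftp.example.com")                 # placeholder host
ftp.login("anonymous", "user@example.com")   # anonymous login, e-mail address as password

print(ftp.pwd())        # PWD: current working directory on the server
ftp.cwd("/pub")         # change remote directory (placeholder path)
print(ftp.nlst())       # list files in the directory

# Download a file over a data connection (placeholder file name).
with open("readme.txt", "wb") as local_file:
    ftp.retrbinary("RETR readme.txt", local_file.write)

ftp.quit()
```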

Advantages

The advantages of FTP are as follows −

  • Speed
  • Efficient
  • Security
  • Back & Forth movement

Disadvantages

The disadvantages of FTP are as follows −

  • FTP is not compatible with every system.
  • FTP sends credentials in clear text, so attackers can easily capture the password.
  • It does not allow simultaneous transfers to multiple receivers.

Posted by & filed under CompTIA Network+, MICROSOFT MTA NETWORKING.

Difference between Private and Public IP addresses

CompTIA Network+ Certification

What is an IP Address?

An IP address (Internet Protocol address) is a numerical identifier, such as 192.0.2.1, assigned to each device on a computer network that uses the Internet Protocol to communicate.

An IP address serves two primary functions: it identifies the host, or more precisely its network interface, and it indicates the host's location in the network, allowing a path to be established to that host.

IP addresses can be either Public or Private. Read through this article to find out how a Public IP address is different from a Private IP address.

What is a Public IP Address?

Your internet service provider (ISP) assigns a public IP address to your network router so that the router can be reached directly over the internet. When you connect to the internet through your router's public IP, your personal devices keep private IP addresses that remain concealed.

Connecting to the internet using a public IP address is similar to sending mail to a P.O. box rather than giving out your home address. It’s a little safer, but it’s a lot more noticeable.

What is a Private IP Address?

The address that your network router provides to your device is known as a private IP address. Each device on the same internal network is given a unique private IP address (also known as a private network address) that allows them to communicate with one another.

Private IP addresses enable devices on the same network to interact without needing to connect to the internet. Private IPs assist to strengthen security within a specified network, such as your home or workplace, by making it more difficult for an external host or user to establish a connection. This is why you can print papers from your home printer using a wireless connection, but your neighbor can’t send their files to your printer accidentally.

CompTIA Network+ Certification

Difference between Private and Public IP Addresses

The following table highlights the major differences between Private and Public IP addresses −

| Key | Private IP Address | Public IP Address |
|---|---|---|
| Scope | Local to the present network. | Global; reachable across the internet. |
| Communication | Used to communicate within the network. | Used to communicate outside the network. |
| Format | Drawn from a small set of fixed, reserved ranges. | Drawn from the remaining, globally unique address space. |
| Provider | Assigned by the local network operator (for example, by the router or network operating system). | Controlled and assigned by the Internet Service Provider (ISP). |
| Cost | Free of cost. | Usually comes at a cost. |
| Locate | Can be found with the ipconfig command. | Can be found by searching "what is my IP" on a search engine such as Google. |
| Range | 10.0.0.0 – 10.255.255.255, 172.16.0.0 – 172.31.255.255, 192.168.0.0 – 192.168.255.255 | All addresses outside the private (and otherwise reserved) ranges are public. |
| Example | 192.168.11.50 | 17.5.7.8 |

Conclusion

Private and public IP addresses are both used to identify machines on a network. A private IP address is used within a local network, while a public IP address is used outside the network and is provided by the Internet Service Provider (ISP).

A public IP address is a globally unique numeric identifier that is not repeated by other devices on the internet, whereas a private IP address is a non-unique numeric identifier that may be reused by devices on other private networks.
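As a quick programmatic check of which category an address falls into, the sketch below uses Python's standard ipaddress module; the sample addresses echo the examples in the table above.

```python
import ipaddress

# is_private is True for the RFC 1918 ranges 10.0.0.0/8, 172.16.0.0/12 and
# 192.168.0.0/16 (and for a few other reserved blocks such as loopback).
for addr in ["192.168.11.50", "10.1.2.3", "172.16.5.9", "17.5.7.8"]:
    ip = ipaddress.ip_address(addr)
    kind = "private" if ip.is_private else "public"
    print(f"{addr} is a {kind} IP address")
```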

Posted by & filed under CompTIA Network+, MICROSOFT MTA NETWORKING.

Server Message Block Protocol (SMB)

The Server Message Block (SMB) Protocol is a Microsoft Windows protocol that allows users to share files, printers, and serial ports across a network. SMBv2 was released with Windows Vista and underwent further revisions under Windows 7.

The IBM-developed Server Message Block protocol is a networking protocol. In the 1990s, Microsoft upgraded the protocol, allowing Windows-based networks to create, alter, and delete shared files, printers, and serial ports.

SMB is an application layer protocol that interacts through TCP port 445 in most deployments. Compared to similar protocols such as the File Transfer Protocol (FTP), SMB quickly gained popularity since it offers far more flexibility.

An application known as Samba allows Linux systems to interact with the SMB protocol in Linux environments. The Common Internet File System (CIFS) is an early dialect of SMB.

How Does SMB Work?

The Server Message Block protocol allows clients to communicate with other network participants and access their files and services. The other system must also implement the network protocol and run an SMB server to receive and execute client requests. Before any of this, however, both parties must establish a connection by exchanging the appropriate messages.

SMB uses the Transmission Control Protocol (TCP) in IP networks, requiring a three-way handshake before communicating between the client and the server. The TCP protocol governs subsequent data transmission.
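Administering SMB itself is normally done with Windows tooling, but as a small, hedged illustration of the transport just described, the sketch below simply checks whether a host accepts TCP connections on the standard SMB port (445); the host name is a placeholder.

```python
import socket

def smb_port_open(host: str, port: int = 445, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to the SMB port succeeds (i.e. the three-way handshake completes)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder host name for a file server on the local network.
print(smb_port_open("fileserver.local"))
```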

CompTIA Network+ Certification

Versions of SMB Protocol

Following is the list of SMB Protocol Versions −

  • IBM released SMBv1 in 1984 as a DOS file-sharing protocol. Microsoft revised and enhanced it in 1990.
  • In 1996, a new dialect called CIFS was launched, with greater capability and support for larger file sizes. It was bundled with Windows 95.
  • In 2006, Windows Vista introduced SMBv2. It brought a noticeable performance boost thanks to improved efficiency; fewer commands and subcommands meant faster execution.
  • Windows 7 shipped with SMBv2.1, which further improved performance.
  • SMBv3 was introduced with Windows 8, along with many improvements. The most noticeable is support for end-to-end encryption.
  • SMBv3.02 was released alongside Windows 8.1. It offered the option of disabling SMBv1 to improve security and speed.
  • SMBv3.1.1 was launched with Windows 10 in 2015. It improved the protocol's security by adding AES-128 encryption, protection against man-in-the-middle attacks, and session verification.

Knowing which version of the SMB protocol your device uses is critical if you own a business and have several Windows devices connected. It would be difficult to find a PC running Windows 95 or XP (and using SMBv1) in a modern office, but they may still be running on outdated servers.

Is SMB Safe to Use?

While different versions of SMB offer varying levels of security and protection, a serious vulnerability was uncovered in SMBv1 that allows attackers to execute their malware without the user's knowledge. Once a device is infected, the malware spreads to all other connected devices. The vulnerability became public in 2017, when an exploit developed by the National Security Agency (NSA) was leaked.

The exploit was called EternalBlue; it was stolen from the NSA and distributed online by the Shadow Brokers hacker group. Microsoft patched the vulnerability, but the WannaCry ransomware attack hit the world barely a month later.

Security Precautions

Given the WannaCry and NotPetya ransomware attacks, as well as multiple other vulnerabilities revealed in the most recent SMB version (v3.1.1), such as SMBGhost and SMBleed, many network administrators and security professionals question whether SMB should be used on their networks at all. In general, SMB is regarded as a secure protocol when it is kept updated and patched.

However, the following steps should be taken to mitigate any security vulnerabilities posed by SMB −

  • SMBv1 should not be used since it lacks encryption, is inefficient, and new significant issues comparable to the MS17-010 vulnerabilities could appear in the future due to its complex implementation.
  • When possible, use the most recent SMB version (SMBv3.1.1 as of the date of this post). SMBv3.1.1 is more efficient than previous SMB versions and has cutting-edge security measures.
  • SMB access should be limited to trustworthy networks and clients as a best security practice (Least Privilege).
  • Finally, if SMB functionality is not required, it should be deactivated on Windows systems to decrease the overall attack surface and disclose as little fingerprinting information to attackers as feasible.

Posted by & filed under CompTIA Network+, MICROSOFT MTA NETWORKING.

What is VPN in Computer Network?

VPN stands for Virtual Private Network. It allows you to connect your computer to a private network, creating an encrypted connection that masks your IP address to securely share data and surf the web, protecting your identity online.

A virtual private network, or VPN, is an encrypted connection over the Internet from a device to a network. The encrypted connection helps ensure that sensitive data is safely transmitted. It prevents unauthorized people from eavesdropping on the traffic and allows the user to conduct work remotely. VPN technology is widely used in corporate environments.

A VPN connection is shown in the figure below −

CompTIA Network+ Certification

In this figure, Routers R1 and R2 use VPN technology to guarantee privacy for the organization.

VPN connections are used in two important ways −

  • To establish WAN connections using VPN technology between two distant networks that may be thousands of miles apart, but where each has some way of accessing the internet.
  • To establish remote access connections that enable remote users to access a private network through a public network like the internet.

Types of VPNs

Router VPN

The first type uses a router with added VPN capabilities. A VPN router can not only handle normal routing duties, but it can also be configured to form VPNs over the internet to similar routers located in remote networks.

Firewall VPN

The second type of VPN is one built into a firewall device. A firewall-based VPN can be used both to support remote users and to provide VPN links between sites.

Network Operating System

The third type of VPN includes those offered as part of a network operating system, such as Windows NT, Windows 2000, and NetWare 5. These VPNs are commonly used to support remote access, and they are generally the least expensive to purchase and install.

Posted by & filed under CompTIA Network+, MICROSOFT MTA NETWORKING.

Single Mode vs. Multimode Fiber Optic Cables

Many decisions come into play when installing fiber optic cabling.  By far, one of the most important questions is whether to install single mode or multimode.  This decision has huge implications for your network’s distance, bandwidth, and budget, so it’s vital to understand the differences between these two types of fiber optic glass.

Before we discuss each type of fiber, here are some definitions:

Optical fiber: The glass portion of a fiber optic cable – no jacketing or strength members included.  An optical fiber is made up of a light carrying core surrounded by cladding.  The cladding prevents light from escaping the core, effectively keeping the signal moving down the glass.

Single mode fiber: a fiber featuring a small light-carrying core of about 9 micrometers (µm) in diameter.  For reference, a human hair is closer to 100 µm. The core is surrounded by a cladding that brings the overall diameter of the optical fiber to 125 µm.

Multimode fiber: a fiber with a core of 50 µm or above.  A larger core means multiple modes (or rays of light) can travel down the core simultaneously. Just like single mode, the core is surrounded by a cladding that brings the overall diameter of the optical fiber to 125 µm.

 

CompTIA Network+ Certification

Common Misconceptions

It’s important to remember that (without the visual acuity of Superman) there is no way to distinguish between single mode and multimode optical fibers with the naked eye.  As noted above, standard optical fibers have cladding around the core that brings the diameter of the optical fiber itself to 125 µm.  When you put a connector on an optical fiber, you are primarily seeing the cladding and any integral protective coating, like SSF™ polymer.

The terms “single mode” and “multimode” also have no relation to the number of optical fibers in the fiber optic cable you are running.  It’s possible to have a cable containing 144 single mode optical fibers, and it’s also possible to have a cable containing 144 multimode optical fibers.

Is Multimode Better?

To installers new to fiber, multimode fiber may seem appealing because the name implies that more can be sent over the cable.  However, “multimode” refers to multiple rays of light simultaneously taking different tracks down the core of the fiber.   This characteristic, enabled by multimode’s larger core, actually creates some limitations.

CompTIA Network+ Certification

In multimode fiber, light travels down the core, bouncing off the cladding as it goes.  Because each beam of light takes an individual path, each reaches the end of the optical fiber at a different time.  This spread is called modal dispersion, and it limits both data rate and distance.  For OM3 multimode, 10 Gb/s can be sent a maximum of about 300 m (1000 ft) before the signal becomes indistinguishable.

CompTIA Network+ Certification

Conversely, single mode’s minuscule core limits dispersion, so higher bandwidth signals can be sent over a longer distance.  Sending data over the ocean floor?  Single mode would be the cable for you. In general, single mode is the cable of choice for installations above about 300 m (1000 ft).

Single Mode Distance Limitations

| TYPE | APPLICATION | DISTANCE | WAVELENGTH |
|---|---|---|---|
| Gigabit | 1000BASE-LX | 5 km | 1310 nm |
| 10 Gigabit | 10GBASE-LX4 | 10 km | 1310 nm |
| 10 Gigabit | 10GBASE-E | 40 km | 1550 nm |
| 40 Gigabit | 40GBASE-LR4 | 10 km | 1310 nm |
| 40 Gigabit | 40GBASE-FR | 2 km | 1310 nm |
| 100 Gigabit | 100GBASE-LR4 | 10 km | 1310 nm |

Why Run Multimode at All?

The answer to this comes down primarily to budget and applications.  Single mode cable requires single mode transceivers, and those tend to be far more expensive than multimode equivalents.  The difference in electronics can bring single mode system costs far above those of multimode, even if the per foot cost of single mode cable is low.  This is one of the primary reasons we’ll generally recommend multimode before single mode fiber in lower-distance applications.

However, there are still times when single mode may be recommended for short cable runs. It depends on the installation!

Choosing Multimode? Pick the Right Grade.

Multimode fiber is currently constructed in five different grades: OM1, OM2, OM3, OM4, and OM5.  Each grade of multimode fiber has a different bandwidth and distance limitation, with OM4 and OM5 providing the greatest bandwidth over longest distance and OM1 providing the lowest.  At the moment, our general grade recommendation for installations suitable for multimode is OM3. As can be seen in the table below, OM3 provides good options for bandwidth over distance, and it is generally more cost-effective than OM4.

It is extremely important to note that while OM2, OM3, OM4, and OM5 all have a core of 50 µm, OM1 has a core of 62.5 µm.  While these optical fibers are all surrounded by a cladding to 125 µm, OM1 can’t be used as a patch cable in a system involving OM2/OM3/OM4/OM5, and it will not work with connectors rated for OM2/OM3/OM4/OM5.

| CABLE TYPE | 10 GB ETHERNET DISTANCE (10GBASE-SR) | 40 GB / 100 GB ETHERNET DISTANCE (40GBASE-SR4 / 100GBASE-SR10) |
|---|---|---|
| OM1 Fiber | 33 m / 100 ft | N/A |
| OM2 Fiber | 82 m / 260 ft | N/A |
| OM3 Fiber | 300 m / 1000 ft | 100 m / 330 ft |
| OM4 Fiber | 400 m / 1300 ft | 150 m / 500 ft |
| OM5 Fiber | 400 m / 1300 ft | 150 m / 500 ft |

Don’t Mix and Match

Just as it’s important to note that you can’t mix OM1 and OM4, also note that single mode and multimode are not interchangeable.  Single mode electronics and connectors only work with single mode fiber, and multimode, likewise, only works with multimode.  This is due to the difference in core diameters between fiber types, as well as light wavelengths used for transmission.

Both single mode and multimode fibers provide excellent solutions for durable, high bandwidth installations.  Being aware of the differences between the two types of fiber will allow you to select the fiber most appropriate for your installation and data requirements.

The Short Version

  • Single mode fiber has a smaller core than multimode and is suitable for long haul installations. Single mode systems are generally more expensive.
  • Multimode fiber has a larger core and is recommended for fiber runs less than 400 m (1300 feet). The grade of multimode fiber affects its distance and bandwidth capabilities.  Multimode systems are generally less expensive.
  • Single mode only works with single mode, and multimode only works with multimode. This is true for cable, connectors, and electronics.
  • Our recommendation for cable runs under 300 m (1000 ft) is generally multimode OM3. This provides high bandwidth and is more budget friendly than OM4.

Posted by & filed under CompTIA Network+, MICROSOFT MTA NETWORKING.

Network Time Protocol (NTP)

NTP is an internet protocol used to synchronize the clocks on computer networks to within a few milliseconds of Coordinated Universal Time (UTC). It enables devices to request and receive UTC from a server that, in turn, receives precise time from an atomic clock.

CompTIA Network+ Certification

What Is NTP?

“A man with one watch knows what time it is. A man with two watches is never sure.”

Segal’s law pokes fun at the person who makes no effort to check they’re right while highlighting the complexity of receiving information from more than one source.

When it comes to synchronizing your operations, network time protocol (NTP) solves both problems, providing users with the certainty of accurate time across a whole network of devices.

Like any network protocol, NTP is a set of rules, or conventions, that dictate how devices on a network should transmit, receive and understand data. Think about it as a shared language, allowing devices to communicate, in this instance, about time.

NTP allows networked devices, such as clocks, phones and computers, to request and receive time from a server that, in turn, receives precise time from a definitive time source, like an atomic clock.

NTP was developed in the 1980s and is now on version four. Since its release, it’s been used to synchronize the critical systems of businesses, organizations and governments all over the world.

Why Is Time Synchronization Important?

Precise time is vital to everyday life.

As life becomes digitized and automated, exact time is increasingly important:

  • The telecommunications industry relies on accurate time for the transfer of vast amounts of data.
  • Utility companies use time synchronization to manage power distribution.
  • Financial services need exact time to timestamp transactions and ensure traceable records.
  • Satellite navigation depends on precise time, with a difference of one microsecond causing a positioning error of 300 metres.
  • CCTV and speed cameras require an accurate timestamp to be admissible as evidence.
  • Countless businesses rely on precise time to manage their day-to-day processes, such as synchronizing clocking-in systems.

For organizations of any size, NTP is a cost-effective, reliable and user-friendly method of distributing precise time throughout a network, allowing users to boost productivity, improve customer service, enhance security and more.

Moreover, by using NTP to synchronize to UTC—a global time standard—organizations and governments are able to coordinate international operations.

What Is UTC and How Is It Decided?

UTC is the standard the world has agreed on as the basis for civil time. It’s the result of a decades-long process of invention, revision and collaboration, during which time the standard moved from Greenwich Mean Time (GMT) to Atomic Time (TAI), to UTC.

Importantly, UTC is a time standard, not a time zone, which means it's the same all over the world and isn't affected by daylight saving time. In fact, time zones are expressed according to their offset from UTC (plus or minus a number of hours). UTC is maintained by comparing more than 200 atomic clocks located all over the world. The Bureau International des Poids et Mesures (BIPM), in France, is responsible for collating this data and generating definitive UTC time.

How Does NTP Work?

CompTIA Network+ Certification

A time server and antenna can synchronize a network to UTC.

NTP makes UTC available to an organization by taking a time signal from one, or more, atomic clocks and distributing it to networked devices.

At its most basic, an NTP network is comprised of the devices to be synchronized (known as clients) and an NTP server, which receives UTC time and provides it to the clients.

The clients and server communicate in a series of requests and responses:

  1. The client sends an NTP request packet to the time server, stamping the time as it does so (the origin timestamp).
  2. The server stamps the time when the request packet is received (the receive timestamp).
  3. The server stamps the time again when it sends a response packet back to the client (the transmit timestamp).
  4. The client stamps the time when the response packet is received (the destination timestamp).

This process may only take microseconds, but the timestamps allow the client to account for the roundtrip delay and work out the difference between its internal time and that provided by the server, adjusting itself as necessary and maintaining synchronization.
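Using the conventional names for those four timestamps (t1 origin, t2 receive, t3 transmit, t4 destination), the standard offset and round-trip-delay calculation looks like the sketch below; the timestamp values are made up purely for illustration.

```python
def ntp_offset_and_delay(t1: float, t2: float, t3: float, t4: float):
    """Standard NTP clock-offset and round-trip-delay calculation.

    t1: client sends request (origin timestamp)
    t2: server receives request (receive timestamp)
    t3: server sends response (transmit timestamp)
    t4: client receives response (destination timestamp)
    """
    offset = ((t2 - t1) + (t3 - t4)) / 2   # how far the client clock is behind (+) or ahead (-)
    delay = (t4 - t1) - (t3 - t2)          # network round-trip time, excluding server processing
    return offset, delay

# Illustrative (made-up) timestamps in seconds: the client's clock is about 25 ms behind.
offset, delay = ntp_offset_and_delay(t1=100.000, t2=100.035, t3=100.036, t4=100.020)
print(f"offset ≈ {offset * 1000:.1f} ms, round-trip delay ≈ {delay * 1000:.1f} ms")
```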

NTP Hierarchy: Stratum Levels Explained

CompTIA Network+ Certification

Devices on one stratum can provide time to devices on the next.

On large networks, there may be so many clients that the server can’t handle requests from all of them. In these instances, servers and clients are arranged in a hierarchy of levels called stratums.

Stratum one servers have a direct connection, via a radio or GPS signal, with the primary time source, and they provide that time to clients on stratum two via a network connection.

In turn, stratum two devices can function like a server by providing time to clients on stratum three, and those on stratum three can provide it to those on stratum four—and so on. In this way, stratum one devices aren’t overloaded with too many requests.

A total of 15 synchronized stratum levels are possible (stratum 16 is for unsynchronized clients), but each one introduces another layer of network delay, causing accuracy to decrease. To combat this, NTP clients can be set up to request time from multiple servers to help them determine the correct time as closely as possible.

What Is SNTP and How Is It Different?

Simple network time protocol (SNTP) is exactly what its name suggests: a stripped-down version of NTP that’s suited to small networks and computers with limited processing power.

SNTP and NTP share several similarities. For example, the packets of data exchanged between the clients and the time server are identical, making any time server compatible with both.

However, SNTP lacks the many algorithms that NTP uses to determine and maintain synchronization.

Practically, for instance, NTP calculates the drift rate of a given clock from the true time and adjusts that rate to maintain the clock’s synchronization. SNTP, on the other hand, allows the clock to drift and then jumps the time forward or back to match the true time at given intervals.

Between these intervals, it’s possible for the clock to be out of sync, making SNTP unsuitable for applications that demand the highest levels of precision.

SNTP also differs in the number of servers it uses for synchronization. Whereas NTP allows clients on one stratum to act as servers to clients on the next, SNTP is based on a single server-client relationship.

Additional time servers can be specified as backups, but SNTP, unlike NTP, is unable to communicate with several servers in order to discern which is the most accurate. SNTP was released in the early 1990s to suit the limited processing power of the computers of the day. Today, there are few instances where NTP can’t be handled, but SNTP can still be useful for simple applications that don’t require the higher level of precision provided by NTP.
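For a quick test or a very simple application, an SNTP-style query fits in a few lines of standard-library Python, as in the sketch below; pool.ntp.org is used purely as an example public server.

```python
import socket
import struct
import time

NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 (NTP epoch) and 1970-01-01 (Unix epoch)

def sntp_time(server: str = "pool.ntp.org", port: int = 123) -> float:
    """Send one SNTP request and return the server's transmit time as a Unix timestamp."""
    request = b"\x1b" + 47 * b"\0"   # first byte 0x1B = leap 0, version 3, mode 3 (client)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(5)
        sock.sendto(request, (server, port))
        data, _ = sock.recvfrom(512)
    # The server's transmit timestamp (seconds field) starts at byte 40 of the 48-byte reply.
    seconds = struct.unpack("!I", data[40:44])[0]
    return seconds - NTP_EPOCH_OFFSET

print(time.ctime(sntp_time()))
```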

Public NTP Servers vs. Local NTP Servers

CompTIA Network+ Certification

Local NTP servers sit inside your firewall, avoiding the vulnerabilities caused by public servers.

There are two types of NTP servers that you can use to provide UTC time to your network: public servers and local servers.

A public time server is owned and operated by a third party who makes it available for use over the internet. The NTP Pool Project provides an online directory of public servers, allowing you to direct your clients to one of these, free of charge.

Local (aka internal) NTP servers are those you own yourself and install in your premises, establishing a physical network connection between your servers and clients.

If synchronized time is critical to your operations, then internal time servers are the safer, more reliable option. They provide improved accuracy and more control while avoiding the various drawbacks of public servers:

CompTIA Network+ Certification

How to Synchronize Your Network with an Internal NTP Server

CompTIA Network+ Certification

A typical setup uses NTP to synchronize a network to a GPS time signal.

To set up an NTP network with an internal time server, you need a number of things:

  • A reference clock/time source that defines and transmits the true time.
  • A time receiver, in the form of a radio or GPS antenna.
  • An NTP server, which receives the time from the antenna and delivers it to a network.
  • The devices/clients to be synchronized.  

Reference Clocks

A reference clock is the primary time source that defines and provides UTC time. Atomic clocks are the most accurate type of reference clock, providing near-inconceivable levels of precision.

For instance, the NIST-F2, created by the US National Institute of Standards and Technology, measures the vibration of a cesium atom to define a second (9,192,631,770 vibrations per second). If run without interruption, the clock would neither gain nor lose a second in 300 million years.

Thankfully, you don’t need to install an atomic clock in your server room to receive precise time. They’re installed in the satellites of the global positioning system, and they’re maintained in the laboratories of national standards agencies all over the world. These clocks transmit time signals that you can pick up and use to synchronize your own network.

Time Receivers

GPS antennas can receive a time signal from multiple satellites.

Each GPS satellite transmits a time signal that anyone can receive with a GPS antenna. The global positioning system is designed so that at least four satellites are constantly available from anywhere in the world, making it a highly reliable source of accurate time.

Alternatively, radio antennas receive a time signal from one of several atomic clocks on earth. The range of these signals is localized, so users have to consider which station provides the strongest signal and use an antenna that’s set to that frequency.

GPS time signals are the most accurate and have the advantage of being globally available. However, the antenna requires a 360° view of the sky, which isn’t possible in every situation.

Radio time signals, on the other hand, can be received through windows, making a radio time source a good option for premises that don’t have an unobstructed view of the sky.  

However, a radio time signal can be affected by topography and downtime, making it less reliable and not ideal for synchronizing very critical systems.

Either way, the receiver connects to an internal NTP server via a cable that can be up to 1,000 meters long, when used with a power booster, giving businesses lots of flexibility when it comes to installation.

Internal NTP Servers


A rackmount NTP server installs easily alongside your existing IT hardware.

An NTP server receives the time from the reference clock, via the antenna, and provides it to your network.

The type of server you choose will depend on a number of factors:

  • Whether you’re using a GPS or radio time source.
  • How many clients you want to synchronize.
  • Whether or not you want to supply time to multiple networks.
  • What operating system you want to use.
  • How you want to physically install the server.

Choose a radio or GPS time server, depending on which of these sources is the best for you. Alternatively, dual time servers are a good choice for applications that require the highest level of reliability. These servers use a radio and GPS antenna to receive time from both, allowing the server to draw time from the strongest source and automatically revert to the other if one signal is lost.

To avoid spending money on features you don’t need, you should match your time server to the size of your network.

The Galleon Systems NTS-4000 synchronizes a single network and is ideal for smaller businesses. The NTS-6002 can synchronize two independent networks, making it a great choice for organizations with separate staff and customer networks.  

For the most-demanding applications, the NTS-8000 can synchronize up to six networks—ideal for supplying precise time to independent networks on different floors of a building.

Each of these time servers is capable of synchronizing thousands of clients, and they’re all available in a radio, GPS or dual configuration.

Time servers can run on several different operating systems, but any client can access a server running on any OS. For instance, clients running MacOS can communicate with a Windows time server by using their built-in NTP client software.

Finally, choose a server that meets the physical requirements of your space. Many time servers come in a rack-mountable body, allowing you to integrate them alongside your existing IT hardware. Alternatively, you can enjoy the same functionality from a standalone server, which sits on any flat surface.

NTP Clients

Ethernet clocks are ideal for displaying precise time throughout your premises.

Clients are the devices you connect to your time server to be synchronized. Virtually any device can be a client if it meets three conditions:

  • It has a built-in clock.
  • It can be connected to a network via an Ethernet connection.
  • It’s capable of running NTP/SNTP client software.

Possible clients include computers, phones, clocks, CCTV systems, clocking-in systems, payment terminals and more.

Many devices have NTP client software built-in. If not, TimeSync software is easy to install on Windows devices, allowing you to synchronize a range of clients for a variety of purposes.

What Is NTP? Conclusion

NTP provides businesses and organizations with a reliable, user-friendly and cost-effective method of time synchronization.

It’s one of the oldest internet protocols still in use and, though now on version four, retains many of the principles that made it so popular in its early years.

By connecting your networked devices to a time server, which receives a signal from a definitive time source, you can enjoy the benefits of precise time in any location, boosting productivity, improving customer service and synchronizing your operations.

For a no-obligation discussion about implementing NTP in your organization, contact Galleon Systems: 0121 608 7230.

Posted by & filed under CompTIA Network+, MICROSOFT MTA NETWORKING.

Distinguish between Connection-Oriented and Connectionless Service

CompTIA Network+ Certification

Connection-Oriented Services

In a connection-oriented service, each packet is related to a source/destination connection. These packets are routed along a similar path, known as a virtual circuit. Thus, it provides an end-to-end connection to the client for reliable data transfer.

It delivers data in order, without duplication or loss. It does not overwhelm the communication channel or the buffer of the receiving device. The host machine requests a connection before interacting and closes the connection after the data has been transmitted.

Mobile communication is an example of a connection-oriented service.

CompTIA Network+ Certification

Connectionless-Service

In connectionless service, a router treats each packet individually. The packets are routed through different paths through the network according to the decisions made by routers. The network or communication channel does not guarantee data delivery from the host machine to the destination machine in connectionless service.

The data to be transmitted is broken into packets. These independent packets are called datagrams in analogy with telegrams.

The packets contain the address of the destination machine. Connectionless service is equivalent to the postal system. In the postal system, a letter is put in an envelope that contains the address of the destination. It is then placed in a letterbox.

The letter is finally delivered to the destination through the postal network. However, delivery to the addressee's letterbox is not guaranteed.

CompTIA Network+ Certification

Differences

The major differences between connection-oriented and connectionless services in a computer network are as follows −

Connection-Oriented Services

  • An end-to-end connection is established between the sender and the receiver before any data is sent over one or more networks.
  • A virtual path is generated between the sender and the receiver.
  • It needs higher bandwidth to transmit the data packets.
  • There is no congestion, because an end-to-end connection exists between sender and receiver during data transmission.
  • It is the more dependable service, because it ensures that data packets are transferred from one end to the other over the established connection.

Connectionless Services

  • Data packets are transferred from the sender to the receiver without creating any connection.
  • No virtual connection or path is made between the sender and the receiver.
  • It requires lower bandwidth to share the data packets.
  • Congestion can occur, because no end-to-end connection is provided between the source and the receiver for transmitting data packets.
  • It is the less dependable service, because it does not guarantee that data packets are delivered from one end to the other.
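The same distinction is visible directly in the socket API: a TCP socket must establish a connection before sending (connection-oriented), while a UDP socket simply addresses each datagram on its own (connectionless). In the sketch below, the host names and ports are placeholders.

```python
import socket

# Connection-oriented: TCP performs a handshake and then streams data over the
# established connection, with delivery and ordering handled by the protocol.
with socket.create_connection(("server.example.com", 9000)) as tcp_sock:  # placeholder endpoint
    tcp_sock.sendall(b"hello over an established connection")
    reply = tcp_sock.recv(1024)

# Connectionless: UDP just sends an addressed datagram; there is no setup,
# and no guarantee that the packet arrives, or arrives in order.
udp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_sock.sendto(b"hello as an independent datagram", ("server.example.com", 9001))
udp_sock.close()
```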

Posted by & filed under CompTIA Network+, MICROSOFT MTA NETWORKING.

Simple Network Management Protocol (SNMP)

SNMP stands for Simple Network Management Protocol. It is an Internet-standard protocol for handling devices on IP networks. Devices that typically provide SNMP include routers, switches, servers, workstations, printers, modem racks, and more. It is used mainly in the network management framework to monitor network-attached computers for conditions requiring regulatory attention.

It is a framework for managing devices on the Internet using the TCP-IP protocol suite. It supports a set of fundamental operations for monitoring and maintaining the Internet.

CompTIA Network+ Certification

SNMP Concept

SNMP is built around the concept of a manager and agents. A manager, generally a host, controls and monitors a group of agents, usually routers. It is an application-level protocol in which a few manager stations control a set of agents. Because it operates at the application level, the protocol can monitor devices from different manufacturers installed on different physical networks.

Managers and Agents

A management station, known as a manager, is a host that runs the SNMP client program. A managed station, known as an agent, is a router (or a host) that runs the SNMP server program. Management is achieved through simple interaction between a manager and an agent: the agent keeps performance information in a database, and the manager has access to the values in that database.

CompTIA Network+ Certification

Components of SNMP

An SNMP-managed network includes three key components. These components are as follows −

  • Managed Device− It is a network node that executes an SNMP interface that enables unidirectional (read-only) or bidirectional access to node-specific information.
  • Agent− An agent is a network-management software mechanism that consists of a managed device. An agent has local knowledge of management data and translates that information to or from an SNMP specific form.
  • Network management system (NMS)− A network management system (NMS) executes applications that monitor and control managed devices.
CompTIA Network+ Certification

SNMP Protocols

SNMP uses two other protocols which are as follows –

SMI

SMI stands for Structure of Management Information. SMI defines the general rules for naming objects, defining object types (including range and length), and showing how to encode objects and values.

SMI does not determine the number of objects an entity should handle or name the objects to be managed or define the relationship between the objects and their values.

MIB

MIB stands for Management Information Base. For each entity to be managed, the MIB defines the objects involved, names them according to the rules laid down by SMI, and associates a type with each named object. The MIB is thus a collection of named objects, their types, and their relationships to one another within the entity to be managed.

Posted by & filed under CompTIA Network+, MICROSOFT MTA NETWORKING.

Dynamic Host Configuration Protocol (DHCP)

Dynamic Host Configuration Protocol (DHCP) is a client/server protocol that automatically provides an Internet Protocol (IP) host with its IP address and other related configuration information such as the subnet mask and default gateway. In DHCP, port number 67 is used for the server and 68 is used for the client.

DHCP allows a network administrator to supervise and distribute IP addresses from a central point and automatically sends a new Internet Protocol (IP) address when a computer is plugged into a different place in the network.

DHCP is an application layer protocol that provides −

  • Subnet Mask
  • Router Address
  • IP Address

DHCP Client-Server Communication Diagram

In DHCP, the client and the server exchange DHCP messages to establish a connection.

DHCP Discover Message − Client Requests DHCP Information

  • It is the first message produced by a client in the communication process between the client and server with the target address 255.255.255.255 and the source address 0.0.0.0.
  • This message is produced by the client host to discover if there are any DHCP servers present in a network or not.
  • The message might contain other requests like subnet mask, domain name server, and domain name, etc.
  • The message is broadcast to all the devices in a network to find the DHCP server.
CompTIA Network+ Certification

DHCP Offer Message − DHCP Server Offers Information to Client

  • The DHCP server replies to the host with this message, offering an unleased IP address and other TCP/IP configuration information.
  • This message is broadcasted by the server.
  • If more than one DHCP server is present on the network, the client host accepts the first DHCP OFFER message it receives.
  • Also, a server ID is specified in the packet to identify the server.

DHCP Request Message − Client Accepts DHCP Server Offer

  • The Client receives the DHCP offer message from the DHCP server that replied/responded to the DHCP discover message.
  • After receiving the offer message, the client compares what was offered with what it requested and then selects the server it wants to use.
  • The client sends the DHCP Request message to accept the offer, showing which server is selected.
  • Then this message is broadcast to the entire network to let all the DHCP servers know which server was selected.
CompTIA Network+ Certification

DHCP Acknowledgment Message − DHCP server acknowledges the client and leases the IP address.

  • If a server receives a DHCP Request message, the server marks the address as leased.
  • Servers that are not selected will return the offered addresses to their available pool.
  • The selected server now sends the client an acknowledgment (DHCPACK), which contains additional configuration information.
  • The client may use the IP address and configuration parameters. It will use these settings till its lease expires or till the client sends a DHCP Release message to the server to end the lease.

DHCP Request, DHCP ACK Message − Client attempts to renew the lease

  • The client starts to renew a lease when half of the lease time has passed (see the short timing sketch after this list).
  • The client requests the renewal by sending a DHCP Request message to the server.
  • If the server accepts the request, it will send a DHCPACK message back to the client.
  • If the server does not respond to the request, the client might continue to use the IP address and configuration information until the lease expires.
  • As long as the lease is still active, the client and server do not need to go through the DHCP Discover and DHCP Request process.
  • When the lease has expired, the client must start over with the DHCP Discover process.
CompTIA Network+ Certification
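The 50% renewal rule mentioned in the list above is simple arithmetic. The sketch below computes the renewal time for an example lease; the eight-hour lease length is an assumed value used only for illustration.

```python
from datetime import datetime, timedelta

# Example lease: 8 hours, granted right now (both values are illustrative).
lease_granted = datetime.now()
lease_duration = timedelta(hours=8)

lease_expires = lease_granted + lease_duration
renewal_time = lease_granted + lease_duration / 2  # client begins renewing at 50% of the lease

print("Lease granted: ", lease_granted)
print("Start renewing:", renewal_time)
print("Lease expires: ", lease_expires)
```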

The client ends the lease − DHCPRELEASE.

  • The client ends the lease by sending a DHCP Release message to the DHCP server.
  • The server will then return the client’s IP address to the available address pool and cancel any remaining lease time.

Posted by & filed under CompTIA Network+, MICROSOFT MTA NETWORKING.

CompTIA NET+ | Generic Routing Encapsulation (GRE)

Generic Routing Encapsulation (GRE) is a tunneling protocol developed by Cisco Systems in 1994 that allows a wide range of network-layer protocols to be carried inside virtual point-to-point or point-to-multipoint links over an Internet Protocol network. According to the OSI principles of protocol layering, protocol encapsulation in general (not GRE specifically) breaks the layering sequence.

GRE can be thought of as a barrier between two protocol stacks, one of which serves as a carrier for the other. IP protocol number 47 is used for GRE packets encapsulated within IP. GRE is defined by RFC 2784, and it provides a stateless, private connection.

GRE establishes a stateless connection that is comparable to a Virtual Private Network (VPN), although GRE itself does not encrypt the traffic it carries. Over an IP network, it can carry any OSI layer-three protocol.

GRE establishes a tunnel between two routers over the internet to allow communication between hosts on different private networks. The GRE tunnel endpoints are terminated on virtual tunnel interfaces.

GRE Tunneling

GRE creates a private way for packets to travel through an otherwise public network by encapsulating or tunnelling the packets. Tunnel endpoints that encapsulate or de-encapsulate the traffic are used in GRE tunnelling.

Encapsulating packets within other packets is known as tunnelling. GRE tunnels are often set up between two routers, with each router acting as the tunnel’s end. The routers are configured to send and receive GRE packets directly.

Within an outer IP packet, GRE encapsulates a payload, an inner packet that must be transferred to a target network. GRE tunnel endpoints route encapsulated packets via intervening IP networks to convey payloads across GRE tunnels. GRE tunnels are used to connect different subnetworks.
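As a byte-level illustration of how small the basic encapsulation overhead is, the sketch below packs a minimal RFC 2784 GRE header (no checksum, version 0) in front of a stand-in payload; it is only a layout sketch, not a working tunnel, and the payload bytes are placeholders for a real inner IP packet.

```python
import struct

def gre_encapsulate(inner_packet: bytes, protocol_type: int = 0x0800) -> bytes:
    """Prepend a minimal GRE header (RFC 2784: no checksum, version 0).

    protocol_type is the EtherType of the payload, e.g. 0x0800 for IPv4.
    The result would itself be carried inside an outer IP packet whose
    protocol number is 47 (GRE).
    """
    flags_and_version = 0x0000  # checksum bit clear, reserved bits zero, version 0
    header = struct.pack("!HH", flags_and_version, protocol_type)
    return header + inner_packet

# Placeholder bytes standing in for an inner IPv4 packet (20-byte header shown only in part).
inner = b"\x45\x00\x00\x14" + b"\x00" * 16
print(gre_encapsulate(inner).hex())
```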

CompTIA Network + Certification Boot Camp.

Advantages of GRE

  • IPv4 broadcast and multicast traffic can be encapsulated using the GRE protocol.
  • IPv6 is also supported.
  • It’s a straightforward and adaptable protocol.
  • Numerous protocols are encapsulated in a single GRE tunnel.
  • It can connect multiple discontinuous sub-networks and is easy to debug.
CompTIA Network+ Certification

Disadvantages of GRE

  • It does not provide a data encryption facility, and it needs to be integrated with other security protocols to provide network security.
  • Defining GRE tunnels is a laborious process; hence it is less scalable.

There are quite a few protocols available for data transfer via a secure network. Protocols were created for a reason, and they’re getting better all the time. Whether it’s greater security or ease of use and configuration, we always have various aspects to consider when picking the optimal protocol for a network.

Source: What is Generic Routing Encapsulation (GRE) (tutorialspoint.com)


Posted by & filed under CompTIA A+, MICROSOFT MTA O/S.

What are the Different Motherboard Form Factors?

Microsoft MTA O/S | CompTIA A+ Exam objectives 1.2: sizes.

Introduction to Motherboards

A motherboard (MB), also known as a mainboard, system board or logic board, is the central or primary circuit board in a Personal Computer (PC). It is an extremely complex electronic system that every device in a computer connects to in order to send and receive data. A typical motherboard is made up of the main microprocessor (the CPU), two or more DIMM slots to hold memory modules, support chips called the chipset, controller ports for connecting storage drives, expansion slots for add-in cards, and integrated input and output ports for connecting external devices.

 

Also known as a mainboard, system board, mobo or MB, here's what a motherboard looks like:

 

Motherboard Form Factors

Motherboard form factors refer to the layout, features, and size of a motherboard. While there are dozens of form factors for desktop computers, most of them are either obsolete or developed for specialized purposes.

As a result, almost all consumer motherboards sold today belong to one of these form factors: ATX, Micro-ATX, Mini-ITX, and EATX.

What Does Each One Mean?

To start, let’s begin with the “standard”-sized motherboard which is the ATX. ATX stands for “Advanced Technology eXtended” and was developed as far back as 1995. If you own, or have owned, a regular-sized PC, there’s a good chance it has an ATX motherboard. This makes ATX the “regular” choice when purchasing a PC or motherboard.

From ATX, motherboards get either bigger or smaller in size. Going upward, you have the EATX motherboard (Extended ATX) which adds more to the ATX board and is slightly larger as a result. Going the other way, you have the Micro ATX which is smaller than the ATX. After that is the Mini ITX (“Information Technology eXtended”) which is even smaller than the Micro ATX.

ATX

The most popular standard for PC motherboards is ATX, which stands for Advanced Technology Extended. ATX motherboards are considered to be full-size with up to seven PCI/PCI Express (PCIe) expansion slots. Expansion slots are needed for things like graphics cards, sound cards, NVMe PCIe Solid State Drives (SSDs), and various peripherals. They also provide up to eight slots for RAM.

MINI-ITX

If you need a computer that is really small then you should look to Mini-ITX. These boards are primarily used in small form factor (SFF) computer systems where the entire computer must fit in a cabinet or on a bookshelf or otherwise be very portable. Typical uses include home theater PCs (HTPCs) where low power consumption means less noise from cooling fans and LAN gaming where you need something that is easy to carry around. Many new CPUs include integrated graphics eliminating the need for a dedicated graphics card if you aren’t after high resolution and/or high frame rates. This is good, because the Mini-ITX standard allows for just one PCI expansion port. To take full advantage of the smaller form factor you may need to find something other than a standard ATX power supply as they are generally too large for small Mini-ITX cases.

EATX

On the other hand, if space is not your concern but performance and reliability are, then EATX is for you. The e stands for extended, making this an Extended Advanced Technology Extended motherboard. Boy, that’s a mouthful. Generally these are used for enterprise-class high-performance workstations and servers. While it’s the same height as an ATX motherboard, it is 86 mm (3.39 inches) wider. This additional space is generally used for a second CPU, but single-CPU boards are also available. They also have eight memory slots and up to seven PCI expansion slots, sometimes using an older 64-bit PCI standard called PCI-X (PCI eXtended).

Micro ATX

As computer technology developed, the market changed and the demand for small yet powerful motherboards grew quickly. The Micro ATX was developed with the same ATX form factor in mind, and because the price of this motherboard was low, demand increased rapidly.

Pros and Cons of each form factor

ATX

Pros:
  • Excellent overclocking potential
  • Easy to find compatible components
  • Usually features great aesthetics

Cons:
  • A little expensive
  • Requires a lot of space

Micro-ATX

Pros:
  • Very affordable
  • Fairly portable
  • Small enough for on-desk setups
  • Decent overclocking

Cons:
  • Lower RAM capacity than ATX
  • Not ideal for multi-GPU setups

Mini-ITX

Pros:
  • Affordable
  • Very portable (ideal for LAN parties)
  • Makes a great HTPC

Cons:
  • Not a great choice for overclocking
  • Minimal RAM capacity
  • No multi-GPU support

EATX

Pros:
  • Enthusiast-tier overclocking
  • More PCIe lanes
  • High RAM capacity
  • Ideal for 4-way GPU builds, servers and high-end workstations

Cons:
  • Very expensive
  • Requires a lot of space

Motherboard Form Factor Comparison Chart

                    ATX                Micro ATX          EATX               Mini-ITX
Maximum Size        30.5 x 24.4 cm     24.4 x 24.4 cm     30.48 x 33.02 cm   17 x 17 cm
                    (12 x 9.6 in)      (9.6 x 9.6 in)     (12 x 13 in)       (6.7 x 6.7 in)
RAM Slots           2 to 8             2 to 4             8                  2
RAM Type            DIMM               DIMM               DIMM               DIMM, SODIMM
Expansion Slots     7                  4                  7                  1
Graphics Cards      1 - 4              1 - 3              1 - 4              0 - 1
SATA Ports          4 - 12             4 - 8              4 - 12             2 - 6

Credits: Build Computers

Posted by & filed under CompTIA A+, MICROSOFT MTA O/S.

Backup and recovery procedures protect your data against data loss and reconstruct the data, should loss occur.

When searching for a backup solution, you’ll find that there are two main types of backups: “file-level backup” and “image-level backup.”

File-Level Backup

File-level backups are the most common type of backup. This method has been around for a long time, and most people are familiar with the process. File-level backup allows you to save individual files and folders from your office computer to a remote location. You’ve probably used a consumer-level backup or sync program to save files, such as Google Drive, Dropbox, or Microsoft OneDrive. Because only changed data is saved with each backup, the total backup is smaller in size and therefore requires less storage space.
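
To make the “only changed data is saved” idea concrete, here is a minimal Python sketch of a file-level backup pass that copies a file only when its size or modification time differs from the copy in the backup folder. It is only an illustration, not a replacement for a real backup product, and the example paths at the bottom are made up.

import os
import shutil

def backup_changed_files(source_dir, backup_dir):
    """Copy only new or modified files from source_dir into backup_dir."""
    for root, _dirs, files in os.walk(source_dir):
        for name in files:
            src = os.path.join(root, name)
            rel = os.path.relpath(src, source_dir)
            dst = os.path.join(backup_dir, rel)
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            if os.path.exists(dst):
                s, d = os.stat(src), os.stat(dst)
                # Same size and no newer timestamp: assume the file has not changed.
                if s.st_size == d.st_size and int(s.st_mtime) <= int(d.st_mtime):
                    continue
            shutil.copy2(src, dst)  # copy2 preserves timestamps

# Hypothetical example paths:
# backup_changed_files(r"C:\Users\Me\Documents", r"E:\Backup\Documents")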

For most SMBs operating only a few machines, or for home computing, this type of backup is usually adequate for maintaining your data. Delete a file? Go and get it from your external drive or cloud storage solution.

Image-Level Backup

Image-level backup is known by many names: bare metal backup/recovery (BMR), disaster recovery backup, ghost backup, block-level backup or “cloning” your machine. Image-level backup is a more complete option for backing up your data. Rather than copying individual files and folders, image-level backup takes a snapshot of your entire operating system and all of the data associated with it. The backup is saved in a single file called an image, which can be retrieved and restored if you suffer a major data loss. This backup method requires more storage space, but it’s much more efficient when you want to get back up and running as quickly as possible.

Another benefit of image-level backup is that the copy of your operating system can be restored to any computer. So, if your hardware is damaged due to a fire or flood, or if your computers are stolen, there’s no need to search for and purchase compatible older machines. And, while the main benefit of image-level backup is having the ability to quickly restore your entire server from a single file, it’s also possible to retrieve an individual file – eliminating the need for file-level backups.

Learn more about our A+ certification.

Posted by & filed under CompTIA A+, MICROSOFT MTA O/S.

Source: Wikipedia

Monitor Connectors:

· If you are using a Monochrome/CGA/EGA monitor, it is a digital monitor and will have a DB-9 male connector that plugs into a digital adapter.
· If you are using a VGA/SVGA monitor, it will have a male DB-15 connector that plugs into an analog adapter. You should never connect an analog monitor to a digital adapter, or vice versa, or severe damage may take place.

Video types

 

Video Monitor    Max. Color Depth       Max. Resolution
CGA              16 colors              160 x 100
EGA              64 colors              640 x 350 (Graphics Mode)
VGA              256 colors             640 x 480 (Graphics Mode)
SVGA             16 million colors      1280 x 1024 or even more

When you are installing a different SVGA monitor, it is unlikely that the new monitor has the same capabilities as the old one. As a result, the image on the screen may not be readable. In such instances, change the video resolution to Standard VGA before installing the new monitor. You can change the resolution appropriately after the image on the screen is readable with the new monitor. It may also be necessary to load the appropriate device driver if you are installing a different display adapter.

LCD Monitors:

The ‘native resolution’ specification points out one of the big differences between LCD and CRT displays. If you run an LCD at any resolution other than its native resolution, the display will become blurry, especially with text. The reason this happens on LCDs is that they are made up of tiny cells in a matrix (called the native resolution). For instance, if the native resolution is listed as 1280 x 1024, then there are 1280 cells across and 1024 cells down the screen. If you only display at 1024 x 768, then a large number of the pixels are being ’stretched’ over multiple cells, which is what causes the image quality to degrade.

Various resolutions commonly used with LCD monitors are given below:
1024 x 768 is XGA (eXtended Graphics Array)
1280 x 720 is WXGA (Wide eXtended Graphics Array)
1280 x 1024 is SXGA (Super eXtended Graphics Array)
1400 x 1050 is SXGA+ (Super eXtended Graphics Array Plus)
1680 x 1050 is WSXGA+ (Wide Super eXtended Graphics Array Plus)
1600 x 1200 is UXGA (Ultra eXtended Graphics Array)
1920 x 1200 is WUXGA (Wide Ultra eXtended Graphics Array)

Wide screen format aspect ratio is typically 16:10 for computer monitors and 16:9 for LCD televisions. An aspect ratio of 16:10 conforms to the WUXGA standard. Further note that UXGA has a resolution of 1600 x 1200 and an aspect ratio of 4:3.
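
As a quick sanity check of those ratios, here is a small Python sketch that reduces a resolution to its simplest width:height form; the resolutions used are the ones mentioned in this section.

from math import gcd

def aspect_ratio(width, height):
    """Reduce a resolution to its simplest width:height ratio."""
    g = gcd(width, height)
    return f"{width // g}:{height // g}"

print(aspect_ratio(1920, 1200))  # 8:5, which is the same as 16:10 (WUXGA)
print(aspect_ratio(1600, 1200))  # 4:3 (UXGA)
print(aspect_ratio(1280, 720))   # 16:9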

Products or instrumentation equipped with a touch screen normally require a calibration routine upon power up because it is difficult to perfectly align a touch screen’s coordinates with those of the display underneath it. Calibration is necessary when the coordinates of the area touched on the screen are not sufficiently close to the coordinates on the display. Without proper calibration, software may not respond correctly when a soft button or icon is pressed.

It is recommended that you clean the LCD screen with clean water, using a soft cotton cloth. Do not spray water directly on the screen. First wet the cloth (with no dripping water), and wipe the LCD screen gently.

Monitors and static charge:

1. Monitors accumulate very high static charges and need to be handled very carefully. Before attempting any repair, it is important to discharge any accumulated charge on the monitor. You can use a jumper wire, one end of which is grounded, and touch the other end of the jumper wire to the anode of the monitor. While doing so, ensure that you are not in direct contact with the jumper wire or the anode; you can use a screwdriver or needle-nose pliers with a rubber handle for this purpose. A “POP” sound may be heard as the static charge accumulated on the anode lead is grounded through the jumper wire. The static charge accumulated on a monitor can cause severe burns, or even be fatal, if you come into direct contact with it.

2. Never wear a wrist strap when working on monitors. Monitors contain very high voltages, sometimes fatal to humans, even when the power is turned off. If you are wearing a wrist strap, your body works as a conduit to discharge the electric charge.

Want more information on how to become CompTIA A+ Certified? Learn more!

Posted by & filed under CompTIA A+, MICROSOFT MTA O/S.

Source: Wikipedia

A bus is a set of signal pathways that allow information to travel between components inside or outside of a computer.

Laptop Buses

PCMCIA or PC Card

PCMCIA, or Personal Computer Memory Card International Association, is a type of expansion bus used in laptops. There are different types of cards, which you slide into the PC Card slot of a laptop.

Type II PCMCIA Card

PC Card Types    Thickness    Usage
Type I           3.3 mm       Flash memory
Type II          5.0 mm       USB/NIC/Wireless
Type III         10.5 mm      Hard drive

Note: PCMCIA (Personal Computer Memory Card International Association) cards support a 16-bit or 32-bit bus width.

Express Card

ExpressCard is the newer card format that you insert into more recent laptops.

List of Ports

Computer ports are connection points or interfaces with other peripheral devices. There are two main types of computer ports: physical and virtual.
Physical ports are used for connecting a computer through a cable and a socket to a peripheral device. The list of physical computer ports includes serial ports (DB9 socket), USB ports (USB 2.0 or 3.0 socket/connector), parallel ports (DB25 socket/connector), Ethernet ports (RJ45 socket/connector), and so on.

These days, most desktop and notebook computers use only USB, VGA, Ethernet, IEEE 1394, DVI and TRS physical ports. Serial, parallel, PS/2 and SCSI ports are used more by industrial and professional computers.

Common physical computer ports – short description

-USB ports (Universal Serial Bus) were created in the mid-1990s mainly to standardize communications between computers and peripherals. USB ports can also be used as a power supply for devices such as digital cameras, microcontroller programmers, laptop coolers and others. There have been several generations of USB ports: USB 1.0 and 1.1, released between 1996 and 1998, with speeds from 1.5 Mbps up to 12 Mbps; USB 2.0, released in 2000, with a maximum speed of 480 Mbps; and USB 3.0, released in 2008, with a maximum speed of 5 Gbps.

-Ethernet/internet ports were first introduced in 1980 to standardize local area networks (LANs). Ethernet ports use RJ45 connectors and support speeds of 10 Mbps, 100 Mbps, 1 Gbps and beyond (40 Gbps and 100 Gbps)
-VGA ports (Video Graphics Array) have 15 pins arranged in three rows and are mainly used for connecting the monitor to the video adapter on the computer motherboard; related display interfaces include:
·HDMI (High-Definition Multimedia Interface)
·DVI (Digital Visual Interface)

-IEEE 1394 ports: this technology was developed by Apple in the late 1980s and early 1990s under the name FireWire, and it serves a role similar to USB on Apple computers

-IEEE 1284 ports: the parallel communication standard, traditionally used for printers

-TRS (Tip, Ring, and Sleeve) ports are used for receiving and transmitting analog signals, such as audio

-DVI ports are used to transmit uncompressed digital video data

-PS/2 ports were introduced in 1987 to replace the serial mouse connector and the older AT keyboard connector

-Serial ports use the DB9 socket/connector and transfer information one bit at a time between the computer and other peripherals. The serial computer port conforms to the RS-232 standard

Want more information on how to become CompTIA A+ Certified? Learn more!

Posted by & filed under CompTIA A+, MICROSOFT MTA O/S.

Source: Wikipedia

A bus is a set of signal pathways that allow information to travel between components inside or outside of a computer.

Other Types of Bus

USB

USB, or Universal Serial Bus, is an external bus and the most popular type of bus in use today.
USB is hot swappable.
USB supports up to 127 devices per host controller (connected through hubs).

USB Speeds
USB 1.0 supports 1.5 Mbps
USB 1.1 supports 12 Mbps
USB 2.0 supports up to 480 Mbps
USB 3.0 supports up to 4.8 Gbps

 

USB-A Connector

USB-B Connector

AMR

Released on September 8, 1998, AMR is short for Audio/Modem Riser. AMR allows an OEM to create one card that provides modem functionality, audio functionality, or both on a single card. This specification allows the motherboard to be manufactured at a lower cost and frees up industry-standard expansion slots in the system for other plug-in peripherals.

AMR Slot

CNR

Introduced by Intel on February 7, 2000, CNR is short for Communication and Network Riser, a specification that supports the audio, modem, USB and local area networking interfaces of core logic chipsets.

CNR Slot

PCI-X

PCI-X is a high-performance bus that is designed to meet the increased I/O demands of technologies such as Fibre Channel, Gigabit Ethernet, and Ultra3 SCSI.

PCI-X card

PCI-X Slots

 

Type of Bus    Bits Wide    Clock Speed    Transfer Speed
PCI-X (v1)     64-bit       66 MHz         528 MB/s (66 MHz x 8 bytes)
PCI-X (v1)     64-bit       100 MHz        800 MB/s (100 MHz x 8 bytes)
PCI-X (v1)     64-bit       133 MHz        1066 MB/s (133 MHz x 8 bytes)
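
The transfer speeds above are just the bus width (in bytes) multiplied by the clock speed. A minimal Python sketch of that arithmetic, using the figures from the chart:

def parallel_bus_bandwidth(bits_wide, clock_mhz):
    """Peak transfer rate in MB/s: (bus width in bytes) x (clock in MHz)."""
    return (bits_wide // 8) * clock_mhz

# PCI-X v1 at its three clock speeds (64 bits = 8 bytes per transfer)
for clock in (66, 100, 133):
    print(f"PCI-X 64-bit @ {clock} MHz = {parallel_bus_bandwidth(64, clock)} MB/s")
# Prints 528, 800 and 1064 MB/s; the last clock is really 133.33 MHz,
# which is where the chart's rounded 1066 MB/s figure comes from.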

 

PCI Express

PCI Express (PCIe) is a high-speed serial I/O interconnect standard used for high-speed connections; it will eventually replace the older PCI standards.

PCI-e Card

Lane Width    Peak Unidirectional Bandwidth    Peak Full-Duplex Bandwidth
x1            250 MB/s                         500 MB/s
x2            500 MB/s                         1 GB/s
x4            1 GB/s                           2 GB/s
x8            2 GB/s                           4 GB/s
x16           4 GB/s                           8 GB/s
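
The PCI Express numbers scale linearly with lane count: the table's x1 figure of 250 MB/s per direction is simply multiplied by the number of lanes. A short Python sketch of that relationship:

PCIE_GEN1_PER_LANE_MBPS = 250  # peak one-way bandwidth per first-generation PCIe lane, in MB/s

def pcie_bandwidth(lanes):
    """Return (unidirectional, full-duplex) peak bandwidth in MB/s for a given lane count."""
    one_way = lanes * PCIE_GEN1_PER_LANE_MBPS
    return one_way, one_way * 2

for lanes in (1, 2, 4, 8, 16):
    uni, duplex = pcie_bandwidth(lanes)
    print(f"x{lanes}: {uni} MB/s one way, {duplex} MB/s full duplex")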

Want more information on how to become CompTIA A+ Certified? Learn more!

Posted by & filed under CompTIA A+, MICROSOFT MTA O/S.


Source: Wikipedia

A bus is a set of signal pathways that allow information to travel between components inside or outside of a computer.

 

 

Types of Bus

 

External bus or Expansion bus allows the CPU to talk to the other devices in the computer and vice versa. It is called that because it’s external to the CPU.


Address bus allows the CPU to specify which device or memory location it wants to talk to. The CPU selects the particular memory address that the device is using and places that address on the address bus to read from or write to that location.


Data bus carries the actual data between the CPU and the device, in both directions.

 

 

Types of Expansion Buses

 

ISA

Introduced by IBM, ISA or Industry Standard Architecture was originally an 8-bit bus and was later expanded to a 16-bit bus in 1984. When this bus was originally released it was a proprietary bus, which allowed only IBM to create peripherals and the actual interface. Later, however, in the early 1980s, the bus began to be produced by other clone manufacturers.

 

16bit ISA Card

16bit ISA Slot

 

 

PCI

 

Introduced by Intel in 1992, PCI is short for Peripheral Component Interconnect and is a 32-bit or 64-bit expansion bus.


The PCI bus is the most popular expansion bus in use in today’s computers.


PCI Card

 

PCI Slot

 

AGP

 

Introduced by Intel in 1997, AGP or Accelerated Graphics Port is a 32-bit or 64-bit bus designed for the high demands of 3-D graphics. AGP has a direct line to the computer’s memory, which allows 3-D elements to be stored in the system memory instead of the video memory.
AGP is one of the fastest expansion buses in use, but it’s only for video or graphics environments.


AGP Card

 

AGP Slot

 

 

Expansion Bus Chart:

Type of Bus          Bits Wide    Clock Speed    Transfer Speed
ISA                  8-bit        4.77 MHz       2.38 MB/s
ISA                  16-bit       8.33 MHz       8 MB/s
PCI (Client)         64-bit       66 MHz         266 MB/s
AGP 1x               32-bit       66 MHz         266 MB/s
AGP 2x               32-bit       66 MHz         533 MB/s
AGP 4x               32-bit       66 MHz         1,066 MB/s
AGP 8x               32-bit       66 MHz         2,133 MB/s
AGP 8x (high-end)    64-bit       66 MHz         4,266 MB/s

Want more information on how to become CompTIA A+ Certified? Learn more!

Posted by & filed under CompTIA A+, CompTIA Network+, MICROSOFT MTA NETWORKING, MICROSOFT MTA O/S.

 

Source: ComputerCableStore

 

When talking about cable pinouts, we often get questions about the difference between straight-through, crossover, and rollover wiring of cables and the intended use for each type of cable. These terms refer to the way the cables are wired (which pin on one end is connected to which pin on the other end). Below we will try to shed some light on this commonly confused subject.

 

Straight-Through Wired Cables

Straight-through refers to cables that have the same pin assignments on each end of the cable. In other words, Pin 1 on connector A goes to Pin 1 on connector B, Pin 2 to Pin 2, etc. Straight-through wired cables are most commonly used to connect a host to a client. When we talk about Cat5e patch cables, the straight-through wired Cat5e patch cable is used to connect computers, printers and other network client devices to the router, switch or hub (the host device in this instance).

 

 Connector A  Connector B
 Pin 1  Pin 1
 Pin 2  Pin 2
 Pin 3  Pin 3
 Pin 4  Pin 4
 Pin 5  Pin 5
 Pin 6  Pin 6
 Pin 7  Pin 7
 Pin 8  Pin 8

 

Crossover Wired Cables

Crossover wired cables (commonly called crossover cables) are very much like straight-through cables, with the exception that the TX and RX lines are crossed (they are at opposite positions on either end of the cable). Using the 568-B standard as an example, below you will see that Pin 1 on connector A goes to Pin 3 on connector B, Pin 2 on connector A goes to Pin 6 on connector B, etc. Crossover cables are most commonly used to connect two hosts directly. Examples would be connecting a computer directly to another computer, connecting a switch directly to another switch, or connecting a router to a router. Note: in the past, a crossover cable was required when connecting two host devices directly. Nowadays most devices have auto-sensing (auto MDI-X) technology that detects the cable and device and crosses pairs when needed.

 Connector A  Connector B
 Pin 1  Pin 3
 Pin 2  Pin 6
 Pin 3  Pin 1
 Pin 4  Pin 4
 Pin 5  Pin 5
 Pin 6  Pin 2
 Pin 7  Pin 7
 Pin 8  Pin 8

 

Rollover Wired Cables

Rollover wired cables, most commonly called rollover cables, have opposite pin assignments on each end of the cable; in other words, the pinout is “rolled over.” Pin 1 of connector A would be connected to Pin 8 of connector B, Pin 2 of connector A would be connected to Pin 7 of connector B, and so on. Rollover cables, sometimes referred to as Yost cables, are most commonly used to connect to a device’s console port to make programming changes to the device. Unlike crossover and straight-through cables, rollover cables are not intended to carry data but instead create an interface with the device.

 

 Connector A  Connector B
 Pin 1  Pin 8
 Pin 2  Pin 7
 Pin 3  Pin 6
 Pin 4  Pin 5
 Pin 5  Pin 4
 Pin 6  Pin 3
 Pin 7  Pin 2
 Pin 8  Pin 1
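
One way to keep the three wiring schemes straight is to write each one down as a pin-to-pin map. The Python sketch below only restates the relationships described above; it is an illustration, not a cable-testing tool.

# Pin on connector A -> pin on connector B for each cable type
STRAIGHT_THROUGH = {pin: pin for pin in range(1, 9)}
CROSSOVER = {1: 3, 2: 6, 3: 1, 4: 4, 5: 5, 6: 2, 7: 7, 8: 8}  # TX/RX pairs swapped
ROLLOVER = {pin: 9 - pin for pin in range(1, 9)}               # 1->8, 2->7, ... 8->1

def describe(name, mapping):
    pairs = ", ".join(f"{a}->{b}" for a, b in sorted(mapping.items()))
    print(f"{name}: {pairs}")

describe("Straight-through", STRAIGHT_THROUGH)
describe("Crossover", CROSSOVER)
describe("Rollover", ROLLOVER)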

 

 

Want more information on how to become CompTIA A+ Certified? Learn more!

 

 

Posted by & filed under CompTIA A+, MICROSOFT MTA O/S.

Source: Quizlet

 

What are three common problem areas with laptops?

  • Displays
  • Storage devices and RAM
  • Power and input devices

 

 

 

What are the three possible areas that a technician must check for printer errors?

  • The device
  • Cable connection
  • The computer to which it is attached

 

 

What are some common causes of printer problems?

  • Loose cable connections
  • Paper jams
  • Equipment power
  • Low ink warning
  • Out of paper
  • Errors on equipment display
  • Errors on computer screen
  • Empty toner cartridge
  • The print server is not working
  • The printer cannot establish a connection to the wireless network

 

 

What are two probable causes of printer paper jams?

  • The wrong paper type is being used
  • Humidity causes the paper to stick together

 

 

 

What is a common cause of creases on paper when printed?

The pickup rollers are obstructed, damaged or dirty

 

 

What is the purpose of the first lab?

To reinforce your troubleshooting skills

 

 

What is the purpose of the second lab?

To reinforce your communication and troubleshooting skills with printers

 

 

What is the purpose of the third, fourth and fifth labs?

To reinforce your skills with printer problems

 

 

Define troubleshooting in the context of security

To be able to analyze a security threat and determine the appropriate method to protect assets and repair damage

 

 

What are some common causes of security problems?

  • The user account is disabled
  • The user is using an incorrect username or password
  • The user does not have the correct folder or file permissions
  • The firewall configurations are incorrect
  • The user’s computer has been infected by a virus
  • The wireless security configurations are incorrect on the client
  • The security configurations are incorrect on the wireless access point

 

 

 

What are some symptoms of malware infections?

  • The message “MBR has been changed or modified” appears at bootup
  • A Windows 7 or Windows Vista computer starts with the error message “Error loading operating system”
  • A Windows 7 or Windows Vista computer starts with the error message “Caution: this hard disk may be infected by virus”
  • A Windows XP computer will not boot
  • A Windows 7 computer will not boot
  • Your contacts are receiving spam from your email account

 

 

What is the purpose of the first lab?

To test your troubleshooting skills with security problems

 

 

What is the purpose of the second lab?

To reinforce your communication and troubleshooting skills

 

 

What is the purpose of the third, fourth and fifth labs?

To test your troubleshooting skills with security problems

 

 

What advanced troubleshooting topics were covered in this chapter?

  • Computer components
  • Peripherals
  • Operating systems
  • Networks
  • Laptops
  • Printers
  • Security

 

 

Click here to see more!

 

Want more information on how to become CompTIA A+ Certified? Learn more!

 

 

Posted by & filed under CompTIA A+, MICROSOFT MTA O/S.

Source: Quizlet

 

Most often, what is the reason for a need for advanced troubleshooting?

It means that the probable cause is difficult to diagnose

 

 

What are the six steps of troubleshooting?

  • Identify the problem
  • Establish a theory of probable cause
  • Test the theory to determine the cause
  • Establish a plan of action to resolve the problem and implement the solution
  • Verify full system functionality and if applicable, implement preventative measures
  • Document findings, action and outcomes

 

 

What are the nine common advanced problems that occur with computers and peripherals?

  • An OS not found error message is displayed when the computer is started
  • RAID cannot be found
  • RAID stops working
  • The computer does not recognize a SCSI drive
  • The computer does not recognize a removable external drive
  • After updating the CMOS firmware, the computer will not start
  • The computer reboots without warning, locks up, or displays an error message or the BSOD
  • The computer exhibits slow performance
  • After upgrading from a single core to a dual core CPU the computer runs slower and only shows one CPU graph in the task manager

 

 

What are the three steps designed to reinforce communication and troubleshooting skills?

  • Receive the work order
  • Talk the customer through various steps to try and resolve the problem
  • Document the problem and the resolution

 

 

When working to identify the problem with operating systems, what are three good open ended questions to ask the user?

  • What OS is installed on the computer?
  • What programs have been installed recently?
  • What updates or service packs have been installed?

 

 

What are three good closed-ended questions to ask when working to identify operating system problems?

  • Does anyone else have this problem?
  • Has this problem happened before?
  • Have you made any changes to your computer?

 

 

What are eight common causes of operating system problems?

  • Corrupted or missing system files
  • Incorrect device driver
  • Failed update or service pack installation
  • Corrupted registry
  • Failed or faulty hard drive
  • Incorrect password
  • Virus infection
  • Spyware

 

 

What are a few common steps in determining the cause of operating system problems?

  • Reboot the computer
  • Examine event logs
  • Run SFC/ScanNow
  • Roll back or reinstall the device driver
  • Uninstall recent updates or service packs
  • Run system restore
  • Run CHKDSK
  • Log in as a different user
  • Boot to the last known good configuration
  • Run a virus scan
  • Run a spyware scan

 

 

What are some steps to take if further research is needed to solve the problem?

  • Helpdesk repair logs
  • Other technicians
  • Manufacturer FAQs
  • Technical websites
  • Newsgroups
  • Computer manuals
  • Device manuals
  • Online forums
  • Internet search

 

 

What are some steps to take in order to verify the solution and full system functionality?

  • Reboot the computer
  • Access all drives and shared resources
  • Check event logs to ensure there are no new warnings or errors
  • Check device manager to ensure there are no warnings or errors
  • Make sure applications run properly
  • Make sure the internet can be accessed
  • Check task manager to ensure that there are no unidentified programs running

 

 

What actions constitute documenting findings after a solution is found and systems tested?

  • Discuss the solution implemented with the customer
  • Have the customer verify the problem has been solved
  • Provide the customer with all the paper work
  • Document any components used in the repair
  • Document the time spent to resolve the problem

 

 

Most often, what is the reason for a need for advanced troubleshooting?

Operating system problems, hardware, software, networks, or some combination of these

 

 

What is a stop error?

A hardware or software malfunction that causes the system to lock up

 

 

What is a BSOD and what is usually the cause?

Blue screen of death; it is usually caused by device driver errors

 

 

What are some ways to prevent stop errors and BSOD?

  • Verify that the hardware and software drivers are compatible
  • Install the latest patches and updates for Windows
  • Use the Event Log and other diagnostic utilities

 

 

What is the purpose of the first lab?

Designed to reinforce your skills with the operating system

 

 

What is the purpose of the second lab?

Designed to reinforce your communication and troubleshooting skills

 

 

What is the purpose of the third, fourth and fifth labs?

Designed to reinforce your skills with operating system problems

 

 

 

What are a few step 3 quick procedures that can determine the exact cause of, or even correct, a network problem?

  • Restart the network equipment
  • Renew the IP address
  • Reconnect all of the network cables
  • Verify the wireless router configuration
  • Ping the local host
  • Ping the default gateway
  • Ping an external website
  • Verify the network equipment settings

 

 

 

 

What are three common causes of network connectivity problems?

  • Incorrect IP information
  • Incorrect wireless configuration
  • Disabled network connection

 

 

What are three common causes of not being able to send or receive email?

  • Incorrect email software settings
  • Firewall settings
  • Hardware connectivity issues

 

 

What are two common causes of transfer problems between FTP clients and servers?

  • Incorrect IP address and port settings
  • Security policies

 

 

What are two common causes of secure internet connection problems?

  • Incorrect certificate settings
  • Ports blocked by software and hardware

 

 

What are three common causes of unexpected information reported from CLI commands?

  • Incorrect IP address settings
  • Hardware connection issues
  • Firewall settings

 

 

 

What is the purpose of the second lab?

To reinforce your communication and troubleshooting skills

 

 

What is the purpose of the third, fourth and fifth labs?

To test your troubleshooting skills with networking problems

 

 

 

What are a few common causes of problems with laptops?

  • Laptop battery does not have a charge
  • Laptop battery will not charge
  • Loose cable connections
  • The inverter does not work
  • External keyboard does not work
  • Num lock is on
  • Loose RAM
  • A Fn key has disabled a capability
  • A button or slide switch has disabled the wireless connection

 

 

What does the technician need to be sure to do when replacing components on laptops?

That they have the correct replacements and tools recommended by the manufacturer

 

Click here to see more!

 

Want more information on how to become CompTIA A+ Certified? Learn more!

 

 

Posted by & filed under CompTIA A+, MICROSOFT MTA O/S.

Source: Learning About Computers

 

What are binary numbers? The binary number system uses only two digits – 0 and 1. It is also called base 2. The computer number system is base 2. Our number system is referred to as decimal or base 10 because we use 10 digits (0 – 9) to form all of our numbers. There are many other number bases, including hexadecimal, but it’s easier for computers to work with 0s and 1s.

In electronics, a 0 is off (usually 0 Volts) and 1 is on (usually 5 Volts). All computer data is composed of 1s and 0s. Each individual 1 or 0 is a bit. Four bits is a nibble. Eight bits is a byte. From there we have kilobytes, megabytes, etc. Since everything is a series of 1s and 0s, the CPU has to perform every calculation in binary. But before any operations are done, numbers have to first be converted into base 2.

But before diving into the binary number system and conversions, let’s first see how things work in our decimal system.

Let’s just pick a number…like 9345. How do we get this? Remember when I mentioned we use base 10? In math the base is a number that’s raised to a power (another name for power is exponent). For example, 3⁴ is 3 raised to the 4th power, which means you multiply 3 by itself 4 times (3 * 3 * 3 * 3).

We have what’s called a place value system. Each individual number holds a particular numerical position. We get these positions by using 10 raised to different powers. Start with the number on the right.

So looking at 9345, the right-most number 5 is in the ones place (10⁰ = 1). The 4 is in the tens place (10¹ = 10). The 3 is in the hundreds place (10² = 100), and the 9 is in the thousands place (10³ = 1000). This is true for any number. Now the larger the number the more place values (ten thousands, hundred thousands, etc.), but I’m keeping it short in this example. So we have:

 

 

If you take each number, multiply it by its place value, & add the results, you get 9345.
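
If you want to check that place-value arithmetic in code, a one-line Python version of the 9345 example looks like this:

# 9 thousands + 3 hundreds + 4 tens + 5 ones
print(9 * 10**3 + 3 * 10**2 + 4 * 10**1 + 5 * 10**0)  # 9345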

 

 

Note: any number raised to 0 = 1. Any number raised to 1 = itself.

This method is used in base 2 except rather than the ones place, tens place, hundreds place, thousands place, etc. you have: ones place (2⁰), twos place (2¹), fours place (2²), eights place (2³), etc.

Using the base 10 example just above, the number 1011₂ is laid out like this:

 

 

It’s the same process for any number system. And remember, the computer number system always uses binary. So now that you have a basic understanding of place values, it’s time to start converting!

 

Converting From Binary To Decimal:
Converting binary to decimal is really quite simple. All you do is apply the same technique used in the place value illustration above, except this time we will be using a 2 instead of a 10.

For example, if we want to know what 110100011₂ is in our number system (base 10), we do the following:

 

We usually start on the right. For each digit, you raise 2 to the power of its position and then multiply the result by the binary digit. When you’re done, add all the results together and that is the number in base 10.

 

This method is used for converting any number base to decimal.
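
Here is a minimal Python sketch of that place-value method, applied to the 110100011₂ example; Python’s built-in int(value, 2) is shown as a cross-check.

def binary_to_decimal(binary_string):
    """Apply the place-value method: each digit times 2 raised to its position."""
    total = 0
    for position, digit in enumerate(reversed(binary_string)):
        total += int(digit) * 2 ** position
    return total

print(binary_to_decimal("110100011"))  # 419
print(int("110100011", 2))             # 419, using the built-in conversion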

 

Decimal to Binary Conversion:
Decimal to binary conversion is not hard either, it just takes a little more work. There are two methods you can use: successive division and subtracting values using a table.

Successive division requires dividing continuously by the base you’re converting to until the quotient equals 0. The remainders compose the answer.

As an example, let’s convert 835 to binary.

 

The most significant bit is the left number in the answer and the least significant bit is on the right end, giving us an answer of: 1101000011₂

Binary digits are usually grouped by 4, 8, 16, etc. so we can place a couple of 0s on the left to give us three groups of four. This does not change the answer.

0011 0100 0011₂

You can check your answer by converting back to base 10.
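
A minimal Python version of the successive-division method, using the 835 example (the built-in bin() is shown as a cross-check):

def decimal_to_binary(number):
    """Repeatedly divide by 2; the remainders, read in reverse, are the binary digits."""
    if number == 0:
        return "0"
    remainders = []
    while number > 0:
        remainders.append(str(number % 2))
        number //= 2
    return "".join(reversed(remainders))

print(decimal_to_binary(835))  # 1101000011
print(bin(835))                # 0b1101000011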

We just looked at the successive division method of converting from decimal to binary. The other method is subtracting values. With this method you keep subtracting until you reach 0. Let’s convert 165 to binary.

 


Notice a 1 is only placed under the highest value that can be subtracted from a number. Everything else is automatically a 0, giving us an answer of: 10100101₂.
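
The subtraction method can be sketched the same way: test each place value from the largest down, write a 1 where it fits and subtract, otherwise write a 0. A small Python version using the 165 example:

def decimal_to_binary_by_subtraction(number, bits=8):
    """Subtract place values (128, 64, 32, ...) from the number; 1 where a value fits, else 0."""
    digits = []
    for place in range(bits - 1, -1, -1):
        value = 2 ** place
        if value <= number:
            digits.append("1")
            number -= value
        else:
            digits.append("0")
    return "".join(digits)

print(decimal_to_binary_by_subtraction(165))  # 10100101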

Hexadecimal:
The hexadecimal (hex for short) number system uses 16 digits to form all other numbers. The purpose of using hex is for human readability. Computers always work in binary (0s and 1s). A long series of binary digits gets complicated, so programmers had to come up with a more simplified way to represent them. Hex groups binary numbers into 4-bit packages, so to speak. One hex digit represents four bits (called a nibble). Hexadecimal numbers have a subscript 16 or an H behind them (D3₁₆ or D3H). Because single characters must be used, the letters A, B, C, D, E, F represent 10-15. Remember, when dealing with number systems, we always start with 0. So we have 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F. Memory locations are listed as hex values, and many times when you get an error message, your OS (operating system) will show you the location.

Example of hex and the number of bits:

F6AH – 12 bits
BH – 4 bits
78H – 8 bits

 

Converting Hexadecimal to Decimal:
As was mentioned in the binary conversion section above, we use the same technique to convert to decimal (base 10) from any other base. In this case, let’s convert 4B7F₁₆ to base 10 (decimal).
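
The same place-value loop works for hex if you use powers of 16. A short Python sketch for the 4B7F₁₆ example, with the built-in int(value, 16) as a cross-check:

HEX_DIGITS = "0123456789ABCDEF"

def hex_to_decimal(hex_string):
    """Each hex digit times 16 raised to its position, summed from right to left."""
    total = 0
    for position, digit in enumerate(reversed(hex_string.upper())):
        total += HEX_DIGITS.index(digit) * 16 ** position
    return total

print(hex_to_decimal("4B7F"))  # 19327
print(int("4B7F", 16))         # 19327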

 

Converting Decimal to Hexadecimal:
To convert from decimal to hexadecimal, we use the successive division method discussed earlier only we divide by 16 instead of 2. Let’s convert 501 from decimal to hex.

 

 

We’re done since we can’t divide 1 by 16, and that leaves us with a remainder of 1. When writing the answer, the LSD (least significant digit) is always on the right and the MSD (most significant digit) on the left. The answer is: 1F5₁₆
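
Successive division by 16 looks like this in Python, using the 501 example (the built-in hex() is shown as a cross-check):

HEX_DIGITS = "0123456789ABCDEF"

def decimal_to_hex(number):
    """Repeatedly divide by 16; the remainders, read in reverse, are the hex digits."""
    if number == 0:
        return "0"
    digits = []
    while number > 0:
        digits.append(HEX_DIGITS[number % 16])
        number //= 16
    return "".join(reversed(digits))

print(decimal_to_hex(501))  # 1F5
print(hex(501))             # 0x1f5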

Converting Hexadecimal to Binary:
Remember hex uses groups of four bits, so we can use the table below for conversions.

Decimal Binary Hex
0 0000 0
1 0001 1
2 0010 2
3 0011 3
4 0100 4
5 0101 5
6 0110 6
7 0111 7
8 1000 8
9 1001 9
10 1010 A
11 1011 B
12 1100 C
13 1101 D
14 1110 E
15 1111 F

To convert D14B to binary:

D = 1101, 1 = 0001, 4 = 0100, B = 1011

When we put the pieces together, we get: D14B₁₆ = 1101000101001011₂
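
Because each hex digit maps to exactly four bits, the conversion is a straight lookup. A short Python sketch using the D14B example:

def hex_to_binary(hex_string):
    """Replace each hex digit with its 4-bit binary equivalent."""
    return "".join(format(int(digit, 16), "04b") for digit in hex_string)

print(hex_to_binary("D14B"))  # 1101000101001011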

 

Converting Binary to Hexadecimal:
To convert 1111010101001110₂ to base 16, we first break up the number into groups of four as shown below:

1111 0101 0100 1110

Now assign each group its corresponding hex value

1111 = F, 0101 = 5, 0100 = 4, 1110 = E

When put together, we get F54E₁₆
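
Going the other way in code, pad the binary value on the left to a multiple of four bits, then convert each 4-bit group to one hex digit. A minimal Python sketch using the example above:

def binary_to_hex(binary_string):
    """Pad to a multiple of 4 bits, then convert each 4-bit group to one hex digit."""
    padded = binary_string.zfill((len(binary_string) + 3) // 4 * 4)
    groups = [padded[i:i + 4] for i in range(0, len(padded), 4)]
    return "".join(format(int(group, 2), "X") for group in groups)

print(binary_to_hex("1111010101001110"))  # F54E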

 

Want more information on how to become CompTIA A+ Certified? Learn more!

 

 

Posted by & filed under CompTIA A+, MICROSOFT MTA O/S.

Source: Learning About Computers

Installing a power supply is really quite easy. It fits in the top back of the case. First, orient the holes on the power supply to those on the case.

Then, place the unit in the space provided and slide it in until the screw holes are aligned.

Secure it by screwing it in.

Finally, plug the power connector into the motherboard. It only connects one way.

Want more information on how to become CompTIA A+ Certified? Learn more!

Posted by & filed under CompTIA A+, MICROSOFT MTA O/S.

Source: Adapted from www.computerhope.com

Almost all laptops can connect to an external display and have the laptop display sent to that external display instead of, or in addition to, the laptop screen. Examples of an external display include a CRT monitor, flat panel display, projector, or TV.

Posted by & filed under CompTIA A+, MICROSOFT MTA O/S.

Source: Adapted from www.computerhope.com

Unlike a desktop computer, laptops and other portable computers often do not offer many upgrade options. Below is a listing of common laptop upgrade questions and the answers to each of the questions regarding laptop and portable computer upgrades.