
Remote Access vs Site-To-Site VPN

Remote Access VPN

What is a Remote Access VPN?

A remote access VPN gives employees a secure connection to a remote computer network, allowing them to access resources on that network as if they were plugged directly into it.

How a Remote Access VPN Works

A remote access VPN works by creating a virtual tunnel between an offsite employee's device and the company's network. This tunnel goes through the Internet or another public network, such as a public Wi-Fi network. The tunnel and the data traveling through it are protected by encryption and security protocols, which keep the data private and secure.

 

Companies can build and maintain their own remote-access VPNs. The two main components in this type of VPN are a network access server and VPN client software. The client software needs to be installed on the devices that employees will use to connect to the server when they are out of the office. This might include company-issued mobile devices and employees’ personal devices (e.g., smartphones, home computers). As VPN usage grows, companies can add equipment that is designed to optimize VPN services, such as VPN concentrators.
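
On the client side, the setup can be as simple as installing the VPN software and loading a connection profile supplied by the company. The following is a minimal sketch, assuming a Debian/Ubuntu-style Linux laptop, the open-source OpenVPN client cited in this article's sources, and a hypothetical profile file named company-vpn.ovpn:

    # Install the OpenVPN client and connect using the company-supplied profile.
    # The profile file name below is a placeholder.
    sudo apt install openvpn
    sudo openvpn --config company-vpn.ovpn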

 

Businesses that do not want to bother with setting up their own remote-access VPNs can use a VPN service provider. In this case, they do not need a network access server, but they will still need to make sure that the VPN client software is installed on the devices employees will be using to connect to the VPN.

Advantages

Using a remote access VPN provides several advantages for businesses. The most important benefit, though, is data security. When an offsite employee sends data through a VPN, it is encrypted, so even if a hacker intercepts that data, they won't be able to use it. This is particularly important when an employee accesses their company's network over public Wi-Fi while traveling, because traffic sent over these networks is usually not encrypted.

 

Another advantage of remote access VPNs is that they give companies an affordable way to secure data sent by offsite employees. The initial investment needed to set up a remote access VPN is minimal, and the VPN can easily be scaled as the company grows; this is especially true if a VPN service provider is used.

 

Since remote access VPNs are affordable and secure, organizations can feel more comfortable letting their employees work from home or while traveling. Employees who can work where and when they want also tend to be happier and more productive.

Disadvantages

Like any tool, remote access VPNs have some disadvantages. They can slow down a device's web browsing speed. More importantly, setting up an in-house VPN solution can be challenging for a business that is unfamiliar with the technology.

Using a VPN service provider will simplify the setup process. However, businesses need to be wary when choosing one. There are fake VPN service providers (i.e., fake services offered by hackers) as well as providers with questionable practices (i.e., legitimate companies that sell customers' data). So, when selecting a VPN service provider, it is important to do research and carefully evaluate candidates, especially those that offer free VPN services.

Site-To-Site VPN

What is a Site-to-Site VPN?

A site-to-site setup is one where two (or more) different networks are connected together using one tunnel. In this connection model, devices in one network can reach devices in the other network, and vice versa. As far as the VPN server (for example, OpenVPN Access Server) is concerned, the implementation is relatively simple. The more difficult part is dealing with firewalls, security filtering options, and routing tables in routers and Internet gateways; there are so many brands and models that we cannot possibly document all of them. However, routers all have one thing in common: the ability to route traffic using static routes. With that capability comes the means to send traffic from one network to another through a gateway system that provides connectivity to the other network.
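
To make that last point concrete, a single static route on a site's gateway is enough to steer traffic for the other network into the tunnel. This is a minimal sketch assuming Linux gateways and illustrative addresses (10.20.0.0/16 is the remote site's LAN, 10.10.0.2 is the local VPN gateway):

    # Send traffic destined for the remote site's LAN to the local VPN gateway.
    sudo ip route add 10.20.0.0/16 via 10.10.0.2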

A site-to-site VPN therefore extends the company’s network, making computer resources at one location available to employees at other locations. This capability makes a site-to-site VPN an attractive option for a growing corporation with branch offices around the world.

The two main techniques for establishing a site-to-site VPN are:

1) Intranet-based — If a company has one or more remote locations that they wish to join in a single private network, they can create an intranet VPN to connect each separate LAN to a single WAN.

2) Extranet-based — When a company has a close relationship with another company (such as a partner, supplier or customer), it can build an extranet VPN that connects those companies’ LANs. This extranet VPN allows the companies to work together in a secure, shared network environment while preventing access to their separate intranets.

Site-to-site VPNs can also be distinguished by transport: the difference between internet-based and MPLS VPNs lies in the connections they use, and in whether the company's own network or the VPN provider's network performs the virtual tunneling.

Is a Site-to-Site VPN right for my Business?

No matter which configuration method is used, creating and maintaining a site-to-site VPN requires a significant investment of financial and human resources. Before considering such an investment, you need to first figure out if a site-to-site VPN is right for your business.

Most companies that use site-to-site VPNs have the service set up by a business security solutions company such as Cisco, Palo Alto Networks, or Check Point. Typically, the site-to-site VPN is just one part of a larger package of security services.

Some of the key factors to consider when deciding if a site-to-site VPN is right for your company are:

size of the business

number of locations

geographical spread (how far apart locations are from each other)

resource-sharing requirements

If your business is spread across multiple locations and employees at every location need to access resources at the main office, you should consider deploying a site-to-site VPN.


Sources:  OpenVPN, Script

Are you looking to break into the exciting field of cybersecurity? Join our 5-day CompTIA Security+ Bootcamp Training and build your cybersecurity knowledge and skills. 

Or

Become a certified ethical hacker! Unlike other strictly theoretical training, our 5-day CEH Bootcamp immerses you in interactive sessions with hands-on labs after each topic. You can explore your newly gained knowledge right away in the classroom by pentesting, hacking, and securing your own systems. Learn more


Protocols and Encryption

CompTIA A+ Core 2 Chapter 2.2 | Security+ Chapter 6.3


WEP

WEP (Wired Equivalent Privacy) is a standard network security protocol that adds security to Wi-Fi and other 802.11 wireless networks. WEP was designed to give wireless networks the same level of privacy protection as a comparable wired network, but technical flaws greatly limit its usefulness. Newer, stronger protection standards have replaced WEP as the default for most modern networks.

How WEP Works

WEP uses a data encryption scheme that is based on a combination of user- and system-generated key values. The original implementations of WEP supported encryption keys of 40 bits plus 24 additional bits of system-generated data, leading to keys of 64 bits in total length. To increase protection, these encryption methods were later extended to support longer keys, including 104-bit (128 bits of total data), 128-bit (152 bits total), and 232-bit (256 bits total) variations. (The user-supplied portion is typically entered as hexadecimal digits: a 40-bit key is 10 hex digits, and a 104-bit key is 26 hex digits.)

When deployed over a Wi-Fi connection, WEP encrypts the data stream using these keys so that it is no longer human-readable but can be processed by receiving devices. The keys are not sent over the network but are stored on the wireless network adapter or in the Windows registry.

Why WEP Is Not Recommended for General Use

WEP was introduced in 1999. Within a few years, several security researchers discovered flaws in its design. The “24 additional bits of system-generated data” is technically known as the initialization vector and proved to be a critical protocol flaw. With simple and readily available tools, a hacker can determine the WEP key and use it to break into an active Wi-Fi network.

Vendor-specific enhancements to WEP such as WEP+ and dynamic WEP attempted to patch some of the shortcomings of WEP, but these technologies have proved unworkable.

Replacements for WEP

WPA replaced WEP in 2003, and WPA2 in turn replaced WPA in 2004. Although running a network with WEP enabled is better than running with no wireless encryption protection at all, the difference is negligible from a security perspective.

WPA

WPA (Wi-Fi Protected Access) is a Wi-Fi security technology that was created in 2003 in response to the weaknesses of the Wired Equivalent Privacy (WEP) standard. It improves upon WEP's authentication and encryption features.

As the Wi-Fi Alliance made this transition to a more advanced protocol, it had to keep some of the same elements of WEP so older devices would still be compatible. Unfortunately, this means some vulnerabilities are still present in WPA; the Wi-Fi Protected Setup (WPS) feature, for example, can be hacked relatively easily.

What Is WPA-PSK?

WPA Pre-Shared Key is a variation of WPA designed for home networks. It’s a simplified but still powerful form of WPA.

As with WEP, a static key or passphrase is set, but WPA-PSK uses TKIP (Temporal Key Integrity Protocol), which automatically changes the keys at preset intervals to make it difficult for hackers to find and exploit them.

WPA2

WPA2 – Wi-Fi Protected Access 2 is a network security technology commonly used on Wi-Fi wireless networks. It’s an upgrade from the original WPA technology, which was designed as a replacement for the older and much less secure WEP. WPA2 is used on all certified Wi-Fi hardware since 2006 and is based on the IEEE 802.11i technology standard for data encryption.

WPA2 Limitations

The only notable vulnerability of WPA2 is that once someone has access to the network, they can attack other devices connected to the network. This is an issue if a company has an internal threat, such as an unhappy employee, who hacks into the other devices on the company’s network.

Using WPA2 decreases the performance of network connections due to the extra processing load of encryption and decryption. The performance impact of WPA2 is usually negligible, especially when compared with the increased security risk of using WPA or WEP, or no encryption at all.

AES vs. TKIP for Wireless Encryption 

The main difference with WPA2 is that it uses the Advanced Encryption Standard (AES) instead of TKIP. AES is able to secure top-secret government information, so it’s a good option for keeping a personal device or company WiFi safe.

Many home routers let administrators choose from among these possible combinations:

WPA with TKIP (WPA-TKIP): This is the default choice for old routers that don’t support WPA2.

WPA with AES (WPA-AES): AES was first introduced before the WPA2 standard was completed, although few clients supported this mode.

WPA2 with AES (WPA2-AES): This is the default choice for newer routers and the recommended option for networks where all clients support AES.

WPA2 with AES and TKIP (WPA2-AES/TKIP): Routers need to enable both modes if any clients do not support AES. All WPA2 capable clients support AES, but most WPA clients do not.
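
To make these combinations concrete: on a Linux access point running hostapd, WPA2 with AES (CCMP) corresponds to settings like the following sketch. The interface name, SSID, and passphrase are placeholders:

    # Illustrative WPA2-PSK (AES/CCMP) settings appended to the hostapd config.
    cat >> /etc/hostapd/hostapd.conf <<'EOF'
    interface=wlan0
    ssid=ExampleNet
    wpa=2
    wpa_key_mgmt=WPA-PSK
    wpa_passphrase=ChangeThisPassphrase
    rsn_pairwise=CCMP
    EOF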

WPA3

As vulnerabilities are recognized, advancements are made. In 2018, the Wi-Fi Alliance introduced WPA3. This new version has "new features to simplify WiFi security, enable more robust authentication and deliver increased cryptographic strength for highly sensitive data markets."

WPA3 support availability

 

This isn't to say that you can buy new hardware and get up and running on WPA3 immediately. The first generation of client-device support for WPA3 is just rolling out. Android 10 has support, but it is still in beta (as of September 2019) with no official release date. Apple has released WPA3 support in iOS 13. The latest builds of Windows 10 support WPA3-SAE, but there's an important short-term qualifier on Windows support: Windows may support it, but the device driver for the network hardware must support it as well. This will take time, and some older devices may never get support. There is no word from Apple on support in the Mac, but if iOS supports WPA3, Mac support is likely not far off.

With all those caveats, nascent, incomplete support for WPA3 is much more valuable than universal and mature support for WPA2, especially for new purchases and especially on consumer configurations, which are the ones you would use in your home or any small business. As client support starts to increase, it is critical that network infrastructure already support it.

While support for WPA3 is growing, routers and access points will be able to offer older clients a fallback to the WPA2 PSK and Open methods, but those users will be vulnerable while the WPA3 users on the same network will be protected.

Even though operating system support for WPA3 is only beginning to be released, the imperative for WPA3 is clear: It is demonstrably more secure than WPA2 and not having it will, before too long, make your wireless network less secure than users and IT professionals have a right to expect. Therefore, you should expect the wireless hardware you buy to support it now.


Sources:  Wi-fi.org, Lifewire, HP



How to use AWS Certificate Manager with AWS CloudFormation

Automate issuing a public DNS validated certificate

Before ACM can issue a certificate for your site, it must verify that you own or control all of the domain names that you specified in your request. You can choose either email validation or DNS validation when you request a certificate.

Previously, when you requested DNS validation, you had to manually add the requested CNAME validation record to the hosted zone in Amazon Route 53.

Note: ACM uses canonical name (CNAME) records to validate that you own or control a domain.

The new release enables AWS CloudFormation to validate your public certificate using DNS when the AWS CloudFormation stack is run from the same account in which the Route 53 hosted zone was created. To enable this feature, you reference your HostedZoneId so that CloudFormation can create the required CNAME record for validation.

Below is a snippet of an AWS CloudFormation template for ACM that references the HostedZoneId.
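
A minimal sketch of such a template (the domain name and hosted zone ID are placeholder values you would replace with your own):

    Resources:
      SiteCertificate:
        Type: AWS::CertificateManager::Certificate
        Properties:
          DomainName: example.com              # placeholder domain
          ValidationMethod: DNS
          DomainValidationOptions:
            - DomainName: example.com
              HostedZoneId: Z23ABC4XYZL05B     # placeholder Route 53 hosted zone ID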

When you deploy this resource in AWS CloudFormation, you’ll see the required CNAME record issued by ACM. AWS CloudFormation will use this information to update the hosted zone based on the HostedZoneId you provided. The following figure shows the CNAME listed in the Status reason of the Events list.

Now you can automate using ACM to issue public certificates and include that as part of your overall stack.

You can likewise use ACM CloudFormation templates to automate the issuance of your private certificates.

Disable certificate transparency

Certificate transparency is an open framework that monitors and audits SSL/TLS certificates. The standard creates a system of public logs that will eventually record all certificates issued by publicly trusted certificate authorities, allowing efficient identification of mistakenly or maliciously issued certificates.

Unless you specifically disable certificate transparency when a certificate is created, every publicly trusted digital certificate is recorded in these logs. The log entries contain information about the public key, the subject, and the issuer.

Certificate transparency logs keep append-only cryptographically-secured records of certificates, meaning that certificates can only be added to the log. It’s impossible to delete, modify, or in any way retroactively change or insert certificates into the log.

You can include disabling the certificate transparency logs as part of your AWS CloudFormation template. You need to add the CertificateTransparencyLoggingPreference parameter and set it to DISABLED.

Below is a snippet of an AWS CloudFormation template that includes the CertificateTransparencyLoggingPreference parameter.
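
A minimal sketch, again with placeholder domain and hosted zone values:

    Resources:
      SiteCertificate:
        Type: AWS::CertificateManager::Certificate
        Properties:
          DomainName: example.com              # placeholder domain
          ValidationMethod: DNS
          DomainValidationOptions:
            - DomainName: example.com
              HostedZoneId: Z23ABC4XYZL05B     # placeholder hosted zone ID
          CertificateTransparencyLoggingPreference: DISABLED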


Sources:  AWS

AWS Certified Solutions Architect – Associate is a category of technical certifications offered by Amazon Web Services (AWS) for beginners and professionals who run enterprise architecture programs, as well as solutions architects. It covers deployment of AWS systems, AWS best practices, and many other topics.

The AWS Course covers skills for working with the Amazon Cloud, Alexa, Amazon S3, Amazon EC2, autoscaling and load balancing, serverless websites, and much more.

Our next 5-day bootcamp will start soon!


Type of Installations

CompTIA A+ Core 2, Lesson 1.3


Unattended Installations

An unattended installation is an automated installation method that you can use to install an operating system or software with no user intervention. Unattended installation is typically used during large-scale rollouts, when it would be too slow and costly to have administrators or technicians interactively install the operating system or software on individual computers.

An example of an unattended software installation: Paint.net
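
As a rough illustration of what "no user intervention" looks like on Windows (the file names below are placeholders): Windows Setup can be driven by an answer file, and MSI-packaged applications accept standard silent-install switches:

    # Windows Setup driven by an answer file (run from the installation media):
    setup.exe /unattend:unattend.xml

    # Silent, no-reboot install of an MSI-packaged application:
    msiexec /i application.msi /qn /norestart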

In-Place Upgrade

An in-place upgrade is the installation of an operating system or application on the computer without removing the older version first and without saving any data beyond normal precautions. In-place installations of operating systems have a tendency to cause problems, especially if the new version is very different from the previous one. For example, in-place upgrades are recommended when migrating to Windows 7 from Windows Vista, but not from Windows XP. In that case, a "clean install" is recommended, whereby all user data must be saved externally and restored after the installation, and all applications must be reinstalled.

An example on how to do an In-Place Upgrade: Microsoft

Clean Install

A completely new installation of an operating system or application on a computer. In a clean install of an OS, the hard disk is formatted and completely erased. In a clean install of an application, the older version is uninstalled first. Installing an OS on a new computer or installing an application for the first time is automatically a clean install.

In most cases, a clean install is not necessary when upgrading your operating system. It is much easier and safer to perform a standard “upgrade and install,” which simply upgrades the necessary files and leaves the user files in place. However, sometimes an OS upgrade is not possible because important files have become lost or corrupted. In this case, a clean install may be the only option. Some users may also prefer to perform a clean install so that no lingering problems from the previous OS will affect the newly installed operating system. Additionally, a clean install may be appropriate when installing an OS on a new hard drive or when transferring ownership of a computer to another person.

Windows, Mac OS X and Linux allow you to perform a clean install when upgrading your operating system. The installer will give you the choice between a standard upgrade (typically the default option) and a clean installation near the beginning of the installation process.

How to install Linux

Repair Install

If your installation of Windows 10 is acting up or misbehaving, it is possible to perform a reinstallation of Windows 10 that keeps your files, documents, data, settings, and apps intact. Reinstalling Windows 10 this way does not format your PC, and it can be used to correct problems such as missing DLL errors, failed SFC scans, a Start menu that doesn't work, and more.

A Windows 10 repair install makes it possible to fix your PC without resorting to reformatting your hard drive. It can also be used to reload Windows 10 system files, registry keys, and configuration to a known working state without deleting or removing user files, settings, and software. A repair install is not the same as formatting and reinstalling Windows or a clean Windows 10 install; it is the last thing to try if you are experiencing problems running Windows 10 or its applications, before resorting to a system refresh or a clean install, which would result in the loss of all installed software.

How to repair install your Windows 10

Multi-boot

Dual-boot is a term used to describe a computer that runs two operating systems. For example, with a dual-boot setup you could have Windows 10 and Linux on the same machine. Installing more than two operating systems is referred to as multi-boot.

The interest in dual-booting exists because there isn’t a one-size-fits-all solution to operating systems. While any modern OS will cover the average person’s day-to-day computer needs, for those who need to use specialized programs or want to try more experimental operating systems without sacrificing their computer usability, dual-booting offers a flexible solution.

Windows 10 is the latest and greatest version of Windows, with great performance and a wide selection of programs and games available. However, not all older applications made the jump. Old games without continued developer support are especially likely to function poorly in Windows 10. Every time a new Windows version comes out, its graphics tools and drivers are updated. If the apps don’t get updated by their developers, they may not work.

In this case, you might want to load Windows 7 along with Windows 10 or another OS from the boot menu (more on this below). Apple’s MacOS is popular due to its great usability and features, and Linux is favored by those who want complete control over their operating system. Dual-booting allows you to switch OS on the fly and make the most out of your computer, whatever your preferences and needs may be.

How to dual boot Linux and Windows.

Multiboot from a USB

 

Remote Network Installation 

Remote Network Installation is an easy-to-use network application designed for centralized software management and auditing across a network. The program allows you to audit the software installed on remote PCs across your network and to deploy software on remote PCs automatically. The program deploys software to remote PCs in parallel, so you can deploy across the entire network within a few minutes.

Enterprise software for RNI: SolarWinds


Sources:  Paint.net, Microsoft, Linux, Windows 10, Linux and Windows, SolarWinds

A+ certification develops PC hardware and software troubleshooting skills. Professionals in the information technology industry may find A+ certification to be an advantage when searching for jobs. Technicians that have the certification tend to earn 5 to 15 percent more than those that don’t. Some employers require A+ certification for entry-level technical support positions. IT professionals that obtain the certification pursue many career paths, such as technical support specialist and IT support administrator.

Learn more about our A+ Training.


How to Sanitize Your Data

Data Sanitization

For the general user, the delete or format command appears to be the logical method of removing unwanted data files. These methods, however, are like sweeping something under the carpet: you may not be able to see it, but it's still there. When you delete files and then empty the Recycle Bin, you don't actually erase the information; you just delete the reference to it so that the operating system can't find it. All the data is still there and, unless it's overwritten, can be easily recovered using file recovery software.

When sensitive information is stored on the hard drive of a machine that is to be surplussed or transferred to another individual or department, it is therefore imperative that extra measures be taken to wipe clean the hard drive before the computer leaves your area of responsibility. This document describes some common methods and software to assist you with the sanitization process. It also includes links to articles that provide detailed technical descriptions of what occurs during this process.

Data destruction software, however, truly does erase the data. Each data destruction program utilizes one or more data sanitization methods that can permanently overwrite the information on the drive.

If you need to remove all traces of a virus or you’re planning on recycling or disposing of your hard drive or computer, wiping your hard drive using data destruction software is the best way to protect yourself.

Sanitizing Techniques

The NIST Special Publication 800-88, Guidelines for Media Sanitization, provides an overview of sanitization techniques and requirements. According to the publication, the purpose of sanitization is “to render access to target data (the data subject to the sanitization technique) on the media infeasible for a given level of recovery effort”, and describes the three categories of action to be taken to sanitize media as:

Clear applies logical techniques to sanitize data in all user-addressable storage locations for protection against simple non-invasive data recovery techniques; typically applied through the standard Read and Write commands to the storage device, such as by rewriting with a new value or using a menu option to reset the device to the factory state (where rewriting is not supported).

Purge applies physical or logical techniques that render Target Data recovery infeasible using state of the art laboratory techniques.

Destroy renders Target Data recovery infeasible using state of the art laboratory techniques and results in the subsequent inability to use the media for storage of data.

See Section 5 of the document for a more detailed summary of sanitization techniques, and Appendix A for sanitization requirements for specific media/device types.

The three most common techniques for properly sanitizing hard drives are:

Physically destroying the drive, rendering it unusable. This is a good alternative for defective hard drives or those that would be too costly to repair. For added security, the disk should be overwritten or degaussed prior to destruction.

Degaussing the drive to randomize the magnetic domains – most likely rendering the drive unusable in the process. Degaussing, or demagnetizing, applies a reverse magnetizing field to data stored on magnetic media, erasing the contents by returning the magnetic flux to a zero state.

Overwriting the drive’s data so that it cannot be recovered. Overwriting replaces previously stored data on a drive or disk with a predetermined pattern of meaningless information, rendering the data unrecoverable.

Note that when removing sensitive information, don’t forget storage devices such as thumb-drives, back-up external hard drives and CDs. Also, be sure to erase any stored names and numbers from phones and fax machines.

Suggested Software

 

Program | Cost | Platform | Comments
Darik's Boot and Nuke (DBAN) | Shareware | Windows & Mac | Self-contained boot disk that securely wipes the hard disks of most computers; consumer-grade, appropriate for bulk or emergency data destruction. (Acquired by Blancco in 2012.)
Disk Utility | Free | Mac OS X | Securely erases data as well as a disk's empty space (the latter prevents recovery of erased files without erasing the entire disk).
Disk Wipe | Shareware | Windows | Permanently wipes sensitive data on partitions and disk volumes.
East-Tec DisposeSecure | $29.95 | OS independent | Erases computer hard drives, partitions, or media devices; US DoD compliant.
Eraser | Shareware | Windows | Completely removes sensitive data from a hard drive by overwriting it several times with carefully selected patterns.
KillDisk (Active@ KillDisk) | Free version; Pro versions start at $39.95 | Windows, Linux | Destroys all data on hard disks, SSDs, and USB drives, excluding any possibility of future recovery of deleted files and folders; a hard drive and partition eraser utility.
Linux built-in tools | Free | Linux | Use the built-in dd, wipe, and shred tools.
secure rm | Shareware | Unix & Windows (command line) | Secure file removal utility for Unix and Unix-like systems; a command-line-compatible rm replacement that overwrites file contents.
ShredIt | Free trial; $24.95 | Windows, Mac, iOS, Android | Easy interface; configurable overwrite pattern and number of overwrites.
Wipe | Shareware | Linux, Unix | Uses Gutmann's erase patterns; erases single files and accompanying metadata or entire disks.
WipeDrive (Home version) | $29.95 | Windows, Intel-based Macs, Linux | Securely erases all hard drive or external storage information; military grade; wipes up to 3 different hard drives.
WipeDrive (Enterprise version) | Contact vendor for details (licensed per client) | Windows, Intel-based Macs, Linux | Flexible licensing permits wiping desktops, laptops, networked computers, servers, individual drives, or RAID arrays with one product and one block of licenses; erases hard drives remotely.

 

Removal Tips

Windows

Each of the software products listed above comes with specific instructions, and some offer an easy-to-use wizard interface. KillDisk is recommended by some DCCs.

Macintosh

In addition to the software offered above, Mac computer hard drives can be cleared by zeroing their data. Note that zeroing data (aka “low level” format) may take a long time and depends on the hard disk size. It is recommended to use the “8-way random” feature in conjunction with the “zero all data” option.  See the section “Securely erase a disk” in the article Disk Utility 12.x: Erase a Disk, CD or DVD for details.

Solaris / Linux / Unix
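
On these systems, the built-in dd and shred tools mentioned in the table above can overwrite a drive directly. A minimal sketch follows; /dev/sdX is a placeholder, and both commands are destructive, so double-check the target device first:

    # Overwrite the entire drive three times, then add a final pass of zeros.
    sudo shred -v -n 3 -z /dev/sdX

    # Alternatively, overwrite the drive once with zeros using dd.
    sudo dd if=/dev/zero of=/dev/sdX bs=1M status=progress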

 


Sources:  CMU, NIST, Stanford



DB Instances on AWS Outposts

 

AWS launched support for MySQL and PostgreSQL, with plans to add other database engines in the future.

Creating a DB Instance

 

Create a DB Instance using the RDS Console, API (CreateDBInstance), CLI (create-db-instance), or CloudFormation (AWS::RDS::DBInstance).
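
For example, an equivalent CLI call might look like the sketch below; the instance name matches the walkthrough, but the instance class, subnet group, and credentials are placeholder values, and the DB subnet group must map to a subnet on your Outpost:

    aws rds create-db-instance \
        --db-instance-identifier jb-database-2 \
        --engine mysql \
        --db-instance-class db.m5.large \
        --allocated-storage 100 \
        --db-subnet-group-name my-outpost-subnet-group \
        --master-username admin \
        --master-user-password 'REPLACE_ME'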

I’ll use the Console, taking care to select the AWS Region that serves as “home base” for my Outpost. I open the Console and click Create database to get started:

I select On-premises for the Database location, and RDS on Outposts for the On-premises database option:

Next, I choose the Virtual Private Cloud (VPC). The VPC must already exist, and it must have a subnet for my Outpost. I also choose the Security Group and the Subnet:

Moving forward, I select the database engine and version. We're launching with support for MySQL and PostgreSQL, with plans to add more engines and versions based on your feedback:

I give my DB Instance a name (jb-database-2) and enter the credentials for the master user.

Then I choose the size of the instance. I can select between Standard classes (db.m5) and Memory Optimized classes (db.r5):

Next, I configure the desired amount of SSD storage:

One thing to keep in mind is that each Outpost has a large, but finite amount of compute power and storage. If there’s not enough of either one free when I attempt to create the database, the request will fail.

Within the Additional configuration section I can set up several database options, customize my backups, and set up the maintenance window. Once everything is ready to go, I click Create database:

As usual when I use RDS, the state of my instance starts out as Creating and transitions to Available when my DB Instance is ready:

After the DB instance is ready, I simply configure my code (running in my VPC or in my Outpost) to use the new endpoint:

 

Things to Know

Operations & Functions – Much of what you already know about RDS works as expected and is applicable here. You can rename, reboot, stop, start, and tag DB instances, and you can make use of point-in-time recovery; you can scale the instance up and down, and automatic minor version upgrades work as expected. You cannot make use of read replicas or create highly available clusters.

Backup & Recovery – Automated backups work as expected and are stored in the AWS Region. You can use them to create a fresh DB Instance in the cloud or in any of your Outposts. Manual snapshots also work and are stored in the AWS Region. They can be used to create a fresh DB Instance on the same Outpost.

Encryption – The storage associated with your DB instance is encrypted, as are your DB snapshots, both with KMS keys.



Passively Test Security Controls

Security+ Objectives 1.5

The passive approach uses devices to watch traffic as it passes by. These can be special-purpose devices, such as a sniffer, or the capability can be built into other devices, such as routers, switches, or end hosts. Administrators can scan specific systems or an entire network. For example, many organizations perform periodic scans of the entire network to detect vulnerabilities. If an administrator makes an unauthorized change that results in a vulnerability, the scan can detect it. Similarly, if a rebuilt system is missing some key security settings, the scan will detect them. It's also possible to scan a new system before, or right after, it's deployed.

Passively Testing Security Controls

An important point about a vulnerability scan is that it does not attempt to exploit any vulnerabilities. Instead, a vulnerability scan is a passive attempt to identify weaknesses. This ensures that the testing does not interfere with normal operations. Security administrators then assess the vulnerabilities to determine which ones to mitigate. In contrast, a penetration test is an active invasive test that attempts to exploit vulnerabilities.


Vulnerability Scan as a Management Control

Security administrators use vulnerability scanners as a management control to identify which systems are susceptible to attacks. Vulnerability scanners identify a wide range of weaknesses and known security issues that attackers can exploit. Most vulnerability scanners combine multiple features into a single package. A vulnerability scanner includes the following capabilities:

  • Identifying vulnerabilities
  • Identifying misconfigurations
  • Passively testing security controls
  • Identifying a lack of security controls

Vulnerability scanners utilize a database or dictionary of known vulnerabilities and test systems against this database. For example, the MITRE Corporation maintains the Common Vulnerabilities and Exposures (CVE) list, which is a dictionary of publicly known security vulnerabilities and exposures. This is similar to how antivirus software detects malware using virus signatures. The difference is that the CVE is one public list funded by the U.S. government, whereas antivirus vendors maintain proprietary signature files.

Additionally, attackers often look for systems that are misconfigured, but vulnerability scanners can detect some common misconfiguration settings. Some of the vulnerabilities and common misconfigurations discovered by a vulnerability scanner include:

  • Open ports – Open ports can signal a vulnerability, especially if administrators aren't actively managing the services associated with them. For example, not all web servers use File Transfer Protocol (FTP), so if TCP ports 20 and 21 are open, that indicates a potential FTP-related vulnerability. Similarly, Telnet uses port 23, but Secure Shell (SSH) on port 22 is the recommended alternative. (A simple port-probe sketch follows this list.)
  • Default accounts and passwords – Operating systems and applications can ship with default usernames and passwords. Basic operating system and application hardening should remove the defaults, and a scan can discover the weaknesses if they haven't been removed. For example, some SQL database systems allow the sa (system administrator) account to be enabled with a blank password. Scanners such as Nessus will detect this.
  • Weak passwords – Many scanners include a password cracker that can discover weak passwords or verify that users are creating strong passwords in compliance with an organization's policy. It is more efficient to use a technical password policy to require and enforce strong passwords; however, if this isn't possible, administrators can use a separate password cracker to discover weak passwords.
  • Sensitive data – Some scanners include data loss prevention (DLP) techniques to detect sensitive data sent over the network. For example, a DLP system can scan data looking for patterns such as Social Security numbers, or for key words that identify classified or proprietary data.
  • Security and configuration errors – Vulnerability scans can also check the system against a configuration or security baseline to identify unauthorized changes.
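
As promised above, here is a simple sketch of the kind of open-port check a scanner performs. nmap is not one of the tools named in this article, and the target address is a placeholder; dedicated vulnerability scanners bundle equivalent (and far broader) checks:

    # Probe a handful of well-known ports and report the services found.
    nmap -sV -p 20,21,22,23,80,443 192.0.2.10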

Sources: Stanford




Penetration Testing Vs Vulnerability Scanning

Security+ Objectives 1.4

Penetration Testing & Vulnerability Scanning: What's the Difference?

What is Penetration Testing?

Penetration testing is a way to “stress test” your IT infrastructure security. Penetration techniques are used to evaluate the safety and security of the network in a controlled manner. Operating systems, services, applications, and even the behavior of the end user is assessed to validate existing defense mechanisms and the efficacy of end-user security policies. The goal is to gain unauthorized access through exploitation which can be used to emulate the intent of a malicious hacker.

What is Vulnerability Scanning?

Vulnerability scanning (sometimes called vulnerability assessment, or VA) is the process of identifying threats and vulnerabilities on a target by using automated vulnerability scanners. This sometimes includes a range of manual testing with additional tools to further evaluate the security of applications or networks and to verify the vulnerabilities discovered by the scanning applications.

Objectives

For a Pentest

A pentest is often initiated by various scenarios, which can include (but are not limited to) application launches, major network/application changes or updates, compliance regulations, or a breach/leak from a targeted attack.
Because the reasons for conducting a pentest vary, the objectives can often differ greatly as well. Industry experts generally divide penetration testing into three categories: black box testing, white box testing, and gray box testing. The categories correspond to different types of attacks or cybersecurity threats.

Generally, the end goal of an ethical hacker is to gain unauthorized access to a target by means of exploiting uncovered vulnerabilities from the scanning and enumeration phase. Your organization, however, may have an alternative end goal in mind due to the requirements for conducting the pentest in the first place.

Some of the objectives and occasions for conducting a pentest are as follows:
Application launches: A pentest may be conducted as part of the software development life cycle (SDLC) process to uncover existing vulnerabilities which should be resolved before the launch. The main objective is to help save time and money by discovering and fixing vulnerabilities before an application is deployed into production and open to end users or potentially malicious hackers.

Major network/application change or update: Pentests are often scheduled on an annual, bi-annual or quarterly basis to maintain best security practices and stay on top of any major changes which could potentially uncover new vulnerabilities. A pentest may be initialized on this cycle or when a major change in a network or application occurs.

Vulnerability management program: The landscape of attacks is evolving at a rapid pace which usually eclipses the awareness and knowledge that organizations maintain with regards to their security posture. In order to attempt to stay on top of this, it is imperative to continuously assess the applications and infrastructure on a regular or at least semi-regular basis.
There is a famous quote in the infosec community by John Chambers (former CEO of Cisco) which explains the need for this continuous maintenance: “There are only two types of companies: Those that have been hacked, and those that don’t know they have been hacked.”
Compliance regulations: A pentest can be conducted with the objective of meeting certain compliance standards which have requirements to perform penetration tests at certain periods. Depending on the type of data organizations process or store, they may be required to abide by different compliance regulations (e.g. PCI DSS, HIPAA, Sarbanes-Oxley). Some of these regulations require a pentest to address the risks and potential security exposure an organization may have to aid in the protection of this regulated data.

After a breach or leak: This is quite possibly the worst reason to conduct a pentest, but unfortunately it is also very common. After having been breached and having confidential data exposed to the public, an organization may panic and immediately hire a vendor to conduct a pentest to prevent a similar leak from happening again. The objective here is to uncover any additional vulnerabilities and holes the organization may have, since it is already well aware that flaws exist. This is a reactive approach used to prevent similar breaches in the future.

For Vulnerability Scanning

The objectives of a vulnerability scan can be somewhat different from those of a penetration test.
Where a pentest can be more reactive or mandatory for various reasons, vulnerability scanning tends to be cyclical: it proactively discovers vulnerabilities so that patching can happen as part of an ongoing vulnerability management program, or when new vulnerabilities are released. Pentests are also included in a vulnerability management program, but they are much less frequent than vulnerability scans within the actual program.
Vulnerability scans should therefore be a frequent and ongoing process to continuously monitor and identify weaknesses in an organization and reduce the attack surface.
There are also many cases where a scan is performed after a leak occurs, when a prominent new vulnerability comes to light, or when a change in a network or application takes place.
The objectives of these are as follows:

New vulnerability released: When a new headline vulnerability hits the market, many companies and executives panic immediately. Calls are made to their scanning teams or vendors to conduct a scan ASAP to hunt for the new vulnerability in their organization. Recent examples include EternalBlue, KRACK, Meltdown/Spectre, and Heartbleed.
These scans take place shortly after word gets out that another one of these vulnerabilities has come to light, and the objective is to determine whether the vulnerability is present anywhere in the organization.

Network/application change: This falls somewhat into a vulnerability management program but will remain separate since not all organizations have or maintain such a program. Anytime a major change, update or migration takes place (have you moved buildings recently?), this should be an immediate trigger to re-scan and assess the environment to find weaknesses which may have been created due to these changes. Maybe something was missed while setting up the network and an extra access point or server was left behind open to the external network. It’s things like this which happen frequently and are often forgotten.

Vulnerability management program: Application security and patch management are a continuous process within a good vulnerability management program. This includes vulnerability scanning of applications and networks to identify weaknesses and the patches that should be applied. The entire program should be cyclical, requiring vulnerability scans on a monthly, quarterly, or annual basis, depending on the targets, to stay on top of new vulnerabilities and exposures in the organization.
After a breach or leak: This fits in the same category as running a pentest after a breach or leak. A vulnerability scan should be initiated to uncover flaws that still exist within the organization, to prevent another attack from occurring. If you've made the headlines because of an attack, your internal sensitive information may have been passed around in dark hacker forums. This information, or just the fact that there are (or were) open vulnerabilities, can be enough to trigger additional attacks. Staying on top of all attack surfaces is paramount to keeping this exposure to a minimum and minimizing the chance of recurrence in the near future.

Who Usually Performs a Pentest/VS?

Hackers! White hat ones, of course (we hope). A pentest team can have an extremely varied background as far as education and experience go. But no matter what, they all have one important thing in common: a passion for security and a great curiosity to find and break things. It's this passion that unites all pentesters and makes them great at their jobs.

Conclusion

Before starting the process of hiring a vendor for a security assessment, it is crucial to know the differences between a pentest and a vulnerability scan so that your organization gets what it wants and needs based on your business requirements. A pentest can have a drastically higher price tag than a vulnerability scan; if you only require a small vulnerability scan, it isn't worth paying all that cash for a pentest.

Once you have a solid understanding of the main differences, you can make the right decision for your organization and better determine the scope of the engagement. In the end, you will be left with the type of test and report you were looking for and the required patching that lies ahead.


Sources: Solarwinds, pentest-standard script



AWS Transit Gateway

Here we will have three VPCs, as shown below. If you remember, VPC peering is not transitive; to connect all of the VPCs with peering alone, every pair of VPCs must be peered directly, which takes a lot of time and effort.

 

That is, if VPC1 peers with VPC2 and VPC2 peers with VPC3, VPC1 still cannot reach VPC3 through VPC2. For all of the VPCs to talk to each other, we would need full-mesh peering.

 

To solve this, AWS introduced the Transit Gateway.

 

 

A transit gateway follows a hub-and-spoke topology:

VPC4 (10.4.0.0/16) will act as the hub, and the rest of the VPCs will be spokes.

For example, VPC1 (10.1.0.0/16) will be a spoke

VPC2 (10.2.0.0/16) will be a spoke

VPC3 (10.3.0.0/16) will be a spoke

 

When we are finished, the EC2 instances in each VPC will be able to talk to each other.

 

Step 1) Create VPC1 (10.1.0.0/16), create an Internet gateway (IGW), and attach it to VPC1. Then create the public subnet 10.1.1.0/24.

Notice that when VPC1 (10.1.0.0/16) is created, AWS automatically creates a route table for it; let's name it "Created by the system when I created VPC 10.1.0.0/16".

 

Step 2) Create a new custom route table and call it "Public routing table for 10.1.0.0/16"; then add an entry for 0.0.0.0/0 pointing to the IGW named "VPC1 IGW".

 

Step 3) Go to Subnet Associations and associate subnet 10.1.1.0/24 with the custom route table above.

 

Step 4) Repeat the same steps for VPC2:

VPC2 = 10.2.0.0/16

Public subnet = 10.2.1.0/24

Create an IGW named "VPC2 IGW" and attach it to VPC2

Create a new route table for VPC2

Associate the subnet with the route table

 

Step 5) Repeat the same steps again for VPC3:

VPC3 = 10.3.0.0/16

Public subnet = 10.3.1.0/24

Create an IGW named "VPC3 IGW" and attach it to VPC3

Create a new route table for VPC3

Associate the subnet with the route table

 

Step 6) Now go to EC2 and boot up an Amazon Linux AMI in each corresponding VPC and subnet.

Let's name the instances as follows:

PC1-10.1.0.0 (inside VPC1, subnet 10.1.1.0/24)

PC2-10.2.0.0 (inside VPC2, subnet 10.2.1.0/24)

PC3-10.3.0.0 (inside VPC3, subnet 10.3.1.0/24)

 

Step 7) For the EC2 instances to talk to each other with peering alone, we would need VPC peering connections: VPC1 peered with VPC2, and VPC2 peered with VPC3. Remember that peering is not transitive, so VPC1 still could not talk to VPC3; we would need yet another peering connection between VPC1 and VPC3.

As we can see, this gets harder as we add more VPCs, which is exactly the problem the Transit Gateway solves.

Step 8) Now the actual lab starts here.

 

Step 9) Go to the VPC console, then open Transit Gateways in the side navigation.

 

Step 10) Let's create it with the following values:

Name: ASMTransitgateway

Description: This will be used for VPC1, VPC2, and VPC3

 

Give it an Amazon-side BGP ASN of 64512 and leave the rest of the values at their defaults.
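
The same step can also be performed from the AWS CLI; this is a sketch using the values above:

    aws ec2 create-transit-gateway \
        --description "This will be used for VPC1, VPC2, and VPC3" \
        --options AmazonSideAsn=64512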

 

Step 11) Now attach VPC1, VPC2, and VPC3 to the transit gateway.

 

Step 12) In the navigation pane, click Transit Gateway Attachments; then pick VPC1, name the attachment VPC1, and pick public subnet 1.

 

Step 13) Do the same for VPC2 and VPC3; after about 5 minutes, all of the VPC attachments should become available.
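
Each attachment can also be created from the CLI; the IDs below are placeholders (repeat for VPC2 and VPC3 with their own VPC and subnet IDs):

    aws ec2 create-transit-gateway-vpc-attachment \
        --transit-gateway-id tgw-0123456789abcdef0 \
        --vpc-id vpc-0123456789abcdef0 \
        --subnet-ids subnet-0123456789abcdef0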

 

Step 14) Once everything in Step 13 is good, look at the transit gateway route table in the navigation pane. You will see all the routes from VPC1, VPC2, and VPC3 there, because the routes have been propagated automatically.

 

Step 15) If I SSH to the EC2 instance located in VPC1 (subnet 10.1.1.0/24), I still cannot ping an EC2 instance on 10.2.1.x or 10.3.1.x. Why? Because I still need to go to the route tables of VPC1, VPC2, and VPC3 and add a corresponding route pointing to the transit gateway.

 

Step 16) That is, the route tables need to look like this:

For VPC1:

10.1.0.0/16 local

0.0.0.0/0 IGW

10.2.0.0/16 transit gateway (add this entry)

10.3.0.0/16 transit gateway (add this entry)

For VPC2:

10.2.0.0/16 local

0.0.0.0/0 IGW

10.1.0.0/16 transit gateway (add this entry)

10.3.0.0/16 transit gateway (add this entry)

For VPC3:

10.3.0.0/16 local

0.0.0.0/0 IGW

10.1.0.0/16 transit gateway (add this entry)

10.2.0.0/16 transit gateway (add this entry)
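
From the CLI, the two extra entries for VPC1 would look like this sketch (the route table and transit gateway IDs are placeholders):

    aws ec2 create-route --route-table-id rtb-0123456789abcdef0 \
        --destination-cidr-block 10.2.0.0/16 \
        --transit-gateway-id tgw-0123456789abcdef0
    aws ec2 create-route --route-table-id rtb-0123456789abcdef0 \
        --destination-cidr-block 10.3.0.0/16 \
        --transit-gateway-id tgw-0123456789abcdef0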

 

Step 17) Now SSH to the EC2 instance in VPC1 (on the 10.1.1.x network); you should be able to ping the instances in VPC2 (10.2.1.x) and VPC3 (10.3.1.x).

 

Step 18) As we can see, all of the EC2 instances can now talk to each other. The key things to remember: when you create a transit gateway and attach it to your VPCs, it learns all the routes from VPC1, VPC2, and VPC3, but you must still go to the corresponding route table of each VPC and update it yourself. Finally, make sure you delete the transit gateway when you are done so you don't keep getting charged.



CompTIA A+ Core 1 Objective 2.2

What is a Router?

A router is another network infrastructure device; it directs packets through the network based on information from the Network Layer (Layer 3) of the OSI model. A router uses a combination of hardware and software to "route" data from its source to its destination. A router can be configured to route data packets for different network protocols, like TCP/IP, IPX/SPX, and AppleTalk.

 

Routers segment large networks into logical segments called subnets. The division of the network is based on the Layer 3 addressing system, such as IP addresses. If a Network Layer (Layer 3) data packet (IP datagram) is addressed to another device on the local subnet, the packet does not cross the router, so it cannot create traffic congestion in another network. If data is addressed to a computer outside the subnet, the router forwards it to the addressed network. Routing network data this way helps conserve network bandwidth.

Routers are the first line of defense for your network, and they must be configured to pass only traffic that is authorized by the network administrators. Thus, a router can function as a firewall if it's configured properly.

How Do Routers Work?

Let’s use a home wireless router connected to a cable provider’s internet network in a very simplified example.

  1) The router powers on and loads its OS from flash.

  2) The router loads the configuration file last saved to NVRAM and sets up the network interfaces and routing protocols it will run.

  3) The router adds the network address and subnet for each interface to its routing table, along with the name of the interface itself.

  4) The router has a simple static default route to send all non-local data out the network port connected to the cable company.

  5) When the router receives a web page request from your computer, it checks the destination IP address against its routing table.

  6) The bits forming the destination IP address in the IP packet are used as a hash key to point to the correct route, which in turn points to the correct network interface that the packet should be forwarded out of.

  7) The router transmits the packet out the correct interface to the next router, which repeats the process until the packet reaches the destination.

The process is mostly the same for any router.
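
You can see the same idea on any Linux host, where the kernel keeps a small routing table of its own (the addresses here are illustrative):

    ip route show
    # default via 203.0.113.1 dev eth0     <- static default route toward the ISP
    # 192.168.1.0/24 dev eth1 scope link   <- directly connected local subnet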

What is a Switch?

A network switch is a component of a computer network that connects two network segments and/or two network devices (switches or routers) together. A switch can be described as a network bridge with multiple ports that processes and routes packets at the Data Link Layer of the OSI reference model. Some switches can process data at the upper layers (the Network Layer and above); these are often termed multilayer switches.

Functions

The basic function any switch is supposed to perform is to receive information from any source connected to it and dispatch that information to the appropriate destination only. This differentiates switches from hubs: a hub receives information and forwards it to every other device on the network. This is why switches are called intelligent devices.

The network switch has become a crucial part of today’s local area networks (LANs). Medium to large LANs are established using a number of interlinked network switches. SOHO (Small Office/Home Office) networks generally consist of a single switch, or sometimes a multi-purpose device such as a residential gateway, to utilize small office/home broadband services such as Digital Subscriber Line (DSL) and cable Internet. Router-like components now interface with the particular physical broadband technology, and some users carry telephone service over the Internet using Voice over IP (VoIP).

As mentioned above, a switch operates at the Data Link Layer and creates a distinct collision domain for each of its ports. Consider four computers (A, B, C, and D) connected to four ports of the switch: one pair, say A and B, can transfer data in either direction while, at the same time, the other pair, C and D, exchanges information, and these two conversations will not interrupt each other. In full-duplex mode, pairs may even overlap (A communicating with B, B with C, and so on). With a hub, by contrast, all devices share the same bandwidth in half-duplex mode, causing collisions that result in unnecessary packet retransmissions.
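As a toy illustration of that per-port intelligence (made up for this article, not vendor code), the following Python sketch mimics how a switch learns which MAC address lives on which port and floods a frame only when it has not yet learned the destination:

# MAC address -> port number, learned from the source address of each frame.
mac_table = {}

def handle_frame(src_mac, dst_mac, in_port, num_ports=4):
    mac_table[src_mac] = in_port  # learn where the sender lives
    if dst_mac in mac_table:      # known destination: forward out one port
        return [mac_table[dst_mac]]
    # Unknown destination: flood out every port except the ingress port.
    return [p for p in range(1, num_ports + 1) if p != in_port]

print(handle_frame("AA", "BB", in_port=1))  # BB unknown -> flood [2, 3, 4]
print(handle_frame("BB", "AA", in_port=2))  # AA already learned -> [1]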

More Functionality

  • Store and forward: The switch stores and verifies each packet before forwarding it.

  • Cut through: The switch verifies only the portion of the packet header up to the frame’s hardware address before forwarding it. It may fall back to the store-and-forward procedure if the outgoing port is busy when the packet enters.

  • Fragment free: This method tries to preserve the advantages of both cut through and store and forward. Fragment free verifies the first 64 bytes of the packet, where the addressing details are stored; because collisions should be detected within the first 64 bytes of the frame, erroneous frames will not be forwarded.

  • Adaptive switching: This method automatically selects one of the above three methods depending on the traffic situation.

Sources: Cisco, Howstuffworks

A+ certification develops PC hardware and software troubleshooting skills. Professionals in the information technology industry may find A+ certification to be an advantage when searching for jobs. Technicians that have the certification tend to earn 5 to 15 percent more than those that don’t. Some employers require A+ certification for entry-level technical support positions. IT professionals that obtain the certification pursue many career paths, such as technical support specialist and IT support administrator.

Posted by & filed under Splunk.

Recursion Pharma Targets 100 Genetic Diseases with Splunk and Machine Learning

In the past, Salt Lake City-based Recursion Pharmaceuticals found it difficult to manage large amounts of time-series data collected from computer-controlled instruments and video footage generated from cameras in the laboratory. The initial data management strategy hardly matched the firm’s aggressive high-volume ambitions — its laboratory’s microscopes currently produce on the order of 700,000 TIFF files each week, representing an 800 percent increase in productivity over 10 months. While the company considered open-source alternatives, the director of high-throughput science (HTS) operations saw the pivotal role that Splunk Enterprise could fill as Recursion ramped up its capabilities. Please feel free to click the link above and watch the short clip.


The Splunk platform helps monitor and diagnose issues with complex lab instruments in real time, catching anomalies in automated operations and letting the high-throughput science team build dashboards to measure quality over time. It also serves as a data management platform that feeds machine data back to the data scientists, who work with Splunk add-ons such as Splunk DB Connect, enabling the team to share discovered knowledge.

Biopharma Leader Gets Ahead of Security Threats with Analytics-Driven SIEM in the Cloud

With data silos located worldwide, a global biopharmaceutical leader realized it needed a security information and event management (SIEM) solution. To better protect its customers and intellectual property against cyberthreats and breaches, particularly given today’s ever-changing security threat landscape, the company adopted Splunk Cloud with Splunk Enterprise Security (ES) for security and compliance.


Given that the company manages highly sensitive consumer and proprietary intellectual property data, it needed a solution that would provide visibility into any possible threats in its IT environment as soon as possible. The company was satisfied with the immediate availability of Splunk Cloud with Splunk Enterprise Security as its SIEM, and the security team began asking questions and getting answers from its data right away. The Splunk analytics-driven SIEM takes in log information from all areas of the company — including infrastructure, applications and devices — to detect security incidents and manage those events, which simply could not be done adequately with its legacy SIEM. Overall, security investigation and resolution times have been reduced from up to months to hours or even minutes. In addition to relying on Splunk Cloud for security and compliance, the company is also beginning to use Splunk Cloud to monitor its IT operations, and plans are underway to identify additional ways to leverage insights from the data and help the business moving forward.

Sources: Splunk


The Splunk platform aggregates and analyzes digital exhaust from various sources, including application program interface (API) pulls and log files from applications, servers, mobile devices and websites. Turning machine data into operational intelligence helps Splunk’s customers gain an understanding of what’s happening across their IT systems and technology infrastructure in real-time in order to facilitate data-driven decision management.

Join our Splunk Training and get certified!

Posted by & filed under Splunk.

The Bank of England Protects $1 Trillion

Founded in 1694, the Bank of England is the central bank of the United Kingdom, facilitating transactions that amount to roughly $1 trillion every day. The Bank of England’s SOC, staffed by a team of 10 security analysts, is responsible for protecting the infrastructure that facilitates these transactions, processes one-third of the country’s GDP, and is used by 4,200 staff across a network of 10,000 endpoints covering servers and user devices.


City of Los Angeles Enables Real-Time Security Intelligence Across 40+ Agencies

To protect its digital infrastructure, the City of Los Angeles requires threat intelligence and situational awareness of its security posture. Previously, the city’s more than 40 agencies had disparate security measures that complicated data analysis. Los Angeles turned to Splunk to assess citywide risks, gain visibility into suspicious activities and proactively mitigate threats.


Dubai Airports Flies Into the Future With Splunk

Expecting to serve 100 million travelers annually — and to provide exceptional services to each of them — Dubai Airports turned to Splunk Enterprise. The airport meets its target of getting 95 percent of passengers through security in five minutes or less by monitoring metal detectors to identify trends that keep queues moving efficiently. The IT team monitors all of its Wi-Fi access points in real time, detecting and dealing with congestion and rogue hot-spots as they arise. Please feel free to click the link above to watch a brief video.


Fairfax County Protects Its Citizens’ Data

Fairfax County, Virginia, located in the heart of the nation’s capital, employs 12,000 people across more than 50 agencies and serves more than 1.1 million citizens. Its government is regarded as a leader in cyber-security and IT, enabling the county to effectively serve and protect citizens. Since deploying Splunk Enterprise Security (ES) with Splunk Cloud as its security information and event management (SIEM) platform, Fairfax County has reduced security reporting from two weeks to real time. Please feel free to click the link above to watch a brief video.


Leidos Taps Splunk for Better Event Management

As a Fortune 500 science and technology solutions leader, Leidos’ 48-year history spans everything from supporting the U.S. space shuttle program to helping design an America’s Cup yacht race winner. Today, Leidos is working to solve global challenges in defense, intelligence, health and other markets — and facing its own challenges to ensure its services are always available to customers. Since replacing its legacy event management solution with Splunk IT Service Intelligence (ITSI), the Leidos internal IT department has seen many benefits. Please feel free to click the link above to watch a brief video.


Maryland Lottery and Gaming Control Agency Bets on Splunk

Operating the state lottery and serving as regulator of the state’s six casinos, the Maryland Lottery and Gaming Control Agency (MLGCA) oversees lottery and gaming activities that generate more than $1 billion in annual contributions to the State of Maryland. Before Splunk, the agency spent several hours each week on manual processes, from meeting specific criteria regarding database security to documenting IT processes to meet audit requirements. The agency turned to Splunk, the Data-to-Everything Platform, to modernize its IT and security operations.


Maryland’s Prince George’s County Mission-Ready With Splunk Platform

Home to NASA’s Goddard Space Flight Center, the University of Maryland and nearly 1 million citizens, Prince George’s County, Maryland, located outside of Washington, D.C., spans almost 500 square miles. The county’s centralized Office of Information Technology (OIT), which supports all branches, agencies and government employees, faces many challenges when it comes to fulfilling its mission with limited resources and funding. The county relies on Splunk to monitor its extensive network, which consists of about 100 sites and 500 network devices. Please feel free to click the link above to watch a brief video.


SAIC Builds New World-Class Security Operations Center

Science Applications International Corp. (SAIC) is a leading technology integrator specializing in technical, engineering and enterprise information markets. With expertise in domains such as scientific research, program management and IT services, the company needed to build out a robust security operations center (SOC) and computer incident response team (CIRT) to defend against cyberattacks. Please feel free to click the link above to watch a brief video.


The U.S. Census Bureau Goes Digital for the Country’s Most Efficient Census

Every 10 years since 1790, the U.S. Census Bureau has set out to accomplish the country’s largest civilian undertaking: getting an accurate count of each person living in the United States and its territories. As the country’s first digital decennial census, the 2020 census will provide Americans with the option to respond online, in addition to by phone or mail. To navigate this new territory, the Census Bureau uses Splunk to take a data-forward approach to measuring America.


Sources: Splunk

Posted by & filed under Splunk.

What data can I index?

Splunk Enterprise can index any kind of data. In particular, it handles any and all IT streaming, machine, and historical data, such as Windows event logs, web server logs, live application logs, network feeds, metrics, change monitoring, message queues, archive files, and so on.


How do I get data in?

To get data into your Splunk deployment, point it at a data source. Tell it a bit about the source. That source then becomes a data input. Splunk Enterprise indexes the data stream and transforms it into a series of events. You can view and search those events right away. If the results aren’t exactly what you want, you can tweak the indexing process until they are.


Guided Data Onboarding

The Guided Data Onboarding (GDO) feature also provides end-to-end guidance for getting select data sources into specific Splunk platform deployments. From your home page in Splunk Web, find the data onboarding guides by clicking Add Data. From there you can select a data source and configuration type, then view diagrams, high-level steps, and documentation links that help you set up and configure your data source. You can find all the Guided Data Onboarding manuals by clicking the Add data tab on the Splunk Enterprise Documentation site.


Types of data sources

Splunk provides tools to configure many kinds of data inputs, including those that are specific to particular application needs. Splunk also provides the tools to configure any arbitrary data input types. In general, you can categorize Splunk inputs as follows:

  • Files and directories
  • Network events
  • Windows sources
  • Other sources

Files and directories

A lot of data comes directly from files and directories. You can use the files and directories monitor input processor to get data from them. To monitor files and directories, see Get data from files and directories.


Network events

Splunk Enterprise can index data from any network port, for example, remote data from syslog-ng or any other application that transmits over TCP. It can also index UDP data, but you should use TCP instead whenever possible for enhanced reliability. Splunk Enterprise can also receive and index SNMP events, alerts fired off by remote devices. To get data from network ports, see Get data from TCP and UDP ports in this manual. To get SNMP data, see Send SNMP events to your Splunk deployment in this manual.
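As a minimal sketch of the TCP case (it assumes you have already configured a TCP data input on the Splunk server; the host name and port 5514 below are placeholders, not defaults):

import socket

SPLUNK_HOST, SPLUNK_PORT = "splunk.example.com", 5514  # placeholder TCP input

# Send one syslog-style line to the TCP input; Splunk indexes it as an event.
with socket.create_connection((SPLUNK_HOST, SPLUNK_PORT)) as conn:
    conn.sendall(b"Jan 01 12:00:00 web01 sshd[1234]: Failed password for root\n")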


Windows sources

Splunk Cloud and the Windows version of Splunk Enterprise accept a wide range of Windows-specific inputs. Splunk Web lets you configure the following Windows-specific input types:

  • Windows Event Log data
  • Windows Registry data
  • WMI data
  • Active Directory data
  • Performance monitoring data

To index and search Windows data on a non-Windows instance of Splunk Enterprise, you must first use a Windows instance to gather the data. See Considerations for deciding how to monitor remote Windows data.

For a more detailed introduction to using Windows data in Splunk Enterprise, see Monitoring Windows data in this manual.


Other data sources

Splunk software also supports other kinds of data sources. For example:

  • Metrics
    Get metrics data from your technology infrastructure, security systems, and business applications.
  • First-in, first-out (FIFO) queues
  • Scripted inputs
    Get data from APIs and other remote data interfaces and message queues.
  • Modular inputs
    Define a custom input capability to extend the Splunk Enterprise framework.
  • The HTTP Event Collector endpoint
    Use the HTTP Event Collector to get data directly from a source with the HTTP or HTTPS protocols.
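As a minimal sketch of the HTTP Event Collector case (it assumes HEC is enabled and a token has been created; the URL and token below are placeholders for your own deployment):

import requests

HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"  # placeholder token

# POST one JSON event to the collector endpoint.
resp = requests.post(
    HEC_URL,
    headers={"Authorization": f"Splunk {HEC_TOKEN}"},
    json={"event": "hello from HEC", "sourcetype": "demo"},
    verify=False,  # only for a test instance with a self-signed certificate
)
print(resp.status_code, resp.text)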

Source: Splunk

Posted by & filed under Amazon AWS, Splunk.

AWS CloudTrail

The Splunk App for AWS gives you critical operational and security insight into your Amazon Web Services account. The app includes a pre-built knowledge base of dashboards, reports, and alerts that deliver real-time visibility into your environment.


AWS Config with Splunk

In addition to displaying Amazon CloudWatch logs and metrics in Splunk dashboards, you can use AWS Config data to bring security and configuration management insights to your stakeholders. The current recommended way to get AWS Config data to Splunk is a pull strategy.


AWS Config

AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. Config continuously monitors and records your AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired configurations.


Amazon RDS

Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while automating time-consuming administration tasks such as hardware provisioning, database setup, patching and backups.


Amazon CloudWatch

Amazon CloudWatch is a monitoring service for AWS cloud resources and the applications you run on AWS. You can use Amazon CloudWatch to collect and track metrics, collect and monitor log files, set alarms, and automatically react to changes in your AWS resources.


Amazon VPC Flow Logs

VPC Flow logging lets you capture and log data about network traffic in your VPC. VPC Flow logging records information about the IP data going to and from designated network interfaces, storing this raw data in Amazon CloudWatch, where it can be retrieved and viewed.
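For reference, here is a minimal boto3 sketch of enabling flow logs for a VPC (the VPC ID, log group name, and IAM role ARN are placeholders you must supply):

import boto3

ec2 = boto3.client("ec2")

# Publish flow records for the whole VPC to a CloudWatch Logs group.
ec2.create_flow_logs(
    ResourceIds=["vpc-0123456789abcdef0"],  # placeholder VPC ID
    ResourceType="VPC",
    TrafficType="ALL",                      # ACCEPT, REJECT, or ALL
    LogGroupName="my-vpc-flow-logs",        # placeholder log group
    DeliverLogsPermissionArn="arn:aws:iam::123456789012:role/flow-logs-role",
)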


Amazon S3

Amazon S3 has a simple web services interface that you can use to store and retrieve any amount of data, at any time, from anywhere on the web. It gives any developer access to the same highly scalable, reliable, fast, inexpensive data storage infrastructure that Amazon uses to run its own global network of web sites.


Amazon EC2

Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers. Amazon EC2’s simple web service interface allows you to obtain and configure capacity with minimal friction.


Amazon CloudFront

Amazon CloudFront is a web service that gives businesses and web application developers an easy and cost effective way to distribute content with low latency and high data transfer speeds. Like other AWS services, Amazon CloudFront is a self-service, pay-per-use offering, requiring no long term commitments or minimum fees. With CloudFront, your files are delivered to end-users using a global network of edge locations.


Amazon EBS

Amazon Elastic Block Store (EBS) is an easy to use, high performance block storage service designed for use with Amazon Elastic Compute Cloud (EC2) for both throughput and transaction intensive workloads at any scale.


Source: Splunk

Posted by & filed under Splunk.

Using the Fast mode

The Fast mode prioritizes the performance of the search and does not return nonessential field or event data. This means that the search returns what is essential and required.

  • Disables field discovery. Field discovery is the process Splunk software uses to extract fields aside from default fields such as host, source, and sourcetype. The Splunk software only returns information on default fields and fields that are required to fulfill your search. If you are searching on specific fields, those fields are extracted.
  • Only depicts search results as report result tables or visualizations when you run a reporting search. A reporting search is a search that includes transforming commands. Under the Fast mode you will see only event lists and event timelines for searches that do not include transforming commands.

Using the Smart mode

All reports run in Smart mode, the default search mode, after they are first created. By design, the Smart mode returns the best results for whatever search or report you run. If you search on events, you get all the event information you need. If you run a transforming search, the Splunk software favors speed over thoroughness and brings you straight to the report result table or visualization.

When you run a Smart mode search that does not include transforming commands, the search behaves as if it were in Verbose mode.

  • Discovers all the fields it can.
  • Generates the full event list and event timeline. No event table or visualization will appear because you need transforming commands to make those happen.

When you run a Smart mode search that includes transforming commands, the search behaves as if it were in Fast mode.

  • Disables field discovery.
  • Does not waste time generating the event list and event timeline and jumps you straight to the report result table or visualization.

Using the Verbose mode

The Verbose mode returns all of the field and event data it possibly can, even if it means the search takes longer to complete, and even if the search includes reporting commands.

  • Discovers all of the fields it can. This includes default fields, automatic search-time field extractions, and all user-defined index-time and search-time field extractions. Discovered fields are displayed in the left-hand fields sidebar in the Events results tab.
  • Returns an event list view of results and generates the search timeline. It also generates report tables and visualizations if your search includes reporting commands.

You may want to use the Verbose mode if you are putting together a transforming search but are not exactly sure what fields you need to report on, or if you need to verify that you are summarizing the correct events.
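If you create searches programmatically, these same three modes are exposed through the search job REST endpoint as the adhoc_search_level parameter. Here is a hedged Python sketch; it assumes a local Splunk instance with the default management port 8089, and the credentials are placeholders:

import requests

BASE = "https://localhost:8089"  # default management port
AUTH = ("admin", "changeme")     # placeholder credentials

# Create a search job in Fast mode; valid levels: fast, smart, verbose.
resp = requests.post(
    f"{BASE}/services/search/jobs",
    auth=AUTH,
    verify=False,  # default installs use a self-signed certificate
    data={
        "search": "search index=_internal | stats count by sourcetype",
        "adhoc_search_level": "fast",
        "output_mode": "json",
    },
)
print(resp.json())  # contains the sid of the new search job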


Sources: Splunk

Posted by & filed under Splunk.

Splunk Deployment

A deployment server is a Splunk Enterprise instance that acts as a centralized configuration manager for any number of other instances, called “deployment clients”. Any full Splunk Enterprise instance – even one indexing data locally – can act as a deployment server. A deployment server cannot be a client of itself.


Splunk Index Cluster

Indexer clusters are groups of Splunk Enterprise indexers configured to replicate each other’s data, so that the system keeps multiple copies of all data. This process is known as index replication. By maintaining multiple, identical copies of Splunk Enterprise data, clusters prevent data loss while promoting data availability for searching. Indexer clusters feature automatic failover from one indexer to the next, meaning that if one or more indexers fail, incoming data continues to get indexed and indexed data continues to be searchable.


Splunk Increasing Capacity

A search head cluster is a group of Splunk Enterprise search heads that serves as a central resource for searching. The members of a search head cluster are essentially interchangeable. You can run the same searches, view the same dashboards, and access the same search results from any member of the cluster.

To achieve this interchangeability, the search heads in the cluster must share configurations and apps, search artifacts, and job scheduling. Search head clusters automatically propagate most of these shared resources among the members.


Splunk Multi-Instance

The Daily Indexing Volume table summarizes the performance recommendations that were given in the performance checklist. The table shows the number of reference machines that you need to index and search data in Splunk Enterprise, depending on the number of concurrent users and the amounts of data that the instance indexes.

An indexer that meets the reference hardware requirements can ingest up to 300GB/day while supporting a search load. For a review of the current reference hardware specifications, see Reference hardware in this manual.

The table is only a guideline. Modify these figures based on your use case. If you need help defining and scaling a Splunk platform environment, contact your Splunk Sales representative or Professional Services.


Sources: Splunk

Posted by & filed under Splunk.


Splunk Enterprise

Splunk Enterprise is a software product that enables you to search, analyze, and visualize the data gathered from the components of your IT infrastructure or business. Splunk Enterprise takes in data from websites, applications, sensors, devices, and so on.


Splunk Cloud

Splunk Cloud is the industry’s only enterprise-ready cloud service for machine data, offering a 100% uptime SLA and standard plans from 5GB/day to 5TB/day. Watch this video to find out how you can accelerate time-to-value and stay focused on your core business using Splunk Cloud.


Splunk Light

Splunk Light is a comprehensive solution for small IT environments that automates log search and analysis. When you download Splunk Light for free, you get a Splunk Light license for 30 days that lets you index up to 5 gigabytes of data per day.


Source: Splunk

Posted by & filed under Splunk.

Splunk is software mainly used for searching, monitoring, and examining machine-generated Big Data through a web-style interface. Splunk captures, indexes, and correlates real-time data in a searchable container from which it can produce graphs, reports, alerts, dashboards, and visualizations.

Splunk Search Head


A search head is, in a distributed search environment, a Splunk Enterprise instance that handles search management functions, directing search requests to a set of search peers and then merging the results back to the user. A Splunk Enterprise instance can function as both a search head and a search peer.

Splunk Indexer

An indexer is a Splunk Enterprise instance that indexes data, transforming raw data into events and placing the results into an index. It also searches the indexed data in response to search requests.


Splunk Forwarder

A Splunk forwarder is one of the components of the Splunk infrastructure. It acts as an agent for log collection from remote machines: the forwarder collects logs from remote machines and forwards them to the indexer (the Splunk database) for further processing and storage.


Source: Splunk

Posted by & filed under CompTIA Security+.

Black Box, Grey Box, White Box Testing

Security+ Objectives 1.4

Black Box, Grey Box, White Box Testing: What’s the Difference?

There are several ways to conduct penetration tests. If you’re considering penetration testing for your network, you’ll likely choose either black, white, or gray box testing. Each method has merits, so it’s helpful to understand the difference between these tests in order to decide which route is right for your organization.

Black Box Testing

Black box testing is a way to test a system with no access to information about the system being tested. The tester has no knowledge of the system, the source code, or the system architecture. Since this is the approach most attackers will take, black box testing replicates the method a hacker would use to try to get into the system.

Here are some of the advantages of black box pen testing:

♦ Since knowledge of the programming language isn’t necessary, the tester doesn’t have to be an expert
♦ The tester documents inconsistencies between the actual system and the specs
♦ It’s performed from an outsider’s perspective, not the system designer’s
♦ It’s reproducible
♦ It’s efficient on larger systems

These are some disadvantages of black box pen testing:

♦ The tests are difficult to design
♦ The results can be overestimated
♦ It’s unable to test all software properties
♦ Uncovering bugs and vulnerabilities can take longer than with other tests
♦ It may not be thorough
♦ Testers are unable to test specific segments of code, such as complex areas that are more prone to errors
♦ There’s a chance of repeating testing already performed by the programmer

White Box Testing

White box testing is also known as clear box testing, glass box testing, structural testing, and transparent box testing. This method of testing software checks the internal structure of an application. The tester has knowledge and access to the source code and the system architecture.

These are advantages of white box pen testing:

♦ It makes sure all independent paths of a module have been checked
♦ It verifies all logical decisions along with their values
♦ It checks syntax and uncovers typographical errors
♦ It finds design errors due to the difference between the code design and actual implementation
♦ It’s often faster at finding bugs and vulnerabilities than black box testing
♦ The testing coverage is usually more complete
♦ It finds errors in “hidden” code
♦ It approximates partitioning done by execution equivalence
♦ It helps in optimizing code
♦ It helps to remove extra lines of code that can introduce hidden flaws

The disadvantages of white box pen testing include:

♦ The testing is more difficult to design
♦ It requires specialized knowledge and tools.
♦ Lack of access to a running system makes it difficult to find defects based on a misconfigured system or issues that only exist when the system is deployed
♦ It’s more expensive
♦ It’s difficult to find hidden errors in every part of source code
♦ It usually requires modifying the program, changing values to force execution paths, or generating a complete range of inputs to test a function

Gray Box Testing

Gray box testing combines elements of black box and white box testing. It simulates an attack by a hacker who has gained access to the network infrastructure documents. The tester has some knowledge of the system being tested, which is usually limited to design documents and architecture diagrams.

Advantages of gray box pen testing include:

♦ It combines the benefits of black box and white box testing
♦ Greater knowledge of the target system can uncover more significant vulnerabilities with less effort
♦ It can better approximate advantages some attackers may have
♦ It’s non-intrusive and unbiased, since the tester doesn’t have access to source code
♦ Testing is performed from the user’s perspective, not the designer’s
♦ There’s no need to provide internal information about the program’s operations and functions

Some disadvantages of gray box pen testing:

♦ There’s limited ability to go over source code and test coverage
♦ There’s a chance of repeating testing already performed by the programmer
♦ It can’t test every possible input stream due to time constraints
♦ It’s unsuitable for algorithm testing

Which Test Should You Choose?

Deciding which testing methodology to adopt depends on the goals of the test. White is best for uncovering semantic errors at the beginning of the lifecycle. Black is ideal for situations where you either don’t have the source code or you want to view the application from an attacker’s viewpoint. Gray provides the most comprehensive software assurance program.

No matter which type of testing your organization ultimately selects, it’s important to have skilled testers perform the tests and analyze the results.


Are you looking to break into the exciting field of cybersecurity? Join our 5-day CompTIA Security+ Bootcamp Training and build your cybersecurity knowledge and skills. 

Or

Become a certified ethical hacker! Unlike strictly theoretical training, our 5-day CEH Bootcamp immerses you in interactive sessions with hands-on labs after each topic. You can explore your newly gained knowledge right away in your classroom by pentesting, hacking and securing your own systems. Learn more 

Posted by & filed under Amazon AWS, Splunk.

Tech Brief Excerpts

When it comes to the cloud, you can’t secure, operate or manage costs for what you can’t see. As more of your critical workloads move to AWS, you need end-to-end visibility to:

• Ensure AWS deployments meet or exceed security and compliance standards
• Guarantee AWS-based applications and services meet defined SLAs
• Gain critical security, operational and cost management insights across your entire AWS environment.

 

Splunk solutions make it easy to gain end-to-end visibility across your AWS and hybrid environment. Deploy as software (Splunk Enterprise) or as a cloud service (Splunk Cloud) to gain a complete view of your cloud, applications and services. Leverage the free Splunk App for AWS (available on Splunk Enterprise, Splunk Cloud and Splunk Insights in AWS Marketplace) to gain critical security, operational and cost optimization insights into your AWS deployment.

Splunk’s Portfolio of Cloud Solutions Includes:

  • Splunk App for AWS: Provides pre-built dashboards, reports and alerts that instantly deliver critical operational and security insights into your AWS deployment
  • Splunk Cloud: Delivers Operational Intelligence as a cloud service, backed by a 100% uptime SLA
  • Splunk Light (cloud service): Automates log search and analysis for small IT environments
  • Splunk Enterprise on AWS: Delivers Operational Intelligence as self-deployed software on AWS in a bring-your-own-license (BYOL) model
  • Amazon Machine Images: Accelerate deployment of Splunk Enterprise, Splunk Light and Hunk on AWS
  • Technology Integrations with AWS Lambda, Kinesis and IoT: Enables direct collection from AWS Lambda (via Splunk HTTP Event Collector) to enable monitoring of AWS Lambda applications as well as services natively integrated with Lambda (such as AWS Kinesis and AWS IoT service)
  • Hunk Integration with EMR/S3: Enables easy exploration of data in Amazon EMR and S3

Gain End-to-End AWS Visibility

The Splunk App for AWS offers a rich set of pre-built dashboards and reports to analyze and visualize data from numerous AWS services — including AWS CloudTrail, AWS Config, AWS Config Rules, Amazon Inspector, Amazon RDS, Amazon CloudWatch, Amazon VPC Flow Logs, Amazon S3, Amazon ECS, Amazon CloudFront, Amazon EBS, Amazon ELB and AWS Billing — all from a single, free app.


Use Splunk as a Cloud Service

Splunk Cloud
Splunk Cloud is an AWS-based service that delivers the benefits of Splunk Enterprise with the flexibility of a cloud service. Using Splunk Cloud, you can search, analyze and visualize data from applications and devices across your entire environment, including AWS, on-premises data centers and any other public/private cloud environments. Splunk Cloud meets the highest levels of security and reliability, having completed SOC 2 Type 2 attestation and ISO 27001 certification. Splunk Cloud is available in over 10 AWS Regions globally, including AWS GovCloud (US).

Deploy Splunk Enterprise on AWS

If you prefer to deploy and manage software rather than a cloud service, Splunk Enterprise is perfect for deploying on AWS. It’s self-contained and can be easily deployed on any Amazon Elastic Compute Cloud (EC2) instance. Splunk Enterprise also scales horizontally, making it ideal for an AWS deployment.


Posted by & filed under Splunk.

Introduction

Cybercrime is rising constantly as attackers come up with new tools and attack techniques. To defend the network and protect companies against cyber-attacks, security professionals need security tools such as IDSs, IPSs, and SIEMs. For my network defense tool assignment, I decided to research Splunk, one of the most popular Security Information and Event Management (SIEM) platforms in the cybersecurity industry. According to a report published on www.itcentralstation.com, Splunk was the best SIEM tool in 2019. It was conceived by founders Rob Das and Erik Swan between 2002 and 2004 to collect data from different sources, detect security events, and perform real-time monitoring. Several types and licensing models of Splunk are available to suit the needs of its customer base; the most popular products are Splunk Enterprise, Splunk Cloud, and Splunk Light. My work is based on the installation of the free version of Splunk Light, its configuration, and its use to detect a DoS attack, an NMAP scanning attack, and failed login attempts.

 

  • Installation Process

1) Go to Splunk and register for a free trial (30 days) to download Splunk Light.

 


2) After creating a Splunk account, choose the appropriate OS version for your computer and click the download link, as shown in the figure below:

 


3) Open the download folder and run the Splunk Light installer as administrator.

4) Accept the license agreement, proceed with the default installation, and click “Finish” when done.

  • Basic Configuration of Splunk Light
  • Type “Splunklight” in the search bar and click on the icon to start the software.
  • When presented with the user interface, use the administrator credentials created during the installation process.


  • After a successful login, it is recommended to take a quick user tour to get familiar with Splunk Light features, such as the console, which consists of the Search, Datasets, Reports, Alerts, and Dashboards tabs.

Search:

It allows users to search through data collected on the platform in different ways. It can be time based, source based, and source type based.

 Datasets:

Use the Datasets listing page to view and manage your existing datasets. Click a dataset name to view its contents. Click Pivot to design a visualization-rich report based on the dataset. Click Explore in Search to extend a dataset in Search and save it as a new report, alert, or dashboard panel.

Reports:

 Reports are based on single searches and can include visualizations, statistics and/or events.


Alerts:

 Alerts set a condition that triggers an action, such as sending an email that contains the results of the triggering search to a list of people.

 

Dashboards:

 Dashboards include searches, visualizations, and input controls that capture and present available data.

  • It is important to properly set the source of the data we want to collect for analysis or monitoring on the platform. As shown in the figure below, there are three options: uploading files from the host computer, monitoring directly from the Splunk instance, and receiving data from remote forwarder instances.

     5) The screenshots below show how I used Splunk Light to collect my Windows 10 security event logs for further analysis by clicking the “local event log collection” link. Once the data is loaded and indexed onto the platform, I can search through it by time range and source type and look for more details on each event; the visualization feature offers a customized view. I have saved a search and created an alert that will send me email notifications as administrator if similar events occur in the future. I can also perform real-time monitoring for the same event if I want to.

 

  • Using Splunk Light to detect security events

1) Successful login on my Windows 10 machine.

 

With the Search function, I was able to detect the logon event that occurred when I logged in as administrator on my local Windows 10 machine.

 


2) Failed logon attempt on the Windows 10 machine

As in the example above, I searched through the collected log files for a failed logon attempt, which is displayed in detail in the two figures below. Based on details such as the time of the event, its ID, and the reason for the failure, I can initiate an investigation following the incident response process.



3) Saving the failed logon attempt as an alert.

Not only was I able to detect the failed logon attempt, but I was also able to save it as an alert that will notify me whenever similar events occur in the future (a scheduled alert). I may also set it for real-time monitoring.


4) Firewall log analysis to detect dropped ICMP packets after a ping attempt from another host on the network.

 

Prior to the ping attempt from the Windows 7 machine, I ensured that the Windows firewall was enabled and that ICMP communication was not allowed by default. At the command-line interface on the Windows 7 machine, I tried to ping the Windows 10 host, whose address is 192.168.1.12.

As highlighted in the third figure below on the Splunk platform, all the ICMP packets were dropped. Significant information, such as the source IP address and the time of the event, also appears in the detected event.

 

  • Detecting successful ICMP packets with Splunk Light

Allowing ICMP packets through the firewall can be dangerous to the network. According to the book Guide to Network Defense and Countermeasures, “a DoS attack called an ICMP flood occurs when multiple packets are sent to a single host on a network. The result of this flood is that the server becomes so busy responding to the ICMP requests that it cannot process other traffic.” Prior to simulating the DoS attack on my Windows 10 machine, I kept the firewall enabled but allowed ICMP traffic through it. The screenshots above illustrate each step of the simulation, in which I initiated multiple ping connections from the Windows 7 machine toward the Windows 10 machine. Those successive packets kept the Windows 10 machine’s CPU busy and unavailable to process legitimate requests.

 

 

  • Saving an unsuccessful ping (ICMP packet detection) as a critical alert.

 

Because the use of the ICMP protocol is risky for the network, any ICMP packet should be taken seriously and analyzed properly. Therefore, the detection above is also saved as a critical, scheduled alert with email notification to the administrator.

 

 


  • The last security event is NMAP scan detection.

The screenshots below illustrate the scenario in which I scanned the Windows 10 machine for open ports using NMAP, which I had previously installed on a Windows 8 server machine.

 

  • Summary and Conclusions

 

Through the installation and use of Splunk Light, I have gained more technical skills with the SIEM tool. I have learned how to collect firewall and event log files and analyze security events upon detection. The simulation sections enabled me to practice what I have learned about common network attacks, such as login attacks, denial-of-service attacks with ICMP packets, and reconnaissance attacks with NMAP scanning. The implementation of Splunk proved useful: it detected all three security events that occurred on my Windows 10 machine. Security professionals should therefore always configure Splunk to send notifications for critical alerts. When Splunk detects a failed logon attempt, an administrator should follow the incident response process by investigating the event, finding the reason behind it, and implementing a countermeasure such as an account lockout policy. The best defense against a DoS attack with ICMP packets is to disable the echo request rules in the firewall and to use antivirus software.

 

By Bidossessi Tobossou (ASM Instructor)

Sources: James Miller, Implementing Splunk 7 (Third Edition); Randy Weaver, Guide to Network Defense and Countermeasures, Chapter 3, p. 85; ITCentralStation; Splunk

Posted by & filed under Amazon AWS.

How to Enable Elastic IP addresses on AWS SFTP server endpoint?

To make your AWS SFTP server accessible using Elastic IP addresses, create an internet-facing endpoint for your server.

However, if you must change the listener port to a port other than port 22 (for migration), then follow these steps:

Important: Don’t proceed with the following steps if your listener port can be port 22. Instead, create an internet-facing endpoint for your server.

Create an Amazon VPC and allocate IP addresses

  1. Create an Amazon Virtual Private Cloud (Amazon VPC) in the same AWS Region as your AWS SFTP server.
  2. Allocate three Elastic IP addresses in the same Region as your AWS SFTP server. Or, you can choose to bring your own IP address range (BYOIP).

Create an Amazon VPC endpoint

  1. Open the Amazon VPC console.
  2. From the navigation pane, choose Endpoints.
  3. Choose Create Endpoint.
  4. For the Create Endpoint page, enter the following:
      For Service category, select AWS services.
      For Service Name, select the service name that ends with transfer.server. For example, if you’re in the us-east-1 Region, then select com.amazonaws.us-east-1.transfer.server.
      For VPC, select the Amazon VPC that you want to use for access to your SFTP server.
      For Subnets, select the three subnets that you want to use.
      For Enable Private DNS Name, keep Enable for this endpoint selected.
      For Security group, you can select existing security groups or you can create a new security group.
      Note: The security group that you use must allow inbound access on port 22 from the subnets of the load balancer that you’ll create in a later step.
  5. Choose Create endpoint.
  6. Under The following VPC Endpoint was created, choose the link to the endpoint to view its details.
  7. Choose the Subnets tab.
  8. Note the private IP addresses associated with each subnet. You need these IP addresses in a later step.
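If you prefer to script these steps, here is a hedged boto3 equivalent of the console walkthrough above (all IDs are placeholders, and the service name must match your Region):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

resp = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",                          # placeholder
    ServiceName="com.amazonaws.us-east-1.transfer.server",
    SubnetIds=["subnet-aaa111", "subnet-bbb222", "subnet-ccc333"],
    SecurityGroupIds=["sg-0123456789abcdef0"],              # must allow port 22
    PrivateDnsEnabled=True,
)
print(resp["VpcEndpoint"]["VpcEndpointId"])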

Configure the VPC endpoint on your AWS SFTP server

  1. Open the AWS SFTP console.
  2. Select your server. Choose Actions and then choose Stop.
  3. After the server’s State changes to Offline, choose the link for Server ID to view the server’s configuration.
  4. For Server configuration, choose Edit.
  5. For Edit configuration, enter the following:
      For Endpoint type, choose VPC Endpoint.
      For VPC endpoint, select the endpoint that you created.
  6. Choose Save.
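A scripted equivalent of this reconfiguration, sketched with the boto3 Transfer client (the server and endpoint IDs are placeholders):

import boto3

transfer = boto3.client("transfer")
SERVER_ID = "s-0123456789abcdef0"  # placeholder server ID

# Stop the server, switch it to the VPC endpoint, then start it again.
transfer.stop_server(ServerId=SERVER_ID)
transfer.update_server(
    ServerId=SERVER_ID,
    EndpointType="VPC_ENDPOINT",
    EndpointDetails={"VpcEndpointId": "vpce-0123456789abcdef0"},  # placeholder
)
transfer.start_server(ServerId=SERVER_ID)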

Create a Network Load Balancer and define the Amazon VPC endpoint as the load balancer’s target

  1. Open the Amazon Elastic Compute Cloud (Amazon EC2) console.
  2. From the navigation pane, choose Load Balancers.
  3. Choose Create Load Balancer.
  4. Under Network Load Balancer, choose Create.
  5. For Step 1: Configure Load Balancer, enter the following:
      For Name, enter a name for the load balancer.
      For Scheme, select internet-facing.
      For Listeners, keep Load Balancer Protocol as TCP. Then, change the associated Load Balancer Port to your custom listener port.
      For VPC, select the Amazon VPC that you want to use.
      For Availability Zones, select the Availability Zones associated with the three subnets that you want to use.
      For the IPv4 address of each subnet, select one of the Elastic IP addresses that you allocated.
  6. Choose Next: Configure Security Settings.
  7. Choose Next: Configure Routing.
  8. For Step 3: Configure Routing, enter the following:
      For Target group, select New target group.
      For Name, enter a name for the target group.
      For Target type, select IP.
      For Protocol, select TCP.
      For Port, enter 22.
      Under Health checks, for Protocol, select TCP.
  9. Choose Next: Register Targets.
  10. For Step 4: Register Targets, enter the following:
      For Network, confirm that the Amazon VPC you want to use is selected.
      For IP, enter the private IP address of one of your Amazon VPC’s subnets. You copied these IP addresses when you created the Amazon VPC endpoint.
  11. Choose Add to list.
  12. Repeat steps 10 and 11 until you’ve entered the private IP addresses of all three subnets.
  13. Choose Next: Review.
  14. Choose Create.
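For completeness, here is a hedged boto3 sketch of the same load balancer setup (names, subnet and Elastic IP allocation IDs, target IPs, and the custom listener port 2222 are all placeholders):

import boto3

elbv2 = boto3.client("elbv2")

# Internet-facing NLB with one Elastic IP per subnet.
lb = elbv2.create_load_balancer(
    Name="sftp-nlb",
    Type="network",
    Scheme="internet-facing",
    SubnetMappings=[
        {"SubnetId": "subnet-aaa111", "AllocationId": "eipalloc-111"},
        {"SubnetId": "subnet-bbb222", "AllocationId": "eipalloc-222"},
        {"SubnetId": "subnet-ccc333", "AllocationId": "eipalloc-333"},
    ],
)["LoadBalancers"][0]

# IP-type target group pointing at port 22, with TCP health checks.
tg = elbv2.create_target_group(
    Name="sftp-targets",
    Protocol="TCP",
    Port=22,
    VpcId="vpc-0123456789abcdef0",
    TargetType="ip",
    HealthCheckProtocol="TCP",
)["TargetGroups"][0]

# Register the VPC endpoint's three private IP addresses as targets.
elbv2.register_targets(
    TargetGroupArn=tg["TargetGroupArn"],
    Targets=[{"Id": ip, "Port": 22}
             for ip in ("10.0.1.10", "10.0.2.10", "10.0.3.10")],
)

# Listen on the custom port and forward to port 22 on the targets.
elbv2.create_listener(
    LoadBalancerArn=lb["LoadBalancerArn"],
    Protocol="TCP",
    Port=2222,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
)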

Important: To control access to your server from client IP addresses, use the network access control lists (ACLs) for the subnets configured on the load balancer. Network ACL permissions are set at the subnet level, so access rules apply to all resources using the subnet. You can’t control access from client IP addresses using security groups, because the load balancer’s target type is set to IP instead of Instance. This means that the load balancer doesn’t preserve source IP addresses.

Test access to the server from an Elastic IP address

After you configure the Amazon VPC endpoint and the Network Load Balancer, you can test access to your AWS SFTP server. For example, the following OpenSSH command connects to the server from a specific IP address:

Note: Replace 192.0.2.3 with an Elastic IP address that you allocated.

sftp -i sftpuserkey -P [port] sftpuser@192.0.2.3

If the Network Load Balancer’s health checks fail, this means the load balancer can’t connect to the AWS SFTP endpoint. To troubleshoot this, check the following:

  • Confirm that the VPC endpoint’s associated security group allows inbound connections from the subnets configured on the load balancer. The load balancer must be able to connect to the Amazon VPC endpoint over port 22.
  • Confirm that the AWS SFTP server’s State is Online.

Our AWS Certified Solutions Architect – Associate certification training is for beginners and professionals who run enterprise architecture programs, as well as solutions architects. It covers deployment of AWS systems, AWS best practices, and many other topics. The AWS Course covers skills for working with the Amazon Cloud, Alexa, Amazon S3, Amazon EC2, autoscaling and load balancing, serverless websites and many more.

Our next AWS 5-day bootcamp will start soon!

 

Posted by & filed under Splunk.

Here is what you should know about Splunk!

What is Splunk?

Splunk is the world’s first Data-to-Everything Platform. Now organizations no longer need to worry about where their data is coming from, and they are free to focus on the business outcomes that data can deliver. Innovators in IT, Security, IoT and business operations can now get a complete view of their business in real time, turn data into business outcomes, and embrace technologies that prepare them for a data-driven future.

What Does a Splunk Core Certified User Do?

A Splunk Core Certified User is able to search, use fields, create alerts, use look-ups, and create basic statistical reports and dashboards in either the Splunk Enterprise or Splunk Cloud platforms. This optional entry-level certification demonstrates an individual’s basic ability to navigate and use Splunk software.

Average Salary for Different Splunk Job Descriptions

Splunk pays its employees an average of $128,607 a year, with salaries ranging from an average of $85,276 to $181,095 a year depending on the role.

An Example of Splunk's Operations.

Splunk is software mainly used for searching, monitoring, and examining machine-generated Big Data through a web-style interface. Splunk captures, indexes, and correlates real-time data in a searchable container from which it can produce graphs, reports, alerts, dashboards, and visualizations.

Excerpt From Splunk Case Study

 

Why Splunk
FamilySearch planned to move to a continuous delivery model on AWS to increase business agility and more rapidly deliver features to its patrons. To do so, the company needed a monitoring service that could immediately detect changes across its website properties following the push of a new release. This service needed to support the cultural change to a DevOps deployment model. To meet this need, FamilySearch chose Splunk Cloud to help analyze logs from all components of its IT environment in real time, without additional developer effort. Leveraging Splunk Cloud, FamilySearch is now able to identify errors in real time and has successfully moved to continuous delivery with over 900 deploys per day.


The Splunk platform aggregates and analyzes digital exhaust from various sources, including application program interface (API) pulls and log files from applications, servers, mobile devices and websites. Turning machine data into operational intelligence helps Splunk’s customers gain an understanding of what’s happening across their IT systems and technology infrastructure in real-time in order to facilitate data-driven decision management.

Join our Splunk Training and get certified!

Posted by & filed under CompTIA A+, MICROSOFT MTA O/S.

Establishing Network Connections

Configure Microsoft Windows Networking on a Client/Desktop

CompTIA A+ Core 2 Section 1.8

VPN

Active reconnaissance is a type of computer attack in which an intruder engages with the targeted system to gather information about vulnerabilities. This may be done through automated scanning or manual testing using various tools like ping, traceroute, netcat, etc. This type of recon requires the attacker to interact with the target. It is faster and more accurate, but it also makes much more noise. Since the attacker has to interact with the target to gain information, there is an increased chance that the recon will get caught by a firewall or one of the network security devices (Intrusion Detection Systems, network firewalls, etc.).

 

How to Connect to a VPN in Windows 10

Note that some VPN providers offer their own desktop clients, which means you won’t need the setup process described in this guide.

Windows 10 supports PPTP, L2TP/IPsec, SSTP, and IKEv2 connections without any third-party software.

To connect to a VPN on Windows 10, head to Settings > Network & Internet > VPN. Click the “Add a VPN connection” button to set up a new VPN connection.

 

 

 

Provide the connection details for your VPN. You can enter any name you like under “Connection Name”. This name is just used on your computer to help you identify the VPN connection.

Your VPN provider should be able to provide you with these details. If the VPN is provided by your employer, your employer’s IT department should provide you with the details you’ll need to connect.

Once you’ve set up a VPN, you’ll see it in the network popup menu next to any nearby Wi-Fi networks.

Click the network name in the popup menu and Windows will open the Settings > Network & Internet > VPN window for you. Select the VPN and click “Connect” to connect to it. You can also configure or remove VPN connections from here.



Dial-ups

How to Connect to a Dial-up in Windows 7

A Dial-up connection is one of the first forms of internet service. This type of connection relies on the use of traditional telephone lines to carry data packets and provide users with access to the web. Although popularly used during the days when the internet was still young, it is currently labeled as an outdated method of connecting to the internet because of its slow data transfer speeds, and its reliance on telephone lines when users access the internet. This means that a home dial-up connection only provides one service at a time and that people in a household with a dial-up connection are unable to use the phone to take calls and browse the internet at the same time.

Dial-up connections are able to provide users with 56 Kbps of bandwidth, with the 56K modem only becoming available in 1998. Earlier modems ran at significantly slower speeds which, by today’s standards, would not be able to handle the data and media transfers that users need to functionally access a regular website, much less stream videos on YouTube or view images on Instagram.

1) Click the Start button

2) Select Control Panel

3) Select Network and Internet and select Connect to the Internet

4) Select Dial-up

5) In the dialogue box that appears enter the connection details:

Dial-up phone number

User name: (your account ID)

Password: (your NCF password)

Connection name: The default name is Dial-up Connection, but you can identify it as NCF or any other name that you want instead.

6) Allow other people to use this connection: check this if you want other users of your computer who have their own accounts on your PC to have access to the dial-up connection.

Click Connect to dial the connection.

Open your web browser and it should now load your home page, indicating that you are connected to the internet.

After you have set up the connection, any time you open your web browser it should automatically dial NCF and connect you to the internet.



Wireless

WLAN means wireless local area network.

WLAN is a type of Local Area Network (LAN) that uses high-frequency radio waves rather than wires to communicate and transmit data. It is a flexible data communication system implemented as an extension to, or as an alternative for, a wired LAN.

A WLAN can be built using any of several different wireless network protocols, most commonly Wi-Fi or Bluetooth.

How to Set-up a wireless router

Step 1: Connect your router to your modem

Don’t plug your router into the power outlet just yet. Instead, the first thing you’re going to do is connect your modem to the WAN port, with an Ethernet cable.

Not sure what the WAN port is? Most consumer routers feature a group of Ethernet ports, several of which are called LAN (local area network) ports and one of which is called the WAN (wide area network) port. The WAN port looks the same as the LAN ports, but it’s often a different color and spaced apart from them. On many routers, for instance, the WAN port is yellow and labeled “Internet”; sometimes it will be labeled “WAN.” This port is intended to connect your router to the modem, and the outside world from there. The LAN ports are intended for connecting to local devices.

Plug one end of an Ethernet cable into your modem, and the other end into your router’s WAN port. Then plug your router’s power adapter into the wall.

Step 2: Connect your router to your computer

Connect a computer manually to the router using an Ethernet cable. Once you’re connected, head to your router’s configuration page. This is basically a website hosted on your router, and it’s a traditional way to configure your router’s settings. How to find it can vary, but it almost always means typing 192.168.1.1 into the address bar of your preferred browser, then hitting Enter.
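If 192.168.1.1 doesn’t work, you can look up your router’s actual address from any connected Windows machine: open a Command Prompt, run

ipconfig

and check the “Default Gateway” line for your adapter. That address is where your router’s configuration page lives.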

Alternative Method:

Today’s routers can typically be set up with a mobile app that will automatically walk you through the setup process. It’s the most convenient method, but you have to make sure you choose the right app.

Instructions for what app to use should be in your manual.

Now connect to your new Wi-Fi network. This is usually a simple process: head to the settings on your phone or computer, and choose the Wi-Fi/Network option (Google just has you scan a QR code, so this process can vary a little). Look for your router’s name here: your router should have a default SSID printed on the back of the router, along with the default network key. Use the key to log into the network.

Once connected, launch the router management app to begin the setup!

Step 3: Create a username and password, or find the existing one

For most new routers, you’ll be asked to create a network name (again, this is called an SSID) and password. Make sure this password is unique and secure, because anyone who has it can do all kinds of nasty things to your network.

WPA2 encryption is currently the standard for security and you should always choose it if given an option for your security protocol. Make sure you choose a long password, too, one that is distinct from the admin password you set earlier and ideally doesn’t include words from the dictionary or an easily guessed name (like the name of your pet). Afterwards, store it somewhere secure.

A used router, however, may already have a password and need to be reset. Most routers have a recessed reset button on the back. Holding down the reset button with a paperclip for at least 30 seconds will do the trick. Leave the power on, and wait for the router to reset. It should return to its default settings when it restarts, meaning the default username and password will now grant you access to settings.

Step 4: Continue setting up your router

Set up other settings such as parental controls, automatic updates, and more. You can tweak these settings and more later from the app, but pay attention to the walkthrough and everything it teaches you. You should learn about guest access, monitoring activity, and more. When finished, the app will configure network settings and finish setting up your network. You can now explore the app at leisure to learn more.

Additional information

With these steps out of the way, your router should be ready to use. Go ahead and connect your devices, and enjoy!

Of course, there’s a lot more you can configure, if you dig into the settings. Setting up port forwarding can be useful, and the true power users should consider replacing their firmware with DD-WRT to get access to all kinds of settings that are otherwise not offered. For most users, however, working Wi-Fi and secure access to the administrative backend is a great place to start.



Wired

What is LAN?

A local area network (LAN) consists of a series of computers linked together to form a network in a circumscribed location. The computers in a LAN connect to each other via TCP/IP over Ethernet. A LAN is normally exclusive to an organization, such as a school, office, association or church.

What is wired LAN?

Wired connections are by far the most common. The main media in use are coaxial cable, twisted pairs and fibre optics. For each of these, specific network technologies or specifications have been designed. The medium must have properties that will ensure a reasonable error performance for a guaranteed distance and rate of data delivery (i.e. speed). It must also support two-way or multiway communications.

WWAN

Wireless WAN is a wide area network in which separate areas of coverage or cells are connected wirelessly to provide service to a large geographic area.

WWAN services are typically delivered to smart phones and other handheld devices sold by cellular service providers and their retail partners but other mobile devices can use them as well. Some netbooks are available with WWAN cards installed; you can also purchase wireless WAN cards to install yourself. Unlike Wi-Fi cards, which can be used in just about any hotspot, WWAN devices must be provisioned specifically for access to your service provider’s network. Your service provider will take care of billing for roaming access that involves other provider networks.

The three families of WWAN technologies are GSM/UMTS, CDMA One/CDMA2000 and WiMAX. In the United States, service providers include AT&T, Clearwire, Sprint and Verizon. Wireless WAN services are expected to become increasingly available as 4G technologies mature.

A+ certification develops PC hardware and software troubleshooting skills. Professionals in the information technology industry may find A+ certification to be an advantage when searching for jobs. Technicians that have the certification tend to earn 5 to 15 percent more than those that don’t. Some employers require A+ certification for entry-level technical support positions. IT professionals that obtain the certification pursue many career paths, such as technical support specialist and IT support administrator. Learn more: https://asmed.com/course/comptia-a-boot-camp/

Posted by & filed under CompTIA Security+, MICROSOFT MTA SECURITY.

Load Balancing

CompTIA Security+|MTA Security – Load Balancing

CompTIA Security+ Section 2.0

 

Load Balancer
Definition: “Load balancing is defined as the methodical and efficient distribution of network or application traffic across multiple servers in a server farm.”

A load balancer works in between client devices and back-end servers, receiving and then distributing incoming requests to any available server capable of fulfilling them. It can be physical hardware, a virtualized instance running on specialized hardware, or a software process.

Scheduling:

   Affinity – “an affinity means that the load balancer will always use the same server for a particular user or a particular application instance.”

   Round-Robin – “One type of scheduling is called round-robin scheduling, where each server is selected in turn. If round-robin scheduling is set up one to one, then the first bit of traffic will go to Server A. The second bit of traffic through the load balancer will be scheduled to Server B. The third bit of traffic through the load balancer will be scheduled to Server C. And because this load balancer is scheduling in a round-robin method, the last bit will go to Server D.”

Active – Passive: “Some load balancers can be configured for active/passive load balancing. That means that some servers will be currently active and able to take requests, and other servers are on standby.” If an active server fails, “the load balancer will identify the failure and begin using one of the standby servers in its place.”

Active – Active: “That means that all of these servers are active. And requests coming through the load balancer could use any of these active servers at any time.”
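To make round-robin scheduling concrete, here is a minimal Python sketch. This is an illustration only, not how any particular load balancer is implemented, and the server names are placeholders:

import itertools

servers = ["Server A", "Server B", "Server C", "Server D"]

# itertools.cycle walks the pool in order and wraps around at the end,
# which is the same rotation described in the quote above.
pool = itertools.cycle(servers)

for request_id in range(6):
    print("request", request_id, "->", next(pool))

Requests 0 through 3 land on Servers A through D, and requests 4 and 5 wrap back around to Servers A and B.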

 

 

 

Why is it necessary?

“An ADC with load balancing capabilities helps IT departments ensure scalability and availability of services. Its advanced traffic management functionality can help a business steer requests more efficiently to the correct resources for each end user. An ADC offers many other functions (for example, encryption, authentication and web application firewalling) that can provide a single point of control for securing, managing and monitoring the many applications and services across environments and ensuring the best end-user experience.”

 



Are you looking to break into the exciting field of Cybersecurity? Join our 5-day CompTIA Security+ Bootcamp and build your cybersecurity knowledge and skills.

OR

Become a certified ethical hacker! Unlike other strictly theoretical training, our 5-day CEH Bootcamp immerses you in interactive sessions with hands-on labs after each topic. You can explore your newly gained knowledge right away in your classroom by pentesting, hacking and securing your own systems. Learn more

Posted by & filed under CompTIA Security+, MICROSOFT MTA SECURITY.

Penetration Testing Concepts 1

Active Vs Passive Reconnaissance

CompTIA Sec+ Section 1.4

Active Reconnaissance

Active reconnaissance is a type of computer attack in which an intruder engages with the targeted system to gather information about vulnerabilities. This may be done through automated scanning or manual testing using tools like ping, traceroute and netcat. This type of recon requires that the attacker interact with the target. It is faster and more accurate, but it also makes much more noise. Because the attacker has to interact with the target to gain information, there is an increased chance that the recon will be detected by a firewall or another network security device (intrusion detection systems, network firewalls, etc.).

Tools and Techniques Used

In order to gather information from the target host, attackers normally use port scanning techniques.

Nmap is probably the most well-known tool for active network reconnaissance. Nmap is a network scanner designed to determine details about a system and the programs running on it. This is accomplished through the use of a suite of different scan types that take advantage of the details of how a system or service operates. By launching scans against a system or a range of IP addresses under a target’s control, a hacker can learn a significant amount of information about the target network.
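For example, a hacker might run a scan like the one below. The target range is a placeholder, and you should only scan networks you are authorized to test:

$ nmap -sS -sV -O 192.168.1.0/24

Here -sS performs a TCP SYN scan, -sV probes open ports for service versions, and -O attempts OS detection.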

Metasploit is primarily designed as an exploitation toolkit. It contains a variety of different modules that have prepackaged exploits for a number of vulnerabilities. With Metasploit, even a novice hacker has the potential to break into a wide range of vulnerable machines.

Although it was designed as an exploit toolkit, Metasploit can also be effectively used for reconnaissance. At the minimum, using the autopwn option on Metasploit allows a hacker to try to exploit a target using any means necessary. More targeted analysis can allow a hacker to perform reconnaissance using Metasploit with more subtlety.

Port Scanning

Port scanning is the systematic scanning of a computer’s ports. Since all information going in and out of a networked computer passes through ports, port scanning identifies which ports on a computer are open. Through port scanning, an attacker can infer which services are visible and where an attack is possible. The basic principle of port scanning is to retrieve data from an open port and analyze it.



Passive Reconnaissance

Passive reconnaissance is an attempt to gain information about targeted computers and networks without actively engaging with the systems. The attacker gathers information without alerting the victim; if the victim host is alerted, it will drastically increase its security against the attack.

Tools and Techniques Used

Wireshark is best known as a network traffic analysis tool, but it can also be invaluable for passive network reconnaissance. If an attacker can gain access to an organization’s Wi-Fi network or otherwise eavesdrop on the network traffic of an employee (e.g., by eavesdropping on traffic in a coffee shop), analyzing it in Wireshark can provide a great deal of useful intelligence about the target network.

Shodan is a search engine for internet-connected devices. As the Internet of Things grows, individuals and organizations increasingly are connecting insecure devices to the internet.

Using Shodan, a hacker may be able to find devices within the IP address range belonging to a company, indicating that they have the device deployed on their network. Since many IoT devices are vulnerable by default, identifying one or more on the network may give a hacker a good starting point for a future attack.

OS Fingerprinting

OS fingerprinting is a method for determining which operating system a remote computer runs. It is widely used in cyber reconnaissance, because most exploitable vulnerabilities are operating-system specific.



COUNTERMEASURES

Considerable research has been done on detecting cyber attacks at the reconnaissance phase. Since reconnaissance is the primary and starting phase of any cyber attack, a solution that detects it would be a significant step toward an effective early-warning system. A cyber attack is a sequence of reconnaissance, infiltration and conclusion phases. A layered defense is always best, but defense strategies will differ from network to network. Stateful firewalls, Intrusion Detection Systems, Intrusion Prevention Systems and NAT are preferable solutions. The firewall should be configured to allow only necessary traffic and to log multiple connections from the same IP address; because the firewall is stateful, this blocks scans such as FIN, NULL and XMAS and detects SYN scans. A tool like Snort should be used as an IDS to monitor traffic and detect anomalous activity, and it should be configured to flag multiple connections from the same IP address. Using NAT is good practice, as it exposes only a single IP address and blocks OS fingerprinting attempts. Finally, apply all the latest patches to close vulnerable points.
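As a rough illustration, a rule to flag NULL-scan packets could look like the sketch below. This is written in Snort’s rule syntax with an arbitrary SID; a real deployment would tune variables and thresholds:

alert tcp any any -> $HOME_NET any (msg:"Possible NULL scan"; flags:0; sid:1000001; rev:1;)

Here flags:0 matches TCP packets with no flags set, which legitimate traffic essentially never sends.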



Are you looking to break into the exciting field of Cybersecurity? Join our 5-day CompTIA Security+ Bootcamp and build your cybersecurity knowledge and skills.

OR

Become a certified ethical hacker! Unlike other strictly theoretical training, our 5-day CEH Bootcamp immerses you in interactive sessions with hands-on labs after each topic. You can explore your newly gained knowledge right away in your classroom by pentesting, hacking and securing your own systems. Learn more

Posted by & filed under Amazon AWS.

How to Install Python on Amazon Linux Server

In technical terms, Python is an object-oriented, high-level programming language with integrated dynamic semantics, used primarily for web and app development. It is extremely attractive in the field of Rapid Application Development because it offers dynamic typing and dynamic binding options. Python is relatively simple and easy to learn, since it uses an uncluttered syntax that focuses on readability.

At the time of writing, Python 3.7.4 is the latest stable version available in the 3.7 series. This tutorial will help you to install Python 3.7.4 on Amazon Linux systems.

Step 1 - Install the GCC Compiler

Log in to your server using SSH or shell access. Now, use the following command to install the prerequisites for Python before installing it.

$ sudo yum install gcc openssl-devel bzip2-devel libffi-devel

Step 2 - Download Python

Download Python from the official Python site using the following command. You can also download a newer version in place of the one specified below.

$ cd /opt
$ sudo wget https://www.python.org/ftp/python/3.7.4/Python-3.7.4.tgz

Now extract the downloaded package.

$ sudo tar xzf Python-3.7.4.tgz

Step 3 - Install Python

Use the below set of commands to compile the Python source code on your system and install it with altinstall.

$ cd Python-3.7.4
$ sudo ./configure --enable-optimizations
$ sudo make altinstall

make altinstall is used to prevent replacing the default python binary file /usr/bin/python.

Now remove the downloaded source archive file from your system.

$ sudo rm /opt/Python-3.7.4.tgz

Step 4 - Check Python Version

Check that Python installed correctly by checking its version. Use the command python3.7 instead of python; the new binary is installed at /usr/local/bin/python3.7:

$ python3.7 -V
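If the installation succeeded, the command should print the version, for example:

Python 3.7.4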

Posted by & filed under Amazon AWS.

How to backup an EC2 instance through Lambda

Amazon AWS EBS Backup via Lambda and CloudWatch

The goal is to back up the EBS volumes via AWS Lambda and CloudWatch. We will do it two ways: one on a 1-minute interval (creating the trigger from inside the Lambda function) and another on a 5-minute interval (creating the rule from the CloudWatch console).

Step 1) Right now I have two EC2 instances, and I see their two volumes under Volumes on the left side.

Step 2) When I go to Snapshots on the left side, I do not see anything yet.

Step 3) In order to create a function, I need to create a role, and in order to create a role, I need to create a policy.

Step 4) Go to IAM, click Policies on the left side, click Create policy, then click the JSON tab and copy and paste the code below (we will see what this code does inside the Lambda function).

Step 5)

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "cloudwatch:DeleteAlarms",
                "cloudwatch:DescribeAlarmHistory",
                "cloudwatch:DescribeAlarms",
                "cloudwatch:DescribeAlarmsForMetric",
                "cloudwatch:GetMetricStatistics",
                "cloudwatch:ListMetrics",
                "cloudwatch:PutMetricAlarm",
                "ec2:CreateSnapshot",
                "ec2:ModifySnapshotAttribute",
                "ec2:ResetSnapshotAttribute",
                "ec2:Describe*",
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents",
                "iam:GetRole",
                "iam:ListRoles",
                "lambda:*"
            ],
            "Resource": "*",
            "Effect": "Allow"
        }
    ]
}

Step 6) Click Review policy and give it the name BackupEC2policy.

Step 7) Go to Roles on the left side, pick Lambda, and click Next for permissions. Search for the policy you created above, go to the next page (no tags), then click Review and give it the name BackupEC2Role.

Step 8) Now we will go to Lambda and use the role created above.

Step 9) Give the function the name BackupEC2, choose the runtime Python 3.6, pick “Use an existing role,” choose BackupEC2Role, then click Create function.

Step 10) On the next page we will see, on the right side, what the function is allowed to do (this comes from the policy we created before).

Step 11) Now copy and paste the function code below into the editor, and make sure to save it.

import json
import boto3

# Setting up the EC2 client.
ec2 = boto3.client('ec2')

# Our Lambda handler function!
def lambda_handler(event, context):
    # Printing the event received.
    # print("Received event: " + json.dumps(event, indent=2))

    # Let's print the rule ARN to the logs so we know which trigger fired!
    rule_name = event['resources']
    print(rule_name)

    # Setting the variable to loop through later.
    # Filtering by only looking for 'in-use' EBS volumes.
    total_ebs = ec2.describe_volumes(Filters=[{'Name': 'status', 'Values': ['in-use']}])

    # Looping through and collecting all in-use EBS volumes.
    for volume in total_ebs['Volumes']:
        # Creating the snapshot, using the attached instance ID as the description.
        ec2.create_snapshot(VolumeId=volume['VolumeId'], Description=volume['Attachments'][0]['InstanceId'])
        print("All done with volume: " + volume['VolumeId'])

Step 12) On the left side of the Lambda designer you will see the triggers. To create the trigger, I will go to CloudWatch first (later on I will do it another way), then click Rules on the left side.

Step 13) Click Rules, then click Create rule.

Step 14) Click Schedule and choose a fixed rate of 5 minutes, then add the Lambda function as the target, pick BackupEC2, and give the rule the name BackupEC25mins.

Step 15) Now go back to your Lambda function; on the left side you will see CloudWatch Events listed as one of the triggers.

Step 16) When you click on it, you will see the 5-minute interval.

Step 17) Now when I go to Snapshots, I will see a few of them (about 4 if more than 5 minutes have passed). Next I will disable this trigger and delete the snapshots; then I will configure the trigger from inside the Lambda function.

Step 18) Also, when you go to the bottom of the Lambda function page, you will see the two invocations.

Step 19) Now I will create a trigger inside the Lambda function. I will pick CloudWatch Events, then click Rules (as we can see, we have the rule from the last steps), then click Create a new rule. Give it the rule name BackupEC2_1mins and the description “This will back up every one minute,” click Schedule expression, and type rate(1 minute).

Step 20) Now make sure it is enabled and click Add.

Step 21) Now you will see the number two next to CloudWatch Events; one trigger is enabled and one is disabled.

Step 22) After one more minute, when you go back to Snapshots, you should see the snapshots of the volumes.

Step 23) Now, if you go to CloudWatch, you should see the new events there too.

Step 24) In CloudWatch you will also see all the logs and all the events that have been created.

Step 25) When you are done, make sure to disable the trigger (CloudWatch rule) and delete the snapshots the lab created.
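If you prefer to clean up from the command line instead of the console, the standard AWS CLI commands should work (the snapshot ID below is a placeholder):

$ aws ec2 describe-snapshots --owner-ids self
$ aws ec2 delete-snapshot --snapshot-id snap-0123456789abcdef0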

 

AWS Certified Solutions Architect – Associate is a technical certification offered by Amazon Web Services (AWS) for beginners and professionals who run enterprise architecture programs, as well as solutions architects. It covers deployment of AWS systems, AWS best practices, and many other topics.

The AWS course covers skills for working with the Amazon cloud, Alexa, Amazon S3, Amazon EC2, autoscaling and load balancing, serverless websites, and much more.

Our next 5-day bootcamp will start soon!


Posted by & filed under CompTIA A+.

32-bit VS 64-bit

CompTIA A+ Objective 1.1

The terms “32-bit” and “64-bit” are commonly seen in system requirements and other technical literature, but few people actually know what these terms mean. Do they refer to hardware or software specifications?

Up until 2006, all Windows operating systems had been written using 32-bit architecture. “32-bit” is simply the width of the data bus (think of it as lanes on a highway). As computer hardware technology progressed, faster and more efficient data handling quickly became a necessity, and out of these increasing requirements came new hardware and software architectures for managing these advancements: the 64-bit processor and the 64-bit operating system. While 64-bit technology is really nothing new, for most everyday users it was never practical. Now that costs have been reduced for the necessary hardware, 64-bit operating system use is on the rise.

In Windows, you can see whether you have a 32-bit or 64-bit operating system by going to the Control Panel and opening the System icon, or by using the Windows key + Pause hotkey.
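You can also check from a Command Prompt; for example, both of these are standard on Windows:

systeminfo | find "System Type"
wmic os get osarchitecture

The first reports something like “x64-based PC”; the second prints “64-bit” or “32-bit” for the installed OS.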

32 Bit Operating Systems

Only a few of the computers operating in the world today use an operating system with 32-bit memory management architecture (data bus). These are commonly referred to as x86 systems (a reference to the early 386/486 processor families). Few 32-bit operating systems remain on the market today. Here are a few examples:

♦ Microsoft Windows: These include Windows 95, 98, NT, 2000, XP, Vista, and Server
♦ Linux: These include Red Hat, Mandrake, and Ubuntu
♦ Solaris: Versions 1-10
♦ Mac OS: Classic (1984-2001) and OS X
♦ FreeBSD: Versions 1-8

64 Bit Operating Systems

In 2002, Linux and Microsoft released the first commercial 64-bit operating systems: Red Hat released version 7.1 and Microsoft introduced Windows XP 64-Bit Edition. These were first used by server administrators and by users of high-end software, such as rendering applications. As larger data stores, such as 4.7 GB DVDs, needed to be accessed more efficiently, 64-bit operating systems were offered to consumer users as well. As of 2016, 64-bit operating systems are the standard.

The Difference

The main difference between 32-bit and 64-bit operating systems is the way that they manage memory. For example, Windows XP 32-bit is limited to a total of 4 GB maximum of system memory to be allocated by the kernel and applications (this is why systems with 4 GB of RAM do not show the total system memory in Windows: kernel = 1 GB reserved, applications = 3 GB viewable). Windows 64-bit has a limit of 16 terabytes maximum of system memory allocation. This is extremely important for performance, because data in memory is accessed thousands of times faster than data on a disk drive. Programs also load much faster into memory. There is also better security in XP 64-Bit because it is written from the Server 2003 SP1 code base. Users of modeling, statistical, and rendering software benefit especially from the 64-bit architecture, because these programs are usually very processor- and memory-intensive.

While it is possible to install a 32-bit operating system on a 64-bit system, it is best to install a 64-bit version if possible. The 64-bit OS will allow your computer to access more RAM, run applications more efficiently, and, in most cases, run both 32-bit and 64-bit programs.

Exceptions to this rule include system utilities and antivirus programs that are written specifically for a 32 or 64-bit operating system. In these cases, you will need to install the version of the software that corresponds to your OS (which may be different than your hardware). Additionally, device drivers are often written for specific operating systems, so it is important to install 32-bit drivers if you are running a 32-bit OS and 64-bit drivers if your operating system is 64-bit.

Here are some other possible obstacles of using a 32-Bit operating system:

Applications: As with any legacy technology, vendors no longer develop applications for 32-bit operating systems.

Hardware: Many newer processors and platforms are designed for, and may require, a 64-bit operating system.

Drivers: Manufacturers often do not offer 32-bit driver versions for their hardware due to a lack of market demand for their product.

Summary

Hopefully the above explanation has given you a better understanding of what 32-bit and 64-bit mean in reference to hardware and software. If you’re installing an operating system, it’s important to know what type of processor your computer has so you can install the right one. If you are installing software, it’s important to know what type of operating system your computer is running, so you can install the correct version.

Most PCs today are 64-bit. However, if you have a 32-bit computer, you have no choice but to install a 32-bit OS. If you have a 64-bit system but have some mission critical peripherals that do not have 64-bit drivers, installing a 32-bit OS might be your best option. In all other cases, it is best to buy 64-bit hardware, install a 64-bit operating system, and run 64-bit applications. This combination will provide the best possible performance.

A+ certification develops PC hardware and software troubleshooting skills. Professionals in the information technology industry may find A+ certification to be an advantage when searching for jobs. Technicians that have the certification tend to earn 5 to 15 percent more than those that don’t. Some employers require A+ certification for entry-level technical support positions. IT professionals that obtain the certification pursue many career paths, such as technical support specialist and IT support administrator.

Posted by & filed under CompTIA A+, CompTIA Security+, MICROSOFT MTA O/S, MICROSOFT MTA SECURITY.

Basic Command Line Tools

CompTIA Security+ Objectives 2.2 –  CompTIA A+ Objectives Core 2 – 1.4

Ping

Ping is a command used to measure the latency between two devices on a network. The ping command is available on every operating system, and the latency it measures is the delay we feel in everything we do online, whether it’s clicking a link, posting a status update, sending a message, playing a game, uploading a file, or streaming an online video. If you’re using a website and it takes a long time to load a new page every time you click a link, you’re probably experiencing high ping to that site. If it’s fast, you probably have low ping.
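For example (any reachable hostname works):

ping google.com

On Windows, ping sends four echo requests by default and reports each round-trip time in milliseconds; on Linux and macOS, it keeps going until you press Ctrl+C.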

Netstat

Netstat — derived from the words network and statistics — is a program that’s controlled via commands issued in the command line. It delivers basic statistics on all network activities and informs users which ports and addresses the corresponding connections (TCP, UDP) are running on and which ports are open for tasks.

In Windows operating systems, you can use the netstat services via the command line (cmd.exe). You can find them in the start menu under “All Programs” -> “Accessories” -> “Command Prompt”. Alternatively, you can search directly for “Command Prompt” in the start menu’s search field or start the command line via “Run” (Windows key + press “R” and enter “cmd”).
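For example, the following lists all active connections and listening ports in numeric form; on Windows, adding -o also shows the owning process ID for each connection:

netstat -an
netstat -ano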

Tracert

The tracert command is a Command Prompt command that’s used to show several details about the path that a packet takes from the computer or device you’re on to whatever destination you specify.

You might also sometimes see the tracert command referred to as the trace route command or traceroute command.

Traceroute is a useful tool for determining the response delays and routing loops present in a network pathway across packet switched nodes. It also helps to locate any points of failure encountered while en route to a certain destination.

To run traceroute on Windows:

Open the command prompt.
>>Go to Start > Run.
>>Type cmd and press the Enter key.
>> In the command prompt, type:
tracert hostname
where hostname is the name of the server connection you are testing.

Nslookup/Dig

NsLookup/Dig (Microsoft Windows uses nslookup, while macOS and Linux use dig) is a tool included in many operating systems that can look up IP addresses and perform other searches on DNS domains and servers. On Windows, this resource is housed in a utility called nslookup.exe. NsLookup is a basic way to get fundamental DNS information quickly and easily.

Nslookup/dig is used to troubleshoot server connections or for security purposes. Such purposes include guarding against phishing attacks, in which a domain name is altered — for example, by substituting the numeral 1 for a lowercase l — to make an unfriendly site look friendly and familiar (joes1owerprices.com vs. joeslowerprices.com).

Nslookup also helps deter cache poisoning, in which false data is distributed to caching resolvers by an attacker posing as an authoritative origin server.
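For example, to look up a domain’s address records (the domain is a placeholder):

nslookup example.com
dig example.com

dig prints the full DNS response, including the ANSWER SECTION with record types and TTLs.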

ARP

ARP (Address Resolution Protocol) is a network protocol used to find out the hardware (MAC) address of a device from an IP address. It is used when a device wants to communicate with some other device on a local network (for example on an Ethernet network that requires physical addresses to be known before sending packets). The sending device uses ARP to translate IP addresses to MAC addresses. The device sends an ARP request message containing the IP address of the receiving device. All devices on a local network segment see the message, but only the device that has that IP address responds with the ARP reply message containing its MAC address. The sending device now has enough information to send the packet to the receiving device.
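You can inspect the local ARP cache on Windows, Linux, or macOS with:

arp -a

Each entry maps an IP address on your local network to the MAC address it resolved to.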

ipconfig

Internet Protocol Configuration (ipconfig) is a Windows console application that gathers all data regarding current Transmission Control Protocol/Internet Protocol (TCP/IP) configuration values and displays this data on screen. Ipconfig can also refresh the Domain Name System (DNS) and Dynamic Host Configuration Protocol (DHCP) settings when invoked with the appropriate switches. When invoked without additional parameters, ipconfig simply displays the IP address, default gateway and subnet mask for all available adapters.
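Some common switches, all standard on Windows:

ipconfig /all        full details, including MAC addresses and DNS servers
ipconfig /release    release the current DHCP lease
ipconfig /renew      request a new DHCP lease
ipconfig /flushdns   clear the local DNS resolver cache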

Tcpdump

Tcpdump is a command line utility that allows you to capture and analyze network traffic going through your system. It is often used to help troubleshoot network issues, as well as a security tool.

A powerful and versatile tool that includes many options and filters, tcpdump can be used in a variety of cases. Since it’s a command line tool, it is ideal to run in remote servers or devices for which a GUI is not available, to collect data that can be analyzed later. It can also be launched in the background or as a scheduled job using tools like cron.
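For example, to capture web traffic on a given interface (the interface name may differ on your system):

$ sudo tcpdump -i eth0 -n port 80

Here -i selects the interface, -n disables name resolution, and the port 80 filter limits the capture to HTTP traffic.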

Nmap

Network Mapped (Nmap) is a network scanning and host detection tool that is very useful during several steps of penetration testing. Nmap is not limited to merely gathering information and enumeration; it is also a powerful utility that can be used as a vulnerability detector or a security scanner. Nmap is a multipurpose tool, and it can be run on many different operating systems, including Windows, Linux, BSD, and Mac. It can be used to:

  • Detect the live host on the network (host discovery)
  • Detect the open ports on the host (port discovery or enumeration)
  • Detect the software and the version to the respective port (service discovery)
  • Detect the operating system, hardware address, and the software version
  • Detect the vulnerability and security holes (Nmap scripts)
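For instance, a quick host-discovery sweep of a subnet (a placeholder range; only scan networks you are authorized to test):

nmap -sn 192.168.1.0/24

The -sn option performs a ping scan, listing live hosts without port scanning them.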

Netcat

Netcat is a featured networking utility which reads and writes data across network connections, using the TCP/IP protocol.
It is designed to be a reliable “back-end” tool that can be used directly or easily driven by other programs and scripts. At the same time, it is a feature-rich network debugging and exploration tool, since it can create almost any kind of connection you would need and has several interesting built-in capabilities.

It provides access to the following main features:

1) Outbound and inbound connections, TCP or UDP, to or from any ports.
2) A featured tunneling mode which also allows special tunneling such as UDP to TCP, with the possibility of specifying all network parameters (source port/interface, listening port/interface, and the remote host allowed to connect to the tunnel).
3) Built-in port-scanning capabilities, with randomizer.
4) Advanced usage options, such as buffered send-mode (one line every N seconds) and hexdump (to stderr or to a specified file) of transmitted and received data.
5) Optional RFC 854 telnet codes parser and responder.
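Two classic uses, with placeholder hosts and ports (note that some netcat variants, such as the OpenBSD version, omit the -p flag when listening):

nc -l -p 4444              listen on TCP port 4444
nc 192.168.1.10 4444       connect to that listener from another machine
nc -zv example.com 20-25   scan ports 20-25; -z sends no data, -v reports each result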

 

Are you looking to break into the exciting field of cybersecurity? Join our 5-day CompTIA Security+ Bootcamp Training and build your cybersecurity knowledge and skills. 

Or

Become a certified ethical hacker! Unlike other strictly theoretical training, our 5-day CEH Bootcamp immerses you in interactive sessions with hands-on labs after each topic. You can explore your newly gained knowledge right away in your classroom by pentesting, hacking and securing your own systems. Learn more

Posted by & filed under CompTIA Security+.

Ransomware

CompTIA Sec+ Objectives 1.1

Ransomware is a type of malicious software that infects a computer and restricts users’ access to it until a ransom is paid to unlock it. Ransomware variants have been observed for several years and often attempt to extort money from victims by displaying an on-screen alert. Typically, these alerts state that the user’s systems have been locked or that the user’s files have been encrypted. Users are told that unless a ransom is paid, access will not be restored.


How does a computer become infected with Ransomware?

Ransomware (and other malware) is very often spread via email (phishing). Targeting is very straightforward using email as an attack vector, since a domain name is a great identifier. The hacker then makes it clear that the information has been locked or stolen and offers to give it back if the victim pays a ransom.
Victims are often asked to pay the ransom in the form of Bitcoin. If the ransom is paid, the cybercriminals may unlock the data or send a key to decrypt the encrypted files.

Drive-by downloading occurs when a user unknowingly visits an infected website and then malware is downloaded and installed without the user’s knowledge.

Crypto ransomware, a malware variant that encrypts files, is spread through similar methods and has also been spread through social media, such as Web-based instant messaging applications. Additionally, newer methods of ransomware infection have been observed. For example, vulnerable Web servers have been exploited as an entry point to gain access to an organization’s network.

Why is Ransomware so effective?

The authors of ransomware instill fear and panic in their victims, causing them to click on a link or pay a ransom, which can infect users’ systems with additional malware. Ransomware displays intimidating messages similar to those below:

  • “Your computer has been infected with a virus. Click here to resolve the issue.”
  • “Your computer was used to visit websites with illegal content. To unlock your computer, you must pay a $100 fine.”
  • “All files on your computer have been encrypted. You must pay this ransom within 72 hours to regain access to your data.”

Why is Ransomware spreading?

Ransomware attacks and their variants are rapidly evolving to counter preventive technologies for several reasons:

  • 1) Easy availability of malware kits that can be used to create new malware samples on demand
  • 2) Use of known good generic interpreters to create cross-platform ransomware (e.g., Ransom32 uses Node.js with a JavaScript payload)
  • 3) Use of new techniques, such as encrypting the complete disk instead of selected files

Additionally, today’s thieves don’t even have to be tech savvy. Ransomware marketplaces have sprouted up online, offering malware strains for any would-be cybercrook and generating extra profit for the malware authors, who often ask for a cut in the ransom proceeds.

Examples of Ransomware attacks​


CryptoLocker Ransomware Attack

The CryptoLocker botnet is one of the oldest forms of cyber attack, having been around for the past two decades. CryptoLocker ransomware came into existence in 2013, when hackers applied the original CryptoLocker botnet approach to ransomware.

CryptoLocker ransomware is the most destructive form of ransomware, since it uses strong encryption algorithms. It is often impossible to decrypt (restore) the infected computer and files without paying the ransom.

WannaCry Ransomware Attack 

WannaCry is the most widely known ransomware variant across the globe. The WannaCry ransomware attack has affected nearly 125,000 organizations in over 150 countries. Some of the alternative names given to the WannaCry ransomware are WCry or WanaCrypt0r.

Cerber Ransomware Attack

Cerber ransomware attacks targeted cloud-based Office 365 users. Millions of Office 365 users have fallen prey to an elaborate phishing campaign carried out by the Cerber ransomware.

CryptoWall Ransomware Attack

CryptoWall is an advanced form of CryptoLocker ransomware. It came into existence in early 2014, after the downfall of the original CryptoLocker variant. Today, there are multiple variants of CryptoWall in existence, including CryptoDefense, CryptoBit, CryptoWall 2.0, and CryptoWall 3.0.

Locky Ransomware Attack

Locky is another ransomware variant, designed to lock the victim’s computer and prevent them from using it until a ransom is paid. It usually spreads through a seemingly benign email message disguised as an invoice.

GoldenEye Ransomware Attack

GoldenEye is similar to the infamous Petya ransomware. It spreads through a massive social engineering campaign that targets human resources departments. When a user downloads a GoldenEye-infected file, it silently launches a macro which encrypts files on the victim’s computer.

Jigsaw Ransomware Attack

Jigsaw is one of the most destructive types of ransomware, because it encrypts and then progressively deletes the encrypted files until a ransom is paid. It deletes files one after another on an hourly basis until the 72-hour mark, when all the remaining files are deleted.

When a user opens the malicious email attachment, the decoy invoice gets deleted automatically, and the victim is directed to enable macros to read the document. When the victim enables macros, the malware begins encrypting multiple file types using AES encryption.

Apart from the list of ransomware mentioned above, Petya, NotPetya, TeslaCrypt, TorrentLocker, ZCryptor, etc., are some of the other ransomware variants that are well-known for their malicious activities.

 

How to defend against ransomware?

Infections can be devastating to an individual or organization, and recovery can be a difficult process that may require the services of a reputable data recovery specialist.

US-CERT recommends that users and administrators take the following preventive measures to protect their computer networks from ransomware infection:

1) Employ a data backup and recovery plan for all critical information. Perform and test regular backups to limit the impact of data or system loss and to expedite the recovery process. Note that network-connected backups can also be affected by ransomware; critical backups should be isolated from the network for optimum protection.

2) Keep your operating system and software up-to-date with the latest patches. Vulnerable applications and operating systems are the targets of most attacks. Ensuring these are patched with the latest updates greatly reduces the number of exploitable entry points available to an attacker.

3) Maintain up-to-date anti-virus software, and scan all software downloaded from the internet prior to executing.

4) Restrict users’ ability (permissions) to install and run unwanted software applications, and apply the principle of “Least Privilege” to all systems and services. Restricting these privileges may prevent malware from running or limit its capability to spread through the network.

5) Avoid enabling macros from email attachments. If a user opens the attachment and enables macros, embedded code will execute the malware on the machine.

6) Do not follow unsolicited Web links in emails. Refer to the Phishing resources found on this website for more information.

I’m A Ransomware Victim What should I do?

Keep in mind that paying a ransom is no guarantee of receiving a decryption key.

“ASM advises that you never pay a ransom”

If you or your company have been a victim of ransomware you should report it immediately to CISA at www.us-cert.gov/report, a local FBI Field Office, or Secret Service Field Office.

Another good source of information and help with unlocking some ransomware threats is No More Ransom. If your device gets held for ransom, start by researching what type of ransomware it is. Then check out No More Ransom’s decryption tools and see if one is available for your ransomware.

Are you looking to break into the exciting field of Cybersecurity? Join our 5-day CompTIA Security+ Bootcamp and build your cybersecurity knowledge and skills.

OR

Become a certified ethical hacker! Unlike other strictly theoretical training, our 5-day CEH Bootcamp immerses you in interactive sessions with hands-on labs after each topic. You can explore your newly gained knowledge right away in your classroom by pentesting, hacking and securing your own systems. Learn more

Posted by & filed under CompTIA A+, CompTIA Network+.

Comparison of IPv4 and IPv6

Basic Comparison

Address

IPv4: 32 bits long (4 bytes). An address is composed of a network and a host portion, which depend on the address class. Various address classes are defined: A, B, C, D, or E, depending on the initial few bits. The total number of IPv4 addresses is 4,294,967,296. The text form of an IPv4 address is nnn.nnn.nnn.nnn, where 0 <= nnn <= 255 and each n is a decimal digit. Leading zeros can be omitted. The maximum number of print characters is 15, not counting a mask.

IPv6: 128 bits long (16 bytes). The basic architecture is 64 bits for the network number and 64 bits for the host number. Often, the host portion of an IPv6 address (or part of it) is derived from a MAC address or other interface identifier. Depending on the subnet prefix, IPv6 has a more complicated architecture than IPv4. The number of IPv6 addresses is about 10^28 (79,228,162,514,264,337,593,543,950,336) times larger than the number of IPv4 addresses. The text form of an IPv6 address is xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx, where each x is a hexadecimal digit representing 4 bits. Leading zeros can be omitted. The double colon (::) can be used once in the text form of an address to designate any number of 0 bits. For example, ::ffff:10.120.78.40 is an IPv4-mapped IPv6 address.

Address allocation

IPv4: Originally, addresses were allocated by network class. As address space is depleted, smaller allocations using Classless Inter-Domain Routing (CIDR) are made. Allocation has not been balanced among institutions and nations.

IPv6: Allocation is in the earliest stages. The Internet Engineering Task Force (IETF) and Internet Architecture Board (IAB) have recommended that essentially every organization, home, or entity be allocated a /48 subnet prefix length. This leaves 16 bits for the organization to do subnetting. The address space is large enough to give every person in the world their own /48 subnet prefix length.

Address mask

IPv4: Used to designate the network from the host portion.

IPv6: Not used.

Configuration

IPv4: You must configure a newly installed system before it can communicate with other systems; that is, IP addresses and routes must be assigned.

IPv6: Configuration is optional, depending on the functions required. IPv6 can be used with any Ethernet adapter and can be run over the loopback interface. IPv6 interfaces are self-configuring using IPv6 stateless autoconfiguration. You can also manually configure the IPv6 interface. So, the system will be able to communicate with other IPv6 systems that are local and remote, depending on the type of network and whether an IPv6 router exists.

Domain Name System (DNS)

IPv4: Applications accept host names and then use DNS to get an IP address, using the socket API gethostbyname(). Applications also accept IP addresses and then use DNS to get host names, using gethostbyaddr(). The domain for reverse lookups is in-addr.arpa.

IPv6: Same support, using the AAAA (quad A) record type and reverse lookup (IP-to-name). An application may elect to accept IPv6 addresses from DNS (or not) and then use IPv6 to communicate (or not). The socket API gethostbyname() only supports IPv4; for IPv6, the new getaddrinfo() API is used to obtain (at the application's choice) IPv6 only, or IPv4 and IPv6 addresses. The domain used for reverse lookups is ip6.arpa, and if a name is not found there, ip6.int is used.

Dynamic Host Configuration Protocol (DHCP)

IPv4: DHCP is used to dynamically obtain an IP address and other configuration information. IBM i supports a DHCP server for IPv4.

IPv6: The ISC DHCP server implementation can be used.

Host table

IPv4: A configurable table that associates an Internet address with a host name (for example, 127.0.0.1 for loopback). This table is used by the sockets name resolver, either before a DNS lookup or after a DNS lookup fails (determined by the host name search priority).

IPv6: Same support for IPv6.

Internet Control Message Protocol (ICMP)

IPv4: Used by IPv4 to communicate network information.

IPv6: Used similarly by IPv6; however, Internet Control Message Protocol version 6 (ICMPv6) provides some new attributes. Basic error types remain, such as destination unreachable and echo request and reply. New types and codes are added to support neighbor discovery and related functions.

IP header

IPv4: Variable length of 20-60 bytes, depending on the IP options present.

IPv6: Fixed length of 40 bytes. There are no IP header options. Generally, the IPv6 header is simpler than the IPv4 header.

LAN connection

IPv4: A LAN connection is used by an IP interface to get to the physical network. Many types exist; for example, Ethernet. Sometimes it is referred to as the physical interface, link, or line.

IPv6: IPv6 can be used with any Ethernet adapters and is also supported over virtual Ethernet between logical partitions.

Loopback address

IPv4: A loopback address is an interface with an address of 127.*.*.* (typically 127.0.0.1) that can only be used by a node to send packets to itself. The physical interface (line description) is named *LOOPBACK.

IPv6: The concept is the same as in IPv4. The single loopback address is 0000:0000:0000:0000:0000:0000:0000:0001, or ::1 (shortened version). The virtual physical interface is named *LOOPBACK.

PING

IPv4: PING is a basic TCP/IP tool to test reachability. Available using IBM Navigator for i and the character-based interface.

IPv6: Same support for IPv6.

Source: IBM
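The DNS difference is easy to see from code. Here is a small Python sketch (it needs network access, and the hostname is a placeholder): gethostbyname() can only return an IPv4 address, while getaddrinfo() can return both IPv4 (AF_INET) and IPv6 (AF_INET6) results.

import socket

# IPv4 only:
print(socket.gethostbyname("example.com"))

# IPv4 and IPv6 results, as available from DNS:
for family, socktype, proto, canonname, sockaddr in socket.getaddrinfo("example.com", 80, proto=socket.IPPROTO_TCP):
    print(family, sockaddr)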

Becoming Network+ certified is a distinctive step towards a career in networking or telecommunications.
Network+ is a vendor neutral certification that proves an IT professional’s expertise in managing, maintaining, troubleshooting, installing and configuring basic computer networks.

Learn more about our program: Network+

Posted by & filed under CompTIA Network+, CompTIA Security+.

Two-Factor or Multi-Factor Authentication (2FA or MFA)

CompTIA Security+ Objective 4.1 and CompTIA Network+ Objective 4.2

What is two-factor or multi-factor authentication (2FA or MFA)?

Multifactor authentication ensures that a user is who they say they are. The more factors used to determine a person’s identity, the greater the reliability of the authentication.

Typical MFA scenarios include:

1) Swiping a card and entering a PIN.
2) Logging into a website and being requested to enter an additional one-time password (OTP) that the website’s authentication server sends to the requester’s phone or email address.
3) Downloading a VPN client with a valid digital certificate and logging into the VPN before being granted access to a network.
4) Swiping a card, scanning a fingerprint and answering a security question.
5) Attaching a USB hardware token to a desktop that generates a one-time passcode and using the one-time passcode to log into a VPN client.

Background

One of the largest problems with traditional user ID and password login is the need to maintain a password database. Whether encrypted or not, if the database is captured it provides an attacker with a source to verify his guesses at speeds limited only by his hardware resources. Given enough time, a captured password database will fall.

As processing speeds of CPUs have increased, brute force attacks have become a real threat. Further developments like GPGPU password cracking, rainbow tables, and now quantum computers have provided greater advantages for attackers. GPGPU cracking, for example, can produce more than 500,000,000 passwords per second, even on lower-end gaming hardware. Depending on the particular software, rainbow tables can be used to crack 14-character alphanumeric passwords in about 160 seconds. Recently, a paper released by Google’s researchers, briefly posted on a NASA website before being removed, claimed that their processor was able to perform a calculation in three minutes and 20 seconds that would take today’s most advanced classical computer, known as Summit, approximately 10,000 years. Against such methods, a password database alone doesn’t stand a chance when it is a real target of interest.

In the past, MFA systems typically relied upon two-factor authentication. Increasingly, vendors are using the label “multifactor” to describe any authentication scheme that requires more than one identity credential.

Authentication factors

An authentication factor is a category of credential used for identity verification. For MFA, each additional factor is intended to increase the assurance that an entity involved in some kind of communication or requesting access to some system is who, or what, they are declared to be. The five most common categories are often described as something you know (the knowledge factor), something you have (the possession factor), something you are (the inherence factor), somewhere you are (the location factor) and something you do (the time factor).

Something you know (The knowledge factor)

Accessing any account requires credentials by registering a unique username and password. Cellphone PINs and the answers to secret questions also fall under this category. Creating a strong password is still recommended.

Strong password – at least 8 characters, containing a combination of letters, numbers, and symbols (if allowed), mixing upper- and lowercase, and avoiding words that can be found in a dictionary.
However, even strong passwords we use today lack credibility in the security world, thus introducing two-factor authentication (2FA) or multi-factor authentication (MFA).

Something you have (The possession factor)

Anything a user must have in their possession in order to log in, such as a security token, a one-time password (OTP) token, a key fob, an employee ID card or a phone’s SIM card. For mobile authentication, a smartphone often provides the possession factor, in conjunction with an OTP app.
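The one-time passwords behind most authenticator apps follow RFC 6238 (TOTP): the token and the server share a secret and derive a short code from the current 30-second time window. Below is a minimal Python sketch; the base32 string is a widely used demo secret, not anything real:

import base64, hashlib, hmac, struct, time

def totp(secret_b32, interval=30, digits=6):
    # HMAC-SHA1 over the current time step, per RFC 6238/4226.
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(time.time()) // interval)
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # demo secret; prints a 6-digit code

Because both sides compute the same code from the shared secret and the clock, possession of the token (or phone) is what is actually being proven.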

Something you are (The inherence factor)

Any biological traits the user has that are confirmed for login. This category includes the scope of biometric authentication methods such as retina scans, iris scans, fingerprint scans, finger vein scans, facial recognition, voice recognition, hand geometry, even earlobe geometry.

Somewhere you are (The location factor)

The user’s current location is often suggested as a fourth factor for authentication. Again, the ubiquity of smartphones can help ease the authentication burden here: users typically carry their phones, and most smartphones have a GPS device, enabling reasonably sure confirmation of the login location. Often, a service will notify you about a login from a location that is not typical for you, and may require additional verification before proceeding.

Something you do (The Time factor)

Often called Adaptive Authentication, this type of MFA takes context into account to flag logins that are out of the ordinary. When a person tries to authenticate in an unusual context, Adaptive MFA may tighten security by requesting additional credentials. For example, if a user is logging in from a cafe late at night—and this is not typical for that user—the MFA tool may require the user to enter a code texted to the user’s phone.

MFA is an effective way to provide enhanced security. Traditional usernames and passwords can be stolen, and they’ve become increasingly more vulnerable to brute force attacks. MFA creates multiple layers of security to help increase the confidence that the user requesting access is actually who they claim to be. With MFA, a cybercriminal may steal one credential but will be thwarted by having to verify identity in a different manner.

Sources:
NIST
Independent

Are you looking to break into the exciting field of cybersecurity? Join our 5-day CompTIA Security+ Bootcamp and build your cybersecurity knowledge and skills.
Check out our network+ bootcamp as well and become an expert in managing, maintaining, troubleshooting, installing and configuring basic computer networks.

Posted by & filed under CompTIA Security+.

Phishing

What is Phishing and How to Detect Phishing Attacks?

Part of CompTIA Security+ Objectives 1.2

Phishing is an email scam that appears to come from a legitimate company and asks you to provide sensitive information. Cybercriminals use phishing to attempt to obtain sensitive information such as credit card details and login credentials by posing as a trustworthy organization or reputable person in an email communication. This is usually done by including a link that appears to take you to the company’s website to fill in your information – but the website is a clever fake, and the information you provide goes straight to the crooks behind the scam.

Phishing is popular with cybercriminals because it enables them to steal sensitive financial and personal information without having to break through the security defenses of a computer or network. Public awareness about phishing campaigns has grown considerably in recent years, as many incidents have been covered by a variety of media sources. In addition to technical solutions, user security awareness is one of the cyber security measures being used to help counter attempted phishing incidents.


How to Identify Phishing and How to Avoid being a Victim?

Identifying Fake Email Addresses

Where is your email coming from? Fake email addresses attempt to lull end users into a sense of comfort, security, and legitimacy. Does the domain from which you’re receiving the email make sense? Is it consistent with the company’s domain?

Here are two ways to identify fake email addresses:

1. What comes after the @ sign?

As mentioned above, a legit email domain will match the URL for the organization’s website. Going back to the banking example, here are examples of safe and unsafe email domains.

  • Safe: @chase.com
  • Unsafe: @chasebank.com

At a quick glance, @chasebank.com seems like a reasonable and safe domain. But if the domain is anything different from what you would type into a web browser to access the organization’s website, it’s most likely a fake email address.
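
The “what comes after the @ sign” check is easy to automate. This minimal sketch compares a sender’s domain against a hypothetical allowlist of domains the organization actually uses:

```python
# A minimal sketch (not production email validation): compare the sender's
# domain against an allowlist of domains the organization actually uses.
KNOWN_GOOD_DOMAINS = {"chase.com"}  # hypothetical allowlist

def sender_domain(address: str) -> str:
    """Return the domain portion of an email address, lowercased."""
    return address.rsplit("@", 1)[-1].lower()

for sender in ["alerts@chase.com", "support@chasebank.com"]:
    verdict = "ok" if sender_domain(sender) in KNOWN_GOOD_DOMAINS else "suspicious"
    print(f"{sender}: {verdict}")
```
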

2. What name appears in the email?

Spoiler alert: it doesn’t matter. Real names don’t mean anything on the internet. There is no intellectual-property protection and there are no restrictions on the display name you attach to an email account when creating it.

In fact, many legitimate businesses create fake names for marketing emails that just route back to a distribution list so they can avoid being flagged for email abuse when they send mail without an opt-in policy. Anyone could start an email account with your name, and there are no checks and balances on it. That’s why the domain is so important – there’s a registration process for domains, so an attacker can’t simply duplicate an organization’s real domain without inside access.

Identifying Fake Websites

One of the easier ways to mitigate cybersecurity risk is to train your employees to pay attention to the address bar in their web browser. As we rely more on backlinks, cookies, and search engines to reach websites, employees tend to pay less attention to the URL in the address bar and slip more and more into autopilot when browsing.

Pay attention to your browser and ask these questions to identify fake websites:

Is your connection secure? Look to the far left of your address bar. If you’re using Chrome or Firefox, you should see a padlock icon to indicate that your connection is safe.

If you are still using Internet Explorer: Stop reading this article, delete all your cookies and search history on IE, and install Chrome or Firefox. Those browsers are much safer. Don’t ever look back, and don’t feel bad about it.
BUT… Secure Sockets Layer (SSL)/Transport Layer Security (TLS) certificates are now easier for threat actors to obtain, so the “padlock” strategy isn’t effective enough on its own. The lock icon is not bulletproof, and it must be used in combination with the points that follow.

Identifying Fake URLs

1. Does the URL make sense?

Use the same strategy to identify fake websites that you would use to identify fake email addresses. The main part of the URL before .com, .org, etc., should not be an alphabet soup of letters and numbers. The domain of the main site and the domain of the emails you receive from the organization should match.

A relevant example for personal banking would be this:

  • Safe: chase.com/creditcardoffer
  • Unsafe: chasecom.io/creditcardoffer

Threat actors purposely try to mask their URLs in clever ways, often by incorporating special characters or a jumble of letters that resembles the correct website. If you’re not looking closely, you can easily be duped into clicking the link and installing malware on your device, even if the link doesn’t load or takes you to a dead page.
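
The same comparison can be scripted for links. The sketch below naively extracts the registered domain (a real tool would consult the Public Suffix List to handle endings like .co.uk) and checks it against the chase.com domain used in the example above:

```python
from urllib.parse import urlparse

EXPECTED_DOMAIN = "chase.com"  # hypothetical known-good domain

def registered_domain(url: str) -> str:
    """Naively take the last two labels of the hostname; a real check
    would use the Public Suffix List for multi-part TLDs like .co.uk."""
    host = urlparse(url).hostname or ""
    return ".".join(host.split(".")[-2:])

for url in ["https://chase.com/creditcardoffer",
            "https://chasecom.io/creditcardoffer"]:
    ok = registered_domain(url) == EXPECTED_DOMAIN
    print(f"{url}: {'matches' if ok else 'does NOT match'} {EXPECTED_DOMAIN}")
```
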

2. Did you get the link in an email?

If so, don’t click. This sounds extreme. It also sounds slow and antiquated. But verification is a pillar of being vigilant. Even if the contact emailing you is in your address book, they could have been phished – you just never know.

Call or email the contact before clicking. Calling is always preferred. As for emailing, make sure you open a new email to ask whether the last message was legitimate. If you reply directly to the email in question, you may be communicating with a hacker, and they will mislead you.
Seriously, call them before you click the link – especially if it’s from an internal source. This may seem like an unnecessary or slow step, but verification is the entire point of the exercise, and it builds a stronger cybersecurity culture. The more cybersecurity is discussed, the more it is prioritized in typical day-to-day interactions.

Are you looking to break into the exciting field of cybersecurity? Join our 5-day CompTIA Security+ Bootcamp and build your cybersecurity knowledge and skills.
asmed.com/s1

Posted by & filed under ABOUT IT.

Cloud Security: How Secure is Cloud Data?

Saving documents, photos, videos and reports on a computer’s hard drive? That’s becoming almost quaint. Today, people are more frequently saving their files in the cloud, storage space not located in their computers or devices but on servers hosted by third-party providers.

The positives of storing data and files in the cloud are many: you won’t clutter your computer; if your laptop, tablet, or smartphone crashes, you’ll still have access to your files; and you’re unlikely to ever use up all the space offered by most cloud providers.

But you might have concerns about cloud computing, including data protection, security threats, and potential data loss. You might worry that all those videos, photos, and reports might be vulnerable to data breach and hackers who could break into your cloud provider’s servers.

Or maybe you worry that your provider’s servers will crash, causing all those photos of your summer vacations or videos of your children’s elementary school graduation to disappear.

Here’s some reassurance, though: information stored in the cloud is likely to be more secure than files, images, and videos stored on your own devices. Why? Cloud companies often rely on far more robust cybersecurity measures to protect your sensitive data.

What is the cloud?

What is the cloud? You’ve probably used it more than you think. Ever write an essay that you saved in Google Docs? Then you’ve used the cloud. Ever watch a movie on Netflix, stored images in Dropbox, or sent email messages through Yahoo Mail? Those are all cloud services.

At its most basic, the cloud refers to any type of software or service that isn’t located on your personal computer or devices but instead runs on the internet. The files, images and videos that you save on cloud services are stored on the servers of third parties, companies such as Amazon, Google, and Microsoft.

You can then get at these files whenever you are using a device connected to the internet. If you’ve saved photos from your most recent trip to the beach, you don’t have to wait until you’re at your laptop computer to access them. You can find them by logging onto the internet from any computer or device anywhere.

Popular cloud service providers include Google Cloud Platform, Amazon Web Services, and Microsoft Azure, to name a few. Everything from Hulu and Dropbox to Gmail and Office 365 is hosted in the cloud, not on your personal computer or phone.

How secure is the cloud?

It’s natural to wonder if your data is safe when it’s stored in the cloud infrastructure. After all, your files, photos and videos are being stored on servers that are not under your control. You might wonder how vulnerable these servers are to cyberthieves.

Here’s the truth, though: the data that you save with cloud service providers is likely safer than the information stored on your computer’s hard drive. Remember, hackers can use malware and phishing emails to get at the information stored on your own devices. They might freeze your computer, demanding a ransom before they’ll release the files and data they’ve frozen.

The security measures undertaken by larger companies providing cloud services are likely to be more robust and powerful than what you have protecting your home computer and devices.

Why is cloud security so robust?

What makes cloud storage so safe? First, servers are usually located in warehouses that most workers don’t have access to. Second, the files stored on cloud servers are encrypted. This means they are scrambled, which makes them far harder for cybercriminals to access.
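
To see what “scrambled” means in practice, here is a minimal illustration of symmetric encryption at rest using the third-party Python cryptography package (pip install cryptography); cloud providers manage keys in dedicated key-management systems rather than in application code like this:

```python
# A minimal illustration of encryption at rest, not how any particular
# provider implements it. The key would live in a key-management system.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, generated and stored by a KMS
f = Fernet(key)

token = f.encrypt(b"contents of a stored file")
print(token[:40], "...")      # ciphertext is unreadable without the key
print(f.decrypt(token))       # the original bytes come back with the key
```
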

Here is a look at some of the security measures that cloud providers frequently use to protect your data.

Consistent security updates

How often do you ignore those notifications to update your operating system, browser or email service? That’s a no-no in the world of computer security. Those updates often contain tools designed to protect your devices from the latest viruses or malware.

When you store your data in the cloud, though, the companies overseeing the servers should be consistently updating their security measures. You won’t have to worry about forgetting to run an update; your cloud service provider will regularly update its defenses for you.

AI tools and auto-patching

Cloud providers are also turning to artificial intelligence, or AI, to help protect your data. This is important: It’s not easy to find experienced security professionals to oversee data. Cloud providers, though, can instead turn to AI to tackle at least the first level of security analysis. These programs rely on built-in algorithms to seek out and identify possible vulnerabilities in security measures.

Built-in firewalls

Cloud providers rely on firewalls, too, to help protect your files. As the name suggests, this technology acts a bit like a wall keeping your data safe.

Firewalls, which can be hardware- or software-based, apply rules to all of the traffic coming into a network. These rules are designed to filter out suspicious traffic and to keep your data behind the wall. This makes it more difficult for hackers to slip malware or viruses past the security measures used by your cloud service provider.
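
Conceptually, that rule-based filtering looks like the toy sketch below; the rules are hypothetical, and a real firewall matches on many more attributes (protocol, direction, connection state):

```python
# A toy illustration of rule-based packet filtering: each rule allows or
# denies traffic by source network and destination port. Hypothetical rules.
from ipaddress import ip_address, ip_network

RULES = [
    ("allow", ip_network("10.0.0.0/8"), 443),   # internal HTTPS
    ("deny",  ip_network("0.0.0.0/0"), 23),     # block telnet from anywhere
    ("allow", ip_network("0.0.0.0/0"), 443),    # public HTTPS
]

def evaluate(src: str, port: int) -> str:
    """Return the action of the first matching rule; default-deny otherwise."""
    for action, net, rule_port in RULES:
        if ip_address(src) in net and port == rule_port:
            return action
    return "deny"

print(evaluate("203.0.113.9", 443))  # allow
print(evaluate("203.0.113.9", 23))   # deny
```
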

Redundancy

But what about hardware failures or power outages? Will you be able to access your data if your cloud provider suffers a natural disaster or large-scale outage?

Yes, because most of the biggest cloud providers practice redundancy. This means that they copy your data several times and store it in many different data centers. This way, if one server goes down, you can access your files from a backup server.

Third-party security testing

Your cloud provider should also hire outside security companies to test their servers and software regularly to make sure they are safe from hackers, cybercriminals and the latest malware and viruses. This outside testing boosts the odds that your cloud provider will have the defenses needed to keep your files away from hackers.

What precautions can you take to boost your cloud security?

Are you already storing your files in the cloud? If so, you can take certain steps to help enhance the security of that data.

Encrypt your data

First, make sure you send your files to a cloud service provider that encrypts your data. You want to make it as difficult as possible for hackers to get at your information. Storing your images and files with a provider that relies on encryption will give hackers pause, since it’s easier for them to steal data that hasn’t been scrambled.

Perform data backups

Make sure you only work with cloud providers that back up your data. You don’t want all your information stored on just one server; if that server goes offline, you won’t be able to access your data. You might also consider backing up your most sensitive information on your own external hard drives even if you are saving it in the cloud. This provides an extra layer of protection should something happen to your cloud provider.

Enable two-factor authentication

You can make life more difficult for hackers by enabling two-factor authentication. As the name suggests, two-factor authentication requires you to provide two pieces of information when logging onto a site.

Say you are logging onto your bank’s online site. First, you provide your username and password, as usual. Then you wait for your bank to send a code to your email address or phone. You then enter this code online to access your accounts. This extra step makes it more difficult for hackers to get at your emails, personal information or financial information.
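
On the server side, handling a texted code boils down to issuing a short-lived random value and checking it once. This minimal sketch assumes in-memory storage and a hypothetical SMS step; a real service would persist codes securely and rate-limit attempts:

```python
import secrets, time

# A minimal sketch of server-side handling for a texted one-time code.
# "pending" is hypothetical in-memory storage; real systems persist this.
CODE_TTL_SECONDS = 300
pending = {}  # user -> (code, expiry timestamp)

def issue_code(user: str) -> str:
    code = f"{secrets.randbelow(10**6):06d}"      # random 6-digit code
    pending[user] = (code, time.time() + CODE_TTL_SECONDS)
    return code  # in practice, sent via SMS/email, never shown to the client

def verify_code(user: str, attempt: str) -> bool:
    code, expiry = pending.pop(user, (None, 0))   # single-use: remove on check
    return (code is not None and time.time() < expiry
            and secrets.compare_digest(code, attempt))

c = issue_code("alice")
print(verify_code("alice", c))   # True
print(verify_code("alice", c))   # False: the code was already used
```
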

Are you looking to break into the exciting field of cybersecurity? Join our 5-day CompTIA Security+ Bootcamp and build your cybersecurity knowledge and skills.

Posted by & filed under ABOUT IT, CompTIA Security+, Sec+.

One of the fastest growing areas of the enterprise is cybersecurity. Every year more attacks are carried out, meaning more of the IT budget is allocated to bolstering security, ultimately leading to the need to hire more cybersecurity experts to protect the organization.

According to cybersecurity company CrowdStrike, which recently published a casebook with takeaways and insights from the frontlines of incident response (IR) cases spanning 2018, organizations are not making substantive progress in detecting intruders and stopping breaches overall.

The study also found that commodity malware was often observed as a precursor to larger, more disruptive attacks, and that there has been a dramatic rise in the number of attacks that leveraged social engineering and phishing.

With 82% of IT and cybersecurity professionals claiming they don’t have the right amount of necessary talent within their organization, many are now resorting to in-house training to bring their employees up to scratch.

Certifications are often the best way to ensure the people you work with have the right knowledge and skills to excel in their jobs.

The Importance of Security+ Certification

For those who are already in the IT security field, or are looking to advance their career, there is a range of certification and training options available for those who want to learn about IT security and demonstrate that knowledge to current and potential employers. However, many of the more advanced IT security certifications require a level of knowledge, experience, and commitment that might be outside the range of many newer IT professionals.

A good certification for demonstrating basic security knowledge is the CompTIA Security+ certification. Unlike other certifications, such as the CISSP or the CISM, the Security+ doesn’t have any mandatory experience or prerequisites, though CompTIA does recommend that candidates have at least two years of experience with networking in general and security in particular. CompTIA also suggests that Security+ candidates obtain the CompTIA Network+ certification, but does not require it.

CompTIA Security+ is an internationally recognized, vendor-neutral certification from CompTIA that demonstrates competency in the baseline skills needed for a career in the IT security industry. Earning this certification verifies your skills in the following areas, based on the exam objectives:

Threats, attacks and vulnerabilities: You can detect various kinds of compromise and have knowledge of penetration testing and vulnerability scanning.

Technologies and tools: You can deploy various network components, as well as assess and troubleshoot problems to support operational and information security technology.

Architecture and design: You know how to implement a secure network architecture, cloud computing, virtualization technology and systems design.

Identity and access management: You can implement access controls, identity management and access services, as well as management controls.

Risk management: You understand risk management and data security best practices and their business impact.

Cryptography and PKI: You can install and configure settings for wireless security and implement public key infrastructure (PKI).

Benefits of CompTIA Security+ Certification

This certification is ideal for IT security professionals who are relatively new to the field or have never taken a security certification exam before. It’s also useful if you’re looking to switch to cybersecurity from a related field. It is recommended as the first security certification you should earn, as it covers the core skills needed for any job in cybersecurity and can help you move into an intermediate-level cybersecurity position. If you are considering this CompTIA certification, the benefits listed below will help you make the right decision.

  1. Vendor-Neutral Certification

The certification is vendor-neutral, which means you do not have to focus on the technology and security products of a single vendor; instead, you learn the common components of cybersecurity. The broad skills and knowledge accumulated while achieving the certification make security professionals and network administrators highly sought after in the IT marketplace, giving you the ability to pursue many opportunities in the field.

  2. Prerequisites and Required Experience

Since it is an entry-level certification, you are not expected to have much experience. This is one of the main reasons many people entering the security industry choose to obtain the CompTIA Security+ certification to improve their career potential considerably. Unlike other cybersecurity certifications, this one doesn’t have any mandatory prerequisites. It is only suggested that professionals sitting for the certification exam have at least two years of working experience in the security and networking fields to improve their chances of passing the exam and being hired promptly.

  3. Globally Accepted Certification

The CompTIA Security+ certification is globally acknowledged as one of the fundamental security certifications in the field of cybersecurity. Offered by CompTIA (the Computing Technology Industry Association), the Security+ certification is also accredited by ANSI and demonstrates that the holder can apply the knowledge, skills, and abilities required under the ISO 17024 standard.

  4. High Career Opportunity

Professionals in the field of cybersecurity are sought by organizations in both the private and public sectors. The CompTIA Security+ certification gives you an advantage in handling security and network administration responsibilities. With this certification, you can take up a broad range of jobs covering areas such as compliance and operational security, cryptography, threats and vulnerabilities, data and host security, access control and identity management, and application security.

  5. Higher Pay Scale

Professionals with the Security+ certification have greater opportunities to receive higher salaries than their non-certified co-workers. With this certification on your resume, you can expect to receive an average salary of between $42,130 and $95,830 as a network engineer. It is worth mentioning that other factors determine how much you earn, including your level of experience and accreditation. Within a few years, a professional with the CompTIA Security+ certification can earn about $66,890 per year.

  6. Opportunity to Take up Government Work

If you aspire to a cybersecurity or networking job in a government agency, obtaining the CompTIA Security+ certification is an ideal starting point. The certification satisfies the DoD 8570 standard, a compliance standard that defines the certification prerequisites for government employees and contractors at various levels of work in the IT area. The certification meets the DoD 8570 compliance requirements for both Technical Level Two and Management Level One positions.

  7. Opportunity to Learn and Develop

Before you can achieve the certification, you will be expected to study the exam objectives and content, and after significant preparation you have to pass the exam. In the course of preparing, you will learn a great deal about cybersecurity and acquire skills and knowledge that will help you become a better security professional. This is why hiring managers always look for professionals with this CompTIA certification: it is evidence of your expertise and skills in the field.

  8. Opportunity to Work Anywhere in the World

One notable benefit of achieving the CompTIA Security+ certification is that it is universally recognized, trusted, and accepted all over the globe. Wherever you want to work, this certification can open the door for you. For example, it is recognized by the United States Department of Defense and can work as an icebreaker for any government agency job. If you wish to work as a network cybersecurity professional, this Security+ certification is essential.

Earning the CompTIA Security+ certification can help you land job roles that are in high demand and command a good salary, such as security architect, security engineer, security or systems administrator, security analyst, network administrator, security consultant, and junior IT auditor. Although certification isn’t typically a requirement for these positions, it can provide a career advantage.

The U.S. Department of Defense has approved the CompTIA Security+ credential to meet Directive 8140/8570.01-M requirements. The certification also complies with ISO 17024.

Check out ASM's Cyber Security Program

If you are looking to re-skill or get your feet wet in cybersecurity, ASM offers training classes available both on-site and online. ASM's training program includes an Exam Voucher, Extensive Hands-on Labs, Lunch, Refreshments, an Indoor Garage Parking Voucher, a Practice Exam, and a One-Year Free Re-take Policy at the ASM HQ Facility.

Posted by & filed under ABOUT IT, CompTIA Network+.

The newest updates for the CompTIA Network+ (N10-009) certification, which launched in June 2024, reflect the latest networking technologies and trends. This version incorporates advanced concepts like Software-Defined Networking (SDN), Software-Defined WAN (SD-WAN), and Infrastructure as Code (IaC), which automate and optimize network management through code. There’s also a strong focus on cloud integration, zero-trust architecture, and scalability using tools like VxLAN (Virtual Extensible LAN) for large-scale network deployments.
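
To give a feel for the Infrastructure as Code concept the exam now covers, here is a minimal sketch in Python: the network's desired state lives in version-controlled data, and a reconciliation step computes what must change on each device (the device names and VLANs below are hypothetical) instead of an admin configuring devices by hand:

```python
# A minimal sketch of the Infrastructure as Code idea: desired state is
# declared as data, and a reconciliation step derives the changes needed.
# Device names and VLAN assignments here are hypothetical examples.
desired_state = {
    "switch-01": {"vlans": [10, 20, 30]},
    "switch-02": {"vlans": [10, 40]},
}

actual_state = {
    "switch-01": {"vlans": [10, 20]},
    "switch-02": {"vlans": [10, 40, 99]},
}

for device, desired in desired_state.items():
    actual = actual_state.get(device, {"vlans": []})
    to_add = sorted(set(desired["vlans"]) - set(actual["vlans"]))
    to_remove = sorted(set(actual["vlans"]) - set(desired["vlans"]))
    if to_add or to_remove:
        print(f"{device}: add VLANs {to_add}, remove VLANs {to_remove}")
```
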

Compared to the previous version (N10-008), the N10-009 exam has expanded its coverage of cloud computing, network security, and modern networking environments. Key additions include enhanced discussion on network hardening, identity access management, and encryption technologies, reflecting the industry’s growing need for robust security in cloud-based and hybrid networks.

This version also introduces new learning tools like CertMaster, which provides hands-on labs and adaptive learning to improve skill retention. For professionals, these updates make the N10-009 a strong foundational certification for roles like network administrator, cloud engineer, and help desk technician.

This certification ensures professionals are well-equipped to handle evolving technologies, making it ideal for those entering the fields of networking and cybersecurity.

Are you looking to break into the exciting field of Networking? Join our 5-day CompTIA Network+ Bootcamp Training and build your networking knowledge and skills.

Posted by & filed under ABOUT IT, CompTIA Network+.

Why Network+ Certification?

Network+ is offered by the Computing Technology Industry Association (CompTIA), the top vendor-neutral provider of IT certifications. Some of you might be thinking, “Why take CompTIA Network+ first and not jump into the Cisco Certified Network Associate (CCNA) path right away?” While you could jump straight into the CCNA path, Cisco assumes that you have prior networking experience, specifically with their routing and switching equipment. It is good to have a mix of certifications on your resume, and as a beginner in IT or networking, Network+ is the best certification to start with. Additionally, going vendor-neutral can be a good thing because the focus isn’t on the hardware but on the logical framework of the technology. Networking is important for obvious reasons: if you don’t have network connectivity, you can’t expect to do much from your desk.

Network+ Certification Background

CompTIA has been offering industry-standard certifications for over 20 years and has granted 2 million IT certifications in that time. The Network+ certification was created by CompTIA and accredited by the American National Standards Institute (ANSI) in 2008. Once granted, the Network+ certification is good for 3 years; after that, there is a recertification requirement to take the latest exam in order to stay up to date. CompTIA describes Network+ as an entry-level certification, which is only partially true. You should not expect to pick up the book, read it, and feel ready to take the test and be certified. It is good to have at least some familiarity with networking and the different kinds of equipment before you take this on; it will definitely help you keep pace with your training.

Training Available

There is an abundance of training available for the Network+ certification. At ASM, we offer a bundle of Network+/A+ or A+/Network+/Security+. As a Network+ certified professional, you will demonstrate proficiency in the following areas:

  • Design and implement functional networks
  • Configure, manage, and maintain essential network devices
  • Use devices such as switches and routers to segment network traffic and create resilient networks
  • Identify benefits and drawbacks of existing network configurations
  • Implement network security, standards, and protocols
  • Troubleshoot network problems
  • Support the creation of virtualized networks

It is imperative that you find a good training course led by a qualified instructor. At ASM we provide many training options and course formats, including virtual ILT classrooms (WebEx) and live in-person classrooms.

Network+ Exam Details

The Network+ exam consists of 90 questions in a mix of multiple-choice, drag-and-drop, and performance-based formats. You will have 90 minutes to complete the exam, and results are immediate. The passing score is 720 on a scale of 100–900, and the exam voucher costs $319 USD.

What Next?

The Network+ certification is a launching pad for your career. You can go in many different directions after passing the Network+ exam. To stay within the CompTIA path, the next option would be Security+, which every IT administrator should have. A+ is an option if you plan to be more hardware-focused in your career. There are also Linux+, Cloud+, PenTest+, and many more, depending on your desired career path. If you love networking and want to continue to more advanced certifications, the CCNA is a good next choice.

Is it For Me?

If you are planning on having a long career in IT, then the answer is a resounding YES! Get Network+ certified. It is an amazing class and will really help you get a firm understanding of networking fundamentals. Having a solid foundational knowledge of networking will serve you well in the many years of your IT career to come.

Posted by & filed under Community Outreach.

Please join us for this Homeless Outreach on Sunday 4/12/2020 at 6:15 pm in Rockville, MD.

Location:

Women’s Shelter

2 Taft Court, Rockville, MD (map)

We look forward to serving people at the Women’s Shelter located in Rockville, MD. Please join us on the 2nd Sunday of each month at 2 Taft Court, Rockville, MD.

Posted by & filed under Community Outreach.

Please join us for this Homeless Outreach on Sunday 3/8/2020 at 6:15 pm in Rockville, MD.

Location:

Women’s Shelter

2 Taft Court, Rockville, MD (map)

We look forward to serving people at the Women’s Shelter located in Rockville, MD. Please join us on the 2nd Sunday of each month at 2 Taft Court, Rockville, MD.

Posted by & filed under Community Outreach.

Please join us for this Homeless Outreach on Sunday 2/9/2020 at 6:15 pm in Rockville, MD.

Location:

Women’s Shelter

2 Taft Court, Rockville, MD (map)

We look forward to serving people at the Women’s Shelter located in Rockville, MD. Please join us on the 2nd Sunday of each month at 2 Taft Court, Rockville, MD.

Posted by & filed under Community Outreach.

Please join us for this Homeless Outreach on Sunday 12/12/2020 at 6:15 pm in Rockville, MD.

Location:

Women’s Shelter

2 Taft Court, Rockville, MD (map)

We look forward to serving people at the Women’s Shelter located in Rockville, MD. Please join us on the 2nd Sunday of each month at 2 Taft Court, Rockville, MD.

Posted by & filed under Community Outreach.

Please join us for this Homeless Outreach on Sunday 10/13/2019 at 6:15 pm in Rockville, MD.

Location:

Women’s Shelter

2 Taft Court, Rockville, MD (map)

We look forward to serving people at the Women’s Shelter located in Rockville, MD. Please join us on the 2nd Sunday of each month at 2 Taft Court, Rockville, MD.

Posted by & filed under Community Outreach.

Please join us for this Homeless Outreach on Sunday 09/08/2019 at 6:15 pm in Rockville, MD.

Location:

Women’s Shelter

2 Taft Court, Rockville, MD (map)

We look forward to serving people at the Women’s Shelter located in Rockville, MD. Please join us on the 2nd Sunday of each month at 2 Taft Court, Rockville, MD.

Posted by & filed under Community Outreach.

Please join us for this Homeless Outreach on Sunday 08/11/2019 at 6:15 pm in Rockville, MD.

Location:

Women’s Shelter

2 Taft Court, Rockville, MD (map)

We look forward to serving people at the Women’s Shelter located in Rockville, MD. Please join us on the 2nd Sunday of each month at 2 Taft Court, Rockville, MD.

Posted by & filed under Community Outreach.

Please join us for this Homeless Outreach on Sunday 07/14/2019 at 6:15 pm in Rockville, MD.

Location:

Women’s Shelter

2 Taft Court, Rockville, MD (map)

We look forward to serving people at the Women’s Shelter located in Rockville, MD. Please join us on the 2nd Sunday of each month at 2 Taft Court, Rockville, MD.

Posted by & filed under Community Outreach.

Please join us for this Homeless Outreach on Sunday 03/14/2019 at 6:15 pm in Rockville, MD.

Location:

Women’s Shelter

2 Taft Court, Rockville, MD (map)

We look forward to serving people at the Women’s Shelter located in Rockville, MD. Please join us on the 2nd Sunday of each month at 2 Taft Court, Rockville, MD.

Posted by & filed under Community Outreach.

Please join us for this Homeless Outreach on Sunday 03/10/2019 at 6:15 pm in Rockville, MD.

Location:

Women’s Shelter

2 Taft Court, Rockville, MD (map)

We look forward to serving people at the Women’s Shelter located in Rockville, MD. Please join us on the 2nd Sunday of each month at 2 Taft Court, Rockville, MD.

Posted by & filed under Community Outreach.

Please join us for this Homeless Outreach on Sunday 01/13/2019 at 6:15 pm in Rockville, MD.

Location:

Women’s Shelter

2 Taft Court, Rockville, MD (map)

We look forward to serving people at the Women’s Shelter located in Rockville, MD. Please join us on the 2nd Sunday of each month at 2 Taft Court, Rockville, MD.

Posted by & filed under Community Outreach.

Please join us for this Homeless Outreach on Sunday 10/14/2018 at 6:15 pm in Rockville, MD.


Location:

Women’s Shelter

2 Taft Court, Rockville, MD (map)

We look forward to serving people at the Women’s Shelter located in Rockville, MD. Please join us on the 2nd Sunday of each month at 2 Taft Court, Rockville, MD.