Posted by & filed under Amazon AWS.

Amazon AWS OpsWorks

 

What is OpsWorks?

 

 

AWS OpsWorks is a configuration management service that provides managed instances of Chef. OpsWorks lets you use Chef to automate how servers are configured, deployed, and managed across your Amazon EC2 instances or on-premises compute environments.

 

It has these parts:

 

Chef – Chef turns infrastructure into code. With Chef you can automate how you build, deploy, and manage your infrastructure.

 

Stack – a stack is a group of resources such as EC2 instances, ELBs, and RDS instances.

 

Layer – a layer exists within a stack and consists of things like a database layer or an application layer.

 

When you create, for example, an application layer (instead of configuring servers manually), OpsWorks will install the Apache server and run the yum updates and all the other setup for you.

 

We will see it in the Lab.

 

  • We need 1 or more layers in the stack.
  • An instance must be assigned to at least 1 layer.
  • Preconfigured layers include:
  1. Application layer
  2. Load balancer layer
  3. Database layer
  4. Caching layer
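The same stack-and-layer structure can also be scripted. Below is a hedged sketch with the AWS CLI: the role ARNs and the stack ID are placeholders, and the commands are only assembled and printed, not executed.

```shell
# Sketch only: the ARNs below are placeholders for your account's OpsWorks roles.
STACK_NAME="ASMWebPHPStack"
REGION="eu-central-1"
SERVICE_ROLE_ARN="arn:aws:iam::111111111111:role/aws-opsworks-service-role"
INSTANCE_PROFILE_ARN="arn:aws:iam::111111111111:instance-profile/aws-opsworks-ec2-role"

# Assemble (but do not run) the calls that mirror the console steps.
CREATE_STACK="aws opsworks create-stack --name $STACK_NAME --stack-region $REGION \
  --service-role-arn $SERVICE_ROLE_ARN --default-instance-profile-arn $INSTANCE_PROFILE_ARN"
CREATE_LAYER="aws opsworks create-layer --stack-id <stack-id> --type php-app \
  --name 'PHP App Server' --shortname php-app"

echo "$CREATE_STACK"
echo "$CREATE_LAYER"
```

Running these for real also requires the OpsWorks service role and instance profile to exist already; the console creates them for you.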

 

Now we will do a lab.

 

Step 1) Go to the Amazon console and open OpsWorks.

Step 2) Go to a region you have not used yet, like Frankfurt (Germany); as you can see in EC2, there are no instances and only the one default security group.

Step 3) Create a stack; here we will use the Chef 11 stack.

Step 4) Call it ASMWebPHPStack and leave everything else at the defaults.

 

Think of stacks as environments: a production stack, a test stack, a developer stack.

 

Step 5) Now add a layer here.

Step 6) Pick PHP App Server as the layer type.

Step 7) What does this layer do? It will configure each EC2 instance in it with an Apache web server and all yum updates applied.

 

Step 8) Now, inside the layer, add an EC2 instance called asmwebserver1; pick t2.micro and the AZ eu-central-1a.

 

Step 9) Start the instance. OpsWorks is now launching the instance, pre-configuring the Apache server, and running all the updates for us (it will take some time). Remember, this is a Chef 11 stack, with predefined Chef recipes.

 

Step 10) Let's go back to OpsWorks and click on Instances; the instance is still booting.

 

Step 11) On the left, click Apps to add an application from a repository.

 

An app represents code stored in a repository that you want to install on application server instances. When you deploy the app, OpsWorks downloads the code from the repository to the specified server instances.

 

Step 12) Give it the name ASMApp1.

 

Step 13) Choose Git as the repository type, enter this address, and leave the rest at the default values:

 

https://github.com/aws-samples/opsworks-demo-php-simple-app

 

Step 14) Now click Add App.

 

Step 15) Now go back to Instances on the left and check whether setup is complete.

 

Step 16) If you click on the IP address, you will see "Not Found"; the Apache web server is running, but we still need to go to Apps and deploy the application.

 

Step 17) It takes about 2 minutes to deploy. After it finishes, go back to the instance and click the IP address; you will see it working, and at the bottom it shows the name of the instance:
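Adding and deploying the app can also be driven from the AWS CLI. A sketch (the stack and app IDs are placeholders; the commands are printed rather than executed):

```shell
# Sketch only: <stack-id> and <app-id> are placeholders returned by earlier calls.
APP_SOURCE='{"Type":"git","Url":"https://github.com/aws-samples/opsworks-demo-php-simple-app"}'

CREATE_APP="aws opsworks create-app --stack-id <stack-id> --name ASMApp1 \
  --type php --app-source $APP_SOURCE"
DEPLOY="aws opsworks create-deployment --stack-id <stack-id> --app-id <app-id> \
  --command '{\"Name\":\"deploy\"}'"

echo "$CREATE_APP"
echo "$DEPLOY"
```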

 

Simple PHP App

Congratulations!

Your PHP application is now running on the host “asmwebserver1” in your own dedicated environment in the AWS Cloud.

 

This host is running PHP version 5.3.29.

 

Step 18) Remember, we never had to SSH into the instance; OpsWorks deployed the .php files for us.

 

Step 19) Now we will add another layer for the ELB. But before we add this layer, we go back to EC2 and, on the left, create an ELB called OpsWorkELB; we do not add any instances to it.

 

 

Pick the security group called "AWS-OpsWorks-PHP-App-Server", change the health-check ping path to index.php, and set:

Interval = 10 sec

Healthy threshold = 3

 

Step 20) After it is finished, go to the Instances tab and make sure you do not have any instances added here.

 

Step 21) Now go back to OpsWorks and add the ELB to the layer.

 

Step 22) You can also attach the ELB another way: go back to the PHP App Server layer, and you will see a tab called Network; here we can attach the ELB as well.

 

Let's read the note:

 

Note: After you attach an ELB to a layer, OpsWorks removes any currently registered instances and then manages the load balancer for you. If you subsequently use the ELB console or API to modify the configuration, the changes will not be permanent.

 

Step 23) Now, back on Layers, you will see the ELB has been added, but its health is not good yet; it takes time. After a minute it will be healthy, and if you click on the long DNS name, it will show the same web page as before.

 

Step 24) Now, back in EC2, click on Load Balancers on the left; on the Instances tab you will see the instance name (asmwebserver1). Remember, we did not add anything here ourselves.

 

Step 25) If you go back to OpsWorks and click on Instances, you will see there are three kinds of instances:

  1. 24/7 (the default instance type)
  2. Time-based
  3. Load-based (used with auto scaling)

 

Step 26) Now, under Instances inside OpsWorks, add another instance; let's call it:

 

asmwebserver2

t2.micro

eu-central-1b

 

Step 27) Now click Start. Then go to the long DNS name (https://opsworkelb-1369249705.eu-central-1.elb.amazonaws.com/)

and keep refreshing; you will see both names:

 

asmwebserver1

asmwebserver2

 

Step 28) After about 2 minutes, back in EC2, click on the ELB; you will see the new instance added under the Instances tab, and it is InService.

 

Step 29) Now let's add another instance and call it:

 

asmwebserver3

t2.micro

eu-central-1c

 

Step 30) Now, when you go to the long DNS name and keep refreshing, you will see all three web server names:

 

asmwebserver1

asmwebserver2

asmwebserver3

 

Step 31) Now delete all the instances and layers, remove the ELB, and delete the security groups.

Posted by & filed under Amazon AWS.

 

 

 

AWS provides various options to encrypt your data on S3.

 

There are 2 types of encryption:

 

Client-side: the client encrypts locally using tools/software of their choice.

Server-side encryption: choose from the options available in AWS.

In this video, our focus will be server-side encryption (SSE), since client-side encryption is up to your own preferences and requirements.

 

Server-side encryption itself comes in 3 types:

 

SSE-S3: one-click encryption with S3-managed keys

SSE-KMS: encryption with keys managed in KMS

SSE-C: not available in the console; the customer provides the keys

 

Encrypt a file using SSE-S3:

 

$ aws s3 cp abcd.txt s3://kms-test11 --sse

 

To create a KMS key:

$ aws kms create-key --description "my kms key"

 

List all the keys:

$ aws kms list-keys

 

This will not help us identify the correct key. Create an Alias for the key we created:

aws kms create-alias --alias-name alias/example-alias --target-key-id 1234abcd-12ab-34cd-56ef-1234567890ab

 

 

Then let's list the aliases of the keys as well.

$ aws kms list-aliases

 

To upload and encrypt a file to S3 bucket using your KMS key:

 

aws s3 cp file.txt s3://kms-test11 --sse aws:kms --sse-kms-key-id 4dabac80-8a9b-4ada-b3af-fc0faaaac5

 

Upload and encrypt a file using default KMS Key for S3 in the region:

 

aws s3 cp file.txt s3://kms-test11 --sse aws:kms
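To confirm which flavor of SSE was applied to an uploaded object, you can inspect its metadata with `aws s3api head-object`. A sketch, using the same bucket and key as above (the command is printed here, not run):

```shell
# Sketch only: prints the verification command instead of calling AWS.
CHECK="aws s3api head-object --bucket kms-test11 --key file.txt --query ServerSideEncryption"
echo "$CHECK"
```

The real call returns "AES256" for SSE-S3 and "aws:kms" for SSE-KMS.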

 

 

Want more information on how to become Amazon AWS Certified? Learn more!

 

 

 

Posted by & filed under Community Outreach.

Please Join us for this Homeless Outreach on Sunday 03/11/2018 at 6:15 pm in Rockville MD.


Location:

Women’s Shelter

2 Taft Court, Rockville, MD (map)

 

It was a blessed night serving about 35 people at the Women's Shelter in Rockville, MD. Please join us on the 2nd Sunday of each month at 2 Taft Court, Rockville, MD.

 

 

Posted by & filed under Amazon AWS.

 

Amazon AWS S3 Endpoint

 

 

 

Here we have one VPC = 10.0.0.0/16 and three subnets as follows:

10.0.1.0/24 Public Subnet1

10.0.2.0/24 Private Subnet2

10.0.3.0/24 Public Subnet3

 

We have two servers. One is called Public1, inside the public subnet 10.0.1.0/24, with:

Public IP address: 34.201.13.154
Private IP address: 10.0.1.183

Password -LphUKwynw9

 

The other server is in Private Subnet 2 = 10.0.2.0/24:

Private IP address: 10.0.2.248

Password = rSB=zj6(EY

 

 

Step 1) Log on to the server called Public1 and rename it to Public1.

Step 2) Since we cannot reach the server in the private subnet directly, first RDP to Public1; then, from that server, RDP to the private server.

So we will RDP to the IP address 10.0.2.248.

 

Step 3) Make sure that from the server called Public1 you can ping 10.0.2.248, then RDP to 10.0.2.248 and rename that machine Private1, so you will not get confused about which one is which.

 

Step 4) In order to use the AWS CLI, make sure the AWS CLI is installed on the server.

 

Step 5) Here is the AWS CLI installation guide for Windows:

https://docs.aws.amazon.com/cli/latest/userguide/awscli-install-windows.html#install-msi-on-windows

 

Step 6) Since we want to connect to S3 from the CLI, after installing it we need to make sure the server has an IAM role (we do not want to put credentials inside the instance).

 

Step 7) Go to IAM and create a role called S3Admin.

 

Step 8) Now go back to both EC2 instances (public and private) and make sure you assign the IAM role to them.

 

Step 9) Now, on the public server, type aws s3 ls at the command prompt; you will see all your buckets:

 

C:\Users\Administrator>aws s3 ls

2018-01-28 21:21:01 asmtest4568

2018-01-28 21:20:31 test12345asm

2018-01-28 21:20:47 test698932asm

2018-01-08 17:51:16 testasm12345

 

Step 10) Now we come to our main goal: make the private server in the private subnet able to reach S3, but via a VPC S3 endpoint.

 

Step 11) Before doing that, we need to make sure the AWS CLI is installed on the PC Private1. So first create a NAT gateway (like we did in the last video), making sure to pick the public subnet (10.0.1.0/24), and then point the private route table at the NAT gateway.

You will be able to ping 8.8.8.8 from Private1 after a few minutes.

 

Step 12) Now install the AWS CLI from the same link as before:

 

https://docs.aws.amazon.com/cli/latest/userguide/awscli-install-windows.html#install-msi-on-windows

 

Make sure that, in the same way you gave the role to Public1, you give it to Private1 (role = S3Admin), so you do not need credentials.

C:\Users\Administrator>aws configure

AWS Access Key ID [None]:

AWS Secret Access Key [None]:

Default region name [None]: us-east-1

Default output format [None]: table

 

Step 13) After finishing the AWS CLI install on the PC called Private1, remove the NAT gateway, so the instance no longer has Internet access; then we will set up the Amazon VPC S3 endpoint.

 

Step 14) Now go to VPC at the top, click Endpoints on the left side, then Create Endpoint; pick the S3 service, and make sure to associate it with the private subnet's route table.

 

Hint: for a NAT gateway, make sure to pick the public subnet.

For an S3 endpoint, make sure to pick the private subnet.

 

Step 15) Here, pick Full Access as the policy:

Full Access – Allow access by any user or service within the VPC using credentials from any AWS account to any resources in this AWS service. All policies — IAM user policies, VPC endpoint policies, and AWS service-specific policies (e.g. Amazon S3 bucket policies, any S3 ACL policies) — must grant the necessary permissions for access to succeed.

 

Step 16) Now make sure the private route table has a route pointing to this VPC endpoint:

 

10.0.0.0/16 → local (Active)
pl-63a5400a (com.amazonaws.us-east-1.s3) → vpce-d92aabb0 (Active)

Step 17) Now go and test it: aws s3 ls from Private1 should work.
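Steps 14 to 16 collapse into a single CLI call, since a gateway endpoint is created against the route table directly. A sketch with placeholder IDs (the command is printed, not executed):

```shell
# Sketch only: the vpc- and rtb- IDs are placeholders for your own resources.
CREATE_ENDPOINT="aws ec2 create-vpc-endpoint --vpc-id vpc-0123456789abcdef0 \
  --service-name com.amazonaws.us-east-1.s3 \
  --route-table-ids rtb-0123456789abcdef0"
echo "$CREATE_ENDPOINT"
```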

 

Posted by & filed under Amazon AWS.

 


 

 

In this video, we share how we made a quiz skill that has a database integration so that your progress can be saved.

 

 These are the components of an Alexa skill:

  •  Interaction model (acts like a front-end interface)
  • Logic (code written in a programming language like Python, JS, or Java), usually hosted in AWS Lambda

You will need 2 accounts to accomplish this:

  • AWS account for AWS Lambda and DynamoDB (aws.amazon.com)
  • Amazon developer account for the skill builder (interaction model etc) and certification. (developer.amazon.com)

 

import random  # we need this package to shuffle the answers

questions = ["what's the game's author's name?", "what's 2+2?",
             "what's the capital of the United States?",
             "What's the capital of China?", "what's the capital of India?"]
answers = [["John", "Jack", "Jim"], [4, 5, 6],
           ["Washington DC", "Seattle", "New York"],
           ["Beijing", "Shanghai", "Tibet"], ["New Delhi", "Mumbai", "Bangalore"]]

choiceLetter = ['A', 'B', 'C']


def quiz():
    print("Welcome to the game! You will get five questions. "
          "Type the choice letter (A, B, C) to enter the right answer")

    index = 0  # keeping track of the question index
    score = 0  # keeping track of the quiz score

    for i in range(len(questions)):
        print(questions[index])  # print the question
        answerIndex = 0
        shuffleAns = list(answers[index])  # copy the answer list to a different variable

        random.shuffle(shuffleAns)  # shuffle the answers

        for answer in shuffleAns:
            print("({}) {}".format(choiceLetter[answerIndex], answer))
            if answer == answers[index][0]:  # remember the correct answer's letter after shuffling
                afterShuffle = choiceLetter[answerIndex]
            answerIndex += 1

        a = input("Enter your choice (A/B/C): ")

        if a == afterShuffle:
            print("Correct!")
            score += 1
        else:
            print("wrong answer!")
        index += 1
        print("")

    print("\nYour score is {} out of {}\n".format(score, len(questions)))


# This will run the quiz
quiz()

 

Want more information on how to become Amazon AWS Certified? Learn more!

Posted by & filed under Amazon AWS.

 

Step 1: Go to the CloudFormation page and create a stack.

Step 2: Click on "Select a sample template" and choose WordPress Blog.

Step 3: Choose the parameters: username, password, key, etc.

** Make sure you choose the right SSH key, because we need to log in to the EC2 server later.

Step 4: On the tag page, tag the stack resources with "WordPress Blog".

Step 5: Click next and Create

Step 6: Navigate to output section and click on the URL. This will take you to your blog.

Step 7: Choose a username and password and click next.

Step 8: You should have a blog setup now.

Step 9: SSH into your EC2 and paste the following command

 

$ sudo chown -R apache:apache path/to/wordpress

 

Step 10: Go back to WordPress and you may now install themes and customize your page.

Step 11: Select the stack that we created earlier and click on Actions. Then delete the stack. This will delete all the resources that were created by the stack.
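The whole lab can also be scripted with the AWS CLI. A sketch (the template URL points at the AWS sample-template bucket and may have moved, and the key name is a placeholder; the commands are printed, not run):

```shell
# Sketch only: the template URL and key name are assumptions, not verified endpoints.
CREATE="aws cloudformation create-stack --stack-name WordPressBlog \
  --template-url https://s3.amazonaws.com/cloudformation-templates-us-east-1/WordPress_Single_Instance.template \
  --parameters ParameterKey=KeyName,ParameterValue=my-key"
DELETE="aws cloudformation delete-stack --stack-name WordPressBlog"
echo "$CREATE"
echo "$DELETE"
```

Deleting the stack removes every resource the template created, exactly like step 11 in the console.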

 


Want more information on how to become Amazon AWS Certified? Learn more!

Posted by & filed under Amazon AWS.

 

Autocomplete Lesson:

To add the autocomplete feature:

$ echo "complete -C aws_completer aws" >> ~/.bash_profile

$ source ~/.bash_profile

S3 CLI commands

Create a bucket:
$ aws s3 mb s3://test-saurav-123

Remove the bucket:
$ aws s3 rb s3://test-saurav-123

Creating empty files in Linux :
$ touch file{1..10}
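The file{1..10} trick is plain bash brace expansion, so you can verify it locally with no AWS involved:

```shell
# Runs entirely locally: create ten empty files in a temp directory and count them.
tmpdir=$(mktemp -d)
cd "$tmpdir"
touch file{1..10}          # brace expansion -> file1 file2 ... file10
count=$(ls file* | wc -l)
echo "created $count files"
cd - > /dev/null
rm -rf "$tmpdir"
```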

Copy a local file to a bucket:
$ aws s3 cp file1 s3://copy-file-saurav/

Copy everything in the local directory to a bucket:
$ aws s3 sync . s3://copy-file-saurav/

Create a script to save all files to an S3 bucket every (x) minutes/hours/days or months:
• $ vim sync.sh
• Press i for insert, then type the script:
#!/bin/bash
aws s3 sync /home/ec2-user/Documents/ s3://copy-file-saurav
• Press Esc
• Type :wq! to save and exit
• $ chmod 500 sync.sh
The above command makes the script executable.

On the command line, type this to create a cron job:
$ crontab -e
• Press i to insert
• Paste: */1 * * * * /home/ec2-user/sync.sh
• The above line will sync files every minute. To run it once a day (at midnight) use instead:
0 0 * * * /home/ec2-user/sync.sh
• Save with :wq!

To exclude files with a certain extension (here also setting a public-read ACL and the STANDARD_IA storage class):
$ aws s3 sync /home/ec2-user/Documents/ s3://copy-file-saurav --exclude '*.rtf' --acl public-read --storage-class STANDARD_IA

To upload files with public-read permission into the S3-IA storage class:
$ aws s3 sync /home/ec2-user/Documents/ s3://copy-file-saurav --acl public-read --storage-class STANDARD_IA

For further reference visit:
https://docs.aws.amazon.com/cli/latest/reference/s3/index.html

 

 

Want more information on how to become Amazon AWS Certified? Learn more!

Posted by & filed under Amazon AWS.

 

 

Step 1) Take an image of one of my web servers, called Webserver1; when I go to its public IP address, I see the content of the web server.

Step 2) Stop the instance, then take an image of it. Call it "Webserver1Image".

Step 3) Now I will see the new image under Images on the left (under AMIs).

Step 4) Now create an Elastic Load Balancer called AutoscaleELB, choose a security group, and pick two subnets in different AZs (here us-east-1a and us-east-1b).

Step 5) I will not add any instances under this ELB; Auto Scaling will add them later.

Step 6) Now I will see my AMI is available.

Step 7) Now start the Auto Scaling process: on the left, click on Launch Configurations.

Step 8) click to create an “Auto Scaling Group”

Step 9) First you need to define a template that your Auto Scaling group will use to launch instances.

Step 10) Click on “Create Launch configuration “

Step 11) Here, pick "My AMIs" on the left and choose "Webserver1Image", which we made in step 2.

Step 12) After selecting the AMI, call the configuration "AutoscalingLaunchConfiguration".

Step 13) Click Advanced configuration and, for the IP address, pick: Assign a public IP address to every instance.

Step 14) For the security group, pick something with HTTP access.

Step 15) Use one of your Keys and start it up

Step 16) Now it will take you to the Auto Scaling group; let's call it "Auto Scaling group 1".

Step 17) Here, make sure to pick the correct VPC, then "Start with 2 instances"; add the AZs from the subnet dropdown and pick the correct subnets, us-east-1a and us-east-1b (they must match the ELB created before).

Step 17b) Make sure to click Advanced and pick the ELB you created in step 4, called AutoscaleELB.

Step 18) Now configure the scaling policy: here you tell it, for example, to add one instance if the CPU is above 80%, and to remove one instance if the CPU is below 30%. Scale between a minimum of 2 and a maximum of 5 instances.

Step 19) Scale between 2 and 5: if CPU utilization goes above 80%, the maximum number of EC2 instances I can have will be 5, and the minimum will be 2.

Step 20) Now add the policies: if the CPU goes above 80%, add "1" instance; if the CPU drops below 30%, remove one EC2 instance. Remember, the minimum will be 2 EC2 instances and the maximum will be 5.
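Steps 18 to 20 map onto two Auto Scaling CLI calls. A sketch with placeholder names (the commands are only assembled and printed; in practice, the 80%/30% CPU thresholds live in CloudWatch alarms that trigger these policies):

```shell
# Sketch only: the names must match your own launch configuration and ELB.
CREATE_ASG="aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name ASMGroup1 \
  --launch-configuration-name AutoscalingLaunchConfiguration \
  --min-size 2 --max-size 5 \
  --availability-zones us-east-1a us-east-1b \
  --load-balancer-names AutoscaleELB"
SCALE_OUT="aws autoscaling put-scaling-policy --policy-name cpu-high-add-1 \
  --auto-scaling-group-name ASMGroup1 \
  --scaling-adjustment 1 --adjustment-type ChangeInCapacity"
SCALE_IN="aws autoscaling put-scaling-policy --policy-name cpu-low-remove-1 \
  --auto-scaling-group-name ASMGroup1 \
  --scaling-adjustment -1 --adjustment-type ChangeInCapacity"
echo "$CREATE_ASG"
echo "$SCALE_OUT"
echo "$SCALE_IN"
```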

Step 21) Now you can create a notification and enter your e-mail address for alerts.

Step 22) As it finalizes the process, go to EC2 and you will see two instances come up; let us name them as follows:

Autoscaling1 created 9-17-2017

Autoscaling2 created 9-17-2017

Step 23) Now if I go to the ELB, I will see it lists the two instances created by Auto Scaling, and also the long DNS name; when I copy and paste the DNS name, I see the same web server content that was tested in step 1.

Step 24) Now a test: since I cannot easily create 80% CPU utilization, I will instead make sure one of the EC2 instances dies (terminate it) and see if the Auto Scaling group creates a new one; so select the EC2 instance in us-east-1a and terminate it.

Step 25) After killing the EC2 instance in us-east-1a, the load balancer shows only one instance in service (us-east-1b). Going back to EC2, you will see Auto Scaling provisioning a new EC2 instance in us-east-1a, and back at the Elastic Load Balancer you will again see two instances, one in us-east-1a and one in us-east-1b. Also, go to the Auto Scaling group and check the Activity tab; you will see one instance terminating and then another one initializing.

Step 26) Now, when I go to the Auto Scaling group on the left and click on the Activity History tab, I will see the history of what is happening.

Step 27) Let us go to EC2 and give the new EC2 instance a name as follows:

Autoscaling3 created 9-17-2017

Step 28) Now we see this instance (Autoscaling3 created 9-17-2017) shown under the Elastic Load Balancer.

 

 

Want more information on how to become Amazon AWS Certified? Learn more!

Posted by & filed under Amazon AWS.

 

Lambda is an event-driven compute service: we write functions that execute when a trigger fires. In today's lab, we will set up a trigger with CloudWatch Events.

On top of that, we will need an IAM role that gives our Lambda function permission to stop EC2 instances. We will also add CloudWatch permissions to that role so that the Lambda function can log the event every time it is triggered.

Let’s get started with setting up the role:

  • Go to the IAM console and click on Roles
  • Click Create role
  • Under AWS service, select Lambda. Click Next: Permissions.

You should see a list of policies. Search for AmazonEC2FullAccess and check the box.

  • Also, search for CloudWatchFullAccess and check that box.
  • Click Next: Review.
  • Choose a name (eg: Lambda_EC2_cloudwatch_access) for the role and add a description
  • Click Create role

 

Now we should have a role that we can attach to our Lambda function.

Now Let’s work on our Lambda function:

  • Go to the Lambda console.
  • Click Create a function
  • Click Author from Scratch
  • Give the function a name (eg: stopEC2)
  • Under role: Select choose an existing role
  • Select the role we created earlier.
  • Click on create function.
  • Select Python 2.7 as the Runtime.
  • Paste our python function:

 

import boto3

client = boto3.client('ec2')

def lambda_handler(event, context):
    # Collect every instance ID in the region, then stop them all in one call.
    response = client.describe_instances()
    ids = []
    for reservation in response["Reservations"]:
        for instance in reservation["Instances"]:
            print(instance["InstanceId"] + " stopping")
            ids.append(instance["InstanceId"])
    if ids:
        client.stop_instances(InstanceIds=ids)
    return "Completed"

 

  • Click on Save and Test. You’ll see a configure test event window. Give the test event a name and click on create.
  • Copy the Arn of the lambda function from the top of the page: (eg: arn:aws:lambda:us-east-1:771454434342637355:function:stop

Cloudwatch Events Setup:

  • Go to the Cloudwatch console.
  • On the left-hand side menu, Under Events, select Rules.
  • Click Create rule
  • Select Schedule and select Cron expression.
  • Put 0 5 * * ? * in the box. This fires at 5:00 am GMT every day.
  • This is the documentation for the cron expression: https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/ScheduledEvents.html
  • On the right-hand side, click on Add Target. Select Lambda function and in the function field, paste the ARN we copied earlier.(eg: arn:aws:lambda:us-east-1:771454434342637355:function:stop
  • Click on Configure Details.
  • Give the Rule a name and description and Create a Rule.
  • Click on Create.
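The rule can also be created from the CLI. A sketch (the account ID and function name in the ARN are placeholders, and the commands are printed, not run; note that when scripting this, you must additionally grant CloudWatch Events permission to invoke the function with `aws lambda add-permission`, which the console does for you):

```shell
# Sketch only: the Lambda ARN below is a placeholder.
PUT_RULE="aws events put-rule --name stop-ec2-nightly \
  --schedule-expression 'cron(0 5 * * ? *)'"
PUT_TARGETS="aws events put-targets --rule stop-ec2-nightly \
  --targets Id=1,Arn=arn:aws:lambda:us-east-1:111111111111:function:stopEC2"
echo "$PUT_RULE"
echo "$PUT_TARGETS"
```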

Now let's go to our EC2 console and launch or start a few (3) instances.

It's time to test the Lambda function. Since we can't wait until 5 am GMT, let's trigger the test event manually.

Go back to the Lambda function we created earlier and click Test.

Check your EC2 console. Your EC2 instances should be stopping.

That concludes the lab.

Terminate the EC2 instances that you created earlier.

 

*If you have errors in the Lambda function, you can go to CloudWatch Logs and troubleshoot.

 

 

Want more information on how to become Amazon AWS Certified? Learn more!

 

Posted by & filed under Amazon AWS.

Amazon AWS EFS with ELB

 

 

 

EFS (Elastic File System) Features

  • EFS is a file storage option for EC2 that can be attached to more than one EC2 instance; remember, an EBS volume can only be attached to one EC2 instance at a time.
  • EFS is elastic and scalable (the storage capacity grows and shrinks as you add and remove files, unlike a volume that starts at a fixed size, for example 30 GB).
  • Supports the Network File System Version 4 (NFSv4) protocol
  • You only pay for the storage you use (no pre-provisioning required)
  • Can scale up to the petabytes
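For reference, the file system and its mount targets (one per subnet/AZ) can also be created from the CLI. A sketch with placeholder IDs (the commands are printed, not executed):

```shell
# Sketch only: the fs-/subnet-/sg- IDs are placeholders for your own resources.
CREATE_FS="aws efs create-file-system --creation-token EFSWebsites"
MT_1A="aws efs create-mount-target --file-system-id fs-12345678 \
  --subnet-id subnet-aaaa1111 --security-groups sg-0123abcd"
MT_1C="aws efs create-mount-target --file-system-id fs-12345678 \
  --subnet-id subnet-cccc3333 --security-groups sg-0123abcd"
echo "$CREATE_FS"
echo "$MT_1A"
echo "$MT_1C"
```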

Now we will do a lab.

Here we will have two EC2 instances (one in each subnet, in different AZs). We will create an EFS file system and mount it on both instances: we will boot an Amazon Linux AMI, install the Apache web server, create index.html on one of the instances, see that it also appears on the 2nd instance, and then test it via an ELB to see our sample web page.

 

Step 1) Go to AWS and create a new EFS file system.

Step 2) Choose the VPC = 10.0.0.0/16, then choose two subnets, Subnet 1 and Subnet 3, each in a different AZ:

  us-east-1a with Subnet 1 = 10.0.1.0/24

  us-east-1c with Subnet 3 = 10.0.3.0/24

Make sure to pick the correct security groups (here RDP, HTTP, ICMP) and the default security group.

Click Next

Step 3) Give it the name EFSWebsites, leave the defaults, and click Next.

Step 4) Now it should be creating the EFS (when it is finished, it will show Available).

Step 5) Now go to EC2 and create two instances from the Amazon Linux AMI, called EFS-east1a and EFS-east1c.

Put the first instance (EFS-east1a) inside Subnet 1.

Put the 2nd instance (EFS-east1c) inside Subnet 3.

Step 6) Make sure to use the same security groups as above.

Step 7) Now I will SSH to the first instance (EFS-east1a); here we use a key generator (PuTTYgen) to create a .ppk, since my key was in .pem format.

Step 8) Since we will be working in two SSH windows, pay attention to the private IP addresses: 10.0.1.x (1st instance) and 10.0.3.x (2nd instance).

Step 9) Type these commands:

sudo su (to become root)

yum update -y

yum install httpd -y (to install the Apache server)

service httpd start (to start the web server service)

Step 10) The above creates a directory for us at /var/www/html.

So let’s go to that directory

[root@ip-10-0-1-8 ec2-user]# cd /var/www/html

[root@ip-10-0-1-8 html]# ls

As we see when I type ls there is nothing inside my directory now.

Step 11) Now go to the 2nd instance (10.0.3.x), SSH to it, and type the same commands as before.

Make sure you are root on both instances.

Step 12) Now create a classic ELB (call it EFSELB) and include these two instances; make sure to pick the same security groups as the instances (including the default security group).

Step 13) Change the interval to 6 seconds and the healthy threshold to 2 retries.

Step 14) If you go to the Instances tab, you will see the status OutOfService (since we have no index.html yet).

Step 15) Now go back to EFS and follow the mount instructions; here, click on:

 

Amazon EC2 mount instructions

I do not need to do these steps, since my Amazon Linux AMI already has the NFS client:

————————————————–Skip these steps—————————————-

Install the nfs client on your EC2 instance.

  • On an Amazon Linux, Red Hat Enterprise Linux, or SuSE Linux instance:
    sudo yum install -y nfs-utils
  • On an Ubuntu instance:
    sudo apt-get install nfs-common

——————————————————————————————————

Now if you get this message in the box :

Your Amazon VPC is not currently configured to enable mounting using DNS names. To enable this feature go to the Amazon VPC console; in the Actions menu, select “Edit DNS Resolution” and “Edit DNS Hostnames” and make sure both are set to “Yes”. For instructions on mounting your file system using a mount target IP address instead of a DNS name, please see our technical documentation.

 

Now, after fixing the DNS settings, you can copy and paste this command to mount your file system:

sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 fs-a9ffaee0.efs.us-east-1.amazonaws.com:/ efs

But instead of an /efs directory, we will mount it at /var/www/html (on both instances):

sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 fs-a9ffaee0.efs.us-east-1.amazonaws.com:/ /var/www/html

 

Step 16) If you paste the above command and it does not work or hangs, go back to both instances and make sure you have added the default security group (also on the EFS side).

Step 17) Now go to one of the instances (for example the first one) and go to the directory:

cd /var/www/html

Then type nano index.html

and write something inside it; we will then see this file on the 2nd instance.

 

Step 18) Now, when I go to the other instance, I will see index.html with the same content.

Step 19) So we have mounted this EFS file system on the directory /var/www/html.

Step 20) Now go to the ELB and copy and paste the long DNS name:

 

EFSELB-30545612.us-east-1.elb.amazonaws.com 

I will see the content of index.html

 

Step 21) Now I can go to Route 53 and create an "A" record with Alias pointing to the ELB, then open the site with my domain name:

Hamedtest100.com


 

Here are the differences between EBS, EFS, and S3:

Amazon EFS is a file storage service for use with Amazon EC2. Amazon EFS provides a file system interface, file system access semantics (such as strong consistency and file locking), and concurrently-accessible storage for up to thousands of Amazon EC2 instances.

Amazon EBS is a block-level storage service for use with Amazon EC2. Amazon EBS can deliver performance for workloads that require the lowest-latency access to data from a single EC2 instance.

Amazon S3 is an object storage service. Amazon S3 makes data available through an Internet API that can be accessed anywhere.

 


 

Want more information on how to become Amazon AWS Certified? Learn more!

Posted by & filed under Amazon AWS.

EBS Vs Instance Store, Snapshot, and Volume

 

 

 

  • Instance store volumes are sometimes called ephemeral storage.
  • Instance store-backed instances cannot be stopped. If the underlying host fails, you will lose your data.
  • EBS-backed instances can be stopped. You will not lose the data on the instance if it is stopped.
  • You can reboot both; you will not lose your data.
  • By default, both root volumes will be deleted on termination; however, with EBS volumes, you can tell AWS to keep the root device volume.

Amazon (EC2) Elastic Compute Cloud

  • Amazon EC2 provides scalable virtual servers in the cloud. The virtual servers can run different operating systems but most commonly run a flavor of Linux or Windows.
  • An EC2 virtual server is known as an “instance” and can be made up of different instance types and sizes.
  • Pricing Models
    • Reserved Instances
      • Purchase reserved instances when you know the “amount of usage” you will be using for an instance. If an instance is needed 100% of the time purchasing reserved instances at a discount will reduce costs.
    • On-Demand Instances
      • Are used "on-demand" and are paid for by the hour. If you only need an instance for an hour, only fire it up for that long!
    • Spot Instances
      • Bid on unused EC2 instances for “non-production applications”.

Volumes Vs Snapshots

  • Volumes exist on EBS
    • Virtual Hard Disk
  • Snapshots exist on S3
  • You can take a snapshot of a volume; this will store that volume on S3
  • Snapshots are incremental: only the blocks that have changed since your last snapshot are moved to S3
  • If this is your first snapshot, it may take some time to create
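The incremental behavior can be illustrated with a toy model. Block contents here are plain strings; real EBS tracks changed blocks at the storage layer, but the idea is the same:

```python
# Toy model of incremental EBS snapshots: only blocks changed since the
# last snapshot need to be copied to S3.

def blocks_to_upload(volume_blocks, last_snapshot=None):
    """Return the set of block indexes that must be uploaded for this snapshot."""
    if last_snapshot is None:
        # First snapshot: every block is uploaded, which is why it takes longer
        return set(volume_blocks)
    return {i for i, data in volume_blocks.items() if last_snapshot.get(i) != data}

volume = {0: "boot", 1: "apps", 2: "data"}
first = blocks_to_upload(volume)        # all 3 blocks go to S3
volume[2] = "data-v2"                   # modify one block
second = blocks_to_upload(volume, last_snapshot={0: "boot", 1: "apps", 2: "data"})
```

The first snapshot uploads every block; the second uploads only the one changed block.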

Volumes Vs Snapshots – Security

  • Snapshots of encrypted volumes are encrypted automatically.
  • Volumes restored from encrypted snapshots are encrypted automatically.
  • You can share snapshots, but only if they are unencrypted.
    • These snapshots can be shared with other AWS accounts or made public

Snapshots of Root Device Volumes

  • To create a snapshot for Amazon EBS volumes that serve as root devices, you should stop the instance before taking the snapshot.

 

Now we will do a Lab

Step 1) I go to the Sydney region; in here I have a VPC = 192.168.0.0/16 and two public subnets:

192.168.1.0/24

192.168.2.0/24

Step 2) I will go to EC2 and launch one Windows 2008 server with an EBS root volume; under the Volumes section on the left side I will then also see the new volume.

Step 3) After I log in, I will go to the Windows 2008 server and rename it, then install the IIS web server, then go to c:\inetpub\wwwroot and create an index.html file and save it in this folder (hint: make sure the extension is .html, not .txt; use Folder Options to check whether file extensions are hidden)

Step 4) Now type the public IP address into IE or Google Chrome; you should be able to see the content of the homepage

Step 5) Now I will stop this EC2 instance, called EBSSydney1, and take an image of it. When I select the instance, in here I can also extend the volume size; let's name the image "Image of EBSSydney1".

When you create an EBS image, an EBS snapshot will also be created for each of the above volumes.

Step 6) Now I will see "Image of EBSSydney1" under AMIs and also under Snapshots on the left side

Step 7) So what does this mean? I can use this AMI as a baseline: starting from it, I can get another EC2 instance with exactly the same software (like IIS and the web content).

Step 8) Now I will launch a new EC2 instance, but this time from My AMIs

Step 9) Go through the same process as before, place this new EC2 inside the public subnet 192.168.1.0/24, and name it EBSSydney2

Step 10) As we go through these steps, note that I can uncheck Delete on Termination (I could not do this with an instance store volume)

Step 11) Now under EC2 I will see two instances, and under Volumes I will see two volumes

Step 12) Now I will connect to the new EC2 called EBSSydney2 via RDP; also, if I copy and paste its public address into a browser, I should see the same content as EBSSydney1

Step 13) After logging in with the password, I will see IIS installed with the file c:\inetpub\wwwroot\index.html; then I will rename the computer to EBSSydney2

Step 14) Now, if I want, I can increase the volume size of EBSSydney1 from 30 GB to 35 GB; all I have to do is go to Volumes, click Actions, and then modify it.

Step 15) I will restart EBSSydney1 to see if I got the extra 5 GB of storage; when I go to Volumes, click on EBSSydney1 and look under the Status tab, I will see a percentage as it optimizes the volume.

Step 17) Make sure the state shows in-use – completed (100%)

Step 18) If you go to Server Manager from the taskbar and click on Storage, you will now see the extra 5.00 GB of space; all you have to do is highlight it and then you can extend your C drive.

Step 19) Now I can see the C drive is 35 GB

Step 20) Now let’s say I want to attach a new 50 GB volume to EBSSydney1 as the D drive; I can go to Volumes on the left, create a new volume (here I can encrypt it), and then attach it to EBSSydney1

Here is the ID of new Volume: vol-0401eb87ba30edb96; ( for Example)

Step 21) Let’s rename it "New Volume Made". In the State column on the right I will now see "available" (instead of "in use")

Step 22) Let’s go and attach it to EBSSydney1 (the volume size is 50 GB)

Step 23) Now if I go to EC2, click on EBSSydney1 and look at the bottom, I will see both the root volume and this extra attached volume

Step 24) Now go to EBSSydney1 (via RDP), open Server Manager and click on Storage; I will see this new 50 GB drive (extra HD). I can right-click on it to initialize it, then right-click again to create a volume, use all 50 GB of space, and assign the drive letter "D"

Step 25) Now I will have two drives, C and D. Let’s imagine my D drive is used for data, so I will go and store a file called Data.txt inside it.

Step 26) So up to now I have two EC2 instances:

EBSSydney2 with a C drive

EBSSydney1 with C and D drives

Step 27) Since these EBS volumes are attached to the EC2 instances over the network, I can detach the D-drive volume from EBSSydney1 and connect it to EBSSydney2

Step 28) After I detach it, I will see its status as "available"

Step 29) Now I will go and attach this volume (the D drive) to EBSSydney2

Step 30) Now when I go to EBSSydney1, the D drive is no longer there; but when I go to EBSSydney2, I will see it has a D drive and my data is still there.

Step 31) So we see an EBS volume can be attached to only one EC2 instance at a time, and if the new volume was created encrypted, it stays encrypted after being attached.
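The detach/attach workflow from steps 27 to 30 can be summarized with a toy model of volume state. The instance names come from this lab; the one-attachment-at-a-time rule is what the model enforces:

```python
# Toy model: a standard EBS volume is either "available" (attached_to is None)
# or attached to exactly one instance.

class Volume:
    def __init__(self, name):
        self.name = name
        self.attached_to = None  # None means status "available"

    def attach(self, instance):
        if self.attached_to is not None:
            raise RuntimeError(f"{self.name} is already in use by {self.attached_to}")
        self.attached_to = instance

    def detach(self):
        self.attached_to = None  # back to "available"

d_drive = Volume("D")
d_drive.attach("EBSSydney1")
d_drive.detach()                # status: available
d_drive.attach("EBSSydney2")   # the data on the volume travels with it
```

Attempting to attach the volume to a second instance while it is in use raises an error, mirroring what the console prevents.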

Step 32) Now I will launch an instance-store-backed instance (instead of EBS-backed), and we will see that I cannot stop it, only reboot or terminate it

Step 33) Go to EC2 and create a new instance; on the left side click on Community AMIs, then make sure to select root device type "Instance store"

Step 34) Go through the process and pick the first AMI; as you see there is no Free Tier here, so pick m3.medium and place it in the public subnet

Step 35) On the Add Storage page, read the note:

Your instance will be launched with the following storage device settings. You can attach additional instance store volumes to your instance. You can also attach additional EBS volumes after launching an instance, but not instance store volumes. Learn more about storage options in Amazon EC2.

Step 36) Click Add New Volume; as you see, you do not have many options: you cannot change "Delete on Termination" and you cannot encrypt. Give the instance the name

InstanceSydney1

Step 37) As you see it is not free, so as soon as we are done we will terminate it.

Step 38) As you see, when you create an instance-store-backed instance, nothing appears under Volumes (unlike when we created the EBS-backed instances)

Step 39) Now when I click Actions and look at Instance State, I can only reboot or terminate; I cannot stop it, and I cannot take an image of it.

 



Posted by & filed under Amazon AWS.

Simple Notification Service (SNS)

 

 

SNS is integrated into many AWS services. We are able to use it to receive notifications when events occur in our AWS Environment. With CloudWatch and SNS a full environment monitoring solution could be created that notifies administrators of alerts, capacity issues, downtime, changes in the environment, and more!
 

  • TOPIC: A topic is what a message is sent to
  • Subscription endpoint: SNS sends all messages to the subscriptions subscribed to a specific topic
  • Subscriber endpoints include the following:
     

    • Application, mobile app notifications (iOS/Android/Amazon/Microsoft)
    • SMS
    • HTTPS
    • HTTP
    • Email
    • Email-JSON
    • SQS Queue
    • AWS Lambda
  • Remember SNS is PUSH, but SQS (for queues) is PULL
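The push model can be sketched in a few lines of Python. The topic name and protocols are from this lab; the delivery callbacks stand in for real endpoints like an email address or an SQS queue:

```python
# Minimal sketch of SNS fan-out: a topic PUSHES every published message
# to all of its subscribed endpoints.

class Topic:
    def __init__(self, name):
        self.name = name
        self.subscriptions = []  # list of (protocol, deliver-callback)

    def subscribe(self, protocol, callback):
        self.subscriptions.append((protocol, callback))

    def publish(self, message):
        # PUSH: SNS delivers to every subscriber immediately;
        # contrast with SQS, where consumers PULL messages off a queue.
        for protocol, callback in self.subscriptions:
            callback(message)

inbox, phone = [], []
topic = Topic("A picture was uploaded to S3 buckets")
topic.subscribe("email", inbox.append)
topic.subscribe("sms", phone.append)
topic.publish("new-photo.jpg")
```

One publish reaches both subscribers, which is the fan-out behavior tested in steps 6 and 7 below.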

Step 1) We will create a TOPIC. Call it “A picture was uploaded to S3 buckets”

Step 2) Let's create a subscription; here we will use the "Email" protocol

The choice of Protocols are:

  • HTTP
  • SMS
  • HTTPS
  • Email
  • Email-JSON
  • SQS Queue
  • AWS Lambda
  • Application

 Step 3) Then insert your e-mail address in here:

 Step 4) As we see, the subscription is pending; go to that e-mail inbox and click the link to confirm the subscription

Step 5) Now when I go back to AWS SNS, I will see the new subscription ID

Step 6) If you want to add another subscription, click Subscriptions, choose SMS (to get a text message) and provide your cell phone number.

Step 7) Now I will test it by publishing to this topic myself; remember, notifications can also be triggered by events.

Step 8) Now that all is OK, I will go to S3, create a bucket, and configure it to trigger SNS to e-mail me when a file is uploaded to S3

Step 9) Go to S3 and create the bucket "testswithsns" with all default values

Step 10) Then go to the properties of that bucket and click on Events

Step 11) Pick the event "Object create"; as you see, when you select it you have the choices SNS Topic, SQS Queue, or Lambda Function

Step 12) I pick SNS and choose the topic I created in the first steps

Step 13) Now when I upload pictures to my S3 bucket I will see notifications.
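The console clicks in steps 10 to 12 correspond to a bucket notification configuration. Below is a sketch of the request body only; the key names follow the S3 PutBucketNotificationConfiguration API, and the topic ARN is a placeholder, not a real one from this lab:

```python
# Sketch: build the S3 bucket notification configuration that fires an
# SNS topic on object-created events ("Object create" in the console).

def sns_upload_notification(topic_arn):
    return {
        "TopicConfigurations": [
            {
                "TopicArn": topic_arn,
                "Events": ["s3:ObjectCreated:*"],
            }
        ]
    }

config = sns_upload_notification("arn:aws:sns:us-east-1:123456789012:upload-topic")
```

In a real deployment this dict would be passed to the bucket's put-notification-configuration call, with the topic policy allowing S3 to publish.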

 

 

 


Posted by & filed under Amazon AWS.

 

 

 

 

 

Step 1) Create Target Group 1 (Pic) and Target Group 2 (Video)

Step 2) Register EC1 (Pic1) and EC2 (Pic2) with Group 1:
click the Targets tab (make sure to click Add to registered, then Save)
Step 3) Register EC3 (Video3) and EC4 (Video4) with Group 2:
click the Targets tab (make sure to click Add to registered, then Save)
Step 4) Now we will click on Application Load Balancer and go through the steps

Step 5) After selecting the security group (port 80), we will pick both Target Group 1 and Target Group 2;
if we had not already created the target groups, we would have to do it here. Creating them first makes the
material easier to understand.

Step 6) Here I pick Picture as the default group (it does not matter which one we pick, as we will see later);
the rest of the parameters are filled in.
Step 7) We wait about 5 minutes so the state goes from Provisioning to Active. Now if you go to Target Groups,
you will see both of the instances in Group 1 (Pic) are healthy. But when you look at Group 2 (Video),
you will see it as unused, since we still have to link the video group to the Application Load Balancer.

Step 8) Now if I copy and paste the long DNS name https://applicationloadbalancer-780517374.us-east-1.elb.amazonaws.com/
it will show the page that corresponds to Pic1 and Pic2

Step 9) Now if you go back to Target Groups and click on Target Group 1 (Pic), on the first tab I see the load balancer
shown here; but when I click on Target Group 2 (Video), on the first tab I do not see the load balancer associated
here. So I need to go to the load balancer and change the rules (last tab).

Step 10) In here we see the default rule points to Group 1 (Pic).

Step 11) Click on View/Edit Rules

Step 12) Now we will add a content rule: for path *picture* go to the Picture group.
Then we will add another content rule: for path *video* go to the Video group.

Step 13) Now if I paste that long DNS name with /picture.html, it will show the content from EC1 and EC2 (after refresh)

https://applicationloadbalancer-780517374.us-east-1.elb.amazonaws.com/picture.html

Step 14) Now if I paste that long DNS name with /video.html, it will show the content from EC3 and EC4 (after refresh)

https://applicationloadbalancer-780517374.us-east-1.elb.amazonaws.com/video.html

Step 15) Now if you go back to Target Group 2 (Video), on the first tab we will see it is now associated with the load balancer
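The path rules from step 12 behave like simple wildcard matching with a fallback to the default group. A toy sketch of the routing decision follows; the target-group names are placeholders, and real ALB rules are evaluated by priority, which this model approximates by list order:

```python
# Sketch of ALB path-based routing: first matching wildcard rule wins,
# otherwise the request goes to the default target group (pictures).
from fnmatch import fnmatch

RULES = [
    ("*picture*", "target-group-1-pic"),
    ("*video*", "target-group-2-video"),
]
DEFAULT_GROUP = "target-group-1-pic"

def route(path):
    for pattern, group in RULES:
        if fnmatch(path, pattern):
            return group
    return DEFAULT_GROUP
```

So /picture.html lands on the picture instances, /video.html on the video instances, and the bare DNS name falls through to the default (picture) group, matching steps 8, 13, and 14.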

 

 

 


Posted by & filed under Cisco CCNA.


Click here to see the presentation

 

Here what I have:

Pc1=10.10.10.1

pc2=10.10.10.2

pc3=10.10.10.3 connected to port f0/3 which is located in Lobby

hacker=10.10.10.4

 

 

The goal: I want to protect port f0/3, located in the lobby, and make sure only PC3 (sales3) is able to connect and do its work.

 Hint: You will go to int f0/3 and start with switchport ?

Posted by & filed under 70-410, Microsoft MCSA.

Minimum requirements

Microsoft publishes hardware minimum requirements for the software it sells. For Windows Server 2012, the minimum requirements are:
Processor: Minimum: 1.4 GHz 64-bit processor
Ram: Minimum: 512 MB
Disk Space: Minimum: 32 GB
Other requirements:
DVD drive
Super VGA (800 x 600) or higher-resolution monitor
Keyboard and Microsoft® mouse (or other compatible pointing device)
Internet access

 

Server 2012 Installation

Insert the Server 2012 DVD in the server and go through the following steps:

  • Once you get the following message, press Enter to boot from the setup.
    Server 2012 Installation

 

  • Wait for a while until the setup loads all necessary files (depending on your machine, this may take a couple of minutes)
    Server 2012 Installation

 

 

  • Once the setup files are loaded, the setup will start with the following screen. You can change these to meet your needs (the default values should be fine for now).
    Server 2012 Installation

 

  • Once you click Next, you can start the installation, click “Install now”.
    Server 2012 Installation

 

  • You will see the following screen, wait until it finishes loading.
    Server 2012 Installation

 

  • In the following setup screen, you will see four options. Select Windows Server 2012 DataCenter Evaluation (Server With GUI).
    Server 2012 Installation

 

  • After you click Next from the previous screen, read the license terms, tick "I accept the license terms" and click Next.
    Server 2012 Installation

 

  • Now It will ask you for the drive (or partition) you want to install Windows on. Here we will install it on the one partition we have.
    NOTE: This will remove the contents of the partition. Either create a partition to install Windows on, or test this on a testing machine.
    Server 2012 Installation

 

  • Now once we picked our partition, clicking on next from previous screen will start the setup. This process might take a while.
    Server 2012 Installation

 

  • Once the setup is done, it will restart and start your Windows Server 2012 for the first time. It will ask you then to set up a password for the Administrator user.
    Server 2012 Installation

 

  • The setup will finalize your settings, might take a couple of minutes.
    Server 2012 Installation

 

  • Once the setup is done, you can log in for the first time to your Windows Server; as the screen says, press Ctrl+Alt+Delete to log in, and use the password you set in the setup process.
    Server 2012 Installation

 

  • Once you Log in, Windows Server 2012 will show the Server Manager.
    Server 2012 Installation

 

 

To become Microsoft Certified, please check out the link;

 

 

 

Posted by & filed under CompTIA Network+, MICROSOFT MTA NETWORKING.

Taking a Network+ exam? Follow us on Facebook and ask us about our study group.

The Open Systems Interconnection (OSI) model was developed by the International Organization for Standardization (ISO). The OSI model provides a framework for creating and implementing networking standards and devices and describes how network applications on different computers can communicate through the network media. In this post, each of the seven layers of the OSI model is explained in simple terms.

Layer 1 – Physical layer

Physical layer defines the physical medium itself. It details how cables, connectors and network interface cards are supposed to work and how to send and receive bits. When a networking problem occurs, many networking pros go right to the physical layer to check that all of the cables are properly connected and that the power plug hasn’t been pulled from the router, switch or computer, for example.

Layer 2 – Data Link

The data link layer defines the format of data on the network. It is divided into two sublayers: the Media Access Control (MAC) layer and the Logical Link Control (LLC) layer. The MAC layer is used for hardware addressing and for controlling the access method. The LLC layer is used for flow control and error detection.

Layer 3 – Network

The network layer defines device addressing, routing, and path determination. It acts as a boundary between the host and the subnet, and deals with routing issues, as well as deadlock and congestion issues caused by an increased number of data packets degrading performance.

At this layer sits the Internet Protocol (IP). IP is responsible for routing, directing datagrams from one network to another. Even though IP packets are addressed using IP addresses, hardware addresses must be used to actually transport data from one host to another. The Address Resolution Protocol (ARP) is used to map an IP address to a hardware address.

Layer 4 – Transport

The transport layer is responsible for getting the entire message across: it establishes and terminates connections between two computers and keeps track of fragmentation and out-of-order packets. It is used for flow control and data recovery.

Two transport protocols, Transmission Control Protocol (TCP) and User Datagram Protocol (UDP), sit at the transport layer.

TCP establishes connections between two hosts on the network through ‘sockets’ which are determined by the IP address and port number.

UDP on the other hand provides a low overhead transmission service, but with less error checking.

Layer 5 – Session

This layer defines how to establish, manage and terminate connections between applications.

Layer 6 – Presentation

The Presentation layer defines the data formats. The compression and encryption are also defined at this layer.

Layer 7 – Application

This layer provides network services to end users. Layer 7 is the layer users interact with directly: web browsers (Google Chrome, Firefox, Safari), e-mail clients (Outlook, Thunderbird), and chat applications (WhatsApp, Skype, Viber) are all examples of Layer 7 applications.
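The seven layers above can be summarized in a small lookup table. The protocol-to-layer helper maps a few of the protocols mentioned in this post; HTTP at layer 7 is added here purely for illustration:

```python
# Lookup table for the seven OSI layers, bottom (1) to top (7).
OSI_LAYERS = {
    1: "Physical",
    2: "Data Link",
    3: "Network",
    4: "Transport",
    5: "Session",
    6: "Presentation",
    7: "Application",
}

def layer_of(protocol):
    """Map a few well-known protocols to their OSI layer number."""
    return {"IP": 3, "TCP": 4, "UDP": 4, "HTTP": 7}.get(protocol)
```

For example, `layer_of("TCP")` returns 4, the transport layer where TCP and UDP sit.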

References: Network World

Want more information on how to become CompTIA Network+ Certified? Learn more!

Posted by & filed under CompTIA Network+, MICROSOFT MTA NETWORKING.

Taking a Network+ exam? Follow us on Facebook and ask us about our study group.

WiFi is not an acronym but a brand name created by a marketing firm, meant to serve as an interoperability seal for marketing efforts. WiFi technology provides wireless Internet access via radio waves, which transmit a signal to a wireless-enabled device. You can access this type of connection from up to one hundred feet away, and the performance of the connection can vary according to the 802.11 standard being used and the number of devices connecting to the router simultaneously. The 802.11 standard offers a number of different bandwidth levels, each symbolized by a letter.

The IEEE naming scheme for the standard is a little tough to get used to, and in an effort to make it easier to understand, the Wi-Fi Alliance has come up with some simpler names.

802.11a

The first “letter” following the June 1997 approval of the 802.11 standard, 802.11a provided for operation in the 5GHz frequency band, which makes it less prone to interference, with data rates up to 54Mbps. It introduced a more complex technique, known as OFDM (orthogonal frequency division multiplexing), for generating the wireless signal.

802.11b

Released in September 1999, it’s most likely that your first home router was 802.11b, which operates in the 2.4GHz frequency and provides a data rate up to 11 Mbps and has a range up to 150 feet. 802.11b components are cheap, but the standard has the slowest maximum speed of all the 802.11 standards.

802.11g

Approved in June 2003, 802.11g was the successor to 802.11b, able to achieve up to 54Mbps rates in the 2.4GHz band, matching 802.11a speed but within the lower frequency range. 802.11g standard uses the same OFDM technology introduced with 802.11a. 802.11g is backward compatible with 802.11b devices: an 802.11b device can connect to an 802.11g access point (but at 802.11b speeds).

802.11n (Wi-Fi 4)

The first standard to specify MIMO (Multiple Input Multiple Output), 802.11n was approved in October 2009 and allows for usage in two frequencies – 2.4GHz and 5GHz, with speeds up to 600Mbps. When you hear wireless LAN vendors use the term “dual-band”, it refers to being able to deliver data across these two frequencies.

802.11ac (Wi-Fi 5)

Current home wireless routers are likely 802.11ac-compliant, and operate in the 5 GHz frequency space. With Multiple Input, Multiple Output (MIMO) – multiple antennas on sending and receiving devices to reduce error and boost speed – this standard supports speeds ranging from 433 Mbps all the way up to 3.46Gbps. Some router vendors include technologies that support the 2.4GHz frequency via 802.11n, providing support for older client devices that may have 802.11b/g/n radios, but also providing additional bandwidth for improved data rates.
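A small lookup table summarizing the approved standards described above; the frequencies and maximum rates are those quoted in the text:

```python
# Approved 802.11 standards: operating bands (GHz) and maximum quoted data
# rates (Mbps), as described in this post.
WIFI_STANDARDS = {
    "802.11a":  {"ghz": [5],      "max_mbps": 54},
    "802.11b":  {"ghz": [2.4],    "max_mbps": 11},
    "802.11g":  {"ghz": [2.4],    "max_mbps": 54},
    "802.11n":  {"ghz": [2.4, 5], "max_mbps": 600},   # Wi-Fi 4
    "802.11ac": {"ghz": [5],      "max_mbps": 3460},  # Wi-Fi 5 (3.46 Gbps)
}

def dual_band(standard):
    """True for standards that operate in both 2.4GHz and 5GHz."""
    return len(WIFI_STANDARDS[standard]["ghz"]) == 2
```

The `dual_band` helper captures what vendors mean by "dual-band": only 802.11n in this table delivers data across both frequencies.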

Pending Wi-Fi standards

802.11aj

Also known as China Millimeter Wave, this defines modifications to the 802.11ad physical layer and MAC layer to enable operation in the China 59-64GHz frequency band. The goal is to maintain backward compatibility with 802.11ad (60GHz) when it operates in that 59-64GHz range and to operate in the China 45GHz band, while maintaining the 802.11 user experience. Final approval was expected in November 2017.

802.11ak

There are some products in the home-entertainment and industrial-control spaces that have 802.11 wireless capability and 802.3 Ethernet function. The goal of this standard is to help 802.11 media provide internal connections as transit links within 802.1q bridged networks, especially in the areas of data rates, standardized security and quality-of-service improvements. It reached draft status in November 2017.

802.11ax (Wi-Fi 6)

Known as High Efficiency WLAN, 802.11ax aims to improve the performance in WLAN deployments in dense scenarios, such as sports stadiums and airports, while still operating in the 2.4GHz and 5GHz spectrum. The group is targeting at least a 4X improvement in throughput compared to 802.11n and 802.11ac, through more efficient spectrum utilization. Approval is estimated to be in July 2019.

802.11ay

Also known as Next Generation 60GHz, the goal of this standard is to support a maximum throughput of at least 20Gbps within the 60GHz frequency (802.11ad currently achieves up to 7Gbps), as well as increase the range and reliability. The standard is expected to be approved between September and November 2019.

802.11az

Called Next Generation Positioning (NGP), a study group was formed in January 2015 to address the needs of a “Station to identify its absolute and relative position to another station or stations it’s either associated or unassociated with.” The goals of the group would be to define modifications to the MAC and PHY layers that enable “determination of absolute and relative position with better accuracy with respect to the Fine Timing Measurement (FTM) protocol executing on the same PHY-type, while reducing existing wireless medium use and power consumption, and is scalable to dense deployments.” The current estimate on approval of this standard is March 2021.

802.11ba

Otherwise known as “Wake-Up Radio” (WUR), this isn’t a crazy morning zoo-crew thing, but rather a new technology aimed at extending the battery life of devices and sensors within an Internet of Things network. The goal of the WUR is to “greatly reduce the need for frequent recharging and replacement of batteries while still maintaining optimum device performance.” This is currently expected to be approved in July 2020.

Wifi Standards

References
actiontec. (n.d.). The evolution of WiFi standards: a look at 802.11a/b/g/n/ac. Retrieved from actiontec.com: https://www.actiontec.com/wifihelp/evolution-wi-fi-standards-look-802-11abgnac/
Cisco. (n.d.). What Is Wi-Fi? Retrieved from www.cisco.com: https://www.cisco.com/c/en/us/products/wireless/what-is-wifi.html
Phillips, G. (n.d.). What Are Wi-Fi Standards? Retrieved from www.makeuseof.com: https://www.makeuseof.com/tag/understanding-common-wifi-standards-technology-explained/


Posted by & filed under CompTIA Network+, MICROSOFT MTA NETWORKING.

Taking a Network+ exam? Here are some network types you should know.

What is a LAN? A local area network (LAN) is a computer network within a small geographical area such as a home, school, computer laboratory, office building or group of buildings. A LAN is composed of inter-connected workstations and personal computers which are each capable of accessing and sharing data and devices, such as printers, scanners and data storage devices, anywhere on the LAN. LANs are characterized by higher communication and data transfer rates and the lack of any need for leased communication lines.

What is a WLAN? A wireless local area network (WLAN) is a wireless distribution method for two or more devices that use high-frequency radio waves and often include an access point to the Internet. A WLAN allows users to move around the coverage area, often a home or small office, while maintaining a network connection. A WLAN is sometimes called a local area wireless network (LAWN).

What is a MAN? A metropolitan area network (MAN) is similar to a local area network (LAN) but spans an entire city or campus. MANs are formed by connecting multiple LANs. Thus, MANs are larger than LANs but smaller than wide area networks (WANs). MANs are extremely efficient and provide fast communication via high-speed carriers, such as fiber optic cables.

What is a WAN? A wide area network (WAN) is a network that exists over a large-scale geographical area. A WAN connects different smaller networks, including local area networks (LANs) and metro area networks (MANs). This ensures that computers and users in one location can communicate with computers and users in other locations. WAN implementation can be done either with the help of the public transmission system or a private network.

What is a CAN? A campus area network (CAN) is a network of multiple interconnected local area networks (LANs) in a limited geographical area. A CAN is smaller than a wide area network (WAN) or metropolitan area network (MAN). A CAN is also known as a corporate area network (CAN).

What is a SAN? A storage area network (SAN) is a secure high-speed data transfer network that provides access to consolidated block-level storage. A SAN makes a network of storage devices accessible to multiple servers. SAN devices appear to servers as attached drives, eliminating traditional network bottlenecks. SANs are sometimes also referred to (albeit redundantly) as SAN storage, SAN network, network SAN, etc.

What is a PAN? A personal area network (PAN) refers to the interconnection of information technology devices or gadgets within the environment of an individual user (typically within 10 meters or 33 feet). These interconnected devices might include laptop computers, PDAs, cellphones, printers, PCs or other wearable computer devices.

References

techopedia.com. (n.d.). Retrieved from techopedia.com: https://www.techopedia.com/


Posted by & filed under CompTIA Network+, MICROSOFT MTA NETWORKING.

 

When you buy a brand-new switch, all we need to do is connect the PCs to the switch ports and the PCs can talk to each other; but I can take a switch, such as a Cisco switch, and create a few VLANs, such as:

(Here is general Idea of VLAN)

VLAN 10 = Publishing
VLAN 20= Accounting
VLAN 30= security

 

Vlan Name    Status Ports
1    default active Fa0/1, Fa0/2, Fa0/3, Fa0/4, Fa0/5, Fa0/6, Fa0/7, Fa0/8,
                    Fa0/9, Fa0/10, Fa0/11, Fa0/12,
                    Fa0/13, Fa0/14, Fa0/15, Fa0/16,
                    Fa0/17, Fa0/18, Fa0/19, Fa0/20,
                    Fa0/21, Fa0/22, Fa0/23, Fa0/24,
                    Gig0/1, Gig0/2

As we see, all the ports are located in the default VLAN 1; now I will create two VLANs and assign ports as follows:

VLAN 2 EGR    Fa0/1, Fa0/2

VLAN 3 Sales  Fa0/4, Fa0/5

Here is the result after I have done above

Switch#show vlan brief

Vlan Name    Status Ports
1    default active Fa0/3, Fa0/6, Fa0/7, Fa0/8,
                    Fa0/9, Fa0/10, Fa0/11, Fa0/12,
                    Fa0/13, Fa0/14, Fa0/15, Fa0/16,
                    Fa0/17, Fa0/18, Fa0/19, Fa0/20,
                    Fa0/21, Fa0/22, Fa0/23, Fa0/24
2    EGR     active Fa0/1, Fa0/2
3    sales   active Fa0/4, Fa0/5

As we see, now PC1 can talk to PC2 and PC4 can talk to PC5, but PC1 cannot ping PC4 or PC5, since I need a router between the VLANs.
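The reachability rule this lab demonstrates can be sketched as a tiny Python model, with port and VLAN assignments taken from the `show vlan brief` output above:

```python
# Toy model: PCs in the same VLAN can talk directly;
# crossing VLANs requires a router.

PORT_VLAN = {                      # switch config after assigning VLANs
    "Fa0/1": 2, "Fa0/2": 2,        # VLAN 2 (EGR)
    "Fa0/4": 3, "Fa0/5": 3,        # VLAN 3 (Sales)
}
PC_PORT = {"PC1": "Fa0/1", "PC2": "Fa0/2", "PC4": "Fa0/4", "PC5": "Fa0/5"}

def can_ping(a, b, router_between_vlans=False):
    same_vlan = PORT_VLAN[PC_PORT[a]] == PORT_VLAN[PC_PORT[b]]
    return same_vlan or router_between_vlans
```

So `can_ping("PC1", "PC2")` succeeds, `can_ping("PC1", "PC4")` fails until a router sits between the VLANs, which is exactly what the rest of the lab sets up.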

Here is the diagram after I put the router between them. Make sure to give IP address:

10.10.10.100 to f0/0

20.20.20.100 to f0/1, and make sure to turn the ports on (no shutdown), since by default all router ports are in shutdown mode

Now let's see if PC1 can ping PC4.

As we see, PC1 cannot ping PC4, so let's troubleshoot: go to PC1 and see if you can ping the

DG = 10.10.10.100

 

PC>ipconfig

FastEthernet0 Connection:(default port)

 

Link-local IPv6 Address………: FE80::230:F2FF:FEA7:AB9D

IP Address………………….: 10.10.10.1

Subnet Mask…………………: 255.255.255.0

Default Gateway……………..: 10.10.10.100

 

PC>ping 10.10.10.100

Pinging 10.10.10.100 with 32 bytes of data:

 

Request timed out.

Request timed out.

Request timed out.

Request timed out.

 

Ping statistics for 10.10.10.100:

Packets: Sent = 4, Received = 0, Lost = 4 (100% loss),

 

As we see we cannot ping the DG

 

Why ?

 

Think of OSI Layer model .

 

Remember IP address is located on layer 3

  1. Physical Layer
  2. Data Link Layer ( look at your Switch)
  3. Network Layer

Now when I look at Switch and I type

Switch#show vlan brief

 

Vlan Name    Status Ports
1    default active Fa0/7, Fa0/8,
                    Fa0/9, Fa0/10, Fa0/11, Fa0/12,
                    Fa0/13, Fa0/14, Fa0/15, Fa0/16,
                    Fa0/17, Fa0/18, Fa0/19, Fa0/20,
                    Fa0/21, Fa0/22, Fa0/23, Fa0/24
2    EGR     active Fa0/1, Fa0/2, Fa0/3
3    sales   active Fa0/4, Fa0/5, Fa0/6

 

As we see, ports Fa0/3 and Fa0/6 (the ports connected to the router) have been added to VLAN 2 and VLAN 3 respectively, so each default gateway now sits in the same VLAN as its PCs. Now I should be able to ping from PC1 = 10.10.10.1 to PC4 = 20.20.20.4

 

PC>ping 20.20.20.4

Pinging 20.20.20.4 with 32 bytes of data:

 

Reply from 20.20.20.4: bytes=32 time=1ms TTL=127

Reply from 20.20.20.4: bytes=32 time=1ms TTL=127

Reply from 20.20.20.4: bytes=32 time=1ms TTL=127

Reply from 20.20.20.4: bytes=32 time=0ms TTL=127

 

Ping statistics for 20.20.20.4:

Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),

Approximate round trip times in milli-seconds:

Minimum = 0ms, Maximum = 1ms, Average = 0ms

 

Now let's say that instead of two VLANs we have three or four; in that case I cannot put so many routers between each VLAN, so Cisco came up with the idea of the multilayer switch (Cisco 3550, 3560, 3750 and above), which can act as both a layer 2 device (switch) and a layer 3 device (router). In that case we do not need a router: if I buy a Cisco 3550 or above, I can route packets from one VLAN to another (CCNA/CCNP concept).

 

 

Here is how it looks on a Cisco switch when you buy a brand-new switch (without any configuration):


 

 

Posted by & filed under CompTIA Network+, MICROSOFT MTA NETWORKING.

 

 

(Domain Name System) The Internet’s system for converting alphabetic names into numeric IP addresses. For example, when a Web address (URL) is typed into a browser, DNS servers return the IP address of the Web server associated with that name. In this made-up example, the DNS converts the URL www.company.com into the IP address 204.0.8.51. Without DNS, you would have to type the series of four numbers and dots into your browser to retrieve the website, which you actually can do.

A Hierarchy of Servers
The DNS system is a hierarchy of duplicated database servers worldwide that begins with the “root servers” for the top-level domains (.com, .net, .org, etc.). The root servers point to the “authoritative” servers, located in ISPs as well as in large companies, that turn names into IP addresses, a process known as “name resolution.” Using our www.company.com example, COMPANY.COM is the domain name, and WWW is the hostname. The domain name is the organization’s identity on the Web, and the hostname is the name of the Web server within that domain.

Getting a Web Page
Converting the domain name (URL) in a Web browser into an IP address takes numerous queries. Along the way, there can be more name servers than are shown here.

Caching Speeds Up Delivery
If the ABC.COM Web address was previously requested by one of the ISP’s customers, its IP address is cached (stored) in the DNS server, which returns the IP address immediately. A user’s computer also typically caches IP addresses, which can eliminate the DNS query altogether if the address is already in the cache.
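The caching behavior described above is easy to model: keep a dictionary of name-to-address answers and fall back to a real lookup only on a miss. A toy sketch (the `lookup` function here is a stand-in for an actual DNS query, and the address is the made-up one from the example):

```python
class CachingResolver:
    """Answer repeated queries from a local cache, like an ISP's DNS server."""

    def __init__(self, lookup):
        self.lookup = lookup      # function performing the real DNS query
        self.cache = {}           # hostname -> IP address
        self.queries = 0          # count of real (slow) lookups performed

    def resolve(self, hostname):
        if hostname not in self.cache:       # cache miss: do the slow query
            self.queries += 1
            self.cache[hostname] = self.lookup(hostname)
        return self.cache[hostname]          # cache hit: answer immediately

# Fake upstream lookup standing in for the root/authoritative server chain.
resolver = CachingResolver(lambda name: "204.0.8.51")
resolver.resolve("www.company.com")
resolver.resolve("www.company.com")   # second request is served from cache
print(resolver.queries)  # 1 real lookup despite 2 requests
```

A real cache would also expire entries according to each record's TTL rather than keeping them forever.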

 

Source by PCMag

 


 

 

Posted by & filed under CompTIA Network+, MICROSOFT MTA NETWORKING.

 

From Wikipedia

 

A. Cable Tester- A cable tester is an electronic device used to verify the electrical connections in a cable or other wired assembly. Generally, a cable tester consists of:

 

1. A source of electric current
2. A volt meter
3. A switching matrix used to connect the current source and the volt meter to all of the contact points in a cable.

 

There are two common ways to test a connection:

 

1. A continuity test. Current is passed down the connection. If there is current, the connection is assumed to be good. This type of test can be done with a series combination of a battery (to provide the current) and a light bulb (that lights when there is a current).

 

2. A resistance test. A known current is passed down the connection and the voltage that develops is measured. From the voltage and current, the resistance of the connection can be calculated and compared to the expected value.
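The arithmetic behind the resistance test is just Ohm's law, R = V / I. A small sketch of that comparison (the voltage, current, and tolerance figures are made up for illustration):

```python
def measured_resistance(voltage_v, current_a):
    """Ohm's law: R = V / I, in ohms."""
    return voltage_v / current_a

def connection_ok(voltage_v, current_a, expected_ohms, tolerance=0.1):
    """Pass if measured resistance is within +/-10% of the expected value."""
    r = measured_resistance(voltage_v, current_a)
    return abs(r - expected_ohms) <= tolerance * expected_ohms

# Example: 0.05 V develops across the conductor at a known 10 mA -> 5 ohms
print(measured_resistance(0.05, 0.010))                 # 5.0
print(connection_ok(0.05, 0.010, expected_ohms=5.0))    # True
```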

 

There are two common ways to test for a short:

 

1. A low voltage test. A low power, low voltage source is connected between two conductors that should not be connected, and the amount of current is measured. If there is no current, the conductors are assumed to be well isolated.

 

2. A high voltage test. Again a voltage source is connected but this time the voltage is of several hundred volts. The increased voltage will make the test more likely to find connections that are nearly shorted since the higher voltage will cause the insulation of nearly shorted wires to break down.
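Both short tests reduce to the same decision: apply a voltage across two conductors that should be isolated and check that almost no current flows. A sketch with a made-up leakage threshold:

```python
def is_isolated(measured_current_a, threshold_a=1e-6):
    """Conductors are considered well isolated if the leakage current stays
    below the threshold (1 microamp here, an arbitrary illustrative figure)."""
    return measured_current_a < threshold_a

print(is_isolated(0.0))      # no measurable current: isolated -> True
print(is_isolated(0.002))    # 2 mA of leakage: a short -> False
```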

 


 

 

Posted by & filed under Amazon AWS.

 

 

 

The NAT Instance

  • Install the NAT instance from a Community AMI
  • When creating a NAT instance, make sure to disable the source/destination check
  • The NAT instance must be in a public subnet
  • There must be a route out of the private subnet pointing to the NAT instance
  • You need a security group for the NAT instance
  • The amount of traffic a NAT instance supports depends on its instance size

NAT Gateways

  • Newer than NAT instances
  • Preferred by enterprises
  • Scales automatically up to 10 Gbps
  • No need to patch (Amazon takes care of it)
  • No need for a security group
  • Automatically assigned a public IP address
  • Remember to update your route table
  • No need to disable source/destination checks
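"Remember to update your route table" means adding a default route (0.0.0.0/0) to the private subnet's route table that targets the NAT gateway. A sketch of that bookkeeping with plain Python dictionaries; the resource IDs are made up, and in practice you would make this change in the console or with the AWS CLI/SDK:

```python
def add_default_route(route_table, nat_gateway_id):
    """Point all non-local traffic (0.0.0.0/0) at the NAT gateway."""
    route_table["routes"]["0.0.0.0/0"] = nat_gateway_id
    return route_table

# Private subnet's route table: initially only the local VPC route.
private_rt = {"id": "rtb-example", "routes": {"10.0.0.0/16": "local"}}
add_default_route(private_rt, "nat-0123456789abcdef0")

print(private_rt["routes"]["0.0.0.0/0"])  # nat-0123456789abcdef0
```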

 

 

 

 

Want more information on how to become Amazon AWS Certified? Learn more!

 

 

Posted by & filed under Amazon AWS.

 

From Wikipedia

 

 

Type (Description): Function

A (Address Record): Returns a 32-bit IPv4 address; most commonly used to map hostnames to an IP address of the host, but it is also used for DNSBLs, storing subnet masks in RFC 1101, etc.
CNAME (Canonical Name Record): Alias of one name to another; the DNS lookup will continue by retrying the lookup with the new name.
MX (Mail Exchange Record): Maps a domain name to a list of message transfer agents for that domain.
AAAA (IPv6 Address Record): Returns a 128-bit IPv6 address, most commonly used to map hostnames to an IP address of the host.
TXT (Text Record): Originally for arbitrary human-readable text in a DNS record. Since the early 1990s, however, this record more often carries machine-readable data, such as specified by RFC 1464, opportunistic encryption, Sender Policy Framework, DKIM, DMARC, DNS-SD, etc.
PTR (Pointer Record): Pointer to a canonical name. Unlike a CNAME, DNS processing stops and just the name is returned. The most common use is for implementing reverse DNS lookups, but other uses include such things as DNS-SD.
SRV (Service Locator): Generalized service location record, used for newer protocols instead of creating protocol-specific records such as MX.
SPF (Sender Policy Framework): SPF (type 99, from RFC 4408) was specified as part of the Sender Policy Framework protocol as an alternative to storing SPF data in TXT records, using the same format. It was later found that the majority of SPF deployments lacked proper support for this record type, and support for it was discontinued in RFC 7208.
NS (Name Server Record): Delegates a DNS zone to use the given authoritative name servers.
SOA (Start of [a zone of] Authority Record): Specifies authoritative information about a DNS zone, including the primary name server, the email of the domain administrator, the domain serial number, and several timers relating to refreshing the zone.

 

 


 

 

Posted by & filed under Amazon AWS.

Source By: <docs.aws.amazon.com>

Getting Started with Elastic Load Balancing

This tutorial provides a hands-on introduction to Application Load Balancers through the AWS Management Console, a web-based interface. To create your first Application Load Balancer, complete the following steps.

 

 

Tasks

  • Before You Begin
  • Step 1: Select a Load Balancer Type
  • Step 2: Configure Your Load Balancer and Listener
  • Step 3: Configure a Security Group for Your Load Balancer
  • Step 4: Configure Your Target Group
  • Step 5: Register Targets with Your Target Group
  • Step 6: Create and Test Your Load Balancer
  • Step 7: Delete Your Load Balancer (Optional)

 

Before You Begin

  • Decide which two Availability Zones you will use for your EC2 instances. Verify that your virtual private cloud (VPC) has at least one public subnet in each of these Availability Zones.
  • Launch at least one EC2 instance in each Availability Zone. For this exercise, you can simply use the public subnets. Be sure to install a web server, such as Apache or Internet Information Services (IIS), on each EC2 instance.
  • Ensure that the security group for your EC2 instances allows HTTP access on port 80. To test the web server, copy the DNS name of the instance into the address field of an Internet-connected web browser and verify that the browser displays the default page of the web server.

 

Step 1: Select a Load Balancer Type

Elastic Load Balancing supports two types of load balancers: Application Load Balancers and Classic Load Balancers. For this tutorial, you create an Application Load Balancer.

To create an Application Load Balancer

  1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
  2. On the navigation bar, choose a region for your load balancer. Be sure to select the same region that you used for your EC2 instances.
  3. On the navigation pane, under LOAD BALANCING, choose Load Balancers.
  4. Choose Create Load Balancer.
  5. Choose Application Load Balancer, and then choose Continue.

 

Step 2: Configure Your Load Balancer and Listener

On the Configure Load Balancer page, complete the following procedure.

To configure your load balancer and listener

  1. For Name, type a name for your load balancer. The name of your Application Load Balancer must be unique within your set of Application Load Balancers for the region, can have a maximum of 32 characters, can contain only alphanumeric characters and hyphens, and must not begin or end with a hyphen.
  2. For Scheme, keep the default value, internet-facing.
  3. For IP address type, select ipv4 if your instances support IPv4 addresses or dualstack if they support both IPv4 and IPv6 addresses.
  4. For Listeners, keep the default, which is a listener that accepts HTTP traffic on port 80.
  5. For Availability Zones, select the VPC that you used for your EC2 instances. For each of the two Availability Zones that contain your EC2 instances, select the Availability Zone and then select the public subnet for that Availability Zone.
  6. Choose Next: Configure Security Settings.
  7. For this tutorial, you are not using a secure listener. Choose Next: Configure Security Groups.
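The naming rules in step 1 (at most 32 characters, only alphanumerics and hyphens, no leading or trailing hyphen) are easy to check before the console rejects a name. An illustrative validator, not an official AWS API:

```python
import re

# Up to 32 chars, alphanumerics/hyphens only, no hyphen at either end.
NAME_RE = re.compile(r"^(?!-)[A-Za-z0-9-]{1,32}(?<!-)$")

def valid_alb_name(name):
    """Return True if `name` satisfies the ALB naming constraints."""
    return bool(NAME_RE.match(name))

print(valid_alb_name("my-load-balancer"))     # True
print(valid_alb_name("-starts-with-hyphen"))  # False: leading hyphen
print(valid_alb_name("x" * 33))               # False: too long
```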

 

Step 3: Configure a Security Group for Your Load Balancer

The security group for your load balancer must allow it to communicate with registered targets on both the listener port and the health check port. The console can create security groups for your load balancer on your behalf, with rules that specify the correct protocols and ports.

Note

If you prefer, you can create and select your own security group instead.

On the Configure Security Groups page, complete the following procedure to have Elastic Load Balancing create a security group for your load balancer on your behalf.

To configure a security group for your load balancer

  1. Choose Create a new security group.
  2. Type a name and description for the security group, or keep the default name and description. This new security group contains a rule that allows traffic to the load balancer listener port that you selected on the Configure Load Balancer page.
  3. Choose Next: Configure Routing.

 

Step 4: Configure Your Target Group

Create a target group, which is used in request routing. The default rule for your listener routes requests to the registered targets in this target group. The load balancer checks the health of targets in this target group using the health check settings defined for the target group. On the Configure Routing page, complete the following procedure.

To configure your target group

  1. For Target group, keep the default, New target group.
  2. For Name, type a name for the new target group.
  3. Keep Protocol as HTTP and Port as 80.
  4. For Health checks, keep the default protocol and ping path.
  5. Choose Next: Register Targets.

 

Step 5: Register Targets with Your Target Group

On the Register Targets page, complete the following procedure.

To register targets with the target group

  1. For Instances, select one or more instances.
  2. Keep the default port, 80, and choose Add to registered.
  3. If you need to remove an instance that you selected, for Registered instances, select the instance and then choose Remove.
  4. When you have finished selecting instances, choose Next: Review.

 

Step 6: Create and Test Your Load Balancer

Before creating the load balancer, review the settings that you selected. After creating the load balancer, verify that it’s sending traffic to your EC2 instances.

To create and test your load balancer

  1. On the Review page, choose Create.
  2. After you are notified that your load balancer was created successfully, choose Close.
  3. On the navigation pane, under LOAD BALANCING, choose Target Groups.
  4. Select the newly created target group.
  5. On the Targets tab, verify that your instances are ready. If the status of an instance is initial, it’s probably because the instance is still in the process of being registered, or it has not passed the minimum number of health checks to be considered healthy. After the status of at least one instance is healthy, you can test your load balancer.
  6. On the navigation pane, under LOAD BALANCING, choose Load Balancers.
  7. On the Description tab, copy the DNS name of the load balancer (for example, my-load-balancer-1234567890.us-west-2.elb.amazonaws.com). Paste the DNS name into the address field of an Internet-connected web browser. If everything is working, the browser displays the default page of your server.

 

Step 7: Delete Your Load Balancer (Optional)

As soon as your load balancer becomes available, you are billed for each hour or partial hour that you keep it running. When you no longer need a load balancer, you can delete it. As soon as the load balancer is deleted, you stop incurring charges for it. Note that deleting a load balancer does not affect the targets registered with the load balancer. For example, your EC2 instances continue to run.

To delete your load balancer

  1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
  2. On the navigation pane, under LOAD BALANCING, choose Load Balancers.
  3. Select the load balancer, and then choose Actions, Delete.
  4. When prompted for confirmation, choose Yes, Delete.

 

To become AWS Certified, please check out this link.

Posted by & filed under Amazon AWS.

Source By: <docs.aws.amazon.com>

 

What Is Elastic Load Balancing?

Elastic Load Balancing distributes incoming application traffic across multiple EC2 instances, in multiple Availability Zones. This increases the fault tolerance of your applications.

The load balancer serves as a single point of contact for clients, which increases the availability of your application. You can add and remove instances from your load balancer as your needs change, without disrupting the overall flow of requests to your application. Elastic Load Balancing scales your load balancer as traffic to your application changes over time, and can scale to the vast majority of workloads automatically.

You can configure health checks, which are used to monitor the health of the registered instances so that the load balancer can send requests only to the healthy instances. You can also offload the work of encryption and decryption to your load balancer so that your instances can focus on their main work.

Features of Elastic Load Balancing

Elastic Load Balancing supports two types of load balancers: Application Load Balancers and Classic Load Balancers. Choose the load balancer type that meets your needs.

Feature | Classic Load Balancer | Application Load Balancer
Protocols | HTTP, HTTPS, TCP, SSL | HTTP, HTTPS
Platforms | EC2-Classic, EC2-VPC | EC2-VPC
Sticky sessions (cookies) | ✔ | load balancer generated
Back-end server authentication | ✔ |
Back-end server encryption | ✔ | ✔
Idle connection timeout | ✔ | ✔
Connection draining | ✔ | ✔
Cross-zone load balancing | ✔ † | Always enabled
Health checks | ✔ | ✔ Improved † †
CloudWatch metrics | ✔ | ✔ Improved
Access logs | ✔ | ✔ Improved
Path-based routing | | ✔
Route to multiple ports on a single instance | | ✔
HTTP/2 support | | ✔
Websockets support | | ✔
Load balancer deletion protection | | ✔

† Cross-zone load balancing is always enabled for an Application Load Balancer. For a Classic Load Balancer, it is disabled by default, but can be enabled and disabled as needed.

† † For an Application Load Balancer, you can specify the HTTP codes that indicate a successful health check response. An Application Load Balancer returns improved information about the cause of health check failures.

Accessing Elastic Load Balancing

You can create, access, and manage your load balancers using any of the following interfaces:

  • AWS Management Console: provides a web interface that you can use to access Elastic Load Balancing.
  • AWS Command Line Interface (AWS CLI): provides commands for a broad set of AWS services, including Elastic Load Balancing, and is supported on Windows, Mac, and Linux.
  • AWS SDKs: provide language-specific APIs and take care of many of the connection details, such as calculating signatures, handling request retries, and error handling.
  • Query API: provides low-level API actions that you call using HTTPS requests. Using the Query API is the most direct way to access Elastic Load Balancing, but it requires that your application handle low-level details such as generating the hash to sign the request and error handling.

 

Elastic Load Balancing works with the following services to improve the availability and scalability of your applications.

  • Amazon EC2 — Virtual servers that run your applications in the cloud. You can configure your load balancer to route traffic to your EC2 instances.
  • Amazon ECS — Enables you to run, stop, and manage Docker containers on a cluster of EC2 instances. You can configure your load balancer to route traffic to your containers.
  • Auto Scaling — Ensures that you are running your desired number of instances, even if an instance fails, and enables you to automatically increase or decrease the number of instances as the demand on your instances changes. If you enable Auto Scaling with Elastic Load Balancing, instances that are launched by Auto Scaling are automatically registered with the load balancer, and instances that are terminated by Auto Scaling are automatically de-registered from the load balancer.
  • Amazon CloudWatch — Enables you to monitor your load balancer and take action as needed.
  • Amazon Route 53 — Provides a reliable and cost-effective way to route visitors to websites by translating domain names (such as www.example.com) into the numeric IP addresses (such as 192.0.2.1) that computers use to connect to each other. AWS assigns URLs to your resources, such as load balancers. However, you might want a URL that is easy for users to remember. For example, you can map your domain name to a load balancer.

 


Posted by & filed under Amazon AWS.

Amazon S3

Amazon Simple Storage Service (Amazon S3) provides developers and IT teams with secure, durable, highly scalable cloud storage. Amazon S3 is easy-to-use object storage, with a simple web service interface to store and retrieve any amount of data from anywhere on the web. With Amazon S3, you pay only for the storage you actually use. There is no minimum fee and no setup cost.

Amazon S3 offers a range of storage classes designed for different use cases including Amazon S3 Standard for general-purpose storage of frequently accessed data, Amazon S3 Standard – Infrequent Access (Standard – IA) for long-lived, but less frequently accessed data, and Amazon Glacier for long-term archive. Amazon S3 also offers configurable lifecycle policies for managing your data throughout its lifecycle. Once a policy is set, your data will automatically migrate to the most appropriate storage class without any changes to your applications.
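A lifecycle policy like the one described above is just a set of rules attached to a bucket. The dictionary below mirrors the general shape S3 expects when you set a lifecycle configuration through the SDKs; the rule ID and the 30-day and 90-day cutoffs are illustrative choices, not values from the text:

```python
# Transition objects to Standard-IA after 30 days and to Glacier after 90,
# matching the tiering described above. All concrete values are illustrative.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "archive-old-objects",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},   # empty prefix: apply to every object
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
        }
    ]
}

# Sanity check: transitions should move data to colder storage over time.
days = [t["Days"] for t in lifecycle_configuration["Rules"][0]["Transitions"]]
assert days == sorted(days)
print(days)  # [30, 90]
```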

Amazon S3 can be used alone or together with other AWS services such as Amazon Elastic Compute Cloud (Amazon EC2) and AWS Identity and Access Management (IAM), as well as data migration services and gateways for initial or ongoing data ingestion. Amazon S3 provides cost-effective object storage for a wide variety of use cases including backup and recovery, nearline archive, big data analytics, disaster recovery, cloud applications, and content distribution.

Source By: docs.aws.amazon.com