CompTIA presents Cyberstates, the definitive guide to national, state, and metropolitan-area tech sector and tech workforce analytics. Cyberstates aggregates mountains of data and transforms it into easy-to-understand visuals and actionable insights.
AWS WAF is a web application firewall service that helps protect your web applications from common exploits that could affect application availability, compromise security, or consume excessive resources.
Here we have two instances, LinuxWebserver1 and LinuxWebserver3, behind an Application Load Balancer; the goal is that my IP address (my laptop's IP address) will not be able to access this website.
Step 1) First I show you what I have: the two running instances
Step 2) When I go to Load Balancers on the left side and copy and paste the long DNS name, I can see the content of both websites (after a refresh), which shows the ALB is working correctly
Step 3) Or, since I have an "A" record in Route 53, I can use the Route 53 domain name, so life is good
Step 4) Now the goal is to block hamed2019.com so that my home PC's IP address will not be able to see it
Step 5) Go to WAF and click Configure web ACL; the Concepts overview shows what we can do. Click Next, then call the ACL WebDenyACL
Step 6) Pick the N. Virginia region, then pick the resource, which is the Application Load Balancer, and click Next
Step 7) Now we will create a condition; as we see in the video, we can pick any of these condition types:
Cross-site scripting match conditions
Geo match conditions
IP match conditions
Size constraint conditions
SQL injection match conditions
String and regex match conditions
Step 8) Click create IP match condition
Step 9) Give it the name "MyHomePC", then go to Google and search "what is my IP address"; you will get your IP address, for example:
100.15.97.150
Step 10) Enter the above address as 100.15.97.150/32 (/24 would match a whole range of IP addresses; /32 matches only that particular IP address), then make sure you click Add IP address
Step 11) Now we click Next, and we want to create a rule; on the next page click Create rule and call it "HomePCRule"
Step 12) Then go to the "Add condition" section and set it as follows:
When a request does originate from an IP address in
"MyHomePC" (you will see IP address = 100.15.97.150/32)
Then click create.
Step 13) On the next page leave the rule action as Block; then for the default action, pick the first option:
"Allow all requests that don't match any rules"
Step 14) Click Review and create, which takes you to the next page; read it and click Confirm and create
Step 15) Now go to the left side and click Web ACLs; click "WebDenyACL", then go to the tab called Rules; you should see all the information.
Step 16) Now try to access the website, either by copying and pasting the long DNS name from the load balancer or by using your domain name. You should see a 403 Forbidden error.
Step 17) Hint: when you click "Web ACL" on the left, you might see a few names listed; click one of them, then go to the Rules tab, and at the bottom you will see Add association. Then you will see this:
You can associate only one web ACL with a specific resource. If a different web ACL is already associated with the selected resource, the association will be updated with this web ACL.
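What the console did for us in Steps 8-10 can also be scripted. Here is a minimal boto3 sketch using the classic WAF Regional API (the API behind this console workflow); the IP set name and address come from the steps above, and rule/web ACL creation follows the same change-token pattern:

import boto3

waf = boto3.client("waf-regional", region_name="us-east-1")

# Every classic WAF write call needs a fresh change token.
token = waf.get_change_token()["ChangeToken"]
ip_set = waf.create_ip_set(Name="MyHomePC", ChangeToken=token)["IPSet"]

# Add the single /32 home address to the IP set.
token = waf.get_change_token()["ChangeToken"]
waf.update_ip_set(
    IPSetId=ip_set["IPSetId"],
    ChangeToken=token,
    Updates=[{
        "Action": "INSERT",
        "IPSetDescriptor": {"Type": "IPV4", "Value": "100.15.97.150/32"},
    }],
)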
6) A web application allows customers to upload orders to an S3 bucket. The resulting Amazon S3 events trigger a Lambda function that inserts a message into an SQS queue. A single EC2 instance reads messages from the queue, processes them, and stores them in a DynamoDB table partitioned by unique order ID. Next month traffic is expected to increase by a factor of 10, and a Solutions Architect is reviewing the architecture for possible scaling problems. Which component is MOST likely to need re-architecting to be able to scale to accommodate the new traffic?
A. Lambda function B. SQS queue C. EC2 instance D. DynamoDB table
7) An application saves the logs to an S3 bucket. A user wants to keep the logs for one month for troubleshooting purposes, and then purge them. What feature will enable this?
A. Adding a bucket policy on the S3 bucket. B. Configuring lifecycle configuration rules on the S3 bucket. C. Creating an IAM policy for the S3 bucket. D. Enabling CORS on the S3 bucket.
8) An application running on EC2 instances processes sensitive information stored on Amazon S3. The information is accessed over the Internet. The security team is concerned that the Internet connectivity to Amazon S3 is a security risk. Which solution will resolve the security concern?
A. Access the data through an Internet Gateway. B. Access the data through a VPN connection. C. Access the data through a NAT Gateway. D. Access the data through a VPC endpoint for Amazon S3.
9) An organization is building an Amazon Redshift cluster in their shared services VPC. The cluster will host sensitive data. How can the organization control which networks can access the cluster?
A. Run the cluster in a different VPC and connect through VPC peering. B. Create a database user inside the Amazon Redshift cluster only for users on the network. C. Define a cluster security group for the cluster that allows access from the allowed networks. D. Only allow access to networks that connect with the shared services network via VPN.
10) A Solutions Architect is designing an online shopping application running in a VPC on EC2 instances behind an ELB Application Load Balancer. The instances run in an Auto Scaling group across multiple Availability Zones. The application tier must read and write data to a customer-managed database cluster. There should be no access to the database from the Internet, but the cluster must be able to obtain software patches from the Internet. Which VPC design meets these requirements?
A. Public subnets for both the application tier and the database cluster. B. Public subnets for the application tier, and private subnets for the database cluster. C. Public subnets for the application tier and NAT Gateway, and private subnets for the database cluster. D. Public subnets for the application tier, and private subnets for the database cluster and NAT Gateway.
Answers
6) C – A single EC2 instance will not scale and is a single point of failure in the architecture. A much better solution would be to have EC2 instances in an Auto Scaling group across two Availability Zones read messages from the queue. The other responses are all managed services that can be configured to scale or will scale automatically.
7) B – Lifecycle configuration allows lifecycle management of objects in a bucket. The configuration is a set of one or more rules, where each rule defines an action for Amazon S3 to apply to a group of objects. Bucket policies and IAM define access to objects in an S3 bucket. CORS enables clients in one domain to interact with resources in a different domain.
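As a concrete illustration of answer 7, here is a minimal boto3 sketch of such a rule; the bucket name and the logs/ prefix are hypothetical placeholders:

import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="my-log-bucket",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [{
            "ID": "purge-logs-after-30-days",
            "Filter": {"Prefix": "logs/"},
            "Status": "Enabled",
            "Expiration": {"Days": 30},  # delete objects one month after creation
        }]
    },
)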
8) D – VPC endpoints for Amazon S3 provide secure connections to S3 buckets that do not require a gateway or NAT instances. NAT Gateways and Internet Gateways still route traffic over the Internet to the public endpoint for Amazon S3. There is no way to connect to Amazon S3 via VPN.
9) C – A security group can grant access to traffic from the allowed networks via the CIDR range for each network. VPC peering and VPN are connectivity services and cannot control traffic for security. Amazon Redshift user accounts address authentication and authorization at the user level and have no control over network traffic.
10) C – The online application must be in public subnets to allow access from clients' browsers, and the NAT Gateway must also be in a public subnet so the private database cluster can reach the Internet for patches. The database cluster must be in private subnets to meet the requirement that there be no access from the Internet.
1) A company is storing an access key (access key ID and secret access key) in a text file on a custom AMI. The company uses the access key to access DynamoDB tables from instances created from the AMI. The security team has mandated a more secure solution. Which solution will meet the security team's mandate?
A. Put the access key in an S3 bucket, and retrieve the access key on boot from the instance. B. Pass the access key to the instances through instance user data. C. Obtain the access key from a key server launched in a private subnet. D. Create an IAM role with permissions to access the table, and launch all instances with the new role.
2) A company is developing a highly available web application using stateless web servers. Which services are suitable for storing session state data? (Select TWO.)
3) Company salespeople upload their sales figures daily.
A Solutions Architect needs a durable storage solution for these documents that also protects against users accidentally deleting important documents. Which action will protect against unintended user actions?
A. Store data in an EBS volume and create snapshots once a week. B. Store data in an S3 bucket and enable versioning. C. Store data in two S3 buckets in different AWS Regions. D. Store data on EC2 instance storage.
4) An application requires a highly available relational database with an initial storage capacity of 8 TB. The database will grow by 8 GB every day. To support expected traffic, at least eight read replicas will be required to handle database reads. Which option will meet these requirements?
5) A Solutions Architect is designing a critical business application with a relational database that runs on an EC2 instance. It requires a single EBS volume that can support up to 16,000 IOPS.
Which Amazon EBS volume type can meet the performance requirements of this application?
1) D – IAM roles for EC2 instances allow applications running on the instance to access AWS resources without having to create and store any access keys. Any solution involving the creation of an access key introduces the complexity of managing that secret.
2) B, D – Both DynamoDB and ElastiCache provide high-performance storage of key-value pairs. CloudWatch and ELB are not storage services. Storage Gateway is a storage service, but it is a hybrid storage service that enables on-premises applications to use cloud storage.
3) B – If a versioned object is deleted, it can still be recovered by retrieving a previous version. Response A would lose any changes committed since the previous snapshot. Storing the data in two S3 buckets would provide slightly more protection, but a user could still delete the object from both buckets. EC2 instance storage is ephemeral and should never be used for data requiring durability.
4) C – Amazon Aurora is a relational database that will automatically scale to accommodate data growth. Amazon Redshift does not support read replicas and will not automatically scale. DynamoDB is a NoSQL service, not a relational database. Amazon S3 is object storage, not a relational database.
5) A – EBS Provisioned IOPS SSD provides sustained performance for mission-critical, low-latency workloads. EBS General Purpose SSD can provide bursts of performance up to 3,000 IOPS and has a maximum baseline performance of 10,000 IOPS for volume sizes greater than 3.3 TB. The two HDD options are lower-cost, high-throughput volumes.
1) Why is AWS more economical than traditional data centers for applications with varying compute workloads?
A) Amazon Elastic Compute Cloud (Amazon EC2) costs are billed on a monthly basis. B) Customers retain full administrative access to their Amazon EC2 instances. C) Amazon EC2 instances can be launched on-demand when needed. D) Customers can permanently run enough instances to handle peak workloads.
2) Which AWS service would simplify migration of a database to AWS?
A) AWS Storage Gateway B) AWS Database Migration Service (AWS DMS) C) Amazon Elastic Compute Cloud (Amazon EC2) D) Amazon AppStream 2.0
3) Which AWS offering enables customers to find, buy, and immediately start using software solutions in their AWS environment?
A) AWS Config B) AWS OpsWorks C) AWS SDK D) AWS Marketplace
4) Which AWS networking service enables a company to create a virtual network within AWS?
A) AWS Config B) Amazon Route 53 C) AWS Direct Connect D) Amazon Virtual Private Cloud (Amazon VPC)
5) Which of the following is AWS’s responsibility under the AWS shared responsibility model?
A) Configuring third-party applications B) Maintaining physical hardware C) Securing application access and data D) Managing custom Amazon Machine Images (AMIs)
6) Which component of AWS global infrastructure does Amazon CloudFront use to ensure low-latency delivery?
A) AWS Regions B) AWS edge locations C) AWS Availability Zones D) Amazon Virtual Private Cloud (Amazon VPC)
7) How would a system administrator add an additional layer of login security to a user’s AWS Management Console?
A) Use AWS Cloud Directory B) Audit AWS Identity and Access Management (IAM) roles C) Enable Multi-Factor Authentication D) Enable AWS CloudTrail
8) Which service can identify the user that made the API call when an Amazon Elastic Compute Cloud (Amazon EC2) instance is terminated?
A) Amazon CloudWatch B) AWS CloudTrail C) AWS X-Ray D) AWS Identity and Access Management (AWS IAM)
9) Which service would you use to send alerts based on Amazon CloudWatch alarms?
A) Amazon Simple Notification Service (Amazon SNS) B) AWS CloudTrail C) AWS Trusted Advisor D) Amazon Route 53
10) Where can a customer find information about prohibited actions on AWS infrastructure?
A) AWS Trusted Advisor B) AWS Identity and Access Management (IAM) C) AWS Billing Console D) AWS Acceptable Use Policy
Pre-Step 1) We will have two EC2 instances: one running Windows Server 2008 and another running Linux
Step 1) Here we have two websites: one running on the Linux machine on the default port 80, and another on the Windows IIS web server on port 8080
Step 2) Inside IIS we need to go to IIS Manager and change the binding to point to port 8080, and also create a custom rule so that Windows Firewall will accept port 8080 as inbound
Step 3) When you install the IIS web server, you will have a folder called C:\inetpub\wwwroot
Inside that folder, make sure you create a file called healthy.html and confirm you can access it via the IE browser, that is, http://x.x.x.x:8080/healthy.html
http://54.236.241.245:8080/healthy.html
Step 4) Make sure that on the Linux machine you have a file called healthy.html inside the folder /var/www/html (otherwise your health check will not work); we will see it inside the lab.
So make sure that when you type this in the browser (use your own IP address), you will see something:
http://54.174.68.136/healthy.html
Step 5) Now we will go over the lab; the goal is that when we create a Network ELB, typing URL:80 takes you to the Linux machine and typing URL:8080 takes you to the Windows machine.
Step 6) First we will configure the two target groups (this is the best way to do it), then we will create the Network ELB
Step 7) First I create two target groups:
WindowsTG1 (make sure you pick TCP port 8080)
LinuxTG2 (make sure you pick TCP port 80)
Step 8) For both groups, when you configure the health check, make sure you pick HTTP and in both cases use the file healthy.html (remember, we created this file on each machine in the corresponding folder)
Step 9) After you have created the target groups, go to each target group and click on the Targets tab; pick the correct EC2 instance, click Add to register it, and make sure you save it (do not forget to save it).
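For reference, Steps 7-9 can also be done programmatically. A minimal boto3 sketch for the Windows group (the VPC and instance IDs are hypothetical placeholders; the Linux group is the same with port 80):

import boto3

elbv2 = boto3.client("elbv2")

# TCP target group for the NLB, with an HTTP health check on /healthy.html.
windows_tg = elbv2.create_target_group(
    Name="WindowsTG1",
    Protocol="TCP",
    Port=8080,
    VpcId="vpc-0123456789abcdef0",
    HealthCheckProtocol="HTTP",
    HealthCheckPath="/healthy.html",
)["TargetGroups"][0]

# Register the Windows instance (this is the "Add ... then Save" step).
elbv2.register_targets(
    TargetGroupArn=windows_tg["TargetGroupArn"],
    Targets=[{"Id": "i-0123456789abcdef0"}],
)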
Step 10) Now I will create my Network ELB; make sure you add two listeners here, one for TCP port 80 and another for TCP port 8080
Step 11) Make sure you pick the correct AZs, each in the correct public subnet that you have; as you see, here we have a choice of IP address (we did not have this choice with the Classic ELB or Application ELB)
Step 12) We can also pick an Elastic IP address (let's not do it now)
Step 13) Next we need to pick the target group; we can pick either target group, and for now I will pick WindowsTG1 (it does not matter, as we will see later on)
Step 14) As you see, when you pick WindowsTG1, it is filled in with the parameters we set up before
Step 15) Now if you go back to Target Groups on the left side and click on the description, you will see the Network ELB shown in front of Load balancer; but when you go to LinuxTG2 you do not see it, since in Step 13 above I picked WindowsTG1
Step 16) When you look at the Targets tab on WindowsTG1, you will see it says healthy (which is good); but under the Targets tab of LinuxTG2 I see "unused", since we did not pick it in Step 13
Step 17) Now let's go to Load Balancers on the left side; when you click on the Listeners tab, both listeners point to Windows, but we need Windows only for port 8080 and Linux for port 80. So next to port 80, click on it, then click Edit and pick Linux
Step 18) Make sure that on the Listeners tab, port 80 now goes to Linux and port 8080 goes to Windows
Step 19) Now if I go to the left side, click on Target Groups, and click on LinuxTG2, you will see it being initialized (at first); then it will become healthy
Step 20) If you go to the ELB, click on the Description tab, and copy and paste the long DNS name, you should be able to reach the Linux machine on port 80 and the Windows machine on port 8080
Please note that RAM utilization is a custom metric. By default, EC2 monitoring is at 5-minute intervals, unless you enable detailed monitoring, which makes it 1-minute intervals (it will cost more).
Let's go to Google and search for Amazon EC2 Metrics and Dimensions.
We will do a lab: we will create a new EC2 instance and then create a custom CloudWatch metric for memory.
Step 1) We need to create a CloudWatch role for EC2 so the instance can send information to CloudWatch
Step 2) Go to IAM and create a new role called EC2CloudWatch
Step 3) Pick CloudWatchFullAccess as the policy
Step 4) Now we will create an EC2 instance and attach the above role to it; then we will use some Perl scripts to get a custom memory utilization metric
Step 5) Before we do our lab, let's go to Google and search for "Monitoring Memory and Disk Metrics for Amazon EC2 Linux Instances"
Step 12) If you type ls, we will see it created a directory called aws-scripts-mon
Now I will remove the zip file that we downloaded:
rm -rf CloudWatchMonitoringScripts-1.2.2.zip
Step 13) I go to the directory
cd aws-scripts-mon/
Then type ls and you will see some Perl scripts (in green):
mon-put-instance-data.pl
mon-get-instance-stats.pl
Step 14) Now, before we run these scripts to push data from the EC2 instance, let's go back to the AWS console and look at CloudWatch; we will see there is no metric for memory, but after we run the Perl scripts we will see the memory utilization metric
Step 15) Go to Dashboards and create a new dashboard, and put CPU utilization on it (as we see, we do not have memory utilization)
Call the dashboard EC2custommetric (make sure you save it)
When you click on EC2, you will see these metrics (as I mentioned at the beginning of the lab)
You can monitor your EC2 instance with CloudWatch using:
CPU
Network
Disk
Status Check
Step 16) Now we go back to our terminal window and copy and paste the command from the AWS website
I will run this line to test whether our EC2 instance can talk to CloudWatch
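(The documented verification command is along the lines of ./mon-put-instance-data.pl --mem-util --verify --verbose.) Under the hood these scripts call the CloudWatch PutMetricData API; a minimal boto3 equivalent of one memory reading is shown below. The System/Linux namespace matches what the AWS scripts use, while the instance ID and the 73.5 value are placeholders:

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Publish one custom MemoryUtilization data point, keyed by instance ID.
cloudwatch.put_metric_data(
    Namespace="System/Linux",
    MetricData=[{
        "MetricName": "MemoryUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        "Unit": "Percent",
        "Value": 73.5,
    }],
)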
AWS OpsWorks is a configuration management service that provides managed instances of Chef. OpsWorks lets you use Chef to automate how servers are configured, deployed, and managed across your Amazon EC2 instances or on-premises compute environments.
It has these parts:
Chef – Chef turns infrastructure into code; with Chef you can automate how you build, deploy, and manage your infrastructure
Stack – a stack is a group of resources such as EC2 instances, ELBs, and RDS instances
Layer – a layer exists within a stack and consists of things like a database layer or an application layer
When you or I create, for example, an application layer (instead of setting everything up manually), it will install the Apache server and take care of the yum updates and all the other configuration for you.
We will see it in the Lab.
We need one or more layers in the stack.
An instance must be assigned to at least one layer.
Preconfigured layers include:
Application layer
Load balancer layer
Database layer
Caching layer
Now we will do a lab.
Step 1) Go to the AWS console and open OpsWorks
Step 2) Go to a region that you have not used, like Frankfurt; as you see in EC2, there are no instances and we have only the one default security group
Step 3) Create a stack; here we will use the Chef 11 stack
Step 4) Call it ASMWebPHPStack and leave everything as default
Think of stacks as, for example, a production stack, a test stack, or a developer stack
Step 5) Now I will add a layer here
Step 6) Pick layer type = PHP App Server
Step 7) What does this layer do? It will create an EC2 instance with the Apache web server on it, with all the yum updates applied
Step 8) Now, inside the layer, add an EC2 instance and call it asmwebserver1; pick t2.micro and pick AZ = eu-central-1a
Step 9) Start the instance; right now it is installing, pre-configuring the Apache server, and doing all the updates for us (it will take some time). Remember, if we go back to OpsWorks, these are Chef 11 stacks (predefined Chef recipes)
Step 10) Let's go back to OpsWorks and click on Instances; it is still booting
Step 11) Go to the left side and add an app (application) from a repository
An app represents code stored in a repository that you want to install on application server instances. When you deploy the app, OpsWorks downloads the code from the repository to the specified server instances.
Step 12) Give it the name ASMApp1
Step 13) Choose Git and enter the repository address shown in the video; leave the rest as default values
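For reference, registering the app can also be scripted. A minimal boto3 sketch, where the stack ID and repository URL are hypothetical placeholders:

import boto3

opsworks = boto3.client("opsworks")

# Register a PHP app whose code lives in a Git repository.
app_id = opsworks.create_app(
    StackId="2f18b4cb-0000-0000-0000-example",
    Name="ASMApp1",
    Type="php",
    AppSource={
        "Type": "git",
        "Url": "https://github.com/example/simple-php-app.git",
    },
)["AppId"]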
Step 15) Now go back to Instances on the left side and see if it is complete
Step 16) If I click on the IP address, you will see "not found"; the Apache web server is running, but we need to go to Apps and deploy the app
Step 17) It takes about 2 minutes to deploy; after it finishes, when you go back to my instance and click the IP address, I will see it is working, and at the bottom it shows the name of the instance:
Simple PHP App
Congratulations!
Your PHP application is now running on the host “asmwebserver1” in your own dedicated environment in the AWS Cloud.
This host is running PHP version 5.3.29.
Step 18) Remember, we did not need to SSH in; we just used the .php files from the repository.
Step 19) Now we will add another layer for the ELB, but before we add this layer, we will go back to EC2, and on the left we will create an ELB called OpsWorkELB; we do not add any instances to it.
Pick the security group called "AWS-OpsWorks-PHP-App-Server", change the ping path to index.php, and change:
Interval = 10 sec
Healthy threshold = 3
Step 20) After it finishes, when you go to the Instances tab, make sure you do not have any instances added here
Step 21) Now I go back to OpsWorks and add the ELB layer
Step 22) You can add a new layer in another way: go back to the PHP App Server layer and you will see a tab called Network; here we can add the ELB layer. As we see, this is another way of doing it.
Let's read the note:
Note: After you attach an ELB to a layer, OpsWorks removes any currently registered instances and then manages the load balancer for you. If you subsequently use the ELB console or API to modify the configuration, the changes will not be permanent.
Step 23) Now if you go back to the layer, you will see the ELB layer has been added, but its health is not good yet; it takes time. After a minute it will be healthy, and if you click on the long DNS name, it will show the same web page as before.
Step 24) Now if I go back to EC2 and click ELB on the left side, on the Instances tab I will see the name of the instance (asmwebserver1) here; remember, we did not add anything here before.
Step 25) If you go back to OpsWorks and click on Instances, you will see there are two other kinds besides the 24/7 (default) instance:
Time-based
Load-based (used with auto scaling)
Step 26) Now I will go under Instances inside OpsWorks, add another instance, and call it asmwebserver2
Step 28) After about 2 minutes, if I go back to EC2 and click on the ELB, I will see the new instance has been added under the Instances tab, and it is in service
Step 29) Now let's go add another instance and call it:
asmwebserver3
t2.micro
eu-central-1c
Step 30) Now when I go to the long DNS name and keep refreshing, I will see all three web server names:
asmwebserver1
asmwebserver2
asmwebserver3
Step 31) Now I will delete all the instances and layers, and make sure to remove the ELB and delete all the security groups.
Here we have one VPC = 10.0.0.0/16; we also have three subnets, as follows:
10.0.1.0/24 Public Subnet1
10.0.2.0/24 Private Subnet2
10.0.3.0/24 Public Subnet3
I have two servers. One is called Public1, inside the public subnet 10.0.1.0/24, with:
Public IP address: 34.201.13.154
Private IP address: 10.0.1.183
Password: LphUKwynw9
I have another server, in private subnet 2 = 10.0.2.0/24, with:
Private IP address: 10.0.2.248
Password: rSB=zj6(EY
Step 1) I log on to the PC called Public1 and rename it to Public1
Step 2) Since I cannot access the server located in the private subnet directly, first I need to RDP to Public1; then, from that server, I will RDP to the private server.
So I will RDP to IP address 10.0.2.248
Step 3) Make sure that from the server called Public1 I can ping 10.0.2.248, then RDP to the PC called Private1 (10.0.2.248) and rename that PC to Private1; this way you will not get confused about which one is which.
Step 4) In order to use the AWS CLI, you need to make sure the AWS CLI is installed on the server.
Step 6) Since I want to show how you can connect to S3 from the CLI, after installing the CLI we need to make sure the server has an IAM role (since we do not want to put credentials inside the instance)
Step 7) Go to IAM and create a role called S3Admin.
Step 8) Now go back to both EC2 instances (public and private) and make sure you assign the IAM role to them.
Step 9) Now, on the public server, when I type aws s3 ls at the command prompt, I am able to see all my buckets:
C:\Users\Administrator>aws s3 ls
2018-01-28 21:21:01 asmtest4568
2018-01-28 21:20:31 test12345asm
2018-01-28 21:20:47 test698932asm
2018-01-08 17:51:16 testasm12345
Step 10) Now we come to our main problem: the goal is to make the private PC, located inside the private subnet, able to connect to S3, but via a VPC S3 endpoint
Step 11) Before we do the above, we need to make sure the AWS CLI is installed on the PC Private1. So first I create a NAT gateway (like we did in the last video), making sure to pick the public subnet (10.0.1.0/24), and then I configure my private route table to point to the NAT gateway.
You will be able to ping 8.8.8.8 from Private1 after a few minutes.
Step 12) Now I go and install the AWS CLI from the link we used last time.
Make sure that, in the same way you gave the role to Public1, you give it to Private1 (role = S3Admin), so you do not need credentials.
C:\Users\Administrator>aws configure
AWS Access Key ID [None]:
AWS Secret Access Key [None]:
Default region name [None]: us-east-1
Default output format [None]: table
Step 13) After I finish installing the AWS CLI on the PC called Private1, I will remove the NAT gateway, so it will not have access to the Internet; then I will set up the Amazon VPC endpoint for S3.
Step 14) Now I will remove the NAT gateway from it.
Step 15) Now I will go to the top and go to VPC; then on the left side click Endpoints, click Create Endpoint, pick S3 in the middle at the bottom, and make sure you pick the private subnet's route table here
Hint: for the NAT gateway, make sure you pick a public subnet
For the S3 endpoint, make sure you pick the private subnet's route table
Step 16) Here I will pick the Full Access policy:
Full Access – Allow access by any user or service within the VPC using credentials from any AWS accounts to any resources in this AWS service. All policies — IAM user policies, VPC endpoint policies, and AWS service-specific policies (e.g. Amazon S3 bucket policies, any S3 ACL policies) — must grant the necessary permissions for access to succeed.
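The endpoint itself can also be created programmatically; a minimal boto3 sketch (the VPC and route table IDs are hypothetical placeholders, and omitting PolicyDocument gives the default Full Access policy quoted above):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Gateway endpoint for S3, attached to the private route table.
ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],
)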
Step 17) Now make sure the private route table has a route pointing to this VPC endpoint:
Destination: 10.0.0.0/16 | Target: local | Status: Active | Propagated: No
Destination: pl-63a5400a (com.amazonaws.us-east-1.s3) | Target: vpce-d92aabb0 | Status: Active | Propagated: No
Step 18) Now go and test it; it should work.
Step 2: Click on Select from templates and choose WordPress Blog
Step 3: Choose the parameters like username, password, key, etc.
** Make sure you choose the right SSH key, because we need to log in to the EC2 server later
Step 4: On the tag page, tag the stack resources with "Wordpress Blog"
Step 5: Click Next and Create
Step 6: Navigate to the Outputs section and click on the URL. This will take you to your blog.
Step 7: Choose a username and password and click Next.
Step 8: You should have a blog set up now.
Step 9: SSH into your EC2 instance and paste the following command:
$ sudo chown -R apache:apache path/to/wordpress
Step 10: Go back to WordPress and you may now install themes and customize your page.
Step 11: Select the stack that we created earlier, click on Actions, and delete the stack. This will delete all the resources that were created by the stack.
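The same create/delete lifecycle can be driven from code. A minimal boto3 sketch, where the template URL and parameter names are hypothetical stand-ins for the sample template's actual values:

import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

# Launch the stack from a template, passing the SSH key as a parameter.
cfn.create_stack(
    StackName="WordPressBlog",
    TemplateURL="https://s3.amazonaws.com/example-bucket/wordpress.template",
    Parameters=[{"ParameterKey": "KeyName", "ParameterValue": "my-ssh-key"}],
    Tags=[{"Key": "Name", "Value": "Wordpress Blog"}],
)

# Later: deleting the stack removes every resource it created (Step 11).
cfn.delete_stack(StackName="WordPressBlog")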
Creating empty files in Linux:
$ touch file{1..10}
Copy a local file to a bucket:
$ aws s3 cp file1 s3://copy-file-saurav/
Copy everything in the local directory to a bucket:
$ aws s3 sync . s3://copy-file-saurav/
Create a script to save all files to the S3 bucket every (x) minutes/hours/days or months:
• $ vim sync.sh
• Press i for insert mode, then enter the script:
#!/bin/bash
aws s3 sync /home/ec2-user/Documents/ s3://copy-file-saurav
• Press Esc
• Type :wq! to save and exit
• $ chmod 500 sync.sh
The above command will make it executable
On the command line, type this to create a cron job:
$ crontab -e
• Press i to insert
• Paste: */1 * * * * /home/ec2-user/sync.sh
• The above line will sync files every minute. To sync once an hour instead, use 0 * * * *; to sync once a day, use:
0 0 * * * /home/ec2-user/sync.sh
• Save with :wq!
To exclude files with a certain extension (while uploading the rest with a public-read ACL into the STANDARD_IA storage class):
$ aws s3 sync /home/ec2-user/Documents/ s3://copy-file-saurav --exclude '*.rtf' --acl public-read --storage-class STANDARD_IA
To upload with public-read permission and put it in the S3 Standard-IA storage class:
$ aws s3 sync /home/ec2-user/Documents/ s3://copy-file-saurav --acl public-read --storage-class STANDARD_IA
For further reference visit:
https://docs.aws.amazon.com/cli/latest/reference/s3/index.html
Lambda is an event-driven compute service: we have functions that execute when a trigger fires. In today's lab, we will set up a trigger with CloudWatch Events.
On top of that, we will need an IAM role that gives our Lambda function permission to stop EC2 instances. We will also add CloudWatch permissions to that role so that the Lambda function can log the event every time it is triggered.
Let’s get started with setting up the role:
Go to the IAM console and click on Roles.
Click Create role.
Under AWS service, select Lambda. Click Next: Permissions.
You should see a list of policies. Search for AmazonEC2FullAccess and check the box.
Also search for CloudWatchFullAccess and check that box.
Click Next: Review.
Choose a name (e.g., Lambda_EC2_cloudwatch_access) for the role and add a description.
Click Create role.
Now we should have a role that we can attach to our Lambda function.
Now let's work on our Lambda function:
Go to the Lambda console.
Click Create function.
Click Author from scratch.
Give the function a name (e.g., stopEC2).
Under Role, select Choose an existing role.
Select the role we created earlier.
Click Create function.
Select Python 2.7 as the Runtime.
Paste our Python function:
import boto3

client = boto3.client('ec2')

def lambda_handler(event, context):
    # List every instance visible to this role, then stop each one.
    response = client.describe_instances()
    for reservation in response["Reservations"]:
        for instance in reservation["Instances"]:
            print(instance["InstanceId"] + " stopping")
            ids = [instance["InstanceId"]]
            client.stop_instances(InstanceIds=ids)
    return "Completed"
Click on Save and Test. You'll see a Configure test event window. Give the test event a name and click on Create.
Copy the ARN of the Lambda function from the top of the page (e.g., arn:aws:lambda:us-east-1:771454434342637355:function:stop...)
CloudWatch Events setup:
Go to the CloudWatch console.
On the left-hand side menu, under Events, select Rules.
Click Create rule.
Select Schedule and select Cron expression.
Put 0 5 * * ? * in the box. This means 5:00 AM GMT every day, every month, every year.
On the right-hand side, click on Add target. Select Lambda function, and in the function field pick the function whose ARN we copied earlier (e.g., arn:aws:lambda:us-east-1:771454434342637355:function:stop...)
Click on Configure details.
Give the rule a name and description.
Click on Create rule.
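The same schedule can be wired up with boto3; a minimal sketch, where the rule name, function name, and ARNs are hypothetical placeholders:

import boto3

events = boto3.client("events", region_name="us-east-1")

# Rule that fires at 05:00 GMT every day.
rule_arn = events.put_rule(
    Name="StopEC2Nightly",
    ScheduleExpression="cron(0 5 * * ? *)",
)["RuleArn"]

# Point the rule at the Lambda function.
events.put_targets(
    Rule="StopEC2Nightly",
    Targets=[{"Id": "stop-ec2-lambda",
              "Arn": "arn:aws:lambda:us-east-1:123456789012:function:stopEC2"}],
)

# The function must also allow CloudWatch Events to invoke it.
boto3.client("lambda").add_permission(
    FunctionName="stopEC2",
    StatementId="allow-cloudwatch-events",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule_arn,
)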
Now let’s go to our EC2 console and launch or start few ( 3 ) instances.
It’s time to test the Lambda Function. Since we can’t wait until 12 am, let’s test the event manually.
Go back to the Lambda function we created earlier and click Test.
Check your EC2 console. Your EC2 instances should be stopping.
That concludes the lab.
Terminate the EC2 instances that you created earlier.
*If you have errors in the Lambda function, we can go to CloudWatch Logs and troubleshoot.
EFS is a storage option for EC2 that can be attached to more than one EC2 instance; remember, an EBS volume can only be attached to one EC2 instance.
EFS is elastic and scalable (the storage capacity increases or decreases as you add or remove files, so it is not like a volume that you start with at, for example, 30 GB).
Supports the Network File System Version 4 (NFSv4) protocol
You only pay for the storage you use (no pre-provisioning required)
Can scale up to petabytes
Now we will do a lab.
Here we will have two EC2 instances (one in each subnet, in different AZs); then we will create an EFS file system and mount it on these EC2 instances. I will boot up an Amazon Linux AMI and install the Apache web server; on one of the EC2 instances we will create the index.html, and we will see that it appears on the 2nd EC2 instance too. Then we will test it via a Classic ELB and see our sample web page.
Step 1) I go to AWS and create a new EFS file system
Step 2) I choose the VPC = 10.0.0.0/16, then I choose two subnets, Subnet 1 and Subnet 3, each in a different AZ:
us-east-1a with Subnet 1 = 10.0.1.0/24
us-east-1c with Subnet 3 = 10.0.3.0/24
Make sure you pick the correct security groups (here I pick the one allowing RDP, HTTP, and ICMP) plus the default security group, and
click Next
Step 3) Give it the name EFSWebsites, leave the defaults, and click Next
Step 4) Now it should be creating the EFS file system (when it is finished, it will show Available)
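The file system and its mount targets can also be created with boto3; a minimal sketch, where the subnet and security group IDs are hypothetical placeholders (in practice, wait until the file system state is "available" before adding mount targets):

import boto3

efs = boto3.client("efs", region_name="us-east-1")

fs_id = efs.create_file_system(CreationToken="EFSWebsites")["FileSystemId"]
efs.create_tags(FileSystemId=fs_id, Tags=[{"Key": "Name", "Value": "EFSWebsites"}])

# One mount target per subnet/AZ; the security group must allow NFS (TCP 2049).
for subnet_id in ("subnet-0aaaa1111bbbb2222c", "subnet-0cccc3333dddd4444e"):
    efs.create_mount_target(
        FileSystemId=fs_id,
        SubnetId=subnet_id,
        SecurityGroups=["sg-0123456789abcdef0"],
    )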
Step 5) Now I go to EC2 and create two EC2 instances with the Amazon Linux AMI and call them EFS-east1a and EFS-east1c
Put the first EC2 instance (EFS-east1a) inside Subnet 1
Put the 2nd EC2 instance (EFS-east1c) inside Subnet 3
Step 6) Make sure you use the same security groups as above.
Step 7) Now I will SSH to the first instance (EFS-east1a); here we use PuTTYgen to create a .ppk (since my key was in .pem format)
Step 8) To avoid mixing up the two terminal windows, pay attention to the private IP addresses: 10.0.1.x (1st instance) and 10.0.3.x (2nd instance)
Step 9) Type these commands:
sudo su (to become root)
yum update -y
yum install httpd -y (to install the Apache server)
service httpd start (to start the web server service)
Step 10) The Apache install creates a directory for us at /var/www/html,
so let's go to that directory:
[root@ip-10-0-1-8 ec2-user]# cd /var/www/html
[root@ip-10-0-1-8 html]# ls
As we see, when I type ls there is nothing inside the directory yet.
Step 11) Now go to the 2nd instance (10.0.3.x), SSH to it, and type the same commands as before;
make sure you are root on both instances.
Step 12) Now I will go and create a Classic ELB (call it EFSELB), include these two instances, and make sure you pick the same security group as for those instances (also pick the default security group)
Step 13) Change the interval to 6 seconds and the healthy threshold to 2 retries.
Step 14) If I go to the Instances tab, I will see the status OutOfService (since we have no index.html yet)
Step 15) Now I go back to EFS and follow the instructions on what to do; here click on:
Amazon EC2 mount instructions
I do not need to do these steps, since my Amazon Linux AMI already has the NFS client:
-------------------- Skip these steps --------------------
Install the NFS client on your EC2 instance.
On an Amazon Linux, Red Hat Enterprise Linux, or SuSE Linux instance:
sudo yum install -y nfs-utils
On an Ubuntu instance:
sudo apt-get install nfs-common
-----------------------------------------------------------
If you get this message in the box:
Your Amazon VPC is not currently configured to enable mounting using DNS names. To enable this feature go to the Amazon VPC console; in the Actions menu, select “Edit DNS Resolution” and “Edit DNS Hostnames” and make sure both are set to “Yes”. For instructions on mounting your file system using a mount target IP address instead of a DNS name, please see our technical documentation.
After you enable DNS resolution and DNS hostnames, you can copy and paste this command to mount your file system:
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 fs-a9ffaee0.efs.us-east-1.amazonaws.com:/ efs
But instead of an /efs directory, we will mount it at /var/www/html (on both instances):
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 fs-a9ffaee0.efs.us-east-1.amazonaws.com:/ /var/www/html
Step 16) If you paste the above command and it does not work and just hangs, go back to both instances and make sure you have added the default security group (also inside the EFS mount targets)
Step 17) Now I go to one of the instances (for example, the first instance), then go to the directory:
cd /var/www/html
and type nano index.html
and write something inside it; then we will see this file inside the 2nd instance
Step 18) Now when I go to the other instance, I will see the index.html with the same content
Step 19) So I have mounted this EFS file system on the directory /var/www/html
Step 20) Now if I go to my ELB and copy and paste the long DNS name:
EFSELB-30545612.us-east-1.elb.amazonaws.com
I will see the content of index.html
Step 21) Now I can go to Route 53, create an "A" record with Alias pointing to the ELB, and try to open it with my domain name:
Hamedtest100.com
So, as we see here, it works through the domain name as well.
Here are the differences between EBS, EFS, and S3:
Amazon EFS is a file storage service for use with Amazon EC2. Amazon EFS provides a file system interface, file system access semantics (such as strong consistency and file locking), and concurrently-accessible storage for up to thousands of Amazon EC2 instances.
Amazon EBS is a block-level storage service for use with Amazon EC2. Amazon EBS can deliver performance for workloads that require the lowest-latency access to data from a single EC2 instance.
Amazon S3 is an object storage service. Amazon S3 makes data available through an Internet API that can be accessed anywhere.
Instance Store Volumes are sometimes called Ephemeral Storage.
Instance store-backed instances cannot be stopped. If the underlying host fails, you will lose your data.
EBS-backed instances can be stopped. You will not lose the data on the instance if it is stopped.
You can reboot both, and you will not lose your data.
By default, both ROOT volumes will be deleted on termination, however, with EBS volumes, you can tell AWS to keep the root device volume.
Amazon (EC2) Elastic Compute Cloud
Amazon EC2 provides scalable virtual servers in the cloud. The virtual servers can run different operating systems but most commonly run a flavor of Linux or Windows.
An EC2 virtual server is known as an “instance” and can be made up of different instance types and sizes.
Pricing Models
Reserved Instances
Purchase Reserved Instances when you know the amount of usage you will need for an instance. If an instance is needed 100% of the time, purchasing Reserved Instances at a discount will reduce costs.
On-Demand Instances
Are used on demand and are paid for by the hour. If you only need an instance for an hour, only fire it up for that long!
Spot Instances
Bid on unused EC2 capacity for non-production applications.
Volumes vs. Snapshots
Volumes exist on EBS
Virtual Hard Disk
Snapshots exist on S3
You can take a snapshot of a volume; this will store that volume on S3.
Snapshots are incremental, this means that only the blocks that have changed since your last snapshot are moved to S3
If this is your first snapshot, it may take some time to create
Volumes vs. Snapshots – Security
Snapshots of encrypted volumes are encrypted automatically.
Volumes restored from encrypted snapshots are encrypted automatically.
You can share snapshots, but only if they are unencrypted.
These snapshots can be shared with other AWS accounts or made public
Snapshots of Root Device Volumes
To create a snapshot for Amazon EBS volumes that serve as root devices, you should stop the instance before taking the snapshot.
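A minimal boto3 sketch of that advice (the instance ID is a hypothetical placeholder, and the sketch assumes the first block device mapping is the root device):

import boto3

ec2 = boto3.client("ec2")
instance_id = "i-0123456789abcdef0"

# Stop the instance and wait, so the snapshot is consistent.
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

# Find the root EBS volume and snapshot it.
instance = ec2.describe_instances(InstanceIds=[instance_id])["Reservations"][0]["Instances"][0]
root_volume_id = instance["BlockDeviceMappings"][0]["Ebs"]["VolumeId"]
ec2.create_snapshot(VolumeId=root_volume_id, Description="Root volume snapshot")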
Now we will do a Lab
Step 1) I go to the Sydney region; here I have a VPC = 192.168.0.0/16 and two public subnets, 1 and 2:
192.168.1.0/24
192.168.2.0/24
Step 2) I go to EC2 and pick one Windows 2008 server with EBS; then I will also see the new volume under the Volumes section on the left side.
Step 3) After I log in, I go to the Windows 2008 server and rename it; then I install the IIS web server, go to c:\inetpub\wwwroot, create an index.html file, and save it in this folder (hint: make sure the extension is .html, not .txt; use Folder Options to check whether extension hiding is turned on)
Step 4) Now type the public IP address in IE or Google Chrome; you should be able to see the content of the homepage
Step 5) Now I will stop this EC2 instance and take an image of it. As we see, when I select the EC2 instance called EBSSydney1 and create an image, I can also extend the volume size here; let's give it the name "Image of EBSSydney1".
When you create an EBS image, an EBS snapshot will also be created for each of the above volumes.
Step 6) Now I will see "Image of EBSSydney1" under AMIs and also under Snapshots on the left side
Step 7) So what does this mean? I can use this AMI as a baseline: starting from it, I can get another EC2 instance with exactly the same software (like IIS and the website).
Step 8) Now I will launch a new EC2 instance, but here I will use My AMIs
Step 9) Go through the same process as before, put this new EC2 instance inside the public subnet 192.168.1.0/24, and give it the name EBSSydney2
Step 10) As we go through these steps, note that I can uncheck Delete on termination (I cannot do this with an instance store volume)
Step 11) As we see, under EC2 I now see two instances, and under Volumes I see two volumes
Step 12) Now I will connect to the new EC2 instance called EBSSydney2 via RDP; also, if I copy and paste its public address, I should see the same content as on EBSSydney1
Step 13) After logging in with the password, I see IIS installed with the file c:\inetpub\wwwroot\index.html; then I rename the computer to EBSSydney2
Step 14) Now, if I want, I can increase the volume size of EBSSydney1 from 30 GB to 35 GB; all I have to do is go to Volumes, click Actions, and modify it.
Step 15) I will restart EBSSydney1 to see if I got the extra 5 GB of storage; as we see, when I go to Volumes, click on EBSSydney1, and look under the Status tab, I see a percentage as it optimizes.
Step 17) Make sure the state is "in-use – completed (100%)"
Step 18) If you go to Server Manager on the taskbar and click on Storage, you will now see the extra 5.00 GB of space; all you have to do is highlight it and then you can extend your C drive.
Step 19) Now I can see the C drive is 35 GB
Step 20) Now let’s say I want to attach a new Volume to EBSydney1 as D Drive=50 GB; So I can go to Volume on the left and then create a new Volume ( here I can encrypt it) and then attached to EBSydney1
Here is the ID of new Volume: vol-0401eb87ba30edb96; ( for Example)
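Creating and attaching the volume can also be scripted; a minimal boto3 sketch (the instance ID is a hypothetical placeholder for EBSSydney1):

import boto3

ec2 = boto3.client("ec2", region_name="ap-southeast-2")  # Sydney

# Create an encrypted 50 GB volume in the instance's AZ.
volume = ec2.create_volume(
    AvailabilityZone="ap-southeast-2a",
    Size=50,
    VolumeType="gp2",
    Encrypted=True,
)
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

# Attach it; Windows sees it as a new disk to bring online as the D drive.
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",
    Device="xvdf",
)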
Step 21) Let's rename it "New Volume Made"; now in the right-hand column you will see it says "available" (instead of "in use")
Step 22) Let's go attach it to EBSSydney1 (the size of the volume is 50 GB)
Step 23) Now if I go to EC2, click on EBSSydney1, and look at the bottom, I will see both the root volume and this extra attached volume
Step 24) Now go to EBSSydney1 (via RDP), go to Server Manager, and click on Storage; I will see this new 50 GB drive (extra HD). Right-click on it and initialize it, then right-click and create a volume using all 50 GB of space, and give it the drive letter D
Step 25) Now I will have two hard drives, the C and D drives. Let's imagine my D drive is used for data, so I will go and store a file called Data.txt inside it.
Step 26) So up to now I have two EC2 instances:
EBSSydney2 with a C drive
EBSSydney1 with C and D drives
Step 27) Now, since all of these EBS volumes are attached to EC2 over the network, I can detach the D-drive volume from EBSSydney1 and connect it to EBSSydney2
Step 28) After I detach it, I will see the status as "available"
Step 29) Now I will go and attach this volume (the D drive) to EBSSydney2
Step 30) Now when I go to EBSSydney1, the D drive is not there; but when I go to EBSSydney2, I see it has the D drive, and my data is still there.
Step 31) So we see an EBS volume can be attached to only one EC2 instance at a time; also, if the new volume was created encrypted, the attached volume is encrypted as well.
Step 32) Now I will go and create an instance store-backed instance (instead of EBS), and we will see that I cannot stop it, only reboot or terminate it
Step 33) Go to EC2 and create a new instance; on the left side click on Community AMIs, then make sure you select root device type = "instance store"
Step 34) Go through the process and pick the first AMI; as you see there is no free tier, so pick m3.medium and put it in the public subnet
Step 35) On the Add Storage page, read the note:
Your instance will be launched with the following storage device settings. You can attach additional instance store volumes to your instance. You can also attach additional EBS volumes after launching an instance, but not instance store volumes. Learn more about storage options in Amazon EC2.
Step 36) Click Add New Volume; as you see, you do not have many options here: you cannot change "Delete on Termination" and you cannot encrypt. Give the instance the name
InstanceSydney1
Step 37) As you see, it is not free, so as soon as we are done we will delete it.
Step 38) As you see, when you create an instance store-backed instance, you do not have any volume listed (unlike when we created the EBS-backed instance)
Step 39) Now when I click Actions and look at the instance state, I can only reboot or terminate; I cannot stop it, and I cannot take an image of it.
SNS is integrated with many AWS services. We are able to use it to receive notifications when events occur in our AWS environment. With CloudWatch and SNS, a full environment-monitoring solution could be created that notifies administrators of alerts, capacity issues, downtime, changes in the environment, and more!
TOPIC: A topic is what a message is sent to.
Subscription endpoint: SNS sends all messages to the subscriptions subscribed to a specific topic.
Subscriber endpoints include the following:
Application, mobile app push notifications (iOS/Android/Amazon/Microsoft)
SMS
HTTPS
HTTP
E-Mail
E-Mail JSON
SQS Queue
AWS Lambda
Remember, SNS is push, but SQS (the queue service) is pull.
Step 1) We will create a topic; call it "A picture was uploaded to S3 buckets"
Step 2) Let's create a subscription; here we will use the "e-mail" protocol
The choices of protocol are:
HTTP
SMS
HTTPS
E-Mail
E-Mail JSON
SQS Queue
AWS Lambda
Application
Step 3) Then enter your e-mail address here
Step 4) Now, as we see, it is pending; go to the e-mail above and accept the subscription (you need to confirm the subscription)
Step 5) Now when I go back to AWS SNS, I will see the new subscription ID
Step 6) If you want to add another subscription, click Subscriptions, then choose the SMS protocol (to get a text) and provide your cell phone number.
Step 7) Now I will test it by publishing to this topic myself; remember, I can also have the notification triggered by events.
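Steps 1-7 map directly onto a few SNS API calls; a minimal boto3 sketch (the e-mail address is a hypothetical placeholder, and the topic name is hyphenated because topic names cannot contain spaces):

import boto3

sns = boto3.client("sns", region_name="us-east-1")

topic_arn = sns.create_topic(Name="picture-uploaded-to-s3")["TopicArn"]

# The recipient must confirm the subscription from their inbox (Step 4).
sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="you@example.com")

# The manual test publish from Step 7.
sns.publish(
    TopicArn=topic_arn,
    Subject="Test publish",
    Message="A picture was uploaded to the S3 bucket.",
)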
Step 8) Now that all is OK, I will go to S3 and create a bucket, and have it trigger SNS to e-mail me when a file is uploaded to the bucket
Step 9) Go to S3 and create the bucket "testswithsns" with all default values
Step 10) Then go to the properties of that bucket and click on Events
Step 11) Pick the event "Object create"; as you see, when you select it you have the choices SNS topic, SQS queue, or Lambda function
Step 12) I pick SNS and pick the topic I created in the first steps, called "whenAfileisuplaodedtoS3"
Step 13) Now when I upload pictures to my S3 bucket, I will see notifications.
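The bucket-to-topic wiring in Steps 10-12 can also be set with boto3; a minimal sketch (the topic ARN is a hypothetical placeholder, and the topic's policy must allow S3 to publish to it):

import boto3

s3 = boto3.client("s3")

# Send every object-created event in the bucket to the SNS topic.
s3.put_bucket_notification_configuration(
    Bucket="testswithsns",
    NotificationConfiguration={
        "TopicConfigurations": [{
            "TopicArn": "arn:aws:sns:us-east-1:123456789012:picture-uploaded-to-s3",
            "Events": ["s3:ObjectCreated:*"],
        }]
    },
)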
Step 1) Create Target Group 1 (Pic) and Target Group 2 (Video)
Step 2) Associate/register Target Group 1 with EC1 (Pic1) and EC2 (Pic2): click the Targets tab (make sure you click Add to register, then Save)
Step 3) Associate/register Target Group 2 with EC3 (Video3) and EC4 (Video4): click the Targets tab (make sure you click Add to register, then Save)
Step 4) Now we will click on Application Load Balancer and go through the steps
Step 5) After selecting the security group (port 80), we will pick both Target Group 1 and Target Group 2. If we had not created the two target groups already, we would have to do it here; this way it is easier to understand the material.
Step 6) Here I pick Picture as the default group (it does not matter which one we pick, as we will see later on); as we see, the rest of the parameters are filled in.
Step 7) We wait about 5 minutes so the state goes from Provisioning to Active. Now if you go to Target Groups, you will see both of the instances in Group 1 (Pic) are healthy; but when you look at Group 2 (Video), you will see them as unused, since we still have to link the video group to the Application Load Balancer.
Step 8) Now if I copy and paste the long DNS name, https://applicationloadbalancer-780517374.us-east-1.elb.amazonaws.com/, it will show the page corresponding to Pic1 and Pic2
Step 9) Now if you go back to Target Groups and click on Target Group 1 (Pic), on the first tab I see the load balancer shown here; but when I click on Target Group 2 (Video), on the first tab I do not see the load balancer associated. So I need to go to the load balancer and change the rules (last tab).
Step 10) Here we see the default rule points to Group 1 (Pic).
Step 11) Click on View/edit rules
Step 12) Now we will add content rules: for path *Picture*, forward to the Picture group; for path *Video*, forward to the Video group (see the sketch after this step)
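A minimal boto3 sketch of those two path rules; the listener and target group ARNs are hypothetical placeholders:

import boto3

elbv2 = boto3.client("elbv2")

listener_arn = "arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/alb/1111/2222"
rules = [
    ("*Picture*", "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/Pic/3333"),
    ("*Video*", "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/Video/4444"),
]

# One rule per path pattern, each forwarding to its own target group.
for priority, (pattern, tg_arn) in enumerate(rules, start=1):
    elbv2.create_rule(
        ListenerArn=listener_arn,
        Priority=priority,
        Conditions=[{"Field": "path-pattern", "Values": [pattern]}],
        Actions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
    )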
Step 13) Now if I paste that long DNS name with /picture.html, it will show the content from EC1 and EC2 (after a refresh)
Cisco IP SLA (Service Level Agreement) is very similar to the Frame Relay concept of the Committed Information Rate: your service provider (ISP) tells you that you are guaranteed a particular amount of bandwidth, with some minimum value, but it can go higher. The same concept applies to IP SLA.
Now I will do a small lab. The goal is for PC1 (on a private network) to be able to ping PC2 on another private network by going via R3, which represents the Internet. R3 has no connectivity to the LANs 192.168.2.0/24 and 192.168.1.0/24. R1 has a default route to the ISP (R3).
Now I will do a small lab. The goal is to create a PPP multilink on R1 and R2 so that the link between them will run at double the speed.
Log in to your Windows Server 2012 as an Administrator. When your login completes, Server Manager will open automatically; if it doesn't open, you must open it manually.
Another advantage of having a client-server model is centralized storage. Instead of having data dispersed among multiple hard drives on different client machines, we can store all mission-critical data on a central hard drive on the server.
From the Domain Controller, open a command prompt, type GPMC.MSC (the short name), and hit Enter; this will open GPMC. Or click Start, then click the down arrow and select Group Policy Management.
Another great feature of Server 2012 is how the Delegation of Control Wizard simplifies adding rights for common tasks to groups or administrators.