AWS and Chef
First, I
would like to clarify that we are going to launch the instances using AWS
CloudFormation. Of course, we could run and manage them manually, but
that's not the point of automation, is it?
CloudFormation includes two main concepts:
— a template, a JSON file that describes all the resources we
need to launch the instance.
— a stack, containing the AWS resources
described in the template.
For those who are just getting acquainted with AWS, Amazon offers ready-to-use sample templates
that cover most aspects of working with AWS. Links to
these sample templates are at the end of the article.
Let's take
a close look at what a template is. In the basic case, it consists of
four blocks: Parameters, Mappings, Resources, and Outputs.
The Parameters
block describes the variables and their values that will be passed to the stack
during its creation. You can define parameter values when creating the stack,
or you can rely on the ‘default’ field in the parameter description.
Parameters can contain any type of information, from a password to a network
port or a path to a directory. To get a parameter's value inside the template, use the Ref function.
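For instance, a parameter and a reference to it might look like the following fragment (a minimal sketch modeled on the full template below; the resource name is illustrative):

```json
{
  "Parameters" : {
    "InstanceType" : {
      "Description" : "EC2 instance type",
      "Type" : "String",
      "Default" : "m1.small"
    }
  },
  "Resources" : {
    "MyInstance" : {
      "Type" : "AWS::EC2::Instance",
      "Properties" : {
        "InstanceType" : { "Ref" : "InstanceType" }
      }
    }
  }
}
```

If no value is supplied at stack creation, Ref resolves to the default, m1.small.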
The Mappings
block contains a set of keys with the corresponding parameters and values. Mappings are typically used to define the AWS regions
and the AMI IDs corresponding to them. To get the value of a given mapping, use
the Fn::FindInMap function, where you define the mapping name, the key, and the parameter
that will be used to look up the value.
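A condensed example, using values from the template below: given the mapping

```json
{
  "Mappings" : {
    "AWSRegionArch2AMI" : {
      "us-east-1" : { "32" : "ami-d7a18dbe", "64" : "ami-bba18dd2" }
    }
  }
}
```

the lookup `{ "Fn::FindInMap" : [ "AWSRegionArch2AMI", { "Ref" : "AWS::Region" }, "64" ] }` resolves to ami-bba18dd2 when the stack runs in us-east-1.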
The Resources
block describes our EC2 instances or any other AWS resources. This is
the section where you define the instances for the Chef-server and the client
node. The description should include the type of the resource (for example, AWS::EC2::Instance);
you can also specify some metadata describing the node or defining the
pre-install procedure directives (for example, a certain package must be
installed when launching the image). Properties
are a fundamental part of this block; they contain the detailed information
about the launched image. Here you can define the instance type to run (for
example, Amazon Linux 32-bit), the membership of the new instance in one Security
Group or another (essentially, it is a firewall with defined traffic
handling rules, where the default action is deny). But the central part
of the Properties block is User Data. This is where we will
describe the script that turns our bare instance into a Chef server
or a Chef client.
See the Template
that I use below under the cut, followed by my comments.
{
"AWSTemplateFormatVersion" : "2010-09-09",
"Description" : "Template for stack",
"Parameters" : {
"KeyName" : {
"Description" : "Name of an existing EC2 KeyPair to enable SSH access to the instances",
"Type" : "String",
"MinLength" : "1",
"MaxLength" : "255",
"AllowedPattern" : "[\\x20-\\x7E]*",
"ConstraintDescription" : "can contain only ASCII characters."
},
"HostKeys" : {
"Description" : "Public Key",
"Type" : "String"
},
"SecretAccessKey" : {
"Description" : "Name of an existing EC2 KeyPair to enable SSH access to the instances",
"Type" : "String"
},
"InstanceType" : {
"Description" : "Chef Server EC2 instance type",
"Type" : "String",
"Default" : "m1.small",
"AllowedValues" : [ "t1.micro","m1.small"],
"ConstraintDescription" : "must be a valid EC2 instance type."
},
"SSHLocation" : {
"Description" : " The IP address range that can be used to SSH to the EC2 instances",
"Type": "String",
"MinLength": "9",
"MaxLength": "18",
"Default": "0.0.0.0/0",
"AllowedPattern": "(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})/(\\d{1,2})",
"ConstraintDescription": "must be a valid IP CIDR range of the form x.x.x.x/x."
},
"ChefServerURL" : {
"Description" : "URL of the Chef server (referenced by the client's User Data)",
"Type" : "String"
}
},
"Mappings" : {
"AWSInstanceType2Arch" : {
"t1.micro" : { "Arch" : "64" },
"m1.small" : { "Arch" : "64" }
},
"AWSRegionArch2AMI" : {
"us-east-1" : { "32" : "ami-d7a18dbe", "64" : "ami-bba18dd2", "64HVM" : "ami-0da96764" },
"us-west-2" : { "32" : "ami-def297ee", "64" : "ami-ccf297fc", "64HVM" : "NOT_YET_SUPPORTED" },
"us-west-1" : { "32" : "ami-923909d7", "64" : "ami-a43909e1", "64HVM" : "NOT_YET_SUPPORTED" }
}
},
"Resources" : {
"ChefClient" : {
"Type" : "AWS::EC2::Instance",
"Metadata" : {
"Description" : "Chef Client",
"AWS::CloudFormation::Init" : {
"config" : {
"packages" : {
"yum" : {
"git" : []
}
}
}
}
},
"Properties": {
"ImageId" : { "Fn::FindInMap" : [ "AWSRegionArch2AMI", { "Ref" : "AWS::Region" }, { "Fn::FindInMap" : [ "AWSInstanceType2Arch", { "Ref" : "InstanceType" }, "Arch" ] } ] },
"InstanceType" : { "Ref" : "InstanceType" },
"SecurityGroups" : [ {"Ref" : "WebServerSecurityGroup"} ],
"KeyName" : { "Ref" : "KeyName" },
"UserData" : { "Fn::Base64" : { "Fn::Join" : ["", [
"#!/bin/bash -v\n",
"yum update -y aws-cfn-bootstrap\n",
"function error_exit\n",
"{\n",
" cfn-signal -e 1 -r \"$1\" '", { "Ref" : "ClientWaitHandle" }, "'\n",
" exit 1\n",
"}\n",
"yum update -y\n",
"yum install git -y\n",
"/sbin/service iptables stop\n",
"/sbin/service ip6tables stop\n",
"/sbin/chkconfig iptables off\n",
"/sbin/chkconfig ip6tables off\n",
"/usr/bin/curl -L https://www.opscode.com/chef/install.sh | bash\n",
"cd /root/\n",
"/usr/bin/git clone git://github.com/opscode/chef-repo.git\n",
"/bin/mkdir -p /root/chef-repo/.chef\n",
"/bin/mkdir -p /etc/chef\n",
"/bin/mkdir /root/.aws\n",
"/bin/touch /root/.aws/config\n",
"/bin/echo '[default]' >> /root/.aws/config\n",
"/bin/echo 'region = ", {"Ref" : "AWS::Region" }, "' >> /root/.aws/config\n",
"/bin/echo 'aws_access_key_id = ", { "Ref" : "HostKeys" }, "' >> /root/.aws/config\n",
"/bin/echo 'aws_secret_access_key = ", { "Ref" : "SecretAccessKey" }, "' >> /root/.aws/config\n",
"/usr/bin/aws s3 cp s3://storage/admin.pem /root/chef-repo/.chef\n",
"/usr/bin/aws s3 cp s3://storage/chef-validator.pem /root/chef-repo/.chef\n",
"/usr/bin/aws s3 cp s3://storage/knife.rb /root/chef-repo/.chef\n",
"/usr/bin/aws s3 cp s3://storage/client.rb /etc/chef\n",
"/usr/bin/aws s3 cp s3://storage/json_attribs.json /etc/chef\n",
"/bin/cp -p /root/chef-repo/.chef/chef-validator.pem /etc/chef/validation.pem\n",
"/usr/sbin/ntpdate -q 0.europe.pool.ntp.org\n",
"/bin/echo '\nchef_server_url \"", { "Ref" : "ChefServerURL" }, "\"' >> /etc/chef/client.rb\n",
"/bin/echo '\nchef_server_url \"", { "Ref" : "ChefServerURL" }, "\"' >> /root/chef-repo/.chef/knife.rb\n",
"/usr/bin/chef-client\n",
"/opt/aws/bin/cfn-signal -e 0 -r \"ChefClient setup complete\" '", { "Ref" : "ClientWaitHandle" }, "'\n"
]]}}
}
},
"ClientWaitHandle" : {
"Type" : "AWS::CloudFormation::WaitConditionHandle"
},
"ClientWaitCondition" : {
"Type" : "AWS::CloudFormation::WaitCondition",
"DependsOn" : "ChefClient",
"Properties" : {
"Handle" : {"Ref" : "ClientWaitHandle"},
"Timeout" : "1200"
}
},
"ChefServer" : {
"Type" : "AWS::EC2::Instance",
"Metadata" : {
"Description" : "Bootstrap ChefServer",
"AWS::CloudFormation::Init" : {
"config" : {
"packages" : {
"yum" : {
"wget" : []
}
}
}
}
},
"Properties": {
"ImageId" : { "Fn::FindInMap" : [ "AWSRegionArch2AMI", { "Ref" : "AWS::Region" }, { "Fn::FindInMap" : [ "AWSInstanceType2Arch", { "Ref" : "InstanceType" }, "Arch" ] } ] },
"InstanceType" : { "Ref" : "InstanceType" },
"SecurityGroups" : [ {"Ref" : "WebServerSecurityGroup"} ],
"KeyName" : { "Ref" : "KeyName" },
"UserData" : { "Fn::Base64" : { "Fn::Join" : ["", [
"#!/bin/bash\n",
"yum update -y aws-cfn-bootstrap\n",
"function error_exit\n",
"{\n",
" cfn-signal -e 1 -r \"$1\" '", { "Ref" : "WaitHandle" }, "'\n",
" exit 1\n",
"}\n",
"cfn-init --region ", { "Ref" : "AWS::Region" },
" -s ", { "Ref" : "AWS::StackId" }, " -r ChefServer ",
" --access-key ", { "Ref" : "HostKeys" },
" --secret-key ", {"Ref" : "SecretAccessKey"}, " || error_exit 'Failed to run cfn-init'\n",
"yum update -y\n",
"/sbin/service iptables stop\n",
"/sbin/service ip6tables stop\n",
"/sbin/chkconfig iptables off\n",
"/sbin/chkconfig ip6tables off\n",
"#Install ChefServer package\n",
"cd /root/\n",
"/usr/bin/wget https://opscode-omnibus-packages.s3.amazonaws.com/el/6/x86_64/chef-server-11.0.10-1.el6.x86_64.rpm\n",
"/bin/rpm -ivh /root/chef-server-11.0.10-1.el6.x86_64.rpm\n",
"/usr/bin/wget https://s3.amazonaws.com/storage/default.rb\n",
"/bin/cp -f default.rb /opt/chef-server/embedded/cookbooks/runit/recipes/default.rb\n",
"#Configure ChefServer\n",
"su - -c '/usr/bin/chef-server-ctl reconfigure'\n",
"su - -c '/usr/bin/chef-server-ctl restart'\n",
"#AWS creds installation\n",
"/bin/mkdir /root/.aws\n",
"/bin/touch /root/.aws/config\n",
"/bin/echo '[default]' >> /root/.aws/config\n",
"/bin/echo 'region = ", {"Ref" : "AWS::Region" }, "' >> /root/.aws/config\n",
"/bin/echo 'aws_access_key_id = ", { "Ref" : "HostKeys" }, "' >> /root/.aws/config\n",
"/bin/echo 'aws_secret_access_key = ", { "Ref" : "SecretAccessKey" }, "' >> /root/.aws/config\n",
"#Upload files for client\n",
"/usr/bin/aws s3 cp /etc/chef-server/admin.pem s3://storage/\n",
"/usr/bin/aws s3 cp /etc/chef-server/chef-validator.pem s3://storage/\n",
"#Chef client and dirs for it\n",
"/usr/bin/curl -L https://www.opscode.com/chef/install.sh | /bin/bash\n",
"/bin/mkdir /root/.chef\n",
"/bin/mkdir /etc/chef\n",
"/bin/mkdir /etc/chef/cookbooks\n",
"/bin/mkdir /etc/chef/roles\n",
"#Knife client config files from S3\n",
"/bin/cp /etc/chef-server/admin.pem /etc/chef/client.pem\n",
"/usr/bin/aws s3 cp s3://storage/knife_admin.rb /root/.chef/knife.rb\n",
"#Roles and cookbooks from S3\n",
"/usr/bin/aws s3 cp s3://storage/roles/ /etc/chef/roles/ --recursive\n",
"/usr/bin/aws s3 cp s3://storage/cookbooks/ /etc/chef/cookbooks/ --recursive\n",
"#Cookbooks from community\n",
"/usr/bin/knife cookbook site download cron\n",
"/usr/bin/knife cookbook site download jenkins\n",
"/usr/bin/knife cookbook site download ntp\n",
"/usr/sbin/ntpdate -q 0.europe.pool.ntp.org\n",
"yum remove ruby -y\n",
"yum install ruby19 -y\n",
"#Unpack and move cookbooks\n",
"/bin/mv /root/*.tar.gz /etc/chef/cookbooks\n",
"for i in `/bin/ls /etc/chef/cookbooks/*.tar.gz`; do /bin/tar zxf $i -C /etc/chef/cookbooks/; /bin/rm -f $i; done\n",
"for i in `/bin/ls /etc/chef/cookbooks`; do /usr/bin/knife cookbook upload $i; done\n",
"#Upload cookbooks and roles\n",
"/usr/bin/knife cookbook upload * -c '/root/.chef/knife.rb'\n",
"/usr/bin/knife role from file /etc/chef/roles/*.rb\n",
"/bin/echo -e \"*/5 * * * * root /usr/bin/knife exec -E 'nodes.find(\\\"!roles:BaseRole\\\") { |n| puts n.run_list.add(\\\"role[BaseRole]\\\"); n.save}' -c '/root/.chef/knife.rb'\" >> /etc/crontab\n",
"/bin/echo -e \"*/5 * * * * root /usr/bin/knife exec -E 'nodes.find(\\\"env_role:master AND !roles:master\\\") { |n| puts n.run_list.add(\\\"role[master]\\\"); n.save}' -c '/root/.chef/knife.rb'\" >> /etc/crontab\n",
"/bin/echo -e \"*/5 * * * * root /usr/bin/knife exec -E 'nodes.find(\\\"env_role:slave AND !roles:slave\\\") { |n| puts n.run_list.add(\\\"role[slave]\\\"); n.save}' -c '/root/.chef/knife.rb'\" >> /etc/crontab\n",
"/opt/aws/bin/cfn-signal -e 0 -r \"ChefServer setup complete\" '", { "Ref" : "WaitHandle" }, "'\n"
]]}}
}
},
"WaitHandle" : {
"Type" : "AWS::CloudFormation::WaitConditionHandle"
},
"WaitCondition" : {
"Type" : "AWS::CloudFormation::WaitCondition",
"DependsOn" : "ChefServer",
"Properties" : {
"Handle" : {"Ref" : "WaitHandle"},
"Timeout" : "1200"
}
},
"WebServerSecurityGroup" : {
"Type" : "AWS::EC2::SecurityGroup",
"Properties" : {
"GroupDescription" : "Enable HTTP access via port 80 and SSH access",
"SecurityGroupIngress" : [
{"IpProtocol" : "tcp", "FromPort" : "80", "ToPort" : "80", "CidrIp" : "0.0.0.0/0"},
{"IpProtocol" : "tcp", "FromPort" : "8080", "ToPort" : "8080", "CidrIp" : "0.0.0.0/0"},
{"IpProtocol" : "tcp", "FromPort" : "443", "ToPort" : "443", "CidrIp" : "0.0.0.0/0"},
{"IpProtocol" : "tcp", "FromPort" : "22", "ToPort" : "22", "CidrIp" : { "Ref" : "SSHLocation"}}
]
}
}
}
}
As we can
see, the template defines a handful of parameters. Two of them
have default values: the type of the created instance (in this case
m1.small) and the range of IP addresses that will have SSH access
to the node. The remaining parameters must be supplied when creating the stack,
among them the key pair for SSH access to the nodes (created separately in the AWS
Console) and the access key and secret key (both are generated when you
create access credentials for your AWS account).
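Stack creation can also be scripted. The sketch below only composes and prints the command (stack name, file name, and parameter values are placeholders; running it for real requires valid AWS credentials):

```shell
# Build the create-stack invocation; drop the surrounding echo to run it.
CMD='aws cloudformation create-stack
  --stack-name chef-infrastructure
  --template-body file://chef-template.json
  --parameters ParameterKey=KeyName,ParameterValue=my-ssh-key
               ParameterKey=HostKeys,ParameterValue=MY_ACCESS_KEY_ID
               ParameterKey=SecretAccessKey,ParameterValue=MY_SECRET_KEY'
echo "$CMD"
```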
The mapping
describes two instance types, both with a 64-bit architecture. The AWSRegionArch2AMI
mapping maps each region to the IDs of the Amazon Linux AMIs for
that architecture (the ID information can be
obtained in the AWS Console).
Then, we
describe the resources for the Chef server and the Chef client. In both
cases, before running the commands from the User Data section, we
install the required package via the Metadata block (git for the client,
wget for the server; just in case, since in practice Amazon Linux
images already contain such packages). The resources to be created are
defined by the ImageId and InstanceType properties (in this case,
they are predefined as Amazon Linux, m1.small, and a 64-bit architecture). Then
comes the main body of the resource, User Data: a configuration
bash script that is executed step by step after our instance is initialized.
In brief, we should perform the following steps for the node that will serve as the Chef server:
· Disable iptables (to avoid possible issues with dropped packets).
· Install the open-source Chef Server.
· Replace default.rb (yep, the issue with Amazon Linux images is that they identify themselves not as RedHat, which they essentially are, but as Amazon, so the Chef server service cannot work at full capacity otherwise).
· Auto-configure and restart the server.
· Create a configuration file for the AWS CLI.
· Upload the admin.pem and chef-validator.pem files to the repository (we will need them on the client).
· Install the Chef client on the node (yep, the Chef server does not ship with its own knife).
· Get the client.pem and knife.rb files for the knife running on the server (something like the starter kit from Part I).
· Recreate the structure of the chef-repo directory where we store our cookbooks, role files, etc.
· Upload and install cookbooks on the server (EPAM cookbooks are stored in the repository, and others are provided by the community).
· Upload and install roles on the server.
· Add a scheduled task that checks the nodes every five minutes and installs the base role on each new node.
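The "upload and install cookbooks" step relies on a small unpack-and-clean loop in User Data: every community cookbook arrives as a tarball, gets extracted in place, and the archive is removed. A standalone sketch of that loop, working in a scratch directory rather than the real /etc/chef/cookbooks:

```shell
# Simulate one downloaded cookbook tarball, then run the unpack loop.
WORK=$(mktemp -d)
SRC=$(mktemp -d)
mkdir -p "$SRC/ntp"
echo 'name "ntp"' > "$SRC/ntp/metadata.rb"
tar czf "$WORK/ntp.tar.gz" -C "$SRC" ntp

# The loop from the template: extract each tarball, then delete it,
# leaving only unpacked cookbook directories ready for `knife cookbook upload`.
for i in "$WORK"/*.tar.gz; do
  tar zxf "$i" -C "$WORK"
  rm -f "$i"
done
ls "$WORK"
```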
Although
this explanation can seem muddled and confusing, an in-depth analysis of each
part of the script would be much longer. Therefore, if you have any questions,
ask away in the comments or message me directly.
Back to the template. For the node that will serve as the Chef client, we should take the following steps:
· Disable iptables (to avoid possible issues with dropped packets).
· Install the Chef client.
· Create a configuration file for the AWS CLI.
· Load the client configuration files from the repository.
· Add the Chef server address to the client's configuration (it can change when the images are relaunched, or when the stack is restarted).
· Add a scheduled task for launching the Chef client.
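The client.rb fetched from S3 is not shown in the article; since client.rb is plain Ruby, a minimal sketch of what it might contain looks like this (the server URL is a placeholder, and the paths are the ones used in the User Data script above):

```ruby
log_level        :info
chef_server_url  "https://chef.example.com"
validation_key   "/etc/chef/validation.pem"
json_attribs     "/etc/chef/json_attribs.json"
```

Note that the User Data script appends the real chef_server_url to this file at boot, so the value here is only a fallback.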
It is noteworthy
that, thanks to options such as json_attribs, we can assign the node
a tag that defines its role in the infrastructure. This can
come in handy when there are nodes with different infrastructural roles among
the Chef clients.
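The json_attribs.json file itself is not shown in the article; judging by the knife exec searches in the server's crontab (env_role:master and env_role:slave), it presumably contains something like:

```json
{
  "env_role" : "master"
}
```

The scheduled knife exec on the server then finds nodes with this attribute and adds the matching role to their run lists.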
The
following resources – WaitHandle and WaitCondition – describe the
conditions under which the creation of the stack is put on hold. Stack
creation resumes and finishes only after WaitHandle receives a signal about
successful completion within the timeout period defined in WaitCondition.
The next
resource declared – the Security Group – is a firewall for the node. Here,
we describe which ports are open and from which source addresses packets are accepted.
The last
block – Outputs – lets us retrieve some useful values
after we successfully launch the stack and the instances. For example, a domain
name for gaining access to an instance.
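The template above does not include an Outputs block; a minimal sketch that returns the server's public DNS name might look like this (the output name is illustrative, the resource name matches the template):

```json
"Outputs" : {
  "ChefServerDNS" : {
    "Description" : "Public DNS name of the Chef server",
    "Value" : { "Fn::GetAtt" : [ "ChefServer", "PublicDnsName" ] }
  }
}
```

After the stack is created, this value appears on the Outputs tab of the CloudFormation section of the console.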
In the end,
we get a universal template and the ability to deploy our neat little
infrastructure by running a single command from the AWS management console (if
you're interested in a setup with a large number of instances — use
Auto-Scaling Group). You can see the result of the launch in the CloudFormation
section.
What comes
next? You will be able to manage your nodes by means of knife,
cookbooks, and roles. You can use community cookbooks, create your own
custom cookbooks, or write wrappers around existing ones. The
possibilities are numerous, and the choice depends on the final objective.
In this
series of articles I tried to scratch the surface of the exciting subject of
automating the management of a group of machines, as well as working with AWS
cloud resources. I hope those of you who are new to DevOps will find these
articles interesting and useful.
If you have
any questions or suggestions – feel free to comment on any article or send me
direct messages. I sincerely thank everyone who found the time to read it all
through.
Until we
meet again!
Links:
AWS documentation — aws.amazon.com/documentation/
AWS CloudFormation — aws.amazon.com/documentation/cloudformation/
AWS EC2 — aws.amazon.com/documentation/ec2/
AWS Sample Templates — aws.amazon.com/cloudformation/aws-cloudformation-templates/
AWS Console — docs.aws.amazon.com/awsconsolehelpdocs/latest/gsg/getting-started.html