<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Xi Group Ltd. Company Blog &#187; template</title>
	<atom:link href="http://blog.xi-group.com/tag/template/feed/" rel="self" type="application/rss+xml" />
	<link>http://blog.xi-group.com</link>
	<description>High-quality DevOps Services</description>
	<lastBuildDate>Tue, 09 Jun 2015 11:38:46 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=4.2.2</generator>
	<item>
		<title>UserData Template for Ubuntu 14.04 EC2 Instances in AWS</title>
		<link>http://blog.xi-group.com/2015/01/userdata-teplate-for-ubuntu-14-04-ec2-instances-in-aws/</link>
		<comments>http://blog.xi-group.com/2015/01/userdata-teplate-for-ubuntu-14-04-ec2-instances-in-aws/#comments</comments>
		<pubDate>Tue, 27 Jan 2015 11:41:14 +0000</pubDate>
		<dc:creator><![CDATA[Ivo Vachkov]]></dc:creator>
				<category><![CDATA[AWS]]></category>
		<category><![CDATA[Development]]></category>
		<category><![CDATA[DevOps]]></category>
		<category><![CDATA[Operations]]></category>
		<category><![CDATA[AWS CLI]]></category>
		<category><![CDATA[template]]></category>
		<category><![CDATA[UserData]]></category>

		<guid isPermaLink="false">http://blog.xi-group.com/?p=45</guid>
		<description><![CDATA[In any elastic environment there is a recurring issue: How to quickly spin up new boxes? Over time multiple options emerge. Many environments will rely on pre-baked machine images. In Amazon AWS those are called Amazon Machine Images (AMIs), in Joyent&#8217;s SDC &#8211; images, but no matter the name they are pre-built, (mostly) pre-configured [&#8230;]]]></description>
				<content:encoded><![CDATA[<p style="text-align: justify;">In any elastic environment there is a recurring issue: How to quickly spin up new boxes? Over time multiple options emerge. Many environments will rely on pre-baked machine images. In Amazon AWS those are called Amazon Machine Images (AMIs), in Joyent&#8217;s SDC &#8211; images, but no matter the name they are pre-built, (mostly) pre-configured digital artifacts that the underlying cloud layer will bootstrap and execute. They are fast to bootstrap, but limited: it is hard to manage different versions, hard to switch virtualization technologies (PV vs. HVM, AWS vs. Joyent, etc.), and hard to deal with software versioning. Managing an elastic environment with pre-baked images is probably the fastest way to start, but probably the most expensive way in the long run.</p>
<p style="text-align: justify;">Another option is to use some sort of configuration management system. Chef, Puppet, Salt, Ansible &#8230; a lot of choices. Those are flexible, but depending on the usage scenario can be slow and may require additional &#8220;interventions&#8221; to work properly. There are two additional &#8220;gotchas&#8221; that are not commonly discussed. First, those tools will force some sort of in-house configuration/pseudo-programming language and terminology on you. Second, security is a tricky concept to implement within such a system. Managing elastic environments with configuration management systems is definitely possible, but comes with dependencies and prerequisites you should account for in the design phase.</p>
<p style="text-align: justify;">The third option, AWS UserData / Joyent script, is a reasonable compromise. This is effectively a script that executes once upon virtual machine creation. It allows you to configure the instance, attach/configure storage, install software, etc. There are obvious benefits to this approach:</p>
<ul>
<li>The script can be treated like any other coding artifact: version control, code reviews, etc.;</li>
<li>It is easily modifiable upon need or request;</li>
<li>It can be used with virtually any instance type;</li>
<li>It is a single source of truth for the instance configuration;</li>
<li>It integrates nicely with the whole Control Plane concept.</li>
</ul>
<p style="text-align: justify;">Here is a basic template for Ubuntu 14.04, used with reasonable success to cover a wide variety of deployment needs:</p>
<p></p><pre class="crayon-plain-tag">#!/bin/bash -ex

# DESCRIPTION: The following UserData script is created to ... 
# 
# Maintainer: ivachkov [at] xi-group [dot] com
# 
# Requirements:
#	OS: Ubuntu 14.04 LTS
#	Repositories: 
#		...
#	Packages:
# 		htop, iotop, dstat, ...
#	PIP Packages:
#		boto, awscli, ...
# 
# Additional information if necessary
# 	... 
# 

# Debian apt-get install function to eliminate prompts
export DEBIAN_FRONTEND=noninteractive
apt_get_install()
{
	DEBIAN_FRONTEND=noninteractive apt-get -y \
		-o DPkg::Options::=--force-confnew \
		install "$@"
}

# Configure disk layout 
INSTANCE_STORE_0="/dev/xvdb"
IS0_PART_1="/dev/xvdb1"
IS0_PART_2="/dev/xvdb2"

# INSTANCE_STORE_1="/dev/xvdc"
# IS1_PART_1="/dev/xvdc1"
# IS1_PART_2="/dev/xvdc2"

# ... 

# Unmount /dev/xvdb if already mounted
MOUNTED=`df -h | awk '{print $1}' | grep $INSTANCE_STORE_0`
if [ ! -z "$MOUNTED" ]; then
	umount -f $INSTANCE_STORE_0
fi

# Partition the disk (8GB for SWAP / Rest for /mnt)
(echo n; echo p; echo 1; echo 2048; echo +8G; echo t; echo 82; echo n; echo p; echo 2; echo; echo; echo w) | fdisk $INSTANCE_STORE_0

# Make and enable swap
mkswap $IS0_PART_1
swapon $IS0_PART_1

# Make /mnt partition and mount it
mkfs.ext4 $IS0_PART_2
mount $IS0_PART_2 /mnt

# Update /etc/fstab if necessary 
# sed -i s/$INSTANCE_STORE_0/$IS0_PART_2/g /etc/fstab

# Add external repositories
# 
# Example 1: MongoDB
# apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10
# echo 'deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen' | sudo tee /etc/apt/sources.list.d/mongodb.list
# 
# Example 2: Salt
# add-apt-repository ppa:saltstack/salt
# 
# Example 3: *Internal repository*
# curl --silent https://apt.mydomain.com/my.apt.gpg.key | apt-key add -
# curl --silent -o /etc/apt/sources.list.d/my.apt.list https://apt.mydomain.com/my.apt.list

# Update the package indexes
apt-get update && DEBIAN_FRONTEND=noninteractive apt-get -y -o Dpkg::Options::="--force-confnew" dist-upgrade

# Install basic APT packages and requirements
apt_get_install htop sysstat dstat iotop
# apt_get_install ... 
apt_get_install python-pip
apt_get_install ntp
# apt_get_install ... 

# Install PIP requirements
pip install six==1.8.0
pip install boto
pip install awscli
# pip install ... 

# Configure NTP
service ntp stop		# Stop ntp daemon to free NTP socket
sleep 3				# Give the daemon some time to exit
ntpdate pool.ntp.org		# Sync time
service ntp start		# Re-enable the NTP daemon

# Configure other system-specific settings ... 

# Configure automatic security updates
cat > /etc/apt/apt.conf.d/20auto-upgrades << "EOF"
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
EOF
/etc/init.d/unattended-upgrades restart

# Update system limits
cat > /etc/security/limits.d/my_limits.conf << "EOF"
*               soft    nofile          999999
*               hard    nofile          999999
root            soft    nofile          999999
root            hard    nofile          999999
EOF
ulimit -n 999999

# Update sysctl variables
cat > /etc/sysctl.d/my_sysctl.conf << "EOF"
net.core.somaxconn=65535
net.core.netdev_max_backlog=65535
# net.core.rmem_max=8388608
# net.core.wmem_max=8388608
# net.core.rmem_default=65536
# net.core.wmem_default=65536
# net.ipv4.tcp_rmem=8192 873800 8388608
# net.ipv4.tcp_wmem=4096 655360 8388608
# net.ipv4.tcp_mem=8388608 8388608 8388608
# net.ipv4.tcp_max_tw_buckets=6000000
# net.ipv4.tcp_max_syn_backlog=65536
# net.ipv4.tcp_max_orphans=262144
# net.ipv4.tcp_synack_retries = 2
# net.ipv4.tcp_syn_retries = 2
# net.ipv4.tcp_fin_timeout = 7
# net.ipv4.tcp_slow_start_after_idle = 0
# net.ipv4.ip_local_port_range = 2000 65000
# net.ipv4.tcp_window_scaling = 1
# net.ipv4.tcp_max_syn_backlog = 3240000
# net.ipv4.tcp_congestion_control = cubic
EOF
sysctl -p /etc/sysctl.d/my_sysctl.conf

# Create specific users and groups 
# addgroup ...
# useradd ... 
# usermod ...

# Create expected set of directories
DIRECTORIES="
	/var/log/...
	/run/...
	/srv/... 
	/opt/...
	"

for DIRECTORY in $DIRECTORIES; do
	mkdir -p $DIRECTORY
	chown USER:GROUP $DIRECTORY	
done

# Create custom_crontab
cat > /home/ubuntu/custom_crontab << "EOF"

EOF

# Enable custom cronjobs
su - ubuntu -c "/usr/bin/crontab /home/ubuntu/custom_crontab"

# Install main application / service 
# ...
# ... 

# Configure main application / service
# ... 
# ... 

# Make everything survive reboot
cat > /etc/rc.local << "EOF"
#!/bin/sh

# Regenerate disk layout on ephemeral storage 
# ... 

# Start the application 
# ... 

EOF

# Start application
# service XXX restart 

# Tag the instance (NOTE: Depends on a configured AWS CLI)
INSTANCE_ID=`curl -s http://169.254.169.254/latest/meta-data/instance-id`
# aws ec2 create-tags --resources $INSTANCE_ID --tags Key=Name,Value=... 

# Mark successful execution
exit 0</pre><p></p>
<p style="text-align: justify;">Trivial. Yet it incorporates a lot in just ~200 lines of code:</p>
<ol>
<li>Disk layout management;</li>
<li>Package repositories configuration;</li>
<li>Basic tool set and third party software installation;</li>
<li>Service reconfiguration (NTP, Automatic security updates);</li>
<li>System reconfiguration (limits, sysctl, users, directories, crontab);</li>
<li>Post-reboot startup configuration;</li>
<li>Identity discovery and self-tagging.</li>
</ol>
<p style="text-align: justify;">As an added bonus, the <strong>cloud-init</strong> package will log all output during script execution to <strong>/var/log/cloud-init-output.log</strong> for failure investigations. The current script uses the <strong>-ex</strong> bash parameters, which means it will explicitly echo all executed commands (<strong>-x</strong>) and exit at the first sign of an unsuccessful command (<strong>-e</strong>).</p>
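The effect of those two flags can be seen in a tiny standalone sketch (the commands below are illustrative placeholders, not part of the template):

```shell
#!/bin/bash -ex
# -x traces every command to stderr before it runs;
# -e aborts the script at the first command that returns non-zero

echo "partitioning disks"        # traced and executed
false                            # simulated failure: -e stops the script here
echo "this line never executes"  # unreachable because of -e
```

When such a script runs as UserData, both the trace and the abort point end up in /var/log/cloud-init-output.log, so the first failing command is easy to spot.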
<p style="text-align: justify;">NOTE: There is one important component purposefully omitted from the template UserData: log file management. We plan on discussing that in a separate article.</p>
<p>References</p>
<ul>
<li><a href="http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html">http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html</a></li>
<li><a href="http://wiki.joyent.com/wiki/display/sdc/Using+the+Metadata+API">http://wiki.joyent.com/wiki/display/sdc/Using+the+Metadata+API</a></li>
</ul>
<div class="rpbt_shortcode">
<h3>Related Posts</h3>
<ul>
					
			<li><a href="http://blog.xi-group.com/2015/02/how-to-deploy-single-node-hadoop-setup-in-aws/">How to deploy single-node Hadoop setup in AWS</a></li>
					
			<li><a href="http://blog.xi-group.com/2014/11/small-tip-how-to-use-block-device-mappings-to-manage-instance-volumes-with-aws-cli/">Small Tip: How to use &#8211;block-device-mappings to manage instance volumes with AWS CLI</a></li>
					
			<li><a href="http://blog.xi-group.com/2014/07/how-to-implement-multi-cloud-deployment-for-scalability-and-reliability/">How to implement multi-cloud deployment for scalability and reliability</a></li>
					
			<li><a href="http://blog.xi-group.com/2014/07/small-tip-how-to-use-aws-cli-to-start-spot-instances-with-userdata/">Small Tip: How to use AWS CLI to start Spot instances with UserData</a></li>
					
			<li><a href="http://blog.xi-group.com/2015/01/small-tip-how-to-use-aws-cli-filter-parameter/">Small Tip: How to use AWS CLI &#8216;&#8211;filter&#8217; parameter</a></li>
			</ul>
</div>
]]></content:encoded>
			<wfw:commentRss>http://blog.xi-group.com/2015/01/userdata-teplate-for-ubuntu-14-04-ec2-instances-in-aws/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>DevOps Shell Script Template</title>
		<link>http://blog.xi-group.com/2014/07/devops-shell-script-template/</link>
		<comments>http://blog.xi-group.com/2014/07/devops-shell-script-template/#comments</comments>
		<pubDate>Thu, 03 Jul 2014 15:49:21 +0000</pubDate>
		<dc:creator><![CDATA[Ivo Vachkov]]></dc:creator>
				<category><![CDATA[Development]]></category>
		<category><![CDATA[DevOps]]></category>
		<category><![CDATA[Operations]]></category>
		<category><![CDATA[cloud]]></category>
		<category><![CDATA[heterogenous systems]]></category>
		<category><![CDATA[linux]]></category>
		<category><![CDATA[shell script]]></category>
		<category><![CDATA[template]]></category>

		<guid isPermaLink="false">http://blog.xi-group.com/?p=51</guid>
		<description><![CDATA[In the everyday life of a DevOps engineer you will have to create many pieces of code. Some of those will be run once, others &#8230; well, others will live forever. Although it may be tempting to just put all the commands in a text editor, save the result and execute it, one should always consider [&#8230;]]]></description>
				<content:encoded><![CDATA[<p style="text-align: justify;">In the everyday life of a DevOps engineer you will have to create many pieces of code. Some of those will be run once, others &#8230; well, others will live forever. Although it may be tempting to just put all the commands in a text editor, save the result and execute it, one should always consider the &#8220;bigger picture&#8221;. What will happen if your script is run on another OS, on another Linux distribution, or even on a different version of the same Linux distribution? What will happen if your neat 10-line script has to be executed on, say, 500 servers? Can you be sure that all the commands will run successfully there? Can you be sure that all the commands will even be present? Usually &#8230; no!</p>
<p style="text-align: justify;">Faced with similar problems on a daily basis, we started devising simple solutions and practices to address them. One of those is the process of standardizing the way different utilities behave, the way they take arguments and report errors. Upon further investigation it became clear that a pattern could be extracted and synthesized into a series of templates one can use in daily work to keep behavior consistent across different utilities and components.</p>
<p style="text-align: justify;">Here is the basic template used in shell scripts:</p>
<p></p><pre class="crayon-plain-tag">#!/bin/sh
#
# DESCRIPTION: ... Include functional description ...
#
# Requirements:
#	awk
#	... 
#	uname
#
# Example usage:
#	$ template.sh -h 
#	$ template.sh -p ARG1 -q ARG2
#

RET_CODE_OK=0
RET_CODE_ERROR=1

# Help / Usage function
print_help() {
	echo "$0: Functional description of the utility"
	echo ""
	echo "$0: Usage"
	echo "    [-h] Print help"
	echo "    [-p] (MANDATORY) First argument"
	echo "    [-q] (OPTIONAL) Second argument"
	# NOTE: exit code is supplied by the caller, so -h can exit 0
}

# Check for supported operating system
p_uname=`whereis uname | cut -d' ' -f2`
if [ ! -x "$p_uname" ]; then
	echo "$0: No UNAME available in the system"
	exit $RET_CODE_ERROR;
fi
OS=`$p_uname`
if [ "$OS" != "Linux" ]; then
	echo "$0: Unsupported OS!";
	exit $RET_CODE_ERROR;
fi

# Check if awk is available in the system
p_awk=`whereis awk | cut -d' ' -f2`
if [ ! -x "$p_awk" ]; then
	echo "$0: No AWK available in the system!";
	exit $RET_CODE_ERROR;
fi

# Check for other used local utilities
#	bc
#	curl
#	grep 
#	etc ...

# Parse command line arguments
while test -n "$1"; do
	case "$1" in
	--help|-h)
		print_help
		exit 0
		;;
	-p)
		P_ARG=$2
		shift
		;;
	-q)
		Q_ARG=$2
		shift
		;;
	*)
		echo "$0: Unknown Argument: $1"
		print_help
		exit $RET_CODE_ERROR;
		;;
	esac
	
	shift
done

# Check if mandatory argument is present?
if [ -z "$P_ARG" ]; then
	echo "$0: Required parameter not specified!"
	print_help
	exit $RET_CODE_ERROR;
fi

# ... 

# Check if optional argument is present and if not, initialize!
if [ -z "$Q_ARG" ]; then
	Q_ARG="0";
fi

# ... 

# DO THE ACTUAL WORK HERE 

exit $RET_CODE_OK;</pre><p></p>
<p style="text-align: justify;">Nothing fancy. Basic framework that does the following:</p>
<ol>
<li><strong>Lines 3 &#8211; 13</strong>: Make sure basic documentation, dependency list and example usage patterns are provided with the script itself;</li>
<li><strong>Lines 15 &#8211; 16</strong>: Define meaningful return codes to allow other utils to identify possible execution problems and react accordingly;</li>
<li><strong>Lines 18 &#8211; 27</strong>: Basic help/usage() function to provide the user with short guidance on how to use the script; </li>
<li><strong>Lines 29 &#8211; 52</strong>: Dependency checks to make sure all utilities the script needs are available and executable in the system;</li>
<li><strong>Lines 54 &#8211; 77</strong>: Argument parsing of everything passed on the command line that supports both short and long argument names;</li>
<li><strong>Lines 79 &#8211; 91</strong>: Validity checks that make sure the arguments carry contextually correct values;</li>
<li><strong>Lines 95 &#8211; N</strong>: Actual programming logic to be implemented &#8230; </li>
</ol>
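The argument-parsing loop (lines 54 &#8211; 77) is the part most often reused on its own. A minimal standalone sketch of the same while/case/shift pattern (the function and option names here are illustrative, not from the template):

```shell
#!/bin/sh
# Minimal version of the template's while/case/shift argument parser

parse_args() {
	P_ARG=""; Q_ARG=""
	while test -n "$1"; do
		case "$1" in
		-p) P_ARG=$2; shift ;;                        # valued option: consume its value
		-q) Q_ARG=$2; shift ;;                        # valued option: consume its value
		*)  echo "Unknown argument: $1" >&2; return 1 ;;
		esac
		shift                                         # consume the option itself
	done
	echo "P=$P_ARG Q=$Q_ARG"
}

parse_args -p foo -q bar    # prints: P=foo Q=bar
```

Each valued option branch consumes its value with one shift, and the trailing shift consumes the option itself, so the loop advances two positions per valued argument.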
<p style="text-align: justify;">This template is successfully used in various scenarios: command line utilities, Nagios plugins, startup/shutdown scripts, UserData scripts, daemons implemented in shell script with the help of <strong>start-stop-daemon</strong>, etc. It also allows deployment on multiple operating systems and distribution versions. The resulting utilities and system components are <strong>more resilient</strong>, include <strong>better documentation and dependency sections</strong>, and provide the user with a <strong>similar and intuitive way to get help or pass arguments</strong>. Error handling is functional enough to go beyond a simple <strong>OK</strong> / <strong>ERROR</strong> state. All of those are important features when components must run in highly heterogeneous environments such as most cloud deployments!</p>
<div class="rpbt_shortcode">
<h3>Related Posts</h3>
<ul>
					
			<li><a href="http://blog.xi-group.com/2015/01/userdata-teplate-for-ubuntu-14-04-ec2-instances-in-aws/">UserData Template for Ubuntu 14.04 EC2 Instances in AWS</a></li>
					
			<li><a href="http://blog.xi-group.com/2014/07/how-to-implement-multi-cloud-deployment-for-scalability-and-reliability/">How to implement multi-cloud deployment for scalability and reliability</a></li>
					
			<li><a href="http://blog.xi-group.com/2014/06/small-tip-how-to-run-non-deamon-ized-processes-in-the-background-with-supervisord/">Small Tip: How to run non-deamon()-ized processes in the background with SupervisorD</a></li>
					
			<li><a href="http://blog.xi-group.com/2015/02/how-to-deploy-single-node-hadoop-setup-in-aws/">How to deploy single-node Hadoop setup in AWS</a></li>
					
			<li><a href="http://blog.xi-group.com/2014/11/small-tip-how-to-use-block-device-mappings-to-manage-instance-volumes-with-aws-cli/">Small Tip: How to use &#8211;block-device-mappings to manage instance volumes with AWS CLI</a></li>
			</ul>
</div>
]]></content:encoded>
			<wfw:commentRss>http://blog.xi-group.com/2014/07/devops-shell-script-template/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>
