<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Xi Group Ltd. Company Blog &#187; AWS CLI</title>
	<atom:link href="http://blog.xi-group.com/tag/awscli/feed/" rel="self" type="application/rss+xml" />
	<link>http://blog.xi-group.com</link>
	<description>High-quality DevOps Services</description>
	<lastBuildDate>Tue, 09 Jun 2015 11:38:46 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=4.2.2</generator>
	<item>
		<title>How to deploy single-node Hadoop setup in AWS</title>
		<link>http://blog.xi-group.com/2015/02/how-to-deploy-single-node-hadoop-setup-in-aws/</link>
		<comments>http://blog.xi-group.com/2015/02/how-to-deploy-single-node-hadoop-setup-in-aws/#comments</comments>
		<pubDate>Wed, 04 Feb 2015 08:19:25 +0000</pubDate>
		<dc:creator><![CDATA[Ivo Vachkov]]></dc:creator>
				<category><![CDATA[AWS]]></category>
		<category><![CDATA[Big Data]]></category>
		<category><![CDATA[Development]]></category>
		<category><![CDATA[DevOps]]></category>
		<category><![CDATA[Operations]]></category>
		<category><![CDATA[AWS CLI]]></category>
		<category><![CDATA[big data]]></category>
		<category><![CDATA[hadoop]]></category>
		<category><![CDATA[map reduce]]></category>
		<category><![CDATA[single-node hadoop]]></category>

		<guid isPermaLink="false">http://blog.xi-group.com/?p=14</guid>
		<description><![CDATA[A common issue in the Software Development Lifecycle is the need to quickly bootstrap a vanilla environment, deploy some code onto it, run it and then scrap it. This is a core concept in Continuous Integration / Continuous Delivery (CI/CD) and a stepping stone towards immutable infrastructure. A properly automated implementation can also save time (no need [&#8230;]]]></description>
				<content:encoded><![CDATA[<p style="text-align: justify;">A common issue in the Software Development Lifecycle is the need to quickly bootstrap a vanilla environment, deploy some code onto it, run it and then scrap it. This is a core concept in Continuous Integration / Continuous Delivery (CI/CD) and a stepping stone towards immutable infrastructure. A properly automated implementation can also save <strong>time</strong> (no need to configure environments manually) and <strong>money</strong> (no need to track potential regression issues in the development process).</p>
<p style="text-align: justify;">Over the course of several years, we have found this to be extremely useful in BigData projects that use <a href="http://hadoop.apache.org" target="_blank">Hadoop</a>. Installing Hadoop is not always straightforward: it depends on various internal and external components (JDK, Map-Reduce framework, HDFS, etc.) and can be messy. Different components communicate over various ports and protocols, and HDFS uses somewhat clumsy semantics to deal with files and directories. For these and similar reasons <a href="http://www.xi-group.com/" target="_blank">we</a> decided to present our take on a single-node Hadoop installation for development purposes.</p>
<p style="text-align: justify;">The following shell script is a simplified but fully functional skeleton implementation that will install Hadoop on a <strong>c3.xlarge</strong> <a href="https://getfedora.org" target="_blank">Fedora</a> 20 node in AWS and run a test job on it:</p>
<p></p><pre class="crayon-plain-tag">#!/bin/bash

# Key file to be generated and its filesystem location
KEY_NAME="test-hadoop-key"
KEY_FILE="/tmp/$KEY_NAME"

# Security group name and description
SG_NAME="test-hadoop-sg"
SG_DESC="Test Hadoop Security Group"

# Temporary files; General Log and Instance User data
LOG_FILE="/tmp/test-hadoop-setup.log"
USR_DATA="/tmp/test-hadoop-userdata.sh"

# Instance details
AWS_PROFILE="$$profile$$"
AWS_REGION="us-east-1"
AMI_ID="ami-21362b48"
INST_TAG="test-hadoop-single"
INST_TYPE="c3.xlarge"
DISK_SIZE="20"

# Default return codes
RET_CODE_OK=0
RET_CODE_ERROR=1

# Check for various utilities that will be used 

# Check for supported operating system
P_UNAME=`whereis uname | cut -d' ' -f2`
if [ ! -x "$P_UNAME" ]; then
	echo "$0: No UNAME available in the system"
	exit $RET_CODE_ERROR;
fi
OS=`$P_UNAME`
if [ "$OS" != "Linux" ]; then
	echo "$0: Unsupported OS!";
	exit $RET_CODE_ERROR;
fi

# Check if awscli is available in the system
P_AWS=`whereis aws | cut -d' ' -f2`
if [ ! -x "$P_AWS" ]; then
	echo "$0: No 'aws' available in the system!";
	exit $RET_CODE_ERROR;
fi

# Check if awk is available in the system
P_AWK=`whereis awk | cut -d' ' -f2`
if [ ! -x "$P_AWK" ]; then
	echo "$0: No 'awk' available in the system!";
	exit $RET_CODE_ERROR;
fi

# Check if grep is available in the system
P_GREP=`whereis grep | cut -d' ' -f2`
if [ ! -x "$P_GREP" ]; then
	echo "$0: No 'grep' available in the system!";
	exit $RET_CODE_ERROR;
fi

# Check if sed is available in the system
P_SED=`whereis sed | cut -d' ' -f2`
if [ ! -x "$P_SED" ]; then
	echo "$0: No 'sed' available in the system!";
	exit $RET_CODE_ERROR;
fi

# Check if ssh is available in the system
P_SSH=`whereis ssh | cut -d' ' -f2`
if [ ! -x "$P_SSH" ]; then
	echo "$0: No 'ssh' available in the system!";
	exit $RET_CODE_ERROR;
fi

# Check if ssh-keygen is available in the system
P_SSH_KEYGEN=`whereis ssh-keygen | cut -d' ' -f2`
if [ ! -x "$P_SSH_KEYGEN" ]; then
	echo "$0: No 'ssh-keygen' available in the system!";
	exit $RET_CODE_ERROR;
fi

# Userdata code to bootstrap Hadoop 2.X on Fedora 20 instance
cat > $USR_DATA << "EOF"
#!/bin/bash

# Mark execution start
echo "START" > /root/userdata.state

# Install Hadoop
yum --assumeyes install hadoop-common hadoop-common-native hadoop-hdfs hadoop-mapreduce hadoop-mapreduce-examples hadoop-yarn

# Configure HDFS
hdfs-create-dirs

# Bootstrap Hadoop services
systemctl start hadoop-namenode && sleep 2
systemctl start hadoop-datanode && sleep 2
systemctl start hadoop-nodemanager && sleep 2
systemctl start hadoop-resourcemanager && sleep 2

# Make Hadoop services start after reboot
systemctl enable hadoop-namenode hadoop-datanode hadoop-nodemanager hadoop-resourcemanager

# Configure Hadoop user
runuser hdfs -s /bin/bash /bin/bash -c "hadoop fs -mkdir /user/fedora"
runuser hdfs -s /bin/bash /bin/bash -c "hadoop fs -chown fedora /user/fedora"

# Deploy additional software dependencies
# ... 

# Deploy main application 
# ... 

# Mark execution end
echo "DONE" > /root/userdata.state
EOF

# Create Security Group
echo -n "Creating '$SG_NAME' security group ... "
aws ec2 create-security-group --group-name $SG_NAME --description "$SG_DESC" --region $AWS_REGION --profile $AWS_PROFILE > $LOG_FILE
echo "Done."

# Add open SSH access
echo -n "Adding access rules to '$SG_NAME' security group ... "
aws ec2 authorize-security-group-ingress --group-name $SG_NAME --protocol tcp --port 22 --cidr 0.0.0.0/0 --region $AWS_REGION --profile $AWS_PROFILE >> $LOG_FILE

# Add open Hadoop ports access
aws ec2 authorize-security-group-ingress --group-name $SG_NAME --protocol tcp --port 8088 --cidr 0.0.0.0/0 --region $AWS_REGION --profile $AWS_PROFILE >> $LOG_FILE
aws ec2 authorize-security-group-ingress --group-name $SG_NAME --protocol tcp --port 50010 --cidr 0.0.0.0/0 --region $AWS_REGION --profile $AWS_PROFILE >> $LOG_FILE
aws ec2 authorize-security-group-ingress --group-name $SG_NAME --protocol tcp --port 50020 --cidr 0.0.0.0/0 --region $AWS_REGION --profile $AWS_PROFILE >> $LOG_FILE
aws ec2 authorize-security-group-ingress --group-name $SG_NAME --protocol tcp --port 50030 --cidr 0.0.0.0/0 --region $AWS_REGION --profile $AWS_PROFILE >> $LOG_FILE
aws ec2 authorize-security-group-ingress --group-name $SG_NAME --protocol tcp --port 50070 --cidr 0.0.0.0/0 --region $AWS_REGION --profile $AWS_PROFILE >> $LOG_FILE
aws ec2 authorize-security-group-ingress --group-name $SG_NAME --protocol tcp --port 50075 --cidr 0.0.0.0/0 --region $AWS_REGION --profile $AWS_PROFILE >> $LOG_FILE
aws ec2 authorize-security-group-ingress --group-name $SG_NAME --protocol tcp --port 50090 --cidr 0.0.0.0/0 --region $AWS_REGION --profile $AWS_PROFILE >> $LOG_FILE
echo "Done."

# Generate New Key Pair and Import it
echo -n "Generating key pair '$KEY_NAME' for general access ... "
rm -rf $KEY_FILE $KEY_FILE.pub
ssh-keygen -t rsa -f $KEY_FILE -N '' >> $LOG_FILE
aws ec2 import-key-pair --key-name $KEY_NAME --public-key-material "`cat $KEY_FILE.pub`" --region $AWS_REGION --profile $AWS_PROFILE >> $LOG_FILE
echo "Done."

# Build the Hadoop box
echo -n "Starting Hadoop instance ... "
RI_OUT=`aws ec2 run-instances --image-id $AMI_ID --count 1 --instance-type $INST_TYPE --key-name $KEY_NAME --security-groups $SG_NAME --user-data file://$USR_DATA --block-device-mappings "[{\"DeviceName\":\"/dev/sda1\", \"Ebs\":{\"VolumeSize\":$DISK_SIZE, \"DeleteOnTermination\": true} } ]" --region $AWS_REGION --profile $AWS_PROFILE`
I_ID=`echo $RI_OUT | grep -o '"InstanceId": "[^"]*"' | head -1 | cut -d'"' -f4`
echo $RI_OUT >> $LOG_FILE
echo "Done."

# Tag the Hadoop box
echo -n "Tagging Hadoop instance '$I_ID' ... "
aws ec2 create-tags --resources $I_ID --tags Key=Name,Value=$INST_TAG --region $AWS_REGION --profile $AWS_PROFILE >> $LOG_FILE
echo "Done."

# Obtain instance public hostname
echo -n "Obtaining instance '$I_ID' public hostname ... "

# Delays in AWS fabric, reiterate until public hostname is assigned ...
while true; do
	sleep 3

	HOST=`aws ec2 describe-instances --instance-ids $I_ID --query "Reservations[0].Instances[0].PublicDnsName" --output text --region $AWS_REGION --profile $AWS_PROFILE`;
	if [[ $HOST == ec2* ]]; then
		break;
	fi
done
echo "Done."

# Poll until system is ready
echo -n "Waiting for instance '$I_ID' to configure itself (will take approx. 5 minutes) ... "
while true; do
	sleep 5;

	TEMP_OUT=`ssh -q -o "StrictHostKeyChecking=no" -i $KEY_FILE -t fedora@$HOST "sudo cat /root/userdata.state"`;

	# Strip trailing CR/LF added by the remote pseudo-terminal (ssh -t)
	STATE=`echo $TEMP_OUT | cut -c1-4`;

	if [ "$STATE" = "DONE" ]; then
		break;
	fi
done
echo "Done."

# Test Hadoop setup
echo "========== Testing Single-node Hadoop =========="
ssh -q -o "StrictHostKeyChecking=no" -i $KEY_FILE fedora@$HOST "hadoop jar /usr/share/java/hadoop/hadoop-mapreduce-examples.jar pi 10 1000000"
echo "========== Done =========="

# Run main Application here
# echo "========== Testing Main Application Single-node Hadoop =========="
# ssh -q -o "StrictHostKeyChecking=no" -i $KEY_FILE fedora@$HOST "hadoop jar ..."
# echo "========== Done =========="

# Terminate instance
echo -n "Terminating Hadoop instance '$I_ID' ... "
aws ec2 terminate-instances --instance-ids $I_ID --region $AWS_REGION --profile $AWS_PROFILE >> $LOG_FILE

# Poll until instance is terminated
while true; do
	sleep 5;

	TERMINATED=`aws ec2 describe-instances --instance-ids $I_ID --region $AWS_REGION --profile $AWS_PROFILE | grep terminated`;
	if [ ! -z "$TERMINATED" ]; then
		break;
	fi
done
echo "Done."

# Remove SSH Keypair
echo -n "Removing key pair '$KEY_NAME' ... "
aws ec2 delete-key-pair --key-name $KEY_NAME --region $AWS_REGION --profile $AWS_PROFILE >> $LOG_FILE
echo "Done."

# Remove Security Group
echo -n "Removing '$SG_NAME' security group ... "
aws ec2 delete-security-group --group-name $SG_NAME --region $AWS_REGION --profile $AWS_PROFILE >> $LOG_FILE
echo "Done."

# Remove local resources
rm -rf $USR_DATA
rm -rf $KEY_FILE $KEY_FILE.pub
rm -rf $LOG_FILE

# Normal termination
exit $RET_CODE_OK</pre><p></p>
<p style="text-align: justify;">Additional notes:</p>
<ul>
<li>Please edit the <strong>AWS_PROFILE</strong> variable; the AWS CLI commands depend on it!</li>
<li>The activity log is kept in <strong>/tmp/test-hadoop-setup.log</strong> and is recreated with every run of the script.</li>
<li>In case of normal execution, all allocated resources are cleaned up upon termination.</li>
<li>The script is ready to be used as a Jenkins build-and-deploy job.</li>
<li>Since the single-node Hadoop/HDFS instance is terminated at the end of the run, any output data written to HDFS should be transferred out of the instance before termination!</li>
</ul>
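<p style="text-align: justify;">The last note above can be sketched as a small helper. This is an illustrative sketch, not part of the original script: the function name, the HDFS path and the staging directory are hypothetical, while <strong>KEY_FILE</strong> and <strong>HOST</strong> are the variables used by the script:</p>

```shell
#!/bin/bash

# Illustrative sketch (not part of the original script): pull job output
# stored in HDFS off the instance before it is terminated. KEY_FILE and
# HOST are the variables from the main script; the HDFS path and staging
# directory below are hypothetical examples.
fetch_hdfs_output() {
	HDFS_DIR="$1"		# e.g. /user/fedora/output
	LOCAL_DIR="$2"		# e.g. /tmp/hadoop-results

	# Stage the HDFS directory onto the instance's local filesystem ...
	ssh -q -o "StrictHostKeyChecking=no" -i $KEY_FILE fedora@$HOST \
		"hadoop fs -get $HDFS_DIR /tmp/hdfs-output"

	# ... then copy it to the machine running this script
	scp -q -o "StrictHostKeyChecking=no" -i $KEY_FILE -r \
		fedora@$HOST:/tmp/hdfs-output $LOCAL_DIR
}

# Example call, placed right before 'aws ec2 terminate-instances':
# fetch_hdfs_output "/user/fedora/output" "/tmp/hadoop-results"
```

A helper like this would be called after the test job finishes and before the terminate step, so results survive the scrapping of the instance.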
<p style="text-align: justify;">An example run looks like this:</p>
<p></p><pre class="crayon-plain-tag">:~> ./aws-hadoop-single.sh
Creating 'test-hadoop-sg' security group ... Done.
Adding access rules to 'test-hadoop-sg' security group ... Done.
Generating key pair 'test-hadoop-key' for general access ... Done.
Starting Hadoop instance ... Done.
Tagging Hadoop instance 'i-b3b27f5c' ... Done.
Obtaining instance 'i-b3b27f5c' public hostname ... Done.
Waiting for instance 'i-b3b27f5c' to configure itself (will take approx. 5 minutes) ... Done.
========== Testing Single-node Hadoop ==========
Number of Maps  = 10
Samples per Map = 1000000
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Wrote input for Map #5
Wrote input for Map #6
Wrote input for Map #7
Wrote input for Map #8
Wrote input for Map #9
Starting Job
15/02/04 07:27:05 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
15/02/04 07:27:05 INFO input.FileInputFormat: Total input paths to process : 10
15/02/04 07:27:05 INFO mapreduce.JobSubmitter: number of splits:10
15/02/04 07:27:05 INFO Configuration.deprecation: user.name is deprecated. Instead, use mapreduce.job.user.name
15/02/04 07:27:05 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
15/02/04 07:27:05 INFO Configuration.deprecation: mapred.map.tasks.speculative.execution is deprecated. Instead, use mapreduce.map.speculative
15/02/04 07:27:05 INFO Configuration.deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
15/02/04 07:27:05 INFO Configuration.deprecation: mapred.output.value.class is deprecated. Instead, use mapreduce.job.output.value.class
15/02/04 07:27:05 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative
15/02/04 07:27:05 INFO Configuration.deprecation: mapreduce.map.class is deprecated. Instead, use mapreduce.job.map.class
15/02/04 07:27:05 INFO Configuration.deprecation: mapred.job.name is deprecated. Instead, use mapreduce.job.name
15/02/04 07:27:05 INFO Configuration.deprecation: mapreduce.reduce.class is deprecated. Instead, use mapreduce.job.reduce.class
15/02/04 07:27:05 INFO Configuration.deprecation: mapreduce.inputformat.class is deprecated. Instead, use mapreduce.job.inputformat.class
15/02/04 07:27:05 INFO Configuration.deprecation: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
15/02/04 07:27:05 INFO Configuration.deprecation: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
15/02/04 07:27:05 INFO Configuration.deprecation: mapreduce.outputformat.class is deprecated. Instead, use mapreduce.job.outputformat.class
15/02/04 07:27:05 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
15/02/04 07:27:05 INFO Configuration.deprecation: mapred.output.key.class is deprecated. Instead, use mapreduce.job.output.key.class
15/02/04 07:27:05 INFO Configuration.deprecation: mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir
15/02/04 07:27:05 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1423034805647_0001
15/02/04 07:27:05 INFO impl.YarnClientImpl: Submitted application application_1423034805647_0001 to ResourceManager at /0.0.0.0:8032
15/02/04 07:27:05 INFO mapreduce.Job: The url to track the job: http://ip-10-63-188-40:8088/proxy/application_1423034805647_0001/
15/02/04 07:27:05 INFO mapreduce.Job: Running job: job_1423034805647_0001
15/02/04 07:27:11 INFO mapreduce.Job: Job job_1423034805647_0001 running in uber mode : false
15/02/04 07:27:11 INFO mapreduce.Job:  map 0% reduce 0%
15/02/04 07:27:24 INFO mapreduce.Job:  map 60% reduce 0%
15/02/04 07:27:33 INFO mapreduce.Job:  map 100% reduce 0%
15/02/04 07:27:34 INFO mapreduce.Job:  map 100% reduce 100%
15/02/04 07:27:34 INFO mapreduce.Job: Job job_1423034805647_0001 completed successfully
Job Finished in 29.302 seconds
15/02/04 07:27:34 INFO mapreduce.Job: Counters: 43
        File System Counters
                FILE: Number of bytes read=226
                FILE: Number of bytes written=882378
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=2660
                HDFS: Number of bytes written=215
                HDFS: Number of read operations=43
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=3
        Job Counters
                Launched map tasks=10
                Launched reduce tasks=1
                Data-local map tasks=10
                Total time spent by all maps in occupied slots (ms)=93289
                Total time spent by all reduces in occupied slots (ms)=7055
        Map-Reduce Framework
                Map input records=10
                Map output records=20
                Map output bytes=180
                Map output materialized bytes=280
                Input split bytes=1480
                Combine input records=0
                Combine output records=0
                Reduce input groups=2
                Reduce shuffle bytes=280
                Reduce input records=20
                Reduce output records=0
                Spilled Records=40
                Shuffled Maps =10
                Failed Shuffles=0
                Merged Map outputs=10
                GC time elapsed (ms)=1561
                CPU time spent (ms)=7210
                Physical memory (bytes) snapshot=2750681088
                Virtual memory (bytes) snapshot=11076927488
                Total committed heap usage (bytes)=2197291008
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters
                Bytes Read=1180
        File Output Format Counters
                Bytes Written=97
Estimated value of Pi is 3.14158440000000000000
========== Done ==========
Terminating Hadoop instance 'i-b3b27f5c' ... Done.
Removing key pair 'test-hadoop-key' ... Done.
Removing 'test-hadoop-sg' security group ... Done.
:~></pre><p></p>
<p style="text-align: justify;">Hopefully, this short introduction will advance your efforts to automate development tasks in BigData projects!</p>
<p style="text-align: justify;">If you want to discuss more complex scenarios including automated deployments over multi-node Hadoop clusters, <a href="http://aws.amazon.com/elasticmapreduce/">AWS Elastic MapReduce</a>, <a href="http://aws.amazon.com/datapipeline/">AWS DataPipeline</a> or other components of the <a href="http://www.slideshare.net/ivachkov/big-data-ecosystem-39249871">BigData ecosystem</a>, do not hesitate to <a href="http://blog.xi-group.com/contact-us/">Contact Us</a>!</p>
<p>References</p>
<ul>
<li><a href="http://hadoop.apache.org/">Apache Hadoop</a></li>
<li><a href="http://aws.amazon.com/cli/">AWS Command Line Interface</a></li>
<li><a href="http://www.slideshare.net/ivachkov/big-data-ecosystem-39249871">BigData ecosystem</a></li>
</ul>
<div class="rpbt_shortcode">
<h3>Related Posts</h3>
<ul>
					
			<li><a href="http://blog.xi-group.com/2015/01/userdata-teplate-for-ubuntu-14-04-ec2-instances-in-aws/">UserData Template for Ubuntu 14.04 EC2 Instances in AWS</a></li>
					
			<li><a href="http://blog.xi-group.com/2014/11/small-tip-how-to-use-block-device-mappings-to-manage-instance-volumes-with-aws-cli/">Small Tip: How to use &#8211;block-device-mappings to manage instance volumes with AWS CLI</a></li>
					
			<li><a href="http://blog.xi-group.com/2014/07/how-to-implement-multi-cloud-deployment-for-scalability-and-reliability/">How to implement multi-cloud deployment for scalability and reliability</a></li>
					
			<li><a href="http://blog.xi-group.com/2015/01/small-tip-how-to-use-aws-cli-filter-parameter/">Small Tip: How to use AWS CLI &#8216;&#8211;filter&#8217; parameter</a></li>
					
			<li><a href="http://blog.xi-group.com/2014/07/small-tip-how-to-use-aws-cli-to-start-spot-instances-with-userdata/">Small Tip: How to use AWS CLI to start Spot instances with UserData</a></li>
			</ul>
</div>
]]></content:encoded>
			<wfw:commentRss>http://blog.xi-group.com/2015/02/how-to-deploy-single-node-hadoop-setup-in-aws/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>UserData Template for Ubuntu 14.04 EC2 Instances in AWS</title>
		<link>http://blog.xi-group.com/2015/01/userdata-teplate-for-ubuntu-14-04-ec2-instances-in-aws/</link>
		<comments>http://blog.xi-group.com/2015/01/userdata-teplate-for-ubuntu-14-04-ec2-instances-in-aws/#comments</comments>
		<pubDate>Tue, 27 Jan 2015 11:41:14 +0000</pubDate>
		<dc:creator><![CDATA[Ivo Vachkov]]></dc:creator>
				<category><![CDATA[AWS]]></category>
		<category><![CDATA[Development]]></category>
		<category><![CDATA[DevOps]]></category>
		<category><![CDATA[Operations]]></category>
		<category><![CDATA[AWS CLI]]></category>
		<category><![CDATA[template]]></category>
		<category><![CDATA[UserData]]></category>

		<guid isPermaLink="false">http://blog.xi-group.com/?p=45</guid>
		<description><![CDATA[In any elastic environment there is a recurring issue: how to quickly spin up new boxes? Over time, multiple options have emerged. Many environments rely on pre-baked machine images. In Amazon AWS those are called Amazon Machine Images (AMIs), in Joyent&#8217;s SDC &#8211; images, but no matter the name they are pre-built, (mostly) pre-configured [&#8230;]]]></description>
				<content:encoded><![CDATA[<p style="text-align: justify;">In any elastic environment there is a recurring issue: how to quickly spin up new boxes? Over time, multiple options have emerged. Many environments rely on pre-baked machine images. In Amazon AWS those are called Amazon Machine Images (AMIs), in Joyent&#8217;s SDC &#8211; images, but no matter the name they are pre-built, (mostly) pre-configured digital artifacts that the underlying cloud layer will bootstrap and execute. They are fast to bootstrap, but limited: it is hard to manage different versions, hard to switch virtualization technologies (PV vs. HVM, AWS vs. Joyent, etc.), and hard to deal with software versioning. Managing an elastic environment with pre-baked images is probably the fastest way to start, but probably the most expensive in the long run.</p>
<p style="text-align: justify;">Another option is to use some sort of configuration management system: Chef, Puppet, Salt, Ansible &#8230; a lot of choices. These are flexible but, depending on the usage scenario, can be slow and may require additional &#8220;interventions&#8221; to work properly. There are two additional &#8220;gotchas&#8221; that are not commonly discussed. First, these tools force some sort of in-house configuration/pseudo-programming language and terminology. Second, security is a tricky concept to implement within such a system. Managing elastic environments with configuration management systems is definitely possible, but comes with dependencies and prerequisites you should account for in the design phase.</p>
<p style="text-align: justify;">The third option, AWS UserData / Joyent script, is a reasonable compromise. This is effectively a script that executes once upon virtual machine creation. It allows you to configure the instance, attach/configure storage, install software, etc. There are obvious benefits to this approach:</p>
<ul>
<li>Treat that script like any other coding artifact, use version control, code reviews, etc;</li>
<li>It is easily modifiable upon need or request;</li>
<li>It can be used with virtually any instance type;</li>
<li>It is a single source of truth for the instance configuration;</li>
<li>It integrates nicely with the whole Control Plane concept.</li>
</ul>
<p style="text-align: justify;">Here is a basic template for Ubuntu 14.04, used with reasonable success to cover a wide variety of deployment needs:</p>
<p></p><pre class="crayon-plain-tag">#!/bin/bash -ex

# DESCRIPTION: The following UserData script is created to ... 
# 
# Maintainer: ivachkov [at] xi-group [dot] com
# 
# Requirements:
#	OS: Ubuntu 14.04 LTS
#	Repositories: 
#		...
#	Packages:
# 		htop, iotop, dstat, ...
#	PIP Packages:
#		boto, awscli, ...
# 
# Additional information if necessary
# 	... 
# 

# Debian apt-get install function to eliminate prompts
export DEBIAN_FRONTEND=noninteractive
apt_get_install()
{
	DEBIAN_FRONTEND=noninteractive apt-get -y \
		-o DPkg::Options::=--force-confnew \
		install $@
}

# Configure disk layout 
INSTANCE_STORE_0="/dev/xvdb"
IS0_PART_1="/dev/xvdb1"
IS0_PART_2="/dev/xvdb2"

# INSTANCE_STORE_1="/dev/xvdc"
# IS1_PART_1="/dev/xvdc1"
# IS1_PART_2="/dev/xvdc2"

# ... 

# Unmount /dev/xvdb if already mounted
MOUNTED=`df -h | awk '{print $1}' | grep $INSTANCE_STORE_0`
if [ ! -z "$MOUNTED" ]; then
	umount -f $INSTANCE_STORE_0
fi

# Partition the disk (8GB for SWAP / Rest for /mnt)
(echo n; echo p; echo 1; echo 2048; echo +8G; echo t; echo 82; echo n; echo p; echo 2; echo; echo; echo w) | fdisk $INSTANCE_STORE_0

# Make and enable swap
mkswap $IS0_PART_1
swapon $IS0_PART_1

# Make /mnt partition and mount it
mkfs.ext4 $IS0_PART_2
mount $IS0_PART_2 /mnt

# Update /etc/fstab if necessary 
# sed -i s/$INSTANCE_STORE_0/$IS0_PART_2/g /etc/fstab

# Add external repositories
# 
# Example 1: MongoDB
# apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10
# echo 'deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen' | sudo tee /etc/apt/sources.list.d/mongodb.list
# 
# Example 2: Salt
# add-apt-repository ppa:saltstack/salt
# 
# Example 3: *Internal repository*
# curl --silent https://apt.mydomain.com/my.apt.gpg.key | apt-key add -
# curl --silent -o /etc/apt/sources.list.d/my.apt.list https://apt.mydomain.com/my.apt.list

# Update the package indexes
apt-get update && DEBIAN_FRONTEND=noninteractive apt-get -y -o Dpkg::Options::="--force-confnew" dist-upgrade

# Install basic APT packages and requirements
apt_get_install htop sysstat dstat iotop
# apt_get_install ... 
apt_get_install python-pip
apt_get_install ntp
# apt_get_install ... 

# Install PIP requirements
pip install six==1.8.0
pip install boto
pip install awscli
# pip install ... 

# Configure NTP
service ntp stop		# Stop ntp daemon to free NTP socket
sleep 3				# Give the daemon some time to exit
ntpdate pool.ntp.org		# Sync time
service ntp start		# Re-enable the NTP daemon

# Configure other system-specific settings ... 

# Configure automatic security updates
cat > /etc/apt/apt.conf.d/20auto-upgrades << "EOF"
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
EOF
/etc/init.d/unattended-upgrades restart

# Update system limits
cat > /etc/security/limits.d/my_limits.conf << "EOF"
*               soft    nofile          999999
*               hard    nofile          999999
root            soft    nofile          999999
root            hard    nofile          999999
EOF
ulimit -n 999999

# Update sysctl variables
cat > /etc/sysctl.d/my_sysctl.conf << "EOF"
net.core.somaxconn=65535
net.core.netdev_max_backlog=65535
# net.core.rmem_max=8388608
# net.core.wmem_max=8388608
# net.core.rmem_default=65536
# net.core.wmem_default=65536
# net.ipv4.tcp_rmem=8192 873800 8388608
# net.ipv4.tcp_wmem=4096 655360 8388608
# net.ipv4.tcp_mem=8388608 8388608 8388608
# net.ipv4.tcp_max_tw_buckets=6000000
# net.ipv4.tcp_max_syn_backlog=65536
# net.ipv4.tcp_max_orphans=262144
# net.ipv4.tcp_synack_retries = 2
# net.ipv4.tcp_syn_retries = 2
# net.ipv4.tcp_fin_timeout = 7
# net.ipv4.tcp_slow_start_after_idle = 0
# net.ipv4.ip_local_port_range = 2000 65000
# net.ipv4.tcp_window_scaling = 1
# net.ipv4.tcp_max_syn_backlog = 3240000
# net.ipv4.tcp_congestion_control = cubic
EOF
sysctl -p /etc/sysctl.d/my_sysctl.conf

# Create specific users and groups 
# addgroup ...
# useradd ... 
# usermod ...

# Create expected set of directories
DIRECTORIES="
	/var/log/...
	/run/...
	/srv/... 
	/opt/...
	"

for DIRECTORY in $DIRECTORIES; do
	mkdir -p $DIRECTORY
	chown USER:GROUP $DIRECTORY	
done

# Create custom_crontab
cat > /home/ubuntu/custom_crontab << "EOF"

EOF

# Enable custom cronjobs
su - ubuntu -c "/usr/bin/crontab /home/ubuntu/custom_crontab"

# Install main application / service 
# ...
# ... 

# Configure main application / service
# ... 
# ... 

# Make everything survive reboot
cat > /etc/rc.local << "EOF"
#!/bin/sh

# Regenerate disk layout on ephemeral storage 
# ... 

# Start the application 
# ... 

EOF

# Start application
# service XXX restart 

# Tag the instance (NOTE: depends on a configured AWS CLI)
INSTANCE_ID=`curl -s http://169.254.169.254/latest/meta-data/instance-id`
# aws ec2 create-tags --resources $INSTANCE_ID --tags Key=Name,Value=... 

# Mark successful execution
exit 0</pre><p></p>
<p style="text-align: justify;">Trivial. Yet it incorporates a lot in just ~200 lines of code:</p>
<ol>
<li>Disk layout management;</li>
<li>Package repositories configuration;</li>
<li>Basic tool set and third party software installation;</li>
<li>Service reconfiguration (NTP, Automatic security updates);</li>
<li>System reconfiguration (limits, sysctl, users, directories, crontab);</li>
<li>Post-reboot startup configuration;</li>
<li>Identity discovery and self-tagging.</li>
</ol>
<p style="text-align: justify;">As an added bonus, the <strong>cloud-init</strong> package will properly log all output during script execution to <strong>/var/log/cloud-init-output.log</strong> for failure investigation. The current script uses the <strong>-ex</strong> bash parameters, which means it will explicitly echo all executed commands (<strong>-x</strong>) and exit at the first sign of an unsuccessful command (<strong>-e</strong>).</p>
<p style="text-align: justify;">NOTE: There is one important component, purposefully omitted from the template UserData, the log file management. We plan on discussing that in a separate article.</p>
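<p style="text-align: justify;">To tie this back to the AWS CLI: a template like the one above is typically saved to a file and passed to <strong>run-instances</strong> via the <strong>--user-data</strong> parameter, just like in our single-node Hadoop script. The following is an illustrative sketch only; the AMI ID, instance type, key pair and security group names are placeholders, not values from this article:</p>

```shell
#!/bin/bash

# Illustrative sketch: launch an instance with the saved UserData template.
# All identifiers (AMI ID, instance type, key pair, security group) are
# placeholders and must be replaced with real values.
launch_with_userdata() {
	USERDATA_FILE="$1"	# e.g. /tmp/ubuntu-1404-userdata.sh

	aws ec2 run-instances \
		--image-id ami-xxxxxxxx \
		--count 1 \
		--instance-type m3.medium \
		--key-name my-key \
		--security-groups my-sg \
		--user-data file://$USERDATA_FILE \
		--region us-east-1
}

# Example call:
# launch_with_userdata /tmp/ubuntu-1404-userdata.sh
```
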
<p>References</p>
<ul>
<li><a href="http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html">http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html</a></li>
<li><a href="http://wiki.joyent.com/wiki/display/sdc/Using+the+Metadata+API">http://wiki.joyent.com/wiki/display/sdc/Using+the+Metadata+API</a></li>
</ul>
<div class="rpbt_shortcode">
<h3>Related Posts</h3>
<ul>
					
			<li><a href="http://blog.xi-group.com/2015/02/how-to-deploy-single-node-hadoop-setup-in-aws/">How to deploy single-node Hadoop setup in AWS</a></li>
					
			<li><a href="http://blog.xi-group.com/2014/11/small-tip-how-to-use-block-device-mappings-to-manage-instance-volumes-with-aws-cli/">Small Tip: How to use &#8211;block-device-mappings to manage instance volumes with AWS CLI</a></li>
					
			<li><a href="http://blog.xi-group.com/2014/07/how-to-implement-multi-cloud-deployment-for-scalability-and-reliability/">How to implement multi-cloud deployment for scalability and reliability</a></li>
					
			<li><a href="http://blog.xi-group.com/2014/07/small-tip-how-to-use-aws-cli-to-start-spot-instances-with-userdata/">Small Tip: How to use AWS CLI to start Spot instances with UserData</a></li>
					
			<li><a href="http://blog.xi-group.com/2015/01/small-tip-how-to-use-aws-cli-filter-parameter/">Small Tip: How to use AWS CLI &#8216;&#8211;filter&#8217; parameter</a></li>
			</ul>
</div>
]]></content:encoded>
			<wfw:commentRss>http://blog.xi-group.com/2015/01/userdata-teplate-for-ubuntu-14-04-ec2-instances-in-aws/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Small Tip: How to use AWS CLI &#8216;&#8211;filter&#8217; parameter</title>
		<link>http://blog.xi-group.com/2015/01/small-tip-how-to-use-aws-cli-filter-parameter/</link>
		<comments>http://blog.xi-group.com/2015/01/small-tip-how-to-use-aws-cli-filter-parameter/#comments</comments>
		<pubDate>Tue, 20 Jan 2015 12:20:54 +0000</pubDate>
		<dc:creator><![CDATA[Ivo Vachkov]]></dc:creator>
				<category><![CDATA[AWS]]></category>
		<category><![CDATA[DevOps]]></category>
		<category><![CDATA[Operations]]></category>
		<category><![CDATA[Small Tip]]></category>
		<category><![CDATA[AWS CLI]]></category>
		<category><![CDATA[extract fields]]></category>
		<category><![CDATA[filter parameter]]></category>
		<category><![CDATA[output filter]]></category>
		<category><![CDATA[parse output]]></category>

		<guid isPermaLink="false">http://blog.xi-group.com/?p=362</guid>
		<description><![CDATA[This post will present another, useful feature of the AWS CLI tool set, the &#8211;filter parameter. This command line parameter is available and extremely helpful in EC2 namespace (aws ec2 describe-*).There are various ways to use &#8211;filter parameter. 1. &#8211;filter parameter can get filtering properties directly from the command line: [crayon-69d07c417aa5d943387449/] 2. &#8211;filter parameter will [&#8230;]]]></description>
				<content:encoded><![CDATA[<p style="text-align: justify;">This post will present another useful feature of the AWS CLI tool set, the <strong>&#8211;filter</strong> parameter. This command line parameter is available and extremely helpful in the EC2 namespace (aws ec2 describe-*). There are various ways to use the <strong>&#8211;filter</strong> parameter.</p>
<p style="text-align: justify;">1. The <strong>&#8211;filter</strong> parameter can take filtering properties directly from the command line:</p>
<p></p><pre class="crayon-plain-tag">aws ec2 describe-instances --filter Name="instance-id",Values="i-1234abcd"</pre><p></p>
<p style="text-align: justify;">2. The <strong>&#8211;filters</strong> parameter can also read a JSON-encoded filter file:</p>
<p></p><pre class="crayon-plain-tag">aws ec2 describe-instances --filters file://filters.json</pre><p></p>
<p>The <strong>filters.json</strong> file uses the following structure:</p>
<p></p><pre class="crayon-plain-tag">[
  {
    "Name": "instance-type",
    "Values": ["m1.small", "m1.medium"]
  },
  {
    "Name": "availability-zone",
    "Values": ["us-west-2c"]
  }
]</pre><p></p>
<p style="text-align: justify;">There are various AWS CLI commands that provide <strong>&#8211;filter</strong> parameters. For additional information, check the <em>References</em> section.</p>
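<p style="text-align: justify;">A filter file can be sanity-checked locally before it is handed to the CLI; a minimal sketch (the /tmp path is illustrative):</p>

```shell
# Build a filter file like the one above and validate its JSON syntax locally;
# a malformed file makes the validation command fail.
cat > /tmp/filters.json << 'EOF'
[
  {
    "Name": "instance-type",
    "Values": ["m1.small", "m1.medium"]
  },
  {
    "Name": "availability-zone",
    "Values": ["us-west-2c"]
  }
]
EOF

python3 -m json.tool /tmp/filters.json > /dev/null && echo "filters.json is valid JSON"

# Then, with configured credentials:
#   aws ec2 describe-instances --filters file:///tmp/filters.json
```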
<p>To demonstrate how this functionality can be used in various scenarios, here are several examples:</p>
<p>1. Filter by availability zone:</p><pre class="crayon-plain-tag">aws ec2 describe-instances --filter Name="availability-zone",Values="us-east-1b"</pre><p></p>
<p>2. Filter by security group (EC2-Classic):</p><pre class="crayon-plain-tag">aws ec2 describe-instances --filter Name="group-name",Values="default"</pre><p></p>
<p>3. Filter by security group (EC2-VPC):</p><pre class="crayon-plain-tag">aws ec2 describe-instances --filter Name="instance.group-name",Values="default"</pre><p></p>
<p>4. Filter only spot instances:</p><pre class="crayon-plain-tag">aws ec2 describe-instances --filter Name="instance-lifecycle",Values="spot"</pre><p></p>
<p>5. Filter only running EC2 instances:</p><pre class="crayon-plain-tag">aws ec2 describe-instances --filter Name="instance-state-name",Values="running"</pre><p></p>
<p>6. Filter only stopped EC2 instances:</p><pre class="crayon-plain-tag">aws ec2 describe-instances --filter Name="instance-state-name",Values="stopped"</pre><p></p>
<p>7. Filter by SSH Key name:</p><pre class="crayon-plain-tag">aws ec2 describe-instances --filter Name="key-name",Values="ssh-key"</pre><p></p>
<p>8. Filter by Tag:</p><pre class="crayon-plain-tag">aws ec2 describe-instances --filter "Name=tag-key,Values=Name" "Name=tag-value,Values=string"</pre><p></p>
<p>9. Filter by Tag with a wildcard (&#8216;*&#8217;):</p><pre class="crayon-plain-tag">aws ec2 describe-instances --filter "Name=tag-key,Values=MyTag" "Name=tag-value,Values=abcd*efgh"</pre><p></p>
<p>10. Filter by multiple criteria (all running instances with the string &#8216;email&#8217; in the value of the Name tag):</p><pre class="crayon-plain-tag">aws ec2 describe-instances --filter "Name=instance-state-name,Values=running" "Name=tag-key,Values=Name" "Name=tag-value,Values=*email*"</pre><p></p>
<p>11. Filter by multiple criteria (all running instances with an empty Name tag):</p><pre class="crayon-plain-tag">aws ec2 describe-instances --filter "Name=instance-state-name,Values=running" "Name=tag-key,Values=Name" "Name=tag-value,Values=''"</pre><p></p>
<p>These examples are very close to production ones used in several large AWS deployments. They are used to:</p>
<ul>
<li>Monitor changes in instance populations;</li>
<li>Monitor successful configuration of resources;</li>
<li>Track deployment / rollout of new software version;</li>
<li>Track stopped instances to prevent unnecessary resource usage;</li>
<li>Ensure desired service distributions over availability zones and regions;</li>
<li>Ensure service distribution over instances with different lifecycles;</li>
</ul>
<p>Be sure to utilize this functionality in your monitoring infrastructure. It has been a powerful source of operational insight and a great source of raw data for our intelligent control planes!</p>
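<p style="text-align: justify;">As a minimal sketch of such a monitoring check (assuming configured AWS credentials; the tag value and threshold are illustrative), the <strong>&#8211;filter</strong> parameter combines well with <strong>&#8211;query</strong>:</p>

```shell
#!/bin/bash
# Sketch: count running instances whose Name tag contains 'email'
# and alert when the population drops below a threshold.
RUNNING=$(aws ec2 describe-instances \
    --filter "Name=instance-state-name,Values=running" \
             "Name=tag-key,Values=Name" "Name=tag-value,Values=*email*" \
    --query 'length(Reservations[].Instances[])' \
    --output text)

if [ "$RUNNING" -lt 2 ]; then
    echo "ALERT: only $RUNNING running 'email' instances" >&2
fi
```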
<p>If you want to talk more on this subject or just share your experience, do not hesitate to <a href="http://blog.xi-group.com/contact-us/" target="_blank">Contact Us!</a></p>
<p>References</p>
<ul>
<li><a href="http://docs.aws.amazon.com/cli/latest/reference/ec2/describe-instances.html">http://docs.aws.amazon.com/cli/latest/reference/ec2/describe-instances.html</a></li>
<li><a href="http://docs.aws.amazon.com/cli/latest/reference/ec2/describe-spot-instance-requests.html">http://docs.aws.amazon.com/cli/latest/reference/ec2/describe-spot-instance-requests.html</a></li>
<li><a href="http://docs.aws.amazon.com/cli/latest/reference/ec2/describe-reserved-instances.html">http://docs.aws.amazon.com/cli/latest/reference/ec2/describe-reserved-instances.html</a></li>
<li><a href="http://docs.aws.amazon.com/cli/latest/reference/ec2/describe-network-acls.html">http://docs.aws.amazon.com/cli/latest/reference/ec2/describe-network-acls.html</a></li>
<li><a href="http://docs.aws.amazon.com/cli/latest/reference/ec2/describe-key-pairs.html">http://docs.aws.amazon.com/cli/latest/reference/ec2/describe-key-pairs.html</a></li>
<li><a href="http://docs.aws.amazon.com/cli/latest/reference/ec2/index.html">http://docs.aws.amazon.com/cli/latest/reference/ec2/index.html</a></li>
</ul>
<div class="rpbt_shortcode">
<h3>Related Posts</h3>
<ul>
					
			<li><a href="http://blog.xi-group.com/2014/11/small-tip-how-to-use-block-device-mappings-to-manage-instance-volumes-with-aws-cli/">Small Tip: How to use &#8211;block-device-mappings to manage instance volumes with AWS CLI</a></li>
					
			<li><a href="http://blog.xi-group.com/2014/07/small-tip-how-to-use-aws-cli-to-start-spot-instances-with-userdata/">Small Tip: How to use AWS CLI to start Spot instances with UserData</a></li>
					
			<li><a href="http://blog.xi-group.com/2014/06/small-tip-ebs-volume-allocation-time-is-linear-to-the-size-and-unrelated-to-the-instance-type/">Small Tip: EBS volume allocation time is linear to the size and unrelated to the instance type</a></li>
					
			<li><a href="http://blog.xi-group.com/2015/02/how-to-deploy-single-node-hadoop-setup-in-aws/">How to deploy single-node Hadoop setup in AWS</a></li>
					
			<li><a href="http://blog.xi-group.com/2015/01/userdata-teplate-for-ubuntu-14-04-ec2-instances-in-aws/">UserData Template for Ubuntu 14.04 EC2 Instances in AWS</a></li>
			</ul>
</div>
]]></content:encoded>
			<wfw:commentRss>http://blog.xi-group.com/2015/01/small-tip-how-to-use-aws-cli-filter-parameter/feed/</wfw:commentRss>
		<slash:comments>5</slash:comments>
		</item>
		<item>
		<title>Small Tip: How to use &#8211;block-device-mappings to manage instance volumes with AWS CLI</title>
		<link>http://blog.xi-group.com/2014/11/small-tip-how-to-use-block-device-mappings-to-manage-instance-volumes-with-aws-cli/</link>
		<comments>http://blog.xi-group.com/2014/11/small-tip-how-to-use-block-device-mappings-to-manage-instance-volumes-with-aws-cli/#comments</comments>
		<pubDate>Wed, 26 Nov 2014 10:18:37 +0000</pubDate>
		<dc:creator><![CDATA[Ivo Vachkov]]></dc:creator>
				<category><![CDATA[AWS]]></category>
		<category><![CDATA[Development]]></category>
		<category><![CDATA[DevOps]]></category>
		<category><![CDATA[Operations]]></category>
		<category><![CDATA[Small Tip]]></category>
		<category><![CDATA[AWS CLI]]></category>
		<category><![CDATA[block device mappings]]></category>
		<category><![CDATA[instance store]]></category>
		<category><![CDATA[volumes]]></category>

		<guid isPermaLink="false">http://blog.xi-group.com/?p=195</guid>
		<description><![CDATA[This post will present one of the less popular features in the AWS CLI tool set, how to deal with EC2 instance volumes through the use of &#8211;block-device-mappings parameter. Previous post, Small Tip: Use AWS CLI to create instances with bigger root partitions already presents one of the common use cases, modifying the instance root [&#8230;]]]></description>
				<content:encoded><![CDATA[<p style="text-align: justify;">This post will present one of the less popular features in the AWS CLI tool set: how to manage EC2 instance volumes through the use of the <strong>&#8211;block-device-mappings</strong> parameter. A previous post, <a href="http://blog.xi-group.com/2014/06/small-tip-use-aws-cli-to-create-instances-with-bigger-root-partitions/">Small Tip: Use AWS CLI to create instances with bigger root partitions</a>, already presents one of the common use cases, modifying the instance root partition size. However, use of &#8216;&#8211;block-device-mappings&#8217; can go far beyond this simple feature.</p>
<p style="text-align: justify;">The default documentation (<a href="http://docs.aws.amazon.com/cli/latest/reference/ec2/run-instances.html">http://docs.aws.amazon.com/cli/latest/reference/ec2/run-instances.html</a>), although a good start, is somewhat limited. Several tips and tricks are presented here.</p>
<p><strong>The location of the JSON block device mapping specification can be quite flexible. The mappings can be supplied:</strong></p>
<p>1. Using command line directly:</p><pre class="crayon-plain-tag">--block-device-mappings '[ {"DeviceName":"/dev/sdb","VirtualName":"ephemeral0"}, {"DeviceName":"/dev/sdc","VirtualName":"ephemeral1"}]'</pre><p></p>
<p>2. Using file as a source:</p><pre class="crayon-plain-tag">--block-device-mappings file:///home/ec2-user/mapping.json</pre><p></p>
<p>3. Using URL as a source:</p><pre class="crayon-plain-tag">--block-device-mappings http://mybucket.s3.amazonaws.com/mapping.json</pre><p></p>
<p>Source: <a href="http://understeer.hatenablog.com/entry/2013/10/18/223618">http://understeer.hatenablog.com/entry/2013/10/18/223618</a></p>
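<p style="text-align: justify;">For example, the inline form plugs directly into an instance launch (the AMI ID, key name and security group below are illustrative placeholders):</p>

```shell
#!/bin/bash
# Sketch: launch an instance with two ephemeral volumes mapped explicitly.
# ami-xxxxxxxx, my-key and my-sg are placeholders for your own values.
aws ec2 run-instances \
    --image-id ami-xxxxxxxx \
    --count 1 \
    --instance-type m1.small \
    --key-name my-key \
    --security-groups my-sg \
    --block-device-mappings '[{"DeviceName":"/dev/sdb","VirtualName":"ephemeral0"},{"DeviceName":"/dev/sdc","VirtualName":"ephemeral1"}]'
```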
<p>Other common scenarios:</p>
<p>1. <strong>To reorder default ephemeral volumes to ensure stability of the environment</strong>:</p><pre class="crayon-plain-tag">[
  {
    "DeviceName": "/dev/sde",
    "VirtualName": "ephemeral0"
  },
  {
    "DeviceName": "/dev/sdf",
    "VirtualName": "ephemeral1"
  }
]</pre><p></p>
<p style="text-align: justify;"><strong>NOTE</strong>: Useful for additional UserData processing or deployments with hardcoded settings.</p>
<p>2. <strong>To allocate additional EBS Volume with specific size (100GB), to be associated with the EC2 instance</strong>:</p><pre class="crayon-plain-tag">[
  {
    "DeviceName": "/dev/sdg",
    "Ebs": {
      "VolumeSize": 100
    }
  }
]</pre><p></p>
<p style="text-align: justify;"><strong>NOTE</strong>: Useful for cases where cheaper instance types are outfitted with big volumes (disk-intensive tasks running on low-CPU/memory instance types).</p>
<p>3. <strong>To allocate new volume from Snapshot ID</strong>:</p><pre class="crayon-plain-tag">[
  {
    "DeviceName": "/dev/sdh",
    "Ebs": {
      "SnapshotId": "snap-xxxxxxxx"
    }
  }
]</pre><p></p>
<p style="text-align: justify;"><strong>NOTE</strong>: Useful for pre-loading newly created instances with specific disk data while still retaining the ability to modify the local copy.</p>
<p>4. <strong>To omit mapping of a particular Device Name</strong>:</p><pre class="crayon-plain-tag">[
  {
    "DeviceName": "/dev/sdj",
    "NoDevice": ""
  }
]</pre><p></p>
<p style="text-align: justify;"><strong>NOTE</strong>: Useful to override default AWS behavior.</p>
<p>5. <strong>To allocate new EBS Volume with explicit termination behavior (Keep after instance termination)</strong>:</p><pre class="crayon-plain-tag">[
  {
    "DeviceName": "/dev/sdc",
    "Ebs": {
      "VolumeSize": 10,
      "DeleteOnTermination": false
    }
  }
]</pre><p></p>
<p style="text-align: justify;"><strong>NOTE</strong>: Useful for keeping instance data after termination; the additional cost may be significant if those volumes are not released after examination.</p>
<p>6. <strong>To allocate new, encrypted, EBS Volume with Reserved IOPS</strong>:</p><pre class="crayon-plain-tag">[
  {
    "DeviceName": "/dev/sdc",
    "Ebs": {
      "VolumeSize": 10,
      "VolumeType": "io1",
      "Iops": 1000,
      "Encrypted": true
    }
  }
]</pre><p></p>
<p style="text-align: justify;"><strong>NOTE</strong>: Useful to set minimum required performance levels (I/O Operations Per Second) for the specified volume.</p>
<p style="text-align: justify;">The outlined functionality should cover a wide range of potential use cases for DevOps engineers who want to use automation to customize their infrastructure. Flexible instance volume management is a key ingredient for a successful implementation of the &#8216;Infrastructure-as-Code&#8217; paradigm!</p>
<p>References</p>
<ul>
<li><a href="http://docs.aws.amazon.com/cli/latest/reference/ec2/run-instances.html">http://docs.aws.amazon.com/cli/latest/reference/ec2/run-instances.html</a></li>
<li><a href="http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/block-device-mapping-concepts.html">http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/block-device-mapping-concepts.html</a></li>
<li><a href="http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-blockdev-mapping.html">http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-blockdev-mapping.html</a></li>
<li><a href="http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html">http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html</a></li>
<li><a href="http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html">http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html</a></li>
</ul>
<div class="rpbt_shortcode">
<h3>Related Posts</h3>
<ul>
					
			<li><a href="http://blog.xi-group.com/2015/02/how-to-deploy-single-node-hadoop-setup-in-aws/">How to deploy single-node Hadoop setup in AWS</a></li>
					
			<li><a href="http://blog.xi-group.com/2015/01/userdata-teplate-for-ubuntu-14-04-ec2-instances-in-aws/">UserData Template for Ubuntu 14.04 EC2 Instances in AWS</a></li>
					
			<li><a href="http://blog.xi-group.com/2015/01/small-tip-how-to-use-aws-cli-filter-parameter/">Small Tip: How to use AWS CLI &#8216;&#8211;filter&#8217; parameter</a></li>
					
			<li><a href="http://blog.xi-group.com/2014/07/how-to-implement-multi-cloud-deployment-for-scalability-and-reliability/">How to implement multi-cloud deployment for scalability and reliability</a></li>
					
			<li><a href="http://blog.xi-group.com/2014/07/small-tip-how-to-use-aws-cli-to-start-spot-instances-with-userdata/">Small Tip: How to use AWS CLI to start Spot instances with UserData</a></li>
			</ul>
</div>
]]></content:encoded>
			<wfw:commentRss>http://blog.xi-group.com/2014/11/small-tip-how-to-use-block-device-mappings-to-manage-instance-volumes-with-aws-cli/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>How to implement multi-cloud deployment for scalability and reliability</title>
		<link>http://blog.xi-group.com/2014/07/how-to-implement-multi-cloud-deployment-for-scalability-and-reliability/</link>
		<comments>http://blog.xi-group.com/2014/07/how-to-implement-multi-cloud-deployment-for-scalability-and-reliability/#comments</comments>
		<pubDate>Fri, 18 Jul 2014 15:20:12 +0000</pubDate>
		<dc:creator><![CDATA[Ivo Vachkov]]></dc:creator>
				<category><![CDATA[AWS]]></category>
		<category><![CDATA[Development]]></category>
		<category><![CDATA[DevOps]]></category>
		<category><![CDATA[Operations]]></category>
		<category><![CDATA[theCloud]]></category>
		<category><![CDATA[AWS CLI]]></category>
		<category><![CDATA[cloud]]></category>
		<category><![CDATA[cloudflare]]></category>
		<category><![CDATA[distributed systems]]></category>
		<category><![CDATA[dns]]></category>
		<category><![CDATA[elastic computing]]></category>
		<category><![CDATA[joyent]]></category>
		<category><![CDATA[multi-cloud]]></category>

		<guid isPermaLink="false">http://blog.xi-group.com/?p=279</guid>
		<description><![CDATA[Introduction This post will present interesting approach to scalability and reliability: How to implement multi-cloud application deployment ?! There are many reasons why this is interesting topic. Avoiding provider lockdown, reducing cloud provider outage impact, increasing world-wide coverage, disaster recovery / preparedness are only some of them. The obvious benefits of multi-cloud deployment are increased [&#8230;]]]></description>
				<content:encoded><![CDATA[<h2>Introduction</h2>
<p style="text-align: justify;">This post will present an interesting approach to scalability and reliability:</p>
<p style="text-align: center;"><strong>How to implement multi-cloud application deployment ?!</strong></p>
<p style="text-align: justify;">There are many reasons why this is an interesting topic. Avoiding provider lock-in, reducing cloud provider outage impact, increasing world-wide coverage, and disaster recovery / preparedness are only some of them. The obvious benefits of multi-cloud deployment are increased reliability and outage impact minimization. However, there are drawbacks too: supporting different sets of code to accommodate similar, but different, services; increased cost; increased infrastructure complexity; different tools &#8230; Yet, despite the drawbacks, the possible benefits far outweigh the negatives!</p>
<p style="text-align: justify;">In the following article a simple service will be deployed in automated fashion over two different Cloud Service Providers: Amazon AWS and Joyent. A third provider, CloudFlare, will be used to service DNS requests. The choice of providers is not random: they were chosen because of particular similarities and because of their ease of use. All of these providers have consistent, comprehensive APIs that allow automation through programming in parallel to the command line tools.</p>
<h2>Preliminary information</h2>
<p style="text-align: justify;">The service setup described here, although synthetic, is representative of multiple usage scenarios. More complex scenarios are also possible. Special care should be taken to address the use of common resources or non-replicable resources/states. Understand the dependencies of your application architecture before using a multi-cloud setup. Or contact <a href="http://blog.xi-group.com/contact-us/">Xi Group Ltd.</a> to aid you in this process!</p>
<p style="text-align: justify;">The following Cloud Service Providers will be used to deploy executable code on:</p>
<ul>
<li><a href="https://aws.amazon.com/">Amazon AWS</a></li>
<li><a href="http://www.joyent.com/">Joyent</a></li>
</ul>
<p style="text-align: justify;">DNS requests will be served by <a href="https://www.cloudflare.com/">CloudFlare</a>. The test domain is: <strong>scalability.expert</strong></p>
<p style="text-align: justify;">Required tools are:</p>
<ul>
<li><a href="http://aws.amazon.com/cli/">Amazon AWS CLI</a></li>
<li><a href="https://github.com/joyent/node-smartdc">Joyent SmartDataCenter tools</a></li>
</ul>
<p style="text-align: justify;">Additional information can be found in <a href="http://aws.amazon.com/cli/">AWS CLI</a>, <a href="https://apidocs.joyent.com/cloudapi/">Joyent CloudAPI Documentation</a> and <a href="https://www.cloudflare.com/docs/client-api.html">CloudFlare ClientAPI</a>.</p>
<h2>Implementation Details</h2>
<p style="text-align: justify;">A service, the website for <strong>www.scalability.expert</strong>, has to be deployed over multiple clouds. For simplicity, it is assumed that this is a static web site served by NginX. It will run on Ubuntu 14.04 LTS. The instance types chosen in both AWS and Joyent are pretty limited, but should provide enough computing power to run NginX and serve static content. CloudFlare must be configured with basic settings for the DNS zone it will serve (in this case, the free CloudFlare account is used).</p>
<p style="text-align: justify;">Each computing instance, when bootstrapped or restarted, will start NginX and register itself in CloudFlare. At that point it should be able to receive client traffic. Upon termination or shutdown, each instance should remove its own entries from CloudFlare, thus preventing DNS zone pollution with dead entries. A previous article, <a href="http://blog.xi-group.com/2014/06/how-to-implement-service-discovery-in-the-cloud/">How to implement Service Discovery in the Cloud</a>, discussed how <strong>DNS-SD</strong> can be implemented for a similar setup with increased client complexity. In a multi-tier architecture this is a proper solution. However, lack of control over the client browser may prove that a simplistic solution, like the one described here, is a better choice.</p>
<h3>CloudFlare</h3>
<p style="text-align: justify;">The CloudFlare setup uses the free account, and one domain, <strong>scalability.expert</strong>, is configured:</p>
<p><a href="http://blog.xi-group.com/wp-content/uploads/2014/07/Screen-Shot-2014-07-18-at-1.18.32-PM.png"><img src="http://blog.xi-group.com/wp-content/uploads/2014/07/Screen-Shot-2014-07-18-at-1.18.32-PM.png" alt="Screen Shot 2014-07-18 at 1.18.32 PM" width="985" height="154" class="alignnone size-full wp-image-288 img-thumbnail img-responsive" /></a> </p>
<p style="text-align: justify;">Basic configuration includes only one entry for the zone name:</p>
<p><a href="http://blog.xi-group.com/wp-content/uploads/2014/07/Screen-Shot-2014-07-18-at-1.19.03-PM.png"><img src="http://blog.xi-group.com/wp-content/uploads/2014/07/Screen-Shot-2014-07-18-at-1.19.03-PM.png" alt="Screen Shot 2014-07-18 at 1.19.03 PM" width="976" height="222" class="alignnone size-full wp-image-289 img-thumbnail img-responsive" /></a></p>
<p style="text-align: justify;">As seen by the orange cloud icon, the requests for this record will be routed through CloudFlare&#8217;s network for inspection and analysis!</p>
<h3>AWS UserData / Joyent Script</h3>
<p style="text-align: justify;">To automate the process of configuring instances, the following UserData script will be used:</p>
<p></p><pre class="crayon-plain-tag">#!/bin/bash -ex

# Debian apt-get install function
apt_get_install()
{
    DEBIAN_FRONTEND=noninteractive apt-get -y \
    -o DPkg::Options::=--force-confdef \
    -o DPkg::Options::=--force-confold \
    install $@
}
 
# Mark execution start
echo "STARTING" > /root/user_data_run
 
# Some initial setup
export DEBIAN_FRONTEND=noninteractive
apt-get update && apt-get upgrade -y

# Mark progress ...
echo "OS UPDATE FINISHED" >> /root/user_data_run
 
# Install required packages
apt_get_install jq nginx

# Mark progress ...
echo "SOFTWARE DEPENDENCIES INSTALLED" >> /root/user_data_run

# Create test html page
mkdir -p /var/www
cat > /var/www/index.html << "EOF"
<html>
    <head>
        <title>Demo Page</title>
    </head>
 
    <body>
        <center><h2>Demo Page</h2></center><br>
        <center>Status: running</center>
    </body>
</html>
EOF

# Configure NginX
cat > /etc/nginx/conf.d/demo.conf << "EOF"
# Minimal NginX VirtualHost setup
server {
    listen 8080;
 
    root /var/www;
    index index.html index.htm;
 
    location / {
        try_files $uri $uri/ =404;
    }
}
EOF
 
# Restart NginX with the new settings
/etc/init.d/nginx restart

# Mark progress ...
echo "NGINX CONFIGURED" >> /root/user_data_run

# /etc/init.d startup script
cat > /etc/init.d/cloudflare-submit.sh << "EOF"
#! /bin/bash
#
# Author: Ivo Vachkov (ivachkov@xi-group.com)
#
### BEGIN INIT INFO
# Provides: cloudflare-submit
# Required-Start:
# Should-Start:
# Required-Stop:
# Should-Stop:
# Default-Start:  2 3 4 5
# Default-Stop:   0 1 6
# Short-Description:    Start / Stop script for CloudFlare DNS registration
# Description:          Use to register / de-register this instance's DNS record with CloudFlare
### END INIT INFO

set -e
umask 022

# DNS Configuration details
ZONE="scalability.expert"
HOST="www"
TTL="120"
IP=""

# CloudFlare Specific Settings
CF_HOST="https://www.cloudflare.com/api_json.html"
CF_SERVICEMODE="0" # 0: Disable / 1: Enable CloudFlare acceleration network

# Edit the following parameters with your specific settings
CF_TOKEN="cloudflaretoken" 
CF_ACCOUNT="account@cloudflare.com"

# Execution log file
LOG_FILE=/var/log/cloudflare-submit.log

source /lib/lsb/init-functions

export PATH="${PATH:+$PATH:}/usr/sbin:/sbin:/usr/bin:/usr/local/bin:/usr/local/sbin"

# Get public IP
get_public_ip () {
        # Check what cloud provider this code is running on
        if [ ! -f "/var/lib/cloud/data/instance-id" ]; then
                echo "$0: /var/lib/cloud/data/instance-id is not available! Unsupported environment! Exiting ..."
                exit 1
        fi

        # Get the instance public IP address
        I_ID=`cat /var/lib/cloud/data/instance-id`
        if [[ $I_ID == i-* ]]; then
                # Amazon AWS
                IP=`curl http://169.254.169.254/latest/meta-data/public-ipv4`
        else
                # Joyent
                IP=`ifconfig eth0 | grep "inet addr" | awk '{print $2}' | cut -c6-`
        fi
}

# Default Start function
cloudflare_register () {
        # Get instance public IP address
        get_public_ip

        # Check the result
        if [ -z "$IP" ]; then
                echo "$0: Unable to obtain public IP Address! Exiting ..."
                exit 1
        fi

        # Execute update towards CloudFlare API
        curl -s $CF_HOST \
                -d "a=rec_new" \
                -d "tkn=$CF_TOKEN" \
                -d "email=$CF_ACCOUNT" \
                -d "z=$ZONE" \
                -d "type=A" \
                -d "name=$HOST" \
                -d "content=$IP" \
                -d "ttl=$TTL" >> $LOG_FILE
    
        # Get record ID for this IP
        REC_ID=`curl -s $CF_HOST \
                -d "a=rec_load_all" \
                -d "tkn=$CF_TOKEN" \
                -d "email=$CF_ACCOUNT" \
                -d "z=$ZONE" | jq -a '.response.recs.objs[] | .content, .rec_id' | grep -A 1 $IP| tail -1 | awk -F"\"" '{print $2}'`

        # Update with desired service mode
        curl -s $CF_HOST \
                -d "a=rec_edit" \
                -d "tkn=$CF_TOKEN" \
                -d "email=$CF_ACCOUNT" \
                -d "z=$ZONE" \
                -d "id=$REC_ID" \
                -d "type=A" \
                -d "name=$HOST" \
                -d "content=$IP" \
                -d "ttl=1" \
                -d "service_mode=$CF_SERVICEMODE" >> $LOG_FILE
}

# Default Stop function
cloudflare_deregister () {
        # Get instance public IP address
        get_public_ip

        # Check the result
        if [ -z "$IP" ]; then
                echo "$0: Unable to obtain public IP Address! Exiting ..."
                exit 1
        fi

        # Get record ID for this IP
        REC_ID=`curl -s $CF_HOST \
                -d "a=rec_load_all" \
                -d "tkn=$CF_TOKEN" \
                -d "email=$CF_ACCOUNT" \
                -d "z=$ZONE" | jq -a '.response.recs.objs[] | .content, .rec_id' | grep -A 1 $IP| tail -1 | awk -F"\"" '{print $2}'`

        # Execute update towards CloudFlare API
        curl -s $CF_HOST \
                -d "a=rec_delete" \
                -d "tkn=$CF_TOKEN" \
                -d "email=$CF_ACCOUNT" \
                -d "z=$ZONE" \
                -d "id=$REC_ID" >> $LOG_FILE
}

case "$1" in
start)
        log_daemon_msg "Registering $HOST.$ZONE  with CloudFlare ... " || true
        cloudflare_register
        ;;
stop)
        log_daemon_msg "De-Registering $HOST.$ZONE with CloudFlare ... " || true
        cloudflare_deregister
        ;;
restart)
        log_daemon_msg "Restarting ... " || true
        cloudflare_deregister
        cloudflare_register
        ;;
*)
        log_action_msg "Usage: $0 {start|stop|restart}" || true
        exit 1
esac

exit 0
EOF

# Add it to the startup / shutdown process
chmod +x /etc/init.d/cloudflare-submit.sh
update-rc.d cloudflare-submit.sh defaults 99

# Mark progress ...
echo "CLOUDFLARE SCRIPT INSTALLED" >> /root/user_data_run

# Register with CloudFlare to start receiving requests
/etc/init.d/cloudflare-submit.sh start

# Mark execution end
echo "DONE" > /root/user_data_run</pre><p></p>
<p style="text-align: justify;">This UserData script contains three components:</p>
<ol>
<li>
<p style="text-align: justify;"><strong>Lines 0 &#8211; 62</strong>: Boilerplate, OS update, installation and configuration of NginX;</p>
</li>
<li>
<p style="text-align: justify;"><strong>Lines 64 &#8211; 215</strong>: cloudflare-submit.sh, the main script that will be called on startup and shutdown of the instance. cloudflare-submit.sh will register the instance&#8217;s public IP address with CloudFlare and set the required protection. By default, protection and acceleration are off. Additional configuration is required to make this script work for your setup; account details must be configured in the specified variables!</p>
</li>
<li>
<p style="text-align: justify;"><strong>Lines 217 &#8211; 228</strong>: Setting proper script permissions, configuring automatic start of cloudflare-submit.sh and executing it to register with CloudFlare. </p>
</li>
</ol>
<p style="text-align: justify;">The code is reasonably straight-forward. The init.d startup script is divided into multiple functions, and output is redirected to a log file for debugging purposes. External dependencies are kept to a minimum. Distinguishing between AWS EC2 and Joyent instances is done by analyzing the instance ID. In AWS, all EC2 instances have instance IDs starting with &#8216;i-&#8217;, while Joyent uses (by the looks of it) some sort of UUID. This part of the logic is particularly important if the code is to be extended to support other cloud providers!</p>
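<p style="text-align: justify;">The provider check boils down to a simple prefix match on the instance ID; a standalone sketch of that logic (the sample IDs are illustrative):</p>

```shell
# Classify the cloud provider from an instance ID prefix:
# AWS EC2 instance IDs start with "i-", Joyent uses UUID-like IDs.
provider_from_id() {
    case "$1" in
        i-*) echo "aws" ;;
        *)   echo "joyent" ;;
    esac
}

provider_from_id "i-1234abcd"                              # prints: aws
provider_from_id "564d9f3a-1c2b-4e5f-8a9b-0c1d2e3f4a5b"    # prints: joyent
```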
<p style="text-align: justify;">Both AWS and Joyent offer Ubuntu 14.04 support, so the <strong>same code can be used to configure the instances</strong> in automated fashion. This is particularly handy when it comes to data-driven instance management and <a href="http://en.wikipedia.org/wiki/Don't_repeat_yourself">the DRY principle</a>. The command line tools for both cloud providers also offer similar syntax, which makes it easier to utilize this functionality.</p>
<h3>Amazon AWS</h3>
<p style="text-align: justify;">Starting new instances within Amazon AWS is straightforward, assuming <strong>awscli</strong> is properly configured:</p>
<p></p><pre class="crayon-plain-tag">aws ec2 run-instances \
    --image-id ami-018c9568 \
    --count 1 \
    --instance-type t1.micro \
    --key-name test-key \
    --security-groups test-sg \
    --user-data file://userdata-script.sh</pre><p></p>
<h3>Joyent</h3>
<p style="text-align: justify;">Starting new instances within Joyent is somewhat more complex, but there is comprehensive <a href="https://apidocs.joyent.com/cloudapi/">documentation</a>:</p>
<p></p><pre class="crayon-plain-tag">sdc-createmachine \
    --account account_name \
    --keyId aa:bb:cc:dd:ee:ff:gg:hh:ii:jj:kk:ll:mm:nn:oo:pp \
    --name test \
    --package "4dad8aa6-2c7c-e20a-be26-c7f4f1925a9a" \
    --tag Name=test \
    --url "https://us-east-1.api.joyentcloud.com" \
    --metadata "Name=test" \
    --image 286b0dc0-d09e-43f2-976a-bb1880ebdb6c \
    --script userdata-script.sh</pre><p></p>
<p style="text-align: justify;">This particular example will start a new instance using the <strong>4dad8aa6-2c7c-e20a-be26-c7f4f1925a9a</strong> package (g3-devtier-0.25-kvm, 3rd generation, virtual machine (KVM) with 256MB RAM) and the <strong>286b0dc0-d09e-43f2-976a-bb1880ebdb6c</strong> (ubuntu-certified-14.04) image. SSH key details are supplied through a specific combination of Web-interface settings and the SSH key signature. For the list of available packages (instance types) and images (software stacks) consult the API: <a href="https://apidocs.joyent.com/cloudapi/#ListPackages">ListPackages</a>, <a href="https://apidocs.joyent.com/cloudapi/#ListImages">ListImages</a>.</p>
<p style="text-align: justify;"><strong>NOTE</strong>: Joyent offers rich Metadata support, which can be quite flexible tool when managing large number of instances!</p>
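<p style="text-align: justify;">The package and image UUIDs can also be looked up from the command line. A sketch, assuming the node-smartdc tools <strong>sdc-listpackages</strong> and <strong>sdc-listimages</strong> are installed and that they accept the same <strong>--account</strong>/<strong>--url</strong> flags as the sdc-createmachine call above:</p>

```shell
# List available packages (instance types) and images (software stacks);
# both commands print JSON describing the offerings.
list_offerings()
{
    sdc-listpackages --account "$1" --url "$2"
    sdc-listimages   --account "$1" --url "$2"
}

# list_offerings account_name "https://us-east-1.api.joyentcloud.com"
```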
<h2>Successful service configuration</h2>
<p style="text-align: justify;">Successful service configuration will result in proper DNS entries to be added to the <strong>scalability.expert</strong> DNS zone in CloudFlare:</p>
<p><a href="http://blog.xi-group.com/wp-content/uploads/2014/07/Screen-Shot-2014-07-18-at-4.12.43-PM.png"><img src="http://blog.xi-group.com/wp-content/uploads/2014/07/Screen-Shot-2014-07-18-at-4.12.43-PM.png" alt="Screen Shot 2014-07-18 at 4.12.43 PM" width="1031" height="374" class="alignnone size-full wp-image-292 img-thumbnail img-responsive" /></a></p>
<p style="text-align: justify;">After the configured TTL, those should be visible world-wide:</p>
<p></p><pre class="crayon-plain-tag">:~> nslookup www.scalability.expert
Server:         8.8.4.4
Address:        8.8.4.4#53

Non-authoritative answer:
Name:   www.scalability.expert
Address: 54.83.175.90
Name:   www.scalability.expert
Address: 165.225.137.102

:~></pre><p></p>
<p style="text-align: justify;">As seen, both the AWS (54.83.175.90) and Joyent (165.225.137.102) IP addresses are returned, i.e. DNS Round-Robin. The service can simply be tested with:</p>
<p></p><pre class="crayon-plain-tag">:~> curl http://www.scalability.expert:8080/
<html>
    <head>
        <title>Demo Page</title>
    </head>

    <body>
        <center><h2>Demo Page</h2></center><br>
        <center>Status: running</center>
    </body>
</html>
:~></pre><p></p>
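<p style="text-align: justify;">Since round-robin DNS hands out the addresses in rotating order, it can be useful to probe each address individually. A small sketch (dig and curl are assumed to be available, and the loop only builds the per-address URLs):</p>

```shell
# Build the test URL for every A record currently returned for a name;
# each URL can then be probed individually with curl.
urls_for()
{
    for ip in $(dig +short "$1" A); do
        echo "http://${ip}:${2}/"
    done
}

# for url in $(urls_for www.scalability.expert 8080); do
#     curl -s -o /dev/null -w "%{http_code} ${url}\n" "$url"
# done
```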
<p style="text-align: justify;">Resulting calls can be seen in the NginX log files on both instances:</p>
<p><a href="http://blog.xi-group.com/wp-content/uploads/2014/07/Screen-Shot-2014-07-18-at-5.30.50-PM.png"><img src="http://blog.xi-group.com/wp-content/uploads/2014/07/Screen-Shot-2014-07-18-at-5.30.50-PM.png" alt="Screen Shot 2014-07-18 at 5.30.50 PM" width="1284" height="1083" class="alignnone size-full wp-image-301 img-thumbnail img-responsive" /></a></p>
<p style="text-align: justify;"><strong>NOTE</strong>: CloudFlare protection and acceleration features are explicitly disabled in this example! It is strongly suggested to enable them for production purposes!</p>
<h2>Conclusion</h2>
<p style="text-align: justify;">It should be clear by now that whenever the software architecture follows certain design principles and the application is properly decoupled into multiple tiers, the whole system can be deployed across multiple cloud providers. DevOps principles for automated deployment can be implemented in this environment as well. The overall system gains improved scalability, reliability and, in the case of data-driven elastic deployments, even cost! Proper design is key, but the technology provided by companies like Amazon and Joyent makes it easier to turn whiteboard drawings into actual systems with hundreds of nodes!</p>
<p>References</p>
<ul>
<li><a href="http://blog.xi-group.com/2014/06/how-to-implement-service-discovery-in-the-cloud/">How to implement Service Discovery in the Cloud</a></li>
<li><a href="https://aws.amazon.com/">https://aws.amazon.com/</a></li>
<li><a href="http://aws.amazon.com/cli/">http://aws.amazon.com/cli/</a></li>
<li><a href="http://www.joyent.com/">http://www.joyent.com/</a></li>
<li><a href="https://github.com/joyent/node-smartdc">https://github.com/joyent/node-smartdc</a></li>
<li><a href="https://apidocs.joyent.com/cloudapi/">https://apidocs.joyent.com/cloudapi/</a></li>
<li><a href="https://www.cloudflare.com/">https://www.cloudflare.com/</a></li>
<li><a href="https://www.cloudflare.com/docs/client-api.html">https://www.cloudflare.com/docs/client-api.html</a></li>
</ul>
<div class="rpbt_shortcode">
<h3>Related Posts</h3>
<ul>
					
			<li><a href="http://blog.xi-group.com/2014/06/how-to-implement-service-discovery-in-the-cloud/">How to implement Service Discovery in the Cloud</a></li>
					
			<li><a href="http://blog.xi-group.com/2015/02/how-to-deploy-single-node-hadoop-setup-in-aws/">How to deploy single-node Hadoop setup in AWS</a></li>
					
			<li><a href="http://blog.xi-group.com/2015/01/userdata-teplate-for-ubuntu-14-04-ec2-instances-in-aws/">UserData Template for Ubuntu 14.04 EC2 Instances in AWS</a></li>
					
			<li><a href="http://blog.xi-group.com/2014/11/small-tip-how-to-use-block-device-mappings-to-manage-instance-volumes-with-aws-cli/">Small Tip: How to use &#8211;block-device-mappings to manage instance volumes with AWS CLI</a></li>
					
			<li><a href="http://blog.xi-group.com/2015/01/small-tip-how-to-use-aws-cli-filter-parameter/">Small Tip: How to use AWS CLI &#8216;&#8211;filter&#8217; parameter</a></li>
			</ul>
</div>
]]></content:encoded>
			<wfw:commentRss>http://blog.xi-group.com/2014/07/how-to-implement-multi-cloud-deployment-for-scalability-and-reliability/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Small Tip: How to use AWS CLI to start Spot instances with UserData</title>
		<link>http://blog.xi-group.com/2014/07/small-tip-how-to-use-aws-cli-to-start-spot-instances-with-userdata/</link>
		<comments>http://blog.xi-group.com/2014/07/small-tip-how-to-use-aws-cli-to-start-spot-instances-with-userdata/#comments</comments>
		<pubDate>Sat, 12 Jul 2014 18:44:17 +0000</pubDate>
		<dc:creator><![CDATA[Ivo Vachkov]]></dc:creator>
				<category><![CDATA[AWS]]></category>
		<category><![CDATA[DevOps]]></category>
		<category><![CDATA[Operations]]></category>
		<category><![CDATA[Small Tip]]></category>
		<category><![CDATA[AWS CLI]]></category>
		<category><![CDATA[spot instances]]></category>
		<category><![CDATA[UserData]]></category>

		<guid isPermaLink="false">http://blog.xi-group.com/?p=185</guid>
		<description><![CDATA[Common occurrence in the list of daily DevOps tasks is the one to deal with AWS EC2 Spot Instances. They offer the same performance, as the OnDemand counterparts, they are cheap to the extend that user can specify the hourly price. The drawback is that AWS can reclaim them if the market price goes beyond [&#8230;]]]></description>
				<content:encoded><![CDATA[<p style="text-align: justify;">A common occurrence in the list of daily DevOps tasks is dealing with <a href="http://aws.amazon.com/ec2/purchasing-options/spot-instances/">AWS EC2 Spot Instances</a>. They offer the same performance as their OnDemand counterparts, and they are cheap to the extent that the user can specify the hourly price. The drawback is that AWS can reclaim them if the market price goes beyond the user&#8217;s price. Still, they are a key component, a basic building block, in every modern elastic system. As such, DevOps engineers must regularly interact with them.</p>
<p style="text-align: justify;">AWS provides a proper command line interface: <a href="http://docs.aws.amazon.com/cli/latest/reference/ec2/request-spot-instances.html">aws ec2 request-spot-instances</a> exposes multiple options to the user. However, some common use cases are not comprehensively covered in the documentation. For example, creating Spot Instances with UserData using the command line tools is somewhat obscure and convoluted, although it is a common need in the daily lives of DevOps engineers and Developers. The tricky part: <strong>it must be BASE64 encoded!</strong></p>
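<p style="text-align: justify;">The encoding step itself is a one-liner, with one caveat: GNU coreutils base64 wraps its output at 76 characters by default, and <strong>-w 0</strong> keeps it on a single line. A quick round-trip sketch against a stand-in file (the file content here is just an example):</p>

```shell
# Create a trivial stand-in for the real UserData script
printf '#!/bin/bash\necho hello\n' > userdata.sh

# -w 0 disables line wrapping, which matters when the encoded value is
# embedded in a JSON launch specification (GNU coreutils; the BSD/macOS
# base64 tool does not wrap by default)
USERDATA=$(base64 -w 0 userdata.sh)

# Round-trip check: decoding must reproduce the original script
echo "$USERDATA" | base64 -d | cmp -s - userdata.sh && echo "round-trip OK"
```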
<p style="text-align: justify;">Assume the following simple UserData script must be deployed on numerous EC2 Spot Instances:</p>
<p></p><pre class="crayon-plain-tag">#!/bin/bash -ex

# Debian apt-get install function
apt_get_install()
{
        DEBIAN_FRONTEND=noninteractive apt-get -y \
        -o DPkg::Options::=--force-confdef \
        -o DPkg::Options::=--force-confold \
        install $@
}

# Mark execution start
echo "STARTING" > /root/user_data_run

# Some initial setup
set -e -x
export DEBIAN_FRONTEND=noninteractive
apt-get update && apt-get upgrade -y

# Install required packages
apt_get_install nginx

# Create test html page
mkdir /var/www
cat > /var/www/index.html << "EOF"
<html>
        <head>
                <title>Demo Page</title>
                </head>

        <body>
                <center><h2>Demo Page</h2></center><br>
                <center>Status: running</center>
        </body>
</html>
EOF

# Configure NginX
cat > /etc/nginx/conf.d/demo.conf << "EOF"
# Minimal NginX VirtualHost setup
server {
        listen 8080;

        root /var/www;
        index index.html index.htm;

        location / {
                try_files $uri $uri/ =404;
        }
}
EOF

# Restart NginX with the new settings
/etc/init.d/nginx restart

# Mark execution end
echo "DONE" > /root/user_data_run</pre><p></p>
<p style="text-align: justify;">Make sure the base64 command (or an equivalent) is available on your system to encode the sample userdata.sh file before passing it to the launch specification:</p>
<p></p><pre class="crayon-plain-tag">aws ec2 request-spot-instances \
    --spot-price 0.01 \
    --instance-count 2 \
    --launch-specification \
        "{ \
            \"ImageId\":\"ami-a6926dce\", \
            \"InstanceType\":\"m3.medium\", \
            \"KeyName\":\"test-key\", \
            \"SecurityGroups\": [\"test-sg\"], \
            \"UserData\":\"`base64 userdata.sh`\" \
        }"</pre><p></p>
<p style="text-align: justify;">In this example <strong>two</strong> spot instance requests will be created for <strong>m3.medium</strong> instances, using the <strong>ami-a6926dce</strong> AMI, the <strong>test-key</strong> SSH key, running in the <strong>test-sg</strong> Security Group. The BASE64-encoded contents of <strong>userdata.sh</strong> will be attached to the request, so upon fulfillment the UserData will be passed to the newly created instances and executed after boot-up.</p>
<p style="text-align: justify;">Spot instance requests will be created in the AWS EC2 Dashboard:</p>
<p><a href="http://blog.xi-group.com/wp-content/uploads/2014/07/Screen-Shot-2014-07-12-at-9.11.20-PM.png"><img src="http://blog.xi-group.com/wp-content/uploads/2014/07/Screen-Shot-2014-07-12-at-9.11.20-PM.png" alt="Screen Shot 2014-07-12 at 9.11.20 PM" width="1240" height="198" class="alignnone size-full wp-image-271 img-thumbnail img-responsive" /></a></p>
<p style="text-align: justify;">Once the Spot Instance Requests (SIRs) are fulfilled, an InstanceID will be associated with each SIR:</p>
<p><a href="http://blog.xi-group.com/wp-content/uploads/2014/07/Screen-Shot-2014-07-12-at-9.18.24-PM.png"><img src="http://blog.xi-group.com/wp-content/uploads/2014/07/Screen-Shot-2014-07-12-at-9.18.24-PM.png" alt="Screen Shot 2014-07-12 at 9.18.24 PM" width="1237" height="198" class="alignnone size-full wp-image-272 img-thumbnail img-responsive" /></a></p>
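<p style="text-align: justify;">The same association can be read from the command line as well. A sketch using <strong>aws ec2 describe-spot-instance-requests</strong>, where the JMESPath <strong>--query</strong> expression keeps only fulfilled (active) requests:</p>

```shell
# List the InstanceIds of all currently active (fulfilled) spot requests
fulfilled_instances()
{
    aws ec2 describe-spot-instance-requests \
        --query 'SpotInstanceRequests[?State==`active`].InstanceId' \
        --output text
}

# fulfilled_instances
```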
<p style="text-align: justify;">EC2 Instances dashboard will show newly created Spot Instances (notice the &#8220;<strong>Lifecycle: spot</strong>&#8221; in Instance details):</p>
<p><a href="http://blog.xi-group.com/wp-content/uploads/2014/07/Screen-Shot-2014-07-12-at-9.20.30-PM.png"><img src="http://blog.xi-group.com/wp-content/uploads/2014/07/Screen-Shot-2014-07-12-at-9.20.30-PM.png" alt="Screen Shot 2014-07-12 at 9.20.30 PM" width="2286" height="194" class="alignnone size-full wp-image-273 img-thumbnail img-responsive" /></a></p>
<p style="text-align: justify;">Using the proper credentials, one can verify successful execution of the userdata.sh on each instance:</p>
<p></p><pre class="crayon-plain-tag">:~> ssh -i ~/.ssh/test-key.pem ubuntu@ec2-54-211-6-104.compute-1.amazonaws.com "tail /var/log/cloud-init-output.log"
Setting up nginx (1.4.6-1ubuntu3) ...
Processing triggers for libc-bin (2.19-0ubuntu6) ...
+ mkdir /var/www
+ cat
+ cat
+ /etc/init.d/nginx restart
 * Restarting nginx nginx
   ...done.
+ echo DONE
Cloud-init v. 0.7.5 finished at Sat, 12 Jul 2014 18:17:09 +0000. Datasource DataSourceEc2.  Up 76.38 seconds
:~></pre><p></p>
<p style="text-align: justify;">&#8230; and more importantly, if the configured service works as expected:</p>
<p></p><pre class="crayon-plain-tag">:~> curl http://ec2-54-211-6-104.compute-1.amazonaws.com:8080/
<html>
        <head>
                <title>Demo Page</title>
                </head>

        <body>
                <center><h2>Demo Page</h2></center><br>
                <center>Status: running</center>
        </body>
</html>
:~></pre><p></p>
<p style="text-align: justify;">The newly created Spot Instances are serving traffic, running at no more than 0.01 USD/hr, and will happily do so until the market price for this instance type goes above the specified price!</p>
<p>References</p>
<ul>
<li><a href="http://docs.aws.amazon.com/cli/latest/reference/ec2/request-spot-instances.html">http://docs.aws.amazon.com/cli/latest/reference/ec2/request-spot-instances.html</a></li>
</ul>
<div class="rpbt_shortcode">
<h3>Related Posts</h3>
<ul>
					
			<li><a href="http://blog.xi-group.com/2015/01/userdata-teplate-for-ubuntu-14-04-ec2-instances-in-aws/">UserData Template for Ubuntu 14.04 EC2 Instances in AWS</a></li>
					
			<li><a href="http://blog.xi-group.com/2015/01/small-tip-how-to-use-aws-cli-filter-parameter/">Small Tip: How to use AWS CLI &#8216;&#8211;filter&#8217; parameter</a></li>
					
			<li><a href="http://blog.xi-group.com/2014/11/small-tip-how-to-use-block-device-mappings-to-manage-instance-volumes-with-aws-cli/">Small Tip: How to use &#8211;block-device-mappings to manage instance volumes with AWS CLI</a></li>
					
			<li><a href="http://blog.xi-group.com/2014/06/small-tip-ebs-volume-allocation-time-is-linear-to-the-size-and-unrelated-to-the-instance-type/">Small Tip: EBS volume allocation time is linear to the size and unrelated to the instance type</a></li>
					
			<li><a href="http://blog.xi-group.com/2015/02/how-to-deploy-single-node-hadoop-setup-in-aws/">How to deploy single-node Hadoop setup in AWS</a></li>
			</ul>
</div>
]]></content:encoded>
			<wfw:commentRss>http://blog.xi-group.com/2014/07/small-tip-how-to-use-aws-cli-to-start-spot-instances-with-userdata/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Small Tip: EBS volume allocation time is linear to the size and unrelated to the instance type</title>
		<link>http://blog.xi-group.com/2014/06/small-tip-ebs-volume-allocation-time-is-linear-to-the-size-and-unrelated-to-the-instance-type/</link>
		<comments>http://blog.xi-group.com/2014/06/small-tip-ebs-volume-allocation-time-is-linear-to-the-size-and-unrelated-to-the-instance-type/#comments</comments>
		<pubDate>Mon, 23 Jun 2014 07:20:27 +0000</pubDate>
		<dc:creator><![CDATA[Ivo Vachkov]]></dc:creator>
				<category><![CDATA[AWS]]></category>
		<category><![CDATA[DevOps]]></category>
		<category><![CDATA[Operations]]></category>
		<category><![CDATA[Small Tip]]></category>
		<category><![CDATA[allocation time]]></category>
		<category><![CDATA[AWS CLI]]></category>
		<category><![CDATA[EBS]]></category>
		<category><![CDATA[volume]]></category>

		<guid isPermaLink="false">http://blog.xi-group.com/?p=66</guid>
		<description><![CDATA[Due to fluctuations in startup times for instances in AWS, it was speculated that allocation of EBS volumes may be the reason for the nondeterministic behavior. This led to an interesting discussion and finally to a small test to determine how volume size of an EBS volume allocated with an instance affect its startup time. [&#8230;]]]></description>
				<content:encoded><![CDATA[<p style="text-align: justify;">Due to fluctuations in startup times for instances in AWS, it was speculated that the allocation of EBS volumes may be the reason for the nondeterministic behavior. This led to an interesting discussion and finally to a small test to determine how the size of an EBS volume allocated with an instance affects its startup time.</p>
<p style="text-align: justify;">To gather some results the following script was created: <a href="https://s3-us-west-2.amazonaws.com/blog.xi-group.com/aws-ebs-allocation-times/aws-single.sh">https://s3-us-west-2.amazonaws.com/blog.xi-group.com/aws-ebs-allocation-times/aws-single.sh</a>. It will create one instance of the specified type with <strong>N</strong> GB of Root EBS volume, wait for the instance to properly start and then terminate it. The time for the whole process is measured (i.e. the full &#8216;time-to-service&#8217;).</p>
<p>The script was run multiple times for each instance type and EBS volume size. Results are presented in the following table:</p>
<table  width="100%" class="table table-bordered table-striped">
<tr>
<th></th>
<th>t1.micro</th>
<th>c1.xlarge</th>
<th>m3.xlarge</th>
<th>m3.2xlarge</th>
<th>m2.4xlarge</th>
</tr>
<tr>
<td>20 GB</td>
<td>~ 1m 50s</td>
<td>~ 1m 45s</td>
<td>~ 1m 50s</td>
<td>~ 2m 15s</td>
<td>~ 3m 20s</td>
</tr>
<tr>
<td>50 GB</td>
<td>~ 2m 45s</td>
<td>~ 2m 40s</td>
<td>~ 2m 50s</td>
<td>~ 2m 40s</td>
<td>~ 3m 10s</td>
</tr>
<tr>
<td>100 GB</td>
<td>~ 3m 45s</td>
<td>~ 3m 30s</td>
<td>~ 3m 30s</td>
<td>~ 4m 20s</td>
<td>~ 5m 00s</td>
</tr>
<tr>
<td>200 GB</td>
<td>~ 6m 00s</td>
<td>~ 6m 10s</td>
<td>~ 9m 00s</td>
<td>~ 5m 45s</td>
<td>~ 7m 30s</td>
</tr>
</table>
<p>Graphical representation:<br />
<a href="http://blog.xi-group.com/wp-content/uploads/2014/06/Screen-Shot-2014-06-23-at-9.49.13-AM.png"><img src="http://blog.xi-group.com/wp-content/uploads/2014/06/Screen-Shot-2014-06-23-at-9.49.13-AM.png" alt="Screen Shot 2014-06-23 at 9.49.13 AM" width="968" height="600" class="alignnone size-full wp-image-167 img-thumbnail img-responsive" /></a></p>
<p style="text-align: justify;">As shown, instance start time grows linearly with the size of the EBS Root volume. Moral of the story:</p>
<p><center><strong>The more EBS storage you allocate at boot, the slower the instance will start!</strong></center></p>
<p style="text-align: justify;">NOTE: The whole procedure is reasonably time-consuming if you gather multiple data points (in this case, for each instance type / volume size combination the script was run 3 times and the average value is shown). It will also cost money, since all EC2 allocations will be charged for at least an hour. The script provided here is &#8216;AS IS&#8217; and can be used as a reference. Be sure to understand it and properly modify it before running it!</p>
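<p style="text-align: justify;">The core of the measurement can be sketched as follows. This is a simplified stand-in for the linked script, not a reproduction of it: <strong>aws ec2 wait instance-running</strong> blocks until the instance reaches the running state, and the AMI, instance type and device name are placeholders:</p>

```shell
# Measure 'time-to-running' for one instance with an N GB root volume;
# the instance is terminated afterwards. Simplified: no error handling.
time_to_service()
{
    ami="$1"; type="$2"; size="$3"
    start=$(date +%s)
    id=$(aws ec2 run-instances \
        --image-id "$ami" --instance-type "$type" \
        --block-device-mappings \
            "[{\"DeviceName\":\"/dev/sda1\",\"Ebs\":{\"VolumeSize\":${size}}}]" \
        --query 'Instances[0].InstanceId' --output text)
    aws ec2 wait instance-running --instance-ids "$id"
    end=$(date +%s)
    echo "$(( end - start ))s to running state for ${size}GB root volume"
    aws ec2 terminate-instances --instance-ids "$id" > /dev/null
}

# time_to_service ami-018c9568 t1.micro 20
```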
<div class="rpbt_shortcode">
<h3>Related Posts</h3>
<ul>
					
			<li><a href="http://blog.xi-group.com/2015/01/small-tip-how-to-use-aws-cli-filter-parameter/">Small Tip: How to use AWS CLI &#8216;&#8211;filter&#8217; parameter</a></li>
					
			<li><a href="http://blog.xi-group.com/2014/11/small-tip-how-to-use-block-device-mappings-to-manage-instance-volumes-with-aws-cli/">Small Tip: How to use &#8211;block-device-mappings to manage instance volumes with AWS CLI</a></li>
					
			<li><a href="http://blog.xi-group.com/2014/07/small-tip-how-to-use-aws-cli-to-start-spot-instances-with-userdata/">Small Tip: How to use AWS CLI to start Spot instances with UserData</a></li>
					
			<li><a href="http://blog.xi-group.com/2015/02/how-to-deploy-single-node-hadoop-setup-in-aws/">How to deploy single-node Hadoop setup in AWS</a></li>
					
			<li><a href="http://blog.xi-group.com/2015/01/userdata-teplate-for-ubuntu-14-04-ec2-instances-in-aws/">UserData Template for Ubuntu 14.04 EC2 Instances in AWS</a></li>
			</ul>
</div>
]]></content:encoded>
			<wfw:commentRss>http://blog.xi-group.com/2014/06/small-tip-ebs-volume-allocation-time-is-linear-to-the-size-and-unrelated-to-the-instance-type/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>How to implement Service Discovery in the Cloud</title>
		<link>http://blog.xi-group.com/2014/06/how-to-implement-service-discovery-in-the-cloud/</link>
		<comments>http://blog.xi-group.com/2014/06/how-to-implement-service-discovery-in-the-cloud/#comments</comments>
		<pubDate>Tue, 17 Jun 2014 13:51:42 +0000</pubDate>
		<dc:creator><![CDATA[Ivo Vachkov]]></dc:creator>
				<category><![CDATA[AWS]]></category>
		<category><![CDATA[Development]]></category>
		<category><![CDATA[DevOps]]></category>
		<category><![CDATA[theCloud]]></category>
		<category><![CDATA[AWS CLI]]></category>
		<category><![CDATA[cloud]]></category>
		<category><![CDATA[distributed systems]]></category>
		<category><![CDATA[dns]]></category>
		<category><![CDATA[dns-sd]]></category>
		<category><![CDATA[elastic computing]]></category>
		<category><![CDATA[service discovery]]></category>

		<guid isPermaLink="false">http://blog.xi-group.com/?p=49</guid>
		<description><![CDATA[Introduction Service Discovery is not new technology. Unfortunately, it is barely understood and rarely implemented. It is a problem that many system architects face and it is key to multiple desirable qualities of a modern, cloud enabled, elastic distributed system such as reliability, availability, maintainability. There are multiple ways to approach service discovery: Hardcode service [&#8230;]]]></description>
				<content:encoded><![CDATA[<h2>Introduction</h2>
<p style="text-align: justify;"><a title="Service Discovery" href="http://en.wikipedia.org/wiki/Service_discovery">Service Discovery</a> is not new technology. Unfortunately, it is barely understood and rarely implemented. It is a problem that many system architects face and it is key to multiple desirable qualities of a modern, cloud enabled, elastic distributed system such as <strong>reliability</strong>, <strong>availability</strong>, <strong>maintainability</strong>. There are multiple ways to approach service discovery:</p>
<ul>
<li>Hardcode service locations;</li>
<li>Develop proprietary solution;</li>
<li>Use existing technology.</li>
</ul>
<p style="text-align: justify;">Hardcoding is still the common case. How often do you encounter hardcoded URLs in configuration files?! Developing a proprietary solution has become popular too. Multiple companies decided to address Service Discovery by implementing some sort of distributed key-value store. Amongst the popular ones: CoreOS&#8217;s <a href="https://github.com/coreos/etcd">etcd</a>, Heroku&#8217;s <a href="https://github.com/ha/doozerd">Doozer</a>, Apache <a href="http://zookeeper.apache.org">ZooKeeper</a>, Google&#8217;s <a href="http://research.google.com/archive/chubby.html">Chubby</a>. Even <a href="http://redis.io">Redis</a> can be used for such purposes. But in many cases the additional software layers and programming complexity are not needed. There is an already existing solution based on DNS. It is called DNS-SD and is defined in <a href="http://tools.ietf.org/html/rfc6763">RFC6763</a>.</p>
<p style="text-align: justify;">DNS-SD utilizes <strong>PTR</strong>, <strong>SRV</strong> and <strong>TXT</strong> DNS records to provide flexible service discovery. All major DNS implementations support it. All major cloud providers support it. DNS is a well established technology, well understood by both Operations and Development personnel, with strong support in programming languages and libraries. It is highly available through replication.</p>
<h2>How does DNS-SD work?</h2>
<p style="text-align: justify;">DNS-SD uses three DNS records types: PTR, SRV, TXT:</p>
<ul>
<li><strong>PTR</strong> record is defined in <a href="http://tools.ietf.org/html/rfc1035">RFC1035</a> as &#8220;domain name pointer&#8221;. Unlike with CNAME records, no processing of the contents is performed; the data is returned directly.</li>
<li><strong>SRV</strong> record is defined in <a href="http://tools.ietf.org/html/rfc2782">RFC2782</a> as &#8220;service locator&#8221;. It provides a protocol-agnostic way to locate services, in contrast to MX records. It contains four components: <strong>priority</strong>, <strong>weight</strong>, <strong>port</strong> and <strong>target</strong>.</li>
<li><strong>TXT</strong> record is defined in <a href="http://tools.ietf.org/html/rfc1035">RFC1035</a> as &#8220;text string&#8221;.</li>
</ul>
<p style="text-align: justify;">There are multiple specifics around protocol and service naming conventions that are beyond the scope of this post. For more information please refer to <a href="http://tools.ietf.org/html/rfc6763">RFC6763</a>. For the purposes of this article, it is assumed that a proprietary TCP-based service called <strong>theService</strong>, which has different incarnations, runs on <strong>TCP port 4218</strong> on multiple hosts. The basic idea is:</p>
<ol>
<li>Create a pointer record for _theService that contains all available incarnations of the service;</li>
<li>For each incarnation create SRV record (where the service is located) and TXT record (any additional information for the client) that specify the service details.</li>
</ol>
<p>This is what a sample configuration looks like in AWS Route53 for the unilans.net. domain:<br />
<a href="http://blog.xi-group.com/wp-content/uploads/2014/06/Screen-Shot-2014-06-14-at-10.41.42-PM.png"><img class="alignnone size-full wp-image-101 img-thumbnail img-responsive" src="http://blog.xi-group.com/wp-content/uploads/2014/06/Screen-Shot-2014-06-14-at-10.41.42-PM.png" alt="Screen Shot 2014-06-14 at 10.41.42 PM" width="1482" height="198" /></a></p>
<p>Using <strong>nslookup</strong>, the results can be verified:</p><pre class="crayon-plain-tag">:~&gt; nslookup -q=PTR _theService._tcp.unilans.net.
Server:         8.8.8.8
Address:        8.8.8.8#53

Non-authoritative answer:
_theService._tcp.unilans.net    name = _incarnation1._theService._tcp.unilans.net.
_theService._tcp.unilans.net    name = _incarnation2._theService._tcp.unilans.net.

Authoritative answers can be found from:

:~&gt; nslookup -q=any _incarnation1._theService._tcp.unilans.net.
Server:         8.8.8.8
Address:        8.8.8.8#53

Non-authoritative answer:
_incarnation1._theService._tcp.unilans.net      text = "txtvers=1\; data=sampledata\;"
_incarnation1._theService._tcp.unilans.net      service = 0 0 4218 host1.unilans.net.

Authoritative answers can be found from:

:~&gt;</pre><p></p>
<p>Now a client that wants to use incarnation1 of theService has the means to access it (Host: <strong>host1.unilans.net</strong>, Port: <strong>4218</strong>).</p>
<p>Load-balancing can be implemented by adding another entry to the service locator record with the same priority and weight:<br />
<a href="http://blog.xi-group.com/wp-content/uploads/2014/06/Screen-Shot-2014-06-14-at-10.42.28-PM.png"><img class="alignnone size-full wp-image-103 img-thumbnail img-responsive" src="http://blog.xi-group.com/wp-content/uploads/2014/06/Screen-Shot-2014-06-14-at-10.42.28-PM.png" alt="Screen Shot 2014-06-14 at 10.42.28 PM" width="1484" height="241" /></a></p>
<p>Resulting DNS lookup:</p><pre class="crayon-plain-tag">:~&gt; nslookup -q=any _incarnation1._theService._tcp.unilans.net.
Server:         8.8.8.8
Address:        8.8.8.8#53

Non-authoritative answer:
_incarnation1._theService._tcp.unilans.net      text = "txtvers=1\; data=sampledata\;"
_incarnation1._theService._tcp.unilans.net      service = 0 0 4218 host1.unilans.net.
_incarnation1._theService._tcp.unilans.net      service = 0 0 4218 host100.unilans.net.

Authoritative answers can be found from:

:~&gt;</pre><p></p>
<p>In a similar way, fail-over can be implemented by using a different priority (or load distribution by using different weights):<br />
<a href="http://blog.xi-group.com/wp-content/uploads/2014/06/Screen-Shot-2014-06-14-at-10.54.41-PM.png"><img class="alignnone size-full wp-image-105 img-thumbnail img-responsive" src="http://blog.xi-group.com/wp-content/uploads/2014/06/Screen-Shot-2014-06-14-at-10.54.41-PM.png" alt="Screen Shot 2014-06-14 at 10.54.41 PM" width="1483" height="241" /></a></p>
<p>Resulting DNS lookup:</p><pre class="crayon-plain-tag">:~&gt; nslookup -q=any _incarnation1._theService._tcp.unilans.net.
Server:         8.8.8.8
Address:        8.8.8.8#53

Non-authoritative answer:
_incarnation1._theService._tcp.unilans.net      text = "txtvers=1\; data=sampledata\;"
_incarnation1._theService._tcp.unilans.net      service = 0 0 4218 host1.unilans.net.
_incarnation1._theService._tcp.unilans.net      service = 1 0 4218 host100.unilans.net.

Authoritative answers can be found from:

:~&gt;</pre><p></p>
<p>NOTE: With DNS, the client is the one to implement the load-balancing or the fail-over (although there are exceptions to this rule)!</p>
<h2>Benefits of using DNS-SD for Service Discovery</h2>
<p style="text-align: justify;">This technology can be used to support multiple versions of a service. Using the built-in support for different incarnations of the same service, versioning can be implemented in a clean, granular way. This is a common problem in REST systems, usually solved by nasty URL schemes or URL rewriting. With DNS-SD the required metadata can be passed through the TXT records and multiple versions of the communication protocol can be supported, each in a contained environment &#8230; No namespace pollution, no clumsy URL schemes, no URL rewriting &#8230;</p>
<p style="text-align: justify;">This technology can be utilized to <strong>reduce complexity</strong> while building distributed systems. The clients will most certainly go through the process of name resolution anyway, so why not incorporate service discovery into it?! Instead of dealing with an external system (installation, operation, maintenance) and all its possible issues (hard to configure, hard to maintain, immature, fault-intolerant, requires additional libraries in the codebase, etc.), incorporate this into name resolution. DNS is well supported on virtually all operating systems and in all programming languages that provide network programming abilities. System architecture complexity is reduced because a subsystem that already exists provides the additional service, instead of introducing new systems.</p>
<p style="text-align: justify;">This technology can be utilized to <strong>increase reliability / fault-tolerance</strong>. Reliability / fault-tolerance can be easily increased by serving multiple entries with the service locator records. <strong>Priority</strong> can be used by the client to go through the list of entries in a controlled manner, and <strong>weight</strong> to balance the load between the service providers on each priority level. The combination of backend support (a control plane updating the DNS-SD records) and reasonably intelligent clients (implementing service discovery and priority/weight parsing) should give granular control over the fail-over and load-balancing processes in the communication between multiple entities.</p>
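<p style="text-align: justify;">A reasonably intelligent client needs little more than a resolver and a sort. A sketch of the priority-based selection in shell (SRV answer fields per RFC2782 are priority, weight, port, target; dig is assumed to be available):</p>

```shell
# Pick the SRV target with the lowest (i.e. most preferred) priority
# and print it as host:port. A full client would also spread load
# across equal-priority entries according to their weights.
pick_target()
{
    dig +short SRV "$1" | sort -n -k1,1 | head -1 | awk '{ print $4 ":" $3 }'
}

# pick_target _incarnation1._theService._tcp.unilans.net.
```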
<p style="text-align: justify;">This technology supports <strong>system elasticity</strong>. Modern cloud service providers have APIs to control DNS zones. In this article, AWS Route53 will be used to demonstrate how elastic service can be introduced through DNS-SD to clients. Backend service scaling logic can modify service locator records to reflect current service state as far as DNS zone modification API is available. This is just part of the control plane for the service &#8230;</p>
<p style="text-align: justify;">Bonus point: <strong>DNS also gives you simple, replicated key-value store through TXT records!</strong></p>
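<p style="text-align: justify;">Per RFC 6763, each string in such a TXT record carries a <code>key=value</code> pair. A minimal parser sketch (the helper name is ours) that turns those strings into a dictionary:</p>

```python
def parse_txt_metadata(strings):
    """Parse DNS-SD TXT record strings ("key=value" per RFC 6763)
    into a dict. A key with no '=' maps to an empty string."""
    meta = {}
    for s in strings:
        key, _, value = s.partition("=")
        if key:  # skip empty strings
            meta[key] = value
    return meta

meta = parse_txt_metadata(["version=v1", "path=/", "secure"])
# meta == {"version": "v1", "path": "/", "secure": ""}
```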
<h2>Implementation of Service Discovery with DNS-SD, AWS Route53, AWS IAM and AWS EC2 UserData</h2>
<p>Following is a set of steps and sample code to implement Service Discovery in AWS, using Route53, IAM and EC2.</p>
<h3>Manual configuration</h3>
<p>1. Create <strong>PTR</strong> and <strong>TXT</strong> records for <strong>theService</strong> in Route53:<br />
<a href="http://blog.xi-group.com/wp-content/uploads/2014/06/Screen-Shot-2014-06-16-at-2.07.22-PM.png"><img src="http://blog.xi-group.com/wp-content/uploads/2014/06/Screen-Shot-2014-06-16-at-2.07.22-PM.png" alt="Screen Shot 2014-06-16 at 2.07.22 PM" width="1482" height="51" class="alignnone size-full wp-image-123 img-thumbnail img-responsive" /></a></p>
<p>This is a simple example for one service with one incarnation (v1).</p>
<p>NOTE: There is no SRV record yet because the service is not currently running anywhere! Active service providers will create/update/delete the SRV entries. </p>
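<p style="text-align: justify;">The naming scheme behind these records can be captured in a few lines (a sketch of the convention used in this article; the helper is hypothetical):</p>

```python
def dnssd_name(service, protocol, zone, version=None):
    """Build a DNS-SD owner name following this article's scheme:
    _<service>._<protocol>.<zone> for the generic PTR name and
    _<version>._<service>._<protocol>.<zone> for an incarnation."""
    base = "_{0}._{1}.{2}".format(service, protocol, zone)
    if version is None:
        return base
    return "_{0}.{1}".format(version, base)

ptr_name = dnssd_name("theservice", "tcp", "unilans.net")
v1_name = dnssd_name("theservice", "tcp", "unilans.net", version="v1")
# ptr_name == "_theservice._tcp.unilans.net"
# v1_name  == "_v1._theservice._tcp.unilans.net"
```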
<p>2. Create IAM role for EC2 instances to be able to modify DNS records in desired Zone:<br />
<a href="http://blog.xi-group.com/wp-content/uploads/2014/06/Screen-Shot-2014-06-16-at-2.24.34-PM.png"><img src="http://blog.xi-group.com/wp-content/uploads/2014/06/Screen-Shot-2014-06-16-at-2.24.34-PM.png" alt="Screen Shot 2014-06-16 at 2.24.34 PM" width="1549" height="281" class="alignnone size-full wp-image-124 img-thumbnail img-responsive" /></a></p>
<p>Use the following policy:</p><pre class="crayon-plain-tag">{
   "Version": "2012-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Action":[
            "route53:ListHostedZones"
         ],
         "Resource":"*"
      },
      {
         "Effect":"Allow",
         "Action":[
            "route53:GetHostedZone", 
            "route53:ListResourceRecordSets",
            "route53:ChangeResourceRecordSets"
         ],
         "Resource":"arn:aws:route53:::hostedzone/XXXXYYYYZZZZ"
      },
      {
         "Effect":"Allow",
         "Action":[
            "route53:GetChange"
         ],
         "Resource":"arn:aws:route53:::change/*"
      }
   ]
}</pre><p></p>
<p>&#8230; where <strong>XXXXYYYYZZZZ</strong> is your hosted zone ID!</p>
<h3>Automated JOIN/LEAVE in service group</h3>
<p style="text-align: justify;">The manual settings outlined in the previous section give the basic framework of the DNS-SD setup. There is no SRV record since there are no active instances providing the service. Ideally, each active service provider will register with the service group when it comes up and de-register when it goes away. This is the key point: DNS-SD integrates cleanly with the elastic nature of the cloud. Once this integration is in place, clients only need to resolve DNS records to obtain the list of active service providers. For demonstration purposes the following script was created:</p>
<p></p><pre class="crayon-plain-tag">#!/usr/bin/env python

# The following code modifies AWS Route53 entries to demonstrate usage of DNS-SD in cloud environments
#
# To JOIN Service group:
# 	dns-sd.py -z unilans.net -s _v1._theservice._tcp.unilans.net. -p 8080 join
#
# To LEAVE Service group:
#	dns-sd.py -z unilans.net -s _v1._theservice._tcp.unilans.net. -p 8080 leave
#
# NOTE: THIS IS FOR DEMONSTRATION PURPOSES ONLY! ERROR HANDLING IS ABSOLUTELY MINIMAL! THIS IS *NOT* PRODUCTION CODE!

import sys
import copy
import argparse

import requests
import boto.route53

def main():
	"""
	Main entry point
	"""

	# Parse command line arguments
	parser = argparse.ArgumentParser(description='Example code to update service records in Route53 hosted DNS zones')
	parser.add_argument('-z', '--zone', type=str, required=True, dest='zone', help='Zone Name')
	parser.add_argument('-s', '--service', type=str, required=True, dest='service', help='Service Name')
	parser.add_argument('-p', '--port', type=int, required=True, dest='port', help='Service Port')
	parser.add_argument('operation', metavar='OPERATION', type=str, help='Operation [join|leave]', choices=['join', 'leave'])

	args = parser.parse_args()
	operation = args.operation
	zone = args.zone
	service = args.service
	port = args.port

	# Establish connection to Route53 API
	conn = boto.route53.connection.Route53Connection()

	# Get zone handler
	z = conn.get_zone(zone)
	if not z:
		print "{progname}: Wrong or inaccessible zone!".format(progname=sys.argv[0])
		sys.exit(-1)

	# Get EC2 Public IP Address
	response = requests.get('http://169.254.169.254/latest/meta-data/public-ipv4')
	if response.status_code == 200:
		public_ipv4 = response.text
	else:
		print "{progname}: Unable to obtain public IP address from AWS!".format(progname=sys.argv[0])
		sys.exit(-1)

	# Generate domain-specific hostname
	fqdn_hostname = '{hostname}.{zone}'.format(hostname=public_ipv4.replace(".", "-"), zone=zone)

	# Act, based on operation request (join | leave)
	if operation.upper() == 'join'.upper():
		# Create A record
		z.add_a(fqdn_hostname, public_ipv4, ttl=60)

		# Obtain service locator records
		r = z.find_records(service, 'SRV')
		if not r:
			# Create SRV record
			srv_value = u'0 0 {port} {fqdn}'.format(port=port, fqdn=fqdn_hostname)
			z.add_record('SRV', service, srv_value, ttl=60)
		else:
			# Add to SRV record
			srv_value = u'0 0 {port} {fqdn}'.format(port=port, fqdn=fqdn_hostname)
			tmp_r = copy.deepcopy(r)
			tmp_r.resource_records.append(srv_value)
			z.update_record(r, tmp_r.resource_records)

	elif operation.upper() == 'leave'.upper():
		# Remove entry from the SRV record
		r = z.find_records(service, 'SRV')
		if r:
			tmp_r = copy.deepcopy(r)
			# Iterate over a copy so that remove() does not skip entries
			for record in list(tmp_r.resource_records):
				if fqdn_hostname in record:
					tmp_r.resource_records.remove(record)

			if len(tmp_r.resource_records) == 0:
				# Remove the SRV entry itself
				z.delete_record(r)
			else:
				# Update the SRV record
				z.update_record(r, tmp_r.resource_records)

		# Remove A record
		r = z.find_records(fqdn_hostname, 'A')
		if r:
			z.delete_record(r)

	else:
		print "{progname}: Wrong operation!".format(progname=sys.argv[0])
		sys.exit(-1)

if __name__ == '__main__':
	main()</pre><p></p>
<p>Copy of the code can be downloaded from <a href="https://s3-us-west-2.amazonaws.com/blog.xi-group.com/aws-route53-iam-ec2-dns-sd/dns-sd.py">https://s3-us-west-2.amazonaws.com/blog.xi-group.com/aws-route53-iam-ec2-dns-sd/dns-sd.py</a></p>
<p>This code, given a DNS zone, service name and service port, will update the necessary DNS records to join or leave the service group. </p>
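<p style="text-align: justify;">The core of the registration is the hostname derivation: dots in the instance's public IP become dashes, and the zone name is appended. The same logic, isolated from the script:</p>

```python
def fqdn_for_ip(public_ipv4, zone):
    """Derive the domain-specific hostname registered by dns-sd.py,
    e.g. 54.210.1.2 in zone unilans.net -> 54-210-1-2.unilans.net"""
    return "{0}.{1}".format(public_ipv4.replace(".", "-"), zone)

fqdn = fqdn_for_ip("54.210.1.2", "unilans.net")
# fqdn == "54-210-1-2.unilans.net"
```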
<p>Starting with initial state:<br />
<a href="http://blog.xi-group.com/wp-content/uploads/2014/06/Screen-Shot-2014-06-16-at-4.59.40-PM.png"><img src="http://blog.xi-group.com/wp-content/uploads/2014/06/Screen-Shot-2014-06-16-at-4.59.40-PM.png" alt="Screen Shot 2014-06-16 at 4.59.40 PM" width="1482" height="51" class="alignnone size-full wp-image-128 img-thumbnail img-responsive" /></a></p>
<p>Executing JOIN:</p><pre class="crayon-plain-tag">dns-sd.py -z unilans.net -s _v1._theservice._tcp.unilans.net. -p 8080 join</pre><p></p>
<p>Result:<br />
<a href="http://blog.xi-group.com/wp-content/uploads/2014/06/Screen-Shot-2014-06-16-at-4.59.07-PM.png"><img src="http://blog.xi-group.com/wp-content/uploads/2014/06/Screen-Shot-2014-06-16-at-4.59.07-PM.png" alt="Screen Shot 2014-06-16 at 4.59.07 PM" width="1483" height="103" class="alignnone size-full wp-image-129 img-thumbnail img-responsive" /></a></p>
<p>Executing LEAVE:</p><pre class="crayon-plain-tag">dns-sd.py -z unilans.net -s _v1._theservice._tcp.unilans.net. -p 8080 leave</pre><p></p>
<p>Result:<br />
<a href="http://blog.xi-group.com/wp-content/uploads/2014/06/Screen-Shot-2014-06-16-at-4.59.40-PM1.png"><img src="http://blog.xi-group.com/wp-content/uploads/2014/06/Screen-Shot-2014-06-16-at-4.59.40-PM1.png" alt="Screen Shot 2014-06-16 at 4.59.40 PM" width="1482" height="51" class="alignnone size-full wp-image-130 img-thumbnail img-responsive" /></a></p>
<p style="text-align: justify;">A domain-specific hostname is created, and a service location record (SRV) is created with the proper port and hostname. When a host leaves the service group, its domain-specific hostname is removed, and so is its entry in the SRV record, or the whole record if this was the last entry.</p>
<h3>Fully automated setup</h3>
<p style="text-align: justify;">UserData will be used to fully automate the process. There are other options: Puppet, Chef, Salt and Ansible could all be used, but the UserData solution has reduced complexity, no external dependencies, and can be directly consumed by other AWS services like CloudFormation, Auto Scaling Groups, etc.</p>
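<p style="text-align: justify;">One detail worth noting: the EC2 API itself carries UserData base64-encoded. The AWS CLI's <code>file://</code> form and most SDKs handle the encoding transparently, but when talking to the API directly it has to be done by hand. A sketch:</p>

```python
import base64

def encode_user_data(script_text):
    """Base64-encode a UserData script for the raw EC2 API
    (the AWS CLI and most SDKs do this for you)."""
    return base64.b64encode(script_text.encode("utf-8")).decode("ascii")

encoded = encode_user_data("#!/bin/bash -ex\n/etc/init.d/dns-sd start\n")
```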
<p>The full UserData content is as follows:</p><pre class="crayon-plain-tag">#!/bin/bash -ex

# Debian apt-get install function
apt_get_install()
{
	DEBIAN_FRONTEND=noninteractive apt-get -y \
	-o DPkg::Options::=--force-confdef \
	-o DPkg::Options::=--force-confold \
	install $@
}

# Mark execution start
echo "STARTING" > /root/user_data_run

# Some initial setup
set -e -x
export DEBIAN_FRONTEND=noninteractive
apt-get update && apt-get upgrade -y

# Install required packages
apt_get_install python-boto python-requests
apt_get_install nginx

# Create test html page
mkdir -p /var/www
cat > /var/www/index.html << "EOF"
<html>
	<head>
		<title>Demo Page</title>
	</head>

	<body>
		<center><h2>Demo Page</h2></center><br>
		<center>Status: running</center>
	</body>
</html>
EOF

# Configure NginX
cat > /etc/nginx/conf.d/demo.conf << "EOF"
# Minimal NginX VirtualHost setup
server {
	listen 8080;

	root /var/www;
	index index.html index.htm;

	location / {
		try_files $uri $uri/ =404;
	}
}
EOF

# Restart NginX with the new settings
/etc/init.d/nginx restart

# Create dns-sd.py
cat > /usr/local/sbin/dns-sd.py << "EOF"
#!/usr/bin/env python

# The following code modifies AWS Route53 entries to demonstrate usage of DNS-SD in cloud environments
#
# To JOIN Service group:
# 	dns-sd.py -z unilans.net -s _v1._theservice._tcp.unilans.net. -p 8080 join
#
# To LEAVE Service group:
#	dns-sd.py -z unilans.net -s _v1._theservice._tcp.unilans.net. -p 8080 leave
#
# NOTE: THIS IS FOR DEMONSTRATION PURPOSES ONLY! ERROR HANDLING IS ABSOLUTELY MINIMAL! THIS IS *NOT* PRODUCTION CODE!

import sys
import copy
import argparse

import requests
import boto.route53

def main():
	"""
	Main entry point
	"""

	# Parse command line arguments
	parser = argparse.ArgumentParser(description='Example code to update service records in Route53 hosted DNS zones')
	parser.add_argument('-z', '--zone', type=str, required=True, dest='zone', help='Zone Name')
	parser.add_argument('-s', '--service', type=str, required=True, dest='service', help='Service Name')
	parser.add_argument('-p', '--port', type=int, required=True, dest='port', help='Service Port')
	parser.add_argument('operation', metavar='OPERATION', type=str, help='Operation [join|leave]', choices=['join', 'leave'])

	args = parser.parse_args()
	operation = args.operation
	zone = args.zone
	service = args.service
	port = args.port

	# Establish connection to Route53 API
	conn = boto.route53.connection.Route53Connection()

	# Get zone handler
	z = conn.get_zone(zone)
	if not z:
		print "{progname}: Wrong or inaccessible zone!".format(progname=sys.argv[0])
		sys.exit(-1)

	# Get EC2 Public IP Address
	response = requests.get('http://169.254.169.254/latest/meta-data/public-ipv4')
	if response.status_code == 200:
		public_ipv4 = response.text
	else:
		print "{progname}: Unable to obtain public IP address from AWS!".format(progname=sys.argv[0])
		sys.exit(-1)

	# Generate domain-specific hostname
	fqdn_hostname = '{hostname}.{zone}'.format(hostname=public_ipv4.replace(".", "-"), zone=zone)

	# Act, based on operation request (join | leave)
	if operation.upper() == 'join'.upper():
		# Create A record
		z.add_a(fqdn_hostname, public_ipv4, ttl=60)

		# Obtain service locator records
		r = z.find_records(service, 'SRV')
		if not r:
			# Create SRV record
			srv_value = u'0 0 {port} {fqdn}'.format(port=port, fqdn=fqdn_hostname)
			z.add_record('SRV', service, srv_value, ttl=60)
		else:
			# Add to SRV record
			srv_value = u'0 0 {port} {fqdn}'.format(port=port, fqdn=fqdn_hostname)
			tmp_r = copy.deepcopy(r)
			tmp_r.resource_records.append(srv_value)
			z.update_record(r, tmp_r.resource_records)

	elif operation.upper() == 'leave'.upper():
		# Remove entry from the SRV record
		r = z.find_records(service, 'SRV')
		if r:
			tmp_r = copy.deepcopy(r)
			# Iterate over a copy so that remove() does not skip entries
			for record in list(tmp_r.resource_records):
				if fqdn_hostname in record:
					tmp_r.resource_records.remove(record)

			if len(tmp_r.resource_records) == 0:
				# Remove the SRV entry itself
				z.delete_record(r)
			else:
				# Update the SRV record
				z.update_record(r, tmp_r.resource_records)

		# Remove A record
		r = z.find_records(fqdn_hostname, 'A')
		if r:
			z.delete_record(r)

	else:
		print "{progname}: Wrong operation!".format(progname=sys.argv[0])
		sys.exit(-1)

if __name__ == '__main__':
	main()

EOF

# Make dns-sd.py executable
chmod +x /usr/local/sbin/dns-sd.py

# Create startup job
cat > /etc/init.d/dns-sd << "EOF"
#! /bin/bash
#
# Author: Ivo Vachkov (ivachkov@xi-group.com)
#
### BEGIN INIT INFO
# Provides:          dns-sd
# Required-Start:
# Should-Start:
# Required-Stop:
# Should-Stop:
# Default-Start:  2 3 4 5
# Default-Stop:   0 1 6
# Short-Description:    Start / Stop script for DNS-SD
# Description:          Use to JOIN/LEAVE DNS-SD Service Group
### END INIT INFO

set -e
umask 022

# Configuration details
DNS_SD="/usr/local/sbin/dns-sd.py"
DNS_ZONE="unilans.net"
SERVICE_NAME="_v1._theservice._tcp.unilans.net."
SERVICE_PORT="8080"

. /lib/lsb/init-functions

export PATH="${PATH:+$PATH:}/usr/sbin:/sbin:/usr/bin:/usr/local/bin:/usr/local/sbin"

# Default Start function
dns_sd_join () {
	$DNS_SD -z $DNS_ZONE -s $SERVICE_NAME -p $SERVICE_PORT join
}

# Default Stop function
dns_sd_leave () {
	$DNS_SD -z $DNS_ZONE -s $SERVICE_NAME -p $SERVICE_PORT leave
}

case "$1" in
start)
	log_daemon_msg "Joining $DNS_ZONE|$SERVICE_NAME:$SERVICE_PORT ... " || true
	dns_sd_join
	;;
stop)
	log_daemon_msg "Leaving $DNS_ZONE|$SERVICE_NAME:$SERVICE_PORT ... " || true
	dns_sd_leave
	;;
restart)
	log_daemon_msg "Restarting ... " || true
	dns_sd_leave
	dns_sd_join
	;;
*)
	log_action_msg "Usage: $0 {start|stop|restart}" || true
	exit 1
esac

exit 0
EOF

# Make /etc/init.d/dns-sd executable
chmod +x /etc/init.d/dns-sd

# Set automatic execution on start/shutdown
update-rc.d dns-sd defaults 99

# Execute initial service group JOIN
/etc/init.d/dns-sd start

# Mark execution end
echo "DONE" > /root/user_data_run</pre><p></p>
<p>Copy of the code can be downloaded from <a href="https://s3-us-west-2.amazonaws.com/blog.xi-group.com/aws-route53-iam-ec2-dns-sd/userdata.sh">https://s3-us-west-2.amazonaws.com/blog.xi-group.com/aws-route53-iam-ec2-dns-sd/userdata.sh</a></p>
<p>Starting 3 test instances to verify functionality:</p><pre class="crayon-plain-tag">aws ec2 run-instances --image-id ami-018c9568 --count 3 --instance-type t1.micro --key-name test-key --security-groups test-sg --iam-instance-profile Name=DNS-SD-Route53-EC2-Role --user-data file://userdata.sh</pre><p></p>
<p>Resulting changes to Route53:<br />
<a href="http://blog.xi-group.com/wp-content/uploads/2014/06/Screen-Shot-2014-06-16-at-7.21.05-PM.png"><img src="http://blog.xi-group.com/wp-content/uploads/2014/06/Screen-Shot-2014-06-16-at-7.21.05-PM.png" alt="Screen Shot 2014-06-16 at 7.21.05 PM" width="1482" height="190" class="alignnone size-full wp-image-132 img-thumbnail img-responsive" /></a></p>
<p>Three new boxes self-registered in the service group. Stopping one of them manually leads to de-registration:<br />
<a href="http://blog.xi-group.com/wp-content/uploads/2014/06/Screen-Shot-2014-06-16-at-7.22.19-PM.png"><img src="http://blog.xi-group.com/wp-content/uploads/2014/06/Screen-Shot-2014-06-16-at-7.22.19-PM.png" alt="Screen Shot 2014-06-16 at 7.22.19 PM" width="1482" height="146" class="alignnone size-full wp-image-133 img-thumbnail img-responsive" /></a></p>
<p><strong>Elastic systems can be implemented with DNS-SD!</strong> Note, however, that a DNS response is limited to 65,535 bytes, so the number of entries that can go into an SRV record set, although large, is limited!</p>
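<p style="text-align: justify;">A back-of-envelope estimate of that ceiling (the per-record byte cost below is an assumption; real numbers depend on name lengths and DNS name compression):</p>

```python
def rough_srv_capacity(avg_record_bytes=40, response_limit=65535, overhead=512):
    """Rough upper bound on SRV entries per DNS response over TCP.

    avg_record_bytes: assumed wire cost per SRV record (fixed
    fields plus a compressed target name) -- an estimate only.
    overhead: headroom for the header, question and other sections.
    """
    return (response_limit - overhead) // avg_record_bytes

estimate = rough_srv_capacity()
# on the order of ~1600 entries under these assumptions
```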
<h2>Client code</h2>
<p style="text-align: justify;">To demonstrate DNS-SD resolution, the following sample code was created:</p>
<p></p><pre class="crayon-plain-tag">#!/usr/bin/env python

# The following code demonstrates how to resolve DNS-SD Service Descriptions
#
# Example execution:
#	client.py -z unilans.net. -s theService -p tcp -v v1
#
# NOTE: THIS IS FOR DEMONSTRATION PURPOSES ONLY! ERROR HANDLING IS ABSOLUTELY MINIMAL! THIS IS *NOT* PRODUCTION CODE!

import sys
import random
import argparse

import requests
import dns.resolver

def main():
	"""
	Main entry point
	"""

	# Parse command line arguments
	parser = argparse.ArgumentParser(description='Example code to resolve DNS-SD service descriptions')
	parser.add_argument('-z', '--zone', type=str, required=True, dest='zone', help='Zone Name')
	parser.add_argument('-s', '--service', type=str, required=True, dest='service', help='Service Name')
	parser.add_argument('-p', '--protocol', type=str, required=True, dest='protocol', help='Service Transport Protocol [tcp|udp]', choices=['tcp', 'udp'])
	parser.add_argument('-v', '--version', type=str, required=True, dest='version', help='Service Version')

	args = parser.parse_args()
	zone = args.zone
	service = args.service
	protocol = args.protocol
	version = args.version

	# Obtain PTR Record
	service_id = '_{service}._{protocol}.{zone}'.format(service=service, protocol=protocol, zone=zone)
	answer = dns.resolver.query(service_id, 'PTR')

	# Find the service incarnation
	service_version = None
	if answer:
		for record in answer.rrset:
			r = str(record.target).split('.')
			if version in r[0]:
				service_version = str(record.target)

	if not service_version:
		print "{progname}: Requested service version not found!".format(progname=sys.argv[0])
		sys.exit(-1)

	# Discover and consume the actual service
	service_addr = ''
	service_port = 0

	# Get SRV and TXT
	answer_srv = dns.resolver.query(service_version, 'SRV')
	answer_txt = dns.resolver.query(service_version, 'TXT')

	# If those are valid get random service location entry
	if answer_srv and answer_txt:
		srv_entry = random.choice(answer_srv.rrset.items)
		if srv_entry:
			service_addr = srv_entry.target
			service_port = srv_entry.port

	if not service_addr:
		print "{progname}: No active service providers found!".format(progname=sys.argv[0])
		sys.exit(-1)

	service_uri = 'http://{host}:{port}/'.format(host=service_addr, port=service_port)
	r = requests.get(service_uri)
	if r.status_code == 200:
		print r.text

if __name__ == '__main__':
	main()</pre><p></p>
<p>Copy of the code can be downloaded from <a href="https://s3-us-west-2.amazonaws.com/blog.xi-group.com/aws-route53-iam-ec2-dns-sd/client.py">https://s3-us-west-2.amazonaws.com/blog.xi-group.com/aws-route53-iam-ec2-dns-sd/client.py</a></p>
<p style="text-align: justify;">Why would that be better? Yes, there is added complexity in the name resolution process. But, more importantly, the details needed to find the service are no longer tied to its location or hard-coded in the client. The service-specific infrastructure can change, but the client will not be affected, as long as the discovery process is performed.</p>
<p>Sample run:</p><pre class="crayon-plain-tag">:~> client.py -z unilans.net. -s theService -p tcp -v v1
<html>
        <head>
                <title>Demo Page</title>
        </head>

        <body>
                <center><h2>Demo Page</h2></center><br>
                <center>Status: running</center>
        </body>
</html>
:~></pre><p></p>
<p><strong>Voilà! Reliable Service Discovery in elastic systems!</strong></p>
<h2>Additional Notes</h2>
<p>Some additional notes and well-knowns:</p>
<ul>
<li>
<p style="text-align: justify;">The examples in this article could be extended to support fail-over or more sophisticated forms of load-balancing. The current random.choice() solution should be good enough for the generic case;</p>
</li>
<li>
<p style="text-align: justify;">More complex setup with different priorities and weights can be demonstrated too;</p>
</li>
<li>
<p style="text-align: justify;">Service health-check before DNS-SD registration can be demonstrated too;</p>
</li>
<li>
<p style="text-align: justify;">Non-HTTP services can use DNS-SD as well; the technology is application-agnostic.</p>
</li>
<li>
<p style="text-align: justify;">TXT record contents are not used throughout this article. They can be used to carry additional metadata (NOTE: this is public! Anyone can query your DNS TXT records with this setup!).</p>
</li>
</ul>
<h2>Conclusion</h2>
<p style="text-align: justify;">A quick implementation of DNS-SD with AWS Route53, IAM and EC2 was presented in this article. It can be used as a bare-bones setup to be further extended and productized. It solves a common problem in elastic systems: Service Discovery! All key components are implemented in either Python or shell script with minimal dependencies (sudo aptitude install awscli python-boto python-requests python-dnspython), although the implementation is not dependent on a particular programming language.</p>
<p>References</p>
<ul>
<li><a href="http://tools.ietf.org/html/rfc6763">http://tools.ietf.org/html/rfc6763</a></li>
<li><a href="http://www.infoq.com/articles/rest-discovery-dns">http://www.infoq.com/articles/rest-discovery-dns</a></li>
<li><a href="http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/UsingWithIAM.html">http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/UsingWithIAM.html</a></li>
</ul>
<div class="rpbt_shortcode">
<h3>Related Posts</h3>
<ul>
					
			<li><a href="http://blog.xi-group.com/2014/07/how-to-implement-multi-cloud-deployment-for-scalability-and-reliability/">How to implement multi-cloud deployment for scalability and reliability</a></li>
					
			<li><a href="http://blog.xi-group.com/2015/02/how-to-deploy-single-node-hadoop-setup-in-aws/">How to deploy single-node Hadoop setup in AWS</a></li>
					
			<li><a href="http://blog.xi-group.com/2015/01/userdata-teplate-for-ubuntu-14-04-ec2-instances-in-aws/">UserData Template for Ubuntu 14.04 EC2 Instances in AWS</a></li>
					
			<li><a href="http://blog.xi-group.com/2014/11/small-tip-how-to-use-block-device-mappings-to-manage-instance-volumes-with-aws-cli/">Small Tip: How to use &#8211;block-device-mappings to manage instance volumes with AWS CLI</a></li>
					
			<li><a href="http://blog.xi-group.com/2015/01/small-tip-how-to-use-aws-cli-filter-parameter/">Small Tip: How to use AWS CLI &#8216;&#8211;filter&#8217; parameter</a></li>
			</ul>
</div>
]]></content:encoded>
			<wfw:commentRss>http://blog.xi-group.com/2014/06/how-to-implement-service-discovery-in-the-cloud/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Small Tip: Use AWS CLI to create instances with bigger root partitions</title>
		<link>http://blog.xi-group.com/2014/06/small-tip-use-aws-cli-to-create-instances-with-bigger-root-partitions/</link>
		<comments>http://blog.xi-group.com/2014/06/small-tip-use-aws-cli-to-create-instances-with-bigger-root-partitions/#comments</comments>
		<pubDate>Thu, 05 Jun 2014 13:43:44 +0000</pubDate>
		<dc:creator><![CDATA[Ivo Vachkov]]></dc:creator>
				<category><![CDATA[AWS]]></category>
		<category><![CDATA[DevOps]]></category>
		<category><![CDATA[Small Tip]]></category>
		<category><![CDATA[AWS CLI]]></category>
		<category><![CDATA[bigger]]></category>
		<category><![CDATA[linux]]></category>
		<category><![CDATA[root partition]]></category>

		<guid isPermaLink="false">http://blog.xi-group.com/?p=42</guid>
		<description><![CDATA[On multiple occasions we had to deal with instances running out of disk space for the root file system. AWS provides you reasonable amount of storage, but most operating systems without additional settings will just use the root partition for everything. Which is usually sub-optimal, since default root partition is 8GB and you may have [&#8230;]]]></description>
				<content:encoded><![CDATA[<p style="text-align: justify;">On multiple occasions we had to deal with instances running out of disk space on the root file system. AWS provides you with a reasonable amount of storage, but most operating systems, without additional settings, will just use the root partition for everything. This is usually sub-optimal, since the default root partition is 8GB and you may have a 160GB SSD just mounted on /mnt and never used. With the AWS Web interface it is easy to create bigger root partitions for instances. However, the AWS CLI solution is not obvious and somewhat hard to find, and if you need to regularly start instances with non-standard root partitions, the manual approach is not maintainable.</p>
<p style="text-align: justify;">There is a solution. It lies in the <strong>&#8211;block-device-mappings</strong> parameter that can be passed to <strong>aws ec2 run-instances</strong> command.</p>
<p style="text-align: justify;">According to the documentation, this parameter uses a JSON-encoded block device mapping to adjust different parameters of the instances being started. There is a simple example that shows how to attach an additional volume:</p>
<p></p><pre class="crayon-plain-tag">--block-device-mappings "[{\"DeviceName\": \"/dev/sdh\",\"Ebs\":{\"VolumeSize\":100}}]"</pre><p></p>
<p style="text-align: justify;">This will attach an additional 100GB EBS volume as /dev/sdh. The key part: <strong>&#8220;Ebs&#8221;: {&#8220;VolumeSize&#8221;: 100}</strong></p>
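<p style="text-align: justify;">Hand-escaping that JSON on the command line is error-prone; it can just as well be generated (a sketch using the standard library; the helper name is ours):</p>

```python
import json

def block_device_mapping(device, size_gb):
    """Build the JSON string for the --block-device-mappings
    parameter of `aws ec2 run-instances`."""
    return json.dumps([{"DeviceName": device, "Ebs": {"VolumeSize": size_gb}}])

mapping = block_device_mapping("/dev/sda1", 32)
# mapping == '[{"DeviceName": "/dev/sda1", "Ebs": {"VolumeSize": 32}}]'
```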
<p style="text-align: justify;">By specifying your instance&#8217;s root device you can adjust the root partition size. Following is an example of how to create an Amazon Linux instance running on t1.micro with a 32GB root partition:</p>
<p></p><pre class="crayon-plain-tag">aws ec2 run-instances --image-id ami-fb8e9292 --count 1 --instance-type t1.micro --key-name test-key --security-groups test-sg --block-device-mapping "[ { \"DeviceName\": \"/dev/sda1\", \"Ebs\": { \"VolumeSize\": 32 } } ]"</pre><p>The resulting volume details show the requested size and the fact that this is indeed root partition:<br />
<a href="http://blog.xi-group.com/wp-content/uploads/2014/06/Screen-Shot-2014-06-05-at-4.30.31-PM.png"><img class="alignnone size-full wp-image-53 img-thumbnail img-responsive" src="http://blog.xi-group.com/wp-content/uploads/2014/06/Screen-Shot-2014-06-05-at-4.30.31-PM.png" alt="Screen Shot 2014-06-05 at 4.30.31 PM" width="1474" height="117" /></a></p>
<p>Confirming, that the instance is operating on the proper volume:</p><pre class="crayon-plain-tag">:~&gt; ssh ec2-user@ec2-50-16-57-145.compute-1.amazonaws.com "df -h"
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1       32G  1.1G   31G   4% /
devtmpfs        282M   12K  282M   1% /dev
tmpfs           297M     0  297M   0% /dev/shm
:~&gt;</pre><p></p>
<p style="text-align: justify;">There is enough space in the root partition now. Note: this is an EBS volume, so additional charges will apply!</p>
<p style="text-align: justify;">References</p>
<ul>
<li>aws ec2 run-instances help</li>
</ul>
<div class="rpbt_shortcode">
<h3>Related Posts</h3>
<ul>
					
			<li><a href="http://blog.xi-group.com/2015/01/small-tip-how-to-use-aws-cli-filter-parameter/">Small Tip: How to use AWS CLI &#8216;&#8211;filter&#8217; parameter</a></li>
					
			<li><a href="http://blog.xi-group.com/2014/11/small-tip-how-to-use-block-device-mappings-to-manage-instance-volumes-with-aws-cli/">Small Tip: How to use &#8211;block-device-mappings to manage instance volumes with AWS CLI</a></li>
					
			<li><a href="http://blog.xi-group.com/2014/07/small-tip-how-to-use-aws-cli-to-start-spot-instances-with-userdata/">Small Tip: How to use AWS CLI to start Spot instances with UserData</a></li>
					
			<li><a href="http://blog.xi-group.com/2014/06/small-tip-ebs-volume-allocation-time-is-linear-to-the-size-and-unrelated-to-the-instance-type/">Small Tip: EBS volume allocation time is linear to the size and unrelated to the instance type</a></li>
					
			<li><a href="http://blog.xi-group.com/2014/06/small-tip-partitioning-disk-drives-from-within-userdata-script/">Small Tip: Partitioning disk drives from within UserData script</a></li>
			</ul>
</div>
]]></content:encoded>
			<wfw:commentRss>http://blog.xi-group.com/2014/06/small-tip-use-aws-cli-to-create-instances-with-bigger-root-partitions/feed/</wfw:commentRss>
		<slash:comments>2</slash:comments>
		</item>
		<item>
		<title>Small Tip: AWS tools are case sensitive, AWS Web Interface is not</title>
		<link>http://blog.xi-group.com/2014/06/small-tip-aws-tools-are-case-sensitive-aws-web-interface-is-not/</link>
		<comments>http://blog.xi-group.com/2014/06/small-tip-aws-tools-are-case-sensitive-aws-web-interface-is-not/#comments</comments>
		<pubDate>Mon, 02 Jun 2014 07:42:26 +0000</pubDate>
		<dc:creator><![CDATA[Ivo Vachkov]]></dc:creator>
				<category><![CDATA[AWS]]></category>
		<category><![CDATA[DevOps]]></category>
		<category><![CDATA[Small Tip]]></category>
		<category><![CDATA[AWS CLI]]></category>
		<category><![CDATA[case sensitive]]></category>
		<category><![CDATA[filters]]></category>
		<category><![CDATA[instances]]></category>
		<category><![CDATA[names]]></category>
		<category><![CDATA[tags]]></category>

		<guid isPermaLink="false">http://blog.xi-group.com/?p=27</guid>
		<description><![CDATA[In a recent investigation, we found an interesting difference between AWS command line tools (based on Boto library) and AWS Web interface. Apparently, command line tools are case sensitive while AWS Web interface is not. This can potentially lead to automated scaling issues. Tooling may not get &#8216;the full picture&#8217; if tags are mixed-case and [&#8230;]]]></description>
				<content:encoded><![CDATA[<p style="text-align: justify;">In a recent investigation, we found an interesting difference between the AWS command line tools (based on the Boto library) and the AWS Web interface. Apparently, the command line tools are case sensitive while the AWS Web interface is not. This can potentially lead to automated scaling issues: tooling may not get &#8216;the full picture&#8217; if tags are mixed-case and the software does not account for that.</p>
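<p style="text-align: justify;">One defensive option in tooling (a sketch; this helper is not part of Boto or the AWS CLI) is to skip the server-side tag filter and match Name tags case-insensitively on the client, mirroring the web console's behavior:</p>

```python
def match_name_tag(tags, pattern):
    """Case-insensitive substring match against an instance's
    Name tag. tags is a dict such as Boto exposes per instance."""
    return pattern.lower() in tags.get("Name", "").lower()

match_name_tag({"Name": "TEST-NODE-1"}, "test-node")   # True
match_name_tag({"Name": "test-node-5"}, "TEST-NODE")   # True
```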
<p style="text-align: justify;">Let&#8217;s start with a simple example &#8230;</p>
<p style="text-align: justify;">We have the following EC2 instances in AWS Account:<br />
<a href="http://blog.xi-group.com/wp-content/uploads/2014/06/Screen-Shot-2014-06-02-at-10.03.04-AM.png"><img class="alignnone size-full wp-image-28 img-thumbnail img-responsive" src="http://blog.xi-group.com/wp-content/uploads/2014/06/Screen-Shot-2014-06-02-at-10.03.04-AM.png" alt="Screen Shot 2014-06-02 at 10.03.04 AM" width="1385" height="300" /></a></p>
<p style="text-align: justify;">Search for the term &#8216;TEST-NODE&#8217; yields the same results as searching for &#8216;test-node&#8217; in the AWS Web interface.</p>
<p style="text-align: justify;">Searching for &#8216;TEST-NODE':<br />
<a href="http://blog.xi-group.com/wp-content/uploads/2014/06/Screen-Shot-2014-06-02-at-10.03.27-AM.png"><img class="alignnone size-full wp-image-29 img-thumbnail img-responsive" src="http://blog.xi-group.com/wp-content/uploads/2014/06/Screen-Shot-2014-06-02-at-10.03.27-AM.png" alt="Screen Shot 2014-06-02 at 10.03.27 AM" width="1385" height="270" /></a></p>
<p>Searching for &#8216;test-node&#8217;:<br />
<a href="http://blog.xi-group.com/wp-content/uploads/2014/06/Screen-Shot-2014-06-02-at-10.03.49-AM.png"><img class="alignnone size-full wp-image-30 img-thumbnail img-responsive" src="http://blog.xi-group.com/wp-content/uploads/2014/06/Screen-Shot-2014-06-02-at-10.03.49-AM.png" alt="Screen Shot 2014-06-02 at 10.03.49 AM" width="1385" height="270" /></a></p>
<p style="text-align: justify;">&#8230; it behaves the same way. It is case-insensitive.</p>
<p style="text-align: justify;">However, commend line tools will produce totally different output.</p>
<p style="text-align: justify;">Searching for &#8216;TEST-NODE':</p>
<p></p><pre class="crayon-plain-tag">:~&gt; aws ec2 describe-instances --filters "Name=tag:Name,Values=*TEST-NODE*" --query 'Reservations[*].Instances[*].Tags[?Key==`Name`].Value[]' --output text
TEST-NODE-1
:~&gt;</pre><p>Searching for &#8216;test-node':</p><pre class="crayon-plain-tag">:~&gt; aws ec2 describe-instances --filters "Name=tag:Name,Values=*test-node*" --query 'Reservations[*].Instances[*].Tags[?Key==`Name`].Value[]' --output text
test-node-5
:~&gt;</pre><p></p>
<p style="text-align: justify;">Python + Boto shows the same behavior (not surprisingly, since the AWS CLI is built on botocore, a close relative of the Boto library):</p>
<p style="text-align: justify;">Searching for &#8216;TEST-NODE':</p>
<p></p><pre class="crayon-plain-tag">:~&gt; python
Python 2.7.5 (default, Mar  9 2014, 22:15:05)
[GCC 4.2.1 Compatible Apple LLVM 5.0 (clang-500.0.68)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
&gt;&gt;&gt; import boto
&gt;&gt;&gt; import boto.ec2
&gt;&gt;&gt; conn = boto.ec2.connect_to_region('us-east-1', aws_access_key_id='', aws_secret_access_key='')
&gt;&gt;&gt; reservations = conn.get_all_instances(filters = {'instance-state-name' : 'running', "tag:Name": "*" + 'TEST-NODE' + "*"})
&gt;&gt;&gt; for r in reservations:
...     for i in r.instances:
...             print i.tags['Name']
...
TEST-NODE-1
&gt;&gt;&gt; ^D
:~&gt;</pre><p>Searching for &#8216;test-node':</p><pre class="crayon-plain-tag">:~&gt; python
Python 2.7.5 (default, Mar  9 2014, 22:15:05)
[GCC 4.2.1 Compatible Apple LLVM 5.0 (clang-500.0.68)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
&gt;&gt;&gt; import boto
&gt;&gt;&gt; import boto.ec2
&gt;&gt;&gt; conn = boto.ec2.connect_to_region('us-east-1', aws_access_key_id='', aws_secret_access_key='')
&gt;&gt;&gt; reservations = conn.get_all_instances(filters = {'instance-state-name' : 'running', "tag:Name": "*" + 'test-node' + "*"})
&gt;&gt;&gt; for r in reservations:
...     for i in r.instances:
...             print i.tags['Name']
...
test-node-5
&gt;&gt;&gt; ^D
:~&gt;</pre><p></p>
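<p style="text-align: justify;">The difference comes down to matching semantics: the Web console applies a case-insensitive substring match, while the EC2 API filter performs a case-sensitive wildcard match. Both behaviors can be reproduced locally with a minimal sketch (instance names taken from the screenshots above; this is an illustration, not an AWS SDK call):</p>

```python
import fnmatch

# Instance 'Name' tags from the account shown above
names = ['TEST-NODE-1', 'TEST-Node-2', 'TEST-node-3', 'test-node-5']

# Web-console-style search: case-insensitive substring match
console_hits = [n for n in names if 'test-node' in n.lower()]

# EC2-API-style filter: case-sensitive shell-style wildcard match
api_hits = [n for n in names if fnmatch.fnmatchcase(n, '*test-node*')]

print(console_hits)  # all four names
print(api_hits)      # ['test-node-5'] only
```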
<p style="text-align: justify;">Moral of the story: <strong>ALWAYS VERIFY/ENFORCE THAT DATA IS PROPERLY FORMATTED!</strong></p>
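<p style="text-align: justify;">One way to enforce that is at write time: normalize tag values before they ever reach AWS, so that later case-sensitive filters see consistent data. A hedged sketch (<code>normalize_tags</code> is a hypothetical helper, not part of any Boto API):</p>

```python
def normalize_tags(tags):
    """Lower-case all tag values so case-sensitive filters behave predictably."""
    return {key: value.lower() for key, value in tags.items()}

# With Boto, tagging would then look something like (not executed here):
#   conn.create_tags([instance.id], normalize_tags({'Name': 'TEST-Node-2'}))
print(normalize_tags({'Name': 'TEST-Node-2'}))  # {'Name': 'test-node-2'}
```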
<p style="text-align: justify;">There are multiple possible solutions to this issue. At the cost of a few extra cycles, one can fetch all running instances and perform the case-insensitive comparison on the client side:</p>
<p></p><pre class="crayon-plain-tag">:~&gt; python
Python 2.7.5 (default, Mar  9 2014, 22:15:05)
[GCC 4.2.1 Compatible Apple LLVM 5.0 (clang-500.0.68)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
&gt;&gt;&gt; import boto
&gt;&gt;&gt; import boto.ec2
&gt;&gt;&gt; conn = boto.ec2.connect_to_region('us-east-1', aws_access_key_id='', aws_secret_access_key='')
&gt;&gt;&gt; reservations = conn.get_all_instances(filters = {'instance-state-name' : 'running'})
&gt;&gt;&gt; for r in reservations:
...     for i in r.instances:
...             if 'test-node'.upper() in i.tags['Name'].upper():
...                     print i.tags['Name']
...
TEST-NODE-1
TEST-Node-2
TEST-node-3
test-node-5
&gt;&gt;&gt; ^D
:~&gt;</pre><h3>References</h3>
<ul>
<li><a href="http://docs.aws.amazon.com/cli/latest/reference/ec2/describe-instances.html" target="_blank">http://docs.aws.amazon.com/cli/latest/reference/ec2/describe-instances.html</a></li>
</ul>
<div class="rpbt_shortcode">
<h3>Related Posts</h3>
<ul>
					
			<li><a href="http://blog.xi-group.com/2015/01/small-tip-how-to-use-aws-cli-filter-parameter/">Small Tip: How to use AWS CLI &#8216;&#8211;filter&#8217; parameter</a></li>
					
			<li><a href="http://blog.xi-group.com/2014/11/small-tip-how-to-use-block-device-mappings-to-manage-instance-volumes-with-aws-cli/">Small Tip: How to use &#8211;block-device-mappings to manage instance volumes with AWS CLI</a></li>
					
			<li><a href="http://blog.xi-group.com/2014/07/small-tip-how-to-use-aws-cli-to-start-spot-instances-with-userdata/">Small Tip: How to use AWS CLI to start Spot instances with UserData</a></li>
					
			<li><a href="http://blog.xi-group.com/2014/07/small-tip-aws-announces-t2-instance-types/">Small Tip: AWS announces T2 instance types</a></li>
					
			<li><a href="http://blog.xi-group.com/2014/06/small-tip-ebs-volume-allocation-time-is-linear-to-the-size-and-unrelated-to-the-instance-type/">Small Tip: EBS volume allocation time is linear to the size and unrelated to the instance type</a></li>
			</ul>
</div>
]]></content:encoded>
			<wfw:commentRss>http://blog.xi-group.com/2014/06/small-tip-aws-tools-are-case-sensitive-aws-web-interface-is-not/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>
