
DevOps Shell Script Template

2014/07/03 Development, DevOps, Operations

In the everyday life of a DevOps engineer you will have to create multiple pieces of code. Some of those will be run once, others … well, others will live forever. Although it may be compelling to just put all the commands in a text editor, save the result and execute it, one should always consider the “bigger picture”. What will happen if your script is run on another OS, on another Linux distribution, or even on a different version of the same Linux distribution?! Another angle is to consider what will happen if somehow your neat 10-line script has to be executed on, say, 500 servers?! Can you be sure that all the commands will run successfully there? Can you be sure that all the commands will even be present? Usually … No!

Faced with similar problems on a daily basis, we started devising simple solutions and practices to address them. One of those is the process of standardizing the way different utilities behave, the way they take arguments and the way they report errors. Upon further investigation it became clear that a pattern could be extracted and synthesized into a series of templates one can use in daily work to keep behavior consistent between different utilities and components.

Here is the basic template used in shell scripts:
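(The original listing did not survive in this copy; below is a minimal sketch of the pattern, where all names, such as example-util.sh, the exit code values and the --name argument, are illustrative assumptions. The line numbers in the breakdown that follows refer to the original, longer listing.)

```shell
#!/usr/bin/env bash
#
# NAME:         example-util.sh - illustrative utility built on the template
# DESCRIPTION:  Prints a greeting; stands in for the real programming logic.
# DEPENDENCIES: grep, awk
# EXAMPLE:      ./example-util.sh --name "DevOps"

# Meaningful return codes
EXIT_OK=0
EXIT_ERR_DEPS=66
EXIT_ERR_ARGS=67

# Short guidance on how to use the script
usage() {
    echo "Usage: $0 [-n|--name NAME] [-h|--help]"
}

# Dependency checks: every required utility must be present and executable
for DEP in grep awk; do
    if ! command -v "$DEP" >/dev/null 2>&1; then
        echo "ERROR: missing dependency: $DEP" >&2
        exit "$EXIT_ERR_DEPS"
    fi
done

# Argument parsing: both short and long argument names
NAME="world"
while [ "$#" -gt 0 ]; do
    case "$1" in
        -n|--name) NAME="${2:?missing value for --name}"; shift 2 ;;
        -h|--help) usage; exit "$EXIT_OK" ;;
        *)         usage; exit "$EXIT_ERR_ARGS" ;;
    esac
done

# Validity checks: argument values must make sense in context
if [ -z "$NAME" ]; then
    echo "ERROR: --name may not be empty" >&2
    exit "$EXIT_ERR_ARGS"
fi

# Actual programming logic
echo "Hello, $NAME"
```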

Nothing fancy. A basic framework that does the following:

  1. Lines 3 – 13: Make sure basic documentation, dependency list and example usage patterns are provided with the script itself;
  2. Lines 15 – 16: Define meaningful return codes to allow other utils to identify possible execution problems and react accordingly;
  3. Lines 18 – 27: Basic help/usage() function to provide the user with short guidance on how to use the script;
  4. Lines 29 – 52: Dependency checks to make sure all utilities the script needs are available and executable in the system;
  5. Lines 54 – 77: Argument parsing for everything passed on the command line, supporting both short and long argument names;
  6. Lines 79 – 91: Validity checks that make sure the argument values are contextually correct;
  7. Lines 95 – N: Actual programming logic to be implemented …

This template is successfully used in various scenarios: command line utilities, Nagios plugins, startup/shutdown scripts, UserData scripts, daemons implemented in shell script with the help of start-stop-daemon, etc. It also allows deployment on multiple operating systems and distribution versions. The resulting utilities and system components are more resilient, include better documentation and dependency sections, and provide the user with a similar and intuitive way to get help or pass arguments. Error handling is functional enough to go beyond a simple OK / ERROR state. And all of those are important features when components must run in highly heterogeneous environments such as most cloud deployments!

Small Tip: How to run non-daemon()-ized processes in the background with SupervisorD

2014/06/26 Development, DevOps, Operations, Small Tip

The following article will demonstrate how to use Ubuntu 14.04 LTS and SupervisorD to manage the not-so-uncommon case of long-running services that expect to be running in an active console / terminal. Those are usually quickly / badly written pieces of code that do not use daemon(), or an equivalent function, to properly go into the background but instead run forever in the foreground. Over the years multiple solutions emerged, including quite ugly ones (nohup … > logfile 2>&1 &). Luckily, there is a better one, and it’s called SupervisorD. With Ubuntu 14.04 LTS it even comes as a package, and it should be part of your DevOps arsenal of tools!

In a typical Python / Web-scale environment multiple components will be implemented in a de-coupled, micro-services, REST-based architecture. One of the popular frameworks for REST is Bottle. There are multiple approaches to building services with Bottle when a full-blown HTTP server is available (Apache, NginX, etc.) or when performance matters. All of those are valid and somewhat documented. Still, there is the case (and it is more common than one would think) when a developer creates a Bottle server to handle a simple task and it propagates into production, held together by an ugly solution like Screen/TMUX or even nohup. Here is a way to put this under proper control.

Test Server code: test-server.py

Test server configuration file: test-server.conf

Manual execution of the server code will look like this:
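Assuming Bottle's built-in development server, the session will look roughly like:

```
$ python test-server.py
Bottle v0.12 server starting up (using WSGIRefServer())...
Listening on http://0.0.0.0:8080/
Hit Ctrl-C to quit.
```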

When the controlling terminal is lost the server will be terminated. Obviously, this is neither acceptable, nor desirable behavior.

With SupervisorD (sudo aptitude install supervisor) the service can be properly managed using a simple configuration file.

Example SupervisorD configuration file: /etc/supervisor/conf.d/test-server.conf
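The contents would look something like the following sketch, where the install path, user and log file locations are assumptions to be adapted:

```ini
[program:test-server]
command=/usr/bin/python /opt/test-server/test-server.py
directory=/opt/test-server
user=www-data
autostart=true
autorestart=true
stdout_logfile=/var/log/supervisor/test-server-stdout.log
stderr_logfile=/var/log/supervisor/test-server-stderr.log
```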

To start the service, execute:
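Once the configuration file is in /etc/supervisor/conf.d/, a typical way to activate it is (output will be along these lines):

```
$ sudo supervisorctl reread
test-server: available
$ sudo supervisorctl update
test-server: added process group
```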

To verify successful service start:
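For example (the pid and uptime values are placeholders):

```
$ sudo supervisorctl status test-server
test-server                      RUNNING    pid 12345, uptime 0:00:42
```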

SupervisorD will redirect stdout and stderr to properly named log files:
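Assuming stdout_logfile and stderr_logfile point at files under /var/log/supervisor/ (otherwise supervisord auto-generates the names), the listing would look like:

```
$ ls /var/log/supervisor/
supervisord.log  test-server-stderr.log  test-server-stdout.log
```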

Those log files can be integrated with a centralized logging architecture or processed for error / anomaly detection separately.

SupervisorD also comes with a handy command-line control utility, supervisorctl:
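An interactive session (pid and uptime again being placeholders) looks like:

```
$ sudo supervisorctl
test-server                      RUNNING    pid 12345, uptime 0:10:17
supervisor> stop test-server
test-server: stopped
supervisor> start test-server
test-server: started
supervisor> quit
```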

With some additional effort SupervisorD can react to various types of events (http://supervisord.org/events.html), which brings it one step closer to a full process monitoring & notification solution!

Small Tip: Partitioning disk drives from within UserData script

2014/06/11 AWS, DevOps, Small Tip

In a recent upgrade to the new generation of instances we faced an interesting conundrum. Previous generations came with quite the amount of disk space. Usually instance stores are mounted on /mnt, and it is all good and working. The best part: one can leave the default settings for the first instance store and do anything with the second. And “anything” translated to enabling swap on the second instance store. With the new instance types, however, the number (and the size) of the instance stores is reduced. It is SSD, but m2.4xlarge comes with 2 x 840 GB, while its equivalent in the new generation, r3.2xlarge, comes with only one 160 GB instance store partition.

Not a problem, just a challenge!

We prefer to use UserData for automatic server setup. After some attempts it became clear that partitioning disks from a shell script is not exactly a trivial task under Linux in AWS. BSD-based operating systems come with disklabel and fdisk, and those will do the job. Linux comes with fdisk by default, and that tool is somewhat limited …

Luckily, fdisk reads data from stdin, so a quick-and-dirty solution quickly emerged!

The following UserData script is used to modify the instance store of an m3.large instance, creating an 8GB swap partition and re-mounting the rest as /mnt:
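A sketch of that approach follows; the device name, sizes, fstab line and the exact fdisk answer sequence are assumptions to verify against your fdisk version, and the guard at the bottom keeps the script from touching anything unless the device actually exists:

```shell
#!/bin/bash
# Sketch of the UserData approach; adjust before real use.
DEVICE=/dev/xvdb

partition_instance_store() {
    umount /mnt 2>/dev/null || true

    # fdisk reads its "answers" from stdin: new partition table,
    # 8GB primary partition of type 82 (swap), the rest as partition 2
    fdisk "$DEVICE" <<EOF
o
n
p
1

+8G
t
82
n
p
2


w
EOF

    mkswap "${DEVICE}1" && swapon "${DEVICE}1"
    mkfs.ext4 -q "${DEVICE}2"

    # Account for the device name change in /etc/fstab, then re-mount
    sed -i "s|^${DEVICE}.*|${DEVICE}2 /mnt auto defaults,nobootwait 0 2|" /etc/fstab
    mount /mnt
}

# Only act when the instance store device is really there
if [ -b "$DEVICE" ]; then
    partition_instance_store
else
    echo "Device $DEVICE not present, nothing to do"
fi
```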

Execute it with the AWS CLI (using a stock Ubuntu 14.04 HVM AMI):
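Something along these lines, with ami-XXXXXXXX and my-key as placeholders for a real Ubuntu 14.04 HVM AMI ID and key pair name:

```
$ aws ec2 run-instances \
    --image-id ami-XXXXXXXX \
    --instance-type m3.large \
    --key-name my-key \
    --user-data file://userdata.sh
```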

The result:
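Once the instance is up, the outcome can be checked with the following commands; /dev/xvdb1 should be listed as swap and /dev/xvdb2 should be mounted on /mnt:

```
$ swapon -s
$ df -h /mnt
```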

There it is: an 8GB swap partition (/dev/xvdb1) and the rest (/dev/xvdb2) mounted as /mnt. Note that /etc/fstab is also updated to account for the device name change!

Small Tip: Use AWS CLI to create instances with bigger root partitions

2014/06/05 AWS, DevOps, Small Tip

On multiple occasions we had to deal with instances running out of disk space for the root file system. AWS provides you a reasonable amount of storage, but most operating systems, without additional settings, will just use the root partition for everything. That is usually sub-optimal, since the default root partition is 8GB while you may have a 160GB SSD just mounted on /mnt and never used. With the AWS Web interface it is easy: just create bigger root partitions for the instances. However, the AWS CLI solution is not obvious and somewhat hard to find. And if you need to regularly start instances with non-standard root partitions, the manual approach is not maintainable.

There is a solution. It lies in the --block-device-mappings parameter that can be passed to the aws ec2 run-instances command.

According to the documentation this parameter uses a JSON-encoded block device mapping to adjust different parameters of the instances that are being started. Here is a simple example that shows how to attach an additional volume:
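A sketch of such an invocation, with the AMI ID and instance type as placeholders:

```
$ aws ec2 run-instances \
    --image-id ami-XXXXXXXX \
    --instance-type m1.small \
    --block-device-mappings '[{"DeviceName":"/dev/sdb","Ebs":{"VolumeSize":100}}]'
```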

This will attach an additional 100GB EBS volume as /dev/sdb. The key part: “Ebs”: {“VolumeSize”: 100}

By specifying your instance’s root partition device you can adjust the root partition size. Following is an example of how to create an Amazon Linux instance running on t1.micro with a 32GB root partition:
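For example, where ami-XXXXXXXX and my-key are placeholders and /dev/sda1 assumes that is the AMI's root device name (check the AMI details to be sure):

```
$ aws ec2 run-instances \
    --image-id ami-XXXXXXXX \
    --instance-type t1.micro \
    --key-name my-key \
    --block-device-mappings '[{"DeviceName":"/dev/sda1","Ebs":{"VolumeSize":32}}]'
```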

The resulting volume details show the requested size and the fact that this is indeed the root partition.

Confirming that the instance is operating on the proper volume:
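From within the instance:

```
$ df -h /
```

The root filesystem should now report roughly the requested 32GB.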

There is enough space in the root partition now. Note: this is an EBS volume, so additional charges will apply!

References

  • aws ec2 run-instances help