How To and Tutorials

Installing Redis and setting up Master-Slave Replication

Over the past few years the usage of NoSQL databases has grown quite a bit. Part of this popularity is due to the scalability and performance of NoSQL solutions, and one of those highly performant databases is Redis. Redis is a highly popular open source in-memory key-value data store that is currently in use at, and highly praised by, tech companies such as Twitter, Stack Exchange and GitHub.
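For a flavor of what the replication setup in the full article involves (a minimal sketch; the master address and port below are placeholders, not values from the post), pointing a slave at a master is essentially one directive in the slave's redis.conf:

    # /etc/redis/redis.conf on the slave instance
    # replicate from the master at 192.168.0.10 (placeholder address), port 6379
    slaveof 192.168.0.10 6379

The same thing can be done on a running instance from redis-cli with the SLAVEOF command, which is handy for testing before making the change permanent.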

Managing DNS locally with /etc/hosts

Before the advent of a distributed domain name system, networked computers used local files to map hostnames to IP addresses. On Unix systems this file was named /etc/hosts, or “the hosts file”. In those days networks were small, and managing a file with a handful of hosts was easy. However, as networks grew, so did the methods of mapping hostnames to IP addresses. Today, with the internet totaling somewhere around 246 million domain names (as of 2012), the hosts file has been replaced by a more scalable, distributed DNS service.
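The format of the hosts file itself is about as simple as it gets: an IP address followed by one or more hostnames per line. A minimal example (the address and names are made up for illustration):

    # /etc/hosts
    127.0.0.1      localhost
    192.168.0.25   webserver01.example.com   webserver01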

Install RackTables and start tracking your IP address inventory

Recently, while I was scouring through a spreadsheet of “unallocated” IP addresses, I thought to myself: there has to be a better way to manage an inventory of IP addresses, and while I’m at it, maybe even provide a list of provisioned servers. After some googling I found myself on RackTables.org, the home of an open source project that aims to provide just what I was looking for: an inventory system that tracks and manages servers, racks, IP addresses and just about everything else in a data center.

Symlinks vs Hardlinks and how to create them

In a previous article I covered a little bit about Symlinks and Hardlinks, but I never really explained what they are or how to create them. Today I am going to cover how to create both Symlinks and Hardlinks, and what the difference is between the two. What are Symlinks and Hardlinks? Hard Links: in Linux, when you perform a listing of a directory, the listing is actually a list of references that map to an inode.
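As a quick preview of the commands involved (the file paths here are arbitrary examples), both link types are created with ln; the -s flag makes a symlink, while the default is a hard link:

    # hard link: another directory entry pointing at the same inode
    ln /data/original.txt /data/hardlink.txt

    # symbolic link: a separate inode that simply stores the path to its target
    ln -s /data/original.txt /data/symlink.txt

    # compare inode numbers to see the difference
    ls -li /data/original.txt /data/hardlink.txt /data/symlink.txt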

Changing the default nice value for a user or group

Recently I covered how to increase and decrease the CPU priority of processes using nice and renice. Today I am going to cover how to change the default niceness value for a user or group. Why change the default CPU priority value? Before explaining how to change the default niceness value, let’s cover why this could be useful. Scenario #1: You have a system with thousands of users who log in via SSH and could potentially run CPU-intensive tasks.
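One common way to set a default niceness (a sketch only; the article may use a different mechanism, and the group name is a placeholder) is the priority item in /etc/security/limits.conf, which pam_limits applies when a user logs in:

    # /etc/security/limits.conf
    # start processes for members of the "students" group at niceness 10
    @students    -    priority    10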

Remote Command Execution with SaltStack

A few weeks back I wrote an article, Getting started with SaltStack; that article covered configuration and package automation with SaltStack. In today’s article I am going to cover SaltStack’s remote execution abilities, a feature that I feel SaltStack has implemented better than other automation tools. Running a command in a state: if you remember from the previous article, SaltStack’s states are permanent configurations. Adding a command to a Salt state is useful when you want a command to run after provisioning a server, to run every time Salt manages the state of the system, or to run when certain conditions are true.
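To give a feel for both forms of running commands discussed here (the target, script path and condition below are placeholders), a command can be wrapped in a state with cmd.run, and the same module can be called ad hoc from the master:

    # /srv/salt/motd.sls - run a command as part of a state
    update_motd:
      cmd.run:
        - name: /usr/local/bin/update-motd.sh
        - unless: test -f /etc/motd

    # ad hoc remote execution against every minion
    salt '*' cmd.run 'uptime'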

Setting process CPU priority with nice and renice

Nice is a command in Unix and Linux operating systems that allows for adjustment of the “niceness” value of processes. Adjusting the niceness value of a process sets an advised CPU priority that the kernel’s scheduler uses to determine which processes get more or less CPU time. In Linux this niceness value can be ignored by the scheduler; other Unix implementations may treat it differently.
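A short illustration (the command and PID are arbitrary): nice sets the value when a process is launched, and renice adjusts one that is already running:

    # start a CPU-heavy job at a lower priority (niceness 10)
    nice -n 10 tar -czf /tmp/backup.tar.gz /home

    # lower the priority of an already running process (PID 1234)
    renice -n 15 -p 1234

    # raising priority (a negative niceness) requires root
    sudo renice -n -5 -p 1234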

Getting started with SaltStack by example: Automatically Installing nginx

Systems administration is changing. With the huge scale of internet company deployments and the popularity of cloud computing, server deployments are often scaling faster than the systems administration teams supporting them. To meet that demand, those teams are finding themselves changing the ways they have traditionally managed servers. One of those changes is automating tasks a sysadmin once had to perform by hand, such as installing packages (via apt or yum) and modifying configuration files.
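As a taste of what that automation looks like (a minimal sketch, not necessarily the exact state used in the article), installing nginx and keeping its service running takes only a few lines of Salt state:

    # /srv/salt/nginx.sls
    nginx:
      pkg.installed: []
      service.running:
        - require:
          - pkg: nginx

Applied with something like salt 'webserver*' state.sls nginx, this installs the package and ensures the service is running.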

Finding the OS version and Distribution in Linux

When supporting systems you have inherited, or working in environments with many different Linux distributions and OS versions, there are times when you simply don’t know offhand which OS version or distribution the server you are logged into is running. Luckily there is a simple way to figure that out.

Ubuntu/Debian:

    $ cat /etc/lsb-release
    DISTRIB_ID=Ubuntu
    DISTRIB_RELEASE=13.04
    DISTRIB_CODENAME=raring
    DISTRIB_DESCRIPTION="Ubuntu 13.04"

RedHat/CentOS/Oracle Linux:

    # cat /etc/redhat-release
    Red Hat Enterprise Linux Server release 5 (Tikanga)

Catchall: If you are looking for a quick way and don’t care what the output looks like, you can simply do this as well.

SSH: Disable Host Checking for Scripts & Automation

In the world of cloud servers and virtual machines, scripting and automation are a top priority for any sysadmin. Recently, while I was creating a script that logged into another server via SSH to run arbitrary commands, I ran into a brick wall.

    $ ssh 192.168.0.169
    The authenticity of host '192.168.0.169 (192.168.0.169)' can't be established.
    ECDSA key fingerprint is 74:39:3b:09:43:57:ea:fb:12:18:45:0e:c6:55:bf:58.
    Are you sure you want to continue connecting (yes/no)?

To anyone who has used SSH long enough, the above message should look familiar.
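For reference (these are standard OpenSSH client options, though not necessarily the exact approach the full article takes), the interactive prompt can be bypassed in scripts with StrictHostKeyChecking, often paired with a throwaway known_hosts file:

    # skip the interactive host key prompt; note this weakens protection against MITM attacks
    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null 192.168.0.169 uptime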