A picture is worth a thousand words

When building a new team, a number of factors have to be taken into account. These can be summarized in a visual called a ‘team canvas’, as seen below.

Team Canvas

Why was the team formed? Define the purpose of the team in the center. List some of the values you want this team to have. Next, define the people and the roles that will make up this team. Follow that with some common goals that the team comes up with.

Ask the team to write down their strengths and areas for improvement. The team will need some tools to get their job done; list those under ‘needs and expectations’. Set some boundaries with rules and activities. For personal growth, figure out what each team member wants to accomplish and list those in the ‘personal goals’ section.

Creating a team canvas will help create structure for the team. It will give others an overview of the team, and it will also help in keeping the team headed in the right direction.

How to get things done

As a manager I am expected to “get things done”. I use Agile Scrum or Kanban to ensure that my team is planning and executing at its best. I also follow a strategy by David Allen called “Getting Things Done®”, which outlines the following steps that enable execution.

Getting Things Done

I ‘capture’ the list of things to do for my team in a product backlog. I then go through the backlog and ‘clarify’ the items in it to ensure that they are actionable. I avoid vague terms such as “improve speed of web application” and replace them with more specific ones such as “web app has latency of ‘X’ ms; it should instead have latency of ‘Y’ ms”.

Following that, I ‘organize’ the backlog by priority and move items for a given sprint or release into the ‘To Do’ swimlane in Kanban at the beginning of the sprint or release. I hold daily or weekly stand-ups to ‘reflect’ on the progress of the work that I have ‘engaged’ in.

There are of course other execution strategies as well. Dr. Stephen Covey has written ‘The 7 Habits of Highly Effective People’, which can help improve personal execution. Tony Buzan has written about ‘Mind Mapping’ as a technique for unlocking our brain’s potential. Nick Cernis has written ‘todoodlist’, which describes yet another way of organizing yourself to improve execution.

Stakeholders in Project Management

Managing people, process and technology is essential to successful project management. Stakeholder management falls under the ‘people’ category.

According to the PMBOK, “A stakeholder is an individual, group, or organization who may affect, be affected by, or perceive itself to be affected by a decision, activity, or outcome of a project.”

Without stakeholders’ active participation a project is likely to fail. A project manager should identify stakeholders and manage the relationship with them. Figure 1 shows some of the stakeholders that a project manager may have to deal with.


Figure 1: Project Stakeholders

As an SRE manager, I have had to play the role of a technical project manager (TPM) on projects where a TPM was not available. Identifying stakeholders can be tricky, since in large projects one may not be aware of all the potential stakeholders. Stakeholder identification can take place when the project charter is being defined; this helps surface the major stakeholders. However, it is not uncommon to discover new stakeholders throughout the implementation of the project. A project with unhappy stakeholders does not bode well for the project’s success criteria. Stakeholders’ influence can be expressed in the power/interest model shown in Figure 2.

Figure 2: Stakeholder power/interest grid

Dividing stakeholders into the categories of ‘keep satisfied’, ‘manage closely’, ‘monitor’ and ‘keep informed’ is the role of the project manager. Stakeholders with low interest and low power require the least comparative effort to manage; on the other hand, stakeholders with high power and high interest should be handled with the utmost care.

The importance of stakeholders cannot be overstated. For instance, if you ignore sales as a stakeholder, you may end up with a product that the sales team has no strategy for selling.

My Netflix recommendations

I was taking stock of the shows on Netflix I have enjoyed over the past four years, and I thought I would share that with y’all. What shows do you enjoy on Netflix?

  • Black Mirror
  • Planet Earth
  • Glitch
  • Ozark
  • Mindhunter
  • Marvel’s the Punisher
  • Marvel’s Jessica Jones
  • Marvel’s The Defenders
  • Marvel’s Iron Fist
  • Marvel’s Luke Cage
  • Marvel’s Daredevil
  • Arrow
  • White Collar
  • Sherlock
  • Breaking Bad
  • The Last Kingdom
  • The Walking Dead
  • Narcos
  • Into the Badlands
  • Archer
  • Supernatural
  • My Name is Earl
  • Better Call Saul

How to create a Docker base image

Here is a complete set of instructions to create your own Docker base image. Docker base images are the first layer in a Docker image. There are plenty of base images available; however, you can also create your own if you like.

First, install the debootstrap package on any Ubuntu host.

# apt-get install debootstrap -y

Next, install docker.

# apt-get install docker.io -y

Next, use debootstrap to download a minimal Ubuntu root filesystem.

# debootstrap ubuntu-suite-name directory-name-to-download-to > /dev/null

Example for 16.04: debootstrap xenial xenial > /dev/null
You can get a list of Ubuntu suite names from https://wiki.ubuntu.com/DevelopmentCodeNames
Now you can import the xenial directory into Docker as an image.

# tar -C xenial -c . | docker import - xenial

That’s it; the image is now created and stored locally. You can verify it using ‘docker images’.
If you want to run a container from the image, try ‘docker run -it xenial /bin/bash’. This will start a Bash shell in the container and give you a Bash prompt.
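
For example, using the ‘xenial’ image imported above:

# docker images
# docker run -it xenial /bin/bash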

Next, if you want to push this to your Docker hub registry, try the below steps:

# docker login -u your-docker-hub-username
# docker tag image-id-that-you-got-from-docker-images-cmd your-docker-hub-username/ubuntu:16.04
  Example for Xenial or Ubuntu 16.04: docker tag s3489dk349d0 syedaali/ubuntu:16.04
# docker push syedaali/ubuntu

To verify, visit hub.docker.com in your browser, log in, and check that the image is present.
Alternatively, you can run ‘docker search syedaali’ and it will show you the images that I own. Replace my username with yours, of course.

Understanding Inodes

Understanding inodes is crucial to understanding Unix filesystems. Files contain data and metadata. Metadata is information about the file. Metadata is stored in an inode. The contents of an inode are:

– Inode Number
– Uid
– Gid
– Size
– Atime
– Mtime
– Ctime
– Blocksize
– Mode
– Number of links
– ACLs

Inodes are usually 256 bytes in size. Filenames are not stored in inodes; instead they are stored in the data portion of a directory. Traditionally, filenames are stored in a linear list, which is why searching for a filename can take a long time. Ext4 and XFS use more efficient B-trees to store filenames in directories, which makes filename lookups much faster than a linear scan.
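
Most of these fields can be viewed with the ‘stat’ command. A quick sketch, using /etc/hosts as an arbitrary example file:

# print the inode number, hard link count, size, mode, uid and gid of a file
$ stat -c 'inode=%i links=%h size=%s bytes mode=%a uid=%u gid=%g' /etc/hosts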

A dentry (short for directory entry) is used by the kernel to keep track of the mapping between a filename and its inode in a directory.

An inode can contain direct or indirect pointers to blocks of data for a given file. A direct block pointer means that the inode contains the block number of a block that holds actual file data. An indirect block pointer means that the inode contains the block number of a block that in turn contains further block numbers to read data from.

The ext filesystems create a fixed number of inodes when the filesystem is formatted; if you run out of inodes, you have to re-create the filesystem. XFS does not have a fixed number of inodes; they are created on demand.
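
To see how many inodes a filesystem has, and how many are in use, you can run ‘df -i’ (shown here for the root filesystem):

# show total, used and free inode counts per mounted filesystem
$ df -i /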

When you delete a file, the unlink() system call removes the directory entry and decrements the inode’s link count; when the count reaches zero, the inode is marked as available. The data blocks themselves are not wiped.

The number of links to a file is maintained in its inode. Each time a hard link is created, the link count increases. Soft links do not increase the link count of a file or directory.
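
A small sketch of this behaviour, using a throwaway file:

$ touch original
$ stat -c '%h' original     # prints 1
$ ln original hardlink
$ stat -c '%h' original     # prints 2; the hard link raised the count
$ ln -s original softlink
$ stat -c '%h' original     # still 2; the symlink did not change it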

The superblock contains metadata about a filesystem. A filesystem typically stores several copies of the superblock in case one of them gets damaged. Some of the information in a superblock is:

– Filesystem size
– Block size
– Empty and filled blocks
– Size and location of inode table
– Disk block map

You can read superblock information using the command ‘dumpe2fs /dev/<device> | grep -i superblock’.

Updating your GitHub.com Fork

GitHub.com hosts a large number of open-source repositories. You can often fork these repositories; however, keeping the fork up to date with the upstream repository can be challenging. Here are the steps to update your fork from upstream. The commands below assume you are in your local fork’s directory and that the upstream branch is called ‘develop’; if it is something else, use that branch name instead. I am using the SaltStack repo as an example.

$ git remote -v
$ git remote add upstream https://github.com/saltstack/salt
$ git remote -v
$ git fetch upstream
$ git checkout develop
$ git merge upstream/develop
$ git push

Sparse Files

Sparse files are files whose metadata reports one size, but which take up less space than that on the filesystem.
Sparse files are a common way to use disk space efficiently. They can be created using the ‘truncate’ command.
You can also create them programmatically by seeking past the end of a file and writing at that offset, or by calling ftruncate().
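
Another common shell approach is ‘dd’ with a ‘seek’ offset, which sets the file length without writing any data blocks. A sketch for a 1GB sparse file:

# write zero bytes of data but seek to the 1GB mark, leaving a hole
$ dd if=/dev/zero of=sparsefile bs=1 count=0 seek=1G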

ls -l reports the length of the file, so a 2-byte file will be reported as 2 bytes.
ls -s reports the size based on allocated blocks, so for a 2-byte file, ls -s will report a size of 4K, since the block size is 4K.

du reports size based on the blocks being used. For instance, if a file is 2 bytes and the block size is 4096, du will report the file as 4K.
du -b will report the same size as ls -l, since -b means apparent size.

Both ls -l and du -b report apparent size and so do not take sparseness into account. If a file is sparse, du -b and ls -l report it as though it were not sparse.

When using the ‘cp’ command, use the ‘cp --sparse=always’ option to keep sparse files sparse.
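
For example, assuming a 1GB sparse file named ‘test’ like the one created in the examples further down:

# copy while preserving holes wherever the source has no allocated blocks
$ cp --sparse=always test test.copy
$ ls -ls test test.copy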

‘scp’ is not sparse-aware, so if you use scp to copy a sparse file it will take up “more” room on the destination host.
If you use rsync with the -S option instead, sparse files will be preserved as sparse.
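
A sketch of that rsync invocation (the destination host name is just a placeholder):

# -S turns runs of zeros back into holes on the destination
$ rsync -avS test user@desthost:/tmp/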

tar is not sparse-file aware by default. If you tar a sparse file, both the tar file itself and the file you get back when you untar it will have the sparse areas filled with zeros, resulting in more disk blocks being used. Use the ‘-S’ option with tar to make it sparse-file aware.

# create a sparse file of size 1GB
$ truncate -s +1G test

# The first number shows 0, which is block based size
$ ls -lsh test
total 1.0G
0 -rw-rw-r-- 1 orion orion 1.0G Jan  7 14:07 test

# create tar file
$ tar -cvf test.tar test

# test.tar now really takes up 1GB
$ ls -ls test.tar
1.1G -rw-rw-r-- 1 orion orion 1.1G Jan  7 14:08 test.tar

# untarring now shows that the file is using 1GB of blocks; before it was using 0
$ rm test
$ tar xvf test.tar
$ ls -lsh test
1.0G -rw-rw-r-- 1 orion orion 1.0G Jan  7 14:07 test

With the -S option, tar is smarter and the file remains sparse.

# create a sparse file of size 1GB
$ truncate -s +1G test

# The first number shows 0, which is block based size
$ ls -lsh test
total 1.0G
0 -rw-rw-r-- 1 orion orion 1.0G Jan  7 14:07 test

# create tar file with -S
$ tar -S -cvf test.tar test

# test.tar allocated size based on blocks is now 12
$ ls -ls test.tar
12 -rw-rw-r-- 1 orion orion      10240 Jan  7 14:19 test.tar
     
# untarring now shows that the file is still sparse
$ rm test
$ tar xvf test.tar
$ ls -lsh test
0 -rw-rw-r-- 1 orion orion 1073741824 Jan  7 14:19 test

When we do a ‘stat’ on a sparse file, we see it taking up no space in terms of blocks.

$ stat test
  File: `test'
  Size: 1073741824	Blocks: 0          IO Block: 4096   regular file
Device: fd07h/64775d	Inode: 1046835     Links: 1
Access: (0664/-rw-rw-r--)  Uid: (  500/   orion)   Gid: (  500/   orion)
Access: 2015-01-07 14:19:53.957911258 -0800
Modify: 2015-01-07 14:17:58.000000000 -0800
Change: 2015-01-07 14:19:53.957623281 -0800

We can also measure the extents used.

$ filefrag test
test: 0 extents found

Copying files in Linux

Copying files should be simple, yet there are a number of ways of transferring files.
Some of the ways that I could think of are listed here.

# -a=archive mode; equals -rlptgoD
# -r=recurse into directories
# -l=copy symlinks as symlinks
# -p=preserve permissions
# -t=preserve modification times
# -g=preserve group
# -o=preserve owner (super-user only)
# -D=preserve device files (super-user only) and special files
# -v=verbose
# -P=keep partially transferred files and show progress
# -H=preserve hardlinks
# -A=preserve ACLs
# -X=preserve selinux and other extended attributes
$ rsync -avPHAX /source /destination

# cross systems using ssh
# -z=compress
# -e=specify remote shell to use
$ rsync -azv -e ssh /source user@destinationhost:/destination-dir

# -xdev=Don’t  descend  directories on other filesystems
# -print=print the filenames found
# -p=Run in copy-pass mode
# -d=make directories
# -m=preserve-modification-time
# -v=verbose
$ find /source -xdev -print | cpio -pdmv /destination

# let's not forget good old cp
# -r=recursive
# -p=preserve mode,ownership,timestamps
# -v=verbose
$ cp -rpv --sparse=always /source /destination

# tar
# -c=create a new archive
# -v=verbose
# -f=use archive file
$ tar cvf - /source | (cd /destination && tar xvf -)

# scp
$ scp -r /source user@destinationhost:/destination-dir

# copy an entire partition
$ dd if=/dev/source-partition of=/dev/destination-partition bs=<block-size>

Useful Linux Memory Calculation Commands

Calculate the total resident memory (RSS) used by all processes, in MB

ps aux | awk '{sum+=$6} END {print sum / 1024}'

To free pagecache:

    echo 1 > /proc/sys/vm/drop_caches

To free dentries and inodes:

    echo 2 > /proc/sys/vm/drop_caches

To free pagecache, dentries and inodes:

    echo 3 > /proc/sys/vm/drop_caches
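
Note that dirty pages are not dropped, so it is common to run ‘sync’ first so that as much of the cache as possible is clean and reclaimable:

    sync; echo 3 > /proc/sys/vm/drop_caches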

Add up the physical memory being used by a given user, in this case the user ‘daemon’

ps aux |awk '{if($1 ~ "daemon"){Total+=$6}} END {print Total/1024" MB"}'

see how many processes are using swap:

grep Swap /proc/[1-9]*/smaps | grep -v '\W0 kB'

list the top 10 processes using the most swap:

ps ax | sed "s/^ *//" > /tmp/ps_ax.output 
for x in $(grep Swap /proc/[1-9]*/smaps | grep -v '\W0 kB' | tr -s ' ' | cut -d' ' -f-2 | sort -t' ' -k2 -n | tr -d ' ' | tail -10); do 
    swapusage=$(echo $x | cut -d: -f3)
    pid=$(echo $x | cut -d/ -f3)
    procname=$(cat /tmp/ps_ax.output | grep ^$pid)
    echo "============================" 
    echo "Process   : $procname" 
    echo "Swap usage: $swapusage kB"; done

Common Mount Options

async -> Allows the asynchronous input/output operations on the file system.
auto -> Allows the file system to be mounted automatically using the mount -a command.
defaults -> Provides an alias for async,auto,dev,exec,nouser,rw,suid.
exec -> Allows the execution of binary files on the particular file system.
loop -> Mounts an image as a loop device.
noauto -> Default behavior disallows the automatic mount of the file system using the mount -a command.
noexec -> Disallows the execution of binary files on the particular file system.
nouser -> Disallows an ordinary user (that is, other than root) to mount and unmount the file system.
remount -> Remounts the file system in case it is already mounted.
ro -> Mounts the file system for reading only.
rw -> Mounts the file system for both reading and writing.
user -> Allows an ordinary user (that is, other than root) to mount and unmount the file system.
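
As a quick sketch of how a few of these options look in practice (the image, device and mount point names are just placeholders):

# mount an ISO image read-only via a loop device
$ mount -o loop,ro image.iso /mnt/iso

# remount an already-mounted filesystem as read-only
$ mount -o remount,ro /dev/sdb1 /data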
