Resume

A few weeks ago I decided to use some of my free time over the holidays to refresh my resume with the past year’s accomplishments. While editing my Google Doc and wrestling with the arduous formatting and pagination issues, I came up with a question:

Is there a schema for resumes, and can it be described using JSON or YAML?

Enter JSON Resume. I knew I couldn’t have been the only person to have posed this question. The JSON Resume project seems well maintained and is still under active development. My reasoning for wanting a schema for my resume is best explained through several goals:

  • Demonstrating proficiency with object notation & key-value languages like JSON or YAML
  • A single source of truth for my resume that has several output mediums: web, PDF, physical paper
  • Easily updated with further career growth
  • A perfect use case for building a CI/CD workflow using the AWS cloud suite
  • Additional exposure of my resume through a public GitHub repository
  • A use case for my resume.moorenix.com domain
  • SSL encrypted
  • The website has to be static & cheap
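For a sense of the format, here’s a minimal sketch of a resume.json using a few fields from the schema’s basics and work sections; every value below is a placeholder:

{
  "basics": {
    "name": "Jane Doe",
    "label": "Systems Engineer",
    "email": "jane@example.com",
    "url": "https://resume.example.com"
  },
  "work": [
    {
      "name": "Example Corp",
      "position": "DevOps Engineer",
      "startDate": "2019-01-01",
      "highlights": ["Built a CI/CD pipeline for static site deployment"]
    }
  ]
}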

Current Open Issues

  • PDF exports do not render as well as using the PDF export feature of your browser: Issue 94
  • Date handling is extremely specific: Issue 198

Final Thoughts

Overall I’m satisfied with the outcome of the project. I’m able to make updates to my resume in a language that I am comfortable with. The website costs a fraction of a dollar per month to host, and build times are minimal. I believe anyone who is considering learning DevOps or wants to expand their knowledge of CI/CD concepts through practical experience would find this exercise very useful & rewarding.

Terraform & Ansible together

I’m sure many of you have stumbled across the article Ansible and HashiCorp: Better Together and wondered, how can I do this in my environment? I, like many folks, really wanted to believe that Terraform could configure a VM while it’s deploying the rest of the infrastructure. Y’know, that whole frying of an instance from a baked image. This may work in very simple, caveat-free environments that have little technological baggage. Unfortunately, that’s not the world we all live in. This led me down the path of investigating a potential cooperation between Terraform & Ansible, of which there are a few options:

  • The Ansible provisioner
  • The local-exec method calling Ansible from Terraform
  • Using a tool to create a dynamic Ansible inventory from the .tfstate
  • Creating an Ansible inventory from Terraform

I tried several of these methods, and the one which worked best for me was to have Terraform generate an Ansible inventory. This solution was simple yet functional for our organization. We have no need to maintain the .tfstate long-term, as we only need Terraform to stand up several long-running VMs which will be maintained & patched. This may seem like the old-school method of infrastructure as code, because it is, but it works and works well. I’ve created a GitHub repo as an example.

Terraform templatefile()

The key to getting Terraform to generate a file from the .tfstate is the templatefile() function. My earlier attempts used a null_resource with a local-exec, but that failed to loop properly when generating the inventory from my template; that’s where templatefile() came in handy.

HCL Code

######################################
# Terraform Ansible Inventory Creation
######################################

# Render the inventory template with the default IP of every VM instance
locals {
  hosts_rendered = templatefile("${path.module}/templates/hosts.tpl", {
    default_ip_address = vsphere_virtual_machine.vm[*].default_ip_address
  })
}

# Rewrite the inventory file whenever the rendered content changes
resource "null_resource" "hosts" {
  triggers = {
    template_rendered = local.hosts_rendered
  }

  provisioner "local-exec" {
    command = "echo '${local.hosts_rendered}' > '${path.module}/ansible/hosts.yml'"
  }
}

Template hosts.tpl

all:
  hosts:
%{ for addr in default_ip_address ~}
    ${addr}:
%{ endfor ~}
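From there the hand-off is a plain two-step run; site.yml is a placeholder for whatever playbook you point at the generated inventory:

terraform apply
ansible-playbook -i ansible/hosts.yml site.yml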

References

GitHub Code Repo
Terraform Templates

PXE Install ESXi

Throughout my career, I’ve been through the ESXi installer more times than I care to admit; if I never have to confirm the root password again, I’ll be ecstatic. It’s true that service-profile-based blade centers greatly reduce the number of ESXi installs today. But there comes a time when you have to do a large-scale deployment. Say, for instance, you have a 2,000-concurrent-user VDI footprint comprising 32 hosts spanning two datacenters and four Cisco UCS failure domains; installing ESXi by hand is less than exciting.

Thanks to the advent of profile-based blade center configurations like Cisco UCS, we can now control the MAC addresses in a pool, and if you happen to go one step further with UCS Central, your pools can be global and shared among all of your failure domains. This being the case, I’ve found that when installing ESXi with PXE I can autogenerate the ESXi kickstart! I will detail these steps in a separate post, NGINX Autogenerated Kickstart from MAC. However, what ties all of this together are two assumptions:

  • MAC addresses assigned by a pool in sequential order
  • DHCP IP reservations based on those MAC addresses

Network Design

I chose to go with a /24 subnet for my hypervisor management range, which leaves plenty of space to grow, and at the top of the subnet we’re going to add a DHCP scope for PXE booting.

Purpose              Subnet CIDR     Subnet Mask      Default Gateway   DHCP Range
Site                 172.16.0.0/22   255.255.252.0    N/A               N/A
Management Network   172.16.0.0/24   255.255.255.0    172.16.0.1        N/A
Hypervisors          172.16.1.0/24   255.255.255.0    172.16.1.1        172.16.1.240 - 172.16.1.249
VM Network           172.16.2.0/24   255.255.255.0    172.16.2.1        172.16.2.10 - 172.16.2.249

Hypervisor Network DHCP Options

DHCP Option Number   Description             Configuration
003                  Router                  172.16.1.1
006                  DNS                     8.8.8.8
066                  Boot Server Host Name   172.16.2.100
067                  Bootfile Name           pxelinux.0
  • You can use any DNS you would like
  • You can reference your boot server via hostname
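For reference, here is roughly what that scope could look like if your DHCP server happens to be ISC dhcpd; the host entry is a hypothetical example mirroring the sequential UCS MAC pool and the reservation scheme above:

subnet 172.16.1.0 netmask 255.255.255.0 {
  range 172.16.1.240 172.16.1.249;
  option routers 172.16.1.1;
  option domain-name-servers 8.8.8.8;
  next-server 172.16.2.100;   # TFTP boot server, the equivalent of option 066
  filename "pxelinux.0";      # bootfile, the equivalent of option 067

  # One reservation per host, keyed off the sequential MAC pool
  host esxi01 {
    hardware ethernet 00:25:b5:00:00:01;   # example MAC from the pool
    fixed-address 172.16.1.11;             # example reservation
  }
}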

TFTP Server

I run the TFTP server under xinetd on a CentOS machine.

TFTP Install

CentOS 7/8

# tftp-server provides /usr/sbin/in.tftpd; xinetd launches it on demand
yum -y install xinetd tftp-server
systemctl enable xinetd
systemctl start xinetd

Ubuntu 16.04+

apt-get update
# tftpd-hpa provides in.tftpd; xinetd manages the socket, so stop its daemon
apt-get install -y xinetd tftpd-hpa
systemctl disable --now tftpd-hpa
systemctl enable xinetd
systemctl start xinetd

/etc/xinetd.d/tftp

service tftp
{
    socket_type  = dgram
    protocol     = udp
    wait         = yes
    user         = root
    server       = /usr/sbin/in.tftpd
    # -s chroots into the TFTP root; xinetd binds the standard tftp port (udp/69)
    server_args  = --address 172.16.2.100 -T 10000000 -s /var/lib/tftpboot
    disable      = no
    per_source   = unlimited
    cps          = 2000 1
    flags        = IPv4
}
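Before attempting a PXE boot, a quick sanity check from any machine with the tftp-hpa client installed confirms xinetd is answering:

tftp 172.16.2.100 -c get pxelinux.0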

syslinux

You have to use a specific version of syslinux, 3.86, in order for ESXi to boot using legacy BIOS. You can find reference to this on the VMware documentation site.

You will want to download syslinux 3.86 from their site. You can compile the source or use my pre-built binary.
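With syslinux 3.86 in place, the pxelinux configuration that chains into the ESXi installer looks roughly like this; the boot.cfg path is an assumption, and VMware’s PXE documentation has the canonical version:

# /var/lib/tftpboot/pxelinux.cfg/default
DEFAULT install
LABEL install
KERNEL mboot.c32
APPEND -c esxi/boot.cfg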

ESXi Kickstart

I have created a public GitHub repo for everyone to clone and have fun with. The premise of the automation is this: a predictable MAC address yields an assumed IP address, which NGINX uses to dynamically serve up the static kickstart file your boot.cfg is expecting.
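As a rough sketch of the NGINX side (the actual config lives in the repo; the file layout and names here are hypothetical), the server can key off the requesting IP to choose a kickstart file:

# Hypothetical: a request for /ks.cfg from 172.16.1.11 returns ks-172.16.1.11.cfg
server {
    listen 80;
    root /var/www/kickstart;

    location = /ks.cfg {
        try_files /ks-$remote_addr.cfg =404;
    }
}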

Word of caution

We boot from SAN in all of the environments where I’ve employed this setup. As everyone knows about Fibre Channel, you can number your LUNs, and LUN 0 should be your boot LUN, mapped by your vHBA. However, if you’re going to reload a blade, do yourself a favor and remove it from every initiator group except the one for its boot LUN, because whichever LUN is presented first will be wiped, and the installer does not care which one that is.

Final Thoughts

If all goes well, you’ll have a hypervisor vNIC boot, receive the IP you expect, grab syslinux, pull the kickstart from your NGINX server, and install ESXi all on its own.

vSphere Rescan all HBA

When adding iSCSI or Fibre Channel LUNs to your vSphere cluster, you’ll need to rescan the vHBAs on all of your hosts. Here’s a quick snippet to facilitate that in PowerCLI.

Single Host

# Substitute your own host name; the one shown here is a hypothetical example
Get-VMHost -Name esxi01.example.com | Get-VMHostStorage -RescanAllHba

Entire Cluster

Get-Cluster -Name "GC" | Get-VMHost | Get-VMHostStorage -RescanAllHba
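Both snippets assume an active PowerCLI session; if you don’t have one yet, connect first (the vCenter name is a placeholder):

Connect-VIServer -Server vcenter.example.com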