A few weeks ago I decided to use some of my free time over the holidays to refresh my resume with the past year's accomplishments. While editing my Google Doc and wrestling with the arduous formatting, pagination issues, and so on, a question came to mind:
Is there a schema for resumes, and can it be described using JSON or YAML?
Enter JSON Resume. I knew I couldn't have been the only person to have posed this question. The JSON Resume project seems well maintained and is still under active development. My reasoning for wanting a schema for my resume is best explained through several goals:
Demonstrating proficiency with object-notation and key-value languages like JSON or YAML
A single source of truth for my resume that has several output mediums: web, pdf, physical paper
Easily updated with further career growth
Perfect use case for building a CI/CD workflow using the AWS cloud suite
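To give a sense of what the schema looks like, here is a minimal sketch of a resume document following the JSON Resume conventions (the names, dates, and values are placeholders, not from my actual resume):

```json
{
  "basics": {
    "name": "Jane Doe",
    "label": "DevOps Engineer",
    "email": "jane@example.com",
    "summary": "Infrastructure engineer focused on automation."
  },
  "work": [
    {
      "name": "Example Corp",
      "position": "Systems Engineer",
      "startDate": "2019-06-01",
      "highlights": ["Built a CI/CD pipeline for infrastructure changes"]
    }
  ],
  "skills": [
    { "name": "Automation", "keywords": ["Terraform", "Ansible"] }
  ]
}
```

A single file like this can then be rendered by different themes into web, PDF, and print output.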
Overall I’m satisfied with the outcome of the project. I’m able to make updates to my resume in a language that I am comfortable with. The website costs a fraction of a dollar per month to host, and build times are minimal. I believe anyone who is considering learning DevOps, or wants to expand their knowledge of CI/CD concepts through practical experience, would find this exercise very useful and rewarding.
I’m sure many of you have stumbled across the article Ansible and HashiCorp: Better Together and wondered, how can I do this in my environment? I, like many folks, really wanted to think that Terraform could configure a VM while it’s deploying the rest of the infrastructure. Y’know, that whole frying of an instance from a baked image. This may work in very simple, caveat-free environments with little technological baggage. Unfortunately, that’s not the world we all live in. This led me down the path of investigating a potential cooperation between Terraform and Ansible, of which there are a few options:
The local-exec method calling Ansible from Terraform
Using a tool to create a dynamic Ansible inventory from the .tfstate
Create an Ansible inventory from Terraform
I tried several of these methods, and the one which worked best for me was to have Terraform generate an Ansible inventory. This solution was simple yet functional for our organization. We have no need to maintain the .tfstate long-term, as we only need Terraform to stand up several long-running VMs which will be maintained and patched. This may seem like the old-school method of infrastructure as code, because it is, but it works and works well. I’ve created a GitHub repo as an example.
Terraform templatefile()
The key to getting Terraform to generate a file from the .tfstate is the templatefile() function. My earlier attempts used a null_resource with a local-exec provisioner, but that failed to loop properly when generating the inventory from my template, and that’s where templatefile() came in handy.
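As a sketch of the approach (resource names and attributes are illustrative, not taken from the repo), a local_file resource can render an INI-style inventory from a template:

```hcl
# Render an Ansible inventory from Terraform-managed VM attributes.
# "aws_instance.app" and the template path are placeholders.
resource "local_file" "ansible_inventory" {
  filename = "${path.module}/inventory.ini"
  content = templatefile("${path.module}/inventory.tmpl", {
    hostnames = aws_instance.app[*].tags.Name
    addresses = aws_instance.app[*].private_ip
  })
}
```

With a template like the following, templatefile() handles the looping that the null_resource approach struggled with:

```hcl
[app]
%{ for idx, addr in addresses ~}
${hostnames[idx]} ansible_host=${addr}
%{ endfor ~}
```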
Throughout my career, I’ve been through the ESXi installer more times than I care to admit; if I never have to confirm the root password again, I’d be ecstatic. It’s true that today, having service-profile-based blade centers greatly reduces the number of ESXi installs. But there comes a time when you have to do a large-scale deployment. Say, for instance, you have a 2,000-concurrent-user VDI footprint comprising 32 hosts spanning two datacenters and four Cisco UCS failure domains; installing ESXi by hand is less than exciting.
Thanks to the advent of profile-based blade center configurations like Cisco UCS, we can now control the MAC addresses in a pool, and if you happen to go one step further with UCS Central, your pools can be global and shared among all of your failure domains. This being the case, I’ve found that when installing ESXi with PXE, I can autogenerate the ESXi kickstart! I will detail these steps in a separate post, NGINX Autogenerated Kickstart from MAC. However, what ties all of this together are two assumptions:
MAC addresses assigned by a pool in sequential order
DHCP IP reservations based on those MAC addresses
Network Design
I chose to go with a /24 subnet for my hypervisor management range, plenty of space to grow, and at the top of the subnet we’re going to add a DHCP scope for PXE booting.
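With ISC dhcpd, the scope and MAC-based reservations described above might look like this (subnet, addresses, and MACs are illustrative placeholders; the next-server matches the TFTP host used below):

```
# Illustrative dhcpd.conf fragment for the PXE scope and host reservations.
subnet 172.16.2.0 netmask 255.255.255.0 {
  option routers 172.16.2.1;

  # Small PXE pool at the top of the /24
  range 172.16.2.200 172.16.2.220;
  next-server 172.16.2.100;   # TFTP server
  filename "pxelinux.0";

  # Sequential pool MACs yield predictable reservations
  host esx01 {
    hardware ethernet 00:25:b5:00:00:01;
    fixed-address 172.16.2.11;
  }
}
```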
```
service tftp
{
    socket_type = dgram
    protocol    = udp
    wait        = yes
    user        = root
    server      = /usr/sbin/in.tftpd
    server_args = --address 172.16.2.100:67 -T 10000000 -s /var/lib/tftpboot
    disable     = no
    per_source  = unlimited
    cps         = 2000 1
    flags       = IPv4
}
```
syslinux
You have to use a specific version of syslinux, 3.86, in order for ESXi to boot using legacy BIOS. You can find reference to this on the VMware documentation site.
You will want to download syslinux 3.86 from their site. You can compile the source or use my pre-built binary.
ESXi Kickstart
I have created a public GitHub repo for everyone to clone and have fun with. The premise of the automation is this: a predictable MAC address yields an assumed IP address, which NGINX uses to dynamically serve up the static kickstart file your boot.cfg is expecting.
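One way to sketch that NGINX behavior (this is a hypothetical fragment, not the repo's actual config; paths and the octet-based naming are assumptions) is to key the served kickstart off the requester's IP, which is predictable from the pooled MAC:

```
# Hypothetical nginx fragment: map the requesting host's last octet
# to a per-host kickstart file under /var/www/kickstart.
server {
    listen 80;
    root /var/www/kickstart;

    location /ks.cfg {
        # DHCP reservations make $remote_addr predictable per host.
        if ($remote_addr ~ \.(\d+)$) {
            set $octet $1;
        }
        try_files /ks-$octet.cfg =404;
    }
}
```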
Word of caution
We boot from SAN in all of the environments where I’ve employed this setup. As anyone who works with Fibre Channel knows, you can number your LUNs, and LUN 0 should be your boot LUN, mapped by your vHBA. However, if you’re going to reload a blade, do yourself a favor and remove it from every initiator group except its boot LUN. Whichever LUN is presented first will be wiped, and the installer does not care which one that is.
Final Thoughts
If all goes well, you’ll have a hypervisor vNIC boot, receive the IP you expect, grab syslinux, pull the kickstart from your NGINX server, and install itself all on its own.
When adding iSCSI or Fiber Channel LUNs to your vSphere cluster, you’ll need to rescan the vHBA on all of your hosts. Here’s a quick snippet to facilitate that in PowerCLI.
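A minimal sketch of that snippet in PowerCLI (the cluster name is a placeholder; adjust scope as needed):

```powershell
# Rescan all HBAs and VMFS volumes on every host in a cluster.
Get-Cluster -Name "Prod-Cluster" |
    Get-VMHost |
    Get-VMHostStorage -RescanAllHba -RescanVmfs
```

Dropping the Get-Cluster stage rescans every host the connected vCenter knows about instead.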