Category Archives: DevOps and Automation

Ansible Tower 3.1 brings Workflows, Log Integration and Clustering

Ansible Tower 3.1 brings the new “Workflows” feature, which lets you chain Playbooks together, set conditions on their execution and pass data from one Playbook to the next.

Additionally, Tower can now scale beyond a single instance, allowing jobs to be processed by any one of the Tower nodes in the cluster.

In Tower 3.1 you can also easily direct all logs to a central log service such as ELK, Splunk, Loggly and others.

More information here: https://www.ansible.com/blog/introducing-asible-tower-3-1

AWS Lambda – 6 cool features

What I liked:

  1. Versions and aliases (a “prod” alias can point to the active function version – see the CLI sketch after this list)
  2. Scheduling of actions
  3. Support for Python and other languages
  4. Dynamic – no need to set up servers
  5. VPC support – the function can talk to other services you run internally
  6. Integration with CloudWatch (inspect and analyze incoming log entries)
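
A minimal sketch of items 1 and 2 using the AWS CLI (the function name, alias, version numbers and schedule are assumptions for illustration; granting CloudWatch Events permission to invoke the function via aws lambda add-permission is omitted):

# Publish the current code of the (hypothetical) function as an immutable version
aws lambda publish-version --function-name my-function

# Create a "prod" alias pointing at version 3, then repoint it at version 4 later
aws lambda create-alias --function-name my-function --name prod --function-version 3
aws lambda update-alias --function-name my-function --name prod --function-version 4

# Scheduling: a CloudWatch Events rule that fires every 5 minutes...
aws events put-rule --name my-function-every-5-min --schedule-expression "rate(5 minutes)"
# ...with the function attached as the rule's target (the ARN is a placeholder)
aws events put-targets --rule my-function-every-5-min \
  --targets "Id"="1","Arn"="arn:aws:lambda:us-east-1:123456789012:function:my-function"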

AWS Config Rules!

I know what you did last summer… opening all those insecure ports in your security groups to quickly troubleshoot that nasty bug, and then, in the rush to close the issue, forgetting to shut the gate behind you! AWS Config is your friend here 🙂

AWS Config Rules is built on top of AWS Config, and it lets you get notified, or have action taken, when a configuration change breaches AWS best-practice rules or any rules of your own.

Here is the video session on this, along with the main goodies and important notes I found:

  1. It is a completely automated change control and auditing solution! No scripts, data store or tracking needed on your side.
  2. You can troubleshoot with a time-travel-like view of your resources, their state and their relationships to any dependent resources existing at any point in time, including any change or deletion made.
  3. Your data is highly reusable: you get a JSON-formatted record of any change to your resources, and where relevant it is compatible with the corresponding AWS describe commands, so you can use that match for validation scenarios.
  4. Powerful correlation: it uses CloudTrail to show you who changed a resource and when.
  5. Easily extendable: you create your own rules through AWS Lambda using any language it supports; rule verifications are triggered based on time or on an AWS API event (tag created, instance deleted, etc.) – see the CLI sketch after this list.
  6. Targeted: you easily get a report of what changed and when, and of resources that exist when they shouldn’t or are missing.
  7. Turn events into data by routing AWS Config SNS event notifications into your own event repository in real time.
  8. Coverage: everything EC2, VPC and CloudTrail related, and now the holy grail: IAM – so you can dig into who added that policy, or detect when an admin user was added where it shouldn’t have been.
  9. Availability: AWS Config is available in all regions, while AWS Config Rules is currently available only in North Virginia (us-east-1).
  10. Spread: regional. You need to run and review it in each region separately. Not quite as powerful as it would be if it were account-centered.
  11. Pricing: Right here!
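
A rough sketch of wiring some of this up with the AWS CLI (the rule definition file, resource type and resource ID are illustrative assumptions):

# Register a (hypothetical) custom rule whose logic lives in your own Lambda function
aws configservice put-config-rule --config-rule file://my-custom-rule.json

# Get the current compliance picture across all rules
aws configservice describe-compliance-by-config-rule

# "Time travel": pull the full change history of a single security group
aws configservice get-resource-config-history \
  --resource-type AWS::EC2::SecurityGroup --resource-id sg-0123abcd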

The slide deck is here:

Launching an Ubuntu VM on Windows Azure in less than 1 minute

Yeah, you are all using Amazon AWS…

But sometimes, when you get the urge to try something new, maybe control your AWS operation from another cloud provider’s cloud, maybe try Azure 🙂

Here is how to quickly launch an Ubuntu (or basically any Linux) VM on Windows Azure in less than a minute.

First, you may want to set up this environment for better Azure usage:

  1. Sign up at Azure (Free Trial here) + special free offers and discounts for MSDN Subscribers here
  2. Install Windows PowerShell for Azure (and/or Python and/or the Azure command-line interface) from this link: http://azure.microsoft.com/en-us/downloads/ (see the CLI sketch after this list)
  3. For PowerShell: use the Add-AzureAccount command to add your Windows Azure credentials to your local PowerShell install
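
If you go with the cross-platform Azure command-line interface instead of PowerShell, the equivalent credential setup looks roughly like this (the publish-settings file name is just an example):

# Download the publish-settings file for your subscription (opens a browser)
azure account download
# Import the downloaded credentials into the local CLI
azure account import ~/Downloads/my-subscription.publishsettings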

Setting the Ubuntu VM:

Create the certificate for your new Ubuntu VM (using Cygwin or any Linux):

openssl req -x509 -days 365 -newkey rsa:2048 -keyout myPrivateKey.key -out myCert.pem

More info: http://azure.microsoft.com/en-gb/documentation/articles/virtual-machines-linux-use-ssh-key/#generate-a-key-from-an-existing-openssh-compatible-key

Using the Azure web portal, create a new Ubuntu (or other Linux) VM from the Azure templates and use myCert.pem in the new VM’s configuration.
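
If you’d rather skip the web portal, the same VM can be created from the Azure CLI; a sketch along these lines (the VM name, image name, user and location are placeholders, and exact flags may differ between CLI versions):

# Find an Ubuntu image to use
azure vm image list | grep -i ubuntu

# Create the VM with ssh enabled and our certificate attached (no password login)
azure vm create my-ubuntu-vm <ubuntu-image-name> azureuser \
  --ssh --ssh-cert ./myCert.pem --no-ssh-password --location "West Europe"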

Prepare a PuTTY version of your Azure certificate for the VM:

openssl rsa -in ./myPrivateKey.key -out myPrivateKey_rsa

Load myPrivateKey_rsa into puttygen and ask it to create a new private key from it (or use the command-line puttygen shown below).
Choose .ppk as the output format for the puttygen private key.
Use the new .ppk file for the PuTTY ssh session to the Azure Linux VM.
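
If you have the command-line version of puttygen (the putty-tools package on most Linux distributions), the GUI steps above collapse into a single command (file names follow the ones used above):

# Convert the unencrypted RSA key into PuTTY's .ppk private key format
puttygen myPrivateKey_rsa -o myPrivateKey.ppk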

Power on the VM (at the Azure portal)

Use PuTTY to log in to the new VM (and now you can “sudo apt-get install awscli” or do any other stuff you want on that Ubuntu VM)

Verifying VM Console & Logs:

Not simple… nothing yet like “aws ec2 get-console-output”

More info:

Simplifying Virtual Machine Troubleshooting using Azure Log Collector (March 2015) – works for Windows VMs only!! Support for Linux VMs is in the works
Virtual machine console access (Jan 2015)

Verifying VM state in Powershell:

Use the command: Get-AzureVM
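
The rough equivalent from the cross-platform Azure CLI (the VM name is a placeholder):

# List all VMs in the subscription with their status
azure vm list
# Show details (state, IP address, endpoints) for a single VM
azure vm show my-ubuntu-vm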

DevOps must read book – The Phoenix Project

Finally reading this important and cool DevOps book: The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win

More info on Goodreads: https://www.goodreads.com/book/show/17255186

So realistic that I feel as if I am Bill, the main character of this story.

Up to about 31% of the book, it is a real IT professional DevOps horror story. Then it starts to clear up a bit, but it has not yet turned positive…

A few quotes from the book I liked so far:

  1. The Job of IT Operations
  2. The Theory of Constraints and IT Operations
  3. Three Venues for maximizing the value of DevOps

More notes coming soon…

If you are in IT Operations and/or DevOps or want to get what they do, read it!
It has a great story in it and lots of insights on the topic.

You can read more about the books I read (or write..) on my GoodReads carousel on my blog, and you can click here to follow me on GoodReads.

DevOps Reporting From The Trenches – From Devastating Failure to Market Domination: The Pitfalls, the Formula and Beyond

There’s a lot told about DevOps, as well as about cool automation tools and products, but there are not a lot of real-world stories that are fun to just relax and read, and be done with in less than an hour.

Here is one that does have all that jazz – DevOps: Reporting from the Trenches: Winning by Creating a Continuous Delivery Software Factory

It is the story of a huge financial institution that went from devastating failure to market domination by adopting DevOps best practices.

I have to reveal a small secret here… Originally I built the story using characters from Star Trek, but then had to take them out to avoid any rights issues with the Star Trek creators. But if you look closely at the words the story’s heroes use, you might find that adventurous spirit.

You can read the story or simply download the whole DevOps CA Technology Exchange magazine in a single PDF file.

I will soon create an audio version of this story and add it to the podcast archive.

What’s in the story:

  • DevOps Revealed
  • DevOps as Power Multiplier for SaaS
  • DevOps Failure Horror Stories
  • The DevOps Success Formula in Five Simple Steps
  • Odyssey – Intergalactic tour into the future of software DevOps and Continuous Delivery

Here is how it all starts:

James lay back in his chair and looked at the shades dancing on the board. It was the third time he had tried to find what was wrong with the graphs, but he could not.

James looked at Stephanie.

Stephanie was shocked, but a small splinter of a smile started to make trails near her mouth.

James said: “I know this was about to happen, yet I can’t believe it!”

Stephanie murmured: “Unbelievable.”

In less than ten months, “UAS Enterprise1,” perceived as the most innovative service provider in its business, witnessed an enormous market-share collapse and the loss of its number one position.

Looking back at the market collapse, it took months until James, the CEO of UAS Enterprise, could discover the root cause.

You can read the story or simply download the whole DevOps CA Technology Exchange magazine in a single PDF file.

I had the pleasure of writing it together with Alon Eizenman, the founder of Nolio, one of the best DevOps tools on the market and part of the CA Technologies DevOps portfolio, and Miron Gross, one of the coolest automation pros I know, whom I had the privilege to mentor in the CTE (Council for Technical Excellence) program. Writing for a professional magazine is an entirely new experience, and I recommend going through it if you’re an IT professional or a blogger.

“PatchMe” – Quick Shell Shock (+future Vulnerabilities) Auto-Patcher for Heterogeneous Unix / Linux Sites

Download, comment and like it here: https://github.com/jackbezalel/patchme

I think you will like this new Shell Shock auto-patcher I created (“PatchMe“). It should serve you well for future and past similar “noisy” vulnerabilities, especially if you have a lab or site with many types of “loosely” managed operating systems. This tends to happen in development labs, where you have a license for every machine but do not necessarily set them up or maintain them through a central patch repository.

It is also good for cases where patching is not as straightforward as it is in Ubuntu, Debian or similar free open-source operating systems.

I believe you will find PatchMe simple to deploy and use. You don’t have to go through complex, massive patching, testing and mass deployment just to get rid of one hot vulnerability.

It is a right-to-the-point vulnerability patcher for whatever is in the news. It lets you automatically dry-test the patch, run it live if the dry run succeeds, and get a central repository created dynamically during the patch process, so you have a full audit trail of what is going on.

Here it is for use “as is” (don’t complain if it breaks anything). It may have bugs, but it was tested and seems to work just fine on dozens of machines. It currently patches Red Hat 5, 6 and 7 and should work well for release 4 as well. I am adding support for Solaris, including Solaris Zones scenarios, and will take care of CentOS, HP-UX and AIX as well as other Linux distributions.

Download, comment and like it here: https://github.com/jackbezalel/patchme

PatchMe uses the “patchme.sh” script, which works against an NFS-based repository.

All one needs to do is mount the PatchMe NFS tree (the directory structure is in the PatchMe readme file) and run the patchme.sh script from the bin directory, followed by a vulnerability name (a minimal example follows below). You could basically schedule it to run at once for all your machines or just a few of them.
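
A minimal run looks roughly like this (the NFS server name, mount point and vulnerability name are assumptions; the real directory structure is described in the PatchMe readme):

# Mount the shared PatchMe repository (read-write, since results are written back to it)
mount patcher-host:/export/patchme /mnt/patchme

# Patch this machine for one named vulnerability, e.g. Shell Shock
/mnt/patchme/bin/patchme.sh shellshock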

You have to get the patches for the vulnerability on your own, provided your license allows it, and PatchMe will take care of installing the right patch on each machine and operating system.

Why not use your Unix or Linux distribution’s software update mechanism? Well, because each vendor has a different mechanism and requirements, while PatchMe is meant to reduce the time and effort and focus on just one vulnerability at a time. It allows you, the system administrator, to save time and effort, and to keep management off your back.

Once activated, PatchMe creates a specific directory in the central repository for review and analysis, with these files for each machine, vulnerability and patch cycle run:

- Software installed on the system, pre-patching
- State of the patching dry run (only tests whether the patch can be deployed cleanly, without installing it)
- Dry-run log
- Software packages installed on the system post-patching (in case you want to consider rolling back the patch)
- State of the live update run (patched or not)
- Live patching log

Download, comment and like it here: https://github.com/jackbezalel/patchme

I am not going to work on rollback of patches – it seems too sensitive and problematic, while we are aiming at a simple, short process here.

Future planned updates:

I will work on trying to avoid using NFS.
Instead, you would use a single “Patcher” machine where the NFS repository exists (I used Red Hat 7.0), and from there use a “Dropper” script that tries to log in to the target machine we want to patch via ssh, using root passwords you provide the “Dropper” with. The “Patcher” will copy over a zip file with the relevant script, run it and then pull the results back into the Patcher repository.

The next step (maybe for a later stage) could be to run a scan from the Patcher machine, producing a list of Linux/Unix machines (nmap, etc.), and feed this to the Dropper and PatchMe to log on to those machines and patch them.
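
To make the planned flow concrete, here is a rough sketch of what the future scan-plus-Dropper loop might look like (the subnet, host list, file names and paths are purely illustrative):

# Find machines on the lab network listening on ssh (illustrative subnet)
nmap -p 22 --open -oG - 192.168.1.0/24 | awk '/22\/open/{print $2}' > hosts.txt

# For each target: copy the PatchMe bundle, run it, and pull the results back
while read host; do
    scp patchme.zip "root@${host}:/tmp/"
    ssh "root@${host}" 'cd /tmp && unzip -o patchme.zip && ./patchme/bin/patchme.sh shellshock'
    mkdir -p "./results/${host}"
    scp -r "root@${host}:/tmp/patchme/results" "./results/${host}/"
done < hosts.txt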

Feel free to comment and/or advise…