Tag Archives: DevOps

Deep AWS CLI stuff..

If you are an AWS DevOps girl or guy, you will want to check this video out soon.

Highlights I liked:

  1. Using JMESPath expressions (the `--query` option) to filter AWS CLI output
  2. The new AWS CLI `wait` command, which blocks until a resource reaches the state you expect
  3. AWS CLI `--generate-cli-skeleton`, which creates a JSON file you can customise later on and feed to another command
  4. Using the new “Assume Role” authentication option

And more…
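To make the highlights above concrete, here is a hedged sketch of what each feature looks like on the command line; the instance ID, profile name, and role ARN are placeholders of mine, not values from the video.

```shell
# 1. JMESPath query: list only the IDs of running instances
aws ec2 describe-instances \
  --query 'Reservations[].Instances[?State.Name==`running`].InstanceId' \
  --output text

# 2. Wait for an instance to reach the "running" state before continuing
aws ec2 wait instance-running --instance-ids i-0123456789abcdef0

# 3. Generate a JSON skeleton, customise it, then feed it back in
aws ec2 run-instances --generate-cli-skeleton > run-instances.json
# ...edit run-instances.json to taste...
aws ec2 run-instances --cli-input-json file://run-instances.json

# 4. Assume-role profile (this part goes in ~/.aws/config):
#   [profile ops-admin]
#   role_arn = arn:aws:iam::123456789012:role/OpsAdmin
#   source_profile = default
aws s3 ls --profile ops-admin
```

These commands need a configured AWS account, so treat them as a reading aid rather than something to paste blindly.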

DevOps must-read book – The Phoenix Project

Finally reading this important and cool DevOps book: The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win

More info on Goodreads: https://www.goodreads.com/book/show/17255186

So realistic that I can feel as if I am Bill, the main character of this story.

About 31% of the way into the book, it is a real IT-professional DevOps horror story. Then it starts to clear up a bit, but it has not yet turned positive…

A few quotes from the book I liked so far:

  1. The Job of IT Operations
  2. The Theory of Constraints and IT Operations
  3. Three Venues for maximizing the value of DevOps

More notes coming soon…

If you are in IT Operations and/or DevOps, or want to understand what these people do, read it!
It has a great story in it and lots of insights on the topic.

You can read more about the books I read (or write..) on the GoodReads carousel on my blog, and you can click here to follow me on GoodReads.

DevOps Reporting From The Trenches – From Devastating Failure to Market Domination, The pitfalls, the formula and beyond

There is a lot told about DevOps, as well as about cool automation tools and products, but there are not many real-world stories that are fun to just relax and read, and be done with in less than an hour.

Here is one that does have all that jazz – DevOps: Reporting from the Trenches: Winning by Creating a Continuous Delivery Software Factory

It is the story of a huge financial institution that went from devastating failure to market domination by adopting DevOps best practices.

I have to reveal a small secret here… Originally I built the story using characters from Star Trek, but then had to take them out to avoid any rights issues with the Star Trek creators. But if you look closely at the words the story’s heroes use, you might find that adventurous spirit.

You can read the story or simply download the whole DevOps CA Technology Exchange magazine in a single PDF file.

I will soon create an audio version of this story and add it to the podcast archive.

What’s in the story:

  • DevOps Revealed
  • DevOps as Power Multiplier for SaaS
  • DevOps Failure Horror Stories
  • The DevOps Success Formula in Five Simple Steps
  • Odyssey – Intergalactic tour into the future of software DevOps and Continuous Delivery

Here is how it all starts:

James lay back in his chair and looked at the shades dancing on the board. It was the third time he had tried to find what was wrong with the graphs, but he could not.

James looked at Stephanie.

Stephanie was shocked, but a small splinter of a smile started to make trails near her mouth.

James said: “I knew this was about to happen, yet I can’t believe it!”

Stephanie murmured: “Unbelievable.”

In less than ten months, “UAS Enterprise1,” perceived as the most innovative service provider in its business, witnessed an enormous market share collapse and the loss of its number one position.

Looking back at the market collapse, it took months until James, the CEO of UAS Enterprise1, could discover the root cause.


I had the pleasure of writing it together with Alon Eizenman, the founder of Nolio (one of the best DevOps tools on the market and part of the CA Technologies DevOps portfolio), and Miron Gross, one of the coolest automation pros I know, whom I had the privilege to mentor in the CTE (Council for Technical Excellence) program. Writing for a professional magazine is an entirely new experience, and I recommend going through it if you are an IT professional or a blogger.

“PatchMe” – Quick Shell Shock (+future Vulnerabilities) Auto-Patcher for Heterogeneous Unix / Linux Sites

Download, comment and like it here: https://github.com/jackbezalel/patchme

I think you will like this new Shell Shock auto-patcher I created (“PatchMe“). It should serve you well for future and past similar “noisy” vulnerabilities, especially if you have a lab or site with many types of “loosely” managed operating systems. This tends to happen in development labs, where you have a license for every machine but do not necessarily set up or maintain them through a central patch repository.

It is also good for cases where patching is not as straightforward as it is in Ubuntu, Debian, or similar free open-source operating systems.

I believe you will find PatchMe simple to deploy and use. You don’t have to go through a complex, massive patching, testing, and mass-deployment cycle just to get rid of one hot vulnerability.

It is a right-to-the-point vulnerability patcher for whatever is in the news. It allows you to automatically dry-test the patch, run the patch live if the dry run succeeds, and get a central repository dynamically created during the patch process, so you have a full audit trail of what is going on.
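The dry-test-then-go-live flow is the heart of the idea. PatchMe’s actual implementation is in the GitHub repo; here is a minimal, generic sketch of mine of the same pattern, with both runs logged so a central repository keeps the full audit trail.

```shell
# Minimal sketch of the dry-run-then-live pattern (not PatchMe's actual code):
# run a non-destructive check first, and only apply the patch live if it
# passes. Both runs are logged for later review.
apply_with_dry_run() {
    local check_cmd="$1"   # e.g. 'rpm -U --test bash-*.rpm'
    local apply_cmd="$2"   # e.g. 'rpm -U bash-*.rpm'
    local log_dir="$3"     # per-host, per-vulnerability log directory
    mkdir -p "$log_dir"
    if sh -c "$check_cmd" > "$log_dir/dry-run.log" 2>&1; then
        sh -c "$apply_cmd" > "$log_dir/live-run.log" 2>&1
    else
        echo "dry run failed; skipping live patch" >&2
        return 1
    fi
}
```

For an RPM-based host this could be called as `apply_with_dry_run 'rpm -U --test bash-*.rpm' 'rpm -U bash-*.rpm' /repo/shellshock/$(hostname)`.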

Here it is for use “as is” (don’t complain if it breaks anything). It may have bugs, but was tested and seems to work just fine on dozens of machines. It currently patches Red Hat 5, 6, 7 and should work well for release 4 as well. I am adding support for Solaris, including Solaris Zones scenarios and will be taking care of CentOS, HP-UX and AIX as well other Linux distributions.

Download, comment and like it here: https://github.com/jackbezalel/patchme

PatchMe uses the “patchme.sh” script, which works against an NFS-based repository.

All one needs to do is mount the PatchMe NFS tree (the directory structure is in the PatchMe readme file) and run the patchme.sh script from the bin directory, followed by a vulnerability name. You could basically schedule it to run at once for all your machines or just a few of them.

You have to get the patches for the vulnerability on your own (provided your license allows it), and PatchMe will take care of installing the right patch on each machine and operating system.
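Picking the right patch per machine boils down to detecting the OS release. PatchMe’s real logic is in the repo; here is a simplified sketch of mine of how such per-OS dispatch can look. The release file is a parameter so the function is easy to test; on a real Red Hat host it would default to /etc/redhat-release, and the directory names are illustrative.

```shell
# Simplified sketch of per-OS patch selection (not PatchMe's actual logic).
# Maps a Red Hat release file to the patch directory that applies to it.
select_patch_dir() {
    local release_file="${1:-/etc/redhat-release}"
    if grep -q "release 7" "$release_file" 2>/dev/null; then
        echo "rhel7"
    elif grep -q "release 6" "$release_file" 2>/dev/null; then
        echo "rhel6"
    elif grep -q "release 5" "$release_file" 2>/dev/null; then
        echo "rhel5"
    else
        echo "unsupported"
        return 1
    fi
}
```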

Why not use your Unix or Linux distribution’s software update mechanism? Well, because each vendor has a different mechanism and different requirements, while PatchMe is meant to reduce time and effort and to focus on just one vulnerability at a time. It allows you, the system administrator, to save time and effort, and to keep management off your back.

Once activated, PatchMe will create a specific directory in the central repository for review and analysis, with these files for each machine, vulnerability, and patch-cycle run:

– Software installed on the system, pre-patching
– State of the patching dry run (only tests whether the patch can be deployed cleanly; does not install it)
– Dry-run log
– Software packages installed on the system post-patching (in case you want to consider rolling back the patch)
– State of the live run (patched or not)
– Live patching log
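The audit layout above can be sketched as one directory per vulnerability and host. The file names below are my own shorthand for the six artifacts, not necessarily the names PatchMe uses — see the repo’s readme for the real structure.

```shell
# Illustrative sketch of the per-host audit layout (file names are mine).
# Creates <repo>/<vulnerability>/<host>/ with one file per collected artifact.
init_audit_dir() {
    local repo_root="$1" vuln="$2" host="$3"
    local dir="$repo_root/$vuln/$host"
    mkdir -p "$dir"
    touch "$dir/packages-pre.txt"   \
          "$dir/dry-run.state"      \
          "$dir/dry-run.log"        \
          "$dir/packages-post.txt"  \
          "$dir/live-run.state"     \
          "$dir/live-run.log"
    echo "$dir"
}
```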


I am not going to work on rollback of patches – it seems too sensitive and problematic, while we are aiming here at a simple, short process.

Future planned updates:

I will work on trying to avoid using NFS.
Instead, you would use a single “Patcher” machine where the NFS repository exists (I used Red Hat 7.0), and from there use a “Dropper” script that will try to log in to each target machine we want to patch, via ssh, using root passwords you provide the Dropper with. The Patcher will copy a zip file with the relevant script, run it, and then get the results back to the Patcher repository.
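Since the Dropper does not exist yet, here is only a hedged sketch of mine of how it could look: loop over hosts, push the bundle over ssh, run it, and pull the results back. The sketch uses key-based ssh rather than the passwords mentioned above, and a DRY_RUN flag just prints the commands instead of executing them, so nothing touches real hosts.

```shell
# Hedged sketch of the planned "Dropper" (not yet in the PatchMe repo):
# push the patch bundle to a target over ssh, run it, pull results back.
# With DRY_RUN=1 the scp/ssh commands are only printed, not executed.
drop_and_patch() {
    local host="$1" bundle="$2" vuln="$3"
    local run="sh -c"
    [ "${DRY_RUN:-0}" = "1" ] && run="echo"
    $run "scp $bundle root@$host:/tmp/patchme.zip"
    $run "ssh root@$host 'cd /tmp && unzip -o patchme.zip && ./patchme.sh $vuln'"
    $run "scp -r root@$host:/tmp/patchme-results ./results/$host"
}
```

Feeding this function a host list (for example from an nmap scan, as suggested below) would give the scheduled, NFS-free mode described above.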

The next step (maybe for Thu) could be to run a scan from the Patcher machine, producing a list of Linux/Unix machines (nmap, etc.), and feed this to the Dropper and PatchMe to log on to those machines and patch them.

Feel free to comment and/or advise…