Tag Archives: Automation

Ansible Tower 3.1 brings Workflows, Log Integration and Clustering

Ansible Tower 3.1 brings the new “Workflows” feature, allowing you to chain Playbooks together, set conditions on their execution, and pass data from one Playbook to another.

Additionally, Tower can now scale beyond a single instance, allowing jobs to be processed by any one of the Tower nodes in the cluster.

In Tower 3.1 you can easily direct all logs to a central log service such as ELK, Splunk, Loggly and others.

More information here: https://www.ansible.com/blog/introducing-asible-tower-3-1

AWS Lambda – 6 cool features

What I liked:

  1. Versions and aliases (“prod” as an alias can point to the active function version)
  2. Scheduling of actions
  3. Support for Python and other languages
  4. Dynamic – no need to set up servers
  5. VPC support – can communicate with other services you run internally
  6. Integration with CloudWatch (inspect and analyze incoming log entries – see the sketch below)
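
To make item 6 concrete, here is a minimal sketch of a Python Lambda handler. The handler below is my own illustration, not code from the post: whatever it logs goes straight to CloudWatch Logs, where the entries can be inspected and analyzed.

    import json
    import logging

    logger = logging.getLogger()
    logger.setLevel(logging.INFO)

    def lambda_handler(event, context):
        # Whatever is logged here lands in CloudWatch Logs
        # automatically, with no servers to set up (items 4 and 6).
        logger.info("Received event: %s", json.dumps(event))
        return {"status": "ok"}

For item 1, a “prod” alias can be pointed at a published version with the AWS CLI, e.g. aws lambda create-alias --function-name my-func --name prod --function-version 3 (the function name and version number here are placeholders).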

AWS Config Rules!

I know what you did last summer… opening all those insecure ports in your security groups to quickly troubleshoot that nasty bug, then, in the rush to close the issue, forgetting to close the gate behind you! AWS Config is your friend here 🙂

AWS Config Rules is built on top of AWS Config, and it allows you to get notified, or have action taken, when a configuration change breaches AWS best-practice rules or any rules you define yourself.

Here are the main goodies, as well as important notes, that I took from the video session on this:

  1. It is a completely automated change control and auditing solution! No need for scripts, a data store or any tracking done by you.
  2. You can troubleshoot with a time-travel-like view of your resources, their state, and their relationships to any other dependent resources existing at any point in time, including any change or deletion.
  3. Your data is highly reusable: you get a JSON-formatted record of any change to your resources, and where relevant it is compatible with the corresponding AWS describe commands, so you can match the two for validation scenarios.
  4. Powerful correlation: it uses CloudTrail to show you who changed a resource and when.
  5. Easily extendable: you create your own rules through AWS Lambda using any language it supports; rule verifications are triggered on a schedule or by an AWS API event (tag created, instance deleted, etc.) – see the sketch after this list.
  6. Targeted: you easily get a report of what changed and when, and of resources that exist while they shouldn’t or are missing.
  7. Turn events into data by routing AWS Config SNS notifications into your own event repository in real time.
  8. Coverage: everything EC2, VPC and CloudTrail related, and now the holy grail: IAM. You can dig into who added that policy, or detect when an admin user was added where it shouldn’t have been.
  9. Availability: AWS Config is available everywhere, while AWS Config Rules is only available in North Virginia (us-east-1).
  10. Spread: regional. You need to run and review it in each region separately. Not quite as powerful as it would be if it were account-centered.
  11. Pricing: Right here!
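
To illustrate item 5, here is a minimal sketch of a custom rule implemented as a Python Lambda function. The compliance check is a deliberately trivial placeholder, and the names are mine rather than AWS sample code, but the evaluation-reporting flow is the standard one:

    import json
    import boto3

    config = boto3.client("config")

    def lambda_handler(event, context):
        # AWS Config passes the changed resource in the invoking event.
        invoking_event = json.loads(event["invokingEvent"])
        item = invoking_event["configurationItem"]

        # Placeholder check: replace with your own rule logic,
        # e.g. inspect a security group's ingress rules.
        compliant = item["resourceType"] != "AWS::EC2::SecurityGroup"
        compliance_type = "COMPLIANT" if compliant else "NON_COMPLIANT"

        # Report the verdict back to AWS Config.
        config.put_evaluations(
            Evaluations=[{
                "ComplianceResourceType": item["resourceType"],
                "ComplianceResourceId": item["resourceId"],
                "ComplianceType": compliance_type,
                "OrderingTimestamp": item["configurationItemCaptureTime"],
            }],
            ResultToken=event["resultToken"],
        )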

The slide deck is here:

“PatchMe” – Quick Shell Shock (+future Vulnerabilities) Auto-Patcher for Heterogeneous Unix / Linux Sites

Download, comment and like it here: https://github.com/jackbezalel/patchme

I think you will like this new Shell Shock auto-patcher I created (“PatchMe”). It should serve you well for future and past similar “noisy” vulnerabilities, especially if you have a lab or site with many types of “loosely” managed operating systems. This tends to happen in development labs, where you have a license for every machine but do not necessarily set the machines up or maintain them through a central patch repository.

It is also good for cases where patching is not as straightforward as it is in Ubuntu, Debian or similar free, open-source operating systems.

I believe you will find PatchMe simple to deploy and use. You don’t have to go through a complex cycle of massive patching, testing and mass deployment just to get rid of one hot vulnerability.

It is a right-to-the-point patcher for whatever vulnerability is in the news. It automatically dry-tests the patch, runs the patch live if the dry run succeeds, and dynamically creates a central repository during the patch process, so you have a full audit of what is going on.

Here it is, for use “as is” (don’t complain if it breaks anything). It may have bugs, but it was tested and seems to work just fine on dozens of machines. It currently patches Red Hat 5, 6 and 7, and should work well for release 4 too. I am adding support for Solaris, including Solaris Zones scenarios, and will be taking care of CentOS, HP-UX and AIX as well as other Linux distributions.

Download, comment and like it here: https://github.com/jackbezalel/patchme

PatchMe uses the “patchme.sh” script, which works off an NFS-based repository.

All one needs to do is mount the PatchMe NFS tree (the directory structure is in the PatchMe readme file) and run the patchme.sh script from the bin directory, followed by a vulnerability name. You can schedule it to run at once for all your machines, or for just a few of them.

You have to obtain the patches for the vulnerability on your own, provided your license allows it, and PatchMe will take care of installing the right patch on each machine and operating system.
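
To give a feel for the per-machine flow (this is a hypothetical Python sketch of the idea, not the actual patchme.sh code), it boils down to: detect the OS release, dry-test the matching patch package, and only install it if the dry run passes:

    import subprocess

    def redhat_major_release():
        # Hypothetical OS detection: Red Hat keeps its release string
        # in /etc/redhat-release (e.g. "... release 6.5 (Santiago)").
        with open("/etc/redhat-release") as f:
            text = f.read()
        return text.split("release")[1].split()[0].split(".")[0]

    def apply_patch(rpm_path):
        # Dry run first: rpm's --test flag checks whether the package
        # would install cleanly, without actually installing it.
        dry = subprocess.run(["rpm", "-U", "--test", rpm_path])
        if dry.returncode != 0:
            return False  # record the failed dry run and stop
        # Dry run passed, so apply the patch for real.
        live = subprocess.run(["rpm", "-U", rpm_path])
        return live.returncode == 0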

Why not use your Unix or Linux distribution’s software update mechanism? Because each vendor has a different mechanism and different requirements, while PatchMe is meant to reduce time and effort by focusing on just one vulnerability at a time. It lets you, the system administrator, save time and effort, and keep management off your back.

Once activated, PatchMe creates a specific directory in the central repository for review and analysis, with the following files for each machine, vulnerability and patch cycle run:

– Software installed on the system, pre-patching
– State of the patching dry run (tests whether the patch can be deployed cleanly, without installing it)
– Dry-run log
– Software packages installed on the system, post-patching (in case you want to consider rolling back the patch)
– State of the live update run (patched or not)
– Live patching log

Download, comment and like it here: https://github.com/jackbezalel/patchme

I am not going to work on rollback of patches – it seems too sensitive and problematic, while we are aiming at a simple, short process here.

Future planned updates:

I will work on trying to avoid using NFS.
Instead, you would use a single “Patcher” machine where the NFS repository exists (I used Red Hat 7.0), and from there use a “Dropper” script that tries to log in via ssh to the target machine you want to patch, using root passwords you provide the “Dropper” with. The “Patcher” copies over a zip file with the relevant script, runs it, and then pulls the results back into the Patcher repository.
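
As a rough Python sketch of what that “Dropper” could look like (hypothetical, assuming the paramiko SSH library; the real implementation may differ):

    import paramiko

    def drop_and_patch(host, root_password, vulnerability):
        # Log in to the target as root using the password provided
        # to the Dropper.
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect(host, username="root", password=root_password)

        # Copy the PatchMe bundle over SFTP (paths are illustrative).
        sftp = client.open_sftp()
        sftp.put("patchme.zip", "/tmp/patchme.zip")
        sftp.close()

        # Unpack and run the patch for the given vulnerability;
        # results would be pulled back into the Patcher repository
        # over the same channel.
        cmd = ("cd /tmp && unzip -o patchme.zip && "
               "sh patchme/bin/patchme.sh " + vulnerability)
        stdin, stdout, stderr = client.exec_command(cmd)
        status = stdout.channel.recv_exit_status()
        client.close()
        return status == 0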

The next step (maybe for Thu) could be to run a scan from the Patcher machine, producing a list of Linux/Unix machines (nmap, etc.), and feed it to the Dropper and PatchMe to log on to those machines and patch them.

Feel free to comment and/or advise…

May the Power(CLI) be with you! (PowerCLI Book Review)

Remember the famous line Yoda tells Skywalker in the first “Star Wars” movie? He says, “May the Force be with you!”

For years I have swapped it for “May the cursor be with you”.

Recently, after a crash course on PowerCLI and then on PowerShell, I started looking for good books on PowerCLI.

The first one I found was from 2011 – good, but not good enough.

Then I saw on Twitter that Robert van den Nieuwendijk was looking for reviewers for his new “Learning PowerCLI” book.


This was right up my alley, and I got free access to a review copy of the book within a few hours.

I will be using it through my journey to become more intimately acquainted with our VMware operations. But I can already say the quick “I need to…” test passed with flying colors. I mean, you could read the book chapter by chapter, but you would also want to quickly find out how to do something specific. In this case I was looking for datastore latency information, and within 5 minutes I had found the background material as well as the specific commands.

Having a trusted guide (Robert), I also took his warm advice to finally look into the comprehensive, automated vCheck script, as well as the PowerGUI tool and its wonderful set of ready-to-use packs (more on this in a separate post). One word of caution – I would not use just any library pack offered there without first taking a look at the author and the scripts, and I always run them under a user with read-only permissions, preferably after a quick run in a test environment first, just to see that it actually works.

This resulted in a quick report I was able to construct within an hour, showing possible savings of ~$15,071.

For now I think I’d replace Yoda’s blessing with “May the Power(CLI) be with you” 🙂

More on my review of the book soon…

P.S.

What are your favorite PowerCLI resources?


P.P.S.

Until 26 March, this book is part of Packt’s buy-one-get-one-free sale, along with their entire catalog, here:
http://bit.ly/1j26nPN

Be Aware of the Puppets

Be aware of the Puppets!

If you are a software vendor who relies on continuous, year-long software maintenance contracts with your customers, you had better read this…

OVH, which claims to be Europe’s number one ISP, has been limiting registration of new users due to a major shift in income.

It appears that OVH’s business plan was centered on the notion that customers could not easily migrate their servers to a new platform once OVH released one. Each upgrade would take customers months to prepare, test, migrate and troubleshoot, so customers needed a full year of maintenance and support.

However, in the wake of free, good-enough automation tools such as Puppet, Chef and others, customers were able to deploy their applications and configurations in a matter of days or weeks rather than months.

This led to the abandonment of full-year continuous support and maintenance. Customers bought one-month subscriptions, and only when they wanted to migrate to a new platform offered by the supplier.

What the market is saying is: “We are not willing to pay high prices all year long just because your software deployment is complex. We expect prices to be low, since the whole software market is being commoditized by SaaS offerings.”

Now ISVs are forced to look for the next value chain to evolve their offering into. Check out my “Cloud Computing and SAAS Secrets revealed – 7 Consumption Economics Rules” article for some insight on what should be done.