Ansible Tower 3.1 brings Workflows, Log Integration and Clustering

Ansible Tower 3.1 brings the new “Workflows” feature, allowing you to chain Playbooks together, set conditions on their execution, and pass data from one Playbook to another.

Additionally, Tower can now scale beyond a single instance, allowing jobs to be processed by any one of the Tower nodes in the cluster.

In Tower 3.1 you can also easily direct all logs to a central logging service such as ELK, Splunk, Loggly and others.

More information here: https://www.ansible.com/blog/introducing-ansible-tower-3-1

AWS Athena says No so beautifully

Amazon Athena allows you to run ANSI SQL directly against your S3 buckets, supporting a multitude of file and data formats.

Here are my insights, taken from a comprehensive YouTube session led by Abhishek Sinha:

  • No ETL needed
  • No Servers or instances
  • No warmup required
  • No data load before querying
  • No need for DRP – it’s multi-AZ

Under the hood it uses Presto (an in-memory, distributed SQL query engine) and Hive (DDL for creating tables that reference your S3 data).
You pay for the amount of data scanned, so you can optimize performance as well as cost (a sample DDL sketch follows the list below) if you:

  1. Compress your data
  2. Store it in a columnar format
  3. Partition it
  4. Convert it to Parquet / ORC format
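
As a rough illustration of points 1–4, here is a minimal sketch using boto3; the table name, columns, database, and S3 paths are made up for illustration. It submits Hive-style DDL to Athena that defines an external table over Parquet (columnar, compressed) data, partitioned by date.

```python
# Hypothetical sketch only: table name, columns, database, and S3 paths are
# made up. Submits Hive-style DDL to Athena via boto3 to define an external
# table over Parquet (columnar, compressed) data, partitioned by date.
import boto3

athena = boto3.client("athena", region_name="us-east-1")

create_table_ddl = """
CREATE EXTERNAL TABLE IF NOT EXISTS web_logs (
    request_time string,
    user_id      string,
    url          string,
    status_code  int
)
PARTITIONED BY (dt string)
STORED AS PARQUET
LOCATION 's3://my-example-bucket/web-logs/'
"""

response = athena.start_query_execution(
    QueryString=create_table_ddl,
    QueryExecutionContext={"Database": "logs_db"},
    ResultConfiguration={"OutputLocation": "s3://my-example-bucket/athena-results/"},
)
print(response["QueryExecutionId"])
```

Because the data stays in S3, the DDL only records a schema and a location – nothing is loaded or copied.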

Querying in Athena:

  1. You can query Athena via the AWS Console (dozens of queries can run in parallel) or through any JDBC-enabled tool such as SQL Workbench
  2. You can stream Athena query results into S3 or AWS QuickSight (SPICE)
  3. Creating a table to query in Athena is merely writing a schema that you later refer to
  4. The table schemas you create for queries are fully managed and highly available
  5. Queries act as the route to the data, so every time you execute a query Athena re-scans everything in the relevant buckets
  6. To create a partition you specify a key value, plus a bucket and prefix that point to the data correlated with that partition (see the sketch after this list)
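
To make points 5 and 6 concrete, here is a hedged sketch (reusing the hypothetical web_logs table from above) that registers a partition by pointing a key value at an S3 prefix, then runs a query restricted to that partition so only that prefix is scanned.

```python
# Hypothetical sketch, reusing the made-up web_logs table above: register a
# partition (point a key value at an S3 prefix) and run a query against it.
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")
results_location = {"OutputLocation": "s3://my-example-bucket/athena-results/"}

def run_query(sql):
    """Start a query and poll until Athena finishes executing it."""
    execution_id = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": "logs_db"},
        ResultConfiguration=results_location,
    )["QueryExecutionId"]
    while True:
        state = athena.get_query_execution(QueryExecutionId=execution_id)[
            "QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            return execution_id, state
        time.sleep(1)

# Point the partition dt='2017-02-20' at the prefix holding that day's files.
run_query(
    "ALTER TABLE web_logs ADD IF NOT EXISTS PARTITION (dt = '2017-02-20') "
    "LOCATION 's3://my-example-bucket/web-logs/dt=2017-02-20/'"
)

# Only the named partition's prefix is scanned, which keeps cost down.
execution_id, state = run_query(
    "SELECT status_code, count(*) AS hits "
    "FROM web_logs WHERE dt = '2017-02-20' GROUP BY status_code"
)
if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=execution_id)["ResultSet"]["Rows"]
    print(rows)
```

If your data is already laid out in Hive-style prefixes (key=value), MSCK REPAIR TABLE can discover the partitions for you instead of adding them one by one.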

Just note that Athena serves specific use cases (such as non-urgent ad-hoc queries), while other Big Data tools fulfill other needs – AWS Redshift is aimed at the fastest query times over large amounts of structured data, while AWS Kinesis Analytics is aimed at querying rapidly streaming data.

Want to learn more on Big Data and AWS? Visit http://allcloud.io

Kubernetes – making it Highly Available

You can set up a Highly Available Kubernetes cluster by adding worker node pools and master replicas.

That’s true as of Kubernetes version 1.5.2. It is supported using the kube-up/kube-down scripts for GCE (as alpha): http://blog.kubernetes.io/2017/02/highly-available-kubernetes-clusters.html?m=1

For AWS, HA Kubernetes clusters are supported using the kops tool:

http://kubecloud.io/setup-ha-k8s-kops/
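
Once the cluster is up (whichever installer you used), a quick sanity check is to confirm that more than one master actually registered. Below is a minimal sketch using the official Kubernetes Python client; it assumes a working kubeconfig, and the role labels it checks follow common kops/kubeadm conventions, which may differ in your setup.

```python
# Hypothetical sketch: list the cluster's nodes and separate masters from
# workers using the official Kubernetes Python client. Assumes a working
# kubeconfig; the role labels checked below follow common kops / kubeadm
# conventions and may differ in other setups.
from kubernetes import client, config

config.load_kube_config()  # reads ~/.kube/config
v1 = client.CoreV1Api()

masters, workers = [], []
for node in v1.list_node().items:
    labels = node.metadata.labels or {}
    is_master = (labels.get("kubernetes.io/role") == "master"
                 or "node-role.kubernetes.io/master" in labels)
    (masters if is_master else workers).append(node.metadata.name)

print("masters:", masters)  # an HA cluster should report more than one
print("workers:", workers)
```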

GCP Bigtable – main facts

  • Is the basis of many Google products
  • Is a sparse, wide-column NoSQL storage system
  • Does not offer secondary indexes – only a single range index (the row key)
  • Is the basis for HBase, the Hadoop ecosystem’s big data store
  • You pay for storage separately
  • You pay for a minimum of 3 nodes and can expand as you need
  • Nodes are needed only for read/write throughput – not for storage
  • Supports massive amounts of reads/writes, but no locking or transaction support
  • Is not completely highly available, since data is occasionally unavailable while it is moved (rebalanced) between nodes
  • Great for big queries, less so for short, quick, rapid ones
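
To make the “single range index” point concrete, here is a minimal sketch using the google-cloud-bigtable Python client; the project, instance, table, and row-key scheme are made up, and the table with an “events” column family is assumed to already exist.

```python
# Hypothetical sketch with the google-cloud-bigtable client: every read is
# keyed or range-scanned on the row key, because that is Bigtable's only index.
# Project, instance, table, and row-key scheme are made up; the table and its
# "events" column family are assumed to already exist.
from google.cloud import bigtable

client = bigtable.Client(project="my-example-project")
instance = client.instance("my-instance")
table = instance.table("user_events")

# Write: the row key encodes user id + timestamp, so per-user scans stay cheap.
row = table.direct_row(b"user42#2017-02-20T10:00:00Z")
row.set_cell("events", b"action", b"login")
row.commit()

# Read: a range scan over the row key ("#" sorts before "$", so this covers
# every key starting with the "user42#" prefix).
for item in table.read_rows(start_key=b"user42#", end_key=b"user42$"):
    print(item.row_key, item.cells["events"])
```

Designing the row key around your access pattern is what stands in for secondary indexes here.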

https://cloud.google.com/bigtable/