Red Hat's remediation solution through Ansible Tower Playbooks, dealing mostly with security risk assessment and mitigation (patches and more): https://www.redhat.com/en/about/press-releases/red-hat-delivers-analytics-driven-automation-latest-version-insights-ansible-integration
If you feel overwhelmed by the breakdown of technologies Docker is built on, here is a cheat sheet to ease the pain 🙂
- Use services that let you integrate feature flags throughout your application, so you can dynamically test, activate or suspend features for (some of) your users. Here is one such service: https://github.com/launchdarkly/featureflags/blob/master/README.md
- Track your external libraries through services that can alert you to issues or vulnerabilities in those libraries – here is one such service, called "Snyk": https://serverless.com/blog/4-ways-to-secure-prevent-vulnerabilities-in-serverless-applications/
- Store local copies of all your external libraries in your internal repositories, so that you are not affected by mistakes or vulnerabilities in the public code repositories (such as the public npm registry for JavaScript – see here how badly the left-pad incident affected many applications: https://www.theregister.co.uk/AMP/2016/03/23/npm_left_pad_chaos/ )
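The feature-flag idea in the first tip above can be sketched as a tiny in-house flag store. This is a minimal illustration, not LaunchDarkly's API – the flag names, percentages and hashing-based bucketing scheme are all made up; real services add targeting rules, a UI and live updates:

```python
import hashlib

# Minimal in-house feature flag store: each flag maps to a rollout
# percentage (0-100). These flag names and percentages are hypothetical.
FLAGS = {
    "new-checkout": 25,   # enabled for ~25% of users
    "dark-mode": 100,     # enabled for everyone
    "beta-search": 0,     # suspended
}

def is_enabled(flag: str, user_id: str) -> bool:
    """Return True if `flag` is active for this user.

    The user is bucketed deterministically by hashing flag+user id,
    so the same user always gets the same answer for a given flag.
    """
    rollout = FLAGS.get(flag, 0)
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable bucket in 0..99
    return bucket < rollout
```

Because the bucketing is deterministic, you can ramp a flag from 0 to 100 gradually and the same users stay in (or out of) the rollout the whole time.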
- Using a bigger memory tier for your Lambda function also gets it a better CPU allocation, which can bring your transaction processing speed from seconds to sub-second
- Monitor your application's performance metrics so you can determine when they changed and why
- You must monitor for errors in your code. Don't assume it's working well
- Using Lambda inside a VPC requires the same attention to security groups as an EC2 instance
- Make sure your Lambda function has the least privileges required in its IAM policy
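As an illustration of the least-privilege point above, a policy for a function that only writes its CloudWatch logs and reads objects from a single bucket might look like this (the bucket name is hypothetical):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-input-bucket/*"
    }
  ]
}
```

Start from something this narrow and add actions only when the function actually fails for lack of them, rather than starting from `*` and trimming down.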
- AWS Toolkit for Eclipse: Support for Creating Maven Projects for AWS, Lambda, and Serverless Applications http://bit.ly/2muxucL
Ansible Tower 3.1 brings the new "Workflows" feature, allowing you to chain Playbooks, set conditions on executing them and pass data from one Playbook to another.
Additionally, Tower can now scale beyond a single instance, allowing job processing through any one of the Tower nodes in the cluster.
In Tower 3.1 you can also easily direct all logs to a central log service such as ELK, Splunk, Loggly and others.
More information here: https://www.ansible.com/blog/introducing-asible-tower-3-1
There always seems to be more to PyCharm than you thought: support for remote debugging via Vagrant and Docker is one of those things.
9 reasons you should be using PyCharm
What are your favorite add-ons for PyCharm? Ever used the paid version? Why?
Amazon Athena lets you run ANSI SQL directly against your S3 buckets, supporting a multitude of file and data formats
- No ETL needed
- No Servers or instances
- No warmup required
- No data load before querying
- No need for DRP – it's multi-AZ
It uses Presto (an in-memory distributed query engine) and Hive (DDL for creating tables that reference your S3 data)
You pay for the amount of data scanned, so you can optimize both performance and cost if you:
- Compress your data
- Store it in a columnar format
- Partition it
- Convert it to Parquet / ORC format
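Putting the points above together, a partitioned, Parquet-backed table definition might look like this (the table, column and bucket names are made up for illustration):

```sql
CREATE EXTERNAL TABLE access_logs (
  request_time timestamp,
  status_code  int,
  bytes_sent   bigint
)
PARTITIONED BY (year string, month string)
STORED AS PARQUET
LOCATION 's3://my-logs-bucket/parquet/';
```

Because the data is columnar and partitioned, a query that filters on `year` and `month` and selects only `status_code` scans a small fraction of the bucket – and you pay only for that fraction.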
Querying in Athena:
- You can query Athena via the AWS Console (dozens of queries can run in parallel) or through any JDBC-enabled tool such as SQL Workbench
- You can stream Athena query results into S3 or AWS QuickSight (SPICE)
- Creating a table for querying in Athena is merely writing a schema that you later refer to
- The table schemas you create are fully managed and highly available
- Queries act as the route to the data, so every time you execute a query it re-evaluates everything in the relevant buckets
- To create a partition you specify a key value, plus a bucket and prefix that point to the data correlating with that partition
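For example, registering one partition and then querying against it could look like this (table, partition values and S3 paths are hypothetical):

```sql
ALTER TABLE access_logs ADD PARTITION (year = '2017', month = '02')
LOCATION 's3://my-logs-bucket/parquet/2017/02/';

SELECT status_code, count(*) AS hits
FROM access_logs
WHERE year = '2017' AND month = '02'
GROUP BY status_code;
```

The `WHERE` clause on the partition keys is what restricts the scan to that one prefix instead of the whole bucket.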
Just note that Athena serves specific use cases (such as non-urgent ad-hoc queries), while other Big Data tools fulfill other needs – AWS Redshift is aimed at the quickest query times over large amounts of structured data, while AWS Kinesis Analytics is aimed at queries over rapidly streaming data.
Want to learn more on Big Data and AWS? Visit http://allcloud.io
All the goodies in one crunchy basket: Kubernetes, PostgreSQL and StatefulSets
You can set a Highly Available Kubernetes cluster by adding worker node pools and master replicas.
That's true as of Kubernetes version 1.5.2, and it is supported using the kube-up/kube-down scripts for GCE (as alpha): http://blog.kubernetes.io/2017/02/highly-available-kubernetes-clusters.html?m=1
For AWS, you have support for HA Kubernetes clusters using kops scripts:
GCP Bigtable – main facts:
- It is the basis of many Google products
- A sparse, wide-column NoSQL storage system
- Offers no secondary indexes – data is indexed only by the row key, which you can use for range scans
- Its design is the basis of HBase in the Hadoop big data ecosystem
- You pay for storage separately
- You pay for a minimum of 3 nodes and can expand as you need
- Nodes are needed just for read / write throughput – not for storage
- Supports massive amounts of reads / writes, but no locking or multi-row transaction support
- It is not completely highly available, since data can briefly be unavailable while it is moved around between nodes
- Great for big queries, less so for short, quick, rapid ones