Tag Archives: Amazon AWS

Sweet findings in AWS Inspector

Taking a look at the AWS Inspector intro by Alex Lucas, I could see several sweet features that DevOps and security-minded pros would love (and of course it includes even more goodies).

  1. Covers all the CVE stuff – find, report, act (patch, etc.) – plus many more best practices the AWS team has gathered along the way.
  2. Records activity inside your instances and tracks suspicious behavior, or activity that may weaken your defenses.
  3. Automation via the AWS CLI and API, including workflow control for vulnerabilities and their mitigation actions (for example, you can manually mark a vulnerability as cleared, or tag one as needing review by an auditor) – see the sketch right after this list.
  4. Able to catch system call activity in real time, analyze it and report – such as a root-owned library file being set with wide-open permissions…
  5. Requires an agent, currently available for Amazon Linux and Ubuntu AMIs (more platforms, including Windows, to follow).
  6. Simple and cheap to try – no need to spend time and money on massive software suites. I know that’s AWS’s core concept, but in the vulnerability scanning world it’s even more enticing.
  7. Did you know that Amazon Linux instances automatically patch themselves on reboot (at least for critical security patches, I guess)? Alex mentions this as one of the “obstacles” he hit while trying to set up a vulnerable instance that did not have critical patches installed…
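On point 3 above: Inspector was still in preview at the time of the talk, so treat this as a rough, untested sketch of what pulling findings into your own workflow might look like through boto3’s inspector client (the assessment run ARN is a placeholder):

```python
import boto3

# Rough sketch only: list the findings of an assessment run and print the bits
# you would feed into your own review/ticketing workflow.
inspector = boto3.client("inspector")

run_arn = "arn:aws:inspector:us-east-1:123456789012:target/0-example/template/0-example/run/0-example"  # placeholder

finding_arns = inspector.list_findings(
    assessmentRunArns=[run_arn], maxResults=50
)["findingArns"]

if finding_arns:
    findings = inspector.describe_findings(findingArns=finding_arns)["findings"]
    for finding in findings:
        # Severity and title are enough to decide whether to patch, tag for
        # auditor review, or mark as cleared in your own tracking system.
        print(finding["severity"], finding["title"])
```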

Open topics:

  1. What is the Agent’s overhead?
  2. CloudWatch integration support is yet to come
  3. People in the YouTube session seem to be either sleepy or something, because they did not jump off their seats in enthusiasm as I would 🙂

Slide deck is here:

Amazon AWS lifts the limit on SQS message delivery size from 256KB to 2GB!

Another cool way to leverage the AWS S3 service, this time for Amazon SQS.

“…Amazon Simple Queue Service (SQS) now has an Extended Client Library that enables you to send and receive messages with payloads up to 2GB. Previously, message payloads were limited to 256KB. Using the Extended Client Library, message payloads larger than 256KB are stored in an Amazon Simple Storage Service (S3) bucket, using SQS to send and receive a reference to the payload location…”
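The Extended Client Library itself is a Java library, but the pattern it implements is simple: put the big payload in S3 and pass only a pointer through SQS. Here is a minimal, hypothetical boto3 sketch of that same idea (the bucket and queue names are placeholders, and this is not the actual library API):

```python
import json
import uuid
import boto3

# Placeholder names for illustration only.
BUCKET = "my-sqs-payload-bucket"
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"

s3 = boto3.client("s3")
sqs = boto3.client("sqs")

def send_large_message(payload):
    """Store the payload in S3 and push only a small reference through SQS."""
    key = "payloads/%s" % uuid.uuid4()
    s3.put_object(Bucket=BUCKET, Key=key, Body=payload)
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps({"s3_bucket": BUCKET, "s3_key": key}),
    )

def receive_large_message():
    """Receive the reference from SQS and fetch the real payload from S3."""
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1)
    for msg in resp.get("Messages", []):
        ref = json.loads(msg["Body"])
        payload = s3.get_object(Bucket=ref["s3_bucket"], Key=ref["s3_key"])["Body"].read()
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
        return payload
    return None
```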

Disaster Recovery for just $200? Watch this “On-Premise DR, assisted by Amazon AWS” session

I think this one is a “Watch ASAP” for enterprise IT professionals who are looking for ways to cut down on the time and effort spent on their on-premise DR (Disaster Recovery) project, and are open to using Amazon AWS for that purpose.

Watch the gradual build-up of your DR solution, starting from a simple backup to S3 or Glacier and growing into parallel, multi-region, automated solutions.
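That “simple backup” starting point really is just a few lines. A hedged boto3 sketch (bucket name and paths are placeholders) that ships a backup archive to S3 and lets a lifecycle rule age it into Glacier:

```python
import boto3

# Placeholder names for illustration.
BUCKET = "my-dr-backups"

s3 = boto3.client("s3")

# Ship the nightly backup archive to S3.
s3.upload_file("/backups/nightly.tar.gz", BUCKET, "nightly/2015-10-12.tar.gz")

# Age backups into Glacier after 30 days instead of uploading there directly.
s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-old-backups",
            "Filter": {"Prefix": "nightly/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
        }]
    },
)
```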

This is NOT a “fully figured out, cut out for you” solution, but it takes you gently into the realm of cloud-assisted DR solutions, and should be a nice brainstorming source for looking into your specific case.

Near the end it kind of touches on my “Nano Self Rebuilding Data Center” idea (basically turning any AWS-based project into a script that can rebuild it – like AWS Config on steroids…)

Crazy Idea: Duolc, StretchOS and what a Gazillion Apps Really want [!OpenStack]

Yes, I am a DevOps, Big Data, Security kind of guy and I use Amazon AWS, Microsoft Azure and OpenStack, as well as other smaller players. But I like to take a fresh, contrarian look at the things the cloud community seems to have reached a kind of pre-determined agreement on.

I am sitting at the OpenStack conference, learning about the cool new ways it can give you an edge. I believe that open source, and OpenStack in particular, is at the heart of getting that edge. You need mature, fully supported, vendor-based platforms. But at some points you also need to move fast, faster, fastest. At that edge point, open source and OpenStack are the tools you want to use. Those “free” toys do have a cost, spent on learning curves, education, cultural change and efforts to work around cases of immaturity. When they mature, they join your base tool set, just as other new rough-edged opportunities arise.

However, looking at cloud platforms, it is clear that what they ask for is applications that can be spread across many computing nodes. Most of the applications enterprises currently use are not built for the cloud.

While everyone in the cloud community expects enterprises to rewrite or convert their applications for the cloud, enterprises naturally just want the job done.

What the “Legacy” applications want is “Duolc” (“Cloud” spelled in reverse) deployed on “StretchOS”.

So “StretchOS”, a term I just made up, should be able to group a bunch of resources and make them behave as a single unified operating system running on a single computer. The CPU, memory, disk and network resources would be highly available, and processes could be served by any of the underlying computing resources.

I am not aware of anyone developing something like “StretchOS”, but the vast number of applications that could immediately and effortlessly benefit from such a solution should attract a close look from entrepreneurs. This could be a gap-bridging solution until most of the apps become cloud-enabled.

Now, having dumped this crazy concept on your desk, I can go back to my cloud deployments…

Deep AWS CLI stuff..

If you are an AWS DevOps girl or guy, you will want to check this video out soon.

Highlights I liked:

  1. Using JMESPath to exercise AWS CLI Queries
  2. The new AWS CLI “wait” option (wait for successful completion of an operation)
  3. AWS CLI generate-skeleton, to create a JSON input file you can customize later on and feed back to a command
  4. Using the new “Assume Role” authentication option

And more…
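The generate-skeleton option is CLI-only, but the other highlights translate directly to boto3, which is handy when you want to script the same ideas. A quick sketch (the instance ID and role ARN are placeholders):

```python
import boto3
import jmespath  # the same query language behind the CLI's --query flag

ec2 = boto3.client("ec2")

# 1. JMESPath query: pull instance IDs and states out of describe-instances output.
reservations = ec2.describe_instances()
print(jmespath.search(
    "Reservations[].Instances[].[InstanceId, State.Name]", reservations
))

# 2. Waiter: block until an instance is running (CLI equivalent: `aws ec2 wait`).
instance_id = "i-0123456789abcdef0"  # placeholder
ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])

# 4. Assume role: fetch temporary credentials and use them for a scoped client.
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/ReadOnly",  # placeholder
    RoleSessionName="cli-demo",
)["Credentials"]
scoped_ec2 = boto3.client(
    "ec2",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```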

Time To Innovate (TTI): Amazon Cloud Access Control and Bromium vSentry

Yesterday Amazon announced it is starting to provide role-based resource access control in AWS. This follows previous announcements about other AWS services being integrated into Amazon’s IAM framework:

AWS Elastic Beanstalk Supports IAM Roles

AWS Identity and Access Management (IAM) Adds Support for Variables in Access Control Policies
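To make the policy variables announcement concrete: a single managed policy can now scope access per user. A small, hypothetical boto3 sketch (the bucket and policy names are made up):

```python
import json
import boto3

iam = boto3.client("iam")

# One policy that gives every IAM user access only to their own "home" prefix
# in a shared bucket, via the ${aws:username} policy variable.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": "arn:aws:s3:::my-shared-bucket/home/${aws:username}/*",
    }],
}

iam.create_policy(
    PolicyName="per-user-home-prefix",  # placeholder name
    PolicyDocument=json.dumps(policy_document),
)
```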

We can say one thing for sure about Amazon: it is running a constant, continuous improvement cycle, steadily rounding out its cloud service portfolio. This is surely attractive for IT professionals looking for a restlessly innovative solution provider, who understand that in many cases an active improvement process matters far more than the “static perfection” others try to reach.

At the other end of the spectrum, take a look at the vSentry end-user threat protection solution from Bromium; what’s interesting to me are its two core innovations:

Micro-virtualization creates a transparent shield around any untrusted application. The user can feel safe and avoid irrelevant alerts: the untrusted application can try to do harm, only to find itself isolated, with no actual modification of user data. All the effects of the malware are cleaned up, as its virtual sandbox vanishes as soon as the application exits. This is based on Intel VT technology.

This also enables the second innovation – task introspection. Since applications can do whatever they want, as malicious as they like, while only getting virtual rather than actual access to the system’s resources, an attack can be recorded and reviewed at will, saving forensics time and effort.

In this case too, what’s important to note is the rapid exploitation of an opportunity to innovate using existing tools (such as Intel VT). Sure, this solution is not perfect and will be circumvented at some point, but it offers a pain-killer type of remedy that IT professionals are likely to grab quickly.

I believe there should be a new term for us to use: Time To Innovate (TTI) – a measure of how much time it takes you to innovate once an opportunity presents itself.

Are you rapidly innovating as well, or endlessly trying to perfect your solution?