
The Cloud Cost Optimization Void: Connecting FinOps and DevOps Teams

Keeping operational costs optimized in the cloud is one of the main pillars of a cloud-based company’s success, alongside resilience, agility, and security. There are many resources discussing cost optimization, but they often miss the critical component: the human factor. Your people and processes will determine your financial success much more than the tools you use, and that is what we will focus on today.

Let’s look at a basic scenario that is probably happening at your company as well: the FinOps (Financial Operations) team, which takes care of cost control and optimization for your cloud operations, picks up on a component or a process that requires a change to reduce cost and improve operations. For example, it could be changing your instance types from the existing generation to the latest one. Moving to the new instance type can cut costs by up to 20%.

The FinOps team reaches out to the relevant development (or DevOps) teams with recommendations to change the instance types they are using. This is where the optimization process can break. Sometimes the development team simply does not respond, or they state that they have challenges (technical, time-related or other) in addressing the required changes. There are cases where development teams accept and execute the changes in a timely manner, but there is a lack of visibility into the change and its impact on cost. In practice, the cost cut disappears into the “void”. Maybe it will come back someday, probably too late, much like the passengers of flight 828 in the TV drama “Manifest”, and that did not end well (so far, at season 3 :-)).

During the many journeys I have led with companies, I have found the following process valuable in addressing the cost optimization challenges described here.

First, make sure you have continuous monitoring of your application workloads across their environments (development, QA, staging, pre-production, production). In each environment, your application workload (the resources it uses) may change. Tag your application deployments with their change number (revision, version, etc.). That way, you can observe the performance and resource use of your workloads across their change numbers and compare them.
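On AWS, for example, that comparison can be built on cost-allocation tags. Below is a minimal sketch, not a definitive implementation, that assumes two hypothetical tags (an “Application” tag identifying the workload and a “ChangeNumber” tag holding the deployed revision) are already activated as cost-allocation tags, and uses the Cost Explorer API to group daily cost by change number:

```python
import boto3  # assumes AWS credentials, region and Cost Explorer access are configured

ce = boto3.client("ce")

# Hypothetical tag keys and values, for illustration only.
response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2018-03-01", "End": "2018-03-31"},
    Granularity="DAILY",
    Metrics=["UnblendedCost"],
    Filter={"Tags": {"Key": "Application", "Values": ["checkout-service"]}},
    GroupBy=[{"Type": "TAG", "Key": "ChangeNumber"}],
)

# Print daily cost per change number so revisions can be compared side by side.
for day in response["ResultsByTime"]:
    for group in day["Groups"]:
        change_number = group["Keys"][0]                      # e.g. "ChangeNumber$v42"
        amount = group["Metrics"]["UnblendedCost"]["Amount"]  # cost for that revision
        print(day["TimePeriod"]["Start"], change_number, amount)
```

A similar grouping per environment gives you the before/after picture once a change is deployed.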

Your FinOps team should be able to open a software or configuration change request for the development teams whenever they find a workload that requires such a change to improve its cost. That change request should enter the development team’s task queue, and the task should have a cost-reduction estimate assigned to it. You can compute the estimated cost reduction by taking the workload’s current cost and applying the estimated cost cut.
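The estimate itself is simple arithmetic. A tiny sketch, with made-up numbers for illustration:

```python
def estimated_monthly_savings(current_monthly_cost, estimated_cut_pct):
    """Estimated savings = current workload cost * expected cost-cut percentage."""
    return current_monthly_cost * (estimated_cut_pct / 100.0)

# Illustrative numbers only: a $4,000/month workload with an expected ~20% cut
# from moving to the latest instance generation.
print(estimated_monthly_savings(4000, 20))  # -> 800.0 per month
```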

If you are a developer, you may say, “But my development team’s tasks don’t have any financial figure attached to them, so how would I prioritize the cost-reduction task versus a marketing or customer-request task?”. My answer is that most of your tasks could have a financial-gain figure assigned to them. Adding a piece of functionality would carry a “5% income gain” and could then be compared to a “cost cut of 10%”. Lacking that, you should still strive to find a way to prioritize cost-cut related tasks.

Once the development team’s priorities allow it, they develop the code that implements the cost reduction. Since the task is in the development queue, everyone knows about it, including the FinOps team, so we have visibility and accountability. Once the code change is deployed into an environment (provided it is NOT mixed with other changes), you can clearly observe its effect on cost in your monitoring system: the cost of your workload before and after the change.

Of course, this process is still challenging in many cases, and having all the components I described can take time and effort to set up and maintain, but I believe that aiming for it brings valuable benefits.

We are all eager to learn from your experience on that topic, so please share!

Yours,

Jacky Bezalel, Senior Technical Leader at Amazon Web Services; Teams and Senior Management Career Coach.


How to Motivate with “AHA”

In the following “What makes us feel good about our work” talk by Dan Ariely, there are a few points worth contemplating:


  1. Acknowledgement is critical and should be exercised as much as possible. Even the simplest form, like saying “Aha” when a team member hands you their report, is powerful. If you want your team to be twice as productive as they are now, acknowledge them routinely. You don’t need to make things up; you don’t need to exaggerate. Simply make sure you clearly show you got it: they did a job, they invested the effort, they want feedback. If you can thank them, do it. At the end of the week, just before we all go home, I thank each one of my team members, from the bottom of my heart, for the effort they have put in during the week. It does not matter how many successes they had; it is all about the time, effort and goodwill they have put into their work. That way they go off to their weekend charged with a good feeling.
  2. Indeed, tossing away the effort people have put in is a major energy drainer. Ignoring people’s effort puts them down almost as much as shredding their work in front of their eyes; that is just to stress how important acknowledgement is. And there is something else. When you do have to stop a project or cancel work that has been done, make sure you put enough energy into more than just explaining why it needs to be done: acknowledge the effort invested so far. People can reach 150% productivity if you do your best to preserve their efforts, and it does not matter how much you pay them. So thank them, do your best to make use of what they have built so far, and make sure this is communicated clearly. All this puts more fuel into your team’s emotional engines as they take your ship to a new destination.
  3. It doesn’t matter how much you pay people if you don’t acknowledge their effort. You can raise a salary once in a while, and it still won’t matter as much as far more frequent acknowledgement of people’s efforts.
  4. If you have advice to give, or feedback to improve someone’s performance, that still counts as acknowledgement and is worth more than silence. Acknowledging people does not mean you should forget about mentoring and guiding them.
  5. We value our own creations much more than evaluators do, because we appreciate most what we have put effort into. Make sure you kindly address this gap as you go through a performance evaluation with your team; otherwise people will think your review of their performance is too critical.
  6. In the past decades, Industrial Efficiency governed our work methodology: we preferred that everyone repeatedly perform just one part of the complete creation process of our solution. However, we are now shifting into a knowledge economy, where everybody can decide how much effort they put into their work. Consequently, success depends on your team’s willingness to invest more in acquiring and refining information. You don’t have the time, and sometimes even the means, to properly measure the quality of the information your team is creating, so you want them to be emotionally motivated to invest the extra effort. Even if they must focus on a repetitive job, at least give them a perspective on the overall project and an opportunity to voice their advice beyond their current role. This means you have to extend into the following areas. Note that each one of them deserves a whole article.
  7. Meaning – Create a powerful meaning, or context, for the job to be done.
  8. Creation – Let your team feel they own the creation of the solution.
  9. Challenge – Take your team through challenges, so they can experience personal growth.
  10. Ownership – Make sure your team sees themselves as the source of everything that happens as they ship their solution.
  11. Identity – Make sure your team can see themselves and their job as one.
  12. Pride – Set up opportunities for your team to see the gradual progress they make. Acknowledge them for their uniqueness and efforts.

As you watch the video, let me know if you see more points we should look into…


GENESIS – Big Data Super Nova – Part Three (Apple)

Hey and welcome!

The “Genesis – Big Data Super Nova and the Journey Back to Privacy and Security” broadcast is about to begin.

This time I decided to share with you my insights and ideas through a story.
So tighten your seatbelt as we jump into hyperspace through time.
It’s now exactly 37 years forward – the morning of March 13, 2050, 11 AM: Earth Standard Galaxy time.

This is chapter 3; here is where you can find the first chapter of this Big Data Sci-Fi Novella (Eve).

Chapter 3: Apple



It was the 1st of April 2013.

One year earlier, there were rumors that the world was going to end, that is, if you believed the ancient Mayan literature.

It turned out to be a bit later, and more digital than physical.

Back then, social networks such as “Facebook” and “Twitter” allowed people to share their thoughts and feelings through external personal outer-body devices.
People called those cumbersome ugly devices “iPhone” and “Android”.

Everyone was hooked on those networks. Companies started letting people access services based on their always-on authentication to the social networks.
But people were using “Pass-words” to connect to the social networks. They chose easy pass-words and did not bother to replace them occasionally.

So if you found out someone’s social network password (let’s say on Twitter), you also got access to any other service they had authorized (via OAuth) to log on to, based on the authentication token stored in their Twitter account.

Basically, if you had someone’s Twitter password, you could buy stuff and have them pay for it, look at their medical records, and review their personal assets. You actually became them.

When the big Digital Tsunami happened, it started with a huge surge of credit card and bank transaction fraud. Then, as money poured into the attackers’ bank accounts (many were used, so you could not trace a single source), it started spreading out as a massive purchase wave.

Amazon, eBay and similar services were hammered with endless waves of purchases, which then rebounded into cancellations as everyone tried to cancel the fraudulent purchase attempts.
There were similar attacks on health, government, manufacturing and other essential services, all aimed at making them useless.

Later on, this kind of attack was labeled “Application Based Denial of Service”: instead of driving lots of requests to a service, you would overload the remediation system of a service with transactions that require a huge effort to remediate (such as reversing a fraudulent transaction).

There was a lot of confusion and a slow response in addressing the global, world-wide break-in, because of misleading reports that it was all an April 1st joke.

In less than one hour, all the financial institutions, digital merchants and many other critical systems were completely ruined.

It took four months to get most of the damage fixed. But by then many lives had been lost.
You could die because you could not get food or medical treatment, or you could be attacked by desperate people who tried to forcefully take what you had.

No anti-malware system could detect the secret key-loggers that had everyone’s passwords, because they were part of many operating systems.

The attackers worked for many years, getting to the right people on every operating-system manufacturer’s staff. Then the secret key-logger code was added to the operating system’s codebase. Every developer the attackers acquired had to add or change just a small piece of code, entirely blind to the demonic intention driving all those small changes (a Time-Shifted Attack).

The attackers had even penetrated the NSA (National Security Agency) staff and implanted their own code on top of the homeland-security hooks. Those hooks were originally designed to allow law agencies to access people’s cloud-stored data and communications in case those people were suspects. Now those law-enforcement hooks carried parasite code, wire-tapping everything, ready to spray all this information to the malicious crackers.

It all sat there, silent, waiting, collecting and using Big Data to analyze and reveal additional access details. Gone were the days of brute-force password cracking; you can guess passwords much faster using Big Data analysis.

And then it was time for the software octopus to wake.

It started ignition on March 13, and it woke up on April 1.

Adam recalled that there was one more important event which took place on the 1st of April 2013.
The “Naturalists Group” was born.

The attack was carried out through Windows, Linux, Oracle and Java infrastructure suppliers, and through Cloud Providers.
Cloud services failed one by one, and the true flaw of the Cloud concept was revealed: we did not have “The Cloud”. What we had was many disparate cloud services, each one with its own weaknesses and a finite amount of resources.

Each Cloud provider had a finite, central set of “engines” operating all its services, which could easily be hacked and brought down, essentially making it useless.
You really had nowhere to fail over to.

The lack of Data Integration was apparent across Big Data archives, so you couldn’t really see what was going on. We had no “Big Eye in the Sky”, no “Digital Defense Satellite”, watching our data and computers.

That was the day when true “Universal Computing” and the “Neuro-Fibre Net”, were born.

Lots of server systems halted that day, and many personal devices got their share of blackout as well.

But most of the victims had the mobile device of choice of that time.

It was produced by Apple.

Get additional chapters of the book by Clicking Right Here


GENESIS – Big Data Super Nova – Part Two (Adam)

Hey and welcome!

The “Genesis – Big Data Super Nova and the Journey Back to Privacy and Security” broadcast is about to begin.

This time I decided to share with you my insights and ideas through a story.
So tighten your seatbelt as we jump into hyperspace through time.
It’s now exactly 37 years forward – the morning of March 13, 2050, 11 AM: Earth Standard Galaxy time.

The previous chapter is here (Eve)

Chapter Two: Adam



He didn’t like to Wake-on-LAN people, although Eden Industries’ code of conduct allowed it in case of emergency.

As a Naturalist and a long-term member of the “Real Thing”, the last thing he wanted was anything to do with inter-body Nano bots or other artificial enhancements.

Adam was 27 years old, and preferred to die young, rather than have his body parts replaced by bot parts.

After 200 years or so, no bot can keep your body alive, and you have to mind-beam yourself into the mind cloud, that is, if you discount the choice of artificial body-part replacements.

The other option is to use a whole body replacement.

“That’s when you fully stop being a human, turning yourself into a dumb robot”, he thought. You look like a Frankenstein, no matter how precise your human body imitation is.

Of course, your mind is still there, if you don’t mind an emotional glitch here and there, since data loss can happen when your brain is scanned.

“Nothing is perfect, including 3-D brain scanners.”

Thinking about the Nano Mind-Scan bots crawling through his brain, turning mind into data, turned Adam’s brown eyes even darker.

“I don’t mind being a bit overweight and bald, or even being called ‘Slow Thinker’. I won’t let any of those crap bots get under my skin”.

Eve always looked in great shape. But she never had to actually make an effort to look good. Her Muscle Trainer bots did all the work.

Eve was 20 years older than him, and still looked like she was in her early twenties.

“But she was not real. Not a real human being. And still I like her a lot.”

Then he felt ashamed for waking Eve by communicating the alert message, through her Inter Body Bots.

But he didn’t have a choice.

There was no time to wait; as more reports came in, it turned out to be a huge disaster.
He recalled the last time a global alert was broadcast on planet earth.

This time it was worse, enormously worse.

Get additional chapters of the book by Clicking Right Here


GENESIS – Big Data Super Nova – Part One (Eve)

Hey and welcome!

The “Genesis – Big Data Super Nova and the Journey Back to Privacy and Security” broadcast is about to begin.

This time I decided to share with you my insights and ideas through a story.
So tighten your seatbelt as we jump into hyperspace through time.
It’s now exactly 37 years forward – the morning of March 13, 2050, 11 AM: Earth Standard Galaxy time.


Chapter 1: Eve

Eve woke up.

But as she puffed her long dark hair off her face, she felt strange, a sense of fuzziness.
The inter-body adrenaline injection Nano system did its job and she opened her eyes.


“No need for alarm clocks any more”, she thought.

Then she felt as if she lost it.

It was as if her thoughts got encrypted through an SSH5 Gateway so she could not really understand what she was thinking.

Lights came up, as the under-skin environment communication Nano chip sent the message about her awakening to her room’s climate control unit.

But she was still lost.

Her body was still awakening, eyes turned purple, muscles twitching, heart beating faster.
But none of this was in her control. The Nano bots were doing it all.
No one needed to use contact lenses to get their choice of eye color, or go to the sea to get their body’s skin tanned.

Well, except the Naturalists, of course. They objected to just about any kind of body enhancement.

Then suddenly, as if someone had hit the lights on, Eve was really awake.
Her first thought was “It is too early”.

Then the red message appeared right in front of her eyes, blinking, mesmerizing.

“From: Adam@eden-industries.com
Subject: Big trouble Boss
Better get here right away, while you still can.
Adam.”

That’s why she lost it for a moment.
There was no time for the usual gradual wake up.

Adam had probably activated the emergency wake-up process, communicating the secret “Wake-on-LAN” message right into the Nano chips in her body.

“I hate it”, she thought. “And I don’t have time to get to the office.”

“I have to mind-beam”.

Get additional chapters of the book by Clicking Right Here

Karate SQL? KSQL vs. Kafka Streams

Just kidding :-) No Karate SQL that I am aware of…

Naturally, you would use Kafka Streams when your code runs on the JVM and needs SQL-like, declarative access to the data.

“Kafka Streams is the core API for stream processing on the JVM: Java, Scala, Clojure etc. It is based on a DSL (Domain Specific Language) that provides a declaratively-styled interface where streams can be joined, filtered, grouped or aggregated using the DSL itself. It also provides functionally-styled mechanisms — map, flatMap, transform, peek, etc”

(Source: Building a Microservices Ecosystem with Kafka Streams and KSQL, https://www.confluent.io/blog/building-a-microservices-ecosystem-with-kafka-streams-and-ksql/)

Check out KSQL, the Kafka Streams client, for cases where you want to run SQL queries against Kafka outside a JVM.

You can set up a KSQL container as a sidecar alongside your app container and let the app act upon regular Kafka topic events, removing the need for the app to deal with the query logic required to find the relevant data in the stream.

Example: your microservice needs to act upon new customer orders. Your sidecar container will run a KSQL SELECT and stream only the relevant event data to your app, one event at a time (configurable).

KSQL will get a copy of the same data across your microservice replicas.
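To make the sidecar idea more concrete, here is a minimal sketch of the consuming side in Python. It assumes a KSQL server sidecar listening on localhost:8088 and a hypothetical ORDERS stream with hypothetical columns; the REST payload and response framing shown here follow the classic KSQL releases and may differ in newer ksqlDB versions:

```python
import json
import requests  # assumes the 'requests' package is installed

KSQL_URL = "http://localhost:8088/query"  # the KSQL sidecar's REST endpoint

def handle_new_order(columns):
    # Your business logic goes here; for the sketch we just print the event.
    print("new order:", columns)

# Hypothetical stream and columns; the sidecar filters the Kafka topic for us.
payload = {
    "ksql": "SELECT ORDER_ID, CUSTOMER_ID, TOTAL FROM ORDERS WHERE STATUS = 'NEW';",
    "streamsProperties": {"ksql.streams.auto.offset.reset": "latest"},
}
headers = {"Content-Type": "application/vnd.ksql.v1+json; charset=utf-8"}

with requests.post(KSQL_URL, json=payload, headers=headers, stream=True) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines():
        if not line:
            continue  # the stream interleaves keep-alive blank lines
        message = json.loads(line)
        row = message.get("row")
        if row:
            handle_new_order(row["columns"])  # one event at a time
```

The app never touches the raw topic; the sidecar does the filtering and hands over one matching event at a time.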

Sounds like fun? Well, that’s because it is!

Maybe it should be called Karate SQL after all…

P.S.

If you use AWS and need Kafka (otherwise you would use AWS Kinesis), here is a nice, basic starter automation for setting up Kafka on AWS.

AWS Secrets… Yes!!

YES!! AWS Secrets!!

https://aws.amazon.com/blogs/aws/aws-secrets-manager-store-distribute-and-rotate-credentials-securely/

I’d say secrets in Parameter Store are like serverless credentials in Jenkins, while secrets in Secrets Manager are like a serverless HashiCorp Vault. The difference for now is in the limits of use: SSM Parameter Store is free but would not work well when saturated with many calls (you are expected to use it moderately), while with AWS Secrets Manager you are not limited in the same way because you pay for it. I believe AWS Secrets Manager will become more feature-rich in the future.
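For completeness, here is what fetching a credential from each service looks like with boto3; the parameter and secret names below are placeholders:

```python
import boto3  # assumes AWS credentials and region are configured

# SSM Parameter Store: free, but meant for moderate call volumes.
ssm = boto3.client("ssm")
db_password = ssm.get_parameter(
    Name="/myapp/db_password",   # hypothetical parameter name
    WithDecryption=True,
)["Parameter"]["Value"]

# AWS Secrets Manager: paid, with built-in rotation support.
secretsmanager = boto3.client("secretsmanager")
api_key = secretsmanager.get_secret_value(
    SecretId="myapp/api_key",    # hypothetical secret name
)["SecretString"]
```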

Kubernetes 1.9 admission extension – What is it?

Kubernetes 1.9 includes powerful admission extension abilities that reflect the golden principles Kubernetes is built on; you want to look for those principles in other solutions you are considering.

What is Admission?

Admission is the phase of handling an API server request that happens before a resource is persisted, but after authorization. Admission gets access to the same information as authorization (user, URL, etc) and the complete body of an API request (for most requests).

What are they good for?

Webhook admission plugins allow for mutation and validation of any resource on any API server, so the possible applications are vast. Some common use-cases include:

Mutation of resources like pods. Istio has talked about doing this to inject side-car containers into pods. You could also write a plugin which forcefully resolves image tags into image SHAs.

Name restrictions. On multi-tenant systems, reserving namespaces has emerged as a use-case.

Complex CustomResource validation. Because the entire object is visible, a clever admission plugin can perform complex validation on dependent fields (A requires B) and even external resources (compare to LimitRanges).

Security response. If you forced image tags into image SHAs, you could write an admission plugin that prevents certain SHAs from running.
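To make the “name restrictions” use case concrete, here is a minimal sketch of a validating webhook backend, written with Flask purely for illustration (any HTTPS server that speaks the AdmissionReview JSON would do). It rejects namespaces whose names start with a reserved prefix; the ValidatingWebhookConfiguration and TLS setup needed to register it with the API server are omitted:

```python
from flask import Flask, request, jsonify  # Flask is just one convenient choice

app = Flask(__name__)
RESERVED_PREFIX = "kube-"  # hypothetical reserved namespace prefix

@app.route("/validate", methods=["POST"])
def validate():
    review = request.get_json()
    req = review["request"]
    name = req["object"]["metadata"].get("name", "")
    allowed = not name.startswith(RESERVED_PREFIX)

    response = {"uid": req["uid"], "allowed": allowed}
    if not allowed:
        response["status"] = {
            "message": "namespace names starting with '%s' are reserved" % RESERVED_PREFIX
        }

    # Echo back an AdmissionReview envelope carrying our decision.
    return jsonify({
        "apiVersion": "admission.k8s.io/v1beta1",
        "kind": "AdmissionReview",
        "response": response,
    })

if __name__ == "__main__":
    # The API server only calls webhooks over HTTPS; cert/key paths are placeholders.
    app.run(host="0.0.0.0", port=8443, ssl_context=("tls.crt", "tls.key"))
```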

More information here: http://blog.kubernetes.io/2018/01/extensible-admission-is-beta.html