Category Archives: Podcast

The Jack Bezalel Podcast – Sharing insights from my 30+ years as an IT professional, blending Leadership, Marketing, Storytelling, NLP, Spirituality, Daring Vision and Innovation, aimed at making Systems work and People grow.

Pulling Innovation out of Patent Black Holes

Listen to this post here:

If you care even a bit about patents, you probably fall into one of these camps:

  1. You want the current patent system to prevail, because for now, you enjoy its rewards
  2. You would like the current patent system to massively change, at least for the software industry

I claim there is a third option that no one seems to talk about, one that is not only fair but has a financial reward system built in as well.

But before I reveal this other possibility, let us take a look at the real motivation for the patent system to exist altogether.

I am not a lawyer, so I’d like to offer a practical rationale for patents to exist.

I’d say patent rules were created to reward a result gained through massive effort, or the creation of an unexpected possibility. In both cases, patent rules were put in place to reward a result that is unlikely to be achieved. Additionally, patent rules were put in place to halt any attempt to make use of this unique result (the patent) without rewarding the patent creator.

Lacking those rules, people would be less likely to invest their resources in finding new and better ways to make things happen, because others would use the results without putting in any effort or rewarding the patent creator.

So patents were actually created to foster innovation…

It is my claim that it was also expected that this unexpected result (invention or patent) would have practical use. Otherwise, what is the point in discovering something that no one can (feasibly) use?

Those two principles define the new direction patents should follow.

Nowadays companies file patents they cannot use themselves, and even patents that cannot be used at all. They do it just so they can lock in the idea and have others, who believe those patents can be useful, pay them.

Previously, resources were invested to invent; now they are also being massively deployed to block (others’ innovations).

This, of course, puts a stumbling block in the way of innovation. It seems impossible to break the vicious cycle, because too many people could lose money if the current system were phased out. As time passes, more money is spent on patents and the damage spirals, the same way black holes aggregate energy, drawing any new innovation into their wormholes and leaving no escape route for new stars to thrive.

Here is my idea on pulling innovation out of patent black holes:

Let anyone use any patent they like, as long as they reward the patent creator according to the sales or value they actually gain.

People should be able to use any patent created by others, as long as they pay the patent creator a fee based on the actual sales or value gained from the product or service they built on top of those patents.
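To make the mechanism concrete, here is a minimal sketch in Python of how such a usage-based fee might be computed. The 2% royalty rate, the patent IDs and the sales figures are all hypothetical, and a real scheme would need some way to audit the reported sales:

```python
# Minimal sketch of a usage-based patent royalty model.
# The 2% rate, the patent IDs and the sales figures below are hypothetical.
from collections import defaultdict

ROYALTY_RATE = 0.02  # fraction of attributable revenue paid per patent used

def royalties_due(attributable_sales, patents_used):
    """Split usage-based fees across the creators of the patents actually used.

    attributable_sales: revenue of the product built on those patents
    patents_used: mapping of patent id -> patent creator
    """
    owed = defaultdict(float)
    for patent_id, creator in patents_used.items():
        owed[creator] += attributable_sales * ROYALTY_RATE
    return dict(owed)

# A product that earned $250,000 while using two (hypothetical) patents:
print(royalties_due(250_000, {"US-111": "Acme Labs", "US-222": "Beta Research"}))
# -> {'Acme Labs': 5000.0, 'Beta Research': 5000.0}; no sales means no fee.
```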

Turning the patent wheel in this new direction will turbo-charge innovation: patent owners will seek out innovators to use their patents, since they get rewarded as well. Innovators will sip from the river of knowledge and create better solutions, bringing them to market faster. Companies can redirect their resources from blocking to innovating. Lawyers will still be needed to formally file patents, yet those patents will be used to innovate rather than pull down entrepreneurs.

This road is not trivial either. There are issues to resolve, but so far, each time I look at a challenge related to this new way of dealing with patents, I find we already handle similar issues elsewhere in our lives. I refer to issues such as tracking patent use and figuring out how much people should pay for using a patent.

Bottom line: the proposed system is based on the concept of a universe of abundance, while the current system is based on a world of scarcity.

In which of those universes do you prefer to live?

Meanwhile, take a look at this new service, which allows innovators who are trying to find their way and file a patent to get some crowd-sourced advice: Ask Patents

P.S.

If you feel strongly about patents and have read my post, you will not want to miss voting here:

P.P.S.

I believe that even if you work in a big corporation, you’d want to hop into Ask Patents from time to time, check out queries regarding patents your company has expertise in, and see what’s cooking in the innovation pot before it hits the street…

Happy patenting!

Time To Innovate (TTI): Amazon Cloud Access Control and Bromium vSentry

Listen to this post here:

Yesterday Amazon announced it is starting to provide AWS role-based resource access control. This follows previous announcements about other AWS services being integrated into Amazon’s IAM framework:

AWS Elastic Beanstalk Supports IAM Roles

AWS Identity and Access Management (IAM) Adds Support for Variables in Access Control Policies

We can say one thing for sure about Amazon: it is going through a constant, continuous improvement cycle, complementing its cloud service portfolio. This is surely attractive for IT professionals looking for a restless, innovative solution provider, who understand that in many cases an active improvement process is much more important than the “static perfection” others try to reach.
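As a small illustration of the policy-variables feature mentioned above, here is a minimal sketch using boto3. It relies on the built-in ${aws:username} policy variable so that each IAM user can only reach their own folder; the policy name, bucket and path layout are hypothetical, and it assumes valid AWS credentials are already configured:

```python
# Minimal sketch: create an IAM policy that uses the ${aws:username} policy
# variable, so every user is confined to their own S3 prefix.
# The policy name and bucket are hypothetical; requires boto3 and AWS credentials.
import json
import boto3

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            # ${aws:username} is resolved per-user when the policy is evaluated.
            "Resource": "arn:aws:s3:::example-team-bucket/home/${aws:username}/*",
        }
    ],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="PerUserHomeFolderAccess",
    PolicyDocument=json.dumps(policy_document),
)
```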

At the other end of the spectrum, take a look at the vSentry end-user threat protection solution from Bromium. What’s interesting to me are its two core innovations:

Micro-virtualization creates a transparent shield for any untrusted application, which lets the user feel safe and avoid irrelevant alerts: the untrusted application can try to do harm, only to find itself isolated, with no actual modification of user data. All the effects of the malware are cleaned up, as its virtual sandbox vanishes as soon as the application exits. This is based on Intel VT technology.

This also enables the second innovation, Task Introspection. Since applications can do anything they want, however malicious, while getting only virtual rather than actual access to the system’s resources, an attack can be recorded and reviewed at will, saving forensics time and effort.

Here too, what’s important to note is the rapid exploitation of an opportunity to innovate using current tools (such as Intel VT). Sure, this solution is not perfect and will be circumvented at some point, but it does offer a pain-killer type of remedy, which IT professionals are likely to grab quickly.

I believe there should be a new term for us to use: Time To Innovate (TTI), which measures how much time it takes you to innovate once an opportunity presents itself.

Are you rapidly innovating as well, or endlessly trying to perfect your solution?

What’s wrong with leading Virtual Appliance Software Update Design

Listen to this post here:

While reviewing the design considerations for the processes of updating and upgrading Virtual Appliance software, I decided to take a look at how leading Virtual Appliance vendors update their appliances’ software.

The method I used for this quick research was very simple: I looked at each Virtual Appliance’s documentation where it discussed the update and upgrade process, and then deduced how the software update process was designed. The “Known Issues” and “Troubleshooting” sections in the vendors’ release notes were a very good resource as well.

Stay tuned for my design considerations checklist; this article reviews the flaws in the leading Virtual Appliance software update designs.

VMware ESXi Host

This is not a virtual appliance, as it usually runs on real hardware (except for testing purposes, where it can run as a virtual machine). Still, I thought it was worth a look, due to its core role in VMware’s virtualization solutions; the methods VMware uses to update its software may be worthy of a Virtual Appliance as well.

  1. Upgrades across major versions (4 to 5) seem to work seamlessly if you use Update Manager via vCenter: it preserves virtual machines even if they reside alongside the hypervisor, and it goes through all the steps in the process (verify, stage updates, update, test, reboot) without your intervention.
  2. However, if the process is interrupted during an upgrade or update, the system may become unusable (there is no quick rollback option).
  3. The basic vCenter Update Manager update process requires your input. But you could automate it if you standardize your hardware and configuration, and then use options such as customized ISOs and other methods offered by VMware as well as third-party hardware vendors.
  4. In general you can automate the process for a specific update, but you still have to screen and test new updates and review their impact on your environment in order to customize your own automated process. You can’t really stream updates to your ESXi automatically. (I sketch one way to script a single, pre-screened update right after this list.)
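For what it is worth, here is a minimal sketch of scripting a single, pre-screened patch rollout: it pushes one depot bundle to a list of hosts over SSH and applies it with esxcli. The host names, credentials and depot path are hypothetical, it assumes the paramiko library and SSH enabled on the hosts, and in real life you would also place each host into maintenance mode and reboot it as the patch requires:

```python
# Minimal sketch: apply one pre-screened patch bundle to several ESXi hosts
# over SSH. Host names, credentials and the depot path are hypothetical;
# requires the paramiko library and SSH enabled on the hosts.
import paramiko

HOSTS = ["esx01.example.local", "esx02.example.local"]
DEPOT = "/vmfs/volumes/datastore1/patches/ESXi-update-depot.zip"  # already uploaded

def patch_host(host, user="root", password="secret"):
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=user, password=password)
    try:
        # "vib update" installs only the newer VIBs found in the depot bundle.
        stdin, stdout, stderr = client.exec_command(
            f"esxcli software vib update -d {DEPOT}"
        )
        print(host, stdout.read().decode(), stderr.read().decode())
    finally:
        client.close()

for host in HOSTS:
    patch_host(host)
```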

VMware vCenter Server Appliance (Linux Based)

  1. The main method described for an upgrade requires creating a fresh vCenter Server Appliance instance, then establishing a trust relationship between the new and current vCenter Servers, allowing the current appliance’s configuration settings to be transferred to the new one. Once this process completes, you can shut down the current appliance and leave the new one running.
  2. This does not look like a process that could be easily automated or simplified.
  3. To apply updates rather than an upgrade, you are supposed to run the process manually through your browser. So this process is manual as well, and it is not clear if and how you can quickly and easily recover if the updates cause issues.

VMware Storage Appliance

  1. As you upgrade the appliance you may be required to uninstall the current version, adding complexity to the process.
  2. You are required to manually address dependencies, such as upgrading ESX hosts only AFTER you upgrade the Storage Appliance software. This is very disturbing if you are looking for an automated, bullet-proof, simple process.
  3. There is mention of a rollback that the update process may trigger in case of an upgrade failure, but it is not clear how it works.
  4. It is not clear whether and how updates/patches can be applied, since all the documents I reviewed refer to upgrades only.

Cisco Routers

Admittedly, in most cases we are talking here about hardware-based appliances (routers), and yet I’d expect the design concepts of such a major, long-term appliance vendor to have lots of wisdom embedded in its update architecture. Of course, Cisco has incorporated various platforms and solutions it either developed or acquired over the years. Still, I reviewed the common IOS update process.

  1. Updates can be activated interactively or for a group of appliances via update manager software.
  2. Although in general they recommend running the update at the physical console, I believe remote consoles and remote power controllers could suffice in most cases as well.
  3. The update process seems to be pretty failsafe, as you can upload an update to the appliance’s flash memory. The flash memory, if properly sized, can hold several update images, which you can select from to boot the appliance. So if you stumble into a bad update, you can easily reboot the appliance with the previous good image.
  4. Altogether, the basic design concepts of a simple, automated, safe process that can be mass deployed seem to apply in this case (see the sketch after this list). Of course, in my future Appliance Software Update design considerations checklist, I will try to highlight additional innovative ideas.
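To illustrate the boot-image fallback idea from item 3, here is a minimal sketch using the netmiko library. The device address, credentials and image file names are hypothetical; the point is simply that the previous known-good image stays in flash as a second boot entry:

```python
# Minimal sketch: point an IOS router at a new image in flash while keeping
# the old image as a boot fallback. Device details, credentials and image
# file names are hypothetical; assumes the netmiko library.
from netmiko import ConnectHandler

router = {
    "device_type": "cisco_ios",
    "host": "192.0.2.1",
    "username": "admin",
    "password": "secret",
}

NEW_IMAGE = "flash:c2900-universalk9-mz.SPA.157-3.M5.bin"  # freshly copied image
OLD_IMAGE = "flash:c2900-universalk9-mz.SPA.155-3.M7.bin"  # known-good fallback

conn = ConnectHandler(**router)
conn.send_config_set([
    "no boot system",            # clear existing boot entries
    f"boot system {NEW_IMAGE}",  # try the new image first
    f"boot system {OLD_IMAGE}",  # fall back to the previous good image
])
conn.save_config()               # write running-config to startup-config
conn.disconnect()
# A "reload" would boot into the new image; it is left out here so the
# sketch stays non-destructive.
```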

F5 BIG-IP Virtual Edition (top performer in the VMware Appliance Market)

  1. I found a lack of automation and mass-deployment options. You basically have to download the update ISOs, check their MD5 checksums, import them through a web browser, and execute them on the appliance. (At least the checksum step is easy to script; see the sketch after this list.)
  2. There is no mention of automated or manual recovery, or of options for mass deployment.
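The MD5 verification step, at least, is trivial to script. Here is a minimal sketch; the file name and the expected checksum are hypothetical placeholders:

```python
# Minimal sketch: verify a downloaded update ISO against the MD5 checksum the
# vendor publishes, before importing it into the appliance.
# The file name and expected checksum are hypothetical placeholders.
import hashlib

ISO_PATH = "BIGIP-update.iso"
EXPECTED_MD5 = "9e107d9d372bb6826bd81d3542a419d6"  # value from the vendor's site

def md5sum(path, chunk_size=1 << 20):
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

actual = md5sum(ISO_PATH)
if actual != EXPECTED_MD5:
    raise SystemExit(f"Checksum mismatch: got {actual}, expected {EXPECTED_MD5}")
print("ISO checksum verified; safe to import.")
```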

Looking at the cases I inspected, you could be critical of those solutions (except, perhaps, for Cisco’s). However, we should also consider that maybe, after all, it is not worthwhile to address those gaps. Those vendors are still selling those products in volume…

Maybe it is because, in the IT arena, people do not take the time to show management how much the lack of better software update features, and the extra downtime that results, really costs…

Or maybe that’s because the Virtual Appliance is just an in-between phase, leading us from computers to cloud services. Maybe the cause of the lack of robustness in appliance maintenance mechanisms is that vendors are investing only the minimum effort required to dump their software, as is, into a virtual appliance, knowing that the real effort should go into restructuring their solutions as generalized cloud services, where the computer entity is irrelevant. In that case, the whole Virtual Appliance market is destined for doom, despite the seemingly vibrant state it appears to be in nowadays.

What do you think?

Invasion of the IP Snatchers – A DEVOPs Internal Cloud Horror Story (Pre-SDN)

Listen to this post here:

Let’s face it: there is a “manufactured” belief that the SDN (Software Defined Networking) concept and its solutions are a magic bullet that will eliminate all the headaches bogging down IT professionals, who struggle as they furiously expand their networks to match the explosion of Virtual Machine and Cloud Computing implementations.

Well, I have some bad news for you: I believe challenges are only going to increase in velocity as well as severity, adding complexity to the brewing virtualization pot.

But that’s a whole separate story, which we’ll uncover at another time.

First, let’s look at what awaits you when you simply try to accommodate a huge expected number of virtual machines, roughly 10x what you had in mind when all you had were physical machines or virtualization hosts that could run just a couple of VMs. Let’s assume you are not yet using any SDN-type solution.

Nowadays you can buy an 8-core Dell server with ~400GB memory for less than $13,000. If your average Virtual Machine requires 5GB of memory, you could easily find yourself serving 80-100 VMs off that single server.

Group a couple of those servers together and you can easily overflow a typical Class C subnet (254 usable addresses).
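A quick back-of-the-envelope check with Python’s standard ipaddress module, using the figures above, shows how fast a /24 (Class C sized) subnet runs out; the 10.20.10.0/24 range is just a placeholder:

```python
# Back-of-the-envelope check: how many VMs fit on one host, and how quickly a
# /24 (Class C sized) subnet overflows. Figures match the example above.
import ipaddress

host_memory_gb = 400
vm_memory_gb = 5
vms_per_host = host_memory_gb // vm_memory_gb        # 80 VMs per server

subnet = ipaddress.ip_network("10.20.10.0/24")
usable_ips = subnet.num_addresses - 2                # minus network/broadcast

hosts_to_overflow = usable_ips // vms_per_host + 1
print(f"{vms_per_host} VMs per server, {usable_ips} usable IPs per /24")
print(f"Just {hosts_to_overflow} such servers overflow the subnet")
# -> 80 VMs per server, 254 usable IPs per /24; just 4 servers overflow it.
```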

Here are a few lessons learned you may want to look into as you design your DEVOPs internal cloud network architecture:

Leadership and Management:

  1. It is paramount that all relevant team members (network team, virtualization admins, sys admins, storage and server suppliers, clients) are aligned with the project’s objectives and feel responsible for its success. That’s because you will stumble into challenges, and people might say “It is too complex!”, “Supernets don’t work well with this device”, “It takes too much time until ‘they’ do their part in the configuration”. So you want to make sure everyone is involved, and keep steering the boat to the Promised Land…
  2. These days virtualization and sys admins have a bunch of technologies thrown at them as part of the cloud infrastructure. They have to quickly learn how to operate and maintain blade infrastructure, storage units from EMC, NetApp, IBM and others, as well as blade-center switches. You will not have all the knowledge in your internal team up front, so you will need to turn to contractors, suppliers and others to set things up for you. But always make sure you take the time, and pay the fee, to have them document what they do for one work item (one server, one LUN, one VLAN setup), and then immediately use this procedure to do the rest of the same work items YOURSELF. You can always learn and take courses as you go, but never wait for that. Leveraging and reusing your contractors’ knowledge will save you money (you pay them to set up just 1-2 units rather than all of your units). It will also save you downtime, since you can schedule the work per your clients’ schedule, over time, rather than per the contractor’s schedule, which results in massive all-or-nothing downtime and lots of troubleshooting the morning after. Reusing your contractors’ knowledge this way also increases your team’s self-confidence in general.
  3. Make sure you are aware of any gaps in knowledge between network professionals and system professionals. There are fine network professionals who don’t know exactly what a VMware vSwitch is and how VMs get their MAC and IP addresses through a physical server. This can easily be resolved through a quick discussion and a few examples, but you have to notice it and address it, or it will bite you at the worst possible time.
  4. When your contractors say “Ah, this should work”, ask them whether they have actually done it in an environment similar to yours, or can get someone who has done it to instruct them. If they can’t confirm it, pay for the extra time they’ll need to do the homework; this extra spending will pay off in shorter downtime and more assurance of the project’s success.
  5. Define exactly what successful project implementation means: downtime length, what should be up and running, a quick rollback plan and so on.

Network Planning:

  1. Go big. If today’s servers can run 20 VMs, then in a year they will run 80 VMs. Rest assured, someone will need all those VMs, so plan for a large number of IPs.
  2. Most of your internal VMs do not need a public IPv4 address, and you don’t want to jump into IPv6 just to get a lot of IPs, since the world is still not set up for it.
  3. Go for internal ranges of IPs, such as 10.x.x.x.
  4. If your network is based on Class C subnets, you’d probably want your internal IP ranges laid out as Class C-sized blocks as well. However, you’d want subnets with more than 254 IPs. For this you can use “supernets”: adjacent slices of 254 IPs (10.x.10.x and 10.x.11.x) sharing the same gateway. This gives you ~500 usable IPs per “supernet” (see the sketch after this list).
  5. If you wonder why not create supernets of 1,000 IPs or more, so your huge clusters could use a single standard subnet, consider that broadcasts across so many hosts could bog down your network. For now, 512-IP supernets seem to be a nice middle ground.
  6. Your local and wide area network team should add the relevant routes to those new subnets, so you don’t lose companywide connectivity.
  7. Consider which segments you activate DHCP on, and make sure it can accommodate the new subnets.
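Here is a minimal sketch of the supernet idea from item 4, using Python’s standard ipaddress module (Python 3.7+ for subnet_of); the 10.20.x.x ranges are hypothetical placeholders:

```python
# Minimal sketch of a "supernet": two adjacent /24 slices merged into one /23
# that shares a single gateway. The 10.20.x.x ranges are hypothetical.
import ipaddress

slice_a = ipaddress.ip_network("10.20.10.0/24")
slice_b = ipaddress.ip_network("10.20.11.0/24")

supernet = slice_a.supernet(prefixlen_diff=1)   # 10.20.10.0/23
assert slice_b.subnet_of(supernet)              # both /24 slices live inside it

usable = supernet.num_addresses - 2             # minus network/broadcast
gateway = next(supernet.hosts())                # one gateway (10.20.10.1) for all
print(supernet, "usable IPs:", usable, "gateway:", gateway)
# -> 10.20.10.0/23 usable IPs: 510 gateway: 10.20.10.1
```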

Server Side and VLANs:

  1. Now let’s take care of the server side. Basically, you’d want all your virtual machine hosts to be capable of launching VMs on any of those subnets. If you have 4 NICs (network cards) on each server, this means you can only support 2-3 subnets (leaving 1 or 2 NICs for iSCSI or management). It also means your ESX, Xen or Hyper-V servers can’t easily be set up to support new subnets without downtime and scheduling delays to sync this operation with the network team(s). That’s why you want to use VLANs.
  2. Using VLANs, you can set your physical switches to deliver traffic for many more subnets over the same 2-3 physical NICs your servers have.
  3. To set up your virtualization hosts with VLAN support, you need to sync the VLAN tags (labels) across your backbone switches, your blade-center switches and the virtual switches (in the case of VMware ESX) defined on your (ESX) servers.
  4. For each subnet, you’d configure your ESX server with a new vSwitch port group, labeled to carry only a specific VLAN, associated with a 512-IP supernet. All your (ESX) clusters should use the same virtual switch naming convention and VLAN labeling, so that “VM VLAN 1211” means the same 10.x.11 subnet across all of your (ESX) clusters (see the sketch after this list).
  5. Your VMs can now easily “move” across subnets, once a subnet is exhausted, by simply re-assigning their network cards to a different vSwitch port group.
  6. Always aim to designate one of your VLANs as a “native”/default VLAN for servers on which you can’t or won’t configure VLAN settings. Those servers will automatically use the subnet associated with the default VLAN.
  7. Reserve a subnet for VMs that require a static IP. That way they will not have to compete with massive DHCP request volumes from VMs that can make do with a DHCP-based IP.
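As an illustration of the naming convention in item 4, here is a minimal sketch that adds a VLAN-tagged port group with a consistent label to the same vSwitch on every host. The vCenter address, credentials, vSwitch name and VLAN ID are hypothetical, it assumes the pyVmomi library, and it is a sketch rather than a production script:

```python
# Minimal sketch: add a VLAN-tagged port group ("VM VLAN 1211") to a named
# vSwitch on every host, so the same label maps to the same supernet on every
# cluster. vCenter address, credentials, names and VLAN ID are hypothetical.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

VLAN_ID = 1211
PORTGROUP_NAME = f"VM VLAN {VLAN_ID}"  # convention: the label encodes the VLAN/supernet
VSWITCH_NAME = "vSwitch1"

ctx = ssl._create_unverified_context()  # lab-only shortcut; use proper certs in production
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    hosts = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True).view
    for host in hosts:
        spec = vim.host.PortGroup.Specification()
        spec.name = PORTGROUP_NAME
        spec.vlanId = VLAN_ID
        spec.vswitchName = VSWITCH_NAME
        spec.policy = vim.host.NetworkPolicy(
            security=vim.host.NetworkPolicy.SecurityPolicy())
        host.configManager.networkSystem.AddPortGroup(portgrp=spec)
        print(f"Added '{PORTGROUP_NAME}' on {host.name}")
finally:
    Disconnect(si)
```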

Leadership and Management Revisited:

  1. Create a cross-device backup process that captures the configuration of your vCenter, ESX servers, blade center and blades, as well as your storage units and switches. You will need all of those to recover or troubleshoot…
  2. Better yet, try to create configuration templates for all those devices, which you can either restore or use to deploy new units: servers, VLANs, switches, etc.
  3. Monitor your IP allocation usage, and make sure the network team verifies that your backbone switches are not overloaded by all those new VMs.
  4. When you have finished this new project, show your management team the challenges you tackled to save them time and money and allow future growth. Maybe even write a blog post about your lessons… 🙂

That’s about it for now… I am sure you have a lot to add; feel free to comment on this post…

Why Leadership Sucks and Is So Worthwhile at the Same Time

Listen to this post here:

I haven’t read this book yet, but it is free now on Amazon, and this note from its review caught my attention:

“…It (Leadership) sucks because real leadership is hard, requires selfless service, and because the buck stops here.

Servant leadership or Level 5 leadership is uncomfortable, humbling, self-denying, painful, and counter-intuitive;

nonetheless, it is the only kind of leadership that brings lasting results, genuine happiness, and true self fulfillment…”

Here is the link to the book (not sure how much longer it remains free): Why Leadership Sucks: Fundamentals of Level 5 Leadership and Servant Leadership

Feel free to share here if you got the book, and what you think about it, or about the note…

How to Motivate with “AHA”

In the following “What makes us feel good about our work” talk by Dan Ariely, there are a few points worth contemplating:

Click here to listen to this post:

  1. Acknowledgement is critical and should be exercised as much as possible. Even the simplest kind, like saying “Aha” when a team member hands you their report, is powerful. If you want your team to be twice as productive as they are now, acknowledge them routinely. You don’t need to make things up; you do not need to exaggerate. Simply make sure you clearly show you got it: they did a job, they invested the effort, they desire feedback. If you can thank them, do it. At the end of the week, just before we all go home, I thank each of my team members, from the bottom of my heart, for the effort they have put in during the week. It does not matter how many successes they had; it is all about the time, effort and good will they have put into their work. That way they go off to their weekend charged with good feelings.
  2. Indeed, tossing away the effort people have put in is a major energy drainer. However, ignoring people’s effort also puts them down, almost as much as shredding their work in front of their eyes. That’s just to stress how important acknowledgement is. And there is something else: when you do have to stop a project or cancel work already done, make sure you put in enough energy beyond explaining why it needs to be done. Make sure you acknowledge the effort invested so far. People can reach 150% productivity if you do your best to preserve their efforts, and it does not matter how much you pay them. So thank them, and do your utmost to make use of what they built so far. Make sure this is communicated clearly. All this will put more fuel into your team’s emotional engines as they take your ship to a new destination.
  3. It doesn’t matter how much you pay people if you don’t acknowledge their effort. You can raise a salary once in a while, and it still will not matter as much as far more frequent acknowledgement of people’s efforts.
  4. If you have advice to give, or feedback to improve someone’s performance, those still count as acknowledgement and are worth more than silence. Acknowledging people does not mean you should forget about mentoring and guiding.
  5. We value our creations much more than evaluators do, because we appreciate most what we put effort into. Make sure you kindly address this gap as you go through a performance evaluation with your team; otherwise people will think your review of their performance is too critical.
  6. In past decades, industrial efficiency governed our work methodology: we preferred that everyone repeatedly do just part of the complete creation process of our solution. Now, however, we are shifting into a knowledge economy, where everybody can decide how much effort they put into their work. Consequently, success depends on your team’s willingness to invest more in acquiring and refining information. You don’t have the time, and sometimes you even lack the means, to properly measure the quality of the information your team is creating. So you want them to be emotionally motivated to invest the extra effort. Even if they must focus on a repetitive job, at least give them a perspective on the overall project and an opportunity to voice their advice beyond their current role. This means you have to extend into the following areas; note that each one of them deserves a whole article.
  7. Meaning – Creating a powerful meaning or Context for the job to be done.
  8. Creation – Let your team feel they own the creation of the solution.
  9. Challenge – Take your team through challenges, so they can experience personal growth.
  10. Ownership – Make sure your team see themselves as the source of everything that happens as they ship their solution.
  11. Identity – Make sure your team can see themselves and their job as one.
  12. Pride – Set the opportunities for your team to see the gradual progress they gain. Acknowledge them for their uniqueness and efforts.

As you watch the video, let me know if you see more that we should look into…

Kali Linux goes Backtrack

For you security pen testers, there is a new kid in town.

Listen to this post here:

Kali Linux is the new incarnation of the famous BackTrack Linux, used for 7+ years as pen testers’ open-source toolset of choice.

Why should you consider using it?

  1. It’s what the Backtrack team will be supporting for the long term
  2. Synced with Debian (if you prefer Debian) – you can get automatic daily updates if any are available
  3. Security tools are closely inspected and maintained
  4. You can customize your Kali installation during its setup
  5. Automated installs (fresher than stale point in time ISOs…)
  6. Better ARM architecture support for the tools
  7. Flexible choice of your desktop environment (KDE, LXDE, XFCE, Anything else)
  8. No need to re-install or re-setup your Kali install, as new major Kali versions are released

All in all, Backtrack got “Enterprised” into Kali…

Would you now switch to Kali Linux?

P.S.

Note that both BackTrack and Kali Linux contain great tools such as Nessus and Metasploit, set up for “labor-intensive” use. If you want them to do the work for you automatically, across many systems, through a workflow that will save you a LOT of time, you’ll have to pay for the “Professional” variations of those tools.

Amazon Cloud Drive is out in the wild (Details and Risks)

Amazon has finally released its Dropbox / Google Drive / SkyDrive “killer”

Click here to listen to this post:

It lets you save files to Amazon’s storage systems right from your Windows or Mac desktop, through a local folder that gets synced automatically.

A few notes:

  1. The first 5GB of storage are free, and then you start paying.
  2. It is separate from the usual Amazon S3 storage.
  3. No sharing through links exists yet, but I expect it will be added in the future
  4. Check the terms and conditions of use… it looks like they allow Amazon to disclose what you put there rather easily…

Another question that becomes prevalent is “How should we address the variety of cloud storage systems running all on our desktops?”

  1. Stick to just one? (That’s my choice for now…)
  2. Split across several?
  3. Use apps that consolidate all cloud drives into a single unit, and take on the same risks RAID 0 has (one disk gone, and our data is a mess…)

All this tightly coupled, web-accessible, cloud-based, desktop-integrated content highlights a relevant security risk: attacks that penetrate at one end (say, through web URL exploitation tactics) can end up much more easily at the other end (your desktop, or other users’ desktops).

Amazon Cloud Drive is a 61.8MB download (!) and you can get it right here:

http://www.amazon.com/gp/feature.html?ie=UTF8&docId=1000796931

Will you use Amazon Cloud Drive? How?

How to find your next great book read

How to find your next great book read?

Listen to this post here:

Try out http://goodreads.com

It’s a great site and app that lets you:

  1. Share the books you read
  2. Find who else read them (in general or from your friends)
  3. Rate and publish a review
  4. Follow authors or readers
  5. Mark what you are reading or want to read
  6. As well as tag your books and more

Now that Amazon has bought http://goodreads.com, you will probably get even more out of joining it.

So let’s connect and share. Here is my profile, where you can friend me and follow my reviews: http://www.goodreads.com/user/show/18841460-jack-bezalel

One more thing: do you have a similar resource you use for finding books? Please share it in the comments…


GENESIS – Big Data Super Nova – Part Three (Apple)

Hey and welcome!

The “Genesis – Big Data Super Nova and the Journey Back to Privacy and Security” broadcast is about to begin.

This time I decided to share with you my insights and ideas through a story.
So tighten your seatbelt as we jump into hyperspace through time.
It’s now exactly 37 years forward – the morning of March 13, 2050, 11 AM: Earth Standard Galaxy time.

This is chapter 3; here is where you can find the first chapter of this Big Data sci-fi novella (Eve).

Chapter 3: Apple


Listen to or download the audio of this chapter here:

It was the 1st of April 2013.

One year before, there had been rumors that the world was going to end, that is, if you believed the ancient Mayan literature.

It turned out to be a bit later, and more digital than physical.

Back then, social networks such as “Facebook” and “Twitter” allowed people to share their thoughts and feelings through external personal outer-body devices.
People called those cumbersome ugly devices “iPhone” and “Android”.

Everyone was hooked on those networks. Companies started letting people access services based on their always-on authentication to the social networks.
But people were using “pass-words” to connect to the social networks. They chose easy pass-words and did not bother to replace them occasionally.

So if you found out someone’s social network password (let’s say on Twitter), you also got access to any other service they had authorized (via OAuth) to log on to, based on the authentication tokens stored in their Twitter account.

Basically, if you had someone’s Twitter password, you could buy stuff and have them pay for it, look at their medical records, and review their personal assets. You actually became them.

When the big Digital Tsunami happened, it started with a huge surge of credit card and bank transaction fraud. Then, as money poured into the attackers’ bank accounts (many were used, so you could not track a single source), it spread outward as a massive purchase wave.

Amazon, eBay and similar services were hammered with endless waves of purchases, and then with rebounding waves of cancellations, as everyone tried to cancel the fraudulent purchases.
There were similar attacks on health, government, manufacturing and other essential services, all aimed at making them useless.

Later on, this kind of attack was labeled an “Application-Based Denial of Service”: instead of driving lots of requests at a service, you overload the service’s remediation system with transactions that require huge effort to remediate (such as reversing a fraudulent transaction).

There was lots of confusion and a slow response in addressing the global, worldwide break-in, because of misleading reports that it was all an April 1st joke.

In less than one hour, all the financial institutions, digital merchants and many other critical systems were completely ruined.

It took four months to get most of the damage fixed, but by then many lives had been lost.
You could die because you could not get food or medical treatment, or you could be attacked by desperate people who tried to forcefully take what you had.

No anti-malware system could detect the secret key-loggers that captured everyone’s passwords, because they were part of many operating systems.

The attackers worked for many years, getting to the right people on every operating-system manufacturer’s staff. Then the secret key-logger code was added to the operating system’s codebase. Every developer the attackers recruited had to add or change just a small piece of code, entirely blind to the demonic intention driving all those small changes (a Time-Shifted Attack).

The attackers had even penetrated the NSA (National Security Agency) staff and implanted their own code on top of the homeland-security hooks. Those hooks were originally designed to allow law agencies to access people’s cloud-stored data and communications in case those people were suspects. Now those law-enforcement hooks carried parasite code, wire-tapping everything, ready to spray all this information to the malicious crackers.

It all sat there, silent, waiting, collecting, using Big Data analysis to reveal further access details. Gone were the days of brute-force password cracking: you guess passwords much faster using Big Data analysis.

And then it was time for the software octopus to wake.

It started its ignition on March 13, and it woke up on April 1.

Adam recalled that there was one more important event which took place on the 1st of April 2013.
The “Naturalists Group” was born.

The attack was carried out through Windows, Linux, Oracle and Java infrastructure suppliers, and through cloud providers.
Cloud services failed one by one, and the true flaw of the Cloud concept was revealed: we did not have “The Cloud”. What we had was many disparate cloud services, each with its own weaknesses and a finite amount of resources.

Each cloud provider had a finite, central set of “engines” operating all its services, which could easily be hacked and brought down, essentially making the provider useless.
You really had nowhere to fail over to.

The lack of Data Integration was apparent across Big Data archives, so you couldn’t really see what was going on. We had no “Big Eye in the Sky”, no “Digital Defense Satellite”, watching our data and computers.

That was the day when true “Universal Computing” and the “Neuro-Fibre Net” were born.

Lots of server systems halted that day, and many personal devices got their share of blackout as well.

But most of the victims had the mobile device of choice of that time.

It was produced by Apple.

Get additional chapters by reading the book by Clicking Right Here