Artificial Intelligence, Machine Learning, AI/ML: these terms aren’t new, but they are now becoming more mainstream. I wanted to take some time to walk through the basics; hopefully you’ll enjoy the 13-minute video.
If you prefer to read, please follow the transcript below:
Machine Learning! It feels like the new “Cloud”: a term being thrown around all over the I.T. industry at the moment. However, there are many definitions, many ways to understand it, and for some, it’s just not on the radar yet.
So I’m here to demystify some of it: what is AI, where does ML fit in (in simple terms), how is it helping with some of the challenges of today, and what does it mean for the future of tech and cloud?
It may not seem like it, but something dramatic has happened behind the scenes over the last few decades.
We’ve known for a long time that computers keep getting faster; we’re used to the new MacBook having a faster CPU, or the latest Xbox having more RAM than the previous one. Gordon Moore described this constant rate of improvement in 1965. His prediction was that the number of components on an integrated circuit would double every year (he later revised this to every two years), and that is broadly what has happened ever since.
But what has this got to do with AI?
Before this, in the 1950s, Richard E. Bellman came up with a mathematical equation, now known as the Bellman equation, for breaking a problem into subproblems and then looping over those subproblems to find the optimal solution. This idea, dynamic programming, is the base concept behind many AI/ML algorithms today. But as you can imagine, the compute power of the 1950s couldn’t make the most of it, so we’ve had to wait for Moore’s Law to catch up.
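(A quick aside for the curious: here’s what that idea looks like as a toy Python sketch. The little “move left or right” problem and its rewards are entirely made up for illustration; the point is just the loop, applying the Bellman update over and over until the values settle.)

```python
GAMMA = 0.9  # discount factor (an assumed value, for illustration only)

# A tiny made-up problem: from states 0 and 1 you can move left or right,
# and moving right from state 1 into the goal state 2 pays a reward of 1.
next_state = {0: [0, 1], 1: [0, 2], 2: [2, 2]}   # next_state[s][action]
reward = {0: [0.0, 0.0], 1: [0.0, 1.0], 2: [0.0, 0.0]}

def value_iteration(tol=1e-9):
    # The Bellman update, looped until the values stop changing:
    #   V(s) = max over actions of [ reward + GAMMA * V(next state) ]
    values = {s: 0.0 for s in next_state}
    while True:
        delta = 0.0
        for s in next_state:
            best = max(reward[s][a] + GAMMA * values[next_state[s][a]]
                       for a in (0, 1))
            delta = max(delta, abs(best - values[s]))
            values[s] = best
        if delta < tol:
            return values

print(value_iteration())
```

After a few loops the values converge: state 1 is worth 1.0 (the reward for stepping into the goal) and state 0 is worth 0.9 (one discounted step away).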
Place and Time – Living in the future
Getting back to the present: it’s all about timing. We are in the right place at the right time for AI, with computers only now fast enough and ubiquitous enough to run these extremely complex looping algorithms against vast sets of data. And that’s what ML does: it runs mathematical formulas on lots of data, over and over.
That’s the other thing we didn’t have back in the ’50s and ’60s: as much data. Today, there is data on everything. We bank online, socialise online, shop online and play online. This all creates data. With the pandemic, it has increased even more. All of this data is floating around, waiting to be utilised.
That’s where we are today. Figuring out how to harvest that data for the most impactful result.
The Power of AI
I think some of you will have seen the fairly recent Netflix show The Queen’s Gambit? I loved it. If you’ve watched it, you’ll know you can really feel the pressure on those chess masters as they reach that level of rank and global status. Yet even chess, a complex game designed for complex minds, has been beaten by an AI. Recently? No. It’s been 24 years since Deep Blue’s 1997 win over the reigning world champion, Garry Kasparov.
Even the more complex game, Go, has been beaten. It’s now effectively impossible for any human to beat the best AI at chess or Go. Impossible. How mind-blowing is that!
AI – Smarter than Humans
There are other things an AI can do that humans just can’t, and we can apply the principles to business too. For example, Google used DeepMind’s AI to cut the energy used to cool its datacenters by 40%. A huge saving.
At VMware, we’re absolutely focused on helping our customers derive these kinds of benefits from AI too, in a productised way, by providing ML that continually self-tunes parts of the software-defined datacenter. AI has definitely touched our industry and will continue to do so further into the future.
Facial recognition, self-driving cars, financial analytics, even customer service with chat bots. If you think AI is something we still need to wait for to become mainstream, take a look around: it’s happening right now.
Today, you might prefer talking to a human rather than a chat bot, but just as the AI eventually beat the chess master, chat bots will soon reach the level where you will prefer to speak to the AI. It will be the most efficient way, with the fastest access to information.
Types of AI
Earlier, I described AI/ML as a machine that runs mathematical formulas or algorithms on lots of data over and over. But there are many types of AI.
The Oxford Dictionary defines artificial intelligence as:
“The theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.”
So then, AI itself is very broad.
We have AI in today’s robotics, used for sensing the environment around them and processing the information quickly. I personally love watching YouTube videos of robots being tested out and being pushed over :-).
Then there are Natural Language Processing (NLP) AIs. They take a request, either as text or as audio converted to text, then work on understanding the intent. Think of Alexa or Siri and how far they’ve come over the last few years.
Ever wonder how Amazon knows what you want to buy before you buy it? That’s AI looking at trends: your previous purchases, other people who have bought something similar, along with other details we just wouldn’t spot as humans.
So robotics, chat bots, supply chain forecasting, these are all very real applications of AI today.
Then, currently in the I.T. industry, we’re hearing all the time about amazing things happening with Neural Nets, Deep Learning, Reinforcement Learning and Machine Learning. Sound familiar?
Let’s talk about that area, the popular one currently.
Machine Learning, or ML (the ML part of the AI/ML we always hear about), is the overarching concept of a machine that can assess the success of its own output and change how it processes data, without a human, to produce a better output next time: hence, learning. When it gets something wrong, or sub-optimal, it learns, so that next time it gets a better result.
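To show how small that learn-from-error loop can be, here’s a toy Python sketch (my own illustration, not anyone’s production code): a “model” with a single adjustable number guesses, measures how wrong it was, and nudges itself to do better next time.

```python
# The machine must learn the hidden rule y = 3 * x from examples alone.
data = [(1, 3), (2, 6), (4, 12)]  # (input, correct output) pairs

weight = 0.0          # the model's single adjustable parameter
learning_rate = 0.01

for epoch in range(1000):
    for x, y_true in data:
        y_guess = weight * x                  # the model's current output
        error = y_guess - y_true              # how wrong it was
        weight -= learning_rate * error * x   # adjust to be less wrong next time

print(round(weight, 3))  # the weight ends up at (essentially) 3.0
```

No human told it the answer was 3; it got there purely by repeatedly measuring its own error and adjusting.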
We can break down ML further too, there are various concepts as part of Machine Learning, such as:
Making connections. To process data, you need an input, and you get an output; in between sits the processing. In ML, that bit between the input and the output is called a Neural Net. It’s all done via algorithms, but the idea is that it mimics the neurons in your brain.
Then we take the Neural Net concept and supercharge it, making it more like a human brain by stacking many layers of neural networks together, which improves the output dramatically. That’s Deep Learning.
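To make that concrete, here’s a bare-bones Python sketch of a two-layer forward pass. The weights are arbitrary numbers I’ve made up purely for illustration; in a real network they would be learned, not hand-written.

```python
import math

def sigmoid(z):
    # The "squashing" function, loosely mimicking a neuron firing or not.
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights, biases):
    # Each output neuron is a weighted sum of all inputs, squashed.
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Two stacked layers = a (very small) deep network. All numbers are made up.
hidden_w = [[0.5, -0.2], [0.3, 0.8]]
hidden_b = [0.1, -0.1]
output_w = [[1.0, -1.0]]
output_b = [0.0]

x = [0.7, 0.2]                      # the input
h = layer(x, hidden_w, hidden_b)    # first layer of "neurons"
y = layer(h, output_w, output_b)    # second layer produces the output
print(y)
```

The “net” really is just the bit between input and output: numbers flow in, get weighted, summed and squashed at each layer, and a number flows out.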
That’s the processing of the data, but where’s the learning?
Reinforcement Learning is really how it learns to complete a task. You give the AI/ML a goal or a KPI, and it keeps trying different things, working towards that goal until it’s reached, reinforcing the correct responses over and over.
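Here’s what that goal-seeking loop can look like in miniature. The three-button game below is entirely made up: the agent doesn’t know the payouts, so it tries actions, collects rewards, and reinforces whichever choice has worked best.

```python
import random

random.seed(42)

# Hidden average payouts; the agent must discover these by trial and error.
true_reward = {"A": 0.2, "B": 0.9, "C": 0.5}

estimates = {a: 0.0 for a in true_reward}  # the agent's learned values
counts = {a: 0 for a in true_reward}

for step in range(2000):
    # Mostly exploit the best-known action; sometimes explore a random one.
    if random.random() < 0.1:
        action = random.choice(list(true_reward))
    else:
        action = max(estimates, key=estimates.get)
    # Noisy reward for taking the action: the feedback against the "KPI".
    reward = true_reward[action] + random.gauss(0, 0.1)
    counts[action] += 1
    # Reinforce: move the estimate for that action toward the observed reward.
    estimates[action] += (reward - estimates[action]) / counts[action]

print(max(estimates, key=estimates.get))  # the agent settles on "B"
```

Nobody told it “B” pays best; repeated trial, feedback and reinforcement got it there, which is the whole idea.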
Generative Adversarial Networks
It sounds very nerdy, so I’m going to describe it in a more exciting way: Two bots learn by fighting it out!
Generative Adversarial Networks (GANs) are a part of AI/ML in which two neural networks compete with each other to achieve a goal.
An interesting thing that’s come out of GANs is the ability to fully generate a photo of a human. One network (the generator) produces images, a different one every time; the other (the discriminator) is shown a mix of real and generated photos and says, for each one, yes it looks real, or no it doesn’t. This happens over and over, at speed, with both networks learning and getting better each time. Eventually the discriminator can’t tell the difference between a generated photo and a real one.
The end result is a photo of a human who doesn’t exist and never has, yet it’s practically impossible to tell it apart from a photo of a real person.
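For the curious, here’s the adversarial loop shrunk down to a toy in Python. Instead of photos, the “real” data is just numbers clustered near 4, the generator is a single adjustable number, and the discriminator is a one-line classifier. Everything here is invented for illustration, but the back-and-forth training is the same idea.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# "Real" data: numbers clustered around 4 (standing in for real photos).
def real_sample():
    return 4.0 + random.gauss(0, 0.5)

mu = 0.0          # generator's one parameter: where it places its fakes
w, b = 0.0, 0.0   # discriminator: D(x) = sigmoid(w * x + b), belief x is real
lr = 0.05

for step in range(3000):
    xr = real_sample()                 # a real example
    xf = mu + random.gauss(0, 0.5)     # a generated (fake) example
    dr = sigmoid(w * xr + b)           # discriminator's verdict on the real one
    df = sigmoid(w * xf + b)           # discriminator's verdict on the fake one

    # Discriminator learns: push D(real) toward 1 and D(fake) toward 0.
    w += lr * ((1 - dr) * xr - df * xf)
    b += lr * ((1 - dr) - df)

    # Generator learns: nudge mu so the discriminator rates fakes as real.
    df = sigmoid(w * xf + b)
    mu += lr * (1 - df) * w

print(round(mu, 2))  # mu has drifted toward the real data's centre, near 4
```

By the end, the generator’s fakes sit right on top of the real data, and the discriminator can no longer separate them, exactly the “can’t tell the difference” end state described above.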
Imagine the possibilities as these GANs improve: an AI-generated 3D person, maybe even living in a VR world, with its own artificial intelligence. How long before we, the “real” humans, can’t tell the difference? This is going to get closer and closer to reality.
To summarise, Neural Nets, Deep Learning, Reinforcement Learning and GANs, all fall into the bucket of AI/ML, used in various combinations.
In my first VMware Cloud Insights article, AI in the DC, written at the beginning of 2021, I talk about the dramatic complexity we’re seeing in the datacenter, from an explosion in data to the ever-increasing complexity of managing everything.
This problem exists in the DC, driven by ever-increasing demand from new applications, but really this type of problem exists everywhere. Everything is digital, and if it’s not, businesses are converting it right now; this is what we call Digital Transformation. Personal records, financial ledgers, historical documents, social media, sports results: the list is truly endless.
We have a vast amount of this data, and how well businesses succeed in the future will certainly depend on how quickly and deeply they adopt AI/ML. Ever wonder why Google, a search engine company, is worth as much as it is? It’s because it owns and controls an extraordinary amount of data.
Cloud & Datacenter
Imagine if you could connect your brain directly into all of your applications, all of the software running the datacenter, everything in your clouds. You would know each service, each function, each application, each tick box, each variable, each string, each switch, each flashing light, the lot of it. You could then change one of those things and see whether performance got better or worse, then adjust. Tick a box in one of the options on your storage configuration, for example, and see if it performs better or not.
Computer software can now do exactly this, but at a scale and speed that is impossible for a human, or a team of humans. Things change almost instantaneously. This is the basis of AI/ML in the datacenter, and initial applications have shown some incredible promise.
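A heavily simplified sketch of that tune-and-measure loop, in Python. The “performance” formula and the setting names are invented for illustration; in a real datacenter the score would come from live telemetry, not a formula.

```python
import random

random.seed(1)

# Stand-in performance metric for a system with two tunable settings.
# In real life this would be a live measurement; here it's a made-up
# formula that happens to peak at cache_size=8, threads=4.
def measure_performance(cache_size, threads):
    return 100 - (cache_size - 8) ** 2 - 2 * (threads - 4) ** 2

config = {"cache_size": 1, "threads": 1}
best = measure_performance(**config)

# The tuning loop: tweak one setting, keep the change if the measured
# performance improved, revert it if not. Repeat, fast, forever.
for step in range(200):
    key = random.choice(list(config))
    old = config[key]
    config[key] = max(1, old + random.choice([-1, 1]))
    score = measure_performance(**config)
    if score > best:
        best = score          # better: keep the change
    else:
        config[key] = old     # worse (or no better): undo it

print(config, best)
```

Within a couple of hundred tweaks it finds the optimal settings, with no human ever ticking a box. Scale the number of knobs up to a real datacenter’s and the case for automation makes itself.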
Looking to the Future
As you can see, there are really an unlimited number of possibilities for this technology. Ignore it all at your peril.
I’m sure some of you have seen that Elon Musk seems terrified by it! I was watching one of his interviews, and he talked about how he tried to put safety guardrails in place: rules, at a global level, for what can and can’t be done with AI technology. His fear was that we would start to lose control, but the leaders didn’t listen, and honestly, Pandora’s box is open now.
Elon has also started a company called Neuralink, which is working on a hardware brain implant that would let you communicate with a computer much faster than you can through a touch screen or keyboard. I guess Elon’s thinking is: if you can’t beat them, join them!
I’m personally excited for the future of AI. Having been in the automation industry for a decade, this looks like the right time to move away from configuring things manually, staring at flashing lights when something breaks, then jumping on conference calls to troubleshoot, followed by fixing config and code. Hopefully we can move towards simply setting a goal or a KPI to keep everything running, applying AI, and never having to look at it again.
Then we can all focus on the better things in life, like new Netflix shows.
Thanks for reading!

Summary
- Timing – The time is now right for AI/ML. The maths and the compute are ready and still improving all the time.
- Power of AI – It can already outsmart humans in many ways.
- Types of AI – Chat bots, supply chain forecasting and even robotics are all becoming commonplace use cases.
- Machine Learning – A subset of AI that processes large amounts of data, learns and improves to give incredible results at scale.
- Neural Nets and Deep Learning process data by mimicking the human brain.
- Reinforcement Learning – Keeps trying and gets better every time, learning from mistakes.
- Generative Adversarial Networks – Two AIs compete with each other to improve an outcome.
- AI/ML Applications – Complexity and Data are everywhere in today’s age and AI/ML can make use of this for cost savings, predictions, faster troubleshooting, natural language processing, the list is endless.
- Cloud & DC – Massive complexity here, big potential to use AI/ML to automate for self-tuning, self-healing, self-driving DCs and Clouds.
- The Future – Brain implants, more automation, more time!