Machine learning and artificial intelligence are used in our everyday life far more often than we would think, although they entered public discourse only in recent years, when big data moved to the frontline of technology and IT. The two terms are often interchanged or used inaccurately in the press and media, so to clarify the difference it is worth glancing at their history. Although AI sounds the more futuristic of the two, it started its career earlier, in the 1940s, while its theoretical groundwork was laid by the ancient Greek philosopher Aristotle, who devised a method of formal, mechanical reasoning called the syllogism (e.g. all giraffes are animals; all animals have four legs; therefore all giraffes have four legs).
From the 1940s on, creating an artificial human brain was no longer merely a science-fiction premise, and in 1956 artificial intelligence was recognized as an academic discipline. Cold-war-era mathematicians, engineers, psychologists, economists and political scientists began crafting theories and metrics, such as the Turing Test, which probes whether a machine can be said to think: to pass the test, the machine has to hold a conversation that cannot be distinguished from a conversation between two humans.
In very plain words, the goal of AI is to create machines that behave, think, act and react like humans would in given circumstances. As the discipline broadened, sub-disciplines started to unfold: knowledge representation, natural language processing, perception, machine learning and so on.
Machine learning first appeared as a sub-category of artificial intelligence. If we keep depicting AI as a human, machine learning can be portrayed as the part of this human's cognitive system that extracts invariants and patterns from the sea of big data: self-learning algorithms let the machine learn from huge amounts of data.
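To make "extracting patterns from data" concrete, here is a minimal sketch in pure Python: a least-squares line fit, one of the simplest self-learning procedures. The data points and function name are invented for illustration; real systems fit far richer models to far more data.

```python
# A minimal sketch of "learning a pattern from data": an ordinary
# least-squares fit of a line to noisy points, in pure Python.
# The data and names here are invented for illustration.

def fit_line(xs, ys):
    """Return the slope and intercept minimising squared error."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# The machine's "experience": points scattered around y = 2x + 1.
xs = [0, 1, 2, 3, 4]
ys = [1.1, 2.9, 5.2, 6.8, 9.1]
slope, intercept = fit_line(xs, ys)
```

Given only the noisy examples, the algorithm recovers a slope close to 2 and an intercept close to 1, the invariant hidden in the data.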
Deep learning sits at the heart of machine learning. "Subset" is not really an adequate description; it is rather the most rapidly developing part of machine learning, one that enriches machine learning itself, and once enough knowledge has been amassed, a new core's boundaries will appear in the centre of deep learning. But that will be the era of superhuman robots behaving along clear moral and spiritual guidelines and principles.
Machine learning depends on the mass and the quality of data. Consider the example of discerning a crucially important image amongst other images, and we will see what dire complications can follow if we do not have enough data.
Imagine software we made for a self-driving hoover, directed to look for disabled cats in the house and compare them to the (moving) images of other animals. We would have to collect enormous masses of data about cats (shape: four-legged; surface: fluffy, flossy; motion: soft, fluid, pliantly reacting, etc.). The algorithm also needs access to data about objects that are not disabled cats.
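The cat-versus-everything-else task above can be sketched very crudely as a nearest-neighbour classifier over hand-made feature vectors (legs, fluffiness, smoothness of motion). All the feature values and example objects below are invented; a real vision system would learn its features from millions of images rather than three numbers.

```python
# Hypothetical sketch: "cat" vs "not cat" from hand-made feature
# vectors (leg count, fluffiness 0-1, motion smoothness 0-1), using a
# 1-nearest-neighbour rule. The training data is invented.
import math

training = [
    ((4, 0.9, 0.8), "cat"),
    ((4, 0.8, 0.9), "cat"),
    ((2, 0.1, 0.3), "not cat"),   # e.g. a toppled standing lamp
    ((4, 0.2, 0.1), "not cat"),   # e.g. a wooden chair
]

def classify(features):
    """Label an object with the label of its closest training example."""
    _, label = min(training, key=lambda ex: math.dist(ex[0], features))
    return label

print(classify((4, 0.85, 0.7)))   # a fluffy, four-legged, soft mover
```

With only four training examples the classifier is easily fooled; this is exactly the point of the paragraph above, and why the hoover needs enormous masses of data rather than a handful of memories.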
To approach the level of safety this automated hoover should deliver to your house, the software has to collect the same amount of memories, that is, historical data, as a human being who had decades to pick them up. If even a few percent of the critical data is missing, the hoover will not slow down when approaching the cat.
To dissolve the disabled cats’ fears of being exterminated: the rapidly growing volume of online video content about animals takes us closer to the age when a hoover can sip up decades’ worth of human memories in minutes from exabytes of content.
It is very unlikely that you have a disabled cat in your flat, but substitute the hoover with a self-driving car that has to recognize a child grasping an adult’s hand who looks mischievous enough to unexpectedly tear himself away and run out into the road. Or the software has to recognize that a very old lady has started to cross the road and has no intention of stopping.
At the other end of the spectrum, AI and ML have become so common that we do not even notice them: you have been asking Siri to find an open burger joint since 2011, and LinkedIn’s “who should you connect with?” recommendations are based on the clicks you have made on companies, groups, articles, the jobs you were looking for, and the locations from which you signed in.
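The click-based recommendation idea can be sketched in a few lines: compare members by the overlap of the pages they have clicked and suggest the closest match. The member names and click sets below are invented, and LinkedIn's real system uses far richer signals than this toy Jaccard similarity.

```python
# Toy sketch of click-based recommendation: suggest the member whose
# set of clicked pages overlaps yours the most (Jaccard similarity).
# All names and click data are invented for illustration.

clicks = {
    "ann":   {"acme-corp", "ml-group", "data-eng-jobs"},
    "bela":  {"acme-corp", "ml-group", "robotics-news"},
    "cecil": {"gardening", "cooking"},
}

def jaccard(a, b):
    """Overlap of two click sets: |A & B| / |A | B|."""
    return len(a & b) / len(a | b)

def suggest_connection(member):
    """Recommend the other member whose clicks overlap most."""
    others = [m for m in clicks if m != member]
    return max(others, key=lambda m: jaccard(clicks[member], clicks[m]))

print(suggest_connection("ann"))
```

Here "ann" shares two clicked pages with "bela" and none with "cecil", so "bela" is suggested; the real pipeline simply does this kind of comparison over vastly more signals and members.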