The goal of machine learning is to give computers the ability to learn, just as humans do.
Through training, humans come to “understand” an object, and computer scientists are trying to bestow the same ability on computers, hoping that one day AI can truly manifest as intelligent programs.
What’s the point of machine learning?
An electronic brain can run nonstop and react faster than a human. It can also work across many fields and down to fine details where human brains fall short, such as medicine and electronics manufacturing.
However, in the 60 years since the idea of artificial intelligence was first presented at the Dartmouth workshop, machine learning stayed relatively low-key until the last few years.
Although fascinating on many levels, machine learning is still at a rudimentary stage: scientists are still trying to figure out which algorithms fit which problems.
Machine learning is a process that uses algorithms to let computers adapt to new problems on their own, instead of being explicitly programmed for each one. Previously, for decades at least, programmers manually wrote the code for any program to work; NPCs in video games are a good example.
Why has machine learning suddenly become such a hot subject, with investors around the world flooding money into this seemingly sci-fi field?
It has to do with Big Data. In recent years, the quantity of data has grown exponentially, with input coming from both humans and machines. We have moved from the Small Data age of the 1990s, when humans manually entered data, to the 2000s, when Internet users generated data, to the current 2010s, when machines themselves feed data into cloud storage.
The current machine learning paradigm is essentially about training machines on data and letting them try to “identify” it. In this setup, the more data, the better the results.
For example, image recognition is a basic application of machine learning. The CAPTCHAs you see all over the Internet essentially serve as training material for computers to learn questions like “which of these pictures is an apple?”.
However, recognizing whether a picture shows an apple is much harder in practice. As humans, we can tell instantly, even from a distorted picture, that an apple is an apple, thanks to the complex cortices in our brains. Computers can’t; in this respect they are dumber than toddlers.
The current method of telling a computer which picture is an apple is to compute over the pixels. For example, the color of an apple follows a rough pattern (apple red), and so does its size. You can also calculate the ratio of yellow, green, and red in an image to decide whether it shows an apple.
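The color-ratio idea can be sketched in a few lines of Python. This is a toy illustration, not a real apple detector: the coarse color bins and the 0.5 red-ratio threshold are made-up assumptions, and the “image” is just a list of RGB tuples.

```python
# Toy sketch of the color-ratio method: count how many pixels fall into
# coarse red / green / yellow bins, then compare the red ratio to a
# threshold. The bin boundaries and the 0.5 threshold are illustrative
# assumptions, not values from any real classifier.

def color_ratios(pixels):
    """pixels: list of (r, g, b) tuples with values in 0..255."""
    counts = {"red": 0, "green": 0, "yellow": 0, "other": 0}
    for r, g, b in pixels:
        if r > 150 and g > 150 and b < 100:
            counts["yellow"] += 1
        elif r > 150 and g < 100:
            counts["red"] += 1
        elif g > 150 and r < 100:
            counts["green"] += 1
        else:
            counts["other"] += 1
    total = len(pixels)
    return {k: v / total for k, v in counts.items()}

def looks_like_apple(pixels, red_threshold=0.5):
    """Guess 'apple' if at least half the pixels are apple red."""
    return color_ratios(pixels)["red"] >= red_threshold

# A toy 4-pixel "image" that is mostly apple red:
mostly_red = [(200, 30, 40), (210, 50, 60), (190, 40, 30), (30, 200, 40)]
print(looks_like_apple(mostly_red))  # 3 of 4 pixels are red -> True
```

As the next paragraph explains, a rule this crude fails constantly, which is exactly the point: no single hand-written pixel rule generalizes well.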
However, every one of these methods has flaws. If you judge by whether the image is mostly apple red, many other objects share the same color but are completely different things. If you measure size, a well-framed photo gives much higher accuracy than one that is overly zoomed in or out. If you calculate the color ratio, it could be identical for a strawberry picture and an apple picture.
In other words, there does not seem to be a universal algorithm, a “theory of everything”, that covers even one generic class of problems.
Likewise, teaching a computer that a handwritten 2 is a 2 requires breaking the 2D image down into linear strokes (effectively 1D) so the computer can process it. And as more and more “features” enter a problem, machine learning becomes multi-dimensional: 2D, 3D, 4D, even 10D. For example, to decide whether a picture shows an apple, you can use all three methods above as three “features” and plot each image as a point in a coordinate system with one axis per feature. Measuring the three features together gives a middle ground and a more accurate result than any single one. Repeat the process over many examples, rinse and repeat, and eventually you can draw a line through the results; that line is essentially the reference index the computer consults when new inputs arrive.
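One simple way to realize the “plot each image as a point in 3D feature space, then separate the classes” idea is a nearest-centroid classifier. The sketch below assumes each image has already been reduced to three made-up feature values (red ratio, relative size, green ratio); the training numbers are hypothetical, and nearest-centroid is just one of many ways to draw that separating boundary.

```python
# Sketch of the three-feature idea: each image is a point in a 3D feature
# space, and a new input gets the label of whichever class centroid it
# lands closest to (nearest-centroid classification). All feature values
# here are invented for illustration.
import math

def centroid(points):
    """Average of a list of 3D feature tuples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def distance(a, b):
    """Euclidean distance between two feature points."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def train(labeled):
    """labeled: dict of class name -> list of 3D feature tuples."""
    return {label: centroid(pts) for label, pts in labeled.items()}

def predict(centroids, features):
    """Return the class whose centroid is nearest to the input point."""
    return min(centroids, key=lambda lbl: distance(centroids[lbl], features))

# Hypothetical training data: (red_ratio, relative_size, green_ratio).
training = {
    "apple":      [(0.7, 0.5, 0.1), (0.6, 0.6, 0.2), (0.8, 0.4, 0.1)],
    "strawberry": [(0.7, 0.1, 0.2), (0.8, 0.2, 0.1), (0.6, 0.1, 0.3)],
}
model = train(training)
print(predict(model, (0.7, 0.5, 0.15)))  # size separates it: "apple"
```

Note how the two classes share nearly the same red ratio, so no single feature works alone, but the size axis separates them, which is the “middle ground” the combined features provide.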
However, this does not scale to problems with more features, or even the same problem restated with more features: when the variables change, the whole model changes.
So it is very hard for a computer to function entirely on its own; human work is still a must for now.
And that is precisely why machine learning is still being researched.
My personal view is that although data have grown exponentially, the data we have are still rudimentary in a sense.
Computers can still only process images, videos, and other inputs produced by and for humans. I think we need to create new types of input designed specifically for computers to recognize, just as humans recognize things through our own senses.
Things we see in real life are translated automatically by our neurons and form “understandings”. For computers, everything is just 1s and 0s. So perhaps what we need is a new type of data: something computers can understand but we humans cannot, or something computers understand better while remaining intelligible to us.