Machine learning by intuition


DISCLAIMER: This blog is fully automated, unmonitored, and does not reflect the views of Garett MacGowan. The ML model may produce content that is offensive to some readers.


This blog post was generated with a (potentially) real article title as the only prompt. A link to the original article is below.


Photo by Kaleidico on Unsplash

Generated: 7/11/2022
Machine learning by intuition – an introduction to the field

Machine learning is still a young and extremely active field. New approaches appear daily, and it is still not too difficult to think you’ve discovered the next big thing. But what makes machine learning a difficult subject to approach is the complexity of all the new models that are introduced, from state-of-the-art deep learning, to the classic perceptron and neural networks, and even statistical methods such as support vector machines (SVMs).

And what makes machine learning even more interesting is that, as a discipline, it sits at the intersection of multiple fields such as statistical learning theory, computer science, control theory, graph theory, optimization theory, machine vision, and pattern recognition. Not only that, but it is influenced by the current state of technology and cannot be defined independently of the technology stack that its applications are built with. Therefore, machine learning and statistics are not separated in most cases; it is always possible to apply statistical methods (linear regression, logistic regression, Bayesian networks, etc.) to new data, and the results of these statistical methods may be fed back ‘naturally’ into training the learning algorithm to further improve the process.

Machine learning and its various flavours can provide a very powerful solution for a number of important questions and problems.

One of the first things that comes to mind when we think about machine learning is the set of applications on the computer science side: computer vision, speech recognition, and recommendation algorithms, to name a few. In a broader sense, there are many applications of machine learning in the social, physical, and biochemical sciences, as well as, of course, in human cognition and natural language processing. However, there is a problem of definition here, since ‘machine learning’ is usually used to describe a broad set of subjects, including many models applied to computer science and statistics, as well as to biology, philosophy, psychology, physics, chemistry, and other sciences.

But why is it important to have a clear understanding of the various models of machine learning? Why should the computer scientist, the biologist, the economist, or the lawyer care about understanding machine learning?

The important thing about machine learning is that there is no limit to the problems it can solve. And this is not a question of a particular problem, a particular task that a model is trained on, or a particular dataset, but rather a question pertaining to the whole of computational complexity, as stated by Toda and Komiyama at the beginning of their book On the difficulty of training deep neural networks.

When I learned about machine learning, I was completely entranced by the idea that you could take a set of simple methods and train them to recognize complex objects, using neural networks, and eventually to understand images. From the beginning, machine learning looked like an interesting enough problem. But when I began studying it, machine learning was already an established field, and once I had learned it, it turned out to be extremely difficult to use new approaches. I could not understand why there was no immediate transfer of what I knew to another problem, and I could not even understand all of machine learning’s theories clearly. I understood all of statistics and had studied computer science for many years. However, I could not see the connection between theory and practice, and I felt unable to perform data analysis. All of the problems came back to me, and I still don’t know why that is.

The truth is that, as a discipline, machine learning is still an extremely new one.

And, of course, we as a community are still very busy trying to understand machine learning. Each of those problems is a fundamental piece of the puzzle; there is still much to learn. The machine learning field is constantly evolving: it is a process of understanding the new approaches that are introduced every year, and it is also a process of improving algorithms, finding approaches for a wider variety of problems, and improving datasets.

In many ways, machine learning is still at the beginning, and the research field of machine learning is growing very quickly. Each new approach is difficult to analyze, understand and apply. However, the more we find ways for approaching machine learning problems with new approaches and methods, the closer we get to solving the biggest mysteries of our day.

Here we’ll look at a very interesting area called unsupervised learning. Supervised learning is the easiest type of machine learning to approach: learning from a set of labelled data is straightforward. Even though all machine learning is supervised in some form, we call anything that can be applied to classify and extract knowledge from data that can be labelled supervised machine learning. And most machine learning books are full of supervised learning approaches.

The biggest problem with supervised learning is that you can’t know whether you’ve got a working model until you use it on new, unseen data. For example, if a classifier works great on your own data but does a terrible job on the new data set you just presented to it, the chances are you haven’t done a very good job! We will call this the transfer problem, since we don’t know whether a model works on data it was not trained on. So, for supervised learning, testing on held-out data is the best measure of success.
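
To make held-out testing concrete, here is a minimal sketch, assuming scikit-learn is installed; the dataset (load_iris) and model (LogisticRegression) are illustrative placeholders rather than anything specific to this post:

```python
# A minimal sketch of held-out evaluation, assuming scikit-learn.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)

# Hold out 25% of the labelled data; the model never sees it in training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Accuracy on the held-out set estimates how the model transfers
# to data it was not trained on.
print("test accuracy:", model.score(X_test, y_test))
```

The point is simply that the score is computed only on samples the model never saw during training, which is the closest we can get to measuring the transfer problem directly.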

But sometimes testing is too difficult, and this is where unsupervised learning comes in.

Unsupervised learning is a very broad area of machine learning in which the model doesn’t require labelled data. In contrast to supervised learning, unsupervised learning only requires that the data come in some sort of representation, such as images or texts. The biggest challenge of unsupervised learning is that the data does not necessarily provide labels against which new samples can be checked.

But we must bear in mind that, in practice, we need unsupervised learning very often. In a typical situation, we require unsupervised learning when we want to see how a data set is formed by some underlying processes, and we also need it to discover knowledge hidden in large data sets.

For example, we can learn about a new social media platform by analyzing the way its users form their connections, or we can use unsupervised learning algorithms to group images of trees by visual similarity without anyone labelling them. We are very often in search of unsupervised learning algorithms.

But what is interesting about unsupervised learning is that, in the end, the model is still trained on a set of samples, whether they are labelled or unlabelled. The final goal is to understand the data and produce something novel that can be further tested. In this way, we know that our algorithm is able to handle data that it hasn’t seen before.

In the case of unsupervised learning, we are not concerned with whether a trained model ‘works’ in the supervised sense, or even with having a model that can be reused. We need this type of approach because we have information about the structure of the datasets that we can use for learning, but we don’t have information about the labels for samples.

There are a large number of unsupervised learning algorithms, most of which operate using statistical inference.

In many ways, unsupervised learning is the opposite of supervised learning.

In supervised learning, when we apply a trained model to a new data set, whether we call it supervised learning or transfer learning, we test it on data that we have already labelled. So we get a test accuracy for the model, which we hope is higher than random. This test accuracy is a measure of the quality of the model.

One way to define accuracy in a test is as the fraction of known labels that the algorithm predicts correctly. Another, more intuitive way to express the same idea is as a correct prediction rate: the fraction of predictions that are correct among all predictions that the model makes.
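
As a small sketch of this measure, here is the rate computed directly; the label arrays are made-up examples:

```python
# A minimal sketch of the accuracy measure described above.
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the known labels."""
    assert len(y_true) == len(y_pred)
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

# 4 of 5 made-up predictions match the known labels.
print(accuracy([0, 1, 1, 0, 1], [0, 1, 0, 0, 1]))  # 0.8
```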

However, if you don’t have any labelled or otherwise known data, you really can’t use supervised learning directly, because the test accuracy measure cannot tell you how well a model behaves without known answers to compare against. Therefore, there isn’t generally a test accuracy measure used in unsupervised learning. There are many different unsupervised learning models, like hidden Markov models, probabilistic neural networks, spectral analysis, and others.

Let us consider the classical example of clustering, a very important unsupervised learning technique. Clustering is a set of different algorithms that you can apply to the data itself to determine the number of clusters in the data and assign each sample to one, usually working from a representation such as a feature vector or a matrix. So, how should we go about clustering?

One of the classical algorithms for clustering is k-means. This is a method for clustering unlabelled data that consists of the following steps (a code sketch follows the list):

Assume we have k points in the data space, the k cluster centres, which are initially unknown.

Assume we have a set of points from the data space clustered in k groups:

Group 1 has {x1, x2, x3}

Group 2 has {y1, y2, y3}

….
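
As a reference for the full loop, here is a minimal sketch of the standard k-means procedure, assuming NumPy is available; the two-blob data set is randomly generated purely for illustration:

```python
# A minimal sketch of standard k-means (Lloyd's algorithm), assuming NumPy.
import numpy as np

def kmeans(X, k, n_iters=100, seed=0):
    rng = np.random.default_rng(seed)
    # Initialize centroids by picking k distinct points from the data.
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        # Assignment step: each point joins the group of its nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each centroid moves to the mean of its group
        # (an empty group keeps its old centroid).
        new_centroids = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):
            break  # converged: centroids stopped moving
        centroids = new_centroids
    return labels, centroids

# Two made-up blobs; k-means should recover them without any labels.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(3, 0.5, (50, 2))])
labels, centroids = kmeans(X, k=2)
print(centroids)
```

The two steps alternate until the centroids stop moving, which is exactly the grouping described above: the groups {x1, x2, x3}, {y1, y2, y3}, and so on are the sets of points currently assigned to each centre.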