Understanding k-Medians: Definition, Explanations, Examples & Code
The k-Medians algorithm is a clustering technique used in unsupervised learning. It is a partitioning method of cluster analysis that aims to divide n observations into k clusters based on their median values. Unlike k-Means, which uses the mean of the observations in a cluster, k-Medians uses the median to define the cluster center. This makes the algorithm useful in situations where the mean is not a good representation of the center, such as when the data contain outliers, because the median is more robust to extreme values.
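Since scikit-learn does not ship a k-Medians estimator, the following is a minimal NumPy sketch of a Lloyd-style k-Medians loop; the Manhattan-distance assignment step, the toy data, and k=2 are illustrative assumptions rather than a canonical implementation.

```python
import numpy as np

def k_medians(X, k, n_iter=100, seed=0):
    """Cluster rows of X into k groups using coordinate-wise medians as centers."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each point to the nearest center under Manhattan (L1) distance.
        dists = np.abs(X[:, None, :] - centers[None, :, :]).sum(axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each center as the coordinate-wise median of its assigned points.
        new_centers = np.array([
            np.median(X[labels == j], axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

# Toy example: two well-separated blobs plus one extreme outlier.
X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5, [[100.0, 100.0]]])
labels, centers = k_medians(X, k=2)
print(centers)  # medians stay near the blobs despite the outlier
```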
Understanding k-Nearest Neighbor: Definition, Explanations, Examples & Code
The k-Nearest Neighbor (kNN) algorithm is a simple instance-based algorithm used for both supervised and unsupervised learning. It stores all the available cases and classifies new cases based on a similarity measure. The algorithm is named k-Nearest Neighbor because classification is based on the k nearest neighbors in the training set. kNN is a lazy learning algorithm: it does not build a model during training, and all computation is deferred until a new case needs to be classified.
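As one possible code example, the sketch below uses scikit-learn's KNeighborsClassifier on the Iris dataset; the dataset and the choice of k=5 are illustrative assumptions.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# k=5 neighbors; each new case is classified by a majority vote among
# its 5 closest training cases.
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)
print("test accuracy:", knn.score(X_test, y_test))
```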
Understanding Label Propagation Algorithm: Definition, Explanations, Examples & Code
The Label Propagation Algorithm (LPA) is a graph-based semi-supervised machine learning algorithm that assigns labels to previously unlabeled data points. LPA works by propagating labels from a small subset of initially labeled points to the unlabeled points over the edges of a similarity graph. Unlike the closely related label spreading algorithm, the initial labels are kept fixed throughout the run. LPA is commonly used when labeled data are scarce but the relationships between data points can be represented as a graph.
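A minimal sketch using scikit-learn's LabelPropagation; the Iris dataset, the RBF kernel, and hiding roughly 70% of the labels are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.semi_supervised import LabelPropagation

X, y = load_iris(return_X_y=True)

# Mask most labels: unlabeled points are marked with -1, as the API expects.
rng = np.random.default_rng(0)
y_partial = y.copy()
unlabeled = rng.random(len(y)) < 0.7
y_partial[unlabeled] = -1

lpa = LabelPropagation(kernel="rbf", gamma=20)
lpa.fit(X, y_partial)

# transduction_ holds the labels inferred for every point, labeled or not.
print("accuracy on originally unlabeled points:",
      (lpa.transduction_[unlabeled] == y[unlabeled]).mean())
```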
Understanding Label Spreading: Definition, Explanations, Examples & Code
The Label Spreading algorithm is a graph-based semi-supervised learning method that builds a similarity graph based on the distance between data points. The algorithm then propagates labels throughout the graph and uses this information to classify unlabeled data points.
Label Spreading: Introduction
Domains: Machine Learning
Learning Methods: Semi-Supervised
Type: Graph-based
Label Spreading is a graph-based algorithm for semi-supervised learning. It builds an affinity graph over all data points, spreads label information iteratively along the edges, and, unlike label propagation, allows the initial labels to be partially relabeled, which makes it more robust to label noise.
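A hedged example using scikit-learn's LabelSpreading on the digits dataset; the kNN kernel, alpha=0.2, and the 90% unlabeled split are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.semi_supervised import LabelSpreading

X, y = load_digits(return_X_y=True)

# Keep labels for only a small fraction of points; -1 marks unlabeled samples.
rng = np.random.default_rng(0)
y_partial = y.copy()
unlabeled = rng.random(len(y)) < 0.9
y_partial[unlabeled] = -1

# alpha controls how much an initially labeled point may be relabeled.
model = LabelSpreading(kernel="knn", n_neighbors=7, alpha=0.2)
model.fit(X, y_partial)
print("accuracy on unlabeled points:",
      (model.transduction_[unlabeled] == y[unlabeled]).mean())
```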
Understanding Latent Dirichlet Allocation: Definition, Explanations, Examples & Code
Latent Dirichlet Allocation (LDA) is a Bayesian generative statistical model that allows sets of observations to be explained by unobserved groups that explain why some parts of the data are similar. It is an unsupervised learning algorithm that is used to find latent topics in a document corpus. LDA is widely used in natural language processing and information retrieval to discover the hidden semantic structure of a document collection.
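A small illustrative sketch with scikit-learn's LatentDirichletAllocation on a tiny made-up corpus; the documents and the choice of two topics are assumptions for demonstration only.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "the cat sat on the mat",
    "dogs and cats are pets",
    "the stock market fell sharply",
    "investors sold shares as markets dropped",
    "my dog chased the cat",
]

# LDA works on raw token counts (bag of words), not TF-IDF weights.
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)  # per-document topic proportions

# Show the top words for each latent topic.
terms = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = topic.argsort()[-3:][::-1]
    print(f"topic {k}:", [terms[i] for i in top])
```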
Understanding Learning Vector Quantization: Definition, Explanations, Examples & Code
The Learning Vector Quantization (LVQ) algorithm is a prototype-based supervised classification algorithm. It falls under the category of instance-based machine learning algorithms and operates by classifying input data based on their similarity to previously seen data. LVQ relies on supervised learning, where a training dataset with known class labels is used to train the algorithm.
Learning Vector Quantization: Introduction
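scikit-learn does not include an LVQ estimator, so the following is a minimal NumPy sketch of the basic LVQ1 update rule, in which the winning prototype moves toward same-class points and away from other-class points; the toy data and hyperparameters are illustrative assumptions.

```python
import numpy as np

def lvq1_fit(X, y, prototypes_per_class=1, lr=0.1, epochs=30, seed=0):
    """Minimal LVQ1: adjust prototypes toward or away from each training point."""
    rng = np.random.default_rng(seed)
    protos, proto_labels = [], []
    for c in np.unique(y):
        idx = rng.choice(np.flatnonzero(y == c), size=prototypes_per_class, replace=False)
        protos.append(X[idx])
        proto_labels.extend([c] * prototypes_per_class)
    protos, proto_labels = np.vstack(protos), np.array(proto_labels)

    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            d = np.linalg.norm(protos - X[i], axis=1)
            w = d.argmin()  # winning (closest) prototype
            step = lr * (X[i] - protos[w])
            # Attract if the prototype's class matches, repel otherwise.
            protos[w] += step if proto_labels[w] == y[i] else -step
        lr *= 0.95  # slowly decay the learning rate
    return protos, proto_labels

def lvq1_predict(X, protos, proto_labels):
    d = np.linalg.norm(X[:, None, :] - protos[None, :, :], axis=2)
    return proto_labels[d.argmin(axis=1)]

# Toy two-class data.
X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 3.0])
y = np.array([0] * 50 + [1] * 50)
protos, proto_labels = lvq1_fit(X, y, prototypes_per_class=2)
print("training accuracy:", (lvq1_predict(X, protos, proto_labels) == y).mean())
```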
Understanding Least Absolute Shrinkage and Selection Operator: Definition, Explanations, Examples & Code
The Least Absolute Shrinkage and Selection Operator (LASSO) is a regularization method used in supervised learning. It performs both variable selection and regularization, making it a valuable tool for regression analysis. With LASSO, the algorithm shrinks the less important feature coefficients to zero, effectively selecting only the most relevant features in the model.
Least Absolute Shrinkage and Selection Operator: Introduction
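A brief example using scikit-learn's Lasso on synthetic regression data; the alpha value and the make_regression setup are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso

# Regression problem where only 5 of the 20 features are actually informative.
X, y = make_regression(n_samples=200, n_features=20, n_informative=5,
                       noise=5.0, random_state=0)

# alpha controls the strength of the L1 penalty; larger alpha zeroes out more coefficients.
lasso = Lasso(alpha=1.0)
lasso.fit(X, y)
print("non-zero coefficients:", np.sum(lasso.coef_ != 0))
```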
Understanding Least-Angle Regression: Definition, Explanations, Examples & Code
Least-Angle Regression (LARS) is a regularization algorithm used for high-dimensional data in supervised learning. It is efficient and provides a complete piecewise linear solution path.
Least-Angle Regression: Introduction
Domains: Machine Learning
Learning Methods: Supervised
Type: Regularization
Least-Angle Regression (LARS) is a powerful regression algorithm for high-dimensional data that is both efficient to compute and able to produce the complete piecewise linear path of coefficient solutions.
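A short sketch using scikit-learn's Lars and lars_path on the diabetes dataset; limiting the fit to five non-zero coefficients is an illustrative choice.

```python
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Lars, lars_path

X, y = load_diabetes(return_X_y=True)

# Fit LARS, capping the number of non-zero coefficients it may introduce.
lars = Lars(n_nonzero_coefs=5)
lars.fit(X, y)
print("selected coefficients:", lars.coef_)

# lars_path returns the full piecewise-linear coefficient path.
alphas, _, coef_path = lars_path(X, y, method="lar")
print("path shape (features x steps):", coef_path.shape)
```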
Understanding LightGBM: Definition, Explanations, Examples & Code
LightGBM is an algorithm under Microsoft's Distributed Machine Learning Toolkit. It is a gradient boosting framework that uses tree-based learning algorithms. It is an ensemble-type algorithm that performs supervised learning. LightGBM is designed to be distributed and efficient, offering faster training speed, higher efficiency, lower memory usage, better accuracy, the ability to handle large-scale data, and support for parallel learning.
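A minimal usage sketch with the lightgbm Python package's LGBMClassifier on the breast cancer dataset; the hyperparameter values are illustrative assumptions.

```python
from lightgbm import LGBMClassifier
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small gradient-boosted tree ensemble; num_leaves is LightGBM's main complexity knob.
model = LGBMClassifier(n_estimators=200, learning_rate=0.05, num_leaves=31)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```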
Understanding Locally Estimated Scatterplot Smoothing: Definition, Explanations, Examples & Code
Locally Estimated Scatterplot Smoothing (LOESS) is a regression algorithm that uses local fitting to fit a regression surface to data. It is a supervised learning method that is commonly used in statistics and machine learning. LOESS works by fitting a polynomial function to a small subset of the data, known as a neighborhood, and then using this function to predict the output for a new input. This local fitting gives LOESS the flexibility to capture non-linear relationships without assuming a single global functional form.
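As a sketch, the lowess function from statsmodels performs this kind of local fitting; the synthetic sine data and the frac=0.2 neighborhood size are illustrative assumptions.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

# Noisy non-linear data.
x = np.linspace(0, 10, 200)
y = np.sin(x) + np.random.normal(scale=0.3, size=x.size)

# frac is the fraction of the data used in each local fit (the neighborhood size).
smoothed = lowess(y, x, frac=0.2)  # returns sorted (x, fitted y) pairs
print(smoothed[:5])
```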
Understanding Locally Weighted Learning: Definition, Explanations, Examples & Code
Locally Weighted Learning (LWL) is an instance-based supervised learning algorithm that uses nearest neighbors for predictions. It applies a weighting function that gives more influence to nearby points, making it useful for non-linear regression problems.
Locally Weighted Learning: Introduction
Domains: Machine Learning
Learning Methods: Supervised
Type: Instance-based
Locally Weighted Learning, or LWL, defers model fitting until a prediction is requested: for each query point it fits a model to nearby training examples, weighted by their distance to the query.
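A minimal NumPy sketch of one common form of locally weighted learning, locally weighted linear regression with a Gaussian kernel; the bandwidth tau and the toy data are illustrative assumptions.

```python
import numpy as np

def locally_weighted_predict(X_train, y_train, x_query, tau=0.5):
    """Fit a weighted linear regression around x_query and return its prediction."""
    # Gaussian kernel: nearby training points get weights near 1, distant ones near 0.
    w = np.exp(-np.sum((X_train - x_query) ** 2, axis=1) / (2 * tau ** 2))
    A = np.hstack([np.ones((len(X_train), 1)), X_train])   # add intercept column
    W = np.diag(w)
    # Solve the weighted least-squares normal equations for this single query point.
    theta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y_train)
    return np.r_[1.0, x_query] @ theta

# Toy 1-D non-linear data.
rng = np.random.default_rng(0)
X = rng.uniform(0, 6, size=(100, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=100)

print(locally_weighted_predict(X, y, np.array([3.0])))
print(np.sin(3.0))  # compare with the true underlying function
```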
Understanding Long Short-Term Memory Network: Definition, Explanations, Examples & Code
The Long Short-Term Memory Network (LSTM) is a type of deep learning algorithm capable of learning order dependence in sequence prediction problems. As a type of recurrent neural network, LSTM is particularly useful in tasks that require the model to remember and selectively forget information over an extended period. LSTM is trained using supervised learning methods and is useful in a wide range of natural language processing and other sequence modeling tasks.
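A small illustrative sketch using TensorFlow/Keras (an assumed framework, not the only option) to fit an LSTM on a toy next-value prediction task over a sine wave; the layer sizes and training settings are illustrative assumptions.

```python
import numpy as np
from tensorflow import keras

# Toy task: predict the next value of a sine wave from the previous 20 steps.
series = np.sin(np.linspace(0, 100, 2000))
window = 20
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]
X = X[..., np.newaxis]  # shape (samples, timesteps, features)

model = keras.Sequential([
    keras.Input(shape=(window, 1)),
    keras.layers.LSTM(32),   # memory cells that carry state across the sequence
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print("predicted:", float(model.predict(X[:1], verbose=0)[0, 0]), "actual:", float(y[0]))
```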
Understanding M5: Definition, Explanations, Examples & Code
M5 is a tree-based machine learning method that falls under the category of decision trees. It is primarily used for supervised learning and produces either a decision tree or a tree of regression models in the form of simple linear functions.
M5: Introduction
Domains: Machine Learning
Learning Methods: Supervised
Type: Decision Tree
M5 is a powerful decision tree-based machine learning algorithm that is commonly used for regression: it grows a tree whose leaves contain simple linear regression models rather than constant values.
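Standard Python libraries do not provide M5 directly, so the following is a toy sketch of the model-tree idea behind it: splits are chosen by variance reduction and each leaf holds a simple linear regression. M5's smoothing and pruning steps are omitted, and all thresholds and depth limits are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def build_model_tree(X, y, min_samples=30, depth=0, max_depth=3):
    """Tiny M5-style model tree: split on variance reduction, fit linear models at leaves."""
    if depth >= max_depth or len(y) < min_samples:
        return LinearRegression().fit(X, y)  # leaf = simple linear function
    best = None
    for j in range(X.shape[1]):
        for t in np.quantile(X[:, j], [0.25, 0.5, 0.75]):
            left = X[:, j] <= t
            if left.sum() < 5 or (~left).sum() < 5:
                continue
            # Weighted variance of the two children; lower means a better split.
            score = left.sum() * y[left].var() + (~left).sum() * y[~left].var()
            if best is None or score < best[0]:
                best = (score, j, t)
    if best is None:
        return LinearRegression().fit(X, y)
    _, j, t = best
    left = X[:, j] <= t
    return (j, t,
            build_model_tree(X[left], y[left], min_samples, depth + 1, max_depth),
            build_model_tree(X[~left], y[~left], min_samples, depth + 1, max_depth))

def predict_one(node, x):
    if isinstance(node, LinearRegression):
        return node.predict(x.reshape(1, -1))[0]
    j, t, left, right = node
    return predict_one(left if x[j] <= t else right, x)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(400, 2))
y = np.where(X[:, 0] > 0, 2 * X[:, 1] + 1, -X[:, 1])  # piecewise-linear target
tree = build_model_tree(X, y)
print(predict_one(tree, np.array([1.0, 1.0])))  # should be close to 3.0
```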
Understanding Mini-Batch Gradient Descent: Definition, Explanations, Examples & Code
Mini-Batch Gradient Descent is an optimization algorithm used in the field of machine learning. It is a variation of the gradient descent algorithm that splits the training dataset into small batches. These batches are then used to calculate the model's error and update its coefficients. Mini-Batch Gradient Descent is used to minimize the cost function of a model and is a commonly used algorithm in deep learning.
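A minimal NumPy sketch of mini-batch gradient descent for linear regression with a mean-squared-error cost; the learning rate, batch size, and synthetic data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=1000)

w = np.zeros(3)
lr, batch_size, epochs = 0.1, 32, 20

for epoch in range(epochs):
    perm = rng.permutation(len(X))             # shuffle once per epoch
    for start in range(0, len(X), batch_size):
        idx = perm[start:start + batch_size]   # one mini-batch
        Xb, yb = X[idx], y[idx]
        grad = 2 * Xb.T @ (Xb @ w - yb) / len(idx)  # gradient of MSE on the batch
        w -= lr * grad                          # update the coefficients

print("estimated weights:", w)   # should be close to [2.0, -1.0, 0.5]
```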
Understanding Mixture Discriminant Analysis: Definition, Explanations, Examples & Code
Mixture Discriminant Analysis (MDA) is a dimensionality reduction method that extends linear and quadratic discriminant analysis by allowing for more complex class conditional densities. It falls under the category of supervised learning algorithms.
Mixture Discriminant Analysis: Introduction
Domains: Machine Learning
Learning Methods: Supervised
Type: Dimensionality Reduction
Mixture Discriminant Analysis models each class as a mixture of several Gaussian components rather than a single Gaussian, which lets it capture class-conditional densities that are too complex for linear or quadratic discriminant analysis.
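scikit-learn has no dedicated MDA estimator, so the sketch below approximates the idea by fitting a GaussianMixture per class and classifying with Bayes' rule; the Iris data and two components per class are illustrative assumptions, and the full MDA formulation (shared covariance, EM over all classes) is not reproduced here.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Model each class-conditional density as a mixture of Gaussians,
# then classify by the class with the highest (unnormalized) posterior.
mixtures, priors = [], []
for c in np.unique(y_train):
    Xc = X_train[y_train == c]
    mixtures.append(GaussianMixture(n_components=2, random_state=0).fit(Xc))
    priors.append(len(Xc) / len(X_train))

log_post = np.column_stack([
    gm.score_samples(X_test) + np.log(p) for gm, p in zip(mixtures, priors)
])
pred = log_post.argmax(axis=1)
print("test accuracy:", (pred == y_test).mean())
```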
Understanding Multidimensional Scaling: Definition, Explanations, Examples & Code
Multidimensional Scaling (MDS) is a dimensionality reduction technique used in unsupervised learning. It is a means of visualizing the level of similarity of individual cases of a dataset in a low-dimensional space.
Multidimensional Scaling: Introduction
Domains: Machine Learning
Learning Methods: Unsupervised
Type: Dimensionality Reduction
Multidimensional Scaling (MDS) is a type of dimensionality reduction that places each observation in a low-dimensional space so that the pairwise distances (or dissimilarities) between points are preserved as closely as possible.
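A short example with scikit-learn's MDS on a subsample of the digits dataset; the 2-D target dimension and the 200-point subsample are illustrative choices.

```python
from sklearn.datasets import load_digits
from sklearn.manifold import MDS

X, _ = load_digits(return_X_y=True)

# Embed the 64-dimensional digit images in 2-D while preserving pairwise distances.
mds = MDS(n_components=2, random_state=0)
X_2d = mds.fit_transform(X[:200])   # subsample: MDS scales poorly with n
print(X_2d.shape)   # (200, 2)
```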
Understanding Multilayer Perceptrons: Definition, Explanations, Examples & Code
A Multilayer Perceptron (MLP) is a type of Artificial Neural Network (ANN) consisting of at least three layers of nodes, namely an input layer, one or more hidden layers, and an output layer. MLP is a powerful algorithm used in supervised learning tasks such as classification and regression. Its ability to efficiently learn complex non-linear relationships and patterns in data makes it a popular choice in the field of machine learning.
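A brief sketch using scikit-learn's MLPClassifier on the digits dataset; the single 64-unit hidden layer and the other settings are illustrative assumptions.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

X, y = load_digits(return_X_y=True)
X = StandardScaler().fit_transform(X)   # MLPs train better on scaled inputs
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One hidden layer of 64 units between the input and output layers.
mlp = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
mlp.fit(X_train, y_train)
print("test accuracy:", mlp.score(X_test, y_test))
```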
Understanding Multinomial Naive Bayes: Definition, Explanations, Examples & Code
Name: Multinomial Naive Bayes
Definition: A variant of the Naive Bayes classifier that is suitable for discrete features.
Type: Bayesian
Learning Methods:
* Supervised Learning
Multinomial Naive Bayes: Introduction
Domains: Machine Learning
Learning Methods: Supervised
Type: Bayesian
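A minimal sketch with scikit-learn's MultinomialNB on a tiny made-up spam/ham corpus; the documents, labels, and alpha=1.0 smoothing value are illustrative assumptions.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Tiny labeled corpus; Multinomial NB works on discrete counts such as word frequencies.
docs = ["free prize money now", "meeting schedule for tomorrow",
        "win a free prize", "project deadline next week"]
labels = [1, 0, 1, 0]   # 1 = spam, 0 = not spam

vectorizer = CountVectorizer().fit(docs)
X = vectorizer.transform(docs)

nb = MultinomialNB(alpha=1.0)   # alpha is the Laplace smoothing parameter
nb.fit(X, labels)
print(nb.predict(vectorizer.transform(["free money tomorrow"])))
```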