Machine learning is an important sub-area of artificial intelligence. It is a field of computer science that enables computers to learn without being explicitly programmed. When exposed to new data, such programs can learn, adapt, and improve on their own.
Arthur Samuel, an American pioneer in the field of computer gaming and artificial intelligence, coined the term "Machine Learning" in 1959 while at IBM.
Artificial neural networks (ANNs) are processing systems, implemented either as algorithms or as physical hardware, that are loosely modeled on the neuronal structure of the mammalian cerebral cortex. Neural networks are used in the branch of machine learning called deep learning.
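A minimal sketch of the software form of such a processing unit, the artificial neuron: a weighted sum of inputs passed through a nonlinear activation function (the toy weights and inputs here are assumptions for illustration).

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum of inputs plus a bias,
    passed through a sigmoid activation, loosely analogous to a biological
    neuron firing more strongly as stimulation increases."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation in (0, 1)

out = neuron([0.5, -1.0, 2.0], [0.4, 0.3, 0.1], bias=-0.1)
print(round(out, 3))
```

Networks stack many such units into layers; deep learning refers to networks with many layers trained end to end.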
A deep learning framework is an interface, library, or tool that allows users to build deep learning models more easily and quickly, without getting into the details of the underlying algorithms. Such libraries are useful for practitioners who want to apply deep learning techniques but lack deep fluency in back-propagation, linear algebra, or numerical computing.
Reinforcement learning is an area of machine learning focusing on developing agents that can learn from their environment over time by taking actions and receiving rewards.
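The action-reward loop above can be sketched with tabular Q-learning, one of the simplest reinforcement learning algorithms. The environment here is an assumption for illustration: a five-state corridor where the agent earns a reward for reaching the rightmost state.

```python
import random

random.seed(0)

# Toy environment (an assumption): a 5-state corridor. The agent starts at
# state 0; action 0 moves left, action 1 moves right. Reaching state 4
# yields reward 1 and ends the episode.
N_STATES, ACTIONS = 5, [0, 1]
alpha, gamma, epsilon = 0.5, 0.9, 0.2

Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action):
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

for episode in range(200):
    s, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit, sometimes explore (ties explored too)
        if random.random() < epsilon or Q[s][0] == Q[s][1]:
            a = random.choice(ACTIONS)
        else:
            a = Q[s].index(max(Q[s]))
        s2, r, done = step(s, a)
        # Q-learning update: move Q(s,a) toward reward + discounted best future value
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# After training, the greedy policy moves right in every non-terminal state.
policy = [Q[s].index(max(Q[s])) for s in range(N_STATES - 1)]
print(policy)
```

Over repeated episodes the reward signal propagates backward through the Q-table, so the agent learns the optimal behavior purely from trial and error.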
Supervised learning is a type of machine learning in which the data is fully labelled and algorithms learn to approximate a mapping function well enough to accurately predict output variables for new input data. For example, a Support Vector Machine (SVM) is a discriminative classifier formally defined by a separating hyperplane, used for both classification and regression tasks.
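A minimal sketch of a linear SVM trained by sub-gradient descent on the regularized hinge loss (Pegasos-style); the toy data is an assumption, and real projects would normally use an optimized library such as scikit-learn or LIBSVM.

```python
import random

random.seed(1)

# Toy labelled data (an assumption): two linearly separable clusters.
# Labels are +1 / -1, as is conventional for SVMs.
data = [((2.0, 2.0), 1), ((3.0, 3.5), 1), ((2.5, 3.0), 1),
        ((-2.0, -1.5), -1), ((-3.0, -2.5), -1), ((-2.5, -3.0), -1)]

w = [0.0, 0.0]
b = 0.0
lam, lr = 0.01, 0.1  # L2 regularization strength and learning rate

for epoch in range(100):
    random.shuffle(data)
    for (x1, x2), y in data:
        margin = y * (w[0] * x1 + w[1] * x2 + b)
        # Sub-gradient of hinge loss max(0, 1 - margin) plus L2 penalty:
        # only points inside the margin push the hyperplane.
        if margin < 1:
            w[0] += lr * (y * x1 - lam * w[0])
            w[1] += lr * (y * x2 - lam * w[1])
            b += lr * y
        else:
            w[0] -= lr * lam * w[0]
            w[1] -= lr * lam * w[1]

def predict(x1, x2):
    # which side of the learned separating hyperplane the point falls on
    return 1 if w[0] * x1 + w[1] * x2 + b >= 0 else -1

pred_pos = predict(2.5, 2.5)
pred_neg = predict(-2.5, -2.5)
print(pred_pos, pred_neg)
```

The hinge loss is what makes this an SVM rather than ordinary logistic regression: it penalizes points that fall inside the margin, pushing the hyperplane toward a large separation between classes.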
A decision tree is a model for supervised learning that can construct a non-linear decision boundary over the feature space. Decision trees are represented as hierarchical models of "decisions" over the feature space, making them powerful models that are also easily interpretable.
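The core of decision-tree learning is the search for a good split over the feature space. The sketch below finds the single best axis-aligned split (a depth-1 tree, or "decision stump"); full tree learners apply this search recursively and score splits with impurity measures such as Gini or entropy rather than raw accuracy. The toy data is an assumption.

```python
# Toy labelled data (an assumption): two classes separable on feature 0.
data = [((1.0, 5.0), 0), ((2.0, 4.5), 0), ((1.5, 6.0), 0),
        ((6.0, 1.0), 1), ((7.0, 2.0), 1), ((6.5, 0.5), 1)]

def best_split(points):
    best = None  # (accuracy, feature index, threshold)
    for f in (0, 1):                      # try each feature
        for (x, _lbl) in points:          # candidate thresholds: observed values
            t = x[f]
            # count matches for the rule "label 1 iff feature >= threshold";
            # max(...) allows the flipped rule as well
            correct = sum(1 for (p, lbl) in points
                          if (p[f] >= t) == (lbl == 1))
            acc = max(correct, len(points) - correct) / len(points)
            if best is None or acc > best[0]:
                best = (acc, f, t)
    return best

acc, feature, threshold = best_split(data)
print(feature, threshold, acc)
```

Because each node asks a simple question about one feature, the whole tree can be read off as a chain of human-interpretable decisions, which is the source of its interpretability.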
Unsupervised learning is a branch of machine learning focused on structuring data that has not been labeled.
In unsupervised machine learning, clustering is the process of grouping similar data points together in order to discover structure in unlabeled data.
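A minimal k-means clustering sketch in pure Python (the toy data and the naive initialization are assumptions; production code would use a library such as scikit-learn):

```python
# Toy unlabeled data (an assumption): two well-separated groups of points.
points = [(1.0, 1.2), (0.8, 1.0), (1.1, 0.9),    # group around (1, 1)
          (8.0, 8.2), (7.9, 8.0), (8.3, 7.8)]    # group around (8, 8)

def dist2(a, b):
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def kmeans(points, k, iters=10):
    centroids = list(points[:k])  # naive init: first k points
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # assignment step: each point joins its nearest centroid
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: dist2(p, centroids[c]))
            clusters[i].append(p)
        # update step: each centroid moves to the mean of its cluster
        for i, cl in enumerate(clusters):
            if cl:
                centroids[i] = (sum(p[0] for p in cl) / len(cl),
                                sum(p[1] for p in cl) / len(cl))
    return centroids, clusters

centroids, clusters = kmeans(points, k=2)
print(sorted(len(c) for c in clusters))
```

Alternating assignment and update steps until the centroids stop moving is the whole algorithm; no labels are ever consulted, which is what makes it unsupervised.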
Ensemble methods are meta-algorithms that combine several machine learning techniques into one predictive model. The purpose is to decrease variance (bagging), decrease bias (boosting), or improve predictive accuracy (stacking).
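A minimal bagging sketch: train several weak models on bootstrap samples of the data and combine them by majority vote. The base learner here is a 1-nearest-neighbour classifier chosen for brevity (an assumption); in practice bagging is most often used with decision trees, as in random forests.

```python
import random

random.seed(3)

# Toy labelled data (an assumption): two clusters with labels 0 and 1.
data = [((1.0, 1.0), 0), ((1.2, 0.8), 0), ((0.9, 1.1), 0),
        ((5.0, 5.0), 1), ((5.2, 4.8), 1), ((4.9, 5.1), 1)]

def nn_predict(sample, x):
    # 1-NN base learner: label of the closest point in this bootstrap sample
    closest = min(sample,
                  key=lambda d: (d[0][0] - x[0]) ** 2 + (d[0][1] - x[1]) ** 2)
    return closest[1]

def bagged_predict(x, n_models=5):
    votes = []
    for _ in range(n_models):
        # bootstrap sample: draw with replacement from the training data
        sample = [random.choice(data) for _ in data]
        votes.append(nn_predict(sample, x))
    return max(set(votes), key=votes.count)  # majority vote

pred_a = bagged_predict((1.1, 0.9))
pred_b = bagged_predict((5.1, 5.0))
print(pred_a, pred_b)
```

Because each model sees a slightly different resampling of the data, their individual errors tend to cancel in the vote, which is how bagging reduces variance.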
Machine learning classification problems become harder when there are too many factors or variables, also called features. When most of the features are correlated or redundant, dimensionality reduction algorithms are used to reduce the number of random variables: some features are selected, while others are extracted (combined into new derived features).
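A minimal feature-extraction sketch using principal component analysis (PCA) with NumPy: project the data onto the directions of greatest variance and drop the rest. The toy data is an assumption, built so that one feature is nearly a copy of another.

```python
import numpy as np

rng = np.random.default_rng(0)

# 100 samples of 3 features (an assumption): the third feature is nearly a
# copy of the first, so the data effectively lives in fewer dimensions.
x = rng.normal(size=(100, 1))
y = rng.normal(size=(100, 1))
data = np.hstack([x, y, x + 0.01 * rng.normal(size=(100, 1))])

centered = data - data.mean(axis=0)           # PCA assumes centered data
cov = np.cov(centered, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)        # eigh returns ascending order
order = np.argsort(eigvals)[::-1]             # sort eigenvalues descending
components = eigvecs[:, order[:2]]            # keep the top 2 directions
reduced = centered @ components               # 100 x 2 projection

explained = eigvals[order][:2].sum() / eigvals.sum()
print(reduced.shape, round(float(explained), 3))
```

Because the redundant third feature contributes almost no independent variance, two principal components retain nearly all of the information in the original three.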
Machine learning models are parameterized to tune their behavior for a given problem. Noise contrastive estimation (NCE) is an estimation principle for parameterized statistical models. NCE is a way of learning a data distribution by comparing it against a defined noise distribution. The technique is used to cast an unsupervised problem as a supervised logistic regression problem. NCE is often used to train neural language models in place of Maximum Likelihood Estimation.
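The casting of density estimation as logistic regression can be sketched on a toy problem (the Gaussian model, uniform noise distribution, and all constants below are assumptions for illustration): the NCE objective rewards the model for scoring real samples above noise samples.

```python
import math
import random

random.seed(4)

def log_sigmoid(z):
    # numerically stable log(sigmoid(z))
    return -math.log1p(math.exp(-z)) if z >= 0 else z - math.log1p(math.exp(z))

NOISE_LO, NOISE_HI = -10.0, 10.0
log_p_noise = -math.log(NOISE_HI - NOISE_LO)   # uniform noise density

def log_p_model(u, mu):
    # UNNORMALIZED Gaussian model with unit variance and mean mu; NCE works
    # with unnormalized densities, which is one of its main attractions
    return -0.5 * (u - mu) ** 2

def nce_loss(data, mu, k=5):
    """Logistic-regression loss for telling data ("real") from k noise
    samples per data point ("fake"), using the log-density ratio as logit."""
    total = 0.0
    for x in data:
        s = log_p_model(x, mu) - log_p_noise - math.log(k)
        total -= log_sigmoid(s)                # classify data as real
        for _ in range(k):
            y = random.uniform(NOISE_LO, NOISE_HI)
            s_n = log_p_model(y, mu) - log_p_noise - math.log(k)
            total -= log_sigmoid(-s_n)         # classify noise as fake
    return total / len(data)

data = [random.gauss(2.0, 1.0) for _ in range(200)]
l_good = nce_loss(data, mu=2.0)    # model mean matches the data
l_bad = nce_loss(data, mu=-5.0)    # model mean far from the data
print(round(l_good, 3), round(l_bad, 3))
```

The loss is much lower when the model's mean matches the data-generating mean, so minimizing it over the parameters recovers the data distribution without ever computing a normalizing constant, which is why NCE is attractive for neural language models with very large output vocabularies.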
Applications
- Automated theorem proving
- Adaptive websites
- Affective computing
- Bioinformatics
- Brain–machine interfaces
- Cheminformatics
- Classifying DNA sequences
- Computational anatomy
- Computer networks
- Computer vision, including object recognition
- Detecting credit card fraud
- Economics
- Financial market analysis
- General game playing
- Information retrieval
- Internet fraud detection
- Insurance
- Linguistics
- Marketing
- Machine learning control
- Machine perception
- Medical diagnosis
- Natural language processing
- Natural language understanding
- Optimization and metaheuristics
- Online advertising
- Recommender systems
- Robot locomotion
- Search engines
- Sentiment analysis (or opinion mining)
- Sequence mining
- Software engineering
- Speech recognition
- Handwriting recognition
- Structural health monitoring
- Syntactic pattern recognition
- Time series forecasting
- User behavior analytics
- Translation
Further Resources
- AWS Machine Learning in Motion, Kesha Williams (Web)
- Graph-Powered Machine Learning, Alessandro Negro (Web)
- How Machine Learning Works, Mostafa Samir Abd El-Fattah (Web)
- Human-in-the-Loop Machine Learning, Robert Munro (Web)