Glossary

Terminology

When learning about Artificial Intelligence and Machine Learning, you'll come across many common terms and abbreviations. This chapter explains the key terms and their abbreviations to help you better understand the main concepts used in this field.

Key Terms and Abbreviations

  • Accuracy (Acc): A measure of how often the model makes correct predictions, typically used to evaluate classification models.

  • AI Agent: A self-contained computational system designed to interact with its environment through sensors (perception) and actuators (actions). It autonomously processes input data, reasons about its objectives, and takes appropriate actions to achieve specified goals, often adapting to changing conditions and learning over time.

  • Algorithm: A sequence of steps or rules that a computer follows to solve a specific task or problem. In ML, algorithms are used to analyze data, recognize patterns, and create models.

  • Artificial Intelligence (AI): A branch of computer science focused on creating systems capable of performing tasks that normally require human intelligence. This includes processes such as learning from data (machine learning), reasoning to make decisions, understanding natural language, recognizing patterns, and adapting to dynamic environments.

  • Cross-Validation (CV): A technique used to assess how well a model will perform on unseen data. The data is split into multiple parts, and the model is trained and tested on different combinations of these parts.

  • Features: The input variables or attributes in the data that the model uses to make predictions. For example, in predicting house prices, features could include the size, number of rooms, and location of the house.

  • Intelligence: A natural or artificial ability to learn from experiences, understand complex concepts, reason logically, adapt to new situations, and solve problems effectively. It is the foundation of cognitive skills such as memory, problem-solving, and critical thinking.

  • Labels: The output or target variable that the model is trying to predict. In supervised learning, the label is known in the training data. For instance, the label for a house price prediction would be the actual price of the house.

  • Loss Function: A function that measures how far the model's predictions are from the actual results during training. The goal of training is to minimize the loss so that the model's predictions improve (a worked mean-squared-error example follows this list).

  • Machine Learning (ML): A subset of AI that allows computers to learn from data and make predictions or decisions without being explicitly programmed. ML is what powers many AI applications today.

  • Model: The result of an ML algorithm after it has been trained on data. The model represents the learned patterns and is used to make predictions or decisions on new data.

  • Overfitting: When a model learns the training data too well, including noise or random variations, and performs poorly on new data.

  • Reinforcement Learning (RL): A type of ML where an agent learns to make decisions by interacting with an environment. It receives rewards or penalties for its actions and learns to maximize rewards over time.

  • Supervised Learning (SL): A type of ML where the model is trained on labeled data (both inputs and outputs are known). The goal is to predict labels for new inputs; see the end-to-end sketch after this list.

  • Testing Data: A separate set of data used to evaluate how well the trained model performs on unseen data. This ensures that the model can generalize and produce quality results beyond the training data.

  • Training Data: The dataset used to train an ML model. It contains inputs (features) and the correct outputs (labels), from which the model learns the patterns it will later use to make predictions.

  • Turing-Complete Systems: Computing systems or models that have sufficient power to perform any computation that can be expressed algorithmically, given sufficient time and resources.

  • Underfitting: When a model is too simple and doesn't capture the patterns in the data, leading to poor performance on both the training and testing data.

  • Unsupervised Learning (UL): A type of ML where the model is trained on data without labeled outputs. The model tries to find patterns or groupings in the data, such as clustering similar items together.
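
To see how several of these terms fit together in practice, here is a minimal supervised-learning sketch. It is an illustration only: it assumes Python with scikit-learn installed and uses the bundled Iris dataset in place of real data. The terms it exercises are features, labels, training and testing data, model, accuracy, and cross-validation.

```python
# A minimal supervised-learning sketch tying several glossary terms together.
# Assumes scikit-learn is installed; the Iris dataset stands in for real data.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)   # X: features, y: labels

# Training data is used to fit the model; testing data stays unseen until evaluation.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

model = LogisticRegression(max_iter=1000)   # the algorithm that will learn from data
model.fit(X_train, y_train)                 # the fitted object is the trained model

# Accuracy: the share of correct predictions on the held-out testing data.
predictions = model.predict(X_test)
print("Test accuracy:", accuracy_score(y_test, predictions))

# Cross-validation: repeat the train/evaluate cycle on different splits of the data.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print("5-fold CV accuracy:", scores.mean())
```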
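
The Loss Function entry can also be made concrete with a small worked example. The sketch below computes one common loss, mean squared error, for a handful of hypothetical house-price predictions; the numbers are invented purely for illustration.

```python
# Mean squared error (MSE): a common loss function for regression.
# All prices below are hypothetical and serve only to illustrate the calculation.
actual    = [250_000, 310_000, 180_000]   # labels: the true house prices
predicted = [240_000, 330_000, 200_000]   # the model's predictions

# Average of the squared differences; training aims to drive this value down.
mse = sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)
print("Mean squared error:", mse)
```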

These terms form the foundation of AI and ML concepts. Understanding them will make it easier to follow discussions and apply machine learning techniques to real-world problems.
