Amazon SageMaker is a fully managed platform that enables developers and data scientists to build, train, and deploy machine learning models at scale. Here are some common terms and concepts used in Amazon SageMaker:
Algorithm: A set of instructions for solving a problem or performing a task. In the context of machine learning, algorithms are used to learn patterns in data and make predictions or decisions based on that learned knowledge.
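To make the idea concrete, here is a minimal learning algorithm in plain Python: a 1-nearest-neighbour classifier that "learns" patterns simply by memorizing labeled points and predicting the label of the closest one. This is an illustrative sketch, not a SageMaker API; the function and variable names are made up for this example.

```python
def predict_1nn(train_points, train_labels, query):
    """Return the label of the training point closest to `query`."""
    best_index = min(
        range(len(train_points)),
        key=lambda i: sum((a - b) ** 2 for a, b in zip(train_points[i], query)),
    )
    return train_labels[best_index]

# Two clusters of 2-D points with known labels.
points = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.2, 4.9)]
labels = ["low", "low", "high", "high"]

print(predict_1nn(points, labels, (0.3, 0.1)))  # → low
print(predict_1nn(points, labels, (4.8, 5.1)))  # → high
```

Built-in SageMaker algorithms (XGBoost, Linear Learner, and others) follow the same pattern at much larger scale: consume labeled data, build an internal representation, and use it to predict on new inputs.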
Notebook: A web-based, Jupyter-based interactive development environment where users write, run, and debug code in languages such as Python, R, and Scala. Amazon SageMaker notebook instances provide a convenient environment for building, training, and deploying machine learning models.
Training: The process of using a machine learning algorithm to learn patterns in data and improve the accuracy of predictions or decisions. In Amazon SageMaker, training runs on managed compute instances, and large jobs can be distributed across a cluster of instances.
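The core of training, stripped of any framework, is a loop that repeatedly adjusts model parameters to reduce prediction error on a dataset. The plain-Python sketch below fits a line y ≈ w·x + b by gradient descent; SageMaker runs this kind of loop for you on managed compute.

```python
def train_linear(xs, ys, lr=0.01, epochs=2000):
    """Fit y ≈ w*x + b by gradient descent on mean squared error."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # Gradients of mean squared error with respect to w and b.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]   # generated by y = 2x + 1
w, b = train_linear(xs, ys)
print(w, b)  # converges close to 2.0 and 1.0
```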
Inference: The process of applying a trained machine learning model to make predictions or decisions on new data. Amazon SageMaker offers several ways to serve inference, including real-time hosted endpoints, serverless inference, asynchronous inference, and batch transform jobs for offline scoring of large datasets.
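Whatever the serving option, inference itself is just the forward pass of an already-trained model on unseen inputs. A sketch in plain Python, assuming a linear model y = w·x + b whose parameters were learned during training:

```python
def predict(w, b, x):
    """Apply a trained linear model to one new input."""
    return w * x + b

w, b = 2.0, 1.0          # parameters produced by a prior training run
new_inputs = [10.0, -3.0]
predictions = [predict(w, b, x) for x in new_inputs]
print(predictions)  # → [21.0, -5.0]
```

Note that no learning happens here: the parameters are fixed, which is why inference is typically much cheaper per request than training.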
Hyperparameter optimization: The process of finding the best combination of hyperparameters for a machine learning model. Hyperparameters are configuration settings chosen before training begins, such as the learning rate or tree depth, that can strongly influence the model's performance. Amazon SageMaker's automatic model tuning feature lets users specify a range of values for each hyperparameter and automatically searches for the combination that produces the best results.
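The essence of hyperparameter search can be sketched in a few lines: try candidate settings, score each one on held-out data, and keep the best. Here `validation_error` is a hypothetical stand-in for "train a model with these settings and score it"; SageMaker's automatic model tuning does the same search at scale, in parallel, with smarter strategies such as Bayesian optimization.

```python
import itertools

def validation_error(lr, epochs):
    """Toy objective standing in for 'train a model, score it'. Hypothetical."""
    return (lr - 0.1) ** 2 + 0.001 * abs(epochs - 50)

# Exhaustive grid search over two hyperparameters.
grid = itertools.product([0.01, 0.1, 1.0], [10, 50, 100])
best = min(grid, key=lambda p: validation_error(*p))
print(best)  # → (0.1, 50)
```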
Model: The output of training a machine learning algorithm on data, which can be used to make predictions or decisions on new data. In Amazon SageMaker, model artifacts are typically stored in an Amazon S3 bucket and can be deployed to a variety of compute options for inference.
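SageMaker conventionally packages trained models as a `model.tar.gz` artifact in S3. A local sketch of the same idea, using only the Python standard library: serialize the learned parameters, pack them into a tarball, then unpack and restore them as a deployment step would. The file names mimic the convention; this is not SageMaker code.

```python
import json
import os
import tarfile
import tempfile

params = {"w": 2.0, "b": 1.0}       # parameters from a prior training run
workdir = tempfile.mkdtemp()

# Write the parameters and pack them as model.tar.gz (the artifact).
params_path = os.path.join(workdir, "params.json")
with open(params_path, "w") as f:
    json.dump(params, f)
artifact = os.path.join(workdir, "model.tar.gz")
with tarfile.open(artifact, "w:gz") as tar:
    tar.add(params_path, arcname="params.json")

# Later, e.g. at deployment time: unpack the artifact and restore the model.
extract_dir = os.path.join(workdir, "restored")
with tarfile.open(artifact, "r:gz") as tar:
    tar.extractall(extract_dir)
with open(os.path.join(extract_dir, "params.json")) as f:
    restored = json.load(f)
print(restored)  # → {'w': 2.0, 'b': 1.0}
```

Separating the artifact from the serving infrastructure is what lets one trained model be deployed to several different compute targets.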
Endpoint: An HTTPS-accessible, fully managed hosting target for a deployed Amazon SageMaker model, to which users send inference requests and receive predictions in response. SageMaker handles provisioning and scaling the underlying compute, making it easy to get started with machine learning in the cloud.