Use this quick start guide to gather key information about the Microsoft Designing and Implementing a Data Science Solution on Azure (DP-100) certification exam. This study guide provides the list of objectives and resources that will help you prepare for items on the DP-100 exam. The sample questions will help you identify the type and difficulty level of the questions, and the practice exams will familiarize you with the format and environment of the exam. Refer to this guide carefully before attempting your actual Microsoft MCA Azure Data Scientist certification exam.
The Microsoft Designing and Implementing a Data Science Solution on Azure certification is aimed at candidates who want to build their career in the Microsoft Azure domain. The Microsoft Certified - Azure Data Scientist Associate exam verifies that the candidate possesses fundamental knowledge and proven skills in the Microsoft MCA Azure Data Scientist area.
Microsoft Designing and Implementing a Data Science Solution on Azure Exam Summary:
Exam Name | Microsoft Certified - Azure Data Scientist Associate |
---|---|
Exam Code | DP-100 |
Exam Price | $165 (USD) |
Duration | 120 mins |
Number of Questions | 40-60 |
Passing Score | 700 / 1000 |
Books / Training | DP-100T01-A: Designing and Implementing a Data Science Solution on Azure |
Schedule Exam | Pearson VUE |
Sample Questions | Microsoft Designing and Implementing a Data Science Solution on Azure Sample Questions |
Practice Exam | Microsoft DP-100 Certification Practice Exam |
Microsoft DP-100 Exam Syllabus Topics:
Topic | Details |
---|---|
Design and prepare a machine learning solution (20-25%) | |
Design a machine learning solution | - Determine the appropriate compute specifications for a training workload - Describe model deployment requirements - Select which development approach to use to build or train a model |
Manage an Azure Machine Learning workspace | - Create an Azure Machine Learning workspace - Manage a workspace by using developer tools for workspace interaction - Set up Git integration for source control - Create and manage registries |
Manage data in an Azure Machine Learning workspace | - Select Azure Storage resources - Register and maintain datastores - Create and manage data assets |
Manage compute for experiments in Azure Machine Learning | - Create compute targets for experiments and training - Select an environment for a machine learning use case - Configure attached compute resources, including Azure Synapse Spark pools and serverless Spark compute - Monitor compute utilization |
Explore data, and train models (35-40%) | |
Explore data by using data assets and data stores | - Access and wrangle data during interactive development - Wrangle data interactively with attached Synapse Spark pools and serverless Spark compute |
Create models by using the Azure Machine Learning designer | - Create a training pipeline - Consume data assets from the designer - Use custom code components in designer - Evaluate the model, including responsible AI guidelines |
Use automated machine learning to explore optimal models | - Use automated machine learning for tabular data - Use automated machine learning for computer vision - Use automated machine learning for natural language processing - Select and understand training options, including preprocessing and algorithms - Evaluate an automated machine learning run, including responsible AI guidelines |
Use notebooks for custom model training | - Develop code by using a compute instance - Track model training by using MLflow - Evaluate a model - Train a model by using Python SDK v2 - Use the terminal to configure a compute instance |
Tune hyperparameters with Azure Machine Learning | - Select a sampling method - Define the search space - Define the primary metric - Define early termination options |
Prepare a model for deployment (20-25%) | |
Run model training scripts | - Configure job run settings for a script - Configure compute for a job run - Consume data from a data asset in a job - Run a script as a job by using Azure Machine Learning - Use MLflow to log metrics from a job run - Use logs to troubleshoot job run errors - Configure an environment for a job run - Define parameters for a job |
Implement training pipelines | - Create a pipeline - Pass data between steps in a pipeline - Run and schedule a pipeline - Monitor pipeline runs - Create custom components - Use component-based pipelines |
Manage models in Azure Machine Learning | - Describe MLflow model output - Identify an appropriate framework to package a model - Assess a model by using responsible AI principles |
Deploy and retrain a model (10-15%) | |
Deploy a model | - Configure settings for online deployment - Configure compute for a batch deployment - Deploy a model to an online endpoint - Deploy a model to a batch endpoint - Test an online deployed service - Invoke the batch endpoint to start a batch scoring job |
Apply machine learning operations (MLOps) practices | - Trigger an Azure Machine Learning job, including from Azure DevOps or GitHub - Automate model retraining based on new data additions or data changes - Define event-based retraining triggers |
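The hyperparameter-tuning objectives in the syllabus (select a sampling method, define the search space, define the primary metric, define early termination options) can be illustrated with a minimal, framework-free sketch. The search space, the toy objective, and the patience-based stopping rule below are illustrative assumptions, not Azure ML APIs; in Azure ML SDK v2 these concepts map to constructs such as random sampling, `Choice`/`Uniform` search-space expressions, and early-termination policies on a sweep job.

```python
import random

# Hypothetical search space for two hyperparameters (an assumption for
# illustration only).
SEARCH_SPACE = {
    "learning_rate": (0.001, 0.1),    # continuous range -> uniform sampling
    "batch_size": [16, 32, 64, 128],  # discrete set -> choice sampling
}

def sample_trial(rng):
    """Random sampling: draw one configuration from the search space."""
    lo, hi = SEARCH_SPACE["learning_rate"]
    return {
        "learning_rate": rng.uniform(lo, hi),
        "batch_size": rng.choice(SEARCH_SPACE["batch_size"]),
    }

def run_sweep(objective, max_trials=20, patience=5, seed=0):
    """Sweep with early termination: stop after `patience` trials
    without improvement in the primary metric (higher is better)."""
    rng = random.Random(seed)
    best, best_params, stale = float("-inf"), None, 0
    for _ in range(max_trials):
        params = sample_trial(rng)
        score = objective(params)  # primary metric, e.g. validation accuracy
        if score > best:
            best, best_params, stale = score, params, 0
        else:
            stale += 1
            if stale >= patience:  # early termination policy
                break
    return best, best_params

# Toy objective standing in for a real training run (an assumption):
# scores best near learning_rate ~= 0.05 and batch_size 32.
def toy_objective(p):
    return 1.0 - abs(p["learning_rate"] - 0.05) - 0.001 * abs(p["batch_size"] - 32)

best_score, best_cfg = run_sweep(toy_objective)
print(round(best_score, 3), best_cfg["batch_size"])
```

The same shape carries over to a real sweep: only the sampling, the metric to maximize, and the termination policy change between a local experiment and a managed hyperparameter job.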
To ensure success in the Microsoft MCA Azure Data Scientist certification exam, we recommend the authorized training course, practice tests, and hands-on experience to prepare for the Microsoft Designing and Implementing a Data Science Solution on Azure (DP-100) exam.
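As a small illustration of the deployment objectives (deploy a model to an online endpoint, then test the deployed service), the sketch below builds the kind of JSON scoring request an Azure ML online endpoint commonly accepts. The endpoint URI, key, and `input_data` payload shape are placeholders assumed for illustration; the real values come from your workspace after deployment, and the exact body schema is defined by the deployment's scoring script. The request is constructed but deliberately not sent.

```python
import json
import urllib.request

# Placeholder values -- a real scoring URI and key come from the workspace
# after deployment (assumptions for illustration only).
SCORING_URI = "https://<endpoint-name>.<region>.inference.ml.azure.com/score"
API_KEY = "<endpoint-key>"

def build_scoring_request(rows):
    """Build (but do not send) a scoring request for an online endpoint.

    The "input_data" payload shape is a common convention, not a fixed
    contract; treat it as an example and match your scoring script.
    """
    body = json.dumps({"input_data": rows}).encode("utf-8")
    return urllib.request.Request(
        SCORING_URI,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
        method="POST",
    )

req = build_scoring_request([[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]])
# Against a live endpoint you would send it with urllib.request.urlopen(req).
print(req.get_method(), req.get_header("Content-type"))
```

Testing a freshly deployed service with a hand-built request like this (or with the studio's Test tab) is a quick way to confirm the endpoint, authentication, and input schema agree before wiring the endpoint into an application.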