[2025-December-New]Braindump2go MB-230 Exam Questions Free[Q130-Q212]

2025/December Latest Braindump2go MB-230 Exam Dumps with PDF and VCE Free Updated Today! Following are some new Braindump2go MB-230 Real Exam Questions!

QUESTION 130
You are a system administrator for Dynamics 365 for Customer Service.
All child cases must inherit the product, customer name, case title, and case type from the parent case. Parent cases must not be closed until all child cases are closed.
You need to configure cases.
What should you do?

A. Validate that customer and case title fields have not been removed as fields that child cases inherit from parent cases.
Add product and case-type fields to the list.
Set the closure preference setting to Don’t allow parent case closure until all child cases are closed.
B. On the case entity, update the Parent case-Child case 1:N relationship field mapping to include the fields.
Create a business rule on the case entity to prevent the parent from closing if it has one or more open child cases.
C. Create a business rule.
D. Validate that customer and case title fields have not been removed as fields that child cases inherit from the parent cases.
Add product and case-type fields to the list.
The closure preference setting does not need to be changed.
This is default behavior.

Answer: A
Explanation:
https://docs.microsoft.com/en-us/dynamics365/customer-service/define-settings-parent-child-cases

QUESTION 131
A company uses Dynamics 365 Customer Service.
You are configuring the advanced similarity rules. You create a similarity rule on cases and put an exact match for the Modified On field in the Match Fields tab.
You test the rule and discover that exact matches do not appear.
You need to determine why the rule is not working.
What are two possible reasons why the rule is not working? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.

A. A Power Automate flow was not created.
B. The similarity rule is deactivated.
C. The security role is not set to run the similarity rule.
D. The similarity rule was not published.
E. The Modified On field is not set to searchable in the customization of the case entity in the solution.


[2025-November-New]Braindump2go DP-700 Exam Guide Free[Q1-Q60]

2025/November Latest Braindump2go DP-700 Exam Dumps with PDF and VCE Free Updated Today! Following are some new Braindump2go DP-700 Real Exam Questions!

QUESTION 1
Case Study 1 – Contoso, Ltd
Overview. Company Overview
Contoso, Ltd. is an online retail company that wants to modernize its analytics platform by moving to Fabric. The company plans to begin using Fabric for marketing analytics.
Overview. IT Structure
The company’s IT department has a team of data analysts and a team of data engineers that use analytics systems.
The data engineers perform the ingestion, transformation, and loading of data. They prefer to use Python or SQL to transform the data.
The data analysts query data and create semantic models and reports. They are qualified to write queries in Power Query and T-SQL.
Existing Environment. Fabric
Contoso has an F64 capacity named Cap1. All Fabric users are allowed to create items.
Contoso has two workspaces named WorkspaceA and WorkspaceB that currently use Pro license mode.
Existing Environment. Source Systems
Contoso has a point of sale (POS) system named POS1 that uses an instance of SQL Server on Azure Virtual Machines in the same Microsoft Entra tenant as Fabric. The host virtual machine is on a private virtual network that has public access blocked. POS1 contains all the sales transactions that were processed on the company’s website.
The company has a software as a service (SaaS) online marketing app named MAR1. MAR1 has seven entities. The entities contain data that relates to email open rates and interaction rates, as well as website interactions. The data can be exported from MAR1 by calling REST APIs. Each entity has a different endpoint.
Contoso has been using MAR1 for one year. Data from prior years is stored in Parquet files in an Amazon Simple Storage Service (Amazon S3) bucket. There are 12 files that range in size from 300 MB to 900 MB and relate to email interactions.
Existing Environment. Product Data
POS1 contains a product list and related data. The data comes from the following three tables:
– Products
– ProductCategories
– ProductSubcategories
In the data, products are related to product subcategories, and subcategories are related to product categories.
Existing Environment. Azure
Contoso has a Microsoft Entra tenant that has the following mail-enabled security groups:
– DataAnalysts: Contains the data analysts
– DataEngineers: Contains the data engineers
Contoso has an Azure subscription.
The company has an existing Azure DevOps organization and creates a new project for repositories that relate to Fabric.
Existing Environment. User Problems
The VP of marketing at Contoso requires analysis on the effectiveness of different types of email content. It typically takes a week to manually compile and analyze the data. Contoso wants to reduce the time to less than one day by using Fabric.
The data engineering team has successfully exported data from MAR1. The team experiences transient connectivity errors, which cause the data exports to fail.
Requirements. Planned Changes
Contoso plans to create the following two lakehouses:
– Lakehouse1: Will store both raw and cleansed data from the sources
– Lakehouse2: Will serve data in a dimensional model to users for analytical queries
Additional items will be added to facilitate data ingestion and transformation.
Contoso plans to use Azure Repos for source control in Fabric.
Requirements. Technical Requirements
The new lakehouses must follow a medallion architecture by using the following three layers: bronze, silver, and gold. There will be extensive data cleansing required to populate the MAR1 data in the silver layer, including deduplication, the handling of missing values, and the standardizing of capitalization.
Each layer must be fully populated before moving on to the next layer. If any step in populating the lakehouses fails, an email must be sent to the data engineers.
Data imports must run simultaneously, when possible.
The use of email data from the Amazon S3 bucket must meet the following requirements:
– Minimize egress costs associated with cross-cloud data access.
– Prevent saving a copy of the raw data in the lakehouses.
Items that relate to data ingestion must meet the following requirements:
– The items must be source controlled alongside other workspace items.
– Ingested data must land in the bronze layer of Lakehouse1 in the Delta format.
– No changes other than changes to the file formats must be implemented before the data lands in the bronze layer.
– Development effort must be minimized and a built-in connection must be used to import the source data.
– In the event of a connectivity error, the ingestion processes must attempt the connection again.
Lakehouses, data pipelines, and notebooks must be stored in WorkspaceA. Semantic models, reports, and dataflows must be stored in WorkspaceB.
Once a week, old files that are no longer referenced by a Delta table log must be removed.
Requirements. Data Transformation
In the POS1 product data, ProductID values are unique. The product dimension in the gold layer must include only active products from the product list. Active products are identified by an IsActive value of 1.
Some product categories and subcategories are NOT assigned to any product. They are NOT analytically relevant and must be omitted from the product dimension in the gold layer.
Requirements. Data Security
Security in Fabric must meet the following requirements:
– The data engineers must have read and write access to all the lakehouses, including the underlying files.
– The data analysts must only have read access to the Delta tables in the gold layer.
– The data analysts must NOT have access to the data in the bronze and silver layers.
– The data engineers must be able to commit changes to source control in WorkspaceA.
You need to ensure that the data analysts can access the gold layer lakehouse.
What should you do?

A. Add the DataAnalyst group to the Viewer role for WorkspaceA.
B. Share the lakehouse with the DataAnalysts group and grant the Build reports on the default semantic model permission.
C. Share the lakehouse with the DataAnalysts group and grant the Read all SQL Endpoint data permission.
D. Share the lakehouse with the DataAnalysts group and grant the Read all Apache Spark permission.


[2025-November-New]Braindump2go GH-300 Exam Prep Free[Q1-Q30]

2025/November Latest Braindump2go GH-300 Exam Dumps with PDF and VCE Free Updated Today! Following are some new Braindump2go GH-300 Real Exam Questions!

QUESTION 1
What method can a developer use to generate sample data with GitHub Copilot? (Each correct answer presents part of the solution. Choose two.)

A. Utilizing GitHub Copilot’s ability to create fictitious information from patterns in training data.
B. Leveraging GitHub Copilot’s ability to independently initiate and manage data storage services.
C. Utilize GitHub Copilot’s capability to directly access and use databases to create sample data.
D. Leveraging GitHub Copilot’s suggestions to create data based on API documentation in the repository.

Answer: AD
Explanation:
GitHub Copilot can generate sample data by creating fictitious information based on patterns in its training data and by using suggestions based on API documentation within the repository.
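As a concrete illustration of option A, a developer can write a descriptive comment and let Copilot propose fictitious records that match the requested shape. The Python sketch below is hypothetical: the field names and values are invented for illustration and simply show the kind of pattern-based sample data Copilot tends to suggest.

# Prompt comment a developer might type; Copilot can complete the list below with
# fictitious records inferred from patterns in its training data (illustrative only).
# Generate five sample customers with id, name, email, and signup_date.
from datetime import date

sample_customers = [
    {"id": 1, "name": "Alice Johnson", "email": "alice.johnson@example.com", "signup_date": date(2024, 1, 15)},
    {"id": 2, "name": "Bruno Silva", "email": "bruno.silva@example.com", "signup_date": date(2024, 2, 3)},
    {"id": 3, "name": "Chen Wei", "email": "chen.wei@example.com", "signup_date": date(2024, 3, 21)},
]

for customer in sample_customers:
    print(customer)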

QUESTION 2
What are the potential risks associated with relying heavily on code generated from GitHub Copilot? (Each correct answer presents part of the solution. Choose two.)

A. GitHub Copilot may introduce security vulnerabilities by suggesting code with known exploits.
B. GitHub Copilot may decrease developer velocity by requiring too much time in prompt engineering.
C. GitHub Copilot’s suggestions may not always reflect best practices or the latest coding standards.
D. GitHub Copilot may increase development lead time by providing irrelevant suggestions.


[2025-November-New]Braindump2go DP-600 Dumps PDF Free[Q1-Q51]

2025/November Latest Braindump2go DP-600 Exam Dumps with PDF and VCE Free Updated Today! Following are some new Braindump2go DP-600 Real Exam Questions!

QUESTION 1
Case Study 1 – Contoso
Overview
Contoso, Ltd. is a US-based health supplements company. Contoso has two divisions named Sales and Research. The Sales division contains two departments named Online Sales and Retail Sales. The Research division assigns internally developed product lines to individual teams of researchers and analysts.
Existing Environment
Identity Environment
Contoso has a Microsoft Entra tenant named contoso.com. The tenant contains two groups named ResearchReviewersGroup1 and ResearchReviewersGroup2.
Data Environment
Contoso has the following data environment:
– The Sales division uses a Microsoft Power BI Premium capacity.
– The semantic model of the Online Sales department includes a fact table named Orders that uses Import mode. In the system of origin, the OrderID value represents the sequence in which orders are created.
– The Research department uses an on-premises, third-party data warehousing product.
– Fabric is enabled for contoso.com.
– An Azure Data Lake Storage Gen2 storage account named storage1 contains Research division data for a product line named Productline1. The data is in the Delta format.
– A Data Lake Storage Gen2 storage account named storage2 contains Research division data for a product line named Productline2. The data is in the CSV format.
Requirements
Planned Changes
Contoso plans to make the following changes:
– Enable support for Fabric in the Power BI Premium capacity used by the Sales division.
– Make all the data for the Sales division and the Research division available in Fabric.
– For the Research division, create two Fabric workspaces named Productline1ws and Productline2ws.
– In Productline1ws, create a lakehouse named Lakehouse1.
– In Lakehouse1, create a shortcut to storage1 named ResearchProduct.
Data Analytics Requirements
Contoso identifies the following data analytics requirements:
– All the workspaces for the Sales division and the Research division must support all Fabric experiences.
– The Research division workspaces must use a dedicated, on-demand capacity that has per-minute billing.
– The Research division workspaces must be grouped together logically to support OneLake data hub filtering based on the department name.
– For the Research division workspaces, the members of ResearchReviewersGroup1 must be able to read lakehouse and warehouse data and shortcuts by using SQL endpoints.
– For the Research division workspaces, the members of ResearchReviewersGroup2 must be able to read lakehouse data by using Lakehouse explorer.
– All the semantic models and reports for the Research division must use version control that supports branching.
Data Preparation Requirements
Contoso identifies the following data preparation requirements:
– The Research division data for Productline1 must be retrieved from Lakehouse1 by using Fabric notebooks.
– All the Research division data in the lakehouses must be presented as managed tables in Lakehouse explorer.
Semantic Model Requirements
Contoso identifies the following requirements for implementing and managing semantic models:
– The number of rows added to the Orders table during refreshes must be minimized.
– The semantic models in the Research division workspaces must use Direct Lake mode.
General Requirements
Contoso identifies the following high-level requirements that must be considered for all solutions:
– Follow the principle of least privilege when applicable.
– Minimize implementation and maintenance effort when possible.
You need to ensure that Contoso can use version control to meet the data analytics requirements and the general requirements.
What should you do?

A. Store all the semantic models and reports in Data Lake Gen2 storage.
B. Modify the settings of the Research workspaces to use a GitHub repository.
C. Modify the settings of the Research division workspaces to use an Azure Repos repository.
D. Store all the semantic models and reports in Microsoft OneDrive.

Answer: C
Explanation:
Currently, only Git in Azure Repos is supported.
https://learn.microsoft.com/en-us/fabric/cicd/git-integration/intro-to-git-integration#considerations-and-limitations

QUESTION 2
Case Study 1 – Contoso (refer to the case study in Question 1.)
You need to refresh the Orders table of the Online Sales department. The solution must meet the semantic model requirements.
What should you include in the solution?

A. an Azure Data Factory pipeline that executes a Stored procedure activity to retrieve the maximum value of the OrderID column in the destination lakehouse
B. an Azure Data Factory pipeline that executes a Stored procedure activity to retrieve the minimum value of the OrderID column in the destination lakehouse
C. an Azure Data Factory pipeline that executes a dataflow to retrieve the minimum value of the OrderID column in the destination lakehouse
D. an Azure Data Factory pipeline that executes a dataflow to retrieve the maximum value of the OrderID column in the destination lakehouse

Answer: D
Explanation:
We need to retrieve the maximum OrderID value in the destination lakehouse so that the refresh loads only the rows created after the previous load, which minimizes the number of rows added. This is an incremental load and can be implemented with a dataflow.
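To make the incremental pattern concrete, here is a minimal PySpark sketch of the same logic (expressed in a notebook rather than a dataflow, purely to illustrate the reasoning). The table names SalesLakehouse.Orders and staging.Orders are assumptions for illustration and do not come from the case study; spark is the session a Fabric notebook provides.

from pyspark.sql import functions as F

# Highest OrderID already loaded into the assumed destination table.
max_loaded_id = (
    spark.table("SalesLakehouse.Orders")
         .agg(F.max("OrderID").alias("max_id"))
         .collect()[0]["max_id"]
) or 0  # fall back to 0 when the destination is still empty

# Append only the rows created after the last load.
new_orders = spark.table("staging.Orders").filter(F.col("OrderID") > max_loaded_id)
new_orders.write.mode("append").saveAsTable("SalesLakehouse.Orders")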

QUESTION 3
Case Study 1 – Contoso (refer to the case study in Question 1.)
Which syntax should you use in a notebook to access the Research division data for Productline1?

A. spark.read.format("delta").load("Tables/productline1/ResearchProduct")
B. spark.sql("SELECT * FROM Lakehouse1.ResearchProduct")
C. external_table('Tables/ResearchProduct')
D. external_table(ResearchProduct)

Answer: B
Explanation:
The syntax in options C and D applies to KQL databases, which are not used in this scenario. Because the shortcut is created directly under the Tables section, with no additional folders added, the path in option A is incorrect. Once the shortcut is created, the statement in option B accesses the data correctly.
https://learn.microsoft.com/en-us/fabric/onelake/onelake-shortcuts
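For reference, here is a minimal sketch of how a Fabric notebook attached to Lakehouse1 could read the ResearchProduct shortcut, assuming the shortcut surfaces as a table under the Tables section. The first call is the statement from option B; the second is the equivalent table API call.

# Assumes the notebook's default lakehouse is Lakehouse1 and that ResearchProduct
# appears as a Delta table under the Tables section of the lakehouse.
df_sql = spark.sql("SELECT * FROM Lakehouse1.ResearchProduct")  # option B
df_api = spark.read.table("Lakehouse1.ResearchProduct")         # equivalent table API
df_sql.show(5)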

QUESTION 4
Case Study 1 – Contoso (refer to the case study in Question 1.)
Hotspot Question
You need to recommend a solution to group the Research division workspaces.
What should you include in the recommendation? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

Answer:

Explanation:
https://learn.microsoft.com/en-us/fabric/governance/domains#configure-domain-settings

QUESTION 5
Case Study 2 – Litware, Inc
Overview
Litware, Inc. is a manufacturing company that has offices throughout North America. The analytics team at Litware contains data engineers, analytics engineers, data analysts, and data scientists.
Existing Environment
Fabric Environment
Litware has been using a Microsoft Power BI tenant for three years. Litware has NOT enabled any Fabric capacities and features.
Available Data
Litware has data that must be analyzed as shown in the following table.

The Product data contains a single table and the following columns.

The customer satisfaction data contains the following tables:
– Survey
– Question
– Response
For each survey submitted, the following occurs:
– One row is added to the Survey table.
– One row is added to the Response table for each question in the survey.
– The Question table contains the text of each survey question. The third question in each survey response is an overall satisfaction score. Customers can submit a survey after each purchase.
User Problems
The analytics team has large volumes of data, some of which is semi-structured. The team wants to use Fabric to create a new data store.
Product data is often classified into three pricing groups: high, medium, and low. This logic is implemented in several databases and semantic models, but the logic does NOT always match across implementations.
Requirements
Planned Changes
Litware plans to enable Fabric features in the existing tenant. The analytics team will create a new data store as a proof of concept (PoC). The remaining Litware users will get access to the Fabric features only once the PoC is complete. The PoC will be completed by using a Fabric trial capacity.
The following three workspaces will be created:
– AnalyticsPOC: Will contain the data store, semantic models, reports, pipelines, dataflows, and notebooks used to populate the data store
– DataEngPOC: Will contain all the pipelines, dataflows, and notebooks used to populate OneLake
– DataSciPOC: Will contain all the notebooks and reports created by the data scientists
The following will be created in the AnalyticsPOC workspace:
– A data store (type to be decided)
– A custom semantic model
– A default semantic model
– Interactive reports
The data engineers will create data pipelines to load data to OneLake either hourly or daily depending on the data source. The analytics engineers will create processes to ingest, transform, and load the data to the data store in the AnalyticsPOC workspace daily. Whenever possible, the data engineers will use low-code tools for data ingestion. The choice of which data cleansing and transformation tools to use will be at the data engineers’ discretion.
All the semantic models and reports in the Analytics POC workspace will use the data store as the sole data source.
Technical Requirements
The data store must support the following:
– Read access by using T-SQL or Python
– Semi-structured and unstructured data
– Row-level security (RLS) for users executing T-SQL queries
Files loaded by the data engineers to OneLake will be stored in the Parquet format and will meet Delta Lake specifications.
Data will be loaded without transformation in one area of the AnalyticsPOC data store. The data will then be cleansed, merged, and transformed into a dimensional model.
The data load process must ensure that the raw and cleansed data is updated completely before populating the dimensional model.
The dimensional model must contain a date dimension. There is no existing data source for the date dimension. The Litware fiscal year matches the calendar year. The date dimension must always contain dates from 2010 through the end of the current year.
The product pricing group logic must be maintained by the analytics engineers in a single location. The pricing group data must be made available in the data store for T-SQL queries and in the default semantic model. The following logic must be used (see the sketch after this list):
– List prices that are less than or equal to 50 are in the low pricing group.
– List prices that are greater than 50 and less than or equal to 1,000 are in the medium pricing group.
– List prices that are greater than 1,000 are in the high pricing group.
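As a concrete reading of these thresholds, the PySpark fragment below derives a pricing group from a list price. The table name Lakehouse1.Product and the column names ListPrice and PricingGroup are assumptions for illustration only; the case study does not prescribe where or how the logic is implemented.

from pyspark.sql import functions as F

# Illustrative only: classify products into pricing groups by ListPrice.
products = spark.table("Lakehouse1.Product")  # assumed table name
products = products.withColumn(
    "PricingGroup",
    F.when(F.col("ListPrice") <= 50, "low")
     .when(F.col("ListPrice") <= 1000, "medium")  # 50 < ListPrice <= 1,000
     .otherwise("high")                           # ListPrice > 1,000
)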
Security Requirements
Only Fabric administrators and the analytics team must be able to see the Fabric items created as part of the PoC.
Litware identifies the following security requirements for the Fabric items in the AnalyticsPOC workspace:
– Fabric administrators will be the workspace administrators.
– The data engineers must be able to read from and write to the data store. No access must be granted to datasets or reports.
– The analytics engineers must be able to read from, write to, and create schemas in the data store. They also must be able to create and share semantic models with the data analysts and view and modify all reports in the workspace.
– The data scientists must be able to read from the data store, but not write to it. They will access the data by using a Spark notebook.
– The data analysts must have read access to only the dimensional model objects in the data store. They also must have access to create Power BI reports by using the semantic models created by the analytics engineers.
– The date dimension must be available to all users of the data store.
– The principle of least privilege must be followed.
Both the default and custom semantic models must include only tables or views from the dimensional model in the data store. Litware already has the following Microsoft Entra security groups:
– FabricAdmins: Fabric administrators
– AnalyticsTeam: All the members of the analytics team
– DataAnalysts: The data analysts on the analytics team
– DataScientists: The data scientists on the analytics team
– DataEngineers: The data engineers on the analytics team
– AnalyticsEngineers: The analytics engineers on the analytics team
Report Requirements
The data analysts must create a customer satisfaction report that meets the following requirements:
– Enables a user to select a product to filter customer survey responses to only those who have purchased that product.
– Displays the average overall satisfaction score of all the surveys submitted during the last 12 months up to a selected date.
– Shows data as soon as the data is updated in the data store.
– Ensures that the report and the semantic model only contain data from the current and previous year.
– Ensures that the report respects any table-level security specified in the source data store.
– Minimizes the execution time of report queries.
What should you recommend using to ingest the customer data into the data store in the AnalyticsPOC workspace?

A. a stored procedure
B. a pipeline that contains a KQL activity
C. a Spark notebook
D. a dataflow

Answer: D
Explanation:
Although the case study states that data will be loaded without transformation into one area of the AnalyticsPOC data store, dataflows are, in general, used when data transformations follow ingestion. It has also been argued that a pipeline Copy activity would be the optimal solution for this ingestion step.

QUESTION 6
Case Study 2 – Litware, Inc (refer to the case study in Question 5.)
Which type of data store should you recommend in the AnalyticsPOC workspace?

A. a data lake
B. a warehouse
C. a lakehouse
D. an external Hive metastore

Answer: C
Explanation:
The data store must support semi-structured and unstructured data as well as read access by using T-SQL and Python, so a lakehouse is the optimal choice.

QUESTION 7
Case Study 2 – Litware, Inc (refer to the case study in Question 5.)
Hotspot Question
You need to assign permissions for the data store in the AnalyticsPOC workspace. The solution must meet the security requirements.
Which additional permissions should you assign when you share the data store? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

Answer:

QUESTION 8
Case Study 2 – Litware, Inc (refer to the case study in Question 5.)
Hotspot Question
You need to create a DAX measure to calculate the average overall satisfaction score.
How should you complete the DAX code? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

Answer:

QUESTION 9
Case Study 2 – Litware, Inc
Overview
Litware, Inc. is a manufacturing company that has offices throughout North America. The analytics team at Litware contains data engineers, analytics engineers, data analysts, and data scientists.
Existing Environment
Fabric Environment
Litware has been using a Microsoft Power BI tenant for three years. Litware has NOT enabled any Fabric capacities and features.
Available Data
Litware has data that must be analyzed as shown in the following table.

The Product data contains a single table and the following columns.

The customer satisfaction data contains the following tables:
– Survey
– Question
– Response
For each survey submitted, the following occurs:
– One row is added to the Survey table.
– One row is added to the Response table for each question in the survey.
– The Question table contains the text of each survey question. The third question in each survey response is an overall satisfaction score. Customers can submit a survey after each purchase.
User Problems
The analytics team has large volumes of data, some of which is semi-structured. The team wants to use Fabric to create a new data store.
Product data is often classified into three pricing groups: high, medium, and low. This logic is implemented in several databases and semantic models, but the logic does NOT always match across implementations.
Requirements
Planned Changes
Litware plans to enable Fabric features in the existing tenant. The analytics team will create a new data store as a proof of concept (PoC). The remaining Liware users will only get access to the Fabric features once the PoC is complete. The PoC will be completed by using a Fabric trial capacity
The following three workspaces will be created:
– AnalyticsPOC: Will contain the data store, semantic models, reports pipelines, dataflow, and notebooks used to populate the data store
– DataEngPOC: Will contain all the pipelines, dataflows, and notebooks used to populate OneLake
– DataSciPOC: Will contain all the notebooks and reports created by the data scientists
The following will be created in the AnalyticsPOC workspace:
– A data store (type to be decided)
– A custom semantic model
– A default semantic model
– Interactive reports
The data engineers will create data pipelines to load data to OneLake either hourly or daily depending on the data source. The analytics engineers will create processes to ingest, transform, and load the data to the data store in the AnalyticsPOC workspace daily. Whenever possible, the data engineers will use low-code tools for data ingestion. The choice of which data cleansing and transformation tools to use will be at the data engineers’ discretion.
All the semantic models and reports in the AnalyticsPOC workspace will use the data store as the sole data source.
Technical Requirements
The data store must support the following:
– Read access by using T-SQL or Python
– Semi-structured and unstructured data
– Row-level security (RLS) for users executing T-SQL queries
Files loaded by the data engineers to OneLake will be stored in the Parquet format and will meet Delta Lake specifications.
Data will be loaded without transformation in one area of the AnalyticsPOC data store. The data will then be cleansed, merged, and transformed into a dimensional model.
The data load process must ensure that the raw and cleansed data is updated completely before populating the dimensional model.
The dimensional model must contain a date dimension. There is no existing data source for the date dimension. The Litware fiscal year matches the calendar year. The date dimension must always contain dates from 2010 through the end of the current year.
The product pricing group logic must be maintained by the analytics engineers in a single location. The pricing group data must be made available in the data store for T-SQL queries and in the default semantic model. The following logic must be used:
– List prices that are less than or equal to 50 are in the low pricing group.
– List prices that are greater than 50 and less than or equal to 1,000 are in the medium pricing group.
– List prices that are greater than 1,000 are in the high pricing group.
Security Requirements
Only Fabric administrators and the analytics team must be able to see the Fabric items created as part of the PoC.
Litware identifies the following security requirements for the Fabric items in the AnalyticsPOC workspace:
– Fabric administrators will be the workspace administrators.
– The data engineers must be able to read from and write to the data store. No access must be granted to datasets or reports.
– The analytics engineers must be able to read from, write to, and create schemas in the data store. They also must be able to create and share semantic models with the data analysts and view and modify all reports in the workspace.
– The data scientists must be able to read from the data store, but not write to it. They will access the data by using a Spark notebook.
– The data analysts must have read access to only the dimensional model objects in the data store. They also must have access to create Power BI reports by using the semantic models created by the analytics engineers.
– The date dimension must be available to all users of the data store.
– The principle of least privilege must be followed.
Both the default and custom semantic models must include only tables or views from the dimensional model in the data store.
Litware already has the following Microsoft Entra security groups:
– FabricAdmins: Fabric administrators
– AnalyticsTeam: All the members of the analytics team
– DataAnalysts: The data analysts on the analytics team
– DataScientists: The data scientists on the analytics team
– DataEngineers: The data engineers on the analytics team
– AnalyticsEngineers: The analytics engineers on the analytics team
Report Requirements
The data analysts must create a customer satisfaction report that meets the following requirements:
– Enables a user to select a product to filter customer survey responses to only those who have purchased that product.
– Displays the average overall satisfaction score of all the surveys submitted during the last 12 months up to a selected date.
– Shows data as soon as the data is updated in the data store.
– Ensures that the report and the semantic model only contain data from the current and previous year.
– Ensures that the report respects any table-level security specified in the source data store.
– Minimizes the execution time of report queries.
Hotspot Question
You need to resolve the issue with the pricing group classification.
How should you complete the T-SQL statement? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

Answer:

QUESTION 11
You are the administrator of a Fabric workspace that contains a lakehouse named Lakehouse1. Lakehouse1 contains the following tables:
Table1: A Delta table created by using a shortcut
Table2: An external table created by using Spark
Table3: A managed table
You plan to connect to Lakehouse1 by using its SQL endpoint.
What will you be able to do after connecting to Lakehouse1?

A. Read Table3.
B. Update the data Table3.
C. Read Table2.
D. Update the data in Table1.

Answer: A

QUESTION 12
You have a Fabric tenant that contains a warehouse.
You use a dataflow to load a new dataset from OneLake to the warehouse.
You need to add a PowerQuery step to identify the maximum values for the numeric columns.
Which function should you include in the step?

A. Table.MaxN
B. Table.Max
C. Table.Range
D. Table.Profile

Answer: D
Explanation:
https://learn.microsoft.com/en-us/powerquery-m/table-profile

QUESTION 13
You have a Fabric tenant that contains a machine learning model registered in a Fabric workspace.
You need to use the model to generate predictions by using the PREDICT function in a Fabric notebook.
Which two languages can you use to perform model scoring? Each correct answer presents a complete solution.
NOTE: Each correct answer is worth one point.

A. T-SQL
B. DAX
C. Spark SQL
D. PySpark

Answer: CD
Explanation:
https://learn.microsoft.com/en-us/azure/synapse-analytics/machine-learning/tutorial-score-model-predict-spark-pool
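A hedged PySpark sketch of scoring a registered model through PREDICT, assuming the synapse.ml.predict MLFlowTransformer wrapper that ships with the Fabric Spark runtime; the table name, model name, and version below are hypothetical:

from synapse.ml.predict import MLFlowTransformer

df = spark.read.table("new_observations")        # hypothetical table holding the rows to score
model = MLFlowTransformer(
    inputCols=list(df.columns),                  # feature columns passed to the registered model
    outputCol="predictions",                     # column that will hold the scores
    modelName="my-registered-model",             # hypothetical model registered in the workspace
    modelVersion=1
)
display(model.transform(df))

Spark SQL is the other supported surface for PREDICT per the answer; T-SQL and DAX cannot score the model from a notebook.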

QUESTION 14
You are analyzing the data in a Fabric notebook.
You have a Spark DataFrame assigned to a variable named df.
You need to use the Chart view in the notebook to explore the data manually.
Which function should you run to make the data available in the Chart view?

A. displayHTML
B. show
C. write
D. display

Answer: D
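A minimal sketch (the table name is hypothetical) showing why display is the right call: it renders an interactive result grid with a Chart view tab, whereas show only prints plain text:

df = spark.read.table("sales_orders")   # hypothetical Delta table in the attached lakehouse
display(df)                             # interactive grid + Chart view in the Fabric notebook
# df.show() would only print rows as text, so it cannot feed the Chart view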

QUESTION 15
You have a Fabric tenant that contains a Microsoft Power BI report named Report1. Report1 includes a Python visual.
Data displayed by the visual is grouped automatically and duplicate rows are NOT displayed.
You need all rows to appear in the visual.
What should you do?

A. Reference the columns in the Python code by index.
B. Modify the Sort Column By property for all columns.
C. Add a unique field to each row.
D. Modify the Summarize By property for all columns.

Answer: D
Explanation:
By setting the “Summarize By” property to “None” for all columns, you disable automatic aggregation and ensure all rows, including duplicates, are displayed in the Python visual.

QUESTION 16
You have a Fabric workspace named Workspace1 that contains a dataflow named Dataflow1. Dataflow1 has a query that returns 2,000 rows.
You view the query in Power Query as shown in the following exhibit.

What can you identify about the pickupLongitude column?

A. The column has duplicate values.
B. All the table rows are profiled.
C. The column has missing values.
D. There are 935 values that occur only once.

Answer: A

QUESTION 17
You have a Fabric tenant named Tenant1 that contains a workspace named WS1. WS1 uses a capacity named C1 and contains a dataset named DS1.
You need to ensure that read-write access to DS1 is available by using the XMLA endpoint.
What should be modified first?

A. the DS1 settings
B. the WS1 settings
C. the C1 settings
D. the Tenant1 settings

Answer: C
Explanation:
https://learn.microsoft.com/en-us/power-bi/enterprise/service-premium-connect-tools

QUESTION 18
You have a Fabric tenant that contains a workspace named Workspace1. Workspace1 is assigned to a Fabric capacity.
You need to recommend a solution to provide users with the ability to create and publish custom Direct Lake semantic models by using external tools. The solution must follow the principle of least privilege.
Which three actions in the Fabric Admin portal should you include in the recommendation? Each correct answer presents part of the solution.
NOTE: Each correct answer is worth one point.

A. From the Tenant settings, set Allow XMLA Endpoints and Analyze in Excel with on-premises datasets to Enabled.
B. From the Tenant settings, set Allow Azure Active Directory guest users to access Microsoft Fabric to Enabled.
C. From the Tenant settings, select Users can edit data model in the Power BI service.
D. From the Capacity settings, set XMLA Endpoint to Read Write.
E. From the Tenant settings, set Users can create Fabric items to Enabled.
F. From the Tenant settings, enable Publish to Web.

Answer: ACD

QUESTION 19
You are creating a semantic model in Microsoft Power BI Desktop.
You plan to make bulk changes to the model by using the Tabular Model Definition Language (TMDL) extension for Microsoft Visual Studio Code.
You need to save the semantic model to a file.
Which file format should you use?

A. PBIP
B. PBIX
C. PBIT
D. PBIDS

Answer: A
Explanation:
Saving as a PBIP creates one file and two folders; the .Dataset (semantic model) folder contains a definition folder that hosts the .tmdl files.

QUESTION 20
You plan to deploy Microsoft Power BI items by using Fabric deployment pipelines. You have a deployment pipeline that contains three stages named Development, Test, and Production. A workspace is assigned to each stage.
You need to provide Power BI developers with access to the pipeline. The solution must meet the following requirements:
– Ensure that the developers can deploy items to the workspaces for Development and Test.
– Prevent the developers from deploying items to the workspace for Production.
– Follow the principle of least privilege.
Which three levels of access should you assign to the developers? Each correct answer presents part of the solution.
NOTE: Each correct answer is worth one point.

A. Build permission to the production semantic models
B. Admin access to the deployment pipeline
C. Viewer access to the Development and Test workspaces
D. Viewer access to the Production workspace
E. Contributor access to the Development and Test workspaces
F. Contributor access to the Production workspace

Answer: ADE

QUESTION 21
You have a Fabric workspace that contains a DirectQuery semantic model. The model queries a data source that has 500 million rows.
You have a Microsoft Power BI report named Report1 that uses the model. Report1 contains visuals on multiple pages.
You need to reduce the query execution time for the visuals on all the pages.
What are two features that you can use? Each correct answer presents a complete solution.
NOTE: Each correct answer is worth one point.

A. user-defined aggregations
B. automatic aggregation
C. query caching
D. OneLake integration

Answer: AB

QUESTION 22
You have a Fabric tenant that contains 30 CSV files in OneLake. The files are updated daily.
You create a Microsoft Power BI semantic model named Model1 that uses the CSV files as a data source. You configure incremental refresh for Model1 and publish the model to a Premium capacity in the Fabric tenant.
When you initiate a refresh of Model1, the refresh fails after running out of resources.
What is a possible cause of the failure?

A. Query folding is occurring.
B. Only refresh complete days is selected.
C. XMLA Endpoint is set to Read Only.
D. Query folding is NOT occurring.
E. The data type of the column used to partition the data has changed.

Answer: D
Explanation:
https://learn.microsoft.com/en-us/power-bi/connect-data/incremental-refresh-troubleshoot#problem-loading-data-takes-too-long

QUESTION 23
You have a Fabric tenant that uses a Microsoft Power BI Premium capacity.
You need to enable scale-out for a semantic model.
What should you do first?

A. At the semantic model level, set Large dataset storage format to Off.
B. At the tenant level, set Create and use Metrics to Enabled.
C. At the semantic model level, set Large dataset storage format to On.
D. At the tenant level, set Data Activator to Enabled.

Answer: C
Explanation:
https://learn.microsoft.com/en-us/power-bi/enterprise/service-premium-scale-out-configure

QUESTION 24
You have a Fabric tenant that contains a warehouse. The warehouse uses row-level security (RLS).
You create a Direct Lake semantic model that uses the Delta tables and RLS of the warehouse.
When users interact with a report built from the model, which mode will be used by the DAX queries?

A. DirectQuery
B. Dual
C. Direct Lake
D. Import

Answer: A
Explanation:
Row-level security only applies to queries on a Warehouse or SQL analytics endpoint in Fabric. Power BI queries on a warehouse in Direct Lake mode will fall back to Direct Query mode to abide by row-level security.
https://learn.microsoft.com/en-us/fabric/data-warehouse/row-level-security

QUESTION 25
You have a Fabric tenant that contains a complex semantic model. The model is based on a star schema and contains many tables, including a fact table named Sales.
You need to create a diagram of the model. The diagram must contain only the Sales table and related tables.
What should you use from Microsoft Power BI Desktop?

A. data categories
B. Data view
C. Model view
D. DAX query view

Answer: C
Explanation:
In Model view, you can create a new layout that contains only the Sales table and then add its related tables to the layout.

QUESTION 26
You have a Fabric tenant that contains a semantic model. The model uses Direct Lake mode.
You suspect that some DAX queries load unnecessary columns into memory.
You need to identify the frequently used columns that are loaded into memory.
What are two ways to achieve the goal? Each correct answer presents a complete solution.
NOTE: Each correct answer is worth one point.

A. Use the Analyze in Excel feature.
B. Use the Vertipaq Analyzer tool.
C. Query the $System.DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS dynamic management view (DMV).
D. Query the DISCOVER_MEMORYGRANT dynamic management view (DMV).

Answer: BC

QUESTION 27
You have a Fabric tenant that contains a semantic model named Model1. Model1 uses Import mode. Model1 contains a table named Orders. Orders has 100 million rows and the following fields.

You need to reduce the memory used by Model1 and the time it takes to refresh the model.
Which two actions should you perform? Each correct answer presents part of the solution.
NOTE: Each correct answer is worth one point.

A. Split OrderDateTime into separate date and time columns.
B. Replace TotalQuantity with a calculated column.
C. Convert Quantity into the Text data type.
D. Replace TotalSalesAmount with a measure.

Answer: AD

QUESTION 28
You have a Fabric tenant that contains a semantic model.
You need to prevent report creators from populating visuals by using implicit measures.
What are two tools that you can use to achieve the goal? Each correct answer presents a complete solution.
NOTE: Each correct answer is worth one point.

A. Microsoft Power BI Desktop
B. Tabular Editor
C. Microsoft SQL Server Management Studio (SSMS)
D. DAX Studio

Answer: AB
Explanation:
To prevent report creators from populating visuals by using implicit measures in a Power BI semantic model, set the model-level Discourage Implicit Measures property, which you can do from either tool:
1. Tabular Editor: open the model and set the Discourage Implicit Measures property to True.
2. Microsoft Power BI Desktop: in Model view, select the model and enable the Discourage implicit measures property.

QUESTION 29
You have a Fabric tenant that contains a lakehouse named Lakehouse1. Lakehouse1 contains a table named Table1.
You are creating a new data pipeline.
You plan to copy external data to Table1. The schema of the external data changes regularly.
You need the copy operation to meet the following requirements:
– Replace Table1 with the schema of the external data.
– Replace all the data in Table1 with the rows in the external data.
You add a Copy data activity to the pipeline.
What should you do for the Copy data activity?

A. From the Source tab, add additional columns.
B. From the Destination tab, set Table action to Overwrite.
C. From the Settings tab, select Enable staging.
D. From the Source tab, select Enable partition discovery.
E. From the Source tab, select Recursively.

Answer: B
Explanation:
Setting Table action to Overwrite recreates the destination table by using the schema of the incoming data and replaces all the existing rows with the copied rows, which meets both requirements.

QUESTION 30
You have a Fabric tenant that contains a lakehouse.
You plan to query sales data files by using the SQL endpoint. The files will be in an Amazon Simple Storage Service (Amazon S3) storage bucket.
You need to recommend which file format to use and where to create a shortcut.
Which two actions should you include in the recommendation? Each correct answer presents part of the solution.
NOTE: Each correct answer is worth one point.

A. Create a shortcut in the Files section.
B. Use the Parquet format.
C. Use the CSV format.
D. Create a shortcut in the Tables section.
E. Use the delta format.

Answer: BD
Explanation:
Columnar file formats such as Parquet (or ORC) are highly optimized for analytical queries and provide efficient storage and query performance.
Creating the shortcut in the Tables section of the lakehouse exposes the external Amazon S3 data as a table that can be queried through the SQL endpoint.

QUESTION 31
You have a Fabric tenant that contains a lakehouse named Lakehouse1. Lakehouse1 contains a subfolder named Subfolder1 that contains CSV files.
You need to convert the CSV files into the delta format that has V-Order optimization enabled.
What should you do from Lakehouse explorer?

A. Use the Load to Tables feature.
B. Create a new shortcut in the Files section.
C. Create a new shortcut in the Tables section.
D. Use the Optimize feature.

Answer: A
Explanation:
With "Load to Tables", tables are always loaded using the Delta Lake table format with V-Order optimization enabled.
https://learn.microsoft.com/en-us/fabric/data-engineering/load-to-tables#load-to-table-capabilities-overview
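Load to Tables is the no-code route from Lakehouse explorer. For comparison only, a hedged notebook sketch of the same conversion; the folder and table names are hypothetical, and the session switch shown is the documented spark.sql.parquet.vorder.enabled setting:

spark.conf.set("spark.sql.parquet.vorder.enabled", "true")   # assumption: ensure V-Order writing is on for this session

csv_df = spark.read.option("header", "true").csv("Files/Subfolder1")            # hypothetical path to the CSV files
csv_df.write.format("delta").mode("overwrite").saveAsTable("table_from_csv")    # lands as a V-Order optimized Delta table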

QUESTION 32
You have a Fabric tenant that contains a lakehouse named Lakehouse1. Lakehouse1 contains an unpartitioned table named Table1.
You plan to copy data to Table1 and partition the table based on a date column in the source data.
You create a Copy activity to copy the data to Table1.
You need to specify the partition column in the Destination settings of the Copy activity.
What should you do first?

A. From the Destination tab, set Mode to Append.
B. From the Destination tab, select the partition column.
C. From the Source tab, select Enable partition discovery.
D. From the Destination tabs, set Mode to Overwrite.

Answer: D
Explanation:
When setting up the Copy activity, you need to choose the Overwrite mode to make the partition option appear (it is not visible in Append mode).

QUESTION 33
You have source data in a folder on a local computer.
You need to create a solution that will use Fabric to populate a data store. The solution must meet the following requirements:
– Support the use of dataflows to load and append data to the data store.
– Ensure that Delta tables are V-Order optimized and compacted automatically.
Which type of data store should you use?

A. a lakehouse
B. an Azure SQL database
C. a warehouse
D. a KQL database

Answer: A
Explanation:
To meet the requirements of supporting dataflows to load and append data to the data store while ensuring that Delta tables are V-Order optimized and compacted automatically, you should use a lakehouse in Fabric as your solution.

QUESTION 34
You have a Fabric workspace named Workspace1 that contains a dataflow named Dataflow1. Dataflow1 contains a query that returns the data shown in the following exhibit.

You need to transform the data columns into attribute-value pairs, where columns become rows.
You select the VendorID column.
Which transformation should you select from the context menu of the VendorID column?

A. Group by
B. Unpivot columns
C. Unpivot other columns
D. Split column
E. Remove other columns

Answer: C

QUESTION 35
You have a Fabric tenant that contains a data pipeline.
You need to ensure that the pipeline runs every four hours on Mondays and Fridays.
To what should you set Repeat for the schedule?

A. Daily
B. By the minute
C. Weekly
D. Hourly

Answer: C
Explanation:
The only way to do this is to set the schedule to "Weekly", select Monday and Friday as the days, and manually add six times at 4-hour intervals.

QUESTION 36
You have a Fabric tenant that contains a warehouse.
Several times a day, the performance of all warehouse queries degrades. You suspect that Fabric is throttling the compute used by the warehouse.
What should you use to identify whether throttling is occurring?

A. the Capacity settings
B. the Monitoring hub
C. dynamic management views (DMVs)
D. the Microsoft Fabric Capacity Metrics app

Answer: D

QUESTION 37
You have a Fabric tenant that contains a warehouse.
A user discovers that a report that usually takes two minutes to render has been running for 45 minutes and has still not rendered.
You need to identify what is preventing the report query from completing.
Which dynamic management view (DMV) should you use?

A. sys.dm_exec_requests
B. sys.dm_exec_sessions
C. sys.dm_exec_connections
D. sys.dm_pdw_exec_requests

Answer: A
Explanation:
https://learn.microsoft.com/en-us/fabric/data-warehouse/monitor-using-dmv
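A hedged sketch of running the DMV from Python against the warehouse SQL endpoint (SSMS or any T-SQL client works equally well); the pyodbc connection string is hypothetical:

import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=<warehouse-sql-endpoint>;DATABASE=<warehouse>;"
    "Authentication=ActiveDirectoryInteractive"          # hypothetical connection details
)
cursor = conn.cursor()
cursor.execute("""
    SELECT request_id, session_id, status, command, start_time, total_elapsed_time
    FROM sys.dm_exec_requests
    WHERE status IN ('running', 'suspended')   -- long-running or waiting requests
    ORDER BY total_elapsed_time DESC
""")
for row in cursor.fetchall():
    print(row)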

QUESTION 38
You need to create a data loading pattern for a Type 1 slowly changing dimension (SCD).
Which two actions should you include in the process? Each correct answer presents part of the solution.
NOTE: Each correct answer is worth one point.

A. Update rows when the non-key attributes have changed.
B. Insert new rows when the natural key exists in the dimension table, and the non-key attribute values have changed.
C. Update the effective end date of rows when the non-key attribute values have changed.
D. Insert new records when the natural key is a new value in the table.

Answer: AD
Explanation:
Type 1 SCD does not preserve history, therefore no end dates for table entries exists.
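A hedged PySpark sketch of a Type 1 load implemented as a Delta Lake MERGE; the table names, natural key, and attribute columns are hypothetical:

from delta.tables import DeltaTable

dim = DeltaTable.forName(spark, "dim_customer")       # hypothetical Type 1 dimension table
updates = spark.read.table("stage_customer")          # hypothetical staging data

(dim.alias("t")
    .merge(updates.alias("s"), "t.customer_nk = s.customer_nk")     # match on the natural key
    .whenMatchedUpdate(
        condition="t.name <> s.name OR t.country <> s.country",     # update only when non-key attributes changed
        set={"name": "s.name", "country": "s.country"}              # overwrite in place, no history kept
    )
    .whenNotMatchedInsertAll()                                       # insert rows whose natural key is new
    .execute())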

QUESTION 39
You are analyzing customer purchases in a Fabric notebook by using PySpark.
You have the following DataFrames:
– transactions: Contains five columns named transaction_id, customer_id, product_id, amount, and date and has 10 million rows, with each row representing a transaction.
– customers: Contains customer details in 1,000 rows and three columns named customer_id, name, and country.
You need to join the DataFrames on the customer_id column. The solution must minimize data shuffling.
You write the following code.
from pyspark.sql import functions as F
results =
Which code should you run to populate the results DataFrame?

A. transactions.join(F.broadcast(customers), transactions.customer_id == customers.customer_id)
B. transactions.join(customers, transactions.customer_id == customers.customer_id).distinct()
C. transactions.join(customers, transactions.customer_id == customers.customer_id)
D. transactions.crossJoin(customers).where(transactions.customer_id == customers.customer_id)

Answer: A
Explanation:
https://sparkbyexamples.com/spark/broadcast-join-in-spark/
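A short sketch of the selected option in context, with a quick check that Spark really chose a broadcast hash join:

from pyspark.sql import functions as F

# broadcasting the 1,000-row customers DataFrame avoids shuffling the 10-million-row transactions DataFrame
results = transactions.join(
    F.broadcast(customers),
    transactions.customer_id == customers.customer_id
)
results.explain()   # the physical plan should show BroadcastHashJoin rather than SortMergeJoin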

QUESTION 40
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have a Fabric tenant that contains a new semantic model in OneLake.
You use a Fabric notebook to read the data into a Spark DataFrame.
You need to evaluate the data to calculate the min, max, mean, and standard deviation values for all the string and numeric columns.
Solution: You use the following PySpark expression: df.explain()
Does this meet the goal?

A. Yes
B. No

Answer: B

QUESTION 41
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have a Fabric tenant that contains a new semantic model in OneLake.
You use a Fabric notebook to read the data into a Spark DataFrame.
You need to evaluate the data to calculate the min, max, mean, and standard deviation values for all the string and numeric columns.
Solution: You use the following PySpark expression: df.show()
Does this meet the goal?

A. Yes
B. No

Answer: B

QUESTION 42
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have a Fabric tenant that contains a new semantic model in OneLake.
You use a Fabric notebook to read the data into a Spark DataFrame.
You need to evaluate the data to calculate the min, max, mean, and standard deviation values for all the string and numeric columns.
Solution: You use the following PySpark expression: df.summary()
Does this meet the goal?

A. Yes
B. No

Answer: A
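A minimal sketch of the expression that meets the goal; summary() also accepts an explicit list of statistics when only these four are needed:

df.summary().show()                                  # count, mean, stddev, min, 25%, 50%, 75%, max for numeric and string columns
df.summary("min", "max", "mean", "stddev").show()    # restrict the output to the four required statistics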

QUESTION 43
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have a Fabric tenant that contains a lakehouse named Lakehouse1. Lakehouse1 contains a Delta table named Customer.
When you query Customer, you discover that the query is slow to execute. You suspect that maintenance was NOT performed on the table.
You need to identify whether maintenance tasks were performed on Customer.
Solution: You run the following Spark SQL statement: DESCRIBE HISTORY customer
Does this meet the goal?

A. Yes
B. No

Answer: A
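A minimal sketch of running the statement from a PySpark cell and checking for maintenance operations:

history = spark.sql("DESCRIBE HISTORY customer")               # one row per transaction recorded in the Delta log
display(history.select("version", "timestamp", "operation"))   # OPTIMIZE or VACUUM entries confirm that maintenance ran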

QUESTION 44
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have a Fabric tenant that contains a lakehouse named Lakehouse1. Lakehouse1 contains a Delta table named Customer.
When you query Customer, you discover that the query is slow to execute. You suspect that maintenance was NOT performed on the table.
You need to identify whether maintenance tasks were performed on Customer.
Solution: You run the following Spark SQL statement: REFRESH TABLE customer
Does this meet the goal?

A. Yes
B. No

Answer: B

QUESTION 45
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have a Fabric tenant that contains a lakehouse named Lakehouse1. Lakehouse1 contains a Delta table named Customer.
When you query Customer, you discover that the query is slow to execute. You suspect that maintenance was NOT performed on the table.
You need to identify whether maintenance tasks were performed on Customer.
Solution: You run the following Spark SQL statement: EXPLAIN TABLE customer
Does this meet the goal?

A. Yes
B. No

Answer: B

QUESTION 46
Hotspot Question
You have a data warehouse that contains a table named Stage.Customers. Stage.Customers contains all the customer record updates from a customer relationship management (CRM) system. There can be multiple updates per customer.
You need to write a T-SQL query that will return the customer ID, name, postal code, and the last updated time of the most recent row for each customer ID.
How should you complete the code? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

Answer:

QUESTION 47
Hotspot Question
You have a Fabric tenant.
You plan to create a Fabric notebook that will use Spark DataFrames to generate Microsoft Power BI visuals.
You run the following code.

For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.

Answer:

QUESTION 48
Drag and Drop Question
You have a Fabric tenant that contains a semantic model. The model contains data about retail stores.
You need to write a DAX query that will be executed by using the XMLA endpoint. The query must return a table of stores that have opened since December 1, 2023.
How should you complete the DAX expression? To answer, drag the appropriate values to the correct targets. Each value may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.

Answer:

QUESTION 49
Hotspot Question
You have a Fabric tenant that contains a warehouse named Warehouse1. Warehouse1 contains three schemas named schemaA, schemaB, and schemaC.
You need to ensure that a user named User1 can truncate tables in schemaA only.
How should you complete the T-SQL statement? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

Answer:

QUESTION 50
Hotspot Question
You have the source data model shown in the following exhibit.

The primary keys of the tables are indicated by a key symbol beside the columns involved in each key.
You need to create a dimensional data model that will enable the analysis of order items by date, product, and customer.
What should you include in the solution? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

Answer:

QUESTION 51
Hotspot Question
You have a Fabric tenant that contains two lakehouses.
You are building a dataflow that will combine data from the lakehouses. The applied steps from one of the queries in the dataflow are shown in the following exhibit.

Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic.
NOTE: Each correct selection is worth one point.

Answer:


Resources From:

1.2025 Latest Braindump2go DP-600 Exam Dumps (PDF & VCE) Free Share:
https://www.braindump2go.com/dp-600.html

2.2025 Latest Braindump2go DP-600 PDF and DP-600 VCE Dumps Free Share:
https://drive.google.com/drive/folders/1hFMvbs2eQP6DLaCpG93gYnq3xN4l19rB?usp=sharing

3.2025 Free Braindump2go DP-600 Exam Questions Download:
https://www.braindump2go.com/free-online-pdf/DP-600-VCE-Dumps(1-51).pdf

Free Resources from Braindump2go. We Are Devoted to Helping You 100% Pass All Exams!

[2025-November-New]Braindump2go DP-300 VCE Dumps Free Share[Q205-Q242]

2025/November Latest Braindump2go DP-300 Exam Dumps with PDF and VCE Free Updated Today! Following are some new Braindump2go DP-300 Real Exam Questions!

QUESTION 205
You are training a new administrator for your company’s Azure data services, which include databases deployed on Azure SQL Database, Azure SQL Managed Instance, and SQL Server on Azure virtual machines (VMs).
You need to identify the fixed roles that are supported by Azure SQL Database only.
Which two fixed roles are available with Azure SQL Database only?
Each correct answer presents a complete solution.

A. db_securityadmin
B. sysadmin
C. dbmanager
D. dbcreator
E. loginmanager

Answer: CE

QUESTION 206
You provision an Azure SQL Managed Instance database named MyDevData. The database will be used by in-house development for application development projects.
DevSupport custom database role members will use dynamic management views (DMVs) to retrieve performance and health information about MyDevData.
You need to ensure that DevSupport members can view information through DMVs.
Which statement should you use?
Each correct answer presents part of the solution.

A. GRANT VIEW DATABASE STATE TO DevSupport
B. GRANT VIEW SERVER STATE TO DevSupport
C. GRANT VIEW DEFINITION TO DevSupport
D. GRANT VIEW REFERENCES TO DevSupport

Continue reading

[2025-November-New]Braindump2go AZ-900 VCE Free Download[Q107-Q130]

2025/November Latest Braindump2go AZ-900 Exam Dumps with PDF and VCE Free Updated Today! Following are some new Braindump2go AZ-900 Real Exam Questions!

QUESTION 107
If you want to integrate apps, systems, data, and services across your company, which Azure service will you use for serverless workflow orchestration?

A. Functions
B. Apps Grid
C. Logic Apps
D. Bot Service

Answer: C

QUESTION 108
You will be using Azure Monitor to collect data about your Azure infrastructure. Can you identify the type of data collection that requires you to enable diagnostics?

A. Events Logs
B. Linux Virtual Machine health
C. Container workload performance
D. Usage of Web applications

Continue reading

[2025-November-New]Braindump2go DP-100 PDF Dumps Free Share[Q149-Q170]

2025/November Latest Braindump2go DP-100 Exam Dumps with PDF and VCE Free Updated Today! Following are some new Braindump2go DP-100 Real Exam Questions!

QUESTION 149
You have a comma-separated values (CSV) file containing data from which you want to train a classification model.
You are using the Automated Machine Learning interface in Azure Machine Learning studio to train the classification model. You set the task type to Classification.
You need to ensure that the Automated Machine Learning process evaluates only linear models.
What should you do?

A. Add all algorithms other than linear ones to the blocked algorithms list.
B. Set the Exit criterion option to a metric score threshold.
C. Clear the option to perform automatic featurization.
D. Clear the option to enable deep learning.
E. Set the task type to Regression.

Answer: C
Explanation:
Automatic featurization can fit non-linear models.
Reference:
https://econml.azurewebsites.net/spec/estimation/dml.html
https://docs.microsoft.com/en-us/azure/machine-learning/how-to-use-automated-ml-for-ml-models

QUESTION 150
You are a data scientist working for a bank and have used Azure ML to train and register a machine learning model that predicts whether a customer is likely to repay a loan.
You want to understand how your model is making selections and must be sure that the model does not violate government regulations such as denying loans based on where an applicant lives.
You need to determine the extent to which each feature in the customer data is influencing predictions.
What should you do?

A. Enable data drift monitoring for the model and its training dataset.
B. Score the model against some test data with known label values and use the results to calculate a confusion matrix.
C. Use the Hyperdrive library to test the model with multiple hyperparameter values.
D. Use the interpretability package to generate an explainer for the model.
E. Add tags to the model registration indicating the names of the features in the training dataset.

Continue reading

[2025-November-New]Braindump2go AZ-801 VCE Free Download[Q136-Q156]

2025/November Latest Braindump2go AZ-801 Exam Dumps with PDF and VCE Free Updated Today! Following are some new Braindump2go AZ-801 Real Exam Questions!

QUESTION 136
Hotspot Question
You have a generation 1 Azure virtual machine named VM1 that runs Windows Server and is joined to an Active Directory domain.
You plan to enable BitLocker Drive Encryption (Bit-Locker) on volume C of VM1.
You need to ensure that the BitLocker recovery key for VM1 is stored in Active Directory.
Which two Group Policy settings should you configure first? To answer, select the settings in the answer area.
NOTE: Each correct selection is worth one point.

Answer:

QUESTION 137
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have a server named Server1 that runs Windows Server.
You need to ensure that only specific applications can modify the data in protected folders on Server1.
Solution: From App & browser control, you configure Reputation-based protection.
Does this meet the goal?

A. Yes
B. No

Continue reading

[2025-November-New]Braindump2go AZ-800 Dumps VCE Free Share[Q180-Q202]

2025/November Latest Braindump2go AZ-800 Exam Dumps with PDF and VCE Free Updated Today! Following are some new Braindump2go AZ-800 Real Exam Questions!

QUESTION 180
Your network contains an Active Directory Domain Services (AD DS) domain. The domain contains a server named Server1.
You implement Just Enough Administration (JEA) on Server1.
You need to perform remote administration tasks on Server1 by using only JEA.
What should you use?

A. PowerShell only
B. Remote Server Administration Tools (RSAT) only
C. PowerShell or Remote Desktop only
D. PowerShell or Remote Server Administration Tools (RSAT) only
E. Remote Server Administration Tools (RSAT) or Remote Desktop only
F. PowerShell, Remote Server Administration Tools (RSAT), or Remote Desktop

Answer: A
Explanation:
Just Enough Administration (JEA) is a security technology that enables delegated administration for anything managed by PowerShell.

QUESTION 181
You have an Azure subscription. The subscription contains a virtual machine named VM1 that runs Windows Server.
You plan to manage VM1 by using a PowerShell runbook.
You need to create the runbook.
What should you create first?

A. an Azure Automation account
B. an Azure workbook
C. a Log Analytics workspace
D. a Microsoft Power Automate flow

Continue reading

[2025-November-New]Braindump2go AZ-700 Practice Test Free[Q145-Q165]

2025/November Latest Braindump2go AZ-700 Exam Dumps with PDF and VCE Free Updated Today! Following are some new Braindump2go AZ-700 Real Exam Questions!

QUESTION 145
You are planning an Azure deployment that will contain three virtual networks in the East US Azure region as shown in the following table.

A Site-to-Site VPN will connect Vnet1 to your company’s on-premises network.
You need to recommend a solution that ensures that the virtual machines on all the virtual networks can communicate with the on-premises network. The solution must minimize costs.
What should you recommend for Vnet2 and Vnet3?

A. VNet-to-VNet VPN connections
B. peering
C. service endpoints
D. route tables

Answer: B

QUESTION 146
Your company has an office in New York.
The company has an Azure subscription that contains the virtual networks shown in the following table.

You need to connect the virtual networks to the office by using ExpressRoute. The solution must meet the following requirements:
– The connection must have up to 1 Gbps of bandwidth.
– The office must have access to all the virtual networks.
– Costs must be minimized.
How many ExpressRoute circuits should be provisioned, and which ExpressRoute SKU should you enable?

A. one ExpressRoute Premium circuit
B. two ExpressRoute Premium circuits
C. four ExpressRoute Standard circuits
D. one ExpressRoute Standard circuit

Continue reading

[2025-November-New]Braindump2go SCS-C02 Practice Test Free[Q70-Q120]

2025/November Latest Braindump2go SCS-C02 Exam Dumps with PDF and VCE Free Updated Today! Following are some new Braindump2go SCS-C02 Real Exam Questions!

QUESTION 70
A security team has received an alert from Amazon GuardDuty that AWS CloudTrail logging has been disabled. The security team’s account has AWS Config, Amazon Inspector, Amazon Detective, and AWS Security Hub enabled. The security team wants to identify who disabled CloudTrail and what actions were performed while CloudTrail was disabled.
What should the security team do to obtain this information?

A. Use AWS Config to search for the CLOUD_TRAIL_ENABLED event. Use the configuration recorder to find all activity that occurred when CloudTrail was disabled.
B. Use Amazon Inspector to find the details of the CloudTrailLoggingDisabled event from GuardDuty, including the user name and all activity that occurred when CloudTrail was disabled.
C. Use Detective to find the details of the CloudTrailLoggingDisabled event from GuardDuty, including the user name and all activity that occurred when CloudTrail was disabled.
D. Use GuardDuty to find which user generated the CloudTrailLoggingDisabled event. Use Security Hub to find the trace of activity related to the event.

Answer: C
Explanation:
Findings detected by GuardDuty
GuardDuty uses your log data to uncover suspected instances of malicious or high-risk activity. Detective provides resources that help you investigate these findings.
For each finding, Detective provides the associated finding details. Detective also shows the entities, such as IP addresses and AWS accounts, that are connected to the finding.
You can then explore the activity for the involved entities to determine whether the detected activity from the finding is a genuine cause for concern.
https://docs.aws.amazon.com/detective/latest/userguide/investigation-phases-starts.html

QUESTION 71
A company has a requirement that none of its Amazon RDS resources can be publicly accessible. A security engineer needs to set up monitoring for this requirement and must receive a near-real-time notification if any RDS resource is noncompliant.
Which combination of steps should the security engineer take to meet these requirements? (Choose three.)

A. Configure RDS event notifications on each RDS resource. Target an AWS Lambda function that notifies AWS Config of a change to the RDS public access setting
B. Configure the rds-instance-public-access-check AWS Config managed rule to monitor the RDS resources.
C. Configure the Amazon EventBridge (Amazon CloudWatch Events) rule to target an Amazon Simple Notification Service (Amazon SNS) topic to provide a notification to the security engineer.
D. Configure RDS event notifications to post events to an Amazon Simple Queue Service (Amazon SQS) queue. Subscribe the SQS queue to an Amazon Simple Notification Service (Amazon SNS) topic to provide a notification to the security engineer.
E. Configure an Amazon EventBridge (Amazon CloudWatch Events) rule that is invoked by a compliance change event from the rds-instance-public-access-check rule.
F. Configure an Amazon EventBridge (Amazon CloudWatch Events) rule that is invoked when the AWS Lambda function notifies AWS Config of an RDS event change.

Continue reading

[2025-November-New]Braindump2go SOA-C02 Dumps PDF Free[Q241-Q271]

2025/November Latest Braindump2go SOA-C02 Exam Dumps with PDF and VCE Free Updated Today! Following are some new Braindump2go SOA-C02 Real Exam Questions!

QUESTION 241
A SysOps administrator is responsible for managing a company’s cloud infrastructure with AWS CloudFormation. The SysOps administrator needs to create a single resource that consists of multiple AWS services. The resource must support creation and deletion through the CloudFormation console.
Which CloudFormation resource type should the SysOps administrator create to meet these requirements?

A. AWS::EC2::Instance with a cfn-init helper script
B. AWS::OpsWorks::Instance
C. AWS::SSM::Document
D. Custom::MyCustomType

Answer: D
Explanation:
Custom resources enable you to write custom provisioning logic in templates that AWS CloudFormation runs anytime you create, update (if you changed the custom resource), or delete stacks. For example, you might want to include resources that aren’t available as AWS CloudFormation resource types. You can include those resources by using custom resources. That way you can still manage all your related resources in a single stack.
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-custom-resources.html
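A hedged sketch of the Lambda handler behind a Custom::MyCustomType resource, using the cfnresponse helper that CloudFormation makes available to inline (ZipFile) Lambda code; the provisioning logic is a placeholder:

import cfnresponse

def handler(event, context):
    try:
        if event["RequestType"] in ("Create", "Update"):
            # placeholder: call the AWS services that make up the composite resource here
            data = {"Message": "resources provisioned"}
        else:  # Delete, sent when the stack or resource is removed
            # placeholder: tear the composite resource down here
            data = {"Message": "resources deleted"}
        cfnresponse.send(event, context, cfnresponse.SUCCESS, data)
    except Exception:
        cfnresponse.send(event, context, cfnresponse.FAILED, {})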

QUESTION 242
A company is implementing security and compliance by using AWS Trusted Advisor. The company’s SysOps team is validating the list of Trusted Advisor checks that it can access.
Which factor will affect the quantity of available Trusted Advisor checks?

A. Whether at least one Amazon EC2 instance is in the running state
B. The AWS Support plan
C. An AWS Organizations service control policy (SCP)
D. Whether the AWS account root user has multi-factor authentication (MFA) enabled

Continue reading

[2025-November-New]Braindump2go SAP-C02 VCE Exam Questions Free[Q175-Q206]

2025/November Latest Braindump2go SAP-C02 Exam Dumps with PDF and VCE Free Updated Today! Following are some new Braindump2go SAP-C02 Real Exam Questions!

QUESTION 175
A company is developing a new service that will be accessed using TCP on a static port. A solutions architect must ensure that the service is highly available, has redundancy across Availability Zones, and is accessible using the DNS name my.service.com, which is publicly accessible. The service must use fixed address assignments so other companies can add the addresses to their allow lists.
Assuming that resources are deployed in multiple Availability Zones in a single Region, which solution will meet these requirements?

A. Create Amazon EC2 instances with an Elastic IP address for each instance. Create a Network Load Balancer (NLB) and expose the static TCP port. Register EC2 instances with the NLB. Create a new name server record set named my.service.com, and assign the Elastic IP addresses of the EC2 instances to the record set. Provide the Elastic IP addresses of the EC2 instances to the other companies to add to their allow lists.
B. Create an Amazon ECS cluster and a service definition for the application. Create and assign public IP addresses for the ECS cluster. Create a Network Load Balancer (NLB) and expose the TCP port. Create a target group and assign the ECS cluster name to the NLB. Create a new A record set named my.service.com, and assign the public IP addresses of the ECS cluster to the record set. Provide the public IP addresses of the ECS cluster to the other companies to add to their allow lists.
C. Create Amazon EC2 instances for the service. Create one Elastic IP address for each Availability Zone. Create a Network Load Balancer (NLB) and expose the assigned TCP port. Assign the Elastic IP addresses to the NLB for each Availability Zone. Create a target group and register the EC2 instances with the NLB. Create a new A (alias) record set named my.service.com, and assign the NLB DNS name to the record set.
D. Create an Amazon ECS cluster and a service definition for the application. Create and assign public IP address for each host in the cluster. Create an Application Load Balancer (ALB) and expose the static TCP port. Create a target group and assign the ECS service definition name to the ALB. Create a new CNAME record set and associate the public IP addresses to the record set. Provide the Elastic IP addresses of the Amazon EC2 instances to the other companies to add to their allow lists.

Answer: C
Explanation:
NLB with one Elastic IP per AZ to handle TCP traffic. Alias record set named my.service.com.
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-to-elb-load-balancer.html

QUESTION 176
A company is running multiple workloads in the AWS Cloud. The company has separate units for software development. The company uses AWS Organizations and federation with SAML to give permissions to developers to manage resources in their AWS accounts. The development units each deploy their production workloads into a common production account.
Recently, an incident occurred in the production account in which members of a development unit terminated an EC2 instance that belonged to a different development unit.
A solutions architect must create a solution that prevents a similar incident from happening in the future.
The solution must also allow developers to manage the instances used for their workloads.
Which strategy will meet these requirements?

A. Create separate OUs in AWS Organizations for each development unit.
Assign the created OUs to the company AWS accounts.
Create separate SCPs with a deny action and a StringNotEquals condition for the DevelopmentUnit resource tag that matches the development unit name.
Assign the SCP to the corresponding OU.
B. Pass an attribute for DevelopmentUnit as an AWS Security Token Service (AWS STS) session tag during SAML federation.
Update the IAM policy for the developers’ assumed IAM role with a deny action and a StringNotEquals condition for the DevelopmentUnit resource tag and aws:PrincipalTag/DevelopmentUnit.
C. Pass an attribute for DevelopmentUnit as an AWS Security Token Service (AWS STS) session tag during SAML federation.
Create an SCP with an allow action and a StringEquals condition for the DevelopmentUnit resource tag and aws:PrincipalTag/DevelopmentUnit.
Assign the SCP to the root OU.
D. Create separate IAM policies for each development unit.
For every IAM policy, add an allow action and a StringEquals condition for the DevelopmentUnit resource tag and the development unit name.
During SAML federation, use AWS Security Token Service (AWS STS) to assign the IAM policy and match the development unit name to the assumed IAM role.

Continue reading

[2025-November-New]Braindump2go SAA-C03 PDF Free Updated[Q976-Q1010]

2025/November Latest Braindump2go SAA-C03 Exam Dumps with PDF and VCE Free Updated Today! Following are some new Braindump2go SAA-C03 Real Exam Questions!

QUESTION 976
A company uses Amazon S3 to host its static website. The company wants to add a contact form to the webpage. The contact form will have dynamic server-side components for users to input their name, email address, phone number, and user message.
The company expects fewer than 100 site visits each month. The contact form must notify the company by email when a customer fills out the form.
Which solution will meet these requirements MOST cost-effectively?

A. Host the dynamic contact form in Amazon Elastic Container Service (Amazon ECS). Set up Amazon Simple Email Service (Amazon SES) to connect to a third-party email provider.
B. Create an Amazon API Gateway endpoint that returns the contact form from an AWS Lambda function. Configure another Lambda function on the API Gateway to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic.
C. Host the website by using AWS Amplify Hosting for static content and dynamic content. Use server-side scripting to build the contact form. Configure Amazon Simple Queue Service (Amazon SQS) to deliver the message to the company.
D. Migrate the website from Amazon S3 to Amazon EC2 instances that run Windows Server. Use Internet Information Services (IIS) for Windows Server to host the webpage. Use client-side scripting to build the contact form. Integrate the form with Amazon WorkMail.

Answer: B
Explanation:
Using API Gateway and Lambda enables serverless handling of form submissions with minimal cost and infrastructure. When coupled with Amazon SNS, it allows instant email notifications without running servers, making it ideal for low-traffic workloads.
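A hedged sketch of the Lambda function that the API Gateway endpoint would invoke; the SNS topic ARN is hypothetical, and the company’s email address is assumed to be subscribed to that topic:

import json
import boto3

sns = boto3.client("sns")
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:contact-form-topic"   # hypothetical topic with an email subscription

def handler(event, context):
    form = json.loads(event["body"])          # name, email address, phone number, and message from the form
    sns.publish(
        TopicArn=TOPIC_ARN,
        Subject="New contact form submission",
        Message=json.dumps(form, indent=2),
    )
    return {"statusCode": 200, "body": json.dumps({"status": "sent"})}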

QUESTION 977
A company creates dedicated AWS accounts in AWS Organizations for its business units. Recently, an important notification was sent to the root user email address of a business unit account instead of the assigned account owner. The company wants to ensure that all future notifications can be sent to different employees based on the notification categories of billing, operations, or security.
Which solution will meet these requirements MOST securely?

A. Configure each AWS account to use a single email address that the company manages. Ensure that all account owners can access the email account to receive notifications. Configure alternate contacts for each AWS account with corresponding distribution lists for the billing team, the security team, and the operations team for each business unit.
B. Configure each AWS account to use a different email distribution list for each business unit that the company manages. Configure each distribution list with administrator email addresses that can respond to alerts. Configure alternate contacts for each AWS account with corresponding distribution lists for the billing team, the security team, and the operations team for each business unit.
C. Configure each AWS account root user email address to be the individual company managed email address of one person from each business unit. Configure alternate contacts for each AWS account with corresponding distribution lists for the billing team, the security team, and the operations team for each business unit.
D. Configure each AWS account root user to use email aliases that go to a centralized mailbox. Configure alternate contacts for each account by using a single business managed email distribution list each for the billing team, the security team, and the operations team.

Continue reading

[2025-November-New]Braindump2go MLA-C01 VCE Free Download[Q78-Q101]

2025/November Latest Braindump2go MLA-C01 Exam Dumps with PDF and VCE Free Updated Today! Following are some new Braindump2go MLA-C01 Real Exam Questions!

QUESTION 78
A company is planning to use Amazon SageMaker to make classification ratings that are based on images. The company has 6 TB of training data that is stored on an Amazon FSx for NetApp ONTAP system virtual machine (SVM). The SVM is in the same VPC as SageMaker.
An ML engineer must make the training data accessible for ML models that are in the SageMaker environment.
Which solution will meet these requirements?

A. Mount the FSx for ONTAP file system as a volume to the SageMaker Instance.
B. Create an Amazon S3 bucket. Use Mountpoint for Amazon S3 to link the S3 bucket to the FSx for ONTAP file system.
C. Create a catalog connection from SageMaker Data Wrangler to the FSx for ONTAP file system.
D. Create a direct connection from SageMaker Data Wrangler to the FSx for ONTAP file system.

Answer: A

QUESTION 79
A company regularly receives new training data from the vendor of an ML model. The vendor delivers cleaned and prepared data to the company’s Amazon S3 bucket every 3-4 days.
The company has an Amazon SageMaker pipeline to retrain the model. An ML engineer needs to implement a solution to run the pipeline when new data is uploaded to the S3 bucket.
Which solution will meet these requirements with the LEAST operational effort?

A. Create an S3 Lifecycle rule to transfer the data to the SageMaker training instance and to initiate training.
B. Create an AWS Lambda function that scans the S3 bucket. Program the Lambda function to initiate the pipeline when new data is uploaded.
C. Create an Amazon EventBridge rule that has an event pattern that matches the S3 upload. Configure the pipeline as the target of the rule.
D. Use Amazon Managed Workflows for Apache Airflow (Amazon MWAA) to orchestrate the pipeline when new data is uploaded.

Continue reading

[2025-November-New]Braindump2go MLS-C01 Exam Dumps Free[Q259-Q290]

2025/November Latest Braindump2go MLS-C01 Exam Dumps with PDF and VCE Free Updated Today! Following are some new Braindump2go MLS-C01 Real Exam Questions!

QUESTION 259
An ecommerce company is collecting structured data and unstructured data from its website, mobile apps, and IoT devices. The data is stored in several databases and Amazon S3 buckets. The company is implementing a scalable repository to store structured data and unstructured data. The company must implement a solution that provides a central data catalog, self-service access to the data, and granular data access policies and encryption to protect the data.
Which combination of actions will meet these requirements with the LEAST amount of setup? (Choose three.)

A. Identify the existing data in the databases and S3 buckets. Link the data to AWS Lake Formation.
B. Identify the existing data in the databases and S3 buckets. Link the data to AWS Glue.
C. Run AWS Glue crawlers on the linked data sources to create a central data catalog.
D. Apply granular access policies by using AWS Identity and Access Management (IAM). Configure server-side encryption on each data source.
E. Apply granular access policies and encryption by using AWS Lake Formation.
F. Apply granular access policies and encryption by using AWS Glue.

Answer: ACE
Explanation:
https://docs.aws.amazon.com/lake-formation/latest/dg/what-is-lake-formation.html
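
A rough boto3 outline of the A-C-E flow: registering an existing S3 location with Lake Formation, cataloging it with a Glue crawler, and granting table-level permissions through Lake Formation. All names, ARNs, and roles are hypothetical placeholders:

import boto3

lf = boto3.client("lakeformation")
glue = boto3.client("glue")

# A: register an existing S3 location with Lake Formation.
lf.register_resource(
    ResourceArn="arn:aws:s3:::example-raw-data",
    RoleArn="arn:aws:iam::111122223333:role/LakeFormationRegistrationRole",
)

# C: crawl the registered location to populate the central data catalog.
glue.create_crawler(
    Name="raw-data-crawler",
    Role="arn:aws:iam::111122223333:role/GlueCrawlerRole",
    DatabaseName="analytics_catalog",
    Targets={"S3Targets": [{"Path": "s3://example-raw-data/"}]},
)
glue.start_crawler(Name="raw-data-crawler")

# E: grant granular access to the cataloged data through Lake Formation.
lf.grant_permissions(
    Principal={"DataLakePrincipalIdentifier": "arn:aws:iam::111122223333:role/AnalystRole"},
    Resource={"Table": {"DatabaseName": "analytics_catalog", "Name": "orders"}},
    Permissions=["SELECT"],
)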

QUESTION 260
A machine learning (ML) specialist is developing a deep learning sentiment analysis model that is based on data from movie reviews. After the ML specialist trains the model and reviews the model results on the validation set, the ML specialist discovers that the model is overfitting.
Which solutions will MOST improve the model generalization and reduce overfitting? (Choose three.)

A. Shuffle the dataset with a different seed.
B. Decrease the learning rate.
C. Increase the number of layers in the network.
D. Add L1 regularization and L2 regularization.
E. Add dropout.
F. Decrease the number of layers in the network.
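
To make options D and E concrete, here is a minimal Keras sketch (assuming TensorFlow is available) that adds combined L1/L2 weight regularization and dropout to a small sentiment-classification head; the input size and hyperparameters are illustrative only:

import tensorflow as tf
from tensorflow.keras import layers, regularizers

# Dropout (option E) and L1 + L2 regularization (option D) both penalize
# over-reliance on individual weights/units and help the model generalize.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(10000,)),           # e.g. bag-of-words review vectors (illustrative)
    layers.Dense(
        128,
        activation="relu",
        kernel_regularizer=regularizers.l1_l2(l1=1e-5, l2=1e-4),
    ),
    layers.Dropout(0.5),                      # randomly drop half the units during training
    layers.Dense(1, activation="sigmoid"),    # positive / negative sentiment
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()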

Continue reading

[2025-November-New]Braindump2go DVA-C02 Dumps Free[Q440-Q500]

2025/November Latest Braindump2go DVA-C02 Exam Dumps with PDF and VCE Free Updated Today! Following are some new Braindump2go DVA-C02 Real Exam Questions!

QUESTION 440
A developer is building a microservice that uses AWS Lambda to process messages from an Amazon Simple Queue Service (Amazon SQS) standard queue. The Lambda function calls external APIs to enrich the SQS message data before loading the data into an Amazon Redshift data warehouse. The SQS queue must handle a maximum of 1,000 messages per second.
During initial testing, the Lambda function repeatedly inserted duplicate data into the Amazon Redshift table. The duplicate data led to a problem with data analysis. All duplicate messages were submitted to the queue within 1 minute of each other.
How should the developer resolve this issue?

A. Create an SQS FIFO queue. Enable message deduplication on the SQS FIFO queue.
B. Reduce the maximum Lambda concurrency that the SQS queue can invoke.
C. Use Lambda’s temporary storage to keep track of processed message identifiers.
D. Configure a message group ID for every sent message. Enable message deduplication on the SQS standard queue.

Answer: A
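
Answer A works because FIFO queues deduplicate natively: with content-based deduplication enabled, SQS drops messages whose body hashes to the same value within the 5-minute deduplication interval, which covers duplicates submitted within one minute of each other. A minimal boto3 sketch with a placeholder queue name:

import boto3

sqs = boto3.client("sqs")

# FIFO queue names must end in ".fifo". ContentBasedDeduplication lets SQS
# discard repeated message bodies inside the 5-minute deduplication window.
queue = sqs.create_queue(
    QueueName="call-enrichment-queue.fifo",
    Attributes={
        "FifoQueue": "true",
        "ContentBasedDeduplication": "true",
    },
)

# FIFO messages also require a MessageGroupId.
sqs.send_message(
    QueueUrl=queue["QueueUrl"],
    MessageBody='{"record_id": "123", "payload": "example"}',
    MessageGroupId="enrichment",
)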

QUESTION 441
A company has an application that uses an Amazon API Gateway API to invoke an AWS Lambda function. The application is latency sensitive.
A developer needs to configure the Lambda function to reduce the cold start time that is associated with default scaling.
What should the developer do to meet these requirements?

A. Publish a new version of the Lambda function. Configure provisioned concurrency. Set the provisioned concurrency limit to meet the company requirements.
B. Increase the Lambda function’s memory to the maximum amount. Increase the Lambda function’s reserved concurrency limit.
C. Increase the reserved concurrency of the Lambda function to a number that matches the current production load.
D. Use Service Quotas to request an increase in the Lambda function’s concurrency limit for the AWS account where the function is deployed.

Answer: A
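
Answer A translates to publishing a version (or alias) and attaching provisioned concurrency to it, so a pool of execution environments stays initialized and cold starts are avoided. A minimal boto3 sketch; the function name and concurrency value are placeholders:

import boto3

lam = boto3.client("lambda")

# Provisioned concurrency attaches to a published version or alias, not to $LATEST.
version = lam.publish_version(FunctionName="latency-sensitive-api")["Version"]

lam.put_provisioned_concurrency_config(
    FunctionName="latency-sensitive-api",
    Qualifier=version,
    ProvisionedConcurrentExecutions=50,   # size to the expected steady-state load
)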

QUESTION 442
A developer is deploying an application on Amazon EC2 instances that run in Account A. The application needs to read data from an existing Amazon Kinesis data stream in Account B.
Which actions should the developer take to provide the application with access to the stream? (Choose two.)

A. Update the instance profile role in Account A with stream read permissions.
B. Create an IAM role with stream read permissions in Account B.
C. Add a trust policy to the instance profile role and IAM role in Account B to allow the instance profile role to assume the IAM role.
D. Add a trust policy to the instance profile role and IAM role in Account B to allow reads from the stream.
E. Add a resource-based policy in Account B to allow read access from the instance profile role.
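
For context on the cross-account pattern the options describe: the usual approach is an IAM role in Account B that has read permissions on the stream and trusts the instance profile role in Account A; the application then assumes that role before calling Kinesis. A minimal boto3 sketch with placeholder ARNs and stream name:

import boto3

# Running on the EC2 instance in Account A: assume the reader role in Account B.
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::222233334444:role/KinesisStreamReaderRole",   # placeholder
    RoleSessionName="cross-account-stream-reader",
)["Credentials"]

kinesis = boto3.client(
    "kinesis",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)

# Read from the stream that lives in Account B (stream name is a placeholder).
shard_id = kinesis.describe_stream(StreamName="orders-stream")["StreamDescription"]["Shards"][0]["ShardId"]
iterator = kinesis.get_shard_iterator(
    StreamName="orders-stream",
    ShardId=shard_id,
    ShardIteratorType="TRIM_HORIZON",
)["ShardIterator"]
records = kinesis.get_records(ShardIterator=iterator)["Records"]
print(f"Fetched {len(records)} records")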

Continue reading

[2025-November-New]Braindump2go DOP-C02 Dumps PDF Free[Q340-Q370]

2025/November Latest Braindump2go DOP-C02 Exam Dumps with PDF and VCE Free Updated Today! Following are some new Braindump2go DOP-C02 Real Exam Questions!

QUESTION 340
A company uses Amazon Redshift as its data warehouse solution. The company wants to create a dashboard to view changes to the Redshift users and the queries the users perform.
Which combination of steps will meet this requirement? (Choose two.)

A. Create an Amazon CloudWatch log group. Create an AWS CloudTrail trail that writes to the CloudWatch log group.
B. Create a new Amazon S3 bucket. Configure default audit logging on the Redshift cluster. Configure the S3 bucket as the target.
C. Configure the Redshift cluster database audit logging to include user activity logs. Configure Amazon CloudWatch as the target.
D. Create an Amazon CloudWatch dashboard that has a log widget. Configure the widget to display user details from the Redshift logs.
E. Create an AWS Lambda function that uses Amazon Athena to query the Redshift logs. Create an Amazon CloudWatch dashboard that has a custom widget type that uses the Lambda function.

Answer: BC
Explanation:
Amazon Redshift audit logging captures information about activity in the database, including changes to users and the queries that users run. Enabling default audit logging with an S3 bucket as the target (option B) stores the connection and user logs in a centralized location, so user and permission changes are captured.
Redshift’s user activity logs (option C) additionally record the SQL statements each user executes; routing them to Amazon CloudWatch makes the data available in near real time for monitoring.
Together, these two steps capture both the user changes and the query activity needed to build the dashboard.
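
A boto3 sketch of the two pieces described above: enabling audit logging to S3 and turning on user activity logging through the cluster’s parameter group. Cluster, bucket, and parameter group names are placeholders; sending the logs to CloudWatch instead is a log-destination choice on the same logging configuration:

import boto3

redshift = boto3.client("redshift")

# Send connection and user logs to S3 (option B).
redshift.enable_logging(
    ClusterIdentifier="analytics-cluster",
    BucketName="example-redshift-audit-logs",
    S3KeyPrefix="audit/",
)

# User activity logging (the SQL each user runs, option C) is controlled by the
# enable_user_activity_logging parameter in the cluster's parameter group.
redshift.modify_cluster_parameter_group(
    ParameterGroupName="analytics-cluster-params",
    Parameters=[{
        "ParameterName": "enable_user_activity_logging",
        "ParameterValue": "true",
    }],
)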

QUESTION 341
A company uses an organization in AWS Organizations to manage its 500 AWS accounts. The organization has all features enabled. The AWS accounts are in a single OU. The developers need to use the CostCenter tag key for all resources in the organization’s member accounts. Some teams do not use the CostCenter tag key to tag their Amazon EC2 instances.
The cloud team wrote a script that scans all EC2 instances in the organization’s member accounts. If the EC2 instances do not have a CostCenter tag key, the script will notify AWS account administrators. To avoid this notification, some developers use the CostCenter tag key with an arbitrary string in the tag value.
The cloud team needs to ensure that all EC2 instances in the organization use a CostCenter tag key with the appropriate cost center value.
Which solution will meet these requirements?

A. Create an SCP that prevents the creation of EC2 instances without the CostCenter tag key. Create a tag policy that requires the CostCenter tag to be values from a known list of cost centers for all EC2 instances. Attach the policy to the OU. Update the script to scan the tag keys and tag values.
Modify the script to update noncompliant resources with a default approved tag value for the CostCenter tag key.
B. Create an SCP that prevents the creation of EC2 instances without the CostCenter tag key. Attach the policy to the OU. Update the script to scan the tag keys and tag values and notify the administrators when the tag values are not valid.
C. Create an SCP that prevents the creation of EC2 instances without the CostCenter tag key. Attach the policy to the OU. Create an IAM permission boundary in the organization’s member accounts that restricts the CostCenter tag values to a list of valid cost centers.
D. Create a tag policy that requires the CostCenter tag to be values from a known list of cost centers for all EC2 instances. Attach the policy to the OU.
Configure an AWS Lambda function that adds an empty CostCenter tag key to an EC2 instance. Create an Amazon EventBridge rule that matches events to the RunInstances API action with the Lambda function as the target.
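
To illustrate the tag-policy half of option A, a boto3 sketch that creates an organization tag policy restricting CostCenter values for EC2 instances to an approved list and attaches it to the OU; the cost-center values and OU ID are hypothetical placeholders:

import json
import boto3

org = boto3.client("organizations")

# Tag policy: CostCenter on EC2 instances must use one of the approved values.
tag_policy = {
    "tags": {
        "costcenter": {
            "tag_key": {"@@assign": "CostCenter"},
            "tag_value": {"@@assign": ["CC-1001", "CC-1002", "CC-2001"]},   # approved cost centers (placeholders)
            "enforced_for": {"@@assign": ["ec2:instance"]},
        }
    }
}

policy = org.create_policy(
    Content=json.dumps(tag_policy),
    Description="Require approved CostCenter values on EC2 instances",
    Name="costcenter-tag-policy",
    Type="TAG_POLICY",
)

org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-examplerootid-exampleouid",   # the single OU that holds the member accounts (placeholder)
)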

Continue reading

[2025-November-New]Braindump2go CLF-C02 Exam Dumps PDF Free[Q316-Q360]

2025/November Latest Braindump2go CLF-C02 Exam Dumps with PDF and VCE Free Updated Today! Following are some new Braindump2go CLF-C02 Real Exam Questions!

QUESTION 316
Which AWS service or resource can provide discounts on some AWS service costs in exchange for a spending commitment?

A. Amazon Detective
B. AWS Pricing Calculator
C. Savings Plans
D. Basic Support

Answer: C
Explanation:
Savings Plans offer significant savings compared with On-Demand pricing in exchange for a commitment to a consistent amount of usage (measured in USD per hour) over a one- or three-year term.
https://docs.aws.amazon.com/whitepapers/latest/cost-optimization-reservation-models/savings-plans.html

QUESTION 317
Which of the following are pillars of the AWS Well-Architected Framework? (Choose two.)

A. High availability
B. Performance efficiency
C. Cost optimization
D. Going global in minutes
E. Continuous development

Answer: BC
Explanation:
Performance efficiency and cost optimization are two of the six pillars of the AWS Well-Architected Framework (operational excellence, security, reliability, performance efficiency, cost optimization, and sustainability); the other options are not pillars.

QUESTION 318
A company wants to use Amazon EC2 instances to provide a static website to users all over the world. The company needs to minimize latency for the users.
Which solution meets these requirements?

A. Use EC2 instances in multiple edge locations.
B. Use EC2 instances in the same Availability Zone but in different AWS Regions.
C. Use Amazon CloudFront with the EC2 instances configured as the source.
D. Use EC2 instances in the same Availability Zone but in different AWS accounts.
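
For background on option C: CloudFront caches and serves content from edge locations worldwide while pulling from the EC2-hosted site as a custom origin, which is what reduces latency for a global audience. A trimmed boto3 sketch; the origin domain name is a placeholder and the cache policy ID is the AWS-managed CachingOptimized policy:

import time
import boto3

cloudfront = boto3.client("cloudfront")

# Minimal distribution with the EC2-hosted website as a custom origin.
cloudfront.create_distribution(DistributionConfig={
    "CallerReference": str(time.time()),
    "Comment": "Static site served from EC2 through CloudFront",
    "Enabled": True,
    "Origins": {
        "Quantity": 1,
        "Items": [{
            "Id": "ec2-website-origin",
            "DomainName": "ec2-203-0-113-10.compute-1.amazonaws.com",   # placeholder origin
            "CustomOriginConfig": {
                "HTTPPort": 80,
                "HTTPSPort": 443,
                "OriginProtocolPolicy": "http-only",
            },
        }],
    },
    "DefaultCacheBehavior": {
        "TargetOriginId": "ec2-website-origin",
        "ViewerProtocolPolicy": "redirect-to-https",
        "CachePolicyId": "658327ea-f89d-4fab-a63d-7e88639e58f6",   # managed CachingOptimized policy
    },
})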

Continue reading

[2025-November-New]Braindump2go DEA-C01 VCE Exam Questions Free[Q105-Q155]

2025/November Latest Braindump2go DEA-C01 Exam Dumps with PDF and VCE Free Updated Today! Following are some new Braindump2go DEA-C01 Real Exam Questions!

QUESTION 105
A company has a data warehouse that contains a table that is named Sales. The company stores the table in Amazon Redshift. The table includes a column that is named city_name. The company wants to query the table to find all rows that have a city_name that starts with “San” or “El”.
Which SQL query will meet this requirement?

A. Select * from Sales where city_name ~ '$(San|El)*';
B. Select * from Sales where city_name ~ '^(San|El)*';
C. Select * from Sales where city_name ~'$(San&El)*';
D. Select * from Sales where city_name ~ '^(San&El)*';

Answer: B
Explanation:
This query uses a regular expression with the ~ operator. The caret (^) anchors the match to the beginning of the string, and (San|El) matches either "San" or "El", so option B is the only choice that tests for the prefix at the start of city_name. (Strictly speaking, the trailing * makes the group optional, so the tighter pattern would be '^(San|El)' without the asterisk, but B remains the best of the listed options.)
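
The same anchoring behavior can be checked quickly with Python's re module, which uses comparable syntax for ^ and alternation (the city list is illustrative):

import re

cities = ["San Diego", "El Paso", "Santa Fe", "Los Angeles", "Elkhart"]

# ^ anchors the match at the start of the string; (San|El) matches either prefix.
pattern = re.compile(r"^(San|El)")
print([city for city in cities if pattern.search(city)])
# ['San Diego', 'El Paso', 'Santa Fe', 'Elkhart']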

QUESTION 106
A company needs to send customer call data from its on-premises PostgreSQL database to AWS to generate near real-time insights. The solution must capture and load updates from operational data stores that run in the PostgreSQL database. The data changes continuously.
A data engineer configures an AWS Database Migration Service (AWS DMS) ongoing replication task. The task reads changes in near real time from the PostgreSQL source database transaction logs for each table. The task then sends the data to an Amazon Redshift cluster for processing.
The data engineer discovers latency issues during the change data capture (CDC) of the task. The data engineer thinks that the PostgreSQL source database is causing the high latency.
Which solution will confirm that the PostgreSQL database is the source of the high latency?

A. Use Amazon CloudWatch to monitor the DMS task. Examine the CDCIncomingChanges metric to identify delays in the CDC from the source database.
B. Verify that logical replication of the source database is configured in the postgresql.conf configuration file.
C. Enable Amazon CloudWatch Logs for the DMS endpoint of the source database. Check for error messages.
D. Use Amazon CloudWatch to monitor the DMS task. Examine the CDCLatencySource metric to identify delays in the CDC from the source database.

Answer: D
Explanation:
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Troubleshooting_Latency.html
A high CDCLatencySource metric indicates that the process of capturing changes from the source is delayed.
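
A boto3 sketch that pulls the CDCLatencySource metric for the replication task from CloudWatch; the replication instance and task identifiers are placeholders, and a sustained high value points at the PostgreSQL source side of the task:

from datetime import datetime, timedelta, timezone
import boto3

cloudwatch = boto3.client("cloudwatch")
now = datetime.now(timezone.utc)

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/DMS",
    MetricName="CDCLatencySource",
    Dimensions=[
        {"Name": "ReplicationInstanceIdentifier", "Value": "dms-replication-instance-1"},  # placeholder
        {"Name": "ReplicationTaskIdentifier", "Value": "postgres-to-redshift-cdc"},        # placeholder
    ],
    StartTime=now - timedelta(hours=1),
    EndTime=now,
    Period=300,
    Statistics=["Average", "Maximum"],
)
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], "avg:", point["Average"], "max:", point["Maximum"])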

QUESTION 107
A lab uses IoT sensors to monitor humidity, temperature, and pressure for a project. The sensors send 100 KB of data every 10 seconds. A downstream process will read the data from an Amazon S3 bucket every 30 seconds.
Which solution will deliver the data to the S3 bucket with the LEAST latency?

A. Use Amazon Kinesis Data Streams and Amazon Kinesis Data Firehose to deliver the data to the S3 bucket. Use the default buffer interval for Kinesis Data Firehose.
B. Use Amazon Kinesis Data Streams to deliver the data to the S3 bucket. Configure the stream to use 5 provisioned shards.
C. Use Amazon Kinesis Data Streams and call the Kinesis Client Library to deliver the data to the S3 bucket. Use a 5 second buffer interval from an application.
D. Use Amazon Managed Service for Apache Flink (previously known as Amazon Kinesis Data Analytics) and Amazon Kinesis Data Firehose to deliver the data to the S3 bucket. Use a 5 second buffer interval for Kinesis Data Firehose.

Continue reading