1 Quickstart: Create a data factory by using the Azure portal

Create an Azure Data Factory – Azure Data Factory | Microsoft Learn

Create the factory

Advanced creation in the Azure portal

Integrate with git later

Create a private endpoint to a VNet (for now)
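
If you'd rather script the factory creation than click through the portal, a minimal sketch with the azure-mgmt-datafactory Python SDK could look like this (the subscription ID, resource group, factory name, and region are placeholders you'd replace):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import Factory

subscription_id = "<subscription-id>"      # placeholder
resource_group = "rg-adf-quickstart"       # assumed resource group name
factory_name = "adf-quickstart-demo"       # must be globally unique

credential = DefaultAzureCredential()
adf_client = DataFactoryManagementClient(credential, subscription_id)

# Create (or update) the data factory in the chosen region.
factory = adf_client.factories.create_or_update(
    resource_group, factory_name, Factory(location="westeurope")
)
print(factory.provisioning_state)
```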

Resources

Pricing

Data Pipeline Pricing and FAQ – Data Factory | Microsoft Azure

Pricing for Data Pipeline is calculated based on pipeline orchestration and execution, data flow execution and debugging, and the number of Data Factory operations (for example, creating and monitoring pipelines).

After the factory is created, open it

Select Launch Studio to open Azure Data Factory Studio, which starts the Azure Data Factory user interface (UI) in a separate browser tab.

2 Quickstart: Use the copy data tool in the Azure Data Factory Studio to copy data

Copy data by using the copy data tool – Azure Data Factory | Microsoft Learn

Create a storage account

Upload a file to a blob container
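
A quick way to get the sample file in place is the azure-storage-blob SDK. This sketch assumes a connection string for the new storage account and uses the adftutorial/input/emp.txt layout the quickstart expects; the file contents are just sample rows:

```python
from azure.core.exceptions import ResourceExistsError
from azure.storage.blob import BlobServiceClient

conn_str = "<storage-account-connection-string>"   # placeholder
service = BlobServiceClient.from_connection_string(conn_str)

# The quickstart expects an "adftutorial" container with emp.txt under an "input" folder.
container = service.get_container_client("adftutorial")
try:
    container.create_container()
except ResourceExistsError:
    pass  # container already exists

sample = b"John,Doe\nJane,Doe\n"                   # sample rows for emp.txt
container.upload_blob(name="input/emp.txt", data=sample, overwrite=True)
```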

Step 1: Start the Copy Data tool

On the Properties page of the Copy Data tool, choose Built-in copy task under Task type, then select Next.

There are many task types

Click + Create new connection to add a connection.

Select the linked service type that you want to create for the source connection. In this tutorial, we use Azure Blob Storage. Select it from the gallery, and then select Continue.

Test the connection

Select the newly created connection in the Connection block.
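
The same connection can also be created programmatically as a linked service. A rough sketch with azure-mgmt-datafactory, reusing adf_client from the factory-creation sketch; the name AzureBlobStorage1 and the connection string are placeholders:

```python
from azure.mgmt.datafactory.models import (
    AzureBlobStorageLinkedService,
    LinkedServiceResource,
    SecureString,
)

# Equivalent of "+ Create new connection" in the Copy Data tool.
ls_name = "AzureBlobStorage1"                      # illustrative connection name
ls = LinkedServiceResource(
    properties=AzureBlobStorageLinkedService(
        connection_string=SecureString(value="<storage-account-connection-string>")
    )
)
adf_client.linked_services.create_or_update(resource_group, factory_name, ls_name, ls)
```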

In the File or folder section, select Browse to navigate to the adftutorial/input folder, select the emp.txt file, and then click OK.

Select the Binary copy checkbox to copy the file as-is, and then select Next.

Complete destination configuration
Select the AzureBlobStorage connection that you created in the Connection block.

In the Folder path section, browse to blob02 and enter a name for the file, for example moviesdb22.csv

Select Next, and specify a task name

On the Summary page, review all settings, and select Next.
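
What the Copy Data tool deploys is essentially two datasets plus a pipeline with a copy activity. A rough SDK equivalent of this binary copy, with illustrative names (SourceBinaryDS, SinkBinaryDS, CopyEmpPipeline) and reusing adf_client and the linked service from the earlier sketches, might look like:

```python
from azure.mgmt.datafactory.models import (
    AzureBlobStorageLocation,
    BinaryDataset,
    BinarySink,
    BinarySource,
    CopyActivity,
    DatasetReference,
    DatasetResource,
    LinkedServiceReference,
    PipelineResource,
)

ls_ref = LinkedServiceReference(type="LinkedServiceReference", reference_name=ls_name)

# Source dataset: emp.txt in adftutorial/input, read as-is (binary copy).
source_ds = DatasetResource(
    properties=BinaryDataset(
        linked_service_name=ls_ref,
        location=AzureBlobStorageLocation(
            container="adftutorial", folder_path="input", file_name="emp.txt"
        ),
    )
)
adf_client.datasets.create_or_update(resource_group, factory_name, "SourceBinaryDS", source_ds)

# Sink dataset: the blob02 destination container chosen above.
sink_ds = DatasetResource(
    properties=BinaryDataset(
        linked_service_name=ls_ref,
        location=AzureBlobStorageLocation(container="blob02"),
    )
)
adf_client.datasets.create_or_update(resource_group, factory_name, "SinkBinaryDS", sink_ds)

# One pipeline with a single copy activity wiring source to sink.
copy_activity = CopyActivity(
    name="CopyEmpBinary",
    inputs=[DatasetReference(type="DatasetReference", reference_name="SourceBinaryDS")],
    outputs=[DatasetReference(type="DatasetReference", reference_name="SinkBinaryDS")],
    source=BinarySource(),
    sink=BinarySink(),
)
pipeline = PipelineResource(activities=[copy_activity])
adf_client.pipelines.create_or_update(resource_group, factory_name, "CopyEmpPipeline", pipeline)
```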

On the Deployment complete page, select Monitor to monitor the pipeline that you created.

Monitor

Monitor the running results


The application switches to the Monitor tab. You see the status of the pipeline on this tab. Select Refresh to refresh the list. Click the link under Pipeline name to view activity run details or rerun the pipeline.

On the Activity runs page, select the Details link (eyeglasses icon) under the Activity name column for more details about copy operation.
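
The same monitoring can be done from code. This sketch triggers the pipeline manually, polls its status, and then queries the activity runs, reusing adf_client and the illustrative pipeline name CopyEmpPipeline from the earlier sketches:

```python
import time
from datetime import datetime, timedelta, timezone

from azure.mgmt.datafactory.models import RunFilterParameters

# Trigger the pipeline and poll its status (same information as the Monitor tab).
run = adf_client.pipelines.create_run(resource_group, factory_name, "CopyEmpPipeline")
while True:
    pipeline_run = adf_client.pipeline_runs.get(resource_group, factory_name, run.run_id)
    print("Pipeline status:", pipeline_run.status)
    if pipeline_run.status not in ("Queued", "InProgress"):
        break
    time.sleep(15)

# Activity-run details, roughly what the eyeglasses icon shows in the UI.
now = datetime.now(timezone.utc)
filters = RunFilterParameters(
    last_updated_after=now - timedelta(hours=1),
    last_updated_before=now + timedelta(hours=1),
)
activity_runs = adf_client.activity_runs.query_by_pipeline_run(
    resource_group, factory_name, run.run_id, filters
)
for act in activity_runs.value:
    print(act.activity_name, act.status, act.output)
```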

Details

Verify the CSV file in the blob container.
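
A quick programmatic check, reusing the BlobServiceClient from the upload sketch and assuming blob02 as the destination container:

```python
# List the destination container to confirm the output file landed there.
dest = service.get_container_client("blob02")
for blob in dest.list_blobs():
    print(blob.name, blob.size)
```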

Create a new copy job that just copies the file as-is

Result in storage account

Test with table storage

Create a table.

Best practice

Design a scalable partitioning strategy for Azure Table storage (REST API) – Azure Storage | Microsoft Learn

Partition Key

Row Key

Our table storage

Use Storage Explorer to view it after it has been created in Azure.
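
A scripted version of the table setup with the azure-data-tables SDK; the table name movies, the entity fields, and the key values are illustrative, but they follow the PartitionKey/RowKey guidance linked above:

```python
from azure.data.tables import TableServiceClient

table_service = TableServiceClient.from_connection_string(conn_str)
table = table_service.create_table_if_not_exists("movies")   # illustrative table name

# PartitionKey groups related rows (the unit of scale); RowKey is unique per partition.
table.create_entity({
    "PartitionKey": "comedy",
    "RowKey": "0001",
    "Title": "Some Movie",
    "Year": 1999,
})
```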

Create a new pipeline source, use Azure Table storage, and preview it

Create a new blob container, blob03, and specify a CSV file name as the destination (sink)

Keep the file format settings.

Name the activity and review the summary; when deployment completes, select Monitor

Now check the CSV file in the blob container

Let's add a new row to the table and rerun the pipeline
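
A sketch of that step in code, reusing the table client and adf_client from above and assuming a pipeline named CopyTablePipeline (illustrative name) for the table-to-CSV copy:

```python
# Add one more entity, then rerun the pipeline so the new row shows up in the CSV.
table.create_entity({
    "PartitionKey": "drama",
    "RowKey": "0002",
    "Title": "Another Movie",
    "Year": 2004,
})

run = adf_client.pipelines.create_run(resource_group, factory_name, "CopyTablePipeline")
print("Started run:", run.run_id)
```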

Next up is data flow

3 Create Azure Data Factory data flows

Create a mapping data flow – Azure Data Factory | Microsoft Learn

Data flows are available both in Azure Data Factory and Azure Synapse Pipelines. This article applies to mapping data flows.

Mapping Data Flows provide a way to transform data at scale without any coding required.

You can add sample Data Flows from the template gallery. To browse the gallery, select the Author tab in Data Factory Studio and click the plus sign to choose Pipeline | Template Gallery.

Templates

You can also add data flows directly to your data factory without using a template. Select the Author tab in Data Factory Studio and click the plus sign to choose Data Flow | Data Flow.

4 Transform data using mapping data flows

Introductory training modules for Azure Data Factory

A summary of introductory training modules – Azure Data Factory | Microsoft Learn