Microsoft Fabric is an AI-powered, unified analytics platform that brings together all the data tools an organization needs into a single Software-as-a-Service (SaaS) environment. It centralizes all data in its built-in data lake, OneLake, and features workloads like Data Factory and Power BI to simplify the entire data lifecycle and enable faster, data-driven decisions.
Authorize the Connection to Microsoft Fabric
To authorize this service, use OAuth 2.0 to share specific data with Dataddo while keeping usernames, passwords, and other information private.
- On the Authorizers page, click on Authorize New Service and select your service.
- Follow the on-screen prompts to grant Dataddo the necessary permissions to access and retrieve your data.
- [Optional] Once your authorizer is created, click on it to change the label for easier identification.
Ensure that the account you're granting access to holds at least admin-level permissions. If necessary, assign the authorizer role to a team member who has the required permissions so they can authenticate the service for you.
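Dataddo performs the OAuth 2.0 consent flow for you through the on-screen prompts, so no code is needed. Purely for illustration, the sketch below builds the kind of Microsoft identity platform authorization URL such a flow redirects to. The client ID, redirect URI, and scope are placeholder assumptions, not real Dataddo values.

```python
from urllib.parse import urlencode

# Illustrative only: Dataddo runs this OAuth 2.0 flow for you behind the UI.
# Every value below is a placeholder assumption, not a real Dataddo credential.
TENANT = "common"
AUTHORIZE_ENDPOINT = f"https://login.microsoftonline.com/{TENANT}/oauth2/v2.0/authorize"

params = {
    "client_id": "00000000-0000-0000-0000-000000000000",  # hypothetical app registration
    "response_type": "code",                              # authorization-code grant
    "redirect_uri": "https://example.com/callback",       # hypothetical callback
    "response_mode": "query",
    "scope": "https://api.fabric.microsoft.com/.default offline_access",  # assumed scope
    "state": "random-anti-csrf-token",
}

# The user is sent to this URL to grant consent; credentials never reach the client.
consent_url = f"{AUTHORIZE_ENDPOINT}?{urlencode(params)}"
print(consent_url)
```

This is why OAuth keeps usernames and passwords private: the application only ever receives a short-lived authorization code and tokens, never the credentials themselves.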
Create a New Microsoft Fabric Destination
- On the Destinations page, click on the Create Destination button and select the destination from the list.
- Select your authorizer from the drop-down menu.
  Need to authorize another connection? Click on Add new Account in the drop-down menu during authorizer selection and follow the on-screen prompts. You can also go to the Authorizers tab and click on Add New Service.
- Select which Workspace and Lakehouse to load data to.
- [Optional] Direct the data into a specific folder or table, e.g. /Documents/.
- Name your destination and click on Save.
Create a Flow to Microsoft Fabric
- Navigate to Flows and click on Create Flow.
- Click on Connect Your Data to add your source(s).
- Click on Connect Your Data Destination to add the destination.
- Select your Filename Format. We support the following formats: JSON, JSONL, CSV, XML, XLSX, and Parquet.
  If you select CSV, you can also configure:
  - The CSV delimiter. You can select from the following delimiters: comma (,), semicolon (;), tab (\t), or pipe (|).
  - Whether null dates are shown as empty. If not enabled, null dates will be displayed as 1970-01-01.
  - Whether the first row is defined as the table header. Do not enable if you prefer raw data.
  If you select Parquet, you can also select one of the following timestamp units:
  - Millis is short for milliseconds.
  - Micros is short for microseconds.
  - Nanos is short for nanoseconds.
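Dataddo applies these export options server-side. As a sketch of their effect, the snippet below writes a CSV with a semicolon delimiter, a header row, and null dates rendered as 1970-01-01; the field names and sample rows are made up for illustration.

```python
import csv
import io

# Hypothetical sample rows; the second one has a null date.
rows = [
    {"order_id": 1, "shipped_at": "2024-05-01"},
    {"order_id": 2, "shipped_at": None},
]

NULL_DATE_AS_EMPTY = False  # the "null dates shown as empty" toggle, off by default

buf = io.StringIO()
writer = csv.writer(buf, delimiter=";")        # the semicolon delimiter option
writer.writerow(["order_id", "shipped_at"])    # first row defined as the table header
for row in rows:
    shipped = row["shipped_at"]
    if shipped is None:
        # With the toggle off, null dates fall back to the Unix epoch date.
        shipped = "" if NULL_DATE_AS_EMPTY else "1970-01-01"
    writer.writerow([row["order_id"], shipped])

print(buf.getvalue())
```

Flipping NULL_DATE_AS_EMPTY to True would leave the second row's date column blank instead of 1970-01-01.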
- Select your File Name pattern or use a custom one. For more information on custom names using date ranges, see the File Partitioning section below.
- Choose the write mode.
- Check the Data Preview to see if your configuration is correct.
- Name your flow and click on Create Flow to finish the setup.
File Partitioning
File partitioning splits large datasets into smaller, manageable partitions, based on criteria like date. This technique enhances data organization, query performance, and management by grouping subsets of data with shared attributes.
During flow creation, you can:
- Select one of the predefined file name patterns.
- Define your own custom name to suit your partitioning needs.
Example of a custom file name
When creating a custom file name, use variations of the offered file names.
For example, use a base file name and add a different date range pattern:
xyz_{{1d1|Ymd}}
Using this file name, Dataddo will create a new file each day with the date appended, e.g. xyz_20xx0101, xyz_20xx0102, etc.
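The daily expansion of such a pattern can be sketched as follows. This assumes the Ymd part of the pattern corresponds to a year-month-day date format and that one file is produced per day; the base name and start date are examples.

```python
from datetime import date, timedelta

# Hedged sketch of how a daily pattern like xyz_{{1d1|Ymd}} might expand:
# "Ymd" is assumed to mean a YYYYMMDD date format, one file per day.
def daily_file_name(base: str, day: date) -> str:
    return f"{base}_{day.strftime('%Y%m%d')}"

start = date(2024, 1, 1)  # example start date
names = [daily_file_name("xyz", start + timedelta(days=i)) for i in range(3)]
print(names)  # ['xyz_20240101', 'xyz_20240102', 'xyz_20240103']
```

Because each day's data lands in its own file, downstream queries can prune partitions by date instead of scanning one large file.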
For more information, see Dynamic File Naming Patterns.
Troubleshooting
Invalid resource Error
ERROR MESSAGE
invalid_client - AADSTS650057:+Invalid+resource
This error may be caused by one of the following issues:
- Missing API Permissions: Go to your app's registration in the Microsoft Entra ID portal and ensure you have added the correct API permissions for the resource you're connecting to. Admin Consent must be granted.
- Database View Connection: This issue is caused by attempting a connection to a Database View, which by itself is insufficient. Connect to the underlying physical database instance that hosts that view, not the view layer itself.