- 2 Minutes to read
Dataddo allows you to connect your sources and destinations in a few simple steps. To get the most out of your experience, let's take it from the beginning.
What Is a Data Source?
A data source represents a connection to any third-party system from which data is extracted, such as SaaS apps (e.g. Salesforce, NetSuite, HubSpot, Stripe, Klaviyo, Facebook, Google Analytics 4), databases (e.g. MySQL, Postgres, SQL Server), cloud data warehouses (e.g. BigQuery, Redshift, Snowflake), or file storages (e.g. S3, SFTP). The initial setup of a source is configured via Connectors.
For more information see Data Sources.
What Is a Data Destination?
A data destination represents an endpoint to which data from your sources is delivered:
- Dashboarding applications (e.g. Tableau, Power BI, Qlik, Looker Studio).
- Data storages (e.g. BigQuery, Redshift, Snowflake, MySQL, Postgres, SQL Server, S3, SFTP).
- Applications (e.g. HubSpot, Salesforce, NetSuite, Zoho CRM).
Dataddo offers embedded storage called SmartCache, designed as a simple solution for situations where large data storage volumes or complex data transformations are not required. SmartCache is not intended to replace a data warehouse. As a rule of thumb, if you need to store more than 100,000 rows per data source, or if you require complex data transformations beyond simple joins (see Data Blending and Data Union), we recommend using a data warehouse solution.
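The rule of thumb above can be sketched as a small decision helper. This is purely illustrative, not part of the Dataddo product; the function name and inputs are invented for the example.

```python
# Illustrative sketch (not a Dataddo API) of the rule of thumb above:
# use embedded SmartCache only for small data with simple transformations,
# otherwise prefer a dedicated data warehouse.

def recommend_storage(rows_per_source: int, needs_complex_transforms: bool) -> str:
    """Return a suggested storage backend for a single data source."""
    if rows_per_source > 100_000 or needs_complex_transforms:
        return "data warehouse"
    return "SmartCache"

print(recommend_storage(50_000, False))   # small data, simple joins
print(recommend_storage(250_000, False))  # too many rows for SmartCache
```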
For more information see Data Destinations.
What Is a Data Flow?
One of the great benefits of the Dataddo architecture is that data extraction and data delivery operations are decoupled. In practice this means that you can define loose associations between data sources and data destinations. For example, you can route a single extraction of Salesforce data to multiple data warehouses, and vice versa.
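The many-to-many routing described above can be modeled as a simple list of source-to-destination pairs. The object names here are hypothetical and do not correspond to Dataddo entities; the sketch only illustrates how one extraction can feed several destinations.

```python
# Illustrative model of decoupled flows: each flow links one extraction
# (source) to one delivery endpoint (destination), so the same source
# can appear in many flows, and so can the same destination.
flows = [
    {"source": "salesforce_opportunities", "destination": "bigquery"},
    {"source": "salesforce_opportunities", "destination": "snowflake"},
    {"source": "ga4_sessions", "destination": "bigquery"},
]

# All destinations fed by the single Salesforce extraction:
salesforce_targets = [f["destination"] for f in flows
                      if f["source"] == "salesforce_opportunities"]
print(salesforce_targets)  # ['bigquery', 'snowflake']
```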
For more information see Data Flows.
Configuring Your First Data Pipeline
Assuming you have successfully set up your Dataddo account and understand the basics, let's proceed with creating your first data pipeline. The process itself is very straightforward.
Step 1: Create a Data Source.
Step 2: Create a Data Destination.
Step 3: Create a Data Flow.
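The three steps above can be sketched as follows. Note that these helper functions are entirely hypothetical and do not correspond to a real Dataddo API; in practice, each step is configured in the Dataddo web application.

```python
# Purely illustrative sketch of the three-step pipeline setup.
# All names here are invented for the example.

def create_source(connector: str) -> dict:
    return {"type": "source", "connector": connector}

def create_destination(kind: str) -> dict:
    return {"type": "destination", "kind": kind}

def create_flow(source: dict, destination: dict) -> dict:
    # A flow links a configured source to a configured destination.
    return {"source": source, "destination": destination}

source = create_source("google-analytics-4")   # Step 1
destination = create_destination("bigquery")   # Step 2
flow = create_flow(source, destination)        # Step 3
print(flow["source"]["connector"])  # google-analytics-4
```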
Feel free to contact us and we will help you with the setup. To speed up the resolution of your issue, make sure you provide us with sufficient information.