Version: 0.14.0

ABS Data Lake

This connector ingests Azure Blob Storage (abbreviated to abs) datasets into DataHub. It allows mapping an individual file or a folder of files to a dataset in DataHub. To specify the group of files that form a dataset, use the path_specs configuration in your ingestion recipe. Refer to the Path Specs section below for more details.

Concept Mapping

This ingestion source maps the following Source System Concepts to DataHub Concepts:

Source Concept                         | DataHub Concept | Notes
"abs"                                  | Data Platform   |
abs blob / Folder containing abs blobs | Dataset         |
abs container                          | Container       | Subtype: Folder

This connector supports both local files and those stored on Azure Blob Storage (which must be identified using the prefix http(s)://<account>.blob.core.windows.net/ or azure://).

Supported file types

Supported file types are as follows:

  • CSV (*.csv)
  • TSV (*.tsv)
  • JSONL (*.jsonl)
  • JSON (*.json)
  • Parquet (*.parquet)
  • Apache Avro (*.avro)

Schemas for Parquet and Avro files are extracted as provided.

Schemas for schemaless formats (CSV, TSV, JSONL, JSON) are inferred. For CSV, TSV and JSONL files, the first 100 rows are considered by default; this can be controlled via the max_rows recipe parameter (see below). JSON file schemas are inferred from the entire file (given the difficulty of extracting only the first few objects of the file), which may impact performance. We are working on using iterator-based JSON parsers to avoid reading in the entire JSON object.
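
For example, a recipe could raise the row sample used for CSV/TSV/JSONL schema inference; a minimal sketch, with an illustrative path:

source:
  type: abs
  config:
    max_rows: 1000  # infer CSV/TSV/JSONL schemas from the first 1000 rows instead of the default 100
    path_specs:
      - include: "https://storageaccountname.blob.core.windows.net/my-container/data/*.csv"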

Profiling

Profiling is not available in the current release.

Important Capabilities

Capability              | Notes
Data Profiling          | Optionally enabled via configuration
Detect Deleted Entities | Optionally enabled via stateful_ingestion.remove_stale_metadata
Extract Tags            | Can extract ABS object/container tags if enabled
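
These capabilities map to recipe options documented under Config Details below. For example, stale-metadata removal and tag extraction could be enabled as in this minimal sketch (the pipeline name and path are illustrative; stateful ingestion also relies on a DataHub sink or datahub_api for storing state):

pipeline_name: "abs_ingestion"
source:
  type: abs
  config:
    path_specs:
      - include: "https://storageaccountname.blob.core.windows.net/my-container/data/*.csv"
    stateful_ingestion:
      enabled: true
      remove_stale_metadata: true
    use_abs_blob_tags: true            # create DataHub tags from ABS blob tags
    use_abs_container_properties: true # create DataHub tags from ABS container properties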

CLI based Ingestion

Install the Plugin

The abs source works out of the box with acryl-datahub.

Starter Recipe

Check out the following recipe to get started with ingestion! See below for full configuration options.

For general pointers on writing and running a recipe, see our main recipe guide.

source:
  type: abs
  config:
    path_specs:
      - include: "https://storageaccountname.blob.core.windows.net/covid19-lake/covid_knowledge_graph/csv/nodes/*.*"

    azure_config:
      account_name: "*****"
      sas_token: "*****"
      container_name: "covid_knowledge_graph"
    env: "PROD"

# sink configs
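
To complete the recipe, add a sink. For example, a datahub-rest sink pointing at your DataHub instance (the server URL below is a placeholder):

sink:
  type: datahub-rest
  config:
    server: "http://localhost:8080"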

Config Details

Note that a . is used to denote nested fields in the YAML recipe.

Field | Description
path_specs 
array
List of PathSpec objects. See below for details about PathSpec.
path_specs.PathSpec
PathSpec
path_specs.PathSpec.include 
string
Path to table. The named variable {table} is used to mark the folder containing the dataset. In the absence of {table}, a file-level dataset will be created. Check the examples below for more details.
path_specs.PathSpec.allow_double_stars
boolean
Allow double stars in the include path. This can affect performance significantly if enabled
Default: False
path_specs.PathSpec.default_extension
string
For files without an extension, the specified file type will be assumed. If this is not set, files without extensions will be skipped.
path_specs.PathSpec.enable_compression
boolean
Enable or disable processing compressed files. Currently .gz and .bz files are supported.
Default: True
path_specs.PathSpec.sample_files
boolean
Instead of listing all the files, only a handful of sample files are used to infer the schema. File count and file size calculation will be disabled. This can affect performance significantly if enabled.
Default: True
path_specs.PathSpec.table_name
string
Display name of the dataset. Can be a combination of named variables from the include path and literal strings.
path_specs.PathSpec.exclude
array
List of paths in glob pattern to be excluded while scanning for datasets.
path_specs.PathSpec.exclude.string
string
path_specs.PathSpec.file_types
array
Only files with the extensions specified here (a subset of the default value) will be scanned to create datasets. Other files will be omitted.
Default: ['csv', 'tsv', 'json', 'parquet', 'avro']
path_specs.PathSpec.file_types.string
string
add_partition_columns_to_schema
boolean
Whether to add partition fields to the schema.
Default: False
max_rows
integer
Maximum number of rows to use when inferring schemas for TSV and CSV files.
Default: 100
number_of_files_to_sample
integer
Number of files to list to sample for schema inference. This will be ignored if sample_files is set to False in the pathspec.
Default: 100
platform
string
The platform that this source connects to (either 'abs' or 'file'). If not specified, the platform will be inferred from the path_specs.
Default:
platform_instance
string
The instance of the platform that all assets produced by this recipe belong to
spark_config
object
Spark configuration properties to set on the SparkSession. Put config property names into quotes. For example: '"spark.executor.memory": "2g"'
Default: {}
spark_driver_memory
string
Max amount of memory to grant Spark.
Default: 4g
use_abs_blob_properties
boolean
Whether to create tags in DataHub from the ABS blob properties
use_abs_blob_tags
boolean
Whether to create tags in DataHub from the ABS blob tags
use_abs_container_properties
boolean
Whether to create tags in DataHub from the ABS container properties
verify_ssl
One of boolean, string
Either a boolean, in which case it controls whether we verify the server's TLS certificate, or a string, in which case it must be a path to a CA bundle to use.
Default: True
env
string
The environment that all assets produced by this connector belong to
Default: PROD
azure_config
AzureConnectionConfig
Azure configuration
azure_config.account_name 
string
Name of the Azure storage account. See Microsoft official documentation on how to create a storage account.
azure_config.container_name 
string
Azure storage account container name.
azure_config.account_key
string
Azure storage account access key that can be used as a credential. An account key, a SAS token or a client secret is required for authentication.
azure_config.base_path
string
Base folder in hierarchical namespaces to start from.
Default: /
azure_config.client_id
string
Azure client (Application) ID required when a client_secret is used as a credential.
azure_config.client_secret
string
Azure client secret that can be used as a credential. An account key, a SAS token or a client secret is required for authentication.
azure_config.sas_token
string
Azure storage account Shared Access Signature (SAS) token that can be used as a credential. An account key, a SAS token or a client secret is required for authentication.
azure_config.tenant_id
string
Azure tenant (Directory) ID required when a client_secret is used as a credential.
profile_patterns
AllowDenyPattern
regex patterns for tables to profile
Default: {'allow': ['.*'], 'deny': [], 'ignoreCase': True}
profile_patterns.ignoreCase
boolean
Whether to ignore case sensitivity during pattern matching.
Default: True
profile_patterns.allow
array
List of regex patterns to include in ingestion
Default: ['.*']
profile_patterns.allow.string
string
profile_patterns.deny
array
List of regex patterns to exclude from ingestion.
Default: []
profile_patterns.deny.string
string
profiling
DataLakeProfilerConfig
Data profiling configuration
Default: {'enabled': False, 'operation_config': {'lower_fre...
profiling.enabled
boolean
Whether profiling should be done.
Default: False
profiling.include_field_distinct_value_frequencies
boolean
Whether to profile for distinct value frequencies.
Default: True
profiling.include_field_histogram
boolean
Whether to profile for the histogram for numeric fields.
Default: True
profiling.include_field_max_value
boolean
Whether to profile for the max value of numeric columns.
Default: True
profiling.include_field_mean_value
boolean
Whether to profile for the mean value of numeric columns.
Default: True
profiling.include_field_median_value
boolean
Whether to profile for the median value of numeric columns.
Default: True
profiling.include_field_min_value
boolean
Whether to profile for the min value of numeric columns.
Default: True
profiling.include_field_null_count
boolean
Whether to profile for the number of nulls for each column.
Default: True
profiling.include_field_quantiles
boolean
Whether to profile for the quantiles of numeric columns.
Default: True
profiling.include_field_sample_values
boolean
Whether to profile for the sample values for all columns.
Default: True
profiling.include_field_stddev_value
boolean
Whether to profile for the standard deviation of numeric columns.
Default: True
profiling.max_number_of_fields_to_profile
integer
A positive integer that specifies the maximum number of columns to profile for any table. None implies all columns. The cost of profiling goes up significantly as the number of columns to profile goes up.
profiling.profile_table_level_only
boolean
Whether to perform profiling at table-level only or include column-level profiling as well.
Default: False
profiling.operation_config
OperationConfig
Experimental feature. To specify operation configs.
profiling.operation_config.lower_freq_profile_enabled
boolean
Whether to do profiling at a lower frequency or not. This does not do any scheduling; it just adds additional checks for when not to run profiling.
Default: False
profiling.operation_config.profile_date_of_month
integer
Number between 1 and 31 for the date of the month (both inclusive). If not specified, this field does not take effect.
profiling.operation_config.profile_day_of_week
integer
Number between 0 and 6 for the day of the week (both inclusive). 0 is Monday and 6 is Sunday. If not specified, this field does not take effect.
stateful_ingestion
StatefulStaleMetadataRemovalConfig
Base specialized config for Stateful Ingestion with stale metadata removal capability.
stateful_ingestion.enabled
boolean
Whether or not to enable stateful ingestion. Default: True if a pipeline_name is set and either a datahub-rest sink or datahub_api is specified, otherwise False
Default: False
stateful_ingestion.remove_stale_metadata
boolean
Soft-deletes the entities present in the last successful run but missing in the current run with stateful_ingestion enabled.
Default: True
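
Putting several of these options together, a recipe that authenticates with an Azure AD application (client secret) instead of a SAS token might look like the following sketch; all identifiers are placeholders:

source:
  type: abs
  config:
    path_specs:
      - include: "https://storageaccountname.blob.core.windows.net/my-container/{table}/*.parquet"
    azure_config:
      account_name: "storageaccountname"
      container_name: "my-container"
      client_id: "<application-id>"
      client_secret: "<client-secret>"
      tenant_id: "<tenant-id>"
    env: "PROD"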

Path Specs

Path Specs (path_specs) is a list of Path Spec (path_spec) objects, where each individual path_spec represents one or more datasets. The include path (path_spec.include) represents a formatted path to the dataset. This path must end with *.* or *.[ext] to represent the leaf level. If *.[ext] is provided, then only files with the specified extension type will be scanned. ".[ext]" can be any of the supported file types. Refer to example 1 below for more details.

All folder levels need to be specified in the include path. You can use /*/ to represent a folder level and avoid specifying the exact folder name. To map a folder as a dataset, use the {table} placeholder to represent the folder level for which the dataset is to be created. For a partitioned dataset, you can use the placeholder {partition_key[i]} to represent the name of the ith partition and {partition[i]} to represent the value of the ith partition. During ingestion, i will be used to match the partition_key to the partition. Refer to examples 2 and 3 below for more details.

Exclude paths (path_spec.exclude) can be used to ignore paths that are not relevant to the current path_spec. This path cannot have named variables ({}). The exclude path can have ** to represent multiple folder levels. Refer to example 4 below for more details.

Refer to example 5 if your container has a more complex dataset representation.

Additional points to note

  • Folder names should not contain the characters {, }, *, or /.
  • The named variable {folder} is reserved for internal use; please do not use it in your path specs.

Path Specs - Examples

Example 1 - Individual file as Dataset

Container structure:

test-container
├── employees.csv
├── departments.json
└── food_items.csv

Path specs config to ingest employees.csv and food_items.csv as datasets:

path_specs:
- include: https://storageaccountname.blob.core.windows.net/test-container/*.csv

This will automatically ignore the departments.json file. To include it, use *.* instead of *.csv.
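
For example, to ingest all three files as individual datasets:

path_specs:
- include: https://storageaccountname.blob.core.windows.net/test-container/*.*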

Example 2 - Folder of files as Dataset (without Partitions)

Container structure:

test-container
└── offers
    ├── 1.avro
    └── 2.avro

Path specs config to ingest folder offers as dataset:

path_specs:
- include: https://storageaccountname.blob.core.windows.net/test-container/{table}/*.avro

{table} represents the folder for which the dataset will be created.

Example 3 - Folder of files as Dataset (with Partitions)

Container structure:

test-container
├── orders
│   └── year=2022
│       └── month=2
│           ├── 1.parquet
│           └── 2.parquet
└── returns
    └── year=2021
        └── month=2
            └── 1.parquet

Path specs config to ingest folders orders and returns as datasets:

path_specs:
- include: https://storageaccountname.blob.core.windows.net/test-container/{table}/{partition_key[0]}={partition[0]}/{partition_key[1]}={partition[1]}/*.parquet

One can also use include: https://storageaccountname.blob.core.windows.net/test-container/{table}/*/*/*.parquet here; however, the above format is preferred as it allows declaring partitions explicitly.

Example 4 - Folder of files as Dataset (with Partitions), and Exclude Filter

Container structure:

test-container
├── orders
│   └── year=2022
│       └── month=2
│           ├── 1.parquet
│           └── 2.parquet
└── tmp_orders
    └── year=2021
        └── month=2
            └── 1.parquet


Path specs config to ingest folder orders as dataset but not folder tmp_orders:

path_specs:
- include: https://storageaccountname.blob.core.windows.net/test-container/{table}/{partition_key[0]}={partition[0]}/{partition_key[1]}={partition[1]}/*.parquet
  exclude:
    - "**/tmp_orders/**"

Example 5 - Advanced - Either Individual file OR Folder of files as Dataset

Container structure:

test-container
├── customers
│   ├── part1.json
│   ├── part2.json
│   ├── part3.json
│   └── part4.json
├── employees.csv
├── food_items.csv
├── tmp_10101000.csv
└── orders
    └── year=2022
        └── month=2
            ├── 1.parquet
            ├── 2.parquet
            └── 3.parquet

Path specs config:

path_specs:
- include: https://storageaccountname.blob.core.windows.net/test-container/*.csv
  exclude:
    - "**/tmp_10101000.csv"
- include: https://storageaccountname.blob.core.windows.net/test-container/{table}/*.json
- include: https://storageaccountname.blob.core.windows.net/test-container/{table}/{partition_key[0]}={partition[0]}/{partition_key[1]}={partition[1]}/*.parquet

The above config has 3 path_specs and will ingest the following datasets:

  • employees.csv - Single File as Dataset
  • food_items.csv - Single File as Dataset
  • customers - Folder as Dataset
  • orders - Folder as Dataset and will ignore file tmp_10101000.csv

Valid path_specs.include

https://storageaccountname.blob.core.windows.net/my-container/foo/tests/bar.avro # single file table   
https://storageaccountname.blob.core.windows.net/my-container/foo/tests/*.* # multiple file level tables
https://storageaccountname.blob.core.windows.net/my-container/foo/tests/{table}/*.avro #table without partition
https://storageaccountname.blob.core.windows.net/my-container/foo/tests/{table}/*/*.avro #table where partitions are not specified
https://storageaccountname.blob.core.windows.net/my-container/foo/tests/{table}/*.* # table where neither partitions nor file type are specified
https://storageaccountname.blob.core.windows.net/my-container/{dept}/tests/{table}/*.avro # specifying keywords to be used in display name
https://storageaccountname.blob.core.windows.net/my-container/{dept}/tests/{table}/{partition_key[0]}={partition[0]}/{partition_key[1]}={partition[1]}/*.avro # specify partition key and value format
https://storageaccountname.blob.core.windows.net/my-container/{dept}/tests/{table}/{partition[0]}/{partition[1]}/{partition[2]}/*.avro # specify partition value only format
https://storageaccountname.blob.core.windows.net/my-container/{dept}/tests/{table}/{partition[0]}/{partition[1]}/{partition[2]}/*.* # for all extensions
https://storageaccountname.blob.core.windows.net/my-container/*/{table}/{partition[0]}/{partition[1]}/{partition[2]}/*.* # table is present at 2 levels down in container
https://storageaccountname.blob.core.windows.net/my-container/*/*/{table}/{partition[0]}/{partition[1]}/{partition[2]}/*.* # table is present at 3 levels down in container

Valid path_specs.exclude
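
A few illustrative exclude patterns, following the glob conventions described above (the specific paths are assumed examples, not an exhaustive list):

- "**/tests/**"
- "https://storageaccountname.blob.core.windows.net/my-container/hr/**"
- "**/tests/*.csv"
- "https://storageaccountname.blob.core.windows.net/my-container/foo/*/my_table/**"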

If you would like to write a more complicated function for resolving file names, then a {transformer} would be a good fit.

caution

Specify as long a fixed prefix (without /*/) as possible in path_specs.include. This will reduce scanning time and cost.
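
For instance, the first include below keeps a long fixed prefix before any wildcards, while the second forces the scanner to list every top-level folder (both paths are illustrative):

path_specs:
# preferred: a long fixed prefix before any wildcard folder levels
- include: https://storageaccountname.blob.core.windows.net/my-container/finance/invoices/{table}/*.parquet
# slower: wildcard folder levels near the container root widen the listing
- include: https://storageaccountname.blob.core.windows.net/my-container/*/*/{table}/*.parquet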

caution

Running profiling against many tables or over many rows can run up significant costs. While we've done our best to limit the expensiveness of the queries the profiler runs, you should be prudent about the set of tables profiling is enabled on or the frequency of the profiling runs.
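
One way to keep profiling costs in check is to restrict it in the recipe, for example by capping the number of profiled columns and limiting which datasets are profiled; a sketch using the options documented above (the allow pattern is hypothetical):

source:
  type: abs
  config:
    profiling:
      enabled: true
      max_number_of_fields_to_profile: 10  # cap the number of columns profiled per dataset
    profile_patterns:
      allow:
        - ".*sales.*"  # hypothetical pattern; only matching datasets are profiled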

caution

If you are ingesting datasets from Azure Blob Storage, we recommend running the ingestion on a server in the same region to avoid high egress costs.

Compatibility

Profiles are computed with PyDeequ, which relies on PySpark. Therefore, for computing profiles, we currently require Spark 3.0.3 with Hadoop 3.2 to be installed and the SPARK_HOME and SPARK_VERSION environment variables to be set. The Spark+Hadoop binary can be downloaded from the Apache Spark downloads page.

For an example guide on setting up PyDeequ on AWS, see this guide.

caution

From Spark 3.2.0 onwards, the Avro reader fails on column names that do not start with a letter or that contain characters other than letters, numbers, and underscores (see https://github.com/apache/spark/blob/72c62b6596d21e975c5597f8fff84b1a9d070a02/connector/avro/src/main/scala/org/apache/spark/sql/avro/AvroFileFormat.scala#L158). Avro files that contain such columns won't be profiled.

Code Coordinates

  • Class Name: datahub.ingestion.source.abs.source.ABSSource
  • Browse on GitHub

Questions

If you've got any questions on configuring ingestion for ABS Data Lake, feel free to ping us on our Slack.