
Iceberg

Support Status: Testing

Important Capabilities

| Capability | Status | Notes |
|------------|--------|-------|
| Data Profiling | ✅ | Optionally enabled via configuration. |
| Descriptions | ✅ | Enabled by default. |
| Detect Deleted Entities | ✅ | Enabled via stateful ingestion. |
| Domains | ❌ | Currently not supported. |
| Extract Ownership | ✅ | Automatically ingests ownership information from table properties, based on user_ownership_property and group_ownership_property. |
| Partition Support | ❌ | Currently not supported. |
| Platform Instance | ✅ | Optionally enabled via configuration; an Iceberg instance represents the catalog name where the table is stored. |

Integration Details

The DataHub Iceberg source plugin extracts metadata from Iceberg tables stored in a distributed or local file system. Typically, Iceberg tables are stored in a distributed file system such as S3 or Azure Data Lake Storage (ADLS) and registered in a catalog. There are various catalog implementations: filesystem-based, RDBMS-based, and REST-based. This source plugin relies on the pyiceberg library.
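
Under the hood, the connector uses pyiceberg to connect to the configured catalog and walk its namespaces and tables. The following sketch (not part of the plugin itself; the catalog name and connection properties are illustrative, matching the starter recipe below) shows the pyiceberg surface the plugin builds on:

```python
from pyiceberg.catalog import load_catalog

# Load the catalog by name with the same key/value properties that the
# recipe's `catalog` section would contain (pyiceberg configuration keys).
catalog = load_catalog(
    "my_rest_catalog",
    **{
        "type": "rest",
        "uri": "http://localhost:8181",
        "s3.endpoint": "http://localhost:9000",
        "s3.access-key-id": "admin",
        "s3.secret-access-key": "password",
    },
)

# Walk namespaces and tables; this is the same metadata surface the
# source plugin reads schemas and table properties from.
for namespace in catalog.list_namespaces():
    for identifier in catalog.list_tables(namespace):
        table = catalog.load_table(identifier)
        print(identifier, table.schema(), table.properties)
```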

CLI based Ingestion

Install the Plugin

The iceberg source works out of the box with acryl-datahub.

Starter Recipe

Check out the following recipe to get started with ingestion! See below for full configuration options.

For general pointers on writing and running a recipe, see our main recipe guide.

```yaml
source:
  type: "iceberg"
  config:
    env: PROD
    catalog:
      # REST catalog configuration example using S3 storage
      my_rest_catalog:
        type: rest
        # Catalog configuration follows pyiceberg's documentation (https://py.iceberg.apache.org/configuration)
        uri: http://localhost:8181
        s3.access-key-id: admin
        s3.secret-access-key: password
        s3.region: us-east-1
        warehouse: s3a://warehouse/wh/
        s3.endpoint: http://localhost:9000
      # SQL catalog configuration example using Azure Data Lake Storage and a PostgreSQL database
      # my_sql_catalog:
      #   type: sql
      #   uri: postgresql+psycopg2://user:password@sqldatabase.postgres.database.azure.com:5432/icebergcatalog
      #   adlfs.tenant-id: <Azure tenant ID>
      #   adlfs.account-name: <Azure storage account name>
      #   adlfs.client-id: <Azure Client/Application ID>
      #   adlfs.client-secret: <Azure Client Secret>
    platform_instance: my_rest_catalog
    table_pattern:
      allow:
        - marketing.*
    profiling:
      enabled: true

sink:
  # sink configs
```
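
A recipe like this is typically executed with the DataHub CLI (for example, datahub ingest -c recipe.yaml). As an alternative sketch, the same configuration can also be run programmatically through DataHub's Python Pipeline API; the catalog and server values below are placeholders:

```python
from datahub.ingestion.run.pipeline import Pipeline

# Mirrors the YAML recipe above as a Python dict; values are placeholders.
pipeline = Pipeline.create(
    {
        "source": {
            "type": "iceberg",
            "config": {
                "env": "PROD",
                "catalog": {
                    "my_rest_catalog": {
                        "type": "rest",
                        "uri": "http://localhost:8181",
                    }
                },
                "platform_instance": "my_rest_catalog",
            },
        },
        "sink": {
            "type": "datahub-rest",
            "config": {"server": "http://localhost:8080"},
        },
    }
)
pipeline.run()
pipeline.raise_from_status()
```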

Config Details

Note that a `.` is used to denote nested fields in the YAML recipe.

| Field | Type | Description |
|-------|------|-------------|
| catalog | map(str,object) | Catalog configuration; follows pyiceberg's documentation (https://py.iceberg.apache.org/configuration). |
| group_ownership_property | string | Iceberg table property to look for a CorpGroup owner. Can only hold a single group value. If the property has no value, no owner information will be emitted. |
| platform_instance | string | The instance of the platform that all assets produced by this recipe belong to. This should be unique within the platform. See https://datahubproject.io/docs/platform-instances/ for more details. |
| processing_threads | integer | Number of threads used to process tables. Default: 1 |
| user_ownership_property | string | Iceberg table property to look for a CorpUser owner. Can only hold a single user value. If the property has no value, no owner information will be emitted. Default: owner |
| env | string | The environment that all assets produced by this connector belong to. Default: PROD |
| namespace_pattern | AllowDenyPattern | Regex patterns for namespaces to filter in ingestion. Default: {'allow': ['.*'], 'deny': [], 'ignoreCase': True} |
| namespace_pattern.ignoreCase | boolean | Whether to ignore case during pattern matching. Default: True |
| namespace_pattern.allow | array | List of regex patterns to include in ingestion. Default: ['.*'] |
| namespace_pattern.allow.string | string | |
| namespace_pattern.deny | array | List of regex patterns to exclude from ingestion. Default: [] |
| namespace_pattern.deny.string | string | |
| table_pattern | AllowDenyPattern | Regex patterns for tables to filter in ingestion. Default: {'allow': ['.*'], 'deny': [], 'ignoreCase': True} |
| table_pattern.ignoreCase | boolean | Whether to ignore case during pattern matching. Default: True |
| table_pattern.allow | array | List of regex patterns to include in ingestion. Default: ['.*'] |
| table_pattern.allow.string | string | |
| table_pattern.deny | array | List of regex patterns to exclude from ingestion. Default: [] |
| table_pattern.deny.string | string | |
| profiling | IcebergProfilingConfig | Default: {'enabled': False, 'include_field_null_count': True, ...} |
| profiling.enabled | boolean | Whether profiling should be done. Default: False |
| profiling.include_field_max_value | boolean | Whether to profile for the max value of numeric columns. Default: True |
| profiling.include_field_min_value | boolean | Whether to profile for the min value of numeric columns. Default: True |
| profiling.include_field_null_count | boolean | Whether to profile for the number of nulls for each column. Default: True |
| profiling.operation_config | OperationConfig | Experimental feature. To specify operation configs. |
| profiling.operation_config.lower_freq_profile_enabled | boolean | Whether to profile at a lower frequency. This does not do any scheduling; it only adds additional checks for when not to run profiling. Default: False |
| profiling.operation_config.profile_date_of_month | integer | Number between 1 and 31 for the date of the month (both inclusive). If not specified, defaults to Nothing and this field does not take effect. |
| profiling.operation_config.profile_day_of_week | integer | Number between 0 and 6 for the day of the week (both inclusive). 0 is Monday and 6 is Sunday. If not specified, defaults to Nothing and this field does not take effect. |
| stateful_ingestion | StatefulStaleMetadataRemovalConfig | Iceberg Stateful Ingestion Config. |
| stateful_ingestion.enabled | boolean | Whether or not to enable stateful ingestion. True if a pipeline_name is set and either a datahub-rest sink or datahub_api is specified, otherwise False. Default: False |
| stateful_ingestion.remove_stale_metadata | boolean | Soft-deletes entities present in the last successful run but missing in the current run, when stateful_ingestion is enabled. Default: True |
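
To illustrate how these nested options compose, here is a sketch of a configuration that enables stateful ingestion (which requires a pipeline_name) together with the experimental lower-frequency profiling schedule; all names and values are illustrative:

```python
from datahub.ingestion.run.pipeline import Pipeline

pipeline = Pipeline.create(
    {
        # A pipeline_name is required for stateful ingestion to persist
        # state between runs; the name itself is illustrative.
        "pipeline_name": "iceberg_marketing_ingestion",
        "source": {
            "type": "iceberg",
            "config": {
                "catalog": {
                    "my_rest_catalog": {"type": "rest", "uri": "http://localhost:8181"}
                },
                "table_pattern": {"allow": ["marketing.*"]},
                # Soft-delete tables that disappeared since the last run.
                "stateful_ingestion": {"enabled": True, "remove_stale_metadata": True},
                "profiling": {
                    "enabled": True,
                    # Experimental: only allow profiling on Mondays (0 = Monday).
                    "operation_config": {
                        "lower_freq_profile_enabled": True,
                        "profile_day_of_week": 0,
                    },
                },
            },
        },
        "sink": {
            "type": "datahub-rest",
            "config": {"server": "http://localhost:8080"},
        },
    }
)
pipeline.run()
pipeline.raise_from_status()
```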

Concept Mapping

This ingestion source maps the following Source System Concepts to DataHub Concepts:

| Source Concept | DataHub Concept | Notes |
|----------------|-----------------|-------|
| iceberg | Data Platform | |
| Table | Dataset | An Iceberg table is registered inside a catalog using a name, and the catalog is responsible for creating, dropping, and renaming tables. Catalogs manage a collection of tables that are usually grouped into namespaces. The name of a table is mapped to a Dataset name. If a Platform Instance is configured, it will be used as a prefix: `<platform_instance>.my.namespace.table`. |
| Table property | User (a.k.a. CorpUser) | The value of a table property can be used as the name of a CorpUser owner. This table property name can be configured with the source option user_ownership_property. |
| Table property | CorpGroup | The value of a table property can be used as the name of a CorpGroup owner. This table property name can be configured with the source option group_ownership_property. |
| Table parent folders (excluding warehouse catalog location) | Container | Available in a future release. |
| Table schema | SchemaField | Maps to the fields defined within the Iceberg table schema definition. |
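
For the two ownership mappings above to emit owners, the configured table property must actually be set on the Iceberg table. A minimal sketch using pyiceberg (catalog, table, and user names are illustrative), targeting the default user_ownership_property of owner:

```python
from pyiceberg.catalog import load_catalog

catalog = load_catalog(
    "my_rest_catalog", **{"type": "rest", "uri": "http://localhost:8181"}
)
table = catalog.load_table("my.namespace.table")  # illustrative identifier

# Set the property the connector reads for CorpUser ownership; "owner"
# matches the default user_ownership_property. A group owner would use
# the property named by group_ownership_property instead.
with table.transaction() as transaction:
    transaction.set_properties(owner="jdoe")  # "jdoe" is illustrative

# Reload to confirm the property is visible in the table metadata.
print(catalog.load_table("my.namespace.table").properties.get("owner"))
```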

Troubleshooting

Exceptions while increasing processing_threads

Each processing thread opens several files/sockets to download manifest files from blob storage. If you see exceptions after increasing the processing_threads configuration parameter, try raising the limit on open files (e.g., using ulimit -n on Linux).
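
On Unix systems, the open-file limit can also be inspected and raised from the same Python process that runs the ingestion, using the standard-library resource module; the 4096 target below is only an example:

```python
import resource  # Unix-only standard-library module

# Inspect the current soft/hard limits on open file descriptors.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"open-file limit: soft={soft}, hard={hard}")

# Raise the soft limit toward the hard limit; an unprivileged process
# cannot exceed the hard limit. 4096 is just an example target.
resource.setrlimit(resource.RLIMIT_NOFILE, (min(4096, hard), hard))
```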

Code Coordinates

  • Class Name: datahub.ingestion.source.iceberg.iceberg.IcebergSource
  • Browse on GitHub

Questions

If you've got any questions on configuring ingestion for Iceberg, feel free to ping us on our Slack.