BigQuery
There are 2 sources that provide integration with BigQuery:

Source Module | Documentation |
---|---|
bigquery | This plugin extracts metadata for databases, schemas, and tables, column types, profiling statistics, and table-level lineage. See the "Module bigquery" section below. |
bigquery-usage | This plugin extracts usage statistics for queries, tables, and columns. See the "Module bigquery-usage" section below. |

note

To get all metadata from BigQuery you need to use two plugins: bigquery and bigquery-usage. Both of them are described on this page. They require 2 separate recipes. We understand this is not ideal and we plan to make this easier in the future.
Module bigquery
Important Capabilities
Capability | Status | Notes |
---|---|---|
Data Profiling | ✅ | Optionally enabled via configuration |
Dataset Usage | ❌ | Not provided by this module, use bigquery-usage for that. |
Descriptions | ✅ | Enabled by default |
Detect Deleted Entities | ✅ | Enabled via stateful ingestion |
Domains | ✅ | Supported via the domain config field |
Platform Instance | ❌ | BigQuery doesn't need platform instances because project ids in BigQuery are globally unique. |
Table-Level Lineage | ✅ | Enabled by default |
This plugin extracts the following:
- Metadata for databases, schemas, and tables
- Column types associated with each table
- Table, row, and column statistics via optional SQL profiling
- Table level lineage.
Install the Plugin
pip install 'acryl-datahub[bigquery]'
Quickstart Recipe
Check out the following recipe to get started with ingestion! See below for full configuration options.
For general pointers on writing and running a recipe, see our main recipe guide
source:
  type: bigquery
  config:
    # Coordinates
    project_id: my_project_id

    # `schema_pattern` for BQ Datasets
    schema_pattern:
      allow:
        - finance_bq_dataset
    table_pattern:
      deny:
        # The exact name of the table is revenue_table_name
        # The reason we have this `.*` at the beginning is that the current implementation of table_pattern
        # is tested against project_id.dataset_name.table_name
        # We will improve this in the future
        - .*revenue_table_name

sink:
  # sink configs
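Once saved to a file, the recipe can be run with the DataHub CLI. A minimal sketch, assuming the recipe above is saved as bigquery_recipe.yml (the filename is illustrative):

# run the bigquery source recipe with the DataHub CLI
datahub ingest -c bigquery_recipe.yml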
Config Details
Note that a . is used to denote nested fields in the YAML recipe.
View All Configuration Options
Field | Required | Type | Description | Default |
---|---|---|---|---|
env | string | The environment that all assets produced by this connector belong to | PROD | |
platform | string | The platform that this source connects to | None | |
platform_instance | string | The instance of the platform that all assets produced by this recipe belong to | None | |
options | Dict | {} | ||
include_views | boolean | Whether views should be ingested. | True | |
include_tables | boolean | Whether tables should be ingested. | True | |
bucket_duration | enum(BucketDuration) | Size of the time window to aggregate usage stats. | DAY | |
end_time | string | Latest date of usage to consider. Default: Last full day in UTC (or hour, depending on bucket_duration ) | None | |
start_time | string | Earliest date of usage to consider. Default: Last full day in UTC (or hour, depending on bucket_duration ) | None | |
rate_limit | boolean | Should we rate limit requests made to API. | False | |
requests_per_min | integer | Used to control number of API calls made per min. Only used when rate_limit is set to True . | 60 | |
temp_table_dataset_prefix | string | If you are creating temp tables in a dataset with a particular prefix you can use this config to set the prefix for the dataset. This is to support workflows from before BigQuery's introduction of temp tables. By default we use _ because datasets that begin with an underscore are hidden by default (https://cloud.google.com/bigquery/docs/datasets#dataset-naming). | _ | |
sharded_table_pattern | string | The regex pattern to match sharded tables and group them as one table. This is a very low level config parameter; only change it if you know what you are doing. | ((.+)[_$])?(\d{4,10})$ | |
scheme | string | bigquery | ||
project_id | string | Project ID where you have rights to run queries and create tables. If storage_project_id is not specified then it is assumed this is the same project where data is stored. If not specified, will infer from environment. | None | |
storage_project_id | string | If your data is stored in a different project where you don't have rights to run jobs and create tables then specify this field. The same service account must have read rights in this GCP project and write rights in project_id . | None | |
log_page_size | integer | The number of log items queried per page for lineage collection. | 1000 | |
extra_client_options | Dict | Additional options to pass to google.cloud.logging_v2.client.Client. | {} | |
include_table_lineage | boolean | Option to enable/disable lineage generation. Is enabled by default. | True | |
max_query_duration | number | Correction to pad start_time and end_time with. For handling the case where the read happens within our time range but the query completion event is delayed and happens after the configured end time. | 900.0 | |
bigquery_audit_metadata_datasets | Array of string | A list of datasets that contain a table named cloudaudit_googleapis_com_data_access which contain BigQuery audit logs, specifically, those containing BigQueryAuditMetadata. It is recommended that the project of the dataset is also specified, for example, projectA.datasetB. | None | |
use_exported_bigquery_audit_metadata | boolean | When configured, use BigQueryAuditMetadata in bigquery_audit_metadata_datasets to compute lineage information. | False | |
use_date_sharded_audit_log_tables | boolean | Whether to read date sharded tables or time partitioned tables when extracting usage from exported audit logs. | False | |
use_v2_audit_metadata | boolean | Whether to ingest logs using the v2 format. | False | |
upstream_lineage_in_report | boolean | Useful for debugging lineage information. Set to True to see the raw lineage created internally. | False | |
stateful_ingestion | SQLAlchemyStatefulIngestionConfig (see below for fields) | |||
stateful_ingestion.enabled | boolean | The type of the ingestion state provider registered with datahub. | False | |
stateful_ingestion.max_checkpoint_state_size | integer | The maximum size of the checkpoint state in bytes. Default is 16MB | 16777216 | |
stateful_ingestion.state_provider | DynamicTypedStateProviderConfig (see below for fields) | The ingestion state provider configuration. | ||
stateful_ingestion.state_provider.type | ✅ | string | The type of the state provider to use. For DataHub use datahub | None |
stateful_ingestion.state_provider.config | Generic dict | The configuration required for initializing the state provider. Default: The datahub_api config if set at pipeline level. Otherwise, the default DatahubClientConfig. See the defaults (https://github.com/datahub-project/datahub/blob/master/metadata-ingestion/src/datahub/ingestion/graph/client.py#L19). | None | |
stateful_ingestion.ignore_old_state | boolean | If set to True, ignores the previous checkpoint state. | False | |
stateful_ingestion.ignore_new_state | boolean | If set to True, ignores the current checkpoint state. | False | |
stateful_ingestion.remove_stale_metadata | boolean | Soft-deletes the tables and views that were found in the last successful run but missing in the current run with stateful_ingestion enabled. | True | |
schema_pattern | AllowDenyPattern (see below for fields) | regex patterns for schemas to filter in ingestion. | {'allow': ['.*'], 'deny': [], 'ignoreCase': True, 'alphabet': '[A-Za-z0-9 _.-]'} | |
schema_pattern.allow | Array of string | List of regex patterns for process groups to include in ingestion | ['.*'] | |
schema_pattern.deny | Array of string | List of regex patterns for process groups to exclude from ingestion. | [] | |
schema_pattern.ignoreCase | boolean | Whether to ignore case sensitivity during pattern matching. | True | |
schema_pattern.alphabet | string | Allowed alphabets pattern | [A-Za-z0-9 _.-] | |
table_pattern | AllowDenyPattern (see below for fields) | regex patterns for tables to filter in ingestion. | {'allow': ['.*'], 'deny': [], 'ignoreCase': True, 'alphabet': '[A-Za-z0-9 _.-]'} | |
table_pattern.allow | Array of string | List of regex patterns for process groups to include in ingestion | ['.*'] | |
table_pattern.deny | Array of string | List of regex patterns for process groups to exclude from ingestion. | [] | |
table_pattern.ignoreCase | boolean | Whether to ignore case sensitivity during pattern matching. | True | |
table_pattern.alphabet | string | Allowed alphabets pattern | [A-Za-z0-9 _.-] | |
view_pattern | AllowDenyPattern (see below for fields) | regex patterns for views to filter in ingestion. | {'allow': ['.*'], 'deny': [], 'ignoreCase': True, 'alphabet': '[A-Za-z0-9 _.-]'} | |
view_pattern.allow | Array of string | List of regex patterns for process groups to include in ingestion | ['.*'] | |
view_pattern.deny | Array of string | List of regex patterns for process groups to exclude from ingestion. | [] | |
view_pattern.ignoreCase | boolean | Whether to ignore case sensitivity during pattern matching. | True | |
view_pattern.alphabet | string | Allowed alphabets pattern | [A-Za-z0-9 _.-] | |
profile_pattern | AllowDenyPattern (see below for fields) | regex patterns for profiles to filter in ingestion, allowed by the table_pattern . | {'allow': ['.*'], 'deny': [], 'ignoreCase': True, 'alphabet': '[A-Za-z0-9 _.-]'} | |
profile_pattern.allow | Array of string | List of regex patterns for process groups to include in ingestion | ['.*'] | |
profile_pattern.deny | Array of string | List of regex patterns for process groups to exclude from ingestion. | [] | |
profile_pattern.ignoreCase | boolean | Whether to ignore case sensitivity during pattern matching. | True | |
profile_pattern.alphabet | string | Allowed alphabets pattern | [A-Za-z0-9 _.-] | |
domain | Dict[str, AllowDenyPattern] | Regex patterns for tables/schemas used to decide the domain_key (domain_key can be any string like "sales"). Multiple domain keys can be specified. | {} | |
domain.key .allow | Array of string | List of regex patterns for process groups to include in ingestion | ['.*'] | |
domain.key .deny | Array of string | List of regex patterns for process groups to exclude from ingestion. | [] | |
domain.key .ignoreCase | boolean | Whether to ignore case sensitivity during pattern matching. | True | |
domain.key .alphabet | string | Allowed alphabets pattern | [A-Za-z0-9 _.-] | |
profiling | GEProfilingConfig (see below for fields) | {'enabled': False, 'limit': None, 'offset': None, 'reportdropped_profiles': False, 'turn_off_expensive_profiling_metrics': False, 'profile_table_level_only': False, 'include_field_null_count': True, 'include_field_min_value': True, 'include_field_max_value': True, 'include_field_mean_value': True, 'include_field_median_value': True, 'include_field_stddev_value': True, 'include_field_quantiles': False, 'include_field_distinct_value_frequencies': False, 'include_field_histogram': False, 'include_field_sample_values': True, 'allow_deny_patterns': {'allow': ['.*'], 'deny': [], 'ignoreCase': True, 'alphabet': '[A-Za-z0-9 .-]'}, 'max_number_of_fields_to_profile': None, 'profile_if_updated_since_days': 1, 'max_workers': 10, 'query_combiner_enabled': True, 'catch_exceptions': True, 'partition_profiling_enabled': True, 'bigquery_temp_table_schema': None, 'partition_datetime': None} | ||
profiling.enabled | boolean | Whether profiling should be done. | False | |
profiling.limit | integer | Max number of documents to profile. By default, profiles all documents. | None | |
profiling.offset | integer | Offset in documents to profile. By default, uses no offset. | None | |
profiling.report_dropped_profiles | boolean | If datasets which were not profiled are reported in source report or not. Set to True for debugging purposes. | False | |
profiling.turn_off_expensive_profiling_metrics | boolean | Whether to turn off expensive profiling or not. This turns off profiling for quantiles, distinct_value_frequencies, histogram & sample_values. This also limits maximum number of fields being profiled to 10. | False | |
profiling.profile_table_level_only | boolean | Whether to perform profiling at table-level only, or include column-level profiling as well. | False | |
profiling.include_field_null_count | boolean | Whether to profile for the number of nulls for each column. | True | |
profiling.include_field_min_value | boolean | Whether to profile for the min value of numeric columns. | True | |
profiling.include_field_max_value | boolean | Whether to profile for the max value of numeric columns. | True | |
profiling.include_field_mean_value | boolean | Whether to profile for the mean value of numeric columns. | True | |
profiling.include_field_median_value | boolean | Whether to profile for the median value of numeric columns. | True | |
profiling.include_field_stddev_value | boolean | Whether to profile for the standard deviation of numeric columns. | True | |
profiling.include_field_quantiles | boolean | Whether to profile for the quantiles of numeric columns. | False | |
profiling.include_field_distinct_value_frequencies | boolean | Whether to profile for distinct value frequencies. | False | |
profiling.include_field_histogram | boolean | Whether to profile for the histogram for numeric fields. | False | |
profiling.include_field_sample_values | boolean | Whether to profile for the sample values for all columns. | True | |
profiling.allow_deny_patterns | AllowDenyPattern (see below for fields) | regex patterns for filtering of tables or table columns to profile. | {'allow': ['.*'], 'deny': [], 'ignoreCase': True, 'alphabet': '[A-Za-z0-9 _.-]'} | |
profiling.allow_deny_patterns.allow | Array of string | List of regex patterns for process groups to include in ingestion | ['.*'] | |
profiling.allow_deny_patterns.deny | Array of string | List of regex patterns for process groups to exclude from ingestion. | [] | |
profiling.allow_deny_patterns.ignoreCase | boolean | Whether to ignore case sensitivity during pattern matching. | True | |
profiling.allow_deny_patterns.alphabet | string | Allowed alphabets pattern | [A-Za-z0-9 _.-] | |
profiling.max_number_of_fields_to_profile | integer | A positive integer that specifies the maximum number of columns to profile for any table. None implies all columns. The cost of profiling goes up significantly as the number of columns to profile goes up. | None | |
profiling.profile_if_updated_since_days | number | Profile a table only if it has been updated within this many days. None implies profile all tables. Only Snowflake supports this. | 1 | |
profiling.max_workers | integer | Number of worker threads to use for profiling. Set to 1 to disable. | 10 | |
profiling.query_combiner_enabled | boolean | This feature is still experimental and can be disabled if it causes issues. Reduces the total number of queries issued and speeds up profiling by dynamically combining SQL queries where possible. | True | |
profiling.catch_exceptions | boolean | True | ||
profiling.partition_profiling_enabled | boolean | True | ||
profiling.bigquery_temp_table_schema | string | On BigQuery, profiling partitioned tables requires creating temporary views. You have to define a dataset where these views will be created; they are cleaned up after the profiler runs. (Great Expectations technical details: https://legacy.docs.greatexpectations.io/en/0.9.0/reference/integrations/bigquery.html#custom-queries-with-sql-datasource). | None | |
profiling.partition_datetime | string | For partitioned datasets profile only the partition which matches the datetime or profile the latest one if not set. Only Bigquery supports this. | None | |
credential | BigQueryCredential (see below for fields) | BigQuery credential information | ||
credential.project_id | ✅ | string | Project id to set the credentials | None |
credential.private_key_id | ✅ | string | Private key id | None |
credential.private_key | ✅ | string | Private key in a form of '-----BEGIN PRIVATE KEY-----\nprivate-key\n-----END PRIVATE KEY-----\n' | None |
credential.client_email | ✅ | string | Client email | None |
credential.client_id | ✅ | string | Client Id | None |
credential.auth_uri | string | Authentication uri | https://accounts.google.com/o/oauth2/auth | |
credential.token_uri | string | Token uri | https://oauth2.googleapis.com/token | |
credential.auth_provider_x509_cert_url | string | Auth provider x509 certificate url | https://www.googleapis.com/oauth2/v1/certs | |
credential.type | string | Authentication type | service_account | |
credential.client_x509_cert_url | string | If not set, it defaults to https://www.googleapis.com/robot/v1/metadata/x509/client_email | None |
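As an illustration of the dotted notation above, nested fields such as stateful_ingestion.enabled or profiling.profile_table_level_only map to nested YAML keys in the recipe; the values shown here are only examples:

source:
  type: bigquery
  config:
    project_id: my_project_id
    # `stateful_ingestion.enabled` in the table corresponds to this nested block
    stateful_ingestion:
      enabled: true
    # `profiling.*` fields likewise nest under `profiling`
    profiling:
      enabled: true
      profile_table_level_only: true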
The JSONSchema for this configuration is inlined below.
{
"title": "BigQueryConfig",
"description": "Base configuration class for stateful ingestion for source configs to inherit from.",
"type": "object",
"properties": {
"env": {
"title": "Env",
"description": "The environment that all assets produced by this connector belong to",
"default": "PROD",
"type": "string"
},
"platform": {
"title": "Platform",
"description": "The platform that this source connects to",
"type": "string"
},
"platform_instance": {
"title": "Platform Instance",
"description": "The instance of the platform that all assets produced by this recipe belong to",
"type": "string"
},
"stateful_ingestion": {
"$ref": "#/definitions/SQLAlchemyStatefulIngestionConfig"
},
"options": {
"title": "Options",
"default": {},
"type": "object"
},
"schema_pattern": {
"title": "Schema Pattern",
"description": "regex patterns for schemas to filter in ingestion.",
"default": {
"allow": [
".*"
],
"deny": [],
"ignoreCase": true,
"alphabet": "[A-Za-z0-9 _.-]"
},
"allOf": [
{
"$ref": "#/definitions/AllowDenyPattern"
}
]
},
"table_pattern": {
"title": "Table Pattern",
"description": "regex patterns for tables to filter in ingestion.",
"default": {
"allow": [
".*"
],
"deny": [],
"ignoreCase": true,
"alphabet": "[A-Za-z0-9 _.-]"
},
"allOf": [
{
"$ref": "#/definitions/AllowDenyPattern"
}
]
},
"view_pattern": {
"title": "View Pattern",
"description": "regex patterns for views to filter in ingestion.",
"default": {
"allow": [
".*"
],
"deny": [],
"ignoreCase": true,
"alphabet": "[A-Za-z0-9 _.-]"
},
"allOf": [
{
"$ref": "#/definitions/AllowDenyPattern"
}
]
},
"profile_pattern": {
"title": "Profile Pattern",
"description": "regex patterns for profiles to filter in ingestion, allowed by the `table_pattern`.",
"default": {
"allow": [
".*"
],
"deny": [],
"ignoreCase": true,
"alphabet": "[A-Za-z0-9 _.-]"
},
"allOf": [
{
"$ref": "#/definitions/AllowDenyPattern"
}
]
},
"domain": {
"title": "Domain",
"description": " regex patterns for tables/schemas to descide domain_key domain key (domain_key can be any string like \"sales\".) There can be multiple domain key specified.",
"default": {},
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/AllowDenyPattern"
}
},
"include_views": {
"title": "Include Views",
"description": "Whether views should be ingested.",
"default": true,
"type": "boolean"
},
"include_tables": {
"title": "Include Tables",
"description": "Whether tables should be ingested.",
"default": true,
"type": "boolean"
},
"profiling": {
"title": "Profiling",
"default": {
"enabled": false,
"limit": null,
"offset": null,
"report_dropped_profiles": false,
"turn_off_expensive_profiling_metrics": false,
"profile_table_level_only": false,
"include_field_null_count": true,
"include_field_min_value": true,
"include_field_max_value": true,
"include_field_mean_value": true,
"include_field_median_value": true,
"include_field_stddev_value": true,
"include_field_quantiles": false,
"include_field_distinct_value_frequencies": false,
"include_field_histogram": false,
"include_field_sample_values": true,
"allow_deny_patterns": {
"allow": [
".*"
],
"deny": [],
"ignoreCase": true,
"alphabet": "[A-Za-z0-9 _.-]"
},
"max_number_of_fields_to_profile": null,
"profile_if_updated_since_days": 1,
"max_workers": 10,
"query_combiner_enabled": true,
"catch_exceptions": true,
"partition_profiling_enabled": true,
"bigquery_temp_table_schema": null,
"partition_datetime": null
},
"allOf": [
{
"$ref": "#/definitions/GEProfilingConfig"
}
]
},
"bucket_duration": {
"description": "Size of the time window to aggregate usage stats.",
"default": "DAY",
"allOf": [
{
"$ref": "#/definitions/BucketDuration"
}
]
},
"end_time": {
"title": "End Time",
"description": "Latest date of usage to consider. Default: Last full day in UTC (or hour, depending on `bucket_duration`)",
"type": "string",
"format": "date-time"
},
"start_time": {
"title": "Start Time",
"description": "Earliest date of usage to consider. Default: Last full day in UTC (or hour, depending on `bucket_duration`)",
"type": "string",
"format": "date-time"
},
"rate_limit": {
"title": "Rate Limit",
"description": "Should we rate limit requests made to API.",
"default": false,
"type": "boolean"
},
"requests_per_min": {
"title": "Requests Per Min",
"description": "Used to control number of API calls made per min. Only used when `rate_limit` is set to `True`.",
"default": 60,
"type": "integer"
},
"temp_table_dataset_prefix": {
"title": "Temp Table Dataset Prefix",
"description": "If you are creating temp tables in a dataset with a particular prefix you can use this config to set the prefix for the dataset. This is to support workflows from before bigquery's introduction of temp tables. By default we use `_` because of datasets that begin with an underscore are hidden by default https://cloud.google.com/bigquery/docs/datasets#dataset-naming.",
"default": "_",
"type": "string"
},
"sharded_table_pattern": {
"title": "Sharded Table Pattern",
"description": "The regex pattern to match sharded tables and group as one table. This is a very low level config parameter, only change if you know what you are doing, ",
"default": "((.+)[_$])?(\\d{4,10})$",
"type": "string"
},
"scheme": {
"title": "Scheme",
"default": "bigquery",
"type": "string"
},
"project_id": {
"title": "Project Id",
"description": "Project ID where you have rights to run queries and create tables. If `storage_project_id` is not specified then it is assumed this is the same project where data is stored. If not specified, will infer from environment.",
"type": "string"
},
"storage_project_id": {
"title": "Storage Project Id",
"description": "If your data is stored in a different project where you don't have rights to run jobs and create tables then specify this field. The same service account must have read rights in this GCP project and write rights in `project_id`.",
"type": "string"
},
"log_page_size": {
"title": "Log Page Size",
"description": "The number of log item will be queried per page for lineage collection",
"default": 1000,
"exclusiveMinimum": 0,
"type": "integer"
},
"credential": {
"title": "Credential",
"description": "BigQuery credential informations",
"allOf": [
{
"$ref": "#/definitions/BigQueryCredential"
}
]
},
"extra_client_options": {
"title": "Extra Client Options",
"description": "Additional options to pass to google.cloud.logging_v2.client.Client.",
"default": {},
"type": "object"
},
"include_table_lineage": {
"title": "Include Table Lineage",
"description": "Option to enable/disable lineage generation. Is enabled by default.",
"default": true,
"type": "boolean"
},
"max_query_duration": {
"title": "Max Query Duration",
"description": "Correction to pad start_time and end_time with. For handling the case where the read happens within our time range but the query completion event is delayed and happens after the configured end time.",
"default": 900.0,
"type": "number",
"format": "time-delta"
},
"bigquery_audit_metadata_datasets": {
"title": "Bigquery Audit Metadata Datasets",
"description": "A list of datasets that contain a table named cloudaudit_googleapis_com_data_access which contain BigQuery audit logs, specifically, those containing BigQueryAuditMetadata. It is recommended that the project of the dataset is also specified, for example, projectA.datasetB.",
"type": "array",
"items": {
"type": "string"
}
},
"use_exported_bigquery_audit_metadata": {
"title": "Use Exported Bigquery Audit Metadata",
"description": "When configured, use BigQueryAuditMetadata in bigquery_audit_metadata_datasets to compute lineage information.",
"default": false,
"type": "boolean"
},
"use_date_sharded_audit_log_tables": {
"title": "Use Date Sharded Audit Log Tables",
"description": "Whether to read date sharded tables or time partitioned tables when extracting usage from exported audit logs.",
"default": false,
"type": "boolean"
},
"use_v2_audit_metadata": {
"title": "Use V2 Audit Metadata",
"description": "Whether to ingest logs using the v2 format.",
"default": false,
"type": "boolean"
},
"upstream_lineage_in_report": {
"title": "Upstream Lineage In Report",
"description": "Useful for debugging lineage information. Set to True to see the raw lineage created internally.",
"default": false,
"type": "boolean"
}
},
"additionalProperties": false,
"definitions": {
"DynamicTypedStateProviderConfig": {
"title": "DynamicTypedStateProviderConfig",
"type": "object",
"properties": {
"type": {
"title": "Type",
"description": "The type of the state provider to use. For DataHub use `datahub`",
"type": "string"
},
"config": {
"title": "Config",
"description": "The configuration required for initializing the state provider. Default: The datahub_api config if set at pipeline level. Otherwise, the default DatahubClientConfig. See the defaults (https://github.com/datahub-project/datahub/blob/master/metadata-ingestion/src/datahub/ingestion/graph/client.py#L19)."
}
},
"required": [
"type"
],
"additionalProperties": false
},
"SQLAlchemyStatefulIngestionConfig": {
"title": "SQLAlchemyStatefulIngestionConfig",
"description": "Specialization of basic StatefulIngestionConfig to adding custom config.\nThis will be used to override the stateful_ingestion config param of StatefulIngestionConfigBase\nin the SQLAlchemyConfig.",
"type": "object",
"properties": {
"enabled": {
"title": "Enabled",
"description": "The type of the ingestion state provider registered with datahub.",
"default": false,
"type": "boolean"
},
"max_checkpoint_state_size": {
"title": "Max Checkpoint State Size",
"description": "The maximum size of the checkpoint state in bytes. Default is 16MB",
"default": 16777216,
"exclusiveMinimum": 0,
"type": "integer"
},
"state_provider": {
"title": "State Provider",
"description": "The ingestion state provider configuration.",
"allOf": [
{
"$ref": "#/definitions/DynamicTypedStateProviderConfig"
}
]
},
"ignore_old_state": {
"title": "Ignore Old State",
"description": "If set to True, ignores the previous checkpoint state.",
"default": false,
"type": "boolean"
},
"ignore_new_state": {
"title": "Ignore New State",
"description": "If set to True, ignores the current checkpoint state.",
"default": false,
"type": "boolean"
},
"remove_stale_metadata": {
"title": "Remove Stale Metadata",
"description": "Soft-deletes the tables and views that were found in the last successful run but missing in the current run with stateful_ingestion enabled.",
"default": true,
"type": "boolean"
}
},
"additionalProperties": false
},
"AllowDenyPattern": {
"title": "AllowDenyPattern",
"description": "A class to store allow deny regexes",
"type": "object",
"properties": {
"allow": {
"title": "Allow",
"description": "List of regex patterns for process groups to include in ingestion",
"default": [
".*"
],
"type": "array",
"items": {
"type": "string"
}
},
"deny": {
"title": "Deny",
"description": "List of regex patterns for process groups to exclude from ingestion.",
"default": [],
"type": "array",
"items": {
"type": "string"
}
},
"ignoreCase": {
"title": "Ignorecase",
"description": "Whether to ignore case sensitivity during pattern matching.",
"default": true,
"type": "boolean"
},
"alphabet": {
"title": "Alphabet",
"description": "Allowed alphabets pattern",
"default": "[A-Za-z0-9 _.-]",
"type": "string"
}
},
"additionalProperties": false
},
"GEProfilingConfig": {
"title": "GEProfilingConfig",
"type": "object",
"properties": {
"enabled": {
"title": "Enabled",
"description": "Whether profiling should be done.",
"default": false,
"type": "boolean"
},
"limit": {
"title": "Limit",
"description": "Max number of documents to profile. By default, profiles all documents.",
"type": "integer"
},
"offset": {
"title": "Offset",
"description": "Offset in documents to profile. By default, uses no offset.",
"type": "integer"
},
"report_dropped_profiles": {
"title": "Report Dropped Profiles",
"description": "If datasets which were not profiled are reported in source report or not. Set to `True` for debugging purposes.",
"default": false,
"type": "boolean"
},
"turn_off_expensive_profiling_metrics": {
"title": "Turn Off Expensive Profiling Metrics",
"description": "Whether to turn off expensive profiling or not. This turns off profiling for quantiles, distinct_value_frequencies, histogram & sample_values. This also limits maximum number of fields being profiled to 10.",
"default": false,
"type": "boolean"
},
"profile_table_level_only": {
"title": "Profile Table Level Only",
"description": "Whether to perform profiling at table-level only, or include column-level profiling as well.",
"default": false,
"type": "boolean"
},
"include_field_null_count": {
"title": "Include Field Null Count",
"description": "Whether to profile for the number of nulls for each column.",
"default": true,
"type": "boolean"
},
"include_field_min_value": {
"title": "Include Field Min Value",
"description": "Whether to profile for the min value of numeric columns.",
"default": true,
"type": "boolean"
},
"include_field_max_value": {
"title": "Include Field Max Value",
"description": "Whether to profile for the max value of numeric columns.",
"default": true,
"type": "boolean"
},
"include_field_mean_value": {
"title": "Include Field Mean Value",
"description": "Whether to profile for the mean value of numeric columns.",
"default": true,
"type": "boolean"
},
"include_field_median_value": {
"title": "Include Field Median Value",
"description": "Whether to profile for the median value of numeric columns.",
"default": true,
"type": "boolean"
},
"include_field_stddev_value": {
"title": "Include Field Stddev Value",
"description": "Whether to profile for the standard deviation of numeric columns.",
"default": true,
"type": "boolean"
},
"include_field_quantiles": {
"title": "Include Field Quantiles",
"description": "Whether to profile for the quantiles of numeric columns.",
"default": false,
"type": "boolean"
},
"include_field_distinct_value_frequencies": {
"title": "Include Field Distinct Value Frequencies",
"description": "Whether to profile for distinct value frequencies.",
"default": false,
"type": "boolean"
},
"include_field_histogram": {
"title": "Include Field Histogram",
"description": "Whether to profile for the histogram for numeric fields.",
"default": false,
"type": "boolean"
},
"include_field_sample_values": {
"title": "Include Field Sample Values",
"description": "Whether to profile for the sample values for all columns.",
"default": true,
"type": "boolean"
},
"allow_deny_patterns": {
"title": "Allow Deny Patterns",
"description": "regex patterns for filtering of tables or table columns to profile.",
"default": {
"allow": [
".*"
],
"deny": [],
"ignoreCase": true,
"alphabet": "[A-Za-z0-9 _.-]"
},
"allOf": [
{
"$ref": "#/definitions/AllowDenyPattern"
}
]
},
"max_number_of_fields_to_profile": {
"title": "Max Number Of Fields To Profile",
"description": "A positive integer that specifies the maximum number of columns to profile for any table. `None` implies all columns. The cost of profiling goes up significantly as the number of columns to profile goes up.",
"exclusiveMinimum": 0,
"type": "integer"
},
"profile_if_updated_since_days": {
"title": "Profile If Updated Since Days",
"description": "Profile table only if it has been updated since these many number of days. `None` implies profile all tables. Only Snowflake supports this.",
"default": 1,
"exclusiveMinimum": 0,
"type": "number"
},
"max_workers": {
"title": "Max Workers",
"description": "Number of worker threads to use for profiling. Set to 1 to disable.",
"default": 10,
"type": "integer"
},
"query_combiner_enabled": {
"title": "Query Combiner Enabled",
"description": "*This feature is still experimental and can be disabled if it causes issues.* Reduces the total number of queries issued and speeds up profiling by dynamically combining SQL queries where possible.",
"default": true,
"type": "boolean"
},
"catch_exceptions": {
"title": "Catch Exceptions",
"default": true,
"type": "boolean"
},
"partition_profiling_enabled": {
"title": "Partition Profiling Enabled",
"default": true,
"type": "boolean"
},
"bigquery_temp_table_schema": {
"title": "Bigquery Temp Table Schema",
"description": "On bigquery for profiling partitioned tables needs to create temporary views. You have to define a dataset where these will be created. Views will be cleaned up after profiler runs. (Great expectation tech details about this (https://legacy.docs.greatexpectations.io/en/0.9.0/reference/integrations/bigquery.html#custom-queries-with-sql-datasource).",
"type": "string"
},
"partition_datetime": {
"title": "Partition Datetime",
"description": "For partitioned datasets profile only the partition which matches the datetime or profile the latest one if not set. Only Bigquery supports this.",
"type": "string",
"format": "date-time"
}
},
"additionalProperties": false
},
"BucketDuration": {
"title": "BucketDuration",
"description": "An enumeration.",
"enum": [
"DAY",
"HOUR"
],
"type": "string"
},
"BigQueryCredential": {
"title": "BigQueryCredential",
"type": "object",
"properties": {
"project_id": {
"title": "Project Id",
"description": "Project id to set the credentials",
"type": "string"
},
"private_key_id": {
"title": "Private Key Id",
"description": "Private key id",
"type": "string"
},
"private_key": {
"title": "Private Key",
"description": "Private key in a form of '-----BEGIN PRIVATE KEY-----\\nprivate-key\\n-----END PRIVATE KEY-----\\n'",
"type": "string"
},
"client_email": {
"title": "Client Email",
"description": "Client email",
"type": "string"
},
"client_id": {
"title": "Client Id",
"description": "Client Id",
"type": "string"
},
"auth_uri": {
"title": "Auth Uri",
"description": "Authentication uri",
"default": "https://accounts.google.com/o/oauth2/auth",
"type": "string"
},
"token_uri": {
"title": "Token Uri",
"description": "Token uri",
"default": "https://oauth2.googleapis.com/token",
"type": "string"
},
"auth_provider_x509_cert_url": {
"title": "Auth Provider X509 Cert Url",
"description": "Auth provider x509 certificate url",
"default": "https://www.googleapis.com/oauth2/v1/certs",
"type": "string"
},
"type": {
"title": "Type",
"description": "Authentication type",
"default": "service_account",
"type": "string"
},
"client_x509_cert_url": {
"title": "Client X509 Cert Url",
"description": "If not set it will be default to https://www.googleapis.com/robot/v1/metadata/x509/client_email",
"type": "string"
}
},
"required": [
"project_id",
"private_key_id",
"private_key",
"client_email",
"client_id"
],
"additionalProperties": false
}
}
}
Prerequisites
Create a datahub profile in GCP
- Create a custom role for datahub as per BigQuery docs
- Grant the following permissions to this role (a sample gcloud command for creating the role follows the list):
bigquery.datasets.get
bigquery.datasets.getIamPolicy
bigquery.jobs.create
bigquery.jobs.list
bigquery.jobs.listAll
bigquery.models.getMetadata
bigquery.models.list
bigquery.routines.get
bigquery.routines.list
bigquery.tables.create # Needed for profiling
bigquery.tables.get
bigquery.tables.getData # Needed for profiling
bigquery.tables.list
# needed for lineage generation via GCP logging
logging.logEntries.list
logging.privateLogEntries.list
resourcemanager.projects.get
bigquery.readsessions.create
bigquery.readsessions.getData
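If you prefer to script this step, a rough sketch using gcloud is shown below; the role id, title, and project id are placeholders to replace with your own values:

# create a custom role carrying the permissions listed above (role id and project are placeholders)
gcloud iam roles create datahub_metadata_reader \
  --project=my-project-id \
  --title="DataHub metadata reader" \
  --permissions=bigquery.datasets.get,bigquery.datasets.getIamPolicy,bigquery.jobs.create,bigquery.jobs.list,bigquery.jobs.listAll,bigquery.models.getMetadata,bigquery.models.list,bigquery.routines.get,bigquery.routines.list,bigquery.tables.create,bigquery.tables.get,bigquery.tables.getData,bigquery.tables.list,logging.logEntries.list,logging.privateLogEntries.list,resourcemanager.projects.get,bigquery.readsessions.create,bigquery.readsessions.getData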
Create a service account
- Setup a ServiceAccount as per BigQuery docs and assign the previously created role to this service account.
- Download a service account JSON keyfile. Example credential file:
{
"type": "service_account",
"project_id": "project-id-1234567",
"private_key_id": "d0121d0000882411234e11166c6aaa23ed5d74e0",
"private_key": "-----BEGIN PRIVATE KEY-----\nMIIyourkey\n-----END PRIVATE KEY-----",
"client_email": "test@suppproject-id-1234567.iam.gserviceaccount.com",
"client_id": "113545814931671546333",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://oauth2.googleapis.com/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/test%suppproject-id-1234567.iam.gserviceaccount.com"
}
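If you want to script the service account setup as well, a sketch with gcloud could look like the following; the account name, project id, and role id are placeholders, and the custom role referenced is the one created earlier:

# create the service account (name and project are placeholders)
gcloud iam service-accounts create datahub-ingest --project=my-project-id
# bind the custom role created earlier to the service account
gcloud projects add-iam-policy-binding my-project-id \
  --member="serviceAccount:datahub-ingest@my-project-id.iam.gserviceaccount.com" \
  --role="projects/my-project-id/roles/datahub_metadata_reader"
# download a JSON keyfile for the service account
gcloud iam service-accounts keys create keyfile.json \
  --iam-account=datahub-ingest@my-project-id.iam.gserviceaccount.com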
To provide credentials to the source, you can either:

Set an environment variable:

$ export GOOGLE_APPLICATION_CREDENTIALS="/path/to/keyfile.json"

or

Set credential config in your source based on the credential json file. For example:
credential:
  project_id: project-id-1234567
  private_key_id: "d0121d0000882411234e11166c6aaa23ed5d74e0"
  private_key: "-----BEGIN PRIVATE KEY-----\nMIIyourkey\n-----END PRIVATE KEY-----\n"
  client_email: "test@suppproject-id-1234567.iam.gserviceaccount.com"
  client_id: "123456678890"
Lineage Computation Details
When use_exported_bigquery_audit_metadata is set to true, lineage information will be computed using exported BigQuery logs. For how to set up exported BigQuery audit logs, refer to the docs on BigQuery audit logs. Note that only protoPayloads with "type.googleapis.com/google.cloud.audit.BigQueryAuditMetadata" are supported by the current ingestion version. The bigquery_audit_metadata_datasets parameter is used only if use_exported_bigquery_audit_metadata is set to true.

Note: the bigquery_audit_metadata_datasets parameter receives a list of datasets, in the format $PROJECT.$DATASET. This way, queries from multiple projects can be used to compute lineage information.

Note: Since the bigquery source also supports dataset-level lineage, the auth client will require additional permissions to be able to access the Google audit logs. Refer to the permissions section in the bigquery-usage section below, which also accesses the audit logs.
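To make this concrete, here is a hedged sketch of the relevant recipe options when reading lineage from exported audit logs (the dataset name is a placeholder):

source:
  type: bigquery
  config:
    project_id: my_project_id
    include_table_lineage: true
    use_exported_bigquery_audit_metadata: true
    # datasets holding the exported cloudaudit_googleapis_com_data_access table, in $PROJECT.$DATASET form
    bigquery_audit_metadata_datasets:
      - my_project_id.my_audit_log_dataset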
Profiling Details
Profiling can profile normal, partitioned, and sharded tables, but for performance reasons we only profile the latest partition for partitioned tables and the latest shard for sharded tables.

If the limit/offset parameter is set, or when profiling a partitioned or sharded table, Great Expectations (the profiling framework we use) needs to create temporary views. By default these views are created in the schema where the profiled table is, but you can have them all created in a predefined schema by setting the profiling.bigquery_temp_table_schema property. Temporary tables are removed after profiling.
profiling:
  enabled: true
  bigquery_temp_table_schema: my-project-id.my-schema-where-views-can-be-created
note

Due to performance reasons, we only profile the latest partition for partitioned tables and the latest shard for sharded tables.
You can set the partition explicitly with the profiling.partition_datetime property if you want; see the example below. (The partition will be applied to all partitioned tables.)
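For example, to pin profiling to a specific partition rather than the latest one, a sketch like the following could be used (the timestamp is illustrative):

profiling:
  enabled: true
  # profile the partition matching this datetime instead of the latest one (value is an example)
  partition_datetime: "2022-01-01T00:00:00Z"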
Caveats
- For materialized views, lineage depends on logs being retained. If your GCP logging is retained for 30 days (the default) and 30 days have passed since the creation of the materialized view, we won't be able to get lineage for it.
Code Coordinates
- Class Name:
datahub.ingestion.source.sql.bigquery.BigQuerySource
- Browse on GitHub
Module bigquery-usage
This plugin extracts the following:
- Statistics on queries issued and tables and columns accessed (excludes views)
- Aggregation of these statistics into buckets, by day or hour granularity
note
- This source only does usage statistics. To get the tables, views, and schemas in your BigQuery project, use the bigquery plugin.
- Depending on the compliance policies set up for the BigQuery instance, sometimes the logging.read permission is not sufficient. In that case, use either the admin or the private log viewer permission.
Install the Plugin
pip install 'acryl-datahub[bigquery-usage]'
Quickstart Recipe
Check out the following recipe to get started with ingestion! See below for full configuration options.
For general pointers on writing and running a recipe, see our main recipe guide
source:
  type: bigquery-usage
  config:
    # Coordinates
    projects:
      - project_id_1
      - project_id_2

    # Options
    top_n_queries: 10
    dataset_pattern:
      allow:
        - marketing_db
        - sales_db
    table_pattern:
      deny:
        - .*feedback.*
        - .*salary.*

sink:
  # sink configs
Config Details
Note that a . is used to denote nested fields in the YAML recipe.
View All Configuration Options
Field | Required | Type | Description | Default |
---|---|---|---|---|
bucket_duration | enum(BucketDuration) | Size of the time window to aggregate usage stats. | DAY | |
end_time | string | Latest date of usage to consider. Default: Last full day in UTC (or hour, depending on bucket_duration ) | None | |
start_time | string | Earliest date of usage to consider. Default: Last full day in UTC (or hour, depending on bucket_duration ) | None | |
top_n_queries | integer | Number of top queries to save to each table. | 10 | |
include_operational_stats | boolean | Whether to display operational stats. | True | |
include_read_operational_stats | boolean | Whether to report read operational stats. Experimental. | False | |
format_sql_queries | boolean | Whether to format sql queries | False | |
include_top_n_queries | boolean | Whether to ingest the top_n_queries. | True | |
env | string | The environment that all assets produced by this connector belong to | PROD | |
platform | string | The platform that this source connects to | None | |
platform_instance | string | The instance of the platform that all assets produced by this recipe belong to | None | |
rate_limit | boolean | Should we rate limit requests made to API. | False | |
requests_per_min | integer | Used to control number of API calls made per min. Only used when rate_limit is set to True . | 60 | |
temp_table_dataset_prefix | string | If you are creating temp tables in a dataset with a particular prefix you can use this config to set the prefix for the dataset. This is to support workflows from before BigQuery's introduction of temp tables. By default we use _ because datasets that begin with an underscore are hidden by default (https://cloud.google.com/bigquery/docs/datasets#dataset-naming). | _ | |
sharded_table_pattern | string | The regex pattern to match sharded tables and group them as one table. This is a very low level config parameter; only change it if you know what you are doing. | ((.+)[_$])?(\d{4,10})$ | |
projects | Array of string | List of project ids to ingest usage from. If not specified, will infer from environment. | None | |
project_id | string | Project ID to ingest usage from. If not specified, will infer from environment. Deprecated in favour of projects | None | |
extra_client_options | Dict | Additional options to pass to google.cloud.logging_v2.client.Client. | ||
use_v2_audit_metadata | boolean | Whether to ingest logs using the v2 format. Required if use_exported_bigquery_audit_metadata is set to True. | False | |
bigquery_audit_metadata_datasets | Array of string | A list of datasets that contain a table named cloudaudit_googleapis_com_data_access which contain BigQuery audit logs, specifically, those containing BigQueryAuditMetadata. It is recommended that the project of the dataset is also specified, for example, projectA.datasetB. | None | |
use_exported_bigquery_audit_metadata | boolean | When configured, use BigQueryAuditMetadata in bigquery_audit_metadata_datasets to compute usage information. | False | |
use_date_sharded_audit_log_tables | boolean | Whether to read date sharded tables or time partitioned tables when extracting usage from exported audit logs. | False | |
log_page_size | integer | 1000 | ||
query_log_delay | integer | To account for the possibility that the query event arrives after the read event in the audit logs, we wait for at least query_log_delay additional events to be processed before attempting to resolve BigQuery job information from the logs. If query_log_delay is None, it gets treated as an unlimited delay, which prioritizes correctness at the expense of memory usage. | None | |
max_query_duration | number | Correction to pad start_time and end_time with. For handling the case where the read happens within our time range but the query completion event is delayed and happens after the configured end time. | 900.0 | |
user_email_pattern | AllowDenyPattern (see below for fields) | regex patterns for user emails to filter in usage. | {'allow': ['.*'], 'deny': [], 'ignoreCase': True, 'alphabet': '[A-Za-z0-9 _.-]'} | |
user_email_pattern.allow | Array of string | List of regex patterns for process groups to include in ingestion | ['.*'] | |
user_email_pattern.deny | Array of string | List of regex patterns for process groups to exclude from ingestion. | [] | |
user_email_pattern.ignoreCase | boolean | Whether to ignore case sensitivity during pattern matching. | True | |
user_email_pattern.alphabet | string | Allowed alphabets pattern | [A-Za-z0-9 _.-] | |
table_pattern | AllowDenyPattern (see below for fields) | List of regex patterns for tables to include/exclude from ingestion. | {'allow': ['.*'], 'deny': [], 'ignoreCase': True, 'alphabet': '[A-Za-z0-9 _.-]'} | |
table_pattern.allow | Array of string | List of regex patterns for process groups to include in ingestion | ['.*'] | |
table_pattern.deny | Array of string | List of regex patterns for process groups to exclude from ingestion. | [] | |
table_pattern.ignoreCase | boolean | Whether to ignore case sensitivity during pattern matching. | True | |
table_pattern.alphabet | string | Allowed alphabets pattern | [A-Za-z0-9 _.-] | |
dataset_pattern | AllowDenyPattern (see below for fields) | List of regex patterns for datasets to include/exclude from ingestion. | {'allow': ['.*'], 'deny': [], 'ignoreCase': True, 'alphabet': '[A-Za-z0-9 _.-]'} | |
dataset_pattern.allow | Array of string | List of regex patterns for process groups to include in ingestion | ['.*'] | |
dataset_pattern.deny | Array of string | List of regex patterns for process groups to exclude from ingestion. | [] | |
dataset_pattern.ignoreCase | boolean | Whether to ignore case sensitivity during pattern matching. | True | |
dataset_pattern.alphabet | string | Allowed alphabets pattern | [A-Za-z0-9 _.-] | |
credential | BigQueryCredential (see below for fields) | BigQuery credential. Required if the GOOGLE_APPLICATION_CREDENTIALS environment variable is not set. See this example recipe for details | ||
credential.project_id | ✅ | string | Project id to set the credentials | None |
credential.private_key_id | ✅ | string | Private key id | None |
credential.private_key | ✅ | string | Private key in a form of '-----BEGIN PRIVATE KEY-----\nprivate-key\n-----END PRIVATE KEY-----\n' | None |
credential.client_email | ✅ | string | Client email | None |
credential.client_id | ✅ | string | Client Id | None |
credential.auth_uri | string | Authentication uri | https://accounts.google.com/o/oauth2/auth | |
credential.token_uri | string | Token uri | https://oauth2.googleapis.com/token | |
credential.auth_provider_x509_cert_url | string | Auth provider x509 certificate url | https://www.googleapis.com/oauth2/v1/certs | |
credential.type | string | Authentication type | service_account | |
credential.client_x509_cert_url | string | If not set it will be default to https://www.googleapis.com/robot/v1/metadata/x509/client_email | None |
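To make the usage window concrete, here is a hedged sketch of the windowing options for an explicit, hourly-bucketed backfill (project id and timestamps are illustrative):

source:
  type: bigquery-usage
  config:
    projects:
      - project_id_1
    bucket_duration: HOUR
    # explicit window instead of the default "last full day/hour in UTC" (values are examples)
    start_time: "2021-12-15T00:00:00Z"
    end_time: "2021-12-16T00:00:00Z"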
The JSONSchema for this configuration is inlined below.
{
"title": "BigQueryUsageConfig",
"description": "Any source that is a primary producer of Dataset metadata should inherit this class",
"type": "object",
"properties": {
"bucket_duration": {
"description": "Size of the time window to aggregate usage stats.",
"default": "DAY",
"allOf": [
{
"$ref": "#/definitions/BucketDuration"
}
]
},
"end_time": {
"title": "End Time",
"description": "Latest date of usage to consider. Default: Last full day in UTC (or hour, depending on `bucket_duration`)",
"type": "string",
"format": "date-time"
},
"start_time": {
"title": "Start Time",
"description": "Earliest date of usage to consider. Default: Last full day in UTC (or hour, depending on `bucket_duration`)",
"type": "string",
"format": "date-time"
},
"top_n_queries": {
"title": "Top N Queries",
"description": "Number of top queries to save to each table.",
"default": 10,
"exclusiveMinimum": 0,
"type": "integer"
},
"user_email_pattern": {
"title": "User Email Pattern",
"description": "regex patterns for user emails to filter in usage.",
"default": {
"allow": [
".*"
],
"deny": [],
"ignoreCase": true,
"alphabet": "[A-Za-z0-9 _.-]"
},
"allOf": [
{
"$ref": "#/definitions/AllowDenyPattern"
}
]
},
"include_operational_stats": {
"title": "Include Operational Stats",
"description": "Whether to display operational stats.",
"default": true,
"type": "boolean"
},
"include_read_operational_stats": {
"title": "Include Read Operational Stats",
"description": "Whether to report read operational stats. Experimental.",
"default": false,
"type": "boolean"
},
"format_sql_queries": {
"title": "Format Sql Queries",
"description": "Whether to format sql queries",
"default": false,
"type": "boolean"
},
"include_top_n_queries": {
"title": "Include Top N Queries",
"description": "Whether to ingest the top_n_queries.",
"default": true,
"type": "boolean"
},
"env": {
"title": "Env",
"description": "The environment that all assets produced by this connector belong to",
"default": "PROD",
"type": "string"
},
"platform": {
"title": "Platform",
"description": "The platform that this source connects to",
"type": "string"
},
"platform_instance": {
"title": "Platform Instance",
"description": "The instance of the platform that all assets produced by this recipe belong to",
"type": "string"
},
"rate_limit": {
"title": "Rate Limit",
"description": "Should we rate limit requests made to API.",
"default": false,
"type": "boolean"
},
"requests_per_min": {
"title": "Requests Per Min",
"description": "Used to control number of API calls made per min. Only used when `rate_limit` is set to `True`.",
"default": 60,
"type": "integer"
},
"temp_table_dataset_prefix": {
"title": "Temp Table Dataset Prefix",
"description": "If you are creating temp tables in a dataset with a particular prefix you can use this config to set the prefix for the dataset. This is to support workflows from before bigquery's introduction of temp tables. By default we use `_` because of datasets that begin with an underscore are hidden by default https://cloud.google.com/bigquery/docs/datasets#dataset-naming.",
"default": "_",
"type": "string"
},
"sharded_table_pattern": {
"title": "Sharded Table Pattern",
"description": "The regex pattern to match sharded tables and group as one table. This is a very low level config parameter, only change if you know what you are doing, ",
"default": "((.+)[_$])?(\\d{4,10})$",
"type": "string"
},
"projects": {
"title": "Projects",
"description": "List of project ids to ingest usage from. If not specified, will infer from environment.",
"type": "array",
"items": {
"type": "string"
}
},
"project_id": {
"title": "Project Id",
"description": "Project ID to ingest usage from. If not specified, will infer from environment. Deprecated in favour of projects ",
"type": "string"
},
"extra_client_options": {
"title": "Extra Client Options",
"description": "Additional options to pass to google.cloud.logging_v2.client.Client.",
"type": "object"
},
"use_v2_audit_metadata": {
"title": "Use V2 Audit Metadata",
"description": "Whether to ingest logs using the v2 format. Required if use_exported_bigquery_audit_metadata is set to True.",
"default": false,
"type": "boolean"
},
"bigquery_audit_metadata_datasets": {
"title": "Bigquery Audit Metadata Datasets",
"description": "A list of datasets that contain a table named cloudaudit_googleapis_com_data_access which contain BigQuery audit logs, specifically, those containing BigQueryAuditMetadata. It is recommended that the project of the dataset is also specified, for example, projectA.datasetB.",
"type": "array",
"items": {
"type": "string"
}
},
"use_exported_bigquery_audit_metadata": {
"title": "Use Exported Bigquery Audit Metadata",
"description": "When configured, use BigQueryAuditMetadata in bigquery_audit_metadata_datasets to compute usage information.",
"default": false,
"type": "boolean"
},
"use_date_sharded_audit_log_tables": {
"title": "Use Date Sharded Audit Log Tables",
"description": "Whether to read date sharded tables or time partitioned tables when extracting usage from exported audit logs.",
"default": false,
"type": "boolean"
},
"table_pattern": {
"title": "Table Pattern",
"description": "List of regex patterns for tables to include/exclude from ingestion.",
"default": {
"allow": [
".*"
],
"deny": [],
"ignoreCase": true,
"alphabet": "[A-Za-z0-9 _.-]"
},
"allOf": [
{
"$ref": "#/definitions/AllowDenyPattern"
}
]
},
"dataset_pattern": {
"title": "Dataset Pattern",
"description": "List of regex patterns for datasets to include/exclude from ingestion.",
"default": {
"allow": [
".*"
],
"deny": [],
"ignoreCase": true,
"alphabet": "[A-Za-z0-9 _.-]"
},
"allOf": [
{
"$ref": "#/definitions/AllowDenyPattern"
}
]
},
"log_page_size": {
"title": "Log Page Size",
"default": 1000,
"exclusiveMinimum": 0,
"type": "integer"
},
"query_log_delay": {
"title": "Query Log Delay",
"description": "To account for the possibility that the query event arrives after the read event in the audit logs, we wait for at least query_log_delay additional events to be processed before attempting to resolve BigQuery job information from the logs. If query_log_delay is None, it gets treated as an unlimited delay, which prioritizes correctness at the expense of memory usage.",
"exclusiveMinimum": 0,
"type": "integer"
},
"max_query_duration": {
"title": "Max Query Duration",
"description": "Correction to pad start_time and end_time with. For handling the case where the read happens within our time range but the query completion event is delayed and happens after the configured end time.",
"default": 900.0,
"type": "number",
"format": "time-delta"
},
"credential": {
"title": "Credential",
"description": "Bigquery credential. Required if GOOGLE_APPLICATION_CREDENTIALS enviroment variable is not set. See this example recipe for details",
"allOf": [
{
"$ref": "#/definitions/BigQueryCredential"
}
]
}
},
"additionalProperties": false,
"definitions": {
"BucketDuration": {
"title": "BucketDuration",
"description": "An enumeration.",
"enum": [
"DAY",
"HOUR"
],
"type": "string"
},
"AllowDenyPattern": {
"title": "AllowDenyPattern",
"description": "A class to store allow deny regexes",
"type": "object",
"properties": {
"allow": {
"title": "Allow",
"description": "List of regex patterns for process groups to include in ingestion",
"default": [
".*"
],
"type": "array",
"items": {
"type": "string"
}
},
"deny": {
"title": "Deny",
"description": "List of regex patterns for process groups to exclude from ingestion.",
"default": [],
"type": "array",
"items": {
"type": "string"
}
},
"ignoreCase": {
"title": "Ignorecase",
"description": "Whether to ignore case sensitivity during pattern matching.",
"default": true,
"type": "boolean"
},
"alphabet": {
"title": "Alphabet",
"description": "Allowed alphabets pattern",
"default": "[A-Za-z0-9 _.-]",
"type": "string"
}
},
"additionalProperties": false
},
"BigQueryCredential": {
"title": "BigQueryCredential",
"type": "object",
"properties": {
"project_id": {
"title": "Project Id",
"description": "Project id to set the credentials",
"type": "string"
},
"private_key_id": {
"title": "Private Key Id",
"description": "Private key id",
"type": "string"
},
"private_key": {
"title": "Private Key",
"description": "Private key in a form of '-----BEGIN PRIVATE KEY-----\\nprivate-key\\n-----END PRIVATE KEY-----\\n'",
"type": "string"
},
"client_email": {
"title": "Client Email",
"description": "Client email",
"type": "string"
},
"client_id": {
"title": "Client Id",
"description": "Client Id",
"type": "string"
},
"auth_uri": {
"title": "Auth Uri",
"description": "Authentication uri",
"default": "https://accounts.google.com/o/oauth2/auth",
"type": "string"
},
"token_uri": {
"title": "Token Uri",
"description": "Token uri",
"default": "https://oauth2.googleapis.com/token",
"type": "string"
},
"auth_provider_x509_cert_url": {
"title": "Auth Provider X509 Cert Url",
"description": "Auth provider x509 certificate url",
"default": "https://www.googleapis.com/oauth2/v1/certs",
"type": "string"
},
"type": {
"title": "Type",
"description": "Authentication type",
"default": "service_account",
"type": "string"
},
"client_x509_cert_url": {
"title": "Client X509 Cert Url",
"description": "If not set it will be default to https://www.googleapis.com/robot/v1/metadata/x509/client_email",
"type": "string"
}
},
"required": [
"project_id",
"private_key_id",
"private_key",
"client_email",
"client_id"
],
"additionalProperties": false
}
}
}
Prerequisites
The Google Identity must have one of the following OAuth scopes granted to it:
- https://www.googleapis.com/auth/logging.read
- https://www.googleapis.com/auth/logging.admin
- https://www.googleapis.com/auth/cloud-platform.read-only
- https://www.googleapis.com/auth/cloud-platform
And should be authorized on all projects you'd like to ingest usage stats from.
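As a sketch, authorizing the service account on an additional project with the private log viewer role could look like this (project id and account are placeholders):

# grant the private log viewer role on each project you want usage stats from
gcloud projects add-iam-policy-binding my-project-id \
  --member="serviceAccount:datahub-ingest@my-project-id.iam.gserviceaccount.com" \
  --role="roles/logging.privateLogViewer"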
Compatibility
The source was most recently confirmed compatible with the December 16, 2021 release of BigQuery.
Code Coordinates
- Class Name:
datahub.ingestion.source.usage.bigquery_usage.BigQueryUsageSource
- Browse on GitHub
Questions
If you've got any questions on configuring ingestion for BigQuery, feel free to ping us on our Slack