
SAP Analytics Cloud

Testing

Important Capabilities

| Capability | Notes |
|------------|-------|
| Descriptions | Enabled by default |
| Detect Deleted Entities | Enabled via stateful ingestion |
| Platform Instance | Enabled by default |
| Schema Metadata | Enabled by default (only for Import Data Models) |
| Table-Level Lineage | Enabled by default (only for Live Data Models) |

Configuration Notes

  1. Refer to Manage OAuth Clients to create an OAuth client in SAP Analytics Cloud. The OAuth client is required to have the following properties:

    • Purpose: API Access
    • Access:
      • Data Import Service
    • Authorization Grant: Client Credentials
  2. Maintain connection mappings (optional):

To map individual connections in SAP Analytics Cloud to platforms, platform instances, and environments, use the connection_mapping configuration within the recipe:

```yaml
connection_mapping:
  MY_BW_CONNECTION:
    platform: bw
    platform_instance: PROD_BW
    env: PROD
  MY_HANA_CONNECTION:
    platform: hana
    platform_instance: PROD_HANA
    env: PROD
```

The key in the connection mapping dictionary represents the name of the connection created in SAP Analytics Cloud.
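The OAuth client created in step 1 uses the client-credentials grant. As an illustration of what that token exchange looks like (the `sac` source performs it internally; the endpoint and credential values below are placeholders, and `build_token_request` is a hypothetical helper, not part of the source's code):

```python
# Sketch of an OAuth client-credentials token request against the SAC token
# endpoint. All values are placeholders; the datahub `sac` source handles this
# exchange itself, so this is for illustration only.
from urllib.parse import urlencode


def build_token_request(token_url: str, client_id: str, client_secret: str) -> dict:
    """Return the pieces of a client-credentials token request."""
    return {
        "url": token_url,
        "method": "POST",
        "headers": {"Content-Type": "application/x-www-form-urlencoded"},
        # grant_type must be client_credentials, matching the OAuth client above
        "body": urlencode({
            "grant_type": "client_credentials",
            "client_id": client_id,
            "client_secret": client_secret,
        }),
    }


request = build_token_request(
    "https://company.eu10.hana.ondemand.com/oauth/token",
    "my-client-id",
    "my-client-secret",
)
print(request["body"])
```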

Concept Mapping

| SAP Analytics Cloud | DataHub |
|---------------------|---------|
| Story | Dashboard |
| Application | Dashboard |
| Live Data Model | Dataset |
| Import Data Model | Dataset |
| Model | Dataset |

Limitations

  • Only models that are used in a Story or an Application are ingested, because there is no dedicated API to retrieve models (such an API exists only for Stories and Applications).
  • Browse Paths for models cannot be created because the API does not return the folder in which a model is saved.
  • Schema metadata is only ingested for Import Data Models because the schema metadata of the other model types cannot be retrieved.
  • Lineage for Import Data Models cannot be ingested because the API does not provide any information about it.
  • Currently, only SAP BW and SAP HANA are supported when ingesting the upstream lineage of Live Data Models; a warning is logged for all other connection types. Feel free to open an issue on GitHub with the warning message so support can be added.
  • For some models (e.g., built-in models) it cannot be detected whether they are Live Data or Import Data Models; these models are therefore ingested with only the Story subtype.

CLI based Ingestion

Install the Plugin

The sac source works out of the box with acryl-datahub.
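Assuming a standard Python environment, installing the core package is enough (no separate plugin package is needed for `sac`):

```shell
pip install acryl-datahub
datahub version  # verify the CLI is available
```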

Starter Recipe

Check out the following recipe to get started with ingestion! See below for full configuration options.

For general pointers on writing and running a recipe, see our main recipe guide.

```yaml
source:
  type: sac
  config:
    stateful_ingestion:
      enabled: true

    tenant_url: # Your SAP Analytics Cloud tenant URL, e.g. https://company.eu10.sapanalytics.cloud or https://company.eu10.hcs.cloud.sap
    token_url: # The Token URL of your SAP Analytics Cloud tenant, e.g. https://company.eu10.hana.ondemand.com/oauth/token

    # Add secrets in the Secrets tab with the relevant names for each variable
    client_id: "${SAC_CLIENT_ID}" # Your SAP Analytics Cloud client id
    client_secret: "${SAC_CLIENT_SECRET}" # Your SAP Analytics Cloud client secret

    # ingest Stories
    ingest_stories: true

    # ingest Analytic Applications
    ingest_applications: true

    resource_id_pattern:
      allow:
        - .*

    resource_name_pattern:
      allow:
        - .*

    folder_pattern:
      allow:
        - .*

    connection_mapping:
      MY_BW_CONNECTION:
        platform: bw
        platform_instance: PROD_BW
        env: PROD
      MY_HANA_CONNECTION:
        platform: hana
        platform_instance: PROD_HANA
        env: PROD
```
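With the recipe saved locally (the filename below is arbitrary) and the two secrets exported as environment variables, the ingestion can be run with the DataHub CLI:

```shell
export SAC_CLIENT_ID="<your-client-id>"
export SAC_CLIENT_SECRET="<your-client-secret>"
datahub ingest -c sac_recipe.yaml
```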

Config Details

Note that a . is used to denote nested fields in the YAML recipe.

| Field | Type | Description | Default |
|-------|------|-------------|---------|
| client_id | string | Client ID for the OAuth authentication | |
| client_secret | string (password) | Client secret for the OAuth authentication | |
| tenant_url | string | URL of the SAP Analytics Cloud tenant | |
| token_url | string | URL of the OAuth token endpoint of the SAP Analytics Cloud tenant | |
| incremental_lineage | boolean | When enabled, emits lineage as incremental to the lineage already in DataHub; when disabled, re-states lineage on each run | False |
| ingest_applications | boolean | Controls whether Analytic Applications should be ingested | True |
| ingest_import_data_model_schema_metadata | boolean | Controls whether schema metadata of Import Data Models should be ingested (doing so significantly increases overall ingestion time) | True |
| ingest_stories | boolean | Controls whether Stories should be ingested | True |
| platform_instance | string | The instance of the platform that all assets produced by this recipe belong to; must be unique within the platform. See https://datahubproject.io/docs/platform-instances/ for more details | |
| query_name_template | string | Template for generating dataset urns of consumed queries; the placeholder {name} can be used within the template to insert the name of the query | QUERY/{name} |
| env | string | The environment that all assets produced by this connector belong to | PROD |
| connection_mapping | map(str, ConnectionMappingConfig) | Maps connection names (the dictionary keys) to a platform, platform instance, and environment | |
| connection_mapping.key.env | string | The environment that this connection mapping belongs to | PROD |
| connection_mapping.key.platform | string | The platform that this connection mapping belongs to | |
| connection_mapping.key.platform_instance | string | The instance of the platform that this connection mapping belongs to | |
| folder_pattern | AllowDenyPattern | Patterns for selecting folders that are to be included | {'allow': ['.*'], 'deny': [], 'ignoreCase': True} |
| folder_pattern.ignoreCase | boolean | Whether to ignore case sensitivity during pattern matching | True |
| folder_pattern.allow | array(string) | List of regex patterns to include in ingestion | ['.*'] |
| folder_pattern.deny | array(string) | List of regex patterns to exclude from ingestion | [] |
| resource_id_pattern | AllowDenyPattern | Patterns for selecting resource ids that are to be included | {'allow': ['.*'], 'deny': [], 'ignoreCase': True} |
| resource_id_pattern.ignoreCase | boolean | Whether to ignore case sensitivity during pattern matching | True |
| resource_id_pattern.allow | array(string) | List of regex patterns to include in ingestion | ['.*'] |
| resource_id_pattern.deny | array(string) | List of regex patterns to exclude from ingestion | [] |
| resource_name_pattern | AllowDenyPattern | Patterns for selecting resource names that are to be included | {'allow': ['.*'], 'deny': [], 'ignoreCase': True} |
| resource_name_pattern.ignoreCase | boolean | Whether to ignore case sensitivity during pattern matching | True |
| resource_name_pattern.allow | array(string) | List of regex patterns to include in ingestion | ['.*'] |
| resource_name_pattern.deny | array(string) | List of regex patterns to exclude from ingestion | [] |
| stateful_ingestion | StatefulStaleMetadataRemovalConfig | Stateful ingestion related configs | |
| stateful_ingestion.enabled | boolean | Whether or not to enable stateful ingestion. True if a pipeline_name is set and either a datahub-rest sink or datahub_api is specified, otherwise False | False |
| stateful_ingestion.remove_stale_metadata | boolean | Soft-deletes entities present in the last successful run but missing in the current run (requires stateful ingestion to be enabled) | True |
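The three *_pattern fields above share the same AllowDenyPattern shape. The following is a minimal sketch of how such allow/deny regex filtering typically behaves; it is an illustration of the config semantics, not the source's actual implementation:

```python
import re
from dataclasses import dataclass, field
from typing import List


@dataclass
class AllowDenyPattern:
    """Illustrative allow/deny regex filter mirroring the config shape above."""
    allow: List[str] = field(default_factory=lambda: [".*"])
    deny: List[str] = field(default_factory=list)
    ignore_case: bool = True  # corresponds to ignoreCase in the recipe

    def allowed(self, name: str) -> bool:
        flags = re.IGNORECASE if self.ignore_case else 0
        # A name is kept when it matches any allow pattern and no deny pattern.
        if not any(re.match(p, name, flags) for p in self.allow):
            return False
        return not any(re.match(p, name, flags) for p in self.deny)


pattern = AllowDenyPattern(allow=["Finance.*"], deny=[".*_archive"])
print(pattern.allowed("Finance/Reports"))  # True: matches allow, not deny
print(pattern.allowed("finance_archive"))  # False: denied (case-insensitive)
print(pattern.allowed("HR/Reports"))       # False: matches no allow pattern
```

With the defaults (allow ['.*'], deny []), every name passes, which is why the starter recipe's explicit patterns are effectively no-ops until you narrow them.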

Code Coordinates

  • Class Name: datahub.ingestion.source.sac.sac.SACSource
  • Browse on GitHub

Questions

If you've got any questions on configuring ingestion for SAP Analytics Cloud, feel free to ping us on our Slack.