
In this section, we provide guides and references to use the Snowflake connector.
Supported Authentication Types:
  • Basic Auth — Username and password authentication
  • Key Pair Auth — Private key authentication with optional passphrase (see Snowflake Key Pair Auth docs)
  • SSO — Single Sign-On via the authenticator connection argument (see the sketch after this list)
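For reference, SSO settings are passed through the connection arguments. A minimal sketch, assuming the externalbrowser authenticator (other SSO authenticators follow the same pattern; all values are placeholders):

serviceConnection:
  config:
    type: Snowflake
    username: <sso_user_email>
    account: <account_identifier>
    warehouse: <warehouse_name>
    connectionArguments:
      # opens a browser window for the IdP login flow
      authenticator: externalbrowser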
Configure and schedule Snowflake metadata and profiler workflows from the OpenMetadata UI, or run them with the CLI as shown below.

Requirements

Python Requirements

We support Python versions 3.9 to 3.11.
To run the Snowflake ingestion, you will need to install:
pip3 install "openmetadata-ingestion[snowflake]"
If you want to run the Usage Connector, you’ll also need to install:
pip3 install "openmetadata-ingestion[snowflake-usage]"
To ingest basic metadata, the Snowflake user must have the following privileges:
  • USAGE Privilege on Warehouse
  • USAGE Privilege on Database
  • USAGE Privilege on Schema
  • SELECT Privilege on Tables
Before you grant privileges, replace these placeholders with your own values:
Placeholder        Description
<role_name>        Name of the new Snowflake role you want to create and assign to the user
<user_name>        Username for the new Snowflake user being created
<password>         A strong password for the new Snowflake user
<warehouse_name>   Name of the Snowflake warehouse the new role needs access to
<database_name>    Name of the Snowflake database from which you want to ingest data
-- Create new role
CREATE ROLE <role_name>;
-- Create new user
CREATE USER <user_name> DEFAULT_ROLE=<role_name> PASSWORD='<password>';
-- Grant role to user
GRANT ROLE <role_name> TO USER <user_name>;
-- Grant USAGE Privilege on Warehouse to new role created above
GRANT USAGE ON WAREHOUSE <warehouse_name> TO ROLE <role_name>;
-- Grant USAGE Privilege on Database to new role created above
GRANT USAGE ON DATABASE <database_name> TO ROLE <role_name>;
-- Grant USAGE Privilege on required Schemas to new role created above
GRANT USAGE ON ALL SCHEMAS IN DATABASE <database_name> TO ROLE <role_name>;
GRANT USAGE ON FUTURE SCHEMAS IN DATABASE <database_name> TO ROLE <role_name>;
-- Grant SELECT Privilege on required tables & views to new role created above
GRANT SELECT ON ALL TABLES IN DATABASE <database_name> TO ROLE <role_name>;
GRANT SELECT ON FUTURE TABLES IN DATABASE <database_name> TO ROLE <role_name>;
GRANT SELECT ON ALL EXTERNAL TABLES IN DATABASE <database_name> TO ROLE <role_name>;
GRANT SELECT ON FUTURE EXTERNAL TABLES IN DATABASE <database_name> TO ROLE <role_name>;
GRANT SELECT ON ALL VIEWS IN DATABASE <database_name> TO ROLE <role_name>;
GRANT SELECT ON FUTURE VIEWS IN DATABASE <database_name> TO ROLE <role_name>;
GRANT SELECT ON ALL DYNAMIC TABLES IN DATABASE <database_name> TO ROLE <role_name>;
GRANT SELECT ON FUTURE DYNAMIC TABLES IN DATABASE <database_name> TO ROLE <role_name>;
While running the usage workflow, OpenMetadata fetches the query logs by querying the snowflake.account_usage.query_history table. For this, the Snowflake user must be granted either the ACCOUNTADMIN role or a role with IMPORTED PRIVILEGES on the SNOWFLAKE database.
-- Grant IMPORTED PRIVILEGES on all Schemas of SNOWFLAKE DB to New Role
GRANT IMPORTED PRIVILEGES ON ALL SCHEMAS IN DATABASE SNOWFLAKE TO ROLE <role_name>;
If ingesting tags, the user must also be able to query the snowflake.account_usage.tag_references table. Again, this requires either the ACCOUNTADMIN role or a role granted IMPORTED PRIVILEGES on the SNOWFLAKE database.
-- Grant IMPORTED PRIVILEGES on all Schemas of SNOWFLAKE DB to New Role
GRANT IMPORTED PRIVILEGES ON ALL SCHEMAS IN DATABASE SNOWFLAKE TO ROLE <role_name>;
For more information about the account_usage schema, see Account Usage.
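To verify the grants, you can run a quick check as the new role (a sketch; note that account_usage views can lag behind real activity by up to 45 minutes):

-- Sanity check: confirm the role can read the query history
USE ROLE <role_name>;
SELECT query_text, start_time
FROM snowflake.account_usage.query_history
LIMIT 10;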

Metadata Ingestion

All connectors are defined as JSON Schemas, and here you can find the structure to create a connection to Snowflake. In order to create and run a Metadata Ingestion workflow, we will follow the steps to create a YAML configuration able to connect to the source, process the Entities if needed, and reach the OpenMetadata server. The workflow itself is modeled around a JSON Schema.

1. Define the YAML Config

This is a sample config for Snowflake:
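A minimal sketch using Basic Auth; all values are placeholders to replace with your own, and the JSON Schema mentioned above lists the full set of connection options:

source:
  type: snowflake
  serviceName: <service_name>
  serviceConnection:
    config:
      type: Snowflake
      username: <username>
      password: <password>
      account: <account_identifier>
      warehouse: <warehouse_name>
      database: <database_name>
  sourceConfig:
    config:
      type: DatabaseMetadata
sink:
  type: metadata-rest
  config: {}
workflowConfig:
  openMetadataServerConfig:
    hostPort: <OpenMetadata host and port>
    authProvider: <OpenMetadata auth provider>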

2. Run with the CLI

First, save the YAML file. Then, with all requirements installed, run:
metadata ingest -c <path-to-yaml>
Note that from connector to connector, this recipe will always be the same. By updating the YAML configuration, you will be able to extract metadata from different sources.

Query Usage

The Query Usage workflow uses the query-parser processor. After running a Metadata Ingestion workflow, we can run the Query Usage workflow. The serviceName should be the same as the one used for Metadata Ingestion, so the ingestion bot can retrieve the serviceConnection details from the server.

1. Define the YAML Config

This is a sample config for Usage:
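A minimal sketch of the usage workflow shape. The serviceConnection is omitted because the ingestion bot fetches it from the server; queryLogDuration (in days) and the staging filename are illustrative:

source:
  type: snowflake-usage
  serviceName: <service_name>
  sourceConfig:
    config:
      type: DatabaseUsage
      queryLogDuration: 7
processor:
  type: query-parser
  config: {}
stage:
  type: table-usage
  config:
    filename: /tmp/snowflake_usage
bulkSink:
  type: metadata-usage
  config:
    filename: /tmp/snowflake_usage
workflowConfig:
  openMetadataServerConfig:
    hostPort: <OpenMetadata host and port>
    authProvider: <OpenMetadata auth provider>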

2. Run with the CLI

After saving the YAML config, we will run the command the same way we did for the metadata ingestion:
metadata usage -c <path-to-yaml>

Lineage

After running a Metadata Ingestion workflow, we can run the Lineage workflow. The serviceName should be the same as the one used for Metadata Ingestion, so the ingestion bot can retrieve the serviceConnection details from the server.

1. Define the YAML Config

This is a sample config for Lineage:
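A minimal sketch; queryLogDuration (in days) is illustrative:

source:
  type: snowflake-lineage
  serviceName: <service_name>
  sourceConfig:
    config:
      type: DatabaseLineage
      queryLogDuration: 1
sink:
  type: metadata-rest
  config: {}
workflowConfig:
  openMetadataServerConfig:
    hostPort: <OpenMetadata host and port>
    authProvider: <OpenMetadata auth provider>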
  • You can learn more about how to configure and run the Lineage Workflow to extract Lineage data here.

2. Run with the CLI

After saving the YAML config, we will run the command the same way we did for the metadata ingestion:
metadata ingest -c <path-to-yaml>

Auto Classification

The Auto Classification workflow uses the orm-profiler processor. After running a Metadata Ingestion workflow, we can run the Auto Classification workflow. The serviceName should be the same as the one used for Metadata Ingestion, so the ingestion bot can retrieve the serviceConnection details from the server.

1. Define the YAML Config

This is a sample config for the Auto Classification Workflow:
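A minimal sketch. The AutoClassification source config type and its flags are shown as an assumption of the current schema; check the JSON Schema for your version:

source:
  type: snowflake
  serviceName: <service_name>
  sourceConfig:
    config:
      type: AutoClassification
      # store sample rows alongside classification results
      storeSampleData: true
      enableAutoClassification: true
processor:
  type: orm-profiler
  config: {}
sink:
  type: metadata-rest
  config: {}
workflowConfig:
  openMetadataServerConfig:
    hostPort: <OpenMetadata host and port>
    authProvider: <OpenMetadata auth provider>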

2. Run with the CLI

After saving the YAML config, we will run the command the same way we did for the metadata ingestion:
metadata classify -c <path-to-yaml>
Note that instead of ingest, we now use the classify command to select the Auto Classification workflow.

Data Quality

Adding Data Quality Test Cases from a YAML Config

When creating a YAML config for a test workflow, the source configuration is very simple.
source:
  type: TestSuite
  serviceName: <your_service_name>
  sourceConfig:
    config:
      type: TestSuite
      entityFullyQualifiedName: <entityFqn>
The only sections you need to modify here are the serviceName (this name needs to be unique) and entityFullyQualifiedName (the entity against which the tests will run) keys. Once you have defined your source configuration, you'll need to define the processor configuration.
processor:
  type: "orm-test-runner"
  config:
    forceUpdate: <false|true>
    testCases:
      - name: <testCaseName>
        testDefinitionName: columnValueLengthsToBeBetween
        columnName: <columnName>
        parameterValues:
          - name: minLength
            value: 10
          - name: maxLength
            value: 25
      - name: <testCaseName>
        testDefinitionName: tableRowCountToEqual
        parameterValues:
          - name: value
            value: 10
The processor type should be set to "orm-test-runner". For accepted test definition names and parameter value names refer to the tests page.
Note that while you can define tests directly in this YAML configuration, running the workflow will execute ALL THE TESTS present on the table, regardless of what you define in the YAML. This makes it easy for any user to contribute tests via the UI while keeping test execution external.
You can keep your YAML config as simple as follows if the table already has tests.
processor:
  type: "orm-test-runner"
  config: {}

Key reference:

  • forceUpdate: if a test case with the same name already exists for the entity, this flag determines the strategy to follow when running the test (i.e., whether or not to update its parameters)
  • testCases: list of test cases to add to the referenced entity. Note that all the tests present on the Table will be executed.
  • name: test case name
  • testDefinitionName: test definition
  • columnName: only applies to column tests; the name of the column to run the test against
  • parameterValues: parameter values of the test
The sink and workflowConfig will have the same settings as the ingestion and profiler workflow.

Full yaml config example

source:
  type: TestSuite
  serviceName: MyAwesomeTestSuite
  sourceConfig:
    config:
      type: TestSuite
      entityFullyQualifiedName: MySQL.default.openmetadata_db.tag_usage
#     testCases: ["run_only_this_test_case"] # Optional, if not provided all tests will be executed

processor:
  type: "orm-test-runner"
  config:
    forceUpdate: false
    testCases:
      - name: column_value_length_tagFQN
        testDefinitionName: columnValueLengthsToBeBetween
        columnName: tagFQN
        parameterValues:
          - name: minLength
            value: 10
          - name: maxLength
            value: 25
      - name: table_row_count_test
        testDefinitionName: tableRowCountToEqual
        parameterValues:
          - name: value
            value: 10

sink:
  type: metadata-rest
  config: {}
workflowConfig:
  openMetadataServerConfig:
    hostPort: <OpenMetadata host and port>
    authProvider: <OpenMetadata auth provider>
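For a local deployment, the workflowConfig is often filled in as follows (a sketch, assuming the default openmetadata auth provider with a JWT token):

workflowConfig:
  openMetadataServerConfig:
    hostPort: http://localhost:8585/api
    authProvider: openmetadata
    securityConfig:
      # token for the ingestion bot; keep it out of version control
      jwtToken: <jwt_token>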

How to Run Tests

To run the tests from the CLI, execute the following command:
metadata test -c /path/to/my/config.yaml

dbt Integration

You can learn more about how to ingest dbt models’ definitions and their lineage here.