If you want to add an extra layer of security before enabling certain roles to access your tokenized data, applying an ALTR Detokenization Policy can be a valuable part of your data governance strategy. Tokenization is the process of replacing your sensitive data with substitutes, called tokens, that the human eye can't read or interpret; detokenization is the reverse process. Detokenization removes the non-sensitive token identifiers from your Snowflake columns and returns the columns to their original state, with no token identifiers. When you use ALTR to apply a Detokenization Policy to a Snowflake column, the original column data is returned to authorized users.
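The relationship between the two processes can be sketched with a toy example. This is purely illustrative: ALTR manages token mappings in its own service, and none of the names below are ALTR APIs.

```python
import secrets

# Conceptual sketch of tokenization/detokenization; not ALTR's implementation.
class TokenVault:
    def __init__(self):
        self._vault = {}  # token -> original sensitive value

    def tokenize(self, value):
        """Replace a sensitive value with an unreadable substitute (a token)."""
        token = "tok_" + secrets.token_hex(8)
        self._vault[token] = value
        return token

    def detokenize(self, token):
        """The reverse process: return the original value for a token."""
        return self._vault[token]

vault = TokenVault()
token = vault.tokenize("1990-04-17")          # e.g. a date of birth
assert token != "1990-04-17"                  # readers of the column see only the token
assert vault.detokenize(token) == "1990-04-17"  # authorized users get the raw value back
```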
For example, an application may require a person's date of birth to generate monthly patient invoices for recurring medical services (such as ongoing treatments for an illness) rendered by their medical provider. Detokenized sensitive data must be read under strict security controls.
To apply an ALTR Detokenization Policy for your Snowflake column, you will need to:
While tokenization replaces sensitive data with a substitute (that is, a token) to provide an extra layer of protection, as described earlier, detokenization is the reverse process. Applying a Detokenization Policy in ALTR lets you control who can access the raw values of your sensitive data versus the tokenized ones.
If your data has already been tokenized via an ETL process through Matillion, then by default you will only have access to the tokenized columns through ALTR. If you apply a 'No Mask' policy option, you can select which roles can access the raw, detokenized values; a 'Masked' policy option lets you select which roles can access masked detokenized values.
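As a rough illustration of how the two policy options differ for a given role, consider the sketch below. The role names and the masking rule are hypothetical stand-ins, not ALTR configuration.

```python
# Illustrative only: role names and the masking rule are hypothetical,
# not ALTR policy syntax.
NO_MASK_ROLES = {"BILLING_ADMIN"}  # may read raw detokenized values
MASKED_ROLES = {"ANALYST"}         # may read masked detokenized values

def read_column(role, raw_value):
    """Return what a given role would see for a governed column value."""
    if role in NO_MASK_ROLES:
        return raw_value                 # 'No Mask': full raw value
    if role in MASKED_ROLES:
        return "XXX-XX-" + raw_value[-4:]  # 'Masked': partially masked value
    return "<tokenized>"                 # all other roles: token only

print(read_column("BILLING_ADMIN", "123-45-6789"))  # 123-45-6789
print(read_column("ANALYST", "123-45-6789"))        # XXX-XX-6789
print(read_column("PUBLIC", "123-45-6789"))         # <tokenized>
```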
In short, by applying our Detokenization Policy you can:
Before you apply a Detokenization Policy, keep the following considerations in mind:
We recommend that you apply policy against one column at a time for easier manageability.
Before you begin this procedure, the columns must already be tokenized through our API or another provider. In addition, you must have a permission (role) that grants you the 'unmask option' to detokenize and read the raw value of the original data. Once both prerequisites are met, you can proceed with the steps below.
Question: Is the policy complex to apply or remove?
Answer: Setting a 'No Mask' option on any column is straightforward as long as the column is connected to ALTR. In addition, if the data has already been tokenized, be sure to label the column as tokenized.
Question: Are there security or compliance risks to consider before applying this policy?
Answer: There are only security and compliance risks if the policy is applied incorrectly. Be careful about which roles you assign this policy to.
Question: How long does it take for the policy to go into effect after I've applied it? Does it happen immediately?
Answer: It's an asynchronous process and could take up to a few minutes for the policy to take effect.
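Because the policy applies asynchronously, any automation that depends on it may need to wait for it to take effect. A minimal polling sketch follows; the `check_policy_status` callable is a hypothetical stand-in for however you verify the policy state (for example, querying the column as a governed role), not an ALTR API call.

```python
import time

def wait_for_policy(check_policy_status, timeout_s=300, interval_s=10):
    """Poll until the policy reports active, or give up after timeout_s seconds.

    check_policy_status: a zero-argument callable returning True once the
    policy is in effect (hypothetical; supply your own verification).
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if check_policy_status():
            return True
        time.sleep(interval_s)
    return False
```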
Question: How will I know that the policy has been applied successfully?
Answer: A confirmation message will be displayed.