API Service

Teleskope's APIs allow you to perform ad-hoc classification and redaction for custom stores or use cases.

  • Classify - Detect personal or sensitive information within a given payload
  • Classify Collection - Detect personal or sensitive information within a structured data set
  • Scrub - Redact personal or sensitive information from a given payload with a variety of redaction methodologies
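As a rough sketch, a Classify call is a POST with the text to inspect in the request body. The URL, header names, and body fields below are illustrative assumptions, not Teleskope's documented contract; consult the API reference for the real schema.

```python
import json

# Hypothetical shape of a Classify request; everything here (host, path,
# field names) is a placeholder for illustration only.
def build_classify_request(payload: str, api_key: str) -> dict:
    return {
        "method": "POST",
        "url": "https://api.teleskope.example/v1/classify",  # placeholder host
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({"payload": payload}),
    }

req = build_classify_request("Contact Jane at jane@example.com", "YOUR_API_KEY")
```

The same pattern applies to Classify Collection (a structured data set in the body) and Scrub (which returns the redacted payload).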

Data Subject Rights

  • Get - Retrieve all DSR requests of a given status
  • Post - Submit a DSR request
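A DSR submission might carry a request type, the data subject's identifier, and a status for later retrieval via Get. The field names here ("type", "subject", "status") are assumptions for demonstration, not the official schema.

```python
from datetime import date

# Illustrative DSR submission body; field names are placeholders, not
# Teleskope's documented schema.
def build_dsr_request(request_type: str, subject_email: str) -> dict:
    if request_type not in {"access", "deletion"}:
        raise ValueError(f"unsupported DSR type: {request_type}")
    return {
        "type": request_type,
        "subject": {"email": subject_email},
        "submitted_on": date.today().isoformat(),
        "status": "pending",
    }

dsr = build_dsr_request("deletion", "jane@example.com")
```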


Teleskope's Metadata API surfaces classification records for each of your connected datastores.

Use Cases

Guard Agent

Teleskope's Classify and Classify Collection endpoints empower custom agents to detect personal or sensitive information as it is processed: for instance, data entered into internal chat applications, forums, or customer support tools. Instead of relying solely on regex filtering, let Teleskope handle identification and then redact the sensitive information with our Scrub API. The Scrub API supports a variety of redaction methodologies to anonymize, encrypt, or redact sensitive data in transit.
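A guard agent boils down to one interception point: pass each message through a scrub step and forward only the redacted text. In this minimal sketch the scrub function is injected so the pipeline can be tested offline; in production it would be an HTTP call to the Scrub endpoint.

```python
from typing import Callable

# Intercept a chat message and forward only the scrubbed version.
# `scrub` is a stand-in for a call to the Scrub API; the redaction
# methodology itself is chosen when making that call.
def guard_message(message: str, scrub: Callable[[str], str]) -> str:
    return scrub(message)

# Stub standing in for the API response during local testing.
def fake_scrub(text: str) -> str:
    return text.replace("jane@example.com", "[EMAIL]")

result = guard_message("Reach me at jane@example.com", fake_scrub)
print(result)  # Reach me at [EMAIL]
```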

Data Subject Rights Automation

By leveraging the Data Subject Rights API, businesses can automate deletion and access requests under regulations like GDPR and CCPA. Submitting and retrieving DSR tasks programmatically streamlines compliance workflows and reduces the manual effort of managing requests across all of your datastores, repositories, and SaaS applications.
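One way to automate this is a polling loop: retrieve pending requests with Get, dispatch each to a per-datastore handler, and record what was handled. The `fetch` callable below stands in for the Get endpoint, and the request field names are illustrative assumptions.

```python
# Sketch of a DSR automation loop; `fetch` is a stand-in for the Get
# endpoint and the record fields ("id", "datastore") are placeholders.
def process_pending(fetch, handlers):
    handled = []
    for req in fetch("pending"):
        handler = handlers.get(req["datastore"])
        if handler is not None:
            handler(req)          # delete or export data in that store
            handled.append(req["id"])
    return handled

# Offline stand-ins: two pending requests, one registered handler.
pending = [
    {"id": "dsr-1", "datastore": "postgres", "type": "deletion"},
    {"id": "dsr-2", "datastore": "s3", "type": "access"},
]
done = process_pending(lambda status: pending, {"postgres": lambda r: None})
print(done)  # only the request with a registered handler
```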

Data Element Trend Analysis

Organizations can manually record classifications created by the Classify API through an agent or, if leveraging official Teleskope Connectors, call the Metadata API to retrieve classifications at each level (column, table, schema, or database). Ingest the results into the business intelligence platform of your choice to track changes in PII frequency over time.
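The aggregation step can be as simple as bucketing classification records by month and entity type before loading them into a BI tool. The record shape below ("entity", "detected_at") is an assumption for illustration, not the Metadata API's actual response format.

```python
from collections import Counter

# Toy trend aggregation: count classification records per (month, entity).
# Record fields are illustrative placeholders.
def pii_counts_by_month(records):
    counts = Counter()
    for r in records:
        month = r["detected_at"][:7]  # "YYYY-MM"
        counts[(month, r["entity"])] += 1
    return counts

records = [
    {"entity": "EMAIL", "detected_at": "2024-01-15"},
    {"entity": "EMAIL", "detected_at": "2024-01-20"},
    {"entity": "SSN", "detected_at": "2024-02-03"},
]
counts = pii_counts_by_month(records)
print(counts[("2024-01", "EMAIL")])  # 2
```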

Clean Machine Learning Training Data

Use Teleskope's Scrub endpoint to redact personal or sensitive information from datasets intended for machine learning model training. This keeps your training data compliant with privacy laws, reduces the risk of data breaches, and supports the ethical use of data in your AI applications. Scrubbing production data yields a larger, more accurate, and less biased dataset for training models: instead of deleting customer data entirely, mask it with synthetic data of the same type to preserve fidelity.
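The "mask with synthetic data of the same type" idea can be sketched locally: swap each detected value for a fake value with a matching shape. In practice the Scrub API performs this step; the generators below are stand-ins for illustration.

```python
import random

# Local stand-in for type-preserving masking; in practice the Scrub API
# handles this. Entity names and generators are illustrative.
SYNTHETIC = {
    "EMAIL": lambda: f"user{random.randint(1000, 9999)}@example.com",
    "PHONE": lambda: f"555-{random.randint(1000, 9999)}",
}

def mask(text, findings):
    # findings: (value, entity_type) pairs, e.g. from a Classify response
    for value, entity in findings:
        text = text.replace(value, SYNTHETIC[entity]())
    return text

masked = mask("Email jane@example.com", [("jane@example.com", "EMAIL")])
```

Because the replacement is still a syntactically valid email, downstream training pipelines that parse or featurize the field keep working on the masked data.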