Product Privacy Policy

Last updated: January 1, 2026.

This Product Privacy Policy describes how InDesk AI (“we”, “us”, “our”) collects and processes data when you install and use the InDesk AI app and/or agents.

The first part of this policy describes our general data handling practices, including some references to Zendesk where relevant. The second part is a detailed description of data handling practices specific to installing InDesk via the Zendesk Marketplace.

We may update this Product Privacy Policy to reflect changes in our practices or legal requirements. We will indicate the effective date at the top of the policy.

If you have any questions about this Product Privacy Policy or our data practices, please contact support@indesk.ai.

INDESK AI DATA HANDLING PRACTICES - GENERAL

1. PURPOSE

This page is intended to inform customers/users of InDesk products about our data handling practices, covering:

data types collected

collection purposes

how data is used

purposes and duration of data retention

data storage

data security

customers’ rights and controls regarding data

2. DATA TYPES COLLECTED

Primary Data Collected

User metadata – Customer emails and Zendesk subdomains; names/roles fetched from Zendesk (not stored).

Ticket Data – Ticket IDs, content, comments, audit trails fetched via Zendesk API; processed in-memory only (not persisted).

AI-Generated Content – User-configured prompts (stored); AI responses returned but not stored.

Usage / Telemetry – Request counts, token usage, estimated costs, macro-selection events.

Integration Credentials – Zendesk OAuth tokens, Confluence credentials, LLM API keys (encrypted at rest).

Authentication Data – Token hashes, JWT refresh tokens, MFA secrets (encrypted).

Macro Data – Titles, descriptions, and action JSON cached from Zendesk.

AI Agent Configurations – Agent names, prompts, knowledge sources, vector embeddings (768-dim).

Chat Messages – Processed in-memory and written to Zendesk tickets; not stored in local database.

Uploaded Files – JSONB metadata (max 5 per agent) with vector embeddings; content stored securely.

System Logs – Automatically sanitized; sensitive fields (emails, API keys, subscription keys) redacted.

Explicitly Not Collected by Us

Payment Card Data – InDesk AI does not collect or store your payment card data. This data may, however, be collected by your payment processor.

Persistent Ticket Content – Fetched on-demand only; not stored after processing.

Ticket Attachments – Not downloaded or stored.

Personal Identifiers Beyond Email – No phone numbers, addresses, SSNs, or birthdates collected.

Raw Third-Party API Responses – Immediately processed and discarded (not retained).

The service is not directed to children and we do not knowingly collect children’s personal data.

Key Architecture Pattern

Transient Processing Model

Ticket or conversation data → Fetched → Formatted → Sent to LLM → Response returned → Discarded (not persisted).

All processing follows a data minimization principle with encryption in transit and at rest, ensuring no long-term retention of customer content.
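The transient flow above can be sketched as follows. This is an illustrative outline only; the function names (`fetch_ticket`, `format_prompt`, `call_llm`) are hypothetical stand-ins for the actual Zendesk client and LLM provider calls, not the real codebase.

```python
# Transient processing sketch: ticket content lives only in local variables
# for the duration of one request and is never written to a database.

def fetch_ticket(ticket_id: int) -> dict:
    # Stand-in for a Zendesk API fetch; returns ticket content on demand.
    return {"id": ticket_id, "subject": "Login issue", "comments": ["Cannot sign in"]}

def format_prompt(ticket: dict) -> str:
    # Flatten the ticket fields into a prompt for the model.
    return f"Summarize ticket {ticket['id']}: {ticket['subject']} | " + " ".join(ticket["comments"])

def call_llm(prompt: str) -> str:
    # Stand-in for a provider call (OpenAI, Anthropic, Gemini, etc.).
    return f"Summary of: {prompt[:40]}..."

def handle_request(ticket_id: int) -> str:
    ticket = fetch_ticket(ticket_id)   # Fetched
    prompt = format_prompt(ticket)     # Formatted
    response = call_llm(prompt)        # Sent to LLM
    return response                    # Returned; locals are discarded after the call
```

Nothing in this flow persists the ticket; once `handle_request` returns, the fetched content is garbage-collected.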

3. HOW DATA IS USED

Primary Uses of the Data

Deliver Support

Generate reply suggestions

Summarize tickets and conversations

Detect user intent

Identify similar resolved tickets

Suggest macros and tags

Improve or rephrase text for clarity

Routing and Escalation

Triage and route incoming tickets

Detect duplicates or merge candidates

Predict escalation or priority level

AI Agent

Customer-facing AI chat agents (optional ticket writeback to Zendesk)

Retrieve relevant knowledge from customer-provided sources using embeddings (RAG)

Analytics and Usage

Track feature usage, token consumption, cost estimates, and macro telemetry

Quality Improvement

Monitor system performance and capacity

Maintain secure logs with PII redaction

Authentication and Access Control

Handle user authentication, session management, and MFA

Integrations

Connect securely to Zendesk, Confluence, and other third-party systems using encrypted credentials

Third-Party Sharing

Hosting and Infrastructure

PostgreSQL (Self-hosted) – stores configurations, encrypted credentials, usage metrics, embeddings, and agent settings. No model training performed.

Redis (Self-hosted) – caches temporary data (sessions, rate limits, usage batching). No model training.

Model Providers

OpenAI, Anthropic, Google Gemini, Mistral, and others

Data shared: formatted ticket or chat content, prompts, and limited metadata.

Purpose: AI-based generation, summarization, routing, and analysis.

Model training: depends on provider policy; customers control the provider and API keys.

Embeddings Model

Runs locally using SentenceTransformers.

Purpose: power retrieval-augmented generation (RAG) and internal knowledge search.

Model training: none; no data shared externally.
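The local retrieval step can be illustrated with plain cosine similarity. Real embeddings are 768-dimensional SentenceTransformers vectors; the 3-dimensional vectors below are toy values chosen only to keep the sketch readable.

```python
import math

# Toy RAG retrieval step: rank stored knowledge chunks by cosine
# similarity to a query embedding, entirely on local infrastructure.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def top_k(query, chunks, k=2):
    # chunks: list of (text, embedding) pairs stored in the database
    ranked = sorted(chunks, key=lambda c: cosine(query, c[1]), reverse=True)
    return [text for text, _ in ranked[:k]]
```

Because both embedding generation and this ranking run locally, no knowledge content leaves the deployment for retrieval.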

Zendesk APIs

Used to fetch ticket content and metadata and optionally write AI outputs back.

No model training or long-term data retention.

Confluence APIs

Fetch and process page content for internal knowledge retrieval.

No model training; content not shared externally.

Notes on Model Training

API data handling follows each provider’s policy.

Example: Anthropic explicitly states that API data is not used for model training.

4. RETENTION POLICY

Data Retention Summary

Retention Overview

Conversations & Transcripts (Tickets, Chats)

Retention: Not stored in the application database.

Details: Ticket and chat content are fetched from Zendesk, processed in memory, and written back to Zendesk only.

Temporary Data: Session markers and deduplication tokens retained for up to 1 day.

Reason: Operational reliability for retries and idempotency; the primary copy resides in Zendesk.

Uploaded Knowledge & Embeddings

Retention: Retained until the user deletes the related agent or file.

Embeddings: Persist in PostgreSQL indefinitely unless manually removed.

Reason: Required for AI agent retrieval and knowledge-base search.

AI Prompts & Configurations

Retention: Retained until edited or deleted by the user.

Reason: Needed to maintain custom AI agent behavior.

Usage Costs & Analytics

Retention: Aggregated and anonymized analytics retained indefinitely.

Reason: Used for billing estimates, feature telemetry, and system performance insights.

Authentication Tokens

Access Tokens: Short-lived, expire per configured TTL.

Refresh Tokens: Retained for up to 30 days after expiration, then purged automatically.

Reason: Required for secure user reauthentication and session continuity.

Redis Caches & Queues

General Caches: 30–600 seconds TTL.

Rate-Limiter Data: Approximately 2-minute TTL.

Task Queues: Messages retained up to 1 hour; error or retry data may persist for up to 3 days.

Reason: Short-term caching and background job processing only.
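The TTL behavior described above can be mimicked in a few lines of pure Python. This is a conceptual model of the Redis expiry semantics, not our cache implementation.

```python
import time

# Conceptual TTL cache: entries expire after a per-key TTL and are
# never persisted to disk, mirroring the short-lived Redis data above.

class TTLCache:
    def __init__(self):
        self._store = {}  # key -> (value, expires_at)

    def set(self, key, value, ttl_seconds):
        self._store[key] = (value, time.monotonic() + ttl_seconds)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # lazy expiry on read, as Redis does
            return None
        return value
```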

System Logs & Diagnostics

Retention: Determined by deployment logging configuration; not persisted by application logic.

Data Handling: Logs are automatically sanitized to redact sensitive data (e.g., emails, API keys).

Integration Credentials

Retention: Encrypted at rest; retained until the user revokes or removes the integration.

Reason: Needed for ongoing authenticated API access (e.g., Zendesk, Confluence).

Zendesk / Confluence Content

Retention: Fetched on demand; not stored by the backend.

Reason: Retention is governed by each external platform’s own policy.

Exceptions and Backups

Legal Holds

Provider-Side Retention

External services (LLM providers, Zendesk, Confluence) may retain logs or request data under their own policies.

The application does not enforce additional retention or “no-training” flags; such behavior is managed by each provider’s account settings.

Backups

Production databases are self-hosted PostgreSQL with backups following the organization’s defined retention and encryption policies.

5. STORAGE AND SECURITY

Hosting & Security Controls

Where (Hosting / Deployment)

Application Backend

Containerized via Docker and deployable to Kubernetes clusters or equivalent orchestration platforms.

Typical deployment: self-managed cloud or on-premises infrastructure.

Region: organizational infrastructure configuration (not hard-coded).

Database (PostgreSQL)

Environment: Self-hosted PostgreSQL

Encryption: SSL/TLS enforced for all client connections (configurable via DSN).

Access: Restricted to application service accounts; role-based credentials managed separately from code.

Redis

Purpose: Used for caching, session deduplication, and task queues.

Deployment: Self-hosted within the same private network or Kubernetes namespace as the API.

Security: Authenticated access; data ephemeral with short TTLs (seconds to hours).

External APIs

Zendesk, Confluence: Accessed via HTTPS using OAuth or API keys.

LLM Providers (OpenAI, Anthropic, Gemini, etc.): Accessed over HTTPS with per-provider credentials.

Protections (Encryption, Access Controls, Logging)

Encryption in Transit

All inter-service and external API communications use HTTPS/TLS.

PostgreSQL connections require SSL/TLS.

Kubernetes ingress or reverse proxy enforces TLS termination for client traffic.

Encryption at Rest

Sensitive credentials and API tokens are encrypted at rest using AES-256-GCM (authenticated encryption with associated data).

Each record uses a unique 96-bit nonce.

Redis data is transient and not persisted to disk.
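The at-rest record layout can be sketched as follows. The `encrypt` step is deliberately stubbed out: production code would delegate AES-256-GCM to a vetted cryptographic library, and the point of the sketch is only the per-record 96-bit nonce.

```python
import secrets

# Sketch of the stored envelope for an encrypted credential: each record
# carries its own fresh 96-bit (12-byte) nonce alongside the ciphertext.
# The actual AES-256-GCM encryption is NOT implemented here; a real
# deployment would use a maintained crypto library for that step.

NONCE_BYTES = 12  # 96 bits, the recommended GCM nonce size

def make_envelope(ciphertext: bytes) -> dict:
    return {
        "nonce": secrets.token_bytes(NONCE_BYTES),  # unique per record
        "ciphertext": ciphertext,
    }
```

Reusing a GCM nonce under the same key breaks the cipher's guarantees, which is why every record gets a freshly generated nonce.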

Access Controls & Isolation

Authentication: JWT-based authentication with refresh token rotation and optional MFA (TOTP).

Authorization: Role-based access control (RBAC) for administrative vs. standard user privileges.

Tenant Isolation: Enforced via PostgreSQL Row-Level Security (RLS) policies.

Logging & Audit

Sanitization: Logs automatically redact sensitive identifiers (emails, API keys, tokens).

Auditability: Auth, request, and background job events captured with timestamps.

Monitoring: System metrics and performance traces exposed through secured endpoints.

Certifications & Compliance

The application itself does not currently hold SOC 2, ISO 27001, or equivalent certifications.

Infrastructure-level compliance depends on the hosting environment.

All implemented controls align with common security frameworks (OWASP ASVS, CIS Benchmarks, and NIST principles).

6. DE-IDENTIFICATION & TRAINING

Data Pseudonymization / Anonymization

Ticket and Chat Content

Content sent to LLMs is not pseudonymized or anonymized by default.

Only logs and admin views are sanitized to redact sensitive fields (e.g., emails, API keys, secrets).

Net effect: Data used for inference is processed “as-is”; operational logs and metadata are sanitized for safety.
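The log sanitization step can be sketched with simple pattern redaction. The patterns below are illustrative examples, not the production redaction rules.

```python
import re

# Illustrative log sanitizer: redacts email addresses and API-key-like
# tokens before a log line is written. Real redaction rules differ.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
APIKEY_RE = re.compile(r"\b(sk|key|tok)[-_][A-Za-z0-9]{8,}\b")

def sanitize(line: str) -> str:
    line = EMAIL_RE.sub("[REDACTED_EMAIL]", line)
    line = APIKEY_RE.sub("[REDACTED_KEY]", line)
    return line
```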

Customer Data Usage for Model Training

Our Own Models

Customer data is never used to train our models.

Embeddings are generated using a local SentenceTransformers model for inference only.

Third-Party LLM Providers (OpenAI, Anthropic, Google Gemini, etc.)

Whether customer data is used for training depends on each provider’s policy.

The backend does not set explicit no-training flags; control must be configured at the provider account level.

Anthropic: Public policy states API data is not used for training.

Other providers (OpenAI, Google, etc.): Training usage is governed by enterprise/account-level opt-out settings.

Opt-In / Opt-Out Mechanism

The product does not provide in-app toggles to prevent training.

Customers control provider choice and must configure training opt-out via provider accounts.

Process to Request Exclusion

For third-party LLMs, configure provider accounts to disable data retention/training.

Use providers with no-training-by-default policies (e.g., Anthropic).

For our system: no training occurs.

7. CUSTOMER RIGHTS & CONTROLS

User Data Management & Privacy Controls

1. How to Request Deletion, Export, or Correction

Deletion

Delete AI Agents and associated knowledge.

Delete uploaded knowledge files for an agent.

Disconnect the Zendesk/Confluence integration (removes stored credentials).

Admins: Permanently delete subscriptions (removes tenant-scoped data).

Revoke admin tokens.

Export

Note: Conversation/ticket data is not stored in the backend; export must be done via Zendesk’s native tools.

Correction / Updates

Update prompts individually or reset to defaults.

Update AI Agent configurations, including custom LLM settings.

Save or update Zendesk/Confluence credentials (encrypted at rest).

2. Customer / User Rights

Access: Retrieve prompts, integration status, and agent embed snippets.

Rectification: Update prompts and agent configurations.

Erasure: Delete agents, uploaded knowledge, or subscriptions.

Portability: Conversation data must be exported via Zendesk; we do not store it.

3. Admin Controls Inside the App

Manage subscription lifecycle (delete, cleanup expired, permanent deletion).

Manage tokens (generate, list, revoke, refresh token cleanup).

Feature and permission management per subscription.

View aggregated usage analytics (no PII included).

4. Contact for Privacy / Security

For data subject requests beyond the controls described above, or for security inquiries, contact our designated privacy/security team.

Email: support@indesk.ai

FAQs

Do you use my conversations to train models?

Our system: No. We do not train any models on your data. We use a local embeddings model for inference only.

Third-party LLMs: Data is sent to the provider you configure (OpenAI, Anthropic, Gemini, etc.) for inference. We do not set explicit “no-training” flags; training/retention is governed by your provider account settings.

Anthropic: API data is not used for training by policy.

Others (OpenAI, Google, etc.): Enterprise accounts can configure opt-out for training.

How can I delete data?

Conversations / Tickets: We do not store them; they remain in Zendesk. Use Zendesk’s native deletion/export tools.

Agents and Knowledge: Delete an agent and/or its uploaded sources.

Prompts / Configurations: Update or reset prompts.

Integrations: Disconnect Confluence/Zendesk.

Tenant-level: Admins can delete or permanently delete a subscription.

Where is my data stored?

Application Database: Self-hosted PostgreSQL stores configuration, encrypted credentials, usage metrics, agent configurations, and embeddings (AES-256-GCM at rest).

Cache / Queues: Redis holds ephemeral caches, rate limits, and job metadata with short TTLs.

Zendesk / Confluence: Ticket and content data resides in these platforms; we fetch it on demand and do not persist raw content.

LLM Providers: Ticket/chat context is transmitted over HTTPS to your selected provider for inference; not stored by us post-call.

DATA HANDLING - INDESK AI FOR ZENDESK

1. Who we are and how to contact us

Product: InDesk AI for Zendesk

Company: IntegratingMe d.o.o. (trading as InDesk AI)

Registered Address: Bosnia and Herzegovina

Contact: support@indesk.ai

2. Data we process

We process the minimum data necessary to provide the service.

Zendesk ticket context (transient)

Ticket subject, description, comments, tags, and related metadata are fetched from Zendesk to generate AI outputs (e.g., summaries, reply suggestions, triage, QA).

This content is processed in memory transiently and is NOT cached or stored (see Section 4). We do not store ticket conversation content as a system of record in our database.

Integration credentials (stored securely)

Zendesk OAuth tokens (access/refresh) are stored encrypted at rest to enable API access on your behalf. Tokens are rotated and expire periodically per Zendesk policy.

Subscription and configuration data (stored)

Subscription key, account/admin contact email, plan/tier details, feature toggles, and status.

Macro metadata and telemetry (stored)

Macro cache: macro identifiers and metadata (title, description, actions, restrictions) synced from your Zendesk account to improve macro recommendations.

Macro telemetry: basic counters and identifiers for recommendation ranking (e.g., ticket_id, macro_id, rank_shown, total_shown). No ticket conversation text is stored here.

Operational usage metrics (stored)

Aggregated per-subscription usage for billing/limits, such as request counts, token usage estimates, cost estimates, and timestamps.

Logs (limited)

Operational logs for reliability and security. We avoid logging ticket content and never log access tokens.

3. How we use data

To provide AI features (summarization, reply suggestions, intelligent triage, QA, recommendations).

To maintain and improve service reliability, performance, and security.

To measure usage for billing, quota enforcement, and reporting.

4. Data storage and retention

Ticket content

Processed transiently and is NOT cached or stored. Ticket data is fetched from Zendesk, prepared for AI processing, sent to your configured AI provider, and the response is returned directly to the Zendesk app interface. We do not store ticket conversation content at any point.

OAuth tokens and subscription/configuration data

Stored encrypted at rest for the life of the subscription. Zendesk tokens are rotated and expire per Zendesk policy. Upon subscription termination, associated credentials are deleted within a reasonable period.

Macro cache and telemetry

Macro metadata is synced from your Zendesk account and retained while the subscription is active. Macro data and related telemetry are deleted when the subscription is terminated.

Usage metrics

Retained for billing and reporting for a limited period and then deleted or anonymized. If you require a specific retention period, contact us and we will accommodate where feasible.

Logs

Operational logs are retained for up to 7 days for reliability, security monitoring, and troubleshooting purposes. Logs contain only technical metadata such as request timestamps, feature names, ticket IDs (not ticket data, just the numeric identifier), HTTP status codes, error types, and performance metrics. Ticket conversation content, customer personal information, and authentication credentials are not logged.

5. Sharing and sub-processors

Zendesk APIs are accessed under your authorization (OAuth) to retrieve ticket context and apply actions back to your Zendesk account.

AI/LLM Processing (Bring Your Own Key model)

InDesk AI supports multiple AI providers including OpenAI, Google Gemini, Anthropic, Mistral, DeepSeek, xAI, Moonshot AI (Kimi), and OpenRouter.

You provide your own API key and select your preferred AI provider and model. We do not provide pre-configured LLM services.

Ticket data is sent directly to your chosen AI provider using your API credentials. We act solely as a pass-through processor.

Data retention, usage policies, and training opt-in/opt-out settings are controlled by your agreement with your chosen AI provider. We have no control over how your selected AI provider handles the data once transmitted.

We recommend reviewing your AI provider’s data processing agreement and privacy policy to understand their data handling practices.

Infrastructure and service providers

We use infrastructure and service providers (e.g., cloud hosting, database, Redis, observability tools) to operate the service. These providers act as sub-processors under appropriate data protection terms.

A current list of infrastructure sub-processors is available upon request at support@indesk.ai.

We do not sell personal data.

6. Security

Data in transit is protected with HTTPS/TLS.

Sensitive credentials (e.g., OAuth tokens) are encrypted at rest.

Access controls, least-privilege, and tenant isolation are enforced.

We regularly monitor, patch, and improve the security of our systems.

7. International data transfers

Our infrastructure may operate across multiple regions. Where data crosses borders, we implement appropriate safeguards. If you require details about data residency or transfer mechanisms, contact us.

8. Your rights and choices

Depending on your jurisdiction, you may have rights regarding access, correction, deletion, restriction, portability, or objection.

End-user data from Zendesk is typically controlled by your organization as the Zendesk account owner. Requests from end-users should be initiated through your organization.

To exercise rights or make inquiries, contact support@indesk.ai.

9. Children’s data

The service is not directed to children and we do not knowingly collect children’s personal data.