Kana FAQ
Frequently asked questions and answers about agentic AI, agentic marketing, Kana's agentic AI marketing platform, integrations, and enterprise workflows.
While Generative AI is reactive, creating content only when prompted, Agentic Marketing is proactive. Kana’s agentic platform doesn’t just write copy; it sets goals, plans multi-step workflows, and executes them across your tech stack.
No. A chatbot is a conversational interface. An AI agent is an entity that reasons, plans, and takes action. While you can talk to Kana’s agents, their primary job is to do: they analyze data, adjust ad bids, optimize audiences, and protect your brand visibility autonomously while your marketing team supervises.
Agentifying your strategy means moving from manual workflow management to goal-based orchestration. Instead of managing tasks (e.g., "update this audience list"), you manage outcomes (e.g., "maintain a 3:1 ROAS on this segment"). Kana’s agents then handle the underlying data integration, segment expansion, and real-time optimization.
No. Agentic AI can create new workflows and take automated action based on data, insights, and information you've never had access to before. Don't only ask “How do I do what I’ve always done, but faster?” Start asking “What can I do now that I never could before?” Kana exists to help you explore that frontier, and we’ll build the answer together.
Marketing has outgrown its tools. With the decline in the influence of third-party cookies, fragmented data, and the rise of "AI Answer Engines," manual human intervention can’t keep up. Agentic AI provides the "operating layer" needed to process millions of signals in real-time and act before an opportunity is lost.
We use "augmented intelligence" because Kana is designed to enhance and amplify what marketers already do well, not replace them. Our AI agents surface insights, recommend actions, and automate tedious work, but the marketer always stays in control, reviewing and approving outputs before they go live. It's intelligence that augments your expertise, not a black box that operates without you.
It means you're never handing the keys to an algorithm and hoping for the best. With Kana, marketing professionals supervise AI agents, approve actions, offer real-time feedback, and adjust parameters at every step so nothing goes to market without a human saying "yes, this is right for our brand."
Synthetic data is AI-generated data that amplifies the signal from high-quality first-party 'seed data' to create a larger reservoir of data with the same statistical properties, driving smarter, more effective personalization, targeting, and experimentation. It has the added benefits of reducing privacy risk while dramatically cutting the cost of low-efficacy third-party data.
Absolutely. Your data is segregated from external sources, including LLMs hungry for external, proprietary data to improve their own inference results.
Yes. Kana is built to be a Just-in-Time (JIT) Data Integrator. JIT data provides a non-disruptive method of capturing data incrementally to generate measurable business value while avoiding the long integration death marches that cripple so many software implementations.
As search shifts from Google links to AI answers (like Perplexity or Gemini), brands are becoming "invisible." AEO is the practice of ensuring LLMs and AI agents find, understand, and recommend your brand. Kana’s AI Visibility agents optimize your digital footprint specifically for these AI systems.
Kana provides real-time visibility audits. Our agents constantly monitor major AI answer engines to track your brand’s presence and identify exactly where your information is missing or being misrepresented.
Yes. Kana’s agents can act as an always-on brand shield. They monitor for mentions and sentiment across the web, alerting you to emerging threats and providing the data needed to correct the narrative in AI training sets.
We advocate for Augmented Intelligence, meaning we believe AI should amplify marketers, not replace them. By handing off tasks like manual data cleaning, data synthesis, and campaign monitoring to Kana, your team is freed up to focus on high-level strategy and creative vision.
Most waste comes from "delayed insight." Kana's optimization and experimentation agents rebalance marketing spend across external channels to continually improve ROAS (Return on Ad Spend), using AI to assemble a complete picture of campaign performance from incomplete pieces of spend and performance data.
Traditional dashboards tell you what happened last week (Retroactive). Kana’s agents tell you what is happening now and what will happen next (Predictive/Actionable), allowing you to intervene while the campaign is still live.
Yes. Our agents analyze behavioral signals in your data warehouse to identify "at-risk" customers. They can then trigger personalized offers or retention workflows autonomously to save the account before the customer leaves.
Start with a specific "outcome-based" challenge—like improving audience precision or increasing AI visibility. Kana plugs into your current data flow and deploys specialized agents to solve that specific problem first, scaling as you see ROI.
This is where different specialized agents (one for analytics, one for audience building, one for media buying, and so on) collaborate. Kana's symphony of interoperable agents is highly aligned and loosely coupled: the agents share data and goals, ensuring your strategy is executed consistently across all channels.
Yes. You are the "Head Coach." You set the budget limits, brand tone guidelines, and approval requirements. Kana’s agents operate autonomously within those boundaries, but escalate to you when a decision falls outside the "safe zone."
Because agents operate at "superhuman speed," the time-to-value is significantly shorter than traditional manual optimizations. Many Kana clients see improved audience precision and reduced CPA within the first 30 days of deployment.
Both. We serve enterprise consumer brands, tech companies, and agencies. Whether you are managing complex B2C multi-channel programs or high-scale B2B behavioral datasets, Kana’s agentic layer is designed to handle the complexity of modern marketing.
Marketing currently lives in silos (email tool, ad tool, CRM). Kana’s "Marketing Operating Layer" sits on top of all of them, using AI agents to coordinate data and actions across the entire stack so your marketing works as one unified system. The Operating Layer consolidates disparate systems (email, spreadsheets, software) into a single, cohesive view; increases efficiency by removing the manual daily tasks that prevent staff from focusing on high-value work; and standardizes a consistent, repeatable way of working across your organization.
Kana provides strict oversight of AI model usage through multiple layers of control: data guardrails that ensure LLM prompts contain only relevant, factual context from the customer's own data; structured output validation that rejects malformed responses; multi-pass generation pipelines where outputs are iteratively refined and checked for consistency; and an AI-as-a-judge evaluation pattern where a separate model instance evaluates outputs against defined accuracy and quality rubrics before they are presented to users. These controls substantially mitigate the hallucination risks typically associated with unconstrained chatbot-style LLM usage.
Our proprietary Just-In-Time (JIT) Data Integration technology structures your campaign performance data, customer insights, and business context on the fly, so you get value from day one, not day ninety.
No. We make an explicit commitment that neither your Customer Data (Input) nor your Output is used to train or fine-tune the models.
For full transparency: like any other SaaS provider, we track the usage of our products and use that information to improve our services and follow trends, but we do not use your Customer Data (Input) or your Output to train or fine-tune the models.
We host the service and customized applications in our environment on Google Cloud Platform (GCP). We follow a Just-In-Time (JIT) Data Integration approach that accesses data as needed, minimizing data duplication. Where possible, we leave data at rest in its original location and access it incrementally as needed, reducing both data movement and exposure surface.
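The JIT access pattern described above can be sketched as a lazy, incremental reader. This is an illustrative model only, with hypothetical names (`JITSource`, `next_batch`), not Kana's actual implementation: data stays at rest in the source system and is fetched in small increments only when needed, minimizing duplication and movement.

```python
# Hypothetical sketch of Just-in-Time data access: data stays at rest in the
# source system and is fetched lazily, in increments, only when an agent
# actually needs it -- minimizing duplication and data movement.
class JITSource:
    def __init__(self, fetch_page):
        self.fetch_page = fetch_page     # callable into the source system
        self.cursor = 0                  # tracks how far we've read

    def next_batch(self, size: int = 2) -> list:
        # Pull only the next increment; nothing is copied wholesale.
        batch = self.fetch_page(self.cursor, size)
        self.cursor += len(batch)
        return batch

# Stand-in for a remote table queried in place:
REMOTE = ["row1", "row2", "row3"]
source = JITSource(lambda offset, size: REMOTE[offset:offset + size])
first = source.next_batch()    # only what's needed, when needed
second = source.next_batch()
```

The key design point is that the source system remains authoritative; the platform holds only the increments it is actively working on.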
Human oversight is a core design principle of the Kana platform.
Our agents surface information, prepare actionable insights, and recommend next steps, but a human operator reviews and decides which actions are executed. Specifically:
- Guided interaction: Before any computationally significant operation (e.g., generating an audience, producing a campaign report), the system engages the user in a conversational workflow to collect and confirm parameters. The operation does not proceed until the user explicitly confirms readiness.
- Review before deployment: All AI-generated outputs (audience definitions, campaign recommendations, content) are presented to the user for review, editing, and approval before they are activated or distributed.
- Configurable automation: Kana works with each customer to determine whether specific low-risk, high-frequency actions may be automated. However, the default posture is human approval for all consequential actions.
- Abort capability: Users can cancel long-running agent operations at any time through the application interface.
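The oversight controls above amount to a confirm-before-execute gate. The following is a minimal sketch under assumed names (`PendingOperation`, `run_operation` are illustrative, not Kana's API): an operation collects parameters, then blocks until the user explicitly confirms, and can be cancelled at any time.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the "guided interaction" and "abort capability"
# patterns: no computationally significant operation proceeds without
# explicit human confirmation, and the user can cancel at any time.
@dataclass
class PendingOperation:
    name: str
    params: dict = field(default_factory=dict)
    confirmed: bool = False
    cancelled: bool = False

    def confirm(self):
        self.confirmed = True

    def cancel(self):
        self.cancelled = True

def run_operation(op: PendingOperation) -> str:
    if op.cancelled:
        return "cancelled"
    if not op.confirmed:
        return "awaiting human approval"   # default posture
    return f"executing {op.name} with {op.params}"

op = PendingOperation("generate_audience", {"segment": "high_value"})
print(run_operation(op))   # awaiting human approval
op.confirm()
print(run_operation(op))   # executes only after explicit confirmation
```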
The Kana applications typically involve a series of dashboards and reports, and a chat agent that the person can interact with. The chat agent is clearly identified as such, self-identifies as an AI agent, and does not in any way try to present itself as anything other than an AI agent.
Kana employs a structured, multi-pass pipeline rather than relying on a single LLM call to produce final outputs. The process works as follows:
- Task decomposition: Complex tasks are broken into discrete stages (e.g., data gathering, analysis, synthesis, output generation). Each stage uses a purpose-specific model configuration optimized for that task type (e.g., planning, evaluation, summarization).
- Data grounding: At each stage, the LLM operates on actual customer data retrieved from the platform's databases — not on fabricated or hallucinated information. Prompts are constructed programmatically with the relevant data context embedded.
- Structured output validation: All LLM outputs are parsed into structured formats (e.g., JSON) and validated against expected schemas. Malformed outputs are repaired or regenerated automatically.
- AI-as-a-judge evaluation: After generation, a separate LLM evaluation step ("judge") assesses the output against defined rubrics covering accuracy, completeness, coherence, and fidelity to source data. If the output fails evaluation, it is regenerated with the judge's feedback incorporated — up to a bounded number of retry iterations.
- Citation and source tracing: Outputs include traceable references back to the underlying data records. Users can inspect the provenance of any claim or recommendation.
- Human review: The final output is presented to the user for review and approval before any downstream action is taken.
At no point does the system autonomously execute consequential actions (e.g., activating a campaign, sending communications) without human authorization.
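The multi-pass pipeline above (generate, validate schema, judge, retry with feedback, bounded retries) can be sketched as follows. The `call_llm` and `judge` functions are stand-ins for real model calls, and the schema and rubric are illustrative, not Kana's actual ones:

```python
import json

# Hypothetical sketch of the generate -> validate -> judge -> retry loop,
# bounded by MAX_RETRIES with a fail-safe escalation to human review.
MAX_RETRIES = 3
EXPECTED_KEYS = {"summary", "sources"}

def call_llm(prompt: str, feedback: str = "") -> str:
    # Stand-in for a real model call returning structured JSON.
    return json.dumps({"summary": "Q3 CPA fell 12%", "sources": ["rec_101"]})

def validate(raw: str):
    # Structured output validation: parse and check against the schema.
    try:
        out = json.loads(raw)
    except json.JSONDecodeError:
        return None                        # malformed -> regenerate
    return out if EXPECTED_KEYS <= out.keys() else None

def judge(output: dict) -> tuple[bool, str]:
    # Separate "AI-as-a-judge" pass: unsupported claims fail the rubric.
    if not output["sources"]:
        return False, "claims lack source citations"
    return True, ""

def generate(prompt: str) -> dict:
    feedback = ""
    for _ in range(MAX_RETRIES):
        out = validate(call_llm(prompt, feedback))
        if out is None:
            continue                       # schema failure: retry
        ok, feedback = judge(out)          # feedback feeds the next attempt
        if ok:
            return out                     # passes judge: hand to human review
    raise RuntimeError("failed evaluation; escalate to human review")

result = generate("Summarize Q3 campaign performance")
```

Note the fail-safe default: exhausting the retry budget raises rather than silently passing an unvalidated output downstream.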
Agent authorization is enforced at multiple layers:
- Platform authentication: All users must authenticate through the Kana platform before accessing any application. Sessions are managed with secure tokens and standard authentication protocols.
- Role-based access control (RBAC): The platform supports defined user roles (e.g., admin, editor, viewer) that can be applied to restrict access to specific endpoints, data queries, navigation items, and background processes. Permissions are enforced server-side by the platform, not solely in the client application.
- Scoped agent capabilities: AI agents do not have open-ended access to external systems. Each agent operation is implemented as a defined backend endpoint with a specific, bounded scope of data access and action. Agents interact with customer data through parameterized, pre-defined database queries — not through arbitrary SQL or API access.
- Read-only data access by default: Development and diagnostic tooling operates in read-only mode. Write operations require explicit authorization through defined endpoints.
- No autonomous external actions: Agents do not independently send emails, activate campaigns, modify external systems, or take any action outside the Kana platform without explicit human authorization through the application interface.
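The RBAC and scoped-capability layers above can be sketched as a server-side permission check in front of a fixed endpoint registry. Role names, endpoint names, and query templates here are hypothetical, not Kana's actual schema:

```python
# Hypothetical sketch of server-side RBAC over pre-defined agent endpoints.
# Agents can only reach bounded, parameterized queries -- never arbitrary SQL.
ROLES = {
    "viewer": {"get_report"},
    "editor": {"get_report", "build_audience"},
    "admin":  {"get_report", "build_audience"},
}

ENDPOINTS = {
    "get_report":     "SELECT * FROM reports WHERE id = :id",
    "build_audience": "SELECT user_id FROM events WHERE segment = :segment",
}

def invoke(role: str, endpoint: str, params: dict) -> str:
    # Permission check is enforced here, server-side, not in the client.
    if endpoint not in ROLES.get(role, set()):
        raise PermissionError(f"role {role!r} may not call {endpoint!r}")
    query = ENDPOINTS[endpoint]            # KeyError: no open-ended access
    return f"{query} -- bound params: {params}"

print(invoke("editor", "build_audience", {"segment": "churn_risk"}))
```

Because every operation is a named entry in the registry, an agent has no path to actions outside its defined scope.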
Kana employs multiple layers of safeguards to prevent unauthorized or unsafe actions:
Technical safeguards:
- Bounded agent operations: All agent actions are implemented as discrete, pre-defined backend operations with explicit input/output contracts. There is no mechanism for an LLM to execute arbitrary code or access systems outside its defined scope.
- Output validation: Every LLM output is parsed and validated against expected schemas before being used downstream. Malformed or unexpected outputs trigger automatic retry with safe defaults, not silent failures.
- AI-as-a-judge quality gates: A separate evaluation model reviews generated outputs for accuracy, coherence, and fidelity to source data. Outputs that fail evaluation are regenerated, not passed through.
- Anti-hallucination grounding: All LLM prompts include relevant factual context retrieved from the customer's actual data. Prompts explicitly instruct models not to fabricate information. Judge evaluation rubrics specifically flag fabricated claims as failures.
- Coherence safeguards: The system detects and prevents destructive edits — for example, if an LLM revision would drastically reduce content length (indicating a generation error), the revision is rejected.
- Fail-safe defaults: When output parsing or judge evaluation fails entirely, the system defaults to a "fail" state (triggering retry or human review) rather than accepting potentially invalid output.
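The coherence safeguard described above (rejecting a revision that drastically shrinks content) is simple to illustrate. The threshold value and function name below are assumptions for the sketch, not Kana's actual parameters:

```python
# Hypothetical sketch of the coherence safeguard: a revision that drastically
# reduces content length is treated as a generation error and rejected,
# keeping the known-good version (a fail-safe default).
MIN_LENGTH_RATIO = 0.5   # illustrative threshold

def accept_revision(original: str, revised: str) -> str:
    if len(revised) < len(original) * MIN_LENGTH_RATIO:
        return original          # destructive edit rejected
    return revised

doc = "Full campaign report with ten detailed sections covering spend and ROAS."
truncated = "Report."
kept = accept_revision(doc, truncated)        # falls back to the original
good = doc + " Plus an updated ROAS breakdown."
accepted = accept_revision(doc, good)         # normal revision accepted
```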
Operational safeguards:
- SOC 2 compliance: Kana has completed a SOC 2 examination.
- Data segregation: Customer data is segregated from other customers and from external sources. Customer data is not shared with LLM providers for training or fine-tuning.
- Secure API access to LLMs: All communication with third-party LLM providers occurs over encrypted HTTPS connections via their official APIs. No customer data is persisted by the LLM providers under Kana's contractual terms.
- IAPP-certified privacy expertise: Kana's privacy consultant holds CIPP/E and CIPM certifications with over 10 years of experience in SaaS enterprise privacy and regulatory compliance.
Agentic AI. Kana's platform orchestrates specialized AI agents that reason over customer data, plan multi-step workflows, and recommend actions — going beyond single-prompt generative AI. The underlying models are large language models (LLMs) from Anthropic, OpenAI, and Google, accessed via API.
Kana's platform uses a model-agnostic architecture with logical model roles (e.g., planning, evaluation, summarization). When a provider releases a new model version, Kana evaluates it against quality benchmarks before promoting it to production. Model updates are managed centrally by Kana and do not require customer-side changes.
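The model-agnostic role design above can be sketched as a central registry: application code asks for a logical role, and the mapping to a concrete provider and model lives in one place. Provider keys and model names here are placeholders, not Kana's actual configuration:

```python
# Hypothetical sketch of a model-agnostic role registry. Application code
# requests a logical role; the concrete provider/model binding is centralized,
# so promoting a new model version after benchmark evaluation touches only
# this mapping -- no customer-side changes.
MODEL_ROLES = {
    "planning":      {"provider": "anthropic", "model": "model-a"},
    "evaluation":    {"provider": "openai",    "model": "model-b"},
    "summarization": {"provider": "google",    "model": "model-c"},
}

def resolve(role: str) -> dict:
    return MODEL_ROLES[role]

# Promoting a new model version is a single registry update:
MODEL_ROLES["summarization"] = {"provider": "google", "model": "model-c-next"}
promoted = resolve("summarization")["model"]
```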
Kana uses structured, parameterized database queries to retrieve specific, relevant customer data, which is then included as context in LLM prompts. This approach is more controlled than traditional RAG because:
- the data retrieved is deterministic and auditable — defined queries, not fuzzy similarity matches;
- the scope of data provided to the LLM is explicitly bounded by the application logic; and
- outputs are validated and can be traced back to the specific source records used.
This structured grounding approach reduces the risk of the LLM incorporating irrelevant or misleading context compared to conventional RAG architectures.
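The structured-grounding points above can be sketched with a fixed, parameterized query whose results are embedded in the prompt and kept as an audit trail. The table, column names, and helper function are illustrative assumptions, not Kana's actual schema:

```python
import sqlite3

# Hypothetical sketch of structured grounding: a pre-defined, parameterized
# query retrieves the exact records used as LLM context, so retrieval is
# deterministic and auditable (unlike similarity search in conventional RAG).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE campaigns (id TEXT, channel TEXT, roas REAL)")
conn.executemany("INSERT INTO campaigns VALUES (?, ?, ?)",
                 [("c1", "search", 3.2), ("c2", "social", 1.8)])

def grounded_prompt(channel: str):
    # Bounded scope: only rows matching the parameter, via a fixed query.
    rows = conn.execute(
        "SELECT id, roas FROM campaigns WHERE channel = ?", (channel,)
    ).fetchall()
    context = "; ".join(f"{cid}: ROAS {roas}" for cid, roas in rows)
    prompt = f"Using only these records, summarize performance: {context}"
    return prompt, rows          # rows double as the audit trail

prompt, sources = grounded_prompt("search")
```

Because the query is fixed and parameterized, the same input always yields the same context, and every prompt can be traced back to the exact rows it contained.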
Kana provides explainability at the application level through:
- Citation mapping: Generated outputs are linked to specific source data records, so users can see what evidence informed each output.
- On-demand source tracing: Users can select any portion of an AI-generated output and see exactly which underlying data records contributed to it.
- Evaluation transparency: The platform records how many quality-evaluation iterations were required and what issues were identified during generation, providing an auditable record of the AI's self-assessment process.
- Human-readable outputs: All agent outputs are presented in structured, reviewable formats (dashboards, reports, editable documents) rather than opaque model internals.
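The citation-mapping and source-tracing features above can be sketched as sentence-level provenance: each generated sentence carries the IDs of the records that informed it. Record IDs, contents, and the `trace` helper are hypothetical examples, not Kana's data model:

```python
# Hypothetical sketch of citation mapping: each generated sentence carries
# the IDs of the source records that informed it, enabling on-demand tracing.
RECORDS = {
    "rec_101": "Search CPA fell from $42 to $37 in Q3",
    "rec_102": "Social CTR rose 0.4pt after creative refresh",
}

output = [
    {"text": "Search efficiency improved in Q3.",
     "sources": ["rec_101"]},
    {"text": "The new creative lifted social engagement.",
     "sources": ["rec_102"]},
]

def trace(sentence: dict) -> list[str]:
    # User selects a sentence; return the underlying evidence records.
    return [RECORDS[sid] for sid in sentence["sources"]]

evidence = trace(output[0])
```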