Expert Note

AI & Data Marketing: challenges, limits and adoption | 2026 update

Olivier Chubilleau
Updated: Apr 2, 2026 · 15 min read

AI applied to marketing is no longer exploratory. In 18 months, models have matured, integration standards have emerged, and use cases have moved from prototype to production. But with the ramp-up come the real challenges: security, governance, deliverable quality, and data sovereignty.

This note takes stock of what we've observed at EdgeAngel, what works, what's problematic, and what we recommend.

The AI Landscape in 2026

The Models

The LLM market has stabilized around three main players: Google (Gemini), Anthropic (Claude), and OpenAI (GPT).

For our marketing and data use cases, Claude Opus 4.6 is currently the top performer for complex reasoning, data analysis, and structured deliverable production. Gemini 3.1 Pro is very strong within the Google ecosystem (GA4, BigQuery, Ads) and progressing fast. GPT-4o remains relevant but has lost its edge on reasoning tasks.

This ranking shifts every quarter. What shouldn't change is your stance: the model choice should remain an implementation detail, not a structural commitment.

MCP: The Integration Standard

The Model Context Protocol (MCP) has established itself as the standard for connecting LLMs to business tools. Initiated by Anthropic, adopted by Google and OpenAI, it enables models to interact directly with your APIs: BigQuery, Google Ads, GA4, Slack, Asana, Meta Ads, etc.

In practice: instead of copy-pasting data into a chat, the model fetches it, analyzes it, and can act on it. It's the difference between an assistant you feed manually and a collaborator with direct access to your tools.

The ecosystem is still maturing (cloud authentication, per-server permissions, monitoring), but the building block is here and usable in production. A large share of data & AI technology solutions have already created their MCP server to enable LLMs to interact directly with their services.
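To make the pattern concrete, here is a minimal sketch of the tool-call loop that MCP standardizes: the host registers tools, the model emits a call by name, and the host executes it and returns the result. The registry and the get_campaign_spend function are illustrative stand-ins, not the actual MCP SDK API:

```python
# Sketch of the tool-call loop MCP standardizes. All names here
# (the registry, get_campaign_spend) are illustrative, not the real SDK.

TOOLS = {}

def tool(fn):
    """Register a function so the model can call it by name."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def get_campaign_spend(campaign_id: str) -> dict:
    # In a real MCP server this would query Google Ads or BigQuery.
    return {"campaign_id": campaign_id, "spend_eur": 1240.50}

def dispatch(tool_call: dict) -> dict:
    """The host routes a model-emitted tool call to the right function."""
    fn = TOOLS[tool_call["name"]]
    return fn(**tool_call["arguments"])

result = dispatch({"name": "get_campaign_spend",
                   "arguments": {"campaign_id": "brand_fr_2026"}})
print(result["spend_eur"])  # the model receives this instead of a copy-paste
```

The real protocol adds schemas, transport, and permissions on top, but the division of labor is the same: the model decides what to call, your server decides what it is allowed to do.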

Solution Categories: Finding Your Way

The market is fragmented and vendor marketing jargon doesn't help. Here's a framework to categorize the different approaches.

Automation Tools
  • Examples: n8n, Make, Zapier AI
  • Strengths: No-code, fast to deploy
  • Limitations: Limited reasoning, fragile maintenance
  • Best for: Marketing ops, POCs

Enterprise AI Platforms
  • Examples: Gemini in Workspace + Gemini Enterprise Platform (GCP), ChatGPT Enterprise
  • Strengths: Native governance, Google data integration
  • Limitations: Limited advanced customization, variable GCP pricing
  • Best for: Structured teams, CIOs, large accounts

IDE Agent Ecosystem
  • Examples: Antigravity, Cursor, Claude Code, Windsurf
  • Strengths: Maximum power, full customization
  • Limitations: Technical setup, governance to build
  • Best for: Consultants, data analysts, developers

Custom Enterprise
  • Examples: Vertex AI, AWS Bedrock, Azure OpenAI, internalized models (Mistral, Llama)
  • Strengths: Total control, on-premise possible
  • Limitations: High cost, ML skills required
  • Best for: Large groups, high data/ML maturity

Third-party Solutions
  • Examples: Jasper, Copy.ai, HubSpot AI, Salesforce Agentforce
  • Strengths: Integrated UX, no setup, fast adoption
  • Limitations: Black box, limited control, vendor lock-in
  • Best for: Autonomous marketing teams, SMBs

Key Takeaway

These categories are not mutually exclusive. A mature organization typically combines a Gemini Enterprise plan for daily use, an IDE agent for technical profiles, and third-party solutions for specific needs. The question isn't "which tool to choose" but "which architecture to compose."

Focus: The Gemini Enterprise Ecosystem β€” Workspace, Cloud & ADK

In our view, this is the approach that will develop the most over the next 12-18 months, especially for organizations already in the Google ecosystem. However, it's important to understand that there are two distinct levels in Google's offering.

Level 1: Gemini in Google Workspace β€” Everyday AI

Since 2025, Google has integrated Gemini AI features directly into its Workspace plans. No more need for a separate "Gemini Business" add-on β€” the AI assistant is included in Business Standard, Business Plus, and Enterprise subscriptions.

  • In-app assistance: AI-assisted writing in Gmail and Docs, Meet meeting summaries, Sheets data analysis, smart Drive search.
  • Gemini App: an AI chat with access to Gemini models, grounded on your Google data (Drive documents, emails).
  • NotebookLM: document synthesis tool for loading sources, asking questions, and producing sourced analyses.
  • Google Vids: AI-assisted video creation.

Ballpark Pricing

Workspace Business Standard plans start around $14/user/month. Premium AI add-ons are extra (custom pricing by volume). Most AI features are already included from the Standard plan.

Level 2: Gemini Enterprise Platform β€” Enterprise AI at Scale

This is where things shift. Gemini Enterprise is a Google Cloud platform (not just a Workspace add-on). It serves as the AI command center for organizations:

  • Agent deployment: technical teams build specialized agents via the Agent Development Kit (ADK), Google's open-source framework, deployable on Vertex AI Agent Engine.
  • Centralized governance: administrators see and control all agents β€” internally built, Google-provided, or partner-integrated. Permissions, monitoring, DLP, SSO: all native in the admin console.
  • Enterprise MCP: ADK supports MCP servers. Your enterprise agents can connect to BigQuery, Google Ads, GA4, your CRM β€” within a governed framework.
  • Broad adoption: end users interact with agents via simple interfaces (chat, Workspace, mobile). Technical complexity stays with the data/engineering team.

The Major Advantage: Native Integration with Google Data

  • Drive as knowledge base: Gemini can search your documents, presentations, and PDFs on Drive, citing sources directly.
  • BigQuery as analytical source of truth: ADK agents can query BigQuery in natural language without SQL knowledge.
  • NotebookLM as analysis layer: NotebookLM analyses can be shared internally and serve as grounding sources for custom business assistants ("Gems").

Pricing Overview

  • Workspace with built-in AI: from ~$14/user/month (Business Standard).
  • Gemini Enterprise Platform: negotiated with Google Cloud, based on users and services consumed.
  • ADK Agents (Vertex AI Agent Engine): usage-based. ~$0.01/vCPU-hour + model cost (per token).
  • ADK itself is open-source and free. You pay for infrastructure and models.
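Usage-based pricing is easy to reason about with a back-of-envelope model. Only the ~$0.01/vCPU-hour figure comes from the list above; the token price and usage volumes below are assumptions for illustration:

```python
# Back-of-envelope monthly cost for a usage-billed agent.
# VCPU_HOUR_USD comes from the ballpark above; the blended token
# price and the usage figures are assumptions, not Google pricing.

VCPU_HOUR_USD = 0.01          # compute, per the ballpark above
TOKEN_COST_PER_M_USD = 10.0   # assumed blended price per million tokens

def monthly_cost(vcpu_hours: float, tokens_millions: float) -> float:
    return vcpu_hours * VCPU_HOUR_USD + tokens_millions * TOKEN_COST_PER_M_USD

# e.g. an agent active 8h/day, 22 days, consuming 30M tokens/month
print(round(monthly_cost(8 * 22, 30), 2))  # 301.76
```

The point of the exercise: at these orders of magnitude, model tokens dominate infrastructure, so token monitoring matters more than vCPU monitoring.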

Known Limitations

  • Google ecosystem: optimized for Gemini and Google Cloud. AWS, Azure or multi-cloud integration is possible but less seamless.
  • Maturity: ADK is functional but the agent ecosystem is still young.
  • Variable pricing: the GCP portion is usage-based. Without monitoring, costs can surprise.

Focus: The IDE Agent Ecosystem β€” The Technical Cockpit

How It Works

An IDE agent is a development environment (VS Code-like) augmented by an LLM with access to your local context and tools. In an IDE like Antigravity (Google), Cursor, Claude Code (Anthropic) or Windsurf:

  1. You work in the IDE with your files, data, and terminal.
  2. An LLM runs continuously with access to your workspace: reading files, executing commands, browsing the web.
  3. MCP servers connect the LLM to your external tools: BigQuery, Google Ads, GA4, Slack, Asana, Meta Ads.
  4. Skills (structured markdown methodologies) and a knowledge base (persistent memory) give the model your business context.
  5. Workflows orchestrate complex action sequences: audit β†’ diagnosis β†’ recommendations β†’ deliverable.
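The workflow idea in step 5 can be sketched as plain function composition: each stage consumes the previous stage's output. The stage functions here are stand-ins; in an IDE agent each would be an LLM call grounded on your skills and MCP data:

```python
# Sketch of step 5: audit -> diagnosis -> recommendations -> deliverable.
# Each function is a placeholder for an LLM call with your business context.

def audit(account: dict) -> dict:
    return {"findings": [f"CPA up 40% on {account['name']}"]}

def diagnose(audit_out: dict) -> dict:
    return {"causes": ["broad-match expansion"], **audit_out}

def recommend(diag: dict) -> dict:
    return {"actions": ["tighten match types"], **diag}

def deliverable(rec: dict) -> str:
    return f"Findings: {rec['findings']}\nActions: {rec['actions']}"

report = deliverable(recommend(diagnose(audit({"name": "acme_search"}))))
print(report)
```

Keeping stages separate like this is what makes a workflow auditable: you can inspect and validate each intermediate output instead of one opaque final answer.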

Pricing

All these tools offer entry plans at $15-20/month. For real professional use, these plans aren't enough: quotas are too limited for intensive workflows. Aim for the performance plans:

Claude Code (Anthropic)
  • Recommended plan: Max 5x or Max 20x
  • Monthly budget: $100-200/month per user

Cursor
  • Recommended plan: Business
  • Monthly budget: ~$40/month + model consumption

Antigravity (Google)
  • Recommended plan: Teams / Ultra
  • Monthly budget: Variable quotas; plan for the premium tier

Windsurf
  • Recommended plan: Team
  • Monthly budget: Variable by usage

Ballpark

For a consultant or data analyst using an IDE agent daily with a powerful model (Claude Opus 4.6 class), budget €150-200/month for AI tooling. A significant investment, but the productivity gain on analysis, audit, and production tasks is substantial.

Specific Risks

  • Unsupervised actions: an IDE agent can act on your tools. Most require human validation, but the temptation to "approve everything" without reading is a real risk.
  • Access security: configurations contain API keys and credentials. If unprotected, you expose access to your tools.
  • Team sharing: skills, knowledge base, and configs live on each user's machine. Synchronizing a team requires organization.
  • Quotas and cost: these tools are token-hungry. A premium subscription is often required, and costs escalate quickly without clear governance on usage.

Our Take

The IDE agent is the most powerful setup we've tested. But it requires technical maturity and operational discipline that don't suit every profile.

Modular Architecture: The 4 Building Blocks

Regardless of the solution category you choose, the target architecture rests on 4 distinct building blocks. Identifying them clearly lets you swap one block without breaking everything.

1. Data (Modern Data Stack)

The foundation. Without clean, documented, accessible data, your agents have nothing solid to ground on and will hallucinate.

  • Data warehouse: BigQuery, Snowflake, or equivalent.
  • Data pipelines: solid ETL/ELT (Airbyte, Fivetran, dbt).
  • Data catalog: even minimal β€” descriptions that let the LLM understand what it's working with.
  • Naming conventions: standards for campaigns, segments, KPIs. AI amplifies disorder.
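Naming conventions are one of the few things you can enforce mechanically before an LLM ever touches the data. A minimal sketch, assuming a hypothetical market_channel_objective_YYYYMM convention (the exact pattern is yours to define):

```python
import re

# Hypothetical campaign naming convention: market_channel_objective_YYYYMM,
# e.g. "fr_search_leadgen_202604". Any convention an LLM must rely on
# should be checked automatically, not just documented.

CAMPAIGN_PATTERN = re.compile(r"^[a-z]{2}_[a-z]+_[a-z]+_\d{6}$")

def check_names(names: list[str]) -> list[str]:
    """Return the campaign names that violate the convention."""
    return [n for n in names if not CAMPAIGN_PATTERN.match(n)]

print(check_names(["fr_search_leadgen_202604", "Brand FR - test"]))
# → ["Brand FR - test"]
```

Run a check like this in your pipeline and the "AI amplifies disorder" problem shrinks at the source.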

2. The AI Layer (the Provider)

The LLM and its infrastructure. This block must be interchangeable.

  • Today: Claude Opus 4.6 for reasoning, Gemini 3.1 Pro for the Google ecosystem, Flash for quick tasks.
  • Tomorrow: this will change. Your architecture shouldn't be coupled to a single provider.
  • MCP plays a key role: your connectors work regardless of which LLM calls them.

3. Skills & Workflows (Use Cases)

This is what transforms a generic LLM into a business assistant. The mechanism is straightforward: .md files (Markdown), a text format that LLMs interpret natively and with precision.

Each file injects context into the model before it starts working. In practice, when you ask the AI to write an audit, it doesn't start from scratch: it loads your instructions, your methodology, your quality criteria. And it produces a deliverable aligned with your standards, not a generic output.

Four building blocks structure this system:

  • Instructions: your brand voice, writing rules, naming conventions. Everything that defines "how we do things here".
  • Competencies (Skills): your documented methodologies. How to analyze a performance drop, structure an audit, score an ad account. The AI applies your framework, not a generic one.
  • Repeatable actions (Workflows): orchestrated sequences for recurring tasks β€” audit β†’ diagnosis β†’ recommendations β†’ deliverable. Instead of re-explaining everything each time, the workflow triggers and the AI executes.
  • Knowledge base: memory of your clients, projects, past decisions. This is what prevents the AI from starting from zero with every conversation.
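The injection mechanism itself is simple enough to sketch, assuming skills live as .md files in one directory (file names and layout here are illustrative):

```python
import tempfile
from pathlib import Path

def build_context(skill_dir: str, task: str) -> str:
    """Concatenate every skill file, then the task, into one prompt."""
    parts = [f"## {md.stem}\n{md.read_text(encoding='utf-8')}"
             for md in sorted(Path(skill_dir).glob("*.md"))]
    parts.append(f"## Task\n{task}")
    return "\n\n".join(parts)

# Demo with a throwaway skill file
with tempfile.TemporaryDirectory() as d:
    Path(d, "brand_voice.md").write_text("Tone: direct, no jargon.")
    print(build_context(d, "Audit the acme_search account"))
```

Real agent tooling adds selective loading (only the relevant skills per task), but the principle is the same: the model reads your methodology before it reads your request.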

Building these files is a discipline in itself. It's not a one-time setup: a skill gets tested, corrected, and refined through real usage. The quality of your instructions directly determines the quality of outputs. A poorly calibrated skill produces noise. A well-built one augments your work and co-creates with you.

Strategic Asset

This corpus is a strategic company asset. It's maintained, versioned, optimized β€” like your code or data. It's what makes your AI produce different results from a competitor using the same model.

4. Governance & Orchestration

  • Who accesses what: tools, data, actions without human validation.
  • Cost monitoring: consumption tracking, per-agent/user budgets, alerts.
  • Security: access protection, data classification, sharing rules.
  • Quality: systematic human validation, double-checking analyses, challenging recommendations.
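The cost-monitoring bullet above is the easiest one to operationalize. A minimal sketch of per-agent budget alerting, where agent names, budgets, and the usage source are all assumptions (in production the spend data would come from your provider's billing or usage API):

```python
# Per-agent budget alerting sketch. Budgets and agent names are
# hypothetical; spend would come from your provider's billing API.

BUDGETS_USD = {"audit_agent": 300, "reporting_agent": 100}

def over_budget(usage_usd: dict) -> list[str]:
    """Return agents whose month-to-date spend exceeds their budget."""
    return [agent for agent, spent in usage_usd.items()
            if spent > BUDGETS_USD.get(agent, 0)]

print(over_budget({"audit_agent": 180, "reporting_agent": 142}))
# → ["reporting_agent"]
```

Note the default of 0 for unknown agents: an agent nobody budgeted for is flagged on its first dollar, which is usually the behavior you want.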

In Gemini Enterprise, this governance is native (SSO, DLP, admin console). In an IDE agent, it needs to be built β€” which demands more rigor but offers more flexibility.

What We've Learned in 18 Months of Deployment

Input Data Quality Determines Everything

Probably the most important lesson. An LLM connected to your data warehouse (BigQuery, Snowflake, Databricks, etc.) is worthless if tables are poorly documented, fields badly named, and data inconsistent. The minimum prerequisites:

  • Tables with clear column descriptions
  • Respected campaign naming conventions
  • Stable, monitored data pipelines
  • A data catalog, even minimal
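The first bullet is also checkable in a few lines. A sketch, using a plain dict as the schema rather than any specific warehouse API:

```python
# Check that every column carries a description before an LLM is
# pointed at the table. The schema is a plain {column: description}
# dict here, not a specific warehouse client's schema object.

def undocumented_columns(schema: dict[str, str]) -> list[str]:
    """Return column names whose description is empty or missing."""
    return [col for col, desc in schema.items()
            if not (desc and desc.strip())]

schema = {"campaign_id": "Google Ads campaign ID",
          "cost_eur": "",  # missing description
          "conv_value": "Conversion value, EUR, GA4 attribution"}
print(undocumented_columns(schema))  # → ["cost_eur"]
```

Wire a check like this into CI and undocumented columns stop reaching your agents silently.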

Critical Point

AI doesn't fix poor data hygiene. It amplifies it.

Beware the Productivity Illusion

This is a real topic. AI can produce deliverables that look polished: analyses with charts, structured reports, detailed recommendations. The risk is delivering content that has the form without the substance.

A performance report generated in 30 seconds doesn't have the same value as a diagnosis where you've cross-referenced data, formulated hypotheses, and tested multiple leads. AI accelerates work, but it doesn't replace reasoning or perspective.

Be cautious with social media posts claiming "I built an agent that does the work of 5 people in 10 minutes." AI adds real complexity to deployments; it requires taking steps one at a time, otherwise things quickly fall apart.

Sovereignty and Data: Open Questions

Everything you send to a cloud LLM transits through their servers. Enterprise plans offer contractual guarantees. But beyond the contractual, open questions remain:

  • Geopolitics: what happens if EU/US relations evolve? EU-US data transfer frameworks have already been invalidated twice (Safe Harbor in 2015, Privacy Shield in 2020).
  • Regulation: will the European AI Act (progressively in force since 2025) impose new algorithmic transparency requirements?
  • Dependency: if your AI stack relies 100% on a US cloud provider, what's your plan B?

These questions shouldn't block adoption β€” but they deserve to be raised and documented before deploying at scale.

What We Actually Do: Production Use Cases

  • Anomaly detection and monitoring: the model analyzes GA4, Google Ads, or Piano Analytics data via MCP, identifies abnormal variations, and produces an incident report with investigation leads.
  • Deep data analysis: cross-referencing BigQuery data, media reports, and web metrics for multi-dimensional diagnosis.
  • Media account auditing: structural analysis of Google Ads or Meta Ads accounts β€” naming conventions, structure, conversion setup, targeting, bidding.
  • Strategy and recommendations: based on historical data and client context, producing budget, reallocation, and bidding strategy recommendations.
  • Marketing creatives: ad visual mockups, creative brief production, asset generation for Social Ads campaigns.
  • Continuous optimization: analysis loop across media data β†’ web analytics β†’ CRM. AI produces coherent recommendations across the entire acquisition chain.
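The first use case, anomaly detection, can be illustrated with the simplest possible detector: flag any day whose metric sits more than k standard deviations from the rest of the series. A z-score is one basic choice for the sketch, not necessarily what any given tool uses:

```python
import statistics

# Flag values more than k sigma away from the rest of the series.
# A leave-one-out z-score: crude, but enough to show the idea.

def anomalies(series: list[float], k: float = 3.0) -> list[int]:
    """Return indices whose value deviates > k sigma from the others."""
    flagged = []
    for i, v in enumerate(series):
        rest = series[:i] + series[i + 1:]
        mu, sigma = statistics.mean(rest), statistics.pstdev(rest)
        if sigma and abs(v - mu) > k * sigma:
            flagged.append(i)
    return flagged

daily_sessions = [1020, 990, 1015, 1008, 330, 1002, 995]  # day 4 collapsed
print(anomalies(daily_sessions))  # → [4]
```

Production detectors account for seasonality and trend, but the shape is the same: a statistical baseline flags the day, then the model investigates the why via MCP.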

Practically, Where to Start?

If You Haven't Deployed Anything Yet

  1. Start with an enterprise plan (Gemini Business/Enterprise or ChatGPT Team/Enterprise). It resolves SSO, DLP, and compliance questions.
  2. Identify 3 high-value use cases in your daily workflow. Test them for 1 month.
  3. Before scaling up, ensure your data is clean and documented. Non-negotiable prerequisite.

If You Already Have an Enterprise Plan

  1. Move to MCP: connect your data tools (BigQuery, GA4, your CRM) to the LLM.
  2. Start documenting your methodologies as skills. These skills become the brain of your agents.
  3. Establish governance: permissions, cost monitoring, data classification.

If You Want to Go Further

  1. Evaluate the IDE agent model (Cursor, Antigravity, Claude Code, Windsurf) for your technical profiles.
  2. Explore ADK agents for supervised repetitive tasks: automated monitoring, scheduled reporting, alerts.
  3. Build a structured knowledge base on your clients, projects, and decisions. It's the asset that appreciates over time.

Need clarity on your AI strategy?

Each organization has its own data maturity, security constraints, and specific use cases. A pragmatic diagnosis, not an 80-page slide deck. For complex AI missions and large-scale deployments, we partner with Silex, a leading AI engineering firm (MVA ENS Cachan Γ— Γ‰cole Polytechnique).

Contact us