AI in Business: Claude as an Analysis Tool, and What Swiss Data Protection Law Says About It

Two projects, one question. Within a single week I found myself in the same situation twice: someone wants to use AI to analyse real business data and only afterwards wonders whether that is even legally sound.


Two Real-World Examples

πŸ—οΈ
Case Study 1
Site Manager and AI Dashboard
A site manager built a dashboard with Claude to analyse his construction schedule and resource allocation. Weekly plans, resource utilisation, schedule deviations: everything at a glance, automatically analysed.
πŸ›’
Case Study 2
Online Shop Analysis
For one of my customers I integrated an AI-powered analysis into his online shop: order patterns, customer behaviour, high-revenue segments, all directly from live shop data.

Both applications are useful, efficient, and technically straightforward to implement. But both work with personal data, and that is exactly where things get legally interesting.


What Does the Swiss Federal Act on Data Protection (FADP) Say?

The revised Swiss Federal Act on Data Protection (FADP, SR 235.1) has been in force since 1 September 2023. The most important takeaway first:

FDPIC: Β«Update: The existing data protection law is directly applicable to AIΒ», 8 May 2025
The FADP is formulated in a technology-neutral manner. It applies directly and in full to AI-assisted data processing; no additional AI-specific regulation is required.

This sounds reassuring, but it is a double-edged sword: anyone using AI is subject to the same strict requirements as any other form of data processing.

The 5 Most Important FADP Obligations When Using AI

πŸ”
Duty to Inform (Art. 19)
Customers must know that their data is being processed by an AI and for what purpose. For an online shop this means: update the privacy policy and explicitly mention the use of AI.
🀚
Automated Individual Decision (Art. 21)
For automated individual decisions (e.g. automatic per-customer pricing, credit assessments) data subjects have the right to request human review.
πŸ“‹
Data Processing Agreement (Art. 9)
Anyone using an external AI service (such as Claude, GPT-4, etc.) must have a Data Processing Agreement (DPA) in place with the provider. Without a DPA, using customer data is unlawful.
🌍
Cross-Border Data Transfer (Art. 16)
Anthropic (Claude) is a US company. Transferring data abroad is generally permitted, but only when adequate safeguards exist (e.g. EU Standard Contractual Clauses with Swiss Addendum, which the FDPIC accepts for Switzerland).
βš–οΈ
Data Protection Impact Assessment (Art. 22)
Where there is a high risk to data subjects (e.g. large-scale profiling), a prior impact assessment is required by law. If a particularly high residual risk remains, the FDPIC must be consulted.

What a Typical Business Regulates, and What the FADP Actually Requires

Many companies now have an internal AI policy. That is a good start. But anyone who looks closely at such documents will notice a common gap: they govern how employees should behave, but not the legal basis underneath.

A typical policy contains the following:

What internal AI policies typically contain
  • βœ“ Data categories: personal, sensitive, non-sensitive
  • βœ“ List of approved tools (e.g. Microsoft Copilot, DeepL via company login)
  • βœ“ Rule of thumb: use unapproved tools only for "non-sensitive" queries
  • βœ“ Note on transparency towards customers and partners
  • βœ“ General values: trust, integrity, respect

This looks solid. The problem: the policy tells employees what to do, but not why it applies legally, and it leaves the critical questions unanswered.

What such policies typically do not answer
?
Why is a tool "approved"?
What exactly has the provider signed? Is there a DPA under Art. 9 FADP? Who verified whether the Standard Contractual Clauses (SCCs) for US transfers are sufficient?
?
What applies when AI analyses data, not just translates it?
When AI analyses customer behaviour, identifies patterns, or influences resource decisions, profiling (Art. 5 FADP) and potentially the right to human review (Art. 21 FADP) apply.
?
Who is liable in the event of a data protection breach?
Arts. 60–66 FADP provide for fines of up to CHF 250,000 against natural persons. An internal policy without a legal basis protects neither the company nor the responsible individual.
?
Is customer data used to train the model?
Especially with consumer plans (ChatGPT Free, Claude Pro), this is the default setting. A policy that does not address this leaves one of the central questions open.
What a typical policy says, and what the FADP additionally requires:
  • Policy: «Only use approved tools» → FADP: a written data processing agreement with the provider (Art. 9)
  • Policy: «Inform customers when using AI» → FADP: an active duty to inform, including purpose and recipients in the privacy policy (Art. 19)
  • Policy: «Do not enter sensitive data» → FADP: proof that even pseudonymised data cannot be re-identified (Art. 6)
  • Policy: no mention of foreign servers → FADP: documented safeguards for data transfers to the US (Art. 16, SCCs)

The point is not that such policies are worthless. They help raise awareness and prevent obvious mistakes. But they do not replace a legal basis, and they do not close the gaps that the FADP specifically addresses.


The Problem with Claude Pro

Here comes the point that surprises many people, myself included.

⚠️
Claude Pro is not suitable for business customer data
Anthropic's consumer plans (Claude Pro $20/mo., Claude Max $100–$200/mo.) are, according to the terms of service, intended for personal, non-commercial use. For business use with customer data, three critical restrictions apply:
❌
No DPA
No Data Processing Agreement is available, so Art. 9 FADP cannot be fulfilled
⚠️
Training active by default
Inputs may be used for model training. Opt-out is possible but incomplete (safety reviews remain active)
🚫
Terms of Service
Commercial use with customer data violates the Consumer ToS

What Works, and What Does It Cost?

The solution is simpler than expected: the Anthropic API (Developer Platform / Commercial Plan).

How the plans compare against the FADP requirements (Pro / Max vs. Team vs. API):

  • Commercial use permitted? Pro/Max: ✗ · Team: ✓ · API: ✓
  • Data Processing Agreement (Art. 9): Pro/Max: ✗ · Team: ✓ · API: ✓
  • No training on customer data (contractually guaranteed): Pro/Max: ✗ · Team: ✓ · API: ✓
  • SCCs for cross-border transfer: Pro/Max: ✗ · Team: ✓ · API: ✓
  • Cost (approximate): Pro $20 / Max $100–$200/mo. · Team min. $150/mo. · API pay-as-you-go

For small businesses and sole traders, the API with pay-as-you-go is the most cost-effective solution: no monthly base fee, no seat minimum, and all Commercial Terms including the DPA are included.
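Technically, the switch from a consumer plan to the API is small. As a rough sketch, a direct call to the Messages API endpoint looks like the following; the endpoint and the `anthropic-version` header follow Anthropic's public API documentation, while the model name and prompt are purely illustrative:

```python
import json
import urllib.request

API_URL = "https://api.anthropic.com/v1/messages"

def build_request(api_key: str, prompt: str) -> urllib.request.Request:
    """Build a Messages API request; the caller decides when to send it."""
    payload = {
        "model": "claude-sonnet-4-5",  # illustrative model name
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": prompt}],
    }
    headers = {
        "x-api-key": api_key,               # commercial API key, not a Pro login
        "anthropic-version": "2023-06-01",  # required version header
        "content-type": "application/json",
    }
    return urllib.request.Request(
        API_URL, data=json.dumps(payload).encode(), headers=headers, method="POST"
    )

req = build_request("sk-ant-...", "Summarise weekly resource utilisation.")
```

The key point for compliance is not the code but the key: an API key is issued under the Commercial Terms, which include the DPA and the training exclusion discussed above.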


What This Means in Practice: The Two Case Studies

Case Study 1: Construction Schedule Dashboard

Data involved: Construction plans with employee names, subcontractors, schedules, machine allocation. Where natural persons are identifiable, the FADP applies.

What needs to be done:

  • Inform employees and subcontractors (duty to inform)
  • Check whether the AI analysis makes "automated individual decisions" (e.g. automatic resource allocation)
  • Use Anthropic's API plan, not Claude Pro
  • Address AI use in employment contracts and subcontractor agreements

Tip: Construction schedules using only aggregated data (no names, only roles and functions) significantly reduce the data protection workload.
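Such aggregation can be done in a few lines before any data leaves the company. A minimal sketch, assuming a simple list-of-dicts schedule export (the field names are invented for illustration): names are dropped entirely and hours are summed per role and week, so the AI only ever sees functions, not natural persons.

```python
from collections import defaultdict

def aggregate_by_role(entries: list[dict]) -> list[dict]:
    """Sum planned/actual hours per (week, role), dropping employee names."""
    totals = defaultdict(lambda: {"hours_planned": 0, "hours_actual": 0})
    for e in entries:
        key = (e["week"], e["role"])
        totals[key]["hours_planned"] += e["hours_planned"]
        totals[key]["hours_actual"] += e["hours_actual"]
    # Emit role-level rows only; no identifying fields survive.
    return [{"week": w, "role": r, **h} for (w, r), h in sorted(totals.items())]

crew = [
    {"week": 12, "name": "A. Muster", "role": "site foreman",
     "hours_planned": 42, "hours_actual": 45},
    {"week": 12, "name": "B. Beispiel", "role": "crane operator",
     "hours_planned": 40, "hours_actual": 38},
    {"week": 12, "name": "C. Keller", "role": "crane operator",
     "hours_planned": 40, "hours_actual": 41},
]
safe = aggregate_by_role(crew)
```

Whether a role label alone still identifies a person (a one-person role on a small site often does) must still be checked case by case.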

Case Study 2: Online Shop Analysis

Data involved: Order history, purchasing behaviour, customer segments. This is classic personal data, often with a profiling character (Art. 5 let. f FADP).

What needs to be done:

  • Expand the shop's privacy policy: name the AI use, purpose, and recipient (Anthropic as processor)
  • Use the Anthropic API (DPA as data processing agreement under Art. 9)
  • For high-risk profiling: obtain explicit consent and conduct a data protection impact assessment (Art. 22)
  • Anonymise or pseudonymise data before sending it to the AI, where possible

Tip: For pure analytics, it is often worth aggregating customer data before the AI analysis β€” at that point it may no longer fall under the FADP at all.
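In practice, that means collapsing order rows into segment-level statistics before they ever enter a prompt. A sketch under assumed data (the segment labels and field names are made up): per-customer orders go in, and only group-level counts and revenue figures come out.

```python
from collections import defaultdict
from statistics import mean

def segment_summary(orders: list[dict]) -> dict:
    """Collapse per-customer orders into per-segment aggregates.
    The output contains no customer IDs, only group statistics."""
    by_segment = defaultdict(list)
    for o in orders:
        by_segment[o["segment"]].append(o["total_chf"])
    return {
        seg: {
            "orders": len(vals),
            "revenue_chf": sum(vals),
            "avg_order_chf": round(mean(vals), 2),
        }
        for seg, vals in by_segment.items()
    }

orders = [
    {"customer_id": "c-1", "segment": "B2B", "total_chf": 240.0},
    {"customer_id": "c-2", "segment": "B2B", "total_chf": 160.0},
    {"customer_id": "c-3", "segment": "B2C", "total_chf": 35.5},
]
summary = segment_summary(orders)
```

Note the caveat from the FADP itself: aggregation only takes the data out of scope if individuals are genuinely no longer identifiable, which with very small segments is not a given.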


FADP Compliance Checklist

Check before using AI with customer data
βœ“
Is the data personal data (natural persons identifiable)? If yes, the FADP applies.
βœ“
Choose an AI provider with Commercial Terms and a DPA; do not use consumer plans.
βœ“
Update the privacy policy: mention AI use, purpose, and recipient (Anthropic).
βœ“
Anonymise or aggregate data before AI input where full identifiability is not necessary.
βœ“
For high-risk profiling: obtain explicit consent.
βœ“
Ensure the right of access and the right to object for data subjects.
βœ“
Conduct a data protection impact assessment (Art. 22) where high risk is present.

My Conclusion

AI can create real value in business, whether for a construction schedule dashboard or shop analytics. The Swiss FADP does not prohibit this. It does, however, require transparency, an appropriate contract with the AI provider, and care when handling personal data.

The most common mistake: people start with the AI tool they already know from personal use, Claude Pro/Max or ChatGPT Plus, and forget that these consumer plans are neither contractually nor legally adequate for processing business customer data.

The good news: the Anthropic API is pay-as-you-go, includes all the necessary commercial guarantees including a DPA, and contractually prohibits training on customer data. For small businesses it is often cheaper than a flat-rate plan, because you only pay for what you actually use.

AI integration for your business?
I help integrate AI-powered analytics into existing systems in a way that is compliant with the FADP: from technical implementation to the right contract with the AI provider.
Get in touch

Sources

Sources and further reading
  • [1] FDPIC, Update: The existing data protection law is directly applicable to AI, 8 May 2025 (updated 22 August 2025). Federal Data Protection and Information Commissioner, Bern. edoeb.admin.ch
  • [2] Federal Act on Data Protection (FADP), SR 235.1, in force since 1 September 2023. fedlex.admin.ch
  • [3] Anthropic, Consumer Terms of Service and Commercial Terms of Service, as of October 2025. anthropic.com/legal
  • [4] Anthropic, Data Processing Addendum (DPA), available for API and commercial plans. anthropic.com/legal/data-processing-addendum
  • [5] Anthropic, Privacy Policy, section on data use for model training, as of 2025. anthropic.com/legal/privacy
  • [6] Fredric Paul, Anthropic: You can still use your Claude accounts to run OpenClaw, NanoClaw and Co., The New Stack, 2025. thenewstack.io

Fun Fact: The OpenClaw Dispute and What It Reveals About Anthropic's ToS

πŸ’‘
Fun Fact from the field

OpenClaw, NanoClaw and similar personal AI agents illustrate exactly the grey area this article describes: they authenticate with the OAuth token of a Claude Pro or Max account instead of an API key. This makes them affordable, but it sits in a legal grey area.

In early 2025, Anthropic updated its documentation to clarify that using Pro/Max credentials in third-party tools violates the terms of service. The community reaction was fierce, and Anthropic backtracked: "Nothing is changing about how you can use the Agent SDK and MAX subscriptions." The official line since then: personal use is fine; anyone building a business on top of it or processing customer data should use an API key.

That is exactly the point. Using OpenClaw for personal experiments sits within the tolerated zone. Using it to analyse business customer data lands you back at the starting question of this article: no DPA, no training exclusion, no SCC documentation, Consumer ToS. The technology is the same; the legal context is entirely different.


Note: This post provides a practice-oriented overview and does not replace legal advice. For specific data protection questions, consulting a specialist or the FDPIC is recommended.
