
Ship AI features without legal becoming the bottleneck.
Your MSA was written before ChatGPT. Your buyer's procurement team has read the EU AI Act. We rewrite the contracts, assess your risk tier, and sit on redline calls with Fortune 500 counsel — so your deals close in weeks, not quarters.
- ◆ 80+ enterprise MSAs battle-tested
- ◆ 12-day average procurement unblock
- ◆ 40+ AI Act risk assessments
- ◆ Triple compliance stack: SOC 2 · GDPR · AI Act
You're the right fit if any of these is true.
- 01 · Procurement keeps redlining your MSA and you don't know what's actually negotiable.
- 02 · Your product trains on, fine-tunes with, or transmits customer data — and no one has audited the architecture.
- 03 · The EU AI Act is taking effect and you can't tell a board member which risk tier you're in.
- 04 · You're shipping agents, copilots, or generative features on contracts written before any of that existed.
- 05 · A Fortune 500 buyer just sent you a 47-page DPA and you don't know what's standard.
- 06 · Not a fit: pre-revenue, B2C-only, or looking for a rubber stamp on whatever the buyer sent.
— If you want enterprise deals to close in weeks, keep reading.
Your product shape determines your legal surface.
Four AI product archetypes mapped against the five hazard classes that drive your contract stack and compliance posture. Detailed notes on the chatbot column follow the matrix.
| Hazard | Chatbot (conversational Q&A, FAQ-style) | Copilot (embedded assistant that suggests) | Agent (takes actions in customer systems) | Generative (produces text, image, or code output) |
|---|---|---|---|---|
| IP of outputs: who owns what the model produces | ◐ Medium. Outputs usually brief; IP value low but assignment still matters. | ● High. Code, drafts, and generated assets the customer embeds in their work. | ● High. Agent artefacts — emails sent, tickets created, records written. | ●● Critical. Training-set echo + copyrightability of outputs is unsettled. |
| Training data: lawful basis for inputs & fine-tuning | ◐ Medium. Conversation logs may be used for improvement — needs explicit consent path. | ● High. Copilot usage feeds the next fine-tune — unless you wall it off. | ◐ Medium. Agent inputs often richer than copilot — but usually not reused. | ●● Critical. The Schrems II + text-and-data-mining minefield. |
| Hallucination liability: exposure when output is wrong or harmful | ● High. Even Q&A chatbots make up confident nonsense. | ●● Critical. Bad suggestions ship to production. Liability is real. | ●● Critical. Agent sends the email, creates the ticket, issues the refund. | ● High. User expects output accuracy; model doesn't know what accuracy is. |
| AI Act risk tier: likely classification under EU AI Act | ○ Low. Usually limited-risk transparency obligation only. | ◐ Medium. Limited-risk unless deployed in regulated verticals. | ● High. Agents acting on regulated-sector systems usually land high-risk. | ● High. General-purpose AI has its own regime under the AI Act. |
| Agency / authority: when the model acts on the customer's behalf | ○ Low. Chatbot suggests; human decides. | ◐ Medium. Suggestions accepted wholesale become your actions. | ●● Critical. This is the defining agent risk. Scope authority carefully. | ◐ Medium. Lower authority risk unless chained into agent behaviors. |
Chatbot · IP of outputs: Shared or customer-assigned ownership of conversational turns. Pin down whether transcripts become customer confidential information under the MSA. Reference GDPR Art. 4(2) for processing scope and the MSA IP-assignment clause.
Chatbot · Training data: Customer inputs as training data require a separate lawful basis or explicit opt-in. Consider GDPR Art. 6(1)(a) consent vs Art. 6(1)(f) legitimate-interest balancing. Default-off or default-on matters a lot in procurement.
Chatbot · Hallucination liability: Output-liability limitation belongs in the MSA §Limitation of Liability and the AUP. Don't let the customer's paper set the standard — your template should already cap damages and exclude consequential loss for AI output errors.
Chatbot · AI Act risk tier: Most pure chatbots fall under EU AI Act Art. 50 transparency: tell the user they're talking to a machine. No conformity assessment required unless the bot is deployed in a high-risk context (employment, credit, essential services).
Chatbot · Agency / authority: Because the chatbot isn't taking autonomous action, agency risk is minimal. Still, add an AUP clause prohibiting use of chatbot output as the sole basis for legal, medical, or financial decisions.
The four calls we take most weeks.
Specific, recurring, fixable. Each of these has become a reusable playbook after we saw it the fifth time.
MSA redline hell.
Your 2020 MSA doesn't mention AI. Every enterprise deal adds six weeks of buyer redlines. Your sales cycle compounds with each one.
AI training data, unclear GDPR basis.
Inputs flow into fine-tunes with no documented lawful basis. A German DPA or enterprise buyer finds it first. You're negotiating from the wrong side of a breach.
AI Act risk-tier misclassification.
You assumed 'limited risk.' Your feature touches credit, hiring, or healthcare. Actual tier: high-risk. Remediation timeline: 6–18 months of conformity work.
Hallucination liability.
Your model gave bad advice. Your ToS doesn't disclaim. Your MSA liability cap applies. The customer's insurer is in the chain.
Six regimes. Different triggers. Mostly not aligned.
Which regime matters most depends on your stage and buyer mix.
| Market / regime | What triggers it | What it forces | Typical timeline |
|---|---|---|---|
| EU · EU AI Act | Providing or deploying AI with EU users or EU output | Risk-tier classification, transparency, human oversight, logging | 6–24 months per tier |
| EU · GDPR + SCCs | Any EU personal data (inputs, training, outputs) | Lawful basis, DPA, SCCs, transfer impact assessment | Immediate; SCCs in every contract |
| EU · Digital Services Act | Online intermediaries with EU users at scale | Content rules, transparency reporting, notice-and-action | Already in force |
| US · Enterprise DPA (buyer paper) | Every Fortune 500 / EU Global 500 deal | Sub-processor list, audit rights, breach timelines, AI clauses | Blocks the deal until resolved |
| US · State AI laws + SOC 2 | US customers; enterprise security expectations | CO AI Act, NY EEAA, CA ADMT, plus SOC 2 Type II | Rolling; procurement-driven |
| UK · UK AI / data regime | UK users; post-Brexit divergence | UK GDPR, ICO AI guidance, sectoral oversight | Guidance-led, not yet statutory |
Six things. Done weekly. Built for procurement.
MSAs & DPAs
Rewritten for the AI era. Sub-processor language that covers model providers. Breach clocks your ops can hit. Survives enterprise procurement on the first pass.
- MSA, DPA, SLA, ToS, AUP rewrites
- Pre-approved fallback positions for sales
- Joint redline calls with buyer counsel
EU AI Act compliance
Per-product risk-tier classification with written reasoning. Remediation roadmap for anything high-risk or limited-risk. Board-ready documentation.
- Risk-tier classification with written opinion
- Conformity assessment pathways
- Transparency, logging, human-oversight controls
GDPR, SCCs & TIAs
Lawful-basis audit for training data. SCC packages enterprise buyers will accept. Transfer impact assessments that hold up to German and Dutch DPA scrutiny.
- Training-data lawful-basis audit
- SCCs + Schrems II TIAs
- Controller vs processor posture per product
ToS & Acceptable Use
Input/output rights. Prohibited uses (defamation, CSAM, regulated advice). Abuse handling. Model-change language that doesn't break on an OpenAI deprecation.
- Input-output IP + reuse rights
- Prohibited-use catalogue
- Model-change + deprecation clauses
IP — inputs, outputs, training
Who owns the prompt, the output, the fine-tune. How to train on customer data without losing enterprise deals. How to defend against "you trained on our stuff" claims.
- Prompt + output IP assignment
- Customer data fine-tune architecture
- Training-set origin indemnity carve-outs
Product liability & AI insurance
Liability allocation across you, your foundation-model provider, and your customer. Insurance reviews. Indemnity language that procurement will actually sign.
- Liability allocation across the stack
- AI-specific insurance review
- Indemnity language tuned for enterprise
Four phases. Buyer-counsel ready.
From first call to procurement-ready paper in 6 weeks. Quarterly maintenance after.
Diagnose
Contract stack + data flow audit. Product-by-product AI Act screening. Written risk memo.
Rebuild
New MSA, DPA, SLA, ToS, AUP. SCC package. AI Act documentation set.
Deploy
Procurement-ready templates live. Redline playbook for your sales team. We join buyer-counsel calls.
Maintain
Quarterly regulatory updates. New-product reviews. Enterprise deal support on-demand.
Two files we closed.
Anonymised by client request. Every number is real.
Stuck in procurement with a Fortune 500 US insurer for four months. 47-page buyer DPA with AI-specific clauses the buyer's own legal team had never seen on another vendor's paper. Sales cycle stalled, CSM team unable to escalate, competitor was closing deals faster.
We rebuilt the MSA + DPA + SLA stack in 12 business days, incorporating pre-approved fallback positions for the five clauses buyer procurement always challenges. Joined two redline calls with the buyer's counsel and negotiated AI training carve-outs live. Published a redline playbook for the sales team covering the next 10 deals.
Deal signed 11 days after engagement. ARR impact €1.1M. Subsequent three Fortune 500 deals closed in an average of 23 days — down from 94 days pre-rebuild.
“They translated a 47-page enterprise DPA into five live redline positions in 48 hours. That's all I needed.”
Founders assumed their AI feature was 'limited risk.' First enterprise hospital buyer asked for written classification. Actual tier: high-risk under EU AI Act Annex III — medical device decision support. No conformity assessment pathway, no logging, no human-oversight documentation.
Reclassified the feature with a written opinion. Built a 6-month remediation plan: human-oversight controls, audit logging, conformity-assessment pathway via notified body. Rewrote customer contracts to reflect the new obligations and allocated residual risk.
First Tier-1 German hospital buyer signed within the quarter. Three more regulated-sector buyers onboarded. Company is now referenced by their foundation-model provider as an example of proper Article 26 deployer compliance.
“Two firms told us we were 'probably fine.' Kiroptera told us exactly which Annex III entry applied and what to do about it.”
Three ways to work together. All flat-fee.
Pick the commitment level that matches the mandate. All tiers coexist with your existing corporate counsel.
Contract Stack Audit
Diagnostic + rebuild
MSA, DPA, SLA, ToS, AUP reviewed and rebuilt. AI Act screening. 3-week turnaround. Fixed price, no hourly surprises.
- ◆ Full contract-stack audit + rebuild
- ◆ Per-product AI Act risk-tier screening
- ◆ Pre-approved fallback positions for sales
Fractional General Counsel
Embedded with your team
Slack, redline calls, new-feature legal review, buyer-counsel calls. Cap on hours, no partner-rate games. For Series A–B.
- ◆ Embedded with go-to-market + product
- ◆ Redline + buyer-call support on-demand
- ◆ Monthly hour cap, no overage surprises
Quarterly AI Compliance
Regulatory maintenance
Quarterly AI Act reclassification as you ship, GDPR refresh, board-ready compliance memos. For scale-ups past €20M ARR.
- ◆ Quarterly product reclassification
- ◆ Regulatory alert feed for your stack
- ◆ Board-ready compliance memos
We coexist with your corporate counsel and your Big-4. Not a replacement — a specialist layer.
AI Act + GDPR enforcement tiers.
Not every jurisdiction hits with the same intensity. Flagship = aggressive DPA + AI Act supervisory work. Active = regular involvement. On request = via local counsel network.
The practice leads.
Who you'll actually work with — not a partner who shows up for the pitch and disappears into a junior's inbox.

Robert Babić
Founder & Managing Partner
Robert is the founder of Kiroptera Consulting, bringing over a decade of experience in corporate law, blockchain regulation, and international business structuring. He specializes in guiding crypto and Web3 projects through complex legal landscapes.

Maja
Legal Consultant
Maja specializes in international e-commerce law and tax optimization for digital businesses. Her expertise helps clients navigate cross-border regulatory challenges with confidence.

Petra
Legal Consultant
Petra focuses on SaaS and AI business law, helping tech companies structure their contracts, comply with data protection regulations, and navigate the evolving AI regulatory landscape.
Why SaaS founders pick us over corporate counsel or the AI Act panic sellers.
Sprint-speed turnarounds
Redlines back in 48h. Not three weeks.
We match your release cadence. Rush tiers available when the enterprise deal is waiting on a clause change.
Technical fluency
We read your repo, system prompts, and model cards.
You won't spend the call explaining embeddings or retrieval. We've deployed and audited AI systems ourselves.
Works with your procurement counsel
Buyer’s lawyer and ours speak the same language.
No founder sitting on legal calls translating. We take the redline calls directly, escalate clean, and keep your team out of the weeds.
Flat fees. Scoped upfront.
No six-minute increments. No hourly surprises.
Fixed pricing on defined work. Retainers with hour caps. Every scope change written before work begins.
Questions founders ask on the call.
No legalese. If the answer is 'it depends,' we'll tell you what it depends on.
Why hire you instead of relying on our corporate lawyer?
Your corporate lawyer is probably great at corporate work. We do AI and SaaS contracts full-time — the MSAs, the AI Act, the DPAs procurement sends. Most clients keep their existing counsel and bring us in for this layer.
What does it cost?
Contract Stack Audit: €12K–€28K flat. Fractional GC: from €4,800/mo with an hour cap. Quarterly AI Compliance: from €3,200/mo. We quote before we start. No billable-hour theatre.
How fast do you turn work around?
Redlines in 48 hours. A full MSA rebuild in 2–3 weeks. Procurement unblock averages 12 days once we're on the call. Rush tiers available when a deal is bleeding.
Will your templates survive enterprise procurement?
Yes. Our templates have been through Fortune 500 and EU Global 500 procurement. Specifically: we've closed deals with regulated-sector buyers (healthcare, financial services, EU public sector). Our MSA language is recognisable to enterprise counsel.
Can you guarantee AI Act compliance?
We classify each product line under the Act, document the reasoning, and build the remediation plan for anything landing in high-risk or limited-risk. We don't promise safe harbour — no one honestly can. What we deliver is defensible positions with written opinions.
Are we the right size for you?
Right fit: €1M–€100M ARR selling to enterprise. Earlier than that, a template kit is probably enough. Later, you likely have in-house counsel — we slot in as specialists on AI Act and procurement-heavy deals.