
Frequently Asked Questions


KI-Assurance is the product brand of Helvetic AI. We are an independent Swiss AI Assurance Lab that evaluates AI models automatically – for performance, EU AI Act compliance, FINMA validation, and Swiss language requirements. Every evaluation delivers a KIAS Score across 6 dimensions.

No – in none of our handoff modes does your data leave Switzerland. You choose from 4 modes: API key (standard), Docker on your infrastructure (regulated), dedicated hardware on-site (premium), or anonymize-first (privacy-first).

The most affordable entry point is an AI Risk Classification from CHF 3,000. This tells you whether your AI system falls under the EU AI Act high-risk category and which obligations apply. For a full AI Model Evaluation with benchmark results, prices start at CHF 8,000.

An AI Model Evaluation takes 3–8 business days depending on scope. During this time, we evaluate 3–5 AI models with your real data using Inspect AI & Compl-AI, produce an accuracy matrix with confidence intervals, and deliver reproducible benchmark results with documented methodology.

The effort on your side is minimal. In standard mode, you provide an API key – we handle the rest. In Docker mode, you need someone who can start a container. The entire process is designed to minimize your effort.

The KI-Assurance Score (KIAS) is our composite scoring framework across 6 dimensions: accuracy, robustness, fairness, privacy, transparency, and Swiss regulatory alignment. Each dimension is scored 0–100 with confidence intervals and sample sizes. Details on our methodology page.
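As a purely illustrative sketch (the weights and scores below are invented, not our actual scoring code), a composite across the 6 dimensions could be aggregated like this:

```python
# Hypothetical KIAS-style aggregation: six dimension scores (0-100)
# combined into one composite. Equal weighting is an assumption.
scores = {
    "accuracy": 82, "robustness": 74, "fairness": 91,
    "privacy": 88, "transparency": 79, "swiss_alignment": 85,
}
weights = {dim: 1 / len(scores) for dim in scores}  # assumed equal weights

composite = sum(scores[dim] * weights[dim] for dim in scores)
print(round(composite, 1))  # 83.2
```

In practice each dimension score would also carry a confidence interval and sample size, as described on the methodology page.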

Inspect AI is the evaluation infrastructure of the UK AI Safety Institute (MIT License), used by leading AI labs including xAI, with contributions from DeepMind and Anthropic. Compl-AI is the EU AI Act compliance benchmark suite from ETH Zurich, INSAIT, and LatticeFlow AI (arXiv:2410.07959). Our engine combines both with Swiss-Bench, our proprietary Swiss benchmarks.

Swiss-Bench is our proprietary benchmark for Swiss languages (German, French, Italian), legal terminology, financial vocabulary, and domain-specific failure modes. We publish results quarterly as an open-source leaderboard and arXiv publication.

A structured analysis of your AI system according to EU AI Act risk classes (minimal, limited, high, unacceptable). You receive a documented decision tree, a risk matrix, and concrete recommendations. Starting at CHF 3,000, it is the ideal entry point for clarifying which regulatory obligations apply to your system.

We evaluate 3–5 AI models with your real data on Swiss infrastructure using Inspect AI & Compl-AI. You receive an accuracy matrix with confidence intervals, a failure-mode analysis, and reproducible benchmark results with documented methodology. From CHF 8,000.

The premium service for financial institutions: we validate your AI models against FINMA requirements for model risk management. Includes model risk governance, validation report, and documentation for supervisory authorities. From CHF 15,000.

A comprehensive conformity assessment of your AI system against EU AI Act requirements. Technical evaluation, risk management documentation, quality management review, and human oversight mechanisms. You receive an audit-ready report. From CHF 8,000.

AI models change constantly. We automatically re-run your benchmarks whenever a relevant model update occurs and alert you to accuracy drift and compliance changes before they become a problem. Available as a quarterly subscription at CHF 3,000–5,000 per quarter.

You receive: (1) A standardized evaluation report with KIAS Scores, gap analysis, and recommendations. (2) The complete evaluation harness (configuration, seed values, datasets) – you can rerun every test yourself, anytime. (3) A findings call for results interpretation.

We are a technical audit lab, not a consulting firm. Our engine delivers automated, reproducible results – no manual assessments or subjective opinions. Entry from CHF 3,000 (vs. CHF 200,000+ at the Big Four). Every test is repeatable.

Yes, we are fully independent: we have no commercial relationships with any AI model provider. No referral fees, no vendor partnerships, no pay-for-score. Every model is evaluated using the same methodology. Independence is a core principle – read our full independence statement.

Start with an AI Risk Classification (from CHF 3,000) to clarify which regulatory requirements apply to your system. Then follow up with an AI Model Evaluation (from CHF 8,000) with reproducible benchmarks. For financial institutions, we recommend FINMA Validation (from CHF 15,000). Ongoing monitoring keeps your compliance current. See all services on our Services page.

AI hallucinations occur when a model generates plausible-sounding but factually incorrect information – e.g., fabricated court rulings, non-existent regulations, or wrong financial data. A Stanford study (2025) found a 58% hallucination rate in legal AI analysis. FINMA Guidance 08/2024 explicitly names hallucinations as a GenAI risk. We quantitatively measure hallucination rates as part of the KIAS Score and identify topic areas with elevated risk.
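Conceptually, a hallucination rate is simply the share of fact-checked model claims that turn out to be fabricated. The sketch below uses invented grading labels; real grading requires verification against trusted references:

```python
# Sketch: hallucination rate over a set of fact-checked model claims.
# Labels are invented for illustration; actual grading needs verified sources.
graded = ["ok", "ok", "hallucination", "ok", "hallucination", "ok", "ok", "ok"]
rate = graded.count("hallucination") / len(graded)
print(f"{rate:.1%}")  # 25.0%
```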

AI bias occurs when a model systematically disadvantages certain groups – for example in credit decisions, insurance premiums, or hiring screening. The EU AI Act classifies such systems as high-risk. We measure fairness metrics across demographic groups and domain-specific scenarios as part of the KIAS Fairness dimension.
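As a minimal illustration of one widely used fairness metric – demographic parity difference, not necessarily the exact metric in the KIAS Fairness dimension – with invented decision data:

```python
# Sketch: demographic parity difference for a binary decision
# (e.g. loan approval). Decisions and group labels are invented.
def demographic_parity_diff(decisions, groups):
    """Absolute gap in positive-decision rates between groups."""
    rates = {}
    for g in set(groups):
        members = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    vals = sorted(rates.values())
    return vals[-1] - vals[0]

decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_diff(decisions, groups))  # 0.75 vs 0.25 -> 0.5
```

A gap near zero means both groups receive positive decisions at similar rates; large gaps warrant a closer look at the underlying scenarios.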

FINMA Guidance 08/2024 defines 7 supervisory areas for AI: governance, risk identification, data quality, testing & validation, documentation, explainability, and independent review. Our FINMA Validation (P3) evaluates your model against all 7 areas with 30 FINMA-specific scenarios including hallucination stress tests.

Model drift is the gradual degradation of AI performance over time – caused by changing data, model updates, or regulatory changes. The ECB fined banks EUR 1.24M for outdated AML models. Our Monitoring (P5) runs quarterly automated re-evaluations: drift detection, hallucination tracking, and compliance changes.
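In essence, drift detection compares current benchmark results against a stored baseline and flags tasks whose accuracy dropped beyond a tolerance. A simplified sketch (threshold and run data are illustrative assumptions, not our production logic):

```python
# Sketch: flag accuracy drift between two benchmark runs.
# Threshold and run data are invented for illustration.
def drift_alerts(baseline, current, threshold=0.05):
    """Return tasks whose accuracy dropped by more than `threshold`."""
    return {
        task: round(baseline[task] - current[task], 3)
        for task in baseline
        if baseline[task] - current[task] > threshold
    }

baseline = {"legal_qa": 0.91, "fin_terms": 0.88, "de_ch": 0.93}
current  = {"legal_qa": 0.84, "fin_terms": 0.87, "de_ch": 0.92}
print(drift_alerts(baseline, current))  # {'legal_qa': 0.07}
```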

If you use AI systems in business-critical processes – credit decisions, claims processing, customer advisory, legal text analysis – then an independent evaluation is advisable. The EU AI Act requires technical compliance evidence for high-risk systems from December 2027. FINMA already expects independent model validation today. Start with our free Readiness Check to assess your action items.

KI-Assurance is the product brand of Helvetic AI (ai-helvetic.ch). Helvetic AI is the umbrella brand and legal entity (sole proprietorship, Fatih Uenal, PhD). All evaluation, compliance, and monitoring services run under KI-Assurance.