
Security & AI Security

Security as an architectural property.

Security isn't solved at the end of a project — it's baked into the architecture from the very first line. Threat modeling, AI governance, pen-testing and incident response.

Threat Modeling

STRIDE/DREAD analysis, attack surface mapping, risk scoring. Know what you're protecting and from whom.

Threat modeling is not a one-time activity — it’s a living artifact. Every system has an attack surface and every change reshapes it. We perform STRIDE analysis during architecture design, not after production deployment.

How we work: We start with a data flow diagram — where data originates, how it flows, where it is stored. At every trust boundary we identify threats using STRIDE (Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, Elevation of Privilege). Each threat is scored via DREAD and mitigations are prioritized.
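The scoring step above can be sketched in a few lines. This is a minimal illustration of DREAD-style prioritization, not our internal tooling; the threat entries and scores are made up for the example.

```python
from dataclasses import dataclass

# DREAD scores each threat on five 1-10 axes; the average gives a
# comparable risk score used to order the mitigation backlog.
@dataclass
class Threat:
    name: str
    damage: int            # D: how bad is the impact?
    reproducibility: int   # R: how reliably can it be triggered?
    exploitability: int    # E: how little effort does the attack take?
    affected_users: int    # A: how many users are hit?
    discoverability: int   # D: how easy is the flaw to find?

    def dread_score(self) -> float:
        return (self.damage + self.reproducibility + self.exploitability
                + self.affected_users + self.discoverability) / 5

def prioritize(threats: list[Threat]) -> list[Threat]:
    """Highest-risk threats first, ready for the backlog."""
    return sorted(threats, key=lambda t: t.dread_score(), reverse=True)

threats = [
    Threat("SQL injection in search API", 9, 9, 7, 8, 8),
    Threat("Verbose error pages leak stack traces", 3, 10, 9, 5, 9),
]
ranked = prioritize(threats)
```

The ranked list maps directly onto security stories in the backlog: mitigations for the top entries get owners and acceptance criteria first.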

Attack surface mapping covers external APIs, internal services, databases, message brokers, CI/CD pipelines, third-party integrations. We also map the supply chain — dependencies, container images, build pipeline. An attacker looks for the weakest link, not the most obvious one.

Output: A structured threat model with prioritized risks, proposed mitigations and acceptance criteria. Integrated into the backlog — security stories alongside feature stories. Updated with every architectural change.

Tooling: Microsoft Threat Modeling Tool, OWASP Threat Dragon, custom STRIDE workshops. For larger systems — automated threat modeling from IaC (Terraform, Kubernetes manifests).


AI Security & Governance

Prompt injection prevention, data leakage protection, agent guardrails. AI under control.

AI agents are a new attack vector. Classical security addresses authentication, authorization, encryption. AI security adds: prompt injection, data exfiltration via the model, uncontrolled actions, hallucination-driven errors. It is a different discipline.

Prompt injection is SQL injection for the LLM era. An attacker manipulates input so that the agent changes its behavior — ignores the system prompt, reveals internal data, performs an unauthorized action. We defend in multiple layers: input sanitization, output filtering, privileged/unprivileged context separation, canary tokens.
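Two of those layers can be sketched as follows. The patterns and the canary token format are illustrative assumptions; a real deployment layers many more checks on top.

```python
import re
import secrets

# Layer 1: screen user input for common injection phrasings before it
# reaches the model. Pattern lists like this are heuristic, never exhaustive.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior|above) instructions", re.I),
    re.compile(r"reveal (your|the) system prompt", re.I),
    re.compile(r"you are now (in )?developer mode", re.I),
]

def screen_input(user_input: str) -> bool:
    """Return True if the input looks like a prompt-injection attempt."""
    return any(p.search(user_input) for p in INJECTION_PATTERNS)

# Layer 2: plant a canary token in the system prompt. If it ever appears
# in the model's output, the prompt has leaked -> block the response.
def make_canary() -> str:
    return f"CANARY-{secrets.token_hex(8)}"

def output_leaked_canary(model_output: str, canary: str) -> bool:
    return canary in model_output

canary = make_canary()  # embedded in the system prompt at session start
```

Both checks are cheap enough to run on every request, which is the point: defense in depth only works if no layer is skipped under load.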

Data leakage is a silent killer. A model trained on internal data may disclose sensitive information in a response. An agent with database access may return data the user is not authorized to see. We solve this with PII detection on output, RBAC for agent actions, and an audit trail of every interaction.
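A minimal sketch of the PII-detection-on-output idea: a regex pass that redacts matches before the response leaves the agent. The two patterns are illustrative only; production systems combine regexes with NER models and allow-lists.

```python
import re

# Minimal PII redaction pass over agent output.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_pii(text: str) -> tuple[str, list[str]]:
    """Replace detected PII with placeholders; return (clean text, hit types)."""
    hits = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            hits.append(label)
            text = pattern.sub(f"[REDACTED {label}]", text)
    return text, hits
```

The hit types feed the audit trail: every redaction event is logged with the interaction ID, so leakage attempts are visible even when they are blocked.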

AI Governance framework: We define what the agent may and may not do. What data it can read. What actions it can perform. Who approves escalations. A kill-switch for immediate shutdown. Regular red-team exercises specifically for AI systems.

Compliance: EU AI Act categorization, documentation of decision-making processes, bias monitoring, explainability. We prepare organizations for regulatory requirements arriving in 2025–2026.


Penetration Testing

Black-box and white-box testing, vulnerability assessment, code review.

A penetration test is not a checkbox — it is a simulation of a real attack. Our testers think like attackers. They don’t just look for known CVEs, but for business logic flaws, race conditions, and privilege escalation chains that automated scanners won’t find.

Black-box testing: The tester has no information about the system — just like a real attacker. Reconnaissance, enumeration, exploitation. Reveals what an attacker sees from the outside — exposed services, default credentials, information leakage in error messages, IDOR vulnerabilities.

White-box testing: The tester has access to source code, architecture, and configuration. Deeper analysis — security-focused code review (injection flaws, broken auth, insecure deserialization), IaC template review, secrets management analysis.

Scope and methodology: Every test has a clearly defined scope, rules of engagement and communication protocol. We use the OWASP Testing Guide, PTES, and our own methodologies for APIs and microservices. The report includes a reproducible PoC for each finding.

Continuous security testing: A one-time pentest is not enough. We integrate DAST (dynamic application security testing) into CI/CD pipelines. Every deploy passes an automated security scan. Manual pentest 1–2× per year for in-depth analysis.
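The gating step in such a pipeline can be sketched as a small script that fails the build when the scan report contains high-severity findings. The JSON shape here is an assumed, simplified format, not the output of any particular scanner; in practice you map your scanner's report into it.

```python
import json

# Fail the pipeline when the DAST report contains findings at or above
# the configured severity.
SEVERITY_RANK = {"info": 0, "low": 1, "medium": 2, "high": 3, "critical": 4}

def gate(report_json: str, fail_at: str = "high") -> tuple[bool, list[str]]:
    """Return (passed, names of findings that breached the threshold)."""
    findings = json.loads(report_json)["findings"]
    threshold = SEVERITY_RANK[fail_at]
    breaches = [f["name"] for f in findings
                if SEVERITY_RANK[f["severity"]] >= threshold]
    return (not breaches, breaches)
```

Running this as the last pipeline stage means a deploy with a high-severity finding never reaches production by accident, while low-severity noise does not block releases.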


Zero Trust Architecture

Identity as the perimeter. mTLS, micro-segmentation, least privilege.

The perimeter is dead. The classic model of “inside the network = trusted” does not work in the era of cloud, remote work and supply chain attacks. Zero Trust says: never trust, always verify. Every request, every user, every device.

Identity as the new perimeter: Authentication and authorization at every level. Service-to-service communication via mTLS (mutual TLS) — both sides prove their identity. SPIFFE/SPIRE for workload identity. No implicit trust between services.

Micro-segmentation: Network policies at the pod level (Kubernetes NetworkPolicy, Cilium). A service communicates only with those it must. Lateral movement after compromise of a single service is blocked. East-west traffic under control.

Least privilege: RBAC with granular permissions. Just-in-time access for administrators — permissions for 4 hours, not permanently. Regular access reviews. Privileged Access Management (PAM) for critical systems. Service accounts with minimal permissions.
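The just-in-time idea reduces to attaching an expiry to every elevated grant, so access lapses without anyone having to remember to revoke it. A minimal sketch, with illustrative names (real systems sit behind a PAM tool and an approval workflow):

```python
import time

# Just-in-time elevation: a grant carries an expiry; authorization checks
# against it fail automatically once the window closes.
class JITGrant:
    def __init__(self, user: str, role: str, ttl_seconds: int):
        self.user = user
        self.role = role
        self.expires_at = time.time() + ttl_seconds

    def allows(self, user: str, role: str) -> bool:
        return (user == self.user and role == self.role
                and time.time() < self.expires_at)

# Grant an admin role for 4 hours instead of permanently.
grant = JITGrant("alice", "db-admin", ttl_seconds=4 * 3600)
```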

Implementation: We don’t treat Zero Trust as a big-bang project. We start with an inventory — who communicates with whom, what data flows. Then we gradually introduce controls: first monitoring (we see what is happening), then enforcement (we block what shouldn’t happen). Typically 3–6 months to production state.


Incident Response

SIEM, runbooks, on-call processes. When something happens, you know what to do.

Incident response is not improvised. When PagerDuty calls on a Sunday night, you need a runbook, not a brainstorming session. We build incident response processes that work under stress — clear roles, clear steps, clear escalation paths.

SIEM and detection: Central collection of security events from infrastructure, applications, and identity providers. Correlation rules for detecting known attack patterns. Anomaly detection for unknown threats. MTTD (Mean Time to Detect) under 1 hour for critical incidents.
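A correlation rule of the kind mentioned above can be sketched as a sliding-window counter: alert when one source produces too many failed logins inside the window. Thresholds and event shape are illustrative; SIEM platforms express the same logic in their own rule languages.

```python
from collections import defaultdict, deque

# One correlation rule: alert when a single source IP produces N failed
# logins within a sliding time window.
class BruteForceRule:
    def __init__(self, threshold: int = 5, window_seconds: int = 60):
        self.threshold = threshold
        self.window = window_seconds
        self.events: dict[str, deque] = defaultdict(deque)

    def ingest(self, source_ip: str, timestamp: float) -> bool:
        """Feed one failed-login event; return True when the rule fires."""
        q = self.events[source_ip]
        q.append(timestamp)
        # Age out events that fell outside the window.
        while q and timestamp - q[0] > self.window:
            q.popleft()
        return len(q) >= self.threshold
```

Dozens of such rules run over the normalized event stream; each firing becomes an alert with the raw events attached, which is what keeps MTTD low.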

Runbooks for top incidents: Compromised credentials, ransomware, data breach, DDoS, insider threat, supply chain compromise. Each runbook: detection → containment → eradication → recovery → post-mortem. We test with table-top exercises quarterly.

On-call processes: Rotation, escalation matrix, response SLA. PagerDuty/OpsGenie integration. Clear severity levels (SEV1–SEV4) with defined response times. War room protocol for SEV1 incidents. Communication templates for stakeholders.

Post-mortem culture: Blameless post-mortem within 48 hours after every SEV1/SEV2. Root cause analysis, contributing factors, action items with owners and deadlines. Sharing learnings across teams. Goal: the same mistake never happens twice.


Compliance & Audit

GDPR, NIS2, ISO 27001, DORA. Regulation as a checklist, not a nightmare.

Compliance is not about paperwork — it’s about processes. An ISO 27001 certificate on the wall means nothing if there is no functioning ISMS behind it. We help organizations build security processes that pass an audit and actually protect.

NIS2 readiness: The new EU directive expands the scope of regulated entities and tightens requirements. Gap analysis against NIS2 requirements, an implementation roadmap, supply chain security assessment. If you are in a regulated sector (energy, transport, finance, healthcare, digital infrastructure), NIS2 applies to you.

DORA (Digital Operational Resilience Act): For the financial sector. ICT risk management, incident reporting, digital operational resilience testing, third-party risk management. We help with gap analysis, control implementation and audit preparation.

ISO 27001: ISMS implementation from scratch or preparation for recertification. Risk assessment, Statement of Applicability, policies and procedures, internal audits. Pragmatic approach — documentation that serves a purpose, not just exists.

GDPR operationally: Data mapping, DPIA (Data Protection Impact Assessment), breach notification processes, data subject rights (access, erasure, portability). Technical measures: pseudonymization, encryption, access control, audit logging. Privacy by Design integrated into the development process.
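The pseudonymization measure can be sketched with a keyed hash: the same input always maps to the same token (so joins and analytics still work), but reversing it requires a secret key stored separately from the data. A minimal illustration; key management and rotation are the hard part and are simplified away here.

```python
import hashlib
import hmac

# Keyed pseudonymization via HMAC-SHA256. Deterministic per key, so the
# token can replace the identifier consistently across datasets.
def pseudonymize(value: str, key: bytes) -> str:
    return hmac.new(key, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

key = b"load-me-from-a-vault-not-source-code"  # never hardcode in practice
token = pseudonymize("jan.novak@example.com", key)
```

Because the mapping depends on the key, destroying the key amounts to anonymizing the dataset, which is useful when handling erasure requests at scale.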

Security by Design

Security integrated into the architecture from the start — not as a sprint at the end. The threat model is created together with the system design; security review is part of code review.

A practical example: Company A addresses security only before go-live — 40 critical findings, launch delayed by 3 months. Company B has security from day one — the audit completes in a week with 2 minor findings, and the launch is on time.
  • Threat model for every new system
  • Security review as part of the code review process
  • Automated security scans in CI/CD
  • Incident response runbooks
MTTD (detection): <1h
MTTR (response): <1h
Pen-test coverage: 100%
Critical findings post-audit: 0

How we do it

  1. Threat Assessment — We identify assets, threats and vulnerabilities, both technical and process-related.
  2. Penetration Testing & Audit — We simulate real attacks, test defenses and verify compliance.
  3. Remediation & Hardening — We fix identified vulnerabilities and strengthen the security of infrastructure and applications.
  4. Monitoring & Detection — We deploy SIEM, EDR and threat detection for continuous environment oversight.
  5. Continuous Security — Regular re-tests, security awareness training and adaptation to new threats.

When you need security

Typical situations

  1. No threat model — You don’t know what you’re protecting or from whom. Security is intuition, not a system.
  2. AI in production without governance — LLM agents running without rules. Prompt injection, data leakage.
  3. Legacy without an audit trail — Who did what and when? We don’t know. Changes are not tracked.
  4. Compliance pressure — The regulator is knocking, the audit is in a month.

AI Security

AI agents introduce a new class of risks:

  • Prompt injection — Input manipulation that changes agent behavior.
  • Data leakage — Sensitive data in responses. The agent reveals internal information.
  • Unintended write actions — The agent deletes data, sends emails without oversight.
  • Model drift — Response quality degrades over time without measurement.

Our AI Security Framework

  1. RBAC for agents — Defined permissions for what the agent may and may not do.
  2. Input sanitization — Detection and blocking of prompt injection.
  3. Output filtering — PII detection, business logic guardrails.
  4. Audit trail — Every agent action is logged and traceable.
  5. Kill-switch — Immediate agent shutdown upon anomaly detection.
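The kill-switch from point 5 can be sketched as a circuit breaker: it trips manually or after repeated anomalies, and from then on every agent action is refused until a human resets it. Names and the anomaly threshold are illustrative.

```python
# Minimal kill-switch: once tripped, every agent action is refused
# until a human reviews the incident and resets it.
class KillSwitch:
    def __init__(self, anomaly_threshold: int = 3):
        self.threshold = anomaly_threshold
        self.anomalies = 0
        self.tripped = False

    def record_anomaly(self) -> None:
        self.anomalies += 1
        if self.anomalies >= self.threshold:
            self.tripped = True

    def trip(self) -> None:
        """Manual emergency stop."""
        self.tripped = True

    def allow_action(self) -> bool:
        return not self.tripped

    def reset(self) -> None:
        """Called only after human review of the anomalies."""
        self.anomalies = 0
        self.tripped = False
```

Wiring `allow_action()` in front of every tool call gives the agent a single choke point, which is exactly what an emergency shutdown needs.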

How we proceed

  1. Discovery & Threat Modeling — Infrastructure, assets, threats. Risk assessment and gap analysis.
  2. Security Audit — Penetration tests, vulnerability assessment, code review.
  3. Hardening — Zero Trust, SIEM, WAF, network segmentation, access management.
  4. Production Readiness — Security monitoring, alerting, incident response processes.
  5. Continuous improvement — Red team exercises, security culture, awareness.

Frequently asked questions

Where do we start?
With a security assessment — we map the current state, identify risks and propose a roadmap. The first 20% of measures typically cover 80% of risks.

We already do classical security — do we need AI security as well?
Yes. AI introduces new risks — prompt injection, data leakage via the model, uncontrolled agent actions. Classical security is necessary but not sufficient. AI governance defines what an autonomous system is allowed to do.

How long does a penetration test take?
It depends on scope. Web application: 2–5 days. Complex infrastructure: 2–4 weeks. Price from 150K CZK.

Does NIS2 apply to us?
If you are in a regulated sector (finance, energy, healthcare, transport), most likely yes. We'll help with a gap analysis and implementation.

Have a project?

Let's talk about it.

Schedule a meeting