Home
Product overview for the CompOps classification and data protection operations platform.
Build, tune, and package data protection controls with a clearer operating model.
CompOps is an enterprise-focused classification and data protection operations platform; this Home view explains how classification authoring, DLP engineering, insights, and governance fit together inside one product workflow. It is designed to explain the platform, not to imitate a live KPI dashboard.
Turn representative content into reusable classification assets with deterministic authoring steps.
Tune DLP controls against investigative context before they become a long-running source of analyst noise.
Package rulepacks, governance packs, and review outputs in a format suited to controlled rollout.
Model content
Extract candidate patterns and shape the first pass of detection logic.
Validate controls
Check structure, refine signal quality, and review where policies over-fire or miss.
Package outputs
Produce deployable rulepacks, policy definitions, and governance-ready artefacts.
Promote with oversight
Move approved content into guided or automated deployment paths with traceability.
Outputs
SIT rulepacks, DLP policy definitions, governance packs, and review artefacts.
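The sketch below, written in Python, makes the four-stage flow concrete: an artefact such as a SIT rulepack moves from modelling through validation and packaging toward promotion, with each hand-off recorded. The Stage and Artefact names and the advance helper are illustrative assumptions, not CompOps APIs.

```python
from dataclasses import dataclass, field
from enum import Enum


class Stage(Enum):
    """The four stages of the operating model described above."""
    MODEL_CONTENT = "model-content"
    VALIDATE_CONTROLS = "validate-controls"
    PACKAGE_OUTPUTS = "package-outputs"
    PROMOTE = "promote-with-oversight"


@dataclass
class Artefact:
    """A reviewable output (SIT rulepack, DLP policy definition, governance pack)."""
    name: str
    kind: str                      # e.g. "sit-rulepack", "dlp-policy", "governance-pack"
    stage: Stage = Stage.MODEL_CONTENT
    history: list = field(default_factory=list)


def advance(artefact: Artefact, approver: str) -> Artefact:
    """Move an artefact to the next stage, recording who approved the hand-off."""
    order = list(Stage)
    current = order.index(artefact.stage)
    if current == len(order) - 1:
        raise ValueError(f"{artefact.name} is already promoted")
    artefact.history.append((artefact.stage.value, approver))
    artefact.stage = order[current + 1]
    return artefact


# Example: a SIT rulepack moving from authoring toward promotion.
pack = Artefact(name="customer-ids", kind="sit-rulepack")
advance(pack, approver="analyst.a")     # model-content -> validate-controls
advance(pack, approver="engineer.b")    # validate-controls -> package-outputs
print(pack.stage, pack.history)
```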
Challenge
From noisy manual workflows to structured, tuned controls
The challenge area focuses on the implementation gap: teams know what they want to protect, but the path from evidence to tuned controls is fragmented and expensive to sustain.
Manual review loops
Portal edits and analyst workarounds dominate the workflow.
Fragmented evidence
Teams struggle to connect matches, false positives, and content context.
Structured validation
CompOps introduces explicit authoring, validation, and scoring stages.
Tuned control packs
Outputs become reviewable artefacts rather than one-off portal tweaks.
Noisy manual workflow
Common Purview implementations accumulate operational overhead because control logic, evidence, and rollout steps live in different places.
Control tuning depends on repeated manual edits and separate note-taking.
Alert pressure obscures which detections need stricter validation versus broader coverage.
Evidence for pack readiness is hard to preserve across analysts, engineers, and reviewers.
Structured, tuned controls
CompOps concentrates the same work into a smaller number of explicit product workflows with clearer outputs and promotion paths.
SIT authoring, validation, and DLP engineering share the same operating model.
Analytics and insights explain why controls behave the way they do before promotion.
Rulepacks, governance packs, and deployment artefacts stay traceable through review.
Capabilities
Capability map and feature groups
The platform is organised around a few capability lanes and feature groups so teams can see how authoring, engineering, analytics, deployment, and packs connect.
SIT authoring
Generate candidate patterns, dictionaries, and rulepack structures from representative content.
Author reusable classification logic from source material.
DLP engineering
Shape deployable controls, exception handling, and policy settings into governed packages.
Translate classification intent into enforceable controls.
Analytics
Review signal quality, coverage pressure, and tuning impact using product-oriented evidence.
Assess control quality before operational noise compounds.
Insights
Inspect activity patterns and sensitive-data concentration to guide the next tuning decisions.
Support investigation-led improvement of controls and packs.
Deployment
Move approved packs through guided or automated rollout paths with explicit hand-offs.
Promote artefacts without relying on ad hoc release steps.
Governance packs
Organise framework, industry, SIT, and label libraries into a reusable operating inventory.
Keep control design aligned to repeatable pack structures.
Author and validate classifications
Capture the early pipeline from extraction through candidate shaping, structural validation, and review-ready SIT output.
Representative document upload and text extraction
Candidate pattern, phrase, and dictionary generation
Structured validation before rulepack packaging
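As a rough illustration of this early pipeline, the Python sketch below derives first-pass regex and dictionary candidates from representative text and runs a simple structural check before anything is packaged. The token-shape heuristic and the validation rules are assumptions for illustration, not the product's extraction logic.

```python
import re
from collections import Counter


def candidate_patterns(samples: list[str]) -> dict:
    """First-pass candidates from representative text: recurring token shapes
    become regex candidates, recurring words become dictionary candidates."""
    shapes, words = Counter(), Counter()
    for text in samples:
        for token in re.findall(r"\S+", text):
            if re.fullmatch(r"[A-Z]{2,4}-\d{4,8}", token):   # e.g. a reference-number shape
                shapes[r"[A-Z]{2,4}-\d{4,8}"] += 1
            elif token.isalpha() and len(token) > 3:
                words[token.lower()] += 1
    return {
        "regex_candidates": [p for p, n in shapes.items() if n >= 2],
        "dictionary_candidates": [w for w, _ in words.most_common(20)],
    }


def validate_candidates(candidates: dict) -> list[str]:
    """Structural validation before packaging: every regex must compile and
    must not be trivially broad."""
    issues = []
    for pattern in candidates["regex_candidates"]:
        try:
            re.compile(pattern)
        except re.error as exc:
            issues.append(f"{pattern}: does not compile ({exc})")
        if pattern in (r".*", r".+"):
            issues.append(f"{pattern}: matches everything")
    return issues


samples = ["Invoice ref AB-123456 issued to the customer", "Second ref CD-654321 on file"]
candidates = candidate_patterns(samples)
print(candidates["regex_candidates"], validate_candidates(candidates))
```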
Engineer deployable DLP controls
Connect SIT logic to policy decisions so deployment artefacts are easier to inspect, compare, and promote.
Policy import or export workflows
Settings coverage for portal-managed and PowerShell-managed controls
Tenant-ready artefact packaging for controlled rollout
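One way to make import/export artefacts easy to inspect and compare is to diff a currently exported policy definition against the proposed one, so reviewers see the full change set before promotion. The sketch below assumes a simple JSON-like policy structure; the field names are hypothetical and do not mirror the Purview export schema.

```python
import json

# Hypothetical policy definitions; field names are illustrative only.
exported = {"name": "Finance-DLP", "mode": "TestWithNotifications",
            "locations": ["Exchange", "SharePoint"],
            "rules": {"high-volume-pan": {"threshold": 10}}}
proposed = {"name": "Finance-DLP", "mode": "Enforce",
            "locations": ["Exchange", "SharePoint", "Teams"],
            "rules": {"high-volume-pan": {"threshold": 5}}}


def policy_diff(before: dict, after: dict) -> list[str]:
    """Flatten both definitions and report every changed setting, so the
    review covers the whole change set rather than a portal screenshot."""
    def flatten(obj, prefix=""):
        if isinstance(obj, dict):
            for key, value in obj.items():
                yield from flatten(value, f"{prefix}{key}.")
        else:
            yield prefix.rstrip("."), json.dumps(obj)
    old, new = dict(flatten(before)), dict(flatten(after))
    return [f"{key}: {old.get(key)} -> {new.get(key)}"
            for key in sorted(set(old) | set(new))
            if old.get(key) != new.get(key)]


for change in policy_diff(exported, proposed):
    print(change)   # mode, locations, and the rule threshold are reported as changed
```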
Operate reusable pack libraries
Keep common frameworks and industry content in reusable packs so teams start from governed baselines instead of blank forms.
Framework-aligned and sector-specific pack structures
Reusable SIT and label pack inventories
Shared review language for analysts, engineers, and approvers
PSPF
Framework pack for traceable control design, review, and deployment sequencing in government-aligned programmes.
Finance
Industry pack for regulated customer, payment, and operational data patterns with governance-led rollout.
Health
Industry pack for health record handling, sensitive workflow review, and packaging discipline.
Legal
Industry pack for matter-centric documents, privileged content handling, and controlled review patterns.
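A minimal sketch of how such a pack inventory might be represented, assuming a simple framework/industry split; the pack contents listed here are placeholders, not the shipped pack definitions.

```python
from dataclasses import dataclass, field


@dataclass
class Pack:
    """One reusable governance pack; names and fields are illustrative."""
    name: str
    kind: str                 # "framework" or "industry"
    sits: list[str] = field(default_factory=list)
    labels: list[str] = field(default_factory=list)


INVENTORY = [
    Pack("PSPF", "framework", sits=["Protective marking reference"], labels=["OFFICIAL", "PROTECTED"]),
    Pack("Finance", "industry", sits=["Payment card number", "Customer account ID"]),
    Pack("Health", "industry", sits=["Health record identifier"]),
    Pack("Legal", "industry", sits=["Matter reference", "Privilege marker"]),
]


def packs_for(kind: str) -> list[str]:
    """Start from governed baselines instead of a blank form."""
    return [pack.name for pack in INVENTORY if pack.kind == kind]


print(packs_for("industry"))   # ['Finance', 'Health', 'Legal']
```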
Architecture
Security architecture in product terms
The architecture visual keeps the labels precise: session, orchestration, short-lived worker execution, and output artefacts each have a distinct role in the flow.
User session
Operators initiate authoring, review, and packaging actions from the internal application shell.
API / orchestrator
Requests are validated, normalised into explicit workflow stages, and prepared for bounded execution.
Isolated short-lived worker container
Extraction, validation, scoring, and packaging jobs run with constrained scope and limited lifetime.
Output artefacts
Rulepacks, policy exports, pack definitions, and analysis outputs return for human review or promotion.
Bounded execution
Expensive processing is represented as isolated job execution rather than a long-lived shared worker surface.
Explicit hand-offs
Session, orchestration, worker execution, and artefact outputs are separated so responsibilities stay legible.
Review-first outputs
Generated artefacts are intended to be inspected and approved instead of treated as opaque background automation.
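A minimal sketch of the bounded-execution idea, assuming jobs are dispatched as short-lived subprocesses with a constrained input file and a hard timeout; the worker.py script and its arguments are hypothetical, not part of the product.

```python
import subprocess
import sys
import tempfile


def run_bounded_job(stage: str, payload: str, timeout_s: int = 60) -> str:
    """Run one workflow stage (extraction, validation, scoring, or packaging)
    as an isolated, short-lived process instead of a long-lived shared worker."""
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as handle:
        handle.write(payload)                      # constrained input scope
        input_path = handle.name
    completed = subprocess.run(
        [sys.executable, "worker.py", "--stage", stage, "--input", input_path],
        capture_output=True, text=True, timeout=timeout_s,   # limited lifetime
    )
    completed.check_returncode()
    return completed.stdout        # returned as an artefact for human review


# Example hand-off: the orchestrator validates the request, then dispatches it.
# artefact = run_bounded_job("validation", "representative content", timeout_s=120)
```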
Insights
Illustrative analysis views for activity and data location
These visuals are deliberately investigative rather than operational. Sample chart values below are explicitly illustrative indices, not tenant telemetry or incident counts.
Primary review emphasis
Exchange + SharePoint
Illustrative sample showing where investigation effort is currently concentrated.
Location pattern
Collaboration-heavy
Illustrative sample showing why content-location analysis matters before control rollout.
Likely next action
Validate and package
Illustrative sample showing how analysis can inform promotion readiness.
Activity Explorer analysis
Bring matched patterns, user actions, and policy outcomes into one analyst-facing view so rule quality is easier to reason about.
Surface repeated match patterns before analysts normalise noisy detections.
Compare policy behaviour across collaboration and messaging surfaces.
Preserve investigation context alongside the tuning decision.
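As a sketch of the kind of reasoning this view supports, the example below flags patterns whose false-positive share crosses a threshold, using made-up match events; the event fields and threshold are illustrative assumptions, not Activity Explorer data.

```python
from collections import Counter

# Illustrative match events only; these are not tenant telemetry.
events = [
    {"pattern": "customer-id", "surface": "Exchange",   "outcome": "false_positive"},
    {"pattern": "customer-id", "surface": "Exchange",   "outcome": "false_positive"},
    {"pattern": "customer-id", "surface": "SharePoint", "outcome": "confirmed"},
    {"pattern": "payment-card", "surface": "Teams",     "outcome": "confirmed"},
]


def noisy_patterns(events: list[dict], threshold: float = 0.5) -> list[str]:
    """Flag patterns whose false-positive share exceeds the threshold, so
    repeated noise is visible before analysts start ignoring it."""
    totals, false_positives = Counter(), Counter()
    for event in events:
        totals[event["pattern"]] += 1
        if event["outcome"] == "false_positive":
            false_positives[event["pattern"]] += 1
    return [p for p in totals if false_positives[p] / totals[p] > threshold]


print(noisy_patterns(events))   # ['customer-id']
```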
Sensitive data location analysis
Highlight where sensitive content clusters so teams can focus packaging, rollout, and remediation decisions on the most consequential locations.
Compare concentration by surface rather than treating every repository equally.
Show where location pressure should change pack selection or rollout sequence.
Support decisions about framework and industry pack fit.
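A small worked example of concentration by surface, using the same kind of illustrative indices as the visuals above; the numbers are invented and the function is a sketch, not product logic.

```python
# Illustrative indices, not tenant telemetry.
matches_by_surface = {"Exchange": 420, "SharePoint": 310, "Teams": 90, "OneDrive": 60}


def concentration(matches: dict) -> list[tuple[str, float]]:
    """Share of sensitive-content matches per surface, highest first, so
    packaging and rollout effort can follow where the content actually sits."""
    total = sum(matches.values())
    shares = [(surface, count / total) for surface, count in matches.items()]
    return sorted(shares, key=lambda item: item[1], reverse=True)


for surface, share in concentration(matches_by_surface):
    print(f"{surface}: {share:.0%}")   # Exchange: 48%, SharePoint: 35%, Teams: 10%, OneDrive: 7%
```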
Recommendation backlog
Convert investigative findings into a smaller set of explicit follow-up actions instead of scattered review notes.
Queue pack changes that need validation before promotion.
Separate signal-quality work from deployment-readiness work.
Keep recommendations reviewable alongside generated outputs.
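A possible shape for such a backlog, sketched with hypothetical fields that separate the two tracks named above; none of this reflects the product's internal data model.

```python
from dataclasses import dataclass


@dataclass
class Recommendation:
    """One follow-up action derived from an investigative finding."""
    finding: str
    action: str
    track: str    # "signal-quality" or "deployment-readiness"


backlog = [
    Recommendation("customer-id over-fires in Exchange", "tighten pattern proximity", "signal-quality"),
    Recommendation("Finance pack validated", "prepare tenant-ready artefact", "deployment-readiness"),
]


def by_track(items: list[Recommendation], track: str) -> list[str]:
    """Keep the two kinds of work reviewable as separate queues."""
    return [item.action for item in items if item.track == track]


print(by_track(backlog, "signal-quality"))          # ['tighten pattern proximity']
print(by_track(backlog, "deployment-readiness"))    # ['prepare tenant-ready artefact']
```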