AI & Machine Learning · CORTEX
Production-grade computer vision — detection, OCR, document understanding, medical imaging, video analytics — with the precision, edge-deployment options, and regulatory framework regulated industries actually require.
The problem
The pattern across vision projects: a model that hits 96% on the validation set, then drops to 78% under real lighting, real occlusion, real motion blur. A medical-imaging deployment that needs FDA SaMD documentation nobody scoped. A factory-floor quality-control system that requires a data scientist on call because retraining isn't operationalized. An OCR pipeline that handles the happy path but fails on the long tail of document templates production actually sees.
Prosigns ships computer vision engineered against the deployment reality — domain-specific data augmentation curated against your actual operating conditions, edge inference where latency and connectivity demand it, evaluation on a curated test set that includes the failure modes our post-mortem corpus has documented, and the operational tooling to retrain when the data distribution shifts.
Where it ships
Specific applications we’ve built and operated. Not speculative — every example below is grounded in a real shipped engagement.
94%
extraction accuracy
Document understanding
OCR + LLM extraction across structured and unstructured documents — claims, contracts, financial filings, clinical notes. Confidence scoring, structured output, human-in-the-loop review queues (routing sketched below, after these examples).
98.7%
detection rate
Manufacturing quality control
Defect detection, dimensional inspection, process verification on manufacturing lines. Edge inference with sub-50ms decisions; integration with reject-handling and traceability.
94%
sensitivity at threshold
Medical imaging
Detection, segmentation, and triage models for radiology, pathology, ophthalmology. DICOM-native pipelines, FDA SaMD-aware design, on-prem and edge options.
Video analytics
Real-time event detection in retail, security, and operational video. Privacy-aware design — no fingerprinting, no untargeted retention, jurisdiction-aware deployment.
Object detection
Custom YOLO / DETR / Grounding-DINO pipelines for the domain-specific objects your workload actually cares about. Active-learning loops feed labeled data back to retraining.
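A minimal sketch of the confidence-gated review routing named in the document-understanding example above. The field schema and the 0.85 threshold are illustrative assumptions, not a production contract:

```python
from dataclasses import dataclass, field

# Illustrative threshold: fields below this confidence go to human review.
REVIEW_THRESHOLD = 0.85

@dataclass
class ExtractedField:
    name: str          # e.g. "claim_amount" (hypothetical field)
    value: str         # raw extracted value
    confidence: float  # model-reported confidence in [0, 1]

@dataclass
class ExtractionResult:
    accepted: dict = field(default_factory=dict)      # high-confidence fields
    review_queue: list = field(default_factory=list)  # human-in-the-loop

def route_extraction(fields: list[ExtractedField]) -> ExtractionResult:
    """Accept high-confidence fields; queue the rest for human review."""
    result = ExtractionResult()
    for f in fields:
        if f.confidence >= REVIEW_THRESHOLD:
            result.accepted[f.name] = f.value
        else:
            result.review_queue.append(f)
    return result
```

Corrections from the review queue then feed the active-learning loops described under How we engage.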
How we engage
Each phase has a deliverable, an owner, and an acceptance criterion. Not slogans — operating rules.
Discovery walks the deployment site — factory floor, clinic, retail location. We capture real lighting, occlusion, motion, and edge cases before model selection. Data augmentation is calibrated against actual conditions, not generic transforms.
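For illustration, a condition-calibrated pipeline might look like the sketch below, built on the albumentations library; the transform choices and all parameters are placeholders for values measured during an actual site walk:

```python
import albumentations as A

# Hypothetical pipeline calibrated to one site survey: lighting swing,
# conveyor motion blur, and low-light sensor noise observed on the floor.
site_augmentations = A.Compose([
    A.RandomBrightnessContrast(brightness_limit=0.35, contrast_limit=0.25,
                               p=0.5),  # measured lighting variation
    A.MotionBlur(blur_limit=9, p=0.3),  # matched to conveyor speed
    A.GaussNoise(p=0.2),                # night-shift sensor noise
])

# Applied per training image:
# augmented = site_augmentations(image=frame)["image"]
```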
Curated eval set includes the failure modes our post-mortem corpus has documented for the workload. Per-subgroup metrics where equity matters. Eval gates production deployments; regressions block release.
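A minimal sketch of the release-gate idea; the metric floor, regression tolerance, and subgroup names are illustrative assumptions:

```python
def eval_gate(candidate: dict[str, float],
              deployed: dict[str, float],
              floor: float = 0.90,
              max_regression: float = 0.01) -> tuple[bool, list[str]]:
    """Per-subgroup scores in [0, 1]; return (passes, blocking reasons)."""
    reasons = []
    for group, score in candidate.items():
        if score < floor:
            reasons.append(f"{group}: {score:.3f} below floor {floor:.2f}")
        baseline = deployed.get(group)
        if baseline is not None and score < baseline - max_regression:
            reasons.append(f"{group}: regressed {baseline:.3f} -> {score:.3f}")
    return (not reasons, reasons)

# Hypothetical subgroups: any failure blocks the release.
passes, reasons = eval_gate(candidate={"site_a": 0.94, "site_b": 0.91},
                            deployed={"site_a": 0.93, "site_b": 0.92})
if not passes:
    raise SystemExit("Release blocked: " + "; ".join(reasons))
```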
Edge inference (NVIDIA Jetson, AWS Wavelength, on-prem GPU) where latency or connectivity demands it. Cloud inference where throughput and operational simplicity win. We model both options honestly during discovery.
Production drift monitoring against the eval set. Active-learning loops capture borderline cases for human review and retraining. Quarterly model evaluation; redeploy when the new model's eval gate clears.
Common questions
Can you support FDA regulatory requirements for medical-imaging models?
Yes — for diagnostic or treatment-decision-support workloads classified as Software as a Medical Device, we co-pilot with the customer's regulatory affairs team. IEC 62304 lifecycle, traceability matrix, predetermined change-control plans, validation evidence packages aligned to 510(k) or De Novo submissions. We deliver engineering and validation artifacts; the customer typically owns IFU and labeling.
When does edge deployment make sense over cloud?
Latency, connectivity, sovereignty. Edge wins when sub-100ms inference is required (in-line manufacturing QC, autonomous decisions), when connectivity is unreliable (factories, vehicles, remote sites), or when sovereignty rules out cloud (regulated medical, government). Cloud wins on throughput and operational simplicity. We model both options during discovery.
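A back-of-envelope version of that latency argument, with every number an illustrative assumption:

```python
# In-line QC budget: line speed fixes the per-part decision window.
parts_per_minute = 1_200
budget_ms = 60_000 / parts_per_minute   # 50 ms per part, end to end

cloud_round_trip_ms = 40   # assumed regional RTT, before any inference
edge_inference_ms = 18     # assumed quantized model on an edge GPU

print(f"budget {budget_ms:.0f} ms | cloud RTT alone {cloud_round_trip_ms} ms "
      f"| edge inference {edge_inference_ms} ms")
# Cloud spends most of the window on the wire; edge leaves headroom
# for capture, pre-processing, and the reject-handling signal.
```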
How do you handle model drift after deployment?
Production drift monitoring against the curated eval set, with alerts when subgroup performance regresses. Active-learning loops capture borderline cases for human review; retraining cadence calibrated to the workload (quarterly is typical, monthly for fast-moving production lines). Models that regress on the eval set are blocked from deployment.
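A sketch of the drift-alert shape; window size and tolerance are illustrative, and production monitoring runs this per subgroup against the curated eval set:

```python
from collections import deque

class DriftMonitor:
    """Rolling production accuracy compared to the eval-set baseline."""

    def __init__(self, baseline: float, window: int = 500,
                 tolerance: float = 0.03):
        self.baseline = baseline              # score on the curated eval set
        self.tolerance = tolerance            # allowed drop before alerting
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = incorrect

    def record(self, correct: bool) -> bool:
        """Record one human-reviewed prediction; True means alert fires."""
        self.outcomes.append(1 if correct else 0)
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # window not yet full
        rolling = sum(self.outcomes) / len(self.outcomes)
        return rolling < self.baseline - self.tolerance
```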
Can you integrate with our existing cameras and imaging systems?
Yes — DICOMweb, ONVIF cameras, RTSP streams, and most major industrial-vision systems are in our active engagement portfolio. We integrate as primary scope, not phase 2, with explicit fallback for source unavailability.
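For the RTSP case, a minimal capture loop with a reconnect fallback might look like this OpenCV sketch; the retry policy is an illustrative placeholder:

```python
import time
import cv2  # OpenCV; RTSP support depends on the FFmpeg build

def frames(rtsp_url: str, retry_seconds: float = 5.0):
    """Yield frames from an RTSP stream, reconnecting when the source drops.

    Sketch only: production loops also need watchdog timeouts, stale-frame
    detection, and an explicit fallback path when the source stays down.
    """
    while True:
        cap = cv2.VideoCapture(rtsp_url)
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break          # stream dropped; fall through and reconnect
            yield frame
        cap.release()
        time.sleep(retry_seconds)  # back off before reconnecting
```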
How do you handle privacy and demographic equity?
Privacy: jurisdiction-aware retention, no fingerprinting, opt-out paths where applicable, encrypted-at-rest with customer-managed keys for regulated data. Equity: stratified eval across the demographic subgroups relevant to deployment context; per-subgroup metrics surfaced rather than averaged. Subgroup gaps are release-blocking, not backlog items.
What do engagements cost?
Discovery + eval-set curation: 4–6 weeks, $60K–$150K. Production CV pipeline (single use case): 4–8 months, $300K–$1M. Multi-line / multi-site deployments: $1M–$3M. Edge-inference programs at scale: $800K–$2.5M. Managed Services for ongoing operations: $30K–$150K monthly retainer.
Talk to us
A senior engineer plus the CORTEX department lead joins the first call. No discovery gauntlet, no junior reps.