Defect Location Annotation for Warranty and Production Traceability
Pass/fail signals protect the production line; bounding-box annotation records protect the business when warranty claims arrive months later.
A warranty claim lands on your desk six months after shipment. The field return cites a surface crack on the left side panel. Your customer wants to know whether the unit passed inspection. You pull up the inspection log. It says: PASS. That's it. One word. No location. No image. No defect class. Nothing your quality team can actually use.
This is the exact scenario that separates inspection systems that protect the production line from systems that protect the business. Pass/fail signals solve an immediate, synchronous problem. Bounding-box annotation records solve an asynchronous one, sometimes months or years later. Both matter. Most lines only build for one.
Why Pass/Fail Is Not Enough for Traceability
The production floor cares about throughput. A pass/fail signal is fast, binary, and directly actionable: a part fails and the line stops or rejects it; a part passes and it moves downstream. For real-time quality control, that works exactly as designed.
Warranty investigations work on a different clock entirely.
When a field return comes in 90 to 180 days after shipment, the question is no longer "did this part pass?" The questions are: What exactly was inspected on that unit? Where on the part surface did the vision system look? What did it find, and how confident was the model? Was the model version at that timestamp the same one running today?
A pass/fail log cannot answer any of those questions. An annotation record can answer all of them.
We have seen warranty investigations stall for two to three weeks not because inspection data was unavailable, but because the available data was too coarse to be defensible. One-word verdicts require a human engineer to testify from memory. Annotated inspection records let the data speak for itself.
What a Useful Traceability Record Actually Contains
Not all annotation data is created equal. A traceability record that will hold up under a warranty investigation or process improvement review needs specific fields. Here is what matters:
| Field | Why It Matters for Traceability |
|---|---|
| Part serial / barcode | Links every inspection record to a specific physical unit. Without this, traceability is production-window-level at best, not unit-level. |
| Inspection timestamp | Correlates inspection records with shift logs, material lots, and operator assignments. Necessary for root-cause isolation. |
| Station ID | Identifies which physical station ran the inspection. Defects clustering on one station signal fixture or lighting issues, not process drift. |
| Defect class | Distinguishes a surface scratch from a dimensional deviation or a contamination spot. Downstream engineering needs class, not just "defect found." |
| Bounding box coordinates | Maps exact defect location onto the part surface. Spatial clustering reveals tooling wear patterns invisible in aggregate pass/fail rates. |
| Confidence score | Tells engineers how certain the model was. A 0.62 confidence on a borderline call is very different from a 0.97 score. Both might have passed at the same threshold. |
| Model version / checkpoint hash | Establishes which model made the decision. Critical when the model has been retrained between production run and investigation date. |
Seven fields. Not fifty. In our experience, teams that try to log everything end up querying nothing. Keep the schema tight and query it consistently.
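As a concrete sketch of that seven-field schema, here is one way the record could look in Python. The field names, serial format, and checkpoint hash are illustrative, not a prescribed standard:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class InspectionRecord:
    part_serial: str      # links the record to one physical unit
    timestamp: str        # ISO 8601; correlates with shift logs and material lots
    station_id: str       # which physical station ran the inspection
    defect_class: str     # e.g. "surface_scratch", "paint_bubble"
    bbox: tuple           # (x_min, y_min, x_max, y_max) in image pixels
    confidence: float     # model score for this detection
    model_version: str    # checkpoint hash of the model that made the call

# Hypothetical example record for one unit
record = InspectionRecord(
    part_serial="SN-2024-081533",
    timestamp="2024-03-14T09:42:07Z",
    station_id="EOL-CAM-02",
    defect_class="paint_bubble",
    bbox=(412, 180, 468, 231),
    confidence=0.71,
    model_version="a3f9c2e",
)
print(json.dumps(asdict(record)))
```

A flat record like this serializes to a few hundred bytes of JSON, which is why per-unit annotation storage stays cheap.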
Traceability Databases vs. Inspection Pass/Fail Logs
These are two different systems with two different jobs. Confusing them creates gaps that only surface when something goes wrong.
An inspection pass/fail log is optimized for throughput. It writes fast, stores compactly, and feeds the line controller. It answers "what happened in the last hour" at aggregate resolution. Retention is typically short; 30 to 90 days is common.
A production traceability database is optimized for investigation. It writes annotation records at unit level, preserves them for 12 to 36 months depending on industry and customer contract requirements, and indexes by part serial so you can retrieve a single unit's inspection history in milliseconds. The schema is different. The query patterns are different. The retention SLA is different.
Running both from the same table is technically possible but operationally messy. The two workloads compete. A warranty query scanning 18 months of annotation records will slow the real-time pass/fail write path if they share the same storage tier. Separate them early.
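One minimal way to keep the two workloads apart, sketched here with SQLite standing in for whatever stores your line actually uses (table and index names are hypothetical):

```python
import sqlite3

# Real-time path: compact pass/fail log, short retention, aggregate queries.
line_db = sqlite3.connect(":memory:")  # stands in for the line controller's store
line_db.execute("CREATE TABLE passfail (ts TEXT, station TEXT, verdict TEXT)")

# Investigation path: unit-level annotation records, long retention,
# indexed by part serial so one unit's history comes back in milliseconds.
trace_db = sqlite3.connect(":memory:")  # stands in for the traceability database
trace_db.execute("""
    CREATE TABLE inspections (
        part_serial TEXT, ts TEXT, station TEXT,
        defect_class TEXT, bbox TEXT, confidence REAL, model_version TEXT
    )""")
trace_db.execute("CREATE INDEX idx_serial ON inspections (part_serial)")
trace_db.execute("CREATE INDEX idx_ts ON inspections (ts)")
```

Because the stores are separate, an 18-month warranty scan against `trace_db` never contends with the real-time write path into `line_db`.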
The Warranty Claim Scenario, Step by Step
Here is how an investigation actually runs with and without annotation records.
Without annotation records: Field return arrives. Quality team pulls pass/fail log for the production window (assume they can still find it). Confirm unit passed inspection. End of trail. Engineer writes "no defect found at EOL inspection" in the RMA report. Claim goes to legal, customer pushes back, cycle time stretches to 60 days.
With annotation records: Field return arrives with part serial number. Quality team queries traceability database in under 30 seconds. Retrieves timestamp, station ID, all bounding boxes logged for that unit, defect classes, confidence scores, and model version. Surface scratch reported in the field was on the left panel at coordinates consistent with defect class "paint_bubble", logged at confidence 0.71, below the 0.80 rejection threshold in place at that date. Model version confirmed. Engineer now has a defensible record showing what the system saw, what decision it made, and why. Root cause investigation can start immediately instead of three weeks later.
That second scenario is not theoretical. It requires about 4 to 6 KB of additional storage per unit inspection. At 500 parts per shift on a single line, that is roughly 2.5 MB per shift or about 500 MB per year, before compression. Manageable on any modern edge compute node with local SSD. Not a reason to skip annotation storage.
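The "under 30 seconds" retrieval in the second scenario is a single indexed lookup by serial. A self-contained sketch, with hypothetical data matching the scenario above:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE inspections (
    part_serial TEXT, ts TEXT, station TEXT,
    defect_class TEXT, bbox TEXT, confidence REAL, model_version TEXT)""")
db.execute("CREATE INDEX idx_serial ON inspections (part_serial)")
db.execute("INSERT INTO inspections VALUES (?,?,?,?,?,?,?)",
           ("SN-2024-081533", "2024-03-14T09:42:07Z", "EOL-CAM-02",
            "paint_bubble", "412,180,468,231", 0.71, "a3f9c2e"))

# The warranty query: everything the system saw on one returned unit.
rows = db.execute(
    "SELECT ts, station, defect_class, bbox, confidence, model_version "
    "FROM inspections WHERE part_serial = ?",
    ("SN-2024-081533",)).fetchall()
for row in rows:
    print(row)
```

Every field the engineer needs for the RMA report comes back in one query, with no human memory involved.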
Data Retention: How Long and Where
Retention requirements vary by industry and customer contract. General manufacturing typically sees 12 to 24 months. Automotive supplier contracts under IATF 16949 often specify 15 years for safety-relevant components. Medical device manufacturing under 21 CFR Part 820 requires records to be retained for the expected useful life of the device or 2 years from release, whichever is longer.
Do not guess. Pull the retention clause from your customer quality agreements and set your database purge policy to match. Shorter is cheaper. Wrong is expensive.
On storage architecture: keep hot data (last 90 days) on the inspection station's local SSD for fast local query. Cold data (90 days to retention limit) moves to networked storage or cloud archive. Structure the handoff so engineers can query across both tiers from a single interface without knowing which tier holds the record.
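The single-interface handoff between tiers can be as simple as a query function that checks the hot store first and falls through to the archive. A sketch under the assumption that both tiers expose the same table schema (SQLite again stands in for local SSD and networked archive):

```python
import sqlite3

SCHEMA = "CREATE TABLE inspections (part_serial TEXT, ts TEXT, record TEXT)"
QUERY = "SELECT ts, record FROM inspections WHERE part_serial = ?"

hot = sqlite3.connect(":memory:")   # last 90 days on the station's local SSD
cold = sqlite3.connect(":memory:")  # older records on networked/cloud archive
for tier in (hot, cold):
    tier.execute(SCHEMA)
cold.execute("INSERT INTO inspections VALUES (?,?,?)",
             ("SN-2023-044102", "2023-11-02T14:10:33Z",
              '{"defect_class": "surface_scratch"}'))

def query_unit(serial):
    """One query interface over both tiers: hot first, then the archive.
    The engineer never needs to know which tier holds the record."""
    rows = hot.execute(QUERY, (serial,)).fetchall()
    return rows if rows else cold.execute(QUERY, (serial,)).fetchall()
```

Here a 14-month-old record resolves transparently from the cold tier even though the hot tier has long since purged it.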
How Quality Engineers Actually Use Annotation Data
Warranty investigation is the most visible use case. It is not the only one.
Process improvement reviews use bounding-box location data to identify spatial clustering. If 80% of surface-scratch detections over a six-week window cluster in the upper-right quadrant of the part, that is a fixturing problem, not a random process variation. Pass/fail rates show you a trend. Coordinates show you a location. Location tells you where to look for root cause.
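Spatial clustering analysis like that can start with something as simple as bucketing bounding-box centers into quadrants. A minimal sketch, using made-up boxes on a hypothetical 1000x800 px part image (image coordinates, so smaller y means higher on the part):

```python
from collections import Counter

def quadrant(bbox, part_w, part_h):
    """Bucket a bounding box into a quadrant by its center point."""
    cx = (bbox[0] + bbox[2]) / 2
    cy = (bbox[1] + bbox[3]) / 2
    horiz = "right" if cx >= part_w / 2 else "left"
    vert = "upper" if cy < part_h / 2 else "lower"
    return f"{vert}-{horiz}"

# Hypothetical surface-scratch detections pulled from six weeks of records
boxes = [(700, 50, 760, 110), (820, 120, 870, 160),
         (650, 90, 700, 140), (100, 600, 150, 650)]
counts = Counter(quadrant(b, 1000, 800) for b in boxes)
print(counts.most_common())  # upper-right dominates -> inspect the fixture
```

A skewed count like this is the kind of signal aggregate pass/fail rates can never surface.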
Model revalidation also uses annotation history. When you retrain the vision model or bump a detection threshold, you want to run the new model parameters against historical annotation records to estimate how the decision boundary shift would have affected past production. That requires stored bounding boxes and confidence scores. Stored pass/fail records alone cannot support this.
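The threshold-replay part of revalidation is straightforward once confidence scores are stored. A sketch with made-up history, estimating how many past passes a tightened threshold would have flipped:

```python
# Stored (confidence, passed_at_the_time) pairs from historical annotation
# records; values here are hypothetical.
history = [
    (0.71, True), (0.79, True), (0.83, False), (0.62, True), (0.97, False),
]

old_threshold, new_threshold = 0.80, 0.75  # proposed tightening

# Units that passed under the old threshold but would now be rejected.
flipped = sum(1 for conf, passed in history
              if passed and conf >= new_threshold)
print(f"{flipped} previously passed unit(s) would now be rejected")
```

The same replay is impossible against a pass/fail log, because the score that the decision hinged on was never kept.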
Honestly, some of the most useful work quality engineers do with annotation data is not reactive at all. It is looking at the distribution of confidence scores for a specific defect class across a quarter. A cluster of scores between 0.78 and 0.82 right at your rejection threshold tells you the model is uncertain about a particular defect type. Fix that before it becomes a warranty problem.
Implementation Notes for Mid-Size Manufacturers
If you are running a single-line operation or managing a small quality team, the annotation logging architecture does not need to be complex. It needs to be consistent.
At minimum, every inspection event should write a JSON record to a local database before sending the pass/fail signal to the line controller. Schema validation catches missing fields before the record is written. Batch uploads to a central server can run every 30 minutes during shift breaks. Full real-time sync is nice to have; reliable local writes are the actual requirement.
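The write-then-signal ordering and the schema check can be sketched in a few lines. The `REQUIRED` set mirrors the seven fields above; the 0.80 threshold is the one from the warranty scenario, and `send_verdict` is a hypothetical stand-in for the line-controller interface:

```python
import json
import sqlite3

REQUIRED = {"part_serial", "timestamp", "station_id", "defect_class",
            "bbox", "confidence", "model_version"}

db = sqlite3.connect(":memory:")  # stands in for the station's local database
db.execute("CREATE TABLE inspections (record TEXT)")

def log_then_signal(record, send_verdict):
    # Schema validation catches missing fields BEFORE the record is written.
    missing = REQUIRED - record.keys()
    if missing:
        raise ValueError(f"annotation record missing fields: {sorted(missing)}")
    # Durable local write first; the pass/fail signal goes out only after.
    db.execute("INSERT INTO inspections VALUES (?)", (json.dumps(record),))
    db.commit()
    send_verdict(record["confidence"] < 0.80)  # below threshold -> pass

log_then_signal({
    "part_serial": "SN-2024-081533", "timestamp": "2024-03-14T09:42:07Z",
    "station_id": "EOL-CAM-02", "defect_class": "paint_bubble",
    "bbox": [412, 180, 468, 231], "confidence": 0.71, "model_version": "a3f9c2e",
}, send_verdict=lambda passed: print("PASS" if passed else "FAIL"))
```

Batching the uploads to the central server is then just a periodic sweep over this local table; the line never waits on the network.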
Index by part serial and timestamp. Add defect class as a secondary index if your team queries by defect type regularly. Keep compression on by default; the roughly 500 MB per year per line compresses to under 150 MB. Negligible.
The gap between pass/fail logging and annotation logging is not an engineering complexity gap. It is mostly a schema decision made early in system design. Make the right call before you go to production, because retrofitting traceability after a warranty claim has already arrived is not a good use of anyone's time.
Practical note: When scoping your annotation record schema, ask your quality engineer what report they would need to write if a major customer called tomorrow with a warranty claim on last quarter's production. Build the schema to answer that report. Everything else is optional.
Summary
Pass/fail signals are necessary. They keep the line moving. Bounding-box annotation records are the layer that makes inspection data defensible after the part has left the building.
Seven fields. Correct retention policy. Separate hot and cold storage. Index on serial and timestamp. That is the floor for a traceability database that will hold up under a real warranty investigation.
The storage cost is trivial. The investigation cost of not having the data is not.
Want to see how Eolvision structures annotation logging for production traceability? Request a demo and we can walk through the data schema and retention setup for your specific line configuration.