Choosing the right partner for annotating defects in solar panel imagery can significantly influence the success of your AI models, diagnostics, and maintenance planning. A smart and practical way to evaluate a potential provider is to test them on a dataset you’ve already analyzed internally — ideally a solar plant where you already know the defect types, locations, and ground truth outcomes.
This allows you to objectively measure how well the provider understands your requirements, handles domain-specific challenges, and delivers reliable results.
That’s exactly what we did at PV Magic.
We selected a solar plant that we had already annotated and validated through external QA and real-world feedback. Then, we asked two additional annotation vendors to process the same dataset, following their usual workflows and tools. This gave us the perfect opportunity to directly compare their outputs with our own and evaluate their performance in a realistic scenario.
The Setup
We selected a representative dataset of 1052 solar panel images containing a mix of common defects — such as hotspots, microcracks, and delamination. These images were then independently annotated by:
– PV Magic (our in-house team)
– Provider A
– Provider B
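Each provider's output was then scored against our validated ground truth. For context, here is one plausible way such a comparison can be scored. The sketch below is a minimal illustration, assuming annotations are axis-aligned bounding boxes with class labels and that a ground-truth defect counts as "found" when a provider box of the same class overlaps it with IoU ≥ 0.5; the Annotation class, field names, and threshold are illustrative assumptions, not the exact procedure used in this study.

```python
from dataclasses import dataclass

# Hypothetical annotation record: an axis-aligned box plus a defect class.
@dataclass
class Annotation:
    x1: float
    y1: float
    x2: float
    y2: float
    label: str  # e.g. "hotspot", "microcrack", "delamination"

def iou(a: Annotation, b: Annotation) -> float:
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a.x1, b.x1), max(a.y1, b.y1)
    ix2, iy2 = min(a.x2, b.x2), min(a.y2, b.y2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a.x2 - a.x1) * (a.y2 - a.y1)
    area_b = (b.x2 - b.x1) * (b.y2 - b.y1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def count_found(ground_truth: list[Annotation],
                provider: list[Annotation],
                iou_threshold: float = 0.5) -> tuple[int, int]:
    """Greedily match provider boxes to ground truth; return (found, missed)."""
    unmatched = list(provider)
    found = 0
    for gt in ground_truth:
        best = max(unmatched, key=lambda p: iou(gt, p), default=None)
        if best is not None and iou(gt, best) >= iou_threshold and best.label == gt.label:
            found += 1
            unmatched.remove(best)  # each provider box can match at most one defect
    return found, len(ground_truth) - found
```

With each provider's annotation export parsed into Annotation lists, a call like count_found(ground_truth, provider_boxes) would yield the "found" and "not found" counts reported in the table below.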
Comparison Table
| | Ground truth | PV Magic | Provider A | Provider B |
| --- | --- | --- | --- | --- |
| Defects found | 44 | 43 | 30 | 20 |
| Defects not found | 0 | 1 | 14 | 24 |
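Expressed as recall against the 44 ground-truth defects, the table works out as follows (a quick computation from the counts above):

```python
# Ground-truth defect count and per-provider detections, from the table above.
GROUND_TRUTH_DEFECTS = 44
detections = {"PV Magic": 43, "Provider A": 30, "Provider B": 20}

for provider, found in detections.items():
    recall = found / GROUND_TRUTH_DEFECTS
    print(f"{provider}: {found}/{GROUND_TRUTH_DEFECTS} defects found "
          f"(recall {recall:.1%}, {GROUND_TRUTH_DEFECTS - found} missed)")
```

That is roughly 97.7% recall for PV Magic, 68.2% for Provider A, and 45.5% for Provider B. Note that the table tracks missed defects only, so this is strictly a recall comparison; false positives would need a separate count.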
Key Findings
– PV Magic delivered the most accurate and consistent annotations, thanks to a rigorous QA process and domain-trained experts.
– Provider A performed well in detecting major issues but missed finer details.
– Provider B showed inconsistent labeling, which could hinder model training or defect classification at scale.
– From a cost-performance perspective, PV Magic remained highly competitive.
– Note that what counts as a defect can also depend on the temperature difference relative to normal operation: a threshold requiring a larger ΔT before flagging will naturally yield fewer annotated defects (see the sketch below).
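To make that last point concrete, a hotspot rule is often expressed as a minimum temperature delta (ΔT) over normal operating temperature, and raising that threshold directly shrinks the defect count. A minimal sketch with made-up numbers; the ΔT values and thresholds below are purely illustrative, not taken from our dataset:

```python
# Illustrative temperature deltas (°C above normal operation) for candidate regions.
candidate_delta_t = [2.1, 3.8, 5.5, 7.2, 9.0, 12.4, 15.1, 20.3]

# The same candidates annotated under two hypothetical ΔT thresholds.
for threshold in (5.0, 10.0):
    defects = [dt for dt in candidate_delta_t if dt >= threshold]
    print(f"ΔT ≥ {threshold:.0f} °C: {len(defects)} annotated defects")

# ΔT ≥ 5 °C  -> 6 annotated defects
# ΔT ≥ 10 °C -> 3 annotated defects
# A stricter threshold yields fewer annotated defects for the same imagery.
```

This is one reason raw defect counts can differ between otherwise competent annotation teams, and why the scoring criteria should be agreed on before a comparison like this one.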
Lessons Learned
This comparison reinforced the critical value of domain expertise and annotation quality in solar diagnostics. Accurate annotations not only improve AI performance but also influence real-world outcomes like repair decisions and system reliability.
What’s Next
We are now expanding our efforts to incorporate AI-assisted pre-labeling, semi-automated validation, and richer defect taxonomies. Our mission is to set the standard for solar panel defect annotation — and help our partners extract actionable insights from every pixel.