Metrics for text extraction evaluation results.
`confidenceMetrics[]`

object (`ConfidenceMetrics`)

Metrics that have confidence thresholds. A precision-recall curve can be derived from them.

`confusionMatrix`

object (`ConfusionMatrix`)

Confusion matrix of the evaluation. Only set for Models where the number of AnnotationSpecs is no more than 10. Only set for ModelEvaluations, not for ModelEvaluationSlices.
JSON representation:

```
{
  "confidenceMetrics": [
    {
      object (ConfidenceMetrics)
    }
  ],
  "confusionMatrix": {
    object (ConfusionMatrix)
  }
}
```
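Since each `ConfidenceMetrics` entry pairs a threshold with a recall and precision value, the precision-recall curve mentioned above is just these pairs ordered by threshold. A minimal sketch (the sample metric values below are hypothetical, not from a real evaluation):

```python
# Derive precision-recall curve points from a confidenceMetrics array.
# Each entry mirrors the ConfidenceMetrics JSON shape documented below.

def pr_curve(confidence_metrics):
    """Return (recall, precision) pairs ordered by ascending threshold."""
    points = sorted(confidence_metrics, key=lambda m: m["confidenceThreshold"])
    return [(m["recall"], m["precision"]) for m in points]

# Hypothetical evaluation output: raising the threshold trades recall
# for precision.
metrics = [
    {"confidenceThreshold": 0.9, "recall": 0.40, "precision": 0.95},
    {"confidenceThreshold": 0.5, "recall": 0.70, "precision": 0.85},
    {"confidenceThreshold": 0.1, "recall": 0.90, "precision": 0.60},
]

curve = pr_curve(metrics)
```

The lowest threshold yields the high-recall, low-precision end of the curve; plotting `curve` directly gives the precision-recall tradeoff.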
ConfidenceMetrics
`confidenceThreshold`

number

Metrics are computed with the assumption that the Model never returns predictions with a score lower than this value.

`recall`

number

Recall (True Positive Rate) for the given confidence threshold.

`precision`

number

Precision for the given confidence threshold.

`f1Score`

number

The harmonic mean of recall and precision.
JSON representation:

```
{
  "confidenceThreshold": number,
  "recall": number,
  "precision": number,
  "f1Score": number
}
```
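As a quick check on the `f1Score` field, the harmonic mean of precision and recall can be computed directly (the input values here are illustrative, not from a real evaluation):

```python
# f1Score = harmonic mean of precision and recall:
#   f1 = 2 * precision * recall / (precision + recall)

def f1_score(precision, recall):
    """Harmonic mean of precision and recall; 0.0 when both are zero."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Illustrative values: precision 0.8, recall 0.6 give f1 ≈ 0.686,
# pulled toward the smaller of the two as a harmonic mean always is.
f1 = f1_score(0.8, 0.6)
```

Unlike an arithmetic mean, the harmonic mean heavily penalizes an imbalance between the two, so a model cannot score well on `f1Score` by maximizing only one of precision or recall.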