Unverified commit cacef4ed, authored by Jen Looper and committed by GitHub

Merge pull request #235 from RyanXinOne/ms-main

Correct the definitions of precision and recall in 2-4-Logistic.
parents d6e61a30 01109e7d
@@ -237,9 +237,9 @@ As you might have guessed it's preferable to have a larger number of true positi
 Let's revisit the terms we saw earlier with the help of the confusion matrix's mapping of TP/TN and FP/FN:
-🎓 Precision: TP/(TP + FN) The fraction of relevant instances among the retrieved instances (e.g. which labels were well-labeled)
+🎓 Precision: TP/(TP + FP) The fraction of relevant instances among the retrieved instances (e.g. which labels were well-labeled)
-🎓 Recall: TP/(TP + FP) The fraction of relevant instances that were retrieved, whether well-labeled or not
+🎓 Recall: TP/(TP + FN) The fraction of relevant instances that were retrieved, whether well-labeled or not
 🎓 f1-score: (2 * precision * recall)/(precision + recall) A weighted average of the precision and recall, with best being 1 and worst being 0
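For anyone double-checking the corrected definitions above, here is a minimal sketch that recomputes the three metrics directly from confusion-matrix counts and cross-checks them against scikit-learn. The counts are made-up illustration values, not taken from the lesson's data, and scikit-learn/NumPy availability is assumed (as in the lesson's notebooks):

```python
# Minimal sanity check of the corrected precision / recall / f1 definitions.
# TP, FP, FN, TN below are hypothetical counts, purely for illustration.
import numpy as np
from sklearn.metrics import precision_score, recall_score, f1_score

TP, FP, FN, TN = 22, 4, 13, 72

precision = TP / (TP + FP)  # relevant instances among the retrieved instances
recall = TP / (TP + FN)     # relevant instances that were actually retrieved
f1 = 2 * precision * recall / (precision + recall)

# Rebuild label arrays with exactly these counts and compare with scikit-learn.
y_true = np.array([1] * TP + [0] * FP + [1] * FN + [0] * TN)
y_pred = np.array([1] * TP + [1] * FP + [0] * FN + [0] * TN)

assert np.isclose(precision_score(y_true, y_pred), precision)
assert np.isclose(recall_score(y_true, y_pred), recall)
assert np.isclose(f1_score(y_true, y_pred), f1)
print(f"precision={precision:.3f}  recall={recall:.3f}  f1={f1:.3f}")
```

With these counts, swapping FP and FN in either formula (as in the lines being removed) makes the asserts fail, which is exactly the confusion this merge corrects.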
@@ -245,9 +245,9 @@ Let's revisit the terms we saw earlier with the help of the confusion matrix
 > NB: true negative
 > NP: false negative
-🎓 Precision: PB/(PB + NP) The fraction of relevant data points among all the data points (i.e. which data were labeled correctly)
+🎓 Precision: PB/(PB + PP) The fraction of relevant data points among all the data points (i.e. which data were labeled correctly)
-🎓 *Recall*: PB/(PB + PP) The fraction of relevant data points that were retrieved, whether labeled correctly or not.
+🎓 *Recall*: PB/(PB + NP) The fraction of relevant data points that were retrieved, whether labeled correctly or not.
 🎓 *f1-score*: (2 * precision * *recall*)/(precision + *recall*) A weighted average of precision and *recall*, where 1 is good and 0 is bad.
@@ -238,9 +238,9 @@ As you may have guessed, it is preferable to have a larger number of true positives
 The terms seen earlier are revisited with the help of the confusion matrix's mapping of TP/TN and FP/FN:
-🎓 Precision: TP/(TP + FN) The fraction of relevant instances among the retrieved instances (e.g. which labels were well labeled)
+🎓 Precision: TP/(TP + FP) The fraction of relevant instances among the retrieved instances (e.g. which labels were well labeled)
-🎓 Recall: TP/(TP + FP) The fraction of relevant instances that were retrieved, whether well labeled or not
+🎓 Recall: TP/(TP + FN) The fraction of relevant instances that were retrieved, whether well labeled or not
 🎓 f1-score: (2 * precision * recall)/(precision + recall) A weighted average of precision and recall, where the best is 1 and the worst is 0
@@ -238,9 +238,9 @@ Seaborn offers some clever ways to visualize your data. For example, you can
 Let's revisit the terms we saw earlier with the help of the confusion matrix's mapping of TP/TN and FP/FN:
-🎓 Precision: TP/TP+FN) The fraction of relevant instances among the retrieved instances (e.g. which labels were well labeled)
+🎓 Precision: TP/(TP + FP) The fraction of relevant instances among the retrieved instances (e.g. which labels were well labeled)
-🎓 Recall: TP/(TP + FP) The proportion of relevant instances that were retrieved, whether well labeled or not
+🎓 Recall: TP/(TP + FN) The proportion of relevant instances that were retrieved, whether well labeled or not
 🎓 F1-score: (2 * precision * recall)/(precision + recall) A weighted average of precision and recall, with 1 being the best and 0 the worst
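Every one of these hunks leans on the confusion matrix's TP/TN/FP/FN mapping, so a second small sketch (again with hypothetical labels) shows how scikit-learn arranges those counts and reports the same three metrics; `confusion_matrix` and `classification_report` are standard scikit-learn utilities, not something introduced by this merge:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, classification_report

# Hypothetical labels, purely for illustration.
y_true = np.array([0, 0, 0, 1, 1, 1, 1, 0, 1, 0])
y_pred = np.array([0, 1, 0, 1, 0, 1, 1, 0, 1, 0])

# For binary labels ordered [0, 1], scikit-learn lays the matrix out as:
# [[TN, FP],
#  [FN, TP]]
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"TN={tn} FP={fp} FN={fn} TP={tp}")

# classification_report prints precision, recall and f1-score per class,
# matching the corrected formulas above.
print(classification_report(y_true, y_pred))
```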