A ROC space is defined by FPR and TPR as the x and y axes, respectively, depicting the relative trade-off between true positives (benefits) and false positives (costs). Since TPR is equivalent to sensitivity and FPR is equal to 1 − specificity, the ROC graph is sometimes called the sensitivity vs. (1 − specificity) plot.
The ROC curve is created by plotting the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings. [Figure: an example of an ROC curve and the area under the curve (AUC).] The area under the ROC curve (AUC) [1] [2] is often used to summarize the diagnostic ability of the classifier in a single number. The AUC is simply ...
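To make the construction concrete, here is a minimal sketch (not from any source above) that sweeps a decision threshold over a handful of hypothetical scores and computes the trapezoidal AUC; all names and numbers are illustrative, and only the Python standard library is used.

def roc_points(labels, scores):
    """Return (FPR, TPR) pairs for every decision threshold.

    labels: 1 for an actual positive, 0 for an actual negative.
    scores: higher score means more likely positive; assumed distinct.
    """
    pos = sum(labels)
    neg = len(labels) - pos
    # Sweep the threshold from high to low: each score is a threshold.
    ranked = sorted(zip(scores, labels), reverse=True)
    points, tp, fp = [(0.0, 0.0)], 0, 0
    for _, label in ranked:
        if label == 1:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg, tp / pos))  # (FPR, TPR) at this threshold
    return points

def auc(points):
    """Trapezoidal area under the ROC curve."""
    return sum((x2 - x1) * (y1 + y2) / 2
               for (x1, y1), (x2, y2) in zip(points, points[1:]))

labels = [1, 1, 0, 1, 0, 0]                  # hypothetical ground truth
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.2]      # hypothetical classifier scores
print(auc(roc_points(labels, scores)))       # 0.888..., a fairly good ranker

Sorting once and sweeping yields every ROC point in O(n log n); with tied scores, a tied group should be processed as a single threshold step.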
The receiver operating characteristic (ROC) also characterizes diagnostic ability, although the ROC reveals less information than the total operating characteristic (TOC). For each threshold, the ROC reveals two ratios, hits/(hits + misses) and false alarms/(false alarms + correct rejections), while the TOC shows the total information in the contingency table for each threshold. [2]
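As a small illustration (with made-up counts), the two ratios the ROC keeps at a single threshold can be computed from a full contingency table, which the TOC would retain in its entirety:

hits, misses = 40, 10                        # actual positives, by prediction
false_alarms, correct_rejections = 15, 85    # actual negatives, by prediction

tpr = hits / (hits + misses)                              # 40/50 = 0.80
fpr = false_alarms / (false_alarms + correct_rejections)  # 15/100 = 0.15

print(f"ROC point at this threshold: (FPR={fpr:.2f}, TPR={tpr:.2f})")
# The TOC additionally preserves the totals (50 positives, 100 negatives),
# which cannot be recovered from the two ratios alone.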
According to the book "Mastering Data Mining" by Berry and Linoff, the ROC curve is also called a "lift curve". In data mining, the same approach is used to indicate the impact of using a predictive model in a real-world marketing environment.
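As a hedged sketch of that marketing use, the following computes cumulative lift at a given list depth; the definition of lift used here (the share of responders captured in the top-k scored records, divided by k/N) is an assumption about what is meant, and all data are invented:

def cumulative_lift(labels, scores, depth_fraction):
    """Lift of the top depth_fraction of the score-ranked list vs. random."""
    ranked = [label for _, label in sorted(zip(scores, labels), reverse=True)]
    k = max(1, int(len(ranked) * depth_fraction))
    captured = sum(ranked[:k]) / sum(ranked)  # share of all responders in top k
    return captured / (k / len(ranked))

labels = [1, 1, 0, 1, 0, 0, 0, 1, 0, 0]      # hypothetical campaign responders
scores = [0.9, 0.85, 0.8, 0.7, 0.6, 0.5, 0.4, 0.35, 0.3, 0.1]
print(cumulative_lift(labels, scores, 0.3))  # ~1.67: top 30% finds 50% of responders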
The x- and y-axes are scaled non-linearly by their standard normal deviates (or just by a logarithmic transformation), yielding tradeoff curves that are more linear than ROC curves and devoting most of the plot area to the differences that matter in the critical operating region.
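A minimal sketch of that axis transformation, assuming "standard normal deviate" means the probit (the inverse normal CDF), which NormalDist.inv_cdf in the Python standard library provides:

from statistics import NormalDist

probit = NormalDist().inv_cdf  # maps a rate in (0, 1) to its normal deviate

def to_deviate_axes(fpr, tpr):
    """Map an (FPR, TPR) point onto normal-deviate axes.

    Both rates must lie strictly inside (0, 1); inv_cdf is undefined at 0 and 1.
    """
    return probit(fpr), probit(tpr)

print(to_deviate_axes(0.15, 0.80))  # approximately (-1.036, 0.842)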
The fundamental prevalence-independent statistics are sensitivity and specificity. Sensitivity, or True Positive Rate (TPR), also known as recall, is the proportion of people who test positive and are positive (True Positive, TP) among all the people who actually are positive (Condition Positive, CP = TP + FN).
The positive predictive value (PPV), or precision, is defined as PPV = TP/(TP + FP), where a "true positive" is the event that the test makes a positive prediction and the subject has a positive result under the gold standard, and a "false positive" is the event that the test makes a positive prediction and the subject has a negative result under the gold standard.
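Tying the definitions together, a minimal sketch (with invented counts) computes sensitivity, specificity, and PPV from the four confusion-matrix cells:

tp, fn = 90, 10    # actual positives: correctly vs. wrongly classified
tn, fp = 150, 50   # actual negatives: correctly vs. wrongly classified

sensitivity = tp / (tp + fn)  # TPR / recall: 90/100 = 0.90
specificity = tn / (tn + fp)  # TNR: 150/200 = 0.75
ppv = tp / (tp + fp)          # precision: 90/140 ≈ 0.64

# Sensitivity and specificity are prevalence-independent; PPV is not:
# with the same rates but fewer actual positives, PPV would fall.
print(sensitivity, specificity, ppv)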