- load the package
- the artificial data set `ROCR.simple` contains measurements and labels, which you can consider the gold standard
- the `prediction()` function takes the measurements and the gold standard as input
- the value returned is an S4 object containing, inter alia:
  - the measurements and labels
  - a vector of cutoffs and, for each cutoff, the numbers of
    - false positives
    - true positives
    - false negatives
    - true negatives
require(ROCR, quietly = TRUE)
data(ROCR.simple)
pred <- prediction(ROCR.simple$predictions, ROCR.simple$labels)
str(pred)
Formal class 'prediction' [package "ROCR"] with 11 slots
..@ predictions:List of 1
.. ..$ : num [1:200] 0.613 0.364 0.432 0.14 0.385 ...
..@ labels :List of 1
.. ..$ : Ord.factor w/ 2 levels "0"<"1": 2 2 1 1 1 2 2 2 2 1 ...
..@ cutoffs :List of 1
.. ..$ : num [1:201] Inf 0.991 0.985 0.985 0.983 ...
..@ fp :List of 1
.. ..$ : num [1:201] 0 0 0 0 1 1 2 3 3 3 ...
..@ tp :List of 1
.. ..$ : num [1:201] 0 1 2 3 3 4 4 4 5 6 ...
..@ tn :List of 1
.. ..$ : num [1:201] 107 107 107 107 106 106 105 104 104 104 ...
..@ fn :List of 1
.. ..$ : num [1:201] 93 92 91 90 90 89 89 89 88 87 ...
..@ n.pos :List of 1
.. ..$ : int 93
..@ n.neg :List of 1
.. ..$ : int 107
..@ n.pos.pred :List of 1
.. ..$ : num [1:201] 0 1 2 3 4 5 6 7 8 9 ...
..@ n.neg.pred :List of 1
.. ..$ : num [1:201] 200 199 198 197 196 195 194 193 192 191 ...
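The slots of this S4 object can be read directly with `@` (or `slot()`); each slot holds a list with one element per prediction run, e.g.:

```r
## peek at the first cutoffs and the corresponding
## true-positive counts; [[1]] selects the (single) run
pred@cutoffs[[1]][1:5]
pred@tp[[1]][1:5]
```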
- now we can use `performance()` to calculate different performance measures
- first we plot the classic ROC curve based on sensitivity and specificity: sensitivity (y-axis) against 1 − specificity (x-axis), which is equivalent to plotting the true positive rate ("tpr") against the false positive rate ("fpr")
perf <- performance(pred, "tpr", "fpr")
plot(perf)
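As a small extra, ROCR's plot method can color the curve by cutoff value, and we can add the diagonal a random classifier would achieve (a sketch using the `colorize` argument of ROCR's plot method):

```r
## ROC curve colored by cutoff, plus the diagonal
## corresponding to random guessing
plot(perf, colorize = TRUE)
abline(a = 0, b = 1, lty = 2)
```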
- calculate the area under the curve (AUC); the output is a bit messy, but you find the value of interest in the slot `y.values`
print(performance(pred,"auc"))
An object of class "performance"
Slot "x.name":
[1] "None"
Slot "y.name":
[1] "Area under the ROC curve"
Slot "alpha.name":
[1] "none"
Slot "x.values":
list()
Slot "y.values":
[[1]]
[1] 0.8341875
Slot "alpha.values":
list()
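To pull out just the number instead of the whole object, you can index the `y.values` slot directly:

```r
## extract the AUC as a plain number
auc <- performance(pred, "auc")@y.values[[1]]
auc  # 0.8341875, the value from the printout above
```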
- find the best cutoff ("best" always depends on your preferences); here we put equal weight on sensitivity and specificity and maximize their sum (the Youden index)
- we write a function that takes a prediction object and the names of two performance measures, and returns the cutoff, the maximum of the sum, and the corresponding values of the two performance measures
max.ss <- function(pred, meas.1, meas.2) {
  ## evaluate both performance measures over all cutoffs
  perf.1 <- performance(pred, meas.1)
  perf.2 <- performance(pred, meas.2)
  x.vals  <- slot(perf.1, "x.values")[[1]]  # the cutoffs
  y1.vals <- slot(perf.1, "y.values")[[1]]
  y2.vals <- slot(perf.2, "y.values")[[1]]
  y.vals  <- y1.vals + y2.vals              # the sum we maximize
  best <- which.max(y.vals)                 # index of the best cutoff
  list(cut.off   = x.vals[best],
       max.sum   = y.vals[best],
       max.meas1 = y1.vals[best],
       max.meas2 = y2.vals[best])
}
max.ss(pred,"sens","spec")
$cut.off
[1] 0.5014893
$max.sum
[1] 1.69993
$max.meas1
[1] 0.8494624
$max.meas2
[1] 0.8504673
- here we get a cutoff of 0.5
- the maximized sum is 1.70
- the respective sensitivity is 0.85
- the respective specificity is also 0.85
- sometimes we have a clear preference because the cost of a false negative is much higher than the cost of a false positive (or vice versa)
- therefore there exists a modified version of the Youden index which maximizes \(sensitivity + r\cdot specificity\), where \(r=\frac{1-prevalence}{cost\cdot prevalence}\), \(cost\) is the cost of a false negative relative to a false positive, and \(prevalence\) is the prevalence in the population under consideration
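- for example, with \(cost=20\) and \(prevalence=0.1\) this gives \(r=\frac{1-0.1}{20\cdot 0.1}=\frac{0.9}{2}=0.45\), i.e. specificity enters the sum with less than half the weight of sensitivity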
max.ss <- function(pred, cost = 1, prev = 0.5) {
  ## relative weight of specificity in the modified Youden index
  r <- (1 - prev) / (cost * prev)
  sens <- performance(pred, "sens")
  spec <- performance(pred, "spec")
  x.vals  <- slot(sens, "x.values")[[1]]  # the cutoffs
  y1.vals <- slot(sens, "y.values")[[1]]
  y2.vals <- slot(spec, "y.values")[[1]]
  y.vals  <- y1.vals + r * y2.vals        # the weighted sum we maximize
  best <- which.max(y.vals)
  list(cut.off     = x.vals[best],
       sensitivity = y1.vals[best],
       specificity = y2.vals[best])
}
max.ss(pred)
$cut.off
[1] 0.5014893
$sensitivity
[1] 0.8494624
$specificity
[1] 0.8504673
- with the defaults `cost=1` and `prev=0.5` we get exactly the same result (because \(r=1\) in this case)
- if we have a disease with a prevalence of 0.1 where false negatives (i.e. not detecting a true case) are, say, 20 times as expensive as false positives:
max.ss(pred,cost=20,prev=0.1)
$cut.off
[1] 0.5014893
$sensitivity
[1] 0.8494624
$specificity
[1] 0.8504673
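To visualize the trade-off behind these numbers, we can plot sensitivity and specificity against the cutoff; for these measures the `x.values` slot holds the cutoffs, and the `add` argument of ROCR's plot method overlays the second curve (a sketch):

```r
## sensitivity and specificity as functions of the cutoff;
## the unweighted optimum sits where the two curves cross
plot(performance(pred, "sens"), col = "blue",
     ylab = "sensitivity / specificity")
plot(performance(pred, "spec"), col = "red", add = TRUE)
```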