function ER=calc_error_rate(X1,X2,X3,feat)
%ER=calc_error_rate(X1,X2,X3,feat)
%
%Calculates the error rate (frequency of misclassification) using
%the leave-one-out cross-validation method for the features chosen
%in the input variable `feat'
%
%
%Mats K 010129

%First slice the data
X1=X1(:,feat);
X2=X2(:,feat);
X3=X3(:,feat);

mu1=mean(X1)';
mu2=mean(X2)';
mu3=mean(X3)';

n1=size(X1,1);
n2=size(X2,1);
n3=size(X3,1);

C1=cov(X1);
C2=cov(X2);
C3=cov(X3);

E1=0; %E1 is the error counter for class 1 samples

%Loop for class 1, Iris Setosa
for k=1:n1
  %Use the k:th row as sample
  X=X1(k,:)';
  %and the rest as training set
  X1t=X1([1:k-1,k+1:n1],:);
  %Re-estimate the covariance and mean for class 1
  C1t=cov(X1t);
  mu1t=mean(X1t)';
  %N.B. We can still use C2, C3, mu2 and mu3
  Ct=((n1-2)*C1t + (n2-1)*C2 + (n3-1)*C3)/(n1+n2+n3-4);
  %So, now our training set consists of 49 samples of class 1
  %and 50 of class 2 and 3 each.
  if lin_disc(X,Ct,mu1t,mu2,mu3)~=1
    %If we don't classify the sample as class 1: add an error
    E1=E1+1;
  end
end

E2=0; %E2 is the error counter for class 2 samples

%Loop for class 2, Iris Versicolor (analogous to the class 1 loop)
for k=1:n2
  X=X2(k,:)';
  X2t=X2([1:k-1,k+1:n2],:);
  %Re-estimate the covariance and mean for class 2 only
  C2t=cov(X2t);
  mu2t=mean(X2t)';
  Ct=((n1-1)*C1 + (n2-2)*C2t + (n3-1)*C3)/(n1+n2+n3-4);
  if lin_disc(X,Ct,mu1,mu2t,mu3)~=2
    E2=E2+1;
  end
end

E3=0; %E3 is the error counter for class 3 samples

%Loop for class 3, Iris Virginica (analogous to the class 1 loop)
for k=1:n3
  X=X3(k,:)';
  X3t=X3([1:k-1,k+1:n3],:);
  %Re-estimate the covariance and mean for class 3 only
  C3t=cov(X3t);
  mu3t=mean(X3t)';
  Ct=((n1-1)*C1 + (n2-1)*C2 + (n3-2)*C3t)/(n1+n2+n3-4);
  if lin_disc(X,Ct,mu1,mu2,mu3t)~=3
    E3=E3+1;
  end
end

%Finally, evaluate the output: total misclassifications over all samples
ER=(E1+E2+E3)/(n1+n2+n3);