Sunday, October 12, 2008

Activity 20 - Neural Networks

In this activity, we classify the same objects as in Activities 18 and 19: Vcut and Pillows. The desired network output is 0 for Vcut and 1 for Pillow.
The code is written below:

N = [2,4,1]; // network architecture: 2 input features, 4 hidden neurons, 1 output
train = fscanfMat("F:\AP 186\act20\training.txt")'; // training set (one sample per column)
train = train/max(train); // normalize the features
t = [0 0 0 0 1 1 1 1]; // target outputs: 0 = Vcut, 1 = Pillow
lp = [2.5,0]; // learning parameters (learning rate and error threshold)
W = ann_FF_init(N); // initialize the network weights
T = 1000; // training cycles
W = ann_FF_Std_online(train,t,N,W,lp,T); // train by standard online backpropagation
test = fscanfMat("F:\AP 186\act20\data4.txt")'; // test set
test = test/max(test); // same normalization as the training set
class = ann_FF_run(test,N,W) // network outputs for the test samples
round(class) // round to the nearest class label
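A quick way to count misclassifications in Scilab (a sketch I am adding, not part of the original code; it reuses the variables t and class above):

err = sum(abs(round(class) - t)); // number of misclassified samples
disp(1 - err/8); // fraction classified correctly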

The classification was 100% successful. The raw network outputs are:
0.1069949 0.0069226 0.0023741 0.0146057 0.9982912 0.5178297 0.9649571 0.9974860
Rounded off, these give the desired class labels:
0 0 0 0 1 1 1 1

In this activity, I will give myself a grade of 10/10 because of the very good classification of the objects.

Activity 19 - Probabilistic Classification

In this activity, two classes are used and their patterns are recognized via Linear Discriminant Analysis (LDA). The classification rule is to "assign an object to the group with highest conditional probability"[1]. The discriminant used to assign an object x_k to group i is given by (see the Appendix for the implementation):

f_i = µ_i C^(-1) x_k' − (1/2) µ_i C^(-1) µ_i' + ln(p_i)

where µ_i is the mean feature vector of group i, C is the pooled covariance matrix of all the groups, p_i is the prior probability of group i, and ' denotes the transpose. Two sets of samples were used: Vcut chips (Figure 1) and Pillows (Figure 2). The features used to classify the objects are the means of their red and green values.
Figure 1. Vcut chips.
Figure 2. Pillows.
The results of the classification are given in Table 1.
For the training data, 100% classification was obtained, as expected; for the test data, only 75% classification was obtained.
In conclusion, LDA is a good method for classifying objects from random samples.

For this activity, I will give myself a grade of 8 because I did not obtain 100% classification for the test data.

Appendix:
a = fscanfMat("F:\AP 186\act19\data1.txt"); // training features of the first group
b = fscanfMat("F:\AP 186\act19\data2.txt"); // training features of the second group
q = fscanfMat("F:\AP 186\act19\data4.txt"); // test set features
c(1:4,1:2) = a(1:4,1:2); // stack the training data of both groups
c(5:8,1:2) = b(1:4,1:2);
mean_g = mean(c,'r'); // global mean of all training samples
a1 = a(1:4,1:2); // training subset of each group
b1 = b(1:4,1:2);
mean_a1 = mean(a1,'r'); // group means
mean_b1 = mean(b1,'r');
for i = 1:2 // mean-corrected data: subtract the global mean
    mean_cora1(:,i) = a(:,i)-mean_g(i);
    mean_corb1(:,i) = b(:,i)-mean_g(i);
end
c1 = (mean_cora1'*mean_cora1)/4; // covariance matrix of each group
c2 = (mean_corb1'*mean_corb1)/4;
C = (4/8)*c1 + (4/8)*c2; // pooled covariance matrix (equal group sizes)

// discriminant value of each sample for each group (equal priors, p = 0.5)
f(:,1) = ((mean_a1*inv(C))*c' - 0.5*((mean_a1*inv(C))*mean_a1') + log(0.5))';
f(:,2) = ((mean_b1*inv(C))*c' - 0.5*((mean_b1*inv(C))*mean_b1') + log(0.5))';
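Each sample is then assigned to the group whose discriminant value is larger. A minimal sketch of that final step (my addition, not in the original code), using Scilab's row-wise maximum:

[fmax, group] = max(f,'c'); // row-wise maximum of f and its column index
disp(group') // predicted group (1 or 2) for each sample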

Saturday, October 4, 2008

Activity 18 - Pattern Recognition

In this activity, we gathered samples from four classes: Piatos, Pillows, Kwek-Kwek, and Vcut, with eight samples per class.

Half of the samples are the training samples and half are the test samples. To determine which class a test sample belongs to, the same features must be extracted from the test sample and the training samples; for example, the test samples of Piatos should share the same color features as the training samples of Piatos.

The feature vectors are extracted from the training samples. In my case, the features I used are the mean and the standard deviation of the red and green values of the training samples. I summed the feature values per class; the sums are given below:

piatos = 0.8979285
vcut = 0.9626847
kwek-kwek = 1.0137804
pillows = 0.9057169
The same features were obtained from the test samples and likewise summed. To classify a test sample, its summed feature value is subtracted from the summed feature value of each training class, and the class giving the minimum absolute difference is chosen (see the sketch after the source code below). The results are summarized below.

Note that for the Pillows and Piatos samples, classification was not perfect because their summed feature values differ only slightly (0.9057169 vs. 0.8979285). For both Vcut and Kwek-Kwek, 100% classification was obtained.

For this activity, I will give myself a grade of 8 because I think I've met all the objectives, though the classification wasn't perfect.
Appendix:
Source code:
I = [];
I1 = [];
for i = 5:8 // test samples 5 to 8 (samples 1 to 4 are the training set)
    I = imread("kwekkwek" + string(i) + ".JPG"); // whole image
    I1 = imread("kwekkwekc" + string(i) + ".JPG"); // cropped patch of the object
    // normalized chromaticity coordinates of both images
    r1 = I(:,:,1)./(I(:,:,1)+I(:,:,2)+I(:,:,3));
    g1 = I(:,:,2)./(I(:,:,1)+I(:,:,2)+I(:,:,3));
    b1 = I(:,:,3)./(I(:,:,1)+I(:,:,2)+I(:,:,3));
    r2 = I1(:,:,1)./(I1(:,:,1)+I1(:,:,2)+I1(:,:,3));
    g2 = I1(:,:,2)./(I1(:,:,1)+I1(:,:,2)+I1(:,:,3));
    b2 = I1(:,:,3)./(I1(:,:,1)+I1(:,:,2)+I1(:,:,3));
    r2_1 = floor(r2*255);
    g2_1 = floor(g2*255);
    // features: mean and standard deviation of the r and g chromaticities
    Standr2(i) = stdev(r2);
    Standg2(i) = stdev(g2);
    Meanr2(i) = mean(r2);
    Meang2(i) = mean(g2);
    // Gaussian probability that each pixel belongs to the object
    // (the variance term is 2*stdev(..)^2; the original code had 2*stdev(..))
    pr = (1/(stdev(r2)*sqrt(2*%pi)))*exp(-((r1-mean(r2)).^2/(2*stdev(r2)^2)));
    pg = (1/(stdev(g2)*sqrt(2*%pi)))*exp(-((g1-mean(g2)).^2/(2*stdev(g2)^2)));
    new = pr.*pg; // joint probability
    new2 = new/max(new); // normalize to [0,1]
    new3 = im2bw(new2,0.7); // threshold to segment the object
    [x,y] = follow(new3); // trace the boundary of the segmented object
    n = size(x);
    // shift the boundary points by one position for Green's theorem
    x2 = x;
    y2 = y;
    x2(1) = x(n(1));
    x2(2:n(1)) = x(1:(n(1)-1));
    y2(1) = y(n(1));
    y2(2:n(1)) = y(1:(n(1)-1));
    area(i) = abs(0.5*sum(x.*y2 - y.*x2)); // area via Green's theorem
    imwrite(new3,"pill" + string(i) + ".JPG"); // save the segmented image
end
training = [0.8979285 0.9626847 1.0137804 0.9057169]; // summed training features: piatos, vcut, kwek-kwek, pillows
train = mean(Meanr2(1:4))+mean(Meang2(1:4))+mean(Standr2(1:4))+mean(Standg2(1:4)); // summed features of this class's training samples (indices 1:4 filled by running the loop above with i = 1:4)
for i = 1:4
    for j = 1:4
        test(i,j) = abs((Meanr2(4+i)+Meang2(4+i)+Standr2(4+i)+Standg2(4+i)) - training(j)); // difference of test sample i from class j
    end
end
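The code above stops at the difference matrix test(i,j); a minimal sketch of the final assignment step (my addition, not in the original code), taking the row-wise minimum:

[dmin, class_idx] = min(test,'c'); // minimum difference per test sample and its column index
disp(class_idx') // 1 = piatos, 2 = vcut, 3 = kwek-kwek, 4 = pillows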