Using the given license plate image database (in the specified folder), randomly select several license plate images and manually extract the digits (0~9) or English letters (A~F) they contain. Because the extracted character images may differ in size, each character image can be normalized (for example, to 10×20 pixels) to simplify feature extraction.
The characters are divided into a training set (at least 5 training samples per character) and a test set (10 samples per character may be selected). A three-layer BP neural network is then built and trained, with the network parameters chosen by the experimenter. After training, the test samples are presented to the network and the recognition accuracy is calculated.
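The manual extraction and normalisation step can be sketched in MATLAB roughly as follows; the interactive crop, the use of Car.jpg, and the normalised size of 20 rows by 10 columns are illustrative assumptions rather than part of the required procedure:
I  = imread('Car.jpg');               % a plate photo from the database
Ic = imcrop(I);                       % draw a rectangle around one character by hand
Ig = rgb2gray(Ic);                    % grey-level version of the crop
Ib = im2bw(Ig, graythresh(Ig));       % binarise with Otsu's threshold
Ib = imresize(Ib, [20 10]);           % normalise (20 rows by 10 columns assumed)
vec = double(Ib(:));                  % column feature vector for the BP network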
Four, Experimental results and analysis
1. Software debugging results (debugging process, experimental waveforms, data, and the observed behaviour or interface of the program)
2. Result analysis (differences between the program results and the experimental requirements, and analysis of their causes)
Photograph of a car with a visible license plate number.
License plate region extracted by the program.
Convergence of the sum-squared error (SSE) and the learning-rate curve during neural network training.
During training, the learning rate and the sum-squared error changed as follows:
TRAINBPX: 0/5000 epochs, lr = 0.1, SSE = 68.1032.
TRAINBPX: 50/5000 epochs, lr = 1.14674, SSE = 10.093.
TRAINBPX: 100/5000 epochs, lr = 0.321983, SSE = 0.821155.
TRAINBPX: 150/5000 epochs, lr = 3.69231, SSE = 0.314939.
TRAINBPX: 200/5000 epochs, lr = 42.3412, SSE = 0.0345176.
TRAINBPX: 225/5000 epochs, lr = 143.382, SSE = 0.00973975.
The final values of the output a show that some recognition errors remain, but they are small. This is mainly because the training is stopped at a non-zero error goal, so the final result is bound to deviate slightly.
Five, Source program list
1: License plate number extraction
% license plate recognition - car plate location based on color model
% modified by KouLiangzhi, Oct 10th,2007
I=imread('Car.jpg');
[y,x,z]=size(I);
myI=double(I);
%%%%%%%%%%% statistical analysis %%%%%%%%%%%%%%%
%%%%%%%% Y direction %%%%%%%%%%
Blue_y=zeros(y,1);
for i=1:y
for j=1:x
if((myI(i,j,1)<=121)&&myI(i,j,1)>=110&&((myI(i,j,2)<=155)&&(myI(i,j,2)>=141))&&((myI(i,j,3)<=240)&&(myI(i,j,3)>=210)))
% RGB range of the blue plate background
Blue_y(i,1)= Blue_y(i,1)+1; % count blue pixels in this row
end
end
end
[temp MaxY]=max(Blue_y); % locate the plate in the Y direction: row with the most blue pixels
PY1=MaxY;
while ((Blue_y(PY1,1)>=5)&&(PY1>1))
PY1=PY1-1;
end
PY2=MaxY;
while ((Blue_y(PY2,1)>=5)&&(PY2<y))
PY2=PY2+1;
end
IY=I(PY1:PY2,:,:);
%%%%%%%% X direction %%%%%%%%%%
Blue_x=zeros(1,x); % further confirm the plate region in the X direction
for j=1:x
for i=PY1:PY2
if((myI(i,j,1)<=121)&&myI(i,j,1)>=110&&((myI(i,j,2)<=155)&&(myI(i,j,2)>=141))&&((myI(i,j,3)<=240)&&(myI(i,j,3)>=210)))
Blue_x(1,j)= Blue_x(1,j)+1;
end
end
end
PX1=1;
while ((Blue_x(1,PX1)<3)&&(PX1<x))
PX1=PX1+1;
end
PX2=x;
while ((Blue_x(1,PX2)<3)&&(PX2>PX1))
PX2=PX2-1;
end
PX1=PX1+17;
PX2=PX2-1;
PY1=PY1+5;
PY2=PY2-2; % Correction of license plate area
Plate=I(PY1:PY2,PX1:PX2,:);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
figure,imshow(Plate);
m=PX2-PX1+1; % width of the cropped plate (matches size(Plate,2))
n=PY2-PY1+1; % height of the cropped plate (matches size(Plate,1))
S=ones(n,m); % binary mask: 0 marks blue plate pixels, 1 everything else
for j=1:m
for i=1:n
if((Plate(i,j,1)<=121)&&Plate(i,j,1)>=110&&((Plate(i,j,2)<=155)&&(Plate(i,j,2)>=141))&&((Plate(i,j,3)<=240)&&(Plate(i,j,3)>=210)))
S(i,j)=0;
end
end
end
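% --- Illustrative continuation (not part of the original listing): the binary
% --- mask S could be used to split the plate into individual characters by a
% --- vertical projection. The column threshold, the minimum segment width and
% --- the 20-by-10 normalised size are assumptions for this sketch only.
colSum = sum(S,1);                  % non-blue (character) pixels per column
isChar = colSum > 2;                % columns containing character strokes
d = diff([0 isChar 0]);             % +1 at segment starts, -1 after segment ends
starts = find(d == 1);
stops  = find(d == -1) - 1;
for k = 1:length(starts)
    if stops(k) - starts(k) >= 3    % skip very narrow segments (noise)
        Ch = imresize(S(:, starts(k):stops(k)), [20 10]); % normalised character
        figure, imshow(Ch);
    end
end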
2: Neural network training
% [Step 1: sample input]
nntwarn off;
A=[0 0 1 0 0 0 1 0 1 0 0 1 0 1 0 1 0 0 0 1 1 1 1 1 1 1 0 0 0 1 1 0 0 0 1]';
B=[1 1 1 1 0 1 0 0 0 1 1 0 0 0 1 1 1 1 1 0 1 0 0 0 1 1 0 0 0 1 1 1 1 1 0]';
C=[0 1 1 1 0 1 0 0 0 1 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 1 0 1 1 1 0]';
D=[1 1 1 1 0 1 0 0 0 1 1 0 0 0 1 1 0 0 0 1 1 0 0 0 1 1 0 0 0 1 1 1 1 1 0]';
E=[1 1 1 1 1 1 0 0 0 0 1 0 0 0 0 1 1 1 1 0 1 0 0 0 0 1 0 0 0 0 1 1 1 1 1]';
F=[1 1 1 1 1 1 0 0 0 0 1 0 0 0 0 1 1 1 1 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0]';
zer=[1 1 1 1 1 1 0 0 0 1 1 0 0 0 1 1 0 0 0 1 1 0 0 0 1 1 0 0 0 1 1 1 1 1 1]';
one=[0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0]';
two=[1 1 1 1 1 0 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0 1 1 1 1 1]';
thr=[1 1 1 1 1 0 0 0 0 1 0 0 0 0 1 1 1 1 1 1 0 0 0 0 1 0 0 0 0 1 1 1 1 1 1]';
fou=[1 0 1 0 0 1 0 1 0 0 1 0 1 0 0 1 1 1 1 1 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0]';
fiv=[1 1 1 1 1 1 0 0 0 0 1 0 0 0 0 1 1 1 1 1 0 0 0 0 1 0 0 0 0 1 1 1 1 1 1]';
six=[1 1 1 1 1 1 0 0 0 0 1 0 0 0 0 1 1 1 1 1 1 0 0 0 1 1 0 0 0 1 1 1 1 1 1]';
sev=[1 1 1 1 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1]';
eig=[1 1 1 1 1 1 0 0 0 1 1 0 0 0 1 1 1 1 1 1 1 0 0 0 1 1 0 0 0 1 1 1 1 1 1]';
nin=[1 1 1 1 1 1 0 0 0 1 1 0 0 0 1 1 1 1 1 1 0 0 0 0 1 0 0 0 0 1 1 1 1 1 1]';
% The training-sample matrix is then assembled as follows:
alphabet=[A,B,C,D,E,F,zer,one,two,thr,fou,fiv,six,sev,eig,nin];
p=alphabet;
targets=eye(16,16);
t=targets;
%------------ Target outputs of the samples: when a pattern is presented, the output
%------------ at its own position should be 1 and all other positions should be 0.
%------------ With 16 patterns and one training sample per pattern, the target
%------------ vectors are simply targets=eye(16,16).
%---------------- Determine the sizes of the input, hidden and output layers
[r,q]=size(p); % r = input dimension, q = number of samples
[s2,q]=size(t); % s2 = output dimension
s1=13; % number of hidden-layer neurons; chosen as needed, generally not more than the number of training samples
%----------------- Initialise the network weights and biases
[w1,b1]=nwlog(s1,r);
[w2,b2]=rands(s2,s1);
% [Step 2: set the network parameters and train]
%---------------- Training parameters
disp_freq=50; % display interval (in epochs) during training
max_epoch=5000; % maximum number of training epochs
err_goal=0.01; % target sum-squared error
lr=0.1; % initial learning rate
lr_inc=1.05; % learning-rate increase factor
lr_dec=0.5; % learning-rate decrease factor
momentum=0.75; % momentum constant
err_ratio=1.05; % maximum error ratio for the adaptive learning rate
%------------- Training begins
tp=[disp_freq max_epoch err_goal lr lr_inc lr_dec momentum err_ratio]';
[w1,b1,w2,b2,epochs,TR]=trainbpx(w1,b1,'logsig',w2,b2,'logsig',p,t,tp);
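% --- Illustrative sketch (not part of the original listing): the convergence
% --- curves shown in Section Four could be plotted from the training record TR,
% --- assuming the legacy trainbpx convention that row 1 of TR holds the SSE
% --- history and row 2 the learning rate.
figure;
subplot(2,1,1); semilogy(TR(1,:)); grid on;
xlabel('epoch'); ylabel('SSE'); title('Sum-squared error during training');
subplot(2,1,2); plot(TR(2,:)); grid on;
xlabel('epoch'); ylabel('lr'); title('Adaptive learning rate');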
save digit.mat w1 b1 w2 b2; %-------- save the weights and biases for later testing
% [Step 3: test]
%--------------- test: present the test samples p to the trained network
load digit_noise.mat
layer1=logsig(w1*p,b1);
a=logsig(w2*layer1,b2);
%----------------- the position of the maximum value in each column of a indicates which character class the sample belongs to
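% --- Illustrative sketch (an addition, not in the original listing): the
% --- recognised class of each test sample is the row index of the largest
% --- output in a; comparing it with the targets gives the test accuracy
% --- required by the task. It is assumed that t still matches the test
% --- samples p column by column.
[maxval, cls]   = max(a);           % recognised class of each test sample
[maxval, truth] = max(t);           % expected class of each test sample
acc = sum(cls == truth) / length(truth);
fprintf('Test accuracy: %5.2f %%\n', 100 * acc);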