Neural Network XOR Application and Fundamentals


DEFINITION

Building a simple neural network to solve the XOR function is a common task, and one that comes up often in coursework. So I have put together some examples, along with the basic neural networks used to solve them, and there is a bonus program for you at the end.

SAMPLE

A network with one hidden layer containing two neurons is enough to separate the XOR classes. Follow these steps:

The first neuron acts as an OR gate and the second one as a NAND (NOT AND) gate.

Feed the outputs of both neurons into the output unit; if their sum passes the threshold, the result is positive. Simple linear threshold neurons work for this, with the biases adjusted to set the thresholds.

The weights into the NAND gate should be negative, so that it stays on for every 0/1 input pair except (1, 1).

The picture should make this clearer: the values on the connections are the weights, the values in the neurons are the biases, and the decision functions act as 0/1 decisions (the sign function also works in this case).
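To make this concrete, here is a minimal sketch of the construction in MATLAB. The weights and biases below are one working choice consistent with the steps above, not the only one; the step function plays the role of the 0/1 decision function.

step = @(s) double(s >= 0);        % 0/1 decision function
X = [0 0; 0 1; 1 0; 1 1];          % the four input pairs
for i = 1:4
    x1 = X(i,1); x2 = X(i,2);
    h_or   = step( x1 + x2 - 0.5);       % OR neuron: on unless both inputs are 0
    h_nand = step(-x1 - x2 + 1.5);       % NAND neuron: on unless both inputs are 1
    y      = step(h_or + h_nand - 1.5);  % AND of the two hidden neurons gives XOR
    fprintf('%d XOR %d = %d\n', x1, x2, y);
end

Running this prints 0, 1, 1, 0 for the four input pairs, which is exactly the XOR truth table.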

NOTE

If you are using basic gradient descent (with no other optimization, such as momentum) and a minimal network (2 inputs, 2 hidden neurons, 1 output neuron), then it is definitely possible to train it to learn XOR, but it can be quite tricky and unreliable.
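For illustration, here is a sketch of exactly that minimal 2-2-1 setup, trained with plain batch gradient descent on sigmoid units. The learning rate, seed, and epoch count are assumptions; with an unlucky initialization the run can stall, which is the unreliability mentioned above.

rng(1);                            % fixed seed so the run is repeatable
X = [0 0; 0 1; 1 0; 1 1]';         % inputs, one column per example (2x4)
T = [0 1 1 0];                     % XOR targets (1x4)
sig = @(s) 1 ./ (1 + exp(-s));     % sigmoid activation

W1 = randn(2,2); b1 = randn(2,1);  % input -> hidden weights and biases
W2 = randn(1,2); b2 = randn;       % hidden -> output weights and bias
lr = 0.5;                          % learning rate (an assumption)

for epoch = 1:20000
    H = sig(W1*X + b1);                % hidden activations (2x4)
    Y = sig(W2*H + b2);                % output activations (1x4)
    dY = (Y - T) .* Y .* (1 - Y);      % output delta for squared error
    dH = (W2' * dY) .* H .* (1 - H);   % hidden delta, backpropagated
    W2 = W2 - lr * dY * H';  b2 = b2 - lr * sum(dY);
    W1 = W1 - lr * dH * X';  b1 = b1 - lr * sum(dH, 2);
end
disp(round(Y));                    % 0 1 1 0 when the run has succeeded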

NEURAL NETS USED (ADDITIONAL)

Backpropagation Solution Network

Because of the nature of the activation function, the activity on the output node can never reach either '0' or '1' exactly (a sigmoid only approaches them asymptotically). We therefore take values of less than 0.1 as equal to 0, and values greater than 0.9 as equal to 1.
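Using the variable names from the sketch above, that convention can double as a check that training has finished:

% Y are the network outputs and T the targets, as in the earlier sketch.
trained = all(Y(T == 1) > 0.9) && all(Y(T == 0) < 0.1);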

If the network seems to be stuck, it has hit what is called a 'local minimum'. Keep your eye on the bias of the hidden node and wait: it will eventually head towards zero, and as it approaches zero, the network will get out of the local minimum and shortly complete training. This is because of a 'momentum term' used in the calculation of the weight updates.
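The momentum term keeps a fraction of the previous weight change and adds it to the current one, which is what lets the weights coast through flat regions and out of shallow local minima. Here is a sketch of the update rule for one weight matrix, reusing the names from the earlier sketch (the coefficients are assumed values):

lr = 0.5; alpha = 0.9;                 % learning rate and momentum coefficient
dW2_prev = zeros(size(W2));            % previous weight change, initially zero
grad2 = dY * H';                       % gradient of the error w.r.t. W2, as before
dW2 = -lr * grad2 + alpha * dW2_prev;  % step downhill plus a push from the last step
W2 = W2 + dW2;
dW2_prev = dW2;                        % remember this change for the next iteration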

Conditional Backpropagation Network

This network can learn any logical relationship expressible in a truth table of this sort. You can change the desired output and train the network to produce that output.
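In the gradient-descent sketch above, the truth table enters only through the target vector, so swapping it out trains the same network on a different logical function:

T_nand = [1 1 1 0];   % NAND targets for inputs 00, 01, 10, 11
T_xnor = [1 0 0 1];   % XNOR (equivalence) targets
T = T_nand;           % pick the desired truth table, then re-run the training loop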

Backpropagation for Any Binary Logical Function

This network works directly with binary values and needs fewer iterative steps; it is handy and reaches a solution more quickly.

MLP (Multi-Layer Perceptron)

This is a larger network built out of more of the same basic nodes; here, a hidden unit is set up as a feature detector whose value is determined by the input pattern.

XOR Solved by MLP (Multi-Layer Perceptron)

Feature detector: 'two ones', a hidden unit that fires only when both inputs are 1.
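Here is a sketch of that solution with step units and hand-picked weights (one valid choice among many): a single hidden unit detects the 'two ones' feature, and the output unit computes the OR of the inputs minus twice the detector, which is exactly XOR.

step = @(s) double(s >= 0);
for x = [0 0; 0 1; 1 0; 1 1]'            % loop over the four input columns
    h = step(x(1) + x(2) - 1.5);         % feature detector: fires only on (1,1)
    y = step(x(1) + x(2) - 2*h - 0.5);   % OR of the inputs, suppressed by the detector
    fprintf('%d XOR %d = %d\n', x(1), x(2), y);
end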

CODE —

XOR function using McCulloch-Pitts neurons:

clear;
clc;

% Get weights and threshold value from the user.
% One set of values that works: w11=1, w12=-1, w21=-1, w22=1, v1=1, v2=1, theta=1.
disp('Enter weights');
w11 = input('Weight w11 = ');
w12 = input('Weight w12 = ');
w21 = input('Weight w21 = ');
w22 = input('Weight w22 = ');
v1 = input('Weight v1 = ');
v2 = input('Weight v2 = ');
disp('Enter threshold value');
theta = input('theta = ');

x1 = [0 0 1 1];                    % first input
x2 = [0 1 0 1];                    % second input
z  = [0 1 1 0];                    % XOR targets
y1 = zeros(1,4); y2 = zeros(1,4); y = zeros(1,4);

con = 1;
while con
    % Net inputs of the hidden neurons Z1 and Z2.
    zin1 = x1*w11 + x2*w21;
    zin2 = x1*w12 + x2*w22;        % w12, matching Z2's weights displayed below
    for i = 1:4
        if zin1(i) >= theta
            y1(i) = 1;
        else
            y1(i) = 0;
        end
        if zin2(i) >= theta
            y2(i) = 1;
        else
            y2(i) = 0;
        end
    end
    % Net input and thresholded output of the output neuron Y.
    yin = y1*v1 + y2*v2;
    for i = 1:4
        if yin(i) >= theta
            y(i) = 1;
        else
            y(i) = 0;
        end
    end
    disp('Output of net');
    disp(y);
    if isequal(y, z)
        con = 0;
    else
        disp('Net is not learning; enter another set of weights and threshold value');
        w11 = input('Weight w11 = ');
        w12 = input('Weight w12 = ');
        w21 = input('Weight w21 = ');
        w22 = input('Weight w22 = ');
        v1 = input('Weight v1 = ');
        v2 = input('Weight v2 = ');
        theta = input('theta = ');
    end
end
disp('McCulloch-Pitts net for XOR function');
disp('Weights of neuron Z1');
disp(w11);
disp(w21);
disp('Weights of neuron Z2');
disp(w12);
disp(w22);
disp('Weights of neuron Y');
disp(v1);
disp(v2);
disp('Threshold value');
disp(theta);

Hope this was informative, and thanks for reading. If you have any doubts, you are welcome to ask me in the comments.

Sayonara, Bonjour, and Goodbye!
