## Problem Statement

Class name: NeuralNet
Method name: testTraining
Parameters: int[], int[], int[]
Returns: int[]

You will create a neural network. A neural network is used to learn patterns and relationships in data. You will train a network with training inputs, in[]. Once you have trained your network to return the expected training outputs, out[], send a set of test inputs, testIn[], through the network and return that output as your result. The method signature is (be sure your method is declared public):

    int[] testTraining(int[] in, int[] out, int[] testIn)

For the purpose of this problem, the neural network contains four nodes. Each node takes the four-element int array in[] as input and produces an int output. Stored within each node is a float array of weights, w[]. Each element of the weight array is assigned to the element of the same index in the input array. For example, if in[0] = 1 and w[0] = 0.5, then in[0] has a weight of 0.5 assigned to it. Each node also has a threshold value, th. The output of a node is calculated by multiplying each input by its corresponding weight and summing these products. If the resulting value is greater than or equal to the threshold, the node outputs 1; otherwise it outputs 0.
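As a sketch, the output rule for a single node could be coded like this (the `nodeOutput` helper name is illustrative, not part of the required interface):

```java
public class NodeDemo {
    static final float THRESHOLD = 0.5f;

    // Weighted sum of the inputs; the node fires (1) when the sum
    // reaches the threshold, otherwise it stays off (0).
    public static int nodeOutput(int[] in, float[] w) {
        float x = 0f;
        for (int i = 0; i < in.length; i++) {
            x += in[i] * w[i];
        }
        return x >= THRESHOLD ? 1 : 0;
    }

    public static void main(String[] args) {
        // All weights start at 0.25, as the problem specifies.
        float[] w = {0.25f, 0.25f, 0.25f, 0.25f};
        System.out.println(nodeOutput(new int[]{1, 0, 0, 0}, w)); // 0.25 < 0.5, prints 0
        System.out.println(nodeOutput(new int[]{1, 1, 0, 0}, w)); // 0.5 >= 0.5, prints 1
    }
}
```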
For example:

    in[0]  _______________
          |               |
    in[1] |               |
     ---> |    Node 1     | ---> x = in[0]*w[0] + in[1]*w[1] + in[2]*w[2] + in[3]*w[3]
    in[2] |      w[]      |      if x >= th then out1 = 1 else out1 = 0
          |               |
    in[3] |_______________|

Nodes 2, 3, and 4 are identical, producing out2, out3, and out4 in the same way.

TopCoder will enforce the following rules:

* All arrays used for input into the method are of length 4.
* The elements of the three input arrays to the method are always 1 or 0.

Notes:

* The initial value of all the weights in all the nodes will be 0.25.
* The threshold value will be the same for all nodes and will be 0.5.
* Every array in this problem begins at index zero.
* The delta value (described below) is always equal to 0.10.

The value of this neural net is that it can train itself by changing its internal weights when given an input and the expected output. The training of a single node is described as follows:

- First, set a "delta" value. This value is the same all the time, and should be hard-coded in this problem to 0.10.
- Compare the output value of each node to its corresponding element in the output array. If they do not match, the internal weights of the node that did not match must be adjusted, using the constant delta of 0.1, in the following fashion:
  1) If the output matches the expected output, do nothing.
  2) If the output is 1 and the expected output is 0, subtract the delta value 0.1 from the weights that correspond to the inputs that had a value of 1. (If an input was 0, it did not contribute to the output, so its weight should not be adjusted.)
  3) If the output is 0 and the expected output is 1, add the delta value 0.1 to the weights that correspond to the inputs that had a value of 1.
- Continue to run the set of inputs through the nodes, making these adjustments, until your outputs match the desired outputs.

NOTE: If, for some reason, you determine that the net is untrainable (i.e., if the input is all zeroes, there is no way to produce any non-zero output values), do not train.

Testing the trained neural network: once you have trained your neural network, run the third argument of test inputs through it and return the outputs as your result. Create a method testTraining that takes three int[] arguments, representing the node inputs, the expected outputs, and a test case. The method should return an int[] that represents the output of the test case after the neural net is trained.

### Example 1

(w1[0] means the weight associated with input 0 in node 1)

    trainingInputs  = { 1, 0, 0, 0 }
    trainingOutputs = { 0, 0, 1, 0 }
    testInputs      = { 0, 1, 1, 0 }

The initial values of the weights (w[i]) are 0.25. After entering { 1, 0, 0, 0 } as inputs, the outputs are:

* Node 0: 1 * 0.25 + 0 * 0.25 + 0 * 0.25 + 0 * 0.25 = 0.25 -----> 0
* Node 1: 1 * 0.25 + 0 * 0.25 + 0 * 0.25 + 0 * 0.25 = 0.25 -----> 0
* Node 2: 1 * 0.25 + 0 * 0.25 + 0 * 0.25 + 0 * 0.25 = 0.25 -----> 0
* Node 3: 1 * 0.25 + 0 * 0.25 + 0 * 0.25 + 0 * 0.25 = 0.25 -----> 0

Nodes 0 and 1 generate the correct output, so leave them alone. Node 2 does not, so adjust the weights of Node 2:

* i[0] = 1, so we adjust w2[0]: w2[0] = w2[0] + delta = 0.25 + 0.1 = 0.35 (+ because the current output is too low)
* i[1] = 0, so we do not adjust w2[1].
* i[2] = 0, so we do not adjust w2[2].
* i[3] = 0, so we do not adjust w2[3].

Node 3 generates the correct output, so leave it alone. After this round of training, the outputs are:

* Node 0: 1 * 0.25 + 0 * 0.25 + 0 * 0.25 + 0 * 0.25 = 0.25 -----> 0
* Node 1: 1 * 0.25 + 0 * 0.25 + 0 * 0.25 + 0 * 0.25 = 0.25 -----> 0
* Node 2: 1 * 0.35 + 0 * 0.25 + 0 * 0.25 + 0 * 0.25 = 0.35 -----> 0
* Node 3: 1 * 0.25 + 0 * 0.25 + 0 * 0.25 + 0 * 0.25 = 0.25 -----> 0

Node 2 still does not generate the correct output, so adjust the weights of Node 2:

* i[0] = 1, so we adjust w2[0]: w2[0] = w2[0] + delta = 0.35 + 0.1 = 0.45 (+ because the current output is too low)
* i[1] = 0, so we do not adjust w2[1].
* i[2] = 0, so we do not adjust w2[2].
* i[3] = 0, so we do not adjust w2[3].

After this round of training, the outputs are:

* Node 0: 1 * 0.25 + 0 * 0.25 + 0 * 0.25 + 0 * 0.25 = 0.25 -----> 0
* Node 1: 1 * 0.25 + 0 * 0.25 + 0 * 0.25 + 0 * 0.25 = 0.25 -----> 0
* Node 2: 1 * 0.45 + 0 * 0.25 + 0 * 0.25 + 0 * 0.25 = 0.45 -----> 0
* Node 3: 1 * 0.25 + 0 * 0.25 + 0 * 0.25 + 0 * 0.25 = 0.25 -----> 0

Node 2 still does not generate the correct output, so adjust the weights of Node 2:

* i[0] = 1, so we adjust w2[0]: w2[0] = w2[0] + delta = 0.45 + 0.1 = 0.55 (+ because the current output is too low)
* i[1] = 0, so we do not adjust w2[1].
* i[2] = 0, so we do not adjust w2[2].
* i[3] = 0, so we do not adjust w2[3].

After this round of training, the outputs are:

* Node 0: 1 * 0.25 + 0 * 0.25 + 0 * 0.25 + 0 * 0.25 = 0.25 -----> 0
* Node 1: 1 * 0.25 + 0 * 0.25 + 0 * 0.25 + 0 * 0.25 = 0.25 -----> 0
* Node 2: 1 * 0.55 + 0 * 0.25 + 0 * 0.25 + 0 * 0.25 = 0.55 -----> 1
* Node 3: 1 * 0.25 + 0 * 0.25 + 0 * 0.25 + 0 * 0.25 = 0.25 -----> 0

And the outputs are the same as the desired outputs.
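The repeated adjustment of Node 2 above can be sketched as a small loop. This is a sketch only; `output` and `trainNode` are hypothetical helper names, not part of the required API:

```java
public class TrainDemo {
    static final float THRESHOLD = 0.5f;
    static final float DELTA = 0.1f;

    // Threshold rule from the problem statement.
    public static int output(int[] in, float[] w) {
        float x = 0f;
        for (int i = 0; i < in.length; i++) x += in[i] * w[i];
        return x >= THRESHOLD ? 1 : 0;
    }

    // Repeat the delta adjustment until the node produces the expected bit.
    public static void trainNode(int[] in, int expected, float[] w) {
        while (output(in, w) != expected) {
            // Output too low: add delta; output too high: subtract delta.
            float sign = (expected == 1) ? DELTA : -DELTA;
            for (int i = 0; i < in.length; i++) {
                if (in[i] == 1) w[i] += sign; // only inputs that were 1 are adjusted
            }
        }
    }

    public static void main(String[] args) {
        // Node 2 from Example 1: input { 1, 0, 0, 0 }, expected output 1.
        float[] w2 = {0.25f, 0.25f, 0.25f, 0.25f};
        trainNode(new int[]{1, 0, 0, 0}, 1, w2);
        // Three rounds raise w2[0] from 0.25 to about 0.55, crossing the threshold.
        System.out.println(w2[0]);
    }
}
```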
Now we input the elements of testInputs into the net, and the outputs are:

* Node 0: 0 * 0.25 + 1 * 0.25 + 1 * 0.25 + 0 * 0.25 = 0.5 -----> 1
* Node 1: 0 * 0.25 + 1 * 0.25 + 1 * 0.25 + 0 * 0.25 = 0.5 -----> 1
* Node 2: 0 * 0.55 + 1 * 0.25 + 1 * 0.25 + 0 * 0.25 = 0.5 -----> 1
* Node 3: 0 * 0.25 + 1 * 0.25 + 1 * 0.25 + 0 * 0.25 = 0.5 -----> 1

Return { 1, 1, 1, 1 }.

### Example 2

    trainingInputs  = { 0, 1, 1, 1 }
    trainingOutputs = { 1, 1, 1, 1 }
    testInputs      = { 1, 0, 1, 0 }

The initial values of the weights (w[i]) are 0.25. After entering { 0, 1, 1, 1 } as inputs, the outputs are:

* Node 0: 0 * 0.25 + 1 * 0.25 + 1 * 0.25 + 1 * 0.25 = 0.75 -----> 1
* Node 1: 0 * 0.25 + 1 * 0.25 + 1 * 0.25 + 1 * 0.25 = 0.75 -----> 1
* Node 2: 0 * 0.25 + 1 * 0.25 + 1 * 0.25 + 1 * 0.25 = 0.75 -----> 1
* Node 3: 0 * 0.25 + 1 * 0.25 + 1 * 0.25 + 1 * 0.25 = 0.75 -----> 1

All nodes generate the correct output, so we do not train. Now we input the elements of testInputs into the net, and the outputs are:

* Node 0: 1 * 0.25 + 0 * 0.25 + 1 * 0.25 + 0 * 0.25 = 0.5 -----> 1
* Node 1: 1 * 0.25 + 0 * 0.25 + 1 * 0.25 + 0 * 0.25 = 0.5 -----> 1
* Node 2: 1 * 0.25 + 0 * 0.25 + 1 * 0.25 + 0 * 0.25 = 0.5 -----> 1
* Node 3: 1 * 0.25 + 0 * 0.25 + 1 * 0.25 + 0 * 0.25 = 0.5 -----> 1

Return { 1, 1, 1, 1 }.

### Example 3

    trainingInputs  = { 1, 1, 0, 0 }
    trainingOutputs = { 1, 1, 1, 0 }
    testInputs      = { 1, 1, 1, 0 }

The initial values of the weights (w[i]) are 0.25. After entering { 1, 1, 0, 0 } as inputs, the outputs are:

* Node 0: 1 * 0.25 + 1 * 0.25 + 0 * 0.25 + 0 * 0.25 = 0.5 -----> 1
* Node 1: 1 * 0.25 + 1 * 0.25 + 0 * 0.25 + 0 * 0.25 = 0.5 -----> 1
* Node 2: 1 * 0.25 + 1 * 0.25 + 0 * 0.25 + 0 * 0.25 = 0.5 -----> 1
* Node 3: 1 * 0.25 + 1 * 0.25 + 0 * 0.25 + 0 * 0.25 = 0.5 -----> 1

Node 3 is producing the wrong output.
Adjust Node 3:

* i[0] = 1, so we adjust w3[0]: w3[0] = w3[0] - delta = 0.25 - 0.1 = 0.15 (- because the current output is too high)
* i[1] = 1, so we adjust w3[1]: w3[1] = w3[1] - delta = 0.25 - 0.1 = 0.15 (- because the current output is too high)
* i[2] = 0, so we do not adjust its corresponding weight.
* i[3] = 0, so we do not adjust its corresponding weight.

After this round of training, the outputs are:

* Node 0: 1 * 0.25 + 1 * 0.25 + 0 * 0.25 + 0 * 0.25 = 0.5 -----> 1
* Node 1: 1 * 0.25 + 1 * 0.25 + 0 * 0.25 + 0 * 0.25 = 0.5 -----> 1
* Node 2: 1 * 0.25 + 1 * 0.25 + 0 * 0.25 + 0 * 0.25 = 0.5 -----> 1
* Node 3: 1 * 0.15 + 1 * 0.15 + 0 * 0.25 + 0 * 0.25 = 0.3 -----> 0

and we have the desired output. Now we input the elements of testInputs into the net, and the outputs are:

* Node 0: 1 * 0.25 + 1 * 0.25 + 1 * 0.25 + 0 * 0.25 = 0.75 -----> 1
* Node 1: 1 * 0.25 + 1 * 0.25 + 1 * 0.25 + 0 * 0.25 = 0.75 -----> 1
* Node 2: 1 * 0.25 + 1 * 0.25 + 1 * 0.25 + 0 * 0.25 = 0.75 -----> 1
* Node 3: 1 * 0.15 + 1 * 0.15 + 1 * 0.25 + 0 * 0.25 = 0.55 -----> 1

Return { 1, 1, 1, 1 }.

### Example 4

    trainingInputs  = { 0, 0, 0, 0 }
    trainingOutputs = { 1, 1, 0, 1 }
    testInputs      = { 1, 0, 0, 0 }

Since all inputs are zero, we cannot train the net. Hence, we input the elements of testInputs into the net, and the outputs are:

* Node 0: 1 * 0.25 + 0 * 0.25 + 0 * 0.25 + 0 * 0.25 = 0.25 -----> 0
* Node 1: 1 * 0.25 + 0 * 0.25 + 0 * 0.25 + 0 * 0.25 = 0.25 -----> 0
* Node 2: 1 * 0.25 + 0 * 0.25 + 0 * 0.25 + 0 * 0.25 = 0.25 -----> 0
* Node 3: 1 * 0.25 + 0 * 0.25 + 0 * 0.25 + 0 * 0.25 = 0.25 -----> 0

Return { 0, 0, 0, 0 }.
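Putting the rules together, here is one possible sketch of the full solution. Only the class name and the `testTraining` signature come from the statement; the internal structure (the `fire` helper, the round-based training loop) is my own:

```java
import java.util.Arrays;

public class NeuralNet {
    static final float THRESHOLD = 0.5f;
    static final float DELTA = 0.1f;

    // Threshold rule: weighted sum of inputs, compared against 0.5.
    private static int fire(int[] in, float[] w) {
        float x = 0f;
        for (int i = 0; i < 4; i++) x += in[i] * w[i];
        return x >= THRESHOLD ? 1 : 0;
    }

    public int[] testTraining(int[] in, int[] out, int[] testIn) {
        // One weight vector per node, all weights initialized to 0.25.
        float[][] w = new float[4][4];
        for (float[] row : w) Arrays.fill(row, 0.25f);

        // Untrainable case: all-zero inputs can never produce a 1, so skip training.
        boolean trainable = false;
        for (int v : in) if (v == 1) trainable = true;

        if (trainable) {
            boolean done = false;
            while (!done) {
                done = true;
                for (int n = 0; n < 4; n++) {
                    int got = fire(in, w[n]);
                    if (got == out[n]) continue; // rule 1: correct output, do nothing
                    done = false;
                    // Rule 2 (got 1, want 0): subtract delta; rule 3 (got 0, want 1): add delta.
                    float sign = (got == 0) ? DELTA : -DELTA;
                    for (int i = 0; i < 4; i++) {
                        if (in[i] == 1) w[n][i] += sign;
                    }
                }
            }
        }

        // Run the test inputs through the trained net and collect the node outputs.
        int[] result = new int[4];
        for (int n = 0; n < 4; n++) result[n] = fire(testIn, w[n]);
        return result;
    }
}
```

On the four worked examples above, this sketch reproduces the stated results: { 1, 1, 1, 1 } for Examples 1 through 3 and { 0, 0, 0, 0 } for Example 4.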

## Definition
