Artificial Neural Networks
C Final Test
Feed-forward Neural Networks
Artificial neural networks (ANNs) are computing systems inspired by the biological neural networks that constitute animal brains. Recently, they have been at the core of artificial intelligence (AI) tasks ranging from speech recognition to playing the board game Go. A building block for all these complex tasks is a class of feed-forward neural networks called multilayer perceptrons (MLPs). A perceptron is like a neuron that acts as a function mapping inputs to a single output.
Figure 1: Construction of feed-forward artificial neural networks. (a) Artificial neuron. (b) Feed-forward neural network.
Figure 1a represents a single neuron computation. This computation is identical for every neuron/perceptron in the network. All the inputs are multiplied by the weights and summed together with the bias, then passed through the activation function to give the output result. It turns out that this simple computation, when chained together in a network architecture, can learn complex non-linear functions.
Figure 1b shows a 3-layer neural network with multiple inputs and a single output. Each column of neurons is regarded as a single layer, and layers are connected so that the output of one becomes the input of the next, except for the input and output layers. These network architectures are called feed-forward because values only propagate forward, from the input layer to the output layer. These networks are fully connected: every output of one layer is connected to an input of every neuron in the next layer.
Figure 2: The logistic (sigmoid) function.
Figure 2 plots the logistic (sigmoid) function, which we will use as the activation function for the neurons. As the input gets larger, the neuron fires and the output saturates towards 1.
Your Task
You will implement an artificial neural network library with the objective of making a 3-layer feed-forward neural network learn the XOR function. You have been provided with skeleton code for implementing a multilayer feed-forward network and a main function that will create the neural network using your library and train it. The structure of the source code is as follows:

layer.h  Defines the layer_t type and function signatures for layer operations. Do not modify.
layer.c  Write your answers to Part I in this file.
ann.h    Defines the ann_t type and function signatures for ANN operations. Do not modify.
ann.c    Write your answers to Part II in this file.
train.c  Creates and runs a neural network with some debugging information to train the XOR function. Do not modify.
rdata.c  Generates random training data and contains memory leaks to be fixed in Part III.
The train.c file contains a main function that utilises the library to create a network and train it using all the functions. You can use the given make rules to either run or memory-leak-check your code as you implement the questions. We encourage you to run make runtrain after incremental changes to see some debugging output about the layer weights, etc.
make runtrain # compiles and runs ./train
make runrdata # compiles and runs ./rdata
make checktrain # compiles and memory leak checks ./train
make checkrdata # compiles and memory leak checks ./rdata
We recommend attempting the questions in each part in a linear fashion, as they are intended to be of increasing difficulty. At any point, make your assumptions clear using the standard assert(expression) function; for example, when you expect an argument to be non-NULL.
Part I Layers
In this part you will complete the functions that comprise a single layer, i.e. a vertical column in Figure 1b. To make things more compact, this library does not have an explicit neuron implementation but stores the properties of each neuron in dynamically allocated arrays. For example, outputs[i] contains the output of the ith neuron in the layer. Listing 1 shows the structure of a layer. All the functions are located inside layer.c.
Listing 1: Definition of a single layer inside layer.h (listing not reproduced here).
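The listing itself is not reproduced in this handout; based on the descriptions in Part I, the layer type plausibly resembles the sketch below. The field names here are assumptions, and the authoritative definition is the one in layer.h:

```c
/* Hypothetical reconstruction of layer_t from the prose in Part I;
   consult layer.h for the real field names and layout. */
typedef struct layer {
    int num_inputs;      /* inputs per neuron = outputs of previous layer */
    int num_outputs;     /* number of neurons in this layer */
    double *outputs;     /* outputs[j]: output of the jth neuron */
    double **weights;    /* weights[i][j]: weight from input i to neuron j */
    double *biases;      /* biases[j]: bias of the jth neuron */
    double *deltas;      /* deltas[j]: error term used during training */
    struct layer *prev;  /* previous layer, or NULL for the input layer */
    struct layer *next;  /* next layer, or NULL for the output layer */
} layer_t;
```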
Complete the following functions:
1. double sigmoid(double x) { ... }
This function implements the sigmoid function of Figure 2, following the equation:

    y = 1 / (1 + e^(-x))
Use the maths library function double exp(double x) to implement e^x; the library is already included in the header files.
[1 Mark]
2. layer_t *layer_create() { ... }
This function returns a new, heap-allocated empty layer. It only allocates the layer_t itself, setting all the integer properties to 0 and all the pointers to NULL. If the layer allocation fails, it returns NULL.
[2 Marks]
3. bool layer_init(layer_t *layer, int num_outputs, layer_t *prev) { ... }
The initialisation function sets the properties of the given layer. As arguments it receives the layer to initialise, the number of outputs that layer has, and a pointer to the previous layer, prev, which will be NULL if we are creating the input layer of a network. The function sets the number of outputs and allocates an array for the outputs of the neurons. If this is not the input layer, it also sets the number of inputs to the number of outputs of the previous layer and allocates the weights, biases and deltas arrays. Every neuron has a single bias and a single delta value. The outputs, biases and deltas arrays are set to 0s, while the weights are initialised randomly using the ANN_RANDOM() function. If any of the allocations fail, the function returns true for failure; otherwise, it returns false for success.
[4 Marks]
4. void layer_free(layer_t *layer) { ... }
Given a pointer to a layer_t, this function frees the resources allocated by the layer_init function, as well as the layer itself. Hint: make sure you free everything you allocated in layer_init.
[2 Marks]
2022-06-08