    Train a Sequential Keras Model with Sample Data

    Chris Achard
    python ^3.0.0

    We’ll set up some training data for a fully connected neural network, and train the model on that data. Then, we’ll look at how the loss decreases as the number of epochs increases.

    Transcript


    Instructor: Import numpy as np. Then we can define our array of inputs, which we will call Xtrain, and outputs, which we will call Ytrain. They will both be NumPy arrays.

    The content should be the examples of our inputs and outputs that the network will use to learn its weights and biases. Our model is defined to take four numbers as inputs. We'll define several input examples, which will each be an array containing four numbers.

    We want the network to learn how to take the mean of the four inputs. That means our output Y values will be the mean of each of the rows from the X inputs. Notice that the Y values are all arrays, even though they each contain only one element.

    That's because the network expects the inputs and outputs to be arrays, no matter how many elements they contain. We have a set of inputs, and each input has a matching output value, which in this case is the mean of all the inputs.
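The data setup described above can be sketched with NumPy. The specific numbers here are hypothetical, since the lesson's exact values aren't shown in this transcript; the key points are that each input is an array of four numbers and each output is a one-element array holding the row's mean:

```python
import numpy as np

# Six example inputs, each an array of four numbers
# (hypothetical values -- not the lesson's exact data)
x_train = np.array([
    [0.7, 0.2, 0.1, 0.0],
    [0.1, 0.4, 0.3, 0.2],
    [0.9, 0.8, 0.7, 0.6],
    [0.0, 0.5, 0.5, 0.0],
    [0.2, 0.2, 0.2, 0.2],
    [0.6, 0.1, 0.3, 0.4],
])

# Each output is the mean of the matching input row.
# keepdims=True keeps every output as a one-element array,
# which is the shape the network expects.
y_train = x_train.mean(axis=1, keepdims=True)
```

Computing `y_train` directly from `x_train` guarantees every output really is the mean of its input row.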

    To train the network on our sample data, we'll call the fit method of the model. The only required arguments to train the model are the input X values and the output Y values, but there are several optional parameters that we can specify.

    First, because we only have six input data points, we should pick a batch size that is smaller than that number. We'll define a batch size of 2. Normally, you would have a lot more data, so you could set your batch size to a more common 32, 64, or 256.

    Next, we'll set the number of epochs to 100. An epoch represents one full loop through the entire data set, so this controls how many times the network sees all of the training examples. The more epochs you set here, the lower the final loss will generally be, but the longer training will take.

    Finally, we'll set verbose to 1, which will allow us to see the loss at every epoch. Then in the command line, we can run our file by typing python neuralnet.py.
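Putting the fit call together, a minimal sketch might look like the following. The model architecture and the training data here are assumptions, since the lesson's earlier model definition isn't included in this transcript; only the fit parameters (batch_size=2, epochs=100, verbose=1) come from the lesson itself:

```python
import numpy as np
from tensorflow import keras

# Assumed architecture: a small fully connected network
# taking four inputs (the lesson's exact model isn't shown here)
model = keras.Sequential([
    keras.Input(shape=(4,)),
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mean_squared_error")

# Hypothetical training data: each output is the mean of its input row
x_train = np.random.rand(6, 4)
y_train = x_train.mean(axis=1, keepdims=True)

# batch_size=2 because we only have six examples;
# epochs=100 loops through the whole data set 100 times;
# verbose=1 prints the loss at every epoch
history = model.fit(x_train, y_train, batch_size=2, epochs=100, verbose=1)
```

The `history` object returned by `fit` records the loss at every epoch, which is the same number printed to the console during training.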

    The neural net has trained. If we scroll to the top of the output, we can see the training start with the first epoch. The loss here is what we're looking to reduce. It starts very high at the beginning because the network is initialized with random weights.

    It's just totally guessing what the answer should be. With every training step, we want to see the loss go down further and further, until at last we see the loss start to flatten out.

    If we keep training with more epochs, we should start to see this number go down even further. Already, after only 100, we have a fairly low loss, which represents the mean squared error between the actual Y values and the predicted values from our network.
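For reference, the mean squared error mentioned above is just the average of the squared differences between the actual and predicted values. A quick hand-rolled version (not Keras's own implementation) makes that concrete:

```python
import numpy as np

def mean_squared_error(y_true, y_pred):
    # Average of the squared element-wise differences --
    # the loss value Keras reports during training here
    return float(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))

# If the true mean is 0.25 and the network predicts 0.3,
# the squared error for that single example is 0.05**2 = 0.0025
```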