
    Testing Different Neural Network Topologies

    Chris Achard
    python ^3.0.0

    There are numerous ways to set up a neural network, and it can be difficult to figure out which combination of settings and architectures will get the best results. We'll investigate a few typical network topologies, including adding more "depth" and "width", and evaluate which topology is best for our data set. For example, you may want a very deep network for increased accuracy on very complex problems, but training will take longer. Or, you may add width to your network to increase accuracy, but that carries a risk of overfitting.

    Transcript

    Instructor: This neural network has three hidden layers and one output layer, so it has a depth of four. There are many other ways we could configure this network.

    First, we'll run the network as is to check the training and validation losses, so that we can compare those losses to other networks that we can try.
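    The lesson's actual code is member-gated, so here is a minimal Keras sketch of this baseline step; the layer sizes, input shape, epoch count, and placeholder data are all assumptions for illustration:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Input

# Three hidden layers plus one output layer: a depth of four.
# Layer sizes and the input shape are illustrative assumptions.
model = Sequential([
    Input(shape=(4,)),
    Dense(8, activation='relu'),  # hidden layer 1
    Dense(8, activation='relu'),  # hidden layer 2
    Dense(8, activation='relu'),  # hidden layer 3
    Dense(1),                     # output layer
])
model.compile(optimizer='adam', loss='mse')

# Tiny placeholder data stands in for the real training set.
X = np.random.rand(20, 4)
y = np.random.rand(20, 1)

# Hold out 20% of the data so Keras reports a validation loss
# alongside the training loss after each epoch.
history = model.fit(X, y, epochs=10, validation_split=0.2, verbose=0)
print(history.history['loss'][-1], history.history['val_loss'][-1])
```

    The final training and validation losses from `history` are the baseline numbers to compare other topologies against.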

    Making this network deeper just means adding more hidden layers. Let's copy this layer two more times and increase the middle dense layers' number of nodes to 32. Then we can run that network to see what effect, if any, that had on the training and validation loss.
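    A deeper variant along those lines might look like this sketch, where the hidden layer has been copied two more times and the middle layers widened to 32 nodes; the exact sizes are assumptions:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Input

# Deeper variant: five hidden layers instead of three, with the
# middle dense layers widened to 32 nodes (sizes are assumptions).
deeper = Sequential([
    Input(shape=(4,)),
    Dense(8, activation='relu'),   # first hidden layer
    Dense(32, activation='relu'),  # widened middle layers
    Dense(32, activation='relu'),
    Dense(32, activation='relu'),
    Dense(8, activation='relu'),   # last hidden layer
    Dense(1),                      # output layer
])
deeper.compile(optimizer='adam', loss='mse')
```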

    We could even make the network deeper if we wanted to. As you make the network deeper, you may also want to run more epochs because the more complex network will now take longer to train properly. When we run that, we can see that the combination of a deep network and a long training time can be very effective.
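    Giving the deeper network a longer training run could be sketched like this; the epoch counts and placeholder data are assumptions, not the lesson's actual values:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Input

deeper = Sequential([
    Input(shape=(4,)),
    Dense(8, activation='relu'),
    Dense(32, activation='relu'),
    Dense(32, activation='relu'),
    Dense(32, activation='relu'),
    Dense(8, activation='relu'),
    Dense(1),
])
deeper.compile(optimizer='adam', loss='mse')

# Tiny placeholder data stands in for the real training set.
X = np.random.rand(20, 4)
y = np.random.rand(20, 1)

# A deeper network has more parameters to fit, so give it more
# epochs than the shallow baseline (100 here vs. 10 earlier;
# both counts are placeholders).
history = deeper.fit(X, y, epochs=100, validation_split=0.2, verbose=0)
```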

    However, remember that we have a small data set, which may be skewing our results somewhat. It's important to test on a small data set, but also to retest as you include more and more of your full data set.

    Instead of a deep network, we could also try a very wide but shallow network, which means removing many of the hidden layers but drastically increasing the size of one or more of the layers that remain.
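    A wide-but-shallow variant might be sketched as a single large hidden layer; the width of 256 and the input shape are assumptions:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Input

# Wide-but-shallow variant: one very wide hidden layer replaces
# the stack of smaller ones. The width of 256 is an assumption.
wide = Sequential([
    Input(shape=(4,)),
    Dense(256, activation='relu'),  # single wide hidden layer
    Dense(1),                       # output layer
])
wide.compile(optimizer='adam', loss='mse')
```

    Fitting this model with the same `validation_split` lets you compare its losses directly against the deep variants.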

    When we run that, we can see this network is also effective, at least on our small data set. Again, it's important to test different strategies on your data set because every one is different.

    Once you've done all your training and validation and have a network you're happy with, you can add back in your test data and evaluation step, in order to test the network on data that it has not yet seen and that you haven't been using for validation.
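    That final step might be sketched like this, holding out a test split that the network never sees during training or validation; the split sizes, model shape, and placeholder data are assumptions:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Input

# Placeholder data; in practice this is your full data set.
X = np.random.rand(100, 4)
y = np.random.rand(100, 1)

# Hold out the last 20% as a test set that is touched only once,
# at the very end.
X_train, X_test = X[:80], X[80:]
y_train, y_test = y[:80], y[80:]

model = Sequential([
    Input(shape=(4,)),
    Dense(8, activation='relu'),
    Dense(8, activation='relu'),
    Dense(8, activation='relu'),
    Dense(1),
])
model.compile(optimizer='adam', loss='mse')

# Validation still comes out of the training portion only.
model.fit(X_train, y_train, epochs=10, validation_split=0.2, verbose=0)

# Final, less biased estimate of performance on unseen data.
test_loss = model.evaluate(X_test, y_test, verbose=0)
print('test loss:', test_loss)
```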

    This will help give you a final, less biased view on how your network is performing.