Capturing screenshots from your functional tests can be extremely useful. To show that, we're just going to go back and look at our most basic Protractor test here. We've just got this page with a button. We can click on the button to see a message show up.
The code for it is just super-simple. We've got a basic layout here, with our image, our button, and our message, and then we've got this method on the scope, that we can call to update that text. That's really all there is to it. I just want to keep this as simple as possible.
If we look at our Protractor configuration, you can see it's just the basics using the Chrome browser. One thing to note is this directConnect true configuration.
This is actually a fairly new addition to Protractor's configuration, and it replaces the old chromeOnly flag. Protractor is now capable of connecting directly to multiple browsers, including Firefox; other browsers, like IE, still need to go through a Selenium server.
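A minimal configuration along these lines might look like the sketch below. The file name, spec path, and capabilities are assumptions for illustration, not taken from the project in the video.

```javascript
// protractor.conf.js -- a minimal sketch; spec path and capabilities
// are assumptions, not the video's actual project files.
exports.config = {
  // Connect straight to the browser driver, skipping the Selenium
  // server. This replaces the older chromeOnly flag.
  directConnect: true,

  capabilities: {
    browserName: 'chrome'
  },

  specs: ['test/*.spec.js'],

  framework: 'jasmine'
};
```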
Our test here is just extremely basic. We're getting the title, making sure it's what we expect it to be, and then this one down here actually tests our button-click to make sure that it updates the text there. Of course, before each test, we are having it load the page.
If we go run it here, I've set up npm test to run the Protractor tests, or "npm t" for short. You can see that it's actually just running Protractor there, and that our two basic tests have passed.
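Wiring that up is just a scripts entry in package.json; the config file name here is an assumption. ("npm t" works because npm treats t as a built-in alias for test.)

```json
{
  "scripts": {
    "test": "protractor protractor.conf.js"
  }
}
```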
To integrate screenshots into our tests, what we're going to do is we're actually going to add an afterEach block. This will run at the end of each test here or each spec.
What we're going to do is get a reference to the current Jasmine spec by calling jasmine.getEnv().currentSpec, and then we're going to convert the spec name into something that doesn't have spaces in it.

The spec name is the first argument to each of these it calls. We're going to replace the spaces with underscores. Right now, we're only going to take screenshots if our tests fail, so we'll say: if spec.results().passed(), then return. We'll just bail out and not do anything.
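Put together, the hook looks roughly like this. The name conversion is the testable core; the afterEach wiring (shown in the comment) uses the Jasmine 1.x spec API that Protractor used at the time.

```javascript
// Convert a spec description like "should display the message" into a
// filesystem-friendly name: spaces become underscores.
function specFileName(description) {
  return description.replace(/ /g, '_') + '.png';
}

// Inside the describe block, the afterEach hook would look roughly
// like this (Jasmine 1.x API, as Protractor used at the time):
//
//   afterEach(function () {
//     var spec = jasmine.getEnv().currentSpec;
//     if (spec.results().passed()) {
//       return;                        // only capture failures
//     }
//     var name = specFileName(spec.description);
//     // ...take and write the screenshot...
//   });
```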
The code that actually takes the screenshot here is just some standard node.js code wrapped in a Protractor API of browser.takeScreenshot. That will call our callback with the image data in the parameter there.
We use our spec name variable that we created up here to actually name that file. If we go here and actually change this expectation to expect an exclamation point on the end of that message, run our tests, it's going to fail because that's not the text that we actually put into that part of the page.
"Should display the message when button clicked" is the test that failed. If we go over here to SourceTree, we can see that we do now have an image, and it's named "Should_display_the_message_when_button_clicked", just like we saw.
If we look at that, we can see what the state was there. In that case, the image isn't really showing us the problem, but if we compare that to what our expectation is, we can see that we're expecting there to be an exclamation mark. There is not, so our test has failed.
One other thing that you can use screenshots during testing for is essentially visual regression testing, making sure that your interface has not changed when it wasn't supposed to change. Git will actually pick up those image changes and tell you when something has changed.
Then you can even use GitHub's nice image diff here to see various ways to compare them and see how those images or how that screen changed between states. That's a very interesting workflow that you can set up so that you can really monitor the appearance of your application.
When things get a little more complex, sometimes CSS rules can cascade in ways you didn't intend, and that's exactly the kind of thing you may want to keep an eye on.
This screenshot code is something that you're going to want to use probably across a lot of different tests, so we're going to pull it out here.
If we pull it into this screenshot.js file here, we can then expose two different methods: one that takes a screenshot regardless of the test status, and one that takes a screenshot only if the spec has failed.
The first thing we'll do here is pull the code that actually takes the screenshot out into a function, which we'll just call "capture." We'll give it the name so that it knows the name of the file to write.
Once we have that in place, we can actually pull out or define our functions that we will expose to the calling code. The first one, we will just call "takeScreenshot." We'll pass in the spec. It will use that to generate that name.
Then we'll create our takeScreenshotOnFailure method that will actually inspect the spec that's passed in. It will only take that screenshot if that spec has failed. This name generation code should probably go in the capture method, but that's just a little implementation detail.
Now that we have externalized that code, we can actually go back into our test file and utilize it. If we go up here and create a variable named "capture," we'll just require our screenshot.js file that we created there.
We can then go back down to our afterEach, and we will just call one of those methods and pass in a spec that we have a reference to here. You can see that we have both of our methods available, takeScreenshot or takeScreenshotOnFailure.
For now, we'll just do takeScreenshotOnFailure. We will pass in the spec, and now we're good to go ahead and run everything again. Let's go ahead and clear out our screenshots directory so we've got nothing in there.
If we go back and run our tests now, we'll see that everything will pass, and it will not create any new screenshots, as we expect. The tests pass. If we go look at our folder, it's still empty, so that's working as expected.
If we now go back and just use the takeScreenshot method instead of the OnFailure one, we should be able to verify that both of our tests create screenshots for us. It looks like I actually forgot to move the fs require over to our external file. Let's go ahead and do that, and then we should be good to go.
Now, we've got that fixed. Let's go run our tests again. Everything passes. If we go look at our folder here, you can see we've got both of our images captured, and we can see each state of our test.