In the past, you've needed to either write a package.json script or use the
node_modules/.bin directory to access binaries installed in your node_modules. npx enables you to access the binaries much more easily and try out options before you settle on what you want to add to your package.json scripts.
As node projects evolve, new features are added all the time. This results in different errors or results as you're migrating from one version to another. npx allows you to try out any version you want by simply adding the version number to the package you're using. This lesson covers using the TypeScript compiler across different versions as async function support was added.
Learn how to set up an Nginx proxy server that sits in front of a Node.js app. You can use a proxy to control the flow of frontend requests hitting your app, as well as to achieve better performance and scalability. We'll create a sample Node.js app and configure a default Nginx configuration file to proxy web requests.
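A minimal proxy configuration along the lines of what the lesson builds might look like this (the file path and port are assumptions, not the lesson's exact values):

```nginx
# /etc/nginx/conf.d/node-app.conf -- hypothetical path; assumes the
# Node.js app listens on port 3000
server {
    listen 80;

    location / {
        # Forward all requests to the Node.js app
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        # Pass the original host and client IP through to Node
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        # Allow connection upgrades (e.g. WebSockets)
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```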
In this lesson we will look at the filters that can be used to retrieve and manipulate the data stored in LoopBack. We will use Postman to interact with the API. Using URL parameters we will apply the various filters LoopBack has to offer.
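As a sketch of what those URL parameters look like, here is a small snippet (the model and property names are hypothetical) that builds a LoopBack filter query string you could paste into Postman:

```javascript
// Building a LoopBack filter query string -- an illustration, not the
// lesson's exact code. Model/property names are hypothetical.
const filter = {
  where: { price: { lt: 100 } },       // only products cheaper than 100
  order: 'price DESC',                 // sort by price, highest first
  limit: 5,                            // return at most 5 records
  fields: { name: true, price: true }, // only include these properties
};

// LoopBack accepts the whole filter object as URL-encoded JSON:
const url =
  '/api/Products?filter=' + encodeURIComponent(JSON.stringify(filter));

console.log(url);
```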
In this lesson we will look at how to deploy our LoopBack API project using now. We will install the now client, create an account and deploy our app. Using the now secrets feature we will store our database connection URL, which we get by creating a free MongoDB Atlas database. In order to simplify subsequent deployments we will add a now key to our package.json, where we use the env parameters to configure our deployments. We will add some npm scripts so we can deploy our app in the desired way using npm run now.
In this lesson we will create a free account on mongodb.com. After the account has been created we will create a group in MongoDB Atlas and in that group we create a free cluster. In order to create the cluster we need to provide a username and password. We create a user called admin and have the site generate the password for us. Once the cluster is created we look at where we can find the connection string.
In this lesson we will create a dynamic datasource in LoopBack. If the API is started with the environment variable MONGODB_URL, it will use this URL and the loopback-connector-mongodb package to store the data in MongoDB.
In the course we will use a local MongoDB instance. If you don't have MongoDB running locally you can always create a free MongoDB Atlas database.
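A dynamic datasource of this kind could be sketched as follows. The file name follows LoopBack's datasources.local.js convention, but treat the details as an illustration rather than the lesson's exact code:

```javascript
// server/datasources.local.js -- a sketch of an environment-driven
// datasource; structure follows LoopBack conventions, details assumed.
function buildDataSources(env) {
  if (env.MONGODB_URL) {
    return {
      db: {
        name: 'db',
        connector: 'mongodb', // provided by loopback-connector-mongodb
        url: env.MONGODB_URL,
      },
    };
  }
  // Fall back to the in-memory connector for local development.
  return { db: { name: 'db', connector: 'memory' } };
}

module.exports = buildDataSources(process.env);
```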
In this lesson we look at extending the functionality of LoopBack models by defining a remote method.
A remote method is a method on a model exposed over a custom REST endpoint.
Using the lb remote-method command we will create the remote method meta-data in product.json. We will verify that this metadata was created and that the new REST endpoint was added. Then we will create the actual method in product.js and enhance it so that the API will return an error if we try to buy a negative number of products.
In this lesson we will learn how to create a boot script.
We will use a boot script to create or update a predefined admin user, and give that user an Access Token. That way we don't have to log in to the API each time we want to use it as an authenticated user.
In this lesson we will learn how to protect our API using ACL’s.
ACL stands for Access Control List, and its function is to control permissions on resources in the API. It does this by keeping a mapping between an API resource and a principal. An API resource is an API endpoint, like a remote method or a whole model. Principals in LoopBack are users or applications that can be grouped in a role.
In order to get the API production ready we will protect a selection of our REST endpoints with ACL’s. We will look at how we can obtain an access token and how we can use that in our requests.
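For illustration, ACL entries in a model's JSON file take roughly this shape; `$everyone` and `$authenticated` are built-in roles, but these particular rules are an assumption, not the lesson's exact configuration:

```json
{
  "name": "Product",
  "acls": [
    {
      "accessType": "*",
      "principalType": "ROLE",
      "principalId": "$everyone",
      "permission": "DENY"
    },
    {
      "accessType": "READ",
      "principalType": "ROLE",
      "principalId": "$everyone",
      "permission": "ALLOW"
    },
    {
      "accessType": "WRITE",
      "principalType": "ROLE",
      "principalId": "$authenticated",
      "permission": "ALLOW"
    }
  ]
}
```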
Finally we will write some unit tests to make sure that the ACL does what we expect. We enhance our test setup by exporting request, which is provided by the supertest library. In our tests we use request to verify that our endpoints return the correct HTTP status code.
In this lesson we will learn how to add operation hooks to our models. In the Product model we will create a before save observer that will check if the category we want to add the product to exists.
In our Category model we will create a
before delete observer to prevent categories from being deleted when they have products.
We will create a unit test to verify that both of these operation hooks work as expected.
In this lesson we will learn how to add tests to the project to make sure our API behaves as we expect. After installing mocha and chai as devDependencies we will add test and test:watch scripts to our package.json. We will create a separate datasource that we will use when running the tests by copying the datasource configuration to datasources.test.json and prefixing our test command with NODE_ENV=test. We will verify that our tests run against an empty datasource. We’ll also add various tests to verify that our remote method and validation on the Product model behave as expected.
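The resulting scripts section might look something like this (the exact commands are assumptions based on the description above):

```json
{
  "scripts": {
    "test": "NODE_ENV=test mocha test/",
    "test:watch": "npm run test -- --watch"
  }
}
```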
We can add validation rules to our models to make sure the data we store in our API is how we want it.
In this lesson we will add validation rules to the Product model.
We will make sure the product name has a minimal length using the validatesLengthOf rule and that it is unique using the validatesUniquenessOf rule. For the price property we will add a custom validation to make sure that the value entered is not a negative integer. Additionally we will show how to do an async validation using validateAsync. This can for instance be useful if you want the validation to depend on a value in the database or a remote system.
A relation in LoopBack is a way to associate the data from multiple models with each other. LoopBack supports various types of relations each with a different use case.
In this lesson we will create a second model called Category and create a relation between the Category and the Product models. To do this we will first add a property to the Product model called
categoryId. This will allow us to store for each Product the category it belongs to. We then move on to create the actual relationships.
Our first relationship defines that a
Category hasMany Products and the second one that a
Product belongsTo a Category.
After defining these relations we will use the API Explorer to see how we can interact with this related data.
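In a model's JSON file, the Category side of this relation could be declared roughly like this (a sketch following LoopBack's relation syntax; only the relevant keys are shown):

```json
{
  "name": "Category",
  "relations": {
    "products": {
      "type": "hasMany",
      "model": "Product",
      "foreignKey": "categoryId"
    }
  }
}
```

The Product side mirrors this with a relation of type belongsTo pointing at Category, using the same categoryId foreign key.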
In this lesson we will learn to create a development mode for our server. We do this by installing nodemon as a dev dependency and creating an npm script called 'dev'. This script will execute the command nodemon server/server.js --watch common --watch server. We can execute this command by running npm run dev. When running in development mode the server will be automatically restarted when changes are detected in the common and server directories.
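In package.json this amounts to a single entry in the scripts section:

```json
{
  "scripts": {
    "dev": "nodemon server/server.js --watch common --watch server"
  }
}
```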
In this lesson we'll show how to set up a .babelrc file with presets and plugins, then create npm scripts that use babel-node and babel. With babel-preset-env we'll show how to target specific versions of node and how to use babel plugins, while not transpiling features (like async/await) that are already supported by node natively.
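A .babelrc targeting the current node version with babel-preset-env could look like this (the exact targets value is an assumption):

```json
{
  "presets": [
    ["env", {
      "targets": { "node": "current" }
    }]
  ]
}
```

With "node": "current", the preset only transpiles syntax the running node version does not already support, which is how natively supported features like async/await are left alone.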
In this lesson, we’ll learn how to host a simple bot with Heroku. We'll learn how to create a new Heroku application and how to deploy our code to Heroku using git. We’ll learn how to change our project from a web app to a worker app and how to create a Procfile. We'll also learn how to view the output from our Heroku app on the command line.
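For reference, a Procfile for a worker app is a single line naming the process type and the command to run; the script name here is a placeholder:

```
worker: node bot.js
```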
Make a Twitter Audio Bot That Composes a Song Based on a Tweet - In the final bot lesson, we'll compose a ditty based on a tweet, save it as an audio file, and post it to Twitter. Because Twitter only supports uploading audio in video form, we'll learn how to create a video from the MIDI file and post it to Twitter. This is a longer video since we are going over how to create this pipeline from scratch.
We'll use RiTa to tokenize the text of a tweet and find the parts of speech:
We'll use jsmidgen to compose a tune in a MIDI format:
We'll also use FFMPEG, which will help us create a video from our audio and a picture:
And we'll use TiMidity to convert our MIDI file to a Wav file:
You can use any image in place of the black image used in this video.
In this lesson, we’ll learn how to retrieve and tweet data from Google Spreadsheets. We'll use Tabletop.js to make this easier. More information on Tabletop can be found at https://github.com/jsoma/tabletop.
We’ll learn how your bot can get its list of followers, follow people, and look up friendships. We'll use Twit's GET method to get our followers at the followers/list endpoint, to get the users we follow at the friends/ids and friends/list endpoints, and to look up our friendships with the friendships/lookup endpoint. We'll also use Twit's POST method to follow someone at the friendships/create endpoint, and to send messages by posting to the direct_messages/new endpoint.
With this bot, we’ll find the number of faces in a photo that is tweeted at us, and respond back with what emotions the faces are expressing, using the Google Cloud Vision API.
The Google Cloud Vision API is worth exploring, and you'll need to create an account before this lesson:
Tracery is a brilliant tool to more easily create text grammars and structure. In this lesson, we’ll create a bot that tweets out tiny stories.
We'll learn what a grammar is in this context, and how to create one with Tracery. We'll first create a simple story with character, action, place, and object variables, and learn how to add modifiers. Then, we'll create a more complex one, and learn how to set variables that we want to be consistent throughout the story, such as pronouns.
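To make the idea of a grammar concrete, here is a minimal sketch of how Tracery-style expansion works. This is not the Tracery library itself, just an illustration: symbols like #character# are replaced by one of that rule's options, recursively.

```javascript
// A minimal Tracery-style grammar expander -- NOT the Tracery
// library, just the idea. `pick` chooses an option index (random in
// a real bot; injectable here so the sketch is deterministic).
function expand(grammar, text, pick) {
  return text.replace(/#(\w+)#/g, function(_, symbol) {
    const options = grammar[symbol];
    const choice = options[pick(options.length)];
    return expand(grammar, choice, pick); // options may contain symbols
  });
}

const grammar = {
  origin: 'The #character# #action# in the #place#.',
  character: ['wizard', 'fox'],
  action: ['slept', 'sang'],
  place: ['forest', 'castle'],
};

// Always picking the first option makes the story deterministic.
const first = () => 0;
console.log(expand(grammar, grammar.origin, first));
// → "The wizard slept in the forest."
```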
In this lesson, we’ll create multiple functions to request, download, and save photos and data from NASA's API, and then have our bot upload these photos to Twitter and post them along with their descriptions. We'll also learn how to tweet videos using a video from NASA’s space archives.
In this lesson, we’ll give our bot a large input of past text that we’ve written (essays, other tweets, etc.) and, using Markov chains, have it create tweets that sound like ourselves!
For more information about Markov chains, see Markov Chains explained visually:
The RiTa library is a powerful library for working with text and text generation. See the reference here:
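The core of the technique can be shown in a few lines. This is not the lesson's RiTa-based code, just the underlying idea: map each word to the words that have followed it in the input, then walk that map to generate new text.

```javascript
// A minimal sketch of building a Markov chain from text.
function buildChain(text) {
  const words = text.split(/\s+/);
  const chain = {};
  for (let i = 0; i < words.length - 1; i++) {
    // Record that words[i + 1] can follow words[i].
    (chain[words[i]] = chain[words[i]] || []).push(words[i + 1]);
  }
  return chain;
}

const chain = buildChain('the cat sat on the mat');
console.log(chain.the); // [ 'cat', 'mat' ]

// To generate a tweet, start from a word and repeatedly pick one of
// its recorded successors (at random, in a real bot).
```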
We’ll learn how to search tweets with the Twitter Search API, using Twit's GET method to get the search/tweets endpoint. We'll use the query and count parameters to get the search term(s) and number of tweets that we want. We'll learn how to get exact phrases, multiple words, one of several words, emoticons, hashtags, photos/videos, urls, and how to remove words from our results. We'll learn how to implement the safe filter and how to filter by media, website, or date. We'll learn how to get recent results, popular results, results by location, and results by language.
The search API ranks results by relevance, not completeness. If you want all the tweets matching a search term, you should use the stream API (which we'll go over in the next lesson).
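A sketch of what the search parameters look like; the operators come from Twitter's search syntax, and the search term itself is an example. With Twit you would pass an object like this to T.get('search/tweets', params, callback):

```javascript
// Example search parameters for the search/tweets endpoint.
const params = {
  // exact phrase, required word, excluded word, only tweets with links
  q: '"pizza rolls" recipe -burnt filter:links',
  count: 20,              // number of tweets to return
  result_type: 'recent',  // or 'popular' / 'mixed'
  lang: 'en',             // filter by language
};

// The parameters end up URL-encoded in the request query string:
const query = Object.keys(params)
  .map(k => k + '=' + encodeURIComponent(params[k]))
  .join('&');
console.log(query);
```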
We’ll learn the basics of interacting with tweets, including retweeting, deleting, favoriting, and replying to tweets. We'll get our home timeline by using Twit's GET method to access the
statuses/home_timeline endpoint, including the count parameter, which lets us get back a certain number of tweets. We'll also pass it a callback. We'll learn how to cycle through the data (the tweets) we get back and see what information is included. We'll learn how to retweet statuses by posting to
statuses/retweet and including the tweet id. We can unretweet by posting to
statuses/unretweet with the same tweet id. We'll also learn how to like a tweet by posting to
favorites/create with a tweet id, and unlike a tweet by posting to
favorites/destroy with a tweet id. We'll also learn how to reply to a tweet by posting to
statuses/update, with a status that includes the handle of the user we're replying to and
in_reply_to_status_id parameter, which is the id of the tweet we're replying to. We'll learn how to delete a tweet by posting to
statuses/destroy with the tweet's id.
Great improvements and optimizations can be made to the output of bundled code. Prepack provides the ability to optimize code at build-time, producing run-time optimizations. In this lesson, we will look at configuring Prepack to use Webpack with the Prepack Webpack Plugin so we can enjoy extremely concise and optimized build scripts within our Webpack project.
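A webpack configuration wiring up the plugin could be sketched like this; the require('...').default shape matches the plugin's CommonJS usage, but verify the details against your installed version:

```javascript
// webpack.config.js -- a sketch; plugin options left at defaults.
const PrepackWebpackPlugin = require('prepack-webpack-plugin').default;

module.exports = {
  entry: './src/index.js',
  output: { filename: 'bundle.js' },
  plugins: [
    // Runs Prepack over the bundled output at build time.
    new PrepackWebpackPlugin({})
  ]
};
```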
When developing responsive websites, you will constantly be resizing your browser to make sure your site is properly responding to all of the resizes. You can script this behavior by using
Nightmare to leverage Electron, and it will handle all the resizing for you. Nightmare can then also take screenshots and save them so you can make sure the site matches your designs.
In this lesson, you will learn how to verify your API server is returning HTTP 400 responses when clients submit incorrect data. Returning HTTP 400 ensures that your clients are notified of the incorrect usage. Testing for them ensures your API returns errors instead of incorrect responses when supplied with incorrect data.
In this lesson, we will use Chai's request method to test our Node application's API responses.
By the end of this lesson, you will know how to:
- install the prerequisites to use mocha and chai in your application
- test for HTTP status response codes
- test for a string of text on a page
- test for a json response and validate the properties of the object
- write tests that not only verify the response of your application, but the behavior as well
Sooner or later, we get tired of having to manually restart our Node.js application. We will install and configure Nodemon to help us with that headache.
Remember to install it as a development dependency with yarn add --dev nodemon so that it stays out of your production dependencies.
In this lesson you will learn how to persist the data from the memory connector. As the name suggests, the memory connector stores the data in memory. This means that if you restart the server, the data is gone. In development mode it can be useful to store this data in a file, so it gets persisted between server restarts, and it can be easily inspected.
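The memory connector supports a file option for exactly this; a datasources configuration along these lines (the file path is an assumption) persists the data between restarts:

```json
{
  "db": {
    "name": "db",
    "connector": "memory",
    "file": "server/db.json"
  }
}
```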
In this lesson you will learn what a LoopBack model is, and you will create a Product model using the LoopBack CLI. The Product model will be based on the built-in PersistedModel, which gives it basic functionality like Create, Read, Update and Delete, and some more. Using the API Explorer you can interact with the new model to store, retrieve, edit and delete product data.
We take advantage of the new Node.js features like
async/await from ES2017 and the awesome
yarn package-manager to create modern JSON APIs with Koa v2. Make sure you have at least Node.js 7.6.x installed.
The ctx (context) variable encapsulates a Request and Response object. They are similar to the req and res objects we already know from the Express.js framework, but more expressive and easier to understand thanks to their getters, setters and some shortcuts. Koa also parses the response content and sets the Content-Type header based on the type of its body property. Finally, we can manually set the response headers and status code.
The HTTP client from the video is wuzz.
LoopBack is a framework built on top of Express for creating APIs. It allows you to create end-to-end REST APIs that can access data from many data sources such as MongoDB, PostgreSQL, MySQL or other REST APIs.
In this lesson you will learn how to install loopback-cli and create a new LoopBack API project. After creating the basic LoopBack project through the CLI, running the server will give us access to the project and API Explorer urls. The user model will be available to us because user authentication was enabled.
First and foremost, DO NOT use Google Sheets for any production app. It's fine for fun side projects for your family or friends, but not much more. With those disclaimers in mind, Google sheets can be complicated to set up if you don't follow precise configuration steps. This lesson walks you through setting up Google sheets credentials, authentication, getting/appending values, then finally wrapping the sheets api with Node.js and Express to use in a simple project.
As a beginning node.js user, you will often see the tilde (~) or caret (^) in front of the version number for dependencies managed by your package.json file. In this lesson, you will learn what each means, when to use it, the implications of each and a brief introduction to Semantic Versioning.
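As a quick reference, this is how the two prefixes widen an exact version into a range (the package and version numbers are illustrative):

```
"express": "4.17.1"    exactly 4.17.1, nothing else
"express": "~4.17.1"   >=4.17.1 <4.18.0   (patch updates only)
"express": "^4.17.1"   >=4.17.1 <5.0.0    (minor and patch updates)
```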
Using promises can be confusing. In this lesson, I show you how to create promises to chain functions together in a specified order. You'll also learn how to pass the return value of promises as the input parameters of promises further down the chain. All examples in this lesson utilize native ES6 style promises, which are fully supported by recent versions of node.js without any dependencies.
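The chaining pattern can be shown in a few lines; this is a minimal sketch, not the lesson's exact example:

```javascript
// Each .then receives the value resolved by the previous step, so the
// functions run in a specified order with values flowing down the chain.
function double(n) {
  return new Promise(resolve => resolve(n * 2));
}

function addTen(n) {
  return new Promise(resolve => resolve(n + 10));
}

double(5)              // resolves 10
  .then(addTen)        // receives 10, resolves 20
  .then(double)        // receives 20, resolves 40
  .then(result => console.log(result)); // → 40
```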
In this lesson, I introduce a memory leak into our node.js application and show you how to identify it using the Formidable nodejs-dashboard. Once identified, we will add garbage collection stats to the error console allowing us to correlate garbage collection with the increased memory usage.
Swagger is a project used to describe restful APIs using the OpenAPI Specification. It allows you to document your API so consumers understand the endpoints, parameters, and responses. In this lesson, I'll show you how to install the swagger command line tool, create a new API project using swagger, and introduce you to the swagger API editor.
There are a lot of great monitoring tools available for Node.js. It is also incredibly easy to build monitoring into your application. In this lesson, we will add precision monitoring to the function calls in the Todo API server, log it to the console, and persist those results to Elasticsearch where they can later be reviewed, graphed, and analyzed.
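The measurement half of this can be sketched with process.hrtime; the Elasticsearch shipping from the lesson is omitted, and the wrapped handler is a hypothetical stand-in for a Todo API function:

```javascript
// Wrap a function so each call logs a high-resolution duration.
function timed(name, fn) {
  return function(...args) {
    const start = process.hrtime();
    const result = fn(...args);
    const [sec, nano] = process.hrtime(start); // elapsed [seconds, nanoseconds]
    const ms = sec * 1e3 + nano / 1e6;
    console.log(`${name} took ${ms.toFixed(3)} ms`);
    return result;
  };
}

// Hypothetical handler standing in for a Todo API function.
const listTodos = timed('listTodos', function() {
  return ['buy milk', 'write tests'];
});

console.log(listTodos());
```

In a real setup the log line would instead be a document sent to Elasticsearch so the timings can be graphed and analyzed later.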
The last step before deploying your new API server into production is to load test it. Load testing allows you to better understand the performance characteristics as well as forecast load and capacity. Fortunately, load testing is incredibly easy and I'll show you exactly how to create a load test plan, test the response from the API server to ensure it is responding correctly, and scale your test up to simulate as many users as needed.
In this lesson, I will show you how to update a simple, skeleton React application to work with the Todo API server built with Swagger. Using existing components, you will learn how to display all Todo items, update an existing Todo item, and add a new Todo item.