The index is the top-most schema level for your data. In this lesson, you will learn how to create an index. You will be introduced to shards and replicas and how decisions about them impact future performance, scalability, and resiliency. You will also learn what the Elasticsearch cluster health statuses “green”, “yellow”, and “red” mean, not only in terms of cluster health but also how they impact your search results.
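As a rough sketch of the shard and replica decision, here is a hypothetical index-creation body (the index name and the specific counts are examples, not recommendations): the primary shard count is fixed at creation time, while the replica count can be changed later.

```javascript
// Hypothetical settings body for PUT /simpsons (index name is an example).
// number_of_shards cannot be changed after the index is created;
// number_of_replicas can be adjusted at any time.
const indexSettings = {
  settings: {
    number_of_shards: 3,   // primary shards: fixed at creation time
    number_of_replicas: 1  // copies of each primary: adjustable later
  }
};

// Total shard copies the cluster will try to allocate.
// If replicas cannot be allocated (e.g. a single-node cluster),
// the index health shows "yellow" rather than "green".
const totalShards =
  indexSettings.settings.number_of_shards *
  (1 + indexSettings.settings.number_of_replicas);
console.log(totalShards); // 6
```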
Query DSL is based on JSON and allows for Leaf and Compound query clauses. The Query DSL is what you will want to use to write your production queries. It makes your queries more flexible, precise, easier to read and easier to debug.
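For a feel of the leaf/compound distinction, here is a sketch of a bool compound query wrapping two leaf clauses (the field names are examples for illustration):

```javascript
// A compound "bool" query combining two leaf clauses:
// - match: a full text leaf clause (scored)
// - range: a structured leaf clause, used here as a filter (not scored)
const query = {
  query: {
    bool: {
      must: [{ match: { title: 'homer' } }],
      filter: [{ range: { season: { gte: 3 } } }]
    }
  }
};

console.log(JSON.stringify(query, null, 2));
```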
Elasticsearch provides a powerful API over HTTP for accessing its features. In this lesson, you will be introduced to the API and learn how to use it to get data from Elasticsearch using the browser and the popular command-line tool curl.
We will see how adding query string parameters can include or exclude the properties we want from our data.
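A minimal sketch of building such a URL from Node.js. The _source_includes / _source_excludes parameter names follow recent Elasticsearch versions (older releases used _source_include / _source_exclude); the index, document ID, and field names are examples.

```javascript
// Build a GET URL that asks Elasticsearch to return only selected
// _source fields, using query string parameters.
const params = new URLSearchParams({
  _source_includes: 'title,air_date', // fields to keep
  _source_excludes: 'script'          // fields to drop
});

const url = `http://localhost:9200/simpsons/_doc/1?${params.toString()}`;
console.log(url);
```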
Elasticsearch provides several methods for using its powerful search features. In this lesson, you will be introduced to the _search endpoint, the data points returned in results, how to set a query timeout (and what a timeout doesn’t do), and how to search specific indices and types.
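A sketch of a request body you might POST to the _search endpoint. Note the caveat the lesson covers: "timeout" bounds how long each shard spends collecting results before returning what it has; it does not cancel the underlying work.

```javascript
// A search body for POST /<index>/_search.
// "timeout" limits per-shard collection time; partial results are
// returned and flagged as timed_out — the query itself is not killed.
const searchBody = {
  timeout: '500ms',
  size: 5,
  query: { match_all: {} }
};

console.log(JSON.stringify(searchBody));
```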
Creating an alias can make searching easier by providing a friendlier name for the index. It can also be useful to create a filtered subset of the data, providing a better search experience for the client. You will learn how to do both in this lesson.
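A sketch of both uses via the _aliases actions body (index, alias, and field names are illustrative): a friendlier name for an index, plus a filtered alias exposing only a subset of the data.

```javascript
// Actions body for POST /_aliases.
// The first action adds a friendly name; the second adds a filtered
// alias so clients searching "simpsons-recent" only see newer seasons.
const aliasActions = {
  actions: [
    { add: { index: 'simpsons-v2', alias: 'simpsons' } },
    {
      add: {
        index: 'simpsons-v2',
        alias: 'simpsons-recent',
        filter: { range: { season: { gte: 20 } } }
      }
    }
  ]
};

console.log(JSON.stringify(aliasActions, null, 2));
```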
Types can best be thought of as a class for your index, and a mapping is the definition for that type. In this lesson, you will learn how to create a mapping type for an index based on sample data, verify it, then store data in the index using that type. You will also learn how Elasticsearch can create mappings for you automatically (known as Dynamic Mapping), and what a mapping explosion is and how to avoid it.
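A sketch of what a mapping definition looks like (field names are examples; recent Elasticsearch versions removed mapping types, so on the versions this course targets the properties would sit under a type name instead):

```javascript
// A mapping sketch: "text" fields are analyzed for full text search,
// "keyword" fields are matched exactly, and dates/numbers get typed
// handling for range queries and aggregations.
const mapping = {
  mappings: {
    properties: {
      title:    { type: 'text' },
      air_date: { type: 'date' },
      rating:   { type: 'float' },
      tags:     { type: 'keyword' }
    }
  }
};

console.log(JSON.stringify(mapping, null, 2));
```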
Elasticsearch has a rich set of APIs for adding data to an index but for loading massive amounts of data, you’ll find the bulk interface much more efficient and performant. In this lesson you will learn how to format your data for bulk loading, add data via the bulk endpoint with curl, and add data via the bulk endpoint using the elasticsearch npm client.
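The bulk format itself is simple to build: one action metadata line followed by one source line per document, newline-delimited, with a trailing newline. A sketch (index name and documents are examples; older Elasticsearch versions also require a _type in the action line):

```javascript
// Turn an array of docs into the newline-delimited bulk format:
// {action}\n{source}\n per document, ending with a final newline.
const docs = [
  { id: 1, title: 'Simpsons Roasting on an Open Fire' },
  { id: 2, title: 'Bart the Genius' }
];

function toBulkPayload(indexName, docs) {
  return docs
    .map(({ id, ...source }) =>
      JSON.stringify({ index: { _index: indexName, _id: id } }) + '\n' +
      JSON.stringify(source) + '\n')
    .join('');
}

const payload = toBulkPayload('simpsons', docs);
console.log(payload); // ready to POST to /_bulk
```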
Elasticsearch has an in-depth set of APIs for accessing the health and performance of the cluster. In this lesson, you will learn how to access them using the _cat API endpoint, designed for console use. You will also learn some of the key metrics to monitor to identify issues and performance problems with your Elasticsearch cluster before it impacts your application and clients, including how to tell if your Elasticsearch cluster isn’t returning results based on all of your data.
Closing and opening indices in Elasticsearch lets you free up cluster resources when an index isn’t needed, so you don’t have to scale or grow your cluster just to support unused indices. In this lesson, you will see how easy it is to do both and learn to do so on your own indices.
Using the _cat API is great for console-based, ad hoc queries of your cluster. To get even more detailed information on the health and performance of your cluster, or for programmatic access, the _cluster and _nodes endpoints may be your new best friends. There is a tremendous amount of information available about your Elasticsearch cluster via these APIs. This lesson doesn’t cover them all exhaustively, but instead introduces you to the endpoints and the data returned, arming you with the skills you need to go deeper as needed using the Elasticsearch docs.
This is more of an analysis of how the JVM heap will kill your Elasticsearch cluster than a “how-to” lesson. If you aren’t familiar with Java apps and the JVM, these 2½ minutes can save you much pain, suffering, and self-loathing by showing you how Elasticsearch utilizes the JVM heap for performance and what to monitor so you know when it’s affecting you.
Business requirements change, new information is discovered, or usage patterns differ from the expected use. In any case, sooner or later you will find the need to reindex your data to accommodate these changes. In this lesson you will learn how to leverage the information stored in Elasticsearch and the bulk API to reindex data from one index to another.
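The core transform step can be sketched without a live cluster: take hits from a search (or scroll) on the source index, in the standard response shape with _id and _source, and build the bulk payload that writes them into the new index. Index names and documents here are examples.

```javascript
// Map search/scroll hits from the old index into a bulk payload
// targeting the new index. Each hit keeps its _id; _source becomes
// the document body.
function hitsToBulk(hits, targetIndex) {
  return hits
    .map(hit =>
      JSON.stringify({ index: { _index: targetIndex, _id: hit._id } }) + '\n' +
      JSON.stringify(hit._source) + '\n')
    .join('');
}

const hits = [
  { _id: 'abc', _source: { title: 'Bart the Genius' } },
  { _id: 'def', _source: { title: 'Moaning Lisa' } }
];
const body = hitsToBulk(hits, 'simpsons-v2');
console.log(body); // ready to POST to /_bulk
```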
Aggregation queries can be thought of as similar to GROUP BY in SQL or the Aggregation Framework in MongoDB, but much more powerful. In this lesson, you will learn how to create aggregation queries to group documents and perform rollups and calculations based on the results. Aggregation documentation can be found here.
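As a sketch of the GROUP BY analogy, here is a terms aggregation with a nested avg metric, roughly "GROUP BY season, AVG(rating)" (field names are examples; size: 0 skips returning individual documents, since only the rollup is wanted):

```javascript
// A terms aggregation bucketing documents by season, with a nested
// avg sub-aggregation computed per bucket.
const aggQuery = {
  size: 0, // we only want the aggregation results, not the hits
  aggs: {
    by_season: {
      terms: { field: 'season' },
      aggs: {
        avg_rating: { avg: { field: 'imdb_rating' } }
      }
    }
  }
};

console.log(JSON.stringify(aggQuery, null, 2));
```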
Using Full Text Search is where the search capabilities of Elasticsearch really start to set it apart from databases. In this lesson, you will learn how to perform full text searches against your data, interpret the results, and understand how the relevance score impacts your search results. If you are following the examples in this course, be sure to import the Simpsons episode scripts for this lesson by running node utils/episode_scripts.js from the git repo directory.
By default, search results are limited to the top 10 results. In this lesson, you will learn how to change the pagination size, paginate through results, and you will learn about the performance implications of pagination on Elasticsearch.
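The from/size mechanics can be sketched as a tiny helper mapping a page number to a document offset. This also hints at the performance caveat: for deep pages, every shard must collect from + size documents before the results are merged.

```javascript
// Map a 1-based page number to Elasticsearch from/size parameters.
function pageParams(page, size = 10) {
  return { from: (page - 1) * size, size };
}

console.log(pageParams(1));     // { from: 0, size: 10 } — the default top 10
console.log(pageParams(3, 20)); // { from: 40, size: 20 }
```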
In this lesson, you will learn how to add new documents to the Elasticsearch data store using HTTP methods and the Elasticsearch API. You will also learn how Elasticsearch uniquely identifies each document and what happens when you attempt to create a document that already exists.
Elasticsearch provides a full-featured client for Node.js, available via npm. In this lesson, you will learn how to install the client and use it to retrieve data from your Elasticsearch server.
We will walk through retrieving data through callbacks as well as using promises.
In this lesson, you will learn how to verify your API server is returning HTTP 400 responses when clients submit incorrect data. Returning HTTP 400 ensures that your clients are notified of the incorrect usage. Testing for them ensures your API returns errors instead of incorrect responses when supplied with incorrect data.
In this lesson, we will use Chai's request method to test our Node application's API responses.
By the end of this lesson, you will know how to:
- install the prerequisites to use Mocha and Chai in your application
- test for HTTP status response codes
- test for a string of text on a page
- test for a JSON response and validate the properties of the object
- write tests that not only verify the response of your application, but the behavior as well
Promises are rapidly overtaking callbacks in popularity. In this lesson, I show you how to rewrite a callback using ES6 Promises. After getting a handle on the basics, we'll convert an error-first style callback to use the Resolve and Reject handlers built into Promises. If you aren't familiar with Promises, this lesson can be a great starting point by learning how to duplicate the behavior of a callback with ES6 Promises.
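The pattern can be sketched like this: an error-first callback's error maps to reject and its result maps to resolve (loadUser is a made-up example function standing in for any callback-style API):

```javascript
// A made-up error-first callback API.
function loadUser(id, callback) {
  if (id < 1) return callback(new Error('bad id'));
  callback(null, { id, name: 'homer' });
}

// Wrap it in an ES6 Promise: err -> reject, result -> resolve.
function loadUserAsync(id) {
  return new Promise((resolve, reject) => {
    loadUser(id, (err, user) => (err ? reject(err) : resolve(user)));
  });
}

loadUserAsync(1).then(user => console.log(user.name));
```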
As a beginning node.js user, you will often see the tilde (~) or caret (^) in front of the version number for dependencies managed by your package.json file. In this lesson, you will learn what each means, when to use it, the implications of each and a brief introduction to Semantic Versioning.
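A toy illustration of what the two operators accept for versions at or above 1.0.0: ~1.2.3 allows patch updates only, while ^1.2.3 also allows minor updates. (Real semver ranges have more rules, notably for 0.x versions; in practice use npm's semver package.)

```javascript
// Toy range matcher for "~maj.min.pat" and "^maj.min.pat", assuming
// versions >= 1.0.0. Not a substitute for the real semver package.
function satisfies(version, range) {
  const op = range[0];
  const [maj, min, pat] = range.slice(1).split('.').map(Number);
  const [vMaj, vMin, vPat] = version.split('.').map(Number);
  if (op === '~') {
    // tilde: lock major and minor, allow newer patches
    return vMaj === maj && vMin === min && vPat >= pat;
  }
  if (op === '^') {
    // caret: lock major, allow newer minor/patch
    return vMaj === maj && (vMin > min || (vMin === min && vPat >= pat));
  }
  return version === range; // exact pin
}

console.log(satisfies('1.2.9', '~1.2.3')); // true  (patch update ok)
console.log(satisfies('1.3.0', '~1.2.3')); // false (minor update blocked)
console.log(satisfies('1.3.0', '^1.2.3')); // true  (minor update ok)
```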
Using promises can be confusing. In this lesson, I show you how to create promises to chain functions together in a specified order. You'll also learn how to pass the return value of promises as the input parameters of promises further down the chain. All examples in this lesson utilize native ES6 style promises, which are fully supported by recent versions of node.js without any dependencies.
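A minimal sketch of the chaining idea: returning a value (or another promise) from .then() passes it to the next step in the chain. The functions here are trivial stand-ins for real async work.

```javascript
// Two promise-returning steps (stand-ins for real async operations).
const double = n => Promise.resolve(n * 2);
const addTen = n => Promise.resolve(n + 10);

// Each step's resolved value becomes the next step's input.
double(5)
  .then(result => addTen(result)) // result is 10 here
  .then(result => console.log(result)); // logs 20
```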
In this lesson, I introduce a memory leak into our node.js application and show you how to identify it using the Formidable nodejs-dashboard. Once identified, we will add garbage collection stats to the error console allowing us to correlate garbage collection with the increased memory usage.
Swagger is a project used to describe RESTful APIs using the OpenAPI Specification. It allows you to document your API so consumers understand the endpoints, parameters, and responses. In this lesson, I'll show you how to install the swagger command line tool, create a new API project using swagger, and introduce you to the swagger API editor.
There are a lot of great monitoring tools available for Node.js. It is also incredibly easy to build monitoring into your application. In this lesson, we will add precision monitoring to the function calls in the Todo API server, log it to the console, and persist those results to Elasticsearch where they can later be reviewed, graphed, and analyzed.
The last step before deploying your new API server into production is to load test it. Load testing allows you to better understand the performance characteristics as well as forecast load and capacity. Fortunately, load testing is incredibly easy and I'll show you exactly how to create a load test plan, test the response from the API server to ensure it is responding correctly, and scale your test up to simulate as many users as needed.
In this lesson, I will show you how to update a simple, skeleton React application to work with the Todo API server built with Swagger. Using existing components, you will learn how to display all Todo items, update an existing Todo item, and add a new Todo item.
This lesson shows you how to create the functions defined in the Todo API specification to create new Todo items when received in HTTP POST methods. You will also see the errors you will get when CORS is not properly configured and how to resolve them by installing the NPM CORS package.
APIs created with Swagger have a built-in mock function, allowing you to mock responses from your API prior to writing the backend code to make it functional. In addition to learning how to enable this feature, I will show you how to write your own functions to enhance the mock responses returned when using the Swagger mock feature.
In this lesson, you will learn how to define multiple parameters in the API specification to identify the ID of the Todo item being updated in the URL, as well as the contents for the updated Todo item in the body of the request.
Schema definitions allow you to define the format and types of data sent and received by your API. This allows consumers of your API to be confident in using your API. In this lesson, you will learn how to create a schema definition for the Todo API server.
Command line arguments are often used to modify the behavior of an application or specify needed parameters for operation. In this lesson, you will learn how to access the command line arguments passed to your node.js application as well as different strategies for evaluating and accessing them.
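The raw arguments live in process.argv: element 0 is the node binary, element 1 is the script path, and the rest are the user's arguments. A sketch of one evaluation strategy, a tiny parser for --key=value style flags (the flag names are examples):

```javascript
// Parse --key=value and bare --flag arguments from an argv array.
// Skips the first two entries (node binary and script path).
function parseArgs(argv) {
  const args = {};
  for (const arg of argv.slice(2)) {
    const [key, value] = arg.split('=');
    if (key.startsWith('--')) args[key.slice(2)] = value ?? true;
  }
  return args;
}

// In a real app you would call parseArgs(process.argv); e.g. for
// `node app.js --port=3000 --verbose`:
const parsed = parseArgs(['node', 'app.js', '--port=3000', '--verbose']);
console.log(parsed); // { port: '3000', verbose: true }
```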
In this lesson, you will learn how to use the Formidable nodejs-dashboard event loop delay to identify expensive operations in your code. An example application with excessive synchronous file system write operations is used as well as the provided jmeter configuration to simulate load.
Writing great ES6 style Promises for Node.js is only half the battle. Your great modules must include tests as well to ensure future iterations don't break them. In this lesson, I show you a simple ES6 Promise in Node.js, then walk you through creating tests in Mocha using chai and chai-as-promised to test both resolve and reject methods.
Getting Node.js to run inside of Docker is relatively simple. Getting Node.js to run inside of Docker while using recommended best practices takes some planning. In this lesson, I’ll show you how to get a simple Node.js web server running in Docker on your local workstation while adhering to best practices.
Node.js has deprecated the original Buffer constructor in favor of safer alternatives such as Buffer.from() and Buffer.alloc(). In this lesson, you will learn what that means for your existing code and how to update your code to use the new Buffer APIs.
One of the biggest stumbling blocks I see when pushing code to production servers is unidentified dependencies: something installed locally on my workstation that doesn’t exist on the production servers. In this lesson, I’ll show you how to use npm scripts to deploy your node.js application to a newly-provisioned server via Vagrant for validation before going to production.