Thanks for joining the LoadImpact user community! We're really happy to have you and your team on board.
This page guides you through the simple steps to get started with load testing in your new LoadImpact.com account. It condenses information from our more extensive, in-depth Knowledge Base.
To start, this page assumes that you have created an account with us. If not, you can register for our free tier by simply clicking here.
First of all: Load Impact offers a Software as a Service (SaaS) for load testing your back-end infrastructure. To be clear, the full user experience divides into front-end performance and how the back end (server side) serves that front end with requested content. Load Impact tests only the components involved in back-end delivery performance (network, servers, load balancers, load performance of server-side code, etc.).
Analyzing results of your first Load Test
In addition to a registered account, this introduction assumes you have at least one URL-generated test result in your account. If not, create one here! A URL test is great for establishing a baseline and perfect for testing a landing page on your selected system under test (SUT).
Interpreting test results and looking for artifacts is where a performance tester starts. If you're new to load testing, here are some best practices:
- Initially, ignore the absolute load-time values for Virtual Users (VUs). Instead, look for trends in the graph as load increases over the course of the test:
- Value increasing = a sign of degrading performance
- Value flat = a sign of stable performance
- Another recommendation for finding these and other correlations more easily is to install our open-source server monitoring agent or pull data from our New Relic APM integration. This gives you a better understanding of how your SUT performs under load and helps you find correlations. For example: do response times rise when CPU utilization reaches a certain level?
Next step: Customize your test scenarios
The easiest and fastest way to customize your tests is to record simulated user behavior with the Load Impact Chrome Plug-In. Install the plug-in, record the user behavior you'd like to repeat in your test by browsing/interacting like a user would, save it, and add it to a test scenario. (Complete details here.) Recorded tests have an efficiency benefit and can form the base for a more advanced testing scenario where you adjust and script the test flows to fit your specific needs.
The Chrome extension saves you a lot of time because it quickly creates hundreds of lines of code for your test script. This script is ready to use immediately if you are testing general browsing behavior or don't need to change any data for each test.
What about when you need to:
- Test login/authentication?
- Submit data with each simulated user session?
- Handle dynamic tokens (CSRF, VIEWSTATE, nonces, etc.)?
- Fix a test that isn't quite working?
Addressing each of these is easier than you think: customize your test scenarios with a combination of our Lua-based scripting language and the Load Impact API.
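Load Impact scenarios themselves are written in Lua, but the pattern for handling dynamic tokens is language-agnostic: fetch the page, extract the token from the response body, and send it back with the simulated user's form submission. Here's a minimal sketch of that pattern in Python; the function name and the `csrf_token` field are illustrative assumptions, not part of the Load Impact API, and real pages will vary.

```python
import re

def extract_csrf_token(html: str) -> str:
    """Pull a CSRF token out of a hidden form field.

    Illustrative only: the field name and markup are assumptions,
    and in a Load Impact scenario the same extraction would be
    done in Lua on the HTTP response body.
    """
    match = re.search(r'name="csrf_token"\s+value="([^"]+)"', html)
    if match is None:
        raise ValueError("no CSRF token found in page")
    return match.group(1)

# Stand-in for the login page a virtual user would GET first.
login_page = '<form><input type="hidden" name="csrf_token" value="abc123"></form>'
token = extract_csrf_token(login_page)

# The token is then included in the simulated user's POST data.
form_data = {"username": "testuser", "password": "secret", "csrf_token": token}
print(form_data["csrf_token"])  # abc123
```

The same extract-then-resend flow covers VIEWSTATE values and nonces; only the regular expression changes.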
What kind of tests should you run?
We recommend you run tests in this order: Baseline, Stress, then Load tests (using the TEAR cycle described below).
Your first test (even after a major update) should be a Baseline test: understand performance under optimal conditions with a low number of simulated virtual users (VUs).
Your second test should be a Stress test. Iterate as you step up to different user levels, fixing any performance issues along the way. (Find more example configurations here.)
Then, continually verify performance with load tests. As with every test, we recommend you TEAR into your load tests. TEAR is short for: Test -> Evaluate -> Adjust -> Repeat. To make this routine, you may want to schedule your load tests using the LoadImpact scheduler or integrate with your favorite CI/CD tool.
Repeat the whole process as needed, particularly with major releases, using the TEAR iteration.
How often should you test?
Run performance tests almost as frequently as you build (daily, weekly, etc.) to avoid costly discoveries in the production environment. Every change, be it code or infrastructure, can affect performance. Schedule and script your tests to catch performance issues as far upstream as possible. Not using a CI tool? Here's a best practice from one of our users: schedule small tests to run automatically on a regular basis, monitor for changes in this baseline performance, and again keep the TEAR approach.
What’s the right size for your performance tests?
We recommend this formula when testing webapps/websites:
- (Hourly Sessions x Average Session Duration in seconds) / 3,600 = minimum concurrent users to test.
You can get those numbers from Google Analytics. Read all the details in our how-to guide.
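The formula above is a one-liner in any language. Here's a worked example in Python; the traffic numbers are made up for illustration, so substitute your own figures from Google Analytics.

```python
def minimum_concurrent_users(hourly_sessions: int, avg_session_seconds: float) -> float:
    """(Hourly Sessions x Average Session Duration in seconds) / 3,600."""
    return hourly_sessions * avg_session_seconds / 3600

# Example (made-up numbers): 10,000 hourly sessions averaging 90 seconds each.
print(minimum_concurrent_users(10_000, 90))  # 250.0
```

In other words, a site with those traffic numbers should be tested with at least 250 concurrent VUs; treat the result as a floor, not a target, and step above it in your stress tests.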
Testing an API endpoint from the cloud service?
- For most APIs, we've found each Virtual User can stably make about 25 requests/second. We've written an in-depth article with a code sample you can use here: How to Load Test an API
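Using that 25 requests/second-per-VU figure, you can estimate how many VUs to configure for a target request rate. This is a rough sketch under the stated assumption; the real per-VU rate depends on your API's response times, so treat the result as a starting point.

```python
import math

# Rough per-VU throughput figure from the text above; an assumption,
# not a guarantee -- slower APIs will sustain fewer requests per VU.
REQUESTS_PER_SECOND_PER_VU = 25

def vus_for_target_rate(target_rps: int) -> int:
    """Round up so the test can actually reach the target rate."""
    return math.ceil(target_rps / REQUESTS_PER_SECOND_PER_VU)

print(vus_for_target_rate(1000))  # 40 VUs to aim for 1,000 req/s
```

If your API's measured response times are higher than those in our tests, lower the per-VU constant accordingly and re-run the estimate.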
What if something doesn’t work as you expected it to?
Our support team is here for you: find answers in our Knowledge Base, enter your question in our in-app chat, or simply send us an email at email@example.com. We want to make sure you get the most out of our platform, so please do not hesitate to reach out!
If you are happy with our Free tier, it's all there for you: 5 tests per month, each running up to 100 VUs for 5 minutes. For more tests, virtual users (VUs), requests per second, team members, and more, consider one of our paid subscriptions. All paid plans include priority support, and annual plans include unlimited tests at each subscription level (and a nifty discount).
Questions? Consult the knowledge base or contact our support team.
Thanks again for joining the LoadImpact user community. We're happy to have you on board!