This page is about running a test locally.
Running basic tests
- In Visual Studio, set the test as the startup project and press F5 to build and run it.
- Or, in a cmd window or shell, run the test binary of interest directly, e.g. ./out/Debug/base_unittests
- Hint: Build the appropriate "Run *tests" target in Xcode (for example, "Run unit_tests" in chrome.xcodeproj).
- Test failures will show up in the build results just like compile errors would.
- If you want to see the raw gtest output, press the little square lines-of-text button below the build results to show the build transcript.
If you want to run the tests from Terminal (for instance, to filter sub-tests), do this instead:
- Build the appropriate "*tests" target in Xcode (for example, "unit_tests" in chrome.xcodeproj). This just builds the tests, without running them.
- In a shell, run the appropriate test binary in xcodebuild/Debug (for example, ./xcodebuild/Debug/unit_tests).
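Putting those steps together, a minimal Terminal sketch (the project path and -configuration value here are assumptions; adjust them to your checkout):

```
# Build only the unit_tests target, without running it
xcodebuild -project chrome/chrome.xcodeproj -target unit_tests -configuration Debug

# Run the resulting binary directly
./xcodebuild/Debug/unit_tests
```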
Running a particular subtest
The above test executables are built with gtest, so they accept command line arguments to filter which sub-tests to run.
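For instance, --gtest_filter selects sub-tests by name pattern (the test names below are made up for illustration):

```
# Run a single test case
./out/Debug/base_unittests --gtest_filter=FooTest.Bar

# Run every test case in the FooTest fixture
./out/Debug/base_unittests --gtest_filter='FooTest.*'

# Run everything except test cases matching a pattern
./out/Debug/base_unittests --gtest_filter='-*Flaky*'
```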
Running tests faster
run_test_cases.py helps you get results faster:
- It takes the test case list, sorts it, then shuffles it with a deterministic pseudo-random algorithm. Each test case is a task item.
- The rationale is to reduce the likelihood of all the test cases in a single test fixture running simultaneously, which was observed to be the worst case for both performance and stability.
- It runs each test case individually, i.e. it shells out to the executable with a single test case per task item, resulting in a call like: out/Release/unit_tests --gtest_filter=Super.Test
- It does slow down overall test execution time on slower machines; eventually it will group a few test cases back together per process, but that is not implemented yet.
- On a 32-core workstation, it runs much faster. :D
- Each test case has an individual default timeout of 120 seconds, so the worst case of a single test case hanging causes a minimum total execution time of 120 * 3 = 360 seconds (one run plus two retries). The worst case is not cumulative because task items run in parallel.
- It retries each failing test case up to twice, for a total of up to 3 executions:
  - The first retry is queued roughly 40 task items later, so the test case is not retried immediately. This helps with test case interference.
  - The second retry is done serially at the very end. If a test case has to be retried twice, it is broken and should be fixed.
- If a test case fails all 3 attempts, it is failing, and the build sheriff must disable it.
- If more than 10% of the test cases expected to run fail, test execution is aborted early. This includes retries, so a test executable where every test case fails exactly once and passes on the second try will still be aborted early. This means partial loss of test case coverage. Heck, if 10% of the test cases fail, you have bigger problems.
- It generates a .json file listing all the test case results, including retries.
- It's designed to have terse output.
- It can itself be sharded, e.g. you can call "python tools/swarming_client/googletest/run_test_cases.py --index 0 --shards 100 out/Release/unit_tests" to run 1% of the test cases in parallel.
- It is effectively designed to exhibit flakiness more, not less, but to cope with flakiness better.
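For reference, a typical invocation looks like this (assuming, as in the sharded example quoted above, that the test executable is the only required argument):

```
# Run all of unit_tests' test cases in parallel, one process per test case
python tools/swarming_client/googletest/run_test_cases.py out/Release/unit_tests

# Run shard 0 of 100, i.e. ~1% of the test cases
python tools/swarming_client/googletest/run_test_cases.py --index 0 --shards 100 out/Release/unit_tests
```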
Layout Tests
Blink has a large suite of tests that typically verify that a page is laid out properly. We use them to verify much of the code that runs within a Chromium renderer.
To run these tests, build the appropriate target, then run webkit/tools/layout_tests/run_webkit_tests.sh --debug.
More information about running or fixing layout tests can be found on the Layout Tests page.
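For instance (the ability to pass test paths to run a subset is an assumption based on the harness's usual interface; see the Layout Tests page for the exact syntax):

```
# Run the whole suite against the Debug build
webkit/tools/layout_tests/run_webkit_tests.sh --debug

# Run only the tests under a given directory (hypothetical path)
webkit/tools/layout_tests/run_webkit_tests.sh --debug fast/css
```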
Non-UI Unit Tests
Many unit tests that do not involve UI build into the unit_tests executable. There are many tests in here, and it takes a long time to run them all, so if you only need to run one or two of the tests it is recommended that you use --gtest_filter=<your test name>. To add a new test, you will generally find a similar test and clone it.
UI Unit tests
Many of the UI tests are in the browser_tests executable. These tests typically need to show UI and will sometimes steal focus, so you cannot use your computer for normal work while they run (otherwise your mousing around might cause the tests to fail). Running them all takes a long time, so be prepared to have your machine tied up for several hours if you run them without --gtest_filter. To add a new test, you will generally find a similar test and clone it. There is more information on browser tests here.
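Because of the focus stealing, it is usually best to run only the test you care about (the test name below is hypothetical):

```
# Run a single browser test instead of the whole suite
./out/Debug/browser_tests --gtest_filter=SomeBrowserTest.TestName
```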
Get dumps for Chromium crashes on Windows
Before running the tests, make sure to run crash_service.exe. We use this program to intercept crashes in Chromium and write a crash dump in the Crash Reports folder under the User Data directory of your Chromium profile.
If you also want crash_service.exe to intercept crashes for your normal Google Chrome or Chromium build, add the flag.
You can also use the flag --enable-dcheck to get assertion errors in release mode.
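A minimal sketch of the Windows flow (the output directories are assumptions based on the paths used above):

```
REM Start the crash handler first and leave it running
out\Debug\crash_service.exe

REM Then run the tests in another cmd window; dumps land in the Crash Reports folder
out\Debug\unit_tests.exe

REM Optional: surface assertion errors in a release build
out\Release\chrome.exe --enable-dcheck
```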