In this codelab, you will build a client-side Autotest test to check the disk and cache throughput of a ChromiumOS device. You will learn how to:
1. Set up the environment needed for autotest
2. Run and edit a test
3. Write a new test and control file
4. Check the results of the test
In the process of doing so you will also learn a little about the autotest framework.
Autotest is an open source project designed for testing the Linux kernel. Before starting this codelab you might benefit from scrolling through some upstream documentation on autotest client tests. Autotest is responsible for managing the state of multiple client devices as a distributed system, by integrating a web interface, a database, servers and the clients themselves. Since this codelab is about client tests, what follows is a short description of how autotest runs a specific test on one client.
Autotest looks through all directories in client/tests and client/site_tests for simple Python files whose names begin with ‘control’. These files contain a list of variables and a call to job.run_test. The control variables tell autotest when to schedule the test, and the call to run_test tells autotest how to run it. Each test instance is part of a job. Autotest creates this job object and forks a child process to exec its control file.
Note: the exec mentioned above is the Python keyword, not the os.exec* family.
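To make the shape of a control file concrete, here is a minimal sketch. The control variables shown are the standard ones; the test name and values are illustrative, not part of this codelab's test:

```python
# A control file is plain Python that the job object execs.
# Variable values here are illustrative.
AUTHOR = 'you@example.com'
NAME = 'example_Test'
TIME = 'SHORT'
TEST_TYPE = 'client'
DOC = """
One-line description of what the test verifies.
"""

# `job` is injected by the framework when it execs this file;
# run_test names the test directory to execute.
job.run_test('example_Test')
```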
Tests reside in a couple of key locations in your checkout, and map to similar locations on the DUT (Device Under Test). Understanding the layout of these directories might give you some perspective:
In this codelab, we will:
First, get the autotest source:
a. If you Got the Code, you already have autotest.
b. If you do not wish to sync the entire source and reimage a device, you can run tests in a VM.
If the cros_start_vm script fails, you need to enable virtualization on your workstation. Check for /dev/kvm or run ‘sudo kvm-ok’ (you might have to ‘sudo apt-get install cpu-checker’ first). It will report either that /dev/kvm exists and KVM acceleration can be used, or that /dev/kvm doesn’t exist and KVM acceleration can NOT be used. In the latter case, hit Esc on boot, go to ‘System Security:’, and turn on virtualization. More information about running tests on a VM can be found here.
Once you have autotest, there are two ways to run tests: using your machine as a server, or directly on the client DUT. Running directly on the device is faster, but requires invoking the test from your server at least once.
(Note: run_remote_tests is going through a rewrite and will disappear in the near future)
1. enter chroot: cros_checkout_directory$ cros_sdk
2. Invoke run_remote_tests, to run login_LoginSuccess on a vm with local autotest bits:
(where ~/trunk/src/scripts is the default chroot base dir).
One can also invoke run_remote_tests.sh with the following options:
You have to use run_remote_tests at least once so it copies over the test and its dependencies before attempting this; if you haven’t, /usr/local/autotest may not exist on the device.
Once you're on the client device:
The fastest way to edit a test is directly on the client. If you find the text editor on a Chromebook non-intuitive, edit the file locally and use a copy tool like rcp/scp to send it to the DUT.
1. Add a print statement to the login_LoginSuccess test you just ran
2. rsync it into /usr/local/autotest/tests on the client
3. run it by invoking autotest_client, as described in the section on Running Tests Directly on the client. Note a print statement won’t show up when the test is run via run_remote_tests.
A word of caution: copy-pasting from Google Docs has been known to convert consecutive whitespace characters into unicode characters, which will break your control file. Using CTRL-C + CTRL-V is safer than using middle-click pasting on Linux.
Our aim is to create a test which does the following:
1. Create a directory in client/site_tests, named kernel_HdParmBasic.
2. Create a control file. A bare-minimum control file for the hdparm test:
To which you can add the necessary control variables as described in the autotest best practices. job.run_test can take any named arguments, and the appropriate ones will be cherry-picked and passed on to the test. Place this control file in client/site_tests/kernel_HdParmBasic/control.
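A bare-minimum control file might look like this sketch (named_arg is an illustrative keyword argument, not one the test requires):

```python
# Bare-minimum control file: the framework injects `job` when it
# execs this file, so only the run_test call is strictly required.
job.run_test('kernel_HdParmBasic', named_arg='example')
```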
3. Create a test file:
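A skeleton for the test file might look like the following sketch, assuming the standard test.test interface (the body of run_once is filled in later in this codelab):

```python
# kernel_HdParmBasic.py -- skeleton of a client test.
from autotest_lib.client.bin import test


class kernel_HdParmBasic(test.test):
    # Every test declares a version.
    version = 1

    def initialize(self):
        # Optional: set up any state the test needs.
        pass

    def run_once(self, named_arg='example'):
        # Required: the test implementation. Only run_once receives
        # named_arg, cherry-picked from the job.run_test call.
        pass

    def cleanup(self):
        # Optional: undo anything initialize or run_once changed.
        pass
```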
At a bare minimum the test needs a run_once method, which should contain the implementation of the test; it also needs to inherit from test.test. Most tests also need initialize and cleanup methods.
Notice how only run_once takes the argument named_arg, which was passed in by the job.run_test method. You can pass arguments to initialize and cleanup this way too. You can find examples of initialize and cleanup methods in helper classes, like cros_ui_test. Place this test file in client/site_tests/kernel_HdParmBasic/kernel_HdParmBasic.py.
cros_workon --board=lumpy start autotest-tests
./run_remote_tests.sh --remote=<ip of DUT> kernel_HdParmBasic/control$ --use_emerged
If you’d like more perspective you might benefit from consulting the troubleshooting doc.
The results folder contains many logs. To analyze client test logging messages, find kernel_HdParmBasic.(DEBUG, INFO, ERROR), depending on which logging function you used. Note: logging message priorities escalate, and debug < info < warning < error. If you want to see all logging messages, just look in the DEBUG logs.
Client test logs should be in: /tmp/run_remote_tests.<RESULTS_DIR_HASH>/kernel_HdParmBasic/kernel_HdParmBasic/debug
where you will have to replace ‘/tmp/run_remote_tests.<RESULTS_DIR_HASH>’ with anything you might have specified through the --results_dir_root option.
In the DEBUG logs you should see messages like:
Note that print messages will not show up in these logs since we redirect stdout. If you’ve already performed a run_remote_tests once, you can directly invoke your test on a client, as described in the previous section. Two things to note when using this approach:
a. print messages do show up
b. logging messages are also available under autotest/results/default/
You can import any autotest client helper module with the line
from autotest_lib.client.<dir> import <module>
You might also benefit from reading how the framework makes autotest_lib available for you.
kernel_HdParmBasic needs test.test, so it needs to import test from client/bin.
Looking back at our initial test plan, it also needs to:
1. run hdparm -T <disk>
This implies running things on the command line; the modules to look at are base/site utils.
However, common_lib’s ‘utils.py’ conveniently gives us both:
from autotest_lib.client.bin import test, utils
2. Search output for timing numbers.
3. Report this as a result.
import logging, re
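For example, the timing line that hdparm prints can be parsed with a regular expression along these lines (the sample output line below is an assumption; check the actual hdparm output on your device):

```python
import re

# A line of the sort `hdparm -T <disk>` prints (format assumed):
sample = ' Timing cached reads:   8646 MB in  2.00 seconds = 4346.76 MB/sec'

# Pull out the MB/sec figure as a float.
match = re.search(r'=\s*([\d.]+)\s*MB/sec', sample)
throughput = float(match.group(1))
print(throughput)
```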
If your test manages any state on the DUT it might need initialization and cleanup. In our case the subprocess handles its own cleanup, if any. Putting together all we’ve talked about, our run_once method looks like:
from autotest_lib.client.bin import test, utils
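Putting the pieces together, the whole test might look like the following sketch. The hdparm output format, the exact regex, and the device path /dev/sda are assumptions to adapt to your DUT; write_perf_keyval is how a client test reports performance numbers:

```python
import logging, re

from autotest_lib.client.bin import test, utils


class kernel_HdParmBasic(test.test):
    version = 1

    def run_once(self, named_arg='example'):
        # Cache throughput: hdparm -T times cached reads.
        # (/dev/sda is an assumption; use the DUT's actual disk.)
        output = utils.system_output('hdparm -T /dev/sda')
        match = re.search(r'=\s*([\d.]+)\s*MB/sec', output)
        cache_throughput = float(match.group(1))

        # Disk throughput: hdparm -t times buffered disk reads.
        output = utils.system_output('hdparm -t /dev/sda')
        match = re.search(r'=\s*([\d.]+)\s*MB/sec', output)
        disk_throughput = float(match.group(1))

        logging.info('cache: %f MB/sec, disk: %f MB/sec',
                     cache_throughput, disk_throughput)

        # Report performance keyvals rather than just logging them.
        self.write_perf_keyval({'cache_throughput': cache_throughput,
                                'disk_throughput': disk_throughput})
```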
Note the use of performance keyvals instead of plain logging statements. The keyvals are written to /usr/local/autotest/results/default/kernel_HdParmBasic/results/keyval on the client and will be reported on your console when run through run_remote_tests:
kernel_HdParmBasic/kernel_HdParmBasic cache_throughput 4346.76
kernel_HdParmBasic/kernel_HdParmBasic disk_throughput 144.28