"Build and test at scale". This is focusing on the test part.
Drastically reduce whole test cycle time by scaling test sharding across multiple slaves seamlessly.
It does so by integrating Swarm within the Try Server and eventually the continuous integration masters.
The Chromium waterfall currently uses completely manual test sharding. A "builder" slave compiles and creates a .zip of the build output. Then "testers" download the zip, check out the sources, unpack the zip inside the source checkout and run a few tests.
Each new "tester" configuration is created to run a subset of the tests so that overall, most of the meta-shards take roughly the same amount of time. All this configuration is done manually and is error-prone.
For the Try Server, there is currently no test sharding at all, since it would be relatively complicated to set up inside buildbot.
So overall, while we can continue throwing more and faster hardware at the problem, the fundamental issue remains: as tests get larger and slower, the end-to-end test latency will continue to increase, slowing down developer productivity.
This is a natural extension of the Chromium Try Server (initiated and written by maruel@ in 2008), which scaled up through the years, and of the Commit Queue (initiated and written by maruel@ in 2011).
Before the Try Server, team members were not testing on platforms other than the one they were developing on, causing constant breakage. The Try Server helped reach 50 commits/day.
Before the Commit Queue, the overhead of manually triggering the proper tests on all the important configurations was becoming increasingly cumbersome. This could be automated, and it was. The Commit Queue helped sustain 100 commits/day.
But these are not sufficient to scale the team velocity past 150 commits per day. Big design flaws remain in the way the team is working. To scale the Chromium team's productivity, significant changes in the infrastructure need to happen; in particular, the latency of testing across platforms needs to be drastically reduced. That requires getting the test result in O(1) time, independent of:
To achieve this, sharding a test must have a constant cost. This is what the Swarm integration is about.
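As a back-of-the-envelope model (illustrative numbers, not measurements): with S shards and a flat per-shard setup cost, the test phase scales as total_time / S, so end-to-end latency stays roughly constant as the test suite and the shard count grow together:

    # Hedged latency model: constant per-shard setup cost plus an evenly
    # divided test load. All numbers are illustrative.
    def end_to_end_seconds(total_test_seconds, shards, per_shard_overhead=30):
        return per_shard_overhead + total_test_seconds / float(shards)

    print(end_to_end_seconds(2400, 1))   # 2430.0: ~40 minutes unsharded.
    print(end_to_end_seconds(2400, 40))  # 90.0: O(1)-ish once sharding is cheap.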
Using Swarm works around Buildbot's limitations and permits automatic, effectively unlimited sharding. For example, it permits sharding the test cases of a large smoke test across multiple slaves to reduce the latency of running it. Buildbot, on the other hand, requires manual configuration to shard tests and is not very efficient at large scale.
By reusing the Isolated testing effort, we're going to be able to shard efficiently across the Swarm slaves. By integrating the Swarm infrastructure inside buildbot, we'll work around the manual sharding that buildbot requires.
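Concretely, gtest-based Chromium tests can already be split at the test-case level through gtest's built-in sharding environment variables; a minimal local sketch (Swarm runs each shard on a different slave instead of a local process):

    import os
    import subprocess

    def run_sharded(test_executable, total_shards):
        # gtest's GTEST_TOTAL_SHARDS / GTEST_SHARD_INDEX: each process
        # deterministically runs 1/total_shards of the test cases.
        procs = []
        for index in range(total_shards):
            env = os.environ.copy()
            env['GTEST_TOTAL_SHARDS'] = str(total_shards)
            env['GTEST_SHARD_INDEX'] = str(index)
            procs.append(subprocess.Popen([test_executable], env=env))
        # The test passes only if every shard passes.
        return all(proc.wait() == 0 for proc in procs)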
To recapitulate the isolated testing design doc, the
You can find the overview presentation slides at https://docs.google.com/a/google.com/presentation/d/18DS0Za8s9O9hCei2I2KTHUPXFV39HfAe5hbZRiUzNG8/view. Sorry, Googlers-only.
The infrastructure reuses much of the current Chromium open source infrastructure. The different parts are:
The workflow goes as follows:
So there are really two layers of control involved. The first is the buildbot master, which controls the overall "build": syncing the sources, compiling, requesting the tests to be run on Swarm and asking it to report success or failure. The second layer is the Swarm server itself, which "micro-distributes" test shards. Each test shard is actually a subset of the test cases for a single unit test executable. All the unit tests are run concurrently. So for example, for a Try Job that requests
The whole project is written in Python.
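A hypothetical Python sketch of these two layers, where the trigger/collect calls stand in for the actual Swarm API and are not real function names:

    def buildbot_test_step(swarm, isolated_hash, test_name, num_shards):
        # Layer 1: the buildbot step only hands the work to Swarm...
        task_ids = [swarm.trigger(test_name, isolated_hash,
                                  shard_index=i, total_shards=num_shards)
                    for i in range(num_shards)]
        # ...and blocks until Swarm, the second layer, has micro-distributed
        # each shard to a slave and reported its result.
        results = [swarm.collect(task_id) for task_id in task_ids]
        return all(result.success for result in results)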
The isolated testing infrastructure moves around a large number of bits. This will likely put a lot of pressure on network I/O at the edge. Depending on how bad it is in practice, which we will measure as implementation continues, we'll decide between the three implementations:
This project is primarily aimed at reducing the overall latency from "asking for the green light signal for a CL" to getting the signal. The CL can be "not committed yet" or "just committed"; the former is handled by the Try Server, the latter by the Continuous Integration servers. The latency is reduced by enabling a higher degree of parallel shard execution and by removing the constant costs of syncing the sources and zipping the test executables, both of which are extremely slow, on the order of minutes.
Other latencies include:
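The zipping and syncing costs go away because the Swarm slaves fetch files by content hash instead; a sketch of the content-addressed push, where the store interface is illustrative rather than the actual protocol:

    import hashlib

    def push(store, path):
        # Files are addressed by the hash of their content, so an unchanged
        # file is never re-uploaded or re-downloaded.
        with open(path, 'rb') as f:
            content = f.read()
        digest = hashlib.sha1(content).hexdigest()
        if not store.contains(digest):  # Illustrative store interface.
            store.store(digest, content)
        return digest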
Python-based AppEngine servers are not super scalable. We enable threadsafe mode on the Python 2.7 runtime to improve performance. Developing it in Golang was considered, but there's little enthusiasm on the team for learning another programming language.
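For reference, concurrent request handling on the python27 runtime is a one-line app.yaml setting:

    # app.yaml fragment; 'threadsafe: true' lets a single instance serve
    # multiple requests concurrently.
    runtime: python27
    api_version: 1
    threadsafe: true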
The ability to shard efficiently. For example, test cases taking several tens of seconds become laggards when gathering the test results. This is worked around by over-sharding and using
There are multiple single points of failure:
There is currently no redundancy for the buildbot infrastructure; if a VM dies, it is simply replaced right away by a sysadmin. The Swarm slaves are intrinsically redundant. The hashtable data store isn't redundant or reliable; it can be rebuilt from the sources if needed, but if it fails, it will block the infrastructure.
Since the whole infrastructure is visible from the internet, like this design doc, proper DACLs need to be used. Both the Swarm master and the Isolate datastore require valid GAIA accounts. The credential verification is completely managed by AppEngine.
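A minimal sketch of a handler relying on that, using the standard AppEngine users API (the handler itself is hypothetical):

    import webapp2
    from google.appengine.api import users

    class IsolateHandler(webapp2.RequestHandler):
        def get(self):
            user = users.get_current_user()
            if not user:
                # AppEngine manages the GAIA login flow entirely.
                self.redirect(users.create_login_url(self.request.uri))
                return
            self.response.write('Authenticated as %s' % user.email())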