Sending Data to the Performance Dashboard

The Chrome Performance Dashboard can accept data from anywhere, as long as it receives the data in the right format and the sender’s IP address is whitelisted.



Data Format (version 1)


The endpoint that accepts new points (https://chromeperf.appspot.com/add_point) accepts HTTP POST requests. The request should include one parameter, called "data", whose value is JSON containing all of the data being uploaded.


Example

{
  "master": "master.chromium.perf",
  "bot": "linux-release",
  "point_id": 123456,
  "versions": {
    "chrome": "12.3.45.6",
    "blink": "234567",
    ...
  },
  "supplemental": {
    "os": "mavericks",
    "gpu_manufacturer": "intel",
    "physical_memory": "2G",
    ...
  },
  "chart_data": {... as output by Telemetry; see below ...}
}
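As a sketch of how a sender might use this endpoint, the snippet below builds a minimal version-1 payload (with placeholder values taken from the example above) and encodes it as the single "data" form parameter. The `payload` contents and the `send` helper are illustrative, not dashboard code.

```python
import json
import urllib.parse
import urllib.request

# Hypothetical minimal payload in the version-1 format described above.
payload = {
    "master": "master.chromium.perf",
    "bot": "linux-release",
    "point_id": 123456,
    "versions": {"chrome": "12.3.45.6"},
    "supplemental": {"os": "mavericks"},
    "chart_data": {
        "format_version": "1.0",
        "benchmark_name": "page_cycler.typical_25",
        "charts": {},
    },
}

# The endpoint expects one POST form parameter named "data" whose value
# is the JSON encoding of the payload.
body = urllib.parse.urlencode({"data": json.dumps(payload)}).encode("utf-8")

def send(url="https://chromeperf.appspot.com/add_point"):
    """POST the encoded payload; the sender's IP must be whitelisted."""
    return urllib.request.urlopen(urllib.request.Request(url, data=body))
```

Calling `send()` would only succeed from a whitelisted IP (see the whitelisting section below).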


"chart_data":

{
  "format_version": "1.0",
  "benchmark_name": "page_cycler.typical_25",
  "charts": {
    "warm_times": {
      "traces": {
        "http://www.google.com/": {
          "type": "list_of_scalar_values",
          "values": [9, 9, 8, 9]
        },
        "http://www.yahoo.com/": {
          "type": "list_of_scalar_values",
          "values": [4, 5, 4, 4]
        },
        "overall": {
          "type": "list_of_scalar_values",
          "values": [13, 14, 12, 13],
          "file": "gs://..."
        },
        "trace": {
          "value_type": "web_component",
          "web_component_name": "telemetry.web_components.multi_value_web_component.MultiValueWebComponent",
          "data_to_bind": {
            "value_web_component_name": "telemetry.web_components.trace_viewer.TraceViewer",
            "values": [...trace1..., ...trace2...]
          }
        }
      }
    },
    "cold_times": {
      ...
    }
  }
}



Definitions


master: [string] Buildbot master name (or equivalent; used as the top-level categorization of the data).

bot: [string] Buildbot builder name, or another string that represents the platform type.

format_version: [string] Tells the dashboard how to process the structure.

versions: [dict of string to string] Maps repository name to revision or version.

supplemental: [dict of string to string] Unstructured key-value pairs that may be displayed on the dashboard. Used to describe bot hardware, OS, Chrome feature status, etc.

charts: [dict of string to dict] Maps chart name strings to their data dicts.

units: [string] Units to display on the dashboard.

traces: [dict of string to dict] Maps trace name strings to their trace dicts.

type: [string] Enum of known value types, such as "scalar", "list_of_scalar_values" and "histogram", which tells the dashboard how to interpret the rest of the fields in the trace dict. For instance, these types are associated with a "value", "values", or "histogram" field, respectively.

improvement_direction: [string, optional] Enum of possible improvement directions ("bigger_is_better", "smaller_is_better").

summary: A trace name denoting the trace in a chart that does not correspond to a specific page.
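The relationship between a trace's "type" and its data field can be sketched as a small check. `validate_trace` and the mapping below are hypothetical helpers, not dashboard code; the field name assumed for "histogram" follows the value/values pattern described above.

```python
# Each known value type implies which field carries the data, per the
# definitions above ("value", "values", or "histogram").
REQUIRED_FIELD_BY_TYPE = {
    "scalar": "value",
    "list_of_scalar_values": "values",
    "histogram": "histogram",  # assumed field name
}

def validate_trace(trace):
    """Return True if the trace dict carries the field its type implies."""
    field = REQUIRED_FIELD_BY_TYPE.get(trace.get("type"))
    return field is not None and field in trace
```

For example, `validate_trace({"type": "list_of_scalar_values", "values": [9, 9, 8, 9]})` passes, while a "scalar" trace with no "value" field does not.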

Data Format (version 0), aka the old format


In the format described below, the value of "data" should be a JSON encoding of a list of points to add. Each point is a map of property names to values for that point.

Minimal example (only required fields)

[
  {
    "master": "SenderType",
    "bot": "platform-type",
    "test": "my_test_suite/chart_name/trace_name",
    "revision": 1234,
    "value": 18.5
  }
]


Required fields are:

  • master, bot and test: These three fields in combination specify a particular test path. The master and bot are supposed to be the Buildbot master name and slave perf_id, respectively, but if the tests aren’t being run by Buildbot, these can be any descriptive strings which specify the test data origin (note master and bot names can’t contain slashes, and none of these can contain asterisks).

  • revision: This is a number that’s used to index the data point. It doesn't actually have to be a "revision". By default, this will be assumed to be a Chromium SVN revision, but it could be anything, as long as it’s an integer, and as long as it’s monotonically increasing for any given graph. Note that you can choose to show something else on the X-axis of your graphs by using supplemental properties, as described below.

  • value: The Y-value for this point.
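The required fields above can be put together as follows; the master/bot/test strings are the placeholders from the minimal example, and `make_point` is just an illustrative helper.

```python
import json

# Sketch of the minimal version-0 payload: a JSON list of point dicts.
def make_point(revision, value):
    return {
        "master": "SenderType",   # no slashes or asterisks allowed
        "bot": "platform-type",
        "test": "my_test_suite/chart_name/trace_name",
        "revision": revision,     # integer, monotonically increasing per graph
        "value": value,           # the Y-value for this point
    }

# Two successive points for the same graph; this string becomes the
# value of the "data" POST parameter.
data = json.dumps([make_point(1234, 18.5), make_point(1235, 18.2)])
```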



Larger example (including some optional fields)


[
  {
    "master": "ChromiumPerf",
    "bot": "linux-release",
    "test": "sunspider/string-unpack-code/ref",
    "revision": 33241,
    "value": "18.5",
    "error": "0.5",
    "units": "ms",
    "masterid": "master.chromium.perf",
    "buildername": "Linux Builder",
    "buildnumber": 75,
    "supplemental_columns": {
      "r_webkit_rev": "167808",
      "a_default_rev": "r_webkit_rev"
    }
  },
  {
    "master": "ChromiumPerf",
    "bot": "linux-release",
    "test": "sunspider/string-unpack-code",
    "revision": 33241,
    "value": "18.4",
    "error": "0.489897948557",
    "units": "ms",
    "masterid": "master.chromium.perf",
    "buildername": "Linux Builder",
    "buildnumber": 75,
    "supplemental_columns": {
      "r_webkit_rev": "167808",
      "a_default_rev": "r_webkit_rev"
    }
  }
]


Optional fields are:

  • error: If the "value" given is a mean, you can also send an associated standard error or standard deviation value here. This will be represented on the graphs with a shaded region.

  • units: The (Y-axis) units for this point. If higher_is_better is not given, the units also determine which change direction is considered an improvement.

  • higher_is_better: Boolean. You can use this field to explicitly define improvement direction.

  • masterid, buildername, buildnumber: Some information from the Buildbot builder. This is used to construct a link to the Buildbot logs for this point. Note that this “masterid” is actually the “master name”, e.g. chromium.perf.

  • supplemental_columns: A dictionary of other data associated with this point. You can specify extra data values, strings, links, and additional revision or version numbers; the type of each property is expressed with a prefix. Supplemental properties that start with “d_” are extra data values. Those that start with “r_” are additional revision or version numbers, such as dot-separated version numbers or git hashes. A list of revision types is maintained in the dashboard code. Those that start with “a_” are other annotations, including the following:

    • a_default_rev: This is the name of another supplemental property which is the revision or version you want to show on the X-axis, e.g. “r_chrome_version”.

    • a_stdio_uri: This property overrides the URI used for the link to the Buildbot output logs; when it is set, masterid, buildername, and buildnumber are not used.
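The prefix convention above can be sketched as a simple grouping step. The `d_frame_count` name below is made up for illustration; `by_prefix` is a hypothetical helper, not dashboard code.

```python
# Sketch of how the "d_", "r_", and "a_" prefixes partition
# supplemental_columns properties.
supplemental_columns = {
    "r_webkit_rev": "167808",         # additional revision/version number
    "d_frame_count": "600",           # extra data value (hypothetical)
    "a_default_rev": "r_webkit_rev",  # annotation: revision for the X-axis
}

def by_prefix(columns):
    """Group supplemental properties by their two-character prefix."""
    groups = {"d_": {}, "r_": {}, "a_": {}}
    for name, value in columns.items():
        groups[name[:2]][name] = value
    return groups
```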



Providing test and unit information

In order for the dashboard to know which direction is considered an improvement and which is a regression, the units you specify must be associated with an improvement direction in chromium/src/tools/perf/unit-info.json. The units you want to use are probably already listed in that file, but if they’re not, you have to add them.


You can also provide a short test description and more specific result descriptions at chromium/src/tools/perf/test-info.json and chromium/src/tools/perf/trace-info.json. Doing this is helpful to other people viewing the test results.

Relevant code links


Implementations of code that sends data to the dashboard:


Whitelisting sender IPs, making data externally-visible, and monitoring


Once you’re ready to start sending data to the real perf dashboard, there are a few more things you might want to do. First, in order for the dashboard to accept the data, the sender’s IP address must be whitelisted. To request whitelisting of a new sender, you can file an issue. The whitelist requirement might change if issue 266056 is implemented.


If your data is not internal-only, you can request that it be marked as externally visible, again by filing an issue.


Finally, if you want to monitor your test results, you can decide which tests should be monitored, who should receive alerts, and whether you want to set any special thresholds for alerting. For general questions, you can email chrome-perf-dashboard-team@google.com.

