Compare duration test data for nightly builds
The nightly builds have timing information attached to each application. Create a way to compare durations across past successful runs. Flag applications whose run times trend upward over time and applications whose run times increase suddenly.
#1 Updated by Ian Humphrey 9 months ago
- Story points set to 2
- SPIKE: How do we want to approach this?
- Having 2 people working together (spending 1 SP each) to find solutions and bounce ideas off each other would be great for this ticket
- Wait until cmake in place?
- New test framework?
Tyler-Adam solution (or the start of one)
Create a Python script to parse the test logs on each development machine. As long as the test logs keep the same format under CMake as they have now, it makes no difference and we can start on this now. Even if the format changes, modifying the script should be easy.
The format of the output from this script should be as simple as possible. Here is one possible design:
[Category: unit/app/cat] [Name of the object (if a unit test)] [Time stamp run 1] [Time stamp run 2] ...
[Application: test name]
[Category test: test name]
In the output logs the run time is displayed as [HH:mm:ss]. We think the script parsing the input log should convert this to a single value in seconds.
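The conversion described above could look something like this (a minimal sketch; the function name is hypothetical):

```python
# Convert a bracketed "[HH:mm:ss]" run-time stamp from the output logs
# into a single value in seconds, as proposed above.
def hms_to_seconds(stamp: str) -> int:
    hours, minutes, seconds = stamp.strip("[]").split(":")
    return int(hours) * 3600 + int(minutes) * 60 + int(seconds)

print(hms_to_seconds("[00:02:08]"))  # 128
print(hms_to_seconds("[01:00:05]"))  # 3605
```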
This could also be a table in a database just as easily. And depending on how long we want to save time-series data, this probably would be better off stored in a database. Talk to Mike or Stuart about what kind of database options are available to us.
Here is a sample of what a space-delimited output log of run times over the past three days might look like:

Category  Name               2017-05-17  2017-05-16  2017-05-15
Unit      AutoReg            32.4        28.7        22.8
Unit      ShapeModel         12.4        26.0        22.3
App       cnet2dem:default   34.2        31.3        11.34
App       cnet2dem:full      31.1        36.1        12.74
App       cnetadd:default    35.4        24.6        13.44
App       cnetadd:point      37.6        48.8        11.44
App       cnetadd:polygon    32.2        18.3        12.5
App       cnetadd:validator  38.4        17.4        11.54
Cat       mro:hirise         120.3       125.6       123.5
Cat       near:msi           240.6       241.9       253.3
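Reading a log in this layout back into memory should be straightforward. A sketch of a parser, assuming the header row holds the date columns and every data row is category, name, then one time per date (the function name is hypothetical):

```python
# Parse the space-delimited run-time log sketched above into a dict
# mapping (category, name) -> {date: seconds}.
def parse_runtime_log(text: str) -> dict:
    lines = [ln for ln in text.splitlines() if ln.strip()]
    dates = lines[0].split()[2:]          # header: Category Name <dates...>
    table = {}
    for line in lines[1:]:
        fields = line.split()
        category, name, times = fields[0], fields[1], fields[2:]
        table[(category, name)] = dict(zip(dates, map(float, times)))
    return table

sample = """\
Category Name 2017-05-17 2017-05-16 2017-05-15
Unit AutoReg 32.4 28.7 22.8
App cnetadd:point 37.6 48.8 11.44
"""
times = parse_runtime_log(sample)
print(times[("Unit", "AutoReg")]["2017-05-17"])  # 32.4
```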
Every time the tests are run on this system, a new date-stamped column would be appended on the right of this output file. The cut-off period for how long this should be done is something we still haven't decided: 30 days? A year?
The time stamp at the top of each column can be in any format. Unix epoch time (Tyler's preference) would also work, and is easy to convert to other time formats.
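Converting between the date-stamped headers and epoch time stamps is a one-liner either way with the standard library (a sketch assuming the headers are interpreted as UTC midnight):

```python
from datetime import datetime, timezone

# Column header ("YYYY-MM-DD") -> Unix epoch seconds, assuming UTC.
def date_to_epoch(date_str: str) -> int:
    dt = datetime.strptime(date_str, "%Y-%m-%d").replace(tzinfo=timezone.utc)
    return int(dt.timestamp())

# Unix epoch seconds -> column header, again in UTC.
def epoch_to_date(epoch: int) -> str:
    return datetime.fromtimestamp(epoch, tz=timezone.utc).strftime("%Y-%m-%d")

print(date_to_epoch("2017-05-17"))  # 1494979200
print(epoch_to_date(1494979200))    # 2017-05-17
```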
Use matplotlib to make time plots for everything stored in this file (actually, multiple files across the different development machines). The parsing script can be separate from the plotting script, or they can be the same one. Keeping them separate might make it easier to split this project among different developers, though.
A preliminary idea is that, using matplotlib, the script would output a report in HTML (or PDF?) format showing time plots for each application/object/category test on each development machine and store the plots in some temporary directory.
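A rough sketch of that report idea (all names here are hypothetical, and this assumes matplotlib is installed): plot each test's run-time series, save the images into a temporary directory, and tie them together with a simple HTML page.

```python
import os
import tempfile
import matplotlib
matplotlib.use("Agg")  # no display available on a build machine
import matplotlib.pyplot as plt

def write_report(times: dict, outdir: str) -> str:
    """times maps (category, name) -> {date: seconds}; returns the HTML path."""
    parts = ["<html><body><h1>Nightly run-time report</h1>"]
    for (category, name), series in times.items():
        dates = sorted(series)
        fig, ax = plt.subplots()
        ax.plot(dates, [series[d] for d in dates], marker="o")
        ax.set_title(f"{category} {name}")
        ax.set_ylabel("seconds")
        png = os.path.join(outdir, f"{category}_{name.replace(':', '_')}.png")
        fig.savefig(png)
        plt.close(fig)
        parts.append(f'<h2>{category} {name}</h2><img src="{os.path.basename(png)}">')
    parts.append("</body></html>")
    report = os.path.join(outdir, "report.html")
    with open(report, "w") as f:
        f.write("\n".join(parts))
    return report

outdir = tempfile.mkdtemp()
path = write_report({("Unit", "AutoReg"): {"2017-05-16": 28.7, "2017-05-17": 32.4}}, outdir)
print(path)
```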
As a side note, the unit test build logs do not currently track the time it takes for the unit tests to complete. If at all possible, times for unit tests should be included in the new CMake system.