It is designed to read a `.ekl` results log from a UEFI SCT run, together with a `.seq` sequence file generated by the UEFI SCT configurator.
It then generates a Markdown file listing the number of failures and passes, each test set from the sequence file that was silently dropped, and a list of all failures and warnings.
To generate a results Markdown file, run `python3 parser.py <log_file.ekl> <seq_file.seq>`.
If you do not provide any command-line arguments, it will use `sample.ekl` and `sample.seq`.
The output filename can be specified with `--md <filename>`.
Online help is available with the `-h` option.
For a custom key:value search, the next two arguments *must be included together*. The program will search for tests that meet that constraint, without performing the cross-check, and print their names, GUIDs, and key:value pairs to the command line: `python3 parser.py <file.ekl> <file.seq> <search key> <search value>`
You can use the `test_dict` structure below to see the available keys.
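As a rough illustration of what this search does, here is a minimal, hypothetical sketch that filters the parsed test entries on a single key:value pair (`search_tests` and the sample data are assumptions, not parser.py's actual code):

``` {.python}
# Hypothetical sketch: filter parsed test entries on one key:value pair.
tests = [
    {"name": "some test", "result": "PASS", "guid": "XXXXXX"},
    {"name": "other test", "result": "FAILURE", "guid": "YYYYYY"},
]

def search_tests(entries, key, value):
    """Return the entries whose `key` field equals `value`."""
    return [t for t in entries if t.get(key) == value]

for t in search_tests(tests, "result", "FAILURE"):
    print(t["name"], t["guid"])
```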
It is possible to sort the test data before output using
the `--sort <key1,key2,...>` option.
Sorting the test data helps when comparing results with `diff`.
``` {.sh}
$ ./parser.py --sort \
'group,descr,set guid,test set,sub set,guid,name,log' ...
```
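Conceptually, a multi-key sort like this can be expressed with Python's stable `list.sort()` and a tuple key; the following is a minimal sketch under that assumption (sample data is hypothetical, not parser.py's actual code):

``` {.python}
# Sketch: sort test entries on several keys, mirroring --sort semantics.
keys = "group,descr,set guid,test set,sub set,guid,name,log".split(",")

tests = [
    {"group": "B", "name": "t2"},
    {"group": "A", "name": "t1"},
]

# Missing keys sort as empty strings so entries without a field
# do not raise KeyError.
tests.sort(key=lambda t: tuple(t.get(k, "") for k in keys))
print([t["name"] for t in tests])
```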
* "comment" is currently not implemented, as formatting is not currently consistent, should reflect the comments from the test.
* some SCT tests have shared GUIDs,
* some lines in ekl file follow Different naming Conventions
* some tests in the sequence file are not strongly Associated with the test spec.
### Documentation
It is possible to convert this `README.md` into `README.pdf` with pandoc using
`make doc`. See `make help`.
### TODO:
* double-check the concatenation of all `.ekl` logs; preliminary tests show a small divergence between them and the `summary.ekl` found in the `Overall` folder. `cat.sh` will generate this file.
* look into the large number of dropped tests.
### db structure:
``` {.python}
tests = [
    test_dict,
    test_dict2...
]

test_dict = {
    "name": "some test",
    "result": "pass/fail",
    "group": "some group",
    "test set": "some set",
    "sub set": "some subset",
    "set guid": "XXXXXX",
    "guid": "XXXXXX",
    "log": "full log output"
}

seqs = {
    <guid>: seq_dict,
    <guid2>: seq_dict2...
}
seq_dict = {
    "name": "set name",
    "guid": "set guid",
    "Iteration": "some hex/num of how many times to run",
    "rev": "some hex/num",
    "Order": "some hex/num"
}
```
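To make the relationship concrete: each test's "set guid" keys into the sequence-file dictionary. A minimal, hypothetical sketch using the structures above (illustrative only, not the actual parser.py code):

``` {.python}
# Sketch: look up sequence-file metadata for a parsed test
# via its "set guid" (sample data, hypothetical).
seqs = {
    "XXXXXX": {"name": "set name", "guid": "XXXXXX", "Order": "0x1"},
}
test = {"name": "some test", "result": "pass", "set guid": "XXXXXX"}

seq = seqs.get(test["set guid"])
if seq is not None:
    print(test["name"], "belongs to set", seq["name"])
```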
#### Spurious tests
Spurious tests are tests that were run according to the log file but were not
meant to be run according to the sequence file.
We force the "result" fields of those tests to "SPURIOUS".
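In terms of the db structure above, detecting spurious tests amounts to checking each test's "set guid" against the sequence-file dictionary; a minimal sketch under those assumptions (not the actual parser.py code):

``` {.python}
# Sketch: mark tests whose set GUID is absent from the sequence
# file as SPURIOUS (sample data, hypothetical).
seqs = {"XXXXXX": {"name": "set name"}}
tests = [
    {"name": "expected test", "result": "pass", "set guid": "XXXXXX"},
    {"name": "unexpected test", "result": "pass", "set guid": "YYYYYY"},
]

for t in tests:
    if t["set guid"] not in seqs:
        t["result"] = "SPURIOUS"

print([(t["name"], t["result"]) for t in tests])
```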
#### Dropped test sets
Dropped test sets are the test sets that were meant to be run according to the
sequence file but for which no test has been run according to the log file.
We create artificial test entries for those dropped test sets, with the
"result" field set to "DROPPED". We convert some fields coming from the
sequence file, and auto-generate others:
``` {.python}
dropped_test_dict = {
"name": "",
"result": "DROPPED",
"group": "Unknown",
"test set": "",
"sub set": <name from sequence file>,
"set guid": <guid from sequence file>,
"revision": <rev from sequence file>,
"guid": "",
"log": ""
}
```
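Conversely, a hedged sketch of how the DROPPED placeholders could be derived, assuming the `tests`/`seqs` structures described above (illustrative, not the actual parser.py implementation):

``` {.python}
# Sketch: create DROPPED placeholder entries for sequence-file sets
# that have no matching test in the log (sample data, hypothetical).
seqs = {
    "XXXXXX": {"name": "set name", "guid": "XXXXXX", "rev": "0x1"},
    "YYYYYY": {"name": "other set", "guid": "YYYYYY", "rev": "0x1"},
}
tests = [{"name": "some test", "result": "pass", "set guid": "XXXXXX"}]

used_guids = {t["set guid"] for t in tests}

for guid, seq in seqs.items():
    if guid not in used_guids:
        tests.append({
            "name": "",
            "result": "DROPPED",
            "group": "Unknown",
            "test set": "",
            "sub set": seq["name"],
            "set guid": seq["guid"],
            "revision": seq["rev"],
            "guid": "",
            "log": "",
        })

print([t for t in tests if t["result"] == "DROPPED"])
```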