- Jan 17, 2023
-
Thomas Huth authored
When you mis-type one of the options of the configure script, it shows you the list of valid options, but does not tell you which option was wrong, so it can take a while until you figure out where the typo is. Let's help the user here a little bit by printing which option was not understood.

Message-Id: <20230112095523.938919-1-thuth@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
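The patch itself touches the shell configure script; purely as an illustration of the behavior in C (all names hypothetical, not the project's code):

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical sketch: name the offending option before listing
     * the valid ones, instead of listing them with no context. */
    int main(int argc, char **argv)
    {
        for (int i = 1; i < argc; i++) {
            if (!strcmp(argv[i], "--enable-foo"))
                continue;   /* a known option (illustrative) */
            fprintf(stderr, "Unknown option '%s'\n", argv[i]);
            fprintf(stderr, "Valid options: --enable-foo\n");
            return 1;
        }
        return 0;
    }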
-
- Jan 05, 2023
-
Nina Schoetterl-Glausch authored
The code is a 64-bit number whose upper 48 bits must be 0.

Fixes: 965e38a0 ("s390x: Test effect of storage keys on diag 308")
Signed-off-by: Nina Schoetterl-Glausch <nsg@linux.ibm.com>
Message-Id: <20230104175950.731988-1-nsg@linux.ibm.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
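A one-line check captures the constraint (hypothetical helper, shown only to make the bit layout concrete):

    #include <stdbool.h>
    #include <stdint.h>

    /* A diag 308 subcode is 64 bits wide; only the low 16 bits may
     * be set, i.e. the upper 48 bits must be 0. */
    static bool diag308_code_valid(uint64_t code)
    {
        return (code & ~0xffffULL) == 0;
    }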
-
Thomas Huth authored
The big-sur-base image has been decommissioned by Cirrus-CI, so we have to update to a newer version of macOS now.

Message-Id: <20230104142511.297077-1-thuth@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
-
- Jan 04, 2023
-
Claudio Imbrenda authored
A recent patch broke "make standalone": the function find_word is not available when running "make standalone", so replace it with a simple grep.

Fixes: 743cacf7 ("s390x: don't run migration tests under PV")
Reported-by: Nina Schoetterl-Glausch <nsg@linux.ibm.com>
Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Message-Id: <20221220175508.57180-1-imbrenda@linux.ibm.com>
Reviewed-by: Nina Schoetterl-Glausch <nsg@linux.ibm.com>
Reviewed-by: Nico Boehr <nrb@linux.ibm.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
-
- Dec 13, 2022
-
Nico Boehr authored
Some tests shipped with their own do_migrate() function; remove it and use the new migrate_once() function instead. The control flow in the gic tests can be simplified thanks to migrate_once().

Signed-off-by: Nico Boehr <nrb@linux.ibm.com>
Reviewed-by: Andrew Jones <andrew.jones@linux.dev>
Message-Id: <20221212111731.292942-5-nrb@linux.ibm.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
-
Nico Boehr authored
migrate_once() can simplify the control flow in migration-skey and migration-cmm.

Signed-off-by: Nico Boehr <nrb@linux.ibm.com>
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Message-Id: <20221212111731.292942-4-nrb@linux.ibm.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
-
Nico Boehr authored
Signed-off-by: Nico Boehr <nrb@linux.ibm.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20221212111731.292942-3-nrb@linux.ibm.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
-
Nico Boehr authored
Migration tests can ask migrate_cmd to migrate them to a new QEMU process. Requesting migration and waiting for completion is hence a common pattern which is repeated all over the code base. Add a function which does all of that, to avoid repeating the same pattern. Since migrate_cmd currently can only migrate exactly once, this function is called migrate_once() and is a no-op when it has been called before. This can simplify the control flow, especially when tests are skipped.

Suggested-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Signed-off-by: Nico Boehr <nrb@linux.ibm.com>
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Message-Id: <20221212111731.292942-2-nrb@linux.ibm.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
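A minimal sketch of what such a helper can look like (the migrate_once() name is from the commit; the body and the getchar()-based handshake with the test harness are assumptions):

    #include <stdbool.h>
    #include <stdio.h>

    /* Ask the harness to migrate the guest, then wait until it
     * signals completion by feeding us a character. */
    static void migrate(void)
    {
        puts("Now migrate the VM, then press a key to continue...\n");
        (void)getchar();
    }

    /* Migrate exactly once; later calls are no-ops, which keeps
     * skip-heavy control flow simple. */
    void migrate_once(void)
    {
        static bool migrated;

        if (migrated)
            return;
        migrated = true;
        migrate();
    }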
-
- Dec 07, 2022
-
Thomas Huth authored
Our gitlab-ci jobs were still running with Fedora 32, which is already out of service. Let's update to Fedora 37, which brings a new QEMU that also allows running more tests with TCG. While we're at it, also list each test on its own line and sort them alphabetically, so that it is easier to follow which tests get added and removed. Besides adding some new tests, two entries are also removed here: the "port80" test was removed from the x86 folder a while ago, but not from .gitlab-ci.yml yet (it seems the run script simply ignores unknown tests instead of complaining), and the "tsc_adjust" test only skips in the CI, so it's currently not really useful to try to run it there.

Message-Id: <20221206104003.149630-1-thuth@redhat.com>
Acked-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
-
Thomas Huth authored
Starting with version 7.0, QEMU starts the pseries guests in 32-bit mode instead of 64-bit (see QEMU commit 6e3f09c28a - "spapr: Force 32bit when resetting a core"). This causes our test_64bit() in powerpc/emulator.c to fail. Let's switch to 64-bit in our startup code instead to fix the issue.

Message-Id: <20221206110851.154297-1-thuth@redhat.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
-
- Nov 25, 2022
-
Like Xu authored
Compilation of the files fails on ARCH=i386 with i686-elf gcc on macos_i386 because they use the "%d" format specifier, which does not match the actual size of uint32_t:

In function 'rdpmc':
lib/libcflat.h:141:24: error: format '%d' expects argument of type 'int', but argument 6 has type 'uint32_t' {aka 'long unsigned int'} [-Werror=format=]
  141 | printf("%s:%d: assert failed: %s: " fmt "\n", \
      |        ^~~~~~~~~~~~~~~~~~~~~~~~~~~~

Use PRId32 instead of "d" to take the macos_i386 case into account.

Reported-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Like Xu <likexu@tencent.com>
Message-Id: <20221124123149.91339-1-likexu@tencent.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
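A standalone illustration of the portable format macros the fix switches to (the variable and its value are made up):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t eax = 0x10;

        /* With i686-elf gcc, uint32_t is 'long unsigned int', so a
         * plain "%d" triggers -Wformat.  The <inttypes.h> macros
         * always expand to the specifier that matches the
         * fixed-width type on the current ABI. */
        printf("eax = %" PRId32 "\n", (int32_t)eax);
        printf("eax = %" PRIu32 "\n", eax);
        return 0;
    }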
-
- Nov 14, 2022
-
Like Xu authored
Update the test cases to cover KVM's enabling code for AMD guest PerfMonV2. The Intel-specific PMU helpers are extended to check for the AMD CPUID, and MSRs with the same semantics are assigned during the initialization phase, so the vast majority of the PMU test cases are reused seamlessly. On some x86 machines (AMD only), even with retired events, the same workload measured repeatedly collects an erratic number of events; this essentially reflects details of the hardware implementation, and from a software perspective the event type is imprecise, which calls for a tolerance check in the counter overflow test cases.

Signed-off-by: Like Xu <likexu@tencent.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20221102225110.3023543-28-seanjc@google.com
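A tolerance check of the kind described might look like this (hypothetical helper, not the literal patch):

    #include <stdbool.h>
    #include <stdint.h>

    /* Accept a count within +/- slack of the expected value when the
     * underlying hardware event is imprecise. */
    static bool count_within_tolerance(uint64_t count, uint64_t expected,
                                       uint64_t slack)
    {
        uint64_t lo = expected > slack ? expected - slack : 0;

        return count >= lo && count <= expected + slack;
    }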
-
Like Xu authored
The AMD core PMU before Zen4 has no version number and no fixed counters; it has a hard-coded number of generic counters and a hard-coded bit width, and only hardware events common across AMD generations (starting with the K7) are added to the amd_gp_events[] table. All of the above differences are instantiated at the detection step, which also covers the K7 PMU registers, consistent with bare metal.

Cc: Sandipan Das <sandipan.das@amd.com>
Signed-off-by: Like Xu <likexu@tencent.com>
[sean: set bases to K7 values for !PERFCTR_CORE case (reported by Paolo)]
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20221102225110.3023543-27-seanjc@google.com
-
Sean Christopherson authored
Add a flag to track whether the PMU is backed by an Intel CPU. Future support for AMD will sadly need to constantly check whether the PMU is Intel or AMD, and invoking is_intel() every time is rather expensive due to it requiring CPUID (VM-Exit) and a string comparison.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20221102225110.3023543-26-seanjc@google.com
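A sketch of caching the vendor once at init (illustrative code, not the patch itself):

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    static struct pmu_caps {
        bool is_intel;
    } pmu;

    static bool is_intel_cpu(void)
    {
        uint32_t eax, ebx, ecx, edx;
        char vendor[13];

        /* CPUID leaf 0 returns the vendor string in EBX:EDX:ECX; in
         * a guest, every CPUID causes a VM-Exit, hence the cache. */
        asm volatile("cpuid"
                     : "=a"(eax), "=b"(ebx), "=c"(ecx), "=d"(edx)
                     : "a"(0), "c"(0));
        memcpy(vendor + 0, &ebx, 4);
        memcpy(vendor + 4, &edx, 4);
        memcpy(vendor + 8, &ecx, 4);
        vendor[12] = '\0';
        return !strcmp(vendor, "GenuineIntel");
    }

    void pmu_init(void)
    {
        /* Snapshot once; tests then read pmu.is_intel for free. */
        pmu.is_intel = is_intel_cpu();
    }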
-
Like Xu authored
AMD and Intel do not share the same encoding rules for performance events, but code that tests the same logical performance event can be reused by pointing it at a different encoding table; note that the table size also needs to be updated.

Signed-off-by: Like Xu <likexu@tencent.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20221102225110.3023543-25-seanjc@google.com
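The table-swap idea, sketched with a couple of real event encodings (the struct layout and variable names are assumptions):

    #include <stddef.h>
    #include <stdint.h>

    struct pmu_event {
        const char *name;
        uint32_t unit_sel;      /* event select | unit mask */
    };

    /* Even the "same" event is encoded differently per vendor:
     * unhalted core cycles is event 0x3c on Intel but 0x76 on AMD. */
    static const struct pmu_event intel_gp_events[] = {
        { "core cycles",  0x003c },
        { "instructions", 0x00c0 },
    };

    static const struct pmu_event amd_gp_events[] = {
        { "core cycles",  0x0076 },
        { "instructions", 0x00c0 },
    };

    /* Shared test code dereferences one pointer/size pair, set once
     * at init depending on the vendor. */
    static const struct pmu_event *gp_events = intel_gp_events;
    static size_t gp_events_size =
        sizeof(intel_gp_events) / sizeof(intel_gp_events[0]);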
-
Like Xu authored
To test Intel arch PMU version 1, most of the basic framework and the test cases which exercise any PMU counter do not require changes, except that they must not access registers introduced only in PMU version 2. Adding some guard checks supports version 1 seamlessly, while opening the door for tests of normal AMD PMUs.

Signed-off-by: Like Xu <likexu@tencent.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20221102225110.3023543-24-seanjc@google.com
-
Like Xu authored
This unit test is intended to test KVM's support for Processor Event Based Sampling (PEBS), another PMU feature on Intel processors (starting with Ice Lake Server). If a bit in PEBS_ENABLE is set to 1, its corresponding counter will write at least one PEBS record (including a partial state of the vCPU at the time of the current hardware event) to guest memory on counter overflow, and trigger an interrupt at a specific DS state. The format of a PEBS record can be configured via another register. These tests cover most usage scenarios, including some specially constructed ones (not typical behaviour of the Linux PEBS driver). This lowers the barrier for others to understand the feature and opens up more exploration of the KVM implementation and the hardware feature itself.

Signed-off-by: Like Xu <likexu@tencent.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20221102225110.3023543-23-seanjc@google.com
-
Like Xu authored
Track the global PMU MSRs in pmu_caps so that tests don't need to manually differentiate between AMD and Intel. Although AMD and Intel PMUs have the same semantics in terms of global control features (including ctl and status), their MSR indexes are not the same.

Signed-off-by: Like Xu <likexu@tencent.com>
[sean: drop most getters/setters]
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20221102225110.3023543-22-seanjc@google.com
-
Sean Christopherson authored
In generic PMU testing, it is very common to initialize the test environment by resetting counter registers. Add helpers to reset all PMU counters for code reusability, and reset all counters during PMU initialization for good measure.

Signed-off-by: Like Xu <likexu@tencent.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20221102225110.3023543-21-seanjc@google.com
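Such a reset helper might look like the following (the helper name and the fixed Intel MSR bases are assumptions for the sketch):

    #include <stdint.h>

    #define MSR_IA32_PERFCTR0   0x0c1
    #define MSR_P6_EVNTSEL0     0x186

    static inline void wrmsr(uint32_t index, uint64_t val)
    {
        asm volatile("wrmsr"
                     : : "c"(index), "a"((uint32_t)val),
                         "d"((uint32_t)(val >> 32))
                     : "memory");
    }

    /* Zero each GP counter and its event select so no stale
     * programming leaks into the next test. */
    void reset_all_gp_counters(unsigned int nr_gp_counters)
    {
        unsigned int i;

        for (i = 0; i < nr_gp_counters; i++) {
            wrmsr(MSR_IA32_PERFCTR0 + i, 0);
            wrmsr(MSR_P6_EVNTSEL0 + i, 0);
        }
    }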
-
Like Xu authored
Add a helper to get the index of a fixed counter instead of manually calculating it; a future patch will add more users of the fixed counter MSRs. No functional change intended.

Signed-off-by: Like Xu <likexu@tencent.com>
[sean: move to separate patch, write changelog]
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20221102225110.3023543-20-seanjc@google.com
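In its simplest form, such a helper just centralizes the base-plus-offset arithmetic (the helper name is hypothetical; the base is the architectural IA32_FIXED_CTR0):

    #include <stdint.h>

    #define MSR_CORE_PERF_FIXED_CTR0    0x309

    /* MSR index of fixed counter n, computed in one place instead of
     * open-coding base + n at every call site. */
    static inline uint32_t fixed_ctr_msr(unsigned int n)
    {
        return MSR_CORE_PERF_FIXED_CTR0 + n;
    }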
-
Like Xu authored
Snapshot the base MSRs for GP counters and event selects during pmu_init() so that tests don't need to manually compute the bases.

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Like Xu <likexu@tencent.com>
[sean: rename helpers to look more like macros, drop wrmsr wrappers]
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20221102225110.3023543-19-seanjc@google.com
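A sketch of the snapshot (struct fields and accessor names are assumptions; the initializer shows the Intel bases):

    #include <stdint.h>

    static struct pmu_caps {
        uint32_t msr_gp_counter_base;
        uint32_t msr_gp_event_select_base;
    } pmu = {
        .msr_gp_counter_base      = 0x0c1,  /* MSR_IA32_PERFCTR0 */
        .msr_gp_event_select_base = 0x186,  /* MSR_P6_EVNTSEL0 */
    };

    /* Macro-style accessors, reading the snapshot instead of
     * recomputing vendor-specific bases in each test. */
    static inline uint32_t MSR_GP_COUNTERx(unsigned int i)
    {
        return pmu.msr_gp_counter_base + i;
    }

    static inline uint32_t MSR_GP_EVENT_SELECTx(unsigned int i)
    {
        return pmu.msr_gp_event_select_base + i;
    }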
-
Sean Christopherson authored
Drop wrappers that are and always will be pure passthroughs of pmu_caps fields, e.g. the number of fixed/general-purpose counters can always be determined during PMU initialization and doesn't need runtime logic. No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20221102225110.3023543-18-seanjc@google.com
-
Sean Christopherson authored
Snapshot PMU info from CPUID.0xA into "struct pmu_caps pmu" during pmu_init() instead of reading CPUID.0xA every time a test wants to query PMU capabilities. Using pmu_caps to track various properties will also make it easier to hide the differences between AMD and Intel PMUs.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20221102225110.3023543-17-seanjc@google.com
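For reference, decoding the leaf once might look like this (field names assumed; the EAX layout is per the SDM: bits 7:0 version, 15:8 number of GP counters, 23:16 counter width):

    #include <stdint.h>

    static struct pmu_caps {
        uint8_t version;
        uint8_t nr_gp_counters;
        uint8_t gp_counter_width;
    } pmu;

    void pmu_init(void)
    {
        uint32_t eax, ebx, ecx, edx;

        /* One CPUID (one VM-Exit) at init instead of one per query. */
        asm volatile("cpuid"
                     : "=a"(eax), "=b"(ebx), "=c"(ecx), "=d"(edx)
                     : "a"(0xa), "c"(0));
        pmu.version          = eax & 0xff;
        pmu.nr_gp_counters   = (eax >> 8) & 0xff;
        pmu.gp_counter_width = (eax >> 16) & 0xff;
    }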
-
Like Xu authored
Add a global "struct pmu_caps pmu" to snapshot PMU capabilities during the final stages of BSP initialization. Use the new hooks to snapshot PERF_CAPABILITIES instead of re-reading the MSR every time a test wants to query capabilities. A software-defined struct will also simplify extending support to AMD CPUs, as many of the differences between AMD and Intel can be handled during pmu_init(). Init the PMU caps for all tests so that tests don't need to remember to call pmu_init() before using any of the PMU helpers, e.g. the nVMX test uses this_cpu_has_pmu(), which will be converted to rely on the global struct in a future patch.

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Like Xu <likexu@tencent.com>
[sean: reword changelog]
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20221102225110.3023543-16-seanjc@google.com
-
Sean Christopherson authored
Add bsp_rest_init() to dedup bringing up APs and doing SMP initialization across the 32-bit, 64-bit, and EFI flavors of KVM-unit-tests. The common bucket will also be used in future patches to init things that aren't SMP related and thus don't fit in smp_init(), e.g. PMU setup. No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20221102225110.3023543-15-seanjc@google.com
-
Like Xu authored
Given all the PMU stuff coming in, we need e.g. lib/x86/pmu.h to hold all of the hardware-defined stuff, e.g. #defines, accessors, helpers and structs that are dictated by hardware. This will greatly help with code reuse and reduce unnecessary VM-Exits. Opportunistically move the LBR MSR definitions to processor.h.

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Like Xu <likexu@tencent.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20221102225110.3023543-14-seanjc@google.com
-
Like Xu authored
The original name "PC_VECTOR" comes from the LVT Performance Counter Register. Rename it to PMI_VECTOR. That's much more familiar for KVM developers, and it's still correct, e.g. it's the PMI vector that's programmed into the LVT PC register.

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Like Xu <likexu@tencent.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20221102225110.3023543-13-seanjc@google.com
-
Like Xu authored
Specifying an unsupported PMC encoding will cause a #GP(0). There are multiple reasons RDPMC can #GP; the one that is being relied on to guarantee a #GP is specifically that the PMC index is invalid. The most extensible solution is to provide a safe variant.

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Like Xu <likexu@tencent.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20221102225110.3023543-12-seanjc@google.com
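Roughly how a safe variant looks with the suite's exception-fixup machinery (ASM_TRY() and exception_vector() come from the x86 lib; the exact shape here is an assumption, not the literal patch):

    /* Returns 0 on success, or the fault vector (e.g. #GP) if the
     * RDPMC faulted, instead of killing the test. */
    static inline int rdpmc_safe(uint32_t index, uint64_t *val)
    {
        uint32_t a, d;

        asm volatile(ASM_TRY("1f")
                     "rdpmc\n\t"
                     "1:"
                     : "=a"(a), "=d"(d) : "c"(index) : "memory");
        *val = (uint64_t)a | ((uint64_t)d << 32);
        return exception_vector();
    }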
-
Like Xu authored
Existing unit tests do not cover the AMD PMU, nor Intel PMUs that are not architectural (on some obsolete CPUs). AMD's PMU support will come in subsequent commits.

Signed-off-by: Like Xu <likexu@tencent.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20221102225110.3023543-11-seanjc@google.com
-
Like Xu authored
Any agent can run "./run_tests.sh -g pmu" to easily run all PMU tests, e.g. when verifying x86/PMU KVM changes.

Signed-off-by: Like Xu <likexu@tencent.com>
Reviewed-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20221102225110.3023543-10-seanjc@google.com
-
Like Xu authored
The pmu test check_counter_overflow() always fails with 32-bit binaries. The cnt.count obtained from the latter run of measure() (based on fixed counter 0) is not equal to the expected value (based on GP counter 0); there is a positive error with a value of 2. The two extra instructions come from the inline wrmsr() and rdmsr() inside the global_disable() binary code block. Specifically, for each MSR access, the i386 code has two assembly mov instructions before rdmsr/wrmsr (mark it for fixed counter 0, bit 32), but only one assembly mov is needed for x86_64, and for GP counter 0 on i386. The sequence of instructions used to count events is thus different for the GP and fixed counters. The fix is therefore quite high-level: use the same counter (with the same instruction sequence) to set the initial value for that same counter. Fix the expected initial cnt.count for fixed counter 0 overflow based on fixed counter 0 itself, instead of always using GP counter 0. The difference of 1 in this count enables the interrupt to be generated immediately after the selected event count has been reached, instead of waiting for the overflow to propagate through the counter. Also add a helper to measure/compute the overflow preset value; it provides a convenient location to document the weird behavior that's necessary to ensure immediate event delivery.

Signed-off-by: Like Xu <likexu@tencent.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20221102225110.3023543-9-seanjc@google.com
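The preset computation itself is small (hypothetical helper, shown for concreteness):

    #include <stdint.h>

    /* Program the counter with the two's complement of the target so
     * it overflows, and the PMI fires, exactly when the target number
     * of events has been counted. */
    static uint64_t overflow_preset(uint64_t events_until_overflow)
    {
        return (uint64_t)(-(int64_t)events_until_overflow);
    }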
-
Like Xu authored
The current measure() forces the common case to pass in unnecessary information in order to give flexibility to a single use case. It's just syntactic sugar, but it really does help readers, as it's not obvious that the "1" specifies the number of events, whereas measure_many() and measure_one() are relatively self-explanatory.

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Like Xu <likexu@tencent.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20221102225110.3023543-8-seanjc@google.com
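A minimal sketch of the wrapper split (pmu_counter_t is reduced to a stub here; the real struct carries the event programming):

    #include <stdint.h>

    typedef struct {
        uint64_t config;    /* event selection (stub) */
        uint64_t count;     /* measured result (stub) */
    } pmu_counter_t;

    static void measure_many(pmu_counter_t evt[], int count)
    {
        /* program the counters, run the workload, read back the
         * results for 'count' events (elided in this sketch) */
    }

    /* Self-explanatory sugar for the common single-event case. */
    static void measure_one(pmu_counter_t *evt)
    {
        measure_many(evt, 1);
    }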
-
Like Xu authored
Most invocations of start_event() and measure() first set evt.count = 0. Instead of forcing each caller to ensure count is zeroed, zero the count during start_event(), then drop all of the manual zeroing. Accumulating counts can be handled by reading the current count before start_event(), and something like stuffing a high count to test an edge case can be handled by an inner helper, __start_event(). For overflow, just open-code measure() for that one-off case. Requiring callers to zero out a field in the most common cases isn't exactly flexible.

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Like Xu <likexu@tencent.com>
[sean: tag __measure() noinline so its count is stable]
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20221102225110.3023543-7-seanjc@google.com
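The inner-helper pattern, sketched (again with a stubbed pmu_counter_t):

    #include <stdint.h>

    typedef struct {
        uint64_t count;     /* stub for the real struct */
    } pmu_counter_t;

    /* Edge-case callers (e.g. near-overflow tests) stuff their own
     * starting value... */
    static void __start_event(pmu_counter_t *evt, uint64_t count)
    {
        evt->count = count;
        /* program the event select and enable the counter (elided) */
    }

    /* ...while the common case always starts from zero. */
    static void start_event(pmu_counter_t *evt)
    {
        __start_event(evt, 0);
    }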
-
Like Xu authored
This test case uses MSR_IA32_PERFCTR0 to count branch instructions and PERFCTR1 to count instruction events. The same correspondence should be maintained at report(); specifically, it should use status bit 1 for instructions and bit 0 for branches.

Fixes: 20cf9147 ("x86/pmu: Test PMU virtualization on emulated instructions")
Reported-by: Sandipan Das <sandipan.das@amd.com>
Signed-off-by: Like Xu <likexu@tencent.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20221102225110.3023543-6-seanjc@google.com
-
Like Xu authored
The test conclusion of running Intel LBR on AMD platforms should not be PASS but SKIP; fix it.

Signed-off-by: Like Xu <likexu@tencent.com>
Reviewed-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20221102225110.3023543-5-seanjc@google.com
-
Like Xu authored
The inappropriate prefix "full-width writes" may be propagated to later test cases if it is not popped.

Signed-off-by: Like Xu <likexu@tencent.com>
Reviewed-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20221102225110.3023543-4-seanjc@google.com
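The suite's prefix helpers nest, so the fix is a matching pop (check_counters() stands in for the existing test body; the wrapper name is made up):

    #include "libcflat.h"

    static void check_counters(void)
    {
        /* stands in for the suite's existing counter checks */
    }

    static void check_counters_full_width(void)
    {
        report_prefix_push("full-width writes");
        check_counters();       /* reports as "full-width writes: ..." */
        report_prefix_pop();    /* don't leak the prefix onward */
    }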
-
Like Xu authored
Move check_emulated_instr() into check_counters() so that full-width counters can be tested with ease by the same test case.

Signed-off-by: Like Xu <likexu@tencent.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20221102225110.3023543-3-seanjc@google.com
-
Like Xu authored
On virtual platforms without PDCM support (e.g. AMD), the #GP failure on MSR_IA32_PERF_CAPABILITIES is completely avoidable.

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Like Xu <likexu@tencent.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20221102225110.3023543-2-seanjc@google.com
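The guard is a single CPUID check (sketch; PDCM is CPUID.01H:ECX bit 15, and PERF_CAPABILITIES is MSR 0x345):

    #include <stdbool.h>
    #include <stdint.h>

    #define MSR_IA32_PERF_CAPABILITIES  0x345

    /* Only read PERF_CAPABILITIES when CPUID says the MSR exists,
     * instead of eating a #GP on CPUs (e.g. AMD) without PDCM. */
    static bool this_cpu_has_pdcm(void)
    {
        uint32_t eax, ebx, ecx, edx;

        asm volatile("cpuid"
                     : "=a"(eax), "=b"(ebx), "=c"(ecx), "=d"(edx)
                     : "a"(1), "c"(0));
        return ecx & (1u << 15);
    }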
-
Sean Christopherson authored
Verify that multiple vCPUs with the same physical xAPIC ID receive an IPI sent to said ID. Note, on_cpu() maintains its own CPU=>ID map and is effectively unusable after changing the xAPIC ID. Update each vCPU's xAPIC ID from within the IRQ handler so as to avoid having to send yet another IPI from vCPU0 to tell vCPU1 to update its ID.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20221001011301.2077437-10-seanjc@google.com
-
Sean Christopherson authored
Add an APIC sub-test to verify the darker corners of logical mode IPI delivery. Logical mode is rather bizarre, in that each "ID" is treated as a bitmask, e.g. an ID with multiple bits set can match multiple destinations. Verify that overlapping and/or superfluous destinations and IDs with multiple target vCPUs are handled correctly for both flat and cluster modes.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20221001011301.2077437-9-seanjc@google.com
-