- Feb 16, 2023
-
-
Thomas Huth authored
s390x: storage key migration tests, snippets and linker cleanups

See merge request kvm-unit-tests/kvm-unit-tests!40
-
Janosch Frank authored
When we leave SIE due to an exception, we'll still have guest values in registers 0-13, and that's not clearly portrayed in our debug prints. So let's fix that.
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Link: https://lore.kernel.org/r/20230112154548.163021-8-frankja@linux.ibm.com
Message-Id: <20230112154548.163021-8-frankja@linux.ibm.com>
-
Janosch Frank authored
When setting the first stack frame to 0, we can check for a 0 backchain pointer when doing backtraces to know when to stop.
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: Nico Boehr <nrb@linux.ibm.com>
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Link: https://lore.kernel.org/r/20230112154548.163021-7-frankja@linux.ibm.com
Message-Id: <20230112154548.163021-7-frankja@linux.ibm.com>
-
Janosch Frank authored
Seems like it was introduced but never set. It's nicer to have a pointer than to cast the MSO of a VM.
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: Nico Boehr <nrb@linux.ibm.com>
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Link: https://lore.kernel.org/r/20230112154548.163021-6-frankja@linux.ibm.com
Message-Id: <20230112154548.163021-6-frankja@linux.ibm.com>
-
Janosch Frank authored
Let's store the PSW mask itself instead of the address of the location from which the mask should be loaded.
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: Nico Boehr <nrb@linux.ibm.com>
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Link: https://lore.kernel.org/r/20230112154548.163021-5-frankja@linux.ibm.com
Message-Id: <20230112154548.163021-5-frankja@linux.ibm.com>
-
Janosch Frank authored
A linker script has a few benefits:
- Random data doesn't end up in the binary, breaking tests
- We can easily define a lowcore and load the snippet from 0x0 instead of 0x4000, which makes asm snippets behave like C snippets
- We can easily define an invalid PGM new PSW to ensure an exit on a guest PGM
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Link: https://lore.kernel.org/r/20230112154548.163021-4-frankja@linux.ibm.com
Message-Id: <20230112154548.163021-4-frankja@linux.ibm.com>
-
Janosch Frank authored
There are a lot of things in there which we don't need for snippets, and the alignments can be switched from 64K to 4K since that's the s390 page size.
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Link: https://lore.kernel.org/r/20230112154548.163021-3-frankja@linux.ibm.com
Message-Id: <20230112154548.163021-3-frankja@linux.ibm.com>
-
Janosch Frank authored
It seems like the loader file was copied over from another architecture which has a different page size (64K) than s390 (4K). Let's use a 4K alignment instead of the 64K one and remove unneeded entries.
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Link: https://lore.kernel.org/r/20230112154548.163021-2-frankja@linux.ibm.com
Message-Id: <20230112154548.163021-2-frankja@linux.ibm.com>
-
Nico Boehr authored
Right now, we have a test which sets storage keys, then migrates the VM and - after migration has finished - verifies the skeys are still there. Add a new version of the test which changes storage keys while the migration is in progress. This is achieved by adding a command line argument to the existing migration-skey test.
Signed-off-by: Nico Boehr <nrb@linux.ibm.com>
Reviewed-by: Nina Schoetterl-Glausch <nsg@linux.ibm.com>
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Link: https://lore.kernel.org/r/20221220083030.30153-2-nrb@linux.ibm.com
Message-Id: <20221220083030.30153-2-nrb@linux.ibm.com>
Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
-
Claudio Imbrenda authored
Use the new macros in the existing code. No functional changes intended.
Reviewed-by: Janis Schoetterl-Glausch <scgl@linux.ibm.com>
Reviewed-by: Janosch Frank <frankja@linux.ibm.com>
Link: https://lore.kernel.org/r/20221130154038.70492-3-imbrenda@linux.ibm.com
Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Message-Id: <20221130154038.70492-3-imbrenda@linux.ibm.com>
-
Claudio Imbrenda authored
Since a lot of code starts new CPUs using the current PSW mask, add two macros to streamline the creation of generic PSWs and of PSWs with the current program mask.
Reviewed-by: Nico Boehr <nrb@linux.ibm.com>
Reviewed-by: Janis Schoetterl-Glausch <scgl@linux.ibm.com>
Reviewed-by: Janosch Frank <frankja@linux.ibm.com>
Link: https://lore.kernel.org/r/20221130154038.70492-2-imbrenda@linux.ibm.com
Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Message-Id: <20221130154038.70492-2-imbrenda@linux.ibm.com>
-
- Feb 14, 2023
-
-
Andrew Jones authored
arm/arm64: PMU and PSCI tests

See merge request kvm-unit-tests/kvm-unit-tests!39
-
Alexandru Elisei authored
The function get_pte() from mmu.c returns a pointer to the PTE associated with the requested virtual address, mapping the virtual address in the process if it's not already mapped. mmu_get_pte() returns a pointer to the PTE if and only if the virtual address is mapped in pgtable, otherwise it returns NULL. Rename it to follow_pte() to avoid any confusion with get_pte(); follow_pte() also matches the name of the Linux kernel function with a similar purpose.

Also remove the mmu_enabled() check from the function, as the purpose of the function is to get the mapping for the virtual address in the pgtable supplied as the argument, not to translate the virtual address to a physical address using the current translation; that's what virt_to_phys() does.
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
Signed-off-by: Andrew Jones <andrew.jones@linux.dev>
-
Alexandru Elisei authored
The arm and arm64 architectures allow a virtual address to be mapped using a block descriptor (or huge page, as Linux calls it), and the function mmu_set_ranges_sect() is made available for a test to do just that. But virt_to_pte_phys() assumes that all virtual addresses are mapped with page granularity, which can lead to erroneous addresses being returned in the case of block mappings.
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
Signed-off-by: Andrew Jones <andrew.jones@linux.dev>
-
Nikita Venkatesh authored
The test uses the following method: the primary CPU brings up all the secondary CPUs, which are held in a wait loop. Once the primary releases the CPUs, each of the secondary CPUs proceeds to issue CPU_OFF. The primary CPU then checks the status of the individual CPU_OFF requests. There is a chance that some CPUs might return from the CPU_OFF function call after the primary CPU has finished the scan. There is no foolproof method to handle this, but the test tries its best to eliminate these false positives by introducing an extra delay if all the CPUs are reported offline after the initial scan.
[ Alex E: Skip CPU_OFF test if cpu onlining failed, drop cpu_off_success in favour of checking AFFINITY_INFO, commit message tweaking ]
Signed-off-by: Nikita Venkatesh <Nikita.Venkatesh@arm.com>
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
Signed-off-by: Andrew Jones <andrew.jones@linux.dev>
-
Alexandru Elisei authored
For the PSCI CPU_ON function test, all other CPUs perform a CPU_ON call that targets CPU 1. The test is considered a success if CPU_ON returns PSCI SUCCESS exactly once, and PSCI ALREADY_ON for the rest of the calls. Enhance the test by checking that CPU 1 is actually online and able to execute code. Also make the test more robust by checking that the CPU_ON call returns, instead of assuming that it will always succeed and hanging indefinitely if it doesn't. Since the CPU 1 thread is now being set up properly by kvm-unit-tests when being brought online, it becomes possible to add other tests in the future that require all CPUs.

The include header order in arm/psci.c has been changed to be in alphabetical order. This means moving the errata.h include before libcflat.h, which causes compilation to fail because of missing includes in errata.h. Fix that too by including the needed header in errata.h.
Reviewed-by: Andrew Jones <andrew.jones@linux.dev>
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
Signed-off-by: Andrew Jones <andrew.jones@linux.dev>
-
Colton Lewis authored
Replace the MAX_SMP probe loop in favor of reading the number directly from the QEMU error message. This is equally safe as the existing code because the error message has had the same format for as long as it has existed, since QEMU v2.10; the final number before the end of the error message line indicates the maximum QEMU supports.

The loop logic is broken for machines with a number of CPUs that isn't a power of two. This problem was noticed for gicv2 tests on machines with a non-power-of-two number of CPUs greater than 8, because tests were running with MAX_SMP less than 8. As a hypothetical example, a machine with 12 CPUs will test with MAX_SMP=6 because 12 >> 1 == 6. This can, in rare circumstances, lead to different test results depending only on the number of CPUs the machine has.

A previous comment explains the loop should only apply to kernels <= v4.3 on arm and suggests deletion when it becomes tiresome to maintain. However, it is always theoretically possible to test on a machine that has more CPUs than QEMU supports, so it makes sense to leave some check in place.
Signed-off-by: Colton Lewis <coltonlewis@google.com>
Signed-off-by: Andrew Jones <andrew.jones@linux.dev>
-
Ricardo Koller authored
test_overflow_interrupt() (from arm/pmu.c) has a test that passes only because the previous test leaves behind the state needed to pass: the overflow status register with the expected bits. The test (which should fail) does not re-enable the PMU after mem_access_loop() - which clears the PMCR - before writing into the software increment register. Fix this by clearing the previous test's state (pmovsclr_el0) and by enabling the PMU before the sw_incr test.
Fixes: 4f5ef94f ("arm: pmu: Test overflow interrupts")
Reported-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Ricardo Koller <ricarkol@google.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Andrew Jones <andrew.jones@linux.dev>
-
Ricardo Koller authored
The arm/pmu test prints the value of counters with %ld. Most tests start with counters around 0 or UINT_MAX, so printing something like -16 instead of 0xffff_fff0 is not very useful. Report counter values as hexadecimals instead.
Reported-by: Alexandru Elisei <alexandru.elisei@arm.com>
Signed-off-by: Ricardo Koller <ricarkol@google.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Oliver Upton <oliver.upton@linux.dev>
Signed-off-by: Andrew Jones <andrew.jones@linux.dev>
-
Ricardo Koller authored
Modify all tests checking overflows to support both 32-bit (PMCR_EL0.LP == 0) and 64-bit overflows (PMCR_EL0.LP == 1). 64-bit overflows are only supported on PMUv3p5.

Note that chained tests do not implement "overflow_at_64bits == true". That's because there are no CHAIN events when "PMCR_EL0.LP == 1" (for more details see the AArch64.IncrementEventCounter() pseudocode in the Arm ARM DDI 0487H.a, J1.1.1 "aarch64/debug").
Signed-off-by: Ricardo Koller <ricarkol@google.com>
Reviewed-by: Reiji Watanabe <reijiw@google.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Andrew Jones <andrew.jones@linux.dev>
-
Ricardo Koller authored
Given that the arm PMU tests now handle 64-bit counters and overflows, it's better to be precise about what the ALL_SET, PRE_OVERFLOW, and PRE_OVERFLOW2 macros actually are. Since they are 32-bit values, just add _32 to their names.
Signed-off-by: Ricardo Koller <ricarkol@google.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Andrew Jones <andrew.jones@linux.dev>
-
Ricardo Koller authored
PMUv3p5 adds a knob, PMCR_EL0.LP == 1, that allows overflowing at 64 bits instead of 32. Prepare by doing these three things:

1. Add a "bool overflow_at_64bits" argument to all tests checking overflows.
2. Extend satisfy_prerequisites() to check if the machine supports "overflow_at_64bits".
3. Refactor the test invocations to use the new run_test(), which adds a report prefix indicating whether the test uses 64- or 32-bit overflows.

A subsequent commit will actually add the 64-bit overflow tests.
Signed-off-by: Ricardo Koller <ricarkol@google.com>
Reviewed-by: Reiji Watanabe <reijiw@google.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Andrew Jones <andrew.jones@linux.dev>
-
Ricardo Koller authored
PMUv3p5 uses 64-bit counters irrespective of whether the PMU is configured for overflowing at 32 or 64 bits. The consequence is that tests checking counter values after an overflow should not assume that values wrap around at 32 bits: on PMUv3p5 they overflow into the upper half of the 64-bit counters. Fix the tests by checking overflowing counters against the expected 64-bit value.
Signed-off-by: Ricardo Koller <ricarkol@google.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Oliver Upton <oliver.upton@linux.dev>
Signed-off-by: Andrew Jones <andrew.jones@linux.dev>
-
- Jan 17, 2023
-
-
Thomas Huth authored
When mistyping one of the options of the configure script, it shows you the list of valid options, but does not tell you which option was wrong; it can then take a while until you figure out where the typo is. Let's help the user here a little bit by printing which option was not understood.
Message-Id: <20230112095523.938919-1-thuth@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
-
- Jan 05, 2023
-
-
Nina Schoetterl-Glausch authored
The code is a 64-bit number of which the upper 48 bits must be 0.
Fixes: 965e38a0 ("s390x: Test effect of storage keys on diag 308")
Signed-off-by: Nina Schoetterl-Glausch <nsg@linux.ibm.com>
Message-Id: <20230104175950.731988-1-nsg@linux.ibm.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
-
Thomas Huth authored
The big-sur-base image has been decommissioned by Cirrus-CI, so we have to update to a newer version of macOS now.
Message-Id: <20230104142511.297077-1-thuth@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
-
- Jan 04, 2023
-
-
Claudio Imbrenda authored
A recent patch broke "make standalone": the function find_word is not available when running make standalone, so replace it with a simple grep.
Fixes: 743cacf7 ("s390x: don't run migration tests under PV")
Reported-by: Nina Schoetterl-Glausch <nsg@linux.ibm.com>
Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Message-Id: <20221220175508.57180-1-imbrenda@linux.ibm.com>
Reviewed-by: Nina Schoetterl-Glausch <nsg@linux.ibm.com>
Reviewed-by: Nico Boehr <nrb@linux.ibm.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
-
- Dec 13, 2022
-
-
Nico Boehr authored
Some tests already shipped with their own do_migrate() function; remove it and use the new migrate_once() function instead. The control flow in the gic tests can be simplified thanks to migrate_once().
Signed-off-by: Nico Boehr <nrb@linux.ibm.com>
Reviewed-by: Andrew Jones <andrew.jones@linux.dev>
Message-Id: <20221212111731.292942-5-nrb@linux.ibm.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
-
Nico Boehr authored
migrate_once() can simplify the control flow in migration-skey and migration-cmm.
Signed-off-by: Nico Boehr <nrb@linux.ibm.com>
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Message-Id: <20221212111731.292942-4-nrb@linux.ibm.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
-
Nico Boehr authored
Signed-off-by: Nico Boehr <nrb@linux.ibm.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20221212111731.292942-3-nrb@linux.ibm.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
-
Nico Boehr authored
Migration tests can ask migrate_cmd to migrate them to a new QEMU process. Requesting migration and waiting for completion is hence a common pattern which is repeated all over the code base. Add a function which does all of that to avoid repeating the same pattern. Since migrate_cmd can currently only migrate exactly once, this function is called migrate_once() and is a no-op when it has been called before. This can simplify the control flow, especially when tests are skipped.
Suggested-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Signed-off-by: Nico Boehr <nrb@linux.ibm.com>
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Message-Id: <20221212111731.292942-2-nrb@linux.ibm.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
-
- Dec 07, 2022
-
-
Thomas Huth authored
Our gitlab-ci jobs were still running with Fedora 32, which is already out of service. Let's update to Fedora 37, which brings a new QEMU and thus allows running more tests with TCG. While we're at it, also list each test on its own line and sort them alphabetically, so that it is easier to follow which tests get added and removed.

Besides adding some new tests, two entries are also removed here: the "port80" test was removed from the x86 folder a while ago, but not from .gitlab-ci.yml yet (it seems the run script simply ignores unknown tests instead of complaining), and the "tsc_adjust" test only reports a skip in the CI, so it's currently not really useful to try to run it there.
Message-Id: <20221206104003.149630-1-thuth@redhat.com>
Acked-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
-
Thomas Huth authored
Starting with version 7.0, QEMU starts pseries guests in 32-bit mode instead of 64-bit (see QEMU commit 6e3f09c28a, "spapr: Force 32bit when resetting a core"). This causes our test_64bit() in powerpc/emulator.c to fail. Let's switch to 64-bit in our startup code instead to fix the issue.
Message-Id: <20221206110851.154297-1-thuth@redhat.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
-
- Nov 25, 2022
-
-
Like Xu authored
Compilation of the files fails on ARCH=i386 with i686-elf gcc on macos_i386 because they use the "%d" format specifier, which does not match the actual size of uint32_t:

In function 'rdpmc':
lib/libcflat.h:141:24: error: format '%d' expects argument of type 'int', but argument 6 has type 'uint32_t' {aka 'long unsigned int'} [-Werror=format=]
  141 | printf("%s:%d: assert failed: %s: " fmt "\n", \
      |        ^~~~~~~~~~~~~~~~~~~~~~~~~~~~

Use PRId32 instead of "d" to take the macos_i386 case into account.
Reported-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Like Xu <likexu@tencent.com>
Message-Id: <20221124123149.91339-1-likexu@tencent.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
-
- Nov 14, 2022
-
-
Like Xu authored
Update the test cases to cover KVM's enabling code for AMD Guest PerfMonV2. The Intel-specific PMU helpers are extended to check for AMD cpuid, and some of the same MSR semantics are assigned during the initialization phase, so the vast majority of pmu test cases are reused seamlessly.

On some x86 machines (AMD only), even with retired events, the number of events collected when the same workload is measured repeatedly is erratic. This essentially reflects details of the hardware implementation; from a software perspective the event type is an imprecise event, which calls for a tolerance check in the counter overflow test cases.
Signed-off-by: Like Xu <likexu@tencent.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20221102225110.3023543-28-seanjc@google.com
-
Like Xu authored
AMD core PMUs before Zen4 did not have version numbers and had no fixed counters; they had a hard-coded number of generic counters and bit-width, and only hardware events common across AMD generations (starting with K7) were added to the amd_gp_events[] table. All of the above differences are instantiated at the detection step, which also covers the K7 PMU registers, consistent with bare metal.
Cc: Sandipan Das <sandipan.das@amd.com>
Signed-off-by: Like Xu <likexu@tencent.com>
[sean: set bases to K7 values for !PERFCTR_CORE case (reported by Paolo)]
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20221102225110.3023543-27-seanjc@google.com
-
Sean Christopherson authored
Add a flag to track whether the PMU is backed by an Intel CPU. Future support for AMD will sadly need to constantly check whether the PMU is Intel or AMD, and invoking is_intel() every time is rather expensive due to it requiring CPUID (a VM-Exit) and a string comparison.
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20221102225110.3023543-26-seanjc@google.com
-
Like Xu authored
AMD and Intel do not share the same set of coding rules for performance events. Code that tests the same performance event can be reused by pointing it to a different coding table; note that the table size also needs to be updated.
Signed-off-by: Like Xu <likexu@tencent.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20221102225110.3023543-25-seanjc@google.com
-
Like Xu authored
To test Intel arch PMU version 1, most of the basic framework and the test cases which exercise any PMU counter do not require changes, except that registers introduced only in PMU version 2 must not be accessed. Adding some guard checks seamlessly supports version 1, while opening the door for normal AMD PMU tests.
Signed-off-by: Like Xu <likexu@tencent.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20221102225110.3023543-24-seanjc@google.com
-
Like Xu authored
This unit test is intended to exercise KVM's support for Processor Event Based Sampling (PEBS), another PMU feature on Intel processors (starting from Ice Lake Server). If a bit in PEBS_ENABLE is set to 1, its corresponding counter will write at least one PEBS record (including partial state of the vcpu at the time of the current hardware event) to guest memory on counter overflow, and trigger an interrupt at a specific DS state. The format of a PEBS record can be configured by another register.

These tests cover most usage scenarios, including some specially constructed ones that are not typical behaviour of the Linux PEBS driver. They lower the threshold for others to understand this feature and open up more exploration of the KVM implementation and of the hardware feature itself.
Signed-off-by: Like Xu <likexu@tencent.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20221102225110.3023543-23-seanjc@google.com
-