commit d8447c38
Author: Romain Naour
Committer: Yann E. MORIN

support/testing: update logical eraseblock and physical eraseblock size for qemu >= 2.9

The current ubi/ubifs test (test_ubi.py) relies on a Qemu bug present in
2.8.0 that was fixed in Qemu 2.9.0 [1]. The ubi/ubifs settings are
updated to run with Qemu >= 2.9.0, using the new multiple chip handling.

If needed, the old behavior can be enabled using the pflash01 property
"old-multiple-chip-handling" [2].

The issue was not detected until now since we are still using an old
qemu (2.8 from Debian stretch) for testing in gitlab (via the Buildroot
Docker image used by gitlab-ci.yml).

First, the logical eraseblock (LEB) size must be updated to the value
0x3ff80 reported by the kernel when using qemu >= 2.9.0:

  UBIFS (ubi0:0): Mounting in unauthenticated mode
  UBIFS error (ubi0:0 pid 1): ubifs_read_superblock: LEB size mismatch: 524160 in superblock, 262016 real
  UBIFS error (ubi0:0 pid 1): ubifs_read_superblock: bad superblock, error 1
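
In the test's defconfig fragment, this is a one-line change along these
lines (a sketch: BR2_TARGET_ROOTFS_UBIFS_LEBSIZE is the usual Buildroot
option for the LEB size, and the old 0x7ff80 value is inferred from the
524160 reported in the superblock above):

  # 524160 == 0x7ff80 (in superblock), 262016 == 0x3ff80 (real)
  BR2_TARGET_ROOTFS_UBIFS_LEBSIZE=0x3ff80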

But the system still fails to boot:

  UBIFS error (ubi0:0 pid 1): ubifs_scan: garbage
  UBIFS error (ubi0:0 pid 1): ubifs_recover_master_node: failed to recover master node

ubifs reads garbage since Qemu >= 2.9.0 reports a per-device sector
length divided by the number of devices (see commit [1]).

The kernel detects two flash devices (dmesg):

  Concatenating MTD devices:
  (0): "40000000.flash"
  (1): "40000000.flash"
  into device "40000000.flash"

Divide the physical eraseblock (PEB) size by two.
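
Assuming the test previously used a 0x80000 PEB (consistent with the
old 0x7ff80 LEB plus the two 64-byte UBI headers on NOR flash), the
corresponding sketch of the change is:

  # half of the former 0x80000; LEB = PEB - 0x80 for the UBI headers
  BR2_TARGET_ROOTFS_UBI_PEBSIZE=0x40000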

Tested with qemu 2.9.0 and 5.1.0.

Fixes: https://gitlab.com/kubu93/buildroot/-/jobs/1543100932

[1] https://git.qemu.org/?p=qemu.git;a=commitdiff;h=feb0b1aa11f14ee71660aba46b46387d1f923c9e
[2] http://lists.busybox.net/pipermail/buildroot/2021-September/622069.html

Signed-off-by: Romain Naour <romain.naour@gmail.com>
Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>