This project is mirrored from https://review.mlplatform.org/tosa/reference_model.
Sep 19, 2024
    • Add support fp8e4m3xfp8e5m2 with f16/f32 accumulator for MATMUL · cd7d0a51
      Won Jong Jeon authored

      - Add fp8e4m3xfp8e5m2 and fp8e5m2xfp8e4m3 MATMUL with f16/f32
        accumulator (see the sketch after this list)
      - Add second input data type for calculating bound parameter for mixed
        data types
      - Update json config file to use new "generator_profile_filter"
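
      A minimal sketch of the mixed-format multiply-accumulate described
      above, assuming plain float stands in for the f32 accumulator and a
      hand-rolled decoder stands in for the reference model's fp8 types
      (fp8_to_float and matmul_fp8_mixed are illustrative names, not the
      operator's actual code):

        #include <cmath>
        #include <cstdint>
        #include <vector>

        // Decode an OCP-style fp8 value with exp_bits exponent bits and
        // man_bits mantissa bits (bias = 2^(exp_bits-1) - 1). Normals and
        // subnormals only; NaN/inf encodings are ignored for brevity.
        float fp8_to_float(std::uint8_t v, int exp_bits, int man_bits) {
            int bias = (1 << (exp_bits - 1)) - 1;
            int sign = (v >> 7) & 1;
            int exp  = (v >> man_bits) & ((1 << exp_bits) - 1);
            int man  = v & ((1 << man_bits) - 1);
            float f  = (exp == 0)
                ? std::ldexp(static_cast<float>(man), 1 - bias - man_bits) // subnormal
                : std::ldexp(static_cast<float>((1 << man_bits) | man),
                             exp - bias - man_bits);                       // normal
            return sign ? -f : f;
        }

        // C = A x B with A in fp8e4m3 ([M,K]) and B in fp8e5m2 ([K,N]);
        // both operands are widened to the accumulator type before the
        // multiply-add. The fp8e5m2xfp8e4m3 case swaps the two formats.
        std::vector<float> matmul_fp8_mixed(const std::vector<std::uint8_t>& A,
                                            const std::vector<std::uint8_t>& B,
                                            int M, int K, int N) {
            std::vector<float> C(M * N);
            for (int m = 0; m < M; ++m)
                for (int n = 0; n < N; ++n) {
                    float acc = 0.0f;
                    for (int k = 0; k < K; ++k)
                        acc += fp8_to_float(A[m * K + k], 4, 3) *  // e4m3
                               fp8_to_float(B[k * N + n], 5, 2);   // e5m2
                    C[m * N + n] = acc;
                }
            return C;
        }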
      
      Signed-off-by: Won Jeon <won.jeon@arm.com>
      Change-Id: Id16f8ecbc0bb437e42c263c75fe95628c3df3a80
    • Create a C++ verify executable · 78ae8965
      Emmanuel Adesola authored and Eric Kunze committed

       - CLI: ./verify --test_desc JSON_CONFIG_PATH --imp_result_file IMP_PATH
         --ref_result_file REF_PATH [--bnd_result_file BND_PATH] [--ofm_name OFM_NAME]
         (an example invocation follows this list)
       - Changes to compliance metadata in json_config to include shape information
       - verify_relative and verify_reduce_product changed to fix an #include error
       - Unit tests added to reference_model/test/verify_exe_testing.cpp
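
       For illustration, an invocation against hypothetical file names (the
       paths below are placeholders, not part of this change) could look like:

         ./verify --test_desc desc.json \
                  --imp_result_file imp_ofm.npy \
                  --ref_result_file ref_ofm.npy \
                  --bnd_result_file bnd_ofm.npy \
                  --ofm_name ofm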
      
      Signed-off-by: Emmanuel Adesola <emmanuel.adesola@arm.com>
      Change-Id: I13ed2e173b82cf148a556c99d49fd106a8e64882
Sep 09, 2024
    • Fix incorrect special fp number handling in min/max reduction · a6f7675a
      TatWai Chong authored and Eric Kunze committed

      Neither Eigen nor the C++ stdlib has well-defined min/max handling that
      matches the TOSA spec and frameworks such as TF, which require, for
      instance:
        reduce_min([4, float('nan')]) -> float('nan')
        reduce_min([float('nan'), float('nan')]) -> float('nan')
        reduce_min([float('-inf'), float('inf')]) -> float('-inf')
      
      This patch creates a custom floating-point reducer for min/max and adds
      framework tests to verify special floating-point min/max reduction.
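
      A minimal sketch of the required behaviour, assuming a scalar loop in
      place of the patch's actual Eigen reducer (reduce_min_nan_propagating
      is an illustrative name; the max case is symmetric):

        #include <cmath>
        #include <limits>
        #include <vector>

        // Any NaN input forces a NaN result, matching the examples above;
        // std::min and Eigen's default reducers may instead drop NaNs
        // depending on operand order.
        float reduce_min_nan_propagating(const std::vector<float>& vals) {
            float acc = std::numeric_limits<float>::infinity();
            for (float v : vals) {
                if (std::isnan(v))
                    return std::numeric_limits<float>::quiet_NaN();
                acc = (v < acc) ? v : acc;
            }
            return acc;
        }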
      
      Co-authored-by: Eirini Vlassi Pandi <eirini.vlassipandi@arm.com>
      
      Change-Id: If59d3ebd937ce46615a8da35a535338a94be1d3f
      Signed-off-by: TatWai Chong <tatwai.chong@arm.com>
    • Increase type coverage of WHILE_LOOP · 2719759b
      Jeremy Johnson authored and Eric Kunze committed

      WHILE_LOOP updated:
      * now supports INT8, INT16, FP32, FP16, BF16 and INT32
      * uses the data generation library where types are supported
      * reduced tensor size, as the loop body ops have their own tests
      
      Add a broadcast mode to FIXED_DATA generation to allow setting a
      tensor to a single value.
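
      A sketch of the broadcast idea under assumed names (fill_fixed_data is
      hypothetical, not the generator's API): a single supplied value is
      replicated across the whole tensor, while a full-size list is still
      consumed element for element.

        #include <cstddef>
        #include <stdexcept>
        #include <vector>

        std::vector<float> fill_fixed_data(const std::vector<float>& fixed,
                                           std::size_t num_elements) {
            if (fixed.size() == 1)  // broadcast mode: one value everywhere
                return std::vector<float>(num_elements, fixed[0]);
            if (fixed.size() != num_elements)
                throw std::invalid_argument("fixed_data size mismatch");
            return fixed;           // per-element mode, as before
        }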
      
      Change-Id: I243474e125fafc0f97a01dac2a2ccb445224e268
      Signed-off-by: Jeremy Johnson <jeremy.johnson@arm.com>
    • Ignore casting warning in generate_fp_special.cc · 701c5f46
      Ian Tayler Lessa authored and Eric Kunze committed

      A clang warning about a loss of precision in a cast can be safely
      ignored for our purposes when generating FP_SPECIAL values: all we
      need is a value higher than INT32_MAX, and we get one with or
      without the loss of precision in the implicit cast to float.
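
      A standalone illustration of why the warning is benign (not the
      commit's code): float has a 24-bit significand, so INT32_MAX is not
      exactly representable and rounds up to 2^31, which still exceeds
      every int32_t value.

        #include <cstdint>
        #include <cstdio>

        int main() {
            // Implicit int -> float conversion that clang warns about:
            float above_int32 = INT32_MAX;
            std::printf("%.1f\n", above_int32);  // 2147483648.0, i.e. 2^31
            return 0;
        }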
      
      Added a TODO for future improvements to cfloat.h to allow casting
      from other types, including int32_t.
      
      Signed-off-by: Ian Tayler Lessa <ian.taylerlessa@arm.com>
      Change-Id: Iebfb47fdc3e2d26e5e27fe7b966431804dfc925e