MLECO-5543: Experimental ExecuTorch support
*** WARNING: Unstable changes ***

This experimental branch introduces a framework abstraction layer and adds
initial support for the ExecuTorch backend alongside TensorFlow Lite
Micro (TFLM).

Highlights:
----------
- Introduced a `fwk/` module with backend-agnostic interfaces to decouple
  use-case logic from ML frameworks (see the interface sketch below).
- Isolated TFLM-specific logic into `fwk/tflm`; added an ExecuTorch-based
  implementation under `fwk/ExecuTorch`.
- Enabled selection of the ML framework via the `ML_FRAMEWORK_BUILD`
  CMake flag.

ExecuTorch Integration:
----------------------
- Added ExecuTorch model and tensor classes.
- Integrated the native (host) inference flow.
- Enabled the AOT compilation pipeline via `aot_arm_compiler` and `.pte`
  generation.
- Added a portable ops shared library.
- Added data layout conversions (NHWC → NCHW) in the common API (see the
  layout-conversion sketch below).

Infrastructure & Setup:
----------------------
- Refactored the setup scripts with improved structure and Pylint
  compliance.
- Introduced a `--parallel` flag to speed up model setup.
- Extended the script to generate ExecuTorch models and to support
  conditional use-case setup.

Documentation:
-------------
- Updated all use-case documentation to reflect framework compatibility.

This lays the groundwork for future extensibility, enabling use cases to
support multiple ML runtimes with minimal duplication.
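As a rough illustration of the intent behind the `fwk/` layer, a minimal C++
sketch follows. The names used here (`fwk::Tensor`, `fwk::Model`, `LoadModel`,
`RunInference`, `GetInputTensor`, `GetOutputTensor`) are assumptions for
illustration only, not the actual interface in this change; the idea is simply
that use-case code depends on an abstract model type, while TFLM and
ExecuTorch implementations live under `fwk/tflm` and `fwk/ExecuTorch`.

```cpp
// Hypothetical sketch of a backend-agnostic framework interface.
// All names are illustrative assumptions, not the real fwk/ API.
#include <cstddef>
#include <cstdint>
#include <vector>

namespace fwk {

    /* Framework-neutral view of an input/output tensor. */
    struct Tensor {
        void*               data{nullptr};  /* Buffer owned by the backend. */
        std::vector<size_t> shape{};        /* Logical dimensions. */
    };

    /* Interface each backend (TFLM, ExecuTorch) would implement so that
     * use-case logic never touches framework-specific types. */
    class Model {
    public:
        virtual ~Model() = default;

        /* Load a model blob (e.g. a .tflite flatbuffer or a .pte program). */
        virtual bool LoadModel(const uint8_t* modelData, size_t modelSize) = 0;

        /* Run a single inference on the currently populated inputs. */
        virtual bool RunInference() = 0;

        virtual Tensor GetInputTensor(size_t index) = 0;
        virtual Tensor GetOutputTensor(size_t index) = 0;
    };

    /* A TFLM-backed subclass would wrap the usual interpreter under
     * fwk/tflm; an ExecuTorch-backed subclass under fwk/ExecuTorch would
     * wrap a loaded .pte program behind this same interface. */

} /* namespace fwk */
```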
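The NHWC → NCHW conversion mentioned above is a plain index permutation. A
minimal sketch is shown below; the function name and the float element type
are assumptions, and the actual common-API helper may operate on other types
or in place.

```cpp
// Minimal NHWC -> NCHW layout conversion sketch (name and element type
// are assumptions; shown here only to illustrate the index permutation).
#include <cstddef>
#include <vector>

// Re-orders a tensor stored as [N, H, W, C] into [N, C, H, W].
std::vector<float> ConvertNhwcToNchw(const std::vector<float>& src,
                                     size_t n, size_t h, size_t w, size_t c)
{
    std::vector<float> dst(src.size());
    for (size_t in = 0; in < n; ++in) {
        for (size_t ih = 0; ih < h; ++ih) {
            for (size_t iw = 0; iw < w; ++iw) {
                for (size_t ic = 0; ic < c; ++ic) {
                    const size_t srcIdx = ((in * h + ih) * w + iw) * c + ic;
                    const size_t dstIdx = ((in * c + ic) * h + ih) * w + iw;
                    dst[dstIdx] = src[srcIdx];
                }
            }
        }
    }
    return dst;
}
```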
Co-authored-by: Alex Tawse <alex.tawse@arm.com>
Change-Id: Ic69e4d86008036a35d1b47aeb541a2144b84d574
Signed-off-by: Kshitij Sisodia <kshitij.sisodia@arm.com>