Add user example for FP16 MatMul using Arm® Neon™
This example demonstrates the usage of the FP16 packing and matmul routines, which:

1. Pack the bias and the weights together into a single tensor.
2. Perform a matrix multiplication of the activations and the packed tensor.

All tensors are in the half-precision floating-point (FP16) data type.

Signed-off-by: Jakub Sujak <jakub.sujak@arm.com>
Reviewed-by: Jakub Sujak <jakub.sujak@arm.com>
Approved-by: Felix Johnny Thomasmathibalan <felixjohnny.thomasmathibalan@arm.com>