Add user example for FP16 MatMul using Arm® Neon™
This example demonstrates the usage of the FP16 packing and matmul routines, which:

1. Pack the bias and the weights together into a single tensor.
2. Perform a matrix multiplication of the activations and the packed tensor.

All tensors are in the half-precision floating-point (FP16) data type.

Signed-off-by: Jakub Sujak <jakub.sujak@arm.com>
Reviewed-by: Jakub Sujak <jakub.sujak@arm.com>
Approved-by: Felix Johnny Thomasmathibalan <felixjohnny.thomasmathibalan@arm.com>
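A minimal sketch of the two steps described above, assuming a simple packed layout in which each output column stores its bias followed by its K weights. The function names, the layout, and the use of the compiler-provided `_Float16` type (available on AArch64 with recent GCC/Clang) are illustrative assumptions and do not reflect the library's actual packing routines or API.

```cpp
// Illustrative sketch only: pack bias + weights into one buffer, then
// multiply activations by the packed tensor. All values are FP16.
#include <cstddef>
#include <vector>

using float16 = _Float16;  // assumption: toolchain/target with native FP16 support

// Pack the N x K weight matrix together with its N-element bias into a single
// buffer laid out as [bias[n], weights[n][0..K-1]] per output column n.
// This layout is a hypothetical choice for illustration.
std::vector<float16> pack_rhs(const std::vector<float16>& weights,  // N x K, row-major
                              const std::vector<float16>& bias,     // N
                              std::size_t n, std::size_t k) {
    std::vector<float16> packed(n * (k + 1));
    for (std::size_t col = 0; col < n; ++col) {
        packed[col * (k + 1)] = bias[col];
        for (std::size_t i = 0; i < k; ++i) {
            packed[col * (k + 1) + 1 + i] = weights[col * k + i];
        }
    }
    return packed;
}

// Multiply M x K activations by the packed tensor, producing an M x N output.
// The bias stored at the head of each packed column initialises the accumulator.
std::vector<float16> matmul_packed(const std::vector<float16>& lhs,     // M x K, row-major
                                   const std::vector<float16>& packed,  // from pack_rhs
                                   std::size_t m, std::size_t n, std::size_t k) {
    std::vector<float16> out(m * n);
    for (std::size_t row = 0; row < m; ++row) {
        for (std::size_t col = 0; col < n; ++col) {
            float acc = static_cast<float>(packed[col * (k + 1)]);  // bias
            for (std::size_t i = 0; i < k; ++i) {
                acc += static_cast<float>(lhs[row * k + i]) *
                       static_cast<float>(packed[col * (k + 1) + 1 + i]);
            }
            out[row * n + col] = static_cast<float16>(acc);
        }
    }
    return out;
}
```

Folding the bias into the packed buffer means the matmul loop can read the bias alongside the weights for each output column, which mirrors the "pack once, multiply many times" flow the example demonstrates.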