Commit 22ed2387 authored by Jakub Sujak's avatar Jakub Sujak

Add user example for FP16 MatMul using Arm® Neon™

This example demonstrates the use of the FP16 packing and matmul routines, which:

1. Pack the bias and the weights together into a single tensor.

2. Perform a matrix multiplication of the activations and the packed tensor.

All tensors are in the half-precision floating-point (FP16) data type.
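The two steps above can be sketched in NumPy. This is a conceptual illustration only, not the library's actual routines or memory layout: here the bias is packed as an extra trailing row of the weights tensor, and the matmul applies it by augmenting the activations with a column of ones. The function names `pack_rhs` and `matmul_packed` are hypothetical.

```python
import numpy as np

def pack_rhs(weights: np.ndarray, bias: np.ndarray) -> np.ndarray:
    """Pack the bias and the weights into a single FP16 tensor.

    Conceptual sketch: the bias becomes one extra row appended below the
    (K, N) weights, giving a (K + 1, N) packed tensor. Real packing
    routines typically use a blocked, cache-friendly layout instead.
    """
    return np.vstack([weights, bias[np.newaxis, :]]).astype(np.float16)

def matmul_packed(activations: np.ndarray, packed: np.ndarray) -> np.ndarray:
    """Multiply (M, K) activations by the packed (K + 1, N) tensor.

    A column of ones is appended to the activations so that the packed
    bias row is added implicitly during the matrix multiplication.
    """
    ones = np.ones((activations.shape[0], 1), dtype=np.float16)
    return np.hstack([activations, ones]) @ packed

# Small example: M x K activations, K x N weights, N bias, all FP16.
M, K, N = 4, 8, 3
rng = np.random.default_rng(0)
a = rng.standard_normal((M, K)).astype(np.float16)
w = rng.standard_normal((K, N)).astype(np.float16)
b = rng.standard_normal(N).astype(np.float16)

out = matmul_packed(a, pack_rhs(w, b))  # (M, N), equals a @ w + b
```

Packing the bias with the weights means the matmul kernel needs only two input tensors and can fuse the bias addition into the accumulation loop, which is the pattern the example in this commit demonstrates.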

Signed-off-by: Jakub Sujak <jakub.sujak@arm.com>

Reviewed-by: Jakub Sujak <jakub.sujak@arm.com>
Approved-by: Felix Johnny Thomasmathibalan <felixjohnny.thomasmathibalan@arm.com>
parent d4f9fe78
Pipeline #11526 passed with stages in 1 minute and 53 seconds