Half-precision Inference Doubles On-Device Inference Performance

Posted by Marat Dukhan and Frank Barchard, Software Engineers

CPUs deliver the widest reach for ML inference and remain the default target for TensorFlow Lite. Consequently, improving CPU inference performance is a top priority, and we are excited to announce that we doubled floating-point inference performance in TensorFlow Lite’s XNNPack…

Join us at the third Women in ML Symposium!

Posted by Sharbani Roy – Senior Director, Product Management, Google

We're back with the third annual Women in Machine Learning Symposium on December 7, 2023! Join us virtually from 9:30 am to 1:00 pm PT for an immersive and insightful set of deep dives for every level of Machine Learning experience. The Women in…
