Detecting fraud in real time using Redpanda and Pinecone

In data engineering, vector search engines enable efficient data retrieval and processing. Unlike traditional text-based search engines, vector search operates on numerical vector representations, enabling similarity search in high-dimensional spaces and handling multidimensional data effectively. Vector search engines are used in a wide range of fields, from natural language processing (NLP)…
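
To make the idea concrete, here is a minimal sketch of a similarity query against a Pinecone index, assuming the v3-style Pinecone Python client; the index name, the placeholder embedding function, and the score threshold are hypothetical illustrations, not details from the post.

```python
from pinecone import Pinecone  # assumes the v3-style Pinecone Python client

# Hypothetical index and API key; nothing here is taken from the original post.
pc = Pinecone(api_key="YOUR_API_KEY")
index = pc.Index("transactions")

def embed(txn: dict) -> list[float]:
    """Placeholder embedding: a real system would use a trained model to
    map each transaction to a high-dimensional vector."""
    return [txn["amount"] / 1000.0, txn["hour"] / 24.0]

# Find the historical transactions most similar to the incoming one and
# use their similarity scores as a rough fraud signal.
incoming = {"amount": 950.0, "hour": 3}
results = index.query(vector=embed(incoming), top_k=5, include_metadata=True)

for match in results.matches:
    if match.score > 0.95 and (match.metadata or {}).get("fraud"):
        print(f"Transaction resembles known fraud case {match.id}")
```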

Summarization of textual content in Alfresco repository with Amazon Bedrock

Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs), along with a broad set of capabilities needed to build generative AI applications, simplifying development with security, privacy, and responsible AI. Amazon Bedrock leverages AWS…
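
As an illustration only, the sketch below calls the Bedrock runtime API from Python to summarize a block of text, assuming boto3 is configured with Bedrock access; the model ID and the Anthropic-style request body are assumptions for the example, not details taken from the post.

```python
import json
import boto3

# Assumes AWS credentials and a region with Bedrock access are already configured.
bedrock = boto3.client("bedrock-runtime")

def summarize(text: str) -> str:
    # Hypothetical model choice; the request schema follows the Anthropic
    # Claude "messages" format used on Bedrock.
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 300,
        "messages": [
            {"role": "user",
             "content": [{"type": "text",
                          "text": f"Summarize the following document:\n\n{text}"}]}
        ],
    }
    response = bedrock.invoke_model(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",
        body=json.dumps(body),
    )
    payload = json.loads(response["body"].read())
    return payload["content"][0]["text"]

print(summarize("Alfresco stores contracts, invoices, and policy documents..."))
```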

TensorFlow 2.15 update: hot-fix for Linux installation issue

Posted by the TensorFlow team. We are releasing a hot-fix for an issue affecting the TensorFlow installation process. The TensorFlow 2.15.0 Python package was released such that it requested tensorrt-related packages that cannot be found unless the user installs them beforehand or provides additional installation flags. This dependency affected…

Half-precision Inference Doubles On-Device Inference Performance

Posted by Marat Dukhan and Frank Barchard, Software Engineers. CPUs deliver the widest reach for ML inference and remain the default target for TensorFlow Lite. Consequently, improving CPU inference performance is a top priority, and we are excited to announce that we doubled floating-point inference performance in TensorFlow Lite’s XNNPack…
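
For context, a common way to take advantage of half-precision CPU inference is to convert a model with float16 weights, so that the TensorFlow Lite runtime (and its XNNPack backend, on CPUs with native FP16 support) can execute it in reduced precision. The tiny Keras model below is just a stand-in for illustration, not something from the post.

```python
import tensorflow as tf

# Stand-in model; in practice this would be an existing tf.keras model.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Store weights in float16; on CPUs with native half-precision support the
# XNNPack backend can then run FP16 kernels, falling back to FP32 elsewhere.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]
tflite_model = converter.convert()

with open("model_fp16.tflite", "wb") as f:
    f.write(tflite_model)
```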
