Pinecone Systems Announces Pinecone 2.0 – From the Laboratory to Production

Pinecone Systems Inc., a machine-learning (ML) cloud infrastructure company, announced the release of Pinecone 2.0, which combines vector search with traditional metadata storage and filtering. Pinecone 2.0 provides granular control over search, ultra-low latencies, and up to a 10x reduction in infrastructure costs, making it possible for companies to replace traditional keyword-based search and recommendation systems with Deep Learning-powered vector search.

Edo Liberty, Founder and CEO of Pinecone, said, “The worlds of search and databases have been fundamentally changed by machine learning and deep learning. Companies are looking at the hyperscalers and waking up to the value of vector search. Pinecone 2.0 will help them realize that value at a fraction of the cost and effort.”

Deep Learning (DL) and Machine Learning (ML) models represent everything from documents to videos as vectors. This allows more relevant information to be retrieved from large amounts of data than traditional retrieval methods based on rules or text matching. Several companies, including Google, Spotify, Facebook, Amazon, Netflix, and Pinterest, use vector search for retrieval and recommendation.
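The core idea behind vector search can be illustrated with a minimal sketch (this is an illustration only, not Pinecone's implementation): items and queries are embedded as vectors, and the most relevant items are simply the vectors nearest to the query, here by cosine similarity.

```python
import numpy as np

def cosine_top_k(query, items, k=2):
    """Return indices of the k item vectors most similar to the query."""
    items = np.asarray(items, dtype=float)
    query = np.asarray(query, dtype=float)
    # Cosine similarity of every item against the query, in one pass.
    sims = items @ query / (np.linalg.norm(items, axis=1) * np.linalg.norm(query))
    return np.argsort(-sims)[:k].tolist()

# Toy 3-dimensional "embeddings" for four documents.
docs = [[1.0, 0.0, 0.0],
        [0.9, 0.1, 0.0],
        [0.0, 1.0, 0.0],
        [0.0, 0.0, 1.0]]
print(cosine_top_k([1.0, 0.05, 0.0], docs))  # → [0, 1]
```

In production, the brute-force scan above is replaced by an approximate nearest-neighbor index so that queries stay fast at millions of items.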

Pinecone 2.0 lets companies store item metadata (e.g. topic, author, category) and filter vector searches based on that information in a single step. This makes search results easier to control and eliminates slow pre- or post-filtering, so the results and recommendations users receive are more accurate, faster, and more personalized. Because the metadata engine is built into Pinecone’s proprietary vector index, filters on text (string) or numerical (float) metadata are applied to queries with minimal overhead.
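A simplified sketch of the single-step approach (an illustration of the concept, not Pinecone's engine): the metadata filter is evaluated during the vector scan itself, so items that fail the filter are never scored, unlike pre-filtering (which builds a candidate set first) or post-filtering (which discards results after ranking).

```python
import numpy as np

def filtered_search(query, vectors, metadata, filter_fn, k=1):
    """Score only items whose metadata passes filter_fn, in a single pass."""
    query = np.asarray(query, dtype=float)
    results = []
    for i, (vec, meta) in enumerate(zip(vectors, metadata)):
        if not filter_fn(meta):          # filter and search in one step
            continue
        v = np.asarray(vec, dtype=float)
        sim = float(v @ query / (np.linalg.norm(v) * np.linalg.norm(query)))
        results.append((i, sim))
    results.sort(key=lambda t: -t[1])
    return [i for i, _ in results[:k]]

vectors = [[1.0, 0.0], [0.95, 0.05], [0.0, 1.0]]
metadata = [{"author": "alice", "year": 2020},
            {"author": "bob",   "year": 2021},
            {"author": "alice", "year": 2021}]

# Nearest vector to the query among items whose author is "alice":
print(filtered_search([1.0, 0.0], vectors, metadata,
                      lambda m: m["author"] == "alice"))  # → [0]
```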

Another key enhancement in Pinecone 2.0 is hybrid storage, which addresses the other major hurdle for companies considering vector search: high operational costs. A vector search typically runs entirely in memory (RAM), making it prohibitively expensive for companies with millions of items in their catalogs. With Pinecone, customers can cut compute infrastructure costs for their applications by up to 10x while maintaining low latency and accurate results.
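The cost lever here is where the vectors live. As a rough sketch of the idea (an assumption for illustration, not Pinecone's actual design), a memory-mapped file lets a scan stream vectors from disk on demand instead of holding the whole catalog in RAM:

```python
import os
import tempfile
import numpy as np

# Write a "catalog" of 10,000 64-dimensional vectors to disk once.
path = os.path.join(tempfile.mkdtemp(), "vectors.npy")
np.save(path, np.random.rand(10_000, 64).astype(np.float32))

# mmap_mode="r" maps the file into memory on demand: pages are loaded
# lazily as they are scanned, so resident RAM stays far below the
# full catalog size.
vectors = np.load(path, mmap_mode="r")
query = np.random.rand(64).astype(np.float32)
scores = vectors @ query          # streamed from disk, not preloaded
best = int(np.argmax(scores))
print(best, vectors.shape)
```

The RAM-vs-disk trade-off is latency per lookup, which is why a hybrid design keeps a hot, frequently scanned portion of the index in memory.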
