ScaNN utilizes machine learning models to compute embeddings: high-dimensional vectors trained so that similar inputs are clustered closely together.
Embedding-based search is a technique designed to answer queries that rely on semantic understanding rather than simple indexable properties. By leveraging an understanding of language semantics to transform inputs such as text and images into embeddings, ScaNN makes it possible to search for abstract queries.
The machine learning models used with ScaNN are trained to map queries and database items to a common vector embedding space, such that the distance between embeddings carries semantic meaning. When a query is presented, the system computes the query's embedding and then finds the database embeddings closest to it. This second step is known as maximum inner-product search (MIPS).
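The MIPS step above can be sketched in a few lines of NumPy. This is a minimal, exact (brute-force) version for illustration — the data and function name are hypothetical, not ScaNN's API:

```python
import numpy as np

# Illustrative toy data: 1000 database embeddings in a 64-dim space.
rng = np.random.default_rng(0)
database = rng.standard_normal((1000, 64)).astype(np.float32)
query = rng.standard_normal(64).astype(np.float32)

def mips_brute_force(query, database, k=5):
    """Exact maximum inner-product search: score every database
    embedding against the query and return the top-k indices."""
    scores = database @ query                 # one inner product per item
    top_k = np.argpartition(-scores, k)[:k]   # unordered top-k indices
    return top_k[np.argsort(-scores[top_k])]  # sorted by descending score

neighbors = mips_brute_force(query, database)
```

Brute force scales linearly with the database size, which is exactly the cost that ScaNN's compression and pruning techniques are designed to avoid.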
To shorten the time required for MIPS, ScaNN applies a compression technique known as learned quantization, in which a codebook of vectors is trained from the database and used to approximately represent the database elements. Anisotropic vector quantization allows ScaNN to estimate more accurately the inner products that are likely to appear among the top MIPS results, achieving higher accuracy.
Announcing ScaNN: Efficient Vector Similarity Search
July 28, 2020