[jira] [Commented] (LUCENE-9136) Introduce IVFFlat to Lucene for ANN similarity search

2020-03-06 Thread Xin-Chun Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17053668#comment-17053668
 ] 

Xin-Chun Zhang commented on LUCENE-9136:


1. My personal git branch: 
[https://github.com/irvingzhang/lucene-solr/tree/jira/lucene-9136-ann-ivfflat].

2. The vector format is as follows, 

!image-2020-03-07-01-25-58-047.png|width=535,height=297!

 

Structure of IVF index meta is as follows,

!image-2020-03-07-01-27-12-859.png|width=606,height=276!

 

Structure of IVF data:

!image-2020-03-07-01-22-06-132.png|width=529,height=309!

3. The ann-benchmarks tool can be found at: 
[https://github.com/irvingzhang/ann-benchmarks].

Benchmark results (single thread, 2.5GHz * 2 CPUs, 16GB RAM, 
nprobe=8,16,32,64,128,256, centroids=4*sqrt(N), where N is the size of the dataset):
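For concreteness, the centroid counts implied by the heuristic above can be computed directly; the snippet below is plain arithmetic, not code from the benchmark tool:

{code:java}
// Centroid-count heuristic used in the benchmarks above: centroids = 4 * sqrt(N).
long gloveSize = 1_200_000L;                           // Glove-1.2M
int gloveCentroids = (int) (4 * Math.sqrt(gloveSize)); // ~4381 centroids
long siftSize = 1_000_000L;                            // Sift-1M
int siftCentroids = (int) (4 * Math.sqrt(siftSize));   // = 4000 centroids
{code}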

1) Glove-1.2M-25D-Angular: index build + training took 706s; QPS: 18.8~49.6; 
recall: 76.8%~99.7%

!glove-25-angular.png|width=653,height=450!

 

2) Glove-1.2M-100D-Angular: index build + training took 2487s; QPS: 12.2~38.3; 
recall: 65.8%~96.3%

!glove-100-angular.png|width=671,height=462!

3) Sift-1M-128D-Euclidean: index build + training took 2397s; QPS: 14.8~38.2; 
recall: 71.1%~99.2%

!sift-128-euclidean.png|width=684,height=471!

 

> Introduce IVFFlat to Lucene for ANN similarity search
> -
>
> Key: LUCENE-9136
> URL: https://issues.apache.org/jira/browse/LUCENE-9136
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Xin-Chun Zhang
>Priority: Major
> Attachments: glove-100-angular.png, glove-25-angular.png, 
> image-2020-03-07-01-22-06-132.png, image-2020-03-07-01-25-58-047.png, 
> image-2020-03-07-01-27-12-859.png, sift-128-euclidean.png
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Representation learning (RL) has been an established discipline in the 
> machine learning space for decades, but it has drawn tremendous attention 
> lately with the emergence of deep learning. The central problem of RL is to 
> determine an optimal representation of the input data. Once the data are 
> embedded into high-dimensional vectors, vector retrieval (VR) methods can be 
> applied to search for relevant items.
> With the rapid development of RL over the past few years, the technique has 
> been used extensively in industry, from online advertising to computer vision 
> and speech recognition. There exist many open-source implementations of VR 
> algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
> choices for potential users. However, the aforementioned implementations are 
> all written in C++ with no plan to support a Java interface, making them hard 
> to integrate into Java projects and unsuitable for those who are not familiar 
> with C/C++ [https://github.com/facebookresearch/faiss/issues/105]. 
> The algorithms for vector retrieval can be roughly classified into four 
> categories,
>  # Tree-based algorithms, such as KD-tree;
>  # Hashing methods, such as LSH (Locality-Sensitive Hashing);
>  # Product-quantization-based algorithms, such as IVFFlat;
>  # Graph-based algorithms, such as HNSW, SSG, NSG;
> where IVFFlat and HNSW are the most popular among all the VR algorithms.
> IVFFlat is better for high-precision applications such as face recognition, 
> while HNSW performs better in general scenarios including recommendation and 
> personalized advertisement. *The recall ratio of IVFFlat can be gradually 
> increased by adjusting the query parameter (nprobe), while it is hard for HNSW 
> to improve its accuracy*. In theory, IVFFlat can achieve a 100% recall ratio. 
> Recently, the implementation of HNSW (Hierarchical Navigable Small World, 
> LUCENE-9004) for Lucene has made great progress. The issue draws the attention 
> of those who are interested in Lucene or hope to use HNSW with Solr/Lucene. 
> As an alternative for solving ANN similarity search problems, IVFFlat is also 
> very popular with many users and supporters. Compared with HNSW, IVFFlat has 
> a smaller index size but requires k-means clustering, while HNSW is faster in 
> query (no training required) but requires extra storage for saving graphs 
> [indexing 1M 
> vectors|https://github.com/facebookresearch/faiss/wiki/Indexing-1M-vectors]. 
> Another advantage is that IVFFlat can be faster and more accurate when GPU 
> parallel computing is enabled (currently not supported in Java). Both 
> algorithms have their merits and demerits. Since HNSW is now under 
> development, it may be better to provide both implementations (HNSW and 
> IVFFlat) for potential users who face very different scenarios and want more 
> choices.
> The latest branch is 
> [*lucene-9136-ann-ivfflat*|https://github.com/irvingzhang/lucene-solr/commits/jira/lucene-9136-ann-ivfflat].
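The nprobe trade-off described above can be illustrated with a minimal, self-contained sketch of an IVFFlat query (plain Java arrays and Euclidean distance; the names and structure are illustrative only, not the API of the patch):

{code:java}
import java.util.Arrays;
import java.util.PriorityQueue;

/** Minimal IVFFlat query sketch: probe the nprobe nearest clusters, then scan them exhaustively. */
public class IvfFlatQuerySketch {

  static float squaredDistance(float[] a, float[] b) {
    float sum = 0f;
    for (int i = 0; i < a.length; i++) {
      float d = a[i] - b[i];
      sum += d * d;
    }
    return sum;
  }

  /**
   * @param centroids c cluster centers produced by k-means training
   * @param clusters  clusters[i] holds the ids of the vectors assigned to centroid i
   * @param vectors   the raw ("flat", uncompressed) vectors, indexed by id
   */
  static int[] search(float[] query, float[][] centroids, int[][] clusters,
                      float[][] vectors, int nprobe, int topK) {
    // 1. Rank all centroids by distance to the query; keep the nprobe closest.
    Integer[] order = new Integer[centroids.length];
    for (int i = 0; i < order.length; i++) order[i] = i;
    Arrays.sort(order, (x, y) -> Float.compare(
        squaredDistance(query, centroids[x]), squaredDistance(query, centroids[y])));

    // 2. Scan only the selected clusters, keeping a top-k max-heap on distance.
    //    Larger nprobe => more vectors scanned => higher recall but lower QPS.
    PriorityQueue<double[]> heap = new PriorityQueue<>((x, y) -> Double.compare(y[0], x[0]));
    for (int p = 0; p < Math.min(nprobe, order.length); p++) {
      for (int id : clusters[order[p]]) {
        float d = squaredDistance(query, vectors[id]);
        if (heap.size() < topK) {
          heap.add(new double[] {d, id});
        } else if (d < heap.peek()[0]) {
          heap.poll();
          heap.add(new double[] {d, id});
        }
      }
    }

    // 3. Drain the heap into ascending-distance order.
    int[] result = new int[heap.size()];
    for (int i = result.length - 1; i >= 0; i--) result[i] = (int) heap.poll()[1];
    return result;
  }
}
{code}

With nprobe equal to the total number of centroids this degenerates to brute-force search, which is why recall can approach 100% in theory.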

[jira] [Issue Comment Deleted] (LUCENE-9136) Introduce IVFFlat to Lucene for ANN similarity search

2020-03-06 Thread Xin-Chun Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xin-Chun Zhang updated LUCENE-9136:
---
Comment: was deleted


[jira] [Updated] (LUCENE-9136) Introduce IVFFlat to Lucene for ANN similarity search

2020-03-06 Thread Xin-Chun Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xin-Chun Zhang updated LUCENE-9136:
---
Attachment: sift-128-euclidean.png


[jira] [Updated] (LUCENE-9136) Introduce IVFFlat to Lucene for ANN similarity search

2020-03-06 Thread Xin-Chun Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xin-Chun Zhang updated LUCENE-9136:
---
Attachment: glove-25-angular.png


[jira] [Updated] (LUCENE-9136) Introduce IVFFlat to Lucene for ANN similarity search

2020-03-06 Thread Xin-Chun Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xin-Chun Zhang updated LUCENE-9136:
---
Attachment: glove-100-angular.png



[jira] [Updated] (LUCENE-9136) Introduce IVFFlat to Lucene for ANN similarity search

2020-03-06 Thread Xin-Chun Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xin-Chun Zhang updated LUCENE-9136:
---
Attachment: image-2020-03-07-01-27-12-859.png


[jira] [Updated] (LUCENE-9136) Introduce IVFFlat to Lucene for ANN similarity search

2020-03-06 Thread Xin-Chun Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xin-Chun Zhang updated LUCENE-9136:
---
Attachment: image-2020-03-07-01-25-58-047.png


[jira] [Updated] (LUCENE-9136) Introduce IVFFlat to Lucene for ANN similarity search

2020-03-06 Thread Xin-Chun Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xin-Chun Zhang updated LUCENE-9136:
---
Attachment: image-2020-03-07-01-22-06-132.png


[jira] [Updated] (LUCENE-9136) Introduce IVFFlat to Lucene for ANN similarity search

2020-03-06 Thread Xin-Chun Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xin-Chun Zhang updated LUCENE-9136:
---
Attachment: (was: 
1581409981369-9dea4099-4e41-4431-8f45-a3bb8cac46c0.png)


[jira] [Issue Comment Deleted] (LUCENE-9136) Introduce IVFFlat to Lucene for ANN similarity search

2020-03-06 Thread Xin-Chun Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xin-Chun Zhang updated LUCENE-9136:
---
Comment: was deleted

(was: The index format of IVFFlat is organized as follows, 
!1581409981369-9dea4099-4e41-4431-8f45-a3bb8cac46c0.png!

In general, the number of centroids lies within the interval [4 * sqrt(N), 16 * 
sqrt(N)], where N is the size of the data set. We use 4 * sqrt(N) as the actual 
number of centroids, denoted by c, to balance accuracy against computational 
load. The full data set is used for training if its size is no larger than 
200,000; otherwise, (128 * c) points are selected after shuffling in order to 
accelerate training.
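The selection rule just described, as a short sketch (the class name and the use of Collections.shuffle are illustrative; the actual patch may sample differently):

{code:java}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

class TrainingSetSelector {
  /** Picks the vectors fed to k-means, following the rule described above. */
  static List<float[]> selectTrainingSet(List<float[]> dataset) {
    int n = dataset.size();
    int c = (int) (4 * Math.sqrt(n));  // number of centroids
    if (n <= 200_000) {
      return dataset;                  // small set: train on all points
    }
    List<float[]> shuffled = new ArrayList<>(dataset);
    Collections.shuffle(shuffled);     // random sample after shuffling
    // e.g. sift1M: c = 4000, so 128 * c = 512,000 training points
    return shuffled.subList(0, Math.min(128 * c, n));
  }
}
{code}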

Experiments have been conducted on a large data set (sift1M, 
[http://corpus-texmex.irisa.fr/]) to verify the implementation of IVFFlat. The 
base data set (sift_base.fvecs) contains 1,000,000 vectors with 128 dimensions, 
and 10,000 queries (sift_query.fvecs) are used for recall testing. The recall 
ratio is defined as

Recall = (number of result vectors found in the ground truth) / (number of 
queries * TopK), where the number of queries = 10,000 and TopK = 100. The 
results are as follows (single thread and single segment),

 
||nprobe||avg. search time (ms)||recall (%)||
|8|16.3827|44.24|
|16|16.5834|58.04|
|32|19.2031|71.55|
|64|24.7065|83.30|
|128|34.9165|92.03|
|256|60.5844|97.18|

The test code can be found at 
[KnnIvfAndGraphPerformTester.java|https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/KnnIvfAndGraphPerformTester.java].
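The recall definition above, written out as a small helper (a sketch only; the ground truth is assumed to be one Set of ids per query):

{code:java}
import java.util.List;
import java.util.Set;

class RecallSketch {
  /** Recall = (result ids present in the ground truth) / (numQueries * topK). */
  static double recall(int[][] results, List<Set<Integer>> groundTruth, int topK) {
    long hits = 0;
    for (int q = 0; q < results.length; q++) {
      for (int id : results[q]) {
        if (groundTruth.get(q).contains(id)) hits++;
      }
    }
    return (double) hits / ((double) results.length * topK);
  }
}
{code}

For the table above, nprobe=256 thus means 971,800 of the 10,000 * 100 = 1,000,000 ground-truth entries were retrieved.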

 

 

 

 )


[jira] [Updated] (LUCENE-9136) Introduce IVFFlat to Lucene for ANN similarity search

2020-03-06 Thread Xin-Chun Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xin-Chun Zhang updated LUCENE-9136:
---
Attachment: (was: image-2020-02-16-15-05-02-451.png)



[jira] [Issue Comment Deleted] (LUCENE-9136) Introduce IVFFlat to Lucene for ANN similarity search

2020-03-06 Thread Xin-Chun Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xin-Chun Zhang updated LUCENE-9136:
---
Comment: was deleted

(was: Hi, [~jtibshirani], thanks for your suggestions!

??"I wonder if this clustering-based approach could fit more closely in the 
current search framework. In the current prototype, we keep all the cluster 
information on-heap. We could instead try storing each cluster as its own 
'term' with a postings list. The kNN query would then be modelled as an 'OR' 
over these terms."??

In the previous implementation 
([https://github.com/irvingzhang/lucene-solr/commit/eb5f79ea7a705595821f73f80a0c5752061869b2]),
 the cluster information is divided into two parts – meta (.ifi) and data (.ifd) 
– as shown in the following figure, where each cluster's postings list is 
stored in the data file (.ifd) rather than kept on-heap. A major concern with 
this implementation is the read performance of the cluster data, since reads 
are very frequent during kNN search. I will test and check the performance. 

!image-2020-02-16-15-05-02-451.png!

??"Because of this concern, it could be nice to include benchmarks for index 
time (in addition to QPS)..."??

Many thanks! I will check the links you mentioned and consider optimizing the 
clustering cost. In addition, more benchmarks will be added soon.

 
h2. *UPDATE – Feb. 24, 2020*

I have added a new implementation of the IVF index, marked as *V2*, under the 
package org.apache.lucene.codecs.lucene90. In the current implementation, the 
IVF index is divided into two files with suffixes .ifi and .ifd, respectively. 
The .ifd file is read whenever cluster information is needed. The experiments 
were conducted on the sift1M data set (test code: 
[https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/KnnIvfPerformTester.java]);
 detailed results are as follows,
 # add documents – 3921 ms;
 # commit – 3912286 ms (mainly spent on k-means training: 10 iterations, 4000 
centroids, 512,000 vectors in total used for training);
 # R@100 recall time and recall ratio are listed in the following table

 
||nprobe||avg. search time (ms)||recall ratio (%)||
|8|28.0755|44.154|
|16|27.1745|57.9945|
|32|32.986|71.7003|
|64|40.4082|83.50471|
|128|50.9569|92.07929|
|256|73.923|97.150894|

 Compared with the on-heap implementation of the IVF index, the query time 
increases significantly (22%~71%). Actually, the IVF index is comprised of 
unique docIDs and will not take up too much memory. *There is a small argument 
about whether to keep the cluster information on-heap or not. Hope to hear more 
suggestions.*

 

 )
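To make the on-heap/off-heap trade-off discussed in the quoted comment concrete, here is a sketch of what an on-demand read of one cluster's postings from the .ifd data file could look like. It assumes the .ifi meta file has already supplied per-cluster offsets and that postings are stored as a vInt count followed by delta-encoded docIDs; this layout is an assumption for illustration, not the actual format of the patch. Only standard Lucene Directory/IndexInput calls are used:

{code:java}
import java.io.IOException;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.IOContext;
import org.apache.lucene.store.IndexInput;

/** Sketch: resolve one cluster's docID postings from the data file on demand. */
class IvfDataReaderSketch {
  private final IndexInput data;  // the .ifd data file, opened once
  private final long[] offsets;   // per-cluster start offsets, loaded from the .ifi meta file

  IvfDataReaderSketch(Directory dir, String dataFileName, long[] offsets) throws IOException {
    this.data = dir.openInput(dataFileName, IOContext.READ);
    this.offsets = offsets;
  }

  /** Reads one cluster's postings: a vInt count followed by delta-encoded docIDs. */
  int[] readCluster(int clusterOrd) throws IOException {
    data.seek(offsets[clusterOrd]);  // random access instead of an on-heap lookup
    int size = data.readVInt();
    int[] docIds = new int[size];
    int prev = 0;
    for (int i = 0; i < size; i++) {
      prev += data.readVInt();       // delta decoding keeps the file compact
      docIds[i] = prev;
    }
    return docIds;
  }
}
{code}

Each probed cluster then costs a seek plus a short sequential read on every query, which is consistent with the 22%~71% query-time increase reported above; caching hot clusters on-heap would be a natural middle ground.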


[jira] [Comment Edited] (LUCENE-9136) Introduce IVFFlat to Lucene for ANN similarity search

2020-03-05 Thread Xin-Chun Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17052727#comment-17052727
 ] 

Xin-Chun Zhang edited comment on LUCENE-9136 at 3/6/20, 3:34 AM:
-

Hi, [~jtibshirani], thanks for your excellent work!

??I was thinking we could actually reuse the existing `PostingsFormat` and 
`DocValuesFormat` implementations.??

Yes, the code could be simpler by reusing these formats. But I agree with 
[~tomoko] that ANN search is a pretty new feature for Lucene, so it's better to 
use a dedicated format for maintenance reasons. Moreover, if we are going to use 
a dedicated vector format for HNSW, this format should also be applied to 
IVFFlat, because IVFFlat and HNSW are used for the same purpose of ANN search. 
It may be strange to users if IVFFlat and HNSW behave completely differently.

 

??In particular, it doesn’t require random access for doc values, they are only 
accessed through forward iteration.??

Actually, we need random access to the vector values! For a typical search 
engine, we are going to retrieve the best-matched documents after obtaining the 
TopK docIDs. Retrieving vectors via these docIDs requires random access to the 
vector values.
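
To make the random-access requirement concrete, here is a minimal sketch; the VectorReader interface is an assumption for illustration only, not an actual Lucene API:

{code:java}
// Hypothetical sketch: fetching vectors for the TopK docIDs needs random
// access by docID, because the ids arrive ordered by score rather than by
// docID. VectorReader is an assumed interface, not an actual Lucene API.
import java.util.LinkedHashMap;
import java.util.Map;

public class TopKVectorFetch {
  interface VectorReader {
    float[] vectorValue(int docId); // random access by docID
  }

  static Map<Integer, float[]> fetch(VectorReader reader, int[] topDocIds) {
    Map<Integer, float[]> vectors = new LinkedHashMap<>();
    for (int docId : topDocIds) {
      // topDocIds is sorted by score, so docIDs come in arbitrary order;
      // a forward-only iterator could not serve this access pattern.
      vectors.put(docId, reader.vectorValue(docId));
    }
    return vectors;
  }
}
{code}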



[jira] [Updated] (LUCENE-9136) Introduce IVFFlat to Lucene for ANN similarity search

2020-03-01 Thread Xin-Chun Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xin-Chun Zhang updated LUCENE-9136:
---
Description: 
Representation learning (RL) has been an established discipline in the machine 
learning space for decades, but it has drawn tremendous attention lately with 
the emergence of deep learning. The central problem of RL is to determine an 
optimal representation of the input data. By embedding the data into a 
high-dimensional vector, the vector retrieval (VR) method is then applied to 
search for the relevant items.

With the rapid development of RL over the past few years, the technique has 
been used extensively in industry, from online advertising to computer vision 
and speech recognition. There exist many open-source implementations of VR 
algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
choices for potential users. However, the aforementioned implementations are 
all written in C++, with no plan to support a Java interface, which makes them 
hard to integrate into Java projects and hard to use for those who are not 
familiar with C/C++ [https://github.com/facebookresearch/faiss/issues/105]. 

The algorithms for vector retrieval can be roughly classified into four 
categories,
 # Tree-based algorithms, such as KD-tree;
 # Hashing methods, such as LSH (Locality Sensitive Hashing);
 # Product quantization based algorithms, such as IVFFlat;
 # Graph-based algorithms, such as HNSW, SSG, NSG;

where IVFFlat and HNSW are the most popular among all the VR algorithms.

IVFFlat is better for high-precision applications such as face recognition, 
while HNSW performs better in general scenarios including recommendation and 
personalized advertisement. *The recall ratio of IVFFlat can be gradually 
increased by adjusting the query parameter (nprobe), while it's hard for HNSW 
to improve its accuracy*. In theory, IVFFlat could achieve a 100% recall ratio. 

Recently, the implementation of HNSW (Hierarchical Navigable Small World, 
LUCENE-9004) for Lucene has made great progress. The issue draws the attention 
of those who are interested in Lucene or hope to use HNSW with Solr/Lucene. 

As an alternative for solving ANN similarity search problems, IVFFlat is also 
very popular with many users and supporters. Compared with HNSW, IVFFlat has a 
smaller index size but requires k-means clustering, while HNSW is faster in 
queries (no training required) but requires extra storage for saving graphs 
[indexing 1M vectors|https://github.com/facebookresearch/faiss/wiki/Indexing-1M-vectors]. 
Another advantage is that IVFFlat can be faster and more accurate when GPU 
parallel computing is enabled (currently not supported in Java). Both 
algorithms have their merits and demerits. Since HNSW is now under development, 
it may be better to provide both implementations (HNSW && IVFFlat) for 
potential users who are faced with very different scenarios and want more 
choices.

The latest branch is 
[*lucene-9136-ann-ivfflat*|https://github.com/irvingzhang/lucene-solr/commits/jira/lucene-9136-ann-ivfflat]


[jira] [Commented] (LUCENE-9004) Approximate nearest vector search

2020-02-26 Thread Xin-Chun Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17045570#comment-17045570
 ] 

Xin-Chun Zhang commented on LUCENE-9004:


As mentioned by [~tomoko], the static cache (for the graph structure) isn't a 
good implementation. I have some ideas to share,
 # Do not cache the whole graph structure in memory. We could cache only the 
entry points rather than the whole graph structure. When searching for the 
nearest neighbors, we only care about the neighbors of the current point, and 
each point is visited only once. Therefore, we could read a point's neighbors 
when it is actually visited. There's a simple implementation for HNSW (marked 
as *_searchNeighborsV2_*) in my personal branch 
([https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/java/org/apache/lucene/util/hnsw/HNSWGraphReader.java]).
 The implementation could be further optimized; for example, we could store and 
read neighbors by layer, because we only need the neighbors in the current 
search layer.
 # If higher search performance is preferred, we could keep the whole graph 
structure in the graph reader (similar to the implementation of 
_Lucene90IvfFlatIndexReader_, 
[https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/java/org/apache/lucene/codecs/lucene90/Lucene90IvfFlatIndexReader.java]).
 The on-heap cache would be released when the reader is closed.

It's likely not the best solution to the static cache problem, just a starting 
point for discussion.
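
To make idea #1 concrete, a minimal sketch follows: only the entry points are cached, and a node's neighbors are read from the index when the node is actually visited. GraphStore and its methods are assumed names for illustration (not the HNSWGraphReader API in the branch), and the traversal is a simplified skeleton rather than the real best-first HNSW search:

{code:java}
// Hypothetical sketch of idea #1: cache only the entry points on-heap and
// load a node's neighbor list on demand when the node is first visited.
// GraphStore and its methods are assumed names for illustration.
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.Set;

public class LazyGraphTraversal {
  interface GraphStore {
    int[] entryPoints();           // small, can stay on-heap
    int[] loadNeighbors(int node); // read on demand from the index
  }

  static Set<Integer> visit(GraphStore store) {
    Set<Integer> visited = new HashSet<>();
    Deque<Integer> frontier = new ArrayDeque<>();
    for (int ep : store.entryPoints()) {
      frontier.push(ep);
    }
    while (!frontier.isEmpty()) {
      int node = frontier.pop();
      if (!visited.add(node)) {
        continue; // each point is visited only once
      }
      for (int neighbor : store.loadNeighbors(node)) {
        if (!visited.contains(neighbor)) {
          frontier.push(neighbor);
        }
      }
    }
    return visited;
  }
}
{code}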

> Approximate nearest vector search
> -
>
> Key: LUCENE-9004
> URL: https://issues.apache.org/jira/browse/LUCENE-9004
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Michael Sokolov
>Priority: Major
> Attachments: hnsw_layered_graph.png
>
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> "Semantic" search based on machine-learned vector "embeddings" representing 
> terms, queries and documents is becoming a must-have feature for a modern 
> search engine. SOLR-12890 is exploring various approaches to this, including 
> providing vector-based scoring functions. This is a spinoff issue from that.
> The idea here is to explore approximate nearest-neighbor search. Researchers 
> have found an approach based on navigating a graph that partially encodes the 
> nearest neighbor relation at multiple scales can provide accuracy > 95% (as 
> compared to exact nearest neighbor calculations) at a reasonable cost. This 
> issue will explore implementing HNSW (hierarchical navigable small-world) 
> graphs for the purpose of approximate nearest vector search (often referred 
> to as KNN or k-nearest-neighbor search).
> At a high level the way this algorithm works is this. First assume you have a 
> graph that has a partial encoding of the nearest neighbor relation, with some 
> short and some long-distance links. If this graph is built in the right way 
> (has the hierarchical navigable small world property), then you can 
> efficiently traverse it to find nearest neighbors (approximately) in log N 
> time where N is the number of nodes in the graph. I believe this idea was 
> pioneered in  [1]. The great insight in that paper is that if you use the 
> graph search algorithm to find the K nearest neighbors of a new document 
> while indexing, and then link those neighbors (undirectedly, ie both ways) to 
> the new document, then the graph that emerges will have the desired 
> properties.
> The implementation I propose for Lucene is as follows. We need two new data 
> structures to encode the vectors and the graph. We can encode vectors using a 
> light wrapper around {{BinaryDocValues}} (we also want to encode the vector 
> dimension and have efficient conversion from bytes to floats). For the graph 
> we can use {{SortedNumericDocValues}} where the values we encode are the 
> docids of the related documents. Encoding the interdocument relations using 
> docids directly will make it relatively fast to traverse the graph since we 
> won't need to lookup through an id-field indirection. This choice limits us 
> to building a graph-per-segment since it would be impractical to maintain a 
> global graph for the whole index in the face of segment merges. However 
> graph-per-segment is very natural at search time - we can traverse each 
> segments' graph independently and merge results as we do today for term-based 
> search.
> At index time, however, merging graphs is somewhat challenging. While 
> indexing we build a graph incrementally, performing searches to construct 
> links among neighbors. When merging segments we must construct a new graph 
> containing elements of all the merged segments. Ideally we would somehow 
> preserve the work done when building the initial graphs, but at least as a 
> start I'd propose we construct a new 

[jira] [Comment Edited] (LUCENE-9136) Introduce IVFFlat to Lucene for ANN similarity search

2020-02-24 Thread Xin-Chun Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17037727#comment-17037727
 ] 

Xin-Chun Zhang edited comment on LUCENE-9136 at 2/24/20 10:01 AM:
--

Hi, [~jtibshirani], thanks for your suggestions!

??"I wonder if this clustering-based approach could fit more closely in the 
current search framework. In the current prototype, we keep all the cluster 
information on-heap. We could instead try storing each cluster as its own 
'term' with a postings list. The kNN query would then be modelled as an 'OR' 
over these terms."??

In the previous implementation 
([https://github.com/irvingzhang/lucene-solr/commit/eb5f79ea7a705595821f73f80a0c5752061869b2]),
 the cluster information is divided into two parts – meta (.ifi) and data 
(.ifd) – as shown in the following figure, where each cluster with its postings 
list is stored in the data file (.ifd) and not kept on-heap. A major concern 
with this implementation is the read performance of the cluster data, since 
reads are very frequent during kNN search. I will test and check the 
performance. 

!image-2020-02-16-15-05-02-451.png!

??"Because of this concern, it could be nice to include benchmarks for index 
time (in addition to QPS)..."??

Many thanks! I will check the links you mentioned and consider optimizing the 
clustering cost. In addition, more benchmarks will be added soon.
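
For illustration, a minimal sketch of the meta/data split described above, assuming (hypothetically) that the meta file records an (offset, size) entry per cluster and that the data file stores vint-encoded docIDs; this is not the actual .ifi/.ifd layout, just the read path in miniature:

{code:java}
// Hypothetical sketch of the meta (.ifi) / data (.ifd) split: the meta
// entries map each centroid to an (offset, size) in the data file, and a
// cluster's postings are read only when that cluster is probed. The
// layout (plain vint-encoded docIDs) is an assumption for illustration.
import java.io.IOException;
import org.apache.lucene.store.IndexInput;

public class ClusterDataReader {
  static final class ClusterEntry {
    final long offset; // start of the cluster's postings in the data file
    final int size;    // number of docIDs in the cluster

    ClusterEntry(long offset, int size) {
      this.offset = offset;
      this.size = size;
    }
  }

  private final ClusterEntry[] meta; // loaded once from the meta file
  private final IndexInput data;     // data file, accessed on demand

  ClusterDataReader(ClusterEntry[] meta, IndexInput data) {
    this.meta = meta;
    this.data = data;
  }

  int[] readDocIds(int centroid) throws IOException {
    ClusterEntry entry = meta[centroid];
    data.seek(entry.offset);
    int[] docIds = new int[entry.size];
    for (int i = 0; i < entry.size; i++) {
      docIds[i] = data.readVInt();
    }
    return docIds;
  }
}
{code}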

 
h2. *UPDATE – Feb. 24, 2020*

I have added a new implementation for the IVF index, marked as *V2*, under the 
package org.apache.lucene.codecs.lucene90. In the current implementation, 
the IVF index has been divided into two files with suffixes .ifi and .ifd, 
respectively. The .ifd file will be read if cluster information is needed. The 
experiments are conducted on the dataset sift1M (test code: 
[https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/KnnIvfPerformTester.java]);
 detailed results are as follows,
 # add document -- 3921 ms;
 # commit -- 3912286 ms (mainly spent on k-means training: 10 iterations, 4000 
centroids, in total 512,000 vectors used for training);
 # R@100 search time and recall ratio are listed in the following table

 
||nprobe||avg. search time (ms)||recall ratio (%)||
|8|28.0755|44.154|
|16|27.1745|57.9945|
|32|32.986|71.7003|
|64|40.4082|83.50471|
|128|50.9569|92.07929|
|256|73.923|97.150894|

 Compared with the on-heap implementation of the IVF index, the query time 
increases significantly (22%~71%). Actually, the IVF index is comprised of 
unique docIDs and will not take up too much memory. *There is a small argument 
about whether to keep the cluster information on-heap or not. Hope to hear more 
suggestions.*

 

 



[jira] [Updated] (LUCENE-9136) Introduce IVFFlat to Lucene for ANN similarity search

2020-02-15 Thread Xin-Chun Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xin-Chun Zhang updated LUCENE-9136:
---
Attachment: image-2020-02-16-15-05-02-451.png




[jira] [Updated] (LUCENE-9136) Introduce IVFFlat to Lucene for ANN similarity search

2020-02-15 Thread Xin-Chun Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xin-Chun Zhang updated LUCENE-9136:
---
Attachment: (was: image-2020-02-16-14-36-54-478.png)




[jira] [Updated] (LUCENE-9136) Introduce IVFFlat to Lucene for ANN similarity search

2020-02-15 Thread Xin-Chun Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xin-Chun Zhang updated LUCENE-9136:
---
Attachment: image-2020-02-16-14-36-54-478.png




[jira] [Commented] (LUCENE-9136) Introduce IVFFlat to Lucene for ANN similarity search

2020-02-12 Thread Xin-Chun Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17035213#comment-17035213
 ] 

Xin-Chun Zhang commented on LUCENE-9136:


The index format of IVFFlat is organized as follows, 
!1581409981369-9dea4099-4e41-4431-8f45-a3bb8cac46c0.png!

In general, the number of centroids lies within the interval [4 * sqrt(N), 16 * 
sqrt(N)], where N is the data set size. We use c = 4 * sqrt(N) as the actual 
number of centroids, to balance accuracy against computational load. The full 
data set is used for training if its size is no larger than 200,000; otherwise, 
(128 * c) points are selected after shuffling in order to accelerate training.
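
In code form, the sizing rule stated above (a direct transcription of the constants in this comment):

{code:java}
// Sizing rule described above: c = 4 * sqrt(N) centroids; train on the
// full data set when N <= 200,000, otherwise on 128 * c shuffled points.
public class IvfTrainingSize {
  static int numCentroids(int n) {
    return (int) (4 * Math.sqrt(n));
  }

  static int trainingSampleSize(int n) {
    return n <= 200_000 ? n : Math.min(n, 128 * numCentroids(n));
  }

  public static void main(String[] args) {
    int n = 1_000_000; // sift1M
    System.out.println(numCentroids(n));       // 4000
    System.out.println(trainingSampleSize(n)); // 512000
  }
}
{code}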

Experiments have been conducted on a large data set (sift1M, 
[http://corpus-texmex.irisa.fr/]) to verify the implementation of IVFFlat. The 
base data set (sift_base.fvecs) contains 1,000,000 vectors with 128 dimensions, 
and 10,000 queries (sift_query.fvecs) are used for recall testing. The recall 
ratio is computed as

Recall = (retrieved vectors found in the ground truth) / (number of queries * TopK),

where number of queries = 10,000 and TopK = 100. The results are as follows 
(single thread and single segment),

 
||nprobe||avg. search time (ms)||recall (%)||
|8|16.3827|44.24|
|16|16.5834|58.04|
|32|19.2031|71.55|
|64|24.7065|83.30|
|128|34.9165|92.03|
|256|60.5844|97.18|

The test code can be found in 
[https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/KnnIvfAndGraphPerformTester.java].
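
For reference, the recall formula above as a small self-contained sketch; the input shapes (one array of TopK docIDs per query) are assumptions for illustration:

{code:java}
// Recall = (retrieved ids found in the ground truth) / (numQueries * topK),
// matching the formula above. Input shapes are assumed for illustration.
import java.util.HashSet;
import java.util.Set;

public class RecallAtK {
  static double recall(int[][] retrieved, int[][] groundTruth, int topK) {
    long hits = 0;
    for (int q = 0; q < retrieved.length; q++) {
      Set<Integer> truth = new HashSet<>();
      for (int id : groundTruth[q]) {
        truth.add(id);
      }
      for (int id : retrieved[q]) {
        if (truth.contains(id)) {
          hits++;
        }
      }
    }
    return (double) hits / ((double) retrieved.length * topK);
  }
}
{code}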

 

 

 

 


[jira] [Updated] (LUCENE-9136) Introduce IVFFlat to Lucene for ANN similarity search

2020-02-12 Thread Xin-Chun Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xin-Chun Zhang updated LUCENE-9136:
---
Attachment: 1581409981369-9dea4099-4e41-4431-8f45-a3bb8cac46c0.png



[jira] [Comment Edited] (LUCENE-9136) Introduce IVFFlat to Lucene for ANN similarity search

2020-02-12 Thread Xin-Chun Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17019507#comment-17019507
 ] 

Xin-Chun Zhang edited comment on LUCENE-9136 at 2/12/20 9:33 AM:
-

I worked on this issue for about three to four days, and it now works fine for 
searching.

My personal dev branch is available on GitHub: 
[https://github.com/irvingzhang/lucene-solr/tree/jira/LUCENE-9136]. The index 
format of IVFFlat (only one meta file with suffix .ifi) is shown in the class 
Lucene90IvfFlatIndexFormat. In my implementation, the clustering process is 
optimized when the number of vectors is sufficiently large (e.g. > 200,000 per 
segment): a subset is selected after shuffling for training, thereby saving 
time and memory. The insertion performance of IVFFlat is better because no 
extra work is required on insertion, while HNSW needs to maintain the graph. 
However, IVFFlat consumes more time in flushing because of the k-means 
clustering.
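
A minimal sketch of the shuffle-then-sample subset selection mentioned above (class and method names are illustrative, not the branch's actual code):

{code:java}
// Select a random subset of vectors for k-means training, as described
// above: shuffle the indices (Fisher-Yates), then take the first
// sampleSize of them. Names are illustrative.
import java.util.Random;

public class TrainingSubset {
  static float[][] sample(float[][] vectors, int sampleSize, long seed) {
    int n = vectors.length;
    int[] idx = new int[n];
    for (int i = 0; i < n; i++) {
      idx[i] = i;
    }
    Random rnd = new Random(seed);
    for (int i = n - 1; i > 0; i--) { // Fisher-Yates shuffle
      int j = rnd.nextInt(i + 1);
      int tmp = idx[i];
      idx[i] = idx[j];
      idx[j] = tmp;
    }
    int m = Math.min(sampleSize, n);
    float[][] subset = new float[m][];
    for (int i = 0; i < m; i++) {
      subset[i] = vectors[idx[i]];
    }
    return subset;
  }
}
{code}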

My test cases show that the query performance of IVFFlat is better than that of 
HNSW, even though HNSW uses a cache for graphs while IVFFlat has no cache. And 
its recall is pretty high (avg. time < 10 ms and recall > 96% over a set of 5 
random vectors with 100 dimensions). My test class for IVFFlat is under the 
directory 
[https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/].
 The performance comparison between IVFFlat and HNSW is in the class 
TestKnnGraphAndIvfFlat.

The work is still in its early stage. There must be some bugs that need to be 
fixed, and I would like to hear more comments. Everyone is welcome to 
participate in this issue.


was (Author: irvingzhang):
I worked on this issue for about three to four days, and it now works fine for 
searching.

My personal dev branch is available on GitHub: 
[https://github.com/irvingzhang/lucene-solr/tree/jira/LUCENE-9136]. The index 
format of IVFFlat (a single meta file with the suffix .ifi) is shown in the 
class Lucene90IvfFlatIndexFormat. In my implementation, the clustering process 
is optimized when the number of vectors is sufficiently large 
(e.g. > 10,000,000 per segment): a shuffled subset is selected for training, 
thereby saving time and memory. The insertion performance of IVFFlat is better 
because no extra work is done on insertion, whereas HNSW must maintain its 
graph. However, IVFFlat consumes more time in flushing because of the k-means 
clustering.

My test cases show that the query performance of IVFFlat is better than that 
of HNSW, even though HNSW uses a cache for graphs while IVFFlat has no cache. 
And its recall is pretty high (avg. time < 10ms and recall > 96% over a set of 
5 random vectors with 100 dimensions). My test class for IVFFlat is under the 
directory 
[https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/|https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/TestKnnIvfFlat.java].
 The performance comparison between IVFFlat and HNSW is in the class 
TestKnnGraphAndIvfFlat.

The work is still in its early stage. There must be some bugs that need to be 
fixed, and I would like to hear more comments. Everyone is welcome to 
participate in this issue.

> Introduce IVFFlat to Lucene for ANN similarity search
> -
>
> Key: LUCENE-9136
> URL: https://issues.apache.org/jira/browse/LUCENE-9136
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Xin-Chun Zhang
>Priority: Major
>
> Representation learning (RL) has been an established discipline in the 
> machine learning space for decades but it draws tremendous attention lately 
> with the emergence of deep learning. The central problem of RL is to 
> determine an optimal representation of the input data. By embedding the data 
> into a high dimensional vector, the vector retrieval (VR) method is then 
> applied to search the relevant items.
> With the rapid development of RL over the past few years, the technique has 
> been used extensively in industry from online advertising to computer vision 
> and speech recognition. There exist many open source implementations of VR 
> algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
> choices for potential users. However, the aforementioned implementations are 
> all written in C++ with no plan to support a Java interface, making them hard 
> to integrate into Java projects or for those who are not familiar with C/C++ 
> [https://github.com/facebookresearch/faiss/issues/105]. 
> The algorithms for vector retrieval can be roughly 

[jira] [Comment Edited] (LUCENE-9136) Introduce IVFFlat to Lucene for ANN similarity search

2020-02-11 Thread Xin-Chun Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17019507#comment-17019507
 ] 

Xin-Chun Zhang edited comment on LUCENE-9136 at 2/11/20 8:21 AM:
-

I worked on this issue for about three to four days, and it now works fine for 
searching.

My personal dev branch is available on GitHub: 
[https://github.com/irvingzhang/lucene-solr/tree/jira/LUCENE-9136]. The index 
format of IVFFlat (a single meta file with the suffix .ifi) is shown in the 
class Lucene90IvfFlatIndexFormat. In my implementation, the clustering process 
is optimized when the number of vectors is sufficiently large 
(e.g. > 10,000,000 per segment): a shuffled subset is selected for training, 
thereby saving time and memory. The insertion performance of IVFFlat is better 
because no extra work is done on insertion, whereas HNSW must maintain its 
graph. However, IVFFlat consumes more time in flushing because of the k-means 
clustering.

My test cases show that the query performance of IVFFlat is better than that 
of HNSW, even though HNSW uses a cache for graphs while IVFFlat has no cache. 
And its recall is pretty high (avg. time < 10ms and recall > 96% over a set of 
5 random vectors with 100 dimensions). My test class for IVFFlat is under the 
directory 
[https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/|https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/TestKnnIvfFlat.java].
 The performance comparison between IVFFlat and HNSW is in the class 
TestKnnGraphAndIvfFlat.

The work is still in its early stage. There must be some bugs that need to be 
fixed, and I would like to hear more comments. Everyone is welcome to 
participate in this issue.


was (Author: irvingzhang):
I worked on this issue for about three to four days, and it now works fine for 
searching.

My personal dev branch is available on GitHub: 
[https://github.com/irvingzhang/lucene-solr/tree/jira/LUCENE-9136]. The index 
format of IVFFlat (a single meta file with the suffix .ifi) is shown in the 
class Lucene90IvfFlatIndexFormat. In my implementation, the clustering process 
is optimized when the number of vectors is sufficiently large 
(e.g. > 5,000,000 per segment): a shuffled subset is selected for training, 
thereby saving time and memory. The insertion performance of IVFFlat is better 
because no extra work is done on insertion, whereas HNSW must maintain its 
graph. However, IVFFlat consumes more time in flushing because of the k-means 
clustering.

My test cases show that the query performance of IVFFlat is better than that 
of HNSW, even though HNSW uses a cache for graphs while IVFFlat has no cache. 
And its recall is pretty high (avg. time < 10ms and recall > 96% over a set of 
5 random vectors with 100 dimensions). My test class for IVFFlat is under the 
directory 
[https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/|https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/TestKnnIvfFlat.java].
 The performance comparison between IVFFlat and HNSW is in the class 
TestKnnGraphAndIvfFlat.

The work is still in its early stage. There must be some bugs that need to be 
fixed, and I would like to hear more comments. Everyone is welcome to 
participate in this issue.

> Introduce IVFFlat to Lucene for ANN similarity search
> -
>
> Key: LUCENE-9136
> URL: https://issues.apache.org/jira/browse/LUCENE-9136
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Xin-Chun Zhang
>Priority: Major
>
> Representation learning (RL) has been an established discipline in the 
> machine learning space for decades but it draws tremendous attention lately 
> with the emergence of deep learning. The central problem of RL is to 
> determine an optimal representation of the input data. By embedding the data 
> into a high dimensional vector, the vector retrieval (VR) method is then 
> applied to search the relevant items.
> With the rapid development of RL over the past few years, the technique has 
> been used extensively in industry from online advertising to computer vision 
> and speech recognition. There exist many open source implementations of VR 
> algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
> choices for potential users. However, the aforementioned implementations are 
> all written in C++ with no plan to support a Java interface, making them hard 
> to integrate into Java projects or for those who are not familiar with C/C++ 
> [https://github.com/facebookresearch/faiss/issues/105]. 
> The algorithms for vector retrieval can be roughly 

[jira] [Comment Edited] (LUCENE-9136) Introduce IVFFlat to Lucene for ANN similarity search

2020-02-11 Thread Xin-Chun Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17019507#comment-17019507
 ] 

Xin-Chun Zhang edited comment on LUCENE-9136 at 2/11/20 8:02 AM:
-

I worked on this issue for about three to four days, and it now works fine for 
searching.

My personal dev branch is available on GitHub: 
[https://github.com/irvingzhang/lucene-solr/tree/jira/LUCENE-9136]. The index 
format of IVFFlat (a single meta file with the suffix .ifi) is shown in the 
class Lucene90IvfFlatIndexFormat. In my implementation, the clustering process 
is optimized when the number of vectors is sufficiently large 
(e.g. > 5,000,000 per segment): a shuffled subset is selected for training, 
thereby saving time and memory. The insertion performance of IVFFlat is better 
because no extra work is done on insertion, whereas HNSW must maintain its 
graph. However, IVFFlat consumes more time in flushing because of the k-means 
clustering.

My test cases show that the query performance of IVFFlat is better than that 
of HNSW, even though HNSW uses a cache for graphs while IVFFlat has no cache. 
And its recall is pretty high (avg. time < 10ms and recall > 96% over a set of 
5 random vectors with 100 dimensions). My test class for IVFFlat is under the 
directory 
[https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/|https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/TestKnnIvfFlat.java].
 The performance comparison between IVFFlat and HNSW is in the class 
TestKnnGraphAndIvfFlat.

The work is still in its early stage. There must be some bugs that need to be 
fixed, and I would like to hear more comments. Everyone is welcome to 
participate in this issue.


was (Author: irvingzhang):
I worked on this issue for about three to four days, and it now works fine for 
searching.

My personal dev branch is available on GitHub: 
[https://github.com/irvingzhang/lucene-solr/tree/jira/LUCENE-9136]. The index 
format of IVFFlat (a single meta file with the suffix .ifi) is shown in the 
class Lucene90IvfFlatIndexFormat. In my implementation, the clustering process 
is optimized when the number of vectors is very large (e.g. > 200,000 per 
segment): a shuffled subset is selected for training, thereby saving time and 
memory. The insertion performance of IVFFlat is better because no extra work 
is done on insertion, whereas HNSW must maintain its graph. However, IVFFlat 
consumes more time in flushing because of the k-means clustering.

My test cases show that the query performance of IVFFlat is better than that 
of HNSW, even though HNSW uses a cache for graphs while IVFFlat has no cache. 
And its recall is pretty high (avg. time < 10ms and recall > 96% over a set of 
5 random vectors with 100 dimensions). My test class for IVFFlat is under the 
directory 
[https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/|https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/TestKnnIvfFlat.java].
 The performance comparison between IVFFlat and HNSW is in the class 
TestKnnGraphAndIvfFlat.

The work is still in its early stage. There must be some bugs that need to be 
fixed, and I would like to hear more comments. Everyone is welcome to 
participate in this issue.

> Introduce IVFFlat to Lucene for ANN similarity search
> -
>
> Key: LUCENE-9136
> URL: https://issues.apache.org/jira/browse/LUCENE-9136
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Xin-Chun Zhang
>Priority: Major
>
> Representation learning (RL) has been an established discipline in the 
> machine learning space for decades but it draws tremendous attention lately 
> with the emergence of deep learning. The central problem of RL is to 
> determine an optimal representation of the input data. By embedding the data 
> into a high dimensional vector, the vector retrieval (VR) method is then 
> applied to search the relevant items.
> With the rapid development of RL over the past few years, the technique has 
> been used extensively in industry from online advertising to computer vision 
> and speech recognition. There exist many open source implementations of VR 
> algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
> choices for potential users. However, the aforementioned implementations are 
> all written in C++ with no plan to support a Java interface, making them hard 
> to integrate into Java projects or for those who are not familiar with C/C++ 
> [https://github.com/facebookresearch/faiss/issues/105]. 
> The algorithms for vector retrieval can be roughly classified into 

[jira] [Updated] (LUCENE-9136) Introduce IVFFlat to Lucene for ANN similarity search

2020-02-10 Thread Xin-Chun Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xin-Chun Zhang updated LUCENE-9136:
---
Description: 
Representation learning (RL) has been an established discipline in the machine 
learning space for decades but it draws tremendous attention lately with the 
emergence of deep learning. The central problem of RL is to determine an 
optimal representation of the input data. By embedding the data into a high 
dimensional vector, the vector retrieval (VR) method is then applied to search 
the relevant items.

With the rapid development of RL over the past few years, the technique has 
been used extensively in industry from online advertising to computer vision 
and speech recognition. There exist many open source implementations of VR 
algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
choices for potential users. However, the aforementioned implementations are 
all written in C++ with no plan to support a Java interface, making them hard 
to integrate into Java projects or for those who are not familiar with C/C++ 
[https://github.com/facebookresearch/faiss/issues/105]. 

The algorithms for vector retrieval can be roughly classified into four 
categories,
 # Tree-based algorithms, such as the KD-tree;
 # Hashing methods, such as LSH (Locality Sensitive Hashing);
 # Product quantization based algorithms, such as IVFFlat;
 # Graph-based algorithms, such as HNSW, SSG, NSG;

where IVFFlat and HNSW are the most popular ones among all the VR algorithms.

IVFFlat is better for high-precision applications such as face recognition, 
while HNSW performs better in general scenarios including recommendation and 
personalized advertisement. *The recall ratio of IVFFlat can be gradually 
increased by adjusting the query parameter (nprobe), while it's hard for HNSW 
to improve its accuracy*. In theory, IVFFlat can achieve a 100% recall ratio: 
with nprobe equal to the number of centroids it degenerates to brute-force 
search (see the sketch below). 
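
The following is a minimal, self-contained sketch of how an IVF ("inverted 
file") query works in general, illustrating how nprobe trades query time for 
recall. The names (IvfSearchSketch, clusters, vectors) are hypothetical, not 
the code in the development branch.

{code:java}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

final class IvfSearchSketch {

  static double l2(float[] a, float[] b) {
    double sum = 0;
    for (int i = 0; i < a.length; i++) {
      double d = a[i] - b[i];
      sum += d * d;
    }
    return sum;
  }

  /** Probe the nprobe clusters whose centroids are closest to the query,
   *  then score every vector inside them exhaustively ("flat"). */
  static List<Integer> search(float[] query, float[][] centroids,
                              int[][] clusters, float[][] vectors,
                              int nprobe, int topK) {
    Integer[] order = new Integer[centroids.length];
    for (int i = 0; i < order.length; i++) order[i] = i;
    // rank all centroids by distance to the query
    Arrays.sort(order, Comparator.comparingDouble(c -> l2(query, centroids[c])));

    List<Integer> candidates = new ArrayList<>();
    for (int p = 0; p < Math.min(nprobe, order.length); p++) {
      for (int docId : clusters[order[p]]) {
        candidates.add(docId); // only the probed clusters are scanned
      }
    }
    candidates.sort(Comparator.comparingDouble(id -> l2(query, vectors[id])));
    return candidates.subList(0, Math.min(topK, candidates.size()));
  }
}
{code}

With nprobe equal to centroids.length every vector is scanned, which is why 
recall can reach 100% at the cost of query time.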

Recently, the implementation of HNSW (Hierarchical Navigable Small World, 
LUCENE-9004) for Lucene has made great progress. The issue draws the attention 
of those who are interested in Lucene or hope to use HNSW with Solr/Lucene. 

As an alternative for solving ANN similarity search problems, IVFFlat is also 
very popular with many users and supporters. Compared with HNSW, IVFFlat has a 
smaller index size but requires k-means clustering, while HNSW is faster at 
query time (no training required) but requires extra storage for saving graphs 
[indexing 1M 
vectors|https://github.com/facebookresearch/faiss/wiki/Indexing-1M-vectors]. 
Another advantage is that IVFFlat can be faster and more accurate when GPU 
parallel computing is enabled (not currently supported in Java). Both 
algorithms have their merits and demerits. Since HNSW is now under 
development, it may be better to provide both implementations (HNSW and 
IVFFlat) for potential users who face very different scenarios and want more 
choices.

  was:
Representation learning (RL) has been an established discipline in the machine 
learning space for decades but it draws tremendous attention lately with the 
emergence of deep learning. The central problem of RL is to determine an 
optimal representation of the input data. By embedding the data into a high 
dimensional vector, the vector retrieval (VR) method is then applied to search 
the relevant items.

With the rapid development of RL over the past few years, the technique has 
been used extensively in industry from online advertising to computer vision 
and speech recognition. There exist many open source implementations of VR 
algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
choices for potential users. However, the aforementioned implementations are 
all written in C++ with no plan to support a Java interface, making them hard 
to integrate into Java projects or for those who are not familiar with C/C++ 
[https://github.com/facebookresearch/faiss/issues/105]. 

The algorithms for vector retrieval can be roughly classified into four 
categories,
 # Tree-based algorithms, such as the KD-tree;
 # Hashing methods, such as LSH (Locality Sensitive Hashing);
 # Product quantization based algorithms, such as IVFFlat;
 # Graph-based algorithms, such as HNSW, SSG, NSG;

where IVFFlat and HNSW are the most popular ones among all the VR algorithms.

Recently, the implementation of HNSW (Hierarchical Navigable Small World, 
LUCENE-9004) for Lucene, has made great progress. The issue draws attention of 
those who are interested in Lucene or hope to use HNSW with Solr/Lucene. 

As an alternative for solving ANN similarity search problems, IVFFlat is also 
very popular with many users and supporters. Compared with HNSW, IVFFlat has 
smaller index size but requires k-means clustering, while HNSW is faster in 
query (no training required) but requires extra storage for saving graphs 
[indexing 1M 

[jira] [Updated] (LUCENE-9136) Introduce IVFFlat to Lucene for ANN similarity search

2020-02-08 Thread Xin-Chun Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xin-Chun Zhang updated LUCENE-9136:
---
Description: 
Representation learning (RL) has been an established discipline in the machine 
learning space for decades but it draws tremendous attention lately with the 
emergence of deep learning. The central problem of RL is to determine an 
optimal representation of the input data. By embedding the data into a high 
dimensional vector, the vector retrieval (VR) method is then applied to search 
the relevant items.

With the rapid development of RL over the past few years, the technique has 
been used extensively in industry from online advertising to computer vision 
and speech recognition. There exist many open source implementations of VR 
algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
choices for potential users. However, the aforementioned implementations are 
all written in C++ with no plan to support a Java interface, making them hard 
to integrate into Java projects or for those who are not familiar with C/C++ 
[https://github.com/facebookresearch/faiss/issues/105]. 

The algorithms for vector retrieval can be roughly classified into four 
categories,
 # Tree-based algorithms, such as the KD-tree;
 # Hashing methods, such as LSH (Locality Sensitive Hashing);
 # Product quantization based algorithms, such as IVFFlat;
 # Graph-based algorithms, such as HNSW, SSG, NSG;

where IVFFlat and HNSW are the most popular ones among all the VR algorithms.

Recently, the implementation of HNSW (Hierarchical Navigable Small World, 
LUCENE-9004) for Lucene, has made great progress. The issue draws attention of 
those who are interested in Lucene or hope to use HNSW with Solr/Lucene. 

As an alternative for solving ANN similarity search problems, IVFFlat is also 
very popular with many users and supporters. Compared with HNSW, IVFFlat has a 
smaller index size but requires k-means clustering, while HNSW is faster at 
query time (no training required) but requires extra storage for saving graphs 
[indexing 1M 
vectors|https://github.com/facebookresearch/faiss/wiki/Indexing-1M-vectors]. 
*The recall ratio of IVFFlat can be gradually increased by adjusting the query 
parameter (nprobe), while it's hard for HNSW to improve its accuracy*. In 
theory, IVFFlat can achieve a 100% recall ratio. Another advantage is that 
IVFFlat can be faster and more accurate when GPU parallel computing is enabled 
(not currently supported in Java). Both algorithms have their merits and 
demerits. Since HNSW is now under development, it may be better to provide 
both implementations (HNSW and IVFFlat) for potential users who face very 
different scenarios and want more choices.

  was:
Representation learning (RL) has been an established discipline in the machine 
learning space for decades but it draws tremendous attention lately with the 
emergence of deep learning. The central problem of RL is to determine an 
optimal representation of the input data. By embedding the data into a high 
dimensional vector, the vector retrieval (VR) method is then applied to search 
the relevant items.

With the rapid development of RL over the past few years, the technique has 
been used extensively in industry from online advertising to computer vision 
and speech recognition. There exist many open source implementations of VR 
algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
choices for potential users. However, the aforementioned implementations are 
all written in C++ with no plan to support a Java interface, making them hard 
to integrate into Java projects or for those who are not familiar with C/C++ 
[https://github.com/facebookresearch/faiss/issues/105]. 

The algorithms for vector retrieval can be roughly classified into four 
categories,
 # Tree-based algorithms, such as the KD-tree;
 # Hashing methods, such as LSH (Locality Sensitive Hashing);
 # Product quantization based algorithms, such as IVFFlat;
 # Graph-based algorithms, such as HNSW, SSG, NSG;

where IVFFlat and HNSW are the most popular ones among all the VR algorithms.

Recently, the implementation of HNSW (Hierarchical Navigable Small World, 
LUCENE-9004) for Lucene, has made great progress. The issue draws attention of 
those who are interested in Lucene or hope to use HNSW with Solr/Lucene. 

As an alternative for solving ANN similarity search problems, IVFFlat is also 
very popular with many users and supporters. Compared with HNSW, IVFFlat has a 
smaller index size but requires k-means clustering, while HNSW is faster at 
query time (no training required) but requires extra storage for saving graphs 
[indexing 1M 
vectors|https://github.com/facebookresearch/faiss/wiki/Indexing-1M-vectors]. 
The recall ratio of IVFFlat could be gradually increased by adjusting the query 
parameter (nprobe), while it's hard for HNSW to improve the 

[jira] [Updated] (LUCENE-9136) Introduce IVFFlat to Lucene for ANN similarity search

2020-02-07 Thread Xin-Chun Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xin-Chun Zhang updated LUCENE-9136:
---
Description: 
Representation learning (RL) has been an established discipline in the machine 
learning space for decades but it draws tremendous attention lately with the 
emergence of deep learning. The central problem of RL is to determine an 
optimal representation of the input data. By embedding the data into a high 
dimensional vector, the vector retrieval (VR) method is then applied to search 
the relevant items.

With the rapid development of RL over the past few years, the technique has 
been used extensively in industry from online advertising to computer vision 
and speech recognition. There exist many open source implementations of VR 
algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
choices for potential users. However, the aforementioned implementations are 
all written in C++ with no plan to support a Java interface, making them hard 
to integrate into Java projects or for those who are not familiar with C/C++ 
[https://github.com/facebookresearch/faiss/issues/105]. 

The algorithms for vector retrieval can be roughly classified into four 
categories,
 # Tree-based algorithms, such as the KD-tree;
 # Hashing methods, such as LSH (Locality Sensitive Hashing);
 # Product quantization based algorithms, such as IVFFlat;
 # Graph-based algorithms, such as HNSW, SSG, NSG;

where IVFFlat and HNSW are the most popular ones among all the VR algorithms.

Recently, the implementation of HNSW (Hierarchical Navigable Small World, 
LUCENE-9004) for Lucene, has made great progress. The issue draws attention of 
those who are interested in Lucene or hope to use HNSW with Solr/Lucene. 

As an alternative for solving ANN similarity search problems, IVFFlat is also 
very popular with many users and supporters. Compared with HNSW, IVFFlat has a 
smaller index size but requires k-means clustering, while HNSW is faster at 
query time (no training required) but requires extra storage for saving graphs 
[indexing 1M 
vectors|https://github.com/facebookresearch/faiss/wiki/Indexing-1M-vectors]. 
The recall ratio of IVFFlat can be gradually increased by adjusting the query 
parameter (nprobe), while it's hard for HNSW to improve the accuracy. In 
theory, IVFFlat can achieve a 100% recall ratio. Another advantage is that 
IVFFlat can be faster and more accurate when GPU parallel computing is enabled 
(not currently supported in Java). Both algorithms have their merits and 
demerits. Since HNSW is now under development, it may be better to provide 
both implementations (HNSW and IVFFlat) for potential users who face very 
different scenarios and want more choices.

  was:
Representation learning (RL) has been an established discipline in the machine 
learning space for decades but it draws tremendous attention lately with the 
emergence of deep learning. The central problem of RL is to determine an 
optimal representation of the input data. By embedding the data into a high 
dimensional vector, the vector retrieval (VR) method is then applied to search 
the relevant items.

With the rapid development of RL over the past few years, the technique has 
been used extensively in industry from online advertising to computer vision 
and speech recognition. There exist many open source implementations of VR 
algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
choices for potential users. However, the aforementioned implementations are 
all written in C++ with no plan to support a Java interface, making them hard 
to integrate into Java projects or for those who are not familiar with C/C++ 
[https://github.com/facebookresearch/faiss/issues/105]. 

The algorithms for vector retrieval can be roughly classified into four 
categories,
 # Tree-based algorithms, such as the KD-tree;
 # Hashing methods, such as LSH (Locality Sensitive Hashing);
 # Product quantization based algorithms, such as IVFFlat;
 # Graph-based algorithms, such as HNSW, SSG, NSG;

where IVFFlat and HNSW are the most popular ones among all the VR algorithms.

Recently, the implementation of HNSW (Hierarchical Navigable Small World, 
LUCENE-9004) for Lucene, has made great progress. The issue draws attention of 
those who are interested in Lucene or hope to use HNSW with Solr/Lucene. 

As an alternative for solving ANN similarity search problems, IVFFlat is also 
very popular with many users and supporters. Compared with HNSW, IVFFlat has a 
smaller index size but requires k-means clustering, while HNSW is faster at 
query time (no training required) but requires extra storage for saving graphs 
[indexing 1M 
vectors|https://github.com/facebookresearch/faiss/wiki/Indexing-1M-vectors]. 
Another advantage is that IVFFlat can be faster and more accurate when GPU 
parallel computing is enabled (not currently supported in Java). Both 

[jira] [Comment Edited] (LUCENE-9004) Approximate nearest vector search

2020-02-06 Thread Xin-Chun Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17031461#comment-17031461
 ] 

Xin-Chun Zhang edited comment on LUCENE-9004 at 2/7/20 1:46 AM:


??You don't share your test code, but I suspect you open new IndexReader every 
time you issue a query???

[~tomoko] The test code can be found in 
[https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/KnnIvfAndGraphPerformTester.java].
 Yes, I opened a new reader for each query in the hope that IVFFlat and HNSW 
would be compared under fair conditions, since IVFFlat does not have a cache. 
I now realize this may lead to OOM; after replacing it with a shared 
IndexReader the problem was resolved (see the sketch below).
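
A minimal sketch of the fix, assuming a hypothetical runKnnQuery helper (the 
actual KNN query call lives in my branch); only the reader lifecycle is the 
point here.

{code:java}
import java.io.IOException;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.store.Directory;

final class SharedReaderBench {
  /** Opens ONE reader and reuses it for every query. Opening a new reader
   *  per query re-creates per-reader state (such as graph caches keyed by
   *  reader context) and can eventually exhaust the heap. */
  static void runAll(Directory dir, float[][] queries) throws IOException {
    try (DirectoryReader reader = DirectoryReader.open(dir)) {
      IndexSearcher searcher = new IndexSearcher(reader); // shared searcher
      for (float[] query : queries) {
        runKnnQuery(searcher, query);
      }
    }
  }

  private static void runKnnQuery(IndexSearcher searcher, float[] query) {
    // hypothetical placeholder for the branch-specific KNN search call
  }
}
{code}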

 

Update – Top 1 in-set (query vector is in the candidate data set) recall 
results on SIFT1M data set ([http://corpus-texmex.irisa.fr/]) of IVFFlat and 
HNSW are as follows,

IVFFlat (no cache, reuse IndexReader)

 
||nprobe||avg. search time (ms)||recall percent (%)||
|8|13.3165|64.8|
|16|13.968|79.65|
|32|16.951|89.3|
|64|21.631|95.6|
|128|31.633|98.8|

 

HNSW (static cache, reuse IndexReader)
||avg. search time (ms)||recall percent (%)||
|6.3|{color:#ff0000}20.45{color}|

It can readily be shown that HNSW performs much better in query time. But I 
was surprised that the top 1 in-set recall percent of HNSW is so low. It 
shouldn't be a problem of the algorithm itself, but more likely a problem of 
the implementation or the test code (the recall computation is sketched 
below). I will check it this weekend.
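
For reference, this is how top 1 in-set recall is computed; a minimal sketch 
with hypothetical names. Since the queries are drawn from the indexed set, the 
correct top-1 answer for query i is document i itself.

{code:java}
final class Top1RecallSketch {
  /** returnedTop1[i] is the doc id the index returned for query i;
   *  expectedIds[i] is the doc id the query vector was indexed under. */
  static double top1InSetRecallPercent(int[] returnedTop1, int[] expectedIds) {
    int hits = 0;
    for (int i = 0; i < expectedIds.length; i++) {
      if (returnedTop1[i] == expectedIds[i]) {
        hits++; // exact self-match found at rank 1
      }
    }
    return 100.0 * hits / expectedIds.length; // percentage, as in the tables
  }
}
{code}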

 


was (Author: irvingzhang):
??You don't share your test code, but I suspect you open new IndexReader every 
time you issue a query???

[~tomoko] The test code can be found in 
[https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/KnnIvfAndGraphPerformTester.java].
 Yes, I opened a new reader for each query in the hope that IVFFlat and HNSW 
would be compared under fair conditions, since IVFFlat does not have a cache. 
I now realize this may lead to OOM; after replacing it with a shared 
IndexReader the problem was resolved.

 

Update -- Top 1 in-set (query vector is in the candidate data set) recall 
results on SIFT1M data set ([http://corpus-texmex.irisa.fr/]) of IVFFlat and 
HNSW are as follows,

IVFFlat (no cache, reuse IndexReader)

 
||nprobe||avg. search time (ms)||recall percent (%)||
|8|13.3165|64.8|
|16|13.968|79.65|
|32|16.951|89.3|
|64|21.631|95.6|
|128|31.633|98.8|

 

HNSW (static cache, reuse IndexReader)
||avg. search time (ms)||recall percent (%)||
|6.3|{color:#ff0000}20.45{color}|

It can readily be shown that HNSW performs much better in query time. But I 
was surprised that the top 1 in-set recall percent of HNSW is so low. It 
shouldn't be a problem of the algorithm itself, but more likely a problem of 
the implementation or the test code. I will check it this weekend.

 

> Approximate nearest vector search
> -
>
> Key: LUCENE-9004
> URL: https://issues.apache.org/jira/browse/LUCENE-9004
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Michael Sokolov
>Priority: Major
> Attachments: hnsw_layered_graph.png
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> "Semantic" search based on machine-learned vector "embeddings" representing 
> terms, queries and documents is becoming a must-have feature for a modern 
> search engine. SOLR-12890 is exploring various approaches to this, including 
> providing vector-based scoring functions. This is a spinoff issue from that.
> The idea here is to explore approximate nearest-neighbor search. Researchers 
> have found an approach based on navigating a graph that partially encodes the 
> nearest neighbor relation at multiple scales can provide accuracy > 95% (as 
> compared to exact nearest neighbor calculations) at a reasonable cost. This 
> issue will explore implementing HNSW (hierarchical navigable small-world) 
> graphs for the purpose of approximate nearest vector search (often referred 
> to as KNN or k-nearest-neighbor search).
> At a high level the way this algorithm works is this. First assume you have a 
> graph that has a partial encoding of the nearest neighbor relation, with some 
> short and some long-distance links. If this graph is built in the right way 
> (has the hierarchical navigable small world property), then you can 
> efficiently traverse it to find nearest neighbors (approximately) in log N 
> time where N is the number of nodes in the graph. I believe this idea was 
> pioneered in  [1]. The great insight in that paper is that if you use the 
> graph search algorithm to find the K nearest neighbors of a new document 
> while indexing, and then link those neighbors (undirectedly, ie both ways) to 
> the new document, then the graph that emerges will have the desired 
> properties.
> The implementation I 

[jira] [Comment Edited] (LUCENE-9004) Approximate nearest vector search

2020-02-06 Thread Xin-Chun Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17028283#comment-17028283
 ] 

Xin-Chun Zhang edited comment on LUCENE-9004 at 2/6/20 10:52 AM:
-

 ??"Is it making life difficult to keep them separate?"??

[~sokolov] No, we can keep them separate at present. I have merged your 
[branch|https://github.com/apache/lucene-solr/tree/jira/lucene-9004-aknn-2] 
into my personal 
[GitHub branch|https://github.com/irvingzhang/lucene-solr/tree/jira/LUCENE-9136] 
in order to compare IVFFlat and HNSW, and I reused some of the work that 
[~tomoko] and you did. Code refactoring will be required when we are ready to 
commit.

 

??"Have you tried comparing them on real data?"??

[~yurymalkov], [~mikemccand] Thanks for your advice. I haven't done it yet, 
and will do it soon. 

 

*Update – Feb. 4, 2020*

I have added two performance test tools 
(KnnIvfPerformTester/KnnIvfAndGraphPerformTester) to my personal branch, and 
the SIFT1M dataset (1,000,000 base vectors with 128 dimensions, 
[http://corpus-texmex.irisa.fr/]) is employed for the test. Top 1 recall 
performance of IVFFlat is as follows; *a new IndexReader was opened for each 
query*,

centroids=707
||nprobe||avg. search time (ms)||recall percent (%)||
|8|71.314|69.15|
|16|121.7565|82.3|
|32|155.692|92.85|
|64|159.3655|98.7|
|128|217.5205|99.9|

centroids=4000
||nprobe||avg. search time (ms)||recall percent (%)||
|8|56.3745|65.35|
|16|59.5435|78.85|
|32|71.751|89.85|
|64|90.396|96.25|
|128|135.3805|99.3|

Unfortunately, I couldn't obtain the corresponding results for HNSW due to an 
out-of-memory error on my PC. A special case with 2,000 base vectors 
demonstrates that IVFFlat is faster and more accurate. HNSW may outperform 
IVFFlat on larger data sets when more memory is available, as shown in 
[https://github.com/facebookresearch/faiss/wiki/Indexing-1M-vectors].


was (Author: irvingzhang):
 ??"Is it making life difficult to keep them separate?"??

[~sokolov] No, we can keep them separate at present. I have merged your 
[branch|https://github.com/apache/lucene-solr/tree/jira/lucene-9004-aknn-2] 
into my personal 
[GitHub branch|https://github.com/irvingzhang/lucene-solr/tree/jira/LUCENE-9136] 
in order to compare IVFFlat and HNSW, and I reused some of the work that 
[~tomoko] and you did. Code refactoring will be required when we are ready to 
commit.

 

??"Have you tried comparing them on real data?"??

[~yurymalkov], [~mikemccand] Thanks for your advice. I haven't done it yet, 
and will do it soon. 

 

*Update – Feb. 4, 2020*

I have added two performance test tools 
(KnnIvfPerformTester/KnnIvfAndGraphPerformTester) to my personal branch, and 
the SIFT1M dataset (1,000,000 base vectors with 128 dimensions, 
[http://corpus-texmex.irisa.fr/]) is employed for the test. Top 1 recall 
performance of IVFFlat is as follows,

centroids=707
||nprobe||avg. search time (ms)||recall percent (%)||
|8|71.314|69.15|
|16|121.7565|82.3|
|32|155.692|92.85|
|64|159.3655|98.7|
|128|217.5205|99.9|

centroids=4000
||nprobe||avg. search time (ms)||recall percent (%)||
|8|56.3745|65.35|
|16|59.5435|78.85|
|32|71.751|89.85|
|64|90.396|96.25|
|128|135.3805|99.3|

Unfortunately, I couldn't obtain the corresponding results for HNSW due to an 
out-of-memory error on my PC. A special case with 2,000 base vectors 
demonstrates that IVFFlat is faster and more accurate. HNSW may outperform 
IVFFlat on larger data sets when more memory is available, as shown in 
[https://github.com/facebookresearch/faiss/wiki/Indexing-1M-vectors].

> Approximate nearest vector search
> -
>
> Key: LUCENE-9004
> URL: https://issues.apache.org/jira/browse/LUCENE-9004
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Michael Sokolov
>Priority: Major
> Attachments: hnsw_layered_graph.png
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> "Semantic" search based on machine-learned vector "embeddings" representing 
> terms, queries and documents is becoming a must-have feature for a modern 
> search engine. SOLR-12890 is exploring various approaches to this, including 
> providing vector-based scoring functions. This is a spinoff issue from that.
> The idea here is to explore approximate nearest-neighbor search. Researchers 
> have found an approach based on navigating a graph that partially encodes the 
> nearest neighbor relation at multiple scales can provide accuracy > 95% (as 
> compared to exact nearest neighbor calculations) at a reasonable cost. This 
> issue will explore implementing HNSW (hierarchical navigable small-world) 
> graphs for the purpose of approximate nearest vector search (often referred 
> to as KNN or k-nearest-neighbor search).
> At a high level the way this algorithm works is this. First assume you have a 
> 

[jira] [Commented] (LUCENE-9004) Approximate nearest vector search

2020-02-06 Thread Xin-Chun Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17031461#comment-17031461
 ] 

Xin-Chun Zhang commented on LUCENE-9004:


??You don't share your test code, but I suspect you open new IndexReader every 
time you issue a query???

[~tomoko] The test code can be found in 
[https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/KnnIvfAndGraphPerformTester.java].
 Yes, I opened a new reader for each query in the hope that IVFFlat and HNSW 
would be compared under fair conditions, since IVFFlat does not have a cache. 
I now realize this may lead to OOM; after replacing it with a shared 
IndexReader the problem was resolved.

 

Update -- Top 1 in-set (query vector is in the candidate data set) recall 
results on SIFT1M data set ([http://corpus-texmex.irisa.fr/]) of IVFFlat and 
HNSW are as follows,

IVFFlat (no cache, reuse IndexReader)

 
||nprobe||avg. search time (ms)||recall percent (%)||
|8|13.3165|64.8|
|16|13.968|79.65|
|32|16.951|89.3|
|64|21.631|95.6|
|128|31.633|98.8|

 

HNSW (static cache, reuse IndexReader)
||avg. search time (ms)||recall percent (%)||
|6.3|{color:#ff0000}20.45{color}|

It can readily be shown that HNSW performs much better in query time. But I 
was surprised that the top 1 in-set recall percent of HNSW is so low. It 
shouldn't be a problem of the algorithm itself, but more likely a problem of 
the implementation or the test code. I will check it this weekend.

 

> Approximate nearest vector search
> -
>
> Key: LUCENE-9004
> URL: https://issues.apache.org/jira/browse/LUCENE-9004
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Michael Sokolov
>Priority: Major
> Attachments: hnsw_layered_graph.png
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> "Semantic" search based on machine-learned vector "embeddings" representing 
> terms, queries and documents is becoming a must-have feature for a modern 
> search engine. SOLR-12890 is exploring various approaches to this, including 
> providing vector-based scoring functions. This is a spinoff issue from that.
> The idea here is to explore approximate nearest-neighbor search. Researchers 
> have found an approach based on navigating a graph that partially encodes the 
> nearest neighbor relation at multiple scales can provide accuracy > 95% (as 
> compared to exact nearest neighbor calculations) at a reasonable cost. This 
> issue will explore implementing HNSW (hierarchical navigable small-world) 
> graphs for the purpose of approximate nearest vector search (often referred 
> to as KNN or k-nearest-neighbor search).
> At a high level the way this algorithm works is this. First assume you have a 
> graph that has a partial encoding of the nearest neighbor relation, with some 
> short and some long-distance links. If this graph is built in the right way 
> (has the hierarchical navigable small world property), then you can 
> efficiently traverse it to find nearest neighbors (approximately) in log N 
> time where N is the number of nodes in the graph. I believe this idea was 
> pioneered in  [1]. The great insight in that paper is that if you use the 
> graph search algorithm to find the K nearest neighbors of a new document 
> while indexing, and then link those neighbors (undirectedly, ie both ways) to 
> the new document, then the graph that emerges will have the desired 
> properties.
> The implementation I propose for Lucene is as follows. We need two new data 
> structures to encode the vectors and the graph. We can encode vectors using a 
> light wrapper around {{BinaryDocValues}} (we also want to encode the vector 
> dimension and have efficient conversion from bytes to floats). For the graph 
> we can use {{SortedNumericDocValues}} where the values we encode are the 
> docids of the related documents. Encoding the interdocument relations using 
> docids directly will make it relatively fast to traverse the graph since we 
> won't need to lookup through an id-field indirection. This choice limits us 
> to building a graph-per-segment since it would be impractical to maintain a 
> global graph for the whole index in the face of segment merges. However 
> graph-per-segment is very natural at search time - we can traverse each 
> segments' graph independently and merge results as we do today for term-based 
> search.
> At index time, however, merging graphs is somewhat challenging. While 
> indexing we build a graph incrementally, performing searches to construct 
> links among neighbors. When merging segments we must construct a new graph 
> containing elements of all the merged segments. Ideally we would somehow 
> preserve the work done when building the initial graphs, but at least as a 
> start I'd propose we construct a new graph from scratch when 

[jira] [Comment Edited] (LUCENE-9004) Approximate nearest vector search

2020-02-05 Thread Xin-Chun Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17031223#comment-17031223
 ] 

Xin-Chun Zhang edited comment on LUCENE-9004 at 2/6/20 3:38 AM:


??The default heap size that is given to Java processes depends on platforms, 
but for most commodity PCs it wouldn't be so large so you will see OOM if you 
are not set the -Xmx JVM arg.??

[~tomoko] I did set the JVM option "-Xmx8192m", but the OOM error always 
appears. I suspect there is a memory leak in the static member "cache" of 
HNSWGraphReader. The cache key is composed of the field name and the context 
identity, and the context identity may vary from query to query. When I 
execute a query many times, the static cache size increases rapidly (cache 
size equals the number of queries), resulting in OOM (the pattern is sketched 
below). 
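
A minimal sketch of the suspected pattern; the class and field names here are 
illustrative, not the actual code in the branch. Each query that opens a new 
reader produces a new context identity, so the static map gains one entry per 
query and nothing ever evicts entries for closed readers.

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

final class GraphCacheLeakSketch {
  // static and unbounded: survives across queries and readers
  private static final Map<String, Object> CACHE = new ConcurrentHashMap<>();

  static Object getGraph(String field, Object readerContextIdentity) {
    // a fresh reader per query => a fresh identity => a brand-new entry each time
    String key = field + "/" + System.identityHashCode(readerContextIdentity);
    return CACHE.computeIfAbsent(key, k -> new Object() /* load the graph */);
  }
}
{code}

One possible remedy (besides sharing the reader) is to tie the cached entry's 
lifetime to the reader, e.g. by registering a closed listener via 
IndexReader.getReaderCacheHelper() so the entry is evicted when the reader 
closes.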


was (Author: irvingzhang):
??The default heap size that is given to Java processes depends on platforms, 
but for most commodity PCs it wouldn't be so large so you will see OOM if you 
are not set the -Xmx JVM arg.??

[~tomoko] I did set the JVM option "-Xmx8192m", but the OOM error always 
appears. I suspect there is a memory leak in the static member "cache" of 
HNSWGraphReader. The key of the static "cache" is composed of the field name 
and the context identity, and the context identity may vary from query to 
query. When I execute a query many times, the static cache size increases 
rapidly (cache size equals the number of queries), resulting in OOM. 

> Approximate nearest vector search
> -
>
> Key: LUCENE-9004
> URL: https://issues.apache.org/jira/browse/LUCENE-9004
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Michael Sokolov
>Priority: Major
> Attachments: hnsw_layered_graph.png
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> "Semantic" search based on machine-learned vector "embeddings" representing 
> terms, queries and documents is becoming a must-have feature for a modern 
> search engine. SOLR-12890 is exploring various approaches to this, including 
> providing vector-based scoring functions. This is a spinoff issue from that.
> The idea here is to explore approximate nearest-neighbor search. Researchers 
> have found an approach based on navigating a graph that partially encodes the 
> nearest neighbor relation at multiple scales can provide accuracy > 95% (as 
> compared to exact nearest neighbor calculations) at a reasonable cost. This 
> issue will explore implementing HNSW (hierarchical navigable small-world) 
> graphs for the purpose of approximate nearest vector search (often referred 
> to as KNN or k-nearest-neighbor search).
> At a high level the way this algorithm works is this. First assume you have a 
> graph that has a partial encoding of the nearest neighbor relation, with some 
> short and some long-distance links. If this graph is built in the right way 
> (has the hierarchical navigable small world property), then you can 
> efficiently traverse it to find nearest neighbors (approximately) in log N 
> time where N is the number of nodes in the graph. I believe this idea was 
> pioneered in  [1]. The great insight in that paper is that if you use the 
> graph search algorithm to find the K nearest neighbors of a new document 
> while indexing, and then link those neighbors (undirectedly, ie both ways) to 
> the new document, then the graph that emerges will have the desired 
> properties.
> The implementation I propose for Lucene is as follows. We need two new data 
> structures to encode the vectors and the graph. We can encode vectors using a 
> light wrapper around {{BinaryDocValues}} (we also want to encode the vector 
> dimension and have efficient conversion from bytes to floats). For the graph 
> we can use {{SortedNumericDocValues}} where the values we encode are the 
> docids of the related documents. Encoding the interdocument relations using 
> docids directly will make it relatively fast to traverse the graph since we 
> won't need to lookup through an id-field indirection. This choice limits us 
> to building a graph-per-segment since it would be impractical to maintain a 
> global graph for the whole index in the face of segment merges. However 
> graph-per-segment is very natural at search time - we can traverse each 
> segments' graph independently and merge results as we do today for term-based 
> search.
> At index time, however, merging graphs is somewhat challenging. While 
> indexing we build a graph incrementally, performing searches to construct 
> links among neighbors. When merging segments we must construct a new graph 
> containing elements of all the merged segments. Ideally we would somehow 
> preserve the work done when building the initial graphs, but at least as a 
> start I'd propose we construct a new graph from scratch when 

[jira] [Comment Edited] (LUCENE-9004) Approximate nearest vector search

2020-02-05 Thread Xin-Chun Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17031223#comment-17031223
 ] 

Xin-Chun Zhang edited comment on LUCENE-9004 at 2/6/20 3:05 AM:


??The default heap size that is given to Java processes depends on platforms, 
but for most commodity PCs it wouldn't be so large so you will see OOM if you 
are not set the -Xmx JVM arg.??

[~tomoko] I did set the JVM option "-Xmx8192m", but the OOM error always 
appears. I suspect there is a memory leak in the static member "cache" of 
HNSWGraphReader. The key of the static "cache" is composed of the field name 
and the context identity, and the context identity may vary from query to 
query. When I execute a query many times, the static cache size increases 
rapidly (cache size equals the number of queries), resulting in OOM. 


was (Author: irvingzhang):
??The default heap size that is given to Java processes depends on platforms, 
but for most commodity PCs it wouldn't be so large so you will see OOM if you 
are not set the -Xmx JVM arg.??

[~tomoko] I did set the JVM option "-Xmx8192m", but an OOM error is always 
thrown. I suspect there is a memory leak in the static member "cache" of 
HNSWGraphReader. The key of the static "cache" is composed of the field name 
and the context identity, and the context identity may vary from query to 
query. When I execute a query many times, the static cache grows rapidly 
(cache size equals the number of queries), resulting in OOM. 

> Approximate nearest vector search
> -
>
> Key: LUCENE-9004
> URL: https://issues.apache.org/jira/browse/LUCENE-9004
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Michael Sokolov
>Priority: Major
> Attachments: hnsw_layered_graph.png
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> "Semantic" search based on machine-learned vector "embeddings" representing 
> terms, queries and documents is becoming a must-have feature for a modern 
> search engine. SOLR-12890 is exploring various approaches to this, including 
> providing vector-based scoring functions. This is a spinoff issue from that.
> The idea here is to explore approximate nearest-neighbor search. Researchers 
> have found an approach based on navigating a graph that partially encodes the 
> nearest neighbor relation at multiple scales can provide accuracy > 95% (as 
> compared to exact nearest neighbor calculations) at a reasonable cost. This 
> issue will explore implementing HNSW (hierarchical navigable small-world) 
> graphs for the purpose of approximate nearest vector search (often referred 
> to as KNN or k-nearest-neighbor search).
> At a high level the way this algorithm works is this. First assume you have a 
> graph that has a partial encoding of the nearest neighbor relation, with some 
> short and some long-distance links. If this graph is built in the right way 
> (has the hierarchical navigable small world property), then you can 
> efficiently traverse it to find nearest neighbors (approximately) in log N 
> time where N is the number of nodes in the graph. I believe this idea was 
> pioneered in  [1]. The great insight in that paper is that if you use the 
> graph search algorithm to find the K nearest neighbors of a new document 
> while indexing, and then link those neighbors (undirectedly, ie both ways) to 
> the new document, then the graph that emerges will have the desired 
> properties.
> The implementation I propose for Lucene is as follows. We need two new data 
> structures to encode the vectors and the graph. We can encode vectors using a 
> light wrapper around {{BinaryDocValues}} (we also want to encode the vector 
> dimension and have efficient conversion from bytes to floats). For the graph 
> we can use {{SortedNumericDocValues}} where the values we encode are the 
> docids of the related documents. Encoding the interdocument relations using 
> docids directly will make it relatively fast to traverse the graph since we 
> won't need to lookup through an id-field indirection. This choice limits us 
> to building a graph-per-segment since it would be impractical to maintain a 
> global graph for the whole index in the face of segment merges. However 
> graph-per-segment is very natural at search time - we can traverse each 
> segments' graph independently and merge results as we do today for term-based 
> search.
> At index time, however, merging graphs is somewhat challenging. While 
> indexing we build a graph incrementally, performing searches to construct 
> links among neighbors. When merging segments we must construct a new graph 
> containing elements of all the merged segments. Ideally we would somehow 
> preserve the work done when building the initial graphs, but at least as a 
> start I'd propose we construct a new graph from scratch 

[jira] [Commented] (LUCENE-9004) Approximate nearest vector search

2020-02-05 Thread Xin-Chun Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17031223#comment-17031223
 ] 

Xin-Chun Zhang commented on LUCENE-9004:


??The default heap size that is given to Java processes depends on platforms, 
but for most commodity PCs it wouldn't be so large so you will see OOM if you 
are not set the -Xmx JVM arg.??

[~tomoko] I did set the JVM option "-Xmx8192m", but an OOM error is always 
thrown. I suspect there is a memory leak in the static member "cache" of 
HNSWGraphReader. The key of the static "cache" is composed of the field name 
and the context identity, and the context identity may vary from query to 
query. When I execute a query many times, the static cache grows rapidly 
(cache size equals the number of queries), resulting in OOM. 

> Approximate nearest vector search
> -
>
> Key: LUCENE-9004
> URL: https://issues.apache.org/jira/browse/LUCENE-9004
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Michael Sokolov
>Priority: Major
> Attachments: hnsw_layered_graph.png
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> "Semantic" search based on machine-learned vector "embeddings" representing 
> terms, queries and documents is becoming a must-have feature for a modern 
> search engine. SOLR-12890 is exploring various approaches to this, including 
> providing vector-based scoring functions. This is a spinoff issue from that.
> The idea here is to explore approximate nearest-neighbor search. Researchers 
> have found an approach based on navigating a graph that partially encodes the 
> nearest neighbor relation at multiple scales can provide accuracy > 95% (as 
> compared to exact nearest neighbor calculations) at a reasonable cost. This 
> issue will explore implementing HNSW (hierarchical navigable small-world) 
> graphs for the purpose of approximate nearest vector search (often referred 
> to as KNN or k-nearest-neighbor search).
> At a high level the way this algorithm works is this. First assume you have a 
> graph that has a partial encoding of the nearest neighbor relation, with some 
> short and some long-distance links. If this graph is built in the right way 
> (has the hierarchical navigable small world property), then you can 
> efficiently traverse it to find nearest neighbors (approximately) in log N 
> time where N is the number of nodes in the graph. I believe this idea was 
> pioneered in  [1]. The great insight in that paper is that if you use the 
> graph search algorithm to find the K nearest neighbors of a new document 
> while indexing, and then link those neighbors (undirectedly, ie both ways) to 
> the new document, then the graph that emerges will have the desired 
> properties.
> The implementation I propose for Lucene is as follows. We need two new data 
> structures to encode the vectors and the graph. We can encode vectors using a 
> light wrapper around {{BinaryDocValues}} (we also want to encode the vector 
> dimension and have efficient conversion from bytes to floats). For the graph 
> we can use {{SortedNumericDocValues}} where the values we encode are the 
> docids of the related documents. Encoding the interdocument relations using 
> docids directly will make it relatively fast to traverse the graph since we 
> won't need to lookup through an id-field indirection. This choice limits us 
> to building a graph-per-segment since it would be impractical to maintain a 
> global graph for the whole index in the face of segment merges. However 
> graph-per-segment is a very natural at search time - we can traverse each 
> segments' graph independently and merge results as we do today for term-based 
> search.
> At index time, however, merging graphs is somewhat challenging. While 
> indexing we build a graph incrementally, performing searches to construct 
> links among neighbors. When merging segments we must construct a new graph 
> containing elements of all the merged segments. Ideally we would somehow 
> preserve the work done when building the initial graphs, but at least as a 
> start I'd propose we construct a new graph from scratch when merging. The 
> process is going to be  limited, at least initially, to graphs that can fit 
> in RAM since we require random access to the entire graph while constructing 
> it: In order to add links bidirectionally we must continually update existing 
> documents.
> I think we want to express this API to users as a single joint 
> {{KnnGraphField}} abstraction that joins together the vectors and the graph 
> as a single joint field type. Mostly it just looks like a vector-valued 
> field, but has this graph attached to it.
> I'll push a branch with my POC and would love to hear comments. It has many 
> nocommits, basic design is not really set, there is no Query implementation 
> 

[jira] [Comment Edited] (LUCENE-9004) Approximate nearest vector search

2020-02-04 Thread Xin-Chun Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17028283#comment-17028283
 ] 

Xin-Chun Zhang edited comment on LUCENE-9004 at 2/4/20 1:35 PM:


 ??"Is it making life difficult to keep them separate?"??

[~sokolov] No, we can keep them separate at present. I have merged your 
[branch|https://github.com/apache/lucene-solr/tree/jira/lucene-9004-aknn-2] 
into my personal 
[github|https://github.com/irvingzhang/lucene-solr/tree/jira/LUCENE-9136] in 
order to compare IVFFlat and HNSW, and I reused some of the work that 
[~tomoko] and you did. Code refactoring will be required before we commit.

 

??"Have you tried comparing them on real data?"??

[~yurymalkov], [~mikemccand] Thanks for your advice. I haven't done it yet, but 
will do it soon. 

 

*Update – Feb. 4, 2020*

I have added two performance test tools 
(KnnIvfPerformTester/KnnIvfAndGraphPerformTester) to my personal branch. The 
sift1M dataset (1,000,000 base vectors with 128 dimensions, 
[http://corpus-texmex.irisa.fr/]) is employed for the test. Top-1 recall 
performance of IVFFlat is as follows:

centroids=707
||nprobe||avg. search time (ms)||recall percent (%)||
|8|71.314|69.15|
|16|121.7565|82.3|
|32|155.692|92.85|
|64|159.3655|98.7|
|128|217.5205|99.9|

centroids=4000
||nprobe||avg. search time (ms)||recall percent (%)||
|8|56.3745|65.35|
|16|59.5435|78.85|
|32|71.751|89.85|
|64|90.396|96.25|
|128|135.3805|99.3|

Unfortunately, I couldn't obtain the corresponding results for HNSW due to an 
out-of-memory error on my PC. A special case with 2,000 base vectors 
demonstrates that IVFFlat is faster and more accurate. HNSW may outperform 
IVFFlat on larger data sets when more memory is available, as shown in 
[https://github.com/facebookresearch/faiss/wiki/Indexing-1M-vectors].
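For clarity, the recall figures in the tables above are of the top-1 kind: the fraction of queries whose first returned document is the true nearest neighbor from the dataset's ground-truth file. A minimal sketch of that computation follows; the helper name is hypothetical and not necessarily how KnnIvfPerformTester reports it:

{code:java}
// Top-1 recall: percentage of queries whose first result matches the
// ground-truth nearest neighbor.
static double top1Recall(int[][] results, int[] groundTruth) {
  int hits = 0;
  for (int q = 0; q < groundTruth.length; q++) {
    if (results[q].length > 0 && results[q][0] == groundTruth[q]) {
      hits++;
    }
  }
  return 100.0 * hits / groundTruth.length; // percentage, as in the table
}
{code}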



[jira] [Updated] (LUCENE-9004) Approximate nearest vector search

2020-02-03 Thread Xin-Chun Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xin-Chun Zhang updated LUCENE-9004:
---
Attachment: (was: 屏幕快照 2020-02-04 上午10.38.26.png)


[jira] [Updated] (LUCENE-9004) Approximate nearest vector search

2020-02-03 Thread Xin-Chun Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xin-Chun Zhang updated LUCENE-9004:
---
Attachment: (was: 屏幕快照 2020-02-04 下午12.46.01.png)


[jira] [Commented] (LUCENE-9004) Approximate nearest vector search

2020-02-03 Thread Xin-Chun Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17029590#comment-17029590
 ] 

Xin-Chun Zhang commented on LUCENE-9004:


??Are you sure the results are accurate???

The aforementioned results were generated by the test tools, as shown in the 
attached file. It seems that the Java versions of IVFFlat and HNSW are 
considerably slower than their C/C++ counterparts. There may be several causes, 
e.g. differences between the programming languages, different parameter 
settings, and so on.

 The nmslib (hnsw) did run fast and consumed less memory. I will check why it 
throws an OOM exception.

!屏幕快照 2020-02-04 上午10.38.26.png!

 


[jira] [Updated] (LUCENE-9004) Approximate nearest vector search

2020-02-03 Thread Xin-Chun Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xin-Chun Zhang updated LUCENE-9004:
---
Attachment: 屏幕快照 2020-02-04 下午12.46.01.png
屏幕快照 2020-02-04 上午10.38.26.png




[jira] [Commented] (LUCENE-9004) Approximate nearest vector search

2020-01-28 Thread Xin-Chun Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17025647#comment-17025647
 ] 

Xin-Chun Zhang commented on LUCENE-9004:


"This is already gigantic - what would be the benefit of merging?"

-- Yes, I agree that it's gigantic. It's only a personal proposal, based on the 
following considerations:
 * Both issues address the same problem, namely searching for approximate 
nearest neighbors in a vector space, which implies that the key parts of the 
design and implementation could be reused, _e.g._ the vector format and the 
corresponding reader/writer. The implementation could then be more elegant.
 * Moreover, we could make sure that the provided interfaces are consistent and 
compatible.


[jira] [Commented] (LUCENE-9004) Approximate nearest vector search

2020-01-27 Thread Xin-Chun Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17024929#comment-17024929
 ] 

Xin-Chun Zhang commented on LUCENE-9004:


Is it possible to merge LUCENE-9136 with this issue?


[jira] [Comment Edited] (LUCENE-9004) Approximate nearest vector search

2020-01-21 Thread Xin-Chun Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17019519#comment-17019519
 ] 

Xin-Chun Zhang edited comment on LUCENE-9004 at 1/21/20 1:50 PM:
-

Hi [~sokolov], I created a related issue [#LUCENE-9136] that attempts to 
introduce the IVFFlat algorithm to Lucene. IVFFlat is widely used in many 
fields, from computer vision to speech recognition, for its smaller index size 
and memory usage. The algorithm can also be accelerated using GPU parallel 
computing, making it faster and more accurate than HNSW. 

My personal branch is available on GitHub: 
[https://github.com/irvingzhang/lucene-solr/tree/jira/LUCENE-9136]. The index 
format (one meta file with suffix .ifi) of IVFFlat is described in the class 
[Lucene90IvfFlatIndexFormat|https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/java/org/apache/lucene/codecs/lucene90/Lucene90IvfFlatIndexFormat.java].
 In my implementation, the k-means clustering is optimized when the number of 
vectors is very large (e.g. > 200,000 per segment): a subset is selected for 
training after shuffling, thereby reducing training time and memory usage (see 
the sketch after this comment). The insertion performance of IVFFlat is better 
because nothing extra is executed on insertion, while HNSW needs to maintain 
its graph. However, IVFFlat consumes more time in flushing because of the 
clustering.

Even though HNSW uses a cache for graphs while IVFFlat has no cache, my test 
cases show that the query performance of IVFFlat is better than that of HNSW, 
and its recall is quite high (avg. time < 10 ms and recall > 96% over a set of 
5 random vectors with 100 dimensions). My test class for IVFFlat is under the 
directory 
[https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/|https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/TestKnnIvfFlat.java].
 Performance comparison between IVFFlat and HNSW is in the class 
TestKnnGraphAndIvfFlat.

The work is still at an early stage. There must be some bugs that need to be 
fixed, and I would like to hear more comments.
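A minimal sketch of the subset-sampling optimization mentioned above, assuming 
a plain k-means trainer downstream; the class name, method name, and threshold 
handling are illustrative, not the branch's actual API:

{code:java}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

class TrainingSampler {
  // When a segment holds more vectors than maxTrainingSize, shuffle and
  // truncate so k-means trains on a bounded subset, capping time and memory.
  static List<float[]> selectTrainingSet(List<float[]> vectors, int maxTrainingSize, long seed) {
    if (vectors.size() <= maxTrainingSize) {
      return vectors; // small segment: train on all vectors
    }
    List<float[]> shuffled = new ArrayList<>(vectors);
    Collections.shuffle(shuffled, new Random(seed)); // deterministic for tests
    return shuffled.subList(0, maxTrainingSize);     // e.g. 200,000 per segment
  }
}
{code}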


was (Author: irvingzhang):
Hi, [~sokolov] , I created a related issue [#LUCENE-9136] that attempts to 
introduce IVFFlat algorithm to Lucene. IVFFlat is widely used in many fields, 
from computer vision to speech recognition for its smaller index and memory 
usage. And the algorithm can be accelerated using GPU parallel computing, 
making it faster and more accurate than HNSW. 

My personal branch is available in github 
[https://github.com/irvingzhang/lucene-solr/tree/jira/LUCENE-9136]. The index 
format (one meta file with suffix .ifi) of IVFFlat can be seen in the class 
[Lucene90IvfFlatIndexFormat|[https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/java/org/apache/lucene/codecs/lucene90/Lucene90IvfFlatIndexFormat.java]].
 In my implementation, the k-means clustering was optimized when the number of 
vectors is very large (e.g. > 200,000 per segment). A subset after shuffling is 
selected for training, thereby decreasing time and memory. The insertion 
performance of IVFFlat is better due to no extra executions on insertion while 
HNSW need to maintain the graph. However, IVFFlat consumes more time in 
flushing because of the clustering.

Even if HNSW uses a cache for graphs while IVFFlat has no cache, my test cases 
show that the query performance of IVFFlat is better than HNSW, and its recall 
is pretty high (avg time < 10ms and recall > 96% over a set of 5 random 
vectors with 100 dimensions). My test class for IVFFlat is under the directory 
[https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/|https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/TestKnnIvfFlat.java].
 Performance comparison between IVFFlat and HNSW is in the class 
TestKnnGraphAndIvfFlat.

The work is still in its early stage. There must be some bugs that need to be 
fixed and I would like to hear more comments.

> Approximate nearest vector search
> -
>
> Key: LUCENE-9004
> URL: https://issues.apache.org/jira/browse/LUCENE-9004
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Michael Sokolov
>Priority: Major
> Attachments: hnsw_layered_graph.png
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> "Semantic" search based on machine-learned vector "embeddings" representing 
> terms, queries and documents is becoming a must-have feature for a modern 
> search engine. SOLR-12890 is exploring various approaches to this, including 
> providing vector-based scoring functions. This is a spinoff issue from that.
> The idea here is to 

[jira] [Comment Edited] (LUCENE-9004) Approximate nearest vector search

2020-01-21 Thread Xin-Chun Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17019519#comment-17019519
 ] 

Xin-Chun Zhang edited comment on LUCENE-9004 at 1/21/20 1:48 PM:
-

Hi, [~sokolov] , I created a related issue [#LUCENE-9136] that attempts to 
introduce IVFFlat algorithm to Lucene. IVFFlat is widely used in many fields, 
from computer vision to speech recognition for its smaller index and memory 
usage. And the algorithm can be accelerated using GPU parallel computing, 
making it faster and more accurate than HNSW. 

My personal branch is available in github 
[https://github.com/irvingzhang/lucene-solr/tree/jira/LUCENE-9136]. The index 
format (one meta file with suffix .ifi) of IVFFlat can be seen in the class 
[Lucene90IvfFlatIndexFormat|[https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/java/org/apache/lucene/codecs/lucene90/Lucene90IvfFlatIndexFormat.java]].
 In my implementation, the k-means clustering was optimized when the number of 
vectors is very large (e.g. > 200,000 per segment). A subset after shuffling is 
selected for training, thereby decreasing time and memory. The insertion 
performance of IVFFlat is better due to no extra executions on insertion while 
HNSW need to maintain the graph. However, IVFFlat consumes more time in 
flushing because of the clustering.

Even if HNSW uses a cache for graphs while IVFFlat has no cache, my test cases 
show that the query performance of IVFFlat is better than HNSW, and its recall 
is pretty high (avg time < 10ms and recall > 96% over a set of 5 random 
vectors with 100 dimensions). My test class for IVFFlat is under the directory 
[https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/|https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/TestKnnIvfFlat.java].
 Performance comparison between IVFFlat and HNSW is in the class 
TestKnnGraphAndIvfFlat.

The work is still in its early stage. There must be some bugs that need to be 
fixed and I would like to hear more comments.


was (Author: irvingzhang):
I created a related issue [#LUCENE-9136] that attempts to introduce the 
IVFFlat algorithm to Lucene. IVFFlat is widely used in many fields, from 
computer vision to speech recognition, for its smaller index size and memory 
usage. The algorithm can also be accelerated using GPU parallel computing, 
making it faster and more accurate than HNSW. 

My personal branch is available on GitHub 
[https://github.com/irvingzhang/lucene-solr/tree/jira/LUCENE-9136]. The index 
format (only one meta file with suffix .ifi) of IVFFlat can be seen in the 
class Lucene90IvfFlatIndexFormat. In my implementation, the clustering process 
is optimized when the number of vectors is very large (e.g. > 200,000 per 
segment): a subset, selected after shuffling, is used for training, thereby 
saving time and memory. The insertion performance of IVFFlat is better because 
nothing extra is executed on insertion, whereas HNSW needs to maintain its 
graph. However, IVFFlat consumes more time in flushing because of the k-means 
clustering.

Even though HNSW uses a cache for graphs while IVFFlat has no cache, my test 
cases show that the query performance of IVFFlat is better than that of HNSW, 
and its recall is pretty high (avg time < 10 ms and recall > 96% over a set of 
5 random vectors with 100 dimensions). My test class for IVFFlat is under the 
directory 
[https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/|https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/TestKnnIvfFlat.java].
A performance comparison between IVFFlat and HNSW is in the class 
TestKnnGraphAndIvfFlat.

The work is still in its early stage. There must be some bugs that need to be 
fixed, and I would like to hear more comments. 

> Approximate nearest vector search
> -
>
> Key: LUCENE-9004
> URL: https://issues.apache.org/jira/browse/LUCENE-9004
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Michael Sokolov
>Priority: Major
> Attachments: hnsw_layered_graph.png
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> "Semantic" search based on machine-learned vector "embeddings" representing 
> terms, queries and documents is becoming a must-have feature for a modern 
> search engine. SOLR-12890 is exploring various approaches to this, including 
> providing vector-based scoring functions. This is a spinoff issue from that.
> The idea here is to explore approximate nearest-neighbor search. Researchers 
> have found that an approach based on navigating a graph that partially 
> encodes the nearest neighbor 

[jira] [Comment Edited] (LUCENE-9136) Introduce IVFFlat to Lucene for ANN similarity search

2020-01-21 Thread Xin-Chun Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17019507#comment-17019507
 ] 

Xin-Chun Zhang edited comment on LUCENE-9136 at 1/21/20 1:40 PM:
-

I worked on this issue for about three to four days, and it now works fine for 
searching.

My personal dev branch is available on GitHub 
[https://github.com/irvingzhang/lucene-solr/tree/jira/LUCENE-9136]. The index 
format (only one meta file with suffix .ifi) of IVFFlat is shown in the class 
Lucene90IvfFlatIndexFormat. In my implementation, the clustering process is 
optimized when the number of vectors is very large (e.g. > 200,000 per 
segment): a subset, selected after shuffling, is used for training, thereby 
saving time and memory. The insertion performance of IVFFlat is better because 
nothing extra is executed on insertion, whereas HNSW needs to maintain its 
graph. However, IVFFlat consumes more time in flushing because of the k-means 
clustering.

My test cases show that the query performance of IVFFlat is better than that 
of HNSW, even though HNSW uses a cache for graphs while IVFFlat has no cache. 
Its recall is also pretty high (avg time < 10 ms and recall > 96% over a set 
of 5 random vectors with 100 dimensions). My test class for IVFFlat is under 
the directory 
[https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/|https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/TestKnnIvfFlat.java].
A performance comparison between IVFFlat and HNSW is in the class 
TestKnnGraphAndIvfFlat.
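
For reference, the recall figures here are the usual overlap metric: the 
fraction of the exact top-k neighbors that the approximate search also 
returns. A small sketch of that measurement (not the actual test code):

{code:java}
import java.util.HashSet;
import java.util.Set;

/** Sketch: recall@k is the overlap between approximate and exact top-k results. */
final class RecallSketch {
  static double recallAtK(int[] approximateTopK, int[] exactTopK) {
    Set<Integer> truth = new HashSet<>();
    for (int doc : exactTopK) {
      truth.add(doc);
    }
    int hits = 0;
    for (int doc : approximateTopK) {
      if (truth.contains(doc)) {
        hits++;
      }
    }
    return (double) hits / exactTopK.length; // e.g. 0.96 for "recall > 96%"
  }
}
{code}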

The work is still in its early stage. There must be some bugs that need to be 
fixed, and I would like to hear more comments. Everyone is welcome to 
participate in this issue.


was (Author: irvingzhang):
I worked on this issue for about three to four days, and it now works fine for 
searching.

My personal dev branch is available on GitHub 
[https://github.com/irvingzhang/lucene-solr/tree/jira/LUCENE-9136]. The index 
format of IVFFlat is shown in the class Lucene90IvfFlatIndexFormat. In my 
implementation, the clustering process is optimized when the number of vectors 
is very large (e.g. > 200,000 per segment): a subset, selected after 
shuffling, is used for training, thereby saving time and memory. The insertion 
performance of IVFFlat is better because nothing extra is executed on 
insertion, whereas HNSW needs to maintain its graph. However, IVFFlat consumes 
more time in flushing because of the k-means clustering.

My test cases show that the query performance of IVFFlat is better than that 
of HNSW, even though HNSW uses a cache for graphs while IVFFlat has no cache. 
Its recall is also pretty high (avg time < 10 ms and recall > 96% over a set 
of 5 random vectors with 100 dimensions). My test class for IVFFlat is under 
the directory 
[https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/|https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/TestKnnIvfFlat.java].
A performance comparison between IVFFlat and HNSW is in the class 
TestKnnGraphAndIvfFlat.

The work is still in its early stage. There must be some bugs that need to be 
fixed, and I would like to hear more comments. Everyone is welcome to 
participate in this issue.

> Introduce IVFFlat to Lucene for ANN similarity search
> -
>
> Key: LUCENE-9136
> URL: https://issues.apache.org/jira/browse/LUCENE-9136
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Xin-Chun Zhang
>Priority: Major
>
> Representation learning (RL) has been an established discipline in the 
> machine learning space for decades, but it has drawn tremendous attention 
> lately with the emergence of deep learning. The central problem of RL is to 
> determine an optimal representation of the input data. By embedding the data 
> into high-dimensional vectors, the vector retrieval (VR) method can then be 
> applied to search for relevant items.
> With the rapid development of RL over the past few years, the technique has 
> been used extensively in industry, from online advertising to computer vision 
> and speech recognition. There exist many open source implementations of VR 
> algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
> choices for potential users. However, the aforementioned implementations are 
> all written in C++ with no plan to support a Java interface, making them hard 
> to integrate into Java projects or to use for those who are not familiar with 
> C/C++ [https://github.com/facebookresearch/faiss/issues/105]. 
> The algorithms for vector retrieval can be roughly classified into four 
> categories,
>  # Tree-based algorithms, 

[jira] [Comment Edited] (LUCENE-9004) Approximate nearest vector search

2020-01-21 Thread Xin-Chun Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17019519#comment-17019519
 ] 

Xin-Chun Zhang edited comment on LUCENE-9004 at 1/21/20 1:39 PM:
-

I created a related issue [#LUCENE-9136] that attempts to introduce the 
IVFFlat algorithm to Lucene. IVFFlat is widely used in many fields, from 
computer vision to speech recognition, for its smaller index size and memory 
usage. The algorithm can also be accelerated using GPU parallel computing, 
making it faster and more accurate than HNSW. 

My personal branch is available on GitHub 
[https://github.com/irvingzhang/lucene-solr/tree/jira/LUCENE-9136]. The index 
format (only one meta file with suffix .ifi) of IVFFlat can be seen in the 
class Lucene90IvfFlatIndexFormat. In my implementation, the clustering process 
is optimized when the number of vectors is very large (e.g. > 200,000 per 
segment): a subset, selected after shuffling, is used for training, thereby 
saving time and memory. The insertion performance of IVFFlat is better because 
nothing extra is executed on insertion, whereas HNSW needs to maintain its 
graph. However, IVFFlat consumes more time in flushing because of the k-means 
clustering.

Even though HNSW uses a cache for graphs while IVFFlat has no cache, my test 
cases show that the query performance of IVFFlat is better than that of HNSW, 
and its recall is pretty high (avg time < 10 ms and recall > 96% over a set of 
5 random vectors with 100 dimensions). My test class for IVFFlat is under the 
directory 
[https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/|https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/TestKnnIvfFlat.java].
A performance comparison between IVFFlat and HNSW is in the class 
TestKnnGraphAndIvfFlat.

The work is still in its early stage. There must be some bugs that need to be 
fixed, and I would like to hear more comments. 


was (Author: irvingzhang):
I created a related issue [#LUCENE-9136] that attempts to introduce the 
IVFFlat algorithm to Lucene. IVFFlat is widely used in many fields, from 
computer vision to speech recognition, for its smaller index size and memory 
usage. It also supports GPU parallel computing, making it faster and more 
accurate than HNSW. 

My personal branch is available on GitHub 
[https://github.com/irvingzhang/lucene-solr/tree/jira/LUCENE-9136]. The index 
format of IVFFlat can be seen in the class Lucene90IvfFlatIndexFormat. In my 
implementation, the clustering process is optimized when the number of vectors 
is very large (e.g. > 200,000 per segment): a subset, selected after 
shuffling, is used for training, thereby saving time and memory. The insertion 
performance of IVFFlat is better because nothing extra is executed on 
insertion, whereas HNSW needs to maintain its graph. However, IVFFlat consumes 
more time in flushing because of the k-means clustering.

Even though HNSW uses a cache for graphs while IVFFlat has no cache, my test 
cases show that the query performance of IVFFlat is better than that of HNSW, 
and its recall is pretty high (avg time < 10 ms and recall > 96% over a set of 
5 random vectors with 100 dimensions). My test class for IVFFlat is under the 
directory 
[https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/|https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/TestKnnIvfFlat.java].
A performance comparison between IVFFlat and HNSW is in the class 
TestKnnGraphAndIvfFlat.

The work is still in its early stage. There must be some bugs that need to be 
fixed, and I would like to hear more comments. 

> Approximate nearest vector search
> -
>
> Key: LUCENE-9004
> URL: https://issues.apache.org/jira/browse/LUCENE-9004
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Michael Sokolov
>Priority: Major
> Attachments: hnsw_layered_graph.png
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> "Semantic" search based on machine-learned vector "embeddings" representing 
> terms, queries and documents is becoming a must-have feature for a modern 
> search engine. SOLR-12890 is exploring various approaches to this, including 
> providing vector-based scoring functions. This is a spinoff issue from that.
> The idea here is to explore approximate nearest-neighbor search. Researchers 
> have found that an approach based on navigating a graph that partially 
> encodes the nearest neighbor relation at multiple scales can provide 
> accuracy > 95% (as compared to exact nearest neighbor calculations) at a 
> reasonable cost. This issue will explore implementing HNSW (hierarchical 
> navigable small-world) graphs for 

[jira] [Updated] (LUCENE-9136) Introduce IVFFlat to Lucene for ANN similarity search

2020-01-21 Thread Xin-Chun Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xin-Chun Zhang updated LUCENE-9136:
---
Description: 
Representation learning (RL) has been an established discipline in the machine 
learning space for decades, but it has drawn tremendous attention lately with 
the emergence of deep learning. The central problem of RL is to determine an 
optimal representation of the input data. By embedding the data into 
high-dimensional vectors, the vector retrieval (VR) method can then be applied 
to search for relevant items.

With the rapid development of RL over the past few years, the technique has 
been used extensively in industry, from online advertising to computer vision 
and speech recognition. There exist many open source implementations of VR 
algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
choices for potential users. However, the aforementioned implementations are 
all written in C++ with no plan to support a Java interface, making them hard 
to integrate into Java projects or to use for those who are not familiar with 
C/C++ [https://github.com/facebookresearch/faiss/issues/105]. 

The algorithms for vector retrieval can be roughly classified into four 
categories,
 # Tree-based algorithms, such as KD-tree;
 # Hashing methods, such as LSH (Locality-Sensitive Hashing);
 # Product quantization based algorithms, such as IVFFlat;
 # Graph-based algorithms, such as HNSW, SSG, NSG;

where IVFFlat and HNSW are the most popular ones among all the VR algorithms.

Recently, the implementation of HNSW (Hierarchical Navigable Small World, 
LUCENE-9004) for Lucene has made great progress. The issue draws the attention 
of those who are interested in Lucene or hope to use HNSW with Solr/Lucene. 

As an alternative for solving ANN similarity search problems, IVFFlat is also 
very popular with many users and supporters. Compared with HNSW, IVFFlat has a 
smaller index size but requires k-means clustering, while HNSW is faster in 
query (no training required) but requires extra storage for saving graphs 
[indexing 1M vectors|https://github.com/facebookresearch/faiss/wiki/Indexing-1M-vectors]. 
Another advantage is that IVFFlat can be faster and more accurate when GPU 
parallel computing is enabled (currently not supported in Java). Both 
algorithms have their merits and demerits. Since HNSW is now under 
development, it may be better to provide both implementations (HNSW and 
IVFFlat) for potential users who face very different scenarios and want more 
choices.
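
For intuition on this trade-off, a back-of-envelope estimate of IVFFlat's 
query cost (assuming balanced clusters; the numbers below are illustrative and 
not taken from any benchmark here):

{code:java}
/** Sketch: rough number of distance computations per IVFFlat query. */
final class IvfFlatCostSketch {
  static double expectedDistanceComputations(long n, int clusters, int nprobe) {
    // One comparison per centroid to choose the probes, then a scan of
    // roughly nprobe/clusters of the dataset if clusters are balanced.
    return clusters + (double) nprobe * n / clusters;
  }

  public static void main(String[] args) {
    // Illustrative: 1M vectors, 1,000 clusters, 8 probes
    // => about 1,000 + 8 * 1,000,000 / 1,000 = 9,000 computations,
    // versus 1,000,000 for a brute-force scan.
    System.out.println(expectedDistanceComputations(1_000_000, 1_000, 8));
  }
}
{code}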

  was:
Representation learning (RL) has been an established discipline in the machine 
learning space for decades, but it has drawn tremendous attention lately with 
the emergence of deep learning. The central problem of RL is to determine an 
optimal representation of the input data. By embedding the data into 
high-dimensional vectors, the vector retrieval (VR) method can then be applied 
to search for relevant items.

With the rapid development of RL over the past few years, the technique has 
been used extensively in industry, from online advertising to computer vision 
and speech recognition. There exist many open source implementations of VR 
algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
choices for potential users. However, the aforementioned implementations are 
all written in C++ with no plan to support a Java interface, making them hard 
to integrate into Java projects or to use for those who are not familiar with 
C/C++ [https://github.com/facebookresearch/faiss/issues/105]. 

The algorithms for vector retrieval can be roughly classified into four 
categories,
 # Tree-based algorithms, such as KD-tree;
 # Hashing methods, such as LSH (Locality-Sensitive Hashing);
 # Product quantization algorithms, such as IVFFlat;
 # Graph-based algorithms, such as HNSW, SSG, NSG;

where IVFFlat and HNSW are the most popular ones among all the VR algorithms.

Recently, the implementation of HNSW (Hierarchical Navigable Small World, 
LUCENE-9004) for Lucene has made great progress. The issue draws the attention 
of those who are interested in Lucene or hope to use HNSW with Solr/Lucene. 

As an alternative for solving ANN similarity search problems, IVFFlat is also 
very popular with many users and supporters. Compared with HNSW, IVFFlat has a 
smaller index size but requires k-means clustering, while HNSW is faster in 
query (no training required) but requires extra storage for saving graphs 
[indexing 1M vectors|https://github.com/facebookresearch/faiss/wiki/Indexing-1M-vectors]. 
Another advantage is that IVFFlat can be faster and more accurate when GPU 
parallel computing is enabled (currently not supported in Java). Both 
algorithms have their merits and demerits. Since HNSW is now under 
development, it may be better to provide both implementations (HNSW and 
IVFFlat) for potential users who face very different scenarios 

[jira] [Comment Edited] (LUCENE-9136) Introduce IVFFlat to Lucene for ANN similarity search

2020-01-21 Thread Xin-Chun Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17019507#comment-17019507
 ] 

Xin-Chun Zhang edited comment on LUCENE-9136 at 1/21/20 1:26 PM:
-

I worked on this issue for about three to four days, and it now works fine for 
searching.

My personal dev branch is available on GitHub 
[https://github.com/irvingzhang/lucene-solr/tree/jira/LUCENE-9136]. The index 
format of IVFFlat is shown in the class Lucene90IvfFlatIndexFormat. In my 
implementation, the clustering process is optimized when the number of vectors 
is very large (e.g. > 200,000 per segment): a subset, selected after 
shuffling, is used for training, thereby saving time and memory. The insertion 
performance of IVFFlat is better because nothing extra is executed on 
insertion, whereas HNSW needs to maintain its graph. However, IVFFlat consumes 
more time in flushing because of the k-means clustering.

My test cases show that the query performance of IVFFlat is better than that 
of HNSW, even though HNSW uses a cache for graphs while IVFFlat has no cache. 
Its recall is also pretty high (avg time < 10 ms and recall > 96% over a set 
of 5 random vectors with 100 dimensions). My test class for IVFFlat is under 
the directory 
[https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/|https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/TestKnnIvfFlat.java].
A performance comparison between IVFFlat and HNSW is in the class 
TestKnnGraphAndIvfFlat.

The work is still in its early stage. There must be some bugs that need to be 
fixed, and I would like to hear more comments. Everyone is welcome to 
participate in this issue.


was (Author: irvingzhang):
I worked on this issue for about three to four days, and it now works fine for 
searching.

My personal dev branch is available on GitHub 
[https://github.com/irvingzhang/lucene-solr/tree/jira/LUCENE-9136]. The index 
format of IVFFlat is shown in the class Lucene90IvfFlatIndexFormat. In my 
implementation, the clustering process is optimized when the number of vectors 
is very large (e.g. > 40,000 per segment): a subset, selected after shuffling, 
is used for training, thereby saving time and memory. The insertion 
performance of IVFFlat is better because nothing extra is executed on 
insertion, whereas HNSW needs to maintain its graph. However, IVFFlat consumes 
more time in flushing because of the k-means clustering.

My test cases show that the query performance of IVFFlat is better than that 
of HNSW, even though HNSW uses a cache for graphs while IVFFlat has no cache. 
Its recall is also pretty high (avg time < 10 ms and recall > 96% over a set 
of 5 random vectors with 100 dimensions). My test class for IVFFlat is under 
the directory 
[https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/|https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/TestKnnIvfFlat.java].
A performance comparison between IVFFlat and HNSW is in the class 
TestKnnGraphAndIvfFlat.

The work is still in its early stage. There must be some bugs that need to be 
fixed, and I would like to hear more comments. Everyone is welcome to 
participate in this issue.

> Introduce IVFFlat to Lucene for ANN similarity search
> -
>
> Key: LUCENE-9136
> URL: https://issues.apache.org/jira/browse/LUCENE-9136
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Xin-Chun Zhang
>Priority: Major
>
> Representation learning (RL) has been an established discipline in the 
> machine learning space for decades, but it has drawn tremendous attention 
> lately with the emergence of deep learning. The central problem of RL is to 
> determine an optimal representation of the input data. By embedding the data 
> into high-dimensional vectors, the vector retrieval (VR) method can then be 
> applied to search for relevant items.
> With the rapid development of RL over the past few years, the technique has 
> been used extensively in industry, from online advertising to computer vision 
> and speech recognition. There exist many open source implementations of VR 
> algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
> choices for potential users. However, the aforementioned implementations are 
> all written in C++ with no plan to support a Java interface, making them hard 
> to integrate into Java projects or to use for those who are not familiar with 
> C/C++ [https://github.com/facebookresearch/faiss/issues/105]. 
> The algorithms for vector retrieval can be roughly classified into four 
> categories,
>  # Tree-based algorithms, such as KD-tree;
>  # Hashing methods, 

[jira] [Comment Edited] (LUCENE-9004) Approximate nearest vector search

2020-01-21 Thread Xin-Chun Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17019519#comment-17019519
 ] 

Xin-Chun Zhang edited comment on LUCENE-9004 at 1/21/20 1:25 PM:
-

I created a related issue [#LUCENE-9136] that attempts to introduce the 
IVFFlat algorithm to Lucene. IVFFlat is widely used in many fields, from 
computer vision to speech recognition, for its smaller index size and memory 
usage. It also supports GPU parallel computing, making it faster and more 
accurate than HNSW. 

My personal branch is available on GitHub 
[https://github.com/irvingzhang/lucene-solr/tree/jira/LUCENE-9136]. The index 
format of IVFFlat can be seen in the class Lucene90IvfFlatIndexFormat. In my 
implementation, the clustering process is optimized when the number of vectors 
is very large (e.g. > 200,000 per segment): a subset, selected after 
shuffling, is used for training, thereby saving time and memory. The insertion 
performance of IVFFlat is better because nothing extra is executed on 
insertion, whereas HNSW needs to maintain its graph. However, IVFFlat consumes 
more time in flushing because of the k-means clustering.

Even though HNSW uses a cache for graphs while IVFFlat has no cache, my test 
cases show that the query performance of IVFFlat is better than that of HNSW, 
and its recall is pretty high (avg time < 10 ms and recall > 96% over a set of 
5 random vectors with 100 dimensions). My test class for IVFFlat is under the 
directory 
[https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/|https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/TestKnnIvfFlat.java].
A performance comparison between IVFFlat and HNSW is in the class 
TestKnnGraphAndIvfFlat.

The work is still in its early stage. There must be some bugs that need to be 
fixed, and I would like to hear more comments. 


was (Author: irvingzhang):
I created a related issue [#LUCENE-9136] that attempts to introduce the 
IVFFlat algorithm to Lucene. IVFFlat is widely used in many fields, from 
computer vision to speech recognition, for its smaller index size and memory 
usage. It also supports GPU parallel computing, making it faster and more 
accurate than HNSW. 

My personal branch is available on GitHub 
[https://github.com/irvingzhang/lucene-solr/tree/jira/LUCENE-9136]. The index 
format of IVFFlat can be seen in the class Lucene90IvfFlatIndexFormat. In my 
implementation, the clustering process is optimized when the number of vectors 
is very large (e.g. > 40,000 per segment): a subset, selected after shuffling, 
is used for training, thereby saving time and memory. The insertion 
performance of IVFFlat is better because nothing extra is executed on 
insertion, whereas HNSW needs to maintain its graph. However, IVFFlat consumes 
more time in flushing because of the k-means clustering.

Even though HNSW uses a cache for graphs while IVFFlat has no cache, my test 
cases show that the query performance of IVFFlat is better than that of HNSW, 
and its recall is pretty high (avg time < 10 ms and recall > 96% over a set of 
5 random vectors with 100 dimensions). My test class for IVFFlat is under the 
directory 
[https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/|https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/TestKnnIvfFlat.java].
A performance comparison between IVFFlat and HNSW is in the class 
TestKnnGraphAndIvfFlat.

The work is still in its early stage. There must be some bugs that need to be 
fixed, and I would like to hear more comments. 

> Approximate nearest vector search
> -
>
> Key: LUCENE-9004
> URL: https://issues.apache.org/jira/browse/LUCENE-9004
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Michael Sokolov
>Priority: Major
> Attachments: hnsw_layered_graph.png
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> "Semantic" search based on machine-learned vector "embeddings" representing 
> terms, queries and documents is becoming a must-have feature for a modern 
> search engine. SOLR-12890 is exploring various approaches to this, including 
> providing vector-based scoring functions. This is a spinoff issue from that.
> The idea here is to explore approximate nearest-neighbor search. Researchers 
> have found that an approach based on navigating a graph that partially 
> encodes the nearest neighbor relation at multiple scales can provide 
> accuracy > 95% (as compared to exact nearest neighbor calculations) at a 
> reasonable cost. This issue will explore implementing HNSW (hierarchical 
> navigable small-world) graphs for the purpose of approximate nearest vector 
> search (often referred 
> 

[jira] [Comment Edited] (LUCENE-9136) Introduce IVFFlat to Lucene for ANN similarity search

2020-01-20 Thread Xin-Chun Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17019507#comment-17019507
 ] 

Xin-Chun Zhang edited comment on LUCENE-9136 at 1/21/20 7:40 AM:
-

I worked on this issue for about three to four days, and it now works fine for 
searching.

My personal dev branch is available on GitHub 
[https://github.com/irvingzhang/lucene-solr/tree/jira/LUCENE-9136]. The index 
format of IVFFlat is shown in the class Lucene90IvfFlatIndexFormat. In my 
implementation, the clustering process is optimized when the number of vectors 
is very large (e.g. > 40,000 per segment): a subset, selected after shuffling, 
is used for training, thereby saving time and memory. The insertion 
performance of IVFFlat is better because nothing extra is executed on 
insertion, whereas HNSW needs to maintain its graph. However, IVFFlat consumes 
more time in flushing because of the k-means clustering.

My test cases show that the query performance of IVFFlat is better than that 
of HNSW, even though HNSW uses a cache for graphs while IVFFlat has no cache. 
Its recall is also pretty high (avg time < 10 ms and recall > 96% over a set 
of 5 random vectors with 100 dimensions). My test class for IVFFlat is under 
the directory 
[https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/|https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/TestKnnIvfFlat.java].
A performance comparison between IVFFlat and HNSW is in the class 
TestKnnGraphAndIvfFlat.

The work is still in its early stage. There must be some bugs that need to be 
fixed, and I would like to hear more comments. Everyone is welcome to 
participate in this issue.


was (Author: irvingzhang):
I worked on this issue for about three to four days, and it now works fine for 
searching.

My personal dev branch is available on GitHub 
[https://github.com/irvingzhang/lucene-solr/tree/jira/LUCENE-9136]. The index 
format of IVFFlat is shown in the class Lucene90IvfFlatIndexFormat. In my 
implementation, the clustering process is optimized when the number of vectors 
is very large (e.g. > 40,000 per segment): a subset, selected after shuffling, 
is used for training, thereby saving time and memory. The insertion 
performance of IVFFlat is better because nothing extra is executed on 
insertion, whereas HNSW needs to maintain its graph. However, IVFFlat consumes 
more time in flushing because of the k-means clustering.

My test cases show that the query performance of IVFFlat is better than that 
of HNSW, even though HNSW uses a cache for graphs while IVFFlat has no cache. 
Its recall is also pretty high (avg time < 10 ms and recall > 97% over a set 
of 5 random vectors with 100 dimensions). My test class for IVFFlat is under 
the directory 
[https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/|https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/TestKnnIvfFlat.java].
A performance comparison between IVFFlat and HNSW is in the class 
TestKnnGraphAndIvfFlat.

The work is still in its early stage. There must be some bugs that need to be 
fixed, and I would like to hear more comments. Everyone is welcome to 
participate in this issue.

> Introduce IVFFlat to Lucene for ANN similarity search
> -
>
> Key: LUCENE-9136
> URL: https://issues.apache.org/jira/browse/LUCENE-9136
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Xin-Chun Zhang
>Priority: Major
>
> Representation learning (RL) has been an established discipline in the 
> machine learning space for decades, but it has drawn tremendous attention 
> lately with the emergence of deep learning. The central problem of RL is to 
> determine an optimal representation of the input data. By embedding the data 
> into high-dimensional vectors, the vector retrieval (VR) method can then be 
> applied to search for relevant items.
> With the rapid development of RL over the past few years, the technique has 
> been used extensively in industry, from online advertising to computer vision 
> and speech recognition. There exist many open source implementations of VR 
> algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
> choices for potential users. However, the aforementioned implementations are 
> all written in C++ with no plan to support a Java interface, making them hard 
> to integrate into Java projects or to use for those who are not familiar with 
> C/C++ [https://github.com/facebookresearch/faiss/issues/105]. 
> The algorithms for vector retrieval can be roughly classified into four 
> categories,
>  # Tree-based algorithms, such as KD-tree;
>  # Hashing methods, 

[jira] [Comment Edited] (LUCENE-9004) Approximate nearest vector search

2020-01-20 Thread Xin-Chun Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17019519#comment-17019519
 ] 

Xin-Chun Zhang edited comment on LUCENE-9004 at 1/21/20 6:53 AM:
-

I created a related issue [#LUCENE-9136] that attempts to introduce the 
IVFFlat algorithm to Lucene. IVFFlat is widely used in many fields, from 
computer vision to speech recognition, for its smaller index size and memory 
usage. It also supports GPU parallel computing, making it faster and more 
accurate than HNSW. 

My personal branch is available on GitHub 
[https://github.com/irvingzhang/lucene-solr/tree/jira/LUCENE-9136]. The index 
format of IVFFlat can be seen in the class Lucene90IvfFlatIndexFormat. In my 
implementation, the clustering process is optimized when the number of vectors 
is very large (e.g. > 40,000 per segment): a subset, selected after shuffling, 
is used for training, thereby saving time and memory. The insertion 
performance of IVFFlat is better because nothing extra is executed on 
insertion, whereas HNSW needs to maintain its graph. However, IVFFlat consumes 
more time in flushing because of the k-means clustering.

Even though HNSW uses a cache for graphs while IVFFlat has no cache, my test 
cases show that the query performance of IVFFlat is better than that of HNSW, 
and its recall is pretty high (avg time < 10 ms and recall > 96% over a set of 
5 random vectors with 100 dimensions). My test class for IVFFlat is under the 
directory 
[https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/|https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/TestKnnIvfFlat.java].
A performance comparison between IVFFlat and HNSW is in the class 
TestKnnGraphAndIvfFlat.

The work is still in its early stage. There must be some bugs that need to be 
fixed, and I would like to hear more comments. 


was (Author: irvingzhang):
I created a related issue [#LUCENE-9136] that attempts to introduce the 
IVFFlat algorithm to Lucene. IVFFlat is widely used in many fields, from 
computer vision to speech recognition, for its smaller index size and memory 
usage. It also supports GPU parallel computing, making it faster and more 
accurate than HNSW. 

My personal branch is available on GitHub 
[https://github.com/irvingzhang/lucene-solr/tree/jira/LUCENE-9136]. The index 
format of IVFFlat can be seen in the class Lucene90IvfFlatIndexFormat. In my 
implementation, the clustering process is optimized when the number of vectors 
is very large (e.g. > 40,000 per segment): a subset, selected after shuffling, 
is used for training, thereby saving time and memory. The insertion 
performance of IVFFlat is better because nothing extra is executed on 
insertion, whereas HNSW needs to maintain its graph. However, IVFFlat consumes 
more time in flushing because of the k-means clustering.

Even though HNSW uses a cache for graphs while IVFFlat has no cache, my test 
cases show that the query performance of IVFFlat is better than that of HNSW, 
and its recall is pretty high (avg time < 10 ms and recall > 96% over a set of 
5 random vectors with 100 dimensions). My test class for IVFFlat is under the 
directory 
[https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/|https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/TestKnnIvfFlat.java].
A performance comparison between IVFFlat and HNSW is in the class 
TestKnnGraphAndIvfFlat.

The work is still in its early stage. It currently has some code that is 
similar to HNSW's and could be refactored. Moreover, there must be some bugs 
that need to be fixed, and I would like to hear more comments.

> Approximate nearest vector search
> -
>
> Key: LUCENE-9004
> URL: https://issues.apache.org/jira/browse/LUCENE-9004
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Michael Sokolov
>Priority: Major
> Attachments: hnsw_layered_graph.png
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> "Semantic" search based on machine-learned vector "embeddings" representing 
> terms, queries and documents is becoming a must-have feature for a modern 
> search engine. SOLR-12890 is exploring various approaches to this, including 
> providing vector-based scoring functions. This is a spinoff issue from that.
> The idea here is to explore approximate nearest-neighbor search. Researchers 
> have found that an approach based on navigating a graph that partially 
> encodes the nearest neighbor relation at multiple scales can provide 
> accuracy > 95% (as compared to exact nearest neighbor calculations) at a 
> reasonable cost. This issue will explore implementing HNSW (hierarchical 
> navigable 

[jira] [Comment Edited] (LUCENE-9136) Introduce IVFFlat to Lucene for ANN similarity search

2020-01-20 Thread Xin-Chun Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17019507#comment-17019507
 ] 

Xin-Chun Zhang edited comment on LUCENE-9136 at 1/21/20 6:31 AM:
-

I worked on this issue for about three to four days, and it now works fine for 
searching.

My personal dev branch is available on GitHub 
[https://github.com/irvingzhang/lucene-solr/tree/jira/LUCENE-9136]. The index 
format of IVFFlat is shown in the class Lucene90IvfFlatIndexFormat. In my 
implementation, the clustering process is optimized when the number of vectors 
is very large (e.g. > 40,000 per segment): a subset, selected after shuffling, 
is used for training, thereby saving time and memory. The insertion 
performance of IVFFlat is better because nothing extra is executed on 
insertion, whereas HNSW needs to maintain its graph. However, IVFFlat consumes 
more time in flushing because of the k-means clustering.

My test cases show that the query performance of IVFFlat is better than that 
of HNSW, even though HNSW uses a cache for graphs while IVFFlat has no cache. 
Its recall is also pretty high (avg time < 10 ms and recall > 97% over a set 
of 5 random vectors with 100 dimensions). My test class for IVFFlat is under 
the directory 
[https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/|https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/TestKnnIvfFlat.java].
A performance comparison between IVFFlat and HNSW is in the class 
TestKnnGraphAndIvfFlat.

The work is still in its early stage. There must be some bugs that need to be 
fixed, and I would like to hear more comments. Everyone is welcome to 
participate in this issue.


was (Author: irvingzhang):
I worked on this issue for about three to four days, and it now works fine for 
searching.

My personal dev branch is available on GitHub 
[https://github.com/irvingzhang/lucene-solr/tree/jira/LUCENE-9136]. The index 
format of IVFFlat is shown in the class Lucene90IvfFlatIndexFormat. In my 
implementation, the clustering process is optimized when the number of vectors 
is very large (e.g. > 40,000 per segment): a subset, selected after shuffling, 
is used for training, thereby saving time and memory. The insertion 
performance of IVFFlat is better because nothing extra is executed on 
insertion, whereas HNSW needs to maintain its graph. However, IVFFlat consumes 
more time in flushing because of the k-means clustering.

My test cases show that the query performance of IVFFlat is better than that 
of HNSW, even though HNSW uses a cache for graphs while IVFFlat has no cache. 
Its recall is also pretty high (recall > 97% over a set of 5 random vectors 
with 100 dimensions). My test class for IVFFlat is under the directory 
[https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/|https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/TestKnnIvfFlat.java].
A performance comparison between IVFFlat and HNSW is in the class 
TestKnnGraphAndIvfFlat.

The work is still in its early stage. There must be some bugs that need to be 
fixed, and I would like to hear more comments. Everyone is welcome to 
participate in this issue.

> Introduce IVFFlat to Lucene for ANN similarity search
> -
>
> Key: LUCENE-9136
> URL: https://issues.apache.org/jira/browse/LUCENE-9136
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Xin-Chun Zhang
>Priority: Major
>
> Representation learning (RL) has been an established discipline in the 
> machine learning space for decades, but it has drawn tremendous attention 
> lately with the emergence of deep learning. The central problem of RL is to 
> determine an optimal representation of the input data. By embedding the data 
> into high-dimensional vectors, the vector retrieval (VR) method can then be 
> applied to search for relevant items.
> With the rapid development of RL over the past few years, the technique has 
> been used extensively in industry, from online advertising to computer vision 
> and speech recognition. There exist many open source implementations of VR 
> algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
> choices for potential users. However, the aforementioned implementations are 
> all written in C++ with no plan to support a Java interface, making them hard 
> to integrate into Java projects or to use for those who are not familiar with 
> C/C++ [https://github.com/facebookresearch/faiss/issues/105]. 
> The algorithms for vector retrieval can be roughly classified into four 
> categories,
>  # Tree-based algorithms, such as KD-tree;
>  # Hashing methods, such as LSH (Local 

[jira] [Comment Edited] (LUCENE-9004) Approximate nearest vector search

2020-01-20 Thread Xin-Chun Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17019519#comment-17019519
 ] 

Xin-Chun Zhang edited comment on LUCENE-9004 at 1/21/20 6:30 AM:
-

I created a related issue [#LUCENE-9136] that attempts to introduce the 
IVFFlat algorithm to Lucene. IVFFlat is widely used in many fields, from 
computer vision to speech recognition, for its smaller index size and memory 
usage. It also supports GPU parallel computing, making it faster and more 
accurate than HNSW. 

My personal branch is available on GitHub 
[https://github.com/irvingzhang/lucene-solr/tree/jira/LUCENE-9136]. The index 
format of IVFFlat can be seen in the class Lucene90IvfFlatIndexFormat. In my 
implementation, the clustering process is optimized when the number of vectors 
is very large (e.g. > 40,000 per segment): a subset, selected after shuffling, 
is used for training, thereby saving time and memory. The insertion 
performance of IVFFlat is better because nothing extra is executed on 
insertion, whereas HNSW needs to maintain its graph. However, IVFFlat consumes 
more time in flushing because of the k-means clustering.

Even though HNSW uses a cache for graphs while IVFFlat has no cache, my test 
cases show that the query performance of IVFFlat is better than that of HNSW, 
and its recall is pretty high (avg time < 10 ms and recall > 96% over a set of 
5 random vectors with 100 dimensions). My test class for IVFFlat is under the 
directory 
[https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/|https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/TestKnnIvfFlat.java].
A performance comparison between IVFFlat and HNSW is in the class 
TestKnnGraphAndIvfFlat.

The work is still in its early stage. It currently has some code that is 
similar to HNSW's and could be refactored. Moreover, there must be some bugs 
that need to be fixed, and I would like to hear more comments.


was (Author: irvingzhang):
I created a related issue [#LUCENE-9136] that attempts to introduce the 
IVFFlat algorithm to Lucene. IVFFlat is widely used in many fields, from 
computer vision to speech recognition, for its smaller index size and memory 
usage. It also supports GPU parallel computing, making it faster and more 
accurate than HNSW. 

My personal branch is available on GitHub 
[https://github.com/irvingzhang/lucene-solr/tree/jira/LUCENE-9136]. The index 
format of IVFFlat can be seen in the class Lucene90IvfFlatIndexFormat. In my 
implementation, the clustering process is optimized when the number of vectors 
is very large (e.g. > 40,000 per segment): a subset, selected after shuffling, 
is used for training, thereby saving time and memory. The insertion 
performance of IVFFlat is better because nothing extra is executed on 
insertion, whereas HNSW needs to maintain its graph. However, IVFFlat consumes 
more time in flushing because of the k-means clustering.

Even though HNSW uses a cache for graphs while IVFFlat has no cache, my test 
cases show that the query performance of IVFFlat is better than that of HNSW, 
and its recall is pretty high (recall > 97% over a set of 5 random vectors 
with 100 dimensions). My test class for IVFFlat is under the directory 
[https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/|https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/TestKnnIvfFlat.java].
A performance comparison between IVFFlat and HNSW is in the class 
TestKnnGraphAndIvfFlat.

The work is still in its early stage. It currently has some code that is 
similar to HNSW's and could be refactored. Moreover, there must be some bugs 
that need to be fixed, and I would like to hear more comments.

> Approximate nearest vector search
> -
>
> Key: LUCENE-9004
> URL: https://issues.apache.org/jira/browse/LUCENE-9004
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Michael Sokolov
>Priority: Major
> Attachments: hnsw_layered_graph.png
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> "Semantic" search based on machine-learned vector "embeddings" representing 
> terms, queries and documents is becoming a must-have feature for a modern 
> search engine. SOLR-12890 is exploring various approaches to this, including 
> providing vector-based scoring functions. This is a spinoff issue from that.
> The idea here is to explore approximate nearest-neighbor search. Researchers 
> have found that an approach based on navigating a graph that partially 
> encodes the nearest neighbor relation at multiple scales can provide 
> accuracy > 95% (as compared to exact nearest neighbor calculations) at a 
> reasonable cost. This issue 

[jira] [Comment Edited] (LUCENE-9004) Approximate nearest vector search

2020-01-20 Thread Xin-Chun Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17019519#comment-17019519
 ] 

Xin-Chun Zhang edited comment on LUCENE-9004 at 1/21/20 3:45 AM:
-

I created a related issue [#LUCENE-9136] that attempts to introduce the 
IVFFlat algorithm to Lucene. IVFFlat is widely used in many fields, from 
computer vision to speech recognition, for its smaller index size and memory 
usage. It also supports GPU parallel computing, making it faster and more 
accurate than HNSW. 

My personal branch is available on GitHub 
[https://github.com/irvingzhang/lucene-solr/tree/jira/LUCENE-9136]. The index 
format of IVFFlat can be seen in the class Lucene90IvfFlatIndexFormat. In my 
implementation, the clustering process is optimized when the number of vectors 
is very large (e.g. > 40,000 per segment): a subset, selected after shuffling, 
is used for training, thereby saving time and memory. The insertion 
performance of IVFFlat is better because nothing extra is executed on 
insertion, whereas HNSW needs to maintain its graph. However, IVFFlat consumes 
more time in flushing because of the k-means clustering.

Even though HNSW uses a cache for graphs while IVFFlat has no cache, my test 
cases show that the query performance of IVFFlat is better than that of HNSW, 
and its recall is pretty high (recall > 97% over a set of 5 random vectors 
with 100 dimensions). My test class for IVFFlat is under the directory 
[https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/|https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/TestKnnIvfFlat.java].
A performance comparison between IVFFlat and HNSW is in the class 
TestKnnGraphAndIvfFlat.

The work is still in its early stage. It currently has some code that is 
similar to HNSW's and could be refactored. Moreover, there must be some bugs 
that need to be fixed, and I would like to hear more comments.


was (Author: irvingzhang):
I created a related issue [#LUCENE-9136] that attempts to introduce the 
IVFFlat algorithm to Lucene. IVFFlat is widely used in many fields, from 
computer vision to speech recognition, for its smaller index size and memory 
usage. It also supports GPU parallel computing, making it faster and more 
accurate than HNSW. 

My personal branch is available on GitHub 
[https://github.com/irvingzhang/lucene-solr/tree/jira/LUCENE-9136]. The format 
of the IVFFlat index can be seen in the class Lucene90IvfFlatIndexFormat. In 
my implementation, the clustering process is optimized when the number of 
vectors is very large (e.g. > 40,000 per segment): a subset, selected after 
shuffling, is used for training, thereby saving time and memory. The insertion 
performance of IVFFlat is better because nothing extra is executed on 
insertion, whereas HNSW needs to maintain its graph. However, IVFFlat consumes 
more time in flushing because of the k-means clustering.

Even though HNSW uses a cache for graphs while IVFFlat has no cache, my test 
cases show that the query performance of IVFFlat is better than that of HNSW, 
and its recall is pretty high (recall > 97% over a set of 5 random vectors 
with 100 dimensions). My test class for IVFFlat is under the directory 
[https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/|https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/TestKnnIvfFlat.java].
A performance comparison between IVFFlat and HNSW is in the class 
TestKnnGraphAndIvfFlat.

The work is still in its early stage. It currently has some code that is 
similar to HNSW's and could be refactored. Moreover, there must be some bugs 
that need to be fixed, and I would like to hear more comments.

> Approximate nearest vector search
> -
>
> Key: LUCENE-9004
> URL: https://issues.apache.org/jira/browse/LUCENE-9004
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Michael Sokolov
>Priority: Major
> Attachments: hnsw_layered_graph.png
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> "Semantic" search based on machine-learned vector "embeddings" representing 
> terms, queries and documents is becoming a must-have feature for a modern 
> search engine. SOLR-12890 is exploring various approaches to this, including 
> providing vector-based scoring functions. This is a spinoff issue from that.
> The idea here is to explore approximate nearest-neighbor search. Researchers 
> have found that an approach based on navigating a graph that partially 
> encodes the nearest neighbor relation at multiple scales can provide 
> accuracy > 95% (as compared to exact nearest neighbor calculations) at a 
> reasonable cost. This issue will explore 

[jira] [Comment Edited] (LUCENE-9136) Introduce IVFFlat to Lucene for ANN similarity search

2020-01-20 Thread Xin-Chun Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17019507#comment-17019507
 ] 

Xin-Chun Zhang edited comment on LUCENE-9136 at 1/21/20 3:44 AM:
-

I worked on this issue for about three to four days, and it now works fine for 
searching.

My personal dev branch is available on GitHub 
[https://github.com/irvingzhang/lucene-solr/tree/jira/LUCENE-9136]. The index 
format of IVFFlat is shown in the class Lucene90IvfFlatIndexFormat. In my 
implementation, the clustering process is optimized when the number of vectors 
is very large (e.g. > 40,000 per segment): a subset, selected after shuffling, 
is used for training, thereby saving time and memory. The insertion 
performance of IVFFlat is better because nothing extra is executed on 
insertion, whereas HNSW needs to maintain its graph. However, IVFFlat consumes 
more time in flushing because of the k-means clustering.

My test cases show that the query performance of IVFFlat is better than that 
of HNSW, even though HNSW uses a cache for graphs while IVFFlat has no cache. 
Its recall is also pretty high (recall > 97% over a set of 5 random vectors 
with 100 dimensions). My test class for IVFFlat is under the directory 
[https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/|https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/TestKnnIvfFlat.java].
A performance comparison between IVFFlat and HNSW is in the class 
TestKnnGraphAndIvfFlat.

The work is still in its early stage. There must be some bugs that need to be 
fixed, and I would like to hear more comments. Everyone is welcome to 
participate in this issue.


was (Author: irvingzhang):
I worked on this issue for about three to four days, and it now works fine for 
searching.

My personal dev branch is available on GitHub 
[https://github.com/irvingzhang/lucene-solr/tree/jira/LUCENE-9136]. The index 
format of IVFFlat is shown in the class Lucene90IvfFlatIndexFormat. In my 
implementation, the clustering process is optimized when the number of vectors 
is very large (e.g. > 40,000 per segment): a subset, selected after shuffling, 
is used for training, thereby saving time and memory. The insertion 
performance of IVFFlat is better because nothing extra is executed on 
insertion, whereas HNSW needs to maintain its graph. However, IVFFlat consumes 
more time in flushing because of the k-means clustering.

My test cases show that the query performance of IVFFlat is better than that 
of HNSW, even though HNSW uses a cache for graphs while IVFFlat has no cache. 
Its recall is also pretty high (recall > 97% over a set of 5 random vectors 
with 100 dimensions). My test class for IVFFlat is under the directory 
[https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/|https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/TestKnnIvfFlat.java].
A performance comparison between IVFFlat and HNSW is in the class 
TestKnnGraphAndIvfFlat.

The work is still in its early stage. There must be some bugs that need to be 
fixed, and I would like to hear more comments. Anyone is welcome to 
participate in further development.

> Introduce IVFFlat to Lucene for ANN similarity search
> -
>
> Key: LUCENE-9136
> URL: https://issues.apache.org/jira/browse/LUCENE-9136
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Xin-Chun Zhang
>Priority: Major
>
> Representation learning (RL) has been an established discipline in the 
> machine learning space for decades, but it has drawn tremendous attention 
> lately with the emergence of deep learning. The central problem of RL is to 
> determine an optimal representation of the input data. By embedding the data 
> into high-dimensional vectors, the vector retrieval (VR) method can then be 
> applied to search for relevant items.
> With the rapid development of RL over the past few years, the technique has 
> been used extensively in industry, from online advertising to computer vision 
> and speech recognition. There exist many open source implementations of VR 
> algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
> choices for potential users. However, the aforementioned implementations are 
> all written in C++ with no plan to support a Java interface, making them hard 
> to integrate into Java projects or to use for those who are not familiar with 
> C/C++ [https://github.com/facebookresearch/faiss/issues/105]. 
> The algorithms for vector retrieval can be roughly classified into four 
> categories,
>  # Tree-based algorithms, such as KD-tree;
>  # Hashing methods, such as LSH (Locality-Sensitive 

[jira] [Comment Edited] (LUCENE-9136) Introduce IVFFlat to Lucene for ANN similarity search

2020-01-20 Thread Xin-Chun Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17019507#comment-17019507
 ] 

Xin-Chun Zhang edited comment on LUCENE-9136 at 1/21/20 3:41 AM:
-

I worked on this issue for about three to four days, and it now works fine for 
searching.

My personal dev branch is available on GitHub 
[https://github.com/irvingzhang/lucene-solr/tree/jira/LUCENE-9136]. The index 
format of IVFFlat is shown in the class Lucene90IvfFlatIndexFormat. In my 
implementation, the clustering process is optimized when the number of vectors 
is very large (e.g. > 40,000 per segment): a subset, selected after shuffling, 
is used for training, thereby saving time and memory. The insertion 
performance of IVFFlat is better because nothing extra is executed on 
insertion, whereas HNSW needs to maintain its graph. However, IVFFlat consumes 
more time in flushing because of the k-means clustering.

My test cases show that the query performance of IVFFlat is better than HNSW, 
even if HNSW uses a cache for graphs while IVFFlat has no cache. And its recall 
is pretty high (recall>97% over a set of 5 random vectors with 100 
dimensions). My test class for IVFFlat is under the directory 
[https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/|https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/TestKnnIvfFlat.java].
 Performance comparison between IVFFlat and HNSW is in the class 
TestKnnGraphAndIvfFlat.

The work is still in its early stage. There must be some bugs that need to be 
fixed and and I would like to hear more comments. Anyone is welcomed to 
participate in further development.


was (Author: irvingzhang):
I worked on this issue for about three to four days, and it now works fine 
for searching.

My personal dev branch is available on GitHub: 
[https://github.com/irvingzhang/lucene-solr/tree/jira/LUCENE-9136]. The index 
format of IVFFlat is defined in the class Lucene90IvfFlatIndexFormat. In my 
implementation, the clustering process is optimized when the number of vectors 
is very large (e.g. > 40,000 per segment): a shuffled subset is selected for 
training, which saves time and memory. The insertion performance of IVFFlat is 
better because it does no extra work on insertion, whereas HNSW needs to 
maintain its graph. However, IVFFlat consumes more time in flushing because of 
the k-means clustering.

My test cases show that the query performance of IVFFlat is better than that 
of HNSW. Its recall is also quite high (recall > 97% over a set of 5 random 
vectors with 100 dimensions). My test class for IVFFlat is under the directory 
[https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/|https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/TestKnnIvfFlat.java].
 A performance comparison between IVFFlat and HNSW is in the class 
TestKnnGraphAndIvfFlat.

The work is still at an early stage. There are surely some bugs that need to 
be fixed, and I would like to hear more comments.

> Introduce IVFFlat to Lucene for ANN similarity search
> -
>
> Key: LUCENE-9136
> URL: https://issues.apache.org/jira/browse/LUCENE-9136
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Xin-Chun Zhang
>Priority: Major
>
> Representation learning (RL) has been an established discipline in the 
> machine learning space for decades, but it has drawn tremendous attention 
> lately with the emergence of deep learning. The central problem of RL is to 
> determine an optimal representation of the input data. Once the data is 
> embedded into a high-dimensional vector, vector retrieval (VR) methods can 
> then be applied to search for relevant items.
> With the rapid development of RL over the past few years, the technique has 
> been used extensively in industry, from online advertising to computer 
> vision and speech recognition. There exist many open source implementations 
> of VR algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing 
> various choices for potential users. However, the aforementioned 
> implementations are all written in C++ with no plan to support a Java 
> interface, making them hard to integrate into Java projects and unsuitable 
> for those who are not familiar with C/C++ 
> [[https://github.com/facebookresearch/faiss/issues/105]]. 
> The algorithms for vector retrieval can be roughly classified into four 
> categories,
>  # Tree-based algorithms, such as the KD-tree;
>  # Hashing methods, such as LSH (Locality-Sensitive Hashing);
>  # Product quantization algorithms, such as IVFFlat;
>  # Graph-based algorithms, such as HNSW, SSG, NSG;
> 

[jira] [Comment Edited] (LUCENE-9004) Approximate nearest vector search

2020-01-20 Thread Xin-Chun Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17019519#comment-17019519
 ] 

Xin-Chun Zhang edited comment on LUCENE-9004 at 1/21/20 3:42 AM:
-

I created a related issue [#LUCENE-9136] that attempts to introduce the 
IVFFlat algorithm to Lucene. IVFFlat is widely used in many fields, from 
computer vision to speech recognition, because of its smaller index and 
memory usage. It also supports GPU parallel computing, which can make it 
faster and more accurate than HNSW.

My personal branch is available on GitHub: 
[https://github.com/irvingzhang/lucene-solr/tree/jira/LUCENE-9136]. The format 
of the IVFFlat index is defined in the class Lucene90IvfFlatIndexFormat. In my 
implementation, the clustering process is optimized when the number of vectors 
is very large (e.g. > 40,000 per segment): a shuffled subset is selected for 
training, which saves time and memory. The insertion performance of IVFFlat is 
better because it does no extra work on insertion, whereas HNSW needs to 
maintain its graph. However, IVFFlat consumes more time in flushing because of 
the k-means clustering.
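
To make the flush cost concrete: each k-means pass scans every training 
vector against every centroid, roughly O(n * k * dim) per iteration, which is 
exactly the work that dominates flushing (and why subsampling helps). A rough 
sketch of one such pass, illustrative only; the trainer in the branch differs 
in detail:

{code:java}
// One Lloyd iteration: assign each vector to its nearest centroid, then
// recompute each centroid as the mean of its assigned vectors.
public class KMeansIterationSketch {
  static void lloydIteration(float[][] vectors, float[][] centroids) {
    int k = centroids.length, dim = centroids[0].length;
    float[][] sums = new float[k][dim];
    int[] counts = new int[k];
    for (float[] v : vectors) {
      int best = 0;
      float bestDist = Float.POSITIVE_INFINITY;
      for (int c = 0; c < k; c++) {
        float dist = 0f;
        for (int d = 0; d < dim; d++) {
          float diff = v[d] - centroids[c][d];
          dist += diff * diff;
        }
        if (dist < bestDist) {
          bestDist = dist;
          best = c;
        }
      }
      counts[best]++;
      for (int d = 0; d < dim; d++) {
        sums[best][d] += v[d];
      }
    }
    for (int c = 0; c < k; c++) {
      if (counts[c] > 0) { // leave empty clusters unchanged
        for (int d = 0; d < dim; d++) {
          centroids[c][d] = sums[c][d] / counts[c];
        }
      }
    }
  }
}
{code}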

Even though HNSW uses a cache for its graphs while IVFFlat has no cache, my 
test cases show that the query performance of IVFFlat is better than that of 
HNSW, and its recall is quite high (recall > 97% over a set of 5 random 
vectors with 100 dimensions). My test class for IVFFlat is under the directory 
[https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/|https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/TestKnnIvfFlat.java].
 A performance comparison between IVFFlat and HNSW is in the class 
TestKnnGraphAndIvfFlat.

The work is still at an early stage. It currently contains some code that is 
similar to HNSW's and could be refactored. Moreover, there are surely some 
bugs that need to be fixed, and I would like to hear more comments.


was (Author: irvingzhang):
I created a related issue [#LUCENE-9136] that attempts to introduce the 
IVFFlat algorithm to Lucene. IVFFlat is widely used in many fields, from 
computer vision to speech recognition, because of its smaller index and 
memory usage. It also supports GPU parallel computing, which can make it 
faster and more accurate than HNSW.

My personal 
[branch|https://github.com/irvingzhang/lucene-solr/tree/jira/LUCENE-9136] is 
available on GitHub. The format of the IVFFlat index is defined in the class 
Lucene90IvfFlatIndexFormat. In my implementation, the clustering process is 
optimized when the number of vectors is very large (e.g. > 40,000 per 
segment): a shuffled subset is selected for training, which saves time and 
memory. The insertion performance of IVFFlat is better because it does no 
extra work on insertion, whereas HNSW needs to maintain its graph. However, 
IVFFlat consumes more time in flushing because of the k-means clustering.

My test cases show that the query performance of IVFFlat is better than that 
of HNSW, and its recall is quite high (recall > 97% over a set of 5 random 
vectors with 100 dimensions). My test class for IVFFlat is under the directory 
[https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/|https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/TestKnnIvfFlat.java].
 A performance comparison between IVFFlat and HNSW is in the class 
TestKnnGraphAndIvfFlat.

The work is still at an early stage. It currently contains some code that is 
similar to HNSW's and could be refactored. Moreover, there are surely some 
bugs that need to be fixed, and I would like to hear more comments.

> Approximate nearest vector search
> -
>
> Key: LUCENE-9004
> URL: https://issues.apache.org/jira/browse/LUCENE-9004
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Michael Sokolov
>Priority: Major
> Attachments: hnsw_layered_graph.png
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> "Semantic" search based on machine-learned vector "embeddings" representing 
> terms, queries and documents is becoming a must-have feature for a modern 
> search engine. SOLR-12890 is exploring various approaches to this, including 
> providing vector-based scoring functions. This is a spinoff issue from that.
> The idea here is to explore approximate nearest-neighbor search. Researchers 
> have found an approach based on navigating a graph that partially encodes the 
> nearest neighbor relation at multiple scales can provide accuracy > 95% (as 
> compared to exact nearest neighbor calculations) at a reasonable cost. This 
> issue will explore implementing HNSW (hierarchical navigable small-world) 
> graphs for the 

[jira] [Comment Edited] (LUCENE-9136) Introduce IVFFlat to Lucene for ANN similarity search

2020-01-20 Thread Xin-Chun Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17019507#comment-17019507
 ] 

Xin-Chun Zhang edited comment on LUCENE-9136 at 1/21/20 3:38 AM:
-

I worked on this issue for about three to four days, and it now works fine 
for searching.

My personal dev branch is available on GitHub: 
[https://github.com/irvingzhang/lucene-solr/tree/jira/LUCENE-9136]. The index 
format of IVFFlat is defined in the class Lucene90IvfFlatIndexFormat. In my 
implementation, the clustering process is optimized when the number of vectors 
is very large (e.g. > 40,000 per segment): a shuffled subset is selected for 
training, which saves time and memory. The insertion performance of IVFFlat is 
better because it does no extra work on insertion, whereas HNSW needs to 
maintain its graph. However, IVFFlat consumes more time in flushing because of 
the k-means clustering.

My test cases show that the query performance of IVFFlat is better than that 
of HNSW. Its recall is also quite high (recall > 97% over a set of 5 random 
vectors with 100 dimensions). My test class for IVFFlat is under the directory 
[https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/|https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/TestKnnIvfFlat.java].
 A performance comparison between IVFFlat and HNSW is in the class 
TestKnnGraphAndIvfFlat.

The work is still at an early stage. There are surely some bugs that need to 
be fixed, and I would like to hear more comments.


was (Author: irvingzhang):
I worked on this issue for about three to four days, and it now works fine 
for searching. In my implementation, the clustering process is optimized when 
the number of vectors is large (e.g. > 40,000 per segment): a shuffled subset 
is selected for training rather than the whole set of vectors, reducing time 
and memory. The insertion performance of IVFFlat is better because it does no 
extra work on insertion, whereas HNSW needs to maintain the graph. However, 
IVFFlat consumes more time in flushing because of the k-means clustering. The 
designed format of the IVFFlat index is presented in the class 
Lucene90IvfFlatIndexFormat. My test cases show that the query performance of 
IVFFlat is slightly better than that of HNSW. My personal branch is here: 
[https://github.com/irvingzhang/lucene-solr/tree/jira/LUCENE-9136]. The test 
class for IVFFlat is under the directory 
[https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/|https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/TestKnnIvfFlat.java].
 A performance comparison between IVFFlat and HNSW is in the class 
TestKnnGraphAndIvfFlat.

The work is still at an early stage. It currently contains some code that is 
similar to HNSW's and could be refactored. Moreover, there are surely some 
bugs that need to be fixed, and I would like to hear more comments.

> Introduce IVFFlat to Lucene for ANN similarity search
> -
>
> Key: LUCENE-9136
> URL: https://issues.apache.org/jira/browse/LUCENE-9136
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Xin-Chun Zhang
>Priority: Major
>
> Representation learning (RL) has been an established discipline in the 
> machine learning space for decades, but it has drawn tremendous attention 
> lately with the emergence of deep learning. The central problem of RL is to 
> determine an optimal representation of the input data. Once the data is 
> embedded into a high-dimensional vector, vector retrieval (VR) methods can 
> then be applied to search for relevant items.
> With the rapid development of RL over the past few years, the technique has 
> been used extensively in industry, from online advertising to computer 
> vision and speech recognition. There exist many open source implementations 
> of VR algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing 
> various choices for potential users. However, the aforementioned 
> implementations are all written in C++ with no plan to support a Java 
> interface, making them hard to integrate into Java projects and unsuitable 
> for those who are not familiar with C/C++ 
> [[https://github.com/facebookresearch/faiss/issues/105]]. 
> The algorithms for vector retrieval can be roughly classified into four 
> categories,
>  # Tree-based algorithms, such as the KD-tree;
>  # Hashing methods, such as LSH (Locality-Sensitive Hashing);
>  # Product quantization algorithms, such as IVFFlat;
>  # Graph-based algorithms, such as HNSW, SSG, NSG;
> where IVFFlat and HNSW are the most popular ones among all the VR algorithms.
> Recently, the implementation 

[jira] [Comment Edited] (LUCENE-9004) Approximate nearest vector search

2020-01-20 Thread Xin-Chun Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17019519#comment-17019519
 ] 

Xin-Chun Zhang edited comment on LUCENE-9004 at 1/21/20 3:33 AM:
-

I created a related issue [#LUCENE-9136] that attempts to introduce the 
IVFFlat algorithm to Lucene. IVFFlat is widely used in many fields, from 
computer vision to speech recognition, because of its smaller index and 
memory usage. It also supports GPU parallel computing, which can make it 
faster and more accurate than HNSW.

My personal 
[branch|https://github.com/irvingzhang/lucene-solr/tree/jira/LUCENE-9136] is 
available on GitHub. The format of the IVFFlat index is defined in the class 
Lucene90IvfFlatIndexFormat. In my implementation, the clustering process is 
optimized when the number of vectors is very large (e.g. > 40,000 per 
segment): a shuffled subset is selected for training, which saves time and 
memory. The insertion performance of IVFFlat is better because it does no 
extra work on insertion, whereas HNSW needs to maintain its graph. However, 
IVFFlat consumes more time in flushing because of the k-means clustering.

My test cases show that the query performance of IVFFlat is better than that 
of HNSW, and its recall is quite high (recall > 97% over a set of 5 random 
vectors with 100 dimensions). My test class for IVFFlat is under the directory 
[https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/|https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/TestKnnIvfFlat.java].
 A performance comparison between IVFFlat and HNSW is in the class 
TestKnnGraphAndIvfFlat.

The work is still at an early stage. It currently contains some code that is 
similar to HNSW's and could be refactored. Moreover, there are surely some 
bugs that need to be fixed, and I would like to hear more comments.


was (Author: irvingzhang):
I created a related issue [#LUCENE-9136] that attempts to introduce the 
IVFFlat algorithm to Lucene. IVFFlat is widely used in many fields, from 
computer vision to speech recognition, because of its smaller index and 
memory usage. It also supports GPU parallel computing, which can make it 
faster and more accurate than HNSW.

The format of the IVFFlat index can be seen in the class 
Lucene90IvfFlatIndexFormat. In my implementation, the clustering process is 
optimized when the number of vectors is very large (e.g. > 40,000 per 
segment): a shuffled subset is selected for training, which saves time and 
memory. The insertion performance of IVFFlat is better because it does no 
extra work on insertion, whereas HNSW needs to maintain its graph. However, 
IVFFlat consumes more time in flushing because of the k-means clustering.

My test cases show that the query performance of IVFFlat is better than that 
of HNSW, and its recall is quite high (recall > 97% over a set of 5 random 
vectors with 100 dimensions). My test class for IVFFlat is under the directory 
[https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/|https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/TestKnnIvfFlat.java].
 A performance comparison between IVFFlat and HNSW is in the class 
TestKnnGraphAndIvfFlat.

The work is still at an early stage. It currently contains some code that is 
similar to HNSW's and could be refactored. Moreover, there are surely some 
bugs that need to be fixed, and I would like to hear more comments.

> Approximate nearest vector search
> -
>
> Key: LUCENE-9004
> URL: https://issues.apache.org/jira/browse/LUCENE-9004
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Michael Sokolov
>Priority: Major
> Attachments: hnsw_layered_graph.png
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> "Semantic" search based on machine-learned vector "embeddings" representing 
> terms, queries and documents is becoming a must-have feature for a modern 
> search engine. SOLR-12890 is exploring various approaches to this, including 
> providing vector-based scoring functions. This is a spinoff issue from that.
> The idea here is to explore approximate nearest-neighbor search. Researchers 
> have found an approach based on navigating a graph that partially encodes the 
> nearest neighbor relation at multiple scales can provide accuracy > 95% (as 
> compared to exact nearest neighbor calculations) at a reasonable cost. This 
> issue will explore implementing HNSW (hierarchical navigable small-world) 
> graphs for the purpose of approximate nearest vector search (often referred 
> to as KNN or k-nearest-neighbor search).
> At a high level the way this algorithm works is this. First assume you have a 

[jira] [Comment Edited] (LUCENE-9004) Approximate nearest vector search

2020-01-20 Thread Xin-Chun Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17019519#comment-17019519
 ] 

Xin-Chun Zhang edited comment on LUCENE-9004 at 1/21/20 3:30 AM:
-

I created a related issue [#LUCENE-9136] that attempts to introduce the 
IVFFlat algorithm to Lucene. IVFFlat is widely used in many fields, from 
computer vision to speech recognition, because of its smaller index and 
memory usage. It also supports GPU parallel computing, which can make it 
faster and more accurate than HNSW.

The format of the IVFFlat index can be seen in the class 
Lucene90IvfFlatIndexFormat. In my implementation, the clustering process is 
optimized when the number of vectors is very large (e.g. > 40,000 per 
segment): a shuffled subset is selected for training, which saves time and 
memory. The insertion performance of IVFFlat is better because it does no 
extra work on insertion, whereas HNSW needs to maintain its graph. However, 
IVFFlat consumes more time in flushing because of the k-means clustering.

My test cases show that the query performance of IVFFlat is better than that 
of HNSW, and its recall is quite high (recall > 97% over a set of 5 random 
vectors with 100 dimensions). My test class for IVFFlat is under the directory 
[https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/|https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/TestKnnIvfFlat.java].
 A performance comparison between IVFFlat and HNSW is in the class 
TestKnnGraphAndIvfFlat.

The work is still at an early stage. It currently contains some code that is 
similar to HNSW's and could be refactored. Moreover, there are surely some 
bugs that need to be fixed, and I would like to hear more comments.


was (Author: irvingzhang):
I created a related issue [#LUCENE-9136] that attempts to introduce the 
IVFFlat algorithm to Lucene. IVFFlat is widely used in many fields, from 
computer vision to speech recognition, because of its smaller index and 
memory usage. It also supports GPU parallel computing, which can make it 
faster and more accurate than HNSW.

The format of the IVFFlat index can be seen in the class 
Lucene90IvfFlatIndexFormat. In my implementation, the clustering process is 
optimized when the number of vectors is very large (e.g. > 40,000 per 
segment): a shuffled subset is selected for training, which saves time and 
memory. The insertion performance of IVFFlat is better because it does no 
extra work on insertion, whereas HNSW needs to maintain its graph. However, 
IVFFlat consumes more time in flushing because of the k-means clustering. My 
test cases show that the query performance of IVFFlat is slightly better than 
that of HNSW. My test class for IVFFlat is under the directory 
[https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/|https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/TestKnnIvfFlat.java].
 A performance comparison between IVFFlat and HNSW is in the class 
TestKnnGraphAndIvfFlat.

The work is still at an early stage. It currently contains some code that is 
similar to HNSW's and could be refactored. Moreover, there are surely some 
bugs that need to be fixed, and I would like to hear more comments.

> Approximate nearest vector search
> -
>
> Key: LUCENE-9004
> URL: https://issues.apache.org/jira/browse/LUCENE-9004
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Michael Sokolov
>Priority: Major
> Attachments: hnsw_layered_graph.png
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> "Semantic" search based on machine-learned vector "embeddings" representing 
> terms, queries and documents is becoming a must-have feature for a modern 
> search engine. SOLR-12890 is exploring various approaches to this, including 
> providing vector-based scoring functions. This is a spinoff issue from that.
> The idea here is to explore approximate nearest-neighbor search. Researchers 
> have found an approach based on navigating a graph that partially encodes the 
> nearest neighbor relation at multiple scales can provide accuracy > 95% (as 
> compared to exact nearest neighbor calculations) at a reasonable cost. This 
> issue will explore implementing HNSW (hierarchical navigable small-world) 
> graphs for the purpose of approximate nearest vector search (often referred 
> to as KNN or k-nearest-neighbor search).
> At a high level the way this algorithm works is this. First assume you have a 
> graph that has a partial encoding of the nearest neighbor relation, with some 
> short and some long-distance links. If this graph is built in the right way 
> (has the hierarchical navigable small world 

[jira] [Comment Edited] (LUCENE-9136) Introduce IVFFlat to Lucene for ANN similarity search

2020-01-20 Thread Xin-Chun Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17019507#comment-17019507
 ] 

Xin-Chun Zhang edited comment on LUCENE-9136 at 1/20/20 2:34 PM:
-

I worked on this issue for about three to four days, and it now works fine 
for searching. In my implementation, the clustering process is optimized when 
the number of vectors is large (e.g. > 40,000 per segment): a shuffled subset 
is selected for training rather than the whole set of vectors, reducing time 
and memory. The insertion performance of IVFFlat is better because it does no 
extra work on insertion, whereas HNSW needs to maintain the graph. However, 
IVFFlat consumes more time in flushing because of the k-means clustering. The 
designed format of the IVFFlat index is presented in the class 
Lucene90IvfFlatIndexFormat. My test cases show that the query performance of 
IVFFlat is slightly better than that of HNSW. My personal branch is here: 
[https://github.com/irvingzhang/lucene-solr/tree/jira/LUCENE-9136]. The test 
class for IVFFlat is under the directory 
[https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/|https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/TestKnnIvfFlat.java].
 A performance comparison between IVFFlat and HNSW is in the class 
TestKnnGraphAndIvfFlat.

The work is still at an early stage. It currently contains some code that is 
similar to HNSW's and could be refactored. Moreover, there are surely some 
bugs that need to be fixed, and I would like to hear more comments.


was (Author: irvingzhang):
I worked on this issue for about three to four days, and it now works fine 
for searching. My [personal 
branch|https://github.com/irvingzhang/lucene-solr/tree/jira/LUCENE-9136] is 
here. The clustering process is optimized when the number of vectors is large 
(e.g. > 40,000 per segment). The query performance of IVFFlat seems slightly 
better than that of HNSW. The insert performance of IVFFlat is also better 
than HNSW's, because it does no extra work on insertion while HNSW needs to 
maintain the graph. However, IVFFlat consumes more time in flushing due to 
the k-means clustering. The designed format of the IVFFlat index is presented 
in the class Lucene90IvfFlatIndexFormat. My test class for IVFFlat is under 
the directory 
[https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/|https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/TestKnnIvfFlat.java].
 A performance comparison between IVFFlat and HNSW is in the class 
TestKnnGraphAndIvfFlat.

The work is still at an early stage. The performance could be further 
optimized, and it currently contains some code that is similar to HNSW's and 
could be refactored. Moreover, there are surely some bugs that need to be 
fixed, and I would like to hear more comments.

> Introduce IVFFlat to Lucene for ANN similarity search
> -
>
> Key: LUCENE-9136
> URL: https://issues.apache.org/jira/browse/LUCENE-9136
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Xin-Chun Zhang
>Priority: Major
>
> Representation learning (RL) has been an established discipline in the 
> machine learning space for decades, but it has drawn tremendous attention 
> lately with the emergence of deep learning. The central problem of RL is to 
> determine an optimal representation of the input data. Once the data is 
> embedded into a high-dimensional vector, vector retrieval (VR) methods can 
> then be applied to search for relevant items.
> With the rapid development of RL over the past few years, the technique has 
> been used extensively in industry, from online advertising to computer 
> vision and speech recognition. There exist many open source implementations 
> of VR algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing 
> various choices for potential users. However, the aforementioned 
> implementations are all written in C++ with no plan to support a Java 
> interface, making them hard to integrate into Java projects and unsuitable 
> for those who are not familiar with C/C++ 
> [[https://github.com/facebookresearch/faiss/issues/105]]. 
> The algorithms for vector retrieval can be roughly classified into four 
> categories,
>  # Tree-based algorithms, such as the KD-tree;
>  # Hashing methods, such as LSH (Locality-Sensitive Hashing);
>  # Product quantization algorithms, such as IVFFlat;
>  # Graph-based algorithms, such as HNSW, SSG, NSG;
> where IVFFlat and HNSW are the most popular ones among all the VR algorithms.
> Recently, the implementation of HNSW (Hierarchical Navigable Small World, 
> LUCENE-9004) for Lucene has made great progress. The

[jira] [Commented] (LUCENE-9004) Approximate nearest vector search

2020-01-20 Thread Xin-Chun Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17019519#comment-17019519
 ] 

Xin-Chun Zhang commented on LUCENE-9004:


I created a related issue [#LUCENE-9136] that attempts to introduce the 
IVFFlat algorithm to Lucene. IVFFlat is widely used in many fields, from 
computer vision to speech recognition, because of its smaller index and 
memory usage. It also supports GPU parallel computing, which can make it 
faster and more accurate than HNSW.

The format of the IVFFlat index can be seen in the class 
Lucene90IvfFlatIndexFormat. In my implementation, the clustering process is 
optimized when the number of vectors is very large (e.g. > 40,000 per 
segment): a shuffled subset is selected for training, which saves time and 
memory. The insertion performance of IVFFlat is better because it does no 
extra work on insertion, whereas HNSW needs to maintain its graph. However, 
IVFFlat consumes more time in flushing because of the k-means clustering. My 
test cases show that the query performance of IVFFlat is slightly better than 
that of HNSW. My test class for IVFFlat is under the directory 
[https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/|https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/TestKnnIvfFlat.java].
 A performance comparison between IVFFlat and HNSW is in the class 
TestKnnGraphAndIvfFlat.

The work is still at an early stage. It currently contains some code that is 
similar to HNSW's and could be refactored. Moreover, there are surely some 
bugs that need to be fixed, and I would like to hear more comments.

> Approximate nearest vector search
> -
>
> Key: LUCENE-9004
> URL: https://issues.apache.org/jira/browse/LUCENE-9004
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Michael Sokolov
>Priority: Major
> Attachments: hnsw_layered_graph.png
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> "Semantic" search based on machine-learned vector "embeddings" representing 
> terms, queries and documents is becoming a must-have feature for a modern 
> search engine. SOLR-12890 is exploring various approaches to this, including 
> providing vector-based scoring functions. This is a spinoff issue from that.
> The idea here is to explore approximate nearest-neighbor search. Researchers 
> have found an approach based on navigating a graph that partially encodes the 
> nearest neighbor relation at multiple scales can provide accuracy > 95% (as 
> compared to exact nearest neighbor calculations) at a reasonable cost. This 
> issue will explore implementing HNSW (hierarchical navigable small-world) 
> graphs for the purpose of approximate nearest vector search (often referred 
> to as KNN or k-nearest-neighbor search).
> At a high level the way this algorithm works is this. First assume you have a 
> graph that has a partial encoding of the nearest neighbor relation, with some 
> short and some long-distance links. If this graph is built in the right way 
> (has the hierarchical navigable small world property), then you can 
> efficiently traverse it to find nearest neighbors (approximately) in log N 
> time where N is the number of nodes in the graph. I believe this idea was 
> pioneered in  [1]. The great insight in that paper is that if you use the 
> graph search algorithm to find the K nearest neighbors of a new document 
> while indexing, and then link those neighbors (undirectedly, ie both ways) to 
> the new document, then the graph that emerges will have the desired 
> properties.
> The implementation I propose for Lucene is as follows. We need two new data 
> structures to encode the vectors and the graph. We can encode vectors using a 
> light wrapper around {{BinaryDocValues}} (we also want to encode the vector 
> dimension and have efficient conversion from bytes to floats). For the graph 
> we can use {{SortedNumericDocValues}} where the values we encode are the 
> docids of the related documents. Encoding the interdocument relations using 
> docids directly will make it relatively fast to traverse the graph since we 
> won't need to lookup through an id-field indirection. This choice limits us 
> to building a graph-per-segment since it would be impractical to maintain a 
> global graph for the whole index in the face of segment merges. However 
> graph-per-segment is very natural at search time - we can traverse each 
> segments' graph independently and merge results as we do today for term-based 
> search.
> At index time, however, merging graphs is somewhat challenging. While 
> indexing we build a graph incrementally, performing searches to construct 
> links among neighbors. When merging segments we must construct a new graph 
> 
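
As a side note for readers comparing the two approaches: the graph navigation 
quoted above boils down to a best-first descent over neighbor links. A 
single-layer sketch of that traversal (illustrative names only, not the 
LUCENE-9004 code):

{code:java}
import java.util.Map;

// Greedy descent: from an entry point, repeatedly move to whichever
// neighbor is closer to the query, stopping at a local minimum.
public class GreedyGraphSearchSketch {
  /** neighbors.get(doc) lists the docids linked to doc. */
  static int nearest(float[] query, float[][] vectors,
                     Map<Integer, int[]> neighbors, int entryPoint) {
    int current = entryPoint;
    float currentDist = squaredDistance(query, vectors[current]);
    boolean improved = true;
    while (improved) {
      improved = false;
      for (int next : neighbors.getOrDefault(current, new int[0])) {
        float d = squaredDistance(query, vectors[next]);
        if (d < currentDist) {
          current = next;
          currentDist = d;
          improved = true;
        }
      }
    }
    return current;
  }

  static float squaredDistance(float[] a, float[] b) {
    float sum = 0f;
    for (int i = 0; i < a.length; i++) {
      float diff = a[i] - b[i];
      sum += diff * diff;
    }
    return sum;
  }
}
{code}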

[jira] [Commented] (LUCENE-9136) Introduce IVFFlat to Lucene for ANN similarity search

2020-01-20 Thread Xin-Chun Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17019507#comment-17019507
 ] 

Xin-Chun Zhang commented on LUCENE-9136:


I worked on this issue for about three to four days, and it now works fine 
for searching. My [personal 
branch|https://github.com/irvingzhang/lucene-solr/tree/jira/LUCENE-9136] is 
here. The clustering process is optimized when the number of vectors is large 
(e.g. > 40,000 per segment). The query performance of IVFFlat seems slightly 
better than that of HNSW. The insert performance of IVFFlat is also better 
than HNSW's, because it does no extra work on insertion while HNSW needs to 
maintain the graph. However, IVFFlat consumes more time in flushing due to 
the k-means clustering. The designed format of the IVFFlat index is presented 
in the class Lucene90IvfFlatIndexFormat. My test class for IVFFlat is under 
the directory 
[https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/|https://github.com/irvingzhang/lucene-solr/blob/jira/LUCENE-9136/lucene/core/src/test/org/apache/lucene/util/ivfflat/TestKnnIvfFlat.java].
 A performance comparison between IVFFlat and HNSW is in the class 
TestKnnGraphAndIvfFlat.

The work is still at an early stage. The performance could be further 
optimized, and it currently contains some code that is similar to HNSW's and 
could be refactored. Moreover, there are surely some bugs that need to be 
fixed, and I would like to hear more comments.

> Introduce IVFFlat to Lucene for ANN similarity search
> -
>
> Key: LUCENE-9136
> URL: https://issues.apache.org/jira/browse/LUCENE-9136
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Xin-Chun Zhang
>Priority: Major
>
> Representation learning (RL) has been an established discipline in the 
> machine learning space for decades, but it has drawn tremendous attention 
> lately with the emergence of deep learning. The central problem of RL is to 
> determine an optimal representation of the input data. Once the data is 
> embedded into a high-dimensional vector, vector retrieval (VR) methods can 
> then be applied to search for relevant items.
> With the rapid development of RL over the past few years, the technique has 
> been used extensively in industry, from online advertising to computer 
> vision and speech recognition. There exist many open source implementations 
> of VR algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing 
> various choices for potential users. However, the aforementioned 
> implementations are all written in C++ with no plan to support a Java 
> interface, making them hard to integrate into Java projects and unsuitable 
> for those who are not familiar with C/C++ 
> [[https://github.com/facebookresearch/faiss/issues/105]]. 
> The algorithms for vector retrieval can be roughly classified into four 
> categories,
>  # Tree-based algorithms, such as the KD-tree;
>  # Hashing methods, such as LSH (Locality-Sensitive Hashing);
>  # Product quantization algorithms, such as IVFFlat;
>  # Graph-based algorithms, such as HNSW, SSG, NSG;
> where IVFFlat and HNSW are the most popular ones among all the VR algorithms.
> Recently, the implementation of HNSW (Hierarchical Navigable Small World, 
> LUCENE-9004) for Lucene has made great progress. The issue has drawn the 
> attention of those who are interested in Lucene or hope to use HNSW with 
> Solr/Lucene. 
> As an alternative for solving ANN similarity search problems, IVFFlat is 
> also very popular with many users and supporters. Compared with HNSW, 
> IVFFlat has a smaller index size but requires k-means clustering, while 
> HNSW is faster in query (no training required) but requires extra storage 
> for saving graphs [indexing 1M 
> vectors|https://github.com/facebookresearch/faiss/wiki/Indexing-1M-vectors]. 
> Another advantage is that IVFFlat can be faster and more accurate when GPU 
> parallel computing is enabled (currently not supported in Java). Both 
> algorithms have their merits and demerits. Since HNSW is now under 
> development, it may be better to provide both implementations (HNSW && 
> IVFFlat) for potential users who are faced with very different scenarios 
> and want more choices.






[jira] [Updated] (LUCENE-9136) Introduce IVFFlat to Lucene for ANN similarity search

2020-01-19 Thread Xin-Chun Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xin-Chun Zhang updated LUCENE-9136:
---
Description: 
Representation learning (RL) has been an established discipline in the 
machine learning space for decades, but it has drawn tremendous attention 
lately with the emergence of deep learning. The central problem of RL is to 
determine an optimal representation of the input data. Once the data is 
embedded into a high-dimensional vector, vector retrieval (VR) methods can 
then be applied to search for relevant items.

With the rapid development of RL over the past few years, the technique has 
been used extensively in industry, from online advertising to computer vision 
and speech recognition. There exist many open source implementations of VR 
algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
choices for potential users. However, the aforementioned implementations are 
all written in C++ with no plan to support a Java interface, making them hard 
to integrate into Java projects and unsuitable for those who are not familiar 
with C/C++ [[https://github.com/facebookresearch/faiss/issues/105]]. 

The algorithms for vector retrieval can be roughly classified into four 
categories,
 # Tree-based algorithms, such as the KD-tree;
 # Hashing methods, such as LSH (Locality-Sensitive Hashing);
 # Product quantization algorithms, such as IVFFlat;
 # Graph-based algorithms, such as HNSW, SSG, NSG;

where IVFFlat and HNSW are the most popular ones among all the VR algorithms.

Recently, the implementation of HNSW (Hierarchical Navigable Small World, 
LUCENE-9004) for Lucene has made great progress. The issue has drawn the 
attention of those who are interested in Lucene or hope to use HNSW with 
Solr/Lucene. 

As an alternative for solving ANN similarity search problems, IVFFlat is also 
very popular with many users and supporters. Compared with HNSW, IVFFlat has 
a smaller index size but requires k-means clustering, while HNSW is faster in 
query (no training required) but requires extra storage for saving graphs 
[indexing 1M 
vectors|https://github.com/facebookresearch/faiss/wiki/Indexing-1M-vectors]. 
Another advantage is that IVFFlat can be faster and more accurate when GPU 
parallel computing is enabled (currently not supported in Java). Both 
algorithms have their merits and demerits. Since HNSW is now under 
development, it may be better to provide both implementations (HNSW && 
IVFFlat) for potential users who are faced with very different scenarios and 
want more choices.

  was:
Representation learning (RL) has been an established discipline in the 
machine learning space for decades, but it has drawn tremendous attention 
lately with the emergence of deep learning. The central problem of RL is to 
determine an optimal representation of the input data. Once the data is 
embedded into a high-dimensional vector, vector retrieval (VR) methods can 
then be applied to search for relevant items.

With the rapid development of RL over the past few years, the technique has 
been used extensively in industry, from online advertising to computer vision 
and speech recognition. There exist many open source implementations of VR 
algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
choices for potential users. However, the aforementioned implementations are 
all written in C++ with no plan to support a Java interface, making them hard 
to integrate into Java projects and unsuitable for those who are not familiar 
with C/C++ [[https://github.com/facebookresearch/faiss/issues/105]]. 

The algorithms for vector retrieval can be roughly classified into four 
categories,
 # Tree-based algorithms, such as the KD-tree;
 # Hashing methods, such as LSH (Locality-Sensitive Hashing);
 # Product quantization algorithms, such as IVFFlat;
 # Graph-based algorithms, such as HNSW, SSG, NSG;

where IVFFlat and HNSW are the most popular ones among all the VR algorithms.

Recently, the implementation of HNSW (Hierarchical Navigable Small World, 
LUCENE-9004) for Lucene has made great progress. The issue has drawn the 
attention of those who are interested in Lucene or hope to use HNSW with 
Solr/Lucene. 

As an alternative for solving ANN similarity search problems, IVFFlat is also 
very popular with many users and supporters. Compared with HNSW, IVFFlat has 
a smaller index size but requires k-means clustering, while HNSW is faster in 
query (no training required) but requires extra storage for saving graphs 
[indexing 1M 
vectors|https://github.com/facebookresearch/faiss/wiki/Indexing-1M-vectors]. 
Another advantage is that IVFFlat can be faster and more accurate when GPU 
parallel computing is enabled (currently not supported in Java). Both 
algorithms have their merits and demerits. Since HNSW is now under 
development, it may be better to provide both implementations (HNSW && 
IVFFlat) for potential users who are faced with very different scenarios and 

[jira] [Updated] (LUCENE-9136) Introduce IVFFlat to Lucene for ANN similarity search

2020-01-16 Thread Xin-Chun Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xin-Chun Zhang updated LUCENE-9136:
---
Description: 
Representation learning (RL) has been an established discipline in the 
machine learning space for decades, but it has drawn tremendous attention 
lately with the emergence of deep learning. The central problem of RL is to 
determine an optimal representation of the input data. Once the data is 
embedded into a high-dimensional vector, vector retrieval (VR) methods can 
then be applied to search for relevant items.

With the rapid development of RL over the past few years, the technique has 
been used extensively in industry, from online advertising to computer vision 
and speech recognition. There exist many open source implementations of VR 
algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
choices for potential users. However, the aforementioned implementations are 
all written in C++ with no plan to support a Java interface, making them hard 
to integrate into Java projects and unsuitable for those who are not familiar 
with C/C++ [[https://github.com/facebookresearch/faiss/issues/105]]. 

The algorithms for vector retrieval can be roughly classified into four 
categories,
 # Tree-based algorithms, such as the KD-tree;
 # Hashing methods, such as LSH (Locality-Sensitive Hashing);
 # Product quantization algorithms, such as IVFFlat;
 # Graph-based algorithms, such as HNSW, SSG, NSG;

where IVFFlat and HNSW are the most popular ones among all the VR algorithms.

Recently, the implementation of HNSW (Hierarchical Navigable Small World, 
LUCENE-9004) for Lucene has made great progress. The issue has drawn the 
attention of those who are interested in Lucene or hope to use HNSW with 
Solr/Lucene. 

As an alternative for solving ANN similarity search problems, IVFFlat is also 
very popular with many users and supporters. Compared with HNSW, IVFFlat has 
a smaller index size but requires k-means clustering, while HNSW is faster in 
query (no training required) but requires extra storage for saving graphs 
[indexing 1M 
vectors|https://github.com/facebookresearch/faiss/wiki/Indexing-1M-vectors]. 
Another advantage is that IVFFlat can be faster and more accurate when GPU 
parallel computing is enabled (currently not supported in Java). Both 
algorithms have their merits and demerits. Since HNSW is now under 
development, it may be better to provide both implementations (HNSW && 
IVFFlat) for potential users who are faced with very different scenarios and 
want more choices.

I will soon commit my personal implementations.

  was:
Representation learning (RL) has been an established discipline in the 
machine learning space for decades, but it has drawn tremendous attention 
lately with the emergence of deep learning. The central problem of RL is to 
determine an optimal representation of the input data. Once the data is 
embedded into a high-dimensional vector, vector retrieval (VR) methods can 
then be applied to search for relevant items.

With the rapid development of RL over the past few years, the technique has 
been used extensively in industry, from online advertising to computer vision 
and speech recognition. There exist many open source implementations of VR 
algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
choices for potential users. However, the aforementioned implementations are 
all written in C++ with no plan to support a Java interface, making them hard 
to integrate into Java projects and unsuitable for those who are not familiar 
with C/C++ [[https://github.com/facebookresearch/faiss/issues/105]]. 

The algorithms for vector retrieval can be roughly classified into four 
categories,
 # Tree-based algorithms, such as the KD-tree;
 # Hashing methods, such as LSH (Locality-Sensitive Hashing);
 # Product quantization algorithms, such as IVFFlat;
 # Graph-based algorithms, such as HNSW, SSG, NSG;

where IVFFlat and HNSW are the most popular ones among all the VR algorithms.

Recently, the implementation of ANN algorithms for Lucene, such as HNSW 
(Hierarchical Navigable Small World, LUCENE-9004), has made great progress 
and has drawn the attention of those who are interested in Lucene and hope to 
use HNSW with Solr/Lucene. 

As another alternative for ANN similarity search problems, IVFFlat is also 
very popular with many users and supporters. Compared with HNSW, IVFFlat has 
a smaller index size but requires k-means clustering, while HNSW is faster in 
query (no training required) but requires extra storage for graphs [indexing 
1M vectors|https://github.com/facebookresearch/faiss/wiki/Indexing-1M-vectors]. 
Both of them have their merits and demerits. Another advantage is that 
IVFFlat can be faster and more accurate when GPU parallel computing is 
enabled (currently not supported in Java). Since HNSW is now under 
development, it may be better to provide both algorithm implementations for 
potential users who 

[jira] [Updated] (LUCENE-9136) Introduce IVFFlat to Lucene for ANN similarity search

2020-01-16 Thread Xin-Chun Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xin-Chun Zhang updated LUCENE-9136:
---
Description: 
Representation learning (RL) has been an established discipline in the 
machine learning space for decades, but it has drawn tremendous attention 
lately with the emergence of deep learning. The central problem of RL is to 
determine an optimal representation of the input data. Once the data is 
embedded into a high-dimensional vector, vector retrieval (VR) methods can 
then be applied to search for relevant items.

With the rapid development of RL over the past few years, the technique has 
been used extensively in industry, from online advertising to computer vision 
and speech recognition. There exist many open source implementations of VR 
algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
choices for potential users. However, the aforementioned implementations are 
all written in C++ with no plan to support a Java interface, making them hard 
to integrate into Java projects and unsuitable for those who are not familiar 
with C/C++ [[https://github.com/facebookresearch/faiss/issues/105]]. 

The algorithms for vector retrieval can be roughly classified into four 
categories,
 # Tree-based algorithms, such as the KD-tree;
 # Hashing methods, such as LSH (Locality-Sensitive Hashing);
 # Product quantization algorithms, such as IVFFlat;
 # Graph-based algorithms, such as HNSW, SSG, NSG;

where IVFFlat and HNSW are the most popular ones among all the VR algorithms.

Recently, the implementation of ANN algorithms for Lucene, such as HNSW 
(Hierarchical Navigable Small World, LUCENE-9004), has made great progress 
and has drawn the attention of those who are interested in Lucene and hope to 
use HNSW with Solr/Lucene. 

As another alternative for ANN similarity search problems, IVFFlat is also 
very popular with many users and supporters. Compared with HNSW, IVFFlat has 
a smaller index size but requires k-means clustering, while HNSW is faster in 
query (no training required) but requires extra storage for graphs [indexing 
1M vectors|https://github.com/facebookresearch/faiss/wiki/Indexing-1M-vectors]. 
Both of them have their merits and demerits. Another advantage is that 
IVFFlat can be faster and more accurate when GPU parallel computing is 
enabled (currently not supported in Java). Since HNSW is now under 
development, it may be better to provide both algorithm implementations for 
potential users who have very different applications and scenarios.

I will soon commit my personal implementations.

  was:
Representation learning (RL) has been an established discipline in the 
machine learning space for decades, but it has drawn tremendous attention 
lately with the emergence of deep learning. The central problem of RL is to 
determine an optimal representation of the input data. Once the data is 
embedded into a high-dimensional vector, vector retrieval (VR) methods can 
then be applied to search for relevant items.

With the rapid development of RL over the past few years, the technique has 
been used extensively in industry, from online advertising to computer vision 
and speech recognition. There exist many open source implementations of VR 
algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
choices for potential users. However, the aforementioned implementations are 
all written in C++ with no plan to support a Java interface 
[[https://github.com/facebookresearch/faiss/issues/105]]. 

The algorithms for vector retrieval can be roughly classified into four 
categories,
 # Tree-based algorithms, such as the KD-tree;
 # Hashing methods, such as LSH (Locality-Sensitive Hashing);
 # Product quantization algorithms, such as IVFFlat;
 # Graph-based algorithms, such as HNSW, SSG, NSG;

IVFFlat and HNSW are the most popular ones among all the algorithms. 
Recently, the implementation of ANN algorithms for Lucene, such as HNSW 
(Hierarchical Navigable Small World, LUCENE-9004), has made great progress. 
IVFFlat has a smaller index size but requires k-means clustering, while HNSW 
is faster in query but requires extra storage for graphs [indexing 1M 
vectors|https://github.com/facebookresearch/faiss/wiki/Indexing-1M-vectors]. 
Each of them has its merits and demerits. Since HNSW is now under 
development, it may be better to provide IVFFlat as an alternative choice.

I will soon commit my personal implementations.


> Introduce IVFFlat to Lucene for ANN similarity search
> -
>
> Key: LUCENE-9136
> URL: https://issues.apache.org/jira/browse/LUCENE-9136
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Xin-Chun Zhang
>Priority: Major
>
> Representation learning (RL) has been an established discipline in the 
> machine learning space for decades but it draws tremendous 

[jira] [Updated] (LUCENE-9136) Introduce IVFFlat to Lucene for ANN similarity search

2020-01-16 Thread Xin-Chun Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xin-Chun Zhang updated LUCENE-9136:
---
Summary: Introduce IVFFlat to Lucene for ANN similarity search  (was: 
Introduce IVFFlat for ANN similarity search)

> Introduce IVFFlat to Lucene for ANN similarity search
> -
>
> Key: LUCENE-9136
> URL: https://issues.apache.org/jira/browse/LUCENE-9136
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Xin-Chun Zhang
>Priority: Major
>
> Representation learning (RL) has been an established discipline in the 
> machine learning space for decades, but it has drawn tremendous attention 
> lately with the emergence of deep learning. The central problem of RL is to 
> determine an optimal representation of the input data. Once the data is 
> embedded into a high-dimensional vector, vector retrieval (VR) methods can 
> then be applied to search for relevant items.
> With the rapid development of RL over the past few years, the technique has 
> been used extensively in industry, from online advertising to computer 
> vision and speech recognition. There exist many open source implementations 
> of VR algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing 
> various choices for potential users. However, the aforementioned 
> implementations are all written in C++ with no plan to support a Java 
> interface [[https://github.com/facebookresearch/faiss/issues/105]]. 
> The algorithms for vector retrieval can be roughly classified into four 
> categories,
>  # Tree-based algorithms, such as the KD-tree;
>  # Hashing methods, such as LSH (Locality-Sensitive Hashing);
>  # Product quantization algorithms, such as IVFFlat;
>  # Graph-based algorithms, such as HNSW, SSG, NSG;
> IVFFlat and HNSW are the most popular ones among all the algorithms. 
> Recently, the implementation of ANN algorithms for Lucene, such as HNSW 
> (Hierarchical Navigable Small World, LUCENE-9004), has made great progress. 
> IVFFlat has a smaller index size but requires k-means clustering, while 
> HNSW is faster in query but requires extra storage for graphs [indexing 1M 
> vectors|https://github.com/facebookresearch/faiss/wiki/Indexing-1M-vectors].
>  Each of them has its merits and demerits. Since HNSW is now under 
> development, it may be better to provide IVFFlat as an alternative choice.
> I will soon commit my personal implementations.






[jira] [Updated] (LUCENE-9136) Introduce IVFFlat for ANN similarity search

2020-01-16 Thread Xin-Chun Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xin-Chun Zhang updated LUCENE-9136:
---
Description: 
Representation learning (RL) has been an established discipline in the machine 
learning space for decades but it draws tremendous attention lately with the 
emergence of deep learning. The central problem of RL is to determine an 
optimal representation of the input data. By embedding the data into a high 
dimensional vector, the vector retrieval (VR) method is then applied to search 
the relevant items.

With the rapid development of RL over the past few years, the technique has 
been used extensively in industry from online advertising to computer vision 
and speech recognition. There exist many open source implementations of VR 
algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
choices for potential users. However, the aforementioned implementations are 
all written in C++, with no plan to support a Java interface 
[[https://github.com/facebookresearch/faiss/issues/105]]. 

The algorithms for vector retrieval can be roughly classified into four 
categories,
 # Tree-based algorithms, such as KD-tree;
 # Hashing methods, such as LSH (Locality-Sensitive Hashing);
 # Product quantization algorithms, such as IVFFlat;
 # Graph-based algorithms, such as HNSW, SSG, NSG;

IVFFlat and HNSW are the most popular ones among all these algorithms. 
Recently, the implementation of ANN algorithms for Lucene, such as HNSW 
(Hierarchical Navigable Small World, LUCENE-9004), has made great progress. 
IVFFlat has a smaller index size but requires k-means clustering, while HNSW 
is faster at query time but requires extra storage for graphs [indexing 1M 
vectors|https://github.com/facebookresearch/faiss/wiki/Indexing-1M-vectors]. 
Each of them has its merits and demerits. Since HNSW is now under development, 
it may be better to provide IVFFlat as an alternative choice.

I will soon commit my personal implementations.
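As an aside for readers unfamiliar with the training step mentioned above: 
IVFFlat partitions the vectors with k-means and keeps one flat (uncompressed) 
inverted list per centroid. Below is a minimal, self-contained sketch of plain 
Lloyd-style k-means in Java. It is illustrative only and is not code from the 
linked branch or patch; the class and method names (KMeansSketch, kMeans, 
nearest) are hypothetical, and it assumes squared Euclidean distance and a 
fixed iteration count.

{code:java}
import java.util.Random;

// Hypothetical sketch of the k-means step an IVFFlat index needs for its
// coarse quantizer; not code from the actual patch or branch.
public class KMeansSketch {

  // Runs a few Lloyd iterations and returns k centroids.
  static float[][] kMeans(float[][] vectors, int k, int iters, long seed) {
    Random rnd = new Random(seed);
    int dim = vectors[0].length;
    float[][] centroids = new float[k][];
    for (int c = 0; c < k; c++) {
      // Seed each centroid with a randomly chosen data point.
      centroids[c] = vectors[rnd.nextInt(vectors.length)].clone();
    }
    for (int it = 0; it < iters; it++) {
      float[][] sums = new float[k][dim];
      int[] counts = new int[k];
      for (float[] v : vectors) {            // assignment step
        int c = nearest(centroids, v);
        counts[c]++;
        for (int d = 0; d < dim; d++) sums[c][d] += v[d];
      }
      for (int c = 0; c < k; c++) {          // update step
        if (counts[c] == 0) continue;        // leave empty clusters as-is
        for (int d = 0; d < dim; d++) centroids[c][d] = sums[c][d] / counts[c];
      }
    }
    return centroids;
  }

  // Index of the centroid closest to v by squared Euclidean distance.
  static int nearest(float[][] centroids, float[] v) {
    int best = 0;
    float bestDist = Float.MAX_VALUE;
    for (int c = 0; c < centroids.length; c++) {
      float dist = 0f;
      for (int d = 0; d < v.length; d++) {
        float diff = centroids[c][d] - v[d];
        dist += diff * diff;
      }
      if (dist < bestDist) { bestDist = dist; best = c; }
    }
    return best;
  }
}
{code}

Each indexed vector is then appended, uncompressed, to the inverted list of 
its nearest centroid. The small index size comes from storing only the raw 
vectors plus one centroid id each, while the clustering pass itself is what 
makes index building (training) comparatively expensive.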

  was:
Representation learning (RL) has been an established discipline in the machine 
learning space for decades but it draws tremendous attention lately with the 
emergence of deep learning. The central problem of RL is to determine an 
optimal representation of the input data. By embedding the data into a high 
dimensional vector, the vector retrieval (VR) method is then applied to search 
the relevant items.

With the rapid development of RL over the past few years, the technique has 
been used extensively in industry from online advertising to computer vision 
and speech recognition. There exist many open source implementations of VR 
algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
choices for potential users. However, the aforementioned implementations are 
all written in C++, with no plan to support a Java interface 
[[https://github.com/facebookresearch/faiss/issues/105]]. 

The algorithms for vector retrieval can be roughly classified into four 
categories,
 # Tree-based algorithms, such as KD-tree;
 # Hashing methods, such as LSH (Locality-Sensitive Hashing);
 # Product quantization algorithms, such as IVFFlat;
 # Graph-based algorithms, such as HNSW, SSG, NSG;

IVFFlat and HNSW are the most popular ones among all these algorithms. 
Recently, the implementation of ANN algorithms for Lucene, such as HNSW 
(Hierarchical Navigable Small World, LUCENE-9004), has made great progress. 
IVFFlat requires much less memory and disk space than HNSW [indexing 1M 
vectors|https://github.com/facebookresearch/faiss/wiki/Indexing-1M-vectors]. 
IVFFlat also supports both online and offline training.

I'm now trying to introduce IVFFlat into the Lucene core in my personal branch 
[[https://github.com/irvingzhang/lucene-solr/tree/jira/LUCENE-9136]], at a very 
early stage.


> Introduce IVFFlat for ANN similarity search
> ---
>
> Key: LUCENE-9136
> URL: https://issues.apache.org/jira/browse/LUCENE-9136
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Xin-Chun Zhang
>Priority: Major
>
> Representation learning (RL) has been an established discipline in the 
> machine learning space for decades but it draws tremendous attention lately 
> with the emergence of deep learning. The central problem of RL is to 
> determine an optimal representation of the input data. By embedding the data 
> into a high dimensional vector, the vector retrieval (VR) method is then 
> applied to search the relevant items.
> With the rapid development of RL over the past few years, the technique has 
> been used extensively in industry from online advertising to computer vision 
> and speech recognition. There exist many open source implementations of VR 
> algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
> choices for potential 

[jira] [Updated] (LUCENE-9136) Introduce IVFFlat for ANN similarity search

2020-01-16 Thread Xin-Chun Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xin-Chun Zhang updated LUCENE-9136:
---
Description: 
Representation learning (RL) has been an established discipline in the machine 
learning space for decades but it draws tremendous attention lately with the 
emergence of deep learning. The central problem of RL is to determine an 
optimal representation of the input data. By embedding the data into a high 
dimensional vector, the vector retrieval (VR) method is then applied to search 
the relevant items.

With the rapid development of RL over the past few years, the technique has 
been used extensively in industry from online advertising to computer vision 
and speech recognition. There exist many open source implementations of VR 
algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
choices for potential users. However, the aforementioned implementations are 
all written in C++, with no plan to support a Java interface 
[[https://github.com/facebookresearch/faiss/issues/105]]. 

The algorithms for vector retrieval can be roughly classified into four 
categories,
 # Tree-based algorithms, such as KD-tree;
 # Hashing methods, such as LSH (Locality-Sensitive Hashing);
 # Product quantization algorithms, such as IVFFlat;
 # Graph-based algorithms, such as HNSW, SSG, NSG;

IVFFlat and HNSW are the most popular ones among all these algorithms. 
Recently, the implementation of ANN algorithms for Lucene, such as HNSW 
(Hierarchical Navigable Small World, LUCENE-9004), has made great progress. 
IVFFlat requires much less memory and disk space than HNSW [indexing 1M 
vectors|https://github.com/facebookresearch/faiss/wiki/Indexing-1M-vectors]. 
IVFFlat also supports both online and offline training.

I'm now trying to introduce IVFFlat into the Lucene core in my personal branch 
[[https://github.com/irvingzhang/lucene-solr/tree/jira/LUCENE-9136]]; it is 
still at a very early stage.

  was:
Representation learning (RL) has been an established discipline in the machine 
learning space for decades but it draws tremendous attention lately with the 
emergence of deep learning. The central problem of RL is to determine an 
optimal representation of the input data. By embedding the data into a high 
dimensional vector, the vector retrieval (VR) method is then applied to search 
the relevant items.

With the rapid development of RL over the past few years, the technique has 
been used extensively in industry from online advertising to computer vision 
and speech recognition. There exist many open source implementations of VR 
algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
choices for potential users. However, the aforementioned implementations are 
all written in C++, with no plan to support a Java interface 
[[https://github.com/facebookresearch/faiss/issues/105]]. 

The algorithms for vector retrieval can be roughly classified into four 
categories,
 # Tree-based algorithms, such as KD-tree;
 # Hashing methods, such as LSH (Locality-Sensitive Hashing);
 # Product quantization algorithms, such as IVFFlat;
 # Graph-based algorithms, such as HNSW, SSG, NSG;

IVFFlat and HNSW are the most popular ones among all these algorithms. 
Recently, the implementation of ANN algorithms for Lucene, such as HNSW 
(Hierarchical Navigable Small World, LUCENE-9004), has made great progress. 
IVFFlat requires much less memory and disk space than HNSW [indexing 1M 
vectors|https://github.com/facebookresearch/faiss/wiki/Indexing-1M-vectors]. 
IVFFlat also supports both online and offline training.

I'm now trying to introduce IVFFlat into the Lucene core in my personal branch 
[[https://github.com/irvingzhang/lucene-solr/tree/jira/LUCENE-9136]].


> Introduce IVFFlat for ANN similarity search
> ---
>
> Key: LUCENE-9136
> URL: https://issues.apache.org/jira/browse/LUCENE-9136
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Xin-Chun Zhang
>Priority: Major
>
> Representation learning (RL) has been an established discipline in the 
> machine learning space for decades but it draws tremendous attention lately 
> with the emergence of deep learning. The central problem of RL is to 
> determine an optimal representation of the input data. By embedding the data 
> into a high dimensional vector, the vector retrieval (VR) method is then 
> applied to search the relevant items.
> With the rapid development of RL over the past few years, the technique has 
> been used extensively in industry from online advertising to computer vision 
> and speech recognition. There exist many open source implementations of VR 
> algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
> choices for potential users. However, the aforementioned implementations are 
> all 

[jira] [Updated] (LUCENE-9136) Introduce IVFFlat for ANN similarity search

2020-01-16 Thread Xin-Chun Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xin-Chun Zhang updated LUCENE-9136:
---
Description: 
Representation learning (RL) has been an established discipline in the machine 
learning space for decades but it draws tremendous attention lately with the 
emergence of deep learning. The central problem of RL is to determine an 
optimal representation of the input data. By embedding the data into a high 
dimensional vector, the vector retrieval (VR) method is then applied to search 
the relevant items.

With the rapid development of RL over the past few years, the technique has 
been used extensively in industry from online advertising to computer vision 
and speech recognition. There exist many open source implementations of VR 
algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
choices for potential users. However, the aforementioned implementations are 
all written in C++, with no plan to support a Java interface 
[[https://github.com/facebookresearch/faiss/issues/105]]. 

The algorithms for vector retrieval can be roughly classified into four 
categories,
 # Tree-based algorithms, such as KD-tree;
 # Hashing methods, such as LSH (Locality-Sensitive Hashing);
 # Product quantization algorithms, such as IVFFlat;
 # Graph-based algorithms, such as HNSW, SSG, NSG;

IVFFlat and HNSW are the most popular ones among all these algorithms. 
Recently, the implementation of ANN algorithms for Lucene, such as HNSW 
(Hierarchical Navigable Small World, LUCENE-9004), has made great progress. 
IVFFlat requires much less memory and disk space than HNSW [indexing 1M 
vectors|https://github.com/facebookresearch/faiss/wiki/Indexing-1M-vectors]. 
IVFFlat also supports both online and offline training.

I'm now trying to introduce IVFFlat into the Lucene core in my personal branch 
[[https://github.com/irvingzhang/lucene-solr/tree/jira/LUCENE-9136]], at a very 
early stage.

  was:
Representation learning (RL) has been an established discipline in the machine 
learning space for decades but it draws tremendous attention lately with the 
emergence of deep learning. The central problem of RL is to determine an 
optimal representation of the input data. By embedding the data into a high 
dimensional vector, the vector retrieval (VR) method is then applied to search 
the relevant items.

With the rapid development of RL over the past few years, the technique has 
been used extensively in industry from online advertising to computer vision 
and speech recognition. There exist many open source implementations of VR 
algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
choices for potential users. However, the aforementioned implementations are 
all written in C++, with no plan to support a Java interface 
[[https://github.com/facebookresearch/faiss/issues/105]]. 

The algorithms for vector retrieval can be roughly classified into four 
categories,
 # Tree-based algorithms, such as KD-tree;
 # Hashing methods, such as LSH (Locality-Sensitive Hashing);
 # Product quantization algorithms, such as IVFFlat;
 # Graph-based algorithms, such as HNSW, SSG, NSG;

IVFFlat and HNSW are the most popular ones among all these algorithms. 
Recently, the implementation of ANN algorithms for Lucene, such as HNSW 
(Hierarchical Navigable Small World, LUCENE-9004), has made great progress. 
IVFFlat requires much less memory and disk space than HNSW [indexing 1M 
vectors|https://github.com/facebookresearch/faiss/wiki/Indexing-1M-vectors]. 
IVFFlat also supports both online and offline training.

I'm now trying to introduce IVFFlat into the Lucene core in my personal branch 
[[https://github.com/irvingzhang/lucene-solr/tree/jira/LUCENE-9136]]; it is 
still at a very early stage.


> Introduce IVFFlat for ANN similarity search
> ---
>
> Key: LUCENE-9136
> URL: https://issues.apache.org/jira/browse/LUCENE-9136
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Xin-Chun Zhang
>Priority: Major
>
> Representation learning (RL) has been an established discipline in the 
> machine learning space for decades but it draws tremendous attention lately 
> with the emergence of deep learning. The central problem of RL is to 
> determine an optimal representation of the input data. By embedding the data 
> into a high dimensional vector, the vector retrieval (VR) method is then 
> applied to search the relevant items.
> With the rapid development of RL over the past few years, the technique has 
> been used extensively in industry from online advertising to computer vision 
> and speech recognition. There exist many open source implementations of VR 
> algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
> choices for potential users. However, the aforementioned 

[jira] [Updated] (LUCENE-9136) Introduce IVFFlat for ANN similarity search

2020-01-16 Thread Xin-Chun Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xin-Chun Zhang updated LUCENE-9136:
---
Description: 
Representation learning (RL) has been an established discipline in the machine 
learning space for decades but it draws tremendous attention lately with the 
emergence of deep learning. The central problem of RL is to determine an 
optimal representation of the input data. By embedding the data into a high 
dimensional vector, the vector retrieval (VR) method is then applied to search 
the relevant items.

With the rapid development of RL over the past few years, the technique has 
been used extensively in industry from online advertising to computer vision 
and speech recognition. There exist many open source implementations of VR 
algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
choices for potential users. However, the aforementioned implementations are 
all written in C++, with no plan to support a Java interface 
[[https://github.com/facebookresearch/faiss/issues/105]]. 

The algorithms for vector retrieval can be roughly classified into four 
categories,
 # Tree-based algorithms, such as KD-tree;
 # Hashing methods, such as LSH (Locality-Sensitive Hashing);
 # Product quantization algorithms, such as IVFFlat;
 # Graph-based algorithms, such as HNSW, SSG, NSG;

IVFFlat and HNSW are the most popular ones among all these algorithms. 
Recently, the implementation of ANN algorithms for Lucene, such as HNSW 
(Hierarchical Navigable Small World, LUCENE-9004), has made great progress. 
IVFFlat requires much less memory and disk space than HNSW [indexing 1M 
vectors|https://github.com/facebookresearch/faiss/wiki/Indexing-1M-vectors]. 
IVFFlat also supports both online and offline training.

I'm now trying to introduce IVFFlat into the Lucene core in my personal branch 
[[https://github.com/irvingzhang/lucene-solr/tree/jira/LUCENE-9136]].
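To make the memory/query-speed trade-off above concrete from the query side: 
an IVFFlat search typically ranks the centroids against the query and then 
brute-force scans only the inverted lists of the closest few (the parameter 
usually called nprobe). Below is a minimal sketch under those assumptions, 
with hypothetical names throughout and no relation to the code in the branch.

{code:java}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

// Hypothetical IVFFlat query sketch: probe only the nprobe closest lists.
public class IvfSearchSketch {

  static final class Hit {
    final int docId;
    final float dist;
    Hit(int docId, float dist) { this.docId = docId; this.dist = dist; }
  }

  // listVectors[c][j] is the j-th raw vector in centroid c's inverted list;
  // listDocIds[c][j] is the document id stored alongside it.
  static List<Hit> search(float[] q, float[][] centroids,
                          float[][][] listVectors, int[][] listDocIds,
                          int nprobe, int topK) {
    // 1) Rank centroids by distance to the query.
    Integer[] order = new Integer[centroids.length];
    for (int c = 0; c < order.length; c++) order[c] = c;
    Arrays.sort(order,
        Comparator.comparingDouble((Integer c) -> squaredDist(centroids[c], q)));

    // 2) Scan only the nprobe closest lists, keeping the topK smallest
    //    distances in a max-heap.
    PriorityQueue<Hit> best =
        new PriorityQueue<>((a, b) -> Float.compare(b.dist, a.dist));
    for (int p = 0; p < Math.min(nprobe, order.length); p++) {
      int c = order[p];
      for (int j = 0; j < listVectors[c].length; j++) {
        float d = squaredDist(listVectors[c][j], q);
        if (best.size() < topK) {
          best.add(new Hit(listDocIds[c][j], d));
        } else if (d < best.peek().dist) {
          best.poll();
          best.add(new Hit(listDocIds[c][j], d));
        }
      }
    }
    List<Hit> hits = new ArrayList<>(best);
    hits.sort(Comparator.comparingDouble((Hit h) -> h.dist));
    return hits;
  }

  static float squaredDist(float[] a, float[] b) {
    float s = 0f;
    for (int i = 0; i < a.length; i++) {
      float diff = a[i] - b[i];
      s += diff * diff;
    }
    return s;
  }
}
{code}

Raising nprobe scans more lists, generally trading query speed for recall, 
which is why query cost grows with the number of probed lists while the index 
itself stays flat and compact.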

  was:
Representation learning (RL) has been an established discipline in the machine 
learning space for decades but it draws tremendous attention lately with the 
emergence of deep learning. The central problem of RL is to determine an 
optimal representation of the input data. By embedding the data into a high 
dimensional vector, the vector retrieval (VR) method is then applied to search 
the relevant items.

With the rapid development of RL over the past few years, the technique has 
been used extensively in industry from online advertising to computer vision 
and speech recognition. There exist many open source implementations of VR 
algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
choices for potential users. However, the aforementioned implementations are 
all written in C++, with no plan to support a Java interface 
[[https://github.com/facebookresearch/faiss/issues/105]]. 

The algorithms for vector retrieval can be roughly classified into four 
categories,
 # Tree-based algorithms, such as KD-tree;
 # Hashing methods, such as LSH (Locality-Sensitive Hashing);
 # Product quantization algorithms, such as IVFFlat;
 # Graph-based algorithms, such as HNSW, SSG, NSG;

IVFFlat and HNSW are the most popular ones among all these algorithms. 
Recently, the implementation of ANN algorithms for Lucene, such as HNSW 
(Hierarchical Navigable Small World, LUCENE-9004), has made great progress. 
IVFFlat requires much less memory and disk space than HNSW [indexing 1M 
vectors|https://github.com/facebookresearch/faiss/wiki/Indexing-1M-vectors]. 
IVFFlat also supports both online and offline training.

I'm now trying to introduce IVFFlat into the Lucene core in my personal branch 
[[https://github.com/irvingzhang/lucene-solr/tree/jira/LUCENE-9136]], and I 
will try my best to reuse the excellent work from LUCENE-9004.


> Introduce IVFFlat for ANN similarity search
> ---
>
> Key: LUCENE-9136
> URL: https://issues.apache.org/jira/browse/LUCENE-9136
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Xin-Chun Zhang
>Priority: Major
>
> Representation learning (RL) has been an established discipline in the 
> machine learning space for decades but it draws tremendous attention lately 
> with the emergence of deep learning. The central problem of RL is to 
> determine an optimal representation of the input data. By embedding the data 
> into a high dimensional vector, the vector retrieval (VR) method is then 
> applied to search the relevant items.
> With the rapid development of RL over the past few years, the technique has 
> been used extensively in industry from online advertising to computer vision 
> and speech recognition. There exist many open source implementations of VR 
> algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
> choices for potential users. 

[jira] [Updated] (LUCENE-9136) Introduce IVFFlat for ANN similarity search

2020-01-16 Thread Xin-Chun Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xin-Chun Zhang updated LUCENE-9136:
---
Description: 
Representation learning (RL) has been an established discipline in the machine 
learning space for decades but it draws tremendous attention lately with the 
emergence of deep learning. The central problem of RL is to determine an 
optimal representation of the input data. By embedding the data into a high 
dimensional vector, the vector retrieval (VR) method is then applied to search 
the relevant items.

With the rapid development of RL over the past few years, the technique has 
been used extensively in industry from online advertising to computer vision 
and speech recognition. There exist many open source implementations of VR 
algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
choices for potential users. However, the aforementioned implementations are 
all written in C++, with no plan to support a Java interface 
[[https://github.com/facebookresearch/faiss/issues/105]]. 

The algorithms for vector retrieval can be roughly classified into four 
categories,
 # Tree-based algorithms, such as KD-tree;
 # Hashing methods, such as LSH (Locality-Sensitive Hashing);
 # Product quantization algorithms, such as IVFFlat;
 # Graph-based algorithms, such as HNSW, SSG, NSG;

IVFFlat and HNSW are the most popular ones among all these algorithms. 
Recently, the implementation of ANN algorithms for Lucene, such as HNSW 
(Hierarchical Navigable Small World, LUCENE-9004), has made great progress. 
IVFFlat requires much less memory and disk space than HNSW [indexing 1M 
vectors|https://github.com/facebookresearch/faiss/wiki/Indexing-1M-vectors]. 
IVFFlat also supports both online and offline training.

I'm now trying to introduce IVFFlat into the Lucene core in my personal branch 
[[https://github.com/irvingzhang/lucene-solr/tree/jira/LUCENE-9136]], and I 
will try my best to reuse the excellent work from LUCENE-9004.
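On the "online and offline training" point above: offline training runs 
k-means over a training sample before any documents are indexed, whereas 
online training folds each newly added vector into the model as it arrives, 
for example with sequential k-means, where a centroid moves toward each new 
vector with a decaying learning rate. A tiny sketch of that online update 
follows, again with hypothetical names rather than anything from the patch.

{code:java}
// Hypothetical sketch of online (sequential) k-means for IVFFlat: each newly
// indexed vector nudges its nearest centroid toward itself with weight 1/n,
// and the caller then appends the vector to that centroid's inverted list.
public class OnlineCentroidsSketch {
  private final float[][] centroids;
  private final int[] counts; // vectors absorbed per centroid so far

  OnlineCentroidsSketch(float[][] initialCentroids) {
    this.centroids = initialCentroids;
    this.counts = new int[initialCentroids.length];
  }

  // Returns the inverted-list (centroid) id the new vector belongs to.
  int add(float[] v) {
    int c = nearest(v);
    counts[c]++;
    float lr = 1.0f / counts[c]; // decaying learning rate
    for (int d = 0; d < v.length; d++) {
      centroids[c][d] += lr * (v[d] - centroids[c][d]);
    }
    return c;
  }

  private int nearest(float[] v) {
    int best = 0;
    float bestDist = Float.MAX_VALUE;
    for (int c = 0; c < centroids.length; c++) {
      float dist = 0f;
      for (int d = 0; d < v.length; d++) {
        float diff = centroids[c][d] - v[d];
        dist += diff * diff;
      }
      if (dist < bestDist) { bestDist = dist; best = c; }
    }
    return best;
  }
}
{code}

Whether the branch uses this particular update rule is not established here; 
the sketch only illustrates why no full retraining pass is needed when 
vectors arrive incrementally.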

  was:
Representation learning (RL) has been an established discipline in the machine 
learning space for decades but it draws tremendous attention lately with the 
emergence of deep learning. The central problem of RL is to determine an 
optimal representation of the input data. By embedding the data into a high 
dimensional vector, the vector retrieval (VR) method is then applied to search 
the relevant items.

With the rapid development of RL over the past few years, the technique has 
been used extensively in industry from online advertising to computer vision 
and speech recognition. There exist many open source implementations of VR 
algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
choices for potential users. However, the aforementioned implementations are 
all written in C++, with no plan to support a Java interface 
[[https://github.com/facebookresearch/faiss/issues/105]]. 

The algorithms for vector retrieval can be roughly classified into four 
categories,
 # Tree-based algorithms, such as KD-tree;
 # Hashing methods, such as LSH (Locality-Sensitive Hashing);
 # Product quantization algorithms, such as IVFFlat;
 # Graph-based algorithms, such as HNSW, SSG, NSG;

IVFFlat and HNSW are the most popular ones among all these algorithms. 
Recently, the implementation of ANN algorithms for Lucene, such as HNSW 
(Hierarchical Navigable Small World, LUCENE-9004), has made great progress. 
IVFFlat requires much less memory and disk space than HNSW [indexing 1M 
vectors|https://github.com/facebookresearch/faiss/wiki/Indexing-1M-vectors]. 
IVFFlat also supports both online and offline training.

I'm now trying to introduce IVFFlat into the Lucene core, and I will try my 
best to reuse the excellent work from LUCENE-9004.


> Introduce IVFFlat for ANN similarity search
> ---
>
> Key: LUCENE-9136
> URL: https://issues.apache.org/jira/browse/LUCENE-9136
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Xin-Chun Zhang
>Priority: Major
>
> Representation learning (RL) has been an established discipline in the 
> machine learning space for decades but it draws tremendous attention lately 
> with the emergence of deep learning. The central problem of RL is to 
> determine an optimal representation of the input data. By embedding the data 
> into a high dimensional vector, the vector retrieval (VR) method is then 
> applied to search the relevant items.
> With the rapid development of RL over the past few years, the technique has 
> been used extensively in industry from online advertising to computer vision 
> and speech recognition. There exist many open source implementations of VR 
> algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
> choices for potential users. However, the aforementioned 

[jira] [Updated] (LUCENE-9136) Introduce IVFFlat for ANN similarity search

2020-01-16 Thread Xin-Chun Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xin-Chun Zhang updated LUCENE-9136:
---
Description: 
Representation learning (RL) has been an established discipline in the machine 
learning space for decades but it draws tremendous attention lately with the 
emergence of deep learning. The central problem of RL is to determine an 
optimal representation of the input data. By embedding the data into a high 
dimensional vector, the vector retrieval (VR) method is then applied to search 
the relevant items.

With the rapid development of RL over the past few years, the technique has 
been used extensively in industry from online advertising to computer vision 
and speech recognition. There exist many open source implementations of VR 
algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
choices for potential users. However, the aforementioned implementations are 
all written in C++, with no plan to support a Java interface 
[[https://github.com/facebookresearch/faiss/issues/105]]. 

The algorithms for vector retrieval can be roughly classified into four 
categories,
 # Tree-based algorithms, such as KD-tree;
 # Hashing methods, such as LSH (Locality-Sensitive Hashing);
 # Product quantization algorithms, such as IVFFlat;
 # Graph-based algorithms, such as HNSW, SSG, NSG;

IVFFlat and HNSW are the most popular ones among all these algorithms. 
Recently, the implementation of ANN algorithms for Lucene, such as HNSW 
(Hierarchical Navigable Small World, LUCENE-9004), has made great progress. 
IVFFlat requires much less memory and disk space than HNSW [indexing 1M 
vectors|https://github.com/facebookresearch/faiss/wiki/Indexing-1M-vectors]. 
IVFFlat also supports both online and offline training.

I'm now trying to introduce IVFFlat into the Lucene core, and I will try my 
best to reuse the excellent work from LUCENE-9004.

  was:
Representation learning (RL) has been an established discipline in the machine 
learning space for decades but it draws tremendous attention lately with the 
emergence of deep learning. The central problem of RL is to determine an 
optimal representation of the input data. By embedding the data into a high 
dimensional vector, the vector retrieval (VR) method is then applied to search 
the relevant items.

With the rapid development of RL over the past few years, the technique has 
been used extensively in industry from online advertising to computer vision 
and speech recognition. There exist many open source implementations of VR 
algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
choices for potential users. However, the aforementioned implementations are 
all written in C++, with no plan to support a Java interface 
[[https://github.com/facebookresearch/faiss/issues/105]]. 

The algorithms for vector retrieval can be roughly classified into four 
categories,
 # Tree-based algorithms, such as KD-tree;
 # Hashing methods, such as LSH (Locality-Sensitive Hashing);
 # Product quantization algorithms, such as IVFFlat;
 # Graph-based algorithms, such as HNSW, SSG, NSG;

IVFFlat and HNSW are the most popular ones among all these algorithms. 
Recently, the implementation of ANN algorithms for Lucene, such as HNSW 
(Hierarchical Navigable Small World, LUCENE-9004), has made great progress. 
IVFFlat requires less memory and disk space than HNSW, and it supports both 
online and offline training.

I'm now trying to introduce IVFFlat into the Lucene core, and I will try my 
best to reuse the excellent work from LUCENE-9004.


> Introduce IVFFlat for ANN similarity search
> ---
>
> Key: LUCENE-9136
> URL: https://issues.apache.org/jira/browse/LUCENE-9136
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Xin-Chun Zhang
>Priority: Major
>
> Representation learning (RL) has been an established discipline in the 
> machine learning space for decades but it draws tremendous attention lately 
> with the emergence of deep learning. The central problem of RL is to 
> determine an optimal representation of the input data. By embedding the data 
> into a high dimensional vector, the vector retrieval (VR) method is then 
> applied to search the relevant items.
> With the rapid development of RL over the past few years, the technique has 
> been used extensively in industry from online advertising to computer vision 
> and speech recognition. There exist many open source implementations of VR 
> algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
> choices for potential users. However, the aforementioned implementations are 
> all written in C++, with no plan to support a Java interface 
> [[https://github.com/facebookresearch/faiss/issues/105]]. 
> The algorithms for vector retrieval 

[jira] [Updated] (LUCENE-9136) Introduce IVFFlat for ANN similarity search

2020-01-15 Thread Xin-Chun Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xin-Chun Zhang updated LUCENE-9136:
---
Description: 
Representation learning (RL) has been an established discipline in the machine 
learning space for decades but it draws tremendous attention lately with the 
emergence of deep learning. The central problem of RL is to determine an 
optimal representation of the input data. By embedding the data into a high 
dimensional vector, the vector retrieval (VR) method is then applied to search 
the relevant items.

With the rapid development of RL over the past few years, the technique has 
been used extensively in industry from online advertising to computer vision 
and speech recognition. There exist many open source implementations of VR 
algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
choices for potential users. However, the aforementioned implementations are 
all written in C++, with no plan to support a Java interface 
[[https://github.com/facebookresearch/faiss/issues/105]]. 

The algorithms for vector retrieval can be roughly classified into four 
categories,
 # Tree-based algorithms, such as KD-tree;
 # Hashing methods, such as LSH (Locality-Sensitive Hashing);
 # Product quantization algorithms, such as IVFFlat;
 # Graph-based algorithms, such as HNSW, SSG, NSG;

IVFFlat and HNSW are the most popular ones among all these algorithms. 
Recently, the implementation of ANN algorithms for Lucene, such as HNSW 
(Hierarchical Navigable Small World, LUCENE-9004), has made great progress. 
IVFFlat requires less memory and disk space than HNSW, and it supports both 
online and offline training.

I'm now trying to introduce IVFFlat into the Lucene core, and I will try my 
best to reuse the excellent work from LUCENE-9004.

  was:
Representation learning (RL) has been an established discipline in the machine 
learning space for decades but it draws tremendous attention lately with the 
emergence of deep learning. The central problem of RL is to determine an 
optimal representation of the input data. By embedding the data into a high 
dimensional vector, the vector retrieval (VR) method is then applied to search 
the relevant items.

With the rapid development of RL over the past few years, the technique has 
been used extensively in industry from online advertising to computer vision 
and speech recognition. There exist many open source implementations of VR 
algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
choices for potential users. However, the aforementioned implementations are 
all written in C++, with no plan to support a Java interface 
[[https://github.com/facebookresearch/faiss/issues/105]]. 

The algorithms for vector retrieval can be roughly classified into four 
categories,
 # Tree-based algorithms, such as KD-tree;
 # Hashing methods, such as LSH (Locality-Sensitive Hashing);
 # Product quantization algorithms, such as IVFFlat;
 # Graph-based algorithms, such as HNSW, SSG, NSG;

IVFFlat and HNSW are the most popular ones among all these algorithms. 
Recently, the implementation of ANN algorithms for Lucene, such as HNSW 
(Hierarchical Navigable Small World, LUCENE-9004), has made great progress. 
IVFFlat requires less memory and disk space than HNSW, and it supports both 
online and offline training.

I'm now trying to implement IVFFlat, and I will try my best to reuse the 
excellent work from LUCENE-9004.


> Introduce IVFFlat for ANN similarity search
> ---
>
> Key: LUCENE-9136
> URL: https://issues.apache.org/jira/browse/LUCENE-9136
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Xin-Chun Zhang
>Priority: Major
>
> Representation learning (RL) has been an established discipline in the 
> machine learning space for decades but it draws tremendous attention lately 
> with the emergence of deep learning. The central problem of RL is to 
> determine an optimal representation of the input data. By embedding the data 
> into a high dimensional vector, the vector retrieval (VR) method is then 
> applied to search the relevant items.
> With the rapid development of RL over the past few years, the technique has 
> been used extensively in industry from online advertising to computer vision 
> and speech recognition. There exist many open source implementations of VR 
> algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
> choices for potential users. However, the aforementioned implementations are 
> all written in C++, with no plan to support a Java interface 
> [[https://github.com/facebookresearch/faiss/issues/105]]. 
> The algorithms for vector retrieval can be roughly classified into four 
> categories,
>  # Tree-based algorithms, such as KD-tree;
>  # Hashing 

[jira] [Updated] (LUCENE-9136) Introduce IVFFlat for ANN similarity search

2020-01-15 Thread Xin-Chun Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xin-Chun Zhang updated LUCENE-9136:
---
Description: 
Representation learning (RL) has been an established discipline in the machine 
learning space for decades but it draws tremendous attention lately with the 
emergence of deep learning. The central problem of RL is to determine an 
optimal representation of the input data. By embedding the data into a high 
dimensional vector, the vector retrieval (VR) method is then applied to search 
the relevant items.

With the rapid development of RL over the past few years, the technique has 
been used extensively in industry from online advertising to computer vision 
and speech recognition. There exist many open source implementations of VR 
algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
choices for potential users. However, the aforementioned implementations are 
all written in C++, with no plan to support a Java interface 
[[https://github.com/facebookresearch/faiss/issues/105]]. 

The algorithms for vector retrieval can be roughly classified into four 
categories,
 # Tree-based algorithms, such as KD-tree;
 # Hashing methods, such as LSH (Locality-Sensitive Hashing);
 # Product quantization algorithms, such as IVFFlat;
 # Graph-based algorithms, such as HNSW, SSG, NSG;

IVFFlat and HNSW are the most popular ones among all these algorithms. 
Recently, the implementation of ANN algorithms for Lucene, such as HNSW 
(Hierarchical Navigable Small World, LUCENE-9004), has made great progress. 
IVFFlat requires less memory and disk space than HNSW, and it supports both 
online and offline training.

I'm now trying to implement IVFFlat, and I will try my best to reuse the 
excellent work from LUCENE-9004.

  was:
Representation learning (RL) has been an established discipline in the machine 
learning space for decades but it draws tremendous attention lately with the 
emergence of deep learning. The central problem of RL is to determine an 
optimal representation of the input data. By embedding the data into a high 
dimensional vector, the vector retrieval (VR) method is then applied to search 
the relevant items.

With the rapid development of RL over the past few years, the technique has 
been used extensively in industry from online advertising to computer vision 
and speech recognition. There exist many open source implementations of VR 
algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
choices for potential users. However, the aforementioned implementations are 
all written in C++, with no plan to support a Java interface 
[[https://github.com/facebookresearch/faiss/issues/105]]. 

The algorithms for vector retrieval can be roughly classified into four 
categories,
 # Tree-based algorithms, such as KD-tree;
 # Hashing methods, such as LSH (Locality-Sensitive Hashing);
 # Product quantization algorithms, such as IVFFlat;
 # Graph-based algorithms, such as HNSW, SSG, NSG;

IVFFlat and HNSW are the most popular ones among all these algorithms. 
Recently, the implementation of ANN algorithms for Lucene, such as HNSW 
(Hierarchical Navigable Small World, LUCENE-9004), has made great progress. 
IVFFlat requires less memory and disk space than HNSW, and it supports both 
online and offline training.

I'm now working on the implementation of IVFFlat, and I will try my best to 
reuse the excellent work from LUCENE-9004.


> Introduce IVFFlat for ANN similarity search
> ---
>
> Key: LUCENE-9136
> URL: https://issues.apache.org/jira/browse/LUCENE-9136
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Xin-Chun Zhang
>Priority: Major
>
> Representation learning (RL) has been an established discipline in the 
> machine learning space for decades but it draws tremendous attention lately 
> with the emergence of deep learning. The central problem of RL is to 
> determine an optimal representation of the input data. By embedding the data 
> into a high dimensional vector, the vector retrieval (VR) method is then 
> applied to search the relevant items.
> With the rapid development of RL over the past few years, the technique has 
> been used extensively in industry from online advertising to computer vision 
> and speech recognition. There exist many open source implementations of VR 
> algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
> choices for potential users. However, the aforementioned implementations are 
> all written in C++, with no plan to support a Java interface 
> [[https://github.com/facebookresearch/faiss/issues/105]]. 
> The algorithms for vector retrieval can be roughly classified into four 
> categories,
>  # Tree-based algorithms, such as KD-tree;
>  # Hashing methods, 

[jira] [Updated] (LUCENE-9136) Introduce IVFFlat for ANN similarity search

2020-01-15 Thread Xin-Chun Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xin-Chun Zhang updated LUCENE-9136:
---
Description: 
Representation learning (RL) has been an established discipline in the machine 
learning space for decades but it draws tremendous attention lately with the 
emergence of deep learning. The central problem of RL is to determine an 
optimal representation of the input data. By embedding the data into a high 
dimensional vector, the vector retrieval (VR) method is then applied to search 
the relevant items.

With the rapid development of RL over the past few years, the technique has 
been used extensively in industry from online advertising to computer vision 
and speech recognition. There exist many open source implementations of VR 
algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
choices for potential users. However, the aforementioned implementations are 
all written in C++, with no plan to support a Java interface 
[[https://github.com/facebookresearch/faiss/issues/105]]. 

The algorithms for vector retrieval can be roughly classified into four 
categories,
 # Tree-based algorithms, such as KD-tree;
 # Hashing methods, such as LSH (Locality-Sensitive Hashing);
 # Product quantization algorithms, such as IVFFlat;
 # Graph-based algorithms, such as HNSW, SSG, NSG;

IVFFlat and HNSW are the most popular ones among all these algorithms. 
Recently, the implementation of ANN algorithms for Lucene, such as HNSW 
(Hierarchical Navigable Small World, LUCENE-9004), has made great progress. 
IVFFlat requires less memory and disk space than HNSW, and it supports both 
online and offline training.

I'm now working on the implementation of IVFFlat, and I will try my best to 
reuse the excellent work from LUCENE-9004.

  was:
Representation learning (RL) has been an established discipline in the machine 
learning space for decades but it draws tremendous attention lately with the 
emergence of deep learning. The central problem of RL is to determine an 
optimal representation of the input data. By embedding the data into a high 
dimensional vector, the vector retrieval (VR) method is then applied to search 
the relevant items.

With the rapid development of RL over the past few years, the technique has 
been used extensively in industry from online advertising to computer vision 
and speech recognition. There exist many open source implementations of VR 
algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
choices for potential users. However, the aforementioned implementations are 
all written in C++, with no plan to support a Java interface 
[[https://github.com/facebookresearch/faiss/issues/105]]. 

The algorithms for vector retrieval can be roughly classified into four 
categories,
 # Tree-based algorithms, such as KD-tree;
 # Hashing methods, such as LSH (Locality-Sensitive Hashing);
 # Product quantization algorithms, such as IVFFlat;
 # Graph-based algorithms, such as HNSW, SSG, NSG;

IVFFlat and HNSW are the most popular ones among all these algorithms. 
Recently, the implementation of ANN algorithms for Lucene, such as HNSW 
(Hierarchical Navigable Small World, LUCENE-9004), has made great progress. 
IVFFlat requires less memory and disk space than HNSW. I'm now working on the 
implementation of IVFFlat, and I will try my best to reuse the excellent work 
from LUCENE-9004.


> Introduce IVFFlat for ANN similarity search
> ---
>
> Key: LUCENE-9136
> URL: https://issues.apache.org/jira/browse/LUCENE-9136
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Xin-Chun Zhang
>Priority: Major
>
> Representation learning (RL) has been an established discipline in the 
> machine learning space for decades but it draws tremendous attention lately 
> with the emergence of deep learning. The central problem of RL is to 
> determine an optimal representation of the input data. By embedding the data 
> into a high dimensional vector, the vector retrieval (VR) method is then 
> applied to search the relevant items.
> With the rapid development of RL over the past few years, the technique has 
> been used extensively in industry from online advertising to computer vision 
> and speech recognition. There exist many open source implementations of VR 
> algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
> choices for potential users. However, the aforementioned implementations are 
> all written in C++, with no plan to support a Java interface 
> [[https://github.com/facebookresearch/faiss/issues/105]]. 
> The algorithms for vector retrieval can be roughly classified into four 
> categories,
>  # Tree-based algorithms, such as KD-tree;
>  # Hashing methods, such as LSH (Locality-Sensitive Hashing);
>  # 

[jira] [Updated] (LUCENE-9136) Introduce IVFFlat for ANN similarity search

2020-01-14 Thread Xin-Chun Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xin-Chun Zhang updated LUCENE-9136:
---
Description: 
Representation learning (RL) has been an established discipline in the machine 
learning space for decades but it draws tremendous attention lately with the 
emergence of deep learning. The central problem of RL is to determine an 
optimal representation of the input data. By embedding the data into a high 
dimensional vector, the vector retrieval (VR) method is then applied to search 
the relevant items.

With the rapid development of RL over the past few years, the technique has 
been used extensively in industry from online advertising to computer vision 
and speech recognition. There exist many open source implementations of VR 
algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
choices for potential users. However, the aforementioned implementations are 
all written in C++, with no plan to support a Java interface 
[[https://github.com/facebookresearch/faiss/issues/105]]. 

The algorithms for vector retrieval can be roughly classified into four 
categories,
 # Tree-based algorithms, such as KD-tree;
 # Hashing methods, such as LSH (Locality-Sensitive Hashing);
 # Product quantization algorithms, such as IVFFlat;
 # Graph-based algorithms, such as HNSW, SSG, NSG;

IVFFlat and HNSW are the most popular ones among all these algorithms. 
Recently, the implementation of ANN algorithms for Lucene, such as HNSW 
(Hierarchical Navigable Small World, LUCENE-9004), has made great progress. 
IVFFlat requires less memory and disk space than HNSW. I'm now working on the 
implementation of IVFFlat, and I will try my best to reuse the excellent work 
from LUCENE-9004.

  was:
Representation learning (RL) has been an established discipline in the machine 
learning space for decades but it draws tremendous attention lately with the 
emergence of deep learning. The central problem of RL is to determine an 
optimal representation of the input data. By embedding the data into a high 
dimensional vector, the vector retrieval (VR) method is then applied to search 
the relevant items.

With the rapid development of RL over the past few years, the technique has 
been used extensively in industry from online advertising to computer vision 
and speech recognition. There exist many open source implementations of VR 
algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
choices for potential users. However, the aforementioned implementations are 
all written in C++, with no plan to support a Java interface 
[[https://github.com/facebookresearch/faiss/issues/105]]. 

The algorithms for vector retrieval can be roughly classified into four 
categories,
 # Tree-based algorithms, such as KD-tree;
 # Hashing methods, such as LSH (Locality-Sensitive Hashing);
 # Product quantization algorithms, such as IVFFlat;
 # Graph-based algorithms, such as HNSW, SSG, NSG;

IVFFlat and HNSW are the most popular ones among all these algorithms. 
Recently, the implementation of ANN algorithms for Lucene, such as HNSW 
(Hierarchical Navigable Small World, LUCENE-9004), has made great progress. 
Compared with HNSW, IVFFlat requires less memory and disk space. Introducing 
IVFFlat to Lucene will provide one more option for interested users. I'm now 
working on the implementation of IVFFlat, and I will try my best to reuse the 
excellent work from LUCENE-9004.


> Introduce IVFFlat for ANN similarity search
> ---
>
> Key: LUCENE-9136
> URL: https://issues.apache.org/jira/browse/LUCENE-9136
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Xin-Chun Zhang
>Priority: Major
>
> Representation learning (RL) has been an established discipline in the 
> machine learning space for decades but it draws tremendous attention lately 
> with the emergence of deep learning. The central problem of RL is to 
> determine an optimal representation of the input data. By embedding the data 
> into a high dimensional vector, the vector retrieval (VR) method is then 
> applied to search the relevant items.
> With the rapid development of RL over the past few years, the technique has 
> been used extensively in industry from online advertising to computer vision 
> and speech recognition. There exist many open source implementations of VR 
> algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
> choices for potential users. However, the aforementioned implementations are 
> all written in C++, with no plan to support a Java interface 
> [[https://github.com/facebookresearch/faiss/issues/105]]. 
> The algorithms for vector retrieval can be roughly classified into four 
> categories,
>  # Tree-based algorithms, such as KD-tree;
>  # Hashing methods, such as LSH (Local 

[jira] [Updated] (LUCENE-9136) Introduce IVFFlat for ANN similarity search

2020-01-14 Thread Xin-Chun Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xin-Chun Zhang updated LUCENE-9136:
---
Description: 
Representation learning (RL) has been an established discipline in the machine 
learning space for decades but it draws tremendous attention lately with the 
emergence of deep learning. The central problem of RL is to determine an 
optimal representation of the input data. By embedding the data into a high 
dimensional vector, the vector retrieval (VR) method is then applied to search 
the relevant items.

With the rapid development of RL over the past few years, the technique has 
been used extensively in industry from online advertising to computer vision 
and speech recognition. There exist many open source implementations of VR 
algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
choices for potential users. However, the aforementioned implementations are 
all written in C++, with no plan to support a Java interface 
[[https://github.com/facebookresearch/faiss/issues/105]]. 

The algorithms for vector retrieval can be roughly classified into four 
categories,
 # Tree-based algorithms, such as KD-tree;
 # Hashing methods, such as LSH (Locality-Sensitive Hashing);
 # Product quantization algorithms, such as IVFFlat;
 # Graph-based algorithms, such as HNSW, SSG, NSG;

IVFFlat and HNSW are the most popular ones among all these algorithms. 
Recently, the implementation of ANN algorithms for Lucene, such as HNSW 
(Hierarchical Navigable Small World, LUCENE-9004), has made great progress. 
Compared with HNSW, IVFFlat requires less memory and disk space. Introducing 
IVFFlat to Lucene will provide one more option for interested users. I'm now 
working on the implementation of IVFFlat, and I will try my best to reuse the 
excellent work from LUCENE-9004.

  was:
Representation learning (RL) has been an established discipline in the machine 
learning space for decades but it draws tremendous attention lately with the 
emergence of deep learning. The central problem of RL is to determine an 
optimal representation of the input data. By embedding the data into a high 
dimensional vector, the vector retrieval (VR) method is then applied to search 
the relevant items.

With the rapid development of RL over the past few years, the technique has 
been used extensively in industry from online advertising to computer vision 
and speech recognition. There exist many open source implementations of VR 
algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
choices for potential users. However, the aforementioned implementations are 
all written in C++, with no plan to support a Java interface 
[[https://github.com/facebookresearch/faiss/issues/105]]. 

The algorithms for vector retrieval can be roughly classified into four 
categories,
 # Tree-based algorithms, such as KD-tree;
 # Hashing methods, such as LSH (Locality-Sensitive Hashing);
 # Product quantization algorithms, such as IVFFlat;
 # Graph-based algorithms, such as HNSW, SSG, NSG;

IVFFlat and HNSW are the most popular ones among all these algorithms. 
Recently, the implementation of ANN algorithms for Lucene, such as HNSW 
(Hierarchical Navigable Small World, LUCENE-9004), has made great progress. 
Compared with HNSW, IVFFlat requires less memory and disk space. Introducing 
IVFFlat to Lucene will provide one more option for interested users. I'm now 
working on the implementation of IVFFlat, and I will try my best to reuse the 
excellent work from LUCENE-9004.


> Introduce IVFFlat for ANN similarity search
> ---
>
> Key: LUCENE-9136
> URL: https://issues.apache.org/jira/browse/LUCENE-9136
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Xin-Chun Zhang
>Priority: Major
>
> Representation learning (RL) has been an established discipline in the 
> machine learning space for decades but it draws tremendous attention lately 
> with the emergence of deep learning. The central problem of RL is to 
> determine an optimal representation of the input data. By embedding the data 
> into a high dimensional vector, the vector retrieval (VR) method is then 
> applied to search the relevant items.
> With the rapid development of RL over the past few years, the technique has 
> been used extensively in industry from online advertising to computer vision 
> and speech recognition. There exist many open source implementations of VR 
> algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
> choices for potential users. However, the aforementioned implementations are 
> all written in C++, with no plan to support a Java interface 
> [[https://github.com/facebookresearch/faiss/issues/105]]. 
> The algorithms for vector retrieval can be roughly classified into four 
> 

[jira] [Updated] (LUCENE-9136) Introduce IVFFlat for ANN similarity search

2020-01-14 Thread Xin-Chun Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xin-Chun Zhang updated LUCENE-9136:
---
Description: 
Representation learning (RL) has been an established discipline in the machine 
learning space for decades but it draws tremendous attention lately with the 
emergence of deep learning. The central problem of RL is to determine an 
optimal representation of the input data. By embedding the data into a high 
dimensional vector, the vector retrieval (VR) method is then applied to search 
the relevant items.

With the rapid development of RL over the past few years, the technique has 
been used extensively in industry from online advertising to computer vision 
and speech recognition. There exist many open source implementations of VR 
algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
choices for potential users. However, the aforementioned implementations are 
all written in C++, with no plan to support a Java interface 
[[https://github.com/facebookresearch/faiss/issues/105]]. 

The algorithms for vector retrieval can be roughly classified into four 
categories,
 # Tree-based algorithms, such as KD-tree;
 # Hashing methods, such as LSH (Locality-Sensitive Hashing);
 # Product quantization algorithms, such as IVFFlat;
 # Graph-based algorithms, such as HNSW, SSG, NSG;

IVFFlat and HNSW are the most popular ones among all these algorithms. 
Recently, the implementation of ANN algorithms for Lucene, such as HNSW 
(Hierarchical Navigable Small World) approximate nearest-neighbor vector 
search, has made great progress. Compared with HNSW, IVFFlat requires less 
memory and disk space. Introducing IVFFlat to Lucene will provide one more 
option for interested users. I'm now working on the implementation of IVFFlat, 
and I will try my best to reuse the excellent work from LUCENE-9004.

  was:
Representation learning (RL) has been an established discipline in the machine 
learning space for decades but it draws tremendous attention lately with the 
emergence of deep learning. The central problem of RL is to determine an 
optimal representation of the input data. By embedding the data into a high 
dimensional vector, the vector retrieval (VR) method is then applied to search 
the relevant items.

With the rapid development of RL over the past few years, the technique has 
been used extensively in industry from online advertising to computer vision 
and speech recognition. There exist many open source implementations of VR 
algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
choices for potential users. However, the aforementioned implementations are 
all written in C++, and no plan for supporting Java interface 
[faiss|[https://github.com/facebookresearch/faiss/issues/105]]. 

The algorithms for vector retrieval can be roughly classified into four 
categories,
 # Tree-based algorithms, such as the kd-tree;
 # Hashing methods, such as LSH (locality-sensitive hashing);
 # Product quantization algorithms, such as IVFFlat;
 # Graph-based algorithms, such as HNSW, SSG, and NSG.

IVFFlat and HNSW are the most popular of these algorithms. Recently, the 
implementation of ANN algorithms for Lucene, such as HNSW (Hierarchical 
Navigable Small World) [approximate nearest vector 
search|https://issues.apache.org/jira/browse/LUCENE-9004], has made great 
progress. Compared with HNSW, IVFFlat requires less memory and disk space. 
Introducing IVFFlat to Lucene will provide one more option for interested 
users. I'm now working on the implementation of IVFFlat, and I will try my 
best to reuse the excellent work of LUCENE-9004.


> Introduce IVFFlat for ANN similarity search
> ---
>
> Key: LUCENE-9136
> URL: https://issues.apache.org/jira/browse/LUCENE-9136
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Xin-Chun Zhang
>Priority: Major
>
> Representation learning (RL) has been an established discipline in the 
> machine learning space for decades, but it has drawn tremendous attention 
> lately with the emergence of deep learning. The central problem of RL is to 
> determine an optimal representation of the input data. Once the data are 
> embedded into high-dimensional vectors, vector retrieval (VR) methods can be 
> applied to search for the relevant items.
> With the rapid development of RL over the past few years, the technique has 
> been used extensively in industry, from online advertising to computer vision 
> and speech recognition. There exist many open-source implementations of VR 
> algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
> choices for potential users. However, the aforementioned implementations are 
> all written in C++, with no plan to support a Java interface 
> [https://github.com/facebookresearch/faiss/issues/105]. 
> The algorithms for vector 

[jira] [Updated] (LUCENE-9136) Introduce IVFFlat for ANN similarity search

2020-01-14 Thread Xin-Chun Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xin-Chun Zhang updated LUCENE-9136:
---
Description: 
Representation learning (RL) has been an established discipline in the machine 
learning space for decades, but it has drawn tremendous attention lately with 
the emergence of deep learning. The central problem of RL is to determine an 
optimal representation of the input data. Once the data are embedded into 
high-dimensional vectors, vector retrieval (VR) methods can be applied to 
search for the relevant items.

With the rapid development of RL over the past few years, the technique has 
been used extensively in industry, from online advertising to computer vision 
and speech recognition. There exist many open-source implementations of VR 
algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
choices for potential users. However, the aforementioned implementations are 
all written in C++, with no plan to support a Java interface 
[https://github.com/facebookresearch/faiss/issues/105]. 

The algorithms for vector retrieval can be roughly classified into four 
categories,
 # Tree-based algorithms, such as the kd-tree;
 # Hashing methods, such as LSH (locality-sensitive hashing);
 # Product quantization algorithms, such as IVFFlat;
 # Graph-based algorithms, such as HNSW, SSG, and NSG.

IVFFlat and HNSW are the most popular of these algorithms. Recently, the 
implementation of ANN algorithms for Lucene, such as HNSW (Hierarchical 
Navigable Small World, LUCENE-9004), has made great progress. Compared with 
HNSW, IVFFlat requires less memory and disk space. Introducing IVFFlat to 
Lucene will provide one more option for interested users. I'm now working on 
the implementation of IVFFlat, and I will try my best to reuse the excellent 
work of LUCENE-9004.

  was:
Representation learning (RL) has been an established discipline in the machine 
learning space for decades, but it has drawn tremendous attention lately with 
the emergence of deep learning. The central problem of RL is to determine an 
optimal representation of the input data. Once the data are embedded into 
high-dimensional vectors, vector retrieval (VR) methods can be applied to 
search for the relevant items.

With the rapid development of RL over the past few years, the technique has 
been used extensively in industry, from online advertising to computer vision 
and speech recognition. There exist many open-source implementations of VR 
algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
choices for potential users. However, the aforementioned implementations are 
all written in C++, with no plan to support a Java interface 
[https://github.com/facebookresearch/faiss/issues/105]. 

The algorithms for vector retrieval can be roughly classified into four 
categories,
 # Tree-based algorithms, such as the kd-tree;
 # Hashing methods, such as LSH (locality-sensitive hashing);
 # Product quantization algorithms, such as IVFFlat;
 # Graph-based algorithms, such as HNSW, SSG, and NSG.

IVFFlat and HNSW are the most popular of these algorithms. Recently, the 
implementation of ANN algorithms for Lucene, such as HNSW (Hierarchical 
Navigable Small World) [approximate nearest vector 
search|https://issues.apache.org/jira/browse/LUCENE-9004], has made great 
progress. Compared with HNSW, IVFFlat requires less memory and disk space. 
Introducing IVFFlat to Lucene will provide one more option for interested 
users. I'm now working on the implementation of IVFFlat, and I will try my 
best to reuse the excellent work of LUCENE-9004.


> Introduce IVFFlat for ANN similarity search
> ---
>
> Key: LUCENE-9136
> URL: https://issues.apache.org/jira/browse/LUCENE-9136
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Xin-Chun Zhang
>Priority: Major
>
> Representation learning (RL) has been an established discipline in the 
> machine learning space for decades, but it has drawn tremendous attention 
> lately with the emergence of deep learning. The central problem of RL is to 
> determine an optimal representation of the input data. Once the data are 
> embedded into high-dimensional vectors, vector retrieval (VR) methods can be 
> applied to search for the relevant items.
> With the rapid development of RL over the past few years, the technique has 
> been used extensively in industry, from online advertising to computer vision 
> and speech recognition. There exist many open-source implementations of VR 
> algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
> choices for potential users. However, the aforementioned implementations are 
> all written in C++, with no plan to support a Java interface 
> [https://github.com/facebookresearch/faiss/issues/105]. 
> The algorithms for vector retrieval can be roughly classified 

[jira] [Updated] (LUCENE-9136) Introduce IVFFlat for ANN similarity search

2020-01-14 Thread Xin-Chun Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xin-Chun Zhang updated LUCENE-9136:
---
Description: 
Representation learning (RL) has been an established discipline in the machine 
learning space for decades, but it has drawn tremendous attention lately with 
the emergence of deep learning. The central problem of RL is to determine an 
optimal representation of the input data. Once the data are embedded into 
high-dimensional vectors, vector retrieval (VR) methods can be applied to 
search for the relevant items.

With the rapid development of RL over the past few years, the technique has 
been used extensively in industry, from online advertising to computer vision 
and speech recognition. There exist many open-source implementations of VR 
algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
choices for potential users. However, the aforementioned implementations are 
all written in C++, with no plan to support a Java interface 
[faiss|https://github.com/facebookresearch/faiss/issues/105]. 

The algorithms for vector retrieval can be roughly classified into four 
categories,
 # Tree-based algorithms, such as the kd-tree;
 # Hashing methods, such as LSH (locality-sensitive hashing);
 # Product quantization algorithms, such as IVFFlat;
 # Graph-based algorithms, such as HNSW, SSG, and NSG.

IVFFlat and HNSW are the most popular of these algorithms. Recently, the 
implementation of ANN algorithms for Lucene, such as HNSW (Hierarchical 
Navigable Small World) [approximate nearest vector 
search|https://issues.apache.org/jira/browse/LUCENE-9004], has made great 
progress. Compared with HNSW, IVFFlat requires less memory and disk space. 
Introducing IVFFlat to Lucene will provide one more option for interested 
users. I'm now working on the implementation of IVFFlat, and I will try my 
best to reuse the excellent work of LUCENE-9004.

  was:
Representation learning (RL) has been an established discipline in the machine 
learning space for decades, but it has drawn tremendous attention lately with 
the emergence of deep learning. The central problem of RL is to determine an 
optimal representation of the input data. Once the data are embedded into 
high-dimensional vectors, vector retrieval (VR) methods can be applied to 
search for the relevant items.

With the rapid development of RL over the past few years, the technique has 
been used extensively in industry, from online advertising to computer vision 
and speech recognition. There exist many open-source implementations of VR 
algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
choices for potential users. However, the aforementioned implementations are 
all written in C++, with no plan to support a Java interface 
[faiss|https://github.com/facebookresearch/faiss/issues/105]. 

The algorithms for vector retrieval can be roughly classified into four 
categories,
 # Tree-based algorithms, such as the kd-tree;
 # Hashing methods, such as LSH (locality-sensitive hashing);
 # Product quantization algorithms, such as IVFFlat;
 # Graph-based algorithms, such as HNSW, SSG, and NSG.

IVFFlat and HNSW are the most popular of these algorithms. Recently, the 
implementation of ANN algorithms for Lucene, such as HNSW (Hierarchical 
Navigable Small World) [approximate nearest vector 
search|https://issues.apache.org/jira/browse/LUCENE-9004], has made great 
progress. I'm now working on the implementation of IVFFlat, and I will try my 
best to reuse the excellent work of LUCENE-9004.


> Introduce IVFFlat for ANN similarity search
> ---
>
> Key: LUCENE-9136
> URL: https://issues.apache.org/jira/browse/LUCENE-9136
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Xin-Chun Zhang
>Priority: Major
>
> Representation learning (RL) has been an established discipline in the 
> machine learning space for decades, but it has drawn tremendous attention 
> lately with the emergence of deep learning. The central problem of RL is to 
> determine an optimal representation of the input data. Once the data are 
> embedded into high-dimensional vectors, vector retrieval (VR) methods can be 
> applied to search for the relevant items.
> With the rapid development of RL over the past few years, the technique has 
> been used extensively in industry, from online advertising to computer vision 
> and speech recognition. There exist many open-source implementations of VR 
> algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
> choices for potential users. However, the aforementioned implementations are 
> all written in C++, with no plan to support a Java interface 
> [faiss|https://github.com/facebookresearch/faiss/issues/105]. 
> The algorithms for vector retrieval can be roughly classified into four 
> categories,
>  # Tree-based algorithms, such 

[jira] [Updated] (LUCENE-9136) Introduce IVFFlat for ANN similarity search

2020-01-14 Thread Xin-Chun Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xin-Chun Zhang updated LUCENE-9136:
---
Description: 
Representation learning (RL) has been an established discipline in the machine 
learning space for decades, but it has drawn tremendous attention lately with 
the emergence of deep learning. The central problem of RL is to determine an 
optimal representation of the input data. Once the data are embedded into 
high-dimensional vectors, vector retrieval (VR) methods can be applied to 
search for the relevant items.

With the rapid development of RL over the past few years, the technique has 
been used extensively in industry, from online advertising to computer vision 
and speech recognition. There exist many open-source implementations of VR 
algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
choices for potential users. However, the aforementioned implementations are 
all written in C++, with no plan to support a Java interface 
[faiss|https://github.com/facebookresearch/faiss/issues/105]. 

The algorithms for vector retrieval can be roughly classified into four 
categories,
 # Tree-based algorithms, such as the kd-tree;
 # Hashing methods, such as LSH (locality-sensitive hashing);
 # Product quantization algorithms, such as IVFFlat;
 # Graph-based algorithms, such as HNSW, SSG, and NSG.

IVFFlat and HNSW are the most popular of these algorithms. Recently, the 
implementation of ANN algorithms for Lucene, such as HNSW (Hierarchical 
Navigable Small World) [approximate nearest vector 
search|https://issues.apache.org/jira/browse/LUCENE-9004], has made great 
progress. I'm now working on the implementation of IVFFlat, and I will try my 
best to reuse the excellent work of LUCENE-9004.

> Introduce IVFFlat for ANN similarity search
> ---
>
> Key: LUCENE-9136
> URL: https://issues.apache.org/jira/browse/LUCENE-9136
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Xin-Chun Zhang
>Priority: Major
>
> Representation learning (RL) has been an established discipline in the 
> machine learning space for decades, but it has drawn tremendous attention 
> lately with the emergence of deep learning. The central problem of RL is to 
> determine an optimal representation of the input data. Once the data are 
> embedded into high-dimensional vectors, vector retrieval (VR) methods can be 
> applied to search for the relevant items.
> With the rapid development of RL over the past few years, the technique has 
> been used extensively in industry, from online advertising to computer vision 
> and speech recognition. There exist many open-source implementations of VR 
> algorithms, such as Facebook's FAISS and Microsoft's SPTAG, providing various 
> choices for potential users. However, the aforementioned implementations are 
> all written in C++, with no plan to support a Java interface 
> [faiss|https://github.com/facebookresearch/faiss/issues/105]. 
> The algorithms for vector retrieval can be roughly classified into four 
> categories,
>  # Tree-based algorithms, such as the kd-tree;
>  # Hashing methods, such as LSH (locality-sensitive hashing);
>  # Product quantization algorithms, such as IVFFlat;
>  # Graph-based algorithms, such as HNSW, SSG, and NSG.
> IVFFlat and HNSW are the most popular of these algorithms. Recently, the 
> implementation of ANN algorithms for Lucene, such as HNSW (Hierarchical 
> Navigable Small World) [approximate nearest vector 
> search|https://issues.apache.org/jira/browse/LUCENE-9004], has made great 
> progress. I'm now working on the implementation of IVFFlat, and I will try 
> my best to reuse the excellent work of LUCENE-9004.






[jira] [Updated] (LUCENE-9136) Introduce IVFFlat for ANN similarity search

2020-01-14 Thread Xin-Chun Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xin-Chun Zhang updated LUCENE-9136:
---
Summary: Introduce IVFFlat for ANN similarity search  (was: Add delete 
action for HNSW and fix merger when segments contain deleted vectors)

> Introduce IVFFlat for ANN similarity search
> ---
>
> Key: LUCENE-9136
> URL: https://issues.apache.org/jira/browse/LUCENE-9136
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Xin-Chun Zhang
>Priority: Major
>
> This issue is 






[jira] [Updated] (LUCENE-9136) Introduce IVFFlat for ANN similarity search

2020-01-14 Thread Xin-Chun Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xin-Chun Zhang updated LUCENE-9136:
---
Description: (was: This issue is )
 Issue Type: New Feature  (was: Bug)

> Introduce IVFFlat for ANN similarity search
> ---
>
> Key: LUCENE-9136
> URL: https://issues.apache.org/jira/browse/LUCENE-9136
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Xin-Chun Zhang
>Priority: Major
>







[jira] [Updated] (LUCENE-9136) Add delete action for HNSW and fix merger when segments contain deleted vectors

2020-01-14 Thread Xin-Chun Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xin-Chun Zhang updated LUCENE-9136:
---
Description: This issue is 

> Add delete action for HNSW and fix merger when segments contain deleted 
> vectors
> ---
>
> Key: LUCENE-9136
> URL: https://issues.apache.org/jira/browse/LUCENE-9136
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Xin-Chun Zhang
>Priority: Major
>
> This issue is 






[jira] [Created] (LUCENE-9136) Add delete action for HNSW and fix merger when segments contain deleted vectors

2020-01-14 Thread Xin-Chun Zhang (Jira)
Xin-Chun Zhang created LUCENE-9136:
--

 Summary: Add delete action for HNSW and fix merger when segments 
contain deleted vectors
 Key: LUCENE-9136
 URL: https://issues.apache.org/jira/browse/LUCENE-9136
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Xin-Chun Zhang





