Similarity Queries

A Gensim tutorial on querying a corpus for documents similar to a given query document, using latent semantic indexing (LSI).

In [1]:
# enable INFO-level logging so gensim reports what it is doing in the cells below
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
In [2]:
# Create the corpus: nine short documents, tokenized and turned into bag-of-words vectors

from collections import defaultdict
from gensim import corpora

documents = [
    "Human machine interface for lab abc computer applications",
    "A survey of user opinion of computer system response time",
    "The EPS user interface management system",
    "System and human system engineering testing of EPS",
    "Relation of user perceived response time to error measurement",
    "The generation of random binary unordered trees",
    "The intersection graph of paths in trees",
    "Graph minors IV Widths of trees and well quasi ordering",
    "Graph minors A survey",
]

# remove common words and tokenize

stoplist = set('for a of the and to in'.split())
texts = [
    [word for word in document.lower().split() if word not in stoplist]
    for document in documents
]

# remove words that appear only once
frequency = defaultdict(int)
for text in texts:
    for token in text:
        frequency[token] += 1

texts = [
    [token for token in text if frequency[token] > 1]
    for text in texts
]

dictionary = corpora.Dictionary(texts)  # token -> integer id mapping
corpus = [dictionary.doc2bow(text) for text in texts]  # sparse (id, count) vectors
2022-05-30 11:17:30,012 : INFO : adding document #0 to Dictionary<0 unique tokens: []>
2022-05-30 11:17:30,013 : INFO : built Dictionary<12 unique tokens: ['computer', 'human', 'interface', 'response', 'survey']...> from 9 documents (total 29 corpus positions)
2022-05-30 11:17:30,014 : INFO : Dictionary lifecycle event {'msg': "built Dictionary<12 unique tokens: ['computer', 'human', 'interface', 'response', 'survey']...> from 9 documents (total 29 corpus positions)", 'datetime': '2022-05-30T11:17:30.014843', 'gensim': '4.2.0', 'python': '3.10.2 (tags/v3.10.2:a58ebcc, Jan 17 2022, 14:12:15) [MSC v.1929 64 bit (AMD64)]', 'platform': 'Windows-10-10.0.22000-SP0', 'event': 'created'}
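The dictionary maps each of the 12 surviving tokens to an integer id, and doc2bow turns a tokenized document into a sparse list of (token_id, count) pairs. A quick, optional check of both (a sketch, not part of the original run):

In [ ]:
# inspect the token -> id mapping and the bag-of-words vector of the first document
print(dictionary.token2id)
print(corpus[0])  # sparse (token_id, count) pairs for the first document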
In [3]:
from gensim import models

# train a 2-dimensional LSI (latent semantic indexing) model on the bag-of-words corpus
lsi = models.LsiModel(corpus, id2word=dictionary, num_topics=2)
2022-05-30 11:18:04,795 : INFO : using serial LSI version on this node
2022-05-30 11:18:04,796 : INFO : updating model with new documents
2022-05-30 11:18:04,797 : INFO : preparing a new chunk of documents
2022-05-30 11:18:04,798 : INFO : using 100 extra samples and 2 power iterations
2022-05-30 11:18:04,798 : INFO : 1st phase: constructing (12, 102) action matrix
2022-05-30 11:18:04,800 : INFO : orthonormalizing (12, 102) action matrix
2022-05-30 11:18:04,803 : INFO : 2nd phase: running dense svd on (12, 9) matrix
2022-05-30 11:18:04,803 : INFO : computing the final decomposition
2022-05-30 11:18:04,804 : INFO : keeping 2 factors (discarding 43.156% of energy spectrum)
2022-05-30 11:18:04,804 : INFO : processed documents up to #9
2022-05-30 11:18:04,805 : INFO : topic #0(3.341): -0.644*"system" + -0.404*"user" + -0.301*"eps" + -0.265*"response" + -0.265*"time" + -0.240*"computer" + -0.221*"human" + -0.206*"survey" + -0.198*"interface" + -0.036*"graph"
2022-05-30 11:18:04,806 : INFO : topic #1(2.542): 0.623*"graph" + 0.490*"trees" + 0.451*"minors" + 0.274*"survey" + -0.167*"system" + -0.141*"eps" + -0.113*"human" + 0.107*"response" + 0.107*"time" + -0.072*"interface"
2022-05-30 11:18:04,806 : INFO : LsiModel lifecycle event {'msg': 'trained LsiModel<num_terms=12, num_topics=2, decay=1.0, chunksize=20000> in 0.01s', 'datetime': '2022-05-30T11:18:04.806885', 'gensim': '4.2.0', 'python': '3.10.2 (tags/v3.10.2:a58ebcc, Jan 17 2022, 14:12:15) [MSC v.1929 64 bit (AMD64)]', 'platform': 'Windows-10-10.0.22000-SP0', 'event': 'created'}
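The log above already lists the two latent topics found by the decomposition. They can also be read off the trained model directly; a minimal sketch:

In [ ]:
# print each LSI topic as a weighted combination of terms
lsi.print_topics(2)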
In [4]:
# Prepare the query

doc = "Human computer interaction"
vec_bow = dictionary.doc2bow(doc.lower().split())
vec_lsi = lsi[vec_bow] # convert the query to LSI space
print(vec_lsi)
[(0, -0.461821004532716), (1, -0.07002766527900031)]
In [5]:
from gensim import similarities

# index the LSI-transformed corpus so it can be queried by cosine similarity (kept in RAM)
index = similarities.MatrixSimilarity(lsi[corpus])
2022-05-30 11:33:41,625 : WARNING : scanning corpus to determine the number of features (consider setting `num_features` explicitly)
2022-05-30 11:33:41,626 : INFO : creating matrix with 9 documents and 2 features
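The warning can be avoided by passing num_features explicitly (here 2, the number of LSI topics). Also note that MatrixSimilarity keeps the entire index in RAM; for corpora too large for memory, gensim provides the disk-sharded similarities.Similarity class. A hedged sketch, with an example output path:

In [ ]:
import os, tempfile

# explicit num_features avoids the corpus scan warned about above
index = similarities.MatrixSimilarity(lsi[corpus], num_features=2)

# disk-backed alternative for large corpora; shards are written under the given prefix
output_prefix = os.path.join(tempfile.gettempdir(), 'deerwester_index')  # example path
disk_index = similarities.Similarity(output_prefix, lsi[corpus], num_features=2)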
In [6]:
# query the index: cosine similarity of vec_lsi against every document in the corpus
sims = index[vec_lsi]
print(list(enumerate(sims)))
[(0, 0.998093), (1, 0.93748635), (2, 0.9984453), (3, 0.9865886), (4, 0.90755945), (5, -0.12416792), (6, -0.10639259), (7, -0.09879464), (8, 0.050041765)]
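These raw scores are cosine similarities in the 2-dimensional LSI space, so they lie between -1 and 1. If only the best few matches matter, the index can be built with num_best, and a query then returns just the top hits as (document_index, similarity) pairs; a small sketch:

In [ ]:
# keep only the 3 highest-scoring documents per query
top3_index = similarities.MatrixSimilarity(lsi[corpus], num_features=2, num_best=3)
print(top3_index[vec_lsi])  # up to three (document_index, similarity) pairs, best first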
In [7]:
# sort the (document_position, similarity) pairs from most to least similar
sims = sorted(enumerate(sims), key=lambda item: -item[1])
for doc_position, doc_score in sims:
    print(doc_score, documents[doc_position])
0.9984453 The EPS user interface management system
0.998093 Human machine interface for lab abc computer applications
0.9865886 System and human system engineering testing of EPS
0.93748635 A survey of user opinion of computer system response time
0.90755945 Relation of user perceived response time to error measurement
0.050041765 Graph minors A survey
-0.09879464 Graph minors IV Widths of trees and well quasi ordering
-0.10639259 The intersection graph of paths in trees
-0.12416792 The generation of random binary unordered trees
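Building the index is the expensive step, so in practice it is worth saving it to disk and loading it back for later queries. A minimal sketch, using a temporary file as an example path:

In [ ]:
import os, tempfile

index_file = os.path.join(tempfile.gettempdir(), 'deerwester.index')  # example path
index.save(index_file)
index = similarities.MatrixSimilarity.load(index_file)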
In [ ]: