Infopost | 2023.10.08

[Photo: grainy film photo of crampons on a Mt Shasta glacier climb]

A brief recap:
  1. I added a blog widget that automatically links similar posts on my site.
  2. I said, "gee, it'd be neat to link to the rest of the web this way" and submitted a feature request to the ether.
  3. I ingested a bunch of RSS and listed blogs with post titles/descriptions similar to my tag list.
  4. Since tags, titles, and descriptions aren't much data, I started estimating similarity based on post text. <-- You are here.
Tokenizing kilroy (design/code)

It's all chronicled by my meta tag, but the tldr is that I have my own static site generator with markup that I use to write posts. My Element abstract base type is inherited by text sections, images, galleries, tables, etc. So it was pretty straightforward to add getPlaintext() to this type and let the child classes decide what would go into a tokenization.
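As a sketch of that pattern (the subclass names and fields here are hypothetical, only Element and getPlaintext() come from the actual codebase):

```java
// Abstract base: every markup element can contribute plaintext.
abstract class Element {
    // Child classes decide what, if anything, goes into tokenization.
    abstract String getPlaintext();
}

// A text section contributes its body verbatim.
class TextSection extends Element {
    private final String body;
    TextSection(String body) { this.body = body; }
    @Override String getPlaintext() { return body; }
}

// An image contributes only its caption; the pixels have no tokens.
class Image extends Element {
    private final String caption;
    Image(String caption) { this.caption = caption; }
    @Override String getPlaintext() { return caption; }
}
```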

Hitting the markup was considerably easier than trying to tokenize the output html. And so each post object now has:

class Post {

   Set<String> getTokens() {
      Set<String> tokens = new HashSet<>();
      for (Element element : postElements) {
         tokens.addAll(tokenize(element.getPlaintext()));
      }
      return tokens;
   }
}
It's easy to find generic lists of stopwords ('a', 'the', 'therefore'...), but I tailored mine by running getTokens() on every post and scrutinizing the top of the histogram.
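Building that histogram is a one-liner with streams (a sketch; the class and method names are mine, not kilroy's):

```java
import java.util.*;
import java.util.stream.*;

class TokenHistogram {
    // Count how many posts each token appears in, highest count first.
    // Scrutinizing the head of this list reveals site-specific stopwords.
    static List<Map.Entry<String, Long>> topTokens(List<Set<String>> perPostTokens) {
        Map<String, Long> counts = perPostTokens.stream()
            .flatMap(Set::stream)
            .collect(Collectors.groupingBy(t -> t, Collectors.counting()));
        return counts.entrySet().stream()
            .sorted(Map.Entry.<String, Long>comparingByValue().reversed())
            .collect(Collectors.toList());
    }
}
```

Any high-frequency word with no topical signal goes into the tailored stopword list.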

So my earlier post recommender code became:

class Post {

   double getSimilarity(Post a, Post b) {
      // return computeIntersection(a.getTags(), b.getTags());
      return computeIntersection(a.getTokens(), b.getTokens());
   }
}
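The body of computeIntersection() isn't shown above; a plausible reading is a Jaccard-style overlap, intersection size over union size (my assumption, not the confirmed implementation):

```java
import java.util.HashSet;
import java.util.Set;

class Similarity {
    // |A ∩ B| / |A ∪ B| — 0.0 for disjoint sets, 1.0 for identical ones.
    static double computeIntersection(Set<String> a, Set<String> b) {
        if (a.isEmpty() && b.isEmpty()) return 0.0;
        Set<String> common = new HashSet<>(a);
        common.retainAll(b);
        Set<String> union = new HashSet<>(a);
        union.addAll(b);
        return (double) common.size() / union.size();
    }
}
```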
Words and n-grams

functions | visual | size | machine learning | layers | neural | rgb |
maxpool | transpose | input | white | helpful | used for | dimensionality |
generation | images | sounds | kernels | pixel | output | convolutional |
machine | convolution | the output | finding | needed | learning |
examples | upscaling | edges

Since 'cowboy' means one thing, 'bebop' means another, and 'cowboy bebop' means yet another, I used words (1-grams), 2-grams, and 3-grams for the tokenize() operations mentioned above. Dumping the token intersections (above and below) shows reasonable results: most of the words have significance and can indicate two posts are alike.

that would | pretend | happen | gme | were just | rounds | that they |
what's the | plot | ticker | injection | gift | trading | product | only
the | posts | plus | position | the plus | led | stats | the minus | old |
night | minus | believe | shares | that would have | for that | imagine |
page | interest | purchase | money | price | stuck | options | favorite |
requests | web | premiums | hits | the gme | certain | better when |
holding | did the
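Generating those 1-, 2-, and 3-grams from a plaintext string is a short nested loop (a sketch of tokenize(); the real internals are my assumption):

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

class NGrams {
    // Emit every 1-gram, 2-gram, and 3-gram of consecutive words,
    // so 'cowboy', 'bebop', and 'cowboy bebop' are all distinct tokens.
    static Set<String> tokenize(String text) {
        List<String> words = List.of(text.toLowerCase().split("\\s+"));
        Set<String> grams = new HashSet<>();
        for (int n = 1; n <= 3; n++) {
            for (int i = 0; i + n <= words.size(); i++) {
                grams.add(String.join(" ", words.subList(i, i + n)));
            }
        }
        return grams;
    }
}
```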
But what about tags?

Indeed, tags are really good data that shouldn't be ignored just because I have a more expansive dataset. While the tag dictionary is only a few hundred words long, those words were chosen specifically to label the subject matter of each post.

Plus the code is already there. So, what, like weight them 50/50?

I dumped some example data; here's the similarity calculator reporting new high scores as it iterates through all the other posts:

Tag similarity   Token similarity
--------------   ----------------
0.057            0.021
0.285            0.022
0.357            0.020
0.400            0.028
0.428            0.013
0.461            0.011
0.529            0.027
0.571            0.047
0.689            0.017

Happily, the values diverge quite a bit, so this isn't just a more computationally intensive way to do what I was doing with tags. But since the tag similarity values tend to be an order of magnitude greater than the token ones, naively adding the metrics would give the tags a lot of weight. With a little extra code, I normalized each measurement (by its own maximum) and then weighted them equally.
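That extra code amounts to dividing each metric by its maximum over the candidate set and averaging the results (a sketch with hypothetical names):

```java
class CombinedScore {
    // Normalize each metric by its maximum across all candidate posts,
    // then weight the two normalized values equally (50/50).
    static double combine(double tagSim, double maxTagSim,
                          double tokenSim, double maxTokenSim) {
        double tagNorm = maxTagSim > 0 ? tagSim / maxTagSim : 0.0;
        double tokenNorm = maxTokenSim > 0 ? tokenSim / maxTokenSim : 0.0;
        return 0.5 * tagNorm + 0.5 * tokenNorm;
    }
}
```

With both metrics mapped onto [0, 1], neither one dominates just because its raw scale is bigger.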
Looking outward

[Image: rendered celestial surface]

Finding similar posts on other websites is conceptually the same as this, but since it requires a lot of automated page visits, it's a post for another time. But I still have my rss/xml data that I previously compared to kilroy tags. Some of the token-based results:

Latest post   Similarity (pct)   Similarity (nom)
-----------   ----------------   ----------------
              0.113              817
              0.108              369
              0.107              112
              0.106              236
              0.099              453
              0.098              499
              0.097              370
              0.097              144

Clicking through them, it's not exactly like looking in a mirror, but then a lot of the matched tokens looked like:

game | meme | question | learn | link | played | users | twitter |
computer | security | legal | action | playing | boot | results | keeps

While 'game' and 'meme' aren't bland enough to qualify as stopwords, there may be more significance to matches on uncommon tokens/n-grams. That may be something to add to the backlog; more immediately, I need to grab the actual posts rather than relying on the descriptions.
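One backlog-friendly way to make uncommon tokens count more is an IDF-style weight: a token that appears in few feeds contributes more to the overlap than one that appears everywhere. This is a sketch of the idea, not anything in the current code, and all the names are mine:

```java
import java.util.Map;
import java.util.Set;

class RarityWeight {
    // log(N / df): tokens seen in nearly every document ('game', 'meme')
    // approach zero weight; tokens seen in a few documents weigh the most.
    static double idf(String token, Map<String, Integer> docFreq, int totalDocs) {
        int df = docFreq.getOrDefault(token, 1);
        return Math.log((double) totalDocs / df);
    }

    // Weighted overlap: sum IDF over shared tokens instead of counting them.
    static double weightedOverlap(Set<String> a, Set<String> b,
                                  Map<String, Integer> docFreq, int totalDocs) {
        double sum = 0.0;
        for (String t : a) {
            if (b.contains(t)) sum += idf(t, docFreq, totalDocs);
        }
        return sum;
    }
}
```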

For now, here's a fun cheat sheet from the list:

[Image: Julia Evans' Linux networking tools poster]


