Obsidian is a lovely text editor / note app with an emphasis on internal links.

People who’re into it are usually drawn to connective thinking: transforming thoughts in text (a one-dimensional medium) into a multi-dimensional network via hyperlinks. The hope is to surface forgotten or novel connections buried between them. Obsidian is a pretty renderer of text files tailored to this purpose.

It’s not the most intuitive tool, not because of the tool itself, but because of the nebulous question of how, and for what, to use it. Here are my two cents on what the app is good for.

If Google and LLMs are knowledge bases built at the scale of the internet, Obsidian is one built at the personal level.

When starting off with these kinds of personal knowledge apps, it’s easy to become overly concerned with a problem of search: how to organize things so you can easily find them later, in your own context of use. A common tendency is then to re-create the topology of information as you see it in your mind: with folders, tags, links, visual dashboards, rules for creating information (e.g. maps of content, naming conventions), and various note taxonomies.

Eventually, one can easily fall into re-building Yahoo, which even at a personal scale can quickly get out of hand. In general, search is a hard problem because there are millions of ways you can carve up units of information semantically, but which results “fit” your future context of use¹, and how best to fetch them, are not predictable at the time you organize.

I found The Bitter Lesson by Rich Sutton super enlightening, though it was written in the different context of ML research:

The actual contents of minds are tremendously, irredeemably complex; we should stop trying to find simple ways to think about the contents of minds, such as simple ways to think about space, objects, multiple agents, or symmetries. All these are part of the arbitrary, intrinsically-complex, outside world. They are not what should be built in, as their complexity is endless; instead we should build in only the meta-methods that can find and capture this arbitrary complexity. … We want AI agents that can discover like we can, not which contain what we have discovered. Building in our discoveries only makes it harder to see how the discovering process can be done.

Many of the lessons apply to building a knowledge network:

  1. Focus on the means of learning, or the “meta-methods”: writing, doodling, talking about it
  2. Trust the process of learning to re-wire your brain, rather than trying to replicate it via technical means; the latter often constrains the former

I think learning is equivalent to a process of search: we don’t know how everything falls into place, or even what we’re looking for in the face of unknowns, but we find it nevertheless along the way.

My preferred way of learning is writing things down to test my understanding and remix it with prior knowledge. In this light, backlinks in Obsidian are just “saved” searches and little checkpoints in my head, organically grown out of curiosity.

Personal convention

There are two ways I see internal links:

  1. Implicit / contextual association
    • Natural concept linking within the course of writing, with the context surfable in the backlink UI
  2. Explicit association
    • Examples: Categories in Steph Ango’s vault, up in Nick Milo’s MOCs, See also for Jacky
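As a sketch of the distinction (note names here are invented for illustration, not from any particular vault), the two styles might look like this in an Obsidian note:

```markdown
Implicit association, made in the flow of writing:

    The [[Bitter Lesson]] suggests trusting the [[process of learning]]
    over hand-built structure.

Explicit association, declared in a dedicated property or section:

    ---
    categories: "[[Personal knowledge management]]"
    ---

    ...note body...

    See also: [[Zettelkasten]], [[Map of Content]]
```

In both cases the `[[wikilink]]` creates the same backlink; the difference is whether the connection carries surrounding prose as context or stands alone as a declared relation.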

I prefer the former over the latter.

Footnotes

  1. Similarly, there are millions of ways to say something in writing, but usually only a few that fit the context or your style