There is a screenshot on my computer that I return to, periodically, when I need to remind myself why I built this.
It is a Notion table. At the time I took this screenshot, it had more than 340 entries. A meaningful number of them I have never opened.

The entries are not random. They are titles from arXiv preprints, policy documents from Oxford and MIT, blog posts from Hugging Face, papers from AI governance institutes across three continents. "Machine Unlearning Doesn't Do What You Think." "The State of AI Ethics Vol 7." "Presumed Cultural Identity: How Names Shape AI Outputs." Each one represents something I encountered, in a comment thread or a newsletter or before a meeting, that I genuinely believed I would return to.
I did not return to most of them.
The pattern
The sequence was always the same. A paper would surface somewhere: a LinkedIn comment from a researcher I respected, a thread on a topic I was actively working on, a forwarded email from a colleague. I would be on my phone, between things, not in reading mode. I would save it. The link would enter the table. I would feel, briefly, that I had done something useful.
Weeks later, in an actual conversation about AI governance or evaluation methodology, I would know with some certainty that I had saved something directly relevant. I could not locate it. I could not always remember the title, or whether I had saved it to Notion or Pocket or one of the browser bookmark folders I had also, at various points, been using for the same purpose.
This was not a discipline problem. The links were genuinely numerous, genuinely important, and arrived faster than any person doing substantive work could read them. The field accelerated. The volume of material I felt obligated to track grew accordingly. The time available to read it did not.
What I tried
My first serious attempt at a solution was labelling. I built categories in Notion: AI Safety, AI Policy, Evaluation, Red Teaming, Governance. I added subtags. I created filtered views for different projects.
Labelling is a cost you pay when you save something, against a benefit you hope to collect later when you need it. The benefit rarely came. By the time I needed a link, the way I had organised it when I saved it no longer matched how I was thinking about it now. I had used "AI" sometimes and "machine-learning" other times. I had both a "to-read" tag and a "read-later" tag, created on different days when I had forgotten the other existed. The system I built on a Tuesday did not reflect how I was thinking on a Thursday.
The second thing I tried was saving less. That did not work either. The problem was not volume at the point of saving. It was that I could not predict, when I saved something, which things would become important later. A paper on AI procurement for healthcare systems seemed like a tangential read in January. In March, when I was directly advising on a procurement process, I needed it and could not find it.
What I actually needed
What I needed was not a better labelling system. Labelling assumes you know, at the point of saving, how you will later want to find something. That assumption holds less and less the longer the gap between saving and needing.
What I needed was something that could meet me where I actually arrived months later: with partial memory, a vague sense that something relevant existed somewhere, not quite sure what I had called it or where I had put it. Something that knew what the content actually said, not just what I had called it.
That is what Link In Comments does. Every link is read and summarised automatically. The actual content of the piece, not just the title, gets turned into a short summary and a headline. Topics come from the piece itself, not from whatever label I applied at save time. The source, the date, and the reading time are recorded.
When I search now for "AI governance red teaming methodology," an article I saved eight months ago from airisk.mit.edu comes up, because the summary contains those terms, regardless of what tag I used or whether I tagged it at all. I am searching what things actually say, not my own past decisions about how to file them.
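The difference between the two approaches is easy to show in miniature. This is a hypothetical sketch, not the actual Link In Comments implementation: a saved link becomes a record whose summary and topics are derived from the piece itself, and search matches against that text rather than any label applied at save time. All names here (`LinkRecord`, `search`) are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class LinkRecord:
    url: str
    headline: str
    summary: str               # generated from the article body, not the title
    topics: list = field(default_factory=list)  # derived from the content
    source: str = ""
    reading_minutes: int = 0

def search(records, query):
    """Match every query term against what the piece actually says
    (summary, headline, topics), ignoring save-time labels entirely."""
    terms = query.lower().split()
    hits = []
    for r in records:
        haystack = " ".join([r.headline, r.summary, *r.topics]).lower()
        if all(term in haystack for term in terms):
            hits.append(r)
    return hits
```

Under this sketch, a record whose generated summary mentions "red teaming" and "governance" surfaces for the query "governance red teaming" even if it was saved with no tag at all, which is the behaviour described above.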
What the table taught me
The Notion table in that screenshot still exists. I have not deleted it. It is an honest record of two years of trying to solve this problem in ways that did not work, and it conveys what the problem actually feels like better than anything I could write.
The issue was never that I did not care about the material. The issue was that caring did not help me find things when I needed them. The AI safety field produces more reading worth tracking in a week than I can get through in a month. Keeping up with it in any practical way required something that actually worked, not something that kept growing past the point of usefulness.
You may not be trying to keep up with AI safety research. Your saved links might be recipes, or architecture, or long reads you bookmarked in the middle of a conversation that moved on before you could finish your thought. The shape of the problem is the same. I built this to solve it for myself, and I hope you find it useful too.
— Adrianna
Founder, Link In Comments